Will we successfully adapt to a future defined by artificial intelligence? Susan Athey is cautiously optimistic.
“Whenever anybody says, ‘Oh, we’re not going to have enough jobs,’ that’s actually not a well-defined statement,” explains Athey, professor of economics at Stanford Graduate School of Business, on the most recent episode of the If/Then podcast. “There’s a lot of things people could do that would be valuable. The real question is whether the receivers of those services can pay for them.”
Athey, who is also the founding director of the Golub Capital Social Impact Lab, studies the impact of technological innovations on workers, businesses, and society (among other things). With careful investment and prudent policy, she believes much of the world will weather the disruption that AI is already creating.
“I think that economically it’s perfectly possible to manage these transitions, but I’m worried about our ability to pull together and row in the same direction,” she says. On top of the “technical shock,” things could be complicated by governance problems or trade wars. “Basically, if we have any unforced errors, then I become more concerned.”
To sort it all out, Athey is at work on both macro-level questions and ground-level initiatives.
“Trying to tackle the problems that technology will create for society can definitely be overwhelming,” she says. “I bounce back and forth between wanting to solve these big-picture, economy-wide problems and building projects, which are very time consuming.” However, “doing these projects helps give me confidence to understand what is going to happen in the future from a policy perspective,” Athey adds. “So I’ve basically just decided not to choose and continue to do both.”
This episode also features Tom Butt of East Brother Light Station.
Listen & Subscribe
If/Then is a podcast from Stanford Graduate School of Business that examines research findings that can help us navigate the complex issues we face in business, leadership, and society. Each episode features an interview with a Stanford GSB faculty member.
Note: This transcript was generated by an automated system and has been lightly edited for clarity. It may contain errors or omissions.
Kevin: Rising from a small, rocky island in the San Francisco Bay, at the top of a two-story Victorian building, is a lighthouse called East Brother.
Tom Butt: It’s actually the oldest lighthouse inside San Francisco Bay.
Kevin: Tom Butt is the head of a nonprofit group that has maintained the historic structure since 1980, when the Coast Guard abandoned it.
Tom Butt: They did with East Brother what they did with a lot of lighthouses. They just boarded it up and walked away from it.
Kevin: But thanks to the efforts of locals and preservationists, the historic lighthouse is now immaculate. A thriving guest house business supports its upkeep, and on a recent volunteer day, a motor boat ferried a dozen passengers to the island.
Kevin: Two innkeepers live at East Brother and run the guest house, but it’s volunteers who handle minor maintenance:
There’s the paint peeling on the foghorn engine room. The picket fence posts leaning over. The rust spots on the white walls that wrap the light tower.
Tom Butt: This place really takes a beating out here, I’ll tell you.
Kevin: Nevertheless, the lighthouse has stood since it was built 150 years ago.
Tom Butt: It tells the story of a critical time in maritime history.
Kevin: And a story of how quickly things change.
Tom Butt: You know, with modern navigation tools, a lot of these things are essentially obsolete. But back when they built this, lighthouses were one of the main ways that mariners could navigate into the bay and navigate up the Sacramento River.
Kevin: Although lighthouses still guide ships, all but a few were automated many years ago.
Tom Butt: The fact that the light and fog horn could become automated, that’s happened at all the lighthouses all over the country, probably all over the world. That’s a huge change because the original plan of the Coast Guard, as we understood it, was basically to tear these buildings down and put the light up on a pole.
Kevin: When East Brother changed hands — from the Coast Guard to volunteers — its new operators had a dilemma: they wanted to keep the human element of the lighthouse alive, even though there was no longer any clear way to pay for it.
Tom Butt: We needed to find a way to create a revenue stream so we wouldn’t have to keep going out and looking for grants.
Kevin: After some brainstorming, they came up with the idea for the guest house as a way to make money and keep people on the island. Today that idea provides more than enough revenue to cover costs, including the salary of two innkeepers — a job that attracts a lot of attention whenever it’s posted.
Tom Butt: Lighthouses have always had a sort of romanticism about them. People, I can’t tell you why, people just are really into lighthouses.
Kevin: Advancements in technology will upend our world, and when new innovations replace human labor, how do we adapt?
Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business, studies these shifts and what they mean for workers, businesses, and society.
Susan Athey: Whenever anybody says, “Oh, we’re not going to have enough jobs,” that’s actually not a well-defined statement, because there are a lot of jobs, things people could do that would be valuable. The real question is whether the receivers of those services can pay for them.
Kevin: How we navigate these shifts depends on the choices that we make. If we act wisely, AI and other innovations can serve society and create opportunity. How we design these transitions is our focus today.
Kevin Cool: This is If/Then, a podcast from Stanford Graduate School of Business, where we examine research findings that can help us navigate the complex issues facing us in business, leadership, and society.
I’m Kevin Cool, senior editor at Stanford GSB.
Kevin Cool: Can you start by talking about a study that you did in Sweden that dealt with the impact of job losses there?
Susan Athey: So with a couple of collaborators from Sweden, we studied the impact of about 22,000 layoffs that occurred over a period of a couple of decades. That type of study had been done before, and people had shown that layoffs had negative effects on people, but the new wrinkle we added in our study was to use machine learning techniques that I’d recently developed in order to identify, in a data-driven way, exactly which groups of people were most impacted by layoffs. And that, in turn, can help direct policy.
And we were also interested in how long those impacts last. And so what we found was that the most impacted bottom 10 or 20 percent were really impacted badly. Their earnings were down 40 to 45 percent in the first year, and their earnings remained down by about 15 percent ten years later. So that’s quite a long-term negative shock.
Then we tried to say, well, all right, what are the characteristics of these people? And we looked at a whole bunch of different characteristics, but if you had to pick a couple of things, age and education together accounted for a lot of the difference.
So that suggests that there may be groups of workers that are really not going to recover from such a layoff. And especially if they’re older and toward the end of their career, we may want to think about different kinds of policies than for the younger workers, where really helping them transition into something new may be feasible.
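For readers curious what identifying the most-impacted groups “in a data-driven way” can look like in practice, here is a minimal, purely illustrative sketch. It is not the study’s code, data, or method; Athey’s research uses more sophisticated tools for heterogeneous treatment effects (such as causal forests), and every variable name and number below is made up. The idea is simply to estimate a person-level effect of a layoff on later earnings and then inspect who falls into the worst-hit decile.

```python
# Illustrative sketch only: a simple T-learner for heterogeneous effects of a layoff
# on later earnings. NOT the study's actual method or data; all columns and numbers
# here are hypothetical, and the published work uses more rigorous causal ML tools.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical worker-level data: covariates, a layoff indicator, and later earnings.
n = 5000
df = pd.DataFrame({
    "age": np.random.randint(20, 64, n),
    "education_years": np.random.randint(9, 20, n),
    "tenure": np.random.randint(0, 30, n),
    "laid_off": np.random.binomial(1, 0.5, n),
})
# Simulated outcome: older, less-educated workers lose more after a layoff.
df["earnings_later"] = (
    30000 + 1500 * df["education_years"]
    - df["laid_off"] * (2000 + 120 * df["age"] - 500 * df["education_years"])
    + np.random.normal(0, 3000, n)
)

X_cols = ["age", "education_years", "tenure"]
treated = df[df["laid_off"] == 1]
control = df[df["laid_off"] == 0]

# Fit separate outcome models for laid-off and not-laid-off workers (T-learner).
m1 = GradientBoostingRegressor().fit(treated[X_cols], treated["earnings_later"])
m0 = GradientBoostingRegressor().fit(control[X_cols], control["earnings_later"])

# Predicted individual-level effect of the layoff on later earnings.
df["effect"] = m1.predict(df[X_cols]) - m0.predict(df[X_cols])

# Inspect the worst-hit decile to see which characteristics it concentrates.
worst = df.nsmallest(len(df) // 10, "effect")
print(worst[["age", "education_years"]].mean())
```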
Kevin Cool: One thing that seems to be consistent in a lot of your work, Susan, is that there is a high value on low-cost, highly scalable solutions. Is that what the goal should be in these sorts of situations?
Susan Athey: The promise of digital technology, especially for helping the poor, developing countries, and vulnerable workers, is its scale economies. I wanted to take those same insights and apply them to things that might not be funded purely for profit, but where, if you can cover the fixed costs, you could scale them more broadly. And so, yes, that’s absolutely part of the thesis: figuring out how you can do low-cost interventions.
Another reason I was thinking about that specifically in the last few years is that I started some of this work just before or during the pandemic. And that was a time when online learning, and people’s participation in online learning, had a step change up. One of the other things we saw during the pandemic is that some governments were looking at trying to do online learning and nonstandard credentials while people were home, and so we started working with Coursera.
Coursera, in particular, was interested in sharing some of their resources with people in developing countries. Some of them are offered for a fee. They were interested in scaling those up more broadly, and we looked into possibly measuring the impact of such training on a larger scale.
One big question we had was if you did get a Coursera non-standard credential, would it even make a difference?
And is this really a good use of time? If you put it on your CV, is anybody going to pay attention? So we ran a study with Coursera where we created a new product feature that helped people put a Coursera credential on their LinkedIn profile in two clicks.
And we studied that specifically for people in developing countries or people without a college degree. And then we wanted to see, at a larger scale, can we actually help people get jobs?
So over a six-month period, we had about 800,000 people who didn’t have a college degree or were from a developing country and who finished Coursera courses in a bunch of areas related to data science, IT, and other things like that.
And so we took each person at the point they finished their course and we randomly gave them access to the feature that made it easy for them to post their credential on LinkedIn.
Kevin Cool: I see.
Susan Athey: So some people got the feature and some people didn’t. With the feature, you just clicked two buttons, and then the credential would get posted on your LinkedIn profile. So it wasn’t just something you wrote on your profile, but it actually clicked through to a description of the certificate and what it meant.
Kevin Cool: So it sort of automatically vetted it at the same time.
Susan Athey: Exactly. It was a credible credential.
Kevin Cool: Yeah, yeah.
Susan Athey: And so we randomized 800,000 people into 50/50 treated-control over a six-month period, and then we watched how they got jobs. We found that indeed posting the credentials did have an impact on getting a job. And we tracked the jobs not by surveying people, which has a very low response rate, but by watching their LinkedIn profiles and seeing who posted a job. And again, this effect persisted over many months.
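As a rough illustration of how a 50/50 randomization like this gets analyzed, here is a minimal sketch comparing job-finding rates between the two arms. It is not the study’s analysis, and all of the counts below are invented; the real work also has to handle tracking outcomes over many months and the fact that LinkedIn profiles are an imperfect measure of employment.

```python
# Illustrative sketch only: analyzing a 50/50 randomized feature rollout.
# These are NOT the study's numbers; every count below is made up.
import numpy as np
from scipy import stats

# Hypothetical counts: users assigned to each arm, and how many later list a new job.
n_treated, n_control = 400_000, 400_000        # offered the one-click posting feature / not
jobs_treated, jobs_control = 30_500, 30_000    # users who later show a new job on their profile

p_t, p_c = jobs_treated / n_treated, jobs_control / n_control
diff = p_t - p_c                                # estimated lift in the job-finding rate

# Standard error for a difference of two independent proportions, plus a 95% CI.
se = np.sqrt(p_t * (1 - p_t) / n_treated + p_c * (1 - p_c) / n_control)
ci = (diff - 1.96 * se, diff + 1.96 * se)

# Two-sided z-test for whether the lift is distinguishable from zero.
z = diff / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"lift = {diff:.4%}, 95% CI = ({ci[0]:.4%}, {ci[1]:.4%}), p = {p_value:.3g}")
```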
Kevin Cool: So for policymakers who might have been considering the possibility of, say, subsidizing training for folks to get an online certificate, now we have at least some evidence that if it’s paired with something like the LinkedIn feature that you describe, then it probably is worth it.
Susan Athey: Exactly. And as soon as Coursera saw our results, they actually rolled out the feature platform-wide. So
Kevin Cool: Ahhhh.
Susan Athey: Now millions of people are enjoying the feature.
Kevin Cool: So in that study what’s the lesson for people who aren’t in a situation where they are being automatically given that prompt?
Susan Athey: There’s a big benefit to showcasing the credentials you have and especially having them on your digital profiles for industries that do a lot of digital recruiting.
Now I want to caveat that, actually, the world is changing rather quickly there. AI is getting better and better at reading resumes. And also, AI is getting better and better at writing resumes. So the way that a particular thing on a resume gets read could be a moving target in that environment. But I think, generally, the prospects for having a verifiable credential still look quite good, because it can be validated, and an AI resume-screening tool, in principle, should be able to pick that up.
Kevin Cool: Sure.
Susan Athey: Now, if everybody has a credential, it also doesn’t make you stand out. So putting a credential on your CV is still going to have an impact that’s related to supply and demand. And so you do have to signal that you have something scarce in order for it to have value.
Kevin Cool: Mmhmm. What does it mean if we have AI on both ends of the transaction? We have AI writing the cover letters and producing the resumes, and we have AI evaluating the cover letters and the resumes. What does that world look like?
Susan Athey: That’s a great question, and it’s something I think about a lot. I used to get an email maybe every other week from someone who wanted to work for me somewhere in the world. It might be a high school student, a college student, a grad student; all sorts of people email professors, and they would tell me about how their interests matched up with my research perfectly.
Now I get one of these like every other day. And many of them are clearly written by ChatGPT. And so it’s basically removed the signaling value of taking the time to write a thoughtful email. So now I mostly delete all those emails. There’s too many of them. I can’t answer them. I can’t read them. So basically, taking away the cost of sending the email removed a friction that was playing an important role.
Kevin Cool: Right. It’s a differentiator, right? Yeah.
Susan Athey: Having to incur a friction is a differentiator. So this is not a new problem, but in the employment context, it’s going to be a very important problem. And employers actually had gotten there first with automated screening tools. But now with ChatGPT, the employees have caught up.
A friend of mine was telling me about how they were thinking about how you would design a new marketplace to really lean into this idea that the robots are talking to each other. It will be different, though, because you have to do something about the fact that if I have a robotic agent that’s applying on my behalf, and it’s having an interview with a robotic hiring agent, then I can apply to infinite jobs and conduct infinite interviews in parallel. That changes the game in a variety of ways. But of course you’re wasting the time of another robot.
Kevin Cool: [Laughs]
Susan Athey: So
Kevin Cool: And we don’t want to do that.
Susan Athey: So the time cost maybe isn’t the problem, but at some point the person needs to incur some real cost to show that they’re interested and that they’re a good fit.
Kevin Cool: There are a lot of concerns, I think, in the general population about what AI will mean, longer term, in terms of its disruption to employment and so on. What is your view about both what we should be wary of with respect to AI and what the potential is that would either mitigate those concerns or create new opportunities and new pathways that don’t exist now that actually might make the world better, easier, cheaper?
Susan Athey: So whenever anybody says, “Oh, we’re not going to have enough jobs,” that’s actually not a well-defined statement, because there are a lot of jobs, things people could do that would be valuable. The real question is whether the receivers of those services can pay for them. And so there’s a few things that kind of scale with the size of the population.
So childcare, elder care, health care, you know, seeing a theme? There’s a bunch of personal services in general, and some of that can be provided by technology. Maybe an AI coach on my phone is good for helping me do exercise, but some things you’re still going to want a person for.
And I think there are enough of those things that we have valuable work for most of the population. Imagine how much healthier you would be if you could actually talk to your doctor, like, once a month. Or if they actually could follow up with you and fine-tune your treatment or make sure that things were working for you. Most of us are way undertreated, and we’re just guessing and checking in only very periodically. Think about obesity; think about general metabolic health. There’s a bunch of stuff where, with some concerted effort, we could make people healthier, which would lower costs in the long run, so it’d be a good investment.
But somebody has to pay for this. So people can pay for it themselves if they have good jobs and if the services aren’t too expensive. Or the government can provide them. And a lot of these things are either subsidized or funded, like teachers and childcare are funded by the government. So there’s a pretty big role for the government in all of this, and of course elder care, largely funded by the government.
So, in that kind of a world, what can AI do? AI can help the service provider be a better and safer service provider, even if they don’t have a lot of training. And so I can give an example of another study we did in Cameroon.
One problem in Africa is that there aren’t enough educated nurses. So for the nurses, we designed a tablet application that helped them ask the right questions of the patients, in order to give targeted information back to the patient that was customized to their specific concerns and situation and that was consistent with best medical practice. That had a huge positive impact, and the nurses also loved it.
So those types of programs, they’re easy to build. The problem going forward with AI is that they may be too easy to build. There’s gonna be a whole proliferation of them. So we need actually some vetting of them. We need a modest number of them that are actually good and give the right answers. But if we had those things and they were vetted and tested, then we could actually have a lot more people be nurses and teachers and coaches and elder care providers.
So those are examples where a little bit of investment, a very modest amount, something that hundreds of people could build, could help service providers be better service providers. We’re not talking massive, massive amounts of money, because it’s like building software.
And then you’d have to combine that with the government funding the provision of those services unless the economy is doing so well that, you know, lots of people can just afford to buy those on their own.
So some mix of public and private investment could actually help us transition to an economy where there’s lots of participation, where the technology has actually allowed a bunch of people to do better jobs and make everybody healthier and reduce health care costs. So that all sounds great.
What am I worried about? On the flip side, it’s if we have any localities that are very concentrated in something, like a major employer or set of employers that are doing the same kind of jobs, and those jobs get automated all at once. So my favorite example for the last 10 or 15 years has been call centers. And one reason that’s an example is that there are lots of parts of the world that were former manufacturing centers or mining centers.
Kevin Cool: Right.
Susan Athey: Where those have already been shut down. And so then the next thing those regions did is they invested in call centers, because you had these very low-wage workers in these very low cost-of-living areas, with no jobs. So put a call center there. But my prediction was that once the call centers get automated, they’re all going to kind of get automated together. And one reason is that the call centers are all already using software to help the workers answer the calls. And they’ve already been connected to the relevant databases of information, and they’re already recording all of those calls. So if you take all of that data, plus the IT infrastructure that exists, and throw modern AI at it, with very little effort you can make a lot of those workers obsolete.
And so we are starting to see that already, the technology is here and the adoption is happening. And so you could kind of shut down for the second time, say a former mining town or region of a country that was focusing on call centers, and it’s very unclear what those people are going to do next.
Kevin Cool: Yeah. And if they’re low-wage people in poorer areas and so on, those effects, because they’re in a concentrated area, are even larger than they might be otherwise.
Susan Athey: They spiral down to the restaurants, the hairdressers, everything, the real estate. So those areas can collapse. And so I think there are parts of the U.S. that could be vulnerable to this. I think the U.S. is probably better able to diversify and recover, but I think some other countries might have more trouble with that.
You know, I think if we can preserve our economic institutions, it’s certainly feasible, with a very strong and very effective government intervention at both the state and national level, that the U.S. can come through this transition fine.
If, on the other hand, you know, we have trouble with governance, people decide they don’t like redistribution, or we have trouble with trade or trade wars or things like that piling on top of the technical shocks, basically, if we have any unforced errors, then I become more concerned.
Kevin Cool: What is it like for you to be thinking about all of this? It just seems so massive in terms of how to chip away at it.
Susan Athey: Trying to tackle the problems that technology will create for society can definitely be overwhelming. And every day I get up and I ask myself, you know, am I spending my time in the right way? So over the last five to seven years, I was trying to build out, in advance, case studies of examples where technology could be beneficial, with the understanding that if we found some, especially ones that would help workers or that would help with some of these really big problems, then we would be prepared to counterbalance any negative effects of technology as they come in.
But then there’s a whole other set of policy-type questions, like: How should we think about macroeconomic impacts? What should we do about open-source models? And how do we weigh the potential national security concerns against the massive economic benefits of having low prices?
I just bounce back and forth between wanting to solve these big-picture, economy-wide problems and these building projects, where the building projects are very time-consuming. Each project takes a huge amount of time. On the other hand, doing the building projects helps give me confidence to understand what is going to happen in the future from the policy perspective. So I’ve basically just decided not to choose and to continue to do both.
Kevin Cool: Well, one of the great things about being associated with the GSB, and I would actually extend this to Stanford and any research institution, is that it’s an optimistic enterprise, it seems to me. Are you optimistic?
Susan Athey: Cautiously optimistic.
Kevin Cool: Cautiously optimistic. Okay.
Susan Athey: I think the thing that concerns me most, actually, is that before we’ve figured out some of the societally beneficial applications of technology, before we’ve really got that project done, we’ve figured out how to get people very angry, very quickly, through communication.
So I do think it’s hard to manage through crises and change when everybody’s angry at everybody else. And in the end, of course, this is a problem as old as time; propaganda and sensational headlines and all of that have been a problem forever, and they’ve been utilized forever. But the ability to really get people’s emotions going in a very targeted, very personal way, like, all of us have something that will make us mad.
Kevin Cool: Yeah. And the triggers are at the ready, everywhere. Yeah.
Susan Athey: Yeah, wouldn’t it be nice to live in a society where we all felt like we were on the same team and were willing to make investments to make the communities around us better? That’s actually my biggest concern. I think that economically it’s perfectly possible to manage these transitions, but I’m worried about our ability to pull together and row in the same direction.
Kevin Cool: If/Then is a podcast from Stanford Graduate School of Business. I’m your host, Kevin Cool. Our show is written and produced by Making Room and the Content and Design team at the GSB.
Our managing producers are Michael McDowell and Elizabeth Wyleczuk-Stern. Executive producers are Sorel Husbands Denholtz and Jim Colgan. Sound design and additional production support from Mumble Media and Aech Ashe. And a special thanks to Tom Butt from the East Brother Lighthouse.
For more on our faculty and their research, find Stanford GSB online, at gsb.stanford.edu, or on social media @stanfordgsb. If you enjoyed today’s conversation, consider sharing it with a friend or colleague and remember to subscribe to If/Then wherever you get your podcasts or leave us a review. It really helps other listeners find the show.
We’d also love to hear from you. Is there a subject you’d like us to cover, something that sparked your curiosity or a story or perspective that you’d like to share? Email us at ifthenpod@stanford.edu. That’s I-F-T-H-E-N-P-O-D@stanford.edu. Thanks for listening. We’ll be back with another episode soon.