Guido Imbens, the Applied Econometrics Professor and Professor of Economics at Stanford GSB | Credit: Elena Zhukova

The breakthrough came on a warm, clear afternoon early in the summer of 1991. For months, Guido Imbens and his colleague Joshua Angrist had been contemplating a variation on a question that had vexed philosophers and other observers of the world since the days of Aristotle: How do you prove causation?

Imbens had been sitting in his office in Harvard’s economics department, thinking about the problem for so long that his head hurt. When the seed of an idea popped into his mind, he jotted down some notes, paused, and realized, “This is pretty much the best thought I’ve had.”

Imbens, now the Applied Econometrics Professor and Professor of Economics at Stanford GSB, has devoted his career to developing ever more precise methods for answering questions of cause and effect. Though it involves applied statistics and an alphabet soup of Greek symbology, it’s far from a purely academic pursuit: Understanding causality is perhaps the closest you can get to having a crystal ball. If you can find the causal relationship between two events — if you can pinpoint which one caused the other — then you can better understand why things are the way they are and possibly even predict how future events will play out.

The cleanest way to answer a cause-and-effect question is through a randomized controlled trial, such as the large-scale clinical trials used to prove the efficacy of COVID-19 vaccines. Of course, it’s not always ethical, or practical, to run experiments involving people. Researchers cannot withhold education from children to determine the effects on their future income, nor adjust immigration levels to see how they affect the labor market.

That’s where Imbens’ work comes in. In many cases, researchers can extract lessons from what’s known as a natural experiment — a real-world situation brought about by chance or policy that roughly mirrors a randomized control trial. It’s easier said than done. As Imbens explains, one of the trickiest questions in empirical economics is “once you move away from clear and crisp randomized experiments, what can you learn from observational data?”

The ideas that Imbens and Angrist formulated would help refine the answer to that question and spark a transformation in their discipline, imbuing it with a new sense of reliability and relevance. Imbens’ contributions eventually led to him sharing the 2021 Nobel Memorial Prize in Economic Sciences with Angrist, now at MIT, and David Card of the University of California, Berkeley.

Such a result must have seemed impossibly remote on that summer afternoon in 1991. Still, Imbens recalls his excitement as he set off, notepad in hand, for the Greenhouse Café to meet with Angrist. Striding along the tree-lined sidewalk, Imbens thought, “We’ve got this. We’ve nailed this.”

 

In 1979, when Guido (pronounced HEE-do) was a 17-year-old high school student in the Netherlands, his economics teacher lent him a slim hardcover book called Econometrics by Jan Tinbergen. Though Imbens couldn’t make much sense of the book’s equations, the notion that you could wield mathematics as a tool to shape economic policy and benefit society stuck with him. He applied to study econometrics at Erasmus University in Rotterdam and was accepted to the program that Tinbergen, who won the first Nobel Prize in Economic Sciences in 1969, had founded.

Imbens had an aptitude for econometrics. During his time in Rotterdam, a visiting American professor named Marcus Berliant offered a rigorous course that probably should have been taught to graduate students. On the first day of class, there were five undergraduates. On the second day, there were three. After a week, it was just Imbens, plus a handful of faculty. Impressed and a little amused, Berliant was the first person to suggest that Imbens pursue a PhD.

Imbens would go on to earn his doctorate in economics at Brown, and in 1990, he applied for a position at Harvard. That’s when he met Angrist.

At first, it seemed unlikely the two would become collaborators, much less friends. Angrist, who had been teaching at the university for one year, had strong views on which research questions were not worthwhile as well as which candidate Harvard should not hire — namely, Imbens. In Angrist’s opinion, the subject of Imbens’ dissertation “wasn’t a very interesting problem.” Harvard hired him as an assistant professor anyway.


The new colleagues quickly got to talking about solutions to problems that Angrist did think were worthwhile. “In the short run, they just seemed like interesting problems to work on,” Imbens says. “I didn’t have the big picture view that this was going to change the way the profession viewed empirical work.”

In the 1980s, most economists viewed empirical research in the same way that picky eaters view street food — they weren’t sure how it was made, and they didn’t quite trust it. In a famous 1983 paper, Ed Leamer, an economist at UCLA, argued that much of economists’ data analysis involved making unreasonable assumptions and relying on “whimsical” inferences. He quipped, “Hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else’s data analyses seriously.”

It was in this context that Imbens and Angrist began discussing a recent paper in which Angrist had used the Vietnam War-era draft lottery as a natural experiment to estimate the causal effect of military service on lifetime earnings. Analyzing Social Security data, he revealed that veterans who served during the period earned approximately 15 percent less than non-veterans, even 10 years after leaving the military. His results were credible and interesting enough to merit publication in The American Economic Review, but there was something about the methodology that didn’t sit well with him or Imbens.

The econometricians James Heckman and Charles Manski had recently argued that, in a natural experiment, there should be some factor — an instrumental variable — that neatly splits a population into subsets, with one subset experiencing an intervention and another not. With the draft, this neat division wasn’t possible: some men whose numbers were called never served, while some who were not drafted volunteered and served anyway. Still, Imbens and Angrist suspected there was a way to obtain credible results using the draft lottery as a natural experiment.

These were the sort of dilemmas that the two young professors would discuss on Saturday mornings as they did their laundry in the basement of Angrist’s apartment building. Sitting across from each other on empty machines, legs dangling, they’d compete with the rumbling of the dryers as they lobbed ideas back and forth. (Each year, the Nobel Prize Museum asks laureates for an object that represents their work. “My plan is to give them a box of laundry detergent,” Imbens says.)

Angrist had experience doing empirical work and could focus on inconsistencies between methodology and real-world studies with laser-like precision. As for Imbens, “he was super methodical,” Angrist says. He’d work through a problem step by step until the logic led him somewhere interesting. If Angrist posed a question during one of their weekend brainstorming sessions, Imbens would show up in Angrist’s office on Monday with a sheet full of notes.

 

For months, Imbens and Angrist pondered what, if anything, researchers could learn from seemingly imperfect natural experiments like the draft lottery. Imbens read what statisticians had to say on the subject, though sometimes it seemed their papers were in a different language. “I was very junior, so I wasn’t bothered by the fact that I had to make a lot of investments into thinking about how other disciplines think,” he says.

Then, early in the summer of 1991, inspiration struck. Building on the work of Donald Rubin, the chair of Harvard’s statistics department, Imbens and Angrist realized that instrumental variable methods couldn’t tell you about the causal effect for an entire study population — but they could tell you about the causal effect for a specific subpopulation.

Take Angrist’s draft lottery study. Some men were volunteers; they would join the military regardless of whether they were drafted. Other men would never serve, even if they were drafted — say, due to medical exemptions. You needed to exclude data from those two groups, because whatever led them to enlist or be rejected might affect how much income they brought in down the road. Once you statistically filtered out those two groups, you were left with men who served only because they were drafted. You could then determine the causal effect of military service on earnings for that specific group. “That may not be the thing you’re most interested in, but that’s all you’re going to get,” Imbens says.

This was the insight that Imbens and Angrist discussed on that early summer afternoon, which they eventually developed into a groundbreaking paper on what they called the local average treatment effect, or LATE. It showed that even if a natural experiment was not completely analogous to a randomized controlled trial, you could still learn something significant from it. “We made precise exactly what you can learn in those cases,” Imbens says.
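The logic of LATE can be sketched in a short simulation. Everything numerical here is invented for illustration — the group shares, earnings levels, and effect sizes are hypothetical — but the estimator is the standard instrumental-variable (Wald) ratio that the LATE framework justifies: the difference in average earnings across lottery arms, divided by the difference in service rates.

```python
import numpy as np

# Hypothetical draft-lottery population (all numbers invented):
# "always" = volunteers who serve regardless of the lottery,
# "never"  = men who never serve (e.g., medical exemptions),
# "complier" = men who serve only because they are drafted.
rng = np.random.default_rng(0)
n = 200_000
group = rng.choice(["always", "never", "complier"], size=n, p=[0.15, 0.25, 0.60])

# Instrument: the randomized draft lottery (1 = number called).
z = rng.integers(0, 2, size=n)

# Treatment: actual military service, determined by type and lottery.
d = np.where(group == "always", 1,
    np.where(group == "never", 0, z))

# Earnings: serving costs compliers 1.5 (arbitrary units). Always-takers
# get a different service effect, but it cancels out of the estimator
# because they serve in both lottery arms.
baseline = rng.normal(10.0, 1.0, size=n)
effect = np.where(group == "complier", -1.5, 0.8)
y = baseline + effect * d

# Wald / instrumental-variable estimator of the LATE:
late = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(f"estimated LATE: {late:.2f}")  # close to -1.5, the compliers' effect
```

Because the always-takers and never-takers behave identically in both lottery arms, their earnings difference cancels in the numerator, and the ratio recovers the causal effect for compliers alone — exactly the subpopulation Imbens describes.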

Angrist and Imbens decided to show a very early draft of their paper to Rubin, the statistician whose work had inspired them. When Rubin called a few weeks later, Imbens recalls him saying, “The way you’re doing it is all wrong — but I think, actually, you guys are really onto something.” Though Rubin started as a critic, “he eventually became an enthusiastic supporter of what we were trying to do, and he helped us generalize it further,” Angrist says. “Being able to estimate causal effects in subgroups was a relatively old idea in statistics,” Rubin says, “but the interesting thing to me was they were able to show that, under relatively plausible assumptions, you could actually estimate the causal effect for a stratum that was not identified from the data alone.” That was new.

The LATE paper was published in the journal Econometrica in 1994. (“We had the theorem in ’91,” Angrist says, but “in economics, publication is like pulling teeth.”) It was simple. It was elegant. It was largely ignored.

Though Imbens and Angrist were confident that they had contributed something novel and useful to the field, few economists saw the value in their work. Card, their Harvard colleague and mentor Gary Chamberlain, and Princeton economist Alan Krueger (the only one to coauthor papers with all three future laureates) were among the exceptions. One detractor, Princeton’s Sir Angus Deaton, would write that to use the LATE methodology was to “let [a light] fall where it may and then proclaim that whatever it illuminates is what we were looking for all along.”

Imbens and Angrist went on to coauthor a related paper with Rubin on how to identify causal effects using instrumental variables, and Imbens and Rubin cotaught a class at Harvard on causal inference. (The course catalog mistakenly advertised a class on “casual” inference — not the last time this mistake would be made.) Imbens continued refining the methodology for gleaning causal effects from natural experiments, but his ideas were slow to gain traction. In 1997, Harvard rejected his application for tenure and he made his way to the West Coast.

 

One day in 1999, while Imbens was teaching at UCLA, he heard that Susan Athey was going to be giving a talk at the University of Southern California. A few years earlier, The New York Times had declared Athey “the top draft pick in economics,” and Imbens had been on the Harvard job panel that interviewed her after she had earned her PhD from Stanford GSB in 1995.

Imbens braved the Los Angeles traffic to catch the presentation, and afterward he joined Athey and a group of USC faculty for dinner. The two hit it off, and in 2002, they were married in what Imbens joked was “a very high-powered economics wedding,” with Angrist as best man and a total of four future Nobel laureates in attendance. (Card would have made five, but he sent his regrets along with a bottle of wine.)

From the outset, the couple formed an intellectual partnership. Their first collaboration began when Athey was analyzing the effect of counties adopting enhanced 911 systems to direct ambulances based on caller ID. She ran into some econometric questions and asked Imbens about them, but he didn’t know the answers right away. “That’s a good signal that there’s something interesting there,” Athey says. To date, Imbens and Athey have authored more than 20 papers together.

When Imbens is trying to move beyond a conventional way of analyzing causal effects, “he spends a lot of time in the intuition space,” Athey says. He’ll sink deep into a problem, mulling it over until he can pin down exactly what’s making him uneasy. Once he has that intuitive understanding, “he can see when you shouldn’t be satisfied with the math.”

In 2012, both Imbens and Athey accepted positions at Stanford GSB. They moved with their three young children to a house on the Stanford campus. On his office door, Imbens affixed an orange placard that read “ECONOMISCHE FACULTEIT,” acquired under murky circumstances in Rotterdam years ago.


Imbens was drawn to the business school because of its interdisciplinary nature. “There are computer scientists, there are statisticians, there are all different areas of subject matter knowledge,” he says. Plus, with Silicon Valley nearby, “this is just a unique place, where so many different parts of the world that are interested in these questions are in close proximity.” In 2014, Imbens spent the summer as an “intern” at Facebook, learning about what sorts of econometric questions were plaguing the tech industry. He tells his students that econometricians need to be in close contact with researchers doing applied work to ensure that the methodologies they’re developing are practical.

At Stanford GSB, Imbens says, “my research has continued to evolve around causality and trying to find credible ways of getting causal effects.” He has done work developing new methods to account for complex interactions between individuals, ways to combine randomized experiments with observational studies, and a set of statistical techniques known as synthetic control methods. In collaboration with Athey, one of the first economists to adapt machine learning to causal questions, he has done foundational work in this rapidly expanding area. Imbens also directs the Stanford Causal Science Center, an interdisciplinary community that applies causal inference methods to statistics, social sciences, computer science, biomedical sciences, and law.

As Imbens continued to push the limits of what could be learned from natural experiments, a change had been brewing in empirical economics. In 2010, Angrist and Jörn-Steffen Pischke coined the term “credibility revolution” to describe how the field had evolved since Leamer’s bleak assessment back in 1983. No longer did data-driven research suffer from a “distressing lack of robustness,” they wrote. Instead, economists were using well-designed studies to provide hard numbers that could inform policymakers and bridge the gap “between the real world and the ivory tower.”

Stanford GSB Dean Jonathan Levin, an economist himself, attributes this radical shift to two factors: the astronomical rise in computing power and data analysis, as well as changes in how researchers think about using data to answer causal questions. In addition to developing a toolkit for drawing precise conclusions from observational data, Imbens and his colleagues also developed a new terminology, Levin says. “It became the way people talked in seminars and asked questions and thought about things,” he says. “Once that language was there, it changed the way people think, and that was incredibly powerful.”

Over time, the research methods they pioneered became dominant in economic research and even spread to the other social sciences and medicine. In 2000, four years after it was published, a paper by Imbens, Angrist, and Rubin on identifying causal effects using instrumental variables had been cited 162 times. As of today, it has been cited over 6,500 times.

 

At 2:13 a.m. on October 11, 2021, Imbens woke to the sound of his cellphone ringing. The caller ID showed a number from Sweden. He and Athey stared at the phone for a moment, the gravity of the situation sinking in. “When you get that phone call in the middle of the night, it takes on this dreamlike quality,” she says.

Imbens answered, and learned that he, along with Angrist and Card, had won the 2021 Nobel Prize in Economic Sciences.

He also learned that he would need to be dressed and ready for a live press conference in precisely 20 minutes. He and Athey woke up their kids to share the news. Then he squeezed in a quick conversation with his brother and sister in the Netherlands.

A gaggle of press and PR people arrived in the driveway. The Imbens kids searched for extension cords for everyone’s laptops, and even began cooking a breakfast of scrambled eggs and pancakes for the crew. Wanting to give Imbens time to savor the moment, Athey dove into managing the onslaught of media requests: “BBC wants to talk to him on Skype! Wait, does he even have a Skype account?” It was a joyful sort of chaos.

Once the sun came up, a Stanford media team shot a video with Imbens and his kids to help explain his work to the many people suddenly keen to learn more. (An early NPR report stated that Imbens, Angrist, and Card had analyzed “casual relationships.”) Sitting next to him on a couch in their backyard, his 10-year-old daughter, Sylvia, offered a concise explanation of natural experiments. “It’s very interesting how you can take data from things that were completely not intended for you… and then use it to just draw these astounding conclusions,” she said.

That afternoon, Imbens showed up to his weekly student workshop, “almost as if nothing had happened,” says doctoral candidate Michael Pollmann. His advisees congratulated him with a bottle of champagne, but after a few minutes of chatter, Imbens insisted on getting down to business, answering questions and offering feedback on students’ research.

Later that evening, still in a daze, Imbens spoke with Angrist and Card on a video call. They remembered Krueger, who had died in 2019, and reminisced about the work they’d done — where it all began, and how far the field had come since.

“Nobody had really started off on that journey thinking that this is where it would go,” Imbens says, “but this does feel like an end to a journey — a very unexpected end. And it was just great sharing that.”
