January 23, 2026 | by Dave Gilson

The creation of superhuman artificial intelligence could lead to two worst-case scenarios, says Charles Jones, a professor of economics at Stanford Graduate School of Business. In the first, the power to kill everyone, say in the form of an AI-engineered supervirus, could fall into the wrong hands. In the second, AI could turn out to be like a superintelligent alien that, perhaps without malice, wipes out its puny hosts.
AI experts agree that we should start planning how to avert these existential risks before it’s too late. That requires diverting money from the AI race to spend on safety research. But just how much?
Jones has run the numbers, and like a lot of numbers associated with AI, they’re really big. “I can’t tell you exactly what the right number is, but the right number is way bigger than anything we’re spending now,” he says. His modeling suggests that optimal spending on AI risk mitigation should be at least 1% of the United States’ current GDP — more than $310 billion a year. And it could be much higher — more than 20% of GDP in some of the scenarios he explores.
“Those numbers are really shockingly large,” Jones says. Yet the logic behind them is straightforward: Human life is valuable, so protecting it from catastrophic risk can justify large expenditures. “We don’t quite know what the risk is, but the fact that it’s there and not zero, and because life is so valuable, we really do want to take action.”
Jones is not an AI “doomer” who thinks existential AI risk is all but inevitable. While he finds some of the arguments put forth by pessimists, such as the authors of If Anyone Builds It, Everyone Dies, interesting, he says that it’s not necessary to think we’re prompting the apocalypse to take AI risk seriously. “You don’t have to believe the probability is 90% before you want to do something. Even if the probability is 1%, we’re willing to take actions that are economically large and meaningful.”
The Cost of Living
Calculating the cost of containing AI starts with putting a price tag on a human life. While that may seem morbid, it’s a routine part of cost-benefit analysis. The federal government generally uses $10 million as the value of a statistical life when setting health, safety, and environmental regulations.
“How much would we as a society pay to avoid a 1% chance of one person dying?” Jones asks. “Well, if you value that life at $10 million, you’d pay 1% of $10 million, which is $100,000.” By comparison, current per capita GDP in the U.S. is around $86,000.
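To make that arithmetic concrete, here is a minimal sketch of the willingness-to-pay calculation, using the federal government’s $10 million value of a statistical life. The figures and variable names simply restate the numbers above; they are not output from Jones’s model.

```python
# Willingness to pay to avoid a small probability of one death, using the
# standard value-of-a-statistical-life (VSL) approach described above.
VALUE_OF_STATISTICAL_LIFE = 10_000_000  # dollars, the figure federal agencies commonly use
PROBABILITY_OF_DEATH = 0.01             # a 1% chance of one person dying

willingness_to_pay = PROBABILITY_OF_DEATH * VALUE_OF_STATISTICAL_LIFE
print(f"Willingness to pay: ${willingness_to_pay:,.0f}")  # $100,000

# For comparison: U.S. per capita GDP is roughly $86,000, so avoiding a 1%
# mortality risk is "worth" more than a year of average economic output per person.
US_PER_CAPITA_GDP = 86_000
print(f"Relative to per capita GDP: {willingness_to_pay / US_PER_CAPITA_GDP:.2f}x")
```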
Jones cites COVID as an example of large-scale risk mitigation in action. He estimates that at the height of the pandemic, the United States lost around 4% of total GDP to lockdowns and other public health measures meant to slow the spread of the virus. Whether or not we spent the right amount, Jones says the takeaway is that “it’s very easy to justify spending fairly large amounts to avoid outcomes that kill people.” We sacrificed a significant chunk of our economy to slow a disease with a mortality rate of around 0.3%, which suggests we’d be willing to spend far more to prevent a disaster that could kill everyone.
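Read as a rough benchmark, those COVID numbers can be extrapolated in a deliberately naive way. The calculation below uses only the round figures quoted above and assumes, purely for illustration, that willingness to pay scales linearly with mortality risk; it is not Jones’s model, but it lands in the same ballpark as his estimates.

```python
# Back-of-the-envelope reading of the COVID benchmark quoted above.
gdp_share_spent = 0.04        # ~4% of GDP lost to lockdowns and other measures
covid_mortality_rate = 0.003  # ~0.3% mortality rate cited in the article

# Implied spending per percentage point of mortality risk, assuming (naively)
# that willingness to pay scales linearly with the share of people at risk.
spend_per_point = gdp_share_spent / (covid_mortality_rate * 100)
print(f"~{spend_per_point:.1%} of GDP per percentage point of mortality risk")

# Under that linear assumption, even a 1% chance of an event that kills everyone
# (an expected 1% mortality rate) would justify spending on the order of 13% of
# GDP -- the same order of magnitude as the 1%-to-20%-of-GDP range described above.
```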
Pinning down an exact figure isn’t possible given the many unknowns. Just how much existential risk do we face from AI? How soon could that risk materialize? How effective would our mitigation efforts be? When Jones ran 10 million simulations with different values for these and other parameters, optimal spending on AI risk mitigation came out slightly above 8% of GDP. In about a third of the scenarios, however, the optimal spending was zero, reflecting cases where the risk is negligible or where mitigation efforts would be ineffective.
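That simulation exercise can be illustrated with a toy Monte Carlo like the one below. The parameter ranges, the decision rule, and the spending threshold are all invented for illustration; Jones’s model is far more structured. The point is only to show how uncertainty about the level of risk, the value of averting it, and the effectiveness of mitigation produces a distribution of “optimal” spending levels, some of which are zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # fewer draws than the 10 million in the article, for speed

# Illustrative (made-up) parameter ranges -- NOT the ranges Jones uses.
risk = rng.uniform(0.0, 0.05, n)          # chance of an AI catastrophe
effectiveness = rng.uniform(0.0, 1.0, n)  # share of that risk mitigation removes
value = rng.uniform(1.0, 10.0, n)         # value of averting it, in years of GDP

# Crude decision rule: spend (as a share of one year's GDP) up to the expected
# benefit of mitigation, but only when that benefit clears a fixed threshold.
expected_benefit = risk * effectiveness * value
threshold = 0.01  # below 1% of GDP, assume spending isn't worthwhile
optimal_spend = np.where(expected_benefit > threshold, expected_benefit, 0.0)

print(f"Mean optimal spending: {optimal_spend.mean():.1%} of GDP")
print(f"Scenarios with zero spending: {(optimal_spend == 0).mean():.0%}")
```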
Whatever the numbers, the next big question is how to make robust AI risk mitigation a reality. While AI executives and researchers have expressed concern that they’re moving too fast, they have powerful incentives not to pump the brakes. “You can understand this as a classic prisoner’s dilemma,” Jones explains, referring to the game theory scenario in which acting in self-interest is the rational choice yet leads to a collectively worse outcome. “Each AI lab says, ‘Look, I could race or I could slow down. Even if I slow down, whatever happens is going to happen. But if I continue to race, well, maybe I’m safer than the other people. So I should be part of the race.’”
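The incentive structure Jones describes can be written down as a standard two-player payoff matrix. The payoff numbers below are arbitrary, chosen only so that racing is each lab’s dominant strategy even though both labs would prefer the outcome where everyone slows down; that is the defining feature of a prisoner’s dilemma.

```python
# A toy payoff matrix for two AI labs, each choosing to "race" or "slow".
# Higher numbers are better. The values are illustrative, not from Jones's work.
payoffs = {
    ("slow", "slow"): (3, 3),  # both slow down: safest collective outcome
    ("slow", "race"): (0, 4),  # the lab that races gets ahead; the other falls behind
    ("race", "slow"): (4, 0),
    ("race", "race"): (1, 1),  # everyone races: faster, riskier, collectively worse
}

def best_response(their_choice):
    """The action that maximizes this lab's payoff, given the rival's choice."""
    return max(("slow", "race"), key=lambda mine: payoffs[(mine, their_choice)][0])

for rival in ("slow", "race"):
    print(f"If the rival plays {rival!r}, the best response is {best_response(rival)!r}")
# Racing is the best response either way, so both labs race -- and each ends up
# with a payoff of 1 instead of the 3 they could have had by slowing down together.
```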
Though Jones does not fully describe the policies that could cool the AI arms race, he has some ideas. A tax on GPUs could fund safety research. Drawing on another familiar risk scenario, he sketches a picture in which advanced AI is controlled like nuclear weapons, kept in check by international agreements and institutions. During the Cold War, Jones says, “we managed not to push the red button.”
Yet he acknowledges that containing AI may prove even more daunting than preventing nuclear war: “If eight billion people had access to the red button, can you ensure nobody pushes the red button?”