Technology & AI

Researchers Build a Virtual World to Run Experiments Over and Over

A social network with 20,000 AI-driven users simulates real human behavior — to a point.

Simulations “capture the big picture, but they’re missing a lot of the detail,” says Sadegh Shirani. | iStock/imaginima

December 17, 2025

| by Roberta Kwok

On the day of the 2010 U.S. midterm elections, researchers conducted a massive experiment on 61 million Facebook users. Some were shown a reminder to vote and a link to find their polling place; others saw a version of the same post displaying faces of their friends who had already voted. The social message was much more effective at getting out the vote, driving an estimated 340,000 additional people to the polls.

The study is an influential example of how online networks can shape people’s behavior. But it was a one-off. “The 2010 election happens only once,” says Sadegh Shirani, a PhD student in operations, information, and technology at Stanford Graduate School of Business. “It’s never going to happen again.”

By their nature, giant social experiments can’t be easily rerun to test new hypotheses or study different conditions. Setting them up in the first place can be expensive, and some questions are difficult to investigate without raising ethical concerns.

Shirani and Mohsen Bayati, a professor of operations, information, and technology at Stanford GSB, wondered if they could create a virtual world where they could run behavioral experiments as often as they liked. In a new study, they developed a realistic simulation populated by 20,000 AI-powered digital “agents” that mimicked human behavior in a social network.

The researchers showed the agents messages based on those used in the 2010 Facebook experiment. They found that their simulation generally replicated the results of that earlier study: A message mentioning social connections increased voter turnout more than a generic informational post. However, the effect of seeing the social message was much stronger than what the real-world experiment had found.

The results suggest that simulations “capture the big picture, but they’re missing a lot of the detail,” Shirani says. Still, this virtual environment could help researchers broadly test the effectiveness of different interventions. “You can run a type of experiment that you cannot ever do in the real world,” Bayati says. “You can live the life twice.”

A Web of AI Agents

To create their virtual environment, Shirani and Bayati started by gathering U.S. Census data on 20,000 people, including details such as age, gender, job, education, and marital status. Each profile was assigned to a unique digital agent.


Next, the researchers obtained data from a 2012 study on a real network of Twitter users and mapped each virtual agent randomly onto a network node. They used large language models to refine and expand each user’s profile, filling out details such as their interests, political stance, and tendency to vote.
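The setup described so far — one agent per census profile, placed at a random node of a real social graph, then fleshed out by a language model — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors’ code: the field names are invented, and `expand_profile` is a placeholder for the actual LLM call.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One simulated user, seeded from census-style attributes."""
    agent_id: int
    age: int
    gender: str
    occupation: str
    traits: dict = field(default_factory=dict)  # filled in by the LLM step

def expand_profile(agent: Agent, rng: random.Random) -> None:
    """Placeholder for the LLM call that infers interests, political
    stance, and a baseline tendency to vote from the demographics."""
    agent.traits = {
        "interests": ["politics"],               # an LLM would infer these
        "vote_propensity": rng.random(),         # from the demographic profile
    }

def build_population(census_rows, network_nodes, rng):
    """Create one agent per census row, expand each profile, and map
    every agent onto a randomly chosen node of the real social graph."""
    agents = [Agent(i, *row) for i, row in enumerate(census_rows)]
    for a in agents:
        expand_profile(a, rng)
    nodes = list(network_nodes)
    rng.shuffle(nodes)
    placement = {a.agent_id: node for a, node in zip(agents, nodes)}
    return agents, placement
```

The random shuffle mirrors the paper’s random assignment of profiles to network positions; in a real run, the profile expansion step would prompt a model rather than draw a random number.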

The researchers then ran 30 rounds of a simulation, with each round representing one day leading up to an election. During a round, the agents interacted with a Facebook-like social network. They queried the LLM to decide what to do, taking into account their demographic details and “personality.” For example, agents could create a new post about the election or another topic, which would appear in other agents’ newsfeeds; or they could follow another user or change their intention to vote, based on the content they saw.
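The daily loop just described — each agent consults the model, then posts, follows, or reconsiders its vote — might look like this minimal sketch. Here `llm_decide` is a stand-in for the real model query (which would be prompted with the agent’s profile, personality, and newsfeed), agents are represented by plain hashable IDs, and the action set is simplified to the three behaviors mentioned above.

```python
import random

ACTIONS = ("post", "follow", "update_vote_intention")

def llm_decide(agent, feed, rng):
    """Stand-in for the LLM query; a real run would prompt the model
    with the agent's demographics, personality, and current feed."""
    return rng.choice(ACTIONS)

def run_simulation(agents, followers, rounds=30, seed=0):
    """Each round represents one simulated day before the election.
    `followers` maps an agent to the set of agents who see its posts."""
    rng = random.Random(seed)
    feeds = {a: [] for a in agents}
    for day in range(rounds):
        for agent in agents:
            action = llm_decide(agent, feeds[agent], rng)
            if action == "post":
                # the new post lands in every follower's newsfeed
                for other in followers.get(agent, ()):
                    feeds[other].append((day, agent, "election post"))
            elif action == "follow":
                target = rng.choice(agents)
                if target != agent:
                    followers.setdefault(target, set()).add(agent)
            # "update_vote_intention" would re-query the model with the
            # content the agent has just seen in its feed
    return feeds
```

The structure matters more than the stub: the model is queried once per agent per day, and every action reshapes what other agents see on subsequent days.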

Shirani and Bayati ran different iterations of the simulation. In some versions, all agents saw the informational get-out-the-vote message; in others, they saw the social message; and in others, they saw no message at all.

Seeing the social message directly increased the agents’ voter turnout by an average of 3.9%, while the informational message made little difference. This result broadly aligned with the findings of the 2010 Facebook experiment. However, in that earlier experiment, the effect was much weaker: the social post increased voter turnout by about 0.4%.
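The comparison across arms is just a difference in turnout rates between agents who saw a given message and those who saw none. The numbers below are made up purely to illustrate the calculation; they are not the study’s data.

```python
def turnout(vote_intentions):
    """Fraction of agents intending to vote at the end of a run."""
    return sum(vote_intentions) / len(vote_intentions)

# hypothetical end-of-run vote intentions from three simulation arms
control = [True, False, False, True, False]       # no message shown
informational = [True, False, True, True, False]  # generic reminder
social = [True, True, True, True, False]          # friends' faces shown

lift_social = turnout(social) - turnout(control)
lift_info = turnout(informational) - turnout(control)
```

In the study, this kind of difference came out around 3.9% for the social message in simulation, versus roughly 0.4% in the original field experiment.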

Studying the Sims

The simulation might have yielded a stronger effect partly because virtual agents are “fully focused on what they’re seeing,” Shirani says, while real people are more distracted by the flood of information in their social feeds. “I may see the message, but then I quickly may see a post from my friend about the soccer game tonight, and I may totally forget about the message on the election.”

He and Bayati also measured the message’s indirect network effect — how it influenced users who didn’t directly see the post but saw related posts by friends. In the simulation, this effect was much smaller than the one estimated in the earlier Facebook experiment. Shirani speculates that this might be because “people have deeper connections” within their social networks in the real world. For example, “if I see my wife vote, that means a lot to me.”

Bayati envisions that LLM-driven agents could be useful for research in many fields, ranging from materials science to healthcare, as long as researchers are aware of their limitations and interpret results with caution. For instance, a researcher could run thousands of virtual experiments to identify promising leads for a new material and then validate those candidates in the lab.

“Everybody has been hearing ‘AI has value,’” Bayati says. “But here’s a very concrete example where we see it.”

