Technology & AI

Ghost in the Machine: Knowing Who Created a Robot Makes It Feel More Authentic

Researchers found that origin stories changed how people viewed AI agents’ work.

May 05, 2022

| by Rebecca Beyer
Illustration by Khyati Trehan

There's Alexa, sitting on the kitchen counter, waiting for your next query. But before she tells you how to make a perfect avocado salad, would you like to know something about the person who invented her?

As the use of automated assistants and other AI agents becomes more pervasive, how humans interact with them is increasingly a subject of debate and research. Now a new study reveals that when people think about the humans who create these tools, they view the robots’ work as more authentic.

The study was conducted by Stanford Graduate School of Business Professor Glenn R. Carroll, his University of Washington colleague Arthur S. Jago, and Mariana Lin, a writer and poet (and Stanford d.school lecturer) who helped create the voice of Apple’s Siri. Their paper was published in March by the Journal of Experimental Psychology: Applied.

Building on previous work by Jago indicating that people generally view AI agents as less authentic than humans, Jago, Carroll, and Lin conducted five experiments to measure how human origin stories and anthropomorphism impacted perceived authenticity. Across each, the findings were consistent: People viewed an AI agent’s work as more authentic when they were presented with information or asked questions about the person or people who created the agent in the first place.

The findings could have far-reaching implications in a world increasingly powered by AI. People value authenticity, which has many definitions, including what Carroll's previous work calls moral authenticity: the idea that one is acting in a way that is true to oneself.

“If you look at what drives purchases of consumers in advanced economies, it’s often not objective characteristics of products or services,” Carroll says. “It’s our interpretation of them, the meaning we derive. It matters a lot if we think something is authentic.”

“Think how much we analyze whether an apology is authentic, or someone’s work is authentic,” Jago says. “It’s embedded in our humanness.”

And the fact that people value authenticity can be, well, valuable. Companies that do a good job conveying the authenticity of their AI agents’ work may have an advantage over those that don’t. Previous research — including by Carroll — has shown that people are willing to pay more for products and services they perceive as authentic.

To test how authenticity was perceived in different AI scenarios, Carroll, Jago, and Lin used a variety of real-life examples, including computerized therapy platforms, music composition, pizza making, and hiring decisions.

“I have fond memories of Glenn and I sitting in his office trying to generate all these different domains,” Jago says, laughing. “We were literally going, ‘Did you know AI does this? Did you know AI does that?’”

The appendix in their article lists descriptions of the work performed by the hypothetical AI agent the authors named Cyrill:

“Specifically, Cyrill designs graphic art…”

“Specifically, Cyrill works in a large hospital [helping] doctors make sense of a confusing X-ray …”

“Specifically, Cyrill works as a security guard… helps recommend products to customers… helps manage customers’ complaints.”

And so on.

Testing people’s responses in multiple contexts, a practice known in psychology as stimulus sampling, makes experimental results more generalizable and, therefore, more credible. If people respond similarly to the same manipulation in different scenarios, other researchers can rely on the findings.

Human, Not Humanlike

A vast amount of research supports the idea that giving machines humanlike qualities such as faces or conversational speech patterns makes people more comfortable with their use. What was striking about this study was that the human origin stories embedded in the experiments had a stronger effect on perceived authenticity than anthropomorphizing the robots.


In one experiment, the researchers asked participants to indicate how authentic they thought an automated agent’s work would be, using a scale of one to seven, with seven being extremely authentic. Participants read that “Cyrill was actually developed by a famous computer scientist at Stanford. This developer was a pioneer when it came to AI and put a great deal of thought and effort into Cyrill.” A photo, ostensibly of the creator, accompanied the description.

The study subjects who read the origin story about an AI agent rated its work highest in authenticity, at 5.18. That held even when the researchers made the agent with the human origin story less humanlike (calling it “this AI” instead of Cyrill) and made the anthropomorphized AI more humanlike, heightening the differences between them.

“I did not anticipate the final conclusion at all,” Carroll says. “It was not obvious to us that human origin stories were going to be so powerful here.”

The authors did not define authenticity for the studies’ participants. “If they say it’s authentic, I’m going to interpret it as authentic,” Carroll explains. But previous studies have shown that authenticity adds value no matter how people think of it. As Carroll puts it, “People value something more highly if they think it’s authentic.”

In other words, we may want Alexa to help us boil our eggs, but we also want that help to be genuinely offered.

The researchers were well positioned to tackle the topic of authenticity in AI. Carroll, whose background is in sociology, has studied authenticity in a variety of contexts, including, for example, the craft beer industry. (Microbrewers have “a strong appeal of authenticity” while mass production breweries “take on the role of almost evil opponents.”) Jago, a psychologist by training, focuses on how new technologies are changing people’s experiences in the workplace and society. And, in a previous position as creative director for Apple, Lin oversaw the development of Siri, drafting many of the responses heard by iPhone users around the world.

“This was a fun paper to write,” Jago says. “The three of us all have very different theoretical and disciplinary perspectives.”

The Real-World Effect of Authenticity

Another study explored the possible practical applications of the findings. Participants were shown a painting generated by an algorithm and either told nothing about the artist behind the creation or given a short description of the artist. Then participants were asked whether a university should display the art in a museum, how much they would pay for the art, and whether they would recommend the algorithm, the painting, or both for an award.

Although they were no more or less likely to think the art should be displayed in the museum, participants who read a description of the artist believed the painting was more authentic, were more willing to pay for the piece, and were more likely to recommend both the art and the algorithm for awards.

The authors note in their paper that an AI agent’s authenticity might matter more in some contexts than in others. If, for instance, a customer service bot solves the problem you’re having, who cares whether the bot seems genuine?

But, in other cases, Carroll points out, “thinking something is authentic is what gives it meaning.”

“That’s how we interpret what goes on in social life,” he says.

The paper is the first collaboration between Jago and Carroll, and they have already started on another project aimed at discovering how people think about work jointly created by people and AI agents.

“Creative work often has a team of people behind it but one artist or author who is given sole credit officially,” Carroll says. “If the backstory was made public, that would undermine some of the mystique around the author. The question is, if the backstory is a human or an AI agent, does that make a difference?”

The possibilities for further exploration seem almost limitless.

“I’m really interested in understanding how people respond to technological change,” Jago says. “This just completely fascinates me, and as AI moves beyond making widgets and problem-solving into unique and novel domains, we don’t have adequate theories to explain how people are going to respond to or think about it.”
