
When the Best AI Isn’t Necessarily the Best AI

Why organizations might want to design and train less-than-perfect AI.

July 29, 2020

By Katharine Miller

 

When it comes to driving cars, sometimes artificial intelligence can be too good: It makes people lazy when it’s their turn to drive. | Reuters/Fabian Bimmer

These days, artificial intelligence systems make our steering wheels vibrate when we drive unsafely, suggest how to invest our money, and recommend workplace hiring decisions. In these situations, the AI has been intentionally designed to alter our behavior in beneficial ways: We slow the car, take the investment advice, and hire people we might not have otherwise considered.

Each of these AI systems also keeps humans in the decision-making loop. That’s because, while AIs are much better than humans at some tasks (e.g., seeing 360 degrees around a self-driving car), they are often less adept at handling unusual circumstances (e.g., erratic drivers).

In addition, giving too much authority to AI systems can unintentionally reduce human motivation. Drivers might become lazy about checking their rearview mirrors; investors might be less inclined to research alternatives; and human resource managers might put less effort into finding outstanding candidates. In short, relying on an AI system raises the risk that people will, metaphorically speaking, fall asleep at the wheel.

How should businesses and AI designers think about these tradeoffs? In a recent paper, economics professor Susan Athey of Stanford Graduate School of Business and colleagues at the University of Toronto laid out a theoretical framework for organizations to consider when designing and delegating decision-making authority to AI systems. “This paper responds to the realization that organizations need to change the way they motivate people in environments where parts of their jobs are done by AI,” says Athey, who is also an associate director of the Stanford Institute for Human-Centered Artificial Intelligence, or HAI.

Athey’s model suggests that an organization’s decision of whether to use AI at all — or how thoroughly to design or train an AI system — may depend not only on what’s technically available, but also on how the AI impacts its human coworkers.

Motivated to Pay Attention

The idea that decision-making authority incentivizes employees to work hard is not new. Previous research has shown that employees who are given decision-making authority are more motivated to gather the information needed to make a good decision. “Bringing that idea back to the AI-human tradeoff,” Athey says, “there may be times when — even if the AI can make a better decision than the human — you might still want to let humans be in charge because that motivates them to pay attention.” Indeed, the paper shows that, in some cases, improving the quality of an AI can be bad for a firm if it leads to less effort by humans.

Athey’s theoretical framework aims to provide a logical structure to organize thinking about implementing AI within organizations. The paper classifies AI into four types, two with the AI in charge (replacement AI and unreliable AI), and two with humans in charge (augmentation AI and antagonistic AI). Athey hopes that by gaining an understanding of these classifications and their tradeoffs, organizations will be better able to design their AIs to obtain optimal outcomes.

Replacement AI is in some ways the easiest to understand: If an AI system works perfectly every time, it can replace the human. But there are downsides. In addition to taking a person’s job, replacement AI has to be extremely well-trained, which may involve a prohibitively costly investment in training data. When AI is imperfect or “unreliable,” humans play a key role in catching and correcting AI errors — partially compensating for AI imperfections with greater effort. This scenario is most likely to produce optimal outcomes when the AI hits the sweet spot where it makes bad decisions often enough to keep human coworkers on their toes.
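The paper develops this tradeoff formally; the back-of-the-envelope sketch below is purely illustrative and is not Athey’s model (the payoff formula, the 90 percent error-catch rate, and the assumed effort response are all invented here). It imagines a firm whose good decisions come either from the AI being right or from an attentive human catching the AI’s mistakes, with human attention falling off as the AI improves.

    # Toy illustration only (hypothetical numbers, not the model in Athey's paper).
    # Good outcomes come from the AI being right, or from an attentive human
    # catching the AI's mistakes. Human attention fades as the AI gets better.

    def human_effort(ai_accuracy):
        # Assumed behavioral response: attention drops steeply with AI quality.
        return max(0.0, 1.0 - 2.0 * ai_accuracy)

    def share_of_good_decisions(ai_accuracy):
        # Fully attentive humans are assumed to catch 90% of the AI's errors.
        effort = human_effort(ai_accuracy)
        caught_by_human = (1.0 - ai_accuracy) * 0.9 * effort
        return ai_accuracy + caught_by_human

    for acc in (0.3, 0.4, 0.5, 0.7, 0.9):
        print(f"AI accuracy {acc:.1f} -> effort {human_effort(acc):.2f}, "
              f"good decisions {share_of_good_decisions(acc):.2f}")

Under these made-up numbers, raising the AI’s accuracy from 0.3 to 0.5 actually lowers the share of good decisions, from about 0.55 to 0.50, because the humans’ vanishing effort more than offsets the better AI; only a much larger improvement leaves the firm ahead. That non-monotonicity is the kind of interaction the framework is meant to expose.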


With augmentation AI, employees retain decision-making power while a high-quality AI augments their effort without undermining their motivation. Examples of augmentation AI might include systems that, in an unbiased way, review and rank loan applications or job applications but don’t make the lending or hiring decisions. The tradeoff is that human biases have a bigger influence on decisions in this scenario than when the AI is in charge.

Antagonistic AI is perhaps the least intuitive classification. It arises in situations where there’s an imperfect yet valuable AI, human effort is essential but poorly incentivized, and the human retains decision rights when the human and AI conflict. In such cases, Athey’s model proposes, the best AI design might be one that produces results that conflict with the preferences of the human agents, thereby antagonistically motivating them to put in effort so they can influence decisions. “People are going to be, at the margin, more motivated if they are not that happy with the outcome when they don’t pay attention,” Athey says.

The Dangers of Mimicking Bias

To illustrate the value of her model, Athey describes the design issues and the tradeoffs for worker effort that arise when companies use AI to address bias in hiring. The scenario runs like this: If hiring managers, consciously or not, prefer to hire people who look like them, an AI trained on hiring data from such managers will likely learn to mimic that bias (and keep those managers happy).

If the organization wants to reduce bias, it may have to make an effort to expand the AI training data or even run experiments — for example, adding candidates from historically black colleges and universities who might not have been considered before — to gather the data needed to train an unbiased AI system. Then, if biased managers are still in charge of decision-making, the new, unbiased AI could act antagonistically, motivating them to read all of the applications so they can still make a case for hiring the person who looks like them.

But since this doesn’t help the organization achieve its goal of eliminating bias in hiring, another option is to design the organization so that the AI can overrule the manager. That, however, has its own unintended consequence: an unmotivated manager.

“These are the tradeoffs that we’re trying to illuminate,” Athey says. “AI in principle can solve some of these biases, but if you want it to work well, you have to be careful about how you train the AI and how you maintain motivation for the human.”

As AI is adopted in more and more contexts, it will change the way organizations function. Firms and other organizations will need to think differently about organizational design, worker incentives, how well the decisions by workers and AI are aligned with the goals of the firm, and whether an investment in training data to improve AI quality will have desirable consequences, Athey says. “Theoretical models can help organizations think through the interactions among all of these choices.”

