More businesses are experimenting with artificial intelligence, but executives need to understand what the technology can — and can’t — do. | Reuters/Bobby Yip
So you’re the CEO of a clothing retailer, a rental car agency, or a payroll processing company, and you hear that artificial intelligence is changing the world. What are you supposed to do?
The short answer, says Paul Oyer at Stanford Graduate School of Business, is to start learning fast.
“Artificial intelligence will affect every industry, whether it’s clothing or shipping,” says Oyer, a professor of economics and the codirector of a new multidisciplinary course on AI for senior executives. “We need to find a complementary relationship between those who deal with the technology of AI and the managers who understand what drives their companies. Managers don’t need to learn all the technical details, but they do need to understand the implications for their business.”
It’s a tall order. AI has powered major advances in self-driving cars, robotics, image recognition, medical diagnostics, and big-data analysis. But each industry has its own needs, and non-technical executives are usually the ones who have to set strategic direction.
Oyer and Mykel Kochenderfer, director of the Stanford Intelligent Systems Laboratory and the course’s other codirector, say that humans and AI systems both need to understand each other better.
In a recent interview, the two outlined several issues for managers.
AI Isn’t Just for “Tech” Firms
Like personal computing back in the 1980s, artificial intelligence is a tool that can transform even seemingly old-school industries. A clothing retailer, for example, can use pattern recognition to better target particular kinds of customers. A trucking company can use AI to plan loads efficiently, optimize routes, anticipate maintenance issues, and identify drivers who may need more training.
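To make the retail example concrete, here is a minimal sketch, in Python, of the kind of pattern recognition involved: grouping customers into segments with k-means clustering. The features, shopper profiles, and segment count are invented for illustration and are not drawn from any particular retailer.

```python
# A minimal, illustrative sketch (not any real retailer's pipeline):
# segment synthetic customers with k-means so each group can be targeted differently.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)

# Invented shopper profiles: [annual spend ($), orders per year, avg. discount used (%)]
profiles = np.array([[400.0, 5.0, 10.0], [1200.0, 20.0, 2.0], [150.0, 2.0, 30.0]])
spreads = np.array([[80.0, 2.0, 4.0], [200.0, 5.0, 1.0], [50.0, 1.0, 8.0]])

# Draw 100 synthetic customers around each profile.
customers = np.vstack([
    rng.normal(loc=p, scale=s, size=(100, 3)) for p, s in zip(profiles, spreads)
])

# Standardize so dollar spend doesn't dominate the distance metric.
features = StandardScaler().fit_transform(customers)

# Recover three segments from the data alone.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for label in range(3):
    group = customers[segments == label]
    print(f"Segment {label}: {len(group)} customers, "
          f"avg spend ${group[:, 0].mean():.0f}, "
          f"avg orders {group[:, 1].mean():.1f}/yr")
```

A retailer could then tailor promotions to each segment, for instance offering loyalty perks to frequent buyers and discounts to price-sensitive shoppers.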
On the other hand …
AI Is Not a Magic Wand
“Managers need to separate the hype from the reality,” says Kochenderfer. “For most executives, their understanding of AI comes from what they learn in the media. They learn the buzzwords, but they need to understand the core fundamental insights. It will be years, for example, before we have robotic flight attendants.”
AI Will Transform Labor Markets
Oyer does not believe that AI and robots will cause mass unemployment, any more than the mechanization of farming did a century ago. Indeed, robotics could be helpful in countries with older populations and shortages of working-age humans. However, Oyer warns, AI is likely to disrupt many current job categories. Autonomous vehicles will dramatically affect jobs that depend on driving. In the warehouse sector, a big growth area in recent years, robots are likely to replace many human packers and pickers.
Because lower-skilled jobs tend to be easier to automate, the pay gap between low-skilled and high-skilled workers is likely to keep widening.
“I’m not worried about there being enough jobs,” Oyer says. “But I am worried that a lot of people will have a very rough time making the transition after automation wipes out their old jobs. As a society, we’ve been terrible at retraining those people.”
AI Poses Major Safety Issues
Autonomous vehicles have made impressive advances in driving under normal conditions, but Kochenderfer cautions that they haven’t come close to eliminating the risks, in particular the unpredictability of how humans or other machines will react to a vehicle’s decisions in complicated situations.
There are many, many “edge cases”: low-probability situations with serious consequences. Kochenderfer says edge cases arise in many fields, from systems that diagnose medical images to collision-avoidance systems that decide what kind of evasive action an airplane should take. Mistakes can be fatal. “The issue,” Kochenderfer says, “is how do we certify that AI systems are truly safe and worthy of our trust?”
AI Systems Can Be as Plagued as Humans by Biases
Kochenderfer cites Amazon’s ill-fated attempt to use a machine-learning system to review and rank job applications. To its chagrin, the company discovered that the system was biased against women and quietly shut it down. Why? Because most of the tech people hired in previous years had been male. The tech industry’s well-documented history of “bro” bias had subtly infected the AI system as it “learned” from those past hiring patterns.
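The mechanism is easy to demonstrate in miniature. The sketch below trains a simple classifier on synthetic “historical” hiring decisions that systematically penalized one group; the trained model then assigns lower scores to equally qualified members of that group. Every feature, number, and threshold here is made up for illustration and has no connection to Amazon’s actual system.

```python
# Toy demonstration only (synthetic data, not Amazon's system): a classifier
# trained on biased historical hiring labels reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
n = 5000

skill = rng.normal(size=n)            # true qualification, identical across groups
group = rng.integers(0, 2, size=n)    # 0 = group A, 1 = group B

# Biased "historical" decisions: past reviewers systematically penalized group B.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# The model sees group membership (directly, or in practice via proxy features)
# and faithfully learns the historical bias.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled applicants who differ only in group membership.
applicants = np.array([[0.0, 0], [0.0, 1]])
probs = model.predict_proba(applicants)[:, 1]
print(f"P(recommend | group A) = {probs[0]:.2f}")
print(f"P(recommend | group B) = {probs[1]:.2f}")
```

The point is that the bias enters through the labels, not the algorithm: any model fit faithfully to skewed decisions will reproduce them.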
“This goes to a core theme of the course,” says Kochenderfer. “To build successful systems, we need to account for human behavior. Much of the uncertainty in the world is due to human influence.”