Exploring the Human Side of Artificial Intelligence
At an AI forum, experts say the arrival of superhuman machine intelligence will be one of the biggest events in human history.
November 07, 2019
HAI Distinguished Fellow Erik Brynjolfsson and Stanford GSB Professor Susan Athey discuss the ethics of AI at a recent conference. | Holly Hernandez
An underlying theme emerged from the Stanford Institute for Human-Centered Artificial Intelligence’s fall conference: AI must be truly beneficial for humanity and not undermine people in a cold calculus of efficiency.
Titled "AI Ethics, Policy, and Governance," the event brought together more than 900 people from academia, industry, civil society, and government to discuss the future of AI (that is, automated computer systems able to perform tasks that normally require human intelligence).
Discussions at the conference highlighted how companies, governments, and people around the world are grappling with AI’s ethical, policy, and governance implications.
Expanding Human Experience
Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business and faculty associate director at Stanford HAI, spoke about AI’s impact on the economy. It’s critical, she said, that AI creates shared prosperity and expands — rather than replaces — the human experience in life and at work. Humans, after all, understand things in ways that may be difficult to codify in AI. How we organize and think about the future of work, for people as well as machines, matters because the two are deeply interconnected, she added.
“The real benefits of AI come when we dive into the applications and understand the entire vertical, everything through implementation, including the ethics and the feelings of people adopting it,” Athey said.
Erik Brynjolfsson, director of the Initiative on the Digital Economy at MIT, said companies building AI need to focus on the human side in addition to the eye-popping technology. “We need to understand first what our values are so we can understand how best to use these technologies,” he said. It’s necessary, he added, to rethink whole organizational and business processes in terms of how AI fits in with human culture.
Other panelists discussed the roles of public entities and private enterprise when it comes to regulating AI.
Eric Schmidt, the former CEO of Google and technical advisor to Alphabet Inc., spoke with Marietje Schaake, a former Dutch member of the European Parliament who helped shape the European Union’s regulation of big tech and is now the Stanford Cyber Policy Center’s international policy director.
Schmidt noted that ethics matter in how a human decision is combined with an AI decision and said that “liberal, Western values” are important to support at a time when countries like China are using AI technology to repress and surveil their own people. “We want to make sure the systems we’re building are built on our values, human values,” he said.
Schaake urged policymakers worldwide to take a citizen-oriented approach to AI policies and regulations rather than a more corporate, user-oriented framework. She advocated tighter regulation of how tech companies use big data and stronger privacy protections for individuals, and argued that AI regulation should come sooner rather than later.
“We need a deeper debate about which tasks need to stay in the hands of the public and out of the market,” she said.
Ethics, Geopolitics, and Diversity
Reid Hoffman, cofounder of LinkedIn, discussed his concept of “blitzscaling,” a set of techniques drawn from Silicon Valley companies for scaling innovations quickly. In AI, Hoffman said, that speed must be paired with a sense of ethics and responsibility.
For example, when fast-growing companies plan for the future and rapidly build up their engineering or sales capabilities, they also need to anticipate what could go wrong on the road ahead. That means hiring people who understand risk and ethics and building a company-wide risk framework grounded in both.
In the area of health care and disease, DJ Patil, the head of technology for Devoted Health, noted how AI holds tremendous promise for treating people and saving lives: “We need to go at maximum warp speed to help those people.” The challenge is how to bring those cures and treatments to market quickly while also adhering to the necessary health care safeguards and ethical sensibilities.
Patil also called for more cooperation on data sharing around the world. “We have climate change, the potential for pandemics. What we need is better international frameworks, treaty mechanisms to share data across regional lines so that we can actually work on human problems.”
AI and National Security
In an AI and Geopolitics breakout session led by Amy Zegart, a senior fellow at the Freeman Spogli Institute for International Studies and at the Hoover Institution, panelists analyzed the nature of artificial intelligence; its role in national security, intelligence, and safety systems; and how it may affect strategic stability — or instability.
On the latter point, Colin H. Kahl, codirector of Stanford’s Center for International Security and Cooperation, raised concerns that AI could heighten economic tensions among the world’s most powerful nations and shift the global military balance of power if some countries race ahead on AI while others fall behind. He also warned of the possibility of AI-enabled cyber weapons being used against nuclear command and control centers.
Zegart added that machine learning can help lighten the cognitive load on intelligence specialists analyzing and sifting through data, which is now produced at an ever-accelerating rate. The challenge is organizational: bureaucracies are slow to adopt game-changing technology.