Worried about artificial intelligence eliminating your job? So 2024.

“The age of AI agentics is here,” announced Nvidia CEO Jensen Huang in January. A few weeks later, Meta CEO Mark Zuckerberg predicted that “2025 will be the year when it becomes possible to build an AI engineering agent that has coding and problem-solving abilities of around a good mid-level engineer.” In February, Klarna CEO Sebastian Siemiatkowski declared that he believes “AI can already do all of the jobs that we, as humans, do.”

As AI rewrites how we work — and increasingly, how we live — the question is no longer whether the technology will eliminate my job, your job, or all of our jobs.

In fact, that framing may miss the point entirely.

“The biggest mistake we can make is thinking that this is just a story about technology,” says Jennifer Aaker, PhD ’95, a behavioral scientist and professor of marketing at Stanford Graduate School of Business. “It’s a story about humanity, and how AI will alter the fundamental nature of the human experience.”

Aaker often says that people want to be valued members of a winning team on an inspired mission. And even as AI is being heralded as the engine of a multitrillion-dollar revolution, how most of us fit into this new paradigm isn’t clear.

As many as three-quarters of Americans believe AI will reduce the total number of jobs in the U.S. over the next decade, and more than half of U.S. workers say they’re worried about how AI will be used in the workplace, a concern that cuts across age, education, and income levels. Just 32% of Americans say they trust AI, and a majority are concerned about the use of the technology in their day-to-day lives.

How AI will be implemented remains an open question, and a growing body of literature suggests that high-stakes decisions made with AI assistance in domains like healthcare, social services, and criminal justice are often no better than those made without it.

And yet, AI is already on the team — or will be very soon — and Aaker suggests we think of it not only as a tool, but as a coach, a guide, a partner, and a problem-solver. “AI optimizes. It makes us faster and more efficient, but not necessarily better,” she says.

Better, Aaker explains, means a doctor using AI insights to have more empathetic conversations with patients — rather than reducing interactions to mechanical transactions. Better is when AI frees writers or artists to spend more time imagining new worlds — instead of burdening them with routine tasks. Better ensures that AI-driven decisions reflect fairness and equity — not merely efficiency — by actively identifying and reducing biases humans might miss.

“Our work focuses on something deeper,” Aaker continues. “How do we use AI to make humans more authentically themselves? How do we make sure this tool — this incredibly powerful, seductive, and increasingly autonomous tool — doesn’t push us further away from what actually makes life meaningful?”

To deploy this extraordinary technology wisely, we must look beyond its brute-force capabilities.

To design AI that elevates rather than erodes our lives, our work, and our world, Aaker argues that we must double down on three distinctly human capacities. “Together, they help define what it means to be fully human in the age of the algorithm,” she says.

  • Authenticity: Courageously speaking and acting with genuine integrity, truth, creativity, and compassion — even when imperfect
  • Boldness: The uniquely human combination of imagination, creativity, and courage to envision — and build — a better future
  • Love: Our capacity not just to feel emotions, but to genuinely connect, care for, and support each other’s experiences

“We have choices,” Aaker says. “We can build technology that harnesses our humanity or settle for tools that diminish us.”

Some companies already embrace such a human-first approach. Anthropic emphasizes transparency and ethics in its AI safety framework, “Constitutional AI,” which trains models to be helpful, harmless, and aligned with human values. At Wildflower Schools, AI-powered sensors are designed to expand awareness and equity — helping teachers identify who’s being overlooked, not who’s underperforming. And Nvidia’s AI powers medical research, using generative models to help doctors detect diseases earlier and with greater accuracy, augmenting rather than replacing clinical judgment and compassionate, patient-centered care.

Aaker is currently co-developing a course on generative science, an emerging field that uses the computational power of AI to uncover novel solutions or insights that accelerate and enhance human-led research. Since AI can analyze massive datasets and identify patterns, correlations, or potential solutions that humans might not consider, it can propose hypotheses that are novel, unbiased, or derived from complexities people might overlook.

“It’s basically supercharging humans in the hypothesis step of the scientific method,” Aaker says. “We’ll use scientific models in ways that could really tackle previously unsolvable problems in domains like biology and renewable energy.” The working title of the new course: AI for Humanity: Solving the Unsolvable.

How we use the most powerful tool humanity has created isn’t tomorrow’s question; it’s today’s.

So far, the story of AI has been one of exponential technological growth — a relentless push toward speed, scale, and optimization. But as the technology matures, a deeper question has emerged: Will we let AI narrow what it means to be human, or will we use AI to expand it — making real progress on the challenges of our time in the process?

“AI computes; we create,” Aaker says. “Our shared potential is extraordinary. The challenge? Unifying the efficiency and speed of AI with our authentic human truths.”

AI will change the world. How we work with it to capture the best of who we are is up to us.
