“The way I think about trying to anticipate and shape the AI future requires us to take a step back and ask ourselves first, ‘What does this technology do? What does it enable?’” reflects Amir Goldberg, a professor of organizational behavior at Stanford Graduate School of Business. “That’s very different from asking ourselves, ‘How is the technology implemented?’”
From locating the origins of innovation to identifying hidden barriers blocking new ideas, this perspective informs how Goldberg thinks about effectively harnessing novel technological capabilities.
“The data/AI train is leaving the station,” Goldberg says. “The problem is, there are many trains — and some are going off a cliff.”
Staying on track isn’t easy, but Goldberg offers some practical ways to get started. For example, his work has explored the power of transforming conversations, documents, and other communication into data that can reveal useful insights.
“You can’t afford not to use data,” Goldberg says. “Your competitors will be transforming their apparatuses into data, and they will kick you out of business.”
Listen & Subscribe
If/Then is a podcast from Stanford Graduate School of Business that examines research findings that can help us navigate the complex issues we face in business, leadership, and society. Each episode features an interview with a Stanford GSB faculty member.
Full Transcript
Note: This transcript was generated by an automated system and has been lightly edited for clarity. It may contain errors or omissions.
Ge Wang: My first instrument that I learned was the accordion at the age of seven. This was in Beijing. My grandparents, with whom I grew up, they were like, “Hmm, you wanna go check out and take accordion lessons?”
Kevin Cool: That’s Ge Wang, a professor of music at Stanford University.
Ge Wang: I was never too into it, to be honest, but it was a first foray into music making. The instrument that really like changed everything was an electric guitar.
Kevin Cool: A few years after Ge moved to the United States, his parents got him a second-hand, fire-red Series 10 electric guitar for his 13th birthday.
Ge Wang: I had goals, goals that initially felt out of my reach. Like, oh man, I want to play that solo, or just to play the riff from that song that I know and love. And it would seem insurmountable at first, but over time you practice and you practice and you practice, you practice your scales, you try to play the songs you love, and you realize, yeah, you’re getting better. It’s like playing a really difficult video game.
Kevin Cool: As he got older, Ge developed an interest in computers and computer programming.
Ge Wang: All my degrees are in computer science. So that’s kind of where I guess my training has been. I remember how much I loved programming, like the act of programming, much like for people who like cooking or any passion or hobby.
Kevin Cool: In the early 2000s, Ge invented ChucK, a widely used audio programming language. A few years later, he designed an app that enabled an iPhone to simulate an ocarina — a small wind instrument.
Ge Wang: Thank you.
Kevin Cool: Today, Ge uses computers and algorithms to create music in novel ways. He also teaches a course called “Music and AI.”
Ge Wang: Computers offer a fundamentally different kind of way to give ourselves more tools, not to take the place of anything, but just to make more tools for us to make music with, to express ourselves with.
Kevin Cool: Put another way, computers and AI add instruments to the orchestra. But when and whether to use them is another question.
Ge Wang: It’s really asking what parts of it do I want more automated and what parts of it do I decidedly want to not automate. You just have to know to ask the question. You gotta figure out what you want from this thing. And also when you want to not use the thing.
Kevin Cool: Regardless of how we choose to use AI, its impact on our work is inescapable.
Ge Wang: What we can do with the technology is interesting and for some of us, like myself, it’s our job, but it is subordinate to why I think perhaps we got into our jobs in the first place, which is this love of music and this need for us to creatively express ourselves. What I love about computers is that it offers new ways for me to do that.
Kevin Cool: AI and the data that powers it have clear benefits, whether it’s for musicians in the studio or leaders creating a culture of innovation. But these tools aren’t the whole answer, says Amir Goldberg, a professor of organizational behavior at Stanford Graduate School of Business.
Amir Goldberg: The machines are not going to give them solutions, and having data is not beneficial in and of itself. Data don’t tell you a story, you need to narrate the data.
Kevin Cool: Goldberg researches culture, innovation, and how businesses can successfully integrate AI into their organizations. And that’s our focus today.
This is If/Then, a podcast from Stanford Graduate School of Business. I’m Kevin Cool, Senior Editor at Stanford GSB.
Kevin Cool: We’re going to start today by talking about culture, which is something that you study, but you study it in an unusual way, maybe even a novel way, in a kind of data-driven approach. How do you do that?
Amir Goldberg: Well, the process of analytical thinking is a process of representation itself in the following way.
Amir Goldberg: It requires us to think about categories of things that exist in the world. And once we have those categories, we can ask ourselves, okay, then how do we measure them? And by measurement, I mean, how do we transform these categories of things in the world into numeric representation?
Kevin Cool: Ok.
Amir Goldberg: But what this process requires beforehand is to ask ourselves, when you say this fluffy term “culture,” what do we mean? And we mean many, many, many things. Culture is not a thing in and of itself. It’s an umbrella term to describe a set of processes that affect how people interpret the world. At least that’s how I tend to think about culture.
Kevin Cool: You have said that some people cringe at the idea that you can even measure culture. Have you found it to be impenetrable in some ways? What has your experience been like?
Amir Goldberg: I have not found it impenetrable. I have found it exciting to use these approaches and to exploit the fact that I was coming of age as a scientist during the internet and digital revolution, which kind of opened up the opportunity to transform culture, or pieces of culture, into data. But I definitely received a lot of pushback, mostly from people who had studied culture up until that point and were resistant to the idea that you can do this process of dissection. But it enables all kinds of things.
Kevin Cool: Let’s use your study on essentially where groundbreaking ideas come from. Do they come from people who are sort of established? Do they come from people on the perimeter, on the fringes? What specifically was the question you were trying to answer? And then how did you find your way into answering that question?
Amir Goldberg: So, you know, a really fundamental question in the study of innovation and creativity is, why is it that some people and some entities, some organizations, are very successful at producing innovation and some are not? And there are two opposing hypotheses. One is that innovation comes from the fringes: people who are on the outside see the world differently.
Kevin Cool: Maybe even sort of iconoclastic, you know, if you think of like great inventors. Yeah.
Amir Goldberg: And we like to tell these stories. They very much comport with American cultural perceptions of individualism. The lone cowboy, you know, the nonconformist, who is blazing their own trail. There’s a lot of truth to that story. It’s both about the fact that by virtue of being on the outside, they see things differently, but also they have incentives to innovate because they’re on the margins. They have nothing to lose. Nobody is paying attention to them. So to use the parlance of our times, they have the incentive to be disruptors.
Amir Goldberg: Okay, so that’s one hypothesis. The other hypothesis is that people at the center actually have both greater ability and greater incentives to innovate. The ability comes by virtue of the power they have: they can shape other people’s perceptions, and they have the resources. Innovation is hard. It requires a lot of experimentation. And if they can innovate and shape other people’s perceptions of innovation, that could become a source of huge advantage. They will always be one step ahead of everybody.
Kevin Cool: Mmhmm.
Amir Goldberg: So these are competing kind of theories about where innovation comes from and why. The big problem is, it’s extremely difficult to measure what is innovative.
Kevin Cool: Exactly.
Amir Goldberg: And when we tend to think about innovation, probably most people, most listeners to this podcast, think about technology. And technology is documented, for example, in patenting and things of that sort, which has been the traditional way by which innovation has been studied.
But innovation manifests in non-technological, non-patentable things in a variety of domains. Take a company like Walmart, which innovated with strategy and completely upended the retail market. You know, up until the 1960s, the most desirable locations for stores were downtowns. And Walmart had an inherently different idea of going outside the city center, operating at scale, competing at extremely, extremely narrow margins. Et cetera, et cetera.
Kevin Cool: Having a giant parking lot.
Amir Goldberg: And having a giant parking lot. This required thinking outside the box, reimagining what is desirable, separating things that exist simply by virtue of convention from things that exist for logical, causal reasons.
But we don’t tend to think about this as a technological innovation. It’s not patentable. It doesn’t register any way where we can measure it. Companies innovate in strategy, not just in technology.
There are so many creative endeavors and innovations that are missed if what you focus on is merely technological innovation, like new internet protocols or pharmaceutical innovations.
Kevin Cool: So how did you capture them?
Amir Goldberg: The way that we captured them was to exploit this really exciting moment of computational innovation, from our perspective, where we take this fluffy thing, how people speak, and transform it into data.
Kevin Cool: And can you say just another word about what you mean by how they speak?
Amir Goldberg: Very specifically, we look at how politicians speak. We basically analyze the full congressional record. We look at how executives describe their firms’ strategies and performance during quarterly earnings calls.
And we also look at basically the full judicial record, all decisions in the United States from the mid-20th century to the early 21st century. The problem is that language is immensely unstructured.
Kevin Cool: Right. And you have millions of examples that you’re looking at here.
Amir Goldberg: We have many, many millions of examples. We need to take that very, very unstructured thing and give it structure. And this has been a boon for me, probably by virtue of luck, that I came into this business during a moment of technological innovation: these huge corpora of text can now be manipulated using natural language processing and natural language understanding algorithms.
And what we did is we applied this machinery in multiple contexts. We asked: who are the people who produce ideas that are innovative at the time of production but that become commonplace in the future? And we find, very, very systematically, that these are people on the fringes.
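The intuition Goldberg describes, ideas that are far from everything that came before them but close to what comes after, can be illustrated with a toy sketch. The documents, the bag-of-words representation, and the scoring functions below are illustrative assumptions, not the study's actual pipeline, which relied on large-scale natural language processing over millions of documents:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical documents, ordered by time of production.
docs = [
    "tariffs on imported steel protect domestic industry",
    "steel tariffs and trade policy for domestic industry",
    "carbon pricing to curb industrial emissions",      # novel when produced...
    "emissions trading and carbon pricing proposals",   # ...but echoed later
    "carbon pricing schemes for heavy industry",
]
bags = [Counter(doc.split()) for doc in docs]

def novelty(i: int) -> float:
    """1 minus the max similarity to any EARLIER document (1.0 if none)."""
    prior = [cosine(bags[i], bags[j]) for j in range(i)]
    return 1.0 - max(prior) if prior else 1.0

def prescience(i: int) -> float:
    """Mean similarity of LATER documents to document i (0.0 if none)."""
    later = [cosine(bags[i], bags[j]) for j in range(i + 1, len(docs))]
    return sum(later) / len(later) if later else 0.0

for i in range(len(docs)):
    print(f"doc {i}: novelty={novelty(i):.2f}  prescience={prescience(i):.2f}")
```

A document that scores high on both measures is prescient in the sense discussed here: it was distant from the existing conversation when it was produced, yet the conversation later moved toward it.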
Kevin Cool: Okay.
Amir Goldberg: These are politicians who are not at the center of, like, Washington, these are people who start firms that are small and on the margins. These are judges who operate at the lower courts, who are not trained in prominent schools or by prominent justices. So we find very compelling evidence across very different domains of human activity, that this type of prescience is more likely to emanate from the margins than it is from the center. This doesn’t mean that people at the center can’t be prescient, they can. But on average, that’s what we find.
Kevin Cool: That’s an amazing insight. If I am, say, a leader in an organization, and maybe we can start to talk about this in terms of organizations, and how you would create a culture, to some degree, to get to these sorts of outcomes. How do I use this insight?
Amir Goldberg: Well, I think one of the most exciting moments that we live in is that, and I think leaders have started to understand this but are really, really behind the curve, they can now transform their own experiences into data.
So what happens in their organization is also being documented. People are having conversations. People are writing documents. People are exchanging ideas over Slack. Now, there are all kinds of issues related to the ethical implications of transforming these conversations that occur naturally in organizations into data. But those ethical responsibilities of leaders to their employees notwithstanding, and there are ways to do it in a way that’s collaborative and constructive with your employees, I think the opportunities here are immense.
If you are running a company and you’re asking yourself, why am I so behind on innovation? Well, you can start transforming the exchange of ideas that happens in your organization into data, using contextual embedding models and large language models, and start asking yourself, where is innovation happening? Maybe my problem is that people who are on the margins are suggesting great ideas, but we are systematically dismissing them. And what I’m proposing now to people who are in the business of innovation is this: you have data at your disposal. You can ask yourself: Where are the good ideas happening in my organization? Who’s killing them? Why?
There is no one-size-fits-all answer to this question. A lot of the time people want to say, oh, here’s what I need to do: I need to have everybody go on a rafting trip, and that will solve my innovation problems. It doesn’t work that way. In order to diagnose the problem, you need data. And I think that’s the amazing moment that we’re living in. The data that I’m excited about collecting, which helps me understand where innovation happens in the courts and in politics, are the same kind of data that organizational leaders can use.
Kevin Cool: Mmhmm
Amir Goldberg: What’s the recipe? I don’t know. It depends on the diagnosis.
Kevin Cool: Right. So this cracks open a whole new area of possibility in terms of what you can know.
Amir Goldberg: I would say even more than that. It doesn’t only open a new area of possibility. It’s the imperative of leadership today. You can’t afford not to use data. You can’t run a 21st-century organization with insights and managerial instruments that were developed on the basis of the 20th-century organization. Because you will fall behind. Because your competitors will be transforming their apparatus into data, and they will kick you out of business, because good diagnosis is the first step toward competitive advantage.
That’s something I say in all the classes that I teach, whether I’m meeting executives or MBA students or undergrads: the data/AI train is leaving the station. The problem is there are many trains leaving the station, they’re going in multiple directions, and many of them are going to fall off a cliff.
Kevin Cool: So, this is all really interesting, the idea that you can sort of use data to create almost sort of a profile of a cultural context. Can we take it one step further? Can AI tell us about the way humans operate at some level?
Amir Goldberg: So the advantage of AI is that it imbibes copious amounts of data, far more than any human can imbibe in a lifetime.
Kevin Cool: Sure.
Amir Goldberg: Okay? And really what a large language model, what AI kind of machinery does is it has the ability to identify patterns in the data that are simply astonishing. The largest models have billions, trillions of parameters. No human can look at these parameters and make sense of them.
Kevin Cool: Course not.
Amir Goldberg: Okay? By the way, our human brain does that too. We represent a lot of patterns in our brain in ways that cognitive scientists don’t fully understand. What this enables us to do is reach new insights about the patterns of human behavior.
Kevin Cool: They would be completely opaque to us otherwise.
Amir Goldberg: Well, they would be opaque, or we would understand them intuitively but wouldn’t know how to represent them numerically.
Kevin Cool: Yeah, yeah.
Amir Goldberg: And therefore couldn’t use them in an analytical framework, a diagnostic framework. But as it stands today, the technology does not replace us. It becomes an amazing aid. And what we need to become are proficient users of this technology, to try and understand these patterns. We need to know what to ask the machine.
Kevin Cool: Right, right.
Amir Goldberg: To help us diagnose the world, to use that language. You know, some of the amazing things being done with AI right now by my colleague Michael Bernstein, a computer scientist here at Stanford: they use large language models. They transcribe an interview that a person gave, feed it to the machine, and say, now you’re this person. Then they ask the machine all kinds of questions and ask it to make predictions about how that human would have answered those questions. A kind of simulated human. And it turns out that, depending on the quality of the interview and the kinds of questions they ask, the AI sometimes provides answers that are very consistent with what that human would have answered. That’s astonishing.
Now imagine the implications for you as an organizational leader. You are about to implement a new policy. Instead of implementing it and seeing whether it ruins your business, you can run a simulation.
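At the prompt level, the simulated-human setup described above can be sketched roughly as follows. The function name, prompt wording, and sample interview are all illustrative assumptions, not the actual research protocol; in practice the assembled prompt would be sent to a large language model API:

```python
def build_persona_prompt(interview_transcript: str, question: str) -> str:
    """Assemble a prompt asking a language model to answer a question
    the way the interviewed person plausibly would (hypothetical format)."""
    return (
        "Below is an interview transcript. Adopt the persona of the "
        "interviewee and answer the question as they would.\n\n"
        "--- INTERVIEW ---\n"
        f"{interview_transcript}\n"
        "--- END INTERVIEW ---\n\n"
        f"Question: {question}\n"
        "Answer, in the interviewee's voice:"
    )

# Hypothetical usage: probe a simulated employee before rolling out a policy.
prompt = build_persona_prompt(
    "Q: What do you value most at work? A: Autonomy. I hate being micromanaged.",
    "How would you react to a new policy requiring daily status reports?",
)
print(prompt)
```

The quality of the simulation depends, as Goldberg notes next, on how the interview is conducted and what is asked, so any such sketch is only as good as the data behind the persona.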
Now, if you run the simulation incorrectly, you will reach incorrect conclusions with the false confidence that comes from using quote-unquote data, and you will destroy value even more thoroughly than you would have had you done it the 20th-century way.
This is all going to depend on whether you are a proficient user of these simulated human agents. And I’m excited about this possibility. But I also know that it’s going to have serious limitations, and what Michael and his collaborators are doing right now is trying to understand, map, and scope those limitations and possibilities.
But you could already imagine how this can become an astonishing managerial tool if used properly. And as I said earlier, also ethically. Which raises so many questions that I’m not even equipped to start grappling with.
Kevin Cool: Yeah and in a different episode we can talk about the extent to which the technology is sprinting ahead of, you know, some of these ethical questions.
So, circling back to what you were just describing for a moment, I understand you have an exercise with students where you ask them to classify what kinds of problems AI can solve. So it sounds like you are in the business of educating the next generation of leaders to not make the mistakes that you were talking about in terms of using AI poorly.
Amir Goldberg: The way I think about trying to anticipate and shape that AI future requires us to take a step back and ask ourselves first, what does this technology do? What does it enable? And that’s very different from asking ourselves, how is the technology implemented? The irony is that the people who work at OpenAI and Google understand how the technology works, but they don’t have a clear vision of what it does, what problems in the world it solves or doesn’t solve, because they are not cognitive scientists. They are not organizational leaders.
So what I ask my students to do in the classroom is to try and abstract from the technical details of the technology and try to understand the kind of problems in the world that the technology can solve. I like to make the following analogy: In order to drive a car, you don’t need to be able to build the internal combustion engine…
Kevin Cool: Right.
Amir Goldberg: …from scratch. And in fact, the car is not dependent on that. We now have electric cars that look exactly like gas-propelled cars, and they do exactly the same thing. They solve the exact same problem: they transport people from one point to another. So the technological implementation is separate from the problem being solved. And the person who knows how to design an electric or a petrol-based engine doesn’t necessarily understand the nature of transportation problems and how people are going to adopt the technology. The same analogy operates here. We need to understand how the technology operates to understand the limits of its affordances. But we need to start from the question of what problem in the world it solves.
Kevin Cool: So Amir, what’s the one thing you want people to understand with respect to data driven insights about culture?
Amir Goldberg: Maybe the most important thing to understand is that the machines are not going to give them solutions, and having data is not beneficial in and of itself. Data don’t tell you a story, you need to narrate the data. It tells you nothing about how you need to act, how you translate that world into a competitive advantage if that is what you’re trying to achieve.
So I think people who want to manage culture, who want to manage their business, who want to understand their strategy (and these things don’t apply just to managing culture in organizations) have no choice but to become proficient users of data.
At the end of the day, that is good news. Because it means that they’re not substitutable by the machine. But it’s bad news in the sense that they need to rethink their role. And it’s really, really difficult for incumbents.
The good leaders of this world, the ones who will survive this classic incumbent’s dilemma, are the ones who will be able to harness the experience and knowledge of those 20th-century-minded employees but train them into a data-oriented world, where they use AI as aides, as collaborative instruments in the production of knowledge.
Those who won’t do it will be obliterated by their competition, basically. And those who do it by simply thinking, “Oh, I’ll just replace people,” tempted by the cost savings, will probably introduce a lot of problems that we don’t fully understand.
Kevin Cool: That feels like a good place to end.
Amir Goldberg: It doesn’t sound like an optimistic ending.
Kevin Cool: If/Then is a podcast from Stanford Graduate School of Business. I’m your host, Kevin Cool. Our show is written and produced by Making Room and the Content and Design team at the GSB.
Our managing producers are Michael McDowell and Elizabeth Wyleczuk-Stern. Executive producers are Sorel Husbands Denholtz and Jim Colgan. Sound design and additional production support from Mumble Media and Aech Ashe.
And a special thanks to Ge Wang, an associate professor at Stanford University in the Center for Computer Research in Music and Acoustics. Find more at gewang.com. That’s G-E-W-A-N-G.com.
For more on our faculty and their research, find Stanford GSB online at gsb.stanford.edu or on social media @stanfordgsb.
If you enjoyed today’s conversation, consider sharing it with a friend or colleague. And remember to subscribe to If/Then wherever you get your podcasts or leave us a review. It really helps other listeners find the show.
We’d also love to hear from you. Is there a subject you’d like us to cover? Something that sparked your curiosity? Or a story or perspective that you’d like to share? Email us at ifthenpod@stanford.edu. That’s i f t h e n p o d at stanford dot edu.
Thanks for listening. We’ll be back with another episode soon.