“The reason people disagree is because they’re striving toward a different goal,” says mathematician and Nobel Prize winner Robert Aumann. In this episode of All Else Equal, Aumann sits down with finance professors Jonathan Berk and Jules van Binsbergen to discuss what to do when leaders disagree.
“If there’s a disagreement in the room at the end of a discussion, someone in the room is making a mistake,” says Berk. “And like I often tell my students, ‘it could be you.’”
All Else Equal: Making Better Decisions is a podcast produced by Stanford Graduate School of Business. It is hosted by Jonathan Berk, the A.P. Giannini Professor of Finance at Stanford GSB, and Jules van Binsbergen, the Nippon Life Professor in Finance at The Wharton School. Each episode provides insight into how to make better decisions.
Full Transcript
Jules van Binsbergen: I am Jules van Binsbergen, finance professor at the Wharton School at the University of Pennsylvania.
Jonathan Berk: And I’m Jonathan Berk, finance professor at the Graduate School of Business at Stanford University.
Jules van Binsbergen: Welcome back to our show. And please keep sending us your comments; we really appreciate them.
Jonathan Berk: It has been great doing the show, that’s all I can say, Jules.
Jules van Binsbergen: I completely agree with you, Jonathan. So today we have a very exciting episode ahead of us. We’re going to answer the question: Can people agree to disagree about important business decisions?
Jonathan Berk: And the surprising answer is they can’t; that is, if there’s disagreement at the end of a discussion, somebody in the room is making a mistake.
Jules van Binsbergen: So to understand that better, let’s take a very simple example: Can you agree to disagree about whether a number is a prime number?
Jonathan Berk: So just for those of us who don’t know this, a prime number is a number that only has two divisors, one and itself. For example, 3 is a prime number.
Jules van Binsbergen: Or 7 is a prime number.
Jonathan Berk: And the question is: Can we agree to disagree about whether a number is prime?
Jules van Binsbergen: And of course the answer is either a number is prime or it isn’t, and so there’s no room to disagree over this question.
Jonathan Berk: Right. So basically it’s very difficult to tell if a number is prime, especially if the number is large. In fact, Jules, that hardness is the basis of modern cryptography: RSA encryption, for example, rests on how hard it is to recover the two large primes whose product forms the key. And the most basic way to tell if a number is prime is what we call a brute force method: you have to check every possible divisor and see if it divides into the number.
Jules van Binsbergen: And so when we’re talking about a very large number, and we have two people that have an opinion on this, there are really only two options here. The first is that a person ran the brute force check of the divisors, in which case they can tell for sure whether the number is prime or not; the second is that they didn’t run the numbers, in which case they simply don’t know.
Jonathan Berk: Yeah, because there’s no simple pattern to the primes; that’s part of what makes that cryptography secure. You do not know whether the number is prime unless you’ve done the brute force check. There’s no shortcut, no pattern involved. So the choice is either you don’t know, or you’ve done the algorithm and therefore you do know.
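The brute-force check the hosts describe can be sketched in a few lines of Python. This is a minimal illustration, with one standard refinement they don’t mention: it is enough to trial-divide up to the square root of n, since any larger divisor would pair with a smaller one.

```python
# Brute-force (trial-division) primality check: try every candidate
# divisor up to sqrt(n); if none divides n, the number is prime.
import math

def is_prime(n: int) -> bool:
    """Return True if n is prime, by checking divisors one at a time."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False  # found a divisor other than 1 and n
    return True

print(is_prime(7))   # True
print(is_prime(15))  # False
```

For truly large numbers, practitioners use much faster probabilistic tests (such as Miller–Rabin) rather than this exhaustive search, but the logic of "either you ran the check or you don’t know" is the same.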
Jules van Binsbergen: And so when somebody says, “I think that the number is prime,” that statement really has no basis.
Jonathan Berk: Exactly, and they’re making a mistake unless they’ve either done the algorithm or they’ve announced “I don’t know.”
Jules van Binsbergen: So in summary, two fully rational people — that is, people that never make mistakes — cannot agree to disagree on this particular question.
Jonathan Berk: Right. And what about other questions? What about the shortest route between two places — can people agree to disagree about that?
Jules van Binsbergen: So, no, clearly not. I mean, if two people have the same information and they have access to the same maps, then either the route is the shortest route or it’s not the shortest route. We cannot agree to disagree on that issue. But what about the quickest route — can you disagree about what the quickest route is?
Jonathan Berk: Well, now somebody might have more information about traffic than somebody else. And so perhaps because of that we could disagree. What we’re going to show on this show, though, is that even in that situation, you can’t agree to disagree, and let’s explore that.
Jules van Binsbergen: And the reason why this question is so relevant is that we see too often in practice that people are disagreeing. And in fact I would say, Jonathan, I don’t know how you feel about it, but I think that we’re seeing more disagreement today than we’ve ever seen before. And so a lot of people say, “Why don’t we just agree to disagree.” But what we want to point out in this episode is — particularly in the context of business decision making — that’s a cop-out and it’s not a smart strategy.
Jonathan Berk: It’s not a cop-out, it’s a mistake. Somebody is making a mistake. Now let me just be very specific about this: What do we mean by a mistake? In a business decision, in a well-run business, everybody has the same objectives. Obviously, if you have different objectives you might have different decisions.
Jules van Binsbergen: Different motivations to have different outcomes.
Jonathan Berk: In a well-run business everybody has to have the same objective. So if everybody has the same objective — and I think the prime number example makes it clear — if you have the same information, you must come to the same decision. And so what we need to talk about in this situation is that it’s a very realistic assumption to say in a good business everybody has the same objective. I would say that’s the definition of a well-run business.
But it obviously is not a realistic assumption to say everybody in the organization has the same information. And it must seem obvious that if they have different information they could come to different decisions. And so let’s now think about the case when they have different information.
Jules van Binsbergen: And so the guest that we have on the show this time is Robert Aumann, who has shown something quite astonishing. He has shown that even in the case when you don’t have the same information, you cannot agree to disagree.
Jonathan Berk: It’s an astonishing result, and the Nobel Prize committee was so astonished by this result they gave the Nobel Prize to Robert Aumann partly for this insight. So let’s talk about how you get that insight.
Jules van Binsbergen: So let’s go back to the example of the prime numbers. Suppose that I come to you, Jonathan, and we have a discussion about whether a number is a prime number, and I know that you have a supercomputer and I don’t.
Jonathan Berk: And we know also that I have the same objectives as you and I’m not lying.
Jules van Binsbergen: Yes, all we’re trying to do is figure out whether the number is prime or not. Now clearly in this case we’re not going to have the same information because you ran the brute force approach where you checked all the divisors and I didn’t. So if at the end of that conversation we agree to disagree, clearly one of us is making a mistake. And I would say it is pretty obvious that if we agree to disagree in that case, I’m the one making the mistake because you have the supercomputer.
Jonathan Berk: Right, if I tell you the number is prime and I’m not lying and we have the same objectives, you have to say to yourself, “I don’t have a supercomputer. Jonathan does; therefore, Jonathan has run that number and he knows it’s prime.” Which means you know it’s prime.
Jules van Binsbergen: So in the end we will agree, and therefore we will come to the same conclusion based on whether or not you tell me whether the number is prime, yes or no.
Jonathan Berk: And I could also tell you I don’t know, in which case I’m telling you I haven’t run it, and that means you also don’t know. Or the third possibility is I say it’s not prime — then you know I’ve run the algorithm and found out that the number is not prime, it has another divisor, and therefore you also know it.
Jules van Binsbergen: So either way: either we both agree that it’s not a prime number, or we both agree that we don’t know, or we both agree that the number is a prime number. Those are the only options we have.
Jonathan Berk: So for prime numbers it’s clear, even if we have different information, we can’t agree to disagree. But this is a pretty simple example, so let’s think about a more complicated example where it’s still going to be the case that you can’t agree to disagree.
Jules van Binsbergen: All right, so let’s explore this further in the context of a fun detective story. So a crime has been committed and the DA would like to get it solved. He calls the heads of two investigative units and asks them to please report back with a culprit.
Jonathan Berk: Each head — who knows nothing about the case — appoints his best detective to the case. The detectives work completely independently and never communicate with each other.
Jules van Binsbergen: Now importantly, both detectives are fully rational — that is, they never make a mistake, they’re equally skilled, and they have received exactly the same training in detective work.
Jonathan Berk: Detective A reports to his boss that the crime was most likely committed by Jules.
Jules van Binsbergen: And Detective B reports to his boss that the crime was most likely committed by Jonathan. They report nothing else to their respective bosses.
Jonathan Berk: After about 30 days in which the bosses did not communicate at all and learned nothing about the case other than the culprit names reported by their respective detectives, they meet in the DA’s office and each reveals the only information he knows about the case: the identity of the perpetrator as reported to him.
Jules van Binsbergen: At this point what will happen? Will they change their minds? The answer is that if they are fully rational they cannot both stick to their original answers: they now have the same information, which necessarily implies they must come to the same conclusion.
Jonathan Berk: In other words, let’s just understand that the only information they have is the culprit’s name as reported by their detectives, which they’ve shared. So they do both have the same information.
Jules van Binsbergen: Obviously one possibility is for them to now report that they simply don’t know who committed the crime. Just as with the prime number example, it’s either prime, not prime, or you don’t know. In this case it could be that Jonathan committed the crime, Jules committed the crime, or we simply don’t know. I obviously think Jonathan did it.
Jonathan Berk: Well, in exasperation the DA asks both detectives to join them and explain the basis of their decision to the group.
Jules van Binsbergen: Because they do not have an infinite amount of time the detectives’ explanation cannot cover every single detail of information they know. Once the detectives are done they leave the room.
Jonathan Berk: At the end of this will the group disagree in their recommendations? Well, it’s the same as before: The group has the same information, so they cannot agree to disagree.
Jules van Binsbergen: Now why is that? Well, before the detectives entered everybody in the room agreed. Any information revealed by the detectives is common knowledge to everybody in the room, so the DA and the bosses must update the same way. They still share the same information so they must still be in full agreement.
Jonathan Berk: OK, now for Aumann’s insight. What about the two detectives? Can they agree to disagree, because they do not have the same information?
Jules van Binsbergen: Each detective knows all the information that was revealed in the discussion in the DA’s office as well as any private information that she did not reveal during the discussion. So if either detective does not agree with the group’s decision, it must be because of something of relevance in her private information.
Jonathan Berk: Now leaving aside the question of why the detective would not choose to reveal an important piece of information, both detectives know that they are both fully rational. Both detectives know that neither detective makes a mistake. So if one detective discovers the other detective has a different person who committed the crime, then that detective knows that the other detective has an important piece of information and because of that he must change his mind.
Jules van Binsbergen: Now given that this extra piece of information exists, each detective and the other members of the group will reason that if they had that information, their own minds would also change, since they know their minds work identically to the detectives’ minds, because they are fully rational too.
Jonathan Berk: So the result is they cannot agree to disagree. It’s just as if I have a supercomputer and I know the number is prime; you might not have a supercomputer but you know because I have a supercomputer that therefore it must be the case that the number is prime — you don’t even have to work it out.
Jules van Binsbergen: Now importantly, this whole argument does not depend on what either detective chose to reveal during the group discussion. Indeed, both detectives could have kept completely silent. The conclusion remains the same. The entire group, including both detectives, can simply not agree to disagree.
Jonathan Berk: And this is an astonishing result. Basically it says I do not have to reveal all my information. All you need to know is that I don’t make a mistake, and then we cannot agree to disagree. And the power of this insight means that if we’re all sitting in a room making a business decision, if we’re disagreeing, somebody in the room is making a mistake. And like I often tell my students, it could be you.
Jules van Binsbergen: Definitely. But now let’s go back to what is probably the key assumption underlying all of this, and that is that we both have the same objective. So let’s think that through a little bit. What if there is the possibility that you have another objective than me? I’m not even sure that you do, but I’m just suspicious that you have other objectives than me. How do I deal with that?
Jonathan Berk: Well, that is going to mean that the insight we have here breaks down, right? If I have a different objective from you, then we can come to different conclusions without either one of us making a mistake. But what I would say is you’re working for an organization that isn’t optimal, because in an organization everybody has to have the same objectives. So another way of saying it is: the mistake is that not everybody has the same objectives.
Jules van Binsbergen: And so the organization should really work on setting the incentives better so that everybody’s incentives are aligned to work toward the same objective.
Jonathan Berk: In fact, Jules, I would turn this around. I would say if in a business meeting we are disagreeing at the end of the meeting, then what we should all be thinking to ourselves is: What mistake are we making? Is the mistake that different people have different objectives? I want to be CEO and you want to be CEO, and therefore we have different objectives? Or is it that somebody is truly making a mistake — they’re weighing a piece of information too heavily, or they’re ignoring important information? It could be either. It’s like a red flag — something is wrong.
Jules van Binsbergen: Absolutely, and it’s important to then dedicate resources to figuring out what exactly is going wrong and why.
Jonathan Berk: So at this point I think it’s time to introduce our guest, Robert Aumann, who won the Nobel Prize in 2005 partly for this insight.
Jules van Binsbergen: Welcome, Bob, to the show.
Jonathan Berk: Bob, let me start with a question I think is on many of our listeners’ minds. Tell us how you had the insight. How did that work?
Robert Aumann: OK, very good question, very good, yeah, I was waiting for that question. It’s very interesting. I’ll tell you how it came about. There is in game theory — which is how I make a living, that’s my racket, like they say in America — there is this concept of a mixed strategy.
Now what is a mixed strategy? It means when you have to decide whether to do A or B or C or D in a game, in an interactive decision situation where you are pushing one thing and somebody else is pushing something else, then it may be important for you to not make a definite decision but to mix your decisions.
Because if you make a definite decision — let’s take the game of rock, scissors, paper. Everybody knows how that is played: rock beats scissors, scissors beats paper, and paper beats rock. So what should you do? The two players each have to show rock, paper, or scissors, and they have to do it simultaneously. Well, there’s no single right answer, because if I always put scissors then you’ll put rock. So what you have to do is play it random — mix the three, rock, scissors, paper, each one with one-third probability, and that way the other side will not be able to jump on you. Sometimes you win, sometimes not.
All right, so that matter of mixing your strategies becomes central in game theory. And you mix your strategies; let’s say in rock, scissors, paper you throw a die, and if it comes out 1 or 2 you play rock, if it comes out 3 or 4 you play scissors, and if it comes out 5 or 6 you play paper.
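The die-based mixing Aumann describes can be simulated directly. This is an illustrative sketch only — the move labels and the seeded random generator are our choices, not from the episode:

```python
# Simulate Aumann's die-based mixed strategy: roll a fair die, map
# 1-2 to rock, 3-4 to scissors, 5-6 to paper, so each move is played
# with probability 1/3 and an opponent cannot exploit any pattern.
import random

MOVES = {1: "rock", 2: "rock", 3: "scissors",
         4: "scissors", 5: "paper", 6: "paper"}

def mixed_strategy_move(rng: random.Random) -> str:
    roll = rng.randint(1, 6)  # fair six-sided die
    return MOVES[roll]

rng = random.Random(0)  # seeded for reproducibility
counts = {"rock": 0, "scissors": 0, "paper": 0}
for _ in range(30_000):
    counts[mixed_strategy_move(rng)] += 1

print(counts)  # each move appears roughly 10,000 times
```

The point of the construction is exactly what Aumann says next: the randomization here is driven by objective probabilities (a physical die), which is what his later work on subjective mixing questioned.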
And now those are objective probabilities, and that works fine. But then it occurred to me — and this is many years ago, it’s like 50 years ago — it occurred to me that maybe you should do your mixing not on the basis of objective probabilities like a die coming out, but whether you think that the next president will be Republican or Democratic, and we can have different opinions about that and maybe both of us can come out ahead.
And I worked out some examples where, if we base our decisions on something which is not objective but subjective — where people could hold legitimately different opinions, different probabilities — then maybe we both could come out ahead. And I developed that and I published a paper about it. In fact, I published it in the first issue of the Journal of Mathematical Economics, which was, I think, in 1974, something like that.
And then I started asking myself, hey, Bob, is that possible? Is it possible for people to hold different opinions as to the probability of the Republicans winning the next election — you and I hold different opinions and we know about it — is that possible? And I started thinking about this, and I started thinking and thinking and thinking.
And one afternoon, this was in Stanford University, one afternoon I went to discuss this with Kenneth Arrow, who was perhaps the greatest economist of the second half of the 20th century. Let’s be careful, OK, undoubtedly the greatest economist of the second half of the 20th century. And he was sitting there in the office of Frank Hahn, who was also a great economist, both are no longer with us, and they were discussing and I started discussing this matter with them.
And then suddenly I had a flash of intuition that said, Bob, this is impossible, you can’t have common knowledge of different probabilities — and I said that to them. And at that time the concept of common knowledge was not known in the economic world either. In other words, this business of I know that you know and you know that I know, and so on and so forth, was not known to the economic world.
And I went back to my office and I thought about the thing for two or three days, and I came to the conclusion that you can’t have common knowledge of different probabilities. So the paper I wrote in the first issue of the Journal of Mathematical Economics — at least that part of it; there was another part — became empty, because you can’t have that. But it grew out of the idea of subjective mixing of strategies.
Jules van Binsbergen: Bob, you know what I really like about this story is that the story is about you changing your mind.
Robert Aumann: Oh, absolutely.
Jules van Binsbergen: Which is, of course, exactly what it’s about.
Robert Aumann: That’s it, and right now I’m working on another paper which takes off from a paper I wrote not 50 years ago, like this one, but 25 years ago. In 1995 I published this paper, and I said, hey, no, it’s all wrong. Not it’s all wrong, but there’s a better way of doing this. What I published is not wrong, but there’s a better way of doing it right. So that is exactly right, and that is how that came about.
And I remember sitting in the office of Frank Hahn — Arrow liked to go about to other people’s offices, he was a great man, he liked to go about to other people’s offices and talk to them. And I went back to my office and I had this insight. And then I thought, hey, this is pretty simple, Bob; I said to myself, it’s not really worth publishing.
And I went back to Arrow and I said, “Kenneth, it’s this way,” and he said, “Oh, my God, this is terrific.” I said, “No, no, should I publish this?” He said, “Of course you should publish it.” And it’s a very short paper. It’s — what is it, one, two, three pages long, three and a half pages long, something like that — a very, very short paper. And I have two papers like that which are very widely cited, and this is one of them. So that’s how that came about, that’s the story of the genesis.
Jules van Binsbergen: Thanks a lot, Bob. What a great story. Related to that, I have a question for you. Why do you think people disagree so much, particularly since in the modern world we all have access to vast amounts of data? Based on your insights, wouldn’t we have predicted that as access to data went up, disagreement between people would go down? Yet it seems we’ve seen the opposite. Do you have an idea of why that is?
Robert Aumann: I think so, yes. I think people disagree because they are striving toward different goals. Many, many times people are striving toward different goals, and that is the source of disagreement. People see facts differently; they look at the world differently. And the phenomenon you have brought up — that the more data there is, the more disagreement there is — yes, that could be because people find in all this data the items that accord with their view, and they naturally latch on to those items. And the more data there is in general, the more items there are that accord with your view and the more items that accord with my view, so it may make for even more disagreement.
Jules van Binsbergen: All right, I have one last question for you, a very short question that comes back to what you said earlier, and I’m just curious how you think about it. You said that when we’re born there’s nothing yet to base a view of the world on — no impressions yet. Then everybody goes through different experiences, and that’s really what determines your probability distribution at that point in time.
But right now there is a discussion going on in which people claim that certain information cannot be exchanged — what some people call lived experience. Do you believe that such information exists, information that can never be exchanged between people? Or do you think that people should just try harder?
Robert Aumann: Well, you know, let me point this out. We’re not talking about exchanging information, and it is essential to realize that. We’re only talking about exchanging my probability estimate for a certain event happening, and your probability of it.
So I say to you, my probability is — I think that this will happen, let’s say, Biden will be reelected — don’t take this personally, I just want to make it realistic — Biden will be reelected with probability one-quarter. I give it one-quarter. And you say, Bob, that’s crazy, I mean, he’s doing a great job; I give it 90 percent. So I say, well, you don’t know the facts, I hold on to one-quarter. And you say, well, maybe so, OK, I’ll bring it down to 80 percent.
We’re just exchanging probabilities, and in the end it cannot be common knowledge between us that we hold different probabilities. You’re always updating, because you think that when I say one-quarter, I must know something on which that is based. And when I look at your 90 percent, I say, well, that must be based on something, so I readjust.
But it never becomes explicit what that something is. So we’re not exchanging information, we’re only exchanging the probabilities. So even if it is possible that there is certain information that cannot be exchanged, and you are basing your probability estimate on it, it doesn’t matter that you can’t give me that information. Because I give your probability full faith and confidence — you’re not making some kind of mistake — I say, OK, your estimate is based on information which you can’t give me, even if you wanted to. And maybe I can’t give you my information, but still we have to reach the same conclusion.
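The exchange-of-probabilities process Aumann describes can be made concrete with a small simulation in the style of the Geanakoplos–Polemarchakis "we can't disagree forever" protocol. The state space, partitions, and event below are invented for illustration: two agents with a common uniform prior but different private information take turns announcing their posterior for an event; listeners discard states inconsistent with each announcement, and the announcements repeat until they coincide — without the underlying private information ever being stated.

```python
# Two agents repeatedly announce posteriors for EVENT until they agree.
from fractions import Fraction

OMEGA = set(range(1, 10))                 # states, uniform common prior
EVENT = {3, 4}                            # the event both agents estimate
PART_A = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]   # agent A's private information
PART_B = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]   # agent B's private information
TRUE_STATE = 1

def cell(partition, state):
    """The partition cell containing a given state."""
    return next(c for c in partition if state in c)

def posterior(info):
    """P(EVENT | info) under the uniform prior."""
    return Fraction(len(EVENT & info), len(info))

public = set(OMEGA)     # states still consistent with all announcements
history = []
parts = [PART_A, PART_B]
for round_ in range(10):
    part = parts[round_ % 2]
    info = cell(part, TRUE_STATE) & public
    q = posterior(info)
    history.append(q)
    # listeners keep only states that would have produced this announcement
    public = {w for w in public if posterior(cell(part, w) & public) == q}
    if len(history) >= 2 and history[-1] == history[-2]:
        break               # posteriors have converged: no more disagreement

print(history)  # [Fraction(1, 3), Fraction(1, 2), Fraction(1, 3), Fraction(1, 3)]
```

In this run the agents start out disagreeing (1/3 versus 1/2), yet after a few announcements both settle on 1/3 — just as in Aumann's Biden example, each agent treats the other's number as evidence, even though the information behind it is never made explicit.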
Jonathan Berk: It’s just like the two detectives. All they needed to do was report the culprits. They did not have to share their private information. OK, Bob, I feel bad for how long we’ve kept you.
Jules van Binsbergen: Thank you so much.
Jonathan Berk: This has been really interesting.
Jules van Binsbergen: It was great to meet you.
Robert Aumann: Thank you very much, thank you.
Jonathan Berk: Jules, that was so interesting. It was so interesting how he had that idea.
Jules van Binsbergen: Yeah, and you know what I really liked most about the interview is the deeper lesson it had. Can you imagine what he said? He was willing to admit that he had made a mistake. And I don’t know whether you noticed it too, but he was almost giddy about the fact that he had done something wrong, that he had made a mistake — and the payoff of admitting that mistake and fixing it was winning a Nobel Prize. Now how much better does it get than that?
Jonathan Berk: I think that’s an amazing life lesson. You know, you can use what we’re talking about today as a diagnostic tool. Think of all the disagreements you’ve had when you’ve been in meetings. In every one of those cases somebody was making a mistake.
Jules van Binsbergen: Yeah, just think about all the hiring decisions that people make. I mean, how often do people leave the room still disagreeing? And particularly in our profession, I think I’ve noticed this many times.
Jonathan Berk: You know, one of the things to remember is when people disagree you know that somebody is making a mistake. That’s not the same thing as saying that somebody else is correct. It could be the case that everybody in the room is making a mistake.
Jules van Binsbergen: And that’s what we saw in the example of the detectives, right? We had the two detectives, but it could be that both people were innocent, right? It could be that both detectives were wrong. The only thing we knew was that at least one person was making a mistake.
Now, Jonathan, another thing that came up several times and I want to have your opinion on, is it seems to me — I don’t know whether you agree — but it seems to me that disagreement has increased in the world. And I would have predicted that given all the data that we have, if you and I have a disagreement all we have to do is go to the web, look up what the fact is, and then we’re done with it. And for some reason it hasn’t worked out that way. Now why do you think that is?
Jonathan Berk: You know I agree with Aumann. I think it’s that people have different goals and they suffer from what psychologists call the confirmation bias; that is, people naturally look for confirming evidence and ignore contradictory evidence.
Jules van Binsbergen: Yeah, and I like those arguments, but the thing I’m worried about most is what you can attribute to the other person. Think about the political situation we have in many places in the world today. There are two ways in which we can try to disqualify the other person and thereby avoid having to learn from what they have to say and update our own beliefs. The first one is just to say it’s the other person making the mistake — it’s not me, so I don’t have to reconsider my position.
The other one, which I think is even nastier in some sense, is that you just attribute bad intentions to the other person — meaning our goals are not the same. And it’s not even just that the goals differ: I simply say the other person is a bad person and therefore I don’t have to update. And then people are off the hook, because we both know that learning — or what we call in statistics Bayesian updating — is very difficult. It’s a painful process. But now people have these two excuses not to do it. That worries me.
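The Bayesian updating Jules mentions is just Bayes’ rule applied to a belief. Here is a minimal sketch; the numbers are made up purely for illustration:

```python
# Bayes' rule: revise a prior belief after seeing one piece of evidence.
from fractions import Fraction

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability that a hypothesis is true, given evidence
    observed with the stated likelihood under each hypothesis."""
    p_evidence = (prior * likelihood_if_true
                  + (1 - prior) * likelihood_if_false)
    return prior * likelihood_if_true / p_evidence

# Hypothetical numbers: I start at 50-50 on a claim, then see evidence
# that is three times as likely if the claim is true as if it is false.
posterior = bayes_update(Fraction(1, 2), Fraction(3, 4), Fraction(1, 4))
print(posterior)  # 3/4
```

Disqualifying the other person, in these terms, amounts to refusing to treat their stated view as evidence at all — setting both likelihoods equal so that the prior never moves.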
Jonathan Berk: You know I agree, Jules. You know the basic insight here is if there’s disagreement in the room it means somebody is making a mistake. And you can just let yourself off the hook by saying, “I’m not the person making the mistake, it’s the other person,” and you’re off the hook. I think that’s a big mistake people make in business. They let themselves off the hook too easily.
You know if you think about it, let’s think about it in a very important decision that we’ve been dealing with in the last two years, which is how to respond to COVID. I mean, basically every county in America had a different approach. They can’t all be right.
Jules van Binsbergen: Yeah, somebody must have been making a mistake, or all of them.
Jonathan Berk: Exactly.
Jules van Binsbergen: Well, Jonathan, I think that was a great episode. I’m really looking forward to the next one too, let me tell you. I mean, in the next episode we’re going to be speaking with Larry Summers on what I think is a very important question, which is: Should we tax corporations? What is the rationale for the corporate income tax? And I think it’s going to be a lively discussion.
Jonathan Berk: Thanks for listening to the All Else Equal podcast. Please leave us a review on Apple Podcasts; we love to hear from our listeners. And be sure to catch our next episode by subscribing or following our show wherever you listen to your podcasts. For more information and episodes visit AllElseEqualpodcast.com or follow us on LinkedIn. The All Else Equal podcast is a joint venture of Stanford University’s Graduate School of Business and the Wharton School at the University of Pennsylvania and is produced by Jenny Luna and Podium Podcast Co.