
What We're Missing When We Study Success

A researcher argues that a gap in research on failure skews how we understand success.

January 01, 2004

by Marina Krakovsky

 

If you want to learn the secrets of success, it seems perfectly reasonable to study successful people and organizations. But the research of Jerker Denrell, an associate professor of organizational behavior, suggests that studying successes without also looking at failures tends to create a misleading — if not entirely wrong — picture of what it takes to succeed.

To illustrate this point, Denrell relates a particularly absurd example. All successful executives have at least one thing in common, he tells his students: They all brush their teeth. “Obviously, that’s stupid,” says Denrell, and nobody would call toothbrushing a determinant of success. Yet when we seek the common denominator of successful organizations, we tend to reach similarly useless conclusions — unless we compare successes with failures.

Case in point: the well-known advice to focus on a single core business. Authors of popular business books identify many successful companies that focused on one key product and argue that this focus caused their success. But look at Kodak and Xerox, says Denrell: some companies focused on a single product have performed very poorly over time. By focusing excessively on successes, we overlook the important fact that failing companies do many of the same things as companies that succeed.

This idea hit Denrell during a seminar on serial entrepreneurs — those who had started several companies. “The presenter argued that these entrepreneurs were unusually persistent, did not give up when facing initial failures, and were able to generate support for their projects even when most people were initially skeptical,” recalls Denrell. “Hearing this, I thought that these were characteristics that were also necessary in order to fail spectacularly.” After all, if a project is not a good idea, yet you stick with it and persuade others to contribute resources and money, the costs of the failure will be quite large.

Denrell believes such risky behavior is a prime example of the danger of making inferences solely — or disproportionately — from successful people and organizations. It’s very likely, he says, that firms pursuing risky strategies achieve either very high or very low performance, while firms pursuing conservative strategies consistently achieve average performance. But while risk-taking can lead to either spectacular success or disastrous failure, looking only at the successes will show a positive correlation between success and risk-taking.
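
The mechanism is easy to reproduce in a few lines of simulation. The sketch below is not from Denrell's research; the uniform risk levels, the performance model, and the top-decile cutoff are illustrative assumptions. It creates firms whose expected performance is identical at every risk level, then "studies" only the best performers, among whom risk-taking suddenly looks like it pays.

import numpy as np

rng = np.random.default_rng(seed=0)
n_firms = 100_000

# Each firm chooses a risk level between 0 (conservative) and 1 (risky).
risk = rng.uniform(0.0, 1.0, n_firms)

# Performance has the SAME expected value for every firm; only the spread
# grows with risk, so risky firms land at the extremes while conservative
# firms cluster around the average.
performance = rng.normal(loc=0.0, scale=0.2 + 2.0 * risk)

# In the full population, risk and performance are uncorrelated by construction.
print("corr(risk, performance), all firms:    %.3f"
      % np.corrcoef(risk, performance)[0, 1])

# "Undersampling of failure": suppose we only ever study the top 10% of firms.
winners = performance >= np.quantile(performance, 0.9)
print("corr(risk, performance), winners only: %.3f"
      % np.corrcoef(risk[winners], performance[winners])[0, 1])

# Risk-takers dominate both tails, not just the top.
losers = performance <= np.quantile(performance, 0.1)
print("mean risk: all %.2f | winners %.2f | losers %.2f"
      % (risk.mean(), risk[winners].mean(), risk[losers].mean()))

Run in full, the first correlation comes out near zero while the winners-only correlation is strongly positive, and the bottom decile is just as dominated by risk-takers as the top. That is exactly the pattern that vanishes once the failures drop out of the sample.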

At least half the problem, says Denrell, is that data on failures tends to disappear. After all, companies that pursue unsuccessful strategies either go out of business or change their approach. Either way, information about the unsuccessful strategies becomes scarce, especially in comparison to the wealth of data from successful organizations. The same systematic “undersampling of failure” occurs among individuals, since organizations tend to promote high-performing managers while ousting low-performing ones. Aspiring managers trying to emulate those at the top might take the same kinds of ill-advised risks as those who never got promoted.

But if a shortage of data on unsuccessful people and organizations were the only problem, we could correct for it through statistical techniques. We don’t do that, says Denrell, because of a more insidious problem: psychological biases. For example, even when we know as much about an organization that failed as we do about another that succeeded, we delude ourselves by using different language to describe essentially the same behavior. “We tend to argue that a decision with a good outcome is an indication of visionary management, while a decision with a bad outcome indicates reckless behavior,” says Denrell, adding that people want explanations to be deterministic. “But the performance of any given firm is influenced by many random events beyond the control of managers.”

Denrell concedes that accepting his ideas is not enough to help us understand the true determinants of success. Among the many problems that plague this search is noisy data — too many variables to determine clear causes. And, of course, there’s the randomness factor. But understanding the effect of undersampling of failure has value, nonetheless, if for no other reason than to tell us what doesn’t work.

These ideas are not always an easy sell. The practice of studying successes is so deeply ingrained and appeals so much to our intuition that some people at first don’t see Denrell’s point. He says that when he asked some “best practices” experts whether they’ve looked at both successes and failures, they told him, “Well, you know, we haven’t looked at failures, but that’s because we’re interested in success!”

At least one group of people probably won’t make that error in reasoning: Denrell’s students. When he teaches the MBA elective Organizational Learning, his ideas on undersampling of failure are a major theme. “It seems very obvious that you can infer the properties of success by looking at the attributes of successful firms, but that’s not true.”

