Monday, November 1, 2004

A True Expert Knows What Question to Ask

STANFORD GRADUATE SCHOOL OF BUSINESS—We're bombarded with opinions from so-called experts on everything from the weather to economics and politics. But how do we, as non-experts, distinguish genuine experts from those who don't really possess the relevant knowledge? This is a question that has long bothered statisticians as well as the general public. Yet until recently the literature on the subject had failed to come up with a viable test for identifying a true expert.

A viable test for differentiating experts from non-experts must satisfy two requirements: it must yield a "passing" grade whenever someone is a genuine expert, and a "failing" grade whenever someone is not.

"The situation is actually quite a common one," said Yossi Feinberg, associate professor of economics, who studies reasoning and uncertainty. He points to weather forecasting as an example of the inherent challenge of determining, in an uncertain environment, whether someone is an expert. One test actually used by weather forecasters to gauge whether they are doing a good job is called a calibration test. It involves taking all the days on which they predicted, say, a 40 percent chance of rain the following day, and checking whether it in fact rained on close to 40 percent of those days. The problem is that this test provides no clear indication of whether a particular weather forecaster is good or not. "I, as a clueless professor with no idea about the weather, can sit down in the dark room, close the windows, make predictions about tomorrow, see what happens tomorrow, and adjust my prediction in a way that I will most likely pass the test," said Feinberg. "It's not magic, not knowledge, but merely a sophisticated probabilistic algorithm that I can use."
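The calibration check Feinberg describes can be made concrete. The sketch below is an illustrative implementation, not code from the paper: it groups days by the probability that was forecast and compares each group's forecast with the empirical frequency of rain. The function names and the tolerance parameter are assumptions made for the example.

```python
from collections import defaultdict

def calibration_error(forecasts, outcomes):
    """For each forecast level p (e.g. 0.4 for "40 percent chance of rain"),
    return how far the empirical rain frequency on those days was from p.

    forecasts: list of probabilities, one per day
    outcomes:  list of 0/1 values (1 = it rained that day)
    """
    groups = defaultdict(list)
    for p, rained in zip(forecasts, outcomes):
        groups[p].append(rained)
    # Well calibrated means: among days with forecast p, it rained on
    # roughly a fraction p of them.
    return {p: abs(sum(days) / len(days) - p) for p, days in groups.items()}

def passes_calibration_test(forecasts, outcomes, tol=0.1):
    """Pass if every forecast level is within `tol` of its empirical frequency."""
    return all(err <= tol for err in calibration_error(forecasts, outcomes).values())
```

For example, a forecaster who said "40 percent" on ten days, four of which turned out rainy, passes, while one who said "90 percent" on the same days fails. As the article notes, passing this test does not certify expertise: adaptive schemes that merely track past frequencies, with no knowledge of the weather, can also pass it.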

The solution, according to a new paper by Feinberg and a colleague, is deceptively simple: Ask the expert what question to ask.

"The challenge for the expert is twofold," said Feinberg. "The expert needs to pick a question that is so difficult that no one else could answer it if they were not an expert, but also a question that is not too difficult to actually answer."

So what's to prevent a non-expert from posing a question that he or she can easily answer?

"I ask you, as the expert, to give me a prediction that will be so surprising and so unusual that it would be unlikely to happen unless you knew something," said Feinberg. "Our contribution to the literature is that you can always find such a prediction. If you are an expert in an uncertain environment, you can always come up with a prediction that seems unlikely to the non-expert but is actually very probable."

For example, consider economic predictions. Feinberg said that if a supposed expert predicts that market volatility is almost perfectly correlated with the previous day's doughnut consumption in Manhattan, that prediction is so surprising that, if it comes true, there is no way the analyst could simply have guessed it. "That will definitely prove he's an expert," he said.

In the late 1990s, a number of papers began studying how to test purported experts, but all produced negative results. "All they came up with was a collection of tests that seemed reasonable but didn't provide the goods: You could manipulate them one after another," said Feinberg. "Our paper is dramatic in respect to that literature, because it says there is always a test that cannot be manipulated."

The theory resonates with all kinds of real-world problems, including weather prediction, economic forecasting, political forecasting, and any other repeated random phenomenon about which people claim to possess expertise.

Even when an expert cannot offer an exact prediction—for example, when the best they can give is a probability that a certain event will transpire—they should still be able to predict some property of the outcome. "No matter what you know about the distribution, you can always provide a surprising prediction," said Feinberg.

Related Information

A True Expert Knows What Question Should Be Asked
Eddie Dekel, Yossi Feinberg
Stanford Research Paper Series, June 2004

Calibrated Forecasting and Merging
E. Kalai, E. Lehrer, and R. Smorodinsky
Games and Economic Behavior (1999): 151-159

The Reproducible Properties of Correct Forecasts
A. Sandroni
International Journal of Game Theory (2003): 151-159

Any Inspection Rule Is Manipulable
E. Lehrer
Econometrica (2001): 1333-1347