
Neil Malhotra: Why No News Is Still Important News in Research

Researchers often don’t publish null findings. A Stanford scholar explains why that’s a bad thing.

October 27, 2014

by Susan Greenberg

 

A demonstration of a scientific study on age, movement, and disease at Imperial College in London | Reuters/Olivia Harris

Do the results of a scientific study determine whether that particular study is published or not? In the biomedical sciences, the answer is a resounding “yes”; research shows that the more clear-cut or significant a study’s findings, the more likely it is to see print. Null or statistically insignificant results of an experiment tend to be disregarded or discarded—not necessarily by scientific journals or conference organizers, but by the very researchers themselves, who fear that their results are uninteresting or unworthy of further pursuit.

 


Now Stanford GSB professor Neil Malhotra has shown that the same kind of “publication bias” holds true for the harder-to-measure social sciences as well. Working with Stanford political scientists Annie Franco and Gabor Simonovits, Malhotra found that some 65 percent of studies with null findings were never even written up.

While some might say no news is good news, in research this information gap poses a problem. Researchers could waste valuable time on a study they don’t realize has already been done, for example, or they could publish statistically significant findings on a question where earlier, unpublished studies found null results, thereby skewing the overall body of evidence.

Mining the Data

Malhotra, Franco, and Simonovits analyzed a single cohort of 249 studies conducted between 2002 and 2012 under a program called TESS, for Time-sharing Experiments for the Social Sciences. Sponsored by the National Science Foundation, TESS awards competitive grants to researchers who submit proposals for survey-based experiments in such fields as political science, economics, communications, public health, and psychology. Using TESS offered several advantages: The proposals underwent rigorous peer review, the experiments were all actually conducted, and all “exceed a substantial quality threshold,” write the authors in their paper, published last month in the journal Science.

Malhotra and his colleagues took a straightforward approach: They set out to compare the statistical results of the TESS studies that got published to those that didn’t.

First they searched online to determine whether, when, and where the TESS studies had been published. For the more than 100 that they found no record of, they emailed the authors to ask what had happened to their studies. They also asked those who had provided no paper to summarize their results. Of the original 249 TESS studies in the cohort, they wound up using 221, or 89 percent. (Seven were disqualified for appearing in book chapters—which are typically held to much lower standards of peer review—and 21 for incomplete results.)

Rather than determine themselves whether each experiment’s findings were statistically significant, Malhotra’s team asked the authors to assess their own outcomes, since it is the researchers’ own perception of their results that drives the decision to write up and submit a paper.

Malhotra and his colleagues then classified the findings into three categories:

  • Strong, meaning all or most hypotheses were supported;
  • Null, meaning none or very few held up;
  • Mixed, for those somewhere in the middle.

By this classification, 41 percent of the TESS studies reported strong findings, 37 percent mixed, and 22 percent null.

 

In all, about half the studies in the TESS sample were eventually published. But Malhotra and his colleagues discovered a strong correlation between a study’s results and its likelihood of publication: More than 60 percent of those with strong results and 50 percent of those with mixed results were published, compared with just 20 percent of those with null results.

Even more striking, nearly 65 percent of the studies with null findings were not only not published, but never even written up. (By contrast, 12 percent of the mixed-result studies and only 4 percent of the strong were not written up.)

This is problematic for several reasons, says Malhotra. First, other researchers may unknowingly waste precious time and resources repeating experiments that have already been conducted and shown negligible results. Second, if other researchers conduct similar studies that produce statistically significant outcomes, contradicting the unwritten studies, the body of published findings will be skewed, breeding potential misinformation. Furthermore, if researchers believe they need to report significant results to get published, they may feel pressure to “fish” for results.

All in all, Malhotra’s analysis of the TESS studies suggests that when the media report on a surprising finding from the social sciences, we should also ask: How many studies that did not find this result were conducted and not published?

Holding Research Back

Why do some researchers choose not to write up their null findings? According to the authors, some anticipated the rejection of such papers while others simply lost interest in their “unsuccessful” projects.

In 26 detailed email responses from researchers who didn’t write up their null findings, 15 said they abandoned their projects, even though they found them personally interesting, because they assumed no one else would see them that way. Nine shifted their focus to other experiments, and two eventually published papers supporting their initial hypotheses using different evidence.

Malhotra says that the best way to combat “publication bias” in the social sciences is to offer incentives for researchers “not to bury statistically insignificant results in their desk drawers.” That may mean providing more venues for publishing such papers, creating greater access to databases that track them, or sanctioning researchers who fail to register null findings.

After all, knowing what doesn’t work is often key to figuring out what does.

“Publication Bias in the Social Sciences: Unlocking the File Drawer,” by Annie Franco, Neil Malhotra, and Gabor Simonovits, was published in the 19 September 2014 issue of Science, Vol. 345, Issue 6203.

