
Everything That Can Go Wrong in a Field Experiment (and What to Do About It)

Four Stanford scholars share the most important lessons they’ve learned in the field.

January 16, 2015

| by Melissa Leavitt

 

[Image: A researcher standing in a field. Field experiments can lead to major breakthroughs, but they can also bring their fair share of obstacles. | Reuters/Baz Ratner]

Untried field staff. Technology failures. Government agencies with their own agendas. Field experiments in the developing world can lead to major breakthroughs, but they can also pose serious challenges. How can researchers prepare for these obstacles? And what can they do when unexpected circumstances threaten to derail a project?

Four Stanford researchers offered up the lessons they have learned during a panel discussion called “Everything That Can Go Wrong in a Field Experiment,” hosted by the Stanford Institute for Innovation in Developing Economies and the Freeman Spogli Institute for International Studies. The Global Development and Poverty Initiative seminar is intended to help build the interest and capacity of Stanford researchers around international development.

The biggest mistake in a field experiment, the panelists all agreed, is failing to recognize that something has gone wrong.

“Many things can go wrong,” said Pascaline Dupas, an associate professor of economics. “The most wrong thing is not to know it has gone wrong.”

Here, their advice for other scholars:

Katherine Casey, assistant professor of political economy

Problem: How do you effectively manage your field staff?

Solution: Implement the tough standards you need to get the job done — but make room for local practices in your management strategy.

[Image: A child standing in a doorway | Reuters/Finbarr O’Reilly]

The first time Katherine Casey, an assistant professor of political economy at the Graduate School of Business, conducted a survey in West Africa, she followed the local practice of letting the central statistics office hire the field team. But when she checked on that team in the field, she found an incompetent staff. For example, one worker, who could barely see, was interviewing respondents with no one to help him fill out the questionnaire. It was clear that a weak enumeration team had compromised the quality of the data.

For the next study, she oversaw the hiring process. She opened up recruitment to a large group of people and required applicants to pass an exam before entering training. She also purposely brought on more people than she needed and selected the final field team based on a post-training exam.

When she headed out into the field, she expected to see a highly functioning team. For the most part, she did. But she found one person who had failed her exam, and didn’t even speak the local language, working on one of the teams. She fired everyone in that group to make an example, and refused to reinstate them after they apologized.

On reflection, she wondered whether she had let her own agenda overshadow the cultural practices in the area. It would have been “culturally appropriate” to rehire the rest of the group after the apology, she said; what she did was “out of step with what the local norms were.” One downside to her decision, she said, was losing the opportunity to work with the other researchers in that group, the ones who had the bad luck to be assigned alongside the person who had failed the exam. “They were actually really fantastic researchers, and they have not worked with me again.”

Establishing high standards and a tough evaluation process was the right thing to do for her project. But she suggests that researchers temper that rigor with a little flexibility.

Pascaline Dupas, associate professor of economics

Problem: Nothing goes according to plan

Solution: Make sure you know what is going on in the field and fess up.

 

[Image: A group of children in a building | Reuters/Youssef Boudlal]

One caveat for field researchers: ideas that look good on paper can become quite messy when you try to implement them. For Pascaline Dupas, an associate professor of economics, that lesson came up most recently during a study in Morocco.

Dupas and her colleagues were working with the government to evaluate various ways of implementing a cash transfer program designed to improve school participation. In one of the schemes they had initially designed, households would receive money with no strings attached. In the other schemes, households would receive the cash conditional on their child regularly attending school; what varied across those schemes was the rigor with which school attendance would be monitored.

That was the proposal. But when it came to implementation, nothing went according to plan. Dupas and her colleagues quickly learned that two of the three monitoring procedures they tried to put in place could not be used. In-classroom fingerprint machines to track one group’s attendance didn’t work. Inspectors in charge of auditing the teachers responsible for tracking student attendance in another group had no means of transport to perform the audits. Even the unconditional treatment arm didn’t go according to plan: the government added a requirement that families enroll their children in school before they received the funds, in effect “labeling” the cash transfers as education support. That is clearly different from simply handing out unconditional cash, and it matters a lot for interpreting the program’s impacts.

In the end, Dupas said, none of the treatment arms functioned as intended. But the most important thing was that her research team knew what actually went on, and used that knowledge to acknowledge the study’s weaknesses and interpret the data correctly.

“You can end up with treatment arms that aren’t actually in any way what you wanted them to be,” she said. “And it’s critical to know it.”

Jenna Davis, associate professor of civil and environmental engineering, Higgins-Magid senior fellow at Stanford Woods Institute for the Environment

Problem: What if external events skew your study?

Solution: Find another question you can answer with your research outcomes — and next time, get your data set more quickly.

[Image: Children carrying water buckets | Reuters/Peter Andrews]

Jenna Davis, an associate professor of civil and environmental engineering, thought she would have two years to run a study in Mozambique. Instead, she had two months.

The study she co-led with then-PhD student Valentina Zuin evaluated the potential impact of changing a law governing water use. In the urban area where Davis was working, the poorest households often did not have their own water connections. To get the water they needed, they would go to a wealthier neighbor to purchase water — even though this was technically illegal. The Water Sector Authority wanted to know what would happen if they legalized this practice, so they enlisted Davis’ team to evaluate the effects.

They began the study planning to gather data and then analyze the results over a two-year period. But outside events intervened. About two months after the policy change was announced, spiking global food prices sparked riots in the area, and protesters’ grievances quickly came to encompass other issues, including water. In response, the government reduced the price of a water connection, making it easier for more people to get their own water connections. That move threw off Davis’ efforts to analyze informal buying and selling practices.

“It knocked the wind out of us,” she said. “We couldn’t return an answer to the question we had been asked.”

She and Zuin had gathered enough data to answer other questions about water distribution in Mozambique, salvaging the work. But she cautioned researchers to figure out the least amount of time necessary to conduct an experiment and to think about other questions that can be answered with data they have already obtained.

Stephen Luby, professor of medicine, senior fellow at Stanford Woods Institute for the Environment and Freeman Spogli Institute for International Studies

Problem: How do you delegate in the field?

Solution: Monitor closely, evaluate the work, and accept that changes may occur.

[Image: Hands washing with soap | Reuters/Lucas Jackson]

When Stephen Luby, a professor of medicine, conducted a hand-washing study in Bangladesh, his goal was to evaluate whether hand sanitizer or soap was more effective at reducing microbial contamination. Luby felt he needed an expert in behavioral change, so he hired a local researcher with a background in psychology.

The trial involved three groups — a control group, a group given soap, and a group given hand sanitizer. Researchers would then visit participants unannounced to measure the contamination on their hands.

 

[Image: Professor Stephen Luby inspects one of the latrines his team implemented as part of a randomized controlled trial in rural Bangladesh.]

When the data came in, Luby said, he saw that none of the interventions had had an impact.

A conversation with his local researcher revealed what had happened. Partway through the experiment, Luby explained, the investigator had decided on his own to take away all the soap and hand sanitizer to see who was motivated to buy the supplies themselves. That derailed the study (and to make matters worse, hand sanitizer wasn’t even available in local stores).

Luby restarted the experiment — without this researcher’s assistance.

His takeaway: Don’t hesitate to delegate responsibility — it gives team members the opportunity to be creative and propose new solutions to research challenges. But work closely with the people you hire and find ways to evaluate their work through objective benchmarks.
