Flexible and Efficient Contextual Bandits with Heterogeneous Treatment Effect Oracles
Contextual bandit algorithms often estimate reward models to inform decision-making. However, true rewards can contain action-independent redundancies that are not relevant for decision-making. We show it is more data-efficient to estimate any…
Battling the Coronavirus Infodemic Among Social Media Users in Africa
During a global pandemic, how can we best prompt social media users to demonstrate discernment in sharing information online? We ran a contextual adaptive experiment on Facebook Messenger with users in Kenya and Nigeria and tested 40 combinations…
Bias-Variance Tradeoffs for Designing Simultaneous Temporal Experiments
We study the analysis and design of simultaneous temporal experiments, where a set of interventions are applied concurrently in continuous time, and outcomes are measured on a sequence of events observed in time. As a motivating setting, suppose…
Emotion- Versus Reasoning-Based Drivers of Misinformation Sharing: A Field Experiment Using Text Message Courses in Kenya
Two leading hypotheses for why individuals unintentionally share misinformation are that 1) they are unable to recognize that a post contains misinformation, and 2) they make impulsive, emotional sharing decisions without thinking about whether a…
Smiles in Profiles: Improving Fairness and Efficiency Using Estimates of User Preferences in Online Marketplaces
Online platforms often face challenges being both fair (i.e., non-discriminatory) and efficient (i.e., maximizing revenue). Using computer vision algorithms and observational data from a microlending marketplace, we find that choices made by…
PayPal Giving Experiments
This report describes insights gleaned from the Data Fellows collaboration among PayPal, Northwestern University’s Kellogg School of Management, the Golub Capital Social Impact Lab at Stanford University’s Graduate School of Business, and…
Semiparametric Estimation of Treatment Effects in Randomized Experiments
We develop new semiparametric methods for estimating treatment effects. We focus on a setting where the outcome distributions may be thick tailed, where treatment effects are small, where sample sizes are large and where assignment is completely…
Shared Decision-Making: Can Improved Counseling Increase Willingness to Pay for Modern Contraceptives?
Long-acting reversible contraceptives are highly effective in preventing unintended pregnancies, but take-up remains low. This paper analyzes a randomized controlled trial of interventions addressing two barriers to long-acting reversible…
Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits
It has become increasingly common for data to be collected adaptively, for example using contextual bandits. Historical data of this type can be used to evaluate other treatment assignment policies to guide future innovation or experiments.…
Confidence Intervals for Policy Evaluation in Adaptive Experiments
Adaptive experiment designs can dramatically improve statistical efficiency in randomized trials, but they also complicate statistical inference. For example, it is now well known that the sample mean is biased in adaptive trials. Inferential…
Tractable Contextual Bandits Beyond Realizability
Tractable contextual bandit algorithms often rely on the realizability assumption — i.e., that the true expected reward model belongs to a known class, such as linear functions. In this work, we present a tractable bandit algorithm that is not…
Practitioner’s Guide: Designing Adaptive Experiments
Adaptive experiments present a unique opportunity to more rapidly learn which of many treatments work best, evaluate multiple hypotheses, and optimize for several objectives. For example, they can be used to pilot a large number of potential…
Adapting to Misspecification in Contextual Bandits with Offline Regression Oracles
Computationally efficient contextual bandits are often based on estimating a predictive model of rewards given contexts and arms using past data. However, when the reward model is not well-specified, the bandit algorithm may incur unexpected…
Optimal Policies to Battle the Coronavirus “Infodemic” Among Social Media Users in Sub-Saharan Africa: Pre-analysis Plan
Alongside the outbreak of the novel coronavirus, an “infodemic” of myths and hoax cures is spreading over online media outlets and social media platforms. Building on the literature on combating fake news, we evaluate experimental interventions…
Survey Bandits with Regret Guarantees
We consider a variant of the contextual bandit problem. In standard contextual bandits, when a user arrives we get the user’s complete feature vector and then assign a treatment (arm) to that user. In a number of applications (like health…
The Surrogate Index: Combining Short-Term Proxies to Estimate Long-Term Treatment Effects More Rapidly and Precisely
A common challenge in estimating the long-term impacts of treatments (e.g., job training programs) is that the outcomes of interest (e.g., lifetime earnings) are observed with a long delay. We address this problem by combining several short-term…
Estimation Considerations in Contextual Bandits
Contextual bandit algorithms are sensitive to the estimation method of the outcome model as well as the exploration method used, particularly in the presence of rich heterogeneity or complex outcome models, which can lead to difficult…
Approximate Residual Balancing: Debiased Inference of Average Treatment Effects in High Dimensions
There are many settings where researchers are interested in estimating average treatment effects and are willing to rely on the unconfoundedness assumption, which requires that the treatment assignment be as good as random conditional on…