Browse or search publications from faculty affiliated with the lab.
Contextual bandit algorithms often estimate reward models to inform decision-making. However, true rewards can contain action-independent…
In this paper, we study the design and analysis of experiments conducted on a set of units over multiple time periods where the starting time of…
Learning optimal policies from historical data enables the gains from personalization to be realized in a wide variety of applications. The…
Online platforms often face challenges in being both fair (i.e., non-discriminatory) and efficient (i.e., maximizing revenue). Using computer vision…
This report describes insights gleaned from the Data Fellows collaboration among PayPal, Northwestern University’s Kellogg School of Management,…
It has become increasingly common for data to be collected adaptively, for example using contextual bandits. Historical data of this type can be…
Adaptive experiment designs can dramatically improve statistical efficiency in randomized trials, but they also complicate statistical inference.…
Tractable contextual bandit algorithms often rely on the realizability assumption — i.e., that the true expected reward model belongs to a known…
Adaptive experiments present a unique opportunity to more rapidly learn which of many treatments work best, evaluate multiple hypotheses, and…
Computationally efficient contextual bandits are often based on estimating a predictive model of rewards given contexts and arms using past data.…
Alongside the outbreak of the novel coronavirus, an “infodemic” of myths and hoax cures is spreading over online media outlets and social media…
We consider a variant of the contextual bandit problem. In standard contextual bandits, when a user arrives we get the user’s complete…
A common challenge in estimating the long-term impacts of treatments (e.g., job training programs) is that the outcomes of interest (e.g.,…
We introduce a novel measure of segregation, experienced isolation, that captures individuals’ exposure to diverse others in the places…
Contextual bandit algorithms are sensitive to the estimation method of the outcome model as well as the exploration method used,…