Suraj Malladi

PhD Student, Economic Analysis & Policy
PhD Program Office, Graduate School of Business, Stanford University
655 Knight Way, Stanford, CA 94305

Research Statement

I study delegation, networks and social learning with an eye to robust policy design.

Research Interests

  • Microeconomic Theory
  • Networks

Job Market Paper

A policymaker relies on regulators or bureaucrats to screen agents on her behalf. How can she maintain some control over the design of the screening process? She solves a two-layer mechanism design problem: she restricts the set of allowable allocations, after which a screener picks a menu that maps an agent's costly evidence to this restricted set. In general, the policymaker can set a floor in a way that dominates full delegation no matter how the screener's objectives are misaligned. When this misalignment is only over the relative importance of reducing allocation errors versus agents' screening costs, the effectiveness of this restriction hinges sharply on the direction of the screener's bias. If the screener is more concerned with reducing errors, setting this floor is in fact robustly optimal for the policymaker. But if the screener is more concerned with keeping costs down, this particular floor has no effect; moreover, any restriction that strictly improves on full delegation is complex and sensitive to the details of the screener's preferences. I consider the implications for regulatory governance.

Working Papers

(R&R at the American Economic Review; with Mohammad Akbarpour and Amin Saberi) Identifying the optimal set of individuals to first receive information in a social network is a widely studied problem in settings such as the diffusion of information, microfinance programs, and new technologies. We show that, for some frequently studied diffusion processes, randomly seeding S + X individuals, for a small X, can prompt a larger cascade than optimally targeting the best S individuals. Given these findings, practitioners interested in communicating a message to a large number of people may wish to compare the cost of network-based targeting to that of slightly expanding initial outreach.
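As a rough illustration of this comparison (not the paper's model or code), the sketch below simulates a simple independent-cascade diffusion on a random graph and compares degree-based targeting of s seeds against random seeding of s + x seeds; the graph model, the transmission probability, and the use of degree as a stand-in for optimal targeting are all illustrative assumptions.

    # Illustrative simulation (assumed parameters throughout): independent-cascade
    # diffusion on an Erdos-Renyi graph, comparing degree-based targeting of s
    # seeds against random seeding of s + x seeds.
    import random
    from collections import defaultdict

    def random_graph(n, p, rng):
        """Erdos-Renyi random graph as an adjacency list."""
        adj = defaultdict(set)
        for i in range(n):
            adj[i]  # make sure every node appears, even if isolated
            for j in range(i + 1, n):
                if rng.random() < p:
                    adj[i].add(j)
                    adj[j].add(i)
        return adj

    def cascade_size(adj, seeds, q, rng):
        """One cascade: each newly activated node transmits on each edge with probability q."""
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for v in frontier:
                for u in adj[v]:
                    if u not in active and rng.random() < q:
                        active.add(u)
                        nxt.append(u)
            frontier = nxt
        return len(active)

    rng = random.Random(0)
    n, p, q, s, x, trials = 2000, 0.003, 0.3, 5, 3, 200
    adj = random_graph(n, p, rng)
    top_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:s]

    targeted = sum(cascade_size(adj, top_degree, q, rng) for _ in range(trials)) / trials
    randomly = sum(cascade_size(adj, rng.sample(range(n), s + x), q, rng)
                   for _ in range(trials)) / trials
    print(f"degree-targeted {s} seeds: average cascade of {targeted:.1f} nodes")
    print(f"random {s + x} seeds: average cascade of {randomly:.1f} nodes")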

(with Matthew O. Jackson and David McAdams) We examine how well agents learn when information reaches them through long chains of noisy person-to-person relay. In such settings, when agents have even slight uncertainty over message mutation rates, they remain stuck at their priors no matter how dense the network becomes. We then turn to the question of how governments and communication platform designers can limit the spread of misinformation when they are unable or unwilling to take a stance on which messages are true and which are false. One suggestion arising from our model is to limit the number of message forwards any agent can make. This creates a positive selection effect whereby most surviving message chains originate nearby (faraway messages are likely to die out due to transmission errors), leaving fewer steps of relay over which noise can accumulate by the time messages reach the receiver. Interestingly, WhatsApp adopted exactly this policy in response to criticism that it was being used as a vehicle of misinformation in Brazilian and Indian elections, and more recently to curb health-related disinformation.
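The noise-accumulation logic behind this policy can be seen in a back-of-envelope calculation (the mutation rate below is a hypothetical value, not an estimate from the paper): if each relay step independently corrupts a message with probability mu, the chance it arrives intact after k steps is (1 - mu)^k, so anything that shortens relay chains sharply raises fidelity.

    # Back-of-envelope illustration (mu is a hypothetical per-hop mutation rate,
    # not an estimate from the paper): the probability a message arrives intact
    # after k noisy relay steps is (1 - mu) ** k.
    mu = 0.1
    for k in (1, 3, 5, 10, 20):
        print(f"{k:2d} relay steps: P(message intact) = {(1 - mu) ** k:.3f}")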

Work in Progress

Inducing Skeptics to Try Innovations

Even after receiving subsidies, insurance, and training, smallholder farmers in developing countries may not adopt, or even partially experiment with, productivity-enhancing technologies. I consider the design of subsidies in settings where one of the remaining bottlenecks to adoption is ambiguity over the efficacy of new technologies. If (1) adoption is unobservable, (2) output can be hidden, and (3) agents can learn from their peers, optimal collusion-proof interventions take the form of a simple, fixed-prize output contest among peer farmers. This contract is optimal regardless of what the principal knows about the efficacy of the new technology. It is also robust to whether agents believe their joint outcomes from using the new technology are perfectly correlated, independent, or follow some asymmetric joint distribution with the same marginals. Comparing the performance of this contract against traditional subsidies gives a test of whether risk aversion, ambiguity aversion, or pessimism is the main bottleneck to technology adoption.
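One natural reading of such a fixed-prize output contest is sketched below with hypothetical numbers; the tie-splitting rule and the figures are assumptions for illustration, not details taken from the paper.

    # Illustrative payout rule for a fixed-prize output contest (tie-splitting and
    # all numbers are assumptions for demonstration, not details from the paper).
    def contest_payouts(outputs, prize):
        """Pay a fixed prize to whichever farmer has the highest output; ties split it."""
        top = max(outputs.values())
        winners = [f for f, y in outputs.items() if y == top]
        return {f: (prize / len(winners) if f in winners else 0.0) for f in outputs}

    # Example: farmers A and C tie for the highest output and split the prize.
    print(contest_payouts({"A": 12.0, "B": 9.5, "C": 12.0}, prize=100.0))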

To Seed or to Learn?

A policymaker knows the marginal cost of seeding an additional individual and the cost of acquiring network information, but she knows nothing about the network structure or the virality of the product she wishes to diffuse. Her objective is to avoid large errors: she wants to diffuse the product widely if that is possible under some strategy and does not want to wastefully expend resources otherwise. If she wishes to produce a diffusion within an epsilon fraction of the largest diffusion possible, her optimal static policy is to choose the cheaper of two options: pay to learn the network or seed 1/(epsilon * e) agents at random.
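Read as a cost comparison, the policy looks like the sketch below; the cost figures are hypothetical, the seed count comes from the statement above, and the comparison ignores any seeding cost incurred after learning the network.

    # Sketch of the static policy described above (costs are hypothetical; the
    # seed count 1/(epsilon * e) comes from the result stated in the text, and
    # any seeding cost after learning the network is ignored for simplicity).
    import math

    def static_policy(cost_learn_network, cost_per_seed, epsilon):
        n_random = math.ceil(1 / (epsilon * math.e))  # random seeds needed
        cost_random = n_random * cost_per_seed
        if cost_learn_network <= cost_random:
            return "learn the network", cost_learn_network
        return f"seed {n_random} agents at random", cost_random

    # Example: epsilon = 0.05 calls for ceil(1 / (0.05 * e)) = 8 random seeds.
    print(static_policy(cost_learn_network=100.0, cost_per_seed=10.0, epsilon=0.05))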

Fair Auctions with Asymmetrically Informed Bidders

(with Aranyak Mehta and Uri Nadav) Agents often arrive at auctions with different levels of information about their own value for the object being sold. In such asymmetric settings, it may be optimal to charge different reservation prices to discriminate between bidders. However, it is often infeasible to expressly treat different bidders in the same auction differently, particularly in online settings. We characterize optimal nondiscriminatory mechanisms in the presence of informational asymmetries and, in some cases, bound the price of nondiscrimination.