Research
-
Informing agents amidst biased narratives
Abstract
I study the strategic interaction between a benevolent sender (who provides data) and a biased narrator (who interprets data), who compete to persuade a boundedly rational receiver (who takes an action). The receiver does not know the data-generating model. She chooses between the models provided by the sender and the narrator using the maximum likelihood principle, selecting the one that best fits the data given her prior belief. The sender faces a trade-off between providing precise information and minimizing misinterpretation. Surprisingly, full disclosure can be suboptimal and even backfire. I identify a finite set of models that contains the optimal data-generating model, i.e., the one that maximizes the receiver’s expected utility. The sender can guarantee a non-negative value of information, preventing harm from misinterpretation. I apply this framework to information campaigns and employee feedback.
-
Calibrated Forecasting and Persuasion (with Vianney Perchet) Poster [New version]
(Extended abstract at EC’24)
Abstract
We study a dynamic game where an expert sends probabilistic forecasts to a decision-maker. The decision-maker verifies these forecasts using a calibration test based on past data. How should the expert send forecasts to maximize her payoff while passing the test? For a stationary ergodic process, we characterize the optimal forecasting strategy by reducing the dynamic game to a static persuasion problem. The distributions of forecasts that can arise under calibration are precisely the mean-preserving contractions of the distribution of conditionals. We compare the payoffs attainable by an informed and an uninformed expert, providing a benchmark for the value of information. Finally, we consider a regret-minimizing decision-maker and show that the expert can always guarantee at least the calibration benchmark and sometimes strictly more.
-
Efficiency in Games with Incomplete Information (with Itai Arieli, Yakov Babichenko and Rann Smorodinsky) Slides
Abstract
We study games with incomplete information and characterize when a feasible outcome is Pareto efficient. Outcomes with excessive randomization are inefficient: generically, the total number of action profiles used across states must be strictly less than the sum of the number of players and the number of states. We consider three applications. A cheap talk outcome is efficient only if it is pure; with state-independent sender payoffs, it is efficient if and only if the sender’s most preferred action is induced with certainty. In natural settings, Bayesian persuasion outcomes are inefficient for a broad range of priors. Finally, ranking-based allocation mechanisms are inefficient under mild conditions.
-
On the Inefficiency of Social Learning (with Florian Brandl and Wanying (Kate) Huang)
Abstract
We study whether a social planner can improve the efficiency of learning, measured by the expected total welfare loss, in a sequential decision-making environment. Agents arrive in order, and each chooses a binary action based on their private signal and the social information they observe. The planner can intervene by jointly designing the social information disclosed to agents and offering monetary transfers contingent on agents’ actions. We show that, despite such flexibility, efficient learning cannot be restored with a finite budget: whenever learning is inefficient without intervention, no combination of information disclosure and transfers can achieve efficient learning while keeping total expected transfers finite.
-
Dynamic Cheap Talk with no Feedback
Abstract
I study a dynamic sender-receiver game in which the sequence of states follows an irreducible Markov chain. The sender provides valuable information but gets no feedback on the receiver’s actions. Under certain assumptions, I characterize the set of uniform equilibrium payoffs. I show that the sender benefits from the dynamic interaction, even without feedback. The interaction can restore commitment, but only partially: the sender can attain any outcome where she cannot profit by altering her signals while keeping the marginal distribution of signals unchanged. If the sender’s payoff is state-independent, she can achieve the commitment benchmark of Bayesian persuasion.