
Assistant Professor of Economics
University of Pennsylvania (on leave 2018-2019)
Carnegie Mellon University (2018-2019)

Research Affiliate in Industrial Organization
Centre for Economic Policy Research (CEPR)

Contact Information
Phone: (908) 432-7889
Email: abohren@gmail.com

CV

I study topics in microeconomics with a focus on models of information and how individuals interact in dynamic settings. My research explores questions related to learning under model misspecification, discrimination, information aggregation, moral hazard, and the econometrics of randomized experiments. My work on discrimination has both theoretical and empirical components and builds on my research on learning under model misspecification. My work in the other four areas is theoretical and includes applications to designing rating systems, information campaigns, and committees, and to providing incentives in online labor markets.

An overview of my research agenda is available here: [link to research agenda]

Social Learning under Model Misspecification

In many economic settings, individuals learn about their environment from observing their peers. Model misspecification allows for the possibility that agents have incorrect models of the informational environment and how others make decisions.

  1. Informational Herding with Model Misspecification, Journal of Economic Theory, May 2016, 163: 222-247.

    I study a social learning setting where agents have a misspecified model of the correlation between other agents' actions -- they cannot distinguish between new and redundant information. When individuals significantly overestimate the amount of new information, beliefs about the state become entrenched and incorrect learning occurs with positive probability; when they sufficiently overestimate the amount of redundant information, beliefs fail to converge and learning is cyclical. (A stylized simulation of both outcomes appears at the end of this section.)

    [link to publication] [link to working paper]

  2. Social Learning with Model Misspecification: A Framework and a Robustness Result, with D. Hauser, PIER Working Paper 18-017.

    Revise & Resubmit (2nd round), Econometrica.

    We provide a general characterization of how model misspecification affects long-run learning in a sequential social learning setting. We first develop a framework to represent cognitive biases as forms of model misspecification, which captures three broad categories of model misspecification: strategic misspecification, such as level-k and cognitive hierarchy; signal misspecification, such as partisan bias, motivated reasoning and overconfidence; and preference misspecification, such as the false consensus effect and pluralistic ignorance. Our main result is a simple criterion to characterize learning outcomes that is straightforward to derive from the primitives of the misspecification. Depending on the nature of the misspecification, we show that learning may be correct, incorrect or beliefs may not converge. Multiple degenerate limit beliefs may arise and agents may asymptotically disagree, despite observing the same sequence of information. We also establish that the correctly specified model is robust -- agents with approximately correct models almost surely learn the true state. We close with a demonstration of our framework in each category of model misspecification.

    The misspecified model of correlation in Bohren (2016) is one form of strategic misspecification that is captured by our framework. In Bohren, Imas, and Rosenberg (2018), we apply this framework to a labor market setting and show how the dynamic patterns of discrimination between two groups of workers can identify whether the discrimination stems from accurate or misspecified beliefs about worker ability.

    [link to working paper]

  3. Misinterpreting Social Outcomes and Information Campaigns, with D. Hauser.

    Given the different inefficiencies that arise when agents are misspecified (Bohren (2016) and Bohren and Hauser (2018)), it is natural to ask what types of policies improve decision-making. In this paper, we explore how information campaigns can counteract inefficient choices. We study the optimal way for a social planner to release costly public information about the state. For example, this could entail a public health campaign that encourages parents to vaccinate their children or a savings campaign that encourages workers to invest in the stock market. We show that the duration (temporary or permanent) and targeting (intervening to correct inefficient action choices or to reinforce efficient ones) of the optimal information campaign depend crucially on the form of misspecification.

    [link to extended abstract] (work in progress, slides available upon request)
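
The sketch below simulates the two long-run outcomes described in Bohren (2016) above: entrenched (possibly incorrect) beliefs when agents overestimate how much new information actions carry, and cycling beliefs when they overestimate redundancy. It is a minimal illustration, not the paper's formal model; the signal accuracy q, the true share of agents acting only on their private signal (p_true), and the observers' perceived share (p_hat) are hypothetical parameters chosen for illustration.

```python
import numpy as np

# Stylized sequential social learning with a misspecified model of how
# informative others' actions are (in the spirit of Bohren 2016).
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)

q = 0.7        # private signal accuracy
p_true = 0.3   # true share of agents who act on their signal alone
T = 2000       # number of agents

def run(p_hat, omega=1):
    """Public log-likelihood ratio path when observers believe a share
    p_hat of agents act on private information alone (truth: p_true)."""
    lam = np.log(q / (1 - q))   # log-likelihood ratio of one signal
    L = 0.0                     # public log-odds that omega = 1
    path = []
    for _ in range(T):
        s = 1 if rng.random() < (q if omega == 1 else 1 - q) else 0
        if rng.random() < p_true:
            a = s                                  # autarkic: follow signal
        else:
            a = s if abs(L) < lam else int(L > 0)  # social: herd in cascades
        def lik(state):  # observers' (misspecified) likelihood of action a
            ps = q if state == 1 else 1 - q
            p_sig = ps if a == 1 else 1 - ps       # autarkic action prob.
            if abs(L) < lam:
                p_soc = p_sig                      # social action reveals signal
            else:
                p_soc = 1.0 if (a == 1) == (L > 0) else 0.0  # pure herding
            return p_hat * p_sig + (1 - p_hat) * p_soc
        L += np.log(lik(1) / lik(0))
        path.append(L)
    return np.array(path)

for p_hat in [p_true, 0.9, 0.05]:   # correct, overestimate, underestimate
    path = run(p_hat)
    flips = int(np.sum(np.sign(path[1:]) != np.sign(path[:-1])))
    print(f"p_hat={p_hat:.2f}: final log-odds {path[-1]:+.1f}, sign changes {flips}")
```

With p_hat = p_true the public belief settles on the truth; with p_hat = 0.9 it drifts without bound in whichever direction an early cascade takes, sometimes on the wrong state; with p_hat = 0.05 contrarian actions repeatedly knock the belief out of cascades, producing cycles.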

Discrimination

My work on discrimination focuses on distinguishing between different sources of discrimination, including belief-based discrimination stemming from accurate or inaccurate (misspecified) beliefs and preference-based discrimination. Identifying the source of discrimination is important for policy and welfare assessment.

  1. The Dynamics of Discrimination: Theory and Evidence, with A. Imas and M. Rosenberg, PIER Working Paper 18-016.

    Conditionally accepted, American Economic Review.

    We model the dynamics of discrimination and show how its evolution can identify the underlying source. We test these theoretical predictions in a field experiment on a large online platform where users post content that other users evaluate. We assign posts to accounts that exogenously vary by gender and evaluation history. With no prior evaluations, women face significant discrimination. However, following a sequence of positive evaluations, the direction of discrimination reverses: women's posts are favored over men's. Interpreted through the lens of our model, this dynamic reversal implies discrimination driven by inaccurate beliefs. (A stylized Bayes calculation at the end of this section illustrates the reversal logic.)

    Our model builds on the framework for social learning with model misspecification developed in Bohren and Hauser (2018).

    [link to working paper]

  2. Inaccurate Statistical Discrimination, with K. Haggag, A. Imas and D. Pope.

    We provide evidence from an online experiment that illustrates how to distinguish between accurate (based on correct beliefs) and inaccurate (based on misspecified beliefs) belief-based discrimination. We show how ignoring this distinction -- as is often the case in the discrimination literature -- can lead to erroneous interpretations of the motives and implications of discriminatory behavior.

    (work in progress, draft coming soon)

  3. The Language of Discrimination: Using Experimental versus Observational Data, with A. Imas and M. Rosenberg, AEA Papers & Proceedings, May 2018, 108: 169-174.

    Discrimination can also occur along dimensions that are harder to quantify, such as the language used when engaging with and evaluating members of a targeted group. We use text and language data to examine whether people respond differently to questions posed by women versus men. Using techniques from machine learning and the textual data from the field experiment in Bohren, Imas, and Rosenberg (2018), we document a significant difference in the distribution of language used in response to questions from male versus female usernames. This highlights the importance of considering language as an additional means of discrimination.

    [link to publication] [link to working paper]
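
The toy Bayes calculation below illustrates one channel behind the reversal logic discussed in this section: if early evaluators hold a targeted group to a harsher standard, surviving that standard is a stronger signal of quality, so a later evaluator who accounts for it rates the group's positive histories more favorably. The numbers and the evaluation technology are hypothetical and far simpler than the model in Bohren, Imas, and Rosenberg (2018).

```python
# Toy Bayes arithmetic for the dynamic-reversal logic; all numbers are
# illustrative assumptions, not estimates from the paper.

def posterior_good(prior, p_pos_good, p_pos_bad):
    """P(good | one positive evaluation), by Bayes' rule."""
    num = prior * p_pos_good
    return num / (num + (1 - prior) * p_pos_bad)

# A good (bad) post earns a positive evaluation with probability 0.8
# (0.3); first-round evaluators apply a harsher standard to women,
# shaving 0.2 off both probabilities.
p_good, p_bad, penalty = 0.8, 0.3, 0.2

# A second-round evaluator who accounts for the harsher standard treats
# a woman's positive evaluation as the stronger signal of quality.
man = posterior_good(0.5, p_good, p_bad)
woman = posterior_good(0.5, p_good - penalty, p_bad - penalty)
print(f"posterior after one positive evaluation: man {man:.3f}, woman {woman:.3f}")
```

Here the woman's posterior (0.857) overtakes the man's (0.727) after identical positive histories; in the paper's richer dynamic model, whether such a reversal occurs is what separates discrimination rooted in inaccurate beliefs from accurate statistical discrimination.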

Moral Hazard

I explore how the persistence of past actions and peer monitoring can be used to overcome moral hazard. Applications include designing rating systems on online platforms and providing incentives in online labor markets.

  1. Using Persistence to Generate Incentives in a Dynamic Moral Hazard Problem, PIER Working Paper 18-015.

    Accepted subject to revisions, Theoretical Economics.

    I study how the persistence of past choices can be used to create incentives in a continuous-time stochastic game in which a large player, such as a firm, interacts with a sequence of short-run players, such as customers. The long-run player faces moral hazard and her past actions are imperfectly observed -- they are distorted by a Brownian motion. Persistence refers to the impact that actions have on a payoff-relevant state variable; for example, the quality of a product depends on both current and past investment choices. I characterize actions and payoffs in Markov perfect equilibria (MPE) for a fixed discount rate, and show that the perfect public equilibrium (PPE) payoff set is the convex hull of the MPE payoff set. Finally, I derive sufficient conditions for an MPE to be the unique PPE. Persistence creates effective intertemporal incentives to overcome moral hazard in settings where traditional channels fail. Several applications illustrate how the structure of persistence affects the strength of these incentives. (A discretized toy simulation of the persistence channel appears at the end of this section.)

    [link to working paper]

  2. Optimal Rating Design with Moral Hazard.

    Using the equilibrium characterization from Bohren (2018), I explore how ratings can be used to create incentives on a platform in which a long-run worker sells a service to a sequence of short-run consumers, each of whom reports a review of the worker's performance. The platform designs a rating mechanism in which it commits to divert a portion of the worker's revenue and uses it to reward the worker based on her aggregate rating. Key design features include the rate at which past reviews decay, the amount of revenue to divert, and the shape of the reward function. I obtain a sharp characterization of the optimal rating mechanism in two cases: (i) the platform maximizes the worker's payoff, subject to budget balance, and (ii) the platform maximizes its own profit. In both cases, efficiency improves significantly relative to a market without persistent ratings, in that the worker exerts effort closer to the efficient level. In the worker-optimal case, the mechanism achieves full efficiency only under strong conditions: the platform must be able to perfectly price discriminate and the worker must be perfectly patient. The profit-maximizing platform always selects an inefficient rating mechanism, leading to lower effort provision than under the worker-optimal mechanism. Even when platforms cannot fully overcome moral hazard, persistent rating mechanisms improve efficiency and give platforms a powerful tool to generate intertemporal incentives.

    (draft coming soon)

  3. Optimal Contracting with Costly State Verification, with an Application to Crowdsourcing, with T. Kravitz, PIER Working Paper 16-023.

    A firm employs workers to obtain costly, unverifiable information -- for example, categorizing the content of images. Monitoring takes the form of hiring multiple workers to complete the same task and comparing reported output across workers. The optimal contract under limited liability exhibits three key features: (i) the monitoring technology depends crucially on the commitment power of the firm -- virtual monitoring, or monitoring with arbitrarily small probability, is optimal when the firm can commit to truthfully reveal messages from other workers, while monitoring with strictly positive probability is optimal when the firm can hide messages (partial commitment); (ii) bundling multiple tasks reduces worker rents and monitoring inefficiencies; and (iii) the optimal contract is approximately efficient under full but not partial commitment. We conclude with an application to crowdsourcing platforms and characterize the optimal contract for tasks found on these platforms.

    [link to working paper]
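
The sketch below discretizes the persistence channel that runs through this section: investment feeds a slowly depreciating quality state, reviews observe quality with noise, and the platform aggregates reviews into an exponentially decaying rating. The law of motion, the toy effort rule, and all parameters are illustrative assumptions, not the equilibrium objects characterized in the papers above.

```python
import numpy as np

# Discretized toy of persistent quality and a decaying rating.
# Functional forms and parameters are assumptions for illustration.
rng = np.random.default_rng(1)

dt, T = 0.01, 10.0
kappa = 1.0    # depreciation rate of past investment in quality
sigma = 0.5    # review noise (Brownian observation error)
decay = 2.0    # rate at which the rating discounts old reviews

quality, rating = 0.0, 0.0
for _ in range(int(T / dt)):
    effort = 1.0 if rating < 0.8 else 0.5   # toy effort rule, not an MPE
    # persistence: quality aggregates current and past investment,
    # dX = (effort - kappa * X) dt
    quality += (effort - kappa * quality) * dt
    # noisy review of current quality
    review = quality + sigma * np.sqrt(dt) * rng.standard_normal()
    # exponentially decaying aggregate rating of past reviews
    rating += decay * (review - rating) * dt

print(f"terminal quality {quality:.2f}, terminal rating {rating:.2f}")
```

Because quality responds to effort only gradually, the rating rewards sustained rather than one-shot investment; the decay parameter plays the role of the review-decay design lever described in the rating-design paper above.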

Information Aggregation in Committees

Another important question related to learning is how to aggregate information from multiple sources. Committees are often formed to assist with such decisions.

  1. Should Straw Polls be Banned?, with S.N. Ali, PIER Working Paper 18-022.

    Under review.

    We consider a setting in which a Principal appoints a committee of partially informed experts to choose a policy. The experts' preferences are aligned with each other but conflict with hers. We study whether she gains from banning committee members from communicating, or "deliberating," before voting. Our main result is that if the committee plays its preferred equilibrium and the Principal must use a threshold voting rule, then she does not gain from banning deliberation. We show, using examples, how she can gain if she can choose the equilibrium played by the committee or use a non-anonymous or non-monotone social choice rule.

    [link to working paper]

The Econometrics of Randomized Experiments

Experimental policy trials that explicitly consider interference between individuals are an increasingly useful lens to study spillover and network effects. Empirical researchers who seek to investigate these effects experimentally face novel design choices that do not arise in settings without interference.

  1. Optimal Design of Experiments in the Presence of Interference, with S. Baird, C. McIntosh and B. Ozler, Review of Economics & Statistics, December 2018, 100: 844-860.

    We formalize the optimal design of experiments when there is interference between units, i.e., an individual's outcome depends on the outcomes of others in her group. We focus on randomized saturation designs: two-stage experiments that first randomize the treatment saturation of a group and then randomize individual treatment assignment. We map the potential outcomes framework with partial interference to a regression model with clustered errors, calculate the standard errors of randomized saturation designs, and derive analytical insights about the optimal design. We show that the power to detect average treatment effects declines precisely with the ability to identify novel treatment and spillover effects. Bohren, Staples, Baird, McIntosh, and Ozler (2016) provides software for researchers to use our standard error calculations and optimal design results. (A toy two-stage assignment in the spirit of these designs appears at the end of this section.)

    [link to publication] [link to working paper] [link to replication files]

  2. Power Calculation Software for Randomized Saturation Experiments, A. Bohren, P. Staples, S. Baird, C. McIntosh and B. Ozler, Version 1.0, 2016.

    To complement our analytical results in Baird, Bohren, McIntosh, and Ozler (2018), we developed software to assist researchers in designing randomized saturation experiments. Our software allows users to calculate the standard errors of estimators for different randomized saturation designs or calculate the optimal randomized saturation design for a given researcher objective.

    Available in R, Python, and Matlab, and through a graphical user interface (GUI).

    [link to software]
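
To make the two-stage structure of a randomized saturation design concrete, the sketch below first assigns clusters to saturation bins and then assigns individuals to treatment within clusters. The cluster counts, saturation bins, and shares are hypothetical, and this is not the interface of the power-calculation software above.

```python
import numpy as np

# Toy two-stage randomized saturation assignment. All design choices
# below (cluster counts, saturation bins) are illustrative assumptions.
rng = np.random.default_rng(2)

n_clusters, cluster_size = 30, 40
saturations = np.array([0.0, 0.5, 1.0])  # share treated within a cluster

# Stage 1: randomize each cluster's treatment saturation.
cluster_sat = rng.choice(saturations, size=n_clusters)

# Stage 2: randomize individual treatment within each cluster at the
# assigned saturation.
assignment = np.zeros((n_clusters, cluster_size), dtype=int)
for c, sat in enumerate(cluster_sat):
    k = int(round(sat * cluster_size))
    treated = rng.choice(cluster_size, size=k, replace=False)
    assignment[c, treated] = 1

# Untreated individuals in partially treated clusters identify spillover
# effects; pure-control clusters (saturation 0) anchor the counterfactual.
for s in saturations:
    mask = cluster_sat == s
    print(f"saturation {s:.1f}: {mask.sum():2d} clusters, "
          f"{assignment[mask].sum():4d} treated individuals")
```

Varying the number of saturation bins and the share of clusters assigned to each bin is exactly the design margin whose standard-error consequences the paper characterizes.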