Associate Professor of Economics
University of Pennsylvania

Research Fellow in Industrial Organization
Centre for Economic Policy Research (CEPR)

Member, Inequality: Measurement, Interpretation, and Policy Network
Human Capital & Economic Opportunity (HCEO) Global Working Group

Contact Information
Phone: (908) 432-7889
Email: abohren@gmail.com

I study topics in microeconomics with a focus on learning and belief formation. My research explores questions related to social learning, model misspecification, biased beliefs, discrimination, and moral hazard. My work on discrimination and biased beliefs has both theoretical and empirical components and builds on my research on learning under model misspecification.

Belief Formation and Learning

Learning and belief formation are central to many economic decisions. This component of my research agenda develops methods to model cognitive biases and incorrect models of the informational environment, and studies how such phenomena impact decisions.

  1. Informational Herding with Model Misspecification, Journal of Economic Theory, May 2016, 163: 222-247.
    Consider a social learning setting in which agents misunderstand the correlation between other agents' actions: they cannot distinguish between new and redundant information. When agents sufficiently overestimate the amount of new information, beliefs about the state become entrenched and incorrect learning occurs with positive probability; when agents sufficiently overestimate the amount of redundant information, beliefs fail to converge and learning is cyclical. A stylized simulation sketch follows the links below.

    [link to publication] [link to working paper]
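
    As a minimal, hypothetical sketch of this type of misspecification (a toy model, not the paper's exact setup): a binary state, a sequence of agents whose actions are either informative or mechanical copies of the previous action, and a public log-likelihood ratio updated as if a share p_hat, rather than the true share p, of actions were redundant.

        import numpy as np

        # Stylized sketch of social learning with misspecified correlation.
        # Hypothetical parameterization for illustration only.
        rng = np.random.default_rng(0)
        theta = 1      # true state in {0, 1}
        q = 0.7        # accuracy of an informed agent's private signal
        p = 0.5        # true share of redundant (copycat) actions
        T = 5000       # number of agents

        def public_log_odds(p_hat):
            """Public log-likelihood ratio for theta = 1, updated as if a
            share p_hat of observed actions were redundant."""
            ell, prev_action = 0.0, 1
            for _ in range(T):
                if rng.random() < p:
                    action = prev_action              # redundant: copies history
                else:
                    correct = rng.random() < q
                    action = theta if correct else 1 - theta
                step = np.log(q / (1 - q)) * (1 if action == 1 else -1)
                ell += (1 - p_hat) * step             # misspecified discounting
                prev_action = action
            return ell

        # p_hat < p over-weights actions as news; p_hat > p over-discounts
        # them as redundant. The paper characterizes when each error leads
        # to entrenched (possibly incorrect) beliefs or to belief cycles.
        for p_hat in (0.1, 0.5, 0.9):
            print(f"p_hat = {p_hat}: final public log-odds = {public_log_odds(p_hat):+.1f}")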

  2. Learning with Heterogeneous Misspecified Models: Characterization and Robustness, with Daniel N. Hauser, Econometrica, November 2021, 89: 3025-3077.
    This paper provides a general characterization of how model misspecification affects long-run learning. A simple criterion characterizes learning outcomes: depending on the misspecified model, learning may be correct or incorrect, beliefs may fail to converge, and agents may asymptotically disagree despite observing the same sequence of information. The paper also establishes that the correctly specified model is robust: agents with approximately correct models almost surely learn the true state. A sketch of the flavor of this criterion follows the links below.

    The misspecified model of correlation in Bohren (2016) is one form of misspecification captured by our framework. In Bohren Imas Rosenberg (2019), we apply this framework to a labor market setting and show how the dynamic patterns of discrimination between two groups of workers can identify whether the discrimination stems from accurate or misspecified beliefs about worker ability.

    [link to publication] [link to working paper]
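
    For a flavor of this style of characterization, here is a minimal sketch in the spirit of Berk-style results for misspecified Bayesian learning (illustrative only, not the paper's own criterion): with i.i.d. signals, long-run beliefs concentrate on the state whose subjective signal distribution is closest, in Kullback-Leibler divergence, to the true one.

        import numpy as np

        # Berk-style illustration (not the paper's criterion): beliefs
        # concentrate on the state minimizing the KL divergence between
        # the true signal distribution and the subjective one.
        true_dist = np.array([0.3, 0.7])     # true P(signal) over two signals

        # Subjective signal distributions under each state; hypothetical
        # numbers, and misspecified (neither matches the true distribution).
        subjective = {
            "state A": np.array([0.4, 0.6]),
            "state B": np.array([0.1, 0.9]),
        }

        def kl(p, q):
            """Kullback-Leibler divergence D(p || q)."""
            return float(np.sum(p * np.log(p / q)))

        divergences = {s: kl(true_dist, d) for s, d in subjective.items()}
        print(divergences)
        print("long-run beliefs concentrate on:", min(divergences, key=divergences.get))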

  3. Optimal Lending Contracts with Retrospective and Prospective Bias, with Daniel N. Hauser, AEA Papers & Proceedings, May 2023, 113: 665-72.
    This paper considers an entrepreneur who borrows to invest in a project and learns about project quality with a misspecified model. Using the insight that a misspecified model can be decomposed into two key classes of distortions, prospective biases and retrospective biases (Bohren Hauser (2023)), we explore how each type of bias impacts the structure of optimal lending contracts.

    [link to publication] [link to working paper] [link to online appendix]

  4. The Behavioral Foundations of Model Misspecification, with Daniel N. Hauser.
    This paper links two approaches to biased interpretations of information: non-Bayesian updating and model misspecification. We show that misspecified models can be decomposed into an updating rule and a belief forecast, derive necessary and sufficient conditions for an updating rule and belief forecast to have a misspecified-model representation, and show that the representation is unique. Finally, we explore two ways to select belief forecasts: introspection-proof and naive-consistent. This highlights the belief restrictions implicit in the misspecified-model approach and shows how to identify a misspecified model from belief data.

    NEW PAPER MAY 2023: [link to working paper]

  5. Over- and Underreaction to Information, with Cuimin Ba and Alex Imas.
    This paper explores how features of the learning environment interact with cognitive constraints to determine whether agents under- or overreact to information. We develop a model of belief updating in which agents are subject to limited attention and cognitive noise, and use it to generate predictions about how agents react to information. The model predicts underreaction when the state space is simple, signals are precise, and the prior is flat or diffuse; it predicts overreaction when the state space is complex, signals are noisy, and the prior is concentrated. A series of experiments provides direct support for these theoretical predictions. A toy updating sketch follows the link below.

    NEW PAPER JULY 2023: [link to working paper]
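
    As a toy illustration of what over- and underreaction mean formally (a hypothetical reduced form, not the model in the paper): in log-odds space, a Bayesian moves the posterior by the signal's log-likelihood ratio, while a distorted updater scales that movement by a sensitivity parameter gamma, with gamma < 1 producing underreaction and gamma > 1 overreaction.

        import numpy as np

        # Toy reduced-form updating rule (hypothetical, for illustration):
        # posterior log-odds = prior log-odds + gamma * log-likelihood ratio.
        # gamma = 1 is Bayesian; gamma < 1 underreacts; gamma > 1 overreacts.
        def update(prior, signal_llr, gamma):
            """Posterior P(state = 1) after a distorted update."""
            prior_log_odds = np.log(prior / (1 - prior))
            post_log_odds = prior_log_odds + gamma * signal_llr
            return 1 / (1 + np.exp(-post_log_odds))

        prior = 0.5                      # flat prior over a binary state
        signal_llr = np.log(0.7 / 0.3)   # 70%-accurate signal favoring state 1

        for gamma, label in [(0.5, "underreaction"), (1.0, "Bayesian"), (1.5, "overreaction")]:
            print(f"gamma = {gamma} ({label}): posterior = {update(prior, signal_llr, gamma):.3f}")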

  6. A Cognitive Foundation for Perceiving Uncertainty, with Josh Hascher, Alex Imas, Michael Ungeheuer and Martin Weber.
    We propose a framework in which perceptions of uncertainty are driven by the interaction between cognitive constraints and whether information is presented sequentially or simultaneously. Limited attention leads to the overweighting of unlikely but salient events, the dominant force when learning from simultaneous information, whereas imperfect recall leads to the underweighting of such events, the dominant force when learning sequentially. A series of studies shows that, when learning from simultaneous information, people are overoptimistic about assets that mostly underperform but sporadically exhibit large outperformance. However, they overwhelmingly select more consistently outperforming assets when observing the same information sequentially.

    NEW PAPER FEBRUARY 2024: [link to working paper]

  7. Posteriors as Signals in Misspecified Learning Models, with Daniel N. Hauser.
    The Bayesian learning literature often normalizes a signal to be the induced posterior distribution over the state space. We provide a foundation for such a normalization when agents have a misspecified model of the state-signal distributions.

    (draft coming soon)

  8. Misinterpreting Social Outcomes and Information Campaigns, with Daniel N. Hauser.
    Given the different inefficiencies that arise when agents are misspecified in Bohren Hauser (2021), it is natural to ask what types of policies improve decision-making. In this paper, we explore how information campaigns can counteract inefficient choices in a learning setting with social perception bias, in which agents have a misspecified model of others' preferences. We characterize how the type and level of social perception bias affect the optimal information policy, and show that key features of this policy depend crucially on the form of misspecification.

    (work in progress, slides available upon request)

Discrimination

My work on discrimination focuses on understanding how discrimination evolves across time and interacts across markets, and on identifying its source: in particular, how discrimination stemming from inaccurate (misspecified) beliefs manifests.

  1. The Dynamics of Discrimination: Theory and Evidence, with Alex Imas and Michael Rosenberg, American Economic Review, October 2019, 109: 3395-3436 (lead article).
    Exeter Prize 2020

    We model the dynamics of discrimination and show how its evolution can identify the underlying source. We test these theoretical predictions in a field experiment on a large online platform where users post content that is evaluated by other users. We assign posts to accounts that exogenously vary by gender and evaluation history. With no prior evaluations, women face significant discrimination. However, following a sequence of positive evaluations, the direction of discrimination reverses: women's posts are favored over men's. Interpreted through the lens of our model, this dynamic reversal implies that the discrimination is driven by inaccurate beliefs. Our model builds on the framework for social learning with model misspecification developed in Bohren Hauser (2021).

    [link to publication] [link to working paper]

  2. The Language of Discrimination: Using Experimental versus Observational Data, with Alex Imas and Michael Rosenberg, AEA Papers & Proceedings, May 2018, 108: 169-174.
    Discrimination can also occur along dimensions that are harder to quantify, such as the language used when engaging with and evaluating members of a targeted group. Using textual data from the field experiment in Bohren Imas Rosenberg (2019), we document a significant difference in the language used to respond to questions posed by women versus men. This highlights the importance of considering language as an additional means of discrimination.

    [link to publication] [link to working paper]

  3. Inaccurate Statistical Discrimination: An Identification Problem, with Kareem Haggag, Alex Imas and Devin Pope, Accepted at Review of Economics and Statistics.
    We argue that in many situations, individuals may have inaccurate beliefs about the relevant characteristics of different groups. This possibility creates an identification problem when isolating the source of discrimination: we show both theoretically and experimentally that, when not accounted for, inaccurate statistical discrimination will be misclassified as taste-based. We then examine two methodologies for differentiating between taste-based, accurate statistical, and inaccurate statistical discrimination: varying the amount of information presented to evaluators, and eliciting evaluators' beliefs. We also propose a possible intervention: when evaluators are presented with accurate information, inaccurate statistical discrimination decreases. A numerical sketch of the identification problem follows the links below.

    [link to working paper] [link to literature survey papers] [link to qualtrics survey]
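
    To make the identification problem concrete, here is a minimal numerical sketch with hypothetical values: an evaluator scores candidates using a (possibly inaccurate) belief about a group's mean quality plus a taste term, and an observer who assumes beliefs are accurate attributes the entire residual gap to taste.

        # Minimal numerical sketch of the identification problem
        # (hypothetical values, not from the paper's experiments).
        true_group_mean = 0.60   # group's true mean quality
        believed_mean = 0.45     # evaluator's (inaccurate) belief about it
        taste = -0.05            # evaluator's true taste-based animus

        # Evaluator's score gap for the group, relative to a correctly
        # perceived baseline group with the same true mean quality:
        score_gap = (believed_mean - true_group_mean) + taste

        # An observer who assumes beliefs are accurate sees no quality gap
        # to subtract, so the entire gap is attributed to taste:
        inferred_taste = score_gap

        print(f"total gap: {score_gap:+.2f}")
        print(f"inferred 'taste': {inferred_taste:+.2f} (true taste: {taste:+.2f})")
        # The -0.15 belief error is misclassified as taste-based discrimination.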

  4. Systemic Discrimination: Theory and Measurement, with Peter Hull and Alex Imas.
    This paper develops new tools for modeling and measuring direct and systemic forms of discrimination. We show how systemic discrimination arises from direct discrimination in other decisions, which generates disparities in the signaling technology or in skill accumulation. Importantly, standard tools for measuring direct discrimination, such as audit and correspondence studies, cannot detect systemic discrimination. We propose two ways to measure such systemic discrimination: by decomposing total discrimination into direct and systemic components, and via a new experimental design that we refer to as an iterated audit. Finally, we illustrate these tools empirically and document sizeable systemic discrimination. These findings show how discrimination in one domain can drive persistent disparities through systemic channels even when direct discrimination is eliminated. A stylized two-stage simulation follows the link below.

    NEW VERSION DEC 2023: [link to working paper]
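
    The following is a stylized two-stage simulation of the basic logic, with hypothetical numbers: direct discrimination at a first stage (say, referrals) shapes the credentials a second stage (hiring) observes, so a correspondence study at the hiring stage detects no direct discrimination even though hiring rates still differ.

        import numpy as np

        # Stylized sketch of systemic discrimination (hypothetical numbers,
        # not the paper's empirical illustration).
        rng = np.random.default_rng(1)
        n = 100_000
        skill = rng.normal(0, 1, n)            # identical skill distributions
        group_b = rng.random(n) < 0.5          # group membership

        # Stage 1 (referrals): direct discrimination against group B.
        referred = rng.random(n) < np.where(group_b, 0.3, 0.6)

        # Stage 2 (hiring): group-blind given skill and the referral credential.
        hired = skill + 1.0 * referred > 0.8

        # A stage-2 correspondence study holds credentials fixed across groups,
        # so it finds no direct discrimination in hiring; yet total disparities
        # persist through the systemic channel:
        print(f"hire rate, group A: {hired[~group_b].mean():.3f}")
        print(f"hire rate, group B: {hired[group_b].mean():.3f}")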

Moral Hazard

I explore how the persistence of past actions and peer monitoring can be used to overcome moral hazard. Applications include designing rating systems on online platforms and providing incentives in online labor markets.

  1. Persistence in a Dynamic Moral Hazard Game, Theoretical Economics, January 2024, 19: 449-498.
    This paper studies how the persistence of past choices can be used to create incentives. A large player, such as a firm, interacts with a sequence of short-run players, such as customers. The long-run player faces moral hazard and her past actions are imperfectly observed: they are distorted by a Brownian motion. Persistence refers to the impact that actions have on a payoff-relevant state variable; for example, product quality depends on current and past investment choices. I characterize actions and payoffs in Markov Perfect Equilibria (MPE) for a fixed discount rate, and derive sufficient conditions for an MPE to be the unique perfect public equilibrium (PPE). Persistence creates effective intertemporal incentives to overcome moral hazard in settings where traditional channels fail. Several applications illustrate how the structure of persistence impacts the strength of these incentives.

    [link to working paper] [link to supplemental appendix]

  2. Optimal Rating Design with Moral Hazard.
    Using the equilibrium characterization from Bohren (2024), I explore how ratings can be used to create incentives on a platform in which a long-run worker sells a service to a sequence of short-run consumers, each of whom reports a review of the worker's performance. The platform designs a rating mechanism in which it commits to divert a portion of the worker's revenue and uses this to reward the worker based on her aggregate rating. Key design features include the rate at which past reviews decay, the amount of revenue to divert, and the shape of the reward function. I obtain a sharp characterization of the optimal rating mechanism for two cases: (i) the platform maximizes the worker's payoff, subject to budget balance, and (ii) the platform maximizes its profit. In the worker-optimal case, the optimal mechanism achieves full efficiency under strong conditions: the platform must be able to perfectly price discriminate and the worker must be perfectly patient. The profit-maximizing platform always selects an inefficient rating mechanism. Even under conditions that do not allow platforms to fully overcome moral hazard, persistent rating mechanisms improve efficiency and provide platforms with a powerful tool to generate intertemporal incentives. A small sketch of a decaying aggregate rating follows below.

    (work in progress)
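
    As a small illustration of one design lever mentioned above, the decay rate on past reviews, here is a toy sketch (illustrative only, not the mechanism analyzed in the paper) of an exponentially decaying aggregate rating; a higher decay rate makes the rating, and hence the reward, more sensitive to recent performance.

        import numpy as np

        # Toy exponentially decaying aggregate rating (illustrative only).
        def aggregate_rating(reviews, delta):
            """Exponentially weighted aggregate of reviews (most recent last)."""
            ages = np.arange(len(reviews) - 1, -1, -1)
            weights = np.exp(-delta * ages)
            return float(np.sum(weights * reviews) / np.sum(weights))

        reviews = np.array([5, 5, 5, 5, 2, 2])   # performance drops at the end

        for delta in (0.0, 0.5, 2.0):
            print(f"decay rate {delta}: aggregate rating = {aggregate_rating(reviews, delta):.2f}")
        # delta = 0 is a simple average; larger delta makes the rating track
        # recent performance, sharpening intertemporal incentives.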

  3. Peer Monitoring with Partial Commitment, with Troy Kravitz.
    A firm employs workers to obtain costly, unverifiable information, for example, categorizing the content of images. Monitoring takes the form of hiring multiple workers to complete the same task and comparing reported output across workers. The optimal contract under limited liability exhibits three key features: (i) the monitoring technology depends crucially on the firm's commitment power, with virtual monitoring (monitoring with arbitrarily small probability) optimal when the firm can commit to truthfully reveal messages from other workers, and monitoring with strictly positive probability optimal when the firm can hide messages (partial commitment); (ii) bundling multiple tasks reduces worker rents and monitoring inefficiencies; and (iii) the optimal contract is approximately efficient under full but not partial commitment. We conclude with an application to crowdsourcing platforms and characterize the optimal contract for tasks found on these platforms.

    (draft available upon request)

Information Aggregation

Another important question related to learning is how to aggregate information from multiple sources. In such scenarios, committees are often formed to assist with the decision.

  1. Should Straw Polls be Banned?, with S. Nageeb Ali, Games and Economic Behavior, November 2019, 118: 284-294.
    We consider a setting in which a Principal appoints a committee of partially informed experts to choose a policy. The experts' preferences are aligned with each other but conflict with the Principal's. We study whether she gains from banning committee members from communicating, or "deliberating," before voting. Our main result is that if the committee plays its preferred equilibrium and the Principal must use a threshold voting rule, then she does not gain from banning deliberation. We show by example how she can gain if she can choose the equilibrium played by the committee, or if she can use a non-anonymous or non-monotone social choice rule.

    [link to publication] [link to working paper]

The Econometrics of Randomized Experiments

Experimental policy trials that explicitly account for interference between individuals are an increasingly useful lens through which to study spillover and network effects. Empirical researchers who seek to investigate these effects experimentally face novel design choices that do not arise in settings without interference.

  1. Optimal Design of Experiments in the Presence of Interference, with Sarah Baird, Craig McIntosh and Berk Ozler, Review of Economics & Statistics, December 2018, 100: 844-860.
    We formalize the optimal design of experiments when there is interference between units, i.e., an individual's outcome depends on the outcomes of others in her group. We focus on randomized saturation designs: two-stage experiments that first randomize the treatment saturation of each group and then randomize individual treatment assignment. We map the potential outcomes framework with partial interference to a regression model with clustered errors, calculate the standard errors of randomized saturation designs, and derive analytical insights about the optimal design. We show that the power to detect average treatment effects declines precisely with the ability to identify novel treatment and spillover effects. Bohren Staples Baird McIntosh Ozler (2016) provides software for researchers to use our standard error calculations and optimal design results; a minimal assignment sketch follows the links below.

    [link to publication] [link to working paper] [link to replication files]
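
    As a minimal sketch of what a randomized saturation design looks like operationally (hypothetical saturations and group sizes, not the optimal design from the paper): groups are first randomized to a treatment saturation, and individuals within each group are then randomized to treatment at that saturation.

        import numpy as np

        # Minimal two-stage randomized saturation assignment (hypothetical
        # saturations and sizes; not the paper's optimal design).
        rng = np.random.default_rng(42)
        n_groups, group_size = 60, 20
        saturations = np.array([0.0, 0.5, 1.0])   # 0.0 gives pure-control groups

        # Stage 1: randomize each group to a saturation.
        group_sat = rng.choice(saturations, size=n_groups)

        # Stage 2: within each group, randomize individuals to treatment
        # at the group's assigned saturation.
        treated = np.zeros((n_groups, group_size), dtype=bool)
        for g, s in enumerate(group_sat):
            n_treat = int(round(s * group_size))
            idx = rng.choice(group_size, size=n_treat, replace=False)
            treated[g, idx] = True

        # Untreated individuals in partially treated groups identify spillover
        # effects; saturation-0 groups are pure controls.
        for s in saturations:
            mask = group_sat == s
            print(f"saturation {s}: {mask.sum()} groups, "
                  f"treated share = {treated[mask].mean():.2f}")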

  2. Power Calculation Software for Randomized Saturation Experiments, Aislinn Bohren, Patrick Staples, Sarah Baird, Craig McIntosh and Berk Ozler, Version 1.0, 2016.
    To complement our analytical results in Baird Bohren McIntosh Ozler (2018), we developed software to assist researchers in designing randomized saturation experiments. The software allows users to calculate the standard errors of estimators for different randomized saturation designs, or to calculate the optimal randomized saturation design for a given researcher's objective. A generic simulation-based power check, independent of this software, is sketched below.

    Available in R, Python, Matlab, Graphical User Interface (GUI).

    [link to software]
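
    For intuition about what such power calculations do, here is a generic Monte Carlo sketch (hypothetical effect sizes and noise parameters; it does not use or mirror the software's interface): simulate a randomized saturation experiment many times and count how often a treatment effect is detected.

        import numpy as np

        # Generic Monte Carlo power sketch for a randomized saturation design
        # (hypothetical parameters; unrelated to the software's actual API).
        rng = np.random.default_rng(7)
        n_groups, group_size = 60, 20
        saturations = np.array([0.0, 0.5, 1.0])
        treatment_effect, spillover_effect = 0.25, 0.10
        group_sd, noise_sd = 0.5, 1.0      # group-level and individual noise

        def one_trial():
            group_sat = rng.choice(saturations, size=n_groups)
            sat = np.repeat(group_sat, group_size)
            treated = rng.random(n_groups * group_size) < sat
            spill = ~treated & (sat > 0)   # untreated in a treated group
            y = (treatment_effect * treated + spillover_effect * spill
                 + np.repeat(rng.normal(0, group_sd, n_groups), group_size)
                 + rng.normal(0, noise_sd, n_groups * group_size))
            # Crude z-test of treated vs. pure control; it ignores the
            # clustering that the paper's calculations account for.
            control = sat == 0
            diff = y[treated].mean() - y[control].mean()
            se = np.sqrt(y[treated].var() / treated.sum()
                         + y[control].var() / control.sum())
            return abs(diff / se) > 1.96

        power = np.mean([one_trial() for _ in range(500)])
        print(f"simulated power to detect the treatment effect: {power:.2f}")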