Belief Updating with Misinformation
Uncertain information is frequently confirmed or retracted after people first encounter it. A large literature has studied how people change their beliefs in response to new information; how people react to information about previous information, however, remains unclear. We investigate three closely related questions: 1) How do people update their beliefs when told that a previous signal was (un)informative? 2) What is the effect of verifying the informativeness of a signal ex ante rather than checking it ex post? 3) Do past information checks affect how people react to new uncertain information in the future? To answer these questions, we conduct two online experiments using a novel modification of the classical ball-and-urn framework. The setting is deliberately abstract to avoid the influence of motivated reasoning or other situation-specific circumstances. We find that the majority of subjects react to information about information incorrectly. Importantly, we can predict people's beliefs after the uncertain information is retracted or confirmed from their initial response: after a retraction, people who initially over-reacted end up with a belief above their initial prior, while people who initially under-reacted end up with a belief below it. After multiple consecutive retracted signals, this leads to beliefs that are more dispersed than under equivalent information that is labeled as uninformative ex ante. Past confirmations or retractions do not appear to affect how people respond to new uncertain information in our setup.
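The Bayesian benchmark behind the abstract can be sketched in a few lines. The urn composition and prior below are hypothetical illustrative parameters, not the experiment's actual design; the point is only that a Bayesian who learns a signal was uninformative reverts exactly to the prior, so any deviation from the prior after a retraction reveals the initial over- or under-reaction.

```python
# Ball-and-urn illustration (hypothetical parameters, not the paper's design):
# two urns, each equally likely ex ante; urn A holds 70% red balls, urn B 30%.
prior = 0.5
p_red_a, p_red_b = 0.7, 0.3

# Bayes' rule after drawing a red ball.
posterior = prior * p_red_a / (prior * p_red_a + (1 - prior) * p_red_b)
print(round(posterior, 2))  # 0.7

# If the signal is then retracted (revealed to be uninformative), a Bayesian
# discards it and simply returns to the prior. A post-retraction belief above
# (below) the prior therefore signals initial over- (under-) reaction.
after_retraction = prior
print(after_retraction)  # 0.5
```

Ex-ante verification removes the noisy first update entirely, which is why, per the abstract, beliefs end up less dispersed than after ex-post retractions.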
Useful Forecasting: Belief Elicitation for Decision-Making [Slides]
Information about an uncertain event is crucial for informed decision-making. This paper introduces a simple framework in which 1) a principal uses the reported beliefs of multiple agents to make a decision, and 2) the agents reporting their beliefs are affected by that decision. The question naturally arises of how the principal can incentivize the agents to report their beliefs truthfully. I show that in this setting a direct reporting mechanism that uses a scoring rule to incentivize belief reports yields truthful reporting by all agents as the unique Nash equilibrium under exactly two conditions: preference diversity and no pivotality. Moreover, if the principal can consult only a single agent, the only mechanism that guarantees truth-telling requires perfect knowledge of the agent's preferences. In practice, the principal is best served by delegating the decision to the agent and providing incentives that align the preferences as closely as possible.
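The baseline scoring-rule logic the framework builds on can be illustrated with the quadratic (Brier) rule, one standard proper scoring rule; the abstract does not specify which rule the mechanism uses, so this is purely illustrative. Absent any stake in the principal's decision, an agent's expected score is maximized by reporting the true belief, and the paper's conditions (preference diversity, no pivotality) are what preserve this property once agents do care about the decision.

```python
# Quadratic (Brier) scoring rule sketch — illustrative, not the paper's
# specific mechanism. Outcome is 1 if the event occurs, 0 otherwise.
def quadratic_score(report, outcome):
    """Payment for reporting probability `report` given the realized outcome."""
    return 1 - (outcome - report) ** 2

def expected_score(report, true_belief):
    """Agent's subjective expected score under belief `true_belief`."""
    return (true_belief * quadratic_score(report, 1)
            + (1 - true_belief) * quadratic_score(report, 0))

# Searching a grid of reports shows the truthful report is optimal.
true_belief = 0.6
best = max((r / 100 for r in range(101)),
           key=lambda r: expected_score(r, true_belief))
print(best)  # 0.6 — truthful reporting maximizes the expected score
```

With a single agent whose payoff also depends on the decision, this incentive can be distorted by misreporting to steer the outcome, which is why that case requires the principal to know the agent's preferences exactly.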
Beliefs and the Role of Memory
Lying and Reputation - An Experimental Study of Reputation Effects