
True Overconfidence in Interval Estimates: Evidence Based on a New Measure of Miscalibration


Journal of Behavioral Decision Making


Abstract

Overconfidence is often regarded as one of the most prevalent judgment biases. Several studies show that overconfidence can lead to suboptimal decisions by investors, managers, or politicians. Recent research, however, questions whether overconfidence should be regarded as a bias at all and shows that standard “overconfidence” findings can easily be explained by different degrees of knowledge across agents plus a random error in predictions. We contribute to this ongoing research by extensively analyzing interval estimates for knowledge questions, for real financial time series, and for artificially generated charts. We suggest a new method to measure overconfidence in interval estimates, based on the implied probability mass behind a stated prediction interval. We document overconfidence patterns that are difficult to reconcile with rationality of agents and that cannot be explained by differences in knowledge, because such differences do not exist in our task. Furthermore, we show that overconfidence measures are reliable in the sense that there are stable individual differences in the degree of overconfidence in interval estimates, thereby testing an important assumption of behavioral economics and behavioral finance models. We do this in a “field experiment,” for different levels of subject expertise (students on the one hand and professional traders and investment bankers on the other), over time, by using different miscalibration metrics, and for tasks that avoid common weaknesses such as a non‐representative selection of trick questions. Copyright © 2012 John Wiley & Sons, Ltd.
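The abstract only names the measure, so the sketch below is a minimal illustration of the general idea, not the paper's exact construction. It assumes the target quantity has a known normal distribution (as could be arranged for artificially generated charts); the function names (implied_probability_mass, overconfidence_score) and parameters (mu, sigma, nominal) are hypothetical and chosen for illustration only.

```python
# Illustrative sketch: miscalibration as the gap between the requested
# confidence level and the probability mass that an assumed target
# distribution places on a subject's stated prediction interval.
from scipy.stats import norm


def implied_probability_mass(low, high, mu, sigma):
    """Probability mass an assumed N(mu, sigma^2) target distribution
    assigns to the stated interval [low, high]."""
    return norm.cdf(high, loc=mu, scale=sigma) - norm.cdf(low, loc=mu, scale=sigma)


def overconfidence_score(low, high, mu, sigma, nominal=0.90):
    """Positive values mean the stated interval covers less probability
    mass than the requested confidence level, i.e. it is too narrow."""
    return nominal - implied_probability_mass(low, high, mu, sigma)


# Example: a subject asked for a 90% interval states [95, 105] for a
# quantity that is (by assumption) distributed N(100, 10^2).
print(overconfidence_score(95.0, 105.0, mu=100.0, sigma=10.0))  # ~0.52, strongly overconfident
```

Under these assumptions, a well-calibrated 90% interval would score near zero, while intervals that are too narrow yield positive scores; averaging the score across questions or subjects gives one plausible aggregate miscalibration metric.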