
What's Context Got to Do with It? Comparative Difficulty of Test Questions Influences Metacognition and Corrected Scores for Formula‐scored Exams


Applied Cognitive Psychology


Abstract

On formula‐scored exams, students receive points for correct answers and penalties for incorrect ones, but they can avoid the penalty by withholding answers they are unsure of. However, test‐takers have difficulty strategically regulating their accuracy and often set an overly conservative metacognitive response bias (e.g., Higham, 2007). The current experiments extended these findings by exploring whether the comparative difficulty of surrounding test questions (i.e., easy vs. hard)—a factor unrelated to the knowledge being tested—impacts metacognitive response bias for medium‐difficulty test questions. Comparative difficulty had no significant influence on participants' ability to choose correct answers for medium questions, but it did affect their willingness to report answers and their confidence ratings. This difference carried over to corrected scores (scores after penalties are applied) when comparative difficulty was manipulated within subjects: scores were higher in the hard condition. Results are discussed in terms of implications for interpreting formula‐scored tests and the underlying mechanisms of performance.

Copyright © 2017 John Wiley & Sons, Ltd.