The Effect of Rating Unfamiliar Items on Angoff Passing Scores
Educational and Psychological Measurement
Published online on October 10, 2016
Abstract
The Angoff standard setting method relies on content experts to review exam items and make judgments about the expected performance of the minimally proficient examinee. Unfortunately, content experts may at times have gaps in their understanding of specific exam content. These gaps are particularly likely when the content domain is broad and/or highly technical, or when non-expert stakeholders (e.g., parents, administrators, or union representatives) are included on a standard setting panel. When judges lack expertise regarding specific exam content, the ratings associated with those items may be biased. This study illustrates the impact of rating unfamiliar items on Angoff passing scores by comparing Angoff ratings for typical items with ratings for items that judges identified as containing unfamiliar content. The results indicate that judges tend to perceive unfamiliar items as artificially difficult, resulting in systematically lower Angoff ratings. The results suggest that when judges are forced to rate unfamiliar items, the validity of the resulting classification decision may be jeopardized.
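To make the procedure described above concrete, the sketch below shows one common way an Angoff passing score is derived: each judge estimates, per item, the probability that a minimally proficient examinee answers correctly; item ratings are averaged across judges and summed to yield the raw cut score. The ratings and panel size here are hypothetical and are not taken from the study.

```python
# Hypothetical Angoff panel: rows = judges, columns = exam items.
# Each value is a judge's estimated probability (0-1) that a minimally
# proficient examinee answers that item correctly.
ratings = [
    [0.70, 0.55, 0.80, 0.40],  # judge 1
    [0.65, 0.60, 0.75, 0.35],  # judge 2
    [0.75, 0.50, 0.85, 0.45],  # judge 3
]

n_judges = len(ratings)
n_items = len(ratings[0])

# Mean rating per item, averaged across judges.
item_means = [
    sum(judge[i] for judge in ratings) / n_judges for i in range(n_items)
]

# The raw Angoff cut score is the sum of the item means: the expected
# raw score of the minimally proficient examinee on this exam.
cut_score = sum(item_means)
print(round(cut_score, 2))  # → 2.45 (out of 4 items)
```

Under this computation, systematically lower ratings on unfamiliar items (as the study reports) would pull the cut score downward, which is why such rating bias can threaten the resulting pass/fail classification.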