Preventing Rater Biases in 360-Degree Feedback by Forcing Choice
Organizational Research Methods
Published online on September 15, 2016
Abstract
We examined the effects of response biases on 360-degree feedback using a large sample (N = 4,675) of organizational appraisal data. Sixteen competencies were assessed by the peers, bosses, and subordinates of 922 managers, as well as self-assessed, using the Inventory of Management Competencies (IMC) administered in two formats—Likert scale and multidimensional forced choice. Likert ratings were subject to strong response biases, making even theoretically unrelated competencies correlate highly. Modeling a latent common method factor, which represented nonuniform distortions similar to those of the "ideal-employee" factor in both self- and other assessments, improved the validity of competency scores, as evidenced by meaningful second-order factor structures, better interrater agreement, and better convergent correlations with an external personality measure. Forced-choice rankings modeled with Thurstonian item response theory (IRT) yielded construct and convergent validities as good as those of the bias-controlled Likert ratings, and slightly better rater agreement. We suggest that the mechanism for these enhancements is finer differentiation between behaviors in comparative judgments, and we advocate the operational use of the multidimensional forced-choice response format as an effective bias prevention method.