
Corrections for Criterion Reliability in Validity Generalization: A False Prophet in a Land of Suspended Judgment


Industrial and Organizational Psychology


Abstract

The results of meta‐analytic (MA) and validity generalization (VG) studies continue to be impressive. In contrast to earlier findings that capped the variance accounted for in job performance at roughly 16%, many recent studies suggest that a single predictor variable can account for between 16% and 36% of the variance in some aspect of job performance. This article argues that this "enhancement" in variance accounted for is often attributable not to improvements in science but to a dumbing down of the standards for the values of statistics used in correction equations. With rare exceptions, applied researchers have suspended judgment about what is and is not an acceptable threshold for criterion reliability in their quest for higher validities. We demonstrate a statistical dysfunction that is a direct result of using low criterion reliabilities in corrections for attenuation. Corrections typically applied to a single predictor in a VG study are instead applied to multiple predictors. A multiple correlation analysis is then conducted on corrected validity coefficients. It is shown that the corrections often used in single predictor studies yield a squared multiple correlation that appears suspect. Basically, the multiple predictor study exposes the tenuous statistical foundation of using abjectly low criterion reliabilities in single predictor VG studies. Recommendations for restoring scientific integrity to the meta‐analyses that permeate industrial–organizational (I–O) psychology are offered.

Industrial and Organizational Psychology, Volume 7, Issue 4, Pages 478-500, December 2014.
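To illustrate the mechanism the abstract describes, the sketch below applies the classic correction for attenuation on the criterion side (corrected validity = observed validity / sqrt of criterion reliability) to several predictors at once and then computes the squared multiple correlation from the corrected validities. The observed validities, predictor intercorrelations, and reliability values are hypothetical, chosen only to show how progressively lower criterion reliabilities inflate the corrected R²; they are not taken from the article.

```python
import numpy as np

# Hypothetical observed predictor-criterion correlations (illustrative values only).
r_xy = np.array([0.25, 0.30, 0.28])

# Hypothetical predictor intercorrelation matrix (illustrative values only).
R_xx = np.array([[1.00, 0.40, 0.35],
                 [0.40, 1.00, 0.45],
                 [0.35, 0.45, 1.00]])


def corrected_multiple_r2(r_xy, R_xx, criterion_reliability):
    """Correct each validity for criterion unreliability, then compute R^2.

    Classic correction for attenuation (criterion side only):
        r_corrected = r_observed / sqrt(r_yy)
    Squared multiple correlation from corrected validities:
        R^2 = r_c' * R_xx^{-1} * r_c
    """
    r_c = r_xy / np.sqrt(criterion_reliability)
    return float(r_c @ np.linalg.solve(R_xx, r_c))


# As the assumed criterion reliability drops, the corrected R^2 climbs,
# even though the observed data have not changed.
for r_yy in (0.80, 0.60, 0.40, 0.30):
    r2 = corrected_multiple_r2(r_xy, R_xx, r_yy)
    print(f"criterion reliability = {r_yy:.2f} -> corrected R^2 = {r2:.3f}")
```

With these illustrative numbers, the corrected R² roughly triples as the assumed criterion reliability falls from .80 to .30, which is the kind of inflation the authors attribute to suspended judgment about acceptable reliability thresholds rather than to any genuine gain in predictive power.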