In‐basket validity: A systematic review
International Journal of Selection and Assessment
Published online on February 18, 2014
Abstract
In‐baskets are high‐fidelity simulations often used to predict performance in a variety of jobs, including law enforcement, clerical, and managerial occupations. They measure constructs not typically assessed by other simulations (e.g., administrative and managerial skills, and procedural and declarative job knowledge). We compiled the largest known database (k = 31; N = 3,958) to address the criterion‐related validity of in‐baskets and possible moderators. Moderators included features of the in‐basket (content: generic vs. job‐specific; scoring approach: objective vs. subjective) and features of the validity studies (design: concurrent vs. predictive; source: published vs. unpublished). Sensitivity analyses assessed the robustness of the results to various biases. Results showed that the operational criterion‐related validity of in‐baskets was sufficiently high to justify their use in high‐stakes settings. Moderator analyses provided useful guidance for developers and users regarding content and scoring.
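For readers unfamiliar with how such validity estimates are aggregated, the following is a minimal sketch of the standard sample‐size‐weighted ("bare‐bones") estimator commonly used in psychometric meta‐analysis; it is offered for illustration only and may differ from the exact estimators and corrections applied in this study. Here $r_i$ and $N_i$ are the observed validity coefficient and sample size of study $i$ (of $k$ studies), and $r_{yy}$ is the criterion reliability.

$$\bar{r} \;=\; \frac{\sum_{i=1}^{k} N_i\, r_i}{\sum_{i=1}^{k} N_i}, \qquad \hat{\rho}_{\mathrm{op}} \;=\; \frac{\bar{r}}{\sqrt{r_{yy}}}$$

The second expression corrects the weighted mean observed validity for criterion unreliability to yield an operational validity estimate; moderator analyses of the kind described above typically repeat this computation within subgroups of studies (e.g., generic vs. job‐specific content) and compare the resulting estimates.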