An Empirical Study of Design Parameters for Assessing Differential Impacts for Students in Group Randomized Trials
Evaluation Review: A Journal of Applied Social Research
Published online on October 18, 2016
Abstract
Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about parameters important for assessing differential impacts.
This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters.
Effect sizes and minimum detectable effect sizes (MDES) for average and differential impacts are calculated, before and after conditioning on the effects of covariates, using results from several CRTs. The relative sensitivity of these designs for detecting average and differential impacts is also examined.
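For context, a standard expression for the MDES of the average impact in a two-level CRT is sketched below, following the usual formulation for cluster-randomized designs (e.g., Bloom's); the notation, including the covariate terms $R_1^2$ and $R_2^2$, is illustrative and not drawn from the article.

\[
\mathrm{MDES} \;=\; M_{J-g-2}\,
\sqrt{\frac{\rho\,(1-R_2^2)}{P(1-P)\,J} \;+\; \frac{(1-\rho)\,(1-R_1^2)}{P(1-P)\,J\,n}}
\]

where $J$ is the number of clusters, $n$ the number of students per cluster, $P$ the proportion of clusters assigned to treatment, $\rho$ the intraclass correlation coefficient, $R_1^2$ and $R_2^2$ the proportions of variance explained by covariates at the student and cluster levels, and $M_{J-g-2}$ a multiplier based on $t$ distributions with $J-g-2$ degrees of freedom given $g$ cluster-level covariates. Conditioning on covariates enters through the $R^2$ terms, which is why parameters are reported both before and after covariate adjustment.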
Student outcomes from six CRTs are analyzed.
The outcome measures are achievement in math, science, reading, and writing.
The ratio of between-cluster variation in the slope of the moderator to total variance, termed the "moderator gap variance ratio," is important for designing studies to detect differences in impact between student subgroups. This quantity is the analogue, for differential impacts, of the intraclass correlation coefficient used in planning studies of average impacts. Typical values were .02 for gender and .04 for socioeconomic status. For the studies considered, estimates of differential impact were in many cases larger than estimates of average impact, and after conditioning on the effects of covariates, similar power was achieved for detecting average and differential impacts of the same size.
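One way to formalize this quantity, offered as an illustrative sketch rather than the article's exact specification, is a two-level model with a student-level moderator $S_{ij}$ (e.g., an indicator for gender or socioeconomic status) and a cluster-level treatment indicator $T_j$; the denominator taken as "total variance" below, $\tau_{00}+\sigma^2$, is one common convention and an assumption of this sketch.

\begin{align*}
\text{Level 1:}\quad & Y_{ij} = \beta_{0j} + \beta_{1j} S_{ij} + e_{ij}, \qquad e_{ij} \sim N(0,\sigma^2),\\
\text{Level 2:}\quad & \beta_{0j} = \gamma_{00} + \gamma_{01} T_j + u_{0j}, \qquad \beta_{1j} = \gamma_{10} + \gamma_{11} T_j + u_{1j},
\end{align*}

with $\operatorname{Var}(u_{0j}) = \tau_{00}$ and $\operatorname{Var}(u_{1j}) = \tau_{11}$. The differential impact is the cross-level interaction $\gamma_{11}$, and the moderator gap variance ratio is

\[
\omega \;=\; \frac{\tau_{11}}{\tau_{00} + \sigma^2},
\]

the between-cluster variance in the moderator slope divided by the total outcome variance. Just as the intraclass correlation $\rho = \tau_{00}/(\tau_{00}+\sigma^2)$ governs power to detect the average impact $\gamma_{01}$, $\omega$ governs power to detect $\gamma_{11}$.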
Measuring differential impacts is important for addressing questions of equity and generalizability and for guiding the interpretation of subgroup impact findings. Adequate power for doing so is in some cases achievable with CRTs designed to measure average impacts. Continued collection of empirical estimates of the parameters needed for assessing differential impacts is the next step.