An Examination of Assessment Fidelity in the Administration and Interpretation of Reading Tests
Remedial and Special Education
Published online on November 12, 2012
Abstract
Researchers have expressed concern about implementation fidelity in intervention research but have not extended that concern to assessment fidelity, or the extent to which pre-/posttests are administered and interpreted as intended. In studies of reading interventions, data gathering heavily influences the identification of students, the curricular components delivered, and the interpretation of outcomes. However, information on assessment fidelity is rarely reported. This study used direct observation to examine the fidelity with which individuals paid to serve as testers for research purposes administered and interpreted reading assessments for middle school students. Of 589 testing packets, 45 (8% of the total) had to be removed from the data set for significant abnormalities, and another 484 (91% of the remaining packets) contained correctable errors that were detected only through double scoring. Results indicate that reading assessments require extensive training, highly structured protocols, and ongoing calibration to produce reliable and valid results useful in applied research.