Assessing and Strengthening Evidence-Based Program Registries' Usefulness for Social Service Program Replication and Adaptation

Evaluation Review: A Journal of Applied Social Research

Abstract

Background:

Government and private funders increasingly require social service providers to adopt program models deemed "evidence based," particularly as defined by evidence-based program registries such as the What Works Clearinghouse and the National Registry of Evidence-Based Programs and Practices. These registries summarize the evidence about programs' effectiveness, giving near-exclusive priority to evidence from experimental-design evaluations. The registries' goal is to aid decision making about program replication, but critics suspect that the emphasis on experimental-design evaluations, while ensuring strong internal validity, may inadvertently undermine that goal, which requires strong external validity as well.

Objective:

The objective of this study is to determine the extent to which the registries’ reports provide information about context-specific program implementation factors that affect program outcomes and would thus support decision making about program replication and adaptation.

Method:

A research-derived rubric was used to rate the extent of context-specific reporting in the full population of evidence summaries (N = 55) for youth development programs across seven major registries.

Findings:

Nearly all (91%) of the reports provide context-specific information about program participants, but far fewer provide context-specific information about implementation fidelity and other variations in program implementation (55%), the program’s environment (37%), costs (27%), quality assurance measures (22%), implementing agencies (19%), or staff (15%).

Conclusion:

Evidence-based program registries provide insufficient information to guide context-sensitive decision making about program replication and adaptation. Registries should supplement their evidence base with nonexperimental evaluations and revise their methodological screens and synthesis-writing protocols to prioritize reporting—by both evaluators and the registries themselves—of context-specific implementation factors that affect program outcomes.