Reviewing the Reviews: Examining Similarities and Differences Between Federally Funded Evidence Reviews

Evaluation Review: A Journal of Applied Social Research

Abstract

Background:

The federal government’s emphasis on supporting the implementation of evidence-based programs has fueled a need to conduct and assess rigorous evaluations of programs. Through partnerships with researchers, policy makers, and practitioners, evidence reviews—projects that identify, assess, and summarize existing research in a given area—play an important role in supporting the quality of these evaluations and the use of their findings. These reviews encourage the use of sound scientific principles to identify, select, and implement evidence-based programs. The goals and standards of each review determine its conclusions about whether a given evaluation is of high quality or a program is effective. When faced with results from multiple program evaluations, decision makers can find it difficult to synthesize the body of evidence.

Sample:

This study examined 14 federally funded evidence reviews to identify commonalities and differences in their assessments of evidence of effectiveness.

Findings:

There were both similarities and significant differences across the reviews. In general, the evidence reviews agreed on the broad critical elements to consider when assessing evaluation quality, such as research design, low attrition, and baseline equivalence. The similarities suggest that, despite differences in topic and in the availability of existing research, reviews typically favor evaluations that limit potential bias in their estimates of program effects. However, the way in which some of these elements were assessed differed; for example, reviews set different thresholds for what constituted acceptable amounts of attrition. Further, and more substantially, the reviews showed greater variation in how they conceptualized "effectiveness."