
Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs


Behavior Modification (formerly Behavior Modification Quarterly)

Published online on

Abstract

Single-case experimental designs (SCEDs) have been used increasingly in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data either manually or with data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level; little is known about the coding of individual data points. We focused on four software programs that can be used for this purpose (Ungraph, DataThief, WebPlotDigitizer, and XYit) and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. The results indicate that the reliability and validity of the retrieved data are independent of the specific software program but depend on the individual single-case study graphs. The programs differed in usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was rated unacceptable, and the time needed to retrieve its data was double that of the other three programs. WebPlotDigitizer was the only program free to use. Overall, WebPlotDigitizer proved the best option in terms of usability, data retrieval time, and cost, although the usability scores of Ungraph were also strong.
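The validity comparison described above, matching values extracted from a graph against the real underlying data, can be illustrated with a minimal sketch. All numbers below are hypothetical, not taken from the study; the sketch simply shows one common way such agreement is quantified (Pearson correlation and mean absolute error), not the authors' actual analysis.

```python
# Hypothetical example: true published values vs. values read off a graph
# with a data extraction program. Neither series comes from the study.
true_vals = [3.0, 5.0, 4.0, 6.0, 7.0]   # assumed "real" data
extracted = [3.1, 4.9, 4.0, 6.2, 6.8]   # assumed extracted data

n = len(true_vals)
mean_t = sum(true_vals) / n
mean_e = sum(extracted) / n

# Pearson correlation: linear agreement between true and extracted values
cov = sum((t - mean_t) * (e - mean_e) for t, e in zip(true_vals, extracted))
var_t = sum((t - mean_t) ** 2 for t in true_vals)
var_e = sum((e - mean_e) ** 2 for e in extracted)
pearson_r = cov / (var_t * var_e) ** 0.5

# Mean absolute error: average point-by-point extraction discrepancy
mae = sum(abs(t - e) for t, e in zip(true_vals, extracted)) / n

print(round(pearson_r, 3), round(mae, 3))
```

A per-point metric such as the absolute error complements the aggregate correlation, which matters here because the study's point is that aggregate agreement can mask variation in how individual data points are coded.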