Interpreting t-Statistics Under Publication Bias: Rough Rules of Thumb

Journal of Quantitative Criminology

Abstract

Introduction

A key issue is how to interpret t-statistics when publication bias is present. In this paper we propose a set of rough rules of thumb to help readers interpret t-values in published results under publication bias. Unlike most previous methods, which rely on collections of studies, our approach evaluates the strength of evidence under publication bias when only a single study is available.

Methods

We first re-interpret t-statistics from a one-tailed hypothesis test in terms of their associated p-values under extreme publication bias, that is, when no null findings are published. We then consider the consequences of varying degrees of publication bias. We show that even under moderate publication bias, adjusting one's p-values to ensure a Type I error rate of either 0.05 or 0.01 results in far higher critical t-values than those in a conventional t-statistics table. Under the conservative assumption that publication bias occurs 20 percent of the time, a one-tailed test at a significance level of 0.05 requires a t-value of 2.311 or greater; for a two-tailed test the appropriate cutoff is 2.766. Both cutoffs are far higher than the traditional values of 1.645 and 1.96. To achieve a p-value below 0.01, the adjusted t-values are 2.865 (one-tailed) and 3.254 (two-tailed), as opposed to the traditional values of 2.326 (one-tailed) and 2.576 (two-tailed). We illustrate our approach by applying it to the hypothesis tests reported in recent issues of Criminology and the Journal of Quantitative Criminology (JQC).
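The adjustment logic can be illustrated with a small sketch. The model below is one plausible formalization, not necessarily the authors' exact derivation: under the null hypothesis, significant results (p below a cutoff) are always published, while null findings are suppressed with some probability, so the published p-values are no longer uniform and the critical value must shrink to restore the intended Type I error rate. The function name, parameters, and the normal (rather than t) approximation are illustrative assumptions, and the numbers it produces will not exactly match the paper's reported cutoffs.

```python
from statistics import NormalDist  # standard-library normal distribution

def adjusted_critical_z(alpha=0.05, suppress_prob=1.0, sig_cutoff=0.05):
    """Critical z-value restoring a true one-tailed Type I error rate of
    `alpha` when, under H0, non-significant findings (p > sig_cutoff) are
    suppressed with probability `suppress_prob`.

    Assumed selection model (an illustration, not the paper's exact one):
    P(p <= c | published, H0) = c / D for c <= sig_cutoff, where
    D = sig_cutoff + (1 - suppress_prob) * (1 - sig_cutoff).
    """
    denom = sig_cutoff + (1.0 - suppress_prob) * (1.0 - sig_cutoff)
    # Nominal p-value cutoff whose conditional probability equals alpha:
    c = alpha * denom
    # Convert the adjusted p-value cutoff back to a z (large-sample t) value.
    return NormalDist().inv_cdf(1.0 - c)

# No suppression: the usual one-tailed critical value (about 1.645).
print(round(adjusted_critical_z(suppress_prob=0.0), 3))
# Extreme bias (no null findings published): a much higher cutoff.
print(round(adjusted_critical_z(suppress_prob=1.0), 3))
```

With no suppression the function recovers the conventional one-tailed critical value, and as `suppress_prob` rises toward 1 the required cutoff climbs well above it, which is the qualitative pattern the rules of thumb describe.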

Conclusion

Under publication bias, much higher t-values are needed to restore the intended p-value. By comparing observed test statistics with the adjusted critical values, this paper provides a rough rule of thumb for readers to evaluate the degree to which a reported positive result in a single publication reflects a true positive effect. Further measures to increase the reporting of robust null findings are needed to ameliorate the problem of publication bias.