
Neuroimaging Research: From Null-Hypothesis Falsification to Out-of-Sample Generalization


Educational and Psychological Measurement


Abstract

Brain-imaging technology has boosted the quantification of neurobiological phenomena underlying human mental operations and their disturbances. Since its inception, drawing inference on neurophysiological effects has hinged on classical statistical methods, especially the general linear model. The tens of thousands of variables per brain scan were routinely tackled by independent statistical tests on each voxel. This circumvented the curse of dimensionality in exchange for neurobiologically imperfect observation units, a challenging multiple-comparisons problem, and limited scaling to currently growing data repositories. Yet the ever-increasing information granularity of neuroimaging data repositories has prompted a rapidly growing adoption of statistical learning algorithms. These scale naturally to high-dimensional data, extract models from data rather than prespecifying them, and are empirically evaluated for extrapolation to unseen data. The present article portrays commonalities and differences between long-standing classical inference and emerging generalization inference relevant for conducting neuroimaging research.
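
The contrast drawn in the abstract can be made concrete with a minimal sketch, not taken from the article itself: mass-univariate per-voxel testing with multiple-comparisons correction versus a model fit on all voxels jointly and judged by its accuracy on held-out data. The simulated data, sample sizes, and the use of scipy/scikit-learn are illustrative assumptions.

```python
# Hypothetical illustration (not the authors' code) of the two inference styles
# described in the abstract, using simulated "brain scans".
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_voxels = 40, 10_000          # assumed sample and scan dimensions
y = np.repeat([0, 1], n_subjects // 2)     # two experimental groups
X = rng.standard_normal((n_subjects, n_voxels))
X[y == 1, :50] += 0.8                      # a small set of truly "active" voxels

# Classical inference: one independent test per voxel, then Bonferroni
# correction to handle the massive multiple-comparisons problem.
t_vals, p_vals = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
significant = p_vals < (0.05 / n_voxels)
print(f"voxels surviving Bonferroni correction: {significant.sum()}")

# Generalization inference: fit a regularized model on all voxels jointly and
# evaluate its extrapolation to unseen subjects via cross-validation.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"out-of-sample prediction accuracy: {accuracy:.2f}")
```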