Task‐free auditory EEG paradigm for probing multiple levels of speech processing in the brain

Psychophysiology

Abstract

While previous studies on language processing have highlighted several ERP components in relation to specific stages of sound and speech processing, no study has yet combined them to obtain a comprehensive picture of language abilities in a single session. Here, we propose a novel task‐free paradigm aimed at assessing multiple levels of speech processing by combining various speech and nonspeech sounds in an adaptation of a multifeature passive oddball design. We recorded EEG in healthy adult participants, who were presented with these sounds in the absence of sound‐directed attention while being engaged in a primary visual task. This produced a range of responses indexing various levels of sound processing and language comprehension: (a) the P1‐N1 complex, indexing obligatory auditory processing; (b) P3‐like dynamics associated with involuntary attention allocation to unusual sounds; (c) enhanced responses to native speech (as opposed to nonnative phonemes) from ∼50 ms after phoneme onset, indicating phonological processing; (d) an amplitude advantage for familiar real words over meaningless pseudowords, indexing automatic lexical access; (e) topographic differences in the cortical activation elicited by action verbs versus concrete nouns, likely linked to the processing of lexical semantics. These multiple indices of speech‐sound processing were acquired in a single attention‐free setup that does not require any task or subject cooperation; subject to future research, the present protocol may potentially be developed into a useful tool for assessing the status of auditory and linguistic functions in uncooperative or unresponsive participants, including a range of clinical or developmental populations.

Psychophysiology, Volume 55, Issue 11, November 2018.