Double dissociation of configural and featural face processing on P1 and P2 components as a function of spatial attention
Published online on May 11, 2016
Abstract
Face recognition relies on both configural and featural processing. Previous research has shown that the P1 component is sensitive to configural face processing, but it is unclear whether any ERP component is sensitive to featural face processing; moreover, if such a component exists, its timing relative to P1 is unknown. To avoid confounds from physical stimulus differences between configural and featural face manipulations on ERP components, a spatial attention paradigm was employed in which participants were instructed to attend to either an image stream (faces and houses) or an alphanumeric character stream. The steady-state visual evoked potential (SSVEP) results clearly demonstrated that participants could selectively attend to the different streams. Importantly, in the attended condition, configural face processing elicited a larger posterior P1 (approximately 128 ms) than featural face processing, whereas P2 (approximately 248 ms) was larger for featural than for configural face processing. The interaction between attention and face processing type (configural vs. featural) on the P1 and P2 components indicates that different mechanisms underlie configural and featural face processing as a function of spatial attention. Whereas the P1 result confirms previous findings separating configural and featural face processing, the newly observed P2 finding extends this separation to a double dissociation. Therefore, configural and featural face processing are modulated differently by spatial attention, and configural face processing precedes featural face processing.
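The abstract does not specify the analysis pipeline, so the following is only a minimal sketch of how the reported P1 and P2 effects could be quantified from single-trial epochs. The array shapes, posterior-channel selection, window widths around the reported 128 ms and 248 ms latencies, and the per-trial paired t-test are all assumptions for illustration (in practice the comparison would typically be run over participant-level averages).

```python
# Hypothetical sketch: quantifying the P1 (~128 ms) and P2 (~248 ms) effects
# from single-trial EEG epochs. Data shapes, channel selection, window bounds,
# and the statistical unit are assumptions, not the authors' actual pipeline.
import numpy as np
from scipy import stats

def mean_amplitude(epochs, times, window):
    """Mean amplitude per trial within a latency window.

    epochs : array, shape (n_trials, n_channels, n_times), in microvolts
    times  : array, shape (n_times,), in seconds
    window : (start, end) in seconds
    """
    mask = (times >= window[0]) & (times <= window[1])
    # Average over (assumed posterior) channels and over the time window.
    return epochs[:, :, mask].mean(axis=(1, 2))

def compare_conditions(configural, featural, times, window, label):
    """Paired comparison of mean amplitude between conditions."""
    a = mean_amplitude(configural, times, window)
    b = mean_amplitude(featural, times, window)
    t, p = stats.ttest_rel(a, b)
    print(f"{label}: configural={a.mean():.2f} uV, featural={b.mean():.2f} uV, "
          f"t={t:.2f}, p={p:.4f}")

if __name__ == "__main__":
    # Simulated stand-in data: 30 trials x 4 posterior channels x 500 samples.
    rng = np.random.default_rng(0)
    times = np.linspace(-0.1, 0.4, 500)          # -100 to 400 ms
    configural = rng.normal(0.0, 1.0, (30, 4, 500))
    featural = rng.normal(0.0, 1.0, (30, 4, 500))

    # Windows centred on the reported latencies (widths are assumed).
    compare_conditions(configural, featural, times, (0.108, 0.148), "P1")
    compare_conditions(configural, featural, times, (0.228, 0.268), "P2")
```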