Self-related stimuli, such as one’s own face or name, appear to be processed differently from non-self stimuli and to recruit greater attentional resources, as indexed by larger amplitudes of the P3 event-related potential (ERP) component. Nonetheless, the differential processing of self-related vs. non-self information remains poorly understood in the voice domain. The present study investigated the electrophysiological correlates of processing self-generated vs. non-self voice stimuli when they are the focus of attention.
ERP data were recorded from twenty right-handed healthy males during an oddball task comprising pre-recorded self-generated (SGV) and non-self (NSV) voice stimuli. Both voices served as standard and deviant stimuli in distinct experimental blocks. SGV stimuli elicited a more negative N2 and a more positive P3 than NSV stimuli. No association was found between the ERP data and the acoustic properties of the voices.
These findings demonstrate an attentional bias toward self-generated relative to non-self voice stimuli at both earlier (N2) and later (P3) processing stages. They suggest that the representation of one’s own voice may carry greater affective salience than an unfamiliar voice, consistent with the modulatory role of stimulus salience on P3 amplitude.