Abstract
The time course of the influence of emotional voice on facial expression recognition was examined using the event-related potential (ERP) technique. Valence-congruent and valence-incongruent "voice-face" pairs were constructed, and participants judged whether the valence of the emotional voice matched that of the facial expression. Behavioral results showed faster responses to congruent "voice-face" pairs. ERP results showed that at 70-130 ms and 220-450 ms, facial expressions in the incongruent condition elicited more negative waveforms than in the congruent condition; at 450-750 ms, facial expressions in the incongruent condition elicited a larger late positive component than in the congruent condition. These findings indicate that emotional voice exerts a cross-modal influence on multiple stages of facial expression recognition.
Continuous integration of information from multiple sensory inputs is essential to human daily life. However, the mechanisms underlying cross-modal stimulus processing have received insufficient attention, especially for stimuli carrying emotional significance. This study aimed to investigate the neural mechanism of the interaction between emotional voice and facial expression. The event-related potential (ERP) technique and a cross-modal priming paradigm were used to explore the influence of emotional voice on the recognition of facial expression. The materials consisted of 240 prime-target pairs, with voices as primes and facial expressions as targets. Neutral semantic words were spoken with happy or angry prosody and followed by congruent or incongruent facial expressions. Participants were asked to judge the consistency between the valence of the emotional voice and that of the facial expression while ERPs were recorded. Each trial began with a central fixation cross presented for 500 ms. The priming stimulus (emotional voice) was then presented through headphones, and the central fixation cross remained on the screen until the target (facial expression) appeared. The inter-stimulus interval (ISI) was 1000 ms. The facial expression was presented for 500 ms, followed by a black screen for 2000-2200 ms. After the presentation of the facial expression, participants were instructed to indicate whether the valence of the emotional voice and the facial expression was consistent by pressing a mouse button as quickly and accurately as possible. The results were analyzed with repeated-measures ANOVA. The response time (RT) results showed that participants responded more quickly on congruent trials than on incongruent trials, suggesting a priming effect of emotional voice on the recognition of emotional facial expression.
The analysis of the ERP waveforms indicated that emotional voice modulated the time course of facial expression processing. In the 70-130 ms and 220-450 ms time windows, facial expressions evoked more negative waveforms on incongruent trials than on congruent trials. In the 450-750 ms time window, facial expressions evoked a larger late positive component (LPC) on incongruent trials than on congruent trials. The ERP results suggest that emotional voice influenced the processing of emotional facial expression at the early perceptual stage, the emotional-significance evaluation stage, and the subsequent decision-making stage. This study demonstrates that emotional voice can influence the processing of facial expression in a cross-modal manner and provides converging evidence for the interaction of multisensory inputs.
Source
Journal of Psychological Science (《心理科学》)
CSSCI
CSCD
Peking University Core Journal
2013, No. 1, pp. 33-37 (5 pages)
Funding
National Natural Science Foundation of China (31100816)
Humanities and Social Sciences Youth Fund of the Ministry of Education (09YJCLX021)
Specialized Research Fund for the Doctoral Program of Higher Education (20101108120005)
Keywords
emotional voice, facial expression, cross-modal, event-related potential (ERP)