UK Researchers Pinpoint How Brain Detects Emotion

Because face perception deficits occur in patients with autism and other neurodevelopmental disorders, Cindy Hagan, from the University of York (United Kingdom), and colleagues studied brain activity using magnetoencephalography (MEG). Each of 19 study subjects viewed photographs of fearful and neutral facial expressions and listened to fearful and neutral sounds. The researchers found that responses in the right posterior superior temporal sulcus were much greater when subjects both saw and heard the emotional faces and voices than when either was presented alone; no comparable enhancement occurred for neutral faces and voices.

The team comments: “Previous models of face perception suggested that this region of the brain responds to the face alone, but we demonstrated a supra-additive response to emotional faces and voices presented together – the response was greater than the sum of the parts. This is important because emotions in everyday life are often intrinsically multimodal – expressed through face, posture and voice at the same time.”
Cindy C. Hagan, Will Woods, Sam Johnson, Andrew J. Calder, Gary G. R. Green, Andrew W. Young. “MEG demonstrates a supra-additive response to facial and vocal emotion in the right superior temporal sulcus.” PNAS, published online before print November 11, 2009; doi:10.1073/pnas.0905792106.