Mechanisms Underlying Face-Vocalization Integration in VLPFC
The perception and integration of congruent communication stimuli are necessary for the appropriate evaluation and comprehension of an audio-visual message. Our studies have shown that there are several types of multisensory interactions: linear and non-linear, enhanced and inhibitory (Sugihara et al., 2006). A number of factors affect sensory integration, including temporal coincidence and stimulus congruency; these are thought to underlie the successful merging of two intermodal stimuli into a coherent perceptual representation and are especially important in speech perception. We have begun to explore the role of the prefrontal cortex in encoding congruent face-vocalization stimuli in order to understand the essential components of face-vocalization integration. To this end, we have examined changes in neural activity when face-vocalization pairs are mismatched and presented either during fixation or in an audio-visual non-match-to-sample task. Our data indicate that non-human primates can detect these mismatches and that single cells in VLPFC change their firing in response to incongruent and to temporally offset face-vocalization stimuli compared with congruent audio-visual stimuli. Continued analysis and recordings are aimed at further defining the role of VLPFC in the integration of audio-visual face and vocalization information for the purpose of communication.
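The core comparison behind these results, and behind each of the mismatch manipulations listed below, is whether a unit's trial-wise response differs between congruent and altered AV pairs. The following is a minimal sketch of such a comparison, not the analysis pipeline used in the study; the function names, the 0-500 ms response window, and the choice of a rank-sum test are our own illustrative assumptions.

```python
# Sketch: does a single unit respond differently to congruent vs.
# incongruent face-vocalization pairs? Window bounds and names are
# illustrative assumptions, not parameters from the study.
import numpy as np
from scipy.stats import mannwhitneyu

def spike_count(spike_times, t_start=0.0, t_stop=0.5):
    """Count spikes in a stimulus-aligned response window (seconds)."""
    s = np.asarray(spike_times)
    return int(np.sum((s >= t_start) & (s < t_stop)))

def congruency_effect(congruent_trials, incongruent_trials, alpha=0.05):
    """Compare per-trial spike counts between congruent and incongruent
    AV presentations. Each argument is a list of per-trial arrays of
    spike times aligned to stimulus onset."""
    c = [spike_count(t) for t in congruent_trials]
    i = [spike_count(t) for t in incongruent_trials]
    stat, p = mannwhitneyu(c, i, alternative="two-sided")
    return {"congruent_mean": float(np.mean(c)),
            "incongruent_mean": float(np.mean(i)),
            "p_value": float(p),
            "modulated": p < alpha}
```

A non-parametric test is used here because spike counts over a few dozen trials are rarely normally distributed; the same comparison applies whether the incongruent pair mismatches semantics, caller identity, or acoustic and visual features.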
Our work is aimed at determining whether multimodal prefrontal neurons detect:
- Changes in semantic meaning: If prefrontal neurons are sensitive to the semantic congruence of a vocalization and its corresponding facial gesture, we expect the neuronal response to a semantically congruent AV pair to differ from the response to a semantically incongruent pair.
- Changes in identity: By mismatching a vocalization made by caller A with the facial gesture of caller B issuing the same call type, we can test the sensitivity of prefrontal neurons to caller identity and the subtle acoustic changes that accompany it. We expect that a neuron sensitive to such changes will be modulated by the acoustic differences between callers.
- Changes in auditory or visual features: If VLPFC neurons are sensitive to acoustic features, then altering these features in the vocalization of an AV pair should produce a significant change in the neuronal response compared with the congruent audio-visual stimulus; we expect this to occur predominantly in auditory and multisensory neurons. Similar alterations of the visual stimulus in an AV pair should evoke changes, relative to the congruent AV pair, in predominantly visual neurons.
- Temporal offset: A number of brain regions, including the superior colliculus and frontal-lobe speech regions in the human brain, are sensitive to the temporal coincidence of cross-modal events. We expect prefrontal neurons to show a significant change in response to temporally offset stimuli; one way to quantify this is sketched below.
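To illustrate the temporal-offset comparison, here is a hedged sketch of one way to ask whether a unit's response varies with audio-visual asynchrony. The stimulus-onset-asynchrony (SOA) values, the per-trial counts in the example call, and the use of a Kruskal-Wallis test across offsets are illustrative assumptions, not details from the study.

```python
# Sketch: does a unit's response vary with audio-visual temporal offset?
# SOA values and the example counts are made up for illustration.
import numpy as np
from scipy.stats import kruskal

def offset_sensitivity(counts_by_soa):
    """counts_by_soa maps a stimulus-onset asynchrony in ms
    (0 = synchronous; positive = auditory track delayed) to a list of
    per-trial spike counts. A Kruskal-Wallis test asks whether the
    response differs across offsets."""
    soas = sorted(counts_by_soa)
    stat, p = kruskal(*[counts_by_soa[s] for s in soas])
    means = {s: float(np.mean(counts_by_soa[s])) for s in soas}
    return {"mean_count_by_soa": means, "p_value": float(p)}

# Illustrative call with fabricated counts for three offsets:
# offset_sensitivity({0: [12, 9, 11, 10], 100: [7, 6, 8, 7], 200: [5, 4, 6, 5]})
```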