DS10 - Défi de tous les savoirs

Functional importance of the cortical core in the high density counterstream architecture – ARCHI-CORE

Cognitive architectures and multisensory integration: exploration of the dynamics and circuits supporting voice-face recognition and integration.

Communication in primates largely depends on social interactions involving vocalizations and the recognition of facial expressions. This poses an acute problem in terms of multisensory integration. Here we combine invasive experiments in macaques with whole-brain imaging in humans to determine the dynamics of, and structural constraints on, voice-face recognition.

Multimodal circuits and dynamics in humans and NHP

We hypothesize that a cortical core plays an important role in the cognitive architectures supporting conscious processing and certain aspects of multisensory integration. Communication vocalizations and visual face perception show remarkably powerful interactions in both humans and NHP. We undertake parallel investigations in these two species using quantitative anatomy and electrophysiology in order to determine the homologous features of their cognitive architecture. We use fMRI to determine the neural processes underlying face-voice integration within a predictive-coding framework, a Bayesian model of cortical function according to which the brain infers, from sensory inputs, an internal model of the outside world. In turn, the internal model is used to create expectations about sensory inputs. It has thus been suggested that predictive signals reflect top-down processes, whereas prediction-error signals constitute bottom-up processes.

In humans and NHP we analyzed how long-term sensory deprivation or a switch of sensory modality can affect multisensory gain. The underlying hypothesis is that engagement in processing a single modality can exclude or reduce attentional mechanisms in other modalities. Further, our previous work in deaf patients showed that the temporal areas that specifically process the human voice undergo crossmodal reorganization and become involved in processing visual speech information. Our hypothesis is that, despite functional recovery through a cochlear implant, deaf CI patients will show a strong visual bias toward faces when engaged in a voice-recognition task.

Establishing the homology between the cortical core of the human and the macaque requires a multidisciplinary approach in both species. We supplement the human studies with the invasive techniques available in the NHP and thus determine the hierarchical organization of the processes involved in each species. In the macaque we carry out quantitative anatomical investigations using retrograde tract tracing, which allows us to construct large-scale structural models of the cortex. This approach is complemented in macaque and human by tractography based on diffusion MRI, allowing correlation of tract tracing and tractography in the macaque and better informing our large-scale models of human brain architecture. In the macaque we implant subdural electrodes over wide expanses of cortex, allowing recording of inter-areal theta-, beta- and gamma-band rhythms. In humans we study gender-discrimination tasks with combined voice and face stimuli in fMRI.
First, in humans and monkeys, we used a simple detection task with natural visual and auditory stimuli, including conspecific voices and faces. To test whether the multisensory benefits (the redundant-signals effect) exceeded the facilitation predicted by probability summation, we applied the race-model inequality (Miller, 1982). In a co-activation model, the multisensory stimuli converge and interact prior to the initiation of the behavioral response, lowering the threshold for initiating a response.
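The race-model test can be sketched as follows. The reaction-time data, the grid of test times, and the helper functions are illustrative assumptions of this sketch, not the project's actual analysis code:

```python
import random

def ecdf(rts, t):
    """Proportion of reaction times at or below time t (empirical CDF)."""
    return sum(1 for rt in rts if rt <= t) / len(rts)

def race_model_violated(rt_a, rt_v, rt_av, t):
    """Miller (1982) race-model inequality at time t:
    under probability summation, P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V).
    A violation (True) indicates genuine audio-visual co-activation."""
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) > bound

# Toy reaction times (ms): bimodal responses faster than either unimodal one.
random.seed(0)
rt_a = [random.gauss(350, 40) for _ in range(500)]   # auditory only
rt_v = [random.gauss(360, 40) for _ in range(500)]   # visual only
rt_av = [random.gauss(280, 30) for _ in range(500)]  # audio-visual

# Times at which the bimodal CDF exceeds the race-model bound.
violations = [t for t in range(200, 500, 10)
              if race_model_violated(rt_a, rt_v, rt_av, t)]
```

With this toy data the bound is exceeded at early response times, the signature of co-activation rather than probability summation.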
In adult deaf CI patients, we used a McGurk-type protocol that requires participants to categorize by gender voice stimuli drawn from a morphing-generated continuum between a male and a female voice speaking the same syllable. Each voice was combined with a face stimulus that could be congruent or incongruent with it.
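The visual bias predicted for CI patients can be illustrated as a shift of the category boundary (point of subjective equality) along the voice-morph continuum. The response proportions below are hypothetical numbers for illustration only:

```python
def boundary(morph_steps, p_male):
    """Point of subjective equality: morph level where P('male') crosses 0.5,
    found by linear interpolation between adjacent steps."""
    pairs = list(zip(morph_steps, p_male))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if (p0 - 0.5) * (p1 - 0.5) <= 0 and p0 != p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None

steps = [0, 20, 40, 60, 80, 100]  # % male in the voice morph
# Hypothetical proportions of 'male' responses in each face condition.
p_congruent = [0.02, 0.10, 0.35, 0.70, 0.92, 0.99]    # male face shown
p_incongruent = [0.01, 0.05, 0.20, 0.55, 0.85, 0.97]  # female face shown

# A positive shift means an incongruent face pulls the voice judgment
# toward the face's gender, i.e., a visual bias.
visual_bias = boundary(steps, p_incongruent) - boundary(steps, p_congruent)
```

In real data the boundary would typically be estimated by fitting a psychometric function rather than by interpolation; the interpolation keeps the sketch self-contained.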

Human-macaque homology of cognitive architecture. We have established this using combined electrophysiology and anatomy in the macaque (Bastos et al., 2015) and magnetoencephalography in humans (Michalareas et al., 2016).
Task 5: We have completed D5.1. Using morphed voice and face stimuli, we have shown that both stimuli contribute to gender discrimination via non-additive interactions. The results support a dense cortical network for representing gender in which voice and face signals are integrated at multiple hierarchical levels, suggesting that selection processes may not be able to completely disentangle them during decision making.
We are working on D5.2, with pilot fMRI sessions to optimize the protocols, localize the cortical areas involved in face-voice integration, and estimate multi-voxel pattern classifiers for face and voice responses.
Using a simple detection task with natural visual and auditory events, we observed in both monkeys and humans a large variability of multisensory gains. Violation of the race model, which indicates true integration of bimodal stimuli, was observed only in cases of high multisensory gain. We demonstrated that the benefit of multimodal integration depends on the sensory history in which the subject is engaged.
Further, in deaf patients, despite the recovery of auditory function through a cochlear implant, processing of the human voice while attention was engaged in the auditory modality was strongly influenced by visual information, especially when face and voice stimuli were semantically incongruent. This phenomenon probably depends on the crossmodal brain reorganization that occurs during the prolonged period of deafness, as no such effect was observed in hearing control subjects tested with a simulation of a cochlear implant.

The project will inform us about the way the brain integrates information from different senses, which is increasingly viewed as a complex computational process playing a central role in brain function. Voice-face interactions play a special role in primate (human and non-human) social interactions, and the project will bring novel insight into the circuits that underlie this phenomenon. On a more fundamental level, our work will allow a better understanding of the large-scale features of cognitive architecture and explore the functional dynamics that they support. These investigations will be conducted within the emerging framework of predictive-coding theory, which is increasingly seen as able to deliver an understanding of the neural and computational mechanisms of the normal brain as well as deep insight into brain diseases.

Bastos AM, Vezoli J, Bosman CA, Schoffelen JM, Oostenveld R, Dowdall JR, et al. Visual Areas Exert Feedforward and Feedback Influences through Distinct Frequency Channels. Neuron. 2015;85(2):390-401.

Chaudhuri R, Knoblauch K, Gariel MA, Kennedy H, Wang XJ. A Large-Scale Circuit Mechanism for Hierarchical Dynamical Processing in the Primate Cortex. Neuron. 2015;88(2):419-31.

Michalareas G, Vezoli J, van Pelt S, Schoffelen JM, Kennedy H, Fries P. Alpha-Beta and Gamma Rhythms Subserve Feedback and Feedforward Influences among Human Visual Cortical Areas. Neuron. 2016.

Wang X-J, Kennedy H. Brain structure and dynamics across scales: in search of rules. Curr Opin Neurobiol. 2016;37:92-8.

Barone P., Chambaudie L., Strelnikov S., Fraysse B., Marx M., Belin P. and Deguine O. Crossmodal interactions during non linguistic auditory processing in cochlear-implanted deaf subjects. Cortex. In revision.

We have recently developed a large-scale model of the macaque cerebral cortical network in which weight-distance relations determine numerous characteristics of cortical architecture, including the existence of a high-density cortical core (Markov et al., 2013). We hypothesized that the cortical core plays an important role in the cognitive architecture supporting conscious processing and certain aspects of multisensory integration (Dehaene et al., 2014). This proposal sets out to use a multidisciplinary approach, including whole-brain functional imaging, single-cell electrophysiology and quantitative anatomy, in order to investigate the role of the core in information processing, focusing on circuits and integrative processes involved in multisensory integration. We shall complement our human studies with the invasive techniques available in the NHP, thereby enabling us to determine the hierarchical organization of the inter-areal processes involved. Communication vocalizations and visual face perception show remarkably powerful interactions in humans and NHP, leading to the unity assumption. Voice-face interactions are further suited to exploring the cortical core, as they are highly distributed over numerous levels stretching from the early auditory areas to the ventral lateral prefrontal cortex. In the NHP we shall use fMRI in order to localize voice and face areas and identify those areas showing strong integration. The animals will be used for: (i) recording single-neuron responses in order to determine the electrophysiological mechanisms underlying sensory integration; (ii) making injections of retrograde tracers into face and voice areas in order to determine the weighted and directed circuits integrating these areas into a large-scale cortical network. In addition to using our tract tracing to understand voice-face circuits, we shall use it to validate dMRI.
By combining high-resolution dMRI with tract tracing in the same brains, we shall be able to complete a much-needed evaluation of the fidelity of the imaging data, which is highly relevant for clinical practice. Our work in the NHP will specifically address the behavioural significance of voice-face interactions, using cooling probes to examine the consequences of inactivating key nodes of the voice-face circuits. In order to bind audio-visual cues into a coherent percept of a talking face, the brain synchronizes cues emanating from a common source; inferences about such combinations are generated through learning. In our human work we shall build on recent studies that support predictive-coding theory as a model for examining the neural processes underlying face-voice integration. Predictive coding is a concept derived from Bayesian models of cortical function, according to which our brain infers, from sensory inputs, an internal model of the outside world. In turn, the internal model is used to create expectations about sensory inputs. It has thus been suggested that predictive signals reflect top-down processes, whereas prediction-error signals constitute bottom-up processes. It is this link between predictive processes and the hierarchical direction of information flow that we shall investigate here, building on our previous work (Gerardin et al., 2010). We shall use fMRI to examine face-voice interaction in a gender-discrimination task, together with a scaling technique called conjoint measurement that will permit us to quantify and test the mutual influences of face and voice signals on gender perception. We shall combine the psychophysical results with fMRI data using multi-voxel pattern analysis (MVPA). MVPA has been shown to provide a sensitive measure of cortical function in imaging data; using MVPA-defined classifiers, we can test the capacity of cortical areas to decode stimuli in a manner comparable to the psychophysical data.
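The MVPA decoding logic can be sketched with simulated voxel patterns and a simple nearest-centroid classifier. The data, dimensions, and classifier choice are illustrative assumptions rather than the project's actual pipeline:

```python
import random

def centroid(patterns):
    """Mean voxel pattern across trials."""
    n = len(patterns[0])
    return [sum(p[i] for p in patterns) / len(patterns) for i in range(n)]

def classify(pattern, centroids):
    """Assign a trial's voxel pattern to the nearest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(pattern, centroids[label]))

# Simulated 50-voxel response patterns for two stimulus classes:
# half the voxels carry a small class-dependent signal plus unit noise.
random.seed(1)
def make_trials(mean_shift, n=40, voxels=50):
    return [[random.gauss(mean_shift * (v % 2), 1.0) for v in range(voxels)]
            for _ in range(n)]

train = {"face": make_trials(1.0), "voice": make_trials(-1.0)}
test = {"face": make_trials(1.0, n=20), "voice": make_trials(-1.0, n=20)}

# Fit one centroid per class on training trials, then decode held-out trials.
centroids = {label: centroid(trials) for label, trials in train.items()}
correct = sum(classify(p, centroids) == label
              for label, trials in test.items() for p in trials)
accuracy = correct / 40
```

In practice MVPA pipelines typically use cross-validated linear classifiers (e.g. SVMs) on real voxel responses; the nearest-centroid rule is the simplest stand-in that preserves the train/decode logic.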

Project coordination

Henry Kennedy (Institut Cellule Souche et Cerveau - U846)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.

Partner

WUSTL Department of Anatomy and Neurobiology
INSERM Institut Cellule Souche et Cerveau - U846
CNRS Centre de recherche Cerveau et Cognition - UMR5549

ANR grant: 460,767 euros
Beginning and duration of the scientific project: September 2014 - 48 months
