DS0707 - Interactions between the physical world, humans and the digital world

Virtual Reality for Training Doctors to Break Bad News – ACORFORMed

Submission summary

This project aims to explore multimodal natural language interaction, its theoretical basis, and its experimental validation in a virtual reality environment, through the development of an embodied conversational agent with high-level natural language communication skills. This naturally communicating avatar will be used in a specific task-oriented situation: training doctors to break bad news. The goal of the project is twofold:

- theoretical: thanks to a semantically focused context, and starting from corpus analysis, to develop a complete interaction model including verbal and non-verbal features. No such model yet exists, for any language.

- applicative: to develop a communicating avatar that interacts with a human in a specific contextual framework. Such an avatar, with real-time multimodal interaction capacities and particular behavioural properties, is a first step towards different applications involving interaction, in particular training.

Language plays a central role in human interaction. However, even though our knowledge of language production and perception in natural contexts grows steadily, we still lack a general framework for describing, formalizing and modelling this activity. The difficulty comes from the fact that language results from the convergence of different sources of information, coming from different domains (prosody, semantics, syntax, etc.) as well as different modalities (speech, gestures, attitudes). Moreover, especially when studying interaction, language has to be taken in its global matrix: contextual features as well as social, emotional and even physiological parameters should be included in the model. This goal remains out of reach, due to the immense variability resulting from the interaction of these parameters.

One way to address this question is to define a focused context that makes it possible to control the different parameters precisely: from lexicon to gestures and syntax, taking into account pragmatics as well as emotion. In other words, thanks to a precise context, the variability of information sources (verbal or non-verbal) can be drastically reduced, opening the way towards a dialogue model that can be validated in a human-machine dialogue environment. The context we propose to work on is training doctors to break bad news. This training is usually done in short sessions (around 20 minutes) during which doctors disclose bad news to actors playing the role of a patient. Our two medical partners are experienced in such work, and they bring to the project, as a starting point, a corpus of 30 hours of such training sessions. The linguists, in collaboration with the doctors, will describe and annotate this corpus, from which an interaction model will be built.
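As a purely illustrative sketch (not the project's actual annotation scheme), the multimodal annotation of such a corpus could be represented as time-aligned records combining verbal and non-verbal tiers; the tier names and fields below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical time-aligned multimodal annotation record; the tiers and
# labels are illustrative, not the project's actual coding scheme.
@dataclass
class Annotation:
    start: float   # seconds from session start
    end: float
    speaker: str   # "doctor" or "patient"
    tier: str      # e.g. "transcript", "prosody", "gesture", "gaze"
    label: str

# A few records from an imaginary training session
session = [
    Annotation(12.4, 15.1, "doctor", "transcript",
               "I'm afraid the results are not what we hoped."),
    Annotation(12.4, 15.1, "doctor", "prosody", "low pitch, slow rate"),
    Annotation(15.1, 16.0, "patient", "gaze", "averted"),
]

def overlapping(records, t0, t1, exclude_tier="transcript"):
    """Return non-verbal annotations overlapping the window [t0, t1]."""
    return [r for r in records
            if r.tier != exclude_tier and r.start < t1 and r.end > t0]

print(len(overlapping(session, 13.0, 16.0)))  # prosody + gaze -> 2
```

Time-aligned tiers like these make it possible to query how non-verbal behaviour co-occurs with specific utterances, which is the kind of regularity an interaction model would be built on.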

The applicative side of the project consists in integrating this model into a multimodal dialogue environment. Concretely, this means developing a virtual reality environment in which a human (a doctor in training) interacts with a virtual patient that reacts in real time through verbal and non-verbal behaviour. This context gives the opportunity to pursue several goals:

1. Complete modelling of an interaction situation, gathering verbal and non-verbal aspects, on the basis of natural data
2. Development of an integrated multimodal dialogue system
3. Specification of a virtual patient with real-time adaptive behaviour, reacting to the environment
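To make the third goal concrete, a real-time adaptive virtual patient can be thought of as a perceive-decide-act loop: classify the doctor's utterance, update an internal emotional state, and produce a coordinated verbal and non-verbal reaction. The sketch below is a deliberately toy version under invented assumptions (keyword triggers, a single distress scale), not the project's model:

```python
# Toy perceive-decide-act loop for an adaptive virtual patient.
# Trigger words, the distress scale and the reactions are all hypothetical.

def perceive(utterance):
    """Crudely classify the doctor's utterance (illustrative keywords)."""
    text = utterance.lower()
    if any(w in text for w in ("bad news", "tumour", "cancer")):
        return "disclosure"
    if any(w in text for w in ("sorry", "understand", "support")):
        return "empathy"
    return "neutral"

def update_state(distress, event):
    """Toy dynamics: disclosure raises distress, empathy lowers it (0-5)."""
    delta = {"disclosure": +2, "empathy": -1, "neutral": 0}[event]
    return max(0, min(5, distress + delta))

def react(distress):
    """Map the distress level to a coordinated verbal/non-verbal reaction."""
    if distress >= 4:
        return ("Why is this happening to me?", "covers face, looks down")
    if distress >= 2:
        return ("What does that mean exactly?", "leans forward, frowns")
    return ("I see.", "neutral posture, eye contact")

distress = 0
for utt in ["Hello, please sit down.",
            "I'm afraid we have bad news about the scan.",
            "I understand this is hard; we are here to support you."]:
    distress = update_state(distress, perceive(utt))
    speech, gesture = react(distress)
    print(distress, "|", speech, "|", gesture)
```

Even this toy loop shows why verbal and non-verbal channels must be generated together: the same internal state drives both the patient's words and its posture, which is what "real-time adaptive behaviour" requires of the full system.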

Besides its theoretical results, the project aims to develop an application (a training platform) that could address a growing need in the training of doctors, one for which sufficient human resources cannot be imagined at large scale. The project is interdisciplinary (linguistics, computer science, psychology, medicine) and has both technological and fundamental aspects. The consortium covers all the needs, bringing together scientific laboratories, industrial partners, experimental platforms and hospitals.

Project coordination

Philippe Blache (Laboratoire Parole et Langage)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for this content.

Partner

LPL – Laboratoire Parole et Langage
LTCI – Télécom ParisTech, Institut Mines-Télécom
CHU-Angers
ISM – Institut des Sciences du Mouvement (CNRS)
IPC – Institut Paoli-Calmettes
IMM – Immersion

ANR grant: 391,613 euros
Beginning and duration of the scientific project: September 2015 - 36 months

Useful links

Explore our database of funded projects

ANR makes its datasets on funded projects available online.
