Blanc SIMI 2 - Science informatique et applications

Real-time Visual Reconstruction by Mixing Multiple Depth and Color Cameras – MIXCAM

Mixed camera/depth system

The objective of MIXCAM is to develop novel scientific concepts and associated methods and software for producing live 3D content for glass-free multi-view 3D displays.

Three main problems were studied during the first year: the calibration of a camera network of time-of-flight (TOF) and color (stereo) cameras, the estimation of high-resolution depth maps from TOF-stereo units, and the registration of multiple point clouds obtained from different TOF-stereo units.
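The third problem, registering point clouds from different TOF-stereo units, reduces — once point correspondences are available — to estimating a rigid transform between two clouds. The following is a minimal illustrative sketch of the classical closed-form least-squares solution (Kabsch/Procrustes via SVD), not the project's actual algorithm:

```python
import numpy as np

def rigid_registration(P, Q):
    """Estimate the rotation R and translation t that best align
    P onto Q in the least-squares sense (Kabsch/Procrustes).
    P, Q: (N, 3) arrays of corresponding 3D points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Usage: recover a known transform applied to a synthetic cloud.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true                    # q_i = R_true p_i + t_true
R, t = rigid_registration(P, Q)
assert np.allclose(R, R_true, atol=1e-6)
assert np.allclose(t, t_true, atol=1e-6)
```

In practice correspondences between units are unknown and the data are noisy, so a closed-form step like this is typically embedded in an iterative scheme such as ICP.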

After 12 months, we completed the development of the methodological pipeline and began studying how to implement the algorithms with parallel/distributed computing on the MIXCAM platform.

The project is on time. During the next period we will finalize the data-processing pipeline and study possible industrial applications.

Based on the methodology that was developed, the MIXCAM coordinator published three papers: two journal papers (Computer Vision and Image Understanding; IEEE Transactions on Pattern Analysis and Machine Intelligence) and one conference paper (European Conference on Computer Vision).

Humans have an extraordinary ability to see in three dimensions, thanks to their sophisticated binocular vision system. While both biological and computational stereopsis have been thoroughly studied for the last fifty years, film and TV methodologies and technologies have exclusively used 2D image sequences, including the very recent 3D movie productions that use two image sequences, one for each eye. This state of affairs is due to two fundamental limitations: it is difficult to obtain 3D reconstructions of complex scenes, and glass-free multi-view 3D displays, which are likely to need true 3D content, are still under development. The objective of MIXCAM is to develop novel scientific concepts and associated methods and software for producing live 3D content for glass-free multi-view 3D displays. MIXCAM will combine (i) theoretical principles underlying computational stereopsis, (ii) multiple-camera reconstruction methodologies, and (iii) active-light sensor technology in order to develop a complete content-production and -visualization methodological pipeline, as well as an associated proof-of-concept demonstrator implemented on a multiple-sensor/multiple-PC platform supporting real-time distributed processing. MIXCAM plans to develop an original approach based on methods that combine color cameras with time-of-flight (TOF) cameras: TOF-stereo robust matching, accurate and efficient 3D reconstruction, realistic photometric rendering, real-time distributed processing, and the development of an advanced mixed-camera platform. The MIXCAM consortium is composed of two French partners (INRIA and 4D View Solutions) and one international partner (Samsung Electronics). The MIXCAM partners will develop scientific software that will be demonstrated using two prototypes of a novel platform, developed by 4D View Solutions, which will be available at INRIA and at Samsung, thus facilitating scientific and industrial exploitation. By the end of the project, the developed scientific principles and associated technology will be demonstrated at the Samsung annual fair in Seoul, Korea.

Project coordination

Radu Horaud (Centre de Recherche Inria Grenoble Rhône-Alpes)

The author of this summary is the project coordinator, who is responsible for its content. The ANR accepts no responsibility for its content.

Partners

4D View Solutions
Inria – Centre de Recherche Inria Grenoble Rhône-Alpes

ANR grant: 272,543 euros
Start date and duration of the scientific project: January 2014 - 24 months

Useful links

Explore our database of funded projects

ANR makes its datasets on funded projects available; see the ANR website for more information.
