ASTRID - Accompagnement spécifique des travaux de recherches et d’innovation Défense (specific support for defence research and innovation work)

Sensory Control of Aerial Robots – SCAR

Submission summary

This project aims to investigate three key issues in aerial robotics: teleoperation, high-level sensor-based control, and sensor fusion. The intended applications encompass the visual inspection of a target (e.g. a building, a power line, a bridge) by a semi-autonomous aerial robot piloted by an inexperienced operator.

• Teleoperation: the aerial robot is remotely piloted by a human operator in order to perform a desired task. For inspection, the robot must hover close to obstacles, yet it is very difficult and time-consuming for the pilot to correctly evaluate the distances to these obstacles so as to avoid collisions. This project will use a force-feedback interface based on a haptic joystick, together with feedback from onboard cameras or telemetric sensors; a minimal sketch of such a distance-to-force mapping is given after this list. Such an interface, integrated into the robot control system, will allow the pilot to perform the desired task efficiently.
• High-level sensor-based control: although the basic stabilization of an aerial robot has received a lot of attention, issues remain when it comes to flying close to obstacles while relying only on onboard sensors (rather than, for instance, a sophisticated off-board motion-tracking system, as is often done in indoor lab experiments). Moreover, the aerodynamic environment near a large obstacle is often perturbed (strong and unpredictable wind gusts), which further complicates the task of stabilizing the robot while avoiding obstacles. This project will develop control laws able to perform semi-autonomous tasks (stabilization near obstacles, take-off/landing, wall tracking, etc.) relying only on the various onboard sensors, including onboard cameras. Once again, the goal is to relieve the operator of low-level piloting tasks so that they can concentrate on the desired high-level task.
• Sensor fusion: a central problem in the control of an aerial robot is to correctly estimate its state (position, velocity, orientation, etc.). Each sensor provides only a partial piece of information about the state, possibly rather indirect and prone to temporary outages. The sensor fusion algorithm must reconcile all the measurements, through the use of a dynamical model of the robot, to produce a state estimate that enables effective control, maintains stability, and achieves the required performance. Consequently, the sensor fusion algorithm must run in real time, with high bandwidth and low latency. This imposes severe constraints on the algorithm, from both the theoretical and practical points of view; sensor fusion can thus be seen as an enabling technology for the control of aerial robots. This project will develop sensor fusion algorithms (in the form of so-called “observers”) enjoying good convergence properties, while remaining computationally thrifty and easy to tune. The idea is to take advantage of the rich geometric structure of the dynamic model (symmetries, group structure, etc.), as illustrated by the observer sketch after this list.
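To make the teleoperation idea concrete, here is a minimal Python sketch of how a measured distance to the nearest obstacle could be turned into a repulsive force rendered on the haptic joystick. The function name, thresholds and gains are illustrative assumptions, not the interface developed in this project.

```python
import numpy as np

def repulsive_feedback_force(d_obs, d_act=2.0, d_min=0.3, f_max=3.0):
    """Map the measured distance to the nearest obstacle (metres) into a
    repulsive force magnitude (newtons) rendered on the haptic joystick.

    The force is zero beyond the activation distance d_act, grows as the robot
    approaches the obstacle, and saturates at the device limit f_max once the
    distance falls below d_min. All thresholds and gains are illustrative.
    """
    if d_obs >= d_act:
        return 0.0
    if d_obs <= d_min:
        return f_max
    # Linear ramp between the activation and the saturation distances.
    return f_max * (d_act - d_obs) / (d_act - d_min)

# Example: an obstacle detected 0.8 m ahead by an onboard range sensor.
direction_to_obstacle = np.array([1.0, 0.0, 0.0])  # unit vector, body frame
force_on_stick = -repulsive_feedback_force(0.8) * direction_to_obstacle
print(force_on_stick)  # pushes the operator's hand away from the obstacle
```

In an actual teleoperation loop, a force of this kind would be sent to the haptic device at its servo rate, while the pilot also receives the visual feedback from the onboard cameras.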
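As an illustration of a geometry-aware observer of the kind mentioned above, the sketch below implements an explicit complementary filter on the rotation group SO(3), fusing rate-gyro readings with body-frame direction measurements (e.g. normalised accelerometer and magnetometer vectors). Gains, frame conventions and function names are assumptions for illustration only; this is not presented as the algorithm the project will develop.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def complementary_filter_step(R_hat, b_hat, gyro, dirs_meas, dirs_ref,
                              dt, k_P=1.0, k_I=0.1, k_dir=(1.0,)):
    """One step of an explicit complementary filter on SO(3).

    R_hat     : current attitude estimate (body-to-inertial rotation matrix)
    b_hat     : current gyro-bias estimate (rad/s, body frame)
    gyro      : biased angular-velocity measurement (rad/s, body frame)
    dirs_meas : list of unit direction measurements in the body frame
                (e.g. normalised accelerometer and magnetometer readings)
    dirs_ref  : corresponding known unit directions in the inertial frame
    """
    # Innovation: sum of cross products between measured and predicted directions.
    omega_mes = np.zeros(3)
    for k, v_meas, v_ref in zip(k_dir, dirs_meas, dirs_ref):
        v_hat = R_hat.T @ v_ref          # predicted body-frame direction
        omega_mes += k * np.cross(v_meas, v_hat)

    # Bias and attitude updates (first-order integration of the observer ODE).
    b_hat = b_hat - k_I * omega_mes * dt
    R_hat = R_hat @ so3_exp((gyro - b_hat + k_P * omega_mes) * dt)
    return R_hat, b_hat
```

Because the estimate is propagated directly on SO(3) through the exponential map, it remains a valid rotation matrix at every step without re-orthonormalisation, which is one practical benefit of exploiting the group structure of the dynamics.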

Project coordination

Tarek Hamel (Laboratoire d'Informatique, Signaux et Systèmes de Sophia Antipolis) – thamel@i3s.unice.fr

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.

Partners

ISIR-UPMC – Institut des Systèmes Intelligents et de Robotique
I3S CNRS – Laboratoire d'Informatique, Signaux et Systèmes de Sophia Antipolis
ARMINES CAS – Centre Automatique et Systèmes, Mines ParisTech

ANR grant: 292,774 euros
Start date and duration of the scientific project: February 2013 - 36 months
