The main objective is to study autonomous robotic systems, from both the perception and the control points of view, interacting with and evolving among human beings in live, dynamic environments. By autonomous robotic systems, we refer to autonomous vehicles, mobile robots, UAVs, and combinations thereof.

Our research ambition is to explore new paradigms and concepts allowing autonomous robotic systems i) to acquire and share a task-oriented representation of the world, and ii) to act and interact in human-like environments in a safe and efficient way, in both cases accounting for interactions with humans.

Task specification, world and interaction modelling, situation awareness, multi-sensor-based perception and control, the coupling between perception and action, and hybrid model-based/deep-learning-based architectures will be the main focuses of our research. Although the underlying concepts could potentially be applied to manipulator arms, we have voluntarily restricted the scope of the project to mobile robotic applications, which we consider more topical and challenging.

Research Directions

  • Task based world modeling and understanding:

    Executing a robotic task requires specifying a task space and a set of objective functions to be optimized. One research issue will be to define a framework for representing tasks in a generic canonical space, in order to make their design and analysis easier thanks to the tools of control theory (observability, controllability, robustness…). Throughout the execution of a task, autonomous robotic systems have to acquire and maintain a model of the world and of the interactions between the different components involved in the task (heterogeneous robots, human beings, changes in the environment…). This model evolves in time and in space. In this research axis, we will investigate novel task-oriented multi-layer world representations (photometric, geometric, semantic) embedded in a short-/long-term memory framework able to handle static and dynamic events (long-term mapping). Particular attention will also be paid to integrating human-robot interactions in shared environments (social skills). Another ambition of the project will be to build a bridge between model-based and machine-learning methods. Understanding the evolution of the world is one of the keys to autonomy. To this end, we will focus on situation awareness.

  • Multi-sensor-based perception and control:

    Multi-sensor-based perception and control spans scenarios ranging from a single robot evolving in the environment with a set of sensors up to a set of heterogeneous robots collaborating on the execution of a global shared task. We will address problems such as the active selection of the most suitable sources of information (e.g., sensors and features) during the execution of the task, and active sensing control in order to maximize the information collected about the world model (including calibration and environment parameters, and exogenous disturbances), allowing the task-driven sensor-based control framework to be executed more efficiently and robustly. Another issue will be the execution of a task defined by another robot or a human, to be replicated by a robot with different capabilities in perception, control, and level of autonomy (i.e., heterogeneous robots). Further issues will arise from the collaboration of different autonomous and heterogeneous robots in order to accomplish a shared task (mapping, robust localization, calibration, tracking, transporting, moving, …).
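To make the control-theoretic analysis mentioned in the first research direction concrete, the classical Kalman rank tests check whether a linear system model is controllable and observable. The sketch below is purely illustrative (the double-integrator matrices are an assumed toy example, not a system from this project):

```python
import numpy as np

def ctrb_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks))

def obsv_rank(A, C):
    """Rank of the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks))

# Toy example: a double integrator (position/velocity state), with a force
# input and a position measurement. Full rank (= state dimension) means the
# system is controllable / observable.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

print(ctrb_rank(A, B))  # 2 -> controllable
print(obsv_rank(A, C))  # 2 -> observable
```

The same rank tests extend to checking whether a given sensor configuration renders the task-relevant state observable, which is one way such tools support task design and analysis.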
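One common way to formalize the active selection of the most informative sensor, sketched below under a linear-Gaussian assumption, is to greedily pick the measurement that most reduces the entropy of the state estimate (a D-optimality criterion). The sensor models and noise levels are illustrative assumptions, not part of the project description:

```python
import numpy as np

def posterior_cov(P, H, R):
    """Kalman covariance update for a measurement y = H x + v, v ~ N(0, R)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

def pick_sensor(P, sensors):
    """Index of the sensor minimizing the posterior log-determinant of the
    covariance, i.e., maximizing the expected information gain."""
    scores = [np.linalg.slogdet(posterior_cov(P, H, R))[1] for H, R in sensors]
    return int(np.argmin(scores))

# Prior: the x coordinate is very uncertain, y is already well known.
P = np.diag([4.0, 0.01])
sensors = [
    (np.array([[1.0, 0.0]]), np.array([[0.1]])),  # measures x (much to gain)
    (np.array([[0.0, 1.0]]), np.array([[0.1]])),  # measures y (little to gain)
]
print(pick_sensor(P, sensors))  # 0: the x sensor reduces uncertainty the most
```

Repeating this choice at each control step gives a simple greedy active-sensing loop; richer formulations trade the information objective off against the task objective itself.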


The CHORALE team is the continuation of the LAGADIC team at Sophia Antipolis.


Keywords: Modelling, Perception, Control, Learning, Human-robot interaction, Collaborative robotics, Autonomous vehicles, Drones
