
The project ML3RI (pronounced /mlẹɾi/) is mainly funded by the Young Researcher (Jeunes Chercheuses et Jeunes Chercheurs) programme of the ANR, under grant agreement ANR-19-CE33-0008-01. Additional support has been received within the framework of the “Investissements d’avenir” program (ANR-15-IDEX-02). The Principal Investigator is Dr. Xavier Alameda-Pineda.

ML3RI stands for Multi-modal multi-person low-level learning for robot interactions. The project aims to develop learning models and algorithms enabling a robot immersed in multi-person interactions to perceive and react using low-level multi-modal cues, and to develop realistic data generation techniques for the robust training of both the perception and action models. ML3RI started on March 1st, 2020, and will run for four years, ending on February 29th, 2024; the overall funding is roughly 310 kEUR.

ML3RI’s raison d’être

Robots with autonomous communication capabilities, interacting with multiple persons at the same time in the wild, are both a societal mirage and a scientific Ithaca. Indeed, despite the presence of various companion robots on the market, their social skills are derived from machine learning techniques that function mostly under laboratory conditions. Moreover, current robotic platforms operate in confined environments where, on one side, qualified personnel receive detailed instructions on how to interact with the robot as part of their technical training and, on the other side, external sensors and actuators may be available to ease the interaction between the robot and the environment. Overcoming these two constraints would allow a robotic platform to freely interact with multiple humans in a wide variety of everyday situations, e.g. as an office assistant, a health-care helper, a janitor or a waiter/waitress. In our understanding, interacting in the wild means that the environment is natural (unscripted conversation, regular lighting and acoustic conditions, people freely moving, etc.) and the robot is self-sufficient (it uses only its own sensing, acting and computing resources).

Scientific Ambition

To develop learning models and algorithms enabling a robot immersed in multi-person interactions to perceive and react using low-level multi-modal cues, and to develop realistic data generation techniques for the robust training of both the perception and action models.

Challenge 1: To develop learning algorithms for low-level perception of multi-person scenarios
Challenge 2: To develop models and algorithms to synthesize low-level robot behavior
Challenge 3: To develop methods for multi-modal low-level behavioral data synthesis
