[Seminar] Towards Higher Efficiency in Reinforcement Learning for Robotics, by Samuele Tosatto

Title: Towards Higher Efficiency in Reinforcement Learning for Robotics

Speaker: Dr. Samuele Tosatto

Abstract: To bring robots out of factories, they need to become more versatile and adapt to new situations, ideally with minimal human expertise. In industry, robots are mainly employed to repeat the same tasks millions of times under stationary conditions; outside this controlled setting, they face non-stationary environments and may be asked to perform many different tasks. Reinforcement Learning (RL) holds the promise of making robots more versatile, able to learn new tasks and adapt to new conditions. However, the low sample efficiency of state-of-the-art methods prevents online adaptation in the real world: the agent must first learn in simulation, where samples are far cheaper to obtain, and then transfer the learned behavior to the real world. This pipeline does not scale, as it requires designing a new, complex simulation for each task. Consequently, robots need to learn directly by interacting with the real world. To make this possible, it is critical to identify the sources of sample inefficiency and to design learning systems tailored to the complexity of robotic interaction with the real world. The talk is divided into two parts: 1) biased off-policy improvement in state-of-the-art techniques is a major cause of sample inefficiency, and the speaker will propose techniques to mitigate this issue; 2) he will discuss how hierarchical reinforcement learning (HRL), which already improves the sample efficiency of RL, could be further tailored to real-world applications by exploiting specific properties of robotic systems. The speaker will present his past and current research on these topics and conclude with his vision for future research directions.
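
For readers unfamiliar with the off-policy bias mentioned in the first part of the talk, the following toy sketch in Python (an illustration only, not the speaker's method; all numbers and names are made up) shows the core difficulty on a two-armed bandit: naively averaging returns collected by a behavior policy gives a biased estimate of a different target policy's value, while the standard importance-sampling correction is unbiased but becomes very noisy when the two policies disagree.

# Toy sketch: bias of naive off-policy evaluation vs. variance of
# importance sampling on a two-armed bandit (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

true_reward = np.array([1.0, 0.0])   # expected reward of each action
behavior = np.array([0.9, 0.1])      # policy that collected the data
target = np.array([0.1, 0.9])        # policy we want to evaluate

# Collect data with the behavior policy.
actions = rng.choice(2, size=5000, p=behavior)
rewards = true_reward[actions] + rng.normal(0.0, 0.1, size=actions.size)

# Naive estimate: treating behavior-policy returns as target-policy
# returns is biased towards the behavior policy's value (~0.9 here).
naive = rewards.mean()

# Importance-sampling correction: unbiased (~0.1 here), but the weights
# explode where the policies disagree, so the estimate is high-variance.
weights = target[actions] / behavior[actions]
is_estimate = (weights * rewards).mean()

print("true target value:      ", (target * true_reward).sum())
print("naive off-policy value: ", round(naive, 3))
print("importance-sampled:     ", round(is_estimate, 3))

Mitigating this bias-variance trade-off in off-policy improvement is, per the abstract, one theme of the first part of the talk.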
