We gave a high-level presentation on the topic of transfer in reinforcement learning. Here is the abstract:
The reinforcement learning (RL) framework formalizes the problem of sequential decision-making under uncertainty. RL algorithms enable virtual or real agents to learn an optimal behavior strategy from experience obtained through direct interaction with an unknown environment. Despite this generality, current RL algorithms often require a significant amount of prior knowledge from a domain expert to be effective, and they can hardly generalize across different tasks. To overcome these limitations, it is possible to adopt a "transfer learning" approach, where prior knowledge is incrementally constructed as the agent solves a series of problems. In particular, the idea is that the agent can automatically detect similarities across problems and exploit them to improve its learning performance. In this talk, I will first review the basic concepts of RL and then discuss two major aspects of RL that can significantly benefit from effective transfer algorithms: the reduction of sample complexity in exploration-exploitation and the improvement of approximation accuracy in the representation problem.
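To make the abstract's notion of "learning from direct interaction with an unknown environment" concrete, here is a minimal sketch of tabular Q-learning with epsilon-greedy exploration on a toy chain MDP. The environment, hyperparameters, and function names are all illustrative assumptions, not examples from the talk itself.

```python
import random

# Toy chain MDP (an assumed example, not from the talk):
# states 0..4; action 0 = left, 1 = right; reaching state 4 gives reward 1.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # assumed hyperparameters

def step(state, action):
    """Deterministic transition: move one step left or right along the chain."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=500, seed=0):
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: the exploration-exploitation trade-off
            # whose sample complexity the talk refers to.
            if random.random() < EPSILON:
                action = random.randrange(2)
            else:
                action = 0 if Q[state][0] > Q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Q-learning temporal-difference update
            target = reward + GAMMA * max(Q[next_state])
            Q[state][action] += ALPHA * (target - Q[state][action])
            state = next_state
    return Q

Q = train()
policy = [0 if q[0] > q[1] else 1 for q in Q]
print(policy)  # the greedy policy moves right, toward the goal
```

After training, the greedy policy extracted from Q moves the agent toward the rewarding state, illustrating how behavior is learned purely from interaction, without a model of the environment.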
Here you can find the slides of the presentation.