Audio-visual machine perception & interaction for robots
Chair of the Multidisciplinary Institute of Artificial Intelligence of the Université Grenoble-Alpes
Workshop

  • Face processing for visual- and audio-visual speech by Dr. Radu Horaud
  • Learning and controlling the source-filter representation of speech with a variational autoencoder by Prof. Simon Leglaive
  • Unsupervised Audio Source Separation Using Differentiable Parametric Source Models by Prof. Gaël Richard
  • An introduction to dynamical variational autoencoders by Dr. Xavier Alameda-Pineda
  • An online Minorization-Maximization algorithm by Dr. Florence Forbes
  • Deep Transfer Reinforcement Learning for Social Robotics by Dr. Chris Reinke
  • Dynamic neural fields and manifold learning for audiovisual fusion in psychophysics and robotics by Simon Forest

Home

Welcome to the webpage of the Audio-visual machine perception and interaction for companion robots chair of the Multidisciplinary Institute of Artificial Intelligence. This initiative is co-chaired by Dr. Radu Horaud and Dr. Xavier Alameda-Pineda, both at Inria Grenoble Rhône-Alpes.

Recent news:

  • [Seminar] What can we further learn from the brain for AI? by Prof. Kenji Doya
  • [Seminar] DNN-based Algorithms for Audio Processing in Reverberant Environments by Prof. Sharon Gannot
  • [Seminar] Towards Higher Efficiency in Reinforcement Learning for Robotics by Samuele Tosatto
  • [Seminar] Transfer Learning, Data Efficiency and Fairness in Deep Reinforcement Learning by Dr. Matthieu Zimmer
  • [Seminar] Complex-valued and hybrid models for audio processing by Dr. Paul Magron
  • [Seminar] Variational Recurrent Neural Networks by Prof. Laurent Girin
  • [Seminar] Using cognitive science for artificial intelligence by Dr. Chris Reinke
  • [Seminar] Variational auto-encoders for audio-visual speech enhancement by Dr. Mostafa Sadeghi
  • [Seminar] Deformations in Deep Models for Image and Video Generation by Prof. Stéphane Lathuilière


The "Audio-visual machine perception and interaction for companion robots" chair is part of the Multidisciplinary Institute of Artificial Intelligence and is thus funded by the ANR under grant agreement ANR-19-P3IA-0003.
