Welcome to ACM MIG 2023!

Important information for attendees planning their trip and presentation: the detailed program will be announced on September 27th. The conference will take place from Wednesday, November 15th (morning) to Friday, November 17th (lunchtime). Registration will open as soon as possible; early rates are expected to be available until October 15th. We are working on rates starting from €350 for students, which include lunches and dinners. Presenters will be able to present online, but we encourage onsite presentations. One full registration is mandatory for each paper, whether the presentation is onsite or online. We are also working on an optional extra social program on Friday afternoon and Saturday, with visits to Saint-Malo and Mont Saint-Michel for those interested, for a small extra cost.

Invitation letters for visas: please send an email to julien.pettre@inria.fr with your name, paper ID, title, and passport number (as well as travel dates and itinerary, if known).

Motion plays a crucial role in interactive applications, such as VR, AR, and video games. Characters move around, objects are manipulated or move due to physical constraints, entities are animated, and the camera moves through the scene. Motion is currently studied in many different research areas, including graphics and animation, game technology, robotics, simulation, computer vision, and also physics, psychology, and urban studies. Cross-fertilization between these communities can considerably advance the state-of-the-art in the area.

The goal of the Motion, Interaction and Games conference is to bring together researchers from this variety of fields to present their most recent results, to initiate collaborations, and to contribute to the establishment of the research area. The conference will consist of regular paper sessions and poster presentations, as well as presentations by a selection of internationally renowned speakers in all areas related to interactive systems and simulation. The conference includes entertaining cultural and social events that foster casual and friendly interactions among the participants.

Once again this year, MIG will be held in a hybrid format, with a strong desire to have you here in person in Rennes, so that you can fully enjoy the conference program, face-to-face interactions with the community, the city of Rennes, and the beautiful Brittany region! Nevertheless, you may choose to attend either in person or virtually, to allow maximum (virtual) attendance.


The 16th annual ACM/SIGGRAPH conference on Motion, Interaction and Games (MIG 2023, formerly Motion in Games), one of the ACM SIGGRAPH Specialized Conferences, held in cooperation with Eurographics, will take place in Rennes, France, 15th–17th November 2023.

The goal of the Motion, Interaction, and Games conference is to be a platform for bringing together researchers from interactive systems and animation, to have them present their most recent results, initiate collaborations, and contribute to the advancement of the research area. The conference will consist of regular paper sessions for long and short papers, and talks by a selection of internationally renowned speakers from academia as well as industry.

The conference organizers invite researchers to consider submitting their highest quality research for publication in MIG 2023.

Important dates

  • Abstract submission: No abstract submission required*
  • Long and Short Paper Submission Deadline: 14th July 2023 (extended from 7th July 2023)
  • Long and Short Paper Acceptance Notification: 7th September 2023, 23:59 AoE (extended from 1st September 2023)
  • Long and Short Paper Camera Ready Deadline: 22nd September 2023

*New papers may be submitted even if no abstract was submitted beforehand. We have already received a significant number of abstracts to facilitate the reviewing process, so no further abstract submissions are required. Thanks to all authors who submitted abstracts.


  • Poster Submission Deadline: 22nd September 2023 (extended from 12th September 2023)
  • Poster Notification: 29th September 2023 (originally 22nd September 2023)
  • Final Version of Accepted Posters: TBD (originally 29th September 2023)

Note: all submission deadlines are 23:59 AoE timezone (Anywhere on Earth).

Topics of Interest

Relevant topics include (but are not limited to):

  • Animation Systems
  • Animal locomotion
  • Autonomous actors
  • Behavioral animation, crowds & artificial life
  • Clothes, skin and hair
  • Deformable models
  • Expressive animation
  • Facial animation
  • Facial feature analysis
  • Game interaction and player experience
  • Game technology
  • Gesture recognition
  • Group and crowd behaviour
  • Human motion analysis
  • Image-based animation
  • Interaction in virtual and augmented reality
  • Interactive animation systems
  • Interactive storytelling in games
  • Machine learning techniques for animation
  • Motion capture & retargeting
  • Motion control
  • Motion in performing arts
  • Motion in sports
  • Motion rehabilitation systems
  • Multimodal interaction: haptics, sound, etc.
  • Navigation & path planning
  • Physics-based animation
  • Real-time fluids
  • Robotics
  • User-adaptive interaction and personalization
  • Virtual humans
  • XR (AR, VR, MR) environments

We invite submissions of original, high-quality papers in any of the topics of interest (see above) or any related topic. Each submission should be 7–9 pages in length for a long paper, or 4–6 pages for a short paper. References are excluded from the page limit. Submissions will be reviewed by our international program committee for technical quality, novelty, significance, and clarity. We encourage authors whose content fits into 6 pages to submit a short paper, and to submit a long paper only if the content requires it.

Submission Instructions

All submissions will be double-blind peer-reviewed by our international program committee for technical quality, novelty, significance, and clarity. Double-blind means that paper submissions must be anonymous and include the unique paper ID that will be assigned upon creating a submission using the online system.
Papers should not have previously appeared in, or be currently submitted to, any other conference or journal. For each accepted contribution, at least one of the authors must register for the conference.

All submissions will be considered for the Best Paper, Best Student Paper, and Best Presentation awards, which will be conferred during the conference. Authors of selected best papers will be invited (subject to confirmation) to submit extended and significantly revised versions to a Special Issue of the Computers & Graphics journal.

We also invite submissions of poster papers in any of the topics of interest and related areas. Each submission should be 1-2 pages in length. Two types of work can be submitted directly for poster presentation:

  • Work that has been published elsewhere but is of particular relevance to the MIG community can be submitted as a poster. This work and the venue in which it is published should be identified in the abstract;
  • Work that is of interest to the MIG community but is not yet mature enough to appear as a paper.

Posters will not appear in the official MIG proceedings or in the ACM Digital Library, but will appear in an online database for distribution at the authors' discretion. You may use any paper format, though the MIG paper format is recommended. In addition, you are welcome to submit supplementary material such as videos.

All submissions should be formatted using the SIGGRAPH formatting guidelines (sigconf). The LaTeX template can be found here: https://www.acm.org/publications/proceedings-template (for the review version, you can use the command \documentclass[sigconf, screen, review, anonymous]{acmart}).
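For reference, a minimal review-version skeleton using the acmart class might look like the sketch below (the title, abstract text, and submission ID are placeholders, not prescribed values):

```latex
% 'anonymous' suppresses author identities in the PDF for double-blind review;
% 'review' enables margin line numbers so reviewers can reference specific lines.
\documentclass[sigconf, screen, review, anonymous]{acmart}

\begin{document}

\title{Your Paper Title}
% Authors are hidden by the 'anonymous' option; identify the submission
% by the paper ID assigned when creating the submission on EasyChair.
\author{MIG 2023 Submission}

\begin{abstract}
  A short summary of the contribution.
\end{abstract}

\maketitle

\section{Introduction}
Body text goes here.

\end{document}
```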

All papers and posters should be submitted electronically to their respective tracks on EasyChair: https://easychair.org/conferences/?conf=mig2023

Supplementary Material

Due to the nature of the conference, we strongly encourage authors to submit supplementary material (such as videos) up to 200 MB in size. It may be submitted electronically and will be made available to reviewers. For video, we advise QuickTime MPEG-4 or DivX Version 6; for still images, we advise JPG or PNG. If you use another format, we cannot guarantee that reviewers will be able to view it. An appendix may also be included as supplementary material. These materials will accompany the final paper in the ACM Digital Library.


Johanna Pirker

Dr. Johanna Pirker is a computer scientist focusing on game development, research, and education, and an active, strong voice in the local indie dev community. She has extensive experience in designing, developing, and evaluating games and VR experiences, and believes in them as tools to support learning, collaboration, and solving real problems. Johanna started in the industry as a QA tester at EA and still consults for studios in the field of games user research. In 2011/12 she began researching and developing VR experiences at the Massachusetts Institute of Technology. She is currently a professor of media informatics at the Ludwig Maximilian University of Munich and an assistant professor of game development at TU Graz, where she researches games with a focus on AI, HCI, data analysis, and VR technologies. Johanna was listed on the Forbes 30 Under 30 list of science professionals.

Jonas Beskow

Jonas Beskow is a Professor of Speech Communication, specialising in Multimodal Embodied Systems at KTH in Stockholm. He is also a co-founder and Senior R&D Engineer at Furhat Robotics. His interests encompass modelling, synthesis, and understanding human communicative signals and behaviours, including speech, facial expressions, gestures, gaze, and the dynamics of face-to-face interaction. Specifically, he is passionate about integrating all these elements into machines and embodied agents, both physical and virtual, to enhance more engaging and dynamic interactions.

Sylvia Pan

Prof Sylvia Pan is a Professor of Virtual Reality at Goldsmiths, University of London. She co-leads the SeeVR research group including 10 academics and researchers. She holds a PhD in Virtual Reality, and an MSc in Computer Graphics, both from UCL, and a BEng in Computer Science from Beihang University, Beijing, China. Before joining Goldsmiths in 2015, she worked as a research fellow at the Institute of Cognitive Neuroscience, and at the Computer Science Department of UCL. Her research interest is the use of Virtual Reality as a medium for real-time social interaction, in particular in the application areas of medical training and therapy. Her work in social anxiety in VR and moral decisions in VR has been featured multiple times in the media, including BBC Horizon, the New Scientist magazine, and the Wall Street Journal. Her 2017 Coursera VR specialisation attracted over 100,000 learners globally, and she co-leads on the MA/MSc in Virtual and Augmented Reality at Goldsmiths Computing.

Steve Tonneau

Steve Tonneau is a lecturer at the University of Edinburgh. He defended his PhD in 2015 after three years in the INRIA/IRISA MimeTIC research team, then pursued a postdoc in robotics at LAAS-CNRS in Toulouse, within the Gepetto team. His research focuses on motion planning based on the biomechanical analysis of motion invariants. Applications include computer graphics animation as well as robotics.


Rahul Narain
Indian Institute of Technology Delhi

Panayiotis Charalambous
CYENS – Center of Excellence

Fotis Liarokapis
CYENS – Center of Excellence

Franck Multon
University Rennes 2

Remi Ronfard

Rinat Abdrashitov
Epic Games

Mikhail Bessmeltsev
University of Montreal

Tiberius Popa
Concordia University

Edmond S. L. Ho
University of Glasgow

Ludovic Hoyet
INRIA Rennes – Centre Bretagne Atlantique

Tianlu Mao
Institute of Computing Technology Chinese Academy of Sciences

Nuria Pelechano
Universitat Politècnica de Catalunya

Lauren Buck
Trinity College Dublin

Ylva Ferstl

Yuting Ye
Reality Labs Research @ Meta

Damien Rohmer
Ecole Polytechnique

Brandon Haworth
University of Victoria

Claudia Esteves
Departamento de Matemáticas, Universidad de Guanajuato

Daniel Holden
Epic Games

He Wang
University College London

Eric Patterson
Clemson University

Ben Jones
University of Utah

Yorgos Chrysanthou
University of Cyprus

Eduard Zell
Bonn University

Marc Christie

Adam Bargteil
University of Maryland, Baltimore County

Steve Tonneau

Ronan Boulic
Ecole Polytechnique Fédérale de Lausanne

Pei Xu
Clemson University

John Dingliana
Trinity College Dublin

Stephen Guy
University of Minnesota

Christos Mousas
Purdue University

Aline Normoyle
Bryn Mawr College

James Gain
University of Cape Town

Carol O’Sullivan
Trinity College Dublin

Matthias Teschner
University of Freiburg

Hang Ma
Simon Fraser University

Soraia Musse

Sylvie Gibet
Université Bretagne Sud

Nuria Pelechano
Universitat Politècnica de Catalunya

Xiaogang Jin
Zhejiang University

Catherine Pelachaud
CNRS – ISIR, Sorbonne

Cathy Ennis
TU Dublin

Zerrin Yumak
Utrecht University

Funda Durupinar Babur
University of Massachusetts Boston

Katja Zibrek


Wednesday, November 15th

09:15AM – 10:00AM: Opening remarks

10:00AM – 11:00AM: Keynote 1

11:15AM – 01:00PM: Session: ML for Motion

Learning Robust and Scalable Motion Matching with Lipschitz Continuity and Sparse Mixture of Experts.

Objective Evaluation Metric for Motion Generative Models: Validating Fréchet Motion Distance on Foot Skating and Over-smoothing Artifacts.

Motion-DVAE: Unsupervised learning for fast human motion denoising.

Reward Function Design for Crowd Simulation via Reinforcement Learning.

MeshGraphNetRP: Improving Generalization of GNN-based Cloth Simulation

02:30PM – 03:30PM: Keynote 2

03:45PM – 05:45PM: Session: Games

Real-time Computational Cinematographic Editing for Broadcasting of Volumetric-captured events: an Application to Ultimate Fighting.

Exploring Mid-air Gestural Interfaces for Children with ADHD.

Player Exploration Patterns in Interactive Molecular Docking with Electrostatic Visual Cues.

Heat Simulation on Meshless Crafted-Made Shapes.

Virtual Joystick Control Sensitivity and Usage Patterns in a Large-Scale Touchscreen-Based Mobile Game Study



Thursday, November 16th

09:15AM – 11:30AM: Session: ML for faces

SoftDECA: Computationally Efficient Physics-Based Facial Animations

Audiovisual Inputs for Learning Robust, Real-time Facial Animation with Lip Sync

FaceDiffuser: Speech-Driven Facial Animation Synthesis Using Diffusion

MUNCH: Modelling Unique ’N Controllable Heads

Generating Emotionally Expressive Look-At Animation

12:00PM – 01:00PM: Keynote 3

02:30PM – 04:00PM: Session: Virtual Reality

Avatar Tracking Control with Featherstone’s Algorithm and Newton-Euler Formulation for Inverse Dynamics

Real-Time Conversational Gaze Synthesis for Avatars

Designing Hand-held Controller-based Handshake Interaction in Social VR and Metaverse

Effect of Avatar Clothing and User Personality on Group Dynamics in Virtual Reality

Runtime Motion Adaptation for Precise Character Locomotion

Friday, November 17th

09:30AM – 11:00AM: Session: Animation

Primal Extended Position Based Dynamics for Hyperelasticity

SwimXYZ: A large-scale dataset of synthetic swimming motions and videos

Physical Simulation of Balance Recovery after a Push

Video-Based Motion Retargeting Framework between Characters with Various Skeleton Structure

Navigating With a Defensive Agent: Role Switching for Human Automation Collaboration

11:30AM – 12:30PM: Keynote 4

12:30PM – 01:00PM: Closing remarks


A complete guide covering lodging, transportation, and other useful information will be uploaded soon.

Conference Organization

Conference Chairs

  • Julien Pettré, Inria, France
  • Barbara Solenthaler, ETH Zurich, Switzerland

Program Chairs

  • Rachel McDonnell, TCD, Ireland
  • Christopher Peters, KTH, Sweden

Poster Chair

  • Alberto Jovane, Trinity College Dublin

Main Contact

All questions about submissions should be emailed to Rachel McDonnell (ramcdonn (at) tcd.ie) and Christopher Peters (chpeters (at) kth.se).

Julien Pettré, Inria, France (julien.pettre (at) inria.fr)

All questions about posters should be emailed to Alberto Jovane, Trinity College Dublin (JOVANEA (at) tcd.ie).

