

{"id":4,"date":"2011-12-08T11:55:34","date_gmt":"2011-12-08T11:55:34","guid":{"rendered":"http:\/\/project.inria.fr\/template1\/?page_id=4"},"modified":"2024-02-22T09:50:15","modified_gmt":"2024-02-22T08:50:15","slug":"home","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/mig2023\/","title":{"rendered":"Welcome to ACM MIG 2023!"},"content":{"rendered":"<p class=\"has-vivid-red-color has-text-color\"><strong><span style=\"text-decoration: underline;\">Last updates:<\/span> MIG 2023 AWARDS RESULTS<\/strong><\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color\"><strong>BEST POSTER AWARD: <\/strong><br><strong>Enabling Physical VR Interaction with Deep RL Agents.<\/strong><br><em>Paul Boursin, David Hamelin, James Burness and Marie-Paule Cani<\/em><br><br><strong>BEST PRESENTATION AWARD:<\/strong><br><strong>Physical Simulation of Balance Recovery after a Push.<\/strong><br><em>Alexis Jensen, Thomas Chatagnon, Niloofar Khoshsiyar, Daniele Reda, Michiel van de Panne, Charles Pontonnier and Julien Pettr\u00e9<\/em><br><br><strong>BEST PAPER AWARD (non-student category):<\/strong><br><em><strong>Real-time Computational Cinematographic Editing for Broadcasting of Volumetric-captured events: an Application to Ultimate Fighting.<\/strong><\/em><br><em>Francois Bourel, Xi Wang, Ervin Teng, Valerio Ortenzi, Adam Myhill and Marc Christie<\/em><br><br><strong>BEST PAPER AWARD (student category):<\/strong><br><strong><em>Primal Extended Position Based Dynamics for Hyperelasticity.<\/em><\/strong><br><em>Yizhou Chen, Yushan Han, Jingyu Chen, Shiqian Ma, Ronald Fedkiw and Joseph Teran<\/em><\/p>\n\n\n\n<p><strong>ACM MIG 2023 is sponsored by <\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK.png\" alt=\"\"\/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"has-text-align-center has-vivid-red-color has-text-color wp-block-heading\"><a 
href=\"https:\/\/www.youtube.com\/playlist?list=PLta3p3SfVzKDKxzLtNkn2MamzgOm5Uxmk\" data-type=\"URL\" data-id=\"https:\/\/www.youtube.com\/playlist?list=PLta3p3SfVzKDKxzLtNkn2MamzgOm5Uxmk\" target=\"_blank\" rel=\"noreferrer noopener\">Click here to access the replay playlist of ACM MIG 2023!<\/a><\/h2>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"ACM SIGGRAPH MIG 2023\" width=\"900\" height=\"506\" src=\"https:\/\/www.youtube.com\/embed\/videoseries?list=PLta3p3SfVzKDKxzLtNkn2MamzgOm5Uxmk\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<h1 class=\"wp-block-heading\">REGISTRATION (<a href=\"https:\/\/cvent.me\/NlEeen\" data-type=\"URL\" data-id=\"https:\/\/cvent.me\/NlEeen\">link<\/a>)<\/h1>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Registration type<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Early Bird (&#8211;&gt;Oct. 
31st 2023)<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Late Registration (Nov 1st 2023 &#8211;&gt;)<\/strong><\/td><\/tr><tr><td>On-site attendance<\/td><td class=\"has-text-align-center\" data-align=\"center\"><\/td><td class=\"has-text-align-center\" data-align=\"center\"><\/td><\/tr><tr><td>Non ACM member<\/td><td class=\"has-text-align-center\" data-align=\"center\">580\u20ac<\/td><td class=\"has-text-align-center\" data-align=\"center\">680\u20ac<\/td><\/tr><tr><td>ACM member<\/td><td class=\"has-text-align-center\" data-align=\"center\">500\u20ac<\/td><td class=\"has-text-align-center\" data-align=\"center\">600\u20ac<\/td><\/tr><tr><td>Student<\/td><td class=\"has-text-align-center\" data-align=\"center\">350\u20ac<\/td><td class=\"has-text-align-center\" data-align=\"center\">450\u20ac<\/td><\/tr><tr><td>Virtual attendance<\/td><td class=\"has-text-align-center\" data-align=\"center\"><\/td><td class=\"has-text-align-center\" data-align=\"center\"><\/td><\/tr><tr><td>Non ACM author<\/td><td class=\"has-text-align-center\" data-align=\"center\">183\u20ac<\/td><td class=\"has-text-align-center\" data-align=\"center\">267\u20ac<\/td><\/tr><tr><td>ACM author<\/td><td class=\"has-text-align-center\" data-align=\"center\">117\u20ac<\/td><td class=\"has-text-align-center\" data-align=\"center\">200\u20ac<\/td><\/tr><tr><td>Virtual attendee<\/td><td class=\"has-text-align-center\" data-align=\"center\">free<\/td><td class=\"has-text-align-center\" data-align=\"center\">free<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Please follow this link to register: <strong><a rel=\"noreferrer noopener\" href=\"https:\/\/cvent.me\/NlEeen\" target=\"_blank\">https:\/\/cvent.me\/NlEeen<\/a><\/strong><\/p>\n\n\n\n<p>For virtual presentation of papers and posters, please pick one of the paid virtual attendance registrations. 
Plain virtual attendance is free; virtual meeting links will be posted here.<\/p>\n\n\n\n<p><strong>Registrations include<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Tuesday welcome party in the Rennes downtown area (more info soon)<\/li><li>Wednesday, Thursday, and Friday lunches<\/li><li>Wednesday poster session food and beverage<\/li><li>Thursday social event<\/li><\/ul>\n\n\n\n<p>A <strong>visit to Mont Saint-Michel<\/strong> is planned on Saturday, Nov. 18, but is not included in the registration fees. We plan to help you organize a visit, at your own cost. Mont Saint-Michel is a one-hour drive from Rennes; our goal is to form a group for the visit and to see whether we are numerous enough to organize transportation, food, and a guided tour. We will propose a solution in any case, and will let you know the cost before you confirm your participation. To give you an idea of the cost: Mont Saint-Michel can be reached by public transport (bus from Rennes) for 30 euros (round trip), and the visit of the abbey is 11 euros. If we organize a guided tour and private transportation, the cost will of course be higher. If you are interested, please answer this form: <a href=\"https:\/\/forms.gle\/V6uNMpRhVAbCDgZh8\">https:\/\/forms.gle\/V6uNMpRhVAbCDgZh8<\/a>!<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<p id=\"start\">Motion plays a crucial role in interactive applications, such as VR, AR, and video games. Characters move around, objects are manipulated or move due to physical constraints, entities are animated, and the camera moves through the scene. Motion is currently studied in many different research areas, including graphics and animation, game technology, robotics, simulation, computer vision, and also physics, psychology, and urban studies. 
Cross-fertilization between these communities can considerably advance the state of the art in the area.<\/p>\n\n\n\n<p>The goal of the Motion, Interaction and Games conference is to bring together researchers from this variety of fields to present their most recent results, to initiate collaborations, and to contribute to the establishment of the research area. The conference will consist of regular paper sessions, poster presentations, as well as presentations by a selection of internationally renowned speakers in all areas related to interactive systems and simulation. The conference includes entertaining cultural and social events that foster casual and friendly interactions among the participants.<\/p>\n\n\n\n<p>This year again, MIG will be held in a <strong>hybrid format<\/strong>, with a strong wish to have you here in person in Rennes, so that you can fully enjoy the conference program, face-to-face interactions with the community, the city of Rennes, and the beautiful Brittany region! Nevertheless, you may choose to attend either in person or virtually, so as to allow maximum attendance. <\/p>\n\n\n\n<h1 class=\"wp-block-heading\" id=\"callpaper\">3RD CALL FOR PAPERS<\/h1>\n\n\n\n<p>The 16th annual ACM\/SIGGRAPH conference on <strong>Motion, Interaction and Games<\/strong> (MIG 2023, formerly Motion in Games), an ACM SIGGRAPH Specialized Conference held in cooperation with Eurographics, will take place in <strong>Rennes, France, 15th&#8211;17th Nov 2023.<\/strong><\/p>\n\n\n\n<p>The goal of the Motion, Interaction, and Games conference is to be a platform for bringing together researchers from interactive systems and animation, to have them present their most recent results, initiate collaborations, and contribute to the advancement of the research area. 
The conference will consist of regular paper sessions for long and short papers, and talks by a selection of internationally renowned speakers from academia as well as from industry.<\/p>\n\n\n\n<p>The conference organizers invite researchers to consider submitting their highest quality research for publication in MIG 2023.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Important dates<\/strong> <\/h3>\n\n\n\n<ul class=\"wp-block-list\"><li>Abstract submission: <strong>No abstract submission required*<\/strong><\/li><li>Long and Short Paper Submission Deadline: <strong><s>7th July 2023<\/s><\/strong> <strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">14th July 2023 (extended)<\/mark><\/strong><\/li><li>Long and Short Paper Acceptance Notification: <strong><s>1st September 2023<\/s><\/strong> <strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\"><s>5th September 2023<\/s><\/mark><\/strong> <strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">7th September <\/mark><\/strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\"><strong>23:59 AoE<\/strong><\/mark><\/li><li>Long and Short Paper Camera Ready Deadline: <strong>22nd September 2023<\/strong><\/li><\/ul>\n\n\n\n<p><strong>*<\/strong> New papers may be submitted even if no abstract was previously submitted. We have already received a significant number of abstracts, which will facilitate the reviewing process, so no further abstract submissions are required. 
Thanks to those authors who submitted abstracts.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Poster<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\"><li>Poster Submission Deadline: <s><strong>12th September 2023<\/strong><\/s><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\"> <strong>22nd September 2023<\/strong><\/mark> <strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">(extended)<\/mark><\/strong><\/li><li>Poster Notification: <s><strong>22nd September 2023<\/strong><\/s> <mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\"><strong>29th September 2023<\/strong><\/mark><\/li><li>Final Version of Accepted Posters: <s><strong>29th September 2023<\/strong><\/s> <mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\"><strong>TBD<\/strong><\/mark><\/li><\/ul>\n\n\n\n<p><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\"><strong>Note: all submission deadlines are 23:59 AoE timezone (Anywhere on Earth).<\/strong><\/mark><\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Topics of Interest<\/strong><\/h3>\n\n\n\n<p>Relevant topics include (but are not limited to):<\/p>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:500px\">\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container 
is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<ul class=\"wp-block-list\"><li>Animation Systems<\/li><li>Animal locomotion<\/li><li>Autonomous actors<\/li><li>Behavioral animation, crowds &amp; artificial life<\/li><li>Clothes, skin and hair<\/li><li>Deformable models<\/li><li>Expressive animation<\/li><li>Facial animation<\/li><li>Facial feature analysis<\/li><li>Game interaction and player experience<\/li><li>Game technology<\/li><li>Gesture recognition<\/li><li>Group and crowd behaviour<\/li><li>Human motion analysis<\/li><li>Image-based animation<\/li><li>Interaction in virtual and augmented reality<\/li><\/ul>\n<\/div><\/div>\n<\/div><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:500px\">\n<ul class=\"wp-block-list\"><li>Interactive animation systems<\/li><li>Interactive storytelling in games<\/li><li>Machine learning techniques for animation<\/li><li>Motion capture &amp; retargeting<\/li><li>Motion control<\/li><li>Motion in performing arts<\/li><li>Motion in sports<\/li><li>Motion rehabilitation systems<\/li><li>Multimodal interaction: haptics, sound, etc<\/li><li>Navigation &amp; path planning<\/li><li>Physics-based animation<\/li><li>Real-time fluids<\/li><li>Robotics&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<\/li><li>User-adaptive interaction and personalization<\/li><li>Virtual humans<\/li><li>XR (AR, VR, MR) environments<\/li><\/ul>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div><\/div>\n\n\n\n<p>We invite submissions of original, high-quality papers in any of the topics of interest (see above) or any related topic. Each submission should be <strong>7-9 pages in length for a long paper,&nbsp; or 4-6 pages for a short paper<\/strong>. References are excluded from the page limit. 
Submissions will be reviewed by our international program committee for technical quality, novelty, significance, and clarity. We encourage authors whose content fits into 6 pages to submit a short paper, and to submit a long paper only if the content requires it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Submission Instructions<\/strong><\/h3>\n\n\n\n<p>All submissions will be <strong>double-blind peer-reviewed<\/strong> by our international program committee for technical quality, novelty, significance, and clarity. Double-blind means that paper submissions must be anonymous and include the unique paper ID that will be assigned upon creating a submission using the online system.<br>Papers should not have previously appeared in, or be currently submitted to, any other conference or journal. For each accepted contribution, at least one of the authors must register for the conference.<br><br>All submissions will be considered for the <strong>Best Paper<\/strong>, <strong>Best Student Paper<\/strong>, and <strong>Best Presentation<\/strong> awards, which will be conferred during the conference. Authors of selected best papers will be invited (pending confirmation) to submit extended and significantly revised versions to a Special Issue of the Computers &amp; Graphics journal.<\/p>\n\n\n\n<p>We also invite submissions of <strong>poster <\/strong>papers in any of the topics of interest and related areas. Each submission should be 1-2 pages in length. Two types of work can be submitted directly for poster presentation: <\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Work that has been published elsewhere but is of particular relevance to the MIG community can be submitted as a poster. 
This work and the venue in which it is published should be identified in the abstract;<\/li><li> Work that is of interest to the MIG community but is not yet mature enough to appear as a paper.<\/li><\/ul>\n\n\n\n<p><br>Posters will not appear in the official MIG proceedings or in the ACM Digital Library but will appear in an online database for distribution at the authors\u2019 discretion. You can use any paper format, though the MIG paper format is recommended. In addition, you are welcome to submit supplementary material such as videos.<br><br>All submissions should be formatted using the SIGGRAPH formatting guidelines (sigconf). The LaTeX template can be found here: <a rel=\"noreferrer noopener\" href=\"https:\/\/www.acm.org\/publications\/proceedings-template\" target=\"_blank\">https:\/\/www.acm.org\/publications\/proceedings-template<\/a> (for the review version, you can use the command <strong>\\documentclass[sigconf, screen, review, anonymous]{acmart}<\/strong>)<\/p>\n\n\n\n<p><br>All papers and posters should be submitted electronically to their respective tracks on EasyChair: <a href=\"https:\/\/easychair.org\/conferences\/?conf=mig2023\">https:\/\/easychair.org\/conferences\/?conf=mig2023<\/a><\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Supplementary Material<\/strong><\/h3>\n\n\n\n<p>Due to the nature of the conference, we strongly encourage authors to submit supplementary materials (such as videos) of size up to 200 MB. They may be submitted electronically and will be made available to reviewers. For video, we advise QuickTime MPEG-4 or DivX Version 6, and for still images, we advise JPG or PNG. If you use another format, reviewers are not guaranteed to view it. An appendix may also be included as supplementary material. 
These materials will accompany the final paper in the ACM Digital Library.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\" id=\"Keynotes\">Keynotes<\/h1>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<figure class=\"wp-block-image size-full is-style-rounded\"><img loading=\"lazy\" decoding=\"async\" width=\"874\" height=\"847\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/67412302_434723400456419_5999385635322855424_o21.jpg\" alt=\"\" class=\"wp-image-148\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/67412302_434723400456419_5999385635322855424_o21.jpg 874w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/67412302_434723400456419_5999385635322855424_o21-300x291.jpg 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/67412302_434723400456419_5999385635322855424_o21-768x744.jpg 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/67412302_434723400456419_5999385635322855424_o21-150x145.jpg 150w\" sizes=\"auto, (max-width: 874px) 100vw, 874px\" \/><\/figure>\n\n\n\n<p class=\"has-text-align-center\"><strong>Johanna Pirker<\/strong><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:90%\">\n<p class=\"has-medium-font-size\" id=\"keynotes\"><strong>Dr. Johanna Pirker<\/strong> is a computer scientist focusing on game development, research, and education and an active and strong voice of the local indie dev community. 
She has lengthy experience in designing, developing, and evaluating games and VR experiences and believes in them as tools to support learning, collaboration, and solving real problems. Johanna started in the industry as a QA tester at EA and still consults for studios in the field of games user research. In 2011\/12 she started researching and developing VR experiences at the Massachusetts Institute of Technology. At the moment, she is a professor of media informatics at the <strong>Ludwig Maximilian University of Munich<\/strong> and an assistant professor for game development at TU Graz, researching games with a focus on AI, HCI, data analysis, and VR technologies. Johanna was listed on the <strong>Forbes 30 Under 30<\/strong> list of science professionals.<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-group alignwide\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<figure class=\"wp-block-image size-full is-style-rounded\"><img loading=\"lazy\" decoding=\"async\" width=\"960\" height=\"960\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/JonasBeskow.jpg\" alt=\"\" class=\"wp-image-149\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/JonasBeskow.jpg 960w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/JonasBeskow-300x300.jpg 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/JonasBeskow-150x150.jpg 150w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/JonasBeskow-768x768.jpg 768w\" sizes=\"auto, (max-width: 
960px) 100vw, 960px\" \/><\/figure>\n\n\n\n<p class=\"has-text-align-center\"><strong>Jonas Beskow<\/strong><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:90%\">\n<p class=\"has-medium-font-size\"><strong>Jonas Beskow <\/strong>is a Professor of Speech Communication, specialising in Multimodal Embodied Systems at <strong>KTH <\/strong>in Stockholm. He is also a co-founder and Senior R&amp;D Engineer at <strong>Furhat Robotics<\/strong>. His interests encompass modelling, synthesis, and understanding human communicative signals and behaviours, including speech, facial expressions, gestures, gaze, and the dynamics of face-to-face interaction. Specifically, he is passionate about integrating all these elements into machines and embodied agents, both physical and virtual, to enable more engaging and dynamic interactions.<\/p>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-group alignwide\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<figure class=\"wp-block-image size-large is-style-rounded\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/08\/Picture-1.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"1024\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/08\/Picture-1-770x1024.jpg\" alt=\"\" class=\"wp-image-201\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/08\/Picture-1-770x1024.jpg 770w, 
https:\/\/project.inria.fr\/mig2023\/files\/2023\/08\/Picture-1-226x300.jpg 226w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/08\/Picture-1-768x1021.jpg 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/08\/Picture-1-113x150.jpg 113w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/08\/Picture-1.jpg 812w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" \/><\/a><\/figure>\n\n\n\n<p class=\"has-text-align-center\"><strong>Sylvia Pan<\/strong><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:90%\">\n<p class=\"has-medium-font-size\"><strong>Prof Sylvia Pan<\/strong> is a Professor of Virtual Reality at <strong>Goldsmiths<\/strong>, University of London. She co-leads the <strong>SeeVR<\/strong> research group including 10 academics and researchers. She holds a PhD in Virtual Reality, and an MSc in Computer Graphics, both from UCL, and a BEng in Computer Science from Beihang University, Beijing, China. Before joining Goldsmiths in 2015, she worked as a research fellow at the Institute of Cognitive Neuroscience, and at the Computer Science Department of UCL. Her research interest is the use of Virtual Reality as a medium for real-time social interaction, in particular in the application areas of medical training and therapy. Her work in social anxiety in VR and moral decisions in VR has been featured multiple times in the media, including BBC Horizon, the New Scientist magazine, and the Wall Street Journal. 
Her 2017 Coursera VR specialisation attracted over 100,000 learners globally, and she co-leads the MA\/MSc in Virtual and Augmented Reality at Goldsmiths Computing.<\/p>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-group alignwide\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<figure class=\"wp-block-image size-full is-resized is-style-rounded\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/09\/steve.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/09\/steve.png\" alt=\"\" class=\"wp-image-208\" width=\"245\" height=\"286\"\/><\/a><\/figure>\n\n\n\n<p class=\"has-text-align-center\"><strong>Steve Tonneau<\/strong><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:90%\">\n<p class=\"has-medium-font-size\"><strong>How can model-based AI advance locomotion skills for legged characters?<\/strong><br><br><strong>Steve Tonneau<\/strong> is a lecturer at the <strong>University of Edinburgh<\/strong>. He defended his PhD in 2015 after 3 years in the <strong>INRIA\/IRISA Mimetic<\/strong> research team, and pursued a post-doc in robotics at <strong>LAAS-CNRS<\/strong> in Toulouse, within the Gepetto team. His research focuses on motion planning based on the biomechanical analysis of motion invariants. 
Applications include computer graphics animation as well as robotics.<\/p>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Local Industry (Wednesday November 15th, after poster session)<\/h2>\n\n\n\n<p>Interleaved with the poster session, we will host presentations from companies in the Rennes area working in computer animation and graphics! We are happy to have three great (but short) talks from:<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-group alignwide\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<figure class=\"wp-block-image size-large is-style-rounded\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-2.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-2-1024x683.jpg\" alt=\"\" class=\"wp-image-352\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-2-1024x683.jpg 1024w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-2-300x200.jpg 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-2-768x512.jpg 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-2-1536x1024.jpg 1536w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-2-150x100.jpg 150w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-2.jpg 1600w\" sizes=\"auto, (max-width: 1024px) 
100vw, 1024px\" \/><\/a><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:90%\">\n<p class=\"has-medium-font-size\"><strong><strong>St\u00e9phane Donikian<\/strong>, <\/strong>Golaem <strong>(<a href=\"https:\/\/golaem.com\/\">golaem.com<\/a>)<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-group alignwide\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<figure class=\"wp-block-image size-full is-style-rounded\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0.png\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"1000\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0.png\" alt=\"\" class=\"wp-image-353\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0.png 1000w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-300x300.png 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-150x150.png 150w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-0-768x768.png 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/a><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:90%\">\n<p class=\"has-medium-font-size\"><strong><strong><strong>Cyril Corvazier<\/strong>, <\/strong><\/strong>Mercenaries 
Engineering<strong><strong> (<a href=\"http:\/\/guerillarender.com\/\">guerillarender.com<\/a>)<\/strong><\/strong><\/p>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-group alignwide\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div class=\"wp-block-columns alignwide is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<figure class=\"wp-block-image size-full is-style-rounded\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-1.png\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-1.png\" alt=\"\" class=\"wp-image-354\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-1.png 800w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-1-300x300.png 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-1-150x150.png 150w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/pasted-image-1-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><\/a><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:90%\">\n<p class=\"has-medium-font-size\"><strong><strong><strong>Quentin Avril, <\/strong><\/strong><\/strong>InterDigital <strong><strong>(<a href=\"https:\/\/www.interdigital.com\/\">interdigital.com<\/a>)<\/strong><\/strong><\/p>\n<\/div>\n<\/div>\n<\/div><\/div>\n<\/div>\n<\/div>\n\n\n\n<h1 class=\"wp-block-heading\" id=\"IPC\">International Program 
Committee<\/h1>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p class=\"has-text-align-center\" style=\"font-size:15px\"><strong>Rahul Narain<\/strong><br>Indian Institute of Technology Delhi<br><br><strong>Panayiotis Charalambous<\/strong><br>CYENS &#8211; Center of Excellence<br><br><strong>Fotis Liarokapis<\/strong><br> CYENS &#8211; Center of Excellence<br><br><strong>Franck Multon<\/strong><br>INRIA<br><br><strong><strong>Remi Ronfard<\/strong><br><\/strong>INRIA<strong><br><br><strong>Rinat Abdrashitov<\/strong><br><\/strong>Epic Games<br><br><strong>Mikhail Bessmeltsev<\/strong><br>University of Montreal<br><br><strong>Tiberius Popa<\/strong><br>Concordia University<br><br><strong>Edmond S. L. Ho<\/strong><br>University of Glasgow<br><br><strong>Ludovic Hoyet<\/strong><br>INRIA Rennes \u2013 Centre Bretagne Atlantique<br><br><strong>Tianlu Mao<\/strong><br>Institute of Computing Technology Chinese Academy of Sciences<br><br><strong>Nuria Pelechano<\/strong><br>Universitat Polit\u00e8cnica de Catalunya<br><br><strong>Lauren Buck<\/strong><br>Trinity College Dublin<br><br><strong>Ylva Ferstl<\/strong><br>Ubisoft<br><br><strong>Yuting Ye<\/strong><br>Reality Labs Research @ Meta<br><br><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p class=\"has-text-align-center\" id=\"IPC\" style=\"font-size:15px\"><strong>Damien Rohmer<\/strong><br>Ecole Polytechnique<br><br><strong>Brandon Haworth<\/strong><br> University of Victoria<br><br><strong>Claudia Esteves<\/strong><br>Departamento de Matem\u00e1ticas, Universidad de Guanajuato<br><br><strong>Daniel Holden<\/strong><br>Epic Games<strong><br><br>He Wang<\/strong><br>University College London<br><br><strong>Eric Patterson<\/strong><br>Clemson University<br><br><strong>Ben Jones<\/strong><br>University 
of Utah<br><br><strong>Yorgos Chrysanthou<\/strong><br>University of Cyprus<br><br><strong>Eduard Zell<\/strong><br>Bonn University<br><br><strong>Marc Christie<\/strong><br>Universit\u00e9 de Rennes<br><\/p>\n\n\n\n<p class=\"has-text-align-center\" id=\"IPC\" style=\"font-size:15px\"><strong>Adam Bargteil<\/strong><br>University of Maryland, Baltimore County<br><br><strong>Steve Tonneau<\/strong><br>University of Edinburgh<br><br><strong>Ronan Boulic<\/strong><br>Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne<br><br><strong>Pei Xu<\/strong><br>Clemson University<br><br><strong>John Dingliana<\/strong><br>Trinity College Dublin<br><br><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p class=\"has-text-align-center\" style=\"font-size:15px\"><strong>Christos Mousas<\/strong><br>Purdue University<br><br><strong>Aline Normoyle<\/strong><br>Bryn Mawr College<br><br><strong>James Gain<\/strong><br>University of Cape Town<br><br><strong>Carol O&#8217;Sullivan<\/strong><br>Trinity College Dublin<strong><br><br>Matthias Teschner<\/strong><br>University of Freiburg<br><br><strong>Hang Ma<\/strong><br>Simon Fraser University<br><br><strong>Soraia Musse<\/strong><br>PUCRS<br><br><strong>Sylvie Gibet<\/strong><br>Southern Brittany University<br><br><strong>Xiaogang Jin<\/strong><br>Zhejiang University<br><br><strong>Catherine Pelachaud<\/strong><br>CNRS &#8211; ISIR, Sorbonne<br><\/p>\n\n\n\n<p class=\"has-text-align-center\" style=\"font-size:15px\"><strong>Cathy Ennis<\/strong><br>TU Dublin<br><br><strong>Zerrin Yumak<\/strong><br>Utrecht University<br><br><strong>Funda Durupinar Babur<\/strong><br>University of Massachusetts Boston<br><br><strong>Katja Zibrek<\/strong><br>INRIA<br><br><strong>Stephen Guy<\/strong><br>University of Minnesota<\/p>\n\n\n\n<p class=\"has-text-align-center\" style=\"font-size:15px\"><br><\/p>\n<\/div>\n<\/div>\n\n\n\n<h1 class=\"wp-block-heading\" 
id=\"Program\">Program<\/h1>\n\n\n\n<p><em><strong>Full papers: 30 mins (20 mins presentation + 10 mins questions)<br>Short papers: 15 mins (10 mins presentation + 5 mins questions)<\/strong><\/em><\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><em><strong>Please find a Powerpoint template for your slides here<\/strong><\/em> (note that using the template is optional): <\/p>\n\n\n\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-e0ec06ba-3db7-4d69-ab30-e97eebde5e2e\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/MIG_template.pptx\">MIG_template<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/MIG_template.pptx\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-e0ec06ba-3db7-4d69-ab30-e97eebde5e2e\">Download<\/a><\/div>\n\n\n\n<h2 class=\"has-text-align-center wp-block-heading\"><strong>Tuesday, November 14th<\/strong><\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>From 6:00PM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\"><strong>Welcome Reception, <\/strong>Delirium caf\u00e9, Rennes downtown area (see venue section for details)<\/h4>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"has-text-align-center wp-block-heading\"><strong>Wednesday, November 15th<\/strong><\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>8:45AM &#8211; 9:15AM<\/strong><br><br><strong>9:15AM &#8211; 10:00AM<br><br>10:00AM &#8211; 11:00AM<br><br>11:15AM 
&#8211; 1:00PM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\">Conference center opens<br><br>Opening remarks<br><br>Keynote 1 &#8211; Sylvia Pan<br><br>Session: <strong>ML for Motion<\/strong><\/h4>\n<\/div>\n<\/div>\n\n\n\n<p style=\"font-size:19px\"><em><strong>Learning Robust and Scalable Motion Matching with Lipschitz Continuity and Sparse Mixture of Experts.<\/strong><\/em><br><em>Tobias Kleanthous and Antonio Martini<br><\/em><strong>Objective Evaluation Metric for Motion Generative Models: Validating Fr\u00e9chet Motion Distance on Foot Skating and Over-smoothing Artifacts.<\/strong><br><em>Antoine Maiorca, Hugo Bohy, Youngwoo Yoon and Thierry Dutoit<br><strong>Motion-DVAE: Unsupervised learning for fast human motion denoising.<\/strong><\/em><br><em>Gu\u00e9nol\u00e9 Fiche, Simon Leglaive, Xavier Alameda-Pineda and Renaud S\u00e9guier<br><strong>Reward Function Design for Crowd Simulation via Reinforcement Learning.<\/strong><\/em><br><em>Ariel Kwiatkowski, Vicky Kalogeiton, Julien Pettre and Marie-Paule Cani<br><strong>MeshGraphNetRP: Improving Generalization of GNN-based Cloth Simulation.<\/strong><\/em><br><em>Emmanuel Ian Libao, Myeongjin Lee, Sumin Kim and Sung-Hee Lee<\/em><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>1:00PM &#8211; 2:30PM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p><strong>Lunch Break<\/strong><\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div 
class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>2:30PM &#8211; 3:30PM<\/strong><br><strong><br><br>3:45PM &#8211; 5:45PM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\">Keynote 2 &#8211; Steve Tonneau &#8211; <strong>How can model-based AI advance locomotion skills for legged characters?<\/strong><br><br>Session: <strong>Games<\/strong><\/h4>\n<\/div>\n<\/div>\n\n\n\n<p style=\"font-size:19px\"><em><strong>Real-time Computational Cinematographic Editing for Broadcasting of Volumetric-captured events: an Application to Ultimate Fighting.<\/strong><\/em><br><em>Francois Bourel, Xi Wang, Ervin Teng, Valerio Ortenzi, Adam Myhill and Marc Christie<br><strong>Exploring Mid-air Gestural Interfaces for Children with ADHD.<\/strong><\/em><br><em>Vera Remizova, Antti Sand, Oleg \u0160pakov, Jani Lylykangas, Moshi Qin, Terhi Helminen, Fiia Takio, Kati Rantanen, Anneli Kylli\u00e4inen, Veikko Surakka and Yulia Gizatdinova<br><strong>Player Exploration Patterns in Interactive Molecular Docking with Electrostatic Visual Cues.<\/strong><\/em><br><em>Lin Liu, Torin Adamson, Lydia Tapia and Bruna Jacobson<br><strong>Heat Simulation on Meshless Crafted-Made Shapes.<\/strong><\/em><br><em>Auguste De Lambilly, Gabriel Benedetti, Nour Rizk, Chen Hanqi, Siyuan Huang, Junnan Qiu, David Louapre, Raphael Granier de Cassagnac and Damien Rohmer<br><strong>Virtual Joystick Control Sensitivity and Usage Patterns in a Large-Scale Touchscreen-Based Mobile Game Study.<\/strong><\/em><br><em>John Baxter, Torin Adamson, Yazied Hasan, Mohammad Yousefi, Lidia Obregon, Evan Carter and Lydia Tapia<\/em><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow 
wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>6:15PM &#8211; 6:30PM<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>6:30PM &#8211; 7:30PM<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>7:30PM &#8211; &#8230;<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\">Posters Fast Forward<\/h4>\n\n\n\n<h4 class=\"wp-block-heading\">Local Industry Keynote<\/h4>\n\n\n\n<h4 class=\"wp-block-heading\">Poster Session with Galettes Party, Inria Convention Center<br><a href=\"#Posters\">Click here for the poster program<\/a><\/h4>\n\n\n\n<p><br><\/p>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"has-text-align-center wp-block-heading\">Thursday, November 16th<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>09:15AM &#8211; 11:30AM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\">Session: <strong>ML for faces<\/strong><\/h4>\n<\/div>\n<\/div>\n\n\n\n<p style=\"font-size:19px\"><em><strong>SoftDECA: Computationally Efficient Physics-Based Facial Animations<\/strong>.<br>Nicolas Wagner, Ulrich Schwanecke and Mario Botsch<br><strong>Learned Real-time Facial Animation from Audiovisual Inputs for Low-end Devices<\/strong>.<br>I\u00f1aki Navarro, Dario Kneubuehler, Tijmen Verhulsdonck, Eloi du Bois, William Welch, Charles Shang, Ian Sachs, Victor Zordan, Morgan McGuire and Kiran Bhat<br><strong>FaceDiffuser: Speech-Driven Facial Animation Synthesis Using Diffusion<\/strong>.<br>Stefan Stan, Kazi Injamamul Haque and Zerrin Yumak<br><strong>MUNCH: 
Modelling Unique \u2019N Controllable Heads<\/strong>.<br>Debayan Deb, Suvidha Tripathi and Pranit Puri<br><\/em><s><em><strong>Generating Emotionally Expressive Look-At Animation<\/strong><\/em>.<br><em>Ylva Ferstl<\/em><\/s> (moved to Friday)<\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color\" style=\"font-size:19px\"><em><strong>Physical Simulation of Balance Recovery after a Push.<\/strong><\/em><br><em>Alexis Jensen, Thomas Chatagnon, Niloofar Khoshsiyar, Daniele Reda, Michiel van de Panne, Charles Pontonnier and Julien Pettr\u00e9<\/em> <br>(moved from Friday to Thursday)<br><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>12:00PM &#8211; 1:00PM<br><br>2:30PM &#8211; 4:00PM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\">Keynote 3 &#8211; Johanna Pirker<br><br>Session: <strong>Virtual Reality<\/strong><\/h4>\n<\/div>\n<\/div>\n\n\n\n<p style=\"font-size:19px\"><em><strong>Avatar Tracking Control with Featherstone&#8217;s Algorithm and Newton-Euler Formulation for Inverse Dynamics.<\/strong><\/em><br><em>Ken Sugimori, Hironori Mitake, Hirohito Sato and Shoichi Hasegawa<br><strong>Real-Time Conversational Gaze Synthesis for Avatars.<\/strong><\/em><br><em>Ryan Canales, Eakta Jain and Sophie Joerg<br><strong>Effect of Avatar Clothing and User Personality on Group Dynamics in Virtual Reality.<\/strong><\/em><br><em>Yuan He, Lauren Buck, Brendan Rooney and Rachel McDonnell<\/em><br><em><strong>Designing Hand-held Controller-based Handshake Interaction in Social VR and Metaverse.<\/strong><\/em><br><em>Filippo Gabriele Prattic\u00f2, Irene Checo, Alessandro Visconti, Adalberto Simeone and Fabrizio 
Lamberti<br><strong>Runtime Motion Adaptation for Precise Character Locomotion.<\/strong><\/em><br><em>Noureddine Gueddach, Steven Poulakos and Robert Sumner<\/em><\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color\"><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>5:00PM<\/strong><\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><br><strong>From 7:30PM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\"><strong>City Tour, <\/strong>meeting point in front of the tourism office (<a href=\"https:\/\/maps.app.goo.gl\/QYea7D4VBUtVqZk7A\">Click here for the meetup position<\/a>)<\/h4>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Dinner,<\/strong> La Fabrique Saint-Georges, downtown area<\/h4>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"has-text-align-center wp-block-heading\">Friday, November 17th<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>09:15AM &#8211; 11:00AM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\">Session: <strong>Animation<\/strong><\/h4>\n<\/div>\n<\/div>\n\n\n\n<p style=\"font-size:19px\"><em><strong>Primal 
Extended Position Based Dynamics for Hyperelasticity.<\/strong><\/em><br><em>Yizhou Chen, Yushan Han, Jingyu Chen, Shiqian Ma, Ronald Fedkiw and Joseph Teran<\/em><br><s><em><strong>Physical Simulation of Balance Recovery after a Push.<\/strong><\/em><br><em>Alexis Jensen, Thomas Chatagnon, Niloofar Khoshsiyar, Daniele Reda, Michiel van de Panne, Charles Pontonnier and Julien Pettr\u00e9<\/em><\/s><em><br><strong>SwimXYZ: A large-scale dataset of synthetic swimming motions and videos.<\/strong><\/em><br><em>Gu\u00e9nol\u00e9 Fiche, Vincent Sevestre, Camila Gonzalez-Barral, Simon Leglaive and Renaud S\u00e9guier<br><strong>Video-Based Motion Retargeting Framework between Characters with Various Skeleton Structure.<\/strong><\/em><br><em>Xin Huang and Takashi Kanai<br><strong>Navigating With a Defensive Agent: Role Switching for Human Automation Collaboration.<\/strong><\/em><br><em>Liz DiGioia, Torin Adamson, Yazied Hasan, Lidia Obregon, Evan Carter and Lydia Tapia<\/em><\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color\"><em><strong>Generating Emotionally Expressive Look-At Animation<\/strong><\/em>.<br><em>Ylva Ferstl<\/em> (moved from Thursday to Friday)<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>11:30AM &#8211; 12:30PM<br><br>12:30PM &#8211; 1:00PM<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<h4 class=\"wp-block-heading\">Keynote 4 &#8211; Jonas Beskow<br><br>Closing remarks<\/h4>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"has-text-align-center wp-block-heading\">Saturday, November 18th &#8211; OPTIONAL (see Registrations section) &#8211; Mont-Saint-Michel tour<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex 
wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<h4 class=\"wp-block-heading\"><strong>9:00-18:00<\/strong><\/h4>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p>More details soon<\/p>\n<\/div>\n<\/div>\n\n\n\n<h1 class=\"wp-block-heading\">Venue<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">Rennes City, France<\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large is-style-default\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/DSF2847.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"1920\" height=\"1765\" data-id=\"103\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/DSF2847.jpg\" alt=\"\" class=\"wp-image-103\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/DSF2847.jpg 1920w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/DSF2847-300x276.jpg 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/DSF2847-1024x941.jpg 1024w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/DSF2847-768x706.jpg 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/DSF2847-1536x1412.jpg 1536w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/DSF2847-150x138.jpg 150w\" sizes=\"auto, (max-width: 1920px) 100vw, 1920px\" \/><\/a><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/Place-mairie-rennes-visite.jpg\"><img loading=\"lazy\" decoding=\"async\" 
width=\"1600\" height=\"1066\" data-id=\"104\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/Place-mairie-rennes-visite.jpg\" alt=\"\" class=\"wp-image-104\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/Place-mairie-rennes-visite.jpg 1600w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/Place-mairie-rennes-visite-300x200.jpg 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/Place-mairie-rennes-visite-1024x682.jpg 1024w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/Place-mairie-rennes-visite-768x512.jpg 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/Place-mairie-rennes-visite-1536x1023.jpg 1536w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/Place-mairie-rennes-visite-150x100.jpg 150w\" sizes=\"auto, (max-width: 1600px) 100vw, 1600px\" \/><\/a><figcaption>Old town tour. Destination Rennes.<\/figcaption><\/figure>\n<\/figure>\n<\/div>\n<\/div>\n\n\n\n<p>The conference will take place at the Inria Conference Center in Rennes, 263 Av. G\u00e9n\u00e9ral Leclerc, 35000 Rennes. We recommend staying in the downtown area: the Inria Conference Center is easily reached from there by public transport (20-30 minutes). Follow this <a href=\"https:\/\/www.tourisme-rennes.com\/en\/\">link<\/a> for more information about the city and its tourist attractions. 
<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/transport-1.png\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"742\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/transport-1-1024x742.png\" alt=\"\" class=\"wp-image-325\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/transport-1-1024x742.png 1024w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/transport-1-300x217.png 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/transport-1-768x557.png 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/transport-1-150x109.png 150w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/transport-1.png 1207w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Recommended Hotels<\/h2>\n\n\n\n<p>Here is a non-exhaustive list of hotels we recommend for your stay:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/les-chouettes-hostel-2\/\" target=\"_blank\">Les Chouettes Hostel<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/sejours-et-affaires-apparthotel-de-bretagne-2\/\" target=\"_blank\">S\u00e9jours et Affaires Appart\u2019h\u00f4tel de Bretagne<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/hotel-mercure-place-de-bretagne-2\/\" target=\"_blank\">Hotel Mercure Place de Bretagne<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/campanile-rennes-centre-gare-2\/\" target=\"_blank\">Campanile Rennes Centre Gare<\/a><\/li><li><a rel=\"noreferrer noopener\" 
href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/adagio-access-rennes-centre-2\/\" target=\"_blank\">Adagio Access Rennes Centre<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/hotel-de-nemours-2\/\" target=\"_blank\">H\u00f4tel de Nemours<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/houses\/gite-du-passant-rennais-2\/\" target=\"_blank\">G\u00eete du Passant Rennais<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/hotel-anne-de-bretagne-2\/\" target=\"_blank\">H\u00f4tel Anne de Bretagne<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/balthazar-hotel-and-spa-2\/\" target=\"_blank\">Balthazar Hotel and Spa<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/mercure-rennes-centre-parlement-2\/\" target=\"_blank\">Mercure Rennes Centre Parlement<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/hotel-atlantic-2\/\" target=\"_blank\">H\u00f4tel Atlantic<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/le-magic-hall-2\/\" target=\"_blank\">Le Magic Hall<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/hotel-le-saint-antoine-2\/\" target=\"_blank\">H\u00f4tel Le Saint-Antoine<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/novotel-rennes-centre-gare-2\/\" target=\"_blank\">Novotel Rennes Centre Gare<\/a><\/li><li><a rel=\"noreferrer 
noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/garden-hotel-2\/\" target=\"_blank\">Garden Hotel<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/kyriad-rennes-centre-gare-2\/\" target=\"_blank\">Kyriad Rennes Centre Gare<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/mercure-rennes-centre-gare-2\/\" target=\"_blank\">Mercure Rennes Centre Gare<\/a><\/li><li><a rel=\"noreferrer noopener\" href=\"https:\/\/www.tourisme-rennes.com\/en\/organize-my-trip\/the-accomodations\/ibis-styles-rennes-centre-gare-nord-2\/\" target=\"_blank\">Ibis Styles Rennes Centre Gare Nord<\/a><\/li><\/ul>\n\n\n\n<h2 class=\"has-text-align-center wp-block-heading\">Inria Rennes Convention Center<\/h2>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/CRI_Rennes.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"398\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/CRI_Rennes.jpg\" alt=\"\" class=\"wp-image-102\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/CRI_Rennes.jpg 600w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/CRI_Rennes-300x199.jpg 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/05\/CRI_Rennes-150x100.jpg 150w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><\/figure><\/div>\n\n\n\n<p class=\"has-text-align-center\">ACM MIG Conference 
will take place in the Inria Convention Center, in the Beaulieu Campus Area. You can reach the campus either with metro line B (Beaulieu Universit\u00e9 station, +15 mins walk to reach the convention center) or with city bus lines C4 or C6 (Preales or Tournebride stops, +5 mins walk).<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/map.png\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"676\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/map-1024x676.png\" alt=\"\" class=\"wp-image-326\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/map-1024x676.png 1024w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/map-300x198.png 300w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/map-768x507.png 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/map-150x99.png 150w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/map.png 1090w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Social Events<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Welcome Reception<\/h3>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/10\/delirium-bar-rennes-ohqapnpepxckjdqy6ec71n2ns3v9bmit0h2yi7j3hs.jpeg\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"384\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/10\/delirium-bar-rennes-ohqapnpepxckjdqy6ec71n2ns3v9bmit0h2yi7j3hs-1024x384.jpeg\" alt=\"\" class=\"wp-image-284\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/10\/delirium-bar-rennes-ohqapnpepxckjdqy6ec71n2ns3v9bmit0h2yi7j3hs-1024x384.jpeg 1024w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/10\/delirium-bar-rennes-ohqapnpepxckjdqy6ec71n2ns3v9bmit0h2yi7j3hs-300x113.jpeg 300w, 
https:\/\/project.inria.fr\/mig2023\/files\/2023\/10\/delirium-bar-rennes-ohqapnpepxckjdqy6ec71n2ns3v9bmit0h2yi7j3hs-768x288.jpeg 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/10\/delirium-bar-rennes-ohqapnpepxckjdqy6ec71n2ns3v9bmit0h2yi7j3hs-1536x576.jpeg 1536w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/10\/delirium-bar-rennes-ohqapnpepxckjdqy6ec71n2ns3v9bmit0h2yi7j3hs-150x56.jpeg 150w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/10\/delirium-bar-rennes-ohqapnpepxckjdqy6ec71n2ns3v9bmit0h2yi7j3hs.jpeg 1600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n\n\n\n<p>The Welcome Reception will take place at the Delirium caf\u00e9 on Tuesday, Nov. 14th from 6PM (until as late as you like). One drink and some platters of cold cuts or cheese will be served. Delirium caf\u00e9 is located at 15, place des Lices in the downtown area (<a href=\"https:\/\/maps.app.goo.gl\/nNTqvXFVjgUxp8Ez6\" data-type=\"URL\" data-id=\"https:\/\/maps.app.goo.gl\/nNTqvXFVjgUxp8Ez6\">link to Google Maps<\/a>). <\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Poster Session and Industrial Keynotes, Wednesday Nov. 
15th at 7:30PM<\/h3>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"Posters\">Poster program:<\/h4>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong>ZStudio: Portable and Real-time Motion Capture Studio for Creators in the Metaverse.<\/strong><\/em><br><em>Jung-Seok Cho, Seongchan Jeong, Geonwon Lee, Jaehyun Han and HyeRin Yoo.<\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-b080c0d4-18a3-421c-9ca7-ddd8a96e7103\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_1.pdf\">poster_1<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_1.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-b080c0d4-18a3-421c-9ca7-ddd8a96e7103\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong>Improving self-supervised 3D face reconstruction with few-shot transfer learning.<\/strong><\/em><\/strong><\/em><br><em><em>Martin Dornier, Philippe-Henri Gosselin, Christian Raymond, Yann Ricquebourg and Bertrand Co\u00fcasnon.<\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-d0095f11-8f35-4595-9bf9-c040aa56e290\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_2.pdf\">poster_2<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_2.pdf\" 
class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-d0095f11-8f35-4595-9bf9-c040aa56e290\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong>Disentangling Embedding Vectors for Controllable Facial Video Generation.<\/strong><\/em><br><\/strong><\/em><\/strong><em><em>Matt Partridge.<\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-68b9ac02-40d9-4bcb-93dc-f5271f52d0e9\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_3.pdf\">poster_3<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_3.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-68b9ac02-40d9-4bcb-93dc-f5271f52d0e9\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong>A summary of VR quadruped embodiment using NeuroDog.<\/strong><\/em><br><\/strong><\/em><\/strong><em><em>Donal Egan, Darren Cosker and Rachel McDonnell.<\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-d4303eeb-5c4b-4e49-ba99-116d07bcaf15\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_4.pdf\">poster_4<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_4.pdf\" 
class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-d4303eeb-5c4b-4e49-ba99-116d07bcaf15\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong>ShapeVerse: Physics-based Characters with Varied Body Shapes.<\/strong><\/em><br><em>Bharat Vyas and Carol O&#8217;Sullivan.<\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-c1ca107d-d9f5-4997-a547-1bd513c4a1c6\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_5.pdf\">poster_5<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_5.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-c1ca107d-d9f5-4997-a547-1bd513c4a1c6\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong>How Much Do We Pay Attention? A Comparative Study of User Gaze and Synthetic Vision during Navigation. 
<\/strong><br><\/em><\/strong><em>Julia Melgare, Guido Mainardi, Eduardo Alvarado, Damien Rohmer, Marie-Paule Cani and Soraia Musse.<\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-3741237a-8bd1-4a80-8a61-cc6f19b994a9\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_6.pdf\">poster_6<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_6.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-3741237a-8bd1-4a80-8a61-cc6f19b994a9\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong>A Comparative Evaluation of Formed Team Perception when in Human-Human and Human-Autonomous Teams.<\/strong><\/em><br><\/strong><\/em><\/strong>Chandni Murmu, Gnaanavarun Parthiban, Konnor McDowell and Nathan McNeese.<\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-5e74a7ad-7201-41c8-b1ac-fb7ee915f9d4\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_7.pdf\">poster_7<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_7.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-5e74a7ad-7201-41c8-b1ac-fb7ee915f9d4\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" 
style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong><em><strong>Exploring the Influence of Super-Functional Virtual Hands on Embodiment and Perception in Virtual Reality with Children.<\/strong><\/em><br><\/strong><\/em><\/strong><\/em><\/strong><em><em><em>Yuke Pi, Leif Johannsen, Simon Thurlbeck, Dorothy Cowie, Marco Gillies and Xueni Pan.<\/em><\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-36860d7b-38a0-4f99-9920-da07f326fb6e\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_8.pdf\">poster_8<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_8.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-36860d7b-38a0-4f99-9920-da07f326fb6e\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong><em><strong>Improving motion matching for VR avatars by fusing inside-out tracking with outside-in 3D pose estimation.<\/strong><\/em><br><\/strong><\/em><\/strong><\/em><\/strong><em><em><em>George Fletcher, Donal Egan, Rachel McDonnell and Darren Cosker.<\/em><\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-47ee2283-b7c7-4f1c-b665-1f205a7c60c0\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_9.pdf\">poster_9<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_9.pdf\" class=\"wp-block-file__button\" download 
aria-describedby=\"wp-block-file--media-47ee2283-b7c7-4f1c-b665-1f205a7c60c0\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong><em><strong>Enabling Physical VR Interaction with Deep RL Agents.<\/strong><\/em><br><\/strong><\/em><\/strong><\/em><\/strong><em><em><em>Paul Boursin, David Hamelin, James Burness and Marie-Paule Cani.<\/em><\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-6529a1df-073a-4f34-8195-2003f03bf313\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_10.pdf\">poster_10<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_10.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-6529a1df-073a-4f34-8195-2003f03bf313\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong><em><strong><em><strong>Sparse Motion Semantics for Contact-Aware Retargeting.<\/strong><\/em><br><\/strong><\/em><\/strong><\/em><\/strong><\/em><\/strong><em><em><em><em>Th\u00e9o Cheynel, Thomas Rossi, Baptiste Bellot-Gurlet, Damien Rohmer and Marie-Paule Cani.<\/em><\/em><\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-89f38d97-6aaa-48bf-b1aa-f1e5744cbca8\" 
href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_11.pdf\">poster_11<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_11.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-89f38d97-6aaa-48bf-b1aa-f1e5744cbca8\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong><em><strong>Detailed Eye Region Capture and Animation.<\/strong><\/em><br><\/strong><\/em><\/strong><\/em><\/strong><em><em><em>Glenn Kerbiriou, Quentin Avril and Maud Marchal.<\/em><\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-ea91d3f1-7a96-4d82-9d61-46c3f0026b47\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_12.pdf\">poster_12<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_12.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-ea91d3f1-7a96-4d82-9d61-46c3f0026b47\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong><em><strong><em><strong><em><strong>ArtWalks via Latent Diffusion Models.<\/strong><\/em><br><\/strong><\/em><\/strong><\/em><\/strong><\/em><\/strong><\/em><\/strong><em><em><em><em><em>Alberto Pennino, Majed El Helou, Daniel Vera Nieto and Fabio Zund.<\/em><\/em><\/em><\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow 
wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-7ff673f4-9798-4560-9d90-aaa6433d5cbf\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_13.pdf\">poster_13<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_13.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-7ff673f4-9798-4560-9d90-aaa6433d5cbf\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong><em><strong><em><strong><em><strong><em><strong>A Perceptual Sensing System for Interactive Virtual Agents: towards Human-like Expressiveness and Reactiveness.<\/strong><\/em><br><\/strong><\/em><\/strong><\/em><\/strong><\/em><\/strong><\/em><\/strong><\/em><\/strong><em><em><em><em><em><em>Alberto Jovane and Pierre Raimbaud.<\/em><\/em><\/em><\/em><\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-798ce243-d329-4723-841f-302d30b31e8a\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_14.pdf\">poster_14<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_14.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-798ce243-d329-4723-841f-302d30b31e8a\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<p><em><strong>Persuasive polite robots in free-standing conversational groups<\/strong><\/em><br><em>Christopher Peters<\/em><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column 
is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:85%\">\n<p><em><strong><em><strong><em><strong><em><strong>Real-time self-contact sensitive finger and full-body animation of avatars with different morphologies and proportions<\/strong><\/em><br><\/strong><\/em><\/strong><\/em><\/strong><em><em><em>Mathias Delahaye, Bruno Herbelin and Ronan Boulic<\/em><\/em><\/em><\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:20%\">\n<div class=\"wp-block-file\"><a id=\"wp-block-file--media-9ea30ca7-fbd0-480a-aaf4-0deec8e16345\" href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_16.pdf\">poster_16<\/a><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/11\/poster_16.pdf\" class=\"wp-block-file__button\" download aria-describedby=\"wp-block-file--media-9ea30ca7-fbd0-480a-aaf4-0deec8e16345\">Download<\/a><\/div>\n<\/div>\n<\/div>\n\n\n\n<h3 class=\"wp-block-heading\">City Tour, Thursday nov. 16th 17:00PM<\/h3>\n\n\n\n<p>Meetup in front of the tourisme office (<a href=\"https:\/\/maps.app.goo.gl\/QYea7D4VBUtVqZk7A\">Click here for position<\/a>) at 17:00PM for a tour of Rennes before dinner !<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Dinner in the downtown area, Thursday nov. 
16th at 19:30<\/h3>\n\n\n\n<p>Meet up at &#8216;La Fabrique Saint-Georges&#8217; (<a href=\"https:\/\/maps.app.goo.gl\/3Ri3gQyLzwdyRKPG6\">link to Google Maps<\/a>) at 19:30 for a convivial dinner in the downtown area of Rennes!<\/p>\n\n\n\n<hr class=\"wp-block-separator\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Conference Organization<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"Contact\">Conference Chairs<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>Julien Pettr\u00e9, Inria, France<\/li><li>Barbara Solenthaler, ETH Zurich, Switzerland<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Program Chairs<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>Rachel McDonnell, TCD, Ireland<\/li><li>Christopher Peters, KTH, Sweden<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Poster Chair<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>Alberto Jovane, Trinity College Dublin<\/li><\/ul>\n\n\n\n<h1 class=\"wp-block-heading\">Main Contact<\/h1>\n\n\n\n<p>All questions about submissions should be emailed to Rachel McDonnell (ramcdonn (at) tcd.ie) and Christopher Peters (chpeters (at) kth.se).<\/p>\n\n\n\n<p>General conference questions should be emailed to Julien Pettr\u00e9, Inria, France (julien.pettre (at) inria.fr).<\/p>\n\n\n\n<p>All questions about posters should be emailed to Alberto Jovane, Trinity College Dublin (JOVANEA (at) tcd.ie).<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Sponsors<\/h1>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK-1024x189.png\" alt=\"\" class=\"wp-image-180\" width=\"512\" height=\"95\" srcset=\"https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK-1024x189.png 1024w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK-300x55.png 300w, 
https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK-768x142.png 768w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK-1536x284.png 1536w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK-2048x378.png 2048w, https:\/\/project.inria.fr\/mig2023\/files\/2023\/06\/RBLX_WORDMARK_BLK-150x28.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>","protected":false},"excerpt":{"rendered":"<p>Last updates: MIG 2023 AWARDS RESULTS BEST POSTER AWARD: Enabling Physical VR Interaction with Deep RL Agents.Paul Boursin, David Hamelin, James Burness and Marie-Paule Cani BEST PRESENTATION AWARD:Physical Simulation of Balance Recovery after a Push.Alexis Jensen, Thomas Chatagnon, Niloofar Khoshsiyar, Daniele Reda, Michiel van de Panne, Charles Pontonnier and Julien\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/mig2023\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> 
<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"open","template":"","meta":{"footnotes":""},"class_list":["post-4","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/mig2023\/wp-json\/wp\/v2\/pages\/4","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/mig2023\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/mig2023\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/mig2023\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/mig2023\/wp-json\/wp\/v2\/comments?post=4"}],"version-history":[{"count":150,"href":"https:\/\/project.inria.fr\/mig2023\/wp-json\/wp\/v2\/pages\/4\/revisions"}],"predecessor-version":[{"id":420,"href":"https:\/\/project.inria.fr\/mig2023\/wp-json\/wp\/v2\/pages\/4\/revisions\/420"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/mig2023\/wp-json\/wp\/v2\/media?parent=4"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}