

{"id":4,"date":"2011-12-08T11:55:34","date_gmt":"2011-12-08T10:55:34","guid":{"rendered":"http:\/\/project.inria.fr\/template1\/?page_id=4"},"modified":"2018-06-18T16:39:11","modified_gmt":"2018-06-18T14:39:11","slug":"home","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/humans2018\/","title":{"rendered":"<strong>3D HUMANS 2018<\/strong>"},"content":{"rendered":"<p><strong>in conjunction with CVPR 2018, Salt Lake City, June 18th 2018.<\/strong><\/p>\n<p><strong>Topic<\/strong><br \/>\nThis workshop aims to gather researchers who work on 3D understanding of humans from visual data, including topics such as 3D human pose estimation and tracking, 3D human shape estimation from RGB images, and human activity recognition from 3D skeletal data. Current computer vision algorithms and deep learning-based methods can detect people in images and estimate their 2D pose with remarkable accuracy. However, understanding humans and estimating their pose and shape in 3D is still an open problem. The ambiguities in lifting 2D pose to 3D, the lack of annotated data to train 3D pose regressors in the wild, and the absence of a reliable evaluation dataset in real-world situations make the problem very challenging. 
The workshop will include <strong> 8 invited talks<\/strong> and <strong> 2 poster sessions<\/strong> with a total of 21 posters.<\/p>\n<p><strong>Organizers<\/strong><\/p>\n<p><a href=\"http:\/\/www.gregrogez.net\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-117\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/greg.jpeg\" alt=\"\" width=\"94\" height=\"128\" \/><\/a>\u00a0\u00a0\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <a href=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/thumb_Javier_winter_2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-118\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/thumb_Javier_winter_2.png\" alt=\"\" width=\"115\" height=\"127\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/thumb_Javier_winter_2.png 145w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/thumb_Javier_winter_2-136x150.png 136w\" sizes=\"auto, (max-width: 115px) 100vw, 115px\" \/><\/a><br \/>\n<a href=\"http:\/\/www.gregrogez.net\">Gr\u00e9gory Rogez (Inria)<\/a>, <a href=\"https:\/\/es.linkedin.com\/in\/javier-romero-38b87331\">Javier Romero (Amazon)<\/a><\/p>\n<p><strong>Sponsors<\/strong><br \/>\n<a href=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/NAVERLABS_Europe_LOGO.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-108\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/NAVERLABS_Europe_LOGO-300x75.jpg\" alt=\"\" width=\"300\" height=\"75\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/NAVERLABS_Europe_LOGO-300x75.jpg 300w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/NAVERLABS_Europe_LOGO-768x191.jpg 768w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/NAVERLABS_Europe_LOGO-1024x254.jpg 1024w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/NAVERLABS_Europe_LOGO-150x37.jpg 150w\" sizes=\"auto, 
(max-width: 300px) 100vw, 300px\" \/><\/a>\u00a0 <a href=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/AMAZON_LOGO.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-109\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/AMAZON_LOGO-300x147.png\" alt=\"\" width=\"151\" height=\"74\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/AMAZON_LOGO-300x147.png 300w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/AMAZON_LOGO-150x73.png 150w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/AMAZON_LOGO.png 321w\" sizes=\"auto, (max-width: 151px) 100vw, 151px\" \/><\/a><\/p>\n<p><strong>Program<\/strong><\/p>\n<ul>\n<li>08:50 &#8211; 09:00 introduction\/opening remarks<\/li>\n<li>09:00 &#8211; 09:30 <strong> Dr Christian Wolf (INSA): <\/strong>&#8220;Pose or attention for human activity recognition?&#8221;<strong><br \/>\n<\/strong><\/li>\n<li>09:30 &#8211; 10:00 <strong> Dr Gerard Pons-Moll (MPII): <\/strong>&#8220;From pixels to 3D human pose, shape and clothing&#8221;<\/li>\n<li>10:00 &#8211; 11:00 Coffee break \/ poster session 1<\/li>\n<li>11:00 &#8211; 11:30 <strong> Prof Deva Ramanan (CMU):<\/strong> &#8220;Analyzing human poses, tracks, and actions&#8221;<\/li>\n<li>11:30 &#8211; 12:00 <strong> Prof Yaser Sheikh (CMU\/Facebook):<\/strong> &#8220;Social perception: enabling machines to perceive social behavior&#8221;<strong><br \/>\n<\/strong><\/li>\n<li>12:00 &#8211; 13:30 Lunch break<\/li>\n<li>13:30 &#8211; 14:00 <strong> Prof Michael J. 
Black (MPI-IS)<\/strong><\/li>\n<li>14:00 &#8211; 14:30 <strong> Prof Kostas Daniilidis (UPenn): <\/strong>&#8220;3D human pose in-the-wild with diverse supervision&#8221;<strong><br \/>\n<\/strong><\/li>\n<li>14:30 &#8211; 15:00 <strong> Dr Cordelia Schmid (Inria\/Google): <\/strong>&#8220;Inference of 3D human body poses and shapes&#8221;<strong><br \/>\n<\/strong><\/li>\n<li>15:00 &#8211; 15:30 <strong> Prof Iasonas Kokkinos (UCL\/Facebook): <\/strong>&#8220;DensePose: dense pose estimation in the wild&#8221;<strong><br \/>\n<\/strong><\/li>\n<li>15:30 &#8211; 16:30 Coffee break \/ poster session 2<\/li>\n<li>16:30 &#8211; 17:00 panel discussion, awards and closing<\/li>\n<\/ul>\n<p><strong>Speakers<\/strong><\/p>\n<p><a href=\"https:\/\/perso.liris.cnrs.fr\/christian.wolf\/\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-119 alignnone\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/christianWolf-300x300.jpg\" alt=\"\" width=\"109\" height=\"109\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/christianWolf-300x300.jpg 300w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/christianWolf-150x150.jpg 150w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/christianWolf.jpg 400w\" sizes=\"auto, (max-width: 109px) 100vw, 109px\" \/><\/a><a href=\"http:\/\/virtualhumans.mpi-inf.mpg.de\/\"> <img loading=\"lazy\" decoding=\"async\" class=\" wp-image-120\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/GerardPons-300x300.jpg\" alt=\"\" width=\"109\" height=\"109\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/GerardPons-300x300.jpg 300w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/GerardPons-150x150.jpg 150w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/GerardPons-768x768.jpg 768w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/GerardPons-1024x1024.jpg 1024w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/GerardPons.jpg 1928w\" 
sizes=\"auto, (max-width: 109px) 100vw, 109px\" \/><\/a> <a href=\"https:\/\/www.cs.cmu.edu\/~deva\/\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-121\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Deva.jpg\" alt=\"\" width=\"93\" height=\"111\" \/><\/a> <a href=\"http:\/\/www.cs.cmu.edu\/~yaser\/\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-122\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Yaser.jpg\" alt=\"\" width=\"79\" height=\"112\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Yaser.jpg 131w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Yaser-106x150.jpg 106w\" sizes=\"auto, (max-width: 79px) 100vw, 79px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/ps.is.tuebingen.mpg.de\/person\/black\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-123\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Michael-261x300.jpg\" alt=\"\" width=\"100\" height=\"115\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Michael-261x300.jpg 261w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Michael-130x150.jpg 130w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Michael.jpg 297w\" sizes=\"auto, (max-width: 100px) 100vw, 100px\" \/><\/a><a href=\"http:\/\/www.cis.upenn.edu\/~kostas\/\">\u00a0<img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-124\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/kostas2011www-232x300.jpg\" alt=\"\" width=\"90\" height=\"116\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/kostas2011www-232x300.jpg 232w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/kostas2011www-116x150.jpg 116w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/kostas2011www.jpg 463w\" sizes=\"auto, (max-width: 90px) 100vw, 90px\" \/>\u00a0<\/a><a href=\"https:\/\/thoth.inrialpes.fr\/~schmid\/\"><img loading=\"lazy\" 
decoding=\"async\" class=\"alignnone wp-image-125\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Cordelia.jpg\" alt=\"\" width=\"93\" height=\"116\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Cordelia.jpg 201w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/Cordelia-120x150.jpg 120w\" sizes=\"auto, (max-width: 93px) 100vw, 93px\" \/><\/a> <a href=\"http:\/\/www0.cs.ucl.ac.uk\/staff\/I.Kokkinos\/\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-126\" src=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/iasonas.jpg\" alt=\"\" width=\"117\" height=\"117\" srcset=\"https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/iasonas.jpg 230w, https:\/\/project.inria.fr\/humans2018\/files\/2011\/12\/iasonas-150x150.jpg 150w\" sizes=\"auto, (max-width: 117px) 100vw, 117px\" \/><\/a><\/p>\n<p><strong>Poster session 1:<\/strong><\/p>\n<ul>\n<li>1: Monocular RGB Hand Pose Inference from Unsupervised Refinable Nets<br \/>\nEndri Dibra; Thomas Wolf; Markus Gross; Cengiz Oztireli; Silvan Melchior; Ali Balkis<\/li>\n<li>2: Unsupervised Features for Facial Expression Intensity Estimation over Time<br \/>\nJoern Ostermann; Maren Awiszus; Stella Gra\u00dfhof; Felix Kuhnke<\/li>\n<li>3: Deep Learning Whole Body Point Cloud Scans from a Single Depth Map<br \/>\nJohn Zelek; Nolan Lunscher<\/li>\n<li>4: HandyNet: A One-stop Solution to Detect, Segment, Localize &amp; Analyze Driver Hands<br \/>\nMohan Trivedi; Akshay Rangesh<\/li>\n<li>5: Cross-modal Deep Variational Hand Pose Estimation<br \/>\nAdrian Spurr, Jie Song, Seonwook Park, Otmar Hilliges.<\/li>\n<li>6: 4D Human Body Correspondences from Panoramic Depth Maps.<br \/>\nZhong Li, Minye Wu, Yitengwang Zhou, Jingyi Yu<\/li>\n<li>7: DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor<br \/>\nTao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll and Yebin 
Liu.<\/li>\n<li>8: 2D\/3D Pose Estimation and Action Recognition using Multitask Deep Learning<br \/>\nDiogo C. Luvizon, David Picard, Hedi Tabia<\/li>\n<li>9: Ordinal Depth Supervision for 3D Human Pose Estimation<br \/>\nGeorgios Pavlakos, Xiaowei Zhou, Kostas Daniilidis<\/li>\n<li>10: First-Person Hand Action Benchmark with RGB-D Videos and 3D Hand Pose Annotations<br \/>\nGuillermo Garcia-Hernando, Shanxin Yuan, Seungryul Baek, Tae-Kyun Kim<\/li>\n<\/ul>\n<p><strong>Poster session 2:<\/strong><\/p>\n<ul>\n<li>11: Hand Pose Estimation via Latent 2.5D Heatmap Regression<br \/>\nUmar Iqbal, Pavlo Molchanov, Thomas Breuel, Juergen Gall, Jan Kautz<\/li>\n<li>12: A generalizable approach for multi-view 3D human pose regression and the release of the MVOR dataset<br \/>\nAbdolrahim Kadkhodamohammadi, Nicolas Padoy<\/li>\n<li>13: End-to-end Recovery of Human Shape and Pose<br \/>\nAngjoo Kanazawa, Michael J. Black, David W. Jacobs, Jitendra Malik<\/li>\n<li>14: Learning Monocular 3D Human Pose Estimation from Multi\u2013view Images<br \/>\nHelge Rhodin, J\u00f6rg Sp\u00f6rri, Isinsu Katircioglu, Victor Constantin,<br \/>\nFr\u00e9d\u00e9ric Meyer, Erich M\u00fcller, Mathieu Salzmann and Pascal Fua<\/li>\n<li>15: FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis<br \/>\nNitika Verma ; Edmond Boyer ; Jakob Verbeek.<\/li>\n<li>16: Extreme 3D Face Reconstruction: Seeing Through Occlusions<br \/>\nAnh Tu\u1ea5n Tr\u1ea7n ; Tal Hassner ; Iacopo Masi ; Eran Paz ; Yuval Nirkin ; G\u00e9rard Medioni.<\/li>\n<li>17: Video Based Reconstruction of 3D People Models<br \/>\nThiemo Alldieck ; Marcus Magnor ; Weipeng Xu ; Christian Theobalt ; Gerard Pons-Moll<\/li>\n<li>18: Coding Kendall&#8217;s Shape Trajectories for 3D Action Recognition.<br \/>\nAmor BEN TANFOUS, Hassen DRIRA, Boulbaba BEN AMOR<\/li>\n<li>19: Learning Pose Specific Representations by Predicting Different Views<br \/>\nGeorg Poier, David Schinagl and Horst Bischof<\/li>\n<li>20: GANerated Hands for 
Real-Time 3D Hand Tracking from Monocular RGB<br \/>\nFranziska Mueller, Florian Bernard, Oleksandr Sotnychenko, Dushyant Mehta, Srinath Sridhar, Dan Casas, Christian Theobalt.<\/li>\n<li>21: Learning to Estimate 3D Human Pose and Shape from a Single Color Image<br \/>\nGeorgios Pavlakos, Luyang Zhu, Xiaowei Zhou, Kostas Daniilidis<\/li>\n<\/ul>\n<p><\/p>","protected":false},"excerpt":{"rendered":"<p>in conjunction with CVPR 2018, Salt Lake City, June 18th 2018. Topic This workshop aims at gathering researchers who work on 3D understanding of humans from visual data, including topics such as 3D human pose estimation and tracking, 3D human shape estimation from RGB images or human activity recognition from\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/humans2018\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-4","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/humans2018\/wp-json\/wp\/v2\/pages\/4","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/humans2018\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/humans2018\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/humans2018\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/humans2018\/wp-json\/wp\/v2\/comments?post=4"}],"version-history":[{"count":41,"href":"https:\/\/project.inria.fr\/humans2018\/wp-json\/wp\/v2\/pages\/4\/revisions"}],"predecessor-version":[{"id":136,"href":"https:\/\/project.inria.fr\/humans2018\/wp-json\/wp\/v2\/pages\/4\/revisions\/136"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/humans2018\/wp-json\/wp\/v2\/media?parent=4"}],
"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}