

{"id":4,"date":"2011-12-08T11:55:34","date_gmt":"2011-12-08T11:55:34","guid":{"rendered":"http:\/\/project.inria.fr\/template1\/?page_id=4"},"modified":"2025-01-31T16:58:29","modified_gmt":"2025-01-31T15:58:29","slug":"home","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/nerphys\/","title":{"rendered":"The NERPHYS ERC Advanced Grant Project"},"content":{"rendered":"<p>Project number ERC AdG 101141721.<\/p>\n<p>We are looking for postdocs, please see the <a href=\"https:\/\/project.inria.fr\/nerphys\/job-offers\/\">Job Offers<\/a> page.<\/p>\n<p class=\"s22\"><span class=\"s20\">While<\/span> <span class=\"s20\">long<\/span> <span class=\"s20\">restricted<\/span> <span class=\"s20\">to<\/span> <span class=\"s20\">an<\/span> <span class=\"s20\">elite<\/span> <span class=\"s20\">of<\/span> <span class=\"s20\">expert<\/span> <span class=\"s20\">digital<\/span> <span class=\"s20\">artists, 3D<\/span> <span class=\"s20\">content<\/span> <span class=\"s20\">creation<\/span> <span class=\"s20\">has<\/span> <span class=\"s20\">recently<\/span> <span class=\"s20\">been<\/span> <span class=\"s20\">greatly<\/span> <span class=\"s20\">simplified by<\/span> <span class=\"s21\">deep<\/span> <span class=\"s21\">learning<\/span><span class=\"s20\">.<\/span> <span class=\"s20\">Neural<\/span> <span class=\"s20\">representations<\/span> <span class=\"s20\">of<\/span> <span class=\"s20\">3D<\/span> <span class=\"s20\">objects<\/span> <span class=\"s20\">have<\/span> <span class=\"s20\">revolutionized<\/span> <span class=\"s20\">real-world<\/span> <span class=\"s20\">capture<\/span> <span class=\"s20\">from<\/span> <span class=\"s20\">photos, <\/span><span class=\"s20\">while generative<\/span> <span class=\"s20\">models<\/span> <span class=\"s20\">are<\/span> <span class=\"s20\">starting<\/span> <span class=\"s20\">to<\/span> <span class=\"s20\">enable<\/span> <span class=\"s20\">3D<\/span> <span class=\"s20\">object<\/span> <span class=\"s20\">synthesis<\/span> <span class=\"s20\">from<\/span> <span class=\"s20\">text<\/span> <span class=\"s20\">prompts. These<\/span> <span class=\"s20\">methods<\/span> <span class=\"s20\">use <\/span><span class=\"s20\">differentiable <\/span><span class=\"s21\">neural<\/span> <span class=\"s21\">rendering<\/span> <span class=\"s20\">that<\/span> <span class=\"s20\">allows<\/span> <span class=\"s20\">efficient<\/span> <span class=\"s20\">optimization<\/span> <span class=\"s20\">of<\/span> <span class=\"s20\">the<\/span> <span class=\"s20\">powerful<\/span> <span class=\"s20\">and<\/span> <span class=\"s20\">expressive<\/span> <span class=\"s20\">\u201csoft\u201d<\/span> <span class=\"s20\">neural <\/span><span class=\"s20\">representations, but <\/span><span class=\"s21\">ignores physically-based principles<\/span><span class=\"s20\">, and thus has no guarantees on accuracy, severely limiting the utility of the resulting content.<\/span><\/p>\n<p class=\"s22\"><span class=\"s20\">Differentiable <\/span><span class=\"s21\">physically-based rendering <\/span><span class=\"s20\">on the other hand can produce 3D assets with physics-based parameters, but depends on rigid traditional \u201chard\u201d graphics representations required for light-transport <\/span><span class=\"s20\">computation, that make optimization much harder and is also costly, limiting applicability. 
Figure 1 illustrates these concepts.

[Figure 1 image: https://project.inria.fr/nerphys/files/2025/01/teaser.jpg]

Figure 1: Neural rendering produces impressive imagery efficiently, revolutionizing scene capture (a), or generating images of stunning realism from text prompts (b), but 3D versions (c) are mostly limited to isolated objects and lack realism; all lack a physically-based image formation model. Differentiable physically-based rendering (d) has controllable accuracy, but often starts from limited initial topology, is costly, and is tied to the traditional rigid representations needed for ray intersections.

In NERPHYS we will combine the strengths of both neural and physically-based rendering, lifting their respective limitations by introducing polymorphic 3D representations, i.e., representations capable of morphing between different states to accommodate both efficient gradient-based optimization and physically-based light transport. By augmenting these representations with corresponding polymorphic differentiable renderers, our methodology will unleash the potential of neural rendering to produce physically-based 3D assets with guarantees on accuracy.
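The polymorphic representations themselves are the subject of the project and are not specified here; purely as an illustrative assumption, one can picture an interface that exposes the same underlying scene parameters in two morphs: a smooth, differentiable form for gradient-based optimization, and an explicit geometric form that a physically-based renderer can intersect rays against. The class and method names below are hypothetical.

```python
# Purely illustrative sketch (an assumption, not the NERPHYS design):
# one shared set of scene parameters, two "morphs" of the representation.
from dataclasses import dataclass
import jax.numpy as jnp

@dataclass
class PolymorphicBlobs:
    means: jnp.ndarray   # (N, 3) blob centers, shared by both morphs
    radii: jnp.ndarray   # (N,) blob radii, shared by both morphs

    def soft_density(self, x):
        # "Soft" morph: a smooth density field, differentiable in the
        # parameters, suitable for gradient-based optimization.
        d = jnp.linalg.norm(x[None, :] - self.means, axis=1)
        return jnp.exp(-0.5 * (d / self.radii) ** 2).sum()

    def hard_spheres(self):
        # "Hard" morph: explicit spheres that a physically-based renderer
        # can use for ray intersections and light-transport computation.
        return list(zip(self.means.tolist(), self.radii.tolist()))
```

Because both morphs read the same parameters, gradients computed through the soft morph directly update the geometry seen by the hard morph, which is the kind of coupling the polymorphic approach targets.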
class=\"s20\">morphing<\/span> <span class=\"s20\">between <\/span><span class=\"s20\">different<\/span> <span class=\"s20\">states<\/span> <span class=\"s20\">to accommodate both efficient gradient-based optimization and physically-based light transport. <\/span><span class=\"s20\">By augmenting these<\/span> <span class=\"s20\">representations<\/span> <span class=\"s20\">with<\/span> <span class=\"s20\">corresponding<\/span> <span class=\"s21\">polymorphic<\/span> <span class=\"s21\">differentiable<\/span> <span class=\"s21\">renderers<\/span><span class=\"s20\">,<\/span> <span class=\"s20\">our <\/span><span class=\"s20\">methodology<\/span> <span class=\"s20\">will<\/span> <span class=\"s20\">unleash<\/span> <span class=\"s20\">the potential of neural rendering to produce physically-based 3D assets with<\/span><span class=\"s20\">\u00a0guarantees on accuracy.<\/span><\/p>\n<p class=\"s24\"><span class=\"s21\">NERPHYS<\/span> <span class=\"s20\">will<\/span> <span class=\"s20\">have<\/span> <span class=\"s20\">ground-breaking<\/span> <span class=\"s20\">impact<\/span> <span class=\"s20\">on<\/span> <span class=\"s20\">3D<\/span> <span class=\"s20\">content<\/span> <span class=\"s20\">creation,<\/span> <span class=\"s20\">moving<\/span> <span class=\"s20\">beyond<\/span> <span class=\"s20\">today\u2019s<\/span> <span class=\"s20\">simplistic <\/span><span class=\"s21\">plausible<\/span> <span class=\"s20\">imagery,<\/span> <span class=\"s20\">to<\/span> <span class=\"s20\">full<\/span> <span class=\"s20\">physically-based<\/span> <span class=\"s20\">rendering<\/span> <span class=\"s20\">with<\/span> <span class=\"s20\">guarantees<\/span> <span class=\"s20\">on<\/span> <span class=\"s20\">error,<\/span> <span class=\"s20\">enabling<\/span> <span class=\"s20\">the<\/span> <span class=\"s20\">use<\/span> <span class=\"s20\">of<\/span> <span class=\"s20\">powerful <\/span><span class=\"s20\">neural<\/span> <span class=\"s20\">rendering <\/span><span class=\"s20\">methods in any application requiring accuracy.<\/span> <span class=\"s20\">Our polymorphic approach will fundamentally change how we reason about scene representations for geometry and appearance, while our rendering algorithms will provide a new methodology for image synthesis, e.g., for training data generation or visual effects.<\/span><\/p>\n<p>\u00a0<\/p>\n<p>NERPHYS started Dec. 1, 2024 and runs for 5 years, the principal investigator is George Drettakis.<\/p>\n\n\n<p>Please see the Inria press release on NERPHYS:<a href=\" https:\/\/www.inria.fr\/en\/erc-grants-george-drettakis-ai-physics-3d\"> https:\/\/www.inria.fr\/en\/erc-grants-george-drettakis-ai-physics-3d<\/a><\/p>\n\n\n\n<p><\/p>","protected":false},"excerpt":{"rendered":"<p>Project number ERC AdG 101141721. We are looking for postdocs, please see the Job Offers page. While long restricted to an elite of expert digital artists, 3D content creation has recently been greatly simplified by deep learning. 