{"id":101,"date":"2019-12-13T15:57:34","date_gmt":"2019-12-13T14:57:34","guid":{"rendered":"https:\/\/project.inria.fr\/fungraph\/?page_id=101"},"modified":"2022-04-22T22:39:51","modified_gmt":"2022-04-22T20:39:51","slug":"results","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/fungraph\/results\/","title":{"rendered":"Results"},"content":{"rendered":"<p>Results from the project include:<\/p>\n<ul>\n<li>Neural Rendering for Synthetic Scenes:\n<ul>\n<li>\n<div class=\"title\">Active Exploration for Neural Global Illumination of Variable Scenes: Please see the project page <a href=\"https:\/\/repo-sam.inria.fr\/fungraph\/freestylegan\/\">here<\/a>.<\/div>\n<\/li>\n<li>\n<div class=\"title\">Neural Precomputed Radiance Transfer: Please see the publication page <a href=\"http:\/\/www-sop.inria.fr\/reves\/Basilic\/2022\/RBRD22\/\">here<\/a>.<\/div>\n<\/li>\n<\/ul>\n<\/li>\n<li>\n<div class=\"title\">Neural Rendering &amp; Relighting for Captured Scenes &amp; Faces:<\/div>\n<ul>\n<li>\n<div class=\"title\">Point-Based Neural Rendering with Per-View Optimization: Please see the project page <a href=\"https:\/\/repo-sam.inria.fr\/fungraph\/differentiable-multi-view\/\">here<\/a>.<\/div>\n<\/li>\n<li>FreeStyleGAN: Free-view Editable Portrait Rendering with the Camera Manifold: Please see the project page <a href=\"https:\/\/repo-sam.inria.fr\/fungraph\/freestylegan\/\">here<\/a>.<\/li>\n<li>Free-viewpoint Indoor Neural Relighting from Multi-view Stereo: Please see the project page <a href=\"https:\/\/repo-sam.inria.fr\/fungraph\/deep-indoor-relight\/\">here<\/a>.<\/li>\n<li>Multi-view relighting using geometry and deep learning: Please see the project page <a href=\"https:\/\/repo-sam.inria.fr\/fungraph\/deep-relighting\/\">here<\/a>.<\/li>\n<\/ul>\n<\/li>\n<li>Image- and Video-Based Rendering and Editing:\n<ul>\n<li>Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs: Please see the publication <a href=\"http:\/\/www-sop.inria.fr\/reves\/Basilic\/2021\/TAAPDD21\/\">here<\/a>.<\/li>\n<li>Image-Based Rendering of Cars using Semantic Labelling: Please see the publication <a href=\"http:\/\/www-sop.inria.fr\/reves\/Basilic\/2020\/RPHD20\/\">here<\/a>.<\/li>\n<li>Realistic Compositing of Image-Based Scenes: Please see the publication <a href=\"http:\/\/www-sop.inria.fr\/reves\/Basilic\/2020\/NPD20a\/\">here<\/a>.<\/li>\n<\/ul>\n<\/li>\n<li>Material Capture and Transfer:\n<ul>\n<li>Multi-image SVBRDF recovery: Please see the publication <a href=\"http:\/\/www-sop.inria.fr\/reves\/Basilic\/2019\/DADDB19\/\">here<\/a>.<\/li>\n<li>Guided fine tuning for large scale material transfer: Please see the publication <a href=\"http:\/\/www-sop.inria.fr\/reves\/Basilic\/2020\/DDB20\/\">here<\/a>.<\/li>\n<\/ul>\n<\/li>\n<li>Global Illumination:\n<ul>\n<li>Product Path Guiding: Please see the publication <a href=\"http:\/\/www-sop.inria.fr\/reves\/Basilic\/2020\/DGJND20\/\">here<\/a>.<\/li>\n<\/ul>\n<\/li>\n<li>Contrast enhancement for VR: Please see the publication <a href=\"http:\/\/www-sop.inria.fr\/reves\/Basilic\/2019\/ZKDBCDM19\/\">here<\/a> (and the project page at <a href=\"https:\/\/www.cl.cam.ac.uk\/research\/rainbow\/projects\/dice\/\">Cambridge<\/a>).<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>Results from the project include: Neural Rendering for Synthetic Scenes: Active Exploration for Neural Global Illumination of Variable Scenes: Please see the project page here. Neural Precomputed Radiance Transfer: Please see the publication page here. Neural Rendering &amp; Relighting for Captured Scenes &amp; Faces: Point-Based Neural Rendering with Per-View Optimization: Please see the project page\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/fungraph\/results\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":375,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-101","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/fungraph\/wp-json\/wp\/v2\/pages\/101","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/fungraph\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/fungraph\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/fungraph\/wp-json\/wp\/v2\/users\/375"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/fungraph\/wp-json\/wp\/v2\/comments?post=101"}],"version-history":[{"count":6,"href":"https:\/\/project.inria.fr\/fungraph\/wp-json\/wp\/v2\/pages\/101\/revisions"}],"predecessor-version":[{"id":201,"href":"https:\/\/project.inria.fr\/fungraph\/wp-json\/wp\/v2\/pages\/101\/revisions\/201"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/fungraph\/wp-json\/wp\/v2\/media?parent=101"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}