Project number ERC AdG 101141721.
We are looking for postdocs; please see the Job Offers page.
Long restricted to an elite of expert digital artists, 3D content creation has recently been greatly simplified by deep learning. Neural representations of 3D objects have revolutionized real-world capture from photos, while generative models are starting to enable 3D object synthesis from text prompts. These methods rely on differentiable neural rendering, which allows efficient optimization of powerful and expressive “soft” neural representations, but ignores physically-based principles and thus offers no guarantees on accuracy, severely limiting the utility of the resulting content.
Differentiable physically-based rendering, on the other hand, can produce 3D assets with physics-based parameters, but depends on the rigid, traditional “hard” graphics representations required for light-transport computation, which make optimization much harder and are costly to evaluate, limiting applicability. Figure 1 illustrates these concepts.
Figure 1: Neural Rendering produces impressive imagery efficiently, revolutionizing scene capture (a) or generating images of stunning realism from text prompts (b), but 3D versions (c) are mostly limited to isolated objects and lack realism; all lack a physically-based image-formation model. Differentiable Physically-Based Rendering (d) has controllable accuracy, but often starts from limited initial topology, is costly, and is tied to the traditional rigid representations needed for ray intersections.
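To make the contrast concrete, the sketch below is a minimal, self-contained toy in plain Python/NumPy, written for this page rather than taken from NERPHYS: a 1D “soft” representation (a single Gaussian blob) whose rendering is a smooth function of its parameters, so a photo-matching loss can be minimized by plain gradient descent. All names and values here are illustrative assumptions; real systems use far richer representations (neural fields, 3D Gaussians) and automatic differentiation.

```python
import numpy as np

# Toy "soft" scene representation: a single 1D Gaussian blob with
# parameters (mu, sigma, amp). Rendering is a smooth function of these
# parameters, so image-space gradients flow back to the representation --
# the core idea behind differentiable neural rendering.

xs = np.linspace(0.0, 1.0, 64)  # pixel coordinates of a 1D "image"

def render(mu, sigma, amp):
    """Differentiable toy 'renderer': evaluate the blob at each pixel."""
    return amp * np.exp(-0.5 * ((xs - mu) / sigma) ** 2)

# Target "photo", produced by an unknown blob we want to recover.
target = render(0.6, 0.10, 1.0)

# Initial guess for the scene parameters, and a small learning rate.
mu, sigma, amp = 0.5, 0.15, 0.8
lr = 0.05

for step in range(5000):
    img = render(mu, sigma, amp)
    res = img - target  # per-pixel residual of the MSE loss
    # Analytic gradients of mean(res**2) w.r.t. each parameter
    # (an autodiff framework would provide these automatically).
    g = np.exp(-0.5 * ((xs - mu) / sigma) ** 2)
    d_mu = np.mean(2 * res * amp * g * (xs - mu) / sigma ** 2)
    d_sigma = np.mean(2 * res * amp * g * (xs - mu) ** 2 / sigma ** 3)
    d_amp = np.mean(2 * res * g)
    mu -= lr * d_mu
    sigma -= lr * d_sigma
    amp -= lr * d_amp

print(f"recovered: mu={mu:.3f} sigma={sigma:.3f} amp={amp:.3f} "
      f"(target: 0.600 0.100 1.000)")
```

The “hard” representations required for physically-based light transport (meshes and the ray intersections against them) do not admit such smooth parameter gradients, which is precisely the tension described above.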
In NERPHYS we will combine the strengths of both neural and physically-based rendering, lifting their respective limitations by introducing polymorphic 3D representations, i.e., representations capable of morphing between different states to accommodate both efficient gradient-based optimization and physically-based light transport. By augmenting these representations with corresponding polymorphic differentiable renderers, our methodology will unleash the potential of neural rendering to produce physically-based 3D assets with guarantees on accuracy.
NERPHYS will have ground-breaking impact on 3D content creation, moving beyond today’s simplistic, merely plausible imagery to full physically-based rendering with guarantees on error, enabling the use of powerful neural rendering methods in any application requiring accuracy. Our polymorphic approach will fundamentally change how we reason about scene representations for geometry and appearance, while our rendering algorithms will provide a new methodology for image synthesis, e.g., for training-data generation or visual effects.
NERPHYS started on Dec. 1, 2024 and runs for 5 years; the principal investigator is George Drettakis.
Please see the Inria press release on NERPHYS: https://www.inria.fr/en/erc-grants-george-drettakis-ai-physics-3d