Results from the project include:

  • Neural Rendering for Synthetic Scenes:
    • Active Exploration for Neural Global Illumination of Variable Scenes: Please see the project page here.
    • Neural Precomputed Radiance Transfer: Please see the publication page here.
  • Neural Rendering & Relighting for Captured Scenes & Faces:
    • Point-Based Neural Rendering with Per-View Optimization: Please see the project page here.
    • FreeStyleGAN: Free-view Editable Portrait Rendering with the Camera Manifold: Please see the project page here.
    • Free-viewpoint Indoor Neural Relighting from Multi-view Stereo: Please see the project page here.
    • Multi-view Relighting Using Geometry and Deep Learning: Please see the project page here.
  • Image- and Video-Based Rendering and Editing:
    • Video-Based Rendering of Dynamic Stationary Environments from Unsynchronized Inputs: Please see the publication here.
    • Image-Based Rendering of Cars using Semantic Labelling: Please see the publication here.
    • Realistic Compositing of Image-Based Scenes: Please see the publication here.
  • Material Capture and Transfer:
    • Multi-image SVBRDF Recovery: Please see the publication here.
    • Guided Fine-Tuning for Large-Scale Material Transfer: Please see the publication here.
  • Global Illumination:
    • Product Path Guiding: Please see the publication here.
  • Contrast Enhancement for VR: Please see the publication here (and the project page at Cambridge).
