Job Offers

We have several postdoc and starting researcher positions available; feel free to contact me directly: George dot Drettakis at inria.fr. Starting researcher and postdoc candidates are welcome to define their own research agenda, as long as it is in the general area of the ERC, so feel free to get in touch with a proposal; please see below for more details.

FUNGRAPH is an ERC Advanced Grant, which started October 1st, 2018.  We have a number of positions available: 1-2 “Starting Researcher” positions (typically Ph.D. + 2-3 years of postdoc), 2-3 postdoctoral fellows, 1-2 Ph.D. students and 1-2 software engineers.

Below we present a set of research topics, with an indication of the level (PostDoc/Ph.D.). These topics should be considered as the “perimeter” of the research, and will be adapted to the candidate’s interests and qualifications. It is possible that a Ph.D. student works on one of the topics listed as “PostDoc” below, and vice versa. Starting Researchers and — depending on their experience — postdoctoral fellows are expected to define their own research agenda within this “perimeter” (to be interpreted very widely).
This list of topics will evolve constantly throughout the project, so please come back regularly.

Context

Several of these topics will involve collaborations with our network of regular collaborators — F. Durand (MIT), S. Paris (Adobe Research), A. Efros, M. Banks, E. Cooper (UC Berkeley), G. Brostow (UCL) — and can include visits to the respective laboratories. We plan to expand this set of collaborators during FUNGRAPH.

Successful candidates will be members of a dynamic and highly motivated group of excellent young researchers, the GRAPHDECO Inria group, and will have the opportunity to collaborate and interact with my colleague Adrien Bousseau and the other Ph.D. students, postdocs and engineers of the group.

Required Qualifications

For Starting Researcher and Postdoctoral positions, candidates are expected to have a Ph.D. in Computer Graphics or in Computer Vision with an emphasis on Graphics applications, and an excellent publication record. Ph.D. candidates are expected to have a Masters in Computer Graphics, with a solid mathematical and programming (C++/OpenGL/GLSL/Vulkan) background, and to have completed a Masters thesis with a research component (ideally submitted or published). Fluency in spoken and written English is a requirement.

How to apply

If you are interested in any of these positions/topics, please email me directly (George dot Drettakis at inria.fr) with your CV, a short statement of motivation, and the email addresses of 2-3 references. If you are applying for a Ph.D., please also attach your academic transcript for the last 3 years (an unofficial list of courses and grades is sufficient).

Engineering Position: Graphics for Image-Based Rendering & Learning

The goal of this position is to develop and extend two main software platforms in the research group.

The first platform is an image synthesis platform for training and testing computer vision and computer graphics algorithms. We have already set up an initial pipeline based on 3DSMax (http://www.autodesk.com/products/3ds-max/overview) and the Mitsuba renderer (https://www.mitsuba-renderer.org/), including our own custom plugins to parse the 3D scenes and render high-quality images, as well as to run various computer vision algorithms on the rendered images (structure from motion, multi-view stereo). We have used the platform to generate large collections of rendered images for training machine learning algorithms, notably for the SIGGRAPH 2019 publication “Multi-view Relighting Using a Geometry-Aware Network” (https://repo-sam.inria.fr/fungraph/deep-relighting/). The engineer will be in charge of designing and implementing novel features of the pipeline to make it more flexible and easier to use. These include (among others) automating the generation of new scenes by modifying the geometry, materials and lighting of existing scenes, and providing support for the various research projects in the group. The task will involve programming in C++ and Python, both in an OpenGL-based system (see below) and in Mitsuba.
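To give a concrete flavor of this kind of pipeline automation, here is a minimal sketch of a batch-rendering driver in Python. The directory layout and scene list are hypothetical, and the real pipeline relies on our custom plugins rather than this bare command-line invocation.

```python
# Minimal sketch of a batch-rendering driver (illustrative only; the
# actual pipeline uses custom 3ds Max and Mitsuba plugins).
import subprocess
from pathlib import Path

SCENES_DIR = Path("scenes")    # hypothetical layout: one Mitsuba XML per scene
RENDERS_DIR = Path("renders")

def render_scene(scene_xml: Path, out_dir: Path) -> Path:
    """Render one Mitsuba scene to an EXR via the command-line renderer."""
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / (scene_xml.stem + ".exr")
    # "mitsuba -o <output> <scene.xml>" is the standard Mitsuba 0.x CLI.
    subprocess.run(["mitsuba", "-o", str(out_file), str(scene_xml)], check=True)
    return out_file

if __name__ == "__main__":
    for scene in sorted(SCENES_DIR.glob("*.xml")):
        print("rendered", render_scene(scene, RENDERS_DIR / scene.stem))
```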

The second platform is our Image-Based Rendering system, based on C++ and OpenGL, which has been used for over 10 recent publications in the group. We have recently restructured our codebase into a shared core, providing functionality for multi-view imaging and basic Image-Based Rendering, with separate code repositories for each project. The engineer will complete the integration of the various projects and will have overall responsibility for an open-source version that will be progressively released in the near future. The task includes programming in C++ and OpenGL, but also the use of machine learning libraries and Python.

The ideal candidate will have a Masters in Computer Graphics, with extensive experience in building complex graphics systems in C++, as well as extensive knowledge of the theory and practice of the graphics pipeline (including GPU rendering and ray-tracing/global illumination). The ability to read, comprehend and implement research papers is also necessary. Knowledge of Python and OpenCV will be very helpful; knowledge of CMake and some experience in deep learning and CNNs will also be appreciated. Fluency in spoken and written English is a requirement.

Example papers integrated in our IBR platform:

  • [Chaurasia13] G. CHAURASIA, S. DUCHENE, O. SORKINE-HORNUNG, G. DRETTAKIS, Depth Synthesis and Local Warps for Plausible Image-based Navigation, ACM Transactions on Graphics 32, 2013, http://www-sop.inria.fr/reves/Basilic/2013/CDSD13.
  • [Hedman16] P. HEDMAN, T. RITSCHEL, G. DRETTAKIS, G. BROSTOW, Scalable Inside-Out Image-Based Rendering, ACM Transactions on Graphics (SIGGRAPH Asia Conference Proceedings) 35, 6, 2016, http://www-sop.inria.fr/reves/Basilic/2016/HRDB16/
  • [Hedman18] P. HEDMAN, J. PHILIP, T. PRICE, J.-M. FRAHM, G. DRETTAKIS, G. BROSTOW, Deep Blending for Free-Viewpoint Image-Based Rendering, ACM Transactions on Graphics (SIGGRAPH Asia Conference Proceedings) 37, 6, November 2018, http://www-sop.inria.fr/reves/Basilic/2018/HPPFDB18.
  • [Rodriguez18] S. RODRIGUEZ, A. BOUSSEAU, F. DURAND, G. DRETTAKIS, Exploiting Repetitions for Image-Based Rendering of Facades, Computer Graphics Forum (Proceedings of the Eurographics Symposium on Rendering) 37, 4, 2018, http://www-sop.inria.fr/reves/Basilic/2018/RBDD18.
  • [Philip19] J. PHILIP, M. GHARBI, T. ZHOU, A. EFROS, G. DRETTAKIS, Multi-view Relighting Using a Geometry-Aware Network, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings) 38, 4, July 2019, http://www-sop.inria.fr/reves/Basilic/2019/PGZED19.

PostDoc: Modeling Uncertainty for Hybrid Rendering of Captured Scenes

Computer vision algorithms allow easy capture of complex scenes from photos, thanks to camera calibration with Structure from Motion and Multi-view Stereo. These captured scenes can be rendered in a variety of ways: as a simple textured mesh; with the more sophisticated unstructured lumigraph rendering (ULR) [Buehler01], which uses the reconstructed mesh to reproject and blend the input photos; or with a more complex algorithm using per-view refinement (e.g., [Hedman16]). These algorithms are listed in roughly increasing order of computational and memory cost for rendering, but also of visual quality.
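For concreteness, ULR-style blending can be summarized in a simplified form that keeps only the angular penalty of [Buehler01] (the full method also penalizes resolution and field-of-view differences):

\[
C(\mathbf{x}) = \frac{\sum_i w_i(\mathbf{x})\, c_i(\mathbf{x})}{\sum_i w_i(\mathbf{x})},
\qquad
w_i(\mathbf{x}) = \max\!\left(0,\; 1 - \frac{\theta_i(\mathbf{x})}{\theta_k(\mathbf{x})}\right),
\]

where \(c_i(\mathbf{x})\) is the color obtained by reprojecting surface point \(\mathbf{x}\) into input view \(i\), \(\theta_i\) is the angle between the desired ray and the ray to camera \(i\), and \(\theta_k\) is the k-th smallest such angle, so that weights fall smoothly to zero outside the k nearest views. Nothing in this formulation knows how reliable the mesh or the reprojected colors actually are, which is the gap this project targets.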

One difficulty with these rendering algorithms is that they do not explicitly model the uncertainty in the capture of geometry or appearance. Effects such as noisy reconstruction, missing geometry or inaccurate representation of appearance are caused by several levels of uncertainty, such as the sparse angular sampling of views for non-diffuse materials or conflicting priors applied during 3D reconstruction.

In this project we will study ways to explicitly estimate this uncertainty, and use it to develop a new rendering algorithm that only invokes a more expensive method where required. The solution will balance the use of the different algorithms considered, depending on the complexity and uncertainty inherent in the input and in the rendering process itself. As an example, the method could use a textured mesh where it suffices, ULR where it can represent the required effects, and per-view refinement only where needed; new rendering algorithms will probably be developed in the process.
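Purely as an illustration of this balancing act (the statistics, names and thresholds below are hypothetical, not a designed algorithm), per-region selection logic might look like the following sketch:

```python
# Hypothetical per-region renderer selection driven by uncertainty
# estimates; all names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class RegionStats:
    geom_uncertainty: float   # e.g., multi-view stereo depth variance
    view_dependence: float    # e.g., color variance across input views

def choose_renderer(stats: RegionStats) -> str:
    """Pick the cheapest algorithm whose assumptions the region satisfies."""
    if stats.geom_uncertainty < 0.01 and stats.view_dependence < 0.05:
        return "textured_mesh"      # geometry and appearance both reliable
    if stats.geom_uncertainty < 0.05:
        return "ulr"                # reproject and blend the input photos
    return "per_view_refinement"    # fall back to the most expensive method
```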

Bibliography

  • [Buehler01] Buehler, C., Bosse, M., McMillan, L., Gortler, S., & Cohen, M. (2001, August). Unstructured lumigraph rendering. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (pp. 425-432). ACM. https://dash.harvard.edu/bitstream/handle/1/2641679/Gortler_UnstructuredLumigraph.pdf?sequence=2
  • [Hedman16] Hedman, P., Ritschel, T., Drettakis, G., & Brostow, G. (2016). Scalable inside-out image-based rendering. ACM Transactions on Graphics (TOG), 35(6), 231. http://www-sop.inria.fr/reves/Basilic/2016/HRDB16/

Ph.D./PostDoc: Spatial and Angular Sampling for Image-Based Rendering

The Soft3D IBR algorithm [Penner17] provides a fascinating new design space for Image-Based Rendering (IBR). This solution uses uncertainty to provide a soft visibility estimate based on a depth-sweep type discretization, with excellent results for view interpolation, but without the ability to handle free-viewpoint paths: in this configuration, Soft3D produces blur and discretization artifacts compared to other solutions. The regular sampling nature of the algorithm makes it a natural candidate for a signal processing/Fourier analysis [Durand05, Chai00] to determine the required sampling and reconstruction strategies for a given target novel-view space. Our approach will investigate ways to first perform a sparse capture of a scene, then propose a dense capture plan and a new IBR reconstruction algorithm to allow high-quality IBR of the scene. We will start with the “easier” case of diffuse scenes, and move on to scenes containing more complex appearance.
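To recall the flavor of this analysis: for a diffuse scene captured with focal length \(f\), [Chai00] show that the light-field spectrum is confined between lines whose slopes are set by the scene depth range \([z_{\min}, z_{\max}]\), so that, up to constants depending on the parameterization and image resolution, the maximum camera spacing scales as

\[
\Delta t_{\max} \;\propto\; \frac{1}{f \left( \frac{1}{z_{\min}} - \frac{1}{z_{\max}} \right)}.
\]

Shallow depth ranges can thus be sampled sparsely, while large depth variation demands dense capture; deriving comparable bounds for free-viewpoint paths and non-diffuse appearance is exactly the open question here.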

Bibliography

  • [Durand05] Durand, F., Holzschuch, N., Soler, C., Chan, E., & Sillion, F. X. (2005, July). A frequency analysis of light transport. In ACM Transactions on Graphics (TOG) (Vol. 24, No. 3, pp. 1115-1126). ACM. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.369.7939&rep=rep1&type=pdf
  • [Penner17] Penner, Eric, and Li Zhang. “Soft 3D reconstruction for view synthesis.” ACM Transactions on Graphics (TOG) 36.6 (2017): 235. https://dl.acm.org/ft_gateway.cfm?id=3130855&type=pdf
  • [Chai00] Chai, J. X., Tong, X., Chan, S. C., & Shum, H. Y. (2000, July). Plenoptic sampling. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques (pp. 307-318). ACM Press/Addison-Wesley Publishing Co.. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.360.1223&rep=rep1&type=pdf

Ph.D.: Material Rendering with Uncertain Data

Current material capture approaches can work with a large number of photographs, and typically fit the parameters of a Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF), e.g., [Lensch03], [Aittala15], [Riviere16], or [Deschaintre18], which uses machine learning. At the other end of the spectrum, Image-Based Rendering (IBR) techniques use a similarly dense set of photos to re-render the object from different angles for novel-view synthesis, reproducing view-dependent effects. In this project we will start by studying the trade-offs between these two representations in the context of controlled object capture, and first attempt to determine which is more effective for a given set of capture conditions and rendering use-cases. We will then develop a new representation that provides a continuum between the two by explicitly modelling the uncertainty of material parameters.
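In its simplest form, SVBRDF fitting minimizes, per texel, the discrepancy between the captured photographs and renderings of an analytic reflectance model; a generic formulation (our notation, not that of any specific paper above) is

\[
\theta^\ast(\mathbf{x}) = \arg\min_{\theta} \sum_{j} \left\| I_j(\mathbf{x}) - f_r\big(\theta(\mathbf{x});\, \omega_i^{\,j}, \omega_o^{\,j}\big)\, L_j \cos\theta_i^{\,j} \right\|^2,
\]

where \(\theta(\mathbf{x})\) collects per-texel parameters such as diffuse albedo, specular albedo, roughness and normal, and the sum runs over the observations \(I_j\) under known lighting \(L_j\) and directions \((\omega_i^{\,j}, \omega_o^{\,j})\). IBR, in contrast, keeps the observations themselves and interpolates between them; explicitly representing the uncertainty of \(\theta\) is where the two representations could meet.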

In the second part of the project we will investigate the effect of geometric uncertainty on a similar tradeoff, i.e., capturing geometry and materials vs. IBR. We will finally determine how this unified representation can be incorporated into global illumination algorithms. These steps will involve a machine learning component using synthetic data, both to evaluate the tradeoffs and to improve acquisition.

Bibliography

  • [Lensch03] Lensch, Hendrik, et al. “Image-based reconstruction of spatial appearance and geometric detail.” ACM Transactions on Graphics (TOG) 22.2 (2003): 234-257. https://dl.acm.org/citation.cfm?doid=636886.636891
  • [Deschaintre18] Deschaintre, V., Aittala, M., Durand, F., Drettakis, G., & Bousseau, A. (2018). Single-Image SVBRDF Capture with a Rendering-Aware Deep Network. ACM Transactions on Graphics, 37.
  • [Aittala15] Miika Aittala, Tim Weyrich, and Jaakko Lehtinen. 2015. Two-shot SVBRDF capture for stationary materials. ACM Trans. Graph. 34, 4, Article 110 (July 2015), 13 pages. DOI: https://doi.org/10.1145/2766967
  • [Riviere16] Riviere, J., Peers, P., & Ghosh, A. (2016). Mobile surface reflectometry. Computer Graphics Forum, 35(1), 191-202. doi:10.1111/cgf.12719

PostDoc/Ph.D.: Evaluating Error/Uncertainty in Approximate Global Illumination Algorithms

There are currently several very effective global illumination algorithms that manage to simulate the majority of significant visual phenomena (e.g., [Georgiev12]); however, they are far from real time. At the other end of the spectrum, there are real-time global illumination solutions (e.g., [McGuire17]) that usually compute very approximate solutions, but at interactive or real-time framerates, and often with remarkable visual quality. These solutions are typically built on light probes or virtual point lights, which can be seen as a sampling of path space.

We will first analyze error in the different steps of these approximate algorithms, possibly modelling the error with statistical tools that handle uncertainty [Smi13]. This will require careful analysis, starting with simple configurations and moving up to more complex cases. We will investigate the effect of discretization, both spatial and directional, and quantify its effect on the accumulated error. Based on this analysis we will develop new algorithms that allow progressive improvement in quality, moving towards higher-quality solutions. This may involve stochastic approaches that approximate the complex calculations of path-tracing-like solutions while progressively quantifying error, allowing us to bridge the gap between the two. Another possibility is data-driven, learning-based solutions to replace extremely expensive calculations that do not necessarily affect image quality significantly (e.g., complex but unimportant indirect paths).
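As a reminder of the baseline statistical tool here: an unbiased Monte Carlo estimator of the rendering integral,

\[
\hat{L} = \frac{1}{N} \sum_{k=1}^{N} \frac{f(x_k)}{p(x_k)},
\qquad
\operatorname{Var}\big[\hat{L}\big] = \frac{\sigma^2}{N},
\]

converges with an error of \(O(N^{-1/2})\) that is straightforward to quantify, whereas the real-time methods above commit to fixed discretizations (probes, virtual point lights) whose error is currently much harder to characterize; closing that gap is precisely the aim of this topic.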

Bibliography

  • [Georgiev12] Georgiev, I., Krivanek, J., Davidovic, T., & Slusallek, P. (2012). Light transport simulation with vertex connection and merging. ACM Trans. Graph., 31(6), Article 192. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.398.833&rep=rep1&type=pdf
  • [McGuire17] McGuire, M., Mara, M., Nowrouzezahrai, D., & Luebke, D. (2017, February). Real-time global illumination using precomputed light field probes. In Proceedings of the 21st ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (p. 2). ACM. https://dl.acm.org/citation.cfm?id=3023378
  • [Smi13] Smith, R. C. (2013). Uncertainty Quantification: Theory, Implementation, and Applications (Vol. 12). SIAM.

PostDoc: 4D Video-Based Rendering [POSITION FILLED]

Video-based capture of human motion has made impressive advances in recent years [Microsoft, 4dviews, 8i]. In most cases, the capture process generates textured meshes that are used for display in computer graphics or virtual reality applications. In this project, we are particularly interested in the capture of a self-avatar and its realistic display. The postdoctoral fellowship will involve two Inria groups: MORPHEO in Grenoble, a leader in human motion capture [Tsiminaki14, Leroy17], and GRAPHDECO in Sophia-Antipolis, which has extensive experience in image-based rendering [Chaurasia13, Hedman16]. The main focus of the research will be to investigate the continuum between texture-map-based solutions, with their corresponding (potentially temporal) compression, and temporal image-based rendering solutions. This is an exciting and novel research area, involving the development of new 4D representations, possibly building on surface light fields but also on more recent view-dependent free-viewpoint methods. The project will benefit from one of the most advanced motion capture platforms in Europe [Kinovis], designed and maintained by MORPHEO. In the context of this project, specific capture configurations will be developed for self-avatar sequences, supported by engineering staff at Inria Grenoble, allowing the postdoctoral fellow to concentrate on the development of novel algorithmic solutions. In addition to FUNGRAPH, this project is part of the coordinated Inria action IPL AVATAR.

Bibliography:

  • Microsoft, https://www.microsoft.com/en-us/mixed-reality/capture-studios
  • 4dviews, https://www.4dviews.com
  • 8i, https://8i.com
  • Kinovis, https://kinovis.inria.fr
  • [Tsiminaki14] Tsiminaki, V., Franco, J.-S., & Boyer, E. (2014). High resolution 3D shape texture from multiple videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1502-1509).
  • [Leroy17] Leroy, V., Franco, J.-S., & Boyer, E. (2017). Multi-View Dynamic Shape Refinement Using Local Temporal Integration. In Proceedings of the IEEE International Conference on Computer Vision.
  • [Chaurasia13] Chaurasia, G., Duchene, S., Sorkine-Hornung, O., & Drettakis, G. (2013). Depth synthesis and local warps for plausible image-based navigation. ACM Transactions on Graphics (TOG), 32(3), 30.
  • [Hedman16] Hedman, P., Ritschel, T., Drettakis, G., & Brostow, G. (2016). Scalable inside-out image-based rendering. ACM Transactions on Graphics (TOG), 35(6), 231.

PostDoc: Perception of Rendering Artifacts

Evaluating the quality of rendering algorithms, and in particular evaluating the significance or importance of rendering artifacts, is a very hard problem [Cadik12]. This is true both for traditional rendering and for Image-Based Rendering (IBR). Specific cases, such as the perception of slant, can be analysed using well-known perceptual principles, often leading to improvements in practical algorithms [Vangorp 11, Vangorp 13]. However, analyzing blending artifacts in rendering is a much more difficult problem which has received little attention to date [Guthe 16]; similarly, while some results exist for offline evaluation of reconstruction error [Waechter17], there are no solutions for the visual artifacts these errors cause during online rendering. One major difficulty is how to define a meaningful error metric; an interesting direction involves recent advances in deep learning [Zhang 2018], which show that basic deep features (such as VGG features) are a good predictor of perceptual image quality. Our recent work on Deep Blending [Hedman 18] provides an interesting design space for this problem, since we can include a CNN during rendering. In this project we will investigate the use of synthetic data to train a network to identify perceptually significant blending and geometric reprojection artifacts, and investigate ways of optimizing rendering algorithms to correct them.
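As a starting point, a VGG-feature distance in the spirit of [Zhang 2018] can be sketched in a few lines of PyTorch. This is a simplified stand-in — the layer choice and uniform weighting are our assumptions; an actual study would use the trained LPIPS weights released with that paper.

```python
# Minimal sketch of a VGG-feature image distance in the spirit of
# [Zhang 2018]; layer choice and uniform weighting are illustrative.
import torch
import torchvision.models as models

_vgg = models.vgg16(pretrained=True).features.eval()
_LAYERS = {3, 8, 15, 22}  # after relu1_2, relu2_2, relu3_3, relu4_3

def _features(x):
    feats = []
    for i, layer in enumerate(_vgg):
        x = layer(x)
        if i in _LAYERS:
            feats.append(x)
    return feats

@torch.no_grad()
def vgg_distance(img_a, img_b):
    """img_a, img_b: (1, 3, H, W) tensors, ImageNet-normalized."""
    dist = 0.0
    for fa, fb in zip(_features(img_a), _features(img_b)):
        # Unit-normalize along channels, then average the squared difference.
        fa = fa / (fa.norm(dim=1, keepdim=True) + 1e-8)
        fb = fb / (fb.norm(dim=1, keepdim=True) + 1e-8)
        dist = dist + (fa - fb).pow(2).mean()
    return dist
```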

Bibliography:

  • [Cadik12] Cadik, M., Herzog, R., Mantiuk, R., Myszkowski, K., & Seidel, H. P. (2012). New measurements reveal weaknesses of image quality metrics in evaluating graphics artifacts. ACM Transactions on Graphics (TOG), 31(6), 147.
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.352.3346&rep=rep1&type=pdf
  • [Guthe 16] Guthe, S., Schardt, P., Goesele, M., & Cunningham, D. (2016, July). Ghosting and popping detection for image-based rendering. In 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), 2016 (pp. 1-4). IEEE. https://ieeexplore.ieee.org/document/7548891/ (requires access)
  • [Hedman 18] Hedman, P., Philip, J., Price, T., Frahm, J.-M., Drettakis, G., & Brostow, G. (2018). Deep Blending for Free-Viewpoint Image-Based Rendering. ACM Transactions on Graphics, 37(6).
    http://www-sop.inria.fr/reves/Basilic/2018/HPPFDB18/
  • [Vangorp 11] Vangorp, P., Chaurasia, G., Laffont, P. Y., Fleming, R. W., & Drettakis, G. (2011, June). Perception of Visual Artifacts in Image-Based Rendering of Facades. In Computer Graphics Forum (Vol. 30, No. 4, pp. 1241-1250). Oxford, UK: Blackwell Publishing Ltd.
    http://www-sop.inria.fr/reves/Basilic/2011/VCLFD11/
  • [Vangorp 13] Vangorp, P., Richardt, C., Cooper, E. A., Chaurasia, G., Banks, M. S., & Drettakis, G. (2013). Perception of perspective distortions in image-based rendering. ACM Transactions on Graphics (TOG), 32(4), 58.
    http://www-sop.inria.fr/reves/Basilic/2013/VRCCBD13/
  • [Waechter17] Waechter, M., Beljan, M., Fuhrmann, S., Moehrle, N., Kopf, J., & Goesele, M. (2017). Virtual rephotography: Novel view prediction error for 3D reconstruction. ACM Transactions on Graphics (TOG), 36(1), 8.
    https://www.gcc.tu-darmstadt.de/home/proj/virtual_rephotography/virtual_rephotography.en.jsp
  • [Zhang 2018] Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In CVPR 2018. arXiv:1801.03924. https://arxiv.org/pdf/1801.03924

Software Engineer: Large-Scale Synthetic Data for Learning and Perception [POSITION FILLED]

The goal of this position is to build a state-of-the-art image synthesis platform for training and testing computer vision and computer graphics algorithms. We have already set up an initial pipeline based on 3DSMax (http://www.autodesk.com/products/3ds-max/overview) and Blender, and the Mitsuba renderer (https://www.mitsuba-renderer.org/). This pipeline includes our own custom plugins to parse the 3D scenes and render high quality images, as well as to run various computer vision algorithms on the rendered images (structure from motion, multi-view stereo). We use this synthetic data to evaluate the accuracy of our recent algorithms on image relighting and image-based rendering. We also want to generate large collections of rendered images for training machine learning algorithms such as Convolutional Neural Networks (CNNs).

In this context, the goal of this software engineering position will be to significantly extend our software infrastructure to generate large amounts of high-quality realistic images. The engineer will be in charge of designing and implementing novel features of the pipeline to make it more flexible and easier to use. These include (among others) automating the conversion of 3D scenes in various formats; automating the generation of new scenes by modifying the geometry, materials and lighting of existing scenes; and simplifying the use of the Inria cluster for large-scale computation.
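As a small illustration of the scene-augmentation part (the perturbation scheme and value ranges are assumptions made for this sketch, not a specification), diffuse materials in a Mitsuba 0.x XML scene could be randomized along these lines:

```python
# Illustrative material randomization on a Mitsuba 0.x XML scene;
# the jitter scheme and ranges are assumptions, not our actual tool.
import random
import xml.etree.ElementTree as ET

def randomize_diffuse_albedos(scene_in: str, scene_out: str, jitter: float = 0.2):
    """Write a variant of the scene with perturbed diffuse reflectances."""
    tree = ET.parse(scene_in)
    for bsdf in tree.getroot().iter("bsdf"):
        if bsdf.get("type") != "diffuse":
            continue
        for rgb in bsdf.iter("rgb"):
            if rgb.get("name") == "reflectance":
                vals = [float(v) for v in rgb.get("value").split(",")]
                vals = [min(1.0, max(0.0, v + random.uniform(-jitter, jitter)))
                        for v in vals]
                rgb.set("value", ", ".join(f"{v:.3f}" for v in vals))
    tree.write(scene_out)
```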

These new features will involve writing scripts and plugins for 3DS Max and Mitsuba, as well as implementing published methods on automatic scene generation and augmentation. The position involves working closely with Ph.D. students and postdoctoral fellows in the group.

The ideal candidate will have a Masters in Computer Graphics, with extensive experience in building complex graphics systems in C++, as well as extensive knowledge of the theory and practice of the graphics pipeline (including GPU rendering and ray-tracing/global illumination). The ability to read, comprehend and implement research papers is also necessary. Knowledge of Python and OpenCV will be very helpful; knowledge of CMake and some experience in deep learning and CNNs will also be appreciated. Fluency in spoken and written English is a requirement.

The position is available immediately at the Inria Sophia-Antipolis center in the South of France in the GRAPHDECO group. Compensation follows standard Inria salary scales.

Last Update: November 2019
