PRESENT (Photoreal REaltime Sentient ENTity) is a proposal for a three-year Research and Innovation project to create virtual digital companions: embodied agents that look entirely naturalistic, demonstrate emotional sensitivity, can establish meaningful dialogue, bring meaning to the experience, and act as trustworthy guardians and guides in interfaces for AR, VR and more traditional forms of media.

There is no higher-quality interaction than the human experience, in which we use all our senses together with language and cognition to understand our surroundings and, above all, to interact with other people. We interact with today's 'Intelligent Personal Assistants' primarily by voice; communication is episodic, based on a request-response model. The user does not see the assistant, which neither takes advantage of visual and emotional cues nor evolves over time. However, advances in the real-time creation of photorealistic computer-generated characters, coupled with emotion recognition, behaviour modelling and natural language technologies, allow us to envisage virtual agents that are realistic in both looks and behaviour; that can interact with users through vision, sound, touch and movement as they navigate rich and complex environments; converse in a natural manner; respond to moods and emotional states; and evolve in response to user behaviour.

PRESENT will create and demonstrate a set of practical tools, a pipeline and APIs for creating realistic embodied agents and incorporating them in interfaces for a wide range of applications in entertainment, media and advertising. The international partnership includes the Oscar-winning VFX company Framestore; technology developers Brainstorm, Cubic Motion and IKinema; Europe’s largest certification authority InfoCert; research groups from Universitat Pompeu Fabra, Universität Augsburg and Inria; and the pioneers of immersive virtual reality performance CREW.

Museum scene with a group of virtual humans.

The Inria team for PRESENT is composed of members of both the RAINBOW and MimeTIC teams. The principal investigator is Julien Pettre (crowd simulation); the other members of the team are specialists in haptics (Claudio Pacchierotti), real-time animation (Ludovic Hoyet), biomechanics (Anne-Helene Olivier) and virtual reality cinema (Marc Christie). Two PhD students (Alberto Jovane and Adèle Colas) joined the team to work specifically on the project and on virtual human behaviour.

Inria aims to create virtual environments with a heightened sense of presence by enhancing non-verbal communication (NVC) between users and the virtual agents that surround them in the scene. To this end, Inria will develop technologies that make interactions between humans and digital agents more natural, exploring three main aspects:

  1. animation techniques that convey voluntary NVC messages;
  2. reactive behaviours that convey involuntary NVC messages; and
  3. the application of haptic feedback.
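To make the second aspect concrete, an involuntary NVC behaviour is one the agent exhibits reactively rather than as a deliberate message, for example turning its gaze towards a nearby user or stepping back when the user intrudes on its personal space. The sketch below is purely illustrative and not part of the PRESENT toolset; the function name, the `Agent` structure and the parameter values (`personal_space`, `gaze_rate`) are all hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float
    heading: float  # radians

def reactive_gaze_and_proxemics(agent, user_x, user_y,
                                personal_space=1.2, gaze_rate=0.25):
    """One update step for two involuntary NVC cues:
    - gaze: smoothly rotate the agent's heading towards the user;
    - proxemics: step back if the user enters the agent's personal space.
    """
    dx, dy = user_x - agent.x, user_y - agent.y
    dist = math.hypot(dx, dy)
    target = math.atan2(dy, dx)
    # shortest angular difference, wrapped to [-pi, pi]
    diff = (target - agent.heading + math.pi) % (2 * math.pi) - math.pi
    agent.heading += gaze_rate * diff  # smooth turn, not an instant snap
    if 0 < dist < personal_space:
        # retreat along the user->agent direction to restore personal space
        back = personal_space - dist
        agent.x -= back * dx / dist
        agent.y -= back * dy / dist
    return agent
```

Run once per frame, such a rule produces small, believable reactions without any scripted animation; a real system would blend the result with the character's ongoing full-body motion.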

To do so, Inria will pay particular attention to two aspects of human behaviour modelling: the individual aspect, where each agent is treated as a unique character whose motions are local; and the collective aspect, where agents are considered as groups that move collectively and globally.
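One simple way to picture how these two layers combine is a steering rule in which each agent's velocity blends an individual term (towards its own goal) with a collective term (towards the group centroid). This is a minimal sketch for illustration only, not the project's actual crowd-simulation method; the function name and the weights are assumptions.

```python
def steer(agents, goals, w_individual=0.7, w_collective=0.3, dt=0.1):
    """One integration step combining two behaviour layers:
    - individual: each agent steers towards its own goal (local motion);
    - collective: each agent also steers towards the group centroid,
      so the group tends to move together (global motion).
    agents, goals: lists of (x, y) tuples; returns updated positions.
    """
    cx = sum(a[0] for a in agents) / len(agents)
    cy = sum(a[1] for a in agents) / len(agents)
    out = []
    for (x, y), (gx, gy) in zip(agents, goals):
        ix, iy = gx - x, gy - y   # individual: towards own goal
        kx, ky = cx - x, cy - y   # collective: towards group centroid
        vx = w_individual * ix + w_collective * kx
        vy = w_individual * iy + w_collective * ky
        out.append((x + dt * vx, y + dt * vy))
    return out
```

Varying the two weights moves an agent along the spectrum from a fully autonomous character to a member of a tightly coordinated group, which is precisely the individual/collective distinction described above.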
