Project

Although virtual humans are now a prerequisite for creating ever more lifelike virtual worlds, they still display a level of uniformity that is detrimental to realism. While this need for greater variety has been identified, existing approaches focus solely on variations in characters' visual aspect, i.e., appearance and shape. However, motion is extremely important for humans to perform actions and to express themselves, in particular in nonverbal communication. More specifically, no two humans perform an action in precisely the same manner, nor does a single person perform it the same way every time, which is why variations are important to create ever more believable and expressive virtual humans.

Therefore, this project aims to create variety in human motions, in order to produce a new generation of more realistic virtual characters. However, variety is not simply a reflection of random differences: it results from complex intra-individual (e.g., fatigue) and inter-individual (e.g., morphology, age, sex) differences, which are seldom taken into account today. As such differences can be difficult to quantify, we propose in this project to focus on how viewers perceive motion variations, in order to automatically produce natural motion personalisation accounting for inter-individual variations. In short, our goal is to automate the creation of variations to represent individuals with given characteristics, and to produce natural variations that are perceived and identified as such by users. However, because of the complexity of human motion, creating such variations is highly likely to depend on the type of motion considered. Therefore, to validate this first attempt at perceptually-based personalisation, we propose to focus on a main transversal scenario on locomotion, as locomotion is commonly used in interactive applications, presents large potential for personalisation (e.g., morphology, personality, fitness), and has a large potential impact on our targeted applications.

To reach this objective, we propose an approach based on three challenges. Our first challenge consists in understanding what makes the motions of individuals perceptually different. To tackle this challenge, we will first acquire a unique dataset of human motions, covering a minimum of 200 individuals with a large range of characteristics (e.g., age, morphology, personality). Once acquired, it will enable us to conduct a large-scale perceptual experiment to identify how visually different the motions of each individual are from those of every other individual in our database. Such an experiment will require adapting the perceptual frameworks used in our previous work to handle the number of motions to compare, but will give us the unprecedented opportunity to automatically identify the motion features that contribute most to visual motion variety. The next challenge consists in synthesising variations based on these perceptual features. In particular, we propose to explore how a simple model can be designed to create variations of cyclic motions, validated on locomotion before being adapted and generalised to acyclic motions. Finally, our last challenge consists in creating variations for interactive large-scale scenarios, where both performance and realism are critical. This requires automatically and efficiently personalising the motions of large numbers of virtual humans. We will therefore identify, through perceptual experiments, the best means of producing variations in large groups of characters, and build on these insights to design adaptive perception-based methods providing the best trade-off between visual realism and computational load.
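To give a concrete, purely illustrative flavour of what a "simple model" for cyclic-motion variation could look like, the Python sketch below decomposes a periodic joint-angle trajectory (e.g., one gait cycle) into a few Fourier harmonics and produces per-individual variations by perturbing their amplitudes and phases. This is not the project's actual model; all function names, parameters, and the perturbation scheme are assumptions made only for illustration.

    # Hypothetical sketch of a cyclic-motion variation model (illustrative only).
    # A walk cycle is represented as a joint-angle trajectory over one period;
    # variations are generated by scaling and phase-shifting its harmonics.
    import numpy as np

    def fourier_coefficients(cycle, n_harmonics=4):
        """Keep the mean and the first few harmonics of a periodic trajectory."""
        return np.fft.rfft(cycle)[: n_harmonics + 1]

    def synthesise_cycle(coeffs, n_samples):
        """Rebuild a trajectory of n_samples from truncated Fourier coefficients."""
        spectrum = np.zeros(n_samples // 2 + 1, dtype=complex)
        spectrum[: len(coeffs)] = coeffs
        return np.fft.irfft(spectrum, n=n_samples)

    def personalised_variation(cycle, amp_scale=1.0, phase_shift=0.0, rng=None):
        """Scale harmonic amplitudes (range of motion), shift their phases
        (timing), and add a small random component per individual."""
        rng = rng or np.random.default_rng()
        coeffs = fourier_coefficients(cycle)
        k = np.arange(len(coeffs))
        noise = 1.0 + 0.02 * rng.standard_normal(len(coeffs))
        varied = coeffs * amp_scale * noise * np.exp(1j * k * phase_shift)
        return synthesise_cycle(varied, len(cycle))

    # Toy usage: a synthetic "knee angle" over one 120-frame gait cycle.
    t = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
    knee = 30.0 + 25.0 * np.sin(t) + 5.0 * np.sin(2.0 * t)
    variant = personalised_variation(knee, amp_scale=1.1, phase_shift=0.05)

In the project itself, the perturbation parameters would not be hand-tuned as above but derived from the perceptual features identified in the first challenge, so that generated variations remain within the range that viewers perceive as natural.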

The expected outcome is a breakthrough in the creation of natural virtual human content. This will be particularly impactful for large-scale simulations, where variations still cannot be automated today and are therefore either manually created by artists or kept to a bare minimum.
