Autonomous Virtual Humans and Social Robots for 3D telepresence
Nadia Magnenat Thalmann, NTU, Singapore
In this talk, we will present the research we have done so far at MIRALab, University of Geneva in Switzerland, and our new research at NTU, Singapore, on the autonomy of both Virtual Humans (VH) and Social Robots. First, we defined personality, mood, and emotion models for VH and social robots and demonstrated real-time interaction with them. Now in Singapore, we are working on more complex interaction, including the recognition of faces and hand gestures, as well as sound classification. We have worked on linking social media to our VH and Social Robots, and a true dialogue can be demonstrated among these three entities. In the long run, we aim for full interaction with distant partners, be they human, robotic, or virtual. Several videos will be presented to demonstrate our ongoing research.
Some New Advances in Crowd Simulation, Prof. Daniel Thalmann
In this talk, we will survey techniques for modelling crowds in real time: variety in crowds, individualized path planning, and accessories. We will emphasize methods for planning the paths of thousands of pedestrians in real time while ensuring dynamic collision avoidance, and we will discuss recent results based on the reuse of real trajectories. We will also describe interaction with the crowd using gesture recognition.
Nadia Magnenat Thalmann and Daniel Thalmann have pioneered research into virtual humans over the last 30 years. Together with their PhD students, they have published more than 500 papers and books on Virtual Humans and Social Robots, covering research topics such as 3D clothes, hair, body gestures, emotion modelling, crowd simulation, and medical simulation. They have led research labs in Canada and Switzerland, and are now with NTU, Singapore.