Expressive Encounters Workshop

September 29, 2024

About the Expressive Encounters Workshop 2024

Virtual embodied agents and robots are increasingly integrated into our daily lives, serving as receptionists in public services, home assistants, virtual personal trainers, and coaches for physical and mental health activities. To ensure user acceptance and trust, these agents must be designed to be not only functionally capable and useful but also understandable and socially appropriate. People are more likely to accept such technologies when they perceive them as extensions of themselves. Human-human interaction, which relies not only on language but also on non-verbal communication rich in social and cultural cues, therefore serves as a natural model for designing the behaviours of such agents. Consequently, there has been a significant effort in recent years to generate non-verbal gestures for agents automatically in a data-driven manner.

This workshop aims to advance the development of real-world applications involving virtual embodied agents and robots. To ensure user acceptance and trust, such applications require new generative models and evaluation methods that can produce socially appropriate non-verbal behaviours, taking cultural and personal factors into account, and that support real-time processing (i.e., understanding and responding to the user on the fly).

TOPICS
The main suggested topics for the workshop include, but are not limited to:

  • Multimodal (i.e., vision, audio, and/or text) data processing for gesture generation
  • Multimodal data modelling for understanding personal, affective, and cultural factors
  • Real-time gesture generation techniques
  • New generative models to model personal factors (e.g., culture, affect, and personality)
  • Incorporating personal factors into gesture generation
  • Gesture generation beyond monologues, in multi-party interaction settings
  • New approaches to transfer human-human interactions to human-agent interactions
  • Transfer learning and transformer techniques for applying virtual agents' gestures to embodied agents with reduced degrees of freedom
  • Multimodal data modelling techniques for detecting human trust and openness towards embodied agents

The Call for Papers is online! Check out the guidelines on its dedicated page!

For email updates about the workshop, sign up here.


Organizers

Viktor Schmuck

King's College London | UK

Ariel Gjaci

Università degli Studi di Genova | IT & King's College London | UK

Chaitanya Ahuja

Meta AI | US

Gustav Eje Henter

KTH Royal Institute of Technology | SE

Rajmund Nagy

KTH Royal Institute of Technology | SE

Youngwoo Yoon

Electronics and Telecommunications Research Institute | KR

Oya Celiktutan

King's College London | UK