Workshop on Socially Aware and Cooperative Intelligent Systems


About the Workshop

This workshop focuses on realizing socially aware systems in the wild via cooperative intelligence that keeps humans in the loop. Specifically, the workshop is dedicated to discussing computational methods for sensing and recognizing nonverbal cues and internal states in the wild to realize cooperative intelligence between humans and intelligent systems, including learning and behavior generation that comply with social norms, as well as other relevant technologies such as social interaction datasets.

One of the main considerations in achieving cooperative intelligence between humans and intelligent systems is to enable everyone and everything to know each other well, much as humans trust one another and infer each other's implicit internal states such as intentions, emotions, and cognitive states. The importance of empathy in facilitating human-robot interaction has been highlighted in previous studies. However, it is difficult for intelligent systems to estimate the internal states of humans because these states depend on complex social dynamics and environmental contexts. This requires intelligent systems to be capable of sensing multi-modal inputs, reasoning about the underlying abstract knowledge, and generating appropriate responses to collaborate and interact with humans. There are many studies on estimating human internal states through measurements from wearables and non-invasive sensors [10, 24], but such solutions are difficult to deploy in the wild because of the additional sensors humans must wear. It remains an open question how intelligent systems can sense and recognize nonverbal cues and reason about the rich underlying internal states of humans in wild and noisy environments. The scope of this workshop includes, but is not limited to, the following:

  • Human internal state inference, e.g., cognitive and emotional states.
  • Recognition of nonverbal cues, e.g., gaze and gestures.
  • Multi-modal sensor fusion for scene perception.
  • Learning algorithms.
  • Generative and adversarial algorithms.
  • Empathetic interaction between humans and intelligent systems.
  • Robust sensing of facial and body key points.
  • Personalization and trust of intelligent systems.
  • Applications of cooperative intelligence in the wild.

Keywords: Socially aware AI, cooperative intelligence, group interaction, social norm, nonverbal cues

News updates

June 21: Workshop webpage launched.

Call for Papers

Submission Guidelines

We invite authors to submit unpublished papers (2-4 pages, excluding references) to our workshop. Submissions will undergo peer review by the workshop's program committee, and accepted papers will be presented at the workshop (see Presentation Format).

We are pleased to announce that an award will be given to the best paper accepted by this workshop.

Important Dates

  • June 20, 2025: Notification of workshop acceptance
  • June 21, 2025: Workshop web page open
  • Aug 7, 2025: HAI2025 final decisions to authors
  • Sep 8, 2025: Workshop paper submission deadline
  • Oct 1, 2025: Workshop paper reviews deadline
  • Oct 10, 2025: Notification to authors
  • Oct 31, 2025: Camera-ready deadline
  • Nov 10, 2025: Workshop date

Submission Instructions

Please use the IEEE conference paper format for your manuscript. Please submit your paper electronically through the workshop's EasyChair submission system.

Presentation Format

Accepted papers will be presented using a three-way presentation approach to foster active participation (see the program below).

Publication Format

Authors are encouraged to self-archive their papers and inform the workshop organizers once this procedure is completed. Accepted papers that have been archived will be hosted on the workshop webpage.
As with the previous IROS2024 and ICRA2025 workshops, extensions of the papers presented at this HAI2025 workshop will be invited for submission to a special issue journal, to be announced at a later date.

Program

We plan a full-day, eight-hour event, including oral and poster presentations of accepted/invited papers, talks by four invited speakers, and two interactive sessions: a demonstration and a forum discussion. For participants who cannot attend in person, we will disseminate the papers and pre-recorded videos on our workshop page, which also includes a comment section for Q&A.

  • 08:30 – 08:40 Welcome and opening remarks (10 mins)
  • 08:40 – 09:25 Invited talk I (45 mins, including Q&A)
  • 09:25 – 10:21 Flash talks I: 7 papers (8 mins each)
  • 10:21 – 10:55 Coffee break and poster session I (34 mins)
  • 10:55 – 11:40 Invited talk II (45 mins, including Q&A)
  • 11:40 – 12:30 Forum (50 mins)
  • 13:30 – 14:15 Invited talk III (45 mins, including Q&A)
  • 14:15 – 15:11 Flash talks II: 7 papers (8 mins each)
  • 15:11 – 15:45 Coffee break and poster session II (34 mins)
  • 15:45 – 16:35 Group interaction with an agent (50 mins)
  • 16:35 – 17:20 Invited talk IV (45 mins, including Q&A)
  • 17:20 – 17:30 Closing and awards ceremony (10 mins)

Invited Speakers

We have confirmed the attendance of four invited speakers:

Invited Talk I

Is This a Person Speaking? Unintended Side Effects of Simulated Humanness in Conversational AI

Martina Mara, Johannes Kepler Universität Linz, Austria.

Invited Talk II

Constructing Human-Aware AI for Integrated Information Processing

Tim Schrills, University of Lübeck, Germany.

Invited Talk III

Learning Expressive Motion Controllers for Robotic Creatures

Sehoon Ha, Georgia Institute of Technology, USA.

Invited Talk IV

Visual Measurement of Humans

Xucong Zhang, Delft University of Technology, Netherlands.

Flash talks

To be announced.

Motivation and Background

This workshop theme centers on the development of AI agents and systems that are capable of understanding, adapting to, and reacting to humans in order to collaborate with them in compliance with social norms. These systems leverage insights from social psychology, cognitive science, robotics, and AI to interpret social cues, anticipate the needs of others, and coordinate actions effectively within dynamic and often unpredictable contexts. We focus on embedding social awareness into AI systems, leading to cooperative intelligence that builds trust and relationships between humans and intelligent systems rather than providing functions that replace humans. This paradigm is expected to realize a hybrid society in which humans coexist with ubiquitous intelligent agents.

It is increasingly important for intelligent systems, such as robots, virtual agents, and human-machine interfaces, to collaborate and interact seamlessly with humans across diverse settings, including homes, factories, offices, and transportation systems. Achieving efficient and intelligent human-system collaboration relies on cooperative intelligence, which draws on interdisciplinary research spanning robotics, AI, human-robot and human-computer interaction, computer vision, and cognitive science.

Organizers

  • Jouh Yeong Chew, Honda Research Institute Japan (jouhyeong.chew@jp.honda-ri.com)
  • Alan Sarkisian, Honda Research Institute Japan (alan.sarkisian@jp.honda-ri.com)
  • Shigeaki Nishina, Honda Research Institute Japan (nishina@jp.honda-ri.com)
  • Christiane Wiebel-Herboth, Honda Research Institute Europe (christiane.wiebel@honda-ri.de)
  • Christiane Attig, Honda Research Institute Europe (christiane.attig@honda-ri.de)
  • Zhaobo Zheng, Honda Research Institute USA (zhaobo_zheng@honda-ri.com)