Workshop on Socially Aware and Cooperative Intelligent Systems


About the Workshop

This workshop focuses on realizing socially aware systems in the wild via cooperative intelligence, keeping humans in the loop. Specifically, it is dedicated to discussing computational methods for sensing and recognizing nonverbal cues and internal states in the wild to realize cooperative intelligence between humans and intelligent systems, including learning and behavior generation that comply with social norms, as well as related topics such as social interaction datasets.

One of the main considerations in achieving cooperative intelligence between humans and intelligent systems is enabling everyone and everything to know each other well, much as humans can trust one another and infer implicit internal states such as intention, emotion, and cognitive state. The importance of empathy in facilitating human-robot interaction has been highlighted in previous studies. However, it is difficult for intelligent systems to estimate the internal states of humans because these states depend on complex social dynamics and environmental contexts. This requires intelligent systems to be capable of sensing multi-modal inputs, reasoning about the underlying abstract knowledge, and generating appropriate responses to collaborate and interact with humans. There are many studies on estimating human internal states from measurements by wearables and non-invasive sensors [10, 24], but such solutions are difficult to deploy in the wild because of the additional sensors humans must wear. It remains an open question how intelligent systems can sense and recognize nonverbal cues and reason about the rich underlying internal states of humans in wild and noisy environments. The scope of this workshop includes, but is not limited to, the following:

  • Human internal state inference, e.g., cognitive and emotional states.
  • Recognition of nonverbal cues, e.g., gaze and gesture.
  • Multi-modal sensing fusion for scene perception.
  • Learning algorithms.
  • Generative and adversarial algorithms.
  • Empathetic interaction between humans and intelligent systems.
  • Robust sensing of facial and body key points.
  • Personalization and trust of intelligent systems.
  • Applications of cooperative intelligence in the wild.

Keywords: Socially aware AI, cooperative intelligence, group interaction, social norm, nonverbal cues

News updates

June 21st: Workshop webpage was launched.

Call for Papers

Submission Guidelines

We invite authors to submit unpublished papers (2-4 pages excluding references) to our workshop. Submissions will undergo peer review by the workshop's program committee, and authors of accepted papers will be invited to present their work at the workshop (see Presentation Format).

We are pleased to announce that an award will be given to the best paper accepted at this workshop.

Important Dates

  • June 20, 2025

    Notification of workshop acceptance

  • June 21, 2025

    Workshop web page open

  • Aug 7, 2025

    HAI2025 final decisions to authors

  • Sep 29, 2025 (extended from Sep 8)

    FINAL EXTENSION: Workshop paper submission deadline

  • Oct 14, 2025 (extended from Oct 1)

    Workshop paper review deadline

  • Oct 17, 2025 (extended from Oct 10)

    Notification to authors

  • Oct 31, 2025

    Camera-ready deadline

  • Nov 10, 2025

    Workshop date

Submission Instructions

Please use the IEEE conferences paper format to write your manuscript. Please submit your paper electronically through the workshop's EasyChair submission system.

Presentation Format

Accepted papers will be presented using a three-way presentation approach to foster active participation.

Publication Format

Authors are recommended to archive their papers and to inform the workshop organizers once this procedure is completed. Accepted papers that have been archived will be hosted on the workshop webpage.
Extended versions of the papers presented at this workshop will be invited for submission to a special journal issue, to be announced at a later date.

Program

We plan a full-day, 8-hour event, including oral and poster presentations of accepted and invited papers, talks by four invited speakers, and two interactive sessions: a demonstration and a forum discussion. For participants who cannot attend in person, we will disseminate the papers and pre-recorded videos on our workshop page, which also includes a comment section for Q&A.

  • 08:30–08:40 Welcome and opening remarks (10 mins)
  • 08:40–09:25 Invited talk I (45 mins, including Q&A)
  • 09:25–10:21 Flash talks I: 7 papers (8 mins each)
  • 10:21–10:55 Coffee break and poster session I (34 mins)
  • 10:55–11:40 Invited talk II (45 mins, including Q&A)
  • 11:40–12:30 Forum (50 mins)
  • 13:30–14:15 Invited talk III (45 mins, including Q&A)
  • 14:15–15:11 Flash talks II: 7 papers (8 mins each)
  • 15:11–15:45 Coffee break and poster session II (34 mins)
  • 15:45–16:35 Group interaction with an agent (50 mins)
  • 16:35–17:20 Invited talk IV (45 mins, including Q&A)
  • 17:20–17:30 Closing and awards ceremony (10 mins)

Invited Speakers

We have confirmed the attendance of four speakers:


Invited Talk I

Is This a Person Speaking? Unintended Side Effects of Simulated Humanness in Conversational AI

Martina Mara, Johannes Kepler Universität Linz, Austria.
Link to website: https://www.jku.at/en/lit-robopsychology-lab/about-us/team/martina-mara/
Abstract

The use of conversational AI, such as ChatGPT or DeepSeek, is rising rapidly across many personal and professional settings. These AI models not only enable natural-language interactions but are also increasingly equipped with realistic synthetic voices and other human-like cues that mimic interpersonal communication. Research, including recent studies from Prof. Mara’s Robopsychology Lab, shows that many people readily anthropomorphize AI language models, attributing emotionality, personality, and intentionality to them. While anthropomorphism can help reduce complexity and satisfy the human need for social relatedness, it may also raise concerns. Studies suggest that individuals who more strongly anthropomorphize AI are more likely to endorse granting such systems rights and to judge AI-generated recommendations as more competent and trustworthy, thereby increasing the risk of over-reliance on potentially flawed outputs. Although anthropomorphism of non-human entities is very widespread, it varies with individual and contextual factors. Findings from the Robopsychology Lab indicate that people who feel lonely or have limited AI literacy can be especially prone to anthropomorphism. This talk will examine the psychological mechanisms underlying AI anthropomorphism, its implications for both human-AI and human-human interaction, and practical strategies for de-anthropomorphizing and improving AI literacy in an increasingly AI-integrated world.

Biography

Martina Mara is a Full Professor of Psychology of Artificial Intelligence & Robotics and Head of the Robopsychology Lab at Johannes Kepler University Linz, Austria. She earned her PhD in Psychology from the University of Koblenz-Landau (2014), focusing on user acceptance of highly human-like robots, and received her habilitation (venia docendi) from the University of Nuremberg (2022). Trained as an empirical psychologist with an interdisciplinary outlook, her research examines how people perceive, trust, and collaborate with intelligent systems—and how system design can foster human autonomy, creativity, and well-being. Before joining JKU in 2018, Martina spent many years outside academia, working in media companies and a design studio, and later at the Ars Electronica Futurelab, where she collaborated with international partners, including major companies across Europe and Japan, at the intersection of technology, research, and the arts. She is a co-founder of the Initiative Digitalisierung Chancengerecht (IDC), serves on the board of the Austrian Research Promotion Agency (FFG), and has received several honors, including the Vienna Women’s Prize, a Futurezone Award, and the Kaethe Leichter Prize for outstanding contributions to women’s and gender research from the Austrian Federal Ministry for Digital and Economic Affairs. Alongside her teaching and research, she is passionate about science communication, giving public talks, writing commentary, and creating interactive exhibitions on her lab’s topics.


Invited Talk II

Constructing Human-Aware AI for Integrated Information Processing

Tim Schrills, University of Lübeck, Germany.
Link to website:
Abstract

Biography

Invited Talk III

From Robot Audition to Ecological AI: Expanding Human-Agent Interaction into the Real World

Kazuhiro Nakadai, Institute of Science Tokyo.
Link to website: https://www.ra.sc.e.titech.ac.jp/en/member/kazuhiro-nakadai/
Abstract

Human-Agent Interaction (HAI) has traditionally centered on dialogue between humans and artificial agents in relatively controlled settings. In this talk, I will present a broader perspective on Ecological AI—agents that adapt not only to humans, but also to the diverse environments in which interaction occurs. The journey begins with robot audition, enabling robots to localize, separate, and understand speech in noisy real-world acoustic scenes. We then extend auditory intelligence to drone audition for disaster response, where agents must support human rescuers under extreme uncertainty. Beyond audition, I will highlight efforts on two-way sign language translation and generation, enabling inclusive communication that bridges spoken and signed languages, and allowing agents to interact naturally with people who rely on sign. Finally, I will turn to wildlife and ecosystem monitoring, where agents listen to and interpret animal vocalizations, connecting humans with the voices of nature. Together, these studies illustrate how HAI can evolve into human–agent–environment interaction, expanding from human-centered conversation to inclusive, resilient, and ecological forms of collaboration.

Biography

Kazuhiro Nakadai received his B.E., M.E., and Ph.D. degrees in electrical and information engineering from the University of Tokyo in 1993, 1995, and 2003, respectively. He worked at Nippon Telegraph and Telephone (1995–1999), the Kitano Symbiotic Systems Project, ERATO, JST (1999–2003), and Honda Research Institute Japan as a principal scientist (2003–2022). He is currently a professor at the Department of Systems and Control Engineering, Institute of Science Tokyo (formerly Tokyo Institute of Technology). He also held visiting and special appointments at Tokyo Institute of Technology (2006–2022) and served as a guest professor at Waseda University (2011–2018). His research interests include artificial intelligence, robotics, signal processing, computational auditory scene analysis, multimodal integration, and robot audition. He served on the executive boards of the Japanese Society for Artificial Intelligence (2015–2016, 2024–2025) and the Robotics Society of Japan (2017–2018). He is a Fellow of IEEE and RSJ.


Invited Talk IV

The Future of Intimacy: Understanding Artificial Romantic Relationships in a Digital Age

Mayu Koike, Institute of Science Tokyo.
Link to website:
Abstract

In recent years, AI companions have gained remarkable popularity worldwide. What began as relatively simple chatbots or entertainment applications has rapidly developed into systems that many people now turn to for emotional support, daily conversation, and even romantic companionship. For some individuals, these AI partners are not only a source of comfort but also function as meaningful figures in their personal lives. This development highlights a significant cultural and technological shift: artificial agents are no longer perceived solely as tools, but increasingly as “partners” in human relationships. The growing prevalence of AI companions prompts us to reconsider what it means to form emotional attachments, and how such relationships might influence human-to-human connections in broader society. This workshop presentation will examine these questions by exploring recent findings on the psychology of virtual intimacy between humans and virtual agents. These findings highlight the psychological potential of virtual romance and the importance of understanding how emerging technologies are reshaping human relationships.

Biography

Mayu Koike is an assistant professor at the Institute of Science Tokyo. She obtained her PhD in Psychology from The University of Edinburgh. Her work focuses on the relationships between people and virtual agents. She is particularly interested in how people form strong attachments (love and romantic relationships) with virtual agents, and the potential for improving psychological well-being within this context.

Flash talks

09:25–09:33: title and presenters to be announced

Motivation and Background

This workshop's theme centers on the development of AI agents and systems that are capable of understanding, adapting to, and collaborating with humans in compliance with social norms. These systems leverage insights from social psychology, cognitive science, robotics, and AI to interpret social cues, anticipate the needs of others, and coordinate actions effectively within dynamic and often unpredictable contexts. We focus on embedding social awareness into AI systems, leading to cooperative intelligence, which emphasizes building trust and relationships between humans and intelligent systems rather than replacing human functions. This paradigm is expected to realize a hybrid society in which humans coexist with ubiquitous intelligent agents.

It is increasingly important for intelligent systems, such as robots, virtual agents, and human-machine interfaces, to collaborate and interact seamlessly with humans across diverse settings, including homes, factories, offices, and transportation systems. Achieving efficient and intelligent human-system collaboration relies on cooperative intelligence, which draws on interdisciplinary research spanning robotics, AI, human-robot and human-computer interaction, computer vision, and cognitive science.

Organizers

Jouh Yeong Chew

Honda Research Institute Japan

jouhyeong.chew@jp.honda-ri.com

Alan Sarkisian

Honda Research Institute Japan

alan.sarkisian@jp.honda-ri.com

Christiane Wiebel-Herboth

Honda Research Institute Europe

christiane.wiebel@honda-ri.de

Christiane Attig

Honda Research Institute Europe

christiane.attig@honda-ri.de

Zhaobo Zheng

Honda Research Institute USA

zhaobo_zheng@honda-ri.com

Shigeaki Nishina

Honda Research Institute Japan

nishina@jp.honda-ri.com