ICDL-EpiRob Workshop on Naturalistic Non-Verbal and Affective Human-Robot Interactions

19th August 2019, Engineer's House Conference Centre, Oslo, Norway


Aim and Scope

This full-day workshop will investigate the sensorimotor and affective mechanisms that underlie human-robot interaction. Humans constantly produce non-verbal and affective cues and expressions to foster cooperation and mutual understanding and to signal trustworthiness. If these cues are not appropriately reciprocated, however, the interaction can be negatively impacted. Moreover, inappropriate reciprocation, or the lack thereof, may be the result of misperception and/or untimely reactions. Failure to adequately account for biologically plausible perceptual and temporal facets of interactions may detract from the quality of human-robot interaction and hinder progress in the field of social robotics more generally.

Incorporating naturalistic and adaptive forms of sensorimotor and affective non-verbal communication into human-robot interaction is challenging because such interaction is highly dependent on the context and on the relationship between observer and expresser. Interaction among biological agents often requires explicit forms of social signalling such as nodding, non-verbal gestures and emotional expressions, the interpretation of which may be highly context-sensitive. Furthermore, naturalistic social signalling may involve a certain degree of mimicry of autonomic responses such as pupil dilation, blinking and blushing, which, in human-robot interaction, requires the implementation of time-sensitive perceptual mechanisms currently underused in both commercial and research robotics platforms.

In this workshop, we will investigate and discuss to what extent the aforementioned naturalistic social signalling capabilities need to be accounted for in human-robot interaction, which modalities are most relevant, and in what contexts. The workshop will focus strongly on research motivated by naturalistic empirical data. We hope to provide a discussion-friendly environment that connects research covering complementary interests in robotics, computer science, psychology, neuroscience, affective computing and animal learning research.

Topics of interest include, but are not limited to:

  • Emotion recognition

  • Gesture recognition

  • Social gaze recognition

  • The development of expression and recognition capabilities

  • Joint visual attention and activity

  • Alignment in social interactions

  • Non-verbal cues in human-robot interaction

Program


8h50   Welcome and Introduction

9h00   Human-robot interaction for therapy and assistance
Kerstin Dautenhahn, Canada 150 Research Chair in Intelligent Robotics, University of Waterloo, Canada
My talk will cover some research on (primarily) non-verbal human-robot interaction that I have been involved in over the past few years. This includes applications of robots as home companions with the goal of assisting independent living, as well as the use of robots as social mediators in robot-assisted therapy for children with autism. I will present results from a few studies in these domains and point out challenges for future research.

9h40   Motor resonance, quality of interaction, and their relationship to entrainment and rapport
Frank Foerster, School of Computer Science, University of Hertfordshire, UK
In this presentation, I will summarise the results of some of our recent research on motor resonance in human-robot interaction (HRI) and its potential to measure the 'quality of interaction'. I will subsequently attempt to clarify the often tacit assumptions about the potential role of motor resonance measures in HRI, and discuss the link between motor resonance and the related notions of entrainment and rapport.

10h20  Spotlight session for posters

10h30  Coffee Break & Poster Session

11h10  Children's interpersonal trust beliefs and trust development in child-robot interaction
Vicky Charisi, Joint Research Centre, European Commission
Basic levels of interpersonal trust among people have long been considered necessary for the survival of society and the development of successful psychosocial functioning. A core assumption of many theories of development is that children can learn indirectly from other people. If children are to capitalize on this source of knowledge, they must be able to infer who is trustworthy and who is not. The development of this process has been examined by experimental and developmental psychologists and social scientists in various contexts. As a result, interpersonal trust among humans can be evaluated in terms of epistemic, social and affective criteria. However, when one of the interacting agents is a robot, the emergence of trust in child-robot collaborative settings might differ. In my talk, I will give an overview of the relevant literature on children's interpersonal trust and make connections with current research on child-robot interaction. I will briefly present a series of studies on child-robot interaction in collaborative problem-solving activities and highlight instances of non-verbal interaction that are indicative of trust in these settings. Finally, I will discuss specific examples of design principles that focus on non-verbal interaction and trust development.

11h50  Predictive Coding Account for Emotion
Yukie Nagai, International Research Center for Neurointelligence, The University of Tokyo, Japan
The theory of predictive coding has been attracting increasing attention in developmental robotics as well as in neuroscience. It suggests that the human brain works as a predictor that tries to minimize prediction errors by updating its internal model and/or by acting on the environment. Inspired by this theory, we have been proposing computational neural networks that allow robots to acquire social capabilities such as imitation, reading others' intentions, and helping others. This talk presents our computational studies investigating how emotion develops based on predictive coding. Various phenomena, such as the developmental differentiation of emotional states, emotional imitation through mental simulation, and emotion estimation through active inference, have been demonstrated in our robot experiments.
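To make the prediction-error idea concrete, here is a minimal toy sketch in Python. It illustrates only the general principle, not the speaker's actual networks; the linear internal model, the toy "world" and the learning rate are assumptions made purely for illustration.

    # Toy illustration of prediction-error minimisation (predictive coding).
    # Assumed for illustration: a linear internal model, a linear toy
    # "world", and a fixed learning rate.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=2)   # parameters of the agent's internal model
    eta = 0.05               # learning rate (assumed)

    def predict(w, x):
        """The internal model's prediction of the sensory input for context x."""
        return w @ x

    for t in range(1000):
        x = rng.normal(size=2)              # current context/state
        sensed = np.array([1.5, -0.5]) @ x  # actual sensory input from the toy world
        error = sensed - predict(w, x)      # prediction error
        # Perception-side route to minimising error: update the internal model.
        # (The active-inference route would instead act on the environment so
        # that future sensations match the prediction.)
        w += eta * error * x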
12h30  Lunch

13h30  Up close and personal: managing social connection with non-verbal cues
Christian Balkenius, Lund University, Sweden
Humans are susceptible to subtle non-verbal cues that modulate whether we feel connected to other people. These cues include gaze direction, pupil dilation, blinking and touch. Pupil dilation, being a sign of increased arousal, can signal both positive and negative emotions. The interpretation depends on contextual information, including our impression of the person we are interacting with, which in turn can be based on previous interactions or on superficial evaluations of appearance. The detailed dynamics of pupil dilation also carry information; for example, synchronised pupil dilation is a sign of trust. Like pupil dilation, blinking conveys both emotional and synchrony information. A higher blinking frequency is usually associated with heightened emotion, while the exact timing of blinks can be a sign of attention. For example, we are more likely to blink at the end of a sentence when we are attending to another person talking, and groups of people listening to the same story or watching the same event often blink at the same time. Like the other cues, the reaction to touch is context-dependent and can be either positive or negative depending on the situation. Although gentle touch will generally increase a positive connection, it may have the opposite effect when it is not welcome. I will describe ongoing work on the humanoid robot Epi that attempts to implement some of the mechanisms behind these effects. The robot has two movable eyes with physically animated pupils. Blinking is implemented as a momentary decrease of the illumination of the eyes. The robot is able to detect both gentle and painful touch: gentle touch is detected through capacitive sensors on the robot's body, while painful touch is detected as 'proprioceptive' feedback that does not match expectations. This mechanism, which is also used to detect collisions with objects, will recognise if the robot is hit or handled roughly. The robot is controlled by a computational model of the brain systems responsible for the control of gaze, pupil dilation and blinking. The system uses visual input for face detection and recognition, and uses touch as the primary emotional input modality.
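As a rough illustration of the expectation-mismatch mechanism described in this abstract, the hypothetical Python sketch below flags "painful" touch when proprioceptive feedback deviates too far from what a naive forward model predicted. The threshold, the forward model and all names here are assumptions, not the Epi robot's actual software; the same mismatch test would also fire on collisions or rough handling, as the abstract notes.

    # Hypothetical sketch: "painful" touch as a proprioceptive expectation
    # mismatch. Threshold, forward model and all names are assumed.
    PAIN_THRESHOLD = 0.3  # assumed mismatch threshold (radians)

    def expected_position(commanded, previous, alpha=0.8):
        """Naive forward model: the joint moves most of the way to the command."""
        return previous + alpha * (commanded - previous)

    def detect_painful_touch(commanded, previous, measured):
        """True if the sensed joint angle deviates too far from the prediction."""
        mismatch = abs(measured - expected_position(commanded, previous))
        return mismatch > PAIN_THRESHOLD

    # Example: the arm is commanded to 1.0 rad but is knocked back to 0.2 rad.
    print(detect_painful_touch(commanded=1.0, previous=0.0, measured=0.2))  # True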
14h10  Movement-based communication for Human-Robot Interaction
Alessandra Sciutti, Center for Human Technologies, Istituto Italiano di Tecnologia, Italy
Human interaction is based on mutual understanding: I know how to communicate because I entertain a model of you, which enables me to select an effective way to convey what I want and to have an intuition of your internal states: what you need, fear or desire. Such intuition guides my vision, enabling me to perceive properties that would otherwise be inaccessible to my perception, such as goals, emotions or effort, just by observing your actions. We use the humanoid robot iCub to investigate how a robot could leverage similar visual signals to anticipate the partner's goals for collaborative or competitive purposes, to infer the right moment to interact, and to assess the emotional reactions of its partners. In a dual approach, we are trying to understand how to modulate robot behaviour to elicit better human understanding and to express different characteristics of the interaction, from mood to level of commitment. This approach is propaedeutic to the creation of a cognitive system, helping to define what is relevant to attend to, starting from signals originating in the intrinsic characteristics of the human body. We believe that only proper mutual understanding, also leveraging non-verbal communication, will allow for more humane machines, able to perceive and interact with the world and others as we do.

14h50  Collection of questions for discussion

15h00  Coffee Break & Poster Session

15h30  When and why kids imitate nonsense: Over-imitation of human and robot models in preschoolers
Stefanie Hoehl, Faculty of Psychology, University of Vienna, Austria
Children imitate actions that serve no apparent function with regard to the goal of an action sequence, a phenomenon termed over-imitation. In my talk, I will present a series of experiments in which we determined relevant characteristics of the model affecting the occurrence and persistence of over-imitation in preschoolers. We show that communication is not necessary to elicit over-imitation, but that it enables children to switch more flexibly between more and less efficient action strategies. Group membership, manipulated through minimal groups, did not affect over-imitation rates when all models were equally communicative. Similarly, children were equally likely to imitate a communicative robot model as a human one. I will discuss our findings in light of the underlying motivations and potential rationality of over-imitation.

16h10  Discussion

16h50  Conclusions and Farewell

Call for Papers

Participants are invited to submit a short paper (max. 4 pages) following the standard IEEE conference style. Submissions must be in PDF format and should be sent by email to nng@ieee.org with [ICDL-EPIROB 2019] in the subject line.

Selected contributions will be presented during the workshop.

Important Dates

  • Paper submission deadline: 31st May 2019

  • Notification of acceptance: 14th June 2019

  • Camera-ready version: 1st August 2019

  • Workshop: 19th August 2019

Registration

For information about registration for this workshop please refer to the ICDL-EPIROB 2019 website.

Organizers

  • Nicolás Navarro-Guerrero, Aarhus University, Denmark

  • Robert Lowe, University of Gothenburg, Sweden

  • Chrystopher L. Nehaniv, University of Waterloo, Canada