Multimodal Emotional Recognition for Human-Robot Interaction in Social Robotics

  1. Sergio García Muñoz
  2. Francisco Gómez Donoso
  3. Miguel Cazorla Quevedo

Affiliation: Universitat d'Alacant, Alicante, Spain (ROR: https://ror.org/05t8bcz72)
Book:
Proceedings of the XXIV Workshop of Physical Agents: September 5-6, 2024
  1. Miguel Cazorla (coord.)
  2. Francisco Gomez-Donoso (coord.)
  3. Felix Escalona (coord.)

Publisher: Universidad de Alicante / Universitat d'Alacant

ISBN: 978-84-09-63822-2

Publication year: 2024

Pages: 220-234

Conference: WAF (24th, 2024, Alicante)

Type: Conference paper

Abstract

This study explores the enhancement of human-robot interaction (HRI) through multimodal emotional recognition within social robotics, using the humanoid robot Pepper as a testbed. Despite the advanced interactive capabilities of robots like Pepper, their ability to accurately interpret and respond to human emotions remains limited. This paper addresses these limitations by integrating visual, auditory, and textual analyses to improve emotion recognition accuracy and contextual understanding. By leveraging multimodal data, the study aims to facilitate more natural and effective interactions between humans and robots, particularly in assistive, educational, and healthcare settings. The methods employed include convolutional neural networks for visual emotion detection, audio processing techniques for auditory emotion analysis, and natural language processing for text-based sentiment analysis. The results demonstrate that the multimodal approach significantly enhances the robot’s interactive and empathetic capabilities. This paper discusses the specific improvements observed, the challenges encountered, and potential future directions for research in multimodal emotional recognition in HRI.
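
The abstract names three modality-specific pipelines (a CNN for visual emotion detection, audio processing for vocal emotion, and NLP-based sentiment analysis of text) but does not state how their outputs are combined. The sketch below illustrates one common combination strategy, weighted late fusion of per-modality probability distributions; the emotion label set, the weights, and the `fuse_predictions` function are illustrative assumptions, not the authors' implementation.

```python
# Minimal late-fusion sketch (assumed, not taken from the paper): each modality
# classifier is taken to output a probability distribution over the same
# emotion labels, and the fused prediction is their weighted average.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # illustrative label set


def fuse_predictions(modality_probs: dict[str, np.ndarray],
                     weights: dict[str, float]) -> tuple[str, np.ndarray]:
    """Weighted late fusion of per-modality emotion probability vectors."""
    fused = np.zeros(len(EMOTIONS))
    total = 0.0
    for modality, probs in modality_probs.items():
        w = weights.get(modality, 1.0)
        fused += w * np.asarray(probs, dtype=float)
        total += w
    fused /= total  # renormalise so the result is again a distribution
    return EMOTIONS[int(fused.argmax())], fused


if __name__ == "__main__":
    # Hypothetical outputs from the visual (CNN), audio, and text classifiers.
    probs = {
        "visual": np.array([0.70, 0.10, 0.05, 0.15]),
        "audio":  np.array([0.40, 0.30, 0.10, 0.20]),
        "text":   np.array([0.60, 0.05, 0.05, 0.30]),
    }
    weights = {"visual": 0.5, "audio": 0.25, "text": 0.25}
    label, dist = fuse_predictions(probs, weights)
    print(label, dist.round(3))  # e.g. "happy" with the fused distribution
```

Late fusion of this kind keeps each modality's classifier independent, which makes it straightforward to drop or re-weight a modality when, for example, the robot's camera or microphone input is unreliable; whether the paper uses this or a feature-level fusion scheme is not specified in the abstract.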