[2603.21013] A Framework for Low-Latency, LLM-driven Multimodal Interaction on the Pepper Robot
Computer Science > Artificial Intelligence
arXiv:2603.21013 (cs)
[Submitted on 9 Jan 2026]

Title: A Framework for Low-Latency, LLM-driven Multimodal Interaction on the Pepper Robot
Authors: Erich Studerus, Vivienne Jia Zhong, Stephan Vonschallen

Abstract: Despite recent advances in integrating Large Language Models (LLMs) into social robotics, two weaknesses persist. First, existing implementations on platforms like Pepper often rely on cascaded Speech-to-Text (STT) → LLM → Text-to-Speech (TTS) pipelines, resulting in high latency and the loss of paralinguistic information. Second, most implementations fail to fully leverage the LLM's capabilities for multimodal perception and agentic control. We present an open-source Android framework for the Pepper robot that addresses these limitations through two key innovations. First, we integrate end-to-end Speech-to-Speech (S2S) models to achieve low-latency interaction while preserving paralinguistic cues and enabling adaptive intonation. Second, we implement extensive Function Calling capabilities that elevate the LLM to an agentic planner, orchestrating robot actions (navigation, gaze control, tablet interaction) and integrating diverse multimodal feedback (vision, touch, system state). The framework runs on the robot's tablet but can also be built to ru...
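The function-calling approach the abstract describes can be sketched in general terms: the LLM is given a set of tool schemas and, instead of free-form text, emits structured calls that the framework routes to robot actions. The sketch below is a minimal, hedged illustration of that pattern; the tool names (`navigate_to`, `set_gaze`) and schemas are assumptions for illustration, not the paper's actual API.

```python
import json

# Hypothetical tool schemas exposed to the LLM (OpenAI-style JSON Schema).
# These names and parameters are illustrative assumptions, not the
# framework's real interface.
TOOLS = [
    {
        "name": "navigate_to",
        "description": "Move the robot to a named location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
    {
        "name": "set_gaze",
        "description": "Orient the robot's head toward a yaw/pitch target.",
        "parameters": {
            "type": "object",
            "properties": {
                "yaw": {"type": "number"},
                "pitch": {"type": "number"},
            },
            "required": ["yaw", "pitch"],
        },
    },
]

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to a (stubbed) robot action."""
    name = tool_call["name"]
    args = json.loads(tool_call["arguments"])
    if name == "navigate_to":
        # In a real framework this would invoke the robot's navigation API.
        return f"navigating to {args['location']}"
    if name == "set_gaze":
        return f"gaze set to yaw={args['yaw']}, pitch={args['pitch']}"
    raise ValueError(f"unknown tool: {name}")

# Example: the LLM responds with a structured call rather than plain text.
result = dispatch({"name": "navigate_to", "arguments": '{"location": "kitchen"}'})
print(result)  # navigating to kitchen
```

Feeding the dispatcher's return value back to the model as a tool result is what closes the perception-action loop the abstract alludes to (vision, touch, and system state arriving as multimodal feedback).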