[2603.01804] Non-verbal Real-time Human-AI Interaction in Constrained Robotic Environments
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.01804 (cs)
[Submitted on 2 Mar 2026]

Title: Non-verbal Real-time Human-AI Interaction in Constrained Robotic Environments
Authors: Dragos Costea, Alina Marcu, Cristina Lazar, Marius Leordeanu

Abstract: We study the ongoing debate regarding the statistical fidelity of AI-generated data compared to human-generated data in the context of non-verbal communication using full-body motion. Concretely, we ask whether contemporary generative models move beyond surface mimicry to participate in the silent but expressive dialogue of body language. We tackle this question by introducing the first framework that generates a natural non-verbal interaction between a human and an AI in real time from 2D body keypoints. Our experiments use four lightweight architectures that run at up to 100 FPS on an NVIDIA Orin Nano, effectively closing the perception-action loop needed for natural Human-AI interaction. We trained on 437 human video clips and demonstrated that pretraining on synthetically generated sequences significantly reduces motion errors without sacrificing speed. Yet a measurable reality gap persists: when the best model is evaluated on keypoints extracted from cutting-edge text-to-video systems, such as SORA and VEO, we observe that performanc...
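The abstract describes a perception-action loop that maps a short window of observed human 2D keypoints to the agent's next pose at real-time rates. The paper's models and data are not given here, so the following is only a minimal sketch of that loop's shape: it assumes a COCO-style 17-joint skeleton, an 8-frame observation window, and stands in a random linear map for the trained lightweight network. All names (`predict_agent_pose`, `WINDOW`, etc.) are hypothetical.

```python
import time
import numpy as np

NUM_JOINTS = 17  # assumed COCO-style 2D skeleton (not specified in the abstract)
WINDOW = 8       # assumed number of past human frames fed to the model

rng = np.random.default_rng(0)
# Hypothetical stand-in for a trained lightweight model: a fixed linear map
# from the flattened observation window to the agent's next 2D pose.
W = rng.standard_normal((WINDOW * NUM_JOINTS * 2, NUM_JOINTS * 2)) * 0.01

def predict_agent_pose(history: np.ndarray) -> np.ndarray:
    """Map WINDOW frames of human 2D keypoints (WINDOW, 17, 2) to one agent pose (17, 2)."""
    x = history.reshape(-1)                # flatten to (WINDOW * 17 * 2,)
    return (x @ W).reshape(NUM_JOINTS, 2)  # agent's predicted next pose

# Closed perception-action loop over a synthetic keypoint stream,
# timing throughput the way an FPS figure would be measured.
stream = rng.standard_normal((200, NUM_JOINTS, 2))
history = stream[:WINDOW].copy()
start = time.perf_counter()
for frame in stream[WINDOW:]:
    agent_pose = predict_agent_pose(history)                      # act
    history = np.concatenate([history[1:], frame[None]], axis=0)  # perceive
elapsed = time.perf_counter() - start
fps = (len(stream) - WINDOW) / elapsed
print(f"{fps:.0f} FPS over {len(stream) - WINDOW} frames")
```

The sliding-window update is what closes the loop: each new observed frame displaces the oldest one, so the next prediction is always conditioned on the most recent human motion.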