[2505.17645] HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Summary
HoloLLM is a Multimodal Large Language Model that integrates diverse sensory inputs beyond vision to enhance language-grounded human sensing and reasoning, improving accuracy by up to 30% over existing models.
Why It Matters
This research addresses a key limitation of current Vision-Language Models: their reliance on visual data breaks down under occlusion, poor lighting, or privacy constraints. By incorporating additional sensing modalities, HoloLLM supports robust AI systems that understand human behavior in complex environments, with potential impact on robotics, smart homes, and AI safety.
Key Takeaways
- HoloLLM integrates LiDAR, infrared, mmWave radar, and WiFi for enhanced human perception.
- The model addresses challenges of data scarcity and heterogeneous signal representations.
- A Universal Modality-Injection Projector (UMIP) aligns heterogeneous sensor embeddings with text via coarse-to-fine cross-attention.
- HoloLLM shows up to 30% improvement in language-grounded human sensing accuracy.
- This work lays a foundation for advanced multisensory embodied intelligence.
Computer Science > Computer Vision and Pattern Recognition
arXiv:2505.17645 (cs) [Submitted on 23 May 2025 (v1), last revised 24 Feb 2026 (this version, v2)]
Title: HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Authors: Chuhao Zhou, Jianfei Yang
Abstract: Embodied agents operating in smart homes must understand human behavior through diverse sensory inputs and communicate via natural language. While Vision-Language Models (VLMs) have enabled impressive language-grounded perception, their reliance on visual data limits robustness in real-world scenarios with occlusions, poor lighting, or privacy constraints. In this paper, we introduce HoloLLM, a Multimodal Large Language Model (MLLM) that integrates uncommon but powerful sensing modalities, such as LiDAR, infrared, mmWave radar, and WiFi, to enable seamless human perception and reasoning across heterogeneous environments. We address two key challenges: (1) the scarcity of aligned modality-text data for rare sensors, and (2) the heterogeneity of their physical signal representations. To overcome these, we design a Universal Modality-Injection Projector (UMIP) that enhances pre-aligned modality embeddings with fine-grained, text-aligned features from tailored encoders via coarse-to-fine cross-attention w...
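The coarse-to-fine cross-attention idea behind UMIP can be pictured with a short PyTorch sketch. This is a minimal illustration under assumed shapes and design choices (two cross-attention passes with residual injection, then a linear projection into the LLM's embedding space); it is not the paper's implementation, and all names (ModalityInjectionProjector, to_llm, the token counts) are hypothetical.

```python
# Minimal sketch of a modality-injection projector in the spirit of UMIP.
# Assumption: pre-aligned (text-aligned) modality tokens act as queries that
# are enriched by fine-grained features from a tailored sensor encoder.
import torch
import torch.nn as nn


class ModalityInjectionProjector(nn.Module):
    """Enhances pre-aligned modality embeddings with fine-grained encoder
    features via a coarse then fine cross-attention pass (hypothetical)."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Coarse stage: pre-aligned tokens attend to raw sensor features.
        self.coarse_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Fine stage: a second pass refines the injected detail.
        self.fine_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # Maps fused tokens into the LLM's input embedding space.
        self.to_llm = nn.Linear(dim, dim)

    def forward(self, prealigned: torch.Tensor, fine_feats: torch.Tensor) -> torch.Tensor:
        # prealigned: (B, N, D) coarse, text-aligned modality tokens
        # fine_feats: (B, M, D) fine-grained features from a sensor encoder
        coarse, _ = self.coarse_attn(self.norm1(prealigned), fine_feats, fine_feats)
        x = prealigned + coarse                       # residual injection
        refined, _ = self.fine_attn(self.norm2(x), fine_feats, fine_feats)
        return self.to_llm(x + refined)               # tokens fed to the LLM


# Usage: inject (hypothetical) mmWave radar features into 16 pre-aligned tokens.
proj = ModalityInjectionProjector()
tokens = proj(torch.randn(2, 16, 768), torch.randn(2, 196, 768))
print(tokens.shape)  # torch.Size([2, 16, 768])
```

The residual structure keeps the pre-aligned tokens as the backbone of the representation, so the fine-grained sensor features only add detail rather than replacing the text alignment; this matches the abstract's description of enhancing, not overwriting, pre-aligned embeddings.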