[2602.22897] OmniGAIA: Towards Native Omni-Modal AI Agents
Summary
The paper introduces OmniGAIA, a benchmark for evaluating omni-modal AI agents that must reason across vision, audio, and language and execute multi-turn tool calls to solve complex tasks, together with OmniAtlas, a native omni-modal foundation agent trained for this setting.
Why It Matters
Current multi-modal LLMs remain largely confined to bi-modal interactions such as vision-language, while general-purpose assistance also requires audio and video understanding combined with tool use. OmniGAIA targets exactly this gap, providing a concrete yardstick for agents that interact with the real world across all of these modalities.
Key Takeaways
- OmniGAIA benchmarks omni-modal agents on complex reasoning tasks.
- The benchmark is built with a novel omni-modal event graph for task synthesis (a minimal sketch of the idea follows this list).
- OmniAtlas, a proposed native omni-modal foundation agent, pairs tool-integrated reasoning with active omni-modal perception.
- The research aims to bridge the gap between bi-modal and omni-modal AI interactions.
- This work is a step towards developing next-generation AI assistants.
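The summary does not expose the authors' event-graph implementation, so the sketch below is only an illustration of how graph-based multi-hop task synthesis could look under simple assumptions: events are nodes grounded in individual media assets, typed edges link events within and across modalities, and a query is composed from a random walk over the graph. The names Event, EventGraph, and synthesize_query are hypothetical, not the paper's API.

```python
from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class Event:
    """A node in the omni-modal event graph: one fact grounded in a media asset."""
    event_id: str
    modality: str      # e.g. "video", "audio", or "image"
    description: str   # natural-language statement of the fact
    source: str        # pointer to the underlying media asset

@dataclass
class EventGraph:
    """Events plus typed edges (e.g. temporal, causal, shared-entity) linking them."""
    events: dict = field(default_factory=dict)
    edges: dict = field(default_factory=dict)   # event_id -> list of (relation, event_id)

    def add_event(self, e: Event) -> None:
        self.events[e.event_id] = e
        self.edges.setdefault(e.event_id, [])

    def add_edge(self, src: str, relation: str, dst: str) -> None:
        self.edges[src].append((relation, dst))

    def sample_multihop_chain(self, hops: int) -> List[Event]:
        """Random walk over edges to collect a chain of linked events across modalities."""
        current = random.choice(list(self.events))
        chain = [self.events[current]]
        for _ in range(hops):
            neighbors = self.edges.get(current, [])
            if not neighbors:
                break
            _, current = random.choice(neighbors)
            chain.append(self.events[current])
        return chain

def synthesize_query(chain: List[Event]) -> str:
    """Compose a multi-hop question whose answer requires resolving every event in the chain."""
    clues = "; ".join(f"[{e.modality}] {e.description}" for e in chain[:-1])
    return f"Given the clues ({clues}), answer a question about: {chain[-1].description}"
```

Each sampled chain forces an answerer to resolve clues drawn from several modalities before reaching the final event, which mirrors the cross-modal, multi-hop character the benchmark aims for; the actual construction in the paper may differ substantially.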
Paper Details
arXiv:2602.22897 (cs.AI), submitted on 26 Feb 2026
Title: OmniGAIA: Towards Native Omni-Modal AI Agents
Authors: Xiaoxi Li, Wenxiang Jiao, Jiarui Jin, Shijian Wang, Guanting Dong, Jiajie Jin, Hao Wang, Yinuo Wang, Ji-Rong Wen, Yuan Lu, Zhicheng Dou
Abstract: Human intelligence naturally intertwines omni-modal perception -- spanning vision, audio, and language -- with complex reasoning and tool usage to interact with the world. However, current multi-modal LLMs are primarily confined to bi-modal interactions (e.g., vision-language), lacking the unified cognitive capabilities required for general AI assistants. To bridge this gap, we introduce OmniGAIA, a comprehensive benchmark designed to evaluate omni-modal agents on tasks necessitating deep reasoning and multi-turn tool execution across video, audio, and image modalities. Constructed via a novel omni-modal event graph approach, OmniGAIA synthesizes complex, multi-hop queries derived from real-world data that require cross-modal reasoning and external tool integration. Furthermore, we propose OmniAtlas, a native omni-modal foundation agent under a tool-integrated reasoning paradigm with active omni-modal perception. Trained on trajectories synthesized via a hindsight-guided tree exploration strategy and OmniDPO for fine-grained error correction, OmniAtlas effectively ...
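The abstract frames OmniAtlas as a tool-integrated reasoning agent with active omni-modal perception, meaning the model interleaves its reasoning with multi-turn calls to perception and retrieval tools. The paper's actual interface is not reproduced here; the loop below is only a minimal sketch under assumed names, where the tool registry, the message schema, and the model.generate call are placeholders for illustration.

```python
import json

# Hypothetical tool registry: each tool takes a dict of arguments and returns a text
# observation. The tool names here are illustrative, not the benchmark's tool set.
TOOLS = {
    "video_frame_caption": lambda args: f"caption of {args['video']} at t={args['t']}s",
    "audio_transcribe":    lambda args: f"transcript of {args['audio']}",
    "web_search":          lambda args: f"search results for '{args['query']}'",
}

def run_agent(model, question: str, max_turns: int = 8) -> str:
    """Minimal tool-integrated reasoning loop: the model alternates between emitting
    tool calls (active perception / retrieval) and, eventually, a final answer.
    `model.generate` is assumed to return a dict like {"action": ..., "arguments": ...}."""
    history = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        step = model.generate(history)
        if step["action"] == "final_answer":
            return step["content"]
        tool = TOOLS.get(step["action"])
        observation = tool(step["arguments"]) if tool else f"unknown tool {step['action']}"
        # Record both the model's tool call and the tool's observation for the next turn.
        history.append({"role": "assistant", "content": json.dumps(step)})
        history.append({"role": "tool", "content": observation})
    return "no answer within turn budget"
```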