[2602.14589] MATEO: A Multimodal Benchmark for Temporal Reasoning and Planning in LVLMs
Summary
MATEO introduces a benchmark for assessing temporal reasoning in Large Vision Language Models (LVLMs), focusing on multimodal inputs and planning capabilities.
Why It Matters
This research addresses a significant gap in AI's ability to understand and execute complex tasks involving temporal reasoning. By providing a structured benchmark, it enhances the evaluation of LVLMs, which are increasingly vital in real-world applications like robotics and automated planning.
Key Takeaways
- MATEO benchmark focuses on temporal reasoning and planning in LVLMs.
- Introduces a high-quality multimodal recipe corpus for evaluation.
- Evaluates state-of-the-art LVLMs with varied input structures and fine-tuning strategies.
Computer Science > Artificial Intelligence
arXiv:2602.14589 (cs) [Submitted on 16 Feb 2026]
Title: MATEO: A Multimodal Benchmark for Temporal Reasoning and Planning in LVLMs
Authors: Gabriel Roccabruna, Olha Khomyn, Giuseppe Riccardi
Abstract: AI agents need to plan to achieve complex goals that involve orchestrating perception, sub-goal decomposition, and execution. These plans consist of ordered steps structured according to a Temporal Execution Order (TEO), a directed acyclic graph that ensures each step executes only after its preconditions are satisfied. Existing research on foundational models' understanding of temporal execution is limited to automatically derived annotations, approximations of the TEO as a linear chain, or text-only inputs. To address this gap, we introduce MATEO (MultimodAl Temporal Execution Order), a benchmark designed to assess and improve the temporal reasoning abilities of Large Vision Language Models (LVLMs) required for real-world planning. We acquire a high-quality professional multimodal recipe corpus, authored through a standardized editorial process that decomposes instructions into discrete steps, each paired with corresponding images. We collect TEO annotations as graphs by designing and using a scalable crowdsourcing pipeline. Using MATEO, we evaluate six state-of-the-art...
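The abstract's notion of a TEO, a directed acyclic graph in which a step may run only after its preconditions finish, can be sketched with a small example. This is a minimal illustration using Python's standard-library `graphlib`; the recipe steps and dependency edges are hypothetical, not drawn from the MATEO corpus or the paper's annotation format.

```python
# Sketch of a Temporal Execution Order (TEO) as a DAG, with a
# topological sort yielding one valid linear execution order.
# Steps and edges below are illustrative, not from MATEO.
from graphlib import TopologicalSorter

# Map each step to the set of precondition steps that must finish first.
teo = {
    "boil water": set(),
    "chop vegetables": set(),
    "add pasta": {"boil water"},
    "saute vegetables": {"chop vegetables"},
    "combine": {"add pasta", "saute vegetables"},
}

# static_order() emits steps so that every precondition precedes its step.
order = list(TopologicalSorter(teo).static_order())

# Verify the TEO constraint: each step appears after all its preconditions.
for step, preconditions in teo.items():
    assert all(order.index(p) < order.index(step) for p in preconditions)
print(order)
```

A DAG generally admits many valid orders (here, "boil water" and "chop vegetables" may be swapped), which is why the paper contrasts graph-structured TEOs with linear-chain approximations: a single chain collapses this flexibility into one arbitrary sequence.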