[2602.14589] MATEO: A Multimodal Benchmark for Temporal Reasoning and Planning in LVLMs

arXiv - Machine Learning · 3 min read

Summary

MATEO introduces a benchmark for assessing temporal reasoning in Large Vision Language Models (LVLMs), focusing on multimodal inputs and planning capabilities.

Why It Matters

This research addresses a significant gap in evaluating AI systems' ability to understand and execute complex tasks that involve temporal reasoning. By providing a structured, multimodal benchmark, it strengthens the evaluation of LVLMs, which are increasingly important in real-world applications such as robotics and automated planning.

Key Takeaways

  • MATEO benchmark focuses on temporal reasoning and planning in LVLMs.
  • Introduces a high-quality multimodal recipe corpus for evaluation.
  • Evaluates state-of-the-art LVLMs with varied input structures and fine-tuning strategies.

Computer Science > Artificial Intelligence · arXiv:2602.14589 (cs) · Submitted on 16 Feb 2026

Title: MATEO: A Multimodal Benchmark for Temporal Reasoning and Planning in LVLMs
Authors: Gabriel Roccabruna, Olha Khomyn, Giuseppe Riccardi

Abstract: AI agents need to plan to achieve complex goals that involve orchestrating perception, sub-goal decomposition, and execution. These plans consist of ordered steps structured according to a Temporal Execution Order (TEO), a directed acyclic graph that ensures each step executes only after its preconditions are satisfied. Existing research on foundational models' understanding of temporal execution is limited to automatically derived annotations, approximations of the TEO as a linear chain, or text-only inputs. To address this gap, we introduce MATEO (MultimodAl Temporal Execution Order), a benchmark designed to assess and improve the temporal reasoning abilities of Large Vision Language Models (LVLMs) required for real-world planning. We acquire a high-quality professional multimodal recipe corpus, authored through a standardized editorial process that decomposes instructions into discrete steps, each paired with corresponding images. We collect TEO annotations as graphs by designing and using a scalable crowdsourcing pipeline. Using MATEO, we evaluate six state-of-the-art LVLMs ...
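The TEO described in the abstract is a directed acyclic graph over plan steps, and any valid execution order is a topological order of that graph. A minimal sketch of the idea, using invented recipe steps and dependencies (not taken from the MATEO corpus) and Kahn's algorithm to recover one valid order:

```python
from collections import deque

# Hypothetical recipe steps. Edges point from a prerequisite step
# to the steps that depend on it (i.e., its preconditions flow forward).
edges = {
    "chop onions": ["saute onions"],
    "boil water": ["cook pasta"],
    "saute onions": ["combine"],
    "cook pasta": ["combine"],
    "combine": [],
}

def topological_order(graph):
    """Return one valid execution order of a TEO via Kahn's algorithm."""
    # Count incoming edges (unsatisfied preconditions) per step.
    indegree = {node: 0 for node in graph}
    for successors in graph.values():
        for succ in successors:
            indegree[succ] += 1
    # Steps with no preconditions can execute immediately.
    queue = deque(node for node, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in graph[node]:
            indegree[succ] -= 1
            if indegree[succ] == 0:  # all preconditions now satisfied
                queue.append(succ)
    if len(order) != len(graph):
        raise ValueError("cycle detected: not a valid TEO")
    return order

print(topological_order(edges))
```

Note that a linear chain, as used by the prior work the abstract mentions, is just the special case where this graph has exactly one topological order; a general TEO admits many valid orders (here, "chop onions" and "boil water" may be swapped).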

Related Articles

I can't help rooting for tiny open source AI model maker Arcee | TechCrunch

Arcee is a tiny 26-person U.S. startup that built a high-performing, massive, open source LLM. And it's gaining popularity with OpenClaw ...

TechCrunch - AI · 4 min

We have an AI agent fragmentation problem

Every AI agent works fine on its own — but the moment you try to use more than one, everything falls apart. Different runtimes. Different...

Reddit - Artificial Intelligence · 1 min

Using AI properly

AI is a tool. Period. I spent decades asking forums for help in writing HTML code for my website. I wanted my posts to self-scroll to a p...

Reddit - Artificial Intelligence · 1 min

Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything | WIRED

The AI lab's Project Glasswing will bring together Apple, Google, and more than 45 other organizations. They'll use the new Claude Mythos...

Wired - AI · 7 min
