[2602.13294] VisPhyWorld: Probing Physical Reasoning via Code-Driven Video Reconstruction

Summary

The paper introduces VisPhyWorld, a framework for evaluating physical reasoning in Multimodal Large Language Models (MLLMs) through code-driven video reconstruction, showing both what current models do well (semantic understanding) and where they fall short (inferring accurate physical parameters).

Why It Matters

Understanding how MLLMs reason about physical dynamics is crucial for advancing AI capabilities in real-world applications. The VisPhyWorld framework provides a novel approach to evaluate these models beyond traditional benchmarks, offering insights into their reasoning processes and potential improvements.

Key Takeaways

  • VisPhyWorld evaluates MLLMs by generating executable simulator code from visual observations.
  • The framework allows for direct inspection and falsifiability of inferred world representations.
  • VisPhyBench includes 209 evaluation scenes to assess models' physical reasoning capabilities.
  • Current MLLMs excel in semantic understanding but struggle with accurate physical parameter inference.
  • The proposed method achieves a 97.7% success rate in producing valid reconstructed videos.

Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.13294 (cs) · Submitted on 9 Feb 2026

Title: VisPhyWorld: Probing Physical Reasoning via Code-Driven Video Reconstruction

Authors: Jiarong Liang, Max Ku, Ka-Hei Hui, Ping Nie, Wenhu Chen

Abstract: Evaluating whether Multimodal Large Language Models (MLLMs) genuinely reason about physical dynamics remains challenging. Most existing benchmarks rely on recognition-style protocols such as Visual Question Answering (VQA) and Violation of Expectation (VoE), which can often be answered without committing to an explicit, testable physical hypothesis. We propose VisPhyWorld, an execution-based framework that evaluates physical reasoning by requiring models to generate executable simulator code from visual observations. By producing runnable code, the inferred world representation is directly inspectable, editable, and falsifiable, which separates physical reasoning from rendering. Building on this framework, we introduce VisPhyBench, comprising 209 evaluation scenes derived from 108 physical templates and a systematic protocol that evaluates how well models reconstruct appearance and reproduce physically plausible motion. Our pipeline produces valid reconstructed videos in 97.7% of cases on the benchmark. Experiments show that while state-of-the-art MLLMs...
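To make the execution-based idea concrete, here is a minimal sketch of the evaluation loop the abstract describes: a model emits runnable simulator code, the code is executed to produce a reconstruction, and the result is scored against the observed dynamics. All names here (`simulate_fall`, `model_code`, the `params` dict) are illustrative assumptions, not the paper's actual pipeline, which generates full simulator code from video frames.

```python
# Illustrative sketch of execution-based physical-reasoning evaluation,
# in the spirit of VisPhyWorld. Names and scoring are hypothetical.

def simulate_fall(g: float, steps: int = 30, dt: float = 0.1) -> list[float]:
    """Simulate the height of an object dropped from 10 m under gravity g."""
    y, v = 10.0, 0.0
    trajectory = []
    for _ in range(steps):
        trajectory.append(max(y, 0.0))
        v -= g * dt
        y += v * dt
    return trajectory

# Ground-truth dynamics: what the source video "shows".
reference = simulate_fall(g=9.81)

# Hypothetical simulator code emitted by a model. Because it is runnable
# code, the inferred parameter (here g) is directly inspectable, editable,
# and falsifiable -- the key property the framework relies on.
model_code = "params = {'g': 8.0}"
namespace: dict = {}
exec(model_code, namespace)

reconstruction = simulate_fall(g=namespace["params"]["g"])

# Score physical plausibility as mean absolute trajectory error.
error = sum(abs(a - b) for a, b in zip(reference, reconstruction)) / len(reference)
print(f"inferred g = {namespace['params']['g']}, trajectory MAE = {error:.3f} m")
```

A nonzero trajectory error directly falsifies the model's inferred gravity, something recognition-style protocols like VQA cannot do, since a correct multiple-choice answer never commits the model to a specific physical hypothesis.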
