[2601.10611] Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding

arXiv - AI · 4 min read

Summary

Molmo2 is a new family of open-weight vision-language models that set the state of the art among open models for video understanding and grounding, released alongside new training datasets and a fully disclosed training recipe.

Why It Matters

Today's strongest video-language models are proprietary, and the strongest open-weight alternatives either distill from them or do not disclose their training data and recipe. By releasing weights, datasets, and the full training recipe, Molmo2 gives the open-source community the foundations needed to advance video understanding and grounding, capabilities required by many downstream AI and computer-vision applications.

Key Takeaways

  • Molmo2 offers state-of-the-art performance among open-source video-language models.
  • Introduces 7 new video datasets and 2 multi-image datasets for training and fine-tuning.
  • Demonstrates superior capabilities in point-driven grounding tasks compared to existing models.
  • Uses an efficient, fully disclosed training recipe, in contrast to open models that withhold their data or distill from proprietary VLMs.
  • Outperforms proprietary models on grounding tasks such as pointing and pixel tracking, capabilities that even proprietary models largely lack.

Computer Science > Computer Vision and Pattern Recognition

arXiv:2601.10611 (cs) [Submitted on 15 Jan 2026 (v1), last revised 23 Feb 2026 (this version, v2)]

Title: Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding

Authors: Christopher Clark, Jieyu Zhang, Zixian Ma, Jae Sung Park, Mohammadreza Salehi, Rohun Tripathi, Sangho Lee, Zhongzheng Ren, Chris Dongjoo Kim, Yinuo Yang, Vincent Shao, Yue Yang, Weikai Huang, Ziqi Gao, Taira Anderson, Jianrui Zhang, Jitesh Jain, George Stoica, Winson Han, Ali Farhadi, Ranjay Krishna

Abstract: Today's strongest video-language models (VLMs) remain proprietary. The strongest open-weight models either rely on synthetic data from proprietary VLMs, effectively distilling from them, or do not disclose their training data or recipe. As a result, the open-source community lacks the foundations needed to improve on the state-of-the-art video (and image) language models. Crucially, many downstream applications require more than just high-level video understanding; they require grounding, either by pointing or by tracking in pixels. Even proprietary models lack this capability. We present Molmo2, a new family of VLMs that are state-of-the-art among open-source models and demonstrate exceptional new capabilities in point-driven grounding...
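For context on what "point-driven grounding" looks like in practice: the original Molmo release answers prompts like "Point to the dog" with XML-style point tags whose x/y coordinates are percentages of the image size. Below is a minimal Python sketch that converts such output into pixel coordinates. It assumes Molmo2 keeps Molmo's point markup, which the excerpt above does not confirm.

import re

# Hypothetical sketch: convert Molmo-style point markup to pixel coordinates.
# Molmo 1 emits grounding answers as XML-like tags with x/y given as
# percentages of image width/height, e.g.:
#   <point x="61.5" y="40.2" alt="dog">dog</point>
# Whether Molmo2 keeps this exact format is an assumption.
POINT_TAG = re.compile(r'<point\s+x="([\d.]+)"\s+y="([\d.]+)"')

def parse_points(text, width, height):
    """Return (x_px, y_px) tuples for every point tag in model output."""
    return [(float(x) / 100.0 * width, float(y) / 100.0 * height)
            for x, y in POINT_TAG.findall(text)]

# Example on a 1280x720 video frame:
print(parse_points('<point x="61.5" y="40.2" alt="dog">dog</point>', 1280, 720))
# [(787.2, 289.44)]

For video, the same conversion would presumably apply per frame, with tracking then linking points across frames.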

Related Articles

Llms

[R] GPT-5.4-mini regressed 22pp on vanilla prompting vs GPT-5-mini. Nobody noticed because benchmarks don't test this. Recursive Language Models solved it.

GPT-5.4-mini produces shorter, terser outputs by default. Vanilla accuracy dropped from 69.5% to 47.2% across 12 tasks (1,800 evals). The...

Reddit - Machine Learning · 1 min
Llms

Built an open-source CLI that auto-generates AI setup files for your projects; just hit 150 stars

hey everyone, been working on this side project called ai-setup and just hit a milestone i wanted to share 150 github stars, 90 PRs merge...

Reddit - Artificial Intelligence · 1 min
Llms

Built an open-source tool that auto-generates AI context files for any codebase, 150 stars in...

one of the most tedious parts of working with AI coding tools is having to manually write context files every single time. CLAUDE.md, .cu...

Reddit - Artificial Intelligence · 1 min
Llms

Find out what’s new in the Gemini app in March's Gemini Drop.

Gemini Drops is our regular monthly update on how to get the most out of the Gemini app.

AI Tools & Products · 1 min