[2602.20739] PyVision-RL: Forging Open Agentic Vision Models via RL

arXiv - AI · 3 min read

Summary

The paper introduces PyVision-RL, a reinforcement learning framework designed to enhance agentic multimodal models by preventing interaction collapse and improving tool usage in image and video understanding.

Why It Matters

As AI systems increasingly rely on multimodal capabilities, robust training frameworks like PyVision-RL matter: without safeguards, RL-trained agents tend to abandon tool use and multi-turn reasoning (interaction collapse), undermining the very agentic behavior the training is meant to produce. By addressing this failure mode directly, the work is relevant to both academic research and practical multimodal AI applications.

Key Takeaways

  • PyVision-RL stabilizes training for multimodal models, preventing interaction collapse.
  • The framework employs an oversampling-filtering-ranking strategy to enhance tool usage.
  • It introduces on-demand context construction for efficient video reasoning.
  • Experiments demonstrate improved performance and efficiency in multimodal tasks.
  • Sustained interaction is critical for developing scalable multimodal agents.
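The oversampling-filtering-ranking rollout strategy and the accumulative tool reward described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the `Rollout` structure, the filtering criterion, and the per-call `tool_bonus` form of the reward are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Rollout:
    """A single sampled trajectory (hypothetical structure)."""
    answer_correct: bool
    tool_calls: int
    reward: float

def select_rollouts(rollouts, keep=4, tool_bonus=0.1):
    """Oversample-filter-rank sketch.

    Assumes `rollouts` was oversampled (len(rollouts) >> keep).
    """
    # Filter: drop degenerate trajectories that never invoke a tool,
    # the failure mode behind "interaction collapse".
    candidates = [r for r in rollouts if r.tool_calls > 0]
    if not candidates:  # fall back if every rollout was filtered out
        candidates = rollouts

    # Accumulative tool reward (assumed form): a small bonus per tool
    # call added to the task reward, encouraging multi-turn tool use.
    def score(r):
        return r.reward + tool_bonus * r.tool_calls

    # Rank by score and keep the top-k for the policy update.
    return sorted(candidates, key=score, reverse=True)[:keep]
```

Under this sketch, a correct trajectory with several tool calls outranks an equally correct one that answered without interacting, so the policy gradient keeps reinforcing multi-turn tool use.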

Computer Science > Artificial Intelligence
arXiv:2602.20739 (cs) [Submitted on 24 Feb 2026]

Title: PyVision-RL: Forging Open Agentic Vision Models via RL
Authors: Shitian Zhao, Shaoheng Lin, Ming Li, Haoquan Zhang, Wenshuo Peng, Kaipeng Zhang, Chen Wei

Abstract: Reinforcement learning for agentic multimodal models often suffers from interaction collapse, where models learn to reduce tool usage and multi-turn reasoning, limiting the benefits of agentic behavior. We introduce PyVision-RL, a reinforcement learning framework for open-weight multimodal models that stabilizes training and sustains interaction. Our approach combines an oversampling-filtering-ranking rollout strategy with an accumulative tool reward to prevent collapse and encourage multi-turn tool use. Using a unified training pipeline, we develop PyVision-Image and PyVision-Video for image and video understanding. For video reasoning, PyVision-Video employs on-demand context construction, selectively sampling task-relevant frames during reasoning to significantly reduce visual token usage. Experiments show strong performance and improved efficiency, demonstrating that sustained interaction and on-demand visual processing are critical for scalable multimodal agents.

Subjects: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
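The on-demand context construction mentioned in the abstract can be illustrated with a small sketch: rather than encoding every frame of a video up front, the agent pulls task-relevant frames into context as it reasons, so visual token cost grows only with the frames actually fetched. The class below is a hypothetical illustration of that design; the name, the per-frame token count, and the interface are assumptions, not the paper's API.

```python
class OnDemandVideoContext:
    """Sketch of on-demand context construction (assumed design):
    frames enter the context only when the model requests them
    during reasoning, instead of encoding the full video up front."""

    def __init__(self, num_frames, tokens_per_frame=256):
        self.num_frames = num_frames
        self.tokens_per_frame = tokens_per_frame
        self.fetched = set()  # frame indices actually placed in context

    def fetch(self, indices):
        """Called by the agent mid-reasoning to pull task-relevant frames."""
        for i in indices:
            if 0 <= i < self.num_frames:  # ignore out-of-range requests
                self.fetched.add(i)
        return sorted(self.fetched)

    def visual_tokens(self):
        """Visual tokens consumed so far; grows only with fetched frames."""
        return len(self.fetched) * self.tokens_per_frame
```

For intuition: densely encoding a 1,000-frame video at 256 tokens per frame would cost 256,000 visual tokens, while fetching only 8 task-relevant frames costs 2,048, which is the kind of reduction the abstract's "significantly reduce visual token usage" claim points at.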
