[2602.13235] Lang2Act: Fine-Grained Visual Reasoning through Self-Emergent Linguistic Toolchains

Summary

The paper introduces Lang2Act, a novel framework for enhancing visual reasoning in Vision-Language Models (VLMs) through self-emergent linguistic toolchains, improving performance by over 4%.

Why It Matters

Lang2Act addresses limitations in existing Visual Retrieval-Augmented Generation frameworks by integrating visual perception and reasoning processes. This innovation can significantly enhance the capabilities of AI systems in understanding and interacting with visual data, which is crucial for applications in robotics, computer vision, and AI-driven interfaces.

Key Takeaways

  • Lang2Act improves VLMs by integrating self-emergent linguistic tools.
  • The framework is trained with a two-stage Reinforcement Learning approach (sketched after this list).
  • Reported performance gains exceed 4% over existing methods.
  • It addresses the loss of visual information in traditional VRAG frameworks.
  • Code and data are publicly available for further research.
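
The summary only names the two-stage Reinforcement Learning setup without describing what each stage optimizes, so the skeleton below is purely illustrative: `rollout`, `policy_gradient_step`, and the two reward functions are hypothetical placeholders standing in for whatever sampling procedure, update rule, and stage objectives Lang2Act actually uses.

```python
# Hypothetical two-stage RL skeleton; the stage objectives, reward shapes, and
# update rule are placeholders, not Lang2Act's actual training recipe.
import random
from typing import Callable, List

Trajectory = List[str]

def rollout() -> Trajectory:
    """Stub: a real rollout would sample a toolchain trajectory
    (linguistic actions plus a final answer) from the VLM policy."""
    return [f"action-{i}" for i in range(random.randint(1, 4))]

def policy_gradient_step(trajectories: List[Trajectory], rewards: List[float]) -> None:
    """Stub: stands in for whichever policy-gradient update the paper uses."""
    pass

def train_stage(num_iters: int, reward_fn: Callable[[Trajectory], float], group_size: int = 8) -> None:
    """One RL stage: sample a group of trajectories, score them, update the policy."""
    for _ in range(num_iters):
        trajectories = [rollout() for _ in range(group_size)]
        rewards = [reward_fn(t) for t in trajectories]
        policy_gradient_step(trajectories, rewards)

def format_reward(traj: Trajectory) -> float:
    """Placeholder stage-1 objective: reward well-formed tool/action outputs."""
    return 1.0 if traj else 0.0

def answer_reward(traj: Trajectory) -> float:
    """Placeholder stage-2 objective: reward trajectories that reach a correct answer."""
    return float(len(traj) <= 3)

train_stage(num_iters=100, reward_fn=format_reward)  # stage 1
train_stage(num_iters=100, reward_fn=answer_reward)  # stage 2
```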

Computer Science > Artificial Intelligence
arXiv:2602.13235 (cs) [Submitted on 29 Jan 2026]

Title: Lang2Act: Fine-Grained Visual Reasoning through Self-Emergent Linguistic Toolchains
Authors: Yuqi Xiong, Chunyi Peng, Zhipeng Xu, Zhenghao Liu, Zulong Chen, Yukun Yan, Shuo Wang, Yu Gu, Ge Yu

Abstract: Visual Retrieval-Augmented Generation (VRAG) enhances Vision-Language Models (VLMs) by incorporating external visual documents to address a given query. Existing VRAG frameworks usually depend on rigid, pre-defined external tools to extend the perceptual capabilities of VLMs, typically by explicitly separating visual perception from subsequent reasoning processes. However, this decoupled design can lead to unnecessary loss of visual information, particularly when image-based operations such as cropping are applied. In this paper, we propose Lang2Act, which enables fine-grained visual perception and reasoning through self-emergent linguistic toolchains. Rather than invoking fixed external engines, Lang2Act collects self-emergent actions as linguistic tools and leverages them to enhance the visual perception capabilities of VLMs. To support this mechanism, we design a two-stage Reinforcement Learning (RL)-based train...
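
To make the contrast with fixed external tools concrete, here is a minimal sketch of the kind of perception-reasoning loop the abstract describes, under the assumption that a "linguistic tool" is a textual action the model emits and then reads back into its own context. Every name in it (`ReasoningState`, `vlm_generate`, `parse_action`, `answer_query`) is hypothetical and not taken from the paper or its released code.

```python
# Illustrative sketch of a linguistic-toolchain loop; the interface is inferred
# from the abstract and is NOT the paper's actual implementation.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReasoningState:
    query: str
    image_refs: List[str]                          # retrieved visual documents, kept intact
    linguistic_actions: List[str] = field(default_factory=list)

def vlm_generate(state: ReasoningState) -> str:
    """Placeholder VLM call: a real system would send the full images plus the
    accumulated linguistic actions to the model and return its next output."""
    if not state.linguistic_actions:
        return "ACTION: describe the table in the lower-left region of page 3"
    return "ANSWER: the table reports the figure asked about in the query"

def parse_action(model_output: str) -> Optional[str]:
    """Treat a line starting with 'ACTION:' as a self-emergent linguistic tool call."""
    if model_output.startswith("ACTION:"):
        return model_output[len("ACTION:"):].strip()
    return None

def answer_query(query: str, image_refs: List[str], max_steps: int = 4) -> ReasoningState:
    """Unlike a fixed crop/OCR pipeline, no step removes pixels from the context:
    each iteration only appends a textual observation, so later reasoning can
    still refer back to the complete visual documents."""
    state = ReasoningState(query=query, image_refs=image_refs)
    for _ in range(max_steps):
        output = vlm_generate(state)
        action = parse_action(output)
        if action is None:                         # the model produced a final answer
            break
        state.linguistic_actions.append(action)
    return state

if __name__ == "__main__":
    print(answer_query("What does the table on page 3 report?", ["doc_page_3.png"]).linguistic_actions)
```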
