[2604.06250] DISSECT: Diagnosing Where Vision Ends and Language Priors Begin in Scientific VLMs


About this article

Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.06250 (cs) [Submitted on 6 Apr 2026]

Title: DISSECT: Diagnosing Where Vision Ends and Language Priors Begin in Scientific VLMs
Authors: Dikshant Kukreja, Kshitij Sah, Karan Goyal, Mukesh Mohania, Vikram Goyal

Abstract: When asked to describe a molecular diagram, a Vision-Language Model correctly identifies "a benzene ring with an -OH group." When asked to reason about the same image, it answers incorrectly. The model can see, but it cannot think about what it sees. We term this the perception-integration gap: a failure mode in which visual information is successfully extracted but lost during downstream reasoning, and which stays invisible to single-configuration benchmarks that conflate perception with integration under one accuracy number. To systematically expose such failures, we introduce DISSECT, a 12,000-question diagnostic benchmark spanning Chemistry (7,000 questions) and Biology (5,000 questions). Every question is evaluated under five input modes -- Vision+Text, Text-Only, Vision-Only, Human Oracle, and a novel Model Oracle in which the VLM first verbalizes the image and then reasons from its own description -- yielding diagnostic gaps that decompose performance into language-prior exploitation, visual extraction, perception fidelity, and integration effectiveness. […]
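
The five-mode protocol lends itself to a simple accounting over per-mode accuracy scores. The sketch below is a minimal illustration under assumptions of my own: it takes one accuracy value per input mode and differences them into gap-style diagnostics. The formulas and the example numbers are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class ModeAccuracies:
    """Accuracy of one model under each DISSECT-style input mode (0.0-1.0)."""
    vision_text: float   # full question: image + text
    text_only: float     # image withheld; answers rely on language priors
    vision_only: float   # question text withheld; answers rely on the image
    human_oracle: float  # reasoning from a human-written image description
    model_oracle: float  # reasoning from the model's own image description


def gap_report(acc: ModeAccuracies) -> dict[str, float]:
    """Illustrative gap decomposition (hypothetical formulas, not the paper's).

    - language_prior: score achievable with no image at all
    - visual_extraction: score achievable from the image alone
    - perception_fidelity_gap: loss when reasoning from the model's own
      description instead of a human-written one
    - integration_gap: loss when the model must combine vision and text
      end to end rather than reason from its own verbalized description
    """
    return {
        "language_prior": acc.text_only,
        "visual_extraction": acc.vision_only,
        "perception_fidelity_gap": acc.human_oracle - acc.model_oracle,
        "integration_gap": acc.model_oracle - acc.vision_text,
    }


if __name__ == "__main__":
    # Made-up numbers for demonstration only.
    scores = ModeAccuracies(
        vision_text=0.58, text_only=0.41, vision_only=0.35,
        human_oracle=0.72, model_oracle=0.66,
    )
    for name, value in gap_report(scores).items():
        print(f"{name:>24}: {value:+.2f}")
```

In this toy reading, a large integration_gap with a small perception_fidelity_gap would correspond to the paper's headline failure mode: the model can verbalize the image well, yet loses that information when reasoning over image and text together.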

Originally published on April 09, 2026. Curated by AI News.

