[2603.23867] Can VLMs Reason Robustly? A Neuro-Symbolic Investigation
Computer Science > Machine Learning

arXiv:2603.23867 (cs) [Submitted on 25 Mar 2026]

Title: Can VLMs Reason Robustly? A Neuro-Symbolic Investigation
Authors: Weixin Chen, Antonio Vergari, Han Zhao

Abstract: Vision-Language Models (VLMs) have been applied to a wide range of reasoning tasks, yet it remains unclear whether they can reason robustly under distribution shifts. In this paper, we study covariate shifts in which the perceptual input distribution changes while the underlying prediction rules do not. To investigate this question, we consider visual deductive reasoning tasks, where a model is required to answer a query given an image and logical rules defined over the object concepts in the image. Empirically, we find that VLMs fine-tuned through gradient-based end-to-end training can achieve high in-distribution accuracy but fail to generalize under such shifts, suggesting that fine-tuning does not reliably induce the underlying reasoning function. This motivates a neuro-symbolic perspective that decouples perception from reasoning. However, we further observe that recent neuro-symbolic approaches that rely on black-box components for reasoning can still exhibit inconsistent robustness across tasks. To address this issue, we propose VLC, a neuro-symbolic method that combines VLM-based concept recognition with circuit-based...
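To make the decoupling the abstract describes concrete, here is a minimal Python sketch, not the paper's VLC implementation: a hypothetical VLM-based concept recognizer maps an image to per-object concept probabilities (hard-coded below so the sketch runs), and a fixed deductive rule is then evaluated purely over those symbols, both as a hard decision and as an exact probability in the spirit of the weighted model counting a logical circuit performs. The rule ("every red object is a square") and all identifiers are illustrative assumptions, not taken from the paper.

```python
from typing import Dict

# Hypothetical perception output: a VLM concept recognizer would map an image
# to per-object concept probabilities. Hard-coded here so the sketch runs.
concepts: Dict[str, Dict[str, float]] = {
    "obj1": {"red": 0.95, "square": 0.90},
    "obj2": {"red": 0.10, "square": 0.20},
}

def rule_holds(concepts: Dict[str, Dict[str, float]], thr: float = 0.5) -> bool:
    """Deterministic reasoning over thresholded concepts for the illustrative
    rule 'every red object is a square'. This step never sees pixels, so a
    covariate shift in the images can only degrade perception, never the
    reasoning function itself."""
    return all(
        not (p.get("red", 0.0) > thr and p.get("square", 0.0) <= thr)
        for p in concepts.values()
    )

def rule_probability(concepts: Dict[str, Dict[str, float]]) -> float:
    """Exact probability that the rule holds, treating concept predictions as
    independent Bernoullis -- a toy stand-in for the weighted model counting a
    circuit would carry out over the VLM's soft predictions."""
    p = 1.0
    for probs in concepts.values():
        # An object violates the rule iff it is red and not a square.
        p *= 1.0 - probs.get("red", 0.0) * (1.0 - probs.get("square", 0.0))
    return p

print(rule_holds(concepts))                   # True
print(round(rule_probability(concepts), 3))   # 0.833
```

The point of the sketch is the interface: because the rule is evaluated symbolically and exactly, any robustness gap under covariate shift must come from the perception module, which is precisely the failure mode the abstract attributes to end-to-end fine-tuned VLMs and to neuro-symbolic methods with black-box reasoning components.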