[2512.02902] VLA Models Are More Generalizable Than You Think: Revisiting Physical and Spatial Modeling
Computer Science > Robotics
arXiv:2512.02902 (cs)
[Submitted on 2 Dec 2025 (v1), last revised 31 Mar 2026 (this version, v2)]

Title: VLA Models Are More Generalizable Than You Think: Revisiting Physical and Spatial Modeling
Authors: Weiqi Li, Quande Zhang, Ruifeng Zhai, Liang Lin, Guangrun Wang

Abstract: Vision-language-action (VLA) models achieve strong in-distribution performance but degrade sharply under novel camera viewpoints and visual perturbations. We show that this brittleness primarily arises from misalignment in Spatial Modeling, rather than Physical Modeling. To address this, we propose a one-shot adaptation framework that recalibrates visual representations through lightweight, learnable updates. Our first method, Feature Token Modulation (FTM), applies a global affine transformation to visual tokens and improves Libero viewpoint accuracy from 48.5% to 87.1% with only 4K parameters. Building on this, Feature Linear Adaptation (FLA) introduces low-rank updates to the ViT encoder, achieving 90.8% success with 4.7M parameters -- matching LoRA-scale finetuning at far lower cost. Together, these results reveal substantial untapped robustness in pretrained VLA models and demonstrate that targeted, minimal visual adaptation is sufficient to restore viewpoint generalization.

Subjects: Robotics (cs.RO)
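The abstract describes FTM as a single global affine transformation over visual tokens and FLA as LoRA-style low-rank updates inside the ViT encoder. The sketch below is a minimal PyTorch illustration of those two ideas, not the authors' implementation; the hidden dimension, adapter rank, and attachment points are assumptions chosen only to make the parameter counts plausible (a scale-and-shift pair at width 2048 gives roughly 4K parameters).

```python
# Minimal sketch of FTM- and FLA-style adapters, assuming a ViT-based visual
# backbone with hidden_dim=2048. Names and placement are illustrative only.
import torch
import torch.nn as nn


class FeatureTokenModulation(nn.Module):
    """FTM sketch: one global per-channel scale and shift applied to every
    visual token (~4K parameters at hidden_dim=2048)."""

    def __init__(self, hidden_dim: int = 2048):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(hidden_dim))
        self.shift = nn.Parameter(torch.zeros(hidden_dim))

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (batch, num_tokens, hidden_dim)
        return visual_tokens * self.scale + self.shift


class FeatureLinearAdaptation(nn.Module):
    """FLA sketch: a low-rank (LoRA-style) residual update added to a frozen
    linear projection inside the ViT encoder."""

    def __init__(self, base_linear: nn.Linear, rank: int = 16):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        self.down = nn.Linear(base_linear.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base_linear.out_features, bias=False)
        nn.init.zeros_(self.up.weight)       # adapter starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.down(x))


if __name__ == "__main__":
    tokens = torch.randn(2, 256, 2048)       # hypothetical visual token batch
    ftm = FeatureTokenModulation(hidden_dim=2048)
    print(ftm(tokens).shape)                  # torch.Size([2, 256, 2048])

    proj = nn.Linear(2048, 2048)              # a frozen ViT projection layer
    fla = FeatureLinearAdaptation(proj, rank=16)
    print(fla(tokens).shape)                  # torch.Size([2, 256, 2048])
```

In this reading, only the FTM scale/shift vectors or the FLA down/up matrices are trained during the one-shot adaptation, which is what keeps the update far smaller than full or LoRA-scale finetuning of the backbone.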