[2603.03313] How does fine-tuning improve sensorimotor representations in large language models?
Computer Science > Computation and Language
arXiv:2603.03313 (cs) [Submitted on 9 Feb 2026]

Title: How does fine-tuning improve sensorimotor representations in large language models?
Authors: Minghua Wu, Javier Conde, Pedro Reviriego, Marc Brysbaert

Abstract: Large Language Models (LLMs) exhibit a significant "embodiment gap": their text-based representations fail to align with human sensorimotor experiences. This study systematically investigates whether and how task-specific fine-tuning can bridge this gap. Using Representational Similarity Analysis (RSA) and dimension-specific correlation metrics, we demonstrate that the internal representations of LLMs can be steered toward more embodied, grounded patterns through fine-tuning. Furthermore, the results show that while sensorimotor improvements generalize robustly across languages and related sensory-motor dimensions, they are highly sensitive to the learning objective, failing to transfer between two disparate task formats.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.03313 [cs.CL] (or arXiv:2603.03313v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.03313

Submission history
From: Minghua Wu
[v1] Mon, 9 Feb ...
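The abstract's core metric, Representational Similarity Analysis (RSA), compares the pairwise dissimilarity structure of model representations with that of human sensorimotor ratings. The sketch below illustrates the general RSA recipe, not the paper's exact pipeline: the distance metrics (cosine for embeddings, Euclidean for ratings), the use of Spearman correlation, and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_embeddings: np.ndarray, human_ratings: np.ndarray) -> float:
    """Correlate the representational dissimilarity structure of model
    embeddings with that of human sensorimotor ratings for the same words.

    Both inputs are (n_words, n_features) arrays over the same word list.
    Returns a Spearman rho in [-1, 1]; higher means more human-aligned.
    """
    # Condensed pairwise dissimilarity vectors (upper triangle of each RDM).
    # Metric choices here are illustrative, not taken from the paper.
    model_rdm = pdist(model_embeddings, metric="cosine")
    human_rdm = pdist(human_ratings, metric="euclidean")
    # Rank correlation between the two dissimilarity structures.
    rho, _ = spearmanr(model_rdm, human_rdm)
    return float(rho)

# Toy example with synthetic data standing in for real embeddings and
# Lancaster-style sensorimotor norms (hypothetical shapes).
rng = np.random.default_rng(0)
emb = rng.normal(size=(20, 8))       # 20 words x 8-dim model embeddings
ratings = rng.normal(size=(20, 6))   # 20 words x 6 sensorimotor dimensions
print(round(rsa_score(emb, ratings), 3))
```

In a real evaluation, `ratings` would come from published human sensorimotor norms and `emb` from the LLM's hidden states for the same words, with the score computed before and after fine-tuning to measure any shift toward grounded structure.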