[2602.00462] LatentLens: Revealing Highly Interpretable Visual Tokens in LLMs
Summary
The paper introduces LatentLens, a method for mapping visual tokens to natural language descriptions in Vision-Language Models (VLMs), enhancing interpretability across various models.
Why It Matters
Understanding how visual tokens are processed in LLMs is crucial for improving AI interpretability. LatentLens provides a new approach that reveals the semantic meanings of visual representations, contributing to the alignment of vision and language in AI systems.
Key Takeaways
- LatentLens maps visual tokens to natural language descriptions by finding their nearest neighbors among contextualized text-token representations.
- The method enhances interpretability of visual tokens across multiple VLMs.
- LatentLens outperforms existing methods like LogitLens in revealing token meanings.
- The findings support better alignment between vision and language representations.
- This research opens new avenues for analyzing latent representations in AI.
Abstract
Computer Science > Computer Vision and Pattern Recognition, arXiv:2602.00462 (cs). Submitted on 31 Jan 2026 (v1), last revised 25 Feb 2026 (this version, v3).
Authors: Benno Krojer, Shravan Nayak, Oscar Mañas, Vaibhav Adlakha, Desmond Elliott, Siva Reddy, Marius Mosbach
Transforming a large language model (LLM) into a Vision-Language Model (VLM) can be achieved by mapping the visual tokens from a vision encoder into the embedding space of an LLM. Intriguingly, this mapping can be as simple as a shallow MLP transformation. To understand why LLMs can so readily process visual tokens, we need interpretability methods that reveal what is encoded in the visual token representations at every layer of LLM processing. In this work, we introduce LatentLens, a novel approach for mapping latent representations to descriptions in natural language. LatentLens works by encoding a large text corpus and storing contextualized token representations for each token in that corpus. Visual token representations are then compared to their contextualized textual representations, with the top-k nearest neighbor representations providing descriptions of the visual token. We...
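The nearest-neighbor lookup described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the embeddings, corpus tokens, and function names are all hypothetical stand-ins for a real contextualized token bank built from an LLM.

```python
import numpy as np

def build_token_bank(contextual_embeddings, tokens):
    """Stack contextualized text-token embeddings into a searchable bank.

    contextual_embeddings: list of (d,) arrays, one per corpus token occurrence.
    tokens: the corresponding token strings.
    """
    bank = np.stack(contextual_embeddings)                    # (N, d)
    bank /= np.linalg.norm(bank, axis=1, keepdims=True)       # unit-normalize for cosine similarity
    return bank, list(tokens)

def describe_visual_token(visual_rep, bank, tokens, k=3):
    """Return the k corpus tokens whose contextualized embeddings are
    nearest (by cosine similarity) to the visual token representation."""
    v = visual_rep / np.linalg.norm(visual_rep)
    sims = bank @ v                                           # (N,) cosine similarities
    top = np.argsort(-sims)[:k]
    return [(tokens[i], float(sims[i])) for i in top]

# Toy example with hypothetical 4-d embeddings standing in for LLM hidden states.
rng = np.random.default_rng(0)
corpus_tokens = ["dog", "cat", "tree", "car", "sky"]
embs = [rng.normal(size=4) for _ in corpus_tokens]
bank, toks = build_token_bank(embs, corpus_tokens)

# A visual token representation that happens to lie near the "dog" embedding.
visual_token = embs[0] + 0.05 * rng.normal(size=4)
print(describe_visual_token(visual_token, bank, toks, k=2))
```

In the actual method, the bank would hold contextualized representations from a large text corpus and the query would be a visual token's hidden state at a given LLM layer; the retrieved top-k tokens then serve as its natural-language description.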