[2604.05090] Multilingual Language Models Encode Script Over Linguistic Structure
Computer Science > Computation and Language

arXiv:2604.05090 (cs)

[Submitted on 6 Apr 2026]

Title: Multilingual Language Models Encode Script Over Linguistic Structure

Authors: Aastha A K Verma, Anwoy Chatterjee, Mehak Gupta, Tanmoy Chakraborty

Abstract: Multilingual language models (LMs) organize representations for typologically and orthographically diverse languages into a shared parameter space, yet the nature of this internal organization remains elusive. In this work, we investigate which linguistic properties, abstract language identity or surface-form cues, shape multilingual representations. Focusing on compact, distilled models, where representational trade-offs are explicit, we analyze language-associated units in Llama-3.2-1B and Gemma-2-2B using the Language Activation Probability Entropy (LAPE) metric, and further decompose activations with Sparse Autoencoders. We find that these units are strongly conditioned on orthography: romanization induces near-disjoint representations that align with neither native-script inputs nor English, while word-order shuffling has limited effect on unit identity. Probing shows that typological structure becomes increasingly accessible in deeper layers, while causal interventions indicate that generation is most sensitive to units that are invariant to surface-form ...
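The LAPE metric named in the abstract can be illustrated with a minimal sketch. The sketch below assumes the common formulation: for each unit, collect its activation probability per language (the fraction of that language's tokens on which the unit fires), normalize these across languages, and take the entropy; low entropy marks a language-specific unit. The exact thresholds and normalization in the paper may differ, so treat this as an assumed, simplified version.

```python
import numpy as np

def lape(act_probs, eps=1e-12):
    """Language Activation Probability Entropy for a single unit.

    act_probs: per-language activation probabilities, i.e. the fraction
    of tokens in each language on which the unit activates (> 0).
    Low entropy means the unit fires for only a few languages
    (language-specific); high entropy means it fires broadly
    (language-agnostic). eps guards against log(0).
    """
    p = np.asarray(act_probs, dtype=float)
    p = p / (p.sum() + eps)               # normalize across languages
    return float(-(p * np.log(p + eps)).sum())

# Toy example with three languages:
specific = lape([0.90, 0.02, 0.01])  # fires almost only for language 0
shared = lape([0.30, 0.31, 0.29])    # fires roughly equally for all
```

Under this formulation, `specific` comes out well below `shared`, so thresholding LAPE from below selects candidate language-associated units.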