[2603.02258] Universal Conceptual Structure in Neural Translation: Probing NLLB-200's Multilingual Geometry
Computer Science > Computation and Language
arXiv:2603.02258 (cs)
[Submitted on 27 Feb 2026]

Title: Universal Conceptual Structure in Neural Translation: Probing NLLB-200's Multilingual Geometry
Authors: Kyle Elliott Mathewson

Abstract: Do neural machine translation models learn language-universal conceptual representations, or do they merely cluster languages by surface similarity? We investigate this question by probing the representation geometry of Meta's NLLB-200, a 200-language encoder-decoder Transformer, through six experiments that bridge NLP interpretability with cognitive science theories of multilingual lexical organization. Using the Swadesh core vocabulary list embedded across 135 languages, we find that the model's embedding distances correlate significantly with phylogenetic distances from the Automated Similarity Judgment Program ($\rho = 0.13$, $p = 0.020$), demonstrating that NLLB-200 has implicitly learned the genealogical structure of human languages. We show that frequently colexified concept pairs from the CLICS database exhibit significantly higher embedding similarity than non-colexified pairs ($U = 42656$, $p = 1.33 \times 10^{-11}$, $d = 0.96$), indicating that the model has internalized universal conceptual associations. Per-language mean-centering of embeddings improves...
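The abstract's two headline analyses are conceptually simple: per-language mean-centering of concept embeddings, a Spearman rank correlation between model distances and phylogenetic distances, and a Mann-Whitney U test (with Cohen's d) comparing colexified versus non-colexified concept pairs. The Python sketch below shows the shape of such a probe under loud assumptions: the NLLB-200 Swadesh embeddings, ASJP distances, and CLICS pairs are all replaced by fabricated toy data, so the printed numbers are meaningless; only the statistical machinery mirrors what the abstract describes.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(0)

# Toy stand-in: embeddings[lang] is an (n_concepts, dim) matrix of Swadesh-concept
# vectors; in the paper these come from the NLLB-200 encoder for 135 languages.
n_langs, n_concepts, dim = 6, 40, 32
embeddings = {f"lang{i}": rng.normal(size=(n_concepts, dim)) for i in range(n_langs)}

# Per-language mean-centering: subtract each language's mean vector so that
# language-identity components are removed before concepts are compared.
centered = {lang: E - E.mean(axis=0, keepdims=True) for lang, E in embeddings.items()}

def lang_distance(A, B):
    """Mean cosine distance between matched concept rows of two languages."""
    A_n = A / np.linalg.norm(A, axis=1, keepdims=True)
    B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
    return 1.0 - float(np.mean(np.sum(A_n * B_n, axis=1)))

langs = sorted(centered)
model_dist = np.array([lang_distance(centered[a], centered[b])
                       for i, a in enumerate(langs) for b in langs[i + 1:]])

# Hypothetical phylogenetic distances for the same language pairs, in the same
# condensed order; the paper uses ASJP distances here.
asjp_dist = rng.uniform(size=model_dist.shape)

rho, p = spearmanr(model_dist, asjp_dist)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

def mean_pair_similarity(pairs):
    """Cosine similarity of each concept-index pair, averaged over languages."""
    sims = []
    for i, j in pairs:
        vals = [np.dot(E[i], E[j]) / (np.linalg.norm(E[i]) * np.linalg.norm(E[j]))
                for E in centered.values()]
        sims.append(np.mean(vals))
    return np.array(sims)

# Hypothetical concept-index pairs; the paper draws colexified pairs from CLICS.
colexified = [(0, 1), (2, 3), (4, 5), (6, 7)]
non_colexified = [(0, 10), (2, 20), (4, 30), (6, 35)]

s_col = mean_pair_similarity(colexified)
s_non = mean_pair_similarity(non_colexified)

# One-sided Mann-Whitney U: are colexified pairs more similar in embedding space?
U, p_u = mannwhitneyu(s_col, s_non, alternative="greater")

# Cohen's d with pooled standard deviation (equal-sized groups).
pooled = np.sqrt((s_col.var(ddof=1) + s_non.var(ddof=1)) / 2)
d_eff = (s_col.mean() - s_non.mean()) / pooled
print(f"U = {U:.0f}, p = {p_u:.3g}, d = {d_eff:.2f}")
```

On real data, the `embeddings` dictionary would be filled from the NLLB-200 encoder (one vector per Swadesh concept per language) and `asjp_dist` from published ASJP pairwise distances; the resulting test statistics, not the toy values above, are what the abstract reports.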