[2604.05217] On the Geometry of Positional Encodings in Transformers
Computer Science > Machine Learning
arXiv:2604.05217 (cs) [Submitted on 6 Apr 2026]

Title: On the Geometry of Positional Encodings in Transformers
Authors: Giansalvo Cirrincione

Abstract: Neural language models process sequences of words, but the mathematical operations inside them are insensitive to the order in which words appear. Positional encodings are the component added to remedy this. Despite their importance, positional encodings have been designed largely by trial and error, without a mathematical theory of what they ought to do. This paper develops such a theory. Four results are established. First, a Transformer without a positional signal cannot solve any task sensitive to word order (Necessity Theorem). Second, under mild and verifiable conditions, training assigns distinct vector representations to distinct sequence positions at every global minimiser (Positional Separation Theorem). Third, the best achievable approximation to an information-optimal encoding is constructed via classical multidimensional scaling (MDS) on the Hellinger distance between positional distributions; the quality of any encoding is measured by a single number, the stress (Proposition 5, Algorithm 1). Fourth, the optimal encoding has effective rank r = rank(B) <= n-1 and can be represented with r(n+d) parameters instead of nd (minimal parametrisa...
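The MDS construction in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's Algorithm 1: it assumes each position is summarised by a probability distribution (rows of `P`), computes pairwise Hellinger distances, embeds them with classical MDS via the double-centred Gram matrix `B` (whose rank bounds the effective rank `r <= n-1` mentioned in the abstract), and scores the embedding with a normalised stress. All function names and the shape of `P` are illustrative assumptions.

```python
import numpy as np

def hellinger_distance_matrix(P):
    """Pairwise Hellinger distances between rows of P, where each row is
    a probability distribution over the same support.
    Hellinger(p, q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2."""
    S = np.sqrt(P)                             # element-wise square roots
    diff = S[:, None, :] - S[None, :, :]       # all pairwise differences
    return np.linalg.norm(diff, axis=-1) / np.sqrt(2)

def classical_mds(D, d):
    """Classical MDS: embed an n x n distance matrix D into R^d.
    Returns the coordinates X (n x d) and the double-centred Gram
    matrix B, whose rank is the effective rank of the embedding."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                   # eigendecomposition of B
    idx = np.argsort(w)[::-1][:d]              # top-d eigenpairs
    L = np.sqrt(np.clip(w[idx], 0.0, None))    # drop tiny negative eigenvalues
    X = V[:, idx] * L                          # n x d coordinates
    return X, B

def stress(D, X):
    """Normalised stress: mismatch between target distances D and the
    pairwise Euclidean distances of the embedding X (0 = perfect)."""
    Dhat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sqrt(np.sum((D - Dhat) ** 2) / np.sum(D ** 2))
```

Because the Hellinger distance is a Euclidean distance between the square-root vectors, classical MDS with `d = n-1` can reproduce it exactly, so the stress drops to (numerically) zero; lower-dimensional encodings trade parameters for stress, which is the single quality number the abstract refers to.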