[2603.19460] GeoLAN: Geometric Learning of Latent Explanatory Directions in Large Language Models
Computer Science > Machine Learning
arXiv:2603.19460 (cs) [Submitted on 19 Mar 2026]

Title: GeoLAN: Geometric Learning of Latent Explanatory Directions in Large Language Models
Authors: Tianyu Bell Pan, Damon L. Woodard

Abstract: Large language models (LLMs) demonstrate strong performance but often lack transparency. We introduce GeoLAN, a training framework that treats token representations as geometric trajectories and applies stickiness conditions inspired by recent developments around the Kakeya Conjecture. We develop two differentiable regularizers, Katz-Tao Convex Wolff (KT-CW) and Katz-Tao Attention (KT-Attn), that promote representational isotropy and encourage diverse attention patterns. Experiments with Gemma-3 (1B, 4B, 12B) and Llama-3-8B show that GeoLAN frequently maintains task accuracy while improving geometric metrics and reducing certain fairness biases, with the largest gains in mid-sized models. Our findings reveal scale-dependent trade-offs between geometric precision and performance, suggesting that geometry-aware training is a promising route to mechanistic interpretability.

Subjects: Machine Learning (cs.LG); Computational Geometry (cs.CG)
Cite as: arXiv:2603.19460 [cs.LG] (or arXiv:2603.19460v1 [cs.LG] for this version)
https://doi.org/10.48550/ar...
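The abstract does not give the exact form of KT-CW or KT-Attn, so the sketch below is only a generic stand-in for the two kinds of regularizer it describes: an isotropy penalty that pushes the covariance of token representations toward a scaled identity, and an attention-diversity penalty based on per-row attention entropy. All function names and formulations here are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def isotropy_penalty(H):
    """Illustrative isotropy regularizer (not the paper's KT-CW):
    penalize the Frobenius distance between the trace-normalized
    covariance of token representations H (n_tokens x d) and the
    isotropic target I/d."""
    Hc = H - H.mean(axis=0, keepdims=True)      # center the tokens
    C = Hc.T @ Hc / max(len(H) - 1, 1)          # d x d sample covariance
    C = C / (np.trace(C) + 1e-8)                # scale-invariant normalization
    I = np.eye(C.shape[0]) / C.shape[0]         # isotropic target spectrum
    return float(np.sum((C - I) ** 2))          # 0 iff perfectly isotropic

def attention_diversity_penalty(A, eps=1e-8):
    """Illustrative attention-diversity regularizer (not the paper's
    KT-Attn): penalize the entropy gap of each row of an attention
    matrix A (n_queries x n_keys, rows summing to 1)."""
    ent = -np.sum(A * np.log(A + eps), axis=1)  # per-query entropy
    max_ent = np.log(A.shape[1])                # entropy of a uniform row
    return float(np.mean(max_ent - ent))        # 0 when attention is uniform
```

Used as auxiliary losses, both terms would be added (with small weights) to the task loss; uniform attention drives the second penalty to zero, while one-hot attention maximizes it.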