[2603.01227] The Lattice Representation Hypothesis of Large Language Models
Computer Science > Artificial Intelligence

arXiv:2603.01227 (cs) [Submitted on 1 Mar 2026]

Title: The Lattice Representation Hypothesis of Large Language Models
Authors: Bo Xiong

Abstract: We propose the Lattice Representation Hypothesis of large language models: a symbolic backbone that grounds conceptual hierarchies and logical operations in embedding geometry. Our framework unifies the Linear Representation Hypothesis with Formal Concept Analysis (FCA), showing that linear attribute directions with separating thresholds induce a concept lattice via half-space intersections. This geometry enables symbolic reasoning through geometric meet (intersection) and join (union) operations, and admits a canonical form when the attribute directions are linearly independent. Experiments on WordNet sub-hierarchies provide empirical evidence that LLM embeddings encode concept lattices and their logical structure, revealing a principled bridge between continuous geometry and symbolic abstraction.

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.01227 [cs.AI] (or arXiv:2603.01227v1 [cs.AI] for this version), https://doi.org/10.48550/arXiv.2603.01227 (arXiv-issued DOI via DataCite, pending registration)

Submission history
From: Bo Xiong [view email]
[v1] Sun, 1 Mar 2026 18:42:59 UTC (806 KB)
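The half-space construction the abstract describes can be sketched in a few lines: each attribute is modeled as a linear direction with a separating threshold, a concept's extent is the intersection of the corresponding half-spaces, and the geometric meet intersects half-spaces (i.e., unions the intents). This is a minimal illustration under assumed toy data; the vectors, attribute names, and helper functions below are hypothetical and not taken from the paper.

```python
# Sketch of the half-space view of concepts (illustrative, not the
# paper's implementation): an attribute is a pair (w, b), and a point x
# satisfies it iff <w, x> >= b.

def has_attribute(x, w, b):
    """Point x lies in the half-space {x : <w, x> >= b}."""
    return sum(wi * xi for wi, xi in zip(w, x)) >= b

def extent(points, attributes, intent):
    """Extent of a concept: all points satisfying every attribute in the intent."""
    return {name for name, x in points.items()
            if all(has_attribute(x, *attributes[a]) for a in intent)}

# Hypothetical 2-D "embeddings" and attribute directions with thresholds.
points = {"cat": (0.9, 0.8), "dog": (0.8, 0.2), "rock": (0.1, 0.1)}
attributes = {"animate": ((1.0, 0.0), 0.5), "furry": ((0.0, 1.0), 0.5)}

# Geometric meet: intersect half-spaces (union of intents) -> {"cat"}
meet_extent = extent(points, attributes, {"animate", "furry"})
# Join moves up the lattice: the empty intent yields the top concept,
# whose extent is every point -> {"cat", "dog", "rock"}
join_extent = extent(points, attributes, set())
```

The lattice order then falls out of set inclusion of extents: adding attributes to an intent can only shrink the extent, which is the standard Galois connection of Formal Concept Analysis.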