[2603.01227] The Lattice Representation Hypothesis of Large Language Models


Computer Science > Artificial Intelligence — arXiv:2603.01227 (cs.AI)
[Submitted on 1 Mar 2026]

Title: The Lattice Representation Hypothesis of Large Language Models
Authors: Bo Xiong

Abstract: We propose the Lattice Representation Hypothesis of large language models: a symbolic backbone that grounds conceptual hierarchies and logical operations in embedding geometry. Our framework unifies the Linear Representation Hypothesis with Formal Concept Analysis (FCA), showing that linear attribute directions with separating thresholds induce a concept lattice via half-space intersections. This geometry enables symbolic reasoning through geometric meet (intersection) and join (union) operations, and admits a canonical form when attribute directions are linearly independent. Experiments on WordNet sub-hierarchies provide empirical evidence that LLM embeddings encode concept lattices and their logical structure, revealing a principled bridge between continuous geometry and symbolic abstraction.

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.01227 [cs.AI] (or arXiv:2603.01227v1 for this version), https://doi.org/10.48550/arXiv.2603.01227 (arXiv-issued DOI via DataCite, pending registration)
Submission history: [v1] Sun, 1 Mar 2026 18:42:59 UTC (806 KB), from Bo Xiong
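The core construction in the abstract — attribute directions with separating thresholds whose half-space intersections form a concept lattice — can be sketched in a few lines. The following is a minimal illustration, not the paper's method: the attribute names, directions, and thresholds below are hypothetical toy values, and meet/join are implemented naively on attribute sets (intersecting regions unions intents; the least common ancestor region keeps only shared intents).

```python
def satisfies(x, w, b):
    """True if embedding x lies in the half-space <w, x> > b."""
    return sum(xi * wi for xi, wi in zip(x, w)) > b

def intent(x, attributes):
    """Attributes whose half-space contains x (the point's intent)."""
    return frozenset(a for a, (w, b) in attributes.items() if satisfies(x, w, b))

def meet(intent_a, intent_b):
    # Geometric meet: intersect the two regions, i.e. union the attribute sets.
    return intent_a | intent_b

def join(intent_a, intent_b):
    # Geometric join: smallest enclosing region, i.e. the shared attributes.
    return intent_a & intent_b

# Toy 2-D attribute directions and thresholds (hypothetical, for illustration).
attributes = {
    "animal": ((1.0, 0.0), 0.0),  # x-coordinate > 0
    "canine": ((1.0, 1.0), 1.0),  # x + y > 1
}

dog = intent((2.0, 0.5), attributes)    # inside both half-spaces
fish = intent((0.5, -0.2), attributes)  # inside "animal" only

print(sorted(meet(dog, fish)))  # ['animal', 'canine']
print(sorted(join(dog, fish)))  # ['animal']
```

In full Formal Concept Analysis the meet's intent would additionally be closed under the derivation operators; this sketch skips that closure step to keep the geometry-to-lattice correspondence visible.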

Originally published on March 03, 2026. Curated by AI News.

