[2510.15987] Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models

arXiv - AI · 4 min read

Summary

The paper introduces a framework for tracing and steering the algorithmic primitives that underlie multi-step reasoning in large language models (LLMs), deriving reusable "primitive vectors" from internal activations and validating them on benchmarks including TSP, 3SAT, AIME, and graph navigation.

Why It Matters

Understanding the internal mechanisms of reasoning in LLMs is crucial for building more capable and controllable AI systems. This research shows how algorithmic primitives can be traced, steered, and composed to influence reasoning performance, which matters for interpretability and for applications across AI and machine learning.

Key Takeaways

  • Introduces a framework for tracing and steering algorithmic primitives in LLMs.
  • Demonstrates the effectiveness of these primitives on benchmarks including TSP, 3SAT, AIME, and graph navigation.
  • Highlights the role of compositional geometry in enhancing reasoning capabilities.
  • Shows that reasoning finetuning improves algorithmic generalization across tasks.
  • Suggests that primitive vectors can be combined to create reusable reasoning components.

Computer Science > Machine Learning — arXiv:2510.15987 (cs)
Submitted on 13 Oct 2025 (v1), last revised 16 Feb 2026 (this version, v2)

Title: Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models
Authors: Samuel Lippl, Thomas McGee, Kimberly Lopez, Ziwen Pan, Pierce Zhang, Salma Ziadi, Oliver Eberle, Ida Momennejad

Abstract: How do latent and inference-time computations enable large language models (LLMs) to solve multi-step reasoning? We introduce a framework for tracing and steering algorithmic primitives that underlie model reasoning. Our approach links reasoning traces to internal activations and evaluates algorithmic primitives by injecting them into residual streams and measuring their effect on reasoning steps and task performance. We consider four benchmarks: Traveling Salesperson Problem (TSP), 3SAT, AIME, and graph navigation. We operationalize primitives by clustering activations and annotating their matched reasoning traces using an automated LLM pipeline. We then apply function vector methods to derive primitive vectors as reusable compositional building blocks of reasoning. Primitive vectors can be combined through addition, subtraction, and scalar operations, revealing a geometric logic in activation space. Cross-task and cross-model evaluations (Phi-4, Phi-...
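The abstract describes deriving primitive vectors with function-vector methods and composing them through addition, subtraction, and scaling before injecting them into the residual stream. A minimal toy sketch of that idea is below; the primitive names (`backtrack`, `verify`), dimensions, and the mean-difference derivation are illustrative assumptions, not the paper's actual code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16  # toy hidden size, not the real model's

def primitive_vector(with_prim, without_prim):
    """Function-vector-style sketch: mean activation difference between
    reasoning steps that use a primitive and steps that do not."""
    return with_prim.mean(axis=0) - without_prim.mean(axis=0)

# Toy activation samples for two hypothetical primitives.
backtrack = primitive_vector(rng.normal(1.0, 0.1, (32, d_model)),
                             rng.normal(0.0, 0.1, (32, d_model)))
verify = primitive_vector(rng.normal(-0.5, 0.1, (32, d_model)),
                          rng.normal(0.0, 0.1, (32, d_model)))

# Compose primitives with the vector arithmetic the abstract describes:
# addition, subtraction, and scalar multiplication.
steer = 1.5 * backtrack + verify      # combine two primitives
steer = steer - 0.5 * verify          # partially remove one again

# Inject the composed vector into a residual stream (here, one token's
# hidden state); in a real model this would happen inside a forward hook.
resid = rng.normal(0.0, 1.0, d_model)
steered = resid + steer
```

Because the composition is plain vector arithmetic, interventions remain linear: the effect of a combined primitive is the sum of the effects of its parts, which is what gives the "geometric logic in activation space" its reusability.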

Related Articles

[2604.01989] Attention at Rest Stays at Rest: Breaking Visual Inertia for Cognitive Hallucination Mitigation
arXiv - AI · 4 min
[2603.24326] Boosting Document Parsing Efficiency and Performance with Coarse-to-Fine Visual Processing
arXiv - AI · 4 min
[2603.18545] CoDA: Exploring Chain-of-Distribution Attacks and Post-Hoc Token-Space Repair for Medical Vision-Language Models
arXiv - AI · 4 min
[2509.22367] What Is The Political Content in LLMs' Pre- and Post-Training Data?
arXiv - AI · 4 min