[2510.15987] Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models
Summary
The paper introduces a framework for tracing and steering the algorithmic primitives that underlie reasoning in large language models (LLMs): reasoning traces are linked to internal activations, and primitive vectors derived from those activations are injected into residual streams to measure their effect on reasoning steps and task performance across four benchmarks.
Why It Matters
Understanding the internal mechanisms of reasoning in LLMs is crucial for developing more effective AI systems. This research shows how algorithmic primitives can be traced, steered, and recombined to improve reasoning performance, which matters both for interpretability and for controlling model behavior in applied settings.
Key Takeaways
- Introduces a framework for tracing and steering algorithmic primitives in LLMs.
- Demonstrates the effectiveness of these primitives on four benchmarks: the Traveling Salesperson Problem (TSP), 3SAT, AIME, and graph navigation.
- Highlights the role of compositional geometry in enhancing reasoning capabilities.
- Shows that reasoning finetuning improves algorithmic generalization across tasks.
- Suggests that primitive vectors can be combined to create reusable reasoning components.
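The idea of combining primitive vectors through addition, subtraction, and scalar operations can be sketched as follows. This is a hedged, minimal illustration: `primitive_vector`, the toy residual-stream width, and the synthetic activations are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: a "primitive vector" as the mean residual-stream
# activation over prompts exercising one primitive, relative to a baseline.
# Dimensions and data are illustrative toys.

def primitive_vector(activations, baseline):
    """Mean difference between primitive-eliciting and baseline activations."""
    return activations.mean(axis=0) - baseline.mean(axis=0)

rng = np.random.default_rng(0)
d_model = 16                                   # toy residual-stream width
base = rng.normal(size=(8, d_model))           # baseline activations
acts_a = base + np.eye(d_model)[0]             # primitive A shifts direction 0
acts_b = base + np.eye(d_model)[1]             # primitive B shifts direction 1

v_a = primitive_vector(acts_a, base)
v_b = primitive_vector(acts_b, base)

# Compositional geometry: primitives combine by vector arithmetic.
v_combo = 0.5 * v_a + 0.5 * v_b                # scalar-weighted sum
v_contrast = v_a - v_b                         # subtraction isolates a difference
```

In this toy setup the mean-difference construction recovers each planted direction exactly, so the combined vector carries equal weight on both primitives.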
Paper Details
arXiv:2510.15987 (cs.LG). Submitted on 13 Oct 2025 (v1); last revised 16 Feb 2026 (v2).
Authors: Samuel Lippl, Thomas McGee, Kimberly Lopez, Ziwen Pan, Pierce Zhang, Salma Ziadi, Oliver Eberle, Ida Momennejad
Abstract: How do latent and inference-time computations enable large language models (LLMs) to solve multi-step reasoning? We introduce a framework for tracing and steering algorithmic primitives that underlie model reasoning. Our approach links reasoning traces to internal activations and evaluates algorithmic primitives by injecting them into residual streams and measuring their effect on reasoning steps and task performance. We consider four benchmarks: Traveling Salesperson Problem (TSP), 3SAT, AIME, and graph navigation. We operationalize primitives by clustering activations and annotating their matched reasoning traces using an automated LLM pipeline. We then apply function vector methods to derive primitive vectors as reusable compositional building blocks of reasoning. Primitive vectors can be combined through addition, subtraction, and scalar operations, revealing a geometric logic in activation space. Cross-task and cross-model evaluations (Phi-4, Phi-...
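The injection step described in the abstract, adding a primitive vector into a layer's residual stream and comparing steered versus unsteered outputs, can be sketched with a toy network. This is a minimal assumption-laden sketch: the two-layer linear model, the `forward` function, and the planted primitive direction are illustrative, not the paper's setup or models (Phi-4, etc.).

```python
import numpy as np

# Hedged sketch of residual-stream injection in a toy two-block network.
# Weights and the primitive direction are illustrative stand-ins.

rng = np.random.default_rng(1)
d_model = 8
w1 = rng.normal(size=(d_model, d_model))
w2 = rng.normal(size=(d_model, d_model))
primitive = np.eye(d_model)[0] * 2.0       # hypothetical primitive direction

def forward(x, inject=None, scale=1.0):
    h = x @ w1
    resid = x + h                          # residual stream after block 1
    if inject is not None:
        resid = resid + scale * inject     # steer: add the primitive vector
    return resid + resid @ w2              # block 2 output

x = rng.normal(size=(1, d_model))
base_out = forward(x)                      # unsteered run
steered_out = forward(x, inject=primitive, scale=1.5)  # steered run
```

Comparing `base_out` and `steered_out` is the toy analogue of measuring how an injected primitive changes downstream reasoning steps; with `scale=0.0` the steered run reduces to the baseline.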