[2603.03310] Entropic-Time Inference: Self-Organizing Large Language Model Decoding Beyond Attention
Computer Science > Computation and Language
arXiv:2603.03310 (cs) [Submitted on 8 Feb 2026]

Title: Entropic-Time Inference: Self-Organizing Large Language Model Decoding Beyond Attention
Authors: Andrew Kiruluta

Abstract: Modern large language model (LLM) inference engines optimize throughput and latency under fixed decoding rules, treating generation as a linear progression in token time. We propose a fundamentally different paradigm: entropic-time inference, in which decoding is governed by the flow of uncertainty rather than by token index. We introduce a self-organizing inference architecture that jointly couples scheduling, attention sparsification, and sampling temperature under a unified entropy-control objective. Our method extends vLLM with entropy-aware scheduling, entropic pruning of paged-attention blocks, and adaptive temperature control that stabilizes generation near a target entropy regime. This transforms inference into a resource-intelligent thermodynamic process that allocates computation where uncertainty reduction is maximized. We present a concrete systems design, pseudocode, and an integration plan, demonstrating how entropy can serve as a first-class control signal for scalable LLM inference.

Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2603.03310 [cs.CL]
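The abstract's adaptive temperature control can be illustrated with a minimal sketch: measure the Shannon entropy of the next-token distribution and nudge the sampling temperature toward a target entropy with a proportional update. This is an assumption-laden illustration, not the paper's actual controller; the `gain`, `t_min`, and `t_max` parameters are hypothetical names chosen here for clarity.

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    """Shannon entropy in nats of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def adapt_temperature(logits, temperature, target_entropy,
                      gain=0.1, t_min=0.1, t_max=2.0):
    """One proportional-control step (hypothetical gain and bounds):
    raise temperature when the distribution is more peaked than the
    target entropy regime, lower it when it is more diffuse."""
    h = entropy(softmax(logits, temperature))
    temperature += gain * (target_entropy - h)
    return min(max(temperature, t_min), t_max)
```

For example, a peaked distribution such as `logits = [5.0, 1.0, 0.0]` has entropy well below a target of 1.0 nat at temperature 1.0, so the controller increases the temperature; a uniform distribution over three tokens (entropy ln 3 ≈ 1.10 nats) against a lower target causes it to cool down instead.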