[2602.16054] CLAA: Cross-Layer Attention Aggregation for Accelerating LLM Prefill
Summary
The paper introduces Cross-Layer Attention Aggregation (CLAA), a method that speeds up long-context LLM inference by stabilizing token importance estimation during the prefill stage.
Why It Matters
As large language models (LLMs) become integral in various applications, optimizing their performance is crucial. CLAA offers a solution to a significant bottleneck in LLM prefill, potentially improving response times and overall efficiency in real-world applications.
Key Takeaways
- CLAA aggregates attention scores across layers to improve token ranking stability.
- The method reduces Time-to-First-Token (TTFT) by up to 39% compared to the Full KV Cache baseline.
- Existing token-ranking heuristics show high variance across layers, which CLAA addresses effectively.
- The proposed Answer-Informed Oracle defines ground-truth token importance by measuring attention from generated answers back to the prompt.
- CLAA closes the gap to the oracle upper bound, enhancing inference efficiency.
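The core aggregation idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes we already have a per-layer importance score for each prompt token (e.g., from attention maps) stored as a `(num_layers, num_tokens)` array, and contrasts ranking by a single layer with ranking by the cross-layer mean.

```python
import numpy as np

def rank_tokens_single_layer(scores: np.ndarray, layer: int) -> np.ndarray:
    """Rank prompt tokens by importance using one layer's scores.

    scores: (num_layers, num_tokens) per-layer token importance.
    Returns token indices, most important first.
    """
    return np.argsort(-scores[layer])

def rank_tokens_cross_layer(scores: np.ndarray) -> np.ndarray:
    """Rank prompt tokens by importance aggregated across all layers.

    Averaging over the layer axis damps out any single layer whose
    ranking degrades sharply, which is the failure mode the paper
    reports for single-layer heuristics.
    """
    aggregated = scores.mean(axis=0)
    return np.argsort(-aggregated)

# Toy example: layers 0 and 1 agree that token 0 matters most,
# while layer 2 is an outlier that ranks token 2 first.
scores = np.array([
    [0.9, 0.10, 0.00],
    [0.8, 0.15, 0.05],
    [0.0, 0.10, 0.90],
])

# Ranking from the outlier layer alone puts token 2 first,
# but the cross-layer aggregate recovers token 0.
print(rank_tokens_single_layer(scores, layer=2)[0])  # → 2
print(rank_tokens_cross_layer(scores)[0])            # → 0
```

The toy example shows why aggregation helps: a single badly behaved layer can invert the ranking, while the layer-averaged score remains dominated by the majority signal.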
Abstract
Computer Science > Computation and Language (cs.CL). Submitted on 17 Feb 2026. Authors: Bradley McDanel, Steven Li, Harshit Khaitan.

The prefill stage in long-context LLM inference remains a computational bottleneck. Recent token-ranking heuristics accelerate inference by selectively processing a subset of semantically relevant tokens. However, existing methods suffer from unstable token importance estimation, often varying between layers. Evaluating token-ranking quality independently from heuristic-specific architectures is challenging. To address this, we introduce an Answer-Informed Oracle, which defines ground-truth token importance by measuring attention from generated answers back to the prompt. This oracle reveals that existing heuristics exhibit high variance across layers: rankings can degrade sharply at specific layers, a failure mode invisible to end-to-end benchmarks. The diagnosis suggests a simple fix: aggregate scores across layers rather than relying on any single one. We implement this as Cross-Layer Attention Aggregation (CLAA), which closes the gap to the oracle upper bound and reduces Time-to-First-Token (TTFT) by up to 39% compared to the Full KV Cache baseline.
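The Answer-Informed Oracle described in the abstract can be sketched in simplified form. This is a hypothetical reconstruction from the one-sentence description, not the paper's actual procedure: it assumes we have an attention matrix from each generated answer token over the full sequence, and scores each prompt token by the total attention it receives from the answer.

```python
import numpy as np

def answer_informed_oracle(attn: np.ndarray, prompt_len: int) -> np.ndarray:
    """Rank prompt tokens by attention received from the generated answer.

    attn: (num_answer_tokens, seq_len) attention weights, where each row
          is one answer token's attention over the full sequence
          (prompt tokens first, then previously generated answer tokens).
    prompt_len: number of prompt tokens at the start of the sequence.

    Returns prompt-token indices ordered from most to least important,
    where "importance" is the summed attention mass flowing from the
    answer back to that prompt token.
    """
    importance = attn[:, :prompt_len].sum(axis=0)
    return np.argsort(-importance)

# Toy example: two answer tokens, a 3-token prompt, seq_len = 4.
# Both answer tokens attend most heavily to prompt token 1.
attn = np.array([
    [0.1, 0.7, 0.2, 0.0],
    [0.2, 0.6, 0.1, 0.1],
])
print(answer_informed_oracle(attn, prompt_len=3)[0])  # → 1
```

Because the oracle looks backward from tokens that were actually generated, it yields a ground-truth ranking independent of any particular pruning heuristic, which is what lets the paper diagnose per-layer ranking failures that end-to-end benchmarks hide.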