[2603.20586] MKA: Memory-Keyed Attention for Efficient Long-Context Reasoning
Computer Science > Machine Learning
arXiv:2603.20586 (cs)
[Submitted on 21 Mar 2026]

Title: MKA: Memory-Keyed Attention for Efficient Long-Context Reasoning
Authors: Dong Liu, Yanxuan Yu, Ben Lengerich, Ying Nian Wu

Abstract: As long-context language modeling becomes increasingly important, the cost of maintaining and attending to large Key/Value (KV) caches grows rapidly, becoming a major bottleneck in both training and inference. While prior works such as Multi-Query Attention (MQA) and Multi-Latent Attention (MLA) reduce memory by sharing or compressing KV features, they often trade off representation quality or incur runtime overhead. We propose Memory-Keyed Attention (MKA), a hierarchical attention mechanism that integrates multi-level KV caches (local, session, and long-term) and learns to route attention across them dynamically. We further introduce Route-Fused MKA (FastMKA), a broadcast-routed variant that fuses memory sources before attention computation for improved efficiency. Experiments across sequence lengths show that FastMKA achieves a favorable accuracy-efficiency trade-off: perplexity comparable to MLA with up to 5x higher training throughput and 1.8x lower evaluation latency. These results highlight MKA as a practical and extensible framework for efficient...
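As a rough illustration of the routing idea described in the abstract (not the authors' implementation, whose details are not given here), the sketch below attends over several KV caches of different scopes and combines the per-level outputs with learned routing weights. The function name, gating form, and cache layout are assumptions for exposition only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mka_attention(q, caches, route_logits):
    """Sketch of attention routed across multi-level KV caches.

    q:            (d,) query vector
    caches:       list of (K_i, V_i) pairs, one per memory level
                  (e.g. local, session, long-term); K_i, V_i: (n_i, d)
    route_logits: (num_levels,) routing scores; the learned gating in the
                  paper may take a different form (hypothetical here)
    """
    d = q.shape[0]
    gates = softmax(route_logits)              # distribute attention mass across levels
    out = np.zeros(d)
    for g, (K, V) in zip(gates, caches):
        attn = softmax(K @ q / np.sqrt(d))     # per-level scaled dot-product attention
        out += g * (attn @ V)                  # gate-weighted combination of level outputs
    return out
```

A broadcast-routed variant in the spirit of FastMKA would instead fuse the caches into a single KV set before one attention pass, avoiding per-level attention computations.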