[2510.03346] KVComm: Enabling Efficient LLM Communication through Selective KV Sharing

arXiv - AI · 4 min read

Summary

The paper introduces KVComm, a framework that enables efficient communication between Large Language Models (LLMs) by selectively sharing key-value (KV) cache pairs, improving inter-model communication while reducing inference cost.

Why It Matters

As LLMs are increasingly utilized in multi-agent systems, effective communication is critical. KVComm addresses inefficiencies in existing protocols, offering a scalable solution that can improve performance and reduce costs, making it relevant for researchers and developers in AI.

Key Takeaways

  • KVComm enables efficient communication by selectively sharing KV pairs.
  • The framework reduces inference costs compared to traditional methods.
  • Experiments show KVComm achieves performance comparable to the upper-bound method (directly merging inputs into one model).
  • The approach leverages attention importance scores for optimal KV selection.
  • KV pairs can serve as an effective medium for inter-LLM communication.

Computer Science > Machine Learning

arXiv:2510.03346 (cs) — Submitted on 2 Oct 2025 (v1), last revised 22 Feb 2026 (this version, v3)

Title: KVComm: Enabling Efficient LLM Communication through Selective KV Sharing

Authors: Xiangyu Shi, Marco Chiesa, Gerald Q. Maguire Jr., Dejan Kostic

Abstract: Large Language Models (LLMs) are increasingly deployed in multi-agent systems, where effective inter-model communication is crucial. Existing communication protocols either rely on natural language, incurring high inference costs and information loss, or on hidden states, which suffer from information concentration bias and inefficiency. To address these limitations, we propose KVComm, a novel communication framework that enables efficient communication between LLMs through selective sharing of KV pairs. KVComm leverages the rich information encoded in the KV pairs while avoiding the pitfalls of hidden states. We introduce a KV layer-wise selection strategy based on attention importance scores with a Gaussian prior to identify the most informative KV pairs for communication. Extensive experiments across diverse tasks and model pairs demonstrate that KVComm achieves comparable performance to the upper-bound method, which directly merges inputs to one model without any communication, while transmitting as few as 30% of...
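To make the selection strategy concrete, here is a minimal sketch of layer-wise KV selection driven by attention importance scores weighted by a Gaussian prior over layer depth. The function names, the per-layer budget allocation, and the scoring inputs are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np

def gaussian_layer_prior(num_layers, mu, sigma):
    """Gaussian prior over layer indices (assumption: layers near mu
    carry the most transferable information); normalized to sum to 1."""
    idx = np.arange(num_layers)
    prior = np.exp(-0.5 * ((idx - mu) / sigma) ** 2)
    return prior / prior.sum()

def select_kv_to_share(attn_scores, prior, budget_ratio=0.3):
    """Pick which tokens' KV pairs to transmit, layer by layer.

    attn_scores : list of length num_layers; each entry is a 1-D array of
                  per-token attention importance scores for that layer.
    prior       : Gaussian prior over layers (sums to 1).
    budget_ratio: overall fraction of KV pairs to transmit (e.g. 0.3,
                  matching the ~30% figure quoted in the abstract).
    Returns a list of sorted token indices per layer.
    """
    num_layers = len(attn_scores)
    selected = []
    for layer, scores in enumerate(attn_scores):
        # Scale the shared budget by the layer's prior weight; a uniform
        # prior (prior[layer] == 1/num_layers) recovers an even split.
        k = int(round(budget_ratio * len(scores) * prior[layer] * num_layers))
        k = min(max(k, 1), len(scores))
        top = np.argsort(scores)[-k:]  # k most attended tokens this layer
        selected.append(np.sort(top))
    return selected
```

Under this sketch, the sender would transmit only the selected layers' KV entries and the receiver would splice them into its own KV cache; that splicing step is model-specific and omitted here.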

