[2603.23234] MemCollab: Cross-Agent Memory Collaboration via Contrastive Trajectory Distillation
arXiv:2603.23234 (cs)
Computer Science > Artificial Intelligence
[Submitted on 24 Mar 2026]

Title: MemCollab: Cross-Agent Memory Collaboration via Contrastive Trajectory Distillation
Authors: Yurui Chang, Yiran Wu, Qingyun Wu, Lu Lin

Abstract: Large language model (LLM)-based agents rely on memory mechanisms to reuse knowledge from past problem-solving experiences. Existing approaches typically construct memory in a per-agent manner, tightly coupling stored knowledge to a single model's reasoning style. In modern deployments with heterogeneous agents, a natural question arises: can a single memory system be shared across different models? We find that naively transferring memory between agents often degrades performance, as such memory entangles task-relevant knowledge with agent-specific biases. To address this challenge, we propose MemCollab, a collaborative memory framework that constructs agent-agnostic memory by contrasting reasoning trajectories generated by different agents on the same task. This contrastive process distills abstract reasoning constraints that capture shared task-level invariants while suppressing agent-specific artifacts. We further introduce a task-aware retrieval mechanism that conditions memory access on task category, ensuring that only relevant constraints are used at ...
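The abstract page gives no implementation details, but the two mechanisms it names can be illustrated with a minimal sketch. Everything below is hypothetical: `distill_shared_constraints` stands in for the contrastive distillation step (here a crude set intersection of trajectory steps), and `TaskAwareMemory` stands in for retrieval conditioned on task category; neither name comes from the paper.

```python
# Hypothetical sketch of the two mechanisms the abstract describes:
# (1) contrasting trajectories from different agents on the same task
#     to keep only shared, agent-agnostic reasoning constraints, and
# (2) task-aware retrieval keyed by task category.
# All names are illustrative; the paper's actual method is not
# specified on this page.
from collections import defaultdict


def distill_shared_constraints(traj_a, traj_b):
    """Keep only reasoning steps common to both agents' trajectories,
    discarding agent-specific artifacts (a set intersection here)."""
    return sorted(set(traj_a) & set(traj_b))


class TaskAwareMemory:
    def __init__(self):
        self._store = defaultdict(list)  # task category -> constraints

    def add(self, category, constraints):
        self._store[category].extend(constraints)

    def retrieve(self, category):
        # Only constraints stored under the queried category are returned,
        # so unrelated tasks never see each other's constraints.
        return list(self._store[category])


mem = TaskAwareMemory()
shared = distill_shared_constraints(
    ["parse input", "check units", "agentA quirk"],
    ["parse input", "check units", "agentB quirk"],
)
mem.add("unit-conversion", shared)
print(mem.retrieve("unit-conversion"))  # ['check units', 'parse input']
print(mem.retrieve("graph-search"))     # []
```

In this toy version the agent-specific steps ("agentA quirk", "agentB quirk") are dropped because they appear in only one trajectory, while the shared steps survive; the actual paper presumably distills constraints at a more abstract level than exact string matching.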