[2603.02426] Personalized Multi-Agent Average Reward TD-Learning via Joint Linear Approximation

arXiv - Machine Learning 4 min read

About this article

Abstract page for arXiv paper 2603.02426: Personalized Multi-Agent Average Reward TD-Learning via Joint Linear Approximation

Computer Science > Machine Learning
arXiv:2603.02426 (cs) [Submitted on 2 Mar 2026]

Title: Personalized Multi-Agent Average Reward TD-Learning via Joint Linear Approximation
Authors: Leo (Muxing) Wang, Pengkun Yang, Lili Su

Abstract: We study personalized multi-agent average reward TD learning, in which a collection of agents interacts with different environments and jointly learns their respective value functions. We focus on the setting where there exists a shared linear representation and the agents' optimal weights collectively lie in an unknown linear subspace. Inspired by the recent success of personalized federated learning (PFL), we study the convergence of cooperative single-timescale TD learning in which agents iteratively estimate the common subspace and their local heads. We show that this decomposition can filter out conflicting signals, effectively mitigating the negative impact of "misaligned" signals and achieving linear speedup. The main technical challenges lie in the heterogeneity, the Markovian sampling, and their intricate interplay in shaping the error evolution. Specifically, not only are the error dynamics of multiple variables closely interconnected, but there is also no direct contraction for the principal angle distance between the optimal subspace and the estimated subspace. We hope our an...
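To make the setup concrete, here is a minimal, hypothetical sketch of the kind of scheme the abstract describes: each agent i approximates its value function as V_i(s) ≈ φ(s)ᵀ B w_i, where B is a shared subspace (estimated jointly) and w_i is a local head (estimated per agent). The environments, features, step sizes, and the QR-based re-orthonormalization below are illustrative assumptions, not the paper's actual algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_agents, n_states = 8, 2, 4, 10

# Shared feature map phi(s), one row per state (assumed known to all agents).
Phi = rng.standard_normal((n_states, d))

# Each agent faces its own random Markov chain with its own reward vector,
# modeling the heterogeneous environments in the abstract.
def make_env():
    P = rng.random((n_states, n_states))
    P /= P.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    r = rng.standard_normal(n_states)
    return P, r

envs = [make_env() for _ in range(n_agents)]

B = np.linalg.qr(rng.standard_normal((d, k)))[0]  # shared subspace estimate
W = rng.standard_normal((n_agents, k)) * 0.1      # local heads w_i
rbar = np.zeros(n_agents)                         # average-reward estimates
states = [0] * n_agents
alpha, beta = 0.02, 0.01                          # single timescale: fixed steps

for t in range(3000):
    grad_B = np.zeros((d, k))
    for i, (P, r) in enumerate(envs):
        s = states[i]
        s2 = rng.choice(n_states, p=P[s])         # Markovian (not i.i.d.) sample
        # Average-reward TD error with V_i(s) = phi(s)^T B w_i.
        v, v2 = Phi[s] @ B @ W[i], Phi[s2] @ B @ W[i]
        delta = r[s] - rbar[i] + v2 - v
        rbar[i] += beta * delta                   # track long-run average reward
        W[i] += alpha * delta * (Phi[s] @ B)      # local head update
        grad_B += delta * np.outer(Phi[s], W[i])  # agent i's signal for B
        states[i] = s2
    # Aggregate the agents' signals into the shared subspace, then
    # re-orthonormalize so B remains a well-conditioned basis.
    B = np.linalg.qr(B + (alpha / n_agents) * grad_B)[0]

print("average-reward estimates per agent:", np.round(rbar, 2))
```

The intended intuition matches the abstract: per-agent heads w_i absorb heterogeneity, while only the aggregated signal shapes the shared B, so conflicting ("misaligned") per-agent directions tend to cancel in the average.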

Originally published on March 04, 2026. Curated by AI News.

Related Articles

[2601.13227] Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets? (LLMs) · arXiv - AI · 3 min

[2601.22440] AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations (LLMs) · arXiv - AI · 4 min

[2601.13222] Incorporating Q&A Nuggets into Retrieval-Augmented Generation (NLP) · arXiv - AI · 3 min

[2512.01707] StreamGaze: Gaze-Guided Temporal Reasoning and Proactive Understanding in Streaming Videos (LLMs) · arXiv - AI · 4 min