[2604.04475] Discrete Prototypical Memories for Federated Time Series Foundation Models


Computer Science > Machine Learning
arXiv:2604.04475 (cs) [Submitted on 6 Apr 2026]

Title: Discrete Prototypical Memories for Federated Time Series Foundation Models

Authors: Liwei Deng, Qingxiang Liu, Xinhe Niu, Shengchao Chen, Sheng Sun, Yuankai Wu, Guodong Long, Yuxuan Liang

Abstract: Leveraging Large Language Models (LLMs) as federated learning (FL)-based time series foundation models offers a promising way to transfer the generalization capabilities of LLMs to time series data while preserving access to private data. However, the semantic misalignment between time-series data and the text-centric latent space of existing LLMs often leads to degraded performance. Meanwhile, the parameter-sharing mechanism in existing FL methods models heterogeneous cross-domain time-series data into a unified continuous latent space, which contradicts the fact that time-series semantics frequently manifest as discrete and recurring regimes. To address these limitations, we propose FeDPM, a federated framework for time-series foundation models based on discrete prototypical memories. Specifically, we learn local prototypical memory priors for intra-domain time-series data. We then align cross-domain memories to promote a unified discrete latent space and introduce a domain-specific memory update mechanism to balanc...
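To make the idea of a "discrete prototypical memory" concrete, the sketch below shows one plausible reading: a learnable bank of prototypes that quantizes continuous time-series embeddings to their nearest entries, in the spirit of vector-quantization codebooks. All names, sizes, and design choices here are assumptions for illustration; the paper's actual FeDPM architecture, alignment, and memory-update mechanisms may differ.

import torch
import torch.nn as nn


class PrototypicalMemory(nn.Module):
    """Maps continuous time-series embeddings to their nearest discrete prototypes.

    Hypothetical illustration only; not the authors' implementation.
    """

    def __init__(self, num_prototypes: int = 64, dim: int = 128):
        super().__init__()
        # Learnable bank of discrete prototypes (the "memory").
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) continuous embeddings from a local (per-domain) encoder.
        # Assign each embedding to its closest prototype by Euclidean distance.
        dists = torch.cdist(z, self.prototypes)      # (batch, num_prototypes)
        idx = dists.argmin(dim=-1)                    # discrete codes per sample
        quantized = self.prototypes[idx]              # (batch, dim)
        # Straight-through estimator so gradients still reach the encoder.
        quantized = z + (quantized - z).detach()
        return quantized, idx


if __name__ == "__main__":
    memory = PrototypicalMemory(num_prototypes=64, dim=128)
    z = torch.randn(8, 128)        # stand-in for encoded time-series windows
    q, codes = memory(z)
    print(q.shape, codes.tolist()) # torch.Size([8, 128]) and 8 discrete indices

In a federated setting, each client could train such a memory on its own domain and share only the prototype bank (rather than raw data) for cross-domain alignment, which is one way the abstract's "unified discrete latent space" could be realized.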

Originally published on April 07, 2026. Curated by AI News.

