[2602.13783] MEMTS: Internalizing Domain Knowledge via Parameterized Memory for Retrieval-Free Domain Adaptation of Time Series Foundation Models

arXiv - Machine Learning · 4 min read

Summary

The paper presents MEMTS, a method for adapting time series foundation models to specific domains by internalizing domain knowledge in a parameterized Knowledge Persistence Module, improving efficiency and mitigating catastrophic forgetting.

Why It Matters

As time series forecasting becomes increasingly vital across various industries, the ability to adapt models to specific domains without significant overhead is crucial. MEMTS addresses the limitations of existing methods by providing a scalable, efficient solution that maintains performance in real-world applications.

Key Takeaways

  • MEMTS offers a retrieval-free approach to domain adaptation in time series models.
  • The Knowledge Persistence Module internalizes domain-specific dynamics as parameterized memory, improving efficiency (a hedged sketch of this idea follows the list).
  • The method achieves constant-time inference, addressing scalability issues in real-time processing.
  • Extensive experiments demonstrate MEMTS's state-of-the-art performance across multiple datasets.
  • The approach mitigates catastrophic forgetting while preserving general temporal patterns.
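
The paper's text here does not include code, so the following is a minimal PyTorch sketch of what a parameterized memory could look like: a small bank of learnable key/value slots read by attention and added residually to a model's hidden states. The class name ParameterizedMemory, the slot count, and the attention-style read-out are illustrative assumptions, not the paper's actual Knowledge Persistence Module.

```python
import torch
import torch.nn as nn

class ParameterizedMemory(nn.Module):
    """Hypothetical parameterized memory: learnable slots read by attention.

    An illustrative guess at "internalizing domain knowledge via
    parameterized memory"; the paper's actual module may differ in
    every detail.
    """

    def __init__(self, d_model: int, num_slots: int = 64):
        super().__init__()
        # Domain knowledge lives in these learned parameters, so inference
        # never queries an external retrieval index.
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model) hidden states from a forecasting model.
        scores = h @ self.keys.T / (self.keys.shape[-1] ** 0.5)
        attn = torch.softmax(scores, dim=-1)  # (batch, seq_len, num_slots)
        # Read-out cost is fixed by num_slots: constant in corpus size,
        # unlike retrieval over an external datastore.
        return h + attn @ self.values         # residual knowledge injection
```

Because the read-out touches only a fixed number of slots, the per-step cost does not grow with the amount of adaptation data, which is one plausible route to the constant-time inference the takeaways describe.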

Computer Science > Machine Learning
arXiv:2602.13783 (cs) · Submitted on 14 Feb 2026

Title: MEMTS: Internalizing Domain Knowledge via Parameterized Memory for Retrieval-Free Domain Adaptation of Time Series Foundation Models
Authors: Xiaoyun Yu, Li fan, Xiangfei Qiu, Nanqing Dong, Yonggui Huang, Honggang Qi, Geguang Pu, Wanli Ouyang, Xi Chen, Jilin Hu

Abstract: While Time Series Foundation Models (TSFMs) have demonstrated exceptional performance in generalized forecasting, their performance often degrades significantly when deployed in real-world vertical domains characterized by temporal distribution shifts and domain-specific periodic structures. Current solutions are primarily constrained by two paradigms: Domain-Adaptive Pretraining (DAPT), which improves short-term domain fitting but frequently disrupts previously learned global temporal patterns due to catastrophic forgetting; and Retrieval-Augmented Generation (RAG), which incorporates external knowledge but introduces substantial retrieval overhead. This creates a severe scalability bottleneck that fails to meet the high-efficiency requirements of real-time stream processing. To break this impasse, we propose Memory for Time Series (MEMTS), a lightweight and plug-and-play method for retrieval-free […]
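
To make the abstract's "plug-and-play" claim concrete, here is a hedged usage sketch that attaches the hypothetical ParameterizedMemory from the earlier sketch to a frozen forecasting backbone. AdaptedForecaster, and the assumption that the backbone returns hidden states of shape (batch, seq_len, d_model), are illustrative inventions, not APIs from the paper; freezing the backbone is one standard way to preserve previously learned general patterns while adapting.

```python
import torch
import torch.nn as nn

class AdaptedForecaster(nn.Module):
    """Illustrative wrapper: frozen backbone + ParameterizedMemory
    (defined in the earlier sketch). Only the memory adapter is trained,
    one common way to limit catastrophic forgetting."""

    def __init__(self, backbone: nn.Module, d_model: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False   # keep general temporal patterns intact
        self.memory = ParameterizedMemory(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)          # assumed: (batch, seq_len, d_model)
        return self.memory(h)         # inject internalized domain knowledge
```

During adaptation, an optimizer would be given only self.memory.parameters(), so gradient updates touch the memory slots while the backbone stays fixed.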
