[2602.14200] TS-Haystack: A Multi-Scale Retrieval Benchmark for Time Series Language Models

arXiv - Machine Learning

Summary

The paper introduces TS-Haystack, a benchmark for evaluating Time Series Language Models (TSLMs) on long-context retrieval tasks, a regime that current models handle poorly because they are typically trained and evaluated on short sequences.

Why It Matters

As TSLMs become more prevalent in processing continuous signals, understanding their limitations in long-context retrieval is crucial for improving model design and performance. TS-Haystack provides a structured approach to evaluate these models, potentially guiding future research and applications in time series analysis.

Key Takeaways

  • TS-Haystack benchmark evaluates TSLMs on long-context retrieval tasks.
  • Existing models struggle with temporal localization in long sequences.
  • Compression improves classification but hinders localized event retrieval.
  • The benchmark includes ten task types across four categories.
  • Architectural designs must balance sequence length and computational complexity.

Computer Science > Machine Learning

arXiv:2602.14200 (cs) [Submitted on 15 Feb 2026]

Title: TS-Haystack: A Multi-Scale Retrieval Benchmark for Time Series Language Models

Authors: Nicolas Zumarraga, Thomas Kaar, Ning Wang, Maxwell A. Xu, Max Rosenblattl, Markus Kreft, Kevin O'Sullivan, Paul Schmiedmayer, Patrick Langer, Robert Jakob

Abstract: Time Series Language Models (TSLMs) are emerging as unified models for reasoning over continuous signals in natural language. However, long-context retrieval remains a major limitation: existing models are typically trained and evaluated on short sequences, while real-world time-series sensor streams can span millions of datapoints. This mismatch requires precise temporal localization under strict computational constraints, a regime that is not captured by current benchmarks. We introduce TS-Haystack, a long-context temporal retrieval benchmark comprising ten task types across four categories: direct retrieval, temporal reasoning, multi-step reasoning, and contextual anomaly. The benchmark uses controlled needle insertion by embedding short activity bouts into longer longitudinal accelerometer recordings, enabling systematic evaluation across context lengths ranging from seconds to 2 hours per sample. We hypothesize that existing TSLM time series encoders overl...
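The controlled needle insertion the abstract describes can be sketched in a few lines. This is a minimal illustration only: `insert_needle`, the noise model for the accelerometer haystack, the sinusoidal stand-in for an activity bout, and all sampling parameters are assumptions for the sketch, not the paper's actual data pipeline.

```python
import numpy as np

def insert_needle(haystack: np.ndarray, needle: np.ndarray, position: int) -> np.ndarray:
    """Embed a short activity bout (needle) into a longer recording (haystack).

    Hypothetical helper: the name, signal model, and parameters are
    illustrative assumptions, not taken from the TS-Haystack paper.
    """
    out = haystack.copy()
    out[position:position + len(needle)] = needle
    return out

rng = np.random.default_rng(0)
fs = 100  # assumed sampling rate in Hz

# Haystack: 2 hours of low-amplitude noise standing in for a 1-D
# accelerometer channel (the benchmark's upper context length per sample).
haystack = rng.normal(0.0, 0.05, size=2 * 60 * 60 * fs)

# Needle: a 10-second oscillation standing in for a short activity bout.
t = np.arange(10 * fs) / fs
needle = np.sin(2 * np.pi * 2.0 * t)

# Insert the bout 30 minutes into the recording; a retrieval task would
# then ask the model to localize or describe this segment.
pos = 30 * 60 * fs
sample = insert_needle(haystack, needle, pos)
print(sample.shape)
```

Varying `pos` and the haystack length gives the systematic sweep over context lengths (seconds up to 2 hours) that the benchmark performs.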
