[2505.11349] Context parroting: A simple but tough-to-beat baseline for foundation models in scientific machine learning

arXiv - Machine Learning

Computer Science > Machine Learning — arXiv:2505.11349 (cs)

[Submitted on 16 May 2025 (v1), last revised 29 Mar 2026 (this version, v3)]

Title: Context parroting: A simple but tough-to-beat baseline for foundation models in scientific machine learning

Authors: Yuanzhao Zhang, William Gilpin

Abstract: Recent time-series foundation models exhibit strong abilities to predict physical systems. These abilities include zero-shot forecasting, in which a model forecasts future states of a system given only a short trajectory as context, without knowledge of the underlying physics. Here, we show that foundation models often forecast through a simple parroting strategy, and when they are not parroting they exhibit shared failure modes such as converging to the mean. As a result, a naive context parroting model that copies directly from the context scores higher than leading time-series foundation models on predicting a diverse range of dynamical systems, including low-dimensional chaos, turbulence, coupled oscillators, and electrocardiograms, at a tiny fraction of the computational cost. We draw a parallel between context parroting and induction heads, which explains recent works showing that large language models can often be repurposed for time series forecasting. Our dynamical ...
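The "context parroting" baseline the abstract describes — copying directly from the context — can be illustrated with a short sketch. This is a hypothetical reconstruction, not the paper's exact procedure: it matches the most recent window of the context against earlier segments and replays whatever followed the best match.

```python
import numpy as np

def parrot_forecast(context, horizon, window=8):
    """Forecast by 'parroting' the context: find the earlier segment most
    similar to the last `window` points, then copy what followed it.
    (Illustrative sketch only; the paper's exact procedure may differ.)"""
    context = np.asarray(context, dtype=float)
    n = len(context)
    recent = context[-window:]
    best_err, best_end = np.inf, window
    # Compare every earlier window (excluding the trailing one itself)
    # against the most recent `window` points.
    for start in range(n - window):
        err = np.sum((context[start:start + window] - recent) ** 2)
        if err < best_err:
            best_err, best_end = err, start + window
    # Replay the continuation after the best match, wrapping around the
    # context if the requested horizon runs past its end.
    idx = np.arange(best_end, best_end + horizon) % n
    return context[idx]
```

On a periodic signal this nearest-neighbor replay recovers the continuation almost exactly; the paper's claim is that such a near-free strategy is surprisingly competitive with large foundation models on chaotic and physiological time series.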

Originally published on March 31, 2026. Curated by AI News.

