[2505.11349] Context parroting: A simple but tough-to-beat baseline for foundation models in scientific machine learning
Computer Science > Machine Learning
arXiv:2505.11349 (cs)
[Submitted on 16 May 2025 (v1), last revised 29 Mar 2026 (this version, v3)]

Title: Context parroting: A simple but tough-to-beat baseline for foundation models in scientific machine learning
Authors: Yuanzhao Zhang, William Gilpin

Abstract: Recent time-series foundation models exhibit strong abilities to predict physical systems. These abilities include zero-shot forecasting, in which a model forecasts future states of a system given only a short trajectory as context, without knowledge of the underlying physics. Here, we show that foundation models often forecast through a simple parroting strategy, and when they are not parroting, they exhibit some shared failure modes such as converging to the mean. As a result, a naive context parroting model that copies directly from the context scores higher than leading time-series foundation models on predicting a diverse range of dynamical systems, including low-dimensional chaos, turbulence, coupled oscillators, and electrocardiograms, at a tiny fraction of the computational cost. We draw a parallel between context parroting and induction heads, which explains recent works showing that large language models can often be repurposed for time series forecasting. Our dynamical ...
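The abstract does not spell out the copying rule, but a minimal sketch of such a context-parroting baseline might look like the following. The sliding-window nearest-neighbor match, the window parameter, and the function name context_parroting_forecast are illustrative assumptions, not necessarily the authors' exact formulation:

import numpy as np

def context_parroting_forecast(context, horizon, window=16):
    """Sketch of a context-parroting baseline (assumed matching rule).

    Find the earlier segment of the context that best matches the most
    recent `window` values, then replay the values that followed that
    segment as the forecast. `context` is a 1-D array of observations;
    `horizon` is the number of steps to predict.
    """
    context = np.asarray(context, dtype=float)
    query = context[-window:]
    best_start, best_dist = 0, np.inf
    # Scan every earlier window whose continuation still fits inside
    # the context, keeping the one closest to the recent values.
    for start in range(len(context) - window - horizon):
        segment = context[start:start + window]
        dist = np.linalg.norm(segment - query)
        if dist < best_dist:
            best_start, best_dist = start, dist
    # Parrot: copy the `horizon` values that followed the best match.
    return context[best_start + window: best_start + window + horizon]

On a roughly periodic context, this simply replays the continuation of the best-matching past cycle, and its cost is a single linear scan over the context, consistent with the abstract's claim that the baseline runs at a tiny fraction of a foundation model's compute.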