[2602.18679] Transformers for dynamical systems learn transfer operators in-context
Summary
This article explores how transformers can learn transfer operators for dynamical systems through in-context learning, enabling zero-shot forecasting of unseen systems.
Why It Matters
Understanding how large-scale foundation models adapt to new physical systems without retraining is crucial for advancing machine learning in scientific domains. This research provides insight into the mechanisms that enable effective forecasting of complex dynamical systems, with potential impact on fields such as climate modeling and robotics.
Key Takeaways
- Transformers can forecast different dynamical systems without retraining.
- In-context learning reveals a tradeoff between in-distribution and out-of-distribution performance.
- Attention-based models utilize transfer-operator strategies for effective forecasting.
- The study highlights the importance of global attractor information in short-term predictions.
- This research challenges conventional learning paradigms in physical systems.
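The transfer-operator strategy noted in the takeaways has a classical finite-dimensional analogue: Ulam's method, which partitions state space into bins and estimates a Markov transition matrix from observed transitions. The sketch below is illustrative only (the logistic map, bin count, and helper name are assumptions, not taken from the paper):

```python
import numpy as np

def ulam_matrix(traj, n_bins, lo, hi):
    """Estimate a transfer (Perron-Frobenius) operator from a 1-D trajectory
    via Ulam's method: partition [lo, hi) into bins and count transitions."""
    bins = np.clip(((traj - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    P = np.zeros((n_bins, n_bins))
    for a, b in zip(bins[:-1], bins[1:]):
        P[a, b] += 1  # count one observed transition from bin a to bin b
    row_sums = P.sum(axis=1, keepdims=True)
    # Normalize each visited row into a probability distribution.
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# Illustrative chaotic trajectory from the logistic map x -> 4x(1-x).
x = np.empty(10_000)
x[0] = 0.2
for i in range(1, len(x)):
    x[i] = 4 * x[i - 1] * (1 - x[i - 1])

P = ulam_matrix(x, n_bins=50, lo=0.0, hi=1.0)
print(P.shape)  # (50, 50)
```

Long-lived invariant sets of the dynamics show up as eigenvectors of `P` with eigenvalues near 1, which is one way to make precise what "forecasting long-lived invariant sets" means.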
Computer Science > Machine Learning
arXiv:2602.18679 (cs)
[Submitted on 21 Feb 2026]
Title: Transformers for dynamical systems learn transfer operators in-context
Authors: Anthony Bao, Jeffrey Lai, William Gilpin
Abstract: Large-scale foundation models for scientific machine learning adapt to physical settings unseen during training, such as zero-shot transfer between turbulent scales. This phenomenon, in-context learning, challenges conventional understanding of learning and adaptation in physical systems. Here, we study in-context learning of dynamical systems in a minimal setting: we train a small two-layer, single-head transformer to forecast one dynamical system, and then evaluate its ability to forecast a different dynamical system without retraining. We discover an early tradeoff in training between in-distribution and out-of-distribution performance, which manifests as a secondary double descent phenomenon. We discover that attention-based models apply a transfer-operator forecasting strategy in-context. They (1) lift low-dimensional time series using delay embedding, to detect the system's higher-dimensional dynamical manifold, and (2) identify and forecast long-lived invariant sets that characterize the global flow on this manifold. Our results clarify the mechanism enabling large pretrained models to ...
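The delay-embedding step described in the abstract, lifting a scalar time series into higher-dimensional coordinates, can be sketched as follows (the function name and the sine-wave example are illustrative assumptions, not code from the paper):

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Lift a scalar time series into `dim`-dimensional delay coordinates.

    Row t is (x[t], x[t+lag], ..., x[t+(dim-1)*lag]).
    """
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Example: a sine wave embedded in 2-D traces out a closed loop,
# recovering the circular geometry of the underlying harmonic oscillator.
t = np.linspace(0, 8 * np.pi, 400)
embedded = delay_embed(np.sin(t), dim=2, lag=25)
print(embedded.shape)  # (375, 2)
```

By Takens-style embedding arguments, such delay coordinates can reconstruct the attractor of a higher-dimensional system from a single observed variable, which is why it is a natural first step in the forecasting strategy the paper identifies.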