[2603.22586] A Foundation Model for Instruction-Conditioned In-Context Time Series Tasks
Computer Science > Machine Learning
arXiv:2603.22586 (cs) [Submitted on 23 Mar 2026]
Title: A Foundation Model for Instruction-Conditioned In-Context Time Series Tasks
Authors: Anish Saha, Konstantin Shmakov
Abstract: In-context learning (ICL) allows a model to adapt at inference time by conditioning on examples rather than updating parameters. Existing time-series foundation models use implicit positional context, retrieval, or task-specific objectives, but rarely explicit instruction-conditioned demonstrations. We present a foundation model for instruction-conditioned in-context time-series tasks based on a quantile-regression T5 encoder-decoder. Historical examples and queries are encoded with a structured tokenization scheme that marks target series, covariates, context, and task-specific future information. A hierarchical Transformer with per-example encoding, example-level fusion, and cross-example attention conditions decoding on demonstration pairs, enabling forecasting and related tasks without task-specific fine-tuning. We train on large-scale real and synthetic time series using supervised forecasting plus self-supervised tasks, including imputation, reconstruction, classification, anomaly detection, and source demixing. This multi-task training learns a distribution over task mappings and improves…
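The quantile-regression objective mentioned in the abstract is conventionally trained with the pinball (quantile) loss, which penalizes under- and over-prediction asymmetrically per quantile level. A minimal NumPy sketch of that standard loss, for illustration only (the paper's actual implementation and function names are not given here):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for quantile level q in (0, 1).

    Under-prediction (y_true > y_pred) is weighted by q,
    over-prediction by (1 - q), so minimizing this loss
    drives y_pred toward the q-th conditional quantile.
    """
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# Illustrative values: a constant forecast that under-predicts most points.
y = np.array([1.0, 2.0, 3.0])
yhat = np.array([1.5, 1.5, 1.5])

loss_low = pinball_loss(y, yhat, 0.1)   # lenient toward under-prediction
loss_high = pinball_loss(y, yhat, 0.9)  # penalizes under-prediction heavily
```

Because the forecast sits below most of the targets, the loss at q = 0.9 exceeds the loss at q = 0.1, which is why a multi-quantile decoder can emit a calibrated predictive interval rather than a single point forecast.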