[2509.14181] Bridging Past and Future: Distribution-Aware Alignment for Time Series Forecasting

arXiv - Machine Learning 4 min read

Computer Science > Machine Learning
arXiv:2509.14181 (cs)
[Submitted on 17 Sep 2025 (v1), last revised 25 Mar 2026 (this version, v4)]

Title: Bridging Past and Future: Distribution-Aware Alignment for Time Series Forecasting

Authors: Yifan Hu, Jie Yang, Tian Zhou, Peiyuan Liu, Yujin Tang, Rong Jin, Liang Sun

Abstract: Although contrastive and other representation-learning methods have long been explored in vision and NLP, their adoption in modern time series forecasters remains limited. We believe they hold strong promise for this domain. To unlock this potential, we explicitly align past and future representations, thereby bridging the distributional gap between input histories and future targets. To this end, we introduce TimeAlign, a lightweight, plug-and-play framework that establishes a new representation paradigm, distinct from contrastive learning, by aligning auxiliary features via a simple reconstruction task and feeding them back into any base forecaster. Extensive experiments across eight benchmarks verify its superior performance. Further studies indicate that the gains arise primarily from correcting frequency mismatches between historical inputs and future outputs. Additionally, we provide two theoretical justifications for how reconstruction improves forecasting generalization and how ali...
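The abstract describes the core mechanism only at a high level: an auxiliary representation of the history is trained with a reconstruction objective against the future window, then fed back into a base forecaster. The sketch below is a minimal toy illustration of that idea, not the paper's actual TimeAlign architecture; all names (`W_enc`, `W_rec`, `align_and_forecast`, the linear/tanh layers, and the dimensions) are hypothetical choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: forecast H future steps from L past steps.
# All sizes and layer choices are illustrative, not from the paper.
L, H, D = 96, 24, 16              # lookback, horizon, hidden width

W_enc = rng.normal(scale=0.1, size=(L, D))      # encoder: history -> auxiliary feature
W_rec = rng.normal(scale=0.1, size=(D, H))      # reconstruction head: feature -> future window
W_fc = rng.normal(scale=0.1, size=(L + D, H))   # base forecaster on the augmented input

def align_and_forecast(history, future=None):
    """Encode the history, reconstruct the future window from the auxiliary
    feature (the alignment/reconstruction task), and feed the feature back
    into the base forecaster alongside the raw history."""
    z = np.tanh(history @ W_enc)                 # auxiliary representation of the past
    recon = z @ W_rec                            # reconstruction of the future window
    forecast = np.concatenate([history, z]) @ W_fc
    # Alignment loss: MSE between reconstruction and the true future window,
    # only computable at training time when the future is known.
    loss = float(np.mean((recon - future) ** 2)) if future is not None else None
    return forecast, loss

history = rng.normal(size=L)
future = rng.normal(size=H)
pred, loss = align_and_forecast(history, future)
```

At inference time the same function is called without a `future` argument, so the alignment loss is skipped and only the augmented forecast is produced, matching the plug-and-play framing in the abstract.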

Originally published on March 26, 2026. Curated by AI News.

Related Articles

Machine Learning

[D] Looking for definition of open-world ish learning problem

Hello! Recently I did a project where I initially had around 30 target classes. But at inference, the model had to be able to handle a lo...

Reddit - Machine Learning · 1 min ·
Llms

[2603.11687] SemBench: A Universal Semantic Framework for LLM Evaluation

Abstract page for arXiv paper 2603.11687: SemBench: A Universal Semantic Framework for LLM Evaluation

arXiv - AI · 4 min ·
Llms

[2603.11583] UtilityMax Prompting: A Formal Framework for Multi-Objective Large Language Model Optimization

Abstract page for arXiv paper 2603.11583: UtilityMax Prompting: A Formal Framework for Multi-Objective Large Language Model Optimization

arXiv - AI · 3 min ·
Machine Learning

[2512.05245] STAR-GO: Improving Protein Function Prediction by Learning to Hierarchically Integrate Ontology-Informed Semantic Embeddings

Abstract page for arXiv paper 2512.05245: STAR-GO: Improving Protein Function Prediction by Learning to Hierarchically Integrate Ontology...

arXiv - Machine Learning · 4 min ·
