[2508.09904] Beyond Naïve Prompting: Strategies for Improved Context-aided Forecasting with LLMs
Computer Science > Machine Learning

arXiv:2508.09904 (cs)

[Submitted on 13 Aug 2025 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: Beyond Naïve Prompting: Strategies for Improved Context-aided Forecasting with LLMs

Authors: Arjun Ashok, Andrew Robert Williams, Vincent Zhihao Zheng, Irina Rish, Nicolas Chapados, Étienne Marcotte, Valentina Zantedeschi, Alexandre Drouin

Abstract: Real-world forecasting requires models to integrate not only historical data but also relevant contextual information provided in textual form. While large language models (LLMs) show promise for context-aided forecasting, critical challenges remain: we lack diagnostic tools to understand failure modes, performance remains far below their potential, and high computational costs limit practical deployment. We introduce a unified framework of four strategies that address these limitations along three orthogonal dimensions: model diagnostics, accuracy, and efficiency. Through extensive evaluation across model families, from small open-source models to frontier models including Gemini, GPT, and Claude, we uncover both fundamental insights and practical solutions. Our findings span three key dimensions: diagnostic strategies reveal the "Execution Gap", where models correctly explain how context affects forecasts but fail to apply this reas...