[2602.21229] Forecasting Future Language: Context Design for Mention Markets

arXiv - Machine Learning 4 min read Article

Summary

This paper examines how the design of input context affects the accuracy of forecasts that large language models (LLMs) produce for mention markets.

Why It Matters

Understanding how to effectively design input context for mention markets can significantly enhance the predictive capabilities of LLMs, which are increasingly used in financial forecasting. This research provides insights that could improve decision-making in finance by leveraging advanced AI techniques.

Key Takeaways

  • Richer contextual information consistently improves forecasting performance.
  • Market-Conditioned Prompting (MCP) enhances forecast calibration by treating market probability as a prior.
  • A combination of market probability and MCP (MixMCP) yields more robust predictions than using either method alone.

Quantitative Finance > General Finance — arXiv:2602.21229 (q-fin) — Submitted on 4 Feb 2026

Title: Forecasting Future Language: Context Design for Mention Markets

Authors: Sumin Kim, Jihoon Kwon, Yoon Kim, Nicole Kagan, Raffi Khatchadourian, Wonbin Ahn, Alejandro Lopez-Lira, Jaewon Lee, Yoontae Hwang, Oscar Levy, Yongjae Lee, Chanyeol Choi

Abstract: Mention markets, a type of prediction market in which contracts resolve based on whether a specified keyword is mentioned during a future public event, require accurate probabilistic forecasts of keyword-mention outcomes. While recent work shows that large language models (LLMs) can generate forecasts competitive with human forecasters, it remains unclear how input context should be designed to support accurate prediction. In this paper, we study this question through experiments on earnings-call mention markets, which require forecasting whether a company will mention a specified keyword during its upcoming call. We run controlled comparisons varying (i) which contextual information is provided (news and/or prior earnings-call transcripts) and (ii) how market probability (i.e., the prediction market contract price) is used. We introduce Market-Conditioned Prompting (MCP), which explicitly treats the market-implied probability as a prior and instructs the LLM to update t...
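The two ideas described above — MCP, which presents the market price to the model as an explicit prior, and MixMCP, which combines the market probability with the MCP forecast — can be sketched as follows. This is a hedged illustration only: the prompt wording, the helper names (`build_mcp_prompt`, `mix_mcp`), and the equal mixing weight are assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of Market-Conditioned Prompting (MCP) and a
# MixMCP-style blend. Prompt text, function names, and the mixing
# weight are assumptions, not taken from the paper.

def build_mcp_prompt(company: str, keyword: str, market_prob: float,
                     context: str) -> str:
    """Embed the market-implied probability in the prompt as an explicit
    prior that the LLM is instructed to update with the given context."""
    return (
        f"The prediction market currently prices a {market_prob:.0%} chance "
        f"that {company} mentions '{keyword}' on its upcoming earnings call.\n"
        f"Treat this market probability as your prior.\n"
        f"Context (news and prior earnings-call transcripts):\n{context}\n"
        f"Update the prior using the context and output a single "
        f"probability between 0 and 1."
    )


def mix_mcp(p_market: float, p_mcp: float, lam: float = 0.5) -> float:
    """MixMCP-style combination: a convex blend of the raw market price
    and the MCP forecast (lam is an assumed, tunable weight)."""
    return lam * p_market + (1.0 - lam) * p_mcp


prompt = build_mcp_prompt("Acme Corp", "tariffs", 0.35,
                          "Recent news: new tariff policy announced.")
blended = mix_mcp(0.35, 0.55)  # equal-weight blend of market and MCP forecasts
```

The convex blend hedges against either signal failing alone, which is one plausible reading of the takeaway that MixMCP is more robust than market probability or MCP individually.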


