[2505.23783] Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning
Statistics > Machine Learning
arXiv:2505.23783 (stat)
[Submitted on 22 May 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning
Authors: Korel Gundem, Juncheng Dong, Dennis Zhang, Vahid Tarokh, Zhengling Qi

Abstract: In-Context Learning (ICL) allows Large Language Models (LLMs) to adapt to new tasks with just a few examples, but their predictions often suffer from systematic biases, leading to unstable classification performance. While calibration techniques have been proposed to mitigate these biases, we show that, in logit space, many of these methods amount to merely shifting the LLM's decision boundary, without the ability to alter its orientation. This proves inadequate when biases leave the LLM severely misaligned. To address these limitations and provide a unifying framework, we propose Supervised Calibration (SC), a loss-minimization-based framework that learns an optimal, per-class affine transformation of the LLM's predictive probabilities in logit space, without requiring external data beyond the context. By using a more expressive functional class, SC not only subsumes many existing ICL calibration methods as special cases but also enables altering a...
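To make the abstract's geometric point concrete, the sketch below contrasts a shift-only calibrator (a per-class bias in logit space, which can only translate the decision boundary) with a per-class affine map (scale plus bias, which can also reorient it). This is a minimal illustration, not the paper's implementation: the diagonal-scale parameterization (a_k * z_k + b_k), the cross-entropy objective, and the plain gradient-descent fit are all assumptions made here for clarity; the paper's exact functional class and optimizer may differ.

```python
# Hypothetical sketch of per-class affine calibration in logit space.
# Assumptions (not from the paper): diagonal scale a plus bias b,
# mean cross-entropy loss, vanilla gradient descent.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_affine_calibration(logits, labels, lr=0.1, steps=500):
    """Learn per-class scale a and bias b so softmax(a * z + b)
    minimizes cross-entropy on the labeled in-context examples.

    logits: (n, K) array of LLM logits for the K candidate labels
    labels: (n,) array of integer class labels
    """
    n, K = logits.shape
    a = np.ones(K)   # per-class scale: can change the boundary's orientation
    b = np.zeros(K)  # per-class bias: a pure shift, as in prior calibrators
    onehot = np.eye(K)[labels]
    for _ in range(steps):
        p = softmax(logits * a + b)         # calibrated probabilities
        g = (p - onehot) / n                # grad of mean cross-entropy w.r.t. transformed logits
        a -= lr * (g * logits).sum(axis=0)  # chain rule through the per-class scale
        b -= lr * g.sum(axis=0)             # chain rule through the per-class bias
    return a, b

def calibrate(logits, a, b):
    return softmax(logits * a + b)
```

Freezing a = 1 and learning only b recovers a shift-only calibrator of the kind the abstract critiques: the pairwise decision boundary a_j z_j + b_j = a_k z_k + b_k keeps a fixed normal direction and only translates. Allowing distinct per-class scales changes that normal, which is the extra expressiveness the affine class buys.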