[2602.17694] AsynDBT: Asynchronous Distributed Bilevel Tuning for efficient In-Context Learning with Large Language Models
Summary
The paper presents AsynDBT, an algorithm for asynchronous distributed bilevel tuning aimed at improving in-context learning with large language models (LLMs). It addresses challenges in federated learning by optimizing sample selection and prompt fragments, enhancing performance while preserving data privacy.
Why It Matters
As large language models become integral to various applications, optimizing their learning processes is crucial for efficiency and effectiveness. AsynDBT offers a solution to common issues in federated learning, such as data privacy and adaptation to heterogeneous environments, making it relevant for researchers and practitioners in AI and machine learning.
Key Takeaways
- AsynDBT enhances in-context learning by optimizing sample selection and prompt fragments.
- The algorithm addresses privacy concerns through federated learning techniques.
- AsynDBT demonstrates improved performance on benchmark datasets compared to previous methods.
- Theoretical convergence guarantees are provided for the proposed algorithm.
- The distributed architecture allows adaptability to different computing environments.
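The bilevel idea behind the takeaways above can be sketched, very loosely, as an outer search over which in-context examples to include, scored by an inner (black-box) model evaluation. Everything in the sketch below — `mock_llm_loss`, `select_examples`, and the scoring rule — is illustrative and not taken from the paper, which treats the cloud model as a gradient-free API:

```python
import random

def mock_llm_loss(prompt_examples, query, answer):
    """Stand-in for a cloud LLM API call (the inner level).

    Scores how well the selected in-context examples help answer the
    query; lower is better. As a toy proxy, we reward examples whose
    label matches the target answer. `query` is unused here but kept
    in the signature, since a real evaluation would depend on it.
    """
    matches = sum(1 for ex in prompt_examples if ex["label"] == answer)
    return 1.0 / (1.0 + matches)

def select_examples(pool, query, answer, k=2, trials=20, seed=0):
    """Outer level of a bilevel scheme: search over candidate subsets
    of demonstrations; each candidate is scored by the inner,
    gradient-free LLM evaluation."""
    rng = random.Random(seed)
    best_subset, best_loss = None, float("inf")
    for _ in range(trials):
        subset = rng.sample(pool, k)          # candidate demonstrations
        loss = mock_llm_loss(subset, query, answer)  # inner evaluation
        if loss < best_loss:
            best_subset, best_loss = subset, loss
    return best_subset, best_loss

# Toy pool: 8 labeled examples, half with each label.
pool = [{"text": f"example {i}", "label": i % 2} for i in range(8)]
subset, loss = select_examples(pool, query="q", answer=1)
```

In AsynDBT's distributed setting, each client would presumably run such evaluations on local data and report results asynchronously, so slow clients (stragglers) do not block the others; the sketch above shows only the single-node selection loop.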
Computer Science > Machine Learning
arXiv:2602.17694 (cs) [Submitted on 6 Feb 2026]
Authors: Hui Ma, Shaoyu Dou, Ya Liu, Fei Xing, Li Feng, Feng Pi
Abstract
With the rapid development of large language models (LLMs), an increasing number of applications leverage cloud-based LLM APIs to reduce usage costs. However, since the parameters and gradients of cloud-based models are inaccessible, users must adjust prompts manually or with heuristic algorithms to steer LLM outputs, which requires costly optimization procedures. In-context learning (ICL) has recently emerged as a promising paradigm that enables LLMs to adapt to new tasks using examples provided within the input, eliminating the need for parameter updates. Nevertheless, the advancement of ICL is often hindered by the lack of high-quality data, which is often sensitive and difficult to share. Federated learning (FL) offers a potential solution by enabling collaborative training of distributed LLMs while preserving data privacy. Despite this, previous FL approaches that incorporate ICL have struggled with severe straggler problems and with heterogeneous, non-identically distributed data. To ...
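In-context learning, as described in the abstract, amounts to packing task demonstrations into the model's input so it adapts without any parameter updates. A minimal, generic prompt-construction sketch (the format and field names are illustrative, not specific to AsynDBT):

```python
def build_icl_prompt(demonstrations, query):
    """Assemble a few-shot prompt: labeled task demonstrations
    followed by the new query, which the model completes."""
    lines = []
    for demo in demonstrations:
        lines.append(f"Input: {demo['input']}\nOutput: {demo['output']}")
    # The final block leaves "Output:" open for the model to fill in.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

demos = [
    {"input": "The movie was wonderful.", "output": "positive"},
    {"input": "I hated every minute.", "output": "negative"},
]
prompt = build_icl_prompt(demos, "A truly delightful film.")
```

Which demonstrations to include, and in what form, is exactly the choice AsynDBT frames as an optimization problem over sample selection and prompt fragments.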