[2602.08351] The Chicken and Egg Dilemma: Co-optimizing Data and Model Configurations for LLMs
Summary
The paper tackles the challenge of co-optimizing data and model configurations for training large language models (LLMs), introducing JoBS, a method that jointly searches over both using a performance predictor to guide Bayesian optimization efficiently.
Why It Matters
This research addresses a critical issue in machine learning: the interplay between data and model configurations can significantly impact performance, yet the two are usually tuned in isolation. By proposing JoBS, the authors offer a solution that could lead to more effective LLM training, which matters as these models become integral to a growing range of AI applications.
Key Takeaways
- JoBS optimizes data and model configurations jointly, rather than fixing one and tuning the other as prior methods do.
- The method learns a scaling-law-inspired performance predictor from a small number of training steps, avoiding the cost of full training runs.
- JoBS outperforms existing optimization methods across various tasks.
Computer Science > Machine Learning
arXiv:2602.08351 (cs)
[Submitted on 9 Feb 2026 (v1), last revised 19 Feb 2026 (this version, v2)]
Title: The Chicken and Egg Dilemma: Co-optimizing Data and Model Configurations for LLMs
Authors: Zhiliang Chen, Alfred Wei Lun Leong, Shao Yong Ong, Apivich Hemachandra, Gregory Kang Ruey Lau, Chuan-Sheng Foo, Zhengyuan Liu, Nancy F. Chen, Bryan Kian Hsiang Low
Abstract: Co-optimizing data and model configurations for training LLMs presents a classic chicken-and-egg dilemma: The best training data configuration (e.g., data mixture) for a downstream task depends on the chosen model configuration (e.g., model architecture), and vice versa. However, jointly optimizing both data and model configurations is often deemed intractable, and existing methods focus on either data or model optimization without considering their interaction. We introduce JoBS, an approach that uses a scaling-law-inspired performance predictor to aid Bayesian optimization (BO) in jointly optimizing LLM training data and model configurations efficiently. JoBS allocates a portion of the optimization budget to learn an LLM performance predictor that predicts how promising a training configuration is from a small number of training steps. The remaining budget is used to perform BO entirely with the pr...
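The two-phase budget split in the abstract can be illustrated with a minimal Python sketch. Everything here is a hedged assumption for illustration, not the paper's implementation: the toy loss function, the nearest-neighbour surrogate (standing in for the scaling-law-inspired predictor), the greedy selection (standing in for Bayesian optimization), and all names (`short_run_loss`, `jobs_sketch`) are hypothetical.

```python
import random

def short_run_loss(config):
    # Stand-in for a few truncated training steps of an LLM; in
    # practice this would launch a short run and return its loss.
    # Toy optimum is at data_ratio=0.6, width=512 (illustrative only).
    data_ratio, width = config
    return (data_ratio - 0.6) ** 2 + (width - 512) ** 2 / 1e6

def jobs_sketch(total_budget, predictor_budget, candidates):
    # Phase 1: spend part of the budget on short runs to fit a cheap
    # performance predictor over (data, model) configurations.
    observed = {}
    for config in random.sample(candidates, predictor_budget):
        observed[config] = short_run_loss(config)

    def predict(config):
        # Crude nearest-neighbour surrogate over observed configs,
        # a stand-in for the paper's scaling-law-inspired predictor.
        nearest = min(
            observed,
            key=lambda c: (c[0] - config[0]) ** 2
            + (c[1] - config[1]) ** 2 / 1e6,
        )
        return observed[nearest]

    # Phase 2: spend the remaining budget searching configurations
    # ranked by the predictor instead of by full training runs
    # (greedy selection here; the paper uses Bayesian optimization).
    best = min(observed.items(), key=lambda kv: kv[1])
    for _ in range(total_budget - predictor_budget):
        remaining = [c for c in candidates if c not in observed]
        if not remaining:
            break
        config = min(remaining, key=predict)
        loss = short_run_loss(config)
        observed[config] = loss
        if loss < best[1]:
            best = (config, loss)
    return best

if __name__ == "__main__":
    random.seed(0)
    grid = [(r, w) for r in (0.2, 0.4, 0.6, 0.8) for w in (256, 512, 1024)]
    print(jobs_sketch(total_budget=8, predictor_budget=4, candidates=grid))
```

The key design choice mirrored here is the budget split: cheap short-run observations buy a predictor, and that predictor then replaces expensive full training runs inside the search loop.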