[2601.23155] SPICE: Submodular Penalized Information-Conflict Selection for Efficient Large Language Model Training


arXiv - AI 4 min read

About this article


Computer Science > Machine Learning
arXiv:2601.23155 (cs)
[Submitted on 30 Jan 2026 (v1), last revised 8 Apr 2026 (this version, v2)]

Title: SPICE: Submodular Penalized Information-Conflict Selection for Efficient Large Language Model Training

Authors: Powei Chang, Jinpeng Zhang, Bowen Chen, Chenyu Wang, Chenlu Guo, Yixing Zhang, Yukang Gao, JianXiang Xiang, Yue Gao, Chaoqun Sun, Yiyi Chen, Dongying Kong

Abstract: Information-based data selection for instruction tuning is compelling: maximizing the log-determinant of the Fisher information yields a monotone submodular objective, enabling greedy algorithms to achieve a $(1-1/e)$ approximation under a cardinality budget. In practice, however, we identify that alleviating gradient conflicts, i.e., misalignment between per-sample gradients, is a key factor in slowing the decay of marginal log-determinant information gains, thereby preventing significant loss of information. We formalize this via an $\varepsilon$-decomposition that quantifies the deviation from ideal submodularity as a function of conflict statistics, yielding data-dependent approximation factors that tighten as conflicts diminish. Guided by this analysis, we propose SPICE, a conflict-aware selector that maximizes information while penalizing misalignment, and that supports...
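To make the selection objective concrete, here is a minimal sketch of the kind of procedure the abstract describes: greedy selection under a cardinality budget, scoring each candidate by the log-determinant information gain of its gradient sketch minus a penalty on pairwise gradient conflict. This is an illustrative reconstruction, not the paper's actual SPICE algorithm; the penalty form, the weight `lam`, and the use of raw gradient matrices as Fisher-information surrogates are all assumptions.

```python
import numpy as np

def logdet_info(G):
    # G: (k, d) matrix of selected per-sample gradient sketches.
    # Log-determinant of the regularized Gram matrix, a standard
    # monotone submodular surrogate for Fisher information.
    K = G @ G.T
    return np.linalg.slogdet(np.eye(len(G)) + K)[1]

def conflict_penalty(G):
    # Mean pairwise *negative* cosine similarity among selected
    # gradients: grows when gradients point in conflicting directions,
    # zero when all pairs are aligned. (Hypothetical penalty form.)
    if len(G) < 2:
        return 0.0
    Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
    C = Gn @ Gn.T
    iu = np.triu_indices(len(G), k=1)
    return float(np.mean(np.clip(-C[iu], 0.0, None)))

def greedy_select(grads, budget, lam=1.0):
    # Greedily pick `budget` samples maximizing information gain
    # minus lam times the conflict penalty.
    selected, remaining = [], list(range(len(grads)))
    for _ in range(budget):
        def score(S):
            G = grads[S]
            return logdet_info(G) - lam * conflict_penalty(G)
        best = max(remaining, key=lambda j: score(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because the log-determinant term is monotone submodular, plain greedy already carries the $(1-1/e)$ guarantee for that term alone; the conflict penalty is what makes the combined objective only approximately submodular, which is what the paper's $\varepsilon$-decomposition quantifies.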

Originally published on April 09, 2026. Curated by AI News.

Related Articles

Llms

We gave 45 psychological questionnaires to 50 LLMs. What we found was not "personality."
What is the "personality" of an LLM? What actually differentiates models psychometrically? Since LLMs entered public use, researchers hav...
Reddit - Artificial Intelligence · 1 min

Llms

How to Disable Google's Gemini in Chrome | WIRED
Chrome users were caught off guard by a 4-GB Google AI model baked into Chrome, sparking privacy concerns. The good news: You can easily ...
Wired - AI · 6 min

Llms

OpenAI introduces new 'Trusted Contact' safeguard for cases of possible self-harm | TechCrunch
The company is expanding its efforts to protect ChatGPT users in cases where conversations may turn to self-harm.
TechCrunch - AI · 5 min

Llms

Mira Murati's deposition pulled back the curtain on Sam Altman's ouster | The Verge
Thanks to Musk v. Altman, the public is getting a concrete look at details of Sam Altman's ouster from OpenAI, much of it centered on for...
The Verge - AI · 11 min

