[2601.03385] SIGMA: Scalable Spectral Insights for LLM Model Collapse
Computer Science > Machine Learning
arXiv:2601.03385 (cs)
[Submitted on 6 Jan 2026 (v1), last revised 23 Mar 2026 (this version, v3)]

Title: SIGMA: Scalable Spectral Insights for LLM Model Collapse
Authors: Yi Gu, Lingyou Pang, Xiangkun Ye, Tianyu Wang, Jianyu Lin, Carey E. Priebe, Alexander Aue

Abstract: The rapid adoption of synthetic data for training Large Language Models (LLMs) has introduced the technical challenge of "model collapse": a degenerative process where recursive training on model-generated content leads to a contraction of distributional variance and representational quality. While the phenomenology of collapse is increasingly evident, rigorous methods to quantify and predict its onset in high-dimensional spaces remain elusive. In this paper, we introduce SIGMA (Spectral Inequalities for Gram Matrix Analysis), a unified framework that benchmarks model collapse through the spectral lens of the embedding Gram matrix. By deriving and utilizing deterministic and stochastic bounds on the matrix's spectrum, SIGMA provides a mathematically grounded metric to track the contraction of the representation space. Crucially, our stochastic formulation enables scalable estimation of these bounds, making the framework applicable to large-scale foundation models where full eigendecomposition is intractable. We demonstrate that SIG...
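To make the scalability claim concrete: the abstract does not specify SIGMA's actual estimators, but the general idea of probing the spectrum of an embedding Gram matrix G = E E^T stochastically, without materializing G or running a full eigendecomposition, can be sketched with standard matrix-free tools. The sketch below uses power iteration for the top eigenvalue and a Hutchinson-style probe for the trace; the names `top_eigenvalue`, `hutchinson_trace`, and the toy embedding matrix are hypothetical and illustrative only, not the paper's method.

```python
# Illustrative sketch, NOT the SIGMA implementation: two standard
# matrix-free spectral estimators for a Gram matrix G = E @ E.T,
# applied to a toy embedding matrix E of shape (n_samples, dim).
import numpy as np

def top_eigenvalue(E, num_iters=50, seed=0):
    """Estimate lambda_max of G = E @ E.T by power iteration on E."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(E.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = E @ (E.T @ v)          # applies G @ v without forming the n x n matrix G
        v = w / np.linalg.norm(w)
    return v @ (E @ (E.T @ v))     # Rayleigh quotient approximates lambda_max

def hutchinson_trace(E, num_probes=64, seed=0):
    """Estimate tr(G), the sum of G's eigenvalues, with Rademacher probes.

    For a Gram matrix tr(G) equals ||E||_F^2 and could be computed exactly;
    the stochastic probe is shown because the same z^T f(G) z pattern
    extends to spectral quantities that have no closed form.
    """
    rng = np.random.default_rng(seed)
    n = E.shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        total += z @ (E @ (E.T @ z))          # unbiased sample of tr(G)
    return total / num_probes

# A crude contraction summary: the larger the share of the trace captured
# by the top eigenvalue, the more the representation space has collapsed
# toward a single dominant direction.
E = np.random.default_rng(1).standard_normal((10_000, 768))  # toy embeddings
lam_max = top_eigenvalue(E)
trace = hutchinson_trace(E)
print(f"top-eigenvalue share of spectrum: {lam_max / trace:.4f}")
```

Both routines touch E only through matrix-vector products, so their cost is O(n * dim) per iteration rather than the O(n^3) of a full eigendecomposition, which is the kind of scaling the abstract's stochastic formulation is aiming for.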