[2412.16031] Learning sparsity-promoting regularizers for linear inverse problems
arXiv:2412.16031 (stat) [Submitted on 20 Dec 2024 (v1), last revised 2 Mar 2026 (this version, v2)]

Title: Learning sparsity-promoting regularizers for linear inverse problems
Authors: Giovanni S. Alberti, Ernesto De Vito, Tapio Helin, Matti Lassas, Luca Ratti, Matteo Santacesaria
Subjects: Statistics > Machine Learning

Abstract: This paper introduces a novel approach to learning sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework to select an optimal synthesis operator, denoted by $B$, which regularizes the inverse problem while promoting sparsity in the solution. The method leverages statistical properties of the underlying data and incorporates prior knowledge through the choice of $B$. We establish the well-posedness of the optimization problem, provide theoretical guarantees for the learning process, and present sample complexity bounds. The approach is demonstrated through theoretical infinite-dimensional examples, including compact perturbations of a known operator and the problem of learning the mother wavelet, as well as through extensive numerical simulations. This work extends previous efforts in Tikhonov regularization by addressing non-differentiable norms and proposing a data-driven approach for sparse regularization in infinite dimensions.
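In the standard synthesis formulation of sparse regularization, the bilevel framework described in the abstract admits the following schematic form; this is a sketch for orientation only, in which the forward operator $A$, the regularization weight $\lambda > 0$, and the squared-error upper-level loss are notational assumptions rather than details taken from the paper. The lower level reconstructs from data $y$ with a fixed synthesis operator $B$,
\[
\widehat{z}(y; B) \in \operatorname*{arg\,min}_{z} \ \tfrac{1}{2}\,\| A B z - y \|^{2} + \lambda \| z \|_{1},
\qquad
\widehat{x}(y; B) = B\,\widehat{z}(y; B),
\]
where the $\ell^{1}$ penalty on the coefficients $z$ promotes sparsity in the synthesis representation. The upper level then selects $B$ by minimizing the expected reconstruction error over the joint distribution of signals and data,
\[
\min_{B} \ \mathbb{E}_{(x, y)} \big[\, \| \widehat{x}(y; B) - x \|^{2} \,\big],
\]
so that the learned regularizer is adapted to the statistical properties of the underlying data.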