[2604.05842] Expectation Maximization (EM) Converges for General Agnostic Mixtures
Computer Science > Machine Learning
arXiv:2604.05842 (cs)
[Submitted on 7 Apr 2026]

Title: Expectation Maximization (EM) Converges for General Agnostic Mixtures
Authors: Avishek Ghosh

Abstract: Mixtures of linear regressions are well studied in statistics and machine learning, where the data points are generated probabilistically by $k$ linear models. Algorithms like Expectation Maximization (EM) can be used to recover the ground-truth regressors for this problem. Recently, in \cite{pal2022learning,ghosh_agnostic}, the mixed linear regression problem was studied in the agnostic setting, where no generative model on the data is assumed. Rather, given a set of data points, the objective is to \emph{fit} $k$ lines by minimizing a suitable loss function. It was shown that a modification of EM, namely gradient EM, converges exponentially fast to an appropriately defined loss minimizer even in the agnostic setting. In this paper, we study the problem of \emph{fitting} $k$ parametric functions to a given set of data points. We adhere to the agnostic setup. However, instead of fitting lines equipped with quadratic loss, we consider fitting arbitrary parametric functions equipped with a strongly convex and smooth loss. This framework encompasses a large class of problems, including (regularized) mixed linear regression and mixed linear classifiers (mixed logistic regression)…
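
To make the gradient-EM idea concrete, below is a minimal sketch for the special case the abstract mentions: fitting $k$ lines under quadratic loss in the agnostic setting, with no generative model assumed. It is an illustration of the general gradient-EM template (soft E-step, single gradient step in place of the exact M-step), not necessarily the exact algorithm or assignment rule analyzed in the paper; all names (gradient_em_step, eta, beta) are hypothetical.

import numpy as np

def gradient_em_step(X, y, thetas, eta=0.1, beta=1.0):
    """One gradient-EM iteration for fitting k lines under quadratic loss.

    X      : (n, d) data points
    y      : (n,)   responses
    thetas : (k, d) current regressors, one row per component
    eta    : gradient step size
    beta   : inverse temperature for the soft assignments (assumption)
    """
    # E-step: residual of every point under every component.
    residuals = y[:, None] - X @ thetas.T           # shape (n, k)
    losses = 0.5 * residuals ** 2                   # quadratic loss per (point, component)
    # Softmax over components: each point is softly assigned to the
    # lines that fit it best (subtract the row minimum for stability).
    w = np.exp(-beta * (losses - losses.min(axis=1, keepdims=True)))
    w /= w.sum(axis=1, keepdims=True)               # (n, k) responsibilities

    # M-step, gradient version: instead of exactly solving k weighted
    # least-squares problems, take one gradient step on each component's
    # responsibility-weighted loss.
    n = X.shape[0]
    grads = -(w * residuals).T @ X / n              # (k, d) gradients
    return thetas - eta * grads

Iterating this update from a suitable initialization is the kind of procedure whose exponential (linear-rate) convergence to the loss minimizer the agnostic analysis concerns; replacing the quadratic per-component loss with any strongly convex and smooth loss yields the general parametric setting studied here.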