[2603.19465] Global Convergence of Multiplicative Updates for the Matrix Mechanism: A Collaborative Proof with Gemini 3
Computer Science > Machine Learning
arXiv:2603.19465 (cs)
[Submitted on 19 Mar 2026]

Title: Global Convergence of Multiplicative Updates for the Matrix Mechanism: A Collaborative Proof with Gemini 3
Authors: Keith Rush

Abstract: We analyze a fixed-point iteration $v \leftarrow \phi(v)$ arising in the optimization of a regularized nuclear norm objective with Hadamard product structure, posed in~\cite{denisov} in the context of an optimization problem over the space of algorithms in private machine learning. We prove that the iteration $v^{(k+1)} = \text{diag}((D_{v^{(k)}}^{1/2} M D_{v^{(k)}}^{1/2})^{1/2})$ converges monotonically to the unique global optimizer of the potential function $J(v) = 2 \text{Tr}((D_v^{1/2} M D_v^{1/2})^{1/2}) - \sum_i v_i$, closing a problem left open there. The bulk of this proof was provided by Gemini 3, subject to some corrections and interventions; Gemini 3 also sketched the initial version of this note. The note is therefore as much a commentary on the practical use of AI in mathematics as it is the closure of a small gap in the literature. As such, we include a short narrative description of the prompting process and some resulting principles for using AI to prove mathematics.

Comments: Subjects: Machine Learning (cs.LG); Artificial Intelligence...
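To make the abstract's iteration concrete, the following is a minimal numerical sketch (not the authors' code) of the multiplicative update $v^{(k+1)} = \text{diag}((D_{v^{(k)}}^{1/2} M D_{v^{(k)}}^{1/2})^{1/2})$ and the potential $J$, using NumPy and an eigendecomposition-based PSD square root; the example matrix `M` is an arbitrary illustrative choice:

```python
import numpy as np

def psd_sqrt(A):
    # Symmetric PSD matrix square root via eigendecomposition;
    # eigenvalues are clipped at 0 to guard against round-off.
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

def phi(v, M):
    # One update: v <- diag((D_v^{1/2} M D_v^{1/2})^{1/2}),
    # where D_v = diag(v). The congruence is formed entrywise:
    # (D_v^{1/2} M D_v^{1/2})_{ij} = sqrt(v_i) M_{ij} sqrt(v_j).
    s = np.sqrt(v)
    return np.diag(psd_sqrt(s[:, None] * M * s[None, :]))

def J(v, M):
    # Potential J(v) = 2 Tr((D_v^{1/2} M D_v^{1/2})^{1/2}) - sum_i v_i.
    s = np.sqrt(v)
    return 2.0 * np.trace(psd_sqrt(s[:, None] * M * s[None, :])) - v.sum()

# Demo on a small PSD matrix (illustrative choice, not from the paper).
M = np.array([[3.0, 1.0],
              [1.0, 2.0]])
v = np.ones(2)
J_history = [J(v, M)]
for _ in range(200):
    v = phi(v, M)
    J_history.append(J(v, M))

residual = np.linalg.norm(phi(v, M) - v)  # fixed-point residual
```

Empirically the iterates drive the fixed-point residual to numerical zero while $J$ increases monotonically along the trajectory, matching the convergence behavior the abstract asserts.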