[2601.09166] DP-FedSOFIM: Differentially Private Federated Stochastic Optimization using Regularized Fisher Information Matrix
Computer Science > Machine Learning

arXiv:2601.09166 (cs)

[Submitted on 14 Jan 2026 (v1), last revised 24 Mar 2026 (this version, v2)]

Title: DP-FedSOFIM: Differentially Private Federated Stochastic Optimization using Regularized Fisher Information Matrix

Authors: Sidhant Nair, Tanmay Sen, Mrinmay Sen, Sayantan Banerjee

Abstract: Differentially private federated learning (DP-FL) often suffers from slow convergence under tight privacy budgets because the noise required for privacy preservation degrades gradient quality. Although second-order optimization can accelerate training, existing approaches for DP-FL face significant scalability limitations: Newton-type methods require clients to compute Hessians, while feature covariance methods scale poorly with model dimension. We propose DP-FedSOFIM, a simple and scalable second-order optimization method for DP-FL. The method constructs an online regularized proxy for the Fisher information matrix at the server using only privatized aggregated gradients, capturing useful curvature information without requiring Hessian computations or feature covariance estimation. Efficient rank-one updates based on the Sherman-Morrison formula enable communication costs proportional to the model size and require only O(d) client-side memory...
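The abstract does not give the paper's exact construction, but the core device it names, rank-one updates of an inverse via the Sherman-Morrison formula, can be sketched generically. The snippet below maintains the inverse of a regularized Fisher-style proxy F = lam*I + sum_t g_t g_t^T, updated online one gradient at a time; the regularizer `lam`, the random "gradients" `g`, and the loop structure are illustrative assumptions, not the paper's algorithm (note the server-side inverse here is a dense d-by-d matrix, so the O(d) memory claim in the abstract pertains to the client side).

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Given A^{-1}, return (A + u v^T)^{-1} using the Sherman-Morrison formula."""
    Au = A_inv @ u                      # A^{-1} u
    vA = v @ A_inv                      # v^T A^{-1}
    denom = 1.0 + v @ Au                # 1 + v^T A^{-1} u
    return A_inv - np.outer(Au, vA) / denom

d = 5
lam = 1.0                               # assumed regularization strength
F_inv = np.eye(d) / lam                 # inverse of the initial proxy lam*I

rng = np.random.default_rng(0)
grads = [rng.normal(size=d) for _ in range(10)]
for g in grads:                         # g stands in for a privatized aggregated gradient
    F_inv = sherman_morrison_update(F_inv, g, g)

# Sanity check: the incrementally maintained inverse matches a direct inverse.
F = lam * np.eye(d) + sum(np.outer(g, g) for g in grads)
assert np.allclose(F_inv, np.linalg.inv(F))
```

Each update costs O(d^2) time and avoids any explicit matrix inversion, which is what makes rank-one curvature maintenance tractable compared to recomputing an inverse from scratch.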