[2603.04323] PTOPOFL: Privacy-Preserving Personalised Federated Learning via Persistent Homology
Computer Science > Machine Learning

arXiv:2603.04323 (cs) [Submitted on 4 Mar 2026]

Title: PTOPOFL: Privacy-Preserving Personalised Federated Learning via Persistent Homology

Authors: Kelly L Vomo-Donfack, Adryel Hoszu, Grégory Ginot, Ian Morilla

Abstract: Federated learning (FL) faces two structural tensions: gradient sharing enables data-reconstruction attacks, while non-IID client distributions degrade aggregation quality. We introduce PTOPOFL, a framework that addresses both challenges simultaneously by replacing gradient communication with topological descriptors derived from persistent homology (PH). Clients transmit only 48-dimensional PH feature vectors (compact shape summaries whose many-to-one structure makes inversion provably ill-posed) rather than model gradients. The server performs topology-guided personalised aggregation: clients are clustered by Wasserstein similarity between their PH diagrams, intra-cluster models are topology-weighted, and clusters are blended with a global consensus. We prove an information-contraction theorem showing that PH descriptors leak strictly less mutual information per sample than gradients under strongly convex loss functions, and we establish linear convergence of the Wasserstein-weighted aggregation scheme with an error floor strictly smaller t...
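To make the client-side protocol concrete: the abstract does not specify how the 48-dimensional PH feature vector is constructed, so the following is a minimal sketch assuming a simple binned persistence summary (24 histogram bins each for the H0 and H1 diagrams). It uses ripser for the persistence computation; the function name ph_descriptor, the bin count, and the persistence range are illustrative assumptions, not the paper's featurisation.

```python
# Sketch of client-side PH feature extraction (assumed vectorisation).
import numpy as np
from ripser import ripser

def ph_descriptor(X: np.ndarray, bins_per_dim: int = 24,
                  max_pers: float = 2.0) -> np.ndarray:
    """Map a client's point cloud X of shape (n_samples, n_features)
    to a 48-dim descriptor: a normalised persistence histogram for H0
    concatenated with one for H1. Hypothetical featurisation."""
    dgms = ripser(X, maxdim=1)["dgms"]  # [H0 diagram, H1 diagram]
    feats = []
    for dgm in dgms:
        # Drop the infinite H0 bar, keep finite persistences (death - birth).
        finite = dgm[np.isfinite(dgm[:, 1])]
        pers = finite[:, 1] - finite[:, 0]
        hist, _ = np.histogram(pers, bins=bins_per_dim, range=(0.0, max_pers))
        feats.append(hist / max(1, len(pers)))  # normalise per diagram
    return np.concatenate(feats)  # shape (48,), transmitted instead of gradients
```

The many-to-one claim in the abstract corresponds to the fact that binning discards point identities within each histogram cell, so many distinct diagrams (and hence many datasets) map to the same descriptor.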
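Similarly, the server-side topology-guided aggregation can be sketched as follows, under stated assumptions: pairwise Wasserstein distances between persistence diagrams are computed with persim, clustering uses scikit-learn's agglomerative clustering on the precomputed distance matrix, and the inverse-distance intra-cluster weighting plus the fixed blend factor are illustrative choices, not the paper's exact scheme. The sketch also assumes the server has access to the clients' diagrams (or diagram summaries) for the distance computation.

```python
# Sketch of server-side topology-guided personalised aggregation (assumed scheme).
import numpy as np
from persim import wasserstein
from sklearn.cluster import AgglomerativeClustering

def aggregate(diagrams, client_models, n_clusters=3, blend=0.5):
    """diagrams: list of persistence diagrams (one (N, 2) array per client);
    client_models: list of flat parameter vectors. Returns one personalised
    model per client, blended with the global consensus."""
    k = len(diagrams)
    D = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            D[i, j] = D[j, i] = wasserstein(diagrams[i], diagrams[j])

    # Cluster clients by topological similarity.
    # Note: scikit-learn >= 1.2 uses `metric=`; older versions use `affinity=`.
    labels = AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed", linkage="average"
    ).fit_predict(D)

    global_model = np.mean(client_models, axis=0)
    personalised = []
    for i in range(k):
        members = np.flatnonzero(labels == labels[i])
        # Assumed weighting: cluster-mates weighted by similarity to client i.
        w = 1.0 / (1.0 + D[i, members])
        cluster_model = np.average([client_models[m] for m in members],
                                   axis=0, weights=w)
        # Blend the cluster model with the global consensus.
        personalised.append(blend * cluster_model + (1 - blend) * global_model)
    return personalised
```

In this reading, `blend` controls the personalisation/consensus trade-off the abstract describes: blend = 1 gives purely cluster-local models, blend = 0 recovers a single global model.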