[2510.03165] FTTE: Enabling Federated and Resource-Constrained Deep Edge Intelligence
Computer Science > Machine Learning

arXiv:2510.03165 (cs)

[Submitted on 3 Oct 2025 (v1), last revised 23 Mar 2026 (this version, v2)]

Title: FTTE: Enabling Federated and Resource-Constrained Deep Edge Intelligence

Authors: Irene Tenison, Anna Murphy, Charles Beauville, Lalana Kagal

Abstract: Federated learning (FL) enables collaborative model training across distributed devices while preserving data privacy, but deployment on resource-constrained edge nodes remains challenging due to limited memory, energy, and communication bandwidth. Traditional synchronous and asynchronous FL approaches further suffer from straggler-induced delays and slow convergence in heterogeneous, large-scale networks. We present FTTE (Federated Tiny Training Engine), a novel semi-asynchronous FL framework that uniquely employs sparse parameter updates and a staleness-weighted aggregation based on both the age and variance of client updates. Extensive experiments across diverse models and data distributions - including up to 500 clients and 90% stragglers - demonstrate that FTTE not only achieves 81% faster convergence, 80% lower on-device memory usage, and a 69% smaller communication payload than synchronous FL (this http URL), but also consistently reaches comparable or higher target accuracy than semi-asynchronous (this http URL) in ch...
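The abstract describes FTTE's core server-side mechanism: aggregating sparse client updates with weights that shrink as an update's staleness (age) and variance grow. The paper's exact weighting formula is not given on this page, so the sketch below uses a hypothetical form, `w = 1 / (1 + alpha * age + beta * variance)`, purely to illustrate the idea; the function names, parameters, and update format are all assumptions, not the authors' implementation.

```python
import numpy as np


def staleness_weight(age, variance, alpha=0.5, beta=0.5):
    # Hypothetical weighting: down-weight updates that are old (large age)
    # or noisy (large variance). alpha/beta are illustrative knobs, not
    # values from the FTTE paper.
    return 1.0 / (1.0 + alpha * age + beta * variance)


def aggregate(global_params, client_updates):
    """Apply sparse, staleness-weighted client updates to the global model.

    client_updates: list of (indices, delta, age, variance) tuples, where
    `indices` selects the sparse subset of parameters a client trained,
    `delta` is that client's update for those parameters, `age` is how many
    server rounds old the update is, and `variance` summarizes its noise.
    """
    new_params = global_params.copy()
    for indices, delta, age, variance in client_updates:
        w = staleness_weight(age, variance)
        new_params[indices] += w * delta  # sparse in-place weighted update
    return new_params


# Example: a fresh, low-variance update is applied at nearly full weight,
# while a stale, noisy one is strongly attenuated.
params = np.zeros(4)
updates = [
    (np.array([0, 1]), np.array([1.0, 1.0]), 0, 0.0),  # fresh client
    (np.array([1, 2]), np.array([1.0, 1.0]), 3, 1.0),  # straggler
]
params = aggregate(params, updates)
```

In this toy run the fresh client's update lands with weight 1.0 while the straggler's is scaled down to 1/3, capturing why a semi-asynchronous server can accept straggler updates without letting them dominate the model.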