[2602.16749] U-FedTomAtt: Ultra-lightweight Federated Learning with Attention for Tomato Disease Recognition
Summary
The paper presents U-FedTomAtt, an ultra-lightweight federated learning framework for tomato disease recognition, designed to perform well in resource-constrained environments.
Why It Matters
This research addresses the challenges of deploying AI in agriculture, particularly in privacy-sensitive and resource-limited settings. By enhancing disease recognition in tomatoes, it supports sustainable farming practices and improves crop management.
Key Takeaways
- The U-FedTomAtt framework minimizes computational and memory overhead, with only 245.34K parameters and 71.41 MFLOPS.
- Incorporates a novel local-global residual attention module to maintain accuracy despite the reduced capacity (an illustrative sketch follows this list).
- Achieves high accuracy and F1-scores on benchmark datasets for tomato diseases.
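This excerpt does not describe the internals of the local-global residual attention module, so the following PyTorch sketch is illustrative only: it pairs a depthwise-convolution branch for local context with a squeeze-and-excitation-style branch for global channel context, fused through a residual connection. The class name `LocalGlobalResidualAttention` and the `reduction` parameter are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the paper's actual local-global residual
# attention design is not given in this summary.
import torch
import torch.nn as nn

class LocalGlobalResidualAttention(nn.Module):
    """Hypothetical local-global attention block with a residual skip."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Local branch: depthwise 3x3 conv captures neighbourhood context.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )
        # Global branch: squeeze-and-excitation style channel attention.
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-weight the features with both attention maps, keeping a residual path.
        return x + x * self.local(x) * self.global_att(x)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(LocalGlobalResidualAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

Whatever the exact design, keeping both branches built from 1x1 and depthwise convolutions is what lets an attention module of this kind stay within a sub-250K-parameter budget.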
arXiv:2602.16749 (q-bio) · Quantitative Biology > Quantitative Methods · Submitted on 18 Feb 2026
Title: U-FedTomAtt: Ultra-lightweight Federated Learning with Attention for Tomato Disease Recognition
Authors: Romiyal George, Sathiyamohan Nishankar, Selvarajah Thuseethan, Chathrie Wimalasooriya, Yakub Sebastian, Roshan G. Ragel, Zhongwei Liang
Abstract: Federated learning has emerged as a privacy-preserving and efficient approach for deploying intelligent agricultural solutions. Accurate edge-based diagnosis across geographically dispersed farms is crucial for recognising tomato diseases in sustainable farming. Traditional centralised training aggregates raw data on a central server, leading to communication overhead, privacy risks and latency. Meanwhile, edge devices require lightweight networks to operate effectively within limited resources. In this paper, we propose U-FedTomAtt, an ultra-lightweight federated learning framework with attention for tomato disease recognition in resource-constrained and distributed environments. The model comprises only 245.34K parameters and 71.41 MFLOPS. First, we propose an ultra-lightweight neural network with dilated bottleneck (DBNeck) modules and a linear transformer to minimise computational and memory overhead. To mitigate potential accuracy loss...
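For readers unfamiliar with the federated setup the abstract contrasts with centralised training, the sketch below shows a minimal FedAvg-style round: each client trains a copy of the global model on its own images, and only model weights are averaged on the server, so raw data never leaves the farm. The excerpt does not confirm which aggregation rule U-FedTomAtt actually uses; the functions, loaders, and hyperparameters here are placeholders.

```python
# Minimal FedAvg-style round, assuming plain weight averaging; the paper's
# exact aggregation scheme is not stated in this excerpt.
import copy
import torch
import torch.nn as nn

def local_update(global_model: nn.Module, loader, epochs: int = 1, lr: float = 0.01):
    """Train a copy of the global model on one client's private images."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model.state_dict()

def fed_avg(client_states):
    """Average the clients' floating-point weights on the server."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        if avg[key].dtype.is_floating_point:
            avg[key] = torch.stack([s[key] for s in client_states]).mean(dim=0)
    return avg

def run_round(global_model: nn.Module, client_loaders):
    """One communication round: local training on every client, then averaging."""
    states = [local_update(global_model, loader) for loader in client_loaders]
    global_model.load_state_dict(fed_avg(states))
    return global_model
```

In a setup like the paper's, the ultra-lightweight DBNeck/linear-transformer network would be the `global_model`, which is what keeps each local update and weight exchange cheap enough for edge devices.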