Federated Learning for Privacy-Preserving Edge Intelligence: A Scalable Systems Perspective
Abstract
The rapid proliferation of edge devices and the exponential growth of user-generated data have accelerated the demand for intelligent systems that operate in distributed, resource-constrained, and privacy-sensitive environments. Federated Learning (FL) has emerged as a promising solution to this challenge by enabling collaborative model training across decentralized devices without transferring raw data to a central server. This paper presents a comprehensive systems-level framework for deploying scalable and privacy-preserving FL on heterogeneous edge platforms. We propose a modular architecture that integrates adaptive model compression, dynamic client selection, and secure gradient aggregation under bandwidth and compute constraints. Our design emphasizes fault tolerance, communication efficiency, and adversarial robustness while maintaining inference performance comparable to centralized training. Extensive experiments on CIFAR-10, human activity recognition (HAR), and speech datasets using Raspberry Pi and NVIDIA Jetson devices show that our system achieves up to a 38% reduction in communication cost and a 26% training speed-up, with only a 1.7% accuracy loss compared to centralized baselines. We further demonstrate the system’s resilience to client dropout and adversarial data poisoning. This work contributes a practical, extensible platform for real-world FL deployment and offers insights into building future intelligent edge infrastructures.
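
The federated workflow summarized above, in which clients train locally and share only model updates that a server aggregates, can be illustrated with a minimal federated-averaging sketch. The function names (local_update, aggregate), the linear model, and the weighted-averaging rule below are illustrative assumptions for exposition, not the paper's actual system or API.

```python
# Minimal federated-averaging sketch: clients train locally on private data and
# send only model updates; the server aggregates them without seeing raw data.
# All names and the toy linear model are illustrative, not the paper's implementation.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    """One client's local training (linear model, mean squared error) on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of MSE on local data only
        w -= lr * grad
    return w, len(y)                        # updated weights and local sample count

def aggregate(client_results):
    """Server-side weighted average of client models, weighted by local data size."""
    total = sum(n for _, n in client_results)
    return sum(w * (n / total) for w, n in client_results)

# Toy simulation: three clients, each holding data that never leaves the client.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(30):                         # communication rounds
    results = [local_update(global_w, X, y) for X, y in clients]
    global_w = aggregate(results)
print("learned weights:", global_w)         # approaches true_w without centralizing data
```

In this sketch only the model weights cross the network each round; compression, client selection, and secure aggregation would be layered on top of this exchange.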
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Mind forge Academia also operates under the Creative Commons License CC BY 4.0. This allows you to copy and redistribute the material in any medium or format for any purpose, even commercially, provided that you give appropriate attribution.