Federated Learning for Privacy-Preserving Edge Intelligence: A Scalable Systems Perspective


Elowen Price

Abstract

The rapid proliferation of edge devices and the exponential growth of user-generated data have accelerated the demand for intelligent systems that operate in distributed, resource-constrained, and privacy-sensitive environments. Federated Learning (FL) has emerged as a promising solution to this challenge by enabling collaborative model training across decentralized devices without transferring raw data to a central server. This paper presents a comprehensive systems-level framework for deploying scalable and privacy-preserving FL on heterogeneous edge platforms. We propose a modular architecture that integrates adaptive model compression, dynamic client selection, and secure gradient aggregation under bandwidth and compute constraints. Our design emphasizes fault tolerance, communication efficiency, and adversarial robustness while maintaining inference performance comparable to centralized training. Extensive experiments on CIFAR-10, HAR, and speech datasets using Raspberry Pi and NVIDIA Jetson devices show that our system achieves up to a 38% reduction in communication cost and a 26% training speed-up, with only a 1.7% accuracy loss compared to centralized baselines. We further demonstrate the system’s resilience to client dropout and adversarial data poisoning. This work contributes a practical, extensible platform for real-world FL deployment and offers insights into building future intelligent edge infrastructures.
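
To make the ideas sketched in the abstract concrete, the Python fragment below illustrates one plausible round of federated averaging with resource-aware client selection and top-k update sparsification. It is a minimal sketch for exposition only, not the paper's implementation: the function names (select_clients, sparsify_topk, fedavg_round), the bandwidth-times-compute selection heuristic, and all parameter values are assumptions introduced here.

# Minimal sketch: one FedAvg round with hypothetical client selection
# and top-k compression. Names and heuristics are illustrative only.
import numpy as np

def select_clients(profiles, k):
    # Pick the k clients with the highest bandwidth*compute score
    # (a hypothetical resource-aware heuristic).
    scores = {cid: p["bandwidth"] * p["compute"] for cid, p in profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def sparsify_topk(update, ratio=0.1):
    # Keep only the largest-magnitude fraction of entries as a simple
    # stand-in for adaptive model/update compression.
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(update.shape)

def fedavg_round(global_model, client_updates, weights):
    # Weighted average of client updates applied to the global model
    # (the standard FedAvg aggregation step).
    total = sum(weights)
    agg = sum(w * u for w, u in zip(weights, client_updates)) / total
    return global_model + agg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_model = np.zeros(8)
    profiles = {i: {"bandwidth": rng.uniform(1, 10),
                    "compute": rng.uniform(1, 4),
                    "n_samples": int(rng.integers(50, 500))} for i in range(10)}
    chosen = select_clients(profiles, k=4)
    updates = [sparsify_topk(rng.normal(size=8)) for _ in chosen]
    weights = [profiles[c]["n_samples"] for c in chosen]
    global_model = fedavg_round(global_model, updates, weights)
    print("selected clients:", chosen)
    print("updated global model:", global_model)

In a deployment along the lines the abstract describes, the aggregation step would additionally run under a secure gradient aggregation protocol, so the server never observes individual client updates in the clear.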

Article Details

How to Cite
Price, E. (2025). Federated Learning for Privacy-Preserving Edge Intelligence: A Scalable Systems Perspective. Journal of Computer Science and Software Applications, 5(5). https://doi.org/10.5281/zenodo.15381925
Section
Articles