FedRIC: A Trust-Aware Federated Reinforcement Learning Framework for Real-Time Industrial Control


Yidi Wang

Abstract

Real-time industrial control systems increasingly rely on intelligent agents to maintain stability, optimize throughput, and adapt to dynamic environments. Deploying deep reinforcement learning (DRL) agents in such safety-critical settings is difficult, however, owing to strict latency constraints, heterogeneous edge infrastructure, and stringent data privacy regulations. To address these challenges, we propose a framework that combines federated learning (FL) with reinforcement learning (RL) to enable decentralized training of control policies across multiple industrial edge nodes without sharing raw sensor data. Our approach, Federated Reinforcement Learning for Industrial Control (FedRIC), integrates local actor-critic learners with a global federated coordinator that aggregates policy gradients via adaptive trust-weighted averaging. A task-specific stabilization module ensures convergence despite non-stationary environment dynamics and client heterogeneity. We validate the framework on three industrial benchmark suites (Factory Assembly Line, Industrial Heating Process, and Smart Grid Control) under both synchronous and asynchronous FL settings. FedRIC achieves up to 23% higher reward and 42% faster convergence than centralized and naive FL-RL baselines, while meeting strict control-latency requirements and maintaining system safety. The framework thus offers a scalable, privacy-preserving path to industrial intelligence at the network edge.
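To make the aggregation step concrete, the sketch below illustrates one way trust-weighted averaging of client policy gradients could work. It is a minimal, hypothetical reading of the abstract, not the paper's actual algorithm: the softmax-normalized trust weights, the `temperature` parameter, and the function name `trust_weighted_aggregate` are all illustrative assumptions.

```python
import numpy as np

def trust_weighted_aggregate(client_grads, trust_scores, temperature=1.0):
    """Combine per-client policy gradients into one global update.

    client_grads : list of 1-D np.ndarray, flattened policy gradients,
                   one per industrial edge node
    trust_scores : array-like of nonnegative trust estimates (e.g., derived
                   from reward consistency or update staleness)
    temperature  : softmax temperature; lower values concentrate weight
                   on the most trusted clients
    """
    scores = np.asarray(trust_scores, dtype=np.float64) / temperature
    scores -= scores.max()                        # numerically stable softmax
    weights = np.exp(scores) / np.exp(scores).sum()
    stacked = np.stack(client_grads)              # (num_clients, num_params)
    return weights @ stacked                      # convex combination

# Example: three edge nodes; the third is unreliable and gets low trust.
rng = np.random.default_rng(0)
grads = [rng.normal(size=8) for _ in range(3)]
trust = [0.9, 0.8, 0.1]
global_grad = trust_weighted_aggregate(grads, trust)
```

Because the weights form a convex combination, a low-trust client's gradient is attenuated rather than discarded, which is one plausible way such a scheme could stay robust under client heterogeneity.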

Article Details

How to Cite
Wang, Y. (2025). FedRIC: A Trust-Aware Federated Reinforcement Learning Framework for Real-Time Industrial Control. Journal of Computer Science and Software Applications, 5(5). https://doi.org/10.5281/zenodo.15381931