FedRIC: A Trust-Aware Federated Reinforcement Learning Framework for Real-Time Industrial Control
Abstract
Real-time industrial control systems increasingly rely on intelligent agents to maintain stability, optimize throughput, and adapt to dynamic environments. However, deploying deep reinforcement learning (DRL) agents in such safety-critical settings is challenging due to strict latency constraints, heterogeneous edge infrastructure, and stringent data privacy regulations. To address these challenges, we propose a novel framework that combines federated learning (FL) with reinforcement learning (RL) to enable decentralized training of control policies across multiple industrial edge nodes without sharing raw sensor data. Our approach, termed Federated Reinforcement Learning for Industrial Control (FedRIC), integrates local actor-critic learners with a global federated coordinator that aggregates policy gradients using adaptive trust-weighted averaging. A task-specific stabilization module ensures convergence despite non-stationary environment dynamics and client heterogeneity. We validate the framework on three industrial benchmark suites (Factory Assembly Line, Industrial Heating Process, and Smart Grid Control) under both synchronous and asynchronous FL settings. Results show that FedRIC achieves up to 23% higher reward and 42% faster convergence than centralized and naive FL-RL baselines, while meeting strict control-latency requirements and maintaining system safety. This paper establishes a scalable, privacy-preserving solution for industrial intelligence at the network edge.
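The coordinator's core aggregation step described in the abstract, adaptive trust-weighted averaging of client policy gradients, can be illustrated with a minimal sketch. The function names, the flattened-gradient representation, and the reward-based trust update below are illustrative assumptions rather than the authors' implementation, which the abstract does not specify.

```python
import numpy as np

def trust_weighted_aggregate(client_grads, trust_scores):
    """Aggregate client policy gradients by trust-weighted averaging.

    client_grads : list of 1-D numpy arrays, one flattened gradient per edge node
    trust_scores : list of non-negative floats, one trust score per edge node
    Returns the trust-weighted average gradient applied by the coordinator.
    """
    trust = np.asarray(trust_scores, dtype=np.float64)
    weights = trust / trust.sum()        # normalize trust scores into weights
    stacked = np.stack(client_grads)     # shape: (num_clients, num_params)
    return np.average(stacked, axis=0, weights=weights)

def update_trust(prev_trust, local_reward_delta, beta=0.9):
    """Hypothetical adaptive trust update: an exponential moving average of
    each client's recent reward improvement (one possible rule, not FedRIC's)."""
    return beta * prev_trust + (1.0 - beta) * max(local_reward_delta, 0.0)

# Usage: three edge nodes report gradients and current trust scores to the coordinator.
grads = [np.random.randn(10) for _ in range(3)]
trust = [0.9, 0.5, 0.7]
global_grad = trust_weighted_aggregate(grads, trust)
```

In this sketch, a client whose recent local reward stagnates (for example, due to non-stationary dynamics or a faulty sensor stream) sees its trust score decay, so its gradient contributes less to the global update.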
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.
Mind forge Academia also operates under the Creative Commons License CC-BY 4.0, which allows you to copy and redistribute the material in any medium or format for any purpose, even commercially, provided that you give appropriate credit.