
Development of Deep Reinforcement Learning Co-Simulation Platforms for Power System Control

  • Zhenghong Tu*, Zhen Fan, Wei Zhang, Wenxin Liu
  • *Corresponding author for this work
  • Affiliations: Argonne National Laboratory; Eversource Energy, Research and Development; Lehigh University

Research output: Contribution to journal › Article › peer-review

Abstract

This paper introduces four co-simulation platforms for testing deep reinforcement learning (DRL)-based control solutions in power systems. The first connects the off-the-shelf Matlab DRL toolbox with the developed Matlab Simulink power system models (platform 1). The second uses the Python-C interface to integrate algorithm design in Python with model development in Matlab: the developed model is first generated to C code and then compiled into a shared library (platform 2). The third and fourth platforms are based on a real-time simulator but employ different communication protocols, i.e., TCP/IP (platform 3) and EtherCAT (platform 4). The Opal-RT real-time simulators used by platforms 3 and 4 ensure that both can run the models in real time. In particular, platform 4 supports real-time DRL training and control, since both its communication and its simulation are real-time. This paper provides detailed procedures for implementing these DRL co-simulation platforms and comments on the pros and cons of each, which can help researchers speed up the preliminary design of DRL-based control solutions for dynamic power systems. Note to Practitioners - This article is motivated by the emerging deep reinforcement learning-based control solutions for dynamic power systems. There is an urgent need for co-simulation platforms that enable high-fidelity, real-time simulation of power systems and a thorough evaluation of various DRL control algorithms. Most existing DRL testing tools are developed in Python with simplifications adapted to large time scales or specific functionalities, which limits their applicability to small-time-scale online control. Models with higher granularity are necessary for small-time-scale online control problems in power system applications that involve fast dynamics.
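For platform 2, the abstract describes compiling the Simulink model to C code and loading it as a shared library from Python. A minimal sketch of how such a library might be wrapped is shown below; the entry-point names `model_initialize`, `model_step`, and `model_terminate` follow the usual Simulink Coder convention but are assumptions here, as are the I/O dimensions and library path.

```python
import ctypes

# Hypothetical I/O dimensions of the compiled model; the real sizes come
# from the generated C code's interface header.
N_IN = 3    # control inputs (e.g., generator setpoints)
N_OUT = 6   # measured outputs (e.g., bus frequencies, voltages)


class CompiledSimulinkModel:
    """Wrap a Simulink power system model that was generated to C code and
    compiled into a shared library (platform 2 in the paper).

    The exported symbol names below are illustrative assumptions based on
    the typical Simulink Coder entry-point style, not the paper's code.
    """

    def __init__(self, lib_path):
        # CDLL raises OSError if the shared library cannot be loaded.
        self.lib = ctypes.CDLL(lib_path)
        self.lib.model_step.argtypes = [
            ctypes.POINTER(ctypes.c_double),  # inputs (control actions)
            ctypes.POINTER(ctypes.c_double),  # outputs (measurements)
        ]
        self.lib.model_initialize()

    def step(self, actions):
        """Advance the model by one fixed step and return the outputs."""
        u = (ctypes.c_double * N_IN)(*actions)
        y = (ctypes.c_double * N_OUT)()
        self.lib.model_step(u, y)
        return list(y)

    def close(self):
        self.lib.model_terminate()
```

A DRL training loop would then call `step()` once per control interval, treating the returned measurements as the environment observation.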
The developed platforms allow direct integration of dynamic Matlab Simulink power system models and can run complex dynamic models in real time via the Opal-RT simulator. A classic optimal generation control problem for a test power system is studied in this paper to validate the effectiveness of the developed online and real-time DRL platforms. For other power system applications, one only needs to follow the procedures provided in this paper to customize the corresponding dynamic models or environments. Comprehensive real-time and online tests of different DRL control algorithms can then be accomplished on the developed platforms with little effort.
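For platform 3, the exchange between a Python DRL agent and the real-time simulator over TCP/IP typically reduces to serializing an action vector into a byte frame and deserializing the returned observation frame. A minimal sketch follows; the fixed-length little-endian float32 message layout, host/port, and reward shaping are illustrative assumptions, not the paper's actual protocol.

```python
import socket
import struct

N_ACTIONS = 3  # hypothetical action dimension
N_OBS = 6      # hypothetical observation dimension


def pack_actions(actions):
    """Serialize an action vector into a fixed-length little-endian float32 frame."""
    return struct.pack(f"<{N_ACTIONS}f", *actions)


def unpack_obs(frame):
    """Deserialize an observation frame received from the simulator."""
    return struct.unpack(f"<{N_OBS}f", frame)


class SimulinkTcpEnv:
    """Gym-style wrapper around a TCP link to the real-time model (platform 3)."""

    def __init__(self, host="192.168.1.10", port=5025):
        # Address of the Opal-RT target is deployment-specific.
        self.sock = socket.create_connection((host, port))

    def step(self, actions):
        """Send one action vector, block for the next observation frame."""
        self.sock.sendall(pack_actions(actions))
        obs = unpack_obs(self._recv_exact(4 * N_OBS))
        # Example reward: penalize deviation of the first measurement
        # (e.g., frequency error) from zero.
        reward = -abs(obs[0])
        return obs, reward

    def _recv_exact(self, n):
        """TCP is a byte stream, so accumulate until a full frame arrives."""
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("simulator closed the connection")
            buf += chunk
        return buf
```

Because TCP delivers a byte stream rather than discrete messages, the `_recv_exact` loop is needed to reassemble each fixed-length observation frame before unpacking.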

Original language: English
Pages (from-to): 4780-4789
Number of pages: 10
Journal: IEEE Transactions on Automation Science and Engineering
Volume: 22
DOIs
State: Published - 2025
Externally published: Yes

Keywords

  • Deep reinforcement learning (DRL)
  • co-simulation platform
  • power system control
  • real-time simulator
