Abstract
This article investigates the optimal control problem with disturbance rejection for discrete-time multi-agent systems under cooperative and non-cooperative graphical game frameworks. Given the practical difficulty of obtaining accurate system models, Q-function-based policy iteration methods are proposed to seek the Nash equilibrium solution of the cooperative graphical game and the distributed minmax solution of the non-cooperative graphical game. To implement these methods online, two reinforcement learning frameworks are developed: an actor-disturber-critic structure for the cooperative graphical game and an actor-adversary-disturber-critic structure for the non-cooperative graphical game. The stability of the proposed methods is rigorously analyzed, and simulation results illustrate their effectiveness.
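To make the Q-function-based policy iteration idea concrete, the sketch below shows the classic single-agent, model-based variant for a discrete-time LQR problem (Hewer-style iteration expressed through Q-function blocks). This is only an illustrative assumption, not the paper's algorithm: the article's multi-agent graphical-game setting, disturbance channels, and model-free online learning are all omitted here, and the system matrices `A`, `B`, weights `Q`, `R`, and initial gain `K0` are hypothetical.

```python
import numpy as np

def policy_iteration_lqr(A, B, Q, R, K0, outer_iters=50, inner_iters=2000):
    """Policy iteration for discrete-time LQR, written via Q-function blocks.

    K0 must be a stabilizing state-feedback gain (u = -K x).
    Returns the converged gain K and cost matrix P.
    """
    K = K0.copy()
    P = np.zeros_like(Q)
    for _ in range(outer_iters):
        # Policy evaluation: solve the Lyapunov equation
        #   P = Q + K^T R K + (A - B K)^T P (A - B K)
        # by fixed-point iteration (converges since A - B K is stable).
        Ak = A - B @ K
        P = np.zeros_like(Q)
        for _ in range(inner_iters):
            P = Q + K.T @ R @ K + Ak.T @ P @ Ak
        # Policy improvement using the Q-function blocks of
        # Q(x, u) = [x; u]^T H [x; u]:
        #   H_uu = R + B^T P B,  H_ux = B^T P A,  K <- H_uu^{-1} H_ux.
        H_uu = R + B.T @ P @ B
        H_ux = B.T @ P @ A
        K = np.linalg.solve(H_uu, H_ux)
    return K, P

# Hypothetical example: discretized double integrator with a stabilizing K0.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[10.0, 10.0]])
K, P = policy_iteration_lqr(A, B, Q, R, K0)
```

The improvement step uses only the Q-function blocks `H_uu` and `H_ux`; in the model-free setting described in the abstract, those blocks are estimated from data instead of being formed from `A` and `B`.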
| Original language | English |
|---|---|
| Pages (from-to) | 585-601 |
| Number of pages | 17 |
| Journal | International Journal of Robust and Nonlinear Control |
| Volume | 36 |
| Issue number | 2 |
| State | Published - 25 Jan 2026 |
| Externally published | Yes |
Keywords
- Nash equilibrium
- disturbance rejection
- multi-agent system
- reinforcement learning
Fingerprint
Dive into the research topics of 'Strategic Learning for Disturbance Rejection in Multi-Agent Systems: Nash and Minmax in Graphical Games'. Together they form a unique fingerprint.