TY - GEN
T1 - DOB-Net
T2 - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
AU - Wang, Tianming
AU - Lu, Wenjie
AU - Yan, Zheng
AU - Liu, Dikai
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - This paper presents an observer-integrated Reinforcement Learning (RL) approach, called Disturbance OBserver Network (DOB-Net), for robots operating in environments where disturbances are unknown, time-varying, and may frequently exceed robot control capabilities. The DOB-Net integrates a disturbance dynamics observer network and a controller network. Originating from conventional DOB mechanisms, the observer is built and enhanced via Recurrent Neural Networks (RNNs), encoding the estimation of past values and the prediction of future values of unknown disturbances in the RNN hidden state. Such encoding allows the controller to generate optimal control signals that actively reject disturbances, under the constraints of robot control capabilities. The observer and the controller are jointly learned within policy optimization by advantage actor-critic. Numerical simulations on position regulation tasks have demonstrated that the proposed DOB-Net significantly outperforms conventional feedback controllers and classical RL policies.
AB - This paper presents an observer-integrated Reinforcement Learning (RL) approach, called Disturbance OBserver Network (DOB-Net), for robots operating in environments where disturbances are unknown, time-varying, and may frequently exceed robot control capabilities. The DOB-Net integrates a disturbance dynamics observer network and a controller network. Originating from conventional DOB mechanisms, the observer is built and enhanced via Recurrent Neural Networks (RNNs), encoding the estimation of past values and the prediction of future values of unknown disturbances in the RNN hidden state. Such encoding allows the controller to generate optimal control signals that actively reject disturbances, under the constraints of robot control capabilities. The observer and the controller are jointly learned within policy optimization by advantage actor-critic. Numerical simulations on position regulation tasks have demonstrated that the proposed DOB-Net significantly outperforms conventional feedback controllers and classical RL policies.
UR - https://www.scopus.com/pages/publications/85092701561
U2 - 10.1109/ICRA40945.2020.9196641
DO - 10.1109/ICRA40945.2020.9196641
M3 - Conference contribution
AN - SCOPUS:85092701561
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 1881
EP - 1887
BT - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 31 May 2020 through 31 August 2020
ER -