TY - GEN
T1 - A 3D Simulation Environment and Navigation Approach for Robot Navigation via Deep Reinforcement Learning in Dense Pedestrian Environment
AU - Liu, Qi
AU - Li, Yanjie
AU - Liu, Lintao
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/8
Y1 - 2020/8
N2 - With the rapid development of mobile robot technology, robots are playing an increasingly important role in people's daily lives. As one of the key technologies underlying the basic functions of mobile robots, navigation must also cope with new challenges. How to navigate efficiently and without collisions in complex, changing human environments is a problem that urgently needs to be solved. Currently, mobile robots can achieve efficient navigation in static environments. However, in the unstructured and fast-changing environments of everyday human society, robots need more flexible navigation strategies to handle dynamic scenarios. This paper builds a 3D simulation environment for robot navigation via deep reinforcement learning in dense pedestrian environments. We also propose a new navigation approach based on deep reinforcement learning for dense pedestrian environments. The simulation environment integrates Gazebo, the ROS navigation stack, Stable Baselines, and the Social Force Pedestrian Simulator. To collect rich environmental information around the robot, our simulation environment is built on the Gazebo simulation platform. To use traditional path planning methods, we introduce the ROS navigation stack. To make it easier to call current mainstream reinforcement learning algorithms, we introduce Stable Baselines, a set of improved implementations of reinforcement learning algorithms based on OpenAI Baselines. To imitate dense pedestrian scenarios realistically, we introduce the Social Force Pedestrian Simulator, a pedestrian simulation package whose pedestrians move according to the Social Force Model. Our robot navigation approach combines the global optimality of traditional global path planning with the local obstacle-avoidance ability of reinforcement learning. First, we plan a global path using the A∗ algorithm. Second, we use Soft Actor-Critic (SAC) to follow waypoints generated at fixed intervals along the global path, making action decisions while maintaining agile obstacle avoidance. Experiments show that our simulation environment makes it easy to set up robot navigation scenarios and that navigation approaches can be simulated in various dense pedestrian environments.
AB - With the rapid development of mobile robot technology, robots are playing an increasingly important role in people's daily lives. As one of the key technologies underlying the basic functions of mobile robots, navigation must also cope with new challenges. How to navigate efficiently and without collisions in complex, changing human environments is a problem that urgently needs to be solved. Currently, mobile robots can achieve efficient navigation in static environments. However, in the unstructured and fast-changing environments of everyday human society, robots need more flexible navigation strategies to handle dynamic scenarios. This paper builds a 3D simulation environment for robot navigation via deep reinforcement learning in dense pedestrian environments. We also propose a new navigation approach based on deep reinforcement learning for dense pedestrian environments. The simulation environment integrates Gazebo, the ROS navigation stack, Stable Baselines, and the Social Force Pedestrian Simulator. To collect rich environmental information around the robot, our simulation environment is built on the Gazebo simulation platform. To use traditional path planning methods, we introduce the ROS navigation stack. To make it easier to call current mainstream reinforcement learning algorithms, we introduce Stable Baselines, a set of improved implementations of reinforcement learning algorithms based on OpenAI Baselines. To imitate dense pedestrian scenarios realistically, we introduce the Social Force Pedestrian Simulator, a pedestrian simulation package whose pedestrians move according to the Social Force Model. Our robot navigation approach combines the global optimality of traditional global path planning with the local obstacle-avoidance ability of reinforcement learning. First, we plan a global path using the A∗ algorithm. Second, we use Soft Actor-Critic (SAC) to follow waypoints generated at fixed intervals along the global path, making action decisions while maintaining agile obstacle avoidance. Experiments show that our simulation environment makes it easy to set up robot navigation scenarios and that navigation approaches can be simulated in various dense pedestrian environments.
UR - https://www.scopus.com/pages/publications/85094167548
U2 - 10.1109/CASE48305.2020.9217023
DO - 10.1109/CASE48305.2020.9217023
M3 - Conference contribution
AN - SCOPUS:85094167548
T3 - IEEE International Conference on Automation Science and Engineering
SP - 1514
EP - 1519
BT - 2020 IEEE 16th International Conference on Automation Science and Engineering, CASE 2020
PB - IEEE Computer Society
T2 - 16th IEEE International Conference on Automation Science and Engineering, CASE 2020
Y2 - 20 August 2020 through 21 August 2020
ER -