
A 3D Simulation Environment and Navigation Approach for Robot Navigation via Deep Reinforcement Learning in Dense Pedestrian Environment

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

With the rapid development of mobile robot technology, robots play an increasingly important role in daily life. Navigation, one of the key basic capabilities of a mobile robot, now faces new challenges: navigating efficiently and collision-free in complex, changeable human environments remains an urgent open problem. Mobile robots can already navigate efficiently in static environments, but the unstructured, fast-changing settings of everyday human society demand more flexible navigation strategies for dynamic scenarios. This paper presents a 3D simulation environment for robot navigation via deep reinforcement learning in dense pedestrian environments, together with a new navigation approach for such environments. The simulation environment integrates Gazebo, the ROS navigation stack, Stable Baselines, and the Social Force Pedestrian Simulator: Gazebo provides rich environmental information around the robot; the ROS navigation stack supplies traditional path-planning methods; Stable Baselines, a set of improved implementations of reinforcement learning algorithms based on OpenAI Baselines, makes mainstream RL algorithms easy to call; and the Social Force Pedestrian Simulator, a pedestrian simulation package whose pedestrians move according to the rules of the social force model, imitates dense pedestrian scenarios realistically. Our navigation approach combines the global optimality of traditional global path planning with the local obstacle-avoidance ability of reinforcement learning. First, we plan a global path with the A* algorithm.
Second, we use Soft Actor-Critic (SAC) to make action decisions that follow waypoints generated at a fixed distance along the global path while avoiding obstacles agilely. Experiments show that our simulation environment makes it easy to set up robot navigation tasks and to evaluate navigation approaches in various dense pedestrian environments.
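The two-step pipeline in the abstract — A* global planning followed by waypoint tracking — can be sketched in plain Python. This is an illustrative sketch only, not the paper's implementation: the paper plans on ROS costmaps and feeds waypoints to an SAC policy, whereas here a simple 4-connected occupancy grid stands in for the map, and `waypoints` stands in for "waypoints generated at a certain distance on the global path". All function and variable names are hypothetical.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid.

    grid: list of lists, 0 = free cell, 1 = obstacle.
    start, goal: (row, col) tuples. Returns a list of cells, or None.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible heuristic on a 4-grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]                # reconstruct by walking parents
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > best_g.get(cur, float("inf")):
            continue                    # stale heap entry, skip
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

def waypoints(path, spacing):
    """Subsample the global path every `spacing` cells, keeping the goal.

    In the paper's approach these intermediate targets would be handed,
    one at a time, to the learned SAC policy for local tracking.
    """
    pts = path[::spacing]
    if pts[-1] != path[-1]:
        pts.append(path[-1])
    return pts

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 3))
wps = waypoints(path, 3)
```

The division of labor mirrors the abstract: A* contributes global optimality on the known map, while the learned local policy (not sketched here) handles the agile avoidance of pedestrians between consecutive waypoints.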

Original language: English
Title of host publication: 2020 IEEE 16th International Conference on Automation Science and Engineering, CASE 2020
Publisher: IEEE Computer Society
Pages: 1514-1519
Number of pages: 6
ISBN (Electronic): 9781728169040
DOIs
State: Published - Aug 2020
Externally published: Yes
Event: 16th IEEE International Conference on Automation Science and Engineering, CASE 2020 - Hong Kong, Hong Kong
Duration: 20 Aug 2020 - 21 Aug 2020

Publication series

Name: IEEE International Conference on Automation Science and Engineering
Volume: 2020-August
ISSN (Print): 2161-8070
ISSN (Electronic): 2161-8089

Conference

Conference: 16th IEEE International Conference on Automation Science and Engineering, CASE 2020
Country/Territory: Hong Kong
City: Hong Kong
Period: 20/08/20 - 21/08/20

