UAV Autonomous Navigation Based on Deep Reinforcement Learning in Highly Dynamic and High-Density Environments

  • Faculty of Computing, Harbin Institute of Technology

Research output: Contribution to journal › Article › peer-review

Abstract

Autonomous navigation of Unmanned Aerial Vehicles (UAVs) based on deep reinforcement learning (DRL) has made great progress. However, most studies assume relatively simple task scenarios and do not consider how complex scenarios affect UAV flight performance. This paper proposes a DRL-based autonomous navigation algorithm that enables UAVs to plan paths autonomously in high-density and highly dynamic environments. By analyzing how changes in the UAV's position and heading angle affect navigation performance in complex environments, the algorithm adopts a state-space representation that combines position information with angle information. In addition, a dynamic reward function is constructed on the basis of a non-sparse reward function to balance the agent's conservative and exploratory behavior during model training. The results of multiple comparative experiments show that the proposed algorithm achieves not only the best autonomous navigation performance but also the highest flight efficiency in complex environments.
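The abstract's idea of a non-sparse, dynamically weighted reward that trades off exploratory and conservative behavior can be illustrated with a minimal sketch. This is not the paper's actual reward function; the function name, coefficients, and decay schedule below are all hypothetical, chosen only to show how a distance-progress term and a heading-angle term might be combined under a weight that shifts over the course of training.

```python
import math

def shaped_reward(dist_prev, dist_curr, heading_err, progress,
                  w_max=1.0, w_min=0.2):
    """Illustrative non-sparse shaped reward (hypothetical, not from the paper).

    dist_prev, dist_curr: distance to the goal before/after the step
    heading_err: angle (radians) between UAV heading and goal direction
    progress: training progress in [0, 1]; the exploration weight decays
              linearly with it, shifting the agent from exploratory
              toward conservative behavior.
    """
    w_explore = w_max - (w_max - w_min) * progress  # decays as training advances
    r_dist = dist_prev - dist_curr                  # positive when closing on the goal
    r_angle = math.cos(heading_err)                 # 1.0 when facing the goal
    return r_dist + w_explore * r_angle
```

Because every step yields a graded signal (distance progress plus heading alignment) rather than a single terminal reward, the agent receives learning feedback continuously, which is the usual motivation for non-sparse shaping in DRL navigation tasks.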

Original language: English
Article number: 516
Journal: Drones
Volume: 8
Issue number: 9
DOIs
State: Published - Sep 2024
Externally published: Yes

Keywords

  • autonomous navigation
  • deep reinforcement learning
  • dynamic rewards
  • obstacle avoidance
  • unmanned aerial vehicles
