Unsupervised Depth Completion Guided by Visual Inertial System and Confidence

  • Hanxuan Zhang
  • Ju Huo*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper addresses the problem of learning depth completion from sparse depth maps and RGB images. Specifically, we describe a real-time unsupervised depth completion method for dynamic scenes, guided by a visual-inertial system and confidence. Our method better handles challenges such as occlusion in dynamic scenes, limited computational resources, and unlabeled training samples. Its core is a new compact network that performs depth completion guided by images, pose, and confidence. Since visual-inertial information is the only source of supervision, we design a novel confidence-guided loss function. In particular, to address the pixel mismatch caused by object motion and occlusion in dynamic scenes, we partition the images into static, dynamic, and occluded regions and design a loss function matched to each region. Our experimental results on dynamic datasets and in real dynamic scenes show that this regularization alone is sufficient to train depth completion models. Our depth completion network exceeds the accuracy achieved in prior work on unsupervised depth completion while requiring only a small number of parameters.
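The region-partitioned loss described above can be illustrated with a minimal sketch. This is an assumption-laden toy in NumPy, not the paper's implementation: the mask computation, per-region weights, and residual definition here are all hypothetical placeholders for whatever the authors actually use.

```python
import numpy as np

def region_masked_loss(residual, static_mask, dynamic_mask, occluded_mask,
                       w_static=1.0, w_dynamic=0.5, w_occluded=0.0):
    """Combine per-pixel photometric residuals using region masks.

    The three boolean masks partition the image. Static pixels get full
    photometric supervision; dynamic pixels are down-weighted; occluded
    pixels are excluded. The weights are illustrative, not the paper's.
    """
    def masked_mean(r, m):
        # Mean residual over the masked region; 0 if the region is empty.
        return float(r[m].mean()) if m.any() else 0.0

    return (w_static * masked_mean(residual, static_mask)
            + w_dynamic * masked_mean(residual, dynamic_mask)
            + w_occluded * masked_mean(residual, occluded_mask))

# Toy example: a 2x2 residual image partitioned into the three regions.
residual = np.ones((2, 2))
static = np.array([[True, False], [False, False]])
dynamic = np.array([[False, True], [False, False]])
occluded = np.array([[False, False], [True, True]])
loss = region_masked_loss(residual, static, dynamic, occluded)
```

Excluding occluded pixels (weight 0) keeps the photometric term from penalizing pixels that have no valid correspondence across views, which is the intuition behind matching a loss to each region.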

Original language: English
Article number: 3430
Journal: Sensors
Volume: 23
Issue number: 7
State: Published - Apr 2023
Externally published: Yes

Keywords

  • confidence
  • dynamic scenes
  • loss function
  • unsupervised depth completion
