
LK-Road3R: Road point cloud mapping via UAV-based video and deep learning

  • School of Transportation Science and Engineering, Harbin Institute of Technology
  • University of Rwanda

Research output: Contribution to journal › Article › peer-review

Abstract

Low-cost, rapid road point cloud acquisition provides 3D environmental information for engineering applications such as autonomous driving and road inspection. Vision-based reconstruction delivers rich, colored data more affordably than radar or fused point clouds, but repetitive road textures yield sparse features and reduced accuracy. To address this, we propose LK-Road3R, a Lucas-Kanade optical-flow-based workflow for 3D point cloud reconstruction from drone road videos. First, to obtain high-quality images for reconstruction, we design an optical-flow-based keyframe extraction algorithm that ensures optimal overlap between the selected video frames. Second, we develop a point cloud estimation workflow that applies a dual-branch network with attention sharing and fine-tunes the backbone for road scenes. Finally, we introduce a cylindrical coordinate system into the segmentation network to avoid the feature loss caused by the sparsity of large-scale point clouds in a traditional Cartesian coordinate system, thereby enabling semantic segmentation of the point cloud scene. On KITTI, LK-Road3R achieves an Absolute Relative Error of 5.6, surpassing existing vision models. Real-world tests confirm its effectiveness and practical value for automated construction tasks.
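
The keyframe step can be pictured with a short sketch. The following is a minimal illustration, not the authors' published algorithm: the function name `extract_keyframes`, the `max_shift_px` threshold, and the corner-tracking parameters are our assumptions. The idea is simply to track sparse corners with Lucas-Kanade optical flow and emit a new keyframe once the mean displacement from the last keyframe grows large enough that image overlap would start to drop.

```python
import cv2
import numpy as np

def extract_keyframes(video_path, max_shift_px=40.0):
    """Return indices of frames whose mean Lucas-Kanade displacement from the
    previous keyframe exceeds max_shift_px (a hypothetical overlap knob:
    smaller values keep more overlap between consecutive keyframes)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []
    key_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    key_pts = cv2.goodFeaturesToTrack(key_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    keyframes, idx = [0], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_key = key_pts is None  # no trackable texture: force a keyframe
        if not new_key:
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(key_gray, gray,
                                                      key_pts, None)
            good = status.ravel() == 1
            # Declare a keyframe when tracking breaks or the image has shifted
            # far enough that overlap with the last keyframe starts to drop.
            new_key = (good.sum() == 0 or
                       np.linalg.norm(nxt[good] - key_pts[good],
                                      axis=-1).mean() >= max_shift_px)
        if new_key:
            keyframes.append(idx)
            key_gray = gray  # re-seed corner features at the new keyframe
            key_pts = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                              qualityLevel=0.01, minDistance=10)
    cap.release()
    return keyframes
```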
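The cylindrical-coordinate idea can likewise be sketched. Below is a minimal illustration under our own assumptions (the function name, grid resolutions, and voxelization step are hypothetical, not the paper's formulation): mapping (x, y, z) to (rho, phi, z) means angular bins widen with range, which counteracts the thinning of distant points that the abstract attributes to uniform Cartesian grids.

```python
import numpy as np

def cartesian_to_cylindrical(xyz):
    """Map an (N, 3) array of Cartesian points to (rho, phi, z)."""
    rho = np.hypot(xyz[:, 0], xyz[:, 1])    # radial distance from the z-axis
    phi = np.arctan2(xyz[:, 1], xyz[:, 0])  # azimuth angle in (-pi, pi]
    return np.stack([rho, phi, xyz[:, 2]], axis=1)

# Hypothetical voxelization ahead of a segmentation network: equal-angle bins
# cover more ground area at long range, so distant road points are not spread
# over ever-emptier cells the way they are on a uniform Cartesian grid.
points = np.random.rand(1000, 3) * 50.0          # stand-in for a road scan
cyl = cartesian_to_cylindrical(points)
res = np.array([0.5, np.deg2rad(1.0), 0.2])      # rho (m), phi (rad), z (m)
voxel_idx = np.floor((cyl - cyl.min(axis=0)) / res).astype(np.int64)
```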

Original language: English
Article number: 130605
Journal: Expert Systems with Applications
Volume: 304
DOIs
State: Published - 1 Apr 2026

Keywords

  • Deep learning
  • Optical flow tracking
  • Point cloud estimation
  • Semantic segmentation
  • Visual depth prediction

