Abstract
Low-cost, rapid road point cloud acquisition provides 3D environmental information for engineering applications such as autonomous driving and road inspection. Vision-based reconstruction delivers rich, colored data more affordably than radar or fused point clouds, but repetitive road textures yield sparse features and reduced accuracy. To address this, we propose LK-Road3R, a Lucas-Kanade optical-flow-based workflow for reconstructing 3D point clouds from drone road videos. First, to obtain high-quality images for reconstruction, we design an optical-flow-based keyframe extraction algorithm that ensures optimal image overlap across the video. Second, we develop a point cloud estimation workflow built on a dual-branch network with attention sharing, fine-tuning its backbone for road scenes. Finally, we introduce a cylindrical coordinate system into the segmentation network, avoiding the feature loss that the sparsity of large-scale point clouds causes in the traditional Cartesian coordinate system, and thereby enable semantic segmentation of the point cloud scene. On KITTI, LK-Road3R achieves an Absolute Relative Error of 5.6, surpassing existing vision models. Real-world tests confirm its effectiveness and practical value for automated construction tasks.
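The keyframe extraction idea described above — track optical flow between consecutive frames and keep a frame once the accumulated motion reaches a bound, so consecutive keyframes retain sufficient overlap — can be sketched as follows. This is a minimal pure-NumPy illustration of the classic Lucas-Kanade least-squares flow estimate, not the paper's implementation; the function names, the `max_shift` threshold, and the fixed tracked window centers are illustrative assumptions.

```python
import numpy as np

def lk_flow(prev, curr, center, win=7):
    """Estimate translation at `center` (row, col) by solving the
    Lucas-Kanade least-squares system over a small window."""
    Iy, Ix = np.gradient(prev)           # spatial gradients (rows = y)
    It = curr - prev                     # temporal gradient
    r, c = center
    h = win // 2
    sl = (slice(r - h, r + h + 1), slice(c - h, c + h + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.stack([ix, iy], axis=1)       # gradient matrix of the window
    v, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return v                             # (vx, vy) in pixels per frame

def select_keyframes(frames, centers, max_shift=3.0):
    """Emit a keyframe whenever the accumulated mean flow magnitude
    exceeds `max_shift` pixels, bounding inter-keyframe image motion."""
    keyframes, acc = [0], 0.0
    for i in range(1, len(frames)):
        flows = [lk_flow(frames[i - 1], frames[i], c) for c in centers]
        acc += float(np.mean([np.hypot(*f) for f in flows]))
        if acc >= max_shift:
            keyframes.append(i)
            acc = 0.0
    return keyframes
```

For example, on a synthetic sequence of a Gaussian blob translating one pixel per frame, `lk_flow` recovers a flow of roughly one pixel rightward, and `select_keyframes` with `max_shift=3.0` keeps roughly every third frame. A production pipeline would instead track many corner features with a pyramidal LK solver (e.g. OpenCV's `cv2.calcOpticalFlowPyrLK`) for robustness to large motions.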
| Original language | English |
|---|---|
| Article number | 130605 |
| Journal | Expert Systems with Applications |
| Volume | 304 |
| DOIs | |
| State | Published - 1 Apr 2026 |
Keywords
- Deep learning
- Optical flow tracking
- Point cloud estimation
- Semantic segmentation
- Visual depth prediction
Fingerprint
Research topics: 'LK-Road3R: Road point cloud mapping via UAV-based video and deep learning'.