空间机械臂多模态视觉感知与操作技术综述

Translated title of the contribution: Review of multimodal visual perception and manipulation technologies for space manipulator
  • Yuhui Hu
  • Ligang Wu*
  • Jianguo Chen
  • Liwen Zhang
  • Zhao Zhang
  • Heyuan Sun
  • Dong Zhou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To investigate the current state of development of multimodal visual perception and manipulation for space manipulators, along with the pressing technical challenges involved, this paper analyzes and summarizes the existing literature. Multimodal visual perception refers to a vision approach that integrates heterogeneous sensors and multi-source data, including visible-light, infrared, and depth cameras as well as LiDAR. Space manipulation refers to on-orbit activities conducted with robotic manipulators and other actuators, encompassing approach, grasping, assembly, and maintenance. This paper first reviews representative space manipulator systems deployed domestically and internationally, summarizing their developmental paths and application characteristics. Building on this foundation, we adopt a perception-planning-control framework to systematically review three technologies essential for autonomous on-orbit servicing. We first address multimodal visual perception, focusing on heterogeneous data fusion and multimodal pose estimation. We then examine trajectory planning under complex constraints, covering model-based, optimization-based, and learning-based methods and their applicability to free-floating bases and strongly coupled dynamics. Lastly, we discuss compliant grasping of free-floating moving targets to ensure operational safety. Finally, the paper highlights major challenges faced by space manipulators in autonomous on-orbit servicing, including limited onboard computational resources, scarcity of on-orbit data, difficulties in multimodal coordination, and the need for long-term reliability. Future directions are then outlined from the perspectives of hardware, algorithms, and system-level integration. Research indicates that autonomous on-orbit servicing with space manipulators is still immature, with multiple bottlenecks persisting in key technical components and practical deployment. Advances in multimodal vision, learning-based trajectory planning, and compliant grasping control will be critical to enhancing autonomous performance.

Original language: Chinese (Traditional)
Pages (from-to): 1-21
Number of pages: 21
Journal: Harbin Gongye Daxue Xuebao / Journal of Harbin Institute of Technology
Volume: 57
Issue number: 12
State: Published - Dec 2025

