
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning

  • Penglei Sun
  • Yaoxian Song
  • Xiangru Zhu
  • Xiang Liu
  • Qiang Wang*
  • Yue Liu*
  • Changqun Xia
  • Tiefeng Li
  • Yang Yang
  • Xiaowen Chu*
  • *Corresponding author for this work
  • The Hong Kong University of Science and Technology (Guangzhou)
  • Zhejiang University
  • Fudan University
  • Hong Kong University of Science and Technology
  • Harbin Institute of Technology Shenzhen
  • Terminus Technologies Co. Ltd
  • Pengcheng Laboratory

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution - peer-review

Abstract

Scene understanding enables intelligent agents to interpret and comprehend their environment. While existing large vision-language models (LVLMs) for scene understanding have primarily focused on indoor household tasks, they face two significant limitations when applied to outdoor large-scale scene understanding. First, outdoor scenarios typically encompass larger-scale environments observed through various sensors from multiple viewpoints (e.g., bird's-eye view and terrestrial view), whereas existing indoor LVLMs mainly analyze single visual modalities within building-scale contexts from humanoid viewpoints. Second, existing LVLMs lack multidomain perception outdoor data and struggle to effectively integrate 2D and 3D visual information. To address these limitations, we build the first multidomain perception outdoor scene understanding dataset, named SVM-City, derived from multi-Scale scenarios with multi-View and multi-Modal instruction tuning data. It contains 420k images and 4,811M point clouds with 567k question-answering pairs collected from vehicles, low-altitude drones, high-altitude aerial planes, and satellites. To effectively fuse multimodal data when one modality is absent, we introduce incomplete multimodal learning to model outdoor scene understanding and design an LVLM named City-VLM. Multimodal fusion is realized by constructing a joint probabilistic distribution space rather than directly applying explicit fusion operations (e.g., concatenation). Experimental results on three typical outdoor scene understanding tasks show that City-VLM surpasses existing LVLMs by 18.14% on average in question-answering tasks. Our method demonstrates practical and generalizable performance across multiple outdoor scenes.
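The abstract describes fusing modalities through a joint probabilistic distribution space so that a missing modality can still be handled, rather than concatenating fixed feature vectors. The sketch below is only an illustrative approximation of that general idea, not the paper's City-VLM implementation: it assumes a product-of-experts-style joint Gaussian latent space in which each available modality contributes one expert, so an absent modality is handled simply by omitting its expert from the product. All names (GaussianExpert, product_of_experts), dimensions, and the 2D/3D encoder features are hypothetical.

```python
# Hedged sketch of probabilistic multimodal fusion with a possibly missing modality.
# NOT the paper's implementation; a product-of-experts over Gaussian latents is assumed.
import torch
import torch.nn as nn

class GaussianExpert(nn.Module):
    """Maps one modality's features to a Gaussian latent (mean, log-variance)."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.logvar = nn.Linear(in_dim, latent_dim)

    def forward(self, x):
        return self.mu(x), self.logvar(x)

def product_of_experts(mus, logvars, eps: float = 1e-8):
    """Combine the available experts into one joint Gaussian.
    Missing modalities are simply left out of the input lists."""
    precisions = [torch.exp(-lv) for lv in logvars]           # 1 / sigma^2 per expert
    joint_precision = sum(precisions) + eps
    joint_mu = sum(m * p for m, p in zip(mus, precisions)) / joint_precision
    joint_logvar = -torch.log(joint_precision)                # var = 1 / precision
    return joint_mu, joint_logvar

# Usage: fuse 2D image and 3D point-cloud features; the point cloud may be absent.
img_expert = GaussianExpert(in_dim=768, latent_dim=256)
pcd_expert = GaussianExpert(in_dim=512, latent_dim=256)

img_feat = torch.randn(4, 768)   # e.g. features from a 2D vision encoder
pcd_feat = None                  # point cloud missing for this sample

experts = [img_expert(img_feat)]
if pcd_feat is not None:
    experts.append(pcd_expert(pcd_feat))

mus, logvars = zip(*experts)
z_mu, z_logvar = product_of_experts(list(mus), list(logvars))
# Reparameterized sample from the joint latent, which downstream layers would consume.
z = z_mu + torch.randn_like(z_mu) * torch.exp(0.5 * z_logvar)
```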

Original language: English
Title of host publication: MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025
Publisher: Association for Computing Machinery, Inc
Pages: 3448-3457
Number of pages: 10
ISBN (Electronic): 9798400720352
DOIs
State: Published - 27 Oct 2025
Externally published: Yes
Event: 33rd ACM International Conference on Multimedia, MM 2025 - Dublin, Ireland
Duration: 27 Oct 2025 - 31 Oct 2025

Publication series

Name: MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025

Conference

Conference: 33rd ACM International Conference on Multimedia, MM 2025
Country/Territory: Ireland
City: Dublin
Period: 27/10/25 - 31/10/25

Keywords

  • 3D
  • multimodal question answering
  • scene understanding

