
Spatial Knowledge Graph-Guided Multimodal Synthesis

  • Yida Xue
  • Zhen Bi
  • Jinnan Yang
  • Jungang Lou
  • Kehai Chen
  • Min Zhang
  • Huajun Chen
  • Ningyu Zhang*

*Corresponding author for this work

Affiliations:

  • Zhejiang University
  • Nanjing University of Science and Technology
  • Huzhou University
  • Harbin Institute of Technology Shenzhen

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have significantly enhanced their capabilities; however, their spatial perception abilities remain a notable limitation. To address this challenge, multimodal data synthesis offers a promising solution. Yet ensuring that synthesized data adhere to spatial common sense is a non-trivial task. Our approach addresses this critical gap by providing a systematic framework for generating spatially coherent data. In this work, we introduce SKG2Data, a novel multimodal synthesis approach guided by spatial knowledge graphs, grounded in the concept of knowledge-to-data generation. SKG2Data employs an automated pipeline for constructing a Spatial Knowledge Graph (SKG) that effectively captures human-like spatial cognition, including directional and distance relationships. These structured representations then serve as precise guidance for our integrated synthesis pipeline, in which a diffusion model generates spatially consistent images while an MLLM produces corresponding textual descriptions. The automated construction of the SKG enables scalable generation of diverse yet realistic spatial configurations, overcoming the limitations of manual data collection and annotation. Extensive experiments demonstrate that data synthesized from diverse types of spatial knowledge, including direction and distance, markedly enhance the spatial perception and reasoning abilities of MLLMs, albeit at a slight cost to their general capabilities. We hope that the idea of knowledge-based data synthesis can advance the development of spatial intelligence.
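To make the abstract's pipeline concrete, the sketch below shows one plausible way a spatial knowledge graph with directional and distance relations could be represented and serialized into text constraints for downstream generators. This is an illustrative assumption, not the authors' implementation: the class name `SpatialKG`, the relation vocabulary, and the prompt format are all hypothetical.

```python
# Hypothetical sketch of an SKG: (subject, relation, object) triples covering
# the two relation families the abstract names -- direction and distance.
INVERSE = {
    "left_of": "right_of", "right_of": "left_of",
    "above": "below", "below": "above",
    "near": "near", "far_from": "far_from",   # distance relations are symmetric
}

class SpatialKG:
    def __init__(self):
        self.triples = set()

    def add(self, subj, rel, obj):
        # Store the triple together with its spatial inverse, so the graph
        # can be queried from either object's point of view.
        self.triples.add((subj, rel, obj))
        self.triples.add((obj, INVERSE[rel], subj))

    def relations(self, subj, obj):
        # All relations the graph asserts between subj and obj.
        return {r for s, r, o in self.triples if s == subj and o == obj}

    def to_prompt(self):
        # Serialize the graph as plain-text constraints that could condition
        # a diffusion model or an MLLM caption prompt.
        return "; ".join(f"{s} is {r.replace('_', ' ')} {o}"
                         for s, r, o in sorted(self.triples))

kg = SpatialKG()
kg.add("cat", "left_of", "sofa")
kg.add("cat", "near", "lamp")
print(kg.relations("sofa", "cat"))  # {'right_of'}
print(kg.to_prompt())
```

Keeping inverses explicit means a single query interface suffices for checking spatial consistency of a generated scene against the graph, regardless of which object is treated as the reference.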

Original language: English
Pages (from-to): 4971-4981
Number of pages: 11
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 33
State: Published - 2025
Externally published: Yes

Keywords

  • Multimodal synthesis
  • language models
  • natural language processing
