Graph Contrastive Partial Multi-View Clustering

  • Yiming Wang
  • Dongxia Chang*
  • Zhiqiang Fu
  • Jie Wen
  • Yao Zhao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the growing diversity of information acquisition, data are stored and transmitted in an increasing number of modalities. Nevertheless, it is not unusual for parts of the data to be missing in some views due to unavoidable acquisition, transmission, or storage errors. In this paper, we propose an augmentation-free graph contrastive learning framework to address the problem of partial multi-view clustering. Notably, we assume that the representations of similar samples (i.e., samples belonging to the same cluster) should be similar. This is distinct from general unsupervised contrastive learning, which assumes that an image and its augmentations share a similar representation. Specifically, relation graphs are constructed using the nearest neighbors to identify existing similar samples, and the constructed inter-instance relation graphs are then transferred to the missing views to build graphs on the corresponding missing data. Subsequently, two main components, within-view graph contrastive learning and cross-view graph consistency learning, are devised to maximize the mutual information of different views within a cluster. The proposed approach elevates instance-level contrastive learning and missing-data inference to the cluster level, effectively mitigating the impact of individual missing data on clustering. Experiments on several challenging datasets demonstrate the superiority of the proposed method.
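To make the abstract's pipeline concrete, the sketch below (not the authors' code; all function names, tensor shapes, and hyper-parameters such as k and the temperature are assumptions) illustrates the two ideas in simplified form: building an inter-instance relation graph from nearest neighbors in an observed view, transferring that graph to a view with missing samples, and a within-view graph contrastive loss that treats graph-connected samples, assumed to share a cluster, as positives instead of augmented copies.

```python
# Minimal PyTorch sketch of nearest-neighbor relation graphs and a
# within-view graph contrastive loss. Illustrative only; it does not
# reproduce the paper's exact formulation.
import torch
import torch.nn.functional as F


def knn_relation_graph(z: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Binary adjacency from the cosine k-nearest neighbors of embeddings z (n, d)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t()                                    # (n, n) cosine similarities
    sim.fill_diagonal_(-float("inf"))                  # exclude self-matches
    idx = sim.topk(k, dim=1).indices                   # k neighbors per sample
    adj = torch.zeros_like(sim).scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()               # symmetrize


def transfer_graph(adj_observed: torch.Tensor, observed_mask: torch.Tensor) -> torch.Tensor:
    """Reuse a relation graph built on an observed view in a view with missing
    samples. Here we simply keep edges whose source instance is observed in the
    target view (one of several reasonable transfer choices), so that missing
    positions stay connected to neighbors they can be related to."""
    # observed_mask: (n,) with 1 where the sample is present in the target view.
    return adj_observed * observed_mask.unsqueeze(1)


def within_view_graph_contrastive_loss(z: torch.Tensor, adj: torch.Tensor,
                                       temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss with graph neighbors as positives rather than augmentations."""
    z = F.normalize(z, dim=1)
    logits = (z @ z.t()) / temperature
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(eye, -1e9)             # drop self-similarity
    log_prob = F.log_softmax(logits, dim=1)
    pos_count = adj.sum(dim=1).clamp(min=1.0)
    loss = -(adj * log_prob).sum(dim=1) / pos_count    # average over positives per row
    return loss[adj.sum(dim=1) > 0].mean()             # ignore isolated samples
```

Using neighbors from the relation graph as positives is what lifts the contrast from the instance level to the cluster level described in the abstract: each sample is pulled toward several presumed same-cluster samples, so one missing instance has limited influence on the learned representation.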

Original language: English
Pages (from-to): 6551-6562
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 25
DOIs
State: Published - 2023
Externally published: Yes

Keywords

  • Contrastive learning
  • multi-view learning
  • partial multi-view clustering
