Asymmetric Gaussian Process multi-view learning for visual classification

Research output: Contribution to journal › Article › peer-review

Abstract

Multi-view learning methods attain outstanding performance in many fields compared with single-view strategies. In this paper, the Gaussian Process Latent Variable Model (GPLVM), a generative and non-parametric model, is exploited to represent multiple views in a common subspace. Specifically, a latent variable shared across the views is assumed to be mapped to the observations through view-specific Gaussian Process projections. However, this assumption is purely generative, making it intractable to estimate the fused variable directly at the testing step. To tackle this problem, another projection from the observed data to the shared variable is learned simultaneously, exploiting view-shared and view-specific kernel parameters under the Gaussian Process structure. Furthermore, to perform the classification task, label information is also introduced as a generation from the latent variable through a Gaussian Process transformation. Extensive experimental results on multi-view datasets demonstrate the superiority and effectiveness of our model in comparison to state-of-the-art algorithms.
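The shared-latent construction described above can be sketched in code: a single latent variable Z is scored against every view's observations through its own GP marginal likelihood. The following is a minimal NumPy illustration under assumed RBF kernels and hand-picked hyperparameters, not the authors' implementation (which additionally learns a back-projection and a label-generating GP):

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: K[i, j] = variance * exp(-||x_i - x_j||^2 / (2 * lengthscale^2))
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def shared_gplvm_nll(Z, views, noise=1e-2):
    """Negative log marginal likelihood of a shared-latent GPLVM sketch.

    Z      : (N, q) shared latent positions, common to all views.
    views  : list of (Y_v, (lengthscale, variance)) pairs -- each view has its
             own observations Y_v of shape (N, D_v) and its own kernel
             hyperparameters (the "view-specific" part; names are assumptions).
    """
    N = Z.shape[0]
    nll = 0.0
    for Y, (ls, var) in views:
        K = rbf_kernel(Z, ls, var) + noise * np.eye(N)  # view-specific GP prior
        _, logdet = np.linalg.slogdet(K)
        Kinv_Y = np.linalg.solve(K, Y)
        D = Y.shape[1]
        # Gaussian log-density summed over the D output dimensions of this view
        nll += 0.5 * (D * logdet + np.sum(Y * Kinv_Y) + D * N * np.log(2.0 * np.pi))
    return nll

# Toy demo: two views of 20 items generated at random, scored under one shared Z.
rng = np.random.default_rng(0)
Z = rng.normal(size=(20, 2))      # shared latent variable
Y1 = rng.normal(size=(20, 5))     # view 1 observations
Y2 = rng.normal(size=(20, 3))     # view 2 observations
nll = shared_gplvm_nll(Z, [(Y1, (1.0, 1.0)), (Y2, (0.5, 2.0))])
```

In the full model, Z (and the kernel hyperparameters) would be optimized to minimize this objective jointly with the recognition projection and the label GP; here the point is only that one latent Z enters every view's likelihood.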

Original language: English
Pages (from-to): 108-118
Number of pages: 11
Journal: Information Fusion
Volume: 65
DOIs
State: Published - Jan 2021
Externally published: Yes

Keywords

  • Classification
  • Gaussian Process
  • Multi-view
  • View-shared
  • View-specific

