
Exploiting multi-expression dependences for implicit multi-emotion video tagging

  • Shangfei Wang*
  • Zhilei Liu
  • Jun Wang
  • Zhaoyu Wang
  • Yongqiang Li
  • Xiaoping Chen
  • Qiang Ji

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, a novel approach to implicit multi-emotion video tagging is proposed that exploits the relations between users' facial expressions and emotions, as well as the dependences among multiple expressions. First, the audience's expressions are inferred by a multi-expression recognition model, which consists of an image-driven expression measurement component and a Bayesian network representing the co-existence and mutual-exclusion relations among multiple expressions. Second, the videos' multi-emotion tags are obtained from the recognized expressions by another Bayesian network, which captures the relations between expressions and emotions. Results of experiments conducted on the JAFFE and NVIE databases demonstrate that expression recognition is improved by modeling the relations among multiple expressions. Furthermore, the relations between expressions and emotions help improve emotional tagging: our approach outperforms traditional expression-based and image-driven implicit tagging methods.
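The two-stage pipeline in the abstract can be sketched in miniature. The snippet below is an illustrative simplification, not the paper's actual models: the Bayesian network over expressions is reduced to per-expression measurement scores combined with pairwise co-existence/mutual-exclusion factors (inference by exhaustive enumeration over binary expression assignments), and the expression-to-emotion network is reduced to a noisy-OR aggregation. All expression/emotion names, probabilities, and compatibility weights are hypothetical stand-ins.

```python
import itertools

# Hypothetical image-driven measurement scores: per-expression probabilities
# as a classifier might output them (stand-ins, not from the paper).
measurement = {"happiness": 0.7, "surprise": 0.6, "sadness": 0.2}

# Pairwise factors modeling co-existence (>1) and mutual exclusion (<1)
# among expressions; illustrative values only.
compat = {
    ("happiness", "surprise"): 2.0,   # tend to co-occur
    ("happiness", "sadness"): 0.1,    # nearly mutually exclusive
    ("surprise", "sadness"): 0.5,
}

def joint_score(assignment):
    """Unnormalized joint: measurement likelihoods times pairwise relations."""
    score = 1.0
    for expr, present in assignment.items():
        p = measurement[expr]
        score *= p if present else (1.0 - p)
    for (a, b), w in compat.items():
        if assignment[a] and assignment[b]:
            score *= w
    return score

# Stage 1: MAP inference over all binary expression assignments.
exprs = list(measurement)
best = max(
    (dict(zip(exprs, bits))
     for bits in itertools.product([False, True], repeat=len(exprs))),
    key=joint_score,
)

# Stage 2: map recognized expressions to emotion tags.
# Hypothetical P(emotion | expression) entries.
emotion_given_expr = {
    "happiness": {"joy": 0.9, "fear": 0.05},
    "surprise": {"joy": 0.4, "fear": 0.4},
    "sadness": {"joy": 0.05, "fear": 0.3},
}

def emotion_tags(assignment, threshold=0.5):
    """Noisy-OR aggregation of emotion evidence from active expressions."""
    active = [e for e, on in assignment.items() if on]
    tags = set()
    for emo in {"joy", "fear"}:
        p_not = 1.0
        for e in active:
            p_not *= 1.0 - emotion_given_expr[e].get(emo, 0.0)
        if 1.0 - p_not >= threshold:
            tags.add(emo)
    return tags
```

With these toy numbers, the co-occurrence factor pushes the MAP assignment toward jointly active "happiness" and "surprise", and the noisy-OR stage then emits the emotion tags whose aggregated probability crosses the threshold.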

Original language: English
Pages (from-to): 682-691
Number of pages: 10
Journal: Image and Vision Computing
Volume: 32
Issue number: 10
DOIs
State: Published - Oct 2014
Externally published: Yes

Keywords

  • Implicit video tagging
  • Multi-emotion
  • Multi-expression
