
Multi-view common space learning for emotion recognition in the wild

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Recognizing emotion in the wild is a very challenging task. Recently, combining information from various views or modalities has attracted increasing attention. Cross-modality features and features extracted by different methods are regarded as multi-view information about a sample. In this paper, we propose a method that analyses multi-view features of emotion samples and automatically recognizes the expression, as part of the fourth Emotion Recognition in the Wild Challenge (EmotiW 2016). In our method, we first extract multi-view features such as BoF, CNN, LBP-TOP and audio features for each expression sample. We then learn the corresponding projection matrices that map the multi-view features into a common subspace, imposing ℓ2,1-norm penalties on the projection matrices for feature selection. We apply both this method and PLSR to emotion recognition. We conduct experiments on both the AFEW and HAPPEI datasets and achieve superior performance. The best recognition accuracy of our method is 55.31% on the AFEW dataset for video-based emotion recognition in the wild, and the minimum RMSE for group happiness intensity recognition is 0.9525 on the HAPPEI dataset. Both results are much better than the challenge baselines.
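The core idea in the abstract, learning per-view projection matrices into a shared subspace with an ℓ2,1-norm row-sparsity penalty for feature selection, can be sketched roughly as below. This is an illustrative reconstruction, not the paper's actual algorithm: the alternating update scheme, the orthonormalization of the common representation, and all function names are assumptions.

```python
import numpy as np

def l21_norm(W):
    # l2,1 norm: sum of the l2 norms of the rows of W.
    # Penalizing it drives entire rows to zero, i.e. selects features.
    return np.sum(np.linalg.norm(W, axis=1))

def fit_common_space(views, dim, lam=0.1, n_iter=30, seed=0):
    """Sketch of multi-view common space learning.

    views : list of (n_samples, d_v) feature matrices, one per view
            (e.g. BoF, CNN, LBP-TOP, audio features).
    Objective (assumed form):
        sum_v ||X_v W_v - Z||_F^2 + lam * ||W_v||_{2,1}
    solved by alternating between the projections W_v and the
    common representation Z.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    Z = rng.standard_normal((n, dim))
    Ws = [rng.standard_normal((X.shape[1], dim)) for X in views]
    for _ in range(n_iter):
        for v, X in enumerate(views):
            # Iteratively reweighted least squares for the l2,1 penalty:
            # W = (X^T X + lam * D)^{-1} X^T Z,  D_ii = 1 / (2 ||w_i||)
            row_norms = np.linalg.norm(Ws[v], axis=1) + 1e-8
            D = np.diag(1.0 / (2.0 * row_norms))
            Ws[v] = np.linalg.solve(X.T @ X + lam * D, X.T @ Z)
        # Update the common representation as the mean of the projected
        # views, then orthonormalize its columns to avoid the trivial
        # all-zero solution (a common constraint in subspace learning).
        Z = np.mean([X @ W for X, W in zip(views, Ws)], axis=0)
        Z, _ = np.linalg.qr(Z)
    return Ws, Z
```

After fitting, a classifier (e.g. an SVM) would typically be trained on the common representation `Z`; rows of each `W_v` with near-zero norm indicate features that the ℓ2,1 penalty has effectively discarded for that view.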

Original language: English
Title of host publication: ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction
Editors: Catherine Pelachaud, Yukiko I. Nakano, Toyoaki Nishida, Carlos Busso, Louis-Philippe Morency, Elisabeth Andre
Publisher: Association for Computing Machinery, Inc
Pages: 464-471
Number of pages: 8
ISBN (Electronic): 9781450345569
State: Published - 31 Oct 2016
Externally published: Yes
Event: 18th ACM International Conference on Multimodal Interaction, ICMI 2016 - Tokyo, Japan
Duration: 12 Nov 2016 - 16 Nov 2016

Publication series

Name: ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction

Conference

Conference: 18th ACM International Conference on Multimodal Interaction, ICMI 2016
Country/Territory: Japan
City: Tokyo
Period: 12/11/16 - 16/11/16

Keywords

  • Common space learning
  • Emotion recognition
  • EmotiW 2016 challenge
  • Multi-view learning
