
Enhancing Representation of Deep Features for Sensor-Based Activity Recognition

Research output: Contribution to journal › Article › peer-review

Abstract

Sensor-based activity recognition (AR) depends on effective feature representation and classification. However, many recent studies focus on recognition methods while largely ignoring feature representation. Benefitting from the success of Convolutional Neural Networks (CNNs) in feature extraction, we propose to improve the feature representation of activities. Specifically, we use a reversed CNN to generate significant data from the original features and combine the raw training data with this significant data to obtain enhanced training data. The proposed method not only trains better feature extractors but also helps to better understand the abstract features of sensor-based activity data. To demonstrate the effectiveness of our proposed method, we conduct comparative experiments with a CNN classifier and a CNN-LSTM classifier on five public datasets, namely UCIHAR, UniMiB SHAR, OPPORTUNITY, WISDM, and PAMAP2. In addition, we evaluate our proposed method against traditional methods such as Decision Tree, Multi-layer Perceptron, Extremely Randomized Trees, Random Forest, and k-Nearest Neighbour on a specific dataset, WISDM. The results show that our proposed method consistently outperforms the state-of-the-art methods.
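The data-enhancement step described above can be sketched in outline. This is a minimal, hypothetical illustration only: the abstract does not give implementation details, so the "reversed CNN" is stubbed as a simple linear projection back to the input space, and all shapes, names, and the feature source are assumptions. The one step taken directly from the abstract is concatenating the generated significant data with the raw training data to form the enhanced training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed raw sensor windows: (num_windows, timesteps, channels),
# e.g. 128-sample tri-axial accelerometer segments.
raw = rng.normal(size=(100, 128, 3))

def reversed_cnn_stub(features):
    """Stand-in for the paper's reversed CNN: map learned features
    back to the input space to produce 'significant data'.
    (The real method would use a trained decoder, not a random map.)"""
    w = rng.normal(size=(features.shape[1], 128 * 3))
    return (features @ w).reshape(-1, 128, 3)

# Placeholder for CNN-extracted features of the raw windows.
features = rng.normal(size=(100, 64))
significant = reversed_cnn_stub(features)

# Enhanced training data = raw data combined with significant data,
# as described in the abstract.
enhanced = np.concatenate([raw, significant], axis=0)
print(enhanced.shape)  # (200, 128, 3)
```

The enhanced set would then be fed to the downstream classifier (CNN or CNN-LSTM in the paper's experiments) in place of the raw training data alone.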

Original language: English
Pages (from-to): 130-145
Number of pages: 16
Journal: Mobile Networks and Applications
Volume: 26
Issue number: 1
DOIs
State: Published - Feb 2021

Keywords

  • Activity recognition
  • Enhancing features
  • Reversed CNN
  • Significant features
