
Compressive video sensing based on user attention model

  • Jie Xu*
  • Jianwei Ma
  • Dongming Zhang
  • Yongdong Zhang
  • Shouxun Lin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We propose a compressive video sensing scheme based on a user attention model (UAM) for the acquisition of real video sequences. For every group of consecutive video frames, we set the first frame as the reference frame and build a UAM with visual rhythm analysis (VRA) to automatically determine the region of interest (ROI) for the non-reference frames. The determined ROI usually contains significant motion and attracts more attention. Each frame of the video sequence is divided into non-overlapping blocks of 16×16 pixels. Compressive sampling is conducted block by block on each frame through a single measurement operator, and over the whole region on the ROIs through a different operator. Our video reconstruction algorithm applies the alternating direction method (ADM) for l1-norm minimization to the frame differences of non-ROI blocks and total variation (TV) minimization to the ROIs. Experimental results show that our method significantly enhances the quality of the reconstructed video and reduces the errors accumulated during reconstruction.
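The block-by-block sampling step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling ratio, the random Gaussian measurement operator, and all function names are assumptions chosen for the example; the abstract specifies only the 16×16 block size and the use of a single shared operator for the blocks.

```python
import numpy as np

BLOCK = 16    # block side length, as specified in the abstract
RATIO = 0.25  # assumed sampling ratio (not given in the abstract)

rng = np.random.default_rng(0)
# One shared measurement operator applied to every non-ROI block.
# A random Gaussian matrix is a common (assumed) choice in compressive sensing.
M = int(RATIO * BLOCK * BLOCK)
Phi = rng.standard_normal((M, BLOCK * BLOCK))

def sample_blocks(frame):
    """Compressively sample a frame block by block.

    frame: 2-D array whose sides are multiples of 16.
    Returns one measurement vector of length M per block.
    """
    h, w = frame.shape
    measurements = []
    for i in range(0, h, BLOCK):
        for j in range(0, w, BLOCK):
            x = frame[i:i + BLOCK, j:j + BLOCK].reshape(-1)  # vectorize block
            measurements.append(Phi @ x)                     # y = Phi x
    return np.array(measurements)

frame = rng.standard_normal((64, 64))   # toy 64x64 frame
y = sample_blocks(frame)                # 16 blocks, M measurements each
```

Reconstruction would then recover each block (or the frame differences of non-ROI blocks) from `y` via l1-norm or TV minimization, which is omitted here.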

Original language: English
Title of host publication: 28th Picture Coding Symposium, PCS 2010
Pages: 90-93
Number of pages: 4
State: Published - 2010
Externally published: Yes
Event: 28th Picture Coding Symposium, PCS 2010 - Nagoya, Japan
Duration: 8 Dec 2010 - 10 Dec 2010

Publication series

Name: 28th Picture Coding Symposium, PCS 2010

Conference

Conference: 28th Picture Coding Symposium, PCS 2010
Country/Territory: Japan
City: Nagoya
Period: 8/12/10 - 10/12/10

Keywords

  • Compressive sensing
  • ROI
  • User attention model
  • Video

