Volume: 20 | Article ID: art00002
Visual attention based surveillance videos compression
DOI: 10.2352/CIC.2012.20.1.art00002 | Published Online: January 2012

Visual attention models (VAM) try to mimic the human visual system in distinguishing salient regions from non-salient ones in a scene. Only a few attention models propose to detect salient motion in surveillance videos. These models utilize static features such as color, intensity, orientation, and faces, together with dynamic features such as motion, to detect the most salient regions in videos. This motivated us to propose a compression algorithm based on a visual attention model developed specifically for surveillance videos. In this paper we use a state-of-the-art visual attention model that combines bottom-up, top-down, and motion cues. Based on its similarity with experimentally obtained gaze maps, evaluated both visually and with quantitative measures, a compression model based on this attention model is proposed for H.264/AVC encoded videos. Our experimental results show that we can encode videos with the same or better quality than those obtained with the standard baseline profile of the JM 18.0 reference encoder, while reducing the file size by up to 22%.
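The abstract does not specify how the saliency map steers the H.264/AVC encoder, but a common way to realize such attention-based compression is to modulate the per-macroblock quantization parameter (QP): salient blocks are quantized more finely, background blocks more coarsely. The sketch below illustrates this idea; the function name, the base QP of 28, and the offset range are assumptions for illustration, not the paper's actual scheme.

```python
import numpy as np

def saliency_to_qp(saliency, base_qp=28, qp_range=8):
    """Map a per-macroblock saliency map (values in [0, 1]) to H.264 QP values.

    Hypothetical sketch: high saliency -> lower QP (finer quantization, better
    quality); low saliency -> higher QP (coarser quantization, smaller bitrate).
    """
    # Linear mapping centered on base_qp: saliency 1.0 gets base_qp - qp_range/2,
    # saliency 0.0 gets base_qp + qp_range/2.
    qp = base_qp + np.rint((1.0 - saliency) * qp_range - qp_range / 2)
    # H.264/AVC restricts QP to the range 0..51.
    return np.clip(qp, 0, 51).astype(int)

# Toy 2x2 macroblock saliency map: top-left fully salient, bottom-left background.
sal = np.array([[1.0, 0.5],
                [0.0, 0.25]])
print(saliency_to_qp(sal))  # [[24 28] [32 30]]
```

With `base_qp=28` and `qp_range=8`, the most salient block is encoded at QP 24 and the least salient at QP 32, so bits saved on the background offset the extra bits spent on salient regions.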

  Cite this article 

Fahad Fazal Elahi Guraya, Victor Medina, Faouzi Alaya Cheikh, "Visual attention based surveillance videos compression," in Proc. IS&T 20th Color and Imaging Conf., 2012, pp. 2-8.

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2012
Color and Imaging Conference
Society for Imaging Science and Technology, 7003 Kilworth Lane, Springfield, VA 22151, USA