Volume: 28 | Article ID: art00013
Video segmentation in presence of static and dynamic textures
DOI: 10.2352/ISSN.2470-1173.2016.15.IPAS-187 | Published Online: February 2016

This paper describes an approach to video sequence over-segmentation. The objective is to split the video into a set of disjoint spatiotemporal regions with homogeneous texture properties. We consider three possible region types: static texture, dynamic texture, and non-textured regions. Video over-segmentation is useful for a wide range of applications, including perceptual video coding, video-based object recognition, and high-level video segmentation. We treat the problem as a labeling problem on a Markov Random Field. The observed data are represented by the output of a fully connected layer of a convolutional neural network trained on static and dynamic textures, and the hidden states of the model represent the corresponding region labels. To obtain a robust over-segmentation, we employ an energy function composed of terms that encode the similarity of neighboring voxels and the smoothness of the resulting supervoxels. We show that our approach segments static and dynamic textures simultaneously. We have tested the approach on several video sequences rich in static and dynamic textures, with promising results.
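The energy function described in the abstract can be sketched as a sum of unary data terms (how well a node's CNN feature matches each label) and pairwise Potts smoothness terms (a penalty when neighboring nodes take different labels). The sketch below is illustrative only: the label prototypes, feature vectors, and weight `lam` are placeholder assumptions, not the paper's trained network outputs or tuned parameters.

```python
import numpy as np

# Three region types from the abstract.
LABELS = ("static", "dynamic", "non_textured")

def data_term(feature, prototypes):
    """Unary energy for one node: distance from its feature vector to
    each label's prototype (lower energy = better match). The prototype
    representation is a hypothetical stand-in for CNN-derived scores."""
    return {lab: float(np.linalg.norm(feature - p))
            for lab, p in prototypes.items()}

def mrf_energy(labeling, features, prototypes, edges, lam=1.0):
    """Total MRF energy = sum of unary data terms over all nodes
    + a Potts smoothness penalty `lam` for every edge whose two
    endpoints carry different labels."""
    unary = sum(data_term(features[i], prototypes)[labeling[i]]
                for i in range(len(labeling)))
    pairwise = sum(lam for (i, j) in edges if labeling[i] != labeling[j])
    return unary + pairwise
```

In practice this energy would be minimized over all labelings, e.g. with graph cuts or iterated conditional modes; the function above only evaluates a candidate labeling.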

  Cite this article 

Frantc V. A., Makov S. V., Voronin V. V., Marchuk V. I., Stradanchenko S. G., Egiazarian K. O., "Video segmentation in presence of static and dynamic textures," in Proc. IS&T Int’l. Symp. on Electronic Imaging: Image Processing: Algorithms and Systems XIV, 2016.

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2016
Electronic Imaging
Society for Imaging Science and Technology
7003 Kilworth Lane, Springfield, VA 22151 USA