Perception and Quality
Volume: 28 | Article ID: art00020
How saccadic models help predict where we look during a visual task? Application to visual quality assessment.
DOI: 10.2352/ISSN.2470-1173.2016.13.IQSP-216 | Published Online: February 2016
Abstract

In this paper, we present saccadic models, an alternative way to predict where observers look. Compared to saliency models, saccadic models generate plausible visual scanpaths from which saliency maps can be computed. In addition, these models have the advantage of being adaptable to different viewing conditions, viewing tasks, and types of visual scene. We demonstrate that saccadic models outperform existing saliency models at predicting where an observer looks in both the free-viewing condition and the quality-task condition (i.e. when observers have to score the quality of an image). To this end, the joint distributions of saccade amplitudes and orientations in both conditions (i.e. free-viewing and quality task) have been estimated from eye-tracking data. Thanks to saccadic models, we hope to improve upon the performance of saliency-based quality metrics, and more generally the capacity to predict where we look within visual scenes when performing visual tasks.
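The pipeline the abstract describes — drawing saccades from a joint distribution of amplitudes and orientations to generate scanpaths, then accumulating fixations into a saliency map — can be sketched as follows. This is a minimal illustration in Python/NumPy, not the authors' implementation; all function names, the bin layout, and the uniform toy distribution are assumptions for demonstration.

```python
import numpy as np

def generate_scanpath(joint_pdf, amp_bins, ori_bins, start, n_fixations, shape, rng):
    """Sample a plausible scanpath: at each step, draw an (amplitude,
    orientation) pair from the joint distribution and move the fixation
    point accordingly, clamped to the image bounds.

    joint_pdf : 2-D array over (amplitude bin, orientation bin)
    amp_bins  : saccade amplitudes in pixels, one per row of joint_pdf
    ori_bins  : saccade orientations in radians, one per column
    """
    h, w = shape
    x, y = start
    path = [(x, y)]
    probs = joint_pdf.ravel() / joint_pdf.sum()
    for _ in range(n_fixations - 1):
        idx = rng.choice(probs.size, p=probs)
        ai, oi = np.unravel_index(idx, joint_pdf.shape)
        amp, ori = amp_bins[ai], ori_bins[oi]
        x = min(max(x + amp * np.cos(ori), 0.0), w - 1)
        y = min(max(y + amp * np.sin(ori), 0.0), h - 1)
        path.append((x, y))
    return path

def saliency_from_scanpaths(scanpaths, shape):
    """Accumulate fixation counts from many scanpaths into a map,
    normalized to [0, 1]; smoothing is omitted for brevity."""
    sal = np.zeros(shape)
    for path in scanpaths:
        for x, y in path:
            sal[int(round(y)), int(round(x))] += 1
    return sal / sal.max()
```

In practice the joint distribution would be estimated per viewing condition from eye-tracking data (as done in the paper for free-viewing versus the quality task), and the sampled displacement would typically be modulated by a bottom-up saliency term and an inhibition-of-return mechanism.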

Views 13
Downloads 2
  Cite this article 

Olivier Le Meur, Antoine Coutrot, "How saccadic models help predict where we look during a visual task? Application to visual quality assessment." in Proc. IS&T Int'l. Symp. on Electronic Imaging: Image Quality and System Performance XIII, 2016, https://doi.org/10.2352/ISSN.2470-1173.2016.13.IQSP-216

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2016
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology