Volume: 34 | Article ID: HVEI-168
SalyPath360: Saliency and scanpath prediction framework for omnidirectional images
DOI: 10.2352/EI.2022.34.11.HVEI-168  |  Published Online: January 2022
Abstract

This paper introduces a new framework to predict visual attention for omnidirectional images. The key feature of our architecture is the simultaneous prediction of the saliency map and a corresponding scanpath for a given stimulus. The framework implements a fully convolutional encoder-decoder neural network augmented by an attention module to generate representative saliency maps. In addition, an auxiliary network is employed to generate probable viewport center fixation points through the $SoftArgMax$ function, which allows fixation points to be derived from feature maps. To take advantage of the scanpath prediction, an adaptive joint probability distribution model is then applied to construct the final unbiased saliency map by leveraging the encoder-decoder-based saliency map and the scanpath-based saliency heatmap. The proposed framework was evaluated in terms of saliency and scanpath prediction, and the results were compared to state-of-the-art methods on the Salient360! dataset. The results showed the relevance of our framework and the benefits of such an architecture for further omnidirectional visual attention prediction tasks.
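The $SoftArgMax$ operation mentioned above is a standard differentiable relaxation of argmax: a softmax over all spatial locations of a feature map yields a probability distribution, and the expected coordinate under that distribution gives a sub-pixel fixation point. A minimal NumPy sketch (the function name, the `beta` sharpness parameter, and the toy map are illustrative, not the paper's implementation):

```python
import numpy as np

def soft_argmax(feature_map, beta=100.0):
    """Differentiable argmax: expected (row, col) coordinate under a
    softmax distribution over the 2D feature map. `beta` sharpens the
    distribution toward the true argmax."""
    h, w = feature_map.shape
    # Softmax over all spatial locations (shift by max for stability)
    logits = beta * feature_map.ravel()
    logits -= logits.max()
    probs = np.exp(logits)
    probs /= probs.sum()
    probs = probs.reshape(h, w)
    # Expected coordinates under the spatial distribution
    rows = np.arange(h, dtype=float)
    cols = np.arange(w, dtype=float)
    y = float((probs.sum(axis=1) * rows).sum())
    x = float((probs.sum(axis=0) * cols).sum())
    return y, x

# A feature map with a single sharp peak at (row=2, col=3)
fmap = np.zeros((8, 8))
fmap[2, 3] = 1.0
print(soft_argmax(fmap))  # close to (2.0, 3.0)
```

Because every step is a smooth function of the feature map, gradients can flow through the predicted fixation coordinates during training, which is what makes this operation useful inside an auxiliary scanpath network.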

Cite this article

Mohamed Amine Kerkouri, Marouane Tliba, Aladine Chetouani, Mohamed Sayeh, "SalyPath360: Saliency and scanpath prediction framework for omnidirectional images," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Human Vision and Electronic Imaging, 2022, pp. 168-1 - 168-7, https://doi.org/10.2352/EI.2022.34.11.HVEI-168

Copyright statement
Copyright © Society for Imaging Science and Technology 2022
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
IS&T 7003 Kilworth Lane, Springfield, VA 22151 USA