This paper introduces a new framework for predicting the visual attention of omnidirectional images. The key feature of our architecture is the simultaneous prediction of a saliency map and a corresponding scanpath for a given stimulus. The framework implements a fully convolutional encoder-decoder network, augmented by an attention module, to generate representative saliency maps. In addition, an auxiliary network is employed to generate probable viewport-center fixation points through the $SoftArgMax$ function, which allows fixation points to be derived from feature maps. To take advantage of the scanpath prediction, an adaptive joint probability distribution model is then applied to construct the final unbiased saliency map by leveraging the encoder-decoder-based saliency map and the scanpath-based saliency heatmap. The proposed framework was evaluated in terms of saliency and scanpath prediction, and the results were compared to state-of-the-art methods on the Salient360! dataset. The results show the relevance of our framework and the benefits of such an architecture for further omnidirectional visual attention prediction tasks.
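To make the two operations named above concrete, the following is a minimal PyTorch sketch: a differentiable 2-D $SoftArgMax$ that derives an expected fixation coordinate from a feature map, and a simple convex-combination fusion standing in for the adaptive joint probability distribution model. The temperature `beta`, the tensor shapes, and the fixed weight `alpha` are illustrative assumptions, not details taken from the paper.

```python
import torch

def soft_argmax_2d(feature_map: torch.Tensor, beta: float = 100.0) -> torch.Tensor:
    """Differentiable 2-D SoftArgMax over a (B, H, W) feature map.

    Returns the softmax-weighted expected (x, y) coordinate, shape (B, 2).
    `beta` is an assumed temperature: larger values sharpen the softmax,
    approaching a hard argmax while staying differentiable.
    """
    b, h, w = feature_map.shape
    # Softmax over all spatial positions so the map sums to 1.
    probs = torch.softmax(beta * feature_map.reshape(b, -1), dim=-1).reshape(b, h, w)
    # Pixel-coordinate grids.
    ys = torch.arange(h, dtype=probs.dtype, device=probs.device)
    xs = torch.arange(w, dtype=probs.dtype, device=probs.device)
    # Expected coordinates = probability-weighted sums over the marginals.
    y = (probs.sum(dim=2) * ys).sum(dim=1)
    x = (probs.sum(dim=1) * xs).sum(dim=1)
    return torch.stack([x, y], dim=-1)

def fuse_saliency(s_cnn: torch.Tensor, s_scan: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    """Convex combination of the encoder-decoder saliency map and the
    scanpath-based heatmap, renormalized to a probability distribution.
    The paper's model weights the two maps adaptively; a fixed `alpha`
    is used here only for illustration.
    """
    fused = alpha * s_cnn + (1.0 - alpha) * s_scan
    return fused / fused.sum(dim=(-2, -1), keepdim=True)

# Example: derive a fixation from a random feature map and fuse two maps.
fmap = torch.randn(1, 64, 128)
print(soft_argmax_2d(fmap))                    # tensor of shape (1, 2)
print(fuse_saliency(fmap.softmax(dim=-1),      # toy non-negative maps
                    fmap.sigmoid()).shape)     # torch.Size([1, 64, 128])
```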
Mohamed Amine Kerkouri, Marouane Tliba, Aladine Chetouani, Mohamed Sayeh, "SalyPath360: Saliency and scanpath prediction framework for omnidirectional images," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Human Vision and Electronic Imaging, 2022, pp. 168-1 - 168-7. https://doi.org/10.2352/EI.2022.34.11.HVEI-168