Regular Article FastTrack
Volume: 0 | Article ID: 040507
Assessing the Quality of Light Field Images: A Graph-based Approach
Abstract

Light fields (LFs) capture both angular and spatial information of light rays, providing an immersive and detailed representation of the visual world. However, the high dimensionality of LF data presents challenges for compression and transmission algorithms, which often introduce degradations that affect visual quality. To address this, we propose GCNN-LFIQA, a novel no-reference LF image quality assessment method that leverages the power of deep graph convolutional neural networks (GCNNs). The method employs a single-stream deep GCNN architecture to model the complex structural and geometric relationships within LF data, enabling accurate quality predictions. A key innovation of the proposed approach is its input preparation pipeline, which converts horizontal epipolar plane images into skeleton-based graph representations enriched with node-level features such as betweenness centrality. These graph representations serve as input to the GCNN, which predicts quality scores using a regression block. We evaluated GCNN-LFIQA on two widely used LF quality datasets, Win5-LID and LFDD, where it achieved high correlation values and outperformed other state-of-the-art methods. The proposed method demonstrates robustness, computational efficiency, and the potential to address the unique challenges of LF image quality assessment in real-world applications.
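The abstract does not spell out the exact skeletonisation or graph-construction procedure, but the kind of input preparation it describes can be sketched as follows. This is a minimal, hypothetical illustration (helper names are my own, not the paper's): a binary skeleton mask is converted into a pixel-adjacency graph, and each node is annotated with its betweenness centrality, computed here with Brandes' algorithm for unweighted graphs.

```python
from collections import defaultdict, deque

def skeleton_to_graph(mask):
    """Turn a binary skeleton mask (nested lists of 0/1) into an
    adjacency list: nodes are foreground pixels, edges join pixels
    that are 8-connected neighbours."""
    h, w = len(mask), len(mask[0])
    adj = defaultdict(set)
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            adj[(y, x)]  # register the pixel even if it has no neighbours
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        adj[(y, x)].add((ny, nx))
    return adj

def betweenness_centrality(adj):
    """Brandes' algorithm on an unweighted, undirected graph.
    Returns one centrality score per node (unnormalised)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, q = [], deque([s])
        preds = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        while q:                          # BFS: shortest-path counts
            v = q.popleft()
            stack.append(v)
            for u in adj[v]:
                if dist[u] < 0:
                    dist[u] = dist[v] + 1
                    q.append(u)
                if dist[u] == dist[v] + 1:
                    sigma[u] += sigma[v]
                    preds[u].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # back-propagate dependencies
            u = stack.pop()
            for v in preds[u]:
                delta[v] += sigma[v] / sigma[u] * (1 + delta[u])
            if u != s:
                bc[u] += delta[u]
    # each undirected pair is counted from both endpoints
    return {v: c / 2 for v, c in bc.items()}
```

In a pipeline like the one described, the resulting per-node centrality values would become node features in the graph fed to the GCNN; in practice one would use a library such as NetworkX for the centrality computation rather than this hand-rolled version.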

  Cite this article 

Sana Alamgeer, Muhammad Irshad, Mylène C. Q. Farias, "Assessing the Quality of Light Field Images: A Graph-based Approach," Journal of Imaging Science and Technology, 2025, pp. 1-7, https://doi.org/10.2352/J.ImagingSci.Technol.2025.69.4.040507

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2025
  Article timeline 
  • Received: August 2024
  • Accepted: January 2025
