Light fields (LFs) capture both angular and spatial information of light rays, providing an immersive and detailed representation of the visual world. However, the high dimensionality of LF data presents challenges for compression and transmission algorithms, which often introduce degradations that affect visual quality. To address this, we propose GCNN-LFIQA, a novel no-reference LF image quality assessment method that leverages deep graph convolutional neural networks (GCNNs). The method employs a single-stream deep GCNN architecture to model the complex structural and geometric relationships within LF data, enabling accurate quality predictions. A key innovation of the proposed approach is its input preparation pipeline, which converts horizontal epipolar plane images into skeleton-based graph representations enriched with node-level features such as betweenness centrality. These graph representations serve as input to the GCNN, which predicts quality scores using a regression block. We evaluated GCNN-LFIQA on two widely used LF quality datasets, Win5-LID and LFDD, where it achieved high correlation with subjective scores and outperformed other state-of-the-art methods. The proposed method demonstrates robustness, computational efficiency, and the potential to address the unique challenges of LF image quality assessment in real-world applications.
Sana Alamgeer, Muhammad Irshad, Mylène C. Q. Farias, "Assessing the Quality of Light Field Images: A Graph-based Approach," Journal of Imaging Science and Technology, 2025, pp. 1-7, https://doi.org/10.2352/J.ImagingSci.Technol.2025.69.4.040507
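The input-preparation idea described in the abstract (a skeletonized epipolar plane image becomes a graph whose nodes carry betweenness-centrality features) can be sketched in standard-library Python. This is an illustrative reconstruction, not the authors' code: the skeletonization step and the GCNN itself are omitted, and the function names are our own. Foreground pixels of a binary skeleton become nodes, 8-connected pixels share an edge, and normalized betweenness centrality (Brandes' algorithm for unweighted graphs) is computed per node.

```python
from collections import deque

def skeleton_to_graph(skeleton):
    """Map each foreground pixel (value 1) of a binary skeleton to a node;
    8-connected foreground pixels share an undirected edge."""
    nodes = [(r, c) for r, row in enumerate(skeleton)
             for c, v in enumerate(row) if v]
    index = {p: i for i, p in enumerate(nodes)}
    adj = {i: set() for i in range(len(nodes))}
    for (r, c), i in index.items():
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                j = index.get((r + dr, c + dc))
                if j is not None and j != i:
                    adj[i].add(j)
                    adj[j].add(i)
    return nodes, adj

def betweenness_centrality(adj):
    """Brandes' algorithm for unweighted graphs, normalized for an
    undirected graph (each node pair is visited from both endpoints)."""
    n = len(adj)
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Breadth-first search from s, counting shortest paths (sigma).
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    scale = 1 / ((n - 1) * (n - 2)) if n > 2 else 1.0
    return {v: bc[v] * scale for v in adj}
```

For example, a straight one-pixel-wide skeleton `[[1, 1, 1, 1, 1]]` yields a 5-node path graph whose centre node gets centrality 2/3 and whose endpoints get 0; in the full pipeline such per-node values would be stacked with other node features and fed to the GCNN's regression block.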