Volume: 33 | Article ID: art00018
A deep perceptual metric for 3D point clouds
DOI: 10.2352/ISSN.2470-1173.2021.9.IQSP-257 | Published Online: January 2021
Abstract

Point clouds are essential for the storage and transmission of 3D content. As they can entail significant volumes of data, point cloud compression is crucial for practical usage. Recently, point cloud geometry compression approaches based on deep neural networks have been explored. In this paper, we evaluate how well the typical voxel-based loss functions employed to train these networks predict perceptual quality. We find that the commonly used focal loss and weighted binary cross-entropy are poorly correlated with human perception. We thus propose a perceptual loss function for 3D point clouds that outperforms existing loss functions on the ICIP2020 subjective dataset. In addition, we propose a novel truncated distance field voxel grid representation and find that it leads to sparser latent spaces and loss functions that are more correlated with perceived visual quality than a binary representation. The source code is available at https://github.com/mauriceqch/2021_pc_perceptual_loss.
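To make the abstract's terms concrete, below is a minimal sketch of the voxel-based losses it names (weighted binary cross-entropy and focal loss) and of converting a binary occupancy grid into a truncated distance field. This is illustrative only, not the paper's implementation; the function names, the class weight, the alpha, gamma, and truncation values are all assumptions.

```python
# Hypothetical sketch, not the authors' code: voxel-based losses and a
# truncated distance field, written with NumPy/SciPy. All defaults are
# assumed values for illustration.
import numpy as np
from scipy.ndimage import distance_transform_edt

def weighted_bce(pred, target, w_pos=3.0, eps=1e-7):
    """Weighted binary cross-entropy over a voxel occupancy grid.

    pred   -- predicted occupancy probabilities in (0, 1)
    target -- binary ground-truth occupancy grid, same shape
    w_pos  -- weight on occupied voxels (assumed value) to counter sparsity
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    loss = -(w_pos * target * np.log(pred)
             + (1.0 - target) * np.log(1.0 - pred))
    return loss.mean()

def focal_loss(pred, target, alpha=0.75, gamma=2.0, eps=1e-7):
    """Binary focal loss on the same grid; down-weights easy voxels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    p_t = target * pred + (1.0 - target) * (1.0 - pred)
    alpha_t = target * alpha + (1.0 - target) * (1.0 - alpha)
    return (-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)).mean()

def truncated_distance_field(occupancy, truncation=3.0):
    """Turn a binary occupancy grid into a truncated distance field:
    each voxel stores its distance to the nearest occupied voxel,
    clipped at `truncation` (assumed value)."""
    dist = distance_transform_edt(occupancy == 0)  # 0 at occupied voxels
    return np.minimum(dist, truncation)
```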

Cite this article

Maurice Quach, Aladine Chetouani, Giuseppe Valenzise, Frederic Dufaux, "A deep perceptual metric for 3D point clouds," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Image Quality and System Performance XVIII, 2021, pp. 257-1 - 257-7, https://doi.org/10.2352/ISSN.2470-1173.2021.9.IQSP-257.

Copyright statement
Copyright © Society for Imaging Science and Technology 2021
Electronic Imaging
ISSN: 2470-1173
Society for Imaging Science and Technology
IS&T, 7003 Kilworth Lane, Springfield, VA 22151, USA