Volume: 3 | Article ID: 04
A Comparative Study on the Loss Functions for Image Enhancement Networks
DOI: 10.2352/lim.2022.1.1.04 | Published Online: July 2022
Abstract

Image enhancement and image retouching processes are often dominated by global (shift-invariant) changes of colour and tone. Most deep-learning-based methods proposed for image enhancement are trained to enforce similarity in pixel values and/or in the high-level feature space. We hypothesise that for tasks such as image enhancement and retouching, which involve a significant shift in colour statistics, training the model to restore the overall colour distribution can be of vital importance. To address this, we study the effect of a Histogram Matching loss function on a state-of-the-art colour enhancement network, HDRNet. The loss enforces similarity between the RGB histograms of the predicted and target images. Through a detailed qualitative and quantitative comparison of different loss functions on varied datasets, we conclude that enforcing similarity in the colour distribution yields a substantial improvement in performance and can play a significant role when choosing loss functions for image enhancement networks.
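The abstract does not give the exact formulation of the Histogram Matching loss; as a rough illustration only, a colour-distribution loss of this kind could be sketched as a per-channel L1 distance between normalised RGB histograms (the bin count, value range, and distance metric below are assumptions, not the paper's specification):

```python
import numpy as np

def histogram_matching_loss(pred, target, bins=64):
    """Sketch of a histogram-based colour-distribution loss.

    `pred` and `target` are H x W x 3 arrays with values in [0, 1].
    For each RGB channel, both images are reduced to a normalised
    histogram, and the per-bin L1 differences are summed.
    """
    loss = 0.0
    for c in range(3):  # R, G, B channels
        h_pred, _ = np.histogram(pred[..., c], bins=bins, range=(0.0, 1.0))
        h_tgt, _ = np.histogram(target[..., c], bins=bins, range=(0.0, 1.0))
        # Normalise counts to probability distributions so the loss
        # does not depend on image size.
        h_pred = h_pred / h_pred.sum()
        h_tgt = h_tgt / h_tgt.sum()
        loss += np.abs(h_pred - h_tgt).sum()
    return loss / 3.0
```

Note that `np.histogram` is not differentiable, so a trainable variant (as would be required inside a network such as HDRNet) would need a soft or smoothed histogram; this sketch only conveys what the loss measures.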

  Cite this article 

Aamir Mustafa, Hongjie You, Rafal K. Mantiuk, "A Comparative Study on the Loss Functions for Image Enhancement Networks," in London Imaging Meeting, 2022, pp. 11–15, https://doi.org/10.2352/lim.2022.1.1.04

  Copyright statement 
Copyright © 2022 Society for Imaging Science and Technology
London Imaging Meeting
ISSN: 2694-118X
Society for Imaging Science and Technology
IS&T, 7003 Kilworth Lane, Springfield, VA 22151, USA