Image enhancement and image retouching processes are often dominated by global (shift-invariant) changes of colour and tone. Most deep-learning-based methods proposed for image enhancement are trained to enforce similarity in pixel values and/or in the high-level feature space. We hypothesise that for tasks such as image enhancement and retouching, which involve a significant shift in colour statistics, training the model to restore the overall colour distribution can be of vital importance. To address this, we study the effect of a Histogram Matching loss function on a state-of-the-art colour enhancement network — HDRNet. The loss enforces similarity of the RGB histograms of the predicted and the target images. By providing a detailed qualitative and quantitative comparison of different loss functions on varied datasets, we conclude that enforcing similarity in the colour distribution achieves a substantial improvement in performance and can play a significant role when choosing loss functions for image enhancement networks.
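The idea of a histogram-based loss can be illustrated with a minimal sketch. The snippet below is not the authors' exact formulation; it assumes a soft (Gaussian-kernel) histogram as a differentiable surrogate for a hard bin count, and compares per-channel RGB histograms of the predicted and target images with an L1 distance. The function names, bin count, and kernel width are illustrative choices, not values from the paper.

```python
import numpy as np

def soft_histogram(channel, bins=16, sigma=0.02):
    """Soft histogram of a flat array of values in [0, 1].

    Each pixel contributes a Gaussian weight to every bin centre,
    which makes the histogram differentiable (a common surrogate
    when such a loss is trained with gradient descent).
    """
    centers = (np.arange(bins) + 0.5) / bins
    weights = np.exp(-0.5 * ((channel[:, None] - centers[None, :]) / sigma) ** 2)
    hist = weights.sum(axis=0)
    return hist / hist.sum()  # normalise to a probability distribution

def histogram_matching_loss(pred, target, bins=16):
    """L1 distance between per-channel RGB histograms (illustrative).

    pred, target: H x W x 3 images with values in [0, 1].
    """
    loss = 0.0
    for c in range(3):
        hp = soft_histogram(pred[..., c].ravel(), bins)
        ht = soft_histogram(target[..., c].ravel(), bins)
        loss += np.abs(hp - ht).sum()
    return loss / 3.0
```

By construction, the loss is zero when the two images share the same colour distribution and grows as their histograms diverge, independently of pixel-wise alignment; in a real training setup the same computation would be written in an autodiff framework such as PyTorch so gradients flow to the network.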
Aamir Mustafa, Hongjie You, Rafal K. Mantiuk, "A Comparative Study on the Loss Functions for Image Enhancement Networks," in London Imaging Meeting, 2022, pp. 11–15, https://doi.org/10.2352/lim.2022.1.1.04