Work Presented at the 14th China Academy Conference on Printing and Packaging 2023
Volume: 68 | Article ID: 020401
Dual-Discriminator Generative Adversarial Network with Uniform Color Information Extraction for Color Constancy
DOI: 10.2352/J.ImagingSci.Technol.2024.68.2.020401 | Published Online: March 2024
Abstract

Generative adversarial networks (GANs) have attracted extensive attention in color constancy because they allow pixel-wise supervision. However, the misinterpretation of color features and the low sensitivity of the discriminator caused by the strong correlation of multiple features limit the learning capability of GANs. To address these issues, we propose a dual-discriminator generative adversarial network (DDGAN), which includes a color feature learning (CFL) module, a feature fusion discriminator (FFD) module, and a global consistency constraint (GCC) module. First, CFL attends to regions with uniform color, enabling the generator to learn distinguishable color information. Second, FFD is a discriminator module that contains two feature extraction branches: one extracts color features and the other extracts globally correlated features. These features are then fused to weaken structural features and enhance the discriminator's sensitivity to color features. Finally, GCC imposes global consistency constraints to reconsider the structural features weakened by FFD and to unify structural and color features, yielding images that are more uniform in both color and content. Extensive experiments on the ColorChecker RECommended, NUS 8-Camera, and Cube datasets show that our DDGAN outperforms other GAN-based methods on five popular metrics.
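The feature-fusion idea in FFD, two branches whose outputs are combined so that color features dominate the discriminator's decision, can be illustrated with a minimal sketch. The branch functions and the `color_weight` parameter below are hypothetical stand-ins, not the paper's learned networks; they only show the weighted-fusion pattern under simple hand-crafted features.

```python
import numpy as np

def color_branch(img):
    # Hypothetical color-feature branch: normalized per-channel mean
    # chromaticity, a stand-in for learned color features.
    means = img.reshape(-1, 3).mean(axis=0)
    return means / (means.sum() + 1e-8)

def global_branch(img):
    # Hypothetical globally correlated branch: coarse 2x2 spatial
    # averages capturing image-wide structure.
    h, w, _ = img.shape
    blocks = img.reshape(2, h // 2, 2, w // 2, 3).mean(axis=(1, 3))
    return blocks.flatten()

def fused_features(img, color_weight=0.8):
    # Fusion step: concatenate both branches, scaling color features
    # up and structural features down so a downstream discriminator
    # would be more sensitive to color than to structure.
    c = color_branch(img)
    g = global_branch(img)
    return np.concatenate([color_weight * c, (1.0 - color_weight) * g])

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))  # toy 4x4 RGB image
feat = fused_features(img)
```

In the paper the two branches are learned networks and the fusion feeds the discriminator; here the weighting simply makes the color coordinates numerically dominant in the fused vector.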

  Cite this article 

Huiting Xu, Zhenshan Tan, Zhijiang Li, Shuying Lyu, "Dual-Discriminator Generative Adversarial Network with Uniform Color Information Extraction for Color Constancy," Journal of Imaging Science and Technology, 2024, pp. 1-11, https://doi.org/10.2352/J.ImagingSci.Technol.2024.68.2.020401

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2024
  Article timeline 
  • Received: June 2023
  • Accepted: August 2023
  • Published: March 2024
