Most cameras still encode images in the small-gamut sRGB color space. This reliance on sRGB is disappointing, as modern display hardware and image-editing software are capable of using wider-gamut color spaces. Converting a small-gamut image to a wider gamut is a challenging problem. Many devices and software packages use colorimetric strategies that map colors from the small gamut to their equivalent colors in the wider gamut. This colorimetric approach avoids visual changes in the image but leaves much of the target wide-gamut space unused. Noncolorimetric approaches stretch or expand the small-gamut colors to enhance image colors, at the risk of color distortions. We take a unique approach to gamut expansion by treating it as a restoration problem. A key insight behind our approach is that cameras internally encode images in a wide-gamut color space (i.e., ProPhoto) before compressing and clipping the colors to sRGB's smaller gamut. Based on this insight, we use a software-based camera ISP to generate a dataset of 5,000 pairs of images encoded in both sRGB and ProPhoto. This dataset enables us to train a neural network to perform wide-gamut color restoration. Our deep-learning strategy achieves significant improvements over existing solutions and produces color-rich images with few to no visual artifacts.
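Why clipping makes this a restoration problem rather than a simple inversion can be illustrated with a minimal sketch (the specific color values and the hard-clip function are illustrative assumptions, not the paper's actual ISP): once two distinct wide-gamut colors are clipped to the same small-gamut value, the mapping back is one-to-many, so a learned prior is needed.

```python
import numpy as np

# Two distinct, hypothetical linear wide-gamut colors whose red channel
# exceeds the [0, 1] range of the smaller target gamut.
wide_a = np.array([1.3, 0.2, 0.1])
wide_b = np.array([1.6, 0.2, 0.1])

def clip_to_gamut(rgb):
    """Hard-clip linear RGB to the [0, 1] cube (a common, lossy strategy)."""
    return np.clip(rgb, 0.0, 1.0)

# Both colors collapse onto the same gamut-boundary value, so the
# original wide-gamut values cannot be recovered by any fixed inverse map.
small_a = clip_to_gamut(wide_a)
small_b = clip_to_gamut(wide_b)
```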
The quality of electric lighting systems in buildings can be assessed using color rendition metrics. However, such metrics are limited in quantifying tunable solid-state light sources, since tunable lighting systems can generate a vast number of different white-light spectra, providing flexibility in terms of color quality and energy efficiency. Previous research suggests that color rendition is multi-dimensional in nature and cannot be reduced to a single number. Color shifts under a test light source relative to a reference illuminant, changes in color gamut, and color discrimination are important dimensions of the quality of electric light sources that are not captured by a single-number metric. To address the challenges in characterizing the color rendition of modern solid-state light sources, the development of a multi-dimensional color rendition space is proposed. The proposed continuous measure can quantify, with caveats, the change in the color rendition ability of tunable solid-state lighting devices. Future work will discretize the continuous color rendition space to address the shortcomings of a continuous three-dimensional space.
In modern moving-image production pipelines, it is unavoidable to move footage through different color spaces. Unfortunately, these color spaces exhibit color gamuts of various sizes. The most common problem is converting the cameras’ wide-gamut color spaces to the smaller gamuts of the display devices (cinema projector, broadcast monitor, computer display). It is therefore necessary to scale down the scene-referred footage to the gamut of the display using tone mapping functions [34]. In a cinema production pipeline, ACES is widely used as the predominant color system. The all-color-encompassing ACES AP0 primaries are defined inside the system in a general way. However, when implementing visual effects and performing a color grade, the more practical ACES AP1 primaries are used. When recording highly saturated bright colors, color values often fall outside the target color space. This results in negative color values, which are hard to address inside a color pipeline. "Users of ACES are experiencing problems with clipping of colors and the resulting artifacts (loss of texture, intensification of color fringes). This clipping occurs at two stages in the pipeline: <list list-type="simple"> <list-item>- Conversion from camera raw RGB or from the manufacturer’s encoding space into ACES AP0</list-item> <list-item>- Conversion from ACES AP0 into the working color space ACES AP1" [1]</list-item> </list> The ACES community established a Gamut Mapping Virtual Working Group (VWG) to address these problems. The group’s scope is to propose a suitable gamut mapping/compression algorithm. This algorithm should perform well with wide-gamut, high-dynamic-range, scene-referred content. Furthermore, it should also be robust and invertible. This paper tests the behavior of the published GamutCompressor when applied to in- and out-of-gamut imagery and provides suggestions for implementing it in applications. The tests are executed in The Foundry’s Nuke [2].
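How out-of-gamut colors produce negative channel values can be reproduced with a short sketch. The matrix below is the standard linear XYZ-to-sRGB conversion (IEC 61966-2-1); the chosen chromaticity is an illustrative saturated green, not a value taken from the paper's test imagery, and sRGB stands in here for any target space smaller than the source gamut.

```python
import numpy as np

# Linear XYZ -> linear sRGB conversion matrix (IEC 61966-2-1, D65 white).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# A highly saturated green (chromaticity x=0.17, y=0.70, luminance Y=1.0):
# a physically realizable color that lies outside the sRGB gamut.
x, y, Y = 0.17, 0.70, 1.0
xyz = np.array([x / y * Y, Y, (1.0 - x - y) / y * Y])

rgb = XYZ_TO_SRGB @ xyz
# R comes out negative and G exceeds 1: the color cannot be represented
# in the target space, and naive clipping would discard the excess.
```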