In modern moving image production pipelines, footage unavoidably passes through several color spaces, and these color spaces exhibit gamuts of different sizes. The most common problem is converting a camera’s wide-gamut color space to the smaller gamut of a display device (cinema projector, broadcast monitor, computer display), so the scene-referred footage must be scaled down to the display gamut using tone mapping functions [34]. In cinema production pipelines, ACES is the predominant color system. Its all-color-encompassing ACES AP0 primaries are defined in a deliberately general way; however, for visual effects work and color grading, the more practical ACES AP1 primaries are used. When highly saturated, bright colors are recorded, color values often fall outside the target color space. This results in negative color values, which are hard to handle inside a color pipeline. "Users of ACES are experiencing problems with clipping of colors and the resulting artifacts (loss of texture, intensification of color fringes). This clipping occurs at two stages in the pipeline:

- Conversion from camera raw RGB or from the manufacturer’s encoding space into ACES AP0
- Conversion from ACES AP0 into the working color space ACES AP1" [1]

The ACES community established a Gamut Mapping Virtual Working Group (VWG) to address these problems. The group’s scope is to propose a suitable gamut mapping/compression algorithm that performs well with wide-gamut, high dynamic range, scene-referred content and that is robust and invertible. This paper tests the behavior of the published GamutCompressor when applied to in-gamut and out-of-gamut imagery and provides suggestions for its implementation. The tests are executed in The Foundry’s Nuke [2].
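To make the compression step concrete, the NumPy sketch below illustrates the general idea behind such a gamut compression operator: each sample’s per-channel distance from the achromatic axis is remapped with a smooth, invertible power curve so that distances up to a chosen limit land inside the target gamut. This is a minimal sketch of the distance-compression idea, not the GamutCompressor itself; the threshold, limit, and power parameters shown are illustrative placeholders.

```python
import numpy as np

def compress_distance(dist, thr, lim, pwr=1.2):
    """Invertible power-curve compression: distances below `thr` pass
    through unchanged; distances up to `lim` are mapped into [thr, 1]."""
    scl = (lim - thr) / (((1.0 - thr) / (lim - thr)) ** -pwr - 1.0) ** (1.0 / pwr)
    nd = np.maximum(dist - thr, 0.0) / scl
    return np.where(dist < thr,
                    dist,
                    thr + scl * nd / (1.0 + nd ** pwr) ** (1.0 / pwr))

def gamut_compress(rgb,
                   thr=(0.815, 0.803, 0.880),   # placeholder per-channel thresholds
                   lim=(1.147, 1.264, 1.312)):  # placeholder per-channel limits
    """Pull out-of-gamut values toward the achromatic axis. `rgb` is an
    (..., 3) array of linear working-space (e.g. AP1) values."""
    ach = np.max(rgb, axis=-1, keepdims=True)    # achromatic axis
    denom = np.where(ach == 0.0, 1.0, np.abs(ach))
    dist = (ach - rgb) / denom                   # distance from the axis
    cd = np.stack([compress_distance(dist[..., i], thr[i], lim[i])
                   for i in range(3)], axis=-1)
    return ach - cd * np.abs(ach)
```

Because the curve is the identity below the threshold, colors already well inside the gamut pass through unchanged, and the monotonic curve keeps the operation invertible.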
Contrast sensitivity functions (CSFs) describe the smallest visible contrast across a range of stimulus and viewing parameters. CSFs are useful for imaging and video applications, as contrast thresholds describe the maximum color reproduction error that remains invisible to a human observer. However, existing CSFs are limited. First, they are typically defined only for achromatic contrast. Second, even when they are defined for chromatic contrast, the thresholds are described along the cardinal dimensions of linear opponent color spaces and are therefore difficult to relate to the dimensions of more commonly used color spaces, such as sRGB or CIE L*a*b*. Here, we adapt a recently proposed CSF to what we call color threshold functions (CTFs), which describe thresholds for color differences in more commonly used color spaces. We include color spaces with a standard dynamic range gamut (sRGB, YCbCr, CIE L*a*b*, CIE L*u*v*) and a high dynamic range gamut (PQ-RGB, PQ-YCbCr, and ICTCP). Using CTFs, we analyze these color spaces in terms of coding efficiency and contrast threshold uniformity.
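The mechanism of mapping opponent-space thresholds into a display color space can be sketched numerically: starting from a base color, search along a chosen sRGB direction for the smallest step whose opponent-space difference reaches a threshold ellipsoid. In the sketch below, the sRGB-to-XYZ and Hunt-Pointer-Estevez XYZ-to-LMS matrices are standard; the opponent construction and the cardinal-axis threshold values are placeholders and do not reproduce the CSF-derived thresholds used here.

```python
import numpy as np

def srgb_to_linear(c):
    """sRGB electro-optical transfer function."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# Standard matrices: linear sRGB (D65) -> CIE XYZ, and
# XYZ -> LMS (Hunt-Pointer-Estevez, D65-normalized).
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
M_XYZ2LMS = np.array([[ 0.4002, 0.7076, -0.0808],
                      [-0.2263, 1.1653,  0.0457],
                      [ 0.0,    0.0,     0.9182]])

def srgb_to_opponent(rgb):
    """Map an sRGB triplet to a simple DKL-style linear opponent space
    (achromatic, red-green, yellow-violet). Placeholder construction."""
    l, m, s = M_XYZ2LMS @ (M_RGB2XYZ @ srgb_to_linear(rgb))
    return np.array([l + m, l - m, s - 0.5 * (l + m)])

# Placeholder detection thresholds along the cardinal opponent axes,
# standing in for CSF predictions at one spatial frequency.
THR = np.array([0.02, 0.004, 0.03])

def ctf_step(base, direction, t_max=0.5, iters=40):
    """Bisect for the smallest step t such that base + t*direction sits
    on the threshold ellipsoid ||(opp(c) - opp(base)) / THR|| = 1."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    o0 = srgb_to_opponent(base)
    resp = lambda t: np.linalg.norm((srgb_to_opponent(base + t * d) - o0) / THR)
    lo, hi = 0.0, t_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if resp(mid) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Example: just-noticeable step from mid-gray along the sRGB blue axis.
print(ctf_step(np.array([0.5, 0.5, 0.5]), np.array([0.0, 0.0, 1.0])))
```

Repeating this search over many base colors and directions yields the threshold surfaces that a CTF tabulates for a given color space.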
Quantization of images containing low-texture regions, such as sky, water, or skin, can produce banding artifacts. As the bit depth of each color channel is decreased, smooth image gradients are transformed into perceivable, wide, discrete bands. Commonly used quality metrics cannot reliably measure the visibility of such artifacts. In this paper, we introduce a visual model for predicting the visibility of both luminance and chrominance banding artifacts in image gradients spanning between two arbitrary points in a color space. The model analyzes the error introduced by quantization in the Fourier space and employs a purpose-built spatio-chromatic contrast sensitivity function to predict its visibility. The output of the model is a detection probability, which can then be used to compute the minimum bit depth for which banding artifacts are just noticeable. We demonstrate that the model can accurately predict the results of our psychophysical experiments.
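The stages of such a model can be illustrated end to end for a one-dimensional achromatic gradient: quantize, transform the quantization error into the Fourier domain, weight it by a CSF, pool, and convert to a detection probability with a psychometric function. The sketch below substitutes the classic Mannos-Sakrison CSF and placeholder calibration constants (peak sensitivity, Weibull slope) for the purpose-built spatio-chromatic CSF and the fitted parameters of the model.

```python
import numpy as np

def csf_achromatic(f):
    """Illustrative achromatic CSF (Mannos & Sakrison, 1974), normalized
    to a peak of ~1; a stand-in for a spatio-chromatic CSF."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

PEAK_SENSITIVITY = 250.0  # placeholder calibration constant

def banding_detection_probability(c0, c1, bits, width_deg=20.0, n=4096, beta=3.5):
    """Detection probability for banding in a horizontal gradient from
    normalized signal value c0 to c1, quantized to `bits` per channel,
    displayed over `width_deg` degrees of visual angle."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    ramp = c0 + (c1 - c0) * x
    levels = 2 ** bits - 1
    error = np.round(ramp * levels) / levels - ramp   # quantization error
    contrast = error / np.mean(ramp)                  # error as contrast
    spectrum = np.abs(np.fft.rfft(contrast)) / n      # amplitude spectrum
    spectrum[0] = 0.0                                 # drop DC: a mean shift, not banding
    freqs = np.fft.rfftfreq(n, d=width_deg / n)       # cycles per degree
    weighted = spectrum * PEAK_SENSITIVITY * csf_achromatic(freqs)
    s = np.sqrt(np.sum(weighted ** 2))                # energy pooling
    return 1.0 - 2.0 ** (-(s ** beta))                # Weibull psychometric fn

def min_invisible_bit_depth(c0, c1, p_max=0.5):
    """Smallest bit depth keeping detection probability below p_max."""
    for bits in range(4, 17):
        if banding_detection_probability(c0, c1, bits) < p_max:
            return bits
    return None

print(min_invisible_bit_depth(0.1, 0.6))
```

A calibrated model would replace the placeholder constants with fitted values and repeat the computation for the chromatic channels.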