White balance (WB) is one of the first photo-finishing steps used to render a captured image to its final output. WB is applied to remove the color cast caused by the scene's illumination. Interactive photo-editing software allows users to manually select different regions in a photo as examples of the illumination for WB correction (e.g., clicking on achromatic objects). Such interactive editing is possible only with images saved in a RAW image format, because RAW images have no photo-rendering operations applied, so photo-editing software can still apply WB and other photo-finishing procedures to render the final image. Interactively editing WB in camera-rendered images is significantly more challenging, because the camera hardware has already applied WB to the image, followed by nonlinear photo-processing routines. These nonlinear rendering operations make it difficult to change the WB post-capture. The goal of this paper is to allow interactive WB manipulation of camera-rendered images. The proposed method is an extension of our recent work [6], which proposed a post-capture method for WB correction based on nonlinear color-mapping functions. Here, we introduce a new framework that links the nonlinear color-mapping functions directly to user-selected colors to enable interactive WB manipulation. This new framework is also more efficient in terms of memory and run-time (a 99% reduction in memory and a 3× speed-up). Lastly, we describe how our framework can leverage a simple illumination estimation method (i.e., gray-world) to perform auto-WB correction that is on a par with the WB correction results of [6].
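The gray-world method mentioned above assumes the average reflectance of a scene is achromatic, so the per-channel mean of the image serves as the illuminant estimate. A minimal sketch of that idea (a hypothetical helper, not the paper's implementation):

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world white balance: estimate the illuminant as the
    per-channel mean and rescale so the scene average is neutral.
    `img` is a float RGB array with values in [0, 1]."""
    illum = img.reshape(-1, 3).mean(axis=0)   # per-channel scene mean
    gains = illum.mean() / illum              # pull each channel toward gray
    return np.clip(img * gains, 0.0, 1.0)

# A flat reddish-cast test image: after correction the channels equalize.
img = np.full((4, 4, 3), [0.6, 0.4, 0.4])
balanced = gray_world_wb(img)
```

Real implementations typically exclude clipped pixels from the mean, but the core estimator is just this two-line computation.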
Color correction in standard images is mainly based on controlling the interaction of illuminant and reflectance, and often amounts to compensating actual chromatic casts with respect to a chosen reference illuminant. The primary reference illuminants derive from our reference star, the Sun, whose color characteristics are those of a G2V-class star. Visually, color correction in standard images renders scene colors that are more pleasant and closer to our everyday experience. Color quality is usually judged by how well it pleases the observer's visual system, which views the picture using the powerful mechanisms of visual color and contrast adjustment (color constancy), a very different tool from a camera. In astrophotography this basic principle does not apply: the illuminants of the many different stars or emission nebulae are the only information we detect, while the only reflectance involved (Solar System planets aside) originates from dust particles producing the so-called reflection nebulae. What, then, is the meaning and the goal of color correction in astrophotography? In this paper, we present to the reader some points of discussion about this broad question.
Color is an important aspect of camera quality. Especially in a visual effects (VFX) pipeline, it is necessary to maintain a linear relationship between the pixel colors in the recorded image and the original light of the scene throughout every step of the production pipeline. This means that the plate recorded by the camera must not be altered in any way ("do no harm to the plate"). Unfortunately, most camera vendors apply certain functions to the recorded RAW material during the input step, mostly to meet the needs of the display devices at the end of the pipeline, but also to establish a certain look the camera company is associated with. Maintaining a linear relationship to the light of the scene enables compositing artists and editors to combine imagery from varying sources (mostly cameras of different vendors). The Academy of Motion Picture Arts and Sciences (AMPAS) established the Academy Color Encoding System (ACES). To achieve a linear relationship to the light of the scene, Input Device Transforms (IDTs) for most digital film cameras have been provided recently. Unfortunately, such IDTs are not available for most consumer and DSLR cameras. To add their imagery to the film production pipeline, it is desirable to create suitable IDTs for such devices as well. The goal of this paper is to record the spectral distribution of a GretagMacbeth ColorChecker with a spectrometer and also to photograph it with a Canon EOS 5D Mark III camera under the same lighting conditions. The RAW image is then converted to ACES color spaces (ACES2065-1 or ACEScg) using industry-approved RAW converters. The positions of the ColorChecker patches in CIE Yxy color space are then compared with the positions of the patches captured by the spectral device. The result indicates whether the camera can be used inside the AMPAS ACES workflow.
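Comparing patch positions in CIE Yxy space reduces to projecting each measured XYZ triple onto the chromaticity plane. A minimal sketch of that standard projection (the conversion itself, not the paper's full measurement pipeline):

```python
def xyz_to_xy(X, Y, Z):
    """Project CIE XYZ tristimulus values onto the CIE 1931
    chromaticity plane: x = X/(X+Y+Z), y = Y/(X+Y+Z)."""
    s = X + Y + Z
    return X / s, Y / s

# The D65 white point (Y normalized to 1) lands near (0.3127, 0.3290).
x, y = xyz_to_xy(0.95047, 1.0, 1.08883)
```

Two patches measured by the spectrometer and by the camera can then be compared directly as distances between their (x, y) coordinates, independent of luminance.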
An introduction to light in an absorbing, scattering medium is offered, with application to correcting the colors in underwater photographs. The "waterlight" model is presented, which quantifies the amount of light scattered in the direction of the camera from the medium itself. A spectral model for ocean water is described along with a method to represent it in three bands (e.g., RGB). Given these models, the radiance of a diffuse reflector at known depth and distance can be computed. At infinite distance, this becomes the "abyss color", a new and useful concept with which to estimate camera response and the water parameters. The color correction procedure for a given depth and distance is outlined and illustrated. Unknown causes of color shifts from camera or water are addressed via a "blue balance" transform, which maps the recorded abyss color to the modeled abyss color. A compact vector expression for the correction is presented, and examples where it is applied to underwater scenes are provided.
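The "blue balance" idea, mapping the recorded abyss color onto the modeled abyss color, can be illustrated with a per-channel (von Kries-style) gain. This is only a sketch of the concept under that diagonal-transform assumption, not the paper's actual compact vector expression:

```python
import numpy as np

def blue_balance(pixel, recorded_abyss, modeled_abyss):
    """Hypothetical diagonal transform: choose per-channel gains so
    that the recorded abyss color maps exactly onto the modeled one,
    then apply the same gains to every pixel."""
    gains = np.asarray(modeled_abyss) / np.asarray(recorded_abyss)
    return np.asarray(pixel) * gains

# By construction, the recorded abyss color itself maps onto the model.
out = blue_balance([0.10, 0.30, 0.55],   # pixel = recorded abyss color
                   [0.10, 0.30, 0.55],   # recorded abyss color
                   [0.05, 0.25, 0.60])   # modeled abyss color
```

Because the transform is anchored at the abyss color, it absorbs unknown camera and water color shifts in a single calibration step.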
This research examined the performance of skin-colored patches for accurately estimating human skin color. More than 300 facial images of Korean females were taken with a digital single-lens reflex camera (Canon 550D) while each subject held the X-Rite Digital ColorChecker® semi-gloss target. The color checker consists of 140 color patches, including 14 skin-colored ones. As the ground truth, the CIE 1976 L*a*b* values of seven spots on each face were measured with a spectrophotometer. Three sets of calibration targets were then compared, consisting of all 140 patches, the 24 standard color patches, and the 14 skin-colored patches, respectively. Consequently, three sets of estimated skin colors were obtained, and the errors from the ground truth were calculated as the square root of the sum of squared differences (ΔE). The results show that the error of color correction using the 14 skin-colored patches was significantly smaller (average ΔE = 8.58, SD = 3.89) than the errors of correction using the other two sets of color patches. The study provides evidence that skin-colored patches support more accurate estimation of skin colors. It is expected that skin-colored patches can serve as a new standard calibration target for skin-related image calibration.
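The error metric described here, the square root of the sum of squared differences in L*a*b*, is the CIE 1976 ΔE, i.e., the Euclidean distance between two L*a*b* triples. A minimal sketch:

```python
import math

def delta_e76(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Differences of (0, 3, 4) give sqrt(0 + 9 + 16) = 5.0.
err = delta_e76((50.0, 10.0, 10.0), (50.0, 13.0, 14.0))
```

A ΔE of roughly 2 to 3 is often taken as a just-noticeable difference, which puts the reported average errors in perceptual context.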