Page 0,  © Society for Imaging Science and Technology 2019
Digital Library: CIC
Published Online: October  2019
Pages 1 - 6,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

The in-camera imaging pipeline consists of several routines that render the sensor's scene-referred raw-RGB image to the final display-referred standard RGB (sRGB) image. One of the crucial routines applied in the pipeline is white balance (WB). WB is performed to normalize the color cast caused by the scene's illumination, which is often described using correlated color temperature. WB is applied early in the in-camera pipeline and is followed by a number of nonlinear color manipulations. Because of these nonlinear steps, it is challenging to modify an image's WB to a different color temperature in the sRGB image. As a result, if an sRGB image is processed with the wrong color temperature, the image will have a strong color cast that cannot be easily corrected. To address this problem, we propose an imaging framework that renders a small number of "tiny versions" of the original image (e.g., 0.1% of the full-size image), each with a different WB color temperature. Rendering these tiny images requires minimal overhead from the camera pipeline. These tiny images suffice to compute color mapping functions that map the full-size sRGB image to appear as if it had been rendered with any of the tiny images' color temperatures. Moreover, by blending the color mapping functions, we can map the output sRGB image to appear as if it had been rendered through the camera pipeline with any color temperature. These mapping functions can be stored as a JPEG comment with less than 6 KB of overhead. We demonstrate that this capture framework significantly outperforms existing solutions targeting post-capture WB editing.
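
As a rough illustration of the idea, the sketch below fits a small matrix that maps one tiny render to another and applies it to the full-size image. The 9-term polynomial kernel and the function names are assumptions for illustration; the paper's exact kernel, fitting procedure, and storage format may differ.

```python
import numpy as np

def poly_kernel(rgb):
    # Lift RGB into a 9-term polynomial feature space (one plausible
    # kernel choice; hypothetical, not necessarily the paper's).
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * g, r * b, g * b, r * r, g * g, b * b], axis=1)

def fit_mapping(tiny_src, tiny_dst):
    # Least-squares 9x3 matrix M with poly_kernel(src) @ M ~= dst.
    # tiny_src: tiny render at the capture-time color temperature;
    # tiny_dst: tiny render at the target color temperature.
    A = poly_kernel(tiny_src.reshape(-1, 3))
    B = tiny_dst.reshape(-1, 3)
    M, *_ = np.linalg.lstsq(A, B, rcond=None)
    return M  # 27 floats, easily small enough for a JPEG comment

def apply_mapping(full_img, M):
    # full_img: float sRGB image in [0, 1], shape (H, W, 3).
    h, w, _ = full_img.shape
    out = poly_kernel(full_img.reshape(-1, 3)) @ M
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)

# Blending stored mappings approximates intermediate temperatures:
# M_mid = (1 - alpha) * M_low_K + alpha * M_high_K, with alpha in [0, 1].
```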

Digital Library: CIC
Published Online: October  2019
Pages 7 - 12,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

Retinex is a colour vision model introduced by Land more than 40 years ago. Since then, it has also been widely and successfully used for image enhancement. However, Retinex often introduces colour and halo artefacts. These artefacts are a necessary consequence of the per-channel colour processing and the lack of any strong mechanism for controlling the locality of the processing (halos are very local errors). In this paper we relate an input image to the corresponding Retinex output by a single shading term, which is both spatially varying and smooth, and a global colour shift. This coupling dramatically reduces common Retinex artefacts, and in preference tests the coupled Retinex output is strongly preferred.
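
A minimal sketch of the coupling idea follows, assuming a Gaussian low-pass as the smoothness constraint on the shading term; the abstract does not specify the authors' actual solver, so this is an illustrative stand-in.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def coupled_retinex(inp, retinex_out, sigma=25.0, eps=1e-4):
    # Constrain the output to be the input times one smooth, spatially
    # varying shading term, plus a global colour shift (assumed solver).
    lum_in = inp.mean(axis=2) + eps
    lum_out = retinex_out.mean(axis=2) + eps
    shading = lum_out / lum_in                  # per-pixel gain Retinex asks for
    shading = gaussian_filter(shading, sigma)   # smoothing removes halo-scale detail
    coupled = inp * shading[..., None]
    shift = (retinex_out - coupled).mean(axis=(0, 1))  # global colour shift
    return np.clip(coupled + shift, 0.0, 1.0)
```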

Digital Library: CIC
Published Online: October  2019
Pages 13 - 18,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

Chromatic adaptation is an extensively studied concept. However, less is known about the time course of chromatic adaptation under gradually changing lighting. Two experiments were carried out to quantify the time course of chromatic adaptation under dynamic lighting. In the first experiment, a step change in lighting chromaticity was used. The time course of adaptation was well described by the Rinner and Gegenfurtner slow-adaptation exponential model [Vision Research, 40(14), 2000], and the adaptation state after saturation differed between observers. In the second experiment, chromatic adaptation was measured in response to two different speeds of lighting chromaticity transition. An adjusted exponential model was able to fit the observed time course of adaptation for both transition speeds.
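
The slow-adaptation model referenced above describes an exponential approach to a steady state. A hedged sketch of fitting such a model is shown below; the data points are purely illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def adaptation_course(t, D_inf, tau):
    # Exponential approach to a steady adaptation state D_inf with time
    # constant tau (s), starting from no adaptation at the step (t = 0).
    return D_inf * (1.0 - np.exp(-t / tau))

# Illustrative observations only: degree of adaptation measured at
# several delays after a step in lighting chromaticity.
t_obs = np.array([5.0, 15.0, 30.0, 60.0, 120.0, 240.0])
D_obs = np.array([0.25, 0.45, 0.60, 0.72, 0.78, 0.80])
(D_inf, tau), _ = curve_fit(adaptation_course, t_obs, D_obs, p0=(0.8, 30.0))
print(f"steady state D = {D_inf:.2f}, time constant = {tau:.0f} s")
```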

Digital Library: CIC
Published Online: October  2019
Pages 19 - 22,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

Adapting chromaticities are not considered in characterizing the degree of chromatic adaptation in the various chromatic adaptation transforms (CATs). Though several recent studies have clearly suggested that the effect of adapting chromaticities on the degree of chromatic adaptation should not be ignored, these studies were carried out under only a single adapting luminance level. This study was carefully designed to systematically vary the adapting luminance and chromaticities to investigate whether they jointly affect the degree of chromatic adaptation. Human observers adjusted the color of a stimulus produced by a self-luminous display to make it appear the whitest under each of 17 different adapting conditions. The adapting chromaticities and luminance were found to jointly affect the degree of chromatic adaptation. At the same adapting luminance level, the degree of chromatic adaptation was lower under low adapting CCTs (i.e., 2700 and 3500 K). A higher adapting luminance significantly increased the degree of chromatic adaptation, especially when the adapting CCT was low (i.e., 2700 and 3500 K).
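
For context, the standard CAT02/CIECAM02 degree-of-adaptation factor D is a function of adapting luminance and surround only, with no chromaticity term; that is precisely the gap this study probes. A direct transcription of the standard formula:

```python
import numpy as np

def cat02_degree_of_adaptation(L_A, F=1.0):
    # CAT02/CIECAM02 degree-of-adaptation factor: depends only on the
    # adapting luminance L_A (cd/m^2) and the surround factor F.
    return F * (1.0 - (1.0 / 3.6) * np.exp((-L_A - 42.0) / 92.0))

for L_A in (10, 60, 300, 1000):
    print(f"L_A = {L_A:4d} cd/m^2 -> D = {cat02_degree_of_adaptation(L_A):.3f}")
```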

Digital Library: CIC
Published Online: October  2019
Pages 23 - 27,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

Over the years, various chromatic adaptation transforms (CATs) have been derived to fit visual perception. However, research has demonstrated that CAT02, the most widely used chromatic adaptation transform, overestimates the degree of adaptation, especially for colored illumination. In this study, a memory color matching experiment was conducted in a real scene with the background adapting field varying in field of view, luminance, and chromaticity. It showed that a larger field of view results in more complete adaptation. The results were used to test several existing chromatic adaptation models and to develop three new types of models, all of which improved performance to some extent, especially for illuminations with low CCTs.
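
For reference, the sketch below shows a generic von Kries-style adaptation step in CAT02 space with the degree of adaptation D exposed, so that the incomplete adaptation reported for low-CCT illumination can be modeled by lowering D. This is the standard CAT02 transform, not the authors' new models, and it assumes whites normalized to Y = 1.

```python
import numpy as np

# CAT02 forward matrix (XYZ -> sharpened cone-like RGB).
M_CAT02 = np.array([[ 0.7328,  0.4296, -0.1624],
                    [-0.7036,  1.6975,  0.0061],
                    [ 0.0030,  0.0136,  0.9834]])

def cat02_adapt(xyz, xyz_src_white, xyz_dst_white, D=1.0):
    # Whites assumed normalized to Y = 1. Lowering D models the
    # incomplete adaptation reported for low-CCT illumination.
    rgb = M_CAT02 @ xyz
    rgb_sw = M_CAT02 @ xyz_src_white
    rgb_dw = M_CAT02 @ xyz_dst_white
    gain = D * (rgb_dw / rgb_sw) + (1.0 - D)   # von Kries channel gains
    return np.linalg.solve(M_CAT02, gain * rgb)
```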

Digital Library: CIC
Published Online: October  2019
Pages 28 - 36,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

In this study, we propose a method to detect wetness on the surface of human skin and skin phantoms using an RGB camera. Recent research on affect analysis has addressed non-contact, multi-modal analysis of affect aimed at applications such as automated questionnaires. New modalities are needed to make such systems more accurate than they are at present. We therefore focus on emotional sweating, which is among the most reliable modalities in contact-based affect analysis. However, non-contact sweat detection on human skin has not been achieved by other researchers, so it is unclear which feature values are useful. The proposed method is based on feature values of color and glossiness obtained from images. In tests of this method, the error rate was approximately 6.5% on a skin phantom and at least 12.7% on human skin. This research will help the development of non-contact affect analysis.
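
A purely illustrative sketch of colour-and-glossiness feature extraction in the spirit described above; the thresholds and the specific features are our assumptions, not values from the paper.

```python
import numpy as np
import cv2

def wetness_features(bgr):
    # bgr: uint8 image patch. Glossiness is proxied by the fraction of
    # bright, low-saturation (near-specular) pixels; the thresholds are
    # illustrative assumptions only.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)
    specular = (v > 230) & (s < 40)
    gloss = specular.mean()                    # highlight area fraction
    mean_sat = s[~specular].mean() / 255.0     # colour of the diffuse skin
    mean_val = v[~specular].mean() / 255.0
    return np.array([gloss, mean_sat, mean_val])

# A classifier (e.g., an SVM) trained on such features would then label
# a patch as wet or dry.
```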

Digital Library: CIC
Published Online: July  2019
Pages 37 - 42,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

Gloss is widely accepted as a surface- and illumination-based property, both by definition and by means of metrology. However, the mechanisms of gloss perception are yet to be fully understood. The potential cues generating gloss perception can be a product of phenomena other than surface reflection and can vary from person to person. While human observers are unlikely to be capable of inverting optics, they might also fail to identify the origin of these cues. We therefore hypothesize that color and translucency could also impact perceived glossiness. To validate this hypothesis, we conducted a series of psychophysical experiments asking observers to rank objects by their glossiness. The objects had identical surface geometry and shape but differed in color and translucency. The experiments demonstrated that people do not perceive objects with identical surfaces as equally glossy. Human subjects are usually able to rank objects with identical surfaces by their glossiness, but the strategy used for ranking varies across groups of people.
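
One hedged way to quantify how ranking strategies differ across observers is pairwise rank correlation; the abstract does not state the authors' analysis, so the approach and the data below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

# Each row: one observer's glossiness ranking of the same five objects
# (illustrative data). Divergent ranking strategies show up as low or
# negative rank correlation between observers.
rankings = np.array([[1, 2, 3, 4, 5],
                     [1, 3, 2, 4, 5],
                     [5, 4, 3, 2, 1]])
for i in range(len(rankings)):
    for j in range(i + 1, len(rankings)):
        tau, _ = kendalltau(rankings[i], rankings[j])
        print(f"observers {i} vs {j}: tau = {tau:+.2f}")
```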

Digital Library: CIC
Published Online: October  2019
Pages 43 - 48,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

Texture analysis and characterization based on human perception has long been sought by psychology and computer vision researchers. However, the fundamental question of how humans truly perceive texture remains open. In the present study, using a series of textile samples, the most important perceptual attributes people use to interpret and evaluate the texture properties of textiles were collected through participants' verbal descriptions of texture. Smooth, soft, homogeneous, geometric variation, random, repeating, regular, color variation, strong, and complicated were the ten words participants used most frequently to describe texture. Since the participants were allowed to interact freely with the textiles, the accumulated texture properties are most likely a combination of visual and tactile information. Each individual texture attribute was then rated by another group of participants via rank ordering. Analyzing the correlations between the texture attributes showed strong positive and negative correlations between some of them. Principal component analysis on the rank-ordering data indicated a clear separation of perceptual texture attributes in terms of homogeneity and regularity on the one hand, and non-homogeneity and randomness on the other.
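
A hedged sketch of the PCA step on a samples-by-attributes rank-score matrix; the data here are placeholders, since the study's actual matrix is not reproduced in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows = textile samples, columns = the ten attributes (smooth, soft,
# homogeneous, ...); values = mean rank-order scores. Placeholder data.
scores = np.random.default_rng(0).random((30, 10))
scores -= scores.mean(axis=0)          # centre each attribute
pca = PCA(n_components=2)
coords = pca.fit_transform(scores)     # sample positions in PC space
print(pca.components_)                 # loadings: which attributes co-vary
print(pca.explained_variance_ratio_)
```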

Digital Library: CIC
Published Online: October  2019
Pages 49 - 54,  © Society for Imaging Science and Technology 2019
Volume 27
Issue 1

For a smartphone-based colorimetry system to be generalizable, it must be possible to reconcile results from multiple phones. Moving from a device-specific space to a device-independent space such as XYZ allows results to be compared, and means that the link between XYZ values and the physical parameter of interest need be determined only once. We compare mapping approaches based on calibration data provided in image metadata, including that used by the widely used open-source software dcraw, to a separate calibration carried out using a colorcard. The current version of dcraw is found to behave suboptimally with smartphones and should be used with care for mapping to XYZ. Other metadata-based approaches perform better; however, the colorcard approach provides the best results. Several phones of the same model are compared, and using an xy distance metric it is found that a device-specific calibration is required to maintain the desired precision.
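
A minimal sketch of the colorcard route and the xy metric, assuming linearized device RGB and hypothetical function names; the paper's exact calibration procedure may differ.

```python
import numpy as np

def fit_rgb_to_xyz(rgb_patches, xyz_patches):
    # Least-squares 3x3 matrix taking linearized device RGB to XYZ,
    # fitted on N colorcard patches (both arrays shaped (N, 3)).
    M, *_ = np.linalg.lstsq(rgb_patches, xyz_patches, rcond=None)
    return M.T  # so that xyz = M @ rgb for a column vector rgb

def xy_distance(xyz_a, xyz_b):
    # Euclidean distance in CIE xy chromaticity, the metric used above
    # to compare phones of the same model.
    def to_xy(xyz):
        s = xyz.sum()
        return np.array([xyz[0] / s, xyz[1] / s])
    return np.linalg.norm(to_xy(xyz_a) - to_xy(xyz_b))
```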

Digital Library: CIC
Published Online: October  2019
