Pages 1 - 2,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

A classic nature-versus-nurture debate in cognitive science concerns the relation between language and perception. The universalist view holds that language is shaped by universals of perception, while the opposing relativist view holds instead that language shapes perception, in a manner that varies with little constraint across languages. Over the years, consensus has oscillated between these two poles. In this talk, I argue that neither position is fully supported. I argue moreover that the universalist/relativist opposition itself should be resisted as a conceptual framework, since it paints with too broad a brush, and obscures interesting realities. I argue this general point using two case studies in the naming and perception of color.

Digital Library: CIC
Published Online: January 2007
Pages 3 - 7,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

The explosive growth of digital photography has introduced a new population of non-technical users to the joy of color adjustment in digital images. Color adjustment is often frustrating and confusing for many of these users. However, while such users can be confounded by user interfaces for color adjustment tasks, they are universally capable of verbally describing the color changes they would like to make. The technology described in this paper is a set of mappings between a user's verbal description of a color change and the underlying algorithms needed to effect that change.
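As a rough sketch of the idea (the phrase vocabulary and the step size of 20 digital counts are invented for illustration, not the paper's actual mappings), a verbal request can be dispatched to a simple per-channel adjustment:

```python
# Hypothetical sketch: dispatch a verbal colour-change phrase to a
# per-channel RGB adjustment. Vocabulary and magnitudes are assumptions.

def adjust(rgb, phrase):
    """Apply a verbal colour-change phrase to an (r, g, b) triple in [0, 255]."""
    r, g, b = rgb
    if phrase == "make it lighter":
        r, g, b = (min(255, c + 20) for c in (r, g, b))
    elif phrase == "make it darker":
        r, g, b = (max(0, c - 20) for c in (r, g, b))
    elif phrase == "more red":
        r = min(255, r + 20)
    elif phrase == "less red":
        r = max(0, r - 20)
    return (r, g, b)
```

A real system would likely operate in a perceptual space such as CIELAB and support a much richer vocabulary.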

Digital Library: CIC
Published Online: January 2007
Pages 8 - 11,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

Attempts to develop a universal color preference model have failed to explain individual differences or to incorporate physiological factors. Here we propose a new color preference model in which an individual's color preference is described as the weighted sum of four fundamental color-coding components (luminance, red-green, blue-yellow and saturation), all of which are universal across populations. Each individual, meanwhile, accords a different set of weights to these components, representing his or her individual color preference. We tested the model with a series of psychophysical experiments. The results reveal that the model explains most of the individual variance in color preference and may therefore be used as a good descriptor of individual as well as group differences. By translating complex color preference results into four easily interpreted weights, we also find that the main characteristics of individual color preference do not vary significantly across different color samples and experimental methods, thus allowing us to employ only a small sample of stimuli to reveal color preference across the entire color space. The model's simple format allows easy statistical and quantitative analysis, and provides a reliable platform for future studies on color preference.
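The model's weighted-sum form is simple enough to state directly; in this illustrative sketch the component values and weights are arbitrary stand-ins, not measured data:

```python
# Illustrative sketch of the proposed model: an individual's preference
# for a colour is the weighted sum of four colour-coding components.

def preference(components, weights):
    """components, weights: (luminance, red-green, blue-yellow, saturation)."""
    return sum(c * w for c, w in zip(components, weights))

# One individual's (made-up) weights applied to one colour's components:
score = preference((1.0, 0.5, -0.2, 0.8), (0.2, 0.5, 0.1, 0.2))
```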

Digital Library: CIC
Published Online: January 2007
Pages 12 - 17,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

Substrates found in standard digital color printing applications frequently contain optical brightening agents. These agents fluoresce under UV light, thus increasing substrate reflectance in the short wavelength regime. The fluorescence phenomenon poses a considerable challenge in standard color management applications. This research presents a method of beneficially exploiting this phenomenon for a different application, namely watermarking. Information can be embedded in a printed color image that is perceptually invisible under normal illumination, and revealed via substrate fluorescence under UV illumination. The watermarking problem is formulated as an optimization problem that seeks pairs of colors exhibiting a close match under normal light, while producing visible luminance contrast under UV light. Models for predicting color under normal and UV light are described, and several successful watermarking examples are shown. From a practical standpoint, the approach requires no special colorants or media, and therefore can be offered at no extra cost to the user. Decoding of the watermark is easily accomplished with a common portable UV lamp.
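A minimal sketch of the pair-selection step, under the assumption that predicted colours are already available for each candidate patch (a CIELAB value under normal light and a luminance value under UV; the prediction models and the data in the usage example are stand-ins): choose the pair with the smallest normal-light colour difference among pairs whose UV luminance contrast exceeds a threshold.

```python
import itertools
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference (Euclidean distance in CIELAB)."""
    return math.dist(lab1, lab2)

def best_pair(candidates, min_uv_contrast=20.0):
    """candidates: list of (lab_normal, L_uv) tuples. Return (dE, a, b)
    for the closest-matching pair under normal light whose UV luminance
    contrast is at least min_uv_contrast, or None if no pair qualifies."""
    feasible = [
        (delta_e(a[0], b[0]), a, b)
        for a, b in itertools.combinations(candidates, 2)
        if abs(a[1] - b[1]) >= min_uv_contrast
    ]
    return min(feasible, default=None)
```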

Digital Library: CIC
Published Online: January 2007
Pages 18 - 24,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

Surface reflectances are metameric for a colour input device if they induce identical responses under one light source and a set of distinct responses under a second light source. Depending on the device sensitivities, metamerism will differ in structure (which reflectances form metamer sets), cardinality (the number of reflectances in each set) and perceptual magnitude (e.g. the colour mismatch region of a metamer set under a change of illuminant). In this paper we propose measures to quantify the differences between colour input devices from the point of view of metamerism. Specifically, three quantitative correlates are proposed: the proportion of potentially metameric reflectances (reflectances that give identical responses under a canonical illuminant), the proportion of metameric reflectances (potentially metameric reflectances that result in a colour mismatch under any of the test illuminants), and the magnitude of the colour mismatch (CIE ΔE values of metameric reflectances under all relevant illuminants). In addition we introduce frequency images that visualise the extent of metamerism for a particular set of spectral sensitivities and a multispectral image of interest. Our aim in this study is twofold: firstly, to provide a means for studying colour input devices from the point of view of their degree of metamerism; secondly, to expose the relationship between the accuracy of reflectance estimation and the extent of metamerism of a particular device. To illustrate our approach we compare several devices of various spectral sensitivities (trichromatic and multispectral) as well as series of synthetic sensitivities designed to study two particular aspects: the number of sensors and their shape.
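The first correlate can be sketched as follows for discretely sampled spectra; rounding the responses to stand in for "identical response" is an assumption here, as are the toy spectra in the usage example:

```python
import numpy as np

# Sketch of the proportion of potentially metameric reflectances:
# reflectances that map to the same (quantised) sensor response under a
# canonical illuminant.

def potentially_metameric_proportion(R, S, E, decimals=2):
    """R: (n, w) reflectances; S: (w, k) sensor sensitivities;
    E: (w,) canonical illuminant SPD, all sampled at w wavelengths."""
    responses = (R * E) @ S                        # (n, k) sensor responses
    keys = [tuple(row) for row in np.round(responses, decimals)]
    counts = {}
    for key in keys:
        counts[key] = counts.get(key, 0) + 1
    n_metameric = sum(c for c in counts.values() if c > 1)
    return n_metameric / len(keys)
```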

Digital Library: CIC
Published Online: January 2007
Pages 25 - 29,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

Thin-plate spline interpolation is used to interpolate the color of the incident scene illumination from an image of the scene. The algorithm can be used to provide color constancy under changing illumination conditions, and automatic white balancing for digital cameras. Thin-plate splines interpolate over a non-uniformly sampled input space, which in this case is a set of training images and associated illumination chromaticities. Tests of the thin-plate spline method on a large set of real images demonstrate that the method estimates the color of the incident illumination quite accurately.
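A self-contained numpy sketch of thin-plate spline interpolation over a non-uniformly sampled 2-D input space; in the paper the training inputs come from images and the values are illumination chromaticities, whereas the data in the usage example are synthetic:

```python
import numpy as np

def tps_phi(r):
    """Thin-plate spline radial basis: r^2 log r (defined as 0 at r = 0)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r ** 2 * np.log(r), 0.0)

def tps_fit(x, y):
    """x: (n, 2) training inputs, y: (n,) training values."""
    n = len(x)
    K = tps_phi(np.linalg.norm(x[:, None] - x[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), x])            # affine part
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([y, np.zeros(3)]))
    return sol[:n], sol[n:], x                     # kernel weights, affine, centres

def tps_eval(model, xq):
    """Evaluate the fitted spline at query points xq: (m, 2)."""
    w, a, x = model
    K = tps_phi(np.linalg.norm(xq[:, None] - x[None, :], axis=-1))
    return K @ w + a[0] + xq @ a[1:]
```

In practice each illumination chromaticity coordinate would be fitted as a separate spline over the training images.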

Digital Library: CIC
Published Online: January 2007
Pages 30 - 35,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

This paper realizes multiband spectral imaging with high spatial resolution for omnidirectional estimation of the scene illuminant in a simple measuring system. To overcome the problems of the conventional mirrored-ball approach, we propose a multiband omnidirectional imaging system using a high-resolution trichromatic digital camera, a fisheye lens, and two commercially available color filters. The spatial resolution of omnidirectional imaging systems is analyzed in detail based on an optics model. We describe a practical spectral imaging system: using the RGB camera with each of the two color filters yields two different sets of trichromatic spectral sensitivity functions via spectral multiplication, resulting in six spectral bands after two image captures, one with each color filter. An algorithm based on statistical estimation theory is developed for estimating illuminant spectra from noisy observations of the sensor outputs. The feasibility of the proposed method is examined from the viewpoints of spatial resolution and omnidirectional illuminant estimation.
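The statistical estimation step is in the spirit of a linear minimum-mean-square-error (Wiener-type) estimator; the abstract does not give the exact formulation, so the prior and noise models below, and the system matrix in the usage example, are assumptions:

```python
import numpy as np

def wiener_estimate(H, Ce, Cn, y):
    """Linear MMSE estimate of a spectrum e from y = H e + noise.
    H: (k, w) band-sensitivity matrix; Ce: (w, w) prior covariance of the
    spectrum; Cn: (k, k) noise covariance; y: (k,) sensor outputs."""
    G = Ce @ H.T @ np.linalg.inv(H @ Ce @ H.T + Cn)
    return G @ y
```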

Digital Library: CIC
Published Online: January 2007
Pages 36 - 41,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

The illumination-invariant image is a useful intrinsic feature latent in colour image data. The idea in forming an illumination invariant is to post-process input image data by taking the logarithm of a set of chromaticity coordinates, and then projecting the resulting 2-dimensional data in a direction orthogonal to a special direction, characteristic of each camera, that best describes the effect of lighting change. A lighting change approximately traces out a straight line in the log-chromaticity domain; thus, forming a greyscale projection orthogonal to this line generates an image that is approximately independent of the illuminant at every pixel. One application has been to effectively remove shadows from images. But a problem, addressed here, is that the direction in which to project is camera-dependent. Moreover, preprocessing with a spectral sharpening transform, which linearly transforms the sensor curves to more narrowband ones, greatly improves shadow attenuation, but sharpening is also camera-dependent and we may not have information on the camera. So here we take a simpler approach and assume that every input image consists of data in the standardized sRGB colour space. Previously, this assumption has led to the suggestion that the built-in mapping of sRGB to XYZ tristimulus values could be used, going on to sharpen the resulting XYZ and then seeking an invariant. Instead, here we sharpen the sRGB directly and show that performance is substantially improved this way. This approach leads to a standardized sharpening matrix for any input image, as well as a fixed projection angle. Results are shown to be satisfactory, without any knowledge of camera characteristics.
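The core projection can be sketched in a few lines. The angle theta is the camera-dependent lighting direction (or, per the paper, a fixed angle for sRGB); the value used in the usage example is an arbitrary stand-in:

```python
import numpy as np

def invariant_image(rgb, theta):
    """rgb: (..., 3) positive linear values; theta: angle (radians) of the
    lighting-change direction in the log-chromaticity plane."""
    log_chroma = np.stack(
        [np.log(rgb[..., 0] / rgb[..., 1]), np.log(rgb[..., 2] / rgb[..., 1])],
        axis=-1,
    )
    # Project onto the direction orthogonal to (cos theta, sin theta).
    direction = np.array([-np.sin(theta), np.cos(theta)])
    return log_chroma @ direction
```

By construction, a pixel and its version under a lighting change (a shift along the lighting direction in log-chromaticity) map to the same greyscale value.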

Digital Library: CIC
Published Online: January 2007
Pages 42 - 47,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

Diffusion-tensor data from medical MR imaging consist of a 3 × 3 symmetric positive semi-definite matrix at each voxel. The issue of how to understand, and how to meaningfully display, this type of data has been gaining interest since its development as a noninvasive investigative tool [1]. Several schemes have been developed, usually aimed at displaying the spatial geometric structure of each voxel as characterized by its eigenvectors. However, these efforts have used colour merely as a visualization device, without regard to an underlying metric structure between voxels. At the same time, some work has analyzed whole-brain structure using independent component analysis, making use of similarity between tensors to identify separated overall structures, e.g. for de-noising of spatial features. In this paper we consider using colour to understand these separated structures, mapping a true metric giving a similarity measure between tensors into a perceptually uniform colour space, so that colour difference corresponds to true difference. We show that such a colour map can better discriminate regions of distinct diffusion properties in the brain than previous methods.
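One common true metric between positive-definite tensors is the log-Euclidean distance; the abstract does not name the paper's metric, so this is only an illustrative choice:

```python
import numpy as np

def log_euclidean_distance(T1, T2):
    """Log-Euclidean distance between symmetric positive-definite 3x3 tensors."""
    def logm_spd(T):
        w, V = np.linalg.eigh(T)           # eigendecomposition of SPD matrix
        return V @ np.diag(np.log(w)) @ V.T
    return np.linalg.norm(logm_spd(T1) - logm_spd(T2))
```

Such a distance could then be mapped into a perceptually uniform colour space (e.g. CIELAB) so that displayed colour difference tracks tensor difference, in the spirit of the paper.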

Digital Library: CIC
Published Online: January 2007
Pages 48 - 53,  © Society for Imaging Science and Technology 2007
Volume 15
Issue 1

Color device calibration is the process of achieving and maintaining a desired color response. For printers, calibration is typically achieved via 1-D tone reproduction curves (TRCs) applied to each of the C, M, Y and K colorant channels. This, however, can be severely restrictive in the amount of control it affords. For example, 1-D TRCs can be designed either for gray balance or for smooth rendition of individual color ramps, but not both. To enable complete control, 3-D/4-D color transforms may be used, but they are at odds with the goal of calibration being a lightweight transform with respect to measurement and computation. In 2004, Bala et al. proposed two-dimensional (2-D) calibration to facilitate a superior cost vs. control trade-off. In this paper, we view the design of cost-effective calibration transforms as a dimensionality reduction problem. We observe that the quality of the transform, i.e. its ability to match a true higher-dimensional (4-D) transform, depends on both the projection operator applied to high-dimensional device inputs and the functional approximation built out of the reduced-dimension variables. With that view, we develop techniques to significantly enhance the accuracy of previously proposed 2-D calibration transforms. In particular, we develop 2-D color transforms that allow complete control of cleverly selected 2-D planes in the 3-D CMY cube. We also develop a novel 2-D calibration LUT for the K channel which exploits knowledge of the printer's GCR strategy to improve rendition of dark colors. Experimental results show vastly improved calibration ability, particularly for the case of calibrating multiple devices to a common colorimetric aim.
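For contrast with the 2-D transforms the paper develops, the baseline 1-D per-channel calibration amounts to a tone reproduction curve, i.e. a small lookup table with interpolation; the node values in the usage example are arbitrary:

```python
import numpy as np

def apply_trc(channel, lut_in, lut_out):
    """Apply a 1-D TRC to one colorant channel via linear interpolation.
    channel: digital values in [0, 255]; lut_in/lut_out: TRC node values."""
    return np.interp(channel, lut_in, lut_out)
```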

Digital Library: CIC
Published Online: January 2007