Many color-related image adjustments can be conveniently executed by exposing at most a small number of parameters to the user. Examples are tone reproduction, contrast enhancement, gamma correction and white balancing. Others require manual touch-ups applied by means of brush strokes. More recently, a new class of algorithms has emerged which transfers specific image attributes from one or more example images to a target. These attributes do not have to be well defined, and concepts that are difficult to quantify with a small set of parameters, such as the “mood” of an image, can be instilled in a target image simply by selecting appropriate examples. This makes example-based image manipulation a particularly suitable paradigm for creative applications, but it also finds uses in more technical tasks such as stereo pair correction, video compression, image colorization, panorama stitching and creating night-time images out of daylight shots.
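As a minimal sketch of what this class of algorithms does in its simplest global form, the following matches per-channel statistics in a perceptual color space, in the spirit of Reinhard et al.'s color transfer. It is not any specific method surveyed above, and the use of scikit-image for the color conversions is an assumption.

```python
# Global example-based color transfer: shift the target image's per-channel
# mean and standard deviation in CIE L*a*b* toward those of the example.
import numpy as np
from skimage import color  # assumed available; any RGB<->Lab converter works

def transfer_color_statistics(target_rgb, example_rgb):
    """target_rgb, example_rgb: float RGB images in [0, 1]."""
    target_lab = color.rgb2lab(target_rgb)
    example_lab = color.rgb2lab(example_rgb)
    out = np.empty_like(target_lab)
    for c in range(3):  # L*, a*, b* channels
        t, e = target_lab[..., c], example_lab[..., c]
        scale = e.std() / max(t.std(), 1e-6)  # guard against flat channels
        out[..., c] = (t - t.mean()) * scale + e.mean()
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```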
While there may be no point in arguing about taste, creative professionals make a living from sharing theirs. Ensuring that the specific, individual color preferences a creative professional knows how to achieve when creating content on a display also propagate into print is a significant challenge, since print lacks real-time feedback. The present paper introduces a method that allows creative professionals to use the tools they know and love to also personalize the color behavior of their devices. This is achieved by analyzing the color changes applied to images and folding them into a device's ICC profile. As a result, the personalized device exhibits the customized color behavior regardless of the workflow used. The paper describes the ICC profile transformation algorithm in detail and provides a color error analysis of its performance.
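One way image edits could be summarized before being folded into a profile can be sketched as follows: estimate a per-channel correction LUT from a (before, after) image pair. This is a hypothetical illustration, not the paper's algorithm; composing the LUT with a profile's tone curves or CLUT tags is ICC-library-specific and omitted.

```python
# Estimate per-channel 1D correction LUTs from an edited image pair.
import numpy as np

def estimate_channel_luts(before, after, levels=256):
    """before, after: uint8 RGB images of identical shape (assumed)."""
    luts = np.zeros((3, levels))
    for c in range(3):
        src = before[..., c].ravel()
        dst = after[..., c].ravel()
        for v in range(levels):
            mask = src == v
            # fall back to the identity where an input level never occurs
            luts[c, v] = dst[mask].mean() if mask.any() else v
    return luts
```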
Color vision deficiency is the decreased ability to perceive color differences. To overcome this deficiency, supporting techniques for realizing barrier-free color environments are required. This paper proposes a color appearance control system that enables people with color vision deficiency to easily distinguish between confusing colors in printed images using a projector-camera system. The proposed system captures a printed image with a video camera and shifts the lightness components of each pixel by superimposing a compensation image, produced by the projector, onto the printed image. The compensation image is computed in the CIE L*a*b* color space by iteratively adjusting the lightness components toward the target lightness. The feasibility of the proposed system is verified experimentally using real printed images.
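The iterative adjustment can be pictured as a per-pixel feedback loop in L*a*b*. The sketch below shows one such iteration; the function names, the simple proportional gain, and the control details are illustrative assumptions, not the paper's exact loop.

```python
# One feedback iteration: measure the captured lightness, compare it to
# the target lightness, and nudge only the L* channel of the projected
# compensation image.
import numpy as np
from skimage import color

def update_compensation(captured_rgb, target_L, comp_lab, gain=0.5):
    captured_lab = color.rgb2lab(captured_rgb)
    error = target_L - captured_lab[..., 0]   # lightness error per pixel
    comp_lab[..., 0] += gain * error          # adjust L* only, keep a*, b*
    comp_lab = np.clip(comp_lab, [0, -128, -128], [100, 127, 127])
    return color.lab2rgb(comp_lab), comp_lab  # next frame to project
```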
We propose a new method to generate images with a given color and texture, in order to visualize the appearance of car paints. Unlike current methods, the new method is based on visual comparisons of rendered paints with actual physical samples. Thus, we optimized the method to maximize the appearance match between the rendered image and the corresponding car paint. The new method makes it possible to set accurate numerical values not only for color properties but also for well-defined texture parameters. The new method is able to accurately render car paints under various lighting conditions, ranging from purely unidirectional, intense spot light to purely diffuse light. We show that the latter type of lighting condition, which is often encountered in practical situations, is not well covered by existing rendering techniques based on BRDF and BTF measurements. Compared to existing rendering methods, the proposed method is much faster in both measurement and calculation, has lower instrument costs, and requires less data storage.
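The span of lighting conditions mentioned above can be illustrated with generic shading code that blends a purely directional contribution with a purely diffuse one. This is a textbook Lambert-plus-Phong sketch, not the proposed method, which additionally reproduces measured texture (sparkle) parameters.

```python
# Shade a paint sample under a mix of directional and diffuse light.
import numpy as np

def shade(normal, light_dir, view_dir, albedo, gloss, diffuse_fraction):
    """diffuse_fraction = 0: spot light only; 1: fully diffuse illumination."""
    n, l, v = (x / np.linalg.norm(x) for x in (normal, light_dir, view_dir))
    lambert = albedo * max(np.dot(n, l), 0.0)
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    specular = max(np.dot(n, h), 0.0) ** gloss   # simple glossy lobe
    directional = lambert + specular
    diffuse = albedo  # uniform illumination integrates to the albedo
    return (1 - diffuse_fraction) * directional + diffuse_fraction * diffuse
```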
We present a corpus of experimental data from psychometric studies on gamut mapping and demonstrate its use in developing image similarity measures. We investigate whether luminance-based similarity measures such as SSIM can be improved by adding features based on chroma and hue. Image similarity measures can be applied to automatically select a good image from a sample of transformed images.
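A minimal sketch of the kind of extension investigated: compute SSIM on the lightness channel and penalize it with simple chroma and hue difference terms. The feature definitions, normalizations, and weights here are illustrative assumptions, not the corpus-fitted measures.

```python
import numpy as np
from skimage import color
from skimage.metrics import structural_similarity

def chromatic_ssim(ref_rgb, test_rgb, w_chroma=0.2, w_hue=0.1):
    ref, test = color.rgb2lab(ref_rgb), color.rgb2lab(test_rgb)
    s_L = structural_similarity(ref[..., 0], test[..., 0], data_range=100)
    c_ref = np.hypot(ref[..., 1], ref[..., 2])     # chroma of reference
    c_test = np.hypot(test[..., 1], test[..., 2])  # chroma of test image
    d_chroma = np.abs(c_ref - c_test).mean() / 128.0  # rough normalization
    h_ref = np.arctan2(ref[..., 2], ref[..., 1])
    h_test = np.arctan2(test[..., 2], test[..., 1])
    # wrapped hue difference, normalized to [0, 1]
    d_hue = np.abs(np.angle(np.exp(1j * (h_ref - h_test)))).mean() / np.pi
    return s_L - w_chroma * d_chroma - w_hue * d_hue
```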
A crucial step in image compression is the evaluation of its performance, and more precisely the available ways to measure the quality of compressed images. In this paper, a machine-learning expert that provides a quality score is proposed. This quality measure is based on a learned classification process designed to replicate the judgments of human observers. The proposed method, namely Machine Learning-based Image Quality Measurement (MLIQM), first classifies the quality using multi-class Support Vector Machine (SVM) classification according to the quality scale recommended by the ITU. This quality scale contains 5 ranks ordered from 1 (the worst quality) to 5 (the best quality). To evaluate the quality of images, a feature vector containing visual attributes describing image content is constructed. Then, a classification process is performed to provide the final quality class of the considered image. Finally, once a quality class is associated with the considered image, a class-specific SVM regression is performed to score its quality. Obtained results are compared with those obtained by applying classical Full-Reference Image Quality Assessment (FRIQA) algorithms to judge the efficiency of the proposed method.
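The two-stage structure of classification followed by class-specific regression can be sketched with scikit-learn as follows. Feature extraction is abstracted away, and the kernels and hyperparameters are assumptions; the paper's actual features and SVM settings may differ.

```python
# Two-stage quality prediction: an SVM classifier picks one of the five
# ITU quality classes, then the regressor trained for that class refines
# the score.
from sklearn.svm import SVC, SVR

class TwoStageQuality:
    def fit(self, features, classes, scores):
        """features: (N, d) array; classes: labels 1..5; scores: floats."""
        self.clf = SVC(kernel="rbf").fit(features, classes)
        self.regs = {}
        for c in set(classes):
            idx = [i for i, y in enumerate(classes) if y == c]
            self.regs[c] = SVR(kernel="rbf").fit(features[idx], scores[idx])
        return self

    def predict(self, features):
        labels = self.clf.predict(features)
        return [self.regs[c].predict(f.reshape(1, -1))[0]
                for c, f in zip(labels, features)]
```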
We discuss a few selected hypotheses on how the visual system judges differences between color images. We then derive five image-difference features from these hypotheses and address their relation to visual processing. Three models are proposed to combine these features for the prediction of perceived image differences. The parameters of the image-difference features are optimized on human image-difference assessments. For each model, we investigate the impact of individual features on the overall prediction performance. If chromatic features are combined with lightness-based features, the prediction accuracy on a test dataset is significantly higher than that of the SSIM index, which operates only on the achromatic component.
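One plausible form for such a combination model is a multiplicative fusion of per-feature similarity maps with exponents fitted to human judgments. The functional form and the assumption that the five features are expressed as similarities in [0, 1] are illustrative; the paper proposes and compares three specific models.

```python
# Fuse five image-difference features into a single prediction.
import numpy as np

def combine_features(feature_maps, alpha):
    """feature_maps: five similarity maps in [0, 1] (1 = identical);
    alpha: five fitted exponents weighting each feature's influence."""
    similarity = np.prod([np.power(f, a)
                          for f, a in zip(feature_maps, alpha)], axis=0)
    return 1.0 - similarity.mean()  # predicted perceived difference
```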
There have been many attempts to answer the question of how many distinct colors there are, with widely varying answers. Here we present an analysis of what it would take to arrive at a reliable answer and show how currently available models fail to make predictions under the wide range of conditions that need to be considered. Gamut volumes are reported for a number of light sources and viewing modes, and the conclusion is drawn that the only reliable data we have comes from psychophysical work. The color gamut of the LUTCHI data in CIECAM02 is therefore shown as an alternative to the gamut of all possible colors.
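A common way to compute a gamut volume from a set of color coordinates (e.g., the LUTCHI corpus mapped into CIECAM02) is the volume of their convex hull, sketched below. Real gamuts need not be convex, so this is an approximation rather than the paper's reported procedure.

```python
# Estimate a gamut volume as the convex hull volume of sampled colors.
import numpy as np
from scipy.spatial import ConvexHull

def gamut_volume(colors):
    """colors: (N, 3) array of points in a perceptual color space."""
    return ConvexHull(np.asarray(colors)).volume
```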
A new algorithm for evaluating metamer mismatch volumes is introduced. Unlike previous methods, which restrict the set of possible object reflectance spectra and therefore yield only approximate solutions for the mismatch volume, the proposed method places no restrictions on the reflectance spectra and precisely characterizes the volume in all circumstances.
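For intuition, the restricted approach the paper improves on can be sketched as follows: sample reflectances from a low-dimensional basis, keep those metameric to a target under the first (illuminant, observer) pair, and collect their color signals under the second. The result only bounds the mismatch volume from the inside, which is exactly the limitation the proposed method removes; all names and shapes below are illustrative.

```python
# Naive, basis-restricted approximation of a metamer mismatch volume.
import numpy as np

def approximate_mismatch_points(basis, sens1, ill1, sens2, ill2,
                                target, tol=0.5, n=200_000, rng=None):
    """basis: (k, wavelengths); sens*: (3, wavelengths); ill*: (wavelengths,);
    target: (3,) color signal under condition 1."""
    rng = rng or np.random.default_rng(0)
    w = rng.normal(size=(n, basis.shape[0]))
    refl = np.clip(w @ basis, 0.0, 1.0)        # candidate reflectances
    sig1 = (refl * ill1) @ sens1.T             # color signals, condition 1
    metameric = np.linalg.norm(sig1 - target, axis=1) < tol
    return (refl[metameric] * ill2) @ sens2.T  # signals under condition 2
```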
Color harmony patterns are relationships between coexisting colors, where human psycho-perceptual visual pleasantness is the judging criterion. They play a pivotal role in visualization, digital imaging and computer graphics. As a reference we adopt the Itten model, where harmony is expressed in terms of hue. The paper presents an investigation of color harmony patterns using clustering techniques. Our source data was the Adobe Kuler database, consisting of hundreds of thousands of color palettes prepared for creative purposes. To measure dissimilarity between color palettes, we propose to use the Jaccard distance, additionally treating colors as elements of a fuzzy set. In the next step, individual colors are clustered within each group of palettes to identify its scheme of relations. The results are schemes of relationships between colors within palettes.
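A fuzzy Jaccard distance between palettes might look like the sketch below: the membership of each anchor color (a shared reference set spanning the color space) in a palette decays with the distance to the nearest palette color, and the Jaccard distance is computed on these memberships. The exponential kernel, the bandwidth, and the anchor construction are illustrative assumptions.

```python
import numpy as np

def fuzzy_jaccard_distance(palette_a, palette_b, anchors, bandwidth=20.0):
    """palette_a, palette_b, anchors: (N, 3) arrays in, e.g., CIE L*a*b*."""
    def membership(palette):
        # distance from every anchor color to its nearest palette color
        d = np.linalg.norm(anchors[:, None, :] - palette[None, :, :], axis=2)
        return np.exp(-d.min(axis=1) / bandwidth)  # one membership per anchor
    mu_a, mu_b = membership(palette_a), membership(palette_b)
    # fuzzy Jaccard: 1 - |intersection| / |union| via min and max
    return 1.0 - np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()
```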