We learn the colors of objects and scenes through everyday experience; the colors of things we see most frequently are known as memory colors. Memory colors help us communicate, identify objects, detect crop ripeness or disease, evaluate the weather, and recognize emotions. Color quality has become a priority for the smartphone and camera industry, and color quality assessment (CQA) provides insight into user preference that can be used to improve camera and display pipelines. Because the memory colors of important content such as human skin and food drive perceived color quality, understanding memory color preference is critical to understanding perceived color quality. In this study, grass, sky, beach sand, green pepper, and skin stimuli were used for a memory color assessment. Observers adjusted patches with four different textures, including computed textures and real image content, to match their memory. The results show that observers adjusted the real image patch most consistently. Where an artificially generated texture closely resembled the real image content, notably for the sky stimulus, which resembled a flat color patch, participants also adjusted the sample consistently to their memory color. To relate memory color to color quality preference for camera images, a second experiment was performed: a paired comparison of familiar objects with five differently rendered images per object. Two of the five images were rendered from the results of the memory color assessment experiment; the remaining three were the most preferred images from a rank-order CQA. This experiment was performed by naïve observers, and a validation experiment was performed by Munsell Color Science Laboratory observers. Color rendering preference varies across the memory image contents.
For most of the contents, observers prefer the top three color quality images taken from the rank-order CQA; for grass, however, preference is highest for one of the renderings derived from the memory color assessment. In this experiment, images rendered to reflect memory color generally do not match observer preference.
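Paired-comparison data like these are commonly converted into interval preference scores with Thurstone's Case V scaling; the abstract does not specify its analysis, so the following is only a minimal illustrative sketch, assuming a hypothetical win-count matrix `wins[i][j]` giving how often image `i` was preferred over image `j`.

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Convert a paired-comparison win-count matrix into interval
    preference scores (Thurstone Case V). wins[i][j] is the number of
    times image i was preferred over image j (hypothetical layout)."""
    n = len(wins)
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                total = wins[i][j] + wins[j][i]
                p = wins[i][j] / total
                # clip unanimous proportions so the inverse CDF is finite
                p = min(max(p, 0.01), 0.99)
                z[i][j] = NormalDist().inv_cdf(p)
    # scale value for each image = mean z-score across its comparisons
    return [sum(row) / (n - 1) for row in z]
```

Because the z-score matrix is antisymmetric, the resulting scale values sum to zero; only their differences are meaningful, which is all a preference ordering needs.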
How do different object properties combine for the purposes of object identification? We developed a paradigm that allows us to measure the degree to which human observers rely on one object property (e.g., color) vs. another (e.g., material) when they make forced-choice similarity judgments. On each trial of our experiment, observers viewed a target object paired with two test objects: a material match, which differed from the target only in color (along a green-blue axis), and a color match, which differed from the target only in material (along a glossy-matte axis). Across trials, the target was paired with different combinations of material-match and color-match tests, and observers selected the test that appeared more similar to the target. To analyze observer responses, we developed a model (a two-dimensional generalization of the maximum-likelihood difference scaling method) that allows us to recover (1) the color-material weight, reflecting the relative importance of color vs. material in object identification, and (2) the underlying positions of the material-match and color-match tests in a perceptual color-material space. Our results reveal large individual differences in the relative weighting of color vs. material.
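The paper's model is a two-dimensional generalization of MLDS; as a much-simplified illustration of how a single color-material weight could be estimated from forced-choice responses, the sketch below grid-searches a weight `w` under a signal-detection choice rule. The function names, the fixed noise parameter `sigma`, and the trial encoding are all assumptions for illustration, not the authors' method.

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def fit_weight(trials, sigma=0.3):
    """Grid-search maximum-likelihood estimate of a color-material
    weight w in [0, 1]. Each trial is (dc, dm, chose_color_match),
    where dc is the color offset of the material match and dm the
    material offset of the color match (arbitrary perceptual units)."""
    best_w, best_ll = None, -float("inf")
    for k in range(101):
        w = k / 100
        ll = 0.0
        for dc, dm, chose_color in trials:
            # the color match is chosen when the material match's
            # weighted color difference w*dc exceeds the color match's
            # weighted material difference (1-w)*dm, plus noise
            p = Phi((w * dc - (1 - w) * dm) / sigma)
            p = min(max(p, 1e-6), 1 - 1e-6)
            ll += math.log(p if chose_color else 1 - p)
        if ll > best_ll:
            best_w, best_ll = w, ll
    return best_w
```

Fitting such a model per observer is one way the large individual differences in color-vs-material weighting could be quantified.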
This paper presents a new metric for evaluating the perceptual smoothness of color transformations. The metric estimates three-dimensional smoothness so as to cover the full gamut of the transform, detecting artifacts such as jumps in gradients introduced by the transformation itself. Three works from the state of the art were compared to evaluate their pros and cons, and based on these previous proposals, a new metric was developed and tested on several applications. The metric is based on the CIEDE2000 perceptual distance. It depends on the number of ramps and the number of colors per ramp, but these two parameters can be reduced to a single one called granularity. The proposed metric was applied to the Adobe RGB and sRGB color spaces, with and without the addition of artificial artifacts, and tested for a large variety of granularity values. Several basic statistics were considered, and the root mean square appears to be a good candidate for representing global smoothness. The metric proved robust both for evaluating the global smoothness of a transform and for detecting small jumps.
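A smoothness metric of this general kind can be illustrated on a single ramp: compute successive color differences along the transformed ramp, then take the RMS of their variation, which stays near zero for a perceptually even gradient and grows where the transform introduces a jump. The sketch below uses Euclidean CIELAB ΔE*ab as a stand-in for CIEDE2000 (the distance the paper actually uses) to keep it short; the function names are hypothetical.

```python
import math

def delta_e_ab(lab1, lab2):
    # Euclidean CIELAB difference; a simple stand-in for CIEDE2000
    return math.dist(lab1, lab2)

def ramp_smoothness(ramp_lab):
    """RMS of the second differences of successive color differences
    along one transformed ramp of L*a*b* triplets: ~0 for an even
    gradient, large where the transform introduces a jump."""
    de = [delta_e_ab(a, b) for a, b in zip(ramp_lab, ramp_lab[1:])]
    jumps = [abs(b - a) for a, b in zip(de, de[1:])]
    return math.sqrt(sum(j * j for j in jumps) / len(jumps))
```

A full-gamut version would evaluate many such ramps through the space and aggregate the per-ramp values, with the granularity parameter controlling how densely the ramps sample the gamut.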