To precisely investigate individual properties of the internal color representation of congenital red-green color-deficient observers (CDOs) and color-normal observers (CNOs), a difference-scaling experiment using pairs of primary colors was carried out for protans, deutans, and normal trichromats, and the results were analyzed using multi-dimensional scaling (MDS). The MDS configurations of CNOs showed a circular shape similar to the hue circle, whereas those of CDOs showed large individual differences, ranging from circular to U-shaped. A distortion index, DI, is proposed to express the shape variation of the MDS configuration. All color chips were plotted in the color vision space, (L, r/g, y/b), and MDS using a non-linear conversion from distance in the color vision space to a perceptual difference scale successfully yielded the U-shaped configuration that reflects the internal color representation of CDOs.
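As a concrete illustration of the analysis step, the sketch below recovers a two-dimensional configuration from a matrix of pairwise difference judgements using non-metric MDS. The chip count, the random dissimilarity matrix, and the use of scikit-learn are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: non-metric MDS on a pairwise dissimilarity matrix.
# The data here are random placeholders standing in for averaged
# difference-scaling judgements over pairs of primary-color chips.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_chips = 8
d = rng.random((n_chips, n_chips))
dissimilarity = (d + d.T) / 2.0        # symmetrize
np.fill_diagonal(dissimilarity, 0.0)   # zero self-dissimilarity

# Non-metric MDS finds a 2-D configuration whose inter-point distances
# approximate the judged perceptual differences. For CNOs this tends
# toward a hue circle; for CDOs it may flatten toward a U-shape.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
config = mds.fit_transform(dissimilarity)
print(config.shape)  # (8, 2): one (x, y) position per color chip
```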
The difference, or distance, between two color palettes is a metric of interest in color science: it allows quantitative examination of a percept that formerly could only be described with adjectives. The objective of this research is to obtain a dataset of perceptual color differences between pairs of color palettes and to develop color difference metrics that correspond well with the perceptual color difference. The psychophysical experiment was carried out using the magnitude estimation method. Three color difference metrics, namely the Single Color Difference Model (Model 1), the Mean Color Difference Model (Model 2), and the Minimum Color Difference Model (Model 3), were proposed and compared. Data analysis included regression analysis, statistical analysis using the STRESS index, and examination of observer variability using the coefficient of variation (CV). The results show that the Minimum Color Difference Model (Model 3) outperformed the other two, with a coefficient of determination (R-squared) of 0.603 and a STRESS value of 20.95. In terms of observer variability, the average intra-observer variability is 17.63, while the average inter-observer variability is 53.73.
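The following sketch shows how the mean and minimum palette-difference models and the STRESS index could be computed, assuming palettes given as CIELAB arrays and the simple Euclidean Delta E*ab. The function names and the exact matching rule for Model 3 are illustrative assumptions, not the paper's definitions, and Model 1 is omitted since its formulation is not spelled out here.

```python
# Hedged sketch of palette-difference models and the STRESS index.
import numpy as np

def delta_e_ab(lab1, lab2):
    """Euclidean Delta E*ab between corresponding CIELAB rows."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float),
                          axis=-1)

def mean_palette_difference(palette_a, palette_b):
    """Model 2 style: mean difference over corresponding palette entries."""
    return delta_e_ab(palette_a, palette_b).mean()

def min_palette_difference(palette_a, palette_b):
    """Model 3 style (assumed): for each color in A, take the smallest
    difference to any color in B, then average over A."""
    a = np.asarray(palette_a, float)[:, None, :]  # (n, 1, 3)
    b = np.asarray(palette_b, float)[None, :, :]  # (1, m, 3)
    pairwise = np.linalg.norm(a - b, axis=-1)     # (n, m)
    return pairwise.min(axis=1).mean()

def stress(de_computed, dv_perceived):
    """STRESS index (Garcia et al., 2007); 0 means perfect agreement."""
    de = np.asarray(de_computed, float)
    dv = np.asarray(dv_perceived, float)
    f1 = np.sum(de * dv) / np.sum(dv ** 2)
    return 100.0 * np.sqrt(np.sum((de - f1 * dv) ** 2)
                           / np.sum((f1 * dv) ** 2))

# Tiny usage example with invented CIELAB palettes.
pal_a = [[50, 10, 10], [60, -20, 30]]
pal_b = [[52, 12, 8], [58, -18, 33]]
print(mean_palette_difference(pal_a, pal_b))
print(min_palette_difference(pal_a, pal_b))
```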
This study aims to develop an image quality metric for camera auto white balance (AWB), with a transform to just noticeable differences (JNDs) of quality in pictorial scenes. A simulation pipeline was developed for a Nikon D40 DSLR camera, from raw capture to rendered image for display. Seven real-world scenes were used, representing capture conditions in outdoor daylight, indoor fluorescent lighting, and indoor incandescent lighting. Two psychophysical experiments were performed, with 38 observers participating. In the first experiment, the method of adjustment was used to explore the color aims for individual scenes. In the second, a softcopy quality ruler method was used to refine the color aims and define the quality falloff functions. A quartic function was used to fit the results from the softcopy ruler study, forming the proposed objective metric for camera auto white balance.
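A quartic fit of a quality falloff function might look like the sketch below. The error and JND values are invented placeholders, and numpy's polynomial fitting stands in for whatever fitting procedure the study actually used.

```python
# Minimal sketch: fit a degree-4 polynomial mapping white-balance error
# to quality loss in JNDs. All data points are hypothetical.
import numpy as np

color_error = np.array([0., 2., 4., 6., 8., 10., 12.])      # placeholder
quality_loss_jnd = np.array([0., 0.1, 0.5, 1.2, 2.4, 4.1, 6.3])

coeffs = np.polyfit(color_error, quality_loss_jnd, deg=4)   # quartic fit
quartic = np.poly1d(coeffs)
print(quartic(5.0))  # predicted JNDs of quality loss at an error of 5
```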
Experimental phenomenology probes the meanings and qualities that compose immediate visual experience. In contradistinction, the objective methods of classical psychophysics intentionally ignore meanings and qualities, or even awareness as such. Both have their proper uses. Methods of experimental phenomenology that address "equivalence" in a more intricate sense than "visible–not visible" or "discriminable–not discriminable" require stimuli that go beyond the mere level of magnitude-like parameters and perhaps intrude into the realm of semantics. One investigates the cloud of eidolons, or lookalikes, that mentally surround any image. "Eidolon factories" are based on models of the psychogenesis of visual awareness. The intentional fuzziness of eidolons may derive from a variety of processes. We explore the effects of capricious "local sign". Elsewhere, we formally proposed explicit eidolon factories based on such notions. Here we illustrate some of the effects of capricious local sign.
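To make "capricious local sign" concrete, the sketch below perturbs an image with a smoothed random displacement field, a generic disarray operation in the spirit of an eidolon factory. The parameterization and the scipy-based warping are illustrative assumptions, not the authors' published algorithm.

```python
# Hedged illustration: each pixel is displaced by a smooth random field,
# producing an eidolon-like lookalike of the original image.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def local_sign_disarray(image, reach=8.0, grain=4.0, seed=0):
    """Warp a 2-D grayscale image with a smoothed random displacement field.

    reach: maximum displacement in pixels (disarray strength).
    grain: smoothing sigma, i.e. the spatial coherence of the field.
    """
    rng = np.random.default_rng(seed)
    rows, cols = np.indices(image.shape, dtype=float)
    dx = gaussian_filter(rng.standard_normal(image.shape), grain)
    dy = gaussian_filter(rng.standard_normal(image.shape), grain)
    # Normalize each field to unit peak, then scale to the requested reach.
    dx *= reach / np.abs(dx).max()
    dy *= reach / np.abs(dy).max()
    return map_coordinates(image, [rows + dy, cols + dx], order=1,
                           mode="reflect")

img = np.random.rand(128, 128)        # placeholder; use a real photo
eidolon = local_sign_disarray(img)
```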
Recent advances in computational models in vision science have considerably furthered our understanding of human visual perception. At the same time, rapid advances in convolutional deep neural networks (DNNs) have resulted in computer vision models of object recognition which, for the first time, rival human object recognition. Furthermore, it has been suggested that DNNs may not only be successful models for computer vision, but may also be good computational models of the monkey and human visual systems. The advances in computational models in both vision science and computer vision pose two challenges in different and independent domains: First, because the latest computational models have much higher predictive accuracy, and competing models may make similar predictions, we require more human data to be able to statistically distinguish between different models. Thus we would like to have methods to acquire trustworthy human behavioural data quickly and easily. Second, we need challenging experiments to ascertain whether models show similar input-output behaviour only near "ceiling" performance, or whether their performance degrades similarly to human performance: only then do we have strong evidence that models and human observers may be using similar features and processing strategies. In this paper we address both challenges.
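A toy illustration of the first challenge: when two models are both near ceiling, surprisingly many trials are needed before their performance can be statistically distinguished. The accuracies, trial count, and sign-test style comparison below are invented for illustration and are not the paper's method.

```python
# Hedged sketch: simulate two high-accuracy models on the same stimuli and
# test whether model B beats model A on the trials where they disagree.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
acc_a, acc_b, n_trials = 0.92, 0.94, 500   # hypothetical values

a = rng.random(n_trials) < acc_a           # per-trial correctness, model A
b = rng.random(n_trials) < acc_b           # per-trial correctness, model B
disagree = a != b
b_wins = int(np.sum(b & disagree))
result = binomtest(b_wins, int(disagree.sum()), p=0.5)
print(result.pvalue)  # often > 0.05 at this n: more data are needed
```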
In recent years, Convolutional Neural Networks (CNNs) have gained huge popularity among computer vision researchers. In this paper, we investigate how features learned by these networks in a supervised manner can be used to define a measure of self-similarity, an image feature that characterizes many images of natural scenes and patterns, and is also associated with images of artworks. Compared to a previously proposed method for measuring self-similarity based on oriented luminance gradients, our approach has two advantages. Firstly, we fully take color into account, an image feature which is crucial for vision. Secondly, by using higher-layer CNN features, we define a measure of self-similarity that relies more on image content than on basic local image features, such as luminance gradients.
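One way such a CNN-based self-similarity measure could be realized is sketched below, using a pretrained VGG19 from torchvision: pooled features of image sub-regions are compared with the feature vector of the whole image. The layer choice, grid size, and cosine-similarity pooling are assumptions for illustration, not necessarily the paper's exact method.

```python
# Hedged sketch: self-similarity as part-to-whole feature similarity.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

def self_similarity(image, layer=20, grid=4):
    """image: (1, 3, H, W) tensor, already normalized for VGG input.

    Returns the mean cosine similarity between the pooled feature vector
    of each grid cell and that of the whole image: higher values mean
    the parts resemble the whole (more self-similar).
    """
    with torch.no_grad():
        feats = image
        for i, module in enumerate(vgg):
            feats = module(feats)
            if i == layer:                          # stop at chosen layer
                break
        whole = feats.mean(dim=(2, 3))              # global feature vector
        cells = F.adaptive_avg_pool2d(feats, grid)  # (1, C, grid, grid)
        cells = cells.flatten(2).transpose(1, 2)    # (1, grid*grid, C)
        sims = F.cosine_similarity(cells, whole.unsqueeze(1), dim=-1)
        return sims.mean().item()

img = torch.rand(1, 3, 224, 224)  # placeholder; use a normalized photo
print(self_similarity(img))
```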
The variability of human observers and differences in cone photoreceptor sensitivities are important to understand and quantify in the context of color science research. Differences in human cone sensitivity may cause two observers to see different colors on the same display. Technicolor SA built a prototype instrument that allows classification of an observer with normal color vision into a small number of color vision categories. The instrument is used in color-critical applications for displaying colors to human observers. To facilitate color science research, an Observer Calibrator is being designed and built. This instrument is modeled on the one developed at Technicolor, but with improvements including higher luminance levels for observers, a more robust MATLAB computer interface, two sets of individually controlled LED primaries, and the potential for interchangeable optical front ends to present the color stimuli to observers. The new prototype is lightweight, inexpensive, stable, and easy to calibrate and use. Human observers can view the difference between two displayed colors, or match an existing color by adjusting one set of LED primaries. The new prototype will create opportunities for further color science research and will provide an improved experimental experience for participating observers.
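The colorimetry underlying such an instrument can be sketched as a linear mixing problem: given the spectral power distributions of the LED primaries and an observer's color-matching functions, the drive weights that nominally match a target color follow from a 3x3 linear solve. The arrays below are random placeholders, not the instrument's measured data, and the sketch ignores gamut limits and LED nonlinearity.

```python
# Minimal sketch of LED-primary color matching for a given observer.
import numpy as np

wavelengths = np.arange(380, 781, 5)   # nm, hypothetical sampling
n_wl = wavelengths.size
cmf = np.random.rand(n_wl, 3)          # placeholder observer CMFs
led_spds = np.random.rand(n_wl, 3)     # placeholder SPDs, three primaries

# Tristimulus values of a mixture are linear in the LED drive weights:
#   T = cmf.T @ (led_spds @ w)
# so matching a target color reduces to solving a 3x3 linear system.
target_xyz = np.array([40.0, 45.0, 35.0])
mixing_matrix = cmf.T @ led_spds       # (3, 3)
weights = np.linalg.solve(mixing_matrix, target_xyz)
print(weights)  # drive levels that nominally reproduce target_xyz
```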