A model of lightness computation by the human visual system is described and simulated. The model accounts, to within a few percent error, for the large perceptual dynamic range compression observed in lightness matching experiments conducted with Staircase Gelb and related stimuli. The model assumes that neural lightness computation is based on transient activations of ON- and OFF-center neurons in the early visual pathway generated during the course of fixational eye movements. The receptive fields of the ON and OFF cells are modeled as difference-of-Gaussians functions operating on a log-transformed image of the stimulus produced by the photoreceptor array. The key neural mechanism that accounts for the observed degree of dynamic range compression is a difference in the neural gains associated with ON and OFF cell responses: the ON cell gain is only about 1/4 as large as that of the OFF cells. ON and OFF cell responses are sorted in visual cortex by the direction of the eye movements that generated them, then summed across space by large-scale receptive fields to produce separate ON and OFF edge induction maps. Lightness is computed by subtracting the OFF network response at each spatial location from the ON network response and normalizing the spatial lightness representation such that the maximum activation within the lightness network always equals a fixed value that corresponds to the white point. In addition to accounting for the degree of dynamic range compression observed in the Staircase Gelb illusion, the model also accounts for the change in the degree of perceptual compression that occurs when the spatial ordering of the papers is altered, and for the release from compression that occurs when the papers are surrounded by a white border. Furthermore, the model explains the Chevreul illusion and the perceptual fading of stabilized images as byproducts of the neural lightness computations assumed by the model.
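The stages described above (log transform, ON/OFF difference-of-Gaussians responses with asymmetric gains, large-scale spatial summation, ON-minus-OFF subtraction, and anchoring to a white point) can be summarized in a minimal sketch. The filter sizes, gain values, and white-point constant below are illustrative assumptions rather than the fitted parameters of the simulated model, and the sorting of responses by eye-movement direction is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(img, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians (center minus surround) spatial filter."""
    return gaussian_filter(img, sigma_c) - gaussian_filter(img, sigma_s)

def lightness(image, on_gain=0.25, off_gain=1.0, sigma_pool=20.0, white=100.0):
    log_img = np.log(np.clip(image, 1e-6, None))   # photoreceptor log transform
    contrast = dog(log_img)                        # local contrast signal
    on = on_gain * np.maximum(contrast, 0.0)       # rectified ON response (low gain)
    off = off_gain * np.maximum(-contrast, 0.0)    # rectified OFF response (high gain)
    # Large-scale spatial summation yields separate ON and OFF edge induction maps
    on_map = gaussian_filter(on, sigma_pool)
    off_map = gaussian_filter(off, sigma_pool)
    raw = on_map - off_map                         # ON minus OFF at each location
    return white * raw / raw.max()                 # anchor the maximum to the white point
```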
Lightness perception is a long-standing topic in research on human vision, but very few image-computable models of lightness have been formulated. Recent work in computer vision has used artificial neural networks and deep learning to estimate surface reflectance and other intrinsic image properties. Here we investigate whether such networks are useful as models of human lightness perception. We train a standard deep learning architecture on a novel image set that consists of simple geometric objects with a few different surface reflectance patterns. We find that the model performs well on this image set, generalizes well across small variations, and outperforms three other computational models. The network shows partial lightness constancy, much like human observers, in that illumination changes have a systematic but moderate effect on its reflectance estimates. However, the network generalizes poorly beyond the type of images in its training set: it fails on a lightness matching task with unfamiliar stimuli, and it does not account for several lightness illusions experienced by human observers.
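The abstract does not specify the network or training objective, so the following is only an illustrative sketch of the general setup: a small fully convolutional network trained with a per-pixel regression loss to map an image to a reflectance map. The architecture, loss, and optimizer settings are assumptions standing in for the "standard deep learning architecture" mentioned above.

```python
import torch
import torch.nn as nn

class ReflectanceNet(nn.Module):
    """Toy fully convolutional network: RGB image -> per-pixel reflectance estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = ReflectanceNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(images, reflectance_targets):
    """images: (B, 3, H, W); reflectance_targets: (B, 1, H, W) ground-truth reflectance."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), reflectance_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```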
Last year at HVEI, I presented a computational model of lightness perception inspired by data from primate neurophysiology. That model first encodes spatially directed local contrast in the image, then integrates the resulting local contrast signals across space to compute lightness (Rudd, J Percept Imaging, 2020; HVEI Proceedings, 2021). Here I present computer simulations of the lightness model and generalize it to model color perception by including in the model local color contrast detectors that have properties similar to those of cortical “double-opponent” (DO) neurons. DO neurons make local spatial comparisons between the activities of L vs M and S vs (L + M) cones and half-wave rectify these local color contrast comparisons to produce psychophysical channels that encode, roughly, amounts of ‘red’, ‘green’, ‘blue’, and ‘yellow’.
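A minimal sketch of the double-opponent stage described above is given below. The center/surround filter sizes, the (L + M) weighting, and the assignment of the rectified signs to the four hue channels are illustrative assumptions; the inputs L, M, and S are cone-activity images of the same size.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast(x, sigma_c=1.0, sigma_s=3.0):
    """Center-surround (difference-of-Gaussians) spatial comparison."""
    return gaussian_filter(x, sigma_c) - gaussian_filter(x, sigma_s)

def do_channels(L, M, S):
    rg = local_contrast(L - M)        # local L vs M opponent contrast
    by = local_contrast(S - (L + M))  # local S vs (L + M) opponent contrast
    return {
        'red':    np.maximum(rg, 0.0),    # half-wave rectified +(L - M) contrast
        'green':  np.maximum(-rg, 0.0),   # half-wave rectified -(L - M) contrast
        'blue':   np.maximum(by, 0.0),    # half-wave rectified +(S - (L + M)) contrast
        'yellow': np.maximum(-by, 0.0),   # half-wave rectified -(S - (L + M)) contrast
    }
```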