Land and McCann's original Retinex theory [1] described how the brain might achieve color constancy by spatially integrating the outputs of edge-detector neurons in visual cortex (i.e., Hubel and Wiesel cells). Given a collection of reflective surfaces, separated by hard edges (a Mondrian stimulus) and viewed under uniform illumination, Retinex first computes luminance ratios at the borders between surfaces, then multiplies these ratios along image paths to compute the relative ratios of noncontiguous surfaces. This multiplication is equivalent to summing steps in log luminance. Here I review results from the human lightness literature supporting the key Retinex assumption that biological lightness computations involve a spatial integration of steps in log luminance. However, to explain perceptual data, the original Retinex algorithm must be supplemented with additional perceptual principles that together determine the weights given to particular image edges. These include: distance-dependent edge weighting, different weights for incremental and decremental luminance steps, contrast gain acting between edges, top-down control of edge weights, and computations in object-centered coordinates. I outline a theory, informed by recent findings from neurophysiology, of how these computations might be carried out by neural circuits in the ventral stream of visual cortex.
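The core Retinex computation described above can be illustrated with a minimal sketch. The surface luminances below are hypothetical values chosen for illustration (not taken from [1]); the sketch shows that multiplying border ratios along a path across a one-dimensional "Mondrian" is equivalent to summing steps in log luminance:

```python
import math

# Hypothetical luminances of surfaces crossed in order along one image path
# (illustrative values only, not from the original Retinex experiments).
luminances = [10.0, 40.0, 20.0, 80.0]

# Luminance ratio at each border between adjacent surfaces.
ratios = [b / a for a, b in zip(luminances, luminances[1:])]

# Multiplying border ratios along the path gives the relative luminance
# of the final surface with respect to the first.
product = math.prod(ratios)

# Equivalently, sum the steps in log luminance and exponentiate.
log_steps = [math.log(b) - math.log(a) for a, b in zip(luminances, luminances[1:])]
log_sum = math.exp(sum(log_steps))

# Both quantities equal luminances[-1] / luminances[0].
```

Because intermediate surfaces cancel in the product, the result depends only on the endpoints of the path, which is what allows Retinex to relate noncontiguous surfaces.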