Computer simulations of an extended version of a neural model of lightness perception [1,2] are presented. The model provides a unitary account of several key aspects of spatial lightness phenomenology, including contrast and assimilation, and asymmetries in the strengths of lightness and darkness induction. It does so by invoking mechanisms that have also been shown to account for the overall magnitude of dynamic range compression in experiments involving lightness matches made to real-world surfaces. The model's assumptions are derived partly from parametric measurements of the visual responses of ON and OFF cells in the lateral geniculate nucleus of the macaque monkey [3,4] and partly from quantitative human psychophysical measurements. The model's computations and architecture are consistent with the properties of human visual neurophysiology as they are currently understood. The neural model's predictions and behavior are contrasted through the simulations with those of other lightness models, including Retinex theory and the lightness filling-in models of Grossberg and his colleagues.
Michael E. Rudd, "Neurocomputational model explains spatial variations in perceived lightness induced by luminance edges in the image," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Human Vision and Electronic Imaging, 2021, pp. 151-1–151-7, https://doi.org/10.2352/ISSN.2470-1173.2021.11.HVEI-151