Modern production and distribution workflows have allowed high dynamic range (HDR) imagery to become widespread. HDR has had a positive impact on the creative industry and has improved image quality on consumer devices. Akin to loudness dynamics in audio, the increased luminance range allowed by HDR ecosystems is predicted to introduce unintended, high-magnitude luminance changes. These changes could occur at program transitions, advertisement insertions, and channel-change operations. In this article, we present findings from a psychophysical experiment conducted to evaluate three components of HDR luminance changes: the magnitude of the change, the direction of the change (darker or brighter), and the adaptation time. Results confirm that all three components exert a significant influence. We find that increasing either the magnitude of the luminance change or the adaptation time results in more discomfort at the unintended transition. We also find that discomfort at brighter-to-darker transitions has a non-linear relationship with adaptation time, falling off steeply at very short durations.
A model of lightness computation by the human visual system is described and simulated. The model accounts, to within a few percent error, for the large perceptual dynamic range compression observed in lightness matching experiments conducted with Staircase Gelb and related stimuli. The model assumes that neural lightness computation is based on transient activations of ON- and OFF-center neurons in the early visual pathway generated during the course of fixational eye movements. The receptive fields of the ON and OFF cells are modeled as difference-of-Gaussians functions operating on a log-transformed image of the stimulus produced by the photoreceptor array. The key neural mechanism that accounts for the observed degree of dynamic range compression is a difference in the neural gains associated with ON and OFF cell responses: the ON cell gain is only about 1/4 as large as that of the OFF cells. ON and OFF cell responses are sorted in visual cortex by the direction of the eye movements that generated them, then summed across space by large-scale receptive fields to produce separate ON and OFF edge induction maps. Lightness is computed by subtracting the OFF network response at each spatial location from the ON network response and normalizing the spatial lightness representation such that the maximum activation within the lightness network always equals a fixed value that corresponds to the white point. In addition to accounting for the degree of dynamic range compression observed in the Staircase Gelb illusion, the model also accounts for the change in the degree of perceptual compression that occurs when the spatial ordering of the papers is altered, and for the release from compression that occurs when the papers are surrounded by a white border. Furthermore, the model explains the Chevreul illusion and the perceptual fading of stabilized images as byproducts of the neural lightness computations assumed by the model.
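The abstract above spells out the model's pipeline step by step, and its central claim, that the ON/OFF gain asymmetry produces the observed compression, can be illustrated numerically. Below is a minimal 1-D sketch in Python, assuming illustrative parameter values (gains, white point, stimulus) that are not given in the abstract; it also collapses the DoG filtering, eye-movement sorting, and large-scale summation stages into a single directed edge-integration step.

```python
# Minimal sketch of the asymmetric ON/OFF edge-integration mechanism.
# All parameter values are illustrative placeholders, not the paper's.
import numpy as np

def lightness_profile(luminance, on_gain=0.25, off_gain=1.0, white=100.0):
    """Edge-integration lightness with asymmetric ON/OFF gains."""
    log_l = np.log10(luminance)
    step = np.diff(log_l, prepend=log_l[0])   # local edge signals (log ratios)
    on = on_gain * np.maximum(step, 0.0)      # ON cells: luminance increments,
    off = off_gain * np.maximum(-step, 0.0)   #   gain ~1/4 that of OFF cells
    net = np.cumsum(on - off)                 # large-scale spatial summation
    net -= net.max()                          # anchor the maximum response...
    return white * 10.0 ** net                # ...to the white point

# Staircase-Gelb-like display: five papers, each twice the luminance of the last.
papers = np.repeat([1.0, 2.0, 4.0, 8.0, 16.0], 10)
print(lightness_profile(papers)[::10].round(1))  # -> [ 50.   59.5  70.7  84.1 100. ]
```

With equal ON and OFF gains this profile would reproduce the full 16:1 stimulus range; setting the ON gain to a quarter of the OFF gain compresses it to roughly 2:1, the qualitative signature of the Staircase Gelb compression the model is built to explain.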
The critical flicker fusion (CFF) frequency is the rate of change at which a temporally periodic light begins to appear completely steady to an observer. This value is affected by several visual factors, such as the luminance of the stimulus or its location on the retina. With new high dynamic range (HDR) displays operating at higher luminance levels and virtual reality (VR) displays presenting wide fields of view, the effective CFF may change significantly from the values expected under traditional presentation. In this work we use a prototype HDR VR display capable of luminances up to 20,000 cd/m² to gather a novel set of CFF measurements at previously unexamined levels of luminance, eccentricity, and size. Our data are useful for studying the temporal behavior of the visual system at high luminance levels, as well as for setting practical thresholds in display engineering.
We investigated how perceived achromatic and chromatic contrast changes with luminance. The experiment used test and reference displays viewed haploscopically, with each eye seeing one of the displays. Test stimuli presented on the test display against backgrounds of varying luminance (0.02, 2, 20, 200, and 2000 cd/m²) were matched in perceived contrast to reference stimuli presented against a background fixed at 200 cd/m². We found that approximate contrast constancy holds at photopic luminance levels (20 cd/m² and above); that is, for most conditions, test stimuli presented on these backgrounds matched the reference when their physical contrasts were equal. For lower background luminances, covering an extensive range of 5 log units, much higher physical contrast was required to achieve a match with the reference. This deviation from constancy was larger for lower spatial frequencies and lower suprathreshold pedestal contrasts. Our data provide the basis for new contrast retargeting models for matching appearance across luminance levels.
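Since these matching data are framed as the basis for contrast retargeting models, a short sketch of how such a model could be applied may be useful. The matching table below is a hypothetical placeholder, not the paper's measurements: it only encodes the qualitative finding that constancy (a multiplier of 1.0) holds at photopic levels and that progressively more physical contrast is needed at lower luminances.

```python
# Hypothetical sketch of luminance-dependent contrast retargeting.
# MATCH_TABLE values are placeholders, NOT measured results.
import numpy as np

# (background luminance in cd/m^2, required physical-contrast multiplier)
MATCH_TABLE = [(0.02, 3.0), (2.0, 1.5), (20.0, 1.0), (200.0, 1.0), (2000.0, 1.0)]

def retarget_contrast(c_ref, target_luminance):
    """Scale a reference contrast for display at a different background luminance."""
    lums, mults = zip(*MATCH_TABLE)
    # Interpolate the multiplier in log-luminance; np.interp clamps at the ends.
    m = np.interp(np.log10(target_luminance), np.log10(lums), mults)
    return c_ref * m

print(retarget_contrast(0.2, 200.0))  # photopic: constancy, contrast unchanged (0.2)
print(retarget_contrast(0.2, 0.02))   # near-scotopic: boosted contrast (~0.6)
```

A complete retargeting model would also condition the multiplier on spatial frequency and pedestal contrast, since the deviations from constancy reported above grow for lower spatial frequencies and lower pedestal contrasts.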