Pages 1 - 2,  © Society for Imaging Science and Technology 2016
Digital Library: EI
Published Online: February  2016
Pages 1 - 10,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

There are many different Retinex algorithms. They make different assumptions and attempt to solve different problems; they have different goals, ground truths and output results. This "Retinex at 50" workshop session compares the variety of Retinex algorithms, along with the goals and ground truths that measure the success of their results. All Retinex algorithms use spatial comparisons to calculate the appearances of the entire scene, and all need observer data that quantify human vision in order to evaluate their accuracy. The most critical component of all Retinex experiments is the set of observer matches used to characterize human spatial vision. This paper reviews the experiments that have evolved from Retinex theory; they provide a very challenging data set for algorithms that predict appearance.
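The chain of spatial comparisons that these experiments probe can be sketched for a single one-dimensional path as a ratio–threshold–reset computation (a toy illustration with an arbitrary threshold value, not any particular published implementation):

```python
import numpy as np

def path_retinex(luminance, threshold=0.02):
    """Sequential product of edge ratios along a 1-D path, with a
    threshold (near-unity ratios are discarded, rejecting gradual
    gradients) and a reset (products above 1 are clipped, so the
    brightest patch seen so far is treated as white)."""
    logs = np.log(luminance)
    out = np.zeros_like(logs)
    acc = 0.0                          # running log product; 0 means "white"
    for i in range(1, len(logs)):
        step = logs[i] - logs[i - 1]   # log ratio across the edge
        if abs(step) < threshold:      # threshold: ignore gradual change
            step = 0.0
        acc += step
        if acc > 0.0:                  # reset: a new maximum becomes white
            acc = 0.0
        out[i] = acc
    return np.exp(out)                 # lightness relative to the path maximum

row = np.array([10.0, 10.2, 40.0, 40.0, 80.0, 20.0])
print(path_retinex(row))               # last patch reported as 20/80 = 0.25
```

Note how the shallow 10.0 to 10.2 step is discarded by the threshold, and each brighter patch resets the estimate, so only the final 80 to 20 decrement survives in the output.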

Digital Library: EI
Published Online: February  2016
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

Several different implementations of the Retinex model have derived from the original Land and McCann paper. This paper presents two of them: Land's Designator and Milano Retinex. The Designator is an alternative calculation that Land described in his 1983 and 1986 papers; Jobson later used it as the basis of the NASA Retinex. Milano Retinex is a family of slightly different Retinex implementations, described here together with the principal members of the family.

Digital Library: EI
Published Online: February  2016
Pages 1 - 9,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

The Oriented Difference-of-Gaussians (ODOG) model of brightness perception is based on linear spatial filtering by oriented receptive fields followed by contrast/response normalization. The ODOG model can parsimoniously predict the perceived intensity (brightness) of regions in many visual stimuli including White's effect. Unlike competing explanations such as anchoring theory, filling-in, edge-integration, or layer decomposition, spatial filtering by the ODOG model accounts for the gradient structure of induction which, while most striking in grating induction, also occurs within the test fields of classical simultaneous brightness contrast and the White stimulus. Because the ODOG model does not require defined regions of interest it can be applied to arbitrary stimuli, including natural images. We give a detailed description of the ODOG model and illustrate its operation on the Black and White Mondrian stimulus similar to that used by Land & McCann [31] to demonstrate their Retinex model of lightness perception/constancy.
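The pipeline described above can be sketched in a heavily simplified, single-scale form: oriented difference-of-Gaussians filtering per orientation, RMS normalization of each orientation channel, then summation. The kernel size, 2:1 surround elongation and circular FFT convolution below are illustrative choices, not the model's published parameters:

```python
import numpy as np

def gauss2d(size, sx, sy, theta):
    """Rotated anisotropic 2-D Gaussian kernel, normalized to unit sum."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 / (2 * sx**2) + yr**2 / (2 * sy**2)))
    return g / g.sum()

def conv2(img, ker):
    """Circular convolution via FFT (edge wrap-around; fine for a toy demo)."""
    kp = np.zeros_like(img, dtype=float)
    s = ker.shape[0]
    kp[:s, :s] = ker
    kp = np.roll(kp, (-(s // 2), -(s // 2)), axis=(0, 1))  # center the kernel
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

def odog_toy(image, sigma=2.0, n_orient=6):
    """Toy single-scale ODOG: oriented DoG filtering per orientation,
    RMS (contrast/response) normalization of each channel, summation."""
    out = np.zeros_like(image, dtype=float)
    size = int(6 * sigma) | 1                    # odd kernel size
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        center = gauss2d(size, sigma, sigma, theta)
        surround = gauss2d(size, sigma, 2 * sigma, theta)  # elongated surround
        resp = conv2(image, center - surround)
        rms = np.sqrt(np.mean(resp**2)) + 1e-12
        out += resp / rms                        # normalize, then sum channels
    return out
```

The full model additionally uses seven spatial frequencies per orientation with a fixed weighting across scales; this sketch keeps only one scale to show the filter-then-normalize structure.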

Digital Library: EI
Published Online: February  2016
Pages 1 - 8,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

This paper presents a computational framework inspired by the center-surround antagonistic receptive fields of the human visual system. It demonstrates that, starting from the actual pixel value (center) and a low-pass estimate of the pixel's neighborhood (surround), and using a mapping function inspired by the shunting inhibition mechanism, some widely used spatial image processing techniques can be implemented, including adaptive tone mapping, local contrast enhancement, text binarization and local feature detection. In doing so, it highlights the relations between these seemingly different applications and the early stages of the human visual system, and draws insights about their characteristics.
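The center/surround mapping described above can be sketched as follows. The box-blur surround, the neighborhood radius and the Naka-Rushton-style form y = c / (c + s) are illustrative assumptions standing in for the paper's actual low-pass estimate and shunting equation:

```python
import numpy as np

def blur1d(a, radius, axis):
    """1-D moving average along one axis with edge padding."""
    k = 2 * radius + 1
    ker = np.ones(k) / k
    f = lambda v: np.convolve(np.pad(v, radius, mode='edge'), ker, mode='valid')
    return np.apply_along_axis(f, axis, a)

def shunting_map(image, radius=7):
    """Map each pixel by y = c / (c + s): the center c is the pixel value,
    the surround s a local low-pass average. This divisive form, inspired
    by shunting inhibition, compresses pixels in bright neighborhoods and
    boosts pixels in dark ones (adaptive tone mapping)."""
    surround = blur1d(blur1d(image, radius, 0), radius, 1)
    return image / (image + surround + 1e-6)   # output in [0, 1)
```

On a uniform image the output sits at 0.5 everywhere; local deviations from the surround are what get amplified, which is why the same primitive can serve tone mapping, contrast enhancement and binarization.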

Digital Library: EI
Published Online: February  2016
Pages 1 - 8,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

Land and McCann’s original Retinex theory [1] described how the brain might achieve color constancy by spatially integrating the outputs of edge detector neurons in visual cortex (i.e., Hubel and Wiesel cells). Given a collection of reflective surfaces, separated by hard edges (a Mondrian stimulus) and viewed under uniform illumination, Retinex first computes luminance ratios at the borders between surfaces, then multiplies these ratios along image paths to compute the relative ratios of noncontiguous surfaces. This multiplication is equivalent to summing steps in log luminance. Here I review results from the human lightness literature supporting the key Retinex assumption that biological lightness computations involve a spatial integration of steps in log luminance. However, to explain perceptual data, the original Retinex algorithm must be supplemented with additional perceptual principles that together determine the weights given to particular image edges. These include: distance-dependent edge weighting, different weights for incremental and decremental luminance steps, contrast gain acting between edges, top-down control of edge weights, and computations in object-centered coordinates. I outline a theory, informed by recent findings from neurophysiology, of how these computations might be carried out by neural circuits in the ventral stream of visual cortex.
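The weighted spatial integration of log-luminance steps can be illustrated on a one-dimensional strip. The weight values and the exponential distance fall-off below are hypothetical, chosen only to show the structure of the computation (distance-dependent edge weighting plus asymmetric increment/decrement weights), not fitted values:

```python
import numpy as np

def edge_integration(luminance, w_inc=1.0, w_dec=1.3, halflife=8.0):
    """Lightness of the last patch in a 1-D strip, from a weighted sum of
    steps in log luminance. Decremental steps get a higher weight than
    incremental ones, and edges remote from the target are discounted
    with distance (illustrative weights, not a published parameter set)."""
    steps = np.diff(np.log(luminance))
    n = len(steps)
    total = 0.0
    for i, step in enumerate(steps):
        dist = (n - 1) - i                         # edges from the target edge
        w = (w_dec if step < 0 else w_inc) * 0.5 ** (dist / halflife)
        total += w * step
    return np.exp(total)                           # lightness re: first patch
```

With equal weights and no distance fall-off this reduces exactly to the original Retinex ratio product (luminance[-1] / luminance[0]); the extra parameters are where the perceptual principles listed above enter.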

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

We investigate the impact of the background luminance upon the perceived image quality of real-world scenes. To do so, we generate a set of small image patches that span the full range of mean luminance values and contrasts that may be displayed upon a monitor with a finite luminance range. Subjects viewed the images on a uniform black, grey or white surround and were asked to rate the perceived quality on a scale from 0 to 9. We find that the maximum image quality scores occur for images with a mean luminance of less than half the displayable range, consistent with the image being passed through a compressive non-linearity before contrast is computed. Moreover, the maximum image quality scores occur at lower mean luminance levels when the background luminance is darker, a pattern consistent with investigations into lightness perception. We conclude that models of contrast perception require an adaptive model of lightness perception. However, we also note the considerable challenge of developing a model of lightness perception that can generalize to any given display configuration.
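The idea that a compressive non-linearity applied before contrast computation favors darker means can be demonstrated with a toy calculation. The Naka-Rushton-style compression and its semisaturation constant are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np

def compressed_rms_contrast(lum, semisat=0.18):
    """RMS contrast computed after a compressive non-linearity
    r = x / (x + s) (Naka-Rushton form; s is an illustrative choice)."""
    r = lum / (lum + semisat)          # compress luminance first
    return r.std() / r.mean()          # then compute contrast

c = 0.5                                # same physical (Michelson) contrast
low = 0.25 * np.array([1 - c, 1 + c])  # dark-mean pattern
high = 0.75 * np.array([1 - c, 1 + c]) # bright-mean pattern
print(compressed_rms_contrast(low) > compressed_rms_contrast(high))  # True
```

Because the compression flattens the bright end of the luminance range, a pattern of fixed physical contrast retains more post-compression contrast when its mean is low, which is the direction of the effect reported above.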

Digital Library: EI
Published Online: February  2016
Pages 1 - 10,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

Stars, galaxies and some nebulae emit light, unlike solar-system planets and satellites, which mainly reflect the Sun's light, or nebulae that also partly reflect the light of nearby stars. Light emission is concentrated at specific wavelengths corresponding to transition states of the atoms that compose the object, and professional photographers and astronomers use special narrow-band filters to detect this spectral emission. Using monochromatic CCD cameras, a multi-filter photograph can be taken, producing at least long-, middle- and short-wavelength snapshots that can be processed to give full-color pictures; amateurs can use wide-band filters or even color cameras. Colors in astrophotography do not correspond to colors perceivable by the human visual system (HVS), and our visual system did not evolve to perceive these kinds of images. Nevertheless, we still have to consider our perception when creating pictures of cosmic objects, which are rendered using so-called representative colors, selected to show the captured wavelength bands and to make visible what is scientifically relevant. The typical application field of Retinex-based algorithms is that of natural images, since their purpose is to simulate some behaviors of the human visual system. However, we can use HVS properties to enhance astrophotographs and increase local contrast, allowing researchers to detect otherwise invisible structures and lay people to be fascinated by the richness of cosmic objects. We present the results of applying some Retinex-based algorithms to astrophotographs, discuss their efficacy compared to traditional methods, and discuss possible developments.
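A minimal sketch of the "representative colors" step described above: three narrowband frames are independently stretched and stacked into an RGB image. The percentile stretch and the channel assignment are illustrative assumptions; real astrophotography pipelines add calibration, background subtraction and non-linear stretches:

```python
import numpy as np

def representative_color(narrowband, percentile=99.5):
    """Map three narrowband frames (e.g. long-, middle- and short-wavelength
    bands assigned to R, G, B) to a representative-color image: stretch each
    channel independently to its own high percentile, clip, then stack."""
    chans = []
    for frame in narrowband:
        top = np.percentile(frame, percentile) + 1e-12   # per-channel white point
        chans.append(np.clip(frame / top, 0.0, 1.0))
    return np.stack(chans, axis=-1)                      # H x W x 3 RGB
```

Because each band is stretched on its own, the resulting hues encode which filter captured the signal rather than any perceivable scene color, which is exactly why such images then benefit from perception-aware (e.g. Retinex-based) local contrast enhancement.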

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

This paper explores the interrelations between Retinex, neural models and variational methods by reviewing relevant related work from the past few years. Taking all the essential elements of the Retinex theory as postulated by Land and McCann (channel independence, the ratio-reset mechanism, local averages, non-linear correction), it has been shown that one can obtain a Retinex implementation that is intrinsically 2D, whose results comply with all the expected properties of the original, one-dimensional path-based Retinex algorithm (such as approximating color constancy while being unable to deal with overexposed images) but which does not suffer from common shortcomings such as sensitivity to noise or the appearance of halos. An iterative application of this 2D Retinex algorithm takes the form of a partial differential equation (PDE) that is proven not to minimize any energy functional, a fact linked to its limitations with over-exposed pictures. A modification of the iterative method was proposed so that it can handle both under- and over-exposed images; the resulting PDE has a number of very relevant properties that connect Retinex with variational models, histogram equalization and efficient coding, perceptual color-correction algorithms, and computational-neuroscience models of cortical activity and of the retina.

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 6

The Retinex theory of color vision has been a source of inspiration for color researchers since its inception 50 years ago. It has been adapted to different goals such as shadow removal, high dynamic range imaging, and computational color constancy. In 2007, Bertalmío and colleagues presented a variational, perceptually-based color correction model related to Retinex. In this paper we first comment on this model and then review different image processing applications that have been obtained by small modifications to it, namely color gamut mapping and image dehazing.

Digital Library: EI
Published Online: February  2016