A single tone curve, applied globally to remap the brightness of each pixel, is one of the simplest ways to enhance an image. Tone curves may result from individual user edits or from algorithmic processing, including in-camera processing pipelines. The precise shape of a tone curve is not strongly constrained, other than being usually limited to increasing functions of brightness. In this paper we constrain the shape further and define a simple tone adjustment, mathematically, to be a tone curve with either no or one inflexion point. It follows that a complex tone curve is one with more than one inflexion point, which visually makes the curve appear ‘wiggly’. Empirically, complex tone curves do not seem to be used very often. For any given tone curve, we show how the closest simple approximation can be found efficiently. We apply our approximation method to the MIT-Adobe FiveK dataset, which comprises 5000 images manually tone-edited by 5 experts. We find that all 25,000 edited images, including those whose tone adjustments are complex, are well approximated by simple tone curve adjustments.
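The simple/complex distinction above can be made concrete by counting sign changes of a sampled curve's discrete second derivative. This is only a minimal sketch of the classification, not the paper's approximation algorithm; the example curves and the `tol` threshold are illustrative assumptions:

```python
import numpy as np

def count_inflexions(curve, tol=1e-6):
    """Count inflexion points of a sampled tone curve.

    An inflexion is a sign change of the discrete second derivative;
    `tol` (an assumed threshold) suppresses spurious changes from
    numerical noise.
    """
    d2 = np.diff(curve, n=2)
    signs = np.sign(np.where(np.abs(d2) < tol, 0.0, d2))
    signs = signs[signs != 0]                # ignore (near-)flat segments
    return int(np.sum(signs[1:] != signs[:-1]))

def is_simple(curve, tol=1e-6):
    """A 'simple' tone adjustment has no or one inflexion point."""
    return count_inflexions(curve, tol) <= 1

x = np.linspace(0.0, 1.0, 256)
gamma = x ** 0.45                            # gamma curve: no inflexion
s_curve = 0.5 - 0.5 * np.cos(np.pi * x)      # S-shaped: one inflexion
wiggly = x + 0.05 * np.sin(6 * np.pi * x)    # 'wiggly': several inflexions
```

On these examples the gamma and S-shaped curves are classed as simple, while the oscillating curve is complex.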
Traditionally, the appearance of an object in an image is edited to elicit a preferred perception. However, the editing method is often arbitrary and may not take the human perception mechanism into account. In this study, the authors explored image-based editing of leather “authenticity” using an estimation model, derived in their previous work, that accounts for this perception mechanism. They created rendered leather images by emphasizing or suppressing the image properties corresponding to “authenticity.” They then performed two subjective experiments: one using fully edited images and another using partially edited images in which the specular reflection intensity was held constant. Participants observed the rendered leather images and evaluated the differences in perceived “authenticity.” The authors found that the perception of “authenticity” could be changed by manipulating the intensity of specular reflection and the texture (grain and surface irregularity) in the images. The results of this study could be used to tune image properties to make images more appealing.
In addition to colors and shapes, material appearance factors such as glossiness, translucency, and roughness are important for reproducing the realistic feeling of an image. These perceptual qualities are often degraded when a scene is reproduced as a digital color image. The authors aim to edit the material appearance of an image captured by a general camera and reproduce it on a general display device. In a previous study, the authors found that the pupil diameter decreases slightly when observing the surface properties of an object, and they proposed an algorithm called “PuRet” for enhancing material appearance based on physiological models of the pupil and retina. However, to obtain an accurate reproduction, two adaptation parameters in PuRet, related to the retinal response, had to be adjusted manually for each scene and for the particular characteristics of the display device. This study realizes management of the material appearance of objects on display devices by automatically deriving the optimal PuRet parameters from captured RAW image data. The results indicate that the authors succeeded in estimating one adaptation parameter from the median scene luminance estimated from a RAW image, and the other from the average scene luminance together with the luminance contrast of the output display device. An experiment using an unknown display device that was not used to derive the estimation model confirmed that the proposed model works properly.
Nonlinear complementary metal-oxide-semiconductor (CMOS) image sensors (CISs), such as logarithmic (log) and linear–logarithmic (linlog) sensors, achieve high/wide dynamic ranges in single exposures at video frame rates. As with linear CISs, fixed pattern noise (FPN) correction and salt-and-pepper noise (SPN) filtering are required to achieve high image quality. This paper presents a method to generate digital integrated circuits, suitable for any monotonic nonlinear CIS, to correct FPN in hard real time. It also presents a method to generate digital integrated circuits, suitable for any monochromatic nonlinear CIS, to filter SPN in hard real time. The methods are validated by implementing and testing generated circuits using field-programmable gate array (FPGA) tools from both Xilinx and Altera. The generated circuits are shown to be efficient in terms of logic elements, memory bits, and power consumption. Scalability of the methods to full high-definition (FHD) video processing is also demonstrated. In particular, FPN correction and SPN filtering of over 140 megapixels per second are feasible, in hard real time, irrespective of the degree of nonlinearity. © 2018 Society for Imaging Science and Technology.
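The paper's contribution is circuit generation; the sketch below only illustrates the underlying per-pixel arithmetic that such circuits implement. A two-parameter (offset/gain) FPN correction and a 3×3 median filter for SPN are standard choices, shown here with made-up calibration data as a software reference, not the paper's hardware method:

```python
import numpy as np

# Hypothetical per-pixel calibration data (offset and gain), as would be
# obtained from dark and flat-field calibration frames.
rng = np.random.default_rng(0)
offset = rng.normal(0.0, 2.0, size=(4, 4))
gain = rng.normal(1.0, 0.02, size=(4, 4))

def correct_fpn(raw, offset, gain):
    """Two-parameter fixed pattern noise correction.

    Inverts the per-pixel response model raw = scene * gain + offset.
    For a nonlinear CIS the same correction applies in the sensor's
    (monotonic) response domain.
    """
    return (raw - offset) / gain

def filter_spn(img):
    """3x3 median filter, a standard remedy for salt-and-pepper noise."""
    padded = np.pad(img, 1, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    return np.median(windows, axis=(-2, -1))
```

A hardware generator would unroll these per-pixel operations into fixed-point pipelines, which is where the hard real-time guarantee comes from.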
We suggest a method for sharpening an image or video stream without using convolution, as in unsharp masking, or deconvolution, as in constrained least-squares filtering. Instead, our technique is based on a local analysis of phase congruency and hence focuses on perceptually important details. The image is partitioned into overlapping tiles and processed tile by tile. We perform a Fourier transform on each tile and define a congruency for each component such that it is large when the component's neighbours are aligned with it and small otherwise. We then amplify weak components with high phase congruency and reduce strong components with low phase congruency. Following this method, we avoid strengthening the Fourier components corresponding to sharp edges, while amplifying those details that underwent a slight or moderate defocus blur. The tiles are then seamlessly stitched. As a result, the image sharpness is improved wherever perceptually important details are present.
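The per-tile step can be sketched as follows. This is a simplified stand-in, not the paper's method: the congruency measure below (phase agreement with the four spectral neighbours) and the `boost`/`cut` factors are illustrative assumptions, and the overlapping-tile stitching (e.g. with a window function) is omitted:

```python
import numpy as np

def sharpen_tile(tile, boost=1.5, cut=0.7):
    """Frequency-domain sharpening of one tile (simplified sketch).

    'Congruency' here is a crude proxy: a component counts as congruent
    when its phase agrees with the mean phase of its four spectral
    neighbours. Weak congruent components are amplified; strong
    incongruent ones are attenuated.
    """
    F = np.fft.fft2(tile)
    mag = np.abs(F)
    phase = np.angle(F)
    phasor = np.exp(1j * phase)
    # mean unit phasor of the 4 spectral neighbours
    neigh = (np.roll(phasor, 1, 0) + np.roll(phasor, -1, 0)
             + np.roll(phasor, 1, 1) + np.roll(phasor, -1, 1)) / 4.0
    congruency = np.abs(neigh) * np.cos(phase - np.angle(neigh))
    congruency = np.clip(congruency, 0.0, 1.0)
    strong = mag > np.median(mag)
    scale = np.where(~strong & (congruency > 0.5), boost, 1.0)
    scale = np.where(strong & (congruency < 0.5), cut, scale)
    return np.real(np.fft.ifft2(F * scale))
```

A flat tile passes through unchanged, which is the desired behaviour: with no detail present there is nothing to sharpen.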
In this paper we describe two general parametric, non-symmetric 3×3 gradient models. Equations for calculating the coefficients of the gradient matrices are presented. These models for generating gradients in the x-direction include the known gradient operators as well as new operators that can be used in graphics, computer vision, robotics, imaging systems, visual surveillance, object enhancement, edge detection, and classification. The presented approach can easily be extended to larger windows.
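To illustrate how a parametric kernel family subsumes the known operators, consider a symmetric one-parameter special case (the paper's two models are more general and non-symmetric; the fixed form and helper below are only a sketch). The centre-row weight `a` recovers Prewitt at a=1 and Sobel at a=2:

```python
import numpy as np

def x_gradient_kernel(a):
    """One-parameter 3x3 x-direction gradient kernel (illustrative).

    a=1 gives the Prewitt operator, a=2 the Sobel operator.
    """
    return np.array([[-1.0, 0.0, 1.0],
                     [  -a, 0.0,   a],
                     [-1.0, 0.0, 1.0]])

def convolve2d_valid(img, kernel):
    """Minimal 'valid' 2-D correlation, enough to apply a 3x3 kernel."""
    windows = np.lib.stride_tricks.sliding_window_view(img, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)
```

On a horizontal ramp that increases by 1 per column, the response is the constant 2(2 + a): 6 for Prewitt, 8 for Sobel.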