Modern digital cameras have limited dynamic range, which prevents them from capturing the full range of illumination in natural scenes and therefore from recording all of the visible detail. Over the last two decades, researchers have developed algorithms for high-dynamic-range (HDR) imaging that capture a wider range of illumination and therefore allow us to reconstruct richer images of natural scenes. The most practical of these methods are stack-based approaches, which take a set of images at different exposure levels and then merge them together to form the final HDR result. However, these algorithms produce ghost-like artifacts when the scene contains motion or the camera is not perfectly static. In this paper, we present an overview of state-of-the-art deghosting algorithms for stack-based HDR imaging and discuss the tradeoffs of each.
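The stack-based merge described above can be sketched as a weighted average of per-image radiance estimates. This is a minimal illustration, not any specific algorithm from the surveyed literature; it assumes linearized images in [0, 1] with known exposure times, and it deliberately omits deghosting, so scene or camera motion would produce exactly the ghost artifacts the abstract discusses.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge an exposure stack into an HDR radiance map.

    A minimal sketch of a classic weighted-average merge, assuming
    `images` are linearized float arrays in [0, 1] with matching shapes
    and `exposure_times` are the corresponding shutter times in seconds.
    No deghosting is performed.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weight: trust mid-tone pixels, distrust near-black/near-white.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        numerator += w * (img / t)   # each image's radiance estimate
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)
```

Well-exposed pixels dominate the average, so each scene point is reconstructed mainly from the exposures that recorded it without clipping.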
In this paper we present multiple methods to augment a graph-based foreground detection scheme that uses the smallest nonzero eigenvector of the image graph to compute saliency scores. First, we present an augmented background prior that improves the foreground segmentation results. We then present and demonstrate three complementary methods that allow detection of foregrounds containing multiple subjects. The first method performs an iterative segmentation of the image to "pull out" the various salient objects. The second method uses a higher-dimensional embedding of the image graph to estimate the saliency scores and extract multiple salient objects. The last method uses a proposed heuristic based on eigenvalue differences to construct a saliency map from a predetermined number of the smallest eigenvectors. Experimental results show that the proposed methods extract multiple foreground subjects more successfully than the original method.
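The core spectral step underlying the scheme above can be sketched as follows. This is a generic illustration, assuming only a symmetric nonnegative affinity matrix over image regions (e.g., superpixels); the paper's background prior, iterative extraction, and multi-eigenvector heuristic are not reproduced here.

```python
import numpy as np

def fiedler_saliency(W):
    """Saliency scores from the smallest nonzero eigenvector of the Laplacian.

    A minimal sketch, assuming W is a symmetric nonnegative affinity
    matrix over image regions. The sign and magnitude of this
    eigenvector (the Fiedler vector) separate foreground from
    background; multi-object extensions use additional eigenvectors.
    """
    d = W.sum(axis=1)
    L = np.diag(d) - W               # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    # Skip (near-)zero eigenvalues, one per connected component.
    idx = np.argmax(vals > 1e-10)
    return vecs[:, idx]
```

On a graph with two weakly connected clusters, the returned vector takes opposite signs on the two clusters, which is what makes thresholding it into a foreground/background segmentation possible.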
Time domain continuous imaging (TDCI) centers on the capture and representation of time-varying image data not as a series of frames, but as a compressed continuous waveform per pixel. A high-dynamic-range (HDR) image can be computationally synthesized from TDCI data to represent any virtual exposure interval covered by the waveforms, thus allowing both the exposure start time and the shutter speed to be selected arbitrarily after capture. This also enables extraction of video with arbitrary framerate and shutter angle. This paper presents the design, and discusses the performance, of the first complete, fully open-source infrastructure supporting experimental use of TDCI: TIK (Temporal Imaging from Kentucky, or Temporal Image Kontainer). The system not only provides for processing TDCI .tik files, but also allows conventional video files and still image sequences to be converted into TDCI .tik files.
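The virtual-exposure idea can be sketched for a single pixel. This is an illustrative model only, assuming each pixel's waveform is stored piecewise-constant as sorted change times with the radiance value holding from each time until the next; the actual .tik representation and compression are not modeled here.

```python
def virtual_exposure(times, values, t0, t1):
    """Integrate one pixel's piecewise-constant waveform over [t0, t1].

    A minimal sketch of TDCI-style virtual exposure, assuming `times`
    are sorted change times (the first at or before t0) and `values[i]`
    is the radiance holding from times[i] until times[i+1]. Both the
    exposure start t0 and the duration t1 - t0 are chosen after capture.
    """
    total = 0.0
    for i, v in enumerate(values):
        seg_start = times[i]
        seg_end = times[i + 1] if i + 1 < len(times) else t1
        lo, hi = max(seg_start, t0), min(seg_end, t1)
        if hi > lo:
            total += v * (hi - lo)   # radiance * overlap duration
    return total / (t1 - t0)         # mean radiance over the window
```

Because the waveform is continuous in time, the same data can be re-integrated over any interval, which is what makes arbitrary framerates and shutter angles possible after the fact.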
The goal in photography is generally the construction of a model of scene appearance. Unfortunately, statistical variations introduced by photon shot noise and other noise sources cause errors in the raw value reported for each pixel sample. Rather than simply accepting those values as the best raw representation, the current work treats them as initial estimates of the correct values and computes an error model for each pixel's value. The value error models for all pixels in an image are then used to drive a type of texture synthesis that refines the pixel value estimates, potentially increasing both the accuracy and the precision of each value. Each refined raw pixel value is synthesized from the value estimates of a plurality of pixels with overlapping error bounds and similar context within the same image. The error modeling and texture synthesis algorithms are implemented in and evaluated using KREMY (KentuckY Raw Error Modeler, pronounced "creamy"), a free software tool created for this purpose.
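The refinement step can be sketched in simplified form. This is a crude stand-in for the approach described above, assuming per-sample values with symmetric error bounds; it pools samples purely by interval overlap and omits the context (neighborhood) matching that the actual texture synthesis in KREMY performs.

```python
import numpy as np

def refine_values(values, sigma):
    """Refine noisy sample estimates using per-sample error bounds.

    A minimal sketch, assuming `values` is a 1-D array of raw samples
    and `sigma` their per-sample error bounds. Each value is replaced
    by the mean of all values whose error intervals overlap its own;
    the real method additionally requires similar local context.
    """
    lo = values - sigma
    hi = values + sigma
    refined = np.empty_like(values, dtype=np.float64)
    for i in range(len(values)):
        # Samples whose intervals [lo, hi] overlap sample i's interval.
        overlap = (lo <= hi[i]) & (hi >= lo[i])
        refined[i] = values[overlap].mean()
    return refined
```

Averaging only over samples whose error intervals overlap keeps genuinely different values apart while pooling samples that plausibly measure the same underlying value, which is how both accuracy and precision can improve.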