Light field displays have finite angular and spatial resolution, which limits the display's depth of field, that is, the depth range around the display screen in which the display can visualize a 3D scene at the maximum spatial resolution. This limitation causes aliasing artifacts in the parts of the scene that lie outside that range, resulting in a distorted appearance. The aliasing artifacts can be mitigated by properly blurring those parts, with the blurring preferably done at the rendering stage. Although methods exist for rendering a single view with the correct depth of field, using them to render a large light field is computationally expensive. In this paper we propose a method for simultaneously rendering multiple adjacent views of a light field, each with the required depth of field. By means of examples, we show that the proposed method can render a desired light field several times faster than single-view rendering methods, without compromising the overall rendered quality.
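As a rough illustration of the depth-of-field constraint described above, the sketch below models the smallest renderable feature as growing linearly with distance from the screen plane once it exceeds the pixel pitch. This is a generic first-order model, not the paper's formulation; pixel_pitch_mm and angular_pitch_rad are illustrative parameters, not values from the paper.

```python
import numpy as np

def blur_diameter(depth_mm, pixel_pitch_mm=0.5, angular_pitch_rad=0.01):
    """Smallest renderable feature at distance depth_mm from the screen.

    Assumed linear model: inside the depth of field the blur is bounded
    by the pixel pitch; outside it, the blur grows with |z| at a rate
    set by the display's angular ray pitch.
    """
    return np.maximum(pixel_pitch_mm, np.abs(depth_mm) * angular_pitch_rad)

# Depth of field under this model: |z| <= pixel_pitch / angular_pitch = 50 mm.
depths = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
print(blur_diameter(depths))  # [0.5 0.5 0.5 1.  2. ] (mm)
```

Under this model, content rendered beyond the 50 mm range would receive a blur kernel whose diameter matches blur_diameter, which is the kind of depth-dependent filtering the abstract refers to.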
In this contribution, an objective metric for the quality evaluation of light field images is presented. The method exploits the depth information of a scene, which is captured with high accuracy by the light field imaging system. A depth map is estimated from both the original and the impaired light field data. A similarity measure is then applied, and a mapping is performed to link the depth distortion to the perceived quality. Experimental tests, performed by comparing state-of-the-art metrics with the proposed one, demonstrate the effectiveness of the proposed metric.
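The pipeline of the metric (estimate depth maps, compare them, map distortion to quality) can be sketched as follows. The similarity term and the logistic mapping below are only placeholders of the kind commonly used in quality assessment; the paper's actual measure, mapping, and fitted parameters are not specified here, and the depth-estimation step itself is omitted.

```python
import numpy as np

def depth_similarity(depth_ref, depth_imp, c=1e-3):
    """SSIM-style similarity between reference and impaired depth maps
    (a placeholder; the paper's actual measure may differ)."""
    num = 2.0 * depth_ref * depth_imp + c
    den = depth_ref ** 2 + depth_imp ** 2 + c
    return float(np.mean(num / den))

def quality_score(sim, a=5.0, b=10.0, s0=0.9):
    """Hypothetical logistic mapping from depth similarity to predicted
    quality; a, b, s0 would be fitted against subjective scores."""
    return a / (1.0 + np.exp(-b * (sim - s0)))

# Stand-ins for depth maps estimated from the original and impaired data.
rng = np.random.default_rng(0)
depth_ref = rng.random((256, 256))
depth_imp = depth_ref + 0.05 * rng.standard_normal((256, 256))
print(quality_score(depth_similarity(depth_ref, depth_imp)))
```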
A light field camera in the spatial-multiplexing configuration enables the capture of four-dimensional data on a two-dimensional sensor: it creates a 2D array of 2D images on a single sensor. Once certain conditions are fulfilled, it is possible to reconstruct depth with a single light field camera. Such a measurement method has several advantages as well as several disadvantages; currently, the most important problems are the narrow measurement area and low light intensity. To overcome these obstacles, we propose an augmented focused light field camera model that includes vignetting and image-overlap effects. The model is based on a 2D ray-tracing technique with first-order optics and the thin-lens approximation. In this article, we state several properties that should be sufficient for designing a light field optical system for depth reconstruction. This allows describing any light field system configuration comprising a main lens, microlens arrays, stops, and a sensor.
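First-order (paraxial) optics with the thin-lens approximation is conventionally expressed through ray-transfer (ABCD) matrices, which is one straightforward way to realize the kind of 2D ray tracing such a model relies on. The sketch below chains free-space propagation and thin-lens matrices for a hypothetical focused plenoptic layout; all focal lengths and distances are illustrative, not values from the article, and vignetting and image overlap (ray clipping at stops and microlens apertures) are not modeled.

```python
import numpy as np

def free_space(d):
    """Paraxial propagation over distance d: y' = y + d*u, u' = u."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Thin-lens refraction with focal length f: u' = u - y/f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Hypothetical focused plenoptic layout (distances in mm, illustrative only):
# main lens (f = 50), 60 mm gap to a microlens (f = 2), 2.5 mm to the sensor.
system = free_space(2.5) @ thin_lens(2.0) @ free_space(60.0) @ thin_lens(50.0)

# A ray is the state vector (height y, angle u); trace one off-axis ray.
ray_in = np.array([1.0, 0.0])
print(system @ ray_in)  # height and angle at the sensor plane
```

Because each element contributes one 2x2 matrix, any configuration of main lens, microlens arrays, stops, and sensor reduces to a matrix product, which matches the system-level description the abstract aims at.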