In eye-tracking-based 3D displays, system latency from eye tracking and 3D rendering causes an error between the actual eye position and the tracked position that is proportional to the viewer's movement. This discrepancy makes viewers see 3D content from a non-optimal position, thereby increasing 3D crosstalk and degrading the quality of 3D images under dynamic viewing conditions. In this paper, we investigate the latency issue, distinguish each source of system latency, and study the display margin of eye-tracking-based 3D displays. To reduce 3D crosstalk during viewer motion, we propose a motion compensation method that predicts the viewer's eye position. The effectiveness of our motion compensation method is validated by experiments using a previously implemented 3D display prototype. The results show that the prediction error decreased to 24.6%, meaning the accuracy of the eye pupil position became four times higher, and that crosstalk was reduced to a level similar to that of a system with one quarter of the latency.
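The core idea of latency compensation is to extrapolate the eye position forward by the known system latency so that the rendered views match where the eye will be, not where it was. The sketch below is a minimal illustration of this idea, assuming a simple constant-velocity extrapolation; the abstract does not specify the authors' actual predictor, and the function name, inputs, and numbers here are illustrative only.

```python
import numpy as np

def predict_eye_position(positions, timestamps, latency_s):
    """Extrapolate the eye position over the known system latency.

    positions  : (N, 3) array of recent tracked eye positions (mm)
    timestamps : (N,) array of capture times (s), monotonically increasing
    latency_s  : total eye-tracking + rendering latency to compensate (s)
    """
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    if len(positions) < 2:
        return positions[-1]  # not enough history: fall back to last sample
    # Estimate instantaneous velocity from the two most recent samples.
    dt = timestamps[-1] - timestamps[-2]
    velocity = (positions[-1] - positions[-2]) / dt
    # Constant-velocity extrapolation (an assumption, not the paper's model)
    # to where the eye should be when the rendered frame reaches the display.
    return positions[-1] + velocity * latency_s

# Example: viewer moving laterally at ~100 mm/s, 50 ms total latency.
history = [[0.0, 0.0, 600.0], [5.0, 0.0, 600.0]]   # mm
times = [0.00, 0.05]                                # s
print(predict_eye_position(history, times, latency_s=0.05))
```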
In this article, the authors present an image processing method to reduce three-dimensional (3D) crosstalk for eye-tracking-based 3D displays. Specifically, they considered 3D pixel crosstalk and offset crosstalk and applied different approaches according to their characteristics. For 3D pixel crosstalk, which depends on the viewer's relative location, they proposed an output pixel value weighting scheme based on the viewer's eye position; for offset crosstalk, they subtracted the luminance of the crosstalk components according to display crosstalk levels measured in advance. Through simulations and experiments using the 3D display prototypes, the authors evaluated the effectiveness of the proposed method.
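To make the two compensation strategies concrete, the following is a minimal sketch, assuming a simple linear crosstalk model and a triangular position-dependent weighting; the function names and the specific weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def compensate_offset_crosstalk(left, right, crosstalk_level):
    """Pre-subtract a measured, position-independent (offset) crosstalk term.

    left, right     : float images in [0, 1] (linear luminance)
    crosstalk_level : scalar leakage ratio measured on the display in advance
    """
    # Assumed model: each eye perceives its own view plus a fixed fraction of
    # the other view; subtracting that fraction beforehand cancels most of it
    # (clipped at 0, since negative luminance cannot be displayed).
    left_out = np.clip(left - crosstalk_level * right, 0.0, 1.0)
    right_out = np.clip(right - crosstalk_level * left, 0.0, 1.0)
    return left_out, right_out

def weight_by_eye_position(pixel_views, eye_x, optimal_x, viewing_zone_width):
    """Weight candidate values of one 3D pixel by the viewer's eye position.

    pixel_views        : (V,) candidate output values of one 3D pixel, one per view
    eye_x              : tracked horizontal eye position (mm)
    optimal_x          : (V,) eye positions at which each view is seen ideally (mm)
    viewing_zone_width : width of one viewing zone (mm)
    """
    # Views whose optimal position lies closer to the actual eye position
    # contribute more; a triangular falloff is assumed here for illustration.
    d = np.abs(np.asarray(optimal_x, dtype=float) - eye_x)
    w = np.clip(1.0 - d / viewing_zone_width, 0.0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    return float(np.dot(w, pixel_views))
```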
Many current far-eye 3D displays are incapable of providing accurate out-of-focus blur on the retina and hence cause discomfort with prolonged use. This out-of-focus blur rendered on the retina is an important stimulus for the accommodation response of the eye and hence is one of the major depth cues. Properly designed integral displays can render this out-of-focus blur accurately. In this paper, we report a rigorous simulation study of far-eye integral displays to study their ability to render depth and out-of-focus blur on the retina. The beam propagation simulation includes the effects of diffraction from light propagation through free space, the lenslet apertures, and the eye pupil to calculate the spot sizes formed on the retina by multiple views entering the eye. By comparing these with the spot sizes produced by real objects, and taking into account the depth of field and spatial resolution of the eye, we determine the minimum number of views needed in the pupil for accurate retinal blur; in other words, we determine the minimum pixel pitch needed for the screen of a given integral display configuration. We do this for integral displays with varying pixel sizes, lenslet parameters, and viewing distances to confirm our results. One of the key results of the study is that roughly 10 views are needed in a 4 mm pupil to generate out-of-focus blur similar to the real world. These 10 views are along one dimension only, and the out-of-focus blur is analyzed only for the fovea. We also note that about 20 views in one dimension in a 4 mm pupil would be more than sufficient for accurate out-of-focus blur on the fovea. Although 2-3 views in the pupil may start triggering the accommodation response, as shown previously, a much higher density of views is needed to mimic real-world blur.
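The link between view count in the pupil and screen pixel pitch can be seen from first-order geometry alone: adjacent pixels behind one lenslet produce views separated at the pupil plane by roughly the pixel pitch magnified by the ratio of viewing distance to lenslet gap. The sketch below is a geometric-optics estimate only, ignoring the diffraction effects that the paper's beam propagation simulation includes; the parameter names and example numbers are illustrative assumptions, not the configurations simulated in the paper.

```python
def views_in_pupil(pixel_pitch_mm, lenslet_gap_mm, viewing_distance_mm,
                   pupil_diameter_mm=4.0):
    """Geometric-optics estimate of how many views enter the eye pupil.

    pixel_pitch_mm      : screen pixel pitch behind each lenslet
    lenslet_gap_mm      : gap between screen and lenslet array (~focal length)
    viewing_distance_mm : distance from lenslet array to the eye
    pupil_diameter_mm   : eye pupil diameter (4 mm, as in the study)
    """
    # Adjacent pixels behind a lenslet are separated in angle by roughly
    # pixel_pitch / gap, i.e. by this distance at the pupil plane:
    view_spacing_mm = pixel_pitch_mm * viewing_distance_mm / lenslet_gap_mm
    return pupil_diameter_mm / view_spacing_mm

def max_pixel_pitch_for_views(required_views, lenslet_gap_mm,
                              viewing_distance_mm, pupil_diameter_mm=4.0):
    """Largest pixel pitch that still places `required_views` in the pupil."""
    return pupil_diameter_mm * lenslet_gap_mm / (required_views * viewing_distance_mm)

# Example (illustrative numbers): 3 mm lenslet gap, 500 mm viewing distance,
# 10 views required across a 4 mm pupil -> pixel pitch of about 2.4 um.
print(max_pixel_pitch_for_views(10, lenslet_gap_mm=3.0, viewing_distance_mm=500.0))
```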