In this research, we experimentally examine how enhancing the emotional arousal of 2D images affects time perception. Images are converted to 3D, and their disparity is additionally modified. The experimental results show that time estimates are longer when 3D and disparity-modified stimuli are presented for longer durations, and this tendency is significant for images classified as evoking high arousal.
Large-area displays with especially sharp image quality are achieved by gradually increasing pixel density. This trend is particularly relevant to the field of autostereoscopic 3D displays. However, it is hardly possible to align the optical grid precisely with the pixel panel because of the micrometer-scale structures involved. This paper presents a method, adapted from existing techniques, that locally corrects alignment failures by reallocating the image information in fine steps. The method is applicable to single-user, multiview, and integral-imaging displays, and it enables adjustment to observer positions. Our correction method detects local misalignments using a test image that color-codes the individual stereo channels along an HSV color circle. The test image is captured and measured by a photo-spectrometer at a preselected position. We also demonstrate the generation of a resulting correction map, which contains the position, the offset (shift value), and the shift direction of the affected subpixels. The effectiveness of this correction method was verified by measurements, which also determined its working range.
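As a rough illustration of how such a correction map could be applied, the Python sketch below shifts affected subpixels according to per-entry position, offset, and direction. The map layout (row, column, channel, shift, direction) is a hypothetical simplification for illustration; the paper does not specify its actual data format.

```python
import numpy as np

def apply_correction_map(image, correction_map):
    """Shift affected subpixels according to a correction map.

    image:          H x W x 3 array of R, G, B subpixel values.
    correction_map: iterable of (row, col, channel, shift, direction)
                    entries, direction being +1 or -1 along the horizontal
                    axis. (Hypothetical layout, not the paper's format.)
    """
    corrected = image.copy()
    w = image.shape[1]
    for row, col, channel, shift, direction in correction_map:
        src = col + direction * shift  # column the content should come from
        if 0 <= src < w:
            corrected[row, col, channel] = image[row, src, channel]
    return corrected
```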
This paper proposes a new hole filling method that uses 3D geometric transformation to synthesize an image at a virtual viewpoint. Regions disoccluded at the virtual viewpoint appear as holes in the synthesized image, and these holes should be filled in a visually plausible manner. In most previous work, hole regions were filled with translated patches (labels) extracted from the non-hole (source) region. In many cases, however, it is difficult to find the most similar labels because of texture irregularity and perspective distortion at wide viewing angles. In this paper, we propose a new hole filling method in which hole regions are filled with the best labels found by 3D geometric transformation and an associated nearest neighbor search. Experimental results show that the proposed method produces structurally consistent results with high computational efficiency, outperforming existing hole filling methods.
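A minimal sketch of the label-search idea, with nearest-neighbor resizing standing in for the paper's full 3D geometric transformation: candidate source patches, possibly extracted at several scales to emulate perspective change, are warped to the target size and scored only over the known pixels of a hole-boundary patch. The function names and SSD cost are illustrative assumptions.

```python
import numpy as np

def resize_nn(patch, out_h, out_w):
    """Nearest-neighbor resize (a crude stand-in for a geometric warp)."""
    h, w = patch.shape[:2]
    rows = np.minimum((np.arange(out_h) * h) // out_h, h - 1)
    cols = np.minimum((np.arange(out_w) * w) // out_w, w - 1)
    return patch[rows[:, None], cols]

def best_label(target, known_mask, candidates):
    """Return the candidate source patch that best matches the target.

    target:     th x tw x 3 patch straddling the hole boundary
    known_mask: th x tw boolean array, True where target pixels are valid
    candidates: iterable of source patches, possibly of differing sizes
    """
    th, tw = target.shape[:2]
    best, best_cost = None, np.inf
    for cand in candidates:
        warped = resize_nn(cand, th, tw)
        # Sum-of-squared-differences over the known pixels only.
        diff = (warped.astype(np.float32) - target.astype(np.float32))[known_mask]
        cost = float(np.sum(diff ** 2))
        if cost < best_cost:
            best, best_cost = warped, cost
    return best
```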
The increasing popularity of different kinds of 3D displays, such as stereoscopic 3D monitors, auto-stereoscopic displays, and head-mounted displays (HMDs), has made visual discomfort one of the major concerns in the 3D industry. Previous studies on visual discomfort have considered various disparities and motions in 3D videos to identify vergence-accommodation conflicts. In this paper, we measure visual discomfort that occurs while viewing 3D videos with various contrast changes, using perceived-symptom questionnaires, measured near point of convergence, eye-blink rate, and saccadic movements. We compare visual discomfort across different displays: a stereoscopic 3D (s3D) display, an auto-stereoscopic display, and an HMD. From our experimental results, we conclude that visual discomfort increases when subjects watch 3D videos on auto-stereoscopic displays. In addition, brighter videos caused more visual discomfort than darker videos.
Various studies have identified that the main factors causing visual discomfort when watching 3D video are vergence-accommodation conflicts, contrast changes, and other defects of 3D displays. In this paper, we propose a new method to lessen visual discomfort by reducing the blue-light components of 3D video, to which human eyes are highly sensitive. In our experiments, 20 participants (9 male, 11 female) each watched four 15-minute 3D videos in both original and blue-light-reduced versions. We administered perceived-symptom questionnaires before and after viewing, and measured the participants' eye-blink rates, saccadic movements, and near point of convergence (NPC). Our experimental results demonstrated that participants' eye-blink rates were lower while watching the blue-light-reduced videos, whereas their saccadic movements were higher than with the original videos. Since eye-blink rate typically increases when the eyes become dry, and eye movements slow with tiredness, the blue-light-reduced videos caused less visual discomfort than the originals. Based on the NPC measurements and the perceived-symptom questionnaires, we confirmed that viewers indeed feel more comfortable watching the blue-light-reduced videos than the original ones.
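The blue-light reduction itself can be pictured as a per-frame channel attenuation; the sketch below assumes RGB frames and an illustrative gain of 0.7, since the paper does not state the exact attenuation used.

```python
import numpy as np

def reduce_blue(frame, factor=0.7):
    """Attenuate the blue channel of an RGB video frame.

    frame:  H x W x 3 uint8 array in RGB channel order
    factor: blue-channel gain in [0, 1]; 0.7 is an illustrative value,
            not the attenuation level used in the paper.
    """
    out = frame.astype(np.float32)
    out[..., 2] *= factor  # index 2 = blue in RGB order
    return np.clip(out, 0, 255).astype(np.uint8)
```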
A novel method for optically realizing a computer-generated cylindrical hologram using a high-speed display device and a spinning screen is proposed. After the entire computer-generated cylindrical hologram is generated, sub-holograms are sampled at predetermined viewpoints along the horizontal direction at a given angular step. In the experiment, the laser beam is reflected by a high-speed digital micromirror device (DMD) onto the rotating screen while the DMD displays the sub-holograms in the generated sequence. The three-dimensional holographic images are reconstructed on the rotating screen and stitched to one another along the horizontal direction, with the rotating screen synchronized to the DMD. Finally, the horizontally assembled 3D image is reconstructed over a 360-degree viewing zone with full human depth cues, so that observers can see the displayed 3D image from anywhere around the display.
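The synchronization constraint can be made concrete with simple arithmetic: the number of sub-holograms follows from the angular step, and the DMD frame rate must equal that count times the screen's rotation rate. The parameter values below are illustrative, not those of the reported experiment.

```python
def display_schedule(angular_step_deg=0.36, rotation_hz=30.0):
    """Relate angular sampling of sub-holograms to the DMD frame rate
    needed to keep the spinning screen synchronized.

    Both default parameters are illustrative assumptions.
    """
    n_subholograms = int(round(360.0 / angular_step_deg))  # one per viewpoint
    frame_rate_hz = n_subholograms * rotation_hz           # DMD frames per second
    dwell_us = 1e6 / frame_rate_hz                         # display time per sub-hologram
    return n_subholograms, frame_rate_hz, dwell_us

n, fps, dwell = display_schedule()
print(f"{n} sub-holograms, DMD at {fps:.0f} Hz, {dwell:.1f} us per frame")
```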
Predicting scene depth (or geometric information) from a single monocular image is a challenging task. This paper addresses this challenging and essentially ill-posed problem by regression on samples for which the depth is known. We first retrieve semantically similar RGB-depth pairs from datasets using deep convolutional activation features. We show that this framework provides a richer foundation for depth estimation than existing hand-crafted representations. An initial estimate is then refined by block matching and robust patch regression, which assigns perceptually appropriate depth values to the input query in accordance with a data-driven depth prior. A final post-processing step aligns the depth map with RGB discontinuities, yielding visually plausible results. Experiments on the Make3D and NYU RGB-D datasets show competitive results compared with recent state-of-the-art methods.
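A minimal sketch of the retrieval stage, assuming precomputed CNN features and depth maps: gallery images are ranked by cosine similarity to the query feature, and the top-k depths are median-fused into a data-driven prior. Median fusion is a simple robust stand-in for the paper's block matching and patch regression stages; all names are hypothetical.

```python
import numpy as np

def depth_prior(query_feat, gallery_feats, gallery_depths, k=5):
    """Retrieve the k RGB-D exemplars most similar to the query and fuse
    their depth maps into a data-driven prior.

    query_feat:     D-dim CNN activation feature of the query image
    gallery_feats:  N x D matrix of features for the RGB-D dataset
    gallery_depths: N x H x W stack of depth maps
    """
    # Cosine similarity between the query and every gallery image.
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q
    top_k = np.argsort(-sims)[:k]
    # Robust per-pixel fusion of the retrieved depth maps.
    return np.median(gallery_depths[top_k], axis=0)
```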
Three-dimensional (3D) displays are becoming increasingly popular in many fields. However, visual fatigue is one of the critical factors impeding the wide application of 3D technology. Although many studies have investigated 3D visual fatigue, few of them are based on continuous viewing of 3D content. In this paper, we propose a method to evaluate visual fatigue through subjective scoring and objective measurement of physiological parameters during continuous viewing of a 3D/2D movie. During viewing, we record objective and subjective indicators, including heart rate (HR), blink frequency (BF), percentage of eyelid closure over the pupil over time (PERCLOS), and a subjective score (SS). Before and after viewing, VRT, PMA, and questionnaires are measured. Experimental results showed that the subjective scores and objective indicators of visual fatigue increased gradually with viewing time, despite fluctuations. The symptoms of visual fatigue were generally more severe after viewing the 3D movie than the 2D one. Based on these results, a model was built to predict visual fatigue from HR and BF during continuous 3D viewing.
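If one assumes a linear relation, a fatigue predictor from HR and BF can be fit by ordinary least squares, as sketched below. The linear model form and the variable names are assumptions for illustration; the abstract does not specify the model class used.

```python
import numpy as np

def fit_fatigue_model(hr, bf, fatigue):
    """Least-squares fit of fatigue = a*HR + b*BF + c.

    hr, bf, fatigue: 1-D arrays sampled over viewing time.
    The linear form is an illustrative assumption.
    """
    X = np.column_stack([hr, bf, np.ones_like(hr)])
    coeffs, *_ = np.linalg.lstsq(X, fatigue, rcond=None)
    return coeffs  # (a, b, c)

def predict_fatigue(coeffs, hr, bf):
    a, b, c = coeffs
    return a * hr + b * bf + c
```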
The MARquette Visualization Lab (MARVL) contains a cluster-based, large-scale immersive visualization system for displaying and interacting with stereoscopic content that has been filmed or computer-generated. MARVL uses head-mounted and augmented-reality devices as portable means of displaying this unique content. Traditional approaches to video playback on a plane fall short in larger immersive field-of-view (FOV) systems such as those used by MARVL. We therefore developed an approach to playback of stereoscopic videos in a 3D world where depth is determined by the video content. Objects in the 3D world receive the same video texture, and computational efficiency is gained by using UV texture offsets to address the opposing halves of a frame-packed 3D video. Left and right cameras are configured in Unity via culling masks so that each shows only the texture for the corresponding eye. The camera configuration is then constructed through code at runtime, using MiddleVR for Unity 4 and natively in Unity 5. This approach becomes more difficult with multiple cameras and maintaining stereo alignment across the full FOV, but it has been used successfully in MARVL for applications including employee wellness initiatives, interactivity with high-performance computing results, and navigation within the physical world.
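The UV-offset trick can be sketched independently of Unity: each eye samples a different half of the same frame-packed texture, so the video is decoded once and only the texture coordinates differ per eye. The function name and packing layouts below are illustrative, not MARVL's actual configuration.

```python
def eye_uv_rect(eye, packing="top_bottom"):
    """Return the (u, v, width, height) UV rectangle selecting one eye's
    half of a frame-packed stereo video texture.

    Names and layouts are illustrative assumptions; both eyes address the
    same texture, only the offsets differ.
    """
    if packing == "top_bottom":
        # Left eye in the top half, right eye in the bottom half.
        return (0.0, 0.5, 1.0, 0.5) if eye == "left" else (0.0, 0.0, 1.0, 0.5)
    if packing == "side_by_side":
        return (0.0, 0.0, 0.5, 1.0) if eye == "left" else (0.5, 0.0, 0.5, 1.0)
    raise ValueError(f"unknown packing: {packing}")
```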
This study investigated the effect of the frame design of a simple smartphone HMD on stereoscopic vision and considered the design requirements for a comfortable viewing environment. We focused mainly on the lens spacing used for screen enlargement and extension of the focal length. To investigate differences in the fusional limit attributable to lens spacing, three HMDs with left/right eye-lens spacings of 57.5, 60, and 62.5 mm were used. When the three types of HMD were compared with the display, the fusional limits in both the positive and negative directions were closer for all HMDs than for the display. In particular, the limit in the 62.5 mm condition was shifted significantly closer compared with the control condition. The results show a tendency for the fusional range to narrow in a simple HMD.