This document provides an overview of the 2018 Stereoscopic Displays and Applications conference (the 29th in the series) and an introduction to the conference proceedings.
Many people cannot see depth in stereoscopic displays. These individuals are often highly motivated to recover stereoscopic depth perception, but because binocular vision is complex, the loss of stereo has different causes in different people, so treatment cannot be uniform. We have created a virtual reality (VR) system for assessing and treating anomalies in binocular vision. The system is based on a systematic analysis of subsystems upon which stereoscopic vision depends: the ability to converge properly, appropriate regulation of suppression, extraction of disparity, use of disparity for depth perception and for vergence control, and combination of stereoscopic depth with other depth cues. Deficiency in any of these subsystems can cause stereoblindness or limit performance on tasks that require stereoscopic vision. Our system uses VR games to improve the function of specific, targeted subsystems.
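As a minimal illustration of this subsystem-targeted design, the sketch below maps each subsystem named above to a training module. The subsystem list comes from the abstract; the module names, scores, and threshold are hypothetical placeholders, not the actual system's API.

```python
from enum import Enum, auto

class Subsystem(Enum):
    """Binocular-vision subsystems assessed by the system (per the abstract)."""
    VERGENCE = auto()              # ability to converge the eyes properly
    SUPPRESSION = auto()           # appropriate regulation of suppression
    DISPARITY_EXTRACTION = auto()  # extraction of binocular disparity
    DISPARITY_USE = auto()         # use of disparity for depth and vergence control
    CUE_COMBINATION = auto()       # combining stereo depth with other depth cues

# Hypothetical mapping from a deficient subsystem to a targeted VR game;
# these names are illustrative, not those of the actual system.
TRAINING_GAMES = {
    Subsystem.VERGENCE: "convergence_pursuit",
    Subsystem.SUPPRESSION: "dichoptic_balance",
    Subsystem.DISPARITY_EXTRACTION: "disparity_detection",
    Subsystem.DISPARITY_USE: "depth_ordering",
    Subsystem.CUE_COMBINATION: "cue_conflict_reaching",
}

def plan_treatment(assessment: dict, threshold: float = 0.5) -> list:
    """Return targeted games for every subsystem scoring below threshold."""
    return [TRAINING_GAMES[s] for s, score in assessment.items() if score < threshold]
```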
In this research, we evaluated the user experience of motion representations in a virtual reality (VR) system that uses an electric car as a motion platform. The system presents physical motion by driving the car backward and forward, accompanied by matching visual motion in stereoscopic images shown on a head-mounted display (HMD). Image stimuli and car-based motion stimuli were prepared for three motion patterns, "starting", "stopping", and "landing", as experimental stimuli. In the experiment, pleasure and arousal were measured after each stimulus presentation using the Self-Assessment Manikin (SAM), a questionnaire about emotions. The results showed that the car-based motion stimulus increased pleasure for the "landing" pattern.
Mid-air imaging is promising for glasses-free mixed reality systems in which the viewer does not need to wear any special equipment. However, it is difficult to install such an optical system in a public space because conventional designs expose the optical components and obstruct the field of view. In this paper, we propose an optical system that forms mid-air images in front of building walls by installing the optical components overhead. It displays a vertical mid-air image that is easily visible in front of the viewer. Our contributions are formulating the system's field of view, measuring the luminance of the mid-air images it forms with a luminance meter, measuring the resolution of those images through psychophysical experiments, and selecting suitable materials for the system. We found that highly reflective materials, such as touch displays, transparent acrylic boards, and white porcelain tiles, are not suitable for forming visible mid-air images with our system. We prototyped two applications as proofs of concept and observed user reactions to evaluate the system.
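To make the field-of-view formulation concrete, here is a minimal 2D geometric sketch under assumptions not stated in the abstract: the mid-air image point is formed by an overhead imaging plate of finite aperture, and a viewer sees the point only inside the wedge bounded by the continuations of the rays through the aperture edges. The paper's actual formulation may differ.

```python
import math

def visible_zone_2d(img, edge_near, edge_far):
    """2D sketch of the viewing zone of one mid-air image point.

    img                 -- (x, y) of the image point in front of the wall
    edge_near, edge_far -- (x, y) of the two edges of the overhead
                           imaging plate's aperture (assumed geometry)

    Light converging on the image point through the plate continues past
    it, so the viewer sees the point only inside the wedge bounded by the
    continuations of the two edge rays. Angles are degrees from horizontal.
    """
    def angle(frm, to):
        return math.degrees(math.atan2(to[1] - frm[1], to[0] - frm[0]))
    return sorted((angle(edge_near, img), angle(edge_far, img)))

# Example: plate spanning 0.2 m to 0.7 m out from the wall at 2.4 m height,
# image point 0.5 m from the wall at eye height 1.5 m.
print(visible_zone_2d((0.5, 1.5), (0.2, 2.4), (0.7, 2.4)))
```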
In this paper, we present a refocus interface for setting the parameters used in diminished reality (DR)-based work-area visualization, together with a multiview camera-based rendering scheme. The refocus interface lets the user determine two planes: one sets a virtual window through which the user observes the background occluded by an object, and the other sets the background plane used for the subsequent background rendering. The background is rendered by considering the geometric and appearance relationships among the multiview cameras observing the scene. Our preliminary results demonstrate that the DR system can visualize the hidden background.
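One plausible building block for such multiview background rendering is the plane-induced homography, which maps pixels of an auxiliary camera onto the user's view for points lying on the chosen background plane. The sketch below assumes calibrated cameras and a known plane; it is not the paper's exact rendering scheme.

```python
import numpy as np

def plane_induced_homography(K_src, K_dst, R, t, n, d):
    """Homography warping source-camera pixels into the destination view
    for scene points on the background plane.

    Convention (an assumption for this sketch): the plane satisfies
    n . X = d in the source-camera frame, and X_dst = R @ X_src + t.
    For X on the plane, (np.outer(t, n) @ X) / d == t, giving the
    closed form below.
    """
    H = K_dst @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_src)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def warp_point(H, u, v):
    """Map one source pixel (u, v) through the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```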
Interest in 3D viewing has increased significantly over the last few years, with the vast majority of the focus on Virtual Reality (VR), a single-user form of Stereo 3D (S3D) with positional tracking, and on Augmented Reality (AR) devices. However, volumetric 3D displays and light field displays (LFDs) are also generating interest in operational and scientific analysis due to the unique capabilities of this class of hardware. The amount of available 3D data is also growing exponentially, including computational simulation results, medical data (e.g., computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound), computer-aided design (CAD) data, plenoptic camera data, synthetic aperture radar (SAR) data, light detection and ranging (LiDAR) data, 3D data from global positioning system (GPS) satellite scans, and numerous other sources. Much of this 3D data resides in the cloud, often at long distances from the application or user. While significant progress has been made in developing open standards for S3D devices, no standard has yet emerged that would allow 3D data streaming to devices such as LFDs, which display an assembly of simultaneous views for full parallax and multi-user support without the need for specialized eyewear. A 3D streaming standard is desired that will allow display of 3D scenes on any Streaming Media for Field of Light Displays (SMFoLD)-compliant device, including S3D, VR, AR, volumetric 3D, and LFD devices. With support from the Air Force Research Laboratory, Third Dimension Technologies (TDT), in collaboration with Oak Ridge National Laboratory (ORNL) and Insight Media, has initiated work on the development of an SMFoLD open standard.
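The core idea, that the stream carries a display-agnostic scene description and each device renders however many views its optics require, can be sketched as follows. All names and fields here are illustrative placeholders, not part of the draft standard.

```python
from dataclasses import dataclass

@dataclass
class ScenePacket:
    """Hypothetical display-agnostic scene description for one frame."""
    timestamp: float
    meshes: list       # geometry, in a display-independent format
    materials: list    # textures / shading parameters
    lights: list

@dataclass
class DisplayProfile:
    """What the receiving device needs: 2 views for S3D, 1 tracked view
    for VR, or N simultaneous views for a light field display."""
    num_views: int
    needs_eyewear: bool

def present(packet, profile, render_view):
    """Render one frame: the same packet drives any compliant display,
    which generates exactly the views its own optics require."""
    return [render_view(packet, view_index=i) for i in range(profile.num_views)]
```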
A group of tools has been developed to analyze the display performance of light-field displays based on a micro-lens array. The simulation tools are based on Snell's law and the characteristics of the human visual system, taking into account the optical aberrations of the lens arrays and the focusing of the eye. Important issues in light-field displays can be examined clearly before fabrication, such as perspective views, viewing zones, crosstalk, color moiré patterns, and the depth information in light-field near-eye displays. The developed tools therefore provide a useful way to verify the improvement offered by new techniques and to predict the final performance of a system set up with optimal specifications. To the best of our knowledge, these tools are the first approach to analyzing the performance of 3D displays that considers both optical aberrations and the characteristics of the human visual system.
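As an example of the kind of primitive such ray-tracing tools rest on, below is the standard vector form of Snell's law with a total-internal-reflection check; the actual tools additionally model lens aberrations and the eye, which are omitted here.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit ray direction d at a surface with unit normal n
    (pointing toward the incident side), for refractive indices n1 -> n2.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)                # cosine of the incidence angle
    sin2_t = eta**2 * (1.0 - cos_i**2)   # Snell: sin_t = eta * sin_i
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Example: a ray entering glass (n = 1.5) from air at 45 degrees.
d = np.array([np.sin(np.pi / 4), -np.cos(np.pi / 4), 0.0])
print(refract(d, np.array([0.0, 1.0, 0.0]), 1.0, 1.5))
```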
We propose a new optical system for a glasses-free spherical 3D display and confirm its validity in this paper. The proposed display is a light-field display, a method capable of showing 3D images to multiple viewers at once and of reproducing complex optical effects such as occlusion. The system also provides motion parallax in multiple directions (full parallax). It is based on time-division multiplexing: the optical system consists of a rotating, specially designed sphere-shaped flat-mirror array and a high-speed projector. The rotating mirror array reflects rays from the projector to various angles, and by properly controlling the intensity of the rays with the projector, a 3D image can be seen in the mirror array. Optical computer simulations using ray tracing produced the expected 3D image, confirming the viability of the system as a 3D display. A prototype was developed using 3D printing, and the feasibility of the system was confirmed. Additionally, a color display based on the same method was developed using three high-speed projectors.
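The time-division multiplexing principle can be sketched as a simple frame schedule: each projector frame coincides with a different mirror orientation and is therefore reflected to a different azimuth, so the frame shown at that instant must be the view rendered for that direction. All parameter values below are illustrative, not the prototype's specifications.

```python
def view_for_frame(frame_idx, projector_fps, rotation_hz):
    """Map a projector frame index to the viewing direction it serves.
    One projector frame per mirror orientation, so the number of distinct
    viewing directions per revolution equals the number of projector
    frames per revolution."""
    frames_per_rev = round(projector_fps / rotation_hz)
    view = frame_idx % frames_per_rev
    azimuth_deg = view * 360.0 / frames_per_rev
    return view, azimuth_deg

# Example: a 1000 fps projector over a 20 Hz rotation gives 50 frames,
# i.e. 50 distinct viewing directions, per revolution.
print(view_for_frame(7, projector_fps=1000, rotation_hz=20))  # (7, 50.4)
```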
A fast calculation method for a full-color holographic system with real objects captured by a depth camera is proposed. In this research, the depth and color information of the scene is acquired with a depth camera, and a point cloud model is reconstructed virtually. Because each point of the point cloud lies at the exact coordinates of one depth layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a computer-generated hologram (CGH). The computational complexity is reduced dramatically in comparison with conventional methods. Numerical simulation results confirm that the proposed method improves full-color CGH computational speed.
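A per-layer diffraction step of this kind is commonly implemented with the angular-spectrum method: one FFT, a transfer-function multiply, and one inverse FFT per depth layer. The sketch below shows that pattern for a single color channel; the paper's exact propagation kernel is not specified in the abstract.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z with the angular-spectrum
    method. field is a 2D complex array sampled at pitch dx."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies (cycles/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)    # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def layered_cgh(layers, depths, wavelength, dx):
    """Sum the contributions of all depth layers at the hologram plane;
    a full-color hologram repeats this per wavelength channel."""
    return sum(angular_spectrum_propagate(L, wavelength, dx, z)
               for L, z in zip(layers, depths))
```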