Pages 554-1 - 554-6,  © Society for Imaging Science and Technology 2018
Digital Library: EI
Published Online: January  2018
Pages 476-1 - 476-7,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 4

This document provides an overview of the 2018 Stereoscopic Displays and Applications conference (the 29th in the series) and an introduction to the conference proceedings.

Pages 109-1 - 109-6,  © Society for Imaging Science and Technology 2018

Many people cannot see depth in stereoscopic displays. These individuals are often highly motivated to recover stereoscopic depth perception, but because binocular vision is complex, the loss of stereo has different causes in different people, so treatment cannot be uniform. We have created a virtual reality (VR) system for assessing and treating anomalies in binocular vision. The system is based on a systematic analysis of subsystems upon which stereoscopic vision depends: the ability to converge properly, appropriate regulation of suppression, extraction of disparity, use of disparity for depth perception and for vergence control, and combination of stereoscopic depth with other depth cues. Deficiency in any of these subsystems can cause stereoblindness or limit performance on tasks that require stereoscopic vision. Our system uses VR games to improve the function of specific, targeted subsystems.

Pages 110-1 - 110-8,  © Society for Imaging Science and Technology 2018

In this research, we used an electric car as a motion platform to evaluate the user experience of motion representations in a virtual reality (VR) system. The system represents physical motion by driving the car backward and forward, accompanied by visual motion in stereoscopic images shown on a head-mounted display (HMD). Image stimuli and car-based motion stimuli were prepared for three motion patterns, "starting", "stopping", and "landing", as experimental stimuli. In the experiment, pleasure and arousal were measured after each stimulus presentation using the Self-Assessment Manikin (SAM), a questionnaire about emotions. Results showed that the car-based motion stimulus increased pleasure for the "landing" pattern.

Pages 111-1 - 111-9,  © Society for Imaging Science and Technology 2018

Mid-air imaging is promising for glasses-free mixed reality systems in which the viewer does not need to wear any special equipment. However, it is difficult to install such an optical system in a public space because conventional designs expose the optical components and disrupt the field of view. In this paper, we propose an optical system that can form mid-air images in front of building walls by installing the optical components overhead. It displays a vertical mid-air image that is easily visible in front of the viewer. Our contributions are a formulation of the system's field of view, luminance measurements of the mid-air images formed with our system using a luminance meter, psychophysical experiments measuring the resolution of those images, and the selection of suitable materials for the system. We found that highly reflective materials, such as touch displays, transparent acrylic boards, and white porcelain tiles, are not suitable for our system to form visible mid-air images. We prototyped two applications as a proof of concept and observed user reactions to evaluate our system.

Pages 112-1 - 112-4,  © Society for Imaging Science and Technology 2018

In this paper, we present a refocus interface to set the parameters used for diminished reality (DR)-based work area visualization and a multiview camera-based rendering scheme. The refocus interface allows the user to determine two planes — one for setting a virtual window, through which the user can observe the background occluded by an object, and the other for a background plane, which is used for the subsequent background rendering. The background is rendered considering the geometric and appearance relationships of the multiview cameras observing the scene. Our preliminary results demonstrate that our DR system can visualize the hidden background.

Pages 140-1 - 140-8,  © Society for Imaging Science and Technology 2018

Interest in 3D viewing has been increasing significantly over the last few years, with the vast majority of focus being on Virtual Reality (VR), a single-user form of Stereo 3D (S3D) with positional tracking, and Augmented Reality (AR) devices. However, Volumetric 3D displays and Light Field Displays (LFD) are also generating interest in the areas of operational and scientific analysis due to the unique capabilities of this class of hardware. The amount of available 3D data is also growing exponentially, including computational simulation results, medical data (e.g. computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound), computer-aided design (CAD) data, plenoptic camera data, synthetic aperture radar (SAR) data, light detection and ranging (LiDAR) data, 3D data from global positioning system (GPS) satellite scans, and numerous other 3D data sources. Much of this 3D data is available in the cloud, often at long distances from the application or user. While significant progress has been made developing open standards for S3D devices, no standard has yet emerged that would allow 3D data streaming for devices such as LFDs, which display an assembly of simultaneous views for full parallax and multi-user support without the need for specialized eyewear. A 3D streaming standard is desired that will allow display of 3D scenes on any Streaming Media for Field of Light Displays (SMFoLD) compliant device, including S3D, VR, AR, Volumetric 3D, and LFD devices. With support from the Air Force Research Laboratories, Third Dimension Technologies (TDT), in collaboration with Oak Ridge National Laboratory (ORNL) and Insight Media, has initiated work on the development of an SMFoLD Open Standard.

Pages 141-1 - 141-6,  © Society for Imaging Science and Technology 2018

A group of tools has been developed to analyze the display performance of light-field displays based on a micro-lens array. The simulation tools are based on Snell's law and the characteristics of the human visual system, and they also take into account the optical aberrations of the lens arrays and the focus of the eyes. Several important issues in light-field displays can thus be examined before fabrication, such as perspective views, viewing zones, crosstalk, color moiré patterns, and the depth information in light-field near-eye displays. The developed tools therefore provide a useful way to verify the improvement offered by new techniques and to predict the final performance of a system built to a given specification. To the best of our knowledge, these tools are the first approach to analyzing the performance of 3D displays that considers both optical aberrations and the characteristics of the human visual system.
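The per-ray simulation the abstract describes rests on Snell's law applied at each lens surface. As a minimal, hypothetical sketch (illustrative only, not the authors' tool), the vector form of the refraction step can be written as:

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Refract a unit ray direction at a surface via Snell's law (vector form).

    incident: ray direction; normal: surface normal facing the incoming ray;
    n1, n2: refractive indices before/after the surface.
    Returns the refracted unit direction, or None on total internal reflection.
    """
    incident = incident / np.linalg.norm(incident)
    normal = normal / np.linalg.norm(normal)
    cos_i = -np.dot(normal, incident)
    eta = n1 / n2
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:          # total internal reflection: no refracted ray
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

# Example: a ray entering acrylic (n ~ 1.49) from air at 30 degrees
# to the surface normal of a lenslet.
d = refract(np.array([np.sin(np.radians(30.0)), -np.cos(np.radians(30.0))]),
            np.array([0.0, 1.0]), 1.0, 1.49)
```

Tracing many such rays per lenslet, and perturbing the surface normals, is one way aberrations of the lens array could enter a simulation of this kind.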

Pages 142-1 - 142-5,  © Society for Imaging Science and Technology 2018

We propose a new optical system for a glasses-free spherical 3D display and confirm its validity in this paper. The proposed display is a light-field display, a method capable of showing 3D images to multiple viewers at once and of reproducing complicated optical effects such as occlusion. The proposed system can also produce motion parallax in multiple directions (full parallax). It is based on time-division multiplexing: the optical system consists of a rotating, specially designed sphere-shaped array of flat mirrors and a high-speed projector. The rotating mirror array reflects rays from the projector to various angles, and by properly controlling the intensity of the rays with the projector, a 3D image can be seen in the mirror array. In optical computer simulations using ray tracing, the expected 3D image was observed, confirming the system's viability as a 3D display. A prototype was built using 3D printing, which also confirmed the feasibility of the system. Additionally, a color display based on the same method was developed using three high-speed projectors.
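The time-division-multiplexing principle above can be illustrated with a hedged back-of-the-envelope sketch (the frame rates and function below are illustrative assumptions, not the authors' prototype parameters): a flat mirror rotated by an angle deflects the reflected ray by twice that angle, so each high-speed sub-frame addresses a different viewing direction.

```python
import numpy as np

def reflected_angles(frame_rate_hz, rotation_hz, n_frames, incident_deg=0.0):
    """Reflected-ray angle addressed by each projector sub-frame when a
    flat mirror spins at rotation_hz (time-division multiplexing).

    A flat mirror rotated by theta deflects the reflected ray by 2 * theta.
    """
    t = np.arange(n_frames) / frame_rate_hz        # sub-frame timestamps (s)
    mirror_deg = 360.0 * rotation_hz * t           # mirror orientation (deg)
    return (2.0 * mirror_deg - incident_deg) % 360.0

# With a hypothetical 1000 fps projector and a mirror spinning at 10 rev/s,
# consecutive sub-frames address viewing directions 7.2 degrees apart.
angles = reflected_angles(1000, 10, 5)
```

The intensity shown in each sub-frame would then be chosen so that the set of rays leaving the mirror array reconstructs the target light field.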

Pages 250-1 - 250-6,  © Society for Imaging Science and Technology 2018

A fast calculation method for a full-color holographic system with real objects captured by a depth camera is proposed. The depth and color information of the scene is acquired with a depth camera, and a point cloud model is reconstructed virtually. Because each point of the point cloud lies at the exact coordinates of one depth layer, the points can be classified into grids according to their depth. A diffraction calculation is then performed on the grids using a fast Fourier transform (FFT) to obtain a computer-generated hologram (CGH). The computational complexity is reduced dramatically compared with conventional methods. Numerical simulation results confirm that the proposed method improves the computational speed of full-color CGH.
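Layer-based FFT diffraction of this kind is commonly implemented with the angular-spectrum method; the sketch below is an illustrative implementation under that assumption, not the authors' code. Each depth layer costs one forward FFT, a transfer-function multiply, and one inverse FFT.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z with the angular-spectrum
    method -- the per-depth-layer diffraction step of a layer-based CGH.

    field: complex 2D array; wavelength, pitch, z in meters.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)                # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    mask = arg > 0                                   # propagating components
    kz = 2.0 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    H = np.where(mask, np.exp(1j * kz * z), 0.0)     # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Summing the propagated fields of all depth layers (per color channel)
# would yield the full-color hologram plane.
```

Because each layer needs only O(N log N) work instead of a per-point spherical-wave summation, grouping the point cloud by depth is what makes the layer-based approach fast.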

