This document provides an overview of the 2017 Stereoscopic Displays and Applications conference (the 28th in the series) and an introduction to the conference proceedings.
Recently the movie industry has been advocating the use of frame rates significantly higher than the traditional 24 frames per second. These higher frame rates theoretically improve the quality of motion portrayed in movies and help avoid motion blur, judder, and other undesirable artifacts. Previously we reported that young adult audiences showed a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps, and that shutter angle (frame exposure time) had little impact on viewers' choices. In the current study we replicated this experiment with an audience of imaging professionals who work in the film and display industries and who assess image quality as part of their everyday occupation. These viewers were also on average older and thus could be expected to have attachments to the "film look" through both experience and training. We used stereoscopic 3D content, filmed and projected at multiple frame rates (24, 48 and 60 fps) with shutter angles ranging from 90° to 358°, to evaluate viewer preferences. In paired-comparison experiments we assessed preferences along five attributes (realism, motion smoothness, blur/clarity, quality of depth, and overall preference). As with the young adults in the earlier study, the expert viewers showed a clear preference for higher frame rates, particularly when contrasting 24 fps with 48 or 60 fps. Shutter angle again had little impact on viewers' choices, with the exception of one clip at 48 fps for which a larger shutter angle was preferred; this preference appeared in the most dynamic "warrior" clip for the experts but in the slower-moving "picnic" clip for the naïve viewers. These data confirm the advantages afforded by high-frame-rate capture and presentation in a cinema context for both naïve audiences and experienced film professionals.
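For readers unfamiliar with the terminology, the shutter angle quoted above maps onto per-frame exposure time through the standard relation exposure = (angle / 360°) × (1 / frame rate). The short Python sketch below evaluates that relation for the frame rates and the range of shutter angles named in the study; it is purely illustrative and reproduces nothing else about the experimental setup.

```python
# Minimal sketch: the standard relationship between shutter angle and
# per-frame exposure time referenced in the abstract ("shutter angle
# (frame exposure time)"). The frame rates and angles follow the
# conditions named in the study (24/48/60 fps, 90-358 degrees); nothing
# else about the capture rig is assumed.

def exposure_time_ms(frame_rate_fps: float, shutter_angle_deg: float) -> float:
    """Exposure time in milliseconds for one frame."""
    frame_duration_ms = 1000.0 / frame_rate_fps
    return frame_duration_ms * (shutter_angle_deg / 360.0)

if __name__ == "__main__":
    for fps in (24, 48, 60):
        for angle in (90, 180, 270, 358):
            print(f"{fps:2d} fps, {angle:3d} deg shutter -> "
                  f"{exposure_time_ms(fps, angle):5.2f} ms exposure")
```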
Due to concern that current U.S. Air Force depth perception standards and test procedures may not be adequate for accurately identifying aircrew medically fit to perform critical depth perception tasks during flight, the U.S. Air Force School of Aerospace Medicine developed a stereoscopic simulation environment to investigate depth perception vision standards. The initial results of this research showed that while the use of stereoscopic displays clearly improved performance on a helicopter landing task involving depth judgments, an individual's stereo acuity alone was not predictive of performance. Landing task performance could, however, be predicted when stereo acuity was combined with binocular fusion range, and motion perception was a better predictor of performance than stereo acuity. Potential implications for medical vision standards, and the complexities involved in predicting real-world performance from performance in a stereoscopic flight simulation, are discussed.
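The abstract states that landing-task performance was predictable only when stereo acuity was combined with binocular fusion range. The sketch below shows, in generic form, the kind of two-predictor linear model such a claim implies; the variable names, units, and synthetic placeholder data are assumptions for illustration only and do not reproduce the study's data or analysis.

```python
# Hedged sketch of a two-predictor model of the type the abstract alludes
# to (landing-task performance predicted from stereo acuity plus binocular
# fusion range). All data below are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(0)
n = 40
stereo_acuity = rng.uniform(10, 60, n)   # arcsec (lower is better) - assumed units
fusion_range = rng.uniform(2, 12, n)     # prism dioptres - assumed units
performance = 100 - 0.4 * stereo_acuity + 2.0 * fusion_range + rng.normal(0, 5, n)

# Ordinary least squares with both predictors: performance ~ acuity + fusion
X = np.column_stack([np.ones(n), stereo_acuity, fusion_range])
coeffs, *_ = np.linalg.lstsq(X, performance, rcond=None)
print("intercept, acuity, fusion coefficients:", np.round(coeffs, 2))
```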
Research on the role of human stereopsis has largely focused on laboratory studies that control or eliminate other cues to depth. However, in everyday environments we rarely rely on a single source of depth information. Despite this, few studies have assessed the impact of binocular vision on depth judgments in real-world scenarios presented in simulation. Here we conducted a series of experiments to determine whether, and to what extent, stereoscopic depth benefits tasks commonly performed by helicopter aircrew. We assessed the impact of binocular vision and stereopsis on perception of (1) relative and (2) absolute distance above the ground (altitude) using natural and simulated stereoscopic-3D (S3D) imagery. The results showed that, consistent with the literature, binocular vision provides very weak input to absolute altitude estimates at high altitudes (10–100 ft). In contrast, estimates of relative altitude at low altitudes (0–5 ft) were critically dependent on stereopsis, irrespective of terrain type. These findings are consistent with the view that stereopsis provides important information for altitude judgments close to the ground, while at high altitudes these judgments are based primarily on 2D cues.
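The pattern of results (weak binocular input at 10–100 ft, strong dependence at 0–5 ft) follows the well-known geometry of binocular disparity, which falls off roughly with the square of viewing distance. The sketch below evaluates the standard small-angle approximation; the interpupillary distance, example altitudes, and depth interval are illustrative assumptions, not values from the study.

```python
# Minimal sketch of why stereopsis weakens with altitude, using the
# standard small-angle approximation for relative binocular disparity:
#   disparity (radians) ~= IPD * depth_interval / distance**2
# (valid when the depth interval is small relative to the distance).
# The 0.065 m interpupillary distance and the altitudes/interval below
# are illustrative assumptions, not values reported in the study.

import math

IPD_M = 0.065  # assumed mean interpupillary distance in metres

def disparity_arcmin(distance_m: float, depth_interval_m: float) -> float:
    """Approximate relative disparity, in arc minutes, produced by a
    depth interval at a given viewing distance."""
    disparity_rad = IPD_M * depth_interval_m / distance_m ** 2
    return math.degrees(disparity_rad) * 60.0

if __name__ == "__main__":
    FT_TO_M = 0.3048
    for altitude_ft in (5, 10, 50, 100):
        d = altitude_ft * FT_TO_M
        # disparity produced by a 0.3 m (~1 ft) height difference in the terrain
        print(f"{altitude_ft:3d} ft altitude -> "
              f"{disparity_arcmin(d, 0.3):7.2f} arcmin of disparity")
```

At 100 ft the computed disparity drops to a few arc seconds, on the order of typical stereo-acuity thresholds, which is consistent with the weak binocular contribution the abstract reports at high altitudes.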
Visual fatigue restricts the wide application of 3D displays, and human attention is an important factor in visual fatigue because perceptually salient regions largely determine the quality of experience of 3D content. To study the relationship between the salient areas of 3D images and 3D visual fatigue, an experiment was designed in which subjects completed a task using a 3D display system, an eye-tracking system, and ECG recording apparatus. The task was designed to induce 3D visual fatigue with stimuli that have different salient areas. ECG signals and eye-tracking parameters were recorded throughout the experiment, and a questionnaire was used to obtain subjective fatigue ratings. Analysis of the experimental data reveals the relationship between salient area and visual fatigue. Based on this work, an evaluation method for visual fatigue that considers both ECG signals and eye-tracking parameters is developed, and a visual fatigue evaluation model based on the salient areas of 3D images is introduced. In future work, more elements will be added to the experiment to identify additional factors that can induce 3D visual fatigue.
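As a rough illustration of what an evaluation method combining ECG and eye-tracking parameters involves, the sketch below derives one simple feature from each modality and combines them with assumed weights. It is a generic toy example; the authors' actual features, weights, and model are not specified in the abstract.

```python
# Generic toy example: one ECG-derived feature (mean heart rate from R-R
# intervals) and one eye-tracking feature (blink rate), combined into a
# single score by assumed weights. Not the authors' model.

from typing import Sequence

def mean_heart_rate_bpm(rr_intervals_s: Sequence[float]) -> float:
    """Mean heart rate from a list of R-R intervals in seconds."""
    return 60.0 / (sum(rr_intervals_s) / len(rr_intervals_s))

def blink_rate_per_min(num_blinks: int, duration_s: float) -> float:
    """Blinks per minute over a recording of the given duration."""
    return num_blinks / (duration_s / 60.0)

def fatigue_score(hr_bpm: float, blinks_per_min: float,
                  w_hr: float = 0.5, w_blink: float = 0.5) -> float:
    """Weighted combination of roughly normalized features (assumed weights)."""
    return w_hr * (hr_bpm / 100.0) + w_blink * (blinks_per_min / 30.0)

if __name__ == "__main__":
    hr = mean_heart_rate_bpm([0.85, 0.82, 0.88, 0.86])
    br = blink_rate_per_min(num_blinks=42, duration_s=120.0)
    print(f"HR {hr:.1f} bpm, blink rate {br:.1f}/min, "
          f"fatigue score {fatigue_score(hr, br):.2f}")
```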
Light field 3D displays represent a major step forward in visual realism, providing glasses-free spatial vision of real or virtual scenes. Applications that capture and process live imagery have to handle data captured by potentially tens to hundreds of cameras and control tens to hundreds of projection engines that make up the human-perceivable 3D light field, using a distributed processing system. The associated massive data processing is difficult to scale beyond a certain number and resolution of images, limited by the capabilities of the individual computing nodes. The authors therefore analyze the bottlenecks and data flow of the light field conversion process and identify possibilities for better scalability. Based on this analysis they propose two different architectures for distributed light field processing. To avoid using uncompressed video data along the entire processing chain, the authors also analyze how the operation of the proposed architectures can be supported by existing image/video codecs.
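A back-of-the-envelope calculation makes the scaling problem concrete: the uncompressed data rate grows linearly with camera count, resolution, and frame rate. The camera counts, 1080p resolution, 30 fps, and 24-bit RGB in the sketch below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope sketch of the uncompressed data rates behind the
# "massive data processing" claim. Camera count, resolution, frame rate
# and 24-bit RGB are illustrative assumptions.

def uncompressed_rate_gbps(num_cameras: int, width: int, height: int,
                           fps: float, bytes_per_pixel: int = 3) -> float:
    """Aggregate uncompressed video rate in gigabits per second."""
    bytes_per_second = num_cameras * width * height * bytes_per_pixel * fps
    return bytes_per_second * 8 / 1e9

if __name__ == "__main__":
    for cams in (10, 50, 100):
        rate = uncompressed_rate_gbps(cams, 1920, 1080, 30)
        print(f"{cams:3d} cameras at 1080p30 -> {rate:6.1f} Gbit/s uncompressed")
```

Even a modest array under these assumptions exceeds what a single node or network link can comfortably carry uncompressed, which is the motivation for the distributed architectures and codec support analyzed in the paper.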
We developed a novel projection-type integral three-dimensional (3D) display method using multiple projectors and displayed a 3D image with a wide viewing angle. In the proposed method, the viewing angle and the positions of the light spots that become the pixels of the 3D image are controlled by projecting elemental images onto a lens array at predetermined angles as collimated light beams. By projecting elemental images at different angles from multiple projectors installed at optimal positions, the viewing angle is enlarged and the resolution is enhanced. We prototyped a projection-type integral 3D display system consisting of five ultra-high-definition (UHD) projectors, with a viewing angle of 40 degrees in both the horizontal and vertical directions and a resolution of 114 thousand dots in the center view. We experimentally verified the display performance of the prototype and confirmed the validity of the proposed method.
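The enlargement of the viewing angle can be pictured with the standard integral-imaging relation θ = 2·arctan(p / 2g) for a lens of pitch p at gap g, with multiple projectors tiling adjacent angular segments. The lens pitch, gap, and the assumption of perfectly abutting segments in the sketch below are illustrative choices, not the prototype's actual optical parameters.

```python
# Simplified sketch of the standard integral-imaging viewing-angle
# relation, theta = 2*atan(pitch / (2*gap)), and of how tiling the
# angular range across several projectors enlarges it, as the abstract
# describes qualitatively. The pitch, gap and abutting-segment assumption
# are illustrative, not the prototype's values.

import math

def viewing_angle_deg(lens_pitch_mm: float, gap_mm: float) -> float:
    """Viewing angle of a single elemental-image / lens pair."""
    return math.degrees(2 * math.atan(lens_pitch_mm / (2 * gap_mm)))

def tiled_viewing_angle_deg(per_projector_deg: float, num_projectors: int) -> float:
    """Total angle when each projector covers an adjacent angular segment."""
    return per_projector_deg * num_projectors

if __name__ == "__main__":
    single = viewing_angle_deg(lens_pitch_mm=1.0, gap_mm=7.0)   # ~8 degrees
    print(f"single segment : {single:.1f} deg")
    print(f"five projectors: {tiled_viewing_angle_deg(single, 5):.1f} deg")
```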
We have developed a method for combining the images of multiple flat-panel displays to improve the quality of integral three-dimensional (3D) images. A multi-image combining optical system (MICOS) is used to magnify and combine the images of multiple displays without gaps between the active display areas. In the previous prototype, however, the quality of the 3D images deteriorated because the MICOS had a complicated structure and required a diffuser plate. This paper describes an optical system that combines multiple images while suppressing this deterioration: the improved method uses a simply structured MICOS and does not require a diffuser plate. Furthermore, the previous design required collimated light for the backlight of the LCD panels, which increased the overall thickness of the equipment; because the improved design can use diffused light, the overall thickness can be reduced to 1/5 or less.
The authors have been conducting research on cross-modal perception employing sensory integration, in which participants perceive a tactile sensation from stereoscopic (3D) images. The pseudo-haptic system elicits a subtle tactile sensation through spatial and temporal synchronization with 3D images, without any physical contact. In this study, a 3D image of an object was presented using a binocular see-through head-mounted display, and participants moved their forearms as if touching the viewed object. Myoelectric potentials were measured while participants experienced the subtle tactile sensation during these forearm movements. The results showed a decrease in myoelectric potential and an extension of movement time as the intensity of the pseudo-haptic sensation increased.
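For context, the decrease in myoelectric potential reported above is the kind of effect typically quantified from an EMG amplitude envelope. The sketch below computes a moving RMS envelope over a synthetic stand-in signal; the sampling rate, window length, and signal are assumptions, and the authors' actual processing pipeline is not described in the abstract.

```python
# Generic sketch of reducing a myoelectric (EMG) trace to an amplitude
# measure via a moving root-mean-square envelope. The sampling rate,
# window length and synthetic signal are illustrative assumptions.

import numpy as np

def rms_envelope(emg: np.ndarray, fs_hz: float, window_s: float = 0.1) -> np.ndarray:
    """Moving root-mean-square envelope of a raw EMG trace."""
    win = max(1, int(window_s * fs_hz))
    squared = emg.astype(float) ** 2
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

if __name__ == "__main__":
    fs = 1000.0                                             # Hz, assumed
    t = np.arange(0, 2.0, 1.0 / fs)
    emg = np.random.default_rng(1).normal(0, 0.2, t.size)   # stand-in signal
    env = rms_envelope(emg, fs)
    print(f"mean RMS amplitude: {env.mean():.3f} (arbitrary units)")
```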