Pages B03-1 - B03-7, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

This document provides an overview of the 30th Stereoscopic Displays and Applications conference and an introduction to the conference proceedings.

Digital Library: EI
Published Online: January 2019
Pages A03-1 - A03-8, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3
Digital Library: EI
Published Online: January 2019
Pages C03-1 - C03-12, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

The inaugural Stereoscopic Displays and Applications (SD&A) conference was held in February 1990 at the Santa Clara Convention Center in Silicon Valley, California. For the past 30 years the SD&A conference has been held every year in the San Francisco Bay Area, up and down the San Francisco Peninsula, and over that period it has brought together researchers from a broad range of disciplines and from locations worldwide to report on their contributions to advancements in the field of stereoscopic imaging. The conference has supported a large community of researchers who have collectively presented and published a wealth of knowledge in this subject area. In this paper we look at the impact of the conference in its first 30 years through an analysis of the conference’s published papers and of the citations of those papers. We also review some actions that conference organizers can take to help build communities of knowledge around conferences.
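As a hedged illustration of the kind of citation analysis described above, the sketch below computes an h-index from a list of per-paper citation counts; the counts shown are invented placeholders, not SD&A data.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # still at least `rank` papers with >= rank citations
        else:
            break
    return h

# Invented placeholder counts, purely to show the calculation.
print(h_index([42, 17, 9, 6, 6, 3, 1, 0]))  # -> 5
```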

Digital Library: EI
Published Online: January 2019
Pages 625-1 - 625-6, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

Over the last 25 years, we have been involved in the field of 3D image processing research. We began this research with “Data Compression of an Autostereoscopic 3-D Image”, presented in the SPIE SD&A session in 1994. We first proposed the ray space representation of 3D images, which is a common data format for various 3D capturing and display devices. Based on the ray space representation, we have conducted a variety of research on 3D image processing, including ray space coding and data compression, view interpolation, ray space acquisition and display, and a full system from capture to display of ray space. In this paper, we introduce some of this 25-year body of research on 3D image processing, from capture to display.
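The ray space representation records each light ray by the position at which it crosses a reference plane together with its direction, so that view synthesis becomes a resampling of that function. As a minimal hedged sketch (not the authors' actual pipeline), the following interpolates an intermediate view between two neighbouring views of a horizontal camera array, which amounts to linear interpolation along the camera axis of ray space; real systems resample along epipolar-plane-image (EPI) lines using estimated disparity.

```python
import numpy as np

def interpolate_view(left, right, t):
    """Synthesize a view at fractional camera position t in [0, 1]
    between two neighbouring views of a horizontal camera array.
    Plain blending is the crudest ray-space resampling; disparity-
    compensated warping along EPI lines gives sharper results."""
    return (1.0 - t) * left + t * right

# Toy example with two 2x2 grayscale "views" (illustrative values).
left = np.array([[0.0, 0.2], [0.4, 0.6]])
right = np.array([[1.0, 0.8], [0.6, 0.4]])
print(interpolate_view(left, right, 0.25))
```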

Digital Library: EI
Published Online: January 2019
Pages 626-1 - 626-6, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

We are studying a three-dimensional (3D) TV system based on a spatial imaging method for the development of a new type of broadcasting that delivers a strong sense of presence. This spatial imaging method can reproduce natural glasses-free 3D images in accordance with the viewer’s position by faithfully reproducing the light rays from an object. One challenge to overcome is that a 3D TV system based on spatial imaging requires a huge number of pixels to obtain high-quality 3D images. We therefore applied ultra-high-definition video technologies to the 3D TV system to improve the image quality. We developed a 3D camera system to capture multi-view images of large moving objects and calculate high-precision light rays for reproducing the 3D images. We also developed a 3D display using multiple high-definition display devices to reproduce the light rays of high-resolution 3D images. The results show that our 3D display can present full-parallax 3D images with a resolution of more than 330,000 pixels.
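For spatial imaging (integral) displays of this kind, the effective 3D resolution is set by the number of lens elements, with the panel pixels behind each lens encoding ray directions. The figures below are assumptions chosen only to reproduce the reported order of magnitude, not the actual design of this system.

```python
# Hedged back-of-envelope: 3D resolution of an integral display is
# roughly (panel pixels per axis) / (pixels per lens element per axis).
# All numbers here are illustrative assumptions.
panel_w, panel_h = 7680, 4320      # e.g. one 8K display device
px_per_lens = 10                   # assumed pixels under each lens, per axis
lenses_w = panel_w // px_per_lens  # 768 lens elements horizontally
lenses_h = panel_h // px_per_lens  # 432 lens elements vertically
print(lenses_w * lenses_h)         # 331,776 3D pixels: "more than 330,000"
```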

Digital Library: EI
Published Online: January 2019
Pages 628-1 - 628-6, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

We expand the viewing zone of a full-HD super-multiview display based on a time-division multiplexing parallax barrier. A super-multiview display is a kind of light field display that induces focal accommodation of the viewer by projecting multiple light rays into each pupil. The problem with the conventional system is its limited viewing zone in the depth direction. To solve this problem, we introduce adaptive time-division multiplexing, where quadruplexing is applied when the viewer is farthest, quintuplexing when the viewer is in the middle, and sextuplexing when the viewer is nearest. We have confirmed with a prototype system that the proposed system secures a deep viewing zone with little crosstalk, as expected.
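The adaptive rule itself is simple to state in code. The sketch below picks the time-division factor from the tracked viewer distance; the distance thresholds are invented for illustration, while the factor choices (4, 5, 6) follow the abstract.

```python
def multiplexing_factor(viewer_distance_mm, near_mm=500.0, far_mm=900.0):
    """Select the time-division factor from the tracked viewer distance.
    Threshold values are assumptions; the zoning rule follows the paper:
    quadruplexing farthest, quintuplexing in the middle, sextuplexing nearest."""
    if viewer_distance_mm >= far_mm:
        return 4  # quadruplexing: farthest zone
    elif viewer_distance_mm >= near_mm:
        return 5  # quintuplexing: middle zone
    return 6      # sextuplexing: nearest zone

for d in (1000.0, 700.0, 400.0):
    print(d, "mm ->", multiplexing_factor(d), "time-division phases")
```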

Digital Library: EI
Published Online: January 2019
Pages 629-1 - 629-5, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

Holographic true 3D displays attempt to duplicate the original light field with all its relevant physical features. To this end, spatial light modulators (SLMs) are commonly used as display devices. Currently available SLMs do not satisfy the technical specifications needed to achieve a consumer-quality true 3D display; one of the important drawbacks is the rather large pixel size, which results in a low spatial resolution of the fringe pattern written on the SLM and in turn severely restricts the diffraction/viewing angle. A design that uses low spatial resolution but still achieves a 360-degree viewing angle using a parabolic mirror has already been reported. In the design presented here, the parabolic mirror is replaced by a Fresnel phase plate mounted as a covering layer on top of the SLM.
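A Fresnel phase plate approximates the wrapped quadratic phase of an ideal thin lens, φ(r) = (−π r² / (λ f)) mod 2π. As a hedged sketch only (the wavelength, focal length, and sampling below are assumptions, not the paper's values), such a phase profile can be tabulated as follows.

```python
import numpy as np

# Wrapped quadratic lens phase, which a Fresnel phase plate approximates.
# All parameter values are illustrative assumptions.
wavelength = 532e-9  # m, assumed green illumination
focal_len = 0.2      # m, assumed focal length
n = 512              # samples per axis, assumed
pitch = 8e-6         # m, assumed sample pitch

coords = (np.arange(n) - n / 2) * pitch
x, y = np.meshgrid(coords, coords)
phase = (-np.pi * (x**2 + y**2) / (wavelength * focal_len)) % (2 * np.pi)
print(phase.shape, float(phase.min()), float(phase.max()))  # values in [0, 2*pi)
```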

Digital Library: EI
Published Online: January 2019
Pages 631-1 - 631-7, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

We propose a virtual-image head-up display (HUD) based on super multiview (SMV) display technology. Implementation-wise, the HUD provides a compact solution, consisting of a thin form-factor SMV display and a combiner placed on the windshield of the vehicle. Since the display is at most a few centimeters thick, it does not need the extra installation space usually required by most existing virtual-image HUDs. We analyze the capabilities of the proposed system in terms of several HUD-related quality factors such as resolution, eyebox width, and target image depth. Subsequently, we verify the analysis results through experiments carried out using our SMV HUD demonstrator. We show that the proposed system is capable of visualizing images at the typical virtual-image HUD depths of 2–3 m, in a reasonably large eyebox, which is slightly over 30 cm wide in our demonstrator. For an image at the target virtual-image depth of 2.5 m, the field of view of the developed system is 11° x 16° and the spatial resolution is around 240 x 60 pixels in the vertical and horizontal directions, respectively. There is, however, plenty of room for improvement in resolution, as our demonstrator uses an LCD of moderate resolution (216 ppi) and an off-the-shelf lenticular sheet.
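The quoted figures fix the angular resolution of the demonstrator, which the short check below computes; the 0.7 m image width is an assumption implied by a 16° field of view at 2.5 m, not a number from the paper.

```python
import math

# Pixels per degree from the figures quoted in the abstract.
print(240 / 11)  # ~21.8 ppd vertically (240 px over 11 deg)
print(60 / 16)   # ~3.75 ppd horizontally (60 px over 16 deg)

# Angular size of a w-metre-wide virtual image at depth d metres.
def angular_size_deg(w, d):
    return 2 * math.degrees(math.atan(w / (2 * d)))

print(angular_size_deg(0.7, 2.5))  # ~15.9 deg: width implied by the 16 deg FOV
```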

Digital Library: EI
Published Online: January 2019
Pages 632-1 - 632-6, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

With the advantages of motion parallax and viewing convenience, multi-view autostereoscopic displays have attracted increasing attention in recent years. Increasing the number of views improves the quality of 3D images/videos and leads to better motion parallax; however, generating large numbers of view images in real time requires a huge amount of computing resources. In principle, objects appearing near the screen plane have very small absolute disparity, so fewer views can be used to present them while achieving the same level of motion parallax. The concept of dynamic multi-view autostereoscopy is to dynamically control the number of views generated for points in 3D space based on their disparity: points with larger absolute disparity use more views, while points with smaller absolute disparity use fewer views. As a result, fewer computing resources are required for real-time generation of view images. Subjective assessments show that realizing this scheme on a 2D-plus-depth based multi-view autostereoscopic display results in only slight degradation of the 3D experience, while the amount of computation for generating view images is reduced by about 44.3% when 3D scenes are divided into three spaces.
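The disparity-to-view-count mapping can be sketched directly; the thresholds and view counts below are invented for illustration, but the three-zone structure matches the abstract (small disparity, few views; large disparity, many views).

```python
def views_for_point(abs_disparity_px, t1=4.0, t2=12.0, full_views=28):
    """Assign a view count to a 3D point from its absolute disparity.
    Thresholds and counts are assumptions; the scheme divides the scene
    into three disparity spaces, as in the paper."""
    if abs_disparity_px < t1:
        return full_views // 4  # near the screen plane: few views suffice
    elif abs_disparity_px < t2:
        return full_views // 2  # intermediate disparity
    return full_views           # large disparity: render all views

for d in (2.0, 8.0, 20.0):
    print(d, "px ->", views_for_point(d), "views")
```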

Digital Library: EI
Published Online: January 2019
Pages 635-1 - 635-9, © Society for Imaging Science and Technology 2019
Volume 31
Issue 3

Many current far-eye 3D displays are incapable of providing accurate out-of-focus blur on the retina and hence cause discomfort with prolonged use. This out-of-focus blur rendered on the retina is an important stimulus for the accommodation response of the eye and hence is one of the major depth cues. Properly designed integral displays can render this out-of-focus blur accurately. In this paper, we report a rigorous simulation study of far-eye integral displays to assess their ability to render depth and out-of-focus blur on the retina. The beam propagation simulation includes the effects of diffraction from light propagation through free space, the lenslet apertures, and the eye pupil, to calculate the spot sizes on the retina formed by multiple views entering the eye. By comparing these with the spot sizes from real objects, and taking into account the depth of field and spatial resolution of the eye, we determine the minimum number of views needed in the pupil for accurate retinal blur; in other words, we determine the minimum pixel pitch needed for the screen of a given integral display configuration. We do this for integral displays with varying pixel sizes, lenslet parameters, and viewing distances to confirm our results. One of the key results of the study is that roughly 10 views are needed in a 4 mm pupil to generate out-of-focus blur similar to the real world; the 10 views are along one dimension only, and out-of-focus blur is analyzed only for the fovea. We also note that about 20 views in a 4 mm pupil in one dimension would be more than sufficient for accurate out-of-focus blur on the fovea. Although 2-3 views in the pupil may begin to trigger the accommodation response, as shown previously, a much higher density of views is needed to mimic real-world blur.
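The headline result translates into a pixel-pitch requirement by similar triangles: views fan out from each lenslet, so the view spacing at the eye scales with viewing distance over the lenslet-to-screen gap. The viewing distance and gap below are assumptions; only the 10-views-in-4-mm figure comes from the paper.

```python
# Hedged back-of-envelope for the required pixel pitch. Only the
# 10-views-in-a-4-mm-pupil figure is from the study; the rest are
# illustrative assumptions.
pupil_mm = 4.0
views_in_pupil = 10
view_spacing_mm = pupil_mm / views_in_pupil  # 0.4 mm between views at the eye

viewing_distance_mm = 600.0  # assumed viewing distance
lenslet_gap_mm = 3.0         # assumed screen-to-lenslet gap
# Similar triangles: pixel offset behind a lenslet maps to view offset
# at the viewing distance, scaled by gap / distance.
pixel_pitch_um = view_spacing_mm * lenslet_gap_mm / viewing_distance_mm * 1000
print(pixel_pitch_um, "micron pixel pitch needed")  # 2.0 microns
```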

Digital Library: EI
Published Online: January 2019
