Pages 1 - 4,  © Society for Imaging Science and Technology 2016
Digital Library: EI
Published Online: February  2016
Pages 3DIPM-035.1 - 3DIPM-035.6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

The rapid development of 3D display technologies allows consumers to enjoy the 3D world visually through different display systems such as stereoscopic, multiview, and light field displays. Despite this rapid progress, 3D content for the various display systems remains scarce, and most available 3D content consists only of stereo sequences intended for stereoscopic 3D. To address this lack of content, images for various 3D displays are generated from stereo sequences. However, conventional 3D display rendering algorithms suffer from increasing computational complexity and memory usage as the number of views in the display grows to achieve a sufficient level of realism. This paper proposes an efficient method to generate 3D display images by direct light field rendering. We propose a new 3D image generation algorithm with hole filling, boundary matting, and postprocessing from common stereo input images.
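
The paper's own rendering pipeline is not reproduced here; as a rough illustration of the underlying view-synthesis idea, the Python/NumPy sketch below shifts pixels of one input view by a scaled disparity to form an intermediate view and fills the resulting holes by propagating the nearest valid value along each row. The function name, the scale factor alpha, and the row-wise fill are assumptions, not the authors' hole-filling or boundary-matting method.

import numpy as np

def synthesize_view(left, disparity, alpha=0.5):
    """left: HxWx3 image, disparity: HxW float map, alpha in [0, 1] picks the view position."""
    h, w, _ = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xs = int(round(x - alpha * disparity[y, x]))  # shift pixel by scaled disparity
            if 0 <= xs < w:
                out[y, xs] = left[y, x]
                filled[y, xs] = True
    # naive hole filling: propagate the last valid pixel along each row
    for y in range(h):
        last = None
        for x in range(w):
            if filled[y, x]:
                last = out[y, x]
            elif last is not None:
                out[y, x] = last
    return out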

Digital Library: EI
Published Online: February  2016
Pages 3DIPM-037.1 - 3DIPM-037.8,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

We propose an original technique to sample surfaces generated by stereoscopic acquisition systems. Our motivation is to simplify the long and tedious sampling pipeline used with such acquisition systems. The idea is to sample the surfaces directly on the pair of stereoscopic images, instead of on the meshes created by triangulating the point clouds produced by the acquisition system. More precisely, we present a feature-preserving sampling, performed directly in the stereoscopic image domain, while computing the inter-sample distances in 3D space in order to reduce the distortion due to the embedding in R^3. We focus on Poisson-disk sampling because of its desirable blue-noise properties. Experimental results show that our method is a good trade-off between direct sampling methods, which are time-consuming, and methods based on parameterizations, which alter the final sampling properties.
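
As a rough sketch of the central idea (not the authors' sampler), dart-throwing Poisson-disk sampling can be run on the image grid while the rejection distance is measured between the corresponding 3D points. The dense per-pixel 3D position map `points`, the radius, and the attempt budget below are assumptions.

import numpy as np

def poisson_disk_sample(points, radius, n_attempts=20000, seed=0):
    """points: HxWx3 array giving the 3D position of each pixel of the stereo pair."""
    rng = np.random.default_rng(seed)
    h, w, _ = points.shape
    samples = []      # accepted (y, x) pixel coordinates
    sample_xyz = []   # their 3D positions
    for _ in range(n_attempts):
        y = rng.integers(0, h)
        x = rng.integers(0, w)
        p = points[y, x]
        # reject the candidate if any accepted sample lies closer than radius in 3D
        if all(np.linalg.norm(p - q) >= radius for q in sample_xyz):
            samples.append((y, x))
            sample_xyz.append(p)
    return samples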

Digital Library: EI
Published Online: February  2016
Pages 3DIPM-396.1 - 3DIPM-396.7,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

In this paper we present a high-capacity data hiding method for 3D meshes. The proposed method is blind and orders selected vertices of the 3D object's mesh along a pseudo-random path. When a vertex is added to the path, we embed part of the message by displacing its location relative to the location of another vertex, called the reference vertex. We solve the causality issue by removing certain vertices that would otherwise interfere with the path of selected vertices during the message retrieval stage, and we then fill the resulting holes by remeshing. In the experiments, messages of high bit capacity are embedded in several 3D objects. These messages are hidden with a high level of security while causing negligible surface distortion.
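
The paper's exact embedding rule is not given here. A minimal sketch of the general displacement idea, assuming a parity-of-quantized-distance encoding with step size delta (an assumption, not the authors' scheme), could look like this:

import numpy as np

def embed_bit(vertex, reference, bit, delta=1e-3):
    """Displace `vertex` along its direction from `reference` so the quantized
    distance has the parity of `bit` (assumes vertex != reference)."""
    d = np.linalg.norm(vertex - reference)
    q = int(np.floor(d / delta))
    if q % 2 != bit:                 # adjust the quantization cell until its parity matches the bit
        q += 1
    target = (q + 0.5) * delta       # place the distance at the cell centre for robustness
    direction = (vertex - reference) / d
    return reference + direction * target

def extract_bit(vertex, reference, delta=1e-3):
    d = np.linalg.norm(vertex - reference)
    return int(np.floor(d / delta)) % 2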

Digital Library: EI
Published Online: February  2016
Pages 3DIPM-397.1 - 3DIPM-397.6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

In this paper we investigate the use of depth maps as a structure to represent a point cloud. The main idea is that depth maps implicitly define a global manifold structure for the underlying surface of a point cloud. It is thus possible to work only in the parameter domain and to modify the point cloud indirectly. We show that this approach simplifies local computations on the point cloud and allows standard image processing algorithms to operate on it. We present results of applying standard image compression algorithms to depth maps in order to compress a point cloud, and compare them with state-of-the-art point cloud compression techniques. We also present a method to visualize point clouds progressively, using a multiresolution analysis of the depth maps.
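
A minimal sketch of the round trip this representation enables, assuming a simple pinhole projection with focal length f and principal point (cx, cy): project the point cloud into a 16-bit depth map that standard lossless image codecs (e.g. PNG) can store, and re-project the depth map back to 3D points. The camera model and the millimetre scaling are assumptions, not details taken from the paper.

import numpy as np

def cloud_to_depth_map(points, f, cx, cy, width, height, scale=1000.0):
    depth = np.zeros((height, width), dtype=np.uint16)    # millimetres, 0 means empty
    for X, Y, Z in points:
        if Z <= 0:
            continue
        u = int(round(f * X / Z + cx))
        v = int(round(f * Y / Z + cy))
        if 0 <= u < width and 0 <= v < height:
            d = int(round(Z * scale))
            if depth[v, u] == 0 or d < depth[v, u]:        # keep the nearest point per pixel
                depth[v, u] = d
    return depth

def depth_map_to_cloud(depth, f, cx, cy, scale=1000.0):
    vs, us = np.nonzero(depth)
    Z = depth[vs, us] / scale
    X = (us - cx) * Z / f
    Y = (vs - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)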

Digital Library: EI
Published Online: February  2016
Pages 3DIPM-398.1 - 3DIPM-398.6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

3D reconstruction has been an active research topic since consumer-grade range cameras became widespread, and the overall process consists mainly of registration and integration. Most recent methods focus on aligning the depth maps with each other, while the integration step is simply performed by weighted averaging of truncated signed distance function (TSDF) volumes; thus the relationship between individual and integrated TSDF representations is not well explored. In this paper, within a framework of voxel-level optimization, a novel method is proposed for TSDF volume integration. To account for camera distortions, each individual TSDF volume is corrected by a non-rigid transformation. Based on the consistency between the TSDF values of the individual and integrated volumes, both the final global TSDF representation and the transformation parameters are obtained by solving an optimization problem. Experimental results demonstrate that the proposed method achieves more satisfactory reconstruction performance.
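
For context, the conventional baseline that the paper improves on, a running weighted average of per-frame TSDF volumes into a global volume, can be sketched as follows; the array shapes and per-voxel weights are assumptions.

import numpy as np

def integrate_tsdf(global_tsdf, global_weight, frame_tsdf, frame_weight):
    """All inputs are voxel grids of identical shape (e.g. 256**3 float arrays)."""
    new_weight = global_weight + frame_weight
    mask = new_weight > 0
    # weighted average of signed distances, voxel by voxel
    global_tsdf[mask] = (global_tsdf[mask] * global_weight[mask]
                         + frame_tsdf[mask] * frame_weight[mask]) / new_weight[mask]
    return global_tsdf, new_weight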

Digital Library: EI
Published Online: February  2016
Pages 3DIPM-399.1 - 3DIPM-399.9,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

In media production, previsualization is an important step. It allows the director and the production crew to see an estimate of the final product during the filmmaking process. This work focuses on a previsualization system for composite shots, which involve both real and virtual content. The system visualizes a perspective-correct view of how the real objects in front of the camera operator would look when placed in a virtual space. The aim is to simplify the workflow, reduce production time, and allow more direct control of the end result. The real scene is shot with a time-of-flight depth camera whose pose is tracked by a motion capture system. Depth-based segmentation is applied to remove the background and content outside the desired volume, the geometry is aligned with the stream from an RGB color camera, and a dynamic point cloud of the remaining real scene content is created. The virtual objects are then transformed into the coordinate space of the tracked camera, and the resulting composite view is rendered accordingly. The prototype camera system is implemented as a self-contained unit with local processing; it runs at 15 fps and produces a 1024x768 image.
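
Two of the steps above, depth-based segmentation and back-projection into the tracked camera's world space, might be sketched roughly as follows, assuming a pinhole model for the depth camera and a 4x4 camera-to-world pose from the tracker (both assumptions, not details taken from the paper).

import numpy as np

def segment_and_backproject(depth, cam_to_world, f, cx, cy, z_near=0.5, z_far=3.0):
    """depth: HxW metric depth image; cam_to_world: 4x4 pose from the motion capture system."""
    vs, us = np.nonzero((depth > z_near) & (depth < z_far))  # depth-based segmentation
    Z = depth[vs, us]
    X = (us - cx) * Z / f
    Y = (vs - cy) * Z / f
    pts_cam = np.stack([X, Y, Z, np.ones_like(Z)], axis=1)   # homogeneous camera coordinates
    pts_world = (cam_to_world @ pts_cam.T).T                 # into the shared virtual space
    return pts_world[:, :3]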

Digital Library: EI
Published Online: February  2016
Pages 3DIPM-401.1 - 3DIPM-401.7,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

We propose a novel, highly efficient method for filling disparity holes, regions where disparity estimation fails to produce a correct result, with the most plausible values. While the filled-in values may not exactly match the missing disparities, the filled disparity map has cohesive and smooth areas. Such a disparity map enables many applications, such as refocusing, layer effects, and other 3D photography applications, to overcome artifacts caused by holes and remain visually pleasing to the user. To solve for the filling disparities, we incorporate visual saliency into our model and decouple the solution complexity from the resolution of the original disparity map. Our technique therefore strikes a good balance between perceptual quality and computational efficiency. Overall, our method produces high-quality results that fulfill or exceed the requirements of practical applications that use depth. Moreover, it is fast and hence well suited to the ubiquitous mobile platforms.
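
The saliency-weighted model itself is not reproduced here. As a rough illustration of decoupling the solve from the full resolution, the sketch below fills holes on a downsampled grid by simple iterative neighbour averaging and then upsamples; the downsampling factor, iteration count, and diffusion-style fill are assumptions.

import numpy as np

def fill_holes_coarse(disparity, valid, factor=4, iters=200):
    """disparity: HxW float map, valid: HxW boolean mask of trusted disparities."""
    small = disparity[::factor, ::factor].copy()
    mask = valid[::factor, ::factor].copy()
    filled = np.where(mask, small, small[mask].mean() if mask.any() else 0.0)
    for _ in range(iters):
        # 4-neighbour average (simple diffusion), applied only inside holes
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0)
                      + np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(mask, small, avg)
    # nearest-neighbour upsample back to the original resolution
    out = np.repeat(np.repeat(filled, factor, axis=0), factor, axis=1)
    out = out[:disparity.shape[0], :disparity.shape[1]]
    return np.where(valid, disparity, out)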

Digital Library: EI
Published Online: February  2016
Pages 3DIPM-402.1 - 3DIPM-402.7,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

Depth estimation from a captured video sequence has high time complexity, and selecting a large window kernel for the estimation further increases the computation time. For depth estimation from sequential images in particular, time complexity is a critical problem. In this paper, we propose a temporal-domain stereo matching method for real-time depth estimation. Since neighboring frames of a sequence contain many similar regions, we exploit this property to restrict the disparity search range. Even though such a relationship exists between neighboring frames, the depth estimate of each subsequent frame contains a small amount of error, and this propagated error eventually degrades the accuracy of the estimated depth values. We therefore propose a compensation method for error propagation based on feature points in the stereo images: depth values are periodically re-estimated over the maximum disparity search range. Since computing cost values over the entire disparity search range is expensive, we limit how often the disparity map is renewed in this way. Experimental results show that the proposed depth estimation method for sequential images derives more accurate depth values than the conventional method.
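
A rough sketch of the temporal idea described above, assuming a simple SAD block-matching cost (the cost function, window size, and renewal period are assumptions): each pixel is searched only in a small window around its previous-frame disparity, and the full search range is used again every renew_period frames.

import numpy as np

def block_cost(left, right, y, x, d, half=3):
    """Sum of absolute differences between a block in the left image and its shifted match."""
    h, w = left.shape
    y0, y1 = max(y - half, 0), min(y + half + 1, h)
    x0, x1 = max(x - half, 0), min(x + half + 1, w)
    if x0 - d < 0:
        return np.inf
    return np.abs(left[y0:y1, x0:x1].astype(float)
                  - right[y0:y1, x0 - d:x1 - d].astype(float)).sum()

def estimate_disparity(left, right, prev_disp, frame_idx,
                       max_disp=64, search_radius=3, renew_period=30):
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    full = prev_disp is None or frame_idx % renew_period == 0
    for y in range(h):
        for x in range(w):
            if full:
                candidates = range(0, max_disp + 1)          # periodic full-range renewal
            else:
                c = int(prev_disp[y, x])                     # restrict around previous frame
                candidates = range(max(c - search_radius, 0),
                                   min(c + search_radius, max_disp) + 1)
            disp[y, x] = min(candidates, key=lambda d: block_cost(left, right, y, x, d))
    return disp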

Digital Library: EI
Published Online: February  2016
Pages 3DIPM-404.1 - 3DIPM-404.7,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 21

The presented work addresses the problem of non-uniform resampling that arises when an image shown on a spatially immersive projection display, such as the walls of a room, is intended to look undistorted to the viewer at different viewing angles. A possible application of the proposed concept is in commercial motion capture studios, where it can be used to provide real-time visualization of virtual scenes for the performing actor. We model the viewer as a virtual pinhole camera that is tracked by the motion capture system. The visualization surfaces, i.e., displays or projector screens, are assumed to be planar with known dimensions and are used together with the tracked position and orientation of the viewer. As the viewer moves, the image to be shown is geometry-corrected so that the viewer receives the intended image regardless of the relative pose of the visualization surface. The location and orientation of the viewer lead to constant recalculation of the projected sampling grid, which causes a non-uniform sampling pattern and drastic changes in sampling rate. We examine and compare ways to overcome the resulting problems of regular-to-irregular resampling and aliasing, and propose a method to objectively evaluate the quality of the geometry compensation.
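
The geometry-correction step can be sketched roughly as follows, with the viewer treated as a pinhole camera: for every pixel centre of the planar screen, compute where it projects in the viewer's virtual camera, which yields the irregular grid at which the intended image must be resampled. The screen-corner parameterization and the viewer intrinsics (f, cx, cy) are assumptions, not the paper's formulation.

import numpy as np

def corrected_sampling_grid(screen_corners, res, viewer_pos, viewer_rot, f, cx, cy):
    """screen_corners: rows (origin, right_corner, up_corner) of the screen in world coords;
    res = (width, height) of the image shown on the screen;
    viewer_rot: 3x3 world-to-camera rotation of the tracked viewer."""
    w, h = res
    origin, right_corner, up_corner = screen_corners
    u = (np.arange(w) + 0.5) / w
    v = (np.arange(h) + 0.5) / h
    # world position of every screen pixel centre
    P = (origin
         + u[None, :, None] * (right_corner - origin)
         + v[:, None, None] * (up_corner - origin))
    # transform into the viewer's camera frame and project with the pinhole model
    Pc = (P - viewer_pos) @ viewer_rot.T
    x = f * Pc[..., 0] / Pc[..., 2] + cx
    y = f * Pc[..., 1] / Pc[..., 2] + cy
    return np.stack([x, y], axis=-1)   # source-image pixel seen at each screen pixel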

Digital Library: EI
Published Online: February  2016