Keywords: anthropometry; Augmented Reality; Crotch Height; calibration; deep neural networks; finer-scale network; full body measurement; Global registration; global depth-map; Industrial inspection; Inline inspection; landmark detection; M-estimation; Microscope 3D imaging; multi-scale deep network; Object Reconstruction; Pose estimation; Point clouds; Point Cloud Registration; Robust registration; Structure from motion; structured light; Volumetric Video; 3D Measurement; 3D rigid transformations; 3D Shape Indexing and Retrieval; 3D Range Data Encoding; 3D optical scans; 3D depth-map; 3D Video Communications; 3D Range Data; 3D Data Processing; 3D Imaging; 3D Compression; 3D Communications; 3D Video; 3D Telepresence; 3D Scene Reconstruction; 3D Immersion; 4D scanning
Pages 2-1 - 2-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 16

In this paper we propose the 4DBODY system, which performs full-body 4D scanning of a human in motion. It consists of eight measurement heads based on a single-frame structured-light method. All heads work synchronously at 120 Hz, with spectral separation to eliminate crosstalk. The single-frame method is based on the projection of a sinusoidal fringe pattern with a special marker that allows absolute phase reconstruction, and we developed a custom unwrapping procedure adapted to human body analysis. The system achieves 0.5 mm spatial resolution and 0.3 mm accuracy within a 1.5 x 1.5 x 2.0 m3 working volume. A treadmill is integrated into the system, which allows walking experiments to be performed. We present measurement results together with an initial validation of the 4DBODY system.
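
The single-frame phase recovery described above can be illustrated with Fourier-transform profilometry, a standard way of extracting a wrapped phase map from one sinusoidal fringe image. The sketch below is an illustration under assumptions, not the authors' pipeline: the fringe image `fringe`, the carrier frequency `f0`, and the generic unwrapping routine standing in for their custom, body-specific procedure are all assumed.

import numpy as np
from skimage.restoration import unwrap_phase

def single_frame_phase(fringe, f0):
    # fringe: 2D sinusoidal fringe image; f0: carrier frequency in cycles per pixel (columns).
    cols = fringe.shape[1]
    spectrum = np.fft.fftshift(np.fft.fft2(fringe))
    u = np.fft.fftshift(np.fft.fftfreq(cols))             # horizontal frequency axis
    mask = np.zeros_like(spectrum)
    mask[:, np.abs(u - f0) < f0 / 2] = 1.0                # keep only the +f0 carrier lobe
    analytic = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    wrapped = np.angle(analytic)                          # phase wrapped to (-pi, pi]
    phase = unwrap_phase(wrapped)                         # generic 2D unwrapping (assumption)
    return phase - 2 * np.pi * f0 * np.arange(cols)       # remove the carrier ramp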

Digital Library: EI
Published Online: January  2019
Pages 3-1 - 3-6,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 16

This work experimentally demonstrates inline 3D imaging using Structure-from-Motion in the microscopic domain. Several microscopic 3D inspection systems exist. A popular method for standard microscopes is Depth-from-Focus reconstruction, which makes use of the shallow depth of field of microscope optics. It requires several scans of the object acquired at different distances along the optical axis. This and other 3D reconstruction methods based on a scanning process are not suitable for fast inline inspection if the scanning direction does not match the object’s transport direction. In this paper we propose a modification to standard microscope optics that allows for Structure-from-Motion in the microscopic domain by including an additional aperture. The choice of aperture opening and location is crucial to reaching the desired lateral and depth resolution. This paper investigates the optimal choice of these parameters for a given application, in this case the inspection of a metallic surface with 4 mm resolution in all three dimensions. The choice based on theoretical considerations is successfully tested in an experimental setup, and the results are compared with a reference measurement from confocal microscopy.
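
As context for the comparison above, the Depth-from-Focus baseline can be sketched in a few lines: a focus measure is evaluated per pixel in every slice of a focal stack, and each pixel takes the depth of its sharpest slice. The focal stack `stack`, the slice distances `z`, and the squared-Laplacian focus measure are illustrative assumptions, not details from this paper.

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, z, window=9):
    # stack: (n_slices, H, W) focal stack; z: distance of each slice along the optical axis.
    focus = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, window) for s in stack])
    return np.asarray(z)[np.argmax(focus, axis=0)]        # per-pixel depth of the sharpest slice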

Digital Library: EI
Published Online: January  2019
Pages 4-1 - 4-5,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 16

This study proposes a robust 3D depth-map generation algorithm that uses a single image. Unlike previous related works that estimate a global depth map using deep neural networks, this study uses global and local image features together, so that local changes are reflected in the depth map rather than relying on global features alone. A coarse-scale network is designed to predict the global, coarse depth-map structure from a global view of the scene, and a finer-scale random forest (RF) is designed to refine the depth map from a combination of the original image and the coarse depth map. In the first step, we use a partial structure of the multi-scale deep network (MSDN) to predict the depth of the scene at a global level. In the second step, we propose a local patch-based deep RF that estimates local depth and smooths the noise of the local depth map by combining it with the MSDN global-coarse network. The proposed algorithm was successfully applied to various single images and yielded more accurate depth-map estimation than other existing methods.
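
A minimal sketch of the two-stage idea, under assumptions rather than the authors' implementation: a coarse depth map produced by a global network is refined by a random forest operating on local patches of the image and of the coarse prediction. The inputs `image`, `coarse_depth`, and `gt_depth` are assumed to be given; the patch size and forest parameters are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def patch_features(image, coarse_depth, size=7):
    # Stack RGB and coarse-depth patches around every pixel into one feature vector per pixel.
    pad = size // 2
    img = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    dep = np.pad(coarse_depth, pad, mode="edge")
    feats = []
    for y in range(coarse_depth.shape[0]):
        for x in range(coarse_depth.shape[1]):
            feats.append(np.concatenate([img[y:y + size, x:x + size].ravel(),
                                         dep[y:y + size, x:x + size].ravel()]))
    return np.asarray(feats)

def refine_depth(image, coarse_depth, gt_depth):
    # Fit the forest on this image's patches (illustrative; a real system trains on a dataset),
    # then replace the coarse estimate with the per-pixel forest prediction.
    rf = RandomForestRegressor(n_estimators=50, max_depth=12)
    X = patch_features(image, coarse_depth)
    rf.fit(X, gt_depth.ravel())
    return rf.predict(X).reshape(coarse_depth.shape)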

Digital Library: EI
Published Online: January  2019
Pages 6-1 - 6-5,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 16

Recently, volumetric video based communications have gained a lot of attention, especially due to the emergence of devices that can capture scenes with 3D spatial information and display mixed-reality environments. Nevertheless, capturing the world in 3D is not an easy task: capture systems are usually composed of arrays of image sensors, sometimes paired with depth sensors. Unfortunately, these arrays are not easy for non-specialists to assemble and calibrate, making their use in volumetric video applications a challenge. Additionally, the cost of these systems is still high, which limits their popularity in mainstream communication applications. This work proposes a system that reconstructs the head of a human speaker from single-view frames captured with a single RGB-D camera (e.g., Microsoft's Kinect 2 device). The proposed system generates volumetric video frames with a minimum number of occluded and missing areas. To achieve good quality, the system prioritizes the data corresponding to the participant's face, thereby preserving important information from the speaker's facial expressions. Our ultimate goal is to design an inexpensive system that can be used in volumetric video telepresence applications and even in volumetric video talk-show broadcasting applications.
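
A minimal sketch of the basic geometric step behind such single-camera capture, with assumed pinhole intrinsics (fx, fy, cx, cy) rather than values from the paper: back-projecting one RGB-D frame into a colored point cloud.

import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy):
    # depth: H x W in metres; color: H x W x 3; returns an N x 6 array of XYZRGB points.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0                                         # drop missing depth samples
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    return np.hstack([pts, color[valid].astype(float)])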

Digital Library: EI
Published Online: January  2019
Pages 7-1 - 7-6,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 16

An increasing number of mobile devices are equipped to acquire 3D range data along with color texture (e.g., iPhone X). As these devices are adopted, more people will have direct access to 3D imaging devices, bringing advanced applications, such as mobile 3D video calls and remote 3D telemedicine, within reach. This paper introduces Holo Reality, a novel platform that enables real-time, wireless 3D video communications to and from today's mobile (e.g., iPhone, iPad) devices. The major contributions are (1) a modular platform for performing 3D video acquisition, encoding, compression, transmission, decompression, and visualization entirely on consumer mobile devices and (2) a demonstration system that successfully delivered 3D video content from one mobile device to another, in real-time, over standard wireless networks. Our demonstration system uses augmented reality to visualize received 3D video content within the user's natural environment, highlighting the platform's potential to enable advanced applications for telepresence and telecollaboration. This technology also has the potential to realize new applications within areas such as mechatronics and telemedicine.
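
The exact encoding used by the platform is not specified in the abstract; as a hedged illustration of one stage such a pipeline needs, the sketch below quantises a depth frame to 16 bits, compresses it losslessly, and sends it with an already-encoded color frame over a TCP socket. The frame layout, scale factor, and use of zlib are assumptions, not the Holo Reality protocol.

import struct
import zlib
import numpy as np

def send_rgbd_frame(sock, depth_m, color_bytes, depth_scale=1000.0):
    # depth_m: float32 depth in metres; color_bytes: color frame already compressed (e.g. JPEG).
    depth_mm = np.clip(depth_m * depth_scale, 0, 65535).astype(np.uint16)
    depth_blob = zlib.compress(depth_mm.tobytes())
    header = struct.pack("!IIII", depth_mm.shape[0], depth_mm.shape[1],
                         len(depth_blob), len(color_bytes))
    sock.sendall(header + depth_blob + color_bytes)       # height, width, payload sizes, payloads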

Digital Library: EI
Published Online: January  2019
Pages 8-1 - 8-8,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 16

We present a modified M-estimation based method for fast global 3D point cloud registration that rapidly converges to an optimal solution while matching or exceeding the accuracy of existing global registration methods. The key idea of our work is to introduce weighted-median based M-estimation for re-weighted least squares, deployed in a graduated fashion, which takes into account the error distribution of the residuals to achieve rapid convergence to an optimal solution. Experimental results on synthetic and real data sets show the significantly improved convergence of our method, with accuracy comparable to state-of-the-art global registration methods.
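
The weighted-median weighting scheme itself is the paper's contribution; the sketch below only illustrates the surrounding machinery, graduated iteratively re-weighted least squares for a rigid transform from putative correspondences (src[i] matched to dst[i]), using a generic Geman-McClure-style weight and a simple annealing schedule as stand-ins.

import numpy as np

def weighted_rigid_fit(src, dst, w):
    # Closed-form weighted least-squares rigid transform (Kabsch/Umeyama style).
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst
    H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def graduated_irls(src, dst, iters=30, sigma0=1.0, decay=0.9):
    R, t, sigma = np.eye(3), np.zeros(3), sigma0
    for _ in range(iters):
        r = np.linalg.norm(src @ R.T + t - dst, axis=1)   # per-correspondence residuals
        w = sigma ** 2 / (sigma ** 2 + r ** 2)            # robust down-weighting of outliers
        R, t = weighted_rigid_fit(src, dst, w)
        sigma *= decay                                     # graduated: tighten the scale each round
    return R, t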

Digital Library: EI
Published Online: January  2019
Pages 10-1 - 10-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 16

Three-dimensional optical devices are relatively inexpensive and can rapidly acquire body surface scans. Therefore, they can be used as an efficient tool for automatically obtaining landmarks and anthropometric measurements. In this study, we develop an algorithm called DeCIMA (Detecting the Crotch In Mesh Analysis). DeCIMA automatically finds the crotch, one of the fundamental landmarks on 3D human optical scans. DeCIMA presents a solution for finding the crotch location even in challenging cases where the subjects' thighs are connected. We test DeCIMA on 225 scans obtained from 75 human subjects using three different scanners. We compare the crotch height obtained by DeCIMA with the crotch height reported by the proprietary software of the 3D optical scanners. Results show that DeCIMA improves crotch detection and can correctly detect the crotch even for subjects with connected upper thighs.
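
A simplified heuristic in the same spirit, not the DeCIMA algorithm: scan horizontal slices of a standing-pose scan upward and report the first height at which the cross-section no longer separates into two leg clusters. The slice thickness and clustering parameters are assumptions, and the connected-thigh cases that DeCIMA handles would defeat this naive version.

import numpy as np
from sklearn.cluster import DBSCAN

def crotch_height(points, slice_h=0.01):
    # points: N x 3 body scan in metres with the y axis pointing up.
    y = points[:, 1]
    for h in np.arange(y.min(), y.max(), slice_h):
        cross = points[(y >= h) & (y < h + slice_h)][:, [0, 2]]   # horizontal cross-section
        if len(cross) < 10:
            continue
        labels = DBSCAN(eps=0.03, min_samples=5).fit_predict(cross)
        if len(set(labels) - {-1}) < 2:                           # legs merged into one cluster
            return h
    return None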

Digital Library: EI
Published Online: January  2019
Pages A16-1 - A16-4,  © Society for Imaging Science and Technology 2019
Digital Library: EI
Published Online: January  2019
