Pages 1 - 3,  © Society for Imaging Science and Technology 2017
Digital Library: EI
Published Online: January  2017
Pages 4 - 8,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 20

The 3D mesh has become a common tool in many computer vision applications, whose performance depends strongly on mesh quality. Several methods have been proposed in the literature to quantify this quality. In this paper, we propose a 3D mesh quality measure based on the fusion of selected features. The goal is to combine the advantages of these features and thus improve overall performance. The selected features are a set of 3D mesh quality metrics and a geometric attribute. The fusion step is carried out with a Support Vector Regression (SVR) model. The 3D Mesh General database is used to evaluate the method, and the results, in terms of correlation with subjective judgments, show the relevance of the proposed framework.
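
As a rough illustration of the fusion step described in this abstract, the sketch below trains a scikit-learn SVR on per-mesh feature vectors (quality-metric scores plus a geometric attribute). The feature values and subjective scores are synthetic placeholders, not data from the paper.

# Minimal sketch of feature fusion with Support Vector Regression (scikit-learn).
# Feature values and MOS scores below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [metric_1, metric_2, geometric_attribute] for one distorted mesh (hypothetical values).
X_train = np.array([[0.12, 0.30, 0.05],
                    [0.45, 0.62, 0.21],
                    [0.80, 0.91, 0.40]])
y_train = np.array([4.2, 2.9, 1.5])   # subjective quality scores (placeholders)

model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=0.1))
model.fit(X_train, y_train)

# Predict the quality of a new distorted mesh from its feature vector.
print(model.predict([[0.33, 0.50, 0.18]]))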

Digital Library: EI
Published Online: January  2017
Pages 9 - 26,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 20

After sound, 2D images, and video, 3D models represented by polygonal meshes are the current emerging content, owing to technological advances in 3D acquisition [1]. 3D meshes can undergo several degradations due to acquisition, compression, pre-processing, or transmission, which distort the mesh and therefore affect its visual rendering. Because a human observer is generally located at the end of this chain, quality assessment of the content is required. In this paper we propose a view-independent 3D Blind Mesh Quality Assessment Index (BMQI) based on the estimation of visual saliency and roughness. Given a distorted 3D mesh, the metric can assess the perceived visual quality without access to the reference content, as humans do. No assumption on the degradation to be evaluated is required, which makes the metric powerful and usable in any context requiring quality assessment of 3D meshes. The results, in terms of correlation with subjective human quality scores, are high and very competitive with existing full-reference quality assessment metrics.
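
The pooling idea behind a no-reference score of this kind might be sketched as follows, assuming per-vertex saliency and roughness maps are already available from dedicated estimators. The weighting scheme is a hypothetical illustration, not the authors' BMQI formulation.

# Hypothetical sketch: pooling per-vertex roughness into a single no-reference
# quality score, weighted by visual saliency. The saliency and roughness arrays
# are assumed to come from dedicated estimators (not implemented here).
import numpy as np

def blind_quality_score(roughness, saliency, eps=1e-8):
    """Saliency-weighted average roughness; higher roughness maps to lower quality."""
    weights = saliency / (saliency.sum() + eps)
    distortion = np.dot(weights, roughness)
    return 1.0 / (1.0 + distortion)   # map distortion to a (0, 1] quality score

# Placeholder per-vertex maps for a mesh with 5 vertices.
roughness = np.array([0.02, 0.10, 0.05, 0.30, 0.01])
saliency  = np.array([0.90, 0.40, 0.10, 0.80, 0.20])
print(blind_quality_score(roughness, saliency))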

Digital Library: EI
Published Online: January  2017
Pages 15 - 26,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 20

RGBD cameras capturing color and depth information are highly promising for various industrial, consumer, and creative applications, including segmentation, gesture control, and deep compositing. Depth maps captured with Time-of-Flight sensors, a potential alternative to vision-based approaches, still suffer from low depth resolution. Various algorithms are available for RGB-guided depth upscaling, but they also introduce filtering artifacts such as depth bleeding or texture copying. We propose a novel superpixel-based upscaling algorithm that employs an iterative superpixel clustering strategy to achieve improved boundary reproduction at depth discontinuities without the aforementioned artifacts. A thorough ground-truth-based evaluation validates that our upscaling method is superior to competing state-of-the-art algorithms with respect to depth-jump reproduction. Reference material is collected from a real RGBD camera as well as the Middlebury 2005 and 2014 data sets. The effectiveness of our method is also confirmed in a depth-jump-critical computational imaging use case.
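
A much-simplified, single-pass version of superpixel-guided depth upscaling can be sketched as follows, using SLIC superpixels from scikit-image; the paper's iterative clustering strategy is not reproduced here.

# Illustrative sketch of RGB-guided, superpixel-based depth upscaling:
# segment the high-resolution RGB image into superpixels and assign each
# superpixel the median of the (naively upsampled) low-resolution depth.
# This is a single-pass simplification, not the iterative method of the paper.
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize

def upscale_depth(rgb_hr, depth_lr, n_segments=2000):
    # Nearest-neighbour-style upsampling of the coarse depth to the RGB resolution.
    depth_up = resize(depth_lr, rgb_hr.shape[:2], order=0, preserve_range=True)
    # Superpixels computed on the high-resolution colour image.
    labels = slic(rgb_hr, n_segments=n_segments, compactness=10, start_label=0)
    depth_out = np.zeros_like(depth_up)
    for lab in np.unique(labels):
        mask = labels == lab
        depth_out[mask] = np.median(depth_up[mask])   # piecewise-constant depth per superpixel
    return depth_out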

Digital Library: EI
Published Online: January  2017
Pages 27 - 32,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 20

Shape From Focus (SFF) is the most effective technique for recovering 3D object shape in optical microscopic scenes. Although numerous methods have been proposed recently, less attention has been paid to the quality of the source images, which directly affects the accuracy of 3D shape recovery. One critical factor degrading source image quality is the high-dynamic-range problem, caused by the gap between the high dynamic range of real-world scenes and the low-dynamic-range images that cameras capture. We present a microscopic 3D shape recovery system based on high-dynamic-range (HDR) imaging. We conduct experiments on reconstructing the 3D shapes of difficult-to-image materials such as metal and shiny plastic surfaces, where conventional imaging techniques struggle to capture detail and thus produce poor 3D reconstructions. Experimental results show that the proposed HDR-based SFF method yields more accurate and robust results than traditional non-HDR techniques for a variety of materials.
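
The core Shape-from-Focus step, picking the best-focused slice of a focal stack per pixel, can be sketched as below. The HDR merging of the input images is assumed to have been done beforehand, and the focus measure shown (local Laplacian energy) is a common generic choice rather than the one used in the paper.

# Minimal Shape-from-Focus sketch: given a focal stack (already HDR-merged,
# one image per focus position), compute a per-pixel focus measure and take
# the index of the sharpest slice as the depth estimate.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def shape_from_focus(stack, window=9):
    """stack: array of shape (n_slices, H, W), grayscale focal stack."""
    focus = np.empty_like(stack, dtype=np.float64)
    for i, img in enumerate(stack):
        lap = laplace(img.astype(np.float64))
        focus[i] = uniform_filter(lap ** 2, size=window)   # local energy of the Laplacian
    depth_index = np.argmax(focus, axis=0)                 # best-focused slice per pixel
    return depth_index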

Digital Library: EI
Published Online: January  2017
Pages 33 - 38,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 20

Today, it is increasingly common and easy to digitize the surface of real 3D objects. However, the resulting meshes are often inaccurate and noisy. In this paper, we present a method to segment the digitized 3D surface of a real object by analyzing its curvature. Experimental results, applied to real digitized 3D meshes, show the efficiency of the proposed analysis, particularly in a reverse engineering process.
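
A hypothetical sketch of a curvature-driven split of a noisy digitized surface is given below; it uses the classical "surface variation" proxy from a local PCA of vertex neighbourhoods rather than the curvature estimator of the paper.

# Hypothetical sketch: estimate a per-vertex curvature proxy (surface variation)
# from a PCA of each vertex neighbourhood, then threshold it to isolate
# high-curvature regions (feature edges) from smooth patches.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(vertices, k=20):
    tree = cKDTree(vertices)
    _, idx = tree.query(vertices, k=k)
    variation = np.empty(len(vertices))
    for i, neighbours in enumerate(idx):
        pts = vertices[neighbours] - vertices[neighbours].mean(axis=0)
        eigvals = np.linalg.eigvalsh(pts.T @ pts)          # ascending eigenvalues
        variation[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return variation

def segment_by_curvature(vertices, threshold=0.05, k=20):
    var = surface_variation(vertices, k)
    return var > threshold   # True = high-curvature (feature) vertices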

Digital Library: EI
Published Online: January  2017
Pages 39 - 48,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 20

In the field of Automated Optical Inspection (AOI), measuring the 3D profile of objects can be of paramount importance, enabling the measurement of surface roughness, of critical small- or large-scale dimensions in x, y, and z, and of radius of curvature, all of which require a high level of accuracy and repeatability. This paper presents a depth-from-focus surface profile recovery approach using a tunable lens. Two prototype systems and their calibration steps are detailed, and several focus measures are introduced and implemented. Experimental results show the applicability of the approach to small-scale depth measurement and surface reconstruction.
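
As an example of a standard focus measure of the kind such systems rely on, the variance of the Laplacian can be computed with OpenCV as sketched below; the focus measures actually introduced in the paper may differ.

# A commonly used focus measure, the variance of the Laplacian, with OpenCV.
import cv2

def variance_of_laplacian(gray):
    """Higher values indicate a sharper (better focused) image region."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Usage idea: sweep the tunable lens, grab one frame per focal power, and keep
# the focus setting that maximizes the measure for each region of interest.
# img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
# print(variance_of_laplacian(img))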

Digital Library: EI
Published Online: January  2017
Pages 49 - 54,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 20

Stereo matching algorithms reconstruct a depth map from a pair of stereoscopic images. They are computationally intensive, and implementing efficient stereo matching on embedded systems is very challenging. This paper compares the implementation efficiency and output quality of state-of-the-art dense stereo matching algorithms on the same multi-core embedded system. The three classes of stereo matching algorithms are local, semi-global, and global methods. We compare three algorithms from the literature that offer a good trade-off between complexity and accuracy: Bilateral Filtering Aggregation (BFA, a local method), One-Dimensional Belief Propagation (BP-1D, a semi-global method), and Semi-Global Matching (SGM, a semi-global method). For the same input data, the BFA, BP-1D, and SGM implementations were fully optimized and parallelized on the C6678 platform and run in 10.7 ms, 4.1 ms, and 47.1 ms, respectively.
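
For a runnable reference point, OpenCV's StereoSGBM (a semi-global matching variant) produces a dense disparity map as sketched below; this off-the-shelf CPU implementation is unrelated to the optimized C6678 DSP code benchmarked in the paper.

# Illustration only: dense disparity with OpenCV's semi-global block matching.
import cv2

left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)   # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,            # must be a multiple of 16
    blockSize=block,
    P1=8 * block * block,         # smoothness penalty for small disparity changes
    P2=32 * block * block,        # smoothness penalty for large disparity changes
)
disparity = sgbm.compute(left, right).astype("float32") / 16.0   # fixed-point to pixels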

Digital Library: EI
Published Online: January  2017
