Regular Article
Volume: 68 | Article ID: 050501
Multiview Stereo High-Completeness Network for 3D Reconstruction
DOI: 10.2352/J.ImagingSci.Technol.2024.68.5.050501 | Published Online: September 2024
Abstract

To address the low completeness of scene reconstruction by existing methods in challenging areas such as weak texture, no texture, and non-diffuse reflection, this paper proposes a multiview stereo high-completeness network that combines a lightweight multiscale feature adaptive aggregation module (LightMFA2), SoftPool, and a sensitive global depth consistency checking method. In the proposed work, LightMFA2 is designed to adaptively learn critical information from the generated multiscale feature maps, addressing the difficulty of feature extraction in challenging areas. Furthermore, SoftPool is added to the regularization stage to downsample the 2D cost matching map, which reduces information redundancy, prevents the loss of useful information, and accelerates network computation. The sensitive global depth consistency checking method filters depth outliers: it discards pixels with confidence below 0.35 and uses a reprojection error calculation to obtain both the pixel reprojection error and the depth reprojection error. Experimental results on the Technical University of Denmark (DTU) dataset show that the proposed multiview stereo high-completeness 3D reconstruction network significantly improves completeness and overall quality, with a completeness error of 0.2836 mm and an overall error of 0.3665 mm.
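The abstract describes two mechanisms concretely enough to sketch. First, a minimal sketch of SoftPool-style downsampling as it might be applied to a 2D cost matching map, following the published SoftPool formulation (softmax-weighted pooling); the kernel size and stride are illustrative assumptions, not values taken from the paper:

```python
import torch
import torch.nn.functional as F

def soft_pool2d(x: torch.Tensor, kernel_size: int = 2, stride: int = 2) -> torch.Tensor:
    """SoftPool: each activation in a pooling window is weighted by its
    softmax weight, so strong responses dominate without discarding the
    rest of the window (unlike max pooling)."""
    weights = torch.exp(x)
    # avg_pool of (w * x) divided by avg_pool of w equals the
    # exponentially weighted sum over each window.
    pooled = F.avg_pool2d(x * weights, kernel_size, stride)
    norm = F.avg_pool2d(weights, kernel_size, stride)
    return pooled / norm
```

Second, a rough illustration of a confidence-plus-reprojection depth filter of the kind the abstract describes. Only the 0.35 confidence threshold comes from the text; the camera model, nearest-neighbour sampling, and the pixel/depth error thresholds are hypothetical placeholders:

```python
import numpy as np

def reprojection_errors(depth_ref, depth_src, K_ref, K_src, R, t):
    """Project reference pixels into the source view using depth_ref,
    sample the source depth, project back, and return the pixel
    reprojection error and relative depth reprojection error."""
    h, w = depth_ref.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    # Lift reference pixels to 3D and transform into the source camera.
    pts_ref = np.linalg.inv(K_ref) @ (pix * depth_ref.reshape(1, -1))
    pts_src = R @ pts_ref + t.reshape(3, 1)
    uv_src = K_src @ pts_src
    uv_src = uv_src[:2] / uv_src[2:]
    # Sample the source depth map at the projected locations (nearest neighbour).
    us = np.clip(np.round(uv_src[0]).astype(int), 0, w - 1)
    vs = np.clip(np.round(uv_src[1]).astype(int), 0, h - 1)
    d_sampled = depth_src[vs, us]
    # Back-project the sampled source depth and return to the reference view.
    pts_back = np.linalg.inv(K_src) @ (np.stack([uv_src[0], uv_src[1], np.ones_like(uv_src[0])]) * d_sampled)
    pts_ref_back = R.T @ (pts_back - t.reshape(3, 1))
    uv_back = K_ref @ pts_ref_back
    depth_back = uv_back[2]
    uv_back = uv_back[:2] / uv_back[2:]
    pixel_err = np.hypot(uv_back[0] - u.reshape(-1), uv_back[1] - v.reshape(-1))
    depth_err = np.abs(depth_back - depth_ref.reshape(-1)) / depth_ref.reshape(-1)
    return pixel_err.reshape(h, w), depth_err.reshape(h, w)

def consistency_mask(confidence, pixel_err, depth_err,
                     conf_thresh=0.35, pix_thresh=1.0, rel_depth_thresh=0.01):
    """Keep pixels that pass the confidence test and both reprojection tests.
    Only conf_thresh=0.35 is stated in the abstract; the other thresholds
    are illustrative placeholders."""
    return (confidence >= conf_thresh) & (pixel_err < pix_thresh) & (depth_err < rel_depth_thresh)
```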

  Cite this article 

Cuihong Yu, Cheng Han, Zhengquan Yang, Chao Zhang, "Multiview Stereo High-Completeness Network for 3D Reconstruction," in Journal of Imaging Science and Technology, 2024, pp. 1-13, https://doi.org/10.2352/J.ImagingSci.Technol.2024.68.5.050501

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2024
  Article timeline 
  • Received: March 2024
  • Accepted: September 2024
  • Published: September 2024
