Regular Articles
Volume: 65 | Article ID: jist0996
Deep Spatial–focal Network for Depth from Focus
DOI: 10.2352/J.ImagingSci.Technol.2021.65.4.040501 | Published Online: July 2021
Abstract

Traditional depth from focus (DFF) methods obtain a depth image from a set of differently focused color images. They detect the in-focus region in each image by measuring the sharpness of the observed color textures. However, estimating the sharpness of an arbitrary color texture is not a trivial task, especially when an image contains limited color or intensity variation. Recent deep-learning-based DFF approaches have shown that collectively estimating sharpness over a set of focus images, learned from a large body of training samples, outperforms traditional DFF on challenging target objects with textureless or glaring surfaces. In this article, we propose a deep spatial–focal convolutional neural network that encodes the correlations between consecutive focused images, which are fed to the network in order. In this way, our neural network learns the pattern of blur change at each image pixel from a volumetric input spanning the spatial–focal three-dimensional space. Extensive quantitative and qualitative evaluations on three existing public data sets show that our proposed method outperforms prior methods in depth estimation.
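To illustrate the core idea described in the abstract, the sketch below treats the ordered focal stack as a spatial–focal 3D volume and applies 3D convolutions so that each filter sees how blur changes across neighboring focus slices at every pixel. This is a minimal, hedged example, not the authors' actual architecture; the class name SpatialFocalNet, the layer sizes, and the way the focal dimension is collapsed are illustrative assumptions.

```python
# Minimal sketch of a spatial-focal 3D CNN for depth from focus.
# Assumptions: network name, channel widths, and the mean-pooling over the
# focal dimension are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class SpatialFocalNet(nn.Module):
    def __init__(self, in_channels=3, features=32):
        super().__init__()
        # 3D convolutions mix information along (focus, height, width),
        # so each filter observes the blur pattern across consecutive
        # focus slices in the ordered stack.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Reduce features to a single per-voxel response.
        self.head = nn.Conv3d(features, 1, kernel_size=1)

    def forward(self, focal_stack):
        # focal_stack: (batch, channels, num_slices, height, width),
        # with slices ordered by focus distance.
        feats = self.encoder(focal_stack)
        depth = self.head(feats)       # (batch, 1, num_slices, H, W)
        return depth.mean(dim=2)       # collapse focal axis -> (batch, 1, H, W)

if __name__ == "__main__":
    stack = torch.randn(1, 3, 10, 128, 128)      # toy focal stack of 10 slices
    print(SpatialFocalNet()(stack).shape)        # torch.Size([1, 1, 128, 128])
```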

  Cite this article 

Sherzod Salokhiddinov, Seungkyu Lee, "Deep Spatial–focal Network for Depth from Focus," in Journal of Imaging Science and Technology, 2021, pp. 040501-1 - 040501-14, https://doi.org/10.2352/J.ImagingSci.Technol.2021.65.4.040501

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2021
  Article timeline 
  • Received September 2020
  • Accepted December 2020
  • Published July 2021
