Imaging through scattering media finds applications in diverse fields, from biomedicine to autonomous driving. However, interpreting the resulting images is difficult due to blur caused by the scattering of photons within the medium. Transient information, captured with fast temporal sensors, can be used to significantly improve the quality of images acquired in scattering conditions. Photon scattering within highly scattering media is well modeled by the diffusion approximation of the Radiative Transport Equation (RTE). Its solution is easily derived and can be interpreted as a Spatio-Temporal Point Spread Function (ST-PSF). In this paper, we first discuss the properties of the ST-PSF and subsequently use this knowledge to simulate transient imaging through highly scattering media. We then propose a framework to invert the forward model, which assumes Poisson noise, to recover a noise-free, unblurred image by solving an optimization problem.
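Inverting a convolutional forward model under Poisson noise can be sketched with the classic Richardson-Lucy multiplicative update, which is the maximum-likelihood iteration for exactly that noise model. This is only an illustrative stand-in for the paper's optimization framework: the Gaussian PSF below substitutes for the diffusion-derived ST-PSF, and all names and sizes are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=100):
    """Richardson-Lucy deconvolution: the multiplicative MLE update
    for a convolutional forward model with Poisson noise."""
    psf_flipped = psf[::-1, ::-1]          # adjoint of the blur operator
    estimate = np.full_like(blurred, blurred.mean())
    eps = 1e-12
    for _ in range(n_iter):
        predicted = np.maximum(fftconvolve(estimate, psf, mode="same"), eps)
        ratio = blurred / predicted
        estimate = np.maximum(
            estimate * fftconvolve(ratio, psf_flipped, mode="same"), 0.0)
    return estimate

# Toy example (illustrative, not the paper's setup): blur a point
# source with a Gaussian PSF standing in for the ST-PSF, then recover it.
x = np.linspace(-3, 3, 21)
psf = np.exp(-(x[None, :] ** 2 + x[:, None] ** 2))
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[32, 32] = 100.0
blurred = np.clip(fftconvolve(truth, psf, mode="same"), 0.0, None)
restored = richardson_lucy(blurred, psf)
```

In practice the paper's framework would replace this plain iteration with a regularized optimization problem, but the data-fidelity term driven by Poisson statistics has the same ratio structure.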
Diffraction X-ray images provide molecular-level information about tissue under hydrated, physiological conditions at the physiologically relevant millisecond time scale. When processing diffraction X-ray images, the background produced during the capture process must be subtracted before measurements can be made. This background is non-uniform: it is strongest at the diffraction center and decays with increasing distance from the center. Existing methods require careful parameter selection or assume a specific background model. In this paper, we propose a novel approach in which background subtraction is learned from labeled examples. The labeled examples are image pairs in which one image contains the diffraction background and the other has the background removed. Using a deep convolutional neural network (CNN), we learn to map an image with background to an image without it. Experimental results demonstrate that the proposed approach learns background removal with results close to the ground truth (PSNR > 68, SSIM > 0.99) and without manual selection of background parameters.
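The setting above can be made concrete with a toy version of the training data: a clean diffraction signal plus a synthetic center-peaked, radially decaying background forms the network input, and the clean signal is the target. The exponential decay model, the image sizes, and the PSNR definition on [0, 1] images below are illustrative assumptions, not the paper's actual data or metric configuration.

```python
import numpy as np

def radial_background(shape, center, amplitude, scale):
    """Synthetic diffraction-style background: strongest at the
    center, decaying with distance (an assumed exponential model)."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return amplitude * np.exp(-r / scale)

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((reference - estimate) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Toy labeled pair: (with_bg, clean). A subtraction that matches the
# true background closely scores a high PSNR, mirroring the >68 dB
# regime reported in the abstract.
rng = np.random.default_rng(0)
clean = rng.random((128, 128))
bg = radial_background(clean.shape, (64, 64), amplitude=0.5, scale=30.0)
with_bg = clean + bg
estimate = with_bg - bg  # stand-in for the CNN's learned subtraction
```

A real experiment would replace the exact subtraction with the CNN's prediction on held-out pairs; the metric code stays the same.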
Image deconvolution has recently been an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is a classic image-deblurring problem that assumes the PSF is known and spatially invariant. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Although CNNs can handle complex variations in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs that occur in the real world. In this paper, we propose a CNN-based non-blind deconvolution framework that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is a network architecture that preserves both large and small features in the image. The second is a training dataset created to preserve details. The third is that we extend the images to minimize the effects of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method, both quantitatively and qualitatively.
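The third point, extending the image before deconvolution so that border ringing falls in a margin that is later discarded, can be sketched as follows. The replicate padding, the Wiener-style regularized inverse filter, and all sizes are illustrative assumptions; the paper's actual extension scheme and CNN pipeline may differ.

```python
import numpy as np
from scipy.signal import fftconvolve

def deconvolve_with_extension(blurred, psf, pad, eps=1e-3):
    """Pad the image by replicating its edges, deconvolve in the
    Fourier domain (Wiener-style regularization), then crop the
    padding off so border ringing lands in the discarded margin."""
    extended = np.pad(blurred, pad, mode="edge")
    # Embed the PSF in an array matching the extended image, rolled so
    # its center sits at the origin and the result is not shifted.
    kernel = np.zeros_like(extended)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(kernel)
    G = np.fft.fft2(extended)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    restored = np.real(np.fft.ifft2(F))
    return restored[pad:-pad, pad:-pad]

# Toy example: a square blurred by a Gaussian PSF. Extending before
# deconvolution keeps the wrap-around ringing out of the cropped result.
x = np.arange(9) - 4
psf = np.exp(-(x[None, :] ** 2 + x[:, None] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[20:40, 20:40] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = deconvolve_with_extension(blurred, psf, pad=16)
```

The choice of padding width trades off computation against how far ringing from a large PSF can propagate; a width on the order of the PSF support is a common starting point.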