Noise suppression in complex-valued data is an important task for a wide class of applications, in particular phase retrieval in coherent imaging. Approaches based on BM3D techniques are among the most successful in the field. In this paper, we propose and develop a new class of BM3D-style algorithms that use higher-order (3D and 4D) singular value decomposition (HOSVD) for transform design in the complex domain. This set of novel algorithms is implemented as a toolbox in Matlab. The development covers various types of complex-domain sparsity: directly in the complex domain, in the real/imaginary parts, and in the phase/amplitude parts of the complex-valued variables. The group-wise transform design is combined with different kinds of thresholding, including multivariable Wiener filtering. The toolbox includes both iterative and non-iterative novel complex-domain algorithms (filters). The efficiency of the developed algorithms is demonstrated on denoising problems with additive Gaussian complex-valued noise. A special set of complex-valued test images was developed with spatially varying, correlated phase and amplitude imitating data typical for optical interferometry and holography. It is shown that for this class of test images the developed algorithms demonstrate state-of-the-art performance.
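To make the group-wise HOSVD transform concrete, the sketch below illustrates one plausible building block: hard-thresholding the HOSVD core tensor of a 3D group of similar complex-valued patches. This is a minimal illustration under assumed conventions (function names, the patch-stack layout, and the threshold `tau` are assumptions), not the toolbox implementation described in the abstract.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding of a 3D tensor."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd_hard_threshold(group, tau):
    """Denoise a 3D group of similar complex-valued patches by
    hard-thresholding its HOSVD core tensor.

    group : complex ndarray of shape (p, p, K) -- K stacked similar patches
    tau   : threshold applied to the magnitudes of the core coefficients
    Illustrative sketch only; not the authors' toolbox code.
    """
    # Factor matrices: left singular vectors of each mode unfolding
    factors = [np.linalg.svd(unfold(group, m), full_matrices=False)[0]
               for m in range(3)]
    # Core tensor: project the group onto the factor bases (conjugate transpose
    # because the data are complex-valued)
    core = group
    for m, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.conj().T, np.moveaxis(core, m, 0), axes=1), 0, m)
    # Enforce sparsity: zero out small core coefficients
    core[np.abs(core) < tau] = 0.0
    # Reconstruct the filtered group from the thresholded core
    rec = core
    for m, U in enumerate(factors):
        rec = np.moveaxis(
            np.tensordot(U, np.moveaxis(rec, m, 0), axes=1), 0, m)
    return rec
```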
In this paper, we present a model of color images that provides insight into how the color channels are related. We show experimental results that illustrate the efficacy of this model, and we then demonstrate how it can be used to design a simple chrominance-based image denoising system.
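The abstract does not detail the model, but a common way to exploit the relation between color channels is to decorrelate them into one luminance and two chrominance channels and smooth the chrominance more aggressively, since the human visual system is less sensitive to chroma detail. The sketch below illustrates that general idea only; the opponent-color transform and the Gaussian smoothing it uses are assumptions, not the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# One common luminance/chrominance decorrelating transform (assumed here)
RGB2OPP = np.array([[1/3,  1/3,  1/3],
                    [1/2,  0.0, -1/2],
                    [1/4, -1/2,  1/4]])

def chroma_denoise(rgb, sigma_chroma=2.0):
    """Smooth only the chrominance channels of an RGB image (H x W x 3).
    Illustrative only: the transform and filter choice are assumptions."""
    opp = rgb @ RGB2OPP.T                      # decorrelate the channels
    for c in (1, 2):                           # leave the luminance untouched
        opp[..., c] = gaussian_filter(opp[..., c], sigma_chroma)
    return opp @ np.linalg.inv(RGB2OPP).T      # back to RGB
```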
We introduce a content-adaptive approach to image denoising where the filter design is based on mean opinion scores (MOSs) from preliminary experiments with volunteers who evaluated the quality of denoised image fragments. This makes it possible to tune the filter parameters so as to improve the perceptual quality of the output image, implicitly accounting for the peculiarities of the human visual system (HVS). A modification of the BM3D image denoising filter (Dabov et al., IEEE TIP, 2007), namely BM3D-HVS, is proposed based on this framework. We show that it yields a higher visual quality than the conventional BM3D. Further, we have analyzed the MOSs against popular full-reference visual quality metrics such as SSIM (Wang et al., IEEE TIP, 2004) and its extension FSIM (Zhang et al., IEEE TIP, 2011), as well as the no-reference IL-NIQE (Zhang et al., IEEE TIP, 2015), over each image fragment. Both the Spearman and the Kendall rank-order correlations show that these metrics do not correspond well to human perception. This calls for new visual quality metrics tailored to the benchmarking and optimization of image denoising methods.
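The rank-correlation analysis mentioned above can be reproduced in a few lines; the sketch below shows how Spearman and Kendall coefficients between MOSs and metric scores would typically be computed. The helper name and the placeholder scores in the comment are hypothetical, not data from the paper.

```python
from scipy.stats import spearmanr, kendalltau

def rank_agreement(mos, metric_scores):
    """Spearman and Kendall rank-order correlation between mean opinion
    scores and a full-/no-reference quality metric over image fragments.
    Hypothetical helper; the paper's exact protocol is not reproduced here."""
    rho, _ = spearmanr(mos, metric_scores)
    tau, _ = kendalltau(mos, metric_scores)
    return rho, tau

# Example usage with made-up placeholder scores:
# rho, tau = rank_agreement([3.1, 4.2, 2.5, 4.8], [0.71, 0.83, 0.69, 0.90])
```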
Optical coherence tomography (OCT) has become a popular method for diagnosing macular degeneration. Its advantages over other methods are that it is noninvasive and offers both high penetration and high resolution. However, the ever-present speckle noise and the low contrast between layers make it hard to segment the layers correctly for measurement. The aim of this paper is to show the importance of optimizing the retinal segmentation process. Current automatic segmentation algorithms can detect up to eleven layers in real time, but they often fail on images with (strong) macular degeneration, which complicates separating the layers from each other. This paper sums up current developments in retinal segmentation and shows the limits of existing algorithms. As a comprehensive test for this paper, we evaluated the common image processing algorithms and implemented promising modern OCT segmentation methods found in the literature. The result is a wide-scale analysis that can be used as a roadmap for optimizing the retinal segmentation process. Promising algorithms were found in the Canny edge detector, graph cuts, and dynamic programming. Combining the graph, gradient, and intensity information from these algorithms and narrowing the search region step by step has proven to be a fast and reliable solution. All tests used 2D image data; 3D data could be used as well but plays no role in this paper. The testing process includes pre-filtering for image denoising, which is fast and creates better preconditions for the segmentation process.
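As one concrete example of the kind of combination evaluated above, the sketch below traces a single layer boundary with dynamic programming on a column-wise cost image (for instance the negative vertical intensity gradient of a pre-filtered B-scan). It is a minimal illustration under assumed names and cost design, not one of the tested implementations.

```python
import numpy as np

def trace_layer_boundary(cost, max_jump=1):
    """Dynamic-programming trace of one retinal layer boundary.

    cost : 2D array (rows x cols), low values where the boundary is likely
           (e.g. negative vertical gradient of a denoised OCT B-scan)
    max_jump : maximal row shift allowed between neighbouring columns
    Illustrative sketch only; parameter names and cost design are assumptions.
    """
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)       # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)  # backpointers for the best path
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = np.argmin(acc[lo:hi, c - 1]) + lo
            acc[r, c] = acc[prev, c - 1] + cost[r, c]
            back[r, c] = prev
    # Backtrack from the cheapest endpoint in the last column
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path  # row index of the boundary in every column
```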