Endoscopy is a procedure that allows viewing the inside of the human body. In this article, we propose a specular reflection detection algorithm for endoscopic images that utilizes intensity, saturation, and gradient information. The proposed algorithm is a two-stage procedure: (a) image enhancement using an adaptive alpha-rooting algorithm and (b) an efficient reflection detection algorithm in the HSV color space. Extensive computer simulations show a significant improvement over state-of-the-art results in specular reflection detection and segmentation accuracy.
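The detection stage can be sketched as a set of joint thresholds on intensity, saturation, and gradient magnitude in the HSV space. The sketch below is a minimal illustration of that idea, not the paper's algorithm; the threshold values and the dilated gradient support are assumptions.

```python
import cv2
import numpy as np

def detect_specular_mask(bgr, v_thresh=0.80, s_thresh=0.25, grad_thresh=0.30):
    """Rough specular-reflection mask: bright, desaturated pixels that have
    strong intensity gradients nearby. All thresholds are illustrative values,
    not the paper's."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    s = hsv[..., 1] / 255.0          # saturation in [0, 1]
    v = hsv[..., 2] / 255.0          # intensity (value) in [0, 1]

    # Gradient magnitude of the intensity channel, normalized to [0, 1]
    gx = cv2.Sobel(v, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(v, cv2.CV_32F, 0, 1, ksize=3)
    grad = cv2.magnitude(gx, gy)
    grad /= grad.max() + 1e-8

    # Specular candidates: high intensity and low saturation,
    # supported by a strong gradient in a small neighbourhood
    candidate = (v > v_thresh) & (s < s_thresh)
    grad_support = cv2.dilate((grad > grad_thresh).astype(np.uint8),
                              np.ones((5, 5), np.uint8)) > 0
    mask = candidate & grad_support
    return mask.astype(np.uint8) * 255
```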
This paper presents a new combined local and global transform-domain-based feedback image enhancement algorithm for medical diagnosis, treatment, and clinical research. The basic idea of the local alpha-rooting method is to apply it to disjoint blocks of different sizes. The block size and alpha-rooting parameters are driven by optimization using Agaian's cost function (a no-reference image enhancement quality measure). The presented approach allows enhancing MRI and CT images with uneven lighting and brightness gradients while preserving local image features and details. Extensive computer simulations (CS) on real medical images are offered to gauge the presented method. The CS show that our method improves the contrast and enhances the details of medical images effectively compared with current state-of-the-art methods.
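For reference, here is a minimal sketch of alpha-rooting on an 8-bit grayscale image in the 2D DCT domain, plus a fixed-size block-wise variant. The adaptive block sizing and the optimization of alpha with Agaian's cost function are not shown, and the alpha and block-size values are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def alpha_rooting(image, alpha=0.92):
    """Global alpha-rooting on a grayscale image: scale each 2D-DCT
    coefficient by |X|^(alpha - 1), i.e. raise magnitudes to the power alpha.
    alpha in (0, 1); the value is illustrative."""
    x = image.astype(np.float64)
    X = dctn(x, norm="ortho")
    mag = np.abs(X)
    Y = X * np.power(mag + 1e-12, alpha - 1.0)
    Y[0, 0] = X[0, 0]                # keep the DC term so brightness is preserved
    y = idctn(Y, norm="ortho")
    return np.clip(y, 0, 255).astype(np.uint8)

def blockwise_alpha_rooting(image, block=64, alpha=0.92):
    """Local variant: apply alpha-rooting to disjoint blocks of fixed size."""
    out = image.copy()
    h, w = image.shape[:2]
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i+block, j:j+block] = alpha_rooting(
                image[i:i+block, j:j+block], alpha)
    return out
```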
In recent years, surveillance cameras have been increasingly recommended and installed in many places. In snowy conditions, cars and the circumstances of an accident become difficult to discern in images shot during snowfall. Previous techniques eliminated snow-induced noise using image shifting or dedicated filters for removing snowfall from video. However, these approaches cannot cope with heavy snowfall, and moving objects may fade from view or become hard to discern. The present study proposes a method for snowfall noise elimination that extracts moving objects using the displacement and size of the moving-object region between consecutive frames and composites images while correcting the brightness of the background images. By distinguishing between falling snow and other moving objects, we prevent objects other than snowfall from becoming invisible. Using video of actual vehicles driving in snowy conditions for our experiments, we confirmed that snowfall noise can be eliminated without moving objects in the video becoming invisible. Furthermore, we confirmed that moving objects can be incorporated into the composited background images without appearing out of place.
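A minimal sketch of the compositing idea on grayscale frames: small moving regions between consecutive frames are treated as snow and replaced with a brightness-corrected background, while larger moving regions are kept. The difference threshold and minimum object area are assumed values, not the paper's.

```python
import cv2
import numpy as np

def remove_snowfall(prev_gray, curr_gray, background,
                    min_object_area=400, diff_thresh=25):
    """Replace small moving regions (assumed to be snowflakes) with the
    brightness-corrected background; keep larger moving objects.
    Area and difference thresholds are illustrative."""
    # Moving regions between consecutive frames
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, moving = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Match the background brightness to the current frame
    gain = (np.mean(curr_gray) + 1e-6) / (np.mean(background) + 1e-6)
    bg_corrected = np.clip(background.astype(np.float32) * gain,
                           0, 255).astype(np.uint8)

    # Keep large connected components (vehicles etc.); treat the rest as snow
    n, labels, stats, _ = cv2.connectedComponentsWithStats(moving, connectivity=8)
    snow_mask = np.zeros_like(moving)
    for k in range(1, n):
        if stats[k, cv2.CC_STAT_AREA] < min_object_area:
            snow_mask[labels == k] = 255

    out = curr_gray.copy()
    out[snow_mask > 0] = bg_corrected[snow_mask > 0]
    return out
```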
In this study, we propose an image despeckling method based on a low-rank Hankel matrix approach and speckle level estimation. The annihilating filter-based low-rank Hankel matrix (ALOHA) approach is useful in various areas such as image inpainting and impulse noise reduction. The proposed method adopts this approach because it performs well at completing irregularly subsampled images. The speckled image is subsampled using a patch-based speckle level estimator that selects pixels with low speckle levels and discards the others. The subsampled image is then reconstructed using low-rank structured matrix completion. Our experimental results demonstrate that precisely estimated speckle levels improve despeckling performance significantly. The accuracy of the proposed speckle estimator yields better despeckling performance than conventional despeckling methods.
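The low-rank Hankel matrix completion itself requires a dedicated solver, so the sketch below only illustrates the patch-based speckle level estimation and subsampling step. It assumes a coefficient-of-variation estimate for multiplicative speckle; the patch size and keep ratio are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_level_map(image, patch=7):
    """Patch-wise speckle-level estimate for multiplicative noise:
    local standard deviation divided by local mean (coefficient of variation)."""
    x = image.astype(np.float64)
    mean = uniform_filter(x, size=patch)
    mean_sq = uniform_filter(x * x, size=patch)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    return np.sqrt(var) / (mean + 1e-8)

def subsample_low_speckle(image, keep_ratio=0.4):
    """Keep only the pixels with the lowest estimated speckle level; the rest
    are marked missing for a subsequent matrix-completion step."""
    level = speckle_level_map(image)
    thresh = np.quantile(level, keep_ratio)
    mask = level <= thresh           # True where a pixel is kept
    sampled = np.where(mask, image, 0)
    return sampled, mask
```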
In this paper, a single-image multi-scale super-resolution technique is proposed. The concept under study is a learning procedure between amplification steps that predicts the next, higher scale of resolution. The method integrates two different approaches for the prediction in a high-resolution multi-scale scheme: a pure interpolation and a gradient regularization. In the first step, a pure interpolation is carried out, using a prediction scheme with algebraic reconstruction across different scales to produce the high-resolution output. In the last step, the residual blur is reduced by a gradient auto-regularization method in which the gradients are adapted using weights within a neighbourhood. The precision of the method can be controlled by the parameters of the algebraic reconstruction technique (ART). The proposed model avoids the rapid decrease of output resolution as the amplification factor increases. The proposed system was tested with a dictionary. Results show that the output image quality is improved despite the increase of the scale factor.
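A minimal sketch of the two steps under stated assumptions: an ART-style iterative back-projection that refines an interpolated upscale so that downsampling it reproduces the input, followed by a Laplacian-based sharpening step standing in for the gradient auto-regularization. The relaxation factor, iteration counts, and sharpening strength are illustrative.

```python
import cv2
import numpy as np

def art_upscale(lr, scale=2, iters=20, relax=0.5):
    """ART-style upscaling: start from an interpolated estimate and apply
    relaxed corrections so that downsampling the estimate reproduces the
    low-resolution input."""
    h, w = lr.shape[:2]
    hr = cv2.resize(lr, (w * scale, h * scale),
                    interpolation=cv2.INTER_CUBIC).astype(np.float32)
    lr_f = lr.astype(np.float32)
    for _ in range(iters):
        # Project the current estimate down and measure the residual
        down = cv2.resize(hr, (w, h), interpolation=cv2.INTER_AREA)
        residual = lr_f - down
        # Back-project the residual with a relaxation factor
        hr += relax * cv2.resize(residual, (w * scale, h * scale),
                                 interpolation=cv2.INTER_CUBIC)
    return np.clip(hr, 0, 255).astype(np.uint8)

def sharpen_residual_blur(hr, strength=0.1, iters=10):
    """Laplacian-based sharpening used here as a stand-in for the paper's
    gradient auto-regularization (an assumption): subtracting a scaled
    Laplacian enhances edges blurred by the upscaling."""
    x = hr.astype(np.float32)
    for _ in range(iters):
        x -= strength * cv2.Laplacian(x, cv2.CV_32F, ksize=3)
    return np.clip(x, 0, 255).astype(np.uint8)
```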
A method for image stitching is presented. The approach focuses on images with parallax (depth variation) to create panoramic views with high fidelity. It creates the stitching seam at a virtual depth to convert hard stitching problems into simple ones. The virtual depth is created by applying local distortions to the input images at the stitching seam so that the contents visually appear to be located at the same depth. The presented approach targets a wide variety of applications that require generating high- (or super-) resolution, wide-view images. These applications include tele-presence (or tele-reality) applications such as shopping, touring, conferencing, planning and architecture, learning, inspection, and surveillance. Our results show that the proposed approach compares favorably with commercial products that rely on stitching solutions.
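As context, the sketch below shows a conventional homography-based stitching baseline of the kind that struggles with parallax; the paper's virtual-depth local distortions at the seam are not reproduced here, and the feature settings and seam placement are illustrative.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right, ratio=0.75):
    """Baseline planar stitching: ORB features, ratio-test matching, RANSAC
    homography, and warping of the right image onto the left image's plane.
    With real depth variation, this baseline shows parallax artifacts at the seam."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(d2, d1, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w * 2, h))
    pano[:, :w] = img_left            # hard seam at the image border
    return pano
```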
Stereo matching methods estimate depth information from stereoscopic images using the characteristics of binocular disparity. We find corresponding points in the left and right viewpoint images to obtain the disparity map. Once we have the disparity value between two correspondences, we can calculate the depth of the object. If there is no texture in the image, the stereo matching operation may not find accurate disparity values; in particular, it is quite difficult to estimate correct disparities in occluded regions. In this paper, we propose a new method to detect disparity errors in the disparity map and correct them using an enhanced guided image filter (GIF). Error correction with the GIF causes disparity smoothing near occluded regions because of the coefficient-smoothing process in the GIF. Our method therefore employs a trilateral kernel in the coefficient-smoothing process to alleviate this problem.
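For reference, here is a plain guided image filter (He et al.) applied to a disparity map with the left image as guide. In the paper, the box filter in the coefficient-smoothing step is replaced by a trilateral kernel, which is only indicated in the comments; the radius and eps values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Plain guided image filter: smooth `src` (e.g. a disparity map) using
    `guide` (e.g. the left grayscale image) while preserving guide edges.
    The paper replaces the box filter used to smooth the coefficients a, b
    with a trilateral kernel; a box filter is used here for simplicity."""
    size = 2 * radius + 1
    I = guide.astype(np.float64)
    p = src.astype(np.float64)

    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)

    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)        # per-pixel linear coefficients
    b = mean_p - a * mean_I

    mean_a = uniform_filter(a, size)  # coefficient-smoothing step
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b
```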
We consider the optimization of phase masks for broadband diffractive imaging so as to minimize chromatic aberrations and provide a given depth of focus (DoF). Different schemes for forming a multilevel phase mask (MPM) by combining pixels of two Fresnel lenses are analyzed. The Fresnel lenses are calculated for the same focal distance but for very different wavelengths. The possibility of adding a cubic component to the optimized mask is taken into account, as is the use of discrete phase masks with an optimized number of levels. It is shown that the proposed approach, in combination with inverse imaging, significantly increases image quality at the focal distance compared to refractive lens-based optical systems. Moreover, by changing the aforementioned parameters it is possible to increase or decrease the DoF, depending on the optimization goal. Numerical analysis demonstrates that the proposed approach significantly increases the robustness of the designed MPM to additive Gaussian noise introduced into the mask by fabrication errors.
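A minimal sketch of one combination scheme: two Fresnel-lens phase profiles computed for the same focal distance but different design wavelengths, interleaved pixel-by-pixel in a checkerboard pattern and quantized to a fixed number of levels. The grid size, pixel pitch, wavelengths, focal length, and interleaving pattern are illustrative assumptions; the cubic component and the level-count optimization are not shown.

```python
import numpy as np

def fresnel_lens_phase(n, pixel_pitch, wavelength, focal_length):
    """Wrapped phase profile of an ideal thin lens of focal length f for a
    given wavelength, sampled on an n x n grid with the given pixel pitch."""
    coords = (np.arange(n) - n / 2) * pixel_pitch
    x, y = np.meshgrid(coords, coords)
    phase = -np.pi * (x**2 + y**2) / (wavelength * focal_length)
    return np.mod(phase, 2 * np.pi)

def combine_masks_checkerboard(phase1, phase2):
    """One illustrative combination scheme: interleave the pixels of the two
    Fresnel lenses in a checkerboard pattern (other interleavings are possible)."""
    checker = (np.indices(phase1.shape).sum(axis=0) % 2).astype(bool)
    return np.where(checker, phase1, phase2)

def quantize_levels(phase, levels=8):
    """Quantize the phase mask to a given number of discrete levels."""
    step = 2 * np.pi / levels
    return np.round(phase / step) * step

# Example: same focal length, two very different design wavelengths
# (all numerical values are illustrative assumptions)
mpm = combine_masks_checkerboard(
    fresnel_lens_phase(512, 4e-6, 450e-9, 5e-3),
    fresnel_lens_phase(512, 4e-6, 650e-9, 5e-3))
mpm = quantize_levels(mpm, levels=8)
```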
Interferometric tomography can reconstruct 3D refractive index distributions from phase-shift measurements taken at different beam angles. To reconstruct a complex refractive index distribution, many projections along different directions are required. To increase the number of projections, we earlier proposed a beam-angle-controllable interferometer with mechanical stages; however, some of the phase images extracted from the interferograms contained large errors because the background fringes could not be precisely controlled. In this study, we propose applying machine learning to phase extraction, which has generally been performed by a sequence of rule-based algorithms. To estimate a phase-shift image, we employ supervised learning in which the input is an interferogram and the output is the phase-shift image, both drawn from simulation data. As a result, the trained network can estimate phase-shift images almost correctly from interferograms for which the rule-based algorithms had difficulty.
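A minimal sketch of the supervised setup, assuming a small convolutional encoder-decoder (the paper does not specify this architecture): the input is a single-channel interferogram, the target is the corresponding simulated phase-shift image, and training minimizes a pixel-wise MSE. All layer sizes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class PhaseNet(nn.Module):
    """Small convolutional encoder-decoder mapping a single-channel
    interferogram to a phase-shift image (illustrative stand-in architecture)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Supervised training on simulated (interferogram, phase-shift) pairs
model = PhaseNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(interferogram, phase_shift):
    """One optimization step; both tensors have shape (batch, 1, H, W),
    with H and W divisible by 4 so the decoder restores the input size."""
    optimizer.zero_grad()
    loss = loss_fn(model(interferogram), phase_shift)
    loss.backward()
    optimizer.step()
    return loss.item()
```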