Deep image denoisers achieve state-of-the-art results but at a hidden cost. As recent literature has shown, these networks can overfit their training distributions, hallucinating inaccurate content in their outputs and generalizing poorly to data that deviate from training. For better control over and interpretability of a deep denoiser, we propose a novel framework, which we call controllable confidence-based image denoising (CCID). In this framework, we exploit the output of a deep denoising network alongside an image convolved with a reliable filter, such as a simple convolution kernel, which does not risk adding hallucinated information. We propose to fuse the two components with a frequency-domain approach that takes into account the reliability of the deep network output, and the user can control how the two components are combined across frequencies. We also provide a user-friendly map that spatially estimates the confidence in the output wherever it may contain network hallucination. Results show that our CCID not only provides more interpretability and control, but can even surpass the quantitative performance of both the deep denoiser and the reliable filter, especially when the test data diverge from the training data.
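To make the fusion step concrete, the following is a minimal sketch in Python (NumPy only). It is not the authors' exact method: the soft radial low-pass split, the parameter names `cutoff` and `softness`, which component supplies which frequency band, and the disagreement-based confidence proxy are all illustrative assumptions; the abstract only states that the fusion is performed in the frequency domain under user control.

```python
import numpy as np

def ccid_fuse(deep_out, filtered, cutoff=0.25, softness=0.05):
    """Frequency-domain fusion sketch (hypothetical parametrisation).

    Frequencies below `cutoff` (a fraction of the Nyquist frequency)
    are taken from the deep denoiser output; above it, the fusion
    falls back to the hallucination-free reliable filter. `softness`
    smooths the transition. Both parameters act as user controls.
    """
    h, w = deep_out.shape
    fy = np.fft.fftfreq(h)[:, None]            # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]            # horizontal frequencies
    radius = np.sqrt(fy ** 2 + fx ** 2) / 0.5  # normalise to Nyquist
    # Soft radial mask: ~1 in the trusted low band, ~0 elsewhere.
    weight = 1.0 / (1.0 + np.exp((radius - cutoff) / softness))

    fused = weight * np.fft.fft2(deep_out) \
        + (1.0 - weight) * np.fft.fft2(filtered)
    return np.real(np.fft.ifft2(fused))

def confidence_map(deep_out, filtered):
    """Simple spatial-confidence proxy (an assumption, not the paper's
    estimator): confidence drops where the deep output disagrees
    strongly with the reliable filter."""
    disagreement = np.abs(deep_out - filtered)
    return 1.0 - disagreement / (disagreement.max() + 1e-8)
```

Raising `cutoff` in this sketch admits more of the deep network's content into the result, while lowering it makes the output more conservative; the actual control exposed by CCID may differ.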
In this paper, text recognition on variably curved cardboard pharmaceutical packages is studied from the photometric stereo imaging point of view, with a focus on developing a method for binarizing the expiration-date and batch-code texts. Adaptive filtering, more specifically a Wiener filter, is used together with a haze-removal algorithm and the fusion of LoG edge-detected sub-images, resulting in an Otsu-thresholded binary image of the expiration-date and batch-code texts for further analysis. The results presented appear promising for text binarization, which is crucial for text-character segmentation and subsequent automatic reading. Finally, we outline new ideas that will guide our future research.
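As a rough illustration of the described pipeline, here is a minimal Python sketch using SciPy and OpenCV. All parameter values are placeholders, the haze-removal step is replaced by a simple contrast stretch (the paper uses a dedicated haze-removal algorithm), and the "sub-images" are interpreted here as multi-scale LoG responses; tiling the image into spatial patches would be handled analogously.

```python
import cv2
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import gaussian_laplace

def binarize_date_code(gray):
    """Sketch of the described binarization pipeline (assumed
    parameters): Wiener denoising, a contrast-stretch stand-in for
    haze removal, multi-scale LoG edge maps fused per pixel, and
    Otsu thresholding of the fused result."""
    # 1. Adaptive (Wiener) filtering to suppress sensor noise.
    den = wiener(gray.astype(np.float64), mysize=5)

    # 2. Stand-in for the haze-removal step: global contrast stretch.
    den = (den - den.min()) / (den.max() - den.min() + 1e-8)

    # 3. LoG edge detection at several scales, fused by max response.
    scales = (1.0, 2.0, 3.0)  # assumed sigma values
    log_maps = [np.abs(gaussian_laplace(den, sigma=s)) for s in scales]
    fused = np.maximum.reduce(log_maps)

    # 4. Otsu thresholding of the fused edge map -> binary text image.
    fused8 = cv2.normalize(fused, None, 0, 255,
                           cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(fused8, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```

The resulting binary image would then feed the character-segmentation and automatic-reading stages mentioned above.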