Increasingly sophisticated algorithms, including trained artificial intelligence methods, are now widely employed to enhance image quality. Unfortunately, these algorithms often produce somewhat hallucinatory results, showing details that do not correspond to the actual scene content. It is not possible to avoid all hallucination, but by modeling pixel value error, it becomes feasible to recognize when a potential enhancement would generate image content that is statistically inconsistent with the image as captured. An image enhancement algorithm should never give a pixel a value that is outside the error bounds for the value obtained from the sensor. More precisely, the repaired pixel values should have a high probability of accurately reflecting the true scene content. The current work investigates computational methods for, and properties of, a class of pixel value error models that empirically map a probability density function (PDF). The accuracy of maps created by various practical single-shot algorithms is compared to that obtained by analysis of many images captured under controlled circumstances. In addition to applications discussed in earlier work, the use of these PDFs to constrain AI-suggested modifications to an image is explored and evaluated.
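The constraint described above can be illustrated with a minimal sketch. The abstract does not specify the form of the error model, so this example assumes a simple per-pixel Gaussian noise estimate and clamps AI-suggested values to a k-sigma interval around the sensor reading; the function name and parameters are hypothetical, not from the paper.

```python
import numpy as np

def constrain_enhancement(raw, enhanced, sigma, k=2.0):
    """Clamp enhanced pixel values to lie within k*sigma of the raw
    sensor values, rejecting statistically implausible modifications.
    Assumes a Gaussian per-pixel error model (an illustrative choice)."""
    lower = raw - k * sigma
    upper = raw + k * sigma
    return np.clip(enhanced, lower, upper)

raw = np.array([100.0, 120.0, 80.0])       # values as captured
sigma = np.array([2.0, 3.0, 2.5])          # per-pixel noise estimate
enhanced = np.array([101.0, 140.0, 60.0])  # AI-suggested values
print(constrain_enhancement(raw, enhanced, sigma))
# → [101. 126.  75.]
```

Here the first suggested value is within bounds and passes through unchanged, while the other two are pulled back to the edge of the plausible interval; a full PDF-based model would replace the hard clip with a probability threshold.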
Image enhancement is important in application areas such as medical imaging, computer graphics, and military applications. In this paper, we introduce a dataset of enhanced images. The images have been enhanced by five end users, and the results have been evaluated by observers in an online image quality experiment. The enhancement steps taken by the end users and the subjective results are analysed in detail. Furthermore, 38 image quality metrics have been evaluated on the introduced dataset to reveal their suitability for measuring image enhancement. The results show that the image quality metrics have low to average performance on the new dataset.
Many studies have focused on introducing new image enhancement techniques. While these techniques perform well and are able to increase the quality of images, little attention has been paid to how and when over-enhancement occurs in an image. This may be linked to the fact that current image quality metrics are not able to accurately evaluate the quality of enhanced images. In this study we introduce the Subjective Enhanced Image Dataset (SEID), in which 15 observers were asked to enhance the quality of 30 reference images shown to them once at low contrast and once at high contrast. Observers were instructed to enhance the quality of the images up to the point at which any further enhancement would reduce image quality. Results show that observers agree on when over-enhancement occurs, and that this point is closely similar whether the high-contrast or the low-contrast image is enhanced.
Unfortunately, images acquired by users from image-acquiring devices such as cameras can be under-exposed, over-exposed, or backlit. Such images are not suitable for recognizing the information they contain. In this paper, we propose a new technique that corrects the brightness of already-acquired images. The technique divides the image into regions to determine the state of brightness and operates according to the results of this state analysis, without any threshold or magic value. It effectively corrects the brightness of under-exposed, over-exposed, or backlit images using only analysis of the image data itself.
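The region-based analysis described above can be sketched as follows. The abstract does not give the actual decision rule, so this example makes illustrative assumptions: the image is split into a uniform grid of blocks, exposure is classified from the distribution of block mean luminances (the fractions and cutoffs used here are placeholders), and correction is applied as a simple gamma adjustment. All function names are hypothetical.

```python
import numpy as np

def classify_exposure(img, grid=4):
    """Split an 8-bit grayscale image into grid x grid blocks and
    classify overall exposure from the block mean luminances.
    The cutoffs (85/170) and fractions are illustrative assumptions."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    means = np.array([
        img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
        for i in range(grid) for j in range(grid)
    ])
    dark = (means < 85).mean()    # fraction of dark blocks
    bright = (means > 170).mean() # fraction of bright blocks
    if dark > 0.5 and bright > 0.25:
        return "backlit"          # mixed dark subject, bright background
    if dark > 0.5:
        return "under-exposed"
    if bright > 0.5:
        return "over-exposed"
    return "normal"

def correct_brightness(img, gamma):
    """Gamma-correct an 8-bit grayscale image (gamma < 1 brightens)."""
    return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)

dark_img = np.full((64, 64), 40.0)
state = classify_exposure(dark_img)
print(state)  # → under-exposed
if state == "under-exposed":
    dark_img = correct_brightness(dark_img, gamma=0.5)
```

Classifying from the block-mean distribution rather than a single global mean is what lets a scheme like this separate backlit scenes (dark and bright regions coexisting) from uniformly mis-exposed ones.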