Illumination conditions in images, such as shadows, can cause problems for both humans and computers. Shadows obscure image features for human observers, and many computer vision algorithms for tracking, segmentation, recognition, and categorization are challenged by varying illumination. Previously proposed shadow removal algorithms required recording a sequence of calibration images of a fixed scene under different illumination conditions, say over the course of a day. As an alternative, calibration can be replaced by information in the single image itself: one seeks a projection that minimizes entropy, generating a grayscale image with shadows effectively eliminated. In this paper we improve the entropy-based method by first applying a sensor sharpening matrix transform. In preceding work, such a sensor transform for shadow removal was sought using many calibration images. Here, instead, we replace the calibration information with user interaction: we ask the user to identify two (or more) regions in a single image that correspond to the same surface(s) in shadow and out of shadow. Using image data from these regions only, we generate a sensor sharpening transform via an optimization that minimizes the difference between in-shadow and out-of-shadow pixel values once they are projected to grayscale. Again, entropy minimization is the driving force leading to a correct sensor matrix transform. Results show that, compared to using the camera sensors as-is, sensor sharpening yields better shadow removal.
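The two steps the abstract describes can be sketched in code: projecting 2-D log-chromaticity coordinates onto the direction that minimizes histogram entropy, and an objective for fitting a 3x3 sensor-sharpening matrix from user-marked in-shadow/out-of-shadow regions of the same surface. This is a minimal illustrative sketch, not the authors' implementation; the function names, the geometric-mean chromaticity, the 64-bin histogram entropy, and the brute-force angle search are all assumptions for illustration.

```python
import numpy as np

def log_chromaticity(rgb):
    """2-D log-chromaticity coordinates, with the geometric mean as denominator."""
    rgb = np.clip(np.asarray(rgb, dtype=float).reshape(-1, 3), 1e-6, None)
    gm = rgb.prod(axis=1) ** (1.0 / 3.0)
    chi = np.log(rgb / gm[:, None])              # rows sum to ~0: points lie in a plane
    # Orthonormal basis for the plane {x : x1 + x2 + x3 = 0}
    U = np.array([[1 / np.sqrt(2), -1 / np.sqrt(2), 0.0],
                  [1 / np.sqrt(6),  1 / np.sqrt(6), -2 / np.sqrt(6)]])
    return chi @ U.T                             # shape (N, 2)

def gray_projection(chi2, theta):
    """Project 2-D log-chromaticity onto the direction at angle theta."""
    return chi2 @ np.array([np.cos(theta), np.sin(theta)])

def entropy(gray, bins=64):
    """Shannon entropy of a histogram of the grayscale values."""
    hist, _ = np.histogram(gray, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log(p)).sum())

def min_entropy_angle(rgb, sharpening=np.eye(3), n_angles=180):
    """Search for the projection angle minimizing entropy of the grayscale image.
    `sharpening` is an optional 3x3 sensor transform applied to RGB first."""
    chi2 = log_chromaticity(np.asarray(rgb, dtype=float).reshape(-1, 3) @ sharpening.T)
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    ents = [entropy(gray_projection(chi2, t)) for t in thetas]
    i = int(np.argmin(ents))
    return thetas[i], ents[i]

def region_mismatch(M_flat, rgb_shadow, rgb_lit):
    """Objective for fitting the sharpening matrix from two user-marked regions:
    difference of mean grayscale between in-shadow and out-of-shadow pixels of
    the same surface, at the min-entropy angle for this candidate matrix."""
    M = M_flat.reshape(3, 3)
    both = np.vstack([rgb_shadow, rgb_lit])
    theta, _ = min_entropy_angle(both, sharpening=M)
    g_s = gray_projection(log_chromaticity(rgb_shadow @ M.T), theta)
    g_l = gray_projection(log_chromaticity(rgb_lit @ M.T), theta)
    return float(abs(g_s.mean() - g_l.mean()))
```

In practice `region_mismatch` would be handed to a generic optimizer (e.g. `scipy.optimize.minimize`) starting from the identity matrix; that outer loop is omitted here.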
Mark S. Drew, Hamid Reza Vaezi Joze, "Sharpening from Shadows: Sensor Transforms for Removing Shadows using a Single Image," in Proc. IS&T 17th Color and Imaging Conf., 2009, pp. 267-271, https://doi.org/10.2352/CIC.2009.17.1.art00049