Fusion of images from different modalities, such as visible and far-infrared images, is an important image processing technique because the modalities complement each other. Many existing image fusion algorithms assume that the input images are perfectly aligned; however, this assumption does not hold in many practical situations. In this paper, we propose an image alignment and fusion algorithm based on gradient-domain processing. First, we extract the gradient maps of both modality images. Then, assuming a disparity between the two gradient maps, we generate a candidate gradient map of the target fused image by selecting, pixel by pixel, the gradient with the larger power from the two modalities. A key observation is as follows: if the assumed disparity is wrong, the fused image contains ghost edges, whereas if the assumed disparity is correct, each edge is preserved as a single edge without ghosting. Therefore, we evaluate the gradient power within a region of interest of the fused image for different disparities and align the images using the disparity associated with the minimum gradient power. Finally, we apply gradient-based image fusion to the aligned image pair. We experimentally validate that the proposed approach can effectively align and fuse visible and far-infrared images.
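The following is a minimal sketch (not the authors' code) of the disparity search by minimum gradient power described above. It assumes grayscale floating-point images, a purely horizontal disparity, and a rectangular region of interest; the function and parameter names are illustrative, and the final gradient-domain fusion (e.g., a Poisson reconstruction) is only indicated, not implemented.

```python
import numpy as np

def gradient_power(img):
    """Per-pixel squared gradient magnitude (gradient 'power')."""
    gy, gx = np.gradient(img.astype(np.float64))
    return gx**2 + gy**2

def fused_gradient_power(vis, ir, disparity, roi):
    """Total gradient power of the candidate fused gradient map for one assumed disparity.

    The infrared image is shifted by `disparity` pixels, the stronger gradient is kept
    pixel by pixel, and the power is summed inside the ROI.  A wrong disparity duplicates
    edges (ghost edges), which increases this value; the correct disparity minimizes it.
    """
    ir_shifted = np.roll(ir, disparity, axis=1)    # simple horizontal shift
    p_vis = gradient_power(vis)
    p_ir = gradient_power(ir_shifted)
    p_fused = np.maximum(p_vis, p_ir)              # pixel-wise larger gradient power
    r0, r1, c0, c1 = roi
    return p_fused[r0:r1, c0:c1].sum()

def estimate_disparity(vis, ir, roi, search_range=range(-32, 33)):
    """Return the disparity whose candidate fused gradient map has minimum power."""
    powers = [fused_gradient_power(vis, ir, d, roi) for d in search_range]
    return list(search_range)[int(np.argmin(powers))]

# Hypothetical usage:
#   vis = ...  # visible image, HxW float array
#   ir  = ...  # far-infrared image, HxW float array
#   d = estimate_disparity(vis, ir, roi=(100, 200, 100, 300))
#   ir_aligned = np.roll(ir, d, axis=1)
#   # The aligned pair is then fused in the gradient domain, e.g., by selecting the
#   # stronger gradient per pixel and reconstructing the image with a Poisson solver.
```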
Ayaka Tanihata, Masayuki Tanaka, Masatoshi Okutomi, "Alignment and fusion of visible and infrared images based on gradient-domain processing," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Image Processing: Algorithms and Systems, 2022, pp. 366-1 - 366-5, https://doi.org/10.2352/EI.2022.34.10.IPAS-366