This work focuses on the compensation of motion artefacts that may occur during a line-scan acquisition and can be detected with our multi-line scan imaging system [10]. These artefacts are caused by fluctuations of the transport velocity that are not correctly reflected by the camera trigger, and they are especially visible at high magnifications. We reduce such artefacts by analyzing the light field acquired with our system. Specifically, we use a variational formulation to design a warping function such that lines acquired too early or too late are stretched or squeezed appropriately. To this end, we exploit the information contained in the light field: we control the estimation of the warping function by comparing light-field views and by enforcing uniform spacing between line acquisitions. The proposed approach enables our system to perform multi-line scan light field imaging at virtually any magnification, independently of transport and trigger quality. We demonstrate the capabilities of our approach on various objects by comparing 3D reconstructions computed from unprocessed acquisitions with those computed from our corrected acquisitions. Our approach significantly reduces artefacts both in the light fields and in the 3D reconstructions generated from them.
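The abstract does not spell out the energy functional, so the following is only a minimal sketch of the underlying idea under strong assumptions: a quadratic data term that compares two views with a known, constant vertical disparity, a quadratic penalty that enforces uniform line spacing, and plain gradient descent. The function names (`estimate_line_positions`, `resample_to_uniform`), the parameters, and the nearest-row sampling are all hypothetical simplifications, not the authors' implementation.

```python
import numpy as np

def estimate_line_positions(view_a, view_b, disparity, lam=10.0,
                            n_iter=200, step=0.05):
    """Estimate corrected line (row) positions t for a line-scan acquisition.

    view_a, view_b : (H, W) greyscale views of the same scene whose rows
                     should match after a known vertical shift `disparity`
                     (an assumption made for this sketch).
    lam            : weight of the uniform-spacing regularizer.
    Returns t (length H): t[i] is the estimated true transport position of
    acquired line i; a distortion-free acquisition would give t[i] = i.
    """
    H, W = view_a.shape
    rows = np.arange(H, dtype=float)
    t = rows.copy()                              # start from uniform spacing
    # Row-derivative of view_b, needed for the data-term gradient.
    db = np.gradient(view_b, axis=0)
    for _ in range(n_iter):
        pos = np.clip(t + disparity, 0.0, H - 1.0)
        idx = pos.round().astype(int)            # crude nearest-row sampling
        # Data term: 0.5 * || view_b(t + disparity) - view_a ||^2
        resid = view_b[idx, :] - view_a
        grad_data = (resid * db[idx, :]).mean(axis=1)
        # Spacing term: lam * sum_i (t[i+1] - t[i] - 1)^2
        d = np.diff(t) - 1.0
        grad_reg = np.zeros(H)
        grad_reg[:-1] -= 2.0 * d
        grad_reg[1:]  += 2.0 * d
        t -= step * (grad_data + lam * grad_reg)
    return t

def resample_to_uniform(image, t):
    """Resample rows so the corrected positions t fall on a uniform grid."""
    H, W = image.shape
    out = np.empty((H, W))
    for j in range(W):                           # t must be increasing
        out[:, j] = np.interp(np.arange(H), t, image[:, j])
    return out
```

Applying `resample_to_uniform` to every view with the shared estimate `t` stretches lines acquired too late and squeezes lines acquired too early, which is the effect the warping function is designed to achieve.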
Light-field cameras capture four-dimensional spatio-angular information of the light field. Compared with traditional cameras, they provide multiple viewpoints, or sub-apertures, that are more helpful for visual analysis and understanding. Optical flow is a common method for obtaining scene-structure cues from two images; however, subpixel displacements and occlusions are two unavoidable challenges when estimating optical flow from light-field sub-apertures. In this paper, we develop a light-field flow model and propose an edge-aware light-field flow estimation framework for joint depth estimation and occlusion detection. It consists of three steps: i) an optical flow volume with sub-pixel accuracy is extracted from the sub-apertures by edge-preserving interpolation, and occlusion regions are then detected through consistency checking; ii) robust light-field flow and depth estimates are initialized by a winner-take-all strategy and a weighted voting mechanism; iii) the final depth map is refined by a weighted median filter based on the guided filter. Experimental results demonstrate the effectiveness and robustness of our method.
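The abstract does not detail the consistency check in step i), so here is a minimal sketch of one common variant, a forward-backward check, assuming dense forward and backward flow fields between the centre sub-aperture and a neighbouring one are already available. The function name, the nearest-neighbour lookup, and the threshold `tau` are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def occlusion_by_consistency(flow_fw, flow_bw, tau=1.0):
    """Flag occluded pixels via a forward-backward consistency check.

    flow_fw : (H, W, 2) flow (dx, dy) from the centre sub-aperture to a
              neighbouring one; flow_bw is the flow in the opposite direction.
    A pixel is marked occluded when following the forward flow and then the
    backward flow does not return close to the start (error above tau pixels).
    """
    H, W, _ = flow_fw.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Where each pixel lands in the neighbouring view (nearest-neighbour lookup).
    xt = np.clip((xs + flow_fw[..., 0]).round().astype(int), 0, W - 1)
    yt = np.clip((ys + flow_fw[..., 1]).round().astype(int), 0, H - 1)
    # Backward flow sampled at the landing point; the round trip should cancel.
    round_trip = flow_fw + flow_bw[yt, xt]
    return np.linalg.norm(round_trip, axis=-1) > tau
```

In a framework like the one described, such a mask would plausibly gate the winner-take-all initialization in step ii), so that occluded pixels rely on the weighted voting of consistent neighbours rather than on their own unreliable flow.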