Camera motion blur generally varies across the image plane. In addition to camera rotation, scene depth is an important factor contributing to blur variation. This paper addresses the problem of estimating the latent image of a depth-varying scene from a blurred image caused by camera in-plane motion. To make this depth-dependent deblurring problem tractable, we acquire a short sequence of images with different exposure settings, along with inertial sensor readings, using a smartphone. The motion trajectory can be roughly estimated from the noisy inertial measurements.
The short/long exposure settings are arranged in a special order such that the structure information preserved in the short-exposed images is employed to compensate for the trajectory drift introduced by the measurement noise. Meanwhile, these short-exposed images can be regarded as a stereo pair that provides the necessary constraints for depth-map inference. However, even with ground-truth motion parameters and depth map, the deblurred image may still suffer from ringing artifacts due to depth-value ambiguity along object boundaries caused by camera motion. We propose a modified
deconvolution algorithm that, for each boundary pixel, searches a neighborhood for the “optimal” depth value to resolve this ambiguity. Experiments on real images validate that our deblurring approach achieves better performance than existing state-of-the-art methods on a depth-varying scene.
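The idea of layered deconvolution with a per-boundary-pixel depth search can be sketched in 1-D. The sketch below is only illustrative and is not the paper's actual formulation: it assumes a simple box blur whose extent scales as motion/depth, a two-layer synthetic scene, global Wiener deconvolution per depth layer, and a local reblur residual as the selection criterion; all names and parameters are hypothetical.

```python
import numpy as np

def box_kernel(length, size):
    # Centered box PSF (odd length) embedded in a zero-padded window.
    k = np.zeros(size)
    k[:length] = 1.0 / length
    return np.roll(k, -(length // 2))

def wiener_deconv(y, k, nsr=1e-2):
    # Frequency-domain Wiener deconvolution (circular boundaries).
    K = np.fft.fft(k)
    return np.real(np.fft.ifft(np.conj(K) * np.fft.fft(y) / (np.abs(K) ** 2 + nsr)))

# --- synthetic 1-D depth-varying scene (all parameters hypothetical) ---
n = 256
x = np.zeros(n)
x[96:160] = 1.0                      # latent signal: bright foreground object
depth = np.full(n, 5.0)
depth[96:160] = 3.0                  # object is closer than the background
motion = 15.0                        # in-plane camera translation (pixel units)
blur_len = lambda d: int(round(motion / d))   # blur extent ~ motion / depth

# spatially varying blur: each pixel is averaged over its own depth's window
y = np.empty(n)
for i in range(n):
    L = blur_len(depth[i])
    y[i] = x[np.arange(i - L // 2, i + L // 2 + 1) % n].mean()

# one global Wiener estimate per depth layer
layers = [3.0, 5.0]
est = {d: wiener_deconv(y, box_kernel(blur_len(d), n)) for d in layers}

# per-pixel depth search: near object boundaries several depths are
# candidates; keep the one whose reblurred estimate best explains y
xhat = np.empty(n)
chosen = np.empty(n)
for i in range(n):
    nb = depth[max(0, i - 8): i + 9]          # depth values in a neighborhood
    cands = [d for d in layers if np.any(np.isclose(nb, d))]
    errs = []
    for d in cands:
        L = blur_len(d)
        idx = np.arange(i - L // 2, i + L // 2 + 1) % n
        errs.append(abs(est[d][idx].mean() - y[i]))  # local reblur residual
    d_best = cands[int(np.argmin(errs))]
    chosen[i] = d_best
    xhat[i] = est[d_best][i]
```

In region interiors the neighborhood contains a single depth, so the search is trivial; only within a few pixels of a depth discontinuity do multiple candidates compete, which is where the residual test disambiguates the blur extent.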