Volume: 29 | Article ID: art00002
Abstract

We present an algorithm that produces high-speed video of good perceptual quality from a camera array, in realistic scenes that may contain clutter and complex backgrounds. We synchronize the cameras so that each captures an image at a different time offset. The algorithm processes the jittery interleaved frames and produces a stabilized video. Our method consists of two stages: synthesis of views from a virtual camera to correct for differences in camera perspectives, and video compositing to remove remaining artifacts, especially around disocclusions. More explicitly, we process the optical flow of the raw video to estimate, for each raw frame, the disparity to the target virtual frame. We feed these disparities into content-aware warping to synthesize the virtual views, which significantly alleviates the jitter. While the warping fills the disocclusion holes, the filling may not be temporally coherent, leaving small residual jitter visible in static or slowly moving regions around large disocclusions. However, such regions do not benefit from the high frame rate of high-speed video. Therefore, we take the low-frame-rate regions from a single camera and composite them with the fast-moving regions captured by all cameras. The final video is smooth and effectively delivers a high frame rate in high-motion regions.
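For illustration only, the following Python/OpenCV sketch shows the two ideas outlined above: flow-based warping of each raw frame toward a common viewpoint, followed by motion-masked compositing. The plain backward warp and flow-magnitude mask are simplified stand-ins for the paper's disparity estimation, content-aware warping, and video compositing stage; all names and parameters here (warp_to_reference, composite_by_motion, motion_thresh) are hypothetical and are not the authors' implementation.

```python
# Illustrative sketch only, not the authors' implementation.
import cv2
import numpy as np

def warp_to_reference(raw_frame, ref_frame):
    """Backward-warp `raw_frame` into the geometry of `ref_frame` using
    dense optical flow (a stand-in for the paper's disparity estimation
    plus content-aware warping)."""
    g_ref = cv2.cvtColor(ref_frame, cv2.COLOR_BGR2GRAY)
    g_raw = cv2.cvtColor(raw_frame, cv2.COLOR_BGR2GRAY)
    # Flow from the reference view to the raw view: each reference pixel
    # is mapped to its corresponding location in the raw frame.
    flow = cv2.calcOpticalFlowFarneback(g_ref, g_raw, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g_ref.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    warped = cv2.remap(raw_frame,
                       xs + flow[..., 0], ys + flow[..., 1],
                       cv2.INTER_LINEAR)
    return warped, flow

def composite_by_motion(warped, ref_frame, flow, motion_thresh=1.0):
    """Take fast-moving pixels from the warped high-rate frame and
    static/slow pixels from the single reference camera (a crude
    stand-in for the paper's video compositing stage)."""
    motion = np.linalg.norm(flow, axis=2)
    mask = (motion > motion_thresh).astype(np.float32)
    mask = cv2.GaussianBlur(mask, (21, 21), 0)[..., None]  # soften seams
    out = (mask * warped.astype(np.float32)
           + (1.0 - mask) * ref_frame.astype(np.float32))
    return out.astype(np.uint8)

# Hypothetical usage: `raw` is a frame from one of the time-offset cameras,
# `ref` is the temporally nearest frame from the chosen reference camera.
# warped, flow = warp_to_reference(raw, ref)
# stabilized = composite_by_motion(warped, ref, flow)
```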

  Cite this article 

Maha El Choubassi, Oscar Nestares, "Stabilized High-Speed Video from Camera Arrays," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Digital Photography and Mobile Imaging XIII, 2017, pp. 7-13, https://doi.org/10.2352/ISSN.2470-1173.2017.15.DPMI-063

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2017
Electronic Imaging, ISSN 2470-1173, Society for Imaging Science and Technology