Among the various techniques that allow the acquisition of scene depth, Depth from Focus (DfF) is a good candidate for low-resource, real-time embedded systems: it relies on low-complexity processing and requires only a single camera. On the other hand, the large data dependency imposed by the size of the focus cube must be tackled in order to make the algorithm embeddable. This paper presents an algorithm improvement and an architecture optimized for both processing complexity and memory footprint. For full-HD images, this architecture produces depth and confidence maps in real time using roughly 1.4p arithmetic operations per pixel, where p is the number of depth planes, without requiring a multiplier, while the memory footprint amounts to 6% of one frame. All-in-focus images can also be produced on the fly at the cost of an additional two-frame memory buffer.
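To illustrate the figures quoted above, the following is a minimal back-of-envelope sketch (not from the paper) that turns the abstract's cost model into concrete numbers. The full-HD resolution, the ~1.4p operations per pixel, and the 6%-of-a-frame memory footprint come from the abstract; the 30 fps frame rate and 1 byte per stored pixel are assumptions made here for illustration only.

```python
# Back-of-envelope budget derived from the figures quoted in the abstract.
# Assumptions (not from the paper): 30 fps video rate, 1 byte per stored pixel.

FRAME_W, FRAME_H = 1920, 1080          # full-HD resolution (from the abstract)
OPS_PER_PIXEL_PER_PLANE = 1.4          # ~1.4*p arithmetic ops per pixel (from the abstract)
MEM_FRACTION_OF_FRAME = 0.06           # memory footprint ~6% of one frame (from the abstract)
FPS = 30                               # assumed video frame rate
BYTES_PER_PIXEL = 1                    # assumed pixel storage size

def dff_budget(depth_planes: int) -> dict:
    """Estimate arithmetic throughput and buffer size for p depth planes."""
    pixels = FRAME_W * FRAME_H
    ops_per_frame = OPS_PER_PIXEL_PER_PLANE * depth_planes * pixels
    return {
        "ops_per_second": ops_per_frame * FPS,
        "memory_bytes": MEM_FRACTION_OF_FRAME * pixels * BYTES_PER_PIXEL,
    }

if __name__ == "__main__":
    for p in (8, 16, 32):
        b = dff_budget(p)
        print(f"p={p:2d}: {b['ops_per_second'] / 1e9:.2f} Gops/s, "
              f"{b['memory_bytes'] / 1024:.0f} KiB buffer")
```

Under these assumptions, the arithmetic load scales linearly with the number of depth planes, while the buffer size stays fixed at a small fraction of one frame, which is what makes the approach attractive for embedded targets.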