For quality inspection in different industries, where objects may be transported at several m/s, acquisition and computation speed for 2D and 3D imaging is essential, even at resolutions on the micrometer (µm) scale. AIT's well-established Inline Computational Imaging (ICI) system has until now used standard multi-line-scan cameras to build a linear light field stack. Unfortunately, this image readout mode is supported by only a few camera manufacturers, effectively limiting the application of the ICI software. However, industrial-grade area scan cameras now offer frame rates of several hundred FPS, so a novel method has been developed that matches previous speed requirements while upholding, and in some cases surpassing, previous 3D reconstruction results even for challenging objects. AIT's new area scan ICI can be used with most standard industrial cameras and many different light sources. Nevertheless, AIT has also developed its own light source that illuminates a scene by high-frequency strobing tailored to this application. The new algorithms employ several consistency checks over a range of baselines and feature channels and yield robust confidence values that ultimately improve subsequent 3D reconstruction results. Their lean output is well-suited for real-time applications while retaining information from four different illumination directions. Qualitative comparisons with our previous method in terms of 3D reconstruction, speed and confidence are shown at a typical sampling of 22 µm/pixel. In the future, this fast and robust inline inspection scheme will be extended to microscopic resolutions and to several orthogonal axes of transport.
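The abstract does not detail the consistency checks, but the general idea of cross-checking estimates from several baselines to obtain per-pixel confidence can be sketched as follows. This is a hypothetical illustration, not AIT's actual algorithm: disparities measured at different baselines are rescaled to a common reference baseline (depth is proportional to baseline over disparity), and the fraction of baselines agreeing with the median serves as a confidence value.

```python
import numpy as np

def baseline_consistency(disparities, baselines, tol=0.5):
    """Hypothetical multi-baseline consistency check.

    disparities: list of 2D arrays, one disparity map per baseline (pixels).
    baselines:   matching list of baseline lengths (same unit each).
    Returns (fused_estimate, per-pixel confidence in [0, 1]).
    """
    # Normalize every disparity map to the first (reference) baseline:
    # for a fixed depth, disparity scales linearly with the baseline.
    ref_b = baselines[0]
    scaled = np.stack([d * (ref_b / b) for d, b in zip(disparities, baselines)])
    fused = np.median(scaled, axis=0)
    # A measurement is consistent if it lies within `tol` pixels of the median;
    # the fraction of consistent baselines is the confidence.
    consistent = np.abs(scaled - fused) <= tol
    confidence = consistent.mean(axis=0)
    return fused, confidence

# Toy usage: three baselines, with the longest one corrupted at one pixel.
d1 = np.full((2, 2), 10.0)
d2 = d1 * 2.0                # twice the baseline -> twice the disparity
d3 = d1 * 4.0
d3[0, 0] = 100.0             # outlier
est, conf = baseline_consistency([d1, d2, d3], [10.0, 20.0, 40.0])
```

The outlier pixel keeps the correct fused value (the median is robust) but receives a reduced confidence, which a subsequent 3D reconstruction step could use for weighting.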
Monocular depth estimation is an important task in scene understanding, with applications to pose estimation, segmentation and autonomous navigation. Deep learning methods relying on multi-level features are currently used to extract the local information needed to infer depth from a single RGB image. We present an efficient architecture that utilizes features from multiple levels with fewer connections than previous networks. Our model achieves comparable scores for monocular depth estimation while reducing memory requirements and computational cost.
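The fusion scheme itself is not specified in the abstract, but the underlying idea of combining multi-level features with few connections can be sketched in plain NumPy. In this hypothetical illustration, each feature map from a coarser level is reduced to one channel by a single 1x1 projection, upsampled to full resolution, and summed into a dense prediction; using one lightweight projection per level, instead of dense cross-level connections, is what keeps the memory and compute footprint small.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling of a 2D map by an integer factor."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def fuse_levels(features, weights):
    """Hypothetical multi-level fusion with one connection per level.

    features: dict {downsampling factor f: (H/f, W/f, C) feature map}.
    weights:  dict {f: (C,) projection vector}, i.e. one 1x1 'conv' each.
    Returns a full-resolution single-channel prediction map.
    """
    fused = None
    for f, feat in features.items():
        proj = feat @ weights[f]            # 1x1 projection to one channel
        full = upsample_nearest(proj, f)    # bring to full output resolution
        fused = full if fused is None else fused + full
    return fused

# Toy usage: an 8x8 output fused from levels at 1/2 and 1/4 resolution.
rng = np.random.default_rng(0)
feats = {2: rng.standard_normal((4, 4, 3)),
         4: rng.standard_normal((2, 2, 3))}
w = {2: np.ones(3), 4: np.ones(3)}
depth = fuse_levels(feats, w)
```

A trained network would of course learn the projection weights and use learned upsampling; the sketch only shows the connectivity pattern, with memory growing linearly in the number of levels.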