
LED flicker is a persistent artifact in imaging, where lights modulated via Pulse Width Modulation (PWM) above 90 Hz appear steady to humans but produce temporal intensity variations in captured video. While hardware mitigations like split-pixel architectures reduce flicker, they introduce a fundamental trade-off with motion blur. Progress in learned LED flicker mitigation (LFM) is currently hindered by a lack of public ground-truth datasets. We address this gap with ISET-LFM, an open-source physics-based simulation framework that models LED flicker in driving scenes. Built on the ISET ecosystem, our pipeline combines camera motion simulation with an analytical flicker model to generate realistic dual-exposure frame sequences alongside flicker-free ground truth. We provide a synthetic dataset of scene radiance, enabling benchmarking and training of LFM algorithms across diverse sensor and ISP architectures. The code and dataset are available at https://github.com/AyushJam/iset-lfm and https://purl.stanford.edu/wd776hn7919, respectively.
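The flicker mechanism this abstract describes can be illustrated with a minimal numeric sketch: the fraction of a PWM cycle an LED is "on" during each exposure window varies from frame to frame when the exposure is short relative to the PWM period, while a long exposure averages over many cycles. This is a generic illustration under assumed parameters, not the ISET-LFM analytical model; the function name and the 100 Hz / 30 fps values are illustrative only.

```python
import numpy as np

def pwm_exposure_fraction(t_start, t_exp, f_pwm, duty):
    """Fraction of the exposure window during which a PWM-driven LED is on.

    Illustrative model: the LED emits a square wave at frequency f_pwm with
    the given duty cycle; we densely sample it over [t_start, t_start + t_exp].
    """
    n = 10000  # samples across the exposure window
    t = t_start + np.linspace(0.0, t_exp, n, endpoint=False)
    phase = (t * f_pwm) % 1.0          # position within the PWM cycle, in [0, 1)
    return float((phase < duty).mean())  # 'on' while phase is below the duty cycle

f_pwm, duty, fps = 100.0, 0.5, 30.0   # assumed 100 Hz LED, 50% duty, 30 fps video

# Short exposure (1 ms) captures only a sliver of each PWM cycle,
# so brightness varies frame to frame: visible flicker.
short = [pwm_exposure_fraction(k / fps, 1e-3, f_pwm, duty) for k in range(8)]

# Long exposure (50 ms) spans several full PWM cycles,
# so brightness settles near the duty cycle: nearly flicker-free.
long_ = [pwm_exposure_fraction(k / fps, 50e-3, f_pwm, duty) for k in range(8)]
```

The short/long pair mirrors the dual-exposure setting the abstract mentions: the long frame is close to the flicker-free value (the duty cycle), while the short frame swings between fully lit and fully dark depending on where the exposure lands in the PWM cycle.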

For quality inspection across different industries, where objects may be transported at several m/s, acquisition and computation speed for 2D and 3D imaging, even at resolutions on the micrometer (µm) scale, is essential. AIT's well-established Inline Computational Imaging (ICI) system has until now used standard multi-linescan cameras to build a linear light-field stack. Unfortunately, this image readout mode is supported by only a few camera manufacturers, effectively limiting the application of ICI software. However, industrial-grade area scan cameras now offer frame rates of several hundred FPS, so a novel method has been developed that matches previous speed requirements while upholding, and eventually surpassing, previous 3D reconstruction results even for challenging objects. AIT's new area scan ICI can be used with most standard industrial cameras and many different light sources. Nevertheless, AIT has also developed its own light source that illuminates a scene with high-frequency strobing tailored to this application. The new algorithms employ several consistency checks across a range of baselines and feature channels and yield robust confidence values that ultimately improve subsequent 3D reconstruction results. Their lean output is well-suited for real-time applications while retaining information from four different illumination directions. Qualitative comparisons with our previous method in terms of 3D reconstruction, speed, and confidence are shown at a typical sampling of 22 µm/pixel. In the future, this fast and robust inline inspection scheme will be extended to microscopic resolutions and to several orthogonal axes of transport.

Monocular depth estimation is an important task in scene understanding, with applications to pose estimation, segmentation, and autonomous navigation. Deep learning methods relying on multi-level features are currently used to extract the local information needed to infer depth from a single RGB image. We present an efficient architecture that utilizes features from multiple levels with fewer connections than previous networks. Our model achieves comparable scores for monocular depth estimation with better efficiency in terms of memory requirements and computational burden.