Light Field (LF) microscopy has emerged as a fast-growing field of interest in recent years due to its proven ability to capture in-vivo samples from multiple perspectives. In this work, we present a framework for volume reconstruction from LF images created following the setup of a Fourier Integral Microscope (FIMic). Our approach does not use real images; instead, we use a dataset generated in Blender that mimics the capturing process of a FIMic. The resulting images have been used to create a Focal Stack (FS) of the LF, from which Epipolar Plane Images (EPIs) have been extracted. The FS and the EPIs have been used to train three different deep neural networks based on the classic U-Net architecture. The final volumetric reconstruction is obtained by averaging the probability maps produced by these networks.
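The final fusion step described above can be sketched as a simple per-voxel average. The abstract only specifies the averaging itself; the array shapes, the random placeholder predictions, and the 0.5 threshold below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

# Hypothetical probability volumes (depth, height, width) standing in for
# the outputs of the three U-Net-based networks; values are random
# placeholders, since the real predictions come from the trained models.
rng = np.random.default_rng(0)
p_fs = rng.random((8, 16, 16))     # network trained on the focal stack
p_epi_h = rng.random((8, 16, 16))  # network trained on horizontal EPIs
p_epi_v = rng.random((8, 16, 16))  # network trained on vertical EPIs

# Volumetric reconstruction: mean of the per-voxel probabilities, here
# followed by an (assumed) 0.5 threshold to obtain a binary occupancy map.
p_mean = (p_fs + p_epi_h + p_epi_v) / 3.0
volume = p_mean > 0.5
```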
Over the last 25 years, we have been involved in the field of 3D image processing research. We began our research on 3D image processing with "Data Compression of an Autostereoscopic 3-D Image," presented at the SPIE SD&A session in 1994. We first proposed the ray space representation of 3D images, a common data format for various 3D capturing and displaying devices. Based on the ray space representation, we have conducted research on many aspects of 3D image processing, including ray space coding and data compression, view interpolation, ray space acquisition, display, and a full system from capture to display of ray space. In this paper, we introduce some of our research over these 25 years in 3D image processing, from capture to display.
We expand the viewing zone of a full-HD super-multiview display based on a time-division multiplexing parallax barrier. A super-multiview display is a kind of light field display that induces focal accommodation of the viewer by projecting multiple light rays into each pupil. The problem with the conventional system is its limited viewing zone in the depth direction. To solve this problem, we introduce adaptive time-division: quadruplexing is applied when the viewer is farthest, quintuplexing when the viewer is in the middle, and sextuplexing when the viewer is nearest. We have confirmed with a prototype system that the proposed system secures a deep viewing zone with little crosstalk, as expected.
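The adaptive time-division rule above can be sketched as a small distance-to-factor mapping. The abstract only states which multiplexing factor corresponds to which depth range; the distance thresholds and the function name below are illustrative assumptions.

```python
def multiplexing_factor(viewer_distance_mm, near=600.0, far=1000.0):
    """Select the time-division multiplexing factor from viewer distance.

    The near/far thresholds (in millimetres) are assumed values for
    illustration; the abstract specifies only that sextuplexing is used
    for the nearest range, quintuplexing for the middle range, and
    quadruplexing for the farthest range.
    """
    if viewer_distance_mm <= near:
        return 6  # sextuplexing: viewer is nearest
    elif viewer_distance_mm >= far:
        return 4  # quadruplexing: viewer is farthest
    else:
        return 5  # quintuplexing: viewer is in the middle range
```

A head-tracking loop would call such a function each frame and reconfigure the parallax-barrier timing whenever the returned factor changes.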