Keywords: Autoencoders; Automatic parameter tuning method; Audio; CARS microspectroscopy; Cell imaging; Contrastive Learning; Contraction Mapping; coherent diffraction imaging; Convolutional Neural Network; deep image prior; data fidelity gradient; denoising; deep learning; Exposure Bracketing Strategy; High Dynamic Range; inverse problems; image reconstruction; Iterative Algorithm; image analysis; joint model-based and learning-based framework; Linear Classifier; Light-field Display (LfD); Linear inverse problems; Machine Learning; Multi-view Rendering; Multi-view Processing Unit (MvPU); model-based imaging; Multimodal; Maximum Likelihood Classifier; Multivariate curve resolution; nonlinear inverse problems; Plug-and-Play priors; plug-and-play (PnP) reconstruction; phase retrieval; Radiance Image; Radiance Image Rendering; subpixel sample properties; Stein's unbiased risk estimation; ultrasound elasticity reconstruction; Unsupervised; Unmixing; Video; Volume Rendering; x-ray phase contrast dark field tomography
Pages A14-1 - A14-15, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

More than ever before, computers and computation are critical to the image formation process. Across diverse applications and fields, remarkably similar imaging problems appear, requiring sophisticated mathematical, statistical, and algorithmic tools. This conference focuses on imaging as a marriage of computation with physical devices. It emphasizes the interplay between mathematical theory, physical models, and computational algorithms that enable effective current and future imaging systems. Contributions to the conference are solicited on topics ranging from fundamental theoretical advances to detailed system-level implementations and case studies.

Digital Library: EI
Published Online: January 2023
Pages 153-1 - 153-6, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

Phase retrieval (PR) consists of recovering complex-valued objects from their oversampled Fourier magnitudes and occupies a central place in scientific imaging. A critical issue in PR is the typical nonconvexity of natural formulations and the associated bad local minimizers. The issue is exacerbated when the support of the object is not precisely known and hence must be overspecified in practice. Practical methods for PR therefore involve combinations of algorithms, e.g., multiple cycles of hybrid input-output (HIO) + error reduction (ER), to avoid the bad local minimizers and attain reasonable speed, together with heuristics to refine the support of the object, e.g., the famous shrinkwrap trick. Overall, the combined algorithms and the support-refinement heuristics introduce multiple algorithm hyperparameters, to which the recovery quality is often sensitive. In this work, we propose a novel PR method that parameterizes the object as the output of a learnable neural network, i.e., deep image prior (DIP). For complex-valued objects in PR, we can flexibly parameterize the magnitude and phase, or the real and imaginary parts, separately by two DIPs. We show that this simple idea, free from multi-hyperparameter tuning and support-refinement heuristics, can achieve performance superior to gold-standard PR methods. For the session: Computational Imaging using Fourier Ptychography and Phase Retrieval.
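The HIO + ER baseline pipeline that the abstract contrasts against can be sketched as follows; the 1D toy setup, support mask, and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def magnitude_projection(x, mag):
    # replace Fourier magnitudes with the measured ones, keep current phases
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(mag * np.exp(1j * np.angle(X))))

def hio_er(mag, support, n_hio=300, beta=0.9, seed=0):
    # hybrid input-output iterations followed by one error-reduction step
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(mag.shape[0])
    for _ in range(n_hio):
        xp = magnitude_projection(x, mag)
        ok = support & (xp >= 0)             # object-domain constraints hold
        x = np.where(ok, xp, x - beta * xp)  # HIO feedback elsewhere
    xp = magnitude_projection(x, mag)        # ER: enforce constraints exactly
    return np.where(support & (xp >= 0), xp, 0.0)
```

The DIP approach replaces this constraint-projection loop (and its hyperparameters beta, cycle schedule, support refinement) with gradient descent on the weights of a network whose output is the object estimate.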

Digital Library: EI
Published Online: January 2023
Pages 156-1 - 156-6, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

A lightweight learning-based exposure bracketing strategy is proposed in this paper for high dynamic range (HDR) imaging without access to camera RAW. Some low-cost, power-efficient cameras, such as webcams, video surveillance cameras, sport cameras, mid-tier cellphone cameras, and navigation cameras on robots, can only provide access to 8-bit low dynamic range (LDR) images. Exposure fusion is a classical approach to capturing HDR scenes by fusing images taken with different exposures into an 8-bit tone-mapped HDR image. A key question is what the optimal set of exposure settings is to cover the scene dynamic range and achieve a desirable tone. The proposed lightweight neural network predicts these exposure settings for a 3-shot exposure bracketing, given the input irradiance information from 1) the histograms of an auto-exposure LDR preview image, and 2) the maximum and minimum levels of the scene irradiance. By avoiding processing of the preview image stream, and the circuitous route of first estimating the scene HDR irradiance and then tone-mapping it to 8-bit images, the proposed method offers a more practical HDR enhancement for real-time and on-device applications. Experiments on a number of challenging images demonstrate the advantages of our method over other state-of-the-art methods both qualitatively and quantitatively.
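The classical exposure fusion that consumes the predicted bracket can be sketched in a single-scale grayscale form; the well-exposedness weighting and the synthetic stack below are illustrative assumptions (production pipelines typically add contrast/saturation weights and multi-scale pyramid blending).

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight favouring mid-tone pixels; img is assumed in [0, 1]
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(stack):
    # stack: (n_shots, H, W) LDR images in [0, 1]; per-pixel weighted average
    w = np.stack([well_exposedness(im) for im in stack])
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)
    return (w * stack).sum(axis=0)
```

Because the fused result is a convex combination of the inputs, it stays in the 8-bit displayable range with no explicit tone-mapping step.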

Digital Library: EI
Published Online: January 2023
Pages 157-1 - 157-4, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

X-Ray Phase Contrast Imaging (XPCI) augments absorption radiography with additional information related to the refractive and scattering properties of a sample. Grating-based XPCI allows broadband laboratory x-ray sources to be used, increasing the technique’s accessibility. However, grating-based techniques require repeatedly moving a grating and capturing an image at each location. Additionally, the gratings themselves are absorptive, reducing x-ray flux. As a result, data acquisition times and radiation doses present a hurdle to the practical application of XPCI tomography. We present a plug-and-play (PnP) reconstruction method for XPCI dark field tomographic reconstruction with sparse views. Dark field XPCI radiographs contain information about a sample’s microstructure and scatter. The dark field reveals subpixel sample properties, including crystalline structure, graininess, and material interfaces. This makes dark field images distributed differently from traditional absorption radiographs and natural imagery. PnP methods give greater control over reconstruction regularization compared to traditional iterative reconstruction techniques, which is especially useful given the dark field’s unique distribution. PnP allows us to collect dark field tomographic datasets with fewer projections, increasing XPCI’s practicality by reducing the amount of data needed for 3D reconstruction.
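The PnP idea can be sketched with half-quadratic splitting on a toy 1D linear inverse problem; the moving-average filter below stands in for the learned denoiser, and the underdetermined operator, penalty weight, and iteration count are illustrative assumptions rather than the paper's tomographic setup.

```python
import numpy as np

def box_denoiser(v, k=3):
    # stand-in for a learned denoiser: simple moving-average smoothing
    return np.convolve(v, np.ones(k) / k, mode="same")

def pnp_hqs(A, y, mu=1.0, n_iter=30):
    # alternate a closed-form data-fidelity step with a denoising step
    n = A.shape[1]
    v = np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(n_iter):
        # x-step: argmin ||Ax - y||^2 + mu ||x - v||^2
        x = np.linalg.solve(AtA + mu * np.eye(n), Aty + mu * v)
        # v-step: the denoiser replaces an explicit prior's proximal operator
        v = box_denoiser(x)
    return x
```

Swapping `box_denoiser` for a network trained on dark-field statistics is exactly the control over regularization the abstract highlights.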

Digital Library: EI
Published Online: January 2023
Pages 168-1 - 168-4, 2023. This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Volume 35
Issue 14
Abstract

Coherent anti-Stokes Raman scattering (CARS) microspectroscopy is a powerful tool for label-free cell imaging thanks to its ability to acquire a rich amount of information. An important family of operations applied to such data is multivariate curve resolution (MCR), which aims to find the main components of a dataset and compute their spectra and concentrations in each pixel. Recently, autoencoders have begun to be studied as a way to accomplish MCR with dense and convolutional models. However, many questions, such as result variability or the choice of reconstruction metric, remain open, and applications have been limited to hyperspectral imaging. In this article, we present a nonlinear convolutional encoder combined with a linear decoder to apply MCR to CARS microspectroscopy. We conclude with a study of the result variability induced by the encoder initialization.
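The underlying bilinear MCR model D ≈ CS, which the autoencoder's linear decoder mirrors, can be sketched with classical alternating least squares under nonnegativity; the clipping-based projection and the two-component toy data in the usage below are illustrative assumptions.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    # D: (n_pixels, n_channels) spectral data, factored as D ~= C @ S
    # C: nonnegative concentrations per pixel, S: nonnegative spectra
    rng = np.random.default_rng(seed)
    S = rng.random((n_components, D.shape[1]))
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 0, None)  # concentrations >= 0
        S = np.clip(np.linalg.pinv(C) @ D, 0, None)  # spectra >= 0
    return C, S
```

In the autoencoder formulation, the nonlinear convolutional encoder replaces the C-update, while the linear decoder's weights play the role of S.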

Digital Library: EI
Published Online: January 2023
Pages 169-1 - 169-6, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

The light-field display (LfD) radiance image is a raster description of a light-field in which every pixel represents a unique ray within a 3D volume. The LfD radiance image can be projected through an array of micro-lenses to form a perspective-correct 3D aerial image visible to all viewers within the LfD's projection frustum. The synthetic LfD radiance image is comparable to the radiance image captured by a plenoptic/light-field camera but is rendered from a 3D model or scene. Synthetic radiance image rasterization is an example of extreme multi-view rendering, as the 3D scene must be rendered from many (thousands to millions of) viewpoints into small viewports per update of the light-field display. However, GPUs and their accompanying APIs (OpenGL, DirectX, Vulkan) generally expect to render a 3D scene from one viewpoint into a single large viewport/framebuffer. LfD radiance image rendering is therefore extremely time-consuming and compute-intensive. This paper reviews the novel, full-parallax BowTie Radiance Image Rasterization algorithm, which can be embedded within an LfD to accelerate light-field radiance image rendering for real-time update.
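Assuming the common layout in which the radiance image is tiled into square per-lenslet blocks (hogels) — an illustrative assumption, not necessarily the BowTie layout — the pixel-to-ray bookkeeping that makes every pixel a unique ray, and the render-pass count that makes this "extreme multi-view rendering", can be sketched as:

```python
def pixel_to_ray(px, py, hogel_size):
    # which lenslet (hogel) the pixel belongs to, and which ray direction
    # within that lenslet it encodes
    hogel = (px // hogel_size, py // hogel_size)
    direction = (px % hogel_size, py % hogel_size)
    return hogel, direction

def hogel_count(width, height, hogel_size):
    # each hogel is rendered from its own viewpoint into a tiny viewport,
    # so this is the number of render passes per light-field update
    return (width // hogel_size) * (height // hogel_size)
```

For example, a 4096x4096 radiance image with 16-pixel hogels already requires 65,536 viewpoints per update, which is why conventional one-viewpoint GPU pipelines struggle.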

Digital Library: EI
Published Online: January 2023
Pages 170-1 - 170-6, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

We propose two automatic parameter tuning methods for Plug-and-Play (PnP) algorithms that use CNN denoisers. We focus on linear inverse problems and propose an iterative algorithm to calculate generalized cross-validation (GCV) and Stein’s unbiased risk estimator (SURE) functions for a half-quadratic splitting-based PnP (PnP-HQS) algorithm that uses a state-of-the-art CNN denoiser. The proposed methods leverage forward mode automatic differentiation to calculate the GCV and SURE functions and tune the parameters of a PnP-HQS algorithm automatically by minimizing the GCV and SURE functions using grid search. Because linear inverse problems appear frequently in computational imaging, the proposed methods can be applied in various domains. Furthermore, because the proposed methods rely on GCV and SURE functions, they do not require access to the ground truth image and do not require collecting an additional training dataset, which is highly desirable for imaging applications for which acquiring data is costly and time-consuming. We evaluate the performance of the proposed methods on deblurring and MRI experiments and show that the GCV-based proposed method achieves comparable performance to that of the oracle tuning method that adjusts the parameters by maximizing the structural similarity index between the ground truth image and the output of the PnP algorithm. We also show that the SURE-based proposed method often leads to worse performance compared to the GCV-based proposed method.
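For the classical Tikhonov-regularized linear problem, GCV has a closed form and grid-search tuning can be sketched as below; the paper's contribution is computing analogous GCV/SURE quantities for PnP-HQS with a CNN denoiser via forward-mode automatic differentiation, which this toy deliberately omits.

```python
import numpy as np

def gcv(A, y, lam):
    # GCV(lam) = m * ||(I - H) y||^2 / (m - tr(H))^2 for the influence
    # matrix H = A (A^T A + lam I)^-1 A^T of the Tikhonov solution
    m, n = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
    r = y - H @ y
    return m * (r @ r) / (m - np.trace(H)) ** 2

def tune_by_gcv(A, y, grid):
    # grid search: pick the regularization weight minimizing GCV
    return min(grid, key=lambda lam: gcv(A, y, lam))
```

Note that neither the ground truth nor a training set appears anywhere above, which is the property the abstract emphasizes.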

Digital Library: EI
Published Online: January 2023
Pages 171-1 - 171-6, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

Ultrasound elasticity images, which enable the visualization of quantitative maps of tissue stiffness, can be reconstructed by solving an inverse problem. Classical model-based approaches for ultrasound elastography use deterministic finite element methods (FEMs) to incorporate the governing physical laws, which leads to poor performance in low-SNR conditions. Moreover, these approaches utilize approximate linear forward models, discretized by FEMs, to describe the underlying physics governed by partial differential equations (PDEs). To achieve highly accurate stiffness images, it is essential to compensate for the errors induced by noisy measurements and inaccurate forward models. In this regard, we propose a joint model-based and learning-based framework for estimating the elasticity distribution by solving a regularized optimization problem. To address noise, we introduce a statistical representation of the imaging system, which incorporates the noise statistics as a signal-dependent correlated noise model. Moreover, to compensate for the model errors, we introduce an explicit data-driven correction model, which can be integrated with any regularization term. This constrained optimization problem is solved using fixed-point gradient descent, where the analytical gradient of the inaccurate data-fidelity term is corrected using a neural network, while regularization is achieved by data-driven unrolled regularization by denoising (RED). Both networks are jointly trained in an end-to-end manner.
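The shape of the update the abstract describes — a data-fidelity gradient plus a RED-style regularization gradient — can be sketched on a toy linear problem; the moving-average filter stands in for the learned denoiser and the gradient-correction network is omitted, so everything below is an illustrative assumption rather than the authors' FEM-based system.

```python
import numpy as np

def smooth_denoiser(x, k=3):
    # stand-in for the learned denoiser D(.) used in the RED regularizer
    return np.convolve(x, np.ones(k) / k, mode="same")

def red_gradient_descent(A, y, lam=0.5, eta=0.3, n_iter=300):
    # fixed-point iteration: x <- x - eta * (grad_data + lam * grad_red)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        data_grad = A.T @ (A @ x - y)       # data-fidelity gradient
        red_grad = x - smooth_denoiser(x)   # RED gradient: x - D(x)
        x = x - eta * (data_grad + lam * red_grad)
    return x
```

In the paper's framework, `data_grad` would additionally pass through a trained correction network to compensate for forward-model error before the update is applied.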

Digital Library: EI
Published Online: January 2023
Pages 172-1 - 172-5, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

A novel iterative linear classification algorithm is developed from a maximum likelihood (ML) linear classifier. The main contribution of this paper is the discovery that a well-known maximum likelihood linear classifier with regularization is the solution of a contraction mapping for an acceptable range of values of the regularization parameter. Hence, a novel iterative scheme is proposed that converges to a fixed point: the globally optimal solution. To the best of our knowledge, this formulation has not been discovered before. Furthermore, the proposed iterative solution converges to the fixed point at a rate faster than the traditional gradient descent technique. The performance of the proposed iterative solution is compared to conventional gradient descent methods on linearly and nonlinearly separable data in terms of both convergence speed and overall classification performance.
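The core idea — that a regularized linear classifier's solution is the fixed point of a contraction, so simple iteration converges to the global optimum — can be illustrated with a ridge-regularized least-squares classifier; the specific mapping and step size below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def fixed_point_classifier(X, y, lam=1.0, n_iter=2000):
    # T(w) = w - alpha * grad of (||Xw - y||^2 / 2 + lam ||w||^2 / 2)
    # is a contraction when alpha < 2 / (||X||_2^2 + lam), so iterating
    # it converges to the unique global minimizer
    n = X.shape[1]
    alpha = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam)
    w = np.zeros(n)
    for _ in range(n_iter):
        w = w - alpha * (X.T @ (X @ w - y) + lam * w)
    return w
```

By the Banach fixed-point theorem the iterates converge geometrically to the closed-form solution (X^T X + lam I)^-1 X^T y, regardless of initialization.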

Digital Library: EI
Published Online: January 2023
Pages 173-1 - 173-6, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 14
Abstract

In this paper, we propose a multimodal unsupervised video learning algorithm designed to incorporate information from any number of modalities present in the data. We cooperatively train a network corresponding to each modality: at each stage of training, one of these networks is selected to be trained using the output of the other networks. To verify our algorithm, we train a model using RGB, optical flow, and audio. We then evaluate the effectiveness of our unsupervised learning model by performing action classification and nearest neighbor retrieval on a supervised dataset. We compare this triple modality model to contrastive learning models using one or two modalities, and find that using all three modalities in tandem provides a 1.5% improvement in UCF101 classification accuracy, a 1.4% improvement in R@1 retrieval recall, a 3.5% improvement in R@5 retrieval recall, and a 2.4% improvement in R@10 retrieval recall as compared to using only RGB and optical flow, demonstrating the merit of utilizing as many modalities as possible in a cooperative learning model.
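A minimal two-modality contrastive objective of the kind such models optimize can be sketched as follows; the InfoNCE form, the temperature value, and the NumPy formulation are illustrative assumptions (the paper's cooperative scheme rotates which modality network is trained against the others' outputs).

```python
import numpy as np

def info_nce(za, zb, tau=0.1):
    # za, zb: (batch, dim) embeddings from two modalities; row i of za and
    # row i of zb come from the same clip and form the positive pair
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                           # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                # cross-entropy on matches
```

Embeddings from aligned modalities (e.g., RGB and the flow of the same clip) yield a low loss, while mismatched batches yield a loss near log(batch size).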

Digital Library: EI
Published Online: January 2023
