Keywords: absolute value kernel; Autocorrelation; biomedical imaging; computed tomography; Canon Hack Development Kit (CHDK); Cryo-electron microscopy; Continuous latent variables; Cryo-Electron Microscopy (Cryo-EM); Denoising; Density map reconstruction; Deep Learning; Dark Channel Prior; Dual-energy CT image; Deep learning; Eulerian Video Magnification; Edge-preserving total variation; event-based vision sensor; fast robust correlation; facial recognition; GAN; GMPHD filter; Heart rate; Imaging Deblurring; image match; Image-Based Measurements; Image Analysis; Image Reconstruction; Inverse Problems; Joint optimization; LIDAR; multi-target tracking; Model-based Imaging; Motion correction; Metal artifact reduction; Neural network; Photoplethysmography; Phenomics; Plant Height Estimation; Point Cloud; Plant Width Estimation; Plant Image Processing; Point source localization; radiography; Rotation invariant features; Split-Bregman optimization method; surveillance; Super resolution; Signal Processing; Unassigned distance geometry problem; Video Feature Extraction; vehicle driver imaging; Wavelets; 3D refinement
Pages A13-1 - A13-8,  © Society for Imaging Science and Technology 2019
Digital Library: EI
Published Online: January  2019
Pages 127-1 - 127-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

Tracking a large number of small, similar, high-speed, high-agility targets is a challenging problem for current tracking systems that rely on traditional visual sensors. Such targets necessitate very high tracker update rates to keep up with meandering and mutually occluding target paths. Event-based vision sensors may offer a solution to this problem, as they report only “event-based” pixelwise changes in intensity as they occur, with a time resolution approaching 1 μs [1], providing data that is much sparser and higher in time resolution than that of traditional vision systems. However, this class of sensor presents unique challenges; for example, a single object in the sensor’s field of view may produce multiple synchronous or nearly synchronous events. In addition, performing direct measurement-to-track association for event data on microsecond-to-millisecond timescales introduces problematic computational burdens for scenarios involving large numbers of targets. The work described in this paper is twofold. First, we define and apply an event-clustering procedure to raw events to reduce the amount of data passed into the tracker. This transformation from events to event-clusters provides (a) discrimination between event-clusters that correspond to true targets and those that do not, and (b) a reduction in tracking computation time. Second, we define and apply a partial-update Gaussian mixture probability hypothesis density (GMPHD) filter [2] for tracking using event-cluster data. We demonstrate increased computational performance over the standard GMPHD filter while achieving comparable tracking performance per the optimal sub-pattern assignment (OSPA) metric [3].
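The event-clustering step described above can be sketched as a greedy spatiotemporal grouping. This is a simplified illustration rather than the authors' exact procedure; the thresholds `eps_px` and `eps_t` are hypothetical parameters:

```python
def cluster_events(events, eps_px=3.0, eps_t=1e-3):
    """Greedy spatiotemporal clustering of sensor events (x, y, t).
    An event within eps_px pixels and eps_t seconds of a cluster
    centroid is merged into that cluster; otherwise it seeds a new one."""
    clusters = []  # each: [sum_x, sum_y, last_t, count]
    for x, y, t in sorted(events, key=lambda e: e[2]):
        for c in clusters:
            cx, cy = c[0] / c[3], c[1] / c[3]
            if abs(t - c[2]) <= eps_t and (x - cx) ** 2 + (y - cy) ** 2 <= eps_px ** 2:
                c[0] += x; c[1] += y; c[2] = t; c[3] += 1
                break
        else:
            clusters.append([x, y, t, 1])
    # Report (centroid_x, centroid_y, event_count) per cluster.
    return [(c[0] / c[3], c[1] / c[3], c[3]) for c in clusters]
```

Clusters containing only a few events can then be discarded as sensor noise, and each surviving centroid becomes a single measurement for the tracker, in the spirit of the data reduction the paper describes.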

Digital Library: EI
Published Online: January  2019
Pages 132-1 - 132-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

Photoplethysmography (PPG) is the detection of blood flow or pressure by optical means. The most common method involves direct skin-contact measurement of light from an LED. However, the small color changes in skin under normal lighting conditions, as recorded by conventional video, potentially allow passive, noncontact PPG. Eulerian Video Magnification (EVM) was used to demonstrate that small color changes in a subject’s face can be amplified to make them visible to a human observer. A variety of methods have been applied to extract heart rate from video. The signal obtained by PPG is not a simple sinusoid but has a relatively complex structure, which in video is degraded by ambient lighting variations, motion, noise, and a low sampling rate. Although EVM and many other analysis methods in the literature essentially operate in the frequency domain, fitting the video data to their models requires extensive preprocessing. In this paper, a time-based autocorrelation method that exhibits superior noise rejection and resolution for detecting quasi-periodic waveforms is applied directly to the video signal. The method described in the current work avoids both the preprocessing computational cost and the potential signal distortions.
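The time-domain idea can be sketched on a frame-rate-sampled intensity trace: the lag of the autocorrelation peak gives the dominant period, hence the heart rate. This is an illustrative sketch, not the paper's implementation; the function and parameter names are assumptions:

```python
import math

def autocorr_peak_lag(signal, min_lag, max_lag):
    """Return the lag in [min_lag, max_lag] with the highest
    normalized autocorrelation -- the dominant period in samples."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    energy = sum(v * v for v in x)
    best_lag, best_r = min_lag, -1.0
    for lag in range(min_lag, max_lag + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag)) / energy
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# Synthetic "pulse" trace: 1.2 Hz sampled at 30 fps -> period of 25 frames,
# plus a small high-frequency disturbance standing in for noise.
fps, f_hr = 30.0, 1.2
sig = [math.sin(2 * math.pi * f_hr * t / fps) + 0.1 * math.sin(7.0 * t)
       for t in range(300)]
lag = autocorr_peak_lag(sig, 10, 60)   # 25 frames
bpm = 60.0 * fps / lag                 # 72 beats per minute
```

Because the peak search is over lags rather than frequency bins, quasi-periodic waveforms with harmonics still yield a clean period estimate, which is the noise-rejection property the abstract highlights.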

Digital Library: EI
Published Online: January  2019
Pages 133-1 - 133-5,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

Cryo-electron microscopy (Cryo-EM) is a popular imaging modality used to visualize a wide range of bio-molecules in their 3D form. The goal in Cryo-EM is to reconstruct the 3D density map of a molecule from projection images taken from random and unknown orientations. A critical step in the Cryo-EM pipeline is 3D refinement. In this procedure, an initial 3D map and a set of estimated projection orientations are refined to obtain higher resolution maps. State-of-the-art refinement techniques rely on projection matching steps to refine the initial projection orientations. Unfortunately, projection matching is computationally inefficient, and it requires a finite discretization of the space of orientations. To avoid repeated projection matching steps, in this work we consider the orientation variables in their continuous form. This enables us to formulate the refinement problem as a joint optimization problem that refines the underlying density map and the orientations. We use the alternating direction method of multipliers (ADMM) and gradient descent steps to update the density map and the orientations, respectively. Our results and their comparison with several baselines demonstrate the feasibility and performance of the proposed refinement framework.
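The alternating structure (update the map with orientations fixed, then take a gradient step on the continuous orientation variables with the map fixed) can be illustrated on a toy two-variable objective. The paper's actual method applies ADMM to a full density map; this sketch only shows the alternation pattern, and all names and constants are assumptions:

```python
import math

def refine(x, theta, lr=0.1, iters=500):
    """Alternating minimization of the toy objective
    f(x, theta) = (x cos(theta) - 1)^2 + (x sin(theta) - 1)^2,
    where x stands in for the 'map' and theta for a continuous
    orientation. Minimum at x = sqrt(2), theta = pi/4."""
    for _ in range(iters):
        # 'Map' update: gradient step on x with theta held fixed.
        rx = x * math.cos(theta) - 1.0
        ry = x * math.sin(theta) - 1.0
        gx = 2 * rx * math.cos(theta) + 2 * ry * math.sin(theta)
        x -= lr * gx
        # Orientation update: gradient step on theta with x held fixed.
        rx = x * math.cos(theta) - 1.0
        ry = x * math.sin(theta) - 1.0
        gt = -2 * rx * x * math.sin(theta) + 2 * ry * x * math.cos(theta)
        theta -= lr * gt
    return x, theta
```

Treating the orientation as a continuous variable, as in the code above, is what removes the need for a discretized orientation grid and repeated projection matching.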

Digital Library: EI
Published Online: January  2019
Pages 134-1 - 134-5,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

In this paper, we study a 2D tomography problem with random and unknown projection angles for a point source model. Specifically, we target recovering geometry information, i.e., the radial and pairwise distances of the underlying point source model. For this purpose, we introduce a set of rotation-invariant features that are estimated from the projection data. We further show that these features are functions of the radial and pairwise distances of the point source model. By extracting the distances from the features, we gain insight into the geometry of the unknown point source model. This geometry information can later be used to reconstruct the point source model. The simulation results verify the robustness of our method in the presence of noise and errors in the estimation of the features.
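The invariance being exploited is easy to check on a toy point set: radial and pairwise distances are unchanged by rotation. This is an illustrative sketch of the invariant quantities only, not the paper's estimator, which works from projection data:

```python
import math

def distance_features(points):
    """Rotation-invariant geometry features of a 2-D point set:
    sorted radial distances and sorted pairwise distances."""
    radial = sorted(math.hypot(x, y) for x, y in points)
    pair = sorted(math.hypot(x1 - x2, y1 - y2)
                  for i, (x1, y1) in enumerate(points)
                  for (x2, y2) in points[i + 1:])
    return radial, pair

def rotate(points, a):
    """Rotate all points about the origin by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y) for x, y in points]
```

Since any rotation of the source model leaves these features fixed, estimates of them from projections taken at unknown angles remain informative about the underlying geometry.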

Digital Library: EI
Published Online: January  2019
Pages 135-1 - 135-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

Despite the advances in single-image super resolution using deep convolutional networks, the main problem remains unsolved: recovering fine texture details. Recent works in super resolution aim at modifying the training of neural networks to enable the recovery of these details. Among the different methods proposed, wavelet decompositions are used as inputs to super resolution networks to provide structural information about the image. Residual connections may also link different network layers to help propagate high frequencies. We review and compare the usage of wavelets and residuals in training super resolution neural networks. We show that residual connections are key to improving the performance of deep super resolution networks. We also show that there is no statistically significant performance difference between spatial and wavelet inputs. Finally, we propose a new super resolution architecture that saves memory costs while still using residual connections and performs comparably to the current state of the art.
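The role of a residual connection can be seen in a toy block: the skip path is an identity, so the high-frequency content of the input reaches the output regardless of what the learned branch does. This is illustrative only; real super resolution networks use convolutional layers, and the names here are assumptions:

```python
def residual_block(x, weights, bias=0.0):
    """y = x + F(x): the identity skip connection carries the input
    (including its high frequencies) straight through, so the learned
    branch F only has to model the residual detail. F here is a toy
    element-wise ReLU layer standing in for a convolutional branch."""
    fx = [max(0.0, w * v + bias) for w, v in zip(weights, x)]
    return [v + f for v, f in zip(x, fx)]
```

Note that with zero weights the block is exactly the identity, which is why residual networks are easy to optimize from initialization: the network starts from "copy the input" and learns only corrections.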

Digital Library: EI
Published Online: January  2019
Pages 136-1 - 136-6,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

A conditional generative adversarial network (GAN) is proposed for the image deblurring problem. It is tailored for image deblurring rather than being a generic GAN applied to the deblurring problem. To that end, the dark channel prior is carefully chosen to be incorporated into the loss function for network training. To make it compatible with neural networks, its original non-differentiable form is discarded and an L2 norm is adopted instead. On both synthetic datasets and noisy natural images, the proposed network shows improved deblurring performance and robustness to image noise, both qualitatively and quantitatively. Additionally, compared with existing end-to-end deblurring networks, our network structure is lightweight, which ensures shorter training and testing times.
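A sketch of the prior and its differentiable L2 surrogate is below. This is a toy on nested-list RGB images, not the paper's network code, and the patch size and function names are assumptions:

```python
def dark_channel(img, patch=3):
    """Dark channel of an RGB image (H x W x 3 nested lists):
    per-pixel minimum over channels, then minimum over a local patch."""
    H, W = len(img), len(img[0])
    mins = [[min(img[i][j]) for j in range(W)] for i in range(H)]
    r = patch // 2
    return [[min(mins[a][b]
                 for a in range(max(0, i - r), min(H, i + r + 1))
                 for b in range(max(0, j - r), min(W, j + r + 1)))
             for j in range(W)] for i in range(H)]

def dark_channel_l2_loss(pred, sharp):
    """L2 penalty between the dark channels of the network output and
    the sharp target -- a differentiable surrogate for the original
    non-differentiable dark channel sparsity term."""
    dp, ds = dark_channel(pred), dark_channel(sharp)
    return sum((a - b) ** 2 for ra, rb in zip(dp, ds) for a, b in zip(ra, rb))
```

The L2 form makes the term usable as an additive loss during gradient-based training, which is the compatibility point the abstract makes.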

Digital Library: EI
Published Online: January  2019
Pages 137-1 - 137-8,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

Efficient plant phenotyping methods are necessary to accelerate the development of high yield biofuel crops. Manual measurement of plant phenotypes, such as height, is inefficient, labor intensive, and error prone. We present a robust, LiDAR-based approach to estimating the height of biomass sorghum plants. A vertically oriented laser rangefinder onboard an agricultural robot captures LiDAR scans of the environment as the robot traverses between crop rows. These LiDAR scans are used to generate height contours for a single row of plants corresponding to a given genetic strain. We apply ground segmentation, iterative peak detection, and peak filtering to estimate the average height of each row. Our LiDAR-based approach is capable of estimating height at all stages of the growing period, from emergence (approximately 10 cm) through canopy closure (approximately 4 m). Our algorithm has been extensively validated by several ground-truthing campaigns on biomass sorghum. These measurements encompass typical methods employed by breeders as well as higher-accuracy methods of measurement. We achieve an absolute height estimation error of 8.46% when ground truthed via a “by-eye” method over 2842 plots, an absolute height estimation error of 5.65% when ground truthed at high granularity by agronomists over 12 plots, and an absolute height estimation error of 7.2% when ground truthed by multiple agronomists over 12 plots.
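The ground segmentation / peak detection / peak filtering pipeline can be caricatured on a 1-D height contour. This is a toy sketch with hypothetical thresholds, not the fielded algorithm:

```python
def estimate_row_height(contour, ground=0.0, min_peak=0.1):
    """Toy version of the pipeline: subtract the ground level,
    detect local maxima, discard weak peaks, average the rest.
    `contour` is a 1-D list of heights (m) along one crop row."""
    h = [v - ground for v in contour]          # ground segmentation
    peaks = [h[i] for i in range(1, len(h) - 1)
             if h[i] > h[i - 1] and h[i] >= h[i + 1]   # local maxima
             and h[i] >= min_peak]                     # peak filtering
    return sum(peaks) / len(peaks) if peaks else 0.0
```

Averaging the filtered peaks rather than taking a single maximum keeps one outlier plant from dominating the per-row height estimate.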

Digital Library: EI
Published Online: January  2019
Pages 138-1 - 138-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

Efficient plant phenotyping methods are necessary to accelerate the development of high yield biofuel crops. Manual measurement of plant phenotypes, such as width, is slow and error-prone. We propose a novel approach to estimating the width of corn and sorghum stems from color and depth images obtained by mounting a camera on a robot that traverses plots of plants. We use deep learning to detect individual stems and employ an image processing pipeline to model the boundary of each stem and estimate its pixel and metric width. This approach results in 13.5% absolute error in the pixel domain on corn, averaged over 153 estimates, and 13.2% metric absolute error on phantom sorghum, averaged over 149 estimates.

Digital Library: EI
Published Online: January  2019
Pages 140-1 - 140-9,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 13

Biometric recognition of vehicle occupants in unconstrained environments presents a host of challenges. In particular, the complications arising from imaging through vehicle windshields are a significant hurdle; distance to target, glare, poor lighting, occupant head pose, and vehicle speed are among the difficulties. We explore the construction of a multi-unit computational camera system to mitigate these challenges in order to obtain accurate and consistent face recognition results. This paper documents the hardware components and software design of the computational imaging system. We also document the use of a Region-based Convolutional Neural Network (RCNN) for face detection and a Generative Adversarial Network (GAN) for machine-learning-inspired high dynamic range imaging, artifact removal, and image fusion.

Digital Library: EI
Published Online: January  2019
