Volume 28, Issue 19, Pages 1 - 4, © Society for Imaging Science and Technology 2016

Nuclear resonance fluorescence (NRF) is a nuclear phenomenon in which a nucleus can be made to emit a spectral pattern that identifies the material. The effect is triggered by exposing the material to a continuous spectrum of high-energy photons and observing the resulting fluorescence spectrum, which varies from material to material. The photons used in the excitation process lie at energies of 2-8 MeV, which are among the most penetrating and can pass through several inches of steel. This combination of material-specific signatures and high penetration is well matched to applications such as cargo screening for threats and contraband. An imaging system based on NRF has been proposed that uses straightforward raster scanning of an excitation beam combined with simple collimation of the resulting emissions. While such a system is computationally inexpensive, it suffers from a low inherent signal-to-noise ratio because most emitted photons are discarded, necessitating long scanning times. In this work we propose and explore the use of a coded aperture to increase the signal-to-noise ratio and lower the acquisition time of NRF-based imaging.
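As a rough illustration of why coded apertures help, the toy 1D simulation below (our sketch, not the authors' system) multiplexes a Poisson-noisy emission scene through a random binary mask and decodes by balanced circular correlation; the mask pattern, scene, and noise model are all illustrative assumptions.

```python
# Toy 1D coded-aperture simulation; all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 64
scene = np.zeros(n)
scene[20], scene[45] = 5.0, 3.0                  # two NRF-emitting positions

mask = rng.integers(0, 2, n).astype(float)       # random binary mask, ~n/2 open
decoder = 2.0 * mask - 1.0                       # balanced correlation decoder

# Detector counts: scene circularly convolved with the mask, plus Poisson noise.
blur = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))
counts = rng.poisson(np.maximum(blur, 0.0)).astype(float)

# Decode by circular correlation with the balanced decoder.
est = np.real(np.fft.ifft(np.fft.fft(counts) * np.conj(np.fft.fft(decoder))))
est /= mask.sum()
print(est[20], est[45])                          # peaks stand out from sidelobe noise
```

Because roughly half the mask is open, each detector reading aggregates photons from many scene positions at once, which is the multiplexing advantage over a collimated raster scan that discards most emissions.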

Digital Library: EI
Published Online: February 2016
Volume 28, Issue 19, Pages 1 - 4, © Society for Imaging Science and Technology 2016

A watermark is a subtle pattern added to images or documents to deter counterfeiting. For color 3D models, watermarks can be added to either texture images or vertex colors. We propose a hidden watermarking method that superimposes a just-noticeable-difference pattern on a 3D color model. The color difference of the watermark is too small to be noticed, but it is enlarged under specific lighting conditions in the computer graphics environment. The idea is much like the anti-counterfeit labels on most banknotes: the watermark is almost invisible when rendered under normal white light, but becomes visible under violet light.
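A minimal numeric sketch of the reveal mechanism (our illustration; the JND magnitude, channel choice, and diagonal illuminant model are assumptions, not the paper's method):

```python
# Minimal sketch: sRGB texture, a fixed ~2/255 JND offset in the blue channel,
# and a crude diagonal-illuminant "renderer". All values are assumptions.
import numpy as np

def embed(texture, mark, jnd=2):
    """Superimpose a just-noticeable blue-channel offset where mark == 1."""
    out = texture.astype(np.int16)
    out[..., 2] += jnd * mark
    return np.clip(out, 0, 255).astype(np.uint8)

def render(texture, light):
    """Per-channel illuminant scaling (a stand-in for the real lighting model)."""
    return np.clip(texture * light, 0, 255).astype(np.uint8)

texture = np.full((64, 64, 3), 100, np.uint8)    # flat gray test texture
mark = np.zeros((64, 64), np.int16)
mark[16:48, 16:48] = 1                           # square watermark region

wm = embed(texture, mark)
white = render(wm, np.array([1.0, 1.0, 1.0]))    # normal white light
violet = render(wm, np.array([0.2, 0.1, 2.0]))   # blue-boosting "violet" light
print(int(white[32, 32, 2]) - int(white[0, 0, 2]),    # -> 2 (imperceptible)
      int(violet[32, 32, 2]) - int(violet[0, 0, 2]))  # -> 4 (enlarged)
```

Under the white illuminant the embedded offset stays at the imperceptible JND level, while the blue-boosting illuminant doubles it, which is the reveal effect the abstract describes.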

Digital Library: EI
Published Online: February 2016
Volume 28, Issue 19, Pages 1 - 5, © Society for Imaging Science and Technology 2016

This paper proposes a strip-based, fast, and robust text detection algorithm for low-cost embedded devices such as scanners and printers, designed to operate with minimal memory. The unavailability of the whole document at once, along with other memory and processing-speed constraints, poses a significant challenge. While conventional approaches process the whole image or page with intensive algorithms to obtain a desirable result, our algorithm processes strips of the page efficiently in both speed and memory allocation. To this end, a DCT-block-based approach, together with appropriate pre- and post-processing steps, is used to create a map of text pixels from the original page while suppressing non-text background, graphics, and images. The proposed algorithm detects text pixels in documents with varying backgrounds, colors, and non-textual content. The algorithm was implemented in both MATLAB and C and tested on a BeagleBoard, emulating a low-power CPU, on a wide variety of documents. The average execution time for a full 8.5x11-inch page scanned at 300 dpi is approximately 0.5 seconds in C and about 3 seconds on the BeagleBoard.
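A hedged sketch of the strip-wise DCT idea (block size, the AC-energy statistic, and the threshold are our assumptions; the paper's pre- and post-processing are omitted):

```python
# Strip-wise DCT text map; block size, statistic, and threshold are assumptions.
import numpy as np
from scipy.fft import dctn

def text_map_for_strip(strip, block=8, thresh=50.0):
    """Flag blocks whose non-DC DCT energy suggests text strokes."""
    h, w = strip.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            c = dctn(strip[i:i + block, j:j + block].astype(float), norm='ortho')
            ac_energy = np.abs(c).sum() - abs(c[0, 0])     # drop the DC term
            mask[i // block, j // block] = ac_energy > thresh
    return mask

# Stream the page strip by strip so only `block` rows are in memory at once.
page = np.full((64, 64), 255.0)                  # white page
page[20:28, 8:56:4] = 0.0                        # crude "text" strokes
for row in range(0, 64, 8):
    print(text_map_for_strip(page[row:row + 8]).astype(int))
```

Only one strip of rows is resident at a time, which is what keeps the memory footprint small on embedded targets.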

Digital Library: EI
Published Online: February 2016
Volume 28, Issue 19, Pages 1 - 5, © Society for Imaging Science and Technology 2016

The problem of obtaining 3-D tomographic images from geometries involving sparse sets of illuminators and detectors arises in applications such as digital breast tomosynthesis, security inspection, and non-destructive evaluation. In these applications the acquired projection data are highly incomplete, so traditional reconstruction approaches such as filtered backprojection (FBP) produce significant distortion and artifacts. In this work we describe an iterative reconstruction algorithm that exploits regularization to obtain a well-posed inverse problem. The computations associated with such iterative algorithms are, however, significantly greater than those of FBP. We describe how we structure these computations to exploit GPU architectures and reduce the computation time of the iterative reconstruction algorithm, and we illustrate the results on data from an experimental 3-D imaging system.
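The core loop is a handful of large matrix-vector products, shown below for a toy Tikhonov-regularized problem (our sketch; the actual projector, regularizer, and GPU kernels are not from the paper). These forward/backprojection products are exactly what one offloads to the GPU, for example by swapping numpy arrays for a GPU array library.

```python
# Toy Tikhonov-regularized iterative reconstruction via gradient descent.
# A, lam, and step are illustrative; real projectors are far larger and sparser.
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 100                                   # fewer measurements than unknowns
A = rng.standard_normal((m, n))                  # stand-in for the sparse-view projector
x_true = np.zeros(n)
x_true[[10, 50, 80]] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam, step = 0.1, 1.0e-3
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b) + lam * x           # the two big matvecs per iteration
    x -= step * grad
print(np.round(x[[10, 50, 80]], 2))              # approximate values at the true voxels
```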

Digital Library: EI
Published Online: February 2016
Volume 28, Issue 19, Pages 1 - 5, © Society for Imaging Science and Technology 2016

Precise measurement of spatial gas concentrations in an uncontrolled outdoor environment is needed for environmental studies. In this effort, a multispectral imaging system has been developed that combines quantum cascade laser (QCL) modules with an iterative computed tomography algorithm to sense a transmission spectral response for each voxel. With spatially distributed spectral data, researchers can identify gas composition and concentration distributions over a region of interest. The QCL system uses multiple modules covering wavelengths from 3.77 μm to 12.5 μm to detect both carbon dioxide (4.2 μm) and methane (7.5 μm) greenhouse gases. Simulation and laboratory studies have been performed for a system using a circular arrangement of mirrors to transmit and reproject the QCL beam around the detector circle. The QCL transmission system has been tested in both controlled indoor and uncontrolled outdoor environments to quantify its sensitivity.
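The tomographic step reduces to inverting Beer-Lambert line integrals. Below is a deliberately tiny sketch (a made-up 3-beam, 3-voxel geometry and an assumed absorption coefficient; the paper's mirror geometry and iterative solver are more elaborate):

```python
# Assumed Beer-Lambert model on a made-up 3-beam / 3-voxel geometry.
import numpy as np

# L[i, j]: path length (cm) of beam i through voxel j.
L = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
alpha = 0.4                                       # assumed absorption coefficient
conc_true = np.array([0.0, 2.0, 1.0])             # per-voxel gas concentration

transmission = np.exp(-alpha * (L @ conc_true))   # measured I/I0 for each beam
absorbance = -np.log(transmission)
conc = np.linalg.solve(L, absorbance) / alpha     # invert the line integrals
print(conc)                                       # -> [0. 2. 1.]
```

A real system has many more beams than voxels per wavelength and solves the resulting system iteratively rather than by direct inversion.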

Digital Library: EI
Published Online: February 2016
Volume 28, Issue 19, Pages 1 - 5, © Society for Imaging Science and Technology 2016

Artificial neural networks loosely mimic the complex web of nearly 100 trillion connections in the human brain. Deep neural networks, and specifically convolutional neural networks, have recently demonstrated breakthrough performance in the pattern recognition community. Studies on network depth, regularization, filters, choice of activation function, and training parameters are numerous. With regard to activation functions, the rectified linear unit is favored over the sigmoid and tanh functions because it maintains differentiation of larger signals. This paper introduces multiple activation functions per single neuron. Libraries have been generated that allow individual neurons within a neural network to select among a multitude of activation functions, with the selection made on a node-by-node basis to minimize classification error. Each node may use more than one activation function if that reduces the final classification error. The resulting networks have been trained on several commonly used datasets, show increases in classification performance, and are compared with recent findings in neuroscience research.
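To make the node-by-node selection concrete, here is a small illustrative sketch (our construction, not the paper's library): each hidden neuron owns an index into a pool of activation functions, and a greedy pass picks the index that minimizes error on a toy task. A single function per node is selected here, whereas the paper also permits a node to combine several.

```python
# Illustrative greedy per-neuron activation selection on a toy task.
import numpy as np

ACTS = [np.tanh,                                  # candidate activation pool
        lambda z: np.maximum(z, 0.0),             # ReLU
        lambda z: 1.0 / (1.0 + np.exp(-z))]       # sigmoid

def forward(X, W, choice):
    """Hidden layer where neuron j applies its own activation ACTS[choice[j]]."""
    Z = X @ W
    return np.column_stack([ACTS[c](Z[:, j]) for j, c in enumerate(choice)])

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(float)         # toy nonlinear target
W = rng.standard_normal((4, 6))                   # fixed random hidden weights
w_out = rng.standard_normal(6)

def error(choice):
    pred = (forward(X, W, choice) @ w_out) > 0
    return float(np.mean(pred != y))

choice = [0] * 6
for j in range(6):                                # node-by-node greedy selection
    choice[j] = min(range(len(ACTS)),
                    key=lambda c: error(choice[:j] + [c] + choice[j + 1:]))
print(choice, error(choice))
```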

Digital Library: EI
Published Online: February 2016
Volume 28, Issue 19, Pages 1 - 5, © Society for Imaging Science and Technology 2016

Compressive sensing has emerged as a novel sensing theory that can circumvent the Shannon-Nyquist limit, with powerful implications for reducing the acquisition demands of hyperspectral imaging. To recover the hyperspectral datacube from limited optically compressed measurements, we present a new reconstruction algorithm that exploits spatial and spectral correlations through non-local means regularization. Built on a simple compressive-sensing hyperspectral architecture that uses a digital micromirror device and a spectrometer, the reconstruction is solved with split Bregman optimization techniques, including penalty functions defined according to the spatial and spectral properties of the scene and the noise sources.
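For intuition about the measure-then-recover pipeline, the sketch below compresses a signal with DMD-style ±1 patterns (obtainable from paired mirror configurations) and recovers it with plain ISTA under a DCT sparsity prior. This is our simplified stand-in; the paper's non-local means regularizer inside a split Bregman solver is considerably more involved.

```python
# DMD-style compressive measurement with an ISTA/DCT stand-in for the paper's
# non-local means split Bregman recovery. Sizes and priors are assumptions.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
n, m, k = 128, 48, 8
coeffs = np.zeros(n)
coeffs[:k] = rng.standard_normal(k)
x = idct(coeffs, norm='ortho')                    # signal sparse in the DCT basis

Phi = (2 * rng.integers(0, 2, (m, n)) - 1) / np.sqrt(n)   # +/-1 mirror patterns
y = Phi @ x                                       # compressed measurements

alpha = 1.0 / np.linalg.norm(Phi, 2) ** 2         # ISTA step size
lam = 1e-2
z = np.zeros(n)                                   # DCT coefficients of the estimate
for _ in range(1000):
    r = Phi @ idct(z, norm='ortho') - y
    z -= alpha * dct(Phi.T @ r, norm='ortho')
    z = np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)  # soft threshold
x_hat = idct(z, norm='ortho')
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small relative error
```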

Digital Library: EI
Published Online: February 2016
Volume 28, Issue 19, Pages 1 - 6, © Society for Imaging Science and Technology 2016

High-spectral-resolution imaging provides critical insights into important computer vision tasks such as classification, tracking, and remote sensing. Modern Snapshot Spectral Imaging (SSI) systems directly acquire the entire 3D data-cube through an intelligent combination of spectral filters and detector elements. Partly as a trade-off for the dramatic reduction in acquisition time, SSI systems exhibit limited spectral resolution, for example by associating each pixel with a single spectral band in Spectrally Resolvable Detector Arrays. In this paper we propose a novel machine learning technique that enhances the spectral resolution of imaging systems by exploiting the mathematical framework of Sparse Representations (SR). Our approach offers a systematic way to estimate a high-spectral-resolution pixel from a measured low-spectral-resolution version by identifying a sparse representation that can directly generate the high-spectral-resolution output. We enforce the sparsity constraint by learning a joint-space coding dictionary from multiple low- and high-spectral-resolution training data, and we demonstrate that high-spectral-resolution images can be successfully reconstructed from limited-spectral-resolution measurements.
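The coupled-dictionary idea can be sketched as follows (our simplification: the dictionary is random rather than learned, the degradation is plain band averaging, and coding is a greedy OMP-style pass; the paper learns the joint dictionary from training data):

```python
# Coupled low/high-resolution dictionary sketch; all components are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_hi, n_lo, n_atoms = 64, 16, 40
D_hi = rng.standard_normal((n_hi, n_atoms))
D_hi /= np.linalg.norm(D_hi, axis=0)              # unit-norm high-res atoms
P = np.kron(np.eye(n_lo), np.ones((1, n_hi // n_lo)) / (n_hi // n_lo))  # band averaging
D_lo = P @ D_hi                                   # matching low-res dictionary

def super_resolve(y_lo, k=2):
    """Greedy sparse coding against D_lo, synthesis with the coupled D_hi."""
    idx, r = [], y_lo.copy()
    for _ in range(k):
        corr = np.abs(D_lo.T @ r) / np.linalg.norm(D_lo, axis=0)
        idx.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D_lo[:, idx], y_lo, rcond=None)
        r = y_lo - D_lo[:, idx] @ coef
    return D_hi[:, idx] @ coef

x_true = D_hi[:, [5, 17]] @ np.array([0.7, 0.3])  # test spectrum, 2-sparse in D_hi
x_hat = super_resolve(P @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small if atoms found
```

Recovery is accurate whenever the greedy coder identifies the correct atoms, which depends on the coherence of the low-resolution dictionary; joint training of the two dictionaries is what encourages that identification to be reliable.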

Digital Library: EI
Published Online: February 2016
Volume 28, Issue 19, Pages 1 - 6, © Society for Imaging Science and Technology 2016

X-ray and neutron optics both lack efficient ray-focusing capabilities. An x-ray source can be made small and powerful enough to facilitate high-resolution imaging while providing adequate flux; this is not yet possible for neutrons. One remedy is to employ a computational imaging technique such as magnified coded source imaging. The greatest challenge in successfully reconstructing high-resolution images from such radiographs is precisely modeling the flux distribution of complex, non-uniform neutron sources. We have developed a framework based on Monte Carlo simulation and iterative reconstruction that facilitates high-resolution coded source neutron imaging. In this paper, we define a methodology to empirically measure and approximate the flux profile of a non-uniform neutron source, and we show how to incorporate the result within the forward model of an iterative reconstruction algorithm. We assess the improvement in image quality by comparing reconstructions based on the new empirical forward model with those based on our previous analytic models.
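Folding a measured flux profile into the forward model amounts to weighting the system operator, as in this toy 1D sketch (the operator, flux curve, and MLEM-style update are our illustrative assumptions; the paper's forward model is derived from Monte Carlo simulation):

```python
# Toy 1D example of an empirically weighted forward model in iterative recon.
import numpy as np

n = 50
A = np.tril(np.ones((n, n)))                     # toy projection operator
flux = 1.0 + 0.5 * np.sin(np.linspace(0.0, 3.0, n))  # measured open-beam flux profile

x_true = np.zeros(n)
x_true[20:30] = 1.0
y = flux * (A @ x_true)                          # data scaled by the non-uniform source

A_flux = flux[:, None] * A                       # fold the flux into the forward model
x = np.ones(n)
for _ in range(200):                             # MLEM-style multiplicative updates
    ratio = y / np.maximum(A_flux @ x, 1e-12)
    x *= (A_flux.T @ ratio) / A_flux.sum(axis=0)
print(np.round(x[18:32], 2))                     # should approach the 20:30 plateau
```

If the unweighted operator A were used instead, the solver would misattribute the source non-uniformity to the object, which is the artifact the empirical forward model is meant to remove.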

Digital Library: EI
Published Online: February 2016