A supervised learning approach for dynamic sampling (SLADS) was developed to reduce X-ray exposure prior to data collection in protein structure determination. Implementation of this algorithm reduced the X-ray dose to the central core of the crystal by up to 20-fold compared with current raster scanning approaches. This dose reduction corresponds directly to a reduction in X-ray damage to the protein crystals prior to data collection for structure determination. Implementation at a beamline at Argonne National Laboratory suggests that the SLADS approach holds promise for the analysis of X-ray-labile crystals. These potential benefits address a growing need for improved automated approaches to microcrystal positioning.
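For readers unfamiliar with dynamic sampling, the following is a minimal sketch of a SLADS-style acquisition loop: measure a small random seed set, then repeatedly measure the location with the highest predicted expected reduction in distortion (ERD). The callbacks measure and predict_erd, the seed size, and the dose budget are hypothetical placeholders, not the implementation deployed at the beamline.

    import numpy as np

    def slads_scan(measure, predict_erd, shape, budget, n_init=50, rng=None):
        """Schematic SLADS-style dynamic sampling loop (illustrative only).

        measure(idx)     -- hypothetical callback returning the value at a pixel
        predict_erd(...) -- hypothetical trained regressor scoring unmeasured pixels
        """
        rng = np.random.default_rng(rng)
        measured = np.zeros(shape, dtype=bool)
        values = np.zeros(shape)

        # Seed with a small random set of measurements.
        for flat in rng.choice(np.prod(shape), size=n_init, replace=False):
            idx = np.unravel_index(flat, shape)
            measured[idx] = True
            values[idx] = measure(idx)

        # Greedily measure the pixel with the highest predicted expected
        # reduction in distortion (ERD) until the dose budget is spent.
        for _ in range(budget - n_init):
            erd = predict_erd(values, measured)   # assumed 0 at measured pixels
            idx = np.unravel_index(np.argmax(erd), shape)
            measured[idx] = True
            values[idx] = measure(idx)
        return values, measured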
In order to accurately monitor neural activity in a living mouse brain, it is necessary to image each neuron at a high frame rate. Newly developed genetically encoded calcium indicators such as GCaMP6 have fast kinetic responses and can be used to target specific cell types over long durations. This enables high-frame-rate imaging of neural activity via fluorescence microscopy. In fluorescence microscopy, a laser scans the whole volume, and the imaging time is proportional to the volume of brain scanned. Scanning the whole brain volume is time consuming and fails to fully exploit the fast kinetic response of the new calcium indicators. One way to increase the frame rate is to image only the sparse set of voxels containing the neurons. To do so, however, it is necessary to accurately detect and localize each neuron during data acquisition. In this paper, we present a novel model-based neuron detection algorithm using sparse location priors. We formulate neuron detection as an image reconstruction problem in which we reconstruct an image that encodes the locations of the neuron centers. We use a sparsity-based prior model, since the neuron centers are sparsely distributed in the 3D volume. Information about the shape of neurons is encoded in the forward model as the impulse response of a filter estimated from training data. Our method is robust to illumination variation and noise in the image. Furthermore, the cost function minimized in our formulation is convex, so the solution does not depend on a good initialization. We test our method on GCaMP6 fluorescence neuron images and observe better performance than widely used methods.
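As an illustration of this formulation, the sketch below recovers a sparse center map x from an image y ≈ h * x by minimizing a quadratic data term plus an L1 prior with ISTA. The kernel h stands in for the learned neuron-shape impulse response; the step size, lam, and iteration count are illustrative assumptions, not the authors' trained model.

    import numpy as np
    from scipy.signal import fftconvolve

    def ista_sparse_centers(y, h, lam=0.05, step=None, n_iter=200):
        """Recover a sparse neuron-center map x from y ~= h * x (illustrative).

        Minimizes 0.5*||y - h*x||^2 + lam*||x||_1 with ISTA; h is a stand-in
        for the learned neuron-shape impulse response described above.
        """
        if step is None:
            # 1 / Lipschitz bound of the data term (max |H|^2 <= ||h||_1^2).
            step = 1.0 / (np.abs(h).sum() ** 2)
        h_flip = h[::-1, ::-1]                  # adjoint of 2D convolution
        x = np.zeros_like(y)
        for _ in range(n_iter):
            resid = fftconvolve(x, h, mode="same") - y
            grad = fftconvolve(resid, h_flip, mode="same")
            x = x - step * grad
            # Soft-thresholding enforces the sparsity prior.
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
        return x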
Scintillation detectors play an important role in radiation detection, and methods to improve their spatial resolution have been widely studied. Commonly used scintillation detectors attach photon sensors to a scintillator and locate the light source within the scintillator, normally by counting the number of photons; in these cases the spatial resolution can reach about 1 mm. However, some medical applications, such as positron emission tomography (PET), require higher resolution. Application-specific types of scintillation detector, such as the Si/CdTe Compton camera and the PET crystal cube, have improved the spatial resolution to about 250 μm to 500 μm [1]. However, the resolution of these detectors is mainly restricted by their hardware dimensions, so further improvement can hardly be achieved without a substantial reduction in hardware scale. This paper introduces a method for high-resolution point-light-source detection in a scintillator using a detector with a new structure. Compared with a typical scintillation detector, the proposed one introduces a lightproof material with pinholes between the scintillator cube and the photon sensors, for which we use single-photon avalanche diodes (SPADs). Based on this construction, the light source can be located through a photon reverse ray-tracing method. The proposed scintillation detector can provide a spatial resolution of about 10 μm to 20 μm, more than ten times finer than the prior art.
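The reverse ray-tracing step can be illustrated as a least-squares intersection of rays: each SPAD hit, paired with the pinhole it was seen through, defines a ray, and the source estimate is the 3D point closest to all rays. The geometry and array layout below are illustrative assumptions, not the detector's actual calibration.

    import numpy as np

    def locate_source(spad_hits, pinholes):
        """Estimate a point-source location by reverse ray tracing (illustrative).

        Each detected photon defines a ray from a SPAD pixel through its
        pinhole; the estimate is the 3D point minimizing the summed squared
        distance to all rays (closed-form least squares).
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, q in zip(np.asarray(spad_hits), np.asarray(pinholes)):
            d = q - p
            d = d / np.linalg.norm(d)        # ray direction through the pinhole
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
            A += P
            b += P @ q
        return np.linalg.solve(A, b)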
We introduce a new algorithm to reduce metal artifacts in computed tomography images when data is acquired using a single source spectrum. Our algorithm is a hybrid approach that first corrects the sinogram and then applies an iterative reconstruction. Many prior sinogram correction algorithms identify projection measurements that pass through areas with significant metal content, remove those projections, and interpolate their values for use in subsequent reconstruction. In contrast, our algorithm retains random subsets of these metal-affected projection measurements and uses an averaging procedure to construct a modified sinogram. To reduce the secondary artifacts created by this sinogram modification, we apply an iterative reconstruction in which the solution is regularized using a sparsifying transform. We evaluate our algorithm on simulated data as well as data collected using a medical scanner. Our experiments indicate that our algorithm significantly reduces the extent of metal artifacts and enables accurate recovery of structures in proximity to metal.
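One plausible reading of the averaging procedure is sketched below: repeatedly keep a random subset of the metal-affected sinogram bins, fill the remaining ones by interpolation along each view, and average the repaired sinograms. The keep fraction, number of draws, and 1D linear interpolation are illustrative assumptions; the authors' exact scheme may differ.

    import numpy as np

    def modified_sinogram(sino, metal_mask, keep_frac=0.3, n_draws=20, seed=0):
        """Average random-subset repairs of metal-affected bins (illustrative).

        sino       -- sinogram, shape (n_views, n_bins)
        metal_mask -- boolean mask of metal-affected bins, same shape
        """
        rng = np.random.default_rng(seed)
        acc = np.zeros_like(sino, dtype=float)
        for _ in range(n_draws):
            keep = metal_mask & (rng.random(sino.shape) < keep_frac)
            drop = metal_mask & ~keep
            repaired = sino.astype(float).copy()
            for i in range(sino.shape[0]):       # interpolate within each view
                bad = drop[i]
                if bad.any():
                    good = ~bad
                    repaired[i, bad] = np.interp(
                        np.flatnonzero(bad), np.flatnonzero(good),
                        repaired[i, good])
            acc += repaired
        return acc / n_draws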
The problem of identifying materials in dual-energy CT images arises in many applications in medicine and security. In this paper, we introduce a new algorithm for joint segmentation and classification of material regions. In our algorithm, we learn an appearance model for patches of pixels that captures the correlation in observed values among neighboring pixels/voxels. We pose the joint segmentation/classification problem as a discrete optimization problem, using a Markov random field model for the correlation of class labels among neighboring patches, and solve it using graph-cut techniques. We evaluate the performance of the proposed method using both simulated phantoms and data collected from a medical scanner. We show that our algorithm outperforms alternative approaches in which the appearance model is based on individual pixel values rather than patches.
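A minimal sketch of the graph-cut labeling step, reduced to two material classes and per-pixel data costs for brevity (the paper's appearance model operates on patches). It assumes the PyMaxflow package; any max-flow solver would serve.

    import numpy as np
    import maxflow  # PyMaxflow (assumed available)

    def mrf_binary_label(cost_a, cost_b, smoothness=1.0):
        """Binary MRF material labeling via graph cuts (illustrative sketch).

        cost_a / cost_b -- per-pixel negative log-likelihoods under each
        material class (stand-ins for the patch appearance model);
        smoothness      -- Potts penalty between 4-connected neighbors.
        Returns a boolean label image (True = class b, by the solver's
        source/sink convention).
        """
        g = maxflow.Graph[float]()
        nodes = g.add_grid_nodes(cost_a.shape)
        g.add_grid_edges(nodes, smoothness)      # pairwise Potts terms
        g.add_grid_tedges(nodes, cost_b, cost_a) # unary data terms
        g.maxflow()
        return g.get_grid_segments(nodes)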
Model-based image reconstruction (MBIR) techniques have the potential to generate high-quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One of the widely used projector pairs for modern MBIR techniques is the branchless distance-driven (DD) projector and backprojector. While this method produces superior-quality images, the computational cost of iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors on multi-GPU systems using CUDA programming tools. We introduce novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We also reduce the complexity of the algorithm's overlap calculation by eliminating the common projection plane and directly projecting the detector boundaries onto image voxel boundaries. To reduce the time required for calculating the overlap between detector edges and image voxel boundaries, we propose a pre-accumulation technique that accumulates image intensities in perpendicular 2D image slabs (from a 3D image) before projection and after backprojection, so that our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique, we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with a penalized-likelihood update. On Siemens Sensation 16 patient scan data, the proposed reconstruction method achieves an average speedup of 24 times using a single TITAN X GPU and 74 times using 3 TITAN X GPUs in parallel for combined projection and backprojection.
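The pre-accumulation idea can be shown in NumPy, setting the CUDA details aside: a prefix sum along the slab axis turns the sum of intensities over any contiguous run of voxels into a difference of two lookups. Integer boundaries are used here for simplicity; the actual DD overlap uses fractional, interpolated endpoints.

    import numpy as np

    def preaccumulate(slab, axis=0):
        """Prefix-sum a 2D image slab so the integral over any contiguous
        run of voxels reduces to two lookups (illustrative of the
        pre-accumulation step; the paper implements this in CUDA)."""
        acc = np.cumsum(slab, axis=axis)
        pad = [(0, 0)] * slab.ndim
        pad[axis] = (1, 0)
        return np.pad(acc, pad)      # leading zero so sums over [a, b) work

    def overlap_sum(acc, a, b, axis=0):
        """Sum of intensities between boundaries a and b (integer indices)."""
        return np.take(acc, b, axis=axis) - np.take(acc, a, axis=axis)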
Computational imaging problems are of increasing importance in domains ranging from security to biology and medicine. In these problems, computational techniques based on an imaging model are coupled with data inversion to create useful images. When the underlying desired property field is itself discrete, the corresponding discrete-valued inverse problems are extremely challenging and computationally expensive to solve because of their non-convex, enumerative nature. In this work we demonstrate a fast and robust solution approach based on a new variable splitting coupled with the alternating direction method of multipliers (ADMM). This approach yields sub-problems that can be solved using existing fast techniques, such as graph-cut methods, and produces overall solutions of excellent quality. The method can accommodate both Gaussian and Poisson noise models. We exercise the method on both binary and multi-label phantoms for challenging limited-data tomographic reconstruction problems.
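A minimal sketch of such a splitting, with the discrete sub-problem simplified: x is updated by a continuous linear solve and z is projected pointwise onto the label set. In the paper the z-update includes spatial regularization and is solved with graph cuts; the matrix A, label set, and penalty rho below are illustrative assumptions.

    import numpy as np

    def admm_discrete(A, y, labels, rho=1.0, n_iter=100):
        """Discrete-valued inverse problem via ADMM splitting (illustrative).

        Splits min_x 0.5*||Ax - y||^2 s.t. x in labels^N into a continuous
        x-update (linear solve) and a discrete z-update. The z-update here
        is a pointwise projection onto the label set; the paper solves a
        regularized version of this sub-problem with graph cuts.
        """
        labels = np.asarray(labels, dtype=float)
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        AtA, Aty = A.T @ A, A.T @ y
        M = AtA + rho * np.eye(n)
        for _ in range(n_iter):
            x = np.linalg.solve(M, Aty + rho * (z - u))   # continuous step
            # Discrete step: snap each entry of x + u to the nearest label.
            z = labels[np.argmin(np.abs((x + u)[:, None] - labels), axis=1)]
            u = u + x - z                                 # dual update
        return z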
We propose an inverse tone mapping (iTM) method that can both preserve the details of low dynamic range (LDR) images and expand their dynamic range. Conventional iTM algorithms often fail to precisely restore the details of the input LDR images. To deal with this problem, we take a two-layer approach in which each LDR image is separated into a base layer and a detail layer by bilateral filtering. The detail layer is mapped to that of a high dynamic range (HDR) image via a learned linear mapping, while the base layer is expanded via linear stretching to the dynamic range of a target display device. The two resulting base and detail layers are then combined to reconstruct the final HDR image, whose details are thereby revived by the learned linear mapping. To learn the mapping from the LDR detail layer to an HDR detail layer, HDR-LDR pairs of training patches of detail layers are classified into groups based on the features of the LDR detail patches. For each group, a linear mapping is learned during the training phase and then applied for HDR reconstruction in the testing phase. From the experimental results, we observe that the proposed method restores considerably more detail in HDR images than the conventional methods.
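A minimal sketch of the two-layer decomposition, assuming OpenCV's bilateral filter and operating on log-luminance only (color reattachment omitted). A single global gain stands in for the learned per-class linear mappings of the detail layer; the filter parameters and target range are illustrative assumptions.

    import cv2
    import numpy as np

    def two_layer_itm(ldr, target_range=(0.0, 4.0), detail_gain=1.5):
        """Two-layer inverse tone mapping sketch (illustrative).

        Splits log-luminance into base + detail with a bilateral filter,
        linearly stretches the base to the target display range, and applies
        a linear mapping to the detail layer. Returns an HDR luminance map.
        """
        lum = np.log1p(ldr.astype(np.float32).mean(axis=2))
        base = cv2.bilateralFilter(lum, d=9, sigmaColor=0.1, sigmaSpace=8)
        detail = lum - base                       # fine structure to preserve
        lo, hi = target_range
        base = (base - base.min()) / (np.ptp(base) + 1e-8) * (hi - lo) + lo
        return np.expm1(base + detail_gain * detail)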
This research examined the performance of skin-colored patches for accurately estimating human skin color. More than 300 facial images of Korean females were taken with a digital single-lens reflex camera (Canon 550D) while each subject held the X-Rite Digital ColorChecker® semi-gloss target. The color checker consists of 140 color patches, including 14 skin-colored ones. As the ground truth, the CIE 1976 L*a*b* values of seven spots on each face were measured with a spectrophotometer. Three sets of calibration patches were compared: the full 140 patches, the 24 standard color patches, and the 14 skin-colored patches. Three sets of estimated skin colors were thus obtained, and the errors relative to the ground truth were calculated as the square root of the sum of squared differences (ΔE). The results show that the error of color correction using the 14 skin-colored patches was significantly smaller (average ΔE = 8.58, SD = 3.89) than the errors of correction using the other two sets of color patches. The study provides evidence that skin-colored patches support more accurate estimation of skin colors, and suggests that they could serve as a new standard calibration target for skin-related image calibration.
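The calibration and evaluation procedure can be illustrated with a least-squares affine fit from camera RGB patch values to reference L*a*b* values, scored with the CIE 1976 ΔE. Real pipelines typically fit RGB to XYZ and convert nonlinearly to L*a*b*; the affine shortcut below is an illustrative simplification, and the variable names are hypothetical.

    import numpy as np

    def fit_color_correction(rgb_patches, lab_refs):
        """Least-squares affine map from camera RGB to reference L*a*b*
        (illustrative; the actual study's correction pipeline may differ)."""
        X = np.hstack([rgb_patches, np.ones((len(rgb_patches), 1))])
        M, *_ = np.linalg.lstsq(X, lab_refs, rcond=None)   # shape (4, 3)
        return M

    def delta_e(lab1, lab2):
        """CIE 1976 color difference: Euclidean distance in L*a*b* space."""
        return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

    # Usage: fit on one patch set, then score the predicted skin colors.
    # M = fit_color_correction(rgb_14, lab_14)     # e.g. the 14 skin patches
    # pred = np.hstack([rgb_skin, np.ones((len(rgb_skin), 1))]) @ M
    # errors = delta_e(pred, lab_ground_truth)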