A supervised learning approach for dynamic sampling (SLADS) was developed to reduce X-ray exposure prior to data collection in protein structure determination. Implementation of this algorithm allowed the X-ray dose to the central core of the crystal to be reduced by up to 20-fold compared with current raster-scanning approaches. This dose reduction corresponds directly to a reduction in X-ray damage to the protein crystals prior to data collection for structure determination. Implementation at a beamline at Argonne National Laboratory suggests promise for the use of the SLADS approach to aid in the analysis of X-ray-labile crystals. The potential benefits match a growing need for improvements in automated approaches for microcrystal positioning.
We introduce a new algorithm to reduce metal artifacts in computed tomography images when data are acquired using a single source spectrum. Our algorithm is a hybrid approach that first corrects the sinogram and then applies an iterative reconstruction. Many prior sinogram-correction algorithms identify projection measurements that travel through areas with significant metal content, remove those projections, and interpolate their values for use in subsequent reconstruction. In contrast, our algorithm retains the information of random subsets of these metal-affected projection measurements, and uses an averaging procedure to construct a modified sinogram. To reduce the secondary artifacts created by this sinogram modification, we apply an iterative reconstruction in which the solution is regularized using a sparsifying transform. We evaluate our algorithm on simulated data as well as data collected using a medical scanner. Our experiments indicate that our algorithm significantly reduces the extent of metal artifacts and enables accurate recovery of structures in proximity to metal.
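As a rough illustration of the sinogram-domain step, the NumPy sketch below averages several candidate sinograms, each built by retaining a random subset of the metal-affected measurements and filling the remainder by 1-D interpolation along the detector axis; the subset count, retention fraction, and interpolation rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def modified_sinogram(sino, metal_mask, n_subsets=8, keep_frac=0.5, seed=0):
    """Illustrative sketch: average over random subsets of retained
    metal-affected measurements (all parameters are assumptions)."""
    rng = np.random.default_rng(seed)
    metal_idx = np.flatnonzero(metal_mask)          # flat indices of metal-affected bins
    candidates = []
    for _ in range(n_subsets):
        keep = rng.random(metal_idx.size) < keep_frac
        drop = np.zeros(sino.size, dtype=bool)
        drop[metal_idx[~keep]] = True               # bins to re-estimate in this subset
        drop = drop.reshape(sino.shape)
        cand = sino.astype(np.float64, copy=True)
        for row, row_drop in zip(cand, drop):       # fill dropped bins per projection angle
            if row_drop.any():
                good = ~row_drop
                row[row_drop] = np.interp(np.flatnonzero(row_drop),
                                          np.flatnonzero(good), row[good])
        candidates.append(cand)
    return np.mean(candidates, axis=0)              # averaged, modified sinogram
```

The sparsity-regularized iterative reconstruction would then be run on the sinogram this function returns.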
The inverse quadratic problem of joint demosaicing and multiframe super-resolution (SR) is considered. Closed-form solutions for different constant sub-pixel motions between frames are obtained and represented as a filter bank, which allows the SR problem to be solved by adaptive filtering, with filters selected according to the sub-pixel motion between frames. This procedure can be carried out in a single iteration. Directional and non-directional parts of the image are processed with corresponding directional or non-directional filters. Color artifacts are reduced through a linear cross-channel regularization term inspired by popular demosaicing methods. The framework includes motion estimation in the Bayer domain, an integrated noise-reduction sub-algorithm, a directionality-estimation sub-algorithm, fallback logic, and post-processing for additional color artifact reduction. The filter bank is computed offline using specially developed compression techniques that reduce the number of filters actually stored. The developed solution shows results superior to sequentially applied demosaicing and single-channel SR, and was tested on real raw images captured by a cell phone camera in burst mode.
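To make the adaptive-filtering idea concrete, the sketch below selects precomputed kernels from a filter bank keyed by quantized sub-pixel motion and a per-pixel directionality class, and accumulates their responses across frames; the bank layout, quarter-pixel quantization, and per-pixel blending are assumptions for illustration, not the paper's stored-filter format.

```python
import numpy as np
from scipy.ndimage import convolve

def sr_demosaic_step(frames, motions, filter_bank, direction_map):
    """Illustrative sketch of filter-bank adaptive filtering for one output
    channel (the bank keys and blending scheme are assumptions).

    frames       : list of coarsely aligned Bayer-domain frames
    motions      : list of (dx, dy) sub-pixel shifts, one per frame
    filter_bank  : dict {(qdx, qdy, direction_class): 2-D kernel}, computed offline
    direction_map: per-pixel directionality class (e.g., 0 = non-directional)
    """
    out = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dx, dy) in zip(frames, motions):
        qdx, qdy = round(dx * 4) / 4, round(dy * 4) / 4   # quantize to a 1/4-pixel grid
        for d in np.unique(direction_map):
            kernel = filter_bank[(qdx, qdy, int(d))]      # pick the matching stored filter
            response = convolve(frame.astype(np.float64), kernel, mode='nearest')
            out += np.where(direction_map == d, response, 0.0)
    return out / len(frames)
```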
Analyzing the depth structure implied in two-dimensional images is one of the most active research areas in computer vision. Here, we propose a method that utilizes texture within an image to derive its depth structure. Although most approaches for deriving depth from a single still image rely on luminance edges and shading to estimate scene structure, relatively little work has been done to exploit the abundant texture information in images. Our approach begins by analyzing two cues, the local spatial frequency and orientation distributions of the textures within an image, which are used to compute local slant information across the image. The slant and frequency information are then merged into a unified depth map, providing an important channel of image structure information that can be combined with other available cues. The capabilities of the algorithm are illustrated for a variety of images of planar and curved surfaces under perspective projection, in most of which the depth structure is effortlessly perceived by human observers. Since these operations are readily implementable in the neural hardware of early visual cortex, they represent a plausible model of the human perception of the depth structure of images from texture gradient cues.
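A minimal sketch of the first cue, local spatial frequency, is given below, assuming a windowed-FFT estimator over image patches; the patch size, window, and peak-picking rule are our assumptions rather than the authors' operators. The spatial gradient of the resulting map, combined with the orientation cue, would then feed the slant and depth estimates.

```python
import numpy as np

def local_frequency_map(img, patch=32, step=16):
    """Illustrative sketch: dominant local spatial frequency per patch,
    estimated from a windowed FFT (patch size, window, and peak picking
    are assumptions). Under perspective projection, frequency rising
    across the image indicates a surface slanting away from the viewer."""
    h, w = img.shape
    win = np.hanning(patch)[:, None] * np.hanning(patch)[None, :]
    fy, fx = np.meshgrid(np.fft.fftfreq(patch), np.fft.fftfreq(patch), indexing='ij')
    radius = np.hypot(fx, fy)                      # radial spatial frequency of each FFT bin
    freqs = np.zeros(((h - patch) // step + 1, (w - patch) // step + 1))
    for i, y in enumerate(range(0, h - patch + 1, step)):
        for j, x in enumerate(range(0, w - patch + 1, step)):
            spec = np.abs(np.fft.fft2(img[y:y + patch, x:x + patch] * win))
            spec[0, 0] = 0.0                       # ignore the DC component
            peak = np.unravel_index(np.argmax(spec), spec.shape)
            freqs[i, j] = radius[peak]             # dominant frequency of this patch
    return freqs
```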