Three-dimensional statistical iterative reconstruction (SIR) algorithms have the potential to significantly reduce image artifacts by minimizing a cost function that models the physics and statistics of the data-acquisition process in x-ray CT. SIR algorithms are important for a wide range of applications, including nonstandard geometries arising from irregular sampling, limited angular range, missing data, and low-dose CT. For iterative image reconstruction algorithms to be deployed in clinical settings, the images must be quantitatively accurate and computed in clinically useful times. We describe an acceleration method based on adaptively varying an update factor in the additive step of the alternating minimization (AM) algorithm. Our implementation combines this method with other acceleration techniques, such as ordered subsets (OS), originally proposed for transmission tomography by Ahn, Fessler, et al. [1]. Results on both an NCAT phantom and real clinical data from a Siemens Sensation 16 scanner demonstrate an improved convergence rate compared to straightforward implementations of the AM algorithm of O'Sullivan and Benac [2] with a Huber-type edge-preserving penalty, originally proposed by Lange [3]. On average, the proposed acceleration method yields a 2X speedup of the convergence rate for both the baseline and ordered-subsets implementations of the AM algorithm.
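As a toy illustration of the adaptive update-factor idea (not the paper's AM algorithm or cost model), an over-relaxation factor on an additive update can be grown while the cost keeps decreasing and shrunk back toward one otherwise. The quadratic cost, step size, and grow/shrink constants below are all illustrative assumptions:

```python
# Sketch: adaptively varying an update factor on an additive step,
# shown on a 1-D quadratic cost. All constants are illustrative.
def cost(x):
    return (x - 3.0) ** 2

def gradient(x):
    return 2.0 * (x - 3.0)

def accelerated_descent(x0, base_step=0.1, n_iters=50):
    x, alpha = x0, 1.0                      # alpha: adaptive update factor
    c_prev = cost(x)
    for _ in range(n_iters):
        delta = -base_step * gradient(x)    # nominal additive update
        x_try = x + alpha * delta           # over-relaxed candidate
        c_try = cost(x_try)
        if c_try < c_prev:                  # cost decreased: accept, grow factor
            x, c_prev = x_try, c_try
            alpha = min(alpha * 1.5, 4.0)
        else:                               # cost increased: reject, shrink factor
            alpha = max(alpha * 0.5, 1.0)
    return x
```

Because the candidate is only accepted when the cost decreases, the iteration is monotone in the cost while the growing factor lengthens the effective steps, which is the intuition behind the reported convergence speedup.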
Model-Based Image Reconstruction (MBIR) methods significantly enhance the quality of tomographic reconstruction relative to analytical techniques. However, the substantial computation time and memory required by MBIR limit its use in many practical real-time applications. With the increasing availability of parallel computing resources, distributed MBIR algorithms can overcome this limitation on computational performance. In this paper, we propose a novel distributed, iterative approach to Computed Tomography (CT) reconstruction based on the Multi-Agent Consensus Equilibrium (MACE) framework. We formulate CT reconstruction as a consensus optimization problem in which the objective function, and consequently the system matrix, is split across multiple disjoint view subsets. This produces multiple regularized sparse-view reconstruction problems that are tied together by a consensus constraint and can be solved in parallel within the MACE framework. Further, we solve each sub-problem inexactly, using only one full pass of the Iterative Coordinate Descent (ICD) optimization technique; our distributed approach is nevertheless convergent. Finally, we validate our approach with experiments on real 2D CT data.
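The consensus structure of MACE can be sketched schematically with scalar quadratic agents standing in for the paper's ICD-based sparse-view reconstruction operators. Each agent is the proximal map of its own data term, a reflected-average (Mann) iteration drives the agents to agreement, and the consensus value minimizes the summed objective. The agent definitions and parameters here are illustrative assumptions, not the paper's operators:

```python
# Minimal MACE-style consensus sketch with scalar quadratic agents.
# Agent i holds f_i(x) = (x - y[i])^2 / 2, whose proximal map is
# F_i(w) = (w + y[i]) / 2; the consensus minimizer of sum_i f_i is mean(y).
def mace_consensus(y, rho=0.5, n_iters=100):
    n = len(y)
    w = [0.0] * n                                           # one state per agent
    for _ in range(n_iters):
        fw = [(wi + yi) / 2.0 for wi, yi in zip(w, y)]      # agent updates F_i
        r = [2.0 * f - wi for f, wi in zip(fw, w)]          # reflection 2F - I
        avg = sum(r) / n
        gr = [2.0 * avg - ri for ri in r]                   # reflection 2G - I
        w = [(1 - rho) * wi + rho * gi for wi, gi in zip(w, gr)]  # Mann step
    # at the fixed point all agents agree; return their common value
    return sum((wi + yi) / 2.0 for wi, yi in zip(w, y)) / n
```

In the distributed setting, each `F_i` would be an expensive per-view-subset reconstruction executed on its own node, with only the averaging step requiring communication.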
One-sided ultrasonic non-destructive evaluation (UNDE) uses ultrasound signals to investigate and inspect structures that are accessible from only one side. A widely used reconstruction technique in UNDE is the synthetic aperture focusing technique (SAFT). SAFT produces fast reconstructions and reasonable images for simple structures. However, for large complex structures, SAFT reconstructions suffer from noise and artifacts. To resolve some of the drawbacks of SAFT, an ultrasonic model-based iterative reconstruction (MBIR) algorithm based on Bayesian estimation was proposed and showed significant enhancement over SAFT in reducing noise and artifacts. In this paper, we build on previous investigations of MBIR reconstruction of ultrasound data by proposing a spatially varying prior model to account for artifacts from deeper regions and a 3D regularizer to account for correlations between scans from adjacent regions. We demonstrate that the use of the new prior model in MBIR can significantly improve reconstructions compared to SAFT and the previously proposed MBIR technique.
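The SAFT baseline that MBIR is compared against is a delay-and-sum scheme: each image point accumulates the A-scan samples at the round-trip travel time from every transducer position. The geometry, sampling rate, and function names below are illustrative assumptions, not the paper's implementation:

```python
import math

def saft(scans, positions, pixels, c=1.0, fs=1.0):
    # Delay-and-sum sketch: scans[i][t] is the A-scan recorded at surface
    # position positions[i]; pixels is a list of (x, z) image points.
    image = []
    for (px, pz) in pixels:
        acc = 0.0
        for pos, scan in zip(positions, scans):
            d = math.hypot(px - pos, pz)        # one-way distance to the point
            t = int(round(2.0 * d / c * fs))    # round-trip sample index
            if 0 <= t < len(scan):
                acc += scan[t]                  # coherent summation
        image.append(acc)
    return image
```

Scatterers at the correct depth sum coherently across transducer positions, while mismatched locations pick up only incoherent samples, which is why SAFT sharpens simple structures but leaves noise and artifacts for complex ones.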
Computed Tomography (CT) is a non-invasive imaging technique that reconstructs cross-sectional images of scenes from a series of projections acquired at different angles. In applications such as airport security luggage screening, the presence of dense metal clutter causes beam hardening and streaking in the resulting conventionally formed images. These artifacts can lead to object splitting and intensity shading that make subsequent labeling and identification inaccurate. Conventional approaches to metal artifact reduction (MAR) have post-processed the artifact-filled images or interpolated the metal regions of the sinogram projection data. In this work, we examine the use of deep-learning-based methods to directly correct the observed sinogram projection data prior to reconstruction using a fully convolutional network (FCN). In contrast to existing learning-based CT artifact-reduction work, we operate completely in the sinogram domain and train a network over the entire sinogram (rather than just local image patches). Since the information in sinograms pertaining to objects is non-local, patch-based methods are not well matched to the nature of CT data. The use of an FCN provides better computational scaling than historical perceptron-based approaches. Using a poly-energetic CT simulation, we demonstrate the potential of this new approach in mitigating metal artifacts in CT.
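The conventional sinogram-interpolation MAR baseline mentioned above can be sketched in one dimension: corrupted detector bins along the metal trace are replaced by linear interpolation between the nearest uncorrupted neighbors. This toy sketch (the function name and handling of edge cases are illustrative assumptions) is the kind of completion that the learned sinogram-domain correction is meant to improve upon:

```python
def interpolate_metal_trace(row, mask):
    # Replace masked (metal-corrupted) detector bins in one sinogram row by
    # linear interpolation between the nearest uncorrupted neighbors.
    out = list(row)
    i, n = 0, len(row)
    while i < n:
        if mask[i]:
            j = i
            while j < n and mask[j]:
                j += 1                       # [i, j) is a corrupted run
            left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
            right = out[j] if j < n else left
            span = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / span
                out[k] = (1.0 - t) * left + t * right
            i = j
        else:
            i += 1
    return out
```

Because this fills each projection row independently, it ignores the non-local consistency constraints between sinogram rows; a network trained over the entire sinogram can exploit exactly that structure.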
In scanning-microscopy-based imaging techniques, there is a need to develop novel data acquisition schemes that reduce acquisition time and minimize sample exposure to the probing radiation. Sparse sampling schemes are ideally suited for such applications, where the images can be reconstructed from a sparse set of measurements. In particular, dynamic sparse sampling based on supervised learning has shown promising results for practical applications. However, a particular drawback of such methods is that they require training image sets with similar information content, which may not always be available. In this paper, we introduce SLADS-Net, a Supervised Learning Approach for Dynamic Sampling (SLADS) algorithm that uses a deep-neural-network-based training approach. We performed simulated dynamic-sampling experiments with SLADS-Net in which the training images have either similar or completely different information content compared to the testing images. We compare performance across training methods such as least squares, support vector regression, and deep neural networks. From these results, we observe that deep-neural-network-based training yields superior performance when the training and testing images are not similar. We also discuss the development of a pre-trained SLADS-Net that uses generic images for training; here, the neural network parameters are pre-trained so that users can apply SLADS-Net directly in imaging experiments.
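The dynamic-sampling loop underlying SLADS-style methods can be sketched on a 1-D toy signal: reconstruct from the pixels measured so far, score each unmeasured location, and measure the highest-scoring one next. In the sketch below, a simple local-disagreement heuristic stands in for the learned expected-reduction-in-distortion regressor (a deep network in SLADS-Net); the function names and the heuristic are illustrative assumptions:

```python
import random

def nearest_recon(n, measured):
    # Nearest-measured-neighbor reconstruction of a length-n signal.
    idxs = sorted(measured)
    return [measured[min(idxs, key=lambda k: abs(k - i))] for i in range(n)]

def slads_like_sampling(signal, n_initial=2, n_total=5, seed=0):
    rng = random.Random(seed)
    # 1. measure an initial random set of positions
    measured = {i: signal[i] for i in rng.sample(range(len(signal)), n_initial)}
    while len(measured) < n_total:           # 4. repeat until the budget is met
        recon = nearest_recon(len(signal), measured)   # 2. reconstruct
        best, best_u = None, -1.0
        for i in range(len(signal)):
            if i in measured:
                continue
            lo, hi = max(i - 1, 0), min(i + 1, len(signal) - 1)
            # proxy utility: disagreement of neighboring reconstructed values
            # (a learned regressor predicts this utility in SLADS/SLADS-Net)
            u = abs(recon[hi] - recon[lo])
            if u > best_u:
                best, best_u = i, u
        measured[best] = signal[best]        # 3. measure the best location
    return sorted(measured)
```

The learned regressor replaces the hand-crafted utility, which is precisely where the choice of training images (similar vs. generic content) enters.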
A supervised learning approach for dynamic sampling (SLADS) yielded a seven-fold reduction in the number of pixels sampled in hyperspectral Raman microscopy of pharmaceutical materials, with negligible loss in image quality (~0.1% error). Following validation with ground-truth samples, sparse sampling strategies were informed in real time by the preceding set of measurements. In brief, Raman spectra acquired at an initial set of random positions inform the next most information-rich location to sample within the field of view, which in turn iteratively informs subsequent locations until a stopping criterion based on the reconstruction error is met. Calculation times on the order of a few milliseconds were insignificant relative to the time frame for spectral acquisition at a given sampling location. The SLADS approach has the distinct advantage of being directly compatible with standard Raman instrumentation. Furthermore, SLADS is not limited to Raman imaging; it provides time savings in image reconstruction whenever the single-pixel measurement time is the limiting factor in image generation.
This paper presents a new method for tomographic reconstruction of volumes from sparse observational data. Application scenarios can be found in astrophysics, plasma physics, or wherever the amount of obtainable measurement data is limited. In the extreme case, only a single view of the phenomenon may be available. Our method uses input image data together with complex, user-definable assumptions about 3D density distributions. The parameter values of the user-defined model are fitted to the input image. This allows for incorporating complex, data-driven assumptions, such as helical symmetry, into the reconstruction process. We present two different sparsity-based reconstruction approaches. In the first method, novel virtual views are generated prior to tomographic reconstruction. In the second method, voxel groups of similar target densities are defined and used for group-sparsity reconstruction. We evaluate our method on real data from a high-energy plasma experiment and show that the reconstruction is consistent with the available measurements and 3D density assumptions. An additional experiment on simulated data demonstrates possible gains when adding an additional view to the presented reconstruction methods.
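Fitting a user-defined parametric density model to a single view can be sketched in miniature: forward-project the model for each candidate parameter set and keep the one that best matches the observed view. Here a single Gaussian blob stands in for the user-defined model, and a coarse grid search with a closed-form least-squares amplitude stands in for the fitting procedure; all names and the model itself are illustrative assumptions:

```python
import math

def project(params, xs):
    # Parametric density model: one Gaussian blob, viewed along one axis.
    amp, mu, sigma = params
    return [amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2) for x in xs]

def fit_to_view(view, xs, mus, sigmas):
    # Coarse grid search over the shape parameters (mu, sigma); for each
    # candidate shape the amplitude is solved in closed form by least squares.
    best = None
    for mu in mus:
        for sigma in sigmas:
            basis = project((1.0, mu, sigma), xs)
            den = sum(b * b for b in basis)
            amp = sum(b * v for b, v in zip(basis, view)) / den
            resid = sum((amp * b - v) ** 2 for b, v in zip(basis, view))
            if best is None or resid < best[0]:
                best = (resid, (amp, mu, sigma))
    return best[1]
```

Encoding stronger structure (e.g. helical symmetry) amounts to choosing a richer parametric model in place of the Gaussian blob, while the fit-to-view principle stays the same.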
Detecting materials of interest in containers using X-ray measurements is a critical problem in aviation security. Conventional X-ray systems obtain single- or dual-energy measurements, which are subsequently processed using computed tomography (CT) to obtain estimates of the attenuation properties of different regions. Recently, novel detectors have enabled the measurement of X-ray transmission intensities in multiple energy bands, leading to the use of spectral CT to estimate additional properties of regions to assist in material identification. In this paper, we discuss the problem of material classification using spectral CT. We introduce a new basis representation that can accurately represent energy-dependent X-ray transmission characteristics in a few dimensions, and propose a class of reconstruction techniques for obtaining features of different regions. We illustrate the advantages of our approach over alternatives using different basis representations, as well as over CT reconstructions in each energy band, using simulated spectral CT experiments. Our results show that our basis representation offers significant advantages in both detection and material classification performance, particularly in the presence of complex materials or mixtures involving atoms of high atomic number.
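The general idea of a low-dimensional basis representation of energy-dependent attenuation can be illustrated with the classic two-function decomposition (photoelectric-like and Compton-like components); note that this stand-in basis is not the new representation proposed in the paper. The coefficients of the expansion, fit here by 2x2 normal equations, serve as the low-dimensional material features:

```python
def fit_two_basis(energies, mu, f1, f2):
    # Least-squares fit of mu(E) ~ c1*f1(E) + c2*f2(E), solving the
    # 2x2 normal equations with Cramer's rule.
    a11 = sum(f1(e) ** 2 for e in energies)
    a12 = sum(f1(e) * f2(e) for e in energies)
    a22 = sum(f2(e) ** 2 for e in energies)
    b1 = sum(f1(e) * m for e, m in zip(energies, mu))
    b2 = sum(f2(e) * m for e, m in zip(energies, mu))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det
```

A basis that matches the physics well lets a handful of coefficients summarize the full energy-dependent curve, which is what makes the coefficients useful classification features.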
Cone-beam computed tomography (CT) is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time. However, the computational burden and the storage of the system matrix are challenging. In this paper, we present a separable representation of the system matrix that can be stored completely in memory and accessed cache-efficiently. This is achieved by quantizing the voxel positions for one of the separable subproblems. A parallelized algorithm, which we refer to as the zipline update, is presented that speeds up the computation of the solution by roughly 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and the algorithmic scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be performed from a sparse set of projection views while reducing artifacts visible in the conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov Random Field (MRF) prior and a Plug-and-Play denoiser.
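The memory benefit of a separable system matrix can be sketched as follows: instead of storing one weight per (detector element, voxel) pair, only a transaxial factor and an axial factor are stored, and forward projection reuses partial sums over the axial direction. The factor layout and names below are illustrative assumptions, not the paper's data structures:

```python
def forward_separable(trans, axial, x):
    # trans[c][i]: transaxial footprint of voxel column i on detector column c
    # axial[r][z]: axial footprint of voxel slice z on detector row r
    # x[i][z]:     voxel values; returns projections p[c][r]
    nc, nr, ni, nz = len(trans), len(axial), len(x), len(x[0])
    # Partial sums over z first: every voxel in column i shares trans[c][i],
    # which is what makes updating a whole column ("zipline") of voxels cheap.
    s = [[sum(axial[r][z] * x[i][z] for z in range(nz)) for r in range(nr)]
         for i in range(ni)]
    return [[sum(trans[c][i] * s[i][r] for i in range(ni)) for r in range(nr)]
            for c in range(nc)]
```

Storage drops from O(nc*nr*ni*nz) full-matrix entries to O(nc*ni + nr*nz) factor entries, and because all voxels in a column share the transaxial factor, updating them as a group amortizes that access, which is the intuition behind the zipline update.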