Tomography is a powerful technique to non-destructively determine the interior structure of an object. Usually, a series of projection images (e.g.\ X-ray images) is acquired from a range of different positions. From these projection images, a reconstruction of the object's interior is computed. Many advanced applications require fast acquisition, effectively limiting the number of projection images and imposing a level of noise on these images. These limitations result in artifacts (deficiencies) in the reconstructed images. Recently, deep neural networks have emerged as a powerful technique to remove these limited-data artifacts from reconstructed images, often outperforming conventional state-of-the-art techniques. To perform this task, the networks are typically trained on a dataset of paired low-quality and high-quality images of similar objects. The need for such a training dataset is a major obstacle to their use in many practical applications. In this thesis, we explore techniques to employ deep learning in advanced experiments where measuring additional objects is not possible.
Tomographic algorithms are often compared by evaluating them on certain benchmark datasets. For a fair comparison, these datasets should ideally (i) be challenging to reconstruct, (ii) be representative of typical tomographic experiments, (iii) be flexible enough to allow for different acquisition modes, and (iv) include enough samples to allow for comparison of data-driven algorithms. Current approaches typically satisfy some of these requirements, but not all. For example, real-world datasets are usually challenging and representative of a category of experimental examples, but are restricted to the acquisition mode that was used in the experiment and are often limited in the number of samples. Mathematical phantoms are often flexible and can sometimes produce enough samples for data-driven approaches, but can be relatively easy to reconstruct and are often not representative of typical scanned objects. In this paper, we present a family of foam-like mathematical phantoms that aims to satisfy all four requirements simultaneously. The phantoms consist of foam-like structures with more than 100,000 features, making them challenging to reconstruct and representative of common tomography samples. Because the phantoms are computer-generated, varying acquisition modes and experimental conditions can be simulated. An effectively unlimited number of random variations of the phantoms can be generated, making them suitable for data-driven approaches. We give a formal mathematical definition of the foam-like phantoms, and explain how they can be generated and used in virtual tomographic experiments in a computationally efficient way. In addition, several 4D extensions of the 3D phantoms are given, enabling comparisons of algorithms for dynamic tomography.
Finally, example phantoms and tomographic datasets are given, showing that the phantoms can be effectively used to make fair and informative comparisons between tomography algorithms.
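The generation procedure can be illustrated with a small, self-contained sketch. The 2D analogue below is our own illustration, not the paper's implementation; all function names and parameter values are assumptions. It builds a foam-like phantom by rejection sampling: random circular voids are proposed inside a solid disk and accepted only if they lie fully inside the disk and do not overlap any previously accepted void.

```python
import numpy as np

def generate_foam_phantom(n_voids=50, size=128, seed=0):
    """Sketch of a 2D foam-like phantom: a solid disk of material
    (value 1) containing randomly placed, non-overlapping circular
    voids (value 0).  The 3D phantoms replace circles with spheres."""
    rng = np.random.default_rng(seed)
    voids = []  # accepted voids as (x, y, radius)
    attempts = 0
    while len(voids) < n_voids and attempts < 100 * n_voids:
        attempts += 1
        x, y = rng.uniform(-1.0, 1.0, size=2)
        r = rng.uniform(0.01, 0.1)
        # reject voids that stick out of the supporting disk
        if np.hypot(x, y) + r > 0.95:
            continue
        # reject voids that overlap a previously accepted void
        if any(np.hypot(x - vx, y - vy) < r + vr for vx, vy, vr in voids):
            continue
        voids.append((x, y, r))
    # rasterize the phantom onto a size-by-size pixel grid
    coords = np.linspace(-1.0, 1.0, size)
    X, Y = np.meshgrid(coords, coords)
    img = (np.hypot(X, Y) <= 0.95).astype(np.float32)
    for vx, vy, vr in voids:
        img[np.hypot(X - vx, Y - vy) < vr] = 0.0
    return img, voids

phantom, voids = generate_foam_phantom()
```

Because the phantom is defined analytically by a list of void positions and radii, a fresh random variation costs only a new seed, and projections can in principle be computed from the geometry directly rather than from the rasterized image.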
Recovering a high-quality image from noisy indirect measurements is an important problem with many applications. For such inverse problems, supervised deep convolutional neural network (CNN)-based denoising methods have shown strong results, but the success of these supervised methods critically depends on the availability of a high-quality training dataset of similar measurements. For image denoising, methods are available that enable training without a separate training dataset by assuming that the noise in two different pixels is uncorrelated. However, this assumption does not hold for inverse problems, resulting in artifacts in the denoised images produced by existing methods. Here, we propose Noise2Inverse, a deep CNN-based denoising method for linear image reconstruction algorithms that does not require any additional clean or noisy data. Training a CNN-based denoiser is enabled by exploiting the noise model to compute multiple statistically independent reconstructions. We develop a theoretical framework which shows that such training indeed obtains a denoising CNN, assuming the measured noise is element-wise independent and zero-mean. On simulated CT datasets, Noise2Inverse demonstrates an improvement in peak signal-to-noise ratio and structural similarity index compared to state-of-the-art image denoising methods and conventional reconstruction methods, such as Total-Variation Minimization. We also demonstrate that the method significantly reduces noise in challenging real-world experimental datasets.
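The data-splitting step at the heart of this approach can be sketched as follows. The snippet below is our own toy illustration, not the paper's implementation: the function name, the choice of K, and the trivial "reconstruction" (a plain average standing in for a linear reconstruction algorithm) are all assumptions. It splits the noisy measurements into K subsets, reconstructs each subset separately, and pairs every sub-reconstruction with the mean of the others. Because the noise in disjoint measurement subsets is statistically independent, such pairs can serve as training input and target for a denoising network without any clean data.

```python
import numpy as np

def noise2inverse_pairs(measurements, reconstruct, k=4):
    """Build (input, target) training pairs from noisy measurements only.
    The measurements are split into k disjoint subsets; each subset is
    reconstructed on its own, and every sub-reconstruction is paired
    with the mean of the other k-1 sub-reconstructions."""
    subsets = np.array_split(np.arange(len(measurements)), k)
    recs = [reconstruct(measurements[idx]) for idx in subsets]
    pairs = []
    for i in range(k):
        target = recs[i]
        inp = np.mean([recs[j] for j in range(k) if j != i], axis=0)
        pairs.append((inp, target))
    return pairs

# Toy demo: 16 noisy observations of a 1D signal; "reconstruction" is
# just averaging the rows of a measurement subset.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, np.pi, 64))
measurements = clean + rng.normal(0.0, 0.5, size=(16, 64))
pairs = noise2inverse_pairs(measurements, lambda m: m.mean(axis=0))
```

In the actual method, a CNN would be trained to map each input to its paired target; since input and target noise are independent, minimizing the mean squared error drives the network toward the noise-free reconstruction.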