The next series of posts discusses my recently published paper, “Efficient, non-iterative estimator for imaging contrast agents with spectral x-ray detectors,” available for free download here. The paper extends the previous A-table estimator, described in this post, to basis sets with three or more dimensions so that it can be used with high-atomic-number contrast agents. It also compares the A-table estimator to an iterative estimator.
This post describes the software to implement the new estimator. The next posts describe the code for an iterative estimator, compare the performance of the new estimator to the iterative estimator and the CRLB, compare the new estimator with a neural network estimator, and finally discuss an alternate implementation using a neural network as the interpolator.
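To make the table-lookup idea concrete before the code posts, here is a minimal one-dimensional sketch (my own toy example, not the paper's software): precompute the log measurement on a grid of basis coefficients, then estimate the coefficient for a new measurement by interpolating the inverted table, with no iteration. The forward model below is a stand-in I made up to mimic beam-hardening curvature.

```python
import numpy as np

# Hypothetical monotonic forward model: log measurement l as a function
# of a single basis coefficient a. The log1p term is a stand-in for
# beam-hardening curvature, not the paper's actual model.
def forward(a, mu=0.2):
    return mu * a + 0.05 * np.log1p(a)

# Precompute the table once: l on a grid of a values.
a_grid = np.linspace(0.0, 30.0, 301)
l_grid = forward(a_grid)  # monotone increasing, so it can be inverted

# Non-iterative estimate: invert the table by linear interpolation.
def estimate(l_meas):
    return np.interp(l_meas, l_grid, a_grid)

a_true = 12.34
a_hat = estimate(forward(a_true))
print(abs(a_hat - a_true) < 1e-2)  # prints True
```

The new estimator generalizes this lookup to three or more dimensions; the one-dimensional version just shows why the computation time is fixed: it is a table interpolation, not an iteration to convergence.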
In the next posts, I will discuss my paper “Signal to noise ratio of energy selective x-ray photon counting systems with pileup,” which is available for free download here. The paper uses an idealized model to derive limits on the effects of pileup on the SNR of A-vector data. There have been many papers (see, for example, Overdick et al. [4], Taguchi et al. [3], and Taguchi and Iwanczyk [6]) that use more or less realistic models of photon counting detectors to predict the quality of images computed from their data. These models are necessarily complex because the state of the art in photon counting detectors is relatively primitive compared with the extreme count-rate requirements of diagnostic imaging. The complexity of detailed models makes it hard to generalize from their results. Moreover, as research continues, the properties of the detectors will improve and their response will approach an idealized limit. This has already happened with the energy-integrating detectors used in state-of-the-art medical imaging systems, whose noise levels have been reduced to the point that the principal source of noise is the fundamental quantum noise present in all measurements with x-ray photons.
In this post, I will describe the rationale for an idealized model of photon counting detectors with pulse height analysis that includes pileup, and I will illustrate it with the random data it generates. The following posts will show how the model can be applied to compute the SNR of systems with pileup and to compare that SNR with the full-spectrum optimal value. The model will also be used to determine the response time required to keep the reduction in SNR due to pileup small.
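To give a feel for the kind of random data such a model generates, here is a minimal sketch (my own toy construction, not necessarily the paper's model): photons arrive as a Poisson process, and a simple nonparalyzable dead time discards any photon arriving too soon after the last recorded one. The rate, dead time, and exposure time below are illustrative values I chose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonparalyzable dead-time model (an assumption for illustration):
# photons arrive as a Poisson process with rate n, and any photon within
# tau seconds of the last *recorded* photon is lost to pileup.
def recorded_counts(n, tau, t_total, rng):
    n_photons = rng.poisson(n * t_total)
    arrivals = np.sort(rng.uniform(0.0, t_total, n_photons))
    recorded = 0
    last = -np.inf
    for t in arrivals:
        if t - last >= tau:
            recorded += 1
            last = t
    return recorded

n, tau, t_total = 1e6, 1e-7, 1e-2   # 10^6 cps, 100 ns dead time, 10 ms
m = recorded_counts(n, tau, t_total, rng)
expected = n * t_total / (1.0 + n * tau)  # classic nonparalyzable formula
print(m, expected)
```

Running this shows the simulated recorded counts clustering near the classic nonparalyzable prediction n·t/(1 + n·τ), i.e. roughly a 9% count loss at these rates; the idealized model in the paper addresses what that loss does to the SNR of the A-vector data.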
The previous post in this series discussed the mathematics behind the increase in noise with dimensionality, the number of basis functions used to approximate the attenuation coefficient. The series is based on my recently published paper, “Dimensionality and noise in energy selective x-ray imaging,” available for free download here. This post describes simulations of the increase in noise with an object composed of body materials and an x-ray tube spectrum. The next post will show how to make low-noise images, with the same properties as conventional x-ray images, from the energy spectrum data. The main purpose of these last two posts is to provide and explain the code to reproduce the images in the paper.
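The core mathematical effect can be seen in a toy linear-algebra sketch (my own example, not the paper's simulation): in a least-squares fit with fixed measurement noise, the coefficient covariance is the inverse Gram matrix of the basis, and adding a basis function never decreases the variances of the coefficients already in the fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rows are measurements, columns are basis functions (a made-up basis,
# just to exhibit the variance-inflation effect).
M3 = rng.standard_normal((20, 3))   # three basis functions
M2 = M3[:, :2]                      # the same fit with only two

# With unit measurement noise, the coefficient covariance of a
# least-squares fit is (M^T M)^{-1}; compare the shared diagonals.
var2 = np.diag(np.linalg.inv(M2.T @ M2))
var3 = np.diag(np.linalg.inv(M3.T @ M3))[:2]

print(bool(np.all(var3 >= var2 - 1e-9)))  # prints True
```

This is the standard variance-inflation result for adding a regressor; the paper quantifies how large the effect is for realistic attenuation-coefficient basis sets and x-ray spectra.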
Not only is the singular value decomposition (SVD) fundamental to matrix theory, but it is also widely used in data analysis. I have used it several times in my posts. For example, here and here, I used the singular values to quantify the intrinsic dimensionality of attenuation coefficients. In this post, I applied the SVD to derive the optimal basis functions to approximate the attenuation coefficient and compared them to the material attenuation coefficient basis set [1]. All of these posts were based on the SVD approximation theorem, which allows us to find the nearest matrix of a given rank to our original matrix. This is an extremely powerful result because it allows us to reduce the dimensionality of a problem while still retaining most of the information.
In this post, I will discuss the SVD approximation theorem from an intuitive standpoint. The math here will be even less rigorous than my usual low standard since my purpose is to convey an understanding of how the theorem works and what its limitations are. If you want a mathematical proof, you can find it in many places, such as Theorems 5.8 and 5.9 of the book Numerical Linear Algebra [2] by Trefethen and Bau. Those proofs do not provide much insight into the approximation, so I will offer two ways of looking at the theorem: a geometric interpretation and an algebraic interpretation.
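The theorem is easy to check numerically. In this small sketch (a random test matrix of my own, just for illustration), truncating the SVD to rank k gives the nearest rank-k matrix, and the 2-norm error is exactly the first discarded singular value:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))

# Rank-k truncation: keep only the k largest singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: A_k is the nearest rank-k matrix in the 2-norm, and the
# approximation error equals the (k+1)-th singular value.
err = np.linalg.norm(A - A_k, 2)
print(np.isclose(err, s[k]))  # prints True
```

Because the singular values of measured attenuation coefficients decay rapidly, this is exactly why a two- or three-function basis set captures most of the information.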
You may ask, what is the fundamental advantage of the new estimator? Yes, it is faster than the iterative method, but so what? With Moore’s law, we can just throw silicon at the problem by doing the processing in parallel. I have two responses. The first is that the iterative estimator is not only slow but also takes a random amount of time to complete the calculation. This is a substantial problem because CT scanners are real-time systems: the calculations have to be done in a fixed time or the data are lost. The gantry cannot be stopped to wait for a long iteration to complete!
The second problem is that, as it has been implemented in the research literature, the iterative estimator requires measurements of the x-ray tube spectrum and the detector energy response to compute the likelihood of a given measurement. These are difficult measurements that cannot be made at most medical institutions, and because the system components drift, they have to be repeated periodically to ensure accurate results. There may be a way to implement an iterative estimator with simpler measurements, but I am not aware of one.
In this post, I will show how the parameters required for the new estimator can be determined from measurements on a phantom placed in the system. This could be done easily by personnel at medical institutions and is similar to quality assurance measurements now done routinely on other medical systems.
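As a hypothetical one-dimensional sketch of the idea (the forward model, phantom thicknesses, and polynomial form here are stand-ins I invented, not the paper's parameterization): measure the log signal through phantom steps of known composition, then fit a smooth calibration curve by least squares. The fitted coefficients are the estimator's parameters, obtained without measuring the tube spectrum or detector response.

```python
import numpy as np

# Hypothetical calibration: a step-wedge phantom with known basis
# coefficients a (here 1D thicknesses) produces measured log signals l.
a_known = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
l_meas = 0.2 * a_known + 0.05 * np.log1p(a_known)  # stand-in for real data

# Fit a low-order polynomial l(a) by least squares; its coefficients are
# the calibration parameters the estimator needs.
coeffs = np.polyfit(a_known, l_meas, 3)
l_fit = np.polyval(coeffs, a_known)
print(np.max(np.abs(l_fit - l_meas)))  # small residual at the steps
```

In practice the calibration is multidimensional, but the procedure stays the same: known phantom compositions in, fitted table parameters out, which is why it resembles routine quality assurance measurements.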