In this post, I continue the discussion of my paper “Signal to noise ratio of energy selective x-ray photon counting systems with pileup” [2], which is available for free download here. The computation of the SNR is based on the approach described in my previous paper, “*Near optimal energy selective x-ray imaging system performance with simple detectors*” [1], which is also available for free download here. The approach is extended to data with pileup.

The “Near optimal …” paper shows that, regardless of whether there is pileup, if the noise has a multivariate normal distribution and if the feature is sufficiently thin that the covariance in the background and feature regions is approximately the same, the performance is determined by the signal to noise ratio. So the first thing that has to be done is to show that the data with pileup satisfy these conditions. My plan for the discussion of the SNR paper is therefore as follows.
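To make the role of the SNR concrete, here is a minimal Python sketch (my papers use Matlab) of the quantity in question: for multivariate normal data with a common covariance C, the performance is set by SNR² = d′C⁻¹d, where d is the difference of the expected data between the background and feature regions. All of the numbers below are made up for illustration.

```python
import numpy as np

# Hedged sketch: SNR^2 for two-channel data when the background and feature
# measurements are multivariate normal with (approximately) the same
# covariance C.  The values are illustrative, not from the paper.
mu_background = np.array([100.0, 80.0])   # expected data, background region
mu_feature    = np.array([ 97.0, 76.0])   # expected data, thin feature region
C = np.array([[100.0, 40.0],
              [ 40.0,  80.0]])            # common covariance matrix

d = mu_feature - mu_background            # signal: change in expected data
snr2 = d @ np.linalg.solve(C, d)          # SNR^2 = d' * inv(C) * d
print(np.sqrt(snr2))
```

Note that the off-diagonal terms of C matter: correlated noise in the two channels can either help or hurt relative to treating the channels independently, which is why the full covariance with pileup has to be worked out before the SNR can be computed.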

- First, I will use the idealized model and the Matlab function for generating random recorded counts, both described in the previous post, to develop code that computes random samples of data from NQ and PHA detectors with pileup. I will use these functions to derive and validate formulas for the expected values and covariance of the data, which are required to compute the SNR.
- I will then use these models and software to determine the conditions so that the probability distribution of the data can be approximated as multivariate normal.
- Next, I will show how to use the Cramér-Rao lower bound (CRLB) to compute the A-vector covariance with pileup data. I will use this to show that, under some conditions, we can use the constant covariance approximation to the CRLB with pileup data just as we can with non-pileup data, as shown in this post.
- Finally, I will apply these results to compute the reduction of SNR as pileup increases.
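As a preview of the first step, the following Python sketch generates Monte Carlo samples of NQ data with pileup. This is not the Matlab code from the posts: the dead-time model (nonparalyzable, with photons arriving during the dead time simply lost) and all parameter values are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch: Monte Carlo samples of data from an idealized NQ detector
# with pileup -- recorded photon counts N and total recorded energy Q.
# A nonparalyzable dead time is assumed; photons arriving while the
# detector is dead are lost.  All parameters are illustrative.
rate, T, tau = 5.0, 10.0, 0.05   # mean photon rate, exposure time, dead time

def nq_sample():
    """One measurement: return (N, Q) for a single exposure."""
    n_arrivals = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, n_arrivals))
    energies = rng.uniform(20.0, 100.0, n_arrivals)  # toy flat spectrum (keV)
    n_rec, q_rec, last = 0, 0.0, -np.inf
    for t, e in zip(times, energies):
        if t - last >= tau:          # detector is live: photon is recorded
            n_rec += 1
            q_rec += e
            last = t
    return n_rec, q_rec

samples = np.array([nq_sample() for _ in range(2000)])
mean_nq = samples.mean(axis=0)       # estimates of E[N], E[Q]
cov_nq = np.cov(samples.T)           # 2x2 covariance of (N, Q)
print(mean_nq)
print(cov_nq)
```

Sample means and covariances computed this way are what the analytical formulas for the expected values and covariance will be validated against.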

In this post, I will use the Matlab function discussed in the previous post to compute random samples of recorded photon counts (N) and total energy (Q) data. These are data from an NQ detector with pileup. I will use these data to validate the formula for the covariance derived in Appendix C of the SNR paper. I will present Matlab code to reproduce Fig. 9 of the paper, which shows the covariance and the correlation of the data as a function of the dead time. I will also use the same data to validate the formulas for the expected value and variance of the recorded counts and the total energy as a function of dead time. These formulas are described in Section 2.E of the paper.
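Ahead of the paper's Section 2.E formulas, a quick sanity check is possible with the classic renewal-theory results for a nonparalyzable detector: the recorded events form a renewal process whose intervals are the dead time plus an exponential wait, giving E[N] ≈ λT/(1 + λτ) and var[N] ≈ λT/(1 + λτ)³. The Python sketch below checks these textbook formulas by simulation; they may differ in detail from the paper's formulas, and the parameter values are my own.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged check of the classic nonparalyzable dead-time results
#     E[N]   ~  lam*T / (1 + lam*tau)
#   var[N]   ~  lam*T / (1 + lam*tau)**3
# against a renewal-process simulation.  Textbook formulas, illustrative
# parameters -- not necessarily identical to Section 2.E of the paper.
lam, T, tau, trials = 200.0, 1.0, 0.002, 4000

counts = np.empty(trials)
for i in range(trials):
    t = rng.exponential(1.0 / lam)               # first recorded photon
    n = 0
    while t <= T:
        n += 1
        t += tau + rng.exponential(1.0 / lam)    # dead time, then next photon
    counts[i] = n

print(counts.mean(), lam * T / (1 + lam * tau))       # simulated vs. analytic mean
print(counts.var(), lam * T / (1 + lam * tau) ** 3)   # simulated vs. analytic variance
```

Note that the variance falls faster than the mean as the dead time grows, so the recorded counts are sub-Poisson with pileup, which is one reason the covariance formulas need separate validation.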


In the next posts, I will discuss my paper “Signal to noise ratio of energy selective x-ray photon counting systems with pileup”, which is available for free download here. The paper uses an idealized model to derive limits on the effects of pileup on the SNR of A-vector data. There have been many papers (see, for example, Overdick et al. [4], Taguchi et al. [3], and Taguchi and Iwanczyk [6]) that use more or less realistic models of photon counting detectors to predict the quality of images computed from their data. These models are necessarily complex since the state of the art in photon counting detectors is relatively primitive compared with the extreme count rate requirements of diagnostic imaging. The complexity of detailed models makes it hard to generalize from their results. Moreover, as research continues, the properties of the detectors will improve and their response will approach an idealized limit. This is the case with the energy integrating detectors used in state of the art medical imaging systems, whose noise levels have been reduced so that the principal source of noise is the fundamental quantum noise present in all measurements with x-ray photons.

In this post, I will describe the rationale for an idealized model of photon counting detectors with pulse height analysis with pileup and illustrate it with the random data it generates. The following posts will show how the model can be applied to compute the SNR of systems with pileup and to compare the SNR to the full spectrum optimal value. The model will be used to determine the allowable response time so that the reduction in SNR due to pileup is small.


The previous post in this series discussed the mathematics behind the increase in noise with the dimensionality, the number of basis functions used to approximate the attenuation coefficient. The series is based on my recently published paper, **Dimensionality and noise in energy selective x-ray imaging**, available for free download here. This post describes simulations of the increase in noise with an object composed of body materials and an x-ray tube spectrum. The next post will show how to make low-noise images with the same properties as conventional x-ray images from the energy spectrum data. The main purpose of these last two posts is to provide and explain the code needed to reproduce the images in the paper.


In the next few posts I will discuss my paper, **Dimensionality and noise in energy selective x-ray imaging**, available for free download here. I will elaborate on the physical and mathematical background and explain how to reproduce the figures.

With my approach to energy selective imaging, the x-ray attenuation coefficient is approximated as a linear combination of functions of energy multiplied by constants that are independent of energy. The number of functions required is the dimensionality. The basic premise of the paper is that choosing the dimensionality is really a pragmatic tradeoff: more information requires a larger dimensionality, but the accompanying increase in noise requires a higher dose and more expensive equipment to reduce it to a level where the resultant images are clinically useful. The bottom line of the paper is that with biological materials such as soft tissue, bone, and fat, only two dimensions are practical, but if an externally administered contrast agent containing a high atomic number element such as iodine is included, then three and maybe more dimensions are possible.
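The information side of that tradeoff can be illustrated with a toy least-squares fit. The Python sketch below (made-up basis functions and target curve, not real attenuation data) approximates an energy-dependent curve with one, two, and three basis functions and shows the fit error falling as the dimensionality grows; what it does not show is the noise penalty, which is the subject of the paper.

```python
import numpy as np

# Hedged toy illustration (made-up functions, not real attenuation data):
# approximate an energy-dependent curve mu(E) by a linear combination of
# basis functions and watch the least-squares error fall as the
# dimensionality -- the number of basis functions -- increases.
E = np.linspace(20.0, 120.0, 50)              # energies in keV
basis = np.column_stack([E ** -3.0,           # photoelectric-like term
                         np.exp(-E / 60.0),   # Compton-like stand-in
                         np.exp(-E / 25.0)])  # hypothetical third function

# Target curve: deliberately a combination of all three basis functions.
mu = 3.0e4 * E ** -3.0 + 0.8 * np.exp(-E / 60.0) + 0.1 * np.exp(-E / 25.0)

errors = []
for dim in (1, 2, 3):
    a, *_ = np.linalg.lstsq(basis[:, :dim], mu, rcond=None)
    errors.append(np.sqrt(np.mean((basis[:, :dim] @ a - mu) ** 2)))
print(errors)   # RMS error shrinks with each added dimension
```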
