AprendBlog » Physics


Dec 29 2014

SNR with pileup-2: Overall plan and NQ detector statistics with pileup

Tag: Implementation, Noise, Physics | admin @ 12:33 pm
In this post, I continue the discussion of my paper “Signal to noise ratio of energy selective x-ray photon counting systems with pileup”[2], which is available for free download here. The computation of the SNR is based on the approach described in my previous paper, “Near optimal energy selective x-ray imaging system performance with simple detectors[1]”, which is available for free download here. The approach is extended to data with pileup.
The “Near optimal …” paper shows that, regardless of whether there is pileup, if the noise has a multivariate normal distribution and if the feature is sufficiently thin that the covariance in the background and feature regions is approximately the same, then the performance is determined by the signal to noise ratio. So the first thing that has to be done is to show that the data with pileup satisfy these conditions. My plan for the discussion of the SNR paper is therefore as follows.
  • First, I will use the idealized model and the Matlab function for generating random recorded counts, both described in the previous post, to develop code to compute random samples of data from NQ and PHA detectors with pileup. I will use these functions to derive and validate formulas for the expected values and covariance of the data. These are required to compute the SNR.
  • I will then use these models and software to determine the conditions so that the probability distribution of the data can be approximated as multivariate normal.
  • Next, I will show how to use the Cramér-Rao lower bound (CRLB) to compute the A-vector covariance with pileup data. I will use this to show that, under some conditions, we can use the constant covariance approximation to the CRLB with pileup data just as we can with non-pileup data, as shown in this post.
  • Finally, I will apply these results to compute the reduction of SNR as pileup increases.
In this post, I will use the Matlab function discussed in the previous post to compute random samples of recorded photon counts (N) and total energy (Q) data. These are data from an NQ detector with pileup. I will use these data to validate the formula for the covariance derived in Appendix C of the SNR paper. I will present Matlab code to reproduce Fig. 9 of the paper, which shows the covariance and the correlation of the data as a function of the dead time. I will also use the same data to validate the formulas for the expected value and variance of the recorded counts and the total energy as a function of dead time. These formulas are described in Section 2.E of the paper.
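As an illustration of the kind of simulation involved, here is a minimal Matlab sketch. It is my own toy construction, not the function from the previous post: it draws Poisson photon arrivals during an exposure, applies an assumed non-paralyzable dead time to the counter, integrates all of the deposited energy into Q, and repeats the exposure many times so the sample mean and covariance of (N, Q) can be compared with the analytical formulas. All parameter values are assumed.

% Minimal Matlab sketch (not the code from the previous post): Monte Carlo
% of an idealized NQ detector with an assumed non-paralyzable dead time.
ntrials = 5000;            % number of independent exposures
rate    = 5e6;             % mean photon arrival rate (photons/s), assumed
T       = 100e-6;          % exposure time (s), assumed
tau     = 100e-9;          % counter dead time (s), assumed
Emean   = 60;  Esig = 15;  % toy Gaussian energy spectrum (keV)
NQ = zeros(ntrials, 2);    % columns: recorded counts N, total energy Q
for k = 1:ntrials
    nphot = poissrnd(rate*T);              % photons incident in this exposure
    t     = sort(rand(nphot,1)*T);         % their arrival times
    Eph   = Emean + Esig*randn(nphot,1);   % their energies
    N = 0;  Q = 0;  tlast = -inf;
    for j = 1:nphot
        if t(j) - tlast >= tau             % photon starts a new recorded event
            N = N + 1;
            tlast = t(j);
        end
        Q = Q + Eph(j);                    % energy integrator assumed unaffected by the dead time
    end
    NQ(k,:) = [N, Q];
end
disp(mean(NQ))   % sample expected values of N and Q
disp(cov(NQ))    % sample covariance, to compare with the analytical formula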

more →


Dec 15 2014

SNR with pileup-1

Tag: Implementation, Noise, Physics, software | admin @ 4:04 pm
In the next posts, I will discuss my paper “Signal to noise ratio of energy selective x-ray photon counting systems with pileup”, which is available for free download here. The paper uses an idealized model to derive limits on the effects of pileup on the SNR of A-vector data. There have been many papers (see, for example, Overdick et al.[4], Taguchi et al.[3], and Taguchi and Iwanczyk[6]) that use more or less realistic models of photon counting detectors to predict the quality of images computed from their data. These models are necessarily complex, since the state of the art is relatively primitive compared with the extreme count rate requirements of diagnostic imaging. The complexity of detailed models makes it hard to generalize from the results. Moreover, as research continues, the properties of the detectors will improve and their response will approach an idealized limit. This is the case with the energy integrating detectors used in state-of-the-art medical imaging systems, whose noise levels have been reduced so that the principal source of noise is the fundamental quantum noise present in all measurements with x-ray photons.

 

In this post, I will describe the rationale for an idealized model of photon counting detectors with pulse height analysis in the presence of pileup and illustrate it with the random data it generates. The following posts will show how the model can be applied to compute the SNR of systems with pileup and to compare that SNR with the full spectrum optimal value. The model will be used to determine the allowable response time so that the reduction in SNR due to pileup is small.
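To make the pileup effect concrete, the Matlab sketch below is a toy construction of my own, not the model in the paper: photons that arrive within an assumed dead time of a recorded event have their energies added to that event, so the recorded pulse height spectrum is distorted and shifted toward higher energies. All parameters and bin edges are assumed.

% Toy Matlab sketch of PHA data with pileup (assumed parameters, not the paper's model).
rate  = 5e6;  T = 100e-6;  tau = 200e-9;   % assumed rate, exposure time, dead time
edges = [20 45 70 95 120];                 % assumed PHA bin edges (keV)
nphot = poissrnd(rate*T);                  % photons incident in one exposure
t     = sort(rand(nphot,1)*T);             % arrival times
Eph   = 20 + 100*rand(nphot,1);            % toy uniform spectrum, 20 to 120 keV
Erec  = [];  tstart = -inf;
for j = 1:nphot
    if t(j) - tstart >= tau                % start a new recorded event
        Erec(end+1,1) = Eph(j);            %#ok<AGROW>
        tstart = t(j);
    else                                   % pileup: energy adds to the current event
        Erec(end) = Erec(end) + Eph(j);
    end
end
Nbin = histcounts(Erec, [edges inf]);      % recorded counts in each PHA bin
disp(Nbin)                                 % piled-up events can land above 120 keV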

more →


Nov 09 2014

Improve noise by throwing away photons?

Tag: Clinical hardware, Noise, Physics | admin @ 11:48 am
Photon counting systems with pulse height analysis (PHA) count the number of photons whose energy falls within a set of energy ranges, which I will call bins. Usually the bins are contiguous, non-overlapping, and span the incident energy spectrum, so each photon falls within exactly one bin. A paper[6] by Wang and Pelc showed that the A-vector noise variance can be decreased by using bins that are not contiguous. That is, if we use bins that cover only the low and high energy regions and do not include intermediate energies, we can lower the noise variance. Photons with energies in these intermediate regions are not counted; that is, they are thrown away. Improving noise by throwing away photons is an interesting concept, and I will discuss it in this post. It turns out to be an example of a situation, which arises often, where the choice of quality measure fundamentally changes the hardware design, so it is important to study it.
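One way to see how the bin choice enters is to compute the Cramér-Rao bound on the A-vector variance for a given set of bins. The Matlab sketch below is my own toy construction with an assumed spectrum, basis functions, and bin edges; it will not reproduce the Wang and Pelc numbers, but it shows the type of calculation that lets a contiguous and a non-contiguous bin arrangement be compared.

% Toy Matlab sketch (assumed spectrum, basis functions, and bins; not the
% Wang-Pelc calculation): CRLB on the A-vector variance for two bin choices.
E  = (20:120)';                          % energy grid (keV), assumed
S  = 1e5*exp(-((E-60).^2)/(2*20^2));     % toy incident spectrum (counts per keV)
f1 = (E/60).^-3;                         % toy photoelectric-like basis function
f2 = ones(size(E));                      % toy Compton-like basis function
A  = [2; 1];                             % assumed object A-vector
trans = exp(-(A(1)*f1 + A(2)*f2));       % transmission through the object
binsets = { {[20 70], [70 121]}, ...     % contiguous bins
            {[20 50], [90 121]} };       % bins with a gap at intermediate energies
for s = 1:numel(binsets)
    bins = binsets{s};  K = numel(bins);
    lam = zeros(K,1);  dlam = zeros(K,2);
    for k = 1:K
        idx = E >= bins{k}(1) & E < bins{k}(2);
        lam(k)    = sum(S(idx).*trans(idx));              % expected bin count
        dlam(k,:) = -[sum(S(idx).*trans(idx).*f1(idx)), ...
                      sum(S(idx).*trans(idx).*f2(idx))];  % d(lambda)/dA
    end
    F = dlam' * diag(1./lam) * dlam;     % Fisher information for independent Poisson bins
    C = inv(F);                          % CRLB covariance of the A-vector estimates
    fprintf('bin set %d: var(a1) = %.2e, var(a2) = %.2e\n', s, C(1,1), C(2,2));
end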

more →


Sep 04 2014

Dimensionality and noise in energy selective x-ray imaging-Part 3: low noise conventional images

Tag: Noise, Physics, software | admin @ 11:06 am
I have been discussing my recently published paper, Dimensionality and noise in energy selective x-ray imaging, available for free download here. In this post, I will show how to create low noise images with properties analogous to conventional images from the energy spectrum data used in the previous two posts of this series to compute the A-vector images. The results verify that the noise in the ‘conventional’ images computed from energy spectrum information is lower than in images computed from the total number of photons only.

more →


Aug 22 2014

Dimensionality and noise in energy selective x-ray imaging-Part 2

Tag: Math, Noise, Physics, software | admin @ 11:34 am

The previous post in this series discussed the mathematics behind the increase in noise with the dimensionality, that is, the number of basis functions used to approximate the attenuation coefficient. The series of posts is based on my recently published paper, Dimensionality and noise in energy selective x-ray imaging, available for free download here. This post describes simulations of the increase in noise with an object composed of body materials and an x-ray tube spectrum. The next post will show how to make low-noise images with the same properties as conventional x-ray images from the energy spectrum data. The main purpose of these last two posts is to provide and explain the code to reproduce the images in the paper.

 
more →


Jul 23 2014

Dimensionality and noise in energy selective x-ray imaging-Part 1

Tag: Math, Noise, Physics | admin @ 2:18 pm
In the next few posts I will discuss my paper, Dimensionality and noise in energy selective x-ray imaging, available for free download here. I will elaborate on the physical and mathematical background and explain how to reproduce the figures.

With my approach to energy selective imaging, the x-ray attenuation coefficient is approximated as a linear combination of functions of energy multiplied by constants that are independent of energy. The number of functions required is the dimensionality. The basic premise of the paper is that the choice of dimensionality is really a pragmatic tradeoff between more information, which requires a larger dimensionality, and the increase in noise, which requires a higher dose and more expensive equipment to reduce it to a level where the resulting images are clinically useful. The bottom line of the paper is that with biological materials such as soft tissue, bone, and fat, only two dimensions are practical, but if an externally administered contrast agent containing a high atomic number element such as iodine is included, then three and perhaps more dimensions are possible.
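In the two-dimensional case the approximation is µ(E) ≈ a1 f1(E) + a2 f2(E), with the coefficients a1 and a2 independent of energy. The Matlab sketch below is a toy illustration with assumed basis functions and synthetic data, not material data from the paper; it shows the decomposition as a simple least-squares fit.

% Toy Matlab sketch of a two-function basis decomposition (assumed basis
% functions and synthetic data, not material data from the paper).
E  = (30:5:120)';                        % energies (keV), assumed grid
f1 = (E/60).^-3;                         % photoelectric-like basis function
f2 = ones(size(E));                      % Compton-like basis function (flattened)
mu = 0.7*f1 + 0.25*f2 + 0.003*randn(size(E));   % synthetic 'measured' attenuation coefficient
a  = [f1 f2] \ mu;                       % least-squares coefficients: the a-values
fprintf('a1 = %.3f, a2 = %.3f\n', a(1), a(2));
% Adding a third basis function, e.g. for a high atomic number contrast
% material, increases the dimensionality and, as the paper discusses, the noise.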

more →


Dec 26 2013

Parameters for the estimator

You may ask, what is the fundamental advantage of the new estimator? Yes, it is faster than the iterative method, but so what? With Moore’s law, we can just throw silicon at the problem by doing the processing in parallel. I have two responses. The first is that not only is the iterative estimator slow, it also takes a random amount of time to complete the calculation. This is a substantial problem since CT scanners are real-time systems. The calculations have to be done in a fixed time or the data are lost. The gantry cannot be stopped to wait for a long iteration to complete!
The second problem is that, as it has been implemented in the research literature, the iterative estimator requires measurement of the x-ray tube spectrum and the detector energy response to compute the likelihood for a given measurement. These are difficult measurements that cannot be done at most medical institutions. Because of drift of the system components, the measurements have to be repeated periodically to ensure accurate results. There may be a way to implement an iterative estimator with simpler measurements, but I am not aware of one.
In this post, I will show how the parameters required for the new estimator can be determined from measurements on a phantom placed in the system. This could be done easily by personnel at medical institutions and is similar to quality assurance measurements now done routinely on other medical systems.
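As a rough sketch of how a phantom calibration could work in principle, the Matlab code below is a generic illustration of my own, not the procedure in the paper: measure the log data for a set of phantom steps with known A-vectors, fit a low order polynomial that maps the log data back to A, and use the fitted coefficients as the estimator parameters. The forward model h and all numbers are assumptions made only for this sketch.

% Generic Matlab sketch of calibrating an estimator from phantom measurements
% (my own illustration, not the paper's procedure; all names and values are assumed).
[a1g, a2g] = ndgrid(0:0.5:3, 0:0.25:1.5);   % calibration grid of A-values (assumed)
Acal = [a1g(:), a2g(:)];                    % known A-vectors of the phantom steps
% A toy nonlinear forward model stands in for the measured calibration log data:
h = @(A) [1.0*A(:,1) + 0.6*A(:,2) - 0.02*A(:,1).^2, ...
          0.7*A(:,1) + 0.9*A(:,2) - 0.01*A(:,2).^2];
Lcal = h(Acal) + 0.002*randn(size(Acal));   % 'measured' calibration log data
% Fit a quadratic polynomial in the log data that returns A; the fitted
% coefficients are the estimator parameters determined by the calibration:
P    = [ones(size(Lcal,1),1), Lcal, Lcal.^2, Lcal(:,1).*Lcal(:,2)];
coef = P \ Acal;                            % one column of coefficients per A component
% Apply the calibrated estimator to a new measurement:
Lnew = h([1.2, 0.8]);
Pnew = [1, Lnew, Lnew.^2, Lnew(1)*Lnew(2)];
Ahat = Pnew * coef                          % estimated A-vector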

more →


Oct 30 2013

Rationale for the new estimator

Tag: Implementation, Noise, Physics | admin @ 10:20 am
The past two posts have discussed estimators for A-vector data. I showed that with the same number of measurement spectra as the A-vector dimension, any estimator that solves the deterministic equations is the maximum likelihood estimator (MLE), and it will achieve the Cramér-Rao lower bound (CRLB). If there are more measurement spectra than the dimension, then the polynomial estimator, which works well for the equal case, has very poor performance, giving a variance that can be several hundred times larger than the CRLB. I showed by simulations that with more measurements than the dimension the iterative MLE does give a variance close to the CRLB, but it has substantial problems. As with all iterative algorithms, the computation time is long and random, and it may fail to converge at all if the initial estimate is too far from the actual value. As implemented by Schlomka et al.[2], it also requires measurements of the x-ray source spectrum and the detector spectral response. These measurements are difficult, time consuming, and require laboratory equipment that is not usually available in medical institutions.
In this post, I will give an intuitive explanation of the operation of a new estimator that I introduced in my paper[1] “Estimator for photon counting energy selective x-ray imaging with multi-bin pulse height analysis,” which is available for free download here. The estimator is efficient and can be implemented with data that can be measured at medical institutions. The details of the estimator are described in the paper. Here, I will discuss the background and give a rationale for how it works.

more →


Oct 01 2013

Estimators for Energy-selective imaging—Part 1

Tag: Implementation, Math, Noise, Physics | admin @ 5:46 pm

In a previous post I described the application of statistical estimator theory to energy selective x-ray imaging. I introduced a linearized model for the signal and noise, and in a subsequent post I described a linear maximum likelihood estimator (MLE) that achieved the Cramér-Rao lower bound (CRLB). In many applications, such as CT, the linear model is not sufficiently accurate. In this post, I will start the discussion of my paper[3] “Estimator for photon counting energy selective x-ray imaging with multi-bin pulse height analysis.” The paper describes an estimator that is accurate over a wide dynamic range, achieves the CRLB, and has other desirable properties such as a fast and predictable computation time and the ability to be implemented in a clinical institution as opposed to a physics lab. This post frames the discussion by describing general aspects of computing the A-vector from energy selective measurements, as well as several widely used estimators and their properties.
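For reference, the linearized case mentioned above has a closed form: with log data L ≈ M A plus noise with covariance C_L, the linear MLE is the weighted least-squares solution, which achieves the CRLB within that model. The Matlab sketch below uses assumed numbers only to show the formula; it is not code from either paper.

% Matlab sketch of the linear (weighted least-squares) MLE for the linearized
% model L = M*A + w with cov(w) = CL.  All numbers are assumed.
M  = [1.0 0.6;                    % sensitivity of each log measurement to the A components
      0.7 0.9;
      0.5 1.1];                   % three measurement spectra, two A components
CL = diag([1e-4, 2e-4, 1.5e-4]);  % assumed noise covariance of the log data
A  = [2; 1];                      % true A-vector for one simulated measurement
L  = M*A + sqrtm(CL)*randn(3,1);  % noisy log measurement
W  = inv(CL);                     % weights: inverse noise covariance
Ahat = (M'*W*M) \ (M'*W*L)        % linear MLE of the A-vector
CA   = inv(M'*W*M)                % its covariance, equal to the CRLB for this model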

more →


Apr 12 2013

Image SNR with energy-selective detectors

Tag: Noise, Physics, software | admin @ 3:28 pm

This is the last post in my series discussing my paper, “Near optimal energy selective x-ray imaging system performance with simple detectors”. In the last post I showed plots of the signal to noise ratio (SNR) of images with different types of energy-selective detectors. In this post, I show images illustrating these differences. These images were not included in the paper but they are based on its approach. The images are calculated from a random sample of the energy spectrum at each point in a projection image. These data are then used to make images with (a) the total energy, which is comparable to the detectors now used in commercial systems, (b) the total number of photons, (c) an N2Q detector, and (d) the optimal full spectrum method, which weights the spectrum data before summing as described by Tapiovaara and Wagner (TW). I use the theory developed in my paper to make images from A-space data using data from the N2Q detector. In order to do this, I need an estimator that achieves the Cramér-Rao lower bound (CRLB). For this I use the A-table estimator I introduced in my paper “Estimator for photon counting energy selective x-ray imaging with multibin pulse height analysis,” available for download here.
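The comparison rests on the SNR of a weighted sum of the spectrum data. For a thin feature and independent Poisson counts, SNR^2 = [sum over E of w(E)(lambda_bg(E) - lambda_feat(E))]^2 divided by the sum of w(E)^2 lambda_bg(E). The Matlab sketch below evaluates this with an assumed toy spectrum for photon counting weighting (w = 1), energy weighting (w = E), and the optimal weighting; it is my own illustration, not the code that produced the images.

% Toy Matlab sketch (assumed spectra, not the code behind the images):
% SNR^2 of a weighted sum of spectrum counts for different energy weightings.
E    = (20:120)';                          % energy grid (keV), assumed
lamB = 1e3*exp(-((E-60).^2)/(2*20^2));     % background counts per keV (toy spectrum)
mu   = 0.25*(E/60).^-3 + 0.18;             % toy attenuation coefficient of a thin feature (1/cm)
lamF = lamB .* exp(-0.5*mu);               % counts behind a 0.5 cm thick feature
snr2 = @(w) sum(w.*(lamB - lamF)).^2 / sum(w.^2 .* lamB);
wOpt = (lamB - lamF)./lamB;                % optimal (Tapiovaara-Wagner type) weighting
fprintf('photon counting  (w = 1): SNR^2 = %.1f\n', snr2(ones(size(E))));
fprintf('energy weighting (w = E): SNR^2 = %.1f\n', snr2(E));
fprintf('optimal weighting:        SNR^2 = %.1f\n', snr2(wOpt));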

more →

