Mar 07 2016

## Estimator for contrast agents 2: The iterative estimator

As its name implies, the maximum likelihood estimate is the value of the unknown parameter that maximizes the likelihood given the measured data. One way to implement it is to use an iterative algorithm, which I discussed here. In this post, I give a detailed description of the code for an iterative estimator. The implementation is different from the one used in the previous post and is included as *AsolveIterFromSpectrum*.*m* in the code package.
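The code package itself is Matlab, but the Newton iteration at the heart of such an estimator can be sketched in a few lines of Python. This is an illustrative sketch only, not the code in *AsolveIterFromSpectrum*.*m*: it assumes a simplified forward model in which the expected count in each measurement is N0·exp(−MA), with a made-up effective attenuation matrix M and open-beam counts N0, and maximizes the Poisson log-likelihood by Newton steps.

```python
import numpy as np

# Made-up effective attenuation matrix M[k, j] and open-beam counts N0;
# these stand in for the spectrum integrals a real estimator would use.
M = np.array([[0.40, 0.15],
              [0.25, 0.30],
              [0.10, 0.45]])
N0 = np.array([1.0e5, 8.0e4, 6.0e4])

def expected_counts(A):
    """Expected counts per measurement for A-vector A (simplified model)."""
    return N0 * np.exp(-M @ A)

def mle_iterative(n, n_iter=50):
    """Newton iterations maximizing the Poisson log-likelihood of counts n."""
    A = np.zeros(M.shape[1])
    for _ in range(n_iter):
        lam = expected_counts(A)
        grad = -M.T @ (n - lam)              # gradient of the log-likelihood
        hess = -M.T @ (lam[:, None] * M)     # Hessian (negative definite)
        A -= np.linalg.solve(hess, grad)     # Newton update
    return A

A_true = np.array([2.0, 1.0])
A_hat = mle_iterative(expected_counts(A_true))   # noiseless "measurement"
print(A_hat)                                     # recovers A_true
```

Because this log-likelihood is globally concave in A, the Newton iteration converges from the zero starting point; with noisy data the same loop returns the maximum likelihood estimate.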


Mar 03 2016

The next series of posts discusses my recently published paper, “Efficient, non-iterative estimator for imaging contrast agents with spectral x-ray detectors,” available for free download here. The paper extends the previous A-table estimator, see this post, to basis sets with three or more dimensions so it can be used with high atomic number contrast agents. It also compares the A-table estimator to an iterative estimator.

This post describes the software to implement the new estimator. The next posts describe the code for an iterative estimator, compare the performance of the new estimator to the iterative estimator and the CRLB, compare the new estimator with a neural network estimator, and finally discuss an alternate implementation using a neural network as the interpolator.


Sep 10 2015

In the posts on beam hardening, I have shown that it causes the estimate of the line integral made with the single average energy assumption to be nonlinearly related to the actual line integral. The nonlinearity causes any non-circularly symmetric object to look different when viewed from different angles. This inconsistency results in artifacts in the reconstructed images. That is, the reconstructed image has features in it that are not in the original object. The inconsistency raises some interesting questions. Is there a way to test the data to determine whether it is inconsistent? If so, can we subtract the inconsistent part, and will the result be equal to the original object without artifacts? In this post, I will review some prior research on this subject.


Jul 10 2015

The first post in the discussion of beam hardening derived a Taylor’s series for the logarithm of the x-ray measurement *L*,

*L* ≈ (∂*L*)/(∂*x*)(0) *x* + (1/2)(∂^{2}*L*)/(∂*x*^{2})(0) *x*^{2},  (1)

where *x* is the object thickness. Since line integrals are linear operators, the inverse operator, that is the image reconstruction operator ℛ, is also linear so that

ℛ[*c*_{1}*P*_{1} + *c*_{2}*P*_{2}] = *c*_{1}ℛ[*P*_{1}] + *c*_{2}ℛ[*P*_{2}],

where *c*_{1} and *c*_{2} are constants and *P*_{1} and *P*_{2} are sets of line integrals, which I will also refer to as projections. Letting *c*_{1} = (∂*L*)/(∂*x*)(0) and *c*_{2} = (1/2)(∂^{2}*L*)/(∂*x*^{2})(0) in the Taylor’s series in Eq. 1, the reconstruction of the logarithm of the x-ray measurement is

ℛ[*L*] = *c*_{1}ℛ[*x*] + *c*_{2}ℛ[*x*^{2}].

The first term is the reconstruction of the projections, which is what we want, while the second term, the reconstruction of the squares of the projections, leads to artifacts. There are also higher order terms, but I will assume they are negligible, although they can be analyzed similarly to the discussion here.
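The splitting into a projection term and an artifact term is just linearity of ℛ, which can be checked numerically with any linear operator standing in for the reconstruction. In this sketch a random matrix plays the role of ℛ, and the coefficients and projections are arbitrary stand-in values:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(4, 6))          # any linear operator stands in for the reconstruction
x = rng.uniform(1.0, 3.0, size=6)    # line integrals of the object thickness
c1, c2 = 0.5, -0.02                  # stand-in Taylor coefficients

L = c1 * x + c2 * x**2               # two-term model of the log measurement
lhs = R @ L                          # reconstruct the measured projections
rhs = c1 * (R @ x) + c2 * (R @ x**2) # projection term plus artifact term
print(np.allclose(lhs, rhs))         # True
```

Whatever linear reconstruction is used, the artifact term c2·ℛ[x²] rides on top of the desired image, which is why the nonlinearity produces structured artifacts rather than simple noise.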

In this post, I will discuss some of the properties of the reconstruction of the nonlinear term and apply it to models of common beam hardening artifacts. This gives us some insight into the types of artifacts that we can expect from beam hardening.


Jun 19 2015

The last post showed that beam hardening causes a nonlinearity between the log of the measurements and the A-vector. It is natural to think that we can eliminate the beam hardening artifacts by measuring the nonlinearity and then “linearizing” it with an inverse transformation. In this post, I will show that this is not possible in general. Although there are special cases where we can linearize, and a linearizing transformation may reduce the artifacts, we cannot do this for every object. I will show that this is because we need at least a two-dimensional basis set to represent the attenuation coefficient.
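A toy calculation illustrates the point. Assuming a made-up two-energy spectrum and two hypothetical materials with different energy dependence (all the numbers are invented for illustration), a linearizing lookup table calibrated on one material inverts that material exactly but leaves the other material's signal nonlinear in its thickness:

```python
import numpy as np

# Toy two-energy model; the attenuation values are made up for illustration.
s = np.array([0.6, 0.4])          # spectrum weights at the two energies
mu_A = np.array([0.30, 0.20])     # hypothetical material A (1/cm)
mu_B = np.array([0.25, 0.10])     # hypothetical material B: different energy dependence

def log_signal(mu, t):
    """Polychromatic log measurement for thickness t of a single material."""
    return -np.log(np.exp(-np.outer(t, mu)) @ s)

t = np.linspace(0.0, 10.0, 201)
LA, LB = log_signal(mu_A, t), log_signal(mu_B, t)

# A linearizing lookup table calibrated on material A inverts A exactly ...
t_hat_A = np.interp(LA, LA, t)
# ... but applied to material B the result is still nonlinear in the thickness.
t_hat_B = np.interp(LB, LA, t)
residual = t_hat_B - t_hat_B[-1] * (t / t[-1])   # deviation from a straight line
print(np.max(np.abs(residual)))                  # clearly nonzero
```

Because the nonlinearity depends on the object's composition, not just on a single line integral, no one-dimensional inverse transformation can linearize the measurements for every object.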


Feb 09 2015

In this post I conclude my discussion of my “SNR with pileup …” paper[2]. I will present and explain the code to reproduce the “bottom line” figures 3 to 5 of the paper, which show the decrease of the SNR as the pileup parameter *η* increases. The decrease is rapid, and by the time *η* reaches 1, all the counting and PHA detectors have an SNR smaller than that of the energy integrating detector. The NQ detector SNR decreases rapidly and approaches the Q detector SNR from above, staying only marginally larger, since it uses that signal.


Feb 09 2015

My “SNR with pileup …” paper[1] presented a set of theoretical formulas for the noise of NQ and PHA detectors in Tables I and II. I have discussed the individual formulas and Monte Carlo simulations testing their validity in past posts in this series. In Section 2.K of the paper, I presented an overall test of the formulas that compared the A-vector component variances with a Monte Carlo simulation of the random detector data processed with a maximum likelihood estimator (MLE). In this post, I expand the discussion in the paper and present code to reproduce Fig. 2.


Jan 30 2015

In my last post, I showed that the probability distribution of photon counting detector data with pileup is multivariate normal for the counts typically used in material selective imaging. With the normal distribution and a linear model, the Cramér-Rao lower bound (CRLB) for the covariance of the A-vector data includes a term that depends on the change in the measurement data covariance with **A**. Without pileup I show in this post and in the Appendix of my “Dimensionality and noise …” paper[2], available for free download here, that the change in covariance term is negligible for large enough counts. In Appendix B of my “SNR with pileup …” paper[1], I show that the term is also negligible with pileup. In this post, I will present and explain the code to reproduce the figures in that section.
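The structure of the argument can be sketched numerically. For multivariate normal data whose mean m(**A**) and covariance C(**A**) both depend on the parameters, the Fisher matrix has a mean-gradient term and a covariance-change term. In the sketch below (not the paper's code), the measurements are modeled as independent Poisson-like bins with C = diag(m) and a made-up attenuation matrix; the ratio of the two terms shrinks like one over the counts:

```python
import numpy as np

# Simplified model: independent Poisson-like measurements with mean
# m(A) = N0*exp(-M A) and covariance C = diag(m); M and N0 are made up.
M = np.array([[0.40, 0.15],
              [0.25, 0.30],
              [0.10, 0.45]])

def fisher_terms(A, N0):
    """The two terms of the Fisher matrix for normal data with C depending on A."""
    m = N0 * np.exp(-M @ A)
    dm = -m[:, None] * M                          # dm[k, j] = dm_k/dA_j
    F_mean = dm.T @ (dm / m[:, None])             # (dm)' C^{-1} (dm) term
    F_cov = 0.5 * dm.T @ (dm / m[:, None]**2)     # (1/2) tr(C^{-1} dC C^{-1} dC) term
    return F_mean, F_cov

A = np.array([2.0, 1.0])
ratios = []
for N0 in (1.0e3, 1.0e5):
    F_mean, F_cov = fisher_terms(A, np.full(3, N0))
    ratios.append(np.trace(F_cov) / np.trace(F_mean))
print(ratios)    # the covariance term shrinks relative to the mean term
```

In this toy model the mean term grows in proportion to the counts while the covariance-change term does not, which is the sense in which that term becomes negligible for large enough counts.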


Jan 26 2015

The method to compute SNR in my paper, “Signal to noise ratio of energy selective x-ray photon counting systems with pileup”[1], assumes that the noisy data have a multivariate normal distribution. Appendix A of the paper describes a Monte Carlo simulation to study the conditions under which the normal distribution assumption is valid. In this post, I will expand on the discussion in the paper and present Matlab code to reproduce the figures.
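A minimal check in the same spirit (not the paper's Matlab code): Poisson counts approach a normal distribution as the expected number of counts grows, which shows up as vanishing sample skewness.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_skewness(x):
    """Third standardized moment; zero for normally distributed data."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**3))

# Poisson counts have skewness 1/sqrt(mean), so the distribution approaches
# a normal one as the expected number of counts per measurement grows.
skews = {}
for lam in (5.0, 50.0, 5000.0):
    counts = rng.poisson(lam, size=200_000).astype(float)
    skews[lam] = sample_skewness(counts)
print(skews)    # skewness falls toward zero as lam increases
```

The full Monte Carlo study in Appendix A of the paper tests the multivariate case with pileup; this sketch only shows why large counts drive the marginal distributions toward normal.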


Jan 05 2015

This post continues the discussion of my paper “Signal to noise ratio of energy selective x-ray photon counting systems with pileup”[1], which is available for free download here. Following the road map described in my last post, I am deriving and validating formulas for the statistics of photon counting detectors with pileup. In this post, I describe formulas for the expected value and covariance of pulse height analysis data with pileup and present software to verify the formulas with Monte Carlo simulations.
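A scaffold for this kind of Monte Carlo verification can be sketched for the no-pileup baseline, where PHA bin counts driven by Poisson incident photons are independent Poisson variables (pileup correlates the bins; the paper's tables give those formulas). The bin means below are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# No-pileup baseline: with Poisson incident photons and an ideal detector,
# the PHA bin counts are independent Poisson variables with mean lam[k],
# so the covariance matrix is diag(lam). (Pileup correlates the bins.)
lam = np.array([120.0, 80.0, 40.0])      # assumed mean counts per bin
data = rng.poisson(lam, size=(100_000, lam.size)).astype(float)

mean_mc = data.mean(axis=0)              # sample mean per bin
cov_mc = np.cov(data, rowvar=False)      # sample covariance across bins

print(mean_mc)                           # close to lam
print(np.diag(cov_mc))                   # close to lam; off-diagonals near 0
```

Replacing the Poisson sampler with a simulation of the pileup process, and diag(lam) with the paper's expected value and covariance formulas, turns this scaffold into the verification described in the post.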

