AprendBlog

Apr 28 2011

## Welcome

The main theme of this blog is energy selective x-ray images. My approach is to combine theoretical and mathematical topics with code to implement them and examples of their use.

A free ebook on these topics is available. To get it, email me at the address on the contact page.

I have turned off comments because of spam. If you want to comment, send me an email via the Contact page.

Here is a map of some topics that I want to cover. I made it using ithoughtsHD on an iPad, and Freemind and Seamonkey on the PC (more on those in later posts).

Mar 17 2016

## Estimator for contrast agents 3: Monte Carlo simulation

In this post I continue the discussion of the paper, “Efficient, non-iterative estimator for imaging contrast agents with spectral x-ray detectors,” which is available for free download here. The paper extends the previous A-table estimator, see this post, to basis sets with three or more dimensions so it can be used with high atomic number contrast agents. Here I describe the Matlab code to reproduce the figures that summarize the Monte Carlo simulation of the estimators’ performance. The Monte Carlo simulation verifies that the new estimator achieves the Cramér-Rao lower bound (CRLB) and compares it to an iterative estimator. The simulation code is included with the package for this post.
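The idea of checking an estimator against the CRLB can be illustrated with a toy single-energy, single-material model. This is a Python sketch, not the Matlab simulation in the package, and the photon count and thickness are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n0, a_true = 1e4, 1.0          # incident photons, true line integral (toy values)
lam = n0 * np.exp(-a_true)     # expected transmitted counts

# Monte Carlo trials: the ML estimate of a from Poisson counts N is ln(n0/N)
counts = rng.poisson(lam, size=200_000)
a_hat = np.log(n0 / counts)

crlb = 1.0 / lam               # Fisher information for this Poisson model is lam
print(np.var(a_hat) / crlb)    # near 1: the estimator achieves the bound
```

For the Poisson model the Fisher information equals the expected counts, so the printed variance-to-CRLB ratio should hover near 1, which is the same kind of check the simulation code performs in higher dimensions.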

Mar 07 2016

## Estimator for contrast agents 2: the iterative estimator

As its name implies, the maximum likelihood estimate is the value of the dependent variable that maximizes the likelihood given the measured data. One way to implement it is to use an iterative algorithm, which I discussed here. In this post, I give a detailed description of the code for an iterative estimator. The implementation is different from the one used in the previous post and is included as AsolveIterFromSpectrum.m in the code package.
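The structure of such an iterative estimator can be sketched with a toy two-measurement, two-material model. AsolveIterFromSpectrum.m itself is Matlab; this Python sketch uses an invented attenuation matrix M, invented incident counts, and a general-purpose optimizer, all of which are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: two spectral measurements, two basis materials.
# M[k, j] = effective attenuation of basis material j in measurement k
# (hypothetical numbers, not taken from the paper).
M = np.array([[0.20, 0.50],
              [0.15, 0.30]])
n0 = np.array([1e7, 1e7])              # incident counts per measurement

def expected_counts(a):
    return n0 * np.exp(-M @ a)

def neg_log_likelihood(a, n):
    lam = expected_counts(a)
    return np.sum(lam - n * np.log(lam))   # Poisson NLL up to a constant

a_true = np.array([1.0, 2.0])
rng = np.random.default_rng(1)
n = rng.poisson(expected_counts(a_true))   # simulated noisy measurement

# Iterative ML estimate: start at zero and let the optimizer iterate.
res = minimize(neg_log_likelihood, x0=np.zeros(2), args=(n,),
               method="Nelder-Mead")
print(res.x)   # close to a_true
```

A general-purpose optimizer stands in here for whatever iteration scheme the Matlab code uses; the essential point is that each iteration re-evaluates the Poisson likelihood of the measured counts.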

Mar 03 2016

## Estimator for contrast agents 1

The next series of posts discuss my recently published paper, “Efficient, non-iterative estimator for imaging contrast agents with spectral x-ray detectors,” available for free download here. The paper extends the previous A-table estimator, see this post, to basis sets with three or more dimensions so it can be used with high atomic number contrast agents. It also compares the A-table estimator to an iterative estimator.
This post describes the software to implement the new estimator. The next posts describe the code for an iterative estimator, compare the performance of the new estimator to the iterative estimator and the CRLB, compare the new estimator with a neural network estimator, and finally discuss an alternate implementation using a neural network as the interpolator.
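The table-plus-interpolation structure of this kind of estimator can be sketched in miniature. This is a noise-free, linear two-dimensional toy in Python, not the paper's method; the matrix M and the grid limits are invented for illustration:

```python
import numpy as np
from scipy.interpolate import griddata

# Toy two-measurement, two-material forward model (hypothetical M).
M = np.array([[0.20, 0.50],
              [0.15, 0.30]])
forward = lambda A: A @ M.T            # noise-free log measurements L(A)

# Build the table: evaluate L on a grid of A-vectors...
a1, a2 = np.meshgrid(np.linspace(0, 3, 40), np.linspace(0, 3, 40))
A_grid = np.column_stack([a1.ravel(), a2.ravel()])
L_grid = forward(A_grid)

# ...then the estimator is just interpolation of A as a function of L.
def estimate(L):
    return np.array([griddata(L_grid, A_grid[:, j], L, method="linear")[0]
                     for j in range(2)])

A_true = np.array([1.0, 2.0])
print(estimate(forward(A_true[None, :])))   # recovers A_true
```

Because the toy forward model is linear, the interpolated estimate is essentially exact; the real estimator tabulates a nonlinear L(A) and has to cope with noise, but the precompute-then-interpolate structure is the same.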

Sep 10 2015

## Beam hardening 4: consistency

In the posts on beam hardening, I have shown that it causes the estimate of the line integral using the single average energy assumption to be nonlinearly related to the actual line integral. The nonlinearity causes any noncircularly symmetric object to look different when you look at it from different angles. This inconsistency results in artifacts in the reconstructed images. That is, the reconstructed image has features in it that are not in the original object. The inconsistency brings up some interesting questions. Is there a way to test the data to determine whether it is inconsistent? If so, is there a way to subtract the inconsistent part and will the result be equal to the original object without artifacts? In this post, I will review some prior research into this subject.
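One classical consistency test is that the total of a parallel-beam projection must be the same at every angle, since every view sums the same object. A short Python sketch, with a made-up quadratic nonlinearity standing in for beam hardening, shows how the nonlinearity violates it:

```python
import numpy as np

# Parallel-beam projections of an asymmetric object at two angles
# (0 and 90 degrees are just sums along the two axes of the image grid).
obj = np.zeros((10, 10))
obj[2:4, 1:9] = 1.0               # a 2x8 rectangle: not circularly symmetric

p0 = obj.sum(axis=0)              # view at 0 degrees
p90 = obj.sum(axis=1)             # view at 90 degrees

# Ideal (linear) data satisfies the zeroth consistency condition:
# the total of every projection equals the total attenuation in the object.
print(p0.sum(), p90.sum())        # both 16.0

# A beam-hardening-like nonlinearity (hypothetical quadratic model) breaks it:
f = lambda p: p - 0.1 * p**2
print(f(p0).sum(), f(p90).sum())  # 12.8 vs 3.2: the views are now inconsistent
```

The linear data passes the test at both angles; after the nonlinearity, the two views no longer agree, which is exactly the kind of inconsistency that produces artifacts.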

Jul 10 2015

## Beam hardening 3: nonlinear reconstruction

The first post in the discussion of beam hardening derived a Taylor’s series for the logarithm of the x-ray measurement L

(1) L(x) = \frac{dL}{dx}(0)\,x + \frac{1}{2}\frac{d^2 L}{dx^2}(0)\,x^2 + \cdots

where x is the object thickness. Since line integrals are linear operators, the inverse operator, that is the image reconstruction operator \mathcal{R}, is also linear so that

(2) \mathcal{R}[c_1 P_1 + c_2 P_2] = c_1 \mathcal{R}[P_1] + c_2 \mathcal{R}[P_2]

where c_1 and c_2 are constants and P_1 and P_2 are sets of line integrals, which I will also refer to as projections. Letting c_1 = \frac{dL}{dx}(0) and c_2 = \frac{1}{2}\frac{d^2 L}{dx^2}(0) in the Taylor’s series in Eq. 1, the reconstruction of the logarithm of the x-ray measurement is

\mathcal{R}[L] = c_1 \mathcal{R}[x] + c_2 \mathcal{R}[x^2].

The first term is the reconstruction of the projections, which is what we want, while the second term, the reconstruction of the squares of the projections, leads to artifacts. There are also higher order terms, but I will assume they are negligible; they can be analyzed similarly to the discussion here.

In this post, I will discuss some of the properties of the reconstruction of the nonlinear term and apply it to models of common beam hardening artifacts. This gives us some insight into the types of artifacts that we can expect from beam hardening.
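The split of the reconstruction into a wanted term and an artifact term depends only on the linearity of the reconstruction operator, which a few lines of Python make concrete. Any matrix serves as a stand-in linear reconstruction operator here, and the coefficients are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.normal(size=(5, 8))      # any linear reconstruction operator (toy matrix)
x = rng.uniform(0.5, 2.0, 8)     # projections of object thickness
c1, c2 = 0.3, -0.02              # Taylor coefficients (hypothetical values)

L = c1 * x + c2 * x**2           # log measurements per the Taylor series
# Linearity splits the reconstruction into a wanted term and an artifact term:
lhs = R @ L
rhs = c1 * (R @ x) + c2 * (R @ x**2)
print(np.allclose(lhs, rhs))     # True
```

The identity holds for any linear R, so the artifact analysis reduces to understanding the single term: the reconstruction of the squared projections.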

Jun 19 2015

## Beam hardening 2: the no-linearize theorem

The last post showed that beam hardening causes a nonlinearity between the log of the measurements and the A-vector. It is natural to think that we can eliminate the beam hardening artifacts by measuring the nonlinearity and then “linearizing” it with an inverse transformation. In this post, I will show that this is not possible in general. Although there are some special cases when we can linearize, and a linearizing transformation may reduce the artifacts, we cannot do this for every object. I will show that this is due to the fact that we need at least a two-dimensional basis set to represent the attenuation coefficient.
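The obstruction is easy to see numerically: a lookup table built for one material linearizes that material perfectly but miscorrects any other, because different materials trace different L(x) curves. Here is a Python sketch with a two-energy toy beam and invented attenuation values:

```python
import numpy as np

# Two-energy toy beam: equal fluence at two energies, attenuation mu[0], mu[1].
def logmeas(x, mu):
    # -ln of the normalized transmitted intensity for thickness x
    return -np.log(0.5 * np.exp(-mu[0] * x) + 0.5 * np.exp(-mu[1] * x))

mu_water = np.array([0.30, 0.15])   # hypothetical values
mu_bone = np.array([0.90, 0.25])    # hypothetical, more energy-dependent

# Linearization for water: tabulate L(x) and invert by interpolation.
xs = np.linspace(0, 20, 2001)
table = logmeas(xs, mu_water)
linearize = lambda L: np.interp(L, table, xs)

print(linearize(logmeas(10.0, mu_water)))  # ~10.0: works for water...
print(linearize(logmeas(10.0, mu_bone)))   # far from 10.0: fails for bone
```

The water measurement comes back as the correct thickness, while the same lookup maps 10 units of the second material to a very different value, so no single table can linearize an object containing both.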

Jun 12 2015

## Beam hardening 1

Beam hardening artifacts were seen soon after the introduction of CT. Radiologists noticed a ring of increased Hounsfield numbers against the inside of the skull. At first they thought the increase was due to the difference between the white matter in the interior and the gray matter in the cortex of the brain, but images of skulls filled with only water also showed the ring, so it was obvious the increased values were an artifact.
The EMI corporation, which produced the first CT scanners, must have known about the artifact, but they were notoriously close-mouthed about the scanner design. In their first scanner the patient stuck his head into a plastic bladder filled with water and the x-ray system measured through the head surrounded by the water. This reduced the dynamic range requirements for the electronics, but it also reduced the beam hardening nonlinearity as well as other artifacts, as I will show.
In Al Macovski’s group at Stanford, we quickly figured out that the change in average energy of the transmitted photons as the object thickness increases, spectral shift as we called it, would produce a nonlinear relationship between the logarithm of the measurements and the line integral of the attenuation coefficient. We also showed that this nonlinearity could produce the artifact. We were quite interested in it because it was an effect of x-ray energy on the image and we wanted to extract energy dependent information.
Fig. 1 shows that the changes in average energy and effective attenuation coefficient as object thickness increases are both quite large. In this post, I will show how this change leads to a nonlinearity between the log of the measurements and the line integral of the object. I will derive expressions for the magnitude of the nonlinearity. These will lead to ways to reduce the nonlinearity and therefore the artifacts. In later posts I will show that the nonlinearity cannot in general be corrected using a lookup table, the no-linearize theorem. I will then describe a general way to understand the effect of the nonlinearity on the reconstructed CT image. Finally, I will examine whether iterative reconstruction methods can be used to correct the artifacts by making the projections and the image consistent.
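The spectral shift itself takes only a few lines to demonstrate. This Python sketch uses a two-energy toy spectrum; the energies and attenuation values are invented for illustration:

```python
import numpy as np

# Toy polychromatic beam: photons at two energies (keV) with attenuation
# coefficients mu (1/cm) -- hypothetical values for illustration.
energies = np.array([40.0, 80.0])
mu = np.array([0.40, 0.18])          # low energies attenuate more strongly
s0 = np.array([0.5, 0.5])            # incident spectrum (equal weights)

def mean_energy(x):
    s = s0 * np.exp(-mu * x)         # transmitted spectrum after thickness x
    return np.sum(energies * s) / np.sum(s)

for x in (0, 10, 30):
    print(x, mean_energy(x))         # mean energy climbs: the beam "hardens"
```

The mean transmitted energy rises steeply with thickness because the low-energy photons are preferentially absorbed, and this shift in the effective attenuation coefficient is what makes the log measurement nonlinear in the line integral.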

Apr 21 2015

## A neural net A-vector estimator?

Recently, Zimmerman and Schmidt published a paper comparing the A-table estimator to a neural net estimator. Their main purpose was to compare the estimators with their experimental data, but they also mentioned comparing the estimators with a simulation, stating that “Both the neural network and A-table methods demonstrated a similar performance for the simulated data.” This interested me, so I decided to compare the estimators using my simulation software to see if I could replicate their results. However, I found that, although the neural network estimator does a great job on no-noise data, with noise it has a substantially larger (about a factor of 100) variance and mean squared error than the A-table estimator. I also compared the estimators with the synthesized attenuation coefficient measure suggested by Zimmerman and Schmidt and found the neural net had about a factor of 10 larger value, which is consistent with the variance results. I am puzzled about the difference between my results and Zimmerman and Schmidt’s, but the code for this post can be used to reproduce the results so any errors or discrepancies can be tracked down.

Feb 09 2015