AprendBlog » software


Mar 17 2016

Estimator for contrast agents-3 Monte Carlo simulation

Tag: Implementation, Physics, software | admin @ 10:51 am
In this post I continue the discussion of the paper[2], "Efficient, non-iterative estimator for imaging contrast agents with spectral x-ray detectors," which is available for free download here. The paper extends the previous A-table estimator[1] (see this post) to basis sets with three or more dimensions so it can be used with high atomic number contrast agents. Here I describe the Matlab code to reproduce the figures that summarize the Monte Carlo simulation of the estimators' performance. The Monte Carlo simulation verifies that the new estimator achieves the Cramér-Rao lower bound (CRLB) and compares it with an iterative estimator. The simulation code is included with the package for this post.
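
As a simplified, one-dimensional illustration of the kind of check the simulation performs (my own toy example, not the code in the package): for a single measurement N ~ Poisson(N0 exp(-A)), the maximum likelihood estimate is A = log(N0/N), the CRLB on its variance is the inverse of the expected number of counts, and the sample variance over many trials should approach that bound.

% One-dimensional sketch of verifying that an ML estimator reaches the CRLB.
% N0, A_true, and ntrials are illustrative values, not those in the paper.
N0 = 1e4;                              % expected counts with no object
A_true = 2;                            % true line integral
lambda = N0 * exp(-A_true);            % expected counts through the object
ntrials = 1e5;
N = poissrnd(lambda, ntrials, 1);      % Monte Carlo measurements (Statistics Toolbox)
A_hat = log(N0 ./ max(N, 1));          % ML estimates (guard against N = 0)
mc_variance   = var(A_hat)             % Monte Carlo variance of the estimates
crlb_variance = 1 / lambda             % CRLB: inverse Fisher information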

more →


Mar 07 2016

Estimator for contrast agents-2 The iterative estimator

Tag: Implementation, Math, software | admin @ 10:56 am
As its name implies, the maximum likelihood estimate is the value of the dependent variable that maximizes the likelihood given the measured data. One way to implement it is with an iterative algorithm, which I discussed here. In this post, I give a detailed description of the code for an iterative estimator. The implementation is different from the one used in the previous post and is included as AsolveIterFromSpectrum.m in the code package.
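
To make that concrete, here is a minimal, self-contained sketch of an iterative ML estimator for a two-function basis set, assuming independent Poisson counts in each energy bin and using fminsearch for the iteration. The energies, basis functions, spectra, and true A-vector are synthetic placeholders; AsolveIterFromSpectrum.m in the package is more general and more careful.

% Sketch of an iterative ML estimator: maximize the Poisson log-likelihood
% of the measured bin counts over the A-vector. All model quantities here
% (E, M, S, A_true) are synthetic placeholders.
nE = 100;
E = linspace(20, 120, nE)';                          % energies in keV
M = [(30 ./ E).^3, exp(-E / 50)];                    % made-up basis attenuation functions
S = 1e3 * [exp(-((E - 45) / 15).^2), exp(-((E - 80) / 15).^2)]';   % two made-up bin spectra
lambda = @(A) S * exp(-M * A);                       % expected counts in each bin
A_true = [2; 1];
counts = poissrnd(lambda(A_true));                   % one noisy measurement
negLogLike = @(A) sum(lambda(A)) - counts' * log(lambda(A));   % -log likelihood up to a constant
A_hat = fminsearch(negLogLike, [0; 0])               % iterative search for the ML estimate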

more →


Mar 03 2016

Estimator for contrast agents 1

Tag: Math, Noise, software | admin @ 11:22 am
The next series of posts discusses my recently published paper, "Efficient, non-iterative estimator for imaging contrast agents with spectral x-ray detectors," available for free download here. The paper extends the previous A-table estimator (see this post) to basis sets with three or more dimensions so it can be used with high atomic number contrast agents. It also compares the A-table estimator to an iterative estimator.
This post describes the software to implement the new estimator. The following posts describe the code for an iterative estimator, compare the performance of the new estimator to the iterative estimator and the CRLB, compare the new estimator with a neural network estimator, and finally discuss an alternate implementation using a neural network as the interpolator.
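
As a rough sketch of the table look-up idea behind the new estimator, here is a two-dimensional toy version with made-up spectra and basis functions (the paper's estimator handles three or more dimensions and uses a more careful interpolation): evaluate the forward model on a grid of A-vectors, then obtain the inverse map from the log data back to A by interpolation.

% Toy two-dimensional version of a table-based, non-iterative estimator.
% Spectra and basis functions are synthetic; the paper extends this idea
% to three or more basis functions for contrast agent imaging.
nE = 100;
E = linspace(20, 120, nE)';
M = [(30 ./ E).^3, exp(-E / 50)];                    % made-up basis attenuation functions
S = 1e3 * [exp(-((E - 45) / 15).^2), exp(-((E - 80) / 15).^2)]';   % two made-up bin spectra
logdata = @(A) -log((S * exp(-M * A)) ./ (S * ones(nE, 1)));       % noiseless log data
% Forward table: log data on a grid of A-vectors
[a1, a2] = ndgrid(linspace(0, 4, 41), linspace(0, 3, 31));
L1 = zeros(size(a1));  L2 = L1;
for k = 1:numel(a1)
    Lk = logdata([a1(k); a2(k)]);
    L1(k) = Lk(1);  L2(k) = Lk(2);
end
% Inverse table: interpolate each A component as a function of the log data
F1 = scatteredInterpolant(L1(:), L2(:), a1(:));
F2 = scatteredInterpolant(L1(:), L2(:), a2(:));
Lmeas = logdata([2.5; 1.2]);                         % a test measurement
A_hat = [F1(Lmeas(1), Lmeas(2)); F2(Lmeas(1), Lmeas(2))]   % should be close to [2.5; 1.2]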

more →


Jan 05 2015

SNR with pileup-3 PHA detector statistics with pileup

Tag: Math, Physics, software | admin @ 11:31 am
This post continues the discussion of my paper “Signal to noise ratio of energy selective x-ray photon counting systems with pileup”[1], which is available for free download here. Following the road map described in my last post, I am deriving and validating formulas for the statistics of photon counting detectors with pileup. In this post, I describe formulas for the expected value and covariance of pulse height analysis data with pileup and present software to verify the formulas with Monte Carlo simulations.
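
Here is a stripped-down sketch of the kind of Monte Carlo comparison described above. It is my own simplified model with made-up rates, energies, and bin edges, not the software in the package: a photon arriving within the dead time of the last recorded count adds its energy to that count instead of being recorded separately.

% Simplified Monte Carlo of PHA counts with pileup: photons arrive as a
% Poisson process, and any photon arriving within the dead time tau of the
% last recorded count adds its energy to that count. Values are illustrative.
rate = 5e6;  T = 1e-4;  tau = 1e-7;               % incident rate (1/s), exposure (s), dead time (s)
edges = [20 45 70 95 240];                         % PHA bin edges in keV
ntrials = 2000;
counts = zeros(ntrials, numel(edges) - 1);
for t = 1:ntrials
    n = poissrnd(rate * T);                        % number of incident photons in the exposure
    times = sort(rand(n, 1) * T);                  % their arrival times
    energies = 20 + 100 * rand(n, 1);              % made-up incident energy distribution
    recE = [];  tLast = -inf;
    for k = 1:n
        if times(k) - tLast >= tau                 % live: start a new recorded count
            recE(end + 1) = energies(k);           %#ok<AGROW>
            tLast = times(k);
        else                                       % pileup: add energy to the current count
            recE(end) = recE(end) + energies(k);
        end
    end
    counts(t, :) = histcounts(recE, edges);        % recorded PHA counts for this trial
end
mean(counts)                                       % compare with the analytic expected values
cov(counts)                                        % compare with the analytic covariance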

more →


Dec 15 2014

SNR with pileup-1

Tag: Implementation, Noise, Physics, software | admin @ 4:04 pm
In the next posts, I will discuss my paper "Signal to noise ratio of energy selective x-ray photon counting systems with pileup", which is available for free download here. The paper uses an idealized model to derive limits on the effects of pileup on the SNR of A-vector data. There have been many papers (see, for example, Overdick et al.[4], Taguchi et al.[3], and Taguchi and Iwanczyk[6]) that use more or less realistic models of photon counting detectors to predict the quality of images computed from their data. These models are necessarily complex since the current state of the art is relatively primitive compared with the extreme count rate requirements of diagnostic imaging. The complexity of detailed models makes it hard to generalize from the results. Moreover, as research continues, the properties of the detectors will improve and their response will approach an idealized limit. This has already happened with the energy integrating detectors used in state of the art medical imaging systems, whose noise levels have been reduced so that the principal source of noise is the fundamental quantum noise present in all measurements with x-ray photons.

 

In this post, I will describe the rationale for an idealized model of photon counting detectors with pulse height analysis in the presence of pileup and illustrate it with the random data it generates. The following posts will show how the model can be applied to compute the SNR of systems with pileup and to compare that SNR with the full spectrum optimal value. The model will be used to determine the allowable response time so that the reduction in SNR due to pileup is small.
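
To get a sense of scale for the count rates involved, here is a short sketch of the classic non-paralyzable dead time relation between the recorded and incident count rates, one common idealization of pileup losses. The numbers are illustrative and this is not the paper's full model.

% Recorded vs incident count rate for an idealized non-paralyzable detector
% with dead time tau. The values are illustrative, not from the paper.
rate = linspace(0, 2e7, 200);            % incident photon rate, photons/s
tau = 1e-7;                              % dead time, s
recorded = rate ./ (1 + rate * tau);     % standard non-paralyzable dead time formula
plot(rate, recorded, rate, rate, '--');
xlabel('incident rate (photons/s)');
ylabel('recorded rate (photons/s)');
legend('with dead time losses', 'ideal (no losses)', 'Location', 'northwest');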

more →


Sep 15 2014

Correlated noise reduction

Tag: Math, Noise, software | admin @ 10:28 am
The noise in the components of the A-vector is highly correlated, and the previous post showed a way to produce low noise images analogous to conventional, non-energy selective images by "whitening" the A-vector data. That is good, but is there a way to use the correlation to produce lower noise material selective images, such as bone or soft tissue canceled images? It turns out there are many methods that seem different but are all based on the correlation. Al Macovski introduced the idea, and his group at Stanford published several papers on it. It has been used in commercial systems. For example, the Fuji Corporation used an elaborate iterative method to reduce the noise in its "sandwich" photostimulable screen detector system[4]. Other companies like GE are more secretive, but I think they used a similar method with their voltage switching flat panel system.

 

In this post, I will describe a linear least mean squares method, which is a simplified version of the approach introduced by Cao et al.[2], whose work was also done at Stanford. The approach has straightforward theory, is easy to implement, and is effective at reducing noise. One problem is that it may change the quantitative values of the data in CT; Kalender et al.[5] published an enhancement that may retain the quantitative information. However, if quantitative data are important, the software can extract them from the underlying images, guided by an operator using the noise-reduced image.
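
Here is a minimal sketch of the linear least mean squares idea on structure-free images (my simplified illustration, not Cao et al.'s algorithm verbatim): the noise in one basis image predicts part of the noise in the other through the cross-covariance, and subtracting that prediction lowers the noise. Real images also need a way to separate the noise from the structure, for example a high-pass filter or a calibration region, which is omitted here.

% Simplified LMMSE correlated noise reduction demonstrated on flat images so
% the noise statistics are easy to see. The noise model is made up.
N = 256;
n1 = randn(N);
n2 = -0.9 * n1 + 0.3 * randn(N);               % noise strongly anticorrelated with n1
a1 = 2.0 + 0.1 * n1;                           % flat basis image 1 plus noise
a2 = 1.0 + 0.1 * n2;                           % flat basis image 2 plus noise
d1 = a1 - mean(a1(:));                         % zero-mean noise estimate from a1
g  = (d1(:)' * (a2(:) - mean(a2(:)))) / (d1(:)' * d1(:));   % LMMSE gain, C12 / C11
a2_lmmse = a2 - g * d1;                        % subtract the predictable noise part
std(a2(:) - 1.0)                               % noise before: about 0.1*sqrt(0.9)
std(a2_lmmse(:) - 1.0)                         % noise after: about 0.1*0.3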

more →


Sep 04 2014

Dimensionality and noise in energy selective x-ray imaging-Part 3 low noise conventional images

Tag: Noise, Physics, software | admin @ 11:06 am
I have been discussing my recently published paper, Dimensionality and noise in energy selective x-ray imaging, available for free download here. In this post, I will show how to create low noise images with properties analogous to conventional images from the same energy spectrum data used in the previous two posts of this series to compute the A-vector images. The results verify that the noise in the 'conventional' images computed from the energy spectrum information is lower than in images computed from the total number of photons only.
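
One way to see the idea, as a hedged sketch rather than the paper's exact formula: when the A-vector noise covariance is known, the weights of the linear combination can take the strong negative correlation into account, giving a much lower variance than weights based on the signal alone. The covariance and signal direction below are made up.

% Sketch: a weighted combination of A-vector components whose weights use the
% noise covariance. Both combinations are normalized to unit signal gain.
C_A = [0.9, -0.85; -0.85, 0.9];              % hypothetical A-vector noise covariance (anticorrelated)
mu  = [0.5; 0.3];                            % hypothetical signal produced per unit of each component
w_naive = mu / (mu' * mu);                   % weights ignoring the covariance
w_opt   = (C_A \ mu) / (mu' * (C_A \ mu));   % minimum-variance weights
var_naive = w_naive' * C_A * w_naive
var_opt   = w_opt'   * C_A * w_opt           % much smaller because the noise is anticorrelated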

more →


Aug 22 2014

Dimensionality and noise in energy selective x-ray imaging-Part 2

Tag: Math, Noise, Physics, software | admin @ 11:34 am

The previous post in this series discussed the mathematics behind the increase in noise with the dimensionality, that is, the number of basis functions used to approximate the attenuation coefficient. The series of posts is based on my recently published paper, Dimensionality and noise in energy selective x-ray imaging, available for free download here. This post describes simulations of the increase in noise with an object composed of body materials and an x-ray tube spectrum. The next post will show how to make low-noise images with the same properties as conventional x-ray images from the energy spectrum data. The main purpose of these last two posts is to provide and explain the code to reproduce the images in the paper.

 
more →


May 07 2014

The singular value decomposition

Tag: Math, software | admin @ 1:59 pm
Not only is the singular value decomposition (SVD) fundamental to matrix theory but it is also widely used in data analysis. I have used it several times in my posts. For example, here and here, I used the singular values to quantify the intrinsic dimensionality of attenuation coefficients. In this post, I applied the SVD to give the optimal basis functions to approximate the attenuation coefficient and compared them to the material attenuation coefficient basis set[1]. All of these posts were based on the SVD approximation theorem, which allows us to find the nearest matrix of a given rank to our original matrix. This is an extremely powerful result because it allows us to reduce the dimensionality of a problem while still retaining most of the information.
In this post, I will discuss the SVD approximation theorem on an intuitive level. The math here will be even less rigorous than my usual low standard since my purpose is to get an understanding of how the theorem works and what its limitations are. If you want a mathematical proof, you can find it in many places, such as Theorems 5.8 and 5.9 of the book Numerical Linear Algebra[2] by Trefethen and Bau. Those proofs do not provide much insight into the approximation, so I will provide two ways of looking at the theorem: a geometric interpretation and an algebraic interpretation.
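
As a quick illustration of the theorem in use (any matrix works; tables of attenuation coefficients are handled the same way): keep the k largest singular values and their singular vectors, and the 2-norm error of the resulting rank-k approximation equals the next singular value.

% The SVD approximation theorem in action: the truncated SVD gives the
% nearest rank-k matrix, with 2-norm error equal to the (k+1)-th singular value.
A = rand(100, 40);                              % any matrix
[U, S, V] = svd(A, 'econ');
k = 3;
Ak = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';      % best rank-k approximation of A
norm(A - Ak, 2)                                 % equals the next singular value...
S(k + 1, k + 1)                                 % ...shown here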

more →


Dec 26 2013

Parameters for the estimator

You may ask, what is the fundamental advantage of the new estimator? Yes, it is faster than the iterative method, but so what? With Moore's law, we can just throw silicon at the problem by doing the processing in parallel. I have two responses. The first is that not only is the iterative estimator slow, but it also takes a random amount of time to complete the calculation. This is a substantial problem since CT scanners are real-time systems. The calculations have to be done in a fixed time or the data are lost. The gantry cannot be stopped to wait for a long iteration to complete!
The second problem is that, as it has been implemented in the research literature, the iterative estimator requires measurements of the x-ray tube spectrum and the detector energy response to compute the likelihood for a given measurement. These are difficult measurements that cannot be done at most medical institutions. Because of drift of the system components, the measurements have to be repeated periodically to ensure accurate results. There may be a way to implement an iterative estimator with simpler measurements, but I am not aware of one.
In this post, I will show how the parameters required for the new estimator can be determined from measurements on a phantom placed in the system. This could be done easily by personnel at medical institutions and is similar to quality assurance measurements now done routinely on other medical systems.
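
As a hedged sketch of the calibration idea (the forward model, the grid of thicknesses, and the polynomial inverse map are my own simplifications, not necessarily the parameterization in the post): measure the log data for a set of known phantom thicknesses and fit the map from the measurements back to the A-vector by least squares.

% Sketch of determining estimator parameters from measurements on a phantom
% with known basis-material thicknesses. All quantities are illustrative.
nE = 100;
E = linspace(20, 120, nE)';
M = [(30 ./ E).^3, exp(-E / 50)];                    % made-up basis attenuation functions
S = 1e3 * [exp(-((E - 45) / 15).^2), exp(-((E - 80) / 15).^2)]';   % two made-up bin spectra
logdata = @(A) -log((S * exp(-M * A)) ./ (S * ones(nE, 1)));       % noiseless log data
[a1, a2] = ndgrid(0:0.5:4, 0:0.3:3);                 % known "phantom" step thicknesses
Acal = [a1(:), a2(:)];
Lcal = zeros(size(Acal));
for k = 1:size(Acal, 1)
    Lcal(k, :) = logdata(Acal(k, :)')';              % calibration measurements
end
X = @(L) [ones(size(L, 1), 1), L, L.^2, L(:, 1) .* L(:, 2)];   % second order polynomial terms
coef = X(Lcal) \ Acal;                               % least squares calibration fit
A_hat = X(logdata([2.2; 1.1])') * coef               % apply the calibrated estimator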

more →


Next Page »