In my last post, I showed that the multivariate normal, abbreviated multinormal, is a good model for the noise w in a linearized x-ray system model. In this post, I will discuss some of the properties of the multinormal distribution. I will show a rationale for its expression using vectors and matrices. This will lead me to discuss matrix calculus. I will describe diagonalizing and whitening transformations and derive the moment generating functions of the uninormal and multinormal to show that linear combinations of multinormals are also multinormal. This post will provide math background for my discussions of detection and maximum likelihood estimation with the linearized x-ray model.
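For reference, the density and moment generating function in question have the standard forms (the notation here is mine, not necessarily the post's): for a d-dimensional vector x with mean vector μ and covariance matrix C,

```latex
p(\mathbf{x}) = (2\pi)^{-d/2}\,\lvert\mathbf{C}\rvert^{-1/2}
  \exp\!\left[-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{T}\mathbf{C}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right]

M_{\mathbf{x}}(\mathbf{s}) = E\!\left[e^{\mathbf{s}^{T}\mathbf{x}}\right]
  = \exp\!\left(\boldsymbol{\mu}^{T}\mathbf{s} + \tfrac{1}{2}\mathbf{s}^{T}\mathbf{C}\,\mathbf{s}\right)
```

The linear-combination result follows directly: if y = Bx, then M_y(s) = M_x(B^T s), which has the same exponential-quadratic form with mean Bμ and covariance BCB^T, so y is again multinormal.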
In my last post, I described a three-part model used in statistical signal processing: (1) an information source that produces outputs described by a finite-dimensional vector, (2) a probabilistic mapping between the source outputs and the measured data, and (3) a receiver or processor that computes an estimate of the source output or makes a decision about the source based on the data. I showed that in x-ray imaging the information is summarized by the A vector, whose components are the line integrals of the coefficients in the expansion of the x-ray attenuation coefficient. The basis set coefficients aj(r) depend on the material at points r within the object, and the line integrals Aj = ∫ℒaj(r)dr are computed along a line ℒ from the x-ray source to the detector. I then showed the rationale for a linearized model of the probabilistic mapping from A to the logarithm of the detector data L.
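One common way to write such a linearized model (the specific symbols here are my assumptions for illustration, not taken from the post) is

```latex
\mathbf{L} = \mathbf{M}\,\mathbf{A} + \mathbf{w}
```

where M is the matrix of partial derivatives ∂L_k/∂A_j evaluated at an operating point and w is zero-mean noise, so the statistics of w determine the performance of any estimator or detector built on the model.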
In this post, I will try to convince you that the multivariate normal is a good model for the noise w. This will lead me to discuss tests for normality including probability plots and statistical tests based on them such as the Shapiro-Wilk test[4] (available online) for univariate data and Royston’s test[3] for multivariate data.
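The univariate tools mentioned above are easy to try. As an illustrative sketch, in Python with SciPy rather than the post's Matlab, the following applies the Shapiro-Wilk test and computes a normal probability plot for a simulated sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=500)  # sample that really is normal

# Shapiro-Wilk test: a small p-value would reject the hypothesis of normality.
w_stat, p_value = stats.shapiro(x)
print(f"Shapiro-Wilk W = {w_stat:.4f}, p = {p_value:.3f}")

# Normal probability plot: points falling near a straight line indicate
# normal data; r is the correlation of the ordered data with normal quantiles.
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist="norm")
print(f"probability-plot correlation r = {r:.4f}")
```

For truly normal data the W statistic and the probability-plot correlation are both close to 1; departures from normality pull them down.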
The last two articles discussed the use of energy information to increase the SNR of x-ray imaging systems. They assumed that the attenuation coefficient is a continuous function of energy and that the energy spectrum is measured with perfect resolution. But we know from my posts here, here, here, and here that the attenuation coefficient can be expressed as a linear combination of two functions of energy. In addition, as I discussed in my posts about deadtime, the extremely high count rates required for medical x-ray systems severely limit the energy resolution and the complexity of the signal processing.
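The two-function decomposition referred to above is usually written as (the f1, f2 notation is assumed here for illustration)

```latex
\mu(\mathbf{r}, E) = a_{1}(\mathbf{r})\, f_{1}(E) + a_{2}(\mathbf{r})\, f_{2}(E)
```

so that the energy dependence is carried by two fixed functions f1(E) and f2(E), and all of the object-dependent information is in the coefficients a1(r) and a2(r) and their line integrals.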
My paper “Near optimal energy selective x-ray imaging system performance with simple detectors”, which is available for free download here, discusses the use of the two-function decomposition in the signal processing. By transforming the problem from infinite to finite dimensions, the decomposition allows us to get near ideal SNR using low energy-resolution measurements, which may be possible with high speed photon counting detectors.
My previous post discussed the mean and variance of photon counts with deadtime. In this post, I describe a model for the energy spectrum that might be measured by a photon counter with perfect pulse height analysis (PHA). Again, my purpose is to gain insight, so I will use a highly simplified model. I derive a theoretical formula for the measured spectrum and then describe a Monte Carlo simulation that validates the model.
I previously discussed the rationale, the C++ implementation, and the Matlab interface for a computed tomography projection simulator. In this post, I discuss a Matlab-only implementation of a simulator. The simulator is limited to ellipses and parallel lines, but it is simple and can be (fairly) easily extended to other object types and geometries.
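The post's simulator itself is not reproduced here, but the key computation behind an ellipse-and-parallel-line geometry can be sketched: a uniform ellipse has a closed-form parallel-beam projection (the chord length through the ellipse, scaled by its density). This Python sketch (the function name and conventions are mine, not the post's code) evaluates that formula:

```python
import numpy as np

def ellipse_parallel_projection(t, theta, a, b, rho=1.0):
    """Analytic parallel-beam projection (line integrals) of a uniform
    ellipse with density rho and semi-axes a, b centered at the origin.
    theta is the projection angle; t is the signed detector coordinate."""
    t = np.asarray(t, dtype=float)
    # Half-width squared of the ellipse's shadow at this angle.
    s2 = (a * np.cos(theta)) ** 2 + (b * np.sin(theta)) ** 2
    proj = np.zeros_like(t)
    inside = t ** 2 < s2                      # rays that hit the ellipse
    proj[inside] = 2.0 * rho * a * b * np.sqrt(s2 - t[inside] ** 2) / s2
    return proj

t = np.linspace(-2.0, 2.0, 5)
print(ellipse_parallel_projection(t, 0.0, a=1.0, b=0.5))
```

At theta = 0 with a = 1, b = 0.5, the central ray (t = 0) gives a projection of 2·b = 1, the full chord through the ellipse, and rays outside the shadow give 0.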
In this post, I extend my PolylineIntersectSegment post to find all the intersections between two polylines. We can use the code to find the intersection of two curves by approximating them as polylines. The code is fast so you can use small intervals to get accurate results. Again, the use of complex variables makes the code easier to understand and modify.
A post by Loren Shure of MathWorks reminded me of a function, PolylineIntersectSegment.m, that I wrote to compute the intersection of a polyline with a line segment. This function nicely combines the topics of my last two posts, the use of complex variables and lines, so I will discuss it in more detail. The use of complex variables simplifies the code and makes it easier to understand and modify.
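As an illustration of the complex-variable style (a Python sketch of the technique, not the actual PolylineIntersectSegment.m), segment intersection can be written entirely with complex arithmetic, using imag(conj(a)·b) as the 2D cross product:

```python
def seg_intersect(p1, p2, q1, q2, tol=1e-12):
    """Intersection of segments p1->p2 and q1->q2, with points represented
    as complex numbers. Returns the intersection point, or None."""
    def cross(a, b):
        # 2D cross product a x b via complex arithmetic:
        # imag(conj(a) * b) = a.real*b.imag - a.imag*b.real
        return (a.conjugate() * b).imag

    r = p2 - p1
    s = q2 - q1
    denom = cross(r, s)
    if abs(denom) < tol:
        return None                     # parallel (or collinear) segments
    t = cross(q1 - p1, s) / denom       # parameter along p1->p2
    u = cross(q1 - p1, r) / denom       # parameter along q1->q2
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return p1 + t * r
    return None                         # lines cross outside the segments

print(seg_intersect(0 + 0j, 2 + 2j, 0 + 2j, 2 + 0j))  # → (1+1j)
```

Note how the complex representation removes all the x/y bookkeeping: subtraction gives direction vectors, and one conjugate multiply gives the cross product.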
Computational geometry is an interesting and important topic for imaging in general and x-ray imaging in particular. In this post, I describe the basic formulas for perhaps the most fundamental geometric object: a straight line in three dimensions. The same representation can also be used in 2D to represent a line in an image.
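A minimal sketch of the standard parametric representation p(t) = p0 + t·d (Python with NumPy; the variable names are mine) and one computation it makes easy, the closest point on the line to a given point:

```python
import numpy as np

# A line in 3D in parametric form: p(t) = p0 + t * d, with unit direction d.
p0 = np.array([1.0, 0.0, 0.0])   # any point on the line
d = np.array([0.0, 1.0, 0.0])    # direction, assumed to be a unit vector

def closest_point(q):
    """Foot of the perpendicular from point q to the line."""
    t = np.dot(q - p0, d)        # signed distance along the line from p0
    return p0 + t * d

q = np.array([3.0, 2.0, 0.0])
foot = closest_point(q)
print(foot, np.linalg.norm(q - foot))  # → [1. 2. 0.] 2.0
```

The same code works unchanged in 2D; only the length of the vectors differs.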
One of my Matlab programming styles, some would say quirks, is wide use of complex variables. I use them not only in the standard mathematical places but also to represent two-dimensional spatial vectors and two-dimensional quantities in general.
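A small sketch of the idea (shown in Python; Matlab's complex arithmetic behaves the same way): a complex number packages an (x, y) pair, so translation is addition, length is the absolute value, and rotation is multiplication by a unit complex number:

```python
import cmath

# A 2D point (3, 4) represented as a complex number x + 1j*y.
p = 3 + 4j

print(abs(p))          # vector length: 5.0
print(cmath.phase(p))  # angle with the x-axis, in radians

# Rotate 90 degrees counterclockwise: multiply by 1j.
print(p * 1j)          # → (-4+3j)

# Translate by the vector (1, -2): just add.
print(p + (1 - 2j))    # → (4+2j)
```

With this representation a single variable carries both coordinates, which is what keeps the geometry code in the later posts short.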