PET & SPECT Flashcards
planar imaging and tomography
radioactive tracer administered to patient
scintillation detector detects gamma ray emission
image displays in-vivo distribution of activity
the tracer distribution, and its variation with time, reflects organ function or an active molecular process
slices throughout the body
PET & SPECT
SPECT (Single Photon Emission (Computed) Tomography)
Substance labelled with radio-nuclide (photon emission)
Relatively cheap but relatively low resolution and sensitivity
PET (Positron Emission Tomography)
Substance labelled with radio-nuclide (positron emission)
Expensive due to scanner and cyclotron
SPECT
radioactive decay is a random/stochastic process
like planar imaging, but the detector rotates around the subject so that multiple slices can be reconstructed
counting photon by photon
need collimator for spatial info
SPECT collimator resolution
low resolution (large holes)
many photons detected, but a blurrier image
high resolution (smaller holes)
fewer photons detected, so a longer acquisition time is needed
PET
radioactive decay is random
detection of photon-pair
no collimator
measures counts
PET is probably the most sensitive in-vivo medical imaging technique
FDG is by far the most common radiotracer used in clinical practice. It is mainly used for oncology
PET physics
a positron is emitted by the radionuclide and travels a short distance (the positron range)
before annihilating with an electron
producing 2 high-energy (gamma) photons emitted in (almost) opposite directions
coincidence detection
photons are detected and all “coincidences” are recorded, i.e. events where the arrival times of 2 photons (almost) coincide. A coincidence window of ~5 nanoseconds (5×10⁻⁹ s) is used, because there is some error in the detection of the arrival time, and because of the finite speed of light (the photons generally reach the detectors at slightly different times).
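A rough order-of-magnitude check of the speed-of-light contribution (the ~80 cm ring diameter below is an assumed, typical value): a photon crossing the whole detector ring takes

```latex
\Delta t \;\approx\; \frac{0.8\ \text{m}}{3\times 10^{8}\ \text{m/s}} \;\approx\; 2.7\ \text{ns}
```

so arrival-time differences of a few nanoseconds arise from geometry alone, which is why the window is of the order of nanoseconds.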
noise and characterisation
variance
mean
CV (coefficient of variation) = s.d. / mean
poisson noise process
the Poisson distribution gives the probability of counting 𝑛 events when the mean is 𝜇 (see the sketch after this card)
a discrete distribution
is asymmetric
has a single parameter (the mean 𝜇), as opposed to the normal distribution which has 2 (the mean and standard deviation).
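A minimal sketch (toy numbers, assuming NumPy) of the properties above: for Poisson counts the variance equals the mean, so the relative noise CV = s.d./mean falls as 1/√𝜇, i.e. collecting more counts gives relatively less noise.

```python
# Toy demonstration of Poisson counting noise: variance ~ mean, CV ~ 1/sqrt(mean).
import numpy as np

rng = np.random.default_rng(0)
for mean_counts in (10, 100, 10_000):
    counts = rng.poisson(mean_counts, size=100_000)      # repeated measurements
    cv = counts.std() / counts.mean()                    # coefficient of variation
    print(mean_counts, counts.var(), round(cv, 4), round(1 / np.sqrt(mean_counts), 4))
```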
reducing noise
Inject more radioactivity
Limitations:
Patient dose (max ~10 mSv)
Scanner counting limitations
Purity of injected substance (radio-chemistry)
Scan longer
Limitations:
Scanning time available
Patient movement
Avoid attenuation (e.g. “arms up” for torso)
Limitations:
Patient comfort
Scanner/Acquisition mode changes
Increase voxel-size
Limitations:
Decreases resolution
image reconstruction algorithms
analytic
FBP
Based on geometrical inversion formulas
Fast, linear, but low quality and inflexible
statistical or iterative
ML
Based on statistical estimation theory
Use a ‘measurement model’, a model of the ‘noise’, and possibly other information (e.g. anatomical)
Try to find ‘most likely’ image by repeated adjustments
Slow, non-linear, but potentially higher quality and flexible
FBP
filtered back projection
Based on mathematical inversion of the complete set of ‘line integrals’ (X-ray transform)
Common to CT, SPECT, PET, even some MRI sequences.
Measured data have to be pre-corrected (for attenuation, scatter, etc.) to get as close as possible to “line integrals” before they can be handed to FBP.
Backprojection: each ray is traced back over the image, and every pixel along the ray path is incremented by a value proportional to the number of counts in that projection bin.
This process is completed for all projection lines, i.e. the image contains an accumulation of all backprojected counts.
sharpening filter applied before backprojection = FBP
applied after backprojection = backproject and filter (BPF)
ramp filtering
FBP is the inverse of the X-ray transform (see the code sketch after this card)
FBP is fast, but inflexible and can lead to noisy images (low-pass filtering is essential).
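A minimal FBP sketch (assuming a recent scikit-image is available; the phantom and parameter names below are illustrative): the ramp-filtered reconstruction is sharp, while plain backprojection without the filter is blurred.

```python
# Sketch of filtered backprojection (FBP) versus plain backprojection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)                 # test activity distribution
theta = np.linspace(0.0, 180.0, max(image.shape))           # projection angles (degrees)

sinogram = radon(image, theta=theta)                        # "line integrals", ordered by angle/offset
fbp = iradon(sinogram, theta=theta, filter_name="ramp")     # ramp filter + backprojection = FBP
bp_only = iradon(sinogram, theta=theta, filter_name=None)   # backprojection only: heavily blurred

print(sinogram.shape, fbp.shape, bp_only.shape)
```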
image reconstruction
Data are (in first approximation) proportional to “line integrals” (also known as “projections”).
We can store these in “sinograms”, which are a way to order all possible Lines of Response (by angle and distance from origin)
Image reconstruction attempts to “invert” the measurement, i.e. find the image that is consistent with the measured data.
iterative reconstruction components
forward model
goodness-of-fit function
iterative scheme to improve the fit
The process starts off with an initial estimate of the image (here just a blank image); it then estimates what would be measured (forward_project + background), compares this to the measured data (here by using a ratio; see later why), and then uses the discrepancies to compute an improvement to the image, and so on.
forward model
system matrix
mean of measured data = estimated data
estimated data = forward project(image) + background
system matrix
probabilities of detection
attenuation and blurring decrease the detection probability
P_ij
i = detector (bin)
j = source (voxel)
(a code sketch of the forward model follows this card)
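A minimal sketch of the forward model (toy sizes and names, assuming NumPy): the system matrix maps voxel activities to the mean of the data in each detector bin, and the actual counts are Poisson-distributed around that mean.

```python
# Toy forward model: estimated data = system matrix @ image + background.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_voxels = 200, 100

P = rng.uniform(size=(n_bins, n_voxels))        # P[i, j]: prob. a decay in voxel j is detected in bin i
P /= 2 * P.sum(axis=0, keepdims=True)           # attenuation/losses: detection prob. per voxel < 1

image = rng.uniform(1.0, 10.0, n_voxels)        # true activity in each voxel
background = 0.5                                # scatter / randoms contribution

estimated_data = P @ image + background         # mean of the measured data
measured_data = rng.poisson(estimated_data)     # actual counts: Poisson noise around the mean
```

(The MLEM and OSEM sketches further down reuse P, measured_data and background from here.)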
maximum likelihood
the likelihood combines the forward model (for the mean of the estimated data) and the noise model (i.e. the probability of a measurement given that mean).
The likelihood therefore tells you how likely some measured data are if you know the image (and the forward model).
Maximum Likelihood (ML): no a priori info on the image, i.e. Prob(Image) = constant
noise models:
normal distribution (Gaussian)
maximising 𝐿 (or equivalently log𝐿) then corresponds to the weighted Least Squares fit
poisson distribution
it provides a better noise model for counting statistics.
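As a sketch (with assumed notation: measured counts y_i, image values x_j, background b_i, and the system matrix P_ij from above), the Poisson log-likelihood that is maximised is

```latex
\log L(y \mid x) \;=\; \sum_i \Bigl( y_i \log \bar{y}_i(x) \;-\; \bar{y}_i(x) \;-\; \log(y_i!) \Bigr),
\qquad
\bar{y}_i(x) \;=\; \sum_j P_{ij}\, x_j \;+\; b_i
```

The last term does not depend on the image, so it can be dropped during the maximisation.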
Prob(image)
prob(data)
Prob(Image) is “prior” information, i.e. known prior to the scan. It encodes the probability that the image corresponds to the actual tracer distribution. Including it leads to Maximum a Posteriori (MAP); taking it constant gives Maximum Likelihood.
Prob(Data) is possibly even harder to get your head round: it is the probability of measuring that particular data at all.
Prob(Image | Data) Probability that the distribution of the tracer corresponds to a certain image, given the current measured data
(prob of image given data)
Find the most probable image: Maximum a Posteriori (MAP)
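Bayes' rule connects these quantities (a sketch in the notation of this card):

```latex
\mathrm{Prob}(\mathrm{Image} \mid \mathrm{Data})
\;=\;
\frac{\mathrm{Prob}(\mathrm{Data} \mid \mathrm{Image})\;\mathrm{Prob}(\mathrm{Image})}{\mathrm{Prob}(\mathrm{Data})}
```

Since Prob(Data) does not depend on the image, maximising the posterior only involves the likelihood and the prior; with a constant prior, MAP reduces to ML.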
gradient ascent
the “objective” function Ψ can be visualised with contour plots; the algorithm repeatedly moves the image estimate uphill along the gradient of Ψ
MLEM
maximum likelihood via expectation maximisation
commonly used example of an iterative reconstruction algorithm for Poisson data
It “converges” to the ML solution.
It involves forward and back projection, and compares measured and estimated data by division (see the code sketch after this card).
initial image
the image gets sharper over the iterations
with noise-free data, the image stabilises at later iterations
with noisy data, noise builds up and eventually dominates at later iterations
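A minimal MLEM sketch (matrix form, reusing the toy P, measured_data and background from the forward-model sketch; function and variable names are illustrative):

```python
# MLEM: forward project, compare by division, backproject the ratio, update multiplicatively.
import numpy as np

def mlem(P, measured, background, n_iter=50):
    """Maximum Likelihood via Expectation Maximisation for Poisson data."""
    n_bins, n_voxels = P.shape
    x = np.ones(n_voxels)                      # initial image: uniform and non-zero
    sensitivity = P.sum(axis=0)                # sum_i P_ij (backprojection of ones)
    for _ in range(n_iter):
        estimated = P @ x + background         # forward project the current image
        ratio = measured / estimated           # compare measured and estimated data by division
        x *= (P.T @ ratio) / sensitivity       # backproject the ratio and rescale
    return x

reconstruction = mlem(P, measured_data, background)
```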
MLEM issues
requires many iterations to reach the ML solution
convergence slows down at later iterations
lots of computation
stopping with too few iterations leaves the image blurred
MLEM OSEM
acceleration: ordered subsets
ML-EM: each update involves backprojection (BP) and forward projection (FP) for all projection angles
OSEM: each update only uses a subset of projection angles
so OSEM uses fewer projections, i.e. less computation time for one update
but each update uses less data, so more noise (see the code sketch after this card)
OSEM (using early-stopping and with post-filtering) is much faster than MLEM and is currently most-popular, but is not optimal
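A minimal OSEM sketch (same toy setup as the MLEM sketch above; the interleaved subset choice and names are illustrative): each update applies an MLEM-style step using only a subset of the projection bins.

```python
# OSEM: one MLEM-style update per subset of the data, cycling through all subsets each iteration.
import numpy as np

def osem(P, measured, background, n_subsets=5, n_iter=10):
    n_bins, n_voxels = P.shape
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]   # interleaved bins
    x = np.ones(n_voxels)
    for _ in range(n_iter):
        for idx in subsets:                         # one "sub-iteration" per subset
            Ps = P[idx, :]
            estimated = Ps @ x + background
            ratio = measured[idx] / estimated
            x *= (Ps.T @ ratio) / Ps.sum(axis=0)    # MLEM-style update on the subset only
    return x

reconstruction = osem(P, measured_data, background)
```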
early stopping
(at fixed number of iterations, which is dependent on the scanner, tracer, target organ and local practice)
problem:
quantification
different lesions have a different quantitative “recovery” at a fixed iteration number
lesion values are underestimated when stopping early
for “cold” objects (i.e. lower activity than the surrounding regions), usually MLEM initially overestimates the true activity
MAP and fitting
maximum a posteriori
we want to find the image that is most likely for some measured data, and (via Bayes’ rule), this corresponds to maximising the sum of the log-likelihood (which is higher if the data is close to the estimated data) and the log-prior (which is higher if the image is more likely to correspond to a patient).
Fitting takes the opposite (but equivalent) point of view: it minimises the sum of a “distance” between the data and the estimated data and a penalty (which is lower if the image is more desirable).
Note that the sum of the 2 terms is often called the “objective function”: we are trying to find the image that optimises (i.e. maximise for MAP, minimise for fitting) this function.
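A sketch of the two equivalent viewpoints (the symbols x for the image, y for the data and β for the penalty weight are assumed for illustration):

```latex
\text{MAP:}\quad \hat{x} \;=\; \arg\max_x \bigl[\, \log L(y \mid x) + \log \mathrm{Prob}(x) \,\bigr]
\qquad\Longleftrightarrow\qquad
\text{Fitting:}\quad \hat{x} \;=\; \arg\min_x \bigl[\, D\bigl(y, \bar{y}(x)\bigr) + \beta\, R(x) \,\bigr]
```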
penalties
reduce noise
quadratic penalty: prevents the grainy appearance seen with OSEM, making the image smoother (see the sketch after this card)
edge-preserving penalties
aim for high contrast with low noise
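A sketch of a common quadratic penalty (the weights w_{jk} and neighbourhood N_j are illustrative): it penalises differences between neighbouring voxels, which smooths noise but can also blur genuine edges, hence the interest in edge-preserving alternatives.

```latex
R(x) \;=\; \sum_j \sum_{k \in N_j} w_{jk}\,\bigl(x_j - x_k\bigr)^{2}
```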
MAP adv over MLEM
MAP algorithms promise more reliable quantification and better noise-contrast trade-off.
the result does not depend on the chosen iteration number
results easier to predict
Iterative problem is easier to solve
the algorithm can be designed to need fewer iterations for MAP than for MLEM
MAP disadv
More choices to make:
Which penalty are we going to use?
What parameters are we going to use? (how large is the penalty?)
contrast noise relationship
higher contrast generally comes with higher noise (a trade-off curve)