PET & SPECT Flashcards
planar imaging and tomography
radioactive tracer administered to patient
scintillation detector detects gamma ray emission
image displays in-vivo distribution of activity
tracer distribution and variation with time represents organ function or active molecular process
slices throughout the body
PET & SPECT
SPECT (Single Photon Emission (Computed) Tomography)
Substance labelled with radio-nuclide (photon emission)
Relatively cheap but relatively low resolution and sensitivity
PET (Positron Emission Tomography)
Substance labelled with radio-nuclide (positron emission)
Expensive due to scanner and cyclotron
SPECT
radioactive decay is a random/stochastic process
like planar imaging, but the detector rotates around the subject to acquire projections from many angles, from which slices are reconstructed
counting photon by photon
need collimator for spatial info
SPECT collimator resolution
low resolution (large holes)
lots of photons detected but blurrier
high res (smaller holes)
longer time needed for image acquisition
PET
radioactive decay is random
detection of photon-pair
no collimator
measures counts
PET is probably the most sensitive in-vivo medical imaging technique
FDG is by far the most common radiotracer used in clinical practice. It is mainly used for oncology
PET physics
positron emitted by radionuclide and travels for a distance (positron range)
and annihilates with an electron
produces 2 high energy photons (gamma photons)
coincidence detection
detecting photons and recording all “coincidences” (i.e. when the arrival times of 2 photons (almost) coincide). There is a ~5 nanosecond (= 5 × 10⁻⁹ s) window on this, because there is some error on the detection of the arrival time, and because off-centre annihilations mean the two photons reach the detectors at slightly different times (finite speed of light)
noise and characterisation
variance
mean
CV (coefficient of variation) = s.d. / mean
poisson noise process
Poisson distribution gives you the probability of counting 𝑛 events, when the mean is 𝜇
a discrete distribution
is asymmetric
has a single parameter (the mean 𝜇), as opposed to the normal distribution which has 2 (the mean and standard deviation).
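A minimal numpy/scipy sketch (illustrative, not from the lecture) of the Poisson noise model and the coefficient of variation: for Poisson counts the s.d. is √𝜇, so CV = 1/√𝜇 and relative noise drops as counts increase.

```python
import numpy as np
from scipy.stats import poisson

mu = 100.0                                    # mean number of counts in a bin
counts = np.random.poisson(mu, size=100_000)  # simulated repeated measurements

# Poisson PMF: probability of counting n events when the mean is mu
print(poisson.pmf(90, mu))                    # P(n = 90 | mu = 100)

# coefficient of variation: s.d. / mean; for Poisson this tends to 1/sqrt(mu)
cv = counts.std() / counts.mean()
print(cv, 1.0 / np.sqrt(mu))
```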
reducing noise
Inject more radioactivity
Limitations:
Patient dose (max ~10mSv)
Scanner counting limitations
Purity of injected substance (radio-chemistry)
Scan longer
Limitations:
Scanning time available
Patient movement
Avoid attenuation (e.g. “arms up” for torso)
Limitations:
Patient comfort
Scanner/Acquisition mode changes
Increase voxel-size
Limitations:
Decreases resolution
image reconstruction algorithms
analytic
FBP
Based on geometrical inversion formulas
Fast, linear, but low quality and inflexible
statistical or iterative
ML
Based on statistical estimation theory
Use ‘measurement model’ and how to treat ‘noise’, and maybe other information (e.g. anatomical)
Try to find ‘most likely’ image by repeated adjustments
Slow, non-linear, but potentially higher quality and flexible
FBP
filtered back projection
Are based on mathematical inversion of the complete set of ‘line integrals’ (X-ray transform)
Common to CT, SPECT, PET, even some MRI sequences.
Measured data have to be precorrected (i.e. attenuation, scatter etc.) to get as close as possible to “line integrals” before they can be handed to FBP.
backprojection: each ray is traced back over the image; on a computer, every pixel along the ray path is incremented by a value proportional to the number of counts in that projection bin.
This process is completed for all projection lines, i.e. the image contains an accumulation of all backprojected counts.
sharpening filter before back projection = FBP
after back projection = back project and filter (BPF)
ramp filtering
FBP is the inverse of the X-ray transform
FBP is fast, but inflexible and can lead to noisy images (low-pass filtering is essential).
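A hedged sketch of FBP using scikit-image's Radon / inverse-Radon transform (assuming a recent scikit-image; `filter_name='ramp'` selects the ramp filter applied before backprojection, `filter_name=None` gives plain, blurred backprojection):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                        # ground-truth activity map
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=theta)                 # "line integrals" (X-ray transform)
fbp = iradon(sinogram, theta=theta, filter_name='ramp')    # ramp-filter then backproject
bp_only = iradon(sinogram, theta=theta, filter_name=None)  # backprojection only -> blurred
```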
image reconstruction
Data are (in first approximation) proportional to “line integrals” (also known as “projections”).
We can store these in “sinograms”, which are a way to order all possible Lines of Response (by angle and distance from origin)
Image reconstruction attempts to “invert” the measurement, i.e. find the image that is consistent with the measured data.
iterative reconstruction components
forward model
goodness-of-fit function
iterative scheme to improve the fit
The process starts off with an initial estimate of the image (here just a blank image); it then estimates what would be measured (forward_project + background), compares this to the measured data (here by using a ratio; see later why), uses the discrepancies to compute an improvement to the image, and so on
forward model
system matrix
mean of measured data = estimated data
estimated data = forward project(image) + background
system matrix
probabilities of detection
attenuation and blurring decreases prob
P_ij = probability that an emission from source voxel j is detected in detector bin i
i = detector bin
j = source voxel
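A small numpy sketch of the forward model with a system matrix (a toy, randomly filled matrix stands in for the detection probabilities P_ij; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_voxels = 64, 32

# system matrix: P[i, j] = probability that an emission in voxel j is detected in bin i
P = rng.uniform(0.0, 1.0, size=(n_bins, n_voxels)) * 0.05

image = rng.uniform(1.0, 10.0, size=n_voxels)    # activity in each voxel
background = 0.5                                 # mean scatter + randoms contribution

# estimated (mean) data = forward_project(image) + background
estimated = P @ image + background
measured = rng.poisson(estimated)                # noisy acquisition (Poisson counting)
```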
maximum likelihood
The likelihood combines: the forward model for the mean of the estimated data; and the noise model (i.e. the probability of a measurement given that mean).
The likelihood therefore tells you how likely some measured data are if you know the image (and the forward model).
Maximum Likelihood (ML): no a priori info on the image, i.e. Prob(Image) = constant
noise models:
normal distribution (Gaussian)
maximising 𝐿 (or equivalently log𝐿) then corresponds to the weighted Least Squares fit
poisson distribution
it provides a better noise model for counting statistics.
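A sketch of the two noise models written as objective functions: Gaussian noise gives the weighted least-squares fit, Poisson noise gives the Poisson log-likelihood (terms independent of the image estimate dropped). Names are illustrative and continue the toy forward model above.

```python
import numpy as np

def poisson_loglik(measured, estimated):
    # sum_i [ y_i * log(yhat_i) - yhat_i ]  (log(y_i!) dropped: constant w.r.t. the image)
    return np.sum(measured * np.log(estimated) - estimated)

def weighted_least_squares(measured, estimated, variance):
    # minimising this corresponds to maximising a Gaussian log-likelihood
    return np.sum((measured - estimated) ** 2 / variance)
```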
Prob(image)
prob(data)
Prob(Image) is “prior” information, i.e. known prior to the scan. It encodes the probability that a given image corresponds to a plausible tracer distribution. Assuming it is constant reduces MAP to Maximum Likelihood.
Prob(Data) is possibly even harder to get your head round. It is the probability of measuring that particular data at all (averaged over all possible images); when maximising over images it acts as a normalising constant.
Prob(Image | Data) Probability that the distribution of the tracer corresponds to a certain image, given the current measured data
(prob of image given data)
Find the most probable image: Maximum a Posteriori (MAP)
gradient ascent
contour plots are generated for an “objective” function Ψ
MLEM
maximum likelihood via expectation maximisation
commonly used example of an iterative reconstruction algorithm for poisson data
It “converges” to the ML solution.
It involves forward and back projection, and compares measured and estimated data by division.
initial image
gets sharper after iterations
At later iterations, the image stabilises (noise-free data).
At later iterations, noise dominates (noisy data).
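A minimal MLEM sketch in numpy (assuming the toy system matrix P, background and measured data from above; this is the standard multiplicative EM update, not any vendor's implementation):

```python
import numpy as np

def mlem(P, measured, background, n_iter=50):
    image = np.ones(P.shape[1])                 # initial (uniform) estimate
    sensitivity = P.sum(axis=0)                 # backprojection of ones
    for _ in range(n_iter):
        estimated = P @ image + background      # forward-project current estimate
        ratio = measured / estimated            # compare measured/estimated by division
        image *= (P.T @ ratio) / sensitivity    # backproject ratio, normalise, update
    return image
```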
MLEM issues
require many iterations to get to ML solution
algorithm slows down
lots of computation
not enough iterations leads to blurring
MLEM OSEM
acceleration: ordered subsets
ML-EM: each update involves BP and FP for all projection angles
OSEM: each update only uses a subset of projection angles
so OSEM uses fewer projections per update, hence less computation time per update
but each update uses less data, so more noise
OSEM (using early-stopping and with post-filtering) is much faster than MLEM and is currently most-popular, but is not optimal
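An OSEM sketch under the same toy assumptions: each sub-iteration uses only a subset of the projection bins (in practice, a subset of projection angles), so one pass over all subsets costs roughly one MLEM iteration but updates the image several times.

```python
import numpy as np

def osem(P, measured, background, n_subsets=4, n_iter=5):
    image = np.ones(P.shape[1])
    subsets = np.array_split(np.arange(P.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:                     # one update per subset of projection bins
            Ps = P[idx]
            estimated = Ps @ image + background
            image *= (Ps.T @ (measured[idx] / estimated)) / Ps.sum(axis=0)
    return image
```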
early stopping
(at fixed number of iterations, which is dependent on the scanner, tracer, target organ and local practice)
problem:
quantification
each lesion can have a different quantitative “recovery” (convergence is object- and location-dependent)
hot-lesion values are typically underestimated at early stopping
for “cold” objects (i.e. lower activity than the surrounding regions), usually MLEM initially overestimates the true activity
MAP and fitting
maximum a posteriori
we want to find the image that is most likely for some measured data, and (via Bayes’ rule), this corresponds to maximising the sum of the log-likelihood (which is higher if the data is close to the estimated data) and the log-prior (which is higher if the image is more likely to correspond to a patient).
Fitting takes the opposite (but equivalent) point of view: it minimises the sum of a “distance” between the data and the estimated data and a penalty (which is lower if the image is more desirable).
Note that the sum of the 2 terms is often called the “objective function”: we are trying to find the image that optimises (i.e. maximise for MAP, minimise for fitting) this function.
penalties
reduce noise
quadratic penalty: penalises differences between neighbouring voxels; compared with (early-stopped) OSEM it prevents grainy images and gives smoother results
edge-preserving penalties
retain high contrast at edges while keeping noise low
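A sketch of the penalised ("fitting") objective: Poisson data term plus a quadratic roughness penalty on neighbouring voxels, with β controlling the balance between data fit and smoothness (names illustrative, reusing the toy model above).

```python
import numpy as np

def quadratic_penalty(image):
    # sum of squared differences between neighbouring voxels (1-D neighbours here)
    return np.sum(np.diff(image) ** 2)

def map_objective(image, P, measured, background, beta):
    estimated = P @ image + background
    log_likelihood = np.sum(measured * np.log(estimated) - estimated)
    return log_likelihood - beta * quadratic_penalty(image)   # maximise this (MAP)
```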
MAP adv over MLEM
MAP algorithms promise more reliable quantification and better noise-contrast trade-off.
not dependent on iteration number
results easier to predict
Iterative problem is easier to solve
Algorithm can be designed to need less iterations for MAP than MLEM
MAP disadv
More choice
Which penalty are we going to use?
What parameters are we going to use? (how large is the penalty?)
contrast noise relationship
higher contrast generally comes with higher noise (a trade-off)
survival probability
probability that a photon is not scattered
attenuation modelling in SPECT
Different points on a LOR will therefore each have a different attenuation factor (increasing away from the detector).
Therefore, in SPECT we cannot simply "precorrect" the measured data for attenuation: the data cannot really be made to fit the "line-integral" model (or "X-ray transform") that we used to derive FBP.
Approximate attenuation correction in SPECT: “Chang” method
Calculate average attenuation factor for each point in the object / patient
Calculate correction factor for each point as: correction = 1 / mean attenuation factor
Multiply reconstructed NAC (“No Attenuation Correction”) image with correction factor at every point
Nowadays, the method is mostly useful to get a rough estimate of the effect of attenuation in a part of the image.
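A rough numpy sketch of the Chang idea on a 2-D μ-map: for each point, compute the survival probability exp(−∫μ dl) along many directions out of the patient, average them, and take the reciprocal as the correction factor. The ray tracing here is a crude fixed-step march, purely illustrative.

```python
import numpy as np

def chang_correction(mu_map, point, n_angles=64, step=0.5, pixel_size=1.0):
    """Approximate Chang correction factor at `point` (row, col) of a 2-D mu-map."""
    rows, cols = mu_map.shape
    survivals = []
    for angle in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        dr, dc = np.sin(angle), np.cos(angle)
        r, c, line_integral = float(point[0]), float(point[1]), 0.0
        while 0 <= r < rows and 0 <= c < cols:        # march towards the detector
            line_integral += mu_map[int(r), int(c)] * step * pixel_size
            r += dr * step
            c += dc * step
        survivals.append(np.exp(-line_integral))      # probability photon is not attenuated
    return 1.0 / np.mean(survivals)                   # correction = 1 / mean attenuation factor

# example: uniform water-like disc, mu ~ 0.15 /cm (~0.015 /mm) at 140 keV, 1 mm pixels
mu = np.zeros((100, 100))
yy, xx = np.mgrid[:100, :100]
mu[(yy - 50) ** 2 + (xx - 50) ** 2 < 40 ** 2] = 0.015
print(chang_correction(mu, (50, 50)))   # the centre needs a larger correction than the edge
```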
edge vs. middle: photons' chance of reaching the detector
the photons from the edge have a larger chance of reaching the detector than those from the middle of the patient/phantom.
most popular method for scatter correction in SPECT.
Triple Energy Window scatter estimation
relatively effective and simple
measuring SPECT and PET transmission
SPECT
measure attenuation factors a to reconstruct the attenuation map μ (needed in the system model)
PET
measure the attenuation factor a along each LOR
the emission data can be immediately precorrected (divide by a)
using CT with SPECT/PET
Anatomical localisation
Attenuation correction
Quantification (e.g. scatter, partial volume correction)
Complementary diagnostic information
attenuation effect
attenuation results in apparent reduced uptake (looks like a perfusion defect)
Without attenuation correction (NAC), "relative activity" appears decreased where attenuation is high
breathing artefacts
CT streak artefacts and movement artefacts (&metal implants)
CT artefacts cause PET/SPECT errors
2D vs 3D PET
2D
the volume is obtained as a stack of slices
SEPTA between crystals
3D
Detected LORs are therefore not in a plane, but have all possible orientations. This needed a step-up in various aspects (faster electronics, faster image reconstruction, better scatter estimation, etc.). However, these days, all PET scanners are 3D only.
NO SEPTA
SNR better in 3D:
More counts!
Compensated by higher randoms and scatter fractions.
Balances to overall gain (at lower injected activity)
time of flight PET
the difference in arrival time is related to the location along the LOR where the annihilation occurred. If the difference Δ𝑡 is zero, the annihilation occurred at the mid-point between the 2 detectors. For a non-zero arrival-time difference, see the formula below:
both gammas travel with speed of light (c)
difference in time of detection is (t2-t1)
emission origin is at distance d from centre of LOR
where d = (t2-t1) c/2
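A quick numeric check of the TOF relation d = (t2 − t1)·c/2, and of how the coincidence timing resolution maps to a localisation uncertainty along the LOR (Δx ≈ c·Δt/2); numbers are illustrative.

```python
c = 2.998e8            # speed of light, m/s

def tof_offset(t1, t2):
    """Distance of the annihilation point from the centre of the LOR (metres)."""
    return (t2 - t1) * c / 2.0

print(tof_offset(0.0, 400e-12))                 # 400 ps difference -> ~0.06 m = 6 cm

def localisation_fwhm(timing_fwhm):
    """Positional uncertainty (FWHM) along the LOR for a given timing resolution."""
    return c * timing_fwhm / 2.0

print(localisation_fwhm(400e-12) * 100, "cm")   # ~6 cm for 400 ps timing resolution
```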
TOF timing resolution
Higher TOF timing resolution
Reduced uncertainty in localisation
Reduced noise
TOF adv
Enabling TOF increases your certainty about the location.
TOF converges faster and achieves better contrast for given noise
(for OSEM iterations)
PET coincidences
Accidental coincidences occur when 2 photons are detected (within the coincidence timing window) that originate from different annihilations.
true (unscattered)
scatter
random
single (1 out of 2 detected)
estimating accidental coincidences: delayed time window
delayed time window
If you put a time delay (of a few milliseconds) in your coincidence circuitry and it detects a (delayed) coincidence, you know that the 2 photons were from different annihilations
So, the mean number of accidental coincidences in the (non-delayed) coincidence window is equal to the mean number of delayed coincidences.
The delayed method has 2 disadvantages:
The (mean) number of delayed coincidences is low, so it’s a noisy estimate of the mean of the randoms
It needs extra electronics and can keep your coincidence circuitry busy, so could increase dead-time (although that’s not a problem in current systems anymore)
estimating accidental coincidences: randoms from singles
Provide nearly noiseless estimate of the mean background
If you can detect the number of singles in each crystal in a certain time interval, then this can be used to estimate your mean randoms-rate.
adv:
singles-rates are quite high. Therefore, if we measure the number of singles in a time interval, it will allow us to give an accurate estimate of the singles-rate (Poisson statistics again!).
RFS estimate is far less noisy than the delayed estimate
disadv:
now you need to count those singles, so you need extra electronics.
the randoms-from-singles formula (mean randoms rate for a LOR ≈ 2τ·S_i·S_j, with τ the coincidence window and S_i, S_j the singles rates) ignores dead-time in the coincidence circuitry
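A sketch of the randoms-from-singles estimate, using the 2τ·S_i·S_j relation noted above (values illustrative; as the notes say, dead-time is ignored here).

```python
import numpy as np

tau = 5e-9                                        # coincidence timing window, seconds
singles_rate = np.array([2.0e5, 1.5e5, 3.0e5])    # singles rate per crystal (counts/s)

# mean randoms rate for the LOR between crystals i and j: 2 * tau * S_i * S_j
randoms_rate = 2.0 * tau * np.outer(singles_rate, singles_rate)
print(randoms_rate[0, 1])                         # expected randoms/s on LOR (0, 1)
```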
PET resolution
factors:
crystal size, which is around 4-5mm on current clinical systems.
smaller crystals are more expensive and might also have a somewhat lower detection efficiency; there will also be more inter-crystal scatter, etc.
positron range
non-collinearity (the two annihilation photons deviate slightly from 180°)
SPECT resolution
mostly determined by collimator
usually given as a sum of 2 components: the collimator blurring and the intrinsic detector blurring. They are both often modelled as a Gaussian blur.
detector efficiency
Probability of detection of a photon depends on
Place and direction of incidence of photon on the detector block
Energy of the incoming photon
Scintillator
PMT/APD/SiPM efficiency
Electronics
Incidence rate (singles dead-time)
Probability of detection of a pair of photons depends in addition on
Timing circuit
Downstream processing (coincidence dead-time)
detector efficiency in block detectors
As the crystals in the middle of the block have a larger chance of stopping gamma photons than the crystals at the edge, the detection efficiency of a block-detector is usually highest in the middle of the block.
determining detector efficiencies
2-D PET
fan sum method
Sum of coincidence counts seen by 1 detector is proportional to its efficiency
you add a lot of LORs, so reduce noise. It’s therefore easy to use this method for a regular check/re-calibration of the scanner.
SPECT
uniformity measurement
In SPECT, it’s easiest to use a plane source on top of the collimator. As long as that plane source is uniform, the detected counts in each bin will be proportional to the detection efficiency
optimal operating range
determined by looking at CV of corrected counts
QA/QC
QA sets rules
QC tests against those rules
daily PET QC
the aim of the test is to check the response of the PET detectors. This can be done either with a rotating rod source or with a solid phantom such as a germanium (Ge-68) phantom.
After the acquisition we can look at the sinograms to check for problems: a faulty detector shows up as a diagonal line/gap in the sinogram.
Some manufacturers perform tests where the performance of each detector is tested with regards to a number of different parameters
weekly PET QC
On a weekly basis minor tune-ups of the system may be recommended, and it is sensible to check the SUV or the activity concentration measured by the system using a cylindrical phantom filled with F-18 or Ge-68.
infrequent PET QC
On a less frequent basis, other detector calibrations will be performed together with activity concentration or SUV calibrations. The frequency of these tests will depend on the make of the system and the level of expertise available at the site. In fact, many of these tests will be performed by the manufacturer's field engineers and not by local staff.
A critical test that should be performed is a check on the inherent registration between PET and CT images on dedicated PET/CT systems. Normally, calibrations are in place to ensure that images acquired on PET/CT systems are overlaid correctly. However, it is important to test these calibrations on perhaps a quarterly basis, or whatever is suggested by your manufacturer.
QC: image quality, accuracy of attenuation and scatter corrections
Calculate Contrast of hot and cold spheres
Variability of background activity
Scatter and attenuation correction algorithms checked with ROI analysis on central lung insert
Activity to mimic clinical whole body study
Additional activity distribution placed outside FOV
Scan length to simulate whole body scan time
SUV
standard uptake value
SUV = tissue activity concentration (kBq/ml) / (injected activity / patient weight)
with ~1 g/ml tissue density, SUV is dimensionless
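A minimal sketch of the SUV calculation as defined above (decay correction of the injected activity to scan time is assumed to be handled elsewhere; numbers are illustrative).

```python
def suv(tissue_conc_kbq_per_ml, injected_activity_mbq, weight_kg):
    """SUV = tissue concentration / (injected activity / body weight)."""
    injected_kbq = injected_activity_mbq * 1000.0
    weight_g = weight_kg * 1000.0       # with ~1 g/ml tissue density SUV is unitless
    return tissue_conc_kbq_per_ml / (injected_kbq / weight_g)

print(suv(5.0, 350.0, 70.0))   # 5 kBq/ml, 350 MBq injected, 70 kg patient -> SUV = 1.0
```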
partial volume effects (PVE)
PET and SPECT Definition:
Any effects on reconstructed image due to limited resolution (system, image reconstruction settings, filters…)
PVEs degrade the quantitative accuracy of PET and SPECT.
recovery factor
Recovery factor: ratio of measured ROI mean (or max) over the true value
The RF (also sometimes called the Recovery Coefficient) allows you to convert a measured value to the true value (just divide by the RF).
dependent on object shape.
types of PVE effects
Between voxel-effects:
Spill-over (or "spill-out"): activity from inside the ROI appears outside
Spill-in: activity from outside the ROI appears inside
Within voxel effects “Tissue fraction effect”:
A voxel usually contains different tissues (and certainly cells). We measure an average activity in the voxel (at best)
Discretisation effects for voxels at the edge of an organ/lesion
correcting for PVE
ROI-based PV Correction methods:
Objects with known size
Usually assume uniform uptake in every region
Deconvolution methods:
Generic
Sensitive to noise
Hybrid methods:
Future
region based PVE correction:
geometric transfer matrix
segment anatomical regions for structures or tissues
smooth regions to match emission resolution
define contribution of activities in structures to each region (geometric transfer matrix: 𝑤_𝑖𝑗)
compute ROI values 𝑡_𝑗
solve for activity in each region (𝑇_𝑖), given measurement (𝑡_𝑗)
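A numpy sketch of the GTM solve: given the matrix of contributions w_ij from true region activities T_i to measured ROI means t_j, invert the linear system (symbols follow the notes; the matrix values here are made up).

```python
import numpy as np

# w[j, i]: fraction of region i's true activity that ends up in ROI j after blurring
w = np.array([[0.80, 0.15],
              [0.20, 0.70]])

t = np.array([6.5, 4.0])        # measured ROI mean values t_j

T = np.linalg.solve(w, t)       # true (PV-corrected) activities T_i in each region
print(T)
```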
tracer kinetic modelling
adv and disadv
Advantages
Extracted parameters are independent of delivery, therefore more reliable (e.g. for follow-up studies as part of the evaluation of a therapy)
Allows incorporation of biological information
Disadvantages
Longer acquisition time
Results can depend on the accuracy of model
motion
causes and effects
Motion occurs because of:
Respiratory motion (breathing)
Cardiac motion (heartbeats)
Gross patient motion
Patient motion impact:
Artifacts
Diagnostic uncertainty
Radiation Treatment
Quantitation
Reproducibility
Prevents use of anatomical information (e.g. from CT or MR) for regularisation.
- degrades image quality
- decreased resolution
- decreased lesion detectability
- accuracy of quantification
reduce motion
Step 1: Minimise motion
Patient management
Step 2: Gating and time frames
Split data in different “motion states”
Step 3: Motion correction
Combine data (e.g. through Image Registration) to reduce noise
reduce motion
gating
Gating reduces the amount of motion. However, it means each image is reconstructed with fewer counts. In current practice, the scan duration is therefore usually increased. Instead, it is better to combine the data from all the gates somehow
Use data from a single GATE
but lower counts -> higher noise!
Combine PET data from all gates
Estimate motion between gates from either
gated PET images
gated CT (or MR) images
Combine all PET gates into single image
add registered PET images
incorporate motion into image reconstruction and estimate single motion-free image
Sort data into multiple “gates” based on motion information
Independently reconstruct all gates
Register to a reference gate and (weighted) add
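A rough sketch of "register then (weighted) add" for gated data: each gate is reconstructed independently, aligned to a reference gate (here a simple translation via scipy.ndimage.shift stands in for real image registration), and averaged with weights proportional to the counts in each gate.

```python
import numpy as np
from scipy.ndimage import shift

def combine_gates(gate_images, shifts, counts):
    """Register each gate to the reference and add, weighted by counts per gate."""
    aligned = [shift(img, s) for img, s in zip(gate_images, shifts)]
    weights = np.asarray(counts, dtype=float)
    weights /= weights.sum()
    return sum(w * img for w, img in zip(weights, aligned))
```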
PET/MR issues and benefits
Problems
Attenuation correction
Issues with QA and testing(lack of phantoms that are suitable for both PET and MR QA/QC)
Benefits
“One stop shop”
Possibilities for “joint solutions”
Motion from MR
PVC
attenuation correction for PET MR
Basic problem
Attenuation is related to density
MR signal is not
Various solutions:
Segmentation-based
Estimate attenuation from PET data
Atlas/database methods (Machine Learning)
Combined approaches
PET MR adv related to PET image quality
Benefits
Increased scan duration (to accommodate multiple MR sequences)
Motion monitoring and correction
Anatomical (and other) information
Joint kinetics
Future opportunities
future
Radio-chemistry
Algorithms
Better corrections
Better scanner modelling
Better regularisation
Motion correction
Machine Learning
Instrumentation
Collimator design
Scintillators and detectors
Higher TOF resolution
Multi-modality
SUV max
very easy, but a single-point measure, sensitive to noise
SUV peak
(still easy and overcomes some of the main SUVmax limitations) average SUV within a 1 cm³ spherical ROI centred on the "hottest focus"
PSF
point spread function
the image (or volume) obtained of an object consisting of a single point source with activity 1kBq (or other units)
Regularisation in PET/SPECT
Low-pass filtering of:
the reconstructed image
the data before reconstruction (often used in FBP)
Early-stopping of MLEM/OSEM
Relies on initialisation with “smooth” image and behaviour of MLEM (first updates low frequency features and then “adds in” rest)
Is often used, but can lead to wrong quantification (as convergence rate is object/location dependent)
Maximum a Posteriori (MAP) / penalised reconstruction
Incorporate prior information (e.g. expected image appearance/features) into probability model
Penalise undesirable features when maximising
Balance between data-fitting and prior/penalty
Use of different basis for image reconstruction
(e.g. not voxels but “blobs”)