2nd Year: Commissioning and General Machine/TPS Knowledge Flashcards
If you take a PDD curve, and the curve is parallel but offset to what gold beam data expects, is this an issue with the Linac? Why or why not?
Most likely not
If energy/beam quality differs, the curve will be skewed in a non-parallel manner (attenuation differences)
If it just appears offset, you most likely just have a bad setup on measurement
If energy of a flattened beam is higher than expected, what happens to the horns?
Horns increase
What beam characteristic is generally the best distinguisher of beam energy?
Flatness (for flattened beams)
Ex. for a 6 MV beam, a 1% error in energy presents as roughly a 4% error in the profile
List some equipment that is required for acquisition of commissioning data
3D Water Tank with large enough dimensions for full scatter
Ion chamber for TG-51
Small field detector
Scanning chambers/detectors
Levels for the tank and gantry head
For water tank scanning, how is a CAX correction applied? How is it measured?
Measured using a geometry that extracts the in-plane offset, crossplane offset, and rotation angles from in-plane and crossplane profiles taken at 2 or 3 different depths
Translational corrections can be applied to all data collected thereafter, but rotational corrections cannot be
For large fields, not applying rotational corrections makes little difference. For small fields it does, so you want the rotational CAX measurement to be as close to 0 as possible
For tanks that are unable to measure inplane, crossplane and diagonal profiles for the largest field sizes, what method is used to do this?
Shifted tank method
Center the field in a quadrant of the tank, keeping the crosshair approximately 5 cm from any tank wall, and measure only toward the center of the tank
With this method you are measuring half the profile, which is fine because in TPS you would just mirror the profile anyway
Why for commissioning do we do 10 x 10 PDD profiles for FFF beams with and without the lead foil?
TG-51 requires it with the lead foil
TPS requires it WITHOUT the lead foil. Which makes complete sense as you never treat a patient with lead foil
At what depths does Eclipse TPS want inplane and crossplane profiles for photons?
dmax, 5 cm, 10 cm, 20 cm, 30 cm
How do you measure Sp?
You don’t
You measure Scp in a water tank
You measure Sc in air
You then divide Scp/Sc for all respective field sizes to get Sp
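A minimal arithmetic sketch of that division, using made-up Scp and Sc values (all numbers below are illustrative, not measured data):

```python
# Hypothetical Scp (measured in water) and Sc (measured in air) per square field size (cm).
scp = {4: 0.930, 6: 0.962, 10: 1.000, 15: 1.030, 20: 1.052}
sc  = {4: 0.965, 6: 0.980, 10: 1.000, 15: 1.014, 20: 1.023}

# Sp is simply the ratio Scp / Sc at each field size.
sp = {fs: scp[fs] / sc[fs] for fs in scp}

for fs, value in sp.items():
    print(f"{fs}x{fs} cm2: Sp = {value:.3f}")
```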
How is DLG measured?
Varian provides template plans in which a narrow MLC slit (sweeping gap) slides across the field, with a different gap size for each plan
A detector measures the integrated dose for each sweeping-gap plan
After subtracting the MLC transmission contribution, plot the corrected integrated reading vs. gap size
Fit a line and extrapolate to zero reading; the magnitude of the (negative) gap-axis intercept is your DLG
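A minimal sketch of that extrapolation, assuming the standard sweeping-gap method with transmission-corrected readings (the gap sizes and readings below are hypothetical):

```python
import numpy as np

# Nominal sweeping-gap sizes (mm) and hypothetical transmission-corrected integrated readings.
gaps_mm  = np.array([2, 4, 6, 10, 14, 16, 20], dtype=float)
readings = np.array([8.6, 13.5, 18.3, 27.9, 37.4, 42.3, 51.9])

# Linear fit: reading = slope * gap + intercept.
slope, intercept = np.polyfit(gaps_mm, readings, 1)

# Extrapolate to zero reading; the magnitude of the (negative) gap intercept is the DLG.
dlg_mm = abs(-intercept / slope)
print(f"DLG ~ {dlg_mm:.2f} mm")
```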
What is the difference between inplane and crossplane?
Inplane is along the axis of the linac (LNG)
Crossplane is perpendicular to the axis of the linac (LAT)
Why are you allowed during commissioning to assume perfect symmetry of your profiles?
Because you normally check ahead of time during acceptance testing
Afterwards, during all your QA, you ensure symmetry is close to perfect
What is the most ideal small field dosimeter for scanning? What about output factors?
Scanning: diode detector such as Edge or MicroDiamond
OF: W1 or W2
Note: Although the W2 has scanning capabilities, it's not great and its software is clunky. Scanning also requires high doses, which yellows the plastic scintillator over the course of commissioning
What criteria is used to compare measured profiles to VRB data?
2% / 2mm gamma analysis for inplane, crossplane and PDD
3% / 3 mm gamma analysis for diagonals
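As a rough illustration of how such a comparison works, here is a minimal 1-D global-gamma sketch with hypothetical profiles (a real commissioning comparison would use the scanning/analysis software's gamma tool, not this toy code):

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.02, dta_mm=2.0):
    """Simple 1-D gamma (global normalization): minimum capital-Gamma per reference point."""
    dose_crit = dd * dose_ref.max()
    gammas = []
    for xr, dr in zip(x, dose_ref):
        cap_gamma = np.sqrt(((x - xr) / dta_mm) ** 2 + ((dose_eval - dr) / dose_crit) ** 2)
        gammas.append(cap_gamma.min())
    return np.array(gammas)

# Hypothetical crossplane profiles (positions in mm): reference vs. slightly shifted measurement.
x = np.linspace(-100, 100, 401)
reference = 100 / (1 + np.exp((np.abs(x) - 50) / 3))
measured  = 100 / (1 + np.exp((np.abs(x) - 50.5) / 3))

g = gamma_1d(x, reference, measured)
print(f"2%/2mm pass rate: {100 * np.mean(g <= 1):.1f} %")
```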
Where does the VRB data come from?
Real measured data: a composite of multiple data sets measured across multiple Varian TrueBeams at Duke by different physicists
What does VRB stand for?
Varian Reference Beam
How is VRB used during commissioning?
Two approaches, depending on the client
- VRB is used in the TPS to make models, and all commissioning measurements are used to validate the VRB
- The measured data are used in the TPS to create the models and are double-checked against VRB
In both cases, you need to ensure VRB data is close to measured data anyway. It’s then just a matter of which data do you want to use for your model? Which do you trust more?
How is data processed for PDD curves?
- Shift
- Smooth
- Normalize
- For electrons, the software should apply a depth-dependent PDI-to-PDD correction for the chamber
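A minimal sketch of the smooth/normalize steps on a toy PDD (the depth-dose shape and noise here are synthetic; real processing is done in the scanning software, and the shift for effective point of measurement is assumed to have already been applied):

```python
import numpy as np

# Toy depth-dose curve with synthetic noise (depths in mm, arbitrary reading units).
depth_mm = np.arange(0.0, 300.0, 2.0)
reading  = np.exp(-depth_mm / 180.0) * (1 - np.exp(-depth_mm / 8.0))
reading += np.random.normal(0.0, 0.002, reading.size)

# Smooth with a simple moving average, then normalize to 100% at dmax.
kernel   = np.ones(5) / 5
smoothed = np.convolve(reading, kernel, mode="same")
pdd      = 100.0 * smoothed / smoothed.max()

print(f"dmax ~ {depth_mm[np.argmax(pdd)]:.0f} mm")
```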
How is data processed for inplane and crossplane measurements?
Smooth
CAX correction
Make symmetric
The microdiamond and edge detectors are both diode detectors. What is the main compositional difference between them though?
The microDiamond is a synthetic diamond detector with a very small sensitive volume and is nearly tissue equivalent
Edge is silicon
Down to what field size is PDD data measured?
1 x 1 cm2
For which field sizes are MLC- and jaw-collimated small-field in-plane, crossplane, and PDD curves taken?
1x1 to 3x3 cm2 for PDDs
2x2 and 3x3 cm2 for all profiles and PDDs
Down to what field size does eclipse accept in and crossplane data?
3x3 cm2
Anything below will not be accepted by Eclipse
Why for small field PDD can you feel confident measuring directly and not needing a per depth correction factor?
Although best practice is to apply a per-depth correction, it is deemed non-essential by commissioning standards because the actual energy spectrum, and thus the perturbation/correction factor, barely changes with depth. Most commissioning teams don't apply any factors to scans, only to point measurements
This is because field spreading increases low-energy scatter while beam hardening increases the mean energy, so the two effects roughly balance. The factors do change, but not by a significant amount
For a output factor corrected reading at an intermediate field, how do you find output factor for fields smaller than the intermediate field using daisy chaining?
Measure the response at the intermediate field
This gives you a dose-per-response calibration
Then measure the response at the smaller fields
Multiply the small-field responses by the correction factor that takes you from the small field to the intermediate field, then multiply by the dose-per-response of the intermediate field
How are small field scatter factors normalized to a 10x10 field?
Two steps (see the sketch below):
First the corrected scatter factor is normalized to the intermediate field, then to 10x10
That is, daisy chaining
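A minimal sketch of the chaining arithmetic, assuming a diode for the small fields, a chamber for the intermediate-to-10x10 link, and published small-field correction factors (all readings and factors below are hypothetical):

```python
# Hypothetical detector readings (arbitrary units) and small-field correction factors.
diode_reading   = {1: 0.620, 2: 0.785, 3: 0.845, 4: 0.880}   # diode: small fields + intermediate
chamber_reading = {4: 1.850, 10: 2.000}                      # chamber: intermediate + 10x10
k_small         = {1: 0.97, 2: 0.99, 3: 1.00}                # corrections: small field -> intermediate

intermediate = 4  # cm, the intermediate field linking the two detectors

# Chamber ties the intermediate field to the 10x10 reference.
of_intermediate = chamber_reading[intermediate] / chamber_reading[10]

# Corrected diode ratio ties each small field to the intermediate field.
for fs in (1, 2, 3):
    of = k_small[fs] * diode_reading[fs] / diode_reading[intermediate] * of_intermediate
    print(f"{fs}x{fs} cm2: output factor relative to 10x10 = {of:.3f}")
```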
True or False
Varian machines tend to behave very closely to their pre-calculated VRB?
True!
True or False
Elekta machines tend to act very closely to their representative data?
False
For vendors besides Varian, there is no representative data. All models must be based off user collected data.
Assuming you use VRB data to create TPS model, how do you perform validation during commissioning?
You can do it as you take the real data (this is most efficient)
Generate plans in liquid water with the exact same setup as the measurements you’re taking, then you can do either a point by point or gamma analysis comparison between the two
In addition you will need to generate series of test plans for delivery of course
Assuming you didn’t want to use VRB data for your TPS models, how does validation work?
You cannot validate data as you collect it. Meaning you must collect everything first, enter into TPS, generate models, then create plans to compare to the measured profiles
What criteria are needed for two linacs to be "matched"?
Energy is exactly equal
MLCs are the same
DLG and transmission factors are the same
**Note:** There is a tuning factor that allows you to match the DLG, but no such factor exists for transmission factors
How is transmission factor measured?
100 SAD, 5 cm depth
10x10 cm2 measurement first
Take a measurement with MLC bank A covering the field
Then another with MLC bank B covering the field
The transmission factor is the average of the bank A and bank B readings divided by the open-field reading
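A minimal arithmetic sketch of that calculation (all readings are hypothetical):

```python
# Hypothetical chamber readings at 100 SAD, 5 cm depth, 10x10 cm2 field (nC).
open_field     = 12.50   # jaws open, MLC retracted
bank_a_covered = 0.190   # MLC bank A covering the field
bank_b_covered = 0.186   # MLC bank B covering the field

transmission = 0.5 * (bank_a_covered + bank_b_covered) / open_field
print(f"MLC transmission ~ {transmission:.4f} ({100 * transmission:.2f} %)")
```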
For small fields, why should you NOT measure PDD? Only TMR
It’s very difficult to measure PDD due to chamber movement from CAX as the chamber changes depth. Your alignment is critical and has to be centered perfectly the entire range of depths
For TMR, since the chamber only stays in one spot, it only needs to be aligned once
If you are gonna measure PDD, you better make sure your chamber alignment is excellent at all depths
What data must be measured during commissioning for CDC/ECDC?
10 x 10 cm2 open field absolute dosimetry
Output factor for all cones (95 SSD, 5 cm depth)
TMR for at least three cones
OAR (off-axis ratios) for at least three cones, measured at at least one SSD and one depth
What data did we collect for commissioning of CDC and ECDC?
Profiles at 80, 90, 100 SSD for every cone at d = 5 cm
TMR at STD = 100 for all cones
Output factors for all cones at 95 SSD, 5 cm depth, with 5x5 cm2 jaws
Which detectors did we use for cone commissioning?
Edge and W1
They were compared to one another, and W1 was compared to MC
In addition to data collection for TPS, what else is done for Cone Commissioning?
- E2E test
- Comparison of data vs. the golden beam data provided by Varian for the 5x5 cm2 jaw setting
- Winston-Lutz isocentricity test
- Center check output factor
- Cone Jaw setting check
- Cone concentricity
When commissioning your 3D tank for scans, what are 4 things you must measure/determine extent of?
Alignment
Orthogonality
Distance Accuracy
Hysteresis effects
For small fields, where you want to measure TMR but can’t, how do you convert from PDD to TMR?
British Journal of Radiology Supplement 25 (BJR-25) describes the conversion
True or False
Sc and Sp data are never actually used by the TPS
True
You do not input this data into the TPS. It is purely for your own databook in case someone wants to do a hand calc
They contribute nothing to the models
The output factor tables already include all scatter info and the TPS can just derive a model from that
What are some tests used to commission RapidArc?
Plans with slits at various gantry angles to test the effect of gravity on the MLCs
Picket fence at cardinal gantry angles
Picket fence during RapidArc delivery (effect of gantry rotation on MLC positioning)
Intentional picket fence errors during RapidArc delivery and the system's ability to detect them
Dose rate and gantry speed control (deliver same dose with 7 different dose rates and gantry speeds)
DMLC dosimetry with MLC slit at 4 different angles
Briefly, how is portal dosimetry commissioned?
PVA calibration for all scans
During PVA calibration in MV dosimetry mode, deliver the beam, normalize the response across all sub-imagers in the panel array, then upload the beam data for your TPS model and specify the dose at CAX
This tells the panel exactly what dose was delivered to each point and gives it a reference to calibrate all of its sub-arrays to
Repeat for all energies
Run series of test plans (3D, IMRT, chair) to validate
What does the chair sample plan test? (3 major components of IMRT)
- Inverse planning module
- Leaf motion calculator
- Dynamic MLC control
How does your TPS create its model given input dataset?
It’s essentially an optimization process
The model tweaks its parameters and optimizes to best match the input data
What is the photon source size for the linac?
It is not one single value; the source size is defined by the algorithm, NOT the physical size, and depends on whether the field is collimated by the MLC or the jaws
For AAA: the X source size is 1 mm and Y is 0 for MLC-collimated fields; both are 0 for jaw-collimated fields
For Acuros: the X source size is 1.5 mm and Y is 0 for MLC-collimated fields; both are 0 for jaw-collimated fields
Both models use those values in their calculations, taking into account MLC or jaw transmission in either direction
Per TG-106, where should physical wedge factors be measured?
100 SSD at dmax
Per TG 106 where should scatter factors be measured?
100 SSD
either at dmax or 10 cm depth
Per TG-106, where are cone output factors defined?
100 SSD, dmax
What do the three regions of the chair test evaluate?
- Investigate an area of only leaf transmission
- Provide a homogeneous area for absolute dose verification
- An area of zero fluence between the legs, which requires the leaves to move over it at maximum speed with a minimal leaf gap to minimize dose in the region. Errors in modeling the leaf gap show up as a mismatch between calculated and measured dose in this zero-fluence region
True or False
Per TG-106, it is actually recommended NOT to use VRB data for your TPS models?
TRUE
Although a lot of clinics and commissioning teams do it anyway.
Per TG-106, what is the bare minimum data to be measured for photon beam commissioning?
- PDD curves
- In-plane and cross-plane profiles
- Wedge factors
- MLC data (leakage, penumbra, tongue and groove, DLG)
- Scatter factors
- Tray factors
Per TG-106, what is the bare minimum data to be collected during electron commissioning?
- PDD
- In-plane and cross-plane profiles
- Cone factors
- Insert factors
- VSD
Typically, how many beam models would you have for a given photon energy?
5 or more
One for open field, the rest are for physical and/or dynamic wedges
These are all individual models
What is the minimum dimension for a water tank to be considered “full-scatter”?
A 5 cm margin around the largest field size, projected at all depths in the tank
Down to a depth of 40 cm, a conservatively sized tank would be about 75 x 75 cm2
When may you see detector arrays used for commissioning?
Wedge profile measurements
They are really good at measuring these
When scanning, what must the chamber direction of motion be level with, the tank or the water?
ALWAYS THE WATER
Remember, WATER IS ALWAYS LEVEL. A tank is not
As long as the chamber runs parallel with the water surface, you're level
What is wrong with the setup that acquired the dotted data?
The gantry is tilted
What is an ideal detector for scanning of electron beams?
Diodes!
The water-to-silicon stopping-power ratio is nearly constant with depth and beam energy, making the PDI-to-PDD conversion easy
BUT, they’re not great in the bremsstrahlung tail
Why are diodes good for scanning electrons but bad at measuring the bremsstrahlung region of the curve?
Because the bremsstrahlung tail is all photons, and diodes designed to scan electrons are not good at measuring photons
For scanning, is it best to scan slow or fast? Why?
Slow
Less noise, and less water movement effects
Should you measure PDD shallow to deep or deep to shallow?
Deep to shallow
Less water ripple effect
Regarding water in the 3D tank, what are some considerations you may want to take into account?
Use distilled water (avoid algae getting into the mechanisms)
Check the water level at least once daily for evaporation
Per MPPG 5a, what is the only recommended data collection for your CT sim in the TPS?
HU mapping data with a wide range of material densities
Down to at least what field size should small-field output factors be measured?
2x2 cm2 or below
Why does TPS not accept small field data below 2x2 cm2?
It recognizes there is a lot of uncertainty in that data collection
At that point it's better to extrapolate the data to get the very-small-field behavior
Before validating measured data, what are three useful practices you can do to assess the quality of your data?
- Compare to reference data
- Compare to expected physics (Ex. PDD should mimic what’s roughly expected)
- After the TPS processes the data, make sure there were no issues with smoothing, mirroring, or importing
What two types of validation datasets should you do? What are the uses of both?
- Validation data calculated with plans exactly mimicking the collection conditions of the data put into the model (this makes sure the optimization didn't stray too far from the input data)
- Tests with different setups than those used for collection (identify model’s capability to calculate a dose for a scenario that was not used for the model)
What is tolerance per MPPG 5a for point dose comparisons of sample plans vs collected data used in model?
2%
Which two reports give a full list of recommended tests to do for commissioning validation?
MPPG5a and TG 106
Per MPPG 5a, what level of agreement should dose per MU at reference condition have with TPS model post commissioning?
0.5%
In what regions of curves/profiles do you expect the most inherent uncertainty/difficulty of the TPS model to predict dose in?
Buildup region
Profiles with oblique incidence
Penumbra
Tails
Before conducting validation tests of heterogeneity correction algorithms, what two prerequisites are required?
Beam models have already been created and validated
HU curves have been created from the CT Sim
What report gives a full list of recommended sample plans for IMRT/VMAT validation?
TG 119
**Advice:**
In general, know that you don’t need to memorize all the recommended tests and tolerances in every report. Just know where to find the info. What are the three major reports you should know?
TG 106 and MPPG5a for commissioning and validation
TG 119 for IMRT/VMAT validation
All three of them have very detailed explanations of each test and recommended tolerances
In general for point dose validation, which regions have the most generous tolerances?
Regions that are difficult to model,
- Penumbra
- Tails
- Buildup region
What tests does TG 119 recommend for IMRT commissioning?
A catalogue of sample plans ranging from minimal complexity to most complex. The report gives detailed descriptions on how to optimize these plans
The sample plans are…
- AP/PA
- Bands
- Multi-target
- Mock prostate
- Mock H&N
- Complex C-Shape
- Complex C-Shape 2
For the TG 119 recommended sample tests, what is the recommended phantom to be used?
A large water-equivalent slab phantom into which a chamber (smaller than a Farmer chamber) and film can be inserted for analysis
Film is analyzed with gamma analysis
Chamber is point dose %diff
Per MPPG 5a, what are the recommended TPS tests meant to check (two purposes)? How are they done?
- Check that TPS has not been unintentionally modified (use checksums or uncalc and recalc standard beams. We do both monthly)
- Dose calculation is consistent following TPS upgrades (uncalc and recalc standard beams)
Which TG report gives recommendations for commissioning of MLC, and what are the recommendations?
TG-50
Transmission test
PDDs, output factors, and TMRs for MLC-generated fields spot-checked against jaw-collimated fields
In-plane and crossplane profiles of MLC-collimated fields spot-checked against jaw-collimated fields
What is the setup when measuring wedge and scatter factors?
It’s completely vendor dependent. Whatever they want for their model
Ex. Varian wants wedge profiles at dmax and wedge factors at 10 cm depth in an SAD setup
What setup does the TPS want for measurement and modeling of EDW factors/profiles?
Trick question, you never feed the model EDW. It’s calculated by the TPS after the model is already made
All you do for EDW factors and profiles is validate against a sample plan of EDWs that you generate on the TPS. There’s no data to be taken for the model itself
What setup does Varian want scatter factors measured at for the model?
Sc and Sp do not go into the model, they are purely for the data book
Scp is measured at 10 cm depth, 100 SAD. It’s effectively just an output factor
What are the primary and secondary objectives of a commissioning job? What are the two parts that you are commissioning?
Two parts:
1. Which machine are you commissioning?
2. Which TPS are you commissioning?
Two goals of commissioning
1. Collect data for the TPS (this is required)
2. Collect extra data for data books in case hand calcs are needed (some jobs do this, but this is absolutely NOT REQUIRED)
How is VSD calculated for electrons?
Take scans at 3 different distances from source
Use software to find the FWHM at all 3 distances
This allows you to plot the beam divergence (FWHM vs. distance)
Back-project the divergence to the point where it converges; that point is the virtual source
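A minimal sketch of that back-projection, assuming the profile FWHM grows linearly with distance from the nominal source and that isocenter sits at 100 cm (all FWHM values below are hypothetical):

```python
import numpy as np

# Distances from the nominal source at which in-air profiles were scanned (cm),
# and the hypothetical FWHM of the profile at each distance (cm).
dist_cm = np.array([100.0, 110.0, 120.0])
fwhm_cm = np.array([14.2, 15.7, 17.2])

# Fit FWHM vs. distance and find where the extrapolated width collapses to zero:
# that location is the virtual point source.
slope, intercept = np.polyfit(dist_cm, fwhm_cm, 1)
virtual_source_position = -intercept / slope          # cm below the nominal source plane

# Virtual source distance to isocenter (isocenter assumed at 100 cm).
vsd_cm = 100.0 - virtual_source_position
print(f"Virtual source {virtual_source_position:.1f} cm below nominal source; VSD ~ {vsd_cm:.1f} cm")
```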
Why for electron commissioning do we take uncollimated air scans at the default field sizes for each energy, even though we will never treat without the cones?
eMC needs it to help model scatter off jaws
What data does eMC need for modeling?
Un-collimated air scans
Uncollimated 40x40 water scans
Collimated inplane, crossplane and PDDs for all cones and all energies
In which tab in Aria are DLG and transmission factors entered?
RT Admin, MLC modeling tab
With Eclipse 16, you can also add them in Beam Configuration
What is a “test box”?
Essentially a play area where everything is configured for a version of the TPS that isn't FDA approved or clinically used yet
Allows for commissioning of models in a safe area that won't affect any current models
True or False,
For varian commissioning jobs, they don’t need to calculate models on the spot, they just need to import the models from a library
True
They have all the models pre-calc’d ahead of time. This helps because some wedge and electron models take a VERY long time to calc.
When calculating a model, what is the TPS actually doing?
It takes your input data and tweaks your beam parameters in such a way that it gives you an optimized model to match your data
What are all algorithms that need to be modeled during an eclipse commissioning? (6)
PO - photon optimizer
PDIP - portal dosimetry
AAA
Acuros
eMC
ECDC if you have cones
In general, what is your measured data validated against in a commissioning?
VRB data
AAA calc’d plans
Acuros calc’d plans
eMC calc’d plans
PDIP calc’d plans with multiple test plans
These are all either point doses (absolute point doses, OARs, output factors, scatter factors, etc.) and/or scan validations (3%/3mm gamma on in-plane and crossplane profiles, PDDs, wedge profiles, etc.)
True or False
Each scanning and analysis software package (IBA, SNC, Mephysto) has a different file format that you need to convert between in order to share data with customers that don't have the same system as you.
True
What is the difference between gold beam data and VRB? Which does varian use during commissioning?
Golden beam data was notorious for having some issues and processing inconsistencies
VRB is an updated version of the golden beam data, acquired from 3 TrueBeams at Duke
During commissioning, Varian uses VRB for everything except small fields, because Duke measured the small-field scans with a CC13. The Varian commissioning team has improved small-field data that replaces the VRB data
What impact does tweaking the target spot size in beam config have on your model?
It adjusts the geometric penumbra of your beam model
In addition, it also has a very small impact on the hotspot
What are some of the values that the model tweaks during optimization? Where are these characteristics held in the model, and what is that all-encompassing data called?
Energy spectrum
Mean energy
Head scatter and leakage
Energy and intensity vs. radial position
Electron contamination
These characteristics together make up the phase space
Most of the time, the default field-size settings for the cones at each electron energy remain what Varian sets them to. When would this not be the case, and what must be tweaked in the TPS if this occurs?
Sometimes the engineers cannot get the beam steering into spec during installation, so they may have to change the default field sizes to do so
If this happens, you need to make sure they update the default value list in RT Admin
An important note on one of the purposes of commissioning/validation. Read back
You may see during validation that model calc’d profiles or point doses may not perfectly agree with measured. At times they may even be out of tolerance
Part of commissioning and validation is to know where your uncertainties are
Ex. the TPS is not great at modeling dose beyond the boundaries of a wedge, so you often see gamma failures outside hard-wedge fields. It's good to know this limitation, but also to know that the region outside the wedge is usually low dose and of minimal clinical impact
How is absolute dose defined in the TPS?
In Beam Configuration, each model has a subsection for absolute dosimetry parameters
You need to define a single geometry and associated dose/MU there
It can, in theory, be any setup as long as you are consistent in your calibration and QA
Ex. Varian actually recommends 95 SSD, 5 cm as the point of dose calibration, but your dose/MU there is not 1 cGy/MU, that is dependent on whether you’re SSD or SAD calibrated
Conceptually, what is DLG?
A parameter in RT Admin used to model the rounded leaf ends of the MLC. It attempts to model the physical difference between the radiation and light fields and accounts for the inherent leakage between opposed leaf ends. It has units of cm and corresponds to the FWHM of the field created by a closed (touching) leaf pair
Why does varian recommend setting absolute output in parameters at 95,5 or 90,10 instead of dmax like we calibrate in our clinic?
Because TPS gives the best absolute output match at whatever your reference condition is in the beam config. Since targets are almost never at dmax, and are usually at depth instead, you want the best matching at depth
True or False
During the model calculation, the TPS can change parameters set by you.
False
These are all hard set and include reference conditions for absolute dose calibration, effective target spot size, SAD distance, etc
**NOTE:** The optimization does NOT tweak the effective target spot size. You set that based on the Varian recommendation that best models the penumbra
What are some things that your model calculates during optimization based on input data?
Absolute dose scaling factor
Mean radial energy
Intensity profile
Particle fluence spectrum
Head leakage/scatter
Electron contamination
Calculated profiles
Calculated depth dose
Calculated gamma error histogram
Calculated wedge parameters (also dose scaling)
Why does eclipse use DLG but no other TPS does?
Eclipse does not explicitly model the rounded MLC leaf ends, so Varian created the DLG, which acts essentially as a fudge factor that accounts for leakage through the leaf ends
For validation point doses, what sorts of setups, in general, tend to yield the biggest expected %diff?
Those where you change multiple parameters from reference
Example, non-10x10 field + off axis + deep depth + wedged field, etc
The more changes you have to your reference condition, the worse you expect uncertainty to be
For Sc measurements, is it better to use acrylic or brass buildup caps? Why?
Brass
You get your buildup with a smaller volume of material, meaning the cap+detector can be fully in the field down to smaller fields and up to higher energies relative to acrylic
What do the marked 3 regions of the below test allow the user to inspect?
Region 1: an area of completely blocked dose. This lets the user inspect the accuracy of the modeled transmission factor
Region 2: completely open, allowing the user to inspect the accuracy of homogeneous open-field absolute dose verification
Region 3: a central zero-dose region in which the leaves must move very quickly from left to right while maintaining a closed leaf gap in order to minimize dose in the region. This inspects travel speed and leaf-gap modeling
Why does TG-106 recommend use of daisy chaining if diodes are used for SF output factors?
To minimize energy dependence effects
Per MPPG 5B point-dose validation, what are the recommended tolerances for the difference between calculated and measured for...
High dose region with 1 parameter change from reference -
High dose region with multiple parameters changed from reference -
Penumbra distance to agreement -
Low dose tail region up to 5 cm away -
High dose region with 1 parameter change from reference - 2%
High dose region with multiple parameters changed from reference - 5%
Penumbra distance to agreement - 2 mm
Low dose tail region up to 5 cm away - 3% of field max dose
per MPPG 5a, what is the tolerance for dose per MU at reference condition for TPS vs measured?
0.5%
Per MPPG 5a, what is the heterogeneity point dose tolerance for a setup of heterogeneity above and below measurement point?
3%
True or False
Dosimetry for SFD is often extrapolated by the TPS?
True
Most TPSs do not take field-size data at 2x2 cm2 or below. Instead they extrapolate, because measuring that data presents too much uncertainty
So when we take small-field data during commissioning, it's purely for validation (e.g., MLC-collimated small-field PDDs, profiles, and output factors)
In addition to dose comparison for heterogeneity validation, what else does MPPG 5B recommend for validation of heterogeneity calculation?
Confirmation of lookup tables, bulk densities, HU-to-electron-density and mass-density curves, and material assignment tables in the TPS
Per IROC, what percentage of facilities pass the anthropomorphic phantom E2E with very loose gamma criteria? What about slightly stricter, but still pretty loose, criteria?
Very loose: 90% of facilities pass
Less loose but still loose: 70% of facilities pass
Per MPPG 5B, what four types of tests are recommended for validation of electron model?
What fraction of commissionings actually do a full MPPG 5B validation?
Around 1/4th
The DLG is a value defined in cm. How is the value used by the TPS?
When calculating fluence, the TPS automatically pulls the leaf ends back by half of the dosimetric leaf gap. Because the TPS does not model transmission through the rounded leaf tips, it has to pull the leaves back virtually to account for the otherwise unaccounted-for leakage fluence
From beam config guide: “shifting the leaf tip positions in the actual fluence calculation. Leaf tips are shifted by pulling each of them back by half the value of the dosimetric leaf gap parameter so that the gap between a fully closed leaf pair equals the dosimetric leaf gap parameter.”
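A minimal sketch of that leaf-tip retraction as described in the quote above (the positions and DLG value are hypothetical; signs follow the convention that bank A sits on the negative side of the field):

```python
dlg_cm = 0.16   # hypothetical dosimetric leaf gap

def fluence_tip_positions(bank_a_tip_cm, bank_b_tip_cm, dlg_cm):
    """Pull each leaf tip back by half the DLG for the fluence calculation."""
    return bank_a_tip_cm - dlg_cm / 2, bank_b_tip_cm + dlg_cm / 2

# A fully closed leaf pair (both tips at 0) ends up separated by exactly the DLG.
print(fluence_tip_positions(0.0, 0.0, dlg_cm))    # (-0.08, 0.08) -> gap = 0.16 cm

# An open pair just gets slightly wider.
print(fluence_tip_positions(-2.0, 2.0, dlg_cm))   # (-2.08, 2.08) -> gap = 4.16 cm
```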
In Varian TPS model, what is the “second source?” What does it help model?
The second source is a Gaussian plane source located at the bottom plane of the flattening filter. It models the photons that result from interactions in the accelerator head outside the target (primarily in the flattening filter, primary collimator, and secondary jaws).
What three “sources” do both AAA and Acuros model?
Primary source
Secondary source
Electron contamination source
All three sources and their characteristics are contained in the phase space created by the model
True or False
To create the model for PDC, you don’t have to take additional data?
True
The Portal Dosimetry pre-configuration package consists of:
■ Pre-configured beam data for the "Portal Dose Image Prediction" calculation model.
■ A two dimensional beam profile correction file to be imported during "Dosimetry Calibration" on the 4DITC workstation.
■ A set of verification plans ensuring the proper installation of the package.
What 5 sources make up the electron source in eMC model?
■ Main source (electrons and photons) as a point source near the scattering foil
■ Jaws source (electrons and photons)
■ Scraper sources for electrons scattered at the upper applicator scrapers
■ Edge source for electrons scattered at the last applicator scraper or insert
■ Transmission photons