Ultrasound 2 Flashcards
What do transducers do (as a source and as a receiver)?
Transducers convert mechanical movement into an electrical signal or vice versa
Electrical → mechanical movement (as a source) or the other way around (as a receiver)
What is the piezoelectric effect?
The piezoelectric crystal is heated and poled (a strong electric field is applied) so that the charges become aligned
What happens to the piezoelectric crystal when the transducer is acting as a source vs as a receiver?
As a source, a voltage is applied to cause the piezoelectric crystal to compress and expand, producing a sound wave
As a receiver, the wave's mechanical movements compress or stretch the crystal between its 2 electrodes, and depending on the direction of movement, a voltage of one polarity or the other is produced.
What contains the piezoelectric element in the ultrasound transducer construction?
2 electrodes
What is the thickness of the piezoelectric element and why is it chosen as such?
Half of the wavelength: by the time the sound has travelled through the PZT element, it will have travelled half the wavelength. As the matching layer has a lower impedance than the PZT element, some of the waves are reflected and some are transmitted. The reflected waves are inverted, so they destructively interfere with each other, while the transmitted waves are in phase, so they constructively interfere to produce waves with a greater amplitude.
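A quick sketch of this half-wavelength rule in Python, assuming a typical speed of sound in PZT of about 4000 m/s (an illustrative textbook value, not taken from these cards):

```python
def element_thickness(frequency_hz, c_pzt=4000.0):
    """Half-wavelength thickness (m) of the PZT element.

    c_pzt is the speed of sound in PZT, assumed ~4000 m/s here.
    """
    wavelength = c_pzt / frequency_hz
    return wavelength / 2

# A 5 MHz transducer would need an element ~0.4 mm thick
print(element_thickness(5e6))  # 0.0004
```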
What is the purpose of using a backing layer?
The backing layer absorbs the sound transmitted from the back face of the piezoelectric element, damping the vibration (reducing ringing) so that short pulses can be produced.
What are the relative impedances of the backing layer, the PZT element and the matching layer?
The impedance of the matching layer should be the geometric mean of the impedances of the PZT element and the tissue: Z_matching = √(Z_PZT × Z_tissue). The backing layer's impedance should be close to that of the PZT element, so sound entering it is absorbed rather than reflected.
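A minimal sketch of the geometric-mean rule, using illustrative impedance values (PZT ~30 MRayl, soft tissue ~1.5 MRayl, typical textbook figures rather than values from these cards):

```python
import math

def matching_impedance(z_pzt, z_tissue):
    """Matching-layer impedance: geometric mean of PZT and tissue impedances."""
    return math.sqrt(z_pzt * z_tissue)

# PZT ~30 MRayl, soft tissue ~1.5 MRayl -> matching layer ~6.7 MRayl
print(round(matching_impedance(30.0, 1.5), 1))  # 6.7
```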
What do the lenses at the end of the matching layer do?
The lenses are curved, and as the sound reaches the curved surface at different points, it is refracted by different amounts. The lens curvature is designed so that the waves are refracted towards the beam axis, focusing the beam at a particular point
What is the point of focusing (on transmission or reception) in diagnostic imaging?
To improve spatial localization and to concentrate energy onto a specific region (better resolution)
What is the focal length?
The distance along the beam axis from the transducer face to the focus
What are the names of the 2 different planes in the beam axes?
- The elevation plane
- The scan plane
What is the width of the beam at -3dB?
The width where the intensity has fallen to half of its maximum
What is the width of the beam at -6dB?
The width where the intensity has fallen to a quarter of its maximum
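These dB levels can be checked directly, since a level in decibels corresponds to an intensity ratio of 10^(dB/10); a small sketch:

```python
def db_to_intensity_ratio(db):
    """Convert a level in dB to an intensity ratio: I/I0 = 10^(dB/10)."""
    return 10 ** (db / 10)

print(round(db_to_intensity_ratio(-3), 3))  # 0.501 (about half)
print(round(db_to_intensity_ratio(-6), 3))  # 0.251 (about a quarter)
```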
What are the 3 different types of probe types and what are the main characteristics?
Linear: PZT elements arranged in a straight line
Curvilinear: the PZT elements themselves are flat and parallel to each other, but the surface they adhere to is curved, so the array curves around a structure
Phased: the elements are adhered to a straight surface, and the beams are steered at angles which increase towards the outside of the array
What is the name of the surface to which the element arrays are adhered to?
the footprint (the transducer face in contact with the skin)
What are the applications of linear transducer array?
Imaging superficial structures such as blood vessels in the neck
What are the applications of curvilinear transducer array?
Imaging abdominal organs and fetuses
What are the applications of phased transducer array?
Small footprints used where there is a limited window for imaging at the skin surface but a wider field of view is needed at depth
What is the element width approximately equal to?
The wavelength of the ultrasound wave
What is the element length approximately equal to ?
Approximately 30 × the wavelength, plus a lens (which focuses the beam in the elevation plane)
What are the small spaces between elements called ?
kerf
What is the centre to centre distance of elements called?
Pitch
What is the field divided into?
2 sections: the near field and the far field
What is the near field main characteristics in terms of path differences between wavelets?
There are large path differences between wavelets arriving from different points on the source, resulting in a complex pattern of pressure minima and maxima
What is the far field main characteristics in terms of path differences between wavelets?
The path differences are smaller, as all points are far from the source; there is a simpler distribution (radial profile) of pressure, and the beam begins to diverge
What does the position of the last axial maximum depend on?
The source radius and wavelength of the ultrasound
What happens for small, low-frequency sources?
They have a large wavelength (wavelength is inversely proportional to frequency), so the last axial maximum is close to the source and the field diverges quickly (the transition between the near and far fields occurs sooner)
What happens for a high-frequency source?
As the wavelength is inversely proportional to frequency, the last axial maximum will be farther from the source and the beam will be well collimated (well directed)
What is the equation for the last axial maximum?
z_max = a² / λ, where a is the source radius and λ the wavelength
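A sketch of this formula with illustrative numbers (a 5 mm source radius and a 0.3 mm wavelength, roughly 5 MHz in soft tissue; values chosen for illustration):

```python
def last_axial_maximum(radius_m, wavelength_m):
    """Last axial maximum of the near field: z_max = a^2 / wavelength."""
    return radius_m ** 2 / wavelength_m

# 5 mm radius, 0.3 mm wavelength -> near/far-field transition at ~8.3 cm
print(round(last_axial_maximum(5e-3, 0.3e-3), 3))  # 0.083
```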
What happens to a beam with a high axial maximum in terms of the relative pressure amplitude ?
A beam whose last axial maximum is far from the source will be well collimated, with a complex pattern of pressure minima and maxima in the near field
If the source radius >> wavelength of sound in the material, what happens to the wavefronts of the beam?
They are quite long and narrow
What happens when the source radius << wavelength of sound in the material?
the source acts like a point source, with sound radiated over a wide angle from close to the source due to diffraction effects
What is the equation used to calculate the angle of divergence of the far field?
sin(θ) = 0.61 × λ / a, where λ is the wavelength and a the source radius
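The same illustrative numbers can be plugged into the divergence formula (a sketch; the 0.61 factor applies to a circular source):

```python
import math

def divergence_angle_deg(wavelength_m, radius_m):
    """Far-field divergence angle: sin(theta) = 0.61 * wavelength / radius."""
    return math.degrees(math.asin(0.61 * wavelength_m / radius_m))

# 0.3 mm wavelength, 5 mm radius -> the far field diverges by ~2 degrees
print(round(divergence_angle_deg(0.3e-3, 5e-3), 1))  # 2.1
```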
What does the equation for the angle of divergence of the far field tell us?
The central, axial region (the main lobe) diverges at this angle and is surrounded by sidelobes.
The sidelobes are areas of alternating high and low pressure which look like rings around the main lobe
In practice, how is imaging done?
Using arrays of small elements; the total field is the sum of the fields from the individual elements in the active group of elements
How is the distance of structures from the source determined and used to form an image?
Short pulses are used for imaging
Why are short pulses used so that the distance of structures from the source can be determined and used to form an image?
Short pulses contain a range of frequencies, and there are fewer wavefronts to interfere as:
1. the near field beam patterns are far simpler
2. the pressure amplitude on axis is more uniform
What is the central frequency of the pulse spectrum?
The frequency at which the pulse spectrum has its highest amplitude
What does the width of the pulse spectrum give?
The bandwidth: the range of frequencies contained in the pulse
What happens to a shorter pulse?
It has a wider bandwidth
How can images be formed?
From a series of scan lines which make up the image
What occurs to the small elements used to form the image vs the wavelength?
They result in a divergent field, so to make a narrower beam, the elements are pulsed sequentially in groups.
What is the term for when small elements are pulsed sequentially in groups?
the transmit beam
What is the point of forming a transmit beam?
Small elements on their own produce a divergent field
What is the result of transmit beam?
Several elements' fields are combined, so the resulting beam is less divergent
How can a focus be formed at different depths?
Using electronic steering where each element is fired at a slightly different time
How are the beams making up the image scan lines formed?
By an active group of elements; the active group is moved along the array by 1 element for each new scan line
How is the focus depth chosen?
By the operator, or based on an imaging preset
How does the machine adjust the focus depth?
Accounts for differences in path length between each element and focus position
how does the machine account for differences in path length between each element and focus position on transmit vs on receive?
On transmit: by relative delay between signals on elements
On receive: by adding relative delays to signals before summing
What happens to the echoes after they are received by the transducer elements?
Added together
Process of ultrasound imaging:
- echo reception
- signal addition
- signal processing
3.1 compensation for attenuation
3.2 envelope detection
3.3 brightness profile formation
Echo reception
When an ultrasound probe sends out sound waves into the body, they bounce off different tissues and structures. The echoes are picked up by the transducer elements in the probe.
Signal addition
The echoes received by the transducer elements are combined (added together). This process enhances the signal quality
Signal processing
After the echoes are received and combined they undergo several processing steps
Compensation for attenuation
As sound waves travel through the body, they lose energy due to absorption and scattering, a phenomenon known as attenuation
Attenuation is compensated based on how far the echoes have travelled into the body and back, helping to maintain image quality and accuracy at different depths
Envelope detection
The envelope of a signal represents the variation of its amplitude over time
In ultrasound imaging, the envelope of the received signals is extracted, providing information about the strength or intensity of the echoes
Brightness profile formation
The amplitude of the signal envelope is used to create a brightness profile along the lines of the ultrasound image.
What happens to the areas with stronger echoes in a brightness profile?
They will appear brighter on the image
What is the frame rate?
The rate at which we can form a complete image
What does the frame rate for a focused beam depend on ?
The number of scan lines (each scan line uses a separately formed beam) used to make up the image
How long the sound takes to travel to its furthest point and back (which depends on the maximum depth of tissue being imaged)
How is each scan line generated?
By a group of elements, using enough elements to form a square aperture
Example of formation of a square aperture
30 × λ = k × 1.3 × λ
k = 30 / 1.3 ≈ 23
k will be the number of elements in the active group
What is the number of scan lines needed to make a complete image?
N-k+1
number of elements in the array − number of elements in the active group + 1
Example question:
The length of an element is 30 * wavelength
The element pitch (centre-to-centre spacing) is 1.3 × wavelength
128 element array
ultrasound images are acquired with frame rates of 30fps
How far does the sound travel on a single (one-way) trip?
To make a square aperture, k must equal 23
For a 128-element array, there are 128 − 23 + 1 = 106 scan lines per image if k = 23
If the acquisition rate is 30 fps, there are 30 × 106 = 3180 scan lines/s, so each scan line can take a maximum of ~314 microseconds
Sound travels a distance d = c × t = 1540 × 314 × 10⁻⁶ ≈ 0.48 m
One-way trip distance = 0.48 / 2 = 0.24 m = 24 cm
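The worked example above can be checked with a short script (c = 1540 m/s, as assumed in these cards):

```python
c = 1540.0        # speed of sound in tissue, m/s
n_elements = 128  # elements in the array
k = 23            # active-group size needed for a square aperture
fps = 30          # frame rate

scan_lines = n_elements - k + 1        # scan lines per image
t_per_line = 1 / (fps * scan_lines)    # time budget per scan line, s
round_trip = c * t_per_line            # distance sound travels, m
max_depth = round_trip / 2             # maximum one-way imaging depth, m

print(scan_lines)           # 106
print(round(max_depth, 2))  # 0.24 (i.e. 24 cm)
```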
What happens when fields are combined from several small elements?
Each small element generates a divergent field; these add up to give a field with a more planar shape. It is not focused, so not very well localized laterally.
How to concentrate the waves into a smaller focal region in transmit focusing?
By delaying the firing of the elements until the fields from each element line up, i.e. arrive in phase, at one point.
This is done electronically by sending signals earlier from the edge elements, from which the sound has to travel further to reach the focal point
Focusing on transmit
Delays are applied to each element to compensate for the differences in path length (and hence time of flight) between each element and the focus, so that the time of arrival at the focus is the same for sound from each element.
What happens to the path length to the focal point (at a distance F) for elements either side of the central element of the group?
The farther an element is from the central element, the longer its path length to the focus.
What happens to the wavelets with their relative delays from the surfaces of the elements?
sum to make curved wavefronts which converge on the focus
How can receive focusing be applied?
To create foci at multiple depths from echoes returning from the same transmit pulse
How is the compensation mechanism for transmit focusing different to receive focusing?
Transmit focusing applies the delays to the transmitted pulses
Receive focusing applies the delays to the received echoes
What happens to the received echoes?
Delays are applied to the signals received at each element for each returning echo, so that they are lined up when processed.
What is the delay and sum method?
Delays are applied to signals received at each element which compensate for the differences in path length so that the effective time of arrival is the same at each element for echoes returning from the focus. Signals are then summed and constructively interfere.
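A minimal sketch of the delay-and-sum geometry (the element count, pitch and focal depth are hypothetical illustration values; only the path-length reasoning from the card is used):

```python
import math

def focusing_delays(n_elements, pitch_m, focus_m, c=1540.0):
    """Receive-focusing delays: compensate path-length differences so the
    effective time of arrival from the focus is the same at each element."""
    centre = (n_elements - 1) / 2
    # Path from each element to a focus on the central beam axis
    paths = [math.hypot((i - centre) * pitch_m, focus_m)
             for i in range(n_elements)]
    longest = max(paths)
    # Shorter path -> echo arrives earlier -> larger delay before summing
    return [(longest - p) / c for p in paths]

delays = focusing_delays(5, 0.4e-3, 30e-3)
print(delays.index(max(delays)))  # 2: the central element gets the largest delay
```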
How does electronic focusing help?
Reduces the beam width in the scan plane
What is the benefit of a reduced scan plane?
Gives us spatial localization of the echoes so we know where they came from and where to place them on the image
Results in higher sensitivity and lower noise as the echoes come from a small volume where the signal amplitude is higher
how are the wavefronts angled in phased arrays?
Increasing delays are added to the element signals between each successive element
what is the formula for the increase in path length between each element and the next to angle the wavefronts at an angle theta to the face of the array which has centre-to-centre element spacing of d?
Δx = d × sin(θ)
what is the time delay between each element in a phased array?
Δt = d × sin(θ) / c
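A sketch of the steering-delay formula (illustrative pitch and steering angle):

```python
import math

def steering_delay(pitch_m, theta_deg, c=1540.0):
    """Inter-element time delay to steer the beam by theta: d*sin(theta)/c."""
    return pitch_m * math.sin(math.radians(theta_deg)) / c

# 0.2 mm pitch, 30 degree steering angle -> ~65 ns between elements
print(steering_delay(0.2e-3, 30.0))
```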
What is the main type of delay in beam steering in phased arrays?
focusing delays
How is the total delay calculated?
By summing the steering delay and the focusing delay: the focusing delay for element n compensates for the difference in path length x_n between that element and the focal point, where n is the number of the element relative to the central element
What are grating lobes?
Additional beams formed at angles to the main beam due to the spacing between transducer elements. They occur when the path-length difference between waves from adjacent elements equals one wavelength, leading to constructive interference and the formation of additional beams
What is plane wave imaging?
All the elements are fired together and beamforming is performed on receive, focusing at each point in the image
With only 1 plane wave, what are the realistic effects of the image in terms of the beam, resolution and boundaries not parallel to the wavefronts?
- low amplitude beam
- poor lateral resolution
- boundaries not parallel to wavefronts are not visible
How to improve image quality of plane wave imaging and other types of imaging?
Images are composed from an average of several images obtained with different beam angles
What is the effect of compounding to improve the image quality of plane wave images?
- enhances the boundaries lying at an angle to the transducer face
- increases SNR
How does changing the power change the imaging depth?
Higher power increases imaging depth
How is the receive signal adjusted?
The received signal is amplified; the user can increase the gain to make echoes brighter
What happens when the echo is made too bright?
Too much gain from the amplifier will amplify the noise too much
What happens after the received echo is amplified?
Coherent processing: the raw signal contains amplitude and phase information
What happens after coherent processing?
Demodulation: the RF waveforms are rectified and filtered to obtain the signal envelope, which contains only brightness information
What happens in demodulation to the phase information?
It is removed
Non-coherent post-processing
Applied after demodulation (removal of the phase information)
Compression of the echo brightness scale, frame averaging
How to compensate for the attenuation of ultrasound in tissues which reduces the amplitude of deeper echoes?
A varying gain is applied to signals depending on time of arrival, and so depth they originated from.
The default amplification is set from the assumed attenuation coefficient at the transducer centre frequency.
Further adjustment is made by the user, with sliders to adjust amplification at different depths.
B-mode
brightness mode
Received echo amplitude is used to form 2D grayscale images made from scan lines
M-mode
Movement mode. A single scan line is viewed through time. Used for imaging movement
3D/4D imaging
3D images are acquired by scanning over a volume
When 3D images acquired by scanning over a volume are acquired in real time, this is 4D imaging
How can probes acquire a number of slices to build up and image the volume?
- Freehand scanning: 1D array with location tracking
- 2D array probe: steering in both dimensions
- Mechanical 3D probe
- Endoprobe: images acquired as the probe is pulled through a cavity
Doppler modes
used to examine blood flow
Spatial resolution
Ideal spatial resolution would resolve echoes from a point target as a point in the image
in reality, why do the echoes from point targets appear as blobs or streaks?
the image is constructed using beams that have some width in space, with pulses that have some length in time
axial resolution
the smallest separation of 2 targets on the beam axis that appear as 2 separate image features.
what is the axial resolution determined by?
temporal pulse length
What is the profile of the image brightness corresponding to the returning echo from a scatterer determined by?
the shape of the pulse envelope
what is the axial resolution for objects very close together axially?
the echo pulses will overlap in time, the objects can’t be separated:
axial resolution = L_p / 2
half the pulse length
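A sketch of the axial-resolution relation (a 3-cycle pulse at 5 MHz, illustrative values):

```python
def pulse_length(n_cycles, frequency_hz, c=1540.0):
    """Spatial pulse length (m) for a pulse of n cycles."""
    return n_cycles * c / frequency_hz

def axial_resolution(pulse_length_m):
    """Axial resolution is half the spatial pulse length."""
    return pulse_length_m / 2

# 3-cycle pulse at 5 MHz -> ~0.46 mm axial resolution
print(round(axial_resolution(pulse_length(3, 5e6)) * 1e3, 2))  # 0.46
```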
What can be said abut the echoes when the round trip distance is greater than the pulse length?
the echoes are separated
What can be said about the echoes when the round trip distance is shorter than the pulse length?
the echoes overlap
What is the quality factor?
the ratio of the central frequency to the bandwidth of the pulse
A longer pulse has a narrower/wider bandwidth?
narrower
What is the implication of the longer pulses having a narrower bandwidth?
higher Q factor
lower axial resolution
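A sketch of the Q-factor definition (illustrative centre frequency and bandwidth):

```python
def quality_factor(centre_freq_hz, bandwidth_hz):
    """Q = centre frequency / bandwidth; longer pulses -> narrower bandwidth -> higher Q."""
    return centre_freq_hz / bandwidth_hz

# 5 MHz centre frequency with a 2.5 MHz bandwidth
print(quality_factor(5e6, 2.5e6))  # 2.0
```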
Pulse frequency: relationship between axial resolution and frequency
The pulse length decreases with increasing frequency (for a fixed number of cycles), so higher frequencies give shorter pulses and better axial resolution
How to assess resolution?
image quality phantoms
Imaging wires embedded in a gel phantom
How can the result from the image quality phantom be interpreted?
The features blur together when they are closer together than the resolution limit
Lateral resolution
The smallest separation of the 2 targets placed laterally at the same depth that appear as 2 separate image features
What determines the lateral resolution?
The beam width
What happens to the B-mode image where beams overlap?
Objects may appear on more than one scan line
What does image brightness correspond to?
The position of the object relative to the beam axis
For overlapping beams, what will the object appear as?
On several scan lines with a different brightness
How to calculate lateral resolution?
W/2, where W is the beam width
If objects are separated by half the beam width how resolvable are they?
just resolvable
Where is the beam width the narrowest?
at the fixed focus determined by the lens
how to calculate contrast between area of tissue and its surroundings
contrast = (mean brightness of the lesion − mean brightness of the background) / mean brightness of the background
how to calculate the contrast-to-noise ratio (CNR)?
CNR = (mean brightness of the lesion − mean brightness of the background) / standard deviation of the background brightness
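Both definitions as a sketch (illustrative brightness values; a lesion darker than its background gives negative contrast):

```python
def contrast(lesion_mean, background_mean):
    """Contrast relative to the background brightness."""
    return (lesion_mean - background_mean) / background_mean

def cnr(lesion_mean, background_mean, background_std):
    """Contrast-to-noise ratio: difference in means over background noise."""
    return (lesion_mean - background_mean) / background_std

print(contrast(40.0, 80.0))   # -0.5
print(cnr(40.0, 80.0, 10.0))  # -4.0
```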
What is contrast partly a function of?
Gain, frequency, machine settings, etc.
What is contrast limited by?
Noise: electronic and acoustic (speckle, sidelobe and grating lobe artefacts)
Speckle
Small scale fluctuations in brightness that are characteristic of ultrasound images
Where does speckle arise from
the interference of echoes from small scatterers in tissue
How is speckle pattern appearance related to beam width and ultrasound frequency?
smaller speckle in focal zones, streaks at larger depths
How to reduce speckle?
angle and frequency compounding
use of higher frequencies - finer speckle pattern
Relationship between resolution and imaging frequency
shorter pulse length = better axial resolution
narrower beam/focal region = better lateral resolution
What is a downside of increasing imaging frequency?
absorption increases
due to the absorption of sound, penetration depth is reduced
What is the chosen imaging frequency a trade-off between?
- the improvement of the lateral/axial resolutions
- the decrease in the penetration of the ultrasound
What is the general imaging frequency of superficial structures?
Superficial structures are imaged with higher frequencies, with better resolution as the penetration ability of the ultrasound doesn’t need to be very high
What is the chosen imaging frequency for imaging deeper structures generally?
Imaged with lower frequencies at a lower resolution