Advanced Optical Microscopy Flashcards
Describe the steps of image formation (on the observer’s retina or on a CCD camera sensor) in a transmission optical microscope.
In a transmission optical microscope, the light coming from the illumination source passes through the condenser and reaches the object. Part of the light emerges from the object as diffracted light, which propagates in every direction. This diffracted light enters the objective (which is a lens) and converges on the focal plane on the other side of the lens: the objective projects a magnified image of the object known as the "real image". The eyepiece further magnifies the projected image and creates the so-called "virtual image". The rays of light entering the eye arrive at a particular angle, and the image is formed on the retina, which lines the back wall of the eye. The image formed there is inverted; the brain then interprets it and re-inverts it. Thanks to the angles at which the rays arrive, we can perceive the height and depth of the object we are observing.
(CCD stands for Charge-Coupled Device. It is an image sensor that measures light intensity and converts it into an electric signal. It is a rectangular grid, a matrix of cells, each containing a sensor sensitive to the intensity of the incoming photons. The light that emerges from the object enters the camera and is recorded, cell by cell, by this sensor.)
Describe what is meant by numerical aperture, its range of values, and how it is related to image formation.
Light comes from the object toward the objective in all directions, and the objective can accept only the rays within a certain range of angles, which form a cone of light. The Numerical Aperture (NA) determines how wide this cone of light is.
The light that reaches the objective from the object converges to a spot that is not a geometrical point: it has a minimum size and a characteristic shape called the Airy disk.
The minimum size of this spot, i.e. the thickness of the Airy disk, depends on the NA: the larger the NA, the smaller the minimum size and the thinner the Airy disk.
The numerical aperture is one of the properties of the lenses and the objective. It is printed on the objective together with the magnification value and other information. NA usually has values between 0.1 and 1.7. It is the product of the refractive index (n) of the medium between the specimen and the objective lens (which can be air, water, or oil) and the sine of the half-angle (α) of the cone of rays collected by the objective: NA = n·sin α. The rays from a light source go in all directions and some of them enter the objective, forming a cone; the NA tells how wide the cone of light that the objective can gather is. The NA controls the maximum resolution of the microscope and thus how effectively the image can be magnified. The larger the NA, the higher the resolving power of the objective. This is because higher values of NA allow increasingly oblique rays to enter the objective front lens, which produces a more highly resolved image and allows smaller structures to be visualized. Moreover, the higher the NA, the brighter the acquired image.
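A minimal numerical sketch of NA = n·sin α, showing why immersion media raise the NA; the 67° half-angle is an illustrative assumption, not the value of a real objective.

```python
from math import sin, radians

# Sketch of NA = n * sin(alpha) for three common immersion media.
# The 67-degree half-angle is an assumption, not a value from a specific objective.
alpha = radians(67)
media = {"air": 1.00, "water": 1.33, "oil": 1.515}

for name, n in media.items():
    NA = n * sin(alpha)
    print(f"{name}: NA = {NA:.2f}")
# air: NA = 0.92, water: NA = 1.22, oil: NA = 1.39
```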
Describe what is the size and shape of the image (on the image plane of a microscope) of a point-like source of light.
The Point Spread Function (PSF) is the shape of the image of a bright point, that is, its intensity profile as you move across the image plane.
The PSF has the shape of an Airy disk in an ideal setup. The Airy disk is an idealization that cannot be measured directly, while the PSF is what can actually be measured.
The radius of the disk is determined by the numerical aperture (NA): the larger the NA, the thinner the Airy disk, i.e. the smaller the radius.
————
The image of a point is not a geometrical point because it has a finite size. The size depends on the numerical aperture (NA), the magnification (M), and the wavelength (λ), as shown in the formula s = w × M, with w = 2λ/(π·NA). The image of a point has a particular shape: a central peak surrounded by blurred concentric rings called Airy patterns; most of the light (at least 85%) is in the central part, while the remaining 15% is in the rings. This structure is known as the Airy disk. The radius of the disk is determined by the numerical aperture; thus, the resolving power of an objective lens can be evaluated by measuring the size of the Airy disk. So, the Airy disk can be considered the ideal shape of the image of a point, while the point spread function (PSF) is what can actually be measured. The point spread function is the shape of the image of a bright point, that is, its intensity profile as you move across the image plane.
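A minimal sketch of the size formula above, combining s = w × M with the resolution formula w = 2λ/(π·NA) written later in these cards; the wavelength, NA, and magnification are illustrative assumptions.

```python
from math import pi

# Sketch of s = w * M with w = 2*lambda/(pi*NA); all the values below are assumptions.
wavelength_nm = 500
NA = 1.4
M = 60                                 # e.g. a 60x objective

w_nm = 2 * wavelength_nm / (pi * NA)   # resolution in the object (~227 nm)
s_um = w_nm * M / 1000                 # size of the PSF on the image plane, in micrometres

print(f"w = {w_nm:.0f} nm in the object, s = {s_um:.1f} um on the image plane")  # ~13.6 um
```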
Describe what are the connections, if any, between magnification, resolution, and pixel size.
Resolution and magnification are not the same thing.
Magnification is how big you make an image; resolution is how small a structure on the object you can distinguish.
If you have high magnification without good resolution, you see the cells very big but blurry: you cannot distinguish cellular compartments, or perhaps even two different cells.
If you have good resolution without magnification, you see the cells in more detail, but they are very small.
Thus, the ideal setup balances magnification and resolution.
In a camera there are pixels, which are area elements that hold values of brightness or colour.
The pixel size must be matched to the resolution.
Resolution and magnification are two different, independent aspects; however, they need to be linked together. Magnification is the process of enlarging something (like an image), while resolution is the ability to distinguish two close points on a specimen as two different entities. A large image doesn't mean a better resolution: to work well we should avoid large unfocused images, and also images that are full of details but too small to be analyzed. So, the best situation is to have an image that is neither too small nor too big and that doesn't lose detail.
In scanning microscopy there is also a connection between pixel size (or voxel size, in the case of 3D images) and resolution, because the space is intrinsically divided into positions. The pixels (also called "picture elements") are just squares holding a value, so two resolvable points must fall on different pixels to remain distinguishable, and this requires an adequate magnification (the pixel size, referred back to the object, must not exceed the resolution).
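A minimal sketch of this link between pixel size, magnification, and resolution, assuming the resolution formula w = 2λ/(π·NA) used later in these cards and roughly two pixels per resolved distance (Nyquist-style sampling); the camera pixel pitch and objective parameters are illustrative assumptions.

```python
from math import pi

# Sketch of matching pixel size to resolution (about two pixels per resolved distance).
# Camera pixel pitch and objective parameters are assumptions.
wavelength_nm = 500
NA = 1.4
M = 60                                            # objective magnification
camera_pixel_um = 6.5                             # physical pixel pitch on the sensor

w_nm = 2 * wavelength_nm / (pi * NA)              # resolution in the object (~227 nm)
pixel_in_object_nm = camera_pixel_um * 1000 / M   # pixel size referred back to the object

print(f"resolution in the object: {w_nm:.0f} nm")
print(f"pixel size in the object: {pixel_in_object_nm:.0f} nm")
print("sampling is adequate" if pixel_in_object_nm <= w_nm / 2 else "undersampled: increase the magnification")
```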
Describe what is the “point spread function” and how it is related to the property of the microscope.
The Point Spread Function (PSF) is the shape of the image of a bright point, that is, its intensity profile as you move across the image plane.
The PSF has the shape of an Airy disk in an ideal setup.
Measuring the PSF is an effective way of characterizing the optical system: it can be used to determine the effective numerical aperture (NA) of the objective and to assess the quality of the alignment and the symmetry of the optics.
NA determines the radius of the disk.
The point spread function (PSF) is the shape of the image of a bright point, that is, its intensity profile as you move across the image plane. It can have the shape of an Airy disk, which is the idealized shape in a perfect setup and cannot be measured directly. The Airy disk has a particular shape: a central peak (where at least 85% of the light is concentrated) surrounded by blurred concentric rings called Airy patterns. The point spread function is the real shape that can be measured. The size of the PSF can be expressed either as the size of the image or as the resolution in the object: the size of the point spread function (s) is given by the resolution (w) times the magnification (M), s = w × M. This relation is used to evaluate the resolution in the object, w = s/M.
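A minimal sketch of using a measured PSF to characterize the objective: from a measured PSF size s, recover w = s/M and the effective NA implied by w = 2λ/(π·NA); the measured size, wavelength, and magnification are illustrative assumptions.

```python
from math import pi

# Sketch: from a measured PSF size s on the camera, recover w = s / M and the
# effective NA implied by w = 2*lambda/(pi*NA). The measured size is an assumption.
wavelength_nm = 500
M = 60
s_um_measured = 15.0                       # measured PSF size on the camera, in micrometres

w_nm = s_um_measured * 1000 / M            # resolution in the object: w = s / M
NA_eff = 2 * wavelength_nm / (pi * w_nm)   # effective NA of the objective

print(f"w = {w_nm:.0f} nm, effective NA = {NA_eff:.2f}")
# an effective NA well below the nominal one suggests misalignment or aberrations
```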
Describe what it is meant by the Fourier component of an image, and what are the parameters that describe each Fourier component of a 2D image.
The Fourier transform (FT) is the procedure that determines the frequencies contained within a function (for an image, the spatial frequencies it contains). The FT expresses the function as a summation of sines and cosines, i.e. a summation of periodical/sinusoidal patterns.
Each sinusoidal wave can be thought of as a combination of periodicities along the x and y axes.
The parameters that describe each Fourier component of a 2D image are its orientation (the direction of the lines) and its spatial frequency, equivalently expressed as the spatial frequencies along the x and y axes.
———————-
The image is formed by a sum of spatial frequencies known as Fourier components. The procedure that determines the frequencies contained in the image is known as the Fourier transform. This function is expressed as a summation of sines and cosines: every pattern in two dimensions can be written as a summation of periodical/sinusoidal lines. The parameters needed to describe each Fourier component are the spatial frequencies along the X and Y axes of the plane, which together give the direction and the spacing of the lines. So, every wave is represented by one point on the Fourier plane, and a sinusoidal pattern is made by two symmetric points. The periodical components and the image contain the same information: the Fourier space expresses the brightness of each periodical component, while the image expresses the brightness of each pixel. You can therefore calculate back and forth between the image and its periodical components.
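A minimal sketch of this idea, assuming numpy is available: an image made of a single sinusoidal component has a 2D Fourier transform concentrated in two symmetric points; the image size and frequencies are illustrative assumptions.

```python
import numpy as np

# Sketch: an image made of one sinusoidal component has a 2D Fourier transform
# concentrated in two symmetric points. Image size and frequencies are assumptions.
N = 128
y, x = np.mgrid[0:N, 0:N]
fx, fy = 5, 3                                     # cycles across the image along x and y
image = np.cos(2 * np.pi * (fx * x + fy * y) / N)

spectrum = np.fft.fftshift(np.fft.fft2(image))    # Fourier components, centred on zero frequency
power = np.abs(spectrum)

# the two strongest components sit at (+fx, +fy) and (-fx, -fy) from the centre
iy, ix = np.unravel_index(np.argsort(power, axis=None)[-2:], power.shape)
peaks = [(int(a) - N // 2, int(b) - N // 2) for a, b in zip(ix, iy)]
print("peaks at (kx, ky):", peaks)                # [(5, 3), (-5, -3)] in some order
```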
Describe the relationship between the Fourier component of an image and the resolution of the microscope used to acquire it.
The image in the microscope is formed by the sum of spatial frequencies, also known as Fourier components. The resolution of the microscope defines which spatial frequencies, i.e. which Fourier components, can pass through the lens and be part of the image. The numerical aperture cuts away part of the Fourier components of the image and leaves only the central part of the Fourier transform. A high spatial frequency is diffracted at a large angle and misses the lens; these components are lost, so there is a loss of resolution and the image becomes blurry. On the contrary, a small spatial frequency is diffracted at an angle that passes through the lens and becomes part of the image. So, the lens can capture only some of the periodical components: the image contains the same information as before, but without the finest components.
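A minimal sketch of the lens as a low-pass filter in Fourier space, assuming numpy is available: keep only the Fourier components inside a circular cut-off (playing the role of the NA) and transform back; the test image and the cut-off radius are illustrative assumptions.

```python
import numpy as np

# Sketch: the lens as a low-pass filter in Fourier space. Keep only the components
# inside a circular cut-off and transform back. Test image and cut-off are assumptions.
N = 256
rng = np.random.default_rng(0)
image = rng.random((N, N))                        # stand-in for a detailed sample image

spectrum = np.fft.fftshift(np.fft.fft2(image))    # all the Fourier components of the image

ky, kx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
cutoff = 20                                       # radius of the components that still enter the lens
mask = kx**2 + ky**2 <= cutoff**2                 # circular aperture in Fourier space

blurred = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
print(f"fraction of components kept: {mask.mean():.3f}")  # the finest details are lost: the image is blurred
```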
Discuss the pros and cons of confocal microscopy over conventional fluorescence microscopy. Is there a gain in resolution?
There is a gain in resolution in confocal microscopy along the z-axis: with a conventional microscope we have resolution in the x-y plane but not along z, whereas the confocal microscope also gives z resolution (optical sectioning). In the x-y plane the resolution is essentially the same, but the image appears sharper because, unlike transmission or widefield microscopy, confocal microscopy illuminates the sample point by point and rejects the out-of-focus light.
Advantages:
- 3D imaging: the ability to create a sharp optical section makes it possible to build a 3D image of the specimen, because the computer builds a digital 3D image by combining the 2D optical sections.
- resolution close to the theoretical limit: the pinhole allows 3D imaging with a resolution close to the theoretical limit (however, the x-y resolution is the same as in transmission and fluorescence microscopy).
- selectivity of illumination: the laser is focused on a specific part of the sample, so even if there is photobleaching along the illuminated z-axis, there is no bleaching in the areas we are not looking at.
- improvement of the signal-to-noise ratio: elimination or reduction of the background signal coming from outside the focal plane (which would otherwise degrade the image).
Disadvantages:
-higher cost
-time-consuming
——————————————–
Confocal microscopy has several advantages: first, it allows three-dimensional imaging; it also makes it possible to be more selective about which region of interest is illuminated and how it is illuminated, especially in monobeam confocal microscopy. Scanning techniques illuminate just one point at a time rather than the whole sample. Moreover, it reaches diffraction-limited resolution thanks to the use of a laser as the illumination source. Regarding the disadvantages, confocal microscopy is more expensive (€250-50k) than conventional fluorescence microscopy because of the more sophisticated technology required to scan, control the movements, record in synchrony, and elaborate the data. The resolution (w) in the X-Y plane of the confocal microscope is the same as in conventional fluorescence microscopy, so the formula is still w = 2λ/(π·NA). The Z resolution, instead, is equal to the length of the neck of the hourglass-shaped beam of light, z = πλ/(2·NA²).
Describe how a monobeam confocal microscope obtains three-dimensional images.
In laser scanning confocal microscopy (LSCM) the laser produces and focuses only a single beam of light at one narrow depth level at a time, which is why it is called monobeam confocal microscopy. The beam is scanned across the sample in the horizontal plane using oscillating mirrors, and the emitted light is focused onto a pinhole, placed at an optically conjugate plane in front of the detector, which eliminates the out-of-focus signal. The light is then detected by a photomultiplier tube (PMT), which translates the light into current: every photon absorbed frees an electron in the system, which becomes ready to move. This very sensitive detector takes about 1 µs per point, thus resulting in 1-5 frames per second. The instrument does not form an optical (real) image: to build an image, the sampling spot must be moved through the specimen (across the plane and then at successive depths) and the resulting signal collected and stored.
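A minimal back-of-the-envelope sketch of why point-by-point scanning is slow, assuming a 512 × 512 frame (an illustrative assumption) and the ~1 µs per point quoted above.

```python
# Sketch: why point-by-point scanning is slow. The frame size is an assumption,
# the ~1 us dwell time per point is the figure quoted above.
dwell_time_s = 1e-6             # ~1 us per point (PMT detection)
pixels_per_frame = 512 * 512    # assumed frame size

frame_time_s = dwell_time_s * pixels_per_frame
print(f"time per frame: {frame_time_s:.2f} s -> about {1 / frame_time_s:.0f} frames per second")
# ~0.26 s per frame, i.e. a few frames per second, consistent with the 1-5 fps quoted above
```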
Describe how a multibeam confocal microscope obtains three-dimensional images.
The multibeam technology is inspired by the Nipkow disk, one of the first devices used to produce a moving image.
The Nipkow disk is a rotating disk with holes: one side of the disk is dark, while there is a lamp on the other side. When the disk rotates, you see light coming through the holes, and if you rotate the disk fast enough, you can't distinguish the individual holes; instead, you see movement.
Multibeam confocal microscopy was inspired by the Nipkow disk, a spinning disk with a set of equally spaced holes of equal diameter. In this type of microscopy, the laser produces a beam of light that hits a spinning disk (rotating at up to 5-10k rpm) carrying holes of fixed size, each with a lens inside. The light that goes through illuminates a few lenses, which focus the beam down onto the focal plane of a second disk connected to the first one. The light that goes through the holes of the second disk spreads out and is focused down again onto the sample by the objective. The sample contains fluorophores that, once excited, emit fluorescent light in all directions. If this light comes from the central part of the various beams, it goes through the objective and then exactly into the pinhole of the second disk. A semi-reflecting mirror then completely reflects one wavelength (the emission light) and lets the other wavelength (the excitation light) go through. In this way, an image of the pinholes is formed on a CCD camera, a detector that translates light into current. It is important to stress that the light coming from the sample can come from the beam waist (where the light is more intense and focused) or from above or below it, but only the light coming from the beam waist forms an image exactly on the pinhole of the second disk and goes through. You can collect up to 1000 frames per second.
Describe the resolution of a confocal microscope.
The resolution of the confocal microscope is the minimum distance between two points in the object that can be distinguished, which is the same as the minimum size of the spot that can be illuminated.
The resolution of the confocal microscope (w) in the x-y plane is the same as in the other microscopes, while z is the resolution in the third dimension and it is equal to the length of the neck of the hourglass-shaped beam of light.
w = 2λ/(π·NA)
z = πλ/(2·NA²)
λ: wavelength
NA: numerical aperture
—————————————–
The resolution of the confocal microscope (w) in the x-y plane is the same as in the previous microscopes, and z is the resolution in the third dimension. As in wide-field microscopy, the resolution at the focal plane is determined by the diameter of the Airy disk. Instrument parameters such as scan rate and pinhole diameter can be set to achieve the maximum resolution. I start with a point light source and make an image of it in the sample: this defines the area that I am illuminating, which has the shape of an hourglass. Then I take the center of the hourglass, where the light is tightest, and make an image of it on the second pinhole. Only the light that comes from this part forms an image there; if it comes from a different position, it does not. So I am selecting both in x-y and in z. If the sample emits little light, the pinhole can be opened to more than one Airy diameter to let more light reach the detector. In this case the resolution will decrease (slightly), which is the price of being able to image the sample at all.
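A minimal numerical sketch of the two resolution formulas above; the wavelength and NA are illustrative assumptions.

```python
from math import pi

# Sketch of the two formulas above; wavelength and NA are assumptions.
wavelength_nm = 500
NA = 1.4

w = 2 * wavelength_nm / (pi * NA)       # lateral (x-y) resolution
z = pi * wavelength_nm / (2 * NA**2)    # axial (z) resolution: the neck of the hourglass

print(f"w = {w:.0f} nm, z = {z:.0f} nm")  # ~227 nm laterally, ~401 nm axially
```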
Describe the difference in the structure of monobeam vs. multibeam confocal microscopes.
Monobeam:
- LSCM (Laser Scanning Confocal Microscope)
- Detector: PMT
- Slow: one point detected at a time (about 1 µs per point, i.e. 1-5 frames per second)
Multibeam:
- Spinning disk
- Detector: CCD
- Fast: one detection for every XY scan (the disk can rotate at up to 5-10k rpm, giving up to 1000 frames per second)
In monobeam, you have a single beam and total control of what you are illuminating, but you go slower.
The monobeam instrument is a laser scanning confocal microscope. The light source is a laser that produces a single beam. There are also the microscope optical system and a scanning system. The photomultiplier tube is the sensitive light detector that translates the light into an electrical signal. Finally, a computer translates the electrical signals into the image. (The objective is above the sample.)
In the multibeam microscope, a beam of light hits a disk with many holes, each hole containing a lens. The light that passes through is focused down onto a second disk. Then the objective focuses the beam down again onto the sample. There is also a semi-reflecting mirror that reflects the emission light, and finally the CCD, the detector that translates light into current. With the monobeam, I have a single beam and total control of what I am illuminating, but it is slower. With the multibeam, you can go as fast as you want, but the limitation is that if you go too fast, too little light is collected for every position and you don't see enough. So it is always a balance: slow enough to collect enough light, but not so slow that you burn the sample; you have to find the right working conditions. Both the PMT and the CCD are devices that translate light into current: in both cases, every photon that arrives frees an electron in the system.
Describe the working principle of light-sheet microscopy.
The light beam is focused into a thin sheet and the sample is illuminated from the side, perpendicularly to the detection axis. Since the illumination is perpendicular to the detection, you only see the layer of the sample that the light sheet passes through.
The laser emits the light, mirrors steer it, and the light is then focused into the sheet.
————————————
Light-sheet microscopy is a fluorescence microscopy technique in which the light illuminating the sample is focused into a very thin sheet. The illumination axis is perpendicular to the detection axis. Since the light has the shape of a thin sheet, it illuminates only a thin volume of the sample lying in the focal plane of the detection objective, thus reducing photobleaching compared to confocal microscopy. By scanning the illumination we can form a 3D image. Since the resolution in z is given by the thickness of the light sheet, we stay within the limits of conventional diffraction-limited resolution (a few microns). The light emitted by the excited sample is collected by CCD cameras.
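A minimal sketch of how the 3D image is built by scanning the illumination plane, assuming numpy is available; acquire_plane() is a hypothetical stand-in for the camera readout, and the step size and frame size are illustrative assumptions.

```python
import numpy as np

# Sketch: the thin sheet selects one z-plane at a time, the camera records a 2D
# image of it, and the planes are stacked into a 3D image. acquire_plane() is a
# hypothetical stand-in for the camera readout; step and frame size are assumptions.
def acquire_plane(z_um: float) -> np.ndarray:
    """Hypothetical placeholder returning the 2D camera image of the plane at depth z."""
    return np.zeros((512, 512))

z_positions_um = np.arange(0.0, 50.0, 2.0)        # move the light sheet (or the sample) in 2 um steps
stack = np.stack([acquire_plane(z) for z in z_positions_um])

print("3D stack shape (z, y, x):", stack.shape)   # (25, 512, 512)
```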
Describe what is commonly done, in terms of analysis, when a sample is marked by two or three different fluorophores which have some degree of overlap in the emission spectra.
When there is a degree of crossover between the emission spectra of different fluorophores, we have to rely on spectral imaging (the acquisition technique) coupled with linear unmixing (the analysis). Linear unmixing compares the measured summed spectrum to all possible combinations of the spectra of the fluorophores involved. The software finds the correct spectral contribution of each fluorophore according to a best-fit criterion (based on fitting algorithms) and separates the lambda stack into a different image for each fluorophore.
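A minimal sketch of the linear unmixing step, assuming numpy is available: the measured spectrum is modelled as a weighted sum of the reference spectra and the weights are recovered by least squares; the spectra below are made-up illustrative curves, not real fluorophore data.

```python
import numpy as np

# Sketch of linear unmixing: the measured spectrum in each pixel is modelled as a
# weighted sum of the reference spectra of the fluorophores, and the weights are
# recovered by least squares. The spectra below are made-up curves, not real data.
channels = np.linspace(500, 600, 32)                    # detection wavelengths (nm)
ref_a = np.exp(-((channels - 520) / 15) ** 2)           # reference spectrum of fluorophore A
ref_b = np.exp(-((channels - 560) / 20) ** 2)           # reference spectrum of fluorophore B (overlaps A)
references = np.column_stack([ref_a, ref_b])            # shape (32, 2)

true_weights = np.array([0.7, 0.3])                     # contributions to be unmixed
measured = references @ true_weights + 0.01 * np.random.default_rng(1).normal(size=32)

weights, *_ = np.linalg.lstsq(references, measured, rcond=None)
print("estimated contributions:", np.round(weights, 2))  # close to [0.7, 0.3]
```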
Describe the principles of SNOM microscopy.
SNOM: Scanning Near-field Optical Microscopy (or NSOM: Near-field Scanning Optical Microscopy)
When the light is emitted by a very small object, some of its details do not propagate: they stay confined just around the object. What propagates to the far field carries only details on the scale of the wavelength. So, instead of capturing the light that propagates, we capture the local electric field near the object.
The image is obtained by attributing the measured intensity to the position of the fiber that collects it.
Scanning near-field optical microscopy is a technique used to break the diffraction limit. The light from a laser is confined by a tiny aperture (50-100 nm) at the end of a fiber probe. This creates an evanescent wave in the near field, which becomes broader in the far field. If we place the sample in the near-field zone, the resolution we can reach depends on the aperture and not on the wavelength of the light. In the near field there is no diffraction, so we are not bound to the diffraction limit; this is also why we are NOT forming an image. The correct positioning of the probe over the sample is achieved using a feedback mechanism based on the contact between the probe and the sample. A computer then puts together the measured intensity and the position where it was collected.
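A minimal sketch of how the image is assembled from the scan, assuming numpy is available; measure_intensity() is a hypothetical stand-in for the detector readout, and the number of scan positions is an illustrative assumption.

```python
import numpy as np

# Sketch of how the SNOM image is assembled: the fiber probe is raster scanned over
# the sample and the intensity measured at each position is written into the pixel
# corresponding to that position. measure_intensity() is a hypothetical stand-in
# for the detector readout; the number of scan positions is assumed.
def measure_intensity(ix: int, iy: int) -> float:
    """Hypothetical placeholder for the intensity collected at scan position (ix, iy)."""
    return 0.0

nx, ny = 256, 256
image = np.zeros((ny, nx))

for iy in range(ny):
    for ix in range(nx):
        image[iy, ix] = measure_intensity(ix, iy)   # attribute the intensity to the probe position

print("image shape:", image.shape)
```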