Advanced Optical Microscopy Flashcards

1
Q

Describe the steps of image formation (on the observer’s retina or on a CCD camera sensor) in a transmission optical microscope.

A

In the optical microscope, light coming from an illumination source passes through the condenser and arrives at the object. Part of the light emerges from the object and spreads in every direction. This diffracted light enters the objective (which is a lens) and converges on the focal plane on the other side of the lens. The objective projects a magnified image of the object, known as the “real image”. The eyepiece further magnifies the projected image and creates the so-called “virtual image”. The rays of light, when entering the eye, form a particular angle, and the image is formed on the retina, the back wall of the eye. The image that is formed is inverted; it is then interpreted by the brain and re-inverted. Thanks to the angle under which the rays arrive, we can perceive the height and depth of the object we are seeing.

(CCD stands for Charge-Coupled Device. It is an image sensor that measures light values and converts them into an electrical signal. It is a rectangular grid, a matrix of cells, each containing a sensor sensitive to the intensity of the incoming photons. Light striking the object is reflected or transmitted by it and is then allowed to enter the camera, where it falls on this grid.)

2
Q

Describe what is meant by numerical aperture, its range of values, and how it is related to image formation.

A

Light comes from all directions into the objective, but the objective can accept only the rays within a certain range of angles, which together form a cone of light. The numerical aperture (NA) determines how wide this cone of light is.

Light coming from the object into the objective converges on a spot that is not a geometrical point: it has a minimum size and a characteristic shape, called the Airy disk.

The minimum size of this spot, and thus the width of the Airy disk, depends on the NA: the larger the NA, the smaller the minimum size and the thinner the Airy disk.

The numerical aperture is one of the properties of the lenses and of the objective. It is printed on the objective together with the magnification value and other information. NA usually has values between 0.1 and 1.7. It is the product of the refractive index (n) of the medium between the specimen and the objective lens, which can be water, air, or oil, and the sine of the half-angle (α) formed by the light rays that are collected by the objective. The rays from a light source go in all directions and some of them enter the objective, forming a cone: NA tells how wide the cone of light that the objective can gather is. The NA controls the maximum resolution of the microscope and thus how useful its magnification really is. The larger the NA, the higher the resolving power of the objective. This occurs because higher values of NA permit increasingly oblique rays to enter the objective front lens, which produces a more highly resolved image and allows smaller structures to be visualized. Moreover, the higher the NA, the brighter the acquired image.
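A quick worked example (numbers chosen for illustration, not taken from the card): for an oil-immersion objective with n ≈ 1.515 and a collection half-angle α ≈ 67°, NA = n × sin(α) ≈ 1.515 × 0.92 ≈ 1.4; a dry objective (n = 1.0) can therefore never exceed NA = 1.0, which is why the highest-NA objectives use oil (or water) as the immersion medium.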

3
Q

Describe what is the size and shape of the image (on the image plane of a microscope) of a point-like source of light.

A

The Point Spread Function (PSF) is the shape of the image of a bright point, that is, its intensity profile moving across the image along the image plane.
In an ideal setup the PSF has the shape of an Airy disk; the Airy disk is an idealization that cannot be measured, while the PSF is what can actually be measured.
The radius of the disk is determined by the numerical aperture (NA): the larger the NA, the thinner the Airy disk, i.e. the smaller its radius.
————
The image of a point is not a geometrical point, because it has a size. The size depends on the numerical aperture (NA), the magnification (M), and the wavelength (λ): the spot in the image plane has size s = w × M, where w is the diffraction-limited resolution in the object plane (see cards 5 and 11: s = w × M, w = 2λ/πNA). The image of a point also has a particular shape: it is like a peak surrounded by blurred concentric rings, called the Airy pattern; most of the light (at least 85%) is in the central part, while the remaining 15% is in the rings. This structure is known as the Airy disk. The radius of the disk is determined by the numerical aperture; thus, the resolving power of an objective lens can be evaluated by measuring the size of the Airy disk. So, the Airy disk can be considered the ideal shape of the image of a point; the point spread function (PSF), on the other hand, is what can actually be measured. The point spread function is the shape of the image of a bright point, that is, its intensity profile when moving across the image along the image plane.
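A hedged worked example using the formulas that appear later in this deck (s = w × M in card 5, w = 2λ/πNA in card 11), with illustrative numbers: for λ = 500 nm and NA = 1.4, w = 2 × 500 / (π × 1.4) ≈ 230 nm in the object plane; with a 100× objective, the spot projected on the image plane has size s ≈ 230 nm × 100 ≈ 23 µm.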

4
Q

Describe what are the connections, if any, between magnification, resolution, and pixel size.

A

Resolution and magnification are not the same thing.

Magnification is how big you make an image; resolution is how small a structure on the object you can still distinguish.

If you have high magnification without good resolution, you see, for example, the cells very big but blurry: you cannot distinguish cellular compartments, or maybe two different cells.
If you have good resolution without magnification, you see the cells in more detail but very small.
Thus, the ideal setup balances magnification and resolution.

In a camera there are pixels, and pixels are area elements that hold values of brightness or color.

The smallest resolved distance in the image corresponds to a certain number of pixels, so pixel size must be matched to the resolution.

Resolution and magnification are two different aspects that are independent of each other; however, they need to be linked together. Magnification is the process of enlarging something (like an image); resolution, instead, is the ability to distinguish two close points on a specimen as two different entities. A large image doesn’t mean a better resolution: to work well we should avoid large unfocused images, but also images that are full of details yet too small to be analyzed. So, the best situation is to have an image that is neither too small nor too big and that isn’t characterized by a loss of details.
In scanning microscopy there is also a connection between pixel size (or voxel size, in the case of 3D images) and resolution, as space is intrinsically divided into positions. The smallest resolved distance corresponds to a certain number of pixels (also called “picture elements”), and pixels are just squares holding a value, so a certain magnification is needed for the resolved details to span enough pixels to be distinguished (see the sketch below).
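A minimal sketch of how pixel size and magnification can be matched to the optical resolution. It assumes the common rule of thumb of at least two pixels per resolved distance (Nyquist-type sampling); the rule, the resolution formula w = 2λ/πNA from card 11, and all numbers are illustrative, not from the card:

    import math

    wavelength_nm = 500.0    # illustrative emission wavelength
    NA = 1.4                 # numerical aperture of the objective
    magnification = 60.0     # objective magnification
    camera_pixel_um = 6.5    # physical pixel size of the camera (illustrative)

    # Diffraction-limited resolution in the object plane (formula used later in this deck)
    w_nm = 2 * wavelength_nm / (math.pi * NA)

    # Size of that resolved distance once projected onto the camera
    w_on_camera_um = w_nm * magnification / 1000.0

    # Rule of thumb: sample each resolved distance with at least two pixels
    max_useful_pixel_um = w_on_camera_um / 2.0

    print(f"resolution in the object plane: {w_nm:.0f} nm")
    print(f"projected onto the camera:      {w_on_camera_um:.2f} um")
    print(f"max useful camera pixel size:   {max_useful_pixel_um:.2f} um "
          f"({'adequate' if camera_pixel_um <= max_useful_pixel_um else 'undersampled'} "
          f"with {camera_pixel_um} um pixels)")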

5
Q

Describe what is the “point spread function” and how it is related to the property of the microscope.

A

The Point Spread Function (PSF) is the shape of the image of a bright point, that is its profile moving across the image along the image plane.

PSF can have an airy disk shape in an ideal setup.

Measuring the PSF is an effective way to characterize the optical system: it can be used to determine the effective numerical aperture (NA) of the objective and to evaluate the quality of the alignment and the symmetry of the optics.

NA determines the radius of the disk.

The point spread function (PSF) is the shape of the image of a bright point, that is, its intensity profile when moving across the image along the image plane. It can have the shape of an Airy disk, which is the idealized shape obtained in a perfect setup and which cannot itself be measured. The Airy disk has a particular shape: in the central part it is like a peak (where at least 85% of the light is concentrated) surrounded by blurred concentric rings, called the Airy pattern. The point spread function is the real shape that can be measured. The size of the PSF can be expressed either as a size in the image plane or as a resolution in the object plane. The size of the point spread function (s) is given by the resolution (w) times the magnification (M): s = w × M. This relation is used to evaluate the resolution in the object (w = s/M).

6
Q

Describe what it is meant by the Fourier component of an image, and what are the parameters that describe each Fourier component of a 2D image.

A

The Fourier transform (FT) is the procedure used to determine the frequencies contained within a function (the spatial frequencies within an image). The FT expresses the image as a summation of sines and cosines, i.e. a summation of periodical/sinusoidal line patterns.
Each sinusoidal wave can be thought of as a combination of a periodicity along the x-axis and one along the y-axis.
The parameters needed to describe each Fourier component of a 2D image are its spatial frequencies along the x and y axes, which together fix the spacing and the orientation (direction) of the lines, plus the brightness (amplitude) of that component.
———————-
The image is formed by the sum of spatial frequencies, known as Fourier components. The procedure used to determine the frequencies contained in the image, treated as a function, is known as the Fourier transform. The function is expressed as a summation of sines and cosines: every pattern in two dimensions can be written as a summation of periodical/sinusoidal lines. The parameters needed to describe each Fourier component are the spatial frequencies along the X and Y axes of the plane, which also give the direction of the lines. So, every wave is represented by one point in the Fourier plane, and a real sinusoidal pattern is made of two symmetric points. The periodical components and the image contain the same information: the Fourier space expresses the information as brightness per periodical component, while the image expresses the information as brightness per pixel. One can therefore calculate back and forth between the image and its periodical components.
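A minimal numerical sketch of this idea (the pattern and all numbers are illustrative): an image made of a single sinusoidal stripe pattern shows up in the 2D Fourier transform as a pair of symmetric peaks whose position encodes the spatial frequency along x and y, and hence the spacing and orientation of the stripes.

    import numpy as np

    N = 256
    y, x = np.mgrid[0:N, 0:N]

    # One periodical component: stripes with 8 cycles along x and 3 cycles along y
    kx, ky = 8, 3
    image = np.cos(2 * np.pi * (kx * x + ky * y) / N)

    # 2D Fourier transform: each peak position gives (frequency along y, frequency along x)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    print("peak at (ky, kx) =", (peak[0] - N // 2, peak[1] - N // 2))  # (3, 8) or (-3, -8)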

7
Q

Describe the relationship between the Fourier component of an image and the resolution of the microscope used to acquire it.

A

The image in the microscope is formed by the sum of spatial frequencies, also known as Fourier components. The resolution of the microscope defines which spatial frequencies, i.e. which Fourier components, can pass through the lens and be part of the image. It is the numerical aperture that cuts away part of the periodical (Fourier) components of the image and leaves only the central part of the Fourier transform. A high spatial frequency is diffracted at a large angle and misses the lens: these components are lost, so there is a loss of resolution and the image becomes blurry. On the contrary, a small spatial frequency is diffracted at an angle that passes through the lens and becomes part of the image. So, the lens can capture only some of the periodical components: the image will contain the same information as before, but without the finest components.
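A minimal sketch of this low-pass effect, assuming a simple circular cutoff in Fourier space as a stand-in for the objective’s NA (the object and the cutoff radius are illustrative):

    import numpy as np

    N = 256
    image = np.zeros((N, N))
    image[100:156, 100:156] = 1.0     # a sharp-edged square as the "object"

    # Fourier transform of the image
    ft = np.fft.fftshift(np.fft.fft2(image))

    # Keep only the spatial frequencies below a cutoff radius (mimicking the NA of the lens)
    yy, xx = np.mgrid[0:N, 0:N]
    cutoff = 10                        # smaller cutoff -> more fine detail lost -> blurrier image
    mask = (yy - N // 2) ** 2 + (xx - N // 2) ** 2 <= cutoff ** 2

    # Back-transform: the result is the same object, minus its finest (high-frequency) details
    blurred = np.real(np.fft.ifft2(np.fft.ifftshift(ft * mask)))
    print("original values range :", image.min(), "to", image.max())
    print("filtered values range :", round(blurred.min(), 3), "to", round(blurred.max(), 3))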

8
Q

Discuss the pros and cons of confocal microscopy over conventional fluorescence microscopy. Is there a gain in resolution?

A

There is a gain in resolution in confocal microscopy, mainly along the z-axis: in conventional microscopes we have resolution in the x-y plane but essentially none along z, whereas the confocal microscope also has z-resolution (optical sectioning). This comes from the fact that, unlike transmission or widefield microscopy, confocal microscopy illuminates the sample point by point and rejects out-of-focus light with a pinhole; the resolution in the x-y plane itself remains essentially the same as in conventional microscopy (see below).

Advantages:

  • 3D imaging: the ability to create a sharp optical section makes it possible to build a 3D image of the specimen, because the computer creates a digital 3D image by combining the 2D sections.
  • resolution close to the theoretical limit: the pinhole allows 3D imaging with a resolution close to the theoretical limit (however, the x-y resolution is the same as in transmission and fluorescence microscopy).
  • selectivity of illumination: the laser is focused on a specific part of the sample, so even if there is some photobleaching along the illuminated z-column, there is no bleaching in the areas we are not looking at.
  • improvement of the signal-to-noise ratio: elimination or reduction of the background coming from away from the focal plane (which would otherwise degrade the image).

Disadvantages:
-higher cost
-time-consuming
——————————————–
Confocal microscopy shows several advantages: first, it provides three-dimensional images; it also makes it possible to be more selective about which region of interest is illuminated and how it is illuminated, especially in monobeam confocal microscopy. Scanning techniques allow illuminating just one point at a time rather than the whole sample. Moreover, it offers diffraction-limited resolution thanks to the use of a laser as the illumination source. Regarding the disadvantages, confocal microscopy is more expensive (€250-50k) than conventional fluorescence microscopy, due to the more sophisticated technology required to scan, control the movements, record in synchrony, and elaborate the data. The resolution (w) in the x-y plane of the confocal microscope is the same as in conventional fluorescence microscopy, so the formula is still w = 2λ/(π·NA). The z resolution, instead, is equal to the length of the neck of the hourglass-shaped beam of light, given by z = πλ/(2·NA²) (see card 11).

9
Q

Describe how a monobeam confocal microscopy obtains three-dimensional images

A

In laser scanning confocal microscopy (LSCM) the laser produces a single beam of light focused at one narrow depth level at a time, which is why it is called monobeam confocal microscopy. The beam is scanned across the sample in the horizontal plane using oscillating mirrors, and the returning light is focused onto a pinhole, placed in an optically conjugate plane in front of the detector, to eliminate the out-of-focus signal. The light is then detected by a photomultiplier tube (PMT) that translates the light into current: every photon absorbed frees an electron in the system, which becomes ready to move. This very sensitive detector takes about 1 µs per point, thus resulting in about 1-5 frames per second. The microscope does not form an optical (real) image: to enable image formation, the sampling spot must be moved through the specimen, plane after plane, and the resulting signal collected and stored.
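A back-of-the-envelope check of the frame rate quoted above (the pixel dwell time is from the card; the frame size is assumed for illustration):

    dwell_time_s = 1e-6           # ~1 microsecond per point, as stated above
    pixels_per_frame = 512 * 512  # a typical scan format (assumed for illustration)

    frame_time_s = dwell_time_s * pixels_per_frame
    print(f"one frame takes about {frame_time_s:.2f} s, "
          f"i.e. roughly {1 / frame_time_s:.1f} frames per second")
    # ~0.26 s per frame, a few frames per second, consistent with the 1-5 fps quoted above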

10
Q

Describe how a multibeam confocal microscopy obtains three-dimensional images.

A

The multibeam technology is inspired by the Nipkow disk, one of the first devices used to produce a moving image.

The Nipkow disk is a rotating disk with holes. One side of the disk is dark while there is a lamp on the other side. When the disk rotates you see light coming through the holes, and if you rotate the disk fast enough you can’t distinguish the individual holes: instead you perceive a continuous moving pattern of light.

The multibeam confocal microscope was inspired by the Nipkow disk, a spinning disk with a set of equally spaced holes of equal diameter. In this type of microscopy, the laser produces a beam of light that falls on a spinning disk (which can rotate at up to 5-10k rpm) carrying holes of fixed size, each of which contains a small lens. The light that goes through illuminates a few of these lenses, which focus the beam down onto the focal plane of a second disk connected to the first one. The light that goes through the holes (pinholes) of the second disk diverges and is focused down again onto the sample by the objective. The sample contains fluorophores that, once excited, emit fluorescent light in all directions. If this light comes from the central part of the various beams, it goes back through the objective and lands exactly in the pinholes of the second disk. A semi-reflecting (dichroic) mirror then completely reflects one wavelength (the emission light) and lets the other wavelength (the excitation light) go through. In this way, an image of the pinholes is made on a CCD camera, a detector that translates light into current. It is important to stress that the light coming from the sample can come from the beam waist (where the light is most intense and focused) or from above or below it, but only the light that comes from the beam waist forms an image exactly on the pinhole of the second disk and goes through. In this way you can collect up to 1000 frames per second.

11
Q

Describe the resolution of a confocal microscope.

A

The resolution of confocal microscopy is the minimum distance between two points in the object that can be distinguished, which is the same as the minimum size of the spot that can be illuminated.

The resolution of the confocal microscope (w) in the x-y plane is the same as in the other microscopes, while z is the resolution in the third dimension, equal to the length of the neck of the hourglass-shaped beam of light.

w = 2λ / (π·NA)
z = π·λ / (2·NA²)

λ: wavelength
NA: numerical aperture
—————————————–
The resolution of the confocal microscope (w) in the x-y plane is the same as in the previous microscopes, while z is the resolution in the third dimension. As in wide-field microscopy, the resolution at the focal plane is determined by the diameter of the Airy disk. Instrument parameters such as the scan rate and the pinhole diameter can be set to achieve maximum resolution. I start with a point light source and make its image inside the sample: this defines the volume I am illuminating, which has the shape of an hourglass. Then I take the center of the hourglass, where the light is tightest, and make an image of it on the second pinhole. Only the light that comes from this part forms an image on the pinhole; light coming from a different position does not. So, I am selecting both in x-y and in z. If the sample emits little light, the pinhole can be opened to more than one Airy diameter to allow more light onto the detector. In this case the resolution decreases (slightly), which is the price for being able to image the sample at all.
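Plugging illustrative numbers into the two formulas above (the wavelength and NA are assumed, not from the card):

    import math

    wavelength_nm = 500.0   # illustrative wavelength
    NA = 1.4                # illustrative high-NA oil-immersion objective

    w = 2 * wavelength_nm / (math.pi * NA)       # lateral (x-y) resolution
    z = math.pi * wavelength_nm / (2 * NA ** 2)  # axial (z) resolution

    print(f"x-y resolution w ~ {w:.0f} nm")  # ~230 nm
    print(f"z resolution   z ~ {z:.0f} nm")  # ~400 nm: z is worse than x-y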

12
Q

Describe the difference in the structure of monobeam vs. multibeam confocal microscopes.

A

Monobeam:

  • LSCM (Laser Scanning Confocal Microscope)
  • Detector: PMT
  • Slow: one point detected at a time (circa 1 microsecond per point, i.e. 1-5 frames per second)

Multibeam:

  • Spinning disk
  • Detector: CCD
  • Fast: one detection for every XY scan (the disk can rotate at up to 5-10k rpm, i.e. up to 1000 frames per second)

In monobeam, you have a single beam and total control of what you are illuminating, but you go slower.

The monobeam instrument is a laser scanning confocal microscope. The light source is a laser that produces one beam. There is also the microscope optical system and a scanning system. The photomultiplier tube is the sensitive light detector that translates the light into an electrical signal. Finally, a computer translates the electrical signals into the image (the objective is above the sample).
In the multibeam microscope, a beam of light falls on a disk with many holes, each containing a lens. The light that passes through is focused down onto a second disk. The objective then focuses the beam down again onto the sample. On the return path there is a semi-reflecting mirror that reflects the emission light and, finally, the CCD, the detector that translates light into current. With the monobeam I have a single beam and total control of what I am illuminating, but it is slower. With the multibeam you can go as fast as you want, but the limitation is that if you go too fast there is too little light for every position, and you don’t see enough. So it is always a balance: slow enough to have enough light, but not so slow that you burn the sample; you have to find the right working conditions. Both the PMT and the CCD are devices that translate light into current: in both cases every photon that arrives frees an electron in the system.

13
Q

Describe the working principle of light-sheet microscopy.

A

The light beam is focused into a thin sheet and the sample is illuminated from the side, perpendicular to the detection axis. Since the illumination is perpendicular to the detection, you only see the part/layer of the sample that the light passes through.
The lasers shine their light, mirrors direct it, and it is then focused into the thin sheet.
————————————
Light-sheet microscopy is a fluorescence microscopy technique in which the light illuminating the sample is focused into a very thin sheet. The illumination axis is perpendicular to the detection axis. Since the light has the shape of a thin sheet, it illuminates only a thin volume of the sample in the focal plane of the objective, thus reducing photobleaching compared to confocal microscopy. By scanning the illumination we can form a 3D image. Since the resolution in z is given by the thickness of the light sheet, we remain within the limit of conventional diffraction-limited resolution (a few microns). The light emitted by the excited sample is collected by CCD cameras.

14
Q

Describe what is commonly done, in terms of analysis, when a sample is marked by two or three different fluorophores which have some degree of overlap in the emission spectra.

A

When there is some degree of crossover between the emission spectra of different fluorophores, we have to rely on spectral imaging (the technical part) coupled with linear unmixing (the analytical part). Linear unmixing compares the measured summed spectrum to all the possible combinations of the spectra of the fluorophores involved. The software finds the correct spectral contributions according to a best-fit criterion (based on algorithms) and separates the lambda stack into a different image for each fluorophore.
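A minimal sketch of the linear-unmixing step for one pixel, assuming the reference emission spectra of the fluorophores are known; the spectra, channel count, and amounts are all made up for illustration:

    import numpy as np

    # Reference emission spectra of two overlapping fluorophores sampled in 6 spectral channels
    # (rows = detection channels, columns = fluorophores; values are illustrative)
    A = np.array([
        [0.05, 0.00],
        [0.40, 0.05],
        [0.35, 0.20],
        [0.15, 0.40],
        [0.04, 0.25],
        [0.01, 0.10],
    ])

    true_amounts = np.array([3.0, 1.5])   # "ground truth" contributions in this pixel
    measured = A @ true_amounts           # the summed spectrum recorded in this pixel

    # Linear unmixing: find the per-fluorophore contributions that best reproduce the measurement
    estimated, *_ = np.linalg.lstsq(A, measured, rcond=None)
    print("estimated contributions:", np.round(estimated, 3))  # -> [3.0, 1.5]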

15
Q

Describe the principles of SNOM microscopy.

A

SNOM: Scanning Near-field Optical Microscopy (or NSOM: Near-field Scanning Optical Microscopy)

When light is emitted by a very small object, some of the details do not propagate: they stay confined just around the object as a near field, and only features on the scale of the wavelength propagate to the far field. So, instead of capturing the light that propagates, we capture the local electric field close to the object.

The image is obtained by attributing the measured intensity to the position of the fiber that collects it.

Scanning near-field optical microscopy is a technique used to break the diffraction limit. The light from a laser is sent through a tiny aperture (50-100 nm) at the end of a fiber probe. This creates an evanescent wave in the near field, which broadens out only in the far field. If we place the sample in the near-field zone, the resolution we can reach depends on the aperture and not on the wavelength of the light. In the near field there is no diffraction, so we are not bound by the diffraction limit; this is also why we are NOT making an image. The correct positioning of the sample is achieved using a feedback mechanism based on the contact between the probe and the sample. A computer then puts together the measured intensity and the position where it was collected.

16
Q

Describe the concept and the main features of two-photon microscopy.

A

One-photon excitation: a photon carrying energy arrives at a fluorophore molecule and, if the energy is sufficient, the molecule gets excited electronically and vibrationally, moving to a higher orbital. The vibrational energy is lost very quickly. The molecule then gives back the energy it gained, but the emitted energy is less than the energy absorbed at the beginning, because part of it has been lost.

Two-photon microscopy is a fluorescence microscopy technique based on the simultaneous absorption of two photons emitted by a pulsed laser (700 nm to 1000 nm, in picosecond pulses). Since the light intensity is highest in the beam waist, the fluorophore excitation is restricted to a small focus, resulting in optical sectioning a priori, without the use of a pinhole. The number of excited fluorophores is proportional to the square of the intensity: two-photon microscopy is NOT linear.
In the overall process we do not gain resolution compared with confocal microscopy: even though the excitation is confined to a very small focus (a gain in resolution), the wavelength is double the one that would be used for linear excitation (a loss of resolution).
The main advantages are that we form a 3D image without a pinhole, we decrease photobleaching compared with confocal microscopy, and we can go deeper into the sample (along z) because a longer wavelength is absorbed less.
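A small numerical illustration of the non-linearity (the beam profile is assumed Gaussian purely for illustration): squaring the intensity profile makes it narrower, which is why two-photon excitation is confined to the most intense part of the focus.

    import numpy as np

    x = np.linspace(-3.0, 3.0, 2001)   # position across the beam, in units of the beam width
    I = np.exp(-x**2)                  # illustrative Gaussian intensity profile

    def fwhm(profile, x):
        # full width at half maximum of a sampled profile
        above = x[profile >= profile.max() / 2]
        return above.max() - above.min()

    print(f"FWHM of I   : {fwhm(I, x):.2f}")     # ~1.67
    print(f"FWHM of I^2 : {fwhm(I**2, x):.2f}")  # ~1.18, narrower by about sqrt(2)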

17
Q

Describe how is it possible to obtain 3D images in two-photon microscopy.

A

In two-photon microscopy we are selectively sensitive in z without any pinhole. We can obtain 3D images with no pinhole because the fluorophores get excited only where the beam is focused.
When two (or more) photons are needed, the probability for a fluorophore to catch one photon and then the other is highest in the beam waist, and therefore the only fluorophores that get excited are those in the waist.
——————————–
Two-photon microscopy enables the formation of a 3D image because:
• the light we use has half the frequency needed to excite the fluorophores. Since the density of photons is higher in the beam waist, that is the only spot where the fluorophores get excited by 2 photons simultaneously. Thus the excitation is localized to a tiny volume.
• The focal point is scanned throughout the sample, and a computer assembles the collected signal into a 3D image

18
Q

What are the pros and cons of two-photon microscopy with respect to linear fluorescence microscopy, either traditional or confocal?

A

Two-photon microscopy

Advantages:

  • it enables 3D scanning of samples without bleaching fluorophores that do not contribute to the signal (as happens in confocal microscopy)
  • it enables larger penetration into thick samples, since the infrared light employed is less absorbed by biological samples.
  • it enables a resolution better than the one given by the diffraction limit for the (infrared) wavelength in use.

Disadvantages:
- such a wavelength is longer (double) than the one that would be used for the linear excitation of the same fluorophore.
- no real gain in resolution is achieved compared with confocal microscopy using the same fluorophore.
——————————————-
Two-photon microscopy enables obtaining a 3D image of the sample.
Its main pros are:
• If compared to traditional microscopy it suppresses the background signal significantly since the fluorophore excitation is localized
• if compared to confocal microscopy, it has a significantly lower amount of photobleaching because of the localized fluorophore excitation. Moreover, it requires a simpler setup since it doesn’t need a pinhole.
• If compared both to traditional and confocal, it is better for in vivo applications because it involves a higher wavelength, which is less absorbed by the tissue thus causing less damage. Being less absorbed, it is also useful in thicker samples (up to 1 mm) because light can go deeper.
Its main cons are:
• Photobleaching and photodamage can be greater than in single-photon excitation in the focal plane

19
Q

Describe what type of illumination of the sample (wavelength, shape, timing) is used in STED microscopy.

A

STED is a microscopy technique that, like two-photon microscopy, uses pulsed lasers. It uses two wavelengths: the first for the excitation and the second for the stimulated emission. The larger the stimulus, the faster the decay.
The first laser pulse is used for excitation: it is at the wavelength at which the molecule is excited, and in STED the excitation is linear. The second pulse is the switching-off (depletion) pulse: it stimulates the emission of the fluorophores, switching them off before they can fluoresce spontaneously.
The central part of the switching-off pulse is dark, and it is surrounded by stimulated-emission light. This central dark part has a size limited by diffraction, and is thus equal to the minimum size of an ordinary beam, such as the excitation pulse. In a condition of linear response, the switching-off pulse would not change the size of the region of excited fluorophores at all. Instead, in a non-linear response, as the intensity of the STED beam increases, the dark region contracts while the bright regions saturate their quenching effect.

20
Q

Why does STED microscopy enable super-resolved images?

A

STED, like two-photon microscopy, is a super-resolved optical scanning technique that enables 3D image formation. It is another way to overcome the resolution limit set by diffraction, by exploiting a non-linear regime based on stimulated emission. To enter the non-linear regime we have to quench a large fraction of the fluorophores.
The smaller the dark region still containing excited fluorophores, the greater the gain in resolution.
When the STED pulse intensity increases, the bright regions saturate their quenching effect and the dark region contracts. So, in this non-linear condition, STED enables a large improvement over the resolution limit and can overcome the diffraction barrier.

21
Q

Describe the working principle of STED microscopy.

A

STED microscopy stands for Stimulated Emission Depletion fluorescence microscopy. Stimulated emission is a phenomenon in which the excited fluorophore loses its energy through the emission of light at an increased rate when the fluorophore is illuminated with light that has the same wavelength as its emission. The fluorophore is illuminated with the same light that it would emit spontaneously after the fluorescence lifetime.
Therefore, the excited-state lifetime (the time the fluorophore stays excited after excitation, before emitting its light and thus losing its energy) is shortened because the light matches the emission wavelength. This phenomenon can be thought of as a resonance. So, the change in time of the population of excited molecules depends on the lifetime, on the intensity of the excitation light, and also on the intensity of the stimulated-emission (STED) laser.

22
Q

Describe the interplay of image (fluorescence) intensity and resolution in STED microscopy.

A

In a linear response, the switching-off pulse would not change the size of the region of excited fluorophores at all.
But STED is used with a NON-linear stimulated emission, so the number of fluorophores that were previously excited and are now quenched is NOT proportional to the stimulated-emission intensity. When the STED pulse intensity increases, the bright regions saturate their quenching effect and the dark region contracts. To enter the non-linear regime we have to quench a large fraction of the fluorophores. So, in this non-linear condition, STED enables a large improvement over the resolution limit and can overcome the diffraction barrier (see the sketch below).
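A commonly quoted approximate scaling for the resulting spot size is d ≈ λ / (2·NA·√(1 + I/Isat)), where Isat is the saturation intensity of the depletion process; this formula is not written out in the card, so treat it and the numbers below as an assumption used only for illustration:

    import math

    wavelength_nm = 600.0   # illustrative depletion wavelength
    NA = 1.4                # illustrative numerical aperture
    I_over_Isat = 50.0      # depletion intensity in units of the saturation intensity

    d_confocal = wavelength_nm / (2 * NA)                           # diffraction-limited spot
    d_sted = wavelength_nm / (2 * NA * math.sqrt(1 + I_over_Isat))  # STED-narrowed spot

    print(f"diffraction-limited spot: ~{d_confocal:.0f} nm")  # ~210 nm
    print(f"with saturated depletion: ~{d_sted:.0f} nm")      # ~30 nm: resolution improves with intensity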

23
Q

Describe the working principle of TIRF microscopy.

A

The working principle of TIRF microscopy is Total Internal Reflection (TIR), which occurs when light propagating in a transparent medium (with refractive index n1) hits an interface with a different, optically less dense medium of smaller refractive index (n2) at an incidence angle larger than arcsin(n2/n1). Light crossing between media with different refractive indices changes speed and direction; beyond this critical angle it is completely reflected. In the second medium an evanescent wave is formed, which decays exponentially as we move away from the surface: its intensity is given by I = I0·exp(-z/d), and both parameters (I0 and d) depend on the incidence angle. The threshold angle is about 66°.
TIR enables a resolution in the z-direction better than the one given by the diffraction limit. TIR overcomes the resolution limit set by diffraction because the z-selection is not obtained by forming an image but by exploiting the evanescent wave.
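A small numerical check, with illustrative refractive indices (glass coverslip n1 ≈ 1.518, cell interior n2 ≈ 1.38) and the standard expression for the evanescent-field penetration depth, which is not written out in the card and is included here as an assumption:

    import math

    n1 = 1.518            # glass coverslip (illustrative)
    n2 = 1.38             # cell interior (illustrative)
    wavelength_nm = 488.0 # illustrative excitation wavelength
    theta_deg = 70.0      # incidence angle beyond the critical angle

    theta_c = math.degrees(math.asin(n2 / n1))
    print(f"critical angle: ~{theta_c:.0f} deg")   # ~65 deg, close to the 66 deg quoted above

    # Standard penetration-depth expression (assumed): d = lambda / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2))
    theta = math.radians(theta_deg)
    d_nm = wavelength_nm / (4 * math.pi * math.sqrt((n1 * math.sin(theta)) ** 2 - n2 ** 2))
    print(f"penetration depth at {theta_deg:.0f} deg: ~{d_nm:.0f} nm")  # ~100 nm: only a thin layer is excited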

24
Q

Describe how samples are illuminated in TIRF microscopy

A

TIRF is a strategy to illuminate the sample in fluorescence microscopy. There are two ways to realize TIRF illumination in a microscope: one is through the objective, and the second is from the side opposite to the objective.
TIR illumination through the objective requires a large numerical aperture. Given Snell’s law (n1·sin ϑ1 = n2·sin ϑ2), there is an intrinsic requirement on the NA of the objective, related to the refractive indices of the system under study: typically an NA ≥ 1.45 is required. However, illumination through the objective does not allow the incidence angle to be changed much; for this reason the thickness of the evanescent wave cannot be changed much either.
We can also have illumination produced from the side opposite to the objective (typically using a prism). With this strategy it is possible to change the angle of incidence and therefore also the illuminated thickness.

25
Q

Describe an example of the use of TIRF microscopy and discuss why this technology has offered an advantage.

A

As we know, TIRF relies on the evanescent wave, which allows monitoring processes at the surface of the cell. We can use it to study the kiss-and-run phenomenon in caveolae recycling: caveolae reach the cell surface, interact with the extracellular environment, and come back into the cytoplasm. The internal content and the shell were tagged with two different fluorophores; in addition, the internal one is sensitive to pH (so if the pH decreases, its fluorescence is quenched).
This experiment can be done with both TIRF and EPI illumination; what do we see?
We can see the caveola that comes close to the cell membrane, touches it, and then goes back inside. With EPI we see it immediately; with TIRF we can catch the fluorescence only when the caveola is close to the cell surface (because of the limited reach of the evanescent wave).
Both EPI and TIRF can use two channels to look at the emission of both fluorophores: at the beginning we see both; after the “kiss” we only have the shell’s signal, because the internal fluorescence is quenched by the acidification of the vesicle. So we know that the caveola reaches the membrane, communicates, and comes back (because we can see the shell’s signal moving away from the surface). In TIRF we cannot see the caveola when it is far from the surface, but what we gain is a nice black background, because we excite only a thin layer; the signal-to-noise ratio is therefore better than with EPI, where we also get out-of-focus fluorescence from all the cell’s layers.

26
Q

Describe the working principle of PALM microscopy.

A

Photoactivated Localization Microscopy (PALM) tries to overcome the resolution problem by changing the question: instead of asking what the object looks like, it asks where the molecule of interest is. We want to detect the center of the spot produced by a single molecule; the precision of this localization depends on the resolution (s), on the pixel size (p), and on the number of photons we can collect from the fluorophore. Depending on the photostability, we can collect more or less 100 photons, because after that the fluorophore dies (photobleaching).
So PALM’s precision relies on a formula combining s, p, and the number of collected photons under a square root (see the sketch below), and when the pixel size is small enough its contribution can be neglected.
PALM exploits photoactivable fluorophores that can be switched on randomly; we then excite them, collect their light, and localize the excited molecules precisely. The signal must be collected quickly, because the fluorophore then bleaches. The activation-excitation cycle is repeated until there is no signal anymore. The resolution is high because only a few fluorophores are visible at a time, so the center of each point spread function can be localized precisely.
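The “formula with the root” referred to above is not written out in the card; a commonly used estimate of the localization precision (treated here as an assumption) is σ ≈ √(s² + p²/12) / √N, where s is the PSF size, p the pixel size, and N the number of collected photons. With illustrative numbers:

    import math

    s_nm = 250.0   # size of the point spread function (illustrative)
    p_nm = 100.0   # pixel size projected into the object plane (illustrative)
    N = 100        # photons collected before the fluorophore bleaches, as quoted above

    sigma_nm = math.sqrt(s_nm ** 2 + p_nm ** 2 / 12) / math.sqrt(N)
    print(f"localization precision ~ {sigma_nm:.0f} nm")  # ~25 nm, far below the ~250 nm PSF size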

27
Q

Describe which factors control the resolution in PALM microscopy.

A

Resolution in PALM microscopy depends on the ability to switch on only a few fluorophores and look at them separately from the rest; to do so, we shine UV light to photoactivate a random subset of the fluorophores.
PALM precision relies on:
• the resolution (the size of the PSF)
• the pixel size
• the number of photons collected from each fluorophore, approximately 100 or so, because of photobleaching.

28
Q

What are photoactivable fluorophores? How are they used in microscopy?

A

Photoactivable means that the fluorophores must first be activated by light (UV light) and only then can be excited at their absorption wavelength to produce fluorescence. Traditional fluorophores can always be excited, while photoactivable ones need the activation step before they can be excited and emit. We can exploit this feature to selectively excite a few molecules at a time and look at them individually.

29
Q

List in which way the various super-resolved microscopies presented in the class do not form linear images of the samples.

A

Super-resolving microscopy aims to overcome the diffraction limit, but as we know the diffraction limit is intrinsic to linear image formation, so we always lose information in the process. Scientists therefore overcame the problem in two ways: not making an image at all, and not making a linear image.
We saw two examples of non-linear imaging: STED and two-photon excitation microscopy.
In STED we overcome linearity because the intensity of the STED pulse is linearly related to the de-excited fraction only until we reach 100% of de-excited molecules; if we increase the intensity further, the fraction stays at 1 (saturation). That is how we gain resolution: by increasing the intensity beyond the level associated with 100% de-excitation, we narrow the hole of the donut-shaped STED pulse, so the remaining fluorescence comes from a region smaller than a focused spot of light.
In two-photon excitation microscopy we make a non-linear image because the technique requires 2 photons to excite one fluorophore, and the probability of finding two photons at the same point is proportional to I², the square of the beam intensity, whereas in wide-field the probability is proportional to I. The I² dependence means that we don’t excite fluorophores in every part of the beam’s volume, but only where I is high (a small volume): if the distribution of I in the beam is a Gaussian, I² is a narrower Gaussian, and only a small part of the beam is used to excite. So we virtually gain resolution (virtually because, since the two photons have double the wavelength, we are at the same time narrowing the excitation volume and doubling the diffraction-limited spot size).