Remote Sensing & GIS 2 Flashcards

1
Q

For image histogram (a) below, choose one contrast enhancement technique that you think would best enhance the image.

Describe the technique and explain the reason for your choice.

Sketch the histogram and draw the shape of the function in the relevant histogram.

A

Linear contrast enhancement (LCE).

Here, because the image histogram is normally distributed but the output DN range is wider than the input DN range, the function improves image contrast without distorting the image information.

I.e. the LCE does nothing but widen the increment between DN levels and shift the histogram's position along the image DN axis.

Linearly scale the DN range of the image to the full dynamic range of a display system (8 bits) based on the maximum and minimum of the input image X.
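A minimal numerical sketch (not part of the original card) of this linear scaling in Python, assuming an 8-bit output range:

```python
import numpy as np

def linear_stretch(x, out_min=0, out_max=255):
    """Linearly map the input DN range [x.min(), x.max()] to [out_min, out_max]."""
    x = x.astype(float)
    lo, hi = x.min(), x.max()
    y = (x - lo) / (hi - lo) * (out_max - out_min) + out_min
    return np.round(y).astype(np.uint8)

band = np.array([[50, 60], [70, 80]])   # hypothetical low-contrast DNs
print(linear_stretch(band))             # DNs 50..80 spread over 0..255
```

The histogram keeps its shape; only the DN increments are widened and the position shifted, as the card states.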

2
Q

For image histogram (b) below, choose one contrast enhancement technique that you think would best enhance the image.

Describe the technique and explain the reason for your choice.

Sketch the histogram and draw the shape of the function in the relevant histogram.

A

Logarithmic contrast enhancement.

Histogram (b) has the form of a log-normal distribution. Hence, a logarithmic function will modify such a histogram into the shape of a normal distribution.

The gradient of the function is greater than 1 in the low DN range, thus spreading out low DN values, while in the high DN range the gradient is less than 1, thus compressing high DN values.

As a result, logarithmic contrast enhancement shifts the peak of the image histogram to the right and highlights details in the dark areas of an input image.
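A small sketch (not from the original deck) of a logarithmic stretch, normalising by the log of the input maximum so the output fills 0..255:

```python
import numpy as np

def log_stretch(x, out_max=255):
    """Logarithmic contrast enhancement: spreads low DNs, compresses high DNs."""
    x = x.astype(float)
    y = np.log1p(x) / np.log1p(x.max()) * out_max   # log1p avoids log(0)
    return np.round(y).astype(np.uint8)

dark = np.array([0, 10, 50, 255])   # hypothetical DNs crowded at the low end
print(log_stretch(dark))            # low DNs spread out, high DNs compressed
```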

3
Q

For image histogram (c) below, choose one contrast enhancement technique that you think would best enhance the image.

Describe the technique and explain the reason for your choice.

Sketch the histogram and draw the shape of the function in the relevant histogram.

A

Exponential contrast enhancement.

Histogram (c) has the form of an exponential of a normal distribution. Hence, an exponential function will modify such a histogram into the shape of a normal distribution.

The gradient of the function is less than 1 in the low DN range, thus compressing low DN values, while in the high DN range the gradient is greater than 1, thus spreading out high DN values.

As a result, exponential contrast enhancement shifts the peak of the image histogram to the left and enhances details in light areas at the cost of suppressing tonal variation in dark areas.

Exponential and logarithmic functions are the inverse of each other.
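A companion sketch (not from the original deck) of an exponential stretch, the inverse counterpart of the logarithmic one:

```python
import numpy as np

def exp_stretch(x, out_max=255.0):
    """Exponential contrast enhancement: compresses low DNs, spreads high DNs."""
    x = x.astype(float) / x.max()            # normalise input to [0, 1]
    y = (np.exp(x) - 1) / (np.e - 1) * out_max
    return np.round(y).astype(np.uint8)

light = np.array([0, 128, 255])   # hypothetical DNs
print(exp_stretch(light))         # the mid DN is pushed down, toward the dark end
```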

4
Q

What is the preferred technique for generating a colour composite image with optimised colour balance and contrast?

How is this done?

A

Balance contrast enhancement technique (BCET)

Colour bias is one of the major causes of poor colour composite images. The BCET is a simple solution to this problem, where the three bands used for colour composition must have an equal value range and mean.

Using the parabolic function [y = a(x − b)² + c] derived from an input image, BCET can stretch (or compress) the image to a given value range and mean without changing the basic shape of the image histogram.

Thus three image bands for colour composition can be adjusted to the same value range and mean to achieve a balanced colour composite.
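A sketch (not from the original card) of the BCET parabola in Python. The coefficient b is obtained from the input minimum l, maximum h, mean e and mean square s so that the output hits the target minimum L, maximum H and mean E; a and c then follow from the min/max constraints. The target E = 110 below is just an illustrative choice:

```python
import numpy as np

def bcet(x, L=0.0, H=255.0, E=110.0):
    """Balance Contrast Enhancement Technique: parabolic stretch y = a(x-b)^2 + c
    mapping input min -> L, input max -> H and input mean -> E."""
    x = x.astype(float)
    l, h, e = x.min(), x.max(), x.mean()
    s = np.mean(x ** 2)                          # mean square of the input DNs
    b = (h * h * (E - L) - s * (H - L) + l * l * (H - E)) / \
        (2.0 * (h * (E - L) - e * (H - L) + l * (H - E)))
    a = (H - L) / ((h - b) ** 2 - (l - b) ** 2)  # fixes the output range
    c = L - a * (l - b) ** 2                     # fixes the output minimum
    return a * (x - b) ** 2 + c

band = np.array([30.0, 40.0, 50.0, 60.0, 120.0])   # hypothetical band DNs
y = bcet(band)
print(y.min(), y.max(), y.mean())   # mapped to min 0, max 255, mean 110
```

Applying the same call to each of three bands gives them an equal value range and mean, which is the balanced colour composite the card describes.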

5
Q

Use a diagram to explain the principle of principal component analysis (PCA).

A
  • In general, for multi-spectral imagery, the reflective bands are highly correlated, and, for example, if there is a correlation of 0.99 between two bands, this means that there is 99% information redundancy between the two bands, with only 1% of unique information.
  • As such, multi-spectral imagery is not efficient for information storage.
  • Consider an m band multi-spectral image as an m-dimensional raster dataset in an m-dimensional orthogonal coordinate system, forming an m-dimensional ellipsoid cluster (represented by its covariance matrix, Σx).
  • Then the coordinate system is oblique to the axes of the ellipsoid data cluster if the image bands are correlated (e.g. correlation > 0.8 is very high).
  • The axes of the data ellipsoid cluster formulate an orthogonal coordinate system and in this system, the same imagery data are represented by n (n<=m) independent components that are called principal components.
  • In other words, the principal components are the imagery data representation in the axes of the ellipsoid data cluster.
  • Thus, principal component analysis is a coordinate rotational operation to rotate the coordinate system of the original image bands to match the axes of the ellipsoid of the imagery data cluster.
  • The first PC is represented by the longest axis of the data cluster and the second PC the second longest, and so on.
  • The axes representing high order PCs may be too short to represent any substantial information, and then the apparent m-dimensional ellipsoid is effectively degraded to n (n ≤ m) independent dimensions.
  • In this way, PCA reduces image dimensionality and represents nearly the same imagery information with fewer independent dimensions in a smaller dataset without redundancy.
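The rotation described above can be sketched numerically (hypothetical two-band data, not from the deck): the covariance matrix is eigen-decomposed and the data are projected onto the eigenvector axes, removing the inter-band correlation:

```python
import numpy as np

# Two highly correlated "bands" standing in for multi-spectral imagery.
rng = np.random.default_rng(0)
band1 = rng.normal(100, 20, 1000)
band2 = 0.9 * band1 + rng.normal(0, 5, 1000)       # strongly correlated with band1
X = np.stack([band1, band2])                       # shape: (m bands, n pixels)

cov = np.cov(X)                                    # m x m covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)             # eigen-decomposition (ascending)
order = np.argsort(eigvals)[::-1]                  # PC1 = largest eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Rotate the data into the PC coordinate system (the ellipsoid axes).
pcs = eigvecs.T @ (X - X.mean(axis=1, keepdims=True))
print(np.corrcoef(X)[0, 1])     # original bands: correlation near 1
print(np.corrcoef(pcs)[0, 1])   # principal components: correlation near 0
```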
6
Q

Summarize the principle of principal component analysis (PCA).

A
  • The principal component analysis is a linear transformation converting m correlated dimensions to n (n<=m) independent (uncorrelated) dimensions.
  • This transform is a coordinate rotation operation to rotate the coordinate system of the original image bands to match the axes of the ellipsoid of the imagery data cluster.
7
Q

Describe the principle and operation of PCA based decorrelation stretch.

A

The PCA based decorrelation stretch generates a colour composite from three image bands with reduced inter-band correlation and thus more distinctive and saturated colours without distortion in hues.

The idea of PCADS is to stretch multidimensional imagery data along their PC axes (the axes of the data ellipsoid cluster) rather than original axes representing image bands.

In this way, the volume of data cluster can be effectively increased and the inter-band correlation is reduced as illustrated below:

8
Q

Describe the major steps of PCADS.

A

The PCADS is achieved in three steps:

  • PCA to transform data from the original image bands to PCs.
  • Contrast enhancement on each of the PCs (stretching the data cluster along PC axes).
  • Inverse PCA to convert the enhanced PCs back to the relevant image bands.
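The three steps can be sketched on toy data (hypothetical three-band values, with a simple unit-variance stretch standing in for the per-PC contrast enhancement):

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(100, 30, (3, 5000))
# Three correlated "bands" sharing the same underlying signal.
X = np.array([base[0],
              0.8 * base[0] + 0.2 * base[1],
              0.7 * base[0] + 0.3 * base[2]])

mean = X.mean(axis=1, keepdims=True)
_, eigvecs = np.linalg.eigh(np.cov(X))

# 1. Forward PCA: rotate the bands into PC space.
pcs = eigvecs.T @ (X - mean)

# 2. Contrast enhancement on each PC: stretch every PC axis to equal spread.
pcs_stretched = pcs / pcs.std(axis=1, keepdims=True)

# 3. Inverse PCA: rotate the stretched PCs back to the original band space.
Y = eigvecs @ pcs_stretched + mean

print(np.corrcoef(X)[0, 1])   # before: high inter-band correlation
print(np.corrcoef(Y)[0, 1])   # after:  correlation greatly reduced
```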
9
Q

Compare the images of band average, intensity of HSI, and PC1, and discuss their similarity (in operation) and common merits, and comment on the differences between them.

A

Average image, intensity image of HSI and PC1 image have much in common.

The three types of images are all summations of spectral bands, and they all increase the image SNR.

  • Band average is an equal weight summation of any number of image bands,
  • Intensity image of HSI is an average of three bands used for RGB HSI transformation,
  • PC1 is a weighted summation of all the image bands based on the 1st eigenvector of the image covariance matrix.
10
Q

Compare the three DS techniques, PCADS, HSIDS, DDS, in principle, results and processing efficiency.

A
  • The PCADS technique is similar to the HSIDS in results though based on quite different principles.
  • PCADS is statistically scene dependent as the whole operation starts from the image covariance matrix. It can be operated on all image bands in one go.
  • The HSIDS is not statistically scene dependent and only operates on three bands.
  • Both techniques involve complicated forward and inverse coordinate transformations.
  • In particular, the PCADS requires a quite complicated inversion of the eigenvector matrix for the inverse PC transformation and is therefore not widely used.
  • The Direct Decorrelation Stretch (DDS) is the most efficient technique and it can be quantitatively controlled based on the saturation level of the image.
11
Q

Using diagrams, explain the principle of additive RGB colour display based on Tristimulus Colour theory. Discuss how colours are used as a means to visualise information beyond the spectral range of human vision

A
  • The human retina has 3 types of cones. The response of each type of cone is a function of the wavelength of the incident light and peaks at 440nm (B), 545nm (G), and 680nm (R) respectively.
  • I.e. each type of cone is primarily sensitive to one of the primary colours: B, G or R.
  • A colour perceived by a person depends on the proportion of each of these three types of cones being stimulated, and thus can be expressed as a triplet of numbers (r, g, b), even though visible light is electromagnetic radiation in a continuous spectrum of 400-700nm.
  • Digital image colour display is based entirely on the tristimulus colour theory.
  • A colour monitor, like a colour TV, is composed of three geometrically registered guns: R, G and B.
  • In the red gun, pixels of an image are displayed in reds of different intensity (i.e. dark red, light red etc.) depending on their DNs. So are the green and blue guns.
  • Thus, if three bands of a multi-spectral image are displayed in R, G and B simultaneously, a colour image display is generated, in which the colour of a pixel is decided by its DNs in R, G and B bands (r, g, b).
  • This kind of colour display system is called Additive RGB Colour Composite System.
  • In this system, different colours are generated by additive combinations of R, G and B components.
12
Q

Describe the pseudo colour display method.

A
  • Human eye can recognize far more colours than grey levels.
  • Thus colour may be used very effectively to enhance small grey level differences in a monochrome image.
  • The technique to display a monochrome image as a colour image is called pseudo colour display.
  • A pseudo colour image is generated by assigning each grey level to a unique colour.
  • This can be done by interactive colour editing or by automatic transformation based on certain logic.
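The "each grey level to a unique colour" assignment is a look-up table. A minimal sketch (not from the deck; the blue-to-red ramp is an arbitrary illustrative scheme):

```python
import numpy as np

def pseudo_colour(grey):
    """Map each grey level 0..255 to a unique RGB triple via a look-up table."""
    lut = np.zeros((256, 3), dtype=np.uint8)
    lut[:, 0] = np.arange(256)           # red channel grows with DN
    lut[:, 2] = 255 - np.arange(256)     # blue channel fades with DN
    return lut[grey]                     # one colour per grey level

img = np.array([[0, 128], [200, 255]], dtype=np.uint8)
rgb = pseudo_colour(img)
print(rgb[0, 0], rgb[1, 1])   # dark pixel -> pure blue, bright pixel -> pure red
```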
13
Q

Discuss the advantages and disadvantages of the pseudo colour display method in image visualisation and interpretation.

A
  • The advantage of pseudo colour display is also its disadvantage.
  • When a digital image is displayed in a grey scale based on its DNs in a B/W display, the quantitative sequential relationship between different DN values is effectively presented.
  • This crucial information is lost in a pseudo colour display because the colours assigned to the various grey levels are not quantitative and sequential.
  • Indeed, the image of a pseudo display is an image of symbols rather than numbers; it is no longer a digital image.
  • We can regard the grey B/W display as a special case of pseudo colour display in which a sequential grey scale based on DN levels is used instead of a colour scheme.
14
Q

What is meant by smoothing in digital image filtering?

A
  • Smoothing (low pass) filters are designed to remove high frequency information and retain low frequency information, thus reducing noise at the cost of degrading details in an image.
  • The figure illustrates a typical low pass (smoothing) filter H(u,v) and the corresponding PSF h(x,y).
  • Most kernel filters for smoothing involve weighted average among the pixels within the kernel.
  • The larger the kernel, the lower the frequency of the information that is retained.
15
Q

Give some examples of low pass filter kernels.

A
  • Mean filters
  • Weighted mean filters
  • Gaussian filter
  • Edge preserve low pass filters
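For concreteness, the first three can be written out as kernels (a common textbook 5x5 Gaussian approximation is used; note each kernel sums to 1 so mean brightness is preserved):

```python
import numpy as np

mean_3x3 = np.full((3, 3), 1 / 9)            # equal-weight mean filter

weighted_mean = np.array([[1, 2, 1],
                          [2, 4, 2],
                          [1, 2, 1]]) / 16    # centre-weighted mean filter

gaussian = np.array([[1,  4,  7,  4, 1],
                     [4, 16, 26, 16, 4],
                     [7, 26, 41, 26, 7],
                     [4, 16, 26, 16, 4],
                     [1,  4,  7,  4, 1]]) / 273   # 5x5 Gaussian approximation

for k in (mean_3x3, weighted_mean, gaussian):
    print(k.sum())   # each kernel sums to 1
```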
16
Q

Discuss the major drawback of mean filters and the importance of edge preserve smoothing filters.

A
  • Smoothing based on averaging is effective at eliminating noise pixels, which often stand out with very different DNs from their surrounding pixels, but the process blurs an image by removing high frequency information.
  • For this reason, “edge-preserving smoothing” techniques have become an important research topic in filtering.
17
Q

Describe the concept of k nearest mean filter and discuss its applications and merits.

A
  • A kind of edge-preserving low pass filter.
  • Re-assigns a pixel xij (the central pixel) of an image X to the average of the k neighbouring pixels in the kernel window whose DNs are closest to that of xij.
  • A typical value of k is 5 for a 3x3 square window.
  • In the image on the LHS, 0 is noise because it is significantly different from the neighbouring pixels.
  • There is a distinct image boundary in the image on the RHS that is preserved after filtering.
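A sketch of the k nearest mean filter (not from the deck), here taking the k = 5 neighbours whose DNs are closest to the centre DN; the centre pixel itself is excluded from the average, which is one common reading of the definition:

```python
import numpy as np

def k_nearest_mean(img, k=5):
    """Edge-preserving smoothing: each interior pixel becomes the mean of the k
    pixels in its 3x3 neighbourhood whose DNs are closest to its own DN."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            w = img[i - 1:i + 2, j - 1:j + 2].astype(float).ravel()
            neigh = np.delete(w, 4)                      # drop the centre pixel
            d = np.abs(neigh - img[i, j])                # DN distance to centre
            out[i, j] = neigh[np.argsort(d)[:k]].mean()  # k closest-DN neighbours
    return out

# A noise pixel (0) inside a nearly flat region of DNs near 100.
img = np.array([[100, 101,  99],
                [102,   0,  98],
                [101, 100,  97]])
print(k_nearest_mean(img)[1, 1])   # -> 98.8: the noise value is smoothed away
```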
18
Q

Describe the concept of median filter and discuss its applications and merits.

A
  • A kind of edge-preserving low pass filter.
  • Re-assigns a pixel xij of image X to the median DN of its neighbouring pixels (including itself) in a kernel window (e.g. 3x3).
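A minimal sketch (not from the deck) of a 3x3 median filter, showing why it removes isolated spikes without averaging across edges:

```python
import numpy as np

def median_filter(img):
    """Re-assign each interior pixel to the median DN of its 3x3 neighbourhood."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

img = np.array([[10, 10, 10],
                [10, 99, 10],     # isolated noise spike
                [10, 10, 10]])
print(median_filter(img)[1, 1])   # -> 10.0: the spike is removed entirely
```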
19
Q

Describe the concept of adaptive median filter and discuss its applications and merits.

A
  • A kind of edge-preserving low pass filter.
  • The adaptive median filter is designed on the basic principle of the median filter, with an adaptive step: the central pixel is replaced by the median of its kernel window only if it differs significantly from that median, so that only likely noise pixels are altered.
20
Q

Describe the concept of majority filter and discuss its applications and merits.

A
  • A kind of edge-preserving low pass filter.
  • This is a rather democratic filter.
  • A pixel is re-assigned to the most popular DN among its neighbourhood pixels.
  • This filter performs smoothing based on the counting of pixels in the kernel rather than numerical calculations.
  • Thus it is suitable for smoothing images of non-sequential data (symbols) such as classification images.
  • For a 3x3 kernel, the recommended majority number is 5.
  • If there is no majority found within a kernel window, then the central pixel in the window remains unchanged.
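A sketch of the majority filter (not from the deck), reproducing the class-map example from the next card: the stray class-2 pixel is outvoted by its six class-6 neighbours:

```python
import numpy as np
from collections import Counter

def majority_filter(img, majority=5):
    """Re-assign each interior pixel to the most frequent value in its 3x3
    window, if that value occurs at least `majority` times (else unchanged)."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            value, count = Counter(img[i - 1:i + 2, j - 1:j + 2].ravel()).most_common(1)[0]
            if count >= majority:
                out[i, j] = value
    return out

classes = np.array([[6, 6, 6],
                    [6, 2, 5],
                    [6, 6, 5]])        # class numbers, not sequential DNs
print(majority_filter(classes)[1, 1])  # -> 6: the centre class 2 is outvoted
```

Note that only counting is involved, which is why this is safe for non-sequential class labels where an average like 5.3 would be meaningless.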
21
Q

To smooth a classification image, what filter is appropriate and why? Describe this filter with an example.

A
  • There are 6 pixels with DN=6, therefore the central DN, 2, is replaced by 6.
  • For a classification image, the numbers in this window are class numbers, and their meaning is no different from class symbols A, B and C.
  • If we use a mean filter, the average of the DNs in the window is 5.3. Class 5.3 has no meaning in a classification image.
22
Q

What is meant by edge enhancement in digital image filtering?

A
  • Edges and textures in an image are typically high frequency information.
  • High pass (edge enhancement) filters remove low frequency image information and therefore enhance high frequency information such as edges.
  • The figure below illustrates a typical high pass filter H(u,v) and the corresponding PSF h(x,y).
23
Q

Compare Laplacian filters with gradient filters and discuss their applications

A
  • Most commonly used edge enhancement filters are based on the first and second derivatives: gradient and Laplacian.
  • The two types of high pass filters work in different ways:
  • Gradient is the first derivative at pixel f(x,y) and as a measurement of DN change rate, it is a vector characterising the maximum magnitude and direction of the DN slope around the pixel.
  • Laplacian, as the second derivative of f(x,y), is a scalar that measures the change rate of gradient.
  • I.e. Laplacian describes the curvature of a slope but not its magnitude and direction.
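The contrast between the two can be shown on a constant DN slope (not from the deck; Sobel is used as the gradient operator): the gradient responds to the slope itself, while the Laplacian is zero because the gradient does not change:

```python
import numpy as np

def convolve2d(img, kernel):
    """Tiny 'valid' 2-D correlation for 3x3 kernels (no padding)."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # gradient in x
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])   # standard Laplacian

ramp = np.tile(np.arange(5.0), (5, 1))   # a flat DN slope, increasing in x
gx = convolve2d(ramp, sobel_x)
lap = convolve2d(ramp, laplacian)
print(gx[0, 0], lap[0, 0])   # -> 8.0 0.0: constant gradient, zero Laplacian
```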
24
Q

What does this figure show?

Explain the figure in the context of edge enhancement.

A
  • Geometric meaning of first and second derivatives.
  • A flat DN slope has a constant gradient but zero Laplacian, because the change rate of the gradient of a flat slope is zero.
  • For a slope with constant curvature (an arc of a circle), the gradient is a variable while the Laplacian is a constant.
  • Only for a slope with varying curvature, both gradient and Laplacian are variables.
  • This is why Laplacian suppresses all the image features except sharp edges where DN gradient changes dramatically, while gradient retains edge as well as slope information.
25
Q

Give the mathematical definitions of gradient and Laplacian filters.

A
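The answer to this card is missing from the export; the standard definitions for an image f(x, y), consistent with the gradient and Laplacian descriptions in the surrounding cards, are:

```latex
\nabla f(x,y) = \left(\frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y}\right),
\qquad
|\nabla f| = \sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2}}

\nabla^{2} f(x,y) = \frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}}
```

The gradient is a vector (magnitude and direction of the steepest DN slope); the Laplacian is a scalar (the change rate of the gradient).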
26
Q

Give examples of gradient filters.

A
  • Prewitt filters
  • Sobel filters
27
Q

Give examples of Laplacian filters.

A
  • Standard Laplacian filter
  • Laplacian filter
  • Edge sharpening filter
28
Q

What is an edge sharpening filter?

What are the major applications of edge sharpening filter?

A
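The answer to this card is missing from the export. A common formulation (an assumption here, not taken from the deck) is to subtract the Laplacian from the original image, which keeps the overall image while crispening edges; the two operations collapse into the single kernel below:

```python
import numpy as np

# Edge sharpening as original - Laplacian, expressed as one combined kernel.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])

def sharpen_pixel(window):
    """Apply the 3x3 sharpening kernel at one pixel position."""
    return np.sum(window * sharpen)

# At a step edge the filter overshoots on both sides, making the edge crisper.
step = np.array([[10, 10, 50],
                 [10, 10, 50],
                 [10, 10, 50]], dtype=float)
print(sharpen_pixel(step))   # -> -30.0: the dark side is pushed darker
```

Typical applications are crispening imagery for visual interpretation where both edges and the original tonal information must be retained.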
29
Q

What is the purpose of edge preserved filters?

A

In order to remove random noise with the minimum degradation of resolution, various edge preserved filters have been developed such as adaptive median filter.

30
Q

Discuss the principle of SPCA for colour composition.

(Not actually in q’s, might be useful for understanding)

A
  • As PCs are independent without information redundancy, colour composites of PCs are often very effective to highlight particular ground objects and minerals that are not distinguishable in colour composites of the original bands.
  • However, in PC colour composites, noise may be exaggerated because the higher-order PCs contain significantly less information than the lower-order PCs and have very low SNR.
  • When PC images are stretched and displayed in the same value range, the noise in higher-order PCs is improperly enhanced.
  • We would like to use three PCs with comparable information levels for colour composite generation.
  • Selective Principal Component Analysis (SPCA) techniques can produce PC colour composites condensing the maximum information of either topographic or spectral features, and at the same time make the information content of the PCs displayed in RGB better balanced.
  • There are two types of SPCA: dimensionality and colour confusion reduction and spectral contrast mapping.
31
Q

Discuss the SPCA technique dimensionality and colour confusion reduction and its application.

A
  • The spectral bands of a multi‐spectral image are arranged into three groups and each group is composed of highly correlated bands.
  • PCA is performed on each group and then the three PC1s derived from these three groups are used to generate an RGB colour composite.
  • As bands in each group are highly correlated, the PC1 concentrates the maximum information of each group.
  • For six reflective spectral bands of a TM or ETM+ image, the technique may condense more than 98% of the variance (information) into the derived PC1 colour composite. The recommended groups for six reflective spectral bands of TM or ETM+ images are shown in the Table.
  • The approach is actually equivalent to generating a colour composite using broader spectral bands, given that a PC1 is a positively weighted summation of the bands involved in the PCA.
  • It essentially creates a colour composite of a broad visible band in blue, an NIR band in green and broad SWIR band in red.
33
Q

Discuss the SPCA technique spectral contrast mapping and its application.

A
  • A more interesting and useful approach (as opposed to DCCR) is spectral contrast mapping, where the primary objective is to map the contrast between different parts of the spectrum, and so to identify information unique to each band rather than information in common.
  • For this purpose, PC2s derived from band pairs are used instead of PC1s.
  • By using only two bands as inputs, the information that is common to both bands is mapped to PC1 and the unique information is mapped to PC2.
  • In general, low or medium correlation between the bands in each pair is preferred for this approach.
  • The recommended grouping for the six reflective spectral bands of TM/ETM+ is listed in the Table.
  • Based on the above principle, groups of three or more bands may also be used for spectral contrast mapping.
  • The technique can generate spectrally informative colour composites with significantly reduced topographic shadow effects and better SNR.
34
Q

Describe the Decorrelation Stretch (DS) technique, PCADS, in principle, results and processing efficiency.

A
  • The idea of PCADS is to stretch multidimensional imagery data along their PC axes (the axes of the data ellipsoid cluster) rather than original axes representing image bands.
  • In this way, the volume of data cluster can be effectively increased and the inter-band correlation is reduced.
  • The PCADS technique is similar to the HSIDS in results (both increase the three dimensionality of the data cluster) though based on quite different principles.
  • PCADS is statistically scene dependent as the whole operation starts from the image covariance matrix.
  • It can be operated on all image bands in one go.
  • Both PCADS and HSIDS involve complicated forward and inverse coordinate transformations.
  • In particular, the PCADS requires a quite complicated inversion of the eigenvector matrix for the inverse PC transformation and is therefore not widely used.
35
Q

Comment on the Decorrelation Stretch (DS) technique, HSIDS, in principle, results and processing efficiency.

A
  • The HSIDS technique is similar to the PCADS in results (both increase the three dimensionality of the data cluster) though based on quite different principles.
  • The HSIDS is not statistically scene dependent and only operates on three bands.
  • Both HSIDS and PCADS involve complicated forward and inverse coordinate transformations.
36
Q

Comment on the Decorrelation Stretch (DS) technique, DDS, in principle, results and processing efficiency.

A

The Direct Decorrelation Stretch (DDS) is the most efficient technique and it can be quantitatively controlled based on the saturation level of the image.

37
Q

What are the major advantages and disadvantages of PC colour composition?

A
  • As PC1 is mainly topography, colour composites excluding PC1 may better present spectral information with topography subdued.
  • PCs represent condensed and independent image information and therefore produce more colourful (i.e. informative) colour composites.
  • However, there is a problem: a PC is a linear combination of the original spectral bands, so its relationship to the original spectral signatures of targets representing ground objects is no longer apparent.
  • To solve this problem, a feature‐oriented PC selection (FPCS) method for colour composition was proposed.
38
Q

Use a diagram for a two dimensional case to explain the principle of principal component analysis (PCA).

A
  • In general, for multi-spectral imagery, the reflective bands are highly correlated, and, for example, if there is a correlation of 0.99 between two bands, this means that there is 99% information redundancy between the two bands, with only 1% of unique information.
  • As such, multi-spectral imagery is not efficient for information storage.
  • As shown by the 2D illustration, suppose the image data points form an elliptic cluster;
  • The aim of PCA is to rotate the orthogonal co‐ordinate system of band 1 and band 2 to match the two axes of the ellipsoid, the PC1 and PC2.
  • The coordinates of each data point in the PC co-ordinate system will be the DNs of the corresponding pixels in the PC images.
  • The first PC is represented by the longest axis of the data cluster and the second PC the second longest, and so on.
  • The axes representing high order PCs may be too short to represent any substantial information, and then the apparent m-dimensional ellipsoid is effectively degraded to n (n ≤ m) independent dimensions.
  • In this way, PCA reduces image dimensionality and represents nearly the same imagery information with fewer independent dimensions in a smaller dataset without redundancy.
39
Q

What is a point operation for image processing?

Give the mathematical definition.

A

A process that transforms a single input image, X, to a single output image, Y, through a function f, in such a way that the DN of an output pixel, yij, depends only on the DN of the corresponding input pixel xij.

yij = f(xij)

Here X represents a digital image, xij the DN of the pixel at the ith line and jth column, Y the image derived from X by the function f, and yij the output value corresponding to xij.

40
Q

Using a diagram, explain why point operation is also called histogram modification.

A
  • Point operation is also called histogram modification because the operation only alters the histogram, h(x), of an image, but not the spatial relationship of the image pixels.
  • For pixels with the same input DN, x, but different locations, the function, f, will produce the same output DN, y.
  • Therefore, point operation can be more concisely defined as:

y = f(x)

41
Q

What is contrast enhancement?

A
  • Sometimes called radiometric enhancement or histogram modification, contrast enhancement is the most basic but very effective and efficient technique to optimise the image contrast and brightness for visualization, or to highlight information in particular DN ranges.
  • Contrast enhancement is a point operation that modifies the image brightness and contrast but does not alter the image size and textures.
42
Q

What is a false colour composite?

A

If the image bands displayed in red, green and blue do not match the spectra of these three primary colours, a false colour composite image is produced.

A typical example is the so-called standard false colour composite, in which the near-infrared band is displayed in red, the red band in green and the green band in blue.

43
Q

Describe the basic configuration of single pass and repeat pass InSAR, using diagrams.

A
  • A SAR interferometer acquires two SLC images of the same scene with the antennas separated by a distance B called the baseline.
  • For a single-pass SAR interferometer, such as a SAR interferometer on board an aircraft or the space shuttle (e.g. the SRTM mission), the two images are acquired simultaneously via two separate antennas; one sends and receives microwave signals while the other receives only (Figure 1a).
  • In contrast, a repeat-pass SAR interferometer acquires a single image of the same area twice from two separate orbits with a minor drift which forms the baseline B (Figure 1b); this is the case for most SAR EO satellites.
44
Q

What is a single look complex (SLC) image?

A

An SLC image is composed of pixels of complex numbers which record not only the intensity (the energy of microwave signals returned from targets) but also the phase of the signal which is determined by the distance between the corresponding ground position and the radar antenna.

Given the complex number of an SLC pixel, c = a + ib, the magnitude of c (the SAR intensity image) is:

Mc = √(a² + b²)
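A one-pixel sketch (hypothetical value, not from the deck) of extracting both quantities stored in an SLC pixel:

```python
import cmath

# An SLC pixel as a complex number c = a + ib: the magnitude gives the
# intensity image, the argument gives the phase exploited by InSAR.
c = 3 + 4j
magnitude = abs(c)        # equals (a**2 + b**2) ** 0.5
phase = cmath.phase(c)    # phase set by the antenna-to-target distance
print(magnitude)          # -> 5.0
```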

45
Q

Use a diagram to explain the principle of InSAR.

A
  • InSAR (Interferometric Synthetic Aperture Radar) technology exploits the phase information in SAR single look complex (SLC) images for earth and planetary observations.
  • A SAR interferogram shows the phase differences between the corresponding pixels of the same object in two SAR images taken from two slightly different positions. It represents topography as fringes of interference.
  • Based on this principle, InSAR technology has been developed and applied successfully for topographic mapping and the measurement of terrain deformation caused by earthquakes, subsidence, volcano deflation and glacial flow.
  • The purpose of InSAR is to derive an SAR interferogram, φ, which is the phase difference between the two coherent SLC images (often called a fringe pair):

φ = φ1 − φ2

  • The applications of InSAR are largely based on the relationships between the interferogram, topography and terrain deformation, for which the baseline B, and especially the perpendicular baseline, B⊥, plays a key role.
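In practice the phase difference is usually computed via the complex conjugate product of the co-registered SLC pixels, which yields φ1 − φ2 already wrapped to (−π, π]. A two-pixel sketch (hypothetical values, not from the deck):

```python
import numpy as np

slc1 = np.array([1 + 1j, 2 + 0j])   # pixels of the first SLC image
slc2 = np.array([1 + 0j, 0 + 2j])   # corresponding pixels of the second

# Interferogram: phase difference phi = phi1 - phi2, wrapped into (-pi, pi].
interferogram = np.angle(slc1 * np.conj(slc2))
print(interferogram)   # first pixel: pi/4, second pixel: -pi/2
```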
46
Q

Describe the main application and basis of the InSAR (as opposed to DInSAR and coherence) technique?

A
  • InSAR interferogram is used for DEM generation.
  • The single pass configuration of InSAR is preferred for the complete elimination of temporal decorrelation and to ensure a high quality interferogram.
47
Q

Describe the main application and basis of the DInSAR (as opposed to InSAR and coherence) technique?

A
  • Differential InSAR (DInSAR) is used for the measurement of terrain deformation.
  • This uses repeat pass InSAR with at least one pair of across-deformation event SAR acquisitions.
48
Q

Describe the main application and basis of the InSAR coherence (as opposed to InSAR and DInSAR) techniques?

A
  • InSAR coherence technique is for random change detection.
  • It is very important to note that this technique is for detection rather than measurement.
  • A coherence technique must be based on repeat pass InSAR.
49
Q

In comparison with photogrammetric method using stereo imagery, discuss the advantages and disadvantages of using InSAR for DEM generation.

A
  • The InSAR-measured phase difference is a variable in the 2π period (2π-modulo wrapped), with fringes that occur in repeating 2π cycles; it does not give the actual phase difference, which could be n times 2π plus the InSAR-measured phase difference.
  • The interferometric phase therefore needs to be unwrapped to remove the 2π-modulo ambiguity in order to generate DEM data.
  • There are also other corrections necessary such as the removal of the ramps caused by the earth’s curvature and by the direction angle between the two paths as they are usually not strictly parallel.
  • For InSAR, the wider the perpendicular baseline B⊥, the higher the elevation resolution. However, too wide a B⊥ introduces more severe spatial decorrelation, which degrades the quality of the interferogram and thus of the DEM.
  • Usually a B⊥ of several tens of metres to about 1000 m is used for C band InSAR.
  • The elevation resolution of InSAR is therefore, for C band for example, generally no better than 10 m.
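The baseline/elevation trade-off can be illustrated with the standard repeat-pass "altitude of ambiguity" relation (the terrain height change that produces one full 2π fringe). The satellite parameter values below are illustrative ERS-like C-band assumptions, not figures from the source:

```python
import math

# Illustrative, ERS-like C-band values (assumptions, not from the source):
wavelength = 0.056        # m, C band
slant_range = 850e3       # m, sensor-to-target distance
incidence = math.radians(23.0)
b_perp = 150.0            # m, perpendicular baseline B_perp

# Altitude of ambiguity: height change giving one full 2*pi fringe.
# A wider B_perp gives a smaller ambiguity (finer elevation sensitivity)
# but more spatial decorrelation, as noted in the card above.
h_ambiguity = wavelength * slant_range * math.sin(incidence) / (2.0 * b_perp)
print(f"{h_ambiguity:.1f} m per fringe")
```

Doubling `b_perp` halves the height per fringe, which is why wide (but not too wide) baselines are preferred for DEM generation.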
50
Q

Select the only correct statement from the four below and explain the reasons for your choice.

(a) InSAR technique can be used to produce DEMs of cm level accuracy of elevation.
(b) Differential InSAR technique can be used to produce DEMs of cm level accuracy of elevation.
(c) Differential InSAR technique can measure better than cm level deformation of land surface.
(d) SAR coherence image can measure up to cm level (half radar wavelength) deformation of land surface.
A

(a) InSAR technique can be used to produce DEMs of NO BETTER THAN 10 M level accuracy of elevation.
(b) Differential InSAR technique can be used FOR THE MEASUREMENT OF TERRAIN DEFORMATION AS CAUSED BY EARTHQUAKES, SUBSIDENCE, GLACIAL MOTION AND VOLCANO DEFLATION.

(c) Differential InSAR technique can measure better than cm level deformation of land surface.

(d) SAR coherence image can DETECT up to cm level (half radar wavelength) RANDOM CHANGES of land surface.

51
Q

Select the only correct statement from the four below and explain the reasons for your choice:

a. InSAR technique can be used to produce DEMs of cm level accuracy of elevation.
b. Differential InSAR technique can be used to produce DEMs of cm level accuracy of elevation.
c. Differential InSAR technique can detect up to cm level (half radar wavelength) random changes of land surface.
d. SAR coherence image can detect up to cm level (half radar wavelength) random changes of land surface.
A

a. InSAR technique can be used to produce DEMs of NO BETTER THAN 10 M level accuracy of elevation.
b. Differential InSAR technique can be used FOR THE MEASUREMENT OF TERRAIN DEFORMATION AS CAUSED BY EARTHQUAKES, SUBSIDENCE, GLACIAL MOTION AND VOLCANO DEFLATION.
c. Differential InSAR technique can MEASURE up to cm level (half radar wavelength) DEFORMATION of land surface.

d. SAR coherence image can detect up to cm level (half radar wavelength) random changes of land surface.

52
Q

What is decision making in GIS?

A
  • A decision must be made based on the level of acceptable risk and the degree of confidence (error and uncertainty) in the available data:
  • Aiming for a quantitative prediction, with assurances of minimised uncertainty.
  1. Deterministic Decision Making
  2. Probabilistic Decision Making
  3. Fuzzy Decision Making
53
Q

Describe Deterministic Decision Making

A
  • ‘Controls’ data and relationships between inputs,
  • Decisions and outcomes are understood with some certainty (rare!)
  • Categorical classes, rules and thresholds can be applied
54
Q

Describe Probabilistic Decision Making

A
  • Environment, relationships and outcomes are uncertain.
  • In general, uncertainty is treated as ‘randomness’ (which is not normal in nature).
  • Categorical classes, rules and thresholds should not be applied.
  • Produces a true or false result - no degree of uncertainty is accommodated unless as a separate and distinct state.
55
Q

Describe Fuzzy Decision Making

A
  • Uncertainties are related to natural variation, imprecision, lack of understanding or insufficient data (or all of these).
  • Consider variable class membership (rather than true or false) - allowing infinite number of states, representing the increasing possibility of being true.
56
Q

What are the uncertainties in GIS spatial analysis?

A
  1. Criterion uncertainty (data quality)
  2. Threshold uncertainty (data & class descriptions)
  3. Decision Rule uncertainty (data handling)

Fuzzy logic can assist with 2 and 3

57
Q

What is Criterion uncertainty?

A
  • (data quality)
  • Arises from measurement and identification errors, and from data quality.
58
Q

What is Threshold uncertainty?

A
  • (data & class descriptions)
  • Boundaries between classes are usually artificially applied but are rarely rigid in nature.
  • Concept of ‘possibility’ would be useful in describing natural variation rather than categories, so fuzzy boundaries can be used.
59
Q

What is Decision Rule uncertainty?

A
  • (data handling)
  • ‘Hard’ rules for firm evidence, plentiful knowledge and/or legal prescription, or;
  • ‘Soft’ rules for circumstantial evidence, belief and plausibility.
60
Q

Briefly describe the two basic types of criteria used in a multiple criterion evaluation (MCE) exercise.

A
  • The criteria being evaluated are generally one of two types,
  • constraints that are inherently Boolean and limit the area within which the phenomenon is feasible, or
  • factors that are variable and are measured on a relative scale representing a variable degree of likelihood for the occurrence of the phenomenon.
  • Preparation of both types is important
  • Each has a different role within the model
61
Q

What are the features of constraint (thematic and discrete) criterion for MCE?

A
  • Discretely sampled phenomena (may be Boolean)
  • Containing values on nominal or ordinal measurement scale (i.e. not real DN but symbols or ‘pointers’)
  • Often used to satisfy legally prescriptive requirements
62
Q

What are the features of factor (variable) criterion for MCE?

A
  • Spatially variable, continuously sampled phenomena (real DN)
  • Containing values on interval, ratio and cycle scales (integer or floating point)
  • Must be scaled to a consistent value range and direction
63
Q

Describe, with aid of a diagram, the principles of fuzzy membership.

A
  • Fuzzy membership or fuzzy sets provide an elegant solution to the problem of threshold and decision rule uncertainty by allowing ‘soft’ thresholds and decisions to be made.
  • It converts criterion value ranges to consistent value range/type and incorporates uncertainties by..
  • Breaking the requirement of ‘total membership’ of a class;
  • Instead of just two states of belonging for a class, a fuzzy variable can have one of an infinite number of states ranging from 0 (non‐membership) to 1 (complete membership) and the values in between represent the increasing possibility of membership.
  • The fuzzy set membership function (expressed on a continuous scale from 0 (non-member) to 1 (full membership)) can be most readily appreciated with reference to a simple linear function but it may also be sigmoidal or J‐shaped, and monotonic or symmetric in form (Fig. 18.3).
64
Q

Give an example of an instance of ‘fuzzy membership’ use.

A
  • e.g. for slope instability
  • Gentle slopes (<4°) are unlikely to be associated with instability and so would have a fuzzy membership of 0,
  • whereas steeper slopes (>10°) are likely to be associated with instability and so would have a fuzzy membership of 1.0.
  • A simple linear membership function, µ(x), is a good example, where x is the slope angle, and
  • every value of x is associated with a value of µ(x); the ordered pairs [x, µ(x)] are collectively known as the fuzzy set.
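The slope example above can be written as a minimal linear membership function. The 4° and 10° thresholds come from the card; the function name is mine:

```python
def fuzzy_slope_membership(x, low=4.0, high=10.0):
    """Linear fuzzy membership mu(x) for slope instability:
    0 at or below `low` degrees, 1 at or above `high` degrees,
    and a linear ramp (increasing possibility of membership) in between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

print(fuzzy_slope_membership(3.0))   # 0.0  (gentle slope: non-member)
print(fuzzy_slope_membership(7.0))   # 0.5  (partial membership)
print(fuzzy_slope_membership(12.0))  # 1.0  (steep slope: full member)
```

A sigmoidal or J-shaped curve would replace only the ramp branch; the 0/1 clamping at the thresholds stays the same.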
65
Q

How does MCE help to reduce ‘data uncertainty’?

A
  • Increase resolution of data,
  • Improve standards and procedures
  • More rigorous sampling
  • Error assessment techniques (RMS)
66
Q

How does MCE help to reduce ‘threshold uncertainty’?

A
  • GIS data handling to create soft or fuzzy boundaries and scaling
67
Q

How does MCE help to reduce ‘decision rule uncertainty’?

A

Decision rule uncertainties can be quantified and accommodated (to varying degrees) through use of different MCE techniques:

  • Fuzzy sets, fuzzy logic & fuzzy modelling (also Vectoral Fuzzy Modeling)
  • AHP & Weighted factors in linear combination (WLC)
  • Probability estimates & distribution functions (Bayesian), weights of evidence modeling
  • Dempster-Shafer Theory (incorporate concepts of belief and plausibility)
68
Q

How does MCE help to reduce uncertainty?

A

Choice of method is made according to the type of problem, type of uncertainties involved, degree of understanding around the problem

69
Q

What is hyperspectral remote sensing?

A
  • Revolutionary development of passive sensors via the combination of an imaging system with a spectro-radiometer, to collect a continuous spectral signature or profile using bands a few nm wide.
  • This produces a data cube (dense volume): many rows, columns and hundreds of bands.
  • Demands processing via analysis of the whole spectral signature (or fingerprint) of a material rather than looking at differences in reflectance between bands.
  • Allows identification of materials (not merely discrimination)
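The "data cube" structure can be illustrated with a toy NumPy array; the dimensions are arbitrary (224 bands echoes the AVIRIS-style figure mentioned later in the deck):

```python
import numpy as np

# Toy hyperspectral cube: rows x columns x bands (e.g. 224 narrow bands).
rng = np.random.default_rng(1)
cube = rng.random((100, 120, 224))

spectrum = cube[50, 60, :]    # full spectral signature of one pixel
band_image = cube[:, :, 100]  # one 2D image per spectral band

assert spectrum.shape == (224,)
assert band_image.shape == (100, 120)
```

Analysing the whole per-pixel `spectrum` (rather than differences between a few broad bands) is what enables identification of materials.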
70
Q

What are the principles of a hyperspectral imaging system? Use a diagram to supplement your answer.

A
  • The incoming EMR from land surface goes through the sensor optics and is split into hundreds of very narrow spectral beams by a complicated spectral dispersion device (e.g. interference filter),
  • and finally the spectral beams are detected by arrays of CCDs corresponding to, say 224, spectral bands.
  • A hyper-spectral system can be either an across track mechanical scanner with a small number of detectors for each band, or an along-track push-broom scanner with a panel of hundreds of line arrays of CCDs.
  • Hyperspectral sensors mainly operate either in the VNIR only or in the VNIR & SWIR spectral ranges; there are very few TIR hyperspectral sensors as yet, but they are in development.
71
Q

Explain the purpose of the 2D CCD sensor array of detectors in this hyperspectral imaging system.

A
  • One dimension used for spectral separation and the 2nd dimension used for imaging in one spatial direction.
  • Second spatial dimension arises from movement of scanning camera over the scene (e.g. aircraft flight).
  • Result is one 2D image for each spectral channel, i.e. every pixel contains one full spectrum.
72
Q

What is a hyperspectral sensor designed to do?

A

Collect data of very high spectral resolution, to provide a near-contiguous spectral signature, and thus to identify minerals/substances.

73
Q

Explain the pre-processing that is necessary before hyperspectral imagery can be used for mineral identification.

A
  • The hyperspectral imagery data represent radiance spectra, of the land surface targets, which are dependent on solar radiation, atmospheric scattering and absorption and target spectral reflectance.
  • To analyse the hyperspectral data in terms of the target spectral properties (reflectance) the data must be calibrated to be comparable with reflectance.
  • There are several commonly used methods to achieve this.
74
Q

There are several commonly used methods to achieve data calibration for hyperspectral sensing. What are they?

A
  • Flat Field Calibration
  • Internal Average Relative Reflectance (IARR)
  • Empirical Line Calibration

The first two strategies rely on information from the image only; the last requires knowledge of the surface physical properties and atmospheric conditions at time of imaging.

75
Q

Explain the Flat Field Calibration (FFC) pre-processing method that is necessary before hyperspectral imagery can be used for mineral identification.

A
  • The ‘Flat Field Calibration’ technique is used to normalise the images to an area of known ‘flat’ or uniform reflectance.
  • The method requires that a large, spectrally flat and uniform area can be located in the image data.
  • The radiance spectrum for this area is assumed to be composed primarily of atmospheric effects (scattering & absorption) and the solar spectrum - and to have a relatively flat spectral reflectance curve.
    • Preferably a bright area (with min. noise) - not always easy to find.
  • The average radiance spectrum of the area is used as a reference spectrum, which is then divided into the spectrum for each pixel of the image.
  • Result = apparent reflectance data that can be compared with lab. spectra.
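The division step can be sketched with a synthetic cube; the flat-area window coordinates are a hypothetical stand-in for an area located by the analyst:

```python
import numpy as np

# Synthetic radiance cube (rows x cols x bands); values are illustrative.
rng = np.random.default_rng(2)
cube = rng.random((50, 50, 224)) + 0.1

# Average radiance spectrum of a (hypothetical) large, bright, flat area.
reference = cube[10:20, 10:20, :].mean(axis=(0, 1))

# Divide the reference spectrum into every pixel spectrum; NumPy broadcasts
# the (224,) reference across all pixels.
apparent_reflectance = cube / reference
```

Within the flat-field window itself the band-wise mean of the result is exactly 1, which is the sense in which atmosphere and solar effects are "normalised out".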
76
Q

Explain the Internal Average Relative Reflectance Conversion (IARR) pre-processing method that is necessary before hyperspectral imagery can be used for mineral identification.

A
  • The IARR calibration technique is used to normalise images to a scene-average spectrum.
  • The radiance values in each spectrum are scaled so that their sum is constant over the entire image.
  • Remove gross topographic shading and overall brightness variations.
  • This is particularly effective in areas where no ground measurement data are available and little is known about the scene.
  • It works particularly well for arid and semi-arid areas with no dense vegetation.
  • Result = apparent reflectance data that can be compared with lab. spectra.
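A minimal sketch of the two IARR steps described above, on synthetic data (the constant-sum scaling and scene-mean normalisation follow the card's wording; the array values are illustrative):

```python
import numpy as np

# Synthetic radiance cube (rows x cols x bands); values are illustrative.
rng = np.random.default_rng(3)
cube = rng.random((50, 50, 224)) + 0.1

# 1) Scale each pixel spectrum to a constant sum, removing gross
#    topographic shading and overall brightness variations.
scaled = cube / cube.sum(axis=2, keepdims=True) * cube.shape[2]

# 2) Normalise by the scene-average spectrum to obtain internal average
#    relative reflectance.
iarr = scaled / scaled.mean(axis=(0, 1))
```

Because the reference is the scene mean rather than a chosen flat field, no ground knowledge is needed, at the cost of the heterogeneity assumption discussed in the next card.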
77
Q

What assumptions are made when using the Internal Average Relative Reflectance Conversion (IARR) pre-processing method before hyperspectral imagery is used?

A

The method assumes the scene is heterogeneous enough that spatial variations in spectral reflectance characteristics cancel out, producing a mean spectrum similar to a flat field spectrum (generally, but not always, true).

78
Q

Explain the Empirical Line Calibration (ELC) pre-processing method that is necessary before hyperspectral imagery can be used for mineral identification.

A
  • Force image data to match selected field reflectance spectra - requires ground (field-spectral) measurements and/or knowledge.
  • Need 2+ ground targets for which reflectance is measured in the field - targets of at least 1 light and 1 dark area.
  • Find the linear regression for each band-to-band plot to find the transformation between radiance and reflectance.
  • The slope quantifies the combined effects of multiplicative radiance factors (gain), and the intercept with the radiance axis gives the additive component (offset).
  • The gains and offsets (a and b) calculated in the regression are applied to the radiance spectra to produce apparent reflectance on a pixel-by-pixel basis.
  • Does not account for all topographic effects.
  • ELC is best but not always possible.
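The per-band regression can be sketched as follows; the target radiance and field-reflectance numbers are hypothetical (one light and one dark target over three bands), purely to show the mechanics:

```python
import numpy as np

# Hypothetical data: per-band image radiance and field-measured reflectance
# for two ground targets (columns: light target, dark target); rows = bands.
radiance = np.array([[180.0, 40.0], [150.0, 30.0], [120.0, 25.0]])
reflect  = np.array([[0.60, 0.08], [0.55, 0.06], [0.50, 0.05]])

# Per-band linear regression: reflectance = gain * radiance + offset.
gains, offsets = zip(*(np.polyfit(L, R, 1)
                       for L, R in zip(radiance, reflect)))
gains, offsets = np.array(gains), np.array(offsets)

# Apply the gains and offsets to any pixel's radiance spectrum.
pixel = np.array([160.0, 140.0, 100.0])
apparent = gains * pixel + offsets
```

With only two targets the fit is exact; with more targets `np.polyfit` returns the least-squares line, which is why more (and more contrasting) targets improve the calibration.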
79
Q

What is the problem with pre-processing methods in hyperspectral analysis?

A
  • Calibration and correction for apparent reflectance only.
  • Apparent reflectance spectrum presents the general spectral shape but:
    • Not actual reflectance values
    • Not corrected for albedo information
80
Q

What approaches are employed to extract mineralogical information from narrow-band hyperspectral (HS) image datasets? Give examples of the realistic targets in each case.

A
  1. Each band DN represents average reflectance across very narrow range of wavelengths, few nanometres.
  2. Analyse individual spectral signatures.
  3. Interpretation is impossible without understanding spectral signatures of minerals & rocks.
  4. First calibrate to apparent reflectance.
  5. Identify noise component e.g. using MNF
  6. Use n-dimensional plotting to identify pure pixels (PPI) - produce an end-member spectral library.
  7. Use these image end-members (or lab. spectra) as references to classify the image spectra according to their similarity to the reference spectra
  8. Spectral classification methods include: spectral matching, SAM, LSU, MTMF, SFF.
  9. Narrow bands (complete spectrum) can be used to perform mineral species level discrimination and identification.
81
Q

What technique(s) would one use to identify the noise component while processing HS information?

A

The Minimum Noise Fraction (MNF) transform

82
Q

What spectral classification technique(s) would one use while processing HS information?

A
  • Spectral Angle Mapper (SAM)
  • Linear spectral unmixing (LSU)
  • Matched Filtering (MF) & Mixture Tuned Match Filtering (MTMF)
  • Spectral Feature Fitting (SFF)
83
Q

What approaches are employed to extract mineralogical information from broad band multispectral (MS) image datasets? Give examples of the realistic targets in each case.

A
  1. Each band DN represents average reflectance across broad range of wavelengths, microns.
  2. General image enhancement (e.g. using BCET) and remove highly correlated information - spectral highs and lows remain.
  3. Extract broad spectral signatures of target materials and identify significant spectral absorptions.
  4. Devise ratios and ‘band algebra’ to highlight or enhance particular spectral absorptions, e.g. iron-oxide ratio or clay ratio or NDVI.
  5. Calculate image statistics to see contribution of each band to individual PCs etc.
  6. Use other techniques like PCA, classification and IHS to enhance spectrally significant portions of the data.
  7. Broad bands ‘alias’ spectral absorptions and so can only detect spectral differences between lithologies and some mineral groups.
84
Q

Summarise HS data and processing techniques.

A
  • The ‘hyper’ or ‘over-sampled’ nature (more than enough bands) and the high degree of interband correlation in HS data demand a different approach from ‘hypo’ or broad-band data.
  • High potential for mixed pixels but allows sub-pixel spectral mapping (although can never arrive at exact mixtures)
  • If data are not truly HS then the Minimum Noise Fraction (MNF) process will reveal this.
  • Some HS techniques (e.g. Spectral Angle Mapper, SAM) can sometimes be used with non-HS data (e.g. Aster, and potentially Worldview-3) to good effect - but with caution (spectral resolution will always limit the results).
  • HS processing focuses on reducing data dimensionality first, then allows us to classify the data rich components using image-derived or pure/library spectra.
  • Species level identification and mapping is possible and has been used to great effect over past 40 yrs.
  • Techniques and instruments now adapted to core-scanning & outcrop mapping.
  • HS sensors are already onboard UAVs (VNIR only as yet) but soon will include SWIR and TIR.
85
Q

Define ‘Hazard’

A

Probability (likelihood) of the occurrence of an unwelcome event.

86
Q

Define ‘risk’

A

The expected degree of loss, in terms of probability & cost, as caused by an event.

Probability (hazard) modulated by the economic value of the losses per event

87
Q

Express ‘risk’ as an “equation”

A

Risk = (Hazard × Vulnerability) × Exposure

or more simply:

R = H x V

88
Q

Define ‘exposure’

A

Refers to the physical aspects of vulnerability;

the sum of the elements at risk

89
Q

Define ‘vulnerability’

A

Physical, social, economic or environmental conditions that increase the susceptibility of a community to the impact of hazards

90
Q

Use a flow-chart (based on an example) to explain, simply, a multi-criteria evaluation (MCE) system.

A
  • Slope stability hazard assessment is the principal objective
  • There are 3 sub-objectives
  • 6 input criteria and several possible outcomes or environmental states, representing differing levels of hazard (probability or likelihood)
91
Q

Use a diagram to explain the principles of IHS transformation based saturation stretch (decorrelation stretch technique, IHSDS).

Comment on methodology and computing efficiency.

A
  • High correlation generally exists among spectral bands of multi-spectral images.
  • As a result, original image bands displayed in RGB formulate a slim cluster along the grey line occupying only a very small part of the space of the colour cube (left figure),
  • Contrast enhancement on individual image bands can elongate the cluster in the colour cube, but it is not effective in increasing the volume of the cluster because such a stretch is equivalent to stretching intensity only (middle figure).
  • In order to increase the volume of data cluster in the colour cube, the data cluster should expand in both directions along and perpendicular to the grey line.
  • This is equivalent to stretch both intensity and saturation components (right figure).
  • IHSDS is interactive and flexible based on user observation of the saturation image and its histogram.
92
Q

The IHSDS technique involves what steps?

A
  1. (BCET stretch - or linear stretch with appropriate clipping)
  2. RGB-IHS transformation
  3. Saturation component stretching
  4. IHS-RGB transformation
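The transform-stretch-inverse sequence can be sketched for a single pixel using Python's standard `colorsys` HSV transform as a stand-in for the RGB-IHS/IHS-RGB pair (HSV likewise separates an intensity-like component from saturation; this is an analogy, not the exact IHS model in the source):

```python
import colorsys

def saturation_stretch(r, g, b, k=1.5):
    """Decorrelation stretch of one RGB pixel (components in [0, 1]):
    transform to HSV (stand-in for RGB-IHS), multiply saturation by k
    with clipping, then transform back (stand-in for IHS-RGB)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s = min(1.0, s * k)
    return colorsys.hsv_to_rgb(h, s, v)

# A low-saturation pixel near the grey line gains colour contrast while
# its hue and intensity-like value are preserved:
print(saturation_stretch(0.5, 0.45, 0.4))
```

Stretching `s` while leaving `h` and `v` untouched is exactly the "expand perpendicular to the grey line" idea from the previous card.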
93
Q

Explain the principle of the direct decorrelation stretch (DDS) technique.

Comment on methodology and computing efficiency.

A
  • This technique performs a direct saturation stretch without using RGB-IHS and IHS-RGB transformations.
  • The DDS achieves the same effect as the IHS decorrelation stretch.
  • As DDS involves only simple arithmetic operations and can be controlled quantitatively, it is much faster, more flexible and more effective than IHS DS technique.
  • As with IHSDS, the 3 bands for colour composition must be well stretched (e.g. BCET or linear stretch with appropriate clipping) before the DDS is applied.
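A per-pixel sketch of the DDS arithmetic, assuming the published formulation in which a fraction k of the achromatic component min(r, g, b) is subtracted from each band (hedged: parameter names and the example pixel are mine):

```python
def dds(r, g, b, k=0.5):
    """Direct decorrelation stretch of one pixel: subtract a fraction k of
    the achromatic component min(r, g, b). This stretches saturation
    directly, with only simple arithmetic and no IHS transformations."""
    a = min(r, g, b)  # achromatic (grey) component of the pixel
    return r - k * a, g - k * a, b - k * a

print(dds(0.5, 0.45, 0.4, k=0.5))  # (0.3, 0.25, 0.2)
```

Because k controls the stretch quantitatively, the operation is fast and easy to tune, which is the computing-efficiency advantage claimed over IHSDS.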
94
Q

Describe the principle of the Smoothing Filter based Intensity Modulation (SFIM) method.

A
  • Both IHS and Brovey transform image fusion techniques can cause colour distortion if the spectral range of the intensity replacement (or modulation) image is different from the spectral range covered by the three bands used in a colour composite.
  • This problem is inevitable in colour composites that do not use consecutive spectral bands.
  • The problem may become serious for vegetation and crop fields if the images for fusion are taken in different growth seasons.
  • Preserving the original spectral properties is very important for most remote sensing applications based on spectral signatures, such as lithology, soil and vegetation.
  • The spectral distortion introduced by these fusion techniques is uncontrolled and not quantified because the images for fusion are often taken by different sensor systems on different dates or in different seasons.
  • It therefore cannot be regarded as spectral enhancement and should be eliminated by all means to avoid unreliable interpretation for applications.
  • The SFIM technique is a genuine spectral preserve image fusion technique applicable to co-registered multi-resolution images.
95
Q

Explain how the SFIM operates.

A

For the SFIM operations, the lower resolution image must be interpolated to the same pixel size as the higher resolution image by bilinear or cubic interpolation in the co-registration process.
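The fusion itself can then be sketched as below. A simple k×k mean filter stands in for the smoothing filter (the kernel size should match or exceed the resolution ratio, as the later card on blurring notes); the helper names are mine:

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean (smoothing) filter with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def sfim(low_interp, high, k=3):
    """SFIM: the lower-resolution band (already interpolated to the higher
    resolution pixel size) is modulated by the ratio of the higher-resolution
    image to its smoothed version, injecting spatial detail only."""
    return low_interp * high / mean_filter(high, k)
```

Where the high-resolution image is locally uniform the ratio is 1 and the fused pixel equals the original lower-resolution value, which is why SFIM preserves spectral properties.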

96
Q

Explain the major advantage of SFIM as an image fusion technique in comparison with HSI and Brovey transformation based image fusion techniques.

A
  • The advantage of the SFIM over the IHS and Brovey transform fusion technique is that it improves spatial details with the fidelity to the image spectral properties and contrast.
97
Q

What problem may occur in an SFIM fused image (e.g. TM and SPOT Pan fusion)? Explain how to deal with the problem to improve the image quality.

A
  • The SFIM is more sensitive to image co-registration accuracy than the IHS and Brovey transform.
  • Inaccurate co-registration may result in blurring edges in the fused images.
  • This problem can be alleviated using a smoothing filter with a kernel larger than the resolution ratio between the higher and lower resolution images.
  • E.g., for TM and SPOT Pan fusion 5x5 and 7x7 filters will produce better results than a 3x3 filter.
  • We have developed an advanced pixel-wise image co-registration algorithm that resolves this problem.
98
Q

Why is the image algebraic operation also called the multi-image point operation?

A
  • For multi-spectral (or more generally, multi-layer) images, algebraic operations, such as four basic arithmetic operations (+, -, x, /), logarithmic, exponential, sin, tan etc., can be applied to the DNs of different bands for each pixel to produce a new image.
  • Such processing is called image algebraic operation.
  • Algebraic operations are performed pixel by pixel among DNs of spectral bands (or layers) for each pixel without involving neighbourhood pixels.
  • They can therefore be considered as multi-image point operations.
99
Q

Give the mathematical definition of the multi-image point operation.

A

y = f (x1, x2, …, xn)

where n is the number of bands (or layers).
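A toy NumPy example of y = f(x1, …, xn); the particular f here is arbitrary, chosen only to show that each output pixel depends solely on the co-located band values, never on neighbourhood pixels:

```python
import numpy as np

# Three co-registered bands (toy data).
rng = np.random.default_rng(4)
x1, x2, x3 = rng.random((3, 4, 4))

# A multi-image point operation: any y = f(x1, ..., xn) evaluated per
# pixel; this f is purely illustrative.
y = np.log1p(x1) + x2 * x3
assert y.shape == (4, 4)
```

Element-wise NumPy arithmetic implements point operations directly, since each output element is computed from the corresponding elements of the inputs.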

100
Q

Describe the image difference (subtraction) operation.

A
  • Image subtraction produces a difference image from two input images

Y = (1/k) (wᵢXᵢ - wⱼXⱼ)

  • The weights, wᵢ and wⱼ, are important to ensure that balanced differencing is performed.
  • If the brightness of Xᵢ is significantly higher than that of Xⱼ, for instance, the difference image will be dominated by Xᵢ and, as a result, the true difference between the two images is not effectively revealed.
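A small sketch of balanced differencing; choosing the weight by matching the two image means is one simple illustrative strategy (the data and the constant k are made up):

```python
import numpy as np

# Toy bands where Xi is systematically brighter than Xj.
xi = np.array([[120.0, 130.0], [110.0, 140.0]])
xj = np.array([[ 60.0,  70.0], [ 55.0,  65.0]])

# Balance the images before differencing: weight Xj so the two means match,
# so neither image dominates the difference; k rescales the result.
wi, wj = 1.0, xi.mean() / xj.mean()
k = 2.0
y = (wi * xi - wj * xj) / k
```

After weighting, the difference image averages to zero and only genuine relative changes between the two inputs remain.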
101
Q

What are the advantages of image subtraction (differencing)?

A
  • Subtracting is one of the simplest and most effective techniques for selective spectral enhancement.
  • It is also useful for change detection and removal of background illumination bias.
102
Q

Do you think that the differencing operation between two images will increase or decrease SNR and why?

A
  • The subtraction operation reduces the image information and decreases the SNR.
  • This is obvious because image subtraction removes the common features while likely retaining the random noise in both images.
103
Q

Describe image ratio (division) operations.

A

Y = Xᵢ / Xⱼ

  • For image division processing, certain protection is needed to avoid overflow in case a number is divided by zero.
  • A commonly used trick is to shift the value range of the denominator image up by 1 to avoid zero.
  • We can consider the ratio as a coordinate transformation from a Cartesian coordinate system to a polar coordinate system, then

Y = Xᵢ / Xⱼ = tan(α)

α = arctan(Xᵢ / Xⱼ)

  • The ratio image Y is actually a tangent image of the angle α.
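Both the zero-protection trick and the polar-angle view can be sketched with NumPy (the band values are toy data; `np.arctan2` recovers the angle safely even where both bands are zero):

```python
import numpy as np

xi = np.array([[120.0,   0.0], [ 60.0, 200.0]])
xj = np.array([[ 60.0,   0.0], [ 30.0, 100.0]])

# Shift the denominator up by 1 so division by zero cannot occur.
ratio = xi / (xj + 1.0)

# Polar-coordinate view: the ratio is tan(alpha), so arctan2 gives the
# angle image alpha directly, with no special-casing of zeros.
alpha = np.arctan2(xi, xj)
```

Working with `alpha` instead of the raw ratio keeps reciprocal pairs (e.g. TM1/TM3 vs TM3/TM1) on one bounded, symmetric scale.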
104
Q

Ratio images are designed for what purpose?

How is this purpose fulfilled/operated?

A
  • Ratios are often designed to highlight the target features as high ratio DNs.
  • Direct stretch on ratio image Y may enhance the target features better, at the cost of losing the information represented by low ratio DNs
  • E.g. TM1/TM3 and TM3/TM1 contain the same information, but are different after linear scale.
  • Remember, when you design a ratio, make sure the target information is in high values in the ratio image.
105
Q

What are the applications of ratio images?

A
  • Ratio is an effective technique to selectively enhance spectral features.
  • Ratio images derived from different band pairs are often displayed in RGB system to generate ratio colour composites.
  • E.g. a colour composite of TM5/TM7 (blue), TM4/TM3 (green) and TM3/TM1 (red) may highlight clay mineral in blue, vegetation in green and iron oxide in red.
  • Many indices, such as Normalized Difference Vegetation Index (NDVI), have been developed based on both differencing and ratio operations.
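NDVI, mentioned above, is the standard example of combining differencing and ratioing; the reflectance values below are illustrative toy data:

```python
import numpy as np

# Toy near-infrared and red reflectance bands (values are illustrative).
nir = np.array([[0.50, 0.40], [0.45, 0.05]])
red = np.array([[0.10, 0.08], [0.12, 0.04]])

# NDVI = (NIR - red) / (NIR + red): high for healthy vegetation, low for
# bare soil and water; the tiny epsilon guards against a zero denominator.
ndvi = (nir - red) / (nir + red + 1e-12)
```

The normalised form bounds the index to [-1, 1], which also suppresses overall illumination differences in the same way as a plain ratio.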
106
Q

Use a diagram to explain how the ratio technique suppresses topography.

A
  • For a given incident angle of solar radiation, the radiation energy received by a land surface depends on the angle between the land surface and the incident radiation.
  • Therefore, solar illumination on the land surface varies with terrain slope and aspect, which results in topographic shadows.
  • The DNs in different spectral bands of a MS image are proportional to the solar radiation received by the land surface and its spectral reflectance.
  • Suppose a pixel representing a land surface facing the sun receives n times radiation energy of that received by another pixel of land surface facing away from the sun, then the DNs of the two pixels in spectral bands i and j are as below: