Final Exam Flashcards

1
Q

How are Digital Numbers (DNs) Represented?

A

Remote sensing instruments measure radiance in W/m2/sr.

For storage efficiency these values are converted to DNs (typically 8- or 16-bit).

8-bit: 2^8 = 256 values (0-255 range)

16-bit: 2^16 = 65536 values (0-65535 range)
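The bit-depth arithmetic above, plus a made-up linear radiance-to-DN scaling (real sensors publish their own gain/offset calibration constants), can be sketched in Python:

```python
import numpy as np

# Hypothetical radiance measurements (W/m2/sr) to be stored as DNs.
radiance = np.array([0.0, 12.5, 25.0, 50.0, 100.0])

# Illustrative linear scaling onto the 8-bit range [0, 255].
lmin, lmax = radiance.min(), radiance.max()
dn_8bit = np.round((radiance - lmin) / (lmax - lmin) * 255).astype(np.uint8)

# Number of distinct values each bit depth can store.
levels_8 = 2 ** 8     # 256 values (0-255)
levels_16 = 2 ** 16   # 65536 values (0-65535)
```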

2
Q

What is Irradiance? How is it Measured?

A

Irradiance (E): amount of radiant flux incident per unit area of a surface (W/m2)

3
Q

What is Exitance? How is it Measured?

A

Exitance (M): amount of radiant flux leaving per unit area of a surface (W/m2)

4
Q

What is Radiance? What unit is used to Measure it?

A

Radiance (L): the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area.

Unit: W/m2/sr

Radiant energy (Q), by contrast, is the energy carried by photons and is measured in Joules (J).
5
Q

What is Reflectance?

A

Reflectance (Rλ): the ratio of the radiant flux reflected from a surface (Lλ) to the radiant flux incident on it (Eλ), at a given wavelength. It is dimensionless.

6
Q

Why do we need to perform Radiometric Corrections?

A

There are a number of reasons:

- Data can be multi-temporal
- Data may be acquired from different satellites (various sensors)
- Biophysical parameters (e.g. chlorophyll concentration, vegetation biomass, NDVI) may also be estimated

7
Q

What are the two types of Radiometric Correction?

A

Relative: Uses information derived from the image itself or other images.


Absolute: Uses statistical methods, e.g. empirical line calibration (ELC), or modelling based on sensor and atmosphere characteristics.

8
Q

What are some examples of Relative Correction Methods?

A

Relative correction examples: Image normalization using histogram adjustments / Multiple date image normalization using regression / Dark object subtraction.
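Of these, dark object subtraction is simple enough to sketch in Python; the toy band and the assumption that the darkest pixel's signal is entirely atmospheric path radiance (haze) are illustrative only:

```python
import numpy as np

# Toy single-band image; the darkest pixel is assumed to be a dark
# object (e.g. deep clear water) whose true reflectance is near zero.
band = np.array([[38, 52, 71],
                 [45, 90, 60],
                 [41, 55, 200]], dtype=np.int32)

# Any signal in the darkest pixel is attributed to atmospheric path
# radiance (haze) and subtracted from every pixel in the band.
haze = band.min()
corrected = np.clip(band - haze, 0, None)
```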

9
Q

What are some examples of Absolute Correction Methods?

A

Empirical line calibration - forces remote sensing image data to match in situ reflectance measurements (ideally obtained at roughly the same time). Done by selecting two or more homogeneous targets, one bright and one dark (e.g. a sand pile and deep water).


ATCOR-2: A spatially-adaptive, fast, atmospheric correction and haze removal algorithm for flat terrain satellite images. Model uses sensor parameters and information about the atmosphere at the time of imaging.


10
Q

Why is Geometric Correction Important?

A

Because it corrects for displacements present in remotely sensed imagery and ensures that pixels/features in the image are in their proper and exact position on the earth's surface.

11
Q

What is Image Rectification?

A

It is the process of spatially anchoring an image to the world in order to allow images and data from the same areas to be used together or compared.

By transforming an image from one grid system to another (e.g. image row and column coordinates to map projection coordinates) we can remove geometric distortion so that individual pixels are in their proper planimetric (x,y) map locations.

12
Q

What are the two types of distortion common to remote sensing imagery? Explain.

A

Geometric and Radiometric.

Geometric distortion is related to how an image is anchored in space, and how accurately pixels reflect their actual location on the earth.

Radiometric distortion is related to the actual pixel DN values and unwanted contributions to the measured TOA radiance (e.g. atmospheric attenuation).

13
Q

Explain the difference between image -> image, and image -> map approaches to geometric correction. How does orthorectification compare to these methods?

A

Image-to-Image Registration (coregistration): Is the alignment of two images. Not necessarily conforming to a specific map projection.

Image-to-Map rectification: Removes distortions caused by sensors; the geometry of the image is also made planimetric (conforms to a map projection, allowing for measurements and interaction with other projected data).

Orthorectification: Is all of the above, plus the specific removal of distortions caused by relief displacement. Requires a digital elevation model (DEM).

14
Q

What is a GCP?

A

Ground Control Points (GCPs): Are points on the surface where both the image coordinates (x,y) and map coordinates (E/N; Lat/Long, etc) are known. They can be chosen from a map, rectified image, vectors, GPS, etc…

15
Q

What characteristics would an ideal set of GCP’s have?

A

An ideal set of GCPs would meet the following criteria:


- Clearly identifiable on both the uncorrected image and the source;

- They must be permanent features;

- They must be distributed across the entire area;

- There must be an adequate number for the mathematical model chosen for spatial interpolation.
16
Q

Describe 2 mathematical translation models. What are the minimum number of GCP’s required for each?

A

First-order polynomial (affine) transformation: the image can be transformed independently in the x and y directions and parallel lines are preserved, though the shape of the image is not always preserved. Only three pairs of GCPs are required, but more can be used to better define the geometric properties of the image.

Higher-order transformations: require six or more pairs of GCPs depending on the degree of the polynomial; the higher the order, the more GCPs required. These transformations do not preserve parallel lines.
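As a sketch of how such a model is fitted, the code below solves a first-order (affine) transformation from four hypothetical GCP pairs by least squares; all coordinates are invented:

```python
import numpy as np

# Hypothetical GCPs: image (x, y) pixel coordinates and the matching
# map (E, N) coordinates. A first-order model needs at least 3 pairs;
# four are used here, so the fit is over-determined.
image_xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
map_en = np.array([[500000, 4000000],
                   [503000, 4000000],
                   [500000, 3997000],
                   [503000, 3997000]], dtype=float)

# Design matrix [1, x, y]: least squares solves E = a0 + a1*x + a2*y
# and N = b0 + b1*x + b2*y, the first-order polynomial model.
A = np.column_stack([np.ones(len(image_xy)), image_xy])
coef, *_ = np.linalg.lstsq(A, map_en, rcond=None)

# Project a new image coordinate into map space.
pred = np.array([1.0, 50.0, 50.0]) @ coef
```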


17
Q

What is RMS error? What is the strategy to lower it if it too high?

A

Root Mean Square (RMS) error of each GCP: indicates the level of distortion in the map after the transformation. Acceptable RMS error is equal to half the length of one cell in the image (RMS < 0.5 pixels).

What to do when RMS error is too high:

1. Reject the GCP with the largest error

2. Collect more GCPs

3. Recompute the RMS error
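The RMS computation itself can be sketched as follows; the predicted and actual GCP positions (in pixel units) are invented for illustration:

```python
import numpy as np

# Predicted vs. actual positions of three GCPs after the
# transformation, in pixel units (hypothetical values).
predicted = np.array([[10.2, 20.1], [30.0, 39.8], [50.3, 60.0]])
actual = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])

# Per-GCP error: euclidean distance between predicted and actual.
per_gcp = np.linalg.norm(predicted - actual, axis=1)

# Total RMS error over all GCPs.
total_rms = np.sqrt(np.mean(np.sum((predicted - actual) ** 2, axis=1)))

# Rule of thumb from the card: below half a pixel is acceptable.
acceptable = total_rms < 0.5
```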

18
Q

Describe the 3 intensity interpolation resampling methods.

A

Nearest Neighbour Resampling: the DN from the nearest pixel in the uncorrected image is transferred to the new pixel location in the corrected image. (Simplest method; original values are not altered, though some may be duplicated.)

Bilinear Interpolation Resampling: takes an average of the four pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel DN values and creates new digital values in the output image. (Smooths the image; more processing time.)

Cubic Convolution Resampling: calculates the new DN from a block of 16 pixels (4×4) in the unrectified image surrounding the new pixel location in the corrected image. The result is much less blurry than with bilinear interpolation, and it is a good method for smoothing out noise. (Even greater processing time.)
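Nearest-neighbour resampling, the simplest of the three, can be sketched in a few lines; the 3×3 source image and the 2× upsampling factor are arbitrary:

```python
import numpy as np

# Toy 3x3 source image resampled to a 6x6 grid. Each output pixel
# takes the DN of the closest source pixel, so original values are
# never altered, only duplicated.
src = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]])

out_shape = (6, 6)
rows = np.arange(out_shape[0]) * src.shape[0] // out_shape[0]
cols = np.arange(out_shape[1]) * src.shape[1] // out_shape[1]
resampled = src[np.ix_(rows, cols)]
```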

19
Q

What is the goal of Image Enhancements?

A

The goal of image enhancement is to improve the usefulness of an image for a given task. It is applied to remote sensing data to aid human visual and digital image analysis/interpretation, and feature identification.

20
Q

What is the Spectral Profile useful for?

A

This can be used to determine an appropriate display enhancement. Spectral profile charts allow you to select areas of interest or ground features on the image and review the spectral information of all bands in chart format.

21
Q

What is Contrast Stretching? Why is it used?

A

Contrast Stretching expands the original DNs in an image, increasing the contrast and interpretability of the image.

Display devices typically have 256 grey values, while the DNs in a single image usually span a range of less than 256; thus a narrow distribution can be stretched out over the full range.

22
Q

What are the two types of Contrast stretching? Explain.

A

Linear stretching - Increases the contrast while preserving the original relationships of the DNs to each other. By expanding the original input values of the image, the total range of sensitivity of the display device can be utilized.

Non-linear stretching (e.g. histogram stretching) - Increases the contrast without preserving the original relationship of the DN values. Portions of the image histogram may be stretched differently than others. Input DNs are assigned to new output DNs based on their frequency. Frequently occurring values are stretched more than others.
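Both stretches can be sketched with NumPy; the toy DN values (a narrow 60-120 range, with 60 occurring most often) are made up:

```python
import numpy as np

# Toy band occupying only a narrow slice of the 0-255 display range.
band = np.array([60, 60, 60, 75, 90, 120], dtype=float)

# Linear stretch: map [min, max] onto [0, 255]; the relative spacing
# of the DNs is preserved.
lo, hi = band.min(), band.max()
linear = (band - lo) / (hi - lo) * 255

# Histogram equalization (a non-linear stretch): output positions are
# driven by cumulative frequency, so the frequently occurring DN 60 is
# pushed further along the output range than a linear stretch puts it.
values, counts = np.unique(band, return_counts=True)
cdf = np.cumsum(counts) / counts.sum()
equalized = np.interp(band, values, cdf * 255)
```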

23
Q

Why is Spatial filtering used?

A

Spatial filtering is used to improve visualization. It uses values and relationships with neighbouring pixels to either enhance the image or remove noise.

The brightness value of a pixel at a particular location in the filtered image is a function of the brightness values of it and its neighbouring pixels from the original image.

24
Q

What are the two types of Spatial Filters? Explain.

A

Low-pass / High-pass

Low-pass Filters: emphasize smooth features and are used for reducing noise. They emphasize features that have a low spatial frequency while de-emphasizing areas that have a high spatial frequency, smoothing out edges. An averaging filter is an example of a low-pass filter.

High-pass Filters: emphasize abrupt features by removing slowly varying components and enhancing high-frequency local variations; i.e. enhances the spatial detail that changes quickly with distance. Often these filters are used for edge detection.
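A minimal sketch of both filter types, using a hand-rolled 3×3 convolution so the arithmetic stays visible; the image values and kernels are illustrative:

```python
import numpy as np

# Toy image with a sharp vertical edge between DN 10 and DN 90.
img = np.array([[10, 10, 10, 90, 90]] * 4, dtype=float)

def apply_filter(image, kernel):
    # Slide a 3x3 kernel over the image (interior positions only) and
    # sum the element-wise products at each position.
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

# Low-pass: 3x3 averaging kernel smooths (blurs) the edge.
low = apply_filter(img, np.full((3, 3), 1 / 9))

# High-pass: Laplacian-style kernel responds only where DNs change
# quickly (the edge) and is zero over flat areas.
laplacian = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)
high = apply_filter(img, laplacian)
```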

25
Q

What are the two types of Edge Detection Filters?

A

(1) non-directional; or (2) directional.

Directional filters highlight features in one direction (a limited number of directions); e.g. the Sobel edge detector.

Non-directional filters deal with linear features in all directions; e.g. the Laplacian filter.

26
Q

What is the goal of Band Math?

A

To amplify a specific signal within an image (e.g. vegetation health) and to reduce the number of datasets input into an image classification.

27
Q

What is a Band ratio? Explain what it does.

A

An enhancement resulting from the division of DN values in one spectral band by the corresponding values in another. This is useful for simple change detection or for adding value to the interpretation process.

28
Q

What are the three types of Vegetation Indices?

A

(Ratio, NDVI, Tasseled Cap Transformation)

29
Q

What is a Simple ratio? How does it work for interpreting vegetation?

A

The near-infrared (NIR) band divided by the red band gives the simple ratio (SR):
SR = NIR/red
This takes advantage of the inverse relationship between chlorophyll absorption of red radiant energy and the increased reflectance of near-infrared energy for healthy plant canopies.

Con: The values are unbounded and image-dependent, making them difficult to use for temporal comparison.

30
Q

Explain NDVI. Why is this method useful?

A

NDVI quantifies vegetation health by measuring the difference between near-infrared (which vegetation strongly reflects) and red light (which vegetation absorbs).

NDVI = (NIR - red) / (NIR + red)

Healthy vegetation - most of the visible light is absorbed and large amounts of NIR are reflected (values close to 1).
Unhealthy vegetation - more visible light is reflected and less NIR is reflected (values close to 0).

Pro: This method standardizes the measurements making them useful for temporal comparison.
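A minimal NDVI sketch in NumPy; the reflectance values are invented to mimic healthy vegetation, bare soil and water:

```python
import numpy as np

# Toy NIR and red reflectances (0-1 scale): pixel 0 mimics healthy
# vegetation (high NIR, low red), pixel 1 bare soil, pixel 2 water.
nir = np.array([0.50, 0.30, 0.02])
red = np.array([0.08, 0.25, 0.04])

# NDVI = (NIR - red) / (NIR + red), bounded to [-1, 1], which is what
# makes it comparable across dates, unlike the unbounded simple ratio.
ndvi = (nir - red) / (nir + red)
```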

31
Q

What is Tasseled Cap Transformation?

A

A more sophisticated approach to vegetation mapping. Three components are derived from the visible and infrared bands of Landsat Thematic Mapper imagery:

Brightness - variations in soil background reflectance (urbanized areas, beaches and bare/dry agricultural fields show up well here)

Greenness - variations in the vigor of green vegetation (the brightest areas have the highest green biomass)

Wetness - used for looking at soil wetness (dry = dark, wet = bright)

32
Q

What is Principal Component Analysis? Explain.

A

PCA is based on the assumption that image bands which are closely spaced in the spectral domain may have a high degree of correlation or redundancy.

It is a data transformation technique used to create a smaller but easier-to-interpret set of image bands that contain most of the same information as the original dataset.

Its main advantages are improved algorithm performance (less redundant data) and improved visualization.
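A sketch of PCA as an eigen-decomposition of the band covariance matrix; the three synthetic bands are deliberately constructed to be redundant (near-linear functions of one another):

```python
import numpy as np

# Synthetic data: 6 pixels x 3 bands, where bands 2 and 3 are noisy
# linear functions of band 1, i.e. highly correlated (redundant).
rng = np.random.default_rng(0)
band1 = rng.uniform(0, 100, 6)
pixels = np.column_stack([band1,
                          2.0 * band1 + rng.normal(0, 0.1, 6),
                          0.5 * band1 + rng.normal(0, 0.1, 6)])

# Eigen-decomposition of the band covariance matrix; components are
# ordered by the fraction of total variance they explain.
centered = pixels - pixels.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
explained = np.sort(eigvals)[::-1] / eigvals.sum()

# Because the bands are redundant, the first principal component
# carries nearly all of the variance.
```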


33
Q

What is the key goal of many Remote Sensing projects?

A

To produce a thematic interpretation of Images.

34
Q

From a “data” perspective, Classification is __?

A

The arrangement of spectral classes into information classes.

35
Q

What does it mean when a Spectral class is “Hard” or “Fuzzy”?

A

“Hard” means that each pixel can only belong to one class; boundaries between classes are distinct. (most commonly used class type).

“Fuzzy” or “Soft” means each pixel can be a heterogeneous mix of classes; boundaries between classes can be gradual.

36
Q

What is a “Mixed Pixel”?

A

A pixel which contains mixed spectral signals from a variety of classes.

37
Q

What are the three main types of algorithms used in Image Classification?

A

Unsupervised, Supervised, Hybrid (Some combination of 1 & 2)

38
Q

What is Unsupervised Image Classification? Explain.

A

Unsupervised Classification is used when there is little or no “a priori” knowledge of landcover characteristics. Here, spectral classes are grouped solely based on DNs in the image bands, and are then interpreted (classes assigned) a posteriori by the analyst.

39
Q

What is Supervised Image Classification? Explain.

A

Supervised Classification is used when there is information available for determining classes and for manually identifying training areas in the image. Samples of known identity selected within the image are used to classify pixels of unknown identity. The selected samples are called “training sites”.

40
Q

Name the three ways Data can be evaluated for potential spectral and information classes? (look for patterns)

A
  1. Observation Space (Image)
  2. Spectral space (Scattergram)
  3. Frequency data space (Histogram)
41
Q

List the general steps for thematic information extraction. (hint, 5 steps)

A

A) define the classification problem (scheme)
B) acquire the appropriate data
C) perform pre-processing tasks
D) perform image processing and classification
E) report and distribute data

42
Q

What is the USGS Land Use/Land Cover Classification System?

A

A hierarchical classification scheme designed specifically for remote sensing classification.

43
Q

Explain how K-Means Cluster analysis works.

A

Clusters are defined arbitrarily and Cmax (the maximum number of clusters) is set. Each pixel of image data is then assigned to the cluster that is closest to it in spectral space.

New cluster means are then calculated from the results of the previous iteration and the pixels are reassigned to the new clusters during a subsequent iteration.
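The assign-then-update iteration can be sketched on toy single-band data; the initial means, cluster count and pixel values are arbitrary:

```python
import numpy as np

# Toy 1-band "pixels" forming two obvious spectral clusters; Cmax = 2.
pixels = np.array([[10.0], [12.0], [11.0], [90.0], [92.0], [88.0]])
k = 2
means = pixels[[0, 3]].copy()   # arbitrary initial cluster means

for _ in range(10):
    # Assign each pixel to the closest cluster mean in spectral space.
    dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # Recompute the cluster means from the previous assignment.
    means = np.array([pixels[labels == c].mean(axis=0) for c in range(k)])
```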

44
Q

What attributes should a “training site” ideally have?

A
It should accurately characterize the spectral patterns of the desired information class and must be representative of the range of values within that class.
45
Q

What is the Purpose of Evaluating Signatures?

A

Essentially the goal is to determine class separability, which lets you evaluate how well the classification algorithm is likely to perform (poor separability means classes will not be discriminated). It allows you to determine which subset of image bands provides the greatest degree of separability between classes, and provides a means to reconsider the classification scheme you have set up. Often training sites need to be edited/refined.

46
Q

What are the differences between Pixel and Object based classification?

A

Pixel-based classification assigns each pixel to a class independently, based only on its spectral (DN) values.

Object-based classification first segments the image into zones (objects): groups of pixels which possess intrinsic size, shape, and geographic relationships with the real-world scene components they model. Classification is then done on those zones instead of individual pixels.

47
Q

Explain the “Minimum Distance to Means” classification technique.

A

This method relies on the euclidean (straight-line) distance, in feature space, from the class means to unclassified pixels.

Every pixel is assigned to the nearest class mean, so no pixels are left unclassified.

The method is mathematically simple, computationally efficient and works well for spectrally distinct classes; it assumes the class mean is representative of the class.
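A minimal sketch with two hypothetical class means in a 2-band feature space; the means and pixel values are invented:

```python
import numpy as np

# Class means derived from hypothetical training data (2 bands).
class_means = np.array([[20.0, 30.0],    # e.g. "water"
                        [80.0, 90.0]])   # e.g. "forest"

# Each unclassified pixel is assigned to the class whose mean is
# closest in euclidean distance, so none are left unclassified.
pixels = np.array([[25.0, 28.0], [75.0, 95.0], [50.0, 70.0]])
dist = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
labels = dist.argmin(axis=1)
```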

48
Q

Explain the “Maximum likelihood” classification technique.

A

Maximum likelihood is based on the assignment of each pixel to the class it has the highest probability of belonging to. The probabilities are based on the mean and variance of the training data for each class (e.g. a histogram of training data overlaid with a normal probability density).
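Assuming each class's DNs are normally distributed, the decision rule can be sketched with hand-written Gaussian log-densities; the class statistics and pixel values are invented:

```python
import numpy as np

# Per-class mean and variance from hypothetical training data (1 band).
means = np.array([30.0, 70.0])
variances = np.array([100.0, 25.0])

def log_likelihood(x, mu, var):
    # Log of the normal probability density function.
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

# Each pixel is assigned to the class with the highest likelihood.
# Variance matters: pixel 50.0 is equidistant from both means, but the
# broader class 0 wins on likelihood.
pixels = np.array([50.0, 28.0, 68.0])
ll = np.array([[log_likelihood(x, m, v) for m, v in zip(means, variances)]
               for x in pixels])
labels = ll.argmax(axis=1)
```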

49
Q

Explain the “CART” classification technique.

A

Classification and Regression Tree (CART) builds a decision tree from the input training data. Candidate split points are lined up and tested using a cost function, and the split with the best (lowest) cost is selected. All input variables and all possible split points are evaluated, and the best split point is chosen at each node.

This method is non-parametric but mathematically complex.

50
Q

What is the Goal of an Accuracy Assessment?

A

Simply put, to report the accuracy of the classification. A subset of individual pixels in the classification result are evaluated against testing data.
E.g., is a pixel classified as “forest” actually forest or is it a waterbody in reality?

51
Q

What is “Testing Data”? How does it relate to “Training Data”?

A

Data which are independent of the training data used in classification.
Optimally, testing data are collected in the field; e.g., geo-located test points can be evaluated against the classification result pixels.

Usually, testing data are generated the same way as training data: field data, analyst knowledge, imagery, maps, etc…

52
Q

What are the 5 common Sampling Designs?

A
1- random sampling
2- systematic sampling
3- stratified random sampling
4- stratified systematic unaligned sampling
5- cluster sampling
53
Q

What do the Rows in an Error Matrix Represent?

A

Predicted Classes

54
Q

What do the Columns in an Error Matrix Represent?

A

The Ground Reference Information

55
Q

What does the Diagonal in an Error Matrix Represent?

A

The number of correctly classified samples in each category.

56
Q

What does the “Users Accuracy” in an Error Matrix Represent?

A
From the perspective of the user of the classified map: how accurate is the map?
i.e., for a given class, how many of the pixels on the map are actually what they say they are? It is the probability that a pixel on the map represents that class on the ground.

Calculation: number correctly identified in a given map class / total number claimed to be in that map class. (Uses row data.)

57
Q

What does the “Producers Accuracy” refer to in terms of an Error Matrix?

A

From the perspective of the maker of the classified map: how well was a certain class classified? For a given reference class, how many of its pixels were labelled correctly on the map? (The probability of a reference sample being correctly classified.)

Calculation: number correctly identified in reference plots of a given class / number actually in that reference class. (Uses column data.)

58
Q

What is a Commission Error?

A

A Commission error (aka inclusion error) is the percentage of pixels assigned to a class that do not actually belong to it; it is the complement of the user's accuracy.

59
Q

What is an Omission Error?

A

An Omission Error (aka exclusion error) is the percentage of pixels belonging to a reference class that were incorrectly omitted from it (i.e. assigned to other classes); it is the complement of the producer's accuracy.

60
Q

How do you calculate the Overall Accuracy of an Error Matrix?
What can be the problem with this number?

A

Overall classification accuracy = total number of correct class predictions / number of samples = Σ(diagonal) / N

Issues can arise because it is a summary value, an average: it does not reveal whether error was evenly distributed between classes, or whether some classes were classified very badly and others very well.

61
Q

What is a Kappa Coefficient?

A

In essence, the kappa coefficient is a measure of how closely the instances classified by the classifier match the data labelled as ground truth, after accounting for chance agreement.
It typically takes values from 0 to 1; a kappa coefficient of 0 means no agreement beyond chance between the classified image and the reference data.
Values greater than 0.8 indicate good accuracy; values less than 0.4 indicate poor accuracy.
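The error-matrix statistics from the preceding cards (overall, user's and producer's accuracy, kappa) can be sketched on a toy 2-class matrix; the counts are invented:

```python
import numpy as np

# Toy error matrix: rows = predicted (map) classes, columns = ground
# reference classes, diagonal = correctly classified samples.
matrix = np.array([[45,  5],    # predicted "forest"
                   [10, 40]])   # predicted "water"

n = matrix.sum()
overall = np.trace(matrix) / n                    # overall accuracy

users = np.diag(matrix) / matrix.sum(axis=1)      # row-wise
producers = np.diag(matrix) / matrix.sum(axis=0)  # column-wise

# Kappa compares observed agreement with the agreement expected by
# chance, given the row and column totals.
expected = (matrix.sum(axis=1) * matrix.sum(axis=0)).sum() / n ** 2
kappa = (overall - expected) / (1 - expected)
```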

62
Q

What is Empirical Line Calibration? Explain.

A

Empirical line calibration forces remote sensing image data to match in situ reflectance measurements (ideally obtained at roughly the same time). It is done by selecting two or more homogeneous targets, one bright and one dark (e.g. a sand pile and deep water).
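A sketch of the empirical line fit, assuming two hypothetical calibration targets with known image DNs and in situ reflectances:

```python
import numpy as np

# Two homogeneous targets: dark (deep water) and bright (sand pile),
# with image DNs and field-measured reflectances (hypothetical values).
target_dn = np.array([15.0, 210.0])
target_reflectance = np.array([0.02, 0.55])

# Fit the empirical line: reflectance = gain * DN + offset.
gain, offset = np.polyfit(target_dn, target_reflectance, 1)

# Apply the line to convert any band DN to surface reflectance.
band = np.array([15.0, 100.0, 210.0])
reflectance = gain * band + offset
```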