Final Exam Flashcards
How are Digital Numbers (DNs) Represented?
Remote sensing instruments measure at-sensor radiance in W/m2/sr.
For storage efficiency these values are converted to DNs (typically 8- or 16-bit):
8-bit: 2^8 = 256 values (0-255 range)
16-bit: 2^16 = 65536 values (0-65535 range)
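As a minimal sketch (the gain/offset values and function name are illustrative, not from any particular sensor), quantizing a radiance value to an n-bit DN looks like:

```python
def radiance_to_dn(radiance, gain=0.5, offset=0.0, bits=8):
    """Quantize a radiance value to an n-bit digital number (illustrative)."""
    max_dn = 2 ** bits - 1            # 255 for 8-bit, 65535 for 16-bit
    dn = round((radiance - offset) / gain)
    return min(max(dn, 0), max_dn)    # clamp to the valid DN range

print(radiance_to_dn(60.0))           # fits in the 0-255 range
print(radiance_to_dn(60.0, bits=16))  # same value, larger available range
```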
What is Irradiance? How is it Measured?
Irradiance (E): amount of radiant flux incident per unit area of a surface (W/m2)
What is Exitance? How is it Measured?
Exitance (M): amount of radiant flux leaving per unit area of a surface (W/m2)
What is Radiance? What unit is used to Measure it?
Radiance (L): the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area. Unit: W/m2/sr.
Radiant energy (Q): the energy carried by electromagnetic radiation (photons). Unit: joules (J)
What is Reflectance?
Reflectance (Rλ): the ratio of the radiant flux reflected from a surface (Lλ) to the radiant flux incident upon it (Eλ) at a given wavelength. It is unitless.
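Since reflectance is just a ratio, it can be sketched in one line (values are made up):

```python
def reflectance(reflected_flux, incident_flux):
    """Rλ = reflected / incident radiant flux at a given wavelength (unitless, 0-1)."""
    return reflected_flux / incident_flux

print(reflectance(30.0, 120.0))  # 0.25
```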
Why do we need to perform Radiometric Corrections?
There are several reasons: data may be multi-temporal or acquired from different satellites (various sensors), and biophysical parameters may also be estimated from the imagery (e.g. chlorophyll concentration, vegetation biomass, NDVI).
What are the two types of Radiometric Correction?
Relative: Uses information derived from the image itself or other images.
Absolute: Uses statistical methods (e.g. empirical line calibration, ELC) or modelling based on sensor and atmosphere characteristics.
What are some examples of Relative Correction Methods?
Relative correction examples: Image normalization using histogram adjustments / Multiple date image normalization using regression / Dark object subtraction.
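Dark object subtraction from the list above can be sketched in a few lines, assuming the darkest pixel in the band should be near zero, so its DN approximates the atmospheric path-radiance contribution:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the band's minimum DN from every pixel (relative correction)."""
    band = np.asarray(band, dtype=float)
    return band - band.min()

band = np.array([[12, 40], [25, 90]])
print(dark_object_subtraction(band))  # the darkest pixel becomes 0
```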
What are some examples of Absolute Correction Methods
Empirical line calibration (ELC): forces remote sensing image data to match in situ reflectance measurements [ideally obtained at roughly the same time]. Done by selecting two or more homogeneous targets, one bright and one dark [e.g. a sand pile & deep water].
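ELC amounts to fitting a line between image DNs and field-measured reflectances of the dark and bright targets; a sketch with hypothetical values:

```python
import numpy as np

# Hypothetical DN / in-situ reflectance pairs: dark (deep water), bright (sand).
dn   = np.array([30.0, 200.0])
refl = np.array([0.02, 0.45])

gain, offset = np.polyfit(dn, refl, 1)   # reflectance = gain*DN + offset

def dn_to_reflectance(dn_values):
    """Apply the fitted line to convert image DNs to surface reflectance."""
    return gain * np.asarray(dn_values) + offset

print(dn_to_reflectance([30.0, 115.0, 200.0]))
```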
ATCOR-2: A spatially-adaptive, fast, atmospheric correction and haze removal algorithm for flat terrain satellite images. Model uses sensor parameters and information about the atmosphere at the time of imaging.
Why is Geometric Correction Important?
Because it corrects for displacements present in remotely sensed imagery and ensures that pixels/features in the image are in their proper and exact position on the earth's surface.
What is Image Rectification?
It is the process of spatially anchoring an image to the world in order to allow images and data from the same areas to be used together or compared.
By transforming an image from one grid system to another (e.g. image row and column coordinates to map projection coordinates) we can remove geometric distortion so that individual pixels are in their proper planimetric (x,y) map locations.
What are the two types of distortion common to remote sensing imagery? Explain.
Geometric and Radiometric.
Geometric distortion is related to how an image is anchored in space, and how accurately pixels reflect their actual location on the earth.
Radiometric distortion is related to the actual pixel DN values and unwanted contributions to the measured TOA radiance (e.g. atmospheric attenuation).
Explain the difference between image -> image, and image -> map approaches to geometric correction. How does orthorectification compare to these methods?
Image-to-Image Registration (coregistration): Is the alignment of two images. Not necessarily conforming to a specific map projection.
Image-to-Map rectification: Removes distortions caused by sensors, geometry of image is also made planimetric (conforms to map projection, allowing for measurements and interaction with other projected data)
Orthorectification: Is all of the above, plus the specific removal of distortions caused by relief displacement. Requires a digital elevation model (DEM).
What is a GCP?
Ground Control Points (GCPs): Are points on the surface where both the image coordinates (x,y) and map coordinates (E/N; Lat/Long, etc) are known. They can be chosen from a map, rectified image, vectors, GPS, etc…
What characteristics would an ideal set of GCPs have?
An ideal set of GCPs would meet the following criteria:
- Clearly identifiable on both uncorrected image and source;
- They must be permanent features;
- Must be distributed across the entire area;
- Must be an adequate number for the mathematical model chosen for spatial interpolation.
Describe 2 mathematical translation models. What is the minimum number of GCPs required for each?
First-order polynomial (affine) transformation: the image can be transformed independently in the x and y directions and parallel lines are preserved, though the shape of the image is not always preserved. Only three pairs of GCPs are required, but more can be used to better define the geometric properties of the image.
Higher-order transformations: require 6 or more pairs of GCPs depending on the degree of the polynomial; the higher the order, the more GCPs are required. (These transformations do not preserve parallel lines.)
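A first-order (affine) transformation can be estimated from three or more GCP pairs by least squares; a sketch with made-up GCP coordinates:

```python
import numpy as np

def fit_affine(image_xy, map_xy):
    """Fit E = a0 + a1*x + a2*y and N = b0 + b1*x + b2*y from >= 3 GCP pairs."""
    A = np.column_stack([np.ones(len(image_xy)), np.asarray(image_xy, float)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(map_xy, float), rcond=None)
    return coeffs  # 3x2 array: one column of coefficients per map axis (E, N)

def apply_affine(coeffs, image_xy):
    """Transform image (col,row) coordinates to map (E,N) coordinates."""
    A = np.column_stack([np.ones(len(image_xy)), np.asarray(image_xy, float)])
    return A @ coeffs

# Three hypothetical GCPs: image (col,row) -> map (E,N)
image_pts = [(0, 0), (100, 0), (0, 100)]
map_pts = [(500000, 4000000), (500030, 4000000), (500000, 3999970)]
coeffs = fit_affine(image_pts, map_pts)
print(apply_affine(coeffs, [(50, 50)]))
```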
What is RMS error? What is the strategy to lower it if it is too high?
Root Mean Square (RMS) error of each GCP: indicates the level of distortion in the map after the transformation. Acceptable RMS error is half the length of one image cell (RMS < 0.5 pixel).
What to do when RMS error is too high:
1. Reject the worst GCP
2. Collect more GCPs
3. Recompute the RMS error
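The per-GCP residual and total RMS error can be computed as follows (a sketch; `predicted_xy` would be the GCP positions after applying the fitted transformation):

```python
import numpy as np

def gcp_rms(predicted_xy, true_xy):
    """Residual distance (in pixels) of each GCP after transformation,
    plus the total RMS; residuals above ~0.5 pixel flag GCPs to reject."""
    d = np.asarray(predicted_xy, float) - np.asarray(true_xy, float)
    per_gcp = np.hypot(d[:, 0], d[:, 1])     # Euclidean residual per GCP
    total = np.sqrt(np.mean(per_gcp ** 2))   # overall RMS error
    return per_gcp, total
```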
Describe the 3 intensity interpolation resampling methods.
Nearest Neighbour Resampling: The DN of the nearest pixel in the uncorrected image is transferred to each pixel location in the new corrected image. (Simplest method; original values are not altered, though some may be duplicated.)
Bilinear Interpolation Resampling: takes a distance-weighted average of the four pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel DN values and creates new digital values in the output image. (Smooths the image; more processing time.)
Cubic Convolution Resampling: uses a block of the 16 pixels from the unrectified image that surround the new pixel location in the corrected image. The result is much less blurry than with bilinear interpolation; best method for smoothing out noise. (Even greater processing time.)
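The first two methods can be sketched directly (illustrative only; `x, y` are the fractional source-image coordinates that a corrected pixel maps back to):

```python
import numpy as np

def nearest_neighbour(img, x, y):
    """Pick the DN of the closest source pixel; original values are unchanged."""
    return img[int(round(y)), int(round(x))]

def bilinear(img, x, y):
    """Distance-weighted average of the 4 surrounding pixels; creates new DNs."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(nearest_neighbour(img, 0.4, 0.4))
print(bilinear(img, 0.5, 0.5))
```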
What is the goal of Image Enhancements?
The goal of image enhancement is to improve the usefulness of an image for a given task. Applied to remote sensing data to aid human visual and digital image analysis/interpretation, and feature identification.
What is the Spectral Profile useful for?
It can be used to choose an appropriate display enhancement. Spectral profile charts allow you to select areas of interest or ground features on the image and review the spectral information of all bands in a chart format.
What is Contrast Stretching? Why is it used?
Contrast Stretching expands the original DNs in an image, increasing the contrast and interpretability of the image.
Typically display devices have 256 grey values, while the DNs in a single image usually span less than 256 values, so a narrow distribution can be stretched out over the full range.
What are the two types of Contrast stretching? Explain.
Linear stretching - Increases the contrast while preserving the original relationships of the DNs to each other. By expanding the original input values of the image, the total range of sensitivity of the display device can be utilized
Non-linear stretching (e.g. histogram stretching) - Increases the contrast without preserving the original relationship of the DN values. Portions of the image histogram may be stretched differently than others. Input DNs are assigned to new output DNs based on their frequency. Frequently occurring values are stretched more than others.
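A minimal linear stretch, mapping the band's min-max to the full 0-255 display range while preserving the relative relationships between DNs:

```python
import numpy as np

def linear_stretch(band, out_max=255):
    """Linearly rescale DNs so the band's min/max span the full display range."""
    band = np.asarray(band, dtype=float)
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * out_max).astype(np.uint8)

print(linear_stretch(np.array([50, 100, 150])))
```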
Why is Spatial filtering used?
Spatial filtering is used to improve visualization. It uses values and relationships with neighbouring pixels to either enhance the image or remove noise.
The brightness value of a pixel at a particular location in the filtered image is a function of the brightness values of it and its neighbouring pixels from the original image.
What are the two types of Spatial Filters? Explain.
Low-pass / High-pass
Low-pass Filters: emphasize smooth features and are used for reducing noise. They emphasize features that have a low spatial frequency, while de-emphasizing areas that have high spatial frequency; smoothing out edges. An averaging filter is an example of a low-pass filter.
High-pass Filters: emphasize abrupt features by removing slowly varying components and enhancing high-frequency local variations; i.e. enhances the spatial detail that changes quickly with distance. Often these filters are used for edge detection.
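Both filter types can be sketched as a 3x3 convolution (kernel values are common textbook choices; the averaging kernel is the low-pass example named above, and the high-pass kernel is one typical edge-enhancing variant):

```python
import numpy as np

LOW_PASS = np.full((3, 3), 1 / 9)              # averaging (smoothing) kernel
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], float)    # edge-enhancing kernel

def convolve3x3(img, kernel):
    """Naive 3x3 spatial filter: each output pixel is a weighted sum of its
    3x3 neighbourhood (border pixels left unfiltered for simplicity)."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * kernel)
    return out
```

On a perfectly uniform image, the high-pass response in the interior is zero (no edges to enhance) while the low-pass response leaves values unchanged.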