Midterm 1 (T1-T7) Flashcards
Define geography
Geography seeks to discover the spatial relationships of phenomena (both physical and human) on the surface of the Earth
Define science
Refers to a system of acquiring knowledge. This system uses observation and experimentation to describe and explain natural phenomena.
Define cartography.
The art, science, and technology of making maps, together with their study as scientific documents and works of art.
Define GIS
An information system that is designed to work with data referenced by spatial or geographic coordinates. In other words, a GIS is both a database system with specific capabilities for spatially-referenced data, as well as a set of operations for working with the data.
Define remote sensing.
Obtaining info about something without being in direct contact with that thing.
Which of the 5 senses are considered remote sensing and why or why not?
Touch- no bc direct contact with object
Smell- No bc particles enter nose
Taste- No bc direct contact with tongue
Hearing - Yes because soundwaves went through environment to reach us indirectly
Sight - Yes bc nothing we see physically touches our eyes
Difference between remote and in situ sensing
remote- indirect
in situ - direct “in the situation”
Neither is better than the other but both are needed.
Advantages and limitations of remote sensing (5 of each)
Advantages: increased perspective, generally unobtrusive, broad electromagnetic sensitivity, systematic, unbiased observation, digital extensions
Limitations: external noise/interference, often relies on surrogate measures, technical/calibration issues, can be obtrusive, can be expensive
What is a remote sensing platform and give examples
It is the platform that carries a sensor (e.g., a camera). Examples include a satellite, a plane, a person, or even an animal with a camera attached (such as a shark)
Define geotechnology**
the application of the methods of engineering and science for the exploitation of natural resources.
Define geomatics**
the discipline of gathering, storing, processing and delivering of geographic info or spatially referenced info. Commonly defined as “hunter and gatherer”.
GIScience definition and examples**
It represents a transdisciplinary integration of theory, methods, technology, and science that allow us to better visualize, monitor, model and manage our interaction on this planet at various scales. Examples of this are GIS and EO (Earth observation) platforms.
Define GEOBIA*
It is a sub-discipline of GIScience devoted to developing automated methods to PARTITION remote sensing imagery into meaningful image-objects and assessing their characteristics through scale. It aims to put geographic info into GIS-ready format so new geo-intelligence can be obtained
Define geo-intelligence
“spatial content, in context” or gathering the right information at the right time or place
What does GEOBIA require and how does it achieve this? What makes GEOBIA distinct?
It requires image segmentation, attribution, classification, and the ability to query and link individual objects in space and time.
It achieves this by incorporating knowledge from a vast array of disciplines involved in generating and using GI (geographic info). It is uniquely focused on RS and GI.
Does panchromatic or multispectral have a higher resolution? Explain why.
Panchromatic (black and white) resolution is much higher than multispectral (colour) resolution because a panchromatic band collects light across the whole visible range in a single band, so each pixel gathers enough energy to be made smaller and capture finer detail. This is why coloured satellite images usually cover larger areas at coarser resolution.
Who is GeoEye’s largest customer and what do they do for Google?
largest customer = National Geospatial-Intelligence Agency
* provides images to Google Earth and Google Maps
*Tied w Google’s plans for Android
*future location based services
What are key benefits from Google getting a 700 trillion pixel upgrade?
*new map (mosaic) has fewer clouds (second time Google revealed a cloudless map)
*low and medium resolution maps haven't been updated in 3 years
*uses the most recent data from Landsat 8 which launched in 2013 and can capture a greater array of light (deep blue and infrared)
What can we use high resolution (h-res) data for? (4)
- integrate real time traffic info
-easily track GPS-equipped vehicles
-analyze multiple or time series images (ex. disasters)
-change detection could be a new market
What is the problem with new satellites?
They have similar abilities to those of military spy satellites except they can sell to anyone who can pay (ex. insurance companies, Telus buys agriculture images)
Solution for preventing distortion from map projecting
*An orderly system of parallels of latitude (N and S of equator) and meridians (E and W of prime meridian) to draw a sphere on a flat surface
What is the Universal Transverse Mercator (UTM) projection and how many zones are in the planet and in Canada?
It is the projection used for most modern topographic maps. There are 60 zones in the world, 16 in Canada.
Describe the 4 types of resolution.**
- Spatial - relates to pixel size
- Spectral- the number and dimension of wavelength intervals the remote sensor is sensitive to (how many and how wide are the bands for that wavelength)
- temporal- how long it takes to revisit the same place (observation frequency)
- radiometric- the number of unique values (256 in 8-bit)
What is a raster data model
*Based on pixels
*easy to process
*each pixel holds one attribute
Basic colour theory
Also called additive colour theory; combines the primary colours of red, green, and blue (RGB)
Colour composites and multiband images
There are different colours or bands at different wavelengths
Lookup tables (LUT) and pseudocolour tables
LUT = data structures that map DN values to RGB colours for display purposes
pseudocolour- colourizes monochrome images
Histograms and scatterplots**
Histograms show the frequency of DNs at a specific Band while scatterplots show the relationship between two different bands in terms of their frequency of DNs
3 types of contrast manipulation**
- gray-level thresholding
- density slicing
- contrast stretching
What are bits and how many bits are in a byte? What range of digital numbers are assigned to an 8 bit integer?
*bits = Binary digITS, are assigned either a 0 or 1
*1 byte = 8 bits
*0-255 because 2^8 = 256 unique values (0 through 255)
What range is used by an unsigned 16 bit integer versus a signed 16 bit integer? When is the signed integer used?
Unsigned = (0-65535)
Signed = (-32768 to +32767), which is common for digital elevation models (above and below sea level)
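A quick sketch of where these ranges come from (plain Python, no remote sensing library needed): for n bits there are 2^n unique values, and a signed type spends one bit on the sign.

```python
def dn_range(bits, signed=False):
    """(min, max) digital-number range for a given bit depth."""
    if signed:
        # One bit holds the sign, leaving bits-1 for the magnitude.
        return (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return (0, 2 ** bits - 1)

print(dn_range(8))                 # (0, 255): 2^8 = 256 unique values
print(dn_range(16))                # (0, 65535): unsigned 16-bit
print(dn_range(16, signed=True))   # (-32768, 32767): used for DEMs with
                                   # elevations below sea level
```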
What is important about Landsat 7 (The Enhanced Thematic Mapper Plus = ETM+) and ETM+ scenes?
*first time using a panchromatic dataset which has a higher spatial resolution. Landsat 7 collects data in accordance with World Wide Reference System 2 which has catalogued the world into 57,784 scenes.
*ETM+ has a fixed, “whisk-broom” 8-band, multispectral scanning radiometer in 183km wide swaths (not very wide).
Scenes: 3.3 GB of data for each scene. Has an IFOV (instantaneous field of view) of 30m in bands 1-5 and 7 and 15m for band 8 (panchromatic).
Rods versus Cones
Rods- many more rods (120 million) because when there is less light at night, more rods are needed to pick up that light, related to monochromatic vision (night vision)
Cones- only 6-7 million, RGB sensitive (3 types)
What is another name for the RGB Colour model? Define it.
Additive colour theory
It is the mixture of colours between red, green and blue (magenta, yellow, cyan)
Define subtractive colour theory**
*based on the colour-absorbing quality of ink
*The portion of white light that is NOT absorbed is reflected back to our eyes
“we see the colour that something is not”
Define CMYK Printing Process
Printers that lay down overlapping layers of cyan, magenta, and yellow ink
*four colour printers add a final black (K) layer for sharpness
Lookup Tables (LUTs) versus Pseudocolour Tables. How is colour quantizing (compression) related to LUTs
LUTs- data structures that map DN values (0-255) to actual RGB colours for display purposes, compression assigns colours to a reduced palette in the form of a LUT, each DN value is assigned a colour
pseudocolour tables- used to colourize monochrome images, similar to a LUT but assigns RGB loadings to a RANGE of colours, not the “true colour” (ex. colour tables on ENVI that change a black and white image to rainbow), the DN values are all assigned a RANGE of colours
Define a histogram*
a graph that shows the number/frequency of DNs plotted against the range of DNs for a single channel of imagery (ex. 0-255 for greyscale images)
*an important data exploration/summary tool that tells us the distribution of DNs in a single band of imagery
Define a scatterplot**
a graph displaying BIVARIATE distribution of 2 channels of imagery (needs 2 images), each point in the graph represents the bivariate DNs of a single pixel
*allows us to explore the distribution of DNs in 2 bands of imagery at a time
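The two plots can be sketched numerically with NumPy; the two 3x3 "bands" below use made-up DN values purely for illustration.

```python
import numpy as np

# Two small bands of 8-bit imagery (hypothetical DN values).
band1 = np.array([[10, 10, 200], [10, 120, 200], [120, 120, 200]], dtype=np.uint8)
band2 = np.array([[12, 11, 190], [9, 130, 210], [125, 118, 195]], dtype=np.uint8)

# Histogram: frequency of each DN in a single band over the 0-255 range.
counts, _ = np.histogram(band1, bins=256, range=(0, 256))
print(counts[10])   # 3 pixels in band1 have DN 10

# Scatterplot data: one (band1 DN, band2 DN) point per pixel.
pairs = np.column_stack([band1.ravel(), band2.ravel()])
print(pairs[0])     # [10 12]
```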
Define grey-level image thresholding**
a type of contrast manipulation used to segment an input image into 2 classes:
1. a class for values BELOW a specified DN
2. a class for values ABOVE a specified DN
*Used to prepare a BINARY image to separate spectrally distinct features for further analysis. This only applies to 1 band
Define density-level slicing**
a type of contrast manipulation where DNs along the x-axis of a histogram are segmented into analyst-defined intervals (slices)
*similar to thresholding except it involves numerous classes. This only applies to 1 band
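Both manipulations can be sketched in a few lines of NumPy; the single-band DNs and the 128 threshold below are toy values.

```python
import numpy as np

band = np.array([30, 90, 150, 210, 60, 240], dtype=np.uint8)  # one band

# Grey-level thresholding: a BINARY image with classes below/above one DN.
threshold = 128
binary = (band >= threshold).astype(np.uint8)
print(binary)   # [0 0 1 1 0 1]

# Density slicing: analyst-defined intervals (slices) -> several classes.
slices = [0, 64, 128, 192, 256]            # 4 classes: [0,64), [64,128), ...
classes = np.digitize(band, slices) - 1
print(classes)  # [0 1 2 3 0 3]
```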
Define contrast enhancement (stretching)
Original DN values rarely extend over the entire output range of a display device (0-255) so enhancement stretches the original data to accentuate contrast and improve visual interpretability (the closer the # is to 255, the brighter)
Name the 2 contrast enhancement techniques** how do they change the original image?
- Linear contrast stretch makes the new image proportionally the same as the original so the histogram will be shaped the same as well.
- Histogram equalization - applies the greatest contrast enhancement to the MOST POPULATED range of DNs in the image (not proportional). The histogram is shaped differently and is called a cumulative frequency histogram. The area with the greatest frequency will have the most proportional stretch.
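A minimal linear-stretch sketch: the toy DNs below occupy only 50-150 of the 0-255 display range, and the stretch maps them proportionally onto the full range, so the histogram keeps its shape.

```python
import numpy as np

# Toy band whose DNs span only 50-150 of the 0-255 display range.
band = np.array([50, 75, 100, 125, 150], dtype=float)

# Linear contrast stretch: map [min, max] proportionally onto [0, 255].
stretched = (band - band.min()) / (band.max() - band.min()) * 255
print(stretched)   # [0. 63.75 127.5 191.25 255.]
```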
(T4) Define Electromagnetic Radiation (EMR) and the 2 ways it can be observed
It is the energy released when an electron moves from an excited state to a de-excited state. It can be observed both as a wave in motion (wave theory of light) and as discrete packets of energy (photons)
What are the 3 states of EMR according to the Particle Theory of Light?
- Ground state
- Excitation - when the photon is absorbed
- De-excitation (quantum leap)- the state that causes a photon of light to be emitted
What direction does EMR travel in and at what speed? What temperature of objects is it emitted from?
-Its electric and magnetic fields oscillate orthogonally (perpendicular) to the direction of travel; it moves at 3x10^8 m/s (speed of light) and is emitted by all objects above absolute zero (-273 degrees C = 0 Kelvin).
Wavelength versus Frequency**
wavelength- the distance between crests of a wave (measured in um or nm)
frequency - the number of crests that pass the same point per unit of time (ex. every second), measured in MHz or GHz
The formula that relates wavelength and frequency
c = λv, λ = c/v
c= speed of light (3x10^8)
λ = wavelength
v = frequency
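The relationship can be checked with a couple of lines; the 0.5 um wavelength is just an illustrative value (green light).

```python
c = 3e8               # speed of light, m/s

wavelength = 0.5e-6   # 0.5 micrometres, in metres
frequency = c / wavelength   # v = c / lambda
print(frequency)      # ~6e14 Hz

# And back again: lambda = c / v
print(c / frequency)  # ~5e-07 m = 0.5 um
```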
The formula and patterns that relates energy and frequency
(remember frequency = c/wavelength, so frequency rises as wavelength falls)
Q=h*v
Q = energy of photon (Joules)
h = Planck constant
v = frequency of radiation
**Energy is directly proportional to frequency
The formula and patterns that relates energy and wavelength
Q= c*h/λ
Q= energy
c=speed of light
h = planck constant
λ = wavelength
* energy is inversely proportional to wavelength
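A short numeric check of both proportionality rules; the constants are rounded and the two wavelengths are illustrative.

```python
h = 6.626e-34   # Planck constant, J*s
c = 3e8         # speed of light, m/s

def photon_energy(wavelength_m):
    # Q = c * h / lambda (equivalently Q = h * v, with v = c / lambda)
    return c * h / wavelength_m

blue = photon_energy(0.45e-6)   # shorter wavelength
nir = photon_energy(0.85e-6)    # longer, near-infrared wavelength
print(blue > nir)               # True: energy is inversely proportional
                                # to wavelength
```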
Define black bodies and the Stefan Boltzmann law
a theoretical object that completely absorbs ALL incident radiation and emits the absorbed energy at the MAX possible rate (Stefan Boltzmann law)
M = σ*T^4
M= total emitted radiation
T= temperature
σ = stefan boltzmann constant
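The T^4 dependence is easy to illustrate numerically (sigma rounded; temperatures are arbitrary example values).

```python
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_exitance(T):
    # M = sigma * T^4
    return sigma * T ** 4

# Doubling the temperature multiplies emitted radiation by 2^4 = 16.
ratio = blackbody_exitance(600) / blackbody_exitance(300)
print(ratio)   # ~16 (to floating-point precision)
```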
Explain Wien’s displacement law which describes peak blackbody emittance. what is the relationship between wavelength and temperature?
λmax = k/T
T=temperature
k=wien’s constant
**Wavelength of peak emittance decreases as temperature increases (inversely proportional) = a hotter object peaks at a shorter wavelength
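Plugging in approximate temperatures (assumed here, not from the card) shows the displacement: the sun peaks in the visible, Earth in the thermal infrared.

```python
k = 2898.0         # Wien's constant, um*K

sun = k / 5800     # sun's surface is roughly 5800 K
earth = k / 300    # Earth's surface is roughly 300 K
print(round(sun, 2))    # 0.5  -> peaks in visible light, in um
print(round(earth, 1))  # 9.7  -> peaks in thermal infrared, in um
```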
relationship btwn radiation intensity and temperature
radiation intensity increases with temperature (incandescent at 2200degC)
What kind of energy is leaving the sun?
The sun emits short-wavelength radiation. When it reaches Earth it is either reflected (still short-wavelength) or absorbed and re-emitted by Earth as long-wavelength radiation.
Active versus passive sensors
*active sensors carry their own source of EMR- can operate in low-energy regions of the spectrum (e.g., radar/microwave)
*passive sensors have no on-board source of EMR- operate in the visible and infrared portions of the spectrum, so visible-band imaging only works where the scene is sunlit
What are the 4 interactions btwn EMR and matter?
- scattered
- reflected
- absorbed
- transmitted and refracted
Describe refraction/transmission. define the index of refraction. (Type 4)
*refraction occurs when EMR is transmitted THROUGH the matter causing the light to bend- depends on wavelength (NIR would not refract as much)
index of refraction = how much the speed of light/sound is REDUCED in the medium
Define scattering and list the 3 types of scattering** (Type 1)
scattering = similar to reflection but unpredictable and involves absorption and re-radiation of radiation by atoms/molecules
1. Rayleigh
2. Mie
3. Non-selective
What is rayleigh scattering
When the particles are much smaller than the wavelength (diameter < 0.1x the wavelength); caused mainly by gases in the UPPER atmosphere
Mie scattering
When particles are the same size as the wavelength and caused by dust, smoke and particulates in the LOWER atmosphere
Non-selective scattering
When particles are much GREATER in size than the wavelength; caused by water droplets and ice crystals in the LOWER atmosphere.
It is non-selective with the light that it reflects back (ex. clouds reflect different wavelengths)
Describe absorption (Type 3). define atmospheric windows
Occurs when EMR is absorbed by a material and then converted into other forms of energy
*Depends on wavelength: the range of wavelengths that atmospheric gases only slightly absorb radiation is called atmospheric windows (most energy passes through)
Define atmospheric thickness and extinction coefficients
atmospheric thickness = the length of the atmospheric path the radiation must travel through (greater at low sun/sensor angles)
extinction coefficient = the rate of reduction of transmitted light through scattering and absorption for a medium
Describe reflection. what are the 2 types?
The reradiation of photons in unison in a layer that is half a wavelength deep. (ex. clouds reflect a large amount of incident radiation)
*2 types are specular and diffuse
* NOT the same as spectral reflectance (albedo), which is the ratio of reflected to incident radiant flux.
Define specular reflection.*
When incoming radiation is reflected in a single direction. (usually happens on smooth surfaces like mirrors)
Define diffuse reflection and a Lambertian Surface.
When incoming radiation is reflected in many directions. (usually happens on rough surfaces which have many specular planes)
Lambertian surface - the perfect diffuse reflector (brightness looks the same from every angle)
how does wavelength depend on reflectors?
A surface can be rough at one wavelength and smooth at another.
Define radiance/radiant flux and a steradian**
radiance is the radiant flux per unit solid angle per unit area for a specific wavelength (watts per steradian per square meter)- it is what a sensor measures, rather than reflectance
steradian = unit of solid angle adopted under SI units
Define irradiance versus exitance**
irradiance = total amount of incident radiation per unit area
exitance = total amount of outgoing (reflected or emitted) radiation per unit area
Define reflectance versus hemispherical reflectance
reflectance- the fraction of radiant energy reflected from a surface
hemispherical reflectance = ratio of exitance to irradiance (also known as albedo or spectral reflectance)
*bright surfaces have high albedo
What is the goal of image processing (T5)
to produce a corrected image that is as radiometrically and geometrically close as possible to the true radiant energy and spatial characteristics of the study area at the time of data acquisition. Must identify internal and external errors to correct RS data.
define radiometry
the science of measuring light in any portion of the electromagnetic spectrum (EM)
Internal errors (systematic) versus external errors (non-systematic)**
internal- produced by the remote sensing system itself, systematic/predictable, can be identified or corrected prelaunch or during the flight of the sensor using calibration (ex. line striping)
external- non-systematic (unpredictable), caused by external variables such as atmosphere and terrain elevation, slope and aspect, can be corrected by relating empirical observations to sensor system measurements
Define radiometric corrections versus radiometric errors
Improves fidelity/accuracy of brightness value magnitudes, reduces the influence of errors in image brightness values.
errors- referred to as noise, considered as any undesirable spatial or temporal variations in image brightness
Define radiometry
The science that studies the measurement of electromagnetic radiation, including visible light.
What are the 3 causes of noise/ non-systematic variance (radiometric errors)?
- intervening atmosphere (atmospheric attenuation-thickness)
- sun-sensor geometry (scene illumination)
- the sensor itself (sensor noise)
What do sensors measure and how is degradation fixed?
*measures the electrical current flowing from a photosensitive material that converts light (radiance) into an electrical charge
*the detector’s sensitivity degrades over time and requires constant calibration
Define a solar diffuser.
The primary calibration for MODIS which targets reflective bands (short wave), any change in the output is due to degradation of MODIS or the diffuser since the sun has a constant output
How is a black body used to calibrate MODIS
it calibrates for mid and long-wave infrared bands where the brains of the operation is the spectroradiometric calibration assembly
How do digital numbers (DNs) relate to radiance
Remote sensing instruments measure radiance which is converted to DNs in 8-bit or 16-bit. This conversion follows a calibrated response formula that is unique for each channel. Gains and Bias values are needed to convert them.
L=G*DN+B
L=spectral radiance
G= slope of response function (gain)
DN=digital number
B= y-intercept of the response function (bias or offset)
How are gains and biases derived from the spectral radiance min and max
Gain = (Lmax-Lmin)/255
Bias=Lmin
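The two formulas combine into a tiny calibration sketch. The Lmin/Lmax below are made-up illustrative values, not real sensor coefficients.

```python
# Published spectral radiance at DN 0 and DN 255 (illustrative values).
L_min, L_max = -6.2, 191.6

gain = (L_max - L_min) / 255   # G: slope of the response function
bias = L_min                   # B: y-intercept (radiance at DN 0)

def dn_to_radiance(dn):
    # L = G * DN + B
    return gain * dn + bias

print(dn_to_radiance(0))     # -6.2, i.e. Lmin
print(dn_to_radiance(255))   # ~191.6, i.e. Lmax (within float precision)
```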
3 causes of image striping and banding
- relative gain and offset differences among detectors within a band
- shift between pixel rows
- Landsat 1 (L1) processing errors
How to solve systematic and non-systematic errors
systematic- corrected with mathematical formulas
non-systematic = can be filtered, filled, or ignored
Define and determine how to fix scene illumination
*controlled by solar elevation which has a big impact on radiance
- may need to be normalized to facilitate comparisons between different dates, which is done by converting radiance to reflectance
Define at-satellite reflectance
- reflectance calculated at the top of the atmosphere, which avoids the atmospheric complications that ground-reflectance calculations experience
2 contradictory ways that atmosphere affects sensor measurements
- attenuates/reduces radiance measurements due to absorption and scattering
- Adds/increases radiance measurements due to path radiance (backscattering of radiance back to space for the sensor to pick up)
2 approaches to atmospheric correction/normalization**
- absolute correction- calculates surface reflectance by removing effects of the atmosphere
- Relative correction-normalizes effects of atmosphere by radiometrically matching one or more slave images to a master image
Slave versus master image
slave image- needs to be corrected (taken at time 2)
master image- is already correct (taken at time 1)
List the absolute atmospheric correction complex radiative transfer models (3). What does ATCOR do to the image?
MODTRAN, 6S, ATCOR (atmospheric correction program)
ATCOR uses a blackbody correction to remove clouds from an image by identifying everything that is “hazy coloured”. This requires detailed atmospheric observations which are almost never available.
Describe alternative absolute atmospheric corrections.
Selects standard atmospheres instead of detailed observations and then selects image targets to match “reference” reflectance libraries (things that don’t change as a reference) = untrustworthy results
What are pseudo-invariant features and how does it relate to master and slave images
pseudo-invariant = features whose reflectance is assumed not to change over time, used as calibration references between the master and slave images
DNslave = G*DNmaster+B
DNλ = band specific digital number
G = slope of response
B= y-intercept of response
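A sketch of the matching step with NumPy: fit G and B from PIF samples by least squares, then invert the fit to normalize the slave image. The DN values are toy data chosen so the fit is exact.

```python
import numpy as np

# DNs of pseudo-invariant features sampled in both images (toy values
# that happen to satisfy DNslave = 0.9 * DNmaster + 12 exactly).
dn_master = np.array([20, 60, 110, 180, 240], dtype=float)
dn_slave = np.array([30, 66, 111, 174, 228], dtype=float)

# Least-squares fit of DNslave = G * DNmaster + B ...
G, B = np.polyfit(dn_master, dn_slave, 1)

# ... then invert it to bring slave DNs onto the master's radiometry.
normalized = (dn_slave - B) / G
print(np.allclose(normalized, dn_master))   # True
```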
Explain empirical line calibration method
The empirical line method (ELM) measures reference targets of known reflectance in the scene. ELM methods require minimal environmental observations and are conceptually simple. However, calibration coefficients are unique to the image containing the reflectance reference
Explain relative atmospheric normalization
Does not retrieve surface reflectance; instead it radiometrically matches one or more slave images to a master image. Afterwards, all the images should look as if they were taken with the same sensor (anniversary-date images, about one year apart, reduce illumination differences, and the histograms should match)
What is the best atmospheric correction method?** Define RMSE
Radiometric normalization/transformation based on pseudoinvariant features (PIFs) because it has the lowest RMSE
RMSE = root mean square error
What to do and when to do radiometric pre-processing**
- Single-scene classification- no
- composite scene classification - yes bc based on multiple images
- change detection (post-classification, image differencing) - yes, in case illumination differs (shadows behave like objects and are hard to remove)
- vegetation monitoring with NDVI- yes bc it changes over time
Define image rectification (T6)
transforming an image from one grid system to another (ex. image row and column coordinates to map projection coordinates)
What are the 4 systematic distortions (geometric errors) and how are they corrected**
- scan skew
- mirror scan velocity variance
- earth rotation
- platform velocity
*all corrected with mathematical formulas
Describe non-systematic geometric errors
Unforeseen changes in sensor geometry such as altitude variance and platform attitude (roll, pitch, yaw), plus relief displacement; corrected with GCPs (ground control points) and rectification
Define relief displacement
When objects closer to the sensor appear larger than objects farther away; everything falls away from the PP (principal point)
What are the 3 levels of rectification
- image registration/co-registration = alignment of 2 images, does not conform to specific map projection
- rectification - removing distortions caused by sensor, geometry of image made planimetric-conforms to map projection
- orthorectification= aligns and removes distortions caused by relief displacement, requires DEM
Define orthophotograph
measures true distances, accurate representation of the earth’s surface because it was adjusted for topographic relief, lens distortion, and camera tilt
Image-to-map rectification process
- establish GCPs that match image pixels to map locations
- Use math to establish a relationship btwn the map and the uncorrected image
- transform the image to new projected geometry
- interpolate DNs of new re-oriented pixels
Define ground control points (GCPs)
points on surface where both image coordinates and map coordinates are known
Difference between image-to-map-rectification and orthorectification
image-to-map rectification does not correct for elevation (relief) differences, while orthorectification removes relief displacement using elevation data (a DEM)
Define rubber sheeting
a form of spatial interpolation that fits the points and lines you plot to the correct location by “stretching it out”
Explain the first spatial interpolation; the polynomial method. Define least squares technique. How do you calculate a square? Define the order of rectification
When polynomial equations model the geometric distortions btwn the original image and the reference map.
least squares technique- determines line of best fit by making the sum (diff btwn the original and new point) as small as possible.
square = square the distance btwn data point and regression line
order of rectification = highest order exponent of polynomial
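A first-order example of the least-squares step, using NumPy and four hypothetical GCPs (only the map x coordinate is fit; y gets its own identical fit). The coordinates are toy values chosen so the fit is exact.

```python
import numpy as np

# GCPs: image (col, row) -> map x (toy values that happen to satisfy
# x = 500000 + 2*col + 0*row exactly).
cols = np.array([10.0, 200.0, 150.0, 30.0])
rows = np.array([15.0, 20.0, 180.0, 160.0])
map_x = np.array([500020.0, 500400.0, 500300.0, 500060.0])

# First-order polynomial x = a0 + a1*col + a2*row, fit by least squares
# (minimizing the sum of squared differences at the GCPs).
A = np.column_stack([np.ones_like(cols), cols, rows])
coeffs, *_ = np.linalg.lstsq(A, map_x, rcond=None)

# Predict the map x of a pixel at (col=100, row=100).
x_pred = coeffs @ np.array([1.0, 100.0, 100.0])
print(x_pred)   # ~500200
```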
How do you choose the polynomial order? why
Choose the lowest polynomial order so it is more efficient and there is less chance of geometric distortion in areas with no GCPs
What is the problem with high-order polynomials
more accurate fit in the location of GCPs but more distortion in areas further away from those GCPs
Describe the second spatial interpolation: thin plate spline
*alternative to polynomial
- surface is forced to fit through all GCPs, making the curves localized around the GCPs
- diminishes rapidly away from the points
*works best for varied smooth surfaces
Disadvantage: requires many points for rough terrain, there is no built in error check since the GCPs are all in the model
Type of distortion that polynomials and thin plate splines DO NOT account for, and that 3D physical models DO
*polynomials and thin plate splines do not account for topographic distortion
*3D physical models do account for relief displacement and elevation in complex terrain (producing orthorectified imagery)
What are the 3 requirements of 3D physical models (spatial interpolation models)?
- DEM (digital elevation model)
- orbital data specific to the sensor and the scene under investigation
- sophisticated software
List the 3 methods of DN interpolation with brief explanations.**
- nearest neighbour- uses input cell value closest to the output cell
- bilinear interpolation- uses distance weighted average of the 4 closest input cells
- cubic convolution -uses distance-weighted average of the 16 closest input cells
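The first two methods can be sketched in plain Python on a 2x2 grid of DNs (interior fractional positions only, as an illustration; cubic convolution works the same way but over a 4x4 neighbourhood).

```python
# Resampling a DN at a fractional position between four input cells.
grid = [[10, 20],
        [30, 40]]   # DNs at (row, col) = (0,0), (0,1), (1,0), (1,1)

def nearest_neighbour(grid, r, c):
    # Uses the input cell closest to the output location; DNs unaltered.
    return grid[round(r)][round(c)]

def bilinear(grid, r, c):
    # Distance-weighted average of the 4 surrounding cells; alters DNs.
    r0, c0 = int(r), int(c)
    fr, fc = r - r0, c - c0
    top = grid[r0][c0] * (1 - fc) + grid[r0][c0 + 1] * fc
    bot = grid[r0 + 1][c0] * (1 - fc) + grid[r0 + 1][c0 + 1] * fc
    return top * (1 - fr) + bot * fr

print(nearest_neighbour(grid, 0.4, 0.4))  # 10 (snaps to the nearest cell)
print(bilinear(grid, 0.5, 0.5))           # 25.0 (average of all four)
```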
Advantages and disadvantages of nearest neighbour
adv- data are not averaged and processing is very fast, does not alter DNs
disadv- looks rough, some values will be lost while others will be duplicated, not good for resampling to a larger spatial resolution
Advantages and disadvantages of bilinear interpolation
adv- smoother than NN
disadv- alters original DNs, slower than NN, intense computation
Advantages and disadvantages of cubic convolution
adv- smoother than BI and NN, looks closest to the original image
disadv- alters original DNs, more computationally intensive than BI and NN
History and origin of remote sensing
- dates back to 1867 when photo was taken from hot air balloon
-WWI: airplanes used to take images of trenches and bases; a regional map of Connecticut was made from a photo mosaic
-WWII: infrared detection and radar
- used for missiles before rockets and race to the moon
Why observe the Earth?
- resource management (lack of resources lead to conflict)- agriculture, forestry, fisheries
- environmental monitoring- weather forecasting, disaster control
- understanding Earth system processes - hydrologic, physical climate, climate change
list the 7 photo interpretive elements
- size
- shape
- shadow
- tone and colour
- pattern
- height and depth
- site, situation, and association
What are the 2 multispectral image systems
- orbital characteristics
- worldwide reference system
Define orbital characteristics and list the 3 characteristics
Describes the path of a satellite through space
1. altitude- satellite’s height
2. orbital period- time needed to complete one trip
3. inclination- the angle of the orbit relative to the equator
Define a polar orbit versus a geosynchronous orbit
polar- a high inclination orbit, passing over or near the poles (earth rotates beneath the satellite for global coverage)
geosynchronous- an orbit whose period matches Earth's rotation, so the satellite stays over the same spot on the equator; polar orbits are more common for detailed earth observation
What’s the difference between LANDSAT 7 and LANDSAT 8
Landsat 7 - detects panchromatic bands, altitude of 705km and 183km wide swath, 8 bit radiometric resolution, 16 day temporal resolution, 98.2 degree inclination
Landsat 8- launched in 2013, also 705km, 185km swath