OBIA Flashcards

1
Q

Objects and human interpretation: what are objects?

A
  • Objects are primitives that form an image
  • Humans see objects, with meaning in real world, rather than pixels
  • Humans use shape, texture, colour, context etc. for understanding objects
2
Q

Image objects are what?

A
  • Basic entities w/in an image that are composed of H-res pixel groups
3
Q

L-res vs. H-res

A
  • L-res: 1 pixel is composed of many integrated objects
  • H-res: 1 object is composed of many individual pixels

4
Q

H-res pixel groups in image objects

A
  • Each H-res pixel group possesses an intrinsic size, shape, and geographic relationship w/ real-world scene components it models
5
Q

What are the key drivers of image objects?

A
  • Very high resolution satellite imagery
  • Integration with GIS
  • Added dimensions to analysis and application
  • Solution to MAUP
6
Q

Pixels vs. objects in GIS

A
  • Pixel = Raster (spectral response, DN)
  • Object = Vector (spectral variables, shape variables, texture variables, context)

7
Q

Added Dimensions to analysis and application using objects

A
  • Classification
  • Change detection
  • Modelling
  • Statistical analysis
  • and more
8
Q

Spectral variables

A
  • Features related to the values of pixels within an object
  • Statistical properties (mean, variance, etc.)

9
Q

Shape variables

A
  • Features related to the shape of an object
  • Area, length, width, border length, shape index, direction, asymmetry, curvature, compactness, etc.

10
Q

Texture variables

A
  • Features related to spatial patterns of pixels within an image object
  • GLCM texture measures etc.
11
Q

Context variables

A
  • Features related to the spatial setting of an object w/in a scene and/or its relationship to other objects in that scene
  • Proximity, distance to neighbour, etc.
12
Q

A 1:1 correspondence between an image object and the geographic entity it represents (e.g. lake, field) means what?

A
  • That the correct spatial scale is being used
  • But difficult to attain w/ perfection b/c objects at different scale levels are extracted from the same image

13
Q

Image object

A

Discrete region of a digital image that is internally coherent and different from its surroundings

14
Q

What are the 3 traits of image objects for segmentation?

A
  • Discreteness (separable and unique from neighbours)
  • Coherence (internally uniform)
  • Contrast (DN differs from neighbours)
15
Q

What is the goal of segmentation, what are geo-objects, and what does it require?

A
  • Image objects represent geo-objects
  • Geo-objects are identifiable and bounded geographic regions (forest, lake, tree, etc.)
  • Requires successful segmentation procedure unique to application
16
Q

What are the ingredients for segmentation?

A
  • Segmentation algorithm
  • Expert knowledge of your image and intended application

17
Q

What are the directions for segmentation?

A
  • Calibrate, i.e. trial and error, until segments (image objects) represent geo-objects
18
Q

What are the 2 types of segmentation algorithms used in image processing?

A
  • Discontinuity based, Similarity based
19
Q

Discontinuity based segmentation algorithms

A
  • Image is partitioned based on abrupt changes in intensity
  • Edge detection and enhancement
  • Limited use in OBIA

20
Q

Similarity based segmentation algorithms

A
  • Image is partitioned into regions based on a set of pre-defined criteria
  • Uses spectral and/or spatial information
  • Methods include thresholding, region-based, watershed
21
Q

Segmentation: Laplacian Edge Detection (Discontinuity based)

A
  • 2D, 2nd-order derivative between adjacent pixels
  • Directionally invariant edge detection
  • A convolution mask is used
  • Replaces the center pixel value with a weighted function of the surrounding pixels
  • Kernels can be 3x3, 7x7, etc.
  • Additional weight to the center pixel, 0 weight to diagonal pixels (see the sketch below)
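
A minimal NumPy/SciPy sketch of the idea on this card, assuming a single-band image array; the 3x3 kernel below weights the center pixel, gives 0 weight to the diagonals, and the convolution replaces each center value with a weighted sum of its neighbours.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Laplacian kernel: extra weight on the center pixel,
# zero weight on the diagonal neighbours (as on the card)
laplacian_3x3 = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]], dtype=float)

def laplacian_edges(image):
    """2nd-order derivative edge detection: convolve so each center
    pixel becomes a weighted function of its surrounding pixels."""
    return convolve(image.astype(float), laplacian_3x3, mode="nearest")

# hypothetical single-band image with a vertical DN discontinuity
img = np.hstack([np.zeros((5, 5)), np.ones((5, 5)) * 100])
edges = laplacian_edges(img)   # strong responses along the edge
```
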
22
Q

Segmentation: Sobel Edge Detection (Discontinuity based)

A
  • Stronger gradients appear brighter; finds discontinuities and highlights edges (see the sketch below)
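
A short sketch, assuming SciPy is available: the Sobel operator estimates horizontal and vertical gradients, and the gradient magnitude is brightest where the DN changes abruptly.

```python
import numpy as np
from scipy.ndimage import sobel

def sobel_edges(image):
    """Gradient magnitude from the Sobel operator; stronger gradients
    (abrupt DN changes) come out brighter, highlighting edges."""
    img = image.astype(float)
    gx = sobel(img, axis=1)   # horizontal gradient
    gy = sobel(img, axis=0)   # vertical gradient
    return np.hypot(gx, gy)
```
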
23
Q

Segmentation: Thresholding (Similarity Based)

A
  • Partitioning of image histograms
  • Uses spectral info only
  • Good for separating distinct objects from the image background
  • Can get a binary image with 0 below the threshold and 1 above
  • Multi-thresholding is more complex (see the sketch below)
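
A minimal NumPy sketch of single thresholding; the threshold value here is hand-picked for illustration, though a histogram-based method such as Otsu's could choose it instead.

```python
import numpy as np

def threshold_binary(image, t):
    """Partition the histogram at t: pixels below the threshold become 0,
    pixels at or above it become 1 (object vs. background)."""
    return (image >= t).astype(np.uint8)

# hypothetical 8-bit band: dark background, bright objects
band = np.array([[ 10,  12, 200],
                 [ 11, 180, 210],
                 [  9,  13,  14]], dtype=np.uint8)
binary = threshold_binary(band, t=100)
```
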
24
Q

What can be used to get melt-pond fraction coverage on sea ice? Why are melt ponds important?

A
  • Single-threshold segmentation producing a binary image of 0 below the threshold, 1 above (see the sketch below)
  • Melt ponds are windows to the ocean below and transmit light, increasing primary productivity
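
Building on the binary image above, a hedged sketch of how melt-pond fraction could be computed; the threshold value and which side of it the ponds fall on depend on the band and are assumptions here.

```python
import numpy as np

def melt_pond_fraction(band, t):
    """Fraction of pixels classified as melt pond after single thresholding
    (1 = pond, 0 = not pond); t is illustrative only."""
    pond_mask = band >= t      # assumption: ponds sit on the bright side of t
    return pond_mask.mean()    # proportion of pixels flagged as pond
```
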
25
Q

Segmentation: Region-based (Similarity Based)

A
  • Partitioning of the image using the region-growing approach
  • Select anchor (seed) points in the image; a true/false function decides membership
  • Start w/ seed points, grow regions by merging adjacent pixels when a predefined similarity criterion is true
  • Similarity criterion: P(Ri) = True
  • Spectral DN and spatial context like texture, shape, etc. can be used
26
Q

Segmentation: Example of Region-based

A
  • Similarity criterion: P(Ri) = True if at least 80% of the pixels in Ri satisfy |zj - mi| <= 2*sigma_i, where zj = DN of the jth pixel in the proposed region Ri, and mi and sigma_i are the mean and std. dev. of Ri
  • When True, an object is made and all pixels in Ri are assigned the value mi (see the sketch below)
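
A simplified region-growing sketch, assuming a single-band image; for brevity it compares each candidate pixel against the seed value with a fixed tolerance rather than re-testing the 80% / 2-sigma criterion on the growing region's statistics.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, max_diff):
    """Grow a region from a seed pixel, merging 4-connected neighbours
    whose DN is within max_diff of the seed value (a stand-in for the
    |zj - mi| <= 2*sigma_i criterion on the cards)."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    seed_val = float(image[seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc]:
                if abs(float(image[nr, nc]) - seed_val) <= max_diff:
                    region[nr, nc] = True
                    queue.append((nr, nc))
    return region

# hypothetical 3x3 DN patch: grows over the ~10-12 DN block only
img = np.array([[10, 11, 50],
                [12, 10, 52],
                [48, 51, 53]], dtype=float)
mask = region_grow(img, seed=(0, 0), max_diff=5)
```
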
27
Q

What are some examples of other segmentation techniques that are less common?

A
  • Watershed, fractals, wavelets, Markov random fields
  • Algorithms are often complex, application specific, and not readily available to users
28
Q

What are 2 examples of built-in segmentation algorithms?

A
  • Trimble eCognition, ENVI Feature Extraction Module
29
Q

Trimble's eCognition multiresolution segmentation algorithm

A
  • Region-merging technique
  • 1-pixel objects merged into larger objects based on a user-specified heterogeneity threshold
  • Result is objects of similar size and comparable scale for one segmentation
  • Process can be repeated at multiple scales, hence multiresolution/multiscale
  • Robust for creating objects of similar size
30
Q

Flow of Trimble's eCognition segmentation

A
  • User inputs: scale parameter, heterogeneity criteria, and band weights
  • One-pixel objects created
  • Local optimization: the potential increase in heterogeneity (f) for each possible merge is evaluated against the threshold (scale parameter); P(Ri) = True when f < scale parameter
  • Global optimization: overall segmentation stops when a global measure of heterogeneity is minimized (see the sketch below)
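
A conceptual sketch of the local optimization step only, not eCognition's actual code: the increase in colour heterogeneity f caused by a candidate merge is compared against the scale parameter, and the merge is accepted only when f stays below it. The size-weighted std. dev. formulation is a common one and is assumed here.

```python
import numpy as np

def colour_heterogeneity_increase(obj1, obj2):
    """Increase in colour heterogeneity from merging two objects, measured
    as the size-weighted change in std. dev. of DNs (assumed formulation)."""
    merged = np.concatenate([obj1, obj2])
    return (merged.size * merged.std()
            - (obj1.size * obj1.std() + obj2.size * obj2.std()))

def should_merge(obj1, obj2, scale_parameter):
    """P(Ri) = True when the heterogeneity increase f stays below the
    user-specified scale parameter (the card's local optimization step)."""
    f = colour_heterogeneity_increase(obj1, obj2)
    return f < scale_parameter

# hypothetical DN lists standing in for the pixels of two small objects
a = np.array([100., 102., 101.])
b = np.array([103.,  99., 100.])
print(should_merge(a, b, scale_parameter=25))   # True: merge accepted
```
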
31
Q

Trimble eCognition Scale and composition of homogeneity

A
  • Scale: allowable heterogeneity; defines the max std. dev. of the homogeneity criteria; higher value = larger resulting image objects
32
Q

Trimble eCognition composition of homogeneity, what are the 4 types?

A
  • Composition of homogeneity: 4 criteria that define the total relative homogeneity for the resulting image objects
  • The 4 criteria are weighted by percentage and equalized to a value of 1
  • Colour, shape, smoothness, compactness
33
Q

Trimble eCognition Colour

A
  • Digital value (colour) of the resulting image objects
  • Colour = 1 - Shape
  • Emphasis on spectral values
34
Q

Trimble eCognition Shape

A
  • Defines the textural homogeneity of the resulting image objects
  • A wrapper: Shape = Smoothness + Compactness
35
Q

Trimble eCognition Smoothness

A
  • Optimizes the resulting image objects in regard to smooth borders w/in the shape criterion
  • Smoothness = (1 - Beta_compactness) x Shape
36
Q

Trimble eCognition Compactness

A
  • Optimizes the resulting image objects in regard to overall compactness w/in the shape criterion
  • Compactness = Beta_compactness x Shape (worked example below)
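
A worked example with hypothetical weights: if Shape = 0.3, then Colour = 1 - 0.3 = 0.7; if Beta_compactness = 0.5, then Compactness = 0.5 x 0.3 = 0.15 and Smoothness = (1 - 0.5) x 0.3 = 0.15. So Smoothness + Compactness = Shape and Colour + Shape = 1, consistent with the formulas on the cards above.
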
37
Q

OBIA workflow depends on what?

A
  • Purpose
38
Q

3 basic steps of image classification?

A
  • Original image
  • Segmented image
  • Scene understanding through quantitative analysis (classified image)
39
Q

Image metrics

A
  • Algorithms for means, std. dev., length, width, area, ratios, spectral mean
  • Can be carried forward to add more dimensions to a classification and quantitative analysis (see the sketch below)
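
A minimal NumPy sketch of per-object spectral metrics, assuming a label image (one integer ID per object) aligned with a single-band DN image; shape metrics such as length and width would additionally need the object geometry.

```python
import numpy as np

def object_spectral_metrics(dn, labels):
    """Per-object mean, std. dev., and area (pixel count) that can be
    carried forward as extra dimensions in a classification."""
    metrics = {}
    for obj_id in np.unique(labels):
        pixels = dn[labels == obj_id]
        metrics[obj_id] = {"mean": float(pixels.mean()),
                           "std": float(pixels.std()),
                           "area_px": int(pixels.size)}
    return metrics
```
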
40
Q

GLCM

A
  • Gray Level Co-occurrence Matrix
  • Texture measurements
  • e.g. land cover separability increases when objects are viewed as mean texture values (see the sketch below)
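
A short sketch using scikit-image's GLCM functions (graycomatrix / graycoprops; older releases spell them greycomatrix / greycoprops), assuming an 8-bit single-band clip of one image object.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(patch):
    """GLCM texture measures for an 8-bit object patch: contrast,
    homogeneity, and energy at distance 1, horizontal offset."""
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy")}

patch = np.random.randint(0, 256, (16, 16), dtype=np.uint8)  # hypothetical object clip
print(glcm_texture(patch))
```
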
41
Q

Forestry applications for OBIA

A
  • Quantify canopies and biomass
  • Identify species
  • Delineate tree crowns
  • Measure cutblocks
42
Q

Urban applications for OBIA

A
  • Plan infrastructure
  • Improve taxation
  • Build greener cities
43
Q

Marine and climate applications for OBIA

A
  • Ecosystem study
  • Harbour and border management
  • Disaster response
  • Change detection
44
Q

Agriculture applications for OBIA

A
  • Precision farming
  • Regional resource management (agriculture and surrounding environment)
45
Q

OBIA summary

A
  • Goes beyond the pixel-based approach
  • Adds context to image classification and quantitative analysis by means of groups of pixels known as image objects
  • Context (colour, shape, etc.) adds information to the classification or quantitative analysis, giving the analyst more intelligence
46
Q

Plate setting analogy

A
  • Placemat alone = the pixel-based approach
  • Placemat with a table setting of plate, utensils, etc. = OBIA