OBIA Flashcards
Objects and human interpretation, objects are?
- Objects are primitives that form an image
- Humans see objects, with meaning in real world, rather than pixels
- Humans use shape, texture, colour, context etc. for understanding objects
Image objects are what?
- Basic entities w/in an image that are composed of H-res pixel groups
L-res vs. H-res
- L: 1 pixel composed of many integrated objects
- H: 1 object composed of many individual pixels
H-res pixel groups in image objects
- Each H-res pixel group possesses an intrinsic size, shape, and geographic relationship w/ real-world scene components it models
What are the key drivers of image objects?
- Very high resolution satellite imagery
- Integration with GIS
- Added dimensions to analysis and application
- Solution to MAUP
Pixels vs. objects in GIS
- Pixel = Raster (spectral response, DN)
- Object = Vector (spectral variables, shape variables, texture variables, context)
Added Dimensions to analysis and application using objects
- Classification
- Change detection
- Modelling
- Statistical analysis
- and more
Spectral variables
- Features related to the values of pixels within an object
- Statistical properties (mean, variance, etc.)
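As a sketch, per-object spectral statistics can be computed from a label map that assigns each pixel to an image object; the image, label values, and object IDs below are all made up for illustration:

```python
import numpy as np

# Hypothetical single-band image and a label map assigning each pixel
# to one of two image objects (labels 1 and 2); both arrays are
# invented for illustration.
image = np.array([[10, 12, 50, 52],
                  [11, 13, 51, 53],
                  [10, 12, 50, 52],
                  [11, 13, 51, 53]], dtype=float)
labels = np.array([[1, 1, 2, 2]] * 4)

# Spectral variables: statistics of the pixel DNs inside each object.
stats = {obj: (image[labels == obj].mean(), image[labels == obj].var())
         for obj in np.unique(labels)}
```

In a real workflow the label map would come from a segmentation step rather than being hand-written.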
Shape variables
- Features related to the shape of an object
- Area, length, width, border length, shape index, direction, asymmetry, curvature, compactness, etc.
Texture variables
- Features related to spatial patterns of pixels within an image object
- GLCM texture measures etc.
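A minimal GLCM sketch for horizontal neighbour pairs, followed by one Haralick-style measure (contrast); the tiny two-level image is invented, and real use would quantize DNs to a small number of gray levels first:

```python
import numpy as np

# Gray-level co-occurrence matrix for offset (0, 1): count how often
# gray level i sits immediately left of gray level j, then normalize
# to probabilities. The 2-level image is made up for illustration.
img = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 1, 0]])
levels = 2
glcm = np.zeros((levels, levels), dtype=float)
for i in range(img.shape[0]):
    for j in range(img.shape[1] - 1):
        glcm[img[i, j], img[i, j + 1]] += 1
glcm /= glcm.sum()  # normalize to probabilities

# GLCM contrast: sum over i, j of P(i, j) * (i - j)^2
idx = np.arange(levels)
contrast = (glcm * (idx[:, None] - idx[None, :]) ** 2).sum()
```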
Context variables
- Features related to the spatial setting of an object w/in a scene and/or its relationship to other objects in that scene
- Proximity, distance to neighbour, etc.
A 1:1 correspondence between an image object and the geographic entity it represents (e.g. lake, field, etc.) means what?
- That the correct spatial scale is being used
- But difficult to attain w/ perfection b/c objects at different scale levels are extracted from the same image
Image object
Discrete region of a digital image that is internally coherent and different from its surroundings
What are the 3 traits of image objects for segmentation?
- Discreteness (separable and unique from neighbour)
- Coherence (interior is uniform)
- Contrast (DN is different from neighbour)
Goal of segmentation, and what are geo-objects, what does it require?
- Image objects represent geo-objects
- Geo-objects are identifiable and bounded geographic regions (forest, lake, tree, etc.)
- Requires successful segmentation procedure unique to application
What are the ingredients for segmentation?
- Segmentation algorithm
- Expert knowledge of your image and intended application
What are the directions for segmentation?
- Calibrate, i.e. trial and error, until segments (image objects) represent geo-objects
What are the 2 types of segmentation algorithms used in image processing?
- Discontinuity based, Similarity based
Discontinuity based segmentation algorithms
- Image is partitioned based on abrupt changes in intensity
- Edge detection and enhancement, Limited use in OBIA
Similarity based segmentation algorithms
- Image is partitioned into regions based on a set of pre-defined criteria
- Uses spectral and/or spatial information
- Methods include thresholding, region-based, watershed
Segmentation: Laplacian Edge Detection (Discontinuity based)
- 2D, 2nd-order derivative between adjacent pixels
- Directionally invariant edge detection
- A convolution mask is used
- Replaces the center pixel value with a weighted function of the surrounding pixels
- Kernels can be 3x3, 7x7, etc.
- Additional weight to the center pixel, 0 weight to the diagonal pixels
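The 3x3 mask described above (extra weight on the center, zeros on the diagonals) can be applied by hand; the small vertical-edge image is made up for illustration:

```python
import numpy as np

# 3x3 Laplacian convolution mask: weight 4 on the center pixel,
# -1 on the 4-connected neighbours, 0 on the diagonals.
kernel = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]])

# Hypothetical image with a vertical edge between columns 1 and 2.
img = np.array([[10, 10, 50, 50],
                [10, 10, 50, 50],
                [10, 10, 50, 50],
                [10, 10, 50, 50]], dtype=float)

# Convolve the interior pixels: each output value is the weighted sum
# of the 3x3 neighbourhood, giving strong responses of opposite sign
# on either side of the edge.
out = np.zeros_like(img)
for i in range(1, img.shape[0] - 1):
    for j in range(1, img.shape[1] - 1):
        out[i, j] = (kernel * img[i - 1:i + 2, j - 1:j + 2]).sum()
```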
Segmentation: Sobel Edge Detection (Discontinuity based)
- Stronger gradients appear brighter; finds discontinuities and highlights edges
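A minimal Sobel sketch on the same kind of made-up vertical-edge image, combining the horizontal and vertical masks into a gradient magnitude so the strongest edge is brightest:

```python
import numpy as np

# Sobel masks: gx responds to vertical edges, gy (its transpose) to
# horizontal edges.
gx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
gy = gx.T

# Hypothetical image with a vertical edge between columns 1 and 2.
img = np.array([[10, 10, 50, 50],
                [10, 10, 50, 50],
                [10, 10, 50, 50],
                [10, 10, 50, 50]], dtype=float)

# Gradient magnitude over the interior pixels: large values mark the
# edge, zero in flat regions.
mag = np.zeros_like(img)
for i in range(1, img.shape[0] - 1):
    for j in range(1, img.shape[1] - 1):
        win = img[i - 1:i + 2, j - 1:j + 2]
        mag[i, j] = np.hypot((gx * win).sum(), (gy * win).sum())
```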
Segmentation: Thresholding (Similarity Based)
- Partitioning of image histograms
- Uses spectral info only
- Good for separating distinct objects from image background
- Can get binary image with 0 below threshold and 1 above
- Multi-thresholding is more complex
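Single thresholding can be sketched as one comparison that partitions the histogram into a binary image (0 below the threshold, 1 at or above it); the image values and threshold here are hypothetical:

```python
import numpy as np

# Hypothetical single-band image: low-DN background on the left,
# high-DN objects on the right.
img = np.array([[12, 14, 80, 82],
                [11, 13, 85, 84],
                [10, 15, 81, 83]])

# Partition the histogram at a single DN threshold.
threshold = 50
binary = (img >= threshold).astype(int)  # 0 below, 1 at/above
```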
What can be used to get melt-pond fraction coverage on sea ice? Why are melt ponds important?
- Single thresholding segmentation using binary image of 0 below threshold, 1 above
- Melt ponds are windows to the ocean below and transmit light, increasing primary productivity
Segmentation: Region-based (Similarity Based)
- Partitioning of image using the Region Growing approach
- Selects anchor (seed) points in the image; a true/false function decides merges
- Starts w/ seed points, grows regions by merging adjacent pixels when a predefined similarity criterion is true
- Similarity criteria: P(Ri) = True
- Spectral DN and spatial contexts like texture, shape, etc. can be used
Segmentation: Example of Region-based
- Similarity criterion P(Ri) = TRUE if at least 80% of the pixels in Ri satisfy |zj − mi| < 2σi, where zj = DN of the jth pixel in proposed region Ri, and mi and σi are the mean and std. dev. of Ri
- When TRUE, an object is made and all pixels in Ri are assigned the value mi
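A region-growing sketch of the same idea, using a simpler stand-in predicate (DN within a fixed tolerance of the seed) instead of the 2-sigma criterion; the image, seed, and tolerance are made up:

```python
import numpy as np
from collections import deque

# Grow a region from a seed pixel: merge 4-connected neighbours while
# the similarity predicate holds. Here the predicate is a fixed DN
# tolerance around the seed value, a simplified stand-in for the
# statistical criterion on the card above.
def grow_region(img, seed, tol):
    region = {seed}
    queue = deque([seed])
    seed_dn = img[seed]
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                    and (ni, nj) not in region
                    and abs(img[ni, nj] - seed_dn) <= tol):
                region.add((ni, nj))
                queue.append((ni, nj))
    return region

# Hypothetical image: a dark patch (top-left) next to a bright one.
img = np.array([[10, 11, 60],
                [12, 10, 61],
                [59, 60, 62]])
region = grow_region(img, (0, 0), tol=5)  # grows over the dark patch
```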
What are some examples of other segmentation techniques that are less common?
- Watershed, fractals, wavelets, Markov random fields
- Algorithms often complex, application-specific, and not readily available to users
What are 2 examples of built-in segmentation algorithms?
- Trimble eCognition, ENVI Feature Extraction Module
Trimble’s eCognition multiresolution segmentation algorithm
- Region merging technique
- 1-pixel objects merged into larger objects based on a user-specified heterogeneity threshold
- Result is objects of similar size and comparable scale for one segmentation
- Process can be repeated at multiple scales, hence multiresolution/multiscale
- Robust for creating objects of similar size
Flow of Trimble’s eCognition segmentation
- User inputs: scale parameter, heterogeneity criteria, and band weights
- One-pixel objects created
- Local optimization: the potential increase in heterogeneity (f) for each possible merge is evaluated against the threshold (scale parameter); P(Ri) = true when f < scale parameter
- Global optimization: overall segmentation stops when a global measure of heterogeneity is minimized
Trimble eCognition Scale and composition of homogeneity
- Scale: Allowable heterogeneity, defines max std. dev. of homogeneity criteria, higher = larger resulting image objects
Trimble eCognition composition of homogeneity, what are the 4 types
- Composition of homogeneity: 4 criteria that define the total relative homogeneity of the resulting image objects
- The 4 criteria are weighted so they sum to a value of 1
- Colour, shape, smoothness, compactness
Trimble eCognition Color
- Digital value (colour) of the resulting image objects
- Color = 1 - shape
- Emphasis on spectral values
Trimble eCognition Shape
- Defines textural homogeneity of the resulting image objects
- A wrapper, shape = smoothness + compactness
Trimble eCognition Smoothness
- Optimizes the resulting image objects in regard to smooth borders w/in the shape criterion
- Smoothness = (1 − Betacompactness) × shape
Trimble eCognition Compactness
- Optimizes the resulting image objects in regard to the overall compactness w/in the shape criterion
- Compactness = Betacompactness × shape
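The weight relations on the cards above can be checked arithmetically; a minimal sketch with hypothetical user choices, assuming colour = 1 − shape, compactness = β × shape, and smoothness = (1 − β) × shape:

```python
# Hypothetical user weights (both in [0, 1]); values are made up.
w_shape = 0.3    # shape weight; colour gets the remainder
w_compact = 0.5  # compactness share of the shape weight

color = 1 - w_shape                       # colour = 1 - shape
compactness = w_compact * w_shape         # compactness = beta * shape
smoothness = (1 - w_compact) * w_shape    # smoothness = (1 - beta) * shape

# The criteria compose back to a total relative homogeneity of 1.
total = color + compactness + smoothness
```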
OBIA workflow depends on what?
- Purpose
3 basic steps of image classification?
- Original image
- Segmented image
- Scene understanding through quantitative analysis, classified image
Image metrics
- Algorithms for means, std. dev., length, width, area, ratios, spectral mean
- Can be carried forward to add more dimensions to a classification and quantitative analysis
GLCM
- Gray Level Co-occurrence Matrix
- Texture measurements
- e.g. land-cover separability increases when objects are viewed as mean texture values
Forestry applications for OBIA
- Quantify canopies and biomass
- Identify species
- Delineate tree crowns
- Measure cutblocks
Urban applications for OBIA
- Plan infrastructure
- Improve taxation
- Build greener cities
Marine and climate applications for OBIA
- Ecosystem study
- Harbour and border management
- Disaster response
- Change detection
Agriculture applications for OBIA
- Precision farming
- Regional resource management (agriculture and surrounding environment)
OBIA summary
- Goes beyond pixel-based approach
- Adds context to image classification and quantitative analysis by means of groups of pixels known as image objects
- Context (colour, shape, etc.) adds info to the classification or quantitative analysis, giving analyst more intelligence
Plate setting analogy
- Placemat is pixel-based approach
- Placemat with table setting of plate, utensils, is OBIA