Definitions Flashcards

1
Q

What is the Euler number in the context of binary image analysis?

A

The Euler number describes the number of connected components and holes in a binary image. This concept is used in image processing to identify and classify the objects in a binary image.

2
Q

Explain, in detail, Contrast Stretching. Outline a specific example of how the operation is used in practice

A
  • Improves the contrast of an image by expanding the intensity range of the pixels in the image.
  • Commonly used to enhance the visibility of details in an image.
  • e.g. Could be used to improve the visibility of small objects in an ultrasound image.
    1. First, the intensity range of the pixels would be calculated.
    2. A new intensity range would be defined that covers a wider range of values, such as 0-255 for an 8-bit image.
    3. The intensity values of the pixels in the image would be mapped to the new intensity range, resulting in an image with improved contrast and better visibility.
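The three steps above can be sketched in a few lines of numpy (the function name `contrast_stretch` and the toy patch are illustrative, not from the source):

```python
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    # Step 1: find the current intensity range of the pixels.
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # a flat image has nothing to stretch
        return np.full(img.shape, out_min, dtype=np.uint8)
    # Steps 2-3: linearly map [lo, hi] onto the wider range [out_min, out_max].
    stretched = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return np.round(stretched).astype(np.uint8)

# Toy patch with a narrow intensity range (e.g. a dim ultrasound region).
patch = np.array([[100, 110], [120, 130]], dtype=np.uint8)
out = contrast_stretch(patch)
```

After stretching, the darkest pixel maps to 0 and the brightest to 255, so the full 8-bit range is used.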
3
Q

Explain, in detail, Image Deconvolution. Outline a specific example of how the operation is used in practice

A
  • Restores the original sharpness of an image that has been degraded by blur or noise.
  • Commonly used to improve the quality of an image.

e.g. Used to remove blur from an image of a person’s face.
1. A model of the blur that has degraded the image would be created (e.g. a Gaussian blur kernel).
2. An algorithm would be used to invert the blur model and apply it to the degraded image.
This would remove the blur from the image and restore its original sharpness.
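A minimal sketch of step 2 as a naive frequency-domain inverse filter in numpy (the regularisation constant `eps` and the simulated 3×3 box blur are illustrative assumptions; practical systems use more robust methods such as Wiener or Richardson-Lucy deconvolution):

```python
import numpy as np

def deconvolve(blurred, kernel, eps=1e-3):
    # Step 1: the blur model is the kernel's frequency response.
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Step 2: invert the blur model; eps keeps near-zero frequencies of H
    # from amplifying noise (a simple Wiener-style regularisation).
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

# Simulate a degraded image: blur a random image with a 3x3 box kernel.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = deconvolve(blurred, kernel)
```

The restored image is closer to the original than the blurred one, though frequencies where the blur response is near zero cannot be fully recovered.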

4
Q

Explain, in detail, Histogram Matching. Outline a specific example of how the operation is used in practice

A
  • Used to match the intensity distributions of two images.
  • Useful for comparing the two images or combining them into a single image.

e.g. Align the intensity values of 2 images taken of the same scene at two different times of the day.
1. Intensity histograms of the two images would be calculated and compared.
2. Intensity values of one of the images would be adjusted to match that of the other image.
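The two steps above can be sketched in numpy via the cumulative distributions of both images (the function name `match_histogram` and the tiny day/evening arrays are illustrative):

```python
import numpy as np

def match_histogram(source, reference):
    # Step 1: intensity histograms -> cumulative distributions of both images.
    src_vals, src_counts = np.unique(source, return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Step 2: map each source value to the reference value at the same CDF level.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[np.searchsorted(src_vals, source)]

# Two "shots of the same scene": identical pattern, different brightness.
morning = np.array([[10, 20], [30, 40]])
evening = morning + 100
matched = match_histogram(morning, evening)
```

Here the morning image's intensities are remapped exactly onto the evening image's distribution.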

5
Q

Roberts edge detection

A

Roberts approach: Involves convolving the image with two 2×2 kernels, one responding to each diagonal edge direction. The kernels are [[1, 0], [0, -1]] and [[0, 1], [-1, 0]]. The convolution is performed by sliding each kernel over the image and computing the dot product between the kernel and the underlying pixels. The two responses are combined and thresholded to identify the edges in the image.
c(i,j) = sqrt(R1^2 + R2^2)
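A numpy sketch of the operation (the threshold value is an illustrative choice; the two array-slice differences are exactly the responses of the two 2×2 kernels):

```python
import numpy as np

def roberts(img, threshold=20):
    img = img.astype(np.float64)
    r1 = img[:-1, :-1] - img[1:, 1:]   # response of kernel [[1, 0], [0, -1]]
    r2 = img[:-1, 1:] - img[1:, :-1]   # response of kernel [[0, 1], [-1, 0]]
    mag = np.sqrt(r1 ** 2 + r2 ** 2)   # c(i,j) = sqrt(R1^2 + R2^2)
    return mag > threshold             # threshold to obtain the edge map

# Vertical step edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 100
edges = roberts(img)
```

Only the column straddling the step is marked as edge.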

6
Q

Prewitt Edge detection

A

Prewitt approach: Similar to Roberts, but uses two 3×3 kernels, one for horizontal edges and one for vertical edges: [[-1, -1, -1], [0, 0, 0], [1, 1, 1]] and [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]. The convolution is performed in the same way as the Roberts approach.
c(i,j) = sqrt(P1^2 + P2^2)

7
Q

Explain region labelling by area

A

A method of labelling the regions in a binary image based on the area or size of the regions. Larger regions are given higher labels and smaller regions are given lower labels. Useful for identifying and classifying objects in an image based on their size, as larger objects tend to be more important or significant than smaller objects.

8
Q

Explain region labelling by location

A

A method of labelling the regions in a binary image based on the location or position of the region. Regions closer to a reference point or region are given higher labels, and regions that are farther away are given lower labels.

9
Q

Explain, in detail, Reconstruction by Dilation. Give an example of how this operation is used in image processing / analysis applications.

A

Mathematical morphology operation that is used to restore or reconstruct the original shape of an object in an image. Performed by first applying dilation to the object to expand its shape, and then applying a mask to the dilated image that keeps only the pixels that are inside the original shape of the object.

Can be useful for removing noise or small, unwanted objects from an image while preserving the original shape of the object.
e.g. removing small, isolated foreground pixels from a binary image of a fingerprint.
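A binary scipy sketch of the idea (the fingerprint-style mask is a toy example): dilation of a marker is repeated, each time intersected with the mask, until the result stops changing, so only mask components touching the marker survive.

```python
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(marker, mask):
    # Iterate geodesic dilation (dilate, then restrict to the mask)
    # until stability.
    prev = np.zeros_like(marker)
    current = marker & mask
    while not np.array_equal(prev, current):
        prev = current
        current = ndimage.binary_dilation(current) & mask
    return current

# Mask with one large component and two isolated "noise" pixels.
mask = np.array([[1, 1, 0, 0, 1],
                 [1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 0]], dtype=bool)
marker = np.zeros_like(mask)
marker[0, 0] = True          # seed placed inside the large component
clean = reconstruct_by_dilation(marker, mask)
```

The large component is recovered exactly, while the two isolated pixels are dropped.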

10
Q

Explain, in detail, Greyscale Dilation. Give an example of how this operation is used in image processing / analysis applications.

A

The output at each point is the maximum value for which the structuring element, centred at that point and level, does not fit entirely within the background above the surface. Used to expand the shape of an object in a greyscale image.

11
Q

Explain, in detail, Geodesic Dilation. Give an example of how this operation is used in image processing / analysis applications.

A

Point-wise minimum between a mask image and the elementary dilation of the marker image.

Used to expand the shape of an object in an image based on a predefined metric or distance function. Similar to regular dilation, but allows more control over the shape of the dilated object by using a distance function to define how the pixels in the image should be expanded.

e.g. Used to expand the shape of a blood vessel in an image of the human retina.
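One elementary geodesic dilation, as defined above, in a short scipy sketch (the 3×3 neighbourhood and the toy "vessel" mask are illustrative):

```python
import numpy as np
from scipy import ndimage

def geodesic_dilation(marker, mask, size=3):
    # Elementary dilation of the marker, then point-wise minimum with
    # the mask, so the marker can never grow beyond the mask.
    dilated = ndimage.grey_dilation(marker, size=(size, size))
    return np.minimum(dilated, mask)

# A marker seed inside a thin structure defined by the mask.
mask = np.array([[0, 5, 5, 5, 0],
                 [0, 0, 5, 0, 0]])
marker = np.zeros_like(mask)
marker[0, 1] = 5
step1 = geodesic_dilation(marker, mask)
```

The marker grows one step along the masked structure and nowhere else; repeating this until stability gives reconstruction by dilation.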

12
Q

Explain Hough Transform

A

Hough Transform is a mathematical method that can be used to detect lines, circles or other geometric shapes in an image. This method works by converting the image into parameter space, where each point in the image is represented by a set of parameters, such as the slope and the intercept of a line, or the centre and radius of a circle. The Hough Transform then searches the parameter space for clusters of points that represent lines or shapes in the image, and returns the corresponding parameters as the output.
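A compact numpy sketch of the line Hough transform (the accumulator resolution of one pixel in rho and one degree in theta is an illustrative choice):

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    # Every edge pixel votes for all lines rho = x*cos(t) + y*sin(t)
    # passing through it; peaks in the accumulator are detected lines.
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)   # rho axis: [-diag, diag)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Edge image containing a single horizontal line y = 4.
img = np.zeros((10, 10), dtype=bool)
img[4, :] = True
acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
```

The peak collects one vote per edge pixel, and its coordinates give back the line's parameters (rho ≈ 4, theta ≈ 90°).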

13
Q

Explain, in detail, Unsharp Masking. In each case, outline a specific example of how each of the image processing and analysis operations can be used in practice.

A

Used to enhance the contrast and sharpness of an image. Works by subtracting a blurred version of the image from the original to obtain a mask of the edges and other high-frequency details, which is then added back to the original to produce a sharper image.

e.g. to improve quality of an image of a fingerprint for use in biometric identification
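A minimal scipy sketch (the Gaussian blur radius and `amount` gain are illustrative parameters):

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(img, sigma=2.0, amount=1.0):
    img = img.astype(np.float64)
    blurred = ndimage.gaussian_filter(img, sigma)
    detail = img - blurred          # the "mask": edges and fine detail
    return img + amount * detail    # add the detail back to sharpen

# A step edge: sharpening overshoots on both sides, increasing contrast.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
sharp = unsharp_mask(img)
```

The dark side of the edge dips below 0 and the bright side overshoots 100, which is exactly the perceived sharpening effect.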

14
Q

Explain, in detail, Intensity Normalisation. In each case, outline a specific example of how each of the image processing and analysis operations can be used in practice.

A

Used to adjust the brightness and contrast of an image so that its intensity values are distributed evenly across the full range of possible values. Can be achieved by mapping the intensity values of an image into a standard distribution, such as a Gaussian distribution.

This can be useful when comparing images taken under different lighting conditions or when combining multiple images into a single composite image.

15
Q

Explain, in detail, DOLP filtering. In each case, outline a specific example of how each of the image processing and analysis operations can be used in practice.

A

Difference of low-pass filtering is used to enhance the contrast of an image by suppressing its low-frequency components and emphasising its high-frequency components. Achieved by low-pass filtering the image at two different scales and subtracting one result from the other, which cancels the low-frequency content shared by both.

Can be useful in medical imaging, where DOLP filtering can be used to improve the visibility of fine details in an image of a tissue sample.
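A sketch of DOLP filtering as the difference of two Gaussian low-pass filters (the two sigma values and the toy "tissue" arrays are illustrative):

```python
import numpy as np
from scipy import ndimage

def dolp(img, sigma_fine=1.0, sigma_coarse=4.0):
    # The difference of two low-pass results keeps the frequency band
    # between the two cut-offs: the flat background is suppressed.
    img = img.astype(np.float64)
    return (ndimage.gaussian_filter(img, sigma_fine)
            - ndimage.gaussian_filter(img, sigma_coarse))

flat = np.full((16, 16), 50.0)       # uniform background
detail = flat.copy()
detail[8, 8] = 150.0                 # one small bright detail
```

A uniform background gives zero response; a fine detail produces a clear positive peak.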

16
Q

State the difference between a Flat and Non-Flat structuring element.

A

Main difference: the presence or absence of internal (grey-level) structure. Flat SEs are simple and easy to use but limited in their capabilities, while non-flat SEs are more complex and versatile but require more care and attention in their use.

17
Q

Explain a Flat Structuring element

A

Has a defined size and shape but no internal structure, so it behaves as a simple mask of member pixels. Useful for simple operations such as erosion and dilation, where the shape and size of the neighbourhood are the only important factors.

Can be used to define an arbitrary shaped SE but it will be a mask kernel, not a grey level SE.

18
Q

Explain a non-flat SE

A

Has both foreground and background pixels, and may carry grey-level values, so it has an internal structure.
can be used to define more complex operations, such as hit-and-miss and distance transforms.

Can be used to define an arbitrary shaped grey level SE.

19
Q

Detail the operation of the White Top-Hat transform.

A

extracts the small details, or the bright spots, from an image.
Based on the difference between the input image and its opening (the opening removes the small bright details from the image).
WhiteTopHat(image) = Image - opening(image)
Can be applied to greyscale, colour or binary images.
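A scipy sketch of the formula above (the 3×3 structuring element and toy image are illustrative):

```python
import numpy as np
from scipy import ndimage

def white_top_hat(img, size=3):
    # Image minus its opening: only bright details smaller than the
    # structuring element survive.
    opened = ndimage.grey_opening(img, size=(size, size))
    return img - opened

img = np.zeros((8, 8), dtype=int)
img[1:5, 1:5] = 100     # large bright object: removed by the top-hat
img[6, 6] = 100         # small bright spot: kept by the top-hat
tophat = white_top_hat(img)
```

The large block survives the opening and so cancels out; the isolated bright pixel does not, and is what the top-hat extracts.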

20
Q

Explain, in detail, Idempotency. Give an example of how this operation could be used in image processing/analysis.

A

The property of an operation that produces the same result when applied multiple times, regardless of the order or the number of times the operation is applied. Useful property of some operations, because it allows them to be applied multiple times, without changing the result. e.g. the thresholding operation, which maps the input image to a binary image, is idempotent, because it always produces the same binary image, regardless of the number of times it is applied.

AB = ( AB )B and A.B = ( A.B ).B.
 = white circle
. = black circle
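The property can be checked directly with scipy's binary opening and closing (the random test image is illustrative; its border is cleared so scipy's array-edge handling does not interfere with the comparison):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.random((20, 20)) > 0.5
img[[0, -1], :] = False          # empty one-pixel margin avoids
img[:, [0, -1]] = False          # border artefacts

# Applying opening (or closing) a second time changes nothing.
opened_once = ndimage.binary_opening(img)
opened_twice = ndimage.binary_opening(opened_once)
closed_once = ndimage.binary_closing(img)
closed_twice = ndimage.binary_closing(closed_once)
```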

21
Q

Explain Greyscale Erosion. Give an example of how it could be used in image processing/analysis.

A

The maximum value for which the structuring element, centred at that point and level, still fits entirely within the foreground under the surface.

Erodes the boundaries of objects in a greyscale image.

e.g. If an image has some speckle noise or other artefacts that appear as small, isolated pixels, greyscale erosion can be used to remove these pixels and improve the overall quality of the image.

22
Q

Explain Hit-and-Miss Transform. Give an example of how it could be used in image processing/analysis.

A

Used to select pixels with certain geometric properties (e.g. corners):
Typically used to find and extract objects with a specific shape or orientation, or to locate specific features in an image.

Performed by convolving the image with 2 structuring elements, known as the “hit” and “miss” elements. Hit specifies the shape and orientation of the pattern that the transform is looking for. Miss specifies the shape and orientation of any pixels that must not be present in the pattern.

At each position, the values of the hit and miss elements are multiplied together and the result is compared to the value of the corresponding pixel in the image. If the result is 0, the pattern is not present; if the result ≠ 0, the pattern is present.

e.g. if an image contains many objects of different shapes and sizes, the hit and miss can be used to find and extract only the objects that have a specific shape or orientation.

23
Q

Define Template Matching

A

method for finding and matching a pre-defined pattern, or template, in an image. It is typically used for tasks such as object recognition or tracking, where the goal is to find a specific pattern or shape in the image.
e.g. commonly used in security systems, where it is used to find and track objects or people in surveillance video.

24
Q

Define Pattern Recognition using Feature Extraction

A

Method for identifying and classifying objects in an image based on their characteristics. Typically used for tasks such as classification or clustering, where the goal is to assign objects in the image to different classes or groups.
e.g. commonly used in medical imaging, where it is used to identify and classify abnormalities in medical images.

25
Q

Define global thresholding

A

Dependent only on the grey level of a given point.
Uses a single, fixed threshold value for the entire image.
The algorithm compares the intensity value of each pixel in the image to the fixed threshold value. If the intensity value is greater than the threshold value, the pixel is set to white; otherwise it is set to black.
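The rule above in one line of numpy (the threshold value 128 is an illustrative choice):

```python
import numpy as np

def global_threshold(img, t):
    # One fixed threshold for every pixel: above -> white, otherwise black.
    return np.where(img > t, 255, 0).astype(np.uint8)

img = np.array([[10, 200], [130, 90]], dtype=np.uint8)
binary = global_threshold(img, t=128)
```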

26
Q

Define adaptive / local thresholding

A

Dependent on the grey level of a given point and of its neighbouring points.
Uses a threshold value computed locally for each pixel from its neighbourhood (e.g. the local mean), rather than one fixed value for the entire image.
The algorithm compares the intensity value of each pixel to its local threshold value. If the intensity value is greater than the threshold, the pixel is set to 1; otherwise it is set to 0.
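A sketch of one common local scheme, thresholding each pixel against its own neighbourhood mean (the 5×5 window and the offset, which absorbs border and noise effects, are illustrative choices):

```python
import numpy as np
from scipy import ndimage

def local_threshold(img, size=5, offset=0):
    # Each pixel is compared to the mean of its own neighbourhood, so the
    # decision adapts to local illumination rather than one global value.
    img = img.astype(np.float64)
    local_mean = ndimage.uniform_filter(img, size=size)
    return (img > local_mean + offset).astype(np.uint8)

# A brightness ramp defeats a single global threshold, but the local
# comparison picks out only the genuinely anomalous pixel.
x = np.linspace(0, 100, 32)
img = np.tile(x, (32, 1))
img[16, 16] += 50                       # one small bright detail
binary = local_threshold(img, offset=3)
```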

27
Q

Define Dynamic thresholding

A

Local + dependent on the point's co-ordinates. Can be used to deal with uneven illumination.
Uses a single, variable threshold value that is updated dynamically based on the characteristics of the image.
Algorithm starts by setting the initial threshold value to the average intensity value of the entire image. Compares the intensity value of each pixel in the image to the threshold value. If the intensity value of the pixel is greater than the threshold value, algorithm increases the threshold value slightly.

28
Q

Outline Point-by-point operations

A

Involve applying a mathematical operation to each individual pixel in an image. Different from spatial operations, which involve applying an operation to a group of pixels in a local region in an image.

Useful for colour-correction, contrast adjustment, noise reduction, edge detection.

29
Q

Outline, in detail, classification operations in image processing/analysis

A

Involve using algorithms and techniques to assign labels or classes to pixels or regions in an image.

Useful for object recognition, pattern recognition, image segmentation.

30
Q

Explain correlation

A

Measurement of the similarity between two signals or sequences.

31
Q

Explain convolution

A

Measurement of the effect of one signal on another; computed like correlation, but with one of the signals reversed.
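The two operations differ only in whether the second signal is flipped before sliding, which numpy makes easy to see (the example sequences are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.5])

conv = np.convolve(a, b)                 # flips b, then slides and sums
corr = np.correlate(a, b, mode="full")   # slides b as-is (similarity)
```

Correlating with `b` gives the same result as convolving with `b` reversed.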

32
Q

Grass fire transform

A

Assigns to each point in the image its distance to the nearest edge (boundary) point.
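A sketch using scipy's Euclidean distance transform (the 5×5 foreground square is illustrative): each foreground pixel receives its distance to the nearest background point, as if a fire lit at the boundary burned inwards at constant speed.

```python
import numpy as np
from scipy import ndimage

img = np.zeros((7, 7), dtype=bool)
img[1:6, 1:6] = True                     # 5x5 foreground square
dist = ndimage.distance_transform_edt(img)
```

The centre of the square is farthest from the boundary; pixels on the square's own edge are one step away.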

33
Q

Closing

A

Dual morphological operation of opening. The closing of image set A by structuring element B is a dilation of A by B followed by an erosion by B: (A + B) - B.

34
Q

Binary dilation

A

(denoted A + B): Expansion of image set A by structuring element B. Represented as the union of translates of the structuring element B.

35
Q

Binary erosion

A

(denoted A - B): Equivalent to the shrinking (or reduction) of image set A by structuring element B. Represented as the intersection of the negative translates.

36
Q

Internal gradient (half-gradient by erosion)

A

enhances the internal boundaries of objects brighter than the background.

External boundaries of objects darker than the background are also enhanced.

A - (A - B)

37
Q

External Gradient (half-gradient by dilation)

A

Complementary operation to the internal gradient

(A + B) - A

38
Q

Morphological gradient

A

Sum of the internal and external gradients

(A + B) - (A - B)

39
Q

Covariance

A

Erosion of an image by a structuring element consisting of two points separated by an increasing distance along the horizontal direction.

40
Q

Conditional dilation

A

If dilation is simply repeated, the original image grows without bound; conditional dilation constrains each dilation step by intersecting the result with a condition (mask) image, so growth is limited to the region of interest.

41
Q

Global Image Transforms

A

Each element in the output picture, B, is calculated using all, or at least a large proportion, of the pixels in the input picture A.