Machine Vision Flashcards

1
Q

T/F: If an object absorbs all the light illuminating it, it appears white.

A

False

2
Q

T/F: Perspective transformations preserve parallel lines only when they are parallel to the projection plane.

A

True

3
Q

T/F: Binary images have three different color channels for every pixel.

A

False

4
Q

T/F: The Hough transform is mainly used for corner detection.

A

False

5
Q

The watershed algorithm grows level by level, starting with the darkest pixels.

A

True

6
Q

In a composite transformation, each affine transformation matrix is applied to the input image one by one.

A

False
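This is False because in practice the individual affine matrices are first multiplied into a single composite matrix, and only that one matrix is applied to the image, so the pixels are resampled once rather than once per transform. A minimal numpy sketch with hypothetical scale and translation matrices:

```python
import numpy as np

# Hypothetical 2D affine transforms in homogeneous coordinates.
scale = np.array([[2.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])
translate = np.array([[1.0, 0.0, 5.0],
                      [0.0, 1.0, 3.0],
                      [0.0, 0.0, 1.0]])

# Compose into a single matrix first (the rightmost matrix acts first),
# then transform each point exactly once.
composite = translate @ scale

point = np.array([1.0, 1.0, 1.0])           # homogeneous point (x, y, 1)
once = composite @ point                     # one application
step_by_step = translate @ (scale @ point)   # transform-by-transform

# Mathematically identical, but applying the composite to an image
# avoids interpolating the pixel data multiple times.
```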

7
Q

Sobel edge detection uses the alignment of the edges when detecting them.

A

False

8
Q

The erosion operation in morphological filters can be used to remove small noisy pixels from the input image.

A

True
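A sketch of how erosion removes small noise (a naive, loop-based implementation for illustration only; the image and structuring element are hypothetical):

```python
import numpy as np

def erode(img, se):
    """Binary erosion: a pixel stays foreground only if the structuring
    element, centred on it, fits entirely inside the foreground."""
    k = se.shape[0] // 2
    padded = np.pad(img, k, constant_values=False)
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = np.all(window[se])
    return out

img = np.zeros((7, 7), dtype=bool)
img[1, 1] = True          # an isolated noisy pixel
img[3:6, 3:6] = True      # a solid 3x3 object

eroded = erode(img, np.ones((3, 3), dtype=bool))
# The noisy pixel is removed; only the object's centre pixel survives.
```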

9
Q

SIFT descriptors are used for detecting circle parameters.

A

False

10
Q

SIFT descriptors are scale and rotation invariant.

A

True

11
Q

In the traditional machine vision pipeline, hand-crafted features are used for classification.

A

True

12
Q
A

False

13
Q

_________ is the principal factor determining the spatial resolution of an image.

A

Sampling

14
Q

We use the validation data split to __________.

A

Tune hyperparameters

15
Q

In SIFT descriptors, we use ___________ to obtain invariance to illumination.

A

Gradients

16
Q

In order to detect lines in a binary image, which method can be used?

A

Hough

17
Q

Grayscale images have one channel, but color images have more than one. For color images, we can use the RGB color model and two additional color models. Briefly explain RGB and one of the two additional models.

A

RGB (red, green, blue) is an additive color model typically used for screens; each pixel has three samples with integer values from 0 to 255.
CMY (cyan, magenta, yellow) is a subtractive color model used for printers.
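A sketch of how the two models relate (assuming 8-bit samples): CMY values are the subtractive complements of RGB.

```python
import numpy as np

# Hypothetical RGB pixel with 8-bit samples in the range 0-255.
rgb = np.array([255, 128, 0])   # an orange tone

# CMY is the subtractive complement of RGB: C = 255 - R, and so on.
cmy = 255 - rgb
```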

18
Q

The definition of image resolution is twofold: gray-level resolution and spatial resolution. Sampling and quantization may change the resolution. Briefly explain sampling and quantization.

A

Sampling: the principal factor determining the spatial resolution (dpi / pixels per unit) of an image.
Quantization: the process of converting the continuous analogue signal into a digital representation of the image.
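A small sketch of re-quantization, i.e. reducing the number of gray levels (the pixel values here are hypothetical):

```python
import numpy as np

# Hypothetical 8-bit grayscale samples (256 possible levels).
pixels = np.array([0, 37, 120, 200, 255])

# Re-quantize to 4 gray levels by collapsing each 64-wide bin
# of intensities to a single representative value.
levels = 4
step = 256 // levels
quantized = (pixels // step) * step
# Fewer intensity levels means lower gray-level resolution.
```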

19
Q

To perform affine and perspective transformations, we can use the inverse and forward transforms. Comment on the differences between the inverse and forward transforms. Which one generates better output images? Briefly explain your reasoning.

A
20
Q
A
21
Q
A
22
Q

________ is the principal factor for determining the spatial resolution.

A

Sampling

23
Q

The more samples in a fixed range, the lower the resolution.

A

False

24
Q

Goals for Machine Vision are Acquisition, Improvement, Analysis, and Understanding. Please match them with descriptions.

Using features and attributes find the meaningful results from the images.

A

Understanding

25
Q

Goals for Machine Vision are Acquisition, Improvement, Analysis, and Understanding. Please match them with descriptions.

Designing an imaging system to visualize the environment.

A

Acquisition

26
Q

Goals for Machine Vision are Acquisition, Improvement, Analysis, and Understanding. Please match them with descriptions.

Preparing images for measurement of the features and structures present.

A

Analysis

27
Q

Improving the visual appearance of captured images and videos.

A

Improvement

28
Q

If an object absorbs (subtracts) all the light illuminating it, it appears __________.

A

black

29
Q

T/F: Intensity level resolution refers to the number of intensity levels used to represent the image.

A

True

30
Q

_________ is the process of converting a continuous analogue signal into a digital representation of this signal.

A

Quantization

31
Q

The _________ color model is useful for hardware implementations and is closely related to the way in which the human visual system works.

A

RGB

32
Q

Which color model is used in the color printers and copiers?

A

CMYK

33
Q

Common Image formats include Black & White, Grayscale, and color. Please match each with its descriptions.

3 samples per point with integer values.

A

Color

34
Q

Common Image formats include Black & White, Grayscale, and color. Please match each with its descriptions.

1 sample per point with an integer value.

A

Grayscale

35
Q

Common Image formats include Black & White, Grayscale, and color. Please match each with its descriptions.

1 sample per point with a binary value.

A

Black & White Image

36
Q

The subtractive color system involves light emitted directly from a source.

A

False

37
Q

For noise reduction, what kind of filter is useful?

A

Smoothing

38
Q

In smoothing filters, the sum of the mask coefficients is 0.

A

False

39
Q

Which filtering technique provides enhanced edges on a dark background?

A

Sharpening

40
Q

What kind of filter is given below?

A

Weighted Averaging Filter
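The card's figure is not reproduced here, but a commonly used 3x3 weighted-averaging mask looks like the one below (whether this is the exact mask from the card is an assumption):

```python
import numpy as np

# A common 3x3 weighted-averaging mask: the centre pixel gets the
# largest weight, and the coefficients sum to 1 (not 0).
mask = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]]) / 16.0

# Apply it at the centre of a hypothetical noisy patch.
patch = np.array([[10, 10, 10],
                  [10, 90, 10],
                  [10, 10, 10]], dtype=float)
smoothed = np.sum(mask * patch)   # the bright spike is averaged down
```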

41
Q

Which image enhancement technique directly manipulates the image pixels?

A

Spatial Domain Techniques

42
Q

For segmentation, which method can be used to isolate an object of interest from a background?

A

Thresholding
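A minimal sketch of isolating an object by thresholding (the image values and threshold are hypothetical):

```python
import numpy as np

# Hypothetical grayscale image: a bright object on a dark background.
img = np.array([[ 10,  12, 200],
                [ 11, 210, 205],
                [  9,  13,  14]])

threshold = 128
mask = img > threshold   # True exactly where the object pixels are
```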

43
Q

Which technique can be used to remove imperfections added during segmentation?

A

Morphological Processing

44
Q

The closing of image f by structuring element s is simply a dilation followed by an erosion.

A

True

45
Q

Structuring elements are of a fixed size but can take any shape.

A

False

46
Q

For the dilation operation in morphological filters, the structuring element s is positioned with its origin at (x, y) and the new pixel value is determined using the fit rule.

A

False

47
Q

Which of the following filters is not used for sharpening?

Sobel
Prewitt
Laplacian
Median

A

Median

48
Q

What filters are used for sharpening?

A

Sobel
Prewitt
Laplacian

49
Q

For line detection, which technique can be used?
Sobel
Median
Canny
Hough

A

Hough
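A minimal sketch of the Hough transform for lines, using the rho = x*cos(theta) + y*sin(theta) parameterization; the accumulator peaks where many pixels vote for the same line:

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Each foreground pixel votes for every (rho, theta) line that
    passes through it; peaks in the accumulator are detected lines."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in zip(*np.nonzero(binary)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# A horizontal line y = 2 in a small binary image.
img = np.zeros((5, 5), dtype=bool)
img[2, :] = True

acc, diag = hough_lines(img)
votes = acc[2 + diag, 90]   # the cell for theta = 90 deg, rho = 2
# All 5 pixels of the line vote for this cell.
```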

50
Q

When we shift a window on an image, we may see a change in grayscale. This helps us determine edge, corner, and flat regions. Match the following descriptions with the changes in grayscale values.

Significant change in all directions.

A

Corner

51
Q

When we shift a window on an image, we may see a change in grayscale. This helps us determine edge, corner, and flat regions. Match the following descriptions with the changes in grayscale values.

No change in any direction.

A

Flat

52
Q

When we shift a window on an image, we may see a change in grayscale. This helps us determine edge, corner, and flat regions. Match the following descriptions with the changes in grayscale values.

No change along one direction.

A

Edge

53
Q

SIFT descriptors are NOT scale and rotation invariant.

A

False

54
Q

What step is this in generating SIFT descriptors?

Normalize the rotation / scale of the patch.

A

1st

55
Q

What step is this in generating SIFT descriptors?

Divide into sub-patches

A

3rd

56
Q

What step is this in generating SIFT descriptors?

Describe the patch with 128 numbers.

A

5th

57
Q

What step is this in generating SIFT descriptors?

Compute gradient at each pixel.

A

2nd

58
Q

What step is this in generating SIFT descriptors?

In each sub-patch, compute histogram of 8 gradient directions.

A

4th

59
Q

What are the steps in generating SIFT descriptors?

A

1st: Normalize the rotation / scale of the patch.

2nd: Compute gradient at each pixel.

3rd: Divide into sub-patches.

4th: In each sub-patch, compute histogram of 8 gradient directions.

5th: Describe the patch with 128 numbers.
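Steps 2 and 4 above can be sketched for a single sub-patch as follows (a simplified illustration; real SIFT additionally applies Gaussian weighting and interpolation between bins):

```python
import numpy as np

def orientation_histogram(patch, n_bins=8):
    """Compute gradients at each pixel (step 2) and histogram their
    directions into 8 bins, weighted by magnitude (step 4)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)       # angles in [0, 2*pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist

# A vertical step edge: every gradient points in the +x direction,
# so all the magnitude lands in a single orientation bin.
patch = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))
hist = orientation_histogram(patch)
# 16 such sub-patches x 8 bins each gives the 128 numbers of step 5.
```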

60
Q

Using _______ gives invariance to illumination in SIFT descriptors.

A

Gradients

61
Q
A

Scaling

62
Q
A

Forward Transform

63
Q

Sort the following steps for the outline of the Bag of Features:

_______ Learn “visual vocabulary”.
_______ Represent images by frequencies of “visual words”
_______ Quantize local features using visual vocabulary.
_______ Extract local features.

A

2nd

4th

3rd

1st
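The four steps above can be sketched end to end; the "vocabulary" below is hypothetical (in practice the visual words of step 2 come from k-means clustering over many training features):

```python
import numpy as np

# Hypothetical "visual vocabulary" of 3 words (step 2's cluster centres).
vocab = np.array([[0.0, 0.0],
                  [5.0, 5.0],
                  [10.0, 0.0]])

# Step 1 output: local features extracted from one image.
features = np.array([[0.2, 0.1],
                     [4.9, 5.2],
                     [5.1, 4.8],
                     [9.8, 0.3]])

# Step 3: quantize each feature to its nearest visual word.
dists = np.linalg.norm(features[:, None, :] - vocab[None, :, :], axis=2)
words = dists.argmin(axis=1)

# Step 4: represent the image by its word frequencies (a histogram).
histogram = np.bincount(words, minlength=len(vocab))
```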

64
Q

What step is this in the outline of the Bag of Features:

_______ Learn “visual vocabulary”.

A

2nd

65
Q

What step is this in the outline of the Bag of Features:

_______ Represent images by frequencies of “visual words”

A

4th

66
Q

What step is this in the outline of the Bag of Features:

_______ Quantize local features using visual vocabulary.

A

3rd

67
Q

What step is this in the outline of the Bag of Features:

_______ Extract local features.

A

1st

68
Q

We use the validation data split to _________

A

tune hyperparameters
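A sketch of a typical split (the 70/15/15 proportions are an assumption, not fixed by the card):

```python
import numpy as np

rng = np.random.default_rng(0)
indices = rng.permutation(1000)   # shuffle a hypothetical dataset

# The largest portion trains the model; the validation split is what
# we use to tune hyperparameters; the test split stays held out for
# the final evaluation.
train = indices[:700]
val = indices[700:850]
test = indices[850:]
```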

69
Q

When we split a dataset into training, validation, and testing portions, we use the largest portion for testing.

A

False

70
Q

Manipulating Images in the HSI Model

A
71
Q
A