Machine Vision Flashcards
T:F If an object absorbs all the light illuminating it, it appears white.
False
T:F Perspective transformations preserve parallel lines only when they are parallel to the projection plane.
True
T:F Binary images have three different color channels for every pixel.
False
T:F The Hough transform is mainly used for corner detection.
False
The watershed algorithm grows level by level, starting with dark pixels.
True
In composite transformation, each affine transformation matrix is applied to the input image one by one.
False
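The card above is false because, in a composite transformation, the individual matrices are first multiplied into one composite matrix, and only that single matrix is applied to the image. A minimal pure-Python sketch with 3x3 homogeneous coordinates (helper names are illustrative):

```python
# Composing affine transforms into one matrix (pure Python, 3x3
# homogeneous coordinates). Names like matmul/apply are illustrative.

def matmul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    """Apply a 3x3 matrix to a 2D point in homogeneous coordinates."""
    x, y = p
    v = [m[0][0]*x + m[0][1]*y + m[0][2],
         m[1][0]*x + m[1][1]*y + m[1][2],
         m[2][0]*x + m[2][1]*y + m[2][2]]
    return (v[0] / v[2], v[1] / v[2])

scale = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]       # scale by 2
translate = [[1, 0, 5], [0, 1, 3], [0, 0, 1]]   # shift by (5, 3)

composite = matmul(translate, scale)            # one combined matrix
p = (1.0, 1.0)
one_pass = apply(composite, p)                  # single application
two_pass = apply(translate, apply(scale, p))    # step by step
assert one_pass == two_pass == (7.0, 5.0)
```

Applying the one composite matrix gives the same result as applying the matrices one by one, but with a single pass over the pixels.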
Sobel edge detection uses the alignment of edges when detecting them.
False
Erosion in morphological filtering can be used to remove small noisy pixels from the input image.
True
SIFT descriptors are used for detecting circle parameters.
False
SIFT descriptors are scale and rotation invariant.
True
In traditional machine vision pipeline, hand-crafted features are used for classification.
True
_________ is the principal factor determining the spatial resolution of an image.
Sampling
We use the validation data split to __________.
Tune hyperparameters
In SIFT descriptors, we use ___________ to obtain invariance to illumination.
Gradients
In order to detect lines in a binary image, which method can be used?
Hough
Grayscale images have one channel, but color images have more than one. For color images, we can use the RGB color model and two additional color models. Shortly explain RGB and one of the two additional color models.
RGB is red, green, blue; it is an additive color model typically used for screens, with 3 samples per pixel taking integer values from 0 to 255.
CMY is cyan, magenta, and yellow; it is a subtractive color model used for printers.
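As a quick sketch of how the two models relate, CMY is the complement of RGB for 8-bit values (a common convention; the helper name is illustrative):

```python
def rgb_to_cmy(r, g, b):
    """CMY is the complement of RGB for 8-bit (0-255) values."""
    return (255 - r, 255 - g, 255 - b)

# Pure red needs no cyan ink (cyan absorbs red), but full magenta + yellow.
assert rgb_to_cmy(255, 0, 0) == (0, 255, 255)
```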
The definition of image resolution is twofold: gray-level resolution and spatial resolution. Sampling and quantization may change the resolution. Briefly explain sampling and quantization.
Sampling: the principal factor determining the spatial resolution (dpi / pixels per unit) of an image.
Quantization: converts the continuous analogue signal into a digital representation of the image.
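The two concepts can be sketched in a few lines of Python (the level counts and pixel values are illustrative):

```python
def quantize(value, levels):
    """Quantization: map an 8-bit intensity to one of `levels` gray levels."""
    step = 256 // levels
    return (value // step) * step

def downsample(row, factor):
    """Sampling: keep every `factor`-th pixel (lower spatial resolution)."""
    return row[::factor]

row = [0, 30, 60, 90, 120, 150, 180, 210]
assert downsample(row, 2) == [0, 60, 120, 180]   # fewer samples
assert quantize(200, 4) == 192                   # 4 levels: 0, 64, 128, 192
```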
To perform affine and perspective transformations, we can use inverse and forward transform. Comment on the differences between inverse and forward transform. Which one generates better output images? Briefly explain your reasons.
The forward transform maps each input pixel to an output location; since the mapped coordinates are usually non-integer, some output pixels receive no value (holes) while others receive several (overlaps). The inverse transform visits every output pixel, maps it back into the input, and interpolates, so every output pixel receives exactly one value. The inverse transform therefore generates better output images.
The more samples in a fixed range, the lower the resolution.
False
Goals for Machine Vision are Acquisition, Improvement, Analysis, and Understanding. Please match them with descriptions.
Using features and attributes find the meaningful results from the images.
Understanding
Goals for Machine Vision are Acquisition, Improvement, Analysis, and Understanding. Please match them with descriptions.
Designing an imaging system to visualize the environment.
Acquisition
Goals for Machine Vision are Acquisition, Improvement, Analysis, and Understanding. Please match them with descriptions.
Preparing images for measurement of the features and structures present.
Analysis
Goals for Machine Vision are Acquisition, Improvement, Analysis, and Understanding. Please match them with descriptions.
Improving the visual appearance of captured images and videos.
Improvement
If an object absorbs (subtracts) all the light illuminating it, it appears __________.
black
T:F Intensity level resolution refers to the number of intensity levels used to represent the image.
True
_________ is the process of converting a continuous analogue signal into a digital representation of this signal.
Quantization
_________ color model is useful for hardware implementations and is closely related to the way in which the human visual system works.
RGB
Which color model is used in color printers and copiers?
CMYK
Common image formats include Black & White, Grayscale, and Color. Please match each with its description.
3 samples per point with integer values.
Color
Common image formats include Black & White, Grayscale, and Color. Please match each with its description.
1 sample per point with an integer value.
Grayscale
Common image formats include Black & White, Grayscale, and Color. Please match each with its description.
1 sample per point with a binary value.
Black & White Image
A subtractive color system involves light emitted directly from a source.
False
For noise reduction, what kind of filter is useful?
Smoothing
In smoothing filters, the sum of the mask coefficients is 0.
False
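Both points can be illustrated with a 3x3 averaging (smoothing) filter, whose coefficients sum to 1 rather than 0, so flat regions keep their gray level. A pure-Python sketch with simplified border handling:

```python
# 3x3 mean filter: coefficients sum to 1, not 0.
MASK = [[1 / 9] * 3 for _ in range(3)]
assert abs(sum(sum(row) for row in MASK) - 1.0) < 1e-9

def smooth(img):
    """Apply the 3x3 mean filter to interior pixels; borders left as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(MASK[j][i] * img[y - 1 + j][x - 1 + i]
                            for j in range(3) for i in range(3))
    return out

img = [[9, 9, 9],
       [9, 0, 9],   # one noisy dark pixel
       [9, 9, 9]]
# Noise is averaged away toward the neighbourhood: (8 * 9 + 0) / 9 = 8.
assert abs(smooth(img)[1][1] - 8.0) < 1e-9
```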
Which filtering technique provides enhanced edges on dark background?
Sharpening
What kind of filter is given below?
Weighted Averaging Filter
Which image enhancement technique directly manipulates the image pixels?
Spatial Domain Techniques
For segmentation, which method can be used to isolate an object of interest from a background?
Thresholding
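A minimal thresholding sketch, assuming foreground objects are the bright pixels (the threshold value is illustrative):

```python
def threshold(img, t):
    """Segment: pixels >= t become foreground (1), others background (0)."""
    return [[1 if p >= t else 0 for p in row] for row in img]

img = [[10, 200],
       [180, 20]]
assert threshold(img, 128) == [[0, 1],
                               [1, 0]]
```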
Which technique can be used to remove imperfections added during segmentation?
Morphological Processing
The closing of image f by structuring element s is simply a dilation followed by an erosion.
True
Structuring elements have a fixed size and can take any shape.
False
For the dilation operation in morphological filters, the structuring element s is positioned with its origin at (x, y), and the new pixel value is determined using the fit rule.
False
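The card is false because dilation uses the hit rule; it is erosion that uses the fit rule. Both can be sketched for binary images with a 3x3 square structuring element (pure Python; border pixels are simplified to 0):

```python
def _neighbours(img, x, y):
    """3x3 neighbourhood covered by the square structuring element."""
    return [img[y + j][x + i] for j in (-1, 0, 1) for i in (-1, 0, 1)]

def dilate(img):
    """Hit rule: output is 1 if the element hits ANY foreground pixel."""
    h, w = len(img), len(img[0])
    return [[1 if 0 < y < h - 1 and 0 < x < w - 1
             and any(_neighbours(img, x, y)) else 0
             for x in range(w)] for y in range(h)]

def erode(img):
    """Fit rule: output is 1 only if the element fits ENTIRELY inside."""
    h, w = len(img), len(img[0])
    return [[1 if 0 < y < h - 1 and 0 < x < w - 1
             and all(_neighbours(img, x, y)) else 0
             for x in range(w)] for y in range(h)]

img = [[0, 0, 0, 0],
       [0, 1, 0, 0],   # one lone noisy pixel
       [0, 0, 0, 0]]
assert dilate(img)[1] == [0, 1, 1, 0]          # hit rule grows the region
assert erode(img) == [[0, 0, 0, 0],            # fit rule removes the noise
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]]
```

Chaining the two gives the compound operations from the earlier card: closing is `erode(dilate(img))`, and erosion alone removes small noisy pixels.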
Which of the following filters is not used for sharpening?
Sobel
Prewitt
Laplacian
Median
Median
What filters are used for sharpening?
Sobel
Prewitt
Laplacian
For line detection, which technique can be used?
Sobel
Median
Canny
Hough
Hough
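A coarse sketch of the Hough transform's voting in (rho, theta) space (the accumulator binning and point set are illustrative):

```python
import math

def hough_lines(points, n_theta=180):
    """Vote in (rho, theta) space; return the most-voted (rho, theta) bin."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return max(acc, key=acc.get)

# Foreground pixels lying on the vertical line x = 3:
points = [(3, y) for y in range(10)]
rho, t = hough_lines(points)
assert rho == 3   # the winning bin describes the line x = 3 (theta near 0)
```

All ten collinear points vote into the same (rho, theta) bin, which is why the peak in the accumulator reveals the line.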
When we shift a window on an image, we may see a change in grayscale. This helps us to determine edge, corner, and flat regions. Match the following descriptions with the change in grayscale values.
Significant change in all directions.
Corner
When we shift a window on an image, we may see a change in grayscale. This helps us to determine edge, corner, and flat regions. Match the following descriptions with the change in grayscale values.
No change in all directions.
Flat
When we shift a window on an image, we may see a change in grayscale. This helps us to determine edge, corner, and flat regions. Match the following descriptions with the change in grayscale values.
No change along one direction.
Edge
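The window-shift idea can be sketched with a sum-of-squared-differences measure on tiny synthetic patches (the patch values are illustrative):

```python
def ssd(img, dx, dy):
    """SSD between the patch and its copy shifted by (dx, dy), dx, dy >= 0."""
    h, w = len(img), len(img[0])
    return sum((img[y][x] - img[y + dy][x + dx]) ** 2
               for y in range(h - dy) for x in range(w - dx))

flat = [[5, 5],
        [5, 5]]
edge = [[0, 9],
        [0, 9]]   # vertical edge
assert ssd(flat, 1, 0) == 0 and ssd(flat, 0, 1) == 0  # flat: no change at all
assert ssd(edge, 1, 0) > 0                            # change ACROSS the edge
assert ssd(edge, 0, 1) == 0                           # no change ALONG it
```

A corner patch would give a large SSD for shifts in every direction, which is what corner detectors look for.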
SIFT descriptors are NOT scale and rotation invariant.
False
What step is this in generating SIFT descriptors?
Normalize the rotation / scale of the patch.
1st
What step is this in generating SIFT descriptors?
Divide into sub-patches
3rd
What step is this in generating SIFT descriptors?
Describe the patch with 128 numbers.
5th
What step is this in generating SIFT descriptors?
Compute gradient at each pixel.
2nd
What step is this in generating SIFT descriptors?
In each sub-patch, compute histogram of 8 gradient directions.
4th
What are the steps in generating SIFT descriptors?
1st: Normalize the rotation / scale of the patch.
2nd: Compute gradient at each pixel.
3rd: Divide into sub-patches.
4th: In each sub-patch, compute histogram of 8 gradient directions.
5th: Describe the patch with 128 numbers.
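The arithmetic behind the 128 numbers (4x4 sub-patches, each with an 8-bin gradient-direction histogram) can be sketched with stand-in random gradients; only the structure, not the gradient values, is meaningful here:

```python
import math
import random

random.seed(0)
SUB_PATCHES, BINS = 4 * 4, 8   # 4x4 sub-patches x 8 orientation bins

descriptor = []
for _ in range(SUB_PATCHES):
    hist = [0.0] * BINS
    for _ in range(16):   # 4x4 pixels per sub-patch of a 16x16 patch
        # stand-in gradient direction for one pixel (step 2 of the card)
        angle = random.uniform(0, 2 * math.pi)
        hist[int(angle / (2 * math.pi) * BINS) % BINS] += 1   # step 4
    descriptor.extend(hist)   # step 5: concatenate the histograms

assert len(descriptor) == 128                 # 16 sub-patches x 8 bins
assert sum(descriptor) == SUB_PATCHES * 16    # every pixel voted once
```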
Using _______ gives invariance to illumination in SIFT descriptors.
Gradients
Sort the following steps for the outline of the Bag of Features:
_______ Learn “visual vocabulary”.
_______ Represent images by frequencies of “visual words”
_______ Quantize local features using visual vocabulary.
_______ Extract local features.
2nd
4th
3rd
1st
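The four steps can be sketched on 1-D stand-in "features", with a fixed vocabulary standing in for the learned clustering (all values and names are illustrative):

```python
vocabulary = [0.0, 5.0, 10.0]   # step 2: the learned "visual words"

def nearest_word(feature):
    """Step 3: quantize a feature to its nearest visual word."""
    return min(range(len(vocabulary)),
               key=lambda i: abs(vocabulary[i] - feature))

def bag_of_features(features):
    """Step 4: represent the image by visual-word frequencies."""
    hist = [0] * len(vocabulary)
    for f in features:          # step 1 extracted these local features
        hist[nearest_word(f)] += 1
    return hist

assert bag_of_features([0.2, 4.9, 5.3, 9.8]) == [1, 2, 1]
```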
What step is this in the outline of the Bag of Features:
_______ Learn “visual vocabulary”.
2nd
What step is this in the outline of the Bag of Features:
_______ Represent images by frequencies of “visual words”
4th
What step is this in the outline of the Bag of Features:
_______ Quantize local features using visual vocabulary.
3rd
What step is this in the outline of the Bag of Features:
_______ Extract local features.
1st
When we split a dataset into training, validation, and testing portions, we use the largest portion for testing.
False
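A sketch of a conventional split, where training gets the largest share and the validation split is kept for tuning hyperparameters (the 70/15/15 ratio is just one common choice):

```python
def split(data, train=0.7, val=0.15):
    """Split data into training, validation, and testing portions."""
    n = len(data)
    i, j = round(n * train), round(n * (train + val))
    return data[:i], data[i:j], data[j:]

data = list(range(100))
train_set, val_set, test_set = split(data)
assert len(train_set) == 70 and len(val_set) == 15 and len(test_set) == 15
assert len(train_set) > len(test_set)   # training gets the largest share
```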