Computer Vision (15-20%) Flashcards

Concepts related to Azure Computer Vision resources

1
Q

Use this feature for general, unstructured documents with smaller amounts of text, or for images that contain text.

A

Azure AI Vision - OCR

2
Q

Use this service to read small to large volumes of text from images and PDF documents.

A

Azure AI Document Intelligence

3
Q

Which service do you use to read text from street signs, handwritten notes, and store signs?

A

OCR

4
Q

Which API would be best for this scenario? You need to read a large number of files with high accuracy. The text consists of short sections of handwritten text, some in English and some in other languages.

A

Image Analysis service OCR feature

5
Q

At what levels of division are the OCR results returned?

A

Results contain blocks, lines, and words, as well as bounding boxes for each line and word.
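That block → line → word hierarchy can be illustrated with a short sketch. The dict below is a hypothetical, simplified stand-in for a real READ response (real results also carry bounding polygons and confidence values for every line and word):

```python
# Hypothetical, simplified READ result illustrating the block/line/word
# hierarchy. The exact field names should be checked against current docs.
result = {
    "readResult": {
        "blocks": [{
            "lines": [{
                "text": "Hello world",
                "boundingPolygon": [{"x": 10, "y": 10}, {"x": 110, "y": 10},
                                    {"x": 110, "y": 30}, {"x": 10, "y": 30}],
                "words": [
                    {"text": "Hello", "confidence": 0.99},
                    {"text": "world", "confidence": 0.98},
                ],
            }],
        }],
    },
}

# Collect every word across all blocks and lines.
words = [w["text"]
         for block in result["readResult"]["blocks"]
         for line in block["lines"]
         for w in line["words"]]
```

Walking all three levels this way recovers the full text while keeping access to per-word detail such as confidence.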

6
Q

You’ve scanned a letter into PDF format and need to extract the text it contains. What should you do?

A

The Document Intelligence API can be used to process PDF formatted files.

7
Q

You want to use the Azure AI Vision Analyze Image function to generate an appropriate caption for an image. Which visual feature should you specify?

A

To generate a caption, include the Description visual feature in your analysis.

8
Q

What is the purpose of the Azure AI Vision service?

A

The Azure AI Vision service is designed to help you extract information from images through various functionalities.

9
Q

In the Azure AI Vision service, what features are available in the VisualFeatures enum?

A

VisualFeatures.TAGS: Identifies tags about the image, including objects, scenery, setting, and actions
VisualFeatures.OBJECTS: Returns the bounding box for each detected object
VisualFeatures.CAPTION: Generates a caption of the image in natural language
VisualFeatures.DENSE_CAPTIONS: Generates more detailed captions for the objects detected
VisualFeatures.PEOPLE: Returns the bounding box for detected people
VisualFeatures.SMART_CROPS: Returns the bounding box of the specified aspect ratio for the area of interest
VisualFeatures.READ: Extracts readable text
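These enum values correspond to the features query parameter of the Image Analysis REST API. A minimal sketch of composing such a request URL — the endpoint is a placeholder and the api-version shown is an assumption to verify against current documentation:

```python
# Sketch: building an Image Analysis request URL that asks for several
# visual features in one call. The endpoint is a placeholder, and the
# api-version value is an assumption -- check current documentation.

def build_analyze_url(endpoint: str, features: list[str]) -> str:
    """Compose the imageanalysis:analyze URL for the requested features."""
    feature_list = ",".join(features)
    return (f"{endpoint}/computervision/imageanalysis:analyze"
            f"?api-version=2023-10-01&features={feature_list}")

url = build_analyze_url("https://<resource>.cognitiveservices.azure.com",
                        ["tags", "objects", "caption", "read"])
```

Requesting several features in a single call returns one response containing a section per feature, rather than requiring one round trip per feature.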

10
Q

What functionality exists in Azure Video Indexer?

A

Facial recognition - detecting the presence of individual people in the video. This requires Limited Access approval.
Optical character recognition - reading text in the video.
Speech transcription - creating a text transcript of spoken dialog in the video.
Topics - identification of key topics discussed in the video.
Sentiment - analysis of how positive or negative segments within the video are.
Labels - label tags that identify key objects or themes throughout the video.
Content moderation - detection of adult or violent themes in the video.
Scene segmentation - a breakdown of the video into its constituent scenes.

11
Q

What extensions can be made to Azure Video Indexer for custom insights?

A

People. Add images of the faces of people you want to recognize in videos, and train a model. Video Indexer will then recognize these people in all of your videos.
Note: this only works after Limited Access approval, adhering to Microsoft's Responsible AI standard.

Language. If your organization uses specific terminology that may not be in common usage, you can train a custom model to detect and transcribe it.
Brands. You can train a model to recognize specific names as brands, for example to identify products, projects, or companies that are relevant to your business.

12
Q

You want Azure Video Indexer to analyze a video. What must you do first?

A

Upload the video to Azure Video Indexer and index it.

13
Q

You want Azure Video Indexer to recognize brands in videos recorded from conference calls. What should you do?

A

Edit the Brands model to show brands suggested by Bing, and add any new brands you want to detect.

14
Q

What resources need to be provisioned to use the Azure AI Custom Vision service?

A

A training resource (used to train your models). This can be:
An Azure AI Services resource.
An Azure AI Custom Vision (Training) resource.
A prediction resource, used by client applications to get predictions from your model. This can be:
An Azure AI Services resource.
An Azure AI Custom Vision (Prediction) resource.

15
Q

Explain multiclass classification

A

There are multiple classes in the image dataset, but each image can belong to only one class.

16
Q

Explain multilabel classification

A

An image might be associated with multiple labels.

17
Q

What steps are performed in the Azure AI Custom Vision portal?

A

Create an image classification project for your model and associate it with a training resource.
Upload images, assigning class label tags to them.
Review and edit tagged images.
Train and evaluate a classification model.
Test a trained model.
Publish a trained model to a prediction resource.

18
Q

You want to train a model that can categorize an image as “cat” or “dog” based on its subject. What kind of Azure AI Custom Vision project should you create?

A

To train a model that classifies an image using a single tag, use an Image classification (multiclass) project.

19
Q

Which of the following kinds of Azure resource can you use to host a trained Azure AI Custom Vision model?

A

You can publish a trained Azure AI Custom Vision model to either an Azure AI Custom Vision (Prediction) resource, or an Azure AI Services resource.

20
Q

What features are available in the Face service within Azure AI Vision?

A

Face detection (with bounding box).
Comprehensive facial feature analysis (including head pose, presence of spectacles, blur, facial landmarks, occlusion and others).
Face comparison and verification.
Facial recognition.
Facial landmark location.
Facial liveness - liveness can be used to determine whether the input video is a real stream or a fake.

21
Q

What method do you take to detect and analyze faces using the Azure AI Vision service?

A

Call the Analyze Image function (SDK or equivalent REST method), specifying People as one of the visual features to be returned.

22
Q

What attributes can be returned in the Facial Attribute analysis?

A

Head pose (pitch, roll, and yaw orientation in 3D space)
Glasses (NoGlasses, ReadingGlasses, Sunglasses, or SwimmingGoggles)
Blur (low, medium, or high)
Exposure (underExposure, goodExposure, or overExposure)
Noise (visual noise in the image)
Occlusion (objects obscuring the face)
Accessories (glasses, headwear, mask)
QualityForRecognition (low, medium, or high)

23
Q

How can the Face service be provisioned?

A

You can provision Face as a single-service resource, or you can use the Face API in a multi-service Azure AI Services resource.

24
Q

Describe the Face detection process in the Face API

A

When a face is detected by the Face service, a unique ID is assigned to it and retained in the service resource for 24 hours. The ID is a GUID, with no indication of the individual’s identity other than their facial features.

While the detected face ID is cached, subsequent images can be used to compare the new faces to the cached identity and determine if they are similar (in other words, they share similar facial features) or to verify that the same person appears in two images.

25
Q

What steps do you take to train a facial recognition model in the Face service?

A

Create a Person Group that defines the set of individuals you want to identify (for example, employees).
Add a Person to the Person Group for each individual you want to identify.
Add detected faces from multiple images to each person, preferably in various poses. The IDs of these faces will no longer expire after 24 hours (so they’re now referred to as persisted faces).
Train the model.
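Each of the four steps above maps to an operation in the classic Face REST API. A sketch of the request URLs involved — the endpoint is a placeholder and the v1.0 paths are assumptions to verify against current documentation:

```python
# Sketch: Face REST API URLs behind the four training steps. The endpoint
# is a placeholder; the v1.0 paths are assumptions -- verify against docs.
BASE = "https://<resource>.cognitiveservices.azure.com/face/v1.0"

def person_group_url(group_id: str) -> str:
    # Step 1: PUT here creates the Person Group (e.g., "employees").
    return f"{BASE}/persongroups/{group_id}"

def person_url(group_id: str) -> str:
    # Step 2: POST here adds a Person to the group.
    return f"{BASE}/persongroups/{group_id}/persons"

def persisted_face_url(group_id: str, person_id: str) -> str:
    # Step 3: POST here adds a detected face to a person as a persisted face
    # (persisted face IDs do not expire after 24 hours).
    return f"{BASE}/persongroups/{group_id}/persons/{person_id}/persistedFaces"

def train_url(group_id: str) -> str:
    # Step 4: POST here starts training the model.
    return f"{BASE}/persongroups/{group_id}/train"
```

Client applications would issue these requests in order, polling the training status before using the group for identification.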

26
Q

You need to create a facial recognition solution to identify named employees. Which service should you use?

A

Use the Face service to create a facial recognition solution.

27
Q

You need to verify that the person in a photo taken at hospital reception is the same person in a photo taken at a ward entrance 10 minutes later. What should you do?

A

Verify the face in the ward photo by comparing it to the detected face ID from the reception photo. This is the most efficient approach, as the photos were taken within 24 hours of each other.

28
Q

What is the easiest option for labeling images for object detection?

A

Use the interactive interface in the Azure AI Custom Vision portal. It automatically suggests regions that contain objects; you can assign tags to these regions, or adjust them by dragging the bounding box to enclose the object you want to label.

29
Q

What do the bounding box measurement units quantify in image labeling?

A

The values are the proportion of the full image size. Consider the example:
Left: 0.1
Top: 0.5
Width: 0.5
Height: 0.25
This defines a box in which the left is located 0.1 (one tenth) from the left edge of the image, and the top is 0.5 (half the image height) from the top. The box is half the width and a quarter of the height of the overall image.
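The proportional values convert to pixel coordinates by multiplying against the image dimensions. A minimal sketch using the example box above on a hypothetical 1000 x 800 pixel image:

```python
def box_to_pixels(left: float, top: float, width: float, height: float,
                  img_w: int, img_h: int) -> tuple:
    """Convert proportional bounding-box values to pixel coordinates."""
    # Horizontal values scale by image width, vertical values by image height.
    return (round(left * img_w), round(top * img_h),
            round(width * img_w), round(height * img_h))

# The card's example box (0.1, 0.5, 0.5, 0.25) on a 1000x800 image:
box = box_to_pixels(0.1, 0.5, 0.5, 0.25, 1000, 800)
# -> (100, 400, 500, 200): left edge at x=100, top edge at y=400,
#    500 pixels wide and 200 pixels tall.
```

Because the values are proportions, the same labeled box applies regardless of the resolution at which the image is later processed.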

30
Q

What must you do before taking advantage of the smart labeler tool when creating an object detection model?

A

To take advantage of the smart labeler, tag some images and train an initial model. Subsequently, the portal will suggest tags for new images.

31
Q

What steps are needed to use the Read API for OCR?

A

Call the Analyze Image function (REST API or equivalent SDK method), passing the image URL or binary data and specifying Read as one of the visual features. You can optionally specify the language the text is written in (with a default value of en for English).

result = client.analyze(
    image_url=<image-url>,
    visual_features=[VisualFeatures.READ]
)

https://<endpoint>/computervision/imageanalysis:analyze?features=read&…

32
Q

You build an app named App1 that uses the Azure AI Face service.
You need to optimize the app for images that contain blurry faces.
What should you do?

A

Set the detection model to detection_02.

33
Q

You are building an app that will be deployed to an edge device and will use Azure AI Custom Vision to analyze images of fruits.
You need to select a model domain for the app. The solution must support running the app without internet connectivity.
Which model should you use?

A

Compact domain is correct. The Azure AI Custom Vision service only exports compact domains, and the models generated by compact domains are optimized for the constraints of real-time classification on mobile devices.