certlibrary Flashcards


1
Q

You build a bot by using the Microsoft Bot Framework SDK.

You start the bot on a local computer.

You need to validate the functionality of the bot.

What should you do before you connect to the bot?

A. Run the Bot Framework Emulator.
B. Run the Bot Framework Composer.
C. Register the bot with Azure Bot Service.
D. Run Windows Terminal.

A

A. Run the Bot Framework Emulator.

2
Q

You have an Azure OpenAI model named AI1.

You are building a web app named App1 by using the Azure OpenAI SDK.

You need to configure App1 to connect to AI1.

What information must you provide?

A. the endpoint, key, and model name
B. the deployment name, key, and model name
C. the deployment name, endpoint, and key
D. the endpoint, key, and model type

A

C. the deployment name, endpoint, and key
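As a memory aid: the endpoint identifies the resource, the key authenticates, and the deployment name (not the model name) selects what to call. A minimal sketch of how the endpoint and deployment name map onto a request URL (resource and deployment names below are hypothetical):

```python
def chat_completions_url(endpoint: str, deployment: str, api_version: str = "2024-02-01") -> str:
    # endpoint identifies the Azure OpenAI resource, e.g. https://<resource>.openai.azure.com
    # deployment is the deployment name chosen when deploying the model, not the model name
    return f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"

url = chat_completions_url("https://res1.openai.azure.com", "gpt4-prod")
```

The key is then sent in an api-key request header; the underlying model name never appears in the call itself.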

3
Q

You are building a solution in Azure that will use Azure Cognitive Service for Language to process sensitive customer data.

You need to ensure that only specific Azure processes can access the Language service. The solution must minimize administrative effort.

What should you include in the solution?

A. IPsec rules
B. Azure Application Gateway
C. a virtual network gateway
D. virtual network rules

A

D. virtual network rules

3
Q

You plan to perform predictive maintenance.

You collect IoT sensor data from 100 industrial machines for a year. Each machine has 50 different sensors that generate data at one-minute intervals. In total, you have 5,000 time series datasets.

You need to identify unusual values in each time series to help predict machinery failures.

Which Azure service should you use?

A. Azure AI Computer Vision
B. Cognitive Search
C. Azure AI Document Intelligence
D. Azure AI Anomaly Detector

A

D. Azure AI Anomaly Detector

4
Q

You are developing a system that will monitor temperature data from a data stream. The system must generate an alert in response to atypical values. The solution must minimize development effort.

What should you include in the solution?

A. Multivariate Anomaly Detection
B. Azure Stream Analytics
C. metric alerts in Azure Monitor
D. Univariate Anomaly Detection

A

D. Univariate Anomaly Detection
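Univariate anomaly detection watches a single metric (here, temperature) over time. The managed service uses far more sophisticated models, but the idea can be illustrated with a toy z-score detector; the three-standard-deviation threshold is an arbitrary assumption of this sketch:

```python
def zscore_anomalies(series, threshold=3.0):
    # flag indices whose values sit more than `threshold` standard deviations from the mean
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    return [i for i, x in enumerate(series) if std > 0 and abs(x - mean) > threshold * std]
```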

5
Q

You have a Microsoft OneDrive folder that contains a 20-GB video file named File1.avi.

You need to index File1.avi by using the Azure Video Indexer website.

What should you do?

A. Upload File1.avi to the www.youtube.com webpage, and then copy the URL of the video to the Azure AI Video Indexer website.
B. Download File1.avi to a local computer, and then upload the file to the Azure AI Video Indexer website.
C. From OneDrive, create a download link, and then copy the link to the Azure AI Video Indexer website.
D. From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website.

A

C. From OneDrive, create a download link, and then copy the link to the Azure AI Video Indexer website.

6
Q

You have an Azure subscription that contains an Azure AI Service resource named CSAccount1 and a virtual network named VNet1. CSAccount1 is connected to VNet1.

You need to ensure that only specific resources can access CSAccount1. The solution must meet the following requirements:

  • Prevent external access to CSAccount1.
  • Minimize administrative effort.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct answer is worth one point.

A. In VNet1, enable a service endpoint for CSAccount1.
B. In CSAccount1, configure the Access control (IAM) settings.
C. In VNet1, modify the virtual network settings.
D. In VNet1, create a virtual subnet.
E. In CSAccount1, modify the virtual network settings.

A

A. In VNet1, enable a service endpoint for CSAccount1.

E. In CSAccount1, modify the virtual network settings.

7
Q

You are building an internet-based training solution. The solution requires that a user’s camera and microphone remain enabled.

You need to monitor a video stream of the user and detect when the user asks an instructor a question. The solution must minimize development effort.

What should you include in the solution?

A. speech-to-text in the Azure AI Speech service
B. language detection in Azure AI Language Service
C. the Face service in Azure AI Vision
D. object detection in Azure AI Custom Vision

A

A. speech-to-text in the Azure AI Speech service

8
Q

SIMULATION -
You need to create a search service named search12345678 that will index a sample Azure Cosmos DB database named hotels-sample. The solution must ensure that only English language fields are retrievable.
To complete this task, sign in to the Azure portal.

A

Part 1: Create a search service named search12345678
Step 1: Sign in to the Azure portal.
Step 2: Create an Azure Cognitive Search resource:

Step 3: On the Create page, provide the following information.

Name: search12345678

Step 4: Click Review + create.
Part 2: Start the Import data wizard and create a data source
Step 5: Click Import data on the command bar to create and populate a search index.

Step 6: In the wizard, click Connect to your data > Samples > hotels-sample. This data source is built-in. If you were creating your own data source, you would need to specify a name, type, and connection information. Once created, it becomes an “existing data source” that can be reused in other import operations.

Step 7: Continue to the next page.
Step 8: Skip the “Enrich content” page
Step 9: Configure the index.
Make sure only the English language fields are marked as retrievable.

Step 10: Continue and finish the wizard.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account https://docs.microsoft.com/en-us/azure/search/search-get-started-portal
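The "only English language fields are retrievable" requirement is enforced in the index definition: every field carries a retrievable flag, and the non-English fields are switched off. A sketch using a few field names in the style of the hotels-sample data source (field list abbreviated, names an assumption):

```python
# retrievable=False hides a field from query results while still allowing it to be indexed
fields = [
    {"name": "HotelName", "type": "Edm.String", "retrievable": True},
    {"name": "Description", "type": "Edm.String", "retrievable": True},
    {"name": "Description_fr", "type": "Edm.String", "retrievable": False},  # French field: hidden
]
retrievable = [f["name"] for f in fields if f["retrievable"]]
```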

9
Q

SIMULATION -
You plan to create a solution to generate captions for images that will be read from Azure Blob Storage.
You need to create a service in Azure Cognitive Services for the solution. The service must be named captions12345678 and must use the Free pricing tier.
To complete this task, sign in to the Azure portal.

A

Part 1: Create a Cognitive Services resource named captions12345678
Step 1: Sign in to the Azure portal.
Step 2: Create an Azure Cognitive Services multi-service resource:

Step 3: On the Create page, provide the following information.
Name: captions12345678

Pricing tier: Free

Step 4: Click Review + create.

Reference:
https://docs.microsoft.com/en-us/azure/search/search-create-service-portal https://docs.microsoft.com/en-us/azure/search/cognitive-search-quickstart-ocr

10
Q

SIMULATION -
You need to create a Form Recognizer resource named fr12345678.
Use the Form Recognizer sample labeling tool at https://fott-2-1.azurewebsites.net/ to analyze the invoice located in the C:\Resources\Invoices folder.
Save the results as C:\Resources\Invoices\Results.json.
To complete this task, sign in to the Azure portal and open the Form Recognizer sample labeling tool.

A

Step 1: Sign in to the Azure Portal.
Step 2: Navigate to the Form Recognizer Sample Tool (at https://fott-2-1.azurewebsites.net)
Step 3: On the sample tool home page select Use prebuilt model to get data.

Step 4: Select the Form Type you would like to analyze from the dropdown window.
Step 5: In the Source: URL field, paste the selected URL and select the Fetch button.
Step 6: In the Choose file for analysis use the file in the C:\Resources\Invoices folder and select the Fetch button.

Step 7: Select Run analysis. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
Step 8: View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.

Step 9: Save the results as C:\Resources\Invoices\Results.json.
Reference:
https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool

11
Q

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: admin@abc.com -

Azure Password: XXXXXXXXXXXX -
The following information is for technical support purposes only:

Lab Instance: 12345678 -

Task -
You plan to build an application that will use caption12345678. The application will be deployed to a virtual network named VNet1.
You need to ensure that only virtual machines on VNet1 can access caption12345678.
To complete this task, sign in to the Azure portal.

A

Step 1: Create a private endpoint for caption12345678
1. In the left-hand menu, select All Resources > caption12345678.
2. In the resource overview, select Settings > Networking.
3. In Networking, select Private endpoints.
4. Select + Add in the Private Endpoint connections page.
5. Enter or select the following information in the Add Private Endpoint page:
Name: Enter caption12345678.
Subscription Select your Azure subscription.
Virtual network Select VNet1.
Subnet: -
Integrate with private DNS zone: Select Yes.
6. Select OK.

Reference:
https://docs.microsoft.com/en-us/azure/private-link/tutorial-private-endpoint-webapp-portal

12
Q

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: admin@abc.com -

Azure Password: XXXXXXXXXXXX -
The following information is for technical support purposes only:

Lab Instance: 12345678 -

Task -
You need to build an API that uses the service in Azure Cognitive Services named AAA12345678 to identify whether an image includes a Microsoft Surface Pro or
Surface Studio.
To achieve this goal, you must use the sample images in the C:\Resources\Images folder.
To complete this task, sign in to the Azure portal.

A

Step 1: In the Azure dashboard, click Create a resource.
Step 2: In the search bar, type “Cognitive Services.”
You’ll get information about the cognitive services resource and a legal notice. Click Create.
Step 3: You’ll need to specify the following details about the cognitive service (refer to the image below for a completed example of this page):
Subscription: choose your paid or trial subscription, depending on how you created your Azure account.
Resource group: click create new to create a new resource group or choose an existing one.
Region: choose the Azure region for your cognitive service. Choose: East US Azure region.
Name: choose a name for your cognitive service. Enter: AAA12345678
Pricing Tier: Select: Free pricing tier
Step 4: Review and create the resource, and wait for deployment to complete. Then go to the deployed resource.
Note: The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.

Tag visual features -
Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn’t limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
Try out the image tagging features quickly and easily in your browser using Vision Studio.
Reference:
https://docs.microsoft.com/en-us/learn/modules/analyze-images-computer-vision/3-analyze-images https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-image-analysis

13
Q

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: admin@abc.com -

Azure Password: XXXXXXXXXXXX -
The following information is for technical support purposes only:

Lab Instance: 12345678 -

Task -
You plan to build an API that will identify whether an image includes a Microsoft Surface Pro or Surface Studio.
You need to deploy a service in Azure Cognitive Services for the API. The service must be named AAA12345678 and must be in the East US Azure region. The solution must use the Free pricing tier.
To complete this task, sign in to the Azure portal.

A

Step 1: In the Azure dashboard, click Create a resource.
Step 2: In the search bar, type “Cognitive Services.”
You’ll get information about the cognitive services resource and a legal notice. Click Create.
Step 3: You’ll need to specify the following details about the cognitive service (refer to the image below for a completed example of this page):
Subscription: choose your paid or trial subscription, depending on how you created your Azure account.
Resource group: click create new to create a new resource group or choose an existing one.
Region: choose the Azure region for your cognitive service. Choose: East US Azure region.
Name: choose a name for your cognitive service. Enter: AAA12345678
Pricing Tier: Select: Free pricing tier

Step 4: Review and create the resource, and wait for deployment to complete. Then go to the deployed resource.
Note: The Computer Vision Image Analysis service can extract a wide variety of visual features from your images. For example, it can determine whether an image contains adult content, find specific brands or objects, or find human faces.

Tag visual features -
Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides hints to clarify the context of the tag. Tagging isn’t limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
Try out the image tagging features quickly and easily in your browser using Vision Studio.
Reference:
https://docs.microsoft.com/en-us/learn/modules/analyze-images-computer-vision/3-analyze-images https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-image-analysis

14
Q

You have an Azure Video Analyzer for Media (previously Video Indexer) service that is used to provide a search interface over company videos on your company’s website.
You need to be able to search for videos based on who is present in the video.
What should you do?
A. Create a person model and associate the model to the videos.
B. Create person objects and provide face images for each object.
C. Invite the entire staff of the company to Video Indexer.
D. Edit the faces in the videos.
E. Upload names to a language model.

A

A. Create a person model and associate the model to the videos.

15
Q

You have the following Python function for creating Azure Cognitive Services resources programmatically.

def create_resource(resource_name, kind, account_tier, location):
    parameters = CognitiveServicesAccount(sku=Sku(name=account_tier), kind=kind, location=location, properties={})
    result = client.accounts.create(resource_group_name, resource_name, parameters)
You need to call the function to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically.
Which code should you use?
A. create_resource(“res1”, “ComputerVision”, “F0”, “westus”)
B. create_resource(“res1”, “CustomVision.Prediction”, “F0”, “westus”)
C. create_resource(“res1”, “ComputerVision”, “S0”, “westus”)
D. create_resource(“res1”, “CustomVision.Prediction”, “S0”, “westus”)

A

A. create_resource(“res1”, “ComputerVision”, “F0”, “westus”)
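To remember why the choice matters: automatic image captioning is a Computer Vision capability (kind "ComputerVision"), and F0 is the free SKU while S0 is the paid standard tier. A small helper capturing that mapping (the helper name is hypothetical):

```python
FREE_SKU = "F0"       # free pricing tier
STANDARD_SKU = "S0"   # standard (paid) pricing tier

def caption_resource_args(resource_name, location, free=True):
    # automatic image captioning is provided by Computer Vision, not Custom Vision
    return (resource_name, "ComputerVision", FREE_SKU if free else STANDARD_SKU, location)
```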

15
Q

You use the Custom Vision service to build a classifier.
After training is complete, you need to evaluate the classifier.
Which two metrics are available for review? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. recall
B. F-score
C. weighted accuracy
D. precision
E. area under the curve (AUC)

A

A. recall
D. precision
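Custom Vision reports precision (of the positive predictions made, how many were right) and recall (of the actual positives, how many were found). The definitions as a quick sketch:

```python
def precision_recall(tp, fp, fn):
    # tp: true positives, fp: false positives, fn: false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # correctness of positive predictions
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # coverage of actual positives
    return precision, recall
```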

16
Q

You have an Azure Cognitive Search solution and a collection of handwritten letters stored as JPEG files.

You plan to index the collection. The solution must ensure that queries can be performed on the contents of the letters.

You need to create an indexer that has a skillset.

Which skill should you include?

A. image analysis
B. optical character recognition (OCR)
C. key phrase extraction
D. document extraction

A

B. optical character recognition (OCR)
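In a skillset, the OCR skill runs over the images the indexer extracts from each document. A sketch of the skill definition as it would appear in the skillset JSON (the targetName is an arbitrary choice of this example):

```python
ocr_skill = {
    "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
    "context": "/document/normalized_images/*",  # run once per extracted image
    "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
    "outputs": [{"name": "text", "targetName": "extractedText"}],  # mapped to a searchable field
}
```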

17
Q

You need to build a solution that will use optical character recognition (OCR) to scan sensitive documents by using the Computer Vision API. The solution must
NOT be deployed to the public cloud.
What should you do?
A. Build an on-premises web app to query the Computer Vision endpoint.
B. Host the Computer Vision endpoint in a container on an on-premises server.
C. Host an exported Open Neural Network Exchange (ONNX) model on an on-premises server.
D. Build an Azure web app to query the Computer Vision endpoint.

A

B. Host the Computer Vision endpoint in a container on an on-premises server.

18
Q

You have an app that captures live video of exam candidates.

You need to use the Face service to validate that the subjects of the videos are real people.

What should you do?

A. Call the face detection API and retrieve the face rectangle by using the FaceRectangle attribute.
B. Call the face detection API repeatedly and check for changes to the FaceAttributes.HeadPose attribute.
C. Call the face detection API and use the FaceLandmarks attribute to calculate the distance between pupils.
D. Call the face detection API repeatedly and check for changes to the FaceAttributes.Accessories attribute.

A

B. Call the face detection API repeatedly and check for changes to the FaceAttributes.HeadPose attribute.
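One liveness heuristic with the Face detection API is to detect repeatedly and check whether the headPose attribute changes over time; a static photo held up to the camera keeps a constant pose. A toy sketch of the comparison step, where the 10-degree threshold is an assumption:

```python
def shows_movement(yaw_samples, min_range=10.0):
    # yaw_samples: headPose.yaw values (degrees) collected from successive detection calls
    return (max(yaw_samples) - min(yaw_samples)) >= min_range
```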

19
Q

You have an Azure subscription that contains an AI enrichment pipeline in Azure Cognitive Search and an Azure Storage account that has 10 GB of scanned documents and images.

You need to index the documents and images in the storage account. The solution must minimize how long it takes to build the index.

What should you do?

A. From the Azure portal, configure parallel indexing.
B. From the Azure portal, configure scheduled indexing.
C. Configure field mappings by using the REST API.
D. Create a text-based indexer by using the REST API.

A

A. From the Azure portal, configure parallel indexing.
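Parallel indexing means partitioning the source data and running one data-source/indexer pair per partition, so the partitions are processed concurrently. The partitioning itself is the simple part; a round-robin split (function name hypothetical):

```python
def partition(blob_names, workers):
    # round-robin split so each indexer/data-source pair processes a disjoint slice
    return [blob_names[i::workers] for i in range(workers)]
```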

20
Q

You have a mobile app that manages printed forms.

You need the app to send images of the forms directly to Form Recognizer to extract relevant information. For compliance reasons, the image files must not be stored in the cloud.

In which format should you send the images to the Form Recognizer API endpoint?

A. raw image binary
B. form URL encoded
C. JSON

A

A. raw image binary
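Sending raw image binary means POSTing the bytes with a Content-Type of application/octet-stream (or an image/* type), so no copy of the file is ever placed in cloud storage. A sketch of assembling such a request (function name hypothetical):

```python
def analyze_request(key, image_bytes):
    # raw binary goes in the request body; nothing is uploaded to storage first
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/octet-stream",
    }
    return headers, image_bytes
```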

21
Q

You plan to build an app that will generate a list of tags for uploaded images. The app must meet the following requirements:

  • Generate tags in a user’s preferred language.
  • Support English, French, and Spanish.
  • Minimize development effort.

You need to build a function that will generate the tags for the app.

Which Azure service endpoint should you use?

A. Content Moderator Image Moderation
B. Custom Vision image classification
C. Computer Vision Image Analysis
D. Custom Translator

A

C. Computer Vision Image Analysis
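Computer Vision Image Analysis exposes a language query parameter on the tag feature, which is what lets a single built-in endpoint return tags in the user's preferred language with no model training. A sketch of building the query parameters; limiting the set to the three required languages is an assumption of this example:

```python
def tag_params(language):
    required = {"en", "fr", "es"}  # languages the app must support
    if language not in required:
        language = "en"            # fall back to English
    return {"visualFeatures": "Tags", "language": language}
```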

22
Q

You are building an app that will include one million scanned magazine articles. Each article will be stored as an image file.

You need to configure the app to extract text from the images. The solution must minimize development effort.

What should you include in the solution?

A. Computer Vision Image Analysis
B. the Read API in Computer Vision
C. Form Recognizer
D. Azure Cognitive Service for Language

A

B. the Read API in Computer Vision
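The Read API is asynchronous: submit the image, poll the operation, then walk the result pages for text lines. A sketch of flattening a result payload into plain text; the dictionary shape follows the Read 3.x response and is treated here as an assumption:

```python
def read_result_text(analyze_result):
    # analyze_result: the "analyzeResult" object returned once the Read operation succeeds
    lines = []
    for page in analyze_result.get("readResults", []):
        for line in page.get("lines", []):
            lines.append(line["text"])
    return "\n".join(lines)
```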

23
Q

You have a 20-GB video file named File1.avi that is stored on a local drive.

You need to index File1.avi by using the Azure Video Indexer website.

What should you do first?

A. Upload File1.avi to an Azure Storage queue.
B. Upload File1.avi to the Azure Video Indexer website.
C. Upload File1.avi to Microsoft OneDrive.
D. Upload File1.avi to the www.youtube.com webpage.

A

C. Upload File1.avi to Microsoft OneDrive.

24
Q

You are building an app that will use the Azure AI Video Indexer service.

You plan to train a language model to recognize industry-specific terms.

You need to upload a file that contains the industry-specific terms.

Which file format should you use?

A. XML
B. TXT
C. XLS
D. PDF

A

B. TXT

24
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You build a language model by using a Language Understanding service. The language model is used to search for information on a contact list by using an intent named FindContact.
A conversational expert provides you with the following list of phrases to use for training.
✑ Find contacts in London.
✑ Who do I know in Seattle?
✑ Search for contacts in Ukraine.
You need to implement the phrase list in Language Understanding.
Solution: You create a new pattern in the FindContact intent.
Does this meet the goal?
A. Yes
B. No

A

B. No

25
Q

You are building an app that will share user images.

You need to configure the app to perform the following actions when a user uploads an image:

  • Categorize the image as either a photograph or a drawing.
  • Generate a caption for the image.

The solution must minimize development effort.

Which two services should you include in the solution? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. object detection in Azure AI Computer Vision
B. content tags in Azure AI Computer Vision
C. image descriptions in Azure AI Computer Vision
D. image type detection in Azure AI Computer Vision
E. image classification in Azure AI Custom Vision

A

C. image descriptions in Azure AI Computer Vision

D. image type detection in Azure AI Computer Vision

26
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You develop an application to identify species of flowers by training a Custom Vision model.
You receive images of new flower species.
You need to add the new images to the classifier.
Solution: You create a new model, and then upload the new images and labels.
Does this meet the goal?
A. Yes
B. No

A

B. No

26
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You develop an application to identify species of flowers by training a Custom Vision model.
You receive images of new flower species.
You need to add the new images to the classifier.
Solution: You add the new images, and then use the Smart Labeler tool.
Does this meet the goal?
A. Yes
B. No

A

B. No

26
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You develop an application to identify species of flowers by training a Custom Vision model.
You receive images of new flower species.
You need to add the new images to the classifier.
Solution: You add the new images and labels to the existing model. You retrain the model, and then publish the model.
Does this meet the goal?
A. Yes
B. No

A

A. Yes

26
Q

You are building a Conversational Language Understanding model for an e-commerce chatbot. Users can speak or type their billing address when prompted by the chatbot.
You need to construct an entity to capture billing addresses.
Which entity type should you use?
A. machine learned
B. Regex
C. list
D. Pattern.any

A

A. machine learned

26
Q

You are building an Azure WebJob that will create knowledge bases from an array of URLs.
You instantiate a QnAMakerClient object that has the relevant API keys and assign the object to a variable named client.
You need to develop a method to create the knowledge bases.
Which two actions should you include in the method? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Create a list of FileDTO objects that represents data from the WebJob.
B. Call the client.Knowledgebase.CreateAsync method.
C. Create a list of QnADTO objects that represents data from the WebJob.
D. Create a CreateKbDTO object.

A

B. Call the client.Knowledgebase.CreateAsync method.
D. Create a CreateKbDTO object.

27
Q

You are building a Language Understanding model for an e-commerce platform.
You need to construct an entity to capture billing addresses.
Which entity type should you use for the billing address?
A. machine learned
B. Regex
C. geographyV2
D. Pattern.any
E. list

A

A. machine learned

28
Q

You need to upload speech samples to a Speech Studio project for use in training.
How should you upload the samples?
A. Combine the speech samples into a single audio file in the .wma format and upload the file.
B. Upload a .zip file that contains a collection of audio files in the .wav format and a corresponding text transcript file.
C. Upload individual audio files in the FLAC format and manually upload a corresponding transcript in Microsoft Word format.
D. Upload individual audio files in the .wma format.

A

B. Upload a .zip file that contains a collection of audio files in the .wav format and a corresponding text transcript file.
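The upload is a single .zip containing the .wav files plus a transcript mapping each file to its spoken text. A sketch of assembling such an archive; the tab-separated trans.txt layout shown is treated here as an assumption:

```python
import io
import zipfile

def build_upload_zip(wav_files, transcripts):
    # wav_files: {filename: wav_bytes}; transcripts: {filename: spoken text}
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        for name, data in wav_files.items():
            z.writestr(name, data)
        # one "<file>\t<text>" line per audio file
        z.writestr("trans.txt", "\n".join(f"{n}\t{t}" for n, t in transcripts.items()))
    return buf.getvalue()
```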

28
Q

You are building a conversational language understanding model.
You need to enable active learning.
What should you do?
A. Add show-all-intents=true to the prediction endpoint query.
B. Enable speech priming.
C. Add log=true to the prediction endpoint query.
D. Enable sentiment analysis.

A

C. Add log=true to the prediction endpoint query.
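Active learning works by logging user queries so ambiguous utterances can be reviewed and relabeled later, hence log=true on the prediction call. A sketch of composing the v3.0 prediction URL (endpoint and app ID are placeholders):

```python
from urllib.parse import quote

def prediction_url(endpoint, app_id, query, log=True):
    # log=true asks the service to store the utterance for active-learning review
    flag = "true" if log else "false"
    return (f"{endpoint}/luis/prediction/v3.0/apps/{app_id}"
            f"/slots/production/predict?query={quote(query)}&log={flag}")
```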

28
Q

SIMULATION -
You need to configure and publish bot12345678 to support task management. The intent must be named TaskReminder. The LUDown for the intent is in the C:\Resources\LU folder.
To complete this task, use the Microsoft Bot Framework Composer.

A

Answer : See explanation below.

Step 1: Open Microsoft Bot Framework Composer
Step 2: Select the bot bot12345678
Step 3: Select Import existing resources. Read the instructions on the right side of the screen and select Next.

Step 4: Browse to the C:\Resources\LU folder and select the available .lu file
Step 5: In the pop-up window Importing existing resources, modify the JSON file content based on your resources information: Name the intent TaskReminder
Step 6: Select Publish from the Composer menu. In the Publish your bots pane, select the bot to publish (bot12345678), then select a publish profile from the
Publish target drop-down list.

Reference:
https://docs.microsoft.com/en-us/composer/how-to-publish-bot

28
Q

You have a chatbot that was built by using the Microsoft Bot Framework.
You need to debug the chatbot endpoint remotely.
Which two tools should you install on a local computer? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Fiddler
B. Bot Framework Composer
C. Bot Framework Emulator
D. Bot Framework CLI
E. ngrok
F. nginx

A

C. Bot Framework Emulator
E. ngrok

28
Q

You are developing a method for an application that uses the Translator API.
The method will receive the content of a webpage, and then translate the content into Greek (el). The result will also contain a transliteration that uses the Roman alphabet.
You need to create the URI for the call to the Translator API.
You have the following URI.
https://api.cognitive.microsofttranslator.com/translate?api-version=3.0
Which three additional query parameters should you include in the URI? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. toScript=Cyrl
B. from=el
C. textType=html
D. to=el
E. textType=plain
F. toScript=Latn

A

C. textType=html
D. to=el
F. toScript=Latn
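Putting the three parameters together: to=el selects Greek, toScript=Latn requests a Roman-alphabet transliteration, and textType=html tells the service the input is webpage markup. A sketch of composing the full URI:

```python
from urllib.parse import urlencode

def translate_uri(to="el", to_script="Latn", text_type="html"):
    base = "https://api.cognitive.microsofttranslator.com/translate"
    params = {"api-version": "3.0", "to": to, "toScript": to_script, "textType": text_type}
    return f"{base}?{urlencode(params)}"
```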

28
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You build a language model by using a Language Understanding service. The language model is used to search for information on a contact list by using an intent named FindContact.
A conversational expert provides you with the following list of phrases to use for training.
✑ Find contacts in London.
✑ Who do I know in Seattle?
✑ Search for contacts in Ukraine.

You need to implement the phrase list in Language Understanding.
Solution: You create a new entity for the domain.

Does this meet the goal?

A. Yes
B. No

A

B. No

28
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You build a language model by using a Language Understanding service. The language model is used to search for information on a contact list by using an intent named FindContact.
A conversational expert provides you with the following list of phrases to use for training.
✑ Find contacts in London.
✑ Who do I know in Seattle?
✑ Search for contacts in Ukraine.
You need to implement the phrase list in Language Understanding.
Solution: You create a new intent for location.

Does this meet the goal?

A. Yes
B. No

A

B. No

29
Q

You are training a Language Understanding model for a user support system.
You create the first intent named GetContactDetails and add 200 examples.
You need to decrease the likelihood of a false positive.

What should you do?

A. Enable active learning.
B. Add a machine learned entity.
C. Add additional examples to the GetContactDetails intent.
D. Add examples to the None intent.

A

D. Add examples to the None intent.

30
Q

SIMULATION -
You need to configure bot12345678 to support the French (fr-FR) language.
Export the bot to C:\Resources\Bot\Bot1.zip.
To complete this task, use the Microsoft Bot Framework Composer.

A

Answer: See explanation below.

Step 1: Open Microsoft Bot Framework Composer
Step 2: Select the bot bot12345678
Step 3: Select Configure.
Step 4: Select the Azure Language Understanding tab
Step 5: Select the Set up Language Understanding button. The Set up Language Understanding window will appear, shown below:

Step 6: Select Use existing resources and then select Next at the bottom of the window.
Step 7: Now select the Azure directory, Azure subscription, and Language Understanding resource name (French).
Step 8: Select Next on the bottom. Your key and region will appear on the next window, shown below:

Step 9: Select Done.
Reference:
https://docs.microsoft.com/en-us/composer/concept-language-understanding
https://docs.microsoft.com/en-us/composer/how-to-add-luis

31
Q

SIMULATION -
You need to configure and publish bot12345678 to answer questions by using the frequently asked questions (FAQ) located at https://docs.microsoft.com/en-us/azure/bot-service/bot-service-resources-bot-framework-faq. The solution must use bot%@lab.LabInstance.Id-qna-qna%.
To complete this task, use the Microsoft Bot Framework Composer.

A

Answer: See explanation below.

Step 1: Open Microsoft Bot Framework Composer
Step 2: Select the bot bot12345678
Step 3: Open the Configure page in Composer. Then select the Development resources, and scroll down to Azure QnA Maker.
Step 4: To access the Connect to QnA Knowledgebase action, select + under the node where you want to add the QnA knowledge base, and then select Connect to QnA Knowledgebase from the Access external resources action menu.

Step 5: Review the QnA Maker settings panel after selecting the QnA Maker dialog.
Use:
Instance: bot%@lab.LabInstance.Id-qna-qna%
Reference:
https://docs.microsoft.com/en-us/composer/how-to-create-qna-kb
https://docs.microsoft.com/en-us/composer/how-to-add-qna-to-bot

32
Q

You need to measure the public perception of your brand on social media by using natural language processing.
Which Azure service should you use?
A. Language service
B. Content Moderator
C. Computer Vision
D. Form Recognizer

A

A. Language service
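A minimal sketch of the JSON body that a sentiment-analysis request to the Language service would carry (the posts and document ids are illustrative; ids must be unique per document):

```python
import json

# Hypothetical social media posts to score for brand sentiment.
posts = ["Love the new release!", "Support was slow to respond."]
body = {
    "documents": [
        {"id": str(i), "language": "en", "text": text}
        for i, text in enumerate(posts, start=1)
    ]
}
print(json.dumps(body, indent=2))
```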

32
Q

You have the following data sources:
✑ Finance: On-premises Microsoft SQL Server database
✑ Sales: Azure Cosmos DB using the Core (SQL) API
✑ Logs: Azure Table storage
✑ HR: Azure SQL database
You need to ensure that you can search all the data by using the Azure Cognitive Search REST API.
What should you do?
A. Migrate the data in HR to Azure Blob storage.
B. Migrate the data in HR to the on-premises SQL server.
C. Export the data in Finance to Azure Data Lake Storage.
D. Ingest the data in Logs into Azure Sentinel.

A

C. Export the data in Finance to Azure Data Lake Storage.

33
Q

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: admin@abc.com

Azure Password: XXXXXXXXXXXX
The following information is for technical support purposes only:

Lab Instance: 12345678

Task
You need to create and publish a Language Understanding (classic) model named 1u12345678. The model will contain an intent of Travel that has an utterance of Boat.
To complete this task, sign in to the Language Understanding portal at https://www.luis.ai/.

A

Answer: See explanation below.

Create your LUIS model:
1. Navigate to the LUIS.ai management portal and create a new application. In the portal, create a model.
Model name: 1u12345678
2. Define one intent as "Travel" and add an example utterance of Boat.
3. Publish the model.
In order to use your model, you have to publish it. This is as easy as hitting the Publish tab, selecting between the production or staging environments, and hitting Publish. From this page, you can also choose to enable sentiment analysis, speech priming to improve speech recognition, or the spell checker. For now, you can leave those unchecked.
Reference:
https://docs.microsoft.com/en-us/azure/health-bot/language_model_howto
https://www.codemag.com/article/1809021/Natural-Language-Understanding-with-LUIS
34
Q

SIMULATION -
Use the following login credentials as needed:
To enter your username, place your cursor in the Sign in box and click on the username below.
To enter your password, place your cursor in the Enter password box and click on the password below.

Azure Username: admin@abc.com

Azure Password: XXXXXXXXXXXX
The following information is for technical support purposes only:

Lab Instance: 12345678

Task
You need to create a version of the 1u12345678 Language Understanding (classic) model. The new version must have a version name of 1.0 and must be active.
To complete this task, sign in to the Language Understanding portal at https://www.luis.ai/.

A

Step 1: Clone a version
1. Select the version of 1u12345678 that you want to clone, then select Clone from the toolbar.
2. In the Clone version dialog box, type 1.0 as the name for the new version.
Step 2: Set the active version
Select the new version from the list, then select Activate from the toolbar.
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-manage-versions

35
Q

You have a Language service resource that performs the following:

  • Sentiment analysis
  • Named Entity Recognition (NER)
  • Personally Identifiable Information (PII) identification

You need to prevent the resource from persisting input data once the data is analyzed.

Which query parameter in the Language service API should you configure?

A. model-version
B. piiCategories
C. showStats
D. loggingOptOut

A

D. loggingOptOut
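A sketch of where the parameter sits in the request URL, assuming the Text Analytics v3.1 sentiment route, where loggingOptOut is passed as a query parameter (the resource name is hypothetical; only URL construction is shown):

```python
from urllib.parse import urlencode

# Hypothetical Language resource endpoint. loggingOptOut=true asks the
# service not to retain the submitted input after analysis.
endpoint = "https://contoso-lang.cognitiveservices.azure.com"
query = urlencode({"loggingOptOut": "true"})
url = f"{endpoint}/text/analytics/v3.1/sentiment?{query}"
print(url)
```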

36
Q

You have an Azure Cognitive Services model named Model1 that identifies the intent of text input.

You develop an app in C# named App1.

You need to configure App1 to use Model1.

Which package should you add to App1?

A. Universal.Microsoft.CognitiveServices.Speech
B. SpeechServicesToolkit
C. Azure.AI.Language.Conversations
D. Xamarin.Cognitive.Speech

A

C. Azure.AI.Language.Conversations

37
Q

You are building a social media extension that will convert text to speech. The solution must meet the following requirements:

  • Support messages of up to 400 characters.
  • Provide users with multiple voice options.
  • Minimize costs.

You create an Azure Cognitive Services resource.

Which Speech API endpoint provides users with the available voice options?

A. https://uksouth.api.cognitive.microsoft.com/speechtotext/v3.0/models/base
B. https://uksouth.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/voices
C. https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list
D. https://uksouth.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}

A

C. https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list
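The voices-list route has the same shape in every region; only the region prefix of the host changes. A small Python sketch of how the URL is assembled (uksouth is taken from the question's options):

```python
# Region-based host for the text-to-speech voices-list endpoint;
# swapping the region (e.g. westus) only changes the host prefix.
region = "uksouth"
voices_list_url = (
    f"https://{region}.tts.speech.microsoft.com/cognitiveservices/voices/list"
)
print(voices_list_url)
```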

38
Q

You develop a custom question answering project in Azure Cognitive Service for Language. The project will be used by a chatbot.

You need to configure the project to engage in multi-turn conversations.

What should you do?

A. Add follow-up prompts.
B. Enable active learning.
C. Add alternate questions.
D. Enable chit-chat.

A

A. Add follow-up prompts.

39
Q

You train a Conversational Language Understanding model to understand the natural language input of users.

You need to evaluate the accuracy of the model before deploying it.

What are two methods you can use? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. From the language authoring REST endpoint, retrieve the model evaluation summary.
B. From Language Studio, enable Active Learning, and then validate the utterances logged for review.
C. From Language Studio, select Model performance.
D. From the Azure portal, enable log collection in Log Analytics, and then analyze the logs.

A

A. From the language authoring REST endpoint, retrieve the model evaluation summary.
C. From Language Studio, select Model performance.
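For option A, the call can be sketched as follows. The endpoint, project, and model names are hypothetical; the path follows the Conversational Language Understanding authoring REST API pattern for the model evaluation summary, and the api-version may differ for your service version (only URL construction is shown):

```python
# Hypothetical resource, project, and trained-model names.
endpoint = "https://contoso-lang.cognitiveservices.azure.com"
project, model = "SupportIntents", "model-v1"
url = (
    f"{endpoint}/language/authoring/analyze-conversations/"
    f"projects/{project}/models/{model}/evaluation/summary-result"
    "?api-version=2023-04-01"
)
print(url)
```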

40
Q

You have an Azure subscription that contains an Azure Cognitive Service for Language resource.

You need to identify the URL of the REST interface for the Language service.

Which blade should you use in the Azure portal?

A. Identity
B. Keys and Endpoint
C. Networking
D. Properties

A

B. Keys and Endpoint

41
Q

You are building a retail kiosk system that will use a custom neural voice.

You acquire audio samples and consent from the voice talent.

You need to create a voice talent profile.

What should you upload to the profile?

A. a .zip file that contains 10-second .wav files and the associated transcripts as .txt files
B. a five-minute .flac audio file and the associated transcript as a .txt file
C. a .wav or .mp3 file of the voice talent consenting to the creation of a synthetic version of their voice
D. a five-minute .wav or .mp3 file of the voice talent describing the kiosk system

A

C. a .wav or .mp3 file of the voice talent consenting to the creation of a synthetic version of their voice