(AZ-204 topic) Develop Azure Compute Solutions Flashcards

Test takers will be expected to develop solutions using Azure Virtual Machines, Azure Container Instances and Azure Container Registry, as well as to deploy web applications to Azure App Service and develop Azure Functions. Questions for this domain comprise 25% of the total questions for this exam.

1
Q

Your company wants to move its website to Azure. You currently host the website in a Docker container that sees high volume during peak business hours, causing CPU spikes, and it has been set up to fail over to a local node should there be an issue. You've been instructed to shift the website to the cloud with as little change as possible, while keeping it secure and resilient and keeping costs to a minimum. Which of the following would be the best solution?

  • Upload your container to Azure Container Registry & deploy a new Azure Web App Service at the Standard tier with auto-scaling
  • Upload your container to Azure Container Registry & deploy to Azure Container Instances with Auto-Scaling set up
  • Build an Azure Virtual Machine that runs Docker in a public-facing virtual network. Move your container to the Virtual Machine and set your Virtual Machine to scale under high CPU conditions
  • Build a Virtual Machine Scale Set that runs Docker in a public-facing virtual network. Configure your scale set to load balance and increase the number of nodes based on CPU load.
A

-Upload your container to Azure Container Registry & deploy a new Azure Web App Service at the Standard tier with auto-scaling

Azure App Service allows you to not only run your web application but also secure it and provide high availability, with minimal effort, when using the Standard tier.

2
Q

The ordering system for your company is getting an upgrade which will update a separate customer application whenever an order is completed. The order system processes at most 1000 orders per day, and the application is built using Azure Functions. What is the most efficient and economical way for the ordering system to notify the application when an order is complete?

  • Use Cosmos DB for the data and use the built-in event notification service.
  • Poll the order database from the application using a timer trigger to check if an order has been completed.
  • Use an Azure Event Hub to collect and manage the order completion events. Then, build a pipeline to send the data to the application.
  • Use a webhook to an Azure Function which can update the application as the order is completed.
A

-Use a webhook to an Azure Function which can update the application as the order is completed.

Webhooks are great for passive events, where you don’t know when the event might happen. No polling is necessary, and as such, it is efficient and “cheap”.

3
Q

Your company has developed an application that needs to be able to accept, store, and process images. This application utilizes Azure App Services to host the web app, utilizes OAuth for authentication, and uses a General-Purpose v2 Blob Storage account for the images. You’ve been asked to ensure images uploaded are processed to create a better experience for people viewing from mobile devices by converting them to more manageable sizes & formats. The process should only run when new images are uploaded or updated. What is the best method to achieve this result?

  • Use Azure Storage Blob Compression to process images as they come in
  • Create a Function that is triggered whenever an upload request comes through the webapp, catching the image before it lands in the Blob Storage Account
  • Build an Event Hub trigger event that kicks off a Function that will process the images.
  • Build an Azure Function that uses a Blob Storage Trigger for any changes and runs whenever it detects new images or when an image is updated.
A

-Build an Azure Function that uses a Blob Storage Trigger for any changes and runs whenever it detects new images or when an image is updated.

The Blob Storage trigger watches the container for new or updated blobs, giving you a simple and efficient way to run your function only when images change.
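As a sketch, the Blob Storage trigger for this scenario could be declared in the function's function.json similar to the following (the container path, binding name, and connection setting are hypothetical):

```json
{
  "bindings": [
    {
      "name": "imageBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "images/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

The `{name}` token in the path makes the blob name available to the function code, so the function fires once per new or updated image.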

4
Q

Which framework does Azure Durable Functions use?

  • Azure Serverless Framework
  • .NET Framework
  • Azure API Management
  • Durable Task Framework
A

-Durable Task Framework

Durable Functions is built on the Durable Task Framework, which provides orchestration and uses event sourcing to reliably maintain execution state.

5
Q

Which scenario is best suited for using Azure Container Instances to host your application?

  • A legacy application that needs to run on a specific version of Windows Server.
  • An application that is expected to scale and grow rapidly.
  • An application that is being tested for a small user group in a single region.
  • An application that requires native TLS support for the public Internet.
A

-An application that is being tested for a small user group in a single region.

ACI is well suited to simple, small-scale scenarios such as testing an application for a small user group in a single region. ACI provides container groups: collections of containers that get scheduled on the same host machine and share a lifecycle, local network, and storage volumes.

6
Q

Your company has asked you to make an update to one of your ARM templates that deploys an environment that follows your security compliance standards. You've been tasked with updating the SKUs that can be used for your virtual machines to include Standard_D1_v2 and Standard_E2_v3. Under which element can you accomplish this?

  • Under the “Functions” element
  • Under the “Variables” element
  • Under the “Parameters” element
  • Under the Outputs Element
A

-Under the “Parameters” element

The parameters element is where you can define an allowedValues array listing the values that are permitted in your deployments.
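A minimal sketch of what this parameter could look like in the template (the parameter name `vmSku` is hypothetical):

```json
"parameters": {
  "vmSku": {
    "type": "string",
    "defaultValue": "Standard_D1_v2",
    "allowedValues": [
      "Standard_D1_v2",
      "Standard_E2_v3"
    ]
  }
}
```

Any deployment that passes a SKU outside the allowedValues array will fail validation before anything is provisioned.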

7
Q

When creating a container registry, what Azure CLI command can be used to initiate the process?

  • az registry create
  • az registry new
  • az acr new
  • az acr create
A

-az acr create

This will create a new Azure Container Registry.

8
Q

How would you retrieve the ARM template for an existing service in Azure, in order to reuse and automate it?

  • Lodge a support ticket with Azure Support to have the template generated. This requires a Standard support plan or higher.
  • Use the PowerShell cmdlet Export-AzARMTemplate.
  • ARM templates can only be retrieved when the resource is created.
  • In the Azure Portal use the “Export Template” option.
A

-In the Azure Portal use the “Export Template” option.

In the “Automation” section you can export the ARM template to exactly duplicate the resource.

9
Q

There have been some concerns in your company about the security of the Azure Web Apps you are using. Your development manager has asked you to ensure traffic to and from the Web Apps is secure. What is the best way to do this?

  • Use a system-assigned managed identity to hide any credentials passing through the network.
  • Install an SSL certificate on the App Service itself to encrypt all the web traffic.
  • Use Azure Key Vault to protect the database connection and encrypt the certificate credentials.
  • Use an App Service deployment slot to redirect traffic through a secure zone.
A

-Install an SSL certificate on the App Service itself to encrypt all the web traffic.

An SSL/TLS certificate is used to encrypt the data passing over the Internet. It can ensure that traffic to and from a Web App is secure.

10
Q

Which of these is not a required element on an ARM template?

  • resources
  • $schema
  • variables
  • contentVersion
A

-variables

You don’t have to include variables in an ARM template. It is allowed, although not very dynamic, to specify every value directly.
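For illustration, the smallest valid ARM template contains only the required elements; no variables are needed:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": []
}
```

This template deploys nothing, but it passes validation because $schema, contentVersion, and resources are all present.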

11
Q

You have been asked to deploy a static text file asynchronously to a Web App called acg204 in resource group wizardsRG. Fill in the two missing values in the Azure CLI command.

az webapp _______ --resource-group wizardsRG --name acg204 --src-path SourcePath --type static --async _______

  • deploy, true
  • deploy, IsAsync
  • deployment, IsAsync
  • deployment, true
A

-deploy, true

‘deploy’ deploys a provided artifact to Azure Web Apps. Valid values for --async are ‘true’ and ‘false’.

12
Q

Which properties do you get when using a Windows Azure Container Instance (ACI) for your application? (Choose 3.)

  • A public IP address
  • Virtual Network deployment.
  • Fully qualified domain name (FQDN)
  • Access to the virtual machine running the container
  • Greater security for customer data
  • Integration with Docker Hub and Azure Container Registry.
A

-A public IP address
This is the IP address you can access the container on over the Internet.

-Fully qualified domain name (FQDN)
Your container will get a default FQDN, or you can set up your own using DNS.

-Integration with Docker Hub and Azure Container Registry.
You can create container instances directly from Docker Hub or ACR. Neat.

13
Q

Which of the following are valid Azure Function Triggers? (Choose 3.)

  • IoT
  • HTTP
  • Service Bus
  • Webhook
  • App Service
  • JavaScript
A

-HTTP
An HTTP trigger is a basic and simple trigger for your Azure Function.

-Service Bus
Use the Service Bus trigger to respond to messages from a Service Bus queue or topic. I like buses.

-Webhook
If an external system supports webhooks, it can be configured to call an Azure Functions Webhook using HTTP and pass on relevant data.

14
Q

Which of the following languages is NOT supported by Durable Functions?

  • PowerShell
  • Java
  • JavaScript
  • F#
A

-Java

Currently, Durable Functions only supports C#, JavaScript, Python, F#, and PowerShell. More languages will be supported over time, but it currently does not support Java.

15
Q

The new infrastructure you are designing for your new airbending service uses Azure Functions as part of the architecture. The functions will work in conjunction with an App Service hosted on an App Service Plan that runs close to computing capacity. The functions are expected to have minimal use, as they perform a critical but infrequent maintenance task. What is the most cost-effective service to host these Functions on?

  • Create a new App Service Plan only for the Functions.
  • Use a consumption model.
  • Scale up the existing App Service Plan and use that.
  • Use the existing App Service Plan.
A

-Use a consumption model.

The first 1 million function executions every month are free on the Consumption plan, and you pay only while your functions run.

16
Q

You are the administrator of the Nutex Corporation. You have created an Azure function in Visual Studio and have uploaded the function to Azure. You want to use the recommended method to monitor the execution of your function app.
Which Azure resource do you have to create after publishing the function app with Visual Studio?

  • System Center Operations Manager
  • Azure Monitor
  • Azure Service Bus
  • Application Insights
A

-Application Insights

You would create the Application Insights resource because the recommended way to monitor the execution of your functions is by integrating your function app with Azure Application Insights. When you create a function app in the Azure portal, this integration is done automatically; however, when you publish the function app from Visual Studio, the integration is not complete. You need to enable the Application Insights integration manually after publishing the function app.

You would not choose Azure Monitor because this is not the recommended way to monitor the execution of function apps.

You would not choose System Center Operations Manager because it is not used primarily for Azure function apps. Instead, it is an overall monitoring solution.

You would not choose Azure Service Bus because this is a messaging service and not usable for application
monitoring.

17
Q

Lana has been asked to deploy a complex solution in Azure involving multiple VMs running various custom service applications. She has been asked to do this at least three times because Test, Development, and Production environments are required. The Test and Development solutions will need to be able to be destroyed and recreated regularly, incorporating new data from production each time. The Nutex system administration team is already using Ansible internally to accomplish deployments.

What will Lana need to do to get things started in Azure?

  • On each Ansible managed node, Lana will need to install Azure Dependencies using pip.
  • Using the Azure Cloud Shell, Lana needs to install the Ansible modules.
  • On the Ansible control machine, Lana will need to install Azure Resource Manager modules using pip.
  • Working in a local Windows Powershell, Lana will need to install the Ansible modules.
A

-On the Ansible control machine, Lana will need to install Azure Resource Manager modules using pip.

Lana needs to install the Azure Resource Manager modules on the Ansible control machine using pip. The Ansible control machine needs the Azure Resource Manager modules to communicate with Azure. Using pip allows for easier management of Python modules.

She would not use the Azure Cloud Shell to install the Ansible Modules. This is not necessary because the Ansible Modules are already installed in the Azure Cloud Shell.

She would not use a local Windows Powershell to install the Ansible modules. This is not currently possible because the Ansible Control Machine cannot currently run on a Windows PC, and therefore cannot be managed with a Windows Powershell.

She would not install Azure dependencies on each Ansible managed node using pip. The managed nodes do not need any Azure dependencies installed; that is one of Ansible's biggest selling points! They only require a Python installation and SSH access.

18
Q

You have a Kubernetes cluster in AKS and you deployed an app called MyApp. You increase the number of nodes from one to three in the Kubernetes cluster by using the following command:

______ --resource-group=myResourceGroup --name=myAKSCluster --node-count 3

The output of the command is as follows:

"agentPoolProfiles": [
  {
    "count": 3,
    "dnsPrefix": null,
    "fqdn": null,
    "name": "myAKSCluster",
    "osDiskSizeGb": null,
    "osType": "Linux",
    "ports": null,
    "storageProfile": "ManagedDisks",
    "vmSize": "Standard_D2_v2",
    "vnetSubnetId": null
  }
]

Fill in the missing part of the command.

A

Acceptable answer(s) for field 1:
az aks scale
You would type the az aks scale command. This command is used to scale the node pool in a Kubernetes cluster.
The --name parameter specifies the name of the cluster. The --resource-group parameter specifies the name of the resource group. The --node-count parameter specifies the number of nodes in the pool.

19
Q

You are the administrator of the Nutex Corporation. You want to build images based on Linux and Windows for your Azure solutions.

Which Azure services can you use? (Choose all that apply.)

  • ImageX
  • Azure Kubernetes Service
  • Azure Pipelines
  • Azure Container Registry tasks
A
  • Azure Pipelines
  • Azure Container Registry tasks

You can use Azure Container Registry tasks and Azure Pipelines.

Azure Container Registry tasks allow you to build Docker container images on demand in the cloud.

Azure Pipelines allows you to implement a pipeline for building, testing, and deploying an app. The Azure Pipelines service allows you to build images for any repository containing a Dockerfile.

You would not choose the Azure Kubernetes Service, because this service is there to manage container images for solutions and not to build images.

You would not choose the ImageX utility, because this is not an Azure service and cannot be used to create container images. ImageX allows you to capture an image of a hard drive in a Windows Imaging Format (WIM) file.

20
Q

You are the administrator of the Nutex Corporation. You want to deploy some virtual machines through an ARM template. You need a virtual network named VNET1 with a subnet named Subnet1, which has to be defined as a child resource. For that you have to define the corresponding JSON template.

Choose one of the following possibilities to complete the following ARM template (SEE IMAGE).

  • dependsOn
  • parametersLink
  • location
  • originHostHeader
A

dependsOn

You would use the dependsOn element because the subnet, as the child resource, is marked as dependent on its parent, the VNet resource. The parent resource must exist before the child resource can be deployed.

You would not use the originHostHeader element, because this element is used as a reference function to enable an expression. This expression derives its value from another JSON name or runtime resources.

You would not use the parametersLink element, because you use this element to link an external parameter file.

You would not use the location element, because this element defines the geographical location.
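A sketch of Subnet1 defined as a child resource depending on VNET1 (the apiVersion and address prefix are assumptions):

```json
{
  "type": "Microsoft.Network/virtualNetworks/subnets",
  "apiVersion": "2020-06-01",
  "name": "VNET1/Subnet1",
  "dependsOn": [
    "[resourceId('Microsoft.Network/virtualNetworks', 'VNET1')]"
  ],
  "properties": {
    "addressPrefix": "10.0.0.0/24"
  }
}
```

The dependsOn array guarantees Resource Manager deploys VNET1 before it attempts to create Subnet1.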

21
Q

You are the administrator of the Nutex Corporation. You want to create a new Azure Windows VM named VMNutex in a resource group named RG1 using Visual Studio and C#.

The virtual machine needs to be a member of an availability set and needs to be accessible through the network.

What steps should you perform?

A
  1. Create a Visual Studio project.
  2. Type Install-Package Microsoft.Azure.Management.Fluent in the Package Manager Console.
  3. Create the azureauth.properties file.
  4. Create the management client:

using Microsoft.Azure.Management.Compute.Fluent;
using Microsoft.Azure.Management.Compute.Fluent.Models;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

To complete the management client creation, add the following code to the Main method:

var credentials = SdkContext.AzureCredentialsFactory
    .FromFile(Environment.GetEnvironmentVariable("AZURE_AUTH_LOCATION"));

var azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
    .Authenticate(credentials)
    .WithDefaultSubscription();

  5. Create the resource group:

var groupName = "RG1";
var vmName = "VMNutex";
var location = Region.USWest;
Console.WriteLine("Creating resource group...");
var resourceGroup = azure.ResourceGroups.Define(groupName).WithRegion(location).Create();

  6. Create an availability set:

Console.WriteLine("Creating availability set...");
var availabilitySet = azure.AvailabilitySets.Define("myAVSet").WithRegion(location)
    .WithExistingResourceGroup(groupName).WithSku(AvailabilitySetSkuTypes.Managed).Create();

  7. Add the code to create the public IP address, the virtual network, and the network interface.
  8. Create the virtual machine:

azure.VirtualMachines.Define(vmName).WithRegion(location)
    .WithExistingResourceGroup(groupName).WithExistingPrimaryNetworkInterface(networkInterface)
    .WithLatestWindowsImage("MicrosoftWindowsServer", "WindowsServer", "2012-R2-Datacenter")
    .WithAdminUsername("AtlFalcon").WithAdminPassword("Ih8DaN0S8ntZ")
    .WithComputerName(vmName).WithExistingAvailabilitySet(availabilitySet)
    .WithSize(VirtualMachineSizeTypes.StandardDS1).Create();

  9. Run the application.

First, you need to create a Visual Studio project. You will then install the NuGet package so that you can add the additional libraries that you need in Visual Studio. You would choose Tools > Nuget Package Manager. From the Package Manager Console, you would type Install-Package Microsoft.Azure.Management.Fluent.

You would then create the azureauth.properties file. This file ensures that you have access to an AD service principal and you can do that through the authorization properties in the azureauth.properties file.

You would then create the management client. This can be done by opening the Program.cs file of the project and adding the following statements to the top of the file:

using Microsoft.Azure.Management.Compute.Fluent;
using Microsoft.Azure.Management.Compute.Fluent.Models;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

To complete the management client creation, you would add the following code to the Main method:

var credentials = SdkContext.AzureCredentialsFactory
    .FromFile(Environment.GetEnvironmentVariable("AZURE_AUTH_LOCATION"));

var azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
    .Authenticate(credentials)
    .WithDefaultSubscription();

Then you need to create the resource group since all resources must be contained in the resource group. You can add the following code to the Main method to create the resource group:

var groupName = "RG1";
var vmName = "VMNutex";
var location = Region.USWest;
Console.WriteLine("Creating resource group...");
var resourceGroup = azure.ResourceGroups.Define(groupName).WithRegion(location).Create();

You will then need to create an availability set because an availability set allows you to maintain virtual machines that are used by your applications. You can create the availability set by adding the following code to the Main method:

Console.WriteLine("Creating availability set...");
var availabilitySet = azure.AvailabilitySets.Define("myAVSet").WithRegion(location)
    .WithExistingResourceGroup(groupName).WithSku(AvailabilitySetSkuTypes.Managed).Create();

Then you need to add the code to create the public IP address, the virtual network, and the network interface. A public IP address is needed so that you can communicate with the virtual machine over the network. The virtual machine must be in a subnet of the virtual network and must have a network interface to communicate on that network.

You will then create the virtual machine. You can create the virtual machine by adding the following code to the Main method:

azure.VirtualMachines.Define(vmName).WithRegion(location)
    .WithExistingResourceGroup(groupName).WithExistingPrimaryNetworkInterface(networkInterface)
    .WithLatestWindowsImage("MicrosoftWindowsServer", "WindowsServer", "2012-R2-Datacenter")
    .WithAdminUsername("AtlFalcon").WithAdminPassword("Ih8DaN0S8ntZ")
    .WithComputerName(vmName).WithExistingAvailabilitySet(availabilitySet)
    .WithSize(VirtualMachineSizeTypes.StandardDS1).Create();

Finally, you would run the application from within Visual Studio.

22
Q

You are the administrator of the Nutex Corporation. You have created different Azure functions. You have to decide which kind of input and output binding you have to choose for which type of function. The trigger causes the function to run. The input and output bindings to the function connect another resource to the function.

Scenario 2:

A scheduled job reads Blob Storage contents and creates a new Cosmos DB document.

What are the relevant Trigger, Input binding and Output binding to use in this scenario?

List of bindings/triggers:

  • Queue
  • Timer
  • Event Grid
  • HTTP
  • None
  • Blob Storage
  • Cosmos DB
  • SendGrid
  • Microsoft Graph
A

Trigger: timer

Input Binding: Blob Storage

Output Binding: Cosmos DB

Because a scheduled job reads Blob Storage contents and creates a new document, your scheduled job is time-based. Therefore, you need a Timer trigger. To make it possible to read from a blob storage, you need the blob storage input binding, and to create a new Cosmos DB document you need the Cosmos DB outbound binding.

The function trigger is not HTTP because, in this scenario, no HTTP request is received. The function trigger is not Event Grid because this function does not have to respond to an event sent to an Event Grid topic. The function trigger is not Queue because this function is not started by a queue message.

The function cannot have an input binding with None because it must be based on Blob Storage content. The function cannot use Cosmos DB as the input binding because it must read blob storage content and not content from Cosmos DB.

The function cannot have an output binding with Queue because it must create a new Cosmos DB document. The function cannot have an output binding with SendGrid because it cannot send an email. The function cannot have an output binding with Microsoft Graph because in this scenario you do not want to have an Excel spreadsheet, OneDrive, or Outlook as output.
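The trigger and bindings for this scenario could be sketched in a function.json like the following (the binding names, schedule, blob path, and connection settings are hypothetical):

```json
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 2 * * *"
    },
    {
      "name": "inputBlob",
      "type": "blob",
      "direction": "in",
      "path": "orders/latest.json",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputDocument",
      "type": "cosmosDB",
      "direction": "out",
      "databaseName": "ordersDb",
      "collectionName": "documents",
      "connectionStringSetting": "CosmosDBConnection"
    }
  ]
}
```

The timer trigger starts the function on the CRON schedule, the blob input binding hands the function the blob contents, and the Cosmos DB output binding writes the new document.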

23
Q

You are the administrator of the Nutex Corporation. You have created an Azure function app with Visual Studio. You want to upload the required settings to your function app in Azure. For that, you use the Manage Application Settings link. However, the Application Settings dialog is not working as you expected.

What is a possible solution for this?

  • manually create the host.json file in the project root.
  • manually create the local.settings.json file in the project root.
  • manually create an Azure storage account.
  • install the Azure storage emulator.
A
  • manually create the local.settings.json file in the project root.

You would manually create the local.settings.json file in the project root because by default the local.settings.json file is not checked into the source control. When you clone a local functions project from source control, the project does not have a local.settings.json file. In this case you need to manually create the local.settings.json file in the project root so that the Application Settings dialog will work as expected.

You would not manually create the host.json file in the project root because this file lets you configure the functions host. The settings apply both when running locally and in Azure.

You would not manually create an Azure storage account because, although Azure Functions require a storage account, it is created automatically, so you do not have to create it manually.

You would not install the Azure storage emulator because with that you cannot upload the required settings.
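A minimal local.settings.json you might create in the project root could look like the following (all values are placeholders):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```

With this file present, the Application Settings dialog has local settings to read from and compare against the settings in Azure.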

24
Q

You are the administrator of the Nutex Corporation. You have created an Azure function in Visual Studio and have uploaded the function to Azure. You want to use the recommended method to monitor the execution of your function app.

Which Azure resource do you have to create after publishing the function app with Visual Studio?

  • Azure Monitor
  • Azure Service Bus
  • Application Insights
  • System Center Operations Manager
A

-Application Insights

You would create the Application Insights resource because the recommended way to monitor the execution of your functions is by integrating your function app with Azure Application Insights. Integrating your function app with Azure Application Insights is done automatically. When you create your function app during Visual Studio publishing, the integration of your function in Azure is not complete. You need to enable the Application Insights integration manually after publishing the function app.

You would not choose Azure Monitor because this is not the recommended way to monitor the execution of function apps.

You would not choose System Center Operations Manager because it is not used primarily for Azure function apps. Instead, it is an overall monitoring solution.

You would not choose Azure Service Bus because this is a messaging service and not usable for application monitoring.

25
Q

You are working as a developer for the Nutex Corporation. You are developing the monitor pattern to observe an arbitrary endpoint with the following code:

[FunctionName("AzureFunc-JobStatusMonitoring")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    int jobId = context.GetInput<int>();
    int pollingInterval = GetPollingInterval();
    DateTime expiryTime = GetExpiryTime();

    while (context.CurrentUtcDateTime < expiryTime)
    {
        var jobStatus = await context.CallActivityAsync<string>("GetJobStatus", jobId);
        if (jobStatus == "Completed")
        {
            // Code...
            await context.CallActivityAsync("SendAlert", jobId);
            break;
        }

        var nextCheck = context.CurrentUtcDateTime.AddSeconds(pollingInterval);
        await context.CreateTimer(nextCheck, CancellationToken.None);
    }

    // Code...
}

You notice that the code does not work for more than seven days.

How should you easily resolve the problem? (Choose two, each answer is a complete solution.)

  • Use the while loop for simulating a timer API
  • redesign the solution to use Azure Batch
  • redesign the function to use a Durable Function
  • Use the for loop for simulating a timer API
A
  • Use the while loop for simulating a timer API
  • Use the for loop for simulating a timer API

Durable timers are limited to seven days. The workaround is to simulate a longer timer by calling the timer API repeatedly in a loop, such as a while loop or a for loop.

You would not redesign the function to use a Durable Function. The "AzureFunc-JobStatusMonitoring" function is already a Durable Function: its signature uses IDurableOrchestrationContext, which means it is already a Durable Function (2.x).

You would not redesign the solution to use Azure Batch. Although this action could work since Azure Batch schedules jobs to run on nodes, we have no information about which nodes the application runs on. Using timer APIs in a loop is less complicated.

26
Q

You are working as a developer for the Nutex Corporation. You are responsible for developing a new online e-store system using PHP 7.4. You need to add a custom .dll extension to PHP.

How can you achieve this? (Choose all that apply.)

  • Enable Composer automation in Azure.
  • Add the PHP_EXTENSIONS key in the Application Settings section of the Configuration blade.
  • Add php.ini to the wwwroot directory.
  • Use a custom PHP runtime.
  • Add .htaccess to the wwwroot directory.
  • Add PHP_INI_SCAN_DIR to App Settings.
A
  • Add the PHP_EXTENSIONS key in the Application Settings section of the Configuration blade.
  • Add PHP_INI_SCAN_DIR to App Settings.

You can enable extensions in the default PHP runtime by putting the extension .dll and an extensions.ini file with the correct content into a directory. You can add a PHP_INI_SCAN_DIR key to Application Settings that points to the directory containing the extensions.ini file. Alternatively, you can add PHP_EXTENSIONS to the Application Settings section of the Configuration blade.

You would not use a custom PHP runtime. This solution could resolve the problem if that extension is compiled to a custom PHP, but the scenario does not mention that the extension will be compiled to a custom PHP.

You would not enable Composer automation in Azure. Composer automation allows you to perform git add, git commit, and git push to your app. Composer automation will not add a custom .dll extension to PHP.

You would not add php.ini to the wwwroot directory. A php.ini file does not work in the web app.

You would not add .htaccess to the wwwroot directory because .htaccess cannot load an extension.

27
Q

You have an application named MyApp deployed in Kubernetes. The application needs to be updated. It needs to keep running during the update and should provide a rollback mechanism if a deployment failure occurs. What actions should you perform to apply the update?

A
  1. Change the application code in the config_file.cfg file.
  2. Use the docker-compose up --build command to re-create the application image.
  3. Tag the image with the loginServer of the container registry.
  4. Use the docker push command to push the image to the registry.

You would first make the change in the application by opening the config_file.cfg file with any code or text editor and making the appropriate changes to the code.

Once the application has been updated, you would re-create the application image and run the updated application. You can use the docker-compose command with the --build parameter to re-create the application image. The -d parameter should not be used because it runs containers in the background and does not re-create the image.

docker-compose up --build

After you have re-created the image and tested the application, you would tag the image with the loginServer of the container registry. You can use the docker tag command to do this. The following tags the azure-myapp image with the loginServer of the container registry named acrNutex and adds :v2 to the end of the image name:

docker tag azure-myapp acrNutex/azure-myapp:v2

Once the image has been tagged, you would push it to the registry with the docker push command. The following pushes the tagged image to the acrNutex registry:

docker push acrNutex/azure-myapp:v2

You should ensure that multiple instances of the application pod are running to ensure maximum uptime. You would type the kubectl scale command to scale the resources. You can use the --replicas parameter to set the number of pods. The following command ensures that three pods are running:

kubectl scale --replicas=3 deployment/MyApp-front

28
Q

You are the administrator of the Nutex Corporation. You have enabled diagnostic logging for failed request traces on your Azure web app named NutexWebApp. You want to view the logged information in an easy manner. What should you do?

  • Use Log Parser.
  • Open the CSV file in blob storage.
  • Stream logging information with Azure CLI.
  • Open the freb.xsl file.
A

Open the freb.xsl file.

You would open the freb.xsl file because failed request traces are stored in XML files named fr####.xml in the /LogFiles/W3SVC#########/ directory. To make it easier to view the logged information, an XSL stylesheet named freb.xsl is provided in the same directory as the XML files.

You would not stream logging information with Azure CLI. You can stream logging information with the az webapp log tail command, but streaming only supports .TXT, .LOG, and .HTM files, not .XML files.

You would not open the CSV file in blob storage because failed request trace logging information is not stored in CSV-format in blob storage.

You would not use Log Parser because the Log Parser utility is used to view web server logs, not failed request trace logs.

29
Q

You are working as a senior developer for the Nutex Corporation. Your colleague created a function to send data to your Nutex Corporation subsidiary. The trigger of that function is displayed in the following definition:

{
  "bindings": [
    {
      "name": "Timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *"
    }
  ],
  "disabled": false
}

You need to run this function every weekday at 7 AM.

How should you modify the function.json?

A

{
  "bindings": [
    {
      "name": "Timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 7 * * 1-5"
    }
  ],
  "disabled": false
}

The function uses NCRONTAB expressions (a cron-style format with six fields: second, minute, hour, day, month, and day of week) for scheduling. The value "0 0 7 * * 1-5" runs the function at 07:00 AM, not every day, but only Monday through Friday.
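To make the six-field format concrete, here is a minimal sketch of a schedule matcher. This is not the actual NCRONTAB parser Azure uses; it supports only `*`, plain numbers, and ranges, which is enough to check the schedules in this card:

```python
from datetime import datetime

def field_matches(field, value):
    """Match one NCRONTAB-style field against a value; supports '*', 'N', and 'N-M'."""
    if field == "*":
        return True
    if "-" in field:
        lo, hi = map(int, field.split("-"))
        return lo <= value <= hi
    return int(field) == value

def matches(schedule, dt):
    """Fields in order: second, minute, hour, day, month, day of week."""
    fields = schedule.split()
    # NCRONTAB day-of-week numbering: 0 = Sunday ... 6 = Saturday
    values = [dt.second, dt.minute, dt.hour, dt.day, dt.month,
              (dt.weekday() + 1) % 7]
    return all(field_matches(f, v) for f, v in zip(fields, values))

print(matches("0 0 7 * * 1-5", datetime(2024, 1, 1, 7, 0, 0)))  # Monday 07:00 -> True
print(matches("0 0 7 * * 1-5", datetime(2024, 1, 6, 7, 0, 0)))  # Saturday 07:00 -> False
```

Real NCRONTAB expressions also support steps (`*/5`) and lists (`1,3,5`), which this sketch deliberately omits.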

You would not modify function.json as follows:

{
  "bindings": [
    {
      "name": "Timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 0 7 1-5 *"
    }
  ],
  "disabled": false
}

The line "schedule": "0 0 0 7 1-5 *" means that the function runs at 12:00 AM on day 7 of the months January through May.

You would not modify function.json as follows:

{
  "bindings": [
    {
      "name": "Timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 7 1-5 * *"
    }
  ],
  "disabled": false
}

The line "schedule": "0 0 7 1-5 * *" means that the function runs at 07:00 AM on days 1 through 5 of every month.

You would not modify function.json as follows:

{
  "bindings": [
    {
      "name": "Timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 7 1-5 * * *"
    }
  ],
  "disabled": false
}

The line "schedule": "0 7 1-5 * * *" means that the function runs at 7 minutes past the hour for hours 1 through 5 (that is, at 01:07, 02:07, 03:07, 04:07, and 05:07 AM).

30
Q

You are the administrator of the Nutex Corporation. You have created different Azure functions. You have to decide which kind of input and output binding you have to choose for which type of function. The trigger causes the function to run. The input and output bindings to the function connect another resource to the function.

Scenario 4:

A webhook that uses Microsoft Graph to update an Excel sheet.

What are the relevant Trigger, Input binding and Output binding to use in this scenario?

List of bindings/triggers:

  • Queue
  • Timer
  • Event Grid
  • HTTP
  • None
  • Blob Storage
  • Cosmos DB
  • SendGrid
  • Microsoft Graph
A

Trigger: HTTP

Input binding: None

Output binding: Microsoft Graph

The trigger will be HTTP because a webhook is HTTP-based. There is no HTTP or webhook input binding available, so the input binding in this scenario is None. The output binding is Microsoft Graph because this function must write to an Excel sheet.

The function trigger is not using Timer because this function does not have to run on a schedule. The function trigger is not using Queue because this function is not based on another queue. The function trigger is not using Event Grid because it is not based on events.

The function does not need an input binding such as Blob Storage or Cosmos DB because the webhook payload arrives through the HTTP trigger itself; there is no additional resource for the function to read.

The output binding of this function is not Queue because this function does not have to write content into another queue. The output binding of this function is not Cosmos DB because this function does not have to create documents in a Cosmos DB. The output binding of this function is not SendGrid because this function does not have to send email.

31
Q

You have a Kubernetes cluster in AKS and you deployed an app called MyApp. You increase the number of nodes from one to three in the Kubernetes cluster by using the following command:

_____________________ --resource-group=myResourceGroup --name=myAKSCluster --node-count 3

The output of the command is as follows:

"agentPoolProfiles": [
  {
    "count": 3,
    "dnsPrefix": null,
    "fqdn": null,
    "name": "myAKSCluster",
    "osDiskSizeGb": null,
    "osType": "Linux",
    "ports": null,
    "storageProfile": "ManagedDisks",
    "vmSize": "Standard_D2_v2",
    "vnetSubnetId": null
  }
]

Fill in the blank.

A

az aks scale

You would type the az aks scale command. This command is used to scale the node pool in a Kubernetes cluster. The --name parameter specifies the name of the cluster, the --resource-group parameter specifies the name of the resource group, and the --node-count parameter specifies the number of nodes in the pool.

32
Q

You are the administrator of the Nutex Corporation. You have created different Azure functions. You have to decide which kind of input and output binding you have to choose for which type of function. The trigger causes the function to run. The input and output bindings to the function connect another resource to the function.

Scenario 1:

A new queue message arrives which runs a function to write to another queue.

What are the relevant Trigger, Input binding and Output binding to use in this scenario?

List of bindings/triggers:

  • Queue
  • Timer
  • Event Grid
  • HTTP
  • None
  • Blob Storage
  • Cosmos DB
  • SendGrid
  • Microsoft Graph
A

Trigger: Queue

Input binding: None

Output Binding: Queue

In this scenario, a new queue message arrives which runs a function to write to another queue. The new queue message must trigger the function. The function will use Queue as the output binding because messages have to be written to another queue as output. This scenario does not need an input binding.

The function trigger is not HTTP because, in this scenario, no HTTP request has been received. The function trigger is not Event Grid, because this function does not have to respond to an event sent to an event grid topic. The function trigger is not Timer, because this function does not have to run on a schedule.

The function will not use Blob Storage as the input binding because, in this scenario, the function does not have to read blob content. The function will not use Cosmos DB because it does not have to listen for inserts or updates across partitions. The Cosmos DB trigger uses the Azure Cosmos DB change feed to listen for such changes.

The function will not use Cosmos DB as the output binding because, in this scenario, the function does not have to write a new document to an Azure Cosmos DB database. The function will not use SendGrid as the output binding because the function does not have to send email through SendGrid. The function will not use Microsoft Graph as the output binding because, in this scenario, you do not need an Excel spreadsheet, OneDrive, or Outlook as output.

33
Q

You are the administrator of the Nutex Corporation. You have to develop functions for your serverless applications. You want to simplify your development process with Azure durable functions.

Which application patterns can benefit from durable functions? (Choose all that apply.)

  • Fan-out/fan-in
  • Aggregator
  • Monitoring
  • Function chaining
  • Cache-aside
  • Gatekeeper
  • Federated Identity
  • Sharding
  • Valet Key
A
  • Fan-out/fan-in
  • Aggregator
  • Monitoring
  • Function chaining

With the function chaining pattern, you can use the context parameter DurableOrchestrationContext and the context.df object to invoke other functions by name, pass parameters, and return function output. A sequence of functions executes in a specific order. In this pattern, the output of one function is applied to the input of another function. You can implement control flow by using normal imperative coding constructs. Code executes from the top down. The code can involve existing language control flow semantics, such as conditionals and loops. You can include error handling logic in try/catch/finally blocks.
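The shape of function chaining can be sketched in plain Python (this is an illustration of the pattern, not the Durable Functions SDK; the function names are hypothetical):

```python
def f1(x):
    return x + 1

def f2(x):
    return x * 2

def f3(x):
    return f"result:{x}"

def orchestrate(value):
    """Sketch of function chaining: the output of each step is the input of the next."""
    try:
        value = f1(value)
        value = f2(value)
        value = f3(value)
        return value
    except Exception as err:
        # Error handling lives in ordinary try/except blocks, as in the pattern.
        return f"failed:{err}"

print(orchestrate(3))  # chains 3 -> 4 -> 8 -> "result:8"
```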

With the monitoring pattern, you can use durable functions to create flexible recurrence intervals, manage task lifetimes, and create multiple monitor processes from a single orchestration. In a few lines of code, you can use durable functions to create multiple monitors that observe arbitrary endpoints. The monitors can end execution when a condition is met, or another function can use the durable orchestration client to terminate the monitors. You can change a monitor’s wait interval based on a specific condition (for example, exponential backoff).
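The monitor's changing wait interval can be sketched as exponential backoff (illustrative values only; a real durable orchestration would wait on context timers rather than compute delays locally):

```python
def backoff_intervals(base_seconds, factor, max_seconds, attempts):
    """Exponential backoff: each wait grows by `factor`, capped at `max_seconds`."""
    delay = base_seconds
    intervals = []
    for _ in range(attempts):
        intervals.append(delay)
        delay = min(delay * factor, max_seconds)
    return intervals

# e.g. a monitor polling an endpoint, backing off from 5 s up to a 60 s cap
print(backoff_intervals(5, 2, 60, 6))  # [5, 10, 20, 40, 60, 60]
```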

With the fan-out/fan-in pattern, multiple functions can execute in parallel. You can use the function to send multiple messages to a queue. This is referred to as fan out. In the fan in part, you can write code to track when the queue-triggered functions end and the function outputs are stored. With the durable functions extension, you can handle the fan-out/fan-in pattern with relatively simple code.
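The fan-out/fan-in shape can be sketched with asyncio (again plain Python rather than the Durable Functions extension): work items run in parallel, then the results are combined:

```python
import asyncio

async def process(item):
    """Stand-in for an activity function handling one work item."""
    await asyncio.sleep(0)  # simulate asynchronous work
    return item * item

async def orchestrate(items):
    # Fan out: start one task per item and run them in parallel.
    results = await asyncio.gather(*(process(i) for i in items))
    # Fan in: aggregate the outputs once every task has finished.
    return sum(results)

total = asyncio.run(orchestrate([1, 2, 3, 4]))
print(total)  # 1 + 4 + 9 + 16 = 30
```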

With the aggregator pattern, the data being aggregated could be received from multiple sources, sent in batches, or may be scattered over long periods of time. Sometimes the aggregator needs to take action on data when it is received, and clients need to query the aggregated data. Durable functions can use a single function to have multiple threads modifying the same data at the same time and ensure that the aggregator only runs on a single VM at a time.
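A rough single-process analogue of the aggregator: durable entities serialize operations on the shared state for you, and in this sketch a lock plays that role while events arrive from many threads:

```python
import threading

class Aggregator:
    """Sketch: events arrive from many threads, but updates are serialized,
    mirroring how a durable entity processes operations one at a time."""
    def __init__(self):
        self._lock = threading.Lock()
        self.count = 0
        self.total = 0

    def add(self, value):
        with self._lock:
            self.count += 1
            self.total += value

agg = Aggregator()
threads = [threading.Thread(target=agg.add, args=(n,)) for n in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(agg.count, agg.total)  # 10 events, total 0 + 1 + ... + 9 = 45
```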

The Federated Identity, Gatekeeper, Valet Key, Sharding, and Cache-aside patterns cannot benefit from durable functions.

34
Q

You are the administrator of Nutex. You want to run containers in Azure. You have to decide between the following Azure services:

Azure Container Instance

Azure Kubernetes Service.

What are the features of Azure Container Instance from the list of features below?

  • Fast startup
  • Custom sizes
  • Persistent storage
  • Per-second billing
  • Hypervisor-level security
  • Linux and Windows
  • Full container orchestration
  • Service discovery across multiple containers
  • Automatic scaling
  • Coordinated application upgrades
A
  • Fast startup
  • Custom sizes
  • Persistent storage
  • Per-second billing
  • Linux and Windows

Azure Container Instances (ACI) is the service that allows you to deploy a container in Azure without having to manage the underlying infrastructure. ACI allows you to launch containers quickly. With ACI, you incur costs only while the container runs, and billing is per second rather than per minute. You can isolate an application in a container much like a VM environment. You can specify custom sizes for a container instance by specifying exact values for CPU cores and memory. With ACI, you can mount Azure Files shares for persistent storage: the shared files are available inside the container and persist beyond its lifetime. You can schedule both Linux and Windows containers with the same API.

Microsoft recommends AKS instead of ACI when you need service discovery across multiple containers, coordinated application upgrades, and automatic scaling.

35
Q

You are the administrator of Nutex. You want to run containers in Azure. You have to decide between the following Azure services:

Azure Container Instance

Azure Kubernetes Service.

What are the features of Azure Kubernetes Service from the list of features below?

  • Fast startup
  • Custom sizes
  • Persistent storage
  • Per-second billing
  • Hypervisor-level security
  • Linux and Windows
  • Full container orchestration
  • Service discovery across multiple containers
  • Automatic scaling
  • Coordinated application upgrades
A
  • Full container orchestration
  • Coordinated application upgrades
  • Service discovery across multiple containers
  • Automatic scaling

The Azure Kubernetes Service (AKS) manages a Kubernetes environment in Azure. AKS provides full container orchestration: you deploy and manage containerized applications without needing container orchestration expertise. AKS is scalable to meet growing demands by design because it includes built-in application autoscaling.

Microsoft recommends AKS instead of ACI when you need service discovery across multiple containers, coordinated application upgrades, and automatic scaling.

36
Q

You need to build a development cluster named NutexAKSCluster for the resource group NutexResourceGroup.

What is the Azure CLI command to do this?

A

Acceptable answer(s) for field 1:

az aks create --name NutexAKSCluster --resource-group NutexResourceGroup

az aks create --resource-group NutexResourceGroup --name NutexAKSCluster

You would use the az aks create command to create an AKS cluster. You would use the --resource-group parameter to specify the resource group and the --name parameter to specify the name of the cluster.

37
Q

You are working as a web app enterprise consultant. Another administrator reports she cannot set up a backup for a Linux application named front01 in a resource group called Application01. (SEE ATTACHED IMAGE).

Your goal is to minimize Azure costs.

What should you recommend so that a backup for the front01 application can be established?

  • Scale up the size to S3.
  • Add the tag DailyBackup.
  • Deploy Windows Server 2012 R2 Datacenter.
  • Scale up the size to P1v2.
  • Deploy Windows Server 2019 Datacenter.
  • Scale up the size to P2v2.
A
  • Scale up the size to P1v2

You would scale up the size to P1v2. The App Service plan needs to be in the Standard or Premium tier to use the Backup and Restore feature. In this scenario, the P1v2 instance is cheaper than an S3 or P2v2 instance. The price per hour for an S3 instance or a P2v2 instance is currently $0.40/hour, while the price per hour of a P1v2 instance is $0.20/hour. (SEE IMAGE ATTACHED)
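Using the rates quoted above ($0.20/hour for P1v2, $0.40/hour for S3 or P2v2; actual prices vary by region and change over time), the monthly difference is easy to estimate:

```python
HOURS_PER_MONTH = 730  # Azure's usual estimate for a month of always-on compute

# Hourly rates as quoted in the card; real prices vary by region and over time.
rates = {"P1v2": 0.20, "S3": 0.40, "P2v2": 0.40}

monthly = {tier: round(rate * HOURS_PER_MONTH, 2) for tier, rate in rates.items()}
print(monthly)                            # P1v2 about $146/month vs $292/month
print(monthly["S3"] - monthly["P1v2"])    # choosing P1v2 saves about $146/month
```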

You would not deploy Windows Server 2019 Datacenter or Windows Server 2012 R2 Datacenter because the application is a Linux application, not a Windows application. You would need a Linux operating system to deploy the Linux application.

You would not add the tag DailyBackup. A tag is just for information to organize Azure resources. You can use a tag to identify resources in a subscription or resource group. A tag is not needed for the Backup and Restore feature.

38
Q

You have a Kubernetes cluster in AKS and you deployed an app called MyApp. You run the following command to list all pods in a namespace:

kubectl get pods

The output of the command is as follows: (SEE ATTACHED IMAGE)

You need to change the number of pods in the myapp-front deployment to 5. You type the following command:

_______________________ --replicas=5 deployment/myapp-front

Fill in the blank.

A

Acceptable answer(s) for field 1:

  • kubectl scale

You would type the kubectl scale command to scale the resources. You can use the --replicas parameter to set the number of pods.

39
Q

You are going to deploy a web application to Azure using App Service on Linux. You create an App Service plan and then publish a custom Docker image to the Azure web app. You need to access the console logs generated from the container in real time.

You need to complete the following Azure CLI script to do this:

az webapp log __________ --name whizlabwebapp --resource-group whizlab-rg ______ filesystem

az ___________ log __________ --name whizlabwebapp --resource-group whizlab-rg

Which of the following would go into the first blank?

  • config
  • download
  • show
  • tail
A

To configure logging, we need to use the az webapp log config command.

  • config
40
Q

You have to develop an Azure Function that would perform the following activities:

  • Read messages from an Azure Storage Queue
  • Process the messages and add entities to Azure Table Storage

You have to define the correct bindings in the function.json file

{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": _________,
      "name": "neworder",
      "queueName": "whizlab-queue",
      "connection": "STORAGE_CONNECTION_3000"
    },
    {
      "type": "table",
      "direction": _________,
      "name": _________,
      "tableName": "Orders",
      "connection": "STORAGE_CONNECTION_3000"
    }
  ]
}

Which of the following would go into the first blank?

  • “in”
  • “out”
  • “trigger”
  • “$return”
  • “$table”
A

“in”

Suppose you want to write a new row to Azure Table storage whenever a new message appears in Azure Queue storage. This scenario can be implemented using an Azure Queue storage trigger and an Azure Table storage output binding.

Here’s a function.json file for this scenario:

{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "neworder",
      "queueName": "whizlab-queue",
      "connection": "STORAGE_CONNECTION_3000"
    },
    {
      "type": "table",
      "direction": "out",
      "name": "$return",
      "tableName": "Orders",
      "connection": "STORAGE_CONNECTION_3000"
    }
  ]
}
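A quick way to sanity-check bindings like these is a small script (check_bindings is a hypothetical helper, not part of the Functions runtime): triggers always use direction "in", output bindings use "out", and the name "$return" routes the function's return value to the output binding:

```python
import json

function_json = """
{
  "bindings": [
    {"type": "queueTrigger", "direction": "in",  "name": "neworder",
     "queueName": "whizlab-queue", "connection": "STORAGE_CONNECTION_3000"},
    {"type": "table",        "direction": "out", "name": "$return",
     "tableName": "Orders",  "connection": "STORAGE_CONNECTION_3000"}
  ]
}
"""

def check_bindings(doc):
    """Hypothetical sanity check: every trigger must be 'in'; others are 'in' or 'out'."""
    bindings = json.loads(doc)["bindings"]
    for b in bindings:
        if b["type"].endswith("Trigger"):
            assert b["direction"] == "in", "triggers must have direction 'in'"
        else:
            assert b["direction"] in ("in", "out")
    return [(b["type"], b["direction"]) for b in bindings]

print(check_bindings(function_json))
# [('queueTrigger', 'in'), ('table', 'out')]
```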
41
Q

You have to deploy a microservice-based application to Azure. The application needs to be deployed to an Azure Kubernetes cluster. The solution has the following requirements.

  • Reverse proxy capabilities
  • Ability to configure traffic routing
  • Termination of TLS with a custom certificate

Which of the following would you use to implement a single public IP endpoint to route traffic to multiple microservices?

  • Helm
  • Brigade
  • Kubectl
  • Ingress Controller
  • Virtual Kubelet
A

Ingress Controller

You can use an ingress controller to route traffic at the application layer.

The Microsoft documentation mentions the following:

An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. Using an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.

Since this is clearly given in the documentation, all other options are incorrect.

For more information on ingress controllers, please visit the following URL.

https://docs.microsoft.com/en-us/azure/aks/ingress-basic

42
Q

Your company has an Azure Kubernetes cluster in place named “whizlabcluster”. The company wants to create a new Azure AD Group and provide RBAC access for the group to the cluster.

You have to complete the below Azure CLI script to fulfil this requirement.

whizlabcluster_id=$(________________ \
  --resource-group whizlabs-rg \
  --name whizlabcluster \
  --query objectId -o tsv)

whizlab_grp=$(________________ --display-name whizlabdevelopers --mail-nickname whizlabdev --query objectId -o tsv)

____________ \
  --assignee $whizlab_grp \
  --role "Azure Kubernetes Service Cluster User Role" \
  --scope $whizlabcluster_id

Which of the following would go into the second blank?

  • az role assignment create
  • az role assignment update
  • az ad group create
  • az aks show
A

az ad group create

The second blank creates the Azure AD group, so you would use the az ad group create command with the --display-name and --mail-nickname parameters. (The first blank retrieves the cluster's object ID with az aks show, and the third blank grants the group access with az role assignment create.)
43
Q

A company is developing a shopping application for Windows devices. A notification needs to be sent to a user's device whenever a new product is entered into the application. You have to implement push notifications.

You have to complete the missing parts in the partial code segment given in the attached image.

Which of the following would go into Slot2?

  • NotificationHubClient
  • NotificationHubClientSettings
  • NotificationHubJob
  • NotificationDetails
A

NotificationHubClient

An example of this is given in the Microsoft documentation

https://docs.microsoft.com/en-us/azure/notification-hubs/notification-hubs-enterprise-push-notification-architecture