TutorialsDojo - Review Mode Set 3 Flashcards

1
Q

Your company has an existing subscription in Azure.

You provisioned an Azure Storage account named TutorialsDojoAccount and then created a file share named TDShare.

You need to create a script that will allow you to connect to your file share.

What is the UNC path of the file share?

A. \\TutorialsDojoAccount.file.core.windows.net\TDShare
B. \\file.core.windows.net.TutorialsDojoAccount\TDShare
C. \\TutorialsDojoAccount.TDShare\file.core.windows.net
D. \\TDShare.file.core.windows.net\TutorialsDojoAccount

A

A. \\TutorialsDojoAccount.file.core.windows.net\TDShare

Explanation:
Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token. You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.

File shares can be used for many common scenarios:

  1. Many on-premises applications use file shares. This feature makes it easier to migrate those applications that share data to Azure. If you mount the file share to the same drive letter that the on-premises application uses, the part of your application that accesses the file share should work with minimal, if any, changes.
  2. Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used by multiple developers in a group can be stored on a file share, ensuring that everybody can find them and that they use the same version.
  3. Resource logs, metrics, and crash dumps are just three examples of data that can be written to a file share and processed or analyzed later.

About Azure file share backup - Azure Backup | Microsoft Docs

Azure Files is Microsoft’s easy-to-use cloud file system. Azure file shares can be seamlessly used in Windows and Windows Server.

In order to use an Azure file share outside of the Azure region it is hosted in, such as on-premises or in a different Azure region, the OS must support SMB 3.0. You can use Azure file shares on a Windows installation that is running either in an Azure VM or on-premises.

The Azure File Share UNC path format is:

\\<storageAccountName>.file.core.windows.net\<fileShareName>

For example:

\\StoragePhilippines.file.core.windows.net\ElNidoPalawanFileShare

Hence, the correct answer is:

\\TutorialsDojoAccount.file.core.windows.net\TDShare
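
For reference, a minimal mount-script sketch in PowerShell is shown below. It assumes the storage account key is already stored in $storageKey, that outbound port 445 is open, and that the Z: drive letter is free; these are illustrative assumptions, not values from the scenario.

$connectTest = Test-NetConnection -ComputerName "TutorialsDojoAccount.file.core.windows.net" -Port 445
if ($connectTest.TcpTestSucceeded) {
    # Persist the storage account credential, then map the share to drive Z:
    cmd.exe /C "cmdkey /add:`"TutorialsDojoAccount.file.core.windows.net`" /user:`"localhost\TutorialsDojoAccount`" /pass:`"$storageKey`""
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\TutorialsDojoAccount.file.core.windows.net\TDShare" -Persist
} else {
    Write-Error "Port 445 to the storage account is blocked, so the share cannot be mounted."
}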

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows

2
Q

Your company has an Azure Subscription that contains an Azure Container named TDContainer.

You are tasked with deploying a new Azure container instance that will run a custom-developed .NET application requiring persistent storage for operation.

You need to create a storage service that will meet the requirements for TDContainer.

What should you use?

A. Azure Blob storage
B. Azure Table storage
C. Azure Queue storage
D. Azure Files

A

D. Azure Files

Explanation:
Containers are becoming the preferred way to package, deploy, and manage cloud applications. Azure Container Instances offers the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.

Azure Container Instances is a solution for any scenario that can operate in isolated containers, without orchestration. Run event-driven applications, quickly deploy from your container development pipelines, and run data processing and build jobs.

Containers offer significant startup benefits over virtual machines (VMs). Azure Container Instances can start containers in Azure in seconds, without the need to provision and manage VMs.

Bring Linux or Windows container images from Docker Hub, a private Azure container registry, or another cloud-based docker registry. Azure Container Instances caches several common base OS images, helping speed deployment of your custom application images.

By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from an external store. Azure Container Instances can mount an Azure file share created with Azure Files.

Azure Files offers fully managed file shares hosted in Azure Storage that are accessible via the industry standard Server Message Block (SMB) protocol. Using an Azure file share with Azure Container Instances provides file-sharing features similar to using an Azure file share with Azure virtual machines.

Azure Disks or Files are commonly used to provide persistent volumes for Azure Container Instances and Azure VMs.

Hence, the correct answer is: Azure Files.
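
As a hedged sketch (the resource group and share names below are placeholders, not values from the scenario), the file share that the container instance would mount can be created with Azure PowerShell:

$account = Get-AzStorageAccount -ResourceGroupName "td-rg" -Name "tutorialsdojoaccount"
# Create the share that will back the container's persistent volume
New-AzStorageShare -Name "tdcontainer-data" -Context $account.Context

The share is then referenced as an Azure Files volume when the container group is created.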

Azure Queue Storage is incorrect because this service is simply used for storing large numbers of messages to enable communication between components of a distributed application.

Azure Table Storage and Azure Blob Storage are both incorrect because Azure Container Instances does not support mounting them directly as persistent volumes.

3
Q

Your company has an Azure subscription that contains an Azure Storage account named tutorialsdojoaccount.

There is a requirement to copy a virtual machine image to a container named tdimage from your on-premises datacenter. You need to provision an Azure Container instance to host the container image.

Which AzCopy command should you run?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

AzCopy _______ “https://tutorialsdojoaccount.____.core.windows.net/tdimage”

First drop-down:

A. Copy
B. Make
C. Sync

Second drop-down:

A. File
B. Table
C. Blob

A

B. Make
C. Blob

Explanation:
Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.

Blob storage is designed for:

– Serving images or documents directly to a browser.

– Storing files for distributed access.

– Streaming video and audio.

– Writing to log files.

– Storing data for backup and restore disaster recovery, and archiving.

– Storing data for analysis by an on-premises or Azure-hosted service.

A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. VHD files can be used to create custom images that can be stored in an Azure Blob container, which are used to provision virtual machines.

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. The azcopy make command is commonly used to create a container or a file share.

The correct syntax in creating a blob container is:

azcopy make “https://[account-name].blob.core.windows.net/[top-level-resource-name]”

For example:

azcopy make “https://myaccount.blob.core.windows.net/mycontainer”

Therefore, the correct answers are:

AzCopy = Make

https://tutorialsdojoaccount.____.core.windows.net/tdimage = Blob
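
For context, a hedged follow-up sketch run from PowerShell: once the tdimage container exists, the on-premises VHD can be uploaded with azcopy copy. The local path and SAS token below are illustrative placeholders.

# Create the destination container, then upload the image as a page blob
azcopy make "https://tutorialsdojoaccount.blob.core.windows.net/tdimage"
azcopy copy "C:\images\td-vm.vhd" "https://tutorialsdojoaccount.blob.core.windows.net/tdimage/td-vm.vhd?<SAS-token>" --blob-type PageBlob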

4
Q

Your company has a virtual network named TDVnet1 and a policy-based virtual network gateway named TD1 in your Azure subscription.

You have users that need to access TDVnet1 from a remote location.

Which two actions should you do so your users can establish a point-to-site connection to TDVnet1?

A. Download and install the VPN client configuration file
B. Deploy a gateway subnet
C. Reset TD1	
D. Deploy a route-based VPN gateway
E. Delete TD1

A

D. Deploy a route-based VPN gateway
E. Delete TD1

Explanation:
Point-to-Site (P2S) VPN connection allows you to create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few clients that need to connect to a VNet.

When you configure a point-to-site VPN connection, you must use a route-based VPN type for your gateway. Policy-based VPN type for point-to-site VPN connection is not supported by Azure.

If you create a policy-based VPN type as your gateway, you need to delete it and deploy a route-based VPN gateway instead.

Hence, the correct answers are:

– Delete TD1

– Deploy a route-based VPN gateway
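
A minimal PowerShell sketch of these two actions is shown below; the resource group, location, public IP name, and gateway SKU are placeholder assumptions, and it presumes TDVnet1 already contains a GatewaySubnet.

# Remove the policy-based gateway
Remove-AzVirtualNetworkGateway -Name "TD1" -ResourceGroupName "td-rg" -Force

# Deploy a route-based gateway on the existing GatewaySubnet of TDVnet1
$vnet   = Get-AzVirtualNetwork -Name "TDVnet1" -ResourceGroupName "td-rg"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$pip    = New-AzPublicIpAddress -Name "TD1-pip" -ResourceGroupName "td-rg" -Location "eastus" -AllocationMethod Static -Sku Standard
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

New-AzVirtualNetworkGateway -Name "TD1" -ResourceGroupName "td-rg" -Location "eastus" `
    -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1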

The option that says: Deploy a gateway subnet is incorrect. A gateway subnet is a prerequisite for any VPN gateway, and since the policy-based gateway TD1 already exists, a gateway subnet has already been deployed, so you don’t have to deploy one again.

The option that says: Reset TD1 is incorrect. Resetting TD1 will not work since it is a policy-based VPN type. Take note that you need a route-based VPN type for point-to-site VPN connections.

The option that says: Download and install the VPN client configuration file is incorrect. Even if you have downloaded and installed the VPN client configuration file, the users still won’t be able to connect to TDVnet1 because TD1 is a policy-based VPN type. You have to delete TD1 first and deploy a new route-based VPN gateway.

5
Q

Your company has an Azure subscription that contains a virtual machine named TD1 and a virtual network named TDVnet1.

You have an on-premises Server Message Block (SMB) file server named FileServer1.

There is a requirement to connect TD1 to FileServer1.

What should you create?

A. Create an Azure virtual network peering
B. Create a Microsoft Entra Connect Sync
C. Create an Azure Virtual Network Gateway
D. Create an Azure Application Gateway

A

C. Create an Azure Virtual Network Gateway

Explanation:
Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own data center but brings with it additional benefits of Azure’s infrastructure, such as scale, availability, and isolation.

An Azure Virtual Network Gateway or VPN Gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet.

You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network.

Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.

A site-to-site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel.

Hence, the correct answer is: Create an Azure Virtual Network Gateway.
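
As a hedged sketch (the resource group, gateway name, public IP address, on-premises address prefix, and shared key are all placeholders), once a VPN gateway exists on TDVnet1, the site-to-site connection to the network hosting FileServer1 could be created like this:

# Represent the on-premises network that hosts FileServer1
$localGw = New-AzLocalNetworkGateway -Name "OnPremGateway" -ResourceGroupName "td-rg" -Location "eastus" `
    -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.0.0/24"

$vnetGw = Get-AzVirtualNetworkGateway -Name "TDVnet1-gw" -ResourceGroupName "td-rg"

# Create the site-to-site IPsec connection
New-AzVirtualNetworkGatewayConnection -Name "TDVnet1-to-onprem" -ResourceGroupName "td-rg" -Location "eastus" `
    -VirtualNetworkGateway1 $vnetGw -LocalNetworkGateway2 $localGw -ConnectionType IPsec -SharedKey "<pre-shared-key>"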

6
Q

Your company has an Azure subscription named TDSubscription1 that contains the following resources:

– TDVnet1: 10.1.0.0/16 (subnets 10.1.0.0/24 and 10.1.1.0/24), peered to TDVnet2
– TDVnet2: 10.10.0.0/16 (subnet 10.10.0.0/24), peered to TDVnet1

You recently added a new address space 10.30.0.0/16 to TDVnet1.

What should you do next?

A. Delete TDVnet2.
B. Re-create the peering between TDVnet1 and TDVnet2.
C. Delete the peering between TDVnet1 and TDVnet2.
D. Sync the peering between TDVnet1 and TDVnet2.

A

D. Sync the peering between TDVnet1 and TDVnet2.

Explanation:
You can resize the address space of Azure virtual networks that are peered without incurring any downtime on the currently peered address space. This feature is useful when you need to resize the virtual network’s address space after scaling your workloads. After resizing the address space, all that is required is for peers to be synced with the new address space changes. Resizing works for both IPv4 and IPv6 address spaces.

Addresses can be resized in the following ways:

– Modifying the address range prefix of an existing address range (For example, changing 10.1.0.0/16 to 10.1.0.0/18).

– Adding address ranges to a virtual network.

– Deleting address ranges from a virtual network.

– Resizing of address space is supported cross-tenant.

Hence, the correct answer is: Sync the peering between TDVnet1 and TDVnet2.
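
A minimal PowerShell sketch of the change is shown below; the resource group and peering names are placeholders, and it assumes the Sync-AzVirtualNetworkPeering cmdlet available in recent Az.Network releases.

# Add the new address space to TDVnet1
$vnet1 = Get-AzVirtualNetwork -Name "TDVnet1" -ResourceGroupName "td-rg"
$vnet1.AddressSpace.AddressPrefixes.Add("10.30.0.0/16")
Set-AzVirtualNetwork -VirtualNetwork $vnet1

# Sync the remote peering so TDVnet2 learns the new range
Sync-AzVirtualNetworkPeering -Name "TDVnet2-to-TDVnet1" -VirtualNetworkName "TDVnet2" -ResourceGroupName "td-rg"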

The statement that says: Delete TDVnet2 is incorrect because you can add an address space to your virtual network without deleting it.

The following statements are incorrect because you do not need to delete and re-create the peering when you add an address space to an existing virtual network peering. All you have to do is sync the peering after you have added an address space.

– Delete the peering between TDVnet1 and TDVnet2

– Re-create the peering between TDVnet1 and TDVnet2

7
Q

Your Azure subscription contains a fleet of virtual machines.

You recently deployed an Azure bastion named TD1 with an SKU of Basic and a subnet size of /26.

There is a requirement that more than 90 users will concurrently use TD1. You need to be able to accommodate the number of users that will be accessing TD1. The solution must minimize administrative effort.
What should you do first?

A. Upgrade the SKU of TD1
B. Increase the instance count of TD1.
C. Deploy a new bastion server with an SKU of Standard
D. Increase the server size of TD1.

A

A. Upgrade the SKU of TD1

Explanation:
Two instances are created when you configure Azure Bastion using the Basic SKU. Using the Standard SKU, you can specify the number of instances. This is called host scaling.

Each instance can support 20 concurrent RDP connections and 40 concurrent SSH connections for medium workloads. The number of connections per instance depends on your actions when connected to the client VM. For example, if you are doing something data-intensive, it creates a more significant load for the instance to process. Once the concurrent sessions are exceeded, an additional scale unit (instance) is required.

Remember that you can only use host scaling if your bastion server has an SKU of Standard.

To accommodate additional concurrent client connections, you first need to upgrade the SKU of TD1 from Basic to Standard (after upgrading to Standard, you cannot revert back to the Basic SKU). After that, you can increase the instance count of TD1 to however many instances are required to accommodate the 90 users.

Hence, the correct answer is: Upgrade the SKU of TD1.
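
A heavily hedged sketch of the first step is shown below; the resource group name is a placeholder, and the -Sku and -ScaleUnit parameters of Set-AzBastion are assumptions that depend on your Az.Network version.

$bastion = Get-AzBastion -ResourceGroupName "td-rg" -Name "TD1"
# First: upgrade the SKU from Basic to Standard (a one-way change)
Set-AzBastion -InputObject $bastion -Sku "Standard"
# Then host scaling becomes available, e.g. raise the instance count
Set-AzBastion -InputObject $bastion -ScaleUnit 5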

The option that says: Deploy a new bastion server with an SKU of Standard is incorrect because there is no need to deploy a new bastion server with an SKU of Standard. You can upgrade the SKU of TD1 to Standard. One of the requirements is that your solution must minimize administrative effort.

The option that says: Increase the instance count of TD1 is incorrect because you will only be able to increase the instance count if TD1 is already using an SKU of Standard. Take note that the question asks what you will do first.

The option that says: Increase the server size of TD1 is incorrect because there is no option to increase the server size of a bastion server. If you need more computing power, you can increase the instance count of the bastion server. Remember that you need to use an SKU of Standard before being able to use host scaling.

8
Q

You have an Azure subscription that contains an Azure DNS zone named tutorialsdojo.com.

There is a requirement to delegate a subdomain named portal.tutorialsdojo.com to another Azure DNS zone.

What solution would satisfy the requirement?

A. Navigate to tutorialsdojo.com and add a PTR record named portal.
B. Navigate to tutorialsdojo.com and add an NS record named portal.
C. Navigate to tutorialsdojo.com and add a CNAME record named portal.
D. Navigate to tutorialsdojo.com and add a TXT record named portal.

A

B. Navigate to tutorialsdojo.com and add an NS record named portal.

Explanation:
Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.

You can use the Azure portal to delegate a DNS subdomain. For example, if you own the tutorialsdojo.com domain, you can delegate a subdomain called portal to another, separate zone that you can administer separately from the tutorialsdojo.com zone.

To delegate an Azure DNS subdomain, you must first delegate your public domain to Azure DNS. Once your domain is delegated to your Azure DNS zone, you can delegate your subdomain.

You can delegate a subdomain by doing the following:

  1. Create a new Azure DNS zone named portal.tutorialsdojo.com. Copy down the four nameservers as you will need them for step 2.
  2. Navigate to the tutorialsdojo.com DNS zone and add an NS record named portal. Under records, enter the four nameservers from portal.tutorialsdojo.com and click ok.
  3. To verify your work, open a PowerShell window and type nslookup portal.tutorialsdojo.com

Hence, this statement is correct: Navigate to tutorialsdojo.com and add an NS record named portal.
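
These steps can also be scripted. The sketch below assumes a resource group named td-rg (a placeholder) and that both zones live in the same subscription.

# Step 1: create the child zone and capture its name servers
$child = New-AzDnsZone -Name "portal.tutorialsdojo.com" -ResourceGroupName "td-rg"

# Step 2: add an NS record set named "portal" in the parent zone pointing at those name servers
$records = $child.NameServers | ForEach-Object { New-AzDnsRecordConfig -Nsdname $_ }
New-AzDnsRecordSet -Name "portal" -ZoneName "tutorialsdojo.com" -ResourceGroupName "td-rg" `
    -RecordType NS -Ttl 3600 -DnsRecords $records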

The following statements are incorrect because PTR, CNAME, and TXT records are not used to delegate an Azure DNS subdomain.

– Navigate to tutorialsdojo.com and add a PTR record named portal.

– Navigate to tutorialsdojo.com and add a CNAME record named portal.

– Navigate to tutorialsdojo.com and add a TXT record named portal.

9
Q

Your company has an Azure subscription named ManilaSubscription that contains multiple virtual machines.

The subscription has a user named ManilaUser01 which has the following roles:

Backup Reader
Storage Blob Data Contributor
DevTest Labs User

You need to ensure that ManilaUser01 can assign a Reader role to all the users in the subscription.

What role should you assign?

A. Assign the Security Reader role.
B. Assign the User Access Administrator role.
C. Assign the Security Admin role.
D. Assign the Virtual Machine Contributor role.

A

B. Assign the User Access Administrator role.

Explanation:
The four fundamental Azure roles are Owner, Contributor, Reader, and User Access Administrator. To assign a Reader role to all the users in the Azure subscription, you must grant the user a User Access Administrator role. This role allows you to manage user access to the Azure resources.

Hence, the correct answer is: Assign the User Access Administrator role.
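
A minimal sketch of the role assignment is shown below; the sign-in name and subscription ID are placeholders.

New-AzRoleAssignment -SignInName "manilauser01@tutorialsdojo.com" `
    -RoleDefinitionName "User Access Administrator" `
    -Scope "/subscriptions/<subscription-id>"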

The option that says: Assign the Security Reader role is incorrect because this role only allows the user to view permissions in the Security Center.

The option that says: Assign the Virtual Machine Contributor role is incorrect because this role just lets you manage virtual machines. Take note that this role doesn’t allow you to access virtual machines directly nor assign a Reader role to all the users in the subscription.

The option that says: Assign the Security Admin role is incorrect. This role has the same permissions as the Security Reader role. The only difference is that it can update the security policy and dismiss alerts and recommendations.

10
Q

You plan to automate the deployment of Windows Servers using a virtual machine scale set.

You need to make sure that the web components are installed in the virtual machines.

Which two actions should you perform?

A. Create a policy.
B. Create a new scale set.
C. Create a configuration script.
D. Configure the extensionProfile section of the ARM template.
E. Create an automation account.

A

C. Create a configuration script.
D. Configure the extensionProfile section of the ARM template

Explanation:
The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post-deployment configuration, software installation, or any other configuration or management tasks.

Hence, the correct answers are:

– Create a configuration script.

– Configure the extensionProfile section of the ARM template.
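
As a hedged illustration (the scale set name, script URL, and script file name are placeholders), the same outcome can also be reached with Azure PowerShell instead of editing the ARM template directly:

$vmss = Get-AzVmss -ResourceGroupName "td-rg" -VMScaleSetName "td-vmss"

# Settings for the Custom Script Extension that installs the web components
$settings = @{
    "fileUris"         = @("https://tutorialsdojoaccount.blob.core.windows.net/scripts/install-web.ps1")
    "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File install-web.ps1"
}

Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "InstallWebComponents" `
    -Publisher "Microsoft.Compute" -Type "CustomScriptExtension" -TypeHandlerVersion "1.10" -Setting $settings
Update-AzVmss -ResourceGroupName "td-rg" -VMScaleSetName "td-vmss" -VirtualMachineScaleSet $vmss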

The option that says: Create an automation account is incorrect because an automation account wouldn’t help you automatically install web components. You still need to create a configuration script and extensionProfile in the ARM template.

The option that says: Create a policy is incorrect because this option only evaluates resources in Azure. Take note that you don’t need to create a policy to install web components.

The option that says: Create a new scale set is incorrect because this wouldn’t install the required web components. Instead of creating a new scale set, you should use a custom script extension to install the web components in the VMs.

11
Q

Your company has an Azure Subscription that contains an Azure Kubernetes Service (AKS) cluster and a Microsoft Entra tenant named tutorialsdojo.com.

You received a report that the system administrator is unable to grant access to Microsoft Entra users who need to use the cluster.

You need to grant the users in tutorialsdojo.com access to the cluster.

What should you implement?

A. Configure external collaboration settings.
B. Create an OAuth 2.0 authorization endpoint.
C. Add a namespace.
D. Create a new AKS cluster.

A

B. Create an OAuth 2.0 authorization endpoint.

Explanation:
The OAuth 2.0 authorization code grant can be used in apps that are installed on a device to gain access to protected resources. In this flow, kubectl uses the Microsoft Entra ID client application to sign in users with the OAuth 2.0 device authorization grant flow. Microsoft Entra ID provides an access_token, an id_token, and a refresh_token; the user then makes a request through kubectl using the access_token from kubeconfig. After validation, the API server performs an authorization decision based on the Kubernetes Role/RoleBinding. Once authorized, the API server returns a response to kubectl.

Hence, the correct answer is: Create an OAuth 2.0 authorization endpoint.

The option that says: Configure external collaboration settings is incorrect because external collaboration settings only let you turn guest invitations on or off for different types of users in your organization. This option wouldn’t help you grant the users in tutorialsdojo.com access to the cluster.

The option that says: Create a new AKS cluster is incorrect because a cluster is just a set of nodes that run containerized applications. Creating a new cluster is not necessary. You need to create an authorization endpoint to grant the users access to the domain name.

The option that says: Add a namespace is incorrect because a namespace only divides cluster resources between multiple users. Remember that users can only interact with resources within their assigned namespaces. To grant the users in tutorialsdojo.com access to the cluster, you should create an OAuth authorization endpoint.

12
Q

Your company has a virtual network that contains a MySQL database hosted on a virtual machine.

You created a web app named tutorialsdojo-webapp using the Azure App service.

You need to make sure that tutorialsdojo-webapp can fetch the data from the MySQL database.

What should you implement?

A. Create an internal load balancer.
B. Enable VNet Integration and connect the web app to the virtual network.
C. Peer the virtual network to another virtual network.
D. Create an Azure Application Gateway.

A

B. Enable VNet Integration and connect the web app to the virtual network.

Explanation:
With Azure Virtual Network (VNets), you can place many of your Azure resources in a non-internet-routable network. The VNet Integration feature enables your apps to access resources in or through a VNet. VNet Integration doesn’t enable your apps to be accessed privately.

Azure App Service has two variations on the VNet Integration feature:

– The multitenant systems support the full range of pricing plans except for Isolated.

– The App Service Environment, which deploys into your VNet and supports Isolated pricing plan apps.

Hence, the correct answer is: Enable VNet Integration and connect the web app to the virtual network.

The option that says: Create an internal load balancer is incorrect because this option only distributes the traffic. An internal load balancer is mainly used to load balance traffic inside a virtual network.

The option that says: Peer the virtual network to another virtual network is incorrect because virtual network peering wouldn’t help the web app access the virtual machine.

The option that says: Create an Azure Application Gateway is incorrect because the distribution of web traffic is not needed in the scenario. An Azure Application Gateway is just a web traffic load balancer that enables you to manage traffic to your web applications. Take note that the only requirement is to ensure that tutorialsdojo-webapp can access the data from the MySQL database hosted on a virtual machine.

13
Q

Your company has two Azure virtual networks named TDVNet1 and TDVNet2 in Central US region. A virtual machine named TD-VM1 is running in TDVNet1 while the other virtual network has a virtual machine named TD-VM2.

A web application is hosted on TD-VM1 and the data is retrieved and processed by TD-VM2.

Several users reported that the web application has a sluggish performance.

You are instructed to track the average round-trip time (RTT) of the packets from TD-VM1 to TD-VM2.

Which of the following options can satisfy the given requirement?

A. Connection Monitor
B. Connection Troubleshoot
C. IP flow verify
D. NSG flow logs

A

A. Connection Monitor

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.

In this scenario, you can use Connection Monitor to track the average round-trip time (RTT) of the packets from TD-VM1 to TD-VM2. In Azure Network Watcher, Connection Monitor provides unified end-to-end connection monitoring. The Connection Monitor feature also supports hybrid and Azure cloud deployments.

Benefits of using the Connection Monitor:

– Unified, intuitive experience for Azure and hybrid monitoring needs

– Cross-region, cross-workspace connectivity monitoring

– Higher probing frequencies and better visibility into network performance

– Faster alerting for your hybrid deployments

– Support for connectivity checks that are based on HTTP, TCP, and ICMP

– Metrics and Log Analytics support for both Azure and non-Azure test setups

Hence, the correct answer is Connection Monitor.
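
A hedged sketch using the classic connection monitor cmdlet is shown below; the subscription ID and resource group in the resource IDs are placeholders.

$nw = Get-AzNetworkWatcher -Location "centralus"

New-AzNetworkWatcherConnectionMonitor -NetworkWatcher $nw -Name "td-vm1-to-td-vm2" `
    -SourceResourceId "/subscriptions/<sub-id>/resourceGroups/td-rg/providers/Microsoft.Compute/virtualMachines/TD-VM1" `
    -DestinationResourceId "/subscriptions/<sub-id>/resourceGroups/td-rg/providers/Microsoft.Compute/virtualMachines/TD-VM2" `
    -DestinationPort 443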

IP flow verify is incorrect because this feature only looks at the rules for all Network Security Groups (NSGs) applied to the network interface. It is stated in the scenario that you must track the packets from TD-VM1 to TD-VM2. IP flow verify is not capable of providing the average round-trip time of the packets from the source to the destination.

Connection Troubleshoot is incorrect because it simply checks connectivity between source and destination. Take note that you need to track the average round-trip time of the packets from VM1 to VM2. Therefore, you need to use Connection Monitor to analyze the end-to-end connection and not the Connection Troubleshoot operation.

NSG flow logs is incorrect because it only allows you to log information about IP traffic flowing (ingress and egress) through an NSG. Take note that you can’t use NSG flow logs to track the average RTT of the packets from TD-VM1 to TD-VM2. You need to use Connection Monitor to provide unified end-to-end connection monitoring.

14
Q

You are managing an Azure subscription that contains a resource group named TD-RG1 which has a virtual machine named TD-VM1.

TD-VM1 has services that will deploy new resources on TD-RG1.

You need to make sure that the services running on TD-VM1 should be able to manage the resources in TD-RG1 using its identity.

Which of the following actions should you do first?

A. Configure the managed identity of TD-VM1.
B. Configure the access control of TD-RG1.
C. Configure the security settings of TD-RG1.
D. Configure the access control of TD-VM1.

A

A. Configure the managed identity of TD-VM1.

Explanation:
Microsoft Entra ID is a cloud-based identity and access management service that enables your employees to access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.

Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.

There are two types of managed identities:

– System-assigned: some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity, an identity is created in Microsoft Entra ID that is tied to the lifecycle of that service instance. So when the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Microsoft Entra ID.

– User-assigned: you may also create a managed identity as a standalone Azure resource. You can create a user-assigned managed identity and assign it to one or more instances of an Azure service. In the case of user-assigned managed identities, the identity is managed separately from the resources that use it.

In this scenario, you can use the system-assigned managed identity. Take note that this identity is restricted to only one resource. You can grant permissions to the managed identity by using Azure RBAC. The managed identity is authenticated with Microsoft Entra ID, so you don’t have to store any credentials.

Hence, the correct answer is: Configure the managed identity of TD-VM1.
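
A minimal sketch of that first step, plus a follow-up role assignment, is shown below; choosing the Contributor role scoped to TD-RG1 is an assumption about the level of access the services need.

# Enable a system-assigned managed identity on TD-VM1
$vm = Get-AzVM -ResourceGroupName "TD-RG1" -Name "TD-VM1"
Update-AzVM -ResourceGroupName "TD-RG1" -VM $vm -IdentityType SystemAssigned

# Refresh the VM object and grant the identity access to the resource group
$vm = Get-AzVM -ResourceGroupName "TD-RG1" -Name "TD-VM1"
New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId -RoleDefinitionName "Contributor" -ResourceGroupName "TD-RG1"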

The option that says: Configure the security settings of TD-RG1 is incorrect because it only provides security recommendations and security alerts for your resource group. As per the scenario, you need to ensure that the services running on TD-VM1 are able to manage the resources in TD-RG1 using its identity. Therefore, you need to configure the managed identity settings of TD-VM1.

The options that say: Configure the access control of TD-VM1 and Configure the access control of TD-RG1 are incorrect because these are only adding role assignments to an Azure resource. A role assignment is a process of attaching a role definition to a user, group, or service principal to provide access to a specific resource. Remember that access is granted by creating a role assignment, and access is revoked by removing a role assignment. You have to configure a managed identity instead.

15
Q

Your company has 12 peered virtual networks in your Azure subscription.

You plan to deploy a network security group for each virtual network.

There is a compliance requirement that port 80 should be automatically blocked between virtual networks whenever a new network security group is created. The solution must minimize administrative effort.

Solution: You create a security rule that denies incoming port 80 traffic.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

It is stated in the scenario that blocking port 80 should be done automatically whenever a new network security group is created. By creating a rule manually, it becomes quite cumbersome to configure as you need to create a security rule for every network security group you create. It’s best practice to always automate your security processes to avoid administrative overhead.

You should use a custom policy definition in order to automate the requirement.

Hence, the correct answer is: No.

16
Q

Your company has 12 peered virtual networks in your Azure subscription.

You plan to deploy a network security group for each virtual network.

There is a compliance requirement that port 80 should be automatically blocked between virtual networks whenever a new network security group is created. The solution must minimize administrative effort.

Solution: You create a custom policy definition and assign it to the subscription.

Does the solution meet the goal?

A. No
B. Yes

A

B. Yes

Explanation:

Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources.

Azure Policy has a list of built-in policy definitions, but if you need something more specific, you can create your own by creating a custom policy definition that will allow your organization to meet its compliance requirements.

A custom policy definition allows customers to define their own rules for using Azure. These rules often enforce:

– Security practices

– Cost management

– Organization-specific rules (like naming or locations)

In this scenario, you can create a custom policy to automatically block port 80 whenever a new network security group is created.

Hence, the correct answer is: Yes.
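
Below is a simplified, hedged sketch of such a custom policy definition and its assignment at the subscription scope. The policy rule is illustrative only and would need tuning for production use (for example, to cover port ranges and security rules defined inline in the NSG resource).

$rule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/networkSecurityGroups/securityRules" },
      { "field": "Microsoft.Network/networkSecurityGroups/securityRules/destinationPortRange", "equals": "80" }
    ]
  },
  "then": { "effect": "deny" }
}
'@

$definition = New-AzPolicyDefinition -Name "deny-port-80" -Policy $rule
New-AzPolicyAssignment -Name "deny-port-80-assignment" -PolicyDefinition $definition `
    -Scope "/subscriptions/<subscription-id>"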

17
Q

Your company has 12 peered virtual networks in your Azure subscription.

You plan to deploy a network security group for each virtual network.

There is a compliance requirement that port 80 should be automatically blocked between virtual networks whenever a new network security group is created. The solution must minimize administrative effort.

Solution: You configure the network security group (NSG) flow log to automatically block port 80.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.

Network security group (NSG) flow logs are a feature of Azure Network Watcher that allows you to log the source and destination IP address, port, protocol, and whether traffic was allowed or denied by an NSG. Flow data is sent to Azure Storage accounts from where you can access it as well as export it to any visualization tool, SIEM, or IDS of your choice.

It is stated in the scenario that port 80 should be automatically blocked between virtual networks whenever a new network security group is created. NSG flow logs are only used to monitor traffic that is allowed or denied by a network security group.

Hence, the correct answer is: No.

18
Q

You have an Azure Subscription and a Microsoft Entra group named Developers.

The Azure Subscription has a resource group named Dev.

You need to assign a role in the Developers group to allow the users to create Azure Logic Apps in the resource group.

Solution: In the Dev resource group, assign a User Access Administrator role to the Developers group.

Does the proposed solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
Azure role-based access control (Azure RBAC) is a system that provides fine-grained access management of Azure resources. Using Azure RBAC, you can segregate duties within your team and grant just the right access to users that they need to perform their jobs.

The associated permissions for the User Access Administrator role are only related to the specific access of each user to access different Azure resources. This role cannot create or manage any type of Azure resources.

Since the requirement in the scenario is to allow the users to create Azure Logic Apps in the resource group, you have to assign a Contributor role to the users of the Developers group.

Hence, the correct answer is: No.

19
Q

You have an Azure Subscription and a Microsoft Entra group named Developers.

The Azure Subscription has a resource group named Dev.

You need to assign a role in the Developers group to allow the users to create Azure Logic Apps in the resource group.

Solution: In the Dev resource group, assign a Logic App Operator role to the Developers group.

Does the proposed solution meet the goal?

A. No
B. Yes

A

A. No

Explanation:
Azure role-based access control (Azure RBAC) is a system that provides fine-grained access management of Azure resources. Using Azure RBAC, you can segregate duties within your team and grant only the needed access to allow your users to perform their jobs.

The Logic App Operator role only lets you read, enable, and disable logic apps. You can’t edit, update, or create logic apps.

To satisfy the requirement in the scenario, you have to assign a Contributor role to the Developers Microsoft Entra ID group of the Dev resource group.

Hence, the correct answer is: No.

20
Q

You have an Azure Subscription and a Microsoft Entra group named Developers.

The Azure Subscription has a resource group named Dev.

You need to assign a role in the Developers group to allow the users to create Azure Logic Apps in the resource group.

Solution: In the Dev resource group, assign a Contributor role to the Developers Microsoft Entra group.

Does the proposed solution meet the goal?

A. Yes
B. No

A

A. Yes

Explanation:
Azure role-based access control (Azure RBAC) is a system that provides fine-grained access management of Azure resources. Using Azure RBAC, you can segregate duties within your team and grant only the right amount of access that users need to perform their jobs.

The permissions for the Contributor Role are:

– Create and manage all types of Azure resources

– Create a new tenant in Microsoft Entra ID

– Cannot grant access to others

Assigning the Contributor role to the users will satisfy this requirement since it allows the users to create Azure Logic Apps within a certain resource group.

Hence, the correct answer is: Yes.
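
A minimal sketch of the assignment is shown below; it assumes the Developers group can be resolved by its display name.

$group = Get-AzADGroup -DisplayName "Developers"
New-AzRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Contributor" -ResourceGroupName "Dev"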

21
Q

Your company has already migrated the TutorialsDojoPortal to Azure.

There is a requirement to migrate the media files to Azure.

What should you do?

A. Use file explorer to copy the files by mapping a drive using an Azure storage account access key for authorization.
B. Use file explorer to copy the files by mapping a drive using a shared access signature (SAS) in the Azure storage account to grant temporary access.
C. Use Azure Import/Export service to copy the files.
D. Use Azure Storage Explorer to copy the files.

A

D. Use Azure Storage Explorer to copy the files.

Explanation:
Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.

Blob storage is designed for:

– Serving images or documents directly to a browser.

– Storing files for distributed access.

– Streaming video and audio.

– Writing to log files.

– Storing data for backup and restore disaster recovery, and archiving.

– Storing data for analysis by an on-premises or Azure-hosted service.

Microsoft Azure Storage Explorer is a standalone app with an accessible, intuitive, feature-rich graphical user interface (GUI) for full management of cloud storage resources, and it makes it easy to work with Azure Storage data on Windows, macOS, and Linux. You can upload, download, and manage Azure blobs, files, queues, and tables, as well as Azure Cosmos DB and Azure Data Lake Storage entities.

The requirements to be considered for this scenario are:

– Migrate the media files to Azure over the Internet.

– The media files must be stored in a Blob container and cached via Content Delivery Network.

Hence, the correct answer is: Use Azure Storage Explorer to copy the files.

The option that says: Use Azure Import/Export service to copy the files is incorrect. Azure Import/Export service is primarily used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. The requirement states that the transfer of the media files must be done over the Internet.

The following options are incorrect because you cannot mount a Blob container using file explorer. Take note that the requirement states that the media files must be stored in a Blob container.

– Use file explorer to copy the files by mapping a drive using a shared access signature (SAS) in the Azure storage account to grant temporary access.

– Use file explorer to copy the files by mapping a drive using an Azure storage account access key for authorization.

22
Q

Tutorials Dojo must meet the following technical requirements:

Migrate the TutorialsDojoPortal virtual machines to Azure.
Limit the number of ports between TutorialsDojoPortal tiers.
Backup and disaster recovery scenario for TutorialsDojoPortal servers.
Migrate the media files to Azure over the internet.
The media files must be stored in a Blob container and cached via Content Delivery Network.
The virtual machines must be joined to the Active Directory.
The SQL database server must run on virtual machines.
Minimize administrative effort whenever possible.

User Requirements

Create a new user named TutorialsDojoAdmin1 as the service admin for the Azure Subscription.
Ensure that TutorialsDojoAdmin1 receives email notifications for budget alerts.
Ensure that only Administrators can create virtual machines.

You need to identify the storage requirements for TutorialsDojoPortal media files.

For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.

A. Azure Files storage meets the storage requirements of TutorialsDojoPortal media files.

B. Azure Blob storage meets the storage requirements of TutorialsDojoPortal media files.

C. Azure Table storage meets the storage requirements of TutorialsDojoPortal media files.

A

A. No
B. Yes
C. No

Explanation:
Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.

Azure Table stores large amounts of structured data. The service is a NoSQL datastore that accepts authenticated calls from inside and outside the Azure cloud.

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

Azure Content Delivery Network (CDN) is a distributed network of servers that is used to cache and store content. These servers are in locations that are close to end-users to minimize latency.

You can use Azure CDN to cache content from a Blob container and configure the custom domain endpoint for your Blob container, provision custom TLS/SSL certificates, and configure custom rewrite rules. Azure CDN also provides TLS encryption with your own certificate.

The server locations are referred to as Point-of-presence (POP) locations. CDNs store cached data on edge servers, or servers close to your users, in these POP locations.

The requirement to be considered for this scenario is:

– The media files must be stored in a Blob container and cached via Content Delivery Network.

Hence, this statement is correct: Azure Blob storage meets the storage requirements of TutorialsDojoPortal media files.

The statement that says: Azure Table storage meets the storage requirements of TutorialsDojoPortal media files is incorrect because Azure Table is ideal for storing structured, non-relational data. You simply cannot integrate Azure Table with Azure CDN. Take note that the requirement states that the files must be stored in a blob container and cached via CDN.

The statement that says: Azure Files storage meets the storage requirements of TutorialsDojoPortal media files is incorrect. Azure Files can be only accessed through SMB protocol and cannot be put directly behind an Azure CDN which only supports HTTP(80) and HTTPS(443) protocols.

23
Q

You need to retrieve the JSON string of the Contributor role so you can customize it to create the AdatumAdministrator custom role.

Which command should you run?

A. Get-AzRoleAssignment -Name Contributor | ConvertFrom-Json
B. Get-AzRoleAssignment -Name Contributor | ConvertTo-Json
C. Get-AzRoleDefinition -Name Contributor | ConvertTo-Json
D. Get-AzRoleDefinition -Name Contributor | ConvertFrom-Json

A

C. Get-AzRoleDefinition -Name Contributor | ConvertTo-Json

Explanation:
Access management for cloud resources is a critical function for any organization that is using the cloud. Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

Azure RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources.

If the Azure built-in roles don’t meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group, subscription, and resource group scopes.

Take note that in this scenario, you need to create a custom role named AdatumAdministrator that is based on the built-in policy Contributor role. You need to retrieve the JSON format file of the Contributor role so that you can customize it to your needs.

To retrieve the JSON string of the Contributor role, you need to use the command:

– Get-AzRoleDefinition -Name <role_name> | ConvertTo-Json

Hence, the correct answer is: Get-AzRoleDefinition -Name Contributor | ConvertTo-Json
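
A hedged sketch of the likely follow-up steps: export the Contributor definition to a file, edit the JSON (clear the Id, set IsCustom to true, rename it to AdatumAdministrator, and adjust AssignableScopes), then register the custom role.

Get-AzRoleDefinition -Name "Contributor" | ConvertTo-Json | Out-File ".\AdatumAdministrator.json"

# ... edit AdatumAdministrator.json as needed ...

New-AzRoleDefinition -InputFile ".\AdatumAdministrator.json"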

Get-AzRoleDefinition -Name Contributor | ConvertFrom-Json is incorrect because the ConvertFrom-Json cmdlet just converts your JSON string to a PSCustomObject object that has a property for each field in the JSON string. Take note that you need to retrieve the JSON role so that you can customize it to your needs.

The following options are incorrect because the Get-AzRoleAssignment simply allows you to list Azure RBAC role assignments at the specified scope. By default, it lists all role assignments in the selected Azure subscription. You have to use the respective parameters to list assignments to a specific user, or to list assignments on a specific resource group or resource.

– Get-AzRoleAssignment -Name Contributor | ConvertTo-Json

– Get-AzRoleAssignment -Name Contributor | ConvertFrom-Json

24
Q

According to the sales department, vm3.adatum.com does not have connectivity to the Montreal office.

You need to determine if a network security group is causing the issue.

What Azure Network Watcher feature should you use?

A. Next hop
B. IP flow verify
C. Traffic Analytics
D. NSG Flow Logs

A

B. IP flow verify

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.

IP flow verify checks if a packet is allowed or denied to or from a virtual machine. If the packet is denied by a security group, the name of the rule that denied the packet is returned. IP flow verify helps administrators quickly diagnose connectivity issues from or to the Internet and from or to the on-premises environment.

IP flow verify first looks at the rules for all Network Security Groups (NSGs) applied to the network interface, such as a subnet or virtual machine NIC. Traffic flow is then verified based on the configured settings to or from that network interface. It is useful in confirming if a rule in a Network Security Group is blocking ingress or egress traffic to or from a virtual machine.

Hence, the correct answer is: IP flow verify.

Next hop is incorrect because this simply helps you determine if traffic is being directed to the intended destination, or whether the traffic is being sent nowhere. Take note that in this scenario, you need to determine if the network security group is blocking the ingress or egress traffic.

NSG Flow Logs is incorrect. It is only a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a network security group.

Traffic Analytics is incorrect because this just allows you to process your NSG Flow Log data that enables you to visualize, query, analyze, and understand your network traffic.

25
Q

Your organization’s Azure subscription contains the following resources:

Azure Kubernetes Service

Azure Container Registry

Azure Blob Storage

You need to create a container image and deploy it to the cluster.

Which of the following commands should you do first?

A. az acr build
B. az aks run
C. az aks create
D. az import-export create

A

A. az acr build

Explanation:
Azure Container Registry is a managed registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your container images and related artifacts. Use Azure container registries with your existing container development and deployment pipelines, or use Azure Container Registry Tasks to build container images in Azure. Build on demand, or fully automate builds with triggers such as source code commits and base image updates.

To deploy an application on your AKS cluster, you’ll need to build a container image first. Then create a deployment manifest file to run the image in your cluster.

In this scenario, you need to identify which command to run first, and the requirement states that you must create a container image. The az acr build command queues a quick build in Azure Container Registry, builds the container image, pushes it to the registry, and provides streaming logs. Therefore, az acr build is the first command to run.

Hence, the correct answer is: az acr build.
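
A hedged end-to-end sketch run from a PowerShell session is shown below; the registry, image, resource group, cluster, and manifest names are placeholders.

# Build the image with ACR Tasks and push it to the registry
az acr build --registry tdregistry --image tutorialsdojo/webapp:v1 .

# Get cluster credentials and deploy the manifest that references the pushed image
az aks get-credentials --resource-group td-rg --name td-aks
kubectl apply -f deployment.yaml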

The option that says: az aks create is incorrect because there is already an existing AKS cluster in your Azure subscription.

The option that says: az aks run is incorrect because, before you can run a container image on your cluster, you need to build the image and push it to the registry first.

The option that says: az import-export create is incorrect because this command creates a new job or updates an existing job in the specified subscription, which is unrelated to building container images.

26
Q

Your company has an Azure subscription that contains several users.

You must ensure that only one user is able to deploy virtual machines and manage virtual networks.

Which of the following options should you use to satisfy the principle of least privilege?

A. Network Contributor
B. Virtual Machine Contributor
C. Contributor
D. Owner

A

C. Contributor

Explanation:
Azure RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management to Azure resources. Azure includes several built-in roles that you can use. For example, the Virtual Machine Contributor role allows a user to create and manage virtual machines. If the built-in roles don’t meet the specific needs of your organization, you can create your own Azure custom roles.

According to the “principle of least privilege,” workers should only have access to the resources necessary for carrying out their job duties. In this scenario, both the Owner and Contributor roles would let the user deploy VMs and manage VNets, but the requirement is to assign the role with the least privilege.

The Owner grants full access to manage all resources, including the ability to assign roles in Azure RBAC. While the Contributor role grants full access to manage all resources but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.

Hence, the correct answer is: Contributor.

Owner is incorrect because this role will allow the user to have full access to all of the resources including the assignment of roles in Azure RBAC.

Virtual Machine Contributor is incorrect because this role does not grant you management access to the virtual network.

Network Contributor is incorrect because this role only lets you manage networks; it does not allow you to deploy virtual machines.

27
Q

You are currently managing multiple Azure virtual machines that are used for lab experiments.

The VMs are continuously backed up and stored in the Recovery Services vault named td-backup-labs.

You have been asked to delete td-backup-labs vault but it contains protected items.

Which of the following options should you do first?

A. Modify the lock type of RSV.
B. Delete the backup data.
C. Modify the backup policy.
D. Stop the backup of each item.

A

D. Stop the backup of each item.

Explanation:
A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and SQL Server in Azure VMs.

To delete a Recovery Services vault, you need to stop the continuous backup first. Because if you try to delete the vault without stopping the backup, you would receive an error notification.

You can’t delete a Recovery Services vault with any of the following dependencies:

– You can’t delete a vault that contains protected data sources (for example, IaaS VMs, SQL databases, Azure file shares).

– You can’t delete a vault that contains backup data. Once backup data is deleted, it will go into the soft deleted state.

– You can’t delete a vault that contains backup data in the soft deleted state.

– You can’t delete a vault that has registered storage accounts.

If you try to delete the vault without removing the dependencies, you’ll encounter one of the following error messages:

– Vault cannot be deleted as there are existing resources within the vault.

– Recovery Services vault cannot be deleted as there are backup items in soft deleted state in the vault. The soft deleted items are permanently deleted after 14 days of delete operation.

Hence, the correct answer is: Stop the backup of each item.
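
A hedged sketch of stopping protection for every backed-up VM and then deleting the vault is shown below; it assumes no soft-deleted items or registered storage accounts remain.

$vault = Get-AzRecoveryServicesVault -Name "td-backup-labs"

# Stop backup and remove recovery points for every protected VM in the vault
$containers = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -VaultId $vault.ID
foreach ($container in $containers) {
    $items = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM -VaultId $vault.ID
    foreach ($item in $items) {
        Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $vault.ID -RemoveRecoveryPoints -Force
    }
}

Remove-AzRecoveryServicesVault -Vault $vault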

The option that says: Modify the lock type of RSV is incorrect because there’s no lock type configured in scenario. Even if you modify the lock type, you still won’t be able to delete the vault.

The option that says: Delete the backup data is incorrect because you need to stop the backup first before you are able to delete the backup data.

The option that says: Modify the backup policy is incorrect because you still won’t be able to delete the Recovery Services vault even if you modify the backup policy. To delete a vault, stop the backup of each item first.

28
Q

Your company is planning to launch an internal web app using an AKS cluster.

The app should be accessible via the pod’s IP address.

Which of the following network settings should you configure to meet this requirement?

A. Azure NSG
B. Azure Private Link
C. kubenet
D. Azure CNI

A

D. Azure CNI

Explanation:
Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance. Since the Kubernetes control plane is managed by Azure, you only manage and maintain the agent nodes, and you pay only for the agent nodes within your clusters, not for the managed control plane.

A Kubernetes cluster provides two options to configure your network:

– By default, AKS clusters use kubenet, and a virtual network and subnet are created for you. With kubenet, nodes get an IP address from a virtual network subnet.

– With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly.

Since you will connect to the app using the pod’s IP address, you need to select Azure CNI upon creation of your cluster.

Hence, the correct answer is: Azure CNI.

kubenet is incorrect because, as stated in the scenario, you need to connect via the pod’s IP address. With kubenet, pods receive an IP address from a logically separate address space, and network address translation (NAT) is configured on the nodes, so pods sit behind the node IP and are not directly reachable.

Azure NSG is incorrect because a network security group is only used to allow or deny inbound and outbound network traffic; it does not control how pods are assigned IP addresses.

Azure Private Link is incorrect because this just provides private access to Azure-hosted services. It will not allow you to configure the cluster network type to assign IP addresses to pods.
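
For reference, choosing Azure CNI when creating a cluster programmatically can look roughly like the following minimal sketch, which uses the azure-mgmt-containerservice Python package. The resource group, cluster name, subnet ID, and node sizing are hypothetical values for illustration, and the model names follow recent versions of the SDK.

from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.containerservice.models import (
    ManagedCluster,
    ManagedClusterAgentPoolProfile,
    ContainerServiceNetworkProfile,
    ManagedClusterIdentity,
)

subscription_id = "<subscription-id>"  # placeholder
client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)

cluster = ManagedCluster(
    location="eastus",
    dns_prefix="tdapp",
    identity=ManagedClusterIdentity(type="SystemAssigned"),
    agent_pool_profiles=[
        ManagedClusterAgentPoolProfile(
            name="nodepool1",
            mode="System",
            count=2,
            vm_size="Standard_DS2_v2",
            vnet_subnet_id="<subnet-resource-id>",  # pods draw IPs from this subnet
        )
    ],
    # network_plugin="azure" selects Azure CNI; "kubenet" is the default plugin
    network_profile=ContainerServiceNetworkProfile(network_plugin="azure"),
)

client.managed_clusters.begin_create_or_update(
    "td-rg", "td-aks-cluster", cluster
).result()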

29
Q

You created a new Microsoft Entra group for Network Administrators in your organization’s Azure subscription.

You need to make sure that the users in the group can enable Traffic Analytics and visualize traffic distribution.

Solution: Assign a Reader role to the group.

Does the solution meet the goal?

A. No
B. Yes

A

A. No

Explanation:
Traffic analytics is a cloud-based solution that provides visibility into user and application activity in your cloud networks. Specifically, traffic analytics analyzes Azure Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud.

With traffic analytics, you can:

– Visualize network activity across your Azure subscriptions.

– Identify hot spots.

– Secure your network by using information about the following components to identify threats: Open ports, Applications that attempt to access the internet, and VMs that connect to rogue networks.

– Optimize your network deployment for performance and capacity by understanding traffic flow patterns across Azure regions and the internet.

– Pinpoint network misconfigurations that can lead to failed connections in your network.

To enable traffic analytics, your account must have any of the following Azure roles at the subscription scope: owner, contributor, or network contributor.

But before you use traffic analytics, ensure your environment meets the following requirements:

– A Network Watcher enabled subscription.

– Network Security Group (NSG) flow logs enabled for the NSGs you want to monitor.

– An Azure Storage account to store raw flow logs.

– An Azure Log Analytics workspace with read and write access.

Going back to the given solution, assigning the Reader role only allows the users to view resources. It does not grant the permissions needed to enable Traffic Analytics, which requires the Owner, Contributor, or Network Contributor role at the subscription scope.

Hence, the correct answer is: No.

30
Q

You created a new Microsoft Entra group for Network Administrators in your organization’s Azure subscription.

You need to make sure that the users in the group can enable Traffic Analytics and visualize traffic distribution.

Solution: Assign a Security Operator role to the group.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
Traffic analytics is a cloud-based solution that provides visibility into user and application activity in your cloud networks. Specifically, traffic analytics analyzes Azure Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud.

With traffic analytics, you can:

– Visualize network activity across your Azure subscriptions.

– Identify hot spots.

– Secure your network by using information about the following components to identify threats: Open ports, Applications that attempt to access the internet, and VMs that connect to rogue networks.

– Optimize your network deployment for performance and capacity by understanding traffic flow patterns across Azure regions and the internet.

– Pinpoint network misconfigurations that can lead to failed connections in your network.

To enable traffic analytics, your account must have any of the following Azure roles at the subscription scope: owner, contributor, or network contributor.

But before you use traffic analytics, ensure your environment meets the following requirements:

– A Network Watcher enabled subscription.

– Network Security Group (NSG) flow logs enabled for the NSGs you want to monitor.

– An Azure Storage account to store raw flow logs.

– An Azure Log Analytics workspace with read and write access.

Going back to the given solution, a Security Operator can only create and manage security events. By assigning this role, the users in the group won’t be able to enable traffic analytics. You must assign the required Azure roles to use the service.

Hence, the correct answer is: No.

31
Q

You created a new Microsoft Entra group for Network Administrators in your organization’s Azure subscription.

You need to make sure that the users in the group can enable Traffic Analytics and visualize traffic distribution.

Solution: Assign a Contributor role to the group.

Does the solution meet the goal?

A. No
B. Yes

A

B. Yes

Explanation:
Traffic analytics is a cloud-based solution that provides visibility into user and application activity in your cloud networks. Specifically, traffic analytics analyzes Azure Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud.

With traffic analytics, you can:

– Visualize network activity across your Azure subscriptions.

– Identify hot spots.

– Secure your network by using information about the following components to identify threats: Open ports, Applications that attempt to access the internet, and VMs that connect to rogue networks.

– Optimize your network deployment for performance and capacity by understanding traffic flow patterns across Azure regions and the internet.

– Pinpoint network misconfigurations that can lead to failed connections in your network.

To enable traffic analytics, your account must have any of the following Azure roles at the subscription scope: owner, contributor, or network contributor.

But before you use traffic analytics, ensure your environment meets the following requirements:

– A Network Watcher enabled subscription.

– Network Security Group (NSG) flow logs enabled for the NSGs you want to monitor.

– An Azure Storage account to store raw flow logs.

– An Azure Log Analytics workspace with read and write access.

Going back to the given solution, the users in the group can visualize the traffic distribution by assigning a Contributor role to the group. A Contributor role can manage all resources but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.

Hence, the correct answer is: Yes.
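
For context, once the group holds the Contributor role its members could script the Traffic Analytics setup themselves. The sketch below enables an NSG flow log with Traffic Analytics using the azure-mgmt-network Python package; the NSG, storage account, and workspace IDs are placeholders, and the model names follow recent versions of the SDK.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    FlowLog,
    TrafficAnalyticsProperties,
    TrafficAnalyticsConfigurationProperties,
)

subscription_id = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

flow_log = FlowLog(
    location="eastus",
    target_resource_id="<nsg-resource-id>",      # the NSG to monitor
    storage_id="<storage-account-resource-id>",  # stores the raw flow logs
    enabled=True,
    flow_analytics_configuration=TrafficAnalyticsProperties(
        network_watcher_flow_analytics_configuration=TrafficAnalyticsConfigurationProperties(
            enabled=True,
            workspace_region="eastus",
            workspace_resource_id="<log-analytics-workspace-resource-id>",
            # depending on the API version, the workspace GUID (workspace_id) may also be required
            traffic_analytics_interval=10,
        )
    ),
)

client.flow_logs.begin_create_or_update(
    "NetworkWatcherRG", "NetworkWatcher_eastus", "td-nsg-flowlog", flow_log
).result()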

32
Q

You have been assigned to manage two Azure virtual machines and Recovery Services vaults. Both VMs currently back up to a single vault.

You must configure the other VM to back up to a different vault.

Which of the following options should you do first?

A. Stop the backup of one VM.
B. Change the VM target vault.
C. Delete the backup data.
D. Stop the backup of both VMs.

A

A. Stop the backup of one VM.

Explanation:
A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.

To change the Recovery Services vault of a virtual machine, you need to stop the backup first, since an Azure VM can only be assigned to a single Recovery Services vault (RSV) at a time. After the backup stops, you can assign a new vault to your VM.

Hence, the correct answer is: Stop the backup of one VM.

The option that says: Delete the backup data is incorrect because you need to stop the backup before you can delete the backup data.

The option that says: Stop the backup of both VMs is incorrect because you don’t need to stop the backup of both VMs to change the vault of one VM.

The option that says: Change the VM target vault is incorrect because you first need to stop the backup of the VM before you can change its Recovery Services vault.

33
Q

You are managing an Azure subscription with several virtual machines deployed in the East US Azure region. You need to monitor the network traffic of these virtual machines to identify trends, diagnose issues, and optimize performance.

You plan to use Traffic Analytics in Azure Network Watcher for this purpose.

Which two of the following actions should you take to meet this requirement?

Each correct answer presents part of the solution. (Select TWO).

A. Enable Azure Monitor Alerts
B. Implement a Microsoft Sentinel workspace.
C. Utilize a Log Analytics workspace.
D. Configure Azure Policy
E. Launch a Data Collection Rule (DCR) in Azure Monitor.

A

C. Utilize a Log Analytics workspace.
E. Launch a Data Collection Rule (DCR) in Azure Monitor.

Explanation:
Traffic Analytics is a cloud-based service that offers insights into user and application activities within your cloud networks. It specifically examines Azure Network Watcher flow logs to deliver detailed information on traffic patterns within your Azure cloud environment.

Log Analytics workspace is crucial because it serves as the repository for all the collected network traffic data. Traffic Analytics uses this workspace to store logs and telemetry data, which can then be queried and visualized to identify patterns, diagnose issues, and optimize performance. The Log Analytics workspace provides the tools necessary for analyzing traffic data and generating insights that can help improve network performance and security. By utilizing the capabilities of the Log Analytics workspace, organizations can transform raw data into actionable insights that enhance overall network efficiency and security.

In Azure Monitor, Data Collection Rule (DCR) defines which data should be collected and where it should be sent. For Traffic Analytics to work effectively, it must know what network data to gather from the virtual machines and how to process it. Data Collection Rules allow you to manage and customize data collection for your monitoring needs. By setting up a DCR, the organization ensures that the relevant network traffic data is collected and directed to the Log Analytics workspace for analysis. This setup is essential for the accurate and efficient functioning of Traffic Analytics.

Hence, the correct answers are:

 – Utilize a Log Analytics workspace.

 – Launch a Data Collection Rule (DCR) in Azure Monitor.

The option that says: Enable Azure Monitor Alerts is incorrect because it is primarily used to notify you about specific conditions or thresholds being met. Alerts are helpful for reactive monitoring but do not directly contribute to the configuration or operation of Traffic Analytics. Moreover, traffic analytics is focused on collecting and analyzing network traffic data, whereas alerts are used to respond to specific events or metrics.

The option that says: Configure Azure Policy is incorrect because this only enforces organizational standards and ensures compliance across your Azure resources. It helps to audit and apply rules to your Azure environment but does not assist in setting up or managing Traffic Analytics. Traffic Analytics requires data collection and analysis, which are not functions of Azure Policy.

The option that says: Implement a Microsoft Sentinel workspace is incorrect because Sentinel is simply a scalable, cloud-native security information and event management (SIEM) solution designed for security monitoring and cyber threat detection, not for monitoring network traffic through Traffic Analytics. Using Sentinel would not provide the necessary functionality for network traffic analysis as required by Traffic Analytics.
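
As an illustration of the Log Analytics workspace piece, the snippet below creates a workspace with the azure-mgmt-loganalytics Python package; the resource group, workspace name, and region are hypothetical, and the names follow recent versions of the SDK.

from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient
from azure.mgmt.loganalytics.models import Workspace, WorkspaceSku

subscription_id = "<subscription-id>"  # placeholder
client = LogAnalyticsManagementClient(DefaultAzureCredential(), subscription_id)

# Traffic Analytics stores and queries its flow data in this workspace.
workspace = client.workspaces.begin_create_or_update(
    "td-monitoring-rg",
    "td-traffic-law",
    Workspace(location="eastus", sku=WorkspaceSku(name="PerGB2018")),
).result()

print(workspace.customer_id)  # workspace ID used when wiring up the flow data collection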

34
Q

You have a virtual machine named TD1 that is part of your Azure subscription. Additionally, you manage an on-premises data center containing a domain controller named DC1. The connection between the on-premises data center and Azure is established through ExpressRoute.

Which method should be implemented on DC1 to monitor and identify network latency between TD1 and DC1 using Connection Monitor?

A. Install an Azure Monitor agent extension.
B. Utilize the Azure Connected Machine agent for Azure Arc-enabled servers.
C. Launch Azure Recovery Services agent.
D. Deploy the Dependency agent.

A

A. Install an Azure Monitor agent extension.

Explanation:
The Azure Monitor Agent is a unified agent for collecting telemetry data from Azure and hybrid environments, including on-premises and multi-cloud. It consolidates the functionality of previous monitoring agents and supports a wide range of data collection scenarios, such as performance metrics, event logs, and custom logs. The Azure Monitor Agent is installed as an extension on virtual machines and can be managed through various methods, including Azure Policy, PowerShell, and the Azure portal. It enables comprehensive monitoring of infrastructure and applications, providing insights through Azure Monitor, which helps maintain the entire environment’s performance and reliability.

To monitor and identify network latency between TD1 in Azure and the on-premises DC1 using Connection Monitor, you must install the Azure Monitor agent extension on DC1. This agent facilitates the collection of detailed metrics and logs from the on-premises server, which are essential for measuring network performance. By integrating with Azure Monitor, the agent enables the setup of Connection Monitor tests to track connectivity and latency between the two environments continuously. The data collected is sent to Azure Monitor, which can be analyzed and visualized, helping detect and troubleshoot network issues effectively. This ensures that the hybrid network infrastructure remains reliable and performant, leveraging the comprehensive monitoring capabilities of Azure Monitor.

Hence, the correct answer is: Install an Azure Monitor agent extension.

The option that says: Utilize the Azure Connected Machine agent for Azure Arc-enabled servers is incorrect because this option is designed for managing and monitoring non-Azure machines through Azure Arc. While it allows on-premises and multi-cloud servers to be managed alongside Azure resources, it does not provide the specific network performance monitoring features required by Connection Monitor. Its primary purpose is to enable Azure management capabilities for non-Azure servers, not to monitor network latency.

The option that says: Launch Azure Recovery Services agent is incorrect because it is primarily used for backup and disaster recovery solutions. It is designed to facilitate backups of your on-premises data and manage recovery operations. While it ensures data availability and protection, it does not provide capabilities for monitoring network latency or performance, which are critical for Connection Monitor.

The option that says: Deploy the Dependency agent is incorrect because this agent is only used in conjunction with the Azure Monitor Agent to provide in-depth application performance monitoring and dependency mapping within Azure Monitor’s VM insights. While it enhances the monitoring of application dependencies and interactions, it does not offer the necessary network latency monitoring capabilities required for Connection Monitor. It focuses on understanding application dependencies rather than monitoring network performance between Azure and on-premises environments.
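
For reference, installing the Azure Monitor agent as a machine extension can be scripted. The sketch below shows the VM-extension form on an Azure VM using the azure-mgmt-compute Python package; the resource group name and the handler version are assumptions made for this illustration, and the field names follow recent versions of the SDK.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineExtension

subscription_id = "<subscription-id>"  # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

extension = VirtualMachineExtension(
    location="eastus",
    publisher="Microsoft.Azure.Monitor",
    type_properties_type="AzureMonitorWindowsAgent",  # Windows build of the agent
    type_handler_version="1.0",  # assumption: pin to the version you have validated
    auto_upgrade_minor_version=True,
)

client.virtual_machine_extensions.begin_create_or_update(
    "td-rg", "TD1", "AzureMonitorWindowsAgent", extension
).result()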

35
Q

To automate the user settings configuration for the human resources department at Noypi, Inc., which solution should be included in the recommendation?

A. Azure Application Insights Profiler
B. Dynamic groups with conditional access policies
C. Azure Monitor Agent
D. Azure AD Business-to-Consumer (B2C)

A

B. Dynamic groups with conditional access policies

Explanation:
Dynamic Groups in Azure AD: Dynamic groups are Azure AD (Microsoft Entra ID) groups that automatically add or remove members based on user or device attributes. This allows you to create rules that define the membership criteria for the group rather than manually adding or removing members.

For example, you can create a dynamic group that includes all users with the “Department” attribute set to “Finance”. As new users are added to the Finance department or existing users leave, their membership in the dynamic group will be automatically updated.

Conditional Access Policies in Azure AD: Conditional Access policies in Azure AD allow you to define access requirements and controls based on various conditions, such as user, device, location, and risk level. These policies can be applied to specific cloud apps or resources.

Some common examples of conditional access policies include:

– Requiring multi-factor authentication (MFA) for specific applications or users

– Restricting access based on the user’s location or device state (e.g., compliant or non-compliant)

– Blocking or granting access based on risk levels or user risk profiles

By combining dynamic groups and conditional access policies, you can automate the process of applying user settings and access controls based on the human resources department attribute. As users join or leave the human resources department, their access and settings will be automatically adjusted without manual intervention.

Therefore, the correct answer is: Dynamic groups with conditional access policies.

Azure AD Business-to-Consumer (B2C) is incorrect because it is primarily used for managing external user identities and enabling customer identity and access management scenarios, which is not the requirement in this case.

Azure Application Insights Profiler is incorrect because it is just a tool for profiling live web applications to diagnose performance issues; it is not related to user settings configuration.

Azure Monitor Agent is incorrect because it is mainly used for collecting monitoring data from virtual machines and other resources. Azure Monitor Agent is not capable of configuring user settings automation.
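
For example, a dynamic membership rule for such a group could be written as follows; the department value shown is hypothetical and must match the attribute your organization actually populates:

user.department -eq "Human Resources"

Users whose Department attribute matches the rule are added to the group automatically, and any Conditional Access policy targeting the group then applies to them without manual assignment.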

36
Q

You have been assigned to establish an Azure Storage account with the name AnalyticDataStorage for a new project.
You must incorporate Azure Data Lake Storage to facilitate big data analytics.
You must ensure cost-effectiveness for storing data that is not accessed frequently.
You must ensure data redundancy across multiple Azure regions for robust disaster recovery.

Which configurations should you apply to AnalyticDataStorage? (SELECT THREE.)

A. Enable the Hot access tier.
B. Enable the Cool access tier.
C. Choose the Locally redundant storage (LRS) for data backup.
D. Choose the Geo-redundant storage (GRS) for data backup.
E. Enable the hierarchical namespace for a structured storage setup.

A

B. Enable the Cool access tier.
D. Choose the Geo-redundant storage (GRS) for data backup.
E. Enable the hierarchical namespace for a structured storage setup.

Explanation:
Geo-redundant storage (GRS) replicates your data to a secondary region that is hundreds of miles away from the primary region. If GRS is enabled for your storage account, your data remains safe even in the event of a complete regional outage or a disaster where the main region cannot be recovered.

Hierarchical namespace organizes objects/files into a hierarchy of directories for efficient data access. A common example is the directory structure in a file system. Hierarchical namespace is a feature of Azure Data Lake Storage Gen2. With hierarchical namespace enabled, a storage account can provide the scalability and cost-effectiveness of object storage, along with file system semantics familiar to analytics engines and frameworks.

The Cool access tier is optimized for storing data that is infrequently accessed or modified and that is kept for at least 30 days. This makes it a cost-effective choice for the use case described in the question.

Hence, the correct answers are:

– Choose the Geo-redundant storage (GRS) for data backup.

– Enable the hierarchical namespace for a structured storage setup.

– Enable the Cool access tier.

The option that says: Choose the Locally redundant storage (LRS) for data backup is incorrect. Locally redundant storage (LRS) provides high durability by replicating data within a single data center in the primary region. However, it does not automatically replicate data to a secondary Azure region. LRS is the least expensive replication option, but it also provides the least durability and availability compared to other options.

The option that says: Enable the Hot access tier is incorrect because the Hot access tier is optimized for storing data that is accessed frequently. In this case, the requirement is to be cost-effective for storing data that is not accessed frequently, which is not the use case for the Hot access tier.
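
Putting the three settings together, an account like this could be provisioned with the azure-mgmt-storage Python package, as sketched below. The resource group and region are hypothetical, the account name is shown in the lowercase form required for storage account names, and the field names follow recent versions of the SDK.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

subscription_id = "<subscription-id>"  # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

params = StorageAccountCreateParameters(
    location="eastus",
    kind="StorageV2",
    sku=Sku(name="Standard_GRS"),  # geo-redundant copy in the paired region
    is_hns_enabled=True,           # hierarchical namespace = Data Lake Storage Gen2
    access_tier="Cool",            # default tier for infrequently accessed blobs
)

account = client.storage_accounts.begin_create(
    "td-analytics-rg", "analyticdatastorage", params
).result()

print(account.primary_endpoints.dfs)  # Data Lake (ABFS) endpoint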

37
Q

You work for a company that has an Azure subscription and runs data centers in Sydney and London.
You are orchestrating the configuration of these two data centers as geo-clustered locations for the purpose of disaster recovery.
You must provide a solution that aligns with the following criteria:

Ensure data replication is conducted across a network of distributed nodes.

Ensure that the replication nodes are positioned in different geographic locations.

In the event of a regional outage, your application needs to fail over to the secondary region and still allow read operations.

The solution should be cost-effective.

What Azure storage redundancy option would be the most suitable choice?

A. Read-access geo-redundant storage (RA-GRS)
B. Read-access geo-zone-redundant storage (RA-GZRS)
C. Zone-redundant storage (ZRS)
D. Geo-redundant storage (GRS)

A

A. Read-access geo-redundant storage (RA-GRS)

Explanation:
RA-GRS, or Read-Access Geo-Redundant Storage, is similar to GRS but with the added benefit of providing read-only access to data in the secondary region if the primary region experiences an outage. This means that even if the primary region is unavailable, the data is still accessible for read-only operations. This feature is particularly useful for applications that need to perform read-only operations even during a failure.

Hence, the correct answer is: Read-access geo-redundant storage (RA-GRS).

Geo-redundant storage (GRS) is incorrect. Although this replicates your data to a secondary geographic location, it does not provide read access to the data in the secondary location.

Zone-redundant storage (ZRS) is incorrect. ZRS synchronously replicates your data across three Azure availability zones in the primary region. However, it does not replicate data to a secondary geographic location.

Read-access geo-zone-redundant storage (RA-GZRS) is incorrect. RA-GZRS combines the features of ZRS and RA-GRS. It synchronously replicates your data across three Azure availability zones in the primary region and also replicates the data to a secondary geographic location. It provides read access to the data in the secondary location. However, this option is more expensive than RA-GRS. Since the question asks for a cost-effective solution, this option is not suitable.
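
To illustrate what the read access on the secondary gives you, the sketch below points a blob client at the account’s secondary endpoint using the azure-storage-blob Python package; the account and container names are hypothetical.

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# With RA-GRS, the replicated data is readable at the "-secondary" endpoint
# even while the primary region is unavailable.
secondary_client = BlobServiceClient(
    account_url="https://tdstorageaccount-secondary.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

container = secondary_client.get_container_client("disaster-recovery-test")
for blob in container.list_blobs():
    print(blob.name)  # read-only operations succeed against the secondary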

38
Q
A