Review Mode Set 1 Dojo Flashcards

1
Q

Your company has an Azure Storage account named TutorialsDojo1.

You have to copy your files hosted on your on-premises network to TutorialsDojo1 using AzCopy.

What Azure Storage services will you be able to copy your data into?

A. Blob and File only
B. Blob, File, Table, and Queue
C. Table and Queue only
D. Blob, Table, and File only

A

A. Blob and File only

Explanation:
The Azure Storage platform is Microsoft’s cloud storage solution for modern data storage scenarios. Core storage services offer a massively scalable object store for data objects, disk storage for Azure virtual machines (VMs), a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store.

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account.
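As a sketch (the account, container, share names, and the SAS token below are placeholders, not values from the scenario), copying a local folder to each supported service looks like this:

```shell
# Copy a local folder into Blob storage (recursive). Replace the
# placeholder names and <SAS-token> with your own values.
azcopy copy "C:\local\files" \
  "https://tutorialsdojo1.blob.core.windows.net/mycontainer?<SAS-token>" \
  --recursive

# The same utility copies to an Azure file share.
azcopy copy "C:\local\files" \
  "https://tutorialsdojo1.file.core.windows.net/myshare?<SAS-token>" \
  --recursive
```

Note that the endpoint is always a blob or file endpoint; AzCopy has no equivalent for Table or Queue endpoints.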

Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.

Blob storage is designed for:

– Serving images or documents directly to a browser.

– Storing files for distributed access.

– Streaming video and audio.

– Writing to log files.

– Storing data for backup and restore disaster recovery, and archiving.

– Storing data for analysis by an on-premises or Azure-hosted service.

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token. You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.
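For illustration (the account, share, file path, and expiry are assumptions, and the command requires the account key or a connection string to be available to the CLI), a read-only SAS for a single file can be generated like this:

```shell
# Generate a read-only SAS token for one file in a share,
# valid until the given expiry date.
az storage file generate-sas \
  --account-name tutorialsdojo1 \
  --share-name myshare \
  --path docs/report.pdf \
  --permissions r \
  --expiry 2030-01-01T00:00Z
```

Appending the returned token to the file's URL gives anyone holding the link read access until the expiry time.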

File shares can be used for many common scenarios:

– Many on-premises applications use file shares. This feature makes it easier to migrate those applications that share data to Azure. If you mount the file share to the same drive letter that the on-premises application uses, the part of your application that accesses the file share should work with minimal, if any, changes.

– Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used by multiple developers in a group can be stored on a file share, ensuring that everybody can find them and that they use the same version.

– Diagnostic logs, metrics, and crash dumps are just three examples of data that can be written to a file share and processed or analyzed later.

Hence, the correct answer is: Blob and File only.

The option that says: Table and Queue only is incorrect because Table and Queue are not supported services by AzCopy.

The option that says: Blob, Table, and File only is incorrect because Table is not a supported service by AzCopy. The AzCopy command-line utility can only copy blobs or files to or from a storage account.

The option that says: Blob, File, Table, and Queue is incorrect. Although Blob and File types are supported by AzCopy, the Table and Queue services are not supported.

2
Q

Your organization has deployed multiple Azure virtual machines configured to run as web servers and an Azure public load balancer named TD1.

There is a requirement that TD1 must consistently route your user’s request to the same web server every time they access it.

What should you configure?

A. Hash based
B. Health probe
C. Session persistence: None
D. Session persistence: Client IP

A

D. Session persistence: Client IP

Explanation:
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.

Session persistence is also known as session affinity, source IP affinity, or client IP affinity. This distribution mode uses a two-tuple (source IP and destination IP) or three-tuple (source IP, destination IP, and protocol type) hash to route to backend instances.

When using session persistence, connections from the same client will go to the same backend instance within the backend pool.

Session persistence mode has two configuration types:

– Client IP (2-tuple) – Specifies that successive requests from the same client IP address will be handled by the same backend instance.

– Client IP and protocol (3-tuple) – Specifies that successive requests from the same client IP address and protocol combination will be handled by the same backend instance.
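A toy sketch of the idea (this is an illustration, not Azure's actual hashing algorithm): hashing the same 2-tuple always produces the same backend index, which is why the same client keeps reaching the same VM.

```shell
# Toy model of 2-tuple (source IP, destination IP) affinity.
# cksum derives a stable integer from the tuple; modulo picks a
# backend index, so an identical tuple maps to an identical backend.
backend_for() {
  src="$1"; dst="$2"; pool_size="$3"
  h=$(printf '%s|%s' "$src" "$dst" | cksum | cut -d ' ' -f 1)
  echo $((h % pool_size))
}

backend_for 203.0.113.10 20.50.1.4 3   # first request from this client
backend_for 203.0.113.10 20.50.1.4 3   # same tuple -> same backend
```

With "Session persistence: None", the load balancer adds variation (such as source port) to the hash input, so repeat requests may land anywhere in the pool.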

Hence, the correct answer is: Session persistence: Client IP.

Hash based is incorrect because this simply allows traffic from the same client IP to be routed to any healthy instance in the backend pool. You would need session persistence if you need users to connect to the same virtual machine for each request.

Session persistence: None is incorrect because this will route the user's request to any healthy instance in the backend pool.

Health probe is incorrect because this is only used to determine the health status of the instances in the backend pool. During load balancer creation, configure a health probe for the load balancer to use. This health probe will determine if an instance is healthy and can receive traffic.

3
Q

Your company has a Microsoft Entra ID tenant named tutorialsdojo.onmicrosoft.com and a public DNS zone for tutorialsdojo.com.

You added the custom domain name tutorialsdojo.com to Microsoft Entra ID. You need to verify that Azure can verify the domain name.

What DNS record type should you use?

A. SOA
B. CNAME
C. A
D. MX

A

D. MX

Explanation:
Every new Microsoft Entra ID tenant comes with an initial domain name, <domainname>.onmicrosoft.com. You can’t change or delete the initial domain name, but you can add your organization’s names. Adding custom domain names helps you to create user names that are familiar to your users, such as azure@tutorialsdojo.com.

You can verify your custom domain name by using TXT or MX record types.

Hence, the correct answer is: MX.

A, CNAME, and SOA are incorrect because these record types are not supported by Microsoft Entra ID for verifying your custom domain. Only TXT and MX record types are supported.

4
Q

A company has two virtual networks named TDVnet1 and TDVnet2. A site-to-site VPN, using a VPN Gateway (TDGW1) with static routing, connects your on-premises network to TDVnet1. On your Windows 10 computer, TD1, you’ve set up a point-to-site VPN connection to TDVnet1.

You’ve recently established a virtual network peering between TDVnet1 and TDVnet2. Tests confirm connectivity to TDVnet2 from your on-premises network and to TDVnet1 from TD1. However, TD1 is currently unable to access TDVnet2.

What steps are necessary to enable a connection from TD1 to TDVnet2?

A. Restart TDGW1 to re-establish the connection.
B. Enable transit gateway for TDVnet1.
C. Download the VPN client configuration file and re-install it on TD1.
D. Enable transit gateway for TDVnet2.

A

C. Download the VPN client configuration file and re-install it on TD1.

Explanation:
Point-to-Site (P2S) VPN connection allows you to create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client’s computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a helpful solution to utilize instead of S2S VPN when you have only a few clients that need to connect to a VNet.

As part of the Point-to-Site configuration, you install a certificate and a VPN client configuration package which are contained in a zip file. Configuration files provide the settings required for a native Windows, Mac IKEv2 VPN, or Linux clients to connect to a virtual network over Point-to-Site connections that use native Azure certificate authentication and are specific to the VPN configuration for the virtual network.

Take note that after creating the point-to-site connection between TD1 and TDVnet1, there is already a change in network topology when you created the virtual network peering with TDVnet1 and TDVnet2. Whenever there is a change in the topology of your network, you will always need to download and re-install the VPN configuration file.
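A sketch of regenerating the P2S client package with the Azure CLI (the resource group name is an assumption; the command returns a URL to the zip file that you then install on TD1):

```shell
# Regenerate the VPN client configuration package after the
# topology change introduced by the new VNet peering.
az network vnet-gateway vpn-client generate \
  --resource-group TD-RG \
  --name TDGW1 \
  --authentication-method EAPTLS
```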

Hence, the correct answer is: Download the VPN client configuration file and re-install it on TD1.

The option that says: Restart TDGW1 to re-establish the connection is incorrect because restarting the VPN gateway is only done when you lose cross-premises VPN connectivity on one or more Site-to-Site VPN tunnels. In this scenario, TD1 can connect to TDVnet1 which implies that TDGW1 is working and running.

The options that say: Enable transit gateway for TDVnet1 and Enable transit gateway for TDVnet2 are incorrect. Transit gateway is a peering property that lets one virtual network use the VPN gateway in the peered virtual network for cross-premises or VNet-to-VNet connectivity. Since TDVnet2 can connect to the on-premises network, it means that the transit gateway is already enabled and as such, enabling the transit gateway is not necessary.

5
Q

Due to compliance requirements, you need to find a solution for the following:

Traffic between the web tier and application tier must be spread equally across all the virtual machines.

The web tier must be protected from SQL injection attacks.

Which Azure solution would you recommend for each requirement?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

Traffic between the web tier and application tier must be spread equally across all the virtual machines. 

(Internal Load Balancer)

The web tier must be protected from SQL injection attacks.

(Application Gateway with Web Application Firewall)

A
6
Q

You have a server in your on-premises datacenter that contains a DNS server named TD1 with a primary DNS zone for the tutorialsdojo.com domain.

You have an Azure subscription named TD-Subscription1.

You plan to migrate the tutorialsdojo.com zone to an Azure DNS zone in TD-Subscription1. You must ensure that you minimize administrative effort.

Which two tools can you use?

A. Azure CLI
B. Azure PowerShell
C. Azure CloudShell
D. Azure Portal
E. Azure Resource Manager templates

A

A. Azure CLI
D. Azure Portal

Explanation:
Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.

You can’t use Azure DNS to buy a domain name. For an annual fee, you can buy a domain name by using App Service domains or a third-party domain name registrar. Your domains can then be hosted in Azure DNS for record management.

A DNS zone file is a text file that contains details of every Domain Name System (DNS) record in the zone. It follows a standard format, making it suitable for transferring DNS records between DNS systems. Using a zone file is a quick, reliable, and convenient way to transfer a DNS zone into or out of Azure DNS.

Take note that Azure DNS supports importing and exporting zone files by using the Azure command-line interface (CLI) and Azure Portal. Zone file import is NOT supported via Azure PowerShell and Azure Cloud Shell.

The Azure CLI is a cross-platform command-line tool used for managing Azure services. It is available for the Windows, Mac, and Linux platforms.
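A sketch of the import with the Azure CLI, assuming the zone has first been exported from TD1 to a standard zone file (the resource group and file name are placeholders):

```shell
# Import the exported zone file into an Azure DNS zone.
# The zone is created if it does not already exist.
az network dns zone import \
  --resource-group TD-RG \
  --name tutorialsdojo.com \
  --file-name tutorialsdojo.com.zone
```

The same import can be performed interactively in the Azure Portal, which is the second supported tool.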

Hence, the correct answers are:

– Azure CLI

– Azure Portal

Azure PowerShell, Azure Resource Manager templates, and Azure CloudShell are incorrect because these tools are not supported by Azure DNS for importing a DNS zone file. Only Azure CLI and Azure Portal are supported.

7
Q

You created a new Recovery Services vault in your Azure account as part of your company’s Disaster Recovery Plan. Your account subscription has the following virtual machines, each with its respective auto-shutdown configuration:

  1. VirtualMachine1 - Auto-shutdown at 17:00
  2. VirtualMachine2 - Off
  3. VirtualMachine3 - Auto-shutdown at 23:00
  4. VirtualMachine4 - Off

The scheduled backup will run every day at 23:59.

Which of the following virtual machines allows you to create a backup using the Azure Backup service?

A. VirtualMachine1 and VirtualMachine3
B. VirtualMachine1, VirtualMachine2, VirtualMachine3, and VirtualMachine4
C. VirtualMachine1, VirtualMachine2, and VirtualMachine4
D. VirtualMachine2 and VirtualMachine4

A

B. VirtualMachine1, VirtualMachine2, VirtualMachine3, and VirtualMachine4

Explanation:
With Azure Backup service, you can back up on-premises machines, workloads, and Azure VMs. If you would recall, the VM in a stopped/deallocated state only stops the virtual machine. Take note that Azure Backup only takes snapshots of the VM disks. This means that even if the VM status is running or stopped, you can still create a backup as long as the disk is attached to the VM.
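For illustration (the resource group, vault, and policy names are assumptions), enabling backup for one of the VMs looks like this:

```shell
# Enable backup for a VM; the auto-shutdown setting does not matter
# because Azure Backup snapshots the attached disks regardless of
# the VM's power state.
az backup protection enable-for-vm \
  --resource-group TD-RG \
  --vault-name TDVault \
  --vm VirtualMachine2 \
  --policy-name DefaultPolicy
```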

When creating a backup, you need to ensure that the virtual machines are in the same region as the Recovery Services vault. Based on the given table in the question, all the virtual machines enable you to create a backup using the Azure Backup service.

Hence, the correct answer is: VirtualMachine1, VirtualMachine2, VirtualMachine3 and VirtualMachine4.

The option that says: VirtualMachine1 and VirtualMachine3 is incorrect because you can also create a backup on both VirtualMachine2 and VirtualMachine4.

The option that says: VirtualMachine2 and VirtualMachine4 is incorrect. Just like the option above, you can also create a backup on VirtualMachine1 and VirtualMachine3. Take note that scheduled backups still run even if you shut down the virtual machine.

The option that says: VirtualMachine1, VirtualMachine2, and VirtualMachine4 is incorrect. Even if VirtualMachine3 is scheduled to shut down at 23:00, before the backup runs at 23:59, you can still create a backup because Azure Backup only takes snapshots of the VM disks regardless of the VM's power state.

8
Q

You plan to provision ten virtual machines using the Azure VM scale sets.

The virtual machines must be optimized for large-scale stateless workloads.

Which of the following options allows you to deploy VMs as quickly as possible?

A. Create a VM scale set and set the orchestration mode to Flexible.
B. Create ten virtual machines in the Azure portal.
C. Create ten virtual machines in Azure CLI using the az vm create command.
D. Create a VM scale set and set the orchestration mode to Uniform.

A

D. Create a VM scale set and set the orchestration mode to Uniform.

Explanation:

Azure Virtual Machine Scale Sets provide a logical grouping of platform-managed virtual machines. With scale sets, you create a virtual machine configuration model, automatically add or remove additional instances based on CPU or memory load, and automatically upgrade to the latest OS version. Traditionally, scale sets allow you to create virtual machines using a VM configuration model provided at the time of scale set creation, and the scale set can only manage virtual machines that are implicitly created based on the configuration model.

Scale set orchestration modes give you more control over how virtual machine instances are managed by the scale set. The two types of orchestration modes are:

– Uniform – uses a virtual machine profile or template to scale up to desired capacity. This orchestration mode is mainly used for large-scale stateless workloads that require identical VM instances. It also provides fault domain high availability (less than 100 VMs).

– Flexible – offers high availability with identical or multiple VM types (up to 1000 VMs) by spreading VMs across fault domains in a region or within an Availability Zone.

Orchestration mode also helps you design a highly available infrastructure since the virtual machines are deployed in fault domains and Availability Zones. In Flexible orchestration mode, you manually create and add the VM to the scale set. While in Uniform orchestration mode, you just need to define a VM model and Azure will automatically create identical instances based on that model. Remember that the orchestration mode is defined when you create the scale set and cannot be changed or updated later.

In this scenario, you must use the Azure virtual machine scale sets to provision ten virtual machines. Among the options given, you can select between the two orchestration modes: Uniform and Flexible. It is stated in the scenario that the virtual machines must be optimized for large-scale stateless workloads. Therefore, you must set the orchestration mode to Uniform in order to satisfy this requirement.
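A sketch of the deployment with the Azure CLI (the resource group, scale set name, and image alias are placeholders; image aliases vary with CLI version):

```shell
# Create a Uniform-orchestration scale set with ten identical instances.
# The orchestration mode cannot be changed after creation.
az vmss create \
  --resource-group TD-RG \
  --name TDScaleSet \
  --orchestration-mode Uniform \
  --image Ubuntu2204 \
  --instance-count 10 \
  --generate-ssh-keys
```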

Hence, the correct answer is: Create a VM scale set and set the orchestration mode to Uniform.

The option that says: Create a VM scale set and set the orchestration mode to Flexible is incorrect because the requirement is to create virtual machines that are optimized for large-scale stateless workloads. Flexible orchestration mode is mainly used for quorum-based or stateful workloads.

The option that says: Create ten virtual machines in Azure CLI using the az vm create command is incorrect because you need to use a Uniform orchestration scale set to provision the ten virtual machines instead of creating them individually through the CLI. Also, the az vm create command only creates one virtual machine at a time.

The option that says: Create ten virtual machines in the Azure portal is incorrect. Instead of creating one virtual machine at a time, you must use a VM scale set and set the orchestration mode to Uniform.

9
Q

You plan to host a web application in three Azure virtual machines.

You need to make sure that there are at least two virtual machines running if an Azure data center becomes inaccessible.

What should you do?

A. Deploy all the virtual machines in a single Availability Set.
B. Deploy all the virtual machines in a single Availability Zone.
C. Deploy one virtual machine in each Availability Zone.
D. Deploy one virtual machine in each Availability Set.

A

C. Deploy one virtual machine in each Availability Zone.

Explanation:
Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers. Typically, you choose a VM when you need more control over the computing environment. An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and installing the software that runs on it.

In Azure, there are two options for managing availability and resiliency for your applications. The first option is availability sets. It is used to protect applications from hardware failures within an Azure data center. Meanwhile, availability zones are used to protect applications against Azure data center failures. Take note that an availability set only protects your resources from planned and unplanned maintenance. It cannot protect your applications from data center outages. Also, in the availability set, if a hardware or software failure happens, only a subset of your VMs are impacted and your overall solution stays operational.

For example, when you create a new VM, you specify the availability set as a parameter. Azure makes sure the VMs are isolated across multiple physical hardware resources within the data center. If the physical hardware that one of your servers is running on has a problem, you know the other instances of your servers will keep running because they’re on different hardware.

Based on the given requirements, you can protect your web application from data center outages if you will deploy the three virtual machines in a separate Availability Zone. Remember that Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there is a minimum of three separate zones in all enabled regions. The physical separation of Availability Zones within a region protects applications and data from datacenter failures.
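A sketch of the layout with the Azure CLI (the resource group, VM names, and image alias are placeholders): one VM per zone means the loss of any single zone still leaves two web servers running.

```shell
# One web server VM in each of the three availability zones.
for zone in 1 2 3; do
  az vm create \
    --resource-group TD-RG \
    --name "webvm$zone" \
    --image Ubuntu2204 \
    --zone "$zone" \
    --generate-ssh-keys
done
```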

Hence, the correct answer is: Deploy one virtual machine in each Availability Zone.

The option that says: Deploy all the virtual machines in a single Availability Zone is incorrect because if the Availability Zone becomes inaccessible then all of the resources in that location will also be affected. To achieve a highly available application, you must deploy the virtual machines in multiple Availability Zones.

The option that says: Deploy all the virtual machines in a single Availability Set is incorrect because an Availability Set only isolates virtual machines from each other. This means that the virtual machines are still in the same data center. To protect your application from a data center outage, you must deploy the virtual machines in three Availability Zones.

The option that says: Deploy one virtual machine in each Availability Set is incorrect. Deploying the virtual machines in a separate Availability Set does not mean that it is protected from a data center outage. Take note that this option only ensures that your VMs are distributed across multiple fault domains in the Azure data center. Therefore, if the data center becomes unavailable, your application becomes unavailable too.

10
Q

You have deployed two Azure virtual machines to host a web application.

You plan to set up an Availability Set for your application.

You need to make sure that the application is available during planned maintenance.

Which of the following options will allow you to accomplish this?

A. Assign two fault domains in the Availability Set.
B. Assign one update domain in the Availability Set.
C. Assign one fault domain in the Availability Set.
D. Assign two update domains in the Availability Set.

A

D. Assign two update domains in the Availability Set.

Explanation:
Planned maintenance is periodic updates made by Microsoft to the underlying Azure platform to improve the platform infrastructure’s overall reliability, performance, and security that your virtual machines run on.

To ensure that the application is available during planned maintenance, you must assign two update domains in the Availability Set. An update domain will make sure that the VMs in the Availability Set are not updated at the same time. The order of update domains being rebooted may not proceed sequentially during planned maintenance, but only one update domain is rebooted at a time. A rebooted update domain is given 30 minutes to recover before maintenance is initiated on a different update domain.
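A sketch of creating such an availability set with the Azure CLI (the resource group and set names are placeholders); VMs subsequently created with this set are spread across its update domains automatically:

```shell
# Two update domains: planned maintenance reboots only one domain at a
# time, so one of the two VMs always stays online.
az vm availability-set create \
  --resource-group TD-RG \
  --name TDAvSet \
  --platform-update-domain-count 2 \
  --platform-fault-domain-count 2
```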

Hence, the correct answer is: Assign two update domains in the Availability Set.

The option that says: Assign one update domain in the Availability Set is incorrect because you need to assign one update domain for each virtual machine.

The option that says: Assign two fault domains in the Availability Set is incorrect because the requirement in the scenario is only planned maintenance. Even if you assigned two or more fault domains, the application will still be unavailable during planned maintenance. You must assign two update domains and one virtual machine for each update domain.

The option that says: Assign one fault domain in the Availability Set is incorrect because the fault domain is mainly used for unplanned maintenance. Instead of assigning a fault domain in the Availability Set, you must assign an update domain in order to satisfy this requirement.

11
Q

You deployed four Azure virtual machines in the following regions.

VirtualMachine1 - North Central US
VirtualMachine2 - North Central US
VirtualMachine3 - West Central US
VirtualMachine4 - West Central US

You have created a Recovery Services vault to hold backup data for VirtualMachine1 and VirtualMachine2.

You need to ensure that VirtualMachine3 and VirtualMachine4 are protected by a storage entity in Azure that houses data.

What should you do?

A. Deploy a Storage Sync Service.
B. Create another Recovery Services vault.
C. Create a BlockBlobStorage account.
D. Use the az backup policy set command in the Azure CLI.

A

B. Create another Recovery Services vault.

Explanation:

A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.

In this scenario, VirtualMachine1 and VirtualMachine2 are already protected by the Recovery Services vault. A Recovery Services vault is an entity that stores the backups and recovery points created over time for a particular region only. Since VirtualMachine3 and VirtualMachine4 are in a different region, you must create a new Recovery Services vault. Remember that a Recovery Services vault must be in the same region as the virtual machines to create a recovery point. Therefore, to successfully back up the virtual machines, they must be in the same subscription and region as the vault.
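A sketch of creating the second vault in the VMs' region with the Azure CLI (the resource group and vault names are placeholders):

```shell
# The new vault must live in the same region as VirtualMachine3 and
# VirtualMachine4 (West Central US).
az backup vault create \
  --resource-group TD-RG \
  --name TDVault2 \
  --location westcentralus
```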

Hence, the correct answer is: Create another Recovery Services vault.

The option that says: Deploy a Storage Sync Service is incorrect because setting up an Azure File Sync is not needed in the scenario. Take note that the only requirement in the scenario is to protect the data of VirtualMachine3 and VirtualMachine4 by a storage entity in Azure that houses the data. Therefore, to copy the data and configuration information of a virtual machine, you must use a Recovery Services vault.

The option that says: Create a BlockBlobStorage account is incorrect because this storage account is mainly used for workloads with high transaction rates or that require very fast access times. Since you need to protect the data in VirtualMachine3 and VirtualMachine4, you must use a Recovery Services vault and not a BlockBlobStorage account.

The option that says: Use the az backup policy set command in the Azure CLI is incorrect because this command only updates the existing policy in the Azure Backup service with the details that you provide. You can’t use the az backup policy set command to hold the backup data of VirtualMachine3 and VirtualMachine4.

12
Q

You created a new Azure web app with an F1 App Service plan.

You want to add a staging slot for your application but the option seems unavailable in the Azure Portal.

What must be done first to satisfy the above requirement?

A. Scale up the App Service plan.
B. Scale-out the App Service plan.
C. Add a new deployment slot.
D. Configure a custom domain.

A

A. Scale up the App Service plan.

Explanation:
If the option to add a deployment slot appears unavailable in the Azure Portal, it means that your App Service plan does not support staging slots. To resolve this problem, upgrade your App Service plan to a Standard or Premium tier. After you have upgraded your plan, you can add a slot under Deployment slots.
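A sketch of the two steps with the Azure CLI (the resource group, plan, and app names are placeholders):

```shell
# Scale up from F1 to a tier that supports deployment slots (e.g., S1)...
az appservice plan update \
  --resource-group TD-RG \
  --name TDPlan \
  --sku S1

# ...then the staging slot can be created.
az webapp deployment slot create \
  --resource-group TD-RG \
  --name tdwebapp \
  --slot staging
```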

Hence, the correct answer is: Scale up the App Service plan.

The option that says: Add a new deployment slot is incorrect because you can’t add a slot using the F1 App Service plan. You must first upgrade your plan tier to a Standard or Premium tier.

The option that says: Scale-out the App Service plan is incorrect because the process of scaling out only allows you to enable autoscaling of your resources. This option will not help you add a staging slot to your application.

The option that says: Configure a custom domain is incorrect because a custom hostname is not needed and irrelevant in the scenario. Also, you can’t configure a custom domain in an F1 App Service plan. You must upgrade your plan tier first to enable this feature.

13
Q

You plan to use an Azure Resource Manager (ARM) template to deploy 5 web apps in the same region.

You are required to launch the application in the most cost-effective way.

Which of the following options fulfills this requirement?

A. Create one App Service plan.
B. Create an Application Gateway
C. Create a CDN endpoint.
D. Create five App Service plans.

A

A. Create one App Service plan.

Explanation:
Azure Resource Manager (ARM) templates are primarily used to implement infrastructure as code for your Azure solutions. The template is a JavaScript Object Notation (JSON) file that defines your project’s infrastructure and configuration. The template uses declarative syntax, which lets you state what you intend to deploy without writing the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.

The main requirement in this scenario is to deploy web apps in the most cost-effective way. To accomplish this requirement, you can create one App Service plan and use the plan to deploy five web apps. If you recall the Azure App Service concepts, you can configure one or more apps to run on the same computing resources (or in the same App Service plan). Therefore, if you deploy the five web apps in the same region, you can use one App Service plan for your resources.
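A sketch of the idea with the Azure CLI (the resource group, plan, and app names are placeholders, and web app names must be globally unique): the plan is paid for once, and all five apps share its compute.

```shell
# One App Service plan...
az appservice plan create --resource-group TD-RG --name SharedPlan --sku S1

# ...hosting all five web apps on the same compute resources.
for i in 1 2 3 4 5; do
  az webapp create \
    --resource-group TD-RG \
    --plan SharedPlan \
    --name "tdwebapp$i"
done
```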

Hence, the correct answer is: Create one App Service plan.

The option that says: Create five App Service plans is incorrect because the requirement in this scenario is to deploy the five web apps to the same region in the most cost-effective way. This approach is applicable if you need to deploy web apps in different regions.

The option that says: Create an Application Gateway is incorrect because you can’t deploy five web apps using Azure Application Gateway. This service is simply a web traffic load balancer and is not capable of hosting an application.

The option that says: Create a CDN endpoint is incorrect because a CDN endpoint only represents a specific configuration of content delivery behavior and access. You must create one App Service plan to fulfill the requirement in the scenario.

14
Q

You need to identify idle and underutilized resources to reduce the overall costs of your account. The service tier of your development virtual machines must also be changed to a less expensive offering.

What Azure service should you use?

A. Azure Advisor
B. Azure Event Hubs
C. Azure Monitor
D. Azure Compliance Manager

A

A. Azure Advisor

Explanation:
Azure Advisor is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. It analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost-effectiveness, performance, reliability, and security of your Azure resources.

With Azure Advisor, you can optimize and improve the efficiency of your infrastructure by identifying idle and underutilized resources. Azure Cost Management works with Azure Advisor to provide cost optimization recommendations. To view cost optimization recommendations for a subscription, you can open the desired scope in the Azure portal and select Advisor recommendations. The list of recommendations identifies usage inefficiencies or shows purchase recommendations that can help you save costs.

Hence, the correct answer is: Azure Advisor.

15
Q

You are managing a Microsoft Entra tenant that has 500 user accounts.

You created a new user account named AppAdmin.

You must assign the role of Application Administrator to the AppAdmin user account.

What should you do in the Microsoft Entra ID settings to accomplish this requirement?

A. Select the user profile and add the role assignments.
B. Select the user profile and add the user to the admin group.
C. Select the user profile and assign it to an administrative unit.
D. Select the user profile and enable the My Staff feature.

A

A. Select the user profile and add the role assignments.

Explanation:
Microsoft Entra ID is Microsoft’s cloud-based identity and access management service, which helps your employees sign in and access both external resources, such as Microsoft 365, the Azure portal, and thousands of other SaaS applications, and internal resources, such as apps on your corporate network and intranet, along with any cloud apps developed by your own organization.

Microsoft Entra has a set of built-in admin roles for granting access to manage configuration in Microsoft Entra for all applications. These roles are the recommended way to grant IT experts access to manage broad application configuration permissions without granting access to manage other parts of Microsoft Entra not related to application configuration. Here are the two common built-in roles in Microsoft Entra ID:

– Application Administrator: Users in this role can create and manage all aspects of enterprise applications, application registrations, and application proxy settings. This role also grants the ability to consent to delegated permissions and application permissions, excluding Microsoft Graph. Users assigned to this role are not added as owners when creating new application registrations or enterprise applications.

– Cloud Application Administrator: Users in this role have the same permissions as the Application Administrator role, excluding the ability to manage application proxy. Users assigned to this role are not added as owners when creating new application registrations or enterprise applications.

If you want to grant a user permission to manage Microsoft Entra resources, you must assign them to a role that provides the permissions they need. Based on the given scenario, the new user account needs the role of Application Administrator. To grant a role to the new user account, you must select the user profile and click on add assignments in the assigned roles option. Add the Application Administrator role, and the user can now create and manage all aspects of app registrations and enterprise apps.

Hence, the correct answer is: Select the user profile and add the role assignments.

The option that says: Select the user profile and add the user to the admin group is incorrect because adding the user to the admin group doesn’t mean that the Application Administrator’s role is automatically assigned to the user account.

The option that says: Select the user profile and assign it to an administrative unit is incorrect because this option only restricts permissions in a role to any portion of your organization that you define. Take note that the requirement in the scenario is to assign an Application Administrator role to the new user account and not to restrict its permissions in your account.

The option that says: Select the user profile and enable the My Staff feature is incorrect because the My Staff feature simply enables you to delegate to a figure of authority, such as a store manager or a team lead, the permissions to ensure that their staff members are able to access their Microsoft Entra accounts.

16
Q

You need to use an existing Azure Resource Manager (ARM) template to provision ten Azure virtual machines.

You should retrieve the password using the ARM template. The password must not be stored in plain text.

Which of the following options can help you accomplish this?

A. Create a key vault and configure an access policy.
B. Create a storage account and configure data protection.
C. Configure label protection.
D. Configure Microsoft Entra Password Protection.

A

A. Create a key vault and configure an access policy.

Explanation:
Azure Key Vault is a cloud service for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys. The Key Vault service supports two types of containers: vaults and managed HSM pools. Vaults support storing software and HSM-backed keys, secrets, and certificates, while managed HSM pools only support HSM-backed keys.

In this scenario, you can use the ARM template to retrieve the password in Azure Key Vault. Instead of putting a secure value (like a password) directly in your template or parameter file, you can retrieve the value from an Azure Key Vault during deployment. You retrieve the value by referencing the key vault and secret in your parameter file. The value is never exposed because you only reference its key vault ID.
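
As a sketch, a parameter file can reference the secret like this (the vault resource ID segments and the secret name below are placeholders, not values from the scenario):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "vmAdminPassword"
      }
    }
  }
}
```

For this reference to work, the key vault must have the enabledForTemplateDeployment property set to true, and the identity performing the deployment needs permission to read the secret.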

Hence, the correct answer is: Create a key vault and configure an access policy.

The option that says: Create a storage account and configure data protection is incorrect because you can’t store a secret in a storage account. You must use a key vault to store and use several types of secret/key data. Also, data protection in the storage account is primarily used for the recovery and tracking of blobs.

The option that says: Configure label protection is incorrect. This option is a feature of Azure Information Protection. Label protection is used for protecting sensitive documents and emails by using the Rights Management service. You can’t use label protection to store secret values in Azure Key Vault.

The option that says: Configure Microsoft Entra Password Protection is incorrect because this option only detects and blocks known weak passwords in your organization. Take note that the requirement in the scenario is to store the password as a secret that is not in plaintext. Therefore, you must use the Azure Key Vault.

17
Q

Your company has a Microsoft Entra tenant named TD-Entra-ID that contains 3 User Administrators and 2 Global Administrators.

You recently purchased 5 Premium P1 licenses.

You need to make sure that the users in your tenant have access to all the Premium P1 features.

What should you do to satisfy the above requirement?

A. Select the user in your tenant and assign it to an administrative unit.
B. Select the user in your tenant and assign a new role in the Directory role blade of each user.
C. In the Licenses blade of Microsoft Entra ID, select the user in your tenant and assign the license.
D. Select the user in your tenant and add the user to a Microsoft Entra group.

A

C. In the Licenses blade of Microsoft Entra ID, select the user in your tenant and assign the license.

Explanation:
Microsoft Entra ID is a cloud-based identity and access management service that enables your employees to access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.

Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.

There are several license plans available for the Microsoft Entra ID service, including:

– Microsoft Entra ID Free

– Microsoft Entra ID P1

– Microsoft Entra ID P2

To ensure that the users in your tenant have access to Premium P1 license features, you must manually add the license to each user or add the license to a group. Remember that only the users with active licenses can access and use the licensed Microsoft Entra ID services. Also, licenses are applied per tenant, and you can’t transfer them to other tenants.

Hence, the correct answer is: In the Licenses blade of Microsoft Entra ID, select the user in your tenant and assign the license.

18
Q

You plan to migrate your business-critical application to Azure virtual machines.

You need to make sure that at least two VMs are available during planned Azure maintenance.

What should you do?

A. Create an Availability Set that has three update domains and two fault domains.
B. Create an Availability Set that has three update domains and one fault domain.
C. Create an Availability Set that has two update domains and three fault domains.
D. Create an Availability Set that has one update domain and three fault domains.

A

A. Create an Availability Set that has three update domains and two fault domains.

Explanation:
Azure periodically updates its platform to improve the reliability, performance, and security of the host infrastructure for virtual machines. The purpose of these updates ranges from patching software components in the hosting environment to upgrading networking components or decommissioning hardware.

Updates rarely affect the hosted VMs. When updates do have an effect, Azure chooses the least impactful method for updates:

– If the update doesn’t require a reboot, the VM is paused while the host is updated, or the VM is live-migrated to an already updated host.

– If maintenance requires a reboot, you’re notified of the planned maintenance. Azure also provides a time window in which you can start the maintenance yourself, at a time that works for you. The self-maintenance window is typically 35 days unless the maintenance is urgent. Azure is investing in technologies to reduce the number of cases in which planned platform maintenance requires the VMs to be rebooted.

The main objective of the question is to test your understanding of update and fault domains. Since it’s a requirement in the scenario that at least two virtual machines must be available during planned maintenance, you should add three update domains in the Availability Set. Take note that each virtual machine in your availability set is assigned to an update domain and a fault domain.

During scheduled maintenance, only one update domain is updated at any given time. Update domains aren’t necessarily updated sequentially. A rebooted update domain is given 30 minutes to recover before maintenance is initiated on a different update domain. For fault domains, you can set a minimum number of fault domains in your Availability Set because the main requirement in the scenario is to prepare for planned maintenance.
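
Since each VM in an availability set is assigned to an update domain in round-robin fashion and only one update domain is rebooted at a time, the arithmetic can be checked with a short sketch (the helper function names below are invented for illustration):

```python
from math import ceil

def assign_update_domains(vm_count: int, update_domains: int) -> list[int]:
    # Round-robin placement: VM 0 -> UD 0, VM 1 -> UD 1, and so on,
    # wrapping around once every update domain has a VM.
    return [vm % update_domains for vm in range(vm_count)]

def min_available_during_maintenance(vm_count: int, update_domains: int) -> int:
    # Only one update domain is rebooted at a time, so in the worst case
    # the most heavily loaded domain is the one taken offline.
    worst_case_down = ceil(vm_count / update_domains)
    return vm_count - worst_case_down

# Three VMs across three update domains: one VM per domain, so any
# single-domain reboot leaves two VMs running.
print(assign_update_domains(3, 3))             # [0, 1, 2]
print(min_available_during_maintenance(3, 3))  # 2
```

With only two update domains, the worst case would leave just one of three VMs running, which is why the scenario calls for three update domains.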

Hence, the correct answer is: Create an Availability Set that has three update domains and two fault domains.

19
Q

Your company has an Azure Kubernetes Service (AKS) cluster and a Windows 10 workstation with Azure CLI installed.

You plan to use the kubectl client on Windows 10.

Which of the following commands should you run?

A. az aks create
B. az aks nodepool
C. az aks install-cli
D. az aks browse

A

C. az aks install-cli

Explanation:
Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure. You only manage and maintain the agent nodes.

To connect to the Kubernetes cluster from your local computer, you need to use kubectl (the Kubernetes command-line client). But before you can use kubectl, you should first run the command az aks install-cli in the command-line interface. kubectl allows you to deploy applications, inspect and manage cluster resources, and view logs.

Hence, the correct answer is: az aks install-cli.

The option that says: az aks nodepool is incorrect because this command only allows you to manage node pools in a Kubernetes cluster. It is stated in the scenario that you need to use the kubectl client. Therefore, you should first run the az aks install-cli command.

The option that says: az aks create is incorrect because this will just create a new managed Kubernetes cluster. Take note that in this scenario, you need to use the Kubernetes command-line client in Windows 10. In order for you to manage cluster resources, you should use the kubectl client.

The option that says: az aks browse is incorrect because it will simply show the dashboard of the Kubernetes cluster in your web browser. Instead of running the command az aks browse, you should run az aks install-cli to download and install the Kubernetes command-line tool.

20
Q

You created a new Azure subscription. The subscription has a resource group named TD-RG. The resources in TD-RG are created using ARM templates.

You need to get the exact date and time when the resources in TD-RG were deployed.

Solution: In the resource group settings, select Policies.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.

The policy in the resource group is mainly used for implementing governance for resource consistency, regulatory compliance, security, cost, and management. Azure Policy does not contain the date and time when the resources were deployed.

To verify the date and time the resources were deployed, you can select the resource group and click the deployment settings. You will see a summary of the deployment: the deployment name, status, last modified, duration, and related events. If you select the specific template, you can check the inputs, outputs, and the template used during deployment.

Hence, the correct answer is: No.

21
Q

You created a new Azure subscription. The subscription has a resource group named TD-RG. The resources in TD-RG are created using ARM templates.

You need to get the exact date and time when the resources in TD-RG were deployed.

Solution: In the resource group settings, select Properties.

Does the solution meet the goal?

A. No
B. Yes

A

A. No

Explanation:
A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.

The properties in the resource group contain the name, location, location ID, resource ID, subscription, and subscription ID. This setting does not contain the date and time when the resources were deployed.

To verify the date and time the resources were deployed, you can select the resource group and click the deployment settings. You will see a summary of the deployment: the deployment name, status, last modified, duration, and related events. If you select a particular template, you can check the inputs, outputs, and the template used during deployment.

Hence, the correct answer is: No.

22
Q

You created a new Azure subscription. The subscription has a resource group named TD-RG. The resources in TD-RG are created using ARM templates.

You need to get the exact date and time when the resources in TD-RG were deployed.

Solution: In the resource group settings, select Deployments.

Does the solution meet the goal?

A. No
B. Yes

A

B. Yes

Explanation:
A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.

To verify the date and time the resources were deployed, you can select the resource group and click the deployment settings. You will see a summary of the deployment: the deployment name, status, last modified, duration, and related events. If you select the template, you can check the inputs, outputs, and the template used during deployment.

Hence, the correct answer is: Yes.

23
Q

You need to use an Azure storage service that can be mounted concurrently on the cloud and on-premises data center.

Which of the following services fulfills this requirement?

A. Azure Blob
B. Azure Files
C. Azure Disk
D. Azure Table

A

B. Azure Files

Explanation:
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files SMB file shares are accessible from Windows, Linux, and macOS clients. Azure Files NFS file shares are accessible from Linux or macOS clients. Additionally, Azure Files SMB file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.

The requirements in the scenario are:

– Use an Azure storage service for file shares.

– Ensure that the file share can be mounted concurrently from Azure and an on-premises data center.

Among the given options, only Azure Files can satisfy the given requirements. Azure file shares can be mounted concurrently on the cloud or on-premises deployments. Azure Files can be used to completely replace or supplement traditional on-premises file servers or NAS devices. Azure File SMB file shares can also be replicated with Azure File Sync to Windows Servers, either on-premises or in the cloud, for performance and distributed caching of the data where it’s being used.

Hence, the correct answer is: Azure Files.

Azure Blob is incorrect because this service can’t be mounted concurrently on the cloud and on-premises data center. Instead of using Azure Blob, you should use Azure Files.

Azure Table is incorrect because this service simply stores structured NoSQL data. You can’t mount this storage service to your on-premises data center.

Azure Disk is incorrect because this storage service can only be used on Azure resources. If you need to move your existing file server to the cloud, you can use Azure Files.

24
Q

You plan to create a solution that automatically increases the number of VMs when there is high demand.

What should you implement?

A. Deploy the virtual machine in an Availability Set.
B. Deploy the virtual machine in multiple Availability Zones.
C. Create Azure virtual machine scale sets.
D. Create an Azure ARM template to deploy a virtual machine.

A

C. Create Azure virtual machine scale sets.

Explanation:
In this scenario, you can create a VM scale set to automatically increase the number of VMs when there is high demand. Take note that scale sets are built from virtual machines. With scale sets, the management and automation layers are provided to run and scale your applications.

Hence, the correct answer is: Create Azure virtual machine scale sets.

The option that says: Deploy the virtual machine in an Availability Set is incorrect because an Availability Set only protects virtual machines against hardware failures and planned maintenance within a single data center. It does not automatically increase the number of virtual machines when demand is high.

The option that says: Deploy the virtual machine in multiple Availability Zones is incorrect. Just like the option above, the virtual machine won’t scale as the traffic increases by default. You have to create Azure virtual machine scale sets instead.

The option that says: Create an Azure ARM template to deploy a virtual machine is incorrect because this template only deploys a single virtual machine to Azure. If the template created a virtual machine scale set instead, then this option would satisfy the requirements in the scenario.

25
Q

You need to deploy a load balancer that supports SSL termination.

What Azure service should you use?

A. Azure Front Door
B. Azure Application Gateway
C. Azure Traffic Manager
D. Azure Load Balancer

A

B. Azure Application Gateway

Explanation:
Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 – TCP and UDP) and route traffic based on source IP address and port, to a destination IP address and port. Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example, URI path or host headers.

SSL termination refers to the process of decrypting encrypted traffic before passing it along to a web server. TLS is simply an updated, more secure version of SSL. An SSL connection sends encrypted data between a user and a web server by using a certificate for authentication. SSL termination speeds up the decryption process and reduces the processing burden on the backend servers.

Azure Application Gateway supports end-to-end traffic encryption and TLS/SSL termination. Based on the defined routing rules, the gateway applies the rules to the traffic, re-encrypts the packet, and forwards the packet to the appropriate server. Any reply from the web server goes back to the same process.

Hence, the correct answer is: Azure Application Gateway.

Azure Traffic Manager is incorrect because Traffic Manager does not support SSL termination. This service is mainly used for DNS-based traffic load balancing.

Azure Load Balancer is incorrect. Just like the option above, this service does not support SSL termination. You can use this service to create public and internal load balancers only.

Azure Front Door is incorrect. Although it supports SSL offloading, this service is not a load balancer. Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications.

26
Q

You have a Bicep file named main.bicep that defines an Azure Storage account resource.

@description('Storage Account name')
param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'

@description('Location for the storage account')
param location string = resourceGroup().location

resource stgAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    accessTier: 'Hot'
  }
}

output storageAccountId string = stgAccount.id

You need to deploy the resource using Azure CLI.
Which command should you use?

A. New-AzResourceGroupDeployment
B. az deployment group create
C. New-AzSubscriptionDeployment
D. az deployment sub create

A

B. az deployment group create

Explanation:
A Bicep file uses a domain-specific language (DSL) to define and deploy Azure resources in a declarative way. Its syntax is more concise and readable than the JSON-based Azure Resource Manager (ARM) templates previously used for the same purpose, and it resembles programming languages such as TypeScript or Python, which makes it easier to author.

During deployment, a Bicep file is transpiled into an ARM template (a JSON file), which Azure Resource Manager then uses to provision and manage the defined resources. In the file, you specify the resource types, properties, and dependencies between resources. Bicep files can be deployed using the Azure CLI, Azure PowerShell, or the Azure Resource Manager REST API.

Microsoft provides various tools and extensions for working with Bicep files, such as the Bicep Visual Studio Code extension, which offers syntax highlighting, IntelliSense, and other features that improve the authoring experience. Bicep is part of Microsoft's effort to simplify infrastructure as code on Azure.

To deploy the Bicep file main.bicep using the Azure CLI, you should use the command az deployment group create. This command is used to create a deployment at the resource group level.

Therefore, the correct answer is: az deployment group create.

New-AzResourceGroupDeployment is incorrect because New-AzResourceGroupDeployment is a PowerShell cmdlet, not an Azure CLI command. It is used to deploy Azure Resource Manager templates (ARM templates) in PowerShell, not in the Azure CLI environment.

New-AzSubscriptionDeployment is incorrect because, similar to the previous option, New-AzSubscriptionDeployment is a PowerShell cmdlet, not an Azure CLI command. It is used to deploy ARM templates at the subscription level in PowerShell, not in the Azure CLI environment.

az deployment sub create is incorrect because the az deployment sub create command is used to create a deployment at the subscription level, not the resource group level. The question specifies that you need to deploy the Bicep file to a resource group, so the correct command should target the resource group level.

27
Q

Your company hosts its business-critical Azure virtual machines in the Australia East region.

The servers are then replicated to a secondary region using Azure site recovery for disaster recovery.

The Australia East region is experiencing an outage and you need to failover to your secondary region.

Which three actions should you perform?

A. Run a test failover.
B. Run a failover.
C. Initiate replication.
D. Reprotect virtual machine.
E. Verify if the virtual machines are protected and healthy.
F. Run a failback.

A

B. Run a failover.
D. Reprotect virtual machine.
E. Verify if the virtual machines are protected and healthy.

Explanation:
A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases.

Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.

When you enable replication for a VM to set up disaster recovery, the Site Recovery Mobility service extension installs on the VM and registers it with Azure Site Recovery.

During replication, VM disk writes are sent to a cache storage account in the source region. Data is sent from there to the target region, and recovery points are generated from the data. When you fail over a VM during disaster recovery, a recovery point is used to restore the VM in the target region.

To perform a failover, you should complete the following steps:

– Verify the VM settings – Check that the VM is healthy and protected. You also need to verify that the VM is running a supported Windows or Linux operating system and that the VM complies with compute, storage, and networking requirements.

– Run a failover – In the failover tab, you are required to choose a recovery point. The Azure VM in the target region is created using data from this recovery point.

– Reprotect the VM – After failover, you reprotect the VM in the secondary region so that it replicates back to the primary region.

Hence, the correct answers are:

– Verify if the virtual machines are protected and healthy.

– Run a failover.

– Reprotect the VM.

Initiate replication is incorrect because this is the first step in setting up disaster recovery for virtual machines. The question states that the servers are already replicated to the secondary region, which indicates that it is ready for a failover.

Run a failback is incorrect because this option allows you to failback to your primary region and is only executed once the primary region is running as normal again.

Run a test failover is incorrect because you only run a test failover to check if an actual failover will work. This is done during disaster recovery drills.

28
Q

A company plans on migrating its data to an Azure storage account named Cebu.

You need to migrate the files by using AzCopy.

Which of the following operating systems are supported by AzCopy?

A. Windows and macOS
B. Windows and Linux
C. Windows, Linux, and macOS
D. Windows

A

C. Windows, Linux, and macOS

Explanation:
AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. You can also provide authorization credentials on your AzCopy command by using Microsoft Entra ID or by using a Shared Access Signature (SAS) token.

The following operating systems are supported:

– Windows

– Linux

– macOS

Download the AzCopy executable file to any directory on your computer. AzCopy V10 is just an executable file, so there’s nothing to install. These files are compressed as a zip file (Windows and Mac) or a tar file (Linux).

Hence, the correct answer is: Windows, Linux, and macOS.

29
Q

You manage an Azure environment for a financial services company. The Azure subscription includes 12 virtual machines (VMs) that run various Linux distributions. Each VM is configured to run a custom financial application and has the Azure Monitor Agent installed.

The company requires comprehensive logging of the application activities on each VM to comply with regulatory requirements. These logs need to be centralized in a Log Analytics workspace for auditing and analysis. To meet the tight security requirements, all data collection must occur through a designated endpoint.

Which configuration step should you perform first?

A. Enable VM insights.
B. Set up the data collection endpoint (DCE).
C. Configure Azure Monitor Private Link Scope (AMPLS).
D. Add Diagnostic settings in Azure Monitor

A

B. Set up the data collection endpoint (DCE).

Explanation:
A Data Collection Endpoint (DCE) in Azure is a crucial component designed to manage and secure the flow of telemetry data from your resources to Azure Monitor. It provides a dedicated endpoint for collecting logs, metrics, and traces from various Azure services and applications. By setting up a DCE, organizations can ensure that all data collection occurs through a centralized, secure, and compliant channel, which is particularly important for meeting regulatory and security requirements. The DCE acts as a gateway, enforcing policies and controlling access to ensure that data is handled securely and efficiently before being ingested into services like Azure Monitor and Log Analytics.

Using a DCE enhances the security and manageability of telemetry data collection in several ways. It supports advanced scenarios where specific compliance requirements mandate the use of dedicated endpoints to segregate data collection traffic. This segregation helps in maintaining the integrity and confidentiality of sensitive data. Additionally, a DCE can help optimize data collection by providing a consistent endpoint configuration, reducing complexity, and simplifying the management of data flows across multiple resources and environments. By centralizing data collection through a DCE, organizations can better monitor, audit, and analyze their telemetry data, ensuring comprehensive visibility and control over their Azure resources.
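
As a rough sketch of what that first step can look like as infrastructure as code (the resource name and API version below are assumptions, not from the scenario), a DCE can be declared in Bicep:

```bicep
// Hypothetical data collection endpoint that refuses public ingestion,
// forcing all agent traffic through the designated private channel.
resource dce 'Microsoft.Insights/dataCollectionEndpoints@2022-06-01' = {
  name: 'td-dce'
  location: resourceGroup().location
  properties: {
    networkAcls: {
      publicNetworkAccess: 'Disabled'
    }
  }
}
```

A data collection rule (DCR) would then reference this endpoint through its dataCollectionEndpointId property so that the Azure Monitor Agent on each VM sends its logs through the designated endpoint.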

Hence, the correct answer is: Set up the data collection endpoint (DCE).

The option that says: Enable VM insights is incorrect. Enabling VM insights provides detailed monitoring and performance metrics for virtual machines, including health, performance, and dependency data. While this is valuable for gaining visibility into the state of VMs, it does not specifically address the requirement for comprehensive application logging or secure data collection through a designated endpoint. VM insights primarily focus on infrastructure metrics rather than the application-level logs needed for regulatory compliance.

The option that says: Configure Azure Monitor Private Link Scope (AMPLS) is incorrect. This option is important for enhancing the security of data in transit and ensuring that traffic does not traverse the public internet. However, configuring AMPLS is not the first step in setting up centralized logging. It is more relevant for securing the network path used by Azure Monitor, not for configuring the initial collection and centralization of application logs.

The option that says: Add Diagnostic settings in Azure Monitor is incorrect. Adding Diagnostic settings in Azure Monitor is a crucial step for capturing and forwarding logs and metrics to a Log Analytics workspace. This configuration ensures that logs from various services and applications are sent to a central repository for analysis and auditing. However, without first establishing a secure and compliant endpoint for data collection (DCE), adding diagnostic settings alone does not ensure that the data collection process adheres to stringent security requirements.
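To illustrate why Diagnostic settings come later, here is a hedged Azure CLI sketch of adding a diagnostic setting that forwards resource logs to the Log Analytics workspace; it presupposes the workspace and the secured collection path already exist. All resource IDs and the setting name (td-diag) are hypothetical placeholders.

```shell
# Hypothetical names: td-diag (setting), td-rg, td-law (workspace).
# This step forwards logs to the central workspace, but it assumes the
# secure collection endpoint (DCE) has already been configured.
az monitor diagnostic-settings create \
  --name td-diag \
  --resource "/subscriptions/<sub-id>/resourceGroups/td-rg/providers/Microsoft.KeyVault/vaults/td-kv" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/td-rg/providers/Microsoft.OperationalInsights/workspaces/td-law" \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
```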

30
Q

You need to set up a disaster recovery plan for your company’s critical business application after its migration to Azure.

What should you configure first?

A. Microsoft Azure Backup Server (MABS)
B. Backup Policy
C. Azure Site Recovery (ASR)
D. Recovery Services vault

A

D. Recovery Services vault

Explanation:
A Recovery Services vault is an essential component in Azure that acts as a storage and management entity for backup and recovery data. It enables the protection and management of data by allowing you to back up and restore data for various workloads, including Azure virtual machines (VMs), on-premises servers, and other critical infrastructure. Crucially, both Azure Backup and Azure Site Recovery are configured from within a Recovery Services vault, so the vault must exist before any backup policy or replication can be set up. By creating the vault first, you establish the container that meets the company's requirement for protecting its critical business application after migration to Azure. The Recovery Services vault leverages Azure Backup to provide secure, scalable, and reliable data protection, ensuring that the backups are stored and managed according to best practices.

Furthermore, a Recovery Services vault facilitates centralized management of backups and replication, making it easier to automate and monitor jobs through Azure's comprehensive monitoring and alerting features. It supports multiple types of workloads and provides features like backup encryption, long-term retention, and role-based access control to enhance security and compliance. This service ensures that in the event of data loss, corruption, or other disasters, the application's data can be quickly and effectively restored, minimizing downtime and ensuring business continuity. By utilizing the Recovery Services vault, the company can also benefit from Azure's global infrastructure, ensuring that its backup data is geographically redundant and accessible from anywhere, further enhancing the resilience of its operations.
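The ordering argued above can be sketched with the Azure CLI: the vault is created first, and only then can a VM be protected under a policy held by that vault. The names (td-rg, td-vault, td-vm) are hypothetical, and the commands assume an authenticated Azure CLI session.

```shell
# Hypothetical names: td-rg (resource group), td-vault, td-vm.
# Step 1: create the Recovery Services vault - the prerequisite for
# both Azure Backup and Azure Site Recovery configuration.
az backup vault create \
  --name td-vault \
  --resource-group td-rg \
  --location eastus

# Step 2: only once the vault exists can a VM be protected; the policy
# referenced here (DefaultPolicy) lives inside the vault.
az backup protection enable-for-vm \
  --vault-name td-vault \
  --resource-group td-rg \
  --vm td-vm \
  --policy-name DefaultPolicy
```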

Hence, the correct answer is: Recovery Services vault.

The option that says: Microsoft Azure Backup Server (MABS) is incorrect because this service is primarily for hybrid backup scenarios where both on-premises and cloud data protection are needed. While it extends Azure Backup capabilities to physical servers and on-premises VMs, it is not the initial configuration step required for backing up Azure VMs.

The option that says: Backup Policy is incorrect because it only outlines the schedule and retention settings for backups. However, it cannot be created without an existing Recovery Services vault. The Recovery Services vault is essential for storing the backups and managing the policies associated with them. Therefore, configuring a backup policy is a subsequent step that depends on having a Recovery Services vault in place, making it not the correct first action to take.
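The dependency described above is visible in the CLI itself: every backup policy command is scoped to an existing vault via --vault-name, which is why a policy cannot be the first thing configured. A hedged sketch with the same hypothetical names:

```shell
# Hypothetical names: td-vault, td-rg. The mandatory --vault-name
# argument shows that policies exist only within a vault.
az backup policy list \
  --vault-name td-vault \
  --resource-group td-rg \
  --backup-management-type AzureIaasVM \
  --output table
```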

The option that says: Azure Site Recovery (ASR) is incorrect. ASR is the core service for disaster recovery, replicating workloads to a secondary location to ensure business continuity during outages. However, ASR replication is itself configured from within a Recovery Services vault, so it cannot be enabled until a vault exists. Setting up ASR is therefore a subsequent step, not the first configuration action.

31
Q
A