TD--Wrong Only Flashcards

1
Q

-3. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your company has a Microsoft Entra ID tenant named tutorialsdojo.onmicrosoft.com and a public DNS zone for tutorialsdojo.com.

You added the custom domain name tutorialsdojo.com to Microsoft Entra ID. You need to add a DNS record so that Microsoft Entra ID can verify the domain name.

What DNS record type should you use?

A
SOA
MX
CNAME

A

MX

Microsoft Entra ID is a cloud-based identity and access management service that enables your employees to access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.

Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.

Every new Microsoft Entra ID tenant comes with an initial domain name, <domainname>.onmicrosoft.com. You can’t change or delete the initial domain name, but you can add your organization’s domain names. Adding custom domain names helps you to create user names that are familiar to your users, such as azure@tutorialsdojo.com.

You can verify your custom domain name by using TXT or MX record types.

Hence, the correct answer is: MX.

A, CNAME, and SOA are incorrect because these record types are not supported by Microsoft Entra ID for verifying your custom domain. Only TXT and MX record types are supported.
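
For illustration, below is a minimal Azure CLI sketch of adding the verification record to an Azure-hosted public DNS zone for tutorialsdojo.com. The resource group name and the record values are placeholders; Microsoft Entra ID displays the exact TXT or MX value to use when you add the custom domain.

  # Add the TXT verification record that Entra ID expects (value is a placeholder)
  az network dns record-set txt add-record \
    --resource-group td-rg \
    --zone-name tutorialsdojo.com \
    --record-set-name @ \
    --value "MS=msXXXXXXXX"

  # Alternatively, add an MX verification record (exchange and preference come from the Entra ID portal)
  az network dns record-set mx add-record \
    --resource-group td-rg \
    --zone-name tutorialsdojo.com \
    --record-set-name @ \
    --exchange <value-shown-by-entra-id> \
    --preference 32767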

References:

https://learn.microsoft.com/en-us/entra/fundamentals/whatis

https://learn.microsoft.com/en-us/entra/fundamentals/add-custom-domain

2
Q

-4. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an existing Azure subscription that has the following Azure Storage accounts.

(image)

There is a requirement to identify the storage accounts that can be converted to zone-redundant storage (ZRS) replication. The conversion must be done only through a live migration requested from Azure Support.

Which of the following accounts can you convert to ZRS?

tdaccount4
tdaccount3
tdaccount2
tdaccount1

A

tdaccount1

Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.

When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy option you should choose include:

– How your data is replicated in the primary region.

– Whether your data is replicated to a second region that is geographically distant to the primary region, to protect against regional disasters.

– Whether your application requires read access to the replicated data in the secondary region if the primary region becomes unavailable for any reason.

Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region. ZRS is recommended for applications that require high availability.
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.
You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. If you want to add or remove geo-replication or read access to the secondary region, you can use the Azure portal, PowerShell, or Azure CLI to update the replication setting. However, if you want to change how data is replicated in the primary region, by moving from LRS to ZRS or vice versa, then you must perform a manual migration.

The following table provides an overview of how to switch from each type of replication to another:

You can request a live migration to switch your storage account from LRS to ZRS in the primary region with no application downtime. To migrate from LRS to GZRS or RA-GZRS, first switch to GRS or RA-GRS and then request a live migration. Similarly, you can request a live migration from GRS or RA-GRS to GZRS or RA-GZRS. To migrate from GRS or RA-GRS to ZRS, first switch to LRS, then request a live migration.

Live migration is supported only for storage accounts that use LRS or GRS replication. If your account uses RA-GRS then you need to first change your account’s replication type to either LRS or GRS before proceeding. This intermediary step removes the secondary read-only endpoint provided by RA-GRS before migration.

Hence, the correct answer is: tdaccount1.

tdaccount2 is incorrect because you need to first change your account’s replication type to either LRS or GRS before you change to zone-redundant storage (ZRS). The requirement states that you must only request live migration.

tdaccount3 is incorrect because a general-purpose V1 storage account type does not support zone-redundant storage (ZRS) as its replication option. Only General-purpose V2, FileStorage, and BlockBlobStorage support ZRS.

tdaccount4 is incorrect because a BlobStorage account type does not support zone-redundant storage (ZRS) as its replication option. Only General-purpose V2, FileStorage, and BlockBlobStorage support ZRS.
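
As a quick eligibility check, here is a hedged Azure CLI sketch that lists an account's kind and current replication SKU (the account name and resource group below are assumptions based on the scenario). General-purpose v2 accounts on LRS or GRS are the candidates for a live migration to ZRS.

  # Show the kind and replication SKU of a storage account
  az storage account show \
    --name tdaccount1 \
    --resource-group td-rg \
    --query "{name:name, kind:kind, sku:sku.name}" \
    --output table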

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy

https://docs.microsoft.com/en-us/azure/storage/common/redundancy-migration

3
Q

-6. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription named Manila-Subscription-01 that contains a file share.

You plan to synchronize files from your on-premises file server named TDFileServer1 to Azure.

You created an Azure file share and a storage sync service.

Which four actions should you perform in sequence to synchronize files from TDFileServer1 to Azure?

Instructions: rearrange the following into the correct order

  1. Create a sync group and a cloud endpoint
  2. Register TDFileServer1 with Storage Sync Service
  3. Create a server endpoint
  4. Deploy the Azure File Sync agent to TDFileServer1
A

A-4
Deploy the Azure File Sync agent to TDFileServer1
B-2
Register TDFileServer1 with Storage Sync Service
C-1
Create a sync group and a cloud endpoint
D-3
Create a server endpoint

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token. You can generate SAS tokens; they allow specific access to a private asset for a specific amount of time.

File shares can be used for many common scenarios:

  1. Many on-premises applications use file shares. This feature makes it easier to migrate those applications that share data to Azure. If you mount the file share to the same drive letter that the on-premises application uses, the part of your application that accesses the file share should work with minimal, if any, changes.
  2. Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used by multiple developers in a group can be stored on a file share, ensuring that everybody can find them and that they use the same version.
  3. Resource logs, metrics, and crash dumps are just three examples of data that can be written to a file share and processed or analyzed later.

You can use Azure File Sync to centralize your organization’s file shares in Azure Files while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share. You can use any protocol that’s available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.

You can sync TDFileServer1 to Azure using the following steps in order:

  1. Prepare Windows Server to use with Azure File Sync

– You need to disable Internet Explorer Enhanced Security Configuration in your server. This is required only for initial server registration. You can re-enable it after the server has been registered.

  2. Deploy the Storage Sync Service

– Allows you to create sync groups that contain Azure file shares across multiple storage accounts and multiple registered Windows Servers.

  3. Deploy the Azure File Sync agent to TDFileServer1

– The Azure File Sync agent is a downloadable package that enables Windows Server to be synced with an Azure file share.

  4. Register TDFileServer1 with Storage Sync Service

– This establishes a trust relationship between your server (or cluster) and the Storage Sync Service. A server can only be registered to one Storage Sync Service and can sync with other servers and Azure file shares associated with the same Storage Sync Service.

  5. Create a sync group and a cloud endpoint

– A sync group defines the sync topology for a set of files. Endpoints within a sync group are kept in sync with each other.

  6. Create a server endpoint

– A server endpoint represents a specific location on a registered server, such as a folder on a server volume.

Hence, the correct order of deployment is:

  1. Deploy the Azure File Sync agent to TDFileServer1
  2. Register TDFileServer1 with Storage Sync Service
  3. Create a sync group and a cloud endpoint
  4. Create a server endpoint
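
For reference, a rough sketch of the last two steps using the Azure CLI storagesync extension is shown below; all resource names are placeholders and the exact parameter names may vary by extension version. The agent installation and server registration are performed on TDFileServer1 itself.

  # Create the sync group and its cloud endpoint (the Azure file share acting as the hub)
  az storagesync sync-group create \
    --resource-group td-rg \
    --storage-sync-service td-sync-service \
    --name TDSyncGroup1

  az storagesync sync-group cloud-endpoint create \
    --resource-group td-rg \
    --storage-sync-service td-sync-service \
    --sync-group-name TDSyncGroup1 \
    --name cloud-endpoint1 \
    --storage-account tdstorageaccount \
    --azure-file-share-name tdfileshare

  # Create the server endpoint that points at a path on the registered server
  az storagesync sync-group server-endpoint create \
    --resource-group td-rg \
    --storage-sync-service td-sync-service \
    --sync-group-name TDSyncGroup1 \
    --name server-endpoint1 \
    --server-id <registered-server-guid> \
    --server-local-path "E:\tutorials"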

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide

4
Q

-9. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You have an Azure subscription that contains an Azure virtual network named TDVnet1 with an address space of 10.1.0.0/18 and a subnet named TDSub1 with an address space of 10.1.0.0/22.

You need to connect your on-premises network to Azure by using a site-to-site VPN.

Which four actions should you perform in sequence?

Instructions: To answer, drag the appropriate item from the column on the left to its description on the right. Each correct match is worth one point.

A. Deploy a gateway subnet
B. Deploy a local network gateway
C. Deploy a VPN gateway
D. Deploy a VPN connection

A
  1. A-Deploy a gateway subnet
  2. C-Deploy a VPN gateway
  3. B-Deploy a local network gateway
  4. D-Deploy a VPN connection

Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own datacenter but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.

A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it.

You can create a site-to-site VPN connection by deploying the following in order:

  1. Deploy a virtual network
  2. Deploy a gateway subnet

– You need to create a gateway subnet for your VNet in order to configure a virtual network gateway. All gateway subnets must be named ‘GatewaySubnet’ to work properly. Don’t name your gateway subnet something else. It is recommended that you create a gateway subnet that uses a /27 or /28.

  3. Deploy a VPN gateway

– A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet.

  4. Deploy a local network gateway

– The local network gateway is a specific object that represents your on-premises location (the site) for routing purposes.

  5. Deploy a VPN connection

– A VPN connection creates the link for the VPN gateway and local network gateway. It also gives you the status of your site-to-site connection.

Since you have deployed TDVnet1, the next step is to deploy a gateway subnet.

Hence, the correct order of deployment is:

  1. Deploy a gateway subnet
  2. Deploy a VPN gateway
  3. Deploy a local network gateway
  4. Deploy a VPN connection
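
A condensed Azure CLI sketch of the same sequence is shown below; the resource group, address ranges, on-premises device IP, and shared key are placeholders rather than values from the scenario.

  # 1. Gateway subnet (the name must be exactly GatewaySubnet)
  az network vnet subnet create \
    --resource-group td-rg --vnet-name TDVnet1 \
    --name GatewaySubnet --address-prefixes 10.1.63.224/27

  # 2. VPN gateway (requires a public IP; provisioning can take a while)
  az network public-ip create --resource-group td-rg --name TDGwPIP --sku Standard
  az network vnet-gateway create \
    --resource-group td-rg --name TDVpnGw --vnet TDVnet1 \
    --public-ip-addresses TDGwPIP \
    --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

  # 3. Local network gateway (represents the on-premises VPN device and address space)
  az network local-gateway create \
    --resource-group td-rg --name TDLocalGw \
    --gateway-ip-address 203.0.113.10 \
    --local-address-prefixes 192.168.0.0/16

  # 4. Site-to-site VPN connection linking the two gateways
  az network vpn-connection create \
    --resource-group td-rg --name TDS2SConnection \
    --vnet-gateway1 TDVpnGw --local-gateway2 TDLocalGw \
    --shared-key "<pre-shared-key>"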

References:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/vpn-gateway/tutorial-site-to-site-portal

5
Q

-10. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your company has an Azure subscription that contains the following resources:

(image1)

You plan to create an internal load balancer with the following parameters:

Name: TDB1
SKU: Basic
Subnet: TDSub2
Virtual network: TDVnet1

For each of the following statements, choose Yes if the statement is true or choose No if the statement is false.
1. Traffic between TD5 and TD6 can be load balanced by TDB1
2. Traffic between TD3 and TD4 can be load balanced by TDB1
3. Traffic between TD1 and TD2 can be load balanced by TDB1

A
  1. Traffic between TD5 and TD6 can be load balanced by TDB1 NO
  2. Traffic between TD3 and TD4 can be load balanced by TDB1 NO
  3. Traffic between TD1 and TD2 can be load balanced by TDB1 YES

A private (or internal) load balancer provides a higher level of availability and scale by spreading incoming requests across virtual machines (VMs). A private load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that are load balanced. Front-end IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources.

Take note that in this scenario, you need to determine whether traffic between the virtual machines can be load balanced according to the parameters of TDB1. TD1 and TD2 are the only virtual machines that are associated with an availability set. For a load balancer that uses the Basic SKU, only virtual machines within a single availability set or virtual machine scale set can be used as backend pool endpoints.

The backend pool is a critical component of the load balancer. The backend pool defines the group of resources that will serve traffic for a given load-balancing rule.

Hence, this statement is correct: Traffic between TD1 and TD2 can be load balanced by TDB1

The following statements are incorrect because TDB1 uses the Basic SKU. Since the virtual machines below are not in a single availability set or virtual machine scale set, TDB1 cannot load balance traffic between them.

– Traffic between TD3 and TD4 can be load balanced by TDB1

– Traffic between TD5 and TD6 can be load balanced by TDB1
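
For reference, a hedged Azure CLI sketch of creating an internal Basic load balancer like TDB1 is shown below; the resource group and the frontend/backend names are assumptions.

  # Internal (private) load balancer: the frontend IP comes from TDSub2, no public IP is used
  az network lb create \
    --resource-group td-rg \
    --name TDB1 \
    --sku Basic \
    --vnet-name TDVnet1 \
    --subnet TDSub2 \
    --frontend-ip-name TDB1Frontend \
    --backend-pool-name TDB1BackendPool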

References:

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview

https://docs.microsoft.com/en-us/azure/load-balancer/skus

6
Q

-13. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You have a server in your on-premises datacenter that contains a DNS server named TD1 with a primary DNS zone for the tutorialsdojo.com domain.

You have an Azure subscription named TD-Subscription1.

You plan to migrate the tutorialsdojo.com zone to an Azure DNS zone in TD-Subscription1. The solution must minimize administrative effort.

Which two tools can you use?

  1. Azure CloudShell
  2. Azure Portal
  3. Azure Resource Manager templates
  4. Azure CLI
  5. Azure PowerShell
A

– Azure CLI
– Azure Portal

Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.

You can’t use Azure DNS to buy a domain name. For an annual fee, you can buy a domain name by using App Service domains or a third-party domain name registrar. Your domains can then be hosted in Azure DNS for record management.

A DNS zone file is a text file that contains details of every Domain Name System (DNS) record in the zone. It follows a standard format, making it suitable for transferring DNS records between DNS systems. Using a zone file is a quick, reliable, and convenient way to transfer a DNS zone into or out of Azure DNS.

Take note that Azure DNS supports importing and exporting zone files by using the Azure command-line interface (CLI) and Azure Portal. Zone file import is NOT supported via Azure PowerShell and Azure Cloud Shell.

The Azure CLI is a cross-platform command-line tool used for managing Azure services. It is available for the Windows, Mac, and Linux platforms.

Hence, the correct answers are:

– Azure CLI

– Azure Portal

Azure PowerShell, Azure Resource Manager templates, and Azure CloudShell are incorrect because these tools are not supported by Azure DNS for importing a DNS zone file. Only Azure CLI and Azure Portal are supported.
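
A minimal Azure CLI sketch of the import is shown below, assuming the zone was exported from TD1 to a file named tutorialsdojo.com.txt and that the resource group name is a placeholder.

  # Import the zone file; the zone is created if it does not exist, otherwise records are merged
  az network dns zone import \
    --resource-group td-rg \
    --name tutorialsdojo.com \
    --file-name tutorialsdojo.com.txt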

References:

https://docs.microsoft.com/en-us/azure/dns/dns-overview

https://docs.microsoft.com/en-us/azure/dns/dns-import-export

7
Q

-14. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You have an Azure subscription that contains the following virtual network peerings:

(image)

Select the correct answer for each of the following:

  1. Virtual machines on TDVnet1 have network connectivity with hosts on:
    ?

2. What is the first thing you need to do to change the status of the peering connection for TDVnet2 to Connected:
?

A
  1. Virtual machines on TDVnet1 have network connectivity with hosts on:
    TDVnet1 only
  2. What is the first thing you need to do to change the status of the peering connection for TDVnet2 to Connected:
    Delete TDVnet1-2

Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own datacenter but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.

Virtual network peering enables you to connect two or more Virtual Networks in Azure seamlessly. The virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. Like traffic between virtual machines in the same network, traffic is routed only through Microsoft’s private network.

In the image above, TDVnet1 is the hub while TDVnet2 and TDVnet3 are the spokes. TDVnet1 hosts cannot communicate with TDVnet2 and TDVnet3 because their peerings are in a disconnected state.

Take note that if your VNet peering connection is in a Disconnected state, it means one of the links created was deleted. To re-establish a peering connection, you will need to delete the disconnected peer and recreate it.

Therefore, virtual machines on TDVnet1 can communicate with hosts on TDVnet1 only, because the peerings associated with TDVnet1 are in a disconnected state, which means that traffic between the virtual networks is blocked.

To change the status of the peering connection for TDVnet2 to Connected, you first need to delete TDVnet1-2. Once you have deleted the disconnected peer, you can then recreate it.
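
A hedged Azure CLI sketch of deleting and recreating the peering is shown below; the resource group and the name of the reciprocal peering are assumptions.

  # Delete the disconnected peering on TDVnet1, then recreate both directions
  az network vnet peering delete \
    --resource-group td-rg --vnet-name TDVnet1 --name TDVnet1-2

  az network vnet peering create \
    --resource-group td-rg --vnet-name TDVnet1 \
    --name TDVnet1-2 --remote-vnet TDVnet2 --allow-vnet-access

  az network vnet peering create \
    --resource-group td-rg --vnet-name TDVnet2 \
    --name TDVnet2-1 --remote-vnet TDVnet1 --allow-vnet-access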

The following options are incorrect because TDVnet2 and TDVnet3 have a disconnected peer with TDVnet1. No traffic will be able to flow between virtual networks as long as the peer’s status is disconnected. To re-establish the connection, you must delete the disconnected peer and recreate it.

– TDVnet2 only

– TDVnet3 only

– TDVnet1, TDVnet2, and TDVnet3

The option that says: Change the address space is incorrect because you cannot change the address space of a virtual network if there is an active peering connection. You need to delete the peer first to change the address space.

The option that says: Delete a subnet is incorrect because even if you delete or add a subnet, it will not have any impact on the state of the peering connection.

The option that says: Enable gateway transit is incorrect because this feature is simply a peering property that lets one virtual network use the VPN gateway in the peered virtual network for cross-premises or VNet-to-VNet connectivity.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview

8
Q

-16. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription named TDSubscription that contains Azure file shares named TDShare1 and TDShare2. Both are in the same storage account and the same region.

You deploy the following resources:

(Image1)

You plan to back up the following file servers in your on-premises datacenter to Azure:

(image2)

You then add E:\tutorials of FileServer1 as the server endpoint of TDGroup1.

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. You can add C:\files of FileServer2 as a server endpoint of TDGroup1
  2. You can add TDShare2 to TDGroup1 as a cloud endpoint
  3. You can add F:\dojo of FileServer1 as a server endpoint to TDGroup1
A
  1. You can add C:\files of FileServer2 as a server endpoint of TDGroup1
    YES
  2. You can add TDShare2 to TDGroup1 as a cloud endpoint
    NO
  3. You can add F:\dojo of FileServer1 as a server endpoint to TDGroup1
    NO

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

A sync group defines the sync topology for a set of files. Endpoints within a sync group are kept in sync with each other. A sync group must contain one cloud endpoint, which represents an Azure file share, and one or more server endpoints.

A cloud endpoint is a pointer to an Azure file share. All server endpoints will sync with a cloud endpoint, making the cloud endpoint the hub.

A server endpoint represents a specific location on a registered server, such as a folder on a server volume.

Take note that multiple server endpoints can exist on the same volume if their namespaces do not overlap (for example, F:\sync1 and F:\sync2) and each endpoint syncs to a unique sync group. This means you cannot have more than one server endpoint from the same server in the same sync group.

The statement that says: You can add C:\files of FileServer2 as a server endpoint of TDGroup1 is correct because FileServer2 has no server endpoint yet on TDGroup1. Therefore, you can add the file server to the sync group without any restrictions.

The statement that says: You can add TDShare2 to TDGroup1 as a cloud endpoint is incorrect because you can only have one cloud endpoint per sync group. If you want to add another cloud endpoint, you must create another sync group.

The statement that says: You can add F:\dojo of FileServer1 as a server endpoint to TDGroup1 is incorrect because TDGroup1 already has a server endpoint for FileServer1 for the folder E:\tutorials. Take note that you cannot have more than one server endpoint from the same server in the same sync group. If you need to add the folder F:\dojo of FileServer1, you need to create another sync group.

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide

9
Q

1-17. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
You need to perform the following actions in a Windows virtual machine:

Create a document on drive C.

Create a document on drive D.

Create a new folder on the desktop.

Create a local user account.

Modify the desktop background.

You plan to redeploy the virtual machine.

Which of the following changes will be lost after you redeploy the virtual machine to a new Azure node?

  1. The created folder.
  2. The data on drive D.
  3. The created local user account.
  4. The data on drive C.
A
  1. The data on drive D.

Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers. Typically, you choose a VM when you need more control over the computing environment. An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and installing the software that runs on it.

Most VMs contain a temporary disk, which is not a managed disk. The temporary disk provides short-term storage for applications and processes and is intended to only store data such as page or swap files. Data on the temporary disk may be lost during a maintenance event or when you redeploy a VM. During a successful standard reboot of the VM, data on the temporary disk will persist.

On Azure Linux VMs, the temporary disk is typically /dev/sdb and on Windows VMs the temporary disk is D: by default. The temporary disk is not encrypted by server-side encryption unless you enable encryption at host.

In this scenario, the only changes that will be lost are the data in the temporary disk. The temporary disk is just short-term storage for applications and processes. Take note that you can’t recover any data from this disk. Data loss occurs when the virtual machine moves to a different host server, when the host is updated, or when the host experiences a hardware failure. By default, the temporary disk on a Windows virtual machine is on drive D.

Hence, the correct answer is: The data on drive D.

The option that says: The created folder is incorrect. Even if you redeploy the virtual machine to a new node, the new folder will still be on the desktop of the virtual machine since the desktop is stored on drive C, which is persistent storage.

The option that says: The created local user account is incorrect because user accounts are stored in drive C. After you redeploy the virtual machine to a new Azure node, the user account would still be stored in the virtual machine.

The option that says: The data on drive C is incorrect because drive C is persistent storage. This means the data stored on this drive wouldn’t be deleted even if you redeploy the virtual machine.
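
For context, the redeploy operation itself can be triggered with the Azure CLI as sketched below (the resource group and VM name are placeholders). The OS and data disks persist across the redeploy, while anything on the temporary D: drive is lost.

  # Redeploy moves the VM to a new Azure host; the temporary disk is reinitialized
  az vm redeploy --resource-group td-rg --name TDVM1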

References:

https://docs.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview#temporary-disk

https://docs.microsoft.com/es-mx/archive/blogs/mast/understanding-the-temporary-drive-on-windows-azure-virtual-machines

10
Q

-20. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
You have an Azure subscription that has the following vCPU quotas.
(image1)

You plan to create the virtual machines listed below in the order they are listed.
(image2)

The deployed virtual machines are shown in the table below.
(image3)

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

1. You can create VM5 in North Central US Region.
2. You can create VM4 in North Central US Region.
3. You can create VM6 in North Central US Region.

A

1. You can create VM5 in North Central US Region.
NO
2. You can create VM4 in North Central US Region.
YES
3. You can create VM6 in North Central US Region.
NO

Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers. Typically, you choose a VM when you need more control over the computing environment. An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks such as configuring, patching, and installing the software that runs on it.

The vCPU quotas for virtual machines and virtual machine scale sets are arranged in two tiers for each subscription in each region.

– Total Regional vCPUs

– VM size family cores

Every time you deploy a new VM, the vCPUs must not exceed the vCPU quota for the VM size family or the total regional vCPU. If either of those quotas has been exceeded, the VM deployment will not be allowed. Take note that there is also a quota for the overall number of virtual machines in the region. The quota is calculated based on the total number of cores in use, both allocated and deallocated. If you need additional cores, you can request a quota increase or delete VMs that are no longer needed.

The statement that says: You can create VM4 in North Central US Region is correct because the remaining vCPU quota in North Central US is 3 vCPUs. If you created VM4 in the North Central US Region, the total vCPUs in that Region is 14 of 15 vCPUs.

The statement that says: You can create VM5 in North Central US Region is incorrect. Take note that you already created the VM4 instance. Therefore, the remaining vCPU quota in the North Central US is only 1 vCPU.

The statement that says: You can create VM6 in North Central US Region is incorrect because if you create VM6 in the North Central US, it will exceed the total regional vCPU quota.
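
As a quick check, the regional and per-family vCPU usage against quota can be listed with the Azure CLI, for example:

  # Shows current usage versus limit for "Total Regional vCPUs" and each VM size family
  az vm list-usage --location northcentralus --output table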

References:

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/quotas

https://docs.microsoft.com/en-us/azure/virtual-machines/sizes

https://docs.microsoft.com/en-us/azure/azure-portal/supportability/per-vm-quota-requests

11
Q

-22. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
You have deployed two Azure virtual machines to host a web application.

You plan to set up an Availability Set for your application.

You need to make sure that the application is available during planned maintenance.

Which of the following options will allow you to accomplish this?

  1. Assign one fault domain in the Availability Set.
  2. Assign two update domains in the Availability Set.
  3. Assign one update domain in the Availability Set.
  4. Assign two fault domains in the Availability Set.
A
  1. Assign two update domains in the Availability Set.

Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers. Typically, you choose a VM when you need more control over the computing environment. An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and installing the software that runs on it.

Planned maintenance is periodic updates made by Microsoft to the underlying Azure platform to improve the platform infrastructure’s overall reliability, performance, and security that your virtual machines run on.

To ensure that the application is available during planned maintenance, you must assign two update domains in the Availability Set. An update domain will make sure that the VMs in the Availability Set are not updated at the same time. The order of update domains being rebooted may not proceed sequentially during planned maintenance, but only one update domain is rebooted at a time. A rebooted update domain is given 30 minutes to recover before maintenance is initiated on a different update domain.

Hence, the correct answer is: Assign two update domains in the Availability Set.

The option that says: Assign one update domain in the Availability Set is incorrect because with a single update domain, both virtual machines could be rebooted at the same time during planned maintenance. You need to assign one update domain for each virtual machine.

The option that says: Assign two fault domains in the Availability Set is incorrect because the requirement in the scenario is only planned maintenance. Even if you assigned two or more fault domains, the application will still be unavailable during planned maintenance. You must assign two update domains and one virtual machine for each update domain.

The option that says: Assign one fault domain in the Availability Set is incorrect because the fault domain is mainly used for unplanned maintenance. Instead of assigning a fault domain in the Availability Set, you must assign an update domain in order to satisfy this requirement.
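
A minimal Azure CLI sketch of creating such an availability set is shown below; the resource group and set name are placeholders. Each VM is then created with --availability-set so that the two VMs land in different update domains.

  # Two update domains so only one VM is rebooted at a time during planned maintenance
  az vm availability-set create \
    --resource-group td-rg \
    --name TDAvSet1 \
    --platform-update-domain-count 2 \
    --platform-fault-domain-count 2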

References:

https://docs.microsoft.com/en-us/azure/virtual-machines/manage-availability

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-availability-sets

12
Q

-32. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
You plan to migrate your business-critical application to Azure virtual machines.

You need to make sure that at least two VMs are available during planned Azure maintenance.

What should you do?

  1. Create an Availability Set that has two update domains and three fault domains.
  2. Create an Availability Set that has three update domains and two fault domains.
  3. Create an Availability Set that has three update domains and one fault domain.
  4. Create an Availability Set that has one update domain and three fault domains.
A
  1. Create an Availability Set that has three update domains and two fault domains.

Azure periodically updates its platform to improve the reliability, performance, and security of the host infrastructure for virtual machines. The purpose of these updates ranges from patching software components in the hosting environment to upgrading networking components or decommissioning hardware.

Updates rarely affect the hosted VMs. When updates do have an effect, Azure chooses the least impactful method for updates:

– If the update doesn’t require a reboot, the VM is paused while the host is updated, or the VM is live-migrated to an already updated host.

– If maintenance requires a reboot, you’re notified of the planned maintenance. Azure also provides a time window in which you can start the maintenance yourself, at a time that works for you. The self-maintenance window is typically 35 days unless the maintenance is urgent. Azure is investing in technologies to reduce the number of cases in which planned platform maintenance requires the VMs to be rebooted.

The main objective of the question is to test your understanding of update and fault domains. Since it’s a requirement in the scenario that at least two virtual machines must be available during planned maintenance, you should add three update domains in the Availability Set. Take note that each virtual machine in your availability set is assigned to an update domain and a fault domain.

During scheduled maintenance, only one update domain is updated at any given time. Update domains aren’t necessarily updated sequentially. A rebooted update domain is given 30 minutes to recover before maintenance is initiated on a different update domain. For fault domains, you can set a minimum number of fault domains in your Availability Set because the main requirement in the scenario is to prepare for planned maintenance.

Hence, the correct answer is: Create an Availability Set that has three update domains and two fault domains.

The option that says: Create an Availability Set that has three update domains and one fault domain is incorrect because if you set 3 update domains and 1 fault domain in an Availability Set, you will receive an error message: “The update domain count must be 1 when fault domain count is 1.” To resolve this error, you must have 2 fault domains instead of 1 fault domain.

The option that says: Create an Availability Set that has two update domains and three fault domains is incorrect because you need to have three update domains instead of two update domains.

The option that says: Create an Availability Set that has one update domain and three fault domains is incorrect because three fault domains are not needed in this scenario. Fault domains are mainly used for unplanned maintenance. Three update domains must be provisioned to adequately satisfy the requirements.

References:

https://docs.microsoft.com/en-us/azure/virtual-machines/maintenance-and-updates

https://docs.microsoft.com/en-us/azure/virtual-machines/manage-availability

13
Q

-34. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
Your company has an Azure Kubernetes Service (AKS) cluster and a Windows 10 workstation with Azure CLI installed.

You plan to use the kubectl client on Windows 10.

Which of the following commands should you run?

  1. az aks install-cli
  2. az aks nodepool
  3. az aks create
  4. az aks browse
A
  1. az aks install-cli

Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure. You only manage and maintain the agent nodes.

To connect to the Kubernetes cluster from your local computer, you need to use kubectl (the Kubernetes command-line client). Before you can use kubectl, you should first run the command az aks install-cli in the command-line interface. kubectl allows you to deploy applications, inspect and manage cluster resources, and view logs.

Hence, the correct answer is: az aks install-cli.

The option that says: az aks nodepool is incorrect because this command only allows you to manage node pools in a Kubernetes cluster. It is stated in the scenario that you need to use the kubectl client. Therefore, you should first run the az aks install-cli command.

The option that says: az aks create is incorrect because this will just create a new managed Kubernetes cluster. Take note that in this scenario, you need to use the Kubernetes command-line client in Windows 10. In order for you to manage cluster resources, you should use the kubectl client.

The option that says: az aks browse is incorrect because it will simply show the dashboard of the Kubernetes cluster in your web browser. Instead of running the command az aks browse, you should run az aks install-cli to download and install the Kubernetes command-line tool.
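
A short sketch of the typical workflow on the workstation is shown below; the cluster and resource group names are placeholders.

  # Install kubectl through the Azure CLI, pull cluster credentials, then verify connectivity
  az aks install-cli
  az aks get-credentials --resource-group td-rg --name td-aks-cluster
  kubectl get nodes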

References:

https://docs.microsoft.com/en-us/cli/azure/aks

https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes

14
Q

-40. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Note: This item is part of a series of case study questions with the exact same scenario but has a different technical requirement. Each one in the series has a unique solution that may or may not comply with the requirements specified in the scenario.

Overview

Contoso Limited is an online learning portal for technology-related topics that empowers its users to upgrade their skills and career. Contoso Limited has users from all over the world, ranging from the United States, Europe, and Asia.

Existing Environment

Currently, Contoso Limited utilizes a three-tier system for their LMS application on-premises, including the following:

-Web frontend tier
-Application tier
-SQL Server

Each tier contains three virtual machines with no ability to scale out.

The contents of the application are stored in the file server.

Planned changes

Contoso Limited plans to implement the following modifications for their migration to Azure:

-Migrate the web and application tier to Azure virtual machines.
-Migrate the SQL server to the Azure SQL database.
-Move the existing file server to a more efficient service.

Technical Requirements

-Minimize administrative effort and cost whenever possible.
-Ensure that the user can increase the number of virtual machines for the web tier and application tier when there is high demand.
-Ensure that there will be automated backups for all virtual machines.
-Ensure that the file server can be mounted from Azure and on-premises data center.
-Enable Multi-Factor Authentication (MFA) for administrators only.
-Assets must be stored in the Azure Storage service.
-Enable SSL termination at the load balancer layer.
-The architecture must be highly available.

You need to deploy a load balancer that supports SSL termination.

What Azure service should you use?

  1. Azure Application Gateway
  2. Azure Front Door
  3. Azure Load Balancer
  4. Azure Traffic Manager
A
  1. Azure Application Gateway

Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 – TCP and UDP) and route traffic based on source IP address and port, to a destination IP address and port. Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example, URI path or host headers.

SSL termination refers to the process of decrypting encrypted traffic before passing it along to a web server. TLS is just an updated, more secure version of SSL. An SSL connection sends encrypted data between a user and a web server by using a certificate for authentication. SSL termination helps speed up the decryption process and reduces the processing burden on the servers.

Azure Application Gateway supports end-to-end traffic encryption and TLS/SSL termination. Based on the defined routing rules, the gateway applies the rules to the traffic, re-encrypts the packet, and forwards the packet to the appropriate server. Any reply from the web server goes back to the same process.

Hence, the correct answer is: Azure Application Gateway.

Azure Traffic Manager is incorrect because Traffic Manager does not support SSL termination. This service is mainly used for DNS-based traffic load balancing.

Azure Load Balancer is incorrect. Just like the option above, this service does not support SSL termination. You can use this service to create public and internal load balancers only.

Azure Front Door is incorrect. Although it supports SSL offloading, this service is not a load balancer. Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications.
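
For illustration, a rough Azure CLI sketch of deploying an Application Gateway with a certificate for TLS/SSL termination at the gateway is shown below; the names, subnet, certificate file, and password are all assumptions.

  # Application Gateway terminates TLS/SSL using the uploaded PFX certificate
  az network application-gateway create \
    --resource-group td-rg \
    --name TDAppGateway \
    --sku Standard_v2 \
    --capacity 2 \
    --vnet-name TDVnet1 \
    --subnet TDAppGwSubnet \
    --public-ip-address TDAppGwPIP \
    --frontend-port 443 \
    --http-settings-port 80 \
    --cert-file ./contoso-ssl-cert.pfx \
    --cert-password "<pfx-password>"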

References:

https://docs.microsoft.com/en-us/azure/application-gateway/overview

https://azure.microsoft.com/en-us/services/application-gateway/

15
Q

-45. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company hosts its business-critical Azure virtual machines in the Australia East region.

The servers are then replicated to a secondary region using Azure site recovery for disaster recovery.

The Australia East region is experiencing an outage and you need to failover to your secondary region.

Which three actions should you perform?

  1. Run a test failover.
  2. Run a failback.
  3. Initiate replication.
  4. Run a failover.
  5. Reprotect virtual machine.
  6. Verify if the virtual machines are protected and healthy.
A

  1. Verify if the virtual machines are protected and healthy.
  2. Run a failover.
  3. Reprotect the VM.

A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases.

Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.

When you enable replication for a VM to set up disaster recovery, the Site Recovery Mobility service extension installs on the VM and registers it with Azure Site Recovery.

During replication, VM disk writes are sent to a cache storage account in the source region. Data is sent from there to the target region, and recovery points are generated from the data. When you fail over a VM during disaster recovery, a recovery point is used to restore the VM in the target region.

To perform a failover, you should complete the following steps:

  1. Verify the VM settings – Check if the VM is healthy and protected. You also need to verify that the VM is running a supported Windows or Linux operating system and that the VM complies with compute, storage, and networking requirements.
  2. Run a failover – In the failover tab, you are required to choose a recovery point. The Azure VM in the target region is created using data from this recovery point.
  3. Reprotect the VM – After failover, you reprotect the VM in the secondary region so that it replicates back to the primary region.
Hence, the correct answers are:

– Verify if the virtual machines are protected and healthy.

– Run a failover.

– Reprotect the VM.

Initiate replication is incorrect because this is the first step in setting up disaster recovery for virtual machines. The question states that the servers are already replicated to the secondary region, which indicates that they are ready for a failover.

Run a failback is incorrect because this option allows you to fail back to your primary region and is only executed once the primary region is running normally again.

Run a test failover is incorrect because you only run a test failover to check if an actual failover will work. This is done during disaster recovery drills.

References:

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-overview

https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-tutorial-enable-replication

16
Q

-50. QUESTION
Category: AZ-104 – Implement and Manage Storage
Which of the following authentication methods can you use when transferring data using AzCopy to Blob storage and File storage?

For each sotrage type, can you use shared access signature, RBAC, or both?

  1. Blob storage - ?
  2. File storage - ?
A
  1. Blob storage - Shared access signature
  2. File storage - Shared access signature

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. You can also provide authorization credentials on your AzCopy command by using Azure Active Directory (AD) or by using a Shared Access Signature (SAS) token.

For blob storage, the supported authorization methods are a shared access signature and your Active Directory credentials.

Meanwhile, for file storage, the only supported authorization method is shared access signature.

Therefore, for both blob storage and file storage, you have to use a shared access signature (SAS) token as your authorization method.
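
For illustration, hedged AzCopy commands using SAS tokens are shown below; the storage account, container, share names, local path, and SAS values are placeholders.

  # Upload to Blob storage with a SAS token (Azure AD / RBAC authorization is also supported for blobs)
  azcopy copy "C:\data" "https://tdstorage.blob.core.windows.net/container1?<SAS-token>" --recursive

  # Upload to File storage; a SAS token is the supported authorization method here
  azcopy copy "C:\data" "https://tdstorage.file.core.windows.net/tdshare1?<SAS-token>" --recursive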

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview

https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10

17
Q

2-7. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription that contains several virtual machines deployed to a virtual network named TDVnet1.

You created an Azure storage account named tdstorageaccount1 as shown in the following exhibit:

(image)

Select yes or no for the following:

  1. Your virtual machines deployed to the 20.2.1.0/24 subnet will have access to the file shares in tdstorageaccount1.
    ?
  2. The unmanaged disks of the virtual machines can be backed up to tdstorageaccount1 by using Azure Backup.
    ?
A

  1. No
  2. No

An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

Virtual Network service endpoint allows administrators to create network rules that allow traffic only from selected VNets and subnets, creating a secure network boundary for their data. Service endpoints extend your VNet private address space and identity to the Azure services, over a direct connection. This allows you to secure your critical service resources to only your virtual networks, providing private connectivity to these resources and fully removing Internet access. You need to explicitly specify which subnets can access your storage account.

Azure Backup can access your storage account in the same subscription for running backups and restores of unmanaged disks in virtual machines. To enable this, you need to tick the “Allow trusted Microsoft Services to access this storage account” box.

Take note that in the screenshot presented in the scenario, the following observations can be made:

  1. There are two subnets inside TDVnet1, 20.2.0.0/24 and 20.2.1.0/24. The only subnet included in the list of allowed subnets for tdstorageaccount1 is 20.2.0.0/24. The virtual machines deployed to the 20.2.1.0/24 subnet will therefore never have access to tdstorageaccount1.
  2. The “Allow trusted Microsoft services to access this storage account” option is not enabled. This means that Azure Backup will not be able to back up the unmanaged disks of the virtual machines to tdstorageaccount1.

Therefore, your virtual machines in 20.2.1.0/24 will never have access to the file shares in tdstorageaccount1.

Similarly, Azure Backup will never be able to back up the unmanaged disks of the virtual machines.
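
A hedged Azure CLI sketch of the two settings discussed above is shown below; the resource group and subnet name are assumptions. The first command allows the second subnet through the storage firewall (the subnet also needs the Microsoft.Storage service endpoint), and the second is the equivalent of ticking the trusted-services checkbox.

  # Allow the 20.2.1.0/24 subnet (shown here with a placeholder subnet name) through the storage firewall
  az storage account network-rule add \
    --resource-group td-rg \
    --account-name tdstorageaccount1 \
    --vnet-name TDVnet1 \
    --subnet TDSub2

  # Equivalent of "Allow trusted Microsoft services to access this storage account"
  az storage account update \
    --resource-group td-rg \
    --name tdstorageaccount1 \
    --bypass AzureServices \
    --default-action Deny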

References:

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview

https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security

18
Q

2-8. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription that contains a blob storage account named TD1, located in the Southeast Asia region.
Due to compliance requirements, data uploaded to TD1 must be duplicated to the Australia Central region for redundancy. The solution must minimize administrative effort.

What should you do?

  1. Configure Geo-redundant storage (GRS).
  2. Configure firewalls and virtual networks.
  3. Configure object replication.
  4. Configure versioning.
A

Configure object replication.
Object replication asynchronously copies block blobs between a source storage account and a destination account. Some scenarios supported by object replication include:

  1. Minimizing latency. Object replication can reduce latency for read requests by enabling clients to consume data from a region that is in closer physical proximity.
  2. Increasing efficiency for compute workloads. With object replication, compute workloads can process the same sets of block blobs in different regions.
  3. Optimizing data distribution. You can process or analyze data in a single location and then replicate just the results to additional regions.
  4. Optimizing costs. After your data has been replicated, you can reduce costs by moving it to the archive tier using life cycle management policies.

The requirement states that data uploaded to TD1 must be duplicated to Australia Central due to compliance requirements. Since the regional pair of Southeast Asia is East Asia, we won’t be able to use geo-redundant storage (GRS), as we cannot choose the secondary region ourselves because of regional pairs. Instead, we can use object replication to copy data from TD1 to a storage account in the Australia Central region.

Object replication is supported for general-purpose v2 storage accounts and premium block blob accounts. Both the source and destination accounts must be either general-purpose v2 or premium block blob accounts. Object replication supports block blobs only; append blobs and page blobs aren’t supported.

Hence, the correct answer is: Configure object replication.

The option that says: Configure firewalls and virtual networks is incorrect because this feature only allows users of Azure storage accounts to block or allow specific traffic to your storage account. It does not have any capability to replicate data to another region.

The option that says: Configure versioning is incorrect because this only allows you to automatically maintain previous versions of an object in a single storage account. Note, however, that to use object replication, versioning must be enabled in both the source and destination storage accounts.

The option that says: Configure Geo-redundant storage (GRS) is incorrect because the data will automatically be stored in East Asia since it is the regional pair of Southeast Asia region. You don’t get to choose the secondary region when enabling geo-redundant storage. Instead, use object replication.
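
A hedged Azure CLI sketch of setting this up is shown below; the destination account, container names, and resource group are placeholders. Blob versioning and change feed must be enabled on both accounts before the policy is created.

  # Prerequisites on both the source (Southeast Asia) and destination (Australia Central) accounts
  az storage account blob-service-properties update \
    --resource-group td-rg --account-name td1 \
    --enable-versioning true --enable-change-feed true

  # Replication policy: copy block blobs from a container in TD1 to a container in the destination account
  az storage account or-policy create \
    --resource-group td-rg \
    --account-name tddestination \
    --source-account td1 \
    --destination-account tddestination \
    --source-container source-container \
    --destination-container destination-container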

References:

https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-overview

https://learn.microsoft.com/en-us/azure/reliability/cross-region-replication-azure

19
Q

2-9. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You have an Azure subscription that contains a Windows virtual machine named TD1 with the following configurations:

Virtual network: TDVnet1

Public IP Address: 20.10.0.1

Private IP Address: 48.156.83.51

Location: Southeast Asia

You deploy the following Azure DNS zones:
(image)

You need to determine which DNS zones can be linked to TDVnet1 and to which DNS zones TD1 can be automatically registered.

  1. TDVnet1 can be linked to the following DNS zones:
    ?
  2. TD1 can be automatically registered to the following DNS zones:
    ?
A
  1. Only Manila.com and Davao.com can be linked to TDVnet1, since they are both private DNS zones.
  2. TD1 can be automatically registered to Manila.com and Davao.com only, because both are private DNS zones, provided that you enable the autoregistration feature.

Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today.

Using custom domain names helps you to tailor your virtual network architecture to best suit your organization’s needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.

Once you create a private DNS zone in Azure, it is not immediately accessible from any virtual network. You must link it to a virtual network before a VM hosted in that network can access the private DNS zone.

When you create a link between a private DNS zone and a virtual network, you have an option to turn on autoregistration of DNS records for virtual machines. If you choose this option, the virtual network becomes a registration virtual network for the private DNS zone.

– A DNS record is automatically created for the virtual machines that you deploy in the network. DNS records are also created for the virtual machines that you have already deployed in the virtual network.

– One private DNS zone can have multiple registration virtual networks; however, every virtual network can have exactly one registration zone associated with it.

When you create a virtual network link under a private DNS zone and choose not to enable DNS record autoregistration, the virtual network is treated as a resolution only virtual network.

– DNS records for virtual machines deployed in such networks will not be automatically created in the linked private DNS zone. However, the virtual machines deployed in such a network can successfully query the DNS records from the private DNS zone.

– These records may be manually created by you or may be populated from other virtual networks that have been linked as registration networks with the private DNS zone.

– One private DNS zone can have multiple resolution virtual networks and a virtual network can have multiple resolution zones associated to it.

Take note that you can link a virtual network, and use the autoregistration feature, with private DNS zones only.
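For illustration, here is a minimal Azure CLI sketch of linking a private DNS zone to a virtual network with autoregistration enabled. The resource group name and link name are hypothetical placeholders, not values from the scenario.

# Link the private DNS zone Manila.com to TDVnet1 and enable autoregistration of VM records
az network private-dns link vnet create \
  --resource-group TD-RG \
  --zone-name Manila.com \
  --name TDVnet1-link \
  --virtual-network TDVnet1 \
  --registration-enabled true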

Therefore, only Manila.com and Davao.com can be linked to TDVnet1, since they are both private DNS zones.

Likewise, TD1 can be automatically registered only to Manila.com and Davao.com because both are private DNS zones, provided that you enable the autoregistration feature.

The following options are incorrect because Dagupan.com and Palawan.com are public DNS zones. You cannot use public DNS zones here because they do not support virtual network links or the autoregistration feature.

– Manila.com and Dagupan.com only

– Davao.com and Palawan.com

– Dagupan.com and Palawan.com only

References:

https://docs.microsoft.com/en-us/azure/dns/private-dns-overview

https://docs.microsoft.com/en-us/azure/dns/private-dns-virtual-network-links

https://docs.microsoft.com/en-us/azure/dns/private-dns-autoregistration

20
Q

2-11. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription that contains a storage account named tdstorageaccount1.

You have 14 TB of files you need to migrate to tdstorageaccount1 using Azure Import/Export service.

You need to identify the two files that must be created before the drives are prepared and the journal file is generated.

Which two files should you create?

  1. ARM template
  2. Dataset CSV File
  3. WAImportExport file
  4. Driveset CSV file
  5. PowerShell PS1 file
A
  1. Dataset CSV File
  2. Driveset CSV file

Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.

Consider using Azure Import/Export service when uploading or downloading data over the network is too slow or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:

– Data migration to the cloud: Move large amounts of data to Azure quickly and cost-effectively.

– Content distribution: Quickly send data to your customer sites.

– Backup: Take backups of your on-premises data to store in Azure Storage.

– Data recovery: Recover large amounts of data stored in Azure Storage and have it delivered to your on-premises location.

The first step of an import job is the preparation of the drives. This is where you need to generate a journal file. The following files are needed before you create a journal file:

– The Dataset CSV File

– The value of the /dataset flag is a CSV file that contains a list of directories and/or files to be copied to the target drives. The first step in creating an import job is to determine which directories and files you are going to import.

– This can be a list of directories, a list of unique files, or a combination of those two. When a directory is included, all files in the directory and its subdirectories will be part of the import job.

– The Driveset CSV file

– The value of the /InitialDriveSet or /AdditionalDriveSet flag is a CSV file that contains the list of disks to which the drive letters are mapped so that the tool can correctly pick the list of disks to be prepared.
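As a rough illustration, a drive-preparation run that consumes both CSV files might look like the command below. The journal file name, session ID, CSV file names, and log directory are placeholder values (the storage account key parameter is omitted here).

# Prepare the drives and generate the journal file from the dataset and driveset CSV files
WAImportExport.exe PrepImport /j:TDImport.jrn /id:session#1 /InitialDriveSet:driveset.csv /DataSet:dataset.csv /logdir:C:\logs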

Hence, the correct answers are:

– Dataset CSV File

– Driveset CSV file

The following options are incorrect because an Azure Import/Export journal file only requires a driveset CSV file and dataset CSV File during the preparation of your drives.

– ARM template

– PowerShell PS1 file

– WAImportExport file

References:

https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-service

https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-data-to-files

21
Q

2-14. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company has an Azure subscription that contains the following resources:
(image)

You have an Azure Recovery Services vault named TDBackup1 that backs up TD1, TD2, and TD3 daily without an Azure Backup Agent.

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. You can execute a file recovery operation from TD2 to:
    ?
  2. You can restore TD3 to:
    ?
A
  1. You can execute a file recovery operation from TD2 to:
    TD2 only
  2. You can restore TD3 to: TD3 only

Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Backups are stored in a Recovery Services vault with built-in management of recovery points. Configuration and scaling are simple, backups are optimized, and you can easily restore as needed.

To recover a specific file, you must specify the recovery point of your backup and download a script that will mount the disks from the selected recovery point. After the script is successfully downloaded, make sure you have the right machine to execute this script.

When recovering files, you can’t restore files to a previous or future operating system version. For example, you can’t restore a file from a Windows Server 2016 VM to Windows Server 2012 or a Windows 8 computer. You can restore files from a VM to the same server operating system, or to the compatible client operating system.

You can restore a virtual machine with the following options:

– Create a new VM

– Restore Disk

– Replace existing disk (OLR)

As one of the restore options, you can replace an existing VM disk with the selected restore point. The current VM must exist. If it’s been deleted, this option can’t be used. Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify.

Existing disks connected to the VM are replaced with the selected restore point. The snapshot is copied to the vault and retained in accordance with the retention policy.

After the Replace Disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren’t needed.

Therefore, you can perform file recovery to TD2 only because the operating systems of TD1 and TD3 are not compatible with TD2. You need to ensure that the machine you are recovering the file to meets the requirements before executing the script.

Similarly, you can restore TD3 to TD3 only because you cannot restore the disk of TD3 to TD1 or TD2. You can only restore a virtual machine by creating a new VM, restoring a disk, or replacing the existing VM disk.

References:

https://docs.microsoft.com/en-us/azure/backup/backup-overview

https://docs.microsoft.com/en-us/azure/backup/backup-azure-arm-restore-vms

https://docs.microsoft.com/en-us/azure/backup/backup-azure-restore-files-from-vm

22
Q

2-15. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription that contains a sync group named TDSync1 which has an associated cloud endpoint called TDCloud1. The file tutorials.docx is included in the cloud endpoint.

You have the following on-premises Windows Server 2019 file servers that you want to synchronize to Azure:
(image)

You first registered FileServer1 as a server endpoint to TDSync1 and then registered FileServer2 as a server endpoint to TDSync1.

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. tutorials.docx on TDCloud1 will be overwritten by tutorials.docx from FileServer1
  2. dojo.mp4 will be synced to FileServer1
  3. tutorials.docx on FileServer1 will be overwritten by tutorials.docx from TDCloud1
A
  1. tutorials.docx on TDCloud1 will be overwritten by tutorials.docx from FileServer1
    NO
  2. dojo.mp4 will be synced to FileServer1
    YES
  3. tutorials.docx on FileServer1 will be overwritten by tutorials.docx from TDCloud1
    NO

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

Remember that whenever you make changes to any cloud endpoint or server endpoint in the sync group, it will be synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, changes first need to be discovered by an Azure File Sync change detection job. A change detection job is only initiated for a cloud endpoint once every 24 hours.

Take note that Azure does not overwrite any files in your sync group. Instead, when the same file is changed on two endpoints at roughly the same time, both copies are kept. The most recently written change keeps the original file name.

The older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to the filename. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the endpoint name follows this taxonomy:

– <FileNameWithoutExtension>-<endpointName>[-#].<ext>

– For example, tutorials-FileServer1.docx

Azure File Sync supports 100 conflict files per file. Once the maximum number of conflict files has been reached, the file will fail to sync until the number of conflict files is less than 100.

Hence, this statement is correct: dojo.mp4 will be synced to FileServer1.

The following statements are incorrect because Azure File Sync will not overwrite any files in your endpoints. It will simply append a conflict number to the filename of the older file, while the most recent change will retain the original file name.

– tutorials.docx on FileServer1 will be overwritten by tutorials.docx from TDCloud1.

– tutorials.docx on TDCloud1 will be overwritten by tutorials.docx from FileServer1.

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide

23
Q

2-20. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company has an Azure Log Analytics workspace in their Azure subscription.

You are instructed to find the error in the table named EventLogs.

Which log query should you run in the workspace?

  1. search in (EventLogs) “error”
  2. EventLogs | take 10
  3. search “error”
  4. EventLogs | sort by TimeGenerated desc
A
  1. search in (EventLogs) “error”

Azure Monitor is a service in Azure that provides performance and availability monitoring for applications and services in Azure, other cloud environments, or on-premises. Azure Monitor collects data from multiple sources into a common data platform where it can be analyzed for trends and anomalies. Rich features in Azure Monitor assist you in quickly identifying and responding to critical situations that may affect your application.

To retrieve data from the Log Analytics workspace, you need to use the Kusto Query Language (KQL). Remember that there are different types of log queries in Azure Monitor. Based on the given question, you only need to find “error” in the table named “EventLogs.”

With search queries, you can find the specific value that you need in your table. This query searches the “TableName” table for records that contain the word “value”:

search in (TableName) “value”

If you omit the “in (TableName)“ part and just run the search “value”, the search will go over all tables, which would take longer and be less efficient.
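As an illustrative sketch, the scoped search can also be combined with other KQL operators. The time filter and sort below are optional additions (assuming the table exposes the standard TimeGenerated column), not part of the required answer.

// Find records containing "error" in the EventLogs table, newest first, over the last 24 hours
search in (EventLogs) "error"
| where TimeGenerated > ago(24h)
| sort by TimeGenerated desc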

Hence, the correct answer is: search in (EventLogs) “error”.

The option that says: EventLogs | take 10 is incorrect because this option would only take 10 results in the EventLogs table. Remember that the requirement in the scenario is to show all the logs containing the word “error” in the table named EventLogs.

The option that says: search “error” is incorrect because this query would search “error” in all the tables. Take note that you only need to query the table EventLogs.

The option that says: EventLogs | sort by TimeGenerated desc is incorrect because this query will only sort the entire EventLogs table by the TimeGenerated column.

References:

https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/get-started-queries

https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/log-analytics-tutorial

24
Q

2-23. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
You deployed ten web servers that are running in Windows Server 2019 virtual machines behind an Azure load balancer. The virtual machines host a stateless web application.

You need to ensure that successive requests from the same client IP address and protocol will be handled by the same virtual machine.

What should you configure in the load balancer?

  1. Set idle timeout to the maximum available limit.
  2. Set the session persistence to Client IP and protocol.
  3. Configure Client IP as the session persistence type.
  4. Enable floating IP.
A
  1. Set the session persistence to Client IP and protocol.

Azure Load Balancer is a Layer-4 (TCP, UDP) load balancer that provides high availability by distributing incoming traffic among healthy VMs. A load balancer health probe monitors a given port on each VM and only distributes traffic to an operational VM. You define a front-end IP configuration that contains one or more public IP addresses. This front-end IP configuration allows your load balancer and applications to be accessible over the Internet.

To redirect the client request to the same virtual machine, you need to add a session persistence in the load balancing rule. Session persistence specifies that traffic from a client should be handled by the same virtual machine in the backend pool for the duration of a session.

There are three options in session persistence:

– None – specifies that successive requests from the same client may be handled by any virtual machine.

– Client IP – specifies that the same virtual machine will handle successive requests from the same client IP address.

– Client IP and protocol – specifies that the same virtual machine will handle successive requests from the same client IP address and protocol combination.

Since the requirement in the scenario is to handle the same client IP address and protocol, you need to set the Session Persistence to Client IP and protocol.
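For reference, session persistence is set per load-balancing rule. A minimal Azure CLI sketch is shown below; the resource group, load balancer, and rule names are hypothetical placeholders.

# Set the "Client IP and protocol" persistence mode (SourceIPProtocol) on an existing load-balancing rule
az network lb rule update \
  --resource-group TD-RG \
  --lb-name TD-LB \
  --name WebRule \
  --load-distribution SourceIPProtocol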

Hence, the correct answer is: Set the session persistence to Client IP and protocol.

The option that says: Configure Client IP as the session persistence type is incorrect because the requirement in the scenario is the same client IP address and protocol. This type of configuration is only applicable if you want to persist the same client IP address, excluding its protocol.

The option that says: Set idle timeout to the maximum available limit is incorrect because the maximum available limit in idle timeout is 30 minutes. Also, idle timeout is used to keep TCP or HTTP connections open without relying on clients to send keep-alive messages. You don’t need to set idle timeout because the only requirement is to redirect the same client IP address and protocol to the same virtual machine.

The option that says: Enable Floating IP is incorrect because this feature just changes the IP address mapping to the front-end IP of the load balancer. The Floating IP feature is not capable of handling sticky sessions.

References:

https://docs.microsoft.com/en-us/azure/load-balancer/manage

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview

25
Q

2-26. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
A company plans to deploy an Azure Virtual Machine with the following parameters:

Region: South Central US

OS disk type: Standard HDD

Ultra Disk compatibility: Disabled

Managed disks: Disabled

To prevent downtime, you need to make sure that the instance can be moved in different Availability Zones using Site Recovery.

Which parameter should be modified?

  1. Managed disks
  2. Region
  3. OS disk type
  4. Ultra Disk compatibility
A
  1. Managed disks

Azure Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to a secondary location, and access apps from there. After the primary location is running again, you can fail back to it.

Site Recovery can manage replication for:

– Azure VMs replicating between Azure regions
– Replication from Azure Public Multi-Access Edge Compute (MEC) to the region
– Replication between two Azure Public MECs
– On-premises VMs, Azure Stack VMs, and physical servers

Managed disks are designed to have a 99.999% uptime. Managed disks achieve this by storing three copies of your data, resulting in high durability. If one or two replicas fail, the remaining replicas help ensure data persistence and high failure tolerance.

With Azure Site Recovery, you can move single-instance VMs into Availability Zones in a target region. However, in order to move a VM to an Availability Zone, you must first ensure that the VM is using managed disks. You can also convert existing Windows VMs that use unmanaged disks to use managed disks.
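A minimal Azure CLI sketch of the conversion is shown below, using hypothetical resource group and VM names; the VM must be deallocated before converting its disks.

# Deallocate the VM, convert its unmanaged disks to managed disks, then start it again
az vm deallocate --resource-group TD-RG --name TD-VM1
az vm convert --resource-group TD-RG --name TD-VM1
az vm start --resource-group TD-RG --name TD-VM1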

Hence, the correct answer is: Managed disks.

Region is incorrect because South Central US already allows you to select Availability Zone as an option. This means that you can move the VM to different AZs. Take note that some Regions or locations don’t support AZs.

OS disk type is incorrect because the disk type does not matter as long as the VM is configured to use managed disks in the advanced disk configuration.

Ultra Disk compatibility is incorrect because the requirement is not related to data-intensive workloads. You only need to ensure that the VMs can be moved to a different AZ in the event of a disaster.

References:

https://learn.microsoft.com/en-us/azure/site-recovery/move-azure-vms-avset-azone

https://learn.microsoft.com/en-us/azure/virtual-machines/windows/convert-unmanaged-to-managed-disks

26
Q

2-28. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Your company has an Azure subscription named TD-Sub1 that contains the resources shown in the table below.
(image)

You created a new Azure subscription named TD-Sub2.

You plan to move the resources from TD-Sub1 to TD-Sub2.

Which resources in TD-Sub1 can you move to the new subscription?

  1. Virtual machine, Virtual network, Recovery Services vault, and Storage account
  2. Virtual machine, Virtual network, and Storage account
  3. Virtual machine, Virtual network, and Recovery Services vault
  4. Virtual machine and Virtual network
A
  1. Virtual machine, Virtual network, Recovery Services vault, and Storage account

A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.

If you need to move your resources to a new subscription or resource group under the same subscription, you can use Azure portal, Azure PowerShell, Azure CLI, or the REST API. Take note that when you move a resource to a new resource group or subscription, the location of the resource won’t change.
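For illustration, a single resource can be moved across subscriptions with one Azure CLI call. The resource group names and IDs below are hypothetical placeholders.

# Move a virtual machine from TD-Sub1 to a resource group in TD-Sub2
az resource move \
  --destination-group TD-RG2 \
  --destination-subscription-id <TD-Sub2-subscription-id> \
  --ids /subscriptions/<TD-Sub1-subscription-id>/resourceGroups/TD-RG1/providers/Microsoft.Compute/virtualMachines/TD-VM1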

Hence, the correct answer is: Virtual machine, Virtual network, Recovery Services vault, and Storage account.

The following options are incorrect because all of these resources can be moved to a new subscription or resource group, so each of these answers is incomplete.

– Virtual machine, Virtual network, and Storage account

– Virtual machine, Virtual network, and Recovery Services vault

– Virtual machine and Virtual network

References:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview

27
Q

2-30. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
You are managing a Microsoft Entra ID tenant and a Microsoft 365 tenant.

You need to grant several users who must belong to the same Azure group temporary access to the Microsoft SharePoint document library. The group must automatically be deleted after 180 days for compliance purposes.

Which two actions could you perform?

  1. Set up a dynamic membership on Microsoft 365 groups.
  2. Set up an assigned membership on security groups.
  3. Set up an assigned membership on Microsoft 365 groups.
  4. Set up a dynamic membership on security groups.
  5. Set up an external identity provider.
A
  1. Set up a dynamic membership on Microsoft 365 groups.
  2. Set up an assigned membership on Microsoft 365 groups.

Microsoft Entra ID is a cloud-based identity and access management service that enables your employees access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.

Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.

When creating a new group in Microsoft Entra ID, you can select two types of membership.

– The assigned membership type lets you add specific users as members of the group and gives them unique permissions.

– The dynamic membership type lets you add and remove members automatically based on your dynamic membership rules (user attributes such as department, location, or job title).

Since you need to delete the groups automatically, you can set an expiration policy in Microsoft 365 groups. Take note that when a group expires, all of its associated services will also be deleted.

Hence, the correct answers are:

– Set up a dynamic membership on Microsoft 365 groups.

– Set up an assigned membership on Microsoft 365 groups.

The options that say: Set up an assigned membership on security groups and Set up a dynamic membership on security groups are incorrect because the group expiration policy can only be applied to Microsoft 365 groups; you cannot configure automatic expiration for security groups.

The option that says: Set up an external identity provider is incorrect because external identities only allow users outside your organization to access your resources. This option won’t help you create an expiration policy.

References:

https://docs.microsoft.com/en-us/microsoft-365/solutions/microsoft-365-groups-expiration-policy

https://learn.microsoft.com/en-us/entra/fundamentals/how-to-manage-groups

https://learn.microsoft.com/en-us/entra/identity/users/groups-create-rule

28
Q

2-32. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
You are managing an Azure subscription that contains the following resources:
(image)

You plan to configure a proximity placement group for the TD-VMSS1 virtual machine scale set.

Which of the following proximity placement groups should you use?

  1. TD-Proximity1 and TD-Proximity3
  2. TD-Proximity2
  3. TD-Proximity3
  4. TD-Proximity1, TD-Proximity2, and TD-Proximity3
A
  1. TD-Proximity3

Azure virtual machine scale sets let you create and manage a group of load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications and allow you to centrally manage, configure, and update a large number of VMs. With virtual machine scale sets, you can build large-scale services for areas such as compute, big data, and container workloads.

A proximity placement group is a logical grouping used to make sure that Azure compute resources are physically located close to each other. Proximity placement groups are useful for workloads where low latency is a requirement. When you assign your virtual machines to a proximity placement group, the virtual machines are placed in the same data center, resulting in lower and deterministic latency for your applications.

It is stated in the scenario that you must configure a placement group for TD-VMSS1. Among the given placement groups, you can only assign TD-Proximity3 since it belongs to the same region as TD-VMSS1. Remember that when you configure a proximity placement group for a virtual machine scale set, both the placement group and the scale set must be in the same region.
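A minimal Azure CLI sketch of the general pattern is shown below; the names, region, and image are hypothetical placeholders rather than the scenario’s actual values.

# Create a proximity placement group and deploy a scale set into it (both in the same region)
az ppg create --resource-group MyRG --name MyPPG --location eastasia
az vmss create --resource-group MyRG --name MyScaleSet --image Ubuntu2204 --ppg MyPPG --location eastasia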

Hence, the correct answer is: TD-Proximity3.

The option that says: TD-Proximity2 is incorrect because TD-VMSS1 is located in East Asia and not in Australia East. Although both TD-VMSS1 and TD-Proximity2 belong to the same resource group, take note that the location of the resource group is irrelevant in this scenario. You should assign TD-VMSS1 to the TD-Proximity3 placement group to satisfy the requirement.

The option that says: TD-Proximity1 and TD-Proximity3 is incorrect. You can’t configure TD-Proximity1 for TD-VMSS1 since TD-Proximity1 is located in Southeast Asia while TD-VMSS1 is in East Asia. The region of the virtual machine scale set and the proximity placement group should be the same.

The option that says: TD-Proximity1, TD-Proximity2, and TD-Proximity3 is incorrect because you can only assign TD-VMSS1 in TD-Proximity3. In this scenario, both the virtual machine scale set and proximity placement group must be in the same region.

References:

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/proximity-placement-groups-portal

https://azure.microsoft.com/en-us/blog/announcing-the-general-availability-of-proximity-placement-groups/

29
Q

2-33. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company has an Azure subscription that contains the following resources:
(image1)

You are instructed to monitor the storage account and configure an SMS notification for the following signals.
(image2)

How many alert rules and action groups should you create?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. Alert rules
    ?
  2. Action groups
    ?
A
  1. Alert rules
    4
  2. Action groups
    3

Azure Monitor is a service in Azure that provides performance and availability monitoring for applications and services in Azure, other cloud environments, or on-premises. Azure Monitor collects data from multiple sources into a common data platform where it can be analyzed for trends and anomalies. Rich features in Azure Monitor assist you in quickly identifying and responding to critical situations that may affect your application.

Action rules help you define or suppress actions at any Azure Resource Manager scope (Azure subscription, resource group, or target resource). It has various filters that can help you narrow down the specific subset of alert instances that you want to act on.

An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered. Various alerts may use the same action group or different action groups depending on the user’s requirements.

The requirement in the scenario is to identify how many alert rules and action groups should be created. Based on the given signal types, you should create four alert rules. Take note that you need to create one alert rule per signal type.

For the action groups, you only need to create 3 action groups because the users that will be notified for Availability and Create/Update Storage Account are the same (User 1, User 2, and User 3). Remember that action groups are created for each unique set of users that will be notified.

Therefore, the correct answers are:

– Alert rules = 4

– Action groups = 3

References:

https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-action-rules

https://docs.microsoft.com/en-us/azure/azure-monitor/platform/action-groups

30
Q

2-34. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
You are managing an Azure subscription that has an Azure AD tenant named tutorialsdojo.onmicrosoft.com. The tenant contains the following users:
(image1)

You created the following security groups in tutorialsdojo.onmicrosoft.com:
(image2)

The tutorialsdojo.onmicrosoft.com contains the following Windows 10 devices:
(image3)

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. TD-User1 can add TD-Device2 to TD-SG2.
  2. TD-User1 can add TD-Device2 to TD-SG1.
  3. TD-User2 can add TD-Device1 to TD-SG1.
A
  1. TD-User1 can add TD-Device2 to TD-SG2.
    NO
  2. TD-User1 can add TD-Device2 to TD-SG1.
    YES
  3. TD-User2 can add TD-Device1 to TD-SG1.
    NO

Azure Active Directory (Azure AD) is Microsoft’s cloud-based identity and access management service, which helps your employees sign in and access resources in external (such as Microsoft 365, the Azure portal, and thousands of other SaaS applications) and internal resources (such as apps on your corporate network and intranet, along with any cloud apps developed by your own organization).

Azure Role-Based Access Control (Azure RBAC) has several Azure built-in roles that you can assign to users, groups, service principals, and managed identities. Role assignments are the way you control access to Azure resources. If the built-in roles don’t meet the specific needs of your organization, you can create your own Azure custom roles. For the given scenario, the Owner role has full access to manage all resources, including the ability to assign roles in Azure RBAC.

In the given scenario, there are three Azure AD roles:

  1. User Administrator role – can create users and manage all aspects of users with some restrictions, and can update password expiration policies. Additionally, users with this role can create and manage all groups.
  2. Cloud Device Administrator role – can enable, disable, and delete devices in Azure AD and read Windows 10 BitLocker keys (if present) in the Azure portal. The role does not grant permission to manage any other properties on the device.
  3. Security Administrator role – has permissions to manage security-related features in the Microsoft 365 security center, Azure Active Directory Identity Protection, Azure Active Directory Authentication, Azure Information Protection, and Office 365 Security & Compliance Center.

To organize users or devices by geographic location, department, or hardware characteristics, you can create the following types of groups:

  1. Assigned – the Administrators can manually assign users or devices to this group, and manually remove users or devices.
  2. Dynamic – automatically add/remove users or devices to user groups or device groups based on an expression you create. For example, when a user is added with the manager title, the user is automatically added to an All managers users group. Or, when a device has the iOS/iPadOS device OS type, the device is automatically added to an All iOS/iPadOS devices group.

To get a device in Azure AD, you have multiple options:

  1. Azure AD registered – devices that are Azure AD registered are typically personally owned or mobile devices, and are signed in with a personal Microsoft account or another local account.
  2. Azure AD joined – devices that are Azure AD joined are owned by an organization, and are signed in with an Azure AD account belonging to that organization. They exist only in the cloud.
  3. Hybrid Azure AD joined – devices that are hybrid Azure AD joined are owned by an organization and are signed in with an Active Directory Domain Services account belonging to that organization. They exist in the cloud and on-premises.

The statement that says: TD-User1 can add TD-Device2 to TD-SG1 is correct because TD-User1 has the User Administrator role and is also the owner of TD-SG1. Since the membership type of TD-SG1 is assigned, TD-User1 is able to add or assign TD-Device2 to the group.

The statement that says: TD-User2 can add TD-Device1 to TD-SG1 is incorrect. Take note that the Cloud Device Administrator role can only manage devices in Azure AD. The role does not have permission to manage groups the way the User Administrator role does. However, if TD-User2 were the owner of TD-SG1, it would have permission to add TD-Device1 to TD-SG1.

The statement that says: TD-User1 can add TD-Device2 to TD-SG2 is incorrect because TD-SG2 is a dynamic group. This means that users and devices are automatically added to the group. In short, TD-User1 can’t manually add or remove users and devices in TD-SG2.

References:

https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference

https://docs.microsoft.com/en-us/mem/intune/fundamentals/groups-add

https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles

31
Q

2-36. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

You deployed an Ubuntu server using Azure Virtual Machine.

You received an email notification that your resources will be affected by the planned maintenance.

You need to migrate the virtual machine to a new Azure host.

Solution: Redeploy the virtual machine.

Does the solution meet the goal?

No
Yes

A

YES

Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers. Typically, you choose a VM when you need more control over the computing environment. An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and installing the software that runs on it.

When you redeploy a VM, it moves the VM to a new node within the Azure infrastructure and then powers it back on. This means that the virtual machine will be unavailable when the redeployment is in progress. Since the requirement in the scenario is to migrate the VM to a new Azure host then redeploying the virtual machine will satisfy the requirement.
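For reference, redeploying to a new host is a single operation. A minimal Azure CLI sketch with hypothetical resource group and VM names:

# Move the VM to a new Azure host and power it back on
az vm redeploy --resource-group TD-RG --name TD-Ubuntu1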

Hence, the correct answer is: Yes.

References:

https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/redeploy-to-new-node-linux

https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/redeploy-to-new-node-windows

32
Q

2-38. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Note: This item is part of a series of case study questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your company has an Azure subscription that contains a virtual network with a subnet named TDSub1 and a virtual machine named TD1 with a public IP address and is configured to allow Remote Desktop Connections.

TDSub1 is the subnet of TD1.

You created two network security groups named TDSG-TD1 attached to the network interface of TD1 and TDSG-TDSub1 attached to TDSub1.

TDSG-TDSub1 uses default inbound security rules while TDSG-TD1 has the default inbound security rules with a custom rule:

Name: RDP
Priority: 100
Source: Any
Source port range: *
Destination: *
Destination port range: 3389
Protocol: ICMP
Action: Allow
You need to ensure that you can connect to TD1 from the Internet using Remote Desktop connections.

Solution: You add an inbound security rule to TDSG-TDSub1 with the following configuration:

Priority: 200
Source: Any
Source port range: *
Destination: *
Destination port range: 3389
Protocol: TCP
Action: Allow
Does this meet the goal?

Yes
No

A

NO

Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

The solution in this scenario states that you will add a new inbound security rule that allows port 3389 traffic from the Internet using TCP protocol to TDSG-TDSub1.

In this scenario, the Remote Desktop connection is first evaluated by the security rules in TDSG-TDSub1, since it is associated with TDSub1 and TD1 is in TDSub1. With the new rule in place, the connection is allowed at the subnet level and is evaluated next by TDSG-TD1. The connection is then denied because the custom RDP rule in TDSG-TD1 only allows port 3389 traffic using the ICMP protocol, so TCP-based Remote Desktop traffic never matches it and falls through to the DenyAllInbound default rule.

You should modify the current custom rule of TDSG-TD1 by changing the protocol from ICMP to TCP, or you can create a new inbound security rule in TDSG-TD1 that allows port 3389 traffic from the Internet using the TCP protocol.
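A minimal Azure CLI sketch of the first fix (changing the existing rule’s protocol from ICMP to TCP); the resource group name is a hypothetical placeholder.

# Change the RDP rule on TDSG-TD1 to allow TCP instead of ICMP on destination port 3389
az network nsg rule update \
  --resource-group TD-RG \
  --nsg-name TDSG-TD1 \
  --name RDP \
  --protocol Tcp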

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview

33
Q

2-39. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Note: This item is part of a series of case study questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your company has an Azure subscription that contains a virtual network with a subnet named TDSub1 and a virtual machine named TD1 with a public IP address and is configured to allow Remote Desktop Connections.

TDSub1 is the subnet of TD1.

You created two network security groups named TDSG-TD1 attached to the network interface of TD1 and TDSG-TDSub1 attached to TDSub1.

TDSG-TDSub1 uses default inbound security rules while TDSG-TD1 has the default inbound security rules with a custom rule:

Name: RDP
Priority: 100
Source: Any
Source port range: *
Destination: *
Destination port range: 3389
Protocol: TCP
Action: Allow
You need to ensure that you can connect to TD1 from the Internet using Remote Desktop connections.

Solution: You add an inbound security rule to TDSG-TDSub1 with the following configuration:

Priority: 200
Source: Service tag
Source port range: Virtual Network
Destination: *
Destination port range: 3389
Protocol: TCP
Action: Allow
You disassociate TDSG-TD1 from the network interface of TD1.

Does this meet the goal?

No
Yes

A

NO

Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

In this scenario, TDSG-TD1 has been disassociated from the network interface, so the Remote Desktop connection is evaluated only by the security rules in TDSG-TDSub1, which is associated with TDSub1. The new inbound rule uses the VirtualNetwork service tag as its source, so traffic arriving from the Internet does not match it and is instead denied by the DenyAllInbound default security rule. The connection therefore still fails.

It is recommended that you associate a network security group to a subnet or a network interface, but not both. Since rules in a network security group associated with a subnet can conflict with rules in a network security group associated with a network interface, you can have unexpected communication problems that require troubleshooting.

To allow port 3389 from the Internet to TD1, create an inbound security rule in TDSG-TDSub1 that allows port 3389 from the Internet (for example, a source of Any or the Internet service tag) instead of the VirtualNetwork service tag, and then disassociate TDSG-TD1 from the network interface of TD1.

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview

34
Q
  1. QUESTION
    Category: AZ-104 – Manage Azure Identities and Governance
    Your company has a subscription to Azure that has multiple virtual machines.

You have been tasked to manage all computing resources.

You must determine unattached disks that can be deleted in order to reduce costs.

Which of the following options should you use?

  1. Use Azure Cost Management cost analysis to download resources data.
  2. Use Azure Advisor to identify low usage virtual machines.
  3. Use Azure Cost Management to view Advisor recommendations.
  4. Use Azure Monitor VM insights.
A
  1. Use Azure Cost Management to view Advisor recommendations.

Cost Management shows the organizational cost and usage patterns with advanced analytics. Reports in Cost Management show the usage-based costs consumed by Azure services and third-party Marketplace offerings. The reports help you understand your spending and resource use and can help find spending anomalies. Cost Management uses Azure management groups, budgets, and recommendations to show clearly how your expenses are organized and how you might reduce costs.

Advisor recommendations show how to optimize and improve efficiency by identifying idle and underutilized resources. Alternatively, they can display less expensive resource options. When you follow the advice, you change the way you use your resources to save money.

It is important to note that deleting the disk eliminates the possibility of recovery. Azure recommends taking a snapshot before deleting data or ensuring that the data on the disk is no longer required.

Hence, the correct answer is: Use Azure Cost Management to view Advisor recommendations.

The option that says: Use Azure Advisor to identify low usage virtual machines is incorrect because you only need to identify unattached disks and not the virtual machine usage.

The option that says: Use Azure Cost Management cost analysis to download resources data is incorrect because cost analysis only helps you view accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget.

The option that says: Use Azure Monitor VM insights is incorrect because you don’t need to monitor the health and performance of virtual machines just to identify unattached disks.

References:

https://learn.microsoft.com/en-us/azure/advisor/advisor-reference-cost-recommendations

https://learn.microsoft.com/en-us/azure/cost-management-billing/cost-management-billing-overview

35
Q

2-43. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your organization has an e-commerce application hosted in a virtual machine.

You need to create a protection using Azure Backup. The backup must be created daily at 6:00 and stored for at least 180 days.

What should you use?

Storage
?

Protection
?

A

Storage
Recovery Services Vault

Protection
Backup Policy

A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.

In this scenario, you’ll need to use Azure Backup to protect the application from any unexpected circumstances. Since the main requirement is to create a backup at a specific time and retain the data for a number of days, you can just configure a backup policy.

Therefore, the correct answers are:

– Storage = Recovery Services Vault

– Protection = Backup Policy

References:

https://learn.microsoft.com/en-us/azure/backup/backup-azure-arm-vms-prepare

https://learn.microsoft.com/en-us/azure/backup/backup-azure-vms-first-look-arm

36
Q

2-44. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
You have created a new Azure Active Directory user named TD-Juan.

You need to make sure that TD-Juan is able to assign an Azure policy to the root management group.

Which of the following options should you do?

  1. Create a management group and assign the role of Owner.
  2. Assign the role of Owner and enable access management for Azure resources.
  3. Assign the role of Global Administrator and enable access management for Azure resources.
  4. Create a management group and assign the role of Contributor.
A
  1. Assign the role of Global Administrator and enable access management for Azure resources.

Azure Active Directory (Azure AD) is a service for managing identity and access in the cloud. This service enables your employees to access external resources such as Microsoft 365, the Azure portal, and thousands of other SaaS apps. Azure Active Directory also allows them to access internal resources such as apps on your corporate intranet network, as well as any cloud apps developed specifically for your organization.

Each directory is given a single top-level management group called the root management group. The root management group is built into the hierarchy to have all management groups and subscriptions fold up to it. This root management group allows for global policies and Azure role assignments to be applied at the directory level.

No one is given default access to the root management group. Azure AD Global Administrators are the only users that can elevate themselves to gain access. Once they have access to the root management group, the global administrators can assign any Azure role to other users to manage it.

If you are a Global Administrator in Azure AD, you can assign yourself access to all Azure subscriptions and management groups in your directory. Use this capability if you don’t have access to Azure subscription resources, such as virtual machines or storage accounts, and you want to use your Global Administrator privilege to gain access to those resources.

In this scenario, you need to have a role of Global Administrator and activate access management for Azure resources in Azure AD. After that, you should now have access to all subscriptions and management groups in your directory. When you view the Access control (IAM) pane, you’ll notice that you have been assigned the User Access Administrator role at root scope.
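The elevation itself can be done with the “Access management for Azure resources” toggle on the Azure AD properties page in the portal, or by calling the documented elevateAccess REST endpoint. A hedged sketch using the generic az rest helper:

# Elevate a signed-in Global Administrator to User Access Administrator at the root scope (/)
az rest --method post --url "https://management.azure.com/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"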

Hence, the correct answer is: Assign the role of Global Administrator and enable access management for Azure resources.

The option that says: Create a management group and assign the role of Owner is incorrect because this option still won’t allow you to assign a policy to the tenant root group.

The option that says: Create a management group and assign the role of Contributor is incorrect. Just like the option above, even if you assign a Contributor role, you still need to enable access management for Azure resources.

The option that says: Assign the role of Owner and enable access management for Azure resources is incorrect because this role only has access at the subscription level. You can’t use this role to access the tenant root management group.

References:

https://learn.microsoft.com/en-us/azure/governance/management-groups/overview

https://learn.microsoft.com/en-us/azure/role-based-access-control/rbac-and-directory-admin-roles

https://learn.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin

37
Q

2-45. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
Your company has an app that is currently running in an Azure Kubernetes Service cluster called TD-Manila.

You have been assigned to configure cluster autoscaling.

Which two actions can you use?

  1. Use az group create.
  2. Use Azure portal.
  3. Use kubectl commands.
  4. Use az vmss scale.
  5. Use az aks commands.
A

  1. Use Azure portal.
  2. Use az aks commands.

Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. Thus, AKS is free; you only pay for the agent nodes within your clusters, not for the masters.

AKS clusters can scale in one of two ways:

– The cluster autoscaler watches for pods that can’t be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes.

– The horizontal pod autoscaler uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand.

If you need to create an AKS cluster, use the az aks create command. To enable and configure the cluster autoscaler on the node pool for the cluster, use the --enable-cluster-autoscaler parameter, and specify a node --min-count and --max-count. You can also enable autoscaling in the Azure portal.
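A minimal Azure CLI sketch of enabling the cluster autoscaler on an existing cluster is shown below; the resource group name and node-count limits are hypothetical placeholders.

# Enable the cluster autoscaler on TD-Manila with between 1 and 5 nodes
az aks update \
  --resource-group TD-RG \
  --name TD-Manila \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5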

Hence, the correct answers are:

– Use az aks commands.

– Use Azure portal.

The option that says: Use az vmss scale is incorrect because this command is for virtual machine scale sets.

The option that says: Use kubectl commands is incorrect because kubectl is used to configure horizontal pod autoscaling, not the cluster autoscaler.

The option that says: Use az group create is incorrect because it is mainly used for creating resource groups.

References:

https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest

https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler

https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli

38
Q

2-46. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription with an Azure storage account named TD1 that is encrypted at rest and contains the following containers:
(image)

You need to create an encryption key that is dedicated only to TD3. The solution must minimize administrative effort.

What should you do?

  1. Create a new key vault.
  2. Create a new encryption key and apply it to all containers.
  3. Apply an encryption scope to TD3.
  4. Move TD3 to another storage account.
A
  1. Apply an encryption scope to TD3.

Encryption scope enables you to manage encryption at the level of an individual blob or container. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers.

By default, a storage account is encrypted with a key that is scoped to the entire storage account. When you define an encryption scope, you specify a key that may be scoped to a container or an individual blob.

When the encryption scope is applied to a blob, the blob is encrypted with that key. When the encryption scope is applied to a container, it serves as the default scope for blobs in that container so that all blobs that are uploaded to that container may be encrypted with the same key.

The container can be configured to enforce the default encryption scope for all blobs in the container or to permit an individual blob to be uploaded to the container with an encryption scope other than the default.

Take note that one of the requirements states that you must minimize administrative effort.
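As a hedged Azure CLI sketch (hypothetical lowercase account, container, scope, and file names, since storage names must be lowercase), you could create the scope once and then use it when writing blobs to TD3:

# Create an encryption scope on the storage account
az storage account encryption-scope create --resource-group TD-RG --account-name td1storage --name td3scope
# Upload a blob to the td3 container encrypted with that scope
az storage blob upload --account-name td1storage --container-name td3 --name report.docx --file ./report.docx --encryption-scope td3scope --auth-mode login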

Hence, the correct answer is: Apply an encryption scope to TD3.

The option that says: Move TD3 to another storage account is incorrect because you do not need to move data from TD3 to another storage account; you can satisfy the requirement by simply applying an encryption scope to TD3. Remember that one of the requirements is to minimize administrative effort.

The option that says: Create a new encryption key and apply it to all containers is incorrect because the encryption key for TD3 must be different from TD1 and TD2. With encryption scopes, you can create a separate key that is solely dedicated to TD3.

The option that says: Create a new key vault is incorrect. Although it is possible, creating a new key vault will require more administrative effort. You can instead use encryption scopes by using the same key vault that is used by TD1 and TD2.

References:

https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview

https://learn.microsoft.com/en-us/azure/storage/blobs/encryption-scope-overview

39
Q

2-49. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription named Boracay.
You plan on creating a storage account for your media files. The media files must be asynchronously copied to another storage account to minimize latency for your users.
What storage account type should the source and target account be?

For each, select from: Standard general-purpose v2, Premium block blobs, Premium file shares, & Premium page blobs

Source account
?
Target account
?

A

Source account
(General-purpose v2 or premium block blob)

Target account
(General-purpose v2 or premium block blob)

Object replication asynchronously copies block blobs between a source storage account and a destination account. Some scenarios supported by object replication include:

– Minimizing latency. Object replication can reduce latency for read requests by enabling clients to consume data from a region that is in closer physical proximity.

– Increase efficiency for compute workloads. With object replication, compute workloads can process the same sets of block blobs in different regions.

– Optimizing data distribution. You can process or analyze data in a single location and then replicate just the results in additional regions.

– Optimizing costs. After your data has been replicated, you can reduce costs by moving it to the archive tier using life cycle management policies.

Object replication is supported for general-purpose v2 storage accounts and premium block blob accounts. Both the source and destination accounts must be either general-purpose v2 or premium block blob accounts. Object replication supports block blobs only; append blobs and page blobs aren’t supported.

Object replication requires that blob versioning is enabled on both the source and destination accounts. When a replicated blob in the source account is modified, a new version of the blob is created in the source account that reflects the previous state of the blob before modification. The current version in the source account reflects the most recent updates. Both the current version and any previous versions are replicated to the destination account.
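As a hedged Azure CLI sketch (hypothetical account, resource group, and container names), enabling versioning and change feed and then creating an object replication policy on the destination account might look like this:

# Enable blob versioning on both accounts (and change feed on the source, which object replication also needs)
az storage account blob-service-properties update --resource-group TD-RG --account-name srcmedia --enable-versioning true --enable-change-feed true
az storage account blob-service-properties update --resource-group TD-RG --account-name dstmedia --enable-versioning true
# Create the replication policy on the destination account, pointing at the source container
az storage account or-policy create --resource-group TD-RG --account-name dstmedia --source-account srcmedia --source-container media --destination-container media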

Therefore, for both the source and target storage accounts, you can only use general-purpose v2 or premium block blob storage accounts.
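
As a rough Azure CLI sketch (account, resource group, and container names are hypothetical, and the or-policy parameters are assumed to match your CLI version), you would first enable blob versioning and change feed and then create the object replication policy on the destination account:

# Hypothetical names; run the versioning command for the destination account as well
az storage account blob-service-properties update --resource-group td-rg --account-name tdsourceaccount --enable-versioning true --enable-change-feed true

az storage account or-policy create --resource-group td-rg --account-name tddestaccount --source-account tdsourceaccount --destination-account tddestaccount --source-container media --destination-container media-replica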

References:

https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview

https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-overview

40
Q

3-2. QUESTION
Category: AZ-104 – Implement and Manage Storage
Your company has an Azure Subscription that contains an Azure Container named TDContainer.

You are tasked with deploying a new Azure container instance that will run a custom-developed .NET application requiring persistent storage for operation.

You need to create a storage service that will meet the requirements for TDContainer.

What should you use?

  1. Azure Table storage
  2. Azure Queue storage
  3. Azure Blob storage
  4. Azure Files
A
  1. Azure Files

Containers are becoming the preferred way to package, deploy, and manage cloud applications. Azure Container Instances offers the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.

Azure Container Instances is a solution for any scenario that can operate in isolated containers, without orchestration. Run event-driven applications, quickly deploy from your container development pipelines, and run data processing and build jobs.

Containers offer significant startup benefits over virtual machines (VMs). Azure Container Instances can start containers in Azure in seconds, without the need to provision and manage VMs.

Bring Linux or Windows container images from Docker Hub, a private Azure container registry, or another cloud-based docker registry. Azure Container Instances caches several common base OS images, helping speed deployment of your custom application images.

By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from an external store. Azure Container Instances can mount an Azure file share created with Azure Files.

Azure Files offers fully managed file shares hosted in Azure Storage that are accessible via the industry standard Server Message Block (SMB) protocol. Using an Azure file share with Azure Container Instances provides file-sharing features similar to using an Azure file share with Azure virtual machines.

Azure Disks or Files are commonly used to provide persistent volumes for Azure Container Instances and Azure VMs.
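
A minimal Azure CLI sketch of mounting an Azure file share when creating a container instance (the resource names, share name, and image are hypothetical, and the storage account key is a placeholder):

# Hypothetical names and placeholder key
az container create --resource-group td-rg --name tdcontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --azure-file-volume-account-name tdstorage --azure-file-volume-account-key <storage-account-key> --azure-file-volume-share-name tdshare --azure-file-volume-mount-path /mnt/data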

Hence, the correct answer is: Azure Files.

Azure Queue Storage is incorrect because this service is simply used for storing large numbers of messages to enable communication between components of a distributed application.

Azure Table Storage and Azure Blob Storage are both incorrect because Azure Container Instances does not support direct integration of these services as mounted persistent volumes.

References:

https://docs.microsoft.com/en-us/azure/container-instances/container-instances-overview

https://docs.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files

41
Q

3-3. QUESTION
Category: AZ-104 – Implement and Manage Storage
Your company has an Azure subscription that contains an Azure Storage account named tutorialsdojoaccount.

There is a requirement to copy a virtual machine image to a container named tdimage from your on-premises datacenter. You need to provision an Azure Container instance to host the container image.

Which AzCopy command should you run?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. AzCopy _____
    ?
  2. “https://tutorialsdojoaccount.____.core.windows.net/tdimage”
    ?
A
  1. AzCopy ______
    Make
  2. “https://tutorialsdojoaccount.____.core.windows.net/tdimage”
    blob

The Azure Storage platform is Microsoft’s cloud storage solution for modern data storage scenarios. Core storage services offer a massively scalable object store for data objects, disk storage for Azure virtual machines (VMs), a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store.

Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.

Blob storage is designed for:

– Serving images or documents directly to a browser.

– Storing files for distributed access.

– Streaming video and audio.

– Writing to log files.

– Storing data for backup and restore disaster recovery, and archiving.

– Storing data for analysis by an on-premises or Azure-hosted service.

A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. VHD files can be used to create custom images that can be stored in an Azure Blob container, which are used to provision virtual machines.

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. The azcopy make command is commonly used to create a container or a file share.

The correct syntax in creating a blob container is:

azcopy make “https://[account-name].blob.core.windows.net/[top-level-resource-name]”

For example:

azcopy make “https://myaccount.blob.core.windows.net/mycontainer”

Therefore, the correct answers are:

AzCopy = Make

https://tutorialsdojoaccount.____.core.windows.net/tdimage = Blob

Copy is incorrect because it simply copies source data to a destination location.

Sync is incorrect because it only replicates the source location to the destination location.

File is incorrect because using the file endpoint (file.core.windows.net) with azcopy make would create a file share. Take note that it is mentioned in the scenario that container images and instances are used.

Table is incorrect because this is just a NoSQL data store that accepts authenticated calls from inside and outside the Azure cloud which allows you to store large amounts of structured data.

Queue is incorrect because this simply provides cloud messaging between application components that allows you to decouple your applications so that they can scale independently.

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview

https://docs.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-make

42
Q

3-5. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your company has an Azure subscription named TDSubscription1. It contains the following resources:
(image)

Which subnet/s can you associate TDNSG1 with?

  1. You can associate it to the subnet of TDVnet2 only.
  2. You can associate it to the subnet of TDVnet3 only.
  3. You can associate it to the subnet of TDVnet1 only.
  4. You can associate it to the subnets of TDVnet1 and TDVnet2 only.
A
  1. You can associate it to the subnet of TDVnet3 only.

Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

You can only associate a network security group to a subnet or network interface within the same region as the network security group. A network security group itself can’t be moved from one region to another. However, you can use an Azure Resource Manager template to export the existing configuration and security rules of an NSG. You can then stage the resource in another region by exporting the NSG to a template, modifying the parameters to match the destination region, and then deploying the template to the new region.

Hence, the correct answer is: You can associate it to the subnet of TDVnet3 only.

The following options are incorrect because TDVnet1 and TDVnet2 are located in Southeast Asia. You can only associate a network security group to a subnet within the same region as the network security group.

– You can associate it to the subnets of TDVnet1 and TDVnet2 only

– You can associate it to the subnet of TDVnet1 only

– You can associate it to the subnet of TDVnet2 only

References:

https://docs.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview

https://docs.microsoft.com/en-us/azure/virtual-network/move-across-regions-nsg-portal

43
Q

3-6. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your company has a virtual network named TDVnet1 and a policy-based virtual network gateway named TD1 in your Azure subscription.

You have users that need to access TDVnet1 from a remote location.

Which two actions should you do so your users can establish a point-to-site connection to TDVnet1?

  1. Download and install the VPN client configuration file
  2. Reset TD1
  3. Deploy a route-based VPN gateway
  4. Delete TD1
  5. Deploy a gateway subnet
A
  1. Deploy a route-based VPN gateway
  2. Delete TD1

Point-to-Site (P2S) VPN connection allows you to create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few clients that need to connect to a VNet.

When you configure a point-to-site VPN connection, you must use a route-based VPN type for your gateway. Policy-based VPN type for point-to-site VPN connection is not supported by Azure.

If you create a policy-based VPN type as your gateway, you need to delete it and deploy a route-based VPN gateway instead.
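
A minimal Azure CLI sketch of replacing the policy-based gateway with a route-based one (the resource group, public IP name, and SKU are hypothetical):

# Hypothetical resource group and public IP names
az network vnet-gateway delete --resource-group td-rg --name TD1

az network vnet-gateway create --resource-group td-rg --name TD1-RouteBased --vnet TDVnet1 --public-ip-address TD1-pip --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1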

Hence, the correct answers are:

– Delete TD1

– Deploy a route-based VPN gateway

The option that says: Deploy a gateway subnet is incorrect. A gateway subnet is a prerequisite for deploying a virtual network gateway, and since TDVnet1 already contains the gateway TD1, a gateway subnet already exists and you don’t have to deploy one again.

The option that says: Reset TD1 is incorrect. Resetting TD1 will not work since it is a policy-based VPN type. Take note that you need a route-based VPN type for point-to-site VPN connections.

The option that says: Download and install the VPN client configuration file is incorrect. Even if you have downloaded and installed the VPN client configuration file, the users still won’t be able to connect to TDVnet1 because TD1 is a policy-based VPN type. You have to delete TD1 first and deploy a new route-based VPN gateway.

References:

https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal

https://docs.microsoft.com/en-us/azure/vpn-gateway/point-to-site-about

44
Q

3-7. QUESTION
Category: AZ-104 – Implement and Manage Storage
Your company has an Azure subscription named TDSubscription1.

You plan to host your media assets to a storage account.

You created an Azure storage account named tutorialsdojostorage using the following parameters:
(image)

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. How many copies of your data will be maintained by the Azure storage account at the minimum?
  2. The files that you will host in tutorialsdojostorage are frequently accessed files. What setting should you modify?
A

How many copies of your data will be maintained by the Azure storage account at the minimum?
6
The files that you will host in tutorialsdojostorage are frequently accessed files. What setting should you modify?
Access tier

An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region for applications requiring high availability.
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.
Take note that Geo-redundant storage (GRS) maintains six copies total, including three copies in the primary region and three copies in the secondary region.

Azure storage offers different access tiers, allowing you to store blob object data in the most cost-effective manner. Available access tiers include:

Hot – Optimized for storing data that is accessed frequently.
Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.
Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements on the order of hours.
Therefore, you will have a total of 6 copies maintained because the replication setting is Geo-redundant storage (GRS). GRS keeps three synchronous copies within a single physical location in the primary region and then asynchronously copies your data to the secondary region, where another three copies are maintained, for a total of 6 copies.

For the second item, since the files are frequently accessed, you should change the access tier setting from the cool tier to the hot tier.
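
For reference, a minimal Azure CLI sketch of checking the replication setting and switching the default access tier to hot (the resource group name is hypothetical):

# Hypothetical resource group name
az storage account show --resource-group td-rg --name tutorialsdojostorage --query "sku.name"

az storage account update --resource-group td-rg --name tutorialsdojostorage --access-tier Hot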

The option that says: 3 is incorrect because only Locally redundant storage (LRS) and Zone-redundant storage (ZRS) maintain a total of 3 copies of data.

The options that say: 4 and 5 are incorrect because there is no Azure Storage redundancy type that maintains 4 or 5 copies of data. Only 3 for LRS and ZRS and 6 for GRS and GZRS.

Account Kind is incorrect because it simply offers several types of storage accounts, such as StorageV2, Storage, and BlobStorage. Each type supports different features and has its own pricing model.

Versioning is incorrect because this feature is for automatically maintaining the previous versions of an object. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it is erroneously modified or deleted.

Performance is incorrect because this tiering system is primarily used for determining the speed capability of your storage account. There are two types of performance tiers: Standard, optimized for high capacity/throughput, and Premium, optimized for high transaction rates and single-digit consistent storage latency.

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview

https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers

45
Q

3-8. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your company has an Azure subscription named TD-Subscription1 with the following resources:
(image1)

You need to use a DNS service that will resolve domains for your two virtual networks. You created an Azure private zone named tutorialsdojo.com.

You link TDVnet2 to tutorialsdojo.com with auto registration enabled. The parameters of your private zone are as follows:
(image2)

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. td2.tutorialsdojo.com is resolvable by TD4.
  2. When you create a virtual machine in TDVnet1, it will automatically register the A record of the VM to tutorialsdojo.com zone.
  3. td2.tutorialsdojo.com is resolvable by TD3.
A
  1. Yes
  2. No
  3. No

Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today.

Using custom domain names helps you to tailor your virtual network architecture to best suit your organization’s needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.

To resolve the records of a private DNS zone from your virtual network, you must link the virtual network with the zone. Linked virtual networks have full access and can resolve all DNS records published in the private zone. Additionally, you can also enable auto registration on a virtual network link.

The Azure DNS private zones auto registration feature takes the pain out of DNS record management for virtual machines deployed in a virtual network.

In addition to forward lookup records (A records), reverse lookup records (PTR records) are also automatically created for the virtual machines. If you add more virtual machines to the virtual network, DNS records for these virtual machines are also automatically created in the linked private DNS zone.

When you delete a virtual machine, the DNS records for the virtual machine are automatically deleted from the private DNS zone.
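
A minimal Azure CLI sketch of linking a virtual network to the private zone with auto registration enabled (the resource group and link names are hypothetical):

# Hypothetical resource group and link names
az network private-dns link vnet create --resource-group td-rg --zone-name tutorialsdojo.com --name tdvnet2-link --virtual-network TDVnet2 --registration-enabled true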

Take note that in this scenario, TDVnet2 is linked to the tutorialsdojo.com zone with auto registration enabled. This means that the DNS records of the virtual machines deployed in TDVnet2 are automatically created in the tutorialsdojo.com zone

Hence, this statement is correct: td2.tutorialsdojo.com is resolvable by TD4.

The statement that says: td2.tutorialsdojo.com is resolvable by TD3 is incorrect because TD3 is located in TDVnet1. Since TDVnet1 is not linked to the tutorialsdojo.com zone, TD3 will not have the capability to resolve it. You can link a virtual network to a private zone by heading over to virtual network links in your private zone.

The statement that says: When you create a virtual machine in TDVnet1, it will automatically register the A record of the VM to tutorialsdojo.com zone is incorrect. There are two conditions for the automatic registration of your VMs A records. First, you need to link the tutorialsdojo.com zone to TDVnet1 and second, you must enable auto registration. Since TDVnet1 is not linked to tutorialsdojo.com zone, you will not be able to automatically register the A records of your VMs.

References:

https://docs.microsoft.com/en-us/azure/dns/private-dns-overview

https://docs.microsoft.com/en-us/azure/dns/private-dns-autoregistration

46
Q

3-10. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You have an Azure virtual network named TDVnet1 that contains the following subnets shown below:
(image)

You plan to create a network security group for your virtual machines.

Due to regulatory compliance, you must meet the following requirements:

Virtual machines in TDSub2 and TDSub3 must have HTTPS traffic from the Internet.
Remote Desktop connections from the public Internet must only access TD1.
All traffic between TD1 and TD2 must be allowed.
Restrict all other external network traffic from accessing TDVnet1.
What is the minimum number of network security groups that you should provision to satisfy the requirements above?

  1. 6
  2. 5
  3. 3
  4. 1
A
  1. 1

Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

Network Security Groups can be attached to multiple subnets and/or network interfaces. Unless you have a specific reason to, it is recommended that you associate a network security group to a subnet or a network interface, but not both.

In the image above, the requirements of the scenario are fully satisfied. You only need to create one network security group with multiple rules and associate it with TDSub1, TDSub2, and TDSub3.

  1. Virtual machines in TDSub2 and TDSub3 must have HTTPS traffic from the Internet.

– You can whitelist the address spaces of TDSub2 and TDSub3 in the destination IP addresses/CIDR ranges of an inbound security rule. This will force HTTPS traffic to only those subnets without allowing HTTPS traffic to TDSub1. See priority 100 in the image above.

  2. Remote Desktop connections from the public Internet must only access TD1.

– Since there are two virtual machines in TDSub1 and the requirement states that only TD1 must have Remote Desktop connection, you cannot whitelist the address space of TDSub1 in the destination IP addresses.

– An alternative to this is whitelisting the IP address of TD1 to the destination IP addresses when you create an inbound security rule. See priority 110 in the image above.

  3. All traffic between TD1 and TD2 must be allowed.

– When you create a network security group, the default rules of a network security group always allow traffic coming from WITHIN the virtual network. No action is needed from your side.

  4. Restrict all other external network traffic from accessing TDVnet1.

– The default DenyAllInbound rule of a network security group denies any inbound traffic that is not allowed by a higher-priority rule, so all other external traffic is blocked by default. No action is needed from your side.

Hence, the correct answer is: 1.

3, 5, and 6 are incorrect because you only need to create one network security group with multiple rules to satisfy the requirements of the scenario.
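
As a rough illustration of the priority 100 rule described above, an inbound rule that allows HTTPS from the Internet only to the TDSub2 and TDSub3 ranges could look like the following Azure CLI sketch; the network security group and resource group names and the CIDR ranges are hypothetical, since the actual values are shown in the image:

# Hypothetical NSG name, resource group, and address prefixes
az network nsg rule create --resource-group td-rg --nsg-name TD-NSG --name Allow-HTTPS-TDSub2-TDSub3 --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes Internet --destination-address-prefixes 10.0.2.0/24 10.0.3.0/24 --destination-port-ranges 443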

References:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://learn.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works

47
Q

3-13. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your company has an Azure subscription named TDSubscription1 that contains the following resources:
(image)

You recently added a new address space 10.30.0.0/16 to TDVnet1.

What should you do next?

  1. Sync the peering between TDVnet1 and TDVnet2.
  2. Delete the peering between TDVnet1 and TDVnet2.
  3. Delete TDVnet2.
  4. Re-create the peering between TDVnet1 and TDVnet2
A
  1. Sync the peering between TDVnet1 and TDVnet2.

Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own datacenter but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.

Virtual network peering enables you to seamlessly connect two or more Virtual Networks in Azure. The virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. Like traffic between virtual machines in the same network, traffic is routed through Microsoft’s private network only.

You can resize the address space of Azure virtual networks that are peered without incurring any downtime on the currently peered address space. This feature is useful when you need to resize the virtual network’s address space after scaling your workloads. After resizing the address space, all that is required is for peers to be synced with the new address space changes. Resizing works for both IPv4 and IPv6 address spaces.

Addresses can be resized in the following ways:

– Modifying the address range prefix of an existing address range (For example, changing 10.1.0.0/16 to 10.1.0.0/18).

– Adding address ranges to a virtual network.

– Deleting address ranges from a virtual network.

– Resizing of address space is supported cross-tenant.
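
For example, after adding the 10.30.0.0/16 address space to TDVnet1, the peering can be synced with an Azure CLI command like the following (the resource group and peering names are hypothetical):

# Hypothetical resource group and peering names; sync the peering on the TDVnet2 side as well if needed
az network vnet peering sync --resource-group td-rg --vnet-name TDVnet1 --name TDVnet1-to-TDVnet2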

Hence, the correct answer is: Sync the peering between TDVnet1 and TDVnet2.

The statement that says: Delete TDVnet2 is incorrect because you can add an address space to your virtual network without deleting it.

The following statements are incorrect because you do not need to delete and re-create the peering when you add an address space to an existing virtual network peering. All you have to do is sync the peering after you have added an address space.

– Delete the peering between TDVnet1 and TDVnet2

– Re-create the peering between TDVnet1 and TDVnet2

References:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-manage-peering

48
Q

3-14. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your Azure subscription contains a fleet of virtual machines.

You recently deployed an Azure bastion named TD1 with an SKU of Basic and a subnet size of /26.

There is a requirement that more than 90 users will concurrently use TD1. You need to be able to accommodate the number of users that will be accessing TD1. The solution must minimize administrative effort.
What should you do first?

  1. Increase the instance count of TD1.
  2. Increase the server size of TD1.
  3. Deploy a new bastion server with an SKU of Standard
  4. Upgrade the SKU of TD1
A
  1. Upgrade the SKU of TD1

Azure Bastion is a service you deploy that lets you connect to a virtual machine using your browser and the Azure portal. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, virtual machines don’t need a public IP address, agent, or special client software.

Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world while providing secure access using RDP/SSH.

Two instances are created when you configure Azure Bastion using the Basic SKU. Using the Standard SKU, you can specify the number of instances. This is called host scaling.

Each instance can support 20 concurrent RDP connections and 40 concurrent SSH connections for medium workloads. The number of connections per instance depends on your actions when connected to the client VM. For example, if you are doing something data-intensive, it creates a more significant load for the instance to process. Once the concurrent sessions are exceeded, an additional scale unit (instance) is required.

Remember that you can only use host scaling if your bastion server has an SKU of Standard.

To accommodate additional concurrent client connections, you first need to upgrade the SKU of TD1 from Basic to Standard (after upgrading to Standard, you cannot revert back to the Basic SKU). After that, you can increase the instance count of TD1 to whatever number of instances is required to accommodate the 90 users.
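
As a rough sketch, the upgrade and subsequent scaling might look like the following Azure CLI commands. These assume the Azure Bastion CLI extension is installed and that the az network bastion update command and its --sku and --scale-units parameters are available in your CLI version; the resource group name is hypothetical:

# Assumed command/parameters from the bastion CLI extension; hypothetical resource group
az network bastion update --resource-group td-rg --name TD1 --sku Standard

az network bastion update --resource-group td-rg --name TD1 --scale-units 5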

Hence, the correct answer is: Upgrade the SKU of TD1.

The option that says: Deploy a new bastion server with an SKU of Standard is incorrect because there is no need to deploy a new bastion server with an SKU of Standard. You can upgrade the SKU of TD1 to Standard. One of the requirements is that your solution must minimize administrative effort.

The option that says: Increase the instance count of TD1 is incorrect because you will only be able to increase the instance count if TD1 is already using an SKU of Standard. Take note that the question asks what you will do first.

The option that says: Increase the server size of TD1 is incorrect because there is no option to increase the server size of a bastion server. If you need more computing power, you can increase the instance count of the bastion server. Remember that you need to use an SKU of Standard before being able to use host scaling.

References:

https://docs.microsoft.com/en-us/azure/bastion/bastion-overview

https://learn.microsoft.com/en-us/azure/bastion/configuration-settings

49
Q

3-16. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You have an Azure subscription that contains an Azure DNS zone named tutorialsdojo.com.

There is a requirement to delegate a subdomain named portal.tutorialsdojo.com to another Azure DNS zone.

What solution would satisfy the requirement?

  1. Navigate to tutorialsdojo.com and add a PTR record named portal.
  2. Navigate to tutorialsdojo.com and add an NS record named portal.
  3. Navigate to tutorialsdojo.com and add a CNAME record named portal.
  4. Navigate to tutorialsdojo.com and add a TXT record named portal.
A
  1. Navigate to tutorialsdojo.com and add an NS record named portal.

Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.

You can use the Azure portal to delegate a DNS subdomain. For example, if you own the tutorialsdojo.com domain, you can delegate a subdomain called portal to another, separate zone that you can administer separately from the tutorialsdojo.com zone.

To delegate an Azure DNS subdomain, you must first delegate your public domain to Azure DNS. Once your domain is delegated to your Azure DNS zone, you can delegate your subdomain.

You can delegate a subdomain by doing the following:

  1. Create a new Azure DNS zone named portal.tutorialsdojo.com. Copy down the four nameservers as you will need them for step 2.
  2. Navigate to the tutorialsdojo.com DNS zone and add an NS record named portal. Under records, enter the four nameservers from portal.tutorialsdojo.com and click ok.
  3. To verify your work, open a PowerShell window and type nslookup portal.tutorialsdojo.com
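
A minimal Azure CLI sketch of steps 1 and 2 (the resource group name is hypothetical, and the --nsdname value stands in for one of the four name servers returned for the child zone):

# Hypothetical resource group; repeat add-record for each of the four name servers of the child zone
az network dns zone create --resource-group td-rg --name portal.tutorialsdojo.com

az network dns record-set ns add-record --resource-group td-rg --zone-name tutorialsdojo.com --record-set-name portal --nsdname ns1-01.azure-dns.com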

Hence, this statement is correct: Navigate to tutorialsdojo.com and add an NS record named portal.

The following statements are incorrect because PTR, CNAME, and TXT records are not used to delegate an Azure DNS subdomain.

– Navigate to tutorialsdojo.com and add a PTR record named portal.

– Navigate to tutorialsdojo.com and add a CNAME record named portal.

– Navigate to tutorialsdojo.com and add a TXT record named portal.

References:

https://docs.microsoft.com/en-us/azure/dns/dns-overview

https://docs.microsoft.com/en-us/azure/dns/delegate-subdomain

50
Q

3-22. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
Your company has an Azure Subscription that contains an Azure Kubernetes Service (AKS) cluster and an Azure AD tenant named tutorialsdojo.com.

You received a report that the system administrator is unable to grant access to Azure AD users who need to use the cluster.

You need to grant the users in tutorialsdojo.com access to the cluster.

What should you implement?

  1. Add a namespace.
  2. Create an OAuth 2.0 authorization endpoint.
  3. Create a new AKS cluster.
  4. Configure external collaboration settings.
A
  1. Create an OAuth 2.0 authorization endpoint.

Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure. You only manage and maintain the agent nodes. As a managed Kubernetes service, AKS is free — you only pay for the agent nodes within your clusters, not for the masters.

The OAuth 2.0 authorization code grant can be used in apps that are installed on a device to gain access to protected resources. As shown in the image above, kubectl signs users in with the OAuth 2.0 device authorization grant flow. Azure AD returns an access_token, id_token, and refresh_token, and the user then sends requests to the API server through kubectl using the access_token from kubeconfig. After validating the token, the API server performs an authorization decision based on the Kubernetes Role/RoleBinding. Once authorized, the API server returns a response to kubectl.

Hence, the correct answer is: Create an OAuth 2.0 authorization endpoint.

The option that says: Configure external collaboration settings is incorrect because external collaboration settings only let you turn guest invitations on or off for different types of users in your organization. This option wouldn’t help you grant the users in tutorialsdojo.com access to the cluster.

The option that says: Create a new AKS cluster is incorrect because a cluster is just a set of nodes that run containerized applications. Creating a new cluster is not necessary. You need to create an authorization endpoint to grant the users access to the existing cluster.

The option that says: Add a namespace is incorrect because a namespace only divides cluster resources between multiple users. Remember that users can only interact with resources within their assigned namespaces. To grant the users in tutorialsdojo.com access to the cluster, you should create an OAuth authorization endpoint.

References:

https://docs.microsoft.com/en-us/azure/aks/concepts-identity

https://docs.microsoft.com/en-us/azure/aks/azure-ad-integration-cli

51
Q

3-23. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
Your company has a virtual network that contains a MySQL database hosted on a virtual machine.

You created a web app named tutorialsdojo-webapp using the Azure App service.

You need to make sure that tutorialsdojo-webapp can fetch the data from the MySQL database.

What should you implement?

  1. Peer the virtual network to another virtual network.
  2. Enable VNet Integration and connect the web app to the virtual network.
  3. Create an Azure Application Gateway.
  4. Create an internal load balancer.
A
  1. Enable VNet Integration and connect the web app to the virtual network.

Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments. App Service adds the power of Microsoft Azure to your application, such as security, load balancing, autoscaling, and automated management. You can also take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, as well as package management, staging environments, custom domains, and TLS/SSL certificates.

With Azure Virtual Network (VNets), you can place many of your Azure resources in a non-internet-routable network. The VNet Integration feature enables your apps to access resources in or through a VNet. VNet Integration doesn’t enable your apps to be accessed privately.
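
A minimal Azure CLI sketch of enabling VNet Integration for the web app (the resource group, virtual network, and integration subnet names are hypothetical):

# Hypothetical resource group, VNet, and integration subnet names
az webapp vnet-integration add --resource-group td-rg --name tutorialsdojo-webapp --vnet td-vnet --subnet integration-subnet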

Azure App Service has two variations on the VNet Integration feature:

– The multitenant systems support the full range of pricing plans except for Isolated.

– The App Service Environment, which deploys into your VNet and supports Isolated pricing plan apps.

Hence, the correct answer is: Enable VNet Integration and connect the web app to the virtual network.

The option that says: Create an internal load balancer is incorrect because this option only distributes the traffic. An internal load balancer is mainly used to load balance traffic inside a virtual network.

The option that says: Peer the virtual network to another virtual network is incorrect because virtual network peering wouldn’t help the web app access the virtual machine.

The option that says: Create an Azure Application Gateway is incorrect because the distribution of web traffic is not needed in the scenario. An Azure Application Gateway is just a web traffic load balancer that enables you to manage traffic to your web applications. Take note that the only requirement is to ensure that tutorialsdojo-webapp can access the data from the MySQL database hosted on a virtual machine.

References:

https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet

https://azure.microsoft.com/en-in/services/app-service/

52
Q

3-24. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company has two Azure virtual networks named TDVNet1 and TDVNet2 in Central US region. A virtual machine named TD-VM1 is running in TDVNet1 while the other virtual network has a virtual machine named TD-VM2.

A web application is hosted on TD-VM1 and the data is retrieved and processed by TD-VM2.

Several users reported that the web application has a sluggish performance.

You are instructed to track the average round-trip time (RTT) of the packets from TD-VM1 to TD-VM2.

Which of the following options can satisfy the given requirement?

  1. IP flow verify
  2. Connection Troubleshoot
  3. Connection Monitor
  4. NSG flow logs
A
  1. Connection Monitor

Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.

In this scenario, you can use Connection Monitor to track the average round-trip time (RTT) of the packets from TD-VM1 to TD-VM2. In Azure Network Watcher, Connection Monitor provides unified end-to-end connection monitoring. The Connection Monitor feature also supports hybrid and Azure cloud deployments.

Benefits of using the Connection Monitor:

– Unified, intuitive experience for Azure and hybrid monitoring needs

– Cross-region, cross-workspace connectivity monitoring

– Higher probing frequencies and better visibility into network performance

– Faster alerting for your hybrid deployments

– Support for connectivity checks that are based on HTTP, TCP, and ICMP

– Metrics and Log Analytics support for both Azure and non-Azure test setups
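
As a rough sketch, a connection monitor between the two virtual machines could be created with the Azure CLI. The command below uses the older --source-resource/--dest-resource style, which is assumed to be accepted by your CLI version, and the resource group name is hypothetical:

# Assumed (older-style) parameters; hypothetical resource group
az network watcher connection-monitor create --resource-group td-rg --name TD-VM1-to-TD-VM2 --location centralus --source-resource TD-VM1 --dest-resource TD-VM2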

Hence, the correct answer is Connection Monitor.

IP flow verify is incorrect because this feature only looks at the rules for all Network Security Groups (NSGs) applied to the network interface. It is stated in the scenario that you must track the packets from TD-VM1 to TD-VM2. IP flow verify is not capable of providing the average round-trip time of the packets from the source to the destination.

Connection Troubleshoot is incorrect because it simply checks connectivity between source and destination. Take note that you need to track the average round-trip time of the packets from VM1 to VM2. Therefore, you need to use Connection Monitor to analyze the end-to-end connection and not the Connection Troubleshoot operation.

NSG flow logs is incorrect because it only allows you to log information about IP traffic flowing (ingress and egress) through an NSG. Take note that you can’t use NSG flow logs to track the average RTT of the packets from TD-VM1 to TD-VM2. You need to use Connection Monitor to provide unified end-to-end connection monitoring.

References:

https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor-overview

https://docs.microsoft.com/en-us/azure/azure-monitor/faq

53
Q

3-25. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Your company recently created a new Azure subscription. You checked the subscription and it contains the following resources.
(image)

TD-RG3 contains a web app named TD-App3 which is located in North Europe.

You plan to move TD-App3 to TD-RG1.

What is the effect of moving the web app to a different resource group?

  1. The TD-App3 is moved to the North Central US region and the policy applied to the resource will be Policy 1.
  2. The TD-App3 remains in the North Europe region and the policy applied to the resource will be Policy 3.
  3. The TD-App3 is moved to the North Central US region and the policy applied to the resource will be Policy 3.
  4. The TD-App3 remains in the North Europe region and the policy applied to the resource will be Policy 1.
A
  1. The TD-App3 remains in the North Europe region and the policy applied to the resource will be Policy 1.

Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile backends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.

In this scenario, the TD-App3 is located in the North Europe region. Take note that you cannot change an App Service plan’s region. Also, if you move a resource to a new resource group or subscription, the location of the resource would not change. If you need to run your app in a different region, one alternative is app cloning. Cloning makes a copy of your app in a new or existing App Service plan in any region.

Since you plan to move TD-App3 to TD-RG1, the policy that will be applied to TD-App3 is the policy of TD-RG1 (Policy1). Remember that the assigned policy on the resource group will also be applied to the resources. You can also assign multiple policies in one resource group.

Hence, the correct answer is: The TD-App3 remains in the North Europe region and the policy applied to the resource will be Policy 1.

The option that says: The TD-App3 is moved to the North Central US region and the policy applied to the resource will be Policy 1 is incorrect because TD-App3 would still remain in the North Europe region even if you moved the resource to a different resource group.

The option that says: The TD-App3 remains in the North Europe region and the policy applied to the resource will be Policy 3 is incorrect because Policy 3 is only applied to the TD-RG3 resources. Since you moved the resources to TD-RG1, the policy applied to the TD-App3 is Policy1.

The option that says: The TD-App3 is moved to the North Central US region and the policy applied to the resource will be Policy 3 is incorrect because if you moved a resource to a different resource group, the location of the resource would not change.

References:

https://docs.microsoft.com/en-us/azure/app-service/app-service-plan-manage

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription

54
Q

3-26. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
You are managing an Azure subscription that contains a resource group named TD-RG1 which has a virtual machine named TD-VM1.

TD-VM1 has services that will deploy new resources on TD-RG1.

You need to make sure that the services running on TD-VM1 should be able to manage the resources in TD-RG1 using its identity.

Which of the following actions should you do first?

  1. Configure the access control of TD-RG1.
  2. Configure the access control of TD-VM1.
  3. Configure the managed identity of TD-VM1.
  4. Configure the security settings of TD-RG1.
A
  1. Configure the managed identity of TD-VM1.

Microsoft Entra ID is a cloud-based identity and access management service that enables your employees access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.

Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.

There are two types of managed identities:

– System-assigned: some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity, an identity is created in Microsoft Entra ID that is tied to the lifecycle of that service instance. So when the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Microsoft Entra ID.

– User-assigned: you may also create a managed identity as a standalone Azure resource. You can create a user-assigned managed identity and assign it to one or more instances of an Azure service. In the case of user-assigned managed identities, the identity is managed separately from the resources that use it.

In this scenario, you can use the system-assigned managed identity. Take note that this identity is restricted to only one resource. You can grant permissions to the managed identity by using Azure RBAC. The managed identity is authenticated with Microsoft Entra ID, so you don’t have to store any credentials.
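
A minimal Azure CLI sketch of the first step and the role assignment that would typically follow (the role shown is an assumption; the resource names follow the scenario):

# Enable the system-assigned managed identity on the VM
az vm identity assign --resource-group TD-RG1 --name TD-VM1

# Grant the identity access to the resource group (role choice is an assumption; use the principalId returned above)
az role assignment create --assignee <principal-id> --role Contributor --resource-group TD-RG1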

Hence, the correct answer is: Configure the managed identity of TD-VM1.

The option that says: Configure the security settings of TD-RG1 is incorrect because it only provides security recommendations and security alerts for your resource group. As per the scenario, you need to ensure that the services running on TD-VM1 are able to manage the resources in TD-RG1 using its identity. Therefore, you need to configure the managed identity settings of TD-VM1.

The options that say: Configure the access control of TD-VM1 and Configure the access control of TD-RG1 are incorrect because these are only adding role assignments to an Azure resource. A role assignment is a process of attaching a role definition to a user, group, or service principal to provide access to a specific resource. Remember that access is granted by creating a role assignment, and access is revoked by removing a role assignment. You have to configure a managed identity instead.

References:

https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/qs-configure-portal-windows-vm

https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview

55
Q

3-27. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Your company has an Azure subscription with an Azure AD tenant named tutorialsdojo.onmicrosoft.com that contains the following users:
(image1)

You are instructed to enable self-service password reset for tutorialsdojo.onmicrosoft.com as shown in the following image:
(image2)

You have configured the authentication methods for password reset as illustrated below.
(image3)

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. The password can be reset immediately after TD-User2 answers the three security questions correctly.
  2. If TD-User3 has forgotten its password, a mobile phone app can be used to reset the password.
  3. TD-User1 can add security questions for password reset.
A
  1. No
  2. No
  3. No

Azure Active Directory (Azure AD) is Microsoft’s cloud-based identity and access management service, which helps your employees sign in and access resources in external (such as Microsoft 365, the Azure portal, and thousands of other SaaS applications) and internal resources (such as apps on your corporate network and intranet, along with any cloud apps developed by your own organization).

The Azure AD self-service password reset (SSPR) gives users the ability to change or reset their password with no administrator or help desk involvement. If a user’s account is locked or they forget their password, they can follow prompts to unblock themselves. This ability reduces help desk calls and loss of productivity when a user can’t sign in to their device or an application.

Remember that users can only reset their password if they have registered an authentication method that the administrator has enabled. These are the authentication methods available for SSPR: Mobile app notification, Mobile app code, Mobile phone, Office phone, Email, and Security questions.

You also need to use an account with Global Administrator privileges to configure Azure Active Directory self-service password reset, because a user with only the User Administrator role does not have permission to manage the authentication method settings.

The statement that says: TD-User1 can add security questions for password reset is incorrect because the role of TD-User1 is a User Administrator. Take note that the User Administrator role does not have permission to modify security questions. If TD-User1 needs to add security questions for a password reset, you should assign a Global Administrator role.

The statement that says: The password can be reset immediately after TD-User2 answers the three security questions correctly is incorrect because the number of methods required for password reset is set to two. This means that you also need to use the second method (Mobile phone) to reset your password.

The statement that says: If TD-User3 has forgotten its password, a mobile phone app can be used to reset the password is incorrect because TD-User3 is assigned to TD-Group2. Take note that the password reset is configured on TD-Group1. Therefore, TD-User3 won’t be able to reset its password.

References:

https://docs.microsoft.com/en-us/azure/active-directory/authentication/tutorial-enable-sspr

https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-sspr-howitworks

https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-sspr-deployment

56
Q

3-28. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Your company has the following Azure management groups in its Azure account:
(image1)

You have added the following Azure subscriptions to the management groups:
(image2)

You created the following Azure policies:
(image3)

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. You can create a virtual network in TD-Subscription1.
  2. You can create a virtual machine in TD-Subscription2.
  3. You can move TD-Subscription3 to TD-Management-Group20.
A
  1. No
  2. No
  3. Yes

Azure Policy evaluates resources in Azure by comparing the properties of those resources to business rules. These business rules, described in JSON format, are known as policy definitions. To simplify management, several business rules can be grouped together to form a policy initiative (sometimes called a policySet). Once your business rules have been formed, the policy definition or initiative is assigned to any scope of resources that Azure supports, such as management groups, subscriptions, resource groups, or individual resources.

Azure management groups provide a level of scope above subscriptions. You organize subscriptions into containers called “management groups” and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Management groups give you enterprise-grade management at a large scale no matter what type of subscriptions you might have. All subscriptions within a single management group must trust the same Azure Active Directory tenant.

For example, you can apply policies to a management group that limits the regions available for virtual machine (VM) creation. This policy would be applied to all management groups, subscriptions, and resources under that management group by only allowing VMs to be created in that region.

Based on the given scenario, there are two policies:

  1. Allowed resource types – this policy enables you to specify the resource types that your organization can deploy. Only resource types that support ‘tags’ and ‘location’ will be affected by this policy. To restrict all resources, you have to duplicate this policy and change the ‘mode’ to ‘All’.
  2. Not allowed resource types – this policy enables you to specify the resource types that your organization cannot deploy.

When you assign a policy to the tenant root group, the policy would also be applied to the subscription and management group. For example, if there is a Deny policy at the tenant root group, then the policy will be applied to the hierarchy of management groups and subscriptions. Remember that a Deny policy always overrides an Allow policy.

The statement that says: You can move TD-Subscription3 to TD-Management-Group20 is correct because you are allowed to move subscriptions between management groups. Take note that a subscription can only have one parent management group. Therefore, you can’t assign a subscription to multiple management groups.
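
For reference, moving a subscription between management groups is a single Azure CLI command (names as given in the scenario):

az account management-group subscription add --name TD-Management-Group20 --subscription TD-Subscription3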

The statement that says: You can create a virtual network in TD-Subscription1 is incorrect because deny overrides allow. Based on the given policies, you can’t create a virtual network since you have assigned a “Not allowed resource types” policy definition. To create a virtual network, you should remove/delete this policy.

The statement that says: You can create a virtual machine in TD-Subscription2 is incorrect because the Tenant Root Group has a Deny policy that restricts it, as well as the management groups and subscriptions under it (e.g., TD-Management-Group11), from deploying virtual networks. If you can’t create a virtual network, then you also can’t deploy a virtual machine. To allow the creation of a virtual machine, you need to remove the assigned policy.

References:

https://docs.microsoft.com/en-us/azure/governance/management-groups/overview

https://docs.microsoft.com/en-us/azure/governance/management-groups/manage#moving-management-groups-and-subscriptions

https://docs.microsoft.com/en-us/azure/governance/policy/overview

57
Q
  1. QUESTION
    Category: AZ-104 – Implement and Manage Virtual Networking
    Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your company has 12 peered virtual networks in your Azure subscription.

You plan to deploy a network security group for each virtual network.

There is a compliance requirement that port 80 should be automatically blocked between virtual networks whenever a new network security group is created. The solution must minimize administrative effort.

Solution: You create a security rule that denies incoming port 80 traffic.

Does the solution meet the goal?

Yes
No

A

No

Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

It is stated in the scenario that blocking port 80 should be done automatically whenever a new network security group is created. Creating the rule manually is cumbersome because you would need to add a security rule to every network security group you create. It’s best practice to always automate your security processes to avoid administrative overhead.

You should use a custom policy definition in order to automate the requirement.
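
As a rough sketch, such a custom policy could be authored as a JSON rules file (for example, one that denies or audits NSG security rules that allow port 80) and registered with a command like the following; the definition name and rules file are hypothetical:

# Hypothetical definition name and rules file
az policy definition create --name deny-port-80-nsg-rules --mode All --rules nsg-deny-port-80-rules.json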

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview

58
Q

3-35. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Note: This item is part of a series of case study questions with the exact same scenario but has a different technical requirement. Each one in the series has a unique solution that may or may not comply with the requirements specified in the scenario.

Overview

Tutorials Dojo is an online learning portal for technology-related topics that empowers its users to upgrade their skills and career. Tutorials Dojo has users worldwide, ranging from the United States, Europe, and Asia.

Existing Environment

Tutorials Dojo uses a wide range of servers for its business operations, including the following:

Domain Controller.
File Servers.
Microsoft SQL Servers.
Active Directory forest named tutorialsdojo.com. The servers and workstations are joined to the Active Directory.
A public-facing application named TutorialsDojoPortal comprises the following three tiers:

A web tier.
A business tier.
A database tier.
The web tier and the business tier each consist of five virtual machines, while the database tier has only two: a primary and a secondary SQL database server.

Planned Changes

Tutorials Dojo plans to implement the following changes to the infrastructure:

Migrate TutorialsDojoPortal to Azure.
Migrate the media files to Azure Blob Storage.
Utilize Content Delivery Network.
Technical Requirements

Tutorials Dojo must meet the following technical requirements:
Migrate the TutorialsDojoPortal virtual machines to Azure.
Limit the number of ports between TutorialsDojoPortal tiers.
Backup and disaster recovery scenario for TutorialsDojoPortal servers.
Migrate the media files to Azure over the Internet.
The media files must be stored in a Blob container and cached via Content Delivery Network.
The virtual machines must be joined to the Active Directory.
The SQL database server must run on virtual machines.
Minimize administrative effort whenever possible.
User Requirements

Create a new user named TutorialsDojoAdmin1 as the service admin for the Azure Subscription.
Ensure that TutorialsDojoAdmin1 receives email notifications for budget alerts.
Ensure that only Administrators can create virtual machines.
Your company has already migrated the TutorialsDojoPortal to Azure.

There is a requirement to implement a backup solution for TutorialsDojoPortal.

What should you create first?

  1. Recovery Plan
  2. Microsoft Azure Backup Server (MABS)
  3. Backup policy
  4. Recovery Services Vault
A
Recovery Services Vault

Azure Backup is a cost-effective, secure, one-click backup solution that’s scalable based on your backup storage needs. The centralized management interface makes it easy to define backup policies and protect a wide range of enterprise workloads, including Azure Virtual Machines, SQL and SAP databases, and Azure file shares.

When you back up data in Azure, you store that data in an Azure resource called a Recovery Services vault. The Recovery Services vault resource is available from the Settings menu of most Azure services. The benefit of having the Recovery Services vault integrated into the Settings menu of most Azure services is the ease of backing up data.

Here are the steps when you backup an Azure virtual machine:

– Create a Recovery Services vault

– Define a backup policy

– Apply the backup policy to protect multiple virtual machines

Hence, the correct answer is: Recovery Services Vault.
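
For reference, a minimal Azure PowerShell sketch of those steps might look like the following (the vault, resource group, VM name, and region are assumptions):

# 1. Create the Recovery Services vault first
$vault = New-AzRecoveryServicesVault -Name 'TDPortalVault' -ResourceGroupName 'TD-RG' -Location 'southeastasia'

# 2. Point subsequent backup cmdlets at the new vault and pick a backup policy
Set-AzRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name 'DefaultPolicy'

# 3. Apply the backup policy to protect a virtual machine
Enable-AzRecoveryServicesBackupProtection -ResourceGroupName 'TD-RG' -Name 'TDPortalWeb1' -Policy $policy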

Backup policy is incorrect because you need to create a Recovery Services vault first. A backup policy is a schedule for how often and when recovery points are taken. A policy also includes the retention range for the recovery points.

Microsoft Azure Backup Server is incorrect. Microsoft Azure Backup Server (MABS) is a server product that can be used to back up on-premises physical servers, VMs, and apps running on them. The prerequisite of deploying a Microsoft Azure Backup Server is to have a Recovery Services Vault.

Recovery Plan is incorrect. A recovery plan gathers machines into recovery groups for the purpose of failover. A recovery plan helps you define a systematic recovery process by creating small independent units that you can fail over. A unit typically represents an app in your environment. The requirement is to implement a backup solution, not a disaster recovery solution.

References:

https://docs.microsoft.com/en-us/azure/backup/backup-overview

https://docs.microsoft.com/en-us/azure/backup/tutorial-backup-vm-at-scale

59
Q

3-36. QUESTION
Category: AZ-104 – Implement and Manage Storage
Note: This item is part of a series of case study questions with the exact same scenario but has a different technical requirement. Each one in the series has a unique solution that may or may not comply with the requirements specified in the scenario.

Overview

Tutorials Dojo is an online learning portal for technology-related topics that empowers its users to upgrade their skills and career. Tutorials Dojo has users worldwide, ranging from the United States, Europe, and Asia.

Existing Environment

Tutorials Dojo uses a wide range of servers for its business operations, including the following:

Domain Controller.
File Servers.
Microsoft SQL Servers.
Active Directory forest named tutorialsdojo.com. The servers and workstations are joined to the Active Directory.
A public-facing application named TutorialsDojoPortal comprises the following three tiers:

A web tier.
A business tier.
A database tier.
The web tier and the business tier each consist of five virtual machines, while the database tier has only two: a primary and a secondary SQL database server.

Planned Changes

Tutorials Dojo plans to implement the following changes to the infrastructure:

Migrate TutorialsDojoPortal to Azure.
Migrate the media files to Azure Blob Storage.
Utilize Content Delivery Network.
Technical Requirements

Tutorials Dojo must meet the following technical requirements:
Migrate the TutorialsDojoPortal virtual machines to Azure.
Limit the number of ports between TutorialsDojoPortal tiers.
Backup and disaster recovery scenario for TutorialsDojoPortal servers.
Migrate the media files to Azure over the Internet.
The media files must be stored in a Blob container and cached via Content Delivery Network.
The virtual machines must be joined to the Active Directory.
The SQL database server must run on virtual machines.
Minimize administrative effort whenever possible.
User Requirements

Create a new user named TutorialsDojoAdmin1 as the service admin for the Azure Subscription.
Ensure that TutorialsDojoAdmin1 receives email notifications for budget alerts.
Ensure that only Administrators can create virtual machines.
Your company has already migrated the TutorialsDojoPortal to Azure.

There is a requirement to migrate the media files to Azure.

What should you do?

  1. Use file explorer to copy the files by mapping a drive using an Azure storage account access key for authorization.
  2. Use Azure Storage Explorer to copy the files.
  3. Use file explorer to copy the files by mapping a drive using a shared access signature (SAS) in the Azure storage account to grant temporary access.
  4. Use Azure Import/Export service to copy the files.
A
Use Azure Storage Explorer to copy the files.

Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.

Blob storage is designed for:

– Serving images or documents directly to a browser.

– Storing files for distributed access.

– Streaming video and audio.

– Writing to log files.

– Storing data for backup and restore disaster recovery, and archiving.

– Storing data for analysis by an on-premises or Azure-hosted service.

Microsoft Azure Storage Explorer is a standalone app that provides an accessible, intuitive, and feature-rich graphical user interface (GUI) for full management of cloud storage resources, making it easy to work with Azure Storage data on Windows, macOS, and Linux. You can upload, download, and manage Azure blobs, files, queues, and tables, as well as Azure Cosmos DB and Azure Data Lake Storage entities.

The requirements to be considered for this scenario are:

– Migrate the media files to Azure over the Internet.

– The media files must be stored in a Blob container and cached via Content Delivery Network.

Hence, the correct answer is: Use Azure Storage Explorer to copy the files.
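
Storage Explorer is a GUI tool, but for comparison, the same over-the-Internet upload to a Blob container could also be scripted with Azure PowerShell (the storage account, container, and file names below are assumptions):

# Get the storage account context and upload a media file to a blob container over HTTPS
$ctx = (Get-AzStorageAccount -ResourceGroupName 'TD-RG' -Name 'tdmediastorage').Context
Set-AzStorageBlobContent -File 'C:\media\intro.mp4' -Container 'media' -Blob 'intro.mp4' -Context $ctx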

The option that says: Use Azure Import/Export service to copy the files is incorrect. Azure Import/Export service is primarily used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. The requirement states that the transfer of the media files must be done over the Internet.

The following options are incorrect because you cannot mount a Blob container using file explorer. Take note that the requirement states that the media files must be stored in a Blob container.

– Use file explorer to copy the files by mapping a drive using a shared access signature (SAS) in the Azure storage account to grant temporary access.

– Use file explorer to copy the files by mapping a drive using an Azure storage account access key for authorization.

References:

https://azure.microsoft.com/en-us/features/storage-explorer/

https://docs.microsoft.com/en-us/azure/vs-azure-tools-storage-manage-with-storage-explorer

60
Q

3-37. QUESTION
Category: AZ-104 – Implement and Manage Storage
Note: This item is part of a series of case study questions with the exact same scenario but has a different technical requirement. Each one in the series has a unique solution that may or may not comply with the requirements specified in the scenario.

Overview

Tutorials Dojo is an online learning portal for technology-related topics that empowers its users to upgrade their skills and career. Tutorials Dojo has users worldwide, ranging from the United States, Europe, and Asia.

Existing Environment

Tutorials Dojo uses a wide range of servers for its business operations, including the following:

Domain Controller.
File Servers.
Microsoft SQL Servers.
Active Directory forest named tutorialsdojo.com. The servers and workstations are joined to the Active Directory.
A public-facing application named TutorialsDojoPortal comprises the following three tiers:

A web tier.
A business tier.
A database tier.
The web tier and the business tier each consist of five virtual machines, while the database tier has only two: a primary and a secondary SQL database server.

Planned Changes

Tutorials Dojo plans to implement the following changes to the infrastructure:

Migrate TutorialsDojoPortal to Azure.
Migrate the media files to Azure Blob Storage.
Utilize Content Delivery Network.
Technical Requirements

Tutorials Dojo must meet the following technical requirements:

Migrate the TutorialsDojoPortal virtual machines to Azure.
Limit the number of ports between TutorialsDojoPortal tiers.
Backup and disaster recovery scenario for TutorialsDojoPortal servers.
Migrate the media files to Azure over the internet.
The media files must be stored in a Blob container and cached via Content Delivery Network.
The virtual machines must be joined to the Active Directory.
The SQL database server must run on virtual machines.
Minimize administrative effort whenever possible.
User Requirements

Create a new user named TutorialsDojoAdmin1 as the service admin for the Azure Subscription.
Ensure that TutorialsDojoAdmin1 receives email notifications for budget alerts.
Ensure that only Administrators can create virtual machines.
You need to identify the storage requirements for TutorialsDojoPortal media files.

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. Azure Blob storage meets the storage requirements of TutorialsDojoPortal media files.
  2. Azure Files storage meets the storage requirements of TutorialsDojoPortal media files.
  3. Azure Table storage meets the storage requirements of TutorialsDojoPortal media files.
A
  1. Yes
  2. No
  3. No

Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.

Azure Table stores large amounts of structured data. The service is a NoSQL datastore that accepts authenticated calls from inside and outside the Azure cloud.

Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.

Azure Content Delivery Network (CDN) is a distributed network of servers that is used to cache and store content. These servers are in locations that are close to end-users to minimize latency.

You can use Azure CDN to cache content from a Blob container and configure the custom domain endpoint for your Blob container, provision custom TLS/SSL certificates, and configure custom rewrite rules. Azure CDN also provides TLS encryption with your own certificate.

The server locations are referred to as Point-of-presence (POP) locations. CDNs store cached data on edge servers, or servers close to your users, in these POP locations.

The requirement to be considered for this scenario is:

– The media files must be stored in a Blob container and cached via Content Delivery Network.

Hence, this statement is correct: Azure Blob storage meets the storage requirements of TutorialsDojoPortal media files.

The statement that says: Azure Table storage meets the storage requirements of TutorialsDojoPortal media files is incorrect because Azure Table is ideal for storing structured, non-relational data. You simply cannot integrate Azure Table with Azure CDN. Take note that the requirement states that the files must be stored in a blob container and cached via CDN.

The statement that says: Azure Files storage meets the storage requirements of TutorialsDojoPortal media files is incorrect. Azure Files is accessed through the Server Message Block (SMB) protocol and cannot be placed directly behind Azure CDN, which only supports the HTTP (80) and HTTPS (443) protocols.

References:

https://docs.microsoft.com/en-us/azure/cdn/cdn-overview

https://docs.microsoft.com/en-us/azure/cdn/cdn-create-a-storage-account-with-cdn

61
Q

3-38. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Note: This item is part of a series of case study questions with the exact same scenario but has a different technical requirement. Each one in the series has a unique solution that may or may not comply with the requirements specified in the scenario.

Overview

Adatum Corporation is an insurance company that has a total of 5,000 employees with its headquarters located in Singapore and three satellite offices in Tokyo, Seattle, and London.

Existing Environment

Adatum Corporation hosts its applications in their Singapore datacenter. The Singapore datacenter consists of the following servers:
(image)
Your network contains an Active Directory forest named adatum.com. All servers and client computers are joined to Active Directory.

A private connection is used for traffic in between offices. Each office has a network device that can be used for VPN connections.

Adatum uses two web applications named AdatumApp1 and AdatumApp2.

Planned Changes

Adatum Corporation plans to implement the following modifications for their migration to Azure:

Establish a private connection to Azure from the headquarters in Singapore.
Move the virtual machines located in the Singapore datacenter to Azure.
Move AdatumApp1 and AdatumApp2 to two Azure App Service named AdatumWeb1 and AdatumWeb2.
Ensure that the on-premises active directory is synchronized with Azure Active Directory.
Technical Requirements

Minimize administrative effort and cost whenever possible.
Ensure that the information technology department receives an email whenever the CPU utilization of vm3.adatum.com reaches 75%.
Ensure that you create an Azure custom role named AdatumAdministrator that is based on the built-in Contributor role.
Enable Multi-Factor Authentication (MFA) for the information technology department only.
The servers in the Montreal office must be able to establish a connection over port 443 to vm3.adatum.com.
Ensure that the London office can send encrypted traffic to Azure over the public Internet.
Ensure that AdatumWeb2 can automatically increase the number of instances based on CPU utilization.
You need to retrieve the JSON string of the Contributor role so you can customize it to create the AdatumAdministrator custom role.

Which command should you run?

  1. Get-AzRoleAssignment -Name Contributor | ConvertTo-Json
  2. Get-AzRoleDefinition -Name Contributor | ConvertFrom-Json
  3. Get-AzRoleDefinition -Name Contributor | ConvertTo-Json
  4. Get-AzRoleAssignment -Name Contributor | ConvertFrom-Json
A
Get-AzRoleDefinition -Name Contributor | ConvertTo-Json

Access management for cloud resources is a critical function for any organization that is using the cloud. Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

Azure RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources.

If the Azure built-in roles don’t meet the specific needs of your organization, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at management group, subscription, and resource group scopes.

Take note that in this scenario, you need to create a custom role named AdatumAdministrator that is based on the built-in Contributor role. You need to retrieve the JSON representation of the Contributor role so that you can customize it to your needs.

To retrieve the JSON string of the Contributor role, you need to use the command:

– Get-AzRoleDefinition -Name <role_name> | ConvertTo-Json

Hence, the correct answer is: Get-AzRoleDefinition -Name Contributor | ConvertTo-Json
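
As an end-to-end sketch of how the exported JSON could then be used (the file name is an assumption, and editing the JSON is done outside PowerShell):

# Export the built-in Contributor role definition to JSON
Get-AzRoleDefinition -Name 'Contributor' | ConvertTo-Json -Depth 10 | Out-File 'AdatumAdministrator.json'

# Edit the JSON: set a new Name, clear the Id, set IsCustom to true, and add AssignableScopes,
# then register the result as a custom role
New-AzRoleDefinition -InputFile 'AdatumAdministrator.json'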

Get-AzRoleDefinition -Name Contributor | ConvertFrom-Json is incorrect because the ConvertFrom-Json cmdlet just converts your JSON string to a PSCustomObject object that has a property for each field in the JSON string. Take note that you need to retrieve the JSON role so that you can customize it to your needs.

The following options are incorrect because the Get-AzRoleAssignment simply allows you to list Azure RBAC role assignments at the specified scope. By default, it lists all role assignments in the selected Azure subscription. You have to use the respective parameters to list assignments to a specific user, or to list assignments on a specific resource group or resource.

– Get-AzRoleAssignment -Name Contributor | ConvertTo-Json

– Get-AzRoleAssignment -Name Contributor | ConvertFrom-Json

References:

https://docs.microsoft.com/en-us/azure/role-based-access-control/overview

https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles-powershell

62
Q

3-44. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
Your company is planning to launch an internal web app using an AKS cluster.

The app should be accessible via the pod’s IP address.

Which of the following network settings should you configure to meet this requirement?

  1. Azure NSG
  2. kubenet
  3. Azure CNI
  4. Azure Private Link
A
Azure CNI

Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. Thus, AKS is free; you only pay for the agent nodes within your clusters, not for the masters.

A Kubernetes cluster provides two options to configure your network:

– By default, AKS clusters use kubenet, and a virtual network and subnet are created for you. With kubenet, nodes get an IP address from a virtual network subnet.

– With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly.

Since you will connect to the app using the pod’s IP address, you need to select Azure CNI upon creation of your cluster.

Hence, the correct answer is: Azure CNI.
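
For illustration (the cluster name, resource group, and node count are assumptions), the network plugin is chosen when the cluster is created, for example with Azure PowerShell:

# Create an AKS cluster that uses Azure CNI so every pod gets an IP address from the subnet
New-AzAksCluster -ResourceGroupName 'TD-RG' -Name 'td-internal-aks' -NodeCount 2 -NetworkPlugin 'azure'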

kubenet is incorrect because, as stated in the scenario, you need to connect via the pod’s IP address. With this option, network address translation is configured on the nodes, and pods receive an IP address behind the node IP.

Azure NSG is incorrect because you don’t need to allow or deny inbound and outbound network traffic.

Azure Private Link is incorrect because this just provides private access to Azure-hosted services. It will not allow you to configure the cluster network type to assign IP addresses to pods.

References:

https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni

https://learn.microsoft.com/en-us/azure/aks/concepts-network

63
Q

3-45. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your organization’s Azure subscription contains the following identities:
(image)

You created an alert rule and configured an action group with the notification type Email Azure Resource Manager Role, which sends an email to the Monitoring Reader role.

The Monitoring Reader role is assigned to the user, service principal and group.

Which of the following identities will receive an email notification?

  1. TDU1, TDU2, and TDSP1
  2. TDU3
  3. TDU1, TDU2, TDU3, TDSP1, and TDSP2
  4. TDU3 and TDSP2
A
TDU3

Azure Monitor helps you maximize the availability and performance of your applications and services. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. This information helps you understand how your applications are performing and proactively identify issues that affect them and the resources they depend on.

An action group is a collection of notification preferences set by the Azure subscription’s owner. Since an action group is a global service, it is not bound to a specific Azure region and can handle any client request. For example, if one region of the action group service is unavailable, traffic is routed and processed by other regions. As a global service, action groups also provide a built-in disaster recovery solution.

When you use the Email Azure Resource Manager role type of notification, you can send email to members of a subscription’s role. Emails are only sent to Azure AD user members who are members of the role. Azure AD groups and service principals are not emailed. Also, a notification email will only be sent to the primary email address.

Hence, the correct answer is: TDU3.

All of the other options are incorrect because only TDU3 will be able to receive the email notification, since emails are only sent to Azure AD user members who are members of the role.

– TDU1, TDU2, TDU3, TDSP1, and TDSP2

– TDU3 and TDSP2

– TDU1, TDU2, and TDSP1

References:

https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/action-groups#email-azure-resource-manager-role

https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/action-groups

64
Q

3-50. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company’s Azure subscription contains the following resources:
(image)

You have created a file share named FS1 and a blob container named BC1.

Which of the following resources can be backed up in the Recovery Services vaults?

  1. Backups can be performed using RSV1 and _________?
  2. Backups can be performed using RSV2 and _________?
A
  1. Backups can be performed using RSV1 and FS1
  2. Backups can be performed using RSV2 and VM2

A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.

In this scenario, you need to identify which resources can be backed up by RSV1 and RSV2. The first thing to look at is the location or region of each resource, since a resource can only be backed up to a Recovery Services vault in the same region.

Recovery Services vaults hold backup data for various Azure services such as IaaS VMs (Linux or Windows), SQL Server in Azure VMs, and Azure file shares. After filtering the resources by supported workload type and matching region, the remaining resources are the virtual machine and the file share.

Therefore, the correct answers are:

– Backups can be performed using RSV1 = FS1

– Backups can be performed using RSV2 = VM2

References:

https://learn.microsoft.com/en-us/azure/backup/backup-azure-recovery-services-vault-overview

https://learn.microsoft.com/en-us/azure/backup/backup-afs

https://learn.microsoft.com/en-us/azure/backup/backup-azure-arm-vms-prepare

65
Q

4-1. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.

Solution: Create a conditional access policy and enforce session control.

Does the solution meet the goal?

No
Yes

A

No

Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. The single sign-on is an authentication method that simplifies access to your apps from anywhere. While conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enable limited experiences within specific cloud applications
Going back to the scenario, the requirement is to enforce a policy for the members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to enforce session access control. However, session controls do not provide options to require multi-factor authentication or a hybrid Azure AD joined device; those options are grant controls.

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-grant

66
Q

4-2. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company’s eCommerce website is deployed on an Azure virtual machine named TD-BGC.

You created a backup of TD-BGC and then implemented the following changes:

– Change the local admin password.

– Create and attach a new disk.

– Resize the virtual machine.

– Copy the log reports to the data disk.

You received an email stating that the admin restored TD-BGC using the “replace existing” configuration.

Which of the following options should you perform to bring back the changes in TD-BGC?

  1. Create and attach a new disk.
  2. Change the local admin password.
  3. Copy the log reports to the data disk.
  4. Resize the virtual machine.
A
Copy the log reports to the data disk.

Azure Backup is a cost-effective, secure, one-click backup solution that’s scalable based on your backup storage needs. The centralized management interface makes it easy to define backup policies and protect a wide range of enterprise workloads, including Azure Virtual Machines, SQL and SAP databases, and Azure file shares.

Azure Backup provides several ways to restore a VM:

Create a new VM – quickly creates and gets a basic VM up and running from a restore point.
Restore disk – restores a VM disk, which can then be used to create a new VM.
Replace existing – restore a disk, and use it to replace a disk on the existing VM.
Cross-Region (secondary region) – restore Azure VMs in the secondary region, which is an Azure paired region.
The restore configuration that is given in the scenario is the replace existing option. Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. The existing disks connected to the VM are replaced with the selected restore point.

The snapshot is copied to the vault, and retained in accordance with the retention policy. After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren’t needed.

Since the VM was restored from backup data taken before the changes, the restored disks won’t have a copy of the log reports. To bring back the changes in the TD-BGC virtual machine, you will need to copy the log reports to the data disk again.

Hence, the correct answer is: Copy the log reports to the data disk.

The option that says: Change the local admin password is incorrect because the new password will not be overridden with the old password using the restore VM option. Therefore, you can use the updated password to connect via RDP to the machine.

The option that says: Create and attach a new disk is incorrect because the new disk does not contain the log reports. Instead of creating a new disk, you should attach the existing data disk that contains the log reports.

The option that says: Resize the virtual machine is incorrect because the only changes that will be retained after rolling back are the VM size and the account password.

References:

https://docs.microsoft.com/en-us/azure/backup/backup-azure-arm-restore-vms

https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-first-look-arm

67
Q

4-3. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You plan to deploy the following public IP addresses in your Azure subscription shown in the following table:
(image)

You need to associate a public IP address to a public Azure load balancer with an SKU of standard.
Which of the following IP addresses can you use?

  1. TD1 and TD2
  2. TD3 and TD4
  3. TD3
  4. TD1
A
TD3

A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.

A public IP associated with a load balancer serves as an Internet-facing frontend IP configuration. The frontend is used to access resources in the backend pool. The frontend IP can be used for members of the backend pool to egress to the Internet.

Remember that the SKU of a load balancer and the SKU of its public IP address must match when you associate them. This means that if you have a load balancer with a Standard SKU, you must also provision a public IP address with a Standard SKU.

Hence, the correct answer is: TD3.
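
As a quick sketch (the resource group and region are assumptions), a matching Standard SKU public IP address would be created with a static assignment, for example:

# Standard SKU public IPs require static allocation and can be attached to a Standard load balancer
New-AzPublicIpAddress -Name 'TD3' -ResourceGroupName 'TD-RG' -Location 'southeastasia' `
  -Sku 'Standard' -AllocationMethod 'Static'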

The options that say: TD1 and TD1 and TD2 are incorrect because both of those public IP addresses use the Basic SKU. You must provision a public IP address with a Standard SKU so you can associate it with a Standard public load balancer.

The option that says: TD3 and TD4 is incorrect because you can only create a standard public IP address with an assignment of static.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-addresses

https://docs.microsoft.com/en-us/azure/virtual-network/ip-services/configure-public-ip-load-balancer

68
Q

4-4. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.

Solution: Go to the security option in Azure AD and configure MFA.

Does the solution meet the goal?

No
Yes

A

No

Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. The single sign-on is an authentication method that simplifies access to your apps from anywhere. While conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enable limited experiences within specific cloud applications
Going back to the scenario, the requirement is to enforce a policy to the members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to configure MFA in Azure AD security. If you check the question again, there is a line “You have been tasked to implement a conditional access policy.” This means that you must create a conditional access policy and enforce grant control. Also, configuring MFA does not enable the option to require the use of an AD joined device.

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-grant

69
Q

4-5. QUESTION
Category: AZ-104 – Implement and Manage Storage
Your company plans to store media assets in two Azure regions.

You are given the following requirements:

Media assets must be stored in multiple availability zones

Media assets must be stored in multiple regions

Media assets must be readable in the primary and secondary regions.

Which of the following data redundancy options should you recommend?

  1. Locally redundant storage
  2. Read-access geo-redundant storage
  3. Zone-redundant storage
  4. Geo-redundant storage
A

Read-access geo-redundant storage

An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability.
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.

Take note that one of the requirements states that the media assets must be readable in both the primary and secondary regions. With geo-redundant storage, your media assets are stored in multiple availability zones and multiple regions, but read access to the secondary region is only available if you or Microsoft initiates a failover from the primary region to the secondary region.

In order to have read access in the primary and secondary regions at all times, without the need to initiate a failover, you should recommend read-access geo-redundant storage.

Hence, the correct answer is: Read-access geo-redundant storage.
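
For reference only (the account and resource group names are assumptions), the redundancy option is selected through the storage account SKU, for example Standard_RAGRS:

# Create a storage account that uses read-access geo-redundant storage (RA-GRS)
New-AzStorageAccount -ResourceGroupName 'TD-RG' -Name 'tdmediaassets' -Location 'southeastasia' `
  -SkuName 'Standard_RAGRS' -Kind 'StorageV2'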

Locally redundant storage is incorrect because the media assets will only be stored in one physical location.

Zone-redundant storage is incorrect. It only satisfies one requirement which is to store the media assets in multiple availability zones. You still need to store your media assets in multiple regions which ZRS is unable to do.

Geo-redundant storage is incorrect because the requirement states that you need read access to the primary and secondary regions. With GRS, the data in the secondary region isn’t available for read access. You can only have read access in the secondary region if a failover from the primary region to the secondary region is initiated by you or Microsoft.

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview

https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy

70
Q

4-6. QUESTION
Category: AZ-104 – Implement and Manage Storage
For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. You can access your blob data that is in archive tier
  2. You can rehydrate a blob data in archive tier instantly
  3. You can rehydrate a blob data in archive tier without costs
A
  1. No
  2. No
  3. No

Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers include:

Hot – Optimized for storing data that is accessed frequently.

Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).

While a blob is in the archive access tier, it’s considered offline and can’t be read or modified. The blob metadata remains online and available, allowing you to list the blob and its properties. Reading and modifying blob data is only available with online tiers such as hot or cool.

To read data in archive storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and can take hours to complete.

A rehydration operation with Set Blob Tier is billed for data read transactions and data retrieval size. High-priority rehydration has higher operation and data retrieval costs compared to standard priority. High-priority rehydration shows up as a separate line item on your bill.
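Besides changing the tier of the archived blob in place, the documentation also describes copy-based rehydration. A minimal Azure PowerShell sketch (the storage account, container, and blob names are assumptions) could look like this:

# Copy the archived blob to a new blob in an online tier; the copy rehydrates with standard priority
$ctx = (Get-AzStorageAccount -ResourceGroupName 'TD-RG' -Name 'tdarchive').Context
Start-AzStorageBlobCopy -SrcContainer 'archive' -SrcBlob 'report.csv' `
  -DestContainer 'reports' -DestBlob 'report.csv' `
  -StandardBlobTier Hot -RehydratePriority Standard -Context $ctx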

The statement that says: You can rehydrate a blob data in archive tier without costs is incorrect. You are billed for data read transactions and data retrieval size (per GB).

The statement that says: You can rehydrate a blob data in archive tier instantly is incorrect. Rehydrating a blob from the Archive tier can take several hours to complete.

The statement that says: You can access your blob data that is in archive tier is incorrect because blob data stored in the archive tier is considered to be offline and can’t be read or modified.

References:

https://azure.microsoft.com/en-us/services/storage/archive/

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-rehydration

71
Q

4-8. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Your company plans to implement a hybrid Azure Active Directory that will include the following users:
(image)

You have been assigned to modify the Department and UsageLocation attributes of the given users.

For which users can the following attributes be modified from Azure AD?

  1. Department: _______ ?
  2. UsageLocation: _______ ?
A
  1. Department: Dev1 and Dev2 only
  2. UsageLocation: Dev1, Dev2, Dev3, and Dev4

Azure Active Directory (Azure AD) is a multi-tenant, cloud-based identity and access management service. By implementing hybrid Azure AD joined devices, organizations with existing Active Directory implementations can benefit from some of the functionality provided by Azure Active Directory. These devices are joined to your on-premises Active Directory and registered with Azure Active Directory.

To achieve a hybrid identity with Azure AD, one of three authentication methods can be used, depending on your scenarios. The three methods are:

Password hash synchronization (PHS)
Pass-through authentication (PTA)
Federation (AD FS)
These authentication methods also provide single-sign-on capabilities. Single-sign on automatically signs your users in when they are on their corporate devices, connected to your corporate network.

Based on the given scenario, you need to modify the Department and UsageLocation attributes from Azure Active Directory. Once you encounter this kind of scenario, the most important info to look at is the source of the user.

There are three sources:

Microsoft account
Windows Server AD
Azure AD
Keep in mind that you cannot modify the Job Info of a user using Azure AD if the source is from Windows Server AD. To update the information of users from this source, you must do it in the Windows Server AD. Lastly, since the UsageLocation is an attribute of Azure Active Directory, you can modify it for all users.

Therefore, the correct answers are:

– Department = Dev1 and Dev2 only

– UsageLocation = Dev1, Dev2, Dev3, and Dev4

References:

https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal

https://docs.microsoft.com/en-us/azure/active-directory/devices/concept-azure-ad-join-hybrid

https://docs.microsoft.com/en-us/azure/active-directory/devices/hybrid-azuread-join-plan

72
Q

4-9. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Tutorials Dojo has a subscription named TDSub1 that contains the following resources:

(image)

TDVM1 needs to connect to a newly created virtual network named TDNET1 that is located in Japan West.

What should you do to connect TDVM1 to TDNET1?

Solution: You create a network interface in TD1 in the South East Asia region.

Does this meet the goal?

Yes
No

A

No

A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. When creating a virtual machine using the Azure portal, the portal creates one network interface with default settings for you.

You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.

Remember these conditions and restrictions when it comes to network interfaces:

– A virtual machine can have multiple network interfaces attached but a network interface can only be attached to a single virtual machine.

– The network interface must be located in the same region and subscription as the virtual machine that it will be attached to.

– When you delete a virtual machine, the network interface attached to it will not be deleted.

– In order to detach a network interface from a virtual machine, you must shut down the virtual machine first.

– By default, the first network interface attached to a VM is the primary network interface. All other network interfaces in the VM are secondary network interfaces.

The solution proposed in the question is incorrect because the virtual network is not located in the same region as TDVM1. Take note that a virtual machine, virtual network and network interface must be in the same region or location.

You need to first redeploy TDVM1 from the South East Asia region to the Japan West region, and then create and attach the network interface to TDVM1 in the Japan West region.

Hence, the correct answer is: No.
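
To illustrate (the resource group, NIC name, and subnet index are assumptions), the new network interface would have to be created in Japan West against a subnet of TDNET1, for example:

# The NIC must be created in the same region as the virtual network and the VM it will attach to
$vnet = Get-AzVirtualNetwork -Name 'TDNET1' -ResourceGroupName 'TD-RG'
New-AzNetworkInterface -Name 'tdvm1-nic' -ResourceGroupName 'TD-RG' -Location 'japanwest' `
  -SubnetId $vnet.Subnets[0].Id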

References:

https://docs.microsoft.com/en-us/azure/virtual-network/

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface

73
Q

4-11. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company has an Azure Subscription that contains a resource group named TD-Cebu.

TD-Cebu contains the following resources:
(image)

What should you do first to delete the TD-Cebu resource group?

  1. Change the resource lock type of TD-VNET and modify the backup configuration of TD-VM.
  2. Delete all the resource lock and backup data in TD-RSV.
  3. Set the resource lock of TD-SA to Delete.
  4. Stop TD-VM and delete the resource lock of TD-VNET.
A
Delete all the resource lock and backup data in TD-RSV.

A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.

In order to delete the TD-Cebu resource group, you must first delete/remove the following:

  1. Resource Lock

– If the lock level is set to Delete or Read-only, the users in your organization are prevented from accidentally deleting or modifying critical resources. The lock overrides any permissions the user might have.

  2. Backup data in Recovery Services vault

– If you try to delete a vault that contains backup data, you’ll encounter a message: “Vault cannot be deleted as there are existing resources within the vault. Please ensure there are no backup items, protected servers, or backup management servers associated with this vault.”

After you delete the lock and the backup data, you can then delete the TD-Cebu resource group.

Hence, the correct answer is: Delete all the resource lock and backup data in TD-RSV.
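
A rough sketch of that order of operations in Azure PowerShell might look like the following (the lock name, resource names, and the single backup item are assumptions, and stopping protection for every backup item is abbreviated):

# Remove the resource lock first
Remove-AzResourceLock -LockName 'TD-Lock' -ResourceGroupName 'TD-Cebu' -Force

# Stop protection and delete the backup data in TD-RSV before deleting the vault
$vault = Get-AzRecoveryServicesVault -Name 'TD-RSV' -ResourceGroupName 'TD-Cebu'
$item = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $vault.ID
Disable-AzRecoveryServicesBackupProtection -Item $item -RemoveRecoveryPoints -VaultId $vault.ID -Force
Remove-AzRecoveryServicesVault -Vault $vault

# The resource group can now be deleted
Remove-AzResourceGroup -Name 'TD-Cebu' -Force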

The option that says: Stop TD-VM and delete the resource lock of TD-VNET is incorrect because you must also delete the backup data of TD-RSV to delete the resource group. Take note that you can’t delete a vault that contains backup data.

The option that says: Set the resource lock of TD-SA to Delete is incorrect because even if you change the resource lock of TD-SA, you still won’t be able to delete the TD-Cebu resource group. You must first delete all the resource lock and backup data in TD-RSV to delete the resource group.

The option that says: Change the resource lock type of TD-VNET and modify the backup configuration of TD-VM is incorrect because changing the lock type of TD-VNET to Delete or Read-only still won’t allow you to delete the resource group. To accomplish the requirements in the scenario, you need to remove the resource lock and delete all the backup data in TD-RSV.

References:

https://docs.microsoft.com/en-us/azure/backup/backup-azure-delete-vault

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources?tabs=json

74
Q

4-12. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company created several Azure virtual machines and a file share in the subscription TD-Boracay. The VMs are all part of the same virtual network.

You have been assigned to manage the on-premises Hyper-V server replication to Azure.

To support the planned deployment, you will need to create additional resources in TD-Boracay.

Which of the following options should you create?

  1. VNet Service Endpoint
  2. Hyper-V site
  3. Replication Policy
  4. Azure ExpressRoute
  5. Azure Storage Account
  6. Azure Recovery Services Vault
A

Hyper-V site
Replication Policy
Azure Recovery Services Vault

Azure Virtual Machines is one of several types of on-demand, scalable computing resources that Azure offers. It gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks such as configuring, patching, and installing the software that runs on it.

Hyper-V is Microsoft’s hardware virtualization product. It lets you create and run a software version of a computer called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and programs. Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual machine on the same hardware at the same time.

A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations.

A replication policy defines the settings for the retention history of recovery points. The policy also defines the frequency of app-consistent snapshots.

To set up disaster recovery of on-premises Hyper-V VMs to Azure, you should complete the following steps:

Select your replication source and target – to prepare the infrastructure, you will need to create a Recovery Services vault. After you create the vault, you can set the protection goal.
Set up the source replication environment, including on-premises Site Recovery components and the target replication environment – to set up the source environment, you need to create a Hyper-V site and add to that site the Hyper-V hosts containing the VMs that you want to replicate. The target environment will be the subscription and the resource group in which the Azure VMs will be created after failover.
Create a replication policy
Enable replication for a VM
Hence, the correct answers are:

– Hyper-V site

– Azure Recovery Services Vault

– Replication Policy

Azure Storage Account is incorrect because the subscription already contains a storage account; an Azure file share cannot exist without one. Instead of creating another storage account, you should set up a Hyper-V site.

Azure ExpressRoute is incorrect because this service is simply used to establish a private connection between your on-premises data center or corporate network to your Azure cloud infrastructure. It does not have the capability to replicate the Hyper-V server to Azure.

VNet Service Endpoint is incorrect because this option will only remove public internet access to resources and allow traffic only from your virtual network. Remember that the main requirement is to replicate the Hyper-V server to Azure. Therefore, this option wouldn’t satisfy the requirement.

References:

https://docs.microsoft.com/en-us/azure/site-recovery/tutorial-prepare-azure-for-hyperv

https://docs.microsoft.com/en-nz/azure/site-recovery/hyper-v-azure-tutorial

https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview

75
Q

4-14. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Your company has several Azure subscriptions.

The TD-Subscription-01 contains the following resource groups:
(image1)

You deployed a web app named TD-WebApp2 in TD-RG2.

The TD-Subscription-02 contains the following resource groups.
(image2)

For each of the following items, choose Yes if the statement is true or choose No if the statement is false.

  1. You can move TD-WebApp2 to TD-RG1
  2. You can move TD-WebApp2 to TD-RG3
  3. You can move TD-WebApp2 to TD-RG5
A
  1. No
  2. Yes
  3. Yes

Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.

Locking of resources overrides the permissions of the users in your organization. It is mainly used to prevent unexpected changes such as modification and deletion of critical resources. Remember that when you apply a lock at a parent scope, all resources within that scope inherit the same lock.

You can set the lock level to CanNotDelete or ReadOnly. In the Azure Portal, the locks are called Delete and Read-only respectively.

– CanNotDelete means authorized users can still read and modify a resource, but they can’t delete the resource.

– ReadOnly means authorized users can read a resource, but they can’t delete or update the resource.

A resource group is just a container for your resources. You decide which resources belong to different resource groups. Take note that if you move a resource to a different resource group, the location of the resource would not change.
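
For illustration (the resource group names match the scenario, but the lookup approach is an assumption), moving a resource such as a web app between resource groups can be done with Move-AzResource:

# Move TD-WebApp2 into the target resource group; its region does not change
$webApp = Get-AzResource -ResourceGroupName 'TD-RG2' -Name 'TD-WebApp2' -ResourceType 'Microsoft.Web/sites'
Move-AzResource -DestinationResourceGroupName 'TD-RG3' -ResourceId $webApp.ResourceId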

The following statements are correct because you can move the TD-WebApp2 to the existing resource groups:

– You can move TD-WebApp2 to TD-RG3.

– You can move TD-WebApp2 to TD-RG5.

The statement that says: You can move TD-WebApp2 to TD-RG1 is incorrect because the lock type of the resource group is set to read-only. This means that users can only read a resource, but they can’t delete or update the resource. If you try to move TD-WebApp2 to TD-RG1, you’d receive an error message “Moving resources failed”. In order to move the web app, you must delete the read-only lock type.

References:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources

76
Q

4-15. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You have an Azure subscription named TDSub1.

There is a requirement to assess your network infrastructure using Azure Network Watcher. You plan to do the following activities:

Capture information about the IP traffic going to and from a network security group.

Diagnose connectivity issues to or from an Azure virtual machine

Which feature should you use for each activity?

  1. Capture information about the IP traffic going to and from a network security group: ________ ?
  2. Diagnose connectivity issues to an Azure virtual machine: _________ ?
A
  1. Capture information about the IP traffic going to and from a network security group:
    NSG flow logs
  2. Diagnose connectivity issues to an Azure virtual machine:
    IP flow verify

Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.

Network security group (NSG) flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an NSG. Flow data is sent to Azure Storage accounts from where you can access it as well as export it to any visualization tool, SIEM, or IDS of your choice.

Flow logs are the source of truth for all network activity in your cloud environment. Whether you’re an upcoming startup trying to optimize resources or a large enterprise trying to detect intrusion, Flow logs are your best bet. You can use it for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more.

IP flow verify checks if a packet is allowed or denied to or from a virtual machine. If the packet is denied by a security group, the name of the rule that denied the packet is returned.

IP flow verify evaluates the rules of all network security groups (NSGs) applied to a virtual machine’s network interface, whether the NSG is associated at the subnet or at the NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming whether a rule in a network security group is blocking ingress or egress traffic to or from a virtual machine.

Therefore, you have to use the NSG flow logs to capture information about the IP traffic going to and from a network security group.

Meanwhile, to diagnose connectivity issues to or from an Azure virtual machine, you need to use IP flow verify.

Next hop is incorrect because this simply helps you determine if traffic is being directed to the intended destination, or whether the traffic is being sent nowhere.

Traffic analytics is incorrect because this just allows you to process your NSG Flow Log data that enables you to visualize, query, analyze, and understand your network traffic.
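
As a rough illustration, the sketch below (untested) exercises both features through the Azure SDK for Python (azure-mgmt-network). The resource group, Network Watcher instance, virtual machine, NSG, and storage account names are placeholders rather than values from the scenario, and the model field names should be checked against your SDK version.

# Minimal sketch: IP flow verify and NSG flow logs via azure-mgmt-network.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), sub)

vm_id = (f"/subscriptions/{sub}/resourceGroups/td-rg"
         "/providers/Microsoft.Compute/virtualMachines/td-vm1")

# IP flow verify: is inbound TCP 3389 from a given remote address allowed to the VM?
result = client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG", "NetworkWatcher_southeastasia",
    {
        "target_resource_id": vm_id,
        "direction": "Inbound",
        "protocol": "TCP",
        "local_ip_address": "10.0.0.4",
        "local_port": "3389",
        "remote_ip_address": "203.0.113.10",
        "remote_port": "60000",
    },
).result()
print(result.access, result.rule_name)  # e.g. "Deny" plus the NSG rule that matched

# NSG flow logs: record information about IP traffic evaluated by the NSG
# into a storage account.
nsg_id = (f"/subscriptions/{sub}/resourceGroups/td-rg"
          "/providers/Microsoft.Network/networkSecurityGroups/td-nsg1")
storage_id = (f"/subscriptions/{sub}/resourceGroups/td-rg"
              "/providers/Microsoft.Storage/storageAccounts/tdflowlogs")

client.flow_logs.begin_create_or_update(
    "NetworkWatcherRG", "NetworkWatcher_southeastasia", "td-nsg1-flowlog",
    {
        "location": "southeastasia",
        "target_resource_id": nsg_id,
        "storage_id": storage_id,
        "enabled": True,
    },
).result()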

References:

https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-monitoring-overview

https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-ip-flow-verify-overview

https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview

77
Q

4-16. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
You have been assigned to manage the following Azure resources:
(image)

These resources are used by the analytics, development, and operations teams.

You need to track the resource consumption and prevent the deletion of resources.

To which resources can you apply tags and locks?

  1. Tags: _____________ ?
  2. Locks: __________ ?
A

Tags: tdvm, tdsa, and tdsub
Locks: tdvm, tdsa, and tdsub

Tags are used to logically organize your Azure resources, resource groups, and subscriptions into a taxonomy. Each tag consists of a name and a value pair. For example, you can apply the name Environment and the value Production to all the resources in production. You can also use tags to categorize costs by runtime environment, such as the billing usage for VMs running in the production environment.

Locks, on the other hand, are used to prevent other users in your organization from accidentally deleting or modifying critical resources. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent.

The lock level can be set in two ways:

CanNotDelete means authorized users can still read and modify a resource, but they can’t delete the resource.
ReadOnly means authorized users can read a resource, but they can’t delete or update the resource.
Going back to the question, the analytics, development, and operations teams use the resources listed in the table. Your task is to identify the resources to which you can apply tags and locks. Based on how tags and locks work, the only listed resource to which you cannot apply a tag or a lock is the management group. Azure management groups are containers that help you manage access, policy, and compliance across multiple subscriptions.

Therefore, the correct answers are:

– Tags = tdvm, tdsa, and tdsub

– Locks = tdvm, tdsa, and tdsub
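
As a rough sketch (untested), the snippet below tags the virtual machine and applies a CanNotDelete lock at subscription scope using the Azure SDK for Python (azure-mgmt-compute and azure-mgmt-resource). The resource group, tag values, and lock name are placeholders.

# Minimal sketch: tag a VM for cost tracking and lock the subscription against deletes.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.resource import ManagementLockClient

sub = "<subscription-id>"  # placeholder
cred = DefaultAzureCredential()

# Tags: track which team consumes the virtual machine.
compute = ComputeManagementClient(cred, sub)
compute.virtual_machines.begin_update(
    "td-rg", "tdvm", {"tags": {"team": "analytics", "environment": "production"}}
).result()

# Lock: a CanNotDelete lock at subscription scope is inherited by every resource in it.
locks = ManagementLockClient(cred, sub)
locks.management_locks.create_or_update_at_subscription_level(
    "td-prevent-delete", {"level": "CanNotDelete", "notes": "Protect team resources"}
)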

References:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources

78
Q

4-20. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Your company has five branch offices and a Microsoft Entra ID to centrally manage all identities and application access.

You have been tasked with granting permission to local administrators to manage users and groups within their scope.

What should you do?

  1. Assign a Microsoft Entra ID role.
  2. Create management groups.
  3. Create an administrative unit.
  4. Assign an Azure role.
A
  1. Create an administrative unit.

Microsoft Entra ID is a cloud-based identity and access management service that enables your employees access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.

Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.

For more granular administrative control in Microsoft Entra ID, you can assign a Microsoft Entra ID role with a scope limited to one or more administrative units.

Administrative units limit a role’s permissions to any portion of your organization that you define. You could, for example, use administrative units to delegate the Helpdesk Administrator role to regional support specialists, allowing them to manage users only in the region for which they are responsible.

Hence, the correct answer is: Create an administrative unit.

The option that says: Assign a Microsoft Entra ID role is incorrect because if you assign an administrative role without scoping it to an administrative unit, the role applies to the entire directory rather than only to the local administrator’s branch office.

The option that says: Create management groups is incorrect because management groups are just containers to organize your subscriptions and resources. This option won’t help you grant local administrators permission to manage users and groups.

The option that says: Assign an Azure role is incorrect because the requirement is to grant local administrators permission only in their respective offices. If you use an Azure role, the user will be able to manage other Azure resources. Therefore, you need to use administrative units so the administrators can only manage users in the region that they support.
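
For reference, the sketch below (untested) creates an administrative unit and delegates a role within it through the Microsoft Graph API. The access token, directory role ID, and user object ID are placeholders, and the endpoints are assumed to follow the Graph v1.0 administrative units API, so verify them against the current Microsoft Graph reference before use.

# Minimal sketch: create an administrative unit and scope a role assignment to it.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

# 1. Create an administrative unit for one branch office.
au = requests.post(
    f"{GRAPH}/directory/administrativeUnits",
    headers=headers,
    json={"displayName": "Branch Office - Davao", "description": "Davao local admins"},
).json()

# 2. Give the local administrator a role (for example, User Administrator) that is
#    scoped to this administrative unit only, so they can manage just the users and
#    groups placed inside it.
requests.post(
    f"{GRAPH}/directory/administrativeUnits/{au['id']}/scopedRoleMembers",
    headers=headers,
    json={
        "roleId": "<directory-role-object-id>",                      # placeholder
        "roleMemberInfo": {"id": "<local-admin-user-object-id>"},    # placeholder
    },
)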

References:

https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/admin-units-assign-roles

https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/administrative-units

79
Q

4-21. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
You have an Azure subscription named Davao-Subscription1.

You have the following public load balancers deployed in Davao-Subscription1.
(image)

You provisioned two groups of five virtual machines each, and traffic to them must be load balanced so that it is evenly distributed.

Which of the following health probe protocols is not available for TD2?

  1. HTTP
  2. TCP
  3. RDP
  4. HTTPS
A
  1. HTTPS

Azure Load balancer provides a higher level of availability and scale by spreading incoming requests across virtual machines (VMs). A private load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that is load balanced. Front-end IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources.

Remember that although cheaper, load balancers with the basic SKU have limited features compared to a standard load balancer. Basic load balancers are only useful for testing in development environments; for production workloads, you need to upgrade your basic load balancer to a standard load balancer to fully utilize the features of Azure Load Balancer.

Take note that the health probes of a basic load balancer support only the TCP and HTTP protocols. HTTPS probes require the standard SKU.

Hence, the correct answer is: HTTPS.

HTTP and TCP are incorrect because these are supported protocols for health probes using basic load balancer.

RDP is incorrect because this protocol is not supported by Azure Load Balancer.
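
As a quick check, the sketch below (untested) lists the health probes configured on TD2 with the Azure SDK for Python (azure-mgmt-network); the subscription ID and resource group are placeholders. On a basic SKU load balancer the protocol will only ever be Tcp or Http, because Https probes require the standard SKU.

# Minimal sketch: inspect the probe protocols on an existing load balancer.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

for probe in client.load_balancer_probes.list("td-rg", "TD2"):
    # e.g. "healthprobe1 Tcp 80" - Https never appears on a basic SKU load balancer.
    print(probe.name, probe.protocol, probe.port)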

References:

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview

https://docs.microsoft.com/en-us/azure/load-balancer/skus

80
Q

4-27. QUESTION
Category: AZ-104 – Manage Azure Identities and Governance
Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.

Solution: Create a conditional access policy and enforce grant control.

Does the solution meet the goal?

Yes
No

A

Yes

Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. The single sign-on is an authentication method that simplifies access to your apps from anywhere. While conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enables limited experiences within specific cloud applications.
Going back to the scenario, the requirement is to enforce a policy that requires members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to enforce grant control. Grant controls can require multi-factor authentication and a hybrid Azure AD joined device (and can require all of the selected controls), so this satisfies the requirement.

Hence, the correct answer is: Yes.
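
For illustration, the sketch below (untested) shows what such a policy could look like when created through the Microsoft Graph conditional access API. The DevOps group object ID and access token are placeholders, and the exact JSON schema should be verified against the Graph conditionalAccess reference.

# Minimal sketch: grant controls requiring MFA AND a hybrid Azure AD joined device
# for the DevOps group when signing in from outside trusted locations.
import requests

policy = {
    "displayName": "DevOps - require MFA and hybrid joined device from untrusted locations",
    "state": "enabled",
    "conditions": {
        "users": {"includeGroups": ["<devops-group-object-id>"]},   # placeholder
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
        "locations": {"includeLocations": ["All"], "excludeLocations": ["AllTrusted"]},
    },
    "grantControls": {
        "operator": "AND",  # both controls below must be satisfied
        "builtInControls": ["mfa", "domainJoinedDevice"],  # hybrid Azure AD joined device
    },
}

requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": "Bearer <access-token>", "Content-Type": "application/json"},
    json=policy,
)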

References:

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-grant

81
Q

4-29. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your organization has a domain named tutorialsdojo.com.

You want to host your records in Microsoft Azure.

Which three actions should you perform?

  1. Create an Azure private DNS zone
  2. Copy the Azure DNS NS records
  3. Update the Azure A records to your domain registrar
  4. Update the Azure NS records to your domain registrar
  5. Copy the Azure DNS A records
  6. Create an Azure public DNS zone
A

-2. Copy the Azure DNS NS records
-4. Update the Azure NS records to your domain registrar
-6. Create an Azure public DNS zone

Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.

Using custom domain names helps you to tailor your virtual network architecture to best suit your organization’s needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.

Since you own tutorialsdojo.com through a domain name registrar, you can create a zone named tutorialsdojo.com in Azure DNS. Because you are the owner of the domain, your registrar allows you to point the domain’s name server (NS) records at Azure DNS, so internet users around the world are directed to your Azure DNS zone whenever they resolve tutorialsdojo.com.

The steps in registering your Azure public DNS records are:

Create your Azure public DNS zone
Retrieve name servers – Azure DNS gives name servers from a pool each time a zone is created.
Delegate the domain – Once the DNS zone gets created and you have the name servers, you’ll need to update the parent domain with the Azure DNS name servers.
Hence, the correct answers are:

– Create an Azure public DNS zone

– Update the Azure NS records to your domain registrar

– Copy the Azure DNS NS records

The options that say: Copy the Azure DNS A records and Update the Azure A records to your domain registrar are incorrect because you need to copy the name server records instead of the A records. An A record is a type of DNS record that points a domain to an IP address.

The option that says: Create an Azure private DNS zone is incorrect because this simply manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. The requirement states that the users must be able to access tutorialsdojo.com via the internet. You need to deploy an Azure public DNS zone instead.
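
The first two steps can be sketched with the Azure SDK for Python (azure-mgmt-dns); the snippet below is untested and uses a placeholder subscription ID and resource group.

# Minimal sketch: create the public zone and read back the assigned name servers.
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

client = DnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 1. Create the Azure public DNS zone (public is the default zone type).
zone = client.zones.create_or_update("td-rg", "tutorialsdojo.com", {"location": "global"})

# 2. Copy the Azure DNS NS records assigned to the zone...
print(zone.name_servers)

# 3. ...then delegate the domain by entering these name servers as the NS records
#    at your domain registrar (done in the registrar's portal, not in Azure).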

References:

https://docs.microsoft.com/en-us/azure/dns/dns-overview

https://docs.microsoft.com/en-us/azure/dns/dns-getstarted-portal

82
Q

4-30. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your company’s Azure subscription contains the following resources:
(image)

You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.

Solution: Use IP flow verify in Azure Network Watcher.

Does the solution meet the goal?

No
Yes

A

No

Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.

With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.

The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.

The provided solution is to use IP flow verify in Azure Network Watcher. The main use case of IP flow verify is to determine whether a packet to or from a virtual machine is allowed or denied based on 5-tuple information; it does not capture packets from your virtual machines for a period of 3600 seconds (1 hour).

Hence, the correct answer is: No.
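
For comparison, the feature that does meet the goal is a packet capture. The sketch below (untested) starts one limited to 3600 seconds with the Azure SDK for Python (azure-mgmt-network); the virtual machine, storage account, and Network Watcher names are placeholders, and the target VM must already have the Network Watcher agent extension installed.

# Minimal sketch: record traffic to and from a VM for exactly one hour.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), sub)

vm_id = (f"/subscriptions/{sub}/resourceGroups/td-rg"
         "/providers/Microsoft.Compute/virtualMachines/td-vm1")
storage_id = (f"/subscriptions/{sub}/resourceGroups/td-rg"
              "/providers/Microsoft.Storage/storageAccounts/tdcaptures")

client.packet_captures.begin_create(
    "NetworkWatcherRG", "NetworkWatcher_southeastasia", "td-vm1-capture",
    {
        "target": vm_id,
        "storage_location": {"storage_id": storage_id},  # the .cap file lands here
        "time_limit_in_seconds": 3600,  # the 3600-second session the scenario asks for
    },
).result()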

References:

https://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-packet-capture-overview

https://learn.microsoft.com/en-us/azure/network-watcher/frequently-asked-questions

83
Q

4-31. QUESTION
Category: AZ-104 – Deploy and Manage Azure Compute Resources
A company deployed a Grafana image in Azure Container Apps with the following configurations:

Resource Group: tdrg-grafana

Region: Canada Central

Zone Redundancy: Disabled

Virtual Network: Default

IP Restrictions: Allow

The container’s public IP address was provided to development teams in the East US region to allow users access to the dashboard. However, you received a report that users can’t access the application.

Which of the following options allows users to access Grafana with the least amount of configuration?

  1. Move the container app to the East US Region.
  2. Add a custom domain and certificate.
  3. Disable IP Restrictions.
  4. Configure ingress to generate a new endpoint.
A
  1. Configure ingress to generate a new endpoint.

Azure Container Apps allows you to deploy containerized apps without managing complex infrastructure. You have the freedom to write code in your preferred language or framework, and create microservices that are fully supported by the Distributed Application Runtime (Dapr). The scaling of your application can be automatically adjusted based on either HTTP traffic or events, utilizing Kubernetes Event-Driven Autoscaling (KEDA).

With Azure Container Apps ingress, you can make your container application accessible to the public internet, VNET, or other container apps within your environment. This eliminates the need to create an Azure Load Balancer, public IP address, or any other Azure resources to handle incoming HTTPS requests. Each container app can have unique ingress configurations. For instance, one container app can be publicly accessible while another can only be reached within the Container Apps environment.

The problem in the given scenario is that users are trying to reach the container through its public IP address even though ingress was not enabled when the container app was created. When you configure ingress with a target port and save the change, the app generates a new endpoint (the application URL) based on the ingress traffic option you selected. Requests to that application URL are then routed to the target port of the container image.

Hence, the correct answer is: Configure ingress to generate a new endpoint.

The option that says: Move the container app to the East US Region is incorrect because you can’t move a container app to a different Region.

The option that says: Disable IP Restrictions is incorrect because this still won’t let users access the Grafana app. The issue is not an IP restriction (it is already set to Allow); you only need to enable ingress and set the target port.

The option that says: Add a custom domain and certificate is incorrect because even with a custom domain name, you still won’t be able to access the application, since additional configuration is needed to allow VNET-scope ingress. Therefore, the quickest option with the least amount of configuration is to enable ingress and use the generated application URL.

References:

https://learn.microsoft.com/en-us/azure/container-apps/ingress?tabs=bash

https://azure.microsoft.com/en-us/products/container-apps/

84
Q

4-33. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have the following storage accounts in your Azure subscription.
(image)

There is a requirement to export the data from your subscription using the Azure Import/Export service.

Which Azure Storage account can you use to export the data?

  1. mystorage4
  2. mystorage1
  3. mystorage2
  4. mystorage3
A
  1. mystorage2

Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.

Consider using Azure Import/Export service when uploading or downloading data over the network is too slow, or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:

Data migration to the cloud: Move large amounts of data to Azure quickly and cost-effectively.

Content distribution: Quickly send data to your customer sites.

Backup: Take backups of your on-premises data to store in Azure Storage.

Data recovery: Recover a large amount of data stored in the storage and have it delivered to your on-premises location.

Azure Import/Export service allows data transfer into Azure Blobs and Azure Files by creating jobs. Use the Azure portal or Azure Resource Manager REST API to create jobs. Each job is associated with a single storage account. This service only supports export of Azure Blobs. Export of Azure files is not supported.

The jobs can be import or export jobs. An import job allows you to import data into Azure Blobs or Azure files, whereas the export job allows data to be exported from Azure Blobs. For an import job, you ship drives containing your data. When you create an export job, you ship empty drives to an Azure datacenter. In each case, you can ship up to 10 disk drives per job.

Hence, the correct answer is: mystorage2.

mystorage1 is incorrect because an export job does not support Azure Files. The Azure Import/Export service only supports export of Azure Blobs.

mystorage3 and mystorage4 are incorrect because the Queue and Table storage services are simply not supported by the Azure Import/Export service.

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-service

https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-requirements

85
Q

4-36. QUESTION
Category: AZ-104 – Implement and Manage Virtual Networking
Your company is currently hosting a mission-critical application in an Azure virtual machine that resides in a virtual network named TDVnet1. You plan to use Azure ExpressRoute to allow the web applications to connect to the on-premises network.

Due to compliance requirements, you need to ensure that in the event your ExpressRoute fails, the connectivity between TDVnet1 and your on-premises network will remain available.

The solution must utilize a site-to-site VPN between TDVnet1 and the on-premises network. The solution should also be cost-effective.

Which three actions should you implement? Each correct answer presents part of the solution.

  1. Configure a local network gateway.
  2. Configure a VPN gateway with VpnGw1 as its SKU.
  3. Configure a gateway subnet.
  4. Configure a VPN gateway with Basic as its SKU.
  5. Configure a connection.
A

-1. Configure a local network gateway.
-2. Configure a VPN gateway with VpnGw1 as its SKU.
-5. Configure a connection.

A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.

A site-to-site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it.

Configuring Site-to-Site VPN and ExpressRoute coexisting connections has several advantages:

– You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute.

– Alternatively, you can use Site-to-Site VPNs to connect to sites that are not connected through ExpressRoute.

To create a site-to-site connection, you need to do the following:

– Provision a virtual network

– Provision a VPN gateway

– Provision a local network gateway

– Provision a VPN connection

– Verify the connection

– Connect to a virtual machine

Take note that since you have already deployed an ExpressRoute connection, you do not need to create a virtual network or a gateway subnet, as these were already provisioned as prerequisites of the ExpressRoute deployment.

Hence, the correct answers are:

– Configure a VPN gateway with a VpnGw1 SKU.

– Configure a local network gateway.

– Configure a connection.

The option that says: Configure a gateway subnet is incorrect. As you already have an ExpressRoute connecting to your on-premises network, this means that a gateway subnet is already provisioned.

The option that says: Configure a VPN gateway with Basic as its SKU is incorrect. Although one of the requirements is to minimize costs, the coexisting connection for ExpressRoute and site-to-site VPN connection does not support a Basic SKU. The bare minimum for a coexisting connection is VpnGw1.
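
A rough sketch (untested) of the three actions with the Azure SDK for Python (azure-mgmt-network) is shown below. All names, IP addresses, address prefixes, and the shared key are placeholders, the public IP address for the gateway is assumed to exist already, and the VPN gateway is placed in the existing GatewaySubnet of TDVnet1; verify the model field names against your SDK version.

# Minimal sketch: local network gateway + VpnGw1 VPN gateway + site-to-site connection.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub = "<subscription-id>"  # placeholder
rg, location = "td-rg", "southeastasia"
client = NetworkManagementClient(DefaultAzureCredential(), sub)

# 1. Local network gateway: represents the on-premises VPN device and address space.
lng = client.local_network_gateways.begin_create_or_update(
    rg, "td-onprem-lng",
    {
        "location": location,
        "gateway_ip_address": "203.0.113.20",  # public IP of the on-premises VPN device
        "local_network_address_space": {"address_prefixes": ["192.168.0.0/16"]},
    },
).result()

# 2. VPN gateway with the VpnGw1 SKU (Basic is not supported alongside ExpressRoute).
vnet_prefix = f"/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Network"
vng = client.virtual_network_gateways.begin_create_or_update(
    rg, "td-vpn-gw",
    {
        "location": location,
        "gateway_type": "Vpn",
        "vpn_type": "RouteBased",
        "sku": {"name": "VpnGw1", "tier": "VpnGw1"},
        "ip_configurations": [{
            "name": "vnetGatewayConfig",
            "subnet": {"id": f"{vnet_prefix}/virtualNetworks/TDVnet1/subnets/GatewaySubnet"},
            "public_ip_address": {"id": f"{vnet_prefix}/publicIPAddresses/td-vpn-gw-pip"},
        }],
    },
).result()

# 3. Connection: the site-to-site IPsec tunnel that serves as the ExpressRoute failover path.
client.virtual_network_gateway_connections.begin_create_or_update(
    rg, "td-s2s-connection",
    {
        "location": location,
        "connection_type": "IPsec",
        "virtual_network_gateway1": vng,
        "local_network_gateway2": lng,
        "shared_key": "<pre-shared-key>",  # placeholder
    },
).result()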

References:

https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal

https://docs.microsoft.com/en-us/azure/expressroute/expressroute-howto-coexist-resource-manager

https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpngateways#gwsku

86
Q

4-41. QUESTION
Category: AZ-104 – Implement and Manage Storage
You have an Azure subscription that contains an Azure File Share named TDShare1 that contains sensitive data.

You want to ensure that only authorized users can access this data for compliance requirements, and users must only have access to specific files and folders.

You registered TDShare1 to use AD DS authentication and Azure AD Connect sync for specific AD user access.

You need to give your active directory users access to TDShare1.

What should you do?

  1. Enable anonymous access to the storage account.
  2. Create a shared access signature (SAS) with a stored access policy.
  3. Use the storage account access keys for authentication.
  4. Configure role-based access control (RBAC).
A
  1. Configure role-based access control (RBAC).

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files SMB file shares are accessible from Windows, Linux, and macOS clients. Azure Files NFS file shares are accessible from Linux or macOS clients. Additionally, Azure Files SMB file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.

Once you’ve enabled an Active Directory (AD) source for your storage account, you must configure share-level permissions in order to get access to your file share. There are two ways you can assign share-level permissions. You can assign them to specific Azure AD users/groups, and you can assign them to all authenticated identities as a default share-level permission.

Since we are handling sensitive data, we want users to be able to access only the files they are allowed to. Because of this, we need to grant access to the Azure file share to specific Azure AD users or groups.

In order for share-level permissions to work for specific Azure AD users or groups, you must:

Sync the users and the groups from your local AD to Azure AD using either the on-premises Azure AD Connect sync application or Azure AD Connect cloud sync.
Assign the AD-synced users or groups to an RBAC role (a share-level permission) so they can access the file share in your storage account.
Hence, the correct answer is: Configure role-based access control (RBAC).
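
As a rough sketch (untested), the snippet below assigns the built-in Storage File Data SMB Share Contributor role to a synced group at the file share scope using the Azure SDK for Python (azure-mgmt-authorization). The group object ID, storage account, resource group, and scope path are placeholders to verify against your environment and SDK version; Windows ACLs (NTFS permissions) then control access to specific files and folders within the share.

# Minimal sketch: grant share-level access to TDShare1 via an RBAC role assignment.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

sub = "<subscription-id>"  # placeholder
client = AuthorizationManagementClient(DefaultAzureCredential(), sub)

# Scope: the TDShare1 file share inside its storage account.
scope = (f"/subscriptions/{sub}/resourceGroups/td-rg"
         "/providers/Microsoft.Storage/storageAccounts/tdstorage"
         "/fileServices/default/fileshares/TDShare1")

# Look up the built-in role definition by name instead of hard-coding its GUID.
role = next(client.role_definitions.list(
    scope, filter="roleName eq 'Storage File Data SMB Share Contributor'"))

# Assign the role to the AD-synced group.
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    {
        "role_definition_id": role.id,
        "principal_id": "<synced-group-object-id>",  # placeholder
        "principal_type": "Group",
    },
)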

The option that says: Enable anonymous access to the storage account is incorrect as it allows anyone to access the storage account and its contents without authentication.

The option that says: Create a shared access signature (SAS) with a stored access policy is incorrect because while SAS tokens can provide limited access to a storage account, they are not a suitable authentication mechanism for controlling access to sensitive data.

The option that says: Use the storage account access keys for authentication is incorrect because storage account keys provide full control over the storage account, which means that anyone with the key can perform any operation on the storage account. This makes them a less secure option, especially for sensitive data that requires fine-grained access control.

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://learn.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-assign-permissions

87
Q

4-45. QUESTION
Category: AZ-104 – Monitor and Maintain Azure Resources
Your company is currently running a mission-critical application in a primary Azure region.

You plan to implement a disaster recovery by configuring failover to a secondary region using Azure Site Recovery.

What should you do?

  1. Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.
  2. Create a virtual network and subnet in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.
  3. Create an Azure Traffic Manager profile to load-balance traffic between the primary and secondary regions, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region.
  4. Create an RSV in the primary region, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region.
A
  1. Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.

Azure Site Recovery service contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business applications online during planned and unplanned outages. Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VM), including replication, failover, and recovery.

Enabling replication for a virtual machine (VM) for disaster recovery purposes involves installing the Site Recovery Mobility service extension on the VM and registering it with Azure Site Recovery. During replication, any disk writes from the VM are first sent to a cache storage account in the source region. Subsequently, the data is transferred to the target region, where recovery points are generated from it. During a disaster recovery failover of the VM, a recovery point is used to restore the VM in the target region.

Here’s how to set up disaster recovery for a VM with Azure Site Recovery:

First, you need to create a Recovery Services Vault (RSV) in the secondary region, which will serve as the target location for the VM during a failover.
Next, you need to install and configure the Azure Site Recovery agent on the VMs that you want to protect. The agent captures data changes on the VM disks and sends them to Azure Site Recovery for replication to the secondary region.
Once the replication is set up, you need to design a recovery plan that outlines the steps to orchestrate the failover and failback operations. This includes defining the order in which VMs should be failed over, any dependencies between VMs, and the desired recovery point objective (RPO) and recovery time objective (RTO) for each VM.
During replication, VM disk writes are sent to a cache storage account in the source region, and from there to the target region, where recovery points are generated from the data. In the event of a disaster or planned failover, a recovery point is used to restore the VM in the target region, allowing the business to continue operations without significant downtime or data loss.
Hence, the correct answer is: Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.

The option that says: Create an RSV in the primary region, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region is incorrect because although this will replicate the data to the secondary region, it does not include the necessary steps to perform failover. You still need to create a Recovery Services vault in the secondary region, not the primary region, to perform failover.

The option that says: Create a virtual network and subnet in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations is incorrect because, although it includes the agent installation and a recovery plan, it is missing the key step: you still need to create a Recovery Services vault in the secondary region to act as the failover target.

The option that says: Create an Azure Traffic Manager profile to load-balance traffic between the primary and secondary regions, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region is incorrect because this will just load-balance traffic between the primary and secondary regions but won’t be able to perform failover. You will still need to create a Recovery Services vault in the secondary region to perform failover.
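
As a small illustration of the first step, the sketch below (untested) creates a Recovery Services vault in the secondary region with the Azure SDK for Python (azure-mgmt-recoveryservices). The region, resource group, and vault names are placeholders; enabling replication, installing the agent, and building the recovery plan are configured afterwards through Site Recovery.

# Minimal sketch: create the Recovery Services vault in the secondary (target) region.
from azure.identity import DefaultAzureCredential
from azure.mgmt.recoveryservices import RecoveryServicesClient

client = RecoveryServicesClient(DefaultAzureCredential(), "<subscription-id>")

client.vaults.begin_create_or_update(
    "td-dr-rg",
    "td-secondary-rsv",
    {
        "location": "southeastasia",  # the secondary region, not the primary
        "sku": {"name": "Standard"},
        "properties": {},
    },
).result()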

References:

https://learn.microsoft.com/en-us/azure/site-recovery/site-recovery-overview

https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-quickstart