Review Mode Set 4 Dojo Flashcards

1
Q

You have an Azure subscription that contains hundreds of network resources.

You need to recommend a solution that will allow you to monitor resources in one centralized console for network monitoring.

What solution should you recommend?

A. Azure Monitor Network Insights
B. Azure Virtual Network
C. Azure Traffic Manager
D. Azure Advisor

A

A. Azure Monitor Network Insights

Explanation:
Azure Monitor maximizes the availability and performance of your applications and services by delivering a solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on.

Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources without requiring any configuration. It also provides access to network monitoring capabilities like Connection Monitor, flow logging for network security groups (NSGs), and Traffic Analytics, as well as other network diagnostic features. Key features of Network Insights:

– Single console for network monitoring

– No agent configuration required

– Access to health state, metrics, alerts, & data from traffic and connectivity monitoring tools in one place

– View network topology with functional dependencies for simpler troubleshooting

– Access resources metrics to debug issues without writing queries or authoring workbooks

Hence, the correct answer is: Azure Monitor Network Insights.

Azure Virtual Network is incorrect because this service simply allows your resources, such as virtual machines, to securely communicate with each other, the internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own data center but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.

Azure Traffic Manager is incorrect because this is simply a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions while providing high availability and responsiveness. However, you cannot use this to monitor your network resources.

Azure Advisor is incorrect because this service just helps you improve the cost-effectiveness, performance, reliability (formerly called high availability), and security of your Azure resources. It does not provide a centralized console for network monitoring.

2
Q

Your organization has a Microsoft Entra ID subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations.

Solution: Create a conditional access policy and enforce grant control.

Does the solution meet the goal?

A. No
B. Yes

A

B. Yes

Explanation:
The Microsoft Entra ID enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enables limited experiences within specific cloud applications.

Going back to the scenario, the requirement is to enforce a policy on the members of the DevOps group to use MFA and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations. The given solution is to enforce grant access control. Grant control has options to require multi-factor authentication and a hybrid Microsoft Entra joined device, so it satisfies this requirement.

Hence, the correct answer is: Yes.
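
For illustration, such a policy can also be created programmatically. Below is a minimal, hedged sketch using the Microsoft Graph conditional access API (the DevOps group object ID is a placeholder, and the azure-identity and requests packages are assumed):

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire a Graph token; assumes an identity with Policy.ReadWrite.ConditionalAccess.
token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token

policy = {
    "displayName": "DevOps: MFA + hybrid-joined device from untrusted locations",
    "state": "enabled",
    "conditions": {
        "users": {"includeGroups": ["<devops-group-object-id>"]},  # placeholder
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
        "locations": {"includeLocations": ["All"], "excludeLocations": ["AllTrusted"]},
    },
    # Grant controls: require BOTH multi-factor authentication and a
    # hybrid Microsoft Entra joined device (operator AND).
    "grantControls": {"operator": "AND", "builtInControls": ["mfa", "domainJoinedDevice"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
)
resp.raise_for_status()
```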

3
Q

Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.

Solution: Create a conditional access policy and enforce session control.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
The Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enables limited experiences within specific cloud applications.

Going back to the scenario, the requirement is to enforce a policy on the members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to enforce session access control. Session controls don't have options to require the use of MFA and hybrid Azure AD joined devices, so they can't meet this requirement.

Hence, the correct answer is: No.

4
Q

Your organization has a Microsoft Entra subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra ID from untrusted locations.

Solution: Go to the security option in Microsoft Entra and configure MFA.

Does the solution meet the goal?

A. No
B. Yes

A

A. No

Explanation:
The Microsoft Entra ID enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enables limited experiences within specific cloud applications.

Going back to the scenario, the requirement is to enforce a policy on the members of the DevOps group to use MFA and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations. The given solution is to configure MFA in Microsoft Entra security. If you check the question again, there is a line that says: "You have been tasked to implement a conditional access policy." This means that you must create a conditional access policy and enforce grant control. Also, configuring MFA alone does not give you the option to require the use of a hybrid Microsoft Entra joined device.

Hence, the correct answer is: No.

5
Q

Your company created several Azure virtual machines and a file share in the subscription TD-Boracay. The VMs are all part of the same virtual network.

You have been assigned to manage the on-premises Hyper-V server replication to Azure.

To support the planned deployment, you will need to create additional resources in TD-Boracay.

Which of the following options should you create?

A. Replication Policy
B. Azure Storage Account
C. VNet Service Endpoint
D. Hyper-V site
E. Azure Recovery Services Vault
F. Azure ExpressRoute

A

A. Replication Policy
D. Hyper-V site
E. Azure Recovery Services Vault

Explanation:
Azure Virtual Machines is one of several types of on-demand, scalable computing resources that Azure offers. It gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks such as configuring, patching, and installing the software that runs on it.

Hyper-V is Microsoft’s hardware virtualization product. It lets you create and run a software version of a computer called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and programs. Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual machine on the same hardware at the same time.

A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations.

A replication policy defines the settings for the retention history of recovery points. The policy also defines the frequency of app-consistent snapshots.

To set up disaster recovery of on-premises Hyper-V VMs to Azure, you should complete the following steps:

1. Select your replication source and target – to prepare the infrastructure, you will need to create a Recovery Services vault. After you create the vault, you can set the protection goal.
2. Set up the source replication environment, including on-premises Site Recovery components, and the target replication environment – to set up the source environment, you need to create a Hyper-V site and add to that site the Hyper-V hosts containing the VMs that you want to replicate. The target environment will be the subscription and the resource group in which the Azure VMs will be created after failover.
3. Create a replication policy.
4. Enable replication for a VM.

Hence, the correct answers are:

– Hyper-V site

– Azure Recovery Services Vault

– Replication Policy

Azure Storage Account is incorrect because a storage account already exists in the subscription – you need one before you can create an Azure file share. Instead of creating another storage account, you should set up a Hyper-V site.

Azure ExpressRoute is incorrect because this service is simply used to establish a private connection between your on-premises data center or corporate network to your Azure cloud infrastructure. It does not have the capability to replicate the Hyper-V server to Azure.

VNet Service Endpoint is incorrect because this option will only remove public internet access to resources and allow traffic only from your virtual network. Remember that the main requirement is to replicate the Hyper-V server to Azure. Therefore, this option wouldn’t satisfy the requirement.
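
To make the first step above (creating the Recovery Services vault) concrete, here is a rough sketch using the Azure SDK for Python. It assumes the azure-mgmt-recoveryservices package; the resource names, region, and subscription ID are placeholders, and operation names can vary slightly between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.recoveryservices import RecoveryServicesClient

credential = DefaultAzureCredential()
rs_client = RecoveryServicesClient(credential, "<subscription-id>")

# Step 1 of the Site Recovery setup: create the Recovery Services vault.
rs_client.vaults.begin_create_or_update(
    "td-boracay-rg",       # placeholder resource group
    "td-hyperv-vault",     # placeholder vault name
    {
        "location": "southeastasia",
        "sku": {"name": "Standard"},
        "properties": {},
    },
).result()
```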

6
Q

Your company has five branch offices and a Microsoft Entra ID to centrally manage all identities and application access.

You have been tasked with granting permission to local administrators to manage users and groups within their scope.

What should you do?

A. Create an administrative unit.
B. Assign a Microsoft Entra role.
C. Assign an Azure role.
D. Create management groups.

A

A. Create an administrative unit.

Explanation:
Microsoft Entra ID is a cloud-based identity and access management service that enables your employees to access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.

Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.

For more granular administrative control in Microsoft Entra ID, you can assign a Microsoft Entra role with a scope limited to one or more administrative units.

Administrative units limit a role’s permissions to any portion of your organization that you define. You could, for example, use administrative units to delegate the Helpdesk Administrator role to regional support specialists, allowing them to manage users only in the region for which they are responsible.

Hence, the correct answer is: Create an administrative unit.

The option that says: Assign a Microsoft Entra role is incorrect because if you assign an administrative role to a user without scoping it to an administrative unit, the role applies to the entire directory.

The option that says: Create management groups is incorrect because management groups are just containers for organizing your resources and subscriptions. This option won't help you grant local administrators permission to manage users and groups.

The option that says: Assign an Azure role is incorrect because the requirement is to grant local administrators permission only in their respective offices. If you use an Azure role, the user will be able to manage other Azure resources. Therefore, you need to use administrative units so the administrators can only manage users in the region that they support.
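
For illustration, here is a hedged sketch of creating an administrative unit and scoping a directory role to it through the Microsoft Graph API (all IDs are tenant-specific placeholders; the azure-identity and requests packages are assumed):

```python
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Create an administrative unit for one branch office.
au = requests.post(
    "https://graph.microsoft.com/v1.0/directory/administrativeUnits",
    headers=headers,
    json={"displayName": "Branch-Office-1"},  # placeholder name
).json()

# Scope a directory role (e.g., User Administrator) to that unit for a
# local administrator. Both IDs must be looked up in your own tenant.
requests.post(
    f"https://graph.microsoft.com/v1.0/directory/administrativeUnits/{au['id']}/scopedRoleMembers",
    headers=headers,
    json={
        "roleId": "<user-administrator-role-object-id>",
        "roleMemberInfo": {"id": "<local-admin-user-object-id>"},
    },
).raise_for_status()
```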

7
Q

Your company has a web app hosted in an Azure virtual machine named TD-VM1.

You plan to create a backup of TD-VM1 but the backup pre-checks displayed a warning state.

What could be the reason?

A. The Recovery Services vault lock type is read-only.
B. The TD-VM1 data disk is unattached.
C. The status of TD-VM1 is deallocated.
D. The latest VM Agent is not installed in TD-VM1

A

D. The latest VM Agent is not installed in TD-VM1

Explanation:
Azure Virtual Machines is an on-demand, scalable computing service with usage-based pricing. More broadly, a virtual machine behaves like a server: it is a computer within a computer that provides the user the same experience they would have on the host operating system itself. To protect your data, you can use Azure Backup to create recovery points that can be stored in geo-redundant Recovery Services vaults.

A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations. These operations include taking on-demand backups, performing restores, and creating backup policies.

Backup Pre-Checks, as the name implies, check the configuration of your VMs for issues that may affect backups and aggregate this information so that you can view it directly from the Recovery Services Vault dashboard. It also provides recommendations for corrective measures to ensure successful file-consistent or application-consistent backups, wherever applicable.

Backup Pre-Checks are performed as part of your Azure VMs’ scheduled backup operations and result in one of the following states:

Passed: This state indicates that your VM's configuration is conducive to successful backups and no corrective action needs to be taken.
Warning: This state indicates one or more issues in VM’s configuration that might lead to backup failures and provides recommended steps to ensure successful backups. Not having the latest VM Agent installed, for example, can cause backups to fail intermittently and falls in this class of issues.
Critical: This state indicates one or more critical issues in the VM’s configuration that will lead to backup failures and provides required steps to ensure successful backups. A network issue caused due to an update to the NSG rules of a VM, for example, will fail backups as it prevents the VM from communicating with the Azure Backup service and falls in this class of issues.

As stated above, the reason why backup pre-checks displayed a warning state is because of the VM agent. The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time.

If you installed the agent manually or are deploying custom VM images, you will need to update the images manually to include the new VM agent at image creation time. To check for the Azure VM Agent on your machine, open Task Manager and look for a process named WindowsAzureGuestAgent.exe.

Hence, the correct answer is: The latest VM Agent is not installed in TD-VM1.

The option that says: The Recovery Services vault lock type is read-only is incorrect because you can't create a backup at all if the configured lock type is read-only. If you attempt to back up a virtual machine with such a resource lock, the operation won't be performed, and you'll be notified to remove the lock first.

The option that says: The TD-VM1 data disk is unattached is incorrect because you don’t need to attach a data disk to the virtual machine when creating a backup. To enable VM backup, you need to have a VM agent and Recovery Services vault.

The option that says: The status of TD-VM1 is deallocated is incorrect because you can still create a backup even if the status of your virtual machine is stopped (deallocated).
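
If you want to verify the agent programmatically, a minimal sketch with the Azure SDK for Python follows (the azure-mgmt-compute package is assumed; the resource group and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The instance view reports the guest agent's version and health.
view = compute_client.virtual_machines.instance_view("td-rg", "TD-VM1")
if view.vm_agent:
    print("VM agent version:", view.vm_agent.vm_agent_version)
    for status in view.vm_agent.statuses or []:
        print(status.code, "-", status.display_status)
else:
    print("No VM agent reported; expect a Warning state in backup pre-checks.")
```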

8
Q

Your company's eCommerce website is deployed in an Azure virtual machine named TD-BGC.

You created a backup of TD-BGC and implemented the following changes:

– Change the local admin password.

– Create and attach a new disk.

– Resize the virtual machine.

– Copy the log reports to the data disk.

You received an email that the admin restored TD-BGC using the replace existing configuration.

Which of the following options should you perform to bring back the changes in TD-BGC?

A. Create and attach a new disk.
B. Change the local admin password.
C. Copy the log reports to the data disk.
D. Resize the virtual machine.

A

C. Copy the log reports to the data disk.

Explanation:
Azure Backup is a cost-effective, secure, one-click backup solution that’s scalable based on your backup storage needs. The centralized management interface makes it easy to define backup policies and protect a wide range of enterprise workloads, including Azure Virtual Machines, SQL and SAP databases, and Azure file shares.

Azure Backup provides several ways to restore a VM:

Create a new VM – quickly creates and gets a basic VM up and running from a restore point.
Restore disk – restores a VM disk, which can then be used to create a new VM.
Replace existing – restore a disk, and use it to replace a disk on the existing VM.
Cross-Region (secondary region) – restore Azure VMs in the secondary region, which is an Azure paired region.

The restore configuration that is given in the scenario is the replace existing option. Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. The existing disks connected to the VM are replaced with the selected restore point.

The snapshot is copied to the vault, and retained in accordance with the retention policy. After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren’t needed.

Since you restore the VM using the backup data, the new disk won’t have a copy of the log reports. To bring back the changes in the TD-BGC virtual machine, you will need to copy the log reports to the disk.

Hence, the correct answer is: Copy the log reports to the data disk.

The option that says: Change the local admin password is incorrect because the new password will not be overridden by the old password when using the restore VM option. Therefore, you can use the updated password to connect to the machine via RDP.

The option that says: Create and attach a new disk is incorrect because the new disk does not contain the log reports. Instead of creating a new disk, you should attach the existing data disk that contains the log reports.

The option that says: Resize the virtual machine is incorrect because the only changes that are retained after rolling back are the VM size and the account password.

9
Q

Your company plans to store media assets in two Azure regions.

You are given the following requirements:

Media assets must be stored in multiple availability zones

Media assets must be stored in multiple regions

Media assets must be readable in the primary and secondary regions.

Which of the following data redundancy options should you recommend?

A. Locally redundant storage
B. Zone-redundant storage
C. Geo-redundant storage
D. Read-access geo-redundant storage

A

D. Read-access geo-redundant storage

Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region and is recommended for applications requiring high availability.
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.

Take note that one of the requirements states that the media assets must be readable in the primary and secondary regions. With geo-redundant storage, your media assets are stored in multiple availability zones and multiple regions, but read access will only be available in the secondary region if you or Microsoft initiates a failover from the primary region to the secondary region.

In order to have read access in the primary and secondary region at all times without having the need to initiate a failover, you need to recommend Read-access geo-redundant storage.

Hence, the correct answer is: Read-access geo-redundant storage.

Locally redundant storage is incorrect because the media assets will only be stored in one physical location.

Zone-redundant storage is incorrect. It only satisfies one requirement, which is to store the media assets in multiple availability zones. You still need to store your media assets in multiple regions, which ZRS is unable to do.

Geo-redundant storage is incorrect because the requirement states that you need read access to the primary and secondary regions. With GRS, the data in the secondary region isn’t available for read access. You can only have read access in the secondary region if a failover from the primary region to the secondary region is initiated by you or Microsoft.
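
As a sketch of the recommended option, here is how an RA-GRS account could be created with the Azure SDK for Python (the azure-mgmt-storage package is assumed; names, region, and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# RA-GRS: geo-replicated, with a read-only secondary endpoint that is
# available without initiating a failover.
storage_client.storage_accounts.begin_create(
    "td-media-rg",
    "tdmediastorage",
    {
        "location": "southeastasia",
        "kind": "StorageV2",
        "sku": {"name": "Standard_RAGRS"},
    },
).result()
```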

10
Q

Tutorials Dojo has a subscription named TDSub1 that contains the following resources:

[Image AZ104-D-17: table of the resources in TDSub1, including the virtual machine TDVM1 in the South East Asia region]

TDVM1 needs to connect to a newly created virtual network named TDNET1 that is located in Japan West.

What should you do to connect TDVM1 to TDNET1?

Solution: You create a network interface in TD1 in the South East Asia region.

Does this meet the goal?

A. No
B. Yes

A

A. No

Explanation:
A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. When creating a virtual machine using the Azure portal, the portal creates one network interface with default settings for you.

You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.

Remember these conditions and restrictions when it comes to network interfaces:

– A virtual machine can have multiple network interfaces attached but a network interface can only be attached to a single virtual machine.

– The network interface must be located in the same region and subscription as the virtual machine that it will be attached to.

– When you delete a virtual machine, the network interface attached to it will not be deleted.

– In order to detach a network interface from a virtual machine, you must shut down the virtual machine first.

– By default, the first network interface attached to a VM is the primary network interface. All other network interfaces in the VM are secondary network interfaces.

The solution proposed in the question is incorrect because the virtual network is not located in the same region as TDVM1. Take note that a virtual machine, virtual network, and network interface must be in the same region or location.

You need to first redeploy TDVM1 from the South East Asia region to the Japan West region, and then create and attach the network interface to TDVM1 in Japan West.

Hence, the correct answer is: No.
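
For illustration, a minimal sketch of creating the network interface in the correct region with the Azure SDK for Python (the azure-mgmt-network package is assumed; the resource group, subnet ID, and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The NIC must live in the same region as both TDNET1 and the VM,
# so it is created in Japan West after TDVM1 is redeployed there.
network_client.network_interfaces.begin_create_or_update(
    "td1-rg",
    "tdvm1-nic",
    {
        "location": "japanwest",
        "ip_configurations": [
            {"name": "ipconfig1", "subnet": {"id": "<tdnet1-subnet-resource-id>"}}
        ],
    },
).result()
```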

11
Q

You have the following public load balancers deployed in Davao-Subscription1.

TD1 - Standard
TD2 - Basic

You provisioned two groups of virtual machines containing 5 virtual machines each, where traffic must be load balanced to ensure it is evenly distributed.

Which of the following health probe protocols is not available for TD2?

A. HTTP
B. TCP
C. RDP
D. HTTPS

A

D. HTTPS

Explanation:
Azure Load Balancer provides a higher level of availability and scale by spreading incoming requests across virtual machines (VMs). A private load balancer distributes traffic to resources that are inside a virtual network: Azure restricts access to its frontend IP addresses, which are never directly exposed to an internet endpoint, so internal line-of-business applications can run in Azure and be accessed from within Azure or from on-premises resources.

Remember that although cheaper, load balancers with the Basic SKU have limited features compared to a Standard load balancer. Basic load balancers are only useful for testing in development environments; for production workloads, you need to upgrade your Basic load balancer to a Standard load balancer to fully utilize the features of Azure Load Balancer.

Take note that the health probes of a Basic load balancer support only the HTTP and TCP protocols.

Hence, the correct answer is: HTTPS.

HTTP and TCP are incorrect because these protocols are supported by health probes on a Basic load balancer.

RDP is incorrect because this protocol is not supported by Azure Load Balancer.
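
To make the SKU restriction concrete, here is a hedged sketch of defining an HTTPS health probe on a Standard load balancer with the Azure SDK for Python (the azure-mgmt-network package is assumed; names, region, and IDs are placeholders). The same probe definition would be rejected on a Basic load balancer such as TD2:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

lb_params = {
    "location": "southeastasia",
    "sku": {"name": "Standard"},  # HTTPS probes require the Standard SKU
    "frontend_ip_configurations": [
        {"name": "fe1", "public_ip_address": {"id": "<standard-public-ip-id>"}}
    ],
    "probes": [
        # On a Basic load balancer, only "Http" and "Tcp" are accepted here.
        {
            "name": "https-probe",
            "protocol": "Https",
            "port": 443,
            "request_path": "/health",
            "interval_in_seconds": 15,
            "number_of_probes": 2,
        }
    ],
}
network_client.load_balancers.begin_create_or_update("td-rg", "TD1", lb_params).result()
```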

12
Q

You have an Azure subscription that contains the following storage accounts:

TD1 - general-purpose v1 - Locally redundant storage
TD2 - general-purpose v1 - Geo redundant storage

There is a compliance requirement wherein the data in TD1 and TD2 must be available if a single availability zone in a region fails. The solution must minimize costs and administrative effort.

What should you do first?

A. Upgrade TD1 and TD2 to general-purpose v2
B. Upgrade TD1 and TD2 to zone-redundant storage
C. Configure lifecycle policy
D. Upgrade TD1 to geo-redundant storage

A

A. Upgrade TD1 and TD2 to general-purpose v2

Explanation:
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region and is recommended for applications requiring high availability.
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.

The main requirement is that you need to ensure the data in TD1 and TD2 are available if a single availability zone fails while minimizing costs and administrative effort.

Between the redundancy options, zone-redundant storage fits the requirement of protecting your data by copying the data synchronously across three Azure availability zones. So even if a single availability zone fails, you still have two availability zones that are available.

Remember, ZRS is not a supported redundancy option under general-purpose v1. The first thing you need to do is to upgrade your storage account to general-purpose v2 and then upgrade the replication type to ZRS.

Hence, the correct answer is: Upgrade TD1 and TD2 to general-purpose v2.

The option that says: Upgrade TD1 and TD2 to zone-redundant storage is incorrect because zone-redundant storage is not supported under general-purpose v1.

The option that says: Upgrade TD1 to geo-redundant storage is incorrect because one of the requirements is to minimize cost. With ZRS, you have already satisfied the data availability requirement.

The option that says: Configure lifecycle policy is incorrect because this is simply a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle.
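
As a sketch of that first step, upgrading both accounts to general-purpose v2 with the Azure SDK for Python might look like this (the azure-mgmt-storage package is assumed; the resource group and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# First step: upgrade the account kind from general-purpose v1 to v2.
# (The subsequent LRS/GRS -> ZRS change is a separate conversion request.)
for account in ("td1", "td2"):
    storage_client.storage_accounts.update(
        "td-rg",
        account,
        {"kind": "StorageV2", "access_tier": "Hot"},
    )
```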

13
Q

Your organization has a domain named tutorialsdojo.com.

You want to host your records in Microsoft Azure.

Which three actions should you perform?

A. Copy the Azure DNS NS records
B. Copy the Azure DNS A records
C. Create an Azure private DNS zone
D. Create an Azure public DNS zone
E. Update the Azure NS records to your domain registrar
F. Update the Azure A records to your domain registrar

A

A. Copy the Azure DNS NS records
D. Create an Azure public DNS zone
E. Update the Azure NS records to your domain registrar

Explanation:
Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.

Using custom domain names helps you to tailor your virtual network architecture to best suit your organization’s needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.

You can use Azure DNS to host your DNS domain and manage your DNS records.

Since you own tutorialsdojo.com through a domain name registrar, you can create a zone named tutorialsdojo.com in Azure DNS. As the owner of the domain, you can configure the name server (NS) records at your registrar so that internet users around the world are directed to your Azure DNS zone whenever they try to resolve tutorialsdojo.com.

The steps in registering your Azure public DNS records are:

Create your Azure public DNS zone
Retrieve name servers – Azure DNS gives name servers from a pool each time a zone is created.
Delegate the domain – Once the DNS zone gets created and you have the name servers, you’ll need to update the parent domain with the Azure DNS name servers.

Hence, the correct answers are:

– Create an Azure public DNS zone

– Update the Azure NS records to your domain registrar

– Copy the Azure DNS NS records

The options that say: Copy the Azure DNS A records and Update the Azure A records to your domain registrar are incorrect because you need to copy the name server records instead of the A records. An A record is a type of DNS record that points a domain to an IP address.

The option that says: Create an Azure private DNS zone is incorrect because this simply manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. The requirement states that the users must be able to access tutorialsdojo.com via the internet. You need to deploy an Azure public DNS zone instead.
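
For illustration, here is a minimal sketch of the first two steps with the Azure SDK for Python (the azure-mgmt-dns package is assumed; the resource group and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

dns_client = DnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create the public zone; Azure assigns name servers from its pool.
zone = dns_client.zones.create_or_update(
    "td-dns-rg", "tutorialsdojo.com", {"location": "global"}
)

# These are the NS records you copy to your domain registrar to delegate.
print(zone.name_servers)
```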

14
Q

You plan to deploy the following public IP addresses in your Azure subscription shown in the following table:

Name | SKU | Assignment
TD1 | Basic | Static
TD2 | Basic | Dynamic
TD3 | Standard | Static
TD4 | Standard | Dynamic

You need to associate a public IP address to a public Azure load balancer with an SKU of standard.
Which of the following IP addresses can you use?

A. TD1
B. TD3
C. TD3 and TD4
D. TD1 and TD2

A

B. TD3

Explanation:
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.

A public IP associated with a load balancer serves as an Internet-facing frontend IP configuration. The frontend is used to access resources in the backend pool. The frontend IP can be used for members of the backend pool to egress to the Internet.

Remember that the SKU of a load balancer and the SKU of its public IP address must match, meaning if you have a load balancer with a Standard SKU, you must provision a public IP address with a Standard SKU as well.

Hence, the correct answer is: TD3.

The options that say: TD1 and TD1 and TD2 are incorrect because TD1 and TD2 both have a Basic SKU. You must provision a public IP address with a Standard SKU so you can associate it with a Standard public load balancer.

The option that says: TD3 and TD4 is incorrect because you can only create a standard public IP address with an assignment of static.
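
As a sketch of a valid configuration, creating a TD3-style public IP with the Azure SDK for Python could look like this (the azure-mgmt-network package is assumed; names, region, and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A Standard public IP (like TD3) must use the Static assignment and
# matches the SKU of a Standard public load balancer.
network_client.public_ip_addresses.begin_create_or_update(
    "td-rg",
    "td3-pip",
    {
        "location": "southeastasia",
        "sku": {"name": "Standard"},
        "public_ip_allocation_method": "Static",
    },
).result()
```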

15
Q

For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.

  1. You can rehydrate blob data in the archive tier instantly.
  2. You can rehydrate blob data in the archive tier without costs.
  3. You can access your blob data that is in the archive tier.
A
  1. No
  2. No
  3. No

Explanation:
Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers include:

Hot – Optimized for storing data that is accessed frequently.

Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).

While a blob is in the archive access tier, it’s considered offline and can’t be read or modified. The blob metadata remains online and available, allowing you to list the blob and its properties. Reading and modifying blob data is only available with online tiers such as hot or cool.

To read data in archive storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and can take hours to complete.

A rehydration operation with Set Blob Tier is billed for data read transactions and data retrieval size. High-priority rehydration has higher operation and data retrieval costs compared to standard priority. High-priority rehydration shows up as a separate line item on your bill.

The statement that says: You can rehydrate blob data in the archive tier without costs is incorrect. You are billed for data read transactions and data retrieval size (per GB).

The statement that says: You can rehydrate blob data in the archive tier instantly is incorrect. Rehydrating a blob from the archive tier can take several hours to complete.

The statement that says: You can access your blob data that is in the archive tier is incorrect because blob data stored in the archive tier is considered to be offline and can't be read or modified.
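
For illustration, here is a hedged sketch of rehydrating an archived blob with the azure-storage-blob package (the connection string, container, and blob names are placeholders):

```python
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<storage-connection-string>", "media", "asset.mp4"
)

# Rehydrate the archived blob to an online tier; this can take hours
# and is billed for read transactions and data retrieval.
blob.set_standard_blob_tier("Hot", rehydrate_priority="Standard")

# While the copy is pending, the blob reports its rehydration status.
print(blob.get_blob_properties().archive_status)  # e.g. "rehydrate-pending-to-hot"
```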

16
Q

You deployed an Ubuntu server using an Azure virtual machine.

You need to monitor the system performance metrics and log events.

Which of the following options would you use?

A. Azure Performance Diagnostics VM Extension
B. Boot diagnostics
C. Connection monitor
D. Linux Diagnostic Extension

A

D. Linux Diagnostic Extension

Explanation:
Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. It collects guest metrics into Azure Monitor Metrics and sends guest logs and metrics to Azure storage for archiving.

Azure Performance Diagnostics VM Extension helps collect performance diagnostic data from Windows VMs. The extension performs analysis and provides a report of findings and recommendations to identify and resolve performance issues on the virtual machine.

The Linux Diagnostic Extension will help you monitor the health of a Linux VM running on Microsoft Azure. It has the following capabilities:

– Collects system performance metrics from the VM and stores them in a specific table in a designated storage account.

– Retrieves log events from syslog and stores them in a specific table in the designated storage account.

– Enables users to customize the data metrics that are collected and uploaded.

– Enables users to customize the syslog facilities and severity levels of events that are collected and uploaded.

– Enables users to upload specified log files to a designated storage table.

– Supports sending metrics and log events to arbitrary EventHub endpoints and JSON-formatted blobs in the designated storage account.

With this extension, you can now monitor the system performance metrics and log events of the virtual machine.

Hence, the correct answer is: Linux Diagnostic Extension.

Azure Performance Diagnostics VM Extension is incorrect because this extension only collects performance diagnostic data from Windows VMs.

Boot diagnostics is incorrect because this feature is primarily used to diagnose VM boot failures and not for monitoring the system performance metrics and log events.

Connection monitor is incorrect because this is simply used for end-to-end connection monitoring.
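
For illustration, here is a rough sketch of deploying the Linux Diagnostic Extension with the Azure SDK for Python (the azure-mgmt-compute package is assumed; names, region, subscription ID, and the abbreviated LAD settings are placeholders, and the handler version may differ in your environment):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Abbreviated placeholders: a real deployment needs a full ladCfg section
# describing which metrics and syslog facilities to collect.
lad_settings = {"StorageAccount": "<diag-storage-account>"}
lad_protected = {
    "storageAccountName": "<diag-storage-account>",
    "storageAccountSasToken": "<sas-token>",
}

compute_client.virtual_machine_extensions.begin_create_or_update(
    "td-rg",
    "td-ubuntu",
    "LinuxDiagnostic",
    {
        "location": "southeastasia",
        "publisher": "Microsoft.Azure.Diagnostics",
        "type_properties_type": "LinuxDiagnostic",
        "type_handler_version": "4.0",
        "auto_upgrade_minor_version": True,
        "settings": lad_settings,
        "protected_settings": lad_protected,
    },
).result()
```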

17
Q

You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.

Solution: Configure a packet capture in Azure Network Watcher.

Does the solution meet the goal?

A. Yes
B. No

A

A. Yes

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.

With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.

The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.

Hence, the correct answer is: Yes.
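
For illustration, a minimal sketch of starting such a capture with the Azure SDK for Python (the azure-mgmt-network package is assumed; the resource IDs are placeholders, and the Network Watcher name follows the default per-region naming):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Network Watcher instances live in the "NetworkWatcherRG" resource group
# by default, one per region; the target VM needs the Network Watcher extension.
network_client.packet_captures.begin_create(
    "NetworkWatcherRG",
    "NetworkWatcher_southeastasia",
    "td-capture",
    {
        "target": "<vm-resource-id>",
        "time_limit_in_seconds": 3600,  # the required 1-hour window (max 18000 = 5 hours)
        "storage_location": {"storage_id": "<storage-account-resource-id>"},
    },
).result()
```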

18
Q

You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.

Solution: Create a connection monitor in Azure Network Watcher.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.

With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.

The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.

The solution provided is to set up a Connection Monitor in Azure Network Watcher. Connection Monitor’s primary use case is to track connectivity between your on-premises setups and the Azure VMs/virtual machine scale sets that host your cloud application. You cannot use this feature to capture packets to and from your virtual machines in a virtual network because it is not supported.

Hence, the correct answer is: No.

19
Q

You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.

Solution: Use IP flow verify in Azure Network Watcher.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.

With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.

The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.

The provided solution is to use IP flow verify in Azure Network Watcher. The main use case of IP flow verify is to determine whether a packet to or from a virtual machine is allowed or denied based on 5-tuple information and not to capture packets from your virtual machines for a period of 3600 seconds or 1 hour.

Hence, the correct answer is: No.

20
Q

A company deployed a Grafana image in Azure Container Apps with the following configurations:

Resource Group: tdrg-grafana

Region: Canada Central

Zone Redundancy: Disabled

Virtual Network: Default

IP Restrictions: Allow

The container’s public IP address was provided to development teams in the East US region to allow users access to the dashboard. However, you received a report that users can’t access the application.

Which of the following options allows users to access Grafana with the least amount of configuration?

A. Disable IP Restrictions.
B. Move the container app to the East US Region.
C. Configure ingress to generate a new endpoint.
D. Add a custom domain and certificate.

A

C. Configure ingress to generate a new endpoint.

Explanation:
With Azure Container Apps ingress, you can make your container application accessible to the public internet, VNET, or other container apps within your environment. This eliminates the need to create an Azure Load Balancer, public IP address, or any other Azure resources to handle incoming HTTPS requests. Each container app can have unique ingress configurations. For instance, one container app can be publicly accessible while another can only be reached within the Container Apps environment.

The problem with the given scenario is that users are accessing the public IP address even though the ingress setting was not enabled during the creation of the container app. When you configure the ingress and target port and save, the app generates a new endpoint based on the ingress traffic that you've selected. When you then access the application URL, you are directed to the target port of the container image.

Hence, the correct answer is: Configure ingress to generate a new endpoint.

The option that says: Move the container app to the East US Region is incorrect because you can't move a container app to a different region.

The option that says: Disable IP Restrictions is incorrect because this still won't help users access the Grafana app. Rather than changing IP restrictions, you only need to enable ingress and set the target port.

The option that says: Add a custom domain and certificate is incorrect because even if you add a custom domain name, you still won't be able to access the application, since additional configuration is needed to allow VNET-scope ingress. The quickest option with the least amount of configuration is to enable ingress and use the generated application URL.
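
For illustration only, enabling ingress programmatically might look roughly like the sketch below, assuming the azure-mgmt-appcontainers package (model shapes and operation names vary by package version, and port 3000 is simply Grafana's default):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient

client = ContainerAppsAPIClient(DefaultAzureCredential(), "<subscription-id>")

# Enable external ingress on the Grafana app so it gets a public endpoint.
client.container_apps.begin_update(
    "tdrg-grafana",
    "grafana",  # placeholder container app name
    {"configuration": {"ingress": {"external": True, "target_port": 3000}}},
).result()
```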

21
Q

You have been tasked with replicating the current state of your resources in order to automate future deployments when a new feature needs to be added to the application.

Which of the following should you do?

A. Capture an image of a VM.
B. Use the resource group export template.
C. Redeploy and reapply a VM.
D. Create a VM with preset configurations.

A

B. Use the resource group export template.

Explanation:
Azure Resource Manager (ARM) templates are a feature of Microsoft Azure that allows you to provision, manage, and delete Azure resources using declarative syntax. These templates can be used to deploy and manage resources such as virtual machines, storage accounts, and virtual networks in a consistent and reliable manner. To deploy a template, you can use the Azure Portal, Azure CLI, or Azure PowerShell.

In this scenario, you need to use ARM export templates to replicate the current state of your resources. This means that if you need to redeploy all your resources, you can use a reusable template instead of going through the manual creation of each resource. You can export a single resource or a whole resource group. Based on the given requirements, you just need to capture all resources and then export the resource group as a template.

Hence, the correct answer is: Use the resource group export template.

The option that says: Capture an image of a VM is incorrect because this just creates a snapshot of the virtual machine's configuration. Take note that you need to capture the current state of all resources, so the export template is what eases their re-creation.

The option that says: Redeploy and reapply a VM is incorrect because redeploying the VM just migrates it to a new Azure host, while reapply is used to resolve a VM stuck in a failed state. Neither feature helps capture the current state of your resources.

The option that says: Create a VM with preset configurations is incorrect because this only helps you choose a VM based on your workload type and environment.
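
For illustration, here is a minimal sketch of exporting a resource group's template with the Azure SDK for Python (the azure-mgmt-resource package is assumed; the resource group and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

resource_client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Export every resource in the group as a reusable ARM template.
poller = resource_client.resource_groups.begin_export_template(
    "td-app-rg",
    {"resources": ["*"], "options": "IncludeParameterDefaultValue"},
)
template = poller.result().template  # JSON you can commit and redeploy later
```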

22
Q

You have the following storage accounts in your Azure subscription.

Name | Account kind | Storage service
mystorage1 | General-purpose v1 | File
mystorage2 | BlobStorage | Blob
mystorage3 | General-purpose v2 | File, Table
mystorage4 | General-purpose v2 | Queue

There is a requirement to export the data from your subscription using the Azure Import/Export service.

Which Azure Storage account can you use to export the data?

A. mystorage2
B. mystorage1
C. mystorage3
D. mystorage4

A

A. mystorage2

Explanation:
Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.

Consider using Azure Import/Export service when uploading or downloading data over the network is too slow, or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:

Data migration to the cloud: Move large amounts of data to Azure quickly and cost-effectively.

Content distribution: Quickly send data to your customer sites.

Backup: Take backups of your on-premises data to store in Azure Storage.

Data recovery: Recover a large amount of data stored in the storage and have it delivered to your on-premises location.

Azure Import/Export service allows data transfer into Azure Blobs and Azure Files by creating jobs. Use the Azure portal or Azure Resource Manager REST API to create jobs. Each job is associated with a single storage account. This service only supports export of Azure Blobs. Export of Azure files is not supported.

The jobs can be import or export jobs. An import job allows you to import data into Azure Blobs or Azure files, whereas the export job allows data to be exported from Azure Blobs. For an import job, you ship drives containing your data. When you create an export job, you ship empty drives to an Azure datacenter. In each case, you can ship up to 10 disk drives per job.

Hence, the correct answer is: mystorage2.

mystorage1 is incorrect because an export job does not support Azure Files. The Azure Import/Export service only supports export of Azure Blobs.

mystorage3 and mystorage4 are incorrect because the Queue and Table storage services are simply not supported by the Azure Import/Export service.

23
Q

Your company has an Azure subscription that contains a storage account named tdstorageaccount1 and a virtual network named TDVNET1 with an address space of 192.168.0.0/16.

You have a user that needs to connect to the storage account from her workstation which has a public IP address of 131.107.1.23.

You need to ensure that the user is the only one who can access tdstorageaccount1.

Which two actions should you perform? Each correct answer presents part of the solution.

A. From the networking settings, enable TDVnet1 under Firewalls and virtual networks.
B. From the networking settings, select service endpoint under Firewalls and virtual networks.
C. From the networking settings, select “Allow trusted Microsoft services to access this storage account” under Firewalls and virtual networks.
D. Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.
E. Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.

A

D. Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.
E. Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.

Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including Internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific VNets. You can also configure rules to grant access to traffic from selected public Internet IP address ranges, enabling connections from specific Internet or on-premises clients. This configuration enables you to build a secure network boundary for your applications.

To whitelist a public IP address, you must:

  1. Go to the storage account you want to secure.
  2. Select the settings menu called Networking.
  3. Under Firewalls and virtual networks, select Selected networks.
  4. Under firewall, add the public IP address then save.

Hence, the following statements are correct:

– Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.

– Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.

The statement that says: From the networking settings, enable TDVnet1 under Firewalls and virtual networks is incorrect because allowing TDVnet1 will not let the user connect to tdstorageaccount1. The requirement states that the user's workstation must have access to tdstorageaccount1, and the workstation connects from the public internet rather than from within TDVnet1.

The statement that says: From the networking settings, select service endpoint under Firewalls and virtual networks is incorrect because it only allows you to create network rules that allow traffic only from selected VNets and subnets, which creates a secure network boundary for their data. Service endpoints only extend your VNet private address space and identity to the Azure services, over a direct connection.

The statement that says: From the networking settings, select Allow trusted Microsoft services to access this storage account under Firewalls and virtual networks is incorrect because this simply grants a subset of trusted Azure services access to the storage account, while maintaining network rules for other apps. These trusted services will then use strong authentication to securely connect to your storage account but won’t restrict access to a particular subnetwork or IP address.
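
For illustration, the two correct actions can be expressed in one call with the Azure SDK for Python; here is a hedged sketch (the azure-mgmt-storage package is assumed; the resource group and subscription ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# "Selected networks" plus a single whitelisted client IP.
storage_client.storage_accounts.update(
    "td-rg",
    "tdstorageaccount1",
    {
        "network_rule_set": {
            "default_action": "Deny",  # equivalent to "Selected networks" in the portal
            "ip_rules": [{"ip_address_or_range": "131.107.1.23", "action": "Allow"}],
        }
    },
)
```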

24
Q

Your company is currently hosting a mission-critical application in an Azure virtual machine that resides in a virtual network named TDVnet1. You plan to use Azure ExpressRoute to allow the web applications to connect to the on-premises network.

Due to compliance requirements, you need to ensure that in the event your ExpressRoute fails, the connectivity between TDVnet1 and your on-premises network will remain available.

The solution must utilize a site-to-site VPN between TDVnet1 and the on-premises network. The solution should also be cost-effective.

Which three actions should you implement? Each correct answer presents part of the solution.

A. Configure a local network gateway.
B. Configure a connection.
C. Configure a VPN gateway with Basic as its SKU.
D. Configure a gateway subnet.
E. Configure a VPN gateway with VpnGw1 as its SKU.

A

A. Configure a local network gateway.
B. Configure a connection.
E. Configure a VPN gateway with VpnGw1 as its SKU.

Explanation:
A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.

A site-to-site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it.

Configuring Site-to-Site VPN and ExpressRoute coexisting connections has several advantages:

– You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute.

– Alternatively, you can use Site-to-Site VPNs to connect to sites that are not connected through ExpressRoute.

To create a site-to-site connection, you need to do the following:

– Provision a virtual network

– Provision a VPN gateway

– Provision a local network gateway

– Provision a VPN connection

– Verify the connection

– Connect to a virtual machine

Take note that since you have already deployed an ExpressRoute, you do not need to create a virtual network and gateway subnet as these are prerequisites in creating an ExpressRoute.
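
A minimal Azure CLI sketch of the three answer components follows; the resource names, on-premises address prefix, device IP, and pre-shared key are all assumed placeholders:

  # VPN gateway with the VpnGw1 SKU (Basic is not supported for coexistence)
  az network vnet-gateway create \
      --name TDVnet1-vpn-gw \
      --resource-group td-rg \
      --vnet TDVnet1 \
      --public-ip-addresses td-vpn-gw-pip \
      --gateway-type Vpn \
      --vpn-type RouteBased \
      --sku VpnGw1

  # Local network gateway representing the on-premises VPN device
  az network local-gateway create \
      --name td-local-gw \
      --resource-group td-rg \
      --gateway-ip-address 203.0.113.10 \
      --local-address-prefixes 10.10.0.0/16

  # Connection that ties the two gateways together
  az network vpn-connection create \
      --name td-s2s-conn \
      --resource-group td-rg \
      --vnet-gateway1 TDVnet1-vpn-gw \
      --local-gateway2 td-local-gw \
      --shared-key <pre-shared-key>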

Hence, the correct answers are:

– Configure a VPN gateway with a VpnGw1 SKU.

– Configure a local network gateway.

– Configure a connection.

The option that says: Configure a gateway subnet is incorrect. As you already have an ExpressRoute connecting to your on-premises network, this means that a gateway subnet is already provisioned.

The option that says: Configure a VPN gateway with Basic as its SKU is incorrect. Although one of the requirements is to minimize costs, the coexisting connection for ExpressRoute and site-to-site VPN connection does not support a Basic SKU. The bare minimum for a coexisting connection is VpnGw1.

25
Q

You have an Azure subscription that has a virtual network named TDVNet1 that contains 2 subnets: TDSubnet1 and TDSubnet2.

You have two virtual machines shown in the following table:

Name | Operating system | Subnet
TD1 | Windows Server 2019 | TDSubnet1
TD2 | Windows Server 2019 | TDSubnet2

TD1 and TD2 each have a public IP address, and inbound Remote Desktop connections are allowed in the Windows Server 2019 operating system of both virtual machines.

Your subscription has two network security groups (NSGs) named TDSG1 and TDSG2.

TDSG1 is associated with TDSubnet1 and only uses the default rules.

TDSG2 is associated with the network interface of TD2. It uses the default rules and the following custom incoming rule:

Priority: 100

Name: RDP

Port: 3389

Protocol: TCP

Source: Any

Destination: Any

Action: Allow

For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.

  1. You can connect to TD2 from TD1 using Remote Desktop via Azure Bastion.
  2. You can connect to TD2 using Remote Desktop from the internet.
  3. You can connect to TD1 using Remote Desktop from the internet.
A
  1. Yes
  2. Yes
  3. No

Explanation:
Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

Based on the custom inbound rule listed above, there is a rule named RDP that allows Remote Desktop access (TCP port 3389) from the Internet. Since TDSG2 is associated with the network interface of TD2, it allows RDP access from the Internet. From a security standpoint, RDP access should not be exposed to the Internet; the best practice is to whitelist specific IP addresses to ensure that only traffic coming from your workstation can connect to the server.

Take note that you do not need to configure additional inbound rules to connect to TD2 from TD1. This is because the default rules of a network security group always allow traffic coming from within the virtual network where both virtual machines reside, as well as from all connected on-premises address spaces and connected Azure VNets (local networks).

Therefore, the following statements are correct:

– You can connect to TD2 using Remote Desktop from the internet.

– You can connect to TD2 from TD1 using Remote Desktop via Azure Bastion.

The statement that says: You can connect to TD1 using Remote Desktop from the internet is incorrect. Since TDSG1 only uses the default rules, it will not accept any incoming traffic coming from the Internet. Take note that default rules cannot be deleted, but because they are assigned the lowest priority, they can be overridden by the rules that you create.

26
Q

You have an Azure subscription named TD-Subscription1 that contains a load balancer that distributes traffic between 10 virtual machines using port 443.

There is a requirement wherein all traffic from Remote Desktop Protocol (RDP) connections must be forwarded to VM1 only.

What should you do to satisfy the requirement?

A. Create a health probe.
B. Create a load balancing rule.
C. Create a new load balancer for VM1.
D. Create an inbound NAT rule.

A

D. Create an inbound NAT rule.

Explanation:
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.

You need to create an inbound NAT rule to forward traffic from a specific port of the front-end IP address to a specific port of a back-end VM. The traffic is then sent to a specific virtual machine.

Take note that you can only have one virtual machine as the target virtual machine. A network security group (NSG) must be associated with VM1, with inbound rules explicitly allowing traffic on port 3389 from your IP address.
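
As a sketch (the load balancer, NIC, and resource group names are assumed), the NAT rule and its binding to VM1's network interface could be created like this:

  # Forward frontend port 3389 to port 3389 on the backend instance
  az network lb inbound-nat-rule create \
      --resource-group td-rg \
      --lb-name td-lb \
      --name rdp-to-vm1 \
      --protocol Tcp \
      --frontend-port 3389 \
      --backend-port 3389

  # Bind the NAT rule to VM1's NIC IP configuration
  az network nic ip-config inbound-nat-rule add \
      --resource-group td-rg \
      --nic-name vm1-nic \
      --ip-config-name ipconfig1 \
      --lb-name td-lb \
      --inbound-nat-rule rdp-to-vm1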

Hence, the correct answer is: Create an inbound NAT rule.

The option that says: Create a new load balancer for VM1 is incorrect because you do not need to create a new load balancer as you can simply use port forwarding or inbound NAT rule to forward RDP traffic to VM1.

The option that says: Create a load balancing rule is incorrect because this component only defines how incoming traffic is distributed to all the instances within the backend pool. Furthermore, it is mentioned in the scenario that you should direct RDP traffic to VM1 only.

The option that says: Create a health probe is incorrect because it is just used to determine the health status of the instances in the backend pool. This health probe will determine if an instance is healthy and can receive traffic.

27
Q

Your organization has a standard general-purpose v2 storage account with an access tier of Hot. The files uploaded to the storage account are infrequently accessed by your colleagues.

You were tasked with modifying the storage account with the following requirements:

Inactive data must automatically transition to the archive tier after 120 days.

Data uploaded must be accessed instantly, provided that it has not been transitioned to the archive tier yet.

Minimize costs.

Minimize administrative effort.

Which two actions should you perform? Choose two.

A. Create an Azure Function to move the inactive data to the archive tier after 120 days of inactivity.
B. Set the default access tier of the storage account to the Cool tier.
C. Automatically archive data on upload.
D. Manually copy the inactive data using the Copy Blob operation to the archive tier after 120 days of inactivity.
E. Set the default access tier of the storage account to the Archive tier.
F. Create a lifecycle management rule to move the inactive data to the Archive tier after 120 days of inactivity.

A

B. Set the default access tier of the storage account to the Cool tier.
F. Create a lifecycle management rule to move the inactive data to the Archive tier after 120 days of inactivity.

Explanation:
Data sets have unique lifecycles. Early in the lifecycle, people access some data often. But the need for access often drops drastically as the data ages. Some data remains idle in the cloud and is rarely accessed once stored.

Some data sets expire days or months after creation, while other data sets are actively read and modified throughout their lifetimes. Azure Storage lifecycle management offers a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle.

Storage accounts have a default access tier setting that indicates in which online tier a new blob is created. The default access tier setting can be either hot or cool only; the exact behavior of this setting differs slightly depending on the type of storage account.

Since the scenario states that your colleagues infrequently access the data, this means that you do not need to store your data in the Hot tier. You can park the data in the Cool tier and automatically transition it to Archive using the data lifecycle.
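
A sketch of both actions with the Azure CLI follows; the account and resource group names are assumed, and policy.json holds the lifecycle rule that the second command applies:

  # Set the default access tier to Cool
  az storage account update \
      --name tdstorage \
      --resource-group td-rg \
      --access-tier Cool

  # policy.json: archive block blobs not modified for 120 days
  {
    "rules": [
      {
        "enabled": true,
        "name": "archive-after-120-days",
        "type": "Lifecycle",
        "definition": {
          "actions": {
            "baseBlob": {
              "tierToArchive": { "daysAfterModificationGreaterThan": 120 }
            }
          },
          "filters": { "blobTypes": [ "blockBlob" ] }
        }
      }
    ]
  }

  # Apply the lifecycle management rule
  az storage account management-policy create \
      --account-name tdstorage \
      --resource-group td-rg \
      --policy @policy.json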

Hence, the correct answers are:

– Create a lifecycle management rule to move the inactive data to the Archive tier after 120 days of inactivity.

– Set the default access tier of the storage account to the Cool tier.

The statement that says: Create an Azure Function to move the inactive data to the archive tier after 120 days of inactivity is incorrect because you can achieve the same goal using lifecycle management. Remember that one of the requirements is minimizing administrative effort.

The statement that says: Set the default access tier of the storage account to the Archive tier is incorrect because the supported default access tiers for storage accounts are Hot and Cool tiers. What you can do is move the data to the cool tier if your data is infrequently accessed and then create a lifecycle policy to transition unmodified data to the Archive tier after a set amount of time.

The statement that says: Manually copy the inactive data using the Copy Blob operation to the archive tier after 120 days of inactivity is incorrect because manually copying the inactive data to the Archive tier is a tedious task if you have thousands of data. One of the requirements states that you must lessen the administrative effort. Use lifecycle management instead.

The statement that says: Automatically archive data on upload is incorrect because one of the requirements states that data not yet in the archive tier must be accessible instantly. Data in the Archive tier takes hours to rehydrate before you can access it.

28
Q

You have an Azure subscription that contains an Azure File Share named TDShare1 that contains sensitive data.

You want to ensure that only authorized users can access this data for compliance requirements, and users must only have access to specific files and folders.

You registered TDShare1 to use AD DS authentication and Microsoft Entra Connect sync for specific AD user access.

You need to give your active directory users access to TDShare1.

What should you do?

A. Enable anonymous access to the storage account.
B. Create a shared access signature (SAS) with a stored access policy.
C. Configure role-based access control (RBAC).
D. Use the storage account access keys for authentication.

A

C. Configure role-based access control (RBAC).

Explanation:
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files SMB file shares are accessible from Windows, Linux, and macOS clients. Azure Files NFS file shares are accessible from Linux or macOS clients. Additionally, Azure Files SMB file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.

Once you’ve enabled an Active Directory (AD) source for your storage account, you must configure share-level permissions in order to get access to your file share. There are two ways you can assign share-level permissions. You can assign them to specific Microsoft Entra users/groups, and you can assign them to all authenticated identities as a default share-level permission.

Since we are handling sensitive data, we want our users to be able to access files that they are only allowed to. Due to this, we need to assign specific Microsoft Entra users or groups to access Azure file share resources.

In order for share-level permissions to work for specific Microsoft Entra users or groups, you must:

  1. Sync the users and the groups from your local AD to Microsoft Entra ID using either the on-premises Microsoft Entra Connect sync application or Microsoft Entra Connect cloud sync.
  2. Add the AD-synced groups to an RBAC role so they can access your storage account (see the sketch below).
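
For example, a share-level role could be assigned with the Azure CLI; the group object ID and the scope segments in angle brackets are placeholders:

  az role assignment create \
      --role "Storage File Data SMB Share Contributor" \
      --assignee-object-id <entra-group-object-id> \
      --assignee-principal-type Group \
      --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default/fileshares/tdshare1"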

Hence, the correct answer is: Configure role-based access control (RBAC).

The option that says: Enable anonymous access to the storage account is incorrect as it allows anyone to access the storage account and its contents without authentication.

The option that says: Create a shared access signature (SAS) with a stored access policy is incorrect because while SAS tokens can provide limited access to a storage account, they are not a suitable authentication mechanism for controlling access to sensitive data.

The option that says: Use the storage account access keys for authentication is incorrect because storage account keys provide full control over the storage account, which means that anyone with the key can perform any operation on the storage account. This makes them a less secure option, especially for sensitive data that requires fine-grained access control.

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://learn.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-assign-permissions

29
Q

TD1 is unable to connect to TD4 via port 443. You need to troubleshoot why the communication between the two virtual machines is failing.

Which two features should you use?

A. Effective security rules
B. Azure Diagnostics
C. Connection troubleshoot
D. Log Analytics
E. VPN troubleshoot
F. IP flow verify

A

C. Connection troubleshoot
F. IP flow verify

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.

Connection troubleshoot helps reduce the amount of time to diagnose and troubleshoot network connectivity issues. The results returned can provide insights about the root cause of the connectivity problem and whether it’s due to a platform or user configuration issue.

Connection troubleshoot reduces the Mean Time To Resolution (MTTR) by providing a comprehensive method of performing all connection major checks to detect issues pertaining to network security groups, user-defined routes, and blocked ports.

IP flow verify checks if a packet is allowed or denied to or from a virtual machine. If the packet is denied by a security group, the name of the rule that denied the packet is returned.

IP flow verify looks at the rules for all Network Security Groups (NSGs) applied to the network interface, such as a subnet or virtual machine NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming if a rule in a Network Security Group is blocking ingress or egress traffic to or from a virtual machine.
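
Both checks can also be run from the Azure CLI; the private IP addresses and resource group below are assumed, and connection troubleshoot requires the Network Watcher agent extension on the source VM:

  # Verify whether an NSG rule blocks TD1's outbound traffic to TD4 on port 443
  az network watcher test-ip-flow \
      --resource-group td-rg \
      --vm TD1 \
      --direction Outbound \
      --protocol TCP \
      --local 10.0.0.4:60000 \
      --remote 10.0.1.4:443

  # Run connection troubleshoot between the two virtual machines
  az network watcher test-connectivity \
      --resource-group td-rg \
      --source-resource TD1 \
      --dest-resource TD4 \
      --dest-port 443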

Therefore, the correct answers are:

– Connection troubleshoot

– IP flow verify

Effective security rules is incorrect because this simply allows you to see all inbound and outbound security rules that apply to a virtual machine’s network interface. This is also used for security compliance and auditing.

Azure Diagnostics is incorrect because it is an agent in Azure Monitor that collects monitoring data from the guest operating system of Azure compute resources, including virtual machines.

Log Analytics is incorrect because this is just a tool to edit and run log queries from data collected by Azure Monitor logs and interactively analyze their results.

VPN troubleshoot is incorrect because this only provides the capability to troubleshoot virtual network gateways and their connections. This is primarily used for diagnosing the traffic between your on-premises resources and Azure virtual networks.

30
Q

Your organization has an AKS cluster that hosts several microservices as Kubernetes deployments. During peak hours, one of the deployments experiences high traffic, resulting in longer response times and occasional failures.

You plan to implement horizontal pod autoscaling to scale the deployment based on traffic.

What should you do?

A. Install Kubernetes Dashboard, then define an HPA object in the manifest file and set the desired min and max number of replicas in a deployment.
B. Install Azure Monitor for Containers agent, then define a VPA object in the manifest file and set the desired min and max number of replicas in a deployment.
C. Install AKS cluster autoscaler, then define an HPA object in the manifest file and set the desired min and max number of replicas in a deployment.
D. Install Kubernetes Metrics Server, then define an HPA object in the manifest file and set the min and max number of replicas in a deployment.

A

D. Install Kubernetes Metrics Server, then define an HPA object in the manifest file and set the min and max number of replicas in a deployment.

Explanation:
Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance. When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for and manage the nodes attached to the AKS cluster.

The horizontal pod autoscaler (HPA) is used by Kubernetes to monitor resource demand and automatically scale the number of pods. The HPA checks the Metrics API for any required changes in replica count every 15 seconds by default, and the Metrics API retrieves data from the Kubelet every 60 seconds. As a result, the HPA is updated every 60 seconds. When changes are made, the number of replicas is increased or decreased.

The following steps should be taken to configure horizontal pod autoscaling (HPA) for the deployment:

  1. Install the Kubernetes Metrics Server to provide the HPA with metrics.
  2. Define a horizontal pod autoscaler object in the Kubernetes manifest file. This object specifies the deployment to scale, the minimum and maximum number of replicas, and the scaling metric (a sample manifest is shown below).
  3. Set the deployment’s minimum and maximum number of replicas. Based on the specified metric, these values determine the number of pods that the HPA feature can create or delete.
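
A minimal sketch, assuming the Metrics Server is installed and the deployment is named td-app (a hypothetical name): the HPA object below scales between 3 and 10 replicas, targeting 70% average CPU utilization:

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: td-app-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: td-app
    minReplicas: 3
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70

Apply it with kubectl apply -f td-app-hpa.yaml.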

Hence, the correct answer is: Install Kubernetes Metrics Server, then define an HPA object in the manifest file and set the min and max number of replicas in a deployment.

The option that says: Install Kubernetes Dashboard, then define an HPA object in the manifest file and set the desired min and max number of replicas in a deployment is incorrect because the Kubernetes Dashboard does not provide HPA functionality. It is mainly used for deploying applications, creating and updating objects, and monitoring the health of the cluster.

The option that says: Install AKS cluster autoscaler, then define an HPA object in the manifest file and set the desired min and max number of replicas in a deployment is incorrect because the AKS cluster autoscaler scales the number of nodes in an AKS cluster rather than the number of replicas in a deployment.

The option that says: Install Azure Monitor for Containers agent, then define a VPA object in the manifest file and set the desired min and max number of replicas in a deployment is incorrect. Instead of scaling the number of replicas, vertical pod autoscaling (VPA) is used to adjust the resource allocation of individual pods based on their resource usage.

31
Q

Your company is currently running a mission-critical application in a primary Azure region.

You plan to implement a disaster recovery by configuring failover to a secondary region using Azure Site Recovery.

What should you do?

A. Create an RSV in the primary region, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region.
B. Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.
C. Create a virtual network and subnet in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.
D. Create an Azure Traffic Manager profile to load-balance traffic between the primary and secondary regions, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region.

A

B. Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.

Explanation:
Azure Site Recovery service contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business applications online during planned and unplanned outages. Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VM), including replication, failover, and recovery.

Enabling replication for a virtual machine (VM) for disaster recovery purposes involves installing the Site Recovery Mobility service extension on the VM and registering it with Azure Site Recovery. During replication, any disk writes from the VM are first sent to a cache storage account in the source region. Subsequently, the data is transferred to the target region, where recovery points are generated from it. During a disaster recovery failover of the VM, a recovery point is used to restore the VM in the target region.

Here’s how to set up disaster recovery for a VM with Azure Site Recovery:

First, you need to create a Recovery Services Vault (RSV) in the secondary region, which will serve as the target location for the VM during a failover.
Next, you need to install and configure the Azure Site Recovery agent on the VMs that you want to protect. The agent captures data changes on the VM disks and sends them to Azure Site Recovery for replication to the secondary region.
Once the replication is set up, you need to design a recovery plan that outlines the steps to orchestrate the failover and failback operations. This includes defining the order in which VMs should be failed over, any dependencies between VMs, and the desired recovery point objective (RPO) and recovery time objective (RTO) for each VM.
During replication, VM disk writes are sent to a cache storage account in the source region, and from there to the target region, where recovery points are generated from the data. In the event of a disaster or planned failover, a recovery point is used to restore the VM in the target region, allowing the business to continue operations without significant downtime or data loss.
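
As a small illustration of the first step, a Recovery Services vault can be created in the secondary region with the Azure CLI (the vault name, resource group, and region are assumed); replication settings and the recovery plan are then typically configured in the portal:

  # Create the Recovery Services vault in the secondary (target) region
  az backup vault create \
      --name td-rsv \
      --resource-group td-dr-rg \
      --location eastus2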

Hence, the correct answer is: Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.

The option that says: Create an RSV in the primary region, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region is incorrect because although this will replicate the data to the secondary region, it does not include the necessary steps to perform failover. You still need to create a Recovery Services vault in the secondary region, not the primary region, to perform failover.

The option that says: Create a virtual network and subnet in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations is incorrect because, just like the other options, you will still need to create a Recovery Services vault in the secondary region, install and configure the Azure Site Recovery agent on the virtual machines, and create a recovery plan to orchestrate failover and failback operations.

The option that says: Create an Azure Traffic Manager profile to load-balance traffic between the primary and secondary regions, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region is incorrect because this will just load-balance traffic between the primary and secondary regions but won’t be able to perform failover. You will still need to create a Recovery Services vault in the secondary region to perform failover.

32
Q

You have an Azure subscription containing an Azure virtual machine named Siargao with an assigned dynamic public IP address.
During routine maintenance, Siargao was deallocated and then started again.

The development team reports that their application hosted on Siargao has lost its connection with an external service. The external service whitelists the IP addresses allowed to access it. You suspect the public IP address has changed during the maintenance.

What should you do?

A. Attach multiple dynamic public IP addresses to Siargao.
B. Modify Siargao to use a static public IP address.
C. Enable an Azure VPN gateway for Siargao.
D. Provision an Azure NAT gateway to provide outbound internet connectivity.

A

B. Modify Siargao to use a static public IP address.

Explanation:
Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own data center but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.

Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses enable Azure resources to communicate with the Internet and public-facing Azure services. The address is dedicated to the resource until it’s unassigned by you. A resource without a public IP assigned can communicate outbound.

IP addresses in Azure can be either dynamic or static. By default, Azure assigns a dynamic IP address to the VM. When the VM is started, Azure assigns it an IP address, and when the VM is stopped (deallocated), that IP address is returned to the pool and can be assigned to a different VM. This means that when you stop and start a VM, it can get a different public IP address, which can cause problems if you have systems or services that rely on the specific IP address of that VM, such as an external service that whitelists specific IP addresses.

A static IP address, unlike a dynamic IP address, does not change when the VM is deallocated. Once a static IP address is assigned to a VM, that IP is reserved for the VM and won’t be assigned to any other VM, even when the original VM is stopped. This means the VM would keep the same IP address throughout its lifecycle, regardless of its state.

In this case, to solve the issue, we need to modify Siargao to use a static public IP address instead of a dynamic public IP address.
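
Assuming the VM's public IP resource is named siargao-pip (a hypothetical name), the allocation method can be switched with the Azure CLI:

  # Change the allocation method from Dynamic to Static
  az network public-ip update \
      --name siargao-pip \
      --resource-group td-rg \
      --allocation-method Static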

Hence, the correct answer is: Modify Siargao to use a static public IP address.

The statement that says: Enable an Azure VPN gateway for Siargao is incorrect. Azure VPN Gateway is used to establish secure, cross-premises connectivity between your virtual network within Azure and your on-premises network, but it doesn’t provide static public IP functionality for individual VMs.

The statement that says: Attach multiple dynamic public IP addresses to Siargao is incorrect because assigning multiple dynamic public IP addresses would not solve the issue, as these dynamic IP addresses can still change when the VM is deallocated.

The statement that says: Provision an Azure NAT gateway to provide outbound internet connectivity is incorrect because Azure NAT Gateway is a service that provides outbound-only internet connectivity for the VMs in your virtual network. However, it doesn’t help in maintaining the same public IP address of a VM during its deallocation and reallocation.

33
Q

Your organization has an Azure subscription that contains an AKS cluster running an older version of Kubernetes.

You have been assigned to upgrade the cluster to the latest stable version of Kubernetes.

What should you do?

A. Create a new AKS cluster with the desired Kubernetes version, migrate the application workloads from the old cluster to the new cluster, and then delete the old cluster.
B. Stop all workloads, scale down the cluster to zero nodes, delete the cluster, create a new AKS cluster, and redeploy the application workloads.
C. Plan and execute the upgrade by reviewing release notes, determining a maintenance window, and upgrading the AKS cluster via Azure Portal.
D. Run az aks get-upgrades in Azure CLI to upgrade the AKS cluster to the latest Kubernetes version.

A

C. Plan and execute the upgrade by reviewing release notes, determining a maintenance window, and upgrading the AKS cluster via Azure Portal.

Explanation:
Azure Kubernetes Service (AKS) is a managed container orchestration service provided by Microsoft Azure. It simplifies the deployment, management, and scaling of containerized applications using Kubernetes. AKS abstracts away the underlying infrastructure and handles the operational aspects of managing a Kubernetes cluster, allowing developers and DevOps teams to focus on deploying and managing their applications.

Periodic upgrades to the latest Kubernetes version are part of the AKS cluster lifecycle. It is critical that you apply the most recent security updates and upgrade to get the latest features. In Azure, you can upgrade a cluster using the Azure CLI, PowerShell, or the Azure portal.

AKS performs the following operations during the cluster upgrade process:

– Add a new buffer node to the cluster that runs the specified Kubernetes version (or as many nodes as configured in max surge).

– To minimize disruption to running applications, cordon and drain one of the old nodes. When you use max surge, it cordons and drains as many nodes as the number of buffer nodes you specify.

– When the old node is fully drained, it is reimaged to receive the new version and serves as the buffer node for the next node to be upgraded.

– This process is repeated until all cluster nodes have been upgraded.

– At the end of the process, the last buffer node is deleted while maintaining the existing agent node count.
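
A sketch of the CLI route (the cluster name, resource group, and target version are assumed):

  # List the Kubernetes versions the cluster can upgrade to
  az aks get-upgrades \
      --resource-group td-rg \
      --name td-aks \
      --output table

  # Upgrade the control plane and node pools during the maintenance window
  az aks upgrade \
      --resource-group td-rg \
      --name td-aks \
      --kubernetes-version <target-version>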

Hence, the correct answer is: Plan and execute the upgrade by reviewing release notes, determining a maintenance window, and upgrading the AKS cluster via Azure Portal.

The option that says: Run az aks get-upgrades in Azure CLI to upgrade the AKS cluster to the latest Kubernetes version is incorrect because this command does not upgrade the cluster; it only lists the upgrade versions available for a managed Kubernetes cluster.

The option that says: Stop all workloads, scale down the cluster to zero nodes, delete the cluster, create a new AKS cluster, and redeploy the application workloads is incorrect because deleting the cluster and redeploying all the application workloads would result in unnecessary downtime and resource loss, as well as potential issues in recreating the cluster and redeploying the applications.

The option that says: Create a new AKS cluster with the desired Kubernetes version, migrate the application workloads from the old cluster to the new cluster, and then delete the old cluster is incorrect because this approach would involve unnecessary complexity and downtime for migrating the workloads between clusters, which can be avoided by upgrading the existing cluster directly.

34
Q

You have an Azure subscription with a storage account named TD1. An external auditor has requested access to TD1 for a duration of 2 weeks.

You need to deploy a solution without compromising the integrity and security of your primary data access methods.

Which Azure feature would satisfy this?

A. Role-Based Access Control (RBAC)
B. Shared Access Signature (SAS)
C. Service Endpoints
D. Connection Strings

A

B. Shared Access Signature (SAS)

Explanation:
A shared access signature (SAS) is a URI that grants restricted access rights to Azure Storage resources. You can provide a shared access signature to clients who shouldn’t be trusted with your storage account key but who need access to certain storage account resources.

A shared access signature is a token that is appended to the URI of an Azure Storage resource. The token contains a special set of query parameters that indicate how the resource may be accessed by the client. One of the query parameters, the signature, is constructed from the SAS parameters and signed with the key that was used to create the SAS. Azure Storage uses this signature to authorize access to the storage resource.

With shared access signature (SAS), you have granular control over how a client can access your data. This makes it the ideal solution for this scenario. For example:

– What resources the client may access.

– What permissions they have to those resources.

– How long the SAS is valid.
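
For illustration, a container-scoped SAS valid until a fixed date might be generated like this (the container name and expiry date are assumed; the command signs with the account key from the environment or --account-key):

  # Grant read and list access to one container, over HTTPS only
  az storage container generate-sas \
      --account-name td1 \
      --name audit-data \
      --permissions rl \
      --expiry 2025-01-15T00:00Z \
      --https-only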

Hence, the correct answer is: Shared Access Signatures (SAS).

Role-Based Access Control (RBAC) is incorrect because this is a system that provides fine-grained access management to Azure resources. By using RBAC, you can assign specific permissions to users, groups, and applications at a certain scope.

Service Endpoints is incorrect because this feature simply provides secure and direct connectivity to Azure service resources from a virtual network. This feature ensures that Azure service traffic remains on the Azure backbone network.

Connection Strings is incorrect. Connection strings are a way to provide necessary information for applications to connect to various services, including databases or storage accounts. They typically contain the access keys, which you wouldn’t want to share with an external auditor if you’re trying to avoid sharing the primary or secondary keys.

35
Q

You are configuring a blob container’s access policy within an Azure storage account. You want to set multiple named access policies for fine-grained control and flexibility.

What is the maximum number of named access policies you can create for a blob container?

A. 5
B. 20
C. 10
D. 1

A

A. 5

Explanation:
A stored access policy provides an additional level of control over service-level shared access signatures (SASs) on the server side. Establishing a stored access policy serves to group shared access signatures and to provide additional restrictions for signatures that are bound by the policy.

With a container access policy, you can grant or revoke permissions for specific operations on blobs, such as read, write, delete, list, and more. The key benefit of using a container access policy is that it offers a more targeted and controlled approach to managing access to individual blobs within the container without the need to modify the storage account’s shared access signature (SAS) settings.

You can set a maximum of five access policies on a container, table, queue, or share at a time. Each SignedIdentifier field, with its unique Id field, corresponds to one access policy. Trying to set more than five access policies at one time causes the service to return status code 400 (Bad Request).
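
As a sketch (the container name, policy name, and dates are assumed), a named access policy is created on a container like this, and a SAS can then reference it via --policy-name:

  # Create one of up to five named access policies on the container
  az storage container policy create \
      --account-name td1 \
      --container-name td-container \
      --name read-only-policy \
      --permissions rl \
      --expiry 2025-01-31T00:00Z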

Hence, the correct answer is: 5.

36
Q

You are managing an Azure subscription that includes 150 virtual machines. These virtual machines are generated and terminated frequently as part of your operations. To optimize storage costs, you need to identify unattached disks that can be removed.

What action should you take?

A. Check the Cost Analysis from Azure Cost Management.
B. Enable diagnostic settings in Azure Monitor.
C. Explore the Account Management properties in Microsoft Azure Storage Explorer.
D. Go to Advisor Recommendations in Azure Cost Management.

A

D. Go to Advisor Recommendations in Azure Cost Management.

Explanation:
Azure Advisor provides customized best practices and suggestions for optimizing your Azure resources. It offers detailed advice on costs, security, dependability, operational excellence, and performance.

To identify unattached disks, Azure Advisor analyzes your environment and provides recommendations for disks that are not currently associated with any virtual machines (VMs). These unattached disks can be safely deleted to reduce storage costs. Viewing Advisor Recommendations within Azure Cost Management will give you insights into which disks are unattached and can be removed, helping you optimize your storage resources efficiently.
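
The same recommendations can also be listed from the Azure CLI, for example:

  # List cost recommendations, which include unattached (unused) disks
  az advisor recommendation list \
      --category Cost \
      --output table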

Hence, the correct answer is: Go to Advisor Recommendations in Azure Cost Management.

The option that says: Check the Cost Analysis from Azure Cost Management is incorrect because Cost Analysis only provides insights into overall spending and cost trends but does not specifically identify unattached disks. It is designed to help you understand where your money is going, not to identify unused or unnecessary resources.

The option that says: Enable diagnostic settings in Azure Monitor is incorrect. While diagnostic settings in Azure Monitor can provide valuable monitoring data, they do not specifically identify unattached disks. Diagnostic settings are aimed at performance and health monitoring rather than cost optimization for unattached disks.

The option that says: Explore the Account Management properties in Microsoft Azure Storage Explorer is incorrect because this tool is primarily used for managing storage accounts and exploring data stored in Azure. Although you can view and manage various properties of your storage accounts, it does not provide specific recommendations or insights on unattached disks. This tool is more suited for managing storage objects and data than identifying optimization opportunities.

References:

https://learn.microsoft.com/en-us/azure/advisor/advisor-cost-recommendations

https://learn.microsoft.com/en-us/azure/cost-management-billing/cost-management-billing-overview

Check out this Azure Advisor Cheat Sheet:

https://tutorialsdojo.com/azure-advisor/

37
Q

You are tasked with configuring network connectivity between two virtual machines, VM3 and VM6, which are located in separate virtual networks, VNET5 and VNET6 respectively. The goal is to enable seamless communication between the two VMs while ensuring minimal administrative overhead.

Which of the following options should you take?

A. Implement a Network Security Group (NSG) and apply it to both VM3 and VM6.
B. Create a virtual network peering connection between VNET5 and VNET6.
C. Configure a user-defined route that directs traffic from VM3 to VM6.
D. Set up a VPN Gateway between VNET5 and VNET6.

A

B. Create a virtual network peering connection between VNET5 and VNET6.

Explanation:
Virtual network peering allows two virtual networks to connect directly through the Azure backbone network. This connection allows resources in either virtual network to communicate with each other as if they were in the same network. This is achieved by routing traffic between the virtual networks through the Microsoft backbone infrastructure, rather than over the public internet.

Once the peering connection is established, the two virtual networks function as a single entity for connectivity purposes. This means that virtual machines in these virtual networks can communicate with each other using private IP addresses. Despite this seamless connectivity, the two virtual networks continue to operate as separate resources, maintaining their own set of network policies.
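
Peering must be created in both directions; a sketch follows, with the resource group name assumed and both virtual networks assumed to be in the same subscription:

  az network vnet peering create \
      --name VNET5-to-VNET6 \
      --resource-group td-rg \
      --vnet-name VNET5 \
      --remote-vnet VNET6 \
      --allow-vnet-access

  az network vnet peering create \
      --name VNET6-to-VNET5 \
      --resource-group td-rg \
      --vnet-name VNET6 \
      --remote-vnet VNET5 \
      --allow-vnet-access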

Hence, the correct answer is: Create a virtual network peering connection between VNET5 and VNET6.

The option that says: Set up a VPN Gateway between VNET5 and VNET6 is incorrect. This option would be more appropriate if there is a specific need for the traffic between VM3 and VM6 to be encrypted. The VPN Gateway ensures that all data transmitted between the two networks is secure with IPsec/IKE encryption protocols. However, it involves more setup and management overhead compared to virtual network peering and typically incurs higher costs and potentially lower throughput due to the encryption/decryption process.

The option that says: Implement a Network Security Group (NSG) and apply it to both VM3 and VM6 is incorrect. It would help control inbound and outbound traffic to these VMs, but it wouldn’t establish network connectivity between the two virtual networks.

The option that says: Configure a user-defined route that directs traffic from VM3 to VM6 is incorrect because it could enable communication between the two VMs. However, this approach would require more administrative effort and wouldn’t provide the seamless network-to-network connectivity that virtual network peering offers.

38
Q

You are managing a web application named TDApp on Azure.

You notice that some web requests are taking too long to complete, and you want to trace these requests to understand the cause of the delay.

Which of the following options should you consider?

A. Use the Azure Network Watcher to monitor network performance.
B. Check the Activity log for any unusual activities.
C. Go to the Diagnose and solve problems settings to identify potential issues.
D. Use the Azure Application Insights Profiler for performance profiling.

A

D. Use the Azure Application Insights Profiler for performance profiling.

Explanation:
Azure Application Insights Profiler is a powerful tool designed to help developers understand the performance characteristics of their applications. It provides detailed performance traces for applications running in Azure. The Profiler operates automatically, at scale, and does not negatively affect your users. It identifies the median, fastest, and slowest response times for each web request made by your customers. It also pinpoints the “hot” code path spending the most time handling a particular web request. The Profiler can be enabled on all your Azure applications to gather data with various triggers such as sampling, CPU usage, and memory usage.

When you notice that some web requests are taking too long to complete, the Azure Application Insights Profiler becomes an invaluable tool. It allows you to capture, identify, and view performance traces for your application running in Azure. This means you can see exactly what’s happening with each web request, allowing you to identify any bottlenecks or performance issues.

Moreover, the Profiler operates automatically and at scale, meaning it can handle the demands of a production environment. It also doesn’t negatively affect your users, so you can use it without worrying about impacting the user experience.

Hence, the correct answer is: Use the Azure Application Insights Profiler for performance profiling.

The option that says: Check the Activity log for any unusual activities is incorrect because Azure Activity Log only provides insight into subscription-level events, like when a resource is modified or a virtual machine is started. It doesn’t provide information about the performance of individual web requests within an application.

The option that says: Use the Azure Network Watcher to monitor network performance is incorrect. Azure Network Watcher is a service that provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. While it’s useful for network monitoring, it doesn’t provide detailed tracing of individual web requests within an application.

The option that says: Go to the Diagnose and solve problems settings to identify potential issues is incorrect. The “Diagnose and solve problems” feature in Azure is simply a self-help diagnostic tool that helps you troubleshoot and solve problems with your Azure services. Although it can help identify common issues that affect the performance of your application, it doesn’t provide detailed tracing of individual web requests within an application the way the Azure Application Insights Profiler does.

39
Q

You currently have an Azure subscription with a resource group named TD-RG1 and a virtual network named TD-VNet1. You propose to launch an Azure container instance named maincontainer1. Specific parameters are required to ensure that the container instance may be accessed using a reusable DNS name label.

Which action needs to be configured for maincontainer1?

A. Set up the private networking type.
B. Configure the public networking type.
C. Create a new subnet on VNet1.
D. Utilize an Azure Key Vault.

A

B. Configure the public networking type.

Explanation:
Azure Container Instances (ACI) is a service for deploying and managing containerized applications in the Azure cloud environment. It offers a quick and straightforward approach to running containers in the cloud without having to maintain the underlying infrastructure.

Azure Container Instances (ACI) provides flexibility in supporting public and private networking types. For the specific task of configuring DNS name label scope reuse for maincontainer1, the public networking type is required. This configuration allows the container instance to be accessible over the internet and enables the use of DNS name labels, which are crucial for this requirement.

Public networking enables an Azure container instance to be accessible via the internet using a fully qualified domain name (FQDN). This is necessary for setting up a DNS name label that can be reused, ensuring the container is accessible through a DNS name.

Using public networking, you can assign a DNS name label to the container instance. This feature allows for the reuse of the DNS name label across different instances, facilitating easier access and management.
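
A hedged sketch of the deployment follows; the container image and DNS name label values are assumed, and the --dns-name-label-scope parameter (which controls label reuse) is available in recent Azure CLI versions:

  az container create \
      --resource-group TD-RG1 \
      --name maincontainer1 \
      --image mcr.microsoft.com/azuredocs/aci-helloworld \
      --ports 80 \
      --ip-address Public \
      --dns-name-label maincontainer1-td \
      --dns-name-label-scope TenantReuse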

Hence, the correct answer is: Configure the public networking type.

The option that says: Set up the private networking type is incorrect because this simply restricts the container instance to a virtual network, making it accessible only within that network. This configuration won’t work with the use of a DNS name label that is reachable from the public internet.

The option that says: Create a new subnet on VNet1 is incorrect because it simply provides additional network segmentation within the virtual network. It does not impact the ability to configure DNS name label scope reuse for the container instance.

The option that says: Utilize an Azure Key Vault is incorrect because this service only helps protect cryptographic keys and secrets used by cloud applications and services. It enhances security for managing secrets but does not influence DNS configurations for container instances. While important for security and secret management, it does not affect the networking configuration needed for DNS name label reuse.

40
Q
A