test6 Flashcards

1
Q

You have two subscriptions named Subscription1 and Subscription2. Each subscription is associated to a different Azure AD tenant.
Subscription1 contains a virtual network named VNet1. VNet1 contains an Azure virtual machine named VM1 and has an IP address space of 10.0.0.0/16.
Subscription2 contains a virtual network named VNet2. VNet2 contains an Azure virtual machine named VM2 and has an IP address space of 10.10.0.0/24.
You need to connect VNet1 to VNet2.
What should you do first?
A. Modify the IP address space of VNet2.
B. Move VM1 to Subscription2.
C. Provision virtual network gateways.
D. Move VNet1 to Subscription2.

A

The correct answer is C. Provision virtual network gateways.

Here’s why:

Explanation:

To connect two Azure Virtual Networks (VNets), especially when they are in different subscriptions or Azure AD tenants, you typically need to use virtual network gateways. Virtual network gateways enable you to create VPN tunnels between VNets, allowing resources in different VNets to communicate with each other.

Let’s break down why each option is (or isn’t) the correct first step:

A. Modify the IP address space of VNet2.

Why it’s incorrect as the first step: The IP address spaces of VNet1 (10.0.0.0/16) and VNet2 (10.10.0.0/24) are not overlapping. Non-overlapping IP address spaces are a requirement for VNet peering or VNet-to-VNet connections using gateways. Modifying the IP address space is only necessary if there’s an overlap, which isn’t the case here. Therefore, this is not the first step.

B. Move VM1 to Subscription2.

Why it’s incorrect: Moving VM1 to Subscription2 is completely irrelevant to connecting VNet1 and VNet2. The goal is to connect the networks, not move VMs between subscriptions as a prerequisite for network connectivity. VMs are within the VNets, but the connection needs to be established at the VNet level.

C. Provision virtual network gateways.

Why it’s correct: Provisioning virtual network gateways is the essential first step for establishing a VNet-to-VNet connection, especially across subscriptions. Virtual network gateways are the Azure resources that create and manage the VPN tunnels required for VNet-to-VNet connectivity. Before you can configure the actual connection, you must have gateways in place in both VNets.

D. Move VNet1 to Subscription2.

Why it’s incorrect as the first step: Moving VNet1 to Subscription2 would place both VNets in the same subscription. That might simplify management and would allow straightforward VNet peering within a single subscription (if peering were chosen instead of gateways), but it is not necessary: VNets can be connected across subscriptions and tenants without moving them. Moving a VNet is also a more complex operation with potential impact on existing resources and configurations, and it is not required in order to use gateways, which is the approach described in option C.

In summary:

The most logical and necessary first step to connect VNet1 and VNet2, especially given they are in different subscriptions, is to provision virtual network gateways in both VNets. This is the foundational infrastructure component required to build the VPN tunnels for VNet-to-VNet communication.
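
As a rough illustration, here is a hedged Az PowerShell sketch of provisioning a gateway in VNet1; the resource group name, region, GatewaySubnet prefix, and gateway names are assumptions for the example, an equivalent gateway is then needed in VNet2, and the two gateways are finally joined with a VNet-to-VNet connection:

  # Assumes the Az.Network module; newer regions may require a Standard static public IP.
  $vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
  Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.255.0/27" -VirtualNetwork $vnet
  $vnet | Set-AzVirtualNetwork
  $pip = New-AzPublicIpAddress -ResourceGroupName "RG1" -Name "VNet1-gw-pip" -Location "eastus" -AllocationMethod Dynamic
  $vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
  $subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
  $ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
  New-AzVirtualNetworkGateway -ResourceGroupName "RG1" -Name "VNet1-gw" -Location "eastus" -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1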

Final Answer: C

2
Q

You have an Azure Active Directory (Azure AD) tenant.
You have an existing Azure AD conditional access policy named Policy1. Policy1 enforces the use of Azure AD-joined devices when members of the Global
Administrators group authenticate to Azure AD from untrusted locations.
You need to ensure that members of the Global Administrators group will also be forced to use multi-factor authentication when authenticating from untrusted locations.
What should you do?
A. From the Azure portal, modify session control of Policy1.
B. From multi-factor authentication page, modify the user settings.
C. From multi-factor authentication page, modify the service settings.
D. From the Azure portal, modify grant control of Policy1.

A

The correct answer is D. From the Azure portal, modify grant control of Policy1.

Here’s why:

Conditional Access Grant Controls: Conditional Access policies in Azure AD work by evaluating conditions (like user group, location, device platform) and then applying grant controls if those conditions are met. Grant controls define what is required to grant access. Common grant controls include:

Require multi-factor authentication: Enforces MFA.

Require device to be marked as compliant: Requires the device to be managed and compliant with your compliance policies.

Require hybrid Azure AD joined device: Requires the device to be hybrid Azure AD joined.

Require approved client app: Requires the user to use an approved client application.

Require app protection policy: Requires the user to use an app that has app protection policies applied.

Policy1’s Current Configuration: Policy1 already enforces “Azure AD-joined devices” for Global Administrators from untrusted locations. This means the “Grant” control section of Policy1 is already configured to “Require device to be marked as compliant” or “Require hybrid Azure AD joined device” (or a similar device-based control).

Adding MFA Requirement: To also force MFA, you need to add the “Require multi-factor authentication” grant control to Policy1. The “Grant” control section in the Azure portal for Policy1 allows you to specify multiple grant requirements. You can require one of the selected controls or all of them. In this case, you likely want to require both Azure AD-joined devices and MFA.

Why other options are incorrect:

A. From the Azure portal, modify session control of Policy1. Session controls are applied after authentication and access are granted. They control the user session behavior, such as sign-in frequency, persistent browser sessions, and application enforced restrictions. Session controls are not used to enforce primary authentication requirements like MFA or device compliance.

B. From multi-factor authentication page, modify the user settings. The older Azure AD MFA settings page (if you are referring to the legacy MFA settings) is primarily for per-user MFA enforcement and app password management. Conditional Access policies are the modern and recommended way to manage MFA at scale and based on conditions (like location, user group, etc.). Modifying user settings directly bypasses the conditional access policy logic and is not the correct approach for this scenario.

C. From multi-factor authentication page, modify the service settings. Similar to option B, service settings on the legacy MFA page are generally for configuring MFA provider settings (like verification methods), not for integrating MFA into Conditional Access policies. Conditional Access policies are configured separately.

In summary: To enforce MFA in addition to the existing device requirement for Global Administrators from untrusted locations within Policy1, you need to modify the grant controls of Policy1 in the Azure portal and add the “Require multi-factor authentication” option.
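
As an illustration only, a hedged sketch using the AzureAD PowerShell module is shown below; this is one scriptable way to add the MFA grant control, and the exact device control value is an assumption that should match what Policy1 already enforces:

  # Assumes the AzureAD (or AzureADPreview) module; adjust the device control to match Policy1.
  Connect-AzureAD
  $policy = Get-AzureADMSConditionalAccessPolicy | Where-Object { $_.DisplayName -eq "Policy1" }
  $grant = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessGrantControls
  $grant._Operator = "AND"                              # require all of the selected controls
  $grant.BuiltInControls = @("mfa", "compliantDevice")  # MFA plus the existing device-based control
  Set-AzureADMSConditionalAccessPolicy -PolicyId $policy.Id -GrantControls $grant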

Final Answer: D

3
Q

You plan to deploy five virtual machines to a virtual network subnet.
Each virtual machine will have a public IP address and a private IP address.
Each virtual machine requires the same inbound and outbound security rules.
What is the minimum number of network interfaces and network security groups that you require? To answer, select the appropriate options in the answer area.
Network interfaces:
5
10
15
20
Network Security Group
1
2
5
10

A

Network interfaces: 5

Network Security Group: 1

Explanation:

Network Interfaces:

Minimum Requirement: Each Azure virtual machine must have at least one network interface (NIC) to connect to a virtual network and communicate with other resources.

Public and Private IPs on a Single NIC: A single NIC on an Azure VM can be configured with both a private IP address (from the subnet’s IP range) and a public IP address. You don’t need separate NICs to have both types of IP addresses.

Calculation: Since you have five virtual machines, you need a minimum of 5 network interfaces, one for each VM.

Network Security Groups (NSGs):

Subnet-Level NSGs: Network Security Groups can be associated with either individual network interfaces or entire subnets. When you associate an NSG with a subnet, the security rules in that NSG apply to all virtual machines within that subnet.

Shared Security Rules: The requirement states that “Each virtual machine requires the same inbound and outbound security rules.” This is the key point. Because the security rules are identical for all VMs in the subnet, you can efficiently manage security by applying a single NSG at the subnet level.

Minimizing NSGs: Using a subnet-level NSG is the most efficient and least administrative effort approach when VMs within a subnet share the same security requirements. You avoid the need to create and manage individual NSGs for each VM or NIC.

Calculation: Since all VMs in the subnet need the same rules, and they are in the same subnet, you need a minimum of 1 Network Security Group applied to the subnet.
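
For illustration, here is a hedged Az PowerShell sketch that creates one NSG with a sample rule and attaches it to the shared subnet; the resource group, VNet and subnet names, address prefix, and the HTTPS rule are assumptions:

  $rule = New-AzNetworkSecurityRuleConfig -Name "allow-https-in" -Direction Inbound -Priority 100 -Access Allow -Protocol Tcp -SourceAddressPrefix "*" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange 443
  $nsg = New-AzNetworkSecurityGroup -ResourceGroupName "RG1" -Location "eastus" -Name "nsg-shared" -SecurityRules $rule
  $vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
  Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet1" -AddressPrefix "10.0.1.0/24" -NetworkSecurityGroup $nsg
  $vnet | Set-AzVirtualNetwork   # the single NSG now filters traffic for all five VMs in the subnet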

Why other options are incorrect:

Network Interfaces:

10, 15, 20: These numbers are unnecessarily high. You don’t need multiple NICs per VM just because they have public and private IPs, or because they require the same security rules. One NIC per VM is sufficient.

Network Security Groups:

2, 5, 10: These numbers are also unnecessarily high. Creating multiple NSGs (especially more than 1) for this scenario would be redundant and increase management complexity without providing any benefit since the security rules are identical for all VMs and they are in the same subnet. Applying more than one NSG per subnet or per NIC could even lead to conflicting or overly complex security configurations.

Therefore, the minimum and most efficient configuration is:

5 Network Interfaces (one per VM)

1 Network Security Group (applied to the subnet)

Final Answer:

Network interfaces: 5
Network Security Group: 1

4
Q

You have an Azure subscription named Subscription1 that contains an Azure virtual machine named VM1. VM1 is in a resource group named RG1.
VM1 runs services that will be used to deploy resources to RG1.
You need to ensure that a service running on VM1 can manage the resources in RG1 by using the identity of VM1.
What should you do first?
A. From the Azure portal, modify the Access control (IAM) settings of RG1.
B. From the Azure portal, modify the Policies settings of RG1.
C. From the Azure portal, modify the Access control (IAM) settings of VM1.
D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.

A

The correct answer is D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.

Here’s why:

Explanation:

To allow a service running on VM1 to manage Azure resources using VM1’s identity, you need to enable and configure Managed Identities for Azure Resources on VM1 first. Managed Identities provide Azure services with an automatically managed identity in Azure Active Directory (Azure AD). This identity can then be used to authenticate to Azure services that support Azure AD authentication, without needing to manage credentials in your code.

Let’s break down each option:

A. From the Azure portal, modify the Access control (IAM) settings of RG1.

Why it’s not the first step: Modifying the Access control (IAM) settings of RG1 is a necessary later step, but not the first step. IAM settings on RG1 are where you will grant permissions to the identity of VM1 to manage resources in RG1. However, you first need to enable the Managed Identity on VM1 before you can grant it permissions.

B. From the Azure portal, modify the Policies settings of RG1.

Why it’s incorrect: Azure Policies are used to enforce organizational standards and compliance across Azure resources. They are not related to enabling Managed Identities or granting permissions for a VM to manage resources. Policies are for governance, not identity management in this context.

C. From the Azure portal, modify the Access control (IAM) settings of VM1.

Why it’s incorrect: Modifying the Access control (IAM) settings of VM1 controls who can manage the VM itself. It doesn’t enable the VM’s identity to be used to manage other resources. IAM settings on VM1 are for role-based access control to the VM, not from the VM to other resources.

D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.

Why it’s correct and the first step: This is the correct first step. You need to enable either a System-assigned Managed Identity or a User-assigned Managed Identity (or both) on VM1. Enabling the Managed Identity creates an identity for VM1 in Azure AD. Once the Managed Identity is enabled, you can then proceed to grant this identity permissions to manage RG1 resources via IAM on RG1 (Option A - which would be the next step).

Steps to solve the problem:

Enable Managed Identity on VM1: In the Azure portal, navigate to VM1. Under the “Settings” section, find “Identity”. Enable either “System assigned” or “User assigned” Managed Identity (System-assigned is often simpler for this scenario).

Grant Permissions to VM1’s Identity on RG1: After enabling Managed Identity on VM1, go to Resource Group RG1 in the Azure portal. Navigate to “Access control (IAM)”. Add a role assignment.

For “Principal”, search for the name of VM1 (if System-assigned) or the name of the User-assigned Managed Identity you created.

Select an appropriate role for the service running on VM1 to manage resources in RG1 (e.g., “Contributor” role to have broad management rights, or more specific roles if you want to limit permissions).

Code on VM1: The service running on VM1 can now use the Azure SDKs or REST APIs to authenticate using the VM’s Managed Identity and manage resources in RG1. The Azure SDKs handle the authentication process automatically when running within an Azure VM with Managed Identity enabled.

In summary, enabling Managed Identity on VM1 is the prerequisite and the correct first step to allow a service on VM1 to manage Azure resources using VM1’s identity.
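
A hedged Az PowerShell sketch of these two steps (a system-assigned identity plus the Contributor role; the role choice is an assumption) might look like this:

  $vm = Get-AzVM -ResourceGroupName "RG1" -Name "VM1"
  Update-AzVM -ResourceGroupName "RG1" -VM $vm -IdentityType SystemAssigned   # step 1: enable the identity
  $vm = Get-AzVM -ResourceGroupName "RG1" -Name "VM1"                         # re-read to pick up the new PrincipalId
  New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId -RoleDefinitionName "Contributor" -ResourceGroupName "RG1"   # step 2: grant rights on RG1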

Final Answer: D

5
Q

You have an Azure Active Directory (Azure AD) tenant.
You need to create a conditional access policy that requires all users to use multi-factor authentication when they access the Azure portal.
Which three settings should you configure? To answer, select the appropriate settings to the answer area.
NOTE: Each correct selection is worth one point.

Name -> Policy1
users and groups
cloud apps
conditions
Grant
Session
enable policy

A

To create a conditional access policy that requires MFA for all users accessing the Azure portal, you need to configure the following three core settings:

  1. Users and groups: You need to specify who this policy applies to. In this case, you want it to apply to all users. Within the Conditional Access policy configuration, you will select “Users and groups” and then choose to apply the policy to “All users”.
  2. Cloud apps or actions: You need to specify what application(s) this policy protects. In this case, you want to protect access to the Azure portal. Within the Conditional Access policy configuration, you will select “Cloud apps or actions”, choose “Select apps”, and then search for and select the Azure portal app (Microsoft Azure Management).
  3. Grant: You need to specify what access control to enforce when the conditions (user, app) are met. In this case, you want to require multi-factor authentication. Within the Conditional Access policy configuration, you will select “Grant” and then choose “Grant access” and check the box for “Require multi-factor authentication”.

Let’s evaluate the provided options and map them to these core settings:

Name -> Policy1: While a policy needs a name, it’s not a functional setting for enforcing MFA itself. It’s an administrative label. It’s less critical for the functionality compared to the other options.

users and groups: Correct. This is essential to define who the policy applies to (all users in this case).

cloud apps: Correct. This is essential to define what application is being protected (Azure portal).

conditions: Conditions define when the policy applies (for example, location or device state) and are a fundamental part of every Conditional Access policy. For the simple requirement of “all users, any time, for the Azure portal”, however, the Conditions section can be left at its defaults, so it is not one of the three settings you must actively configure here.

Grant: Correct. This is essential to define what action is taken when the conditions are met, which is “require multi-factor authentication” in this scenario.

Session: Session controls are applied after successful authentication and are used to manage the user session (e.g., sign-in frequency, persistent browser session). They are not directly involved in requiring MFA for initial access.

enable policy: While you need to enable the policy for it to be active, “enable policy” is more of an on/off switch for the entire policy rather than a specific setting within the policy configuration itself. It’s also less directly related to defining the MFA requirement compared to Users, Apps, and Grant.

Considering the need to select three settings that are most directly related to configuring the MFA requirement, the most appropriate and functionally essential settings are:

users and groups

cloud apps

Grant

While “conditions” is an integral part of CA policies, for this specific and simple requirement, the most direct settings to configure to achieve the goal of requiring MFA for Azure portal access for all users are Users, Apps, and Grant.

Therefore, the three settings you should configure are:

users and groups

cloud apps

Grant

Final Answer:

users and groups
cloud apps
Grant

6
Q

You configure Azure AD Connect for Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO) for an on-premises network.
Users report that when they attempt to access myapps.microsoft.com, they are prompted multiple times to sign in and are forced to use an account name that ends with onmicrosoft.com.
You discover that there is a UPN mismatch between Azure AD and the on-premises Active Directory.
You need to ensure that the users can use single-sign on (SSO) to access Azure resources.
What should you do first?
A. From on-premises network, deploy Active Directory Federation Services (AD FS).
B. From Azure AD, add and verify a custom domain name.
C. From on-premises network, request a new certificate that contains the Active Directory domain name.
D. From the server that runs Azure AD Connect, modify the filtering options.

A

The correct answer is B. From Azure AD, add and verify a custom domain name.

Here’s why:

Explanation:

The core issue described is a UPN (User Principal Name) mismatch. Azure AD Seamless SSO relies on matching the UPNs of users between on-premises Active Directory and Azure AD. When there’s a mismatch, Seamless SSO will fail, leading to users being prompted for credentials multiple times and potentially being forced to use their default onmicrosoft.com UPN.

Let’s break down why option B is the correct first step and why the others are not:

B. From Azure AD, add and verify a custom domain name.

Why it’s correct and the first step: If users are being forced to use onmicrosoft.com UPNs, it strongly indicates that a custom domain name has not been properly added and verified in Azure AD. By default, Azure AD uses the onmicrosoft.com domain. To use your organization’s domain (e.g., company.com) for user UPNs in Azure AD, you must add and verify this custom domain in your Azure AD tenant.

UPN Alignment: Adding and verifying the custom domain allows Azure AD to recognize and accept user UPNs that match your on-premises domain (e.g., user@company.com). This is essential for Seamless SSO to work correctly and for UPNs to be consistent across on-premises and cloud environments.

First Step: This is the most fundamental and logical first step to address a UPN mismatch issue. Without a verified custom domain, Azure AD won’t properly handle UPNs from your on-premises domain.

A. From on-premises network, deploy Active Directory Federation Services (AD FS).

Why it’s incorrect: Deploying AD FS is a completely different approach to Single Sign-On. AD FS is a federation-based SSO solution, while Azure AD Seamless SSO is a password hash synchronization-based solution (with Kerberos for authentication). Deploying AD FS is a significant change in SSO strategy and not a step to fix issues with Seamless SSO. It’s also overkill and not directly related to the UPN mismatch problem.

C. From on-premises network, request a new certificate that contains the Active Directory domain name.

Why it’s incorrect: Certificates are used in Azure AD Seamless SSO for Kerberos ticket decryption and security. While a certificate is necessary for Seamless SSO to function, the problem description specifically points to a UPN mismatch, not a certificate issue. A new certificate is unlikely to resolve the UPN mismatch. Certificates are related to the technical security aspects of Kerberos authentication, not UPN alignment.

D. From the server that runs Azure AD Connect, modify the filtering options.

Why it’s incorrect: Azure AD Connect filtering options control which objects (users, groups, etc.) are synchronized from on-premises Active Directory to Azure AD. While incorrect filtering could cause synchronization issues, it’s not the primary cause of a UPN mismatch leading to SSO problems. Filtering is more about which users are synced, not about the UPN attribute itself being correctly configured. Modifying filtering is a more advanced configuration and not the first step to address a fundamental UPN mismatch problem.

Steps to resolve the issue after choosing option B:

Add and Verify Custom Domain in Azure AD (Option B - Correct First Step): In the Azure portal, navigate to Azure Active Directory -> Custom domain names. Add your organization’s domain name (e.g., company.com) and follow the steps to verify it (typically by adding a DNS TXT record).

Update UPNs (If Necessary): After verifying the custom domain, ensure that the UPN suffixes in your on-premises Active Directory match the verified custom domain. If there are users with UPNs still using a different or incorrect suffix, you may need to update their UPNs in on-premises AD.

Run Azure AD Connect Synchronization: After making changes (adding custom domain, updating UPNs), ensure Azure AD Connect synchronizes these changes to Azure AD.

Test Seamless SSO: Have users try accessing myapps.microsoft.com again from domain-joined devices on the on-premises network to verify if Seamless SSO is now working correctly without multiple prompts or forced onmicrosoft.com usage.
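
If step 2 applies, a hedged sketch using the on-premises ActiveDirectory PowerShell module can update the UPN suffixes in bulk; the old suffix contoso.local is purely hypothetical, and the contoso.com suffix must already be added under Active Directory Domains and Trusts:

  Get-ADUser -Filter "UserPrincipalName -like '*@contoso.local'" |
      ForEach-Object {
          $newUpn = $_.SamAccountName + "@contoso.com"
          Set-ADUser -Identity $_ -UserPrincipalName $newUpn   # align the UPN with the verified custom domain
      }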

Final Answer: B

7
Q

You have an Active Directory forest named contoso.com.
You install and configure Azure AD Connect to use password hash synchronization as the single sign-on (SSO) method. Staging mode is enabled.
You review the synchronization results and discover that the Synchronization Service Manager does not display any sync jobs.
You need to ensure that the synchronization completes successfully.
What should you do?
A. From Azure PowerShell, run Start-ADSyncSyncCycle -PolicyType Initial.
B. Run Azure AD Connect and set the SSO method to Pass-through Authentication.
C. From Synchronization Service Manager, run a full import.
D. Run Azure AD Connect and disable staging mode.

A

The correct answer is D. Run Azure AD Connect and disable staging mode.

Here’s why:

Explanation:

When Azure AD Connect is configured in staging mode, it means that the synchronization service is running and performing imports and synchronizations, but it does not export any changes to Azure AD. This mode is designed for testing and verifying the configuration before making it active in your production environment.

If you are seeing no sync jobs in the Synchronization Service Manager while staging mode is enabled, this is expected behavior. In staging mode, the primary purpose is to review the synchronization configuration and preview changes, not to actively synchronize data to your production Azure AD tenant.

To make the synchronization process active and for sync jobs to run and export data to Azure AD, you need to disable staging mode.

Let’s analyze each option:

A. From Azure PowerShell, run Start-ADSyncSyncCycle -PolicyType Initial.

While this command can manually trigger a synchronization cycle, it will still operate within the constraints of staging mode. If staging mode is enabled, running this command will likely initiate a synchronization cycle, but it will still not export changes to Azure AD. Therefore, it won’t resolve the core issue of getting the synchronization to complete successfully in a production sense.

B. Run Azure AD Connect and set the SSO method to Pass-through Authentication.

Changing the SSO method to Pass-through Authentication is irrelevant to the problem of synchronization not completing successfully. The issue is that no sync jobs are being displayed, which is directly related to staging mode preventing export of changes to Azure AD. Changing the SSO method won’t enable the synchronization process in staging mode to export data.

C. From Synchronization Service Manager, run a full import.

Running a full import from the Synchronization Service Manager might initiate an import operation, but again, if staging mode is enabled, the subsequent synchronization and export steps will be suppressed. A full import alone won’t enable the synchronization to complete successfully and export data to Azure AD when staging mode is active.

D. Run Azure AD Connect and disable staging mode.

This is the correct answer. Disabling staging mode in Azure AD Connect is the necessary step to make the synchronization configuration active and allow changes to be exported to Azure AD. Once staging mode is disabled, Azure AD Connect will start performing full and delta synchronizations and will export changes to your Azure AD tenant. This will then allow you to see sync jobs running in the Synchronization Service Manager and for the synchronization to complete successfully in a production context.

Steps to resolve the issue:

Run Azure AD Connect wizard again.

Choose “Configure” from the initial tasks page.

Select “Configure staging mode” from the Additional tasks page.

Uncheck the “Enable staging mode” checkbox.

Complete the wizard.

After disabling staging mode, the synchronization service will start exporting changes to Azure AD, and you should see synchronization jobs running in the Synchronization Service Manager.
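
On the Azure AD Connect server, a hedged sketch using the ADSync module can confirm the staging-mode state and, once staging mode has been disabled through the wizard, trigger a synchronization cycle manually:

  Get-ADSyncScheduler                     # StagingModeEnabled shows whether staging mode is still on
  # After disabling staging mode in the wizard (option D), trigger a full cycle if you do not want to wait:
  Start-ADSyncSyncCycle -PolicyType Initial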

Final Answer: D

8
Q

You have an Azure Active Directory (Azure AD) tenant that has the initial domain name.
You have a domain name of contoso.com registered at a third-party registrar.
You need to ensure that you can create Azure AD users that have names containing a suffix of @contoso.com.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Add an Azure AD server
Create an Azure DNS zone
Verify the Domain
Configure company branding
Add a record to the public contoso.com DNS zone
Add a custom domain name

A

Explanation:

To use a custom domain name like contoso.com for Azure AD users, you need to perform the following steps in sequence:

Add a custom domain name: First, you need to inform Azure AD that you intend to use contoso.com with your Azure AD tenant. This is done by adding the custom domain name in the Azure portal or using PowerShell.

Verify the Domain: After adding the domain name, Azure AD will provide you with DNS record information (typically a TXT record or MX record). You need to add this record to the DNS settings of contoso.com at your third-party registrar to prove that you own the domain. This process is called domain verification.

Add a record to the public contoso.com DNS zone: This is the step where you actually add the DNS record provided by Azure AD to your domain’s public DNS settings. This action proves to Azure AD that you control the DNS for contoso.com, thus verifying your ownership.

Let’s arrange the provided actions in the correct sequence:

Step 1: Add a custom domain name
This is the first step to initiate the process within Azure AD. You need to tell Azure AD that you want to use contoso.com.

Step 2: Verify the Domain
Once you add the custom domain name, Azure AD will start the verification process. This step represents the action of Azure AD initiating the verification and providing you with the DNS record information.

Step 3: Add a record to the public contoso.com DNS zone
This is the final step to complete the verification. You take the DNS record information provided in the “Verify the Domain” step and add it to the DNS zone managed by your third-party registrar for contoso.com.

Incorrect Options and Why:

Add an Azure AD server: Azure AD is a cloud service and doesn’t involve adding servers in the context of custom domain setup.

Create an Azure DNS zone: While you could use Azure DNS to manage your domain’s DNS records, it’s not a mandatory step to add a custom domain to Azure AD. You can use any DNS provider where your domain is registered. This step is optional for DNS management, not for domain verification itself.

Configure company branding: Company branding is for customizing the Azure AD sign-in experience and is not related to adding or verifying a custom domain for user UPNs.

Answer Area (Correct Sequence):

Add a custom domain name

Verify the Domain

Add a record to the public contoso.com DNS zone

Final Answer:

Add a custom domain name
Verify the Domain
Add a record to the public contoso.com DNS zone
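
For reference, a hedged sketch of the same sequence using the AzureAD PowerShell module (cmdlet availability depends on the installed module version):

  Connect-AzureAD
  New-AzureADDomain -Name "contoso.com"                       # step 1: add the custom domain name
  Get-AzureADDomainVerificationDnsRecord -Name "contoso.com"  # step 2: retrieve the verification (TXT/MX) record details
  # step 3: create that record in the public contoso.com DNS zone at the registrar, then complete verification:
  Confirm-AzureADDomain -Name "contoso.com"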

9
Q

You have an Azure subscription that contains 100 virtual machines.
You regularly create and delete virtual machines.
You need to identify unattached disks that can be deleted.
What should you do?
A. From Microsoft Azure Storage Explorer, view the Account Management properties.
B. From Azure Cost Management, create a Cost Management report.
C. From the Azure portal, configure the Advisor recommendations.
D. From Azure Cost Management, open the Optimizer tab and create a report.

A

The correct answer is C. From the Azure portal, configure the Advisor recommendations.

Here’s why:

Explanation:

Azure Advisor’s Cost Recommendations: Azure Advisor is a service in Azure that provides personalized recommendations to help you optimize your Azure resources for cost, security, reliability, performance, and operational excellence. One of its key features is to identify cost-saving opportunities.

Identifying Unattached Disks: Azure Advisor specifically includes a recommendation category related to cost optimization, and within that category, it can identify unattached disks. Advisor analyzes your Azure environment and detects disks that are not currently attached to any virtual machines. These unattached disks are still incurring storage costs, and deleting them can save money.

Configuring Advisor Recommendations: You can access Azure Advisor from the Azure portal. You don’t need to “configure” it in the sense of setting up new rules for unattached disks detection, as this is a built-in recommendation. You simply need to view the Advisor recommendations, specifically looking at the “Cost” category. Advisor will automatically list out any unattached disks it finds in your subscription.

Why other options are incorrect:

A. From Microsoft Azure Storage Explorer, view the Account Management properties. Azure Storage Explorer is a useful tool for managing storage accounts and their contents (blobs, files, disks, etc.). However, viewing “Account Management properties” in Storage Explorer will not directly provide a list of unattached disks. You would have to manually browse through disks and cross-reference them with your VM list to determine which are unattached, which is inefficient for 100 VMs. Storage Explorer is not designed for this specific discovery task in an automated way.

B. From Azure Cost Management, create a Cost Management report. Azure Cost Management is excellent for analyzing and reporting on Azure spending. You can create reports to see your storage costs, including disk storage. However, Cost Management reports themselves don’t directly identify unattached disks. You might see high disk costs, but the report won’t automatically tell you which disks are not in use. You would need to analyze cost data and correlate it with other information to infer unattached disks, which is not the most efficient approach.

D. From Azure Cost Management, open the Optimizer tab and create a report. While Azure Cost Management has an “Optimizer” section (or similar features that might be renamed or UI updated over time), and it may surface some cost-saving recommendations, it’s still generally less direct than using Azure Advisor for this specific task. The Optimizer tab is more likely to guide you toward acting on Advisor recommendations or provide a more general cost optimization overview, rather than directly and specifically listing unattached disks for deletion. Azure Advisor is the dedicated service for providing these kinds of actionable recommendations.

In summary, Azure Advisor is the most direct and efficient Azure service to identify unattached disks. By configuring (more accurately, by viewing) Advisor recommendations, specifically in the “Cost” category, you will get a list of unattached disks that you can then review and delete.
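
As a complementary check alongside Advisor, a hedged Az PowerShell sketch can list managed disks whose ManagedBy property is empty, which indicates they are not attached to any VM:

  Get-AzDisk |
      Where-Object { -not $_.ManagedBy } |
      Select-Object Name, ResourceGroupName, DiskSizeGB, TimeCreated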

Final Answer: C

10
Q

You have an Azure subscription that contains 10 virtual machines.
You need to ensure that you receive an email message when any virtual machines are powered off, restarted, or deallocated.
What is the minimum number of rules and action groups that you require?
A. three rules and three action groups
B. one rule and one action group
C. three rules and one action group
D. one rule and three action groups

A

To meet the requirement of receiving email notifications when virtual machines are powered off, restarted, or deallocated, you need to configure Azure Monitor alerts. Let’s analyze the minimum number of rules and action groups required.

Alert Rules:

You need to monitor three distinct events:

Virtual machine powered off (Stopped/PowerOff): You need a rule to detect when a VM transitions to the “Powered Off” state.

Virtual machine restarted: You need a rule to detect when a VM is restarted.

Virtual machine deallocated (Stopped (deallocated)): You need a rule to detect when a VM is deallocated.

While technically you might be able to create a single complex rule that tries to capture all three states, it is cleaner, more manageable, and generally recommended to create separate alert rules for each distinct event you want to monitor. This allows for more specific configurations and easier troubleshooting.

Therefore, you will need a minimum of three alert rules, one for each of the virtual machine power state changes you want to monitor.

Action Groups:

Action groups define the actions to take when an alert is triggered. In this scenario, the desired action is to send an email message. You want to receive an email notification for any of the three VM power state changes. You don’t need separate email notifications for each event; you just need a notification when any of these events occur.

Therefore, you can use a single action group configured to send an email message. You can then associate this single action group with all three alert rules. When any of the three alert rules are triggered (VM powered off, restarted, or deallocated), the same action group will be executed, resulting in an email notification being sent.

Minimum Requirements:

Alert Rules: Three (one for each power state: powered off, restarted, deallocated)

Action Groups: One (to send the email notification for all three alert rules)

Based on this analysis, the correct option is C. three rules and one action group.

Let’s review why other options are incorrect:

A. three rules and three action groups: Using three action groups is redundant. You don’t need a separate action group for each rule if the desired action (sending an email to the same recipient list) is the same for all rules.

B. one rule and one action group: One rule is insufficient to monitor three distinct events effectively and clearly. While technically you might try to create a very complex single rule, it’s not the minimum manageable approach and is not best practice for clarity and maintainability.

D. one rule and three action groups: One rule is still insufficient, and using three action groups is still redundant. A single rule cannot clearly distinguish and monitor all three power state changes in a simple and maintainable way, and you only need one email notification mechanism.
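
As an illustration, a hedged sketch of the single shared action group using Az.Monitor cmdlets; cmdlet and parameter names have shifted between module versions, and the group and recipient names are assumptions. Each of the three alert rules would then reference this one group:

  $email = New-AzActionGroupReceiver -Name "ops-email" -EmailReceiver -EmailAddress "ops@contoso.com"
  Set-AzActionGroup -ResourceGroupName "RG1" -Name "ag-vm-power" -ShortName "vmpower" -Receiver $email
  # The three activity log alert rules (powered off, restarted, deallocated) all point at ag-vm-power.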

Final Answer: C

11
Q

You plan to automate the deployment of a virtual machine scale set that uses the Windows Server 2016 Datacenter image.
You need to ensure that when the scale set virtual machines are provisioned, they have web server components installed.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Upload a configuration script.
B. Create an automation account.
C. Create a new virtual machine scale set in the Azure portal.
D. Create an Azure policy.
E. Modify the extension profile section of the Azure Resource Manager template.

A

The correct answers are A. Upload a configuration script. and E. Modify the extension profile section of the Azure Resource Manager template.

Here’s why these options are correct and how they work together:

A. Upload a configuration script.

Purpose: You need a script (like a PowerShell script for Windows Server) that contains the commands to install the web server components (e.g., IIS - Internet Information Services). This script will be executed on each virtual machine instance in the scale set after it’s provisioned.

Content: The script would typically include PowerShell commands to:

Install the Web-Server role (IIS).

Optionally configure IIS further (e.g., default website settings, application pools, etc.).

Potentially perform other necessary configuration steps for your web server application.

Upload Location: You would typically upload this script to an accessible storage location, such as:

Azure Blob Storage: A common and recommended approach. You upload the script to a public or private blob container and provide the URI to the script in your ARM template.

Script in ARM Template: For simpler scripts, you can sometimes embed the script directly within the ARM template, but for more complex scripts, uploading to storage is better for management and readability.

E. Modify the extension profile section of the Azure Resource Manager template.

Purpose: The Azure Resource Manager (ARM) template is used to define and deploy your virtual machine scale set. To automatically run your configuration script on each VM instance during provisioning, you use VM extensions. The extensionProfile section of the ARM template is where you configure these VM extensions.

Extension to Use: For running custom scripts on Windows VMs, the CustomScriptExtension is the most common and appropriate extension.

Configuration within extensionProfile: In the extensionProfile, you would define a CustomScriptExtension and configure it to:

fileUris: Point to the URI of your uploaded configuration script (from option A, like the Blob Storage URL).

commandToExecute: Specify the command to execute the script on the VM (e.g., powershell -ExecutionPolicy Unrestricted -File install_webserver.ps1).

settings and protectedSettings (optional): For passing parameters to the script or handling sensitive information securely.

How A and E Work Together:

Create the Configuration Script (Action A): You write a PowerShell script to install the web server components.

Upload the Script (Action A): You upload this script to Azure Blob Storage (or another accessible location).

Modify ARM Template (Action E): In your ARM template for the VM scale set, you add or modify the extensionProfile section.

Configure CustomScriptExtension (Action E): Within the extensionProfile, you define a CustomScriptExtension, pointing it to the script URI (fileUris) and specifying how to execute it (commandToExecute).

Deploy the ARM Template: When you deploy the ARM template, Azure will:

Provision the virtual machine scale set.

For each VM instance, Azure will download the script from the URI specified in the CustomScriptExtension.

The CustomScriptExtension will execute the script on the VM, installing the web server components.
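
For comparison, a hedged Az PowerShell sketch that adds the same CustomScriptExtension to an existing scale set; the storage URL, script name, and scale set name are hypothetical placeholders:

  $vmss = Get-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1"
  $settings = @{
      fileUris         = @("https://mystorage.blob.core.windows.net/scripts/install_webserver.ps1")
      commandToExecute = "powershell -ExecutionPolicy Unrestricted -File install_webserver.ps1"
  }
  Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "InstallIIS" -Publisher "Microsoft.Compute" -Type "CustomScriptExtension" -TypeHandlerVersion "1.10" -Setting $settings
  Update-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1" -VirtualMachineScaleSet $vmss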

Why other options are incorrect:

B. Create an automation account. Azure Automation accounts are powerful for automation tasks, but they are not the primary mechanism for directly configuring VMs during provisioning in a VM scale set. While you could use Azure Automation in a more complex scenario (e.g., triggered after VM creation), using VM extensions directly within the ARM template is the simpler and more standard approach for this requirement.

C. Create a new virtual machine scale set in the Azure portal. Creating a VM scale set itself doesn’t install web server components. The Azure portal is an interface for deployment, but you still need a mechanism to configure the VMs during deployment, which is achieved through extensions and scripts. The portal would be used to deploy the ARM template (which includes the extension profile).

D. Create an Azure policy. Azure Policy is used to enforce configurations and compliance after VMs are deployed. It can audit or remediate configuration drift. Policies are not designed to initiate the installation of software during VM provisioning. Policies ensure ongoing compliance but don’t handle the initial setup in this scenario.

Final Answer:

Upload a configuration script.
Modify the extension profile section of the Azure Resource Manager template.

12
Q

An app uses a virtual network with two subnets. One subnet is used for the application server. The other subnet is used for a database server. A network virtual appliance (NVA) is used as a firewall.
Traffic destined for one specific address prefix is routed to the NVA and then to an on-premises database server that stores sensitive data. A Border Gateway
Protocol (BGP) route is used for the traffic to the on-premises database server.
You need to recommend a method for creating the user-defined route.
Which two options should you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. For the virtual network configuration, use a VPN.
B. For the next hop type, use virtual network peering.
C. For the virtual network configuration, use Azure ExpressRoute.
D. For the next hop type, use a virtual network gateway.

A

Let’s break down the requirements and analyze each option in the context of creating a user-defined route (UDR) for traffic destined to an on-premises database via an NVA.

Understanding the Scenario:

Traffic from a subnet in Azure needs to be routed to a specific address prefix (on-premises database network).

The traffic must pass through a Network Virtual Appliance (NVA) acting as a firewall within the Azure VNet.

BGP routing is used for traffic after the NVA to reach the on-premises database server. This implies a connection between Azure and the on-premises network that supports BGP, such as VPN or ExpressRoute.

Analyzing Each Option:

A. For the virtual network configuration, use a VPN.

Correct. A VPN (Site-to-Site VPN) is a common method to establish a secure connection between an Azure virtual network and an on-premises network. While a VPN gateway is the specific component, using “VPN for the virtual network configuration” broadly implies setting up VPN-based hybrid connectivity. In this scenario, the BGP route mentioned likely refers to BGP being used over a VPN or ExpressRoute connection to exchange routes with the on-premises network. Therefore, using VPN for the virtual network configuration is a valid part of a complete solution for connecting to on-premises.

B. For the next hop type, use virtual network peering.

Incorrect. Virtual network peering is used to connect two Azure virtual networks directly. It’s not relevant for routing traffic from a subnet to an NVA within the same virtual network to reach an on-premises network. Peering is for VNet-to-VNet connectivity, not for routing to an NVA for on-premises access.

C. For the virtual network configuration, use Azure ExpressRoute.

Correct. Azure ExpressRoute provides a dedicated, private, and often higher-bandwidth connection between Azure and an on-premises network. Similar to VPN, ExpressRoute is a method for establishing hybrid connectivity. Using ExpressRoute for the virtual network configuration is also a valid part of a complete solution for connecting to on-premises, especially when dealing with sensitive data and potentially higher bandwidth requirements. ExpressRoute also supports BGP for route exchange.

D. For the next hop type, use a virtual network gateway.

Incorrect. While a virtual network gateway (VPN gateway or ExpressRoute gateway) is involved in connecting to on-premises via VPN or ExpressRoute, it is not the correct “next hop type” for a UDR when you want to route traffic to an NVA within the same VNet. For routing traffic to an NVA, the correct “next hop type” in a UDR is typically “Virtual appliance”. Using “Virtual network gateway” as the next hop type in a UDR would typically be used to route traffic directly out of the VNet to the VPN or ExpressRoute connection, bypassing the NVA. In this scenario, the traffic needs to go through the NVA first.

Rationale for choosing A and C:

Options A and C represent the two primary methods for establishing a hybrid network connection between Azure and on-premises, which is essential for the scenario described. The mention of a BGP route further reinforces that a VPN or ExpressRoute connection to on-premises is likely part of the intended architecture. While option D mentions a virtual network gateway, it’s not the correct next hop type for routing traffic to an NVA within the VNet. Option B is irrelevant for routing to on-premises via an NVA.

Therefore, Options A and C provide the necessary virtual network configurations for establishing the hybrid connectivity required to reach the on-premises database server, making them the most suitable recommendations for a “complete solution” in this context.
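
For context, here is a hedged Az PowerShell sketch of the user-defined route itself, using the “Virtual appliance” next hop type discussed under option D; the address prefix, NVA IP, and names are hypothetical:

  $rt = New-AzRouteTable -ResourceGroupName "RG1" -Location "eastus" -Name "rt-app-subnet"
  Add-AzRouteConfig -RouteTable $rt -Name "to-onprem-db" -AddressPrefix "10.20.0.0/24" -NextHopType VirtualAppliance -NextHopIpAddress "10.0.1.4"
  Set-AzRouteTable -RouteTable $rt   # then associate the route table with the application subnet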

Final Answer: A and C

13
Q

You manage a solution in Azure that consists of a single application that runs on a virtual machine (VM). Traffic to the application has increased dramatically.
The application must not experience any downtime and scaling must be dynamically defined.
You need to define an auto-scale strategy to ensure that the VM can handle the workload.
Which three options should you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Deploy application automatic vertical scaling.
B. Create a VM availability set.
C. Create a VM scale set.
D. Deploy application automatic horizontal scaling.
E. Deploy a custom auto-scale implementation.

A

Let’s analyze each option based on the requirements of no downtime and dynamically defined scaling for an application running on a single VM experiencing increased traffic.

A. Deploy application automatic vertical scaling.

Correctness: Vertical scaling involves increasing the resources (CPU, memory, etc.) of the existing virtual machine. Automatic vertical scaling would mean this process happens automatically in response to load changes.

Downtime: Vertical scaling typically requires a reboot of the virtual machine to apply the changes, resulting in application downtime. While some newer Azure VM sizes support resizing without downtime, this is not universally guaranteed and is more complex than horizontal scaling for zero-downtime scaling.

Dynamically Defined Scaling: Yes, vertical scaling can be dynamically defined based on metrics such as CPU usage or memory consumption.

Completeness as a Solution: While vertical scaling can be dynamically defined, the potential for downtime during a resize makes it a poor fit for an application that must not experience any downtime. It is therefore not one of the recommended options.

B. Create a VM availability set.

Correctness: Availability sets are used to improve the availability and fault tolerance of virtual machines by distributing them across multiple fault domains and update domains within an Azure datacenter.

Downtime: Availability sets help protect against planned and unplanned downtime events within a datacenter, enhancing the application’s resilience. However, availability sets themselves do not provide scaling capabilities.

Dynamically Defined Scaling: No, availability sets do not offer autoscaling functionality. They are focused on high availability, not scalability to handle increased traffic.

Completeness as a Solution: Availability sets address availability, but not the scaling requirement to handle dramatic traffic increases. Therefore, it is NOT a correct answer to the question about scaling strategy.

C. Create a VM scale set.

Correctness: VM scale sets are designed to deploy and manage a set of identical, auto-scaling virtual machines. They are ideal for horizontally scaling applications.

Downtime: VM scale sets are designed to minimize downtime during scaling operations. Scaling out (adding more instances) is generally a zero-downtime operation, especially when used with a load balancer and health probes.

Dynamically Defined Scaling: Yes, VM scale sets have built-in autoscaling capabilities. You can define rules based on various metrics (CPU utilization, memory, custom metrics, etc.) to automatically adjust the number of VM instances based on workload demands.

Completeness as a Solution: VM scale sets are a highly effective and complete solution for achieving both no downtime and dynamically defined scaling for VM-based applications. This is a very strong candidate for a correct answer.

D. Deploy application automatic horizontal scaling.

Correctness: Horizontal scaling (scaling out) involves adding more instances of the application to handle increased load. Automatic horizontal scaling means this addition of instances happens automatically based on demand.

Downtime: Horizontal scaling inherently aims to avoid downtime during scale-out. By distributing load across multiple instances, you can accommodate increased traffic without interrupting service to existing users.

Dynamically Defined Scaling: Yes, automatic horizontal scaling is by definition dynamic.

Completeness as a Solution: Deploying automatic horizontal scaling is conceptually the correct approach to meet the requirements. In Azure, VM scale sets are the primary and recommended way to implement automatic horizontal scaling for VM-based applications. This is also a very strong candidate for a correct answer.

E. Deploy a custom auto-scale implementation.

Correctness: A custom auto-scale implementation means designing and building your own scaling solution, potentially using Azure APIs, monitoring data, and custom logic.

Downtime: The potential for downtime depends entirely on how well the custom implementation is designed and executed. If done carefully, it could achieve zero-downtime scaling.

Dynamically Defined Scaling: Yes, a custom implementation can be designed to provide fully dynamic scaling, potentially based on very specific application metrics.

Completeness as a Solution: A custom auto-scale implementation can be a complete solution, but it is generally more complex, requires more development effort, and is harder to maintain compared to using managed Azure services like VM scale sets. It is also less aligned with the “minimize implementation time” aspect mentioned in the broader context of Azure solutions. However, it still technically can be a “complete solution”.

Choosing the three options that best fit the requirements:

To pick the three options that represent complete solutions, we should prioritize those that best meet both the “no downtime” and “dynamically defined scaling” requirements and that are practical, recommended Azure approaches.

C. Create a VM scale set and D. Deploy application automatic horizontal scaling are the most direct and best-practice Azure solutions for horizontal autoscaling of VM-based applications to meet these requirements. They are tightly related - VM scale sets are Azure’s managed service for horizontal autoscaling.

E. Deploy a custom auto-scale implementation is also technically a “complete solution” as it can achieve the goals, although it is less managed and more complex. It’s a valid option if there are very specific, unique requirements not easily met by VM scale sets, but for general autoscaling, VM scale sets are preferred.

A. Deploy application automatic vertical scaling is less suitable due to potential downtime and is generally not the best approach for handling dramatic traffic increases in web applications where horizontal scaling is more scalable and resilient.

B. Create a VM availability set is not a scaling solution at all.

Therefore, the three options that represent complete solutions for dynamic scaling with minimal downtime are C, D, and E.

Final Answer: C, D, and E

14
Q

You develop a web app that uses the tier D1 app service plan by using the Web Apps feature of Microsoft Azure App Service.
Spikes in traffic have caused increases in page load times.
You need to ensure that the web app automatically scales when CPU load is about 85 percent and minimize costs.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Configure the web App to the premium App Service tier
Configure a scale condition
Configure the web app to the standard App service tier
Enable autoscaling on the web app
Add a Scale rule
Switch to an Azure App Services consumption plan

A

To implement autoscaling for an Azure Web App based on CPU load and minimize costs, you should perform the following actions in sequence:

Configure the web app to the standard App service tier:

Reason: The D1 App Service plan is a Dev/Test tier and does not support autoscaling. To enable autoscaling, you must upgrade to a tier that supports it. The Standard App Service tier is a suitable and cost-effective option that supports autoscaling. While Premium tiers also support autoscaling, Standard tier is generally less expensive and sufficient for many production workloads where cost minimization is a priority.

Enable autoscaling on the web app:

Reason: After moving to a suitable App Service tier, you need to explicitly enable the autoscaling feature for your web app. This activates the autoscaling engine for the app.

Configure a scale condition:

Reason: You need to define the trigger for autoscaling. In this case, the requirement is to scale when the “CPU load is about 85 percent”. You need to configure a scale condition based on the CpuPercentage metric and set a threshold of 85%.

Add a Scale rule:

Reason: Once the scale condition is defined, you need to specify the action to take when the condition is met. This is done by adding a scale rule. The scale rule will define how to scale (e.g., increase instance count by a certain number) when the CPU load reaches 85%. You can also configure scale-in rules to reduce instances when load decreases, further optimizing costs.

Why other options are not in the correct sequence or not suitable:

Configure the web App to the premium App Service tier: While Premium tier also supports autoscaling, it’s generally more expensive than Standard. For cost minimization, starting with Standard tier is more appropriate. Premium might be considered later if Standard proves insufficient for performance or features, but not as the first step for cost-conscious scaling.

Switch to an Azure App Services consumption plan: Consumption plan is a serverless hosting option where you pay per-second consumption. While it autoscales automatically, it is a different hosting model than the dedicated instance-based App Service plans (like Standard and Premium). Switching to Consumption plan is a significant architectural change and might not be desired or suitable if the application was initially designed for a dedicated App Service plan. It might also lead to less predictable costs if traffic spikes are very high.


Correct Sequence of Actions:

Configure the web app to the standard App service tier

Enable autoscaling on the web app

Configure a scale condition

Add a Scale rule

Answer Area:

Configure the web app to the standard App service tier
Enable autoscaling on the web app
Configure a scale condition
Add a Scale rule
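
As a rough illustration, the tier change in step 1 can be scripted with the Az PowerShell module (the plan and resource group names are hypothetical); the remaining steps are then completed under the plan's Scale out settings, where you enable custom autoscale and define the CPU condition and rule:

# Move the App Service plan from the D1 (Shared) tier to Standard,
# which is the cheapest tier that supports autoscale.
Set-AzAppServicePlan `
  -ResourceGroupName "RG1" `
  -Name "AppServicePlan1" `
  -Tier "Standard" `
  -WorkerSize "Small"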

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
15
Q

You are implementing authentication for applications in your company. You plan to implement self-service password reset (SSPR) and multifactor authentication
(MFA) in Azure Active Directory (Azure AD).
You need to select authentication mechanisms that can be used for both MFA and SSPR.
Which two authentication methods should you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Short Message Service (SMS) messages
B. Azure AD passwords/Authentication App
C. Email addresses
D. Security questions
E. App passwords

A

The correct answers are A. Short Message Service (SMS) messages and B. Azure AD passwords/Authentication App.

Here’s why:

A. Short Message Service (SMS) messages

MFA: SMS is a common and widely supported method for multi-factor authentication. Azure AD can send a verification code via SMS to a user’s phone, which they must enter to complete the MFA process.

SSPR: SMS is also a standard method for self-service password reset. Users can choose to receive a verification code via SMS to their registered phone number as part of the password reset process.

B. Azure AD passwords/Authentication App (Interpreted as Authentication App - e.g., Microsoft Authenticator)

MFA: Authentication apps (like Microsoft Authenticator, Google Authenticator, etc.) are a strong and recommended method for MFA. They can provide push notifications or generate Time-based One-Time Passcodes (TOTP) that users use for verification.

SSPR: Authentication apps are also supported for self-service password reset. Users can use push notifications or TOTP codes from their authenticator app to verify their identity and reset their password. The phrase “Azure AD passwords” in this option is a bit misleading. It likely refers to using the Authentication App method, not the password itself as an MFA or SSPR mechanism.

Why other options are incorrect:

C. Email addresses

While email addresses can be used for SSPR, email is not offered as an MFA verification method in Azure AD, so it cannot satisfy a requirement that applies to both MFA and SSPR. Email is also a weak factor in general, because a compromised mailbox defeats it.

D. Security questions

Security questions are strongly discouraged for both MFA and SSPR. They are inherently insecure as answers are often easily guessable or publicly available. Microsoft is actively moving away from security questions as an authentication method due to security vulnerabilities.

E. App passwords

App passwords are not an authentication method for MFA or SSPR. App passwords are used as a workaround for legacy applications that do not support modern authentication (like MFA). They are generated for specific applications to bypass MFA requirements for those apps, not as an MFA or SSPR method themselves.

Therefore, the two authentication methods that are genuinely and commonly used for both MFA and SSPR in Azure AD are SMS messages and Authentication Apps.

Final Answer: The final answer is
A,B

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

HOTSPOT
You create a virtual machine scale set named Scale1. Scale1 is configured as shown in the following exhibit.

Create a virtual machine scale set
Basics Disks Networking Scaling Management Health Advanced

An Azure virtual machine scale set can automatically increase or decrease the number of VM instances that run your
application. This automated and elastic behavior reduces the management overhead to monitor and optimize the performance
of your application. Learn more about VMSS scaling
Instance
Initial instance count *
4
Scaling
Scaling policy *
Manual Custom
Minimum number of VMs *
2
Maximum number of VMs *
20
Scale out
CPU threshold (%) *
80
Duration in minutes *
5
Number of VMs to increase by *
2
Scale in
CPU threshold (%) *
30
Number of VMs to decrease by *
4
Diagnostic logs
Collect diagnostic logs from Autoscale
Disabled Enabled
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.
If Scale1 is utilized at 85 percent for six minutes after it is deployed, Scale1 will be running [answer choice].
2 virtual machines
4 virtual machines
6 virtual machines
10 virtual machines
20 virtual machines
If Scale1 is first utilized at 25 percent for six minutes after it is deployed, and then utilized at
50 percent for six minutes, Scale1 will be running [answer choice].
2 virtual machines
4 virtual machines
6 virtual machines
8 virtual machines
10 virtual machines

A

Step 1: Analyze Scenario 1

Initial State: The scale set starts with 4 virtual machines as defined by the “Initial instance count”.

CPU Utilization: The utilization is at 85 percent for six minutes.

Scale-Out Threshold: The scale-out threshold is set at 80 percent.

Scale-Out Duration: The duration to trigger a scale-out is 5 minutes.

Scale-Out Action: Increase the number of VMs by 2.

Since the CPU utilization (85%) is above the scale-out threshold (80%) and the duration (6 minutes) is longer than the required duration (5 minutes), a scale-out event will be triggered.

Calculation: Initial VMs (4) + VMs to increase by (2) = 6 virtual machines

Step 2: Analyze Scenario 2

Initial State: The scale set starts with 4 virtual machines.

First Utilization Period: 25 percent CPU for six minutes.

Second Utilization Period: 50 percent CPU for six minutes.

Scale-In Threshold: The scale-in threshold is set at 30 percent.

Scale-Out Threshold: The scale-out threshold is set at 80 percent.

First Utilization Period (25% CPU):

25% CPU is below the scale-in threshold (30%), and the six-minute duration exceeds the evaluation window, so a scale-in action is triggered.

Calculation: 4 VMs - 4 (number of VMs to decrease by) = 0, which is below the configured minimum of 2 VMs, so the instance count is floored at the minimum.

The VM count drops to 2.

Second Utilization Period (50% CPU):

50% CPU is above the scale-in threshold (30%) but below the scale-out threshold (80%), so neither rule is triggered.

The VM count remains at 2.

Step 3: Select the Answers

Based on the analysis:

Scenario 1 (85 percent for six minutes): Scale1 will be running 6 virtual machines.

Scenario 2 (25 percent then 50 percent for six minutes each): Scale1 will be running 2 virtual machines (scaled in from 4 to the configured minimum of 2 during the first period).

Correct Answer Choices:

If Scale1 is utilized at 85 percent for six minutes after it is deployed, Scale1 will be running 6 virtual machines.

If Scale1 is first utilized at 25 percent for six minutes after it is deployed, and then utilized at 50 percent for six minutes, Scale1 will be running 2 virtual machines.

Final Answer:

If Scale1 is utilized at 85 percent for six minutes after it is deployed, Scale1 will be running 6 virtual machines.

If Scale1 is first utilized at 25 percent for six minutes after it is deployed, and then utilized at 50 percent for six minutes, Scale1 will be running 2 virtual machines.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
17
Q

You plan to automate the deployment of a virtual machine scale set that uses the Windows Server 2016 Datacenter image.
You need to ensure that when the scale set virtual machines are provisioned, they have web server components installed.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Upload a configuration script.
Create an Azure policy.
Modify the extensionProfile section of the Azure Resource Manager template.
Create a new virtual machine scale set in the Azure portal.
Create an automation account.

A

To automate the deployment of a virtual machine scale set with web server components installed on each VM instance, you need a mechanism to execute a configuration script during the VM provisioning process. Let’s analyze each option:

A. Upload a configuration script.

Correct. You absolutely need a configuration script. This script will contain the commands necessary to install the web server components (like IIS on Windows Server). This script will be executed on each VM instance as it is provisioned. The script could be a PowerShell script for Windows Server.

B. Create an Azure policy.

Incorrect. Azure Policy is primarily used for governance, compliance, and enforcing standards after resources are deployed. While you could potentially use Azure Policy to audit or remediate VMs that don’t have web server components installed after they are running, it is not the mechanism to initiate the installation of web server components during the VM scale set provisioning process. Policy is reactive, not proactive in this initial setup context.

C. Create a new virtual machine scale set in the Azure portal.

Incorrect. Creating a virtual machine scale set in the Azure portal is the action of deploying the scale set itself. However, simply creating the scale set does not automatically install web server components. The portal is just the interface for deployment. You need to configure the scale set deployment to include the installation of web server components, which is done through other mechanisms.

D. Create an automation account.

Incorrect. An Azure Automation account (with runbooks or State Configuration) can configure VMs after deployment, but it is not required here and adds unnecessary overhead. The Custom Script Extension defined in the ARM template installs the web server components directly during provisioning.

E. Modify the extensionProfile section of the Azure Resource Manager template.

Correct. The extensionProfile section within an Azure Resource Manager (ARM) template is specifically designed to configure virtual machine extensions. VM extensions are the standard way to run post-deployment configuration tasks on Azure VMs and VM scale sets. You can use the CustomScriptExtension within the extensionProfile to execute a script (like the one uploaded in option A) on each VM instance during provisioning. This is the ideal and recommended method for automating software installation during VM scale set deployment.

Explanation of why A and E are the correct pair:

Upload a configuration script (A): You need a script that actually performs the web server component installation. This script will contain the necessary commands for Windows Server 2016 (e.g., PowerShell commands to install the Web-Server role). You will need to store this script in an accessible location, such as Azure Blob Storage, so that the VM instances can download and execute it.

Modify the extensionProfile section of the Azure Resource Manager template (E): You will use an ARM template to define your virtual machine scale set deployment. Within the extensionProfile of the ARM template, you will configure a CustomScriptExtension. This extension will:

Point to the location of your configuration script (uploaded in step A).

Specify the command to execute the script on each VM instance as part of the provisioning process.

By combining these two actions, you ensure that when the VM scale set is deployed using the ARM template, each VM instance will automatically download and execute your configuration script, thus installing the web server components during provisioning.

Final Answer: The final answer is
Upload a configuration script, Modify the extensionProfile section of the Azure Resource Manager template.
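
A hedged Az PowerShell sketch of the same idea (the script URL, storage location, and resource names are hypothetical; in an ARM template the equivalent settings live in the extensionProfile section):

# Add the Custom Script Extension to an existing scale set model so that every
# new instance downloads and runs the IIS installation script at provisioning time.
$vmss = Get-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss-web"

$settings = @{
  "fileUris"         = @("https://mystorageaccount.blob.core.windows.net/scripts/install-iis.ps1")
  "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File install-iis.ps1"
}

Add-AzVmssExtension `
  -VirtualMachineScaleSet $vmss `
  -Name "InstallWebServer" `
  -Publisher "Microsoft.Compute" `
  -Type "CustomScriptExtension" `
  -TypeHandlerVersion "1.10" `
  -Setting $settings

# Push the updated scale set model back to Azure.
Update-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss-web" -VirtualMachineScaleSet $vmss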

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
18
Q

HOTSPOT
You have several Azure virtual machines on a virtual network named VNet1. Vnet1 has two subnets that have 10.2.0.0/24 and 10.2.9.0/24 address spaces.

You configure an Azure Storage account as shown in the following exhibit.
contoso20 | Networking
Storage account
Firewalls and virtual networks
Selected networks
Configure network security for your storage accounts.
Virtual networks
+ Add existing virtual network + Add new virtual network
Virtual Network | Subnet | Address range | Endpoint Status | Resource Group | Subscription
VNET1 (1 subnet) | Prod | 10.2.0.0/24 | Enabled | RG1 | Visual Studio Premium with MSDN …

Network Routing
Routing preference
Microsoft network routing (selected) Internet routing
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.
The virtual machines on the 10.2.9.0/24 subnet will have
network connectivity to the file shares in the storage account
Azure Backup will be able to back up the unmanaged hard
disks of the virtual machines in the storage account
always
during a backup
never
always
during a backup
never

A

Statement 1: The virtual machines on the 10.2.9.0/24 subnet will have [answer choice] network connectivity to the file shares in the storage account.

Analysis: The Storage account’s “Firewalls and virtual networks” setting is configured to “Selected networks”.

Under “Virtual networks”, only VNET1 Subnet 1 with address range 10.2.0.0/24 is listed and enabled.

The VMs in question are on the 10.2.9.0/24 subnet.

Since the 10.2.9.0/24 subnet is not explicitly listed as an allowed network in the Storage account’s firewall settings, traffic from VMs in this subnet will be blocked by the storage account firewall.

Answer: never

Statement 2: Azure Backup will be able to back up the unmanaged hard disks of the virtual machines in the storage account [answer choice].

Analysis: Azure Backup is an Azure service that needs to access the storage account to store backups.

The storage account is configured to use “Selected networks” and has authorized subnet 10.2.0.0/24.

The “Endpoint Status” for VNET1 Subnet 1 (10.2.0.0/24) is Enabled, indicating that Service Endpoints for Storage are enabled on this subnet. This is crucial for allowing traffic from the VNet to the Storage service.

The “Network Routing” preference (Microsoft network routing) only controls how traffic reaches the storage account endpoint over the Microsoft backbone; it does not override the firewall rules.

Reasoning: To back up unmanaged disks, the Azure Backup service must read the VHD blobs directly from the storage account. The firewall only permits traffic from the 10.2.0.0/24 subnet, and the Backup service does not originate from that subnet. Backing up unmanaged VM disks in a storage account that has firewall/virtual network restrictions enabled is not supported, so the backup operation fails regardless of when it runs.

Answer: never

Final Answer:

The virtual machines on the 10.2.9.0/24 subnet will have never network connectivity to the file shares in the storage account.

Azure Backup will be able to back up the unmanaged hard disks of the virtual machines in the storage account never.
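
If access from the 10.2.9.0/24 subnet were required, that subnet would have to be added to the storage account firewall. A hedged Az PowerShell sketch (the subnet name is hypothetical, and the subnet must also have the Microsoft.Storage service endpoint enabled):

# Allow a second subnet through the storage account firewall.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNET1"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet2"

Add-AzStorageAccountNetworkRule `
  -ResourceGroupName "RG1" `
  -Name "contoso20" `
  -VirtualNetworkResourceId $subnet.Id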

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

DRAG DROP
You have virtual machines (VMs) that run a mission-critical application.

You need to ensure that the VMs never experience down time.

What should you recommend? To answer, drag the appropriate solutions to the correct scenarios. Each solution may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point

Solutions

Fault Domain
Availability Zone
Availability Set
Scale Sets

Scenario
Maintain application performance across identical VMs: Solution
Maintain application availability when an Azure datacenter fails: Solution
Maintain application performance across different VMs: Solution

A

Scenario 1: Maintain application performance across identical VMs:

Solution: Scale Sets

Explanation: Virtual Machine Scale Sets are designed to deploy and manage a set of identical, auto-scaling virtual machines. They are ideal for distributing application load across multiple identical VMs to maintain performance and handle increased traffic. A load balancer is typically used in front of a scale set to distribute traffic evenly across the instances.

Scenario 2: Maintain application availability when an Azure datacenter fails:

Solution: Availability Zone

Explanation: Availability Zones are physically separate datacenters within an Azure region. By deploying VMs across Availability Zones, you ensure that if one datacenter (zone) fails, your application remains available in the other zones. This provides the highest level of availability and resilience against datacenter-level failures.

Scenario 3: Maintain application performance across different VMs:

Solution: Availability Set

Explanation: Availability Sets are used to improve the availability of VMs within a single datacenter. They distribute VMs across fault domains (power and network isolation) and update domains (planned maintenance isolation). While primarily for availability, they also help in distributing load and maintaining performance to some extent, even if the VMs are not strictly identical. For scenarios where VMs might have different configurations or sizes but still need to be highly available within a region, Availability Sets are suitable. While Scale Sets are better for identical VMs, Availability Sets are more flexible for different VMs needing availability.

Incorrect Solutions and Why:

Fault Domain: Fault Domain is a component of Availability Sets and Availability Zones. It’s not a standalone solution but a concept describing how VMs are isolated within a datacenter. You don’t directly deploy a “Fault Domain”.

Final Answer:

Scenario 1: Maintain application performance across identical VMs: Scale Sets
Scenario 2: Maintain application availability when an Azure datacenter fails: Availability Zone
Scenario 3: Maintain application performance across different VMs: Availability Set

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

Your company has an office in Seattle.
You have an Azure subscription that contains a virtual network named VNET1.
You create a site-to-site VPN between the Seattle office and VNET1.
VNET1 contains the subnets shown in the following table.
Name IP address space
Subnet1 10.1.1.0/24
GatewaySubnet 10.1.200.0/28

You need to route all Internet-bound traffic from Subnet1 to the Seattle office.
What should you create?
a route for GatewaySubnet that uses the virtual network gateway as the next hop
a route for Subnet1 that uses the local network gateway as the next hop
a route for Subnet1 that uses the virtual network gateway as the next hop
a route for GatewaySubnet that uses the local network gateway as the next hop

A

To route all Internet-bound traffic from Subnet1 to the Seattle office via a site-to-site VPN, you need to implement forced tunneling. Forced tunneling in Azure involves creating a user-defined route (UDR) that directs all internet-bound traffic (0.0.0.0/0) from a subnet to your on-premises network through the VPN tunnel.

Let’s analyze each option:

a route for GatewaySubnet that uses the virtual network gateway as the next hop: This is incorrect. GatewaySubnet is reserved for the Azure VPN gateway. Routes applied to GatewaySubnet typically affect traffic to or through the gateway itself, not traffic originating from other subnets and destined for the internet.

a route for Subnet1 that uses the local network gateway as the next hop: This is incorrect. The Local Network Gateway in Azure represents your on-premises VPN device. It is the destination endpoint of the VPN tunnel in Azure’s configuration. Using it as the next hop in a route for Subnet1 is conceptually wrong for routing traffic through the VPN from Azure towards on-premises.

a route for Subnet1 that uses the virtual network gateway as the next hop: This is correct. To implement forced tunneling, you need to create a route for Subnet1 (where your VMs are located) that directs all internet-bound traffic to the Virtual Network Gateway. The Virtual Network Gateway is the Azure-side VPN gateway resource. By setting the next hop to the Virtual Network Gateway for the 0.0.0.0/0 destination prefix, you are instructing Azure to send all traffic destined for the internet (and not already within the VNet or on-premises networks known by BGP or static routes) through the VPN tunnel. This traffic will then be routed to your Seattle office via the site-to-site VPN connection.

a route for GatewaySubnet that uses the local network gateway as the next hop: This is incorrect for the same reasons as the first option. Routes on GatewaySubnet are not used to control internet-bound traffic from Subnet1.

To implement forced tunneling for internet-bound traffic from Subnet1 to the Seattle office, you should:

Create a Route Table.

Add a Route to the Route Table with the following properties:

Destination prefix: 0.0.0.0/0 (This represents all internet destinations)

Next hop type: Virtual network gateway

Associate the Route Table to Subnet1.

This configuration will ensure that any traffic originating from VMs in Subnet1 that is destined for the internet will be routed through the Azure Virtual Network Gateway and then over the site-to-site VPN connection to your Seattle office.
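
A minimal Az PowerShell sketch of these steps (the resource names and location are hypothetical):

# Create a route table with a 0.0.0.0/0 route that points at the VPN gateway,
# then associate it with Subnet1 so Internet-bound traffic is forced through the tunnel.
$rt = New-AzRouteTable -ResourceGroupName "RG1" -Name "rt-forced-tunnel" -Location "westus2"

Add-AzRouteConfig `
  -RouteTable $rt `
  -Name "default-to-onprem" `
  -AddressPrefix "0.0.0.0/0" `
  -NextHopType "VirtualNetworkGateway" | Set-AzRouteTable

$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNET1"
Set-AzVirtualNetworkSubnetConfig `
  -VirtualNetwork $vnet `
  -Name "Subnet1" `
  -AddressPrefix "10.1.1.0/24" `
  -RouteTable $rt | Set-AzVirtualNetwork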

Final Answer: The final answer is
a route for Subnet1 that uses the virtual network gateway as the next hop

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

You create an Azure virtual machine named VM1 in a resource group named RG1.
You discover that VM1 performs slower than expected.
You need to capture a network trace on VM1.
What should you do?
From the VM1 blade, configure Connection troubleshoot.
From Diagnostic settings for VM1, configure the performance counters to include network counters.
From the VM1 blade, install performance diagnostics and run advanced performance analysis.
From Diagnostic settings for VM1, configure the log level of the diagnostic agent.

A

Let’s analyze each option to determine the best way to capture a network trace on an Azure VM (VM1) that is performing slower than expected.

Option 1: From the VM1 blade, configure Connection troubleshoot.

Explanation: Azure Network Watcher’s Connection troubleshoot tool is designed to test and diagnose connectivity issues between two endpoints, such as Azure VMs, internet endpoints, or on-premises resources. It checks reachability, latency, and hop-by-hop routes.

Relevance to Network Trace: While Connection troubleshoot is useful for identifying connectivity problems, it does not capture a detailed network trace (like a .pcap file). It provides insights into connectivity paths and potential bottlenecks, but not packet-level information. Therefore, this is not the correct option for capturing a network trace.

Option 2: From Diagnostic settings for VM1, configure the performance counters to include network counters.

Explanation: Azure Monitor Diagnostic settings allow you to collect performance metrics and logs from Azure resources, including VMs. You can configure which performance counters to collect, including network-related counters (e.g., Network Interface Bytes Received/sec, Network Interface Bytes Sent/sec).

Relevance to Network Trace: Configuring performance counters will collect aggregated network performance metrics, which can be helpful for identifying trends and overall network utilization. However, performance counters are not a network trace. They do not capture the actual packets transmitted over the network. This option provides performance data, but not a detailed packet capture.

Option 3: From the VM1 blade, install performance diagnostics and run advanced performance analysis.

Explanation: Azure Performance Diagnostics (PerfInsights) is a VM extension that you install directly from the VM1 blade in the Azure portal to diagnose performance issues. It offers several analysis scenarios; the advanced performance analysis scenario collects a network trace together with performance counters, configuration information, and logs, and stores the output in a storage account for review.

Relevance to Network Trace: Because the advanced performance analysis scenario includes a network trace in its data collection, this is the only listed option that actually captures a network trace on VM1.

Option 4: From Diagnostic settings for VM1, configure the log level of the diagnostic agent.

Explanation: Diagnostic settings also allow you to configure the log level for the Azure Diagnostics extension. This controls the verbosity of logs collected from the VM’s operating system and applications (e.g., Windows Event Logs, Syslog).

Relevance to Network Trace: Configuring the log level affects the amount of logging data collected, but it does not capture network traffic. Diagnostic logs are system and application logs, not network packet captures. This option is irrelevant to capturing a network trace.

Conclusion:

Connection troubleshoot tests reachability, performance counters provide aggregated metrics, and the diagnostic agent log level only controls log verbosity; none of these produce a packet-level capture. Installing the Performance Diagnostics extension and running advanced performance analysis is the option that captures a network trace on VM1.

Outside the listed options, you could also connect to the VM over RDP or SSH and run a capture tool directly on the operating system (for example, netsh trace or Wireshark on Windows, or tcpdump on Linux), but the question asks which action to take for VM1 from the Azure portal.

Final Answer: The final answer is
From the VM1 blade, install performance diagnostics and run advanced performance analysis.
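
Following on from the in-guest alternative mentioned above, a hedged sketch of a manual capture on a Windows VM (the file path and size are arbitrary):

# Run in an elevated session inside VM1 to capture a network trace.
netsh trace start capture=yes tracefile=C:\Temp\vm1-trace.etl maxsize=512
# ...reproduce the slow behaviour, then stop the capture:
netsh trace stop
# Convert the .etl output (for example with the etl2pcapng tool) to open it in Wireshark.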

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

You have an Azure subscription named Subscription1 that contains an Azure virtual network named VNet1. VNet1 connects to your on-premises network by using Azure ExpressRoute.
You need to connect VNet1 to the on-premises network by using a site-to-site VPN. The solution must minimize cost.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Create a gateway subnet.
Create a VPN gateway that uses the VpnGw1 SKU.
Create a connection.
Create a local site VPN gateway.
Create a VPN gateway that uses the Basic SKU.

A

The correct three actions are:

Create a gateway subnet.

Create a VPN gateway that uses the VpnGw1 SKU.

Create a connection.

Here’s why each action is correct and why the others are incorrect:

Correct Actions:

Create a gateway subnet: A gateway subnet is a mandatory prerequisite for deploying a VPN gateway in Azure. It is a dedicated subnet within your VNet that hosts the gateway service. For ExpressRoute and site-to-site VPN coexistence, both gateways are deployed into the same GatewaySubnet, which must be /27 or larger.

Create a VPN gateway that uses the VpnGw1 SKU: Because VNet1 already connects to the on-premises network over ExpressRoute, the VPN gateway will run in a coexistence configuration alongside the ExpressRoute gateway. The Basic SKU is not supported for ExpressRoute/VPN coexistence; the gateway must be route-based and at least VpnGw1. VpnGw1 is therefore the lowest-cost SKU that still satisfies the requirement.

Create a connection: After creating the VPN gateway, you establish a connection between the Azure VPN gateway and your on-premises VPN device (represented in Azure by a local network gateway). The connection object defines the shared key, the connection type (site-to-site), and the other parameters needed to bring up the VPN tunnel.

Incorrect Actions:

Create a VPN gateway that uses the Basic SKU: Although Basic is the cheapest VPN gateway SKU, it cannot coexist with an ExpressRoute gateway in the same virtual network, so it cannot be used here despite the goal of minimizing cost.

Create a local site VPN gateway: This is not standard Azure terminology. Azure uses the term “local network gateway” to represent the on-premises VPN device, and while one is needed before the connection can be created, this option as phrased is not one of the three required actions; “Create a connection” is the step that establishes the VPN link from the Azure side once the gateway and the on-premises representation are in place.
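
A condensed Az PowerShell sketch of the three actions (all names, IP addresses, prefixes, and the shared key are hypothetical; the existing ExpressRoute gateway is assumed to be in the same virtual network):

# 1. Gateway subnet (skip if one already exists for the ExpressRoute gateway; it must be /27 or larger for coexistence).
$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
Add-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "GatewaySubnet" -AddressPrefix "10.0.255.0/27" | Set-AzVirtualNetwork

# 2. VPN gateway using the VpnGw1 SKU (Basic is not supported alongside ExpressRoute).
$pip    = New-AzPublicIpAddress -ResourceGroupName "RG1" -Name "pip-vpngw" -Location "eastus" -AllocationMethod Dynamic
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
$gwsub  = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "GatewaySubnet"
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $gwsub.Id -PublicIpAddressId $pip.Id
$gw = New-AzVirtualNetworkGateway -ResourceGroupName "RG1" -Name "vpngw1" -Location "eastus" `
  -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

# 3. Connection to the on-premises device (represented by a local network gateway).
$lng = New-AzLocalNetworkGateway -ResourceGroupName "RG1" -Name "lng-onprem" -Location "eastus" `
  -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.0.0/16"
New-AzVirtualNetworkGatewayConnection -ResourceGroupName "RG1" -Name "cn-onprem" -Location "eastus" `
  -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng -ConnectionType IPsec -SharedKey "s3cr3tKey!"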

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

Your network contains an on-premises Active Directory domain named contoso.com. The domain contains the users shown in the following table.
Name Member of
User1 Domain Admins
User2 Domain Users
User3 ADSyncAdmins
User4 Account Operators

You plan to install Azure AD Connect and enable SSO.
You need to specify which user to use to enable SSO. The solution must use the principle of least privilege.
Which user should you specify?
User3
User2
User1
User4

A

The correct answer is User1. Here’s why:

Principle of Least Privilege: The question states the solution must use the principle of least privilege, so we need the user with the minimum permissions that are still sufficient to enable SSO during Azure AD Connect installation. Enabling Seamless SSO is the limiting factor: when you enable it, Azure AD Connect asks for Domain Administrator credentials for each forest so that it can create the AZUREADSSOACC computer account and configure the related Kerberos settings in the domain.

Let’s analyze each user:

User1 (Domain Admins): Enabling Seamless SSO requires Domain Administrator credentials in the on-premises forest. Of the listed users, only User1 has these rights, so User1 is the least-privileged account that can actually complete the task. These credentials are used only during configuration; they are not stored or reused afterward.

User2 (Domain Users): Domain Users have no administrative rights in the domain and cannot create the computer account or modify the Kerberos configuration that Seamless SSO requires.

User3 (ADSyncAdmins): ADSyncAdmins is a group created by Azure AD Connect for managing the synchronization service itself. Membership in this group does not grant the domain-level permissions needed to enable Seamless SSO.

User4 (Account Operators): Account Operators can manage user and group accounts, but they cannot perform the domain-level configuration that enabling SSO requires.

Why User1 is the correct choice:

Enabling SSO is a domain-level operation that Azure AD Connect can only perform with Domain Administrator credentials. Least privilege means choosing the lowest-privileged account that can still do the job, and in this list that account is User1.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

HOTSPOT
You have an Azure subscription that contains the resource groups shown in the following table.
Name Region
RG1 East US
RG2 West US

RG1 contains the virtual machines shown in the following table.
Name Region
VM1 West US
VM2 West US
VM3 West US
VM4 West US

RG2 contains the virtual machines shown in the following table.
Name Region
VM5 East US 2
VM6 East US 2
VM7 West US
VM8 West US 2

All the virtual machines are configured to use premium disks and are accessible from the Internet.

VM1 and VM2 are in an availability set named AVSET1. VM3 and VM4 are in the same availability zone. VM5 and VM6 are in different availability zones.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.
Statements Yes No
VM1 is eligible for a Service Level Agreement (SLA) of 99.95 percent.
VM3 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
VM5 is eligible for a Service Level Agreement (SLA) of 99.99 percent.

A

Statement 1: VM1 is eligible for a Service Level Agreement (SLA) of 99.95 percent.

Analysis: VM1 is in an availability set named AVSET1 along with VM2. Virtual machines deployed in an availability set within the same region are protected from planned and unplanned maintenance events. Azure guarantees a 99.95% uptime SLA for virtual machines deployed in an availability set.

Conclusion: Yes.

Statement 2: VM3 is eligible for a Service Level Agreement (SLA) of 99.99 percent.

Analysis: VM3 and VM4 are in the same availability zone. Availability zones only provide resilience when VMs are spread across different zones; placing two VMs in the same zone does not qualify for the 99.99% SLA. To achieve 99.99%, you must deploy two or more VMs across different availability zones in the same region. A VM deployed in a single zone with premium disks is covered only by the 99.9% single-instance SLA.

Conclusion: No.

Statement 3: VM5 is eligible for a Service Level Agreement (SLA) of 99.99 percent.

Analysis: VM5 and VM6 are in different availability zones. When you deploy virtual machines across availability zones, Azure guarantees a 99.99% uptime SLA. This is because availability zones are physically separate datacenters within an Azure region, providing fault tolerance against datacenter-level failures.

Conclusion: Yes.

Therefore, the correct answers are:

Statement 1: Yes

Statement 2: No

Statement 3: Yes

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

A company plans to use third-party application software to perform complex data analysis processes. The software will use up to 500 identical virtual machines (VMs) based on an Azure Marketplace VM image.
You need to design the infrastructure for the third-party application server. The solution must meet the following requirements:
– The number of VMs that are running at any given point in time must change when the user workload changes.
– When a new version of the application is available in Azure Marketplace it must be deployed without causing application downtime.
– Use VM scale sets.
– Minimize the need for ongoing maintenance.
Which two technologies should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
single placement group
single storage account
managed disks
autoscale

A

Let’s analyze each option in the context of the requirements:

Single placement group:

Purpose: Placement groups in VM scale sets control the physical placement of VMs. They can be either single or regional. Single placement groups limit the scale set to a single fault domain and upgrade domain within a region. While they can be used for scenarios requiring low latency and tight proximity within a scale set, they don’t directly address the requirements of autoscaling, zero-downtime deployments for application updates, or minimizing maintenance in the context of this specific question.

Relevance to Requirements: Indirectly related to availability within a single region, but not directly to scaling, zero-downtime updates, or minimizing maintenance in the way the question is framed.

Single storage account:

Purpose: In older (unmanaged disk) VM scale sets, you would need to manage storage accounts for VM disks. However, with managed disks, Azure handles the storage account management. Recommending a “single storage account” is not directly relevant to the requirements in the context of modern VM scale sets using managed disks. In fact, for very large scale sets, Azure might distribute disks across multiple storage accounts under the hood for scalability and performance. With managed disks, the user doesn’t need to manage storage accounts directly.

Relevance to Requirements: Not directly relevant, and less so with managed disks being the best practice.

Managed disks:

Purpose: Managed disks simplify disk management for Azure VMs and VM scale sets. They handle storage account creation and management in the background, providing better reliability, scalability, and management compared to unmanaged disks. Using managed disks minimizes the need for ongoing maintenance related to storage accounts, as Azure handles this.

Relevance to Requirements: Directly contributes to minimizing ongoing maintenance by abstracting away storage account management. Also, managed disks are a prerequisite for many advanced VM scale set features, including autoscale and easier updates.

Autoscale:

Purpose: Autoscale is a core feature of VM scale sets that allows the number of VMs to be automatically adjusted based on metrics (like CPU utilization, memory, custom metrics) or a schedule. This directly addresses the requirement for the number of VMs to change with user workload. Autoscale also helps minimize ongoing maintenance by automating the scaling process, reducing the need for manual intervention to adjust VM counts. Furthermore, rolling upgrades (for zero-downtime deployments) are often triggered or work in conjunction with autoscaling to manage capacity during updates.

Relevance to Requirements: Directly addresses scaling based on workload and minimizes ongoing maintenance by automating scaling. Indirectly related to zero-downtime deployments as scaling is often part of deployment strategies.

Conclusion:

The two technologies that best address the requirements are:

Managed disks: Minimizes ongoing maintenance related to storage management and is a best practice for VM scale sets.

Autoscale: Directly addresses the dynamic scaling requirement and minimizes manual maintenance related to scaling.

While neither option directly and completely solves the zero-downtime deployment of Marketplace application updates on its own, using Managed disks is a foundational best practice for modern VM scale sets, and Autoscale is critical for dynamically adjusting resources based on workload and managing the scale set efficiently, which are key components of a well-designed, maintainable, and scalable infrastructure. Rolling upgrades within VM Scale Sets, often used for zero-downtime updates, rely on these underlying technologies.

Final Answer: The final answer is managed disks and autoscale.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

You have a resource group named RG1 that contains the following:
– A virtual network that contains two subnets named Subnet1 and AzureFirewallSubnet
– An Azure Storage account named contososa1
– An Azure firewall deployed to AzureFirewallSubnet
You need to ensure that contososa1 is accessible from Subnet1 over the Azure backbone network.
What should you do?
Modify the Firewalls and virtual networks settings for contososa1.
Create a stored access policy for contososa1.
Implement a virtual network service endpoint.
Remove the Azure firewall.

A

The correct answer is Implement a virtual network service endpoint.

Here’s why:

Virtual Network Service Endpoints:

Purpose: Virtual network service endpoints are the recommended way to secure Azure service resources to only your virtual network. They extend your virtual network private address space and the identity of your virtual network to the Azure services over a direct connection.

Azure Backbone Network: When you use service endpoints, traffic from your virtual network to the Azure service (in this case, Azure Storage) travels over the Azure backbone network. This provides optimized routing, security, and performance compared to going over the public internet.

Security: Service endpoints also enable you to secure your Azure Storage account (contososa1) to only allow traffic originating from the specified subnet (Subnet1). This is achieved through the Storage Account’s firewall settings, which you configure after enabling the service endpoint.

Modify the Firewalls and virtual networks settings for contososa1:

While you will need to modify the “Firewalls and virtual networks settings” of the storage account, this is a subsequent step to enabling service endpoints. Simply modifying these settings without enabling a service endpoint on the subnet will not ensure that the traffic uses the Azure backbone network. This option is incomplete as a standalone solution.

Create a stored access policy for contososa1:

Stored access policies are used to generate Shared Access Signature (SAS) tokens. SAS tokens are for granting delegated access to storage resources with specific permissions and for a defined period. They are related to authentication and authorization, not to network connectivity or ensuring traffic traverses the Azure backbone network. SAS tokens do not address the requirement.

Remove the Azure firewall:

Removing the Azure firewall is a security risk and is not the correct approach. The Azure firewall is designed to enhance network security, not hinder access to storage from within the virtual network when properly configured. Removing it is counterproductive and does not help achieve the requirement of backbone network access in a secure manner.

In summary:

To ensure contososa1 is accessible from Subnet1 over the Azure backbone network, the fundamental step is to Implement a virtual network service endpoint for Azure Storage on Subnet1. After this, you would then configure the “Firewalls and virtual networks settings” of contososa1 to allow access from Subnet1.

Final Answer: The final answer is Implement a virtual network service endpoint.
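
A minimal Az PowerShell sketch of both steps, assuming a virtual network named VNet1 and a Subnet1 prefix of 10.0.1.0/24 (both hypothetical):

# 1. Enable the Microsoft.Storage service endpoint on Subnet1 so traffic to
#    Azure Storage stays on the Azure backbone network.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" `
  -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork

# 2. Allow Subnet1 through the storage account firewall and deny other networks.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1"
Add-AzStorageAccountNetworkRule -ResourceGroupName "RG1" -Name "contososa1" -VirtualNetworkResourceId $subnet.Id
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "RG1" -Name "contososa1" -DefaultAction Deny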

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

You have an Azure subscription that contains 100 virtual machines.
You have a set of PowerShell scripts that validate the virtual machine environment.
You need to run the scripts whenever there is an operating system update on the virtual machines. The solution must minimize implementation time and recurring costs.
Which three resources should you use to implement the scripts? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
an alert action group
an Azure Monitor query
an Azure Automation runbook
a virtual machine that has network access to the 100 virtual machines
an alert rule

A

To meet the requirements of running PowerShell scripts on OS updates for 100 VMs while minimizing implementation time and recurring costs, we need to leverage Azure services that provide automation and monitoring capabilities. Let’s evaluate each option:

an alert action group:

Purpose: Action groups in Azure Monitor are used to define a collection of actions to perform when an alert is triggered. Actions can include sending notifications, calling webhooks, and crucially, running Azure Automation runbooks.

Relevance: This is a crucial component. We will need an action group to connect the alert (detecting OS updates) to the execution of the PowerShell scripts (validation scripts). Action groups are designed for automating responses to alerts.

an Azure Monitor query:

Purpose: Azure Monitor queries (typically Log Analytics queries) are used to retrieve and analyze data collected by Azure Monitor, such as logs and metrics. You can use queries to identify specific events or conditions within your Azure environment.

Relevance: While an Azure Monitor query itself doesn’t directly run scripts, it’s essential for defining the condition that triggers the script execution. We would use a query to detect OS update events in the Azure Activity Log or other relevant logs. This query would then be used as the basis for an alert rule. However, as a standalone resource for implementing the scripts, it’s less direct than other options. The alert rule is the resource that uses the query and triggers actions.

an Azure Automation runbook:

Purpose: Azure Automation runbooks allow you to automate tasks in Azure and hybrid environments using PowerShell or Python.

Relevance: This is the core component for running the PowerShell validation scripts. The runbook will contain the PowerShell scripts that validate the VM environment. We will trigger this runbook when an OS update is detected.

a virtual machine that has network access to the 100 virtual machines:

Purpose: A VM could be used as a jump box or control machine to manually run scripts against the 100 VMs.

Relevance: This option is not suitable for automated script execution based on OS updates and does not minimize implementation time or recurring costs. It would require manual scheduling or complex custom scripting to detect OS updates and trigger scripts, adding to implementation time and ongoing management overhead. It also incurs costs for running the VM continuously. The requirement is for an automated solution, making this option less desirable.

an alert rule:

Purpose: Azure Monitor alert rules are used to detect specific conditions in your Azure environment based on metrics, logs, or activity log events. When the condition is met, the alert rule triggers defined actions.

Relevance: This is essential. We will need an alert rule to monitor for OS update events on the virtual machines. The alert rule will use a condition (possibly based on an Azure Monitor query that detects OS update events in activity logs) and be configured to trigger the action group (which in turn runs the Automation Runbook).

The Three Essential Resources:

To implement the automated PowerShell script execution upon OS updates with minimal implementation time and recurring costs, the three core resources are:

an alert action group: To define the action of running the Automation Runbook when an alert is triggered.

an Azure Automation runbook: To contain and execute the PowerShell validation scripts.

an alert rule: To monitor for OS update events and trigger the action group when an update occurs.

These three resources work together to create an automated, serverless, and cost-effective solution. While an Azure Monitor query is implicitly needed to define the alert condition, the alert rule is the resource that directly triggers the action based on that condition.

Final Answer: The final answer is:

an alert action group

an Azure Automation runbook

an alert rule
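
As a rough sketch of the runbook piece, assuming an Automation account named AutoAcct1 already exists (the alert rule and action group that invoke it are then wired up in Azure Monitor):

# Import the existing validation script as a PowerShell runbook and publish it,
# so an alert action group can invoke it whenever an OS-update event fires.
Import-AzAutomationRunbook `
  -ResourceGroupName "RG1" `
  -AutomationAccountName "AutoAcct1" `
  -Name "Validate-VmEnvironment" `
  -Path "C:\scripts\Validate-VmEnvironment.ps1" `
  -Type PowerShell

Publish-AzAutomationRunbook `
  -ResourceGroupName "RG1" `
  -AutomationAccountName "AutoAcct1" `
  -Name "Validate-VmEnvironment"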

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

You have an Active Directory forest named contoso.com.
You install and configure Azure AD Connect to use password hash synchronization as the single sign-on (SSO) method. Staging mode is enabled.
You review the synchronization results and discover that the Synchronization Service Manager does not display any sync jobs.
You need to ensure that the synchronization completes successfully.
What should you do?
Run Azure AD Connect and disable staging mode.
From Synchronization Service Manager, run a full import.
Run Azure AD Connect and set the SSO method to Pass-through Authentication.
From Azure PowerShell, run Start-AdSyncSyncCycle -PolicyType Initial.

A

The correct answer is: Run Azure AD Connect and disable staging mode.

Explanation:

Here’s why this is the correct solution and why the other options are incorrect:

Run Azure AD Connect and disable staging mode. (Correct)

Staging Mode Behavior: When Azure AD Connect is configured in staging mode, it is designed to be a passive server. This means it performs import and synchronization operations, but it does not export changes to Azure AD. This is by design for staging mode, allowing you to test configurations or have a backup server without actively synchronizing.

Synchronization Service Manager in Staging Mode: In staging mode, you might see import and synchronization operations in the Synchronization Service Manager, but export operations will be skipped, and you might not see the typical “sync jobs” that indicate active synchronization to Azure AD. The lack of sync jobs in the description strongly suggests staging mode is the cause.

Disabling Staging Mode: To make the Azure AD Connect server active and allow it to synchronize changes to Azure AD, you must disable staging mode. This is the primary purpose of staging mode - to be turned on for specific scenarios and off for active synchronization.

From Synchronization Service Manager, run a full import. (Incorrect)

Full Import Purpose: A full import is used to refresh the connector space with all objects from the connected directories (Active Directory and Azure AD). While a full import might be necessary after certain configuration changes or if data is inconsistent, it does not override staging mode. If staging mode is enabled, even after a full import, the export step (which synchronizes changes to Azure AD) will still be skipped. Running a full import alone will not resolve the issue of no sync jobs being displayed because the core problem is staging mode being active.

Run Azure AD Connect and set the SSO method to Pass-through Authentication. (Incorrect)

SSO Method Irrelevance: The SSO method (Password Hash Synchronization, Pass-through Authentication, or Federation) is a separate configuration from staging mode. Changing the SSO method will not disable staging mode or cause synchronization to start if staging mode is enabled. Password hash synchronization is already configured as per the question, and changing it to Pass-through Authentication is not related to the issue of no sync jobs in staging mode.

From Azure PowerShell, run Start-AdSyncSyncCycle -PolicyType Initial. (Incorrect)

Start-AdSyncSyncCycle Purpose: This PowerShell command is used to manually trigger a synchronization cycle. While this command can initiate a sync, it will still respect the staging mode configuration. If staging mode is enabled, running this command will likely start an import and synchronization cycle, but the export to Azure AD will still be skipped because of staging mode. Therefore, this command will not resolve the fundamental issue of staging mode preventing active synchronization.

In summary: The root cause is staging mode being enabled. Disabling staging mode using the Azure AD Connect wizard is the direct and necessary action to allow synchronization to Azure AD to proceed and for sync jobs to be displayed correctly in Synchronization Service Manager.

Final Answer: The final answer is Run Azure AD Connect and disable staging mode.
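
A quick way to confirm the diagnosis on the Azure AD Connect server, assuming the ADSync PowerShell module that ships with Azure AD Connect:

# Shows the current scheduler settings, including whether staging mode is enabled.
Get-ADSyncScheduler
# If StagingModeEnabled is True, re-run the Azure AD Connect wizard, choose
# "Configure staging mode", and disable it so exports to Azure AD resume.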

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

Your on-premises network contains 100 virtual machines that run Windows Server 2019.
You have an Azure subscription that contains an Azure Log Analytics workspace named Workspace1.
You need to collect errors from the Windows event logs on the virtual machines.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Create an Azure Event Grid domain.
Deploy the Microsoft Monitoring Agent.
Configure Windows Event Forwarding on the virtual machines.
Create an Azure Sentinel workspace.
Modify Agent configuration settings in Workspace1.

A

The correct answers are:

Deploy the Microsoft Monitoring Agent.

Modify Agent configuration settings in Workspace1.

Explanation:

Let’s break down why these two actions are necessary and why the others are not the best choices:

Deploy the Microsoft Monitoring Agent (MMA).

Why it’s correct: The Microsoft Monitoring Agent (MMA), also known as the Log Analytics agent, is the primary agent used to connect Windows and Linux machines (including on-premises VMs) to Azure Monitor and Log Analytics. To collect data from the on-premises Windows Server 2019 VMs and send it to Workspace1, you must deploy the MMA agent on each of these VMs. The agent is responsible for gathering the event logs and securely transmitting them to the configured Log Analytics workspace.

Why other options are not replacements: Without an agent installed on the VMs, there’s no mechanism to collect and send the event log data to Azure.

Modify Agent configuration settings in Workspace1.

Why it’s correct: After deploying the MMA agent, you need to configure your Log Analytics workspace (Workspace1) to specify which data to collect from the agents. This is done in the “Agents configuration” or “Data” settings within the Log Analytics workspace in the Azure portal. Specifically, you need to configure the Windows Event Logs settings to collect Error events. You can specify which event logs to collect (e.g., Application, System, Security) and the minimum severity level (e.g., Error, Warning, Information).

Why other options are not replacements: Simply deploying the agent is not enough. You need to tell the Log Analytics workspace what kind of data to expect and collect from the connected agents. This configuration within Workspace1 is essential for data collection to start and for collecting only errors as required.
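As an illustration of that workspace-side configuration, the Az.OperationalInsights PowerShell module can add a Windows event log data source that collects only Error events. This is a hedged sketch: the resource group name RG1 and the data source name are assumptions, and the same settings can be made in the portal under Agents configuration.

# Sketch: collect only Error events from the System log into Workspace1
# (the resource group name RG1 is an assumption).
New-AzOperationalInsightsWindowsEventDataSource `
    -ResourceGroupName "RG1" `
    -WorkspaceName "Workspace1" `
    -Name "System-Errors" `
    -EventLogName "System" `
    -CollectErrors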

Create an Azure Event Grid domain.

Why it’s incorrect: Azure Event Grid is a service for routing events from various Azure services and custom applications to event handlers. It’s not directly used for collecting Windows Event Logs from VMs. Event Grid is more for real-time event-driven architectures, not log aggregation in this context.

Configure Windows Event Forwarding on the virtual machines.

Why it’s incorrect: While Windows Event Forwarding (WEF) can be used to forward events to a central collector, it’s not necessary for this scenario. The MMA agent can directly collect Windows Event Logs from the local machine and send them to Log Analytics without requiring WEF. Using WEF would add unnecessary complexity and management overhead for 100 VMs. MMA is the simpler and more direct approach for this requirement.

Create an Azure Sentinel workspace.

Why it’s incorrect: Azure Sentinel is a Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solution built on top of Azure Log Analytics. While Sentinel uses Log Analytics workspaces as its data store, creating a new Azure Sentinel workspace is not necessary to simply collect Windows Event Logs into an existing Log Analytics workspace (Workspace1). You already have Workspace1, and that’s sufficient for log collection. Sentinel is for advanced security analysis on top of collected logs, not for the basic act of collecting logs itself.

Therefore, the two essential actions are deploying the MMA agent and configuring the Log Analytics workspace to collect Windows Event Log errors.

Final Answer: The final answer is Deploy the Microsoft Monitoring Agent and Modify Agent configuration settings in Workspace1.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

You have an Azure subscription named Subscription1.
You deploy a Linux virtual machine named VM1 to Subscription1.
You need to monitor the metrics and the logs of VM1.
What should you use?
Azure HDInsight
Azure Analysis Services
Linux Diagnostic Extension (LAD)
the AzurePerformanceDiagnostics extension

A

The correct answer is Linux Diagnostic Extension (LAD).

Explanation:

Linux Diagnostic Extension (LAD):

Purpose: LAD is an Azure VM Extension specifically designed for Linux virtual machines. Its primary function is to collect system-level metrics and logs from the Linux operating system and send them to Azure Monitor.

Functionality: LAD can collect a wide range of metrics, including CPU utilization, memory usage, disk I/O, network traffic, and more. It can also be configured to collect various logs, such as syslog, authlog, and custom application logs.

Integration with Azure Monitor: LAD seamlessly integrates with Azure Monitor, allowing you to visualize metrics, query logs, set up alerts, and perform in-depth analysis of your Linux VM’s performance and health within the Azure portal.

Let’s look at why the other options are incorrect:

Azure HDInsight:

Purpose: Azure HDInsight is a cloud service for big data analytics. It allows you to process massive amounts of data using popular open-source frameworks like Hadoop, Spark, Hive, and others.

Relevance to VM Monitoring: Azure HDInsight is not related to monitoring the metrics and logs of a single Linux VM. It’s for big data processing and analytics, not general VM monitoring.

Azure Analysis Services:

Purpose: Azure Analysis Services is a fully managed platform as a service (PaaS) that provides enterprise-grade semantic modeling capabilities in the cloud. It’s used for building and deploying OLAP (Online Analytical Processing) models for business intelligence and data analysis.

Relevance to VM Monitoring: Azure Analysis Services is not used for VM monitoring. It’s for business intelligence and analytical data modeling.

the AzurePerformanceDiagnostics extension:

Purpose: The AzurePerformanceDiagnostics extension is designed for Windows virtual machines. It collects performance data and helps troubleshoot performance issues on Windows VMs.

Relevance to Linux VMs: This extension is not compatible with Linux VMs. It is specifically for Windows operating systems.

In summary:

For monitoring metrics and logs of a Linux virtual machine in Azure, the Linux Diagnostic Extension (LAD) is the correct and purpose-built tool. It’s designed to collect the necessary data from Linux VMs and integrate with Azure Monitor for visualization, analysis, and alerting.

Final Answer: The final answer is Linux Diagnostic Extension (LAD).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

HOTSPOT
You plan to deploy five virtual machines to a virtual network subnet.

Each virtual machine will have a public IP address and a private IP address.

Each virtual machine requires the same inbound and outbound security rules.

What is the minimum number of network interfaces and network security groups that you require? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Minimum number of network interfaces:
5
10
15
20
Minimum number of network security groups:
1
2
5
10

A

Answer:

Minimum number of network interfaces: 5
Minimum number of network security groups: 1

Explanation:

Minimum number of network interfaces: 5

Each virtual machine needs to be connected to a virtual network to have a private IP address and to be accessible over the network.

A network interface (NIC) is the resource that allows a virtual machine to connect to a virtual network.

Since you have five virtual machines, you need a minimum of five network interfaces, one for each virtual machine. A single NIC can be configured with both a private and a public IP address.

Minimum number of network security groups: 1

Network security groups (NSGs) are used to filter network traffic to and from Azure resources in a virtual network.

You can associate an NSG with either a subnet or individual network interfaces.

Since all five virtual machines require the same inbound and outbound security rules, you can apply a single NSG at the subnet level. By applying the NSG to the subnet, all virtual machines within that subnet (including the five you plan to deploy) will automatically inherit the security rules defined in that NSG.

Therefore, you only need a minimum of one network security group applied to the subnet to enforce the same security rules for all five virtual machines.
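As a minimal PowerShell sketch of this design, one NSG is created and associated with the subnet so that every VM connected to it inherits the same rules. The resource group, virtual network, subnet, location, and address prefix below are assumptions, since the question does not name them.

# Sketch: one NSG applied at the subnet level covers all five VMs
# (resource group, VNet, subnet, location, and prefix are assumptions).
$nsg  = New-AzNetworkSecurityGroup -ResourceGroupName "RG1" -Name "NSG1" -Location "westus"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"

Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" `
    -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg
$vnet | Set-AzVirtualNetwork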

Final Answer:

Minimum number of network interfaces: 5
Minimum number of network security groups: 1

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

You have an Azure subscription named Subscription1 that includes an Azure File share named share1.
You create several Azure virtual machines in Subscription1. All of the virtual machines belong to the same virtual network.
You have an on-premises Hyper-V server named Server1. Server1 hosts a virtual machine named VM1.
You plan to replicate VM1 to Azure.
You need to create additional objects in Subscription1 to support the planned deployment.
Which three objects should you create? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Hyper-V site
Azure Recovery Services Vault
storage account
replication policy
Azure Traffic Manager instance
endpoint

A

The three objects you should create in Subscription1 to support the planned Hyper-V VM replication to Azure are:

Azure Recovery Services Vault

Storage account

Hyper-V site

Explanation:

Azure Recovery Services Vault: This is the central management service for Azure Site Recovery (ASR). ASR is the Azure service used to replicate on-premises Hyper-V VMs to Azure. You must create a Recovery Services vault in your Azure subscription to configure and manage the replication, failover, and failback processes.

Storage account: When you replicate on-premises VMs to Azure using ASR, the replicated VM disks are stored as managed disks within Azure Storage accounts. You need a storage account in your Azure subscription to serve as the target location for the replicated VM data.

Hyper-V site: In Azure Site Recovery terminology, you register your on-premises Hyper-V environment as a Hyper-V site within the Recovery Services vault. The Hyper-V site is the logical representation of your on-premises Hyper-V infrastructure within Azure Site Recovery, and it is what allows ASR to communicate with and manage your on-premises Hyper-V hosts. This setup involves deploying the Azure Site Recovery Provider on your Hyper-V hosts and registering them to the Recovery Services vault.
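As a hedged PowerShell sketch of the Azure-side objects, the vault and storage account might be created as shown below; the resource names, resource group, and region are assumptions. The Hyper-V site and replication policy are then configured inside the vault, typically through the portal as part of the Site Recovery setup.

# Sketch: create the Recovery Services vault and the target storage account
# (names, resource group, and region are assumptions).
New-AzRecoveryServicesVault -Name "ASRVault1" -ResourceGroupName "RG1" -Location "westeurope"

New-AzStorageAccount -ResourceGroupName "RG1" -Name "asrtargetstore01" `
    -Location "westeurope" -SkuName "Standard_LRS"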

Let’s examine why the other options are not the primary necessary objects for basic Hyper-V to Azure replication:

replication policy: While a replication policy is required to configure the settings for replication (like recovery point retention, snapshot frequency), it’s something you configure within the Recovery Services vault, not a standalone object you create before setting up replication. It’s a configuration setting, not a top-level resource to create first. However, given that the question asks for objects to create, and a replication policy is indeed a configurable entity within ASR, it’s a plausible option.

Azure Traffic Manager instance: Azure Traffic Manager is a DNS-based traffic load balancer. It is used to distribute traffic across different Azure regions. It is not directly involved in the Hyper-V to Azure replication process itself. Traffic Manager might be used in a disaster recovery scenario after a failover to Azure to direct user traffic to the Azure-replicated VMs, but it’s not a prerequisite object for setting up the replication itself.

Endpoint: “Endpoint” is a very generic term. In Azure networking, endpoints can refer to various things (like service endpoints, private endpoints, public endpoints of VMs, etc.). In the context of basic Hyper-V to Azure replication setup, “endpoint” is not a specific Azure object that you would typically create upfront as a primary requirement. It’s too broad and not directly relevant as a core object needed for initial replication setup.

Considering the core components needed to initiate Hyper-V to Azure replication, the most essential and directly created objects are the Azure Recovery Services vault, a storage account, and the Hyper-V site (representing the on-premises Hyper-V environment within ASR); a replication policy is also a key configuration entity that you need to define.

Given the options and the context of setting up replication, the best three choices that represent core object creation are:

Azure Recovery Services Vault

storage account

replication policy

Although “Hyper-V site” is also conceptually needed, “replication policy” is a more concrete and explicitly configured entity in Azure Site Recovery and is often considered a key “object” in the configuration process. If forced to pick three from the provided list, these three are the most pertinent.

Final Answer: The final answer is Azure Recovery Services Vault, storage account, and replication policy.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
What should you do?
Use the Synchronization Service Manager to modify the Metaverse Designer tab.
Use Azure AD Connect to customize the synchronization options.
Use the Synchronization Rules Editor to create a synchronization rule.
Use Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector.

A

The correct answer is Use Azure AD Connect to customize the synchronization options.

Explanation:

Here’s why this option is the most appropriate and why the others are less suitable:

Use Azure AD Connect to customize the synchronization options. (Correct)

Azure AD Connect Customization: Azure AD Connect provides a wizard-driven interface for configuring and customizing synchronization. When you run Azure AD Connect again after the initial setup, you can choose to “Customize synchronization options.”

Filtering Options: Within the customization options, Azure AD Connect allows you to configure filtering based on domains, organizational units (OUs), and attributes. This attribute-based filtering is precisely what you need to filter users based on their UPN suffix.

UPN Suffix Filtering: You can configure attribute-based filtering to only synchronize users where the userPrincipalName attribute ends with @contoso.com. This ensures that only users with the desired UPN suffix from the contoso.local domain are synchronized to Azure AD.

User-Friendly Approach: Using the Azure AD Connect wizard is the recommended and most user-friendly method for making common synchronization customizations like filtering.

Use the Synchronization Rules Editor to create a synchronization rule. (Less Ideal, but Possible)

Synchronization Rules Editor Power: The Synchronization Rules Editor is a more powerful tool that allows for very granular control over synchronization rules. You could use it to create a custom inbound synchronization rule that filters users based on their UPN suffix.

Complexity: However, using the Synchronization Rules Editor is generally considered a more advanced approach. It requires a deeper understanding of the synchronization engine and rule syntax. For a relatively simple filtering requirement like this, using the Azure AD Connect wizard is a simpler and more recommended approach. Directly editing sync rules is typically reserved for more complex scenarios that the wizard doesn’t directly cover.

Use Synchronization Service Manager to modify the Metaverse Designer tab. (Incorrect)

Metaverse Designer Purpose: The Metaverse Designer in Synchronization Service Manager is primarily used to manage the schema of the Metaverse, which is the central identity repository in Azure AD Connect. It’s used for extending the schema or modifying object type mappings. It’s not the tool for filtering users based on attributes like UPN suffix.

Use Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector. (Incorrect)

AD DS Connector Configuration: Modifying the AD DS Connector in Synchronization Service Manager mainly involves configuring the connection to your on-premises Active Directory, such as credentials, connected domains, and OU selection for synchronization. While you can select specific OUs to sync, you cannot directly apply attribute-based filtering (like UPN suffix) at the connector level. Attribute-based filtering is configured through synchronization rules or the Azure AD Connect wizard’s customization options.

In summary:

While the Synchronization Rules Editor could be used, the Azure AD Connect wizard’s customization options provide a more user-friendly and recommended way to achieve the desired filtering based on UPN suffix. It’s the intended tool for common synchronization configuration changes like filtering.

Final Answer: The final answer is Use Azure AD Connect to customize the synchronization options.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

You have an Azure subscription that contains the resource groups shown in the following table.
Name Region
RG1 West US
RG2 West US
RG3 East US
You have the Azure SQL servers shown in the following table.
Name Region In resource group
Sql1 West US RG1
Sql2 East US RG2
Sql3 West US RG3
Sql4 West US RG1

You create an Azure SQL database named DB1 on Sql1 in an elastic pool named Pool1.
You need to create an Azure SQL database named DB2 in Pool1.
Where should you deploy DB2?
Sql1
Sql2
Sql3
Sql4

A

To create an Azure SQL database (DB2) within an existing elastic pool (Pool1), DB2 must be deployed on the same Azure SQL server that hosts the elastic pool.

According to the question:

DB1 is an Azure SQL database created on Sql1.

DB1 is in an elastic pool named Pool1.

This directly implies that Pool1 is hosted on Sql1.

Elastic pools are server-scoped. An elastic pool is created on a specific Azure SQL server, and all databases that are part of that elastic pool must reside on the same SQL server. You cannot have databases in an elastic pool that are spread across different SQL servers.

Therefore, to create DB2 in Pool1, you must deploy DB2 on Sql1.
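For illustration, creating DB2 in Pool1 with PowerShell targets the server that hosts the pool. This is a sketch that assumes Pool1 already exists on Sql1 and uses the resource group shown in the question's table.

# Sketch: create DB2 inside the existing elastic pool Pool1 on server Sql1.
New-AzSqlDatabase -ResourceGroupName "RG1" `
    -ServerName "sql1" `
    -DatabaseName "DB2" `
    -ElasticPoolName "Pool1"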

Let’s examine the options:

Sql1: This is the correct answer. Since Pool1 is hosted on Sql1, DB2 must also be deployed on Sql1 to be part of Pool1.

Sql2: Sql2 is located in the East US region and resource group RG2. Sql1 is in West US and RG1. Elastic pools and databases within them must be in the same region and on the same SQL server. Sql2 is not the correct location.

Sql3: Sql3 is located in the West US region and resource group RG3. While Sql3 is in the same region (West US) as Sql1, it is a different SQL server. You cannot add a database to an elastic pool that is hosted on a different SQL server.

Sql4: Sql4 is located in the West US region and resource group RG1, the same region and resource group as Sql1. However, it is still a different SQL server instance than Sql1. You cannot add a database to an elastic pool hosted on Sql1 if you deploy DB2 on Sql4.

Conclusion:

The only valid location to deploy DB2 so that it can be part of Pool1 is Sql1, as Pool1 is hosted on Sql1.

Final Answer: The final answer is Sql1.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

HOTSPOT
You deploy an Azure virtual machine scale set named VSS1 that contains 30 virtual machine instances across three zones in the same Azure region. The instances host an application named App1 that must be accessible by using HTTP and HTTPS traffic. Currently, VSS1 is inaccessible from the internet.

You need to use Azure Load Balancer to provide access to App1 across all the instances from the internet by using a single IP address.

What should you configure? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Minimum number of network security groups (NSGs) to create:
1
3
30
Objects to assign to the network security groups (NSGs):
1 subnet
3 subnets
30 network interfaces
Minimum number of Azure Standard Load Balancer rules to create:
1
2
3
4
6

A

Answer Area:

Minimum number of network security groups (NSGs) to create: 1

Objects to assign to the network security groups (NSGs): 1 subnet

Minimum number of Azure Standard Load Balancer rules to create: 2

Explanation:

Minimum number of network security groups (NSGs) to create: 1

You can use a single Network Security Group (NSG) to control inbound and outbound traffic for all virtual machine instances within the virtual machine scale set.

NSGs can be associated with a subnet or individual network interfaces. To minimize management and apply the same security rules to all instances in the scale set, you should apply the NSG to the subnet in which VSS1 is deployed.

Since all VMs in the scale set need the same security rules (allowing HTTP and HTTPS), one NSG applied at the subnet level is sufficient.

Objects to assign to the network security groups (NSGs): 1 subnet

As explained above, applying the NSG to the subnet is the most efficient way to manage security rules for all VMs in the VSS. You don’t need to create NSGs for each zone or for each individual network interface when the security requirements are the same across all instances.

Minimum number of Azure Standard Load Balancer rules to create: 2

You need to provide access to App1 using both HTTP (port 80) and HTTPS (port 443).

Azure Load Balancer rules define how traffic is distributed to the backend pool. Each rule typically handles traffic for a specific port and protocol combination.

You will need one load balancer rule to handle HTTP traffic (port 80) and forward it to the backend pool (VSS1 instances).

You will need a second load balancer rule to handle HTTPS traffic (port 443) and forward it to the same backend pool (VSS1 instances).

Therefore, you require a minimum of two load balancer rules to handle both HTTP and HTTPS traffic.
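As a minimal PowerShell sketch of those two rules, assuming the load balancer, frontend configuration, backend pool, and health probe already exist (their names below are assumptions):

# Sketch: add HTTP and HTTPS rules to an existing Standard Load Balancer
# (load balancer, frontend, backend pool, and probe names are assumptions).
$lb    = Get-AzLoadBalancer -ResourceGroupName "RG1" -Name "LB1"
$fe    = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "FrontendIP"
$be    = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "BackendPool"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "HealthProbe"

$lb | Add-AzLoadBalancerRuleConfig -Name "App1-HTTP" -FrontendIpConfiguration $fe `
    -BackendAddressPool $be -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 80
$lb | Add-AzLoadBalancerRuleConfig -Name "App1-HTTPS" -FrontendIpConfiguration $fe `
    -BackendAddressPool $be -Probe $probe -Protocol Tcp -FrontendPort 443 -BackendPort 443
$lb | Set-AzLoadBalancer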

Final Answer:

Minimum number of network security groups (NSGs) to create: 1
Objects to assign to the network security groups (NSGs): 1 subnet
Minimum number of Azure Standard Load Balancer rules to create: 2

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses data from two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to move DB1 and DB2 to Azure.
You need to implement Azure services to host DB1 and DB2. The solution must support server-side transactions across DB1 and DB2.
Solution: You deploy DB1 and DB2 as Azure SQL databases each on a different Azure SQL Database server.
Does this meet the goal?
Yes
No

A

The question asks if deploying DB1 and DB2 as Azure SQL databases on different Azure SQL Database servers supports server-side transactions across them.

Azure SQL Database (Single Database): This deployment option provides a single, isolated database. Each Azure SQL Database server is a logical construct providing administrative scope for databases. Databases on different Azure SQL Database servers are fundamentally independent and isolated from a transactional perspective.

Server-Side Transactions Across Databases: True server-side transactions across databases typically require distributed transaction capabilities. In SQL Server, this might be handled using Distributed Transaction Coordinator (DTC) in certain scenarios or linked servers with limitations on full transactional guarantees in distributed scenarios.

Azure SQL Database Limitations: Azure SQL Database (single database option) has limitations when it comes to distributed transactions across independent database servers. While you can query across databases using features like elastic queries, these are not designed for ACID (Atomicity, Consistency, Isolation, Durability) transactions that span multiple independent Azure SQL databases.

Cross-Database Transactions in Azure SQL Database: While some form of cross-database querying is possible in Azure SQL Database, true distributed transactions in the traditional sense (like within a single SQL Server instance or using DTC across instances) are not directly supported across independent Azure SQL Databases on different servers.

Analysis of the Proposed Solution:

The solution proposes deploying DB1 and DB2 as separate Azure SQL databases, each on a different Azure SQL Database server. This setup creates two completely independent Azure SQL databases. In this configuration, server-side transactions that span across DB1 and DB2 and maintain full ACID properties are NOT natively supported by Azure SQL Database.

While application-level transaction management or eventual consistency patterns might be possible, the requirement is for server-side transactions. Deploying them as independent Azure SQL Databases on different servers does not directly fulfill this requirement with built-in server features.

Conclusion:

The proposed solution does not meet the goal of supporting server-side transactions across DB1 and DB2 using the described Azure SQL Database deployment. To achieve server-side transactions, a different Azure SQL deployment option (like Azure SQL Managed Instance, which offers more SQL Server instance-level features, or deploying SQL Server on Azure VMs) or a different application architecture might be necessary.

Final Answer: No

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses data from two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to move DB1 and DB2 to Azure.
You need to implement Azure services to host DB1 and DB2. The solution must support server-side transactions across DB1 and DB2.
Solution: You deploy DB1 and DB2 as Azure SQL databases on the same Azure SQL Database server.
Does this meet the goal?
Yes
No

A

The question asks if deploying DB1 and DB2 as Azure SQL databases on the same Azure SQL Database server supports server-side transactions across them.

Azure SQL Database Server Scope: In Azure SQL Database, a “server” is a logical construct. It’s a management unit for Azure SQL databases. Crucially, databases deployed on the same Azure SQL Database server share the same physical SQL Server instance in the backend infrastructure (though this is abstracted from the user).

Cross-Database Transactions within a Single SQL Server Instance: Microsoft SQL Server, by design, supports cross-database transactions when the databases reside on the same SQL Server instance. You can use standard SQL transaction commands (like BEGIN TRANSACTION, COMMIT TRANSACTION, ROLLBACK TRANSACTION) to create transactions that span multiple databases on the same instance. You can reference objects in other databases using three or four-part naming conventions (e.g., DatabaseName.SchemaName.ObjectName).

Azure SQL Database Behavior: When you deploy multiple Azure SQL databases on the same Azure SQL Database server, they effectively reside within the same logical SQL Server instance environment provided by the Azure SQL Database service. This means they inherit the cross-database transaction capabilities of SQL Server.

Server-Side Transactions: The requirement is for server-side transactions. This implies that the transaction management should be handled by the database server itself, not just at the application level. SQL Server’s built-in transaction management features, when used for cross-database transactions on the same instance (or Azure SQL Database server), fulfill this requirement.

Analysis of the Proposed Solution:

The solution proposes deploying DB1 and DB2 as Azure SQL databases on the same Azure SQL Database server. This is the key point. Because they are on the same server, they can participate in server-side transactions that span both databases. You can write stored procedures or application code that initiates a transaction, performs operations in both DB1 and DB2 (using proper database naming conventions), and then commits or rolls back the entire transaction as a single atomic unit.

Conclusion:

The proposed solution does meet the goal. Deploying DB1 and DB2 as Azure SQL databases on the same Azure SQL Database server enables the implementation of server-side transactions across DB1 and DB2 using standard SQL Server transactional capabilities.

Final Answer: Yes

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB database that contains a container named Container1. The partition key for Container1 is set to /day. Container1 contains the items shown in the following table.
— —
Name Content
Item1 {
“id”: “1”,
“day”: “Mon”,
“value”: “10”
}
Item2 {
“id”: “2”,
“day”: “Mon”,
“value”: “15”
}
Item3 {
“id”: “3”,
“day”: “Tue”,
“value”: “10”
}
Item4 {
“id”: “4”,
“day”: “Wed”,
“value”: “15”
}
— —
You need to programmatically query Azure Cosmos DB and retrieve Item1 and Item2 only.
Solution: You run the following query.
SELECT day
WHERE value = “10”
You set the Enable Cross Partition Query property to False.
Does this meet the goal?
Yes
No

A

The goal is to retrieve Item1 and Item2 only from Azure Cosmos DB Container1.
Let’s analyze the provided solution:

Solution:
You run the following query:

SELECT day
WHERE value = “10”
You set the Enable Cross Partition Query property to False.

Item Data Analysis:

Item1: {“id”: “1”, “day”: “Mon”, “value”: “10”}

Item2: {“id”: “2”, “day”: “Mon”, “value”: “15”}

Item3: {“id”: “3”, “day”: “Tue”, “value”: “10”}

Item4: {“id”: “4”, “day”: “Wed”, “value”: “15”}

Query Evaluation:

The query SELECT day WHERE value = “10” filters items based on the value property being equal to “10”.

Item1: value is “10”, so it matches the WHERE clause.

Item2: value is “15”, so it does not match the WHERE clause.

Item3: value is “10”, so it matches the WHERE clause.

Item4: value is “15”, so it does not match the WHERE clause.

Therefore, based on the WHERE value = “10” condition, the query would retrieve Item1 and Item3, not Item1 and Item2.

Cross Partition Query Setting:

Setting Enable Cross Partition Query to False means that the query will only be executed against a single partition. However, the query itself is already flawed in achieving the goal, regardless of the cross-partition setting, because it will not retrieve Item2.

Goal Evaluation:

The goal is to retrieve Item1 and Item2 only.

The query SELECT day WHERE value = “10” will retrieve Item1 and Item3, not Item1 and Item2. Item2 has a value of “15” and will be excluded by the WHERE clause.

Conclusion:

The provided solution does not meet the goal because the query SELECT day WHERE value = “10” will retrieve Item1 and Item3, not Item1 and Item2. Setting Enable Cross Partition Query to False does not change the fact that the query’s filter condition is incorrect for retrieving Item1 and Item2 only.

Final Answer: No

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

HOTSPOT
You have an on-premises data center and an Azure subscription. The data center contains two VPN devices. The subscription contains an Azure virtual network named VNet1. VNet1 contains a gateway subnet.

You need to create a site-to-site VPN. The solution must ensure that if a single instance of an Azure VPN gateway fails, or a single on-premises VPN device fails, the failure will not cause an interruption that is longer than two minutes.

What is the minimum number of public IP addresses, virtual network gateways, and local network gateways required in Azure? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Public IP addresses:
1
2
3
4
Virtual network gateways:
1
2
3
4
Local network gateways:
1
2
3
4

A

Answer Area:

Public IP addresses: 2

Virtual network gateways: 2

Local network gateways: 2

Explanation:

To achieve high availability and minimize downtime to under two minutes in case of a single component failure for a site-to-site VPN connection, you need to configure redundancy on both the Azure side and the on-premises side.

Public IP Addresses: 2

In an active-standby Azure VPN Gateway setup, you need a Standard SKU public IP address for each VPN gateway instance.

You will deploy two Azure VPN gateway instances in an active-standby configuration for redundancy.

Therefore, you need a minimum of two public IP addresses. One for the active VPN gateway instance and one for the standby instance.

Virtual network gateways: 2

To achieve high availability for the Azure VPN gateway, you must deploy it in an active-standby configuration.

This configuration requires two Azure virtual network gateways. One gateway will be active and handle traffic, while the other will be in standby mode, ready to take over in case of a failure of the active gateway.

Hence, you need a minimum of two virtual network gateways.

Local network gateways: 2

To ensure redundancy on the on-premises side and to leverage your two VPN devices, you should create two local network gateways in Azure.

Each local network gateway will represent one of your on-premises VPN devices and will be configured with the public IP address of that device.

By having two local network gateways, you can create redundant connections from your Azure VPN gateways to your on-premises network through different VPN devices.

Therefore, you need a minimum of two local network gateways to represent your two on-premises VPN devices and achieve on-premises redundancy in conjunction with Azure’s active-standby VPN gateway setup.
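As a sketch of the on-premises side representation, the two local network gateways might be created as follows; the gateway names, the public IP addresses of the two VPN devices, and the on-premises address prefix are assumptions.

# Sketch: one local network gateway per on-premises VPN device
# (names, device public IPs, and the on-premises prefix are assumptions).
New-AzLocalNetworkGateway -Name "LNG-Device1" -ResourceGroupName "RG1" -Location "westeurope" `
    -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.0.0/16"

New-AzLocalNetworkGateway -Name "LNG-Device2" -ResourceGroupName "RG1" -Location "westeurope" `
    -GatewayIpAddress "203.0.113.11" -AddressPrefix "192.168.0.0/16"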

In summary:

By using 2 Public IP addresses, 2 Virtual Network Gateways (in active-standby), and 2 Local Network Gateways, you create a resilient site-to-site VPN solution that can withstand the failure of a single Azure VPN gateway instance or a single on-premises VPN device with minimal interruption. This setup allows for automatic failover within the Azure VPN gateway pair and provides redundant paths via two on-premises VPN devices.

Final Answer:

Public IP addresses: 2
Virtual network gateways: 2
Local network gateways: 2

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

You have an Azure subscription that contains an Azure Sentinel workspace. Sentinel is configured to monitor several Azure resources.
You need to send notification emails to resource owners when alerts or recommendations are generated for a resource.
What should you use?
Logic Apps Designer
Azure Security Center
Automation Runbook
Azure Machine Learning Studio

A

The correct answer is Logic Apps Designer.

Explanation:

Here’s why Logic Apps Designer is the best choice and why the other options are less suitable for this specific scenario:

Logic Apps Designer:

Purpose: Azure Logic Apps is a cloud-based platform for automating workflows and integrating services. The Logic Apps Designer is the visual interface for creating these workflows.

Integration with Azure Sentinel: Logic Apps has native connectors and triggers for Azure Sentinel. You can create a Logic App that is triggered when a new Azure Sentinel alert is generated.

Email Notification Capabilities: Logic Apps has built-in connectors for various email services (like Office 365 Outlook, Gmail, SendGrid, etc.). You can easily add an action in your Logic App workflow to send an email notification.

Dynamic Resource Owner Lookup: Logic Apps can integrate with Azure Resource Graph or Azure Resource Manager to dynamically retrieve information about the resource that triggered the alert. You could potentially use Resource Graph to query for tags on the resource or use Azure RBAC to find the owners/contributors and send notifications to them.

Flexibility and Customization: Logic Apps provides a highly flexible and customizable way to build notification workflows. You can tailor the email content, recipients, and notification logic based on the specifics of the Sentinel alert.

Azure Security Center (Microsoft Defender for Cloud):

Purpose: Microsoft Defender for Cloud focuses on security posture management and threat protection. While it provides security alerts and recommendations, its native email notification capabilities are primarily for Security Center’s own findings, not for routing notifications based on Sentinel alerts to resource owners.

Limited Customization for Sentinel Alerts: While Defender for Cloud and Sentinel are integrated, Defender for Cloud’s notification system is not designed to be the primary mechanism for handling notifications from Sentinel alerts and routing them to resource owners based on resource context.

Automation Runbook:

Purpose: Azure Automation Runbooks are used for general automation tasks in Azure and hybrid environments.

Can Send Emails: Runbooks can be coded (PowerShell or Python) to send emails.

Complexity for Workflows: While you could use an Automation Runbook to achieve the goal, it would generally require more coding and be less visually intuitive to build the workflow compared to Logic Apps. You would need to handle the trigger (e.g., via a webhook from Sentinel), resource owner lookup logic, and email sending all within the Runbook code. Logic Apps provides a more declarative, visual, and connector-driven approach that is often simpler for this type of workflow.

Azure Machine Learning Studio:

Purpose: Azure Machine Learning Studio is for building, training, and deploying machine learning models. It has no relevance to sending email notifications for Azure Sentinel alerts.

Why Logic Apps is the Best Choice:

Logic Apps Designer is the most direct and efficient tool for this specific requirement because:

Native Sentinel Integration: It has built-in triggers for Azure Sentinel alerts.

Email Connectors: It provides easy-to-use connectors for sending emails.

Workflow Automation Focus: It is designed for building automated workflows, making it ideal for handling alerts and triggering actions like email notifications.

Resource Context and Customization: It provides the flexibility to retrieve resource information and customize notifications based on alert details and resource ownership.

Final Answer: The final answer is Logic Apps Designer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

HOTSPOT
You have an Azure subscription that contains the virtual networks shown in the following table.
Name Location Virtual machine
Vnet1 North Europe VM1
Vnet2 West Europe VM2

You create an Azure Cosmos DB account as shown in the exhibit. (Click the Exhibit tab.)
— —
Create Azure Cosmos DB Account
Subscription: Subscription1
Resource Group: RG1
Location: North Europe
Account Name: (new) cosmos75246
API: Core (SQL)
Account Type: Production
Geo-Redundancy: Enable
Multi-region Writes: Disable
Connectivity method: Private endpoint
Private endpoint: Endpoint1 (Core (SQL)) (Vnet1)
— —

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.
Statements
Cosmos75246 is accessible by using a public IP address.
VM1 can read from cosmos75246.
VM2 can read from cosmos75246.

A

Let’s analyze each statement based on the provided Azure Cosmos DB account configuration and virtual network setup.

Statement 1: Cosmos75246 is accessible by using a public IP address.

Analysis: The exhibit shows that the Connectivity method for the Cosmos DB account is set to Private endpoint. When you configure a private endpoint for Azure Cosmos DB, you are specifically enabling access to the Cosmos DB account through a private IP address within your virtual network, rather than through its public endpoint. While Cosmos DB accounts generally have a public endpoint, configuring a private endpoint and not explicitly enabling public network access in firewall settings (which is not shown in the exhibit but is a common practice when using private endpoints) effectively restricts public access. With the configuration shown, the primary access method being configured is the private endpoint.

Conclusion: Based on the configuration, especially the “Private endpoint” setting, the Cosmos DB account is primarily designed for private access. Therefore, the statement that it is accessible by using a public IP address in the context of this configuration is misleading. While a public endpoint might technically exist, the configured access is via private endpoint, suggesting public access is likely restricted or not the intended access method. Thus, the statement is No.

Statement 2: VM1 can read from cosmos75246.

Analysis: VM1 is located in Vnet1 in the North Europe region. The private endpoint Endpoint1 for Cosmos DB is being created within Vnet1. Private endpoints are designed to enable private connectivity to Azure services from within a specific virtual network. Since VM1 is in the same virtual network (Vnet1) where the private endpoint is created, VM1 will be able to access the Cosmos DB account using the private IP address of the endpoint.

Conclusion: VM1 is in the same virtual network as the private endpoint configured for Cosmos DB. Therefore, VM1 can read from cosmos75246. The statement is Yes.

Statement 3: VM2 can read from cosmos75246.

Analysis: VM2 is located in Vnet2 in the West Europe region. The private endpoint Endpoint1 for Cosmos DB is created in Vnet1 in the North Europe region. Private endpoints provide connectivity within the virtual network where they are created. By default, virtual networks are isolated from each other. Vnet1 and Vnet2 are separate virtual networks and are even in different Azure regions (North Europe and West Europe). Without explicit configuration like VNet peering or VPN connectivity to connect Vnet2 to Vnet1, VM2, which is in Vnet2, will not be able to directly access the Cosmos DB account through the private endpoint that is deployed in Vnet1. Private endpoints do not inherently grant access from other virtual networks.

Conclusion: VM2 is in a different virtual network (Vnet2) than the private endpoint (Vnet1). Without additional networking configurations to connect Vnet2 to Vnet1, VM2 cannot read from cosmos75246 via the private endpoint in Vnet1. Therefore, the statement is No.
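For context, a private endpoint such as Endpoint1 could be created in Vnet1 with PowerShell roughly as follows. This is a sketch: the resource group, subnet selection, and the group ID (assumed to be "Sql" for the Core (SQL) API) are assumptions.

# Sketch: create a private endpoint for the Cosmos DB account inside Vnet1
# (resource group, subnet index, and the "Sql" group ID are assumptions).
$vnet    = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "Vnet1"
$subnet  = $vnet.Subnets[0]
$account = Get-AzCosmosDBAccount -ResourceGroupName "RG1" -Name "cosmos75246"

$conn = New-AzPrivateLinkServiceConnection -Name "Endpoint1-connection" `
    -PrivateLinkServiceId $account.Id -GroupId "Sql"

New-AzPrivateEndpoint -ResourceGroupName "RG1" -Name "Endpoint1" `
    -Location "northeurope" -Subnet $subnet -PrivateLinkServiceConnection $conn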

Final Answer:

Statements Yes No
Cosmos75246 is accessible by using a public IP address. ☐ ☑
VM1 can read from cosmos75246. ☑ ☐
VM2 can read from cosmos75246. ☐ ☑

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

HOTSPOT
You have an Azure subscription named Subscription1.

Subscription1 contains the virtual machines in the following table.
Name IP address
VM1 10.0.1.4
VM2 10.0.2.4
VM3 10.0.3.4

Subscription1 contains a virtual network named VNet1 that has the subnets in the following table.
— —
Name Address space Connected virtual machine
Subnet1 10.0.1.0/24 VM1
Subnet2 10.0.2.0/24 VM2
Subnet3 10.0.3.0/24 VM3
— —

VM3 has a network adapter named NIC3. IP forwarding is enabled on NIC3. Routing is enabled on VM3.

You create a route table named RT1 that contains the routes in the following table.
Address prefix Next hop type Next hop address
10.0.1.0/24 Virtual appliance 10.0.3.4
10.0.2.0/24 Virtual appliance 10.0.3.4

You apply RT1 to Subnet1 and Subnet2.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.
Statements
VM3 can establish a network connection to VM1.
If VM3 is turned off, VM2 can establish a network connection to VM1.
VM1 can establish a network connection to VM2.

A

Let’s analyze each statement based on the network configuration provided.

Statement 1: VM3 can establish a network connection to VM1.

VM3 is in Subnet3 (10.0.3.0/24). VM1 is in Subnet1 (10.0.1.0/24).

There is no route table applied to Subnet3, where VM3 resides.

By default, Azure virtual networks allow direct communication between subnets within the same VNet.

Therefore, VM3 should be able to directly communicate with VM1 using the default system routes of the virtual network.

Conclusion for Statement 1: Yes.

Statement 2: If VM3 is turned off, VM2 can establish a network connection to VM1.

VM2 is in Subnet2 (10.0.2.0/24). VM1 is in Subnet1 (10.0.1.0/24).

Route table RT1 is applied to Subnet2.

RT1 contains a route for the address prefix 10.0.1.0/24 (which includes VM1’s IP address) with the next hop type “Virtual appliance” and the next hop address 10.0.3.4 (which is VM3’s IP address).

This means that traffic from VM2 destined for VM1 (10.0.1.0/24) is directed to VM3.

If VM3 is turned off, the next hop virtual appliance (10.0.3.4) becomes unavailable.

When the next hop specified in a route table is unavailable, Azure will not automatically fall back to default system routes for that destination prefix. Traffic matching that route will likely be dropped or routing will fail because the specified next hop is unreachable.

Therefore, if VM3 is turned off, VM2 will not be able to establish a network connection to VM1 because the custom route in RT1 points to a non-functional appliance, and default VNet routing is overridden by RT1 for traffic from Subnet2 to Subnet1.

Conclusion for Statement 2: No.

Statement 3: VM1 can establish a network connection to VM2.

VM1 is in Subnet1 (10.0.1.0/24). VM2 is in Subnet2 (10.0.2.0/24).

Route table RT1 is applied to Subnet1.

RT1 contains a route for the address prefix 10.0.2.0/24 (which includes VM2’s IP address) with the next hop type “Virtual appliance” and the next hop address 10.0.3.4 (which is VM3’s IP address).

This means that traffic from VM1 destined for VM2 (10.0.2.0/24) is directed to VM3.

As long as VM3 is running and configured to route traffic between Subnet1 and Subnet2 (which is implied by “Routing is enabled on VM3” and IP forwarding on NIC3), VM1 will be able to establish a network connection to VM2, although the traffic will pass through VM3.

Conclusion for Statement 3: Yes.
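For reference, a route table equivalent to RT1 could be built and associated with Subnet1 and Subnet2 in PowerShell roughly as follows. The address prefixes and next hop come from the question's tables; the resource group name and location are assumptions.

# Sketch: recreate RT1 and associate it with Subnet1 and Subnet2
# (resource group and location are assumptions).
$rt = New-AzRouteTable -Name "RT1" -ResourceGroupName "RG1" -Location "westus"

Add-AzRouteConfig -RouteTable $rt -Name "ToSubnet1" -AddressPrefix "10.0.1.0/24" `
    -NextHopType "VirtualAppliance" -NextHopIpAddress "10.0.3.4"
Add-AzRouteConfig -RouteTable $rt -Name "ToSubnet2" -AddressPrefix "10.0.2.0/24" `
    -NextHopType "VirtualAppliance" -NextHopIpAddress "10.0.3.4"
$rt | Set-AzRouteTable

$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" `
    -AddressPrefix "10.0.1.0/24" -RouteTable $rt
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet2" `
    -AddressPrefix "10.0.2.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork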

Final Answer:

Statements Yes No
VM3 can establish a network connection to VM1. ☑ ☐
If VM3 is turned off, VM2 can establish a network connection to VM1. ☐ ☑
VM1 can establish a network connection to VM2. ☑ ☐

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

DRAG DROP
You have an Azure virtual machine named VM1 that runs Windows Server 2016.

You install a line-of-business application on VM1.

You need to create a scale set by using VM1 as a custom image.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
Run sysprep.exe on VM1.
Install Network Load Balancing (NLB) on VM1.
From Azure CLI, deallocate VM1 and mark VM1 as generalized.
From Azure CLI, apply a custom script extension.
Create a virtual machine scale set.
Answer Area

A

Answer Area:

Run sysprep.exe on VM1.

From Azure CLI, deallocate VM1 and mark VM1 as generalized.

Create a virtual machine scale set.

Explanation:

Run sysprep.exe on VM1: The first step to create a custom image from a Windows VM is to generalize the VM using the System Preparation tool (Sysprep). Sysprep removes computer-specific information, such as the computer name and security identifiers (SIDs), ensuring that the image can be used to create multiple unique VMs. Running sysprep.exe /generalize /shutdown /oobe is the standard command to prepare a Windows VM for image capture.

From Azure CLI, deallocate VM1 and mark VM1 as generalized: After running Sysprep and shutting down the VM, you need to deallocate the VM in Azure. Deallocating the VM releases the compute resources associated with it, making it possible to capture the VM as an image. Using the Azure CLI commands az vm deallocate --resource-group <resource_group_name> --name VM1 and az vm generalize --resource-group <resource_group_name> --name VM1 is the correct way to deallocate and mark the VM as generalized in Azure. Marking the VM as generalized in Azure signals to the platform that the VM has been sysprepped and is ready to be used as a source for a custom image.

Create a virtual machine scale set: Once VM1 is generalized and deallocated, you can create a virtual machine scale set using this generalized VM as a custom image. When creating the scale set, you will specify the resource ID of the generalized VM as the imageReference source in the scale set’s configuration. The scale set will then use this custom image to deploy new VM instances.

The other actions are not part of the necessary sequence for creating a scale set from a custom image based on an existing VM:

Install Network Load Balancing (NLB) on VM1: NLB is used for load balancing traffic within a set of VMs, not for creating a custom image. NLB configuration would typically be done after the scale set is created and deployed, or as part of application deployment within the VM instances.

From Azure CLI, apply a custom script extension: Custom script extensions are used to run scripts on VMs after they are provisioned from an image. They are not needed to prepare a VM for image capture itself. While you can use custom script extensions in scale sets to customize instances after deployment, it’s not a prerequisite step for creating the custom image.

Final Answer:

Answer Area
1. Run sysprep.exe on VM1.
2. From Azure CLI, deallocate VM1 and mark VM1 as generalized.
3. Create a virtual machine scale set.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

You have an Azure virtual network that contains a subnet named Subnet1. Subnet1 contains 50 virtual machines. Twenty-five of the virtual machines are web servers and the other 25 are application servers.
You need to filter traffic between the web servers and the application servers by using application security groups.
Which additional resource should you provision?
Azure Firewall
a user-defined route
Azure Private Link
a network security group (NSG)

A

To filter traffic between web servers and application servers within the same subnet using Application Security Groups (ASGs), you need to use a network security group (NSG).

Here’s why:

Network Security Groups (NSGs): NSGs are Azure’s fundamental network traffic filtering service. They allow you to control inbound and outbound traffic to Azure resources within a virtual network. NSGs use security rules to allow or deny traffic based on various criteria such as:

Source and destination IP addresses or address prefixes

Source and destination ports

Protocol (TCP, UDP, ICMP)

Application Security Groups (ASGs)

Application Security Groups (ASGs): ASGs are designed to group virtual machines based on application context. They allow you to define network security rules based on these groups instead of individual IP addresses. This simplifies security management, especially in dynamic environments where IP addresses might change.

Filtering with ASGs and NSGs: To filter traffic using ASGs, you need to create an NSG and define security rules within that NSG that use ASGs as the source or destination.

In this scenario, you would:

Create two Application Security Groups: One for the web servers (e.g., ASG_WebServers) and one for the application servers (e.g., ASG_AppServers).

Associate the VMs with ASGs: Associate the 25 web server VMs with ASG_WebServers and the 25 application server VMs with ASG_AppServers.

Create a Network Security Group (if one doesn’t already exist for Subnet1).

Create NSG Security Rules: Within the NSG associated with Subnet1, you would create rules like:

Allow inbound traffic from ASG_WebServers to ASG_AppServers on the specific ports required for communication between web and application servers (e.g., port 8080).

Deny inbound traffic from ASG_WebServers to ASG_AppServers on other ports if needed, to restrict communication.

You can also add rules to control traffic in the reverse direction (from AppServers to WebServers) if required.
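A condensed PowerShell sketch of these steps follows; the resource group, location, and port 8080 are assumptions, and associating each NIC with its ASG (and the NSG with Subnet1) is omitted for brevity.

# Sketch: two ASGs plus one NSG rule that lets web servers reach app servers
# (resource group, location, and port are assumptions).
$asgWeb = New-AzApplicationSecurityGroup -ResourceGroupName "RG1" -Name "ASG_WebServers" -Location "westus"
$asgApp = New-AzApplicationSecurityGroup -ResourceGroupName "RG1" -Name "ASG_AppServers" -Location "westus"

$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-Web-To-App" `
    -Access Allow -Direction Inbound -Priority 100 -Protocol Tcp `
    -SourceApplicationSecurityGroup $asgWeb -SourcePortRange "*" `
    -DestinationApplicationSecurityGroup $asgApp -DestinationPortRange "8080"

New-AzNetworkSecurityGroup -ResourceGroupName "RG1" -Name "NSG1" -Location "westus" -SecurityRules $rule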

Let’s look at why the other options are not the correct “additional resource”:

Azure Firewall: Azure Firewall is a more advanced, cloud-native firewall service that provides network and application-level protection, including threat intelligence. While Azure Firewall can filter traffic, it’s an over-engineered solution for simply filtering traffic within a subnet based on application groups. NSGs with ASGs are the more appropriate and cost-effective solution for this specific requirement.

A user-defined route (UDR): User-defined routes control the routing of traffic, not the filtering of traffic based on application groups. UDRs are used to direct traffic through network appliances like firewalls or NVAs. They are not relevant for implementing application security group-based filtering.

Azure Private Link: Azure Private Link provides private connectivity to Azure PaaS services and customer-owned services. It is not related to filtering traffic between VMs within a subnet using application security groups.

Conclusion:

To filter traffic between web servers and application servers using Application Security Groups, you need to provision a network security group (NSG) as the additional resource. You will then configure NSG rules that use the ASGs to define the traffic filtering policies.

Final Answer: The final answer is a network security group (NSG).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

Your on-premises network contains several Hyper-V hosts.
You have a hybrid deployment of Azure Active Directory (Azure AD).
You create an Azure Migrate project.
You need to ensure that you can evaluate virtual machines by using Azure Migrate.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Deploy the Azure Migrate appliance to an on-premises Hyper-V host.
Assign the migration account to the Administrators group on each Hyper-V virtual machine.
Deploy the Microsoft Monitoring Agent to each Hyper-V host.
Assign the migration account to the Administrators group on each Hyper-V host.
Deploy the Microsoft Monitoring Agent to each Hyper-V virtual machine.
Deploy the Azure Migrate appliance as an Azure virtual machine.

A

The correct answers are:

Deploy the Azure Migrate appliance to an on-premises Hyper-V host.

Assign the migration account to the Administrators group on each Hyper-V host.

Explanation:

To evaluate Hyper-V virtual machines using Azure Migrate, you need to perform the following actions:

Deploy the Azure Migrate appliance to an on-premises Hyper-V host:

The Azure Migrate appliance is a lightweight virtual appliance that you deploy in your on-premises environment. For Hyper-V environments, you deploy it on a Hyper-V host.

The appliance acts as a discovery and assessment tool. It discovers your Hyper-V VMs and collects metadata about their configuration, performance, and dependencies. This data is then sent to your Azure Migrate project in Azure.

Without deploying the appliance, Azure Migrate cannot discover and assess your on-premises Hyper-V VMs.

Assign the migration account to the Administrators group on each Hyper-V host:

The Azure Migrate appliance needs credentials to connect to your Hyper-V hosts and perform discovery.

You need to provide an account to the appliance that has administrative privileges on the Hyper-V hosts. This is typically done by specifying credentials during the appliance configuration process.

Assigning the migration account (the account you configure in the appliance) to the Administrators group on each Hyper-V host grants the necessary permissions for the appliance to access and collect information about the VMs running on those hosts.

Let’s look at why the other options are incorrect:

Deploy the Microsoft Monitoring Agent to each Hyper-V host: While the Microsoft Monitoring Agent (MMA) is used by Azure Monitor and also by the Dependency Agent (which can be used with Azure Migrate for dependency visualization), deploying MMA on Hyper-V hosts is not the primary requirement for initial VM evaluation using Azure Migrate’s assessment capabilities. The core discovery and assessment are handled by the Azure Migrate appliance itself. MMA on VMs is more relevant for post-migration monitoring or for dependency analysis, which is an optional step in Azure Migrate assessment.

Deploy the Microsoft Monitoring Agent to each Hyper-V virtual machine: Similar to the previous point, deploying MMA on each virtual machine is not a prerequisite for the initial evaluation using Azure Migrate’s core assessment features. MMA on VMs is more for in-guest dependency analysis or performance monitoring after migration. For simply evaluating migration readiness and cost, the appliance collecting metadata from Hyper-V hosts is sufficient.

Deploy the Azure Migrate appliance as an Azure virtual machine: The Azure Migrate appliance is designed to be deployed on-premises, close to the Hyper-V environment it needs to discover. Deploying it as an Azure VM would not allow it to directly discover and assess your on-premises Hyper-V infrastructure. The appliance needs to be on-premises to have network access to the Hyper-V hosts.

In summary:

The essential actions to evaluate Hyper-V VMs using Azure Migrate are deploying the Azure Migrate appliance on-premises and providing it with administrative credentials to access the Hyper-V hosts.

Final Answer: The final answer is Deploy the Azure Migrate appliance to an on-premises Hyper-V host and Assign the migration account to the Administrators group on each Hyper-V host.

46
Q

HOTSPOT
Your network contains an on-premises Active Directory domain. The domain contains the Hyper-V failover clusters shown in the following table.
Name Number of nodes Number of virtual machines
Cluster1 2 12
Cluster2 4 10
Cluster3 6 40

You plan to assess and migrate the virtual machines by using Azure Migrate.

What is the minimum number of Azure Migrate appliances and Microsoft Azure Recovery Services (MARS) agents required?

NOTE: Each correct selection is worth one point.
Azure Migrate appliances:
1
3
12
62
MARS agents:
1
3
12
62

A

Azure Migrate appliances: 1

MARS agents: 1

Explanation:

Azure Migrate appliances:

Minimum Requirement: You need at least one Azure Migrate appliance to discover and assess your on-premises Hyper-V virtual machines. The Azure Migrate appliance is deployed on-premises and connects to your Hyper-V environment to collect metadata about your VMs.

Scalability: A single Azure Migrate appliance can handle the discovery and assessment of a significant number of VMs. For the given scenario with 62 virtual machines spread across three clusters, one appliance is generally sufficient for assessment. You don’t need a separate appliance for each cluster or VM.

Redundancy (Not Minimum Requirement): While you can deploy multiple Azure Migrate appliances for redundancy in very large environments, the question asks for the minimum number. For basic assessment, one appliance is enough.

MARS agents:

MARS Agent and Azure Migrate: The Microsoft Azure Recovery Services (MARS) agent is primarily used for Azure Backup to back up files, folders, and system state from on-premises machines to Azure Recovery Services Vault. It is not directly required for Azure Migrate’s assessment and migration of Hyper-V virtual machines to Azure. Azure Migrate uses its own appliance for discovery and assessment and Azure Site Recovery (ASR) technology for replication and migration of Hyper-V VMs.

Possible Misinterpretation (Exam Context): The question might be worded in a way that could mislead you into thinking MARS is needed, or perhaps the exam is testing your understanding that MARS is not the correct component for Azure Migrate VM migration.

Minimum Number (Forced Choice): Since MARS agent is not actually required for Azure Migrate in this scenario, the most minimal answer from the given options would be 1. This is a forced choice to select something from the options, even though in reality, for Azure Migrate VM assessment and migration, you technically need zero MARS agents. If you must pick a number from the available choices and want to select the lowest possible number, then ‘1’ is the closest to “not needed”.

Therefore, the minimum number of resources required is:

Azure Migrate appliances: 1 (To perform the assessment)

MARS agents: 1 (Choosing the lowest available option, acknowledging MARS is not actually a required component for Azure Migrate VM assessment/migration in the standard workflow).

Note that the standard Azure Migrate workflow for Hyper-V does not use MARS agents at all: the Azure Migrate appliance handles discovery and assessment, and Azure Site Recovery technology handles replication. The question is likely testing whether you know this, and selecting “1” for MARS is simply the lowest number available among the choices.

Final Answer:

Azure Migrate appliances: 1
MARS agents: 1

47
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Active Directory (Azure AD) tenant named contoso.com.
A user named Admin1 attempts to create an access review from the Azure Active Directory admin center and discovers that the Access reviews settings are unavailable. Admin1 discovers that all the other Identity Governance settings are available.
Admin1 is assigned the User administrator, Compliance administrator, and Security administrator roles.
You need to ensure that Admin1 can create access reviews in contoso.com.
Solution: You assign the Service administrator role to Admin1.
Does this meet the goal?
Yes
No

A

The goal is to enable Admin1 to create access reviews in the Azure AD tenant contoso.com. Admin1 currently cannot access the Access reviews settings, even though other Identity Governance settings are available. Admin1 is currently assigned the following roles: User administrator, Compliance administrator, and Security administrator.

The proposed solution is to assign the Service administrator role to Admin1.

Let’s analyze the permissions associated with each role and their relevance to managing Azure AD Access Reviews:

User Administrator: This role can manage all aspects of users and groups, including resetting passwords, managing licenses, and assigning roles to users and groups. Importantly, the User Administrator role is documented to have permissions to manage Access Reviews.

Compliance Administrator: This role has permissions to view and manage compliance-related features in Azure AD and Microsoft 365. While it may have some visibility into Identity Governance features for compliance auditing, it is not primarily intended for creating and managing Access Reviews.

Security Administrator: This role manages security-related features, including security policies, security reports, and monitoring. It has a broad view of security settings but is not primarily intended for creating and managing Access Reviews.

Service Administrator (renamed to Global Administrator): The Service administrator role (now known as Global Administrator) is one of the highest privileged roles in Azure AD. Global Administrators have full access to all features in Azure AD, including Identity Governance and Access Reviews. They can perform any administrative task within the Azure AD tenant.

According to Microsoft documentation, the roles that can create and manage access reviews include:

Global Administrator

User Administrator

Identity Governance Administrator

Since Admin1 is already assigned the User administrator role, which should grant permissions to manage Access Reviews, the fact that “Access reviews settings are unavailable” is unusual.

However, the question specifically asks: “Does this meet the goal?” and proposes assigning the Service administrator role.

Analysis of the Solution:

Assigning the Service administrator (Global Administrator) role to Admin1 will definitely meet the goal. Global Administrators have all permissions in Azure AD, including the ability to create and manage Access Reviews. By assigning this role, Admin1 will gain the necessary permissions to access the Access reviews settings and create access reviews.

Is it the best practice? No. Assigning the Global Administrator role should be avoided if a less privileged role can fulfill the requirement. In a real-world scenario, you would investigate why the User Administrator role is not working as expected before assigning a highly privileged role like Global Administrator. It’s possible there’s a permission propagation delay, a bug, or some other underlying issue.

Does it meet the goal? Yes. The question is asking if the proposed solution will achieve the stated goal. Assigning the Service administrator role will technically grant Admin1 the necessary permissions.

Therefore, in the context of this specific question which is focused on whether the proposed solution meets the goal, and given that Service Administrator role indeed grants all permissions, the answer is Yes.

Final Answer: Yes

48
Q

You have an Azure subscription that contains a policy-based virtual network gateway named GW1 and a virtual network named VNet1.
You need to ensure that you can configure a point-to-site connection from an on-premises computer to VNet1.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Reset GW1.
Delete GW1.
Create a route-based virtual network gateway.
Add a public IP address space to VNet1.
Add a connection to GW1.
Add a service endpoint to VNet1.

A

Answer Area:

Delete GW1.

Create a route-based virtual network gateway.

Explanation:

Delete GW1:

Policy-based VPN gateways do not support Point-to-Site (P2S) VPN configurations. Policy-based gateways are limited in their capabilities compared to route-based gateways, especially when it comes to P2S VPNs and certain advanced VPN features.

To enable P2S VPN, you will need to use a route-based VPN gateway.

Therefore, the first necessary step is to delete the existing policy-based gateway (GW1) as it cannot be configured for P2S.

Create a route-based virtual network gateway:

Route-based VPN gateways are required for Point-to-Site VPN connections. Route-based gateways offer the necessary flexibility and features to support P2S VPN, as well as Site-to-Site and VNet-to-VNet VPNs.

After deleting the policy-based gateway, you must create a new virtual network gateway of the route-based type. When creating the gateway, you will specify the -GatewayType Vpn and -VpnType RouteBased parameters (if using PowerShell) or select “Route-based” as the VPN type in the Azure portal.

Once the route-based gateway is deployed, you can then configure the Point-to-Site configuration on it.

Let’s look at why the other options are incorrect:

Reset GW1: Resetting a VPN gateway restarts the gateway service. While this can resolve temporary connectivity issues, it does not change the gateway type from policy-based to route-based. Resetting a policy-based gateway will not make it support P2S VPN.

Add a public IP address space to VNet1: Adding a public IP address space to your virtual network’s address space is not relevant to enabling P2S VPN. Virtual network address spaces define the private IP address ranges within your VNet, not public IP addressing for VPN gateways or P2S clients.

Add a connection to GW1: Adding a connection to GW1 would typically be for creating a Site-to-Site or VNet-to-VNet VPN connection. If GW1 is a policy-based gateway, adding a P2S connection to it will not work because policy-based gateways do not support P2S. You need to change the gateway type itself.

Add a service endpoint to VNet1: Service endpoints are used to secure access to Azure PaaS services (like Azure Storage or Azure SQL Database) from within your virtual network. They are not related to VPN connectivity or enabling Point-to-Site VPN access to your VNet.

In summary: To enable Point-to-Site VPN connectivity when you currently have a policy-based gateway, you must replace the policy-based gateway with a route-based gateway. This involves deleting the existing gateway and creating a new one of the route-based type.
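As a rough Az PowerShell sketch of the two steps (the resource group, region, new gateway name, and SKU are assumed for the example):

# Step 1: remove the existing policy-based gateway
Remove-AzVirtualNetworkGateway -Name "GW1" -ResourceGroupName "RG1" -Force

# Step 2: create a route-based gateway in the GatewaySubnet of VNet1
$vnet   = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$pip    = New-AzPublicIpAddress -Name "GW2-pip" -ResourceGroupName "RG1" -Location "eastus" `
              -AllocationMethod Static -Sku Standard
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
New-AzVirtualNetworkGateway -Name "GW2" -ResourceGroupName "RG1" -Location "eastus" `
    -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1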

Final Answer:

Answer Area
1. Delete GW1.
2. Create a route-based virtual network gateway.

49
Q

You have an Azure subscription that contains the virtual networks shown in the following table.
Name Address space Location Number of Azure virtual machines
VNET1 10.1.0.0/16 West US 100
VNET2 172.16.0.0/16 East US 400

You need to recommend a connectivity solution that will enable the virtual machines on VNET1 and VNET2 to communicate through the Microsoft backbone infrastructure.
What should you include in the recommendation?
Azure ExpressRoute
peering
a point-to-site VPN
a site-to-site VPN

A

The correct answer is peering.

Explanation:

Peering (specifically, Global VNet Peering):

Purpose: Azure Virtual Network peering enables you to connect two Azure virtual networks directly. Global VNet peering extends this capability to virtual networks in different Azure regions.

Microsoft Backbone: Global VNet peering utilizes the Microsoft backbone infrastructure for communication between the peered virtual networks. This ensures low latency, high bandwidth, and secure communication as traffic stays within the Microsoft global network.

Direct Communication: Once peered, VMs in VNET1 and VNET2 can communicate with each other using their private IP addresses, as if they were in the same virtual network. Routing is automatically managed by Azure.

Scalability: VNet peering is designed to handle communication between a large number of virtual machines within the peered networks.

Let’s examine why the other options are not the most suitable for this scenario:

Azure ExpressRoute:

Purpose: Azure ExpressRoute is designed to establish private, dedicated connections between your on-premises network and Azure. While ExpressRoute provides a connection to the Microsoft backbone, it is primarily used for hybrid connectivity scenarios, connecting your data center to Azure.

VNet-to-VNet within Azure: While you could technically use ExpressRoute to route traffic between Azure VNets (by backhauling traffic to your on-premises network and then back to Azure), this is highly inefficient, complex, and adds unnecessary latency and cost. VNet peering is the direct and recommended solution for VNet-to-VNet connectivity within Azure. ExpressRoute is not designed for this intra-Azure VNet communication.

A point-to-site VPN:

Purpose: Point-to-site (P2S) VPN connections are designed to connect individual client computers to an Azure virtual network. They are not intended for connecting entire virtual networks to each other or enabling communication between hundreds of VMs in different VNets. P2S VPNs are for remote user access, not VNet-to-VNet communication.

A site-to-site VPN:

Purpose: Site-to-site (S2S) VPN connections can connect an on-premises network to an Azure virtual network, or they can connect two Azure virtual networks.

Gateway Overhead: A VNet-to-VNet VPN is an IPsec tunnel established between the public IP addresses of two VPN gateways. Even though traffic between Azure endpoints generally remains on the Microsoft network, this approach adds gateway hops, encryption overhead, and gateway bandwidth limits, and it is more complex than necessary compared to VNet peering for direct VNet-to-VNet communication within Azure.

Performance and Cost: VPN gateways introduce more latency and have bandwidth limitations compared to VNet peering, which leverages the direct, high-bandwidth nature of the Microsoft backbone. VNet peering is generally more cost-effective for VNet-to-VNet connectivity within Azure.

Conclusion:

For enabling virtual machines on VNET1 and VNET2 to communicate through the Microsoft backbone infrastructure, peering (Global VNet Peering) is the most appropriate, efficient, and recommended solution. It directly addresses the requirement for VNet-to-VNet connectivity within Azure using the backbone, offering performance, security, and scalability.
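For illustration, global VNet peering can be created with a couple of Az PowerShell commands; the resource group names below are assumed, and a peering link must be created in each direction:

# Look up both virtual networks
$vnet1 = Get-AzVirtualNetwork -Name "VNET1" -ResourceGroupName "RG-WestUS"
$vnet2 = Get-AzVirtualNetwork -Name "VNET2" -ResourceGroupName "RG-EastUS"

# Peering is bidirectional only when created from both sides
Add-AzVirtualNetworkPeering -Name "VNET1-to-VNET2" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name "VNET2-to-VNET1" -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id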

Final Answer: The final answer is peering.

50
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses data from two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to move DB1 and DB2 to Azure.
You need to implement Azure services to host DB1 and DB2. The solution must support server-side transactions across DB1 and DB2.
Solution: You deploy DB1 and DB2 to an Azure SQL Database managed instance.
Does this meet the goal?
Yes
No

A

The question asks if deploying DB1 and DB2 to an Azure SQL Database Managed Instance supports server-side transactions across them.

Azure SQL Database Managed Instance (Managed Instance): Managed Instance is designed to provide near-complete compatibility with on-premises SQL Server instances. It offers instance-level features, unlike single Azure SQL Databases which are database-scoped.

SQL Server Instance and Cross-Database Transactions: On-premises SQL Server instances natively support server-side transactions that span multiple databases hosted within the same instance. This is a fundamental feature of SQL Server. You can use standard SQL transaction commands to begin, commit, or rollback transactions that involve operations across different databases residing on the same SQL Server instance.

Managed Instance and Instance-Level Capabilities: Azure SQL Database Managed Instance is designed to emulate an on-premises SQL Server instance environment in the cloud. This includes providing instance-level features and behaviors, such as cross-database querying and, importantly, cross-database transactions.

Transaction Scope in Managed Instance: When you deploy multiple databases (like DB1 and DB2) to a single Azure SQL Database Managed Instance, they are hosted within the same SQL Server instance environment provided by the Managed Instance service. This means that they can leverage the built-in SQL Server capabilities for server-side transactions that span databases on the same instance.
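As a minimal sketch of what this enables (the table names, managed instance host name, and credentials are placeholders, and the SqlServer PowerShell module is assumed), a single batch can update both databases inside one server-side transaction:

$query = @"
BEGIN TRANSACTION;
    UPDATE DB1.dbo.Orders    SET Status  = 'Shipped' WHERE OrderId = 1;
    UPDATE DB2.dbo.Shipments SET Shipped = 1         WHERE OrderId = 1;
COMMIT TRANSACTION;
"@

# Both databases are hosted on the same managed instance, so the transaction is handled server-side
Invoke-Sqlcmd -ServerInstance "mi1.public.abc123.database.windows.net,3342" `
    -Database "DB1" -Credential (Get-Credential) -Query $query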

Analysis of the Proposed Solution:

The solution proposes deploying DB1 and DB2 to an Azure SQL Database Managed Instance. Since Managed Instance is designed to provide SQL Server instance-level compatibility, and SQL Server instances support server-side transactions across databases within the instance, deploying DB1 and DB2 to the same Managed Instance will enable server-side transactions across these databases.

Conclusion:

The proposed solution does meet the goal. Deploying DB1 and DB2 to an Azure SQL Database Managed Instance allows for the implementation of server-side transactions across DB1 and DB2, leveraging the SQL Server instance-level transactional capabilities provided by Managed Instance.

Final Answer: Yes

51
Q

You manage a solution in Azure that consists of a single application which runs on a virtual machine (VM). Traffic to the application has increased dramatically.
The application must not experience any downtime and scaling must be dynamically defined.
You need to define an auto-scale strategy to ensure that the VM can handle the workload.
Which three options should you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Deploy application automatic vertical scaling.
Create a VM availability set.
Create a VM scale set.
Deploy application automatic horizontal scaling.
Deploy a custom auto-scale implementation.

A

Let’s analyze each option in the context of the requirements: no downtime and dynamically defined scaling for an application running on a single VM that is experiencing increased traffic.

Deploy application automatic vertical scaling:

Downtime: Vertical scaling (scaling up the VM size) typically requires a VM reboot, which results in downtime for the application. While some newer Azure VM sizes support resizing without downtime, it is not guaranteed for all sizes and scenarios, and it’s generally less reliable for zero-downtime scaling compared to horizontal scaling.

Dynamic Scaling: Automatic vertical scaling can be dynamically defined based on metrics like CPU utilization or memory usage.

Complete Solution?: No, because it likely incurs downtime, violating the “no downtime” requirement.

Create a VM availability set:

Downtime: Availability sets improve the availability of VMs by distributing them across fault domains and update domains. They protect against hardware failures and planned maintenance, but they do not inherently provide scaling capabilities to handle increased traffic. Availability sets themselves do not dynamically scale resources.

Dynamic Scaling: No, availability sets do not offer autoscaling.

Complete Solution?: No, availability sets do not address the dynamic scaling requirement or the increased traffic problem directly. They are for high availability, not scalability.

Create a VM scale set:

Downtime: VM scale sets are designed for horizontal scaling and are built to minimize downtime during scaling operations. Scale-out operations (adding more instances) in a scale set are generally zero-downtime, especially when combined with a load balancer and proper health probes. Rolling upgrades can also minimize downtime during updates.

Dynamic Scaling: Yes, VM scale sets have built-in autoscaling capabilities. You can define autoscaling rules based on various metrics (CPU, memory, custom metrics, etc.) to dynamically adjust the number of VM instances based on workload.

Complete Solution?: Yes, VM scale sets are a complete and highly recommended solution for both no downtime and dynamically defined scaling for applications in Azure. They enable horizontal scaling, which is ideal for handling increased traffic without service interruption.

Deploy application automatic horizontal scaling:

Downtime: Horizontal scaling, by its nature, aims to avoid downtime during scale-out operations. By adding more instances behind a load balancer, you can handle increased traffic without interrupting service to existing users.

Dynamic Scaling: Yes, automatic horizontal scaling is inherently dynamic. You can automate the process of adding and removing application instances based on load.

Complete Solution?: Yes, deploying automatic horizontal scaling is a conceptually correct and effective solution. In Azure, VM scale sets are the primary way to implement automatic horizontal scaling for VM-based applications.

Deploy a custom auto-scale implementation:

Downtime: The potential for downtime depends entirely on the quality and design of the custom implementation. If meticulously designed, it could achieve zero downtime scaling. However, this is more complex and error-prone than using managed Azure services.

Dynamic Scaling: Yes, a custom implementation can be designed to provide fully dynamic scaling based on any metrics you choose.

Complete Solution?: Yes, theoretically, a custom auto-scale implementation can be a complete solution. However, it is generally less recommended than managed services such as VM scale sets because of the added complexity, development effort, and ongoing maintenance compared to readily available Azure features.

Considering the “Complete Solution” and Azure Best Practices:

The three most appropriate and complete solutions, focusing on Azure managed services and best practices for scalability and high availability, are:

Create a VM scale set: This is the most direct and recommended Azure service for horizontal scaling of VMs and meeting both no downtime and dynamic scaling requirements.

Deploy application automatic horizontal scaling: This conceptually describes the desired scaling approach, and in the Azure context, VM scale sets are the primary mechanism to achieve this.

Deploy a custom auto-scale implementation: While less ideal compared to managed services, a custom implementation could be considered a “complete solution” if designed and implemented correctly, although it adds complexity and overhead.
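Of the three, the VM scale set is the concrete Azure construct, so a minimal Az PowerShell sketch may help; all names are illustrative, and the command prompts for VM administrator credentials. Azure Monitor autoscale rules (for example, scale out when average CPU exceeds a threshold) are then attached to the scale set.

# Creates a scale set behind a load balancer using the simplified parameter set
New-AzVmss `
    -ResourceGroupName "RG-App1" `
    -VMScaleSetName "App1ScaleSet" `
    -Location "EastUS" `
    -VirtualNetworkName "App1Vnet" `
    -SubnetName "App1Subnet" `
    -PublicIpAddressName "App1PublicIp" `
    -LoadBalancerName "App1Lb" `
    -UpgradePolicyMode "Automatic"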

Final Answer: The final answer is:

Create a VM scale set.

Deploy application automatic horizontal scaling.

Deploy a custom auto-scale implementation.

52
Q

You have an Azure subscription that contains the resource groups shown in the following table.
Name Region
RG1 East US
RG2 West US

The subscription contains the storage accounts shown in the following table.
Name Resource group Location Account kind
Storage1 RG1 West US BlobStorage
Storage2 RG2 West US Storage (general purpose v1)
Storage3 RG1 East US Storage V2 (general purpose v2)
You create a Recovery Services vault named Vault1 in RG1 in the West US location.
You need to identify which storage accounts can be used to archive the diagnostics logs of Vault1.
Which storage accounts should you identify?
Storage1 only
Storage2 only
Storage3 only
Storage1 or Storage2 only
Storage1 or Storage3 only

A

To identify which storage accounts can be used to archive diagnostics logs for Recovery Services vault Vault1, we need to consider the following factors:

Location: A storage account used to archive diagnostic logs must be in the same region as the resource that emits the logs. Because Vault1 is in West US, only the West US storage accounts are candidates.

Storage Account Type: Azure Recovery Services Vault diagnostic logs can be archived to general-purpose storage accounts (v1 and v2) and Blob Storage accounts.

Let’s analyze each storage account based on these factors:

Vault1:

Resource Group: RG1

Location: West US (as stated in the question “Vault1 in RG1 in the West US location”)

Storage1:

Resource Group: RG1

Location: West US

Account kind: BlobStorage

Location: Same region as Vault1 (West US).

Account kind: BlobStorage is a compatible type for diagnostic logs.

Conclusion: Storage1 is a valid option.

Storage2:

Resource Group: RG2

Location: West US

Account kind: Storage (general purpose v1)

Location: Same region as Vault1 (West US).

Account kind: General purpose v1 is a compatible type for diagnostic logs.

Conclusion: Storage2 is a valid option.

Storage3:

Resource Group: RG1

Location: East US

Account kind: Storage V2 (general purpose v2)

Location: Different region from Vault1 (East US vs. West US).

Account kind: General purpose v2 is a compatible type for diagnostic logs.

Conclusion: Storage3 is not a valid option because it is in a different region (East US) than Vault1 (West US), and diagnostic logs cannot be archived to a storage account in another region.

Considering the options provided, and keeping only the storage accounts in the same region as Vault1 (West US):

Storage1 only: Valid, but not the most inclusive correct option if Storage2 is also valid.

Storage2 only: Valid, but not the most inclusive correct option if Storage1 is also valid.

Storage3 only: Not valid, because Storage3 is in a different region than Vault1.

Storage1 or Storage2 only: This option includes both Storage1 and Storage2, which are in the same region as Vault1 and have compatible account types. This is the best option that prioritizes same-region storage.

Storage1 or Storage3 only: This option mixes same-region (Storage1) and cross-region (Storage3), and excludes Storage2 which is also in the same region as Vault1.

Because the destination storage account for diagnostic logs must be in the same region as Vault1 (West US), only Storage1 and Storage2 qualify. The account kind is not a blocker for either of them: BlobStorage, general purpose v1, and general purpose v2 accounts can all receive diagnostic logs.

Given the choices, “Storage1 or Storage2 only” is the correct answer because both accounts are in the same region as Vault1 and have compatible account kinds.
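As a rough sketch, archiving the vault’s diagnostic logs to Storage1 could look like the following in Az PowerShell (both resources are in RG1 per the question; note that newer Az.Monitor releases replace Set-AzDiagnosticSetting with New-AzDiagnosticSetting):

$vault   = Get-AzRecoveryServicesVault -ResourceGroupName "RG1" -Name "Vault1"
$storage = Get-AzStorageAccount -ResourceGroupName "RG1" -Name "Storage1"

# Send the vault's diagnostic logs to the same-region storage account for archiving
Set-AzDiagnosticSetting -ResourceId $vault.ID -StorageAccountId $storage.Id -Enabled $true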

Final Answer: The final answer is Storage1 or Storage2 only.

53
Q

You have an Azure subscription named Sub1 that has a subscription ID of 12ab3cd4-5e67-8901-f234-g5hi67jkl8m9.
In Sub1, you create an Azure Storage account named storage1 and a table named Table1.
Which URI should you use to access Table1?
https://storage.core.windons.net/12ab3cd4-5e67-8901-f234-g5hi67jkl8m9/storage1/table1
https://sub1.core.windows.net/storage1/table1
https://table1.table.core.windows.net/storage1
https://storage1.table.core.windows.net/table1

A

The correct URI to access Table1 in the Azure Storage account storage1 follows a specific format for Azure Table Storage. The general structure is:

https://<storage_account_name>.table.core.windows.net/<table>
Where:

<storage_account_name> is the name of your Azure Storage account (in this case, storage1).

<table> is the name of your Azure Table (in this case, Table1).

table.core.windows.net is the domain for Azure Table Storage.
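To double-check the endpoint format, Az PowerShell exposes the table endpoint on the storage account object; the resource group name below is an assumption, since the question does not provide one.

$account = Get-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1"
$account.PrimaryEndpoints.Table      # e.g. https://storage1.table.core.windows.net/

# Tables in the account are then addressed as <table endpoint><table name>
Get-AzStorageTable -Context $account.Context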

Let's evaluate each of the provided options against this correct format:

Option 1: https://storage.core.windons.net/12ab3cd4-5e67-8901-f234-g5hi67jkl8m9/storage1/table1

Incorrect Host: The host must be the storage account name followed by .table.core.windows.net; a generic storage.core.windows.net host does not exist, and “windons” is misspelled.

Incorrect Path with Subscription ID: Including the subscription ID 12ab3cd4-5e67-8901-f234-g5hi67jkl8m9 in the URI path is incorrect. Subscription IDs are not part of a Table Storage URI.

Option 2: https://sub1.core.windows.net/storage1/table1

Incorrect Subdomain: The subdomain must be the storage account name with the table service suffix (storage1.table.core.windows.net), not the subscription name, and the .table segment is missing entirely.

Option 3: https://table1.table.core.windows.net/storage1

Incorrect Subdomain Order: table1.table.core.windows.net has the table name in place of the storage account name in the subdomain. It should be storage_account_name.table.core.windows.net.

Incorrect Path: /storage1 has the storage account name in the path, which is incorrect. The path should contain the table name (or nothing for account-level operations).

Option 4: https://storage1.table.core.windows.net/table1

Correct Host: storage1.table.core.windows.net follows the required <account_name>.table.core.windows.net structure, with the storage account name (storage1) as the subdomain.

Correct Path: /table1 correctly places the table name in the path.

This option matches the standard Azure Table Storage URI format, so it is the correct choice.

Final Answer: The final answer is https://storage1.table.core.windows.net/table1.

54
Q

You monitor Azure virtual machines by using Azure Monitor.
You plan to restart the virtual machines when CPU usage exceeds 95 percent for more than 30 minutes.
You need to create an alert in Azure Monitor to restart the virtual machines. The solution must minimize administrative effort.
Which type of action should you use in the alert?
ITSM
Webhook
Automation Runbook
Logic App

A

To create an Azure Monitor alert that automatically restarts virtual machines when CPU usage exceeds a threshold, you need to use an action type that can trigger an automated process to perform the restart. Let’s evaluate each option based on its suitability for this task and its administrative effort.

ITSM:

Functionality: ITSM (IT Service Management) actions in Azure Monitor are designed to integrate with ITSM tools like ServiceNow, System Center Service Manager, etc. When an alert is triggered, an ITSM action can create a work item (like an incident or ticket) in the connected ITSM system.

Suitability for VM Restart: ITSM actions are primarily for notification and ticket creation for human intervention. They do not directly automate actions within Azure like restarting a virtual machine. They are meant to inform IT support teams, not to automatically remediate issues.

Administrative Effort: While easy to configure for notification purposes, they do not automate the restart, thus not meeting the requirement of automated remediation and not minimizing administrative effort for the restart itself.

Webhook:

Functionality: Webhook actions allow you to trigger an HTTP POST request to a specified URL when an alert is fired. This enables integration with external systems or custom automation scripts.

Suitability for VM Restart: You could use a webhook to trigger a custom script (e.g., hosted in Azure Functions or an external automation system) that then uses Azure APIs to restart the VM.

Administrative Effort: Using webhooks for VM restart requires developing and managing the external service or script that handles the restart logic. This increases administrative effort compared to using built-in Azure automation features.

Automation Runbook:

Functionality: Automation Runbooks, within Azure Automation, allow you to automate tasks in Azure and hybrid environments using PowerShell or Python. Azure Monitor alerts can directly trigger Automation Runbooks.

Suitability for VM Restart: You can create an Azure Automation Runbook that contains the PowerShell commands or Python code to restart an Azure VM. The alert action can then directly invoke this runbook when the CPU threshold is breached.

Administrative Effort: Using Automation Runbooks is a relatively low-effort way to automate tasks within Azure. You can use pre-built cmdlets or modules to manage Azure resources, including restarting VMs. This is a strong contender for minimizing administrative effort while achieving automated VM restarts.
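For example, a minimal runbook body could look like the sketch below; the parameter names are illustrative, a production runbook would read the VM details from the alert payload, and the Automation account is assumed to authenticate with its managed identity.

param(
    [Parameter(Mandatory = $true)] [string] $ResourceGroupName,
    [Parameter(Mandatory = $true)] [string] $VMName
)

# Authenticate with the Automation account's managed identity
Connect-AzAccount -Identity

# Restart the virtual machine that triggered the alert
Restart-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName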

Logic App:

Functionality: Logic Apps is a cloud-based service for automating workflows and integrating apps, data, and services. Logic Apps has connectors for various Azure services, including Azure Compute (Virtual Machines). Azure Monitor alerts can trigger Logic Apps.

Suitability for VM Restart: You can create a Logic App workflow that is triggered by the Azure Monitor alert and uses the Azure Virtual Machines connector to perform a “Restart Virtual Machine” action.

Administrative Effort: Logic Apps provides a visual, low-code interface for building automation workflows. For simple tasks like restarting a VM, using the Logic Apps designer and pre-built connectors can be very straightforward and minimize coding and scripting. Logic Apps are often favored for their ease of use in building automated workflows within Azure services.

Comparison and Best Choice:

Considering the requirement to “minimize administrative effort” and the need to automate VM restarts directly within Azure:

ITSM and Webhook: Require more administrative effort (ITSM for manual intervention, Webhook for custom external automation).

Automation Runbook and Logic App: Both are suitable for automated actions within Azure and can restart VMs.

Between Automation Runbook and Logic App, Logic App is generally considered to require less administrative effort for simple workflows like VM restarts due to its visual designer and pre-built connectors. For basic Azure service automation triggered by alerts, Logic Apps are often favored for their ease of use and quick setup. While Automation Runbooks are also very capable, they might involve slightly more scripting for a simple VM restart workflow compared to Logic Apps’ visual, connector-based approach.

Therefore, Logic App is the most suitable action type to minimize administrative effort for automatically restarting VMs based on Azure Monitor alerts in this scenario.

Final Answer: The final answer is Logic App.

55
Q

You have an Azure subscription that contains an Azure Log Analytics workspace.
You have a resource group that contains 100 virtual machines. The virtual machines run Linux.
You need to collect events from the virtual machines to the Log Analytics workspace.
Which Agent configuration setting should you modify?
Syslog
Linux performance counters
custom fields

A

The correct answer is Syslog.

Explanation:

Syslog Configuration Setting: Syslog is the standard protocol for message logging in Linux and other Unix-like systems. The Azure Log Analytics agent for Linux is designed to collect logs from the Syslog facility.

By modifying the Syslog configuration setting in your Log Analytics workspace (specifically, under the “Agents configuration” or “Data” settings for Linux Data Sources), you can specify which Syslog facilities and severities you want to collect from your Linux virtual machines.

Common Syslog facilities include auth, cron, daemon, kern, local0 through local7, lpr, mail, news, syslog, and user.

Severities range from Emergency to Debug.

Example Configuration: You might configure Syslog to collect:

facility: auth with minimum severity: Warning (to collect authentication-related warnings and errors)

facility: daemon with minimum severity: Error (to collect daemon-related errors)

facility: user with minimum severity: Information (to collect user-level informational events)
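If you prefer scripting the same configuration, the legacy Log Analytics agent’s Syslog data sources can be defined with Az PowerShell, as in the sketch below; the workspace, resource group, and data source names are assumed.

# Collect auth-facility messages of Warning severity and above (severities are individual switches)
New-AzOperationalInsightsLinuxSyslogDataSource `
    -ResourceGroupName "RG1" `
    -WorkspaceName "Workspace1" `
    -Name "syslog-auth" `
    -Facility "auth" `
    -CollectEmergency -CollectAlert -CollectCritical -CollectError -CollectWarning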

Why other options are incorrect:

Linux performance counters: This setting is for collecting performance metrics (CPU, memory, disk I/O, network) from Linux VMs, not events or logs. While performance counters are valuable for monitoring VM performance, they are not the source of event logs that you’d typically collect for application or system troubleshooting.

Custom fields: Custom fields are used to parse and extract specific data within logs that are already being collected. They are not a setting to enable the collection of logs themselves. Custom fields are applied after you’ve configured the agent to collect Syslog or other log types, to further process the ingested log data.

In summary: To collect events (logs) from Linux virtual machines to Azure Log Analytics, you need to modify the Syslog configuration setting in the Log Analytics workspace to specify which Syslog facilities and severities you want to ingest.

Final Answer: The final answer is Syslog.

56
Q

Case study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview

Contoso, Ltd. is a manufacturing company that has offices worldwide. Contoso works with partner organizations to bring products to market.

Contoso products are manufactured by using blueprint files that the company authors and maintains.

Existing Environment

Currently, Contoso uses multiple types of servers for business operations, including the following:

– File servers
– Domain controllers
– Microsoft SQL Server servers

Your network contains an Active Directory forest named contoso.com. All servers and client computers are joined to Active Directory.

You have a public-facing application named App1. App1 is comprised of the following three tiers:

– A SQL database
– A web front end
– A processing middle tier

Each tier is comprised of five virtual machines. Users access the web front end by using HTTPS only.

Requirements

Planned Changes

Contoso plans to implement the following changes to the infrastructure:

– Move all the tiers of App1 to Azure.
– Move the existing product blueprint files to Azure Blob storage.
– Create a hybrid directory to support an upcoming Microsoft 365 migration project.

Technical Requirements

Contoso must meet the following technical requirements:

– Move all the virtual machines for App1 to Azure.
– Minimize the number of open ports between the App1 tiers.
– Ensure that all the virtual machines for App1 are protected by backups.
– Copy the blueprint files to Azure over the Internet.
– Ensure that the blueprint files are stored in the archive storage tier.
– Ensure that partner access to the blueprint files is secured and temporary.
– Prevent user passwords or hashes of passwords from being stored in Azure.
– Use unmanaged standard storage for the hard disks of the virtual machines.
– Ensure that when users join devices to Azure Active Directory (Azure AD), the users use a mobile phone to verify their identity.
– Minimize administrative effort whenever possible.

User Requirements

Contoso identifies the following requirements for users:

– Ensure that only users who are part of a group named Pilot can join devices to Azure AD.
– Designate a new user named Admin1 as the service admin for the Azure subscription.
– Admin1 must receive email alerts regarding service outages.
– Ensure that a new user named User3 can create network objects for the Azure subscription.

HOTSPOT
You need to identify the storage requirements for Contoso.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.
Statements Yes No
Contoso requires a storage account that supports Blob storage.
Contoso requires a storage account that supports Azure Table storage.
Contoso requires a storage account that supports Azure File Storage.

A

To determine the storage requirements for Contoso, we need to review the “Requirements” section of the case study, specifically focusing on planned changes and technical requirements related to storage.

Planned Changes and Technical Requirements related to Storage:

Move the existing product blueprint files to Azure Blob storage: This explicitly states that Contoso needs to use Azure Blob storage for storing blueprint files.

Copy the blueprint files to Azure over the Internet: This indicates that the storage solution needs to be accessible over the internet for data transfer.

Ensure that the blueprint files are stored in the archive storage tier: This mandates the use of the Azure Blob storage Archive tier for cost optimization of infrequently accessed blueprint files.

Ensure that partner access to the blueprint files is secured and temporary: This implies the need for secure, time-limited access to the blueprint files, which can be achieved using Shared Access Signatures (SAS) in Azure Blob storage.

Use unmanaged standard storage for the hard disks of the virtual machines: This specifies the type of storage to be used for VM disks (unmanaged standard storage), but it’s about VM disk storage, not blueprint file storage.

Analyzing each statement:

Statement 1: Contoso requires a storage account that supports Blob storage.

Analysis: The requirement “Move the existing product blueprint files to Azure Blob storage” directly states the need for Blob storage. Azure Blob storage is designed for storing massive amounts of unstructured data, such as documents, media files, and application installers, which aligns with the description of blueprint files.

Conclusion: Yes, Contoso definitely requires a storage account that supports Blob storage.

Statement 2: Contoso requires a storage account that supports Azure Table storage.

Analysis: There is no mention in the requirements or existing environment description about needing Azure Table storage. Azure Table storage is a NoSQL key-value store, suitable for structured non-relational data. The case study focuses on blueprint files (unstructured data for Blob storage) and moving SQL databases. There’s no indication of a need for Table storage.

Conclusion: No, there is no requirement stated in the case study that necessitates Azure Table storage.

Statement 3: Contoso requires a storage account that supports Azure File Storage.

Analysis: Azure File Storage provides fully managed file shares in the cloud that are accessible via the SMB protocol. While the existing environment mentions “File servers,” the planned changes specifically state “Move the existing product blueprint files to Azure Blob storage,” not Azure File Storage. Azure File Storage is typically used for migrating file shares that need to be accessed by applications using standard file system protocols (SMB/NFS). For blueprint files intended for partner access and archive storage, Blob storage is a more suitable and scalable choice.

Conclusion: No, based on the requirements, there’s no explicit need for Azure File Storage. Blob storage is explicitly mentioned and more appropriate for the blueprint files scenario.

Final Answer:

Statements Yes No
Contoso requires a storage account that supports Blob storage. ☑ ☐
Contoso requires a storage account that supports Azure Table storage. ☐ ☑
Contoso requires a storage account that supports Azure File Storage. ☐ ☑

57
Q

You need to move the blueprint files to Azure.
What should you do?
Use the Azure Import/Export service.
Generate a shared access signature (SAS). Map a drive, and then copy the files by using File Explorer.
Use Azure Storage Explorer to copy the files.
Generate an access key. Map a drive, and then copy the files by using File Explorer.

A

To move the blueprint files to Azure Blob storage over the internet, we need to choose a method that is efficient, secure, and minimizes administrative effort, as per the requirements. Let’s analyze each option:

Use the Azure Import/Export service.

Analysis: Azure Import/Export service is designed for transferring large amounts of data to Azure Storage by physically shipping disk drives to an Azure datacenter. This service is most beneficial when network bandwidth is limited, or the data volume is extremely large (terabytes or petabytes).

Suitability: While Import/Export is suitable for large datasets, the requirement states “Copy the blueprint files to Azure over the Internet.” This suggests that internet connectivity is available and should be used. Using Import/Export would involve shipping disks, which is more complex and time-consuming than a network-based transfer over the internet, and likely more administrative effort for this scenario.

Generate a shared access signature (SAS). Map a drive, and then copy the files by using File Explorer.

Analysis: A Shared Access Signature (SAS) provides secure, delegated, time-limited access to Azure Storage resources, which suits the partner-access requirement. However, Azure Blob storage cannot be mapped as a network drive in File Explorer; SMB drive mapping is a feature of Azure Files, not Blob storage. Copying the files through a mapped drive is therefore not a workable or efficient method for uploading the blueprint files to Blob storage.

Suitability: While SAS is good for security, mapping a drive and using File Explorer for copying blueprint files is not an optimal approach for efficiency or minimizing administrative effort, especially for potentially numerous or large blueprint files.

Use Azure Storage Explorer to copy the files.

Analysis: Azure Storage Explorer is a free, standalone application from Microsoft specifically designed for managing Azure Storage resources, including Blob storage. It provides a graphical user interface for efficiently uploading, downloading, and managing blobs. Storage Explorer is optimized for transferring files to and from Azure Storage over the internet. It supports various authentication methods, including storage account access keys and SAS tokens.

Suitability: Azure Storage Explorer is an excellent tool for this task. It is user-friendly, efficient for internet-based transfers, and minimizes administrative effort by providing a visual interface and optimized transfer mechanisms for Azure Blob storage.

Generate an access key. Map a drive, and then copy the files by using File Explorer.

Analysis: A storage account access key grants full access to the storage account, so distributing it for routine file copies is a security risk; SAS tokens are preferred for delegated, temporary access. In addition, Blob storage cannot be mapped as a drive in File Explorer (that capability belongs to Azure Files), so this approach is neither secure nor workable for uploading the blueprint files to Blob storage.

Suitability: While technically possible, using access keys directly and File Explorer is less secure and less efficient than using Azure Storage Explorer. It also does not minimize administrative effort compared to using a tool designed for storage management.

Conclusion:

Azure Storage Explorer is the most appropriate solution for moving the blueprint files to Azure Blob storage over the internet. It is designed for this type of task, is efficient, secure (can use SAS or access keys, but SAS is recommended for better security in general scenarios), and minimizes administrative effort by providing a dedicated tool with a graphical interface for Azure Blob storage management.
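Storage Explorer is a GUI tool, so there is nothing to script for the recommended answer itself; for completeness, the equivalent upload straight into the Archive tier could be scripted with Az PowerShell as sketched below (the storage account, container, and file names are hypothetical).

# Upload a local blueprint file directly to the Archive tier; Storage Explorer performs the
# equivalent operations through its GUI
$ctx = New-AzStorageContext -StorageAccountName "contosoblueprints" -UseConnectedAccount

Set-AzStorageBlobContent -File "C:\Blueprints\plan001.dwg" `
    -Container "blueprints" -Blob "plan001.dwg" `
    -Context $ctx -StandardBlobTier Archive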

Final Answer: Use Azure Storage Explorer to copy the files.

58
Q

You need to implement a backup solution for App1 after the application is moved.
What should you create first?
an Azure Backup Server
a Recovery Services vault
a recovery plan
a backup policy

A

To implement a backup solution for App1 in Azure using Azure Backup, the first step is to create a Recovery Services vault.

Here’s why:

Recovery Services vault:

Central Management: A Recovery Services vault is the foundational, top-level Azure resource for Azure Backup. It acts as a management container for your backup data, backup policies, and protected items.

Prerequisite for Backup: Before you can configure backups for any Azure resources (like VMs, SQL databases, etc.) using Azure Backup, you must first create a Recovery Services vault in your Azure subscription and region.

Policy and Item Association: Backup policies and protected items (like VMs) are associated with a Recovery Services vault. You create backup policies within a vault and then apply them to resources you want to protect within that vault.

Let’s analyze why the other options are not the correct first step:

an Azure Backup Server: Azure Backup Server (MABS) is an on-premises server that you can use to back up on-premises workloads and certain Azure workloads to Azure. While MABS can be part of a hybrid backup strategy, it’s not the first resource you create in Azure to start using Azure Backup for Azure resources. The question is about implementing a backup solution in Azure for App1 after it is moved to Azure, implying we are focusing on Azure-native backup services.

a recovery plan: Recovery plans are used with Azure Site Recovery (ASR) for orchestrating disaster recovery failover and failback of virtual machines. Recovery plans are not related to Azure Backup, which is for data protection and restoration, not DR orchestration.

a backup policy: A backup policy defines the schedule and retention settings for backups. You create backup policies after you have a Recovery Services vault. You cannot create a backup policy without first having a vault to contain it. Backup policies are associated with a Recovery Services vault.

Sequence of Azure Backup Setup:

The typical sequence for setting up Azure Backup for Azure resources is:

Create a Recovery Services vault. (First step)

Configure backup policies within the vault (define schedule, retention).

Configure protected items (select VMs, SQL databases, etc.) and associate them with backup policies within the vault.
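As a brief Az PowerShell sketch of that sequence (the resource names and the use of the built-in DefaultPolicy are illustrative):

# First step: create the vault
New-AzRecoveryServicesVault -ResourceGroupName "RG-App1" -Name "App1Vault" -Location "eastus"

# Subsequent steps: pick a backup policy in the vault and enable protection for each App1 VM
$vault  = Get-AzRecoveryServicesVault -ResourceGroupName "RG-App1" -Name "App1Vault"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy" -VaultId $vault.ID
Enable-AzRecoveryServicesBackupProtection -ResourceGroupName "RG-App1" -Name "App1-Web01" `
    -Policy $policy -VaultId $vault.ID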

Conclusion:

The Recovery Services vault is the essential and first resource you must create in Azure to begin implementing a backup solution using Azure Backup.

Final Answer: The final answer is a Recovery Services vault.

59
Q

Case Study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview. General Overview

Litware, Inc. is a medium-sized finance company. Litware recently acquired a financial services company named Fabrikam, Ltd.

Overview. Physical Locations

Litware has a datacenter in Boston. Fabrikam has a datacenter in San Francisco.

Existing Environment. Identity Environment

The network of Litware contains an Active Directory forest named Litware.com that syncs to an Azure Active Directory (Azure AD) tenant named Litware.com by using Azure AD Connect.

Azure AD Seamless Single Sign-on (Azure AD Seamless SSO) is enabled for the Litware.com tenant.

Users at Litware have a UPN suffix of Litware.com

Litware has an internal certification authority (CA) that is trusted by all devices.

The network of Fabrikam contains an Active Directory forest named fabrikam.com. Users at Fabrikam have a UPN suffix of fabrikam.com.

Existing Environment. Azure Environment

Litware has an Azure subscription named Sub1 that is linked to the Litware.com tenant. Sub1 contains the resources shown in the following table.

Name Type Azure region Configuration
VNet1 Virtual network East US None
KV1 Azure key vault East US None
storage1 Storage account East US BlockBlobStorage
storage2 Storage account East US Storage (general purpose v1); contains 500 GB of NoSQL data in Azure Table storage
VPN1 Virtual network gateway East US Deployed to VNet1
FW1 Azure Firewall East US Deployed to VNet1

Litware has Azure Resource Manager (ARM) templates that deploy Azure Policy definitions and assignments to a management group.

Fabrikam does NOT have an Azure environment.

Existing Environment. On-Premises Environment

The on-premises network of Litware contains the resources shown in the following table.
Name   Operating system      Configuration
DC1    Windows Server 2019   Domain controller; DNS server
WEB1   Windows Server 2016   Internet Information Services (IIS) server; hosts an internet-facing ASP.NET web app named WebApp1

The on-premises network of Fabrikam contains a domain member server named SERVER1 that runs Windows Server 2019.

Existing Environment. Network Environment

Litware has a site-to-site VPN connection to VNet1.

The Litware and Fabrikam datacenters are not connected.

Requirements. Planned Changes

Litware plans to implement the following changes:

– Establish a trust relationship between the Litware and Fabrikam forests.
– Migrate data from the on-premises NoSQL datastores to Azure Table storage.
– Containerize WebApp1 and deploy the app to an Azure Kubernetes Service (AKS) cluster on VNet1.
– Create an Azure blueprint named BP1 and use the blueprint to provision a resource group named RG1.

Requirements. Deployment Requirements

Litware identifies the following deployment requirements:

– The existing ARM templates must be used for deployments to Sub1.
– WebApp1 must be deployed to the AKS cluster without having to change the source code.

Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

– The Fabrikam users must be able to authenticate to the Litware.com tenant by using Azure AD Seamless SSO.
– The Fabrikam users and the Litware users must be able to manage the Azure resources in Sub1.
– Company policy must prohibit the creation of guest user accounts in the Litware.com tenant.
– You must be able to configure deny permissions for RG1 and for the resources in RG1.
– WebApp1 running on the AKS cluster must be able to retrieve secrets from KV1.

Requirements. Security Requirements

Litware identifies the following security requirements:

– On-premises Litware users must access KV1 by using the private IP address of the key vault.
– Azure virtual machines must have all their disks encrypted, including the temporary disks.
– Azure Storage must encrypt all data by using keys issued by the internal CA of Litware.
– Inbound HTTPS traffic to WebApp1 must be inspected for SQL injection attacks.
– The principle of least privilege must be used.

You need to ensure that the NoSQL data is encrypted. The solution must meet the security requirements.
What should you do first?
Upgrade storage2 to StorageV2 (general purpose v2).
Create a new general-purpose v2 storage account.
Create a new Azure Blob storage account.
Modify the Encryption settings of storage2.

A

The question asks what to do first to ensure NoSQL data (currently in Azure Table storage within storage2) is encrypted and meets security requirements. Let’s analyze the options in the context of Azure Storage encryption and the case study requirements.

Existing Environment and Requirements Analysis:

Existing NoSQL Data: 500 GB of NoSQL data in Azure Table storage within storage2.

Storage2 Type: Storage (general purpose v1).

Security Requirement: “Azure Storage must encrypt all data by using keys issued by the internal CA of Litware.” This points to Customer-Managed Keys (CMK) for Azure Storage encryption, specifically using keys from Litware’s internal CA (likely meaning Azure Key Vault integration).

Encryption State of Storage (general purpose v1): General-purpose v1 storage accounts do not support Azure Storage encryption (SSE) with Customer-Managed Keys (CMK); they support only Microsoft-managed (service-managed) keys.

Evaluating each option:

Upgrade storage2 to StorageV2 (general purpose v2).

Analysis: Upgrading storage2 to General-purpose v2 (StorageV2) is a necessary first step. StorageV2 accounts do support Azure Storage encryption (SSE) with Customer-Managed Keys (CMK). General-purpose v1 accounts do not offer this feature.

Benefits: Upgrading to StorageV2 is generally recommended for modern storage features, performance, and cost optimization. It also unlocks the ability to use CMK encryption, which is required for the security requirement.

Impact on NoSQL data: Upgrading a storage account type does not typically cause data loss or require data migration within the same account. Azure handles the upgrade process.

Meets Security Requirement (Partially): Upgrading to StorageV2 makes CMK encryption available, which is a prerequisite for meeting the security requirement; configuring CMK itself is a subsequent step.

Create a new general-purpose v2 storage account.

Analysis: Creating a new StorageV2 account would also allow for CMK encryption. However, it would require migrating the 500 GB of NoSQL data from storage2 (v1) to the new v2 storage account. This data migration is an additional step and effort that might be avoidable if we can simply upgrade storage2.

Suitability as “First Step”: Creating a new account is not the first step if we can upgrade the existing one. Upgrading is generally less disruptive and requires less initial migration effort.

Create a new Azure Blob storage account.

Analysis: Creating a new Blob storage account (BlockBlobStorage account type) is also not the most direct first step for the NoSQL data which is currently in Table storage within storage2. While Blob storage accounts also support CMK encryption, Table storage is a different service and data model than Blob storage. Migrating Table storage data to Blob storage would be a significant data model change and likely not intended based on the case study (which mentions migrating from on-premises NoSQL to Azure Table storage, implying continued use of Table Storage).

Modify the Encryption settings of storage2.

Analysis: You cannot directly “modify the Encryption settings of storage2” to enable Customer-Managed Keys if storage2 is a general-purpose v1 account. General-purpose v1 accounts simply don’t support CMK. This option is premature and not possible as a first step until storage2 is upgraded to StorageV2. After upgrading to StorageV2, then modifying encryption settings to enable CMK would be a valid subsequent step.

Conclusion:

The most logical and efficient first step to ensure NoSQL data encryption and meet the security requirement is to Upgrade storage2 to StorageV2 (general purpose v2). This upgrade is a prerequisite to enabling Customer-Managed Keys encryption for the existing Table storage data within storage2. After the upgrade, you can then modify the encryption settings to configure CMK with Azure Key Vault and Litware’s internal CA keys.

Final Answer: The final answer is Upgrade storage2 to StorageV2 (general purpose v2).
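As a hedged illustration of the full flow (upgrade first, then configure customer-managed keys), the following Azure CLI sketch uses placeholder names: the resource group rg-data, the key name storage-cmk, and the key vault URI are assumptions, while KV1 comes from the case study. In practice the key vault also needs soft delete and purge protection enabled before it can back storage encryption.

# first step: upgrade the general-purpose v1 account to general-purpose v2
az storage account update --resource-group rg-data --name storage2 --set kind=StorageV2 --access-tier=Hot
# give the storage account a managed identity and allow it to use keys in KV1
az storage account update --resource-group rg-data --name storage2 --assign-identity
az keyvault set-policy --name KV1 --object-id <storage2-identity-principal-id> --key-permissions get wrapKey unwrapKey
# create (or import) the key and point the account's encryption at it
az keyvault key create --vault-name KV1 --name storage-cmk --kty RSA
az storage account update --resource-group rg-data --name storage2 --encryption-key-source Microsoft.Keyvault --encryption-key-vault https://kv1.vault.azure.net --encryption-key-name storage-cmk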

60
Q

Case Study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview. General Overview

Litware, Inc. is a medium-sized finance company. Litware recently acquired a financial services company named Fabrikam, Ltd.

Overview. Physical Locations

Litware has a datacenter in Boston. Fabrikam has a datacenter in San Francisco.

Existing Environment. Identity Environment

The network of Litware contains an Active Directory forest named Litware.com that syncs to an Azure Active Directory (Azure AD) tenant named Litware.com by using Azure AD Connect.

Azure AD Seamless Single Sign-on (Azure AD Seamless SSO) is enabled for the Litware.com tenant.

Users at Litware have a UPN suffix of Litware.com

Litware has an internal certification authority (CA) that is trusted by all devices.

The network of Fabrikam contains an Active Directory forest named fabrikam.com. Users at Fabrikam have a UPN suffix of fabrikam.com.

Existing Environment. Azure Environment

Litware has an Azure subscription named Sub1 that is linked to the Litware.com tenant. Sub1 contains the resources shown in the following table.

Name      Type                      Azure region   Configuration
VNet1     Virtual network           East US        None
KV1       Azure key vault           East US        None
storage1  Storage account           East US        BlockBlobStorage
storage2  Storage account           East US        Storage (general purpose v1); contains 500 GB of NoSQL data in Azure Table storage
VPN1      Virtual network gateway   East US        Deployed to VNet1
FW1       Azure Firewall            East US        Deployed to VNet1

Litware has Azure Resource Manager (ARM) templates that deploy Azure Policy definitions and assignments to a management group.

Fabrikam does NOT have an Azure environment.

Existing Environment. On-Premises Environment

The on-premises network of Litware contains the resources shown in the following table.
Name   Operating system      Configuration
DC1    Windows Server 2019   Domain controller; DNS server
WEB1   Windows Server 2016   Internet Information Services (IIS) server; hosts an internet-facing ASP.NET web app named WebApp1

The on-premises network of Fabrikam contains a domain member server named SERVER1 that runs Windows Server 2019.

Existing Environment. Network Environment

Litware has a site-to-site VPN connection to VNet1.

The Litware and Fabrikam datacenters are not connected.

Requirements. Planned Changes

Litware plans to implement the following changes:

– Establish a trust relationship between the Litware and Fabrikam forests.
– Migrate data from the on-premises NoSQL datastores to Azure Table storage.
– Containerize WebApp1 and deploy the app to an Azure Kubernetes Service (AKS) cluster on VNet1.
– Create an Azure blueprint named BP1 and use the blueprint to provision a resource group named RG1.

Requirements. Deployment Requirements

Litware identifies the following deployment requirements:

– The existing ARM templates must be used for deployments to Sub1.
– WebApp1 must be deployed to the AKS cluster without having to change the source code.

Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

– The Fabrikam users must be able to authenticate to the Litware.com tenant by using Azure AD Seamless SSO.
– The Fabrikam users and the Litware users must be able to manage the Azure resources in Sub1.
– Company policy must prohibit the creation of guest user accounts in the Litware.com tenant.
– You must be able to configure deny permissions for RG1 and for the resources in RG1.
– WebApp1 running on the AKS cluster must be able to retrieve secrets from KV1.

Requirements. Security Requirements

Litware identifies the following security requirements:

– On-premises Litware users must access KV1 by using the private IP address of the key vault.
– Azure virtual machines must have all their disks encrypted, including the temporary disks.
– Azure Storage must encrypt all data by using keys issued by the internal CA of Litware.
– Inbound HTTPS traffic to WebApp1 must be inspected for SQL injection attacks.
– The principle of least privilege must be used.

You need to ensure that you can implement Azure AD Seamless SSO for Fabrikam. The solution must meet the following requirements:
– Support the planned changes.
– Meet the authentication and authorization requirements.
What should you do?
Create a new Azure AD tenant named fabrikam.com
From the Fabrikam forest, configure an additional UPN suffix of Litware.com.
From the Fabrikam forest, configure all users to have a UPN suffix of Litware.com.
From the Litware.com tenant, add a custom domain named fabrikam.com.

A

To implement Azure AD Seamless SSO for Fabrikam users to authenticate to the Litware.com tenant, we need to integrate the Fabrikam identities with the Litware.com Azure AD tenant in a way that supports Seamless SSO and avoids creating guest accounts, as per the requirements.

Let’s analyze each option in the context of Azure AD Seamless SSO and the given requirements:

Option 1: Create a new Azure AD tenant named fabrikam.com

Analysis: Creating a new Azure AD tenant for Fabrikam would isolate Fabrikam identities in a separate tenant. This would not allow Fabrikam users to authenticate to the Litware.com tenant using Seamless SSO. Seamless SSO is about authenticating to a specific Azure AD tenant, in this case, Litware.com. A separate Fabrikam tenant would not fulfill this requirement.

Conclusion: Incorrect. This option does not address the requirement for Fabrikam users to authenticate to the Litware.com tenant using Seamless SSO.

Option 2: From the Fabrikam forest, configure an additional UPN suffix of Litware.com.

Analysis: Adding Litware.com as an additional UPN suffix in the Fabrikam on-premises Active Directory is not directly related to enabling Azure AD Seamless SSO in the Litware.com tenant for Fabrikam users. While UPN suffixes matter for user identities, changing the UPN suffix in the Fabrikam forest to Litware.com would be confusing and is not the correct approach for cross-forest authentication; Seamless SSO works with the users' existing UPNs.

Conclusion: Incorrect. This option is not the correct way to enable Seamless SSO for Fabrikam users in the Litware.com tenant.

Option 3: From the Fabrikam forest, configure all users to have a UPN suffix of Litware.com.

Analysis: This is even more incorrect than Option 2. Changing the UPN suffix for all Fabrikam users to Litware.com in their on-premises fabrikam.com forest is disruptive, incorrect identity management, and not related to enabling Seamless SSO for cross-tenant authentication. This would fundamentally change the identity structure in Fabrikam and is not a viable solution.

Conclusion: Incorrect. This option is a highly inappropriate and disruptive action.

Option 4: From the Litware.com tenant, add a custom domain named fabrikam.com.

Analysis: Adding fabrikam.com as a custom domain to the Litware.com Azure AD tenant is the correct first step towards enabling authentication for Fabrikam users in the Litware.com tenant without creating guest accounts.

Domain Verification: Adding a custom domain involves verifying domain ownership, which is a standard procedure for managing identities in Azure AD.

Identity Integration: Adding fabrikam.com as a custom domain allows Fabrikam users to be managed within the Litware.com tenant, for example by synchronizing the fabrikam.com forest to the tenant with Azure AD Connect. Seamless SSO itself is configured on-premises and in Azure AD Connect, but the verified custom domain is a prerequisite for bringing users whose UPN suffix is fabrikam.com into the Litware.com tenant as member accounts. In the context of enabling Seamless SSO for Fabrikam users without guest accounts, adding the custom domain is the logical preparatory step.

Avoid Guest Accounts: By integrating Fabrikam’s domain with Litware.com tenant, you move towards a more integrated identity model rather than relying on guest accounts, which aligns with the company policy.

Although Seamless SSO itself is configured via Azure AD Connect and on-premises AD, the question is asking for a first step that supports the overall goal. Adding the custom domain is a foundational step to integrate Fabrikam identities into the Litware.com Azure AD tenant, which is necessary to enable any form of direct authentication (including Seamless SSO or federation) for Fabrikam users without relying on guest accounts.

Conclusion:

Option 4, From the Litware.com tenant, add a custom domain named fabrikam.com, is the most logical and correct first step to support Azure AD Seamless SSO for Fabrikam users to the Litware.com tenant, while adhering to the requirement of not creating guest user accounts and supporting the planned changes and authentication requirements. It sets the stage for further identity integration configuration.

Final Answer: The final answer is From the Litware.com tenant, add a custom domain named fabrikam.com.
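For illustration only, adding and verifying a custom domain can also be done through Microsoft Graph (the portal wizard is the usual path). The sketch below uses az rest against documented Graph endpoints and assumes you have sufficient Graph permissions (for example, Domain.ReadWrite.All); it is not part of the original question.

# add fabrikam.com as a custom domain in the Litware.com tenant
az rest --method POST --uri https://graph.microsoft.com/v1.0/domains --body '{"id": "fabrikam.com"}'
# retrieve the DNS (TXT/MX) records that must be published in the public fabrikam.com zone
az rest --method GET --uri https://graph.microsoft.com/v1.0/domains/fabrikam.com/verificationDnsRecords
# verify ownership once the DNS record is in place
az rest --method POST --uri https://graph.microsoft.com/v1.0/domains/fabrikam.com/verify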

61
Q

Case Study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview. General Overview

Litware, Inc. is a medium-sized finance company. Litware recently acquired a financial services company named Fabrikam, Ltd.

Overview. Physical Locations

Litware has a datacenter in Boston. Fabrikam has a datacenter in San Francisco.

Existing Environment. Identity Environment

The network of Litware contains an Active Directory forest named Litware.com that syncs to an Azure Active Directory (Azure AD) tenant named Litware.com by using Azure AD Connect.

Azure AD Seamless Single Sign-on (Azure AD Seamless SSO) is enabled for the Litware.com tenant.

Users at Litware have a UPN suffix of Litware.com

Litware has an internal certification authority (CA) that is trusted by all devices.

The network of Fabrikam contains an Active Directory forest named fabrikam.com. Users at Fabrikam have a UPN suffix of fabrikam.com.

Existing Environment. Azure Environment

Litware has an Azure subscription named Sub1 that is linked to the Litware.com tenant. Sub1 contains the resources shown in the following table.

Name      Type                      Azure region   Configuration
VNet1     Virtual network           East US        None
KV1       Azure key vault           East US        None
storage1  Storage account           East US        BlockBlobStorage
storage2  Storage account           East US        Storage (general purpose v1); contains 500 GB of NoSQL data in Azure Table storage
VPN1      Virtual network gateway   East US        Deployed to VNet1
FW1       Azure Firewall            East US        Deployed to VNet1

Litware has Azure Resource Manager (ARM) templates that deploy Azure Policy definitions and assignments to a management group.

Fabrikam does NOT have an Azure environment.

Existing Environment. On-Premises Environment

The on-premises network of Litware contains the resources shown in the following table.
Name   Operating system      Configuration
DC1    Windows Server 2019   Domain controller; DNS server
WEB1   Windows Server 2016   Internet Information Services (IIS) server; hosts an internet-facing ASP.NET web app named WebApp1

The on-premises network of Fabrikam contains a domain member server named SERVER1 that runs Windows Server 2019.

Existing Environment. Network Environment

Litware has a site-to-site VPN connection to VNet1.

The Litware and Fabrikam datacenters are not connected.

Requirements. Planned Changes

Litware plans to implement the following changes:

– Establish a trust relationship between the Litware and Fabrikam forests.
– Migrate data from the on-premises NoSQL datastores to Azure Table storage.
– Containerize WebApp1 and deploy the app to an Azure Kubernetes Service (AKS) cluster on VNet1.
– Create an Azure blueprint named BP1 and use the blueprint to provision a resource group named RG1.

Requirements. Deployment Requirements

Litware identifies the following deployment requirements:

– The existing ARM templates must be used for deployments to Sub1.
– WebApp1 must be deployed to the AKS cluster without having to change the source code.

Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

– The Fabrikam users must be able to authenticate to the Litware.com tenant by using Azure AD Seamless SSO.
– The Fabrikam users and the Litware users must be able to manage the Azure resources in Sub1.
– Company policy must prohibit the creation of guest user accounts in the Litware.com tenant.
– You must be able to configure deny permissions for RG1 and for the resources in RG1.
– WebApp1 running on the AKS cluster must be able to retrieve secrets from KV1.

Requirements. Security Requirements

Litware identifies the following security requirements:

– On-premises Litware users must access KV1 by using the private IP address of the key vault.
– Azure virtual machines must have all their disks encrypted, including the temporary disks.
– Azure Storage must encrypt all data by using keys issued by the internal CA of Litware.
– Inbound HTTPS traffic to WebApp1 must be inspected for SQL injection attacks.
– The principle of least privilege must be used.

DRAG DROP
You need to ensure that the virtual machine disks are encrypted. The solution must meet the security requirements.

Which three actions should you perform in Sub1 in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
Create a key in KV1 and configure a disk encryption set.
Register the Microsoft.Compute encryption provider feature.
Deploy the virtual machines and set Encryption at host to Yes.
Register the Microsoft KeyVault resource provider.
Deploy the virtual machines and enable Azure Disk Encryption.
Answer Area

A

Answer Area:

Register the Microsoft KeyVault resource provider.

Register the Microsoft.Compute encryption provider feature.

Create a key in KV1 and configure a disk encryption set.

Explanation:

To ensure virtual machine disks are encrypted, including temporary disks, and to meet the security requirements, you need to implement Encryption at Host using Customer-Managed Keys (CMK). Here’s the sequence of actions:

Register the Microsoft KeyVault resource provider.

Before you can use Azure Key Vault for Customer-Managed Keys with Disk Encryption Sets, you need to ensure that the Microsoft.KeyVault resource provider is registered in your Azure subscription (Sub1). Resource providers enable specific Azure services within your subscription. Registering the Key Vault resource provider ensures that you can use Key Vault to manage encryption keys.

Register the Microsoft.Compute encryption provider feature.

To use Disk Encryption Sets and Encryption at Host, the Microsoft.Compute encryption provider feature needs to be registered in your subscription. This feature enables the encryption capabilities within Azure Compute that are necessary for Disk Encryption Sets and Encryption at Host. You must register this feature to unlock these advanced encryption options.

Create a key in KV1 and configure a disk encryption set.

Create a key in KV1: You need a key in your Azure Key Vault (KV1) to use as the Customer-Managed Key for Disk Encryption Sets. This key will be used to encrypt the data encryption keys that, in turn, encrypt the VM disks.

Configure a disk encryption set: A Disk Encryption Set (DES) is an Azure resource that manages the encryption keys used for Encryption at Host. You need to create a DES and configure it to use the key you created in KV1. The DES becomes the central management point for the encryption keys used by VMs.

Why these actions are in this sequence:

Resource provider registration must come before using the features provided by those resource providers (Key Vault and Compute Encryption features).

You need a Key Vault and a key within it, along with a Disk Encryption Set configured to use that key, before you can deploy VMs and enable Encryption at Host using that DES.

Why the other actions are not part of this sequence or are incorrect:

Deploy the virtual machines and set Encryption at host to Yes: This is the final step, after you have registered the necessary providers and configured the Disk Encryption Set. You cannot enable Encryption at Host without a Disk Encryption Set configured. Therefore, this action comes after the first three. It is not part of the initial setup sequence.

Deploy the virtual machines and enable Azure Disk Encryption: While Azure Disk Encryption (ADE) encrypts OS and data disks, it does not encrypt temporary disks by default. The requirement specifically states “Azure virtual machines must have all their disks encrypted, including the temporary disks,” which points to Encryption at Host as the required solution, not standard Azure Disk Encryption. Therefore, enabling Azure Disk Encryption is not the correct action to meet this specific requirement.

Final Answer:

Answer Area

Register the Microsoft KeyVault resource provider.

Register the Microsoft.Compute encryption provider feature.

Create a key in KV1 and configure a disk encryption set.
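As a rough Azure CLI sketch of this sequence (and of the deployment step that follows it), the names below are placeholders: rg-compute, des1, vm-cmk, and VM3 are assumptions, while KV1 comes from the case study. The key vault backing a disk encryption set must also have purge protection enabled.

az provider register --namespace Microsoft.KeyVault
az feature register --namespace Microsoft.Compute --name EncryptionAtHost
# create a key in KV1 and a disk encryption set that uses it
az keyvault key create --vault-name KV1 --name vm-cmk --kty RSA
az disk-encryption-set create --resource-group rg-compute --name des1 --key-url <key-identifier-url> --source-vault KV1
az keyvault set-policy --name KV1 --object-id <des1-identity-principal-id> --key-permissions get wrapKey unwrapKey
# final step, outside the three answer actions: deploy a VM with encryption at host
az vm create --resource-group rg-compute --name VM3 --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys --encryption-at-host true --os-disk-encryption-set des1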

62
Q

Case Study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview. General Overview

Litware, Inc. is a medium-sized finance company. Litware recently acquired a financial services company named Fabrikam, Ltd.

Overview. Physical Locations

Litware has a datacenter in Boston. Fabrikam has a datacenter in San Francisco.

Existing Environment. Identity Environment

The network of Litware contains an Active Directory forest named Litware.com that syncs to an Azure Active Directory (Azure AD) tenant named Litware.com by using Azure AD Connect.

Azure AD Seamless Single Sign-on (Azure AD Seamless SSO) is enabled for the Litware.com tenant.

Users at Litware have a UPN suffix of Litware.com

Litware has an internal certification authority (CA) that is trusted by all devices.

The network of Fabrikam contains an Active Directory forest named fabrikam.com. Users at Fabrikam have a UPN suffix of fabrikam.com.

Existing Environment. Azure Environment

Litware has an Azure subscription named Sub1 that is linked to the Litware.com tenant. Sub1 contains the resources shown in the following table.

Name      Type                      Azure region   Configuration
VNet1     Virtual network           East US        None
KV1       Azure key vault           East US        None
storage1  Storage account           East US        BlockBlobStorage
storage2  Storage account           East US        Storage (general purpose v1); contains 500 GB of NoSQL data in Azure Table storage
VPN1      Virtual network gateway   East US        Deployed to VNet1
FW1       Azure Firewall            East US        Deployed to VNet1

Litware has Azure Resource Manager (ARM) templates that deploy Azure Policy definitions and assignments to a management group.

Fabrikam does NOT have an Azure environment.

Existing Environment. On-Premises Environment

The on-premises network of Litware contains the resources shown in the following table.
Name   Operating system      Configuration
DC1    Windows Server 2019   Domain controller; DNS server
WEB1   Windows Server 2016   Internet Information Services (IIS) server; hosts an internet-facing ASP.NET web app named WebApp1

The on-premises network of Fabrikam contains a domain member server named SERVER1 that runs Windows Server 2019.

Existing Environment. Network Environment

Litware has a site-to-site VPN connection to VNet1.

The Litware and Fabrikam datacenters are not connected.

Requirements. Planned Changes

Litware plans to implement the following changes:

– Establish a trust relationship between the Litware and Fabrikam forests.
– Migrate data from the on-premises NoSQL datastores to Azure Table storage.
– Containerize WebApp1 and deploy the app to an Azure Kubernetes Service (AKS) cluster on VNet1.
– Create an Azure blueprint named BP1 and use the blueprint to provision a resource group named RG1.

Requirements. Deployment Requirements

Litware identifies the following deployment requirements:

– The existing ARM templates must be used for deployments to Sub1.
– WebApp1 must be deployed to the AKS cluster without having to change the source code.

Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

– The Fabrikam users must be able to authenticate to the Litware.com tenant by using Azure AD Seamless SSO.
– The Fabrikam users and the Litware users must be able to manage the Azure resources in Sub1.
– Company policy must prohibit the creation of guest user accounts in the Litware.com tenant.
– You must be able to configure deny permissions for RG1 and for the resources in RG1.
– WebApp1 running on the AKS cluster must be able to retrieve secrets from KV1.

Requirements. Security Requirements

Litware identifies the following security requirements:

– On-premises Litware users must access KV1 by using the private IP address of the key vault.
– Azure virtual machines must have all their disks encrypted, including the temporary disks.
– Azure Storage must encrypt all data by using keys issued by the internal CA of Litware.
– Inbound HTTPS traffic to WebApp1 must be inspected for SQL injection attacks.
– The principle of least privilege must be used.
You need to configure Azure AD Seamless SSO for Fabrikam. The solution must meet the authentication and authorization requirements.
What should you install first?
the Azure AD Connect provisioning agent on SERVER1
the Azure AD Connect provisioning agent on DC1
Azure AD Connect in staging mode on SERVER1
an Azure AD Connect primary server on SERVER1

A

To configure Azure AD Seamless SSO for Fabrikam users to authenticate to the Litware.com tenant, the key is to extend the Seamless SSO configuration in the Litware.com tenant to recognize and trust authentication requests coming from the Fabrikam environment. Let’s analyze the options in terms of being the first step in this process.

the Azure AD Connect provisioning agent on SERVER1:

Analysis: The Azure AD Connect provisioning agent is used for Azure AD Connect cloud sync, a newer, lighter-weight synchronization option. While provisioning agents are part of the Azure AD Connect ecosystem, they are not involved in configuring or enabling Azure AD Seamless SSO; Seamless SSO is a feature configured from the main Azure AD Connect application. Installing the provisioning agent on SERVER1 (a Fabrikam domain member server) is therefore not the first step to enable Seamless SSO for Fabrikam users to Litware.com.

Conclusion: Incorrect. The provisioning agent is not the primary component for enabling Seamless SSO.

the Azure AD Connect provisioning agent on DC1:

Analysis: Similar to the previous option, the provisioning agent is not the core component for Seamless SSO. Installing it on DC1 (a Litware domain controller) is also not the correct first step for configuring Seamless SSO for Fabrikam users. Seamless SSO configuration happens within the Azure AD Connect application.

Conclusion: Incorrect. The provisioning agent is not the primary component for enabling Seamless SSO.

Azure AD Connect in staging mode on SERVER1:

Analysis: Installing Azure AD Connect in staging mode on SERVER1 (a domain member server in the fabrikam.com forest) is the most relevant option. Seamless SSO requires the Fabrikam user accounts to be synchronized to the Litware.com tenant, and because the Litware and Fabrikam datacenters are not connected, the existing Litware Azure AD Connect server cannot reach the fabrikam.com forest; an additional Azure AD Connect installation is therefore needed in the Fabrikam environment. Since only one Azure AD Connect server can actively synchronize to a given tenant, the additional server on SERVER1 is installed in staging mode, which also lets you configure and validate the settings without impacting the existing Litware.com sync.

Conclusion: Plausible as a first step for configuration and testing in a non-disruptive manner, even though SERVER1’s domain is not Litware.

an Azure AD Connect primary server on SERVER1:

Analysis: A tenant supports only one Azure AD Connect sync server actively exporting to it, and the Litware.com tenant already has such a server. Installing a new primary Azure AD Connect server on SERVER1 (in the Fabrikam forest) and pointing it at the Litware.com tenant would conflict with the existing installation, so it is not the correct first step; any additional server should be installed in staging mode.

Conclusion: Less likely and potentially disruptive as a first step.

Most Logical First Step (Considering Imperfect Options):

Given the available options, installing Azure AD Connect in staging mode on SERVER1 is the most reasonable first action. It places an Azure AD Connect instance inside the Fabrikam forest, which the existing Litware installation cannot reach, and staging mode avoids any conflict with the Azure AD Connect server that is already actively synchronizing the Litware.com tenant. The other options either involve the provisioning agent (not used for Seamless SSO) or would create a competing primary sync server.

Final Answer: The final answer is Azure AD Connect in staging mode on SERVER1.

63
Q

You are implementing authentication for applications in your company. You plan to implement self-service password reset (SSPR) and multifactor authentication (MFA) in Azure Active Directory (Azure AD).
You need to select authentication mechanisms that can be used for both MFA and SSPR.
Which two authentication methods should you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Authenticator app
Email addresses
App passwords
Short Message Service (SMS) messages
Security questions

A

The correct answers are Authenticator app and Short Message Service (SMS) messages.

Explanation:

Let’s analyze each authentication method and its applicability to both MFA and SSPR:

Authenticator app:

MFA: The Microsoft Authenticator app (and other authenticator apps that support TOTP) is a widely used and secure method for MFA. It can provide push notifications for approval or generate time-based one-time passcodes (TOTP).

SSPR: The Authenticator app can also be used for SSPR. Users can receive push notifications or use TOTP codes to verify their identity during the password reset process.

Conclusion: The Authenticator app is a valid and recommended method for both MFA and SSPR.

Short Message Service (SMS) messages:

MFA: SMS messages are a common method for MFA, where a verification code is sent to the user’s mobile phone via SMS.

SSPR: SMS is also a frequently used method for SSPR. Users can receive a verification code via SMS to their registered mobile phone number to reset their password.

Conclusion: SMS messages are a valid and widely used method for both MFA and SSPR.

Email addresses:

MFA: While email can be used as a fallback MFA method (e.g., sending a verification code to an email address), it is generally considered less secure than other methods like Authenticator app or SMS for MFA in isolation. Email accounts themselves can be compromised.

SSPR: Email addresses are a very common and primary method for SSPR. Users typically receive a password reset link or verification code to their alternate email address or primary email address.

Conclusion: Email addresses are primarily an SSPR method. An email address (especially an alternate email address configured for recovery) can serve as a fallback factor in some configurations, but it is not considered a robust, primary MFA method compared to the Authenticator app or SMS, so it is not one of the two best answers here.

App passwords:

MFA: App passwords are not an authentication method for MFA itself. App passwords are a security feature that allows users who have MFA enabled on their accounts to use older, legacy applications that do not support modern authentication. App passwords are generated after MFA is enabled to bypass MFA prompts for these legacy apps. They are not used as a method to perform MFA itself.

SSPR: App passwords are not used in SSPR processes.

Conclusion: App passwords are not an authentication method for either MFA or SSPR in the way the question intends.

Security questions:

MFA: Security questions were historically used as a form of authentication, including for MFA in some older systems. However, they are now strongly discouraged for MFA due to significant security vulnerabilities (easily guessable, social engineering risks, etc.). Modern MFA strongly prefers more robust methods like Authenticator apps, SMS, or FIDO2 keys.

SSPR: Security questions are a common and still used method for SSPR, although their security is also debated. They are often used as a fallback or secondary SSPR method.

Conclusion: Security questions are primarily an SSPR method and are neither recommended nor a best practice for MFA in modern security contexts. They remain usable for SSPR (and were historically used in some MFA scenarios, which is now discouraged), but they are not a strong or preferred method for both.

Selecting the Best Two for “Both MFA and SSPR”:

Considering the most robust, widely used, and recommended methods for both MFA and SSPR, the best two choices from the list are:

Authenticator app

Short Message Service (SMS) messages

These two methods are consistently strong and commonly deployed for both modern MFA and SSPR implementations in Azure AD.

Final Answer: The final answer is Authenticator app and Short Message Service (SMS) messages.

64
Q

Your company has the groups shown in the following table.
Group Number of members
Managers 10
Sales 100
Development 15
The company has an Azure subscription that is associated with an Azure Active Directory (Azure AD) tenant named contoso.com.
An administrator named Admin1 attempts to enable Enterprise State Roaming for all the users in the Managers group.
Admin1 reports that the options for Enterprise State Roaming are unavailable from Azure AD.
You verify that Admin1 is assigned the Global administrator role.
You need to ensure that Admin1 can enable Enterprise State Roaming.
What should you do?
Assign an Azure AD Privileged Identity Management (PIM) role to Admin1.
Purchase an Azure Rights Management (Azure RMS) license for each user in the Managers group.
Enforce Azure Multi-Factor Authentication (MFA) for Admin1.
Purchase an Azure AD Premium P1 license for each user in the Managers group.

A

The problem states that Admin1, a Global administrator, cannot access the Enterprise State Roaming settings in Azure AD. The goal is to enable Admin1 to configure Enterprise State Roaming. Let’s evaluate each option:

Assign an Azure AD Privileged Identity Management (PIM) role to Admin1:

PIM is used to manage, control, and monitor access to privileged roles. Admin1 is already assigned the Global administrator role, which is the highest privileged role in Azure AD and inherently should have permissions to manage all features, including Enterprise State Roaming. PIM is about managing the use of privileged roles, not granting access to features that a Global Admin should already possess. Assigning a PIM role wouldn’t directly address the issue of the settings being unavailable if the Global Administrator role already grants the necessary permissions.

Purchase an Azure Rights Management (Azure RMS) license for each user in the Managers group:

Azure Rights Management (Azure RMS), now part of Azure Information Protection, is for protecting sensitive documents and emails. It is not related to Enterprise State Roaming licensing or enabling admin access to Enterprise State Roaming settings. Azure RMS licenses are for information protection features, not device state roaming.

Enforce Azure Multi-Factor Authentication (MFA) for Admin1:

Enforcing MFA for Admin1 is a good security practice, but it is not related to the availability of Enterprise State Roaming settings. MFA is an authentication security measure, not a feature licensing or access enabler. MFA doesn’t unlock features in Azure AD; it secures access.

Purchase an Azure AD Premium P1 license for each user in the Managers group:

Enterprise State Roaming licensing requirement: Azure AD Enterprise State Roaming is a feature that requires Azure AD Premium P1 or P2 licenses for the users who will be using the feature. While a Global Administrator has the permission to configure the feature, the feature itself might be gated by the licensing of the users who will benefit from it.

Feature Availability based on Licensing: If Enterprise State Roaming is not licensed for any users in the tenant, it’s possible that the Azure AD portal interface might indeed disable or hide the settings related to Enterprise State Roaming, even for a Global Administrator. This is because the feature is not considered “enabled” for the tenant if no users have the required licenses.

Enabling the Feature: Purchasing Azure AD Premium P1 licenses for the target users (the Managers group in this case) would fulfill the licensing prerequisite for Enterprise State Roaming. Once the licenses are assigned, the Enterprise State Roaming feature should become fully enabled in the tenant, and the settings should become accessible to Admin1 (as a Global Administrator).

Conclusion:

The most likely reason why the Enterprise State Roaming options are unavailable to Admin1, even with Global Administrator role, is that the Azure AD tenant or the intended users are not licensed for Azure AD Premium P1 or P2, which is a prerequisite for using Enterprise State Roaming. Purchasing Azure AD Premium P1 licenses for the Managers group (the intended users of Enterprise State Roaming) is the action that directly addresses this licensing prerequisite and would most likely enable the feature and make its settings accessible to Admin1.

Final Answer: The final answer is Purchase an Azure AD Premium P1 license for each user in the Managers group.
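As a hedged illustration, licenses can be assigned to the Managers group (group-based licensing) through Microsoft Graph; the group object ID and the skuId are placeholders you would look up first, and AAD_PREMIUM is the SKU part number that corresponds to Azure AD Premium P1.

# find the skuId for the Azure AD Premium P1 SKU in the tenant
az rest --method GET --uri "https://graph.microsoft.com/v1.0/subscribedSkus?\$select=skuId,skuPartNumber"
# assign that license to the Managers group so its members are covered
az rest --method POST --uri "https://graph.microsoft.com/v1.0/groups/<managers-group-object-id>/assignLicense" --body '{"addLicenses": [{"skuId": "<aad-premium-p1-skuId>", "disabledPlans": []}], "removeLicenses": []}'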

65
Q

HOTSPOT
You plan to implement an access review to meet the following requirements:

The access review must be enforced until otherwise configured.
Each user or group that has access to the Azure environment must be in the scope of the access review.
The access review must be completed within two weeks.
A lack of response must not cause changes in the operational environment.

An administrator creates the access review shown in the answer area.

Which two sections of the access review should you modify to meet the requirements? To answer, select the appropriate sections in the answer area.

NOTE: Each correct selection is worth one point.
Create an access review
A.
Review name: Review1
Description:
B.
Start date: 09/16/2020
Frequency: Quarterly
Duration (in days): [Slider] 25
End: Never [End by Occurrences]
Number of times:
End date*: 10/16/2020
C.
Users
Users to review: Members of a group
Scope:
Guest users only
Everyone
D.
Group
Group1
E.
Reviewers
Reviewers: Group owners
F.
Programs
Link to program
Default Program
G.
Upon completion settings

Auto apply results to resource: [Enable]
If reviewers don’t respond: Remove access [Dropdown]
(Preview) Action to apply on denied users: Remove user’s membership from the resource
H.
Advanced settings

Show recommendations: [Enable] [Disable]
Require reason on approval: [Enable] [Disable]
Mail notifications: [Enable] [Disable]
Reminders: [Enable] [Disable]
(Preview) Additional content for reviewer email:

A

To determine which two sections of the access review should be modified, let’s review each requirement against the current configuration and identify the sections needing changes.

Requirements:

Requirement 1: The access review must be enforced until otherwise configured.

Current Configuration (Section B): "End: Never" - This configuration ALREADY meets this requirement. The review is set to run indefinitely until explicitly changed.

Requirement 2: Each user or group that has access to the Azure environment must be in the scope of the access review.

Current Configuration (Sections C and D): "Users to review: Members of a group", "Scope: Guest users only", "Group: Group1". This configuration does NOT meet the requirement. It is currently scoped to review only guest users who are members of a specific group (Group1). To meet the requirement that each user or group with access is reviewed, the scope needs to be broadened to Everyone, not just guest users in one group.

Requirement 3: The access review must be completed within two weeks.

Current Configuration (Section B): "Duration (in days): [Slider] 25". This configuration does NOT meet the requirement. The duration is set to 25 days, which is longer than two weeks (14 days). It needs to be changed to 14 days or less.

Requirement 4: A lack of response must not cause changes in the operational environment.

Current Configuration (Section G): “If reviewers don’t respond: Remove access [Dropdown]”. This configuration is NOT meeting the requirement. Currently, if reviewers don’t respond, access is set to be removed. This needs to be changed to an action that does not remove access on no response.

Sections to Modify:

Based on the analysis above, the two sections that directly address requirements that are NOT currently met in the provided configuration are:

B. Start date, Frequency, Duration (in days), End: This section needs to be modified to change the Duration (in days) to 14 to meet Requirement 3 (completed within two weeks).

G. Upon completion settings: This section needs to be modified in the "If reviewers don't respond" dropdown. The current setting "Remove access" needs to be changed to an option such as "No change"/"Take no action" to meet Requirement 4 (a lack of response must not cause changes).

While Sections C and D (Users, Scope, and Group) would also need modification to meet Requirement 2 (all users in scope), the question asks for only two sections. Sections B and G are the most direct and critical modifications because they address explicit, quantifiable requirements about the review duration and the action applied when reviewers do not respond, both of which are configured in those sections.

Therefore, the two sections to modify are B and G.

Final Answer: The final answer is B and G.

66
Q

HOTSPOT
You have a hierarchy of management groups and Azure subscriptions as shown in the following table.
Tenant Root Group
├── ManagementGroup1
│ └── Subscription1
└── ManagementGroup2
└── Subscription2

You create the Azure resources shown in the following table.
Name Type Created in
RG1 Resource group Subscription1
RG2 Resource group Subscription2
VM2 Virtual machine RG2

You have the Owner role. You assign roles to users as shown in the following table.
User name Role On resource
User1 Contributor ManagementGroup1
User2 Contributor ManagementGroup2
User3 Reader Tenant Root Group
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.
Statements
You can remove User1 from the Contributor role for RG1.
User2 can delete VM2.
You can add User3 as a Contributor for RG1.

A

Let’s analyze each statement based on Azure RBAC role inheritance and the provided hierarchy.

Statement 1: You can remove User1 from the Contributor role for RG1.

Analysis: The scenario states that you have the Owner role but does not specify the scope; assume it is assigned at a scope high enough (for example, the Tenant Root Group) to manage the resources and role assignments involved. Note that role assignments are inherited downward only, so an Owner assignment at a subscription would not grant rights at the management groups above it. The Owner role grants full access to manage all resources, including role assignments.

User1’s Role: User1 is assigned the Contributor role at ManagementGroup1. Subscription1 is a child of ManagementGroup1. Role assignments are inherited down the management group hierarchy. Therefore, User1 inherits the Contributor role on Subscription1 and all resources within Subscription1, including RG1 and resources within RG1.

Your Permissions: As an Owner at the subscription level, you have permissions to manage RBAC at Subscription1 and below. This includes removing role assignments.

Conclusion: Yes. With the Owner role at a sufficiently high scope, you can remove User1's Contributor access to RG1. Because User1's assignment was created at ManagementGroup1 and only inherited by RG1, the assignment is removed at the ManagementGroup1 scope, which removes the inherited Contributor access to RG1 and everything else under that management group.

Statement 2: User2 can delete VM2.

Analysis: User2 is assigned the Contributor role at ManagementGroup2. Subscription2 is a child of ManagementGroup2. VM2 is in RG2, which is in Subscription2.

Contributor Role Permissions: The Contributor role allows users to create and manage Azure resources but does not grant permissions to manage role assignments (RBAC). However, the Contributor role does grant permissions to perform most operations on resources, including deleting virtual machines.

Scope of Contributor Role: User2’s Contributor role assignment at ManagementGroup2 is inherited by Subscription2 and all resources within Subscription2, including RG2 and VM2.

Conclusion: Yes, User2, with the Contributor role at ManagementGroup2, inherits Contributor permissions on VM2 and therefore can delete VM2.

Statement 3: You can add User3 as a Contributor for RG1.

Analysis: You are an Owner at the subscription level (and above). User3 is assigned the Reader role at the Tenant Root Group. Role assignments are inherited down the management group hierarchy, but Reader role does not grant write permissions.

Your Permissions (Owner): As an Owner, you have full control over RBAC within your subscription and below. You can assign any role to any user or group within your scope.

User3's Current Role (Reader): User3's Reader role at the Tenant Root Group is inherited down to Subscription1 and RG1. This inherited Reader role does not prevent you, as an Owner, from assigning an additional role (like Contributor) to User3 at a lower scope (like RG1). Azure RBAC is additive, so User3 would have the combined permissions of Reader (inherited) and Contributor (assigned at RG1) on the resources in RG1.

Adding Contributor Role: You can add User3 as a Contributor at RG1. This would grant User3 Contributor permissions specifically for RG1, overriding the inherited Reader role for resources within RG1.

Conclusion: Yes, as an Owner, you have permissions to assign the Contributor role to User3 at RG1.

Final Answer:

Statements Yes No
You can remove User1 from the Contributor role for RG1. ☑ ☐
User2 can delete VM2. ☑ ☐
You can add User3 as a Contributor for RG1. ☑ ☐
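For illustration, the statements above map to Azure CLI operations such as the following; the user principal names are assumptions, and the management group scope reflects where User1's assignment was originally created.

# statement 3: grant User3 Contributor at the RG1 scope (in addition to the inherited Reader role)
az role assignment create --assignee user3@litware.com --role Contributor --resource-group RG1
# statement 1: find User1's Contributor assignment and remove it at the scope where it was created
az role assignment list --assignee user1@litware.com --all --output table
az role assignment delete --assignee user1@litware.com --role Contributor --scope /providers/Microsoft.Management/managementGroups/ManagementGroup1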

67
Q

You have an Azure subscription that includes the resources shown in the following table.
Name Resource type
VM1 Virtual machine
VM2 Virtual machine
RG1 Resource group
You attempt to add a role assignment to RG1 as shown in the following exhibit.
Add role assignment
Role
Reader
Assign access to
Azure AD user, group, or service principal
Select
VM
VM1
Selected members:
No members selected. Search for and add one or more
members you want to assign to the role for this resource.
Learn more about RBAC

What should you do to ensure that you can assign VM2 the Reader role for the resource group?
Configure just in time (JIT) VM access on VM2.
Configure Access control (IAM) on VM2.
Assign a managed identity to VM2.
Modify the Reader role at the subscription level.

A

Answer Area:

Configure Access control (IAM) on VM2.

Explanation:

The question asks what needs to be done to ensure that you can assign VM2 the Reader role for the resource group RG1. In the exhibit, a role assignment is being added on RG1 and a virtual machine is being selected as the member, but only VM1 appears in the search results; VM2 is not available for selection.

The intended action is to grant VM2 permission to read the resources in RG1. To achieve this with Azure Role-Based Access Control (RBAC), VM2 must be a security principal that can be assigned roles.

Let’s analyze each option:

Configure just in time (JIT) VM access on VM2: JIT VM access is a security feature that controls and restricts inbound access to VMs for management ports like RDP and SSH. It’s not related to assigning RBAC roles for resource access. JIT helps secure access to the VM, not access from the VM to other resources.

Configure Access control (IAM) on VM2: This option is the most directly relevant and correct. Access control (IAM) on VM2 is where you would configure Managed Identities for VM2.

Managed Identities for Azure Resources: To allow Azure resources (like VMs) to authenticate to Azure services and be assigned RBAC roles, you need to enable Managed Identities for those resources.

Enabling a Managed Identity: For VM2, you enable either a system-assigned or a user-assigned managed identity (in the Azure portal this is done from the VM's Identity settings, next to Access control (IAM) on the VM blade).

VM as a Security Principal: Once a Managed Identity is enabled for VM2, VM2 becomes a service principal in your Azure AD tenant. This service principal can then be assigned Azure RBAC roles, just like users, groups, or service principals registered as applications.

Assigning Reader Role to VM2’s Managed Identity: After enabling Managed Identity on VM2, you would then go to the Access control (IAM) blade of RG1 (not VM2’s IAM blade as suggested by the misleading exhibit UI) and add a role assignment. When selecting the principal, you would search for and select the Managed Identity of VM2 (which will appear as the VM name). You would then assign the Reader role to this Managed Identity at the scope of RG1.

Assign a managed identity to VM2: This is essentially the same as “Configure Access control (IAM) on VM2” because configuring Access control (IAM) on a VM is the process to assign a managed identity. This option is also correct and reinforces the previous one. However, “Configure Access control (IAM) on VM2” is slightly more descriptive of the action within the Azure portal.

Modify the Reader role at the subscription level: Modifying the Reader role definition itself is unnecessary and incorrect. The Reader role already provides read permissions. The issue is not the role definition, but how to assign it to a VM so that the VM can have read permissions on RG1. Modifying the role itself is not the solution to assigning it to a VM principal.

Conclusion:

The most direct and necessary action to enable assigning the Reader role to VM2 for RG1 is to Configure Access control (IAM) on VM2 to enable a Managed Identity for VM2. This makes VM2 a valid security principal that can be assigned RBAC roles.

Final Answer: The final answer is Configure Access control (IAM) on VM2.

68
Q

You have an Azure Active Directory (Azure AD) tenant linked to an Azure subscription. The tenant contains a group named Admins.
You need to prevent users, except for the members of Admins, from using the Azure portal and Azure PowerShell to access the subscription.
What should you do?
From Azure AD, configure the User settings.
From Azure AD, create a conditional access policy.
From the Azure subscription, assign an Azure policy.
From the Azure subscription, configure Access control (IAM).

A

The correct answer is From Azure AD, create a conditional access policy.

Explanation:

Let’s analyze each option and why Conditional Access is the most suitable solution:

From Azure AD, create a conditional access policy.

Conditional Access Policies in Azure AD: Conditional Access is a powerful tool in Azure AD that allows you to create policies to control access to cloud applications based on various conditions. These conditions can include:

Users and groups: You can target specific users or groups for the policy. In this case, you can create a policy that applies to all users and then exclude the ‘Admins’ group.

Cloud apps or actions: Conditional Access can target specific applications. For managing Azure subscriptions, the relevant “cloud app” to target is “Microsoft Azure Management”. This application represents access to Azure management interfaces, including the Azure portal and Azure PowerShell (and Azure CLI, and other management tools).

Access controls: You can define access controls, such as:

Block access: Completely prevent access.

Grant access: Allow access, possibly with conditions like requiring multi-factor authentication or a compliant device.

Meeting the Requirement: By creating a Conditional Access policy with the following configuration, you can effectively meet the requirement:

Users and groups: Include “All users” and exclude the “Admins” group.

Cloud apps or actions: Select “Microsoft Azure Management”.

Access controls: Select “Block access”.

This policy will block all users except members of the ‘Admins’ group from accessing “Microsoft Azure Management”, which includes the Azure portal and Azure PowerShell.
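
For reference, such a policy can also be created programmatically. The following is a minimal sketch using the Microsoft Graph PowerShell SDK; the group object ID and policy name are placeholders, and the application ID 797f4846-ba00-4fd7-ba43-dac1f8f63013 is the well-known ID for Microsoft Azure Management (Windows Azure Service Management API), which you should verify in your own tenant.

# Sketch: block Microsoft Azure Management for everyone except the Admins group.
# Requires the Microsoft.Graph module and Policy.ReadWrite.ConditionalAccess consent.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$adminsGroupId = "<object ID of the Admins group>"   # placeholder

$policy = @{
    displayName = "Block Azure management except Admins"
    state       = "enabled"
    conditions  = @{
        users        = @{
            includeUsers  = @("All")
            excludeGroups = @($adminsGroupId)
        }
        applications = @{
            # Microsoft Azure Management covers the Azure portal, Azure PowerShell, and Azure CLI.
            includeApplications = @("797f4846-ba00-4fd7-ba43-dac1f8f63013")
        }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("block")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy

Testing the policy in report-only mode first (state = "enabledForReportingButNotEnforced") is a common precaution before enforcing a block.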

From Azure AD, configure the User settings.

User settings in Azure AD: User settings in Azure AD primarily control features like:

External collaboration settings (guest access)

User and guest invitations

License management

User profile settings

Self-service group management

User settings are not designed to control access to Azure subscriptions or Azure management interfaces based on group membership and the method of access (portal/PowerShell). They are more about user lifecycle management and collaboration features within Azure AD itself.

From the Azure subscription, assign an Azure policy.

Azure Policies: Azure Policies are used to enforce organizational standards and assess compliance for Azure resources. They control what types of resources can be deployed, configured, and managed within Azure subscriptions and resource groups.

Scope of Azure Policies: Azure Policies are applied to subscriptions, resource groups, or management groups to govern resources. They are not designed to control user access to the Azure portal or Azure PowerShell based on group membership. Azure Policy is for resource governance, not user access control at the portal level.

From the Azure subscription, configure Access control (IAM).

Access control (IAM) in Azure subscriptions: Access Control (IAM) in Azure subscriptions is used for Role-Based Access Control (RBAC). RBAC controls authorization – what actions users and groups can perform on Azure resources once they have access.

Limitations of IAM for Access Prevention: While you can use RBAC to grant or deny specific permissions to resources, IAM in the Azure subscription is not designed to prevent all access to the Azure portal or Azure PowerShell for entire groups of users. IAM controls what you can do, not whether you can access the portal/PowerShell at all. IAM is about resource-level authorization, not preventing entry to management interfaces.

Conclusion:

Conditional Access policies in Azure AD are the appropriate tool to control access to the Azure portal and Azure PowerShell based on group membership. They offer the necessary conditions (user/group targeting, application targeting) and access controls (block access) to meet the requirement effectively.

Final Answer: The final answer is From Azure AD, create a conditional access policy.

69
Q

You have Azure virtual machines deployed to three Azure regions. Each region contains a single virtual network that has four virtual machines on the same subnet. Each virtual machine runs an application named App1. App1 is accessible by using HTTPS. Currently, the virtual machines are inaccessible from the internet.
You need to use Azure Front Door to load balance requests for App1 across all the virtual machines.
Which additional Azure service should you provision?
Azure Traffic Manager
an internal Azure Load Balancer
a public Azure Load Balancer
Azure Private Link

A

The correct answer is a public Azure Load Balancer.

Explanation:

Here’s why a public Azure Load Balancer is the necessary additional service and why the other options are not as suitable in this scenario:

Public Azure Load Balancer:

Purpose: A public Azure Load Balancer is designed to provide load balancing for traffic originating from the internet and distribute it to backend servers within Azure.

Functionality in this scenario:

Backend Pool: You would create a backend pool in the public Azure Load Balancer that consists of the virtual machines running App1 within each Azure region.

Frontend IP Configuration: You would configure a public IP address as the frontend IP for the load balancer. This public IP address will be the endpoint that Front Door will target.

Load Balancing Rules: You would create load balancing rules to distribute HTTPS traffic (port 443) to the backend VMs in each region.

Health Probes: You would configure health probes to monitor the health of App1 on each VM in the backend pool.

Integration with Front Door: You would then configure Azure Front Door to use the public IP address of the public Azure Load Balancer in each region as a backend pool. Front Door will then distribute global traffic across these regional public load balancers. The public load balancers will, in turn, distribute traffic within each region to the VMs.

Internet Accessibility: The public Azure Load Balancer is crucial for making the backend VMs accessible from the internet via Front Door. Front Door needs a publicly reachable endpoint to send traffic to in each region, and a public load balancer provides exactly that.
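
As a sketch of the regional building block described above, the following Az PowerShell commands provision a Standard public load balancer with an HTTPS rule and health probe. All names, the resource group, and the region are placeholder assumptions, and the App1 VMs would still need to be added to the backend pool separately.

# Sketch: a Standard public load balancer front-ending the App1 VMs in one region.
$rg  = "RG-App1-EastUS"        # placeholder resource group
$loc = "eastus"                # placeholder region

$pip = New-AzPublicIpAddress -ResourceGroupName $rg -Name "lb-app1-pip" `
    -Location $loc -Sku Standard -AllocationMethod Static

$frontend = New-AzLoadBalancerFrontendIpConfig -Name "fe-app1" -PublicIpAddress $pip
$backend  = New-AzLoadBalancerBackendAddressPoolConfig -Name "be-app1"
$probe    = New-AzLoadBalancerProbeConfig -Name "probe-https" -Protocol Tcp `
    -Port 443 -IntervalInSeconds 15 -ProbeCount 2
$rule     = New-AzLoadBalancerRuleConfig -Name "rule-https" `
    -FrontendIpConfiguration $frontend -BackendAddressPool $backend -Probe $probe `
    -Protocol Tcp -FrontendPort 443 -BackendPort 443

New-AzLoadBalancer -ResourceGroupName $rg -Name "lb-app1-eastus" -Location $loc `
    -Sku Standard -FrontendIpConfiguration $frontend -BackendAddressPool $backend `
    -Probe $probe -LoadBalancingRule $rule

The public IP address of this load balancer is what you would then register as the regional backend in the Front Door configuration.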

Let’s examine why the other options are not the best fit:

Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. While it can distribute traffic across regions, it operates at the DNS level. Azure Front Door is already designed to be a global, more advanced load balancer that operates at layer 7 (HTTP/HTTPS) and provides features like WAF, caching, and more granular routing than Traffic Manager. Using Traffic Manager in addition to Front Door for basic load balancing of VMs in this scenario is redundant and not the intended architecture. Front Door is meant to replace the need for Traffic Manager in many web application scenarios.

An internal Azure Load Balancer: An internal Azure Load Balancer is used to distribute traffic within an Azure virtual network or across peered virtual networks. It is not designed to make resources directly accessible from the internet. Front Door needs to communicate with a publicly accessible endpoint in each region. An internal load balancer would not provide this internet-facing capability.

Azure Private Link: Azure Private Link provides private connectivity to Azure PaaS services and customer-owned services over the Azure backbone network. It’s about private access, not making VMs accessible from the internet. Private Link is not relevant to making the VMs internet-accessible via Front Door.

Conclusion:

To make App1 accessible from the internet via Azure Front Door and load balance requests across VMs in each region, you need to provision a public Azure Load Balancer in each Azure region. Front Door will then use the public IP addresses of these load balancers as backend pools to distribute global traffic, while the public load balancers will distribute traffic within each region to the VMs.

Final Answer: The final answer is a public Azure Load Balancer.

70
Q

You have Azure virtual machines that have Update Management enabled. The virtual machines are configured as shown in the following table.
Name Operating system Resource group Location
VM1 Windows Server 2012 R2 RG1 East US
VM2 Windows Server 2016 RG1 West US
VM3 Windows Server 2019 RG2 West US
VM4 Red Hat Enterprise Linux 7.7 RG2 West US
VM5 Ubuntu Server 18.04 LTS RG1 East US
VM6 CentOS-based 7.7 RG1 East US

You need to ensure that all critical and security updates are applied to each virtual machine every month.
What is the minimum number of update deployments you should create?
4
6
2
1

A

To determine the minimum number of update deployments required, we need to consider the constraints of Azure Update Management and the requirements for applying updates to the VMs.

Azure Update Management has the following key characteristics relevant to this question:

Operating System Specificity: Update deployments in Azure Update Management are operating system specific. You cannot target both Windows and Linux VMs in a single update deployment. You need separate deployments for Windows and Linux operating systems.

Scope: Update deployments can target VMs across different:

Resource Groups: You can include VMs from multiple resource groups in a single deployment.

Locations (Regions): You can include VMs from different Azure regions in a single deployment.

Based on these characteristics and the VM configuration provided:

Categorize VMs by Operating System:

Windows VMs:

VM1: Windows Server 2012 R2 (RG1, East US)

VM2: Windows Server 2016 (RG1, West US)

VM3: Windows Server 2019 (RG2, West US)

All three are Windows Server based. They can be included in a single update deployment targeting Windows VMs.

Linux VMs:

VM4: Red Hat Enterprise Linux 7.7 (RG2, West US)

VM5: Ubuntu Server 18.04 LTS (RG1, East US)

VM6: CentOS-based 7.7 (RG1, East US)

All three are Linux distributions. They can be included in a single update deployment targeting Linux VMs.

Determine Minimum Deployments:

Since update deployments are OS-specific, we need at least one deployment for Windows VMs and one deployment for Linux VMs to cover all virtual machines.

Deployment 1: Windows Updates - Target VMs: VM1, VM2, VM3. Configure to apply “Critical” and “Security” updates monthly.

Deployment 2: Linux Updates - Target VMs: VM4, VM5, VM6. Configure to apply “Critical” and “Security” updates monthly.

Evaluate Options:

4: Too many. We don’t need separate deployments based on Resource Group or Location.

6: Too many. We don’t need separate deployments for each VM.

2: This is the correct minimum. We need two deployments to separate Windows and Linux VMs.

1: Too few. A single deployment cannot target both Windows and Linux VMs.

Conclusion:

The minimum number of update deployments required is 2: one deployment for all Windows VMs and one deployment for all Linux VMs. Resource group and location differences do not necessitate additional deployments.

Final Answer: The final answer is 2.

71
Q

You have an Azure subscription.
You create a custom role in Azure by using the following Azure Resource Manager template.
{
"Name": "Role1",
"Id": "88888888-8888-8888-8888-888888888888",
"IsCustom": true,
"Description": "Role1 Description",
"Actions": [
"Microsoft.Storage/*/read",
"Microsoft.Network/*/read",
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/start/action",
"Microsoft.Compute/virtualMachines/restart/action",
"Microsoft.Authorization/*/read",
"Microsoft.ResourceHealth/availabilityStatuses/read",
"Microsoft.Resources/subscriptions/resourceGroups/read",
"Microsoft.Insights/alertRules/*",
"Microsoft.Insights/diagnosticSettings/*",
"Microsoft.Support/*"
],
"NotActions": [],
"DataActions": [],
"NotDataActions": [],
"AssignableScopes": [
"/subscriptions/981dd4bc-8cf4-46fc-9513-0c599648b44b"
]
}

You assign the role to a user named User1.
Which action can User1 perform?
Create virtual machines.
Create resource groups.
Delete virtual machines.
Create support requests.

A

To determine which action User1 can perform, we need to analyze the permissions defined in the custom role “Role1”. Let’s examine the “Actions” array in the provided JSON template:

"Actions": [
"Microsoft.Storage/*/read",
"Microsoft.Network/*/read",
"Microsoft.Compute/*/read",
"Microsoft.Compute/virtualMachines/start/action",
"Microsoft.Compute/virtualMachines/restart/action",
"Microsoft.Authorization/*/read",
"Microsoft.ResourceHealth/availabilityStatuses/read",
"Microsoft.Resources/subscriptions/resourceGroups/read",
"Microsoft.Insights/alertRules/*",
"Microsoft.Insights/diagnosticSettings/*",
"Microsoft.Support/*"
]
Now, let’s evaluate each option against these defined actions:

Create virtual machines.

To create virtual machines, the role would need actions like Microsoft.Compute/virtualMachines/write or Microsoft.Compute/virtualMachines/*.

Role1 only includes Microsoft.Compute/*/read, Microsoft.Compute/virtualMachines/start/action, and Microsoft.Compute/virtualMachines/restart/action. These actions grant read permissions for compute resources and specific actions to start and restart VMs, but not to create them.

Conclusion: User1 cannot create virtual machines.

Create resource groups.

To create resource groups, the role would need actions like Microsoft.Resources/subscriptions/resourceGroups/write or Microsoft.Resources/subscriptions/resourceGroups/*.

Role1 only includes Microsoft.Resources/subscriptions/resourceGroups/read. This action grants read permissions for resource groups, but not to create them.

Conclusion: User1 cannot create resource groups.

Delete virtual machines.

To delete virtual machines, the role would need actions like Microsoft.Compute/virtualMachines/delete or Microsoft.Compute/virtualMachines/*.

Role1 only includes Microsoft.Compute/*/read, Microsoft.Compute/virtualMachines/start/action, and Microsoft.Compute/virtualMachines/restart/action. These actions do not include permissions to delete virtual machines.

Conclusion: User1 cannot delete virtual machines.

Create support requests.

Role1 includes the action Microsoft.Support/*. The wildcard * after Microsoft.Support/ indicates that the role grants permissions for all operations within the Microsoft Support resource provider.

This includes actions to create, read, update, and manage support requests with Microsoft Azure support.

Conclusion: User1 can create support requests.

Based on the analysis, the only action User1 is authorized to perform with Role1 is to create support requests.
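
As a brief sketch of how such a custom role ends up assigned to a user, the Az PowerShell commands below create the role from a JSON definition file and assign it to User1 at the subscription scope; the file name and the sign-in name are placeholders.

# Sketch: create the custom role from its JSON definition and assign it to User1.
# role1.json contains the role definition shown above; the sign-in name is a placeholder.
New-AzRoleDefinition -InputFile ".\role1.json"

New-AzRoleAssignment -SignInName "user1@contoso.com" -RoleDefinitionName "Role1" `
    -Scope "/subscriptions/981dd4bc-8cf4-46fc-9513-0c599648b44b"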

Final Answer: The final answer is Create support requests.

72
Q

You have an Azure SQL database named Db1 that runs on an Azure SQL server named SQLserver1.
You need to ensure that you can use the query editor on the Azure portal to query Db1.
What should you do?
Copy the ADO.NET connection string of Db1 and paste the string to the query editor.
Approve private endpoint connections for SQLserver1.
Modify the Advanced Data Security settings of Db1.
Configure the Firewalls and virtual networks settings for SQLserver1.

A

The correct answer is Configure the Firewalls and virtual networks settings for SQLserver1.

Explanation:

To use the query editor in the Azure portal to query an Azure SQL database, you need to ensure that your client machine (the machine you are using to access the Azure portal) is allowed to connect to the Azure SQL server through the firewall.

Here’s why Configure the Firewalls and virtual networks settings for SQLserver1 is the correct action and why the other options are incorrect:

Configure the Firewalls and virtual networks settings for SQLserver1:

Azure SQL Server Firewall: Azure SQL Server has a built-in firewall that, by default, restricts access to the server and its databases from all public networks except for other Azure services.

Allowing Azure Portal Access: To use the query editor in the Azure portal, you need to configure the firewall settings of the Azure SQL server (SQLserver1) to allow access from the Azure portal’s IP address ranges or, more commonly, to allow access from your client IP address (the IP address of the machine you are using to access the Azure portal).

Steps to Configure Firewall:

Navigate to your Azure SQL server (SQLserver1) in the Azure portal.

Go to Firewall and virtual networks settings.

You can either:

Allow Azure services and resources to access this server: Set this to Yes (if you want to allow other Azure services to access the server).

Add client IP address: Click + Add client IP address. This will automatically detect your current client IP address and add a firewall rule to allow access from your machine.

Add a firewall rule: Manually add a firewall rule specifying a range of IP addresses that include your client IP address.
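
The same firewall configuration can be scripted. A minimal Az PowerShell sketch, assuming SQLserver1 lives in a resource group named RG1 and that 203.0.113.10 stands in for your client IP address:

# Sketch: allow the client IP address through the server-level firewall of SQLserver1.
# The portal's "Allow Azure services..." toggle corresponds to a built-in rule named
# AllowAllWindowsAzureIps that uses 0.0.0.0 as both the start and end address.
New-AzSqlServerFirewallRule -ResourceGroupName "RG1" -ServerName "sqlserver1" `
    -FirewallRuleName "AllowMyClientIP" `
    -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"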

Copy the ADO.NET connection string of Db1 and paste the string to the query editor:

ADO.NET connection strings are used by applications to connect to a database programmatically. The Azure portal query editor does not use connection strings in this way. The query editor establishes a direct connection to the database using your Azure credentials and network access permissions. Pasting a connection string into the query editor is not a valid action to enable query editor access.

Approve private endpoint connections for SQLserver1:

Private endpoints are used to provide private and secure access to Azure services from within your virtual network. If private endpoints were configured for SQLserver1 and specifically blocking public access, then approving private endpoint connections might be relevant for applications within the VNet. However, for accessing the query editor from your local machine through the Azure portal, private endpoints are not the issue. Private endpoints would actually restrict public access, not enable it for the query editor from your machine. This is not the solution for general query editor access from the portal.

Modify the Advanced Data Security settings of Db1:

Advanced Data Security (now Microsoft Defender for SQL) is related to security features like threat detection and vulnerability assessment within the database. It does not control basic network connectivity or firewall rules that are preventing you from accessing the query editor. Modifying Advanced Data Security settings will not enable query editor access if the firewall is blocking your connection.

Therefore, the correct action is to configure the Firewalls and virtual networks settings for SQLserver1 to allow your client IP address or Azure portal IP ranges to access the SQL server.

Final Answer: The final answer is Configure the Firewalls and virtual networks settings for SQLserver1.

73
Q

DRAG DROP
You have an Azure subscription that contains the resources shown in the following table
Name Type Region Resource group
RG1 Resource group Central US Not applicable
RG2 Resource group West US Not applicable
VM1 Virtual machine East US RG2
VNET1 Virtual network East US RG1

In RG2, you need to create a new virtual machine named VM2 that will connect to VNET1. VM2 will use a network interface named VM2_Interface.

In which region should you create VM2 and VM2_Interface? To answer, drag the appropriate regions to the correct targets. Each region may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
Regions
Central US
East US
West US
Answer Area
VM2: Region
VM2_Interface: Region

A

Answer Area:

VM2: Region East US
VM2_Interface: Region East US

Explanation:

Virtual Networks are Regional: Azure Virtual Networks (VNets) are regional resources. VNET1 is located in East US.

Virtual Machines and VNets must be in the Same Region: Virtual Machines (VMs) must be created in the same Azure region as the Virtual Network they are intended to connect to. Since VM2 needs to connect to VNET1 (which is in East US), VM2 itself must also be created in East US.

Network Interfaces and VNets/VMs must be in the Same Region: Network Interfaces (NICs) are also regional resources. A NIC must be in the same region as both the Virtual Network it connects to and the Virtual Machine it is attached to. Because VM2 and VNET1 must be in East US, VM2_Interface must also be created in East US.

Therefore, to ensure VM2 can connect to VNET1, both VM2 and its network interface VM2_Interface must be created in the East US region.
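
A sketch of the key point in Az PowerShell follows: the NIC (and therefore the VM) is created in East US even though the resource group RG2 is in West US. The subnet choice, VM size, image, and credential handling are placeholder assumptions.

# Sketch: VM2_Interface and VM2 go into RG2 but must be created in East US to join VNET1.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNET1"
$subnet = $vnet.Subnets[0]                    # assumes the target subnet is the first one

$nic = New-AzNetworkInterface -ResourceGroupName "RG2" -Name "VM2_Interface" `
    -Location "eastus" -SubnetId $subnet.Id

$cred = Get-Credential                        # local admin credentials for VM2
$vmConfig = New-AzVMConfig -VMName "VM2" -VMSize "Standard_B2s" |
    Set-AzVMOperatingSystem -Windows -ComputerName "VM2" -Credential $cred |
    Set-AzVMSourceImage -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" `
        -Skus "2019-Datacenter" -Version "latest" |
    Add-AzVMNetworkInterface -Id $nic.Id

New-AzVM -ResourceGroupName "RG2" -Location "eastus" -VM $vmConfig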

Final Answer:

VM2: Region East US
VM2_Interface: Region East US

74
Q

A company hosts virtual machines (VMs) in an on-premises datacenter and in Azure. The on-premises and Azure-based VMs communicate using ExpressRoute.
The company wants to be able to continue regular operations if the ExpressRoute connection fails. Failover connections must use the Internet and must not require Multiprotocol Label Switching (MPLS) support.
You need to recommend a solution that provides continued operations.
What should you recommend?
Increase the bandwidth of the existing ExpressRoute connection.
Increase the bandwidth for the on-premises internet connection.
Set up a VPN connection.
Set up a second ExpressRoute connection.

A

The core requirement is to ensure continued operations if the ExpressRoute connection fails, with the failover connection using the internet and not requiring MPLS. Let’s examine each option:

Increase the bandwidth of the existing ExpressRoute connection.

Explanation: Increasing bandwidth will improve the performance of the existing connection, but it does not provide a failover mechanism if the ExpressRoute connection itself fails. If the ExpressRoute circuit goes down, increasing its bandwidth is irrelevant. It does not address the redundancy requirement.

Does it meet the requirement? No. It doesn’t provide continued operations during an ExpressRoute failure.

Increase the bandwidth for the on-premises internet connection.

Explanation: Increasing the bandwidth of the on-premises internet connection might be beneficial for general internet access, but it does not automatically create a failover path for the ExpressRoute connection. Just having a faster internet connection on-premises doesn’t mean it will be used for Azure connectivity if ExpressRoute fails.

Does it meet the requirement? No. It doesn’t provide continued operations during an ExpressRoute failure.

Set up a VPN connection.

Explanation: Setting up a VPN connection between the on-premises network and Azure can serve as a failover path for ExpressRoute. Azure VPN Gateway can be configured to work in conjunction with ExpressRoute. If the ExpressRoute connection fails, traffic can automatically failover to the VPN connection, ensuring continued connectivity.

Internet and No MPLS Requirement: VPN connections use the internet as the transport, fulfilling the requirement to use the internet for failover. Standard site-to-site VPN connections do not require MPLS.

Does it meet the requirement? Yes. It provides continued operations using the internet and without MPLS for failover.

Set up a second ExpressRoute connection.

Explanation: Setting up a second ExpressRoute connection would provide a redundant ExpressRoute circuit. This would enhance the resilience of the ExpressRoute connectivity itself. However, it does not meet the requirement of using the internet for failover. ExpressRoute connections are private, dedicated connections and do not use the public internet for transport. Furthermore, setting up a second ExpressRoute might still rely on MPLS depending on the provider and configuration, potentially not fulfilling the “must not require MPLS support” requirement for the failover connection type (even if the primary ExpressRoute does).

Does it meet the requirement? No. It does not use the internet for failover, and may still involve MPLS.

Conclusion:

The only option that directly addresses the requirement for a failover connection using the internet and without MPLS is Set up a VPN connection. Azure VPN Gateway can be configured as a backup connection for ExpressRoute to ensure business continuity in case of ExpressRoute circuit failure.

Final Answer: The final answer is Set up a VPN connection.

75
Q

Your network contains an on-premises Active Directory and an Azure Active Directory (Azure AD) tenant.
You deploy Azure AD Connect and configure pass-through authentication.
Your Azure subscription contains several web apps that are accessed from the Internet.
You plan to use Azure Multi-Factor Authentication (MFA) with the Azure Active Directory tenant.
You need to recommend a solution to prevent users from being prompted for Azure MFA when they access the web apps from the on-premises network.
What should you include in the recommendation?
an Azure policy
trusted IPs
a site-to-site VPN between the on-premises network and Azure
an Azure ExpressRoute circuit

A

Explanation:
The Trusted IPs feature of Azure Multi-Factor Authentication is used by administrators of a managed or federated tenant. The feature bypasses two-step verification for users who sign in from the company intranet. The feature is available with the full version of Azure Multi-Factor Authentication, and not the free version for administrators.

76
Q

HOTSPOT
Your network contains an on-premises Active Directory domain named contoso.com that contains a user named User1. The domain syncs to Azure Active Directory (Azure AD).

You have the Windows 10 devices shown in the following table.
Name Joined to
Device1 On-premises Active Directory
Device2 Azure AD
Device3 Workgroup
The User Sign-In settings are configured as shown in the following exhibit.
PROVISION FROM ACTIVE DIRECTORY
Azure AD Connect cloud provisioning
This feature allows you to manage provisioning from the cloud.
Manage provisioning (Preview)
Azure AD Connect sync
Sync Status Enabled
Last Sync Less than 1 hour ago
Password Hash Sync Enabled
USER SIGN-IN
Federation Disabled 0 domains
Seamless single sign-on Enabled 1 domain
Pass-through authentication Disabled 0 agents

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Statements Yes No
When accessing the Azure portal from Device1, User1 will sign in automatically by using SSO.
When accessing the Azure portal from Device2, User1 will sign in automatically by using SSO.
When accessing the Azure portal from Device3, User1 will sign in automatically by using SSO.

A

Explanation:
Box 1: Yes

Seamless single sign-on requires the device to be joined to the on-premises Active Directory domain; it is not used on Azure AD joined or hybrid Azure AD joined devices, where single sign-on works through the primary refresh token instead. Device1 is domain-joined, so seamless SSO applies.

Box 2: No. Device2 is Azure AD joined rather than domain-joined, so the seamless SSO feature configured here does not apply to it.

Box 3: No. Device3 is in a workgroup and is not domain-joined, so seamless SSO does not apply to it.

77
Q

You create a new Azure subscription. You create a resource group named RG1. In RG1, you create the resources shown in the following table.
Name Type
VNET1 Virtual network
VM1 Virtual machine
GWSN1 Gateway subnet
VPNGW1 Virtual network gateway

You need to configure an encrypted tunnel between your on-premises network and VNET1.
Which two additional resources should you create in Azure? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
a VPN gateway
a site-to-site connection
a point-to-site configuration
a VNet-to-VNet connection
a local network gateway

A

The correct answers are:

a site-to-site connection

a local network gateway

Explanation:

To configure an encrypted tunnel between an on-premises network and an Azure Virtual Network (VNET1) using a Virtual Network Gateway (VPNGW1), you need the following additional Azure resources:

a local network gateway:

Purpose: A Local Network Gateway represents your on-premises VPN device and network in Azure. It provides Azure with the necessary information about your on-premises VPN endpoint, including:

Public IP address of your on-premises VPN device: Azure VPN Gateway needs to know the public IP address of your on-premises VPN device to establish the connection.

On-premises network address prefixes: You define the on-premises network address spaces (subnets) that you want to be reachable from Azure through the VPN tunnel.

Why it’s necessary: Azure VPN Gateway uses the Local Network Gateway configuration to know where to initiate the VPN connection and what networks to route traffic to on the on-premises side.

a site-to-site connection:

Purpose: A Connection resource in Azure represents the actual VPN tunnel itself. For a site-to-site VPN, you need to create a “Site-to-Site (IPsec)” connection. This connection resource defines:

Connection type: Site-to-Site (IPsec)

VPN type: RouteBased or PolicyBased (depending on your VPNGW1 configuration - although RouteBased is generally recommended)

Shared key (pre-shared key): A secret key used for authentication between the Azure VPN gateway and the on-premises VPN device.

References to Azure VPN gateway and Local Network Gateway: The connection resource links your existing Azure VPN gateway (VPNGW1) to the Local Network Gateway representing your on-premises VPN device.

Why it’s necessary: The Connection resource is the resource that actually establishes the VPN tunnel. It uses the information from both the Azure VPN gateway and the Local Network Gateway to set up the encrypted connection.

Why other options are incorrect:

a VPN gateway: You already have a VPN gateway named VPNGW1. You don’t need to create another VPN gateway to establish a single site-to-site VPN connection.

a point-to-site configuration: Point-to-Site (P2S) VPN is for connecting individual client computers to your Azure VNet, not for connecting an entire on-premises network. Site-to-Site VPN is the correct type for connecting networks.

a VNet-to-VNet connection: VNet-to-VNet connections are used to connect two Azure Virtual Networks together. This is not relevant for connecting an on-premises network to Azure.

In summary, to create a site-to-site VPN tunnel, you need to represent your on-premises VPN endpoint in Azure using a Local Network Gateway and then create a Connection resource to define and establish the VPN tunnel between your Azure VPN Gateway and the Local Network Gateway.
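
A minimal Az PowerShell sketch of the two additional resources, assuming the on-premises VPN device has public IP 203.0.113.50, the on-premises address space is 192.168.0.0/16, and everything lives in RG1 in East US; all of these values are placeholders:

# Sketch: represent the on-premises endpoint, then create the site-to-site (IPsec) connection.
$lng = New-AzLocalNetworkGateway -ResourceGroupName "RG1" -Name "OnPremGateway" `
    -Location "eastus" -GatewayIpAddress "203.0.113.50" -AddressPrefix "192.168.0.0/16"

$vpngw = Get-AzVirtualNetworkGateway -ResourceGroupName "RG1" -Name "VPNGW1"

New-AzVirtualNetworkGatewayConnection -ResourceGroupName "RG1" -Name "OnPrem-to-VNET1" `
    -Location "eastus" -VirtualNetworkGateway1 $vpngw -LocalNetworkGateway2 $lng `
    -ConnectionType IPsec -SharedKey "<pre-shared key>"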

Final Answer:

a site-to-site connection
a local network gateway

78
Q

You plan to create an Azure Storage account named storage1 that will store blobs and be accessed by Azure Databricks.
You need to ensure that you can set permissions for individual blobs by using Azure Active Directory (Azure AD) authentication.
Which Advanced setting should you enable for storage1?
Large file shares
Hierarchical namespace
NFS v3
Blob soft delete

A

To enable Azure Active Directory (Azure AD) authentication for setting permissions on individual blobs in an Azure Storage account, you need to enable Hierarchical namespace.

Here’s why:

Hierarchical Namespace (HNS):

Purpose: Hierarchical namespace transforms Azure Blob Storage into Azure Data Lake Storage Gen2 (ADLS Gen2). ADLS Gen2 provides a hierarchical file system structure (directories and subdirectories) on top of Blob storage.

Azure AD Integration for Access Control: A key feature of ADLS Gen2 (enabled by HNS) is its deep integration with Azure Active Directory (Azure AD) for access control. HNS enables you to use Azure AD identities to manage access permissions at the directory and individual blob level using Access Control Lists (ACLs). This is crucial for setting permissions for individual blobs using Azure AD authentication, as required in the question.

POSIX-like Permissions: HNS supports POSIX-style permissions, which are commonly used in file systems and allow for granular control over read, write, and execute permissions for users and groups managed in Azure AD.
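
A sketch of creating storage1 with the hierarchical namespace enabled, using Az PowerShell; the resource group, region, and SKU are placeholder assumptions:

# Sketch: create storage1 as an ADLS Gen2-capable account (hierarchical namespace enabled).
New-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1" `
    -Location "eastus" -SkuName "Standard_LRS" -Kind "StorageV2" `
    -EnableHierarchicalNamespace $true

Hierarchical namespace is normally chosen when the account is created, so enabling it up front is the practical route for this requirement.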

Let’s examine why the other options are not correct for this requirement:

Large file shares:

Purpose: Large file shares is an option for Azure File Storage, enabling file shares larger than the standard size. Azure File Storage is accessed using SMB protocol and has its own authentication mechanisms (like Active Directory Domain Services or Azure Active Directory Domain Services). It is not related to Blob storage or Azure AD authentication for blobs.

NFS v3:

Purpose: NFS v3 (Network File System version 3) allows you to mount Azure Blob Storage containers as file systems using the NFS v3 protocol. While NFS v3 provides file system access, it does not directly enable Azure AD authentication for individual blobs in the way that Hierarchical Namespace does. NFS v3 authentication in this context is generally handled through other mechanisms and is not the same as native Azure AD ACLs on blobs.

Blob soft delete:

Purpose: Blob soft delete is a data protection feature that allows you to recover accidentally deleted blobs within a defined retention period. It is a data recovery feature and has no relation to authentication or authorization methods for accessing blobs.

Conclusion:

To meet the requirement of setting permissions for individual blobs using Azure AD authentication in Azure Storage, you must enable the Hierarchical namespace Advanced setting for the storage account. This transforms the storage account into Azure Data Lake Storage Gen2, which provides the necessary Azure AD integration and ACL capabilities for fine-grained access control at the blob level.

Final Answer: The final answer is Hierarchical namespace.

79
Q

HOTSPOT
You have an Azure subscription that includes an Azure key vault named Vault1.

You create the Azure virtual machines shown in the following table.
Name Operating system disk type Use managed disks
VM1 Premium SSD No
VM2 Standard HDD Yes
VM3 Standard SSD Yes
You enable Azure Disk Encryption for all the virtual machines and use the –VolumeType All parameter.

You add data disks to the virtual machines as shown in the following table.
Name Virtual machine Storage account type
VM1-Disk1 VM1 Premium SSD
VM2-Disk1 VM2 Standard SSD
VM3-Disk1 VM3 Standard HDD
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.
Statements
VM1-Disk1 is encrypted automatically by using Azure Disk Encryption.
VM2-Disk1 is encrypted automatically by using Azure Disk Encryption.
VM3-Disk1 is encrypted automatically by using Azure Disk Encryption.

A

Let’s analyze each statement based on the behavior of Azure Disk Encryption (ADE) with the -VolumeType All parameter.

Azure Disk Encryption with -VolumeType All: When you enable Azure Disk Encryption with the -VolumeType All parameter, you are instructing ADE to encrypt both the operating system disk and all data disks attached to the virtual machine. This applies to both existing disks at the time of encryption and data disks that are added after ADE is enabled.
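
For context, a minimal Az PowerShell sketch of enabling ADE with -VolumeType All on one of the VMs, assuming Vault1 sits in a resource group named RG1 (the names are placeholders taken from the scenario):

# Sketch: enable Azure Disk Encryption on VM2 for both the OS disk and all data disks.
$vault = Get-AzKeyVault -VaultName "Vault1" -ResourceGroupName "RG1"

Set-AzVMDiskEncryptionExtension -ResourceGroupName "RG1" -VMName "VM2" `
    -DiskEncryptionKeyVaultUrl $vault.VaultUri `
    -DiskEncryptionKeyVaultId $vault.ResourceId `
    -VolumeType All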

Statement 1: VM1-Disk1 is encrypted automatically by using Azure Disk Encryption.

Analysis: VM1 has Azure Disk Encryption enabled with -VolumeType All. VM1-Disk1 is a data disk added to VM1 after ADE was enabled. Since -VolumeType All is used, and data disks added after encryption are also meant to be encrypted, VM1-Disk1 should be encrypted.

Conclusion: Yes. VM1-Disk1 will be automatically encrypted.

Statement 2: VM2-Disk1 is encrypted automatically by using Azure Disk Encryption.

Analysis: VM2 has Azure Disk Encryption enabled with -VolumeType All. VM2-Disk1 is a data disk added to VM2 after ADE was enabled. Similar to VM1, -VolumeType All should ensure all data disks, including those added later, are encrypted.

Conclusion: Yes. VM2-Disk1 will be automatically encrypted.

Statement 3: VM3-Disk1 is encrypted automatically by using Azure Disk Encryption.

Analysis: VM3 has Azure Disk Encryption enabled with -VolumeType All. VM3-Disk1 is a data disk added to VM3 after ADE was enabled. Again, -VolumeType All should lead to encryption of all data disks, regardless of when they are attached.

Conclusion: Yes. VM3-Disk1 will be automatically encrypted.

Important Considerations:

Boot Diagnostics Disk: While -VolumeType All aims to encrypt all disks, the Boot diagnostics storage account disk attached to the VM may not be encrypted by ADE itself. However, the question is specifically about the data disks added (VM1-Disk1, VM2-Disk1, VM3-Disk1), not the boot diagnostics disk.

Encryption Process: For data disks added after ADE is enabled, the encryption process might not be instantaneous. It might take some time after attaching the disk for ADE to initiate and complete the encryption process on the new disk. However, the intent of -VolumeType All is to encrypt all disks, and it should eventually encrypt newly added data disks.

Final Answer:

Statements Yes No
VM1-Disk1 is encrypted automatically by using Azure Disk Encryption. ☑ ☐
VM2-Disk1 is encrypted automatically by using Azure Disk Encryption. ☑ ☐
VM3-Disk1 is encrypted automatically by using Azure Disk Encryption. ☑ ☐

80
Q

You have the following Azure Active Directory (Azure AD) tenants:
– Contoso.onmicrosoft.com: Linked to a Microsoft 365 tenant and syncs to an Active Directory forest named contoso.com by using password hash synchronization
– Contosoazure.onmicrosoft.com: Linked to an Azure subscription named Subscription1
You need to ensure that you can assign the users in contoso.com access to the resources in Subscription1.
What should you do?
Associate Subscription1 to contoso.onmicrosoft.com. Reassign all the roles in Subscription1.
Configure the existing Azure AD Connect server to sync contoso.com to contosoazure.onmicrosoft.com.
Configure contoso.onmicrosoft.com to use pass-through authentication.
Configure contosoazure.onmicrosoft.com to use pass-through authentication.

A

Final Answer: The final answer is Associate Subscription1 to contoso.onmicrosoft.com. Reassign all the roles in Subscription1.

Explanation:

Let’s break down why this is the correct solution and why the other options are incorrect:

Associate Subscription1 to contoso.onmicrosoft.com. Reassign all the roles in Subscription1. (Correct)

Tenant Association: Azure subscriptions are fundamentally linked to an Azure Active Directory (Azure AD) tenant. This tenant serves as the directory for managing users, groups, and their access to resources within that subscription. Currently, Subscription1 is linked to contosoazure.onmicrosoft.com, while the users you want to grant access (from contoso.com forest) are synced to contoso.onmicrosoft.com.

Changing Tenant Association: The most direct way to allow contoso.com users to access Subscription1 resources is to change the Azure AD tenant that Subscription1 trusts for identity management. By associating Subscription1 with contoso.onmicrosoft.com, you make the users and groups from contoso.onmicrosoft.com (and by extension, contoso.com) the native identities for authorization within Subscription1.

Role Reassignment: After changing the tenant association, the existing role assignments in Subscription1 will likely be based on principals from the old contosoazure.onmicrosoft.com tenant. These assignments will become invalid or irrelevant when the subscription is linked to contoso.onmicrosoft.com. Therefore, you’ll need to reassign all roles within Subscription1 to users and groups from the newly associated contoso.onmicrosoft.com tenant to restore and configure access for the intended users.

Effectiveness: This method directly addresses the core issue by placing the subscription under the control of the Azure AD tenant that manages the desired users.

Configure the existing Azure AD Connect server to sync contoso.com to contosoazure.onmicrosoft.com. (Incorrect)

Single Tenant Sync: An Azure AD Connect server is designed to synchronize a single on-premises Active Directory forest to one Azure AD tenant. You cannot configure a single Azure AD Connect server to synchronize the same on-premises forest to multiple Azure AD tenants (like both contoso.onmicrosoft.com and contosoazure.onmicrosoft.com). This option is technically not feasible with a single Azure AD Connect instance. Even if you could, it’s not the correct architectural approach.

Configure contoso.onmicrosoft.com to use pass-through authentication. (Incorrect)

Authentication Method: Pass-through authentication is an Azure AD sign-in method. Configuring contoso.onmicrosoft.com to use pass-through authentication only changes how users in contoso.onmicrosoft.com authenticate. It doesn’t change the tenant association of Subscription1 or grant access to users in contoso.com forest to resources in Subscription1. It’s irrelevant to the tenant association problem.

Configure contosoazure.onmicrosoft.com to use pass-through authentication. (Incorrect)

Authentication Method: Similarly, configuring contosoazure.onmicrosoft.com to use pass-through authentication only affects authentication for users within contosoazure.onmicrosoft.com (if any cloud-only users exist there). It does not bridge the identity gap or grant access to contoso.com users in Subscription1.

81
Q

You have several Azure web apps that use access keys to access databases.
You plan to migrate the access keys to Azure Key Vault. Each app must authenticate by using Azure Active Directory (Azure AD) to gain access to the access keys.
What should you create in Azure to ensure that the apps can access the access keys?
managed identities
managed applications
Azure policies
an App Service plan

A

The correct answer is managed identities.

Explanation:

Here’s why managed identities are the correct solution and why the other options are not:

Managed Identities:

Purpose: Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory (Azure AD). This allows Azure services to authenticate to Azure services that support Azure AD authentication without needing to hardcode credentials in code or configuration.

Scenario Application: In this scenario, you would enable managed identities for each Azure web app. This creates a service principal identity in Azure AD for each web app.

Key Vault Access: You would then grant these managed identities specific permissions to access your Azure Key Vault. For example, you would grant the “Get Secret” permission to each web app’s managed identity on the Key Vault where you store the database access keys.

Authentication Process: When the web apps need to access a database access key, they would use the Azure SDK (e.g., Azure.Identity library) to authenticate to Azure Key Vault using their managed identity. The Azure SDK automatically handles the authentication process, obtaining tokens from Azure AD without requiring any explicit credentials to be stored or managed by the web app.

Security Benefits: This approach is highly secure because:

No Hardcoded Credentials: You eliminate the need to store access keys directly in web app configuration or code.

Automated Credential Management: Azure automatically manages the lifecycle and rotation of the managed identity credentials.

Principle of Least Privilege: You can grant each web app only the necessary permissions (e.g., “Get Secret” for specific secrets) in Key Vault, adhering to the principle of least privilege.
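
A sketch of this pattern for one web app in Az PowerShell, assuming a web app named webapp1 in resource group RG1 and a vault named kv-appsecrets (all placeholders):

# Sketch: give webapp1 a system-assigned managed identity and let it read secrets from the vault.
$app = Set-AzWebApp -ResourceGroupName "RG1" -Name "webapp1" -AssignIdentity $true

Set-AzKeyVaultAccessPolicy -VaultName "kv-appsecrets" `
    -ObjectId $app.Identity.PrincipalId `
    -PermissionsToSecrets Get

In the app code, a credential type such as DefaultAzureCredential from the Azure.Identity library can then obtain tokens for Key Vault without any stored secret.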

Let’s look at why the other options are not the correct solution:

Managed applications: Managed applications are pre-packaged and deployable applications offered through the Azure Marketplace. They are not directly related to the authentication mechanism for web apps to access Key Vault for secrets.

Azure policies: Azure Policies are used to enforce organizational standards and assess compliance across Azure resources. They are not designed for managing application authentication to Key Vault or providing identities to applications.

An App Service plan: An App Service plan defines the set of compute resources for running Azure web apps. It determines the region, size, and scale of the underlying VMs. It does not provide authentication mechanisms for web apps to access other Azure services like Key Vault.

Therefore, the correct answer is managed identities because they are the Azure feature specifically designed to enable Azure AD-based authentication for Azure services (like web apps) to securely access other Azure services (like Key Vault) without managing credentials directly.

Final Answer: The final answer is managed identities.

82
Q

You have an Azure key vault named KV1.
You need to implement a process that will digitally sign the blobs stored in Azure Storage.
What is required in KV1 to sign the blobs?
a key
a secret
a certificate

A

The correct answer is a key.

Explanation:

To digitally sign blobs stored in Azure Storage using Azure Key Vault (KV1), you need to use a cryptographic key that is stored and managed within KV1.

Here’s why:

Keys in Azure Key Vault for Digital Signing: Azure Key Vault is designed to securely store and manage cryptographic keys, secrets, and certificates. For digital signing, you specifically need a cryptographic key. Key Vault supports different types of keys, including:

RSA keys: Commonly used for digital signatures and encryption.

EC keys: Elliptic Curve keys, also used for digital signatures and key exchange, often preferred for performance and smaller key sizes.

To perform digital signing, you would create or import a key into KV1. This key will be a cryptographic key pair, where the private key is used to create the digital signature, and the corresponding public key can be used to verify the signature. Crucially, the private key remains secure within Key Vault and is never directly exposed. The signing operation is performed by Key Vault itself, using the private key on your behalf.
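
A minimal Az PowerShell sketch of creating such a key in KV1; the key name is a placeholder, and a software-protected RSA key is assumed:

# Sketch: create an RSA key in KV1. The sign operation itself is performed by Key Vault
# through the keys API, so the private key never leaves the vault.
Add-AzKeyVaultKey -VaultName "KV1" -Name "blob-signing-key" -Destination "Software"

Your signing process would then call the key's sign operation (for example, through the Azure SDK or the Key Vault REST API) and store the resulting signature alongside each blob.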

Why other options are incorrect:

a secret: Secrets in Azure Key Vault are used to store sensitive information like passwords, connection strings, API keys, and other text or binary data. Secrets are not designed for performing cryptographic operations like digital signing. While you can store text that could represent a key, a “secret” object in Key Vault itself does not have the cryptographic functionality needed for signing. You would need a “key” object.

a certificate: Certificates in Azure Key Vault are X.509 certificates. Certificates are primarily used for authentication and encryption, and they contain a public key (and optionally a private key). While you could import a certificate containing a private key into Key Vault and potentially use the private key within the certificate for signing (if the certificate allows key export and signing operations), the most direct and fundamental requirement for signing is a key. Certificates are a higher-level construct built around keys, but the underlying cryptographic material needed for signing is the key. For just signing blobs, directly using a key is more straightforward and appropriate.

83
Q

You set the multi-factor authentication status for a user named admin1@contoso.com to Enabled.
Admin1 accesses the Azure portal by using a web browser.
Which additional security verifications can Admin1 use when accessing the Azure portal?
a phone call, an email message that contains a verification code, and a text message that contains an app password.
an app password, a text message that contains a verification code, and a verification code sent from the Microsoft Authenticator app.
an app password, a text message that contains a verification code, and a notification sent from the Microsoft Authenticator app.
a phone call, a text message that contains a verification code, and a notification or a verification code sent from the Microsoft Authenticator app.

A

When Multi-Factor Authentication (MFA) is enabled for a user in Azure Active Directory (Azure AD), they will be prompted for additional security verification methods beyond their password when they sign in. The available verification methods depend on the configuration set by the administrator and the methods configured by the user.

Let’s evaluate each option against the standard MFA verification methods offered by Azure AD:

Option 1: a phone call, an email message that contains a verification code, and a text message that contains an app password.

a phone call: Phone call to a registered phone number is a valid Azure MFA verification method.

an email message that contains a verification code: While email can be used for Self-Service Password Reset (SSPR) verification, it is not a standard primary MFA verification method for Azure AD sign-in in typical configurations. It’s less secure compared to other methods.

a text message that contains an app password: This is incorrect. App passwords are not sent via text messages. App passwords are generated by users (or admins) after MFA is enabled to allow legacy applications that don’t support modern authentication to bypass MFA prompts. Text messages contain verification codes, not app passwords.

Option 2: an app password, a text message that contains a verification code, and a verification code sent from the Microsoft Authenticator app.

an app password: As mentioned before, app passwords are not MFA verification methods themselves. They are a way to bypass MFA for legacy apps. Incorrect.

a text message that contains a verification code: Text message (SMS) verification is a valid Azure MFA method.

a verification code sent from the Microsoft Authenticator app: Verification code generated by the Microsoft Authenticator app (Time-based One-Time Password - TOTP) is a valid Azure MFA method.

Option 3: an app password, a text message that contains a verification code, and a notification sent from the Microsoft Authenticator app.

an app password: Incorrect, as explained before.

a text message that contains a verification code: Valid SMS verification.

a notification sent from the Microsoft Authenticator app: Push notification sent to the Microsoft Authenticator app for approval is a valid Azure MFA method.

Option 4: a phone call, a text message that contains a verification code, and a notification or a verification code sent from the Microsoft Authenticator app.

a phone call: Valid phone call verification.

a text message that contains a verification code: Valid SMS verification.

a notification or a verification code sent from the Microsoft Authenticator app: Both push notifications and verification codes (TOTP) from the Microsoft Authenticator app are valid Azure MFA methods.

Comparing the options, Option 4 lists only valid and standard Azure MFA verification methods: phone call, SMS message with verification code, and Microsoft Authenticator app (both push notification and verification code). Options 1, 2, and 3 incorrectly include “app password” as a verification method, and Option 1 incorrectly includes “email message with verification code” as a standard primary MFA method.

Final Answer: The final answer is a phone call, a text message that contains a verification code, and a notification or a verification code sent from the Microsoft Authenticator app.

84
Q

HOTSPOT
You have an Azure Active Directory (Azure AD) tenant that contains the user groups shown in the following table.

Name Role Member of
User1 Global administrator None
User2 User administrator Group1
User3 Password administrator Group1
User4 None Group1

You enable self-service password reset (SSPR) for Group1.

You configure the Notifications settings as shown in the following exhibit.
Notify users on password resets? No
Notify all admins when other admins reset their password? Yes
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.
Statements
User1 gets a notification when User3 resets her password by using SSPR.
User3 gets a notification when User3 resets her password by using SSPR.
User1 gets a notification when User2 resets the password of User4.

A

Let’s analyze each statement based on the provided Azure AD SSPR notification settings and user roles.

Notification Settings:

Notify users on password resets? No

Notify all admins when other admins reset their password? Yes

User Roles and Group Membership:

User1: Global administrator, Not member of Group1

User2: User administrator, Member of Group1

User3: Password administrator, Member of Group1

User4: None, Member of Group1

Statement 1: User1 gets a notification when User3 resets her password by using SSPR.

Analysis:

User1 is a Global administrator, which is an admin role.

User3 is a Password administrator, which is also an admin role.

User3 is resetting her own password using Self-Service Password Reset (SSPR).

The notification setting “Notify all admins when other admins reset their password?” is set to Yes.

Since User3 is an admin and is resetting her own password using SSPR, and the notification setting for admin-to-admin SSPR resets is enabled, all admins should be notified. User1, being a Global administrator, is an admin.

Conclusion: Yes, Statement 1 is True.

Statement 2: User3 gets a notification when User3 resets her password by using SSPR.

Analysis:

User3 is resetting her own password using SSPR.

The notification setting “Notify users on password resets?” is set to No.

This setting directly controls whether the user who initiates the password reset (in this case, User3) receives a notification. Since it is set to “No,” User3 should not receive a notification about her own password reset.

Conclusion: No, Statement 2 is False.

Statement 3: User1 gets a notification when User2 resets the password of User4.

Analysis:

User1 is a Global administrator (Admin role).

User2 is a User administrator (Admin role).

User2 is resetting the password of User4. While the scenario mentions SSPR, it’s implied that User2, as a User administrator, is using admin capabilities (likely through the Azure AD admin center or similar tools) to reset User4’s password, not User2 performing SSPR for User2’s own password.

The notification setting “Notify all admins when other admins reset their password?” specifically refers to notifications when admins reset their own passwords using SSPR. It is not designed for scenarios where an admin resets another user’s password through admin interfaces, even if the underlying password reset mechanism shares components with SSPR. Admin-initiated password resets are distinct actions from user-initiated SSPR.

The notification is for “when other admins reset their password”. In this case, User2 is resetting User4’s password, not User2’s own password.

Conclusion: No, Statement 3 is False. The “Notify all admins when other admins reset their password?” setting is specifically for admin self-service password resets, not for admin-initiated password resets of other users. Therefore, User1 will not be notified in this scenario based on the configured SSPR notification settings.

Final Answer:

Statements Yes No
User1 gets a notification when User3 resets her password by using SSPR. ☑ ☐
User3 gets a notification when User3 resets her password by using SSPR. ☐ ☑
User1 gets a notification when User2 resets the password of User4. ☐ ☑

85
Q

Your company has an Azure subscription.
You enable multi-factor authentication (MFA) for all users.
The company’s help desk reports an increase in calls from users who receive MFA requests while they work from the company’s main office.
You need to prevent the users from receiving MFA requests when they sign in from the main office.
What should you do?
From Conditional access in Azure Active Directory (Azure AD), create a named location.
From the MFA service settings, create a trusted IP range.
From Conditional access in Azure Active Directory (Azure AD), create a custom control.
From Azure Active Directory (Azure AD), configure organizational relationships.

A

The goal is to prevent users from receiving MFA requests when they are working from the company’s main office. This implies that we need to establish a “trusted location” based on the company’s main office network and configure Azure MFA to bypass MFA prompts when users are within this trusted location. Let’s evaluate each option:

From Conditional access in Azure Active Directory (Azure AD), create a named location.

Explanation: Conditional Access Named Locations are specifically designed to define network locations (like corporate networks, trusted countries, etc.) based on IP addresses or countries/regions. You can create a named location for the IP address range of the company’s main office.

How it helps: Once a named location is defined, you can create a Conditional Access policy that excludes this named location from MFA requirements. For example, you can create a policy that requires MFA for “All cloud apps” and “All users” but with a condition that excludes the named location representing the main office network.

Effectiveness: This is a very effective and recommended approach. Conditional Access provides granular control over when MFA is required based on various conditions, including location.

From the MFA service settings, create a trusted IP range.

Explanation: Azure AD MFA service settings have a feature called “Trusted IPs” (sometimes referred to as “Bypass for trusted IPs”). This feature allows you to define a list of IP address ranges that are considered trusted. When users sign in from within these trusted IP ranges, MFA is bypassed.

How it helps: You can add the public IP address range(s) of the company’s main office to the Trusted IPs list in the MFA service settings.

Effectiveness: This is also a valid and simpler approach for this specific requirement. Trusted IPs are designed precisely for bypassing MFA based on location. It’s a less feature-rich approach compared to Conditional Access, but it directly addresses the need to create a location-based MFA bypass.

From Conditional access in Azure Active Directory (Azure AD), create a custom control.

Explanation: Custom controls in Conditional Access allow for more advanced and granular authorization requirements, often involving integrations with external systems or custom logic. Creating a custom control for a simple location-based MFA bypass is an over-engineered solution. Custom controls are typically used for more complex scenarios than just IP-based location bypasses.

Effectiveness: While technically you could potentially use a custom control to achieve this, it’s far more complex and unnecessary compared to using Named Locations or Trusted IPs.

From Azure Active Directory (Azure AD), configure organizational relationships.

Explanation: Organizational relationships in Azure AD are used for B2B collaboration, managing external users, and setting up cross-tenant access. They are not related to configuring MFA bypass for internal users based on their network location.

Effectiveness: Irrelevant to the requirement.

Comparing Named Locations and Trusted IPs:

Both Named Locations in Conditional Access and Trusted IPs in MFA service settings can achieve the goal. However, Trusted IP ranges (from MFA service settings) are often considered the more direct and simpler solution for this specific, basic requirement of bypassing MFA based on office IP address ranges. Conditional Access is more powerful and versatile for complex, policy-driven access control, but for a straightforward IP-based bypass, Trusted IPs are often quicker to configure and manage.

Considering the simplicity and directness for this specific requirement, Trusted IP ranges in MFA service settings is the most appropriate action.

Final Answer: The final answer is From the MFA service settings, create a trusted IP range.

86
Q

HOTSPOT
You have an Azure logic app named App1 and an Azure Service Bus queue named Queue1.

You need to ensure that App1 can read messages from Queue1. App1 must authenticate by using Azure Active Directory (Azure AD).

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
On App1:
Add a logic app step
Configure Access control (IAM)
Regenerate the access key
Turn on the managed identity
On Queue1:
Add a read-only lock
Add a shared access policy
Configure Access control (IAM)
Modify the properties

A

Answer Area:

On App1: Turn on the managed identity

On Queue1: Configure Access control (IAM)

Explanation:

On App1 (Logic App):

Turn on the managed identity: This is the crucial step for enabling Azure AD authentication for App1. By turning on a managed identity (specifically, a system-assigned managed identity is usually simpler for this scenario), Azure automatically creates an identity for the Logic App in your Azure AD tenant. This identity can then be granted permissions to access other Azure resources that support Azure AD authentication, such as Azure Service Bus.

Why not other options for App1?

Add a logic app step: While you will need to add a Service Bus connector step in your Logic App to actually read messages from Queue1, this step itself doesn’t handle authentication. Authentication needs to be configured separately.

Configure Access control (IAM): Configuring IAM on App1 itself is not relevant for App1 authenticating to Service Bus. IAM on App1 controls who/what can manage the Logic App resource, not its outbound authentication to other services.

Regenerate the access key: Access keys are used for Shared Access Signature (SAS) authentication, which is an alternative to Azure AD authentication. The requirement explicitly states “App1 must authenticate by using Azure Active Directory (Azure AD)”, so SAS keys are not the desired method.

On Queue1 (Service Bus Queue):

Configure Access control (IAM): This step is essential to authorize App1’s managed identity to access Queue1. You need to grant the managed identity of App1 the necessary permissions to read messages from Queue1. This is done through Azure RBAC (Role-Based Access Control) via the Access control (IAM) blade for Queue1.

You would assign a built-in role like “Azure Service Bus Data Receiver” to the managed identity of App1 at the scope of Queue1 (or at a higher scope like the Service Bus namespace if appropriate). This role grants the necessary permissions to receive (read and delete) messages from the queue.

Why not other options for Queue1?

Add a read-only lock: Read-only locks prevent accidental modifications or deletions of the queue itself. They do not control access for reading messages.

Add a shared access policy: Shared Access Policies (SAPs) are used for SAS authentication, again, an alternative to Azure AD authentication. Since we need to use Azure AD, SAPs are not the correct approach.

Modify the properties: Modifying queue properties (like max size, TTL, etc.) does not configure authentication or authorization for accessing the queue’s messages.

In Summary:

To enable Azure AD authentication for App1 to read messages from Queue1, you need to:

Enable Managed Identity on App1: This provides App1 with an Azure AD identity.

Configure Access control (IAM) on Queue1: Grant the Managed Identity of App1 the “Azure Service Bus Data Receiver” role (or a similar role with read permissions) to authorize App1 to read messages.
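A minimal PowerShell sketch of that authorization step, assuming the managed identity is already turned on for App1; the object ID, subscription ID, resource group, and namespace name are placeholders:

# Object ID of App1's system-assigned managed identity (shown on the
# Identity blade of the Logic App once the identity is turned on).
$appObjectId = '<object-id-of-App1-identity>'

# Resource ID of Queue1; every segment except the queue name is a placeholder.
$queueScope = '/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ServiceBus/namespaces/<namespace-name>/queues/Queue1'

# Grant the identity permission to receive (read) messages from Queue1.
New-AzRoleAssignment -ObjectId $appObjectId `
    -RoleDefinitionName 'Azure Service Bus Data Receiver' `
    -Scope $queueScope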

Final Answer:

On App1: Turn on the managed identity
On Queue1: Configure Access control (IAM)

87
Q

You have the following Azure Active Directory (Azure AD) tenants:
– Contoso.onmicrosoft.com: Linked to a Microsoft 365 tenant and syncs to an Active Directory forest named contoso.com by using password hash synchronization
– Contosoazure.onmicrosoft.com: Linked to an Azure subscription named Subscription1
You need to ensure that you can assign the users in contoso.com access to the resources in Subscription1.
What should you do?
Create an Azure management group that contains Subscription1.
Configure the existing Azure AD Connect server to sync contoso.com to contosoazure.onmicrosoft.com.
Deploy a second Azure AD Connect server and sync contoso.com to contosoazure.onmicrosoft.com.
Create guest accounts for all the contoso.com users in contosoazure.onmicrosoft.com.

A

Let’s analyze each option to determine the best way to grant users from contoso.com access to resources in Subscription1.

Create an Azure management group that contains Subscription1.

Explanation: Azure Management Groups are used to organize and manage multiple Azure subscriptions under a single management structure. Management groups are primarily for policy and access control at scale across subscriptions. Creating a management group itself does not bridge the identity gap between different Azure AD tenants. Management groups help with organizing and governing subscriptions but do not inherently solve cross-tenant identity and access management for users.

Does it meet the goal? No. Management groups do not enable users from contoso.com (synced to contoso.onmicrosoft.com) to access resources in Subscription1 (linked to contosoazure.onmicrosoft.com).

Configure the existing Azure AD Connect server to sync contoso.com to contosoazure.onmicrosoft.com.

Explanation: Azure AD Connect is designed to synchronize a single on-premises Active Directory forest to one Azure AD tenant. The existing Azure AD Connect server is already correctly syncing contoso.com to contoso.onmicrosoft.com. You cannot reconfigure the same Azure AD Connect server to sync the same on-premises forest to a second, different Azure AD tenant (contosoazure.onmicrosoft.com). This is not a supported or feasible configuration for Azure AD Connect.

Does it meet the goal? No. This is not a valid Azure AD Connect configuration and will not solve the cross-tenant access problem.

Deploy a second Azure AD Connect server and sync contoso.com to contosoazure.onmicrosoft.com.

Explanation: While you can deploy multiple Azure AD Connect servers for redundancy or for syncing different on-premises forests to different Azure AD tenants, you cannot sync the same on-premises forest (contoso.com) to two separate Azure AD tenants (contoso.onmicrosoft.com and contosoazure.onmicrosoft.com) using multiple Azure AD Connect servers connected to the same on-premises forest. This is not a supported Azure AD Connect topology. Each on-premises forest should have a dedicated Azure AD Connect instance syncing to one Azure AD tenant.

Does it meet the goal? No. This is not a valid Azure AD Connect configuration and will not solve the cross-tenant access problem.

Create guest accounts for all the contoso.com users in contosoazure.onmicrosoft.com.

Explanation: This option utilizes Azure AD Business-to-Business (B2B) collaboration. You would invite users from the contoso.onmicrosoft.com tenant as guest users into the contosoazure.onmicrosoft.com tenant.

How it works:

Invite users from contoso.onmicrosoft.com as guests to contosoazure.onmicrosoft.com.

Guest user objects are created in contosoazure.onmicrosoft.com that represent the external users from contoso.onmicrosoft.com.

You can then assign Azure RBAC roles to these guest user accounts in contosoazure.onmicrosoft.com for resources in Subscription1.

Users from contoso.com (authenticating via contoso.onmicrosoft.com) can then access resources in Subscription1 using their original contoso.com credentials, but as guest users in contosoazure.onmicrosoft.com.

Does it meet the goal? Yes. This is the correct and standard way to grant users from one Azure AD tenant access to resources in another Azure AD tenant. It allows you to assign roles to users from contoso.com in Subscription1 while they use their existing credentials.
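A rough PowerShell sketch of the pattern, assuming the Microsoft.Graph and Az modules are installed; the user name, redirect URL, role, and subscription ID are placeholders:

# Invite a contoso.com user as a guest into contosoazure.onmicrosoft.com.
Connect-MgGraph -TenantId 'contosoazure.onmicrosoft.com' -Scopes 'User.Invite.All'
$invite = New-MgInvitation -InvitedUserEmailAddress 'user1@contoso.com' `
    -InviteRedirectUrl 'https://portal.azure.com' -SendInvitationMessage

# Assign the new guest account an RBAC role on Subscription1.
Connect-AzAccount -TenantId 'contosoazure.onmicrosoft.com'
New-AzRoleAssignment -ObjectId $invite.InvitedUser.Id `
    -RoleDefinitionName 'Reader' `
    -Scope '/subscriptions/<subscription1-id>'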

Final Answer: The final answer is Create guest accounts for all the contoso.com users in contosoazure.onmicrosoft.com.

88
Q

You have an application named App1 that does not support Azure Active Directory (Azure AD) authentication.
You need to ensure that App1 can send messages to an Azure Service Bus queue. The solution must prevent App1 from listening to the queue.
What should you do?
Configure Access control (IAM) for the Service Bus.
Add a shared access policy to the queue.
Modify the locks of the queue.
Configure Access control (IAM) for the queue.

A

The requirement is to grant an application (App1) that does not support Azure Active Directory (Azure AD) authentication the ability to send messages to an Azure Service Bus queue (Queue1) while explicitly preventing it from listening to messages. This scenario points towards using Shared Access Policies (SAPs) for delegated access with specific permissions, as App1 cannot use Azure AD for authentication.

Let’s analyze each option:

Configure Access control (IAM) for the Service Bus.

Explanation: Access control (IAM) in Azure assigns Azure RBAC roles to Azure AD identities (users, groups, service principals, managed identities). Granular roles such as Azure Service Bus Data Sender do exist, but they can only be used by callers that authenticate with Azure AD. Because App1 does not support Azure AD authentication, IAM, whether scoped to the namespace or the queue, cannot be its authentication mechanism.

Add a shared access policy to the queue.

Explanation: Shared Access Policies (SAPs) are a feature of Azure Service Bus specifically designed to grant delegated access to Service Bus resources (namespaces, queues, topics, subscriptions) to applications that may not support Azure AD authentication or when you need to grant access using shared keys.

Granular Permissions: When creating a Shared Access Policy, you can precisely control the permissions granted. For a queue, these permissions typically include:

Send: Allows sending messages to the queue.

Listen: Allows receiving (reading and processing) messages from the queue.

Manage: Allows managing the queue itself.

Meeting the Requirements: To meet the requirements, you should add a shared access policy to Queue1 and, crucially, grant only the Send permission in the policy, while explicitly not granting the Listen permission. App1 can then use the connection string generated from this SAP to authenticate to Service Bus and send messages, but it will be denied access if it tries to listen or receive messages because the SAP lacks the “Listen” permission.

Modify the locks of the queue.

Explanation: Locks in Azure Service Bus are used to prevent accidental deletion or modification of the queue itself. Locks are for resource management and protection of the queue’s configuration, not for controlling permissions for sending or receiving messages. Modifying locks will not achieve the goal of granting send-only access to App1.

Configure Access control (IAM) for the queue.

Explanation: As explained earlier, IAM applies only to Azure AD identities. Scoping the role assignment to the queue instead of the Service Bus namespace does not change that: App1 still cannot authenticate with Azure AD, so roles such as Azure Service Bus Data Sender cannot be used by it. A shared access policy is the mechanism designed for this situation.

Conclusion:

The most appropriate and effective solution to grant App1 send-only access to Queue1, especially when App1 does not support Azure AD authentication, is to Add a shared access policy to the queue and configure the policy to grant only the “Send” permission, explicitly excluding the “Listen” permission. This directly addresses both requirements: enabling sending and preventing listening for the non-AD application.
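A minimal sketch using the Az.ServiceBus PowerShell module; the resource group, namespace, and policy names are placeholders, and exact parameter names can differ between module versions:

# Create a send-only shared access policy on Queue1.
New-AzServiceBusAuthorizationRule -ResourceGroupName '<rg-name>' `
    -NamespaceName '<namespace-name>' -QueueName 'Queue1' `
    -Name 'App1SendOnly' -Rights @('Send')

# Retrieve the connection string that App1 will use to send messages.
Get-AzServiceBusKey -ResourceGroupName '<rg-name>' `
    -NamespaceName '<namespace-name>' -QueueName 'Queue1' -Name 'App1SendOnly'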

Final Answer: The final answer is Add a shared access policy to the queue.

89
Q

An administrator plans to create a function app in Azure that will have the following settings:
– Runtime stack: .NET Core
– Operating System: Linux
– Plan type: Consumption
– Enable Application Insights: Yes
You need to ensure that you can back up the function app.
Which settings should you recommend changing before creating the function app?
Runtime stack
Disable Application Insights
Operating System
Plan type

A

The correct answer is Plan type.

Here’s why:

Azure Functions on the Consumption plan have some limitations regarding backup and restore. Specifically, they don’t support traditional backup mechanisms like those available for App Service plans. Consumption plan function apps are ephemeral by nature, scaling dynamically and relying on storage for persistence.

To enable proper backup and restore capabilities, you should switch from the Consumption plan to an App Service plan (either dedicated or premium). App Service plans provide a dedicated environment with more control over the underlying infrastructure, allowing for snapshot-based backups and restore operations.

While the other options might have indirect impacts on backup (like Application Insights influencing monitoring and logging), they don’t directly address the core issue of backup capability itself. The runtime stack and OS are generally orthogonal to backup functionality. Disabling Application Insights would reduce data collected but not affect the ability to back up the function app itself. Therefore, changing the plan type is the only change that directly enables backup functionality.

90
Q

HOTSPOT
You have an Azure subscription.

You plan to deploy an app that has a web front end and an application tier.

You need to recommend a load balancing solution that meets the following requirements:

– Internet to web tier:
– Provides URL-based routing
– Supports connection draining
– Prevents SQL injection attacks

– Web tier to application tier:
– Provides port forwarding
– Supports HTTPS health probes
– Supports an availability set as a backend pool

Your solution must minimize costs.

Which load balancing solution should you recommend for each tier? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Internet to web tier:
An Azure Application Gateway that has a web application firewall (WAF)
An internal Azure Standard Load Balancer
A public Azure Basic Load Balancer
Web tier to application tier:
An Azure Application Gateway that has a web application firewall (WAF)
An internal Azure Standard Load Balancer
A public Azure Standard Load Balancer

A

Here’s the breakdown of the correct load balancing solutions for each tier, along with the reasons:

Internet to web tier:

An Azure Application Gateway that has a web application firewall (WAF)

URL-based routing: Application Gateway natively supports routing traffic based on URLs. This is a key requirement.

Connection draining: Application Gateway supports connection draining, ensuring that existing connections are gracefully terminated when a backend server is being taken offline.

Prevents SQL injection attacks: The Web Application Firewall (WAF) integrated with Application Gateway provides protection against common web vulnerabilities, including SQL injection.

Cost: While Application Gateway is more expensive than basic Load Balancer, the requirements explicitly include WAF, necessitating the use of Application Gateway for the internet-facing tier. Using Load Balancer here would require deploying another product to provide WAF capabilities, increasing complexity and potentially cost.

Web tier to application tier:

An internal Azure Standard Load Balancer

Port forwarding: Load Balancers can forward traffic on specific ports.

HTTPS health probes: Standard Load Balancer supports HTTPS health probes, allowing it to monitor the health of backend instances by sending HTTPS requests and verifying the responses.

Supports an availability set as a backend pool: Both Basic and Standard Load Balancers can use an availability set as a backend pool.

Internal: The load balancer needs to be internal because you’re balancing traffic between the web tier and the application tier within your Azure virtual network.

Standard: A Basic Load Balancer can distribute traffic, but it supports only TCP and HTTP health probes. The requirement for HTTPS health probes, along with features such as Availability Zone support, makes the Standard SKU the right choice here.

In summary:

Internet to web tier: Azure Application Gateway with WAF is essential to meet the URL-based routing, connection draining, and SQL injection prevention requirements for traffic from the internet.

Web tier to application tier: An internal Azure Standard Load Balancer effectively handles load balancing between the web and application tiers, meeting requirements with health probes, port forwarding, and availability set integration.

91
Q

You have 10 Azure virtual machines on a subnet named Subnet1. Subnet1 is on a virtual network named VNet1.
You plan to deploy a public Azure Standard Load Balancer named LB1 to the same Azure region as the 10 virtual machines.
You need to ensure that traffic from all the virtual machines to the internet flows through LB1. The solution must prevent the virtual machines from being accessible on the internet.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Add health probes to LB1.
Add the network interfaces of the virtual machines to the backend pool of LB1.
Add an inbound rule to LB1.
Add an outbound rule to LB1.
Associate a network security group (NSG) to Subnet1.
Associate a user-defined route to Subnet1.

A

Here’s the breakdown of the correct actions and why they’re needed:

Add the network interfaces of the virtual machines to the backend pool of LB1. This is essential for the load balancer to be aware of the VMs it should distribute traffic to. Without this, the load balancer doesn’t know which VMs are part of its managed pool.

Add an outbound rule to LB1. Standard Load Balancers require outbound rules to enable outbound connectivity to the internet when VMs in the backend pool don’t have public IPs directly assigned. This rule defines how outbound connections from the VMs are translated to the load balancer’s public IP.

Associate a network security group (NSG) to Subnet1. The NSG is critical for preventing the VMs from being directly accessible from the internet. You’ll configure the NSG to:

Deny inbound traffic from the internet: This is the key step to ensure that the VMs are not directly accessible from the outside. Block all inbound traffic except traffic originating from within the VNet (or specific sources you need to allow).

Allow outbound traffic to the internet through the load balancer: This allows the VMs to initiate outbound connections, which are then handled and NAT’d by the load balancer via the outbound rule. The default outbound rule should suffice here.
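A rough PowerShell sketch of the NSG piece, using placeholder resource group, region, and address values; the existing prefix of Subnet1 must be repeated when the NSG is attached:

# Create an NSG that blocks inbound traffic from the internet.
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName '<rg-name>' `
    -Location '<region>' -Name 'NSG-Subnet1'

$nsg | Add-AzNetworkSecurityRuleConfig -Name 'DenyInternetInbound' -Priority 100 `
    -Direction Inbound -Access Deny -Protocol '*' `
    -SourceAddressPrefix 'Internet' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange '*' |
  Set-AzNetworkSecurityGroup

# Associate the NSG with Subnet1 (the subnet's existing prefix is re-specified).
$vnet = Get-AzVirtualNetwork -ResourceGroupName '<rg-name>' -Name 'VNet1'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'Subnet1' `
    -AddressPrefix '<existing-subnet-prefix>' -NetworkSecurityGroup $nsg |
  Set-AzVirtualNetwork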

Why the other options are incorrect:

Add health probes to LB1: Health probes let the load balancer determine the health of the backend VMs and are required for inbound load-balancing rules, but they are not needed for outbound rules. They therefore do not contribute to routing outbound internet traffic through LB1 or to preventing direct inbound access to the VMs.

Add an inbound rule to LB1: Inbound rules are for directing traffic to the VMs through the load balancer. Since the requirement is to prevent the VMs from being directly accessible on the internet, adding an inbound rule isn’t directly relevant to that goal.

Associate a user-defined route to Subnet1. User-defined routes (UDRs) are used to control the routing of traffic within the virtual network. While you could potentially use UDRs in a more complex scenario, using an NSG to block direct internet access is simpler and more direct.

92
Q

You have SQL Server on an Azure virtual machine named SQL1.
You need to automate the backup of the databases on SQL1 by using Automated Backup v2 for the virtual machines. The backups must meet the following requirements:
– Meet a recovery point objective (RPO) of 15 minutes.
– Retain the backups for 30 days.
– Encrypt the backups at rest.
What should you provision as part of the backup solution?
Elastic Database jobs
Azure Key Vault
an Azure Storage account
a Recovery Services vault

A

The correct answer is an Azure Storage account.

Here’s why:

Azure Storage account: Automated Backup v2 is configured through the SQL Server IaaS Agent Extension, and it writes the database backups directly to an Azure storage account that you specify. The feature lets you set a retention period that covers the required 30 days, takes frequent log backups through SQL Server Managed Backup to Microsoft Azure (which is how the 15-minute RPO is met), and can encrypt the backups by using a password-protected certificate that is kept in the same storage account. The storage account is therefore the resource you must provision as part of the solution.

Here’s why the other options are incorrect:

Elastic Database jobs: These are used for managing and automating tasks across multiple Azure SQL databases. They are not related to backing up SQL Server running inside an Azure VM.

Azure Key Vault: The encryption option of Automated Backup v2 uses a password-protected certificate rather than Azure Key Vault, so a key vault is not required to meet any of the stated requirements.

a Recovery Services vault: A Recovery Services vault is used by Azure Backup (for example, the SQL Server in Azure VM backup workload), not by Automated Backup v2. Because the question specifically calls for Automated Backup v2, the backups are written to a storage account rather than to a vault.
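A rough PowerShell sketch of the configuration against that storage account, assuming the SQL Server IaaS Agent Extension is registered for SQL1 and using placeholder names; parameter availability can vary with the Az.Compute module version:

# Storage account that will hold the automated backups.
$storage = Get-AzStorageAccount -ResourceGroupName '<rg-name>' -Name '<storage-account>'

# 30-day retention with encryption; frequent log backups give the 15-minute RPO.
$backupConfig = New-AzVMSqlServerAutoBackupConfig -Enable `
    -RetentionPeriodInDays 30 `
    -EnableEncryption -CertificatePassword (Read-Host -AsSecureString -Prompt 'Certificate password') `
    -StorageContext $storage.Context -ResourceGroupName '<rg-name>'

# Apply the settings to the SQL Server IaaS Agent Extension on SQL1.
Set-AzVMSqlServerExtension -AutoBackupSettings $backupConfig -VMName 'SQL1' -ResourceGroupName '<rg-name>'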

93
Q

You have an Azure subscription that contains an Azure key vault named KeyVault1 and the virtual machines shown in the following table.
Name Connected to
VM1 VNET1/Subnet1
VM2 VNET1/Subnet2

KeyVault1 has an access policy that provides several users with Create Key permissions.
You need to ensure that the users can only register secrets in KeyVault1 from VM1.
What should you do?
Create a network security group (NSG) that is linked to Subnet1.
Configure the Firewall and virtual networks settings for KeyVault1.
Modify the access policy for KeyVault1.
Configure KeyVault1 to use a hardware security module (HSM).

A

The correct answer is Configure the Firewall and virtual networks settings for KeyVault1.

Here’s why:

Key Vault Firewall and Virtual Networks: This feature allows you to restrict access to your key vault based on the source IP address or the virtual network the request originates from. By configuring this, you can specify that only traffic originating from VNET1 (or, more specifically, Subnet1) is allowed to access KeyVault1. This ensures that only VM1 can register secrets.
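A minimal PowerShell sketch of that configuration, assuming a placeholder resource group name and that the Microsoft.KeyVault service endpoint is enabled on Subnet1:

# Get the resource ID of Subnet1 (the subnet VM1 is connected to).
$vnet   = Get-AzVirtualNetwork -ResourceGroupName '<rg-name>' -Name 'VNET1'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'Subnet1'

# Allow traffic from Subnet1 and deny everything else by default.
Add-AzKeyVaultNetworkRule -VaultName 'KeyVault1' -VirtualNetworkResourceId $subnet.Id
Update-AzKeyVaultNetworkRuleSet -VaultName 'KeyVault1' -DefaultAction Deny -Bypass AzureServices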

Here’s why the other options are incorrect:

Create a network security group (NSG) that is linked to Subnet1: NSGs control inbound and outbound traffic to and from a subnet or network interface. While you can use NSGs to restrict outbound traffic from VM2, you can’t use them to grant specific access to KeyVault1 only from VM1. NSGs operate at the network layer and don’t have the fine-grained control to say “Allow this specific action on this specific resource only from this VM.”

Modify the access policy for KeyVault1: Access policies control who (users, service principals, managed identities) has what permissions on the key vault (e.g., get, create, list). You already have an access policy granting users Create Key permissions. You can’t use access policies to restrict where those permissions can be used from (i.e., only from VM1).

Configure KeyVault1 to use a hardware security module (HSM): HSMs provide a higher level of security for storing keys, but they don’t provide network-based access control. Using an HSM won’t restrict access based on the source VM.

94
Q

HOTSPOT
You have an Azure subscription named Subscription1 that contains a virtual network named VNet1.

You add the users in the following table.
User Role
User1 Owner
User2 Security Admin
User3 Network Contributor

Which user can perform each configuration? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Add a subnet to VNet1:
User1 only
User3 only
User1 and User3 only
User2 and User3 only
User1, User2, and User3
Assign a user the Reader role to VNet1:
User1 only
User2 only
User3 only
User1 and User2 only
User2 and User3 only
User1, User2, and User3


A

Here’s the breakdown of which user can perform each configuration, along with the reasoning:

Add a subnet to VNet1:

User1 and User3 only

Owner (User1): The Owner role has full control over the subscription and all its resources. This includes the ability to create and modify virtual networks and subnets.

Network Contributor (User3): The Network Contributor role specifically grants permissions to manage network resources, including virtual networks and subnets.

Security Admin (User2): The Security Admin role manages security-related configurations and policies. While they might be involved in securing a network, they don’t have the inherent permission to directly modify network resources like adding subnets.

Assign a user the Reader role to VNet1:

User1 only

Owner (User1): The Owner role possesses the right to assign roles to resources within the Azure subscription.

Security Admin (User2): The Security Admin role can manage security-related configurations and policies, but does not have the ability to assign roles.

Network Contributor (User3): The Network Contributor role specifically grants permissions to manage network resources, but does not have the ability to assign roles.

In summary:

Add a subnet to VNet1: User1 and User3 only

Assign a user the Reader role to VNet1: User1 only
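For reference, a short PowerShell sketch of what assigning the Reader role at the VNet1 scope looks like; the sign-in name, subscription ID, and resource group are placeholders:

# Assign the Reader role to a user, scoped to VNet1 only.
New-AzRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Reader' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/virtualNetworks/VNet1'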

94
Q

HOTSPOT
You have an Azure subscription named Subscription1.

In Subscription1, you create an alert rule named Alert1.

The Alert1 action group is configured as shown in the following exhibit.
ResourceGroupName : default-activitylogalerts
GroupShortName : AG1
Enabled : True
EmailReceivers : {Action1-EmailAction-}
SMSReceivers : {Action1-SMSAction-}
WebhookReceivers : {}
Id : /subscriptions/a4fde29b-d56a-4f6c-8298-6c53cd0b720c/resourceGroups/default-activitylogalerts/providers/microsoft.insights/actionGroups/ActionGroup1
Name : ActionGroup1
Type : Microsoft.Insights/ActionGroups
Location : Global
Tags : {}
Alert1 alert criteria is triggered every minute.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.
The number of email messages that Alert1 will send in an hour is [answer choice].
0
4
6
12
60
The number of SMS messages that Alert1 will send in an hour is [answer choice].
0
4
6
12
60

A

Here’s the breakdown of the correct answers:

The number of email messages that Alert1 will send in an hour is [60].

The alert criteria is triggered every minute, so the action group attempts to send an email every minute.

Azure Monitor rate-limits email notifications to no more than 100 messages per hour per email address, so all 60 emails are delivered.

The number of SMS messages that Alert1 will send in an hour is [12].

The alert criteria is also triggered every minute, but Azure Monitor rate-limits SMS notifications to no more than one message every five minutes per phone number.

Over 60 minutes, that works out to a maximum of 12 SMS messages.

Therefore the answers are:

60

12

95
Q

You have an Azure subscription that contains the Azure SQL Database servers shown in the following table.
Name Resource group Location
SQL1 RG1 West US
SQL2 RG2 West US

The SQL Database servers have the elastic pools shown in the following table.
Name SQL Database server vCores Maximum data size
Pool1 SQL1 2 16 GB
Pool2 SQL2 6 48 GB

SQL1 has the SQL databases shown in the following table.
Name SQL Database server vCores Maximum data size Elastic pool
DB1 SQL1 4 30 GB None
DB2 SQL1 2 10 GB Pool1

What will occur if you add DB1 to Pool1?
The vCores on DB1 will decrease to two.
The maximum data size of Pool1 will increase to 22 GB.
The maximum data size of DB1 will decrease to 6 GB.
The vCores on Pool1 will increase to four.

A

The correct answer is:

The maximum data size of DB1 will decrease to 6 GB.

Here’s why:

Elastic Pools and Resource Limits: Elastic pools provide a shared pool of resources (vCores, data size) for the databases within them. When a database is moved into an elastic pool, it becomes subject to the resource limits of that pool. This means that the database’s individual resource limits (vCores, maximum data size) are governed by the pool’s settings, even if the database was previously configured with higher values.

Pool1 Limits: Pool1 has a maximum data size of 16 GB. When DB1, which is currently configured for 30 GB, is added to Pool1, its maximum data size will be constrained by what remains available within Pool1 for it.

Pool1 Capacity and DB2: Currently Pool1 only contains DB2 which is set at 10GB.

DB1 Changes: DB1 will be reduced to a maximum data size of 6GB which is what is available in Pool1 after DB2 is accounted for.

Here’s why the other options are incorrect:

The vCores on DB1 will decrease to two: When a database joins a pool, it shares the pool’s compute with the other databases; its own vCore setting is not rewritten to a specific lower number. What it can actually consume at any moment is capped by the pool’s available vCores and by the per-database minimum and maximum settings configured on the pool.

The maximum data size of Pool1 will increase to 22 GB: The operation of adding a DB to a pool does not dynamically increase the pool size. The pool is a separate entity that must be configured separately.

The vCores on Pool1 will increase to four: The operation of adding a DB to a pool does not dynamically increase the pool vCore size. The pool is a separate entity that must be configured separately.

96
Q

You download an Azure Resource Manager template based on an existing virtual machine. The template will be used to deploy 100 virtual machines.
You need to modify the template to reference an administrative password. You must prevent the password from being stored in plain text.
What should you create to store the password?
an Azure Key Vault and an access policy
Azure Active Directory (AD) Identity Protection and an Azure policy
a Recovery Services vault and a backup policy
an Azure Storage account and an access policy

A

The correct answer is an Azure Key Vault and an access policy.

Here’s why:

Azure Key Vault: Key Vault is designed for securely storing secrets, keys, and certificates. It provides a centralized and secure way to manage sensitive information, preventing you from storing passwords in plain text in your ARM template.

Access Policy: Access policies control which users, applications, or services have access to the secrets stored in the Key Vault. You would create an access policy to grant the appropriate permissions to your deployment process (e.g., a service principal) to retrieve the password from the Key Vault during the VM deployment.
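A minimal PowerShell sketch of the preparation, assuming a hypothetical vault and secret name; the template’s parameter file then references the vault’s resource ID and the secret name instead of a plain-text password:

# Allow Azure Resource Manager to retrieve secrets during template deployment.
Set-AzKeyVaultAccessPolicy -VaultName '<vault-name>' -EnabledForTemplateDeployment

# Store the administrative password as a secret.
$password = Read-Host -AsSecureString -Prompt 'Administrative password'
Set-AzKeyVaultSecret -VaultName '<vault-name>' -Name 'vmAdminPassword' -SecretValue $password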

Here’s why the other options are incorrect:

Azure Active Directory (AD) Identity Protection and an Azure policy: Azure AD Identity Protection helps detect and remediate identity risks, but it’s not designed for storing passwords. Azure Policy enforces organizational standards, but does not serve as a secure store for credentials.

a Recovery Services vault and a backup policy: Recovery Services vaults are used for backup and disaster recovery. They aren’t related to storing secrets or passwords.

an Azure Storage account and an access policy: While Azure Storage can store data securely, it’s not the appropriate service for managing and controlling access to secrets. Key Vault is specifically designed for this purpose, offering features like auditing, versioning, and access control tailored for secrets management.

97
Q

You have an Azure Storage account named storage1 that is accessed by several applications.
An administrator manually rotates the access keys for storage1.
After the rotation, the applications fail to access the storage account.
A developer manually modifies the applications to resolve the issue.
You need to implement a solution to rotate the access keys automatically. The solution must minimize the need to update the applications once the solution is implemented.
What should you include in the solution?
an Azure AD enterprise application
Azure Key Vault
Azure Logic Apps
an Azure Desired State Configuration (DSC) extension

A

The correct answer is Azure Key Vault.

Here’s why:

Azure Key Vault for Secure Storage and Rotation: Azure Key Vault is the core component here. You store the Storage Account access keys in Key Vault. Applications don’t directly use the keys. Instead, they use a managed identity to authenticate to Azure Key Vault, and then retrieve the Storage Account keys from there.

Automatic Rotation with Least Application Impact: When you want to rotate the Storage Account keys, you rotate them in the Storage Account, and then update the values stored in Azure Key Vault with the new keys. Because the applications are retrieving the keys from Key Vault (and not storing them directly), they don’t need to be changed. They simply start retrieving the new key values from Key Vault automatically.
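A rough PowerShell sketch of the pattern, with placeholder resource group and vault names; a rotation script or runbook would repeat these two steps after each key regeneration:

# Read the current key1 value of storage1.
$key = (Get-AzStorageAccountKey -ResourceGroupName '<rg-name>' -Name 'storage1')[0].Value

# Publish it to Key Vault; applications always read the latest secret version.
Set-AzKeyVaultSecret -VaultName '<vault-name>' -Name 'storage1-key1' `
    -SecretValue (ConvertTo-SecureString $key -AsPlainText -Force)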

Here’s why the other options are not as suitable:

an Azure AD enterprise application: Enterprise applications support authentication and authorization scenarios, but they do not provide secure secret storage or key rotation. The identity that applications use to reach Key Vault is represented by a service principal (or managed identity) in Azure AD, but creating an enterprise application by itself does not store or rotate the storage keys.

Azure Logic Apps: Logic Apps can automate tasks, but they do not inherently provide secure secret storage or a way for applications to retrieve secrets dynamically without modification. A logic app could be part of a solution that performs the rotation (regenerating the keys on the storage account and updating them in Key Vault), but it is Key Vault that lets the applications keep working with minimal changes.

an Azure Desired State Configuration (DSC) extension: DSC is used for managing the configuration of virtual machines. While you could potentially use DSC to update configuration files containing connection strings, it doesn’t address the core issue of secure secrets management and rotation, and would require significant application code changes. Furthermore, storing connection strings in files on VMs is less secure than using Key Vault.

98
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription.
You have an on-premises file server named Server1 that runs Windows Server 2019.
You manage Server1 by using Windows Admin Center.
You need to ensure that if Server1 fails, you can recover Server1 files from Azure.
Solution: You create an Azure Storage account and an Azure Storage Sync service. You configure Azure File Sync for Server1.
Does this meet the goal?
Yes
No

A

The answer is Yes.

Here’s why:

Azure File Sync Overview: Azure File Sync allows you to centralize your organization’s file shares in Azure Files while keeping the flexibility, performance, and compatibility of an on-premises file server. It effectively creates a cloud-based copy of your on-premises files.

Protection Against Server Failure: If Server1 fails, the files are still stored in Azure Files. You can then:

Provision a new Windows Server, install the Azure File Sync agent, and configure it to sync with the same Azure file share. This will download the files from Azure Files to the new server.

Access the files directly in the Azure file share itself. This might be suitable for short-term access while a new server is being provisioned.

Therefore, the solution meets the goal of being able to recover Server1 files from Azure in the event of a failure. The files are replicated to Azure.

99
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription.
You have an on-premises file server named Server1 that runs Windows Server 2019.
You manage Server1 by using Windows Admin Center.
You need to ensure that if Server1 fails, you can recover Server1 files from Azure.
Solution: From the Azure portal, you create a Recovery Services vault. On Server1, you install the Azure Backup agent and you successfully perform a backup.
Does this meet the goal?
Yes
No

A

The answer is Yes.

Here’s why:

Azure Backup Overview: Azure Backup is a service that backs up your data to Azure. It can be used to protect a wide variety of workloads, including on-premises Windows Servers.

Protection Against Server Failure: By installing the Azure Backup agent on Server1 and performing a successful backup to a Recovery Services vault, you have created a backup copy of Server1’s data in Azure. If Server1 fails, you can use Azure Backup to restore the files to a new server or even to the same server (after it’s repaired/replaced).

100
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription.
You have an on-premises file server named Server1 that runs Windows Server 2019.
You manage Server1 by using Windows Admin Center.
You need to ensure that if Server1 fails, you can recover Server1 files from Azure.
Solution: You register Windows Admin Center in Azure and configure Azure Backup.
Does this meet the goal?
Yes
No

A

The answer is No.

Here’s why:

Registering Windows Admin Center in Azure: This connects Windows Admin Center to your Azure subscription so that Azure hybrid services can be managed from its interface. The registration by itself does not copy any data from Server1 to Azure.

Configuring Azure Backup: As described, the solution stops at configuring the service. Nothing in the solution states that a Recovery Services vault was created, that the backup agent was installed on Server1, or that a backup of Server1’s files actually completed to Azure.

Because no successful backup of Server1’s files to Azure is established by this solution, the files could not be recovered from Azure if Server1 failed, so the goal is not met.

101
Q

HOTSPOT
You need to design an authentication solution that will integrate on-premises Active Directory and Azure Active Directory (Azure AD). The solution must meet the following requirements:

– Active Directory users must not be able to sign in to Azure AD-integrated apps outside of the sign-in hours configured in the Active Directory user accounts.
– Active Directory users must authenticate by using multi-factor authentication (MFA) when they sign in to Azure AD-integrated apps.
– Administrators must be able to obtain Azure AD-generated reports that list the Active Directory users who have leaked credentials.
– The infrastructure required to implement and maintain the solution must be minimized.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Integrate Active Directory and Azure AD by using:
Active Directory Federation Services
Pass-through authentication with Azure AD Seamless SSO
Pass-through authentication with Azure AD Seamless SSO and password hash synchronization
Password hash synchronization with Azure AD Seamless SSO
Implement MFA by using:
A third-party authentication solution
Azure MFA
The Active Directory Federation Services (AD FS) Azure MFA adapter

A

Here’s the breakdown of the correct selections:

Integrate Active Directory and Azure AD by using: Pass-through authentication with Azure AD Seamless SSO and password hash synchronization

Pass-through authentication: Sign-in requests are validated directly against the on-premises domain controllers, so Active Directory account restrictions, including the sign-in hours configured on the user accounts, are enforced when users sign in to Azure AD-integrated apps. Password hash synchronization alone cannot do this, because Azure AD validates the synchronized hash without consulting the on-premises sign-in hours.

Password hash synchronization (in addition to pass-through authentication): Azure AD Identity Protection can only report users with leaked credentials when password hashes are synchronized to Azure AD. Syncing the hashes alongside pass-through authentication satisfies the reporting requirement.

Azure AD Seamless SSO: Automatically signs users in from domain-joined devices on the corporate network, improving the user experience without adding infrastructure.

Implement MFA by using: Azure MFA

Azure MFA: Azure Multi-Factor Authentication is the native Azure AD capability for enforcing MFA on Azure AD-integrated apps. It integrates directly with Azure AD and requires no additional on-premises infrastructure.

Here’s why the other options are incorrect:

Active Directory Federation Services (AD FS): AD FS would also enforce sign-in hours because authentication happens on-premises, but it requires deploying and maintaining federation servers and proxies, which conflicts with the requirement to minimize infrastructure. The leaked-credentials report would still require password hash synchronization.

Password hash synchronization with Azure AD Seamless SSO: This is the simplest option, but because Azure AD authenticates users against the synchronized hash, the sign-in hours configured in Active Directory would not be enforced.

Pass-through authentication with Azure AD Seamless SSO (without password hash synchronization): Sign-in hours would be enforced, but without synchronized password hashes, Azure AD cannot generate the leaked-credentials report.

A third-party authentication solution: Adds unnecessary cost and management overhead. Azure MFA is built in and meets the requirement.

The Active Directory Federation Services (AD FS) Azure MFA adapter: Only applies if AD FS is deployed, which is not part of the recommended design.

102
Q

You have resources in three Azure regions. Each region contains two virtual machines. Each virtual machine has a public IP address assigned to its network interface and a locally installed application named App1.
You plan to implement Azure Front Door-based load balancing across all the virtual machines.
You need to ensure that App1 on the virtual machines will only accept traffic routed from Azure Front Door.
What should you implement?
Azure Private Link
service endpoints
network security groups (NSGs) with service tags
network security groups (NSGs) with application security groups

A

The correct answer is network security groups (NSGs) with service tags.

Here’s why:

NSGs and Service Tags for Source Filtering: Network Security Groups (NSGs) are used to filter network traffic to and from Azure resources. Service tags represent a group of IP address prefixes from a given Azure service. By using service tags in your NSG rules, you can easily allow traffic from Azure Front Door while blocking all other traffic.

Service Tag: AzureFrontDoor.Backend: Azure Front Door has a service tag named AzureFrontDoor.Backend. This service tag represents the IP address ranges used by Azure Front Door’s backend services. You can create an NSG rule that allows inbound traffic from AzureFrontDoor.Backend and denies all other inbound traffic. This will ensure that only traffic originating from Azure Front Door can reach your virtual machines.
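A minimal PowerShell sketch of such a rule pair, assuming an existing NSG with placeholder names and that App1 listens on ports 80 and 443:

$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName '<rg-name>' -Name '<nsg-name>'

# Allow only Azure Front Door's backend address ranges to reach App1,
# then deny all other inbound internet traffic.
$nsg | Add-AzNetworkSecurityRuleConfig -Name 'AllowFrontDoorBackend' -Priority 100 `
    -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceAddressPrefix 'AzureFrontDoor.Backend' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange @('80', '443') |
  Add-AzNetworkSecurityRuleConfig -Name 'DenyInternetInbound' -Priority 200 `
    -Direction Inbound -Access Deny -Protocol '*' `
    -SourceAddressPrefix 'Internet' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange '*' |
  Set-AzNetworkSecurityGroup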

Here’s why the other options are incorrect:

Azure Private Link: Azure Private Link provides private connectivity to Azure PaaS services from your virtual network. While it enhances security by removing exposure to the public internet, it’s not directly applicable to ensuring that traffic only comes from Azure Front Door. Private Link is more relevant when you want to access a service from within your VNet without using public endpoints.

Service endpoints: Service endpoints provide secure and direct connectivity from your virtual network to Azure service resources. Similar to Private Link, they don’t directly address the requirement of restricting traffic to only come from Azure Front Door. They are for connecting to service resources, not filtering traffic sources.

Network security groups (NSGs) with application security groups: Application security groups (ASGs) group VMs with similar functions together. While you can use ASGs in NSG rules, they don’t help you filter traffic based on the source of the traffic (i.e., ensuring it only comes from Azure Front Door).

103
Q

You have an Azure key vault named KV1.
You need to ensure that applications can use KV1 to provision certificates automatically from an external certification authority (CA).
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
From KV1, create a certificate issuer resource.
Obtain the CA account credentials.
Obtain the root CA certificate.
From KV1, create a certificate signing request (CSR).
From KV1, create a private key,

A

The two correct actions are:

From KV1, create a certificate issuer resource.

Obtain the CA account credentials.

Here’s why:

Create a certificate issuer resource: This step is crucial because it establishes the connection and configuration for the Key Vault to interact with the external Certificate Authority (CA). You need to define the CA provider (e.g., Digicert, GlobalSign) and any necessary details for Key Vault to communicate with the CA. The certificate issuer resource tells Key Vault which CA to use.

Obtain the CA account credentials: To allow Key Vault to automatically provision certificates from the external CA, you need to provide the credentials (e.g., API key, username/password) that Key Vault will use to authenticate with the CA. This ensures that Key Vault has the necessary permissions to request and obtain certificates on your behalf.
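A rough PowerShell sketch of both steps, assuming a partnered CA provider such as DigiCert and placeholder account credentials; the exact provider name and credential fields depend on the CA:

# Register the external CA as a certificate issuer in KV1.
Set-AzKeyVaultCertificateIssuer -VaultName 'KV1' -Name 'ExternalCA' `
    -IssuerProvider 'DigiCert' -AccountId '<ca-account-id>' `
    -ApiKey (Read-Host -AsSecureString -Prompt 'CA API key')

# Request a certificate from that issuer; Key Vault handles key generation,
# the CSR exchange with the CA, and renewal automatically.
$policy = New-AzKeyVaultCertificatePolicy -SecretContentType 'application/x-pkcs12' `
    -SubjectName 'CN=www.contoso.com' -IssuerName 'ExternalCA' -ValidityInMonths 12
Add-AzKeyVaultCertificate -VaultName 'KV1' -Name 'Cert1' -CertificatePolicy $policy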

Here’s why the other options are incorrect:

Obtain the root CA certificate: The root CA certificate matters to the clients that must trust the issued certificates, but it is not part of configuring Key Vault to provision certificates automatically. You do not need to upload the root CA certificate to Key Vault to request certificates from the issuer.

From KV1, create a certificate signing request (CSR): You generate a CSR when you process a certificate request manually with a CA that is not integrated with Key Vault. Because the goal here is automatic provisioning through a configured issuer, creating a CSR manually is not required.

From KV1, create a private key: Key Vault will create a private key as part of the certificate creation process, but this is done automatically as part of its interaction with the certificate issuer. You don’t need to explicitly create a private key before setting up the certificate issuer.

104
Q

You create the following Azure role definition.
{
  "Name": "Role1",
  "Id": "80808080-8080-8080-8080-808080808080",
  "IsCustom": false,
  "Description": "",
  "Actions": [
    "Microsoft.Storage/*/read",
    "Microsoft.Network/*/read",
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Authorization/*/read"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": []
}
You need to create Role1 by using the role definition.
Which two values should you modify before you create Role1? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
AssignableScopes
Description
DataActions
IsCustom
Id

A

The two values that you should modify before creating Role1 are:

AssignableScopes

IsCustom

Here’s why:

AssignableScopes: This property defines where the custom role can be assigned. You must specify at least one scope (for example, a subscription or a resource group); a role definition with an empty AssignableScopes array cannot be created or assigned.

IsCustom: Any role definition that you create yourself is a custom role, so this value must be set to true. With IsCustom set to false, the definition describes a built-in role, which you cannot create.

Here’s why the other values do not have to be modified:

Id: When you create a custom role (for example, with New-AzRoleDefinition), Azure assigns a unique GUID to the role, so the placeholder value does not have to be changed first.

Description: A description is recommended for clarity, but it is not mandatory. You can leave it as an empty string ("") if you choose.

DataActions: This property lists data-plane operations (e.g., reading blob data, sending a message to a queue). The role only grants management-plane permissions, so leaving DataActions empty is perfectly valid.

In summary:

AssignableScopes must contain at least one scope at which Role1 can be assigned.

IsCustom must be set to true because Role1 is a custom role.
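A short PowerShell sketch of creating the role after those two corrections, assuming the corrected definition is saved as role1.json (the file name is a placeholder):

# The corrected file has "IsCustom": true and at least one assignable scope,
# for example: "AssignableScopes": [ "/subscriptions/<subscription-id>" ].
New-AzRoleDefinition -InputFile .\role1.json

# Verify the new custom role.
Get-AzRoleDefinition -Name 'Role1'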

105
Q

You have the following Azure Active Directory (Azure AD) tenants:
– Contoso.onmicrosoft.com: Linked to a Microsoft 365 tenant and syncs to an Active Directory forest named contoso.com by using password hash synchronization
– Contosoazure.onmicrosoft.com: Linked to an Azure subscription named Subscription1
You need to ensure that you can assign the users in contoso.com access to the resources in Subscription1.
What should you do?
Create an Azure management group that contains Subscription1.
Configure contoso.onmicrosoft.com to use pass-through authentication.
Create guest accounts for all the contoso.com users in contosoazure.onmicrosoft.com.
Configure Active Directory Federation Services (AD FS) federation between contosoazure.onmicrosoft.com and contoso.com.

A

The correct answer is:

Create guest accounts for all the contoso.com users in contosoazure.onmicrosoft.com.

Here’s why:

Azure AD Tenants and Access Control: Azure AD tenants are isolated security boundaries. To grant users from one tenant (contoso.onmicrosoft.com) access to resources in another tenant (contosoazure.onmicrosoft.com), you need to create a representation of those users in the target tenant. The standard way to do this is by creating guest accounts.

Guest Accounts: Guest accounts are B2B (Business-to-Business) collaboration accounts in Azure AD. They allow users from external Azure AD tenants or other identity providers to access resources in your Azure AD tenant.

Here’s why the other options are incorrect:

Create an Azure management group that contains Subscription1: Management groups are used for organizing and managing Azure subscriptions and resources. While management groups are useful for setting policies and assigning roles at a higher level, they don’t solve the fundamental problem of cross-tenant identity and access management. Creating a management group will not automatically grant access to users in another tenant.

Configure contoso.onmicrosoft.com to use pass-through authentication: Pass-through authentication is an authentication method that allows users to sign in to Azure AD-integrated applications using the same password they use in their on-premises Active Directory. While pass-through authentication can simplify sign-in, it doesn’t resolve the issue of users needing a representation in the contosoazure.onmicrosoft.com tenant before they can be granted access to resources. The users would still not exist in that tenant.

Configure Active Directory Federation Services (AD FS) federation between contosoazure.onmicrosoft.com and contoso.com: Setting up AD FS federation is overly complex for this scenario. Federation is useful when you need to establish a trust relationship between two organizations to allow users to seamlessly access resources across both organizations using their existing credentials. Guest accounts with B2B collaboration are much simpler to set up and maintain for granting access to resources in a different Azure AD tenant.
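
For illustration, here is a minimal sketch of inviting one contoso.com user as a guest through the Microsoft Graph invitations endpoint. It assumes an access token for the contosoazure.onmicrosoft.com tenant with the User.Invite.All permission has already been obtained; the email address and redirect URL are placeholders.

import requests

# Assumed to have been acquired separately (for example via MSAL) with the
# User.Invite.All permission in the contosoazure.onmicrosoft.com tenant.
GRAPH_TOKEN = "<access-token>"

invitation = {
    "invitedUserEmailAddress": "user1@contoso.com",   # placeholder contoso.com user
    "inviteRedirectUrl": "https://portal.azure.com",  # where the guest lands after redeeming
    "sendInvitationMessage": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=invitation,
)
resp.raise_for_status()
print(resp.json()["invitedUser"]["id"])  # object ID of the new guest account

Once the guest account exists in contosoazure.onmicrosoft.com, it can be assigned Azure RBAC roles on Subscription1 like any other user.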

106
Q

You are developing an application that will enable users to download content from an Azure Storage account.
The users must only be able to download the content for a period of seven days.
You need to recommend an authentication solution to access the storage account.
What should you include in the recommendation?
shared access signature (SAS) tokens
identity-based authentication that uses Active Directory Domain Services (AD DS)
storage access key
identity-based authentication that uses Azure Active Directory (Azure AD)

A

The correct answer is shared access signature (SAS) tokens.

Here’s why:

Shared Access Signatures (SAS): SAS tokens provide granular, time-bound, and secure access to specific resources in Azure Storage. You can create a SAS token that grants read-only access to the content for a specific duration (in this case, seven days). The SAS token includes all the necessary authorization information, so the application doesn’t need to store any credentials.

Controlling Access Duration: One of the key benefits of SAS tokens is that you can specify the start and expiry time for the access. This perfectly fulfills the requirement of allowing users to download content only for seven days.

Here’s why the other options are incorrect:

Identity-based authentication that uses Active Directory Domain Services (AD DS): AD DS is an on-premises directory service. While you can integrate AD DS with Azure, using it directly for storage access would be more complex and less suitable for this scenario, which requires time-limited access. AD DS authentication is typically used for VMs and servers, but not directly for user-specific storage access.

Storage access key: Storage account access keys provide full access to the entire storage account. Sharing the storage access key would be a significant security risk, and it doesn’t allow you to control the access duration or the specific resources that the users can access.

Identity-based authentication that uses Azure Active Directory (Azure AD): Azure AD authentication provides identity-based access to Azure resources. While it’s a more secure approach than using storage access keys, it requires creating and managing user identities in Azure AD, and it doesn’t provide a built-in mechanism for limiting the access duration. You would need to implement additional logic in your application to enforce the seven-day limit. With SAS, Azure Storage enforces the expiration automatically, without any additional application code.
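
For illustration, a minimal sketch of generating a read-only SAS URL that expires after seven days, using the azure-storage-blob Python package; the account name, key, container, and blob name are placeholders.

from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholder values - substitute the real storage account details.
ACCOUNT_NAME = "storageaccount1"
ACCOUNT_KEY = "<storage-account-key>"
CONTAINER = "downloads"
BLOB = "content.zip"

sas_token = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),              # download only
    expiry=datetime.now(timezone.utc) + timedelta(days=7),  # valid for seven days
)

download_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas_token}"
print(download_url)

After the expiry time passes, requests made with this URL are rejected by the storage service; the application does not have to revoke anything.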

107
Q

HOTSPOT
You have an on-premises server that runs Windows Server 2019 and hosts a web app named App1.

You have an Azure subscription named Subscription1.

You plan to migrate App1 to Subscription1 by using Azure Migrate.

To which type of Azure service will App1 be migrated, and what should you provide during the migration? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Azure service type:
A logic app
An app service
A virtual machine
A container instance
During the migration, provide:
The device code
An X.509 certificate
A personal access token (PAT)

A

Here’s the breakdown of the correct selections:

Azure service type: An app service

The question states you are migrating a web app named App1. The Azure App Service is designed to host web applications. Azure Migrate can be used to migrate on-premises web applications to Azure App Service.

During the migration, provide: A personal access token (PAT)

When migrating to Azure App Service using Azure Migrate, you’ll need to provide credentials to access the deployment source (in this case, the on-premises server that hosts App1). A personal access token (PAT) can be used to authenticate to that deployment source during the migration process.

Here’s why the other options are incorrect:

A logic app: Logic Apps are designed for automating workflows and integrating applications, but they are not the correct type of service for hosting a web application directly.

A virtual machine: While you could migrate App1 to an Azure virtual machine, it would be less efficient and cost-effective than using Azure App Service, which is optimized for hosting web apps. App Service abstracts away much of the infrastructure management required with VMs.

A container instance: Container Instances are useful for running isolated containers, but they’re generally not used for directly migrating existing web applications that are already running on a web server. Container Instances are better suited for containerized applications or microservices.

The device code: Device code authentication is typically used for headless devices or applications without a browser to authenticate with Azure Active Directory.

An X.509 certificate: While X.509 certificates are used for securing communications and authenticating to resources, they’re not the typical credential type used during an Azure Migrate migration.

108
Q

You create the user-assigned managed identities shown in the following table.
Name Resource group Location
Identity1 RG1 West US
Identity2 RG2 Central US
Identity3 RG2 West US

You create a virtual machine that has the following configurations:
– Name: VM1
– Location: West US
– Resource group: RG1
Which managed identities can you add to VM1?
Identity1 only
Identity1 and Identity2 only
Identity1 and Identity3 only
Identity1, Identity2, and Identity3

A

The correct answer is Identity1 and Identity3 only.

Here’s why:

User-assigned managed identities must reside in the same Azure region as the virtual machine they are assigned to.

VM1 Location: West US

Identity1 Location: West US (Same region as VM1) - Therefore, it can be assigned.

Identity2 Location: Central US (Different region from VM1) - Therefore, it cannot be assigned.

Identity3 Location: West US (Same region as VM1) - Therefore, it can be assigned.

While user-assigned managed identities can be in a different resource group than the resource to which they are assigned, they cannot be in a different region.

109
Q

You have an Azure key vault named KV1 and an Azure web app named WebApp1. WebApp1 runs in a Shared App Service plan.
You need to grant WebApp1 permissions to KV1.
What should you do?
Change to a Standard App Service plan.
Add a certificate to WebApp1.
Change to a Basic App Service plan.
Add a managed identity to WebApp1.

A

The correct answer is Add a managed identity to WebApp1.

Here’s why:

Managed Identities for Azure Resources: Managed identities provide an Azure service (like a web app) with an automatically managed identity in Azure Active Directory (Azure AD). This allows the service to authenticate to other Azure services (like Key Vault) without needing to embed credentials in the code or configuration.

Key Vault Access Control: Key Vault uses Azure AD for authentication. You grant access to Key Vault by assigning permissions to Azure AD identities. By enabling a managed identity for WebApp1, you can then grant that identity the appropriate permissions (e.g., Get Secret) on KV1.

Here’s why the other options are not the correct solution:

Change to a Standard App Service plan: While upgrading to a Standard App Service plan provides more resources and features, it’s not directly related to granting WebApp1 permissions to Key Vault. The managed identity feature is the key to secure authentication.

Add a certificate to WebApp1: Adding a certificate to WebApp1 might be needed for HTTPS communication, but doesn’t grant WebApp1 permission to access Key Vault. Certificates handle secure communication, not service-to-service authentication.

Change to a Basic App Service plan: The same logic applies to the Basic App Service plan. Changing the plan is not the correct way to solve the authentication problem; managed identities are supported across App Service plans, including the Shared plan, so no plan change is required.
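
For illustration, a minimal sketch of how code running inside WebApp1 could read a secret from KV1 once its managed identity has been granted secret read access (for example, the Get secret permission in an access policy or the Key Vault Secrets User role). It uses the azure-identity and azure-keyvault-secrets Python packages; the vault URI and secret name are placeholders.

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Inside WebApp1, the platform provides the managed identity automatically;
# no client secret or connection string is stored in code or configuration.
credential = ManagedIdentityCredential()

client = SecretClient(
    vault_url="https://kv1.vault.azure.net",  # placeholder URI for KV1
    credential=credential,
)

secret = client.get_secret("Database-Password")  # hypothetical secret name
print(secret.name)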

110
Q

HOTSPOT
You have an Azure subscription that contains 20 virtual machines. The virtual machines run Windows Server 2019.

You need to enable Update Management and deploy the required agents to the virtual machines.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
To enable Update Management, modify:
the Log Analytics workspace
the Azure Sentinel workbook
the Azure Automation account
To deploy the required agents to the
virtual machines, use:
An Azure Sentinel workbook
An Azure Automation account
An Azure Log Analytics workspace

A

Here’s the breakdown of the correct selections:

To enable Update Management, modify: the Azure Automation account

Update Management is a feature of Azure Automation. To enable it, you need to enable the Update Management solution in your Azure Automation account.

To deploy the required agents to the virtual machines, use: An Azure Automation account

The Log Analytics agent (MMA) must be installed on the virtual machines for Update Management to work; when you enable Update Management for the machines from the Azure Automation account, the agent is deployed to them as part of that process.

Here’s why the other options are incorrect:

The Log Analytics workspace: A Log Analytics workspace is where the logs and data collected by Update Management are stored, but you don’t modify it to enable the feature itself. It is a destination, not a configuration point.

The Azure Sentinel workbook: Azure Sentinel is a cloud-native SIEM (Security Information and Event Management) platform. While Sentinel can consume logs from Update Management, you don’t enable Update Management or deploy agents through Sentinel.