test6 Flashcards
You have two subscriptions named Subscription1 and Subscription2. Each subscription is associated to a different Azure AD tenant.
Subscription1 contains a virtual network named VNet1. VNet1 contains an Azure virtual machine named VM1 and has an IP address space of 10.0.0.0/16.
Subscription2 contains a virtual network named VNet2. VNet2 contains an Azure virtual machine named VM2 and has an IP address space of 10.10.0.0/24.
You need to connect VNet1 to VNet2.
What should you do first?
A. Modify the IP address space of VNet2.
B. Move VM1 to Subscription2.
C. Provision virtual network gateways.
D. Move VNet1 to Subscription2.
The correct answer is C. Provision virtual network gateways.
Here’s why:
Explanation:
To connect two Azure Virtual Networks (VNets), especially when they are in different subscriptions or Azure AD tenants, you typically need to use virtual network gateways. Virtual network gateways enable you to create VPN tunnels between VNets, allowing resources in different VNets to communicate with each other.
Let’s break down why each option is (or isn’t) the correct first step:
A. Modify the IP address space of VNet2.
Why it’s incorrect as the first step: The IP address spaces of VNet1 (10.0.0.0/16) and VNet2 (10.10.0.0/24) are not overlapping. Non-overlapping IP address spaces are a requirement for VNet peering or VNet-to-VNet connections using gateways. Modifying the IP address space is only necessary if there’s an overlap, which isn’t the case here. Therefore, this is not the first step.
B. Move VM1 to Subscription2.
Why it’s incorrect: Moving VM1 to Subscription2 is completely irrelevant to connecting VNet1 and VNet2. The goal is to connect the networks, not move VMs between subscriptions as a prerequisite for network connectivity. VMs are within the VNets, but the connection needs to be established at the VNet level.
C. Provision virtual network gateways.
Why it’s correct: Provisioning virtual network gateways is the essential first step for establishing a VNet-to-VNet connection, especially across subscriptions. Virtual network gateways are the Azure resources that create and manage the VPN tunnels required for VNet-to-VNet connectivity. Before you can configure the actual connection, you must have gateways in place in both VNets.
D. Move VNet1 to Subscription2.
Why it’s incorrect as the first step: Moving VNet1 to Subscription2 would place both VNets in the same subscription. That might simplify management and would allow same-subscription VNet peering (if peering were chosen instead of gateways), but it is not necessary: you can connect VNets across subscriptions and tenants without moving them. Moving a VNet is also a complex operation with potential impact on existing resources and configurations, and gateways (option C) work without any move.
In summary:
The most logical and necessary first step to connect VNet1 and VNet2, especially given they are in different subscriptions, is to provision virtual network gateways in both VNets. This is the foundational infrastructure component required to build the VPN tunnels for VNet-to-VNet communication.
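As a rough sketch of what “provision virtual network gateways” means in practice, the Az PowerShell flow below creates a route-based VPN gateway in VNet1; all names, the location, and the gateway-subnet prefix are hypothetical, and the same steps would be repeated in Subscription2 for VNet2 before the connection itself is created:

```powershell
# Minimal sketch (hypothetical names): provision a VPN gateway in VNet1.
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1"

# A gateway requires a dedicated subnet named exactly "GatewaySubnet".
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.255.0/27" -VirtualNetwork $vnet
$vnet = $vnet | Set-AzVirtualNetwork

$pip = New-AzPublicIpAddress -Name "VNet1-GW-IP" -ResourceGroupName "RG1" `
    -Location "eastus" -AllocationMethod Dynamic
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipConfig = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" `
    -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

New-AzVirtualNetworkGateway -Name "VNet1-GW" -ResourceGroupName "RG1" -Location "eastus" `
    -IpConfigurations $ipConfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
```

Gateway provisioning can take up to 45 minutes; only after both gateways exist can New-AzVirtualNetworkGatewayConnection link the two VNets.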
Final Answer: C
You have an Azure Active Directory (Azure AD) tenant.
You have an existing Azure AD conditional access policy named Policy1. Policy1 enforces the use of Azure AD-joined devices when members of the Global Administrators group authenticate to Azure AD from untrusted locations.
You need to ensure that members of the Global Administrators group will also be forced to use multi-factor authentication when authenticating from untrusted locations.
What should you do?
A. From the Azure portal, modify session control of Policy1.
B. From multi-factor authentication page, modify the user settings.
C. From multi-factor authentication page, modify the service settings.
D. From the Azure portal, modify grant control of Policy1.
The correct answer is D. From the Azure portal, modify grant control of Policy1.
Here’s why:
Conditional Access Grant Controls: Conditional Access policies in Azure AD work by evaluating conditions (like user group, location, device platform) and then applying grant controls if those conditions are met. Grant controls define what is required to grant access. Common grant controls include:
Require multi-factor authentication: Enforces MFA.
Require device to be marked as compliant: Requires the device to be managed and compliant with your compliance policies.
Require hybrid Azure AD joined device: Requires the device to be hybrid Azure AD joined.
Require approved client app: Requires the user to use an approved client application.
Require app protection policy: Requires the user to use an app that has app protection policies applied.
Policy1’s Current Configuration: Policy1 already enforces “Azure AD-joined devices” for Global Administrators from untrusted locations. This means the “Grant” control section of Policy1 is already configured to “Require device to be marked as compliant” or “Require hybrid Azure AD joined device” (or a similar device-based control).
Adding MFA Requirement: To also force MFA, you need to add the “Require multi-factor authentication” grant control to Policy1. The “Grant” control section in the Azure portal for Policy1 lets you select multiple grant requirements and choose whether to require one of the selected controls or all of them. In this case, you would require all of the selected controls: the existing device requirement and MFA.
Why other options are incorrect:
A. From the Azure portal, modify session control of Policy1. Session controls are applied after authentication and access are granted. They control the user session behavior, such as sign-in frequency, persistent browser sessions, and application enforced restrictions. Session controls are not used to enforce primary authentication requirements like MFA or device compliance.
B. From multi-factor authentication page, modify the user settings. The older Azure AD MFA settings page (if you are referring to the legacy MFA settings) is primarily for per-user MFA enforcement and app password management. Conditional Access policies are the modern and recommended way to manage MFA at scale and based on conditions (like location, user group, etc.). Modifying user settings directly bypasses the conditional access policy logic and is not the correct approach for this scenario.
C. From multi-factor authentication page, modify the service settings. Similar to option B, service settings on the legacy MFA page are generally for configuring MFA provider settings (like verification methods), not for integrating MFA into Conditional Access policies. Conditional Access policies are configured separately.
In summary: To enforce MFA in addition to the existing device requirement for Global Administrators from untrusted locations within Policy1, you need to modify the grant controls of Policy1 in the Azure portal and add the “Require multi-factor authentication” option.
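For illustration, the same grant-control change can be made with Microsoft Graph PowerShell; in this sketch the policy ID is a placeholder, and compliantDevice stands in for whichever device-based control Policy1 already uses:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Require ALL selected controls: the existing device control plus MFA.
$grant = @{
    Operator        = "AND"
    BuiltInControls = @("compliantDevice", "mfa")
}
Update-MgIdentityConditionalAccessPolicy -ConditionalAccessPolicyId "<policy1-id>" -GrantControls $grant
```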
Final Answer: D
You plan to deploy five virtual machines to a virtual network subnet.
Each virtual machine will have a public IP address and a private IP address.
Each virtual machine requires the same inbound and outbound security rules.
What is the minimum number of network interfaces and network security groups that you require? To answer, select the appropriate options in the answer area.
Network interfaces:
5
10
15
20
Network Security Group:
1
2
5
10
Network interfaces: 5
Network Security Group: 1
Explanation:
Network Interfaces:
Minimum Requirement: Each Azure virtual machine must have at least one network interface (NIC) to connect to a virtual network and communicate with other resources.
Public and Private IPs on a Single NIC: A single NIC on an Azure VM can be configured with both a private IP address (from the subnet’s IP range) and a public IP address. You don’t need separate NICs to have both types of IP addresses.
Calculation: Since you have five virtual machines, you need a minimum of 5 network interfaces, one for each VM.
Network Security Groups (NSGs):
Subnet-Level NSGs: Network Security Groups can be associated with either individual network interfaces or entire subnets. When you associate an NSG with a subnet, the security rules in that NSG apply to all virtual machines within that subnet.
Shared Security Rules: The requirement states that “Each virtual machine requires the same inbound and outbound security rules.” This is the key point. Because the security rules are identical for all VMs in the subnet, you can efficiently manage security by applying a single NSG at the subnet level.
Minimizing NSGs: Using a subnet-level NSG is the most efficient and least administrative effort approach when VMs within a subnet share the same security requirements. You avoid the need to create and manage individual NSGs for each VM or NIC.
Calculation: Since all VMs in the subnet need the same rules, and they are in the same subnet, you need a minimum of 1 Network Security Group applied to the subnet.
Why other options are incorrect:
Network Interfaces:
10, 15, 20: These numbers are unnecessarily high. You don’t need multiple NICs per VM just because they have public and private IPs, or because they require the same security rules. One NIC per VM is sufficient.
Network Security Groups:
2, 5, 10: These numbers are also unnecessarily high. Creating multiple NSGs (especially more than 1) for this scenario would be redundant and increase management complexity without providing any benefit since the security rules are identical for all VMs and they are in the same subnet. Applying more than one NSG per subnet or per NIC could even lead to conflicting or overly complex security configurations.
Therefore, the minimum and most efficient configuration is:
5 Network Interfaces (one per VM)
1 Network Security Group (applied to the subnet)
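A minimal Az PowerShell sketch of the subnet-level association, with hypothetical names and one illustrative inbound rule standing in for the shared rule set:

```powershell
# One rule shared by all five VMs (illustrative: allow HTTPS from the internet).
$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-HTTPS-Inbound" -Protocol Tcp `
    -Direction Inbound -Priority 100 -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 443 -Access Allow

$nsg = New-AzNetworkSecurityGroup -Name "NSG1" -ResourceGroupName "RG1" `
    -Location "eastus" -SecurityRules $rule

# Associate the single NSG with the subnet; its rules then apply to every VM in it.
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" `
    -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg
$vnet | Set-AzVirtualNetwork
```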
Final Answer:
Option Selected
Network interfaces: 5
Network Security Group: 1
You have an Azure subscription named Subscription1 that contains an Azure virtual machine named VM1. VM1 is in a resource group named RG1.
VM1 runs services that will be used to deploy resources to RG1.
You need to ensure that a service running on VM1 can manage the resources in RG1 by using the identity of VM1.
What should you do first?
A. From the Azure portal, modify the Access control (IAM) settings of RG1.
B. From the Azure portal, modify the Policies settings of RG1.
C. From the Azure portal, modify the Access control (IAM) settings of VM1.
D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.
The correct answer is D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.
Here’s why:
Explanation:
To allow a service running on VM1 to manage Azure resources using VM1’s identity, you need to enable and configure Managed Identities for Azure Resources on VM1 first. Managed Identities provide Azure services with an automatically managed identity in Azure Active Directory (Azure AD). This identity can then be used to authenticate to Azure services that support Azure AD authentication, without needing to manage credentials in your code.
Let’s break down each option:
A. From the Azure portal, modify the Access control (IAM) settings of RG1.
Why it’s not the first step: Modifying the Access control (IAM) settings of RG1 is a necessary later step, but not the first step. IAM settings on RG1 are where you will grant permissions to the identity of VM1 to manage resources in RG1. However, you first need to enable the Managed Identity on VM1 before you can grant it permissions.
B. From the Azure portal, modify the Policies settings of RG1.
Why it’s incorrect: Azure Policies are used to enforce organizational standards and compliance across Azure resources. They are not related to enabling Managed Identities or granting permissions for a VM to manage resources. Policies are for governance, not identity management in this context.
C. From the Azure portal, modify the Access control (IAM) settings of VM1.
Why it’s incorrect: Modifying the Access control (IAM) settings of VM1 controls who can manage the VM itself. It doesn’t enable the VM’s identity to be used to manage other resources. IAM settings on VM1 are for role-based access control to the VM, not from the VM to other resources.
D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.
Why it’s correct and the first step: This is the correct first step. You need to enable either a System-assigned Managed Identity or a User-assigned Managed Identity (or both) on VM1. Enabling the Managed Identity creates an identity for VM1 in Azure AD. Once the Managed Identity is enabled, you can then proceed to grant this identity permissions to manage RG1 resources via IAM on RG1 (Option A - which would be the next step).
Steps to solve the problem:
Enable Managed Identity on VM1: In the Azure portal, navigate to VM1. Under the “Settings” section, find “Identity”. Enable either “System assigned” or “User assigned” Managed Identity (System-assigned is often simpler for this scenario).
Grant Permissions to VM1’s Identity on RG1: After enabling Managed Identity on VM1, go to Resource Group RG1 in the Azure portal. Navigate to “Access control (IAM)”. Add a role assignment.
For “Principal”, search for the name of VM1 (if System-assigned) or the name of the User-assigned Managed Identity you created.
Select an appropriate role for the service running on VM1 to manage resources in RG1 (e.g., “Contributor” role to have broad management rights, or more specific roles if you want to limit permissions).
Code on VM1: The service running on VM1 can now use the Azure SDKs or REST APIs to authenticate using the VM’s Managed Identity and manage resources in RG1. The Azure SDKs handle the authentication process automatically when running within an Azure VM with Managed Identity enabled.
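A minimal Az PowerShell sketch of steps 1 and 2 above, using the names from the question (the Contributor role is just one reasonable choice):

```powershell
# Step 1: enable the system-assigned managed identity on VM1.
$vm = Get-AzVM -ResourceGroupName "RG1" -Name "VM1"
Update-AzVM -ResourceGroupName "RG1" -VM $vm -IdentityType SystemAssigned

# Step 2: grant that identity a role on RG1. The principal ID is
# available after the update completes.
$principalId = (Get-AzVM -ResourceGroupName "RG1" -Name "VM1").Identity.PrincipalId
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Contributor" -ResourceGroupName "RG1"
```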
In summary, enabling Managed Identity on VM1 is the prerequisite and the correct first step to allow a service on VM1 to manage Azure resources using VM1’s identity.
Final Answer: D
You have an Azure Active Directory (Azure AD) tenant.
You need to create a conditional access policy that requires all users to use multi-factor authentication when they access the Azure portal.
Which three settings should you configure? To answer, select the appropriate settings in the answer area.
NOTE: Each correct selection is worth one point.
Name -> Policy1
users and groups
cloud apps
conditions
Grant
Session
enable policy
To create a conditional access policy that requires MFA for all users accessing the Azure portal, you need to configure the following three core settings:
- Users and groups: You need to specify who this policy applies to. In this case, you want it to apply to all users. Within the Conditional Access policy configuration, you will select “Users and groups” and then choose to apply the policy to “All users”.
- Cloud apps or actions: You need to specify what application(s) this policy protects. In this case, you want to protect access to the Azure portal. Within the Conditional Access policy configuration, you will select “Cloud apps or actions”, choose “Select apps”, and then search for and select the Azure portal app (surfaced in Conditional Access as “Microsoft Azure Management”).
- Grant: You need to specify what access control to enforce when the conditions (user, app) are met. In this case, you want to require multi-factor authentication. Within the Conditional Access policy configuration, you will select “Grant” and then choose “Grant access” and check the box for “Require multi-factor authentication”.
Let’s evaluate the provided options and map them to these core settings:
Name -> Policy1: While a policy needs a name, it’s not a functional setting for enforcing MFA itself. It’s an administrative label. It’s less critical for the functionality compared to the other options.
users and groups: Correct. This is essential to define who the policy applies to (all users in this case).
cloud apps: Correct. This is essential to define what application is being protected (Azure portal).
conditions: Conditions define when the policy applies (for example, location or device state). For the simple requirement of “all users, any time, Azure portal”, you can leave Conditions at the default (any condition), so while the section is part of every policy’s structure, it is not one of the three settings you must actively configure here.
Grant: Correct. This is essential to define what action is taken when the conditions are met, which is “require multi-factor authentication” in this scenario.
Session: Session controls are applied after successful authentication and are used to manage the user session (e.g., sign-in frequency, persistent browser session). They are not directly involved in requiring MFA for initial access.
enable policy: While you need to enable the policy for it to be active, “enable policy” is more of an on/off switch for the entire policy rather than a specific setting within the policy configuration itself. It’s also less directly related to defining the MFA requirement compared to Users, Apps, and Grant.
Considering the need to select three settings that are most directly related to configuring the MFA requirement, the most appropriate and functionally essential settings are:
users and groups
cloud apps
Grant
While “conditions” is an integral part of CA policies, for this specific and simple requirement, the most direct settings to configure to achieve the goal of requiring MFA for Azure portal access for all users are Users, Apps, and Grant.
Therefore, the three settings you should configure are:
users and groups
cloud apps
Grant
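For illustration, here is how those three settings map onto a policy created with Microsoft Graph PowerShell; the sketch assumes the Graph conditional access cmdlets, and the application ID is a placeholder for the Microsoft Azure Management app:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$params = @{
    DisplayName = "Policy1"
    State       = "enabled"
    Conditions  = @{
        Users        = @{ IncludeUsers = @("All") }                        # users and groups
        Applications = @{ IncludeApplications = @("<azure-mgmt-app-id>") } # cloud apps
    }
    GrantControls = @{ Operator = "OR"; BuiltInControls = @("mfa") }       # grant: require MFA
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```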
Final Answer:
Option Selected
users and groups
cloud apps
Grant
You configure Azure AD Connect for Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO) for an on-premises network.
Users report that when they attempt to access myapps.microsoft.com, they are prompted multiple times to sign in and are forced to use an account name that ends with onmicrosoft.com.
You discover that there is a UPN mismatch between Azure AD and the on-premises Active Directory.
You need to ensure that the users can use single-sign on (SSO) to access Azure resources.
What should you do first?
A. From on-premises network, deploy Active Directory Federation Services (AD FS).
B. From Azure AD, add and verify a custom domain name.
C. From on-premises network, request a new certificate that contains the Active Directory domain name.
D. From the server that runs Azure AD Connect, modify the filtering options.
The correct answer is B. From Azure AD, add and verify a custom domain name.
Here’s why:
Explanation:
The core issue described is a UPN (User Principal Name) mismatch. Azure AD Seamless SSO relies on matching the UPNs of users between on-premises Active Directory and Azure AD. When there’s a mismatch, Seamless SSO will fail, leading to users being prompted for credentials multiple times and potentially being forced to use their default onmicrosoft.com UPN.
Let’s break down why option B is the correct first step and why the others are not:
B. From Azure AD, add and verify a custom domain name.
Why it’s correct and the first step: If users are being forced to use onmicrosoft.com UPNs, it strongly indicates that a custom domain name has not been properly added and verified in Azure AD. By default, Azure AD uses the onmicrosoft.com domain. To use your organization’s domain (e.g., company.com) for user UPNs in Azure AD, you must add and verify this custom domain in your Azure AD tenant.
UPN Alignment: Adding and verifying the custom domain allows Azure AD to recognize and accept user UPNs that match your on-premises domain (e.g., user@company.com). This is essential for Seamless SSO to work correctly and for UPNs to be consistent across on-premises and cloud environments.
First Step: This is the most fundamental and logical first step to address a UPN mismatch issue. Without a verified custom domain, Azure AD won’t properly handle UPNs from your on-premises domain.
A. From on-premises network, deploy Active Directory Federation Services (AD FS).
Why it’s incorrect: Deploying AD FS is a completely different approach to Single Sign-On. AD FS is a federation-based SSO solution, while Azure AD Seamless SSO is a password hash synchronization-based solution (with Kerberos for authentication). Deploying AD FS is a significant change in SSO strategy and not a step to fix issues with Seamless SSO. It’s also overkill and not directly related to the UPN mismatch problem.
C. From on-premises network, request a new certificate that contains the Active Directory domain name.
Why it’s incorrect: Certificates are used in Azure AD Seamless SSO for Kerberos ticket decryption and security. While a certificate is necessary for Seamless SSO to function, the problem description specifically points to a UPN mismatch, not a certificate issue. A new certificate is unlikely to resolve the UPN mismatch. Certificates are related to the technical security aspects of Kerberos authentication, not UPN alignment.
D. From the server that runs Azure AD Connect, modify the filtering options.
Why it’s incorrect: Azure AD Connect filtering options control which objects (users, groups, etc.) are synchronized from on-premises Active Directory to Azure AD. While incorrect filtering could cause synchronization issues, it’s not the primary cause of a UPN mismatch leading to SSO problems. Filtering is more about which users are synced, not about the UPN attribute itself being correctly configured. Modifying filtering is a more advanced configuration and not the first step to address a fundamental UPN mismatch problem.
Steps to resolve the issue after choosing option B:
Add and Verify Custom Domain in Azure AD (Option B - Correct First Step): In the Azure portal, navigate to Azure Active Directory -> Custom domain names. Add your organization’s domain name (e.g., company.com) and follow the steps to verify it (typically by adding a DNS TXT record).
Update UPNs (If Necessary): After verifying the custom domain, ensure that the UPN suffixes in your on-premises Active Directory match the verified custom domain. If there are users with UPNs still using a different or incorrect suffix, you may need to update their UPNs in on-premises AD.
Run Azure AD Connect Synchronization: After making changes (adding custom domain, updating UPNs), ensure Azure AD Connect synchronizes these changes to Azure AD.
Test Seamless SSO: Have users try accessing myapps.microsoft.com again from domain-joined devices on the on-premises network to verify if Seamless SSO is now working correctly without multiple prompts or forced onmicrosoft.com usage.
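Step 2 can be scripted; the sketch below assumes a hypothetical old suffix of contoso.local, requires the on-premises ActiveDirectory module, and runs the sync trigger from the ADSync module on the Azure AD Connect server:

```powershell
# Rewrite UPN suffixes to the verified custom domain (illustrative filter).
Get-ADUser -Filter 'UserPrincipalName -like "*@contoso.local"' | ForEach-Object {
    $newUpn = ($_.UserPrincipalName -split "@")[0] + "@contoso.com"
    Set-ADUser -Identity $_ -UserPrincipalName $newUpn
}

# Push the change to Azure AD (run on the Azure AD Connect server).
Start-ADSyncSyncCycle -PolicyType Delta
```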
Final Answer: B
You have an Active Directory forest named contoso.com.
You install and configure Azure AD Connect to use password hash synchronization as the single sign-on (SSO) method. Staging mode is enabled.
You review the synchronization results and discover that the Synchronization Service Manager does not display any sync jobs.
You need to ensure that the synchronization completes successfully.
What should you do?
A. From Azure PowerShell, run Start-ADSyncSyncCycle -PolicyType Initial.
B. Run Azure AD Connect and set the SSO method to Pass-through Authentication.
C. From Synchronization Service Manager, run a full import.
D. Run Azure AD Connect and disable staging mode.
The correct answer is D. Run Azure AD Connect and disable staging mode.
Here’s why:
Explanation:
When Azure AD Connect is configured in staging mode, it means that the synchronization service is running and performing imports and synchronizations, but it does not export any changes to Azure AD. This mode is designed for testing and verifying the configuration before making it active in your production environment.
If you see no export jobs to Azure AD in the Synchronization Service Manager while staging mode is enabled, this is expected behavior. In staging mode, the primary purpose is to review the synchronization configuration and preview changes, not to actively push data to your production Azure AD tenant.
To make the synchronization process active and for sync jobs to run and export data to Azure AD, you need to disable staging mode.
Let’s analyze each option:
A. From Azure PowerShell, run Start-ADSyncSyncCycle -PolicyType Initial.
While this command can manually trigger a synchronization cycle, it will still operate within the constraints of staging mode. If staging mode is enabled, running this command will likely initiate a synchronization cycle, but it will still not export changes to Azure AD. Therefore, it won’t resolve the core issue of getting the synchronization to complete successfully in a production sense.
B. Run Azure AD Connect and set the SSO method to Pass-through Authentication.
Changing the SSO method to Pass-through Authentication is irrelevant to the problem of synchronization not completing successfully. The issue is that no sync jobs are being displayed, which is directly related to staging mode preventing export of changes to Azure AD. Changing the SSO method won’t enable the synchronization process in staging mode to export data.
C. From Synchronization Service Manager, run a full import.
Running a full import from the Synchronization Service Manager might initiate an import operation, but again, if staging mode is enabled, the subsequent synchronization and export steps will be suppressed. A full import alone won’t enable the synchronization to complete successfully and export data to Azure AD when staging mode is active.
D. Run Azure AD Connect and disable staging mode.
This is the correct answer. Disabling staging mode in Azure AD Connect is the necessary step to make the synchronization configuration active and allow changes to be exported to Azure AD. Once staging mode is disabled, Azure AD Connect will start performing full and delta synchronizations and will export changes to your Azure AD tenant. This will then allow you to see sync jobs running in the Synchronization Service Manager and for the synchronization to complete successfully in a production context.
Steps to resolve the issue:
Run Azure AD Connect wizard again.
Choose “Configure” from the initial tasks page.
Select “Configure staging mode” from the Additional tasks page.
Uncheck the “Enable staging mode” checkbox.
Complete the wizard.
After disabling staging mode, the synchronization service will start exporting changes to Azure AD, and you should see synchronization jobs running in the Synchronization Service Manager.
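Besides the wizard, staging mode can be inspected and disabled from the ADSync PowerShell module on the Azure AD Connect server; a minimal sketch:

```powershell
Get-ADSyncScheduler                        # shows StagingModeEnabled : True

Set-ADSyncScheduler -StagingModeEnabled $false
Start-ADSyncSyncCycle -PolicyType Initial  # trigger a full sync, now with exports
```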
Final Answer: D
You have an Azure Active Directory (Azure AD) tenant that has only the initial onmicrosoft.com domain name.
You have a domain name of contoso.com registered at a third-party registrar.
You need to ensure that you can create Azure AD users that have names containing a suffix of @contoso.com.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Add an Azure AD server
Create an Azure DNS zone
Verify the Domain
Configure company branding
Add a record to the public contoso.com DNS zone
Add a custom domain name
Explanation:
To use a custom domain name like contoso.com for Azure AD users, you need to perform the following steps in sequence:
Add a custom domain name: First, you need to inform Azure AD that you intend to use contoso.com with your Azure AD tenant. This is done by adding the custom domain name in the Azure portal or using PowerShell.
Add a record to the public contoso.com DNS zone: After you add the domain name, Azure AD provides DNS record information (typically a TXT record). You add this record to the DNS settings of contoso.com at your third-party registrar; its presence is what proves to Azure AD that you control the domain’s DNS.
Verify the Domain: With the record published, you complete verification in Azure AD, which checks the public DNS zone for the expected record. Once verification succeeds, @contoso.com becomes available as a UPN suffix.
Let’s arrange the provided actions in the correct sequence:
Step 1: Add a custom domain name
This is the first step to initiate the process within Azure AD. You tell Azure AD that you want to use contoso.com, and Azure AD responds with the DNS record it expects to find.
Step 2: Add a record to the public contoso.com DNS zone
You take the DNS record information provided in step 1 and add it to the DNS zone managed by your third-party registrar for contoso.com.
Step 3: Verify the Domain
This is the final step: Azure AD checks the public DNS zone for the record, confirming your ownership of contoso.com.
Incorrect Options and Why:
Add an Azure AD server: Azure AD is a cloud service and doesn’t involve adding servers in the context of custom domain setup.
Create an Azure DNS zone: While you could use Azure DNS to manage your domain’s DNS records, it’s not a mandatory step to add a custom domain to Azure AD. You can use any DNS provider where your domain is registered. This step is optional for DNS management, not for domain verification itself.
Configure company branding: Company branding is for customizing the Azure AD sign-in experience and is not related to adding or verifying a custom domain for user UPNs.
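The whole sequence can also be driven from PowerShell; a minimal sketch, assuming the Microsoft Graph PowerShell domain cmdlets (the TXT record itself is still created at the third-party registrar between the second and third commands):

```powershell
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# Step 1: add the custom domain to the tenant.
New-MgDomain -Id "contoso.com"

# Step 2: read the DNS records Azure AD expects, then create them at the registrar.
Get-MgDomainVerificationDnsRecord -DomainId "contoso.com" | Format-List

# Step 3: once the record is published, verify ownership.
Confirm-MgDomain -DomainId "contoso.com"
```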
Answer Area (Correct Sequence):
Add a custom domain name
Add a record to the public contoso.com DNS zone
Verify the Domain
Final Answer:
Add a custom domain name
Add a record to the public contoso.com DNS zone
Verify the Domain
You have an Azure subscription that contains 100 virtual machines.
You regularly create and delete virtual machines.
You need to identify unattached disks that can be deleted.
What should you do?
A. From Microsoft Azure Storage Explorer, view the Account Management properties.
B. From Azure Cost Management, create a Cost Management report.
C. From the Azure portal, configure the Advisor recommendations.
D. From Azure Cost Management, open the Optimizer tab and create a report.
The correct answer is C. From the Azure portal, configure the Advisor recommendations.
Here’s why:
Explanation:
Azure Advisor’s Cost Recommendations: Azure Advisor is a service in Azure that provides personalized recommendations to help you optimize your Azure resources for cost, security, reliability, performance, and operational excellence. One of its key features is to identify cost-saving opportunities.
Identifying Unattached Disks: Azure Advisor specifically includes a recommendation category related to cost optimization, and within that category, it can identify unattached disks. Advisor analyzes your Azure environment and detects disks that are not currently attached to any virtual machines. These unattached disks are still incurring storage costs, and deleting them can save money.
Configuring Advisor Recommendations: You can access Azure Advisor from the Azure portal. You don’t need to “configure” it in the sense of setting up new rules for unattached disks detection, as this is a built-in recommendation. You simply need to view the Advisor recommendations, specifically looking at the “Cost” category. Advisor will automatically list out any unattached disks it finds in your subscription.
Why other options are incorrect:
A. From Microsoft Azure Storage Explorer, view the Account Management properties. Azure Storage Explorer is a useful tool for managing storage accounts and their contents (blobs, files, disks, etc.). However, viewing “Account Management properties” in Storage Explorer will not directly provide a list of unattached disks. You would have to manually browse through disks and cross-reference them with your VM list to determine which are unattached, which is inefficient for 100 VMs. Storage Explorer is not designed for this specific discovery task in an automated way.
B. From Azure Cost Management, create a Cost Management report. Azure Cost Management is excellent for analyzing and reporting on Azure spending. You can create reports to see your storage costs, including disk storage. However, Cost Management reports themselves don’t directly identify unattached disks. You might see high disk costs, but the report won’t automatically tell you which disks are not in use. You would need to analyze cost data and correlate it with other information to infer unattached disks, which is not the most efficient approach.
D. From Azure Cost Management, open the Optimizer tab and create a report. While Azure Cost Management has an “Optimizer” section (or similar features that might be renamed or UI updated over time), and it may surface some cost-saving recommendations, it’s still generally less direct than using Azure Advisor for this specific task. The Optimizer tab is more likely to guide you toward acting on Advisor recommendations or provide a more general cost optimization overview, rather than directly and specifically listing unattached disks for deletion. Azure Advisor is the dedicated service for providing these kinds of actionable recommendations.
In summary, Azure Advisor is the most direct and efficient Azure service to identify unattached disks. By configuring (more accurately, by viewing) Advisor recommendations, specifically in the “Cost” category, you will get a list of unattached disks that you can then review and delete.
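As a complement to Advisor, unattached managed disks can also be listed directly with Az PowerShell, since a disk’s ManagedBy property is empty when no VM owns it; a minimal sketch:

```powershell
# List managed disks not attached to any VM.
Get-AzDisk | Where-Object { -not $_.ManagedBy } |
    Select-Object Name, ResourceGroupName, DiskSizeGB, TimeCreated
```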
Final Answer: C
You have an Azure subscription that contains 10 virtual machines.
You need to ensure that you receive an email message when any virtual machines are powered off, restarted, or deallocated.
What is the minimum number of rules and action groups that you require?
A. three rules and three action groups
B. one rule and one action group
C. three rules and one action group
D. one rule and three action groups
To meet the requirement of receiving email notifications when virtual machines are powered off, restarted, or deallocated, you need to configure Azure Monitor alerts. Let’s analyze the minimum number of rules and action groups required.
Alert Rules:
You need to monitor three distinct events:
Virtual machine powered off (Stopped/PowerOff): You need a rule to detect when a VM transitions to the “Powered Off” state.
Virtual machine restarted: You need a rule to detect when a VM is restarted.
Virtual machine deallocated (Stopped (deallocated)): You need a rule to detect when a VM is deallocated.
While technically you might be able to create a single complex rule that tries to capture all three states, it is cleaner, more manageable, and generally recommended to create separate alert rules for each distinct event you want to monitor. This allows for more specific configurations and easier troubleshooting.
Therefore, you will need a minimum of three alert rules, one for each of the virtual machine power state changes you want to monitor.
Action Groups:
Action groups define the actions to take when an alert is triggered. In this scenario, the desired action is to send an email message. You want to receive an email notification for any of the three VM power state changes. You don’t need separate email notifications for each event; you just need a notification when any of these events occur.
Therefore, you can use a single action group configured to send an email message. You can then associate this single action group with all three alert rules. When any of the three alert rules are triggered (VM powered off, restarted, or deallocated), the same action group will be executed, resulting in an email notification being sent.
Minimum Requirements:
Alert Rules: Three (one for each power state: powered off, restarted, deallocated)
Action Groups: One (to send the email notification for all three alert rules)
Based on this analysis, the correct option is C. three rules and one action group.
Let’s review why other options are incorrect:
A. three rules and three action groups: Using three action groups is redundant. You don’t need a separate action group for each rule if the desired action (sending an email to the same recipient list) is the same for all rules.
B. one rule and one action group: One rule is insufficient to monitor three distinct events effectively and clearly. While technically you might try to create a very complex single rule, it’s not the minimum manageable approach and is not best practice for clarity and maintainability.
D. one rule and three action groups: One rule is still insufficient, and using three action groups is still redundant. A single rule cannot clearly distinguish and monitor all three power state changes in a simple and maintainable way, and you only need one email notification mechanism.
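A sketch of the three-rules-one-action-group layout, assuming the classic Az.Monitor cmdlet set; the subscription ID, names, and email address are placeholders:

```powershell
# One email action group, reused by all three alert rules.
$receiver = New-AzActionGroupReceiver -Name "ops-email" -EmailReceiver -EmailAddress "ops@contoso.com"
$ag = Set-AzActionGroup -Name "vm-power-ag" -ResourceGroupName "RG1" -ShortName "vmpower" -Receiver $receiver
$agRef = New-AzActionGroup -ActionGroupId $ag.Id

$ops = @(
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Compute/virtualMachines/deallocate/action"
)
foreach ($op in $ops) {
    # One activity log alert rule per operation, all pointing at the same action group.
    $conditions = @(
        New-AzActivityLogAlertCondition -Field "category" -Equal "Administrative"
        New-AzActivityLogAlertCondition -Field "operationName" -Equal $op
    )
    Set-AzActivityLogAlert -Location "Global" -Name ("alert-" + ($op -split "/")[-2]) `
        -ResourceGroupName "RG1" -Scope "/subscriptions/<subscription-id>" `
        -Condition $conditions -Action $agRef
}
```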
Final Answer: C
You plan to automate the deployment of a virtual machine scale set that uses the Windows Server 2016 Datacenter image.
You need to ensure that when the scale set virtual machines are provisioned, they have web server components installed.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Upload a configuration script.
B. Create an automation account.
C. Create a new virtual machine scale set in the Azure portal.
D. Create an Azure policy.
E. Modify the extension profile section of the Azure Resource Manager template.
The correct answers are A. Upload a configuration script. and E. Modify the extension profile section of the Azure Resource Manager template.
Here’s why these options are correct and how they work together:
A. Upload a configuration script.
Purpose: You need a script (like a PowerShell script for Windows Server) that contains the commands to install the web server components (e.g., IIS - Internet Information Services). This script will be executed on each virtual machine instance in the scale set after it’s provisioned.
Content: The script would typically include PowerShell commands to (a minimal example follows this list):
Install the Web-Server role (IIS).
Optionally configure IIS further (e.g., default website settings, application pools, etc.).
Potentially perform other necessary configuration steps for your web server application.
Upload Location: You would typically upload this script to an accessible storage location, such as:
Azure Blob Storage: A common and recommended approach. You upload the script to a public or private blob container and provide the URI to the script in your ARM template.
Script in ARM Template: For simpler scripts, you can sometimes embed the script directly within the ARM template, but for more complex scripts, uploading to storage is better for management and readability.
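As referenced in the Content bullet above, a minimal install_webserver.ps1 could be a single line:

```powershell
# Installs the IIS role plus management tools on the instance.
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
```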
E. Modify the extension profile section of the Azure Resource Manager template.
Purpose: The Azure Resource Manager (ARM) template is used to define and deploy your virtual machine scale set. To automatically run your configuration script on each VM instance during provisioning, you use VM extensions. The extensionProfile section of the ARM template is where you configure these VM extensions.
Extension to Use: For running custom scripts on Windows VMs, the CustomScriptExtension is the most common and appropriate extension.
Configuration within extensionProfile: In the extensionProfile, you would define a CustomScriptExtension and configure it to:
fileUris: Point to the URI of your uploaded configuration script (from option A, like the Blob Storage URL).
commandToExecute: Specify the command to execute the script on the VM (e.g., powershell -ExecutionPolicy Unrestricted -File install_webserver.ps1).
settings and protectedSettings (optional): For passing parameters to the script or handling sensitive information securely.
How A and E Work Together:
Create the Configuration Script (Action A): You write a PowerShell script to install the web server components.
Upload the Script (Action A): You upload this script to Azure Blob Storage (or another accessible location).
Modify ARM Template (Action E): In your ARM template for the VM scale set, you add or modify the extensionProfile section.
Configure CustomScriptExtension (Action E): Within the extensionProfile, you define a CustomScriptExtension, pointing it to the script URI (fileUris) and specifying how to execute it (commandToExecute).
Deploy the ARM Template: When you deploy the ARM template, Azure will:
Provision the virtual machine scale set.
For each VM instance, Azure will download the script from the URI specified in the CustomScriptExtension.
The CustomScriptExtension will execute the script on the VM, installing the web server components.
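Equivalently to editing the template’s extensionProfile by hand, the extension can be attached with Az PowerShell, which produces the same JSON in the deployed model; a minimal sketch with hypothetical names and storage URI:

```powershell
$vmss = Get-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1"

$settings = @{
    fileUris         = @("https://mystorage.blob.core.windows.net/scripts/install_webserver.ps1")
    commandToExecute = "powershell -ExecutionPolicy Unrestricted -File install_webserver.ps1"
}

# Add the Custom Script Extension to the scale set model, then push the update.
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "InstallWebServer" `
    -Publisher "Microsoft.Compute" -Type "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" -Setting $settings

Update-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1" -VirtualMachineScaleSet $vmss
```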
Why other options are incorrect:
B. Create an automation account. Azure Automation accounts are powerful for automation tasks, but they are not the primary mechanism for directly configuring VMs during provisioning in a VM scale set. While you could use Azure Automation in a more complex scenario (e.g., triggered after VM creation), using VM extensions directly within the ARM template is the simpler and more standard approach for this requirement.
C. Create a new virtual machine scale set in the Azure portal. Creating a VM scale set itself doesn’t install web server components. The Azure portal is an interface for deployment, but you still need a mechanism to configure the VMs during deployment, which is achieved through extensions and scripts. The portal would be used to deploy the ARM template (which includes the extension profile).
D. Create an Azure policy. Azure Policy is used to enforce configurations and compliance after VMs are deployed. It can audit or remediate configuration drift. Policies are not designed to initiate the installation of software during VM provisioning. Policies ensure ongoing compliance but don’t handle the initial setup in this scenario.
Final Answer:
Option Selected
Upload a configuration script.
Modify the extension profile section of the Azure Resource Manager template.
An app uses a virtual network with two subnets. One subnet is used for the application server. The other subnet is used for a database server. A network virtual appliance (NVA) is used as a firewall.
Traffic destined for one specific address prefix is routed to the NVA and then to an on-premises database server that stores sensitive data. A Border Gateway Protocol (BGP) route is used for the traffic to the on-premises database server.
You need to recommend a method for creating the user-defined route.
Which two options should you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. For the virtual network configuration, use a VPN.
B. For the next hop type, use virtual network peering.
C. For the virtual network configuration, use Azure ExpressRoute.
D. For the next hop type, use a virtual network gateway.
Let’s break down the requirements and analyze each option in the context of creating a user-defined route (UDR) for traffic destined to an on-premises database via an NVA.
Understanding the Scenario:
Traffic from a subnet in Azure needs to be routed to a specific address prefix (on-premises database network).
The traffic must pass through a Network Virtual Appliance (NVA) acting as a firewall within the Azure VNet.
BGP routing is used for traffic after the NVA to reach the on-premises database server. This implies a connection between Azure and the on-premises network that supports BGP, such as VPN or ExpressRoute.
Analyzing Each Option:
A. For the virtual network configuration, use a VPN.
Correct. A VPN (Site-to-Site VPN) is a common method to establish a secure connection between an Azure virtual network and an on-premises network. While a VPN gateway is the specific component, using “VPN for the virtual network configuration” broadly implies setting up VPN-based hybrid connectivity. In this scenario, the BGP route mentioned likely refers to BGP being used over a VPN or ExpressRoute connection to exchange routes with the on-premises network. Therefore, using VPN for the virtual network configuration is a valid part of a complete solution for connecting to on-premises.
B. For the next hop type, use virtual network peering.
Incorrect. Virtual network peering is used to connect two Azure virtual networks directly. It’s not relevant for routing traffic from a subnet to an NVA within the same virtual network to reach an on-premises network. Peering is for VNet-to-VNet connectivity, not for routing to an NVA for on-premises access.
C. For the virtual network configuration, use Azure ExpressRoute.
Correct. Azure ExpressRoute provides a dedicated, private, and often higher-bandwidth connection between Azure and an on-premises network. Similar to VPN, ExpressRoute is a method for establishing hybrid connectivity. Using ExpressRoute for the virtual network configuration is also a valid part of a complete solution for connecting to on-premises, especially when dealing with sensitive data and potentially higher bandwidth requirements. ExpressRoute also supports BGP for route exchange.
D. For the next hop type, use a virtual network gateway.
Incorrect. While a virtual network gateway (VPN gateway or ExpressRoute gateway) is involved in connecting to on-premises via VPN or ExpressRoute, it is not the correct “next hop type” for a UDR when you want to route traffic to an NVA within the same VNet. For routing traffic to an NVA, the correct “next hop type” in a UDR is typically “Virtual appliance”. Using “Virtual network gateway” as the next hop type in a UDR would typically be used to route traffic directly out of the VNet to the VPN or ExpressRoute connection, bypassing the NVA. In this scenario, the traffic needs to go through the NVA first.
Rationale for choosing A and C:
Options A and C represent the two primary methods for establishing a hybrid network connection between Azure and on-premises, which is essential for the scenario described. The mention of a BGP route further reinforces that a VPN or ExpressRoute connection to on-premises is likely part of the intended architecture. While option D mentions a virtual network gateway, it’s not the correct next hop type for routing traffic to an NVA within the VNet. Option B is irrelevant for routing to on-premises via an NVA.
Therefore, Options A and C provide the necessary virtual network configurations for establishing the hybrid connectivity required to reach the on-premises database server, making them the most suitable recommendations for a “complete solution” in this context.
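To make the “Virtual appliance” next-hop point concrete, here is a minimal Az PowerShell sketch of such a UDR; all names, prefixes, and the NVA’s private IP are hypothetical:

```powershell
# Route the on-premises database prefix through the NVA's private IP.
$route = New-AzRouteConfig -Name "to-onprem-db" -AddressPrefix "192.168.10.0/24" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.1.1.4"

$rt = New-AzRouteTable -Name "app-rt" -ResourceGroupName "RG1" -Location "eastus" -Route $route

# Attach the route table to the application subnet.
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "AppSubnet" `
    -AddressPrefix "10.1.0.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork
```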
Final Answer: A and C
You manage a solution in Azure that consists of a single application that runs on a virtual machine (VM). Traffic to the application has increased dramatically.
The application must not experience any downtime and scaling must be dynamically defined.
You need to define an auto-scale strategy to ensure that the VM can handle the workload.
Which three options should you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Deploy application automatic vertical scaling.
B. Create a VM availability set.
C. Create a VM scale set.
D. Deploy application automatic horizontal scaling.
E. Deploy a custom auto-scale implementation.
Let’s analyze each option based on the requirements of no downtime and dynamically defined scaling for an application running on a single VM experiencing increased traffic.
A. Deploy application automatic vertical scaling.
Correctness: Vertical scaling involves increasing the resources (CPU, memory, etc.) of the existing virtual machine. Automatic vertical scaling would mean this process happens automatically in response to load changes.
Downtime: Vertical scaling typically requires a reboot of the virtual machine to apply the changes, resulting in application downtime. While some newer Azure VM sizes support resizing without downtime, this is not universally guaranteed and is more complex than horizontal scaling for zero-downtime scaling.
Dynamically Defined Scaling: Yes, vertical scaling can be dynamically defined based on metrics such as CPU usage or memory consumption.
Completeness as a Solution: While it can scale dynamically, the potential for downtime during vertical scaling makes it less suitable for applications requiring continuous availability. Therefore, it is NOT one of the correct answers once the “no downtime” requirement is considered.
B. Create a VM availability set.
Correctness: Availability sets are used to improve the availability and fault tolerance of virtual machines by distributing them across multiple fault domains and update domains within an Azure datacenter.
Downtime: Availability sets help protect against planned and unplanned downtime events within a datacenter, enhancing the application’s resilience. However, availability sets themselves do not provide scaling capabilities.
Dynamically Defined Scaling: No, availability sets do not offer autoscaling functionality. They are focused on high availability, not scalability to handle increased traffic.
Completeness as a Solution: Availability sets address availability, but not the scaling requirement to handle dramatic traffic increases. Therefore, it is NOT a correct answer to the question about scaling strategy.
C. Create a VM scale set.
Correctness: VM scale sets are designed to deploy and manage a set of identical, auto-scaling virtual machines. They are ideal for horizontally scaling applications.
Downtime: VM scale sets are designed to minimize downtime during scaling operations. Scaling out (adding more instances) is generally a zero-downtime operation, especially when used with a load balancer and health probes.
Dynamically Defined Scaling: Yes, VM scale sets have built-in autoscaling capabilities. You can define rules based on various metrics (CPU utilization, memory, custom metrics, etc.) to automatically adjust the number of VM instances based on workload demands.
Completeness as a Solution: VM scale sets are a highly effective and complete solution for achieving both no downtime and dynamically defined scaling for VM-based applications. This is a very strong candidate for a correct answer.
D. Deploy application automatic horizontal scaling.
Correctness: Horizontal scaling (scaling out) involves adding more instances of the application to handle increased load. Automatic horizontal scaling means this addition of instances happens automatically based on demand.
Downtime: Horizontal scaling inherently aims to avoid downtime during scale-out. By distributing load across multiple instances, you can accommodate increased traffic without interrupting service to existing users.
Dynamically Defined Scaling: Yes, automatic horizontal scaling is by definition dynamic.
Completeness as a Solution: Deploying automatic horizontal scaling is conceptually the correct approach to meet the requirements. In Azure, VM scale sets are the primary and recommended way to implement automatic horizontal scaling for VM-based applications. This is also a very strong candidate for a correct answer.
E. Deploy a custom auto-scale implementation.
Correctness: A custom auto-scale implementation means designing and building your own scaling solution, potentially using Azure APIs, monitoring data, and custom logic.
Downtime: The potential for downtime depends entirely on how well the custom implementation is designed and executed. If done carefully, it could achieve zero-downtime scaling.
Dynamically Defined Scaling: Yes, a custom implementation can be designed to provide fully dynamic scaling, potentially based on very specific application metrics.
Completeness as a Solution: A custom auto-scale implementation can be a complete solution, but it is more complex, requires more development effort, and is harder to maintain than a managed service like VM scale sets. It is a valid choice when very specific requirements are not easily met by VM scale sets, and it still technically qualifies as a “complete solution”.
Choosing the three best options:
To pick the three options that represent complete solutions, prioritize those that meet both the “no downtime” and “dynamically defined scaling” requirements and are practical, recommended Azure approaches.
C. Create a VM scale set and D. Deploy application automatic horizontal scaling are the most direct and best-practice Azure solutions for horizontal autoscaling of VM-based applications to meet these requirements. They are tightly related - VM scale sets are Azure’s managed service for horizontal autoscaling.
E. Deploy a custom auto-scale implementation is also technically a “complete solution” as it can achieve the goals, although it is less managed and more complex. It’s a valid option if there are very specific, unique requirements not easily met by VM scale sets, but for general autoscaling, VM scale sets are preferred.
A. Deploy application automatic vertical scaling is less suitable due to potential downtime and is generally not the best approach for handling dramatic traffic increases in web applications where horizontal scaling is more scalable and resilient.
B. Create a VM availability set is not a scaling solution at all.
Therefore, the three options that represent complete solutions for dynamic scaling with minimal downtime are C, D, and E.
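A sketch of a CPU-based scale-out rule on a scale set, assuming the classic Az.Monitor autoscale cmdlets (resource IDs and names are placeholders):

```powershell
$vmssId = "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1"

# Add one instance whenever average CPU exceeds 75% for the evaluation window.
$rule = New-AzAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId $vmssId `
    -TimeGrain ([TimeSpan]::FromMinutes(1)) -MetricStatistic Average `
    -Operator GreaterThan -Threshold 75 `
    -ScaleActionDirection Increase -ScaleActionValue 1 `
    -ScaleActionCooldown ([TimeSpan]::FromMinutes(5))

$asProfile = New-AzAutoscaleProfile -Name "cpu-profile" -DefaultCapacity 2 `
    -MinimumCapacity 2 -MaximumCapacity 10 -Rule $rule

Add-AzAutoscaleSetting -Name "vmss1-autoscale" -ResourceGroupName "RG1" -Location "eastus" `
    -TargetResourceId $vmssId -AutoscaleProfile $asProfile
```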
Final Answer: C, D, and E
You develop a web app that uses the D1 tier App Service plan by using the Web Apps feature of Microsoft Azure App Service.
Spikes in traffic have caused increases in page load times.
You need to ensure that the web app automatically scales when CPU load is about 85 percent and minimize costs.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Configure the web App to the premium App Service tier
Configure a scale condition
Configure the web app to the standard App service tier
Enable autoscaling on the web app
Add a Scale rule
Switch to an Azure App Services consumption plan
To implement autoscaling for an Azure Web App based on CPU load and minimize costs, you should perform the following actions in sequence:
Configure the web app to the standard App service tier:
Reason: The D1 App Service plan is a Dev/Test tier and does not support autoscaling. To enable autoscaling, you must upgrade to a tier that supports it. The Standard App Service tier is a suitable and cost-effective option that supports autoscaling. While Premium tiers also support autoscaling, Standard tier is generally less expensive and sufficient for many production workloads where cost minimization is a priority.
Enable autoscaling on the web app:
Reason: After moving to a suitable App Service tier, you need to explicitly enable the autoscaling feature for your web app. This activates the autoscaling engine for the app.
Configure a scale condition:
Reason: You need to define the trigger for autoscaling. In this case, the requirement is to scale when the “CPU load is about 85 percent”. You need to configure a scale condition based on the CpuPercentage metric and set a threshold of 85%.
Add a Scale rule:
Reason: Once the scale condition is defined, you need to specify the action to take when the condition is met. This is done by adding a scale rule. The scale rule will define how to scale (e.g., increase instance count by a certain number) when the CPU load reaches 85%. You can also configure scale-in rules to reduce instances when load decreases, further optimizing costs.
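Step 1 is a one-liner in Az PowerShell (the plan name is hypothetical); steps 2 through 4 then use the same Azure Monitor autoscale mechanism sketched in the previous question, with CpuPercentage greater than 85 as the condition:

```powershell
# Move the plan from D1 (Shared) to Standard S1, the cheapest tier with autoscale support.
Set-AzAppServicePlan -ResourceGroupName "RG1" -Name "plan1" -Tier "Standard" -WorkerSize "Small"
```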
Why other options are not in the correct sequence or not suitable:
Configure the web App to the premium App Service tier: While Premium tier also supports autoscaling, it’s generally more expensive than Standard. For cost minimization, starting with Standard tier is more appropriate. Premium might be considered later if Standard proves insufficient for performance or features, but not as the first step for cost-conscious scaling.
Switch to an Azure App Services consumption plan: Consumption plan is a serverless hosting option where you pay per-second consumption. While it autoscales automatically, it is a different hosting model than the dedicated instance-based App Service plans (like Standard and Premium). Switching to Consumption plan is a significant architectural change and might not be desired or suitable if the application was initially designed for a dedicated App Service plan. It might also lead to less predictable costs if traffic spikes are very high.
Add an Azure DNS zone: Azure DNS zones are for managing DNS records and are not related to App Service autoscaling.
Correct Sequence of Actions (Answer Area):
Configure the web app to the Standard App Service tier
Enable autoscaling on the web app
Configure a scale condition
Add a scale rule
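To tie the four steps together, here is a hedged sketch of what the resulting autoscale configuration could look like, written as a Python dict that mirrors the Microsoft.Insights/autoscaleSettings ARM schema. The resource URI, names, and instance limits are illustrative placeholders, not values given in the question:

```python
# Illustrative only: names, limits, and the resource URI are placeholders.
autoscale_setting = {
    "name": "webapp-autoscale",
    "properties": {
        "enabled": True,  # step 2: enable autoscaling on the web app
        "targetResourceUri": "<app-service-plan-resource-id>",
        "profiles": [{
            "name": "cpu-scale-condition",  # step 3: the scale condition
            "capacity": {"minimum": "1", "maximum": "3", "default": "1"},
            "rules": [{  # step 4: the scale rule
                "metricTrigger": {
                    "metricName": "CpuPercentage",
                    "metricResourceUri": "<app-service-plan-resource-id>",
                    "timeGrain": "PT1M",
                    "statistic": "Average",
                    "timeWindow": "PT5M",
                    "timeAggregation": "Average",
                    "operator": "GreaterThan",
                    "threshold": 85,  # scale when CPU load is about 85 percent
                },
                "scaleAction": {
                    "direction": "Increase",
                    "type": "ChangeCount",
                    "value": "1",
                    "cooldown": "PT5M",
                },
            }],
        }],
    },
}
```

Step 1 (moving to the Standard tier) has no representation in this fragment; it is a prerequisite, since autoscale settings cannot target a D1 plan.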
You are implementing authentication for applications in your company. You plan to implement self-service password reset (SSPR) and multifactor authentication
(MFA) in Azure Active Directory (Azure AD).
You need to select authentication mechanisms that can be used for both MFA and SSPR.
Which two authentication methods should you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Short Message Service (SMS) messages
B. Azure AD passwords/Authentication App
C. Email addresses
D. Security questions
E. App passwords
The correct answers are A. Short Message Service (SMS) messages and B. Azure AD passwords/Authentication App.
Here’s why:
A. Short Message Service (SMS) messages
MFA: SMS is a common and widely supported method for multi-factor authentication. Azure AD can send a verification code via SMS to a user’s phone, which they must enter to complete the MFA process.
SSPR: SMS is also a standard method for self-service password reset. Users can choose to receive a verification code via SMS to their registered phone number as part of the password reset process.
B. Azure AD passwords/Authentication App (Interpreted as Authentication App - e.g., Microsoft Authenticator)
MFA: Authentication apps (like Microsoft Authenticator, Google Authenticator, etc.) are a strong and recommended method for MFA. They can provide push notifications or generate Time-based One-Time Passcodes (TOTP) that users use for verification.
SSPR: Authentication apps are also supported for self-service password reset. Users can use push notifications or TOTP codes from their authenticator app to verify their identity and reset their password. The phrase “Azure AD passwords” in this option is a bit misleading. It likely refers to using the Authentication App method, not the password itself as an MFA or SSPR mechanism.
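For background on how authenticator apps generate these codes, here is a minimal TOTP sketch following RFC 6238 (HMAC-SHA-1, 30-second time step, 6 digits). The base32 secret below is a common documentation example, not anything issued by Azure AD:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time passcode for a base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)  # time-step counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code
```

Both the app and the server derive the same code from the shared secret and the current time, which is why no network round trip is needed at verification time.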
Why other options are incorrect:
C. Email addresses
While email addresses can be used for SSPR, it is not a recommended method for MFA due to security concerns. Email accounts can be compromised, making it a weaker second factor. While Azure AD technically allows email as an MFA method in some configurations, it’s generally discouraged for security best practices. Since the question asks for methods usable for both MFA and SSPR, and email is weak for MFA, it’s not the best choice.
D. Security questions
Security questions are strongly discouraged for both MFA and SSPR. They are inherently insecure as answers are often easily guessable or publicly available. Microsoft is actively moving away from security questions as an authentication method due to security vulnerabilities.
E. App passwords
App passwords are not an authentication method for MFA or SSPR. App passwords are used as a workaround for legacy applications that do not support modern authentication (like MFA). They are generated for specific applications to bypass MFA requirements for those apps, not as an MFA or SSPR method themselves.
Therefore, the two authentication methods that are genuinely and commonly used for both MFA and SSPR in Azure AD are SMS messages and Authentication Apps.
Final Answer: The final answer is A and B.
HOTSPOT
You create a virtual machine scale set named Scale1. Scale1 is configured as shown in the following exhibit.
Exhibit, Create a virtual machine scale set (Scaling tab):
Initial instance count: 4
Scaling policy: Custom
Minimum number of VMs: 2
Maximum number of VMs: 20
Scale out: CPU threshold 80 percent, duration 5 minutes, increase by 2 VMs
Scale in: CPU threshold 30 percent, decrease by 4 VMs
Collect diagnostic logs from Autoscale: Disabled
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
If Scale1 is utilized at 85 percent for six minutes after it is deployed, Scale1 will be running [answer choice].
2 virtual machines
4 virtual machines
6 virtual machines
10 virtual machines
20 virtual machines
If Scale1 is first utilized at 25 percent for six minutes after it is deployed, and then utilized at
50 percent for six minutes, Scale1 will be running [answer choice].
2 virtual machines
4 virtual machines
6 virtual machines
8 virtual machines
10 virtual machines
Step 1: Analyze Scenario 1
Initial State: The scale set starts with 4 virtual machines as defined by the “Initial instance count”.
CPU Utilization: The utilization is at 85 percent for six minutes.
Scale-Out Threshold: The scale-out threshold is set at 80 percent.
Scale-Out Duration: The duration to trigger a scale-out is 5 minutes.
Scale-Out Action: Increase the number of VMs by 2.
Since the CPU utilization (85%) is above the scale-out threshold (80%) and the duration (6 minutes) is longer than the required duration (5 minutes), a scale-out event will be triggered.
Calculation: Initial VMs (4) + VMs to increase by (2) = 6 virtual machines
Step 2: Analyze Scenario 2
Initial State: The scale set starts with 4 virtual machines.
First Utilization Period: 25 percent CPU for six minutes.
Second Utilization Period: 50 percent CPU for six minutes.
Scale-In Threshold: The scale-in threshold is set at 30 percent.
Scale-Out Threshold: The scale-out threshold is set at 80 percent.
First Utilization Period (25% CPU):
25% CPU is below the scale-in threshold (30%), and the six-minute duration exceeds the evaluation window.
A scale-in event is triggered: the rule removes 4 VMs, but the instance count cannot drop below the configured minimum of 2.
The VM count falls from 4 to 2.
Second Utilization Period (50% CPU):
50% CPU is above the scale-in threshold (30%) and below the scale-out threshold (80%), so no scaling action is triggered.
The VM count remains at 2.
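As a sanity check, here is a minimal Python sketch of the logic applied above. It simulates only the thresholds and the min/max bounds from the exhibit, not the real Azure autoscale engine:

```python
# Rules from the exhibit: scale out +2 above 80% CPU, scale in -4 below 30% CPU,
# with the instance count clamped between the minimum (2) and maximum (20).
MIN_VMS, MAX_VMS = 2, 20

def apply_autoscale(instances: int, avg_cpu_percent: float) -> int:
    """Instance count after one sustained (longer than 5 minutes) evaluation window."""
    if avg_cpu_percent > 80:
        instances += 2   # scale out
    elif avg_cpu_percent < 30:
        instances -= 4   # scale in
    return max(MIN_VMS, min(MAX_VMS, instances))

print(apply_autoscale(4, 85))                       # scenario 1: 4 -> 6
print(apply_autoscale(apply_autoscale(4, 25), 50))  # scenario 2: 4 -> 2 -> 2
```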
Step 3: Select the Answers
Based on the analysis:
Scenario 1 (85 percent for six minutes): Scale1 will be running 6 virtual machines.
Scenario 2 (25 percent, then 50 percent, for six minutes each): Scale1 will be running 2 virtual machines.
Final Answer:
If Scale1 is utilized at 85 percent for six minutes after it is deployed, Scale1 will be running 6 virtual machines.
If Scale1 is first utilized at 25 percent for six minutes after it is deployed, and then utilized at 50 percent for six minutes, Scale1 will be running 2 virtual machines.
You plan to automate the deployment of a virtual machine scale set that uses the Windows Server 2016 Datacenter image.
You need to ensure that when the scale set virtual machines are provisioned, they have web server components installed.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Upload a configuration script.
B. Create an Azure policy.
C. Modify the extensionProfile section of the Azure Resource Manager template.
D. Create a new virtual machine scale set in the Azure portal.
E. Create an automation account.
To automate the deployment of a virtual machine scale set with web server components installed on each VM instance, you need a mechanism to execute a configuration script during the VM provisioning process. Let’s analyze each option:
A. Upload a configuration script.
Correct. You absolutely need a configuration script. This script will contain the commands necessary to install the web server components (like IIS on Windows Server). This script will be executed on each VM instance as it is provisioned. The script could be a PowerShell script for Windows Server.
B. Create an Azure policy.
Incorrect. Azure Policy is primarily used for governance, compliance, and enforcing standards after resources are deployed. While you could potentially use Azure Policy to audit or remediate VMs that don’t have web server components installed after they are running, it is not the mechanism to initiate the installation of web server components during the VM scale set provisioning process. Policy is reactive, not proactive in this initial setup context.
C. Modify the extensionProfile section of the Azure Resource Manager template.
Correct. The extensionProfile section within an Azure Resource Manager (ARM) template is specifically designed to configure virtual machine extensions. VM extensions are the standard way to run post-deployment configuration tasks on Azure VMs and VM scale sets. You can use the CustomScriptExtension within the extensionProfile to execute a script (like the one uploaded in option A) on each VM instance during provisioning. This is the recommended method for automating software installation during VM scale set deployment.
D. Create a new virtual machine scale set in the Azure portal.
Incorrect. Creating a virtual machine scale set in the Azure portal is the action of deploying the scale set itself. Simply creating the scale set does not install web server components; the portal is just the interface for deployment. You need to configure the deployment to include the installation, which is done through the extension mechanism.
E. Create an automation account.
Incorrect. An Azure Automation account supports runbooks and configuration management for resources that are already running. It is not the mechanism for installing components while scale set instances are being provisioned, and it would add unnecessary moving parts for this requirement.
Explanation of why A and C are the correct pair:
Upload a configuration script (A): You need a script that actually performs the web server component installation. This script will contain the necessary commands for Windows Server 2016 (e.g., PowerShell commands to install the Web-Server role). You will need to store this script in an accessible location, such as Azure Blob Storage, so that the VM instances can download and execute it.
Modify the extensionProfile section of the Azure Resource Manager template (C): You will use an ARM template to define your virtual machine scale set deployment. Within the extensionProfile of the ARM template, you will configure a CustomScriptExtension. This extension will:
Point to the location of your configuration script (uploaded in step A).
Specify the command to execute the script on each VM instance as part of the provisioning process.
By combining these two actions, you ensure that when the VM scale set is deployed using the ARM template, each VM instance will automatically download and execute your configuration script, thus installing the web server components during provisioning.
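Before the final answer, here is a hedged sketch of the extensionProfile fragment such an ARM template might contain, written as a Python dict for readability. The extension name, storage account URL, and script file name are hypothetical placeholders:

```python
# Illustrative only: the extension name, storage URL, and script name below
# are hypothetical placeholders, not values given in the question.
import json

extension_profile = {
    "extensionProfile": {
        "extensions": [{
            "name": "installWebServer",
            "properties": {
                "publisher": "Microsoft.Compute",
                "type": "CustomScriptExtension",
                "typeHandlerVersion": "1.10",
                "autoUpgradeMinorVersion": True,
                "settings": {
                    "fileUris": [
                        "https://<storage-account>.blob.core.windows.net/scripts/install-iis.ps1"
                    ],
                    "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install-iis.ps1",
                },
            },
        }]
    }
}

print(json.dumps(extension_profile, indent=2))  # the JSON that would sit in the template
```

The referenced script itself could be as small as a single line, for example Install-WindowsFeature -Name Web-Server, which installs IIS on Windows Server 2016.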
Final Answer: The final answer is
Upload a configuration script, and modify the extensionProfile section of the Azure Resource Manager template.
HOTSPOT
You have several Azure virtual machines on a virtual network named VNet1. VNet1 has two subnets with the 10.2.0.0/24 and 10.2.9.0/24 address spaces.
You configure an Azure Storage account as shown in the following exhibit.
Exhibit, contoso20 | Networking (storage account):
Firewalls and virtual networks: Selected networks
Virtual networks allowed:
VNET1, Subnet 1, address range 10.2.0.0/24, endpoint status Enabled, resource group RG1
VNET1, Prod subnet (address range not shown), resource group RG1
Network routing preference: Microsoft network routing
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
The virtual machines on the 10.2.9.0/24 subnet will have
network connectivity to the file shares in the storage account
Azure Backup will be able to back up the unmanaged hard
disks of the virtual machines in the storage account
always
during a backup
never
always
during a backup
never
Statement 1: The virtual machines on the 10.2.9.0/24 subnet will have [answer choice] network connectivity to the file shares in the storage account.
Analysis: The Storage account’s “Firewalls and virtual networks” setting is configured to “Selected networks”.
Under “Virtual networks”, only VNET1 Subnet 1 with address range 10.2.0.0/24 is listed and enabled.
The VMs in question are on the 10.2.9.0/24 subnet.
Since the 10.2.9.0/24 subnet is not explicitly listed as an allowed network in the Storage account’s firewall settings, traffic from VMs in this subnet will be blocked by the storage account firewall.
Answer: never
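To make the firewall comparison concrete, here is a quick check with Python's built-in ipaddress module, using the two ranges from the exhibit and the question:

```python
import ipaddress

allowed = ipaddress.ip_network("10.2.0.0/24")    # subnet permitted by the firewall
vm_subnet = ipaddress.ip_network("10.2.9.0/24")  # subnet hosting the VMs

# subnet_of() is True only if vm_subnet falls entirely inside the allowed range.
print(vm_subnet.subnet_of(allowed))  # False -> the firewall blocks this subnet
print(vm_subnet.overlaps(allowed))   # False -> the ranges share no addresses at all
```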
Statement 2: Azure Backup will be able to back up the unmanaged hard disks of the virtual machines in the storage account [answer choice].
Analysis: Backing up unmanaged disks means the Azure Backup service must be able to read the VHDs stored in this storage account.
The storage account firewall is set to "Selected networks" and only permits traffic from subnet 10.2.0.0/24.
For Azure Backup to work with a firewalled storage account that holds unmanaged disks, an exception allowing trusted Microsoft services must be granted on the storage account. The exhibit does not show that exception enabled; only the single subnet is allowed.
Reasoning: Because the firewall blocks the Backup service's access to the storage account, backups of the unmanaged disks will fail regardless of when they are attempted.
Answer: never
Final Answer:
The virtual machines on the 10.2.9.0/24 subnet will never have network connectivity to the file shares in the storage account.
Azure Backup will never be able to back up the unmanaged hard disks of the virtual machines in the storage account.
DRAG DROP
You have virtual machines (VMs) that run a mission-critical application.
You need to ensure that the VMs never experience down time.
What should you recommend? To answer, drag the appropriate solutions to the correct scenarios. Each solution may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point
Solutions
Fault Domain
Availability Zone
Availability Set
Scale Sets
Scenario
Maintain application performance across identical VMs: Solution
Maintain application availability when an Azure datacenter fails: Solution
Maintain application performance across different VMs: Solution
Scenario 1: Maintain application performance across identical VMs:
Solution: Scale Sets
Explanation: Virtual Machine Scale Sets are designed to deploy and manage a set of identical, auto-scaling virtual machines. They are ideal for distributing application load across multiple identical VMs to maintain performance and handle increased traffic. A load balancer is typically used in front of a scale set to distribute traffic evenly across the instances.
Scenario 2: Maintain application availability when an Azure datacenter fails:
Solution: Availability Zone
Explanation: Availability Zones are physically separate datacenters within an Azure region. By deploying VMs across Availability Zones, you ensure that if one datacenter (zone) fails, your application remains available in the other zones. This provides the highest level of availability and resilience against datacenter-level failures.
Scenario 3: Maintain application performance across different VMs:
Solution: Availability Set
Explanation: Availability Sets improve the availability of VMs within a single datacenter by distributing them across fault domains (isolated power and network) and update domains (isolated planned maintenance). Although their primary purpose is availability, this distribution also helps maintain performance across VMs that are not necessarily identical. Scale Sets are the better fit for identical VMs, while Availability Sets are the more flexible choice when different VMs need to stay available and performant.
Incorrect Solutions and Why:
Fault Domain: Fault Domain is a component of Availability Sets and Availability Zones. It’s not a standalone solution but a concept describing how VMs are isolated within a datacenter. You don’t directly deploy a “Fault Domain”.
Final Answer:
Scenario 1: Maintain application performance across identical VMs: Scale Sets
Scenario 2: Maintain application availability when an Azure datacenter fails: Availability Zone
Scenario 3: Maintain application performance across different VMs: Availability Set
Your company has an office in Seattle.
You have an Azure subscription that contains a virtual network named VNET1.
You create a site-to-site VPN between the Seattle office and VNET1.
VNET1 contains the subnets shown in the following table.
Name            IP address space
Subnet1         10.1.1.0/24
GatewaySubnet   10.1.200.0/28
You need to route all Internet-bound traffic from Subnet1 to the Seattle office.
What should you create?
a route for GatewaySubnet that uses the virtual network gateway as the next hop
a route for Subnet1 that uses the local network gateway as the next hop
a route for Subnet1 that uses the virtual network gateway as the next hop
a route for GatewaySubnet that uses the local network gateway as the next hop
To route all Internet-bound traffic from Subnet1 to the Seattle office via a site-to-site VPN, you need to implement forced tunneling. Forced tunneling in Azure involves creating a user-defined route (UDR) that directs all internet-bound traffic (0.0.0.0/0) from a subnet to your on-premises network through the VPN tunnel.
Let’s analyze each option:
a route for GatewaySubnet that uses the virtual network gateway as the next hop: This is incorrect. GatewaySubnet is reserved for the Azure VPN gateway. Routes applied to GatewaySubnet typically affect traffic to or through the gateway itself, not traffic originating from other subnets and destined for the internet.
a route for Subnet1 that uses the local network gateway as the next hop: This is incorrect. The Local Network Gateway in Azure represents your on-premises VPN device. It is the destination endpoint of the VPN tunnel in Azure’s configuration. Using it as the next hop in a route for Subnet1 is conceptually wrong for routing traffic through the VPN from Azure towards on-premises.
a route for Subnet1 that uses the virtual network gateway as the next hop: This is correct. To implement forced tunneling, you need to create a route for Subnet1 (where your VMs are located) that directs all internet-bound traffic to the Virtual Network Gateway. The Virtual Network Gateway is the Azure-side VPN gateway resource. By setting the next hop to the Virtual Network Gateway for the 0.0.0.0/0 destination prefix, you are instructing Azure to send all traffic destined for the internet (and not already within the VNet or on-premises networks known by BGP or static routes) through the VPN tunnel. This traffic will then be routed to your Seattle office via the site-to-site VPN connection.
a route for GatewaySubnet that uses the local network gateway as the next hop: This is incorrect for the same reasons as the first option. Routes on GatewaySubnet are not used to control internet-bound traffic from Subnet1.
To implement forced tunneling for internet-bound traffic from Subnet1 to the Seattle office, you should:
Create a Route Table.
Add a Route to the Route Table with the following properties:
Destination prefix: 0.0.0.0/0 (This represents all internet destinations)
Next hop type: Virtual network gateway
Associate the Route Table to Subnet1.
This configuration will ensure that any traffic originating from VMs in Subnet1 that is destined for the internet will be routed through the Azure Virtual Network Gateway and then over the site-to-site VPN connection to your Seattle office.
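As an illustration of those three steps, here is a hedged sketch using the azure-mgmt-network Python SDK; the resource group, names, location, and subscription ID are placeholders, not values from the question:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a route table with a default route (0.0.0.0/0) whose next hop is the
# virtual network gateway, which is the forced tunneling pattern described above.
poller = client.route_tables.begin_create_or_update(
    "RG1",                  # hypothetical resource group
    "rt-forced-tunneling",  # hypothetical route table name
    {
        "location": "westus2",
        "routes": [{
            "name": "internet-to-seattle",
            "address_prefix": "0.0.0.0/0",
            "next_hop_type": "VirtualNetworkGateway",
        }],
    },
)
route_table = poller.result()
# The route table must then be associated with Subnet1 (by updating the
# subnet's routeTable property) for the forced tunneling to take effect.
```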
Final Answer: The final answer is
a route for Subnet1 that uses the virtual network gateway as the next hop
You create an Azure virtual machine named VM1 in a resource group named RG1.
You discover that VM1 performs slower than expected.
You need to capture a network trace on VM1.
What should you do?
From the VM1 blade, configure Connection troubleshoot.
From Diagnostic settings for VM1, configure the performance counters to include network counters.
From the VM1 blade, install performance diagnostics and run advanced performance analysis.
From Diagnostic settings for VM1, configure the log level of the diagnostic agent.
Let’s analyze each option to determine the best way to capture a network trace on an Azure VM (VM1) that is performing slower than expected.
Option 1: From the VM1 blade, configure Connection troubleshoot.
Explanation: Azure Network Watcher’s Connection troubleshoot tool is designed to test and diagnose connectivity issues between two endpoints, such as Azure VMs, internet endpoints, or on-premises resources. It checks reachability, latency, and hop-by-hop routes.
Relevance to Network Trace: While Connection troubleshoot is useful for identifying connectivity problems, it does not capture a detailed network trace (like a .pcap file). It provides insights into connectivity paths and potential bottlenecks, but not packet-level information. Therefore, this is not the correct option for capturing a network trace.
Option 2: From Diagnostic settings for VM1, configure the performance counters to include network counters.
Explanation: Azure Monitor Diagnostic settings allow you to collect performance metrics and logs from Azure resources, including VMs. You can configure which performance counters to collect, including network-related counters (e.g., Network Interface Bytes Received/sec, Network Interface Bytes Sent/sec).
Relevance to Network Trace: Configuring performance counters will collect aggregated network performance metrics, which can be helpful for identifying trends and overall network utilization. However, performance counters are not a network trace. They do not capture the actual packets transmitted over the network. This option provides performance data, but not a detailed packet capture.
Option 3: From the VM1 blade, install performance diagnostics and run advanced performance analysis.
Explanation: Azure Performance Diagnostics (PerfInsights) is a VM extension, installable directly from the VM blade, that collects performance data, configuration information, and logs to identify bottlenecks.
Relevance to Network Trace: The advanced performance analysis scenario can include a network trace among the diagnostics it collects. This makes it the only option in the list that actually captures network traffic on the VM, rather than just metrics or logs about it.
Option 4: From Diagnostic settings for VM1, configure the log level of the diagnostic agent.
Explanation: Diagnostic settings also allow you to configure the log level for the Azure Diagnostics extension. This controls the verbosity of logs collected from the VM’s operating system and applications (e.g., Windows Event Logs, Syslog).
Relevance to Network Trace: Configuring the log level affects the amount of logging data collected, but it does not capture network traffic. Diagnostic logs are system and application logs, not network packet captures. This option is irrelevant to capturing a network trace.
Conclusion:
Of the four options, only Option 3 produces an actual network trace. Connection troubleshoot tests reachability, performance counters report aggregated metrics, and the diagnostic agent's log level controls log verbosity; none of them record network traffic. Installing the Performance Diagnostics extension and running advanced performance analysis collects a network trace as part of its diagnostics package, which is exactly what the question asks for.
For completeness, outside the listed options you could also capture traffic with Network Watcher's packet capture feature, or by connecting to the VM and running a capture tool directly (for example, netsh trace or Wireshark on Windows, or tcpdump on Linux).
Final Answer: The final answer is
From the VM1 blade, install performance diagnostics and run advanced performance analysis.
You have an Azure subscription named Subscription1 that contains an Azure virtual network named VNet1. VNet1 connects to your on-premises network by using Azure ExpressRoute.
You need to connect VNet1 to the on-premises network by using a site-to-site VPN. The solution must minimize cost.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Create a gateway subnet.
Create a VPN gateway that uses the VpnGw1 SKU.
Create a connection.
Create a local site VPN gateway.
Create a VPN gateway that uses the Basic SKU.
The three correct actions to connect VNet1 to the on-premises network by using a site-to-site VPN alongside the existing ExpressRoute connection are:
Create a VPN gateway that uses the VpnGw1 SKU.
Create a local site VPN gateway.
Create a connection.
Here's why these actions are correct:
Create a VPN gateway that uses the VpnGw1 SKU:
When a site-to-site VPN coexists with ExpressRoute on the same virtual network, the VPN gateway must be route-based, and the Basic SKU is not supported for coexistence. VpnGw1 is therefore the lowest-cost SKU that works in this scenario.
Create a local site VPN gateway:
A local network gateway represents your on-premises network in Azure. It defines the public IP address of your on-premises VPN device and the address spaces to be routed through the VPN tunnel.
Create a connection:
A connection resource links the Azure VPN gateway to the local network gateway and establishes the IPsec site-to-site tunnel.
The other options are not correct:
"Create a gateway subnet" is unnecessary: VNet1 already connects through ExpressRoute, so a gateway subnet already exists to host the ExpressRoute gateway, and a virtual network can contain only one gateway subnet.
"Create a VPN gateway that uses the Basic SKU" does not meet the requirements: although Basic is cheaper, it cannot be used in an ExpressRoute and VPN coexistence configuration.
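For illustration, here is a hedged sketch of the three actions using the azure-mgmt-network Python SDK; every name, resource ID, and address value below is a placeholder, not something given in the question:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, loc = "RG1", "eastus"  # hypothetical resource group and region

# 1. VPN gateway: VpnGw1, route-based (Basic is not supported next to ExpressRoute).
vpn_gw = client.virtual_network_gateways.begin_create_or_update(
    rg, "vnet1-vpn-gw",
    {
        "location": loc,
        "gateway_type": "Vpn",
        "vpn_type": "RouteBased",
        "sku": {"name": "VpnGw1", "tier": "VpnGw1"},
        "ip_configurations": [{
            "name": "gwipconfig",
            "subnet": {"id": "<existing-GatewaySubnet-resource-id>"},
            "public_ip_address": {"id": "<public-ip-resource-id>"},
        }],
    },
).result()

# 2. Local network gateway: the on-premises VPN device and its routable address space.
local_gw = client.local_network_gateways.begin_create_or_update(
    rg, "onprem-local-gw",
    {
        "location": loc,
        "gateway_ip_address": "<on-premises-public-ip>",
        "local_network_address_space": {"address_prefixes": ["192.168.0.0/16"]},
    },
).result()

# 3. Connection: the IPsec site-to-site tunnel between the two gateways.
client.virtual_network_gateway_connections.begin_create_or_update(
    rg, "vnet1-to-onprem",
    {
        "location": loc,
        "connection_type": "IPsec",
        "virtual_network_gateway1": vpn_gw,
        "local_network_gateway2": local_gw,
        "shared_key": "<pre-shared-key>",
    },
).result()
```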
Your network contains an on-premises Active Directory domain named contoso.com. The domain contains the users shown in the following table.
Name Member of
User1 Domain Admins
User2 Domain Users
User3 ADSyncAdmins
User4 Account Operators
You plan to install Azure AD Connect and enable SSO.
You need to specify which user to use to enable SSO. The solution must use the principle of least privilege.
Which user should you specify?
User3
User2
User1
User4
The correct answer is User1. Here's why:
Principle of Least Privilege: The question asks for the least-privileged account that can still complete the task. The task, enabling seamless SSO in the Azure AD Connect wizard, has a hard requirement: it must be performed with Domain Administrator credentials, because the wizard creates a computer account (AZUREADSSO) in the on-premises Active Directory and configures its Kerberos decryption key.
Let's analyze each user:
User1 (Domain Admins): Domain Admins is the highest level of administrative privilege in the domain, but it is also the only group listed that meets the Domain Administrator requirement for enabling SSO. Note that these credentials are used only during setup; Azure AD Connect does not store them.
User2 (Domain Users): Regular domain users cannot create domain-level computer accounts or modify Kerberos settings, so they cannot enable SSO.
User3 (ADSyncAdmins): This group administers the Azure AD Connect synchronization service itself (for example, sync rules and the sync scheduler). Membership does not grant the domain-level permissions needed to create the AZUREADSSO computer account.
User4 (Account Operators): Account Operators can manage user and group accounts but do not have the domain-level rights required to configure seamless SSO.
Why User1 is the correct choice:
Enabling SSO requires Domain Administrator credentials, so User1 is the least-privileged user in the table that can actually perform the task; no lower-privileged account would succeed.
HOTSPOT
You have an Azure subscription that contains the resource groups shown in the following table.
Name Region
RG1 East US
RG2 West US
RG1 contains the virtual machines shown in the following table.
Name Region
VM1 West US
VM2 West US
VM3 West US
VM4 West US
RG2 contains the virtual machines shown in the following table.
Name Region
VM5 East US 2
VM6 East US 2
VM7 West US
VM8 West US 2
All the virtual machines are configured to use premium disks and are accessible from the Internet.
VM1 and VM2 are in an availability set named AVSET1. VM3 and VM4 are in the same availability zone. VM5 and VM6 are in different availability zones.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Statements Yes No
VM1 is eligible for a Service Level Agreement (SLA) of 99.95 percent.
VM3 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
VM5 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
Statement 1: VM1 is eligible for a Service Level Agreement (SLA) of 99.95 percent.
Analysis: VM1 is in an availability set named AVSET1 along with VM2. Virtual machines deployed in an availability set within the same region are protected from planned and unplanned maintenance events. Azure guarantees a 99.95% uptime SLA for virtual machines deployed in an availability set.
Conclusion: Yes.
Statement 2: VM3 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
Analysis: VM3 and VM4 are in the same availability zone. The 99.99% SLA applies only when two or more VMs are deployed across two or more availability zones in the same region, because that spread protects against a datacenter-level failure. VMs placed in a single zone, and not in an availability set, fall back to the single-instance VM SLA, which is 99.9% for VMs that use premium disks.
Conclusion: No.
Statement 3: VM5 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
Analysis: VM5 and VM6 are in different availability zones. When you deploy virtual machines across availability zones, Azure guarantees a 99.99% uptime SLA. This is because availability zones are physically separate datacenters within an Azure region, providing fault tolerance against datacenter-level failures.
Conclusion: Yes.
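The three statements reduce to a lookup against the published VM SLA tiers. A small sketch (figures as published in Microsoft's VM SLA at the time of this question; verify against the current SLA page before relying on them):

```python
# Uptime SLA percentages per deployment pattern, per the VM SLA document.
VM_SLA = {
    "two+ VMs across availability zones": 99.99,
    "two+ VMs in an availability set": 99.95,
    "single VM, premium disks, no set/zone spread": 99.9,
}

print(VM_SLA["two+ VMs in an availability set"] >= 99.95)                # VM1: Yes
print(VM_SLA["single VM, premium disks, no set/zone spread"] >= 99.99)  # VM3: No
print(VM_SLA["two+ VMs across availability zones"] >= 99.99)            # VM5: Yes
```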
Therefore, the correct answers are:
Statement 1: Yes
Statement 2: No
Statement 3: Yes