test6 Flashcards
You have two subscriptions named Subscription1 and Subscription2. Each subscription is associated to a different Azure AD tenant.
Subscription1 contains a virtual network named VNet1. VNet1 contains an Azure virtual machine named VM1 and has an IP address space of 10.0.0.0/16.
Subscription2 contains a virtual network named VNet2. VNet2 contains an Azure virtual machine named VM2 and has an IP address space of 10.10.0.0/24.
You need to connect VNet1 to VNet2.
What should you do first?
A. Modify the IP address space of VNet2.
B. Move VM1 to Subscription2.
C. Provision virtual network gateways.
D. Move VNet1 to Subscription2.
The correct answer is C. Provision virtual network gateways.
Here’s why:
Explanation:
To connect two Azure Virtual Networks (VNets), especially when they are in different subscriptions or Azure AD tenants, you typically need to use virtual network gateways. Virtual network gateways enable you to create VPN tunnels between VNets, allowing resources in different VNets to communicate with each other.
Let’s break down why each option is (or isn’t) the correct first step:
A. Modify the IP address space of VNet2.
Why it’s incorrect as the first step: The IP address spaces of VNet1 (10.0.0.0/16) and VNet2 (10.10.0.0/24) are not overlapping. Non-overlapping IP address spaces are a requirement for VNet peering or VNet-to-VNet connections using gateways. Modifying the IP address space is only necessary if there’s an overlap, which isn’t the case here. Therefore, this is not the first step.
B. Move VM1 to Subscription2.
Why it’s incorrect: Moving VM1 to Subscription2 is completely irrelevant to connecting VNet1 and VNet2. The goal is to connect the networks, not move VMs between subscriptions as a prerequisite for network connectivity. VMs are within the VNets, but the connection needs to be established at the VNet level.
C. Provision virtual network gateways.
Why it’s correct: Provisioning virtual network gateways is the essential first step for establishing a VNet-to-VNet connection, especially across subscriptions. Virtual network gateways are the Azure resources that create and manage the VPN tunnels required for VNet-to-VNet connectivity. Before you can configure the actual connection, you must have gateways in place in both VNets.
D. Move VNet1 to Subscription2.
Why it’s incorrect as the first step: Moving VNet1 to Subscription2 would place both VNets in the same subscription. While that might simplify management and would allow straightforward VNet peering within a single subscription (if peering were chosen instead of gateways), it is not necessary: VNets can be connected across subscriptions and tenants without moving them. Moving a VNet is also a more complex operation with potential impact on existing resources and configurations, and it is not required in order to use the virtual network gateways that option C refers to.
In summary:
The most logical and necessary first step to connect VNet1 and VNet2, especially given they are in different subscriptions, is to provision virtual network gateways in both VNets. This is the foundational infrastructure component required to build the VPN tunnels for VNet-to-VNet communication.
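As a rough illustration of option C, a route-based VPN gateway for VNet1 could be provisioned with Azure PowerShell along these lines; the resource group, region, gateway subnet prefix, and SKU are placeholder assumptions, and a matching gateway plus a VNet-to-VNet connection would then be created for VNet2 in Subscription2:

$vnet = Get-AzVirtualNetwork -Name 'VNet1' -ResourceGroupName 'RG1'
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.0.255.0/27' -VirtualNetwork $vnet | Set-AzVirtualNetwork
$pip = New-AzPublicIpAddress -Name 'VNet1-gw-pip' -ResourceGroupName 'RG1' -Location 'eastus' -AllocationMethod Dynamic
$vnet = Get-AzVirtualNetwork -Name 'VNet1' -ResourceGroupName 'RG1'   # refresh to pick up GatewaySubnet
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
$ipConfig = New-AzVirtualNetworkGatewayIpConfig -Name 'gwipconfig' -SubnetId $gwSubnet.Id -PublicIpAddressId $pip.Id
New-AzVirtualNetworkGateway -Name 'VNet1-gw' -ResourceGroupName 'RG1' -Location 'eastus' `
    -IpConfigurations $ipConfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1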
Final Answer: C
You have an Azure Active Directory (Azure AD) tenant.
You have an existing Azure AD conditional access policy named Policy1. Policy1 enforces the use of Azure AD-joined devices when members of the Global
Administrators group authenticate to Azure AD from untrusted locations.
You need to ensure that members of the Global Administrators group will also be forced to use multi-factor authentication when authenticating from untrusted locations.
What should you do?
A. From the Azure portal, modify session control of Policy1.
B. From the multi-factor authentication page, modify the user settings.
C. From the multi-factor authentication page, modify the service settings.
D. From the Azure portal, modify grant control of Policy1.
The correct answer is D. From the Azure portal, modify grant control of Policy1.
Here’s why:
Conditional Access Grant Controls: Conditional Access policies in Azure AD work by evaluating conditions (like user group, location, device platform) and then applying grant controls if those conditions are met. Grant controls define what is required to grant access. Common grant controls include:
Require multi-factor authentication: Enforces MFA.
Require device to be marked as compliant: Requires the device to be managed and compliant with your compliance policies.
Require hybrid Azure AD joined device: Requires the device to be hybrid Azure AD joined.
Require approved client app: Requires the user to use an approved client application.
Require app protection policy: Requires the user to use an app that has app protection policies applied.
Policy1’s Current Configuration: Policy1 already enforces “Azure AD-joined devices” for Global Administrators from untrusted locations. This means the “Grant” control section of Policy1 is already configured to “Require device to be marked as compliant” or “Require hybrid Azure AD joined device” (or a similar device-based control).
Adding MFA Requirement: To also force MFA, you need to add the “Require multi-factor authentication” grant control to Policy1. The “Grant” control section in the Azure portal for Policy1 allows you to specify multiple grant requirements. You can require one of the selected controls or all of them. In this case, you likely want to require both Azure AD-joined devices and MFA.
Why other options are incorrect:
A. From the Azure portal, modify session control of Policy1. Session controls are applied after authentication and access are granted. They control the user session behavior, such as sign-in frequency, persistent browser sessions, and application enforced restrictions. Session controls are not used to enforce primary authentication requirements like MFA or device compliance.
B. From the multi-factor authentication page, modify the user settings. The legacy per-user MFA settings page is primarily for per-user MFA enforcement and app password management. Conditional Access policies are the modern and recommended way to manage MFA at scale and based on conditions (such as location and group membership). Modifying per-user settings bypasses the Conditional Access policy logic and is not the correct approach for this scenario.
C. From the multi-factor authentication page, modify the service settings. Similar to option B, the service settings on the legacy MFA page configure MFA provider options (such as verification methods); they do not integrate MFA into Conditional Access policies, which are configured separately.
In summary: To enforce MFA in addition to the existing device requirement for Global Administrators from untrusted locations within Policy1, you need to modify the grant controls of Policy1 in the Azure portal and add the “Require multi-factor authentication” option.
Final Answer: D
You plan to deploy five virtual machines to a virtual network subnet.
Each virtual machine will have a public IP address and a private IP address.
Each virtual machine requires the same inbound and outbound security rules.
What is the minimum number of network interfaces and network security groups that you require? To answer, select the appropriate options in the answer area.
Network interfaces:
5
10
15
20
Network Security Group
1
2
5
10
Network interfaces: 5
Network Security Group: 1
Explanation:
Network Interfaces:
Minimum Requirement: Each Azure virtual machine must have at least one network interface (NIC) to connect to a virtual network and communicate with other resources.
Public and Private IPs on a Single NIC: A single NIC on an Azure VM can be configured with both a private IP address (from the subnet’s IP range) and a public IP address. You don’t need separate NICs to have both types of IP addresses.
Calculation: Since you have five virtual machines, you need a minimum of 5 network interfaces, one for each VM.
Network Security Groups (NSGs):
Subnet-Level NSGs: Network Security Groups can be associated with either individual network interfaces or entire subnets. When you associate an NSG with a subnet, the security rules in that NSG apply to all virtual machines within that subnet.
Shared Security Rules: The requirement states that “Each virtual machine requires the same inbound and outbound security rules.” This is the key point. Because the security rules are identical for all VMs in the subnet, you can efficiently manage security by applying a single NSG at the subnet level.
Minimizing NSGs: Using a subnet-level NSG is the most efficient and least administrative effort approach when VMs within a subnet share the same security requirements. You avoid the need to create and manage individual NSGs for each VM or NIC.
Calculation: Since all VMs in the subnet need the same rules, and they are in the same subnet, you need a minimum of 1 Network Security Group applied to the subnet.
Why other options are incorrect:
Network Interfaces:
10, 15, 20: These numbers are unnecessarily high. You don’t need multiple NICs per VM just because they have public and private IPs, or because they require the same security rules. One NIC per VM is sufficient.
Network Security Groups:
2, 5, 10: These numbers are also unnecessarily high. Creating multiple NSGs for this scenario would be redundant and would increase management complexity without providing any benefit, because the security rules are identical for all VMs and they share the same subnet. A subnet (and a NIC) can be associated with only one NSG, and layering per-NIC NSGs on top of a subnet-level NSG would only make the effective rules harder to reason about.
Therefore, the minimum and most efficient configuration is:
5 Network Interfaces (one per VM)
1 Network Security Group (applied to the subnet)
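A hedged Azure PowerShell sketch of that single subnet-level NSG; the rule shown, the resource names, and the address prefix are placeholder assumptions:

$rule = New-AzNetworkSecurityRuleConfig -Name 'Allow-HTTP-Inbound' -Protocol Tcp -Direction Inbound `
    -Priority 100 -SourceAddressPrefix 'Internet' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 80 -Access Allow
$nsg = New-AzNetworkSecurityGroup -Name 'NSG1' -ResourceGroupName 'RG1' -Location 'eastus' -SecurityRules $rule
$vnet = Get-AzVirtualNetwork -Name 'VNet1' -ResourceGroupName 'RG1'
Set-AzVirtualNetworkSubnetConfig -Name 'Subnet1' -VirtualNetwork $vnet -AddressPrefix '10.0.1.0/24' `
    -NetworkSecurityGroup $nsg | Set-AzVirtualNetwork   # associate the one NSG with the shared subnet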
Final Answer:
Option Selected
Network interfaces: 5
Network Security Group: 1
You have an Azure subscription named Subscription1 that contains an Azure virtual machine named VM1. VM1 is in a resource group named RG1.
VM1 runs services that will be used to deploy resources to RG1.
You need to ensure that a service running on VM1 can manage the resources in RG1 by using the identity of VM1.
What should you do first?
A. From the Azure portal, modify the Access control (IAM) settings of RG1.
B. From the Azure portal, modify the Policies settings of RG1.
C. From the Azure portal, modify the Access control (IAM) settings of VM1.
D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.
The correct answer is D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.
Here’s why:
Explanation:
To allow a service running on VM1 to manage Azure resources using VM1’s identity, you need to enable and configure Managed Identities for Azure Resources on VM1 first. Managed Identities provide Azure services with an automatically managed identity in Azure Active Directory (Azure AD). This identity can then be used to authenticate to Azure services that support Azure AD authentication, without needing to manage credentials in your code.
Let’s break down each option:
A. From the Azure portal, modify the Access control (IAM) settings of RG1.
Why it’s not the first step: Modifying the Access control (IAM) settings of RG1 is a necessary later step, but not the first step. IAM settings on RG1 are where you will grant permissions to the identity of VM1 to manage resources in RG1. However, you first need to enable the Managed Identity on VM1 before you can grant it permissions.
B. From the Azure portal, modify the Policies settings of RG1.
Why it’s incorrect: Azure Policies are used to enforce organizational standards and compliance across Azure resources. They are not related to enabling Managed Identities or granting permissions for a VM to manage resources. Policies are for governance, not identity management in this context.
C. From the Azure portal, modify the Access control (IAM) settings of VM1.
Why it’s incorrect: Modifying the Access control (IAM) settings of VM1 controls who can manage the VM itself. It doesn’t enable the VM’s identity to be used to manage other resources. IAM settings on VM1 are for role-based access control to the VM, not from the VM to other resources.
D. From the Azure portal, modify the value of the Managed Service Identity option for VM1.
Why it’s correct and the first step: This is the correct first step. You need to enable either a System-assigned Managed Identity or a User-assigned Managed Identity (or both) on VM1. Enabling the Managed Identity creates an identity for VM1 in Azure AD. Once the Managed Identity is enabled, you can then proceed to grant this identity permissions to manage RG1 resources via IAM on RG1 (Option A - which would be the next step).
Steps to solve the problem:
Enable Managed Identity on VM1: In the Azure portal, navigate to VM1. Under the “Settings” section, find “Identity”. Enable either “System assigned” or “User assigned” Managed Identity (System-assigned is often simpler for this scenario).
Grant Permissions to VM1’s Identity on RG1: After enabling Managed Identity on VM1, go to Resource Group RG1 in the Azure portal. Navigate to “Access control (IAM)”. Add a role assignment.
For “Principal”, search for the name of VM1 (if System-assigned) or the name of the User-assigned Managed Identity you created.
Select an appropriate role for the service running on VM1 to manage resources in RG1 (e.g., “Contributor” role to have broad management rights, or more specific roles if you want to limit permissions).
Code on VM1: The service running on VM1 can now use the Azure SDKs or REST APIs to authenticate using the VM’s Managed Identity and manage resources in RG1. The Azure SDKs handle the authentication process automatically when running within an Azure VM with Managed Identity enabled.
In summary, enabling Managed Identity on VM1 is the prerequisite and the correct first step to allow a service on VM1 to manage Azure resources using VM1’s identity.
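A minimal Azure PowerShell sketch of steps 1 and 2 above; the Contributor role is an example choice, and the resource names come from the question:

# Step 1: enable a system-assigned managed identity on VM1
$vm = Get-AzVM -ResourceGroupName 'RG1' -Name 'VM1'
Update-AzVM -ResourceGroupName 'RG1' -VM $vm -IdentityType SystemAssigned
# Step 2: grant that identity a role on RG1 (scope the role as narrowly as the service allows)
$vm = Get-AzVM -ResourceGroupName 'RG1' -Name 'VM1'   # re-read to pick up the new identity
New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId -RoleDefinitionName 'Contributor' -ResourceGroupName 'RG1'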
Final Answer: D
You have an Azure Active Directory (Azure AD) tenant.
You need to create a conditional access policy that requires all users to use multi-factor authentication when they access the Azure portal.
Which three settings should you configure? To answer, select the appropriate settings to the answer area.
NOTE: Each correct selection is worth one point.
Name -> Policy1
users and groups
cloud apps
conditions
Grant
Session
enable policy
To create a conditional access policy that requires MFA for all users accessing the Azure portal, you need to configure the following three core settings:
- Users and groups: You need to specify who this policy applies to. In this case, you want it to apply to all users. Within the Conditional Access policy configuration, you will select “Users and groups” and then choose to apply the policy to “All users”.
- Cloud apps or actions: You need to specify what application(s) this policy protects. In this case, you want to protect access to the Azure portal. Within the Conditional Access policy configuration, you will select “Cloud apps or actions”, choose “Select apps”, and select Microsoft Azure Management (the cloud app that governs access to the Azure portal).
- Grant: You need to specify what access control to enforce when the conditions (user, app) are met. In this case, you want to require multi-factor authentication. Within the Conditional Access policy configuration, you will select “Grant” and then choose “Grant access” and check the box for “Require multi-factor authentication”.
Let’s evaluate the provided options and map them to these core settings:
Name -> Policy1: While a policy needs a name, it’s not a functional setting for enforcing MFA itself. It’s an administrative label. It’s less critical for the functionality compared to the other options.
users and groups: Correct. This is essential to define who the policy applies to (all users in this case).
cloud apps: Correct. This is essential to define what application is being protected (Azure portal).
conditions: Conditions define when the policy applies (for example, location or device state). For the simple requirement of “all users, any time, for the Azure portal”, you do not need to configure any condition beyond the defaults, so this is not one of the three required settings.
Grant: Correct. This is essential to define what action is taken when the conditions are met, which is “require multi-factor authentication” in this scenario.
Session: Session controls are applied after successful authentication and are used to manage the user session (e.g., sign-in frequency, persistent browser session). They are not directly involved in requiring MFA for initial access.
enable policy: While you need to enable the policy for it to be active, “enable policy” is more of an on/off switch for the entire policy rather than a specific setting within the policy configuration itself. It’s also less directly related to defining the MFA requirement compared to Users, Apps, and Grant.
Considering the need to select three settings that are most directly related to configuring the MFA requirement, the most appropriate and functionally essential settings are:
users and groups
cloud apps
Grant
While “conditions” is an integral part of CA policies, for this specific and simple requirement, the most direct settings to configure to achieve the goal of requiring MFA for Azure portal access for all users are Users, Apps, and Grant.
Therefore, the three settings you should configure are:
users and groups
cloud apps
Grant
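For reference, the same three settings map directly onto the Microsoft Graph policy object. A hedged Microsoft Graph PowerShell sketch follows; the app ID shown is commonly documented as the Microsoft Azure Management application, but verify it (and the exact property names) against current Microsoft Graph documentation before relying on it:

Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'
$params = @{
    displayName   = 'Policy1'
    state         = 'enabled'                                        # enable policy
    conditions    = @{
        users        = @{ includeUsers = @('All') }                  # users and groups: All users
        applications = @{ includeApplications = @('797f4846-ba00-4fd7-ba43-dac1f8f63013') }   # cloud apps
    }
    grantControls = @{ operator = 'OR'; builtInControls = @('mfa') } # Grant: require multi-factor authentication
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $params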
Final Answer:
Option Selected
users and groups
cloud apps
Grant
You configure Azure AD Connect for Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO) for an on-premises network.
Users report that when they attempt to access myapps.microsoft.com, they are prompted multiple times to sign in and are forced to use an account name that ends with onmicrosoft.com.
You discover that there is a UPN mismatch between Azure AD and the on-premises Active Directory.
You need to ensure that the users can use single-sign on (SSO) to access Azure resources.
What should you do first?
A. From on-premises network, deploy Active Directory Federation Services (AD FS).
B. From Azure AD, add and verify a custom domain name.
C. From on-premises network, request a new certificate that contains the Active Directory domain name.
D. From the server that runs Azure AD Connect, modify the filtering options.
The correct answer is B. From Azure AD, add and verify a custom domain name.
Here’s why:
Explanation:
The core issue described is a UPN (User Principal Name) mismatch. Azure AD Seamless SSO relies on matching the UPNs of users between on-premises Active Directory and Azure AD. When there’s a mismatch, Seamless SSO will fail, leading to users being prompted for credentials multiple times and potentially being forced to use their default onmicrosoft.com UPN.
Let’s break down why option B is the correct first step and why the others are not:
B. From Azure AD, add and verify a custom domain name.
Why it’s correct and the first step: If users are being forced to use onmicrosoft.com UPNs, it strongly indicates that a custom domain name has not been properly added and verified in Azure AD. By default, Azure AD uses the onmicrosoft.com domain. To use your organization’s domain (e.g., company.com) for user UPNs in Azure AD, you must add and verify this custom domain in your Azure AD tenant.
UPN Alignment: Adding and verifying the custom domain allows Azure AD to recognize and accept user UPNs that match your on-premises domain (e.g., user@company.com). This is essential for Seamless SSO to work correctly and for UPNs to be consistent across on-premises and cloud environments.
First Step: This is the most fundamental and logical first step to address a UPN mismatch issue. Without a verified custom domain, Azure AD won’t properly handle UPNs from your on-premises domain.
A. From on-premises network, deploy Active Directory Federation Services (AD FS).
Why it’s incorrect: Deploying AD FS is a completely different approach to Single Sign-On. AD FS is a federation-based SSO solution, while Azure AD Seamless SSO is a password hash synchronization-based solution (with Kerberos for authentication). Deploying AD FS is a significant change in SSO strategy and not a step to fix issues with Seamless SSO. It’s also overkill and not directly related to the UPN mismatch problem.
C. From on-premises network, request a new certificate that contains the Active Directory domain name.
Why it’s incorrect: Azure AD Seamless SSO relies on a Kerberos decryption key tied to a computer account (AZUREADSSOACC) created in on-premises Active Directory, not on a certificate that contains the Active Directory domain name. The problem described is a UPN mismatch, so requesting a new certificate would not resolve it.
D. From the server that runs Azure AD Connect, modify the filtering options.
Why it’s incorrect: Azure AD Connect filtering options control which objects (users, groups, etc.) are synchronized from on-premises Active Directory to Azure AD. While incorrect filtering could cause synchronization issues, it’s not the primary cause of a UPN mismatch leading to SSO problems. Filtering is more about which users are synced, not about the UPN attribute itself being correctly configured. Modifying filtering is a more advanced configuration and not the first step to address a fundamental UPN mismatch problem.
Steps to resolve the issue after choosing option B:
Add and Verify Custom Domain in Azure AD (Option B - Correct First Step): In the Azure portal, navigate to Azure Active Directory -> Custom domain names. Add your organization’s domain name (e.g., company.com) and follow the steps to verify it (typically by adding a DNS TXT record).
Update UPNs (If Necessary): After verifying the custom domain, ensure that the UPN suffixes in your on-premises Active Directory match the verified custom domain. If there are users with UPNs still using a different or incorrect suffix, you may need to update their UPNs in on-premises AD.
Run Azure AD Connect Synchronization: After making changes (adding custom domain, updating UPNs), ensure Azure AD Connect synchronizes these changes to Azure AD.
Test Seamless SSO: Have users try accessing myapps.microsoft.com again from domain-joined devices on the on-premises network to verify if Seamless SSO is now working correctly without multiple prompts or forced onmicrosoft.com usage.
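To illustrate step 2 above (aligning on-premises UPN suffixes with the verified domain), a hedged on-premises PowerShell sketch using the ActiveDirectory module; the old and new suffixes are hypothetical, and the contoso.com UPN suffix must already be registered in Active Directory Domains and Trusts:

Get-ADUser -Filter "UserPrincipalName -like '*@contoso.local'" |
    ForEach-Object {
        $newUpn = ($_.UserPrincipalName -split '@')[0] + '@contoso.com'
        Set-ADUser -Identity $_ -UserPrincipalName $newUpn   # switch the user to the routable suffix
    }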
Final Answer: B
You have an Active Directory forest named contoso.com.
You install and configure Azure AD Connect to use password hash synchronization as the single sign-on (SSO) method. Staging mode is enabled.
You review the synchronization results and discover that the Synchronization Service Manager does not display any sync jobs.
You need to ensure that the synchronization completes successfully.
What should you do?
A. From Azure PowerShell, run Start-ADSyncSyncCycle -PolicyType Initial.
B. Run Azure AD Connect and set the SSO method to Pass-through Authentication.
C. From Synchronization Service Manager, run a full import.
D. Run Azure AD Connect and disable staging mode.
The correct answer is D. Run Azure AD Connect and disable staging mode.
Here’s why:
Explanation:
When Azure AD Connect is configured in staging mode, it means that the synchronization service is running and performing imports and synchronizations, but it does not export any changes to Azure AD. This mode is designed for testing and verifying the configuration before making it active in your production environment.
If you are seeing no sync jobs in the Synchronization Service Manager while staging mode is enabled, this is expected behavior. In staging mode, the primary purpose is to review the synchronization configuration and preview changes, not to actively synchronize data to your production Azure AD tenant.
To make the synchronization process active and for sync jobs to run and export data to Azure AD, you need to disable staging mode.
Let’s analyze each option:
A. From Azure PowerShell, run Start-ADSyncSyncCycle -PolicyType Initial.
While this command can manually trigger a synchronization cycle, it will still operate within the constraints of staging mode. If staging mode is enabled, running this command will likely initiate a synchronization cycle, but it will still not export changes to Azure AD. Therefore, it won’t resolve the core issue of getting the synchronization to complete successfully in a production sense.
B. Run Azure AD Connect and set the SSO method to Pass-through Authentication.
Changing the SSO method to Pass-through Authentication is irrelevant to the problem of synchronization not completing successfully. The issue is that no sync jobs are being displayed, which is directly related to staging mode preventing export of changes to Azure AD. Changing the SSO method won’t enable the synchronization process in staging mode to export data.
C. From Synchronization Service Manager, run a full import.
Running a full import from the Synchronization Service Manager might initiate an import operation, but again, if staging mode is enabled, the subsequent synchronization and export steps will be suppressed. A full import alone won’t enable the synchronization to complete successfully and export data to Azure AD when staging mode is active.
D. Run Azure AD Connect and disable staging mode.
This is the correct answer. Disabling staging mode in Azure AD Connect is the necessary step to make the synchronization configuration active and allow changes to be exported to Azure AD. Once staging mode is disabled, Azure AD Connect will start performing full and delta synchronizations and will export changes to your Azure AD tenant. This will then allow you to see sync jobs running in the Synchronization Service Manager and for the synchronization to complete successfully in a production context.
Steps to resolve the issue:
Run Azure AD Connect wizard again.
Choose “Configure” from the initial tasks page.
Select “Configure staging mode” from the Additional tasks page.
Uncheck the “Enable staging mode” checkbox.
Complete the wizard.
After disabling staging mode, the synchronization service will start exporting changes to Azure AD, and you should see synchronization jobs running in the Synchronization Service Manager.
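Once staging mode is disabled, you can confirm the state and kick off an initial cycle from the Azure AD Connect server; a short sketch using the ADSync module that ships with Azure AD Connect:

Import-Module ADSync
Get-ADSyncScheduler                          # StagingModeEnabled should now report False
Start-ADSyncSyncCycle -PolicyType Initial    # trigger a full (initial) synchronization cycle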
Final Answer: D
You have an Azure Active Directory (Azure AD) tenant that has only its initial onmicrosoft.com domain name.
You have a domain name of contoso.com registered at a third-party registrar.
You need to ensure that you can create Azure AD users that have names containing a suffix of @contoso.com.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Add an Azure AD server
Create an Azure DNS zone
Verify the Domain
Configure company branding
Add a record to the public contoso.com DNS zone
Add a custom domain name
Explanation:
To use a custom domain name like contoso.com for Azure AD users, you need to perform the following steps in sequence:
Add a custom domain name: First, you need to inform Azure AD that you intend to use contoso.com with your Azure AD tenant. This is done by adding the custom domain name in the Azure portal or using PowerShell.
Verify the Domain: After adding the domain name, Azure AD will provide you with DNS record information (typically a TXT record or MX record). You need to add this record to the DNS settings of contoso.com at your third-party registrar to prove that you own the domain. This process is called domain verification.
Add a record to the public contoso.com DNS zone: This is the step where you actually add the DNS record provided by Azure AD to your domain’s public DNS settings. This action proves to Azure AD that you control the DNS for contoso.com, thus verifying your ownership.
Let’s arrange the provided actions in the correct sequence:
Step 1: Add a custom domain name
This is the first step to initiate the process within Azure AD. You need to tell Azure AD that you want to use contoso.com.
Step 2: Verify the Domain
Once you add the custom domain name, Azure AD will start the verification process. This step represents the action of Azure AD initiating the verification and providing you with the DNS record information.
Step 3: Add a record to the public contoso.com DNS zone
This is the final step to complete the verification. You take the DNS record information provided in the “Verify the Domain” step and add it to the DNS zone managed by your third-party registrar for contoso.com.
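The same three actions can be driven from PowerShell; a hedged sketch using the AzureAD module (the portal steps are equivalent):

Connect-AzureAD
New-AzureADDomain -Name 'contoso.com'                        # add the custom domain name
Get-AzureADDomainVerificationDnsRecord -Name 'contoso.com'   # shows the TXT/MX record Azure AD expects
# publish that record in the public contoso.com DNS zone at the third-party registrar, then:
Confirm-AzureADDomain -Name 'contoso.com'                    # completes the verification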
Incorrect Options and Why:
Add an Azure AD server: Azure AD is a cloud service and doesn’t involve adding servers in the context of custom domain setup.
Create an Azure DNS zone: While you could use Azure DNS to manage your domain’s DNS records, it’s not a mandatory step to add a custom domain to Azure AD. You can use any DNS provider where your domain is registered. This step is optional for DNS management, not for domain verification itself.
Configure company branding: Company branding is for customizing the Azure AD sign-in experience and is not related to adding or verifying a custom domain for user UPNs.
Answer Area (Correct Sequence):
Add a custom domain name
Verify the Domain
Add a record to the public contoso.com DNS zone
Final Answer:
Add a custom domain name
Verify the Domain
Add a record to the public contoso.com DNS zone
You have an Azure subscription that contains 100 virtual machines.
You regularly create and delete virtual machines.
You need to identify unattached disks that can be deleted.
What should you do?
A. From Microsoft Azure Storage Explorer, view the Account Management properties.
B. From Azure Cost Management, create a Cost Management report.
C. From the Azure portal, configure the Advisor recommendations.
D. From Azure Cost Management, open the Optimizer tab and create a report.
The correct answer is C. From the Azure portal, configure the Advisor recommendations.
Here’s why:
Explanation:
Azure Advisor’s Cost Recommendations: Azure Advisor is a service in Azure that provides personalized recommendations to help you optimize your Azure resources for cost, security, reliability, performance, and operational excellence. One of its key features is to identify cost-saving opportunities.
Identifying Unattached Disks: Azure Advisor specifically includes a recommendation category related to cost optimization, and within that category, it can identify unattached disks. Advisor analyzes your Azure environment and detects disks that are not currently attached to any virtual machines. These unattached disks are still incurring storage costs, and deleting them can save money.
Configuring Advisor Recommendations: You can access Azure Advisor from the Azure portal. You don’t need to “configure” it in the sense of setting up new rules for unattached disks detection, as this is a built-in recommendation. You simply need to view the Advisor recommendations, specifically looking at the “Cost” category. Advisor will automatically list out any unattached disks it finds in your subscription.
Why other options are incorrect:
A. From Microsoft Azure Storage Explorer, view the Account Management properties. Azure Storage Explorer is a useful tool for managing storage accounts and their contents (blobs, files, disks, etc.). However, viewing “Account Management properties” in Storage Explorer will not directly provide a list of unattached disks. You would have to manually browse through disks and cross-reference them with your VM list to determine which are unattached, which is inefficient for 100 VMs. Storage Explorer is not designed for this specific discovery task in an automated way.
B. From Azure Cost Management, create a Cost Management report. Azure Cost Management is excellent for analyzing and reporting on Azure spending. You can create reports to see your storage costs, including disk storage. However, Cost Management reports themselves don’t directly identify unattached disks. You might see high disk costs, but the report won’t automatically tell you which disks are not in use. You would need to analyze cost data and correlate it with other information to infer unattached disks, which is not the most efficient approach.
D. From Azure Cost Management, open the Optimizer tab and create a report. While Azure Cost Management has an “Optimizer” section (or similar features that might be renamed or UI updated over time), and it may surface some cost-saving recommendations, it’s still generally less direct than using Azure Advisor for this specific task. The Optimizer tab is more likely to guide you toward acting on Advisor recommendations or provide a more general cost optimization overview, rather than directly and specifically listing unattached disks for deletion. Azure Advisor is the dedicated service for providing these kinds of actionable recommendations.
In summary, Azure Advisor is the most direct and efficient Azure service to identify unattached disks. By configuring (more accurately, by viewing) Advisor recommendations, specifically in the “Cost” category, you will get a list of unattached disks that you can then review and delete.
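If you want to cross-check what Advisor reports, a hedged Azure PowerShell sketch; Get-AzAdvisorRecommendation requires the Az.Advisor module, and the exact output property names are assumptions worth verifying:

Get-AzAdvisorRecommendation -Category Cost | Select-Object ShortDescription, ImpactedValue
# A managed disk whose ManagedBy property is empty is not attached to any VM
Get-AzDisk | Where-Object { -not $_.ManagedBy } | Select-Object Name, ResourceGroupName, DiskSizeGB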
Final Answer: C
You have an Azure subscription that contains 10 virtual machines.
You need to ensure that you receive an email message when any virtual machines are powered off, restarted, or deallocated.
What is the minimum number of rules and action groups that you require?
A. three rules and three action groups
B. one rule and one action group
C. three rules and one action group
D. one rule and three action groups
To meet the requirement of receiving email notifications when virtual machines are powered off, restarted, or deallocated, you need to configure Azure Monitor alerts. Let’s analyze the minimum number of rules and action groups required.
Alert Rules:
You need to monitor three distinct events:
Virtual machine powered off (Stopped/PowerOff): You need a rule to detect when a VM transitions to the “Powered Off” state.
Virtual machine restarted: You need a rule to detect when a VM is restarted.
Virtual machine deallocated (Stopped (deallocated)): You need a rule to detect when a VM is deallocated.
While technically you might be able to create a single complex rule that tries to capture all three states, it is cleaner, more manageable, and generally recommended to create separate alert rules for each distinct event you want to monitor. This allows for more specific configurations and easier troubleshooting.
Therefore, you will need a minimum of three alert rules, one for each of the virtual machine power state changes you want to monitor.
Action Groups:
Action groups define the actions to take when an alert is triggered. In this scenario, the desired action is to send an email message. You want to receive an email notification for any of the three VM power state changes. You don’t need separate email notifications for each event; you just need a notification when any of these events occur.
Therefore, you can use a single action group configured to send an email message. You can then associate this single action group with all three alert rules. When any of the three alert rules are triggered (VM powered off, restarted, or deallocated), the same action group will be executed, resulting in an email notification being sent.
Minimum Requirements:
Alert Rules: Three (one for each power state: powered off, restarted, deallocated)
Action Groups: One (to send the email notification for all three alert rules)
Based on this analysis, the correct option is C. three rules and one action group.
Let’s review why other options are incorrect:
A. three rules and three action groups: Using three action groups is redundant. You don’t need a separate action group for each rule if the desired action (sending an email to the same recipient list) is the same for all rules.
B. one rule and one action group: A single rule cannot cleanly monitor the three distinct power-state events. While you might try to build one complex rule, that is not a manageable approach and is not best practice for clarity and maintainability.
D. one rule and three action groups: One rule is still insufficient, and using three action groups is still redundant. A single rule cannot clearly distinguish and monitor all three power state changes in a simple and maintainable way, and you only need one email notification mechanism.
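As a rough sketch, the single shared action group could be created with Azure PowerShell as follows (the names and email address are hypothetical); each of the three activity log alert rules would then reference this one group:

$email = New-AzActionGroupReceiver -Name 'email-ops' -EmailReceiver -EmailAddress 'ops@contoso.com'
Set-AzActionGroup -ResourceGroupName 'RG1' -Name 'ag-vm-power-email' -ShortName 'vmpower' -Receiver $email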
Final Answer: C
You plan to automate the deployment of a virtual machine scale set that uses the Windows Server 2016 Datacenter image.
You need to ensure that when the scale set virtual machines are provisioned, they have web server components installed.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Upload a configuration script.
B. Create an automation account.
C. Create a new virtual machine scale set in the Azure portal.
D. Create an Azure policy.
E. Modify the extension profile section of the Azure Resource Manager template.
The correct answers are A. Upload a configuration script. and E. Modify the extension profile section of the Azure Resource Manager template.
Here’s why these options are correct and how they work together:
A. Upload a configuration script.
Purpose: You need a script (like a PowerShell script for Windows Server) that contains the commands to install the web server components (e.g., IIS - Internet Information Services). This script will be executed on each virtual machine instance in the scale set after it’s provisioned.
Content: The script would typically include PowerShell commands to:
Install the Web-Server role (IIS).
Optionally configure IIS further (e.g., default website settings, application pools, etc.).
Potentially perform other necessary configuration steps for your web server application.
Upload Location: You would typically upload this script to an accessible storage location, such as:
Azure Blob Storage: A common and recommended approach. You upload the script to a public or private blob container and provide the URI to the script in your ARM template.
Script in ARM Template: For simpler scripts, you can sometimes embed the script directly within the ARM template, but for more complex scripts, uploading to storage is better for management and readability.
E. Modify the extension profile section of the Azure Resource Manager template.
Purpose: The Azure Resource Manager (ARM) template is used to define and deploy your virtual machine scale set. To automatically run your configuration script on each VM instance during provisioning, you use VM extensions. The extensionProfile section of the ARM template is where you configure these VM extensions.
Extension to Use: For running custom scripts on Windows VMs, the CustomScriptExtension is the most common and appropriate extension.
Configuration within extensionProfile: In the extensionProfile, you would define a CustomScriptExtension and configure it to:
fileUris: Point to the URI of your uploaded configuration script (from option A, like the Blob Storage URL).
commandToExecute: Specify the command to execute the script on the VM (e.g., powershell -ExecutionPolicy Unrestricted -File install_webserver.ps1).
settings and protectedSettings (optional): For passing parameters to the script or handling sensitive information securely.
How A and E Work Together:
Create the Configuration Script (Action A): You write a PowerShell script to install the web server components.
Upload the Script (Action A): You upload this script to Azure Blob Storage (or another accessible location).
Modify ARM Template (Action E): In your ARM template for the VM scale set, you add or modify the extensionProfile section.
Configure CustomScriptExtension (Action E): Within the extensionProfile, you define a CustomScriptExtension, pointing it to the script URI (fileUris) and specifying how to execute it (commandToExecute).
Deploy the ARM Template: When you deploy the ARM template, Azure will:
Provision the virtual machine scale set.
For each VM instance, Azure will download the script from the URI specified in the CustomScriptExtension.
The CustomScriptExtension will execute the script on the VM, installing the web server components.
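The same extension configuration can also be applied with Azure PowerShell instead of editing the template by hand; a hedged sketch with a hypothetical scale set name and storage URL:

$vmss = Get-AzVmss -ResourceGroupName 'RG1' -VMScaleSetName 'WebScaleSet'
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name 'InstallWebServer' `
    -Publisher 'Microsoft.Compute' -Type 'CustomScriptExtension' -TypeHandlerVersion '1.10' `
    -Setting @{
        fileUris         = @('https://mystorage.blob.core.windows.net/scripts/install_webserver.ps1')
        commandToExecute = 'powershell -ExecutionPolicy Unrestricted -File install_webserver.ps1'
    }
Update-AzVmss -ResourceGroupName 'RG1' -VMScaleSetName 'WebScaleSet' -VirtualMachineScaleSet $vmss
# install_webserver.ps1 itself can be as simple as:
#   Install-WindowsFeature -Name Web-Server -IncludeManagementTools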
Why other options are incorrect:
B. Create an automation account. Azure Automation accounts are powerful for automation tasks, but they are not the primary mechanism for directly configuring VMs during provisioning in a VM scale set. While you could use Azure Automation in a more complex scenario (e.g., triggered after VM creation), using VM extensions directly within the ARM template is the simpler and more standard approach for this requirement.
C. Create a new virtual machine scale set in the Azure portal. Creating a VM scale set itself doesn’t install web server components. The Azure portal is an interface for deployment, but you still need a mechanism to configure the VMs during deployment, which is achieved through extensions and scripts. The portal would be used to deploy the ARM template (which includes the extension profile).
D. Create an Azure policy. Azure Policy is used to enforce configurations and compliance after VMs are deployed. It can audit or remediate configuration drift. Policies are not designed to initiate the installation of software during VM provisioning. Policies ensure ongoing compliance but don’t handle the initial setup in this scenario.
Final Answer:
Option Selected
Upload a configuration script.
Modify the extension profile section of the Azure Resource Manager template.
An app uses a virtual network with two subnets. One subnet is used for the application server. The other subnet is used for a database server. A network virtual appliance (NVA) is used as a firewall.
Traffic destined for one specific address prefix is routed to the NVA and then to an on-premises database server that stores sensitive data. A Border Gateway
Protocol (BGP) route is used for the traffic to the on-premises database server.
You need to recommend a method for creating the user-defined route.
Which two options should you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. For the virtual network configuration, use a VPN.
B. For the next hop type, use virtual network peering.
C. For the virtual network configuration, use Azure ExpressRoute.
D. For the next hop type, use a virtual network gateway.
Let’s break down the requirements and analyze each option in the context of creating a user-defined route (UDR) for traffic destined to an on-premises database via an NVA.
Understanding the Scenario:
Traffic from a subnet in Azure needs to be routed to a specific address prefix (on-premises database network).
The traffic must pass through a Network Virtual Appliance (NVA) acting as a firewall within the Azure VNet.
BGP routing is used for traffic after the NVA to reach the on-premises database server. This implies a connection between Azure and the on-premises network that supports BGP, such as VPN or ExpressRoute.
Analyzing Each Option:
A. For the virtual network configuration, use a VPN.
Correct. A VPN (Site-to-Site VPN) is a common method to establish a secure connection between an Azure virtual network and an on-premises network. While a VPN gateway is the specific component, using “VPN for the virtual network configuration” broadly implies setting up VPN-based hybrid connectivity. In this scenario, the BGP route mentioned likely refers to BGP being used over a VPN or ExpressRoute connection to exchange routes with the on-premises network. Therefore, using VPN for the virtual network configuration is a valid part of a complete solution for connecting to on-premises.
B. For the next hop type, use virtual network peering.
Incorrect. Virtual network peering is used to connect two Azure virtual networks directly. It’s not relevant for routing traffic from a subnet to an NVA within the same virtual network to reach an on-premises network. Peering is for VNet-to-VNet connectivity, not for routing to an NVA for on-premises access.
C. For the virtual network configuration, use Azure ExpressRoute.
Correct. Azure ExpressRoute provides a dedicated, private, and often higher-bandwidth connection between Azure and an on-premises network. Similar to VPN, ExpressRoute is a method for establishing hybrid connectivity. Using ExpressRoute for the virtual network configuration is also a valid part of a complete solution for connecting to on-premises, especially when dealing with sensitive data and potentially higher bandwidth requirements. ExpressRoute also supports BGP for route exchange.
D. For the next hop type, use a virtual network gateway.
Incorrect. While a virtual network gateway (VPN gateway or ExpressRoute gateway) is involved in connecting to on-premises via VPN or ExpressRoute, it is not the correct “next hop type” for a UDR when you want to route traffic to an NVA within the same VNet. For routing traffic to an NVA, the correct “next hop type” in a UDR is typically “Virtual appliance”. Using “Virtual network gateway” as the next hop type in a UDR would typically be used to route traffic directly out of the VNet to the VPN or ExpressRoute connection, bypassing the NVA. In this scenario, the traffic needs to go through the NVA first.
Rationale for choosing A and C:
Options A and C represent the two primary methods for establishing a hybrid network connection between Azure and on-premises, which is essential for the scenario described. The mention of a BGP route further reinforces that a VPN or ExpressRoute connection to on-premises is likely part of the intended architecture. While option D mentions a virtual network gateway, it’s not the correct next hop type for routing traffic to an NVA within the VNet. Option B is irrelevant for routing to on-premises via an NVA.
Therefore, Options A and C provide the necessary virtual network configurations for establishing the hybrid connectivity required to reach the on-premises database server, making them the most suitable recommendations for a “complete solution” in this context.
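For reference, the user-defined route that sends the sensitive address prefix to the NVA uses the Virtual appliance next hop type; a hedged Azure PowerShell sketch with hypothetical prefixes and NVA IP address:

$rt = New-AzRouteTable -Name 'rt-app-subnet' -ResourceGroupName 'RG1' -Location 'eastus'
Add-AzRouteConfig -RouteTable $rt -Name 'ToOnPremDb' -AddressPrefix '172.16.10.0/24' `
    -NextHopType VirtualAppliance -NextHopIpAddress '10.0.2.4' | Set-AzRouteTable
# Associate the route table with the application subnet; BGP routes learned over the
# VPN or ExpressRoute gateway then carry the traffic onward after it leaves the NVA.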
Final Answer: A and C
You manage a solution in Azure that consists of a single application that runs on a virtual machine (VM). Traffic to the application has increased dramatically.
The application must not experience any downtime and scaling must be dynamically defined.
You need to define an auto-scale strategy to ensure that the VM can handle the workload.
Which three options should you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Deploy application automatic vertical scaling.
B. Create a VM availability set.
C. Create a VM scale set.
D. Deploy application automatic horizontal scaling.
E. Deploy a custom auto-scale implementation.
Let’s analyze each option based on the requirements of no downtime and dynamically defined scaling for an application running on a single VM experiencing increased traffic.
A. Deploy application automatic vertical scaling.
Correctness: Vertical scaling involves increasing the resources (CPU, memory, etc.) of the existing virtual machine. Automatic vertical scaling would mean this process happens automatically in response to load changes.
Downtime: Vertical scaling typically requires a reboot of the virtual machine to apply the changes, resulting in application downtime. While some newer Azure VM sizes support resizing without downtime, this is not universally guaranteed and is more complex than horizontal scaling for zero-downtime scaling.
Dynamically Defined Scaling: Yes, vertical scaling can be dynamically defined based on metrics such as CPU usage or memory consumption.
Completeness as a Solution: While it can scale dynamically, the potential for downtime during a vertical resize makes it unsuitable for an application that must not experience any downtime. Therefore, it is NOT one of the correct answers.
B. Create a VM availability set.
Correctness: Availability sets are used to improve the availability and fault tolerance of virtual machines by distributing them across multiple fault domains and update domains within an Azure datacenter.
Downtime: Availability sets help protect against planned and unplanned downtime events within a datacenter, enhancing the application’s resilience. However, availability sets themselves do not provide scaling capabilities.
Dynamically Defined Scaling: No, availability sets do not offer autoscaling functionality. They are focused on high availability, not scalability to handle increased traffic.
Completeness as a Solution: Availability sets address availability, but not the scaling requirement to handle dramatic traffic increases. Therefore, it is NOT a correct answer to the question about scaling strategy.
C. Create a VM scale set.
Correctness: VM scale sets are designed to deploy and manage a set of identical, auto-scaling virtual machines. They are ideal for horizontally scaling applications.
Downtime: VM scale sets are designed to minimize downtime during scaling operations. Scaling out (adding more instances) is generally a zero-downtime operation, especially when used with a load balancer and health probes.
Dynamically Defined Scaling: Yes, VM scale sets have built-in autoscaling capabilities. You can define rules based on various metrics (CPU utilization, memory, custom metrics, etc.) to automatically adjust the number of VM instances based on workload demands.
Completeness as a Solution: VM scale sets are a highly effective and complete solution for achieving both no downtime and dynamically defined scaling for VM-based applications. This is a very strong candidate for a correct answer.
D. Deploy application automatic horizontal scaling.
Correctness: Horizontal scaling (scaling out) involves adding more instances of the application to handle increased load. Automatic horizontal scaling means this addition of instances happens automatically based on demand.
Downtime: Horizontal scaling inherently aims to avoid downtime during scale-out. By distributing load across multiple instances, you can accommodate increased traffic without interrupting service to existing users.
Dynamically Defined Scaling: Yes, automatic horizontal scaling is by definition dynamic.
Completeness as a Solution: Deploying automatic horizontal scaling is conceptually the correct approach to meet the requirements. In Azure, VM scale sets are the primary and recommended way to implement automatic horizontal scaling for VM-based applications. This is also a very strong candidate for a correct answer.
E. Deploy a custom auto-scale implementation.
Correctness: A custom auto-scale implementation means designing and building your own scaling solution, potentially using Azure APIs, monitoring data, and custom logic.
Downtime: The potential for downtime depends entirely on how well the custom implementation is designed and executed. If done carefully, it could achieve zero-downtime scaling.
Dynamically Defined Scaling: Yes, a custom implementation can be designed to provide fully dynamic scaling, potentially based on very specific application metrics.
Completeness as a Solution: A custom auto-scale implementation can be a complete solution, but it is more complex, requires more development effort, and is harder to maintain than a managed service such as VM scale sets. It nevertheless remains a technically valid way to meet the requirements.
Choosing the three correct answers:
To select the three options that represent complete solutions, prioritize those that satisfy both the “no downtime” and the “dynamically defined scaling” requirements and that are practical, recommended Azure approaches.
C. Create a VM scale set and D. Deploy application automatic horizontal scaling are the most direct and best-practice Azure solutions for horizontal autoscaling of VM-based applications to meet these requirements. They are tightly related - VM scale sets are Azure’s managed service for horizontal autoscaling.
E. Deploy a custom auto-scale implementation is also technically a “complete solution” as it can achieve the goals, although it is less managed and more complex. It’s a valid option if there are very specific, unique requirements not easily met by VM scale sets, but for general autoscaling, VM scale sets are preferred.
A. Deploy application automatic vertical scaling is less suitable due to potential downtime and is generally not the best approach for handling dramatic traffic increases in web applications where horizontal scaling is more scalable and resilient.
B. Create a VM availability set is not a scaling solution at all.
Therefore, the three options that represent complete solutions for dynamic scaling with minimal downtime are C, D, and E.
Final Answer: C, D, and E.
You develop a web app on a tier D1 App Service plan by using the Web Apps feature of Microsoft Azure App Service.
Spikes in traffic have caused increases in page load times.
You need to ensure that the web app automatically scales when CPU load is about 85 percent and minimize costs.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Configure the web App to the premium App Service tier
Configure a scale condition
Configure the web app to the standard App service tier
Enable autoscaling on the web app
Add a Scale rule
Switch to an Azure App Services consumption plan
To implement autoscaling for an Azure Web App based on CPU load and minimize costs, you should perform the following actions in sequence:
Configure the web app to the standard App service tier:
Reason: The D1 App Service plan is a Dev/Test tier and does not support autoscaling. To enable autoscaling, you must upgrade to a tier that supports it. The Standard App Service tier is a suitable and cost-effective option that supports autoscaling. While Premium tiers also support autoscaling, Standard tier is generally less expensive and sufficient for many production workloads where cost minimization is a priority.
Enable autoscaling on the web app:
Reason: After moving to a suitable App Service tier, you need to explicitly enable the autoscaling feature for your web app. This activates the autoscaling engine for the app.
Configure a scale condition:
Reason: You need to define the trigger for autoscaling. In this case, the requirement is to scale when the “CPU load is about 85 percent”. You need to configure a scale condition based on the CpuPercentage metric and set a threshold of 85%.
Add a Scale rule:
Reason: Once the scale condition is defined, you need to specify the action to take when the condition is met. This is done by adding a scale rule. The scale rule will define how to scale (e.g., increase instance count by a certain number) when the CPU load reaches 85%. You can also configure scale-in rules to reduce instances when load decreases, further optimizing costs.
Why other options are not in the correct sequence or not suitable:
Configure the web App to the premium App Service tier: While Premium tier also supports autoscaling, it’s generally more expensive than Standard. For cost minimization, starting with Standard tier is more appropriate. Premium might be considered later if Standard proves insufficient for performance or features, but not as the first step for cost-conscious scaling.
Switch to an Azure App Services consumption plan: Consumption plan is a serverless hosting option where you pay per-second consumption. While it autoscales automatically, it is a different hosting model than the dedicated instance-based App Service plans (like Standard and Premium). Switching to Consumption plan is a significant architectural change and might not be desired or suitable if the application was initially designed for a dedicated App Service plan. It might also lead to less predictable costs if traffic spikes are very high.
Add an Azure DNS zone: Azure DNS zones are for managing DNS records and are not related to App Service autoscaling.
Correct Sequence of Actions:
Configure the web app to the standard App service tier
Enable autoscaling on the web app
Configure a scale condition
Add a Scale rule
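As a rough illustration of the first action, the plan can be moved from D1 to the Standard tier with Azure PowerShell; the plan and resource group names below (Plan1, RG1) are placeholders, and the autoscale setting itself (the scale condition and the CpuPercentage scale rule) is then created in the Azure portal or with the Az.Monitor cmdlets:

# Move the existing App Service plan from the D1 (Shared) tier to Standard (names are hypothetical)
Set-AzAppServicePlan -ResourceGroupName "RG1" -Name "Plan1" -Tier "Standard"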
You are implementing authentication for applications in your company. You plan to implement self-service password reset (SSPR) and multifactor authentication
(MFA) in Azure Active Directory (Azure AD).
You need to select authentication mechanisms that can be used for both MFA and SSPR.
Which two authentication methods should you use? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Short Message Service (SMS) messages
B. Azure AD passwords/Authentication App
C. Email addresses
D. Security questions
E. App passwords
The correct answers are A. Short Message Service (SMS) messages and B. Azure AD passwords/Authentication App.
Here’s why:
A. Short Message Service (SMS) messages
MFA: SMS is a common and widely supported method for multi-factor authentication. Azure AD can send a verification code via SMS to a user’s phone, which they must enter to complete the MFA process.
SSPR: SMS is also a standard method for self-service password reset. Users can choose to receive a verification code via SMS to their registered phone number as part of the password reset process.
B. Azure AD passwords/Authentication App (Interpreted as Authentication App - e.g., Microsoft Authenticator)
MFA: Authentication apps (like Microsoft Authenticator, Google Authenticator, etc.) are a strong and recommended method for MFA. They can provide push notifications or generate Time-based One-Time Passcodes (TOTP) that users use for verification.
SSPR: Authentication apps are also supported for self-service password reset. Users can use push notifications or TOTP codes from their authenticator app to verify their identity and reset their password. The phrase “Azure AD passwords” in this option is a bit misleading. It likely refers to using the Authentication App method, not the password itself as an MFA or SSPR mechanism.
Why other options are incorrect:
C. Email addresses
While email addresses can be used for SSPR, it is not a recommended method for MFA due to security concerns. Email accounts can be compromised, making it a weaker second factor. While Azure AD technically allows email as an MFA method in some configurations, it’s generally discouraged for security best practices. Since the question asks for methods usable for both MFA and SSPR, and email is weak for MFA, it’s not the best choice.
D. Security questions
Security questions are strongly discouraged for both MFA and SSPR. They are inherently insecure as answers are often easily guessable or publicly available. Microsoft is actively moving away from security questions as an authentication method due to security vulnerabilities.
E. App passwords
App passwords are not an authentication method for MFA or SSPR. App passwords are used as a workaround for legacy applications that do not support modern authentication (like MFA). They are generated for specific applications to bypass MFA requirements for those apps, not as an MFA or SSPR method themselves.
Therefore, the two authentication methods that are genuinely and commonly used for both MFA and SSPR in Azure AD are SMS messages and Authentication Apps.
Final Answer: The final answer is
A, B
HOTSPOT
You create a virtual machine scale set named Scale1. Scale1 is configured as shown in the following exhibit.
Create a virtual machine scale set (Scaling tab)
Instance
Initial instance count *
4
Scaling
Scaling policy *
Manual Custom
Minimum number of VMs *
2
Maximum number of VMs *
20
Scale out
CPU threshold (%) *
80
Duration in minutes *
5
Number of VMs to increase by *
2
Scale in
CPU threshold (%) *
30
Number of VMs to decrease by *
4
Diagnostic logs
Collect diagnostic logs from Autoscale
Disabled Enabled
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
If Scale1 is utilized at 85 percent for six minutes after it is deployed, Scale1 will be running [answer choice].
2 virtual machines
4 virtual machines
6 virtual machines
10 virtual machines
20 virtual machines
If Scale1 is first utilized at 25 percent for six minutes after it is deployed, and then utilized at
50 percent for six minutes, Scale1 will be running [answer choice].
2 virtual machines
4 virtual machines
6 virtual machines
8 virtual machines
10 virtual machines
Step 1: Analyze Scenario 1
Initial State: The scale set starts with 4 virtual machines as defined by the “Initial instance count”.
CPU Utilization: The utilization is at 85 percent for six minutes.
Scale-Out Threshold: The scale-out threshold is set at 80 percent.
Scale-Out Duration: The duration to trigger a scale-out is 5 minutes.
Scale-Out Action: Increase the number of VMs by 2.
Since the CPU utilization (85%) is above the scale-out threshold (80%) and the duration (6 minutes) is longer than the required duration (5 minutes), a scale-out event will be triggered.
Calculation: Initial VMs (4) + VMs to increase by (2) = 6 virtual machines
Step 2: Analyze Scenario 2
Initial State: The scale set starts with 4 virtual machines.
First Utilization Period: 25 percent CPU for six minutes.
Second Utilization Period: 50 percent CPU for six minutes.
Scale-In Threshold: The scale-in threshold is set at 30 percent.
Scale-Out Threshold: The scale-out threshold is set at 80 percent.
First Utilization Period (25% CPU):
25% CPU is below the scale-in threshold (30%), so a scale-in event is triggered.
The scale-in rule decreases the instance count by 4. Reducing 4 instances by 4 would fall below the configured minimum, so the count is clamped to the minimum of 2 VMs.
The VM count drops to 2.
Second Utilization Period (50% CPU):
50% CPU is above the scale-in threshold (30%) and below the scale-out threshold (80%), so no scaling action is triggered.
The VM count remains at 2.
Step 3: Select the Answers
Based on the analysis:
Scenario 1 (85 percent for six minutes): Scale1 will be running 6 virtual machines.
Scenario 2 (25 percent then 50 percent for six minutes each): Scale1 will be running 2 virtual machines.
Correct Answer Choices:
If Scale1 is utilized at 85 percent for six minutes after it is deployed, Scale1 will be running 6 virtual machines.
If Scale1 is first utilized at 25 percent for six minutes after it is deployed, and then utilized at 50 percent for six minutes, Scale1 will be running 2 virtual machines.
Final Answer:
If Scale1 is utilized at 85 percent for six minutes after it is deployed, Scale1 will be running 6 virtual machines.
If Scale1 is first utilized at 25 percent for six minutes after it is deployed, and then utilized at 50 percent for six minutes, Scale1 will be running 2 virtual machines.
You plan to automate the deployment of a virtual machine scale set that uses the Windows Server 2016 Datacenter image.
You need to ensure that when the scale set virtual machines are provisioned, they have web server components installed.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Upload a configuration script.
Create an Azure policy.
Modify the extensionProfile section of the Azure Resource Manager template.
Create a new virtual machine scale set in the Azure portal.
Create an automation account.
To automate the deployment of a virtual machine scale set with web server components installed on each VM instance, you need a mechanism to execute a configuration script during the VM provisioning process. Let’s analyze each option:
A. Upload a configuration script.
Correct. You absolutely need a configuration script. This script will contain the commands necessary to install the web server components (like IIS on Windows Server). This script will be executed on each VM instance as it is provisioned. The script could be a PowerShell script for Windows Server.
B. Create an Azure policy.
Incorrect. Azure Policy is primarily used for governance, compliance, and enforcing standards after resources are deployed. While you could potentially use Azure Policy to audit or remediate VMs that don’t have web server components installed after they are running, it is not the mechanism to initiate the installation of web server components during the VM scale set provisioning process. Policy is reactive, not proactive in this initial setup context.
C. Create a new virtual machine scale set in the Azure portal.
Incorrect. Creating a virtual machine scale set in the Azure portal is the action of deploying the scale set itself. However, simply creating the scale set does not automatically install web server components. The portal is just the interface for deployment. You need to configure the scale set deployment to include the installation of web server components, which is done through other mechanisms.
D. Create an automation account.
Incorrect. An Azure Automation account hosts runbooks for ongoing operational automation. It is not required to install software while the scale set instances are being provisioned; the Custom Script Extension configured in the ARM template handles that directly.
E. Modify the extensionProfile section of the Azure Resource Manager template.
Correct. The extensionProfile section within an Azure Resource Manager (ARM) template is specifically designed to configure virtual machine extensions. VM extensions are the standard way to run post-deployment configuration tasks on Azure VMs and VM scale sets. You can use the CustomScriptExtension within the extensionProfile to execute a script (like the one uploaded in option A) on each VM instance during provisioning. This is the ideal and recommended method for automating software installation during VM scale set deployment.
Explanation of why A and E are the correct pair:
Upload a configuration script (A): You need a script that actually performs the web server component installation. This script will contain the necessary commands for Windows Server 2016 (e.g., PowerShell commands to install the Web-Server role). You will need to store this script in an accessible location, such as Azure Blob Storage, so that the VM instances can download and execute it.
Modify the extensionProfile section of the Azure Resource Manager template (E): You will use an ARM template to define your virtual machine scale set deployment. Within the extensionProfile of the ARM template, you will configure a CustomScriptExtension. This extension will:
Point to the location of your configuration script (uploaded in step A).
Specify the command to execute the script on each VM instance as part of the provisioning process.
By combining these two actions, you ensure that when the VM scale set is deployed using the ARM template, each VM instance will automatically download and execute your configuration script, thus installing the web server components during provisioning.
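For reference, the same result can also be reached outside the ARM template by adding the Custom Script Extension to the scale set model with Azure PowerShell. This is only a sketch; the scale set name, resource group, and script URL are assumptions, and the referenced script would contain the Install-WindowsFeature Web-Server command:

# Hypothetical names; the fileUris entry must point to your uploaded configuration script
$vmss = Get-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1"
$settings = @{
  "fileUris" = @("https://mystorage.blob.core.windows.net/scripts/install-iis.ps1")
  "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File install-iis.ps1"
}
# Add the Custom Script Extension to the scale set model, then push the updated model
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "installWebServer" `
  -Publisher "Microsoft.Compute" -Type "CustomScriptExtension" `
  -TypeHandlerVersion "1.10" -Setting $settings
Update-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1" -VirtualMachineScaleSet $vmss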
Final Answer: The final answer is
Upload a configuration script, Modify the extensionProfile section of the Azure Resource Manager template.
HOTSPOT
You have several Azure virtual machines on a virtual network named VNet1. Vnet1 has two subnets that have 10.2.0.0/24 and 10.2.9.0/24 address spaces.
You configure an Azure Storage account as shown in the following exhibit.
contoso20 | Networking (Storage account)
Firewalls and virtual networks: Selected networks
Virtual networks:
Virtual Network: VNET1 (1 subnet) | Subnet: Prod | Address range: 10.2.0.0/24 | Endpoint Status: Enabled | Resource Group: RG1 | Subscription: Visual Studio Premium with MSDN …
Network Routing: Routing preference: Microsoft network routing (selected)
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Statement 1: The virtual machines on the 10.2.9.0/24 subnet will have [answer choice] network connectivity to the file shares in the storage account.
always / during a backup / never
Statement 2: Azure Backup will be able to back up the unmanaged hard disks of the virtual machines in the storage account [answer choice].
always / during a backup / never
Statement 1: The virtual machines on the 10.2.9.0/24 subnet will have [answer choice] network connectivity to the file shares in the storage account.
Analysis: The Storage account’s “Firewalls and virtual networks” setting is configured to “Selected networks”.
Under “Virtual networks”, only VNET1 Subnet 1 with address range 10.2.0.0/24 is listed and enabled.
The VMs in question are on the 10.2.9.0/24 subnet.
Since the 10.2.9.0/24 subnet is not explicitly listed as an allowed network in the Storage account’s firewall settings, traffic from VMs in this subnet will be blocked by the storage account firewall.
Answer: never
Statement 2: Azure Backup will be able to back up the unmanaged hard disks of the virtual machines in the storage account [answer choice].
Analysis: Azure Backup is an Azure service that needs to access the storage account to store backups.
The storage account is configured to use “Selected networks” and has authorized subnet 10.2.0.0/24.
The “Endpoint Status” for VNET1 Subnet 1 (10.2.0.0/24) is Enabled, indicating that Service Endpoints for Storage are enabled on this subnet. This is crucial for allowing traffic from the VNet to the Storage service.
The “Network Routing” is set to Microsoft network routing. This setting generally allows Azure services to communicate with the storage account over the Azure backbone network, potentially bypassing some internet routing restrictions when accessing from within Azure.
Reasoning: Azure Backup, as a trusted Azure service, is designed to work with storage accounts secured by virtual network firewalls when using service endpoints and Microsoft network routing. When you initiate a backup of a VM, Azure Backup service attempts to access the storage account to store the backup. The service endpoint configuration and Microsoft Network Routing are intended to allow authorized Azure services like Backup to securely access the storage account, even when network restrictions are in place for general internet traffic or traffic from unauthorized networks.
Answer: always (While “during a backup” might seem plausible, the service endpoint configuration and “Microsoft network routing” are meant to ensure continuous secure access for authorized services like Backup. “Always” in this context implies that the capability is there for Azure Backup to function whenever it needs to perform a backup operation, not just during a specific time window. “Always” is the more appropriate option given the intent of service endpoints and Microsoft network routing for authorized Azure services.)
Final Answer:
The virtual machines on the 10.2.9.0/24 subnet will have never network connectivity to the file shares in the storage account.
Azure Backup will be able to back up the unmanaged hard disks of the virtual machines in the storage account always.
DRAG DROP
You have virtual machines (VMs) that run a mission-critical application.
You need to ensure that the VMs never experience down time.
What should you recommend? To answer, drag the appropriate solutions to the correct scenarios. Each solution may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point
Solutions
Fault Domain
Availability Zone
Availability Set
Scale Sets
Scenario
Maintain application performance across identical VMs: Solution
Maintain application availability when an Azure datacenter fails: Solution
Maintain application performance across different VMs: Solution
Scenario 1: Maintain application performance across identical VMs:
Solution: Scale Sets
Explanation: Virtual Machine Scale Sets are designed to deploy and manage a set of identical, auto-scaling virtual machines. They are ideal for distributing application load across multiple identical VMs to maintain performance and handle increased traffic. A load balancer is typically used in front of a scale set to distribute traffic evenly across the instances.
Scenario 2: Maintain application availability when an Azure datacenter fails:
Solution: Availability Zone
Explanation: Availability Zones are physically separate datacenters within an Azure region. By deploying VMs across Availability Zones, you ensure that if one datacenter (zone) fails, your application remains available in the other zones. This provides the highest level of availability and resilience against datacenter-level failures.
Scenario 3: Maintain application performance across different VMs:
Solution: Availability Set
Explanation: Availability Sets are used to improve the availability of VMs within a single datacenter. They distribute VMs across fault domains (power and network isolation) and update domains (planned maintenance isolation). While primarily for availability, they also help in distributing load and maintaining performance to some extent, even if the VMs are not strictly identical. For scenarios where VMs might have different configurations or sizes but still need to be highly available within a region, Availability Sets are suitable. While Scale Sets are better for identical VMs, Availability Sets are more flexible for different VMs needing availability.
Incorrect Solutions and Why:
Fault Domain: Fault Domain is a component of Availability Sets and Availability Zones. It’s not a standalone solution but a concept describing how VMs are isolated within a datacenter. You don’t directly deploy a “Fault Domain”.
Final Answer:
Scenario 1: Maintain application performance across identical VMs: Scale Sets
Scenario 2: Maintain application availability when an Azure datacenter fails: Availability Zone
Scenario 3: Maintain application performance across different VMs: Availability Set
Your company has an office in Seattle.
You have an Azure subscription that contains a virtual network named VNET1.
You create a site-to-site VPN between the Seattle office and VNET1.
VNET1 contains the subnets shown in the following table.
Name IP address space
Subnet1 10.1.1.0/24
GatewaySubnet 10.1.200.0/28
You need to route all Internet-bound traffic from Subnet1 to the Seattle office.
What should you create?
a route for GatewaySubnet that uses the virtual network gateway as the next hop
a route for Subnet1 that uses the local network gateway as the next hop
a route for Subnet1 that uses the virtual network gateway as the next hop
a route for GatewaySubnet that uses the local network gateway as the next hop
To route all Internet-bound traffic from Subnet1 to the Seattle office via a site-to-site VPN, you need to implement forced tunneling. Forced tunneling in Azure involves creating a user-defined route (UDR) that directs all internet-bound traffic (0.0.0.0/0) from a subnet to your on-premises network through the VPN tunnel.
Let’s analyze each option:
a route for GatewaySubnet that uses the virtual network gateway as the next hop: This is incorrect. GatewaySubnet is reserved for the Azure VPN gateway. Routes applied to GatewaySubnet typically affect traffic to or through the gateway itself, not traffic originating from other subnets and destined for the internet.
a route for Subnet1 that uses the local network gateway as the next hop: This is incorrect. The Local Network Gateway in Azure represents your on-premises VPN device. It is the destination endpoint of the VPN tunnel in Azure’s configuration. Using it as the next hop in a route for Subnet1 is conceptually wrong for routing traffic through the VPN from Azure towards on-premises.
a route for Subnet1 that uses the virtual network gateway as the next hop: This is correct. To implement forced tunneling, you need to create a route for Subnet1 (where your VMs are located) that directs all internet-bound traffic to the Virtual Network Gateway. The Virtual Network Gateway is the Azure-side VPN gateway resource. By setting the next hop to the Virtual Network Gateway for the 0.0.0.0/0 destination prefix, you are instructing Azure to send all traffic destined for the internet (and not already within the VNet or on-premises networks known by BGP or static routes) through the VPN tunnel. This traffic will then be routed to your Seattle office via the site-to-site VPN connection.
a route for GatewaySubnet that uses the local network gateway as the next hop: This is incorrect for the same reasons as the first option. Routes on GatewaySubnet are not used to control internet-bound traffic from Subnet1.
To implement forced tunneling for internet-bound traffic from Subnet1 to the Seattle office, you should:
Create a Route Table.
Add a Route to the Route Table with the following properties:
Destination prefix: 0.0.0.0/0 (This represents all internet destinations)
Next hop type: Virtual network gateway
Associate the Route Table to Subnet1.
This configuration will ensure that any traffic originating from VMs in Subnet1 that is destined for the internet will be routed through the Azure Virtual Network Gateway and then over the site-to-site VPN connection to your Seattle office.
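A minimal Azure PowerShell sketch of these steps follows; the route table name, resource group, and region are assumptions, while the VNet, subnet, and address prefix come from the scenario:

# Create a route table and a default route that sends 0.0.0.0/0 to the VPN gateway
$rt = New-AzRouteTable -Name "RT-ForcedTunnel" -ResourceGroupName "RG1" -Location "westus2"
Add-AzRouteConfig -Name "DefaultToOnPrem" -AddressPrefix "0.0.0.0/0" `
  -NextHopType "VirtualNetworkGateway" -RouteTable $rt | Set-AzRouteTable

# Associate the route table with Subnet1
$vnet = Get-AzVirtualNetwork -Name "VNET1" -ResourceGroupName "RG1"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" `
  -AddressPrefix "10.1.1.0/24" -RouteTable $rt | Set-AzVirtualNetwork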
Final Answer: The final answer is
a route for Subnet1 that uses the virtual network gateway as the next hop
You create an Azure virtual machine named VM1 in a resource group named RG1.
You discover that VM1 performs slower than expected.
You need to capture a network trace on VM1.
What should you do?
From the VM1 blade, configure Connection troubleshoot.
From Diagnostic settings for VM1, configure the performance counters to include network counters.
From the VM1 blade, install performance diagnostics and run advanced performance analysis.
From Diagnostic settings for VM1, configure the log level of the diagnostic agent.
Let’s analyze each option to determine the best way to capture a network trace on an Azure VM (VM1) that is performing slower than expected.
Option 1: From the VM1 blade, configure Connection troubleshoot.
Explanation: Azure Network Watcher’s Connection troubleshoot tool is designed to test and diagnose connectivity issues between two endpoints, such as Azure VMs, internet endpoints, or on-premises resources. It checks reachability, latency, and hop-by-hop routes.
Relevance to Network Trace: While Connection troubleshoot is useful for identifying connectivity problems, it does not capture a detailed network trace (like a .pcap file). It provides insights into connectivity paths and potential bottlenecks, but not packet-level information. Therefore, this is not the correct option for capturing a network trace.
Option 2: From Diagnostic settings for VM1, configure the performance counters to include network counters.
Explanation: Azure Monitor Diagnostic settings allow you to collect performance metrics and logs from Azure resources, including VMs. You can configure which performance counters to collect, including network-related counters (e.g., Network Interface Bytes Received/sec, Network Interface Bytes Sent/sec).
Relevance to Network Trace: Configuring performance counters will collect aggregated network performance metrics, which can be helpful for identifying trends and overall network utilization. However, performance counters are not a network trace. They do not capture the actual packets transmitted over the network. This option provides performance data, but not a detailed packet capture.
Option 3: From the VM1 blade, install performance diagnostics and run advanced performance analysis.
Explanation: Azure Performance Diagnostics is a VM extension that helps diagnose performance issues on Azure VMs. It collects performance data, configuration information, and logs to identify bottlenecks. It can perform advanced analysis and provide recommendations.
Relevance to Network Trace: Azure Performance Diagnostics is focused on identifying performance bottlenecks related to CPU, memory, disk, and potentially some network aspects through performance metrics and logs. It is not designed to capture raw network traces in the way that tools like Wireshark or tcpdump do. It’s more about high-level performance analysis, not packet-level inspection.
Option 4: From Diagnostic settings for VM1, configure the log level of the diagnostic agent.
Explanation: Diagnostic settings also allow you to configure the log level for the Azure Diagnostics extension. This controls the verbosity of logs collected from the VM’s operating system and applications (e.g., Windows Event Logs, Syslog).
Relevance to Network Trace: Configuring the log level affects the amount of logging data collected, but it does not capture network traffic. Diagnostic logs are system and application logs, not network packet captures. This option is irrelevant to capturing a network trace.
Conclusion:
None of the options directly provide a feature in the Azure portal to initiate a traditional packet capture or network trace in the same way that you would use tools like Wireshark or tcpdump on a local machine.
However, if the question is asking for the closest Azure portal related action to get some form of network-related information to diagnose performance issues, then Option 2 is the most relevant because:
It directly involves network counters, which are performance metrics related to network activity.
While not a packet capture, network performance counters can provide insights into network throughput, errors, and utilization, which can be helpful in diagnosing network-related performance problems.
To get a true network trace (packet capture) on an Azure VM, you would typically need to:
Connect to the VM via RDP or SSH.
Install a network capture tool on the VM’s operating system (e.g., tcpdump on Linux, netsh trace or Wireshark on Windows).
Run the network capture tool from within the VM to capture network traffic directly on the VM’s network interface.
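For example, on a Windows VM such as VM1 you could run the built-in netsh utility from an elevated session inside the VM (the file path is only an example):

# Start a packet capture to an ETL file, capped at 512 MB
netsh trace start capture=yes tracefile=C:\Temp\vm1-trace.etl maxsize=512
# Reproduce the slow behavior, then stop the capture and collect the trace file
netsh trace stop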
Considering the available options and the context of Azure portal actions, Option 2 is the closest, albeit imperfect, answer if we must choose from the given options within the Azure portal context for gathering network-related performance data.
Final Answer: The final answer is
From Diagnostic settings for VM1, configure the performance counters to include network counters.
You have an Azure subscription named Subscription1 that contains an Azure virtual network named VNet1. VNet1 connects to your on-premises network by using Azure ExpressRoute.
You need to connect VNet1 to the on-premises network by using a site-to-site VPN. The solution must minimize cost.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Create a gateway subnet.
Create a VPN gateway that uses the VpnGw1 SKU.
Create a connection.
Create a local site VPN gateway.
Create a VPN gateway that uses the Basic SKU.
The correct three actions are:
Create a gateway subnet.
Create a VPN gateway that uses the Basic SKU.
Create a connection.
Here’s why each action is correct and why the others are incorrect:
Correct Actions:
Create a gateway subnet: This is a mandatory prerequisite for deploying a VPN gateway in Azure. The gateway subnet is a dedicated subnet within your VNet that is specifically used to host the VPN gateway service. Azure requires this subnet to properly route traffic to and from the VPN gateway.
Create a VPN gateway that uses the Basic SKU: To minimize cost, you should choose the Basic SKU for the VPN gateway. The Basic SKU is the lowest-cost option for Azure VPN Gateways and is suitable for dev/test environments or scenarios with moderate bandwidth and connection requirements. VpnGw1 and other higher SKUs offer higher bandwidth, more connections, and features like zone redundancy, but they are more expensive. Since the requirement is to minimize cost, Basic SKU is the optimal choice.
Create a connection: After creating the VPN gateway, you need to establish a connection between the Azure VPN gateway and your on-premises VPN device (represented in Azure by a Local Network Gateway object, which is implicitly needed for this connection to work). The connection object defines the shared key (pre-shared key or IKE shared key), connection type (site-to-site), and other parameters needed for the VPN tunnel to be established.
Incorrect Actions:
Create a VPN gateway that uses the VpnGw1 SKU: While VpnGw1 is a valid SKU for a VPN gateway, it is not the most cost-effective option. VpnGw1 and higher SKUs are designed for higher performance and more demanding scenarios. Since the requirement is to minimize cost, VpnGw1 is not the correct choice.
Create a local site VPN gateway: This option uses non-standard terminology. Azure represents your on-premises VPN device with a Local Network Gateway, and one is needed before the tunnel can come up, but the phrasing here does not match any Azure resource you actually create. Within the given options, Create a connection is the action that directly establishes the VPN tunnel from the Azure side once the gateway is deployed; the on-premises representation is treated as part of configuring that connection.
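A condensed Azure PowerShell sketch of the three actions might look like the following. Every name, address range, shared key, and the on-premises public IP are placeholders; only VNet1 and the requirement for a GatewaySubnet and a Basic SKU gateway come from the scenario:

# 1. Create the gateway subnet in VNet1 (address range is an assumption)
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1"
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.255.0/27" -VirtualNetwork $vnet
$vnet = $vnet | Set-AzVirtualNetwork

# 2. Create a VPN gateway that uses the Basic SKU
$pip = New-AzPublicIpAddress -Name "VNet1GW-pip" -ResourceGroupName "RG1" -Location "eastus" `
  -Sku Basic -AllocationMethod Dynamic
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipConf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
$gw = New-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "RG1" -Location "eastus" `
  -IpConfigurations $ipConf -GatewayType Vpn -VpnType RouteBased -GatewaySku Basic

# 3. Create the connection (the local network gateway represents the on-premises VPN device)
$lng = New-AzLocalNetworkGateway -Name "OnPremGW" -ResourceGroupName "RG1" -Location "eastus" `
  -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.0.0/16"
New-AzVirtualNetworkGatewayConnection -Name "S2S-Connection" -ResourceGroupName "RG1" -Location "eastus" `
  -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng -ConnectionType IPsec -SharedKey "examplekey123"

The Basic SKU keeps the recurring gateway cost to a minimum, at the expense of bandwidth and feature support compared with the VpnGw SKUs.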
Your network contains an on-premises Active Directory domain named contoso.com. The domain contains the users shown in the following table.
Name Member of
User1 Domain Admins
User2 Domain Users
User3 ADSyncAdmins
User4 Account Operators
You plan to install Azure AD Connect and enable SSO.
You need to specify which user to use to enable SSO. The solution must use the principle of least privilege.
Which user should you specify?
User3
User2
User1
User4
The correct answer is User3. Here’s why:
Principle of Least Privilege: The question explicitly states the solution must use the principle of least privilege. This means we need to choose the user with the minimum necessary permissions to enable SSO during Azure AD Connect installation.
Let’s analyze each user:
User1 (Domain Admins): Domain Admins is the highest level of administrative privilege in an Active Directory domain. They have complete control over the domain. While a Domain Admin can definitely enable SSO, it violates the principle of least privilege. Using a Domain Admin account for this task gives far more permissions than are actually required and is a security risk.
User2 (Domain Users): Domain Users is the default group for regular domain users. They have very limited administrative rights. A Domain User account will not have the necessary permissions to enable SSO during Azure AD Connect installation. This process requires creating service accounts and potentially modifying domain configuration, which Domain Users cannot do.
User3 (ADSyncAdmins): The name “ADSyncAdmins” strongly suggests this group is specifically designed for Azure AD Connect administration. It’s a common practice to create a dedicated administrative group for Azure AD Connect with the necessary permissions. This group is likely granted the minimum required permissions to perform tasks related to Azure AD Connect, including enabling SSO. This aligns perfectly with the principle of least privilege.
User4 (Account Operators): Account Operators have permissions to create and manage user and group accounts within the domain. While they have more privileges than Domain Users, Account Operators generally do not have the necessary permissions to enable SSO during Azure AD Connect installation. SSO configuration often involves creating computer accounts, managing service principals, and potentially modifying domain-level Kerberos settings, which are beyond the scope of Account Operators.
Why User3 is the best choice:
Least Privilege: User3 (ADSyncAdmins) is likely designed to have just enough permissions for Azure AD Connect tasks, adhering to the principle of least privilege.
Purpose-Built Group: The name clearly indicates its purpose is related to AD synchronization, making it the most logical choice for managing Azure AD Connect and its features like SSO.
Security Best Practice: Using a dedicated, least-privileged administrative account for services like Azure AD Connect is a security best practice. It limits the potential damage if the account is compromised.
HOTSPOT
You have an Azure subscription that contains the resource groups shown in the following table.
Name Region
RG1 East US
RG2 West US
RG1 contains the virtual machines shown in the following table.
Name Region
VM1 West US
VM2 West US
VM3 West US
VM4 West US
RG2 contains the virtual machines shown in the following table.
Name Region
VM5 East US 2
VM6 East US 2
VM7 West US
VM8 West US 2
All the virtual machines are configured to use premium disks and are accessible from the Internet.
VM1 and VM2 are in an availability set named AVSET1. VM3 and VM4 are in the same availability zone. VM5 and VM6 are in different availability zones.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Statements Yes No
VM1 is eligible for a Service Level Agreement (SLA) of 99.95 percent.
VM3 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
VM5 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
Statement 1: VM1 is eligible for a Service Level Agreement (SLA) of 99.95 percent.
Analysis: VM1 is in an availability set named AVSET1 along with VM2. Virtual machines deployed in an availability set within the same region are protected from planned and unplanned maintenance events. Azure guarantees a 99.95% uptime SLA for virtual machines deployed in an availability set.
Conclusion: Yes.
Statement 2: VM3 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
Analysis: VM3 and VM4 are in the same availability zone. While availability zones provide high availability by isolating resources to specific physical locations within an Azure region, deploying VMs in the same availability zone does not qualify for the 99.99% SLA. To achieve a 99.99% SLA, you need to deploy VMs across different availability zones within the same region. VMs placed in a single availability zone and not in an availability set are covered only by the single-instance VM SLA of 99.9 percent, which applies here because the VMs use premium disks.
Conclusion: No.
Statement 3: VM5 is eligible for a Service Level Agreement (SLA) of 99.99 percent.
Analysis: VM5 and VM6 are in different availability zones. When you deploy virtual machines across availability zones, Azure guarantees a 99.99% uptime SLA. This is because availability zones are physically separate datacenters within an Azure region, providing fault tolerance against datacenter-level failures.
Conclusion: Yes.
Therefore, the correct answers are:
Statement 1: Yes
Statement 2: No
Statement 3: Yes
A company plans to use third-party application software to perform complex data analysis processes. The software will use up to 500 identical virtual machines (VMs) based on an Azure Marketplace VM image.
You need to design the infrastructure for the third-party application server. The solution must meet the following requirements:
– The number of VMs that are running at any given point in time must change when the user workload changes.
– When a new version of the application is available in Azure Marketplace it must be deployed without causing application downtime.
– Use VM scale sets.
– Minimize the need for ongoing maintenance.
Which two technologies should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
single placement group
single storage account
managed disks
autoscale
Let’s analyze each option in the context of the requirements:
Single placement group:
Purpose: Placement groups in VM scale sets control the physical placement of VMs. They can be either single or regional. Single placement groups limit the scale set to a single fault domain and upgrade domain within a region. While they can be used for scenarios requiring low latency and tight proximity within a scale set, they don’t directly address the requirements of autoscaling, zero-downtime deployments for application updates, or minimizing maintenance in the context of this specific question.
Relevance to Requirements: Indirectly related to availability within a single region, but not directly to scaling, zero-downtime updates, or minimizing maintenance in the way the question is framed.
Single storage account:
Purpose: In older (unmanaged disk) VM scale sets, you would need to manage storage accounts for VM disks. However, with managed disks, Azure handles the storage account management. Recommending a “single storage account” is not directly relevant to the requirements in the context of modern VM scale sets using managed disks. In fact, for very large scale sets, Azure might distribute disks across multiple storage accounts under the hood for scalability and performance. With managed disks, the user doesn’t need to manage storage accounts directly.
Relevance to Requirements: Not directly relevant, and less so with managed disks being the best practice.
Managed disks:
Purpose: Managed disks simplify disk management for Azure VMs and VM scale sets. They handle storage account creation and management in the background, providing better reliability, scalability, and management compared to unmanaged disks. Using managed disks minimizes the need for ongoing maintenance related to storage accounts, as Azure handles this.
Relevance to Requirements: Directly contributes to minimizing ongoing maintenance by abstracting away storage account management. Also, managed disks are a prerequisite for many advanced VM scale set features, including autoscale and easier updates.
Autoscale:
Purpose: Autoscale is a core feature of VM scale sets that allows the number of VMs to be automatically adjusted based on metrics (like CPU utilization, memory, custom metrics) or a schedule. This directly addresses the requirement for the number of VMs to change with user workload. Autoscale also helps minimize ongoing maintenance by automating the scaling process, reducing the need for manual intervention to adjust VM counts. Furthermore, rolling upgrades (for zero-downtime deployments) are often triggered or work in conjunction with autoscaling to manage capacity during updates.
Relevance to Requirements: Directly addresses scaling based on workload and minimizes ongoing maintenance by automating scaling. Indirectly related to zero-downtime deployments as scaling is often part of deployment strategies.
Conclusion:
The two technologies that best address the requirements are:
Managed disks: Minimizes ongoing maintenance related to storage management and is a best practice for VM scale sets.
Autoscale: Directly addresses the dynamic scaling requirement and minimizes manual maintenance related to scaling.
While neither option directly and completely solves the zero-downtime deployment of Marketplace application updates on its own, using Managed disks is a foundational best practice for modern VM scale sets, and Autoscale is critical for dynamically adjusting resources based on workload and managing the scale set efficiently, which are key components of a well-designed, maintainable, and scalable infrastructure. Rolling upgrades within VM Scale Sets, often used for zero-downtime updates, rely on these underlying technologies.
Final Answer: The final answer is managed disks and autoscale.
You have a resource group named RG1 that contains the following:
– A virtual network that contains two subnets named Subnet1 and AzureFirewallSubnet
– An Azure Storage account named contososa1
– An Azure firewall deployed to AzureFirewallSubnet
You need to ensure that contososa1 is accessible from Subnet1 over the Azure backbone network.
What should you do?
Modify the Firewalls and virtual networks settings for contososa1.
Create a stored access policy for contososa1.
Implement a virtual network service endpoint.
Remove the Azure firewall.
The correct answer is Implement a virtual network service endpoint.
Here’s why:
Virtual Network Service Endpoints:
Purpose: Virtual network service endpoints are the recommended way to secure Azure service resources to only your virtual network. They extend your virtual network private address space and the identity of your virtual network to the Azure services over a direct connection.
Azure Backbone Network: When you use service endpoints, traffic from your virtual network to the Azure service (in this case, Azure Storage) travels over the Azure backbone network. This provides optimized routing, security, and performance compared to going over the public internet.
Security: Service endpoints also enable you to secure your Azure Storage account (contososa1) to only allow traffic originating from the specified subnet (Subnet1). This is achieved through the Storage Account’s firewall settings, which you configure after enabling the service endpoint.
Modify the Firewalls and virtual networks settings for contososa1:
While you will need to modify the “Firewalls and virtual networks settings” of the storage account, this is a subsequent step to enabling service endpoints. Simply modifying these settings without enabling a service endpoint on the subnet will not ensure that the traffic uses the Azure backbone network. This option is incomplete as a standalone solution.
Create a stored access policy for contososa1:
Stored access policies are used to generate Shared Access Signature (SAS) tokens. SAS tokens are for granting delegated access to storage resources with specific permissions and for a defined period. They are related to authentication and authorization, not to network connectivity or ensuring traffic traverses the Azure backbone network. SAS tokens do not address the requirement.
Remove the Azure firewall:
Removing the Azure firewall is a security risk and is not the correct approach. The Azure firewall is designed to enhance network security, not hinder access to storage from within the virtual network when properly configured. Removing it is counterproductive and does not help achieve the requirement of backbone network access in a secure manner.
In summary:
To ensure contososa1 is accessible from Subnet1 over the Azure backbone network, the fundamental step is to Implement a virtual network service endpoint for Azure Storage on Subnet1. After this, you would then configure the “Firewalls and virtual networks settings” of contososa1 to allow access from Subnet1.
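A brief Azure PowerShell sketch of those two steps, with the VNet name and subnet address prefix assumed for illustration (the resource group RG1 and storage account contososa1 come from the scenario):

# Enable the Microsoft.Storage service endpoint on Subnet1
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Subnet1" `
  -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork

# Allow Subnet1 through the storage account firewall and deny other networks
$subnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "RG1" |
  Get-AzVirtualNetworkSubnetConfig -Name "Subnet1"
Add-AzStorageAccountNetworkRule -ResourceGroupName "RG1" -Name "contososa1" -VirtualNetworkResourceId $subnet.Id
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "RG1" -Name "contososa1" -DefaultAction Deny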
Final Answer: The final answer is Implement a virtual network service endpoint.
You have an Azure subscription that contains 100 virtual machines.
You have a set of PowerShell scripts that validate the virtual machine environment.
You need to run the scripts whenever there is an operating system update on the virtual machines. The solution must minimize implementation time and recurring costs.
Which three resources should you use to implement the scripts? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
an alert action group
an Azure Monitor query
an Azure Automation runbook
a virtual machine that has network access to the 100 virtual machines
an alert rule
To meet the requirements of running PowerShell scripts on OS updates for 100 VMs while minimizing implementation time and recurring costs, we need to leverage Azure services that provide automation and monitoring capabilities. Let’s evaluate each option:
an alert action group:
Purpose: Action groups in Azure Monitor are used to define a collection of actions to perform when an alert is triggered. Actions can include sending notifications, calling webhooks, and crucially, running Azure Automation runbooks.
Relevance: This is a crucial component. We will need an action group to connect the alert (detecting OS updates) to the execution of the PowerShell scripts (validation scripts). Action groups are designed for automating responses to alerts.
an Azure Monitor query:
Purpose: Azure Monitor queries (typically Log Analytics queries) are used to retrieve and analyze data collected by Azure Monitor, such as logs and metrics. You can use queries to identify specific events or conditions within your Azure environment.
Relevance: While an Azure Monitor query itself doesn’t directly run scripts, it’s essential for defining the condition that triggers the script execution. We would use a query to detect OS update events in the Azure Activity Log or other relevant logs. This query would then be used as the basis for an alert rule. However, as a standalone resource for implementing the scripts, it’s less direct than other options. The alert rule is the resource that uses the query and triggers actions.
an Azure Automation runbook:
Purpose: Azure Automation runbooks allow you to automate tasks in Azure and hybrid environments using PowerShell or Python.
Relevance: This is the core component for running the PowerShell validation scripts. The runbook will contain the PowerShell scripts that validate the VM environment. We will trigger this runbook when an OS update is detected.
a virtual machine that has network access to the 100 virtual machines:
Purpose: A VM could be used as a jump box or control machine to manually run scripts against the 100 VMs.
Relevance: This option is not suitable for automated script execution based on OS updates and does not minimize implementation time or recurring costs. It would require manual scheduling or complex custom scripting to detect OS updates and trigger scripts, adding to implementation time and ongoing management overhead. It also incurs costs for running the VM continuously. The requirement is for an automated solution, making this option less desirable.
an alert rule:
Purpose: Azure Monitor alert rules are used to detect specific conditions in your Azure environment based on metrics, logs, or activity log events. When the condition is met, the alert rule triggers defined actions.
Relevance: This is essential. We will need an alert rule to monitor for OS update events on the virtual machines. The alert rule will use a condition (possibly based on an Azure Monitor query that detects OS update events in activity logs) and be configured to trigger the action group (which in turn runs the Automation Runbook).
The Three Essential Resources:
To implement the automated PowerShell script execution upon OS updates with minimal implementation time and recurring costs, the three core resources are:
an alert action group: To define the action of running the Automation Runbook when an alert is triggered.
an Azure Automation runbook: To contain and execute the PowerShell validation scripts.
an alert rule: To monitor for OS update events and trigger the action group when an update occurs.
These three resources work together to create an automated, serverless, and cost-effective solution. While an Azure Monitor query is implicitly needed to define the alert condition, the alert rule is the resource that directly triggers the action based on that condition.
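As one possible starting point, the Automation account and the runbook that will hold the validation scripts could be created with Azure PowerShell (account, runbook, and resource group names are placeholders); the alert rule and action group that invoke the runbook on OS update events are then configured in Azure Monitor:

# Create an Automation account and an empty PowerShell runbook for the validation scripts
New-AzAutomationAccount -Name "auto-validate" -ResourceGroupName "RG1" -Location "eastus"
New-AzAutomationRunbook -AutomationAccountName "auto-validate" -ResourceGroupName "RG1" `
  -Name "Validate-VmEnvironment" -Type PowerShell
# After adding the validation script content, publish the runbook so an action group can run it
Publish-AzAutomationRunbook -AutomationAccountName "auto-validate" -ResourceGroupName "RG1" `
  -Name "Validate-VmEnvironment"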
Final Answer: The final answer is:
an alert action group
an Azure Automation runbook
an alert rule
You have an Active Directory forest named contoso.com.
You install and configure Azure AD Connect to use password hash synchronization as the single sign-on (SSO) method. Staging mode is enabled.
You review the synchronization results and discover that the Synchronization Service Manager does not display any sync jobs.
You need to ensure that the synchronization completes successfully.
What should you do?
Run Azure AD Connect and disable staging mode.
From Synchronization Service Manager, run a full import.
Run Azure AD Connect and set the SSO method to Pass-through Authentication.
From Azure PowerShell, run Start-AdSyncSyncCycle –PolicyType Initial.
The correct answer is: Run Azure AD Connect and disable staging mode.
Explanation:
Here’s why this is the correct solution and why the other options are incorrect:
Run Azure AD Connect and disable staging mode. (Correct)
Staging Mode Behavior: When Azure AD Connect is configured in staging mode, it is designed to be a passive server. This means it performs import and synchronization operations, but it does not export changes to Azure AD. This is by design for staging mode, allowing you to test configurations or have a backup server without actively synchronizing.
Synchronization Service Manager in Staging Mode: In staging mode, you might see import and synchronization operations in the Synchronization Service Manager, but export operations will be skipped, and you might not see the typical “sync jobs” that indicate active synchronization to Azure AD. The lack of sync jobs in the description strongly suggests staging mode is the cause.
Disabling Staging Mode: To make the Azure AD Connect server active and allow it to synchronize changes to Azure AD, you must disable staging mode. This is the primary purpose of staging mode - to be turned on for specific scenarios and off for active synchronization.
From Synchronization Service Manager, run a full import. (Incorrect)
Full Import Purpose: A full import is used to refresh the connector space with all objects from the connected directories (Active Directory and Azure AD). While a full import might be necessary after certain configuration changes or if data is inconsistent, it does not override staging mode. If staging mode is enabled, even after a full import, the export step (which synchronizes changes to Azure AD) will still be skipped. Running a full import alone will not resolve the issue of no sync jobs being displayed because the core problem is staging mode being active.
Run Azure AD Connect and set the SSO method to Pass-through Authentication. (Incorrect)
SSO Method Irrelevance: The SSO method (Password Hash Synchronization, Pass-through Authentication, or Federation) is a separate configuration from staging mode. Changing the SSO method will not disable staging mode or cause synchronization to start if staging mode is enabled. Password hash synchronization is already configured as per the question, and changing it to Pass-through Authentication is not related to the issue of no sync jobs in staging mode.
From Azure PowerShell, run Start-AdSyncSyncCycle –PolicyType Initial. (Incorrect)
Start-AdSyncSyncCycle Purpose: This PowerShell command is used to manually trigger a synchronization cycle. While this command can initiate a sync, it will still respect the staging mode configuration. If staging mode is enabled, running this command will likely start an import and synchronization cycle, but the export to Azure AD will still be skipped because of staging mode. Therefore, this command will not resolve the fundamental issue of staging mode preventing active synchronization.
In summary: The root cause is staging mode being enabled. Disabling staging mode using the Azure AD Connect wizard is the direct and necessary action to allow synchronization to Azure AD to proceed and for sync jobs to be displayed correctly in Synchronization Service Manager.
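Before rerunning the wizard, you can confirm on the Azure AD Connect server that staging mode is the cause; the ADSync module's scheduler cmdlet reports the current state:

# Check whether staging mode is enabled on this Azure AD Connect server
Get-ADSyncScheduler | Select-Object StagingModeEnabled, SyncCycleEnabled, SchedulerSuspended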
Final Answer: The final answer is Run Azure AD Connect and disable staging mode.
Your on-premises network contains 100 virtual machines that run Windows Server 2019.
You have an Azure subscription that contains an Azure Log Analytics workspace named Workspace1.
You need to collect errors from the Windows event logs on the virtual machines.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Create an Azure Event Grid domain.
Deploy the Microsoft Monitoring Agent.
Configure Windows Event Forwarding on the virtual machines.
Create an Azure Sentinel workspace.
Modify Agent configuration settings in Workspace1.
The correct answers are:
Deploy the Microsoft Monitoring Agent.
Modify Agent configuration settings in Workspace1.
Explanation:
Let’s break down why these two actions are necessary and why the others are not the best choices:
Deploy the Microsoft Monitoring Agent (MMA).
Why it’s correct: The Microsoft Monitoring Agent (MMA), also known as the Log Analytics agent, is the primary agent used to connect Windows and Linux machines (including on-premises VMs) to Azure Monitor and Log Analytics. To collect data from the on-premises Windows Server 2019 VMs and send it to Workspace1, you must deploy the MMA agent on each of these VMs. The agent is responsible for gathering the event logs and securely transmitting them to the configured Log Analytics workspace.
Why other options are not replacements: Without an agent installed on the VMs, there’s no mechanism to collect and send the event log data to Azure.
Modify Agent configuration settings in Workspace1.
Why it’s correct: After deploying the MMA agent, you need to configure your Log Analytics workspace (Workspace1) to specify which data to collect from the agents. This is done in the “Agents configuration” or “Data” settings within the Log Analytics workspace in the Azure portal. Specifically, you need to configure the Windows Event Logs settings to collect Error events. You can specify which event logs to collect (e.g., Application, System, Security) and the minimum severity level (e.g., Error, Warning, Information).
Why other options are not replacements: Simply deploying the agent is not enough. You need to tell the Log Analytics workspace what kind of data to expect and collect from the connected agents. This configuration within Workspace1 is essential for data collection to start and for collecting only errors as required.
Create an Azure Event Grid domain.
Why it’s incorrect: Azure Event Grid is a service for routing events from various Azure services and custom applications to event handlers. It’s not directly used for collecting Windows Event Logs from VMs. Event Grid is more for real-time event-driven architectures, not log aggregation in this context.
Configure Windows Event Forwarding on the virtual machines.
Why it’s incorrect: While Windows Event Forwarding (WEF) can be used to forward events to a central collector, it’s not necessary for this scenario. The MMA agent can directly collect Windows Event Logs from the local machine and send them to Log Analytics without requiring WEF. Using WEF would add unnecessary complexity and management overhead for 100 VMs. MMA is the simpler and more direct approach for this requirement.
Create an Azure Sentinel workspace.
Why it’s incorrect: Azure Sentinel is a Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solution built on top of Azure Log Analytics. While Sentinel uses Log Analytics workspaces as its data store, creating a new Azure Sentinel workspace is not necessary to simply collect Windows Event Logs into an existing Log Analytics workspace (Workspace1). You already have Workspace1, and that’s sufficient for log collection. Sentinel is for advanced security analysis on top of collected logs, not for the basic act of collecting logs itself.
Therefore, the two essential actions are deploying the MMA agent and configuring the Log Analytics workspace to collect Windows Event Log errors.
Final Answer: The final answer is Deploy the Microsoft Monitoring Agent and Modify Agent configuration settings in Workspace1.
You have an Azure subscription named Subscription1.
You deploy a Linux virtual machine named VM1 to Subscription1.
You need to monitor the metrics and the logs of VM1.
What should you use?
Azure HDInsight
Azure Analysis Services
Linux Diagnostic Extension (LAD)
the AzurePerformanceDiagnostics extension
The correct answer is Linux Diagnostic Extension (LAD).
Explanation:
Linux Diagnostic Extension (LAD):
Purpose: LAD is an Azure VM Extension specifically designed for Linux virtual machines. Its primary function is to collect system-level metrics and logs from the Linux operating system and send them to Azure Monitor.
Functionality: LAD can collect a wide range of metrics, including CPU utilization, memory usage, disk I/O, network traffic, and more. It can also be configured to collect various logs, such as syslog, authlog, and custom application logs.
Integration with Azure Monitor: LAD seamlessly integrates with Azure Monitor, allowing you to visualize metrics, query logs, set up alerts, and perform in-depth analysis of your Linux VM’s performance and health within the Azure portal.
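For reference, LAD is deployed as a VM extension. A minimal Azure CLI sketch is shown below; the resource group name and the contents of the two settings files are assumptions, and the protected settings normally carry the storage account key or SAS token that LAD uses as its sink.
# Enable the Linux Diagnostic Extension (LAD) on VM1 (sketch; RG1 and settings files assumed)
az vm extension set \
  --resource-group RG1 \
  --vm-name VM1 \
  --name LinuxDiagnostic \
  --publisher Microsoft.Azure.Diagnostics \
  --settings ./lad_public_settings.json \
  --protected-settings ./lad_protected_settings.json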
Let’s look at why the other options are incorrect:
Azure HDInsight:
Purpose: Azure HDInsight is a cloud service for big data analytics. It allows you to process massive amounts of data using popular open-source frameworks like Hadoop, Spark, Hive, and others.
Relevance to VM Monitoring: Azure HDInsight is not related to monitoring the metrics and logs of a single Linux VM. It’s for big data processing and analytics, not general VM monitoring.
Azure Analysis Services:
Purpose: Azure Analysis Services is a fully managed platform as a service (PaaS) that provides enterprise-grade semantic modeling capabilities in the cloud. It’s used for building and deploying OLAP (Online Analytical Processing) models for business intelligence and data analysis.
Relevance to VM Monitoring: Azure Analysis Services is not used for VM monitoring. It’s for business intelligence and analytical data modeling.
the AzurePerformanceDiagnostics extension:
Purpose: The AzurePerformanceDiagnostics extension is designed for Windows virtual machines. It collects performance data and helps troubleshoot performance issues on Windows VMs.
Relevance to Linux VMs: This extension is not compatible with Linux VMs. It is specifically for Windows operating systems.
In summary:
For monitoring metrics and logs of a Linux virtual machine in Azure, the Linux Diagnostic Extension (LAD) is the correct and purpose-built tool. It’s designed to collect the necessary data from Linux VMs and integrate with Azure Monitor for visualization, analysis, and alerting.
Final Answer: The final answer is Linux Diagnostic Extension (LAD).
HOTSPOT
You plan to deploy five virtual machines to a virtual network subnet.
Each virtual machine will have a public IP address and a private IP address.
Each virtual machine requires the same inbound and outbound security rules.
What is the minimum number of network interfaces and network security groups that you require? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Minimum number of network interfaces:
5
10
15
20
Minimum number of network security groups:
1
2
5
10
Answer:
Minimum number of network interfaces: 5
Minimum number of network security groups: 1
Explanation:
Minimum number of network interfaces: 5
Each virtual machine needs to be connected to a virtual network to have a private IP address and to be accessible over the network.
A network interface (NIC) is the resource that allows a virtual machine to connect to a virtual network.
Since you have five virtual machines, you need a minimum of five network interfaces, one for each virtual machine. A single NIC can be configured with both a private and a public IP address.
Minimum number of network security groups: 1
Network security groups (NSGs) are used to filter network traffic to and from Azure resources in a virtual network.
You can associate an NSG with either a subnet or individual network interfaces.
Since all five virtual machines require the same inbound and outbound security rules, you can apply a single NSG at the subnet level. By applying the NSG to the subnet, all virtual machines within that subnet (including the five you plan to deploy) will automatically inherit the security rules defined in that NSG.
Therefore, you only need a minimum of one network security group applied to the subnet to enforce the same security rules for all five virtual machines.
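A minimal Azure CLI sketch of this layout (resource group, VNet, subnet, and rule values are illustrative): create one NSG, define the shared rules once, and associate the NSG with the subnet so all five NICs inherit them.
# One NSG with shared rules, applied at the subnet level (names are illustrative)
az network nsg create --resource-group RG1 --name NSG1
az network nsg rule create --resource-group RG1 --nsg-name NSG1 --name AllowHttpsInbound \
  --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 443
az network vnet subnet update --resource-group RG1 --vnet-name VNet1 --name Subnet1 \
  --network-security-group NSG1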
Final Answer:
Minimum number of network interfaces: 5
Minimum number of network security groups: 1
You have an Azure subscription named Subscription1 that includes an Azure File share named share1.
You create several Azure virtual machines in Subscription1. All of the virtual machines belong to the same virtual network.
You have an on-premises Hyper-V server named Server1. Server1 hosts a virtual machine named VM1.
You plan to replicate VM1 to Azure.
You need to create additional objects in Subscription1 to support the planned deployment.
Which three objects should you create? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Hyper-V site
Azure Recovery Services Vault
storage account
replication policy
Azure Traffic Manager instance
endpoint
Supporting the planned Hyper-V VM replication to Azure involves the following core components in Subscription1:
Azure Recovery Services Vault
Storage account
Hyper-V site
Explanation:
Azure Recovery Services Vault: This is the central management service for Azure Site Recovery (ASR). ASR is the Azure service used to replicate on-premises Hyper-V VMs to Azure. You must create a Recovery Services vault in your Azure subscription to configure and manage the replication, failover, and failback processes.
Storage account: When you replicate on-premises VMs to Azure using ASR, the replicated VM disks are stored as managed disks within Azure Storage accounts. You need a storage account in your Azure subscription to serve as the target location for the replicated VM data.
Hyper-V site: In Azure Site Recovery terminology, you register your on-premises Hyper-V hosts to a Hyper-V site within the Recovery Services vault. The Hyper-V site is the logical representation of your on-premises Hyper-V infrastructure in Azure Site Recovery and is essential for ASR to communicate with and manage your Hyper-V hosts. This setup involves deploying the Azure Site Recovery Provider on the Hyper-V hosts and registering them to the Recovery Services vault.
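Of these, the vault and the storage account can be created up front with the Azure CLI, as sketched below (the resource group, names, and region are assumptions); the Hyper-V site, the replication policy, and the Provider registration are then configured through Site Recovery inside the vault, typically from the portal.
# Create the Recovery Services vault and a target storage account (sketch; names assumed)
az backup vault create --resource-group RG1 --name ASRVault1 --location westus
az storage account create --resource-group RG1 --name asrtargetstore01 --location westus --sku Standard_LRS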
Let’s examine why the other options are not the primary necessary objects for basic Hyper-V to Azure replication:
replication policy: A replication policy is required to configure the replication settings (such as recovery point retention and snapshot frequency). It is configured within the Recovery Services vault rather than created as a standalone resource before replication is set up, so it is closer to a configuration setting than a top-level resource. However, because the question asks for objects to create, and a replication policy is indeed a configurable entity within ASR, it is a plausible option.
Azure Traffic Manager instance: Azure Traffic Manager is a DNS-based traffic load balancer. It is used to distribute traffic across different Azure regions. It is not directly involved in the Hyper-V to Azure replication process itself. Traffic Manager might be used in a disaster recovery scenario after a failover to Azure to direct user traffic to the Azure-replicated VMs, but it’s not a prerequisite object for setting up the replication itself.
Endpoint: “Endpoint” is a very generic term. In Azure networking, endpoints can refer to various things (like service endpoints, private endpoints, public endpoints of VMs, etc.). In the context of basic Hyper-V to Azure replication setup, “endpoint” is not a specific Azure object that you would typically create upfront as a primary requirement. It’s too broad and not directly relevant as a core object needed for initial replication setup.
Considering the core components needed to initiate Hyper-V to Azure replication, the objects most directly created are the Azure Recovery Services Vault, a storage account, and the Hyper-V site (representing the on-premises Hyper-V environment within ASR), with the replication policy as a key configuration entity that you also need to define.
Given the options and the context of setting up replication, the best three choices that represent core object creation are:
Azure Recovery Services Vault
storage account
replication policy
Although “Hyper-V site” is also conceptually needed, “replication policy” is a more concrete and explicitly configured entity in Azure Site Recovery and is often considered a key “object” in the configuration process. If forced to pick three from the provided list, these three are the most pertinent.
Final Answer: The final answer is Azure Recovery Services Vault, storage account, and replication policy.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
What should you do?
Use the Synchronization Service Manager to modify the Metaverse Designer tab.
Use Azure AD Connect to customize the synchronization options.
Use the Synchronization Rules Editor to create a synchronization rule.
Use Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector.
The correct answer is Use Azure AD Connect to customize the synchronization options.
Explanation:
Here’s why this option is the most appropriate and why the others are less suitable:
Use Azure AD Connect to customize the synchronization options. (Correct)
Azure AD Connect Customization: Azure AD Connect provides a wizard-driven interface for configuring and customizing synchronization. When you run Azure AD Connect again after the initial setup, you can choose to “Customize synchronization options.”
Filtering Options: Within the customization options, Azure AD Connect allows you to configure filtering based on domains, organizational units (OUs), and attributes. This attribute-based filtering is precisely what you need to filter users based on their UPN suffix.
UPN Suffix Filtering: You can configure attribute-based filtering to only synchronize users where the userPrincipalName attribute ends with @contoso.com. This ensures that only users with the desired UPN suffix from the contoso.local domain are synchronized to Azure AD.
User-Friendly Approach: Using the Azure AD Connect wizard is the recommended and most user-friendly method for making common synchronization customizations like filtering.
Use the Synchronization Rules Editor to create a synchronization rule. (Less Ideal, but Possible)
Synchronization Rules Editor Power: The Synchronization Rules Editor is a more powerful tool that allows for very granular control over synchronization rules. You could use it to create a custom inbound synchronization rule that filters users based on their UPN suffix.
Complexity: However, using the Synchronization Rules Editor is generally considered a more advanced approach. It requires a deeper understanding of the synchronization engine and rule syntax. For a relatively simple filtering requirement like this, using the Azure AD Connect wizard is a simpler and more recommended approach. Directly editing sync rules is typically reserved for more complex scenarios that the wizard doesn’t directly cover.
Use Synchronization Service Manager to modify the Metaverse Designer tab. (Incorrect)
Metaverse Designer Purpose: The Metaverse Designer in Synchronization Service Manager is primarily used to manage the schema of the Metaverse, which is the central identity repository in Azure AD Connect. It’s used for extending the schema or modifying object type mappings. It’s not the tool for filtering users based on attributes like UPN suffix.
Use Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector. (Incorrect)
AD DS Connector Configuration: Modifying the AD DS Connector in Synchronization Service Manager mainly involves configuring the connection to your on-premises Active Directory, such as credentials, connected domains, and OU selection for synchronization. While you can select specific OUs to sync, you cannot directly apply attribute-based filtering (like UPN suffix) at the connector level. Attribute-based filtering is configured through synchronization rules or the Azure AD Connect wizard’s customization options.
In summary:
While the Synchronization Rules Editor could be used, the Azure AD Connect wizard’s customization options provide a more user-friendly and recommended way to achieve the desired filtering based on UPN suffix. It’s the intended tool for common synchronization configuration changes like filtering.
Final Answer: The final answer is Use Azure AD Connect to customize the synchronization options.
You have an Azure subscription that contains the resource groups shown in the following table.
Name Region
RG1 West US
RG2 West US
RG3 East US
You have the Azure SQL servers shown in the following table.
Name Region In resource group
Sql1 West US RG1
Sql2 East US RG2
Sql3 West US RG3
Sql4 West US RG1
You create an Azure SQL database named DB1 on Sql1 in an elastic pool named Pool1.
You need to create an Azure SQL database named DB2 in Pool1.
Where should you deploy DB2?
Sql1
Sql2
Sql3
Sql4
To create an Azure SQL database (DB2) within an existing elastic pool (Pool1), DB2 must be deployed on the same Azure SQL server that hosts the elastic pool.
According to the question:
DB1 is an Azure SQL database created on Sql1.
DB1 is in an elastic pool named Pool1.
This directly implies that Pool1 is hosted on Sql1.
Elastic pools are server-scoped. An elastic pool is created on a specific Azure SQL server, and all databases that are part of that elastic pool must reside on the same SQL server. You cannot have databases in an elastic pool that are spread across different SQL servers.
Therefore, to create DB2 in Pool1, you must deploy DB2 on Sql1.
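For illustration, creating DB2 directly into the existing pool with the Azure CLI might look like the following sketch (RG1 is taken from the scenario tables; Pool1 is assumed to already exist on Sql1):
# Create DB2 on Sql1 and place it in the existing elastic pool Pool1
az sql db create --resource-group RG1 --server sql1 --name DB2 --elastic-pool Pool1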
Let’s examine the options:
Sql1: This is the correct answer. Since Pool1 is hosted on Sql1, DB2 must also be deployed on Sql1 to be part of Pool1.
Sql2: Sql2 is located in the East US region and resource group RG2. Sql1 is in West US and RG1. Elastic pools and databases within them must be in the same region and on the same SQL server. Sql2 is not the correct location.
Sql3: Sql3 is located in the West US region and resource group RG3. While Sql3 is in the same region (West US) as Sql1, it is a different SQL server. You cannot add a database to an elastic pool that is hosted on a different SQL server.
Sql4: Sql4 is located in the West US region and resource group RG1, the same region and resource group as Sql1. However, it is still a different SQL server instance than Sql1. You cannot add a database to an elastic pool hosted on Sql1 if you deploy DB2 on Sql4.
Conclusion:
The only valid location to deploy DB2 so that it can be part of Pool1 is Sql1, as Pool1 is hosted on Sql1.
Final Answer: The final answer is Sql1.
HOTSPOT
You deploy an Azure virtual machine scale set named VSS1 that contains 30 virtual machine instances across three zones in the same Azure region. The instances host an application named App1 that must be accessible by using HTTP and HTTPS traffic. Currently, VSS1 is inaccessible from the internet.
You need to use Azure Load Balancer to provide access to App1 across all the instances from the internet by using a single IP address.
What should you configure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Minimum number of network security groups (NSGs) to create:
1
3
30
Objects to assign to the network security groups (NSGs):
1 subnet
3 subnets
30 network interfaces
Minimum number of Azure Standard Load Balancer rules to create:
1
2
3
4
6
Answer Area:
Minimum number of network security groups (NSGs) to create: 1
Objects to assign to the network security groups (NSGs): 1 subnet
Minimum number of Azure Standard Load Balancer rules to create: 2
Explanation:
Minimum number of network security groups (NSGs) to create: 1
You can use a single Network Security Group (NSG) to control inbound and outbound traffic for all virtual machine instances within the virtual machine scale set.
NSGs can be associated with a subnet or individual network interfaces. To minimize management and apply the same security rules to all instances in the scale set, you should apply the NSG to the subnet in which VSS1 is deployed.
Since all VMs in the scale set need the same security rules (allowing HTTP and HTTPS), one NSG applied at the subnet level is sufficient.
Objects to assign to the network security groups (NSGs): 1 subnet
As explained above, applying the NSG to the subnet is the most efficient way to manage security rules for all VMs in the VSS. You don’t need to create NSGs for each zone or for each individual network interface when the security requirements are the same across all instances.
Minimum number of Azure Standard Load Balancer rules to create: 2
You need to provide access to App1 using both HTTP (port 80) and HTTPS (port 443).
Azure Load Balancer rules define how traffic is distributed to the backend pool. Each rule typically handles traffic for a specific port and protocol combination.
You will need one load balancer rule to handle HTTP traffic (port 80) and forward it to the backend pool (VSS1 instances).
You will need a second load balancer rule to handle HTTPS traffic (port 443) and forward it to the same backend pool (VSS1 instances).
Therefore, you require a minimum of two load balancer rules to handle both HTTP and HTTPS traffic.
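A minimal sketch of the two rules with the Azure CLI; the load balancer name, frontend IP configuration, backend pool, and probe names are assumptions, and exact option names can vary slightly between CLI versions.
# One rule per port/protocol pair: HTTP (80) and HTTPS (443)
az network lb rule create --resource-group RG1 --lb-name LB1 --name HttpRule \
  --protocol Tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name FrontendIP1 --backend-pool-name BackendPool1 --probe-name HealthProbe1
az network lb rule create --resource-group RG1 --lb-name LB1 --name HttpsRule \
  --protocol Tcp --frontend-port 443 --backend-port 443 \
  --frontend-ip-name FrontendIP1 --backend-pool-name BackendPool1 --probe-name HealthProbe1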
Final Answer:
Minimum number of network security groups (NSGs) to create: 1
Objects to assign to the network security groups (NSGs): 1 subnet
Minimum number of Azure Standard Load Balancer rules to create: 2
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses data from two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to move DB1 and DB2 to Azure.
You need to implement Azure services to host DB1 and DB2. The solution must support server-side transactions across DB1 and DB2.
Solution: You deploy DB1 and DB2 as Azure SQL databases each on a different Azure SQL Database server.
Does this meet the goal?
Yes
No
The question asks if deploying DB1 and DB2 as Azure SQL databases on different Azure SQL Database servers supports server-side transactions across them.
Azure SQL Database (Single Database): This deployment option provides a single, isolated database. Each Azure SQL Database server is a logical construct providing administrative scope for databases. Databases on different Azure SQL Database servers are fundamentally independent and isolated from a transactional perspective.
Server-Side Transactions Across Databases: True server-side transactions across databases typically require distributed transaction capabilities. In SQL Server, this might be handled using Distributed Transaction Coordinator (DTC) in certain scenarios or linked servers with limitations on full transactional guarantees in distributed scenarios.
Azure SQL Database Limitations: Azure SQL Database (single database option) has limitations when it comes to distributed transactions across independent database servers. While you can query across databases using features like elastic queries, these are not designed for ACID (Atomicity, Consistency, Isolation, Durability) transactions that span multiple independent Azure SQL databases.
Cross-Database Transactions in Azure SQL Database: While some form of cross-database querying is possible in Azure SQL Database, true distributed transactions in the traditional sense (like within a single SQL Server instance or using DTC across instances) are not directly supported across independent Azure SQL Databases on different servers.
Analysis of the Proposed Solution:
The solution proposes deploying DB1 and DB2 as separate Azure SQL databases, each on a different Azure SQL Database server. This setup creates two completely independent Azure SQL databases. In this configuration, server-side transactions that span across DB1 and DB2 and maintain full ACID properties are NOT natively supported by Azure SQL Database.
While application-level transaction management or eventual consistency patterns might be possible, the requirement is for server-side transactions. Deploying them as independent Azure SQL Databases on different servers does not directly fulfill this requirement with built-in server features.
Conclusion:
The proposed solution does not meet the goal of supporting server-side transactions across DB1 and DB2 using the described Azure SQL Database deployment. To achieve server-side transactions, a different Azure SQL deployment option (like Azure SQL Managed Instance, which offers more SQL Server instance-level features, or deploying SQL Server on Azure VMs) or a different application architecture might be necessary.
Final Answer: No
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses data from two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to move DB1 and DB2 to Azure.
You need to implement Azure services to host DB1 and DB2. The solution must support server-side transactions across DB1 and DB2.
Solution: You deploy DB1 and DB2 as Azure SQL databases on the same Azure SQL Database server.
Does this meet the goal?
Yes
No
The question asks if deploying DB1 and DB2 as Azure SQL databases on the same Azure SQL Database server supports server-side transactions across them.
Azure SQL Database Server Scope: In Azure SQL Database, a “server” is a logical construct. It’s a management unit for Azure SQL databases. Crucially, databases deployed on the same Azure SQL Database server share the same physical SQL Server instance in the backend infrastructure (though this is abstracted from the user).
Cross-Database Transactions within a Single SQL Server Instance: Microsoft SQL Server, by design, supports cross-database transactions when the databases reside on the same SQL Server instance. You can use standard SQL transaction commands (like BEGIN TRANSACTION, COMMIT TRANSACTION, ROLLBACK TRANSACTION) to create transactions that span multiple databases on the same instance. You can reference objects in other databases using three or four-part naming conventions (e.g., DatabaseName.SchemaName.ObjectName).
Azure SQL Database Behavior: When you deploy multiple Azure SQL databases on the same Azure SQL Database server, they effectively reside within the same logical SQL Server instance environment provided by the Azure SQL Database service. This means they inherit the cross-database transaction capabilities of SQL Server.
Server-Side Transactions: The requirement is for server-side transactions. This implies that the transaction management should be handled by the database server itself, not just at the application level. SQL Server’s built-in transaction management features, when used for cross-database transactions on the same instance (or Azure SQL Database server), fulfill this requirement.
Analysis of the Proposed Solution:
The solution proposes deploying DB1 and DB2 as Azure SQL databases on the same Azure SQL Database server. This is the key point. Because they are on the same server, they can participate in server-side transactions that span both databases. You can write stored procedures or application code that initiates a transaction, performs operations in both DB1 and DB2 (using proper database naming conventions), and then commits or rolls back the entire transaction as a single atomic unit.
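As a sketch of the deployment side only (the logical server name, admin credentials, region, and resource group are assumptions), both databases would be created on the same logical server:
# Create one logical server and place both databases on it (names and credentials are illustrative)
az sql server create --resource-group RG1 --name sqlserver-app1 --location westus \
  --admin-user sqladmin --admin-password '<strong-password>'
az sql db create --resource-group RG1 --server sqlserver-app1 --name DB1
az sql db create --resource-group RG1 --server sqlserver-app1 --name DB2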
Conclusion:
The proposed solution does meet the goal. Deploying DB1 and DB2 as Azure SQL databases on the same Azure SQL Database server enables the implementation of server-side transactions across DB1 and DB2 using standard SQL Server transactional capabilities.
Final Answer: Yes
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB database that contains a container named Container1. The partition key for Container1 is set to /day. Container1 contains the items shown in the following table.
— —
Name Content
Item1 {“id”: “1”, “day”: “Mon”, “value”: “10”}
Item2 {“id”: “2”, “day”: “Mon”, “value”: “15”}
Item3 {“id”: “3”, “day”: “Tue”, “value”: “10”}
Item4 {“id”: “4”, “day”: “Wed”, “value”: “15”}
— —
You need to programmatically query Azure Cosmos DB and retrieve Item1 and Item2 only.
Solution: You run the following query.
SELECT day
WHERE value = “10”
You set the Enable Cross Partition Query property to False.
Does this meet the goal?
Yes
No
The goal is to retrieve Item1 and Item2 only from Azure Cosmos DB Container1.
Let’s analyze the provided solution:
Solution:
You run the following query:
SELECT day
WHERE value = “10”
You set the Enable Cross Partition Query property to False.
Item Data Analysis:
Item1: {“id”: “1”, “day”: “Mon”, “value”: “10”}
Item2: {“id”: “2”, “day”: “Mon”, “value”: “15”}
Item3: {“id”: “3”, “day”: “Tue”, “value”: “10”}
Item4: {“id”: “4”, “day”: “Wed”, “value”: “15”}
Query Evaluation:
The query SELECT day WHERE value = “10” filters items based on the value property being equal to “10”.
Item1: value is “10”, so it matches the WHERE clause.
Item2: value is “15”, so it does not match the WHERE clause.
Item3: value is “10”, so it matches the WHERE clause.
Item4: value is “15”, so it does not match the WHERE clause.
Therefore, based on the WHERE value = “10” condition, the query would retrieve Item1 and Item3, not Item1 and Item2.
Cross Partition Query Setting:
Setting Enable Cross Partition Query to False means that the query will only be executed against a single partition. However, the query itself is already flawed in achieving the goal, regardless of the cross-partition setting, because it will not retrieve Item2.
Goal Evaluation:
The goal is to retrieve Item1 and Item2 only.
The query SELECT day WHERE value = “10” will retrieve Item1 and Item3, not Item1 and Item2. Item2 has a value of “15” and will be excluded by the WHERE clause.
Conclusion:
The provided solution does not meet the goal because the query SELECT day WHERE value = “10” will retrieve Item1 and Item3, not Item1 and Item2. Setting Enable Cross Partition Query to False does not change the fact that the query’s filter condition is incorrect for retrieving Item1 and Item2 only.
Final Answer: No
HOTSPOT
You have an on-premises data center and an Azure subscription. The data center contains two VPN devices. The subscription contains an Azure virtual network named VNet1. VNet1 contains a gateway subnet.
You need to create a site-to-site VPN. The solution must ensure that if a single instance of an Azure VPN gateway fails, or a single on-premises VPN device fails, the failure will not cause an interruption that is longer than two minutes.
What is the minimum number of public IP addresses, virtual network gateways, and local network gateways required in Azure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Public IP addresses:
1
2
3
4
Virtual network gateways:
1
2
3
4
Local network gateways:
1
2
3
4
Answer Area:
Public IP addresses: 2
Virtual network gateways: 2
Local network gateways: 2
Explanation:
To achieve high availability and minimize downtime to under two minutes in case of a single component failure for a site-to-site VPN connection, you need to configure redundancy on both the Azure side and the on-premises side.
Public IP Addresses: 2
In an active-standby Azure VPN Gateway setup, you need a Standard SKU public IP address for each VPN gateway instance.
You will deploy two Azure VPN gateway instances in an active-standby configuration for redundancy.
Therefore, you need a minimum of two public IP addresses: one for the active VPN gateway instance and one for the standby instance.
Virtual network gateways: 2
To achieve high availability for the Azure VPN gateway, you must deploy it in an active-standby configuration.
This configuration requires two Azure virtual network gateways. One gateway will be active and handle traffic, while the other will be in standby mode, ready to take over in case of a failure of the active gateway.
Hence, you need a minimum of two virtual network gateways.
Local network gateways: 2
To ensure redundancy on the on-premises side and to leverage your two VPN devices, you should create two local network gateways in Azure.
Each local network gateway will represent one of your on-premises VPN devices and will be configured with the public IP address of that device.
By having two local network gateways, you can create redundant connections from your Azure VPN gateways to your on-premises network through different VPN devices.
Therefore, you need a minimum of two local network gateways to represent your two on-premises VPN devices and achieve on-premises redundancy in conjunction with Azure’s active-standby VPN gateway setup.
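As a sketch, the two local network gateways might be created with the Azure CLI as follows; the resource group, gateway names, on-premises public IP addresses, and address prefixes are all assumptions.
# One local network gateway per on-premises VPN device (values are illustrative)
az network local-gateway create --resource-group RG1 --name LNG-Device1 \
  --gateway-ip-address 203.0.113.10 --local-address-prefixes 192.168.0.0/24
az network local-gateway create --resource-group RG1 --name LNG-Device2 \
  --gateway-ip-address 203.0.113.11 --local-address-prefixes 192.168.0.0/24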
In summary:
By using 2 Public IP addresses, 2 Virtual Network Gateways (in active-standby), and 2 Local Network Gateways, you create a resilient site-to-site VPN solution that can withstand the failure of a single Azure VPN gateway instance or a single on-premises VPN device with minimal interruption. This setup allows for automatic failover within the Azure VPN gateway pair and provides redundant paths via two on-premises VPN devices.
Final Answer:
Public IP addresses: 2
Virtual network gateways: 2
Local network gateways: 2
You have an Azure subscription that contains an Azure Sentinel workspace. Sentinel is configured to monitor several Azure resources.
You need to send notification emails to resource owners when alerts or recommendations are generated for a resource.
What should you use?
Logic Apps Designer
Azure Security Center
Automation Runbook
Azure Machine Learning Studio
The correct answer is Logic Apps Designer.
Explanation:
Here’s why Logic Apps Designer is the best choice and why the other options are less suitable for this specific scenario:
Logic Apps Designer:
Purpose: Azure Logic Apps is a cloud-based platform for automating workflows and integrating services. The Logic Apps Designer is the visual interface for creating these workflows.
Integration with Azure Sentinel: Logic Apps has native connectors and triggers for Azure Sentinel. You can create a Logic App that is triggered when a new Azure Sentinel alert is generated.
Email Notification Capabilities: Logic Apps has built-in connectors for various email services (like Office 365 Outlook, Gmail, SendGrid, etc.). You can easily add an action in your Logic App workflow to send an email notification.
Dynamic Resource Owner Lookup: Logic Apps can integrate with Azure Resource Graph or Azure Resource Manager to dynamically retrieve information about the resource that triggered the alert. You could potentially use Resource Graph to query for tags on the resource or use Azure RBAC to find the owners/contributors and send notifications to them.
Flexibility and Customization: Logic Apps provides a highly flexible and customizable way to build notification workflows. You can tailor the email content, recipients, and notification logic based on the specifics of the Sentinel alert.
Azure Security Center (Microsoft Defender for Cloud):
Purpose: Microsoft Defender for Cloud focuses on security posture management and threat protection. While it provides security alerts and recommendations, its native email notification capabilities are primarily for Security Center’s own findings, not for routing notifications based on Sentinel alerts to resource owners.
Limited Customization for Sentinel Alerts: While Defender for Cloud and Sentinel are integrated, Defender for Cloud’s notification system is not designed to be the primary mechanism for handling notifications from Sentinel alerts and routing them to resource owners based on resource context.
Automation Runbook:
Purpose: Azure Automation Runbooks are used for general automation tasks in Azure and hybrid environments.
Can Send Emails: Runbooks can be coded (PowerShell or Python) to send emails.
Complexity for Workflows: While you could use an Automation Runbook to achieve the goal, it would generally require more coding and be less visually intuitive to build the workflow compared to Logic Apps. You would need to handle the trigger (e.g., via a webhook from Sentinel), resource owner lookup logic, and email sending all within the Runbook code. Logic Apps provides a more declarative, visual, and connector-driven approach that is often simpler for this type of workflow.
Azure Machine Learning Studio:
Purpose: Azure Machine Learning Studio is for building, training, and deploying machine learning models. It has no relevance to sending email notifications for Azure Sentinel alerts.
Why Logic Apps is the Best Choice:
Logic Apps Designer is the most direct and efficient tool for this specific requirement because:
Native Sentinel Integration: It has built-in triggers for Azure Sentinel alerts.
Email Connectors: It provides easy-to-use connectors for sending emails.
Workflow Automation Focus: It is designed for building automated workflows, making it ideal for handling alerts and triggering actions like email notifications.
Resource Context and Customization: It provides the flexibility to retrieve resource information and customize notifications based on alert details and resource ownership.
Final Answer: The final answer is Logic Apps Designer.
HOTSPOT
You have an Azure subscription that contains the virtual networks shown in the following table.
Name Location Virtual machine
Vnet1 North Europe VM1
Vnet2 West Europe VM2
You create an Azure Cosmos DB account as shown in the exhibit. (Click the Exhibit tab.)
— —
Create Azure Cosmos DB Account (exhibit)
Validation Success
Estimated Account Creation Time (in minutes): 6
Basics
Subscription: Subscription1
Resource Group: RG1
Location: North Europe
Account Name: (new) cosmos75246
API: Core (SQL)
Account Type: Production
Geo-Redundancy: Enable
Multi-region Writes: Disable
Networking
Connectivity method: Private endpoint
Private endpoint: Endpoint1 (Core (SQL)) (Vnet1)
— —
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Statements
Cosmos75246 is accessible by using a public IP address.
VM1 can read from cosmos75246.
VM2 can read from cosmos75246.
Let’s analyze each statement based on the provided Azure Cosmos DB account configuration and virtual network setup.
Statement 1: Cosmos75246 is accessible by using a public IP address.
Analysis: The exhibit shows that the Connectivity method for the Cosmos DB account is set to Private endpoint. When you configure a private endpoint for Azure Cosmos DB, you are specifically enabling access to the Cosmos DB account through a private IP address within your virtual network, rather than through its public endpoint. While Cosmos DB accounts generally have a public endpoint, configuring a private endpoint and not explicitly enabling public network access in firewall settings (which is not shown in the exhibit but is a common practice when using private endpoints) effectively restricts public access. With the configuration shown, the primary access method being configured is the private endpoint.
Conclusion: Based on the configuration, especially the “Private endpoint” setting, the Cosmos DB account is primarily designed for private access. Therefore, the statement that it is accessible by using a public IP address in the context of this configuration is misleading. While a public endpoint might technically exist, the configured access is via private endpoint, suggesting public access is likely restricted or not the intended access method. Thus, the statement is No.
Statement 2: VM1 can read from cosmos75246.
Analysis: VM1 is located in Vnet1 in the North Europe region. The private endpoint Endpoint1 for Cosmos DB is being created within Vnet1. Private endpoints are designed to enable private connectivity to Azure services from within a specific virtual network. Since VM1 is in the same virtual network (Vnet1) where the private endpoint is created, VM1 will be able to access the Cosmos DB account using the private IP address of the endpoint.
Conclusion: VM1 is in the same virtual network as the private endpoint configured for Cosmos DB. Therefore, VM1 can read from cosmos75246. The statement is Yes.
Statement 3: VM2 can read from cosmos75246.
Analysis: VM2 is located in Vnet2 in the West Europe region. The private endpoint Endpoint1 for Cosmos DB is created in Vnet1 in the North Europe region. Private endpoints provide connectivity within the virtual network where they are created. By default, virtual networks are isolated from each other. Vnet1 and Vnet2 are separate virtual networks and are even in different Azure regions (North Europe and West Europe). Without explicit configuration like VNet peering or VPN connectivity to connect Vnet2 to Vnet1, VM2, which is in Vnet2, will not be able to directly access the Cosmos DB account through the private endpoint that is deployed in Vnet1. Private endpoints do not inherently grant access from other virtual networks.
Conclusion: VM2 is in a different virtual network (Vnet2) than the private endpoint (Vnet1). Without additional networking configurations to connect Vnet2 to Vnet1, VM2 cannot read from cosmos75246 via the private endpoint in Vnet1. Therefore, the statement is No.
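For completeness, connectivity such as the following global VNet peering (together with appropriate private DNS configuration) is the kind of additional setup VM2 would need before it could reach the private endpoint in Vnet1. This is a sketch only; the resource group name is assumed and it is not part of the deployment shown in the exhibit.
# Example of the extra networking VM2 would need: peer Vnet2 with Vnet1 (both directions)
az network vnet peering create --resource-group RG1 --name Vnet1ToVnet2 \
  --vnet-name Vnet1 --remote-vnet Vnet2 --allow-vnet-access
az network vnet peering create --resource-group RG1 --name Vnet2ToVnet1 \
  --vnet-name Vnet2 --remote-vnet Vnet1 --allow-vnet-access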
Final Answer:
Statements Yes No
Cosmos75246 is accessible by using a public IP address. ☐ ☑
VM1 can read from cosmos75246. ☑ ☐
VM2 can read from cosmos75246. ☐ ☑
HOTSPOT
You have an Azure subscription named Subscription1.
Subscription1 contains the virtual machines in the following table.
Name IP address
VM1 10.0.1.4
VM2 10.0.2.4
VM3 10.0.3.4
Subscription1 contains a virtual network named VNet1 that has the subnets in the following table.
— —
Name Address space Connected virtual machine
Subnet1 10.0.1.0/24 VM1
Subnet2 10.0.2.0/24 VM2
Subnet3 10.0.3.0/24 VM3
— —
VM3 has a network adapter named NIC3. IP forwarding is enabled on NIC3. Routing is enabled on VM3.
You create a route table named RT1 that contains the routes in the following table.
Address prefix Next hop type Next hop address
10.0.1.0/24 Virtual appliance 10.0.3.4
10.0.2.0/24 Virtual appliance 10.0.3.4
You apply RT1 to Subnet1 and Subnet2.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Statements
VM3 can establish a network connection to VM1.
If VM3 is turned off, VM2 can establish a network connection to VM1.
VM1 can establish a network connection to VM2.
Let’s analyze each statement based on the network configuration provided.
Statement 1: VM3 can establish a network connection to VM1.
VM3 is in Subnet3 (10.0.3.0/24). VM1 is in Subnet1 (10.0.1.0/24).
There is no route table applied to Subnet3, where VM3 resides.
By default, Azure virtual networks allow direct communication between subnets within the same VNet.
Therefore, VM3 should be able to directly communicate with VM1 using the default system routes of the virtual network.
Conclusion for Statement 1: Yes.
Statement 2: If VM3 is turned off, VM2 can establish a network connection to VM1.
VM2 is in Subnet2 (10.0.2.0/24). VM1 is in Subnet1 (10.0.1.0/24).
Route table RT1 is applied to Subnet2.
RT1 contains a route for the address prefix 10.0.1.0/24 (which includes VM1’s IP address) with the next hop type “Virtual appliance” and the next hop address 10.0.3.4 (which is VM3’s IP address).
This means that traffic from VM2 destined for VM1 (10.0.1.0/24) is directed to VM3.
If VM3 is turned off, the next hop virtual appliance (10.0.3.4) becomes unavailable.
When the next hop specified in a route table is unavailable, Azure will not automatically fall back to default system routes for that destination prefix. Traffic matching that route will likely be dropped or routing will fail because the specified next hop is unreachable.
Therefore, if VM3 is turned off, VM2 will not be able to establish a network connection to VM1 because the custom route in RT1 points to a non-functional appliance, and default VNet routing is overridden by RT1 for traffic from Subnet2 to Subnet1.
Conclusion for Statement 2: No.
Statement 3: VM1 can establish a network connection to VM2.
VM1 is in Subnet1 (10.0.1.0/24). VM2 is in Subnet2 (10.0.2.0/24).
Route table RT1 is applied to Subnet1.
RT1 contains a route for the address prefix 10.0.2.0/24 (which includes VM2’s IP address) with the next hop type “Virtual appliance” and the next hop address 10.0.3.4 (which is VM3’s IP address).
This means that traffic from VM1 destined for VM2 (10.0.2.0/24) is directed to VM3.
As long as VM3 is running and configured to route traffic between Subnet1 and Subnet2 (which is implied by “Routing is enabled on VM3” and IP forwarding on NIC3), VM1 will be able to establish a network connection to VM2, although the traffic will pass through VM3.
Conclusion for Statement 3: Yes.
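For reference, the routing setup described in the scenario could be built with the Azure CLI roughly as sketched below (the resource group name is an assumption):
# Route table with virtual-appliance routes, applied to Subnet1 and Subnet2; IP forwarding on NIC3
az network route-table create --resource-group RG1 --name RT1
az network route-table route create --resource-group RG1 --route-table-name RT1 --name ToSubnet1 \
  --address-prefix 10.0.1.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.3.4
az network route-table route create --resource-group RG1 --route-table-name RT1 --name ToSubnet2 \
  --address-prefix 10.0.2.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.3.4
az network vnet subnet update --resource-group RG1 --vnet-name VNet1 --name Subnet1 --route-table RT1
az network vnet subnet update --resource-group RG1 --vnet-name VNet1 --name Subnet2 --route-table RT1
az network nic update --resource-group RG1 --name NIC3 --ip-forwarding true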
Final Answer:
Statements Yes No
VM3 can establish a network connection to VM1. ☑ ☐
If VM3 is turned off, VM2 can establish a network connection to VM1. ☐ ☑
VM1 can establish a network connection to VM2. ☑ ☐
DRAG DROP
You have an Azure virtual machine named VM1 that runs Windows Server 2016.
You install a line-of-business application on VM1.
You need to create a scale set by using VM1 as a custom image.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
Run sysprep.exe on VM1.
Install Network Load Balancing (NLB) on VM1.
From Azure CLI, deallocate VM1 and mark VM1 as generalized.
From Azure CLI, apply a custom script extension.
Create a virtual machine scale set.
Answer Area
Answer Area:
Run sysprep.exe on VM1.
From Azure CLI, deallocate VM1 and mark VM1 as generalized.
Create a virtual machine scale set.
Explanation:
Run sysprep.exe on VM1: The first step to create a custom image from a Windows VM is to generalize the VM using the System Preparation tool (Sysprep). Sysprep removes computer-specific information, such as the computer name and security identifiers (SIDs), ensuring that the image can be used to create multiple unique VMs. Running sysprep.exe /generalize /shutdown /oobe is the standard command to prepare a Windows VM for image capture.
From Azure CLI, deallocate VM1 and mark VM1 as generalized: After running Sysprep and shutting down the VM, you need to deallocate the VM in Azure. Deallocating the VM releases the compute resources associated with it, making it possible to capture the VM as an image. Using the Azure CLI commands az vm deallocate --resource-group <resource_group_name> --name VM1 and az vm generalize --resource-group <resource_group_name> --name VM1 is the correct way to deallocate and mark the VM as generalized in Azure. Marking the VM as generalized in Azure signals to the platform that the VM has been sysprepped and is ready to be used as a source for a custom image.
Create a virtual machine scale set: Once VM1 is generalized and deallocated, you can create a virtual machine scale set using this generalized VM as a custom image. When creating the scale set, you will specify the resource ID of the generalized VM as the imageReference source in the scale set’s configuration. The scale set will then use this custom image to deploy new VM instances.
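A condensed sketch of steps 2 and 3 with the Azure CLI (the resource group, image, scale set names, and credentials are assumptions; Sysprep has already been run inside the guest):
# After running Sysprep inside VM1: deallocate, generalize, capture an image, then build the scale set
az vm deallocate --resource-group RG1 --name VM1
az vm generalize --resource-group RG1 --name VM1
az image create --resource-group RG1 --name VM1Image --source VM1
az vmss create --resource-group RG1 --name ScaleSet1 --image VM1Image \
  --instance-count 2 --admin-username azureuser --admin-password '<strong-password>'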
The other actions are not part of the necessary sequence for creating a scale set from a custom image based on an existing VM:
Install Network Load Balancing (NLB) on VM1: NLB is used for load balancing traffic within a set of VMs, not for creating a custom image. NLB configuration would typically be done after the scale set is created and deployed, or as part of application deployment within the VM instances.
From Azure CLI, apply a custom script extension: Custom script extensions are used to run scripts on VMs after they are provisioned from an image. They are not needed to prepare a VM for image capture itself. While you can use custom script extensions in scale sets to customize instances after deployment, it’s not a prerequisite step for creating the custom image.
Final Answer:
Answer Area
1. Run sysprep.exe on VM1.
2. From Azure CLI, deallocate VM1 and mark VM1 as generalized.
3. Create a virtual machine scale set.
You have an Azure virtual network that contains a subnet named Subnet1. Subnet1 contains 50 virtual machines. Twenty-five of the virtual machines are web servers and the other 25 are application servers.
You need to filter traffic between the web servers and the application servers by using application security groups.
Which additional resource should you provision?
Azure Firewall
a user-defined route
Azure Private Link
a network security group (NSG)
To filter traffic between web servers and application servers within the same subnet using Application Security Groups (ASGs), you need to use a network security group (NSG).
Here’s why:
Network Security Groups (NSGs): NSGs are Azure’s fundamental network traffic filtering service. They allow you to control inbound and outbound traffic to Azure resources within a virtual network. NSGs use security rules to allow or deny traffic based on various criteria such as:
Source and destination IP addresses or address prefixes
Source and destination ports
Protocol (TCP, UDP, ICMP)
Application Security Groups (ASGs)
Application Security Groups (ASGs): ASGs are designed to group virtual machines based on application context. They allow you to define network security rules based on these groups instead of individual IP addresses. This simplifies security management, especially in dynamic environments where IP addresses might change.
Filtering with ASGs and NSGs: To filter traffic using ASGs, you need to create an NSG and define security rules within that NSG that use ASGs as the source or destination.
In this scenario, you would:
Create two Application Security Groups: One for the web servers (e.g., ASG_WebServers) and one for the application servers (e.g., ASG_AppServers).
Associate the VMs with ASGs: Associate the 25 web server VMs with ASG_WebServers and the 25 application server VMs with ASG_AppServers.
Create a Network Security Group (if one doesn’t already exist for Subnet1).
Create NSG Security Rules: Within the NSG associated with Subnet1, you would create rules like:
Allow inbound traffic from ASG_WebServers to ASG_AppServers on the specific ports required for communication between web and application servers (e.g., port 8080).
Deny inbound traffic from ASG_WebServers to ASG_AppServers on other ports if needed, to restrict communication.
You can also add rules to control traffic in the reverse direction (from AppServers to WebServers) if required.
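A minimal Azure CLI sketch of these steps (the resource group, VNet, subnet, port, and priority values are illustrative; associating each NIC with its ASG is done on the NIC's IP configuration and is omitted here):
# ASGs for the two tiers, plus an NSG rule that allows web-to-app traffic on port 8080
az network asg create --resource-group RG1 --name ASG_WebServers
az network asg create --resource-group RG1 --name ASG_AppServers
az network nsg create --resource-group RG1 --name NSG1
az network nsg rule create --resource-group RG1 --nsg-name NSG1 --name AllowWebToApp \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --source-asgs ASG_WebServers --destination-asgs ASG_AppServers --destination-port-ranges 8080
az network vnet subnet update --resource-group RG1 --vnet-name VNet1 --name Subnet1 \
  --network-security-group NSG1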
Let’s look at why the other options are not the correct “additional resource”:
Azure Firewall: Azure Firewall is a more advanced, cloud-native firewall service that provides network and application-level protection, including threat intelligence. While Azure Firewall can filter traffic, it’s an over-engineered solution for simply filtering traffic within a subnet based on application groups. NSGs with ASGs are the more appropriate and cost-effective solution for this specific requirement.
A user-defined route (UDR): User-defined routes control the routing of traffic, not the filtering of traffic based on application groups. UDRs are used to direct traffic through network appliances like firewalls or NVAs. They are not relevant for implementing application security group-based filtering.
Azure Private Link: Azure Private Link provides private connectivity to Azure PaaS services and customer-owned services. It is not related to filtering traffic between VMs within a subnet using application security groups.
Conclusion:
To filter traffic between web servers and application servers using Application Security Groups, you need to provision a network security group (NSG) as the additional resource. You will then configure NSG rules that use the ASGs to define the traffic filtering policies.
Final Answer: The final answer is a network security group (NSG).