test2 Flashcards
https://infraexam.com/microsoft/az-304-microsoft-azure-architect-design/az-304-part-07/
Overview. General Overview
Litware, Inc. is a medium-sized finance company.
Overview. Physical Locations
Litware has a main office in Boston.
Existing Environment. Identity Environment
The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.
Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.
The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by using the Azure portal, they must connect from a hybrid Azure AD-joined device.
Existing Environment. Azure Environment
Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).
The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.
Existing Environment. On-premises Environment
The on-premises network of Litware contains the resources shown in the following table.
Name: SERVER1, SERVER2, SERVER3
Type: Ubuntu Server 18.04 virtual machines hosted on Hyper-V
Configuration: The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
Name: SERVER10
Type: Server that runs Windows Server 2016
Configuration: The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.
Existing Environment. Network Environment
Litware has ExpressRoute connectivity to Azure.
Planned Changes and Requirements. Planned Changes
Litware plans to implement the following changes:
✑ Migrate DB1 and DB2 to Azure.
✑ Migrate App1 to Azure virtual machines.
✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.
Planned Changes and Requirements. Authentication and Authorization Requirements
Litware identifies the following authentication and authorization requirements:
✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).
✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.
✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.
✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.
✑ RBAC roles must be applied at the highest level possible.
Planned Changes and Requirements. Resiliency Requirements
Litware identifies the following resiliency requirements:
✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:
- Maintain availability if two availability zones in the local Azure region fail.
- Fail over automatically.
- Minimize I/O latency.
✑ App1 must meet the following requirements:
- Be hosted in an Azure region that supports availability zones.
- Be hosted on Azure virtual machines that support automatic scaling.
- Maintain availability if two availability zones in the local Azure region fail.
Planned Changes and Requirements. Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.
✑ App1 must not share physical hardware with other workloads.
Planned Changes and Requirements. Business Requirements
Litware identifies the following business requirements:
✑ Minimize administrative effort.
✑ Minimize costs.
You plan to migrate App1 to Azure. The solution must meet the authentication and authorization requirements.
Which type of endpoint should App1 use to obtain an access token?
Azure Instance Metadata Service (IMDS)
Azure AD
Azure Service Management
Microsoft identity platform
The correct answer is: Azure Instance Metadata Service (IMDS)
Explanation:
Managed Identities and IMDS:
Why it’s the right choice: The requirements state that “To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app”. Managed identities for Azure resources provide an identity that applications running in an Azure VM can use to access other Azure resources. The Azure Instance Metadata Service (IMDS) is the service that provides this identity information to the VM.
How it works:
You enable a managed identity for the virtual machines hosting App1.
Within the App1 code, you make a request to the IMDS to obtain an access token.
The IMDS endpoint, which is available to every Azure VM at a well-known non-routable IP address, returns a token that can be used to access other Azure resources (e.g., storage accounts, Key Vault) without requiring credentials to be stored in the application code. This access token is rotated automatically by the managed identity service.
This token is then passed to the destination service to provide access, after verifying the token is valid with Azure AD.
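To make the flow concrete, here is a minimal sketch of the token request App1 could make from inside one of its VMs, using Python and the well-known IMDS endpoint (the target resource URI, Azure Storage here, is only an example):

    # Minimal sketch: request an access token from IMDS using the VM's managed identity.
    # Assumes a managed identity is enabled on the VM; the resource URI is an example.
    import requests

    IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

    def get_managed_identity_token(resource="https://storage.azure.com/"):
        response = requests.get(
            IMDS_TOKEN_URL,
            params={"api-version": "2018-02-01", "resource": resource},
            headers={"Metadata": "true"},  # required; rejects requests forwarded by proxies
            timeout=5,
        )
        response.raise_for_status()
        return response.json()["access_token"]

    # The returned bearer token is then sent to the target service, for example in an
    # "Authorization: Bearer <token>" header on an Azure Storage REST call.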
Security Benefits: Using managed identities and IMDS avoids storing sensitive credentials in configuration files, environment variables, or the application code itself. This is a security best practice.
Relevance to the scenario: It directly fulfills the requirement to use managed identities for accessing Azure resources from App1.
Why Other Options are Incorrect:
Azure AD: While Azure AD authenticates users and apps, the app itself (App1 running on the VM) does not need to perform a standard Azure AD login; the managed identity handles this for the application. The application obtains its token from IMDS and does not call the Azure AD endpoint directly.
Azure Service Management: This is the deprecated (classic) management endpoint for Azure and is not the correct way to authenticate application-level access.
Microsoft identity platform: This is the overall identity platform in Azure, but an application running on a VM with a managed identity does not retrieve tokens from it directly. App1 should not use the Microsoft identity platform directly; it should use IMDS to get a token for its managed identity.
In Summary:
The correct endpoint for App1 to obtain an access token is the Azure Instance Metadata Service (IMDS). When used with a managed identity, IMDS is designed specifically to provide applications running inside Azure VMs with access tokens for accessing other Azure services.
Important Notes for Azure 304 Exam:
Managed Identities: You MUST understand how managed identities work and how to use them. Be familiar with the two types of managed identity: System-assigned and User-assigned.
Azure Instance Metadata Service (IMDS): Know the purpose of IMDS and how it provides information about the Azure VM instance (including access tokens for managed identities).
Secure Authentication: Understand the security benefits of using managed identities instead of embedding secrets in code or configuration files.
Authentication Scenarios: Be able to recognize different authentication scenarios (user login vs. application access) and know which Azure service to use to achieve the required access pattern.
Service Principals: Be familiar with the concept of service principals and their relationship with application identity, but understand that a service principal is not directly needed here since the managed identity service creates and manages the service principals.
Key Takeaway: For applications running in Azure VMs that need to access other Azure resources, managed identities via the Azure IMDS are the recommended approach. The application does not authenticate with Azure AD directly; it gets a token from the IMDS.
HOTSPOT
You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
To register the users for Azure MFA, use:
Azure AD Identity Protection
Security defaults in Azure AD
Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure:
Grant control in capolicy1
Session control in capolicy1
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant
Correct Answers:
To register the users for Azure MFA, use: Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure: Grant control in capolicy1
Explanation:
Per-User MFA in the MFA Management UI:
Why it’s the right choice: Per-user MFA is the traditional way of enabling MFA on individual user accounts and is typically used when security defaults are not enabled, because it gives more granular control. The users who manage the production environment must be registered for Azure MFA before the conditional access policy can require it at sign-in.
How it works: Enabling per-user MFA causes each of the required users to be registered for multi-factor authentication. This method is ideal when you want direct control over each user’s MFA status, or when security defaults are not enabled.
Relevance to the scenario: The requirement specifies that “users must authenticate by using Azure MFA when they sign in to the Azure portal.” The first step is to register the users.
Grant Control in capolicy1:
Why it’s the right choice: The scenario states that a conditional access policy (capolicy1) already exists, so this is where MFA must be enforced. Within the grant controls of the policy, you require multi-factor authentication to satisfy the requirement.
How it works: You modify capolicy1 so that all the required conditions are satisfied before access to the Azure portal is granted. In addition to requiring MFA, you may also need other grant controls, such as requiring a hybrid Azure AD-joined device, to fulfill the full requirement of the conditional access policy.
Relevance to the scenario: The conditional access policy enforces access control based on the authentication and authorization rules specified in the requirements, which also specify that “users…must connect from a hybrid Azure AD-joined device”. This conditional access policy will enforce the requirement for MFA.
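For illustration only, the grant-control portion of a policy like capolicy1 might look as follows when expressed as the Microsoft Graph conditional access policy body (shown here as a Python dictionary; the group ID and the exact condition set are assumptions):

    # Hypothetical sketch of capolicy1's relevant settings in Microsoft Graph format.
    # The group ID is a placeholder; the application ID targets Microsoft Azure Management
    # (the Azure portal).
    capolicy1 = {
        "displayName": "capolicy1",
        "state": "enabled",
        "conditions": {
            "applications": {"includeApplications": ["797f4846-ba00-4fd7-ba43-dac1f8f63013"]},
            "users": {"includeGroups": ["<production-admins-group-object-id>"]},
        },
        "grantControls": {
            "operator": "AND",
            "builtInControls": ["mfa", "domainJoinedDevice"],  # require MFA and a hybrid Azure AD-joined device
        },
    }
    # This body could be sent to https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
    # to create the policy, or used to update the existing capolicy1.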
Why Other Options are Incorrect:
To register the users for Azure MFA, use: Azure AD Identity Protection: Azure AD Identity Protection is used to detect and investigate risky sign-in behavior and to configure risk-based conditional access policies. It’s not the primary mechanism for registering users for MFA. While Identity Protection does have an MFA registration policy, it does not enable MFA, but only prompts a user to register for MFA.
To register the users for Azure MFA, use: Security defaults in Azure AD: Security defaults are a tenant-wide setting that enables multi-factor authentication along with several other baseline protections. Although they would register users for MFA, they cannot be combined with the fine-grained conditional access control needed here, and therefore this is not the correct answer.
To enforce Azure MFA authentication, configure: Session control in capolicy1: Session controls in a conditional access policy are used to control user browser sessions, not to enforce MFA requirements, and are therefore not the correct mechanism to solve this requirement.
To enforce Azure MFA authentication, configure: Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant: Identity protection is a good tool for detecting risk and automatically responding to high risk sign-in attempts. It does not directly enable MFA for all user logins, but rather responds to high risk sign-in attempts, therefore this is not the correct service.
In Summary:
The best approach is to first enable Per-user MFA, and then enforce MFA through the Conditional Access Policy (capolicy1).
Important Notes for Azure 304 Exam:
Azure MFA: Know how to enable and enforce MFA for users. Be familiar with both Per-user MFA, and the security defaults settings in Azure AD.
Conditional Access Policies: You MUST know how conditional access policies work and how to configure access rules (including MFA requirements).
Grant Controls: Understand the use of grant controls to enforce authentication requirements.
Azure AD Identity Protection: Understand how Identity Protection works, but be aware it is for risk-based policies, and is not intended for setting up MFA on a user account, or enforcing MFA on logins.
Hybrid Azure AD Join: Be familiar with the benefits and requirements for Hybrid Azure AD-joined devices and how to use them in conjunction with conditional access policies.
Service Selection: Be able to pick the correct service for each task, and understand that setting up MFA and enforcing MFA are distinct steps that require different tools.
Azure Environment -
Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).
The litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.
On-Premises Environment -
The on-premises network of Litware contains the resources shown in the following table.
Network Environment -
Litware has ExpressRoute connectivity to Azure.
Planned Changes and Requirements
Litware plans to implement the following changes:
Migrate DB1 and DB2 to Azure.
Migrate App1 to Azure virtual machines.
Migrate the external storage used by App1 to Azure Storage.
Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.
Authentication and Authorization Requirements
Litware identifies the following authentication and authorization requirements:
Only users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).
The Network Contributor built-in RBAC role must be used to grant permissions to the network administrators for all the virtual networks in all the Azure subscriptions.
To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.
RBAC roles must be applied at the highest level possible.
Resiliency Requirements -
Litware identifies the following resiliency requirements:
Once migrated to Azure, DB1 and DB2 must meet the following requirements:
Maintain availability if two availability zones in the local Azure region fail.
Fail over automatically.
Minimize I/O latency.
App1 must meet the following requirements:
Be hosted in an Azure region that supports availability zones.
Be hosted on Azure virtual machines that support automatic scaling.
Maintain availability if two availability zones in the local Azure region fail.
Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.
App1 must NOT share physical hardware with other workloads.
Business Requirements -
Litware identifies the following business requirements:
Minimize administrative effort.
Minimize costs.
After you migrate App1 to Azure, you need to enforce the data modification requirements to meet the security and compliance requirements.
What should you do?
A. Create an access policy for the blob service.
B. Implement Azure resource locks.
C. Create Azure RBAC assignments.
D. Modify the access level of the blob service
The Goal
As before, the primary goal is to enforce this requirement:
“Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.”
Evaluating the Options Based on Proximity
Let’s analyze each option again:
A. Create an access policy for the blob service.
Why it’s closest to being correct: While it doesn’t directly enforce immutability, access policies do allow you to control write access. By carefully constructing an access policy, you could, in theory, grant write access for a specific period or to a particular user/group, and then potentially restrict it later to help prevent further modification. However, it is important to remember this does not ensure immutability and is just a temporary restriction to the data.
Why it’s still not ideal: Access policies do not inherently prevent modification. A user or process could still modify the data if granted the appropriate permissions. It can also get complex to manage.
B. Implement Azure resource locks.
Why it’s NOT a good fit: As mentioned previously, resource locks focus on preventing deletion or changes to the resources, not the data within the resources. This is not even remotely related to the requirement.
C. Create Azure RBAC assignments.
Why it’s NOT a good fit: Like resource locks, RBAC controls the permissions of who can do what with the Azure resources. RBAC does not provide a mechanism for ensuring immutability of the data.
D. Modify the access level of the blob service.
Why it’s NOT a good fit: Access levels (private, blob, container) control anonymous read access to the containers in the storage account, not whether the data within them can be modified.
The Closest Correct Answer
Given the limited options, A (Create an access policy for the blob service) is the closest to the correct approach; however, it still does not fully satisfy the requirement.
Why? Because out of all the given answers it does the best to address the prompt, albeit incorrectly. Access policies are better than nothing, while the rest do not even come close to addressing the prompt.
Important Note for the AZ-305 Exam
The main takeaway here is that the exam will sometimes give you a multiple-choice question where the best answer isn’t provided. This forces you to choose the least incorrect option.
Here’s what you need to remember for these types of questions:
Understand Core Concepts: Have a strong grasp of the core Azure services, like Storage, RBAC, etc. and how they function.
Identify What’s Missing: If the correct feature is not an option, identify what comes closest.
Consider the Intent: What is the requirement asking? Then look for the answer that best aligns with that intent. In this case, the intent is to prevent modification of data.
Process of Elimination: Discard answers that are completely irrelevant.
A scenario where option A could be used (although it still does not satisfy the prompt):
Access policies for data immutability could look like this:
Grant Write Access Initially: A user/process with write access writes the data
Restrict Write Access: Access policies would restrict write access to all but users/groups responsible for administration of the data.
Create New Policy: After the 3-year window, an access policy could be created to provide read-only access.
This method has some issues:
Complexity: Managing access policies like this is complex and is not scalable.
Not Truly Immutable: Even with all that complexity, a user with the right access can still delete and modify the data.
In summary:
A. Create an access policy for the blob service is the closest to the correct approach in the given options. The correct approach would have been to set up an immutable policy, which is not provided in the answers. For the AZ-305 exam, it is important to choose the answer that is closest to correct, even if it is not correct.
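For reference, the approach the scenario actually calls for, a time-based retention (immutability) policy of roughly three years on the blob container that holds the App1 data, could be sketched with the azure-mgmt-storage Python SDK as follows (resource names are hypothetical):

    # Hypothetical sketch: apply a ~3-year WORM (time-based retention) policy to a container.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import ImmutabilityPolicy

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    client.blob_containers.create_or_update_immutability_policy(
        resource_group_name="rg-app1",
        account_name="stapp1data",
        container_name="app1-data",
        parameters=ImmutabilityPolicy(immutability_period_since_creation_in_days=1095),
    )
    # New blobs can still be written, but once the policy is locked, new and existing data
    # cannot be modified or deleted until the retention period expires.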
HOTSPOT
You plan to migrate App1 to Azure.
You need to recommend a storage solution for App1 that meets the security and compliance requirements.
Which type of storage should you recommend, and how should you recommend configuring the storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
Premium page blobs
Premium file shares
Standard general-purpose v2
Configuration:
NFSv3
Large file shares
Hierarchical namespace
Here’s the breakdown of the correct answer and why:
Storage account type: Standard general-purpose v2
Configuration: Hierarchical namespace
Explanation:
Standard general-purpose v2: This storage account type allows you to utilize Blob storage, which is the key to meeting the immutability requirement. Azure Blob storage offers Immutability policies (write once, read many - WORM). This directly addresses the security and compliance requirement to prevent modification of new and existing data for three years.
Hierarchical namespace: Enabling the hierarchical namespace turns the account into Azure Data Lake Storage Gen2, which is built on top of Standard general-purpose v2. This matches App1’s existing storage, which provides Apache Hadoop-compatible data storage and supports POSIX access control list (ACL) file-level permissions, making it the most relevant configuration choice among the options.
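As a rough illustration of the recommendation, the storage account could be created with the hierarchical namespace enabled using the azure-mgmt-storage SDK (the account name, region, and SKU are assumptions):

    # Hypothetical sketch: Standard general-purpose v2 account with hierarchical namespace.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    poller = client.storage_accounts.begin_create(
        "rg-app1",
        "stapp1data",  # example account name
        StorageAccountCreateParameters(
            location="eastus",
            kind="StorageV2",              # Standard general-purpose v2
            sku=Sku(name="Standard_ZRS"),  # zone-redundant SKU is an assumption, not part of the answer
            is_hns_enabled=True,           # hierarchical namespace (Data Lake Storage Gen2)
        ),
    )
    account = poller.result()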
Why other options are incorrect:
Storage Account Type:
Premium page blobs: Primarily used for Azure Virtual Machine disks and do not offer built-in immutability policies suitable for this requirement.
Premium file shares: While offering SMB access (potentially useful for on-premises access), they don’t have the built-in immutability policies of Blob storage.
Configuration:
NFSv3: While a file sharing protocol, it’s less relevant in this context as the primary requirement is immutability. Also, accessing blob storage from on-premises would typically be done through other methods (like Azure File Sync or the Storage Explorer).
Large file shares: This refers to the capacity of file shares, not the core security and compliance feature needed here.
Important Considerations:
On-premises access: While the recommendation leans towards Blob storage for immutability, you’ll need to consider how on-premises users and services will access the data. Options include:
Azure Storage Explorer: A free tool that allows access to Azure Storage.
Azure File Sync: If the data lends itself to a file-sharing model, you could sync a portion of the blob storage to an on-premises file server.
Direct API access: On-premises applications could be developed to interact with the Blob Storage APIs.
Preventing public endpoint access: This can be achieved by configuring private endpoints for the storage account, regardless of the storage type chosen.
HOTSPOT
You plan to migrate DB1 and DB2 to Azure.
You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.
What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose
Explanation:
Box 1: SQL Managed Instance
Scenario: Once migrated to Azure, DB1 and DB2 must meet the following requirements:
✑ Maintain availability if two availability zones in the local Azure region fail.
✑ Fail over automatically.
✑ Minimize I/O latency.
The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a server or all databases in a managed instance to another region. It is a declarative abstraction on top of the existing active geo-replication feature, designed to simplify deployment and management of geo-replicated databases at scale. You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined policy. The latter option allows you to automatically recover multiple related databases in a secondary region after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or SQL Managed Instance availability in the primary region.
Box 2: Business critical
SQL Managed Instance is available in two service tiers:
General purpose: Designed for applications with typical performance and I/O latency requirements.
Business critical: Designed for applications with low I/O latency requirements and minimal impact of underlying maintenance operations on the workload.
You plan to migrate App1 to Azure.
You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.
What should you include in the recommendation?
a private endpoint
a service endpoint that has a service endpoint policy
Azure public peering for an ExpressRoute circuit
Microsoft peering for an ExpressRoute circuit
Understanding the Requirements
Here are the key networking-related requirements:
Security:
“Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.”
Connectivity:
“On-premises users and services must be able to access the Azure Storage account that will host the data in App1.”
Existing Environment:
“Litware has ExpressRoute connectivity to Azure.”
Analyzing the Options
Let’s evaluate each option against these requirements:
a private endpoint
Pros: Provides a private IP address within the virtual network for the storage account, preventing public access and meeting the security requirement. It also enables on-premises resources to connect to that private IP over the ExpressRoute connection.
Cons: Can increase cost slightly, requires virtual network integration.
Suitability: Highly suitable. It meets the security requirement of preventing public access and allows on-premises users to access the storage account over the private network and ExpressRoute connection.
a service endpoint that has a service endpoint policy
Pros: Allows VNETs to access the storage account without exposing it to the public internet.
Cons: Does not allow for on-premises resources to access the storage account.
Suitability: Not suitable. Service endpoints only secure traffic that originates from Azure virtual networks; on-premises traffic would still have to reach the storage account through its public endpoint.
Azure public peering for an ExpressRoute circuit
Pros: Can provide access to Azure public services, such as storage, via the ExpressRoute connection.
Cons: Does not block access from the public internet, which does not meet the security requirements.
Suitability: Not suitable because public peering is not a secure method to access storage.
Microsoft peering for an ExpressRoute circuit
Pros: Allows private access to Azure resources, including Azure Storage.
Cons: Does not natively prevent access from the public internet. Requires additional configuration to do so.
Suitability: Although Microsoft peering is the path the ExpressRoute traffic would take to reach the storage service, it is not a configuration that prevents public access.
The Correct Recommendation
Based on the analysis, the correct solution is:
a private endpoint
Explanation
Private Endpoints provide a network interface for the storage account directly within a virtual network. This ensures that access to the storage is limited to only resources within the private network. Traffic goes through the ExpressRoute circuit to the private IP on the VNET.
By using a private endpoint, you effectively prevent access from the public internet, fulfilling the security requirement.
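A minimal sketch of that recommendation, creating a private endpoint for the blob service of the App1 storage account with the azure-mgmt-network SDK, might look like this (all resource names and IDs are placeholders):

    # Hypothetical sketch: private endpoint for the blob service of the App1 storage account.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient
    from azure.mgmt.network.models import PrivateEndpoint, PrivateLinkServiceConnection, Subnet

    network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    storage_account_id = (
        "/subscriptions/<subscription-id>/resourceGroups/rg-app1"
        "/providers/Microsoft.Storage/storageAccounts/stapp1data"
    )
    subnet_id = (
        "/subscriptions/<subscription-id>/resourceGroups/rg-app1"
        "/providers/Microsoft.Network/virtualNetworks/vnet-app1/subnets/snet-privatelink"
    )

    network.private_endpoints.begin_create_or_update(
        resource_group_name="rg-app1",
        private_endpoint_name="pe-stapp1data-blob",
        parameters=PrivateEndpoint(
            location="eastus",
            subnet=Subnet(id=subnet_id),
            private_link_service_connections=[
                PrivateLinkServiceConnection(
                    name="stapp1data-blob",
                    private_link_service_id=storage_account_id,
                    group_ids=["blob"],  # target sub-resource: the blob service
                )
            ],
        ),
    ).result()
    # The storage account's public network access would then be disabled, and a private DNS
    # zone (privatelink.blob.core.windows.net) linked so on-premises clients resolve the
    # private IP over ExpressRoute private peering.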
Why other options are not correct:
Service endpoints only secure access from Azure virtual networks to the storage account; they do not prevent on-premises systems from reaching the storage account through its public endpoint.
Public peering is used to access public Azure services and does not fulfill the security requirement of preventing access through the public endpoint.
Microsoft peering lets on-premises systems reach Azure services over the ExpressRoute circuit, but it does not prevent those systems from also using the public endpoint. A private endpoint is needed to block the public endpoint.
Important Notes for the AZ-305 Exam
Private Endpoints vs Service Endpoints: Know the fundamental differences. Service endpoints provide network isolation for traffic coming from Azure virtual networks but do not prevent public access. Private endpoints give the service a private IP address inside a virtual network, are reachable from on-premises over ExpressRoute or VPN, and allow the public endpoint to be disabled.
ExpressRoute Peering: Understand the differences between Microsoft, Azure public and private peering.
Security and Compliance: Prioritize solutions that align with security requirements. Blocking public access is a common ask.
Read Requirements Carefully: Ensure you meet all requirements including the networking and security.
DRAG DROP
You need to configure an Azure policy to ensure that the Azure SQL databases have TDE enabled. The solution must meet the security and compliance requirements.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
Create an Azure policy definition that uses the deployIfNotExists effect.
Create a user-assigned managed identity.
Invoke a remediation task.
Create an Azure policy assignment.
Create an Azure policy definition that uses the Modify effect.
Answer Area
Understanding the Goal
The goal is to use Azure Policy to automatically enable TDE on all Azure SQL databases within the scope of the policy.
Key Concepts
Azure Policy: Allows you to create, assign, and manage policies that enforce rules across your Azure resources.
Policy Definition: Specifies the conditions that must be met and the actions to take if the conditions are not met.
Policy Assignment: Applies the policy definition to a specific scope (subscription, resource group, etc.).
deployIfNotExists Effect: This policy effect will deploy an ARM template if the resource does not have the configuration (TDE enabled).
Modify Effect: This effect will modify the resource to enforce the condition if it does not exist.
Remediation Task: A process for correcting resources that are not compliant with the policy.
User-Assigned Managed Identity: An identity object in Azure which allows for RBAC permissions and avoids the need for storing credentials for an application.
Steps in the Correct Sequence
Here’s the correct sequence of actions, with explanations:
Create an Azure policy definition that uses the deployIfNotExists effect.
Why? This is the first step. You need to define what the policy should do. For TDE, deployIfNotExists is used to deploy a configuration if it’s missing. The deployIfNotExists will deploy an ARM template that enables TDE on the database.
This step specifies the “rule” that will be enforced.
Create an Azure policy assignment.
Why? After defining the policy, you need to assign it to a scope, such as a subscription or a resource group. This step specifies where the policy is applied.
This tells Azure what needs to be checked against the policy.
Invoke a remediation task.
Why? The policy assignment will evaluate and remediate newly created resources. Existing non-compliant resources, however, require a remediation task to be launched so that the policy is applied to them as well.
The Correct Drag-and-Drop Order
Here’s how you should arrange the actions in the answer area:
Create an Azure policy definition that uses the deployIfNotExists effect.
Create an Azure policy assignment.
Invoke a remediation task.
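For reference, the core of a deployIfNotExists policy rule for TDE has roughly the following shape (shown here as a Python dictionary; the role definition and the ARM deployment template are abridged placeholders, and the built-in TDE policy differs in detail):

    # Approximate shape of a deployIfNotExists policy rule that enables TDE.
    policy_rule = {
        "if": {"field": "type", "equals": "Microsoft.Sql/servers/databases"},
        "then": {
            "effect": "deployIfNotExists",
            "details": {
                "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
                "existenceCondition": {
                    "field": "Microsoft.Sql/servers/databases/transparentDataEncryption/status",
                    "equals": "Enabled",
                },
                "roleDefinitionIds": ["<SQL DB Contributor role definition resource ID>"],
                "deployment": {
                    "properties": {
                        "mode": "incremental",
                        "template": {},  # ARM template that sets the TDE status to Enabled (omitted)
                    }
                },
            },
        },
    }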
Why Other Options are Incorrect in this context:
Create a user-assigned managed identity: Managed identities are used with policies that have the deployIfNotExists effect, but one does not need to be created separately here. The system-assigned managed identity created as part of the policy assignment performs the remediation, so creating a user-assigned managed identity is unnecessary and outside the scope of the task.
Create an Azure policy definition that uses the Modify effect: Although Modify is used in Azure policies, it is not relevant in the configuration of TDE. deployIfNotExists is a better approach because TDE needs to be enabled, which requires a deployment.
Important Notes for the AZ-305 Exam
Azure Policy Effects: Be extremely familiar with different policy effects, especially deployIfNotExists, audit, deny, and modify.
Policy Definition vs. Assignment: Understand the difference between defining a policy and applying it to resources.
Remediation: Understand how to use remediation tasks to fix non-compliant resources.
Scope: Be able to set the appropriate scope for policy assignments.
Managed Identities: Know how to use managed identities for secure resource management with Azure policies.
HOTSPOT
You plan to migrate App1 to Azure.
You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Number of host groups:
1
2
3
6
Number of virtual machine scale sets:
0
1
3
Number of host groups: 3
Number of virtual machine scale sets: 1
Explanation:
Number of host groups: 3
Requirement: Maintain availability if two availability zones in the local Azure region fail.
Dedicated Hosts and Zones: Azure Dedicated Hosts are a regional resource, but you deploy host groups within specific availability zones. To be resilient to the failure of two availability zones, you need your virtual machines spread across at least three availability zones. Since you’re using dedicated hosts, you need a host group in each of those three availability zones.
Number of virtual machine scale sets: 1
Requirement: Be hosted on Azure virtual machines that support automatic scaling and maintain availability if two availability zones fail.
Virtual Machine Scale Sets and Zones: Azure Virtual Machine Scale Sets allow you to deploy and manage a set of identical, auto-scaling virtual machines. A single VM Scale Set can be configured to span multiple availability zones. This is the recommended approach for high availability and automatic scaling across zones. You don’t need multiple scale sets for each zone; one can manage the deployment across the necessary zones.
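A rough sketch of the host-group part of this answer, using the azure-mgmt-compute SDK with assumed names and region, could look like this:

    # Hypothetical sketch: one dedicated host group per availability zone.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient
    from azure.mgmt.compute.models import DedicatedHostGroup

    compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    for zone in ("1", "2", "3"):
        compute.dedicated_host_groups.create_or_update(
            resource_group_name="rg-app1",
            host_group_name=f"hg-app1-zone{zone}",
            parameters=DedicatedHostGroup(
                location="eastus2",              # example region that has availability zones
                zones=[zone],                    # each host group is pinned to one zone
                platform_fault_domain_count=1,
            ),
        )
    # A single virtual machine scale set configured with zones=["1", "2", "3"] then provides
    # the automatic scaling and spreads the App1 instances across all three zones.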
Why other options are incorrect:
Number of host groups:
1: This would not provide any availability zone resilience. If the single zone with the host group fails, App1 goes down.
2: This would only protect against the failure of a single availability zone. The requirement is resilience against two zone failures.
6: While this would provide more resilience, it’s not necessary to meet the specific requirement of tolerating two zone failures and would likely be more expensive.
Number of virtual machine scale sets:
0: You need to use Virtual Machine Scale Sets to meet the automatic scaling requirement.
3: While technically possible to have three separate VM scale sets (one in each zone), it adds unnecessary management complexity. A single VM scale set configured to span multiple availability zones is the standard and more efficient approach.
You need to implement the Azure RBAC role assignments for the Network Contributor role.
The solution must meet the authentication and authorization requirements.
What is the minimum number of assignments that you must use?
1
2
5
10
15
The correct answer is 2.
Here’s why:
Management Groups: The most efficient way to apply RBAC roles across multiple subscriptions is by using Azure Management Groups. Since all subscriptions are within an Enterprise Agreement (EA), it’s highly likely that they are organized under Management Groups.
Litware.com and dev.litware.com Tenants: You have subscriptions in two different tenants (litware.com and dev.litware.com). Therefore, even if the subscriptions within each tenant are organized under a single management group, you would need to apply the Network Contributor role at the management group level for each tenant.
Minimum Assignments:
One assignment of the Network Contributor role at the management group level associated with the litware.com tenant. This will apply the role to all 10 subscriptions within that tenant.
One assignment of the Network Contributor role at the management group level associated with the dev.litware.com tenant. This will apply the role to all 5 subscriptions within that tenant.
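As a sketch of one of these two assignments, the Network Contributor role could be assigned at a management group scope with the azure-mgmt-authorization SDK (the management group ID and principal ID are placeholders):

    # Hypothetical sketch: assign Network Contributor at management group scope.
    import uuid
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.authorization import AuthorizationManagementClient
    from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

    auth = AuthorizationManagementClient(DefaultAzureCredential(), "<subscription-id>")

    scope = "/providers/Microsoft.Management/managementGroups/<litware-root-mg>"
    network_contributor = (
        "/providers/Microsoft.Authorization/roleDefinitions/"
        "4d97b98b-1d4f-4787-a291-c67834d212e7"  # built-in Network Contributor role ID
    )

    auth.role_assignments.create(
        scope=scope,
        role_assignment_name=str(uuid.uuid4()),
        parameters=RoleAssignmentCreateParameters(
            role_definition_id=network_contributor,
            principal_id="<network-admins-group-object-id>",
        ),
    )
    # The second assignment is made the same way against the dev.litware.com tenant's
    # management group, because an RBAC assignment cannot span Azure AD tenants.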
Why other options are incorrect:
1: You have subscriptions in two different tenants, so a single assignment won’t cover all subscriptions.
5: This might be the number of subscriptions in one of the tenants, but not all.
10: This might be the number of subscriptions in the litware.com tenant, but not all.
15: This is the total number of subscriptions, and you don’t need to assign the role individually to each subscription if using management groups.
HOTSPOT
You plan to migrate App1 to Azure.
You need to estimate the compute costs for App1 in Azure. The solution must meet the security and compliance requirements.
What should you use to estimate the costs, and what should you implement to minimize the costs? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To estimate the costs, use:
Azure Advisor
The Azure Cost Management Power BI app
The Azure Total Cost of Ownership (TCO) calculator
Implement:
Azure Reservations
Azure Hybrid Benefit
Azure Spot Virtual Machine pricing
To estimate the costs, use: The Azure Total Cost of Ownership (TCO) calculator
Why correct: The Azure TCO calculator is specifically designed to compare the cost of running your workloads on-premises versus in Azure. It allows you to input details about your current infrastructure and planned Azure resources to get an estimated cost for migrating to the cloud. This is the most direct and comprehensive tool for this purpose.
Implement: Azure Reservations
Why correct: Azure Reservations offer significant discounts (up to 72% compared to pay-as-you-go pricing) by committing to using specific Azure resources (like virtual machines for App1) for a defined period (typically 1 or 3 years). This is a highly effective way to minimize compute costs for predictable workloads like App1 once it’s migrated.
Why the other options are less suitable:
To estimate the costs, use:
Azure Advisor: While Azure Advisor provides cost optimization recommendations, it primarily analyzes your existing Azure usage. Since App1 is being migrated, you don’t have existing Azure usage for it yet, making the TCO calculator more appropriate for initial estimations.
The Azure Cost Management Power BI app: This is a tool for visualizing and analyzing your current Azure spending. It’s not designed for pre-migration cost estimations.
Implement:
Azure Hybrid Benefit: Azure Hybrid Benefit can significantly reduce costs for Windows Server and SQL Server virtual machines when you already own eligible licenses. In this scenario, however, App1 runs on Ubuntu Server 18.04 virtual machines, so Hybrid Benefit would not apply to its compute costs. Azure Reservations are therefore the more broadly applicable and impactful way to minimize the compute costs of a predictable workload like App1.
Azure Spot Virtual Machine pricing: Spot VMs offer deep discounts but come with the risk of eviction if Azure needs the capacity back. For a production application like App1, especially considering the security and compliance requirements mentioned in the broader scenario, relying on potentially unstable Spot VMs is generally not recommended. The risk of interruption outweighs the cost savings in this context.
In summary:
The Azure TCO calculator is the most direct tool for pre-migration cost estimation.
Azure Reservations are generally the most effective and broadly applicable method for implementing cost savings for compute resources like the VMs hosting App1, assuming a relatively stable workload.
Existing Environment: Technical Environment
The on-premises network contains a single Active Directory domain named contoso.com.
Contoso has a single Azure subscription.
Existing Environment: Business Partnerships
Contoso has a business partnership with Fabrikam, Inc. Fabrikam users access some Contoso applications over the internet by using Azure Active Directory (Azure AD) guest accounts.
Requirements: Planned Changes
Contoso plans to deploy two applications named App1 and App2 to Azure.
Requirements: App1
App1 will be a Python web app hosted in Azure App Service that requires a Linux runtime.
Users from Contoso and Fabrikam will access App1.
App1 will access several services that require third-party credentials and access strings.
The credentials and access strings are stored in Azure Key Vault.
App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.
App1 has the following data requirements:
✑ Each instance will write data to a data store in the same availability zone as the instance.
✑ Data written by any App1 instance must be visible to all App1 instances.
App1 will only be accessible from the internet. App1 has the following connection requirements:
✑ Connections to App1 must pass through a web application firewall (WAF).
✑ Connections to App1 must be active-active load balanced between instances.
✑ All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.
Every hour, you will run a maintenance task by invoking a PowerShell script that copies files from all the App1 instances. The PowerShell script will run from a central location.
Requirements: App2
App2 will be a .NET app hosted in App Service that requires a Windows runtime.
App2 has the following file storage requirements:
✑ Save files to an Azure Storage account.
✑ Replicate files to an on-premises location.
✑ Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.
You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.
Application Development Requirements
Application developers will constantly develop new versions of App1 and App2.
The development process must meet the following requirements:
✑ A staging instance of a new application version must be deployed to the application host before the new version is used in production.
✑ After testing the new version, the staging version of the application will replace the production version.
✑ The switch to the new application version from staging to production must occur without any downtime of the application.
Identity Requirements
Contoso identifies the following requirements for managing Fabrikam access to resources:
✑ The solution must minimize development effort.
Security Requirement
All secrets used by Azure services must be stored in Azure Key Vault.
Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.
DRAG DROP
You need to recommend a solution that meets the file storage requirements for App2.
What should you deploy to the Azure subscription and the on-premises network? To answer, drag the appropriate services to the correct locations. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Services
Azure Blob Storage
Azure Data Box
Azure Data Box Gateway
Azure Data Lake Storage
Azure File Sync
Azure Files
Answer Area
Azure subscription: Service
On-premises network: Service
Deconstruct the Requirements: First, identify the key requirements for App2’s file storage:
Store files in an Azure Storage account.
Replicate files to an on-premises location.
On-premises clients need to read files via SMB over the LAN.
Azure Storage Options - Initial Brainstorm: Think about the different Azure Storage services and their core functionalities:
Azure Blob Storage: Excellent for unstructured data, cost-effective, but doesn’t natively provide SMB access or direct on-premises synchronization.
Azure Data Lake Storage: Built on Blob Storage, optimized for big data analytics. Doesn’t directly address SMB access or on-premises sync in the way required.
Azure Files: Provides fully managed file shares in the cloud, accessible via SMB. This seems promising for on-premises access.
Azure File Sync: A service to synchronize Azure File shares with on-premises Windows Servers. This looks like a strong candidate for fulfilling the on-premises replication and SMB access needs.
Azure Data Box: A physical appliance for transferring large amounts of data to Azure. Not suitable for ongoing synchronization.
Azure Data Box Gateway: A virtual appliance that acts as a network file share, caching data to Azure. Although it involves an on-premises component, Azure File Sync is a better fit for the replication requirement.
Focus on the SMB Requirement: The “on-premises clients can read the files over the LAN by using the SMB protocol” requirement is a strong indicator that Azure Files will be needed in Azure. Blob Storage and Data Lake Storage don’t offer native SMB access.
Address the On-premises Replication: The requirement to “replicate files to an on-premises location” points towards a synchronization solution. Azure File Sync is specifically designed to synchronize Azure File shares with on-premises Windows Servers, making it the ideal choice for this.
Map to the Answer Area: Based on the above analysis:
Azure Subscription: Azure Files will host the file shares in the cloud.
On-premises network: Azure File Sync will be installed on a Windows Server on-premises to synchronize with the Azure File share.
Review and Confirm: Double-check if the chosen services meet all three initial requirements:
Store in Azure Storage Account? Yes, Azure Files is an Azure Storage service.
Replicate on-premises? Yes, Azure File Sync handles the synchronization.
SMB access? Yes, Azure Files provides SMB access, and File Sync makes it available on-premises.
Eliminate Other Options (and justify):
Azure Blob Storage/Data Lake Storage: Lack native SMB access.
Azure Data Box/Data Box Gateway: Not designed for continuous synchronization. Data Box is for bulk transfer, and Data Box Gateway is more of a caching solution, not direct replication of the entire Azure file share on-premises.
This systematic approach helps identify the core requirements and match them to the appropriate Azure services, leading to the correct solution.
Here’s the breakdown of the correct services and why:
Azure subscription: Azure Files
Why: Azure Files provides fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) protocol. This directly addresses the requirement to save files to an Azure Storage account and make them accessible over SMB.
On-premises network: Azure File Sync
Why: Azure File Sync is the Azure service that enables you to synchronize Azure file shares with on-premises Windows Servers. This fulfills the requirement to replicate files to an on-premises location and allows on-premises clients to read the files over the LAN using the SMB protocol.
Therefore, the correct answer area is:
Azure subscription: Azure Files
On-premises network: Azure File Sync
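To illustrate the Azure side of this answer, the file share that App2 writes to could be created with the azure-storage-file-share SDK (the connection string and share name are placeholders); the on-premises side is handled by installing the Azure File Sync agent on a Windows Server and adding the server to a sync group:

    # Hypothetical sketch: create the Azure file share that App2 saves files to.
    from azure.storage.fileshare import ShareServiceClient

    service = ShareServiceClient.from_connection_string("<storage-account-connection-string>")
    share_client = service.create_share("app2-files")  # SMB-accessible file share
    # Azure File Sync (a Storage Sync Service, a registered on-premises Windows Server, and a
    # sync group) then keeps a local copy of this share so on-premises clients can read the
    # files over the LAN by using SMB.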
Explanation of why other options are incorrect:
Azure Blob Storage: While a core Azure storage service, it doesn’t natively provide SMB access required for on-premises clients to read files over the LAN.
Azure Data Box: This is a physical appliance used for transferring large amounts of data into Azure. It’s not for ongoing synchronization or SMB access.
Azure Data Box Gateway: This is a virtual appliance that resides on your on-premises network and acts as a network file share, caching data to Azure Blob storage. While it involves an on-premises component, it doesn’t directly replicate the Azure file share for native SMB access like Azure File Sync.
Azure Data Lake Storage: This is built on top of Blob storage and is optimized for big data analytics. It doesn’t directly provide SMB access in the same way as Azure Files.
You need to recommend a solution that meets the data requirements for App1.
What should you recommend deploying to each availability zone that contains an instance of App1?
an Azure Cosmos DB that uses multi-region writes
an Azure Storage account that uses geo-zone-redundant storage (GZRS)
an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)
an Azure SQL database that uses active geo-replication
The correct answer is an Azure Cosmos DB that uses multi-region writes.
Here’s why:
Data Requirements Breakdown:
Each instance writes data to a data store in the same availability zone: This implies a need for a local data store for low latency writes.
Data written by any App1 instance must be visible to all App1 instances: This necessitates a globally consistent data store that replicates across regions.
Why Azure Cosmos DB with Multi-Region Writes Fits:
Multi-Region Writes: This feature of Cosmos DB allows you to designate multiple Azure regions as writeable. You would deploy a Cosmos DB account with write regions in both East US and West Europe.
Local Writes: Each App1 instance would be configured to write to the Cosmos DB write region closest to it (the region that contains the instance’s availability zone). This ensures low-latency writes.
Global Consistency: Cosmos DB provides various consistency levels. For this requirement, you would likely choose “Strong” or “Session” consistency to ensure that data written in one region is eventually (or immediately, with Strong consistency) visible to all other regions.
Availability Zones: Cosmos DB itself offers high availability within a region by replicating data across multiple availability zones.
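A minimal sketch of such an account, with write regions in East US and West Europe, using the azure-mgmt-cosmosdb SDK (names are placeholders, and exact model details can vary by SDK version):

    # Hypothetical sketch: Cosmos DB account with multi-region writes in both App1 regions.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cosmosdb import CosmosDBManagementClient
    from azure.mgmt.cosmosdb.models import DatabaseAccountCreateUpdateParameters, Location

    cosmos = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

    poller = cosmos.database_accounts.begin_create_or_update(
        "rg-app1",
        "cosmos-app1",  # example account name
        DatabaseAccountCreateUpdateParameters(
            location="East US",
            locations=[
                Location(location_name="East US", failover_priority=0, is_zone_redundant=True),
                Location(location_name="West Europe", failover_priority=1, is_zone_redundant=True),
            ],
            enable_multiple_write_locations=True,  # both regions accept writes
        ),
    )
    account = poller.result()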
Why Other Options Are Less Suitable:
Azure Storage account with GZRS: GZRS provides high availability and durability by replicating data synchronously across three availability zones within a primary region and asynchronously to a secondary region. However, it doesn’t offer the same level of fine-grained control over write regions and automatic data replication for active-active scenarios like Cosmos DB. Also, accessing blob storage directly from multiple instances for transactional data can be complex.
Azure Data Lake Store with GZRS: Similar limitations to Azure Storage with GZRS. It’s primarily designed for large-scale analytics data, not transactional data requiring low-latency writes from multiple instances.
Azure SQL database with active geo-replication: While active geo-replication provides read replicas in different regions, only the primary region is writable. This doesn’t directly meet the requirement of each instance writing to a local data store and having that data immediately available to all instances across regions in an active-active manner.
HOTSPOT
You are evaluating whether to use Azure Traffic Manager and Azure Application Gateway to meet the connection requirements for App1.
What is the minimum number of instances required for each service? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Azure Traffic Manager:
1
2
3
6
Azure Application Gateway:
1
2
3
6
Azure Traffic Manager: 1
Why: Azure Traffic Manager is a DNS-based traffic routing service. You only need one Traffic Manager profile to configure the geographic routing policy. Traffic Manager itself is a highly available, globally distributed service managed by Azure. You don’t need multiple instances for redundancy or load balancing the Traffic Manager service itself. Its availability is built-in.
Azure Application Gateway: 2
Why: You need at least two instances of Azure Application Gateway. Here’s the breakdown:
One instance in the East US region: To provide the WAF and load balancing for the three App1 instances in East US.
One instance in the West Europe region: To provide the WAF and load balancing for the three App1 instances in West Europe.
Since connections must pass through a WAF and you have instances in two distinct regions with traffic being directed based on geography, you need a separate Application Gateway in each region to handle the regional traffic and provide WAF protection.
Therefore, the correct answer area is:
Azure Traffic Manager: 1
Azure Application Gateway: 2
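To illustrate why a single profile suffices, here is a rough sketch of one geographic-routing Traffic Manager profile with one endpoint per regional Application Gateway, using the azure-mgmt-trafficmanager SDK (DNS names, targets, and geographic codes are assumptions):

    # Hypothetical sketch: one Traffic Manager profile, geographic routing, two endpoints.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.trafficmanager import TrafficManagerManagementClient
    from azure.mgmt.trafficmanager.models import DnsConfig, Endpoint, MonitorConfig, Profile

    tm = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

    tm.profiles.create_or_update(
        "rg-app1",
        "tm-app1",
        Profile(
            location="global",
            traffic_routing_method="Geographic",
            dns_config=DnsConfig(relative_name="app1-contoso", ttl=30),
            monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/"),
        ),
    )

    # One external endpoint per regional Application Gateway, each with a geographic mapping.
    tm.endpoints.create_or_update(
        "rg-app1", "tm-app1", "ExternalEndpoints", "eastus-appgw",
        Endpoint(target="app1-eastus.example.com", endpoint_status="Enabled",
                 geo_mapping=["GEO-NA"]),       # North America -> East US
    )
    tm.endpoints.create_or_update(
        "rg-app1", "tm-app1", "ExternalEndpoints", "westeurope-appgw",
        Endpoint(target="app1-westeurope.example.com", endpoint_status="Enabled",
                 geo_mapping=["WORLD"]),        # all other locations -> West Europe
    )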
HOTSPOT -
You need to recommend a solution to ensure that App1 can access the third-party credentials and access strings. The solution must meet the security requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Authenticate App1 by using:
A certificate
A system-assigned managed identity
A user-assigned managed identity
Authorize App1 to retrieve Key Vault
secrets by using:
An access policy
A connected service
A private link
A role assignment
Explanation:
Scenario: Security Requirement
All secrets used by Azure services must be stored in Azure Key Vault.
Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.
Box 1: A system-assigned managed identity
A system-assigned managed identity is a service principal that Azure creates and manages for a specific service instance (here, the App Service app that hosts App1), and its lifecycle is tied to that instance. Because the credential belongs to that instance and cannot be shared with other services, it satisfies the security requirement that credentials be tied to the service instance and not shared between services.
Note: Authentication with Key Vault works in conjunction with Azure Active Directory (Azure AD), which is responsible for authenticating the identity of any given security principal.
A security principal is an object that represents a user, group, service, or application that’s requesting access to Azure resources. Azure assigns a unique object ID to every security principal.
Box 2: A role assignment
You can provide access to Key Vault keys, certificates, and secrets by using Azure role-based access control (Azure RBAC), for example by assigning the Key Vault Secrets User role to the app’s managed identity at the vault scope.
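Putting the two boxes together, a minimal sketch of how App1 could read a secret by using its system-assigned managed identity (the vault URL and secret name are examples):

    # Minimal sketch: App1 reads a secret from Key Vault with its system-assigned identity.
    from azure.identity import ManagedIdentityCredential
    from azure.keyvault.secrets import SecretClient

    credential = ManagedIdentityCredential()          # uses the app's system-assigned identity
    client = SecretClient(
        vault_url="https://kv-app1.vault.azure.net",  # example vault
        credential=credential,
    )
    third_party_connection_string = client.get_secret("App1-ThirdPartyConnection").value
    # The identity must first be granted access to secrets, for example through a
    # "Key Vault Secrets User" role assignment (or an access policy) on the vault.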
You need to recommend an App Service architecture that meets the requirements for App1.
The solution must minimize costs.
What should you recommend?
one App Service Environment (ASE) per availability zone
one App Service plan per availability zone
one App Service plan per region
one App Service Environment (ASE) per region
Understanding the Requirements
Here are the key requirements for App1’s App Service deployment:
High Availability: App1 has six instances, three in East US and three in West Europe, spread across availability zones within each region.
Web App Service: The App1 app will be hosted on Azure App Service.
Minimize Costs: The solution should be the most cost-effective while maintaining the necessary features.
Linux Runtime: App1 is a Python web app that requires a Linux runtime.
Key Concepts
Azure App Service: A PaaS service for hosting web applications, mobile backends, and APIs.
App Service Plan: Defines the underlying compute resources (VMs) on which your app(s) run.
App Service Environment (ASE): Provides a fully isolated and dedicated environment for running your App Service apps.
Availability Zones: Physically separate locations within an Azure region that provide high availability.
Analyzing the Options
Let’s evaluate each option based on its cost-effectiveness and ability to meet the requirements:
one App Service Environment (ASE) per availability zone
Pros: Highest level of isolation and control, can have virtual network integration.
Cons: Most expensive solution.
Suitability: Not suitable due to high costs.
one App Service plan per availability zone
Pros: Provides zone redundancy, and can potentially have different size VMs in each zone if needed.
Cons: Can lead to increased costs due to overprovisioning of resources when one App Service plan is created per zone.
Suitability: Not the most cost-effective approach.
one App Service plan per region
Pros: Cost-effective for multiple instances of an app in a single region, allows multiple VMs to be spun up on one app service plan.
Cons: Requires an App Service plan tier and region that support zone redundancy (for example, Premium v2 or Premium v3).
Suitability: Suitable, most cost effective option if VMs chosen support availability zones.
one App Service Environment (ASE) per region
Pros: Provides isolation and control within a region.
Cons: Very expensive and not needed for this scenario.
Suitability: Not suitable due to high costs.
The Correct Recommendation
Based on the analysis, the most cost-effective solution is:
one App Service plan per region
Explanation
App Service Plan per region: By creating a single App Service plan per region, you can host multiple instances of App1 (three per region) on the same underlying VMs. This is more cost-effective than using separate plans per availability zone.
Availability Zones: Choose an App Service plan tier that supports zone redundancy (for example, Premium v2 or Premium v3) in a region that has availability zones.
Zone Redundancy: A single zone-redundant App Service plan per region spreads its instances across the availability zones automatically.
Why Other Options Are Not Correct
ASE per availability zone: Highly expensive and not needed when App Service can handle the availability zone deployment.
App Service plan per availability zone: Not cost-effective due to overprovisioning and maintaining three App Service plans when one plan per region can handle all instances.
ASE per region: Very costly and unnecessary.
Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.
Several VMs are exhibiting network connectivity issues.
You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.
Solution: Use the Azure Advisor to analyze the network traffic.
Does the solution meet the goal?
Yes
No
Understanding the Goal
The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which indicates a network connectivity issue.
Analyzing the Proposed Solution
Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show you network traffic for VMs, nor can it view network traffic for on-prem VMs.
Evaluation
Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.
The Correct Solution
The tools that would be best suited for this scenario would be:
Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.
Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.
On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.
Does the Solution Meet the Goal?
No, the solution does not meet the goal. Azure Advisor is not the correct tool for analyzing network traffic flow and packet information.
DRAG DROP
You plan to import data from your on-premises environment to Azure. The data is shown in the following table.
On-premises source Azure target
A Microsoft SQL Server 2012 database An Azure SQL database
A table in a Microsoft SQL Server 2014 database An Azure Cosmos DB account that uses the SQL API
What should you recommend using to migrate the data? To answer, drag the appropriate tools to the correct data sources. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Tools
AzCopy
Azure Cosmos DB Data Migration Tool
Data Management Gateway
Data Migration Assistant
Answer Area
From the SQL Server 2012 database: Tool
From the table in the SQL Server 2014 database: Tool
From the SQL Server 2012 database: Data Migration Assistant
Why: The Data Migration Assistant (DMA) is Microsoft’s primary tool for migrating SQL Server databases to Azure SQL Database. It can assess your on-premises SQL Server database for compatibility issues, recommend performance improvements, and then perform the data migration. While SQL Server 2012 is an older version, DMA often supports migrations from various SQL Server versions to Azure SQL Database.
From the table in the SQL Server 2014 database: Azure Cosmos DB Data Migration Tool
Why: The Azure Cosmos DB Data Migration Tool (dtui.exe) is specifically designed for importing data into Azure Cosmos DB from various sources, including SQL Server. Since the target is an Azure Cosmos DB account using the SQL API, this tool is the most direct and efficient way to migrate the data. You can select specific tables for migration.
Therefore, the correct answer area is:
From the SQL Server 2012 database: Data Migration Assistant
From the table in the SQL Server 2014 database: Azure Cosmos DB Data Migration Tool
Explanation of why other tools are incorrect:
AzCopy: This is a command-line utility for copying data to and from Azure Blob Storage, Azure Files, and Azure Data Lake Storage. It’s not designed for migrating relational database schemas and data to Azure SQL Database or Cosmos DB.
Data Management Gateway (Integration Runtime): This is a component of Azure Data Factory that enables data movement between on-premises data stores and cloud services. While it could be used for this, the direct migration tools (DMA and Cosmos DB Data Migration Tool) are simpler and more purpose-built for these specific scenarios. Using Data Factory would introduce more complexity than necessary for a straightforward data migration.
HOTSPOT
You need to design a storage solution for an app that will store large amounts of frequently used data.
The solution must meet the following requirements:
✑ Maximize data throughput.
✑ Prevent the modification of data for one year.
✑ Minimize latency for read and write operations.
Which Azure Storage account type and storage service should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
BlobStorage
BlockBlobStorage
FileStorage
StorageV2 with Premium performance
StorageV2 with Standard performance
Storage service:
Blob
File
Table
Storage account type: BlockBlobStorage
Storage service: Blob
Explanation:
Let’s break down the requirements and why this combination is the best fit:
Requirements:
Maximize Data Throughput: The solution needs to handle a high volume of data transfer.
Prevent Data Modification (Immutable for 1 Year): Data must be stored in a way that prevents any changes for one year.
Minimize Latency: Read and write operations should be as fast as possible.
Storage Account Type: BlockBlobStorage
Why it’s the best choice:
Optimized for Block Blobs: BlockBlobStorage accounts are specifically designed and optimized for storing and accessing block blobs. Block blobs are ideal for unstructured data like text or binary data, which is common for applications storing large amounts of data.
High Throughput: BlockBlobStorage accounts are designed to deliver high throughput for read and write operations.
Immutable Storage Support: BlockBlobStorage accounts support immutable storage policies, allowing you to store data in a WORM (Write Once, Read Many) state, preventing modification for a specified period (like the one year required).
Why other options are less suitable:
BlobStorage: BlobStorage is an older account type. It is recommended that you use a BlockBlobStorage or a general-purpose v2 account instead.
FileStorage: FileStorage accounts are optimized for file shares (using the SMB protocol). They are not the best choice for maximizing throughput for large amounts of unstructured data.
StorageV2 (General-purpose v2): While StorageV2 accounts support block blobs, they also support other storage types (files, queues, tables). BlockBlobStorage accounts generally provide better performance for exclusively block blob workloads, which is the case here.
StorageV2 (Premium performance): A premium general-purpose v2 account is oriented toward page blobs rather than block blobs, so it is not the best fit for maximizing block blob throughput. The purpose-built BlockBlobStorage (premium block blob) account type is the better match for this workload.
Storage Service: Blob
Why it’s the best choice:
Large, Unstructured Data: Blob storage is designed for storing large amounts of unstructured data, such as text or binary data, which aligns with the app’s requirements.
High Throughput: Blob storage, especially in BlockBlobStorage accounts, is optimized for high throughput.
Immutability: Blob storage supports immutability policies at the blob or container level.
Why other options are less suitable:
File: File storage is for file shares accessed via SMB. It’s not the best option for maximizing throughput for large amounts of unstructured data.
Table: Table storage is a NoSQL key-value store. It’s not suitable for storing large amounts of unstructured data or for maximizing throughput.
Implementation Details:
Create a BlockBlobStorage Account: When creating the storage account in Azure, choose BlockBlobStorage as the account type.
Create a Container: Within the storage account, create a container to store your blobs.
Configure Immutability:
You can set a time-based retention policy at the container level or on individual blobs.
Configure the policy to prevent modifications and deletions for one year.
Upload Data as Block Blobs: Use Azure Storage SDKs, AzCopy, or other tools to upload your data to the container as block blobs.
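For illustration only, a minimal Python sketch of step 4 using the azure-storage-blob SDK; the account URL, container name, and file name are assumptions. The retention policy configured in step 3 then applies to the uploaded block blobs.

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("appdata")

# upload_blob creates a block blob by default; once the time-based retention
# policy is in effect, the blob cannot be modified or deleted for one year.
with open("datafile.bin", "rb") as data:
    container.upload_blob(name="datafile.bin", data=data)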
HOTSPOT
You need to recommend an Azure Storage Account configuration for two applications named Application1 and Application2.
The configuration must meet the following requirements:
- Storage for Application1 must provide the highest possible transaction rates and the lowest possible latency.
- Storage for Application2 must provide the lowest possible storage costs per GB.
- Storage for both applications must be optimized for uploads and downloads.
- Storage for both applications must be available in an event of datacenter failure.
What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Application1:
BlobStorage with Standard performance, Hot access tier, and Read-access geo-redundant storage (RA-GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Premium performance and Locally-redundant storage (LRS) replication
General purpose v2 with Standard performance, Hot access tier, and Locally-redundant storage (LRS) replication
Application2:
BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Standard performance and Read-access geo-redundant storage (RA-GRS) replication
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication
Application1:
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
Application2:
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication
Explanation:
Application1
Requirements:
Highest possible transaction rates
Lowest possible latency
Optimized for uploads and downloads
Available in case of a datacenter failure
Why BlockBlobStorage with Premium performance and ZRS is the best choice:
BlockBlobStorage: Optimized for storing and retrieving large amounts of unstructured data (blobs) with high throughput and low latency, which aligns with the need for high transaction rates and optimized uploads/downloads.
Premium performance: Provides the lowest possible latency and highest transaction rates among Azure Storage account options. It uses SSDs for storage, making it ideal for performance-sensitive workloads.
Zone-redundant storage (ZRS): ZRS replicates your data synchronously across three Azure availability zones within a single region. This ensures that your data remains available even if one data center (availability zone) fails.
Why other options are less suitable:
BlobStorage with Standard performance, Hot access tier, and RA-GRS: BlobStorage accounts are generally used for general purpose blob storage and are less optimized for high performance compared to BlockBlobStorage. Standard performance offers higher latency than Premium. RA-GRS provides higher availability but is not necessary since ZRS is sufficient.
General purpose v1 with Premium performance and LRS: General-purpose v1 accounts are an older generation. They don’t support the combination of Premium performance and ZRS. LRS only replicates within a single data center and wouldn’t meet the availability requirement.
General purpose v2 with Standard performance, Hot access tier, and LRS: General-purpose v2 accounts with Standard performance offer higher latency than Premium performance. LRS does not protect against data center failures.
Application2
Requirements:
Lowest possible storage cost per GB
Optimized for uploads and downloads
Available in case of a datacenter failure
Why General purpose v2 with Standard performance, Cool access tier, and RA-GRS is the best choice:
General purpose v2: A good choice for a wide range of storage scenarios, including cost-sensitive applications.
Standard performance: Offers a balance between cost and performance, suitable when the lowest possible latency is not the primary concern.
Cool access tier: Designed for infrequently accessed data, providing the lowest storage cost per GB. While optimized for uploads and downloads, access costs are higher than Hot tier, so it’s best for data not accessed frequently.
Read-access geo-redundant storage (RA-GRS): Replicates your data to a secondary region hundreds of miles away from the primary region and also allows read access to the secondary copy. This ensures data availability even if an entire region experiences an outage, which more than covers a single datacenter failure.
Why other options are less suitable:
BlobStorage with Standard performance, Cool access tier, and GRS: Also cost-optimized, but BlobStorage is a legacy account type; general-purpose v2 accounts are recommended instead.
BlockBlobStorage with Premium performance and ZRS: Premium performance is too expensive for this application, which prioritizes cost savings. ZRS is not necessary when GRS is sufficient.
General purpose v1 with Standard performance and RA-GRS: General-purpose v1 accounts are an older generation and do not support access tiers, so they cannot use the Cool tier to minimize the storage cost per GB.
In summary:
For Application1, the combination of BlockBlobStorage, Premium performance, and ZRS delivers the highest transaction rates, lowest latency, and availability in case of a data center failure.
For Application2, the combination of General purpose v2, Standard performance, Cool access tier, and RA-GRS provides the lowest storage cost per GB among the listed options while still ensuring availability and remaining optimized for uploads and downloads.
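For illustration only, a hedged Python sketch using the azure-mgmt-storage SDK (recent versions expose begin_create); the subscription ID, resource group, account names, and region are assumptions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Application1: premium block blob account replicated across availability zones.
client.storage_accounts.begin_create(
    "rg-storage",
    "app1premiumblob",
    StorageAccountCreateParameters(
        sku=Sku(name="Premium_ZRS"), kind="BlockBlobStorage", location="westeurope"
    ),
).result()

# Application2: general-purpose v2 account, Standard performance, Cool default
# access tier, read-access geo-redundant replication.
client.storage_accounts.begin_create(
    "rg-storage",
    "app2coolstorage",
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_RAGRS"),
        kind="StorageV2",
        location="westeurope",
        access_tier="Cool",
    ),
).result()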
HOTSPOT
Your company develops a web service that is deployed to an Azure virtual machine named VM1. The web service allows an API to access real-time data from VM1.
The current virtual machine deployment is shown in the Deployment exhibit. (Click the Deployment tab).
VNet1: The virtual network that contains the subnets.
Subnet1: Contains two virtual machines: VM1 and VM2.
ProdSubnet: An additional subnet in VNet1.
The chief technology officer (CTO) sends you the following email message: “Our developers have deployed the web service to a virtual machine named VM1. Testing has shown that the API is accessible from VM1 and VM2. Our partners must be able to connect to the API over the Internet. Partners will use this data in applications that they develop.”
You deploy an Azure API Management (APIM) service. The relevant API Management configuration is shown in the API exhibit. (Click the API tab.)
Virtual Network:
Off
External (selected)
Internal
LOCATION VIRTUAL NETWORK SUBNET
West Europe VNet1 ProdSubnet
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements
The API is available to partners over the Internet.
The APIM instance can access real-time data from VM1.
A VPN gateway is required for partner access.
Statements
The API is available to partners over the Internet. Yes
The APIM instance can access real-time data from VM1. Yes
A VPN gateway is required for partner access. No
Explanation:
- The API is available to partners over the Internet. - Yes
Why? The API Management (APIM) instance is configured with a Virtual Network setting of External. This means that the APIM instance is deployed with a public IP address and is accessible from the internet. Partners can access the API through the APIM gateway’s public endpoint.
- The APIM instance can access real-time data from VM1. - Yes
Why? The APIM instance is deployed into VNet1 (in the ProdSubnet subnet), the same virtual network that contains VM1 (in Subnet1). In External mode, the APIM instance is injected into the virtual network, so it can communicate with VM1 over the private network and can therefore access the real-time data exposed by the web service running on VM1.
- A VPN gateway is required for partner access. - No
Why? Partners access the API through the APIM instance’s public endpoint, which is exposed to the internet because of the External Virtual Network setting. A VPN gateway is used for creating secure site-to-site or point-to-site connections between an on-premises network (or a single computer) and an Azure virtual network. It’s not needed when accessing a public endpoint.
Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.
You plan to move all the virtual machines to Azure.
You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.
What should you use to make the recommendation?
Azure Cost Management
Azure Pricing calculator
Azure Migrate
Azure Advisor
Understanding the Goal
The goal is to determine:
How many Azure VMs: The total number of Azure VMs required to migrate the workloads.
What size Azure VMs: The appropriate size (SKU) of the Azure VM for each on-premises VM.
Minimize Effort: Minimize manual administrative effort in the planning and recommendation process.
Analyzing the Options
Let’s evaluate each option based on its suitability for this task:
Azure Cost Management:
Pros: Analyzes costs of existing Azure resources, and helps with cost optimization.
Cons: Not designed for planning and sizing migrations from on-premises environments. It’s for analyzing Azure spend, and does not help determine size of VMs needed.
Suitability: Not suitable for the given scenario.
Azure Pricing Calculator:
Pros: Helps estimate the cost of planned Azure resources, such as VMs, but requires knowledge of the specifications needed.
Cons: Requires manual input of VM sizes and specifications, which is impractical for 300 VMs with various utilization levels. It does not take the utilization of the existing on-premises VMs into consideration, so estimates assume peak-provisioned sizes.
Suitability: Not ideal, because manually entering the information would create far too much administrative overhead.
Azure Migrate:
Pros: Specifically designed for assessing and migrating on-premises workloads to Azure. Can discover on-premises VMs and provide recommendations for sizing and right sizing Azure VMs based on utilization patterns.
Cons: Requires the setup of the Azure Migrate appliance, which is a one-time setup.
Suitability: Highly suitable for this scenario.
Azure Advisor:
Pros: Analyzes existing Azure resources and provides recommendations for cost, security, reliability, and performance.
Cons: Not designed for migration planning. Does not help in determining right sizing of VMs when moving from on-prem.
Suitability: Not suitable for the given scenario.
The Correct Recommendation
Based on the analysis, the best tool for this scenario is:
Azure Migrate
Explanation
Azure Migrate automates the discovery and assessment of on-premises VMs and their utilization. It analyzes data from on-prem to provide the recommended size Azure VMs based on peak utilization. This approach minimizes administrative effort and ensures that the recommendations are based on actual resource usage.
Why Other Options Are Not Suitable
Azure Cost Management: Analyzes existing Azure costs and is not for migration planning.
Azure Pricing Calculator: Requires manual configuration and does not analyze on-premises environment to determine correct Azure VM size.
Azure Advisor: Analyzes existing Azure resources and is not for migration planning.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.
Solution: Use Azure Advisor to analyze the network traffic.
Does this meet the goal?
A. Yes
B. No
Understanding the Goal
The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which would indicate a network connectivity issue.
Analyzing the Proposed Solution
Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show network traffic for VMs. It also does not provide any insight into on-prem network traffic.
Evaluation
Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.
The Correct Solution
The tools that would be best suited for this scenario would be:
Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.
Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.
On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.
Does the Solution Meet the Goal?
The answer is:
B. No
You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.
A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices.
You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.
What should you include in the recommendation?
a Recovery Services vault and Azure Backup
an Azure file share and Azure File Sync
Azure blob containers and Azure File Sync
a Recovery Services vault and Windows Server Backup
Understanding the Requirements
Here’s a breakdown of the key requirements:
On-premises File Server: A file server (VM1) in the Toronto branch office.
File Access from all Offices: Users in all branch offices access shared files on VM1.
High Availability: Need a solution for quick access to the files if the Toronto office becomes unavailable.
Quick Access: Minimize latency for users accessing files if the Toronto office is down.
Analyzing the Options
Let’s evaluate each option based on its suitability:
a Recovery Services vault and Azure Backup:
Pros: Provides backup and restore capabilities for VMs.
Cons: Restoring a VM from backup can be time-consuming, and does not offer a way for users to directly connect to shares if VM1 is down. This also does not meet the quick access requirement.
Suitability: Not suitable for quick access if the primary VM is down.
an Azure file share and Azure File Sync:
Pros: Azure Files provides a cloud-based SMB share, Azure File Sync can sync the data from on-premises to Azure, and can be set up in multiple locations for quick access during an outage.
Cons: Requires configuring Azure File Sync and setting up a caching server in other locations.
Suitability: Highly suitable, meets all requirements.
Azure blob containers and Azure File Sync:
Pros: Azure Blob Storage provides scalable cloud storage.
Cons: Azure File Sync does not sync with blob storage. Not suitable because it does not meet the requirement for local SMB share, and is not the correct data store for Azure File Sync.
Suitability: Not suitable for the scenario, and a combination of services that do not work well together.
a Recovery Services vault and Windows Server Backup
Pros: Provides backup and restore capabilities for VMs and their data.
Cons: Restoring from a backup is a lengthy process, and does not meet the requirement for quick access to shares.
Suitability: Not suitable because it is not designed for quick access during an outage.
The Correct Recommendation
Based on the analysis, the correct solution is:
an Azure file share and Azure File Sync
Explanation
Azure File Share: Provides a cloud-based SMB share that is highly available and allows for access from multiple locations.
Azure File Sync: Allows for continuous syncing of files from the on-premises file server to the Azure File Share.
Caching Servers: With Azure File Sync, other on-premises servers can be set up as caching endpoints, enabling quick, local access to the data even if the Toronto office is offline. This meets the requirement of fast access to files during a Toronto branch outage.
Why Other Options are Incorrect
Recovery Services and Azure Backup: Does not provide immediate access to files and has downtime due to the restoration process.
Blob Containers with Azure File Sync: Does not work because File Sync is designed to be used with Azure File Shares, not Blob Containers.
Recovery Services and Windows Server Backup: Does not provide immediate access to files and has downtime due to the restoration process.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.
Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic.
Does this meet the goal?
A. Yes
B. No
Understanding the Goal
The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, indicating a network connectivity problem.
Analyzing the Proposed Solution
Azure Network Watcher: A service that allows you to monitor and diagnose network issues.
IP Flow Verify: A feature within Network Watcher that lets you specify the source and destination IP addresses, ports, and protocol for a VM and then tells you whether the network security group (NSG) rules will allow or deny that traffic. This provides insight into whether an NSG rule is causing the problem.
Evaluation
Azure Network Watcher with the IP flow verify feature is indeed the correct tool for diagnosing network traffic and connectivity issues.
Does the Solution Meet the Goal?
The answer is:
A. Yes
Explanation
Azure Network Watcher: Provides tools to monitor, diagnose, and gain insights into your network.
IP Flow Verify: Allows you to check if a packet is allowed or denied between a source and destination based on the current network security rules.
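For illustration only, a hedged Python sketch of calling IP flow verify through the azure-mgmt-network SDK (recent versions expose begin_verify_ip_flow); the subscription ID, resource names, and IP addresses are assumptions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

result = client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG",
    "NetworkWatcher_westeurope",
    VerificationIPFlowParameters(
        target_resource_id=(
            "/subscriptions/<subscription-id>/resourceGroups/rg-app"
            "/providers/Microsoft.Compute/virtualMachines/VM1"
        ),
        direction="Inbound",
        protocol="TCP",
        local_ip_address="10.0.0.4",   # the VM's private IP
        local_port="443",
        remote_ip_address="203.0.113.10",
        remote_port="60000",
    ),
).result()

# access is "Allow" or "Deny"; rule_name identifies the NSG rule that matched.
print(result.access, result.rule_name)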
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.
Solution: Use Azure Traffic Analytics in Azure Network Watcher to analyze the network traffic.
Does this meet the goal?
A. Yes
B. No
Understanding the Goal
The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which would indicate a network connectivity problem.
Analyzing the Proposed Solution
Azure Network Watcher: A service in Azure that allows you to monitor and diagnose network issues.
Azure Traffic Analytics: A feature within Network Watcher that analyzes NSG flow logs to provide insights into network traffic flow. Traffic Analytics gives you insights into flows, application performance, security, and capacity. However, it does not provide a per-packet view of whether traffic is being allowed or denied to specific VMs, and it cannot be used for the on-premises VMs.
Evaluation
Traffic Analytics can help you understand the overall traffic flow and patterns in your environment, and is useful to understand who is connecting to what, but it does not give a view into specific packets being allowed or denied.
Does the Solution Meet the Goal?
The answer is:
B. No
Explanation
Azure Traffic Analytics: While useful for visualizing network traffic, it does not show specific information on whether packets are being allowed or denied for individual VMs and does not work for on-prem VMs.
Traffic Flow vs. Packet Level: Traffic analytics summarizes traffic patterns, but does not give packet level information.
DRAG DROP –
You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016 and Linux.
You need to use Azure Monitor to design an alerting strategy for security-related events.
Which Azure Monitor Logs tables should you query? To answer, drag the appropriate tables to the correct log types. Each table may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Tables
AzureActivity
AzureDiagnostics
Event
Syslog
Answer Area
Events from Windows event logs: Table
Events from Linux system logging: Table
Understanding the Requirements
The goal is to identify the correct log tables in Azure Monitor Logs to query for:
Security Events: Events related to security on both Windows and Linux virtual machines.
Windows VMs: Security events from Windows event logs.
Linux VMs: Security events from Linux system logging.
Analyzing the Options
Let’s evaluate each log table based on its suitability:
AzureActivity:
Pros: Stores activity log data related to Azure resource operations (create, update, delete).
Cons: Not for VM security events.
Suitability: Not suitable for this scenario.
AzureDiagnostics:
Pros: Stores a variety of diagnostic data for Azure resources.
Cons: A generic diagnostics table for Azure resources; it is not where security-related events from Windows or Linux VMs are stored.
Suitability: Not suitable for this scenario.
Event:
Pros: Stores Windows event log data, including security events.
Cons: Does not contain Linux data.
Suitability: Suitable for Windows security events.
Syslog:
Pros: Stores Linux system log data, including security events.
Cons: Does not contain Windows data.
Suitability: Suitable for Linux security events.
The Correct Placement
Based on the analysis, here’s how the tables should be placed:
Events from Windows event logs:
Event
Events from Linux system logging:
Syslog
Explanation
Event Table: The Event table in Azure Monitor Logs is specifically designed to store Windows event log data, including security events.
Syslog Table: The Syslog table in Azure Monitor Logs stores data from the Linux system logging service. This is where you would find Linux security events.
Why Other Options are Incorrect
AzureActivity: Activity logs contain information about operations on Azure resources, not security events from VMs.
AzureDiagnostics: A generic diagnostics table that does not contain the security-related events from Windows and Linux servers.
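For illustration only, a hedged Python sketch that queries both tables with the azure-monitor-query SDK; the workspace ID and the filter columns are assumptions (the exact columns collected depend on the agent configuration).

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Windows security-related entries from the Event table.
windows_query = 'Event | where EventLog == "Security" | take 10'
# Linux security-related entries from the Syslog table.
linux_query = 'Syslog | where Facility in ("auth", "authpriv") | take 10'

for query in (windows_query, linux_query):
    response = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=1))
    for table in response.tables:
        for row in table.rows:
            print(row)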
A company named Contoso Ltd., has a single-domain Active Directory forest named contoso.com.
Contoso is preparing to migrate all workloads to Azure. Contoso wants users to use single sign-on (SSO) when they access cloud-based services that integrate with Azure Active Directory (Azure AD).
You need to identify any objects in Active Directory that will fail to synchronize to Azure AD due to formatting issues. The solution must minimize costs.
What should you include in the solution?
A. Azure AD Connect Health
B. Microsoft Office 365 IdFix
C. Azure Advisor
D. Password Export Server version 3.1 (PES v3.1) in Active Directory Migration Tool (ADMT)
The correct answer is B. Microsoft Office 365 IdFix.
Here’s why:
Microsoft Office 365 IdFix: This tool is specifically designed to identify and help remediate synchronization errors in your on-premises Active Directory environment before you connect it to Azure AD. It scans your directory for common issues like duplicate attributes, invalid characters, and formatting problems that can prevent successful synchronization. It’s a free tool from Microsoft.
Let’s look at why the other options are not the best fit:
A. Azure AD Connect Health: Azure AD Connect Health is a monitoring tool that helps you understand the health and performance of your Azure AD Connect infrastructure after it’s set up and synchronizing. While it can show you errors, it’s not designed for the initial pre-migration cleanup and identification of formatting issues.
C. Azure Advisor: Azure Advisor analyzes your Azure resources and provides recommendations for cost optimization, security, reliability, and performance. It doesn’t directly interact with your on-premises Active Directory to identify formatting issues.
D. Password Export Server version 3.1 (PES v3.1) in Active Directory Migration Tool (ADMT): PES is used to migrate passwords from one Active Directory domain to another. While ADMT is a migration tool, PES specifically focuses on password migration and is not relevant for identifying object formatting issues that would prevent Azure AD Connect synchronization.
Therefore, Microsoft Office 365 IdFix is the most appropriate and cost-effective solution for identifying Active Directory objects with formatting issues before synchronizing to Azure AD. It directly addresses the requirement of finding objects that will fail to sync due to these issues.
You have an on-premises Hyper-V cluster that hosts 20 virtual machines. Some virtual machines run Windows Server 2016 and some run Linux.
You plan to migrate the virtual machines to an Azure subscription.
You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.
Solution: You recommend implementing an Azure Storage account, and then using Azure Migrate.
Does this meet the goal?
A. Yes
B. No
A. Yes
Explanation:
Using Azure Migrate to replicate the disks of the virtual machines to an Azure Storage account is a valid and recommended approach for migrating on-premises Hyper-V VMs to Azure with minimal downtime.
Here’s why:
Azure Migrate: This service provides tools specifically designed for migrating on-premises workloads to Azure, including replication of Hyper-V virtual machines.
Replication: For Hyper-V, replication is driven from the Hyper-V hosts (the replication provider and agent are installed on the hosts rather than inside each guest VM). An initial full replication of each VM’s disks to the designated Azure Storage account is followed by continuous replication of only the changes (incremental replication). This allows the on-premises VMs to remain running and available during the majority of the replication process.
Cutover: When you’re ready to migrate, Azure Migrate orchestrates a final synchronization of any remaining changes and then creates the virtual machines in Azure using the replicated disks. This cutover process is typically much faster than a full migration done during a maintenance window.
You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.
You need to recommend a load balancing service for the planned deployment. The solution must meet the following requirements:
✑ Maintain access to the app in the event of a regional outage.
✑ Support Azure Web Application Firewall (WAF).
✑ Support cookie-based affinity.
✑ Support URL routing.
What should you include in the recommendation?
A. Azure Front Door
B. Azure Load Balancer
C. Azure Traffic Manager
D. Azure Application Gateway
The correct answer is A. Azure Front Door.
Here’s why:
Azure Front Door:
Maintain access in the event of a regional outage: Azure Front Door is a global, scalable entry point that uses the Microsoft global network to create fast, secure, and widely scalable web applications. It can automatically route traffic to the next closest healthy region if one region experiences an outage.
Support Azure Web Application Firewall (WAF): Azure Front Door has an integrated Azure WAF to protect your web applications from common web exploits and vulnerabilities.
Support cookie-based affinity: Azure Front Door supports session affinity (also known as sticky sessions) using cookies, ensuring that requests from the same client are routed to the same backend instance within a region.
Support URL routing: Azure Front Door allows you to define routing rules based on URL paths to direct traffic to different backend pools.
Let’s look at why the other options are less suitable:
Azure Load Balancer: Azure Load Balancer is a regional load balancer (either internal or public). It does not inherently provide global failover across regions, has no built-in WAF capabilities, and does not support URL routing. Its session persistence is based on source IP affinity rather than HTTP cookies, so it cannot meet the cookie-based affinity requirement.
Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic routing service. While it can direct traffic to different regions based on various routing methods (including priority for failover), it operates at the DNS level and does not inspect HTTP traffic. Therefore, it does not support WAF, cookie-based affinity at the HTTP level, or URL routing.
Azure Application Gateway: Azure Application Gateway is a regional web traffic load balancer that operates at Layer 7 of the OSI model. It supports WAF, cookie-based affinity, and URL routing within a region. However, it is a regional service and does not inherently provide global failover in the same way that Azure Front Door does. While you can deploy multiple Application Gateways in different regions and use a service like Traffic Manager in front, Front Door provides a more integrated and streamlined solution for global load balancing and failover with WAF.
In summary, Azure Front Door is the most appropriate service to meet all the specified requirements for a globally distributed, resilient, and secure web application deployment.
You are designing a solution that will include containerized applications running in an Azure Kubernetes Service (AKS) cluster.
You need to recommend a load balancing solution for HTTPS traffic. The solution must meet the following requirements:
✑ Automatically configure load balancing rules as the applications are deployed to the cluster.
✑ Support Azure Web Application Firewall (WAF).
✑ Support cookie-based affinity.
✑ Support URL routing.
What should you include the recommendation?
A. an NGINX ingress controller
B. Application Gateway Ingress Controller (AGIC)
C. an HTTP application routing ingress controller
D. the Kubernetes load balancer service
The correct answer is B. Application Gateway Ingress Controller (AGIC).
Here’s why:
Application Gateway Ingress Controller (AGIC):
Automatically configure load balancing rules: AGIC deploys an Azure Application Gateway in your managed Azure Kubernetes Service (AKS) cluster’s virtual network. When you define Kubernetes Ingress resources, AGIC automatically configures the Application Gateway’s routing rules, listeners, and backend pools to match your Ingress configuration. This makes deployment and management of load balancing rules very streamlined.
Support Azure Web Application Firewall (WAF): AGIC leverages the capabilities of Azure Application Gateway, which has built-in support for Azure WAF. You can configure WAF policies on the Application Gateway to protect your applications from common web exploits.
Support cookie-based affinity: Azure Application Gateway, and therefore AGIC, supports cookie-based session affinity (also known as sticky sessions). This ensures that requests from the same client are routed to the same backend pod within your AKS cluster.
Support URL routing: Azure Application Gateway is a Layer-7 load balancer, meaning it can make routing decisions based on the URL path of the incoming request. AGIC allows you to define URL routing rules within your Kubernetes Ingress resources.
Let’s look at why the other options are less suitable:
A. an NGINX ingress controller: While NGINX is a powerful and widely used ingress controller, and it can be configured to support cookie-based affinity and URL routing, it does not inherently provide automatic configuration with Azure services like WAF. You would typically need to configure a separate WAF solution and integrate it with your NGINX setup, which adds complexity.
C. an HTTP application routing ingress controller: This is a simpler, AKS-managed ingress controller that provides basic HTTP routing. However, it does not support Azure WAF directly and has limited capabilities for advanced features like cookie-based affinity and complex URL routing compared to AGIC.
D. the Kubernetes load balancer service: The Kubernetes LoadBalancer service provisions a basic Azure Load Balancer. Azure Load Balancer is a Layer-4 load balancer that operates at the transport layer. It cannot inspect HTTP headers or URLs, so it cannot provide URL routing or cookie-based affinity. Azure WAF also cannot be attached to an Azure Load Balancer; WAF requires a Layer-7 service such as Application Gateway (as used by AGIC) or Azure Front Door.
You have an Azure subscription that contains an Azure SQL database.
You plan to use Azure reservations on the Azure SQL database.
To which resource type will the reservation discount be applied?
A. vCore compute
B. DTU compute
C. Storage
D. License
The correct answer is A. vCore compute.
Explanation
Azure Reservations: Azure reservations provide a discount on Azure resources when you commit to spending a certain amount on a specific resource type for one or three years.
Azure SQL Database: Azure SQL Database has two main purchasing models:
vCore-based: This model allows you to choose the number of virtual cores (vCores), the amount of memory, and the storage size and type.
DTU-based: This model uses a bundled measure of compute, storage, and I/O resources called Database Transaction Units (DTUs).
Reservations and Azure SQL Database: Azure SQL Database reserved capacity applies to the compute resources used by your database and is available only for the vCore purchasing model.
vCore-based model: The reservation discount applies specifically to the vCore compute cost.
DTU-based model: Reserved capacity is not available for DTU-based databases; DTU compute is billed at pay-as-you-go rates.
Other Resource Types:
Storage: Storage costs are separate from compute costs and are not covered by Azure SQL Database reservations. You might consider reserved capacity for storage separately.
License: SQL Server licenses are handled separately, especially if you are using the Azure Hybrid Benefit. Reservations for Azure SQL Database do not cover license costs.
Why vCore compute is the most accurate answer:
Azure SQL Database reserved capacity is purchased in terms of vCores, so the discount is applied to the vCore compute cost of the database. Storage and licensing are billed separately, and databases that use the DTU purchasing model are not eligible for reserved capacity. vCore compute is therefore the resource type to which the reservation discount is applied.
Overview. General Overview
Litware, Inc. is a medium-sized finance company.
Overview. Physical Locations
Litware has a main office in Boston.
Existing Environment. Identity Environment
The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.
Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.
The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by
using the Azure portal, they must connect from a hybrid Azure AD-joined device.
Existing Environment. Azure Environment
Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).
The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.
Existing Environment. On-premises Environment
The on-premises network of Litware contains the resources shown in the following table.
Name Type Configuration
SERVER1 Ubuntu 18.04 virtual machines hosted on Hyper-V The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
SERVER2 Ubuntu 18.04 virtual machines hosted on Hyper-V (Same as SERVER1 description)
SERVER3 Ubuntu 18.04 virtual machines hosted on Hyper-V (Same as SERVER1 description)
SERVER10 Server that runs Windows Server 2016 The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.
Existing Environment. Network Environment
Litware has ExpressRoute connectivity to Azure.
Planned Changes and Requirements. Planned Changes
Litware plans to implement the following changes:
✑ Migrate DB1 and DB2 to Azure.
✑ Migrate App1 to Azure virtual machines.
✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.
Planned Changes and Requirements. Authentication and Authorization Requirements
Litware identifies the following authentication and authorization requirements:
✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).
✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.
✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.
✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.
✑ RBAC roles must be applied at the highest level possible.
Planned Changes and Requirements. Resiliency Requirements
Litware identifies the following resiliency requirements:
✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:
- Maintain availability if two availability zones in the local Azure region fail.
- Fail over automatically.
- Minimize I/O latency.
✑ App1 must meet the following requirements:
- Be hosted in an Azure region that supports availability zones.
- Be hosted on Azure virtual machines that support automatic scaling.
- Maintain availability if two availability zones in the local Azure region fail.
Planned Changes and Requirements. Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.
✑ App1 must not share physical hardware with other workloads.
Planned Changes and Requirements. Business Requirements
Litware identifies the following business requirements:
✑ Minimize administrative effort.
✑ Minimize costs.
HOTSPOT -
You plan to migrate App1 to Azure.
You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Number of host groups:
1
2
3
6
Number of virtual machine scale sets:
0
1
3
Here’s a breakdown of the reasoning for the correct choices:
Number of Host Groups: 3
Requirement: App1 must maintain availability if two availability zones in the local Azure region fail.
Dedicated Hosts and Availability Zones: To guarantee that App1 survives the failure of two availability zones, you need instances of App1 running in at least three availability zones.
Host Groups per AZ: Since App1 needs to run on dedicated hosts, and you want to isolate those dedicated hosts by availability zone for better fault tolerance, you would create a separate host group in each of the three availability zones.
Number of Virtual Machine Scale Sets: 1
Requirement: App1 must be hosted on Azure virtual machines that support automatic scaling.
Requirement: App1 must maintain availability if two availability zones in the local Azure region fail.
VMSS Capability: A single Azure Virtual Machine Scale Set (VMSS) can be configured to span across multiple availability zones. This allows you to achieve both automatic scaling and high availability across the three availability zones where your dedicated hosts reside.
Why not the other options?
Number of Host Groups: 1: A dedicated host group is pinned to a single availability zone, so one host group would place all of the dedicated hosts in one zone. If that zone fails, the entire App1 deployment is down, which does not meet the requirement of surviving two AZ failures.
Number of Host Groups: 2: Having two host groups allows you to survive one AZ failure, but not the failure of two.
Number of Host Groups: 6: While technically possible, it’s unnecessary and increases complexity and cost without providing additional benefit for this specific requirement. You only need to cover three availability zones.
Number of Virtual Machine Scale Sets: 0: You need a VMSS to achieve automatic scaling, which is a stated requirement.
Number of Virtual Machine Scale Sets: 3: While you could create a separate VMSS for each host group/availability zone, it adds unnecessary management overhead. A single VMSS spanning the zones is the recommended and more efficient approach for this scenario.
Therefore, the correct answer is:
Number of host groups: 3
Number of virtual machine scale sets: 1
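For illustration only, a hedged Python sketch using the azure-mgmt-compute SDK that creates one dedicated host group per availability zone; the subscription ID, resource group, region, and names are assumptions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DedicatedHostGroup

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for zone in ("1", "2", "3"):
    client.dedicated_host_groups.create_or_update(
        "rg-app1",
        f"hg-app1-zone{zone}",
        DedicatedHostGroup(
            location="westeurope",
            zones=[zone],                      # a host group is pinned to one zone
            platform_fault_domain_count=1,
            support_automatic_placement=True,  # lets the scale set place VMs on the hosts
        ),
    )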
You plan to migrate App1 to Azure.
You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.
What should you include in the recommendation?
A. a private endpoint
B. a service endpoint that has a service endpoint policy
C. Azure public peering for an ExpressRoute circuit
D. Microsoft peering for an ExpressRoute circuit
The most appropriate recommendation to meet the security and compliance requirements for network connectivity to the Azure Storage account hosting App1 data is A. a private endpoint.
Here’s why:
A. a private endpoint: This is the most secure option. A private endpoint creates a network interface within your virtual network for the storage account. This effectively brings the storage service into your private network, eliminating the public endpoint entirely. This directly fulfills the requirement to “prevent access to the public endpoint of the Azure Storage account.” On-premises users can then access the storage account through the existing ExpressRoute connection, keeping all traffic within the private network.
Let’s look at why the other options are less suitable:
B. a service endpoint that has a service endpoint policy: Service endpoints allow you to restrict network access to the storage account to specific subnets within your virtual network. While it adds a layer of security, it does not eliminate the public endpoint. Traffic from the on-premises network would still technically traverse the public endpoint, even if it’s restricted by the service endpoint policy. This doesn’t fully meet the requirement of preventing public endpoint access.
C. Azure public peering for an ExpressRoute circuit: Azure public peering allows you to access Azure public services (like storage) over your ExpressRoute connection. However, it doesn’t inherently prevent public access to the storage account. The storage account would still have a public endpoint accessible from the internet. Public peering is about providing a private path for accessing public services, not about making those services private.
D. Microsoft peering for an ExpressRoute circuit: Microsoft peering allows you to access Microsoft 365 services and Azure PaaS services (including Storage) over your ExpressRoute connection. Similar to public peering, it doesn’t inherently prevent public access to the storage account’s public endpoint. It provides a private path for accessing these services but doesn’t eliminate the public accessibility.
Therefore, the single best answer is A. a private endpoint.
A service endpoint with a service endpoint policy (option B) can add a secondary layer of network restriction, but on its own it does not remove the public endpoint. Neither Azure public peering (C) nor Microsoft peering (D) makes the storage account private; they only provide a private path to services that remain publicly addressable. Only a private endpoint satisfies the requirement to prevent access to the public endpoint while still allowing on-premises users and services to reach the storage account over ExpressRoute private peering.
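For illustration only, a hedged Python sketch that creates a private endpoint for the blob service of the storage account with the azure-mgmt-network SDK; the subscription ID, resource IDs, and names are assumptions, and configuring private DNS and disabling public network access on the account are separate steps.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    PrivateEndpoint,
    PrivateLinkServiceConnection,
    Subnet,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

storage_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-app1"
    "/providers/Microsoft.Storage/storageAccounts/app1data"
)
subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-network"
    "/providers/Microsoft.Network/virtualNetworks/vnet-app1/subnets/snet-data"
)

client.private_endpoints.begin_create_or_update(
    "rg-app1",
    "pe-app1data-blob",
    PrivateEndpoint(
        location="westeurope",
        subnet=Subnet(id=subnet_id),
        private_link_service_connections=[
            PrivateLinkServiceConnection(
                name="app1data-blob",
                private_link_service_id=storage_id,
                group_ids=["blob"],  # connect to the blob sub-resource
            )
        ],
    ),
).result()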
You migrate App1 to Azure.
You need to ensure that the data storage for App1 meets the security and compliance requirement
What should you do?
Create an access policy for the blob
Modify the access level of the blob service.
Implement Azure resource locks.
Create Azure RBAC assignments.
The correct answer is Create an access policy for the blob.
Here’s why:
Security and Compliance Requirement: The core requirement is to prevent modification of data for three years after it’s written, while still allowing new data to be added. This is a classic use case for immutability.
Azure Blob Storage Immutability: Azure Blob Storage offers immutable (write once, read many, or WORM) storage through time-based retention policies and legal holds. Once a time-based retention policy is set and locked, blobs cannot be modified or deleted for the duration of the retention period, although new blobs can still be written.
How Access Policies Relate: In the context of Azure Blob Storage immutability, you create an immutability policy, which is a type of access policy that governs the retention period and immutability rules for the container or blob.
Let’s look at why the other options are incorrect:
Modify the access level of the blob service: The access level of a container controls anonymous public read access (private, blob, or container), and the access tiers (Hot, Cool, Archive) govern storage cost and access latency. Neither provides immutability.
Implement Azure resource locks: Azure resource locks prevent administrative operations on the storage account or container (like deleting it). They do not prevent modifications to the data within the blobs.
Create Azure RBAC assignments: RBAC controls who has permission to access and manage the storage account and its contents. While you can restrict write access, it doesn’t enforce time-based immutability. A user with write access could still modify data unless an immutability policy is in place.
Therefore, to ensure data immutability for three years as required, you need to create an immutability policy (a type of access policy) for the blob container or individual blobs where App1’s data is stored.
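As a hedged illustration, the following Python sketch (azure-mgmt-storage) creates and then locks a three-year time-based retention policy on the container that would hold App1's data. The resource group, account, and container names are hypothetical, and method signatures may differ slightly between SDK versions.

# Sketch: apply and lock a 3-year (1095-day) time-based retention policy on a
# blob container. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import ImmutabilityPolicy

subscription_id = "<subscription-id>"
resource_group = "rg-app1"      # hypothetical
account_name = "stapp1data"     # hypothetical
container_name = "app1-data"    # hypothetical

client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# 1. Create (or update) the time-based retention policy on the container.
policy = client.blob_containers.create_or_update_immutability_policy(
    resource_group,
    account_name,
    container_name,
    parameters=ImmutabilityPolicy(
        immutability_period_since_creation_in_days=1095,
        allow_protected_append_writes=True,  # append blobs may still receive new data
    ),
)

# 2. Lock the policy. A locked policy cannot be shortened or removed, and blobs
#    cannot be modified or deleted until their retention period has elapsed.
client.blob_containers.lock_immutability_policy(
    resource_group,
    account_name,
    container_name,
    if_match=policy.etag,
)

New blobs can still be written to the container while the policy is in effect, which matches the requirement that new data can be added but existing data cannot be changed for three years.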
HOTSPOT
How should the migrated databases DB1 and DB2 be implemented in Azure?
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose
Here’s how the migrated databases DB1 and DB2 should be implemented in Azure, based on the requirements:
Database: Azure SQL Managed Instance
Service tier: Business Critical
Explanation:
Azure SQL Managed Instance:
Maintain availability if two availability zones in the local Azure region fail: Azure SQL Managed Instance in the Business Critical tier supports zone redundancy. The instance's replicas are spread across multiple availability zones in the same region, so the instance stays available even if one or two zones fail.
Fail over automatically: Business Critical Managed Instances have built-in automatic failover to a secondary replica in a different availability zone.
Minimize I/O latency: The Business Critical service tier provides the lowest I/O latency due to its premium-performance local SSD storage.
Closer to On-premises SQL Server: Managed Instance provides near-100% compatibility with on-premises SQL Server, making migration easier.
Business Critical Service Tier:
Addresses all resiliency requirements: As explained above, it provides the necessary availability, automatic failover, and low latency.
Supports Transparent Data Encryption (TDE): This is a requirement for all production Azure SQL databases, and Business Critical supports it.
Why other options are less suitable:
A single Azure SQL database: Zone redundancy for a single database is available only in certain tiers and regions, and a single database offers less SQL Server compatibility than Managed Instance for migrating an existing on-premises instance. Hyperscale can offer zone redundancy, but Business Critical is generally better for minimizing I/O latency.
Azure SQL Database elastic pool: Elastic pools are for managing resources for multiple databases. While individual databases within the pool can have high availability, the pool itself doesn’t inherently provide the multi-AZ failover required for DB1 and DB2 individually.
Hyperscale: Hyperscale offers zone redundancy and suits very large databases, but it is a service tier of Azure SQL Database only (it is not available for SQL Managed Instance) and it typically does not match the consistently low I/O latency of the Business Critical tier, which is optimized for transactional workloads.
General Purpose: Does not offer the multi-AZ resilience needed to survive two availability zone failures.
Therefore, Azure SQL Managed Instance with the Business Critical service tier is the best choice to meet all the stated resiliency and performance requirements for DB1 and DB2.
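As a rough, non-authoritative illustration of how this choice could be provisioned, the Python sketch below (azure-mgmt-sql) requests a zone-redundant Business Critical managed instance. All names, the subnet ID, and the credentials are placeholders, and the zone_redundant property is assumed to be supported by the API version targeted by the installed SDK.

# Rough sketch: zone-redundant Business Critical SQL Managed Instance.
# Names and IDs are placeholders; zone_redundant is assumed to be available
# in the azure-mgmt-sql version/API version in use.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import ManagedInstance, Sku

subscription_id = "<subscription-id>"
client = SqlManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.managed_instances.begin_create_or_update(
    "rg-data",                     # hypothetical resource group
    "sqlmi-litware-prod",          # hypothetical instance name
    ManagedInstance(
        location="eastus",
        sku=Sku(name="BC_Gen5", tier="BusinessCritical"),
        administrator_login="sqladmin",
        administrator_login_password="<strong-password>",
        subnet_id=(
            "/subscriptions/<subscription-id>/resourceGroups/rg-data"
            "/providers/Microsoft.Network/virtualNetworks/vnet-data"
            "/subnets/snet-sqlmi"
        ),
        v_cores=8,
        storage_size_in_gb=256,
        zone_redundant=True,       # replicas spread across availability zones
    ),
)
instance = poller.result()  # provisioning a managed instance can take hours

DB1 and DB2 would then be migrated into this instance (for example with Azure Database Migration Service or a native backup and restore), which is outside the scope of this sketch.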
DRAG DROP -
You need to configure an Azure policy to ensure that the Azure SQL databases have Transparent Data Encryption (TDE) enabled. The solution must meet the security and compliance requirements.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Actions
Create an Azure policy definition that uses the deployIfNotExists effect.
Invoke a remediation task.
Create an Azure policy definition that uses the Modify effect.
Create an Azure policy assignment.
Create a user-assigned managed identity.
Answer Area
Here’s the correct sequence of actions to configure an Azure Policy for enabling Transparent Data Encryption (TDE) on Azure SQL databases:
Answer Area:
Create an Azure policy definition that uses the deployIfNotExists effect.
Create an Azure policy assignment.
Invoke a remediation task.
Explanation of the steps:
Create an Azure policy definition that uses the deployIfNotExists effect:
This is the foundational step. You need to define the policy itself.
The deployIfNotExists effect is crucial here. It allows the policy to automatically deploy resources (in this case, enable TDE) if the specified condition (TDE not enabled) is met.
The policy definition includes the logic that identifies Azure SQL databases and checks their TDE status, along with the deployment details (typically an ARM template) used to enable TDE; a trimmed sketch of this structure appears after the step list.
Create an Azure policy assignment:
Once the policy definition is created, you need to assign it to a specific scope (management group, subscription, or resource group).
This tells Azure where the policy should be enforced. When you create the assignment, the policy will start evaluating resources within that scope.
Invoke a remediation task:
The deployIfNotExists effect only applies to new or updated resources after the policy assignment.
To bring existing non-compliant Azure SQL databases into compliance, you need to run a remediation task.
The remediation task will evaluate the resources within the policy’s scope and apply the deployment defined in the deployIfNotExists policy to the non-compliant ones, effectively enabling TDE on them.
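To make the shape of such a policy concrete, here is a trimmed sketch of a deployIfNotExists policy rule, written as the Python dictionary that would be serialized to JSON in the policy definition. The alias for the TDE status, the role definition GUID, and the embedded ARM template are abbreviated placeholders rather than a complete working policy; Azure also ships a built-in policy for deploying TDE that can be assigned instead of authoring a custom one.

# Trimmed sketch of a deployIfNotExists rule for enabling TDE on SQL databases.
# The existence-condition alias, role definition GUID, and deployment template
# are placeholders for illustration only.
policy_rule = {
    "if": {
        "field": "type",
        "equals": "Microsoft.Sql/servers/databases",
    },
    "then": {
        "effect": "deployIfNotExists",
        "details": {
            "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
            "existenceCondition": {
                "field": "<alias for the TDE status property>",
                "equals": "Enabled",
            },
            # The managed identity on the assignment needs a role that can
            # modify SQL databases (e.g. SQL DB Contributor).
            "roleDefinitionIds": [
                "/providers/Microsoft.Authorization/roleDefinitions/<role-guid>"
            ],
            "deployment": {
                "properties": {
                    "mode": "incremental",
                    "template": {
                        # ARM template that sets the database's
                        # transparentDataEncryption status to "Enabled"
                        # (omitted for brevity).
                    },
                },
            },
        },
    },
}

Once this definition exists, the assignment (with its managed identity) and a remediation task complete the sequence described above.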
Why the other options are not in this sequence:
Create an Azure policy definition that uses the Modify effect: While the Modify effect can also be used for some configuration changes, deployIfNotExists is generally more suitable for ensuring a specific resource or setting exists (like TDE being enabled). Modify is better for changing existing properties.
Create a user-assigned managed identity: A managed identity is required so the policy can perform the deployIfNotExists deployment, but the system-assigned managed identity that can be created automatically when the policy is assigned is usually sufficient for this scenario. A separate user-assigned identity is only needed for more complex permission requirements, so it is not a prerequisite in this sequence.
Invoke a remediation task: This step is performed after the policy definition and assignment to address existing resources.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app.
The solution must meet the following requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Application Gateway to provide access to the app.
Does this meet the goal?
Yes
No
No, this does not fully meet the goal.
Here’s why:
Support Rate Limiting: Azure Application Gateway does support rate limiting through Web Application Firewall (WAF) policies. So, this requirement is met.
Balance Requests Between All Instances: Azure Application Gateway can load balance requests across multiple backend instances. This requirement is met.
Ensure that users can access the app in the event of a regional outage: This is where the proposed solution falls short. While Application Gateway can load balance within a region, it is a regional service. If the Azure region where the Application Gateway is deployed experiences an outage, the Application Gateway itself will be unavailable, and users will not be able to access the app.
To meet the requirement of regional outage resilience, you would need a more comprehensive solution that includes:
Deploying Application Gateway instances in multiple Azure regions.
Using a global load balancer like Azure Front Door or Azure Traffic Manager in front of the regional Application Gateways. This global service can direct traffic to the healthy regional gateway in case of a regional failure.
In summary, while Azure Application Gateway handles load balancing and rate limiting well, it doesn’t inherently provide regional failover capabilities on its own.
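As a hedged illustration of that global layer, the sketch below (azure-mgmt-trafficmanager) creates a Traffic Manager profile that directs users to the closest healthy regional Application Gateway endpoint and skips a region during an outage; Azure Front Door would be the alternative when rate limiting must also be enforced at the global layer rather than on the regional gateways. All names, hostnames, and regions are placeholders, and model details may differ slightly between SDK versions.

# Sketch: Traffic Manager profile in front of two regional Application Gateway
# public endpoints. All names, hostnames, and regions are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import (
    Profile, DnsConfig, MonitorConfig, Endpoint,
)

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.profiles.create_or_update(
    "rg-web",                                  # hypothetical resource group
    "tm-webapp-global",                        # hypothetical profile name
    Profile(
        location="global",
        traffic_routing_method="Performance",  # nearest healthy region; unhealthy regions are skipped
        dns_config=DnsConfig(relative_name="litware-webapp", ttl=30),
        monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/healthz"),
        endpoints=[
            Endpoint(
                name="eastus-appgw",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="appgw-eastus.example.litware.com",      # placeholder FQDN
                endpoint_location="eastus",
            ),
            Endpoint(
                name="westeurope-appgw",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="appgw-westeurope.example.litware.com",  # placeholder FQDN
                endpoint_location="westeurope",
            ),
        ],
    ),
)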
You have an Azure subscription that contains a Basic Azure Virtual WAN named VirtualWAN1 and the virtual hubs shown in the following table.
Name Azure region
Hub1 US East
Hub2 US West
You have an ExpressRoute circuit in the US East region.
You need to create an ExpressRoute association to VirtualWAN1.
What should you do first?
Upgrade VirtualWAN1 to Standard.
Create a gateway on Hub1.
Create a hub virtual network in US East.
Enable the ExpressRoute premium add-on.
The correct first step is to Upgrade VirtualWAN1 to Standard.
Here’s why:
Basic Virtual WAN Limitations: A Basic Azure Virtual WAN does not support ExpressRoute connections. You need a Standard Virtual WAN to establish an ExpressRoute association.
Let’s look at why the other options are incorrect as the first step:
Create a gateway on Hub1: You will eventually need to create an ExpressRoute gateway within Hub1, but you cannot do this on a Basic Virtual WAN. You need to upgrade to Standard first to unlock this capability.
Create a hub virtual network in US East: The prompt states that Hub1 already exists in US East. You don’t need to create a separate hub virtual network. Hubs are created within the Virtual WAN itself.
Enable the ExpressRoute premium add-on: While the premium add-on enables global connectivity for ExpressRoute, it’s not a prerequisite for establishing a basic connection within the same region as the ExpressRoute circuit (US East in this case). The fundamental issue is the Basic Virtual WAN tier.
Therefore, upgrading the Virtual WAN to Standard is the necessary first step to enable ExpressRoute connectivity.