https://infraexam.com/microsoft/az-304-microsoft-azure-architect-design/az-304-part-07/
Overview. General Overview
Litware, Inc. is a medium-sized finance company.
Overview. Physical Locations
Litware has a main office in Boston.
Existing Environment. Identity Environment
The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.
Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.
The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by using the Azure portal, they must connect from a hybrid Azure AD-joined device.
Existing Environment. Azure Environment
Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).
The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.
Existing Environment. On-premises Environment
The on-premises network of Litware contains the resources shown in the following table.
Name: SERVER1, SERVER2, SERVER3
Type: Ubuntu 18.04 virtual machines hosted on Hyper-V
Configuration: The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
Name: SERVER10
Type: Server that runs Windows Server 2016
Configuration: The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.
Existing Environment. Network Environment
Litware has ExpressRoute connectivity to Azure.
Planned Changes and Requirements. Planned Changes
Litware plans to implement the following changes:
✑ Migrate DB1 and DB2 to Azure.
✑ Migrate App1 to Azure virtual machines.
✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.
Planned Changes and Requirements. Authentication and Authorization Requirements
Litware identifies the following authentication and authorization requirements:
✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).
✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.
✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.
✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.
✑ RBAC roles must be applied at the highest level possible.
Planned Changes and Requirements. Resiliency Requirements
Litware identifies the following resiliency requirements:
✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:
- Maintain availability if two availability zones in the local Azure region fail.
- Fail over automatically.
- Minimize I/O latency.
✑ App1 must meet the following requirements:
- Be hosted in an Azure region that supports availability zones.
- Be hosted on Azure virtual machines that support automatic scaling.
- Maintain availability if two availability zones in the local Azure region fail.
Planned Changes and Requirements. Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.
✑ App1 must not share physical hardware with other workloads.
Planned Changes and Requirements. Business Requirements
Litware identifies the following business requirements:
✑ Minimize administrative effort.
✑ Minimize costs.
You plan to migrate App1 to Azure. The solution must meet the authentication and authorization requirements.
Which type of endpoint should App1 use to obtain an access token?
Azure Instance Metadata Service (IMDS)
Azure AD
Azure Service Management
Microsoft identity platform
The correct answer is: Azure Instance Metadata Service (IMDS)
Explanation:
Managed Identities and IMDS:
Why it’s the right choice: The requirements state that “To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app”. Managed identities for Azure resources provide an identity that applications running in an Azure VM can use to access other Azure resources. The Azure Instance Metadata Service (IMDS) is the service that provides this identity information to the VM.
How it works:
You enable a managed identity for the virtual machines hosting App1.
Within the App1 code, you make a request to the IMDS to obtain an access token.
The IMDS endpoint, available inside every Azure VM, returns a token that can be used to access other Azure resources (e.g., storage accounts, Key Vault) without storing any credentials in the application code. The token is rotated automatically by the managed identity service.
The token is then passed to the destination service, which validates it against Azure AD before granting access. A minimal request sketch is shown below.
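The following is a minimal sketch, in Python, of that request. The IMDS address and api-version are the documented values; the target resource URI (Azure Storage) is an assumption chosen for this scenario.

```python
import requests

# Minimal sketch: obtain an access token from the Azure Instance Metadata
# Service (IMDS) inside a VM that has a managed identity enabled.
# The resource URI (Azure Storage) is an illustrative assumption.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

response = requests.get(
    IMDS_TOKEN_URL,
    params={"api-version": "2018-02-01", "resource": "https://storage.azure.com/"},
    headers={"Metadata": "true"},  # required header; IMDS rejects requests without it
    timeout=5,
)
response.raise_for_status()
access_token = response.json()["access_token"]
# The token is then sent to the target service as an "Authorization: Bearer" header.
```

The call only works from inside the VM, because IMDS is reachable only at the link-local address 169.254.169.254.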
Security Benefits: Using managed identities and IMDS avoids storing sensitive credentials in configuration files, environment variables, or the application code itself. This is a security best practice.
Relevance to the scenario: It directly fulfills the requirement to use managed identities for accessing Azure resources from App1.
Why Other Options are Incorrect:
Azure AD: While Azure AD authenticates users and apps, the app itself (App1 running on the VM) does not perform a standard Azure AD login. The managed identity handles this for the application; the application requests its token from IMDS rather than calling the Azure AD endpoint directly.
Azure Service Management: This is the deprecated (classic) Azure management endpoint. It is not the correct way to authenticate application-level access.
Microsoft identity platform: This is the umbrella identity platform for Azure AD authentication, but it is not the endpoint an application calls for token retrieval inside a VM with a managed identity. App1 should get its token from IMDS, which fronts the managed identity.
In Summary:
The correct endpoint for App1 to obtain an access token is the Azure Instance Metadata Service (IMDS). When paired with a managed identity, IMDS is the endpoint that provides applications running inside Azure VMs with access tokens for other Azure services.
Important Notes for Azure 304 Exam:
Managed Identities: You MUST understand how managed identities work and how to use them. Be familiar with the two types of managed identity: System-assigned and User-assigned.
Azure Instance Metadata Service (IMDS): Know the purpose of IMDS and how it provides information about the Azure VM instance (including access tokens for managed identities).
Secure Authentication: Understand the security benefits of using managed identities instead of embedding secrets in code or configuration files.
Authentication Scenarios: Be able to recognize different authentication scenarios (user login vs. application access) and know which Azure service to use to achieve the required access pattern.
Service Principals: Be familiar with the concept of service principals and their relationship with application identity, but understand that a service principal is not directly needed here since the managed identity service creates and manages the service principals.
Key Takeaway: For applications running in Azure VMs that need to access other Azure resources, managed identities via the Azure IMDS are the recommended approach. The application does not authenticate with Azure AD directly, it gets a token from the IMDS.
HOTSPOT
You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
To register the users for Azure MFA, use:
Azure AD Identity Protection
Security defaults in Azure AD
Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure:
Grant control in capolicy1
Session control in capolicy1
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant
Correct Answers:
To register the users for Azure MFA, use: Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure: Grant control in capolicy1
Explanation:
Per-User MFA in the MFA Management UI:
Why it’s the right choice: Per-user MFA is the standard way of configuring MFA on user accounts and is often used when you do not want to enable security defaults (as it allows for more granular control). You must configure this on the user before conditional access can be applied.
How it Works: This action will cause each user in the required group to be registered for Multi-Factor authentication. This method is ideal when you want direct control over user MFA status, or when security defaults are not enabled.
Relevance to the scenario: The requirement specifies that “users must authenticate by using Azure MFA when they sign in to the Azure portal.” The first step is to register the users.
Grant Control in capolicy1:
Why it’s the right choice: The requirements specified that there is a Conditional Access Policy (capolicy1), therefore this is where we must configure the requirement to enforce MFA. Within the Grant controls of the conditional access policy you must require MFA to satisfy the requirement.
How it works: You modify capolicy1 so that all the required conditions must be satisfied before access to the Azure portal is granted. In addition to requiring MFA, the grant controls can also require other conditions from the requirements, such as connecting from a hybrid Azure AD-joined device.
Relevance to the scenario: The conditional access policy enforces access control based on the authentication and authorization rules specified in the requirements, which also state that "users...must connect from a hybrid Azure AD-joined device". A sketch of updating the grant controls is shown below.
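As a rough illustration of what the grant control looks like behind the portal UI, the sketch below updates a conditional access policy through Microsoft Graph so that access requires both MFA and a hybrid Azure AD-joined device. The policy ID and the Graph token are placeholders, and in practice you would normally configure this in the Azure portal rather than by code.

```python
import requests

# Hypothetical sketch: update capolicy1's grant controls via Microsoft Graph.
# POLICY_ID and GRAPH_TOKEN are placeholders you would obtain separately.
POLICY_ID = "<capolicy1-object-id>"
GRAPH_TOKEN = "<token with Policy.ReadWrite.ConditionalAccess>"

grant_controls = {
    "grantControls": {
        "operator": "AND",  # require every listed control
        "builtInControls": ["mfa", "domainJoinedDevice"],  # MFA + hybrid Azure AD-joined device
    }
}

resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/identity/conditionalAccessPolicies/{POLICY_ID}",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}", "Content-Type": "application/json"},
    json=grant_controls,
    timeout=10,
)
resp.raise_for_status()
```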
Why Other Options are Incorrect:
To register the users for Azure MFA, use: Azure AD Identity Protection: Azure AD Identity Protection is used to detect and investigate risky sign-in behavior and to configure risk-based conditional access policies. It’s not the primary mechanism for registering users for MFA. While Identity Protection does have an MFA registration policy, it does not enable MFA, but only prompts a user to register for MFA.
To register the users for Azure MFA, use: Security defaults in Azure AD: Security defaults are a tenant-wide baseline that enables multi-factor authentication and several other protections. They cannot be combined with custom conditional access policies and do not allow the fine-grained control needed here, and therefore are not the correct answer.
To enforce Azure MFA authentication, configure: Session control in capolicy1: Session controls in a conditional access policy are used to control user browser sessions, not to enforce MFA requirements, and are therefore not the correct mechanism to solve this requirement.
To enforce Azure MFA authentication, configure: Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant: Identity Protection detects risky sign-ins and responds to them automatically, but it does not enforce MFA for every sign-in to the Azure portal, so it is not the correct mechanism here.
In Summary:
The best approach is to first enable Per-user MFA, and then enforce MFA through the Conditional Access Policy (capolicy1).
Important Notes for Azure 304 Exam:
Azure MFA: Know how to enable and enforce MFA for users. Be familiar with both Per-user MFA, and the security defaults settings in Azure AD.
Conditional Access Policies: You MUST know how conditional access policies work and how to configure access rules (including MFA requirements).
Grant Controls: Understand the use of grant controls to enforce authentication requirements.
Azure AD Identity Protection: Understand how Identity Protection works, but be aware it is for risk-based policies, and is not intended for setting up MFA on a user account, or enforcing MFA on logins.
Hybrid Azure AD Join: Be familiar with the benefits and requirements for Hybrid Azure AD-joined devices and how to use them in conjunction with conditional access policies.
Service Selection: Be able to pick the correct service for each task, and understand that setting up MFA and enforcing MFA are distinct steps that require different tools.
Azure Environment -
Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).
The litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.
On-Premises Environment -
The on-premises network of Litware contains the resources shown in the table in the Existing Environment section above.
Network Environment -
Litware has ExpressRoute connectivity to Azure.
Planned Changes and Requirements
Litware plans to implement the following changes:
Migrate DB1 and DB2 to Azure.
Migrate App1 to Azure virtual machines.
Migrate the external storage used by App1 to Azure Storage.
Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.
Authentication and Authorization Requirements
Litware identifies the following authentication and authorization requirements:
Only users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using
Azure Multi-Factor Authentication (MFA).
The Network Contributor built-in RBAC role must be used to grant permissions to the network administrators for all the virtual networks in all the Azure subscriptions.
To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.
RBAC roles must be applied at the highest level possible.
Resiliency Requirements -
Litware identifies the following resiliency requirements:
Once migrated to Azure, DB1 and DB2 must meet the following requirements:
Maintain availability if two availability zones in the local Azure region fail.
Fail over automatically.
Minimize I/O latency.
App1 must meet the following requirements:
Be hosted in an Azure region that supports availability zones.
Be hosted on Azure virtual machines that support automatic scaling.
Maintain availability if two availability zones in the local Azure region fail.
Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.
App1 must NOT share physical hardware with other workloads.
Business Requirements -
Litware identifies the following business requirements:
Minimize administrative effort.
Minimize costs.
After you migrate App1 to Azure, you need to enforce the data modification requirements to meet the security and compliance requirements.
What should you do?
A. Create an access policy for the blob service.
B. Implement Azure resource locks.
C. Create Azure RBAC assignments.
D. Modify the access level of the blob service
Which option is correct? Why is it correct? And what are the important notes for the AZ-305 exam?
The Goal
As before, the primary goal is to enforce this requirement:
“Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.”
Evaluating the Options Based on Proximity
Let’s analyze each option again:
A. Create an access policy for the blob service.
Why it’s closest to being correct: While it doesn’t directly enforce immutability, access policies do allow you to control write access. By carefully constructing an access policy, you could, in theory, grant write access for a specific period or to a particular user/group, and then potentially restrict it later to help prevent further modification. However, it is important to remember this does not ensure immutability and is just a temporary restriction to the data.
Why it’s still not ideal: Access policies do not inherently prevent modification. A user or process could still modify the data if granted the appropriate permissions. It can also get complex to manage.
B. Implement Azure resource locks.
Why it’s NOT a good fit: As mentioned previously, resource locks focus on preventing deletion or changes to the resources, not the data within the resources. This is not even remotely related to the requirement.
C. Create Azure RBAC assignments.
Why it’s NOT a good fit: Like resource locks, RBAC controls the permissions of who can do what with the Azure resources. RBAC does not provide a mechanism for ensuring immutability of the data.
D. Modify the access level of the blob service.
Why it’s NOT a good fit: Access levels (private, blob, container) control who can read the data anonymously, not whether the data within the account can be modified.
The Closest Correct Answer
Given the limited options, A. Create an access policy for the blob service is the closest to the correct approach; however, it still does not fully satisfy the requirement.
Why? Out of all the given answers it comes closest to addressing the prompt, even if imperfectly. The other options do not address data immutability at all.
Important Note for the AZ-305 Exam
The main takeaway here is that the exam will sometimes give you a multiple-choice question where the best answer isn’t provided. This forces you to choose the least incorrect option.
Here’s what you need to remember for these types of questions:
Understand Core Concepts: Have a strong grasp of the core Azure services, like Storage, RBAC, etc. and how they function.
Identify What’s Missing: If the correct feature is not an option, identify what comes closest.
Consider the Intent: What is the requirement asking? Then look for the answer that best aligns with that intent. In this case, the intent is to prevent modification of data.
Process of Elimination: Discard answers that are completely irrelevant.
A scenario where option A could be applied (though it still does not satisfy the prompt):
Access policies for data immutability could look like this:
Grant Write Access Initially: A user/process with write access writes the data
Restrict Write Access: Access policies would restrict write access to all but users/groups responsible for administration of the data.
Create New Policy: After the 3-year window, an access policy could be created to provide read-only access.
This method has some issues:
Complexity: Managing access policies like this is complex and is not scalable.
Not Truly Immutable: Even with all that complexity, a user with the right access can still delete and modify the data.
In summary:
A. Create an access policy for the blob service is the closest to the correct approach among the given options. The ideal approach would be a time-based retention (immutability) policy on the blob container, which is not offered in the answers. For the AZ-305 exam, it is important to choose the answer that is closest to correct, even if it is not exact. A sketch of configuring such an immutability policy is shown below.
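For reference, the ideal mechanism, a time-based retention (immutability) policy on the blob container, could look roughly like this with the azure-mgmt-storage SDK. The subscription, resource group, account, and container names are placeholders, and three years is approximated as 1,095 days.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Hypothetical sketch: apply a 3-year time-based retention (immutability)
# policy to the container that will hold the App1 data.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.blob_containers.create_or_update_immutability_policy(
    resource_group_name="rg-app1",    # placeholder
    account_name="stapp1data",        # placeholder
    container_name="app1-data",       # placeholder
    parameters={"immutability_period_since_creation_in_days": 3 * 365},
)
# New blobs can still be written, but existing blobs cannot be modified or
# deleted until the retention period expires.
```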
HOTSPOT
You plan to migrate App1 to Azure.
You need to recommend a storage solution for App1 that meets the security and compliance requirements.
Which type of storage should you recommend, and how should you recommend configuring the storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
Premium page blobs
Premium file shares
Standard general-purpose v2
Configuration:
NFSv3
Large file shares
Hierarchical namespace
Here’s the breakdown of the correct answer and why:
Storage account type: Standard general-purpose v2
Configuration: Hierarchical namespace
Explanation:
Standard general-purpose v2: This storage account type allows you to utilize Blob storage, which is the key to meeting the immutability requirement. Azure Blob storage offers Immutability policies (write once, read many - WORM). This directly addresses the security and compliance requirement to prevent modification of new and existing data for three years.
Hierarchical namespace: Enabling the hierarchical namespace turns the account into Azure Data Lake Storage Gen2 (built on top of Standard general-purpose v2), which provides Apache Hadoop-compatible storage and POSIX access control list (ACL) file-level permissions, matching the external storage solution that App1 uses on-premises today. Given the available options, it’s the most relevant configuration choice.
Why other options are incorrect:
Storage Account Type:
Premium page blobs: Primarily used for Azure Virtual Machine disks and do not offer built-in immutability policies suitable for this requirement.
Premium file shares: While offering SMB access (potentially useful for on-premises access), they don’t have the built-in immutability policies of Blob storage.
Configuration:
NFSv3: While a file sharing protocol, it’s less relevant in this context as the primary requirement is immutability. Also, accessing blob storage from on-premises would typically be done through other methods (like Azure File Sync or the Storage Explorer).
Large file shares: This refers to the capacity of file shares, not the core security and compliance feature needed here.
Important Considerations:
On-premises access: While the recommendation leans towards Blob storage for immutability, you’ll need to consider how on-premises users and services will access the data. Options include:
Azure Storage Explorer: A free tool that allows access to Azure Storage.
Azure File Sync: If the data lends itself to a file-sharing model, you could sync a portion of the blob storage to an on-premises file server.
Direct API access: On-premises applications could be developed to interact with the Blob Storage APIs.
Preventing public endpoint access: This can be achieved by configuring private endpoints for the storage account, regardless of the storage type chosen.
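A rough sketch of provisioning such an account with the azure-mgmt-storage SDK is shown below. The names, region, and SKU are illustrative assumptions; the hierarchical namespace flag can only be set when the account is created, and public network access is disabled in line with the private endpoint guidance above.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Hypothetical sketch: a Standard general-purpose v2 account with the
# hierarchical namespace (Data Lake Storage Gen2) enabled and the public
# endpoint disabled. Names, region, and SKU are placeholders.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    resource_group_name="rg-app1",
    account_name="stapp1data",
    parameters={
        "location": "eastus",
        "kind": "StorageV2",
        "sku": {"name": "Standard_ZRS"},
        "is_hns_enabled": True,               # hierarchical namespace (ADLS Gen2)
        "public_network_access": "Disabled",  # access only via private endpoint
    },
)
account = poller.result()
```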
HOTSPOT
You plan to migrate DB1 and DB2 to Azure.
You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.
What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose
Explanation:
Box 1: SQL Managed Instance
Scenario: Once migrated to Azure, DB1 and DB2 must meet the following requirements:
✑ Maintain availability if two availability zones in the local Azure region fail.
✑ Fail over automatically.
✑ Minimize I/O latency.
The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a server or all databases in a managed instance to another region. It is a declarative abstraction on top of the existing active geo-replication feature, designed to simplify deployment and management of geo-replicated databases at scale. You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined policy. The latter option allows you to automatically recover multiple related databases in a secondary region after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or SQL Managed Instance availability in the primary region.
Box 2: Business critical
SQL Managed Instance is available in two service tiers:
General purpose: Designed for applications with typical performance and I/O latency requirements.
Business critical: Designed for applications with low I/O latency requirements and minimal impact of underlying maintenance operations on the workload.
You plan to migrate App1 to Azure.
You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.
What should you include in the recommendation?
a private endpoint
a service endpoint that has a service endpoint policy
Azure public peering for an ExpressRoute circuit
Microsoft peering for an ExpressRoute circuit
Understanding the Requirements
Here are the key networking-related requirements:
Security:
“Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.”
Connectivity:
“On-premises users and services must be able to access the Azure Storage account that will host the data in App1.”
Existing Environment:
“Litware has ExpressRoute connectivity to Azure.”
Analyzing the Options
Let’s evaluate each option against these requirements:
a private endpoint
Pros: Provides a private IP address within the virtual network for the storage account, thus preventing public access, which meets the security requirement. It also enables on-premises resources to connect to that private IP over the ExpressRoute connection.
Cons: Can increase cost slightly, requires virtual network integration.
Suitability: Highly suitable. It meets the security requirement of preventing public access and allows on-premises users to access the storage account over the private network and ExpressRoute connection.
a service endpoint that has a service endpoint policy
Pros: Allows VNETs to access the storage account without exposing it to the public internet.
Cons: Does not allow for on-premises resources to access the storage account.
Suitability: Not suitable. Service endpoints only apply to traffic originating from Azure virtual networks; on-premises traffic would still have to reach the storage account over its public endpoint.
Azure public peering for an ExpressRoute circuit
Pros: Can provide access to Azure public services, such as storage, via the ExpressRoute connection.
Cons: Does not block access from the public internet, which does not meet the security requirements.
Suitability: Not suitable because public peering is not a secure method to access storage.
Microsoft peering for an ExpressRoute circuit
Pros: Allows private access to Azure resources, including Azure Storage.
Cons: Does not natively prevent access from the public internet. Requires additional configuration to do so.
Suitability: Microsoft peering is how on-premises resources can reach Azure services over ExpressRoute, but on its own it is not a configuration that prevents public access.
The Correct Recommendation
Based on the analysis, the correct solution is:
a private endpoint
Explanation
Private Endpoints provide a network interface for the storage account directly within a virtual network. This ensures that access to the storage is limited to only resources within the private network. Traffic goes through the ExpressRoute circuit to the private IP on the VNET.
By using a private endpoint, you effectively prevent access from the public internet, fulfilling the security requirement.
Why other options are not correct:
Service endpoints only restrict access from selected virtual networks to the storage account; they do not stop on-premises systems from reaching the public endpoint of the storage account.
Public peering is used to reach public Azure services and does not fulfill the security requirement of preventing access from the public internet.
Microsoft peering lets on-premises systems reach Azure services over ExpressRoute, but it does not by itself prevent those systems from also using the public endpoint. A private endpoint is needed to block the public endpoint.
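To make the recommendation concrete, here is a rough sketch of creating the private endpoint with the azure-mgmt-network SDK. All resource IDs, names, and the region are placeholders, and the group ID "blob" targets the Blob sub-resource of the storage account.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical sketch: create a private endpoint for the App1 storage account
# in an existing subnet. All IDs and names below are placeholders.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.private_endpoints.begin_create_or_update(
    resource_group_name="rg-app1",
    private_endpoint_name="pe-stapp1data",
    parameters={
        "location": "eastus",
        "subnet": {"id": "<subnet-resource-id>"},
        "private_link_service_connections": [
            {
                "name": "stapp1data-blob",
                "private_link_service_id": "<storage-account-resource-id>",
                "group_ids": ["blob"],  # target the Blob service of the account
            }
        ],
    },
)
private_endpoint = poller.result()
# On-premises clients resolve the account's private IP (typically via a
# privatelink DNS zone) and reach it over the ExpressRoute circuit.
```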
Important Notes for the AZ-305 Exam
Private Endpoints vs Service Endpoints: Know the fundamental differences. Service endpoints provide network isolation for traffic from Azure virtual networks but don’t prevent public access. Private endpoints give the target resource a private IP address inside a VNet, so traffic stays on the private network and the public endpoint can be blocked.
ExpressRoute Peering: Understand the differences between Microsoft, Azure public and private peering.
Security and Compliance: Prioritize solutions that align with security requirements. Blocking public access is a common ask.
Read Requirements Carefully: Ensure you meet all requirements including the networking and security.
DRAG DROP
You need to configure an Azure policy to ensure that the Azure SQL databases have TDE enabled. The solution must meet the security and compliance requirements.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
Create an Azure policy definition that uses the deployIfNotExists effect.
Create a user-assigned managed identity.
Invoke a remediation task.
Create an Azure policy assignment.
Create an Azure policy definition that uses the Modify effect.
Answer Area
Understanding the Goal
The goal is to use Azure Policy to automatically enable TDE on all Azure SQL databases within the scope of the policy.
Key Concepts
Azure Policy: Allows you to create, assign, and manage policies that enforce rules across your Azure resources.
Policy Definition: Specifies the conditions that must be met and the actions to take if the conditions are not met.
Policy Assignment: Applies the policy definition to a specific scope (subscription, resource group, etc.).
deployIfNotExists Effect: This policy effect will deploy an ARM template if the resource does not have the configuration (TDE enabled).
Modify Effect: This effect will modify the resource to enforce the condition if it does not exist.
Remediation Task: A process for correcting resources that are not compliant with the policy.
User-Assigned Managed Identity: An identity object in Azure which allows for RBAC permissions and avoids the need for storing credentials for an application.
Steps in the Correct Sequence
Here’s the correct sequence of actions, with explanations:
Create an Azure policy definition that uses the deployIfNotExists effect.
Why? This is the first step. You need to define what the policy should do. For TDE, deployIfNotExists is used to deploy a configuration if it’s missing. The deployIfNotExists will deploy an ARM template that enables TDE on the database.
This step specifies the “rule” that will be enforced.
Create an Azure policy assignment.
Why? After defining the policy, you need to assign it to a scope, such as a subscription or a resource group. This step specifies where the policy is applied.
This tells Azure what needs to be checked against the policy.
Invoke a remediation task.
Why? The policy assignment only remediates resources that are created or updated after the assignment. Existing non-compliant resources require a remediation task to bring them into compliance.
The Correct Drag-and-Drop Order
Here’s how you should arrange the actions in the answer area:
Create an Azure policy definition that uses the deployIfNotExists effect.
Create an Azure policy assignment.
Invoke a remediation task.
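As a very rough sketch of the same three-step sequence with the azure-mgmt-resource Policy SDK: the names and scope are placeholders, and the deployIfNotExists rule is heavily abbreviated; a real definition also needs the existence condition, the ARM deployment template that enables TDE, and the role definitions the assignment's identity requires.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "<subscription-id>"
client = PolicyClient(DefaultAzureCredential(), subscription_id)

# 1. Policy definition with the deployIfNotExists effect (rule abbreviated).
definition = client.policy_definitions.create_or_update(
    policy_definition_name="enable-sql-tde",
    parameters={
        "policy_type": "Custom",
        "mode": "Indexed",
        "policy_rule": {
            "if": {"field": "type", "equals": "Microsoft.Sql/servers/databases"},
            "then": {
                "effect": "deployIfNotExists",
                # The real "details" block also carries the existence condition,
                # the ARM template that enables TDE, and the roleDefinitionIds
                # the assignment's identity needs; elided here for brevity.
                "details": {"type": "Microsoft.Sql/servers/databases/transparentDataEncryption"},
            },
        },
    },
)

# 2. Assign the definition at the chosen scope; deployIfNotExists assignments
#    also need a managed identity and a location (details omitted here).
assignment = client.policy_assignments.create(
    scope=f"/subscriptions/{subscription_id}",
    policy_assignment_name="enable-sql-tde-assignment",
    parameters={"policy_definition_id": definition.id},
)

# 3. Existing databases are fixed by starting a remediation task (portal, CLI,
#    or the azure-mgmt-policyinsights SDK) against this assignment.
```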
Why Other Options are Incorrect in this context:
Create a user-assigned managed identity: Although managed identities are used in conjunction with policies that use the deployIfNotExists effect, a user-assigned identity does not need to be created here; the system-assigned managed identity created with the policy assignment performs the remediation. Therefore, creating a user-assigned managed identity is not needed and not within the scope of the task.
Create an Azure policy definition that uses the Modify effect: Although Modify is used in Azure policies, it is not relevant in the configuration of TDE. deployIfNotExists is a better approach because TDE needs to be enabled, which requires a deployment.
Important Notes for the AZ-305 Exam
Azure Policy Effects: Be extremely familiar with different policy effects, especially deployIfNotExists, audit, deny, and modify.
Policy Definition vs. Assignment: Understand the difference between defining a policy and applying it to resources.
Remediation: Understand how to use remediation tasks to fix non-compliant resources.
Scope: Be able to set the appropriate scope for policy assignments.
Managed Identities: Know how to use managed identities for secure resource management with Azure policies.
HOTSPOT
You plan to migrate App1 to Azure.
You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Number of host groups:
1
2
3
6
Number of virtual machine scale sets:
0
1
3
Number of host groups: 3
Number of virtual machine scale sets: 1
Explanation:
Number of host groups: 3
Requirement: Maintain availability if two availability zones in the local Azure region fail.
Dedicated Hosts and Zones: Azure Dedicated Hosts are a regional resource, but you deploy host groups within specific availability zones. To be resilient to the failure of two availability zones, you need your virtual machines spread across at least three availability zones. Since you’re using dedicated hosts, you need a host group in each of those three availability zones.
Number of virtual machine scale sets: 1
Requirement: Be hosted on Azure virtual machines that support automatic scaling and maintain availability if two availability zones fail.
Virtual Machine Scale Sets and Zones: Azure Virtual Machine Scale Sets allow you to deploy and manage a set of identical, auto-scaling virtual machines. A single VM Scale Set can be configured to span multiple availability zones. This is the recommended approach for high availability and automatic scaling across zones. You don’t need multiple scale sets for each zone; one can manage the deployment across the necessary zones.
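To make the host-group count concrete, the sketch below creates one dedicated host group in each of three availability zones with the azure-mgmt-compute SDK. The names, region, and fault-domain count are illustrative assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Hypothetical sketch: one dedicated host group per availability zone (1-3),
# so App1 stays available even if two zones fail.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for zone in ("1", "2", "3"):
    client.dedicated_host_groups.create_or_update(
        resource_group_name="rg-app1",
        host_group_name=f"hg-app1-zone{zone}",
        parameters={
            "location": "eastus2",
            "zones": [zone],                      # a host group is pinned to a single zone
            "platform_fault_domain_count": 1,
            "support_automatic_placement": True,  # lets a scale set place VMs on the hosts
        },
    )
```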
Why other options are incorrect:
Number of host groups:
1: This would not provide any availability zone resilience. If the single zone with the host group fails, App1 goes down.
2: This would only protect against the failure of a single availability zone. The requirement is resilience against two zone failures.
6: While this would provide more resilience, it’s not necessary to meet the specific requirement of tolerating two zone failures and would likely be more expensive.
Number of virtual machine scale sets:
0: You need to use Virtual Machine Scale Sets to meet the automatic scaling requirement.
3: While technically possible to have three separate VM scale sets (one in each zone), it adds unnecessary management complexity. A single VM scale set configured to span multiple availability zones is the standard and more efficient approach.
You need to implement the Azure RBAC role assignments for the Network Contributor role.
The solution must meet the authentication and authorization requirements.
What is the minimum number of assignments that you must use?
1
2
5
10
15
The correct answer is 2.
Here’s why:
Management Groups: The most efficient way to apply RBAC roles across multiple subscriptions is to assign them at the management group level. Every Azure AD tenant has a root management group under which all of its subscriptions sit, so a single assignment per tenant can cover every subscription in that tenant.
Litware.com and dev.litware.com Tenants: You have subscriptions in two different tenants (litware.com and dev.litware.com). Management groups, and the role assignments scoped to them, cannot span tenants, so you need one Network Contributor assignment at the management group level in each tenant.
Minimum Assignments:
One assignment of the Network Contributor role at the management group level associated with the litware.com tenant. This will apply the role to all 10 subscriptions within that tenant.
One assignment of the Network Contributor role at the management group level associated with the dev.litware.com tenant. This will apply the role to all 5 subscriptions within that tenant.
Why other options are incorrect:
1: You have subscriptions in two different tenants, so a single assignment won’t cover all subscriptions.
5: This might be the number of subscriptions in one of the tenants, but not all.
10: This might be the number of subscriptions in the litware.com tenant, but not all.
15: This is the total number of subscriptions, and you don’t need to assign the role individually to each subscription if using management groups.
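For illustration, one of the two assignments could be created at management-group scope roughly as follows with the azure-mgmt-authorization SDK. The management group ID and principal object ID are placeholders; the GUID is the built-in Network Contributor role definition ID.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

# Hypothetical sketch: assign the built-in Network Contributor role at the
# scope of one tenant's management group, covering every subscription under it.
NETWORK_CONTRIBUTOR = "4d97b98b-1d4f-4787-a291-c67834d212e7"  # built-in role GUID
scope = "/providers/Microsoft.Management/managementGroups/<litware-mg-id>"

client = AuthorizationManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),  # role assignment names are GUIDs
    parameters={
        "role_definition_id": f"{scope}/providers/Microsoft.Authorization/roleDefinitions/{NETWORK_CONTRIBUTOR}",
        "principal_id": "<network-admins-group-object-id>",
    },
)
# Repeat once for the dev.litware.com tenant's management group: two assignments in total.
```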
HOTSPOT
You plan to migrate App1 to Azure.
You need to estimate the compute costs for App1 in Azure. The solution must meet the security and compliance requirements.
What should you use to estimate the costs, and what should you implement to minimize the costs? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To estimate the costs, use:
Azure Advisor
The Azure Cost Management Power BI app
The Azure Total Cost of Ownership (TCO) calculator
Implement:
Azure Reservations
Azure Hybrid Benefit
Azure Spot Virtual Machine pricing
To estimate the costs, use: The Azure Total Cost of Ownership (TCO) calculator
Why correct: The Azure TCO calculator is specifically designed to compare the cost of running your workloads on-premises versus in Azure. It allows you to input details about your current infrastructure and planned Azure resources to get an estimated cost for migrating to the cloud. This is the most direct and comprehensive tool for this purpose.
Implement: Azure Reservations
Why correct: Azure Reservations offer significant discounts (up to 72% compared to pay-as-you-go pricing) by committing to using specific Azure resources (like virtual machines for App1) for a defined period (typically 1 or 3 years). This is a highly effective way to minimize compute costs for predictable workloads like App1 once it’s migrated.
Why the other options are less suitable:
To estimate the costs, use:
Azure Advisor: While Azure Advisor provides cost optimization recommendations, it primarily analyzes your existing Azure usage. Since App1 is being migrated, you don’t have existing Azure usage for it yet, making the TCO calculator more appropriate for initial estimations.
The Azure Cost Management Power BI app: This is a tool for visualizing and analyzing your current Azure spending. It’s not designed for pre-migration cost estimations.
Implement:
Azure Hybrid Benefit: Azure Hybrid Benefit reduces costs for Windows Server and SQL Server virtual machines when you bring eligible on-premises licenses. App1, however, runs on Ubuntu 18.04 virtual machines, so Hybrid Benefit does not apply to its compute costs; Azure Reservations are the relevant savings mechanism here.
Azure Spot Virtual Machine pricing: Spot VMs offer deep discounts but come with the risk of eviction if Azure needs the capacity back. For a production application like App1, especially considering the security and compliance requirements mentioned in the broader scenario, relying on potentially unstable Spot VMs is generally not recommended. The risk of interruption outweighs the cost savings in this context.
In summary:
The Azure TCO calculator is the most direct tool for pre-migration cost estimation.
Azure Reservations are generally the most effective and broadly applicable method for implementing cost savings for compute resources like the VMs hosting App1, assuming a relatively stable workload.
Existing Environment: Technical Environment
The on-premises network contains a single Active Directory domain named contoso.com.
Contoso has a single Azure subscription.
Existing Environment: Business Partnerships
Contoso has a business partnership with Fabrikam, Inc. Fabrikam users access some Contoso applications over the internet by using Azure Active Directory (Azure AD) guest accounts.
Requirements: Planned Changes
Contoso plans to deploy two applications named App1 and App2 to Azure.
Requirements: App1
App1 will be a Python web app hosted in Azure App Service that requires a Linux runtime.
Users from Contoso and Fabrikam will access App1.
App1 will access several services that require third-party credentials and access strings.
The credentials and access strings are stored in Azure Key Vault.
App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.
App1 has the following data requirements:
✑ Each instance will write data to a data store in the same availability zone as the instance.
✑ Data written by any App1 instance must be visible to all App1 instances.
App1 will only be accessible from the internet. App1 has the following connection requirements:
✑ Connections to App1 must pass through a web application firewall (WAF).
✑ Connections to App1 must be active-active load balanced between instances.
✑ All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.
Every hour, you will run a maintenance task by invoking a PowerShell script that copies files from all the App1 instances. The PowerShell script will run from a central location.
Requirements: App2
App2 will be a .NET app hosted in App Service that requires a Windows runtime.
App2 has the following file storage requirements:
✑ Save files to an Azure Storage account.
✑ Replicate files to an on-premises location.
✑ Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.
You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.
Application Development Requirements
Application developers will constantly develop new versions of App1 and App2.
The development process must meet the following requirements:
✑ A staging instance of a new application version must be deployed to the application host before the new version is used in production.
✑ After testing the new version, the staging version of the application will replace the production version.
✑ The switch to the new application version from staging to production must occur without any downtime of the application.
Identity Requirements
Contoso identifies the following requirements for managing Fabrikam access to resources:
✑ The solution must minimize development effort.
Security Requirement
All secrets used by Azure services must be stored in Azure Key Vault.
Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.
DRAG DROP
You need to recommend a solution that meets the file storage requirements for App2.
What should you deploy to the Azure subscription and the on-premises network? To answer, drag the appropriate services to the correct locations. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Services
Azure Blob Storage
Azure Data Box
Azure Data Box Gateway
Azure Data Lake Storage
Azure File Sync
Azure Files
Answer Area
Azure subscription: Service
On-premises network: Service
Deconstruct the Requirements: First, identify the key requirements for App2’s file storage:
Store files in an Azure Storage account.
Replicate files to an on-premises location.
On-premises clients need to read files via SMB over the LAN.
Azure Storage Options - Initial Brainstorm: Think about the different Azure Storage services and their core functionalities:
Azure Blob Storage: Excellent for unstructured data, cost-effective, but doesn’t natively provide SMB access or direct on-premises synchronization.
Azure Data Lake Storage: Built on Blob Storage, optimized for big data analytics. Doesn’t directly address SMB access or on-premises sync in the way required.
Azure Files: Provides fully managed file shares in the cloud, accessible via SMB. This seems promising for on-premises access.
Azure File Sync: A service to synchronize Azure File shares with on-premises Windows Servers. This looks like a strong candidate for fulfilling the on-premises replication and SMB access needs.
Azure Data Box: A physical appliance for transferring large amounts of data to Azure. Not suitable for ongoing synchronization.
Azure Data Box Gateway: A virtual appliance that acts as a network file share, caching data to Azure. While it involves on-premises component, Azure File Sync is a better fit for the replication requirement.
Focus on the SMB Requirement: The “on-premises clients can read the files over the LAN by using the SMB protocol” requirement is a strong indicator that Azure Files will be needed in Azure. Blob Storage and Data Lake Storage don’t offer native SMB access.
Address the On-premises Replication: The requirement to “replicate files to an on-premises location” points towards a synchronization solution. Azure File Sync is specifically designed to synchronize Azure File shares with on-premises Windows Servers, making it the ideal choice for this.
Map to the Answer Area: Based on the above analysis:
Azure Subscription: Azure Files will host the file shares in the cloud.
On-premises network: the Azure File Sync agent will be installed on an on-premises Windows Server (registered with a Storage Sync Service) to synchronize with the Azure file share.
Review and Confirm: Double-check if the chosen services meet all three initial requirements:
Store in Azure Storage Account? Yes, Azure Files is an Azure Storage service.
Replicate on-premises? Yes, Azure File Sync handles the synchronization.
SMB access? Yes, Azure Files provides SMB access, and File Sync makes it available on-premises.
Eliminate Other Options (and justify):
Azure Blob Storage/Data Lake Storage: Lack native SMB access.
Azure Data Box/Data Box Gateway: Not designed for continuous synchronization. Data Box is for bulk transfer, and Data Box Gateway is more of a caching solution, not direct replication of the entire Azure file share on-premises.
This systematic approach helps identify the core requirements and match them to the appropriate Azure services, leading to the correct solution.
Here’s the breakdown of the correct services and why:
Azure subscription: Azure Files
Why: Azure Files provides fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) protocol. This directly addresses the requirement to save files to an Azure Storage account and make them accessible over SMB.
On-premises network: Azure File Sync
Why: Azure File Sync is the Azure service that enables you to synchronize Azure file shares with on-premises Windows Servers. This fulfills the requirement to replicate files to an on-premises location and allows on-premises clients to read the files over the LAN using the SMB protocol.
Therefore, the correct answer area is:
Azure subscription: Azure Files
On-premises network: Azure File Sync
Explanation of why other options are incorrect:
Azure Blob Storage: While a core Azure storage service, it doesn’t natively provide SMB access required for on-premises clients to read files over the LAN.
Azure Data Box: This is a physical appliance used for transferring large amounts of data into Azure. It’s not for ongoing synchronization or SMB access.
Azure Data Box Gateway: This is a virtual appliance that resides on your on-premises network and acts as a network file share, caching data to Azure Blob storage. While it involves an on-premises component, it doesn’t directly replicate the Azure file share for native SMB access like Azure File Sync.
Azure Data Lake Storage: This is built on top of Blob storage and is optimized for big data analytics. It doesn’t directly provide SMB access in the same way as Azure Files.
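On the Azure side, creating the file share could look roughly like the sketch below (azure-mgmt-storage); the on-premises side, installing the Azure File Sync agent on a Windows Server and registering it with a Storage Sync Service, is configured separately. All names and the quota are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Hypothetical sketch: an SMB file share for App2 in an existing storage
# account. This share is what Azure File Sync later syncs to the on-premises
# Windows Server. Names and quota are placeholders.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.file_shares.create(
    resource_group_name="rg-app2",
    account_name="stapp2files",
    share_name="app2-files",
    file_share={"share_quota": 1024},  # quota in GiB
)
```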
You need to recommend a solution that meets the data requirements for App1.
What should you recommend deploying to each availability zone that contains an instance of App1?
an Azure Cosmos DB that uses multi-region writes
an Azure Storage account that uses geo-zone-redundant storage (GZRS)
an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)
an Azure SQL database that uses active geo-replication
The correct answer is an Azure Cosmos DB that uses multi-region writes.
Here’s why:
Data Requirements Breakdown:
Each instance writes data to a data store in the same availability zone: This implies a need for a local data store for low latency writes.
Data written by any App1 instance must be visible to all App1 instances: This necessitates a globally consistent data store that replicates across regions.
Why Azure Cosmos DB with Multi-Region Writes Fits:
Multi-Region Writes: This feature of Cosmos DB allows you to designate multiple Azure regions as writeable. You would deploy a Cosmos DB account with write regions in both East US and West Europe.
Local Writes: Each App1 instance would be configured to write to the Cosmos DB region closest to it (within the same availability zone’s region). This ensures low-latency writes.
Global Consistency: Cosmos DB provides various consistency levels. For this requirement, you would likely choose “Strong” or “Session” consistency to ensure that data written in one region is eventually (or immediately, with Strong consistency) visible to all other regions.
Availability Zones: Cosmos DB itself offers high availability within a region by replicating data across multiple availability zones.
Why Other Options Are Less Suitable:
Azure Storage account with GZRS: GZRS provides high availability and durability by replicating data synchronously across three availability zones within a primary region and asynchronously to a secondary region. However, it doesn’t offer the same level of fine-grained control over write regions and automatic data replication for active-active scenarios like Cosmos DB. Also, accessing blob storage directly from multiple instances for transactional data can be complex.
Azure Data Lake Store with GZRS: Similar limitations to Azure Storage with GZRS. It’s primarily designed for large-scale analytics data, not transactional data requiring low-latency writes from multiple instances.
Azure SQL database with active geo-replication: While active geo-replication provides read replicas in different regions, only the primary region is writable. This doesn’t directly meet the requirement of each instance writing to a local data store and having that data immediately available to all instances across regions in an active-active manner.
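A rough sketch of provisioning such an account with the azure-mgmt-cosmosdb SDK is shown below. The account name, resource group, regions, and consistency level are illustrative assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

# Hypothetical sketch: a Cosmos DB account with write regions in East US and
# West Europe and zone redundancy in each, so every App1 instance writes locally.
client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.database_accounts.begin_create_or_update(
    "rg-app1",        # resource group (placeholder)
    "cosmos-app1",    # account name (placeholder)
    {
        "location": "eastus",
        "database_account_offer_type": "Standard",
        "enable_multiple_write_locations": True,  # multi-region writes
        "locations": [
            {"location_name": "East US", "failover_priority": 0, "is_zone_redundant": True},
            {"location_name": "West Europe", "failover_priority": 1, "is_zone_redundant": True},
        ],
        "consistency_policy": {"default_consistency_level": "Session"},
    },
)
account = poller.result()
```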
HOTSPOT
You are evaluating whether to use Azure Traffic Manager and Azure Application Gateway to meet the connection requirements for App1.
What is the minimum number of instances required for each service? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Azure Traffic Manager:
1
2
3
6
Azure Application Gateway:
1
2
3
6
Azure Traffic Manager: 1
Why: Azure Traffic Manager is a DNS-based traffic routing service. You only need one Traffic Manager profile to configure the geographic routing policy. Traffic Manager itself is a highly available, globally distributed service managed by Azure. You don’t need multiple instances for redundancy or load balancing the Traffic Manager service itself. Its availability is built-in.
Azure Application Gateway: 2
Why: You need at least two instances of Azure Application Gateway. Here’s the breakdown:
One instance in the East US region: To provide the WAF and load balancing for the three App1 instances in East US.
One instance in the West Europe region: To provide the WAF and load balancing for the three App1 instances in West Europe.
Since connections must pass through a WAF and you have instances in two distinct regions with traffic being directed based on geography, you need a separate Application Gateway in each region to handle the regional traffic and provide WAF protection.
Therefore, the correct answer area is:
Azure Traffic Manager: 1
Azure Application Gateway: 2
HOTSPOT -
You need to recommend a solution to ensure that App1 can access the third-party credentials and access strings. The solution must meet the security requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Authenticate App1 by using:
A certificate
A system-assigned managed identity
A user-assigned managed identity
Authorize App1 to retrieve Key Vault
secrets by using:
An access policy
A connected service
A private link
A role assignment
Explanation:
Scenario: Security Requirement
All secrets used by Azure services must be stored in Azure Key Vault.
Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.
Box 1: A system-assigned managed identity
A system-assigned managed identity is created as part of the App Service instance and shares its lifecycle, which satisfies the requirement that credentials be tied to the service instance and not shared between services. Behind the scenes, Azure represents the identity as a service principal in Azure AD, but the platform creates and manages its credentials for you.
Note: Authentication with Key Vault works in conjunction with Azure Active Directory (Azure AD), which is responsible for authenticating the identity of any given security principal.
A security principal is an object that represents a user, group, service, or application that’s requesting access to Azure resources. Azure assigns a unique object ID to every security principal.
Box 2: A role assignment
You can provide access to Key Vault keys, certificates, and secrets with Azure role-based access control (Azure RBAC), for example by assigning the Key Vault Secrets User role to App1’s managed identity at the vault scope, as sketched below.
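As a minimal sketch (the vault URL and secret name are placeholders), App1 could retrieve a secret as follows once its managed identity has been granted an appropriate Key Vault role, such as Key Vault Secrets User:

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical sketch: App1 reads a third-party connection string from
# Key Vault using the App Service's system-assigned managed identity.
credential = ManagedIdentityCredential()
client = SecretClient(vault_url="https://kv-app1.vault.azure.net", credential=credential)

secret = client.get_secret("ThirdPartyConnectionString")  # placeholder secret name
connection_string = secret.value
```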
You need to recommend an App Service architecture that meets the requirements for App1.
The solution must minimize costs.
What should you recommend?
one App Service Environment (ASE) per availability zone
one App Service plan per availability zone
one App Service plan per region
one App Service Environment (ASE) per region
Understanding the Requirements
Here are the key requirements for App1’s App Service deployment:
High Availability: App1 has six instances, three in East US and three in West Europe, spread across availability zones within each region.
Web App Service: The App1 app will be hosted on Azure App Service.
Minimize Costs: The solution should be the most cost-effective while maintaining the necessary features.
Linux Runtime: App1 is a Python web app that requires a Linux runtime.
Key Concepts
Azure App Service: A PaaS service for hosting web applications, mobile backends, and APIs.
App Service Plan: Defines the underlying compute resources (VMs) on which your app(s) run.
App Service Environment (ASE): Provides a fully isolated and dedicated environment for running your App Service apps.
Availability Zones: Physically separate locations within an Azure region that provide high availability.
Analyzing the Options
Let’s evaluate each option based on its cost-effectiveness and ability to meet the requirements:
one App Service Environment (ASE) per availability zone
Pros: Highest level of isolation and control, can have virtual network integration.
Cons: Most expensive solution.
Suitability: Not suitable due to high costs.
one App Service plan per availability zone
Pros: Provides zone separation and can use differently sized workers in each zone if needed.
Cons: Increases costs through overprovisioning, because you pay for one App Service plan per zone instead of a single plan.
Suitability: Not the most cost-effective approach.
one App Service plan per region
Pros: Cost-effective for multiple instances of an app in a single region, allows multiple VMs to be spun up on one app service plan.
Cons: Zone redundancy requires a plan tier that supports it (Premium v2 or v3) and a minimum instance count.
Suitability: Suitable; the most cost-effective option, provided a zone-redundant plan tier is used.
one App Service Environment (ASE) per region
Pros: Provides isolation and control within a region.
Cons: Very expensive and not needed for this scenario.
Suitability: Not suitable due to high costs.
The Correct Recommendation
Based on the analysis, the most cost-effective solution is:
one App Service plan per region
Explanation
App Service Plan per region: By creating a single App Service plan per region, you can host multiple instances of App1 (three per region) on the same underlying VMs. This is more cost-effective than using separate plans per availability zone.
Availability Zones: Choose a plan tier that supports zone redundancy (Premium v2 or v3) so the instances can be spread across zones.
Zone Redundancy: With zone redundancy enabled, App Service automatically distributes the instances of the single per-region plan across availability zones (a provisioning sketch follows this question).
Why Other Options Are Not Correct
ASE per availability zone: Highly expensive and not needed when App Service can handle the availability zone deployment.
App Service plan per availability zone: Not cost-effective due to overprovisioning; it requires three App Service plans when one per region can handle all instances.
ASE per region: Very costly and unnecessary.
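A minimal provisioning sketch, assuming the azure-mgmt-web Python SDK and a Premium v3 tier; the subscription, resource group, and plan names are placeholders, and the zone_redundant flag requires an SDK/API version that exposes it:

```python
# Sketch: one zone-redundant Linux App Service plan per region for App1.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription

credential = DefaultAzureCredential()
client = WebSiteManagementClient(credential, "<subscription-id>")

for region in ["eastus", "westeurope"]:          # one plan per region
    poller = client.app_service_plans.begin_create_or_update(
        "rg-app1",                               # hypothetical resource group
        f"plan-app1-{region}",                   # hypothetical plan name
        AppServicePlan(
            location=region,
            kind="linux",
            reserved=True,                       # required for Linux plans
            zone_redundant=True,                 # spread instances across zones
            sku=SkuDescription(name="P1v3", tier="PremiumV3", capacity=3),
        ),
    )
    plan = poller.result()
    print(plan.name, plan.provisioning_state)
```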
Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.
Several VMs are exhibiting network connectivity issues.
You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.
Solution: Use Azure Advisor to analyze the network traffic.
Does the solution meet the goal?
Yes
No
Understanding the Goal
The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which indicates a network connectivity issue.
Analyzing the Proposed Solution
Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show you network traffic for VMs, nor can it view network traffic for on-prem VMs.
Evaluation
Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.
The Correct Solution
The tools that would be best suited for this scenario would be:
Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.
Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.
On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.
Does the Solution Meet the Goal?
No, the solution does not meet the goal. Azure Advisor is not the correct tool for analyzing network traffic flow and packet information.
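For reference, a minimal sketch (using the azure-mgmt-network Python SDK; all resource names and IDs are placeholders) of enabling an NSG flow log through Network Watcher, which is what actually records allowed and denied flows:

```python
# Sketch: enable an NSG flow log so allowed/denied traffic to the VMs is recorded.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import FlowLog

credential = DefaultAzureCredential()
client = NetworkManagementClient(credential, "<subscription-id>")

nsg_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-prod"
    "/providers/Microsoft.Network/networkSecurityGroups/nsg-vm1"
)   # hypothetical NSG protecting the affected VMs
storage_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-prod"
    "/providers/Microsoft.Storage/storageAccounts/flowlogsstore"
)   # hypothetical storage account that receives the logs

poller = client.flow_logs.begin_create_or_update(
    "NetworkWatcherRG",             # default resource group for Network Watcher
    "NetworkWatcher_westeurope",    # Network Watcher instance for the VMs' region
    "flowlog-nsg-vm1",              # hypothetical flow log name
    FlowLog(
        location="westeurope",
        target_resource_id=nsg_id,
        storage_id=storage_id,
        enabled=True,
    ),
)
print(poller.result().provisioning_state)
```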
DRAG DROP
You plan to import data from your on-premises environment to Azure. The data is shown in the following table.
On-premises source Azure target
A Microsoft SQL Server 2012 database An Azure SQL database
A table in a Microsoft SQL Server 2014 database An Azure Cosmos DB account that uses the SQL API
What should you recommend using to migrate the data? To answer, drag the appropriate tools to the correct data sources. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Tools
AzCopy
Azure Cosmos DB Data Migration Tool
Data Management Gateway
Data Migration Assistant
Answer Area
From the SQL Server 2012 database: Tool
From the table in the SQL Server 2014 database: Tool
From the SQL Server 2012 database: Data Migration Assistant
Why: The Data Migration Assistant (DMA) is Microsoft’s primary tool for migrating SQL Server databases to Azure SQL Database. It can assess your on-premises SQL Server database for compatibility issues, recommend performance improvements, and then perform the data migration. While SQL Server 2012 is an older version, DMA often supports migrations from various SQL Server versions to Azure SQL Database.
From the table in the SQL Server 2014 database: Azure Cosmos DB Data Migration Tool
Why: The Azure Cosmos DB Data Migration Tool (dtui.exe) is specifically designed for importing data into Azure Cosmos DB from various sources, including SQL Server. Since the target is an Azure Cosmos DB account using the SQL API, this tool is the most direct and efficient way to migrate the data. You can select specific tables for migration.
Therefore, the correct answer area is:
From the SQL Server 2012 database: Data Migration Assistant
From the table in the SQL Server 2014 database: Azure Cosmos DB Data Migration Tool
Explanation of why other tools are incorrect:
AzCopy: This is a command-line utility for copying data to and from Azure Blob Storage, Azure Files, and Azure Data Lake Storage. It’s not designed for migrating relational database schemas and data to Azure SQL Database or Cosmos DB.
Data Management Gateway (Integration Runtime): This is a component of Azure Data Factory that enables data movement between on-premises data stores and cloud services. While it could be used for this, the direct migration tools (DMA and Cosmos DB Data Migration Tool) are simpler and more purpose-built for these specific scenarios. Using Data Factory would introduce more complexity than necessary for a straightforward data migration.
HOTSPOT
You need to design a storage solution for an app that will store large amounts of frequently used data.
The solution must meet the following requirements:
✑ Maximize data throughput.
✑ Prevent the modification of data for one year.
✑ Minimize latency for read and write operations.
Which Azure Storage account type and storage service should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
BlobStorage
BlockBlobStorage
FileStorage
StorageV2 with Premium performance
StorageV2 with Standard performance
Storage service:
Blob
File
Table
Storage account type: BlockBlobStorage
Storage service: Blob
Explanation:
Let’s break down the requirements and why this combination is the best fit:
Requirements:
Maximize Data Throughput: The solution needs to handle a high volume of data transfer.
Prevent Data Modification (Immutable for 1 Year): Data must be stored in a way that prevents any changes for one year.
Minimize Latency: Read and write operations should be as fast as possible.
Storage Account Type: BlockBlobStorage
Why it’s the best choice:
Optimized for Block Blobs: BlockBlobStorage accounts are specifically designed and optimized for storing and accessing block blobs. Block blobs are ideal for unstructured data like text or binary data, which is common for applications storing large amounts of data.
High Throughput: BlockBlobStorage accounts are designed to deliver high throughput for read and write operations.
Immutable Storage Support: BlockBlobStorage accounts support immutable storage policies, allowing you to store data in a WORM (Write Once, Read Many) state, preventing modification for a specified period (like the one year required).
Why other options are less suitable:
BlobStorage: BlobStorage is an older account type. It is recommended that you use a BlockBlobStorage or a general-purpose v2 account instead.
FileStorage: FileStorage accounts are optimized for file shares (using the SMB protocol). They are not the best choice for maximizing throughput for large amounts of unstructured data.
StorageV2 (General-purpose v2): While StorageV2 accounts support block blobs, they also support other storage types (files, queues, tables). BlockBlobStorage accounts generally provide better performance for exclusively block blob workloads, which is the case here.
StorageV2 (Premium performance): A general-purpose v2 account with Premium performance supports premium page blobs rather than block blobs, so it does not deliver the high-throughput, low-latency block blob performance required here; premium block blob performance requires a BlockBlobStorage account.
Storage Service: Blob
Why it’s the best choice:
Large, Unstructured Data: Blob storage is designed for storing large amounts of unstructured data, such as text or binary data, which aligns with the app’s requirements.
High Throughput: Blob storage, especially in BlockBlobStorage accounts, is optimized for high throughput.
Immutability: Blob storage supports immutability policies at the blob or container level.
Why other options are less suitable:
File: File storage is for file shares accessed via SMB. It’s not the best option for maximizing throughput for large amounts of unstructured data.
Table: Table storage is a NoSQL key-value store. It’s not suitable for storing large amounts of unstructured data or for maximizing throughput.
Implementation Details:
Create a BlockBlobStorage Account: When creating the storage account in Azure, choose BlockBlobStorage as the account type.
Create a Container: Within the storage account, create a container to store your blobs.
Configure Immutability:
You can set a time-based retention policy at the container level or on individual blobs.
Configure the policy to prevent modifications and deletions for one year.
Upload Data as Block Blobs: Use Azure Storage SDKs, AzCopy, or other tools to upload your data to the container as block blobs.
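A minimal sketch of those steps with the Azure Python SDKs; the subscription, resource group, account, container, and blob names are placeholders, and the immutability policy is applied through the management plane (azure-mgmt-storage) while the upload uses the data plane (azure-storage-blob):

```python
# Sketch: create a container, lock it with a one-year time-based retention
# (WORM) policy, and upload data as a block blob. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import BlobContainer, ImmutabilityPolicy
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()

# Management plane: create the container and apply a 365-day retention policy.
mgmt = StorageManagementClient(credential, "<subscription-id>")
mgmt.blob_containers.create(
    "rg-app-data", "appblockblobacct", "appdata", BlobContainer()
)
mgmt.blob_containers.create_or_update_immutability_policy(
    "rg-app-data",                 # hypothetical resource group
    "appblockblobacct",            # hypothetical BlockBlobStorage account
    "appdata",                     # hypothetical container
    parameters=ImmutabilityPolicy(immutability_period_since_creation_in_days=365),
)

# Data plane: upload the data as a block blob. The caller needs a data-plane
# role such as Storage Blob Data Contributor on the account.
blob_service = BlobServiceClient(
    account_url="https://appblockblobacct.blob.core.windows.net",
    credential=credential,
)
container = blob_service.get_container_client("appdata")
with open("data.bin", "rb") as data:
    container.upload_blob(name="data.bin", data=data, overwrite=False)
```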
HOTSPOT
You need to recommend an Azure Storage account configuration for two applications named Application1 and Application2.
The configuration must meet the following requirements:
- Storage for Application1 must provide the highest possible transaction rates and the lowest possible latency.
- Storage for Application2 must provide the lowest possible storage costs per GB.
- Storage for both applications must be optimized for uploads and downloads.
- Storage for both applications must be available in the event of a datacenter failure.
What should you recommend? To answer, select the appropriate options in the answer area NOTE: Each correct selection is worth one point
Answer Area
Application1:
BlobStorage with Standard performance, Hot access tier, and Read-access geo-redundant storage (RA-GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Premium performance and Locally-redundant storage (LRS) replication
General purpose v2 with Standard performance, Hot access tier, and Locally-redundant storage (LRS) replication
Application2:
BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Standard performance and Read-access geo-redundant storage (RA-GRS) replication
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication
Answer Area
Application1:
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
Application2:
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication
Let’s break down the requirements and analyze the options for each application.
Application 1 Requirements: Highest Performance & Lowest Latency
Highest possible transaction rates and the lowest possible latency: This strongly indicates the need for Premium performance. Premium performance storage accounts are backed by Solid-State Drives (SSDs), which provide significantly higher throughput and lower latency compared to Standard performance accounts backed by Hard Disk Drives (HDDs).
Storage for both applications must be optimized for uploads and downloads: Both Premium and Standard performance accounts can be optimized for uploads and downloads depending on the account type and access tier. However, Premium performance inherently provides better performance for these operations.
Storage for both applications must be available in the event of a datacenter failure: This requires some form of redundancy beyond Locally-redundant storage (LRS). Zone-redundant storage (ZRS), Geo-redundant storage (GRS), and Read-access geo-redundant storage (RA-GRS) provide protection against datacenter failures. For the highest performance requirement, Zone-redundant storage (ZRS) is often preferred over GRS/RA-GRS because replication is within the same Azure region, minimizing latency compared to geo-replication.
Considering these points, let’s evaluate the options for Application 1:
BlobStorage with Standard performance, Hot access tier, and Read-access geo-redundant storage (RA-GRS) replication: Standard performance is not for highest transaction rates and lowest latency. Incorrect.
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication: Correct. BlockBlobStorage is suitable for general blob storage and optimized for uploads/downloads. Premium performance provides the highest transaction rates and lowest latency. ZRS provides availability within a region across availability zones, protecting against datacenter failures within that region while minimizing latency impact compared to geo-replication.
General purpose v1 with Premium performance and Locally-redundant storage (LRS) replication: General purpose v1 accounts do not support Premium performance for block blobs. Incorrect.
General purpose v2 with Standard performance, Hot access tier, and Locally-redundant storage (LRS) replication: Standard performance is not for highest transaction rates and lowest latency. LRS does not protect against datacenter failure. Incorrect.
Application 2 Requirements: Lowest Storage Cost
Storage for Application2 must provide the lowest possible storage costs per GB: This indicates the need for Standard performance and the Cool access tier. The Cool access tier is designed for infrequently accessed data and offers lower storage costs compared to the Hot access tier, although it has higher transaction costs.
Storage for both applications must be optimized for uploads and downloads: Standard performance accounts with the Cool access tier still handle uploads and downloads, although performance is lower than Premium. The Cool tier is designed for infrequent access, so very frequent access would raise transaction costs and could undercut the overall savings. However, because the stated requirement is the lowest storage cost per GB, the Cool tier is the right choice.
Storage for both applications must be available in an event of datacenter failure: This requires geo-redundancy. Geo-redundant storage (GRS) or Read-access geo-redundant storage (RA-GRS) are suitable. RA-GRS offers read access to the secondary region, which can be beneficial in disaster recovery scenarios and is often a good balance of cost and availability. GRS is also acceptable and might be marginally cheaper than RA-GRS, but RA-GRS provides immediate read access in the secondary region if a failover occurs.
Considering these points, let’s evaluate the options for Application 2:
BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication: Correct. BlobStorage is suitable. Standard performance and Cool access tier minimize storage costs. GRS provides geo-redundancy for datacenter failure.
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication: Premium performance is expensive and not for lowest cost. Incorrect.
General purpose v1 with Standard performance and Read-access geo-redundant storage (RA-GRS) replication: General purpose v1 is older and less feature-rich than v2. While Standard performance and RA-GRS are acceptable, using v2 is generally recommended for new deployments. Not the best option compared to using v2.
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication: Correct and potentially slightly better than option 1. General purpose v2 accounts are recommended for most new storage scenarios and offer all storage services. Standard performance and Cool access tier provide low storage costs. RA-GRS provides geo-redundancy and read access in the secondary region, offering a good balance of cost and availability. Using General Purpose v2 is generally preferred over dedicated BlobStorage accounts for new deployments as it’s more versatile.
While both option 1 and option 4 for Application 2 are valid in meeting the core requirements, option 4 using General purpose v2 is generally considered best practice for new deployments due to its broader capabilities and Microsoft’s recommendation. RA-GRS is also often preferred over GRS for a slight cost increase, gaining read access to the secondary region in case of disaster.
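As a minimal sketch (azure-mgmt-storage Python SDK; the subscription, resource group, account names, and region are placeholders), the two recommended accounts could be provisioned like this:

```python
# Sketch: provision the recommended accounts for Application1 and Application2.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

credential = DefaultAzureCredential()
client = StorageManagementClient(credential, "<subscription-id>")

# Application1: premium block blobs with zone-redundant replication.
client.storage_accounts.begin_create(
    "rg-storage",                 # hypothetical resource group
    "app1premiumzrs",             # hypothetical account name
    StorageAccountCreateParameters(
        sku=Sku(name="Premium_ZRS"),
        kind="BlockBlobStorage",
        location="westeurope",
    ),
).result()

# Application2: general-purpose v2, Cool default access tier, RA-GRS replication.
client.storage_accounts.begin_create(
    "rg-storage",
    "app2coolragrs",
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_RAGRS"),
        kind="StorageV2",
        location="westeurope",
        access_tier="Cool",
    ),
).result()
```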
HOTSPOT
Your company develops a web service that is deployed to an Azure virtual machine named VM1. The web service allows an API to access real-time data from VM1.
The current virtual machine deployment is shown in the Deployment exhibit. (Click the Deployment tab).
VNet1: The overall virtual network encompassing the subnets below.
Subnet1: Contains two virtual machines, VM1 and VM2.
ProdSubnet: A second subnet in VNet1.
The chief technology officer (CTO) sends you the following email message: “Our developers have deployed the web service to a virtual machine named VM1. Testing has shown that the API is accessible from VM1 and VM2. Our partners must be able to connect to the API over the Internet. Partners will use this data in applications that they develop.”
You deploy an Azure API Management (APIM) service. The relevant API Management configuration is shown in the API exhibit. (Click the API tab.)
Virtual Network:
Off
External (selected)
Internal
Location: West Europe
Virtual network: VNet1
Subnet: ProdSubnet
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements
The API is available to partners over the Internet.
The APIM instance can access real-time data from VM1.
A VPN gateway is required for partner access.
Statements
The API is available to partners over the Internet. Yes
The APIM instance can access real-time data from VM1. Yes
A VPN gateway is required for partner access. No
Explanation:
- The API is available to partners over the Internet. - Yes
Why? The API Management (APIM) instance is configured with a Virtual Network setting of External. This means that the APIM instance is deployed with a public IP address and is accessible from the internet. Partners can access the API through the APIM gateway’s public endpoint.
- The APIM instance can access real-time data from VM1. - Yes
Why? The APIM instance is deployed into the ProdSubnet of VNet1, the same virtual network that contains VM1 (in Subnet1). Because both subnets belong to the same virtual network, the APIM instance can communicate with VM1 over the private network and can therefore access the real-time data exposed by the web service running on VM1.
- A VPN gateway is required for partner access. - No
Why? Partners access the API through the APIM instance’s public endpoint, which is exposed to the internet because of the External Virtual Network setting. A VPN gateway is used for creating secure site-to-site or point-to-site connections between an on-premises network (or a single computer) and an Azure virtual network. It’s not needed when accessing a public endpoint.
Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.
You plan to move all the virtual machines to Azure.
You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.
What should you use to make the recommendation?
Azure Cost Management
Azure Pricing calculator
Azure Migrate
Azure Advisor
Let’s analyze each option in the context of the question:
Azure Cost Management: Azure Cost Management is a tool used to analyze, manage, and optimize your Azure spending after you have resources deployed in Azure. It helps you understand your costs, set budgets, and identify cost optimization opportunities within your existing Azure environment. It does not directly help in assessing your on-premises VMware environment and recommending Azure VM sizes for migration.
Azure Pricing calculator: The Azure Pricing calculator is a tool for estimating the cost of Azure services before you deploy them. You can manually input specifications for Azure VMs (like size, operating system, region) to get an estimated cost. While useful for budgeting, it requires you to already know what VM sizes you need. It does not automatically analyze your existing VMware VMs to provide sizing recommendations. Manually inputting details for 300 VMs would be a lot of administrative effort and prone to error.
Azure Migrate: Azure Migrate is a dedicated service in Azure designed specifically for migrating on-premises workloads to Azure. It provides tools for:
Discovery: It can discover your on-premises VMware VMs, their configurations, and performance utilization.
Assessment: Based on the discovery, Azure Migrate can analyze your VMs and provide recommendations for:
Azure VM sizes: Suggesting appropriate Azure VM sizes (e.g., Standard_D4s_v3) that match the resource requirements of your on-premises VMs.
Number of Azure VMs: Determining how many Azure VMs are needed to host your workloads.
Cost estimations: Providing cost estimates for running the recommended Azure VMs.
Migration: Facilitating the actual migration of VMs to Azure.
Azure Migrate is designed to minimize administrative effort by automating the discovery and assessment process, providing data-driven recommendations for Azure VM sizing.
Azure Advisor: Azure Advisor is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. It provides recommendations across cost, security, reliability, performance, and operational excellence. While Azure Advisor can suggest optimizations for existing Azure VMs, it is not designed to assess on-premises VMware environments and recommend Azure VM sizing for migration. It operates within the Azure environment, not for pre-migration planning from on-premises.
Conclusion:
Azure Migrate is the only service specifically built to address the scenario described in the question. It is designed to assess your on-premises VMware environment and provide recommendations for Azure VM sizing and count, minimizing administrative effort in the process.
Final Answer: Azure Migrate
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.
Solution: Use Azure Advisor to analyze the network traffic.
Does this meet the goal?
A. Yes
B. No
Understanding the Goal
The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which would indicate a network connectivity issue.
Analyzing the Proposed Solution
Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show network traffic for VMs. It also does not provide any insight into on-prem network traffic.
Evaluation
Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.
The Correct Solution
The tools that would be best suited for this scenario would be:
Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.
Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.
On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.
Does the Solution Meet the Goal?
The answer is:
B. No
You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.
A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices.
You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.
What should you include in the recommendation?
a Recovery Services vault and Azure Backup
an Azure file share and Azure File Sync
Azure blob containers and Azure File Sync
a Recovery Services vault and Windows Server Backup
Understanding the Requirements
Here’s a breakdown of the key requirements:
On-premises File Server: A file server (VM1) in the Toronto branch office.
File Access from all Offices: Users in all branch offices access shared files on VM1.
High Availability: Need a solution for quick access to the files if the Toronto office becomes unavailable.
Quick Access: Minimize latency for users accessing files if the Toronto office is down.
Analyzing the Options
Let’s evaluate each option based on its suitability:
a Recovery Services vault and Azure Backup:
Pros: Provides backup and restore capabilities for VMs.
Cons: Restoring a VM from backup can be time-consuming, and does not offer a way for users to directly connect to shares if VM1 is down. This also does not meet the quick access requirement.
Suitability: Not suitable for quick access if the primary VM is down.
an Azure file share and Azure File Sync:
Pros: Azure Files provides a cloud-based SMB share, Azure File Sync can sync the data from on-premises to Azure, and can be set up in multiple locations for quick access during an outage.
Cons: Requires configuring Azure File Sync and setting up a caching server in other locations.
Suitability: Highly suitable, meets all requirements.
Azure blob containers and Azure File Sync:
Pros: Azure Blob Storage provides scalable cloud storage.
Cons: Azure File Sync does not sync with blob storage. Not suitable because it does not meet the requirement for local SMB share, and is not the correct data store for Azure File Sync.
Suitability: Not suitable for the scenario, and a combination of services that do not work well together.
a Recovery Services vault and Windows Server Backup
Pros: Provides backup and restore capabilities for VMs and their data.
Cons: Restoring from a backup is a lengthy process, and does not meet the requirement for quick access to shares.
Suitability: Not suitable because it is not designed for quick access during an outage.
The Correct Recommendation
Based on the analysis, the correct solution is:
an Azure file share and Azure File Sync
Explanation
Azure File Share: Provides a cloud-based SMB share that is highly available and allows for access from multiple locations.
Azure File Sync: Allows for continuous syncing of files from the on-premises file server to the Azure File Share.
Caching Servers: With Azure File Sync, other on-premises servers can be set up as caching endpoints, enabling quick, local access to the data even if the Toronto office is offline. This meets the requirement of fast access to files during a Toronto branch outage.
Why Other Options are Incorrect
Recovery Services and Azure Backup: Does not provide immediate access to files and has downtime due to the restoration process.
Blob Containers with Azure File Sync: Does not work because File Sync is designed to be used with Azure File Shares, not Blob Containers.
Recovery Services and Windows Server Backup: Does not provide immediate access to files and has downtime due to the restoration process.
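As a small illustration, the cloud endpoint used by Azure File Sync is simply an Azure file share; a minimal sketch of creating it with the azure-storage-file-share Python package (the connection string and share name are placeholders):

```python
# Sketch: create the Azure file share that Azure File Sync will use as its
# cloud endpoint. The connection string and share name are placeholders.
from azure.storage.fileshare import ShareServiceClient

service = ShareServiceClient.from_connection_string(
    "<storage-account-connection-string>"
)
share_client = service.create_share("branch-office-files", quota=1024)  # quota in GiB
print(share_client.share_name)
```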
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.
Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic.
Does this meet the goal?
A. Yes
B. No
Understanding the Goal
The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, indicating a network connectivity problem.
Analyzing the Proposed Solution
Azure Network Watcher: A service that allows you to monitor and diagnose network issues.
IP Flow Verify: A feature within Network Watcher that lets you specify a source and destination IP address, port, and protocol and determine whether the network security group (NSG) rules will allow or deny that traffic. This shows which rule applies and whether a rule is causing the problem.
Evaluation
Azure Network Watcher with the IP flow verify feature is indeed the correct tool for diagnosing network traffic and connectivity issues.
Does the Solution Meet the Goal?
The answer is:
A. Yes
Explanation
Azure Network Watcher: Provides tools to monitor, diagnose, and gain insights into your network.
IP Flow Verify: Allows you to check if a packet is allowed or denied between a source and destination based on the current network security rules.
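A minimal sketch of running IP flow verify with the azure-mgmt-network Python SDK; the subscription, Network Watcher names, VM resource ID, IP addresses, and ports are placeholders:

```python
# Sketch: ask Network Watcher whether a specific packet to VM1 would be
# allowed or denied, and which NSG rule makes that decision.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

credential = DefaultAzureCredential()
client = NetworkManagementClient(credential, "<subscription-id>")

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-prod"
    "/providers/Microsoft.Compute/virtualMachines/VM1"
)   # hypothetical VM experiencing connectivity issues

result = client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG",             # default Network Watcher resource group
    "NetworkWatcher_westeurope",    # Network Watcher for the VM's region
    VerificationIPFlowParameters(
        target_resource_id=vm_id,
        direction="Inbound",
        protocol="TCP",
        local_port="443",
        remote_port="50000",
        local_ip_address="10.0.1.4",      # VM1's private IP (placeholder)
        remote_ip_address="10.10.0.5",    # on-premises source (placeholder)
    ),
).result()

print(result.access, result.rule_name)   # e.g. "Deny" plus the matching NSG rule
```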