https://infraexam.com/microsoft/az-304-microsoft-azure-architect-design/az-304-part-07/
Overview. General Overview
Litware, Inc. is a medium-sized finance company.
Overview. Physical Locations
Litware has a main office in Boston.
Existing Environment. Identity Environment
The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.
Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.
The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by using the Azure portal, they must connect from a hybrid Azure AD-joined device.
Existing Environment. Azure Environment
Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).
The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.
Existing Environment. On-premises Environment
The on-premises network of Litware contains the resources shown in the following table.
Name | Type | Configuration
SERVER1, SERVER2, SERVER3 | Ubuntu 18.04 virtual machines hosted on Hyper-V | The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
SERVER10 | Server that runs Windows Server 2016 | The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.
Existing Environment. Network Environment
Litware has ExpressRoute connectivity to Azure.
Planned Changes and Requirements. Planned Changes
Litware plans to implement the following changes:
✑ Migrate DB1 and DB2 to Azure.
✑ Migrate App1 to Azure virtual machines.
✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.
Planned Changes and Requirements. Authentication and Authorization Requirements
Litware identifies the following authentication and authorization requirements:
✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).
✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.
✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.
✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.
✑ RBAC roles must be applied at the highest level possible.
Planned Changes and Requirements. Resiliency Requirements
Litware identifies the following resiliency requirements:
✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:
- Maintain availability if two availability zones in the local Azure region fail.
- Fail over automatically.
- Minimize I/O latency.
✑ App1 must meet the following requirements:
- Be hosted in an Azure region that supports availability zones.
- Be hosted on Azure virtual machines that support automatic scaling.
- Maintain availability if two availability zones in the local Azure region fail.
Planned Changes and Requirements. Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.
✑ App1 must not share physical hardware with other workloads.
Planned Changes and Requirements. Business Requirements
Litware identifies the following business requirements:
✑ Minimize administrative effort.
✑ Minimize costs.
You plan to migrate App1 to Azure. The solution must meet the authentication and authorization requirements.
Which type of endpoint should App1 use to obtain an access token?
Azure Instance Metadata Service (IMDS)
Azure AD
Azure Service Management
Microsoft identity platform
The correct answer is: Azure Instance Metadata Service (IMDS)
Explanation:
Managed Identities and IMDS:
Why it’s the right choice: The requirements state that “To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app”. Managed identities for Azure resources provide an identity that applications running in an Azure VM can use to access other Azure resources. The Azure Instance Metadata Service (IMDS) is the service that provides this identity information to the VM.
How it works:
You enable a managed identity for the virtual machines hosting App1.
Within the App1 code, you make a request to the IMDS to obtain an access token.
The IMDS endpoint, available inside each Azure VM, returns a token that can be used to access other Azure resources (e.g., storage accounts, Key Vault) without storing any credentials in the application code. The token is rotated automatically by the managed identity service.
This token is then passed to the destination service to provide access, after verifying the token is valid with Azure AD.
Security Benefits: Using managed identities and IMDS avoids storing sensitive credentials in configuration files, environment variables, or the application code itself. This is a security best practice.
Relevance to the scenario: It directly fulfills the requirement to use managed identities for accessing Azure resources from App1.
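The token request described above can be sketched in a few lines. The IMDS address, api-version, and the mandatory `Metadata: true` header are documented values; the choice of resource URI (Azure Storage) is an assumption for App1.

```python
# Sketch: how App1 could request an access token from the Azure Instance
# Metadata Service (IMDS) for its managed identity. The resource URI
# (Azure Storage) is an assumption for this scenario.
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource: str) -> Request:
    """Build the HTTP request IMDS expects for a managed-identity token."""
    query = urlencode({"api-version": "2018-02-01", "resource": resource})
    # IMDS rejects requests that lack this header, which blocks SSRF-style
    # attacks that cannot set custom headers.
    return Request(f"{IMDS_TOKEN_ENDPOINT}?{query}", headers={"Metadata": "true"})

def get_token(resource: str = "https://storage.azure.com/") -> str:
    # Only succeeds when run inside an Azure VM with a managed identity enabled.
    with urlopen(build_imds_token_request(resource), timeout=5) as resp:
        return json.load(resp)["access_token"]
```

Note that `get_token` only works from inside an Azure VM; no Azure AD endpoint or stored secret appears anywhere in the code, which is exactly the point of the managed identity.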
Why Other Options are Incorrect:
Azure AD: While Azure AD authenticates users and apps, App1 itself does not need to perform a standard Azure AD login; the managed identity handles that on its behalf. The application requests its token from IMDS rather than calling the Azure AD endpoint directly.
Azure Service Management: This is the deprecated classic (pre-ARM) management endpooint of Azure. It is not used for application-level token retrieval.
Microsoft identity platform: This is the umbrella identity platform, but it is not the endpoint a VM-hosted app calls when using a managed identity. App1 should request its token from IMDS, which fronts the managed identity.
In Summary:
The correct endpoint for App1 to obtain an access token is the Azure Instance Metadata Service (IMDS). IMDS is designed specifically for providing applications within Azure VMs access tokens that are used for accessing other Azure services when used with a managed identity.
Important Notes for the AZ-305 Exam:
Managed Identities: You MUST understand how managed identities work and how to use them. Be familiar with the two types of managed identity: System-assigned and User-assigned.
Azure Instance Metadata Service (IMDS): Know the purpose of IMDS and how it provides information about the Azure VM instance (including access tokens for managed identities).
Secure Authentication: Understand the security benefits of using managed identities instead of embedding secrets in code or configuration files.
Authentication Scenarios: Be able to recognize different authentication scenarios (user login vs. application access) and know which Azure service to use to achieve the required access pattern.
Service Principals: Be familiar with the concept of service principals and their relationship with application identity, but understand that a service principal is not directly needed here since the managed identity service creates and manages the service principals.
Key Takeaway: For applications running in Azure VMs that need to access other Azure resources, managed identities via the Azure IMDS are the recommended approach. The application does not authenticate with Azure AD directly; it gets a token from IMDS.
HOTSPOT
You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
To register the users for Azure MFA, use:
Azure AD Identity Protection
Security defaults in Azure AD
Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure:
Grant control in capolicy1
Session control in capolicy1
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant
Correct Answers:
To register the users for Azure MFA, use: Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure: Grant control in capolicy1
Explanation:
Per-User MFA in the MFA Management UI:
Why it’s the right choice: Per-user MFA is the traditional way of enabling MFA on individual user accounts and gives more granular control than security defaults. The users must be registered for MFA before a conditional access policy can enforce it.
How it Works: Enabling per-user MFA registers each required user for multi-factor authentication. This method is ideal when you want direct control over a user’s MFA status, or when security defaults are not enabled.
Relevance to the scenario: The requirement specifies that “users must authenticate by using Azure MFA when they sign in to the Azure portal.” The first step is to register the users.
Grant Control in capolicy1:
Why it’s the right choice: The requirements specified that there is a Conditional Access Policy (capolicy1), therefore this is where we must configure the requirement to enforce MFA. Within the Grant controls of the conditional access policy you must require MFA to satisfy the requirement.
How it works: You will need to modify capolicy1 in order to ensure that all the required conditions are satisfied before being granted access to Azure Portal. In addition to enabling MFA, you may also need to specify other conditions, such as device type or location, to fulfill the full requirement for the conditional access policy.
Relevance to the scenario: The conditional access policy enforces access control based on the authentication and authorization rules specified in the requirements, which also specify that “users…must connect from a hybrid Azure AD-joined device”. This conditional access policy will enforce the requirement for MFA.
Why Other Options are Incorrect:
To register the users for Azure MFA, use: Azure AD Identity Protection: Azure AD Identity Protection is used to detect and investigate risky sign-in behavior and to configure risk-based conditional access policies. It’s not the primary mechanism for registering users for MFA. While Identity Protection does have an MFA registration policy, it does not enable MFA, but only prompts a user to register for MFA.
To register the users for Azure MFA, use: Security defaults in Azure AD: Security defaults is a tenant-wide setting that turns on MFA and several other protections for every user. Because it cannot be scoped and cannot be combined with custom conditional access policies such as capolicy1, it does not provide the fine-grained control needed here and is not the correct answer.
To enforce Azure MFA authentication, configure: Session control in capolicy1: Session controls in a conditional access policy are used to control user browser sessions, not to enforce MFA requirements, and are therefore not the correct mechanism to solve this requirement.
To enforce Azure MFA authentication, configure: Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant: Identity protection is a good tool for detecting risk and automatically responding to high risk sign-in attempts. It does not directly enable MFA for all user logins, but rather responds to high risk sign-in attempts, therefore this is not the correct service.
In Summary:
The best approach is to first enable Per-user MFA, and then enforce MFA through the Conditional Access Policy (capolicy1).
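The enforcement half of the answer can be visualized with the Microsoft Graph conditionalAccessPolicy schema. The control names ("mfa", "domainJoinedDevice") are documented builtInControls values; the surrounding fields are trimmed to the relevant parts, so treat this as an illustrative shape rather than a complete policy.

```python
# Sketch: what the grant controls of capolicy1 might look like in the
# Microsoft Graph conditionalAccessPolicy schema (trimmed to the parts
# relevant to this question).
capolicy1_grant_controls = {
    "grantControls": {
        # "AND" requires every listed control to be satisfied at sign-in:
        # Azure MFA *and* a hybrid Azure AD-joined device.
        "operator": "AND",
        "builtInControls": ["mfa", "domainJoinedDevice"],
    }
}

def required_controls(policy: dict) -> set:
    """Return the set of built-in grant controls the policy enforces."""
    return set(policy["grantControls"]["builtInControls"])
```

The "AND" operator is the key detail: with "OR", a user could satisfy the policy with MFA alone, which would not meet the hybrid-join requirement.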
Important Notes for the AZ-305 Exam:
Azure MFA: Know how to enable and enforce MFA for users. Be familiar with both Per-user MFA, and the security defaults settings in Azure AD.
Conditional Access Policies: You MUST know how conditional access policies work and how to configure access rules (including MFA requirements).
Grant Controls: Understand the use of grant controls to enforce authentication requirements.
Azure AD Identity Protection: Understand how Identity Protection works, but be aware it is for risk-based policies, and is not intended for setting up MFA on a user account, or enforcing MFA on logins.
Hybrid Azure AD Join: Be familiar with the benefits and requirements for Hybrid Azure AD-joined devices and how to use them in conjunction with conditional access policies.
Service Selection: Be able to pick the correct service for each task, and understand that setting up MFA and enforcing MFA are distinct steps that require different tools.
Azure Environment -
Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).
The litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.
On-Premises Environment -
The on-premises network of Litware contains the resources shown in the following table.
Network Environment -
Litware has ExpressRoute connectivity to Azure.
Planned Changes and Requirements
Litware plans to implement the following changes:
Migrate DB1 and DB2 to Azure.
Migrate App1 to Azure virtual machines.
Migrate the external storage used by App1 to Azure Storage.
Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.
Authentication and Authorization Requirements
Litware identifies the following authentication and authorization requirements:
Only users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using
Azure Multi-Factor Authentication (MFA).
The Network Contributor built-in RBAC role must be used to grant permissions to the network administrators for all the virtual networks in all the Azure subscriptions.
To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.
RBAC roles must be applied at the highest level possible.
Resiliency Requirements -
Litware identifies the following resiliency requirements:
Once migrated to Azure, DB1 and DB2 must meet the following requirements:
Maintain availability if two availability zones in the local Azure region fail.
Fail over automatically.
Minimize I/O latency.
App1 must meet the following requirements:
Be hosted in an Azure region that supports availability zones.
Be hosted on Azure virtual machines that support automatic scaling.
Maintain availability if two availability zones in the local Azure region fail.
Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.
App1 must NOT share physical hardware with other workloads.
Business Requirements -
Litware identifies the following business requirements:
Minimize administrative effort.
Minimize costs.
After you migrate App1 to Azure, you need to enforce the data modification requirements to meet the security and compliance requirements.
What should you do?
A. Create an access policy for the blob service.
B. Implement Azure resource locks.
C. Create Azure RBAC assignments.
D. Modify the access level of the blob service
Which option is correct, why, and what should you note for the AZ-305 exam?
The Goal
As before, the primary goal is to enforce this requirement:
“Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.”
Evaluating the Options Based on Proximity
Let’s analyze each option again:
A. Create an access policy for the blob service.
Why it’s closest to being correct: While it doesn’t directly enforce immutability, access policies do let you control write access. By carefully constructing an access policy, you could, in theory, grant write access for a specific period or to a particular user or group, and then restrict it later to discourage further modification. Remember, though, that this does not guarantee immutability; it is only an access restriction on the data.
Why it’s still not ideal: Access policies do not inherently prevent modification. A user or process with the appropriate permissions could still modify the data, and policies like this quickly become complex to manage.
B. Implement Azure resource locks.
Why it’s NOT a good fit: As mentioned previously, resource locks focus on preventing deletion or changes to the resources, not the data within the resources. This is not even remotely related to the requirement.
C. Create Azure RBAC assignments.
Why it’s NOT a good fit: Like resource locks, RBAC controls the permissions of who can do what with the Azure resources. RBAC does not provide a mechanism for ensuring immutability of the data.
D. Modify the access level of the blob service.
Why it’s NOT a good fit: Access levels (private, blob, container) control anonymous read access to the storage account’s containers and blobs, not how the data within them is modified.
The Closest Correct Answer
Given the limited options, A. Create an access policy for the blob service is the closest to the correct approach, although it still falls short.
Why? Out of all the given answers, it comes nearest to addressing the prompt: access policies at least restrict who can write, while the other options do not address data modification at all.
Important Note for the AZ-305 Exam
The main takeaway here is that the exam will sometimes give you a multiple-choice question where the best answer isn’t provided. This forces you to choose the least incorrect option.
Here’s what you need to remember for these types of questions:
Understand Core Concepts: Have a strong grasp of the core Azure services, like Storage, RBAC, etc. and how they function.
Identify What’s Missing: If the correct feature is not an option, identify what comes closest.
Consider the Intent: What is the requirement asking? Then look for the answer that best aligns with that intent. In this case, the intent is to prevent modification of data.
Process of Elimination: Discard answers that are completely irrelevant.
A scenario where option A could approximate the requirement (though it does not truly satisfy it):
Access policies for data immutability could look like this:
Grant Write Access Initially: A user/process with write access writes the data
Restrict Write Access: Access policies would restrict write access to all but users/groups responsible for administration of the data.
Create New Policy: After the 3-year window, an access policy could be created to provide read-only access.
This method has some issues:
Complexity: Managing access policies like this is complex and is not scalable.
Not Truly Immutable: Even with all that complexity, a user with the right access can still delete and modify the data.
In summary:
A. Create an access policy for the blob service is the closest of the given options. The truly correct approach would have been an immutability policy (time-based retention) on the blob container, which is not offered among the answers. For the AZ-305 exam, it is important to choose the answer that is closest to correct, even when none is exactly right.
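For reference, the time-based retention settings that the true solution (an immutability policy on the blob container) would use look roughly like this. The property names mirror the Azure Storage management API's ImmutabilityPolicy and should be treated as assumptions to verify against the current API version; the 3-year window is expressed in days, as the API requires.

```python
# Sketch of the time-based retention properties an immutability policy on
# the App1 container would carry. Property names are assumed from the
# Azure Storage management API's ImmutabilityPolicy shape.
RETENTION_YEARS = 3

def immutability_policy_properties(years: int = RETENTION_YEARS) -> dict:
    return {
        # Existing blobs cannot be modified or deleted until the retention
        # interval elapses...
        "immutabilityPeriodSinceCreationInDays": years * 365,
        # ...while new data can still be appended/written, matching the
        # "new data can be written" half of the requirement.
        "allowProtectedAppendWrites": True,
    }
```

`allowProtectedAppendWrites` is what reconciles the two halves of the requirement: writes of new data continue, modification of committed data is blocked.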
HOTSPOT
You plan to migrate App1 to Azure.
You need to recommend a storage solution for App1 that meets the security and compliance requirements.
Which type of storage should you recommend, and how should you recommend configuring the storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
Premium page blobs
Premium file shares
Standard general-purpose v2
Configuration:
NFSv3
Large file shares
Hierarchical namespace
Understanding the Requirements
Let’s recap the key requirements for App1’s storage:
Security and Compliance: The most important part of the prompt is the requirement from the other questions:
“Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.” This is about data immutability.
“On-premises users and services must be able to access the Azure Storage account that will host the data in App1.”
“Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.”
Functionality:
The storage will be used by App1, likely as the application’s data store.
Analyzing the Options
Let’s look at each option and determine if they fit based on requirements:
Storage Account Type
Premium page blobs:
Pros: Page blobs are optimized for random read/write operations and high I/O, and are typically used for virtual machine disks, databases, and some specialized workloads.
Cons: Page blobs have no inherent immutability support, are not a natural fit for general application data, and can be more expensive than other storage types.
Suitability: Not suitable for this scenario.
Premium file shares:
Pros: Premium file shares are designed for high performance access with low latency.
Cons: Premium file shares lack immutability support and are significantly more expensive than standard storage.
Suitability: Not suitable for this scenario.
Standard general-purpose v2:
Pros: Cost-effective for storing general purpose data, supports immutability, multiple access tiers and various features needed in this scenario.
Cons: Less I/O performance than premium storage options.
Suitability: Very suitable for this scenario.
Configuration Options
NFSv3:
Pros: The Network File System (NFS) v3 protocol lets on-premises users and services mount the storage account like a standard file system.
Cons: Provides no immutability by itself and requires careful network configuration.
Suitability: Necessary for on-premises access to the storage account.
Large file shares:
Pros: Allows large file shares.
Cons: Does not provide immutability.
Suitability: Not relevant to the data immutability requirement.
Hierarchical namespace:
Pros: Provides a directory-style hierarchy over blob data that supports POSIX ACLs and fine-grained permissions management.
Cons: Hierarchical namespace is not related to immutable data.
Suitability: Not relevant to the data immutability requirement.
The Correct Choices
Based on the analysis above, here is how the hotspot should be answered:
Storage Account Type: Standard general-purpose v2
Configuration: NFSv3
Explanation
Standard general-purpose v2 is the best option because it allows for cost effective storage and immutability for the data.
NFSv3 is required for on-premises users to be able to access the storage account.
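The storage account shape the recommended answer implies can be sketched as the key fields of a Microsoft.Storage deployment. The property names follow the Azure storage resource provider but should be verified against the current API; the SKU is an illustrative placeholder. One detail worth noting: enabling NFSv3 on Azure Blob Storage also requires the hierarchical namespace.

```python
# Sketch: key fields of the storage account the answer implies, in the
# shape of a Microsoft.Storage deployment. Property names are assumptions
# to verify; the SKU is a placeholder.
app1_storage_account = {
    "kind": "StorageV2",                    # Standard general-purpose v2
    "sku": {"name": "Standard_LRS"},        # placeholder SKU for the sketch
    "properties": {
        "isNfsV3Enabled": True,             # on-premises access over NFSv3
        "isHnsEnabled": True,               # hierarchical namespace: an NFSv3 prerequisite
        "publicNetworkAccess": "Disabled",  # the public endpoint must be blocked
        "supportsHttpsTrafficOnly": False,  # NFSv3 traffic is not HTTPS
    },
}
```

The `publicNetworkAccess` setting ties this question back to the networking requirement: with the public endpoint disabled, access flows only through private connectivity.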
Important Notes for AZ-305
Immutability: Ensure you are familiar with Azure storage’s immutability features and how they are configured.
Storage Account Types: Understand the differences between the various storage account types (general-purpose v2, blob, premium), and when to use them.
Networking for Storage: Understand how to configure access for on-prem resources (Private Endpoints, VNET Service Endpoints, NFSv3).
Cost Optimization: Always consider cost optimization when selecting storage options.
Read the Entire Question: Read carefully for both explicit and implicit requirements. This scenario has explicit storage requirements, but the key driver carries over from the earlier questions: immutability.
HOTSPOT
You plan to migrate DB1 and DB2 to Azure.
You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.
What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose
Understanding the Requirements
Here are the key requirements we need to address for the database migration:
Resiliency:
“Maintain availability if two availability zones in the local Azure region fail.”
“Fail over automatically.”
“Minimize I/O latency.”
Business:
“Minimize administrative effort.”
“Minimize costs.”
Analyzing the Options
Let’s evaluate each option against these requirements:
Database Options
A single Azure SQL database:
Pros: Easy to set up, cost-effective for a single database, and can be deployed zone-redundantly.
Cons: Each single database is provisioned and billed individually, so hosting DB1 and DB2 separately costs more and takes more management effort than pooling them.
Suitability: A zone-redundant single database can meet the resiliency requirements, but running two of them does not minimize costs or administrative effort the way a shared pool does.
Azure SQL Managed Instance:
Pros: Provides an environment closer to the on-premises SQL Server engine, easier to lift-and-shift. Has HA and DR options.
Cons: More expensive than single Azure SQL database. Still requires more management for HA and DR in comparison to elastic pools.
Suitability: Could work, but does not meet the cost requirement compared to elastic pools.
An Azure SQL Database elastic pool:
Pros: Cost-effective for multiple databases, simplified management of resources, availability zone redundancy and cross-zone failover, can share resources across multiple DBs.
Cons: Less granular control over the SQL server engine compared to managed instances.
Suitability: Excellent for this scenario. It meets the multi-zone resiliency and automatic failover, while also enabling a cost-efficient design as it is a pool of databases rather than one database per server.
Service Tier Options
Hyperscale:
Pros: Very high scalability and performance, excellent for high-throughput workloads, excellent for business-critical databases.
Cons: Higher cost. Not designed for the given business requirement of cost efficiency.
Suitability: Not suitable for this scenario; it is not cost-efficient, and it does not offer lower latency than Business Critical.
Business Critical:
Pros: Designed for mission-critical applications. Highest I/O performance, and low latency. Provides automatic failover for multiple availability zones.
Cons: Higher cost compared to other service tiers.
Suitability: Highly suitable, as it provides the resiliency requirement while also meeting the I/O latency requirement.
General Purpose:
Pros: Cost effective option, balances price and I/O for general use databases.
Cons: Uses remote storage, so I/O performance and latency are noticeably worse than Business Critical.
Suitability: Not suitable, because it cannot meet the minimal I/O latency requirement.
The Correct Choices
Based on the analysis, here’s how the hotspot should be answered:
Database: An Azure SQL Database elastic pool
Service tier: Business Critical
Explanation
Elastic pool provides the best combination of cost efficiency, management ease, and resiliency when compared to a single database or managed instance.
Business Critical best addresses the requirements for availability zones, automatic failover and I/O latency.
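The combination the answer describes can be sketched as the key fields of a Microsoft.Sql elasticPools deployment. The SKU name and capacity are illustrative assumptions; `zoneRedundant` is the property that spreads replicas across availability zones so the pool survives zone failures and fails over automatically.

```python
# Sketch: key fields of the recommended elastic pool, in the shape of a
# Microsoft.Sql elasticPools deployment. SKU name and capacity are
# illustrative assumptions.
db_elastic_pool = {
    "sku": {
        "name": "BC_Gen5",           # Business Critical on Gen5 hardware (assumed)
        "tier": "BusinessCritical",  # local SSD replicas -> minimal I/O latency
        "capacity": 4,               # vCores shared by DB1 and DB2 (assumed)
    },
    "properties": {
        # Replicas are placed in multiple availability zones; failover
        # between them is automatic.
        "zoneRedundant": True,
    },
}
```

Sharing one pool between DB1 and DB2 is what satisfies the "minimize costs" business requirement relative to two separately provisioned databases.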
Important Notes for the AZ-305 Exam
Azure SQL Options: Thoroughly understand the differences between single Azure SQL Databases, Elastic Pools, and Managed Instances. Know the use cases for each.
Service Tiers: Be familiar with the performance and cost implications of each service tier: General Purpose, Business Critical, and Hyperscale.
Availability Zones and HA/DR: Be able to implement cross-AZ failover and understand the underlying concepts for HA and DR.
Cost vs. Performance: Be able to analyze the trade-offs between cost and performance, and how to choose the right configuration for a given workload.
Reading the Prompt Carefully: Make sure you understand every requirement so that all necessary concerns are addressed.
You plan to migrate App1 to Azure.
You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.
What should you include in the recommendation?
a private endpoint
a service endpoint that has a service endpoint policy
Azure public peering for an ExpressRoute circuit
Microsoft peering for an ExpressRoute circuit
Understanding the Requirements
Here are the key networking-related requirements:
Security:
“Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.”
Connectivity:
“On-premises users and services must be able to access the Azure Storage account that will host the data in App1.”
Existing Environment:
“Litware has ExpressRoute connectivity to Azure.”
Analyzing the Options
Let’s evaluate each option against these requirements:
a private endpoint
Pros: Provides a private IP address within the virtual network for the storage account, thus preventing public access, which meets the security requirement. Enables on-prem resources to connect via the private IP over the express route connection.
Cons: Can increase cost slightly, requires virtual network integration.
Suitability: Highly suitable. It meets the security requirement of preventing public access and allows on-premises users to access the storage account over the private network and ExpressRoute connection.
a service endpoint that has a service endpoint policy
Pros: Allows VNETs to access the storage account without exposing it to the public internet.
Cons: Does not allow for on-premises resources to access the storage account.
Suitability: Not suitable. Service endpoints secure traffic from Azure virtual networks only; on-premises traffic would still have to reach the storage account through its public endpoint.
Azure public peering for an ExpressRoute circuit
Pros: Can provide access to Azure public services, such as storage, via the ExpressRoute connection.
Cons: Does not block access from the public internet, which does not meet the security requirements.
Suitability: Not suitable; public peering (deprecated for new ExpressRoute circuits) still reaches storage over public IP addresses and does not prevent public access.
Microsoft peering for an ExpressRoute circuit
Pros: Allows private access to Azure resources, including Azure Storage.
Cons: Does not natively prevent access from the public internet. Requires additional configuration to do so.
Suitability: While Microsoft peering is the route that will be used by the resources to communicate via the express route, it is not a configuration that prevents public access.
The Correct Recommendation
Based on the analysis, the correct solution is:
a private endpoint
Explanation
Private endpoints provide a network interface for the storage account directly within a virtual network, so access is limited to resources on the private network. On-premises traffic reaches the private IP over the ExpressRoute circuit (private peering).
By using a private endpoint and disabling the public endpoint, you prevent access from the public internet, fulfilling the security requirement.
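As a rough Azure CLI sketch of this recommendation (resource names are hypothetical, and the commands assume an authenticated CLI session against the target subscription):

```shell
# Create a private endpoint for the storage account's blob service
# (rg-app1, stapp1data, vnet-app1, and snet-private are placeholder names)
storage_id=$(az storage account show \
  --resource-group rg-app1 --name stapp1data --query id -o tsv)

az network private-endpoint create \
  --resource-group rg-app1 \
  --name pe-stapp1data \
  --vnet-name vnet-app1 --subnet snet-private \
  --private-connection-resource-id "$storage_id" \
  --group-id blob \
  --connection-name pe-conn-stapp1data

# Block the storage account's public endpoint entirely
az storage account update \
  --resource-group rg-app1 --name stapp1data \
  --public-network-access Disabled
```

A matching private DNS zone (privatelink.blob.core.windows.net) is normally linked to the VNet so that the storage hostname resolves to the private IP for both Azure and on-premises clients.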
Why other options are not correct:
Service endpoints only restrict access to traffic from selected virtual networks; they do not stop on-premises systems from going through the storage account's public endpoint.
Public peering is used to access public Azure services, it does not fulfill the security requirements of preventing access from the public internet.
Microsoft peering lets on-premises systems reach Azure services' public endpoints over the ExpressRoute circuit, but it does not prevent those endpoints from also being reachable from the internet. A private endpoint is needed to block the public endpoint.
Important Notes for the AZ-305 Exam
Private Endpoints vs Service Endpoints: Know the fundamental differences. Service endpoints keep traffic from Azure virtual networks on the Azure backbone but do not block public access; private endpoints give the service a private IP address inside your VNet, are reachable from on-premises over ExpressRoute or VPN, and allow the public endpoint to be disabled.
ExpressRoute Peering: Understand the differences between Microsoft, Azure public and private peering.
Security and Compliance: Prioritize solutions that align with security requirements. Blocking public access is a common ask.
Read Requirements Carefully: Ensure you meet all requirements including the networking and security.
DRAG DROP
You need to configure an Azure policy to ensure that the Azure SQL databases have TDE enabled. The solution must meet the security and compliance requirements.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
Create an Azure policy definition that uses the deployIfNotExists effect.
Create a user-assigned managed identity.
Invoke a remediation task.
Create an Azure policy assignment.
Create an Azure policy definition that uses the Modify effect.
Answer Area
Understanding the Goal
The goal is to use Azure Policy to automatically enable TDE on all Azure SQL databases within the scope of the policy.
Key Concepts
Azure Policy: Allows you to create, assign, and manage policies that enforce rules across your Azure resources.
Policy Definition: Specifies the conditions that must be met and the actions to take if the conditions are not met.
Policy Assignment: Applies the policy definition to a specific scope (subscription, resource group, etc.).
deployIfNotExists Effect: This policy effect will deploy an ARM template if the resource does not have the configuration (TDE enabled).
Modify Effect: This effect adds, updates, or removes resource properties or tags during create or update operations to bring the resource into compliance.
Remediation Task: A process for correcting resources that are not compliant with the policy.
User-Assigned Managed Identity: An identity object in Azure which allows for RBAC permissions and avoids the need for storing credentials for an application.
Steps in the Correct Sequence
Here’s the correct sequence of actions, with explanations:
Create an Azure policy definition that uses the deployIfNotExists effect.
Why? This is the first step. You need to define what the policy should do. For TDE, deployIfNotExists is used to deploy a configuration if it’s missing. The deployIfNotExists will deploy an ARM template that enables TDE on the database.
This step specifies the “rule” that will be enforced.
Create an Azure policy assignment.
Why? After defining the policy, you need to assign it to a scope, such as a subscription or a resource group. This step specifies where the policy is applied.
This tells Azure what needs to be checked against the policy.
Invoke a remediation task.
Why? The policy assignment automatically remediates resources that are created or updated after the assignment. Existing non-compliant resources, however, require a remediation task to apply the deployIfNotExists deployment.
The Correct Drag-and-Drop Order
Here’s how you should arrange the actions in the answer area:
Create an Azure policy definition that uses the deployIfNotExists effect.
Create an Azure policy assignment.
Invoke a remediation task.
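The three steps above can be sketched with the Azure CLI. This assumes the built-in deployIfNotExists definition for TDE (its display name is assumed here; verify it with `az policy definition list`), a hypothetical assignment name, and an authenticated session:

```shell
# 1. Locate the built-in deployIfNotExists policy definition for TDE
#    (display name assumed; confirm in your tenant)
def_id=$(az policy definition list \
  --query "[?displayName=='Deploy SQL DB transparent data encryption'].id" -o tsv)

# 2. Assign it at subscription scope; --mi-system-assigned creates the
#    system-assigned managed identity that Azure Policy uses to deploy TDE
az policy assignment create \
  --name enforce-sql-tde \
  --policy "$def_id" \
  --mi-system-assigned \
  --location eastus \
  --role Contributor \
  --identity-scope "/subscriptions/<subscription-id>"

# 3. Remediate databases that were already non-compliant
az policy remediation create \
  --name tde-remediation \
  --policy-assignment enforce-sql-tde
```

Note how step 2 illustrates the point below about identities: the assignment brings its own system-assigned identity, so no user-assigned identity is created.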
Why Other Options are Incorrect in this context:
Create a user-assigned managed identity: Managed identities are used with policies that have the deployIfNotExists effect, but one does not need to be created separately here; the policy assignment's system-assigned managed identity performs the remediation. Creating a user-assigned managed identity is therefore unnecessary for this task.
Create an Azure policy definition that uses the Modify effect: Modify is limited to altering resource properties and tags during create or update. Enabling TDE requires deploying configuration to the database, so deployIfNotExists is the correct effect.
Important Notes for the AZ-305 Exam
Azure Policy Effects: Be extremely familiar with different policy effects, especially deployIfNotExists, audit, deny, and modify.
Policy Definition vs. Assignment: Understand the difference between defining a policy and applying it to resources.
Remediation: Understand how to use remediation tasks to fix non-compliant resources.
Scope: Be able to set the appropriate scope for policy assignments.
Managed Identities: Know how to use managed identities for secure resource management with Azure policies.
HOTSPOT
You plan to migrate App1 to Azure.
You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Number of host groups:
1
2
3
6
Number of virtual machine scale sets:
0
1
3
Understanding the Requirements
Here are the key resiliency requirements for App1:
“Be hosted in an Azure region that supports availability zones.”
“Be hosted on Azure virtual machines that support automatic scaling.”
“Maintain availability if two availability zones in the local Azure region fail.”
“App1 must NOT share physical hardware with other workloads.” (This implies using dedicated hosts)
Key Concepts
Azure Dedicated Hosts: Provide physical servers dedicated to your Azure virtual machines.
Host Groups: Collections of dedicated hosts.
Virtual Machine Scale Sets (VMSS): Allow you to create and manage a group of identical load-balanced virtual machines.
Availability Zones: Physically separate locations within an Azure region that provide high availability.
Analyzing the Options
Let’s analyze each option given the requirements:
Number of Host Groups
1: Not sufficient. A host group is pinned to a single availability zone, so one host group cannot survive any zone failure, let alone two.
2: Not sufficient. Two host groups could be placed in two zones, but if both of those zones fail, App1 is down. The requirement is to remain available when two zones fail.
3: Sufficient. Using three host groups spread across three zones ensures availability if two availability zones fail.
6: Not needed. Three zones is sufficient to meet the requirements. Using 6 host groups is unnecessary and increases cost.
Number of Virtual Machine Scale Sets
0: Not suitable. The automatic scaling requirement means virtual machine scale sets must be used.
1: Not suitable. A scale set deployed to a dedicated host group is pinned to that host group's zone, so a single scale set covers only one availability zone.
3: Suitable. Three VMSS spread across three availability zones is suitable for automated scaling.
The Correct Choices
Here’s how the hotspot should be answered:
Number of host groups: 3
Number of virtual machine scale sets: 3
Explanation
3 Host Groups: Because the application needs to remain available during two zone outages, you need dedicated hosts in 3 different availability zones.
3 Virtual Machine Scale Sets: App1's virtual machines must be distributed across the zones, and scale sets provide the automatic scaling. Because each dedicated host group is pinned to a single zone, one scale set is deployed per zone/host group.
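A minimal Azure CLI sketch of this layout, assuming hypothetical resource names and that dedicated hosts are added to each group separately (parameter availability can vary by CLI version):

```shell
# One zonal host group and one scale set per availability zone
for z in 1 2 3; do
  az vm host group create \
    --resource-group rg-app1 --name hg-app1-z$z \
    --zone $z --platform-fault-domain-count 1 \
    --automatic-placement true

  # Dedicated hosts must also be added to each group, e.g.:
  # az vm host create --host-group hg-app1-z$z --name host-z$z --sku <host-sku> ...

  az vmss create \
    --resource-group rg-app1 --name vmss-app1-z$z \
    --image Ubuntu2204 --zones $z \
    --host-group hg-app1-z$z \
    --orchestration-mode Flexible --instance-count 2
done
```

Losing two zones still leaves one host group and its scale set running, which satisfies the resiliency requirement.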
Why Other Options are Not Correct
Using 1 or 2 host groups would not meet the availability zone redundancy requirement.
Having 0 or 1 virtual machine scale sets does not provide the needed HA by using multiple zones.
Important Notes for the AZ-305 Exam
Dedicated Hosts: Understand the use cases for dedicated hosts, and how they differ from standard VMs.
Host Groups: Understand the use case for host groups and what they are used for.
Virtual Machine Scale Sets: Understand how to implement automatic scaling using VMSS.
Availability Zones: Be able to design highly available solutions using Availability Zones.
High Availability Requirements: Read the requirements carefully to understand what the solution needs to provide.
Cost Optimization: Avoid implementing unnecessary complexity and cost.
You need to implement the Azure RBAC role assignments for the Network Contributor role.
The solution must meet the authentication and authorization requirements.
What is the minimum number of assignments that you must use?
1
2
5
10
15
Understanding the Requirements
Here’s the key requirement:
“The Network Contributor built-in RBAC role must be used to grant permissions to the network administrators for all the virtual networks in all the Azure subscriptions.”
“RBAC roles must be applied at the highest level possible.”
Key Concepts
Azure RBAC: Role-Based Access Control, used to manage access to Azure resources.
Network Contributor Role: A built-in role that grants permissions to manage virtual networks and other networking resources.
Role Assignment: The process of associating a role with a user, group, or service principal at a particular scope.
Scope: The level at which a role assignment applies (e.g., subscription, resource group, resource).
Hierarchy: RBAC role assignments are inherited down the resource hierarchy. A role assigned at the subscription level therefore applies to all child resources within that subscription.
Analyzing the Scenario
Subscriptions: Litware has 10 Azure subscriptions in litware.com and 5 in dev.litware.com, for a total of 15 subscriptions.
Highest Level Possible: Applying RBAC at the highest possible level minimizes the administrative effort and the number of assignments needed. In this case, the highest level is the subscription.
All Virtual Networks: Applying Network Contributor at the subscription level provides access to all virtual networks within that subscription.
Number of Assignments: Because the requirement states all subscriptions need to have this role, we will need at least one role assignment per subscription.
Determining the Minimum Number of Assignments
Since we are applying the role to all virtual networks in all subscriptions, and RBAC roles inherit to child resources, we need one assignment per subscription. Therefore:
15 role assignments are needed.
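The arithmetic and the per-subscription assignment can be sketched as follows (the group and subscription IDs in the commented command are placeholders):

```shell
# Counting sketch: one Network Contributor assignment per subscription.
litware_subs=10   # subscriptions in the litware.com tenant
dev_subs=5        # subscriptions in the dev.litware.com tenant
total=$((litware_subs + dev_subs))
echo "minimum role assignments: $total"   # prints "minimum role assignments: 15"

# For each subscription, an assignment would look like:
# az role assignment create \
#   --assignee <network-admins-group-object-id> \
#   --role "Network Contributor" \
#   --scope "/subscriptions/<subscription-id>"
```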
Why Other Options Are Incorrect
1: Not sufficient. A single assignment at a management group would cover only the subscriptions under that management group, and a management group cannot span the two tenants.
2: Not sufficient. Role assignments are scoped to management groups, subscriptions, resource groups, or resources; there is no tenant-level assignment that would cover all 15 subscriptions with only two assignments.
5: Not sufficient. Would cover the dev.litware.com subscriptions, but not the litware.com subscriptions.
10: Not sufficient. Would cover the litware.com subscriptions, but not the dev.litware.com subscriptions.
The Correct Answer
The correct answer is:
15
Important Notes for the AZ-305 Exam
RBAC Scope: Understand how RBAC scopes work and the impact of assigning roles at different levels.
Built-in Roles: Be familiar with common built-in roles like Network Contributor, Owner, Reader, Contributor, etc.
Least Privilege: Always strive for the least privilege principle, only granting the necessary access to users or groups.
Hierarchy: Understand how resource hierarchies work and how RBAC role assignments are inherited down the tree.
Minimize Administrative Effort: Aim to reduce complexity and administrative burden.
Management Groups: Be aware that RBAC role assignment can occur at the Management Group level in addition to the resource, resource group, and subscription levels.
HOTSPOT
You plan to migrate App1 to Azure.
You need to estimate the compute costs for App1 in Azure. The solution must meet the security and compliance requirements.
What should you use to estimate the costs, and what should you implement to minimize the costs? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To estimate the costs, use:
Azure Advisor
The Azure Cost Management Power BI app
The Azure Total Cost of Ownership (TCO) calculator
Implement:
Azure Reservations
Azure Hybrid Benefit
Azure Spot Virtual Machine pricing
Understanding the Requirements
Compute Cost Estimation: You need a method to accurately estimate the cost of the Azure virtual machines that will host App1.
Cost Minimization: You need to identify strategies to reduce the compute costs, especially keeping the security and compliance in mind.
Dedicated Hosts: The prompt also mentioned that App1 must be on dedicated hosts.
Analyzing the Options
Let’s evaluate each option based on how well it fits the requirement:
Cost Estimation Tools
Azure Advisor:
Pros: Analyzes your existing Azure resources and recommends cost optimization options.
Cons: Primarily for existing resources and does not work in the planning phase.
Suitability: Not the best fit for estimating costs before the environment is set up.
The Azure Cost Management Power BI app:
Pros: Allows for visualization and analysis of cost data from your Azure subscriptions, based on past usage.
Cons: Not suitable for cost estimation of future resources, but is used for analyzing actual costs.
Suitability: Not the best fit for estimating costs before the environment is set up.
The Azure Total Cost of Ownership (TCO) calculator:
Pros: Designed to compare on-premises and Azure costs and allows cost estimation of future resources.
Cons: Might need some initial data input to estimate accurately.
Suitability: Very suitable for cost estimation of future Azure resources, and is useful for planning.
Cost Minimization Options
Azure Reservations:
Pros: Provides significant discounts when you commit to using specific Azure resources for a 1 or 3 year term.
Cons: Requires commitment, might not be flexible.
Suitability: Very suitable for minimizing long term costs.
Azure Hybrid Benefit:
Pros: Lets you use your existing on-premises Windows Server licenses to reduce the cost of Azure Virtual Machines.
Cons: Only applies to Windows Server licenses.
Suitability: Very suitable, if you have eligible licenses.
Azure Spot Virtual Machine pricing:
Pros: Offers very significant discounts for VMs.
Cons: Spot VMs can be evicted with short notice, unsuitable for production workloads. Does not meet the application’s production needs.
Suitability: Not suitable for a production application.
The Correct Choices
Here’s how the hotspot should be answered:
To estimate the costs, use: The Azure Total Cost of Ownership (TCO) calculator
Implement: Azure Reservations and Azure Hybrid Benefit
Explanation
Azure TCO Calculator: The TCO calculator is the correct tool for estimating costs during the planning phase. It allows for input of future resources and is perfect for this scenario.
Azure Reservations: Reservations will significantly reduce compute costs for the virtual machines when committed to a long term period.
Azure Hybrid Benefit: If Litware holds eligible on-premises licenses, Azure Hybrid Benefit reduces the licensing portion of the virtual machine costs.
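Applying Azure Hybrid Benefit to an existing VM is a one-line change; a sketch with hypothetical resource names:

```shell
# Apply Azure Hybrid Benefit to an existing Windows Server VM
# (resource group and VM names are placeholders)
az vm update \
  --resource-group rg-app1 \
  --name vm-app1-01 \
  --license-type Windows_Server
```

Reservations, by contrast, are purchased (typically through the Azure portal) for a 1- or 3-year term rather than configured per VM.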
Why Other Options Are Not Suitable
Azure Advisor and Azure Cost Management are for analyzing current spend, not estimating future costs.
Azure Spot VMs are unsuitable for this production environment.
Important Notes for the AZ-305 Exam
Cost Estimation Tools: Be familiar with tools like the TCO calculator and the Azure pricing calculator, and how to use them for estimates.
Cost Optimization Techniques: Understand various methods for cost optimization, such as reservations, hybrid benefit, and spot VMs.
Workload Suitability: Recognize that not all cost-saving methods are appropriate for every workload (e.g., spot VMs are not for production).
Dedicated Hosts: Remember that the prompt also required dedicated hosts, so the options selected must support dedicated hosts.
Existing Environment: Technical Environment
The on-premises network contains a single Active Directory domain named contoso.com.
Contoso has a single Azure subscription.
Existing Environment: Business Partnerships
Contoso has a business partnership with Fabrikam, Inc. Fabrikam users access some Contoso applications over the internet by using Azure Active Directory (Azure AD) guest accounts.
Requirements: Planned Changes
Contoso plans to deploy two applications named App1 and App2 to Azure.
Requirements: App1
App1 will be a Python web app hosted in Azure App Service that requires a Linux runtime.
Users from Contoso and Fabrikam will access App1.
App1 will access several services that require third-party credentials and access strings.
The credentials and access strings are stored in Azure Key Vault.
App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.
App1 has the following data requirements:
✑ Each instance will write data to a data store in the same availability zone as the instance.
✑ Data written by any App1 instance must be visible to all App1 instances.
App1 will only be accessible from the internet. App1 has the following connection requirements:
✑ Connections to App1 must pass through a web application firewall (WAF).
✑ Connections to App1 must be active-active load balanced between instances.
✑ All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.
Every hour, you will run a maintenance task by invoking a PowerShell script that copies files from all the App1 instances. The PowerShell script will run from a central location.
Requirements: App2
App2 will be a .NET app hosted in App Service that requires a Windows runtime.
App2 has the following file storage requirements:
✑ Save files to an Azure Storage account.
✑ Replicate files to an on-premises location.
✑ Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.
You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.
Application Development Requirements
Application developers will constantly develop new versions of App1 and App2.
The development process must meet the following requirements:
✑ A staging instance of a new application version must be deployed to the application host before the new version is used in production.
✑ After testing the new version, the staging version of the application will replace the production version.
✑ The switch to the new application version from staging to production must occur without any downtime of the application.
Identity Requirements
Contoso identifies the following requirements for managing Fabrikam access to resources:
✑ The solution must minimize development effort.
Security Requirement
All secrets used by Azure services must be stored in Azure Key Vault.
Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.
DRAG DROP
You need to recommend a solution that meets the file storage requirements for App2.
What should you deploy to the Azure subscription and the on-premises network? To answer, drag the appropriate services to the correct locations. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Services
Azure Blob Storage
Azure Data Box
Azure Data Box Gateway
Azure Data Lake Storage
Azure File Sync
Azure Files
Answer Area
Azure subscription: Service
On-premises network: Service
Understanding the Requirements
Here are the key file storage requirements for App2:
Azure Storage: Store files in an Azure Storage account.
On-premises Replication: Replicate the files to an on-premises location.
On-premises SMB Access: Ensure on-premises clients can read the files over the LAN using the SMB protocol.
Analyzing the Options
Let’s evaluate each service based on its suitability for these requirements:
Azure Blob Storage:
Pros: Scalable and cost-effective for storing large amounts of unstructured data.
Cons: Not directly accessible via SMB, and it offers no built-in replication to an on-premises location.
Suitability: Not suitable to directly meet the SMB access requirement.
Azure Data Box:
Pros: Used for large data transfers into and out of Azure, and great when network bandwidth is limited.
Cons: Not used for continuous synchronization of data, and does not provide SMB access.
Suitability: Not suitable for this scenario, as it is used for initial data transfer and not ongoing synchronization.
Azure Data Box Gateway:
Pros: A virtual appliance that sits on premises and transfers data to/from Azure, using local caching.
Cons: Does not directly provide SMB access to clients, but rather transfers data from on-prem to the cloud.
Suitability: Not suitable for directly meeting the requirement for local SMB share.
Azure Data Lake Storage:
Pros: Designed for big data analytics workloads.
Cons: Not optimized for transactional file storage or direct SMB access.
Suitability: Not suitable for this transactional file storage scenario, where a local SMB share is needed.
Azure File Sync:
Pros: Synchronizes files between Azure File Shares and on-premises Windows Servers.
Cons: Requires a Windows Server on-premises to be installed and configured as a synchronization endpoint.
Suitability: Highly suitable, allows for files to be stored in the Azure File Share, and then synced to an on-premises file server for SMB share.
Azure Files:
Pros: Provides SMB file shares in Azure.
Cons: By itself, does not provide on-prem replication.
Suitability: Suitable for the Azure-side storage, and it is a prerequisite for Azure File Sync.
The Correct Placement
Based on the analysis, here’s how the services should be placed:
Azure subscription:
Azure Files
Azure File Sync
On-premises network:
Azure File Sync
Explanation
Azure Files provides the cloud-based SMB file share for storing App2’s files.
Azure File Sync ensures that the contents of the Azure File share are synced with a server on-premises, allowing for local file access through the SMB protocol. The on-prem File Sync agent will sync the data down from Azure.
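The Azure-side pieces can be sketched with the CLI (names are hypothetical); the on-premises side is configured on the Windows Server itself:

```shell
# Azure-side: a storage account and an SMB file share for App2's files
az storage account create \
  --resource-group rg-app2 --name stapp2files --sku Standard_LRS

az storage share-rm create \
  --storage-account stapp2files --name app2files

# On-premises: install the Azure File Sync agent on a Windows Server,
# register the server with a Storage Sync Service, then create a sync
# group with the share as the cloud endpoint and a local server path
# as the server endpoint. Local clients then read the files over SMB.
```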
Why Other Options are Incorrect
Azure Blob Storage is not suitable for SMB access
Azure Data Box is designed for bulk transfers, not ongoing sync
Azure Data Box Gateway acts as a cache and does not provide SMB access.
Azure Data Lake Storage is for big data analytics, not for a transactional SMB share.
Important Notes for the AZ-305 Exam
Azure File Sync: Understand how Azure File Sync works and when to use it. This is very important in the AZ-305 exam.
Azure Files: Understand the use cases of Azure File Shares.
SMB Protocol: Be aware that SMB shares enable file sharing over a local area network.
Hybrid Scenarios: Recognize when hybrid configurations (cloud and on-prem) are necessary and how to implement them.
You need to recommend a solution that meets the data requirements for App1.
What should you recommend deploying to each availability zone that contains an instance of App1?
an Azure Cosmos DB that uses multi-region writes
an Azure Storage account that uses geo-zone-redundant storage (GZRS)
an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)
an Azure SQL database that uses active geo-replication
Understanding the Requirements
Here are the key data requirements for App1:
Local Writes: “Each instance will write data to a data store in the same availability zone as the instance.”
Global Visibility: “Data written by any App1 instance must be visible to all App1 instances.”
Multi-Zone: App1 has three instances in East US and three instances in West Europe, and all three instances in each region are spread across availability zones.
Analyzing the Options
Let’s evaluate each option based on its ability to meet these requirements:
an Azure Cosmos DB that uses multi-region writes
Pros: Globally distributed database with multi-region write capability, low latency reads and writes, with automatic failover.
Cons: More expensive than some other options.
Suitability: Highly suitable. The multi-region writes feature enables the low latency local writes, and all instances can see the data globally.
an Azure Storage account that uses geo-zone-redundant storage (GZRS)
Pros: Provides both zone and geo-redundancy for high availability.
Cons: Geo-replication is asynchronous, so replicated data lags behind. GZRS is also not designed for instances in two regions to perform local writes to the same account (it is not an active-active setup) and would generate unnecessary cross-region replication traffic.
Suitability: Not suitable. It does not enable low-latency local writes, and replication is delayed.
an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)
Pros: Scalable and cost-effective for storing large amounts of unstructured data with GZRS enabled.
Cons: Same limitations as the GZRS storage account: asynchronous geo-replication with a delay, no active-active multi-region writes, and unnecessary cross-region replication traffic.
Suitability: Not suitable. It does not enable low-latency local writes, and replication is delayed.
an Azure SQL database that uses active geo-replication
Pros: Provides high availability, can failover to a secondary database.
Cons: Writes go only to the primary database; geo-replicated secondaries are read-only. Instances outside the primary's region would incur cross-region write latency.
Suitability: Not suitable. Active geo-replication does not provide multi-region writes, and it adds cross-zone and cross-region latency.
The Correct Recommendation
Based on the analysis, the correct solution is:
an Azure Cosmos DB that uses multi-region writes
Explanation
Multi-Region Writes: Azure Cosmos DB with multi-region writes enabled allows each instance of App1 to write to the database in its own region with low latency.
Global Visibility: Data written in one region is replicated to all other regions through global distribution, so every App1 instance can see it, subject to the configured consistency level.
Automatic Failover: Cosmos DB can failover to different regions.
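A minimal CLI sketch of such an account, with hypothetical names and the two regions from the scenario:

```shell
# Cosmos DB account with multi-region writes and zone redundancy
# in East US and West Europe (account name is a placeholder)
az cosmosdb create \
  --resource-group rg-app1 \
  --name cosmos-app1 \
  --enable-multiple-write-locations true \
  --locations regionName=eastus failoverPriority=0 isZoneRedundant=true \
  --locations regionName=westeurope failoverPriority=1 isZoneRedundant=true
```

Each App1 instance then writes to its local region's endpoint, and Cosmos DB replicates the data to the other region.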
Why Other Options Are Incorrect
GZRS Storage Accounts: Do not support multi-region writes, data replication has a delay, and all instances are attempting to write to the same storage, which does not meet the requirements.
Azure SQL Active Geo-Replication: Is an active-passive approach to HA/DR. Data is written to the primary node and asynchronously replicated to a secondary replica. It does not support the low latency, multi-region write, active-active model needed for App1.
Important Notes for the AZ-305 Exam
Cosmos DB Multi-Region Writes: Understand the benefits and requirements for using Cosmos DB with multi-region writes.
Data Replication Strategies: Be familiar with different data replication methods and understand their limitations (e.g., asynchronous geo-replication delays).
High Availability: Know how to design highly available solutions that are resilient to zone outages.
Active-Active vs Active-Passive: Be able to differentiate between active-active and active-passive deployments.
Latency: Be aware of latency implications when choosing a data solution.
HOTSPOT
You are evaluating whether to use Azure Traffic Manager and Azure Application Gateway to meet the connection requirements for App1.
What is the minimum numbers of instances required for each service? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Azure Traffic Manager:
1
2
3
6
Azure Application Gateway:
1
2
3
6
Understanding the Requirements
Here are the key connection requirements for App1:
“Connections to App1 must pass through a web application firewall (WAF).”
“Connections to App1 must be active-active load balanced between instances.”
“All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.”
Key Concepts
Azure Traffic Manager: A DNS-based traffic load balancer that distributes traffic to different Azure regions based on routing rules.
Azure Application Gateway: A web traffic load balancer with WAF capabilities that can be used for application-layer routing and protection.
Active-Active Load Balancing: Distributes traffic to multiple active application instances.
Analyzing the Options
Let’s evaluate the instances needed for each service:
Azure Traffic Manager:
Minimum Instances: 1. Traffic Manager is a global, DNS-based service, so a single profile serves all regions.
Why?: Traffic Manager answers DNS queries with the endpoint for the appropriate region based on where the query originates (Geographic routing). One profile with multiple endpoints is all that is required.
Suitability: Suitable for global routing.
Azure Application Gateway:
Minimum Instances: 2, one per region. Application Gateway is a regional service, so a separate gateway must be deployed in East US and in West Europe.
Why?: Application Gateway provides the WAF and the application-layer load balancing across the App1 instances in its region. Because App1 runs in two regions, at least one Application Gateway is required in each.
Suitability: Suitable for WAF and application load balancing.
The Correct Choices
Here’s how the hotspot should be answered:
Azure Traffic Manager: 1
Azure Application Gateway: 2
Explanation
Traffic Manager (1): A single Traffic Manager instance can perform DNS-based routing to direct traffic to either the East US or West Europe Azure regions based on where the traffic is originating from.
Application Gateway (2): One Application Gateway per region is required, one in East US and one in West Europe, to apply the WAF and load-balance the instances that Traffic Manager directs to each region.
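The Traffic Manager side of this design can be sketched as follows (profile, DNS, and target names are hypothetical; the targets would be the public addresses of the two Application Gateways):

```shell
# Geographic routing profile: North America -> East US, everything else -> West Europe
az network traffic-manager profile create \
  --resource-group rg-app1 --name tm-app1 \
  --routing-method Geographic \
  --unique-dns-name app1-litware

az network traffic-manager endpoint create \
  --resource-group rg-app1 --profile-name tm-app1 \
  --name eastus-appgw --type externalEndpoints \
  --target appgw-eastus.example.com \
  --geo-mapping GEO-NA

az network traffic-manager endpoint create \
  --resource-group rg-app1 --profile-name tm-app1 \
  --name westeurope-appgw --type externalEndpoints \
  --target appgw-westeurope.example.com \
  --geo-mapping WORLD
```

GEO-NA covers North America, and WORLD catches all remaining queries, matching the stated routing requirement.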
Why Other Options are Incorrect
Traffic Manager (2, 3, 6): More than one Traffic Manager instance does not provide any added benefit.
Application Gateway (1): A single gateway is a regional resource and could serve only one of the two regions.
Application Gateway (3 or 6): Unnecessary; one gateway per region, two in total, meets the requirements.
Important Notes for the AZ-305 Exam
Traffic Manager Use Cases: Be familiar with how Traffic Manager is used for global routing and DNS-based load balancing.
Application Gateway Use Cases: Understand how Application Gateway is used for application-level routing, load balancing, and WAF.
Minimum Instance Count: Be aware of the minimum instance counts needed to provide redundancy and scale for Azure services.
Active-Active Load Balancing: Understand the purpose of active-active load balancing and how it differs from active-passive solutions.
Global Routing: Be able to design solutions for global routing that meet regional requirements.
Availability Zones: Understand how to deploy solutions with availability zone redundancy.
HOTSPOT -
You need to recommend a solution to ensure that App1 can access the third-party credentials and access strings. The solution must meet the security requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Authenticate App1 by using:
A certificate
A system-assigned managed identity
A user-assigned managed identity
Authorize App1 to retrieve Key Vault
secrets by using:
An access policy
A connected service
A private link
A role assignment
Understanding the Requirements
Here are the key requirements for accessing secrets:
Secure Access: App1 must access third-party credentials and access strings stored in Azure Key Vault.
Security:
“All secrets used by Azure services must be stored in Azure Key Vault.”
“Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.”
Minimize Development Effort: Ensure that the approach minimizes the work needed to implement it.
Key Concepts
Azure Key Vault: A secure service for managing secrets, keys, and certificates.
Managed Identities: Azure automatically manages the identity and authenticates to services, so you don’t need to manage credentials.
System-Assigned Managed Identity: An identity that is tied directly to the resource and shares the lifecycle of that resource.
User-Assigned Managed Identity: An identity that is independent of the resource, and has a separate lifecycle.
Authentication: Verifying the identity of a client.
Authorization: Granting permissions to access specific resources.
Access Policies: Define which permissions specific users or applications are granted on a key vault’s secrets, keys, and certificates.
Role Assignments: Defines RBAC roles that can be assigned to users, groups, or applications for accessing resources.
Private Link: Provides private access to Azure services by keeping traffic within the Azure network.
Connected Service: An object that allows for integration between a service and another resource.
Analyzing the Options
Let’s evaluate each option based on its suitability:
Authentication Method
A certificate:
Pros: Provides a strong authentication method.
Cons: Requires manual certificate management, which increases administrative overhead, and does not tie credentials to service instances.
Suitability: Not suitable, as it does not meet the requirement to tie credentials to the service instance.
A system-assigned managed identity:
Pros: Automatically managed by Azure, and securely tied to the resource, meets security requirement, minimizes development effort and follows best practices.
Cons: None for this scenario.
Suitability: Highly suitable, it meets all the security and best practices requirements.
A user-assigned managed identity:
Pros: Managed by Azure, and can be used across multiple resources.
Cons: Requires manual creation, and is more complex for this scenario.
Suitability: Not as suitable because it requires more effort than a system-assigned managed identity.
Authorization Method
An access policy:
Pros: Defines who can access Key Vault secrets and what they can do with them, very simple and easy to configure.
Cons: Must be configured on each vault individually and does not provide the centralized management of Azure RBAC.
Suitability: Not suitable because a role assignment is the preferred method of granting authorization to a service principal, and minimizes complexity.
A connected service:
Pros: Integrates resources in Azure and can provide authorization.
Cons: Not the correct way to authorize key vault access to an app service, and requires more effort.
Suitability: Not suitable.
A private link:
Pros: Provides secure network access to resources.
Cons: Not the method for controlling authorization, this is related to networking.
Suitability: Not suitable.
A role assignment:
Pros: RBAC roles can be assigned to a service principal for Key Vault access, and the system-assigned managed identity of the resource is a service principal.
Cons: Requires assigning a role.
Suitability: Suitable, and the preferred method to assign permissions to a service principal.
The Correct Choices
Here’s how the hotspot should be answered:
Authenticate App1 by using: A system-assigned managed identity
Authorize App1 to retrieve Key Vault secrets by using: A role assignment
Explanation
System-Assigned Managed Identity: Enables secure access to Azure resources without needing to manage credentials. The identity is tied to the service instance and the lifecycle of the service.
Role Assignment: Provides the necessary permissions for the managed identity to access Key Vault secrets.
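A minimal Azure CLI sketch of this configuration (resource names and IDs are hypothetical; the vault must use the Azure RBAC permission model rather than access policies):

```shell
# Enable the system-assigned managed identity on the App Service app.
# The command returns the identity's principalId.
az webapp identity assign --name app1 --resource-group rg-app1

# Grant that identity read access to secrets via an Azure RBAC role
# assignment scoped to the vault.
az role assignment create \
  --assignee <principal-id-from-previous-command> \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-app1/providers/Microsoft.KeyVault/vaults/kv-app1"
```

No credentials are stored or shared: the identity lives and dies with App1, and the role assignment grants it only secret-read permissions on that vault.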
Why Other Options Are Not Suitable
User-assigned managed identities can be used; however, they require extra configuration and are not needed for this scenario.
Certificates: Require manual management and do not follow the best practice of tying credentials to the service instance.
Access policies could grant the same access, but a role assignment achieves the result more simply, and access policies must be configured on every vault.
Private Link is a networking feature for private access; it does not address the permissions App1 needs to read Key Vault secrets.
Connected services are not applicable for Key Vault access.
Important Notes for the AZ-305 Exam
Managed Identities: Thoroughly understand how managed identities work and how to use them with different Azure services. System vs user assigned and their differences.
Key Vault Security: Know the best practices for securing access to Key Vault resources.
RBAC: Be familiar with Azure RBAC and how to use it for managing permissions.
Authentication vs Authorization: Understand the difference between these two processes.
Least Privilege: Understand how to implement the least privilege principle with access control.
You need to recommend an App Service architecture that meets the requirements for App1.
The solution must minimize costs.
What should you recommend?
one App Service Environment (ASE) per availability zone
one App Service plan per availability zone
one App Service plan per region
one App Service Environment (ASE) per region
Understanding the Requirements
Here are the key requirements for App1’s App Service deployment:
High Availability: App1 has six instances, three in East US and three in West Europe, spread across availability zones within each region.
Web App Service: The App1 app will be hosted on Azure App Service.
Minimize Costs: The solution should be the most cost-effective while maintaining the necessary features.
Linux Runtime: App1 is a Python app with a Linux runtime.
Key Concepts
Azure App Service: A PaaS service for hosting web applications, mobile backends, and APIs.
App Service Plan: Defines the underlying compute resources (VMs) on which your app(s) run.
App Service Environment (ASE): Provides a fully isolated and dedicated environment for running your App Service apps.
Availability Zones: Physically separate locations within an Azure region that provide high availability.
Analyzing the Options
Let’s evaluate each option based on its cost-effectiveness and ability to meet the requirements:
one App Service Environment (ASE) per availability zone
Pros: Highest level of isolation and control, can have virtual network integration.
Cons: Most expensive solution.
Suitability: Not suitable due to high costs.
one App Service plan per availability zone
Pros: Provides zone redundancy, and can potentially have different size VMs in each zone if needed.
Cons: Choosing one App Service plan per zone triples the number of plans and can lead to overprovisioned resources and higher cost.
Suitability: Not the most cost-effective approach.
one App Service plan per region
Pros: Cost-effective for multiple instances of an app in a single region, allows multiple VMs to be spun up on one app service plan.
Cons: Requires a pricing tier that supports zone redundancy (Premium v2 or v3).
Suitability: Suitable; the most cost-effective option when a zone-redundant tier is selected.
one App Service Environment (ASE) per region
Pros: Provides isolation and control within a region.
Cons: Very expensive and not needed for this scenario.
Suitability: Not suitable due to high costs.
The Correct Recommendation
Based on the analysis, the most cost-effective solution is:
one App Service plan per region
Explanation
App Service Plan per region: By creating a single App Service plan per region, you can host multiple instances of App1 (three per region) on the same plan. This is more cost-effective than using separate plans per availability zone.
Availability Zones: Choose a pricing tier that supports zone redundancy (Premium v2 or v3).
Zone Redundancy: A zone-redundant App Service plan automatically spreads its instances across the availability zones in the region.
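A deployment sketch with the Azure CLI (names are hypothetical; zone redundancy requires a Premium v2/v3 tier): one zone-redundant Linux plan per region, sized for three instances.

```shell
# One zone-redundant App Service plan for the East US instances;
# repeat with westeurope values for the second region.
az appservice plan create \
  --name plan-app1-eastus --resource-group rg-app1 \
  --location eastus --is-linux --sku P1v3 \
  --number-of-workers 3 --zone-redundant
```

The `--zone-redundant` flag tells App Service to distribute the three workers across the region’s availability zones; the same plan hosts all of App1’s instances in that region, which is what keeps costs down versus one plan per zone.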
Why Other Options Are Not Correct
ASE per availability zone: Highly expensive and not needed when App Service can handle the availability zone deployment.
App Service plan per availability zone: Not cost-effective due to overprovisioning; it creates three App Service plans per region when one zone-redundant plan per region can host all the instances.
ASE per region: Very costly and unnecessary.
Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.
Several VMs are exhibiting network connectivity issues.
You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.
Solution: Use Azure Advisor to analyze the network traffic.
Does the solution meet the goal?
Yes
No
Understanding the Goal
The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which indicates a network connectivity issue.
Analyzing the Proposed Solution
Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show you network traffic for VMs, nor can it view network traffic for on-prem VMs.
Evaluation
Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.
The Correct Solution
The tools that would be best suited for this scenario would be:
Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.
Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.
On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.
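For a quick allow/deny check against a single VM, Network Watcher’s IP flow verify evaluates the VM’s effective NSG rules for a specific packet. A sketch with the Azure CLI (VM name, resource group, and addresses are hypothetical):

```shell
# Ask Network Watcher whether an inbound TCP packet from the given
# remote address to the VM's NIC on port 443 would be allowed or
# denied, and which NSG rule decides it.
az network watcher test-ip-flow \
  --resource-group rg-app1 --vm vm1 \
  --direction Inbound --protocol TCP \
  --local 10.0.0.4:443 --remote 203.0.113.10:60000
```

The output reports `Allow` or `Deny` along with the matching rule name, which directly answers the "are packets being allowed or denied" question for Azure VMs; NSG flow logs cover the same question at scale over time.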
Does the Solution Meet the Goal?
No, the solution does not meet the goal. Azure Advisor is not the correct tool for analyzing network traffic flow and packet information.