test2 Flashcards

https://infraexam.com/microsoft/az-304-microsoft-azure-architect-design/az-304-part-07/

1
Q

Overview. General Overview

Litware, Inc. is a medium-sized finance company.

Overview. Physical Locations

Litware has a main office in Boston.

Existing Environment. Identity Environment

The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.

Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.

The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by using the Azure portal, they must connect from a hybrid Azure AD-joined device.

Existing Environment. Azure Environment

Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).

The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.

Existing Environment. On-premises Environment

The on-premises network of Litware contains the resources shown in the following table.
Name: SERVER1, SERVER2, SERVER3
Type: Ubuntu 18.04 virtual machines hosted on Hyper-V
Configuration: The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.

Name: SERVER10
Type: Server that runs Windows Server 2016
Configuration: The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.

Existing Environment. Network Environment

Litware has ExpressRoute connectivity to Azure.

Planned Changes and Requirements. Planned Changes

Litware plans to implement the following changes:

✑ Migrate DB1 and DB2 to Azure.

✑ Migrate App1 to Azure virtual machines.

✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.

Planned Changes and Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).

✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.

✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.

✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.

✑ RBAC roles must be applied at the highest level possible.

Planned Changes and Requirements. Resiliency Requirements

Litware identifies the following resiliency requirements:

✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:

  • Maintain availability if two availability zones in the local Azure region fail.
  • Fail over automatically.
  • Minimize I/O latency.

✑ App1 must meet the following requirements:

  • Be hosted in an Azure region that supports availability zones.
  • Be hosted on Azure virtual machines that support automatic scaling.
  • Maintain availability if two availability zones in the local Azure region fail.

Planned Changes and Requirements. Security and Compliance Requirements

Litware identifies the following security and compliance requirements:

✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.

✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.

✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.

✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

✑ App1 must not share physical hardware with other workloads.

Planned Changes and Requirements. Business Requirements

Litware identifies the following business requirements:

✑ Minimize administrative effort.

✑ Minimize costs.

You plan to migrate App1 to Azure. The solution must meet the authentication and authorization requirements.

Which type of endpoint should App1 use to obtain an access token?

Azure Instance Metadata Service (IMDS)
Azure AD
Azure Service Management
Microsoft identity platform

A

The correct answer is: Azure Instance Metadata Service (IMDS)

Explanation:

Managed Identities and IMDS:

Why it’s the right choice: The requirements state that “To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app”. Managed identities for Azure resources provide an identity that applications running in an Azure VM can use to access other Azure resources. The Azure Instance Metadata Service (IMDS) is the service that provides this identity information to the VM.

How it works:

You enable a managed identity for the virtual machines hosting App1.

Within the App1 code, you make a request to the IMDS to obtain an access token.

The IMDS endpoint, reachable from inside each Azure VM at a well-known non-routable IP address, returns a token that can be used to access other Azure resources (e.g., storage accounts, Key Vault) without requiring credentials to be stored in the application code. This access token is automatically rotated by the managed identity service.

This token is then passed to the destination service to provide access, after verifying the token is valid with Azure AD.
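
As a minimal sketch of what this looks like from inside one of the App1 VMs: the code below requests a token for Azure Storage from the IMDS token endpoint. The endpoint address, api-version, `Metadata` header, and `resource` parameter are the documented IMDS conventions; only the target resource audience is a choice you would adapt.

```python
import requests

# IMDS is reachable only from inside the VM at this well-known,
# non-routable address.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

resp = requests.get(
    IMDS_TOKEN_URL,
    params={
        "api-version": "2018-02-01",
        "resource": "https://storage.azure.com/",  # audience of the target service
    },
    headers={"Metadata": "true"},  # required; blocks forwarded (SSRF-style) requests
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["access_token"]

# The token is then sent as a bearer token to the destination service.
print(token[:40], "...")
```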

Security Benefits: Using managed identities and IMDS avoids storing sensitive credentials in configuration files, environment variables, or the application code itself. This is a security best practice.

Relevance to the scenario: It directly fulfills the requirement to use managed identities for accessing Azure resources from App1.

Why Other Options are Incorrect:

Azure AD: While Azure AD is used to authenticate users and apps, the app itself (App1 running on the VM) does not need to perform a standard Azure AD login. The managed identity handles this for the application. The application obtains a token from IMDS; it does not call the Azure AD endpoint directly.

Azure Service Management: This is the deprecated classic (ASM) management endpoint for Azure. It is not the correct way to authenticate application-level access.

Microsoft identity platform: This is the umbrella identity platform (Azure AD endpoints, MSAL, and related libraries), but it is not used for direct token retrieval from within a VM that has a managed identity. App1 should not call the Microsoft identity platform endpoints directly; it should request a token from IMDS, which fronts the managed identity.

In Summary:

The correct endpoint for App1 to obtain an access token is the Azure Instance Metadata Service (IMDS). IMDS is designed specifically to provide applications running in Azure VMs with instance metadata and, when a managed identity is enabled, access tokens for other Azure services.

Important Notes for Azure 304 Exam:

Managed Identities: You MUST understand how managed identities work and how to use them. Be familiar with the two types of managed identity: System-assigned and User-assigned.

Azure Instance Metadata Service (IMDS): Know the purpose of IMDS and how it provides information about the Azure VM instance (including access tokens for managed identities).

Secure Authentication: Understand the security benefits of using managed identities instead of embedding secrets in code or configuration files.

Authentication Scenarios: Be able to recognize different authentication scenarios (user login vs. application access) and know which Azure service to use to achieve the required access pattern.

Service Principals: Be familiar with the concept of service principals and their relationship with application identity, but understand that a service principal is not directly needed here since the managed identity service creates and manages the service principals.

Key Takeaway: For applications running in Azure VMs that need to access other Azure resources, managed identities via the Azure IMDS are the recommended approach. The application does not authenticate with Azure AD directly; it gets a token from IMDS.

2
Q

HOTSPOT

You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
To register the users for Azure MFA, use:
Azure AD Identity Protection
Security defaults in Azure AD
Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure:
Grant control in capolicy1
Session control in capolicy1
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant

A

Correct Answers:

To register the users for Azure MFA, use: Azure AD Identity Protection

To enforce Azure MFA authentication, configure: Grant control in capolicy1

Explanation:

Azure AD Identity Protection:

Why it’s the right choice: Azure AD Identity Protection includes an MFA registration policy that requires users to register for Azure MFA at their next interactive sign-in. The case study states that all Litware users have Azure AD Premium P2 licenses, which is exactly the license that Identity Protection requires, so this is the intended registration mechanism.

How it Works: You enable the multifactor authentication registration policy in Identity Protection and scope it to the users who manage the production environment. Those users are then prompted to complete MFA registration within 14 days.

Relevance to the scenario: The requirement specifies that “users must authenticate by using Azure MFA when they sign in to the Azure portal.” The first step is to register the users, and the registration policy automates that step.

Grant Control in capolicy1:

Why it’s the right choice: The requirements specified that there is a Conditional Access Policy (capolicy1), therefore this is where we must configure the requirement to enforce MFA. Within the Grant controls of the conditional access policy you must require MFA to satisfy the requirement.

How it works: You will need to modify capolicy1 in order to ensure that all the required conditions are satisfied before being granted access to Azure Portal. In addition to enabling MFA, you may also need to specify other conditions, such as device type or location, to fulfill the full requirement for the conditional access policy.

Relevance to the scenario: The conditional access policy enforces access control based on the authentication and authorization rules specified in the requirements, which also specify that “users…must connect from a hybrid Azure AD-joined device”. This conditional access policy will enforce the requirement for MFA.
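
As a rough illustration of the grant-control shape, the sketch below updates a conditional access policy through the Microsoft Graph API so that it requires both MFA and a hybrid Azure AD-joined device. The policy object ID and token are placeholders; `domainJoinedDevice` is the Graph name for the hybrid-joined built-in control. This is a hedged sketch of the API shape, not a prescribed procedure.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
POLICY_ID = "<capolicy1-object-id>"  # placeholder; discover it with a GET on the same endpoint
token = "<Graph bearer token with Policy.ReadWrite.ConditionalAccess>"  # placeholder

# Require MFA *and* a hybrid Azure AD-joined device at sign-in.
patch_body = {
    "grantControls": {
        "operator": "AND",
        "builtInControls": ["mfa", "domainJoinedDevice"],
    }
}

resp = requests.patch(
    f"{GRAPH}/{POLICY_ID}",
    headers={"Authorization": f"Bearer {token}"},
    json=patch_body,
)
resp.raise_for_status()  # Graph returns 204 No Content on success
```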

Why Other Options are Incorrect:

To register the users for Azure MFA, use: Per-user MFA in the MFA management UI: Per-user MFA is the legacy way of forcing MFA on individual accounts. It enforces MFA rather than merely registering users, and Microsoft recommends against combining per-user MFA with conditional access because it prompts on every sign-in regardless of policy. It also creates more administrative effort than a registration policy.

To register the users for Azure MFA, use: Security defaults in Azure AD: Security defaults is a tenant-wide blanket setting that enables MFA and other baseline protections. It cannot be used together with custom conditional access policies such as capolicy1, so it is not an option here.

To enforce Azure MFA authentication, configure: Session control in capolicy1: Session controls in a conditional access policy govern the behavior of the user’s session (for example, sign-in frequency or app-enforced restrictions); they do not enforce MFA and are therefore the wrong mechanism.

To enforce Azure MFA authentication, configure: Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant: A sign-in risk policy responds to risky sign-in attempts. It does not require MFA on every Azure portal sign-in, so it cannot satisfy the requirement.

In Summary:

Register the users through the Identity Protection MFA registration policy (made possible by the P2 licenses), and then enforce MFA through the grant controls of the conditional access policy (capolicy1).

Important Notes for Azure 304 Exam:

Azure MFA: Know how to register users for MFA and how to enforce it. Be familiar with the Identity Protection MFA registration policy, per-user MFA (legacy), and security defaults, and when each applies.

Conditional Access Policies: You MUST know how conditional access policies work and how to configure access rules (including MFA requirements).

Grant Controls: Understand the use of grant controls to enforce authentication requirements.

Azure AD Identity Protection: Understand how Identity Protection works: its MFA registration policy registers users for MFA, while its risk policies respond to risky sign-ins. Neither replaces conditional access for enforcing MFA at sign-in.

Hybrid Azure AD Join: Be familiar with the benefits and requirements for Hybrid Azure AD-joined devices and how to use them in conjunction with conditional access policies.

Service Selection: Be able to pick the correct service for each task, and understand that setting up MFA and enforcing MFA are distinct steps that require different tools.

3
Q

Azure Environment

Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).

The litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.

On-Premises Environment

The on-premises network of Litware contains the resources shown in the following table.

Network Environment

Litware has ExpressRoute connectivity to Azure.

Planned Changes and Requirements

Litware plans to implement the following changes:

Migrate DB1 and DB2 to Azure.

Migrate App1 to Azure virtual machines.

Migrate the external storage used by App1 to Azure Storage.

Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.

Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

Only users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using

Azure Multi-Factor Authentication (MFA).

The Network Contributor built-in RBAC role must be used to grant permissions to the network administrators for all the virtual networks in all the Azure subscriptions.

To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.

RBAC roles must be applied at the highest level possible.

Resiliency Requirements

Litware identifies the following resiliency requirements:

Once migrated to Azure, DB1 and DB2 must meet the following requirements:

Maintain availability if two availability zones in the local Azure region fail.

Fail over automatically.

Minimize I/O latency.

App1 must meet the following requirements:

Be hosted in an Azure region that supports availability zones.

Be hosted on Azure virtual machines that support automatic scaling.

Maintain availability if two availability zones in the local Azure region fail.

Security and Compliance Requirements

Litware identifies the following security and compliance requirements:

Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.

On-premises users and services must be able to access the Azure Storage account that will host the data in App1.

Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.

All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

App1 must NOT share physical hardware with other workloads.

Business Requirements

Litware identifies the following business requirements:

Minimize administrative effort.

Minimize costs.

After you migrate App1 to Azure, you need to enforce the data modification requirements to meet the security and compliance requirements.

What should you do?

Answers
A. Create an access policy for the blob service.
B. Implement Azure resource locks.
C. Create Azure RBAC assignments.
D. Modify the access level of the blob service.

A

The Goal

As before, the primary goal is to enforce this requirement:

“Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.”

Evaluating the Options

Let’s analyze each option:

A. Create an access policy for the blob service.

Why it’s correct: In Azure Blob Storage, immutable storage (write once, read many, or WORM) is configured through an access policy on the blob container. A time-based retention policy allows new blobs to be written but prevents modification and deletion of both new and existing blobs for the retention interval, which is exactly what the requirement describes. Setting the retention interval to three years enforces the requirement, and the policy can be locked so that not even account administrators can shorten or remove it.

Scope note: Do not confuse this with a stored access policy used for shared access signatures. In the Azure portal, the container’s Access policy blade is where the immutable storage policy is created.

B. Implement Azure resource locks.

Why it’s NOT a good fit: Resource locks prevent deletion or modification of the Azure resources themselves (the management plane); they do not prevent the data inside a storage account from being modified.

C. Create Azure RBAC assignments.

Why it’s NOT a good fit: Like resource locks, RBAC controls the permissions of who can do what with the Azure resources. RBAC does not provide a mechanism for ensuring immutability of the data.

D. Modify the access level of the blob service.

Why it’s NOT a good fit: The access level of a container (private, blob, or container) controls anonymous read access to the data; it does not control whether the data can be modified.

The Correct Answer

A. Create an access policy for the blob service is correct. Configured as a time-based retention policy on the container that holds the App1 data, it enforces the three-year WORM requirement.

Important Note for the AZ-305 Exam

The wording of this question is a common trap: “access policy” sounds like a permissions feature, but for blob storage it is also the mechanism that hosts immutability (time-based retention and legal hold) policies.

Here’s what you need to remember for these types of questions:

Immutable Storage: Know that time-based retention policies and legal holds are configured as access policies at the container level, and that a locked time-based retention policy cannot be shortened or removed.

Management Plane vs. Data Plane: Resource locks and RBAC protect resources and permissions; they do not make data immutable.

Consider the Intent: The requirement is to prevent modification of data for three years; map that intent to the immutability feature.

Process of Elimination: Discard answers that operate on the wrong plane (resource locks, RBAC) or address the wrong concern (anonymous access levels).

How the Access Policy Satisfies the Requirement:

Create the Container: Deploy the container that will hold the App1 data.

Add a Time-Based Retention Policy: Set the retention interval to three years (1095 days). New blobs can still be written, but no blob can be modified or deleted until its retention interval expires.

Lock the Policy: Once validated, lock the policy so that it cannot be weakened; optionally enable protected append writes if App1 appends to existing blobs. A configuration sketch follows.
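
As a rough illustration (not a prescribed procedure), the time-based retention policy could be created through the ARM REST API as sketched below. The subscription, resource group, account, container, and token are placeholders; the `immutabilityPolicies/default` resource path and property names follow the published Microsoft.Storage API.

```python
import requests

SUB, RG, ACCOUNT, CONTAINER = "<sub-id>", "<rg>", "<account>", "app1-data"  # placeholders
token = "<ARM bearer token>"  # e.g. obtained via a managed identity

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Storage/storageAccounts/{ACCOUNT}"
    f"/blobServices/default/containers/{CONTAINER}"
    "/immutabilityPolicies/default"
)

# Three years ~= 1095 days. allowProtectedAppendWrites lets new data be
# appended while existing data stays read-only, matching the requirement.
body = {"properties": {"immutabilityPeriodSinceCreationInDays": 1095,
                       "allowProtectedAppendWrites": True}}

resp = requests.put(url, params={"api-version": "2022-09-01"},
                    headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
```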

In summary:

A. Create an access policy for the blob service is the correct answer. A time-based retention (immutability) policy, created as a container access policy, ensures that new data can be written while the modification and deletion of new and existing data is prevented for three years.

4
Q

HOTSPOT

You plan to migrate App1 to Azure.

You need to recommend a storage solution for App1 that meets the security and compliance requirements.

Which type of storage should you recommend, and how should you recommend configuring the storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
Premium page blobs
Premium file shares
Standard general-purpose v2
Configuration:
NFSv3
Large file shares
Hierarchical namespace

A

Understanding the Requirements

Let’s recap the key requirements for App1’s storage:

Security and Compliance:

“Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.” This is about data immutability.

“On-premises users and services must be able to access the Azure Storage account that will host the data in App1.”

“Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.”

Functionality:

The case study states that App1 currently uses an external storage solution that provides Apache Hadoop-compatible data storage and supports POSIX access control list (ACL) file-level permissions. The Azure storage solution must offer the same capabilities.

Analyzing the Options

Let’s look at each option and determine if they fit based on requirements:

Storage Account Type

Premium page blobs:

Pros: Page blobs are optimized for random read/write operations and high I/O. Often used for virtual machine disks, databases, and some specialized workloads.

Cons: Page blobs do not support immutability policies and are intended for random-access workloads such as VM disks, not application data. They can be more expensive than other types of storage.

Suitability: Not suitable for this scenario.

Premium file shares:

Pros: Premium file shares are designed for high performance access with low latency.

Cons: Premium file shares (Azure Files) do not support blob immutability policies or Hadoop-compatible access, and they are the most expensive option listed.

Suitability: Not suitable for this scenario.

Standard general-purpose v2:

Pros: Cost-effective for storing general-purpose data, supports immutability policies, supports the hierarchical namespace (Azure Data Lake Storage Gen2), multiple access tiers, and the other features needed in this scenario.

Cons: Less I/O performance than premium storage options.

Suitability: Very suitable for this scenario.

Configuration Options

NFSv3:

Pros: Lets clients mount blob storage by using the NFS 3.0 network protocol.

Cons: It is not the Hadoop-compatible (ABFS) interface that App1’s current storage solution provides, and on-premises connectivity will be delivered through a private endpoint over ExpressRoute regardless of the protocol chosen.

Suitability: Not the configuration the requirements call for.

Large file shares:

Pros: Allows large file shares.

Cons: Only raises the size limit of Azure file shares; it does not apply to blob storage and does not provide immutability.

Suitability: Not relevant to this scenario.

Hierarchical namespace:

Pros: Provides a file structure organization which allows ACLs and permissions management.

Cons: Hierarchical namespace is not related to immutable data.

Suitability: Not relevant to the data immutability requirement.

The Correct Choices

Based on the analysis above, here is how the hotspot should be answered:

Storage Account Type: Standard general-purpose v2

Configuration: Hierarchical namespace

Explanation

Standard general-purpose v2 is the best option because it is cost effective and supports immutability policies for the data.

Hierarchical namespace makes the account an Azure Data Lake Storage Gen2 account, matching the Hadoop-compatible, POSIX ACL storage that App1 uses today. On-premises access and the blocked public endpoint are handled separately with a private endpoint over ExpressRoute.
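
To make the hierarchical namespace choice concrete, here is a minimal sketch of setting a POSIX-style ACL on a directory in an ADLS Gen2 account using the azure-storage-file-datalake SDK. The account URL, container, and directory names are placeholders, and the directory is assumed to already exist.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Connect to a hierarchical-namespace (ADLS Gen2) account.
service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",  # placeholder account
    credential=DefaultAzureCredential(),
)

filesystem = service.get_file_system_client("app1-data")   # hypothetical container
directory = filesystem.get_directory_client("ingest")      # assumed to exist

# POSIX-style ACL string: owner rwx, group r-x, no access for others.
directory.set_access_control(acl="user::rwx,group::r-x,other::---")
```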

Important Notes for AZ-305

Immutability: Ensure you are familiar with Azure storage’s immutability features and how they are configured.

Storage Account Types: Understand the differences between the various storage account types (general-purpose v2, blob, premium), and when to use them.

Networking for Storage: Understand how to configure access for on-prem resources (Private Endpoints, VNET Service Endpoints, NFSv3).

Cost Optimization: Always consider cost optimization when selecting storage options.

Read the Entire Question: Read carefully for both explicit and implicit requirements. Here the decisive details come from the case study: App1’s existing storage is Hadoop-compatible with POSIX ACLs, and the data must be immutable for three years.


5
Q

HOTSPOT

You plan to migrate DB1 and DB2 to Azure.

You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.

What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose

A

Understanding the Requirements

Here are the key requirements we need to address for the database migration:

Resiliency:

“Maintain availability if two availability zones in the local Azure region fail.”

“Fail over automatically.”

“Minimize I/O latency.”

Business:

“Minimize administrative effort.”

“Minimize costs.”

Analyzing the Options

Let’s evaluate each option against these requirements:

Database Options

A single Azure SQL database:

Pros: Easy to set up; zone redundancy with automatic failover is available in the Business Critical tier.

Cons: Each database is provisioned and billed separately, so two single databases cost more than two databases sharing a pool’s resources.

Suitability: Can meet the resiliency requirements, but it does not minimize costs for two databases.

Azure SQL Managed Instance:

Pros: Provides an environment closer to the on-premises SQL Server engine, easier to lift-and-shift. Has HA and DR options.

Cons: More expensive than a single Azure SQL database or an elastic pool, and there is more to manage.

Suitability: Could work, but it does not minimize costs compared to an elastic pool.

An Azure SQL Database elastic pool:

Pros: Cost-effective for multiple databases, simplified management, and (in the Business Critical tier) zone redundancy with automatic cross-zone failover; resources are shared across the pooled databases.

Cons: Less granular control over the SQL server engine compared to managed instances.

Suitability: Excellent for this scenario. DB1 and DB2 can share the pool’s resources, minimizing costs, while the pool’s service tier provides the required multi-zone resiliency and automatic failover.

Service Tier Options

Hyperscale:

Pros: Very high scalability and performance, excellent for high-throughput workloads, excellent for business-critical databases.

Cons: Higher cost, and its scale-out capabilities are not required here.

Suitability: Not suitable for this scenario; it is not cost efficient and does not offer better resiliency or latency than Business Critical for this workload.

Business Critical:

Pros: Designed for mission-critical applications. Highest I/O performance and lowest latency thanks to local SSD storage, with zone-redundant deployment and automatic failover across availability zones.

Cons: Higher cost compared to other service tiers.

Suitability: Highly suitable, as it provides the resiliency requirement while also meeting the I/O latency requirement.

General Purpose:

Pros: Cost-effective option that balances price and performance for general-purpose databases.

Cons: Uses remote storage, so its I/O latency is markedly higher than Business Critical’s local SSD storage.

Suitability: Not suitable, as it cannot satisfy the requirement to minimize I/O latency.

The Correct Choices

Based on the analysis, here’s how the hotspot should be answered:

Database: An Azure SQL Database elastic pool

Service tier: Business Critical

Explanation

Elastic pool provides the best combination of cost efficiency, management ease, and resiliency when compared to a single database or managed instance.

Business Critical best addresses the requirements for availability zones, automatic failover and I/O latency.
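
As a hedged illustration of how these two choices combine, the sketch below creates a zone-redundant Business Critical elastic pool through the ARM REST API. Subscription, resource group, server, pool name, region, and capacity are placeholders or assumptions; the `zoneRedundant` property and `BC_Gen5` SKU follow the published Microsoft.Sql API.

```python
import requests

SUB, RG, SERVER = "<sub-id>", "<rg>", "<sql-server>"  # placeholders
token = "<ARM bearer token>"

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Sql/servers/{SERVER}/elasticPools/db-pool"
)

body = {
    "location": "eastus",
    "sku": {"name": "BC_Gen5", "tier": "BusinessCritical", "capacity": 4},
    # zoneRedundant spreads the pool's replicas across availability zones,
    # providing automatic failover if zones fail.
    "properties": {"zoneRedundant": True},
}

resp = requests.put(url, params={"api-version": "2021-11-01"},
                    headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
```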

Important Notes for the AZ-305 Exam

Azure SQL Options: Thoroughly understand the differences between single Azure SQL Databases, Elastic Pools, and Managed Instances. Know the use cases for each.

Service Tiers: Be familiar with the performance and cost implications of each service tier: General Purpose, Business Critical, and Hyperscale.

Availability Zones and HA/DR: Be able to implement cross-AZ failover and understand the underlying concepts for HA and DR.

Cost vs. Performance: Be able to analyze the trade-offs between cost and performance, and how to choose the right configuration for a given workload.

Reading Prompt Carefully: Take care when understanding requirements to ensure you are addressing all necessary concerns.

6
Q

You plan to migrate App1 to Azure.

You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.

What should you include in the recommendation?

a private endpoint
a service endpoint that has a service endpoint policy
Azure public peering for an ExpressRoute circuit
Microsoft peering for an ExpressRoute circuit

A

Understanding the Requirements

Here are the key networking-related requirements:

Security:

“Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.”

Connectivity:

“On-premises users and services must be able to access the Azure Storage account that will host the data in App1.”

Existing Environment:

“Litware has ExpressRoute connectivity to Azure.”

Analyzing the Options

Let’s evaluate each option against these requirements:

a private endpoint

Pros: Provides a private IP address within the virtual network for the storage account, thus preventing public access, which meets the security requirement. Enables on-prem resources to connect via the private IP over the express route connection.

Cons: Can increase cost slightly, requires virtual network integration.

Suitability: Highly suitable. It meets the security requirement of preventing public access and allows on-premises users to access the storage account over the private network and ExpressRoute connection.

a service endpoint that has a service endpoint policy

Pros: Allows VNETs to access the storage account without exposing it to the public internet.

Cons: Does not allow for on-premises resources to access the storage account.

Suitability: Not suitable. Service endpoints can be used only by traffic originating in an Azure virtual network; on-premises traffic cannot use them and would still have to reach the storage account’s public endpoint.

Azure public peering for an ExpressRoute circuit

Pros: Can provide access to Azure public services, such as storage, via the ExpressRoute connection.

Cons: Does not block access from the public internet, which does not meet the security requirements.

Suitability: Not suitable because public peering is not a secure method to access storage.

Microsoft peering for an ExpressRoute circuit

Pros: Allows private access to Azure resources, including Azure Storage.

Cons: Does not prevent access through the storage account’s public endpoint; additional configuration is required for that.

Suitability: Microsoft peering is how on-premises traffic reaches Azure public service IPs over ExpressRoute, but by itself it is not a configuration that prevents public access.

The Correct Recommendation

Based on the analysis, the correct solution is:

a private endpoint

Explanation

Private Endpoints provide a network interface (a private IP address) for the storage account directly within a virtual network, ensuring that access to the storage is limited to the private network. On-premises traffic reaches that private IP through ExpressRoute private peering.

By using a private endpoint, you effectively prevent access from the public internet, fulfilling the security requirement.
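
As a rough sketch (names and api-version are assumptions against the published Microsoft.Network API), a private endpoint for the blob sub-resource of the storage account could be created like this:

```python
import requests

SUB, RG = "<sub-id>", "<rg>"  # placeholders
token = "<ARM bearer token>"
STORAGE_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
              "/providers/Microsoft.Storage/storageAccounts/<account>")
SUBNET_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
             "/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>")

url = ("https://management.azure.com"
       f"/subscriptions/{SUB}/resourceGroups/{RG}"
       "/providers/Microsoft.Network/privateEndpoints/pe-app1-storage")

body = {
    "location": "eastus",
    "properties": {
        "subnet": {"id": SUBNET_ID},
        "privateLinkServiceConnections": [{
            "name": "app1-blob",
            "properties": {
                "privateLinkServiceId": STORAGE_ID,
                "groupIds": ["blob"],  # the blob sub-resource of the account
            },
        }],
    },
}

resp = requests.put(url, params={"api-version": "2021-05-01"},
                    headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
# Separately, set properties.publicNetworkAccess = "Disabled" on the storage
# account so its public endpoint is blocked outright.
```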

Why other options are not correct:

Service endpoints only restrict access to traffic coming from selected virtual networks; they do not prevent on-premises systems from going through the public endpoint of the storage account.

Public peering is used to access public Azure services, it does not fulfill the security requirements of preventing access from the public internet.

Microsoft peering allows on-prem systems to access resources through private IP addresses, however it does not prevent on-prem resources from also using the public endpoint. Private Endpoints are needed to block the public endpoint.

Important Notes for the AZ-305 Exam

Private Endpoints vs Service Endpoints: Know the fundamental differences. Service endpoints provide network isolation within Azure networks, but don’t prevent public access. Private endpoints, on the other hand, allow resources within VNETs to communicate to resources via private IP addresses.

ExpressRoute Peering: Understand the differences between Microsoft, Azure public and private peering.

Security and Compliance: Prioritize solutions that align with security requirements. Blocking public access is a common ask.

Read Requirements Carefully: Ensure you meet all requirements including the networking and security.

7
Q

DRAG DROP

You need to configure an Azure policy to ensure that the Azure SQL databases have TDE enabled. The solution must meet the security and compliance requirements.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
Create an Azure policy definition that uses the deployIfNotExists effect.
Create a user-assigned managed identity.
Invoke a remediation task.
Create an Azure policy assignment.
Create an Azure policy definition that uses the Modify effect.
Answer Area

A

Understanding the Goal

The goal is to use Azure Policy to automatically enable TDE on all Azure SQL databases within the scope of the policy.

Key Concepts

Azure Policy: Allows you to create, assign, and manage policies that enforce rules across your Azure resources.

Policy Definition: Specifies the conditions that must be met and the actions to take if the conditions are not met.

Policy Assignment: Applies the policy definition to a specific scope (subscription, resource group, etc.).

deployIfNotExists Effect: This policy effect will deploy an ARM template if the resource does not have the configuration (TDE enabled).

Modify Effect: This effect will modify the resource to enforce the condition if it does not exist.

Remediation Task: A process for correcting resources that are not compliant with the policy.

User-Assigned Managed Identity: An identity object in Azure which allows for RBAC permissions and avoids the need for storing credentials for an application.

Steps in the Correct Sequence

Here’s the correct sequence of actions, with explanations:

Create an Azure policy definition that uses the deployIfNotExists effect.

Why? This is the first step. You need to define what the policy should do. For TDE, deployIfNotExists is used to deploy a configuration if it’s missing. The deployIfNotExists will deploy an ARM template that enables TDE on the database.

This step specifies the “rule” that will be enforced.

Create an Azure policy assignment.

Why? After defining the policy, you need to assign it to a scope, such as a subscription or a resource group. This step specifies where the policy is applied.

This tells Azure what needs to be checked against the policy.

Invoke a remediation task.

Why? The deployIfNotExists effect evaluates resources as they are created or updated, so new resources are remediated automatically. Existing non-compliant resources are corrected only when a remediation task is run, so you invoke one to bring the current databases into compliance.

The Correct Drag-and-Drop Order

Here’s how you should arrange the actions in the answer area:

Create an Azure policy definition that uses the deployIfNotExists effect.

Create an Azure policy assignment.

Invoke a remediation task.
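
For orientation, here is a trimmed sketch (as a Python dict) of what such a deployIfNotExists policy rule looks like. The built-in “Deploy SQL DB transparent data encryption” policy has this general shape; the role definition ID shown is assumed to be SQL DB Contributor, and the deployment template is omitted for brevity.

```python
# Trimmed sketch of a deployIfNotExists policy rule for TDE.
policy_rule = {
    "if": {
        "field": "type",
        "equals": "Microsoft.Sql/servers/databases",
    },
    "then": {
        "effect": "deployIfNotExists",
        "details": {
            # The related resource whose existence/state is checked.
            "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
            "existenceCondition": {
                "field": "Microsoft.Sql/transparentDataEncryption.status",
                "equals": "Enabled",
            },
            # Role granted to the assignment's managed identity
            # (assumed: SQL DB Contributor).
            "roleDefinitionIds": [
                "/providers/Microsoft.Authorization/roleDefinitions/"
                "9b7fa17d-e63e-47b0-bb0a-15c516ac86ec"
            ],
            "deployment": {
                # ARM template that sets status = Enabled (omitted for brevity).
            },
        },
    },
}
```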

Why Other Options are Incorrect in this context:

Create a user-assigned managed identity: A deployIfNotExists assignment does need a managed identity to perform the remediation deployment, but a system-assigned identity is created automatically when you create the policy assignment and grant it the required role. Creating a user-assigned managed identity is therefore unnecessary and outside the scope of the task.

Create an Azure policy definition that uses the Modify effect: Although Modify is used in Azure policies, it is not relevant in the configuration of TDE. deployIfNotExists is a better approach because TDE needs to be enabled, which requires a deployment.

Important Notes for the AZ-305 Exam

Azure Policy Effects: Be extremely familiar with different policy effects, especially deployIfNotExists, audit, deny, and modify.

Policy Definition vs. Assignment: Understand the difference between defining a policy and applying it to resources.

Remediation: Understand how to use remediation tasks to fix non-compliant resources.

Scope: Be able to set the appropriate scope for policy assignments.

Managed Identities: Know how to use managed identities for secure resource management with Azure policies.

8
Q

HOTSPOT

You plan to migrate App1 to Azure.

You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Number of host groups:
1
2
3
6
Number of virtual machine scale sets:
0
1
3

A

Understanding the Requirements

Here are the key resiliency requirements for App1:

“Be hosted in an Azure region that supports availability zones.”

“Be hosted on Azure virtual machines that support automatic scaling.”

“Maintain availability if two availability zones in the local Azure region fail.”

“App1 must NOT share physical hardware with other workloads.” (This implies using dedicated hosts)

Key Concepts

Azure Dedicated Hosts: Provide physical servers dedicated to your Azure virtual machines.

Host Groups: Collections of dedicated hosts.

Virtual Machine Scale Sets (VMSS): Allow you to create and manage a group of identical load-balanced virtual machines.

Availability Zones: Physically separate locations within an Azure region that provide high availability.

Analyzing the Options

Let’s analyze each option given the requirements:

Number of Host Groups

1: Not sufficient. A host group is pinned to a single availability zone, so one host group cannot survive even one zone failure, let alone two.

2: Not sufficient. Two zonal host groups cover only two availability zones; if both of those zones fail, App1 is down. Surviving a two-zone failure requires presence in three zones.

3: Sufficient. Using three host groups spread across three zones ensures availability if two availability zones fail.

6: Not needed. Three zones is sufficient to meet the requirements. Using 6 host groups is unnecessary and increases cost.

Number of Virtual Machine Scale Sets

0: Not suitable. VMSS is required to automate scale out and scaling across availability zones.

1: Not suitable. A scale set that is deployed to a dedicated host group is pinned to that group’s single availability zone, so one scale set cannot span all three zones.

3: Suitable. Three VMSS spread across three availability zones is suitable for automated scaling.

The Correct Choices

Here’s how the hotspot should be answered:

Number of host groups: 3

Number of virtual machine scale sets: 3

Explanation

3 Host Groups: Because the application needs to remain available during two zone outages, you need dedicated hosts in 3 different availability zones.

3 Virtual Machine Scale Sets: You need to distribute App1’s virtual machines across the three zones, and VMSS provides the required automatic scaling. One scale set is placed on the host group in each availability zone; a sketch of the host group layout follows.
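
The sketch below creates one zonal dedicated host group per availability zone through the ARM REST API. Names, region, and token are placeholders; the `zones` and `platformFaultDomainCount` properties follow the published Microsoft.Compute API.

```python
import requests

SUB, RG = "<sub-id>", "<rg>"  # placeholders
token = "<ARM bearer token>"

# One zonal host group per availability zone; names are hypothetical.
for zone in ("1", "2", "3"):
    url = (
        "https://management.azure.com"
        f"/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.Compute/hostGroups/app1-hg-z{zone}"
    )
    body = {
        "location": "eastus2",
        "zones": [zone],  # pins the host group to a single availability zone
        "properties": {"platformFaultDomainCount": 1},
    }
    resp = requests.put(url, params={"api-version": "2022-08-01"},
                        headers={"Authorization": f"Bearer {token}"}, json=body)
    resp.raise_for_status()
```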

Why Other Options are Not Correct

Using 1 or 2 host groups would not meet the availability zone redundancy requirement.

Having 0 or 1 virtual machine scale sets does not provide the needed HA by using multiple zones.

Important Notes for the AZ-305 Exam

Dedicated Hosts: Understand the use cases for dedicated hosts, and how they differ from standard VMs.

Host Groups: Understand the use case for host groups and what they are used for.

Virtual Machine Scale Sets: Understand how to implement automatic scaling using VMSS.

Availability Zones: Be able to design highly available solutions using Availability Zones.

High Availability Requirements: Read the requirements carefully to understand what the solution needs to provide.

Cost Optimization: Avoid implementing unnecessary complexity and cost.

9
Q

You need to implement the Azure RBAC role assignments for the Network Contributor role.

The solution must meet the authentication and authorization requirements.

What is the minimum number of assignments that you must use?

1
2
5
10
15

A

Understanding the Requirements

Here’s the key requirement:

“The Network Contributor built-in RBAC role must be used to grant permissions to the network administrators for all the virtual networks in all the Azure subscriptions.”

“RBAC roles must be applied at the highest level possible.”

Key Concepts

Azure RBAC: Role-Based Access Control, used to manage access to Azure resources.

Network Contributor Role: A built-in role that grants permissions to manage virtual networks and other networking resources.

Role Assignment: The process of associating a role with a user, group, or service principal at a particular scope.

Scope: The level at which a role assignment applies (e.g., subscription, resource group, resource).

Hierarchy: RBAC roles are inherited down through the resource hierarchy. A role assigned at a management group applies to all subscriptions beneath it, and a role assigned at a subscription applies to all child resources within that subscription.

Analyzing the Scenario

Subscriptions: Litware has 10 Azure subscriptions in the litware.com tenant and 5 in the dev.litware.com tenant, for a total of 15 subscriptions across two Azure AD tenants.

Highest Level Possible: Every tenant has a root management group that contains all of its subscriptions, and RBAC roles can be assigned at management group scope. Assigning Network Contributor at the tenant root management group therefore covers all the virtual networks in every subscription of that tenant with a single assignment.

Tenant Boundary: A management group cannot span Azure AD tenants, so one assignment cannot cover both litware.com and dev.litware.com.

Determining the Minimum Number of Assignments

One assignment at the root management group of each tenant covers all 15 subscriptions, and no single scope covers both tenants. Therefore:

2 role assignments are needed (one per tenant root management group); a sketch follows.
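
As a hedged illustration of management-group-scoped assignments, the sketch below creates one Network Contributor assignment per tenant root management group through the ARM REST API. The group object ID and management group IDs are placeholders (root management group IDs default to the tenant ID), and each PUT needs a token issued for the corresponding tenant.

```python
import requests
import uuid

token = "<ARM bearer token>"  # placeholder; must be issued for the target tenant
GROUP_OBJECT_ID = "<network-admins-group-object-id>"  # hypothetical Azure AD group
NETWORK_CONTRIBUTOR = ("/providers/Microsoft.Authorization/roleDefinitions/"
                       "4d97b98b-1d4f-4787-a291-c67834d212e7")  # built-in role GUID

# One assignment per tenant, scoped at each tenant root management group.
for mg_id in ("<litware-tenant-id>", "<dev-litware-tenant-id>"):
    scope = f"/providers/Microsoft.Management/managementGroups/{mg_id}"
    url = (f"https://management.azure.com{scope}"
           f"/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}")
    body = {"properties": {
        "roleDefinitionId": NETWORK_CONTRIBUTOR,
        "principalId": GROUP_OBJECT_ID,
        "principalType": "Group",
    }}
    resp = requests.put(url, params={"api-version": "2022-04-01"},
                        headers={"Authorization": f"Bearer {token}"}, json=body)
    resp.raise_for_status()
```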

Why Other Options Are Incorrect

1: Not sufficient. A single management group cannot span both Azure AD tenants, so one assignment cannot cover all 15 subscriptions.

5, 10, or 15: More than the minimum. Per-subscription assignments would work (15 of them), but they violate the requirement to apply RBAC roles at the highest level possible; the root management group of each tenant is higher.

The Correct Answer

The correct answer is:

2

Important Notes for the AZ-305 Exam

RBAC Scope: Understand how RBAC scopes work and the impact of assigning roles at different levels.

Built-in Roles: Be familiar with common built-in roles like Network Contributor, Owner, Reader, Contributor, etc.

Least Privilege: Always strive for the least privilege principle, only granting the necessary access to users or groups.

Hierarchy: Understand how resource hierarchies work and how RBAC role assignments are inherited down the tree.

Minimize Administrative Effort: Aim to reduce complexity and administrative burden.

Management Groups: Remember that every tenant has a root management group, that RBAC roles can be assigned at management group scope in addition to the resource, resource group, and subscription levels, and that management groups cannot span tenants.

10
Q

HOTSPOT

You plan to migrate App1 to Azure.

You need to estimate the compute costs for App1 in Azure. The solution must meet the security and compliance requirements.

What should you use to estimate the costs, and what should you implement to minimize the costs? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To estimate the costs, use:
Azure Advisor
The Azure Cost Management Power BI app
The Azure Total Cost of Ownership (TCO) calculator
Implement:
Azure Reservations
Azure Hybrid Benefit
Azure Spot Virtual Machine pricing

A

Understanding the Requirements

Compute Cost Estimation: You need a method to accurately estimate the cost of the Azure virtual machines that will host App1.

Cost Minimization: You need to identify strategies to reduce the compute costs, especially keeping the security and compliance in mind.

Dedicated Hosts: The prompt also mentioned that App1 must be on dedicated hosts.

Analyzing the Options

Let’s evaluate each option based on how well it fits the requirement:

Cost Estimation Tools

Azure Advisor:

Pros: Analyzes your existing Azure resources and recommends cost optimization options.

Cons: Primarily for existing resources and does not work in the planning phase.

Suitability: Not the best fit for cost estimation before the environment is setup.

The Azure Cost Management Power BI app:

Pros: Allows for visualization and analysis of cost data from your Azure subscriptions, based on past usage.

Cons: Not suitable for cost estimation of future resources, but is used for analyzing actual costs.

Suitability: Not the best fit for cost estimation before the environment is setup.

The Azure Total Cost of Ownership (TCO) calculator:

Pros: Designed to compare on-premises and Azure costs and allows cost estimation of future resources.

Cons: Might need some initial data input to estimate accurately.

Suitability: Very suitable for cost estimation of future Azure resources, and is useful for planning.

Cost Minimization Options

Azure Reservations:

Pros: Provides significant discounts when you commit to using specific Azure resources for a 1 or 3 year term.

Cons: Requires commitment, might not be flexible.

Suitability: Very suitable for minimizing long term costs.

Azure Hybrid Benefit:

Pros: Lets you apply existing on-premises Windows Server or SQL Server licenses (or RHEL/SUSE subscriptions) to reduce the cost of Azure virtual machines.

Cons: App1 runs on Ubuntu 18.04 virtual machines, so Litware has no qualifying licenses to bring for this workload.

Suitability: Not applicable to App1.

Azure Spot Virtual Machine pricing:

Pros: Offers very significant discounts for VMs.

Cons: Spot VMs can be evicted with short notice, unsuitable for production workloads. Does not meet the application’s production needs.

Suitability: Not suitable for a production application.

The Correct Choices

Here’s how the hotspot should be answered:

To estimate the costs, use: The Azure Total Cost of Ownership (TCO) calculator

Implement: Azure Reservations

Explanation

Azure TCO Calculator: The TCO calculator is the tool for estimating costs during the planning phase, when comparing the current on-premises deployment with the planned Azure deployment.

Azure Reservations: Azure Dedicated Hosts are a stable, long-running commitment, so reserving them for a one- or three-year term significantly reduces the compute cost while meeting the dedicated-hardware requirement.

Why Other Options Are Not Suitable

Azure Advisor and Azure Cost Management analyze current spend; they do not estimate the cost of future resources.

Azure Spot VMs can be evicted at short notice and are unsuitable for this dedicated-host production workload.

Azure Hybrid Benefit does not apply, because App1 runs on Ubuntu rather than Windows Server.

Important Notes for the AZ-305 Exam

Cost Estimation Tools: Be familiar with tools like the TCO calculator and the Azure pricing calculator, and how to use them for estimates.

Cost Optimization Techniques: Understand various methods for cost optimization, such as reservations, hybrid benefit, and spot VMs.

Workload Suitability: Recognize that not all cost-saving methods are appropriate for every workload (e.g., spot VMs are not for production).

Dedicated Hosts: Remember that the prompt also required dedicated hosts, so the cost options selected must apply to dedicated hosts (reservations do; Spot pricing does not).

11
Q

Existing Environment: Technical Environment

The on-premises network contains a single Active Directory domain named contoso.com.

Contoso has a single Azure subscription.

Existing Environment: Business Partnerships

Contoso has a business partnership with Fabrikam, Inc. Fabrikam users access some Contoso applications over the internet by using Azure Active Directory (Azure AD) guest accounts.

Requirements: Planned Changes

Contoso plans to deploy two applications named App1 and App2 to Azure.

Requirements: App1

App1 will be a Python web app hosted in Azure App Service that requires a Linux runtime.

Users from Contoso and Fabrikam will access App1.

App1 will access several services that require third-party credentials and access strings.

The credentials and access strings are stored in Azure Key Vault.

App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.

App1 has the following data requirements:

✑ Each instance will write data to a data store in the same availability zone as the instance.

✑ Data written by any App1 instance must be visible to all App1 instances.

App1 will only be accessible from the internet. App1 has the following connection requirements:

✑ Connections to App1 must pass through a web application firewall (WAF).

✑ Connections to App1 must be active-active load balanced between instances.

✑ All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.

Every hour, you will run a maintenance task by invoking a PowerShell script that copies files from all the App1 instances. The PowerShell script will run from a central location.

Requirements: App2

App2 will be a .NET app hosted in App Service that requires a Windows runtime.

App2 has the following file storage requirements:

✑ Save files to an Azure Storage account.

✑ Replicate files to an on-premises location.

✑ Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.

You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.

Application Development Requirements

Application developers will constantly develop new versions of App1 and App2.

The development process must meet the following requirements:

✑ A staging instance of a new application version must be deployed to the application host before the new version is used in production.

✑ After testing the new version, the staging version of the application will replace the production version.

✑ The switch to the new application version from staging to production must occur without any downtime of the application.

Identity Requirements

Contoso identifies the following requirements for managing Fabrikam access to resources:


✑ The solution must minimize development effort.

Security Requirement

All secrets used by Azure services must be stored in Azure Key Vault.

Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.

DRAG DROP

You need to recommend a solution that meets the file storage requirements for App2.

What should you deploy to the Azure subscription and the on-premises network? To answer, drag the appropriate services to the correct locations. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Services
Azure Blob Storage
Azure Data Box
Azure Data Box Gateway
Azure Data Lake Storage
Azure File Sync
Azure Files

Answer Area
Azure subscription: Service
On-premises network: Service

A

Understanding the Requirements

Here are the key file storage requirements for App2:

Azure Storage: Store files in an Azure Storage account.

On-premises Replication: Replicate the files to an on-premises location.

On-premises SMB Access: Ensure on-premises clients can read the files over the LAN using the SMB protocol.

Analyzing the Options

Let’s evaluate each service based on its suitability for these requirements:

Azure Blob Storage:

Pros: Scalable and cost-effective for storing large amounts of unstructured data.

Cons: Not directly accessible via SMB, and not easy for on-premises replication.

Suitability: Not suitable to directly meet the SMB access requirement.

Azure Data Box:

Pros: Used for large data transfers into and out of Azure, and great when network bandwidth is limited.

Cons: Not used for continuous synchronization of data, and does not provide SMB access.

Suitability: Not suitable for this scenario, as it is used for initial data transfer and not ongoing synchronization.

Azure Data Box Gateway:

Pros: A virtual appliance that sits on premises and transfers data to/from Azure, using local caching.

Cons: Does not directly provide SMB access to clients, but rather transfers data from on-prem to the cloud.

Suitability: Not suitable for directly meeting the requirement for local SMB share.

Azure Data Lake Storage:

Pros: Designed for big data analytics workloads.

Cons: Not optimized for transactional file storage or direct SMB access.

Suitability: Not suitable for this transactional file storage scenario, where a local SMB share is needed.

Azure File Sync:

Pros: Synchronizes files between Azure File Shares and on-premises Windows Servers.

Cons: Requires a Windows Server on-premises to be installed and configured as a synchronization endpoint.

Suitability: Highly suitable, allows for files to be stored in the Azure File Share, and then synced to an on-premises file server for SMB share.

Azure Files:

Pros: Provides SMB file shares in Azure.

Cons: By itself, does not provide on-prem replication.

Suitability: Suitable for the Azure side storage, and is a key requirement for Azure File Sync.

The Correct Placement

Based on the analysis, here’s how the services should be placed:

Azure subscription:

Azure Files

Azure File Sync

On-premises network:

Azure File Sync

Explanation

Azure Files provides the cloud-based SMB file share for storing App2’s files.

Azure File Sync ensures that the contents of the Azure file share are synced to a Windows Server on-premises, allowing local file access through the SMB protocol. The on-premises File Sync agent syncs the data down from Azure.
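
As a minimal sketch of the Azure side, the code below creates the SMB file share that Azure File Sync would then replicate to the on-premises Windows Server. The account URL, key, and share name are placeholders.

```python
from azure.storage.fileshare import ShareServiceClient

# Account URL and key are placeholders.
service = ShareServiceClient(
    account_url="https://<account>.file.core.windows.net",
    credential="<storage-account-key>",
)

# The SMB share that Azure File Sync will replicate to the on-premises
# Windows Server, where LAN clients read it over SMB.
service.create_share("app2-files")
```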

Why Other Options are Incorrect

Azure Blob Storage is not suitable for SMB access

Azure Data Box is designed for bulk transfers, not ongoing sync

Azure Data Box Gateway acts as a cache and does not provide SMB access.

Azure Data Lake Storage is for big data analytics, not for a transactional SMB share.

Important Notes for the AZ-305 Exam

Azure File Sync: Understand how Azure File Sync works and when to use it. This is very important in the AZ-305 exam.

Azure Files: Understand the use cases of Azure File Shares.

SMB Protocol: Be aware that SMB shares enable file sharing over a local area network.

Hybrid Scenarios: Recognize when hybrid configurations (cloud and on-prem) are necessary and how to implement them.

10
Q

You need to recommend a solution that meets the data requirements for App1.

What should you recommend deploying to each availability zone that contains an instance of App1?

an Azure Cosmos DB that uses multi-region writes
an Azure Storage account that uses geo-zone-redundant storage (GZRS)
an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)
an Azure SQL database that uses active geo-replication

A

Understanding the Requirements

Here are the key data requirements for App1:

Local Writes: “Each instance will write data to a data store in the same availability zone as the instance.”

Global Visibility: “Data written by any App1 instance must be visible to all App1 instances.”

Multi-Zone: App1 has three instances in East US and three instances in West Europe, and all three instances in each region are spread across availability zones.

Analyzing the Options

Let’s evaluate each option based on its ability to meet these requirements:

an Azure Cosmos DB that uses multi-region writes

Pros: Globally distributed database with multi-region write capability, low latency reads and writes, with automatic failover.

Cons: More expensive than some other options.

Suitability: Highly suitable. The multi-region writes feature enables the low latency local writes, and all instances can see the data globally.

an Azure Storage account that uses geo-zone-redundant storage (GZRS)

Pros: Provides both zone and geo-redundancy for high availability.

Cons: Geo-replication is asynchronous, so replicated data lags behind, and the storage account is not designed for each instance to write locally in an active-active fashion. Geo-replication also generates unnecessary cross-region traffic.

Suitability: Not suitable. It does not enable local writes, and replication is delayed.

an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)

Pros: Scalable and cost-effective for storing large amounts of unstructured data with GZRS enabled.

Cons: Geo-replication is asynchronous, so replicated data lags behind, and the storage is not designed for each instance to write locally in an active-active fashion. Geo-replication also generates unnecessary cross-region traffic.

Suitability: Not suitable. It does not enable local writes, and replication is delayed.

an Azure SQL database that uses active geo-replication

Pros: Provides high availability, can failover to a secondary database.

Cons: Does not support writes in multiple regions. Secondary replicas are read-only, so every instance would have to send writes to the single primary, introducing cross-zone and cross-region latency.

Suitability: Not suitable. It lacks multi-region write capability and increases write latency.

The Correct Recommendation

Based on the analysis, the correct solution is:

an Azure Cosmos DB that uses multi-region writes

Explanation

Multi-Region Writes: Azure Cosmos DB with multi-region writes enabled allows each instance of App1 to write to the database in its own region with low latency.

Global Visibility: Data written to any region is replicated to every other region, so all App1 instances can read it, subject to the configured consistency level.

Automatic Failover: Cosmos DB can failover to different regions.
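The following sketch shows what this looks like with the Python SDK, assuming the Cosmos DB account itself has been provisioned with multi-region writes enabled and with East US and West Europe as its regions; the endpoint, key, database, container, and partition key names are hypothetical, and the client option names follow the azure-cosmos 4.x SDK.

```python
import os

from azure.cosmos import CosmosClient

# Each App1 instance points the client at its own region, so writes stay local.
client = CosmosClient(
    url=os.environ["COSMOS_ENDPOINT"],
    credential=os.environ["COSMOS_KEY"],
    preferred_locations=["East US"],  # "West Europe" for the EU instances
    multiple_write_locations=True,    # route writes to the nearest write region
)

container = client.get_database_client("app1db").get_container_client("events")

# The write lands in the local region and replicates to all other regions,
# where the other App1 instances can read it.
container.upsert_item({"id": "evt-001", "pk": "instance-eastus-1", "value": 42})
```

The instances in West Europe would run the same code with their own preferred_locations value; Cosmos DB replicates each write to every region.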

Why Other Options Are Incorrect

GZRS Storage Accounts: Do not support multi-region writes, data replication has a delay, and all instances are attempting to write to the same storage, which does not meet the requirements.

Azure SQL Active Geo-Replication: Is an active-passive approach to HA/DR. Data is written to the primary node and asynchronously replicated to a secondary replica. It does not support the low latency, multi-region write, active-active model needed for App1.

Important Notes for the AZ-305 Exam

Cosmos DB Multi-Region Writes: Understand the benefits and requirements for using Cosmos DB with multi-region writes.

Data Replication Strategies: Be familiar with different data replication methods and understand their limitations (e.g., asynchronous geo-replication delays).

High Availability: Know how to design highly available solutions that are resilient to zone outages.

Active-Active vs Active-Passive: Be able to differentiate between active-active and active-passive deployments.

Latency: Be aware of latency implications when choosing a data solution.

11
Q

HOTSPOT

You are evaluating whether to use Azure Traffic Manager and Azure Application Gateway to meet the connection requirements for App1.

What is the minimum number of instances required for each service? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Azure Traffic Manager:
1
2
3
6
Azure Application Gateway:
1
2
3
6

A

Understanding the Requirements

Here are the key connection requirements for App1:

“Connections to App1 must pass through a web application firewall (WAF).”

“Connections to App1 must be active-active load balanced between instances.”

“All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.”

Key Concepts

Azure Traffic Manager: A DNS-based traffic load balancer that distributes traffic to different Azure regions based on routing rules.

Azure Application Gateway: A web traffic load balancer with WAF capabilities that can be used for application-layer routing and protection.

Active-Active Load Balancing: Distributes traffic to multiple active application instances.

Analyzing the Options

Let’s evaluate the instances needed for each service:

Azure Traffic Manager:

Minimum Instances: 1. Traffic Manager is a global, DNS-based load balancer, so its instance count does not depend on the number of regions or availability zones.

Why?: Traffic Manager directs users to the correct region through DNS responses based on where the request originates. A single profile handles this, so no more than one instance is required.

Suitability: Suitable for global routing.

Azure Application Gateway:

Minimum Instances: 2. Application Gateway is a regional service, so a separate deployment is required in each region that hosts App1.

Why?: Application Gateway provides the load-balancing and WAF functionality. Because App1 runs in two regions (East US and West Europe), a minimum of two Application Gateway deployments is needed, one per region.

Suitability: Suitable for WAF and application load balancing.

The Correct Choices

Here’s how the hotspot should be answered:

Azure Traffic Manager: 1

Azure Application Gateway: 2

Explanation

Traffic Manager (1): A single Traffic Manager instance can perform DNS-based routing to direct traffic to either the East US or West Europe Azure regions based on where the traffic is originating from.

Application Gateway (2): Application Gateway is a regional service, so one deployment is required in each region. With App1 in both East US and West Europe, that makes a minimum of two.
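As a hedged sketch of the Traffic Manager side, the profile below uses the Geographic routing method to send North American users to East US and everyone else to West Europe. It uses the azure-mgmt-trafficmanager package; the resource group, profile, DNS, and endpoint target names are hypothetical, while "GEO-NA" (North America) and "WORLD" (everything else) are standard Traffic Manager geographic codes.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import (
    DnsConfig, Endpoint, MonitorConfig, Profile,
)

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.profiles.create_or_update(
    "rg-app1",
    "tm-app1",
    Profile(
        location="global",
        traffic_routing_method="Geographic",
        dns_config=DnsConfig(relative_name="app1-litware", ttl=30),
        monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/"),
        endpoints=[
            Endpoint(
                name="eastus-gateway",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="app1-eastus.example.com",  # East US Application Gateway
                geo_mapping=["GEO-NA"],            # North America
            ),
            Endpoint(
                name="westeurope-gateway",
                type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                target="app1-westeurope.example.com",  # West Europe gateway
                geo_mapping=["WORLD"],                 # all other traffic
            ),
        ],
    ),
)
```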

Why Other Options are Incorrect

Traffic Manager (2, 3, 6): More than one Traffic Manager instance does not provide any added benefit.

Application Gateway (1): A single Application Gateway can serve only one region, leaving the other region without WAF protection or local load balancing.

Application Gateway (3 or 6): More than one deployment per region is not required to meet the stated requirements, so these exceed the minimum.

Important Notes for the AZ-305 Exam

Traffic Manager Use Cases: Be familiar with how Traffic Manager is used for global routing and DNS-based load balancing.

Application Gateway Use Cases: Understand how Application Gateway is used for application-level routing, load balancing, and WAF.

Minimum Instance Count: Be aware of the minimum instance counts needed to provide redundancy and scale for Azure services.

Active-Active Load Balancing: Understand the purpose of active-active load balancing and how it differs from active-passive solutions.

Global Routing: Be able to design solutions for global routing that meet regional requirements.

Availability Zones: Understand how to deploy solutions with availability zone redundancy.

12
Q

HOTSPOT -
You need to recommend a solution to ensure that App1 can access the third-party credentials and access strings. The solution must meet the security requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Authenticate App1 by using:
A certificate
A system-assigned managed identity
A user-assigned managed identity
Authorize App1 to retrieve Key Vault
secrets by using:
An access policy
A connected service
A private link
A role assignment

A

Understanding the Requirements

Here are the key requirements for accessing secrets:

Secure Access: App1 must access third-party credentials and access strings stored in Azure Key Vault.

Security:

“All secrets used by Azure services must be stored in Azure Key Vault.”

“Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.”

Minimize Development Effort: Ensure that the approach minimizes the work needed to implement it.

Key Concepts

Azure Key Vault: A secure service for managing secrets, keys, and certificates.

Managed Identities: Azure automatically manages the identity and authenticates to services, so you don’t need to manage credentials.

System-Assigned Managed Identity: An identity tied directly to the resource that shares the resource's lifecycle.

User-Assigned Managed Identity: An identity that is independent of the resource, and has a separate lifecycle.

Authentication: Verifying the identity of a client.

Authorization: Granting permissions to access specific resources.

Access Policies: Defines what permissions are granted to specific users or applications to access the key vault.

Role Assignments: Defines RBAC roles that can be assigned to users, groups, or applications for accessing resources.

Private Link: Provides private access to Azure services by keeping traffic within the Azure network.

Connected Service: An object that allows for integration between a service and another resource.

Analyzing the Options

Let’s evaluate each option based on its suitability:

Authentication Method

A certificate:

Pros: Provides a strong authentication method.

Cons: Requires manual certificate management, which increases administrative overhead, and does not tie credentials to service instances.

Suitability: Not suitable, as it does not meet the requirement to tie credentials to the service instance.

A system-assigned managed identity:

Pros: Automatically managed by Azure, and securely tied to the resource, meets security requirement, minimizes development effort and follows best practices.

Cons: None for this scenario.

Suitability: Highly suitable, it meets all the security and best practices requirements.

A user-assigned managed identity:

Pros: Managed by Azure, and can be used across multiple resources.

Cons: Requires manual creation, and is more complex for this scenario.

Suitability: Not as suitable because it requires more effort than a system assigned managed identity.

Authorization Method

An access policy:

Pros: Defines who can access Key Vault secrets and what they can do with them, very simple and easy to configure.

Cons: Must be set up on each vault individually and does not allow for centralized, RBAC-style management.

Suitability: Not suitable, because a role assignment is the preferred way to grant a service principal access and minimizes complexity.

A connected service:

Pros: Integrates resources in Azure and can provide authorization.

Cons: Not the correct way to authorize key vault access to an app service, and requires more effort.

Suitability: Not suitable.

A private link:

Pros: Provides secure network access to resources.

Cons: Not the method for controlling authorization, this is related to networking.

Suitability: Not suitable.

A role assignment:

Pros: RBAC roles can be assigned to a service principal for Key Vault access, and a system-assigned managed identity is represented as a service principal.

Cons: Requires assigning a role.

Suitability: Suitable, and the preferred method to assign permissions to a service principal.

The Correct Choices

Here’s how the hotspot should be answered:

Authenticate App1 by using: A system-assigned managed identity

Authorize App1 to retrieve Key Vault secrets by using: A role assignment

Explanation

System-Assigned Managed Identity: Enables secure access to Azure resources without needing to manage credentials. The identity is tied to the service instance and the lifecycle of the service.

Role Assignment: Provides the necessary permissions for the managed identity to access Key Vault secrets.
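A minimal sketch of the runtime side with the Python SDK; the vault URL and secret name are placeholders, and it assumes the identity has already been granted a data-plane role such as Key Vault Secrets User on the vault:

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# On the App1 host, the system-assigned managed identity is picked up
# automatically; no credentials are stored in code or configuration.
credential = ManagedIdentityCredential()

client = SecretClient(
    vault_url="https://kv-litware.vault.azure.net",  # hypothetical vault
    credential=credential,
)

third_party_key = client.get_secret("ThirdPartyApiKey").value  # hypothetical name
```

The role assignment itself is done once at deployment time (portal, CLI, or template) by assigning the role to the identity's principal ID at the vault scope.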

Why Other Options Are Not Suitable

User-assigned managed identities could work, but they require extra configuration and, because they can be shared across multiple resources, they conflict with the requirement that credentials not be shared between services.

Certificates: Require manual management and do not tie credentials to the service instance.

Access policies can grant users and applications access, but role assignments achieve the same result more simply and consistently; access policies must also be configured on every vault.

Private Link addresses private network connectivity; it does not grant the permissions App1 needs to read Key Vault secrets.

Connected services are not applicable to Key Vault access.

Important Notes for the AZ-305 Exam

Managed Identities: Thoroughly understand how managed identities work and how to use them with different Azure services. System vs user assigned and their differences.

Key Vault Security: Know the best practices for securing access to Key Vault resources.

RBAC: Be familiar with Azure RBAC and how to use it for managing permissions.

Authentication vs Authorization: Understand the difference between these two processes.

Least Privilege: Understand how to implement the least privilege principle with access control.

13
Q

You need to recommend an App Service architecture that meets the requirements for App1.

The solution must minimize costs.

What should you recommend?

one App Service Environment (ASE) per availability zone
one App Service plan per availability zone
one App Service plan per region
one App Service Environment (ASE) per region

A

Understanding the Requirements

Here are the key requirements for App1’s App Service deployment:

High Availability: App1 has six instances, three in East US and three in West Europe, spread across availability zones within each region.

Web App Service: The App1 app will be hosted on Azure App Service.

Minimize Costs: The solution should be the most cost-effective while maintaining the necessary features.

Linux Runtime: App1 is a Python app with a Linux runtime.

Key Concepts

Azure App Service: A PaaS service for hosting web applications, mobile backends, and APIs.

App Service Plan: Defines the underlying compute resources (VMs) on which your app(s) run.

App Service Environment (ASE): Provides a fully isolated and dedicated environment for running your App Service apps.

Availability Zones: Physically separate locations within an Azure region that provide high availability.

Analyzing the Options

Let’s evaluate each option based on its cost-effectiveness and ability to meet the requirements:

one App Service Environment (ASE) per availability zone

Pros: Highest level of isolation and control, can have virtual network integration.

Cons: Most expensive solution.

Suitability: Not suitable due to high costs.

one App Service plan per availability zone

Pros: Provides zone redundancy, and can potentially have different size VMs in each zone if needed.

Cons: Leads to increased costs through over-provisioning, because each zone gets its own App Service plan.

Suitability: Not the most cost-effective approach.

one App Service plan per region

Pros: Cost-effective for multiple instances of an app in a single region, allows multiple VMs to be spun up on one app service plan.

Cons: Requires availability zones to be supported by the underlying VM size.

Suitability: Suitable; the most cost-effective option, provided the chosen tier supports availability zones.

one App Service Environment (ASE) per region

Pros: Provides isolation and control within a region.

Cons: Very expensive and not needed for this scenario.

Suitability: Not suitable due to high costs.

The Correct Recommendation

Based on the analysis, the most cost-effective solution is:

one App Service plan per region

Explanation

App Service Plan per region: By creating a single App Service plan per region, you can host multiple instances of App1 (three per region) on the same underlying VMs. This is more cost-effective than using separate plans per availability zone.

Availability Zones: Choose a pricing tier that supports zone redundancy (for example, Premium v2 or v3).

Zone Redundancy: A zone-redundant App Service plan automatically spreads its instances across the availability zones in the region.
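A hedged sketch of creating one such plan with azure-mgmt-web; the resource group and plan names are hypothetical, and the zone_redundant property assumes a recent SDK/API version and a region that supports availability zones:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One Premium v3 plan for the region, zone redundant, with three workers
# that App Service spreads across the region's availability zones.
poller = client.app_service_plans.begin_create_or_update(
    "rg-app1-eastus",
    "plan-app1-eastus",
    AppServicePlan(
        location="eastus",
        reserved=True,        # Linux plan
        zone_redundant=True,
        sku=SkuDescription(name="P1v3", tier="PremiumV3", capacity=3),
    ),
)
plan = poller.result()
```

The same code with a West Europe location creates the second plan, for two plans in total.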

Why Other Options Are Not Correct

ASE per availability zone: Highly expensive and not needed when App Service can handle the availability zone deployment.

App Service plan per availability zone: Not cost-effective; it over-provisions by running three App Service plans per region when one plan per region can handle all the instances.

ASE per region: Very costly and unnecessary.

14
Q

Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.

Several VMs are exhibiting network connectivity issues.

You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.

Solution: Use the Azure Advisor to analyze the network traffic.

Does the solution meet the goal?

Yes
No

A

Understanding the Goal

The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which indicates a network connectivity issue.

Analyzing the Proposed Solution

Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show you network traffic for VMs, nor can it view network traffic for on-prem VMs.

Evaluation

Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.

The Correct Solution
The tools that would be best suited for this scenario would be:

Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.

Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.

On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.

Does the Solution Meet the Goal?

No, the solution does not meet the goal. Azure Advisor is not the correct tool for analyzing network traffic flow and packet information.

15
Q

DRAG DROP

You plan to import data from your on-premises environment to Azure. The data is shown in the following table.

On-premises source Azure target
A Microsoft SQL Server 2012 database An Azure SQL database
A table in a Microsoft SQL Server 2014 database An Azure Cosmos DB account that uses the SQL API

What should you recommend using to migrate the data? To answer, drag the appropriate tools to the correct data sources. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Tools
AzCopy
Azure Cosmos DB Data Migration Tool
Data Management Gateway
Data Migration Assistant

Answer Area
From the SQL Server 2012 database: Tool
From the table in the SQL Server 2014 database: Tool

A

Understanding the Requirements

Here’s a breakdown of the migration needs:

Source 1: A Microsoft SQL Server 2012 database

Target 1: An Azure SQL database

Source 2: A table in a Microsoft SQL Server 2014 database

Target 2: An Azure Cosmos DB account that uses the SQL API

Analyzing the Tools

Let’s evaluate each tool based on its suitability for these scenarios:

AzCopy:

Pros: A command-line utility for copying data to/from Azure Blob Storage, Azure Files, and Azure Data Lake Storage.

Cons: Not suitable for database migrations.

Suitability: Not suitable for this scenario.

Azure Cosmos DB Data Migration Tool:

Pros: A standalone tool to migrate data into Azure Cosmos DB. It can source data from various sources including SQL Server databases.

Cons: Not designed for SQL Server to Azure SQL database migrations.

Suitability: Suitable for SQL Server 2014 migration to Azure Cosmos DB.

Data Management Gateway:

Pros: Now known as the self-hosted integration runtime; it is software installed on an on-premises server that lets cloud data pipelines (such as Azure Data Factory) reach local data stores.

Cons: Not a migration tool itself, but a data-movement component used by data pipelines.

Suitability: Not the right tool for this scenario; it would only come into play if the data were moved with Azure Data Factory pipelines.

Data Migration Assistant (DMA):

Pros: Helps you upgrade to a modern data platform by detecting compatibility issues for SQL Server migrations and assists with the migration of schema and data to Azure SQL Database.

Cons: Not suitable for migrations to Azure Cosmos DB.

Suitability: Highly suitable for migrating SQL Server 2012 to Azure SQL database.

The Correct Placement

Based on the analysis, here’s how the tools should be placed:

From the SQL Server 2012 database:

Data Migration Assistant

From the table in the SQL Server 2014 database:

Azure Cosmos DB Data Migration Tool

Explanation

Data Migration Assistant: DMA is the recommended tool for migrating SQL Server databases to Azure SQL Database. It can assess compatibility issues, migrate schemas, and migrate data.

Self-hosted Integration Runtime: DMA connects to the on-premises SQL Server directly and does not require an integration runtime; the runtime is only needed if you instead move the data with Azure Data Factory pipelines.

Azure Cosmos DB Data Migration Tool: This is specifically designed to migrate data to Cosmos DB. It can also source the data from a SQL Server database as well as many other sources.
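To make the Cosmos DB side concrete, here is a rough sketch of what the migration tool does under the hood: read rows from the SQL Server table and upsert them as JSON documents. All connection details and names are hypothetical; this illustrates the mechanism rather than replacing the tool.

```python
import decimal

import pyodbc
from azure.cosmos import CosmosClient

# Read the source rows from the on-premises SQL Server 2014 table.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql2014;DATABASE=SalesDb;Trusted_Connection=yes"
)
cursor = conn.cursor()
cursor.execute("SELECT OrderId, CustomerId, Total FROM dbo.Orders")
columns = [c[0] for c in cursor.description]

# Upsert each row as a document into the Cosmos DB (SQL API) container.
container = (
    CosmosClient("https://litware-cosmos.documents.azure.com", "<account-key>")
    .get_database_client("sales")
    .get_container_client("orders")
)
for row in cursor.fetchall():
    # Convert SQL decimals to floats so the documents serialize as JSON.
    doc = {
        k: float(v) if isinstance(v, decimal.Decimal) else v
        for k, v in zip(columns, row)
    }
    doc["id"] = str(doc["OrderId"])  # Cosmos DB requires a string "id" field
    container.upsert_item(doc)
```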

Why Other Options are Incorrect

AzCopy: AzCopy is for moving files or blobs and is not suitable for migrating databases.

Data Management Gateway (now the self-hosted integration runtime) is not a migration tool; it is the data-movement component used by Azure Data Factory pipelines, not something DMA requires.

15
Q

HOTSPOT

You need to design a storage solution for an app that will store large amounts of frequently used data.

The solution must meet the following requirements:

✑ Maximize data throughput.

✑ Prevent the modification of data for one year.

✑ Minimize latency for read and write operations.

Which Azure Storage account type and storage service should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
BlobStorage
BlockBlobStorage
FileStorage
StorageV2 with Premium performance
StorageV2 with Standard performance
Storage service:
Blob
File
Table

A

Understanding the Requirements

Here’s a breakdown of the storage requirements:

High Throughput: Maximize data throughput for both reads and writes.

Immutability: Prevent data modification for one year.

Low Latency: Minimize latency for both read and write operations.

Frequently Used Data: The data is accessed frequently.

Analyzing the Options

Let’s evaluate each option based on its suitability:

Storage Account Type

BlobStorage:

Pros: Designed specifically for storing unstructured data (blobs) and offers various storage tiers and access methods, and supports immutability policies.

Cons: Less flexibility than the StorageV2 offering.

Suitability: Not the ideal account type for this scenario.

BlockBlobStorage:

Pros: A specialized account type optimized for premium block blob workloads.

Cons: Limited to block blobs and lacks the flexibility of a general-purpose v2 account.

Suitability: Not the ideal account type for this scenario.

FileStorage:

Pros: Provides a file-based storage solution that can be mounted on machines.

Cons: Supports only file shares; it does not support blob immutability or blob workloads.

Suitability: Not suitable for this scenario.

StorageV2 with Premium performance:

Pros: Offers high performance and low latency for all storage services in the account (blobs, files, tables, queues).

Cons: More expensive compared to standard storage accounts.

Suitability: Highly suitable for this scenario, since high throughput and low latency are important.

StorageV2 with Standard performance:

Pros: Cost-effective option for general-purpose storage.

Cons: Less performance and higher latency than premium storage.

Suitability: Not suitable for this scenario, as it does not meet the high throughput and low latency requirements.

Storage Service

Blob:

Pros: Highly scalable, cost-effective, and supports immutability.

Cons: N/A

Suitability: Highly suitable for the immutability requirement of this scenario, and great for use with StorageV2 accounts.

File:

Pros: Provides SMB file shares in Azure.

Cons: Not designed for high throughput and low latency for blobs, and does not support immutability.

Suitability: Not suitable for this scenario.

Table:

Pros: NoSQL key/value data store for structured data.

Cons: Not for storing large amounts of frequently used unstructured data, nor does it support immutability.

Suitability: Not suitable for this scenario.

The Correct Choices

Based on the analysis, here’s how the hotspot should be answered:

Storage account type: StorageV2 with Premium performance

Storage service: Blob

Explanation

StorageV2 with Premium performance: This option ensures the highest throughput and lowest latency for frequently used data as it utilizes solid-state drives (SSDs).

Blob Storage: Blob storage is suitable for unstructured data and supports the immutability feature to prevent modification of data.
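A brief sketch of applying the one-year immutability (time-based retention) policy at the container level with azure-mgmt-storage; the resource names are hypothetical, and the policy is created unlocked (locking it, which makes the retention irreversible, is a separate call):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import ImmutabilityPolicy

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Prevent modification or deletion of blobs for one year after creation.
client.blob_containers.create_or_update_immutability_policy(
    resource_group_name="rg-app-data",
    account_name="litwarepremium",
    container_name="appdata",
    parameters=ImmutabilityPolicy(immutability_period_since_creation_in_days=365),
)
```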

Why Other Options are Incorrect

BlobStorage and BlockBlobStorage: Both can store blobs, but they are legacy or specialized account types; StorageV2 with Premium performance satisfies the throughput and latency requirements on the recommended general-purpose account type.

FileStorage: Does not support immutability and is not a suitable solution for blobs.

StorageV2 with Standard performance: Doesn’t meet the performance requirements.

File Storage Service: Does not support immutability and is not a suitable solution for this scenario.

Table Storage Service: Is not a suitable solution for this data.

15
Q

HOTSPOT

You need to recommend an Azure Storage Account configuration for two applications named Application1 and Application2.

The configuration must meet the following requirements:

  • Storage for Application1 must provide the highest possible transaction rates and the lowest possible latency.
  • Storage for Application2 must provide the lowest possible storage costs per GB.
  • Storage for both applications must be optimized for uploads and downloads.
  • Storage for both applications must be available in an event of datacenter failure.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Application1:
BlobStorage with Standard performance, Hot access tier, and Read-access geo-redundant storage (RA-GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Premium performance and Locally-redundant storage (LRS) replication
General purpose v2 with Standard performance, Hot access tier, and Locally-redundant storage (LRS) replication
Application2:
BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Standard performance and Read-access geo-redundant storage (RA-GRS) replication
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication

A

Understanding the Requirements

Application 1:

Highest transaction rates and lowest latency

Optimized for uploads/downloads (blob storage)

Available during datacenter (availability zone) failure

Application 2:

Lowest storage costs per GB

Optimized for uploads/downloads (blob storage)

Available during datacenter failure (availability zone failure)

Analyzing the Options (Closest Fit)

Let’s go through each option and determine which is closest to the correct solution.

Application 1

BlobStorage with Standard performance, Hot access tier, and Read-access geo-redundant storage (RA-GRS) replication

Why not ideal?: RA-GRS replicates to a secondary region, but Standard performance with the Hot tier does not deliver the highest transaction rates or lowest latency. It does meet the upload/download requirement through blob storage.

BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication

Why closest?: Premium performance meets the high transaction rate and low latency requirement, and ZRS keeps the data available if a datacenter (availability zone) fails. It provides no geo-redundancy, but a regional outage is not part of the stated requirements.

General purpose v1 with Premium performance and Locally-redundant storage (LRS) replication

Why not ideal?: LRS provides neither zone nor geo-redundancy, and general-purpose v1 is a legacy account type.

General purpose v2 with Standard performance, Hot access tier, and Locally-redundant storage (LRS) replication

Why not ideal?: Standard performance does not meet the performance requirement, and LRS provides neither zone nor geo-redundancy.

Application 2

BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication

Why not chosen?: The Cool tier meets the cost requirement and GRS survives a datacenter or regional outage, but BlobStorage is a legacy account type compared with general-purpose v2.

BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication

Why not ideal?: Premium performance is the most expensive option per GB, which conflicts with the lowest-cost requirement.

General purpose v1 with Standard performance and Read-access geo-redundant storage (RA-GRS) replication

Why not ideal?: General-purpose v1 does not support access tiers, so it is not the most cost-effective option, and v1 is a legacy account type.

General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication
Why closest?: The Cool tier meets the cost requirement, RA-GRS protects against a datacenter or regional outage, and general-purpose v2 is the current recommended account type.

The Closest Correct Answers (Given Limitations)

Based on the above analysis, and choosing the closest to the correct answers, the options should be:

Application1: BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication

Application2: General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication

Explanation (Why Closest)

Application1: Premium BlockBlobStorage delivers the highest transaction rates and lowest latency, and ZRS replication keeps the account available if a datacenter (availability zone) fails. BlockBlobStorage also satisfies the upload/download (blob) requirement.

Application2: The Cool access tier on a general-purpose v2 account provides the lowest cost per GB, and RA-GRS keeps the data available through a datacenter failure while also allowing reads from the secondary region.
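A hedged sketch of provisioning both accounts with azure-mgmt-storage; the resource group, account names, and region are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Application1: premium block blobs, zone redundant for datacenter failure.
client.storage_accounts.begin_create(
    "rg-apps",
    "app1premblob",
    StorageAccountCreateParameters(
        sku=Sku(name="Premium_ZRS"),
        kind="BlockBlobStorage",
        location="eastus",
    ),
).result()

# Application2: general-purpose v2, Cool tier for lowest cost per GB, RA-GRS.
client.storage_accounts.begin_create(
    "rg-apps",
    "app2coolragrs",
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_RAGRS"),
        kind="StorageV2",
        location="eastus",
        access_tier="Cool",
    ),
).result()
```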

16
Q

HOTSPOT

Your company develops a web service that is deployed to an Azure virtual machine named VM1. The web service allows an API to access real-time data from VM1.

The current virtual machine deployment is shown in the Deployment exhibit. (Click the Deployment tab).

VNet1: The overall virtual network containing the subnets.
Subnet1: Contains two virtual machines:
VM1
VM2
ProdSubnet: A second subnet in VNet1 (where the APIM instance is deployed).

The chief technology officer (CTO) sends you the following email message: “Our developers have deployed the web service to a virtual machine named VM1. Testing has shown that the API is accessible from VM1 and VM2. Our partners must be able to connect to the API over the Internet. Partners will use this data in applications that they develop.”

You deploy an Azure API Management (APIM) service. The relevant API Management configuration is shown in the API exhibit. (Click the API tab.)

Virtual Network:
Off
External (selected)
Internal

LOCATION VIRTUAL NETWORK SUBNET
West Europe VNet1 ProdSubnet

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Statements

The API is available to partners over the Internet.

The APIM instance can access real-time data from VM1.

A VPN gateway is required for partner access.

A

Understanding the Setup

Web Service: Deployed on Azure VM1, accessible internally.

Goal: Make the web service API accessible to external partners over the Internet.

Azure API Management (APIM): Deployed to expose the API.

Deployment:

VM1 with the web service, on a private network (VNet1, Subnet1).

VM2 on a private network (VNet1, Subnet1) (not relevant to external access).

Analyzing the Updated APIM Configuration

The key change in this version of the question is the API Management service network configuration:

Virtual Network:

Location: West Europe

VNet: Vnet1

Subnet: ProdSubnet

Mode: “External” is selected. This is the important part.

External Mode: The APIM instance is exposed with a public IP address, allowing it to be accessed over the internet. It also allows the APIM service to connect to backends on private networks.

Backend: The API still has a backend configured to connect to VM1’s private IP address.

Evaluating the Statements

The API is available to partners over the Internet.

Explanation:

The API Management instance is configured in “External” mode which gives it a public IP address making it accessible over the internet.

Verdict: Yes

The APIM instance can access real-time data from VM1.

Explanation:

The APIM instance is connected to a subnet (ProdSubnet) within VNet1, so it can reach other services in the same virtual network.

The backend is configured to connect to the private IP address of VM1 and because they are both on the same virtual network the APIM service will be able to communicate with the VM.

Verdict: Yes

A VPN gateway is required for partner access.

Explanation:

The APIM instance is exposed to the public internet and therefore accessible to partners without a VPN.

Verdict: No

Answer Area

Statement Yes No
The API is available to partners over the Internet. Yes
The APIM instance can access real-time data from VM1. Yes
A VPN gateway is required for partner access. No
Important Notes for the AZ-304 Exam

APIM Network Configurations: Understand the differences between “External” and “Internal” modes for APIM networking.

External Mode: An external APIM service can be accessed publicly but can also access private backend services in the same or peered virtual networks.

Internal Mode: An internal APIM service is only accessible within the virtual network and is usually used with other Azure services via private links.

Private IP Addresses: Know how they are used in Azure and the trade-offs between private and public IPs.

Virtual Networks: Understand the purpose and use of Azure virtual networks and subnets.

VPN Gateway: Know the specific use cases of VPN gateways and when they are required.
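For reference, a hedged sketch of deploying APIM in External VNet mode with azure-mgmt-apimanagement; every name, including the subnet resource ID, is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.apimanagement import ApiManagementClient
from azure.mgmt.apimanagement.models import (
    ApiManagementServiceResource,
    ApiManagementServiceSkuProperties,
    VirtualNetworkConfiguration,
)

client = ApiManagementClient(DefaultAzureCredential(), "<subscription-id>")

subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-api"
    "/providers/Microsoft.Network/virtualNetworks/VNet1/subnets/ProdSubnet"
)

# "External" gives the gateway a public IP while letting it reach
# private backends (such as VM1) inside VNet1.
client.api_management_service.begin_create_or_update(
    "rg-api",
    "apim-litware",
    ApiManagementServiceResource(
        location="westeurope",
        publisher_name="Litware",
        publisher_email="admin@litware.example",
        sku=ApiManagementServiceSkuProperties(name="Developer", capacity=1),
        virtual_network_type="External",
        virtual_network_configuration=VirtualNetworkConfiguration(
            subnet_resource_id=subnet_id
        ),
    ),
)
```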

16
Q

Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.

You plan to move all the virtual machines to Azure.

You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.

What should you use to make the recommendation?

Azure Cost Management
Azure Pricing calculator
Azure Migrate
Azure Advisor

A

Understanding the Goal

The goal is to determine:

How many Azure VMs: The total number of Azure VMs required to migrate the workloads.

What size Azure VMs: The appropriate size (SKU) of Azure VM for each on-premises VM.

Minimize Effort: Minimize manual administrative effort in the planning and recommendation process.

Analyzing the Options

Let’s evaluate each option based on its suitability for this task:

Azure Cost Management:

Pros: Analyzes costs of existing Azure resources, and helps with cost optimization.

Cons: Not designed for planning and sizing migrations from on-premises environments. It’s for analyzing Azure spend, and does not help determine size of VMs needed.

Suitability: Not suitable for the given scenario.

Azure Pricing Calculator:

Pros: Helps estimate the cost of planned Azure resources, such as VMs, but requires knowledge of the specifications needed.

Cons: Requires manual input of VM sizes and specifications, which is impractical for 300 VMs with varied utilization levels. It does not account for the actual utilization of the on-premises VMs; it simply prices whatever sizes you enter.

Suitability: Not ideal, because manually entering the information for 300 VMs creates far too much administrative overhead.

Azure Migrate:

Pros: Specifically designed for assessing and migrating on-premises workloads to Azure. Can discover on-premises VMs and provide recommendations for sizing and right sizing Azure VMs based on utilization patterns.

Cons: Requires the setup of the Azure Migrate appliance, which is a one-time setup.

Suitability: Highly suitable for this scenario.

Azure Advisor:

Pros: Analyzes existing Azure resources and provides recommendations for cost, security, reliability, and performance.

Cons: Not designed for migration planning. Does not help in determining right sizing of VMs when moving from on-prem.

Suitability: Not suitable for the given scenario.

The Correct Recommendation

Based on the analysis, the best tool for this scenario is:

Azure Migrate

Explanation

Azure Migrate automates the discovery and assessment of on-premises VMs and their utilization. It analyzes data from on-prem to provide the recommended size Azure VMs based on peak utilization. This approach minimizes administrative effort and ensures that the recommendations are based on actual resource usage.

Why Other Options Are Not Suitable

Azure Cost Management: Analyzes existing Azure costs and is not for migration planning.

Azure Pricing Calculator: Requires manual configuration and does not analyze on-premises environment to determine correct Azure VM size.

Azure Advisor: Analyzes existing Azure resources and is not for migration planning.

16
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.

Several virtual machines exhibit network connectivity issues.

You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Advisor to analyze the network traffic.

Does this meet the goal?

A. Yes
B. No

A

Understanding the Goal

The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which would indicate a network connectivity issue.

Analyzing the Proposed Solution

Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show network traffic for VMs. It also does not provide any insight into on-prem network traffic.

Evaluation

Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.

The Correct Solution
The tools that would be best suited for this scenario would be:

Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.

Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.

On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.

Does the Solution Meet the Goal?

The answer is:

B. No

16
Q

You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.

A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices.

You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.

What should you include in the recommendation?

a Recovery Services vault and Azure Backup
an Azure file share and Azure File Sync
Azure blob containers and Azure File Sync
a Recovery Services vault and Windows Server Backup

A

Understanding the Requirements

Here’s a breakdown of the key requirements:

On-premises File Server: A file server (VM1) in the Toronto branch office.

File Access from all Offices: Users in all branch offices access shared files on VM1.

High Availability: Need a solution for quick access to the files if the Toronto office becomes unavailable.

Quick Access: Minimize latency for users accessing files if the Toronto office is down.

Analyzing the Options

Let’s evaluate each option based on its suitability:

a Recovery Services vault and Azure Backup:

Pros: Provides backup and restore capabilities for VMs.

Cons: Restoring a VM from backup can be time-consuming, and does not offer a way for users to directly connect to shares if VM1 is down. This also does not meet the quick access requirement.

Suitability: Not suitable for quick access if the primary VM is down.

an Azure file share and Azure File Sync:

Pros: Azure Files provides a cloud-based SMB share, Azure File Sync can sync the data from on-premises to Azure, and can be set up in multiple locations for quick access during an outage.

Cons: Requires configuring Azure File Sync and setting up a caching server in other locations.

Suitability: Highly suitable, meets all requirements.

Azure blob containers and Azure File Sync:

Pros: Azure Blob Storage provides scalable cloud storage.

Cons: Azure File Sync does not sync with Blob Storage, and blob containers do not provide an SMB share.

Suitability: Not suitable for the scenario, and a combination of services that do not work well together.

a Recovery Services vault and Windows Server Backup

Pros: Provides backup and restore capabilities for VMs and their data.

Cons: Restoring from a backup is a lengthy process, and does not meet the requirement for quick access to shares.

Suitability: Not suitable because it is not designed for quick access during an outage.

The Correct Recommendation

Based on the analysis, the correct solution is:

an Azure file share and Azure File Sync

Explanation

Azure File Share: Provides a cloud-based SMB share that is highly available and allows for access from multiple locations.

Azure File Sync: Allows for continuous syncing of files from the on-premises file server to the Azure File Share.

Caching Servers: With Azure File Sync, other on-premises servers can be set up as caching endpoints, enabling quick, local access to the data even if the Toronto office is offline. This meets the requirement of fast access to files during a Toronto branch outage.

Why Other Options are Incorrect

Recovery Services and Azure Backup: Does not provide immediate access to files and has downtime due to the restoration process.

Blob Containers with Azure File Sync: Does not work because File Sync is designed to be used with Azure File Shares, not Blob Containers.

Recovery Services and Windows Server Backup: Does not provide immediate access to files and has downtime due to the restoration process.

16
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.

Several virtual machines exhibit network connectivity issues.

You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic.

Does this meet the goal?

A. Yes
B. No

A

Understanding the Goal

The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, indicating a network connectivity problem.

Analyzing the Proposed Solution

Azure Network Watcher: A service that allows you to monitor and diagnose network issues.

IP Flow Verify: A Network Watcher feature that lets you specify source and destination IP addresses, ports, and a protocol, then reports whether the network security group (NSG) rules allow or deny that traffic and which rule matched. This shows whether an NSG rule is causing the problem.

Evaluation

Azure Network Watcher with the IP flow verify feature is indeed the correct tool for diagnosing network traffic and connectivity issues.

Does the Solution Meet the Goal?

The answer is:

A. Yes

Explanation

Azure Network Watcher: Provides tools to monitor, diagnose, and gain insights into your network.

IP Flow Verify: Allows you to check if a packet is allowed or denied between a source and destination based on the current network security rules.
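A minimal sketch of running IP flow verify with azure-mgmt-network (newer SDK versions expose it as a long-running begin_ operation); the watcher name, VM resource ID, and addresses are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-vms"
    "/providers/Microsoft.Compute/virtualMachines/VM1"
)

# Ask Network Watcher whether an inbound TCP packet to VM1 on port 443
# would be allowed or denied by the effective NSG rules.
result = client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG",
    "NetworkWatcher_westeurope",
    VerificationIPFlowParameters(
        target_resource_id=vm_id,
        direction="Inbound",
        protocol="TCP",
        local_ip_address="10.0.0.4",       # VM1's private IP
        local_port="443",
        remote_ip_address="203.0.113.10",  # the client having trouble
        remote_port="55000",
    ),
).result()

print(result.access, result.rule_name)  # e.g. "Deny" and the matching NSG rule
```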

17
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.

Several virtual machines exhibit network connectivity issues.

You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Traffic Analytics in Azure Network Watcher to analyze the network traffic.

Does this meet the goal?

A. Yes
B. No

A

Understanding the Goal

The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which would indicate a network connectivity problem.

Analyzing the Proposed Solution

Azure Network Watcher: A service in Azure that allows you to monitor and diagnose network issues.

Azure Traffic Analytics: A Network Watcher feature that analyzes NSG flow logs to provide aggregated insights into traffic flows, application performance, security, and capacity. However, it does not give a per-packet allow/deny verdict for a specific VM, and it cannot analyze on-premises VMs.

Evaluation

Traffic Analytics can help you understand the overall traffic flow and patterns in your environment, and is useful to understand who is connecting to what, but it does not give a view into specific packets being allowed or denied.

Does the Solution Meet the Goal?

The answer is:

B. No

Explanation

Azure Traffic Analytics: While useful for visualizing network traffic, it does not show specific information on whether packets are being allowed or denied for individual VMs and does not work for on-prem VMs.

Traffic Flow vs. Packet Level: Traffic analytics summarizes traffic patterns, but does not give packet level information.

17
Q

DRAG DROP –

You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016 and Linux.

You need to use Azure Monitor to design an alerting strategy for security-related events.

Which Azure Monitor Logs tables should you query? To answer, drag the appropriate tables to the correct log types. Each table may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:
Tables
AzureActivity
AzureDiagnostics
Event
Syslog

Answer Area
Events from Windows event logs: Table
Events from Linux system logging: Table

A

Understanding the Requirements

The goal is to identify the correct log tables in Azure Monitor Logs to query for:

Security Events: Events related to security on both Windows and Linux virtual machines.

Windows VMs: Security events from Windows event logs.

Linux VMs: Security events from Linux system logging.

Analyzing the Options

Let’s evaluate each log table based on its suitability:

AzureActivity:

Pros: Stores activity log data related to Azure resource operations (create, update, delete).

Cons: Not for VM security events.

Suitability: Not suitable for this scenario.

AzureDiagnostics:

Pros: Stores a variety of diagnostic data for Azure resources.

Cons: A generic table for Azure resource diagnostics; it does not hold guest-level security events from VMs.

Suitability: Not suitable for this scenario.

Event:

Pros: Stores Windows event log data, including security events.

Cons: Does not contain Linux data.

Suitability: Suitable for Windows security events.

Syslog:

Pros: Stores Linux system log data, including security events.

Cons: Does not contain Windows data.

Suitability: Suitable for Linux security events.

The Correct Placement

Based on the analysis, here’s how the tables should be placed:

Events from Windows event logs:

Event

Events from Linux system logging:

Syslog

Explanation

Event Table: The Event table in Azure Monitor Logs is specifically designed to store Windows event log data, including security events.

Syslog Table: The Syslog table in Azure Monitor Logs stores data from the Linux system logging service. This is where you would find Linux security events.
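A short sketch of querying both tables with the azure-monitor-query package; the workspace ID is a placeholder and the KQL filters are illustrative:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
workspace_id = "<log-analytics-workspace-id>"

# Windows event log entries collected from the Windows VMs.
windows_events = client.query_workspace(
    workspace_id,
    'Event | where EventLevelName == "Error" | take 10',
    timespan=timedelta(hours=24),
)

# Linux system log entries (e.g. the auth facility) from the Linux VMs.
linux_events = client.query_workspace(
    workspace_id,
    'Syslog | where Facility in ("auth", "authpriv") | take 10',
    timespan=timedelta(hours=24),
)
```

Azure Monitor alert rules can then be built on the same KQL queries to complete the alerting strategy.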

Why Other Options are Incorrect

AzureActivity: Activity logs contain information about operations on Azure resources, not security events from VMs.

AzureDiagnostics: Is a generic table that does not contain security specific events from Windows and Linux servers.

18
Q

A company named Contoso, Ltd. has a single-domain Active Directory forest named contoso.com.
Contoso is preparing to migrate all workloads to Azure. Contoso wants users to use single sign-on (SSO) when they access cloud-based services that integrate with Azure Active Directory (Azure AD).
You need to identify any objects in Active Directory that will fail to synchronize to Azure AD due to formatting issues. The solution must minimize costs.
What should you include in the solution?

A. Azure AD Connect Health
B. Microsoft Office 365 IdFix
C. Azure Advisor
D. Password Export Server version 3.1 (PES v3.1) in Active Directory Migration Tool (ADMT)

A
19
Q

You have an on-premises Hyper-V cluster that hosts 20 virtual machines. Some virtual machines run Windows Server 2016 and some run Linux.
You plan to migrate the virtual machines to an Azure subscription.
You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.
Solution: You recommend implementing an Azure Storage account, and then using Azure Migrate.
Does this meet the goal?

A. Yes
B. No

A
20
Q

You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.
You need to recommend a load balancing service for the planned deployment. The solution must meet the following requirements:
✑ Maintain access to the app in the event of a regional outage.
✑ Support Azure Web Application Firewall (WAF).
✑ Support cookie-based affinity.
✑ Support URL routing.
What should you include in the recommendation?

A. Azure Front Door
B. Azure Load Balancer
C. Azure Traffic Manager
D. Azure Application Gateway

A
21
Q

HOTSPOT -
You have the Free edition of a hybrid Azure Active Directory (Azure AD) tenant. The tenant uses password hash synchronization.
You need to recommend a solution to meet the following requirements:
✑ Prevent Active Directory domain user accounts from being locked out as the result of brute force attacks targeting Azure AD user accounts.
✑ Block legacy authentication attempts to Azure AD integrated apps.
✑ Minimize costs.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

A
22
Q

You are designing a solution that will include containerized applications running in an Azure Kubernetes Service (AKS) cluster.
You need to recommend a load balancing solution for HTTPS traffic. The solution must meet the following requirements:
✑ Automatically configure load balancing rules as the applications are deployed to the cluster.
✑ Support Azure Web Application Firewall (WAF).
✑ Support cookie-based affinity.
✑ Support URL routing.
What should you include the recommendation?

A. an NGINX ingress controller
B. Application Gateway Ingress Controller (AGIC)
C. an HTTP application routing ingress controller
D. the Kubernetes load balancer service

A
23
Q

You have an Azure subscription that contains an Azure SQL database.
You plan to use Azure reservations on the Azure SQL database.
To which resource type will the reservation discount be applied?

A. vCore compute
B. DTU compute
C. Storage
D. License

A
24
Q

HOTSPOT -
You plan to migrate App1 to Azure.
You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

A
25
Q

You configure OAuth2 authorization in API Management as shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.

A
26
Q

You plan to migrate App1 to Azure.
You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.
What should you include in the recommendation?

A. a private endpoint
B. a service endpoint that has a service endpoint policy
C. Azure public peering for an ExpressRoute circuit
D. Microsoft peering for an ExpressRoute circuit

A
27
Q

HOTSPOT -
You plan to migrate DB1 and DB2 to Azure.
You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.
What should you configure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

A
28
Q

DRAG DROP

You need to configure an Azure policy to ensure that the Azure SQL databases have TDE enabled. The solution must meet the security and compliance requirements.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

A