test2 Flashcards

https://infraexam.com/microsoft/az-304-microsoft-azure-architect-design/az-304-part-07/

1
Q

Overview. General Overview

Litware, Inc. is a medium-sized finance company.

Overview. Physical Locations

Litware has a main office in Boston.

Existing Environment. Identity Environment

The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.

Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.

The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by using the Azure portal, they must connect from a hybrid Azure AD-joined device.

Existing Environment. Azure Environment

Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).

The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.

Existing Environment. On-premises Environment

The on-premises network of Litware contains the resources shown in the following table.
Name: SERVER1, SERVER2, SERVER3
Type: Ubuntu 18.04 virtual machines hosted on Hyper-V
Configuration: The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.

Name: SERVER10
Type: Server that runs Windows Server 2016
Configuration: The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.

Existing Environment. Network Environment

Litware has ExpressRoute connectivity to Azure.

Planned Changes and Requirements. Planned Changes

Litware plans to implement the following changes:

✑ Migrate DB1 and DB2 to Azure.

✑ Migrate App1 to Azure virtual machines.

✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.

Planned Changes and Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).

✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.

✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.

✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.

✑ RBAC roles must be applied at the highest level possible.

Planned Changes and Requirements. Resiliency Requirements

Litware identifies the following resiliency requirements:

✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:

  • Maintain availability if two availability zones in the local Azure region fail.
  • Fail over automatically.
  • Minimize I/O latency.

✑ App1 must meet the following requirements:

  • Be hosted in an Azure region that supports availability zones.
  • Be hosted on Azure virtual machines that support automatic scaling.
  • Maintain availability if two availability zones in the local Azure region fail.

Planned Changes and Requirements. Security and Compliance Requirements

Litware identifies the following security and compliance requirements:

✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.

✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.

✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.

✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

✑ App1 must not share physical hardware with other workloads.

Planned Changes and Requirements. Business Requirements

Litware identifies the following business requirements:

✑ Minimize administrative effort.

✑ Minimize costs.

You plan to migrate App1 to Azure. The solution must meet the authentication and authorization requirements.

Which type of endpoint should App1 use to obtain an access token?

Azure Instance Metadata Service (IMDS)
Azure AD
Azure Service Management
Microsoft identity platform

A

The correct answer is: Azure Instance Metadata Service (IMDS)

Explanation:

Managed Identities and IMDS:

Why it’s the right choice: The requirements state that “To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app”. Managed identities for Azure resources provide an identity that applications running in an Azure VM can use to access other Azure resources. The Azure Instance Metadata Service (IMDS) is the service that provides this identity information to the VM.

How it works:

You enable a managed identity for the virtual machines hosting App1.

Within the App1 code, you make a request to the IMDS to obtain an access token.

The IMDS endpoint, reachable only from inside the Azure VM, returns a token that can be used to access other Azure resources (e.g., storage accounts, Key Vault) without requiring credentials to be stored in the application code. The access token is rotated automatically by the managed identity service.

This token is then passed to the destination service, which validates it against Azure AD before granting access.
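To make the request concrete, here is a minimal sketch of how App1's code could ask IMDS for a token. The IMDS address, the Metadata header, and the api-version value are the documented ones; the storage resource URI and the use of the Python requests library are assumptions made for this example.

import requests

# Ask the local IMDS endpoint for a token for Azure Storage.
# IMDS is only reachable from inside the VM at this link-local address.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

response = requests.get(
    IMDS_TOKEN_URL,
    params={
        "api-version": "2018-02-01",
        "resource": "https://storage.azure.com/",  # audience of the token (assumed target service)
    },
    headers={"Metadata": "true"},  # required header; rejects requests forwarded by proxies
)
response.raise_for_status()
access_token = response.json()["access_token"]

# The token is then sent to the target service as a bearer token,
# e.g. headers={"Authorization": "Bearer " + access_token}.

No secret is stored anywhere in this flow; the identity is attached to the VM itself.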

Security Benefits: Using managed identities and IMDS avoids storing sensitive credentials in configuration files, environment variables, or the application code itself. This is a security best practice.

Relevance to the scenario: It directly fulfills the requirement to use managed identities for accessing Azure resources from App1.

Why Other Options are Incorrect:

Azure AD: While Azure AD is used to authenticate users and apps, the app itself (App1 running on the VM) does not need to perform a standard Azure AD login. The managed identity handles this for the application. The application obtains its token from the IMDS endpoint; it does not call the Azure AD endpoint directly.

Azure Service Management: This is the deprecated classic management endpoint for Azure and is not the correct way to authenticate application-level access.

Microsoft identity platform: This is the umbrella term for Azure AD authentication for developers, but it is not the endpoint used for direct token retrieval from within a VM that has a managed identity. App1 should not call the Microsoft identity platform directly; it should use IMDS to get a token from the managed identity.

In Summary:

The correct endpoint for App1 to obtain an access token is the Azure Instance Metadata Service (IMDS). When a managed identity is enabled on the VM, IMDS is the endpoint through which applications running in the VM obtain access tokens for other Azure services.

Important Notes for Azure 304 Exam:

Managed Identities: You MUST understand how managed identities work and how to use them. Be familiar with the two types of managed identity: System-assigned and User-assigned.

Azure Instance Metadata Service (IMDS): Know the purpose of IMDS and how it provides information about the Azure VM instance (including access tokens for managed identities).

Secure Authentication: Understand the security benefits of using managed identities instead of embedding secrets in code or configuration files.

Authentication Scenarios: Be able to recognize different authentication scenarios (user login vs. application access) and know which Azure service to use to achieve the required access pattern.

Service Principals: Be familiar with the concept of service principals and their relationship with application identity, but understand that a service principal is not directly needed here since the managed identity service creates and manages the service principals.

Key Takeaway: For applications running in Azure VMs that need to access other Azure resources, managed identities via the Azure IMDS are the recommended approach. The application does not authenticate with Azure AD directly; it gets a token from IMDS.

2
Q

HOTSPOT

You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
To register the users for Azure MFA, use:
Azure AD Identity Protection
Security defaults in Azure AD
Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure:
Grant control in capolicy1
Session control in capolicy1
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant

A

Correct Answers:

To register the users for Azure MFA, use: Per-user MFA in the MFA management UI

To enforce Azure MFA authentication, configure: Grant control in capolicy1

Explanation:

Per-User MFA in the MFA Management UI:

Why it’s the right choice: Per-user MFA is the standard way of configuring MFA on user accounts and is often used when you do not want to enable security defaults (as it allows for more granular control). You must configure this on the user before conditional access can be applied.

How it Works: This action will cause each user in the required group to be registered for Multi-Factor authentication. This method is ideal when you want direct control over user MFA status, or when security defaults are not enabled.

Relevance to the scenario: The requirement specifies that “users must authenticate by using Azure MFA when they sign in to the Azure portal.” The first step is to register the users.

Grant Control in capolicy1:

Why it’s the right choice: The requirements specified that there is a Conditional Access Policy (capolicy1), therefore this is where we must configure the requirement to enforce MFA. Within the Grant controls of the conditional access policy you must require MFA to satisfy the requirement.

How it works: You will need to modify capolicy1 in order to ensure that all the required conditions are satisfied before being granted access to Azure Portal. In addition to enabling MFA, you may also need to specify other conditions, such as device type or location, to fulfill the full requirement for the conditional access policy.

Relevance to the scenario: The conditional access policy enforces access control based on the authentication and authorization rules specified in the requirements, which also specify that “users…must connect from a hybrid Azure AD-joined device”. This conditional access policy will enforce the requirement for MFA.
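To illustrate the shape of that configuration, the sketch below shows the grantControls portion of a conditional access policy as expressed in the Microsoft Graph policy schema. The two built-in control names are the documented values for requiring MFA and a hybrid Azure AD-joined device; representing them here as a Python dictionary is purely illustrative.

# Sketch of the grant controls for capolicy1 (Microsoft Graph schema, shown as a dict).
grant_controls = {
    "operator": "AND",          # every listed control must be satisfied
    "builtInControls": [
        "mfa",                  # require multi-factor authentication
        "domainJoinedDevice",   # require a hybrid Azure AD-joined device
    ],
}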

Why Other Options are Incorrect:

To register the users for Azure MFA, use: Azure AD Identity Protection: Azure AD Identity Protection is used to detect and investigate risky sign-in behavior and to configure risk-based conditional access policies. It’s not the primary mechanism for registering users for MFA. While Identity Protection does have an MFA registration policy, it does not enable MFA, but only prompts a user to register for MFA.

To register the users for Azure MFA, use: Security defaults in Azure AD: Security defaults are a tenant-wide setting that enables MFA and several other baseline protections. However, security defaults cannot be used together with conditional access policies such as capolicy1 and do not allow the fine-grained control that is needed here, so this is not the correct answer.

To enforce Azure MFA authentication, configure: Session control in capolicy1: Session controls in a conditional access policy are used to control user browser sessions, not to enforce MFA requirements, and are therefore not the correct mechanism to solve this requirement.

To enforce Azure MFA authentication, configure: Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant: Identity protection is a good tool for detecting risk and automatically responding to high risk sign-in attempts. It does not directly enable MFA for all user logins, but rather responds to high risk sign-in attempts, therefore this is not the correct service.

In Summary:

The best approach is to first enable Per-user MFA, and then enforce MFA through the Conditional Access Policy (capolicy1).

Important Notes for Azure 304 Exam:

Azure MFA: Know how to enable and enforce MFA for users. Be familiar with both Per-user MFA, and the security defaults settings in Azure AD.

Conditional Access Policies: You MUST know how conditional access policies work and how to configure access rules (including MFA requirements).

Grant Controls: Understand the use of grant controls to enforce authentication requirements.

Azure AD Identity Protection: Understand how Identity Protection works, but be aware it is for risk-based policies, and is not intended for setting up MFA on a user account, or enforcing MFA on logins.

Hybrid Azure AD Join: Be familiar with the benefits and requirements for Hybrid Azure AD-joined devices and how to use them in conjunction with conditional access policies.

Service Selection: Be able to pick the correct service for each task, and understand that setting up MFA and enforcing MFA are distinct steps that require different tools.

2
Q

Azure Environment -

Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).

The litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.

On-Premises Environment -

The on-premises network of Litware contains the resources shown in the following table.

Name: SERVER1, SERVER2, SERVER3
Type: Ubuntu 18.04 virtual machines hosted on Hyper-V
Configuration: The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.

Name: SERVER10
Type: Server that runs Windows Server 2016
Configuration: The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.

Network Environment -

Litware has ExpressRoute connectivity to Azure.

Planned Changes and Requirements

Litware plans to implement the following changes:

Migrate DB1 and DB2 to Azure.

Migrate App1 to Azure virtual machines.

Migrate the external storage used by App1 to Azure Storage.

Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.

Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

Only users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).

The Network Contributor built-in RBAC role must be used to grant permissions to the network administrators for all the virtual networks in all the Azure subscriptions.

To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.

RBAC roles must be applied at the highest level possible.

Resiliency Requirements -

Litware identifies the following resiliency requirements:

Once migrated to Azure, DB1 and DB2 must meet the following requirements:

Maintain availability if two availability zones in the local Azure region fail.

Fail over automatically.

Minimize I/O latency.

App1 must meet the following requirements:

Be hosted in an Azure region that supports availability zones.

Be hosted on Azure virtual machines that support automatic scaling.

Maintain availability if two availability zones in the local Azure region fail.

Security and Compliance Requirements

Litware identifies the following security and compliance requirements:

Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.

On-premises users and services must be able to access the Azure Storage account that will host the data in App1.

Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.

All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

App1 must NOT share physical hardware with other workloads.

Business Requirements -

Litware identifies the following business requirements:

Minimize administrative effort.

Minimize costs.

After you migrate App1 to Azure, you need to enforce the data modification requirements to meet the security and compliance requirements.

What should you do?

Answers
A. Create an access policy for the blob service.
B. Implement Azure resource locks.
C. Create Azure RBAC assignments.
D. Modify the access level of the blob service
Which option is correct, why is it correct, and what are the important notes for the AZ-305 exam?

A

The Goal

As before, the primary goal is to enforce this requirement:

“Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.”

Evaluating the Options Based on Proximity

Let’s analyze each option again:

A. Create an access policy for the blob service.

Why it’s closest to being correct: While it doesn’t directly enforce immutability, access policies do allow you to control write access. By carefully constructing an access policy, you could, in theory, grant write access for a specific period or to a particular user/group, and then potentially restrict it later to help prevent further modification. However, it is important to remember this does not ensure immutability and is just a temporary restriction to the data.

Why it’s still not ideal: Access policies do not inherently prevent modification. A user or process could still modify the data if granted the appropriate permissions. It can also get complex to manage.

B. Implement Azure resource locks.

Why it’s NOT a good fit: As mentioned previously, resource locks focus on preventing deletion or changes to the resources, not the data within the resources. This is not even remotely related to the requirement.

C. Create Azure RBAC assignments.

Why it’s NOT a good fit: Like resource locks, RBAC controls the permissions of who can do what with the Azure resources. RBAC does not provide a mechanism for ensuring immutability of the data.

D. Modify the access level of the blob service.

Why it’s NOT a good fit: Access levels (private, blob, or container) control the level of anonymous public read access to the storage account, not whether the data within it can be modified.

The Closest Correct Answer

Given the limited options, A (Create an access policy for the blob service) is the closest to the correct approach, although it is still not strictly correct.

Why? Because, of all the given answers, it comes nearest to addressing the requirement, albeit imperfectly. Access policies at least touch write access to the data, while the other options do not address the requirement at all.

Important Note for the AZ-305 Exam

The main takeaway here is that the exam will sometimes give you a multiple-choice question where the best answer isn’t provided. This forces you to choose the least incorrect option.

Here’s what you need to remember for these types of questions:

Understand Core Concepts: Have a strong grasp of the core Azure services, like Storage, RBAC, etc. and how they function.

Identify What’s Missing: If the correct feature is not an option, identify what comes closest.

Consider the Intent: What is the requirement asking? Then look for the answer that best aligns with that intent. In this case, the intent is to prevent modification of data.

Process of Elimination: Discard answers that are completely irrelevant.

A scenario where option A could be applied (though it still does not fully satisfy the requirement):

Access policies for data immutability could look like this:

Grant Write Access Initially: A user/process with write access writes the data

Restrict Write Access: Access policies would restrict write access to all but users/groups responsible for administration of the data.

Create New Policy: After the 3-year window, an access policy could be created to provide read-only access.

This method has some issues:

Complexity: Managing access policies like this is complex and is not scalable.

Not Truly Immutable: Even with all that complexity, a user with the right access can still delete and modify the data.

In summary:

A. Create an access policy for the blob service is the closest to the correct approach among the given options. The correct approach would have been to configure an immutability (time-based retention) policy, which is not offered in the answers. For the AZ-305 exam, it is important to choose the answer that is closest to correct, even if it is not fully correct.
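For reference, the approach the summary alludes to, a time-based retention (immutability) policy on the container, could be applied with the Azure management SDK for Python roughly as follows. The method and property names are from the azure-mgmt-storage library as best recalled, and every resource name is a placeholder, so treat this as a sketch rather than exact, verified syntax.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholder subscription and resource names throughout.
client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Apply a three-year (1095-day) time-based retention policy to the container that
# will hold the App1 data: new blobs can still be written, but once written they
# cannot be modified or deleted until the retention period expires.
client.blob_containers.create_or_update_immutability_policy(
    resource_group_name="rg-app1",   # hypothetical resource group
    account_name="stapp1data",       # hypothetical storage account
    container_name="app1-data",      # hypothetical container
    parameters={"immutability_period_since_creation_in_days": 1095},
)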

3
Q

HOTSPOT

You plan to migrate App1 to Azure.

You need to recommend a storage solution for App1 that meets the security and compliance requirements.

Which type of storage should you recommend, and how should you recommend configuring the storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
Premium page blobs
Premium file shares
Standard general-purpose v2
Configuration:
NFSv3
Large file shares
Hierarchical namespace

A

Here’s the breakdown of the correct answer and why:

Storage account type: Standard general-purpose v2

Configuration: Hierarchical namespace

Explanation:

Standard general-purpose v2: This storage account type allows you to utilize Blob storage, which is the key to meeting the immutability requirement. Azure Blob storage offers Immutability policies (write once, read many - WORM). This directly addresses the security and compliance requirement to prevent modification of new and existing data for three years.

Hierarchical namespace: Enabling the hierarchical namespace turns the account into Azure Data Lake Storage Gen2. The external storage currently used by App1 is Apache Hadoop-compatible and relies on POSIX access control list (ACL) file-level permissions, and Data Lake Storage Gen2 is the Azure storage option that provides a Hadoop-compatible file system with POSIX ACLs. Given the available options, it is the most relevant configuration choice.
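As a rough sketch of how such an account could be created with the Azure management SDK for Python (the azure-mgmt-storage package is assumed, and the names, region, and SKU below are placeholders):

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Standard general-purpose v2 account with the hierarchical namespace enabled
# (Azure Data Lake Storage Gen2). All names, the region, and the SKU are placeholders.
poller = client.storage_accounts.begin_create(
    "rg-app1",
    "stapp1data",
    {
        "location": "eastus",
        "kind": "StorageV2",              # general-purpose v2
        "sku": {"name": "Standard_ZRS"},  # a standard, zone-redundant SKU (assumed choice)
        "is_hns_enabled": True,           # hierarchical namespace
    },
)
account = poller.result()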

Why other options are incorrect:

Storage Account Type:

Premium page blobs: Primarily used for Azure Virtual Machine disks and do not offer built-in immutability policies suitable for this requirement.

Premium file shares: While offering SMB access (potentially useful for on-premises access), they don’t have the built-in immutability policies of Blob storage.

Configuration:

NFSv3: While NFSv3 is a file-sharing protocol that can be enabled on blob storage, it does not address the primary requirement, which is immutability. On-premises access to the blob data would typically be provided through other means, such as a private endpoint reachable over ExpressRoute or Azure Storage Explorer.

Large file shares: This refers to the capacity of file shares, not the core security and compliance feature needed here.

Important Considerations:

On-premises access: While the recommendation leans towards Blob storage for immutability, you’ll need to consider how on-premises users and services will access the data. Options include:

Azure Storage Explorer: A free tool that allows access to Azure Storage.

Azure File Sync: If the data lends itself to a file-sharing model, an Azure file share synchronized to an on-premises Windows Server with Azure File Sync could be used; note that Azure File Sync works with Azure Files, not directly with blob storage.

Direct API access: On-premises applications could be developed to interact with the Blob Storage APIs.

Preventing public endpoint access: This can be achieved by configuring private endpoints for the storage account, regardless of the storage type chosen.

3
Q

HOTSPOT

You plan to migrate DB1 and DB2 to Azure.

You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.

What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose

A

Explanation:

Box 1: SQL Managed Instance

Scenario: Once migrated to Azure, DB1 and DB2 must meet the following requirements:

✑ Maintain availability if two availability zones in the local Azure region fail.

✑ Fail over automatically.

✑ Minimize I/O latency.

The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a server or all databases in a managed instance to another region. It is a declarative abstraction on top of the existing active geo-replication feature, designed to simplify deployment and management of geo-replicated databases at scale. You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined policy. The latter option allows you to automatically recover multiple related databases in a secondary region after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or SQL Managed Instance availability in the primary region.

Box 2: Business critical

SQL Managed Instance is available in two service tiers:

General purpose: Designed for applications with typical performance and I/O latency requirements.

Business critical: Designed for applications with low I/O latency requirements and minimal impact of underlying maintenance operations on the workload.

4
Q

You plan to migrate App1 to Azure.

You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.

What should you include in the recommendation?

a private endpoint
a service endpoint that has a service endpoint policy
Azure public peering for an ExpressRoute circuit
Microsoft peering for an ExpressRoute circuit

A

Understanding the Requirements

Here are the key networking-related requirements:

Security:

“Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.”

Connectivity:

“On-premises users and services must be able to access the Azure Storage account that will host the data in App1.”

Existing Environment:

“Litware has ExpressRoute connectivity to Azure.”

Analyzing the Options

Let’s evaluate each option against these requirements:

a private endpoint

Pros: Provides a private IP address within the virtual network for the storage account, thus preventing public access, which meets the security requirement. Enables on-premises resources to connect to that private IP over the ExpressRoute connection.

Cons: Can increase cost slightly, requires virtual network integration.

Suitability: Highly suitable. It meets the security requirement of preventing public access and allows on-premises users to access the storage account over the private network and ExpressRoute connection.

a service endpoint that has a service endpoint policy

Pros: Allows VNETs to access the storage account without exposing it to the public internet.

Cons: Does not allow for on-premises resources to access the storage account.

Suitability: Not suitable. Service endpoints secure traffic from Azure virtual networks, but on-premises traffic cannot use them and would still have to reach the storage account through its public endpoint.

Azure public peering for an ExpressRoute circuit

Pros: Can provide access to Azure public services, such as storage, via the ExpressRoute connection.

Cons: Does not block access from the public internet, which does not meet the security requirements.

Suitability: Not suitable because public peering is not a secure method to access storage.

Microsoft peering for an ExpressRoute circuit

Pros: Allows private access to Azure resources, including Azure Storage.

Cons: Does not natively prevent access from the public internet. Requires additional configuration to do so.

Suitability: While Microsoft peering is the ExpressRoute peering over which storage traffic from on-premises would flow, it is not, by itself, a configuration that prevents public access.

The Correct Recommendation

Based on the analysis, the correct solution is:

a private endpoint

Explanation

Private Endpoints provide a network interface for the storage account directly within a virtual network. This ensures that access to the storage is limited to only resources within the private network. Traffic goes through the ExpressRoute circuit to the private IP on the VNET.

By using a private endpoint, you effectively prevent access from the public internet, fulfilling the security requirement.
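As a rough illustration, a private endpoint for the blob service of the storage account could be created with the Azure management SDK for Python along the lines below. The azure-mgmt-network package is assumed, and every name and resource ID is a placeholder; treat it as a sketch, not verified syntax.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Private endpoint in a VNet subnet, targeting the blob sub-resource of the storage account.
poller = network_client.private_endpoints.begin_create_or_update(
    "rg-app1",                 # hypothetical resource group
    "pe-stapp1data-blob",      # hypothetical private endpoint name
    {
        "location": "eastus",
        "subnet": {
            "id": "/subscriptions/<sub>/resourceGroups/rg-app1/providers/Microsoft.Network"
                  "/virtualNetworks/vnet-app1/subnets/snet-data"
        },
        "private_link_service_connections": [
            {
                "name": "stapp1data-connection",
                "private_link_service_id": "/subscriptions/<sub>/resourceGroups/rg-app1"
                                           "/providers/Microsoft.Storage/storageAccounts/stapp1data",
                "group_ids": ["blob"],  # connect to the blob service
            }
        ],
    },
)
private_endpoint = poller.result()

# A private DNS zone (privatelink.blob.core.windows.net) is typically linked to the VNet
# so that the storage hostname resolves to the private IP, including for on-premises
# clients whose DNS queries are forwarded to Azure over the ExpressRoute connection.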

Why other options are not correct:

Service endpoints only restrict access to traffic coming from selected virtual networks; they do not prevent on-premises systems from going through the public endpoint of the storage account.

Public peering is used to access public Azure services, it does not fulfill the security requirements of preventing access from the public internet.

Microsoft peering lets on-premises systems reach Azure services over the ExpressRoute circuit, but it does not prevent those systems, or anyone else, from also using the public endpoint. A private endpoint is needed to block the public endpoint.

Important Notes for the AZ-305 Exam

Private Endpoints vs Service Endpoints: Know the fundamental differences. Service endpoints provide network isolation for traffic originating in Azure virtual networks but do not remove the public endpoint. Private endpoints give the resource a private IP address in the VNet, reachable from on-premises over ExpressRoute or VPN, and allow public access to be blocked entirely.

ExpressRoute Peering: Understand the differences between Microsoft, Azure public and private peering.

Security and Compliance: Prioritize solutions that align with security requirements. Blocking public access is a common ask.

Read Requirements Carefully: Ensure you meet all requirements including the networking and security.

5
Q

DRAG DROP

You need to configure an Azure policy to ensure that the Azure SQL databases have TDE enabled. The solution must meet the security and compliance requirements.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
Create an Azure policy definition that uses the deployIfNotExists effect.
Create a user-assigned managed identity.
Invoke a remediation task.
Create an Azure policy assignment.
Create an Azure policy definition that uses the Modify effect.
Answer Area

A

Understanding the Goal

The goal is to use Azure Policy to automatically enable TDE on all Azure SQL databases within the scope of the policy.

Key Concepts

Azure Policy: Allows you to create, assign, and manage policies that enforce rules across your Azure resources.

Policy Definition: Specifies the conditions that must be met and the actions to take if the conditions are not met.

Policy Assignment: Applies the policy definition to a specific scope (subscription, resource group, etc.).

deployIfNotExists Effect: This policy effect will deploy an ARM template if the resource does not have the configuration (TDE enabled).

Modify Effect: This effect will modify the resource to enforce the condition if it does not exist.

Remediation Task: A process for correcting resources that are not compliant with the policy.

User-Assigned Managed Identity: An identity object in Azure which allows for RBAC permissions and avoids the need for storing credentials for an application.

Steps in the Correct Sequence

Here’s the correct sequence of actions, with explanations:

Create an Azure policy definition that uses the deployIfNotExists effect.

Why? This is the first step. You need to define what the policy should do. For TDE, deployIfNotExists is used to deploy a configuration if it’s missing. The deployIfNotExists will deploy an ARM template that enables TDE on the database.

This step specifies the “rule” that will be enforced.

Create an Azure policy assignment.

Why? After defining the policy, you need to assign it to a scope, such as a subscription or a resource group. This step specifies where the policy is applied.

This tells Azure what needs to be checked against the policy.

Invoke a remediation task.

Why? A deployIfNotExists assignment automatically remediates only resources that are created or updated after the assignment. Existing non-compliant resources require a remediation task to be launched so the policy is applied to them as well.

The Correct Drag-and-Drop Order

Here’s how you should arrange the actions in the answer area:

Create an Azure policy definition that uses the deployIfNotExists effect.

Create an Azure policy assignment.

Invoke a remediation task.

Why Other Options are Incorrect in this context:

Create a user-assigned managed identity: Although managed identities are used in conjunction with policies that use the deployIfNotExists effect, one does not need to be created separately here. The system-assigned managed identity created with the policy assignment performs the remediation, so creating a user-assigned managed identity is not required and is outside the scope of the task.

Create an Azure policy definition that uses the Modify effect: Although Modify is used in Azure policies, it is not relevant in the configuration of TDE. deployIfNotExists is a better approach because TDE needs to be enabled, which requires a deployment.
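To make the shape of such a policy concrete, here is a heavily simplified sketch of a deployIfNotExists policy rule for TDE, written as a Python dictionary. The field alias, the role definition ID, and the embedded template are placeholders; the real built-in definition that deploys TDE on SQL databases is more complete, so treat this only as an outline of the structure.

# Simplified outline of a deployIfNotExists policy rule for TDE (placeholders throughout).
policy_rule = {
    "if": {
        "field": "type",
        "equals": "Microsoft.Sql/servers/databases",
    },
    "then": {
        "effect": "deployIfNotExists",
        "details": {
            # The related resource whose state is checked for compliance.
            "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
            "existenceCondition": {
                "field": "Microsoft.Sql/transparentDataEncryption.status",  # assumed alias
                "equals": "Enabled",
            },
            # Role(s) granted to the assignment's managed identity so it can remediate.
            "roleDefinitionIds": [
                "/providers/Microsoft.Authorization/roleDefinitions/<role-definition-guid>"
            ],
            "deployment": {
                "properties": {
                    "mode": "incremental",
                    # Placeholder for the ARM template that sets the TDE status to Enabled.
                    "template": {"...": "..."},
                },
            },
        },
    },
}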

Important Notes for the AZ-305 Exam

Azure Policy Effects: Be extremely familiar with different policy effects, especially deployIfNotExists, audit, deny, and modify.

Policy Definition vs. Assignment: Understand the difference between defining a policy and applying it to resources.

Remediation: Understand how to use remediation tasks to fix non-compliant resources.

Scope: Be able to set the appropriate scope for policy assignments.

Managed Identities: Know how to use managed identities for secure resource management with Azure policies.

6
Q

HOTSPOT

You plan to migrate App1 to Azure.

You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Number of host groups:
1
2
3
6
Number of virtual machine scale sets:
0
1
3

A

Number of host groups: 3

Number of virtual machine scale sets: 1

Explanation:

Number of host groups: 3

Requirement: Maintain availability if two availability zones in the local Azure region fail.

Dedicated Hosts and Zones: Azure Dedicated Hosts are a regional resource, but you deploy host groups within specific availability zones. To be resilient to the failure of two availability zones, you need your virtual machines spread across at least three availability zones. Since you’re using dedicated hosts, you need a host group in each of those three availability zones.

Number of virtual machine scale sets: 1

Requirement: Be hosted on Azure virtual machines that support automatic scaling and maintain availability if two availability zones fail.

Virtual Machine Scale Sets and Zones: Azure Virtual Machine Scale Sets allow you to deploy and manage a set of identical, auto-scaling virtual machines. A single VM Scale Set can be configured to span multiple availability zones. This is the recommended approach for high availability and automatic scaling across zones. You don’t need multiple scale sets for each zone; one can manage the deployment across the necessary zones.

Why other options are incorrect:

Number of host groups:

1: This would not provide any availability zone resilience. If the single zone with the host group fails, App1 goes down.

2: This would only protect against the failure of a single availability zone. The requirement is resilience against two zone failures.

6: While this would provide more resilience, it’s not necessary to meet the specific requirement of tolerating two zone failures and would likely be more expensive.

Number of virtual machine scale sets:

0: You need to use Virtual Machine Scale Sets to meet the automatic scaling requirement.

3: While technically possible to have three separate VM scale sets (one in each zone), it adds unnecessary management complexity. A single VM scale set configured to span multiple availability zones is the standard and more efficient approach.

7
Q

You need to implement the Azure RBAC role assignments for the Network Contributor role.

The solution must meet the authentication and authorization requirements.

What is the minimum number of assignments that you must use?

1
2
5
10
15

A

The correct answer is 2.

Here’s why:

Management Groups: The most efficient way to apply RBAC roles across multiple subscriptions is by using Azure Management Groups. Since all subscriptions are within an Enterprise Agreement (EA), it’s highly likely that they are organized under Management Groups.

Litware.com and dev.litware.com Tenants: You have subscriptions in two different tenants (litware.com and dev.litware.com). Therefore, even if the subscriptions within each tenant are organized under a single management group, you would need to apply the Network Contributor role at the management group level for each tenant.

Minimum Assignments:

One assignment of the Network Contributor role at the management group level associated with the litware.com tenant. This will apply the role to all 10 subscriptions within that tenant.

One assignment of the Network Contributor role at the management group level associated with the dev.litware.com tenant. This will apply the role to all 5 subscriptions within that tenant.

Why other options are incorrect:

1: You have subscriptions in two different tenants, so a single assignment won’t cover all subscriptions.

5: This might be the number of subscriptions in one of the tenants, but not all.

10: This might be the number of subscriptions in the litware.com tenant, but not all.

15: This is the total number of subscriptions, and you don’t need to assign the role individually to each subscription if using management groups.

8
Q

HOTSPOT

You plan to migrate App1 to Azure.

You need to estimate the compute costs for App1 in Azure. The solution must meet the security and compliance requirements.

What should you use to estimate the costs, and what should you implement to minimize the costs? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To estimate the costs, use:
Azure Advisor
The Azure Cost Management Power BI app
The Azure Total Cost of Ownership (TCO) calculator
Implement:
Azure Reservations
Azure Hybrid Benefit
Azure Spot Virtual Machine pricing

A

To estimate the costs, use: The Azure Total Cost of Ownership (TCO) calculator

Why correct: The Azure TCO calculator is specifically designed to compare the cost of running your workloads on-premises versus in Azure. It allows you to input details about your current infrastructure and planned Azure resources to get an estimated cost for migrating to the cloud. This is the most direct and comprehensive tool for this purpose.

Implement: Azure Reservations

Why correct: Azure Reservations offer significant discounts (up to 72% compared to pay-as-you-go pricing) by committing to using specific Azure resources (like virtual machines for App1) for a defined period (typically 1 or 3 years). This is a highly effective way to minimize compute costs for predictable workloads like App1 once it’s migrated.

Why the other options are less suitable:

To estimate the costs, use:

Azure Advisor: While Azure Advisor provides cost optimization recommendations, it primarily analyzes your existing Azure usage. Since App1 is being migrated, you don’t have existing Azure usage for it yet, making the TCO calculator more appropriate for initial estimations.

The Azure Cost Management Power BI app: This is a tool for visualizing and analyzing your current Azure spending. It’s not designed for pre-migration cost estimations.

Implement:

Azure Hybrid Benefit: Azure Hybrid Benefit reduces costs by reusing existing Windows Server or SQL Server licenses. App1, however, runs on Ubuntu 18.04 Linux virtual machines, so Hybrid Benefit would not lower its compute costs. Azure Reservations are therefore the applicable cost-saving mechanism for this workload.

Azure Spot Virtual Machine pricing: Spot VMs offer deep discounts but come with the risk of eviction if Azure needs the capacity back. For a production application like App1, especially considering the security and compliance requirements mentioned in the broader scenario, relying on potentially unstable Spot VMs is generally not recommended. The risk of interruption outweighs the cost savings in this context.

In summary:

The Azure TCO calculator is the most direct tool for pre-migration cost estimation.

Azure Reservations are generally the most effective and broadly applicable method for implementing cost savings for compute resources like the VMs hosting App1, assuming a relatively stable workload.

9
Q

Existing Environment: Technical Environment

The on-premises network contains a single Active Directory domain named contoso.com.

Contoso has a single Azure subscription.

Existing Environment: Business Partnerships

Contoso has a business partnership with Fabrikam, Inc. Fabrikam users access some Contoso applications over the internet by using Azure Active Directory (Azure AD) guest accounts.

Requirements: Planned Changes

Contoso plans to deploy two applications named App1 and App2 to Azure.

Requirements: App1

App1 will be a Python web app hosted in Azure App Service that requires a Linux runtime.

Users from Contoso and Fabrikam will access App1.

App1 will access several services that require third-party credentials and access strings.

The credentials and access strings are stored in Azure Key Vault.

App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.

App1 has the following data requirements:

✑ Each instance will write data to a data store in the same availability zone as the instance.

✑ Data written by any App1 instance must be visible to all App1 instances.

App1 will only be accessible from the internet. App1 has the following connection requirements:

✑ Connections to App1 must pass through a web application firewall (WAF).

✑ Connections to App1 must be active-active load balanced between instances.

✑ All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.

Every hour, you will run a maintenance task by invoking a PowerShell script that copies files from all the App1 instances. The PowerShell script will run from a central location.

Requirements: App2

App2 will be a .NET app hosted in App Service that requires a Windows runtime.

App2 has the following file storage requirements:

✑ Save files to an Azure Storage account.

✑ Replicate files to an on-premises location.

✑ Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.

You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.

Application Development Requirements

Application developers will constantly develop new versions of App1 and App2.

The development process must meet the following requirements:

✑ A staging instance of a new application version must be deployed to the application host before the new version is used in production.

✑ After testing the new version, the staging version of the application will replace the production version.

✑ The switch to the new application version from staging to production must occur without any downtime of the application.

Identity Requirements

Contoso identifies the following requirements for managing Fabrikam access to resources:

✑ The solution must minimize development effort.

Security Requirement

All secrets used by Azure services must be stored in Azure Key Vault.

Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.

DRAG DROP

You need to recommend a solution that meets the file storage requirements for App2.

What should you deploy to the Azure subscription and the on-premises network? To answer, drag the appropriate services to the correct locations. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Services
Azure Blob Storage
Azure Data Box
Azure Data Box Gateway
Azure Data Lake Storage
Azure File Sync
Azure Files

Answer Area
Azure subscription: Service
On-premises network: Service

A

Deconstruct the Requirements: First, identify the key requirements for App2’s file storage:

Store files in an Azure Storage account.

Replicate files to an on-premises location.

On-premises clients need to read files via SMB over the LAN.

Azure Storage Options - Initial Brainstorm: Think about the different Azure Storage services and their core functionalities:

Azure Blob Storage: Excellent for unstructured data, cost-effective, but doesn’t natively provide SMB access or direct on-premises synchronization.

Azure Data Lake Storage: Built on Blob Storage, optimized for big data analytics. Doesn’t directly address SMB access or on-premises sync in the way required.

Azure Files: Provides fully managed file shares in the cloud, accessible via SMB. This seems promising for on-premises access.

Azure File Sync: A service to synchronize Azure File shares with on-premises Windows Servers. This looks like a strong candidate for fulfilling the on-premises replication and SMB access needs.

Azure Data Box: A physical appliance for transferring large amounts of data to Azure. Not suitable for ongoing synchronization.

Azure Data Box Gateway: A virtual appliance that acts as a network file share, caching data to Azure. While it involves an on-premises component, Azure File Sync is a better fit for the replication requirement.

Focus on the SMB Requirement: The “on-premises clients can read the files over the LAN by using the SMB protocol” requirement is a strong indicator that Azure Files will be needed in Azure. Blob Storage and Data Lake Storage don’t offer native SMB access.

Address the On-premises Replication: The requirement to “replicate files to an on-premises location” points towards a synchronization solution. Azure File Sync is specifically designed to synchronize Azure File shares with on-premises Windows Servers, making it the ideal choice for this.

Map to the Answer Area: Based on the above analysis:

Azure Subscription: Azure Files will host the file shares in the cloud.

On-premises network: Azure File Sync will be installed on a Windows Server on-premises to synchronize with the Azure File share.

Review and Confirm: Double-check if the chosen services meet all three initial requirements:

Store in Azure Storage Account? Yes, Azure Files is an Azure Storage service.

Replicate on-premises? Yes, Azure File Sync handles the synchronization.

SMB access? Yes, Azure Files provides SMB access, and File Sync makes it available on-premises.

Eliminate Other Options (and justify):

Azure Blob Storage/Data Lake Storage: Lack native SMB access.

Azure Data Box/Data Box Gateway: Not designed for continuous synchronization. Data Box is for bulk transfer, and Data Box Gateway is more of a caching solution, not direct replication of the entire Azure file share on-premises.

This systematic approach helps identify the core requirements and match them to the appropriate Azure services, leading to the correct solution.

Here’s the breakdown of the correct services and why:

Azure subscription: Azure Files

Why: Azure Files provides fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) protocol. This directly addresses the requirement to save files to an Azure Storage account and make them accessible over SMB.

On-premises network: Azure File Sync

Why: Azure File Sync is the Azure service that enables you to synchronize Azure file shares with on-premises Windows Servers. This fulfills the requirement to replicate files to an on-premises location and allows on-premises clients to read the files over the LAN using the SMB protocol.

Therefore, the correct answer area is:

Azure subscription: Azure Files
On-premises network: Azure File Sync

Explanation of why other options are incorrect:

Azure Blob Storage: While a core Azure storage service, it doesn’t natively provide SMB access required for on-premises clients to read files over the LAN.

Azure Data Box: This is a physical appliance used for transferring large amounts of data into Azure. It’s not for ongoing synchronization or SMB access.

Azure Data Box Gateway: This is a virtual appliance that resides on your on-premises network and acts as a network file share, caching data to Azure Blob storage. While it involves an on-premises component, it doesn’t directly replicate the Azure file share for native SMB access like Azure File Sync.

Azure Data Lake Storage: This is built on top of Blob storage and is optimized for big data analytics. It doesn’t directly provide SMB access in the same way as Azure Files.

10
Q

You need to recommend a solution that meets the data requirements for App1.

What should you recommend deploying to each availability zone that contains an instance of App1?

an Azure Cosmos DB that uses multi-region writes
an Azure Storage account that uses geo-zone-redundant storage (GZRS)
an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)
an Azure SQL database that uses active geo-replication

A

The correct answer is an Azure Cosmos DB that uses multi-region writes.

Here’s why:

Data Requirements Breakdown:

Each instance writes data to a data store in the same availability zone: This implies a need for a local data store for low latency writes.

Data written by any App1 instance must be visible to all App1 instances: This necessitates a globally consistent data store that replicates across regions.

Why Azure Cosmos DB with Multi-Region Writes Fits:

Multi-Region Writes: This feature of Cosmos DB allows you to designate multiple Azure regions as writeable. You would deploy a Cosmos DB account with write regions in both East US and West Europe.

Local Writes: Each App1 instance would be configured to write to the Cosmos DB region closest to it (within the same availability zone’s region). This ensures low-latency writes.

Global Consistency: Cosmos DB provides various consistency levels. For this requirement, you would likely choose “Strong” or “Session” consistency to ensure that data written in one region is eventually (or immediately, with Strong consistency) visible to all other regions.

Availability Zones: Cosmos DB itself offers high availability within a region by replicating data across multiple availability zones.

Why Other Options Are Less Suitable:

Azure Storage account with GZRS: GZRS provides high availability and durability by replicating data synchronously across three availability zones within a primary region and asynchronously to a secondary region. However, it doesn’t offer the same level of fine-grained control over write regions and automatic data replication for active-active scenarios like Cosmos DB. Also, accessing blob storage directly from multiple instances for transactional data can be complex.

Azure Data Lake Store with GZRS: Similar limitations to Azure Storage with GZRS. It’s primarily designed for large-scale analytics data, not transactional data requiring low-latency writes from multiple instances.

Azure SQL database with active geo-replication: While active geo-replication provides read replicas in different regions, only the primary region is writable. This doesn’t directly meet the requirement of each instance writing to a local data store and having that data immediately available to all instances across regions in an active-active manner.

11
Q

HOTSPOT

You are evaluating whether to use Azure Traffic Manager and Azure Application Gateway to meet the connection requirements for App1.

What is the minimum number of instances required for each service? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Azure Traffic Manager:
1
2
3
6
Azure Application Gateway:
1
2
3
6

A

Azure Traffic Manager: 1

Why: Azure Traffic Manager is a DNS-based traffic routing service. You only need one Traffic Manager profile to configure the geographic routing policy. Traffic Manager itself is a highly available, globally distributed service managed by Azure. You don’t need multiple instances for redundancy or load balancing the Traffic Manager service itself. Its availability is built-in.

Azure Application Gateway: 2

Why: You need at least two instances of Azure Application Gateway. Here’s the breakdown:

One instance in the East US region: To provide the WAF and load balancing for the three App1 instances in East US.

One instance in the West Europe region: To provide the WAF and load balancing for the three App1 instances in West Europe.

Since connections must pass through a WAF and you have instances in two distinct regions with traffic being directed based on geography, you need a separate Application Gateway in each region to handle the regional traffic and provide WAF protection.

Therefore, the correct answer area is:

Azure Traffic Manager: 1
Azure Application Gateway: 2

12
Q

HOTSPOT -
You need to recommend a solution to ensure that App1 can access the third-party credentials and access strings. The solution must meet the security requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Authenticate App1 by using:
A certificate
A system-assigned managed identity
A user-assigned managed identity
Authorize App1 to retrieve Key Vault
secrets by using:
An access policy
A connected service
A private link
A role assignment

A

Explanation:

Scenario: Security Requirement

All secrets used by Azure services must be stored in Azure Key Vault.

Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.

Box 1: A system-assigned managed identity

A system-assigned managed identity is created as part of the App Service resource and is deleted with it, so the credential is tied to the service instance and is never shared between services, which is exactly what the security requirement demands. Behind the scenes, the managed identity is represented in Azure AD as a service principal whose credentials are created, rotated, and protected by Azure, so the application never handles a client secret itself.

Note: Authentication with Key Vault works in conjunction with Azure Active Directory (Azure AD), which is responsible for authenticating the identity of any given security principal.

A security principal is an object that represents a user, group, service, or application that’s requesting access to Azure resources. Azure assigns a unique object ID to every security principal.

Box 2: A role assignment

You can provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control.
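As a short illustration of how App1 could read a secret once the managed identity and role assignment are in place, here is a sketch using the Azure SDK for Python. The azure-identity and azure-keyvault-secrets packages are assumed, and the vault URL and secret name are placeholders.

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# The system-assigned managed identity of the App Service instance supplies the token;
# no client secret or certificate is stored with the application.
credential = ManagedIdentityCredential()

client = SecretClient(
    vault_url="https://<your-key-vault-name>.vault.azure.net/",  # placeholder vault URL
    credential=credential,
)

# Retrieve a third-party credential stored as a Key Vault secret (placeholder name).
secret = client.get_secret("ThirdPartyApiKey")
connection_string = secret.value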

13
Q

You need to recommend an App Service architecture that meets the requirements for App1.

The solution must minimize costs.

What should you recommend?

one App Service Environment (ASE) per availability zone
one App Service plan per availability zone
one App Service plan per region
one App Service Environment (ASE) per region

A

Understanding the Requirements

Here are the key requirements for App1’s App Service deployment:

High Availability: App1 has six instances, three in East US and three in West Europe, spread across availability zones within each region.

Web App Service: The App1 app will be hosted on Azure App Service.

Minimize Costs: The solution should be the most cost-effective while maintaining the necessary features.

Linux Runtime: The App1 app is a Python app with a Linux runtime.

Key Concepts

Azure App Service: A PaaS service for hosting web applications, mobile backends, and APIs.

App Service Plan: Defines the underlying compute resources (VMs) on which your app(s) run.

App Service Environment (ASE): Provides a fully isolated and dedicated environment for running your App Service apps.

Availability Zones: Physically separate locations within an Azure region that provide high availability.

Analyzing the Options

Let’s evaluate each option based on its cost-effectiveness and ability to meet the requirements:

one App Service Environment (ASE) per availability zone

Pros: Highest level of isolation and control, can have virtual network integration.

Cons: Most expensive solution.

Suitability: Not suitable due to high costs.

one App Service plan per availability zone

Pros: Provides zone redundancy, and can potentially have different size VMs in each zone if needed.

Cons: Can lead to increased costs due to over-provisioning of resources if a separate App Service plan is created for each zone.

Suitability: Not the most cost-effective approach.

one App Service plan per region

Pros: Cost-effective for multiple instances of an app in a single region, allows multiple VMs to be spun up on one app service plan.

Cons: Requires a plan tier that supports zone redundancy (for example, Premium v3).

Suitability: Suitable; the most cost-effective option, provided the chosen plan tier supports availability zones.

one App Service Environment (ASE) per region

Pros: Provides isolation and control within a region.

Cons: Very expensive and not needed for this scenario.

Suitability: Not suitable due to high costs.

The Correct Recommendation

Based on the analysis, the most cost-effective solution is:

one App Service plan per region

Explanation

App Service Plan per region: By creating a single App Service plan per region, you can host multiple instances of App1 (three per region) on the same underlying VMs. This is more cost-effective than using separate plans per availability zone.

Availability Zones: Choose a plan tier that supports zone redundancy (for example, Premium v3).

Zone Redundancy: With zone redundancy enabled, App Service automatically distributes the plan’s instances across the availability zones in the region, so a single plan per region covers all three zones.

Why Other Options Are Not Correct

ASE per availability zone: Highly expensive and not needed when App Service can handle the availability zone deployment.

App Service plan per availability zone: Not cost-effective; it over-provisions by running three App Service plans when one plan per region can host all the instances.

ASE per region: Very costly and unnecessary.
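
For illustration only, the following is a rough sketch of creating a single zone-redundant Linux App Service plan per region with the azure-mgmt-web Python SDK. The subscription, resource group, plan name, region, and SKU are assumptions, and the zone_redundant setting assumes a plan tier and SDK/API version that support zone redundancy.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One Premium v3 plan for the region, with instances spread across zones.
plan = client.app_service_plans.begin_create_or_update(
    "rg-app1",            # hypothetical resource group
    "plan-app1-eastus",   # hypothetical plan name
    AppServicePlan(
        location="eastus",
        reserved=True,         # Linux plan
        zone_redundant=True,   # distribute instances across availability zones
        sku=SkuDescription(name="P1v3", tier="PremiumV3", capacity=3),
    ),
).result()
print(plan.name)
```

The equivalent plan would be created in the second region, giving one App Service plan per region as recommended above.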

14
Q

Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.

Several VMs are exhibiting network connectivity issues.

You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.

Solution: Use the Azure Advisor to analyze the network traffic.

Does the solution meet the goal?

Yes
No

A

Understanding the Goal

The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which indicates a network connectivity issue.

Analyzing the Proposed Solution

Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show you network traffic for VMs, nor can it view network traffic for on-prem VMs.

Evaluation

Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.

The Correct Solution
The tools that would be best suited for this scenario would be:

Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.

Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.

On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.

Does the Solution Meet the Goal?

No, the solution does not meet the goal. Azure Advisor is not the correct tool for analyzing network traffic flow and packet information.

15
Q

DRAG DROP

You plan to import data from your on-premises environment to Azure. The data is shown in the following table.

On-premises source Azure target
A Microsoft SQL Server 2012 database An Azure SQL database
A table in a Microsoft SQL Server 2014 database An Azure Cosmos DB account that uses the SQL API

What should you recommend using to migrate the data? To answer, drag the appropriate tools to the correct data sources-Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Tools
AzCopy
Azure Cosmos DB Data Migration Tool
Data Management Gateway
Data Migration Assistant

Answer Area
From the SQL Server 2012 database: Tool
From the table in the SQL Server 2014 database: Tool

A

From the SQL Server 2012 database: Data Migration Assistant

Why: The Data Migration Assistant (DMA) is Microsoft’s primary tool for migrating SQL Server databases to Azure SQL Database. It can assess your on-premises SQL Server database for compatibility issues, recommend performance improvements, and then perform the data migration. Although SQL Server 2012 is an older version, DMA supports it as a source for migration to Azure SQL Database.

From the table in the SQL Server 2014 database: Azure Cosmos DB Data Migration Tool

Why: The Azure Cosmos DB Data Migration Tool (dtui.exe) is specifically designed for importing data into Azure Cosmos DB from various sources, including SQL Server. Since the target is an Azure Cosmos DB account using the SQL API, this tool is the most direct and efficient way to migrate the data. You can select specific tables for migration.

Therefore, the correct answer area is:

From the SQL Server 2012 database: Data Migration Assistant
From the table in the SQL Server 2014 database: Azure Cosmos DB Data Migration Tool

Explanation of why other tools are incorrect:

AzCopy: This is a command-line utility for copying data to and from Azure Blob Storage, Azure Files, and Azure Data Lake Storage. It’s not designed for migrating relational database schemas and data to Azure SQL Database or Cosmos DB.

Data Management Gateway (Integration Runtime): This is a component of Azure Data Factory that enables data movement between on-premises data stores and cloud services. While it could be used for this, the direct migration tools (DMA and Cosmos DB Data Migration Tool) are simpler and more purpose-built for these specific scenarios. Using Data Factory would introduce more complexity than necessary for a straightforward data migration.

15
Q

HOTSPOT

You need to design a storage solution for an app that will store large amounts of frequently used data.

The solution must meet the following requirements:

✑ Maximize data throughput.

✑ Prevent the modification of data for one year.

✑ Minimize latency for read and write operations.

Which Azure Storage account type and storage service should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
BlobStorage
BlockBlobStorage
FileStorage
StorageV2 with Premium performance
StorageV2 with Standard performance
Storage service:
Blob
File
Table

A

Storage account type: BlockBlobStorage

Storage service: Blob

Explanation:

Let’s break down the requirements and why this combination is the best fit:

Requirements:

Maximize Data Throughput: The solution needs to handle a high volume of data transfer.

Prevent Data Modification (Immutable for 1 Year): Data must be stored in a way that prevents any changes for one year.

Minimize Latency: Read and write operations should be as fast as possible.

Storage Account Type: BlockBlobStorage

Why it’s the best choice:

Optimized for Block Blobs: BlockBlobStorage accounts are specifically designed and optimized for storing and accessing block blobs. Block blobs are ideal for unstructured data like text or binary data, which is common for applications storing large amounts of data.

High Throughput: BlockBlobStorage accounts are designed to deliver high throughput for read and write operations.

Immutable Storage Support: BlockBlobStorage accounts support immutable storage policies, allowing you to store data in a WORM (Write Once, Read Many) state, preventing modification for a specified period (like the one year required).

Why other options are less suitable:

BlobStorage: BlobStorage is an older account type. It is recommended that you use a BlockBlobStorage or a general-purpose v2 account instead.

FileStorage: FileStorage accounts are optimized for file shares (using the SMB protocol). They are not the best choice for maximizing throughput for large amounts of unstructured data.

StorageV2 (General-purpose v2): While StorageV2 accounts support block blobs, they also support other storage types (files, queues, tables). BlockBlobStorage accounts generally provide better performance for exclusively block blob workloads, which is the case here.

StorageV2 with Premium performance: Premium performance on a general-purpose v2 account applies to page blobs, not block blobs, so it does not deliver the block blob throughput and latency of a BlockBlobStorage account.

Storage Service: Blob

Why it’s the best choice:

Large, Unstructured Data: Blob storage is designed for storing large amounts of unstructured data, such as text or binary data, which aligns with the app’s requirements.

High Throughput: Blob storage, especially in BlockBlobStorage accounts, is optimized for high throughput.

Immutability: Blob storage supports immutability policies at the blob or container level.

Why other options are less suitable:

File: File storage is for file shares accessed via SMB. It’s not the best option for maximizing throughput for large amounts of unstructured data.

Table: Table storage is a NoSQL key-value store. It’s not suitable for storing large amounts of unstructured data or for maximizing throughput.

Implementation Details:

Create a BlockBlobStorage Account: When creating the storage account in Azure, choose BlockBlobStorage as the account type.

Create a Container: Within the storage account, create a container to store your blobs.

Configure Immutability:

You can set a time-based retention policy at the container level or on individual blobs.

Configure the policy to prevent modifications and deletions for one year.

Upload Data as Block Blobs: Use Azure Storage SDKs, AzCopy, or other tools to upload your data to the container as block blobs.
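
As a minimal sketch of the upload step, the following uses the azure-storage-blob Python package, assuming the account and container already exist and the one-year time-based retention policy has been configured on the container. The account, container, and blob names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Authenticate with Azure AD; the account URL is a placeholder.
service = BlobServiceClient(
    account_url="https://<account-name>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("appdata")  # hypothetical container

# New data can still be written as block blobs; the container's time-based
# retention policy then blocks modification and deletion for one year.
with open("report-2024.bin", "rb") as data:
    container.upload_blob(name="2024/report-2024.bin", data=data, overwrite=False)
```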

15
Q

HOTSPOT

You need to recommend an Azure Storage account configuration for two applications named Application1 and Application2.

The configuration must meet the following requirements:

  • Storage for Application1 must provide the highest possible transaction rates and the lowest possible latency.
  • Storage for Application2 must provide the lowest possible storage costs per GB.
  • Storage for both applications must be optimized for uploads and downloads.
  • Storage for both applications must be available in an event of datacenter failure.

What should you recommend? To answer, select the appropriate options in the answer area NOTE: Each correct selection is worth one point
Answer Area
Application1:
BlobStorage with Standard performance, Hot access tier, and Read-access geo-redundant storage (RA-GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Premium performance and Locally-redundant storage (LRS) replication
General purpose v2 with Standard performance, Hot access tier, and Locally-redundant storage (LRS) replication
Application2:
BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Standard performance and Read-access geo-redundant storage (RA-GRS) replication
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication

A

Application1:

BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication

Application2:

General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication

Explanation:

Application1

Requirements:

Highest possible transaction rates

Lowest possible latency

Optimized for uploads and downloads

Available in case of a datacenter failure

Why BlockBlobStorage with Premium performance and ZRS is the best choice:

BlockBlobStorage: Optimized for storing and retrieving large amounts of unstructured data (blobs) with high throughput and low latency, which aligns with the need for high transaction rates and optimized uploads/downloads.

Premium performance: Provides the lowest possible latency and highest transaction rates among Azure Storage account options. It uses SSDs for storage, making it ideal for performance-sensitive workloads.

Zone-redundant storage (ZRS): ZRS replicates your data synchronously across three Azure availability zones within a single region. This ensures that your data remains available even if one data center (availability zone) fails.

Why other options are less suitable:

BlobStorage with Standard performance, Hot access tier, and RA-GRS: BlobStorage accounts are generally used for general purpose blob storage and are less optimized for high performance compared to BlockBlobStorage. Standard performance offers higher latency than Premium. RA-GRS provides higher availability but is not necessary since ZRS is sufficient.

General purpose v1 with Premium performance and LRS: General-purpose v1 accounts are an older generation. They don’t support the combination of Premium performance and ZRS. LRS only replicates within a single data center and wouldn’t meet the availability requirement.

General purpose v2 with Standard performance, Hot access tier, and LRS: General-purpose v2 accounts with Standard performance offer higher latency than Premium performance. LRS does not protect against data center failures.

Application2

Requirements:

Lowest possible storage cost per GB

Optimized for uploads and downloads

Available in case of a datacenter failure

Why General purpose v2 with Standard performance, Cool access tier, and RA-GRS is the best choice:

General purpose v2: A good choice for a wide range of storage scenarios, including cost-sensitive applications.

Standard performance: Offers a balance between cost and performance, suitable when the lowest possible latency is not the primary concern.

Cool access tier: Designed for infrequently accessed data, providing the lowest storage cost per GB. While optimized for uploads and downloads, access costs are higher than Hot tier, so it’s best for data not accessed frequently.

Read-access geo-redundant storage (RA-GRS): Replicates your data to a secondary region hundreds of miles away from the primary region and allows read access to the secondary copy. This ensures data availability even if a datacenter, or the entire primary region, experiences an outage.

Why other options are less suitable:

BlobStorage with Standard performance, Cool access tier, and GRS: While suitable for cost optimization, general-purpose v2 accounts are generally recommended over BlobStorage accounts.

BlockBlobStorage with Premium performance and ZRS: Premium performance is too expensive for this application, which prioritizes cost savings. ZRS is not necessary when GRS is sufficient.

General purpose v1 with Standard performance and RA-GRS: General-purpose v1 is an older account generation that does not support access tiers, so it cannot use the lower-cost Cool tier.

In summary:

For Application1, the combination of BlockBlobStorage, Premium performance, and ZRS delivers the highest transaction rates, lowest latency, and availability in case of a data center failure.

For Application2, the combination of General purpose v2, Standard performance, Cool access tier, and RA-GRS provides the lowest storage cost per GB while still ensuring availability and remaining optimized for uploads and downloads.
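
As an illustrative sketch, the two accounts could be created with the azure-mgmt-storage Python SDK roughly as follows; the subscription, resource group, account names, and region are assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Application1: premium block blob account with zone-redundant replication.
client.storage_accounts.begin_create(
    "rg-storage",
    "app1premzrs",  # hypothetical account name
    StorageAccountCreateParameters(
        sku=Sku(name="Premium_ZRS"),
        kind="BlockBlobStorage",
        location="westeurope",
    ),
).result()

# Application2: general-purpose v2, Cool tier, read-access geo-redundant.
client.storage_accounts.begin_create(
    "rg-storage",
    "app2coolragrs",  # hypothetical account name
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_RAGRS"),
        kind="StorageV2",
        location="westeurope",
        access_tier="Cool",
    ),
).result()
```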

16
Q

HOTSPOT

Your company develops a web service that is deployed to an Azure virtual machine named VM1. The web service allows an API to access real-time data from VM1.

The current virtual machine deployment is shown in the Deployment exhibit. (Click the Deployment tab).

VNet1: The virtual network that contains the subnets below.
Subnet1: Contains two virtual machines, VM1 and VM2.
ProdSubnet: A second subnet in VNet1 (no virtual machines shown).

The chief technology officer (CTO) sends you the following email message: “Our developers have deployed the web service to a virtual machine named VM1. Testing has shown that the API is accessible from VM1 and VM2. Our partners must be able to connect to the API over the Internet. Partners will use this data in applications that they develop.”

You deploy an Azure API Management (APIM) service. The relevant API Management configuration is shown in the API exhibit. (Click the API tab.)

Virtual Network:
Off
External (selected)
Internal

LOCATION VIRTUAL NETWORK SUBNET
West Europe VNet1 ProdSubnet

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Statements

The API is available to partners over the Internet.

The APIM instance can access real-time data from VM1.

A VPN gateway is required for partner access.

A

Statements

The API is available to partners over the Internet. Yes

The APIM instance can access real-time data from VM1. Yes

A VPN gateway is required for partner access. No

Explanation:

  1. The API is available to partners over the Internet. - Yes

Why? The API Management (APIM) instance is configured with a Virtual Network setting of External. This means that the APIM instance is deployed with a public IP address and is accessible from the internet. Partners can access the API through the APIM gateway’s public endpoint.

  2. The APIM instance can access real-time data from VM1. - Yes

Why? The APIM instance is deployed into ProdSubnet of VNet1, the same virtual network that contains VM1 (in Subnet1). Because the APIM gateway has a network interface in the virtual network, it can communicate directly with VM1 over the private network and can therefore access the real-time data exposed by the web service running on VM1.

  3. A VPN gateway is required for partner access. - No

Why? Partners access the API through the APIM instance’s public endpoint, which is exposed to the internet because of the External Virtual Network setting. A VPN gateway is used for creating secure site-to-site or point-to-site connections between an on-premises network (or a single computer) and an Azure virtual network. It’s not needed when accessing a public endpoint.

16
Q

Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.

You plan to move all the virtual machines to Azure.

You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.

What should you use to make the recommendation?

Azure Cost Management
Azure Pricing calculator
Azure Migrate
Azure Advisor

A

Understanding the Goal

The goal is to determine:

How many Azure VMs: What is the total number of Azure VMs required for migrating the workloads

What size Azure VMs: Which Azure VM size (SKU) is appropriate for each on-premises VM.

Minimize Effort: Minimize manual administrative effort in the planning and recommendation process.

Analyzing the Options

Let’s evaluate each option based on its suitability for this task:

Azure Cost Management:

Pros: Analyzes costs of existing Azure resources, and helps with cost optimization.

Cons: Not designed for planning and sizing migrations from on-premises environments. It’s for analyzing Azure spend, and does not help determine size of VMs needed.

Suitability: Not suitable for the given scenario.

Azure Pricing Calculator:

Pros: Helps estimate the cost of planned Azure resources, such as VMs, but requires knowledge of the specifications needed.

Cons: Requires manual input of VM sizes and specifications, which is not practical for 300 VMs with varying utilization levels. It does not analyze the utilization of the existing on-premises VMs, so every VM would effectively be sized at its provisioned (peak) capacity.

Suitability: Not ideal; manually entering the details of 300 VMs involves far too much administrative overhead.

Azure Migrate:

Pros: Specifically designed for assessing and migrating on-premises workloads to Azure. Can discover on-premises VMs and provide recommendations for sizing and right sizing Azure VMs based on utilization patterns.

Cons: Requires the setup of the Azure Migrate appliance, which is a one-time setup.

Suitability: Highly suitable for this scenario.

Azure Advisor:

Pros: Analyzes existing Azure resources and provides recommendations for cost, security, reliability, and performance.

Cons: Not designed for migration planning. Does not help in determining right sizing of VMs when moving from on-prem.

Suitability: Not suitable for the given scenario.

The Correct Recommendation

Based on the analysis, the best tool for this scenario is:

Azure Migrate

Explanation

Azure Migrate automates the discovery and assessment of on-premises VMs and their utilization. It analyzes the collected performance data and recommends Azure VM sizes based on observed utilization. This approach minimizes administrative effort and ensures that the recommendations are based on actual resource usage.

Why Other Options Are Not Suitable

Azure Cost Management: Analyzes existing Azure costs and is not for migration planning.

Azure Pricing Calculator: Requires manual configuration and does not analyze on-premises environment to determine correct Azure VM size.

Azure Advisor: Analyzes existing Azure resources and is not for migration planning.

16
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.

Several virtual machines exhibit network connectivity issues.

You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Advisor to analyze the network traffic.

Does this meet the goal?

A. Yes
B. No

A

Understanding the Goal

The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which would indicate a network connectivity issue.

Analyzing the Proposed Solution

Azure Advisor: Azure Advisor is a service that analyzes your Azure environment and provides recommendations for cost optimization, security, reliability, and performance. It does not analyze or show network traffic for VMs. It also does not provide any insight into on-prem network traffic.

Evaluation

Azure Advisor will not help you determine what packets are being allowed or denied to a virtual machine.

The Correct Solution
The tools that would be best suited for this scenario would be:

Azure Network Watcher: Network Watcher can help you monitor and troubleshoot network traffic.

Network Security Group (NSG) Flow Logs: NSG flow logs would provide details on what traffic is being allowed or denied from and to VMs.

On-Prem Packet Capture Tools: Wireshark or other tools can be used on-prem to diagnose traffic issues.

Does the Solution Meet the Goal?

The answer is:

B. No

16
Q

You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.

A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices.

You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.

What should you include in the recommendation?

a Recovery Services vault and Azure Backup
an Azure file share and Azure File Sync
Azure blob containers and Azure File Sync
a Recovery Services vault and Windows Server Backup

A

Understanding the Requirements

Here’s a breakdown of the key requirements:

On-premises File Server: A file server (VM1) in the Toronto branch office.

File Access from all Offices: Users in all branch offices access shared files on VM1.

High Availability: Need a solution for quick access to the files if the Toronto office becomes unavailable.

Quick Access: Minimize latency for users accessing files if the Toronto office is down.

Analyzing the Options

Let’s evaluate each option based on its suitability:

a Recovery Services vault and Azure Backup:

Pros: Provides backup and restore capabilities for VMs.

Cons: Restoring a VM from backup can be time-consuming, and does not offer a way for users to directly connect to shares if VM1 is down. This also does not meet the quick access requirement.

Suitability: Not suitable for quick access if the primary VM is down.

an Azure file share and Azure File Sync:

Pros: Azure Files provides a cloud-based SMB share; Azure File Sync syncs the data from the on-premises file server to that share and can cache it on servers in other locations for quick access during an outage.

Cons: Requires configuring Azure File Sync and setting up a caching server in other locations.

Suitability: Highly suitable, meets all requirements.

Azure blob containers and Azure File Sync:

Pros: Azure Blob Storage provides scalable cloud storage.

Cons: Azure File Sync does not sync to blob containers; it requires an Azure file share as its cloud endpoint, and blob storage does not provide the SMB share that users need.

Suitability: Not suitable for the scenario, and a combination of services that do not work well together.

a Recovery Services vault and Windows Server Backup

Pros: Provides backup and restore capabilities for VMs and their data.

Cons: Restoring from a backup is a lengthy process, and does not meet the requirement for quick access to shares.

Suitability: Not suitable because it is not designed for quick access during an outage.

The Correct Recommendation

Based on the analysis, the correct solution is:

an Azure file share and Azure File Sync

Explanation

Azure File Share: Provides a cloud-based SMB share that is highly available and allows for access from multiple locations.

Azure File Sync: Allows for continuous syncing of files from the on-premises file server to the Azure File Share.

Caching Servers: With Azure File Sync, other on-premises servers can be set up as caching endpoints, enabling quick, local access to the data even if the Toronto office is offline. This meets the requirement of fast access to files during a Toronto branch outage.

Why Other Options are Incorrect

Recovery Services and Azure Backup: Does not provide immediate access to files and has downtime due to the restoration process.

Blob Containers with Azure File Sync: Does not work because File Sync is designed to be used with Azure File Shares, not Blob Containers.

Recovery Services and Windows Server Backup: Does not provide immediate access to files and has downtime due to the restoration process.

16
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.

Several virtual machines exhibit network connectivity issues.

You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic.

Does this meet the goal?

A. Yes
B. No

A

Understanding the Goal

The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, indicating a network connectivity problem.

Analyzing the Proposed Solution

Azure Network Watcher: A service that allows you to monitor and diagnose network issues.

IP Flow Verify: A feature within Network Watcher that allows you to specify source and destination IPs, ports, and protocol and determine if the network security group rules (NSGs) will allow or deny the specified traffic. This provides insight into the network rules and if the rules are causing a problem.

Evaluation

Azure Network Watcher with the IP flow verify feature is indeed the correct tool for diagnosing network traffic and connectivity issues.

Does the Solution Meet the Goal?

The answer is:

A. Yes

Explanation

Azure Network Watcher: Provides tools to monitor, diagnose, and gain insights into your network.

IP Flow Verify: Allows you to check if a packet is allowed or denied between a source and destination based on the current network security rules.
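
As a minimal sketch, IP flow verify can be invoked from the azure-mgmt-network Python SDK roughly as follows; the subscription, Network Watcher name, VM resource ID, and addresses are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

params = VerificationIPFlowParameters(
    target_resource_id=(
        "/subscriptions/<subscription-id>/resourceGroups/rg-vms"
        "/providers/Microsoft.Compute/virtualMachines/VM1"
    ),
    direction="Inbound",
    protocol="TCP",
    local_ip_address="10.0.0.4",       # the VM's private IP
    local_port="443",
    remote_ip_address="203.0.113.10",  # the source being tested
    remote_port="60000",
)

result = client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG", "NetworkWatcher_westeurope", params
).result()

# Reports whether the NSG rules allow or deny the packet, and which rule matched.
print(result.access, result.rule_name)
```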

17
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.

Several virtual machines exhibit network connectivity issues.

You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Traffic Analytics in Azure Network Watcher to analyze the network traffic.

Does this meet the goal?

A. Yes
B. No

A

Understanding the Goal

The goal is to analyze network traffic to determine if packets are being allowed or denied to the VMs, which would indicate a network connectivity problem.

Analyzing the Proposed Solution

Azure Network Watcher: A service in Azure that allows you to monitor and diagnose network issues.

Azure Traffic Analytics: A feature within Network Watcher that analyzes NSG flow logs to provide insights into network traffic flow, application performance, security, and capacity. However, it aggregates and summarizes flows rather than verifying whether a specific packet is allowed or denied to a given VM, and it cannot be used for the on-premises VMs.

Evaluation

Traffic Analytics helps you understand overall traffic flows and patterns in your environment (who is connecting to what), but it does not test whether a specific packet is allowed or denied.

Does the Solution Meet the Goal?

The answer is:

B. No

Explanation

Azure Traffic Analytics: Useful for visualizing network traffic, but it does not show whether a specific packet is allowed or denied for an individual VM, and it does not cover on-premises VMs.

Traffic Flow vs. Packet Level: Traffic Analytics summarizes traffic patterns; it does not provide packet-level verification.

17
Q

DRAG DROP –

You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016 and Linux.

You need to use Azure Monitor to design an alerting strategy for security-related events.

Which Azure Monitor Logs tables should you query? To answer, drag the appropriate tables to the correct log types. Each table may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:
Tables
AzureActivity
AzureDiagnostics
Event
Syslog

Answer Area
Events from Windows event logs: Table
Events from Linux system logging: Table

A

Understanding the Requirements

The goal is to identify the correct log tables in Azure Monitor Logs to query for:

Security Events: Events related to security on both Windows and Linux virtual machines.

Windows VMs: Security events from Windows event logs.

Linux VMs: Security events from Linux system logging.

Analyzing the Options

Let’s evaluate each log table based on its suitability:

AzureActivity:

Pros: Stores activity log data related to Azure resource operations (create, update, delete).

Cons: Not for VM security events.

Suitability: Not suitable for this scenario.

AzureDiagnostics:

Pros: Stores a variety of diagnostic data for Azure resources.

Cons: A generic table for Azure resource diagnostics; it is not where VM security events are stored.

Suitability: Not suitable for this scenario.

Event:

Pros: Stores Windows event log data, including security events.

Cons: Does not contain Linux data.

Suitability: Suitable for Windows security events.

Syslog:

Pros: Stores Linux system log data, including security events.

Cons: Does not contain Windows data.

Suitability: Suitable for Linux security events.

The Correct Placement

Based on the analysis, here’s how the tables should be placed:

Events from Windows event logs:

Event

Events from Linux system logging:

Syslog

Explanation

Event Table: The Event table in Azure Monitor Logs is specifically designed to store Windows event log data, including security events.

Syslog Table: The Syslog table in Azure Monitor Logs stores data from the Linux system logging service. This is where you would find Linux security events.

Why Other Options are Incorrect

AzureActivity: Activity logs contain information about operations on Azure resources, not security events from VMs.

AzureDiagnostics: Is a generic table that does not contain security specific events from Windows and Linux servers.
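
For illustration, the two tables could be queried with the azure-monitor-query Python package as sketched below; the workspace ID is a placeholder and the KQL filters are only examples.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
workspace_id = "<log-analytics-workspace-id>"  # placeholder

# Windows event log entries collected from the VMs land in the Event table;
# Linux syslog messages land in the Syslog table.
queries = [
    'Event | where EventLevelName == "Error" | take 20',
    'Syslog | where Facility in ("auth", "authpriv") | take 20',
]

for query in queries:
    response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
    for table in response.tables:
        print(len(table.rows), "rows returned for:", query)
```

Alert rules in Azure Monitor can then be built on the same queries to trigger on security-related entries.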

18
Q

A company named Contoso Ltd., has a single-domain Active Directory forest named contoso.com.
Contoso is preparing to migrate all workloads to Azure. Contoso wants users to use single sign-on (SSO) when they access cloud-based services that integrate with Azure Active Directory (Azure AD).
You need to identify any objects in Active Directory that will fail to synchronize to Azure AD due to formatting issues. The solution must minimize costs.
What should you include in the solution?

A. Azure AD Connect Health
B. Microsoft Office 365 IdFix
C. Azure Advisor
D. Password Export Server version 3.1 (PES v3.1) in Active Directory Migration Tool (ADMT)

A

The correct answer is B. Microsoft Office 365 IdFix.

Here’s why:

Microsoft Office 365 IdFix: This tool is specifically designed to identify and help remediate synchronization errors in your on-premises Active Directory environment before you connect it to Azure AD. It scans your directory for common issues like duplicate attributes, invalid characters, and formatting problems that can prevent successful synchronization. It’s a free tool from Microsoft.

Let’s look at why the other options are not the best fit:

A. Azure AD Connect Health: Azure AD Connect Health is a monitoring tool that helps you understand the health and performance of your Azure AD Connect infrastructure after it’s set up and synchronizing. While it can show you errors, it’s not designed for the initial pre-migration cleanup and identification of formatting issues.

C. Azure Advisor: Azure Advisor analyzes your Azure resources and provides recommendations for cost optimization, security, reliability, and performance. It doesn’t directly interact with your on-premises Active Directory to identify formatting issues.

D. Password Export Server version 3.1 (PES v3.1) in Active Directory Migration Tool (ADMT): PES is used to migrate passwords from one Active Directory domain to another. While ADMT is a migration tool, PES specifically focuses on password migration and is not relevant for identifying object formatting issues that would prevent Azure AD Connect synchronization.

Therefore, Microsoft Office 365 IdFix is the most appropriate and cost-effective solution for identifying Active Directory objects with formatting issues before synchronizing to Azure AD. It directly addresses the requirement of finding objects that will fail to sync due to these issues.

19
Q

You have an on-premises Hyper-V cluster that hosts 20 virtual machines. Some virtual machines run Windows Server 2016 and some run Linux.
You plan to migrate the virtual machines to an Azure subscription.
You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.
Solution: You recommend implementing an Azure Storage account, and then using Azure Migrate.
Does this meet the goal?

A. Yes
B. No

A

A. Yes

Explanation:

Using Azure Migrate to replicate the disks of the virtual machines to an Azure Storage account is a valid and recommended approach for migrating on-premises Hyper-V VMs to Azure with minimal downtime.

Here’s why:

Azure Migrate: This service provides tools specifically designed for migrating on-premises workloads to Azure. For Hyper-V VMs, it offers agentless replication that uses replication software installed on the Hyper-V hosts rather than inside each guest.

Replication: An initial full copy of each VM’s disks is replicated to the designated Azure storage, after which only the changes are replicated continuously (incremental replication). This allows the on-premises VMs to remain running and available during the majority of the replication process.

Cutover: When you’re ready to migrate, Azure Migrate orchestrates a final synchronization of any remaining changes and then creates the virtual machines in Azure using the replicated disks. This cutover process is typically much faster than a full migration done during a maintenance window.

20
Q

You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.
You need to recommend a load balancing service for the planned deployment. The solution must meet the following requirements:
✑ Maintain access to the app in the event of a regional outage.
✑ Support Azure Web Application Firewall (WAF).
✑ Support cookie-based affinity.
✑ Support URL routing.
What should you include in the recommendation?

A. Azure Front Door
B. Azure Load Balancer
C. Azure Traffic Manager
D. Azure Application Gateway

A

The correct answer is A. Azure Front Door.

Here’s why:

Azure Front Door:

Maintain access in the event of a regional outage: Azure Front Door is a global, scalable entry point that uses the Microsoft global network to create fast, secure, and widely scalable web applications. It can automatically route traffic to the next closest healthy region if one region experiences an outage.

Support Azure Web Application Firewall (WAF): Azure Front Door has an integrated Azure WAF to protect your web applications from common web exploits and vulnerabilities.

Support cookie-based affinity: Azure Front Door supports session affinity (also known as sticky sessions) using cookies, ensuring that requests from the same client are routed to the same backend instance within a region.

Support URL routing: Azure Front Door allows you to define routing rules based on URL paths to direct traffic to different backend pools.

Let’s look at why the other options are less suitable:

Azure Load Balancer: Azure Load Balancer is a regional Layer-4 load balancer (either internal or public). It does not inherently provide global failover across regions, has no built-in WAF capabilities, and does not support URL routing. Its session persistence is based on source IP rather than cookies.

Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic routing service. While it can direct traffic to different regions based on various routing methods (including priority for failover), it operates at the DNS level and does not inspect HTTP traffic. Therefore, it does not support WAF, cookie-based affinity at the HTTP level, or URL routing.

Azure Application Gateway: Azure Application Gateway is a regional web traffic load balancer that operates at Layer 7 of the OSI model. It supports WAF, cookie-based affinity, and URL routing within a region. However, it is a regional service and does not inherently provide global failover in the same way that Azure Front Door does. While you can deploy multiple Application Gateways in different regions and use a service like Traffic Manager in front, Front Door provides a more integrated and streamlined solution for global load balancing and failover with WAF.

In summary, Azure Front Door is the most appropriate service to meet all the specified requirements for a globally distributed, resilient, and secure web application deployment.

21
Q

You are designing a solution that will include containerized applications running in an Azure Kubernetes Service (AKS) cluster.
You need to recommend a load balancing solution for HTTPS traffic. The solution must meet the following requirements:
✑ Automatically configure load balancing rules as the applications are deployed to the cluster.
✑ Support Azure Web Application Firewall (WAF).
✑ Support cookie-based affinity.
✑ Support URL routing.
What should you include the recommendation?

A. an NGINX ingress controller
B. Application Gateway Ingress Controller (AGIC)
C. an HTTP application routing ingress controller
D. the Kubernetes load balancer service

A

The correct answer is B. Application Gateway Ingress Controller (AGIC).

Here’s why:

Application Gateway Ingress Controller (AGIC):

Automatically configure load balancing rules: AGIC runs in your Azure Kubernetes Service (AKS) cluster and programs an Azure Application Gateway deployed in the cluster’s virtual network. When you define Kubernetes Ingress resources, AGIC automatically configures the Application Gateway’s listeners, routing rules, and backend pools to match your Ingress configuration. This makes deployment and management of load balancing rules very streamlined.

Support Azure Web Application Firewall (WAF): AGIC leverages the capabilities of Azure Application Gateway, which has built-in support for Azure WAF. You can configure WAF policies on the Application Gateway to protect your applications from common web exploits.

Support cookie-based affinity: Azure Application Gateway, and therefore AGIC, supports cookie-based session affinity (also known as sticky sessions). This ensures that requests from the same client are routed to the same backend pod within your AKS cluster.

Support URL routing: Azure Application Gateway is a Layer-7 load balancer, meaning it can make routing decisions based on the URL path of the incoming request. AGIC allows you to define URL routing rules within your Kubernetes Ingress resources.

Let’s look at why the other options are less suitable:

A. an NGINX ingress controller: While NGINX is a powerful and widely used ingress controller, and it can be configured to support cookie-based affinity and URL routing, it does not inherently provide automatic configuration with Azure services like WAF. You would typically need to configure a separate WAF solution and integrate it with your NGINX setup, which adds complexity.

C. an HTTP application routing ingress controller: This is a simpler, AKS-managed ingress controller that provides basic HTTP routing. However, it does not support Azure WAF directly and has limited capabilities for advanced features like cookie-based affinity and complex URL routing compared to AGIC.

D. the Kubernetes load balancer service: A Kubernetes LoadBalancer service provisions an Azure Load Balancer, which is a Layer-4 load balancer operating at the transport layer. It cannot inspect HTTP headers or URLs, so it cannot provide URL routing or cookie-based affinity, and Azure WAF cannot be attached to it; WAF requires a Layer-7 service such as Application Gateway or Azure Front Door.
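
As a sketch of how AGIC picks up configuration from an Ingress resource, the following uses the official kubernetes Python client to create an Ingress that carries AGIC's cookie-based affinity annotation. The namespace, service name, path, and backend port are hypothetical, and the annotation values assume a standard AGIC installation.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="app1-ingress",
        namespace="default",
        annotations={
            # Tells AGIC (rather than another ingress controller) to handle this Ingress.
            "kubernetes.io/ingress.class": "azure/application-gateway",
            # Enables Application Gateway cookie-based session affinity.
            "appgw.ingress.kubernetes.io/cookie-based-affinity": "true",
        },
    ),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/api",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="api-service",  # hypothetical service
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                )
            )
        ]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```

AGIC watches Ingress resources like this one and translates the path rule and affinity annotation into Application Gateway listeners, URL path maps, and backend settings.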

22
Q

You have an Azure subscription that contains an Azure SQL database.
You plan to use Azure reservations on the Azure SQL database.
To which resource type will the reservation discount be applied?

A. vCore compute
B. DTU compute
C. Storage
D. License

A

The correct answer is A. vCore compute.

Explanation

Azure Reservations: Azure reservations provide a discount on Azure resources when you commit to spending a certain amount on a specific resource type for one or three years.

Azure SQL Database: Azure SQL Database has two main purchasing models:

vCore-based: This model allows you to choose the number of virtual cores (vCores), the amount of memory, and the storage size and type.

DTU-based: This model uses a bundled measure of compute, storage, and I/O resources called Database Transaction Units (DTUs).

Reservations and Azure SQL Database: Azure reservations for Azure SQL Database apply to the compute resources used by your database.

vCore-based Model: Reservations apply specifically to the vCore compute cost.

DTU-based Model: Reserved capacity is not available for the DTU-based purchasing model; only vCore-based databases, elastic pools, and managed instances are eligible.

Other Resource Types:

Storage: Storage costs are separate from compute costs and are not covered by Azure SQL Database reservations. You might consider reserved capacity for storage separately.

License: SQL Server licenses are handled separately, especially if you are using the Azure Hybrid Benefit. Reservations for Azure SQL Database do not cover license costs.

Why vCore compute is the most accurate answer:

Azure SQL Database reserved capacity discounts apply only to the compute charges of resources in the vCore purchasing model (single databases, elastic pools, and managed instances). Storage and licensing are billed separately, and the DTU model is not eligible for reservations, so vCore compute is the correct answer.

23
Q

Overview. General Overview

Litware, Inc. is a medium-sized finance company.

Overview. Physical Locations

Litware has a main office in Boston.

Existing Environment. Identity Environment

The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.

Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.

The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by

using the Azure portal, they must connect from a hybrid Azure AD-joined device.

Existing Environment. Azure Environment

Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).

The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.

Existing Environment. On-premises Environment

The on-premises network of Litware contains the resources shown in the following table.

Name Type Configuration
SERVER1 Ubuntu 18.04 virtual machines hosted on Hyper-V The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
SERVER2 Ubuntu 18.04 virtual machines hosted on Hyper-V (Same as SERVER1 description)
SERVER3 Ubuntu 18.04 virtual machines hosted on Hyper-V (Same as SERVER1 description)
SERVER10 Server that runs Windows Server 2016 The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.

Existing Environment. Network Environment

Litware has ExpressRoute connectivity to Azure.

Planned Changes and Requirements. Planned Changes

Litware plans to implement the following changes:

✑ Migrate DB1 and DB2 to Azure.

✑ Migrate App1 to Azure virtual machines.

✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.

Planned Changes and Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).

✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.

✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.

✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.

✑ RBAC roles must be applied at the highest level possible.

Planned Changes and Requirements. Resiliency Requirements

Litware identifies the following resiliency requirements:

✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:

  • Maintain availability if two availability zones in the local Azure region fail.
  • Fail over automatically.
  • Minimize I/O latency.

✑ App1 must meet the following requirements:

  • Be hosted in an Azure region that supports availability zones.
  • Be hosted on Azure virtual machines that support automatic scaling.
  • Maintain availability if two availability zones in the local Azure region fail.

Planned Changes and Requirements. Security and Compliance Requirements

Litware identifies the following security and compliance requirements:

✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.

✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.

✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.

✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

✑ App1 must not share physical hardware with other workloads.

Planned Changes and Requirements. Business Requirements

Litware identifies the following business requirements:

✑ Minimize administrative effort.

✑ Minimize costs.

HOTSPOT -
You plan to migrate App1 to Azure.
You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Number of host groups:
1
2
3
6
Number of virtual machine scale sets:
0
1
3

A

Here’s a breakdown of the reasoning for the correct choices:

Number of Host Groups: 3

Requirement: App1 must maintain availability if two availability zones in the local Azure region fail.

Dedicated Hosts and Availability Zones: To guarantee that App1 survives the failure of two availability zones, you need instances of App1 running in at least three availability zones.

Host Groups per AZ: Since App1 needs to run on dedicated hosts, and you want to isolate those dedicated hosts by availability zone for better fault tolerance, you would create a separate host group in each of the three availability zones.

Number of Virtual Machine Scale Sets: 1

Requirement: App1 must be hosted on Azure virtual machines that support automatic scaling.

Requirement: App1 must maintain availability if two availability zones in the local Azure region fail.

VMSS Capability: A single Azure Virtual Machine Scale Set (VMSS) can be configured to span across multiple availability zones. This allows you to achieve both automatic scaling and high availability across the three availability zones where your dedicated hosts reside.

Why not the other options?

Number of Host Groups: 1: Having only one host group means all your dedicated hosts reside in a single availability zone. If that availability zone fails, App1 is down entirely, which doesn’t meet the requirement of surviving two AZ failures.

Number of Host Groups: 2: Having two host groups allows you to survive one AZ failure, but not the failure of two.

Number of Host Groups: 6: While technically possible, it’s unnecessary and increases complexity and cost without providing additional benefit for this specific requirement. You only need to cover three availability zones.

Number of Virtual Machine Scale Sets: 0: You need a VMSS to achieve automatic scaling, which is a stated requirement.

Number of Virtual Machine Scale Sets: 3: While you could create a separate VMSS for each host group/availability zone, it adds unnecessary management overhead. A single VMSS spanning the zones is the recommended and more efficient approach for this scenario.

Therefore, the correct answer is:

Number of host groups: 3

Number of virtual machine scale sets: 1
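
A rough sketch of creating the three zonal host groups with the azure-mgmt-compute Python SDK is shown below; the subscription, resource group, names, region, and fault domain count are assumptions, and the single VM scale set that spans zones 1-3 would be created separately.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DedicatedHostGroup

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One dedicated host group per availability zone, so App1 can survive the
# loss of any two zones.
for zone in ("1", "2", "3"):
    group = client.dedicated_host_groups.create_or_update(
        "rg-app1",               # hypothetical resource group
        f"hg-app1-zone{zone}",   # hypothetical host group name
        DedicatedHostGroup(
            location="eastus",
            zones=[zone],
            platform_fault_domain_count=1,
        ),
    )
    print(group.name, group.zones)
```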

24
Q

You plan to migrate App1 to Azure.
You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.
What should you include in the recommendation?

A. a private endpoint
B. a service endpoint that has a service endpoint policy
C. Azure public peering for an ExpressRoute circuit
D. Microsoft peering for an ExpressRoute circuit

A

The most appropriate recommendation to meet the security and compliance requirements for network connectivity to the Azure Storage account hosting App1 data is A. a private endpoint.

Here’s why:

A. a private endpoint: This is the most secure option. A private endpoint creates a network interface within your virtual network for the storage account. This effectively brings the storage service into your private network, eliminating the public endpoint entirely. This directly fulfills the requirement to “prevent access to the public endpoint of the Azure Storage account.” On-premises users can then access the storage account through the existing ExpressRoute connection, keeping all traffic within the private network.

Let’s look at why the other options are less suitable:

B. a service endpoint that has a service endpoint policy: Service endpoints allow you to restrict network access to the storage account to specific subnets within your virtual network. While it adds a layer of security, it does not eliminate the public endpoint. Traffic from the on-premises network would still technically traverse the public endpoint, even if it’s restricted by the service endpoint policy. This doesn’t fully meet the requirement of preventing public endpoint access.

C. Azure public peering for an ExpressRoute circuit: Azure public peering allows you to access Azure public services (like storage) over your ExpressRoute connection. However, it doesn’t inherently prevent public access to the storage account. The storage account would still have a public endpoint accessible from the internet. Public peering is about providing a private path for accessing public services, not about making those services private.

D. Microsoft peering for an ExpressRoute circuit: Microsoft peering allows you to access Microsoft 365 services and Azure PaaS services (including Storage) over your ExpressRoute connection. Similar to public peering, it doesn’t inherently prevent public access to the storage account’s public endpoint. It provides a private path for accessing these services but doesn’t eliminate the public accessibility.

Therefore, the single best answer is A. a private endpoint.

If you were forced to choose up to three, the reasoning would be:

A. a private endpoint (Most Important): Directly addresses the requirement to prevent public endpoint access.

B. a service endpoint that has a service endpoint policy (Secondary Layer): While it doesn’t eliminate the public endpoint, it adds an extra layer of network-level security by restricting access to specific subnets. This can be used in conjunction with a private endpoint for defense in depth, or as a less secure alternative if private endpoints are not feasible for some reason (though they are generally recommended for this scenario).

Neither C nor D are suitable for meeting the primary requirement of preventing public access. They facilitate private connectivity to Azure but don’t make the storage account private.

In conclusion, for this specific requirement, the most accurate and secure solution is A. a private endpoint.
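
As a quick post-deployment check, you could confirm from a VM inside the virtual network (or from on-premises over ExpressRoute private peering, once DNS forwarding is configured) that the storage hostname now resolves to a private address. A minimal sketch, with a hypothetical account name:

    # Sketch: verify that the storage account's blob endpoint resolves to a private
    # (RFC 1918) address via the privatelink DNS zone. The account name is hypothetical.
    import ipaddress
    import socket

    host = "storapp1data.blob.core.windows.net"
    ip = socket.gethostbyname(host)
    print(f"{host} -> {ip}")
    print("private endpoint in use" if ipaddress.ip_address(ip).is_private
          else "still resolving to the public endpoint")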

25
Q

You migrate App1 to Azure.

You need to ensure that the data storage for App1 meets the security and compliance requirements.

What should you do?

Create an access policy for the blob
Modify the access level of the blob service.
Implement Azure resource locks.
Create Azure RBAC assignments.

A

The correct answer is Create an access policy for the blob.

Here’s why:

Security and Compliance Requirement: The core requirement is to prevent modification of data for three years after it’s written, while still allowing new data to be added. This is a classic use case for immutability.

Azure Blob Storage Immutability: Azure Blob Storage offers a feature called Immutable Storage with Policy Lock. This allows you to set time-based retention policies or legal holds on blob data. Once a policy is set and locked, blobs cannot be modified or deleted within the retention period.

How Access Policies Relate: In the context of Azure Blob Storage immutability, you create an immutability policy which is a type of access policy that governs the retention period and immutability rules for the blob or container.

Let’s look at why the other options are incorrect:

Modify the access level of the blob service: The access level controls anonymous public read access to blobs and containers, and the access tiers (Hot, Cool, Archive) only affect storage cost and access frequency. Neither provides immutability.

Implement Azure resource locks: Azure resource locks prevent administrative operations on the storage account or container (like deleting it). They do not prevent modifications to the data within the blobs.

Create Azure RBAC assignments: RBAC controls who has permission to access and manage the storage account and its contents. While you can restrict write access, it doesn’t enforce time-based immutability. A user with write access could still modify data unless an immutability policy is in place.

Therefore, to ensure data immutability for three years as required, you need to create an immutability policy (a type of access policy) for the blob container or individual blobs where App1’s data is stored.
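
As a rough illustration, a time-based retention policy of three years (1,095 days) could be applied to the container with the azure-mgmt-storage Python SDK. The names below are assumptions and the exact method signature can vary by SDK version:

    # Sketch: apply a time-based retention (immutability) policy to the container
    # that holds App1's data. Three years is modeled as 1095 days; all names are
    # hypothetical.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    client.blob_containers.create_or_update_immutability_policy(
        resource_group_name="rg-app1",
        account_name="storapp1data",
        container_name="app1-data",
        parameters={"immutability_period_since_creation_in_days": 1095},
    )
    # After verifying the policy, lock it so the retention interval can no longer
    # be shortened or removed (locking requires the policy's current etag).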

26
Q

HOTSPOT

How should the migrated databases DB1 and DB2 be implemented in Azure?
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose

A

Here’s how the migrated databases DB1 and DB2 should be implemented in Azure, based on the requirements:

Database: Azure SQL Managed Instance

Service tier: Business Critical

Explanation:

Azure SQL Managed Instance:

Maintain availability if two availability zones in the local Azure region fail: Azure SQL Managed Instance in the Business Critical tier supports zone redundancy. This means your instances are placed across multiple availability zones in the same region, ensuring availability even if one or two zones fail.

Fail over automatically: Business Critical Managed Instances have built-in automatic failover to a secondary replica in a different availability zone.

Minimize I/O latency: The Business Critical service tier provides the lowest I/O latency due to its premium-performance local SSD storage.

Closer to On-premises SQL Server: Managed Instance provides a near 100% compatibility with on-premises SQL Server, making migration easier.

Business Critical Service Tier:

Addresses all resiliency requirements: As explained above, it provides the necessary availability, automatic failover, and low latency.

Supports Transparent Data Encryption (TDE): This is a requirement for all production Azure SQL databases, and Business Critical supports it.

Why other options are less suitable:

A single Azure SQL database: While it offers high availability, it typically relies on replicating within the same availability zone or to a secondary region, not across multiple availability zones within the same region for the base General Purpose tier. Hyperscale can offer zone redundancy, but Business Critical is generally better for minimizing I/O latency.

Azure SQL Database elastic pool: Elastic pools are for managing resources for multiple databases. While individual databases within the pool can have high availability, the pool itself doesn’t inherently provide the multi-AZ failover required for DB1 and DB2 individually.

Hyperscale: While Hyperscale offers zone redundancy and is suitable for very large databases, it might not offer the same level of consistently low I/O latency as the Business Critical tier, which is optimized for transactional workloads.

General Purpose: Does not offer the multi-AZ resilience needed to survive two availability zone failures.

Therefore, Azure SQL Managed Instance with the Business Critical service tier is the best choice to meet all the stated resiliency and performance requirements for DB1 and DB2.

26
Q

DRAG DROP -
You need to configure an Azure policy to ensure that the Azure SQL databases have Transparent Data Encryption (TDE) enabled. The solution must meet the security and compliance requirements.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Actions
Create an Azure policy definition that uses the deployIfNotExists effect.
Invoke a remediation task.
Create an Azure policy definition that uses the Modify effect.
Create an Azure policy assignment.
Create a user-assigned managed identity.
Answer Area

A

Here’s the correct sequence of actions to configure an Azure Policy for enabling Transparent Data Encryption (TDE) on Azure SQL databases:

Answer Area:

Create an Azure policy definition that uses the deployIfNotExists effect.

Create an Azure policy assignment.

Invoke a remediation task.

Explanation of the steps:

Create an Azure policy definition that uses the deployIfNotExists effect:

This is the foundational step. You need to define the policy itself.

The deployIfNotExists effect is crucial here. It allows the policy to automatically deploy resources (in this case, enable TDE) if the specified condition (TDE not enabled) is met.

The policy definition will include the logic to identify Azure SQL databases and check their TDE status. It will also contain the deployment details (typically an ARM template or a set of operations) to enable TDE.

Create an Azure policy assignment:

Once the policy definition is created, you need to assign it to a specific scope (management group, subscription, or resource group).

This tells Azure where the policy should be enforced. When you create the assignment, the policy will start evaluating resources within that scope.

Invoke a remediation task:

The deployIfNotExists effect only applies to new or updated resources after the policy assignment.

To bring existing non-compliant Azure SQL databases into compliance, you need to run a remediation task.

The remediation task will evaluate the resources within the policy’s scope and apply the deployment defined in the deployIfNotExists policy to the non-compliant ones, effectively enabling TDE on them.

Why the other options are not in this sequence:

Create an Azure policy definition that uses the Modify effect: While the Modify effect can also be used for some configuration changes, deployIfNotExists is generally more suitable for ensuring a specific resource or setting exists (like TDE being enabled). Modify is better for changing existing properties.

Create a user-assigned managed identity: While managed identities are often used with Azure Policy for the deployment aspect of deployIfNotExists, the system-assigned managed identity created automatically during the policy assignment is usually sufficient for this scenario. You wouldn’t necessarily create a separate user-assigned identity as a prerequisite for the basic functionality. However, for more complex scenarios or specific permissions, a user-assigned identity might be needed.

Invoke a remediation task: This step is performed after the policy definition and assignment to address existing resources.
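
To make the shape of such a definition concrete, here is an abbreviated sketch of the policy rule a deployIfNotExists definition for TDE might contain, written as a Python dictionary that mirrors the JSON structure. The existence condition and the embedded deployment are simplified placeholders, not the exact built-in policy:

    # Abbreviated sketch of a deployIfNotExists policy rule for TDE. The
    # existenceCondition alias and the embedded ARM template are placeholders.
    policy_rule = {
        "if": {
            "field": "type",
            "equals": "Microsoft.Sql/servers/databases",
        },
        "then": {
            "effect": "deployIfNotExists",
            "details": {
                "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
                "existenceCondition": {
                    "field": "Microsoft.Sql/transparentDataEncryption.status",
                    "equals": "Enabled",
                },
                "roleDefinitionIds": [
                    # role the policy's managed identity needs to run the deployment
                    "/providers/Microsoft.Authorization/roleDefinitions/<role-guid>",
                ],
                "deployment": {
                    "properties": {
                        "mode": "incremental",
                        # The embedded ARM template (omitted) would set the database's
                        # transparentDataEncryption status to "Enabled".
                        "template": {},
                    },
                },
            },
        },
    }

The managed identity created during the policy assignment is granted the roles listed under roleDefinitionIds, and the remediation task reuses the same deployment to bring existing databases into compliance.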

27
Q

You plan to deploy multiple instances of an Azure web app across several Azure regions.

You need to design an access solution for the app.

The solution must meet the following replication requirements:

✑ Support rate limiting.

✑ Balance requests between all instances.

✑ Ensure that users can access the app in the event of a regional outage.

Solution: You use Azure Application Gateway to provide access to the app.

Does this meet the goal?

Yes
No

A

No, this does not fully meet the goal.

Here’s why:

Support Rate Limiting: Azure Application Gateway does support rate limiting through Web Application Firewall (WAF) policies. So, this requirement is met.

Balance Requests Between All Instances: Azure Application Gateway can load balance requests across multiple backend instances. This requirement is met.

Ensure that users can access the app in the event of a regional outage: This is where the proposed solution falls short. While Application Gateway can load balance within a region, it is a regional service. If the Azure region where the Application Gateway is deployed experiences an outage, the Application Gateway itself will be unavailable, and users will not be able to access the app.

To meet the requirement of regional outage resilience, you would need a more comprehensive solution that includes:

Deploying Application Gateway instances in multiple Azure regions.

Using a global load balancer like Azure Front Door or Azure Traffic Manager in front of the regional Application Gateways. This global service can direct traffic to the healthy regional gateway in case of a regional failure.

In summary, while Azure Application Gateway handles load balancing and rate limiting well, it doesn’t inherently provide regional failover capabilities on its own.

28
Q

You have an Azure subscription that contains a Basic Azure Virtual WAN named VirtualWAN1 and the virtual hubs shown in the following table.

Name Azure region
Hub1 US East
Hub2 US West

You have an ExpressRoute circuit in the US East region.

You need to create an ExpressRoute association to VirtualWAN1.

What should you do first?

Upgrade VirtualWAN1 to Standard.
Create a gateway on Hub1.
Create a hub virtual network in US East.
Enable the ExpressRoute premium add-on.

A

The correct first step is to Upgrade VirtualWAN1 to Standard.

Here’s why:

Basic Virtual WAN Limitations: A Basic Azure Virtual WAN does not support ExpressRoute connections. You need a Standard Virtual WAN to establish an ExpressRoute association.

Let’s look at why the other options are incorrect as the first step:

Create a gateway on Hub1: You will eventually need to create an ExpressRoute gateway within Hub1, but you cannot do this on a Basic Virtual WAN. You need to upgrade to Standard first to unlock this capability.

Create a hub virtual network in US East: The prompt states that Hub1 already exists in US East. You don’t need to create a separate hub virtual network. Hubs are created within the Virtual WAN itself.

Enable the ExpressRoute premium add-on: While the premium add-on enables global connectivity for ExpressRoute, it’s not a prerequisite for establishing a basic connection within the same region as the ExpressRoute circuit (US East in this case). The fundamental issue is the Basic Virtual WAN tier.

Therefore, upgrading the Virtual WAN to Standard is the necessary first step to enable ExpressRoute connectivity.

29
Q

You are designing a SQL database solution. The solution will include 20 databases that will be 20 GB each and have varying usage patterns. You need to recommend a database platform to host the databases.

The solution must meet the following requirements:

  • The compute resources allocated to the databases must scale dynamically.
  • The solution must meet an SLA of 99.99% uptime.
  • The solution must have reserved capacity.
  • Compute charges must be minimized.

What should you include in the recommendation?

20 databases on a Microsoft SQL server that runs on an Azure virtual machine
20 instances of Azure SQL Database serverless
20 databases on a Microsoft SQL server that runs on an Azure virtual machine in an availability set
an elastic pool that contains 20 Azure SQL databases

A

The correct recommendation is an elastic pool that contains 20 Azure SQL databases. Here’s why:

Dynamic Scaling: Elastic pools allow multiple databases to share a pool of resources (DTUs or vCores). The compute resources are dynamically allocated to databases within the pool based on their needs, scaling up or down automatically. This directly addresses the requirement for dynamic scaling.

SLA of 99.99% Uptime: Azure SQL Database, including databases within an elastic pool, provides a 99.99% uptime SLA.

Reserved Capacity: You can purchase reserved capacity for the vCores used by an elastic pool. This provides compute capacity at a reduced cost compared to pay-as-you-go pricing, fulfilling the reserved capacity requirement.

Minimize Compute Charges: Elastic pools are cost-effective for scenarios with multiple databases that have varying usage patterns. Instead of provisioning resources for the peak load of each individual database, you provision for the combined peak load of the pool, which is often lower. This helps minimize overall compute charges.

Why other options are less suitable:

20 databases on a Microsoft SQL server that runs on an Azure virtual machine: While you can scale the VM, it’s not as dynamic as an elastic pool. You’d likely need to over-provision the VM to handle peak loads, leading to higher costs. Also, achieving 99.99% uptime requires setting up Availability Sets and configuring SQL Server Always On Availability Groups, increasing complexity.

20 instances of Azure SQL Database serverless: While serverless offers dynamic scaling and cost optimization for individual databases, it doesn’t directly support reserved capacity in the same way as elastic pools. Also, managing 20 individual serverless instances might increase administrative overhead compared to a single elastic pool.

20 databases on a Microsoft SQL server that runs on an Azure virtual machine in an availability set: Availability sets improve uptime but don’t provide the dynamic scaling and cost optimization of an elastic pool. You’d still need to manually scale the VM and potentially over-provision resources.

Therefore, an elastic pool provides the best balance of dynamic scaling, high availability, reserved capacity options, and cost minimization for the described scenario.
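
A minimal sketch of provisioning such a pool with the azure-mgmt-sql Python SDK; the server name, region, sizing, and per-database limits are illustrative assumptions, and the exact parameter shape can vary by SDK version:

    # Sketch: create a vCore-based elastic pool that the 20 databases can share.
    # Names and sizing are hypothetical; reserved capacity is purchased separately
    # against the pool's vCores.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.sql import SqlManagementClient

    client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

    client.elastic_pools.begin_create_or_update(
        resource_group_name="rg-data",
        server_name="sqlserver1",
        elastic_pool_name="pool1",
        parameters={
            "location": "eastus",
            "sku": {"name": "GP_Gen5", "tier": "GeneralPurpose", "capacity": 8},
            "per_database_settings": {"min_capacity": 0, "max_capacity": 2},
        },
    ).result()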

30
Q

HOTSPOT

You configure OAuth2 authorization in API Management as shown in the following exhibit.
Add OAuth2 service
API Management service

Display name*
Unique name used to reference this authorization server on the APIs.

Id*
(Id input field)

Description
(Authorization server description input field)

Client registration page URL*
https://contoso.com/register

Authorization grant types
(Checkboxes for the following options:)

Authorization code (selected)
Implicit
Resource owner password
Client credentials
Authorization endpoint URL*
https://login.microsoftonline.com/contosoonmicrosoft.com/…

[ ] Support state parameter

Authorization request method

GET (selected)
POST
Token endpoint URL*
(Token endpoint input field)

[Create] (Button at the bottom)

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
The selected authorization grant type is for [answer choice].
Background services
Headless device authentication
Web applications
To enable custom data in the grant flow, select [answer choice].
Client credentials
Resource owner password
Support state parameter

A

Here’s the breakdown of the answers based on the provided information:

The selected authorization grant type is for: Web applications

Explanation: The “Authorization code” grant type is the standard and most secure method for web applications to obtain access tokens on behalf of a user. It involves a redirect flow where the user authenticates with the authorization server, and the application receives an authorization code that it can then exchange for an access token.

To enable custom data in the grant flow, select: Support state parameter

Explanation: The “state” parameter in the OAuth 2.0 authorization request is used to maintain state between the authorization request and the callback. While its primary purpose is to prevent CSRF attacks, it can also be used by the application to pass custom data that will be returned unchanged in the redirect URI after authorization. This allows the application to correlate the authorization response with the original request.
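
To illustrate, a minimal sketch of how a client could build the authorization-code request and carry opaque custom data in the state parameter. The client ID, redirect URI, and tenant are placeholder assumptions:

    # Sketch: build an OAuth 2.0 authorization-code request whose "state" value
    # carries CSRF protection plus custom correlation data. All values are placeholders.
    import base64
    import json
    import secrets
    from urllib.parse import urlencode

    custom_data = {"csrf": secrets.token_urlsafe(16), "cart_id": "12345"}
    state = base64.urlsafe_b64encode(json.dumps(custom_data).encode()).decode()

    params = {
        "client_id": "<client-id>",
        "response_type": "code",          # authorization code grant
        "redirect_uri": "https://contoso.com/callback",
        "scope": "openid profile",
        "state": state,                   # returned unchanged in the redirect
    }
    auth_url = ("https://login.microsoftonline.com/<tenant>/oauth2/v2.0/authorize?"
                + urlencode(params))
    print(auth_url)

When the redirect comes back, the app decodes the state value, checks the CSRF token, and recovers its custom data.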

31
Q

You plan to create an Azure Storage account that will host file shares. The shares will be accessed from on-premises applications that are transaction-intensive.

You need to recommend a solution to minimize latency when accessing the file shares. The solution must provide the highest-level of resiliency for the selected storage tier.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage tier:
Hot
Premium
Transaction optimized
Resiliency:
Geo-redundant storage (GRS)
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)

A

Here’s the breakdown of the recommended solution:

Storage tier: Premium

Resiliency: Zone-redundant storage (ZRS)

Explanation:

Storage tier: Premium

Minimizing Latency: The Premium tier is backed by SSDs and is designed for I/O-intensive workloads, providing consistent, low-latency performance, which is crucial for transaction-intensive applications accessing file shares. Hot and Transaction optimized are standard (HDD-backed) file share tiers; they are suitable for frequent access and heavy transaction volumes, but they cannot match the consistently low latency of Premium.

Resiliency: Zone-redundant storage (ZRS)

Highest Level of Resiliency for Premium: For Premium file shares, Zone-redundant storage (ZRS) provides the highest level of resiliency. ZRS synchronously replicates your data across three availability zones in the Azure region. This protects your data from data center failures, offering significantly better resiliency than Locally-redundant storage (LRS), which only replicates within a single data center. While Geo-redundant storage (GRS) offers regional disaster recovery, it’s not an option for Premium file shares. Premium file shares only support LRS and ZRS. Given the requirement for the “highest level of resiliency for the selected storage tier,” ZRS is the correct choice for Premium.

32
Q

You are designing an Azure Cosmos DB solution that will host multiple writable replicas in multiple Azure regions.

You need to recommend the strongest database consistency level for the design.

The solution must meet the following requirements:

✑ Provide a latency-based Service Level Agreement (SLA) for writes.

✑ Support multiple regions.

Which consistency level should you recommend?

bounded staleness
strong
session
consistent prefix

A

The correct consistency level to recommend is bounded staleness. Here’s why:

Strong Consistency: While providing the strongest consistency, strong consistency in a globally distributed database like Azure Cosmos DB with multiple writable regions comes with a significant trade-off: higher latency for writes. Every write operation needs to be committed across all replicas before it’s acknowledged, which introduces network latency between regions. This directly contradicts the requirement for a latency-based SLA for writes.

Bounded Staleness Consistency: This consistency level offers a good balance between consistency and availability/latency. It guarantees that reads will lag behind writes by no more than a specified time duration or number of versions. This allows for lower write latency as writes don’t need to be immediately reflected across all regions. Azure Cosmos DB provides latency SLAs for reads with bounded staleness, making it suitable for the requirement.

Session Consistency: This is the most widely used consistency level for single-region applications. It guarantees that within a single client session, you will always read your own writes, and reads are monotonic. However, it doesn’t provide the same level of consistency guarantees across multiple regions for all users and doesn’t directly offer a latency-based SLA for writes across all regions.

Consistent Prefix Consistency: This guarantees that reads will see prefixes of writes in the order they were written. While stronger than session consistency, it still doesn’t offer the same level of consistency guarantees as bounded staleness or strong consistency across multiple regions and doesn’t have a direct latency-based write SLA for multi-region writes.

In summary: Bounded staleness provides the strongest consistency level that can still meet the requirement of a latency-based SLA for writes in a multi-region writable Azure Cosmos DB setup. Strong consistency would likely violate the latency SLA, while session and consistent prefix are weaker consistency models.

33
Q

You have an on-premises Microsoft SQL server named SQL1 that hosts 50 databases.

You plan to migrate SQL1 to Azure SQL Managed Instance.

You need to perform an offline migration of SQL1. The solution must minimize administrative effort.

What should you include in the solution?
SQL Server Migration Assistant (SSMA)
Azure Migrate
Data Migration Assistant (DMA)
Azure Database Migration Service

A

The correct answer is Azure Database Migration Service (DMS).

Here’s why:

Azure Database Migration Service (DMS): DMS is a fully managed service designed specifically for migrating databases to Azure with minimal downtime. For an offline migration to Azure SQL Managed Instance, DMS offers the following advantages:

Simplified and Automated Process: DMS automates many of the steps involved in migrating SQL Server databases to Managed Instance, reducing manual effort.

Schema and Data Migration: DMS handles both schema and data migration efficiently.

Monitoring and Management: It provides monitoring capabilities during the migration process.

Scalability: DMS can handle the migration of multiple databases efficiently.

Specifically Designed for Azure: It’s built to work seamlessly with Azure SQL Managed Instance.

Let’s look at why the other options are less suitable for minimizing administrative effort in an offline migration to Azure SQL Managed Instance:

SQL Server Migration Assistant (SSMA): While SSMA is a free tool for migrating databases to Azure SQL Database and SQL Server on Azure VMs, it’s a more manual process compared to DMS. You would need to manually configure and execute the migration for each of the 50 databases, increasing administrative effort.

Azure Migrate: Azure Migrate is a broader service for migrating various resources to Azure, including servers and databases. While it can be used for SQL Server migration, it’s generally more focused on migrating entire virtual machines or physical servers hosting SQL Server. For an offline migration of just the databases to Managed Instance, DMS is a more direct and specialized solution, reducing administrative overhead.

Data Migration Assistant (DMA): DMA is primarily a tool for assessing and identifying compatibility issues before migration. While it can help with schema and data migration, it’s not a fully managed service like DMS and requires more manual steps to orchestrate the migration of multiple databases to Managed Instance.

Therefore, Azure Database Migration Service (DMS) is the recommended solution to minimize administrative effort for an offline migration of 50 databases from an on-premises SQL Server to Azure SQL Managed Instance.

34
Q

HOTSPOT

You plan to migrate App1 to Azure.

You need to recommend a storage solution for App1 that meets the security and compliance requirements.

Which type of storage should you recommend, and how should you recommend configuring the storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Storage account type:
Premium page blobs
Premium file shares
Standard general-purpose v2
Configuration:
NFSv3
Large file shares
Hierarchical namespace

A

Here’s the breakdown of the recommended storage solution for App1:

Storage account type: Standard general-purpose v2

Configuration: Hierarchical namespace

Explanation:

Storage account type: Standard general-purpose v2

Cost-effectiveness: Standard general-purpose v2 accounts are the recommended base for most storage scenarios and are generally more cost-effective than Premium options for the described requirements.

Blob Storage Capabilities: This type of account allows you to leverage Azure Blob Storage, which is suitable for storing large amounts of unstructured data, including the Hadoop-compatible data for App1. Crucially, Blob Storage supports the immutability feature needed for compliance.

Configuration: Hierarchical namespace

Azure Data Lake Storage Gen2: Enabling the hierarchical namespace feature on a Standard general-purpose v2 account transforms it into an Azure Data Lake Storage Gen2 account.

POSIX ACL Support: This is the key reason for selecting this configuration. Azure Data Lake Storage Gen2 provides a hierarchical file system on top of blob storage and supports POSIX-compliant access control lists (ACLs), directly addressing the requirement for compatibility with App1’s existing storage solution.

On-premises Access: You can securely access Azure Data Lake Storage Gen2 from on-premises over the existing ExpressRoute connection, for example through the NFS 3.0 protocol, the ABFS driver used by Hadoop, or the Azure Data Lake Storage SDKs and REST APIs.

Preventing Public Access: You can easily prevent public access to the storage account using Azure networking features like private endpoints, firewall rules, and virtual network service endpoints.

Immutability: Azure Blob Storage, which underlies Azure Data Lake Storage Gen2, supports Immutable Storage with Policy Lock. This feature allows you to create time-based retention policies or legal holds to prevent the modification or deletion of data for a specified period (the three years required in this case).

Why other options are less suitable:

Premium page blobs: Primarily used for Azure Virtual Machine disks and don’t fit the requirements for file sharing and on-premises access.

Premium file shares: While they offer low-latency access via SMB, they don’t inherently provide the same level of immutability features as Blob Storage and don’t directly support POSIX ACLs.

NFSv3: While you can access Azure Blob Storage using NFSv3, it’s a protocol choice for access, not a fundamental configuration of the storage account itself. It doesn’t inherently provide immutability.

Large file shares: Refers to the capacity of Azure File Storage, not the type of storage needed for App1’s Hadoop-compatible data and immutability requirements.

Therefore, the optimal solution is to use a Standard general-purpose v2 storage account with the Hierarchical namespace enabled to leverage Azure Data Lake Storage Gen2’s POSIX ACL support and the immutability features of the underlying Blob Storage.
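
A minimal sketch of setting a POSIX-style ACL on a directory once the hierarchical namespace is enabled, using the azure-storage-file-datalake Python SDK. The account, file system, directory, and object ID are hypothetical:

    # Sketch: apply POSIX ACLs to a Data Lake Storage Gen2 directory.
    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://storapp1data.dfs.core.windows.net",
        credential=DefaultAzureCredential(),
    )
    directory = service.get_file_system_client("app1-data").get_directory_client("raw")

    # Owner rwx, owning group r-x, everyone else no access, plus one named user.
    directory.set_access_control(
        acl="user::rwx,group::r-x,other::---,"
            "user:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:r-x"
    )
    print(directory.get_access_control()["acl"])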

35
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned membership. Group1 has 50 members, including 20 guest users.

You need to recommend a solution for evaluating the membership of Group1.

The solution must meet the following requirements:

  • The evaluation must be repeated automatically every three months.
  • Every member must be able to report whether they need to be in Group1.
  • Users who report that they do not need to be in Group1 must be removed from Group1 automatically.
  • Users who do not report whether they need to be in Group1 must be removed from Group1 automatically.

What should you include in the recommendation?

Implement Azure AD Identity Protection.
Change the Membership type of Group1 to Dynamic User.
Implement Azure AD Privileged Identity Management.
Create an access review.

A

The correct recommendation is Create an access review.

Here’s why:

Recurring Evaluation: Azure AD access reviews can be configured to run on a recurring schedule, such as every three months, meeting the first requirement.

Self-Attestation: Access reviews allow you to configure the review type to be “Members review their own access.” This enables each of the 50 members, including the guest users, to report whether they need to be in Group1.

Automatic Removal (If Not Needed): When configuring the access review, you can set the “Upon completion settings” to “Apply results.” You can further configure it to “Remove access” for users who deny their need for continued membership.

Automatic Removal (If No Response): Within the “Upon completion settings,” you can also configure the review to “Remove access” for users who don’t respond within the specified review period.

Let’s look at why the other options are not the best fit:

Implement Azure AD Identity Protection: Azure AD Identity Protection focuses on detecting, investigating, and remediating risk-based identity detections and vulnerabilities. It doesn’t directly address the requirement for periodic group membership reviews and self-attestation.

Change the Membership type of Group1 to Dynamic User: Dynamic groups determine membership based on rules. While you could potentially create a complex rule, it wouldn’t inherently allow users to self-attest or provide the automated removal based on non-response. Dynamic groups are more about automated membership based on attributes, not self-governance.

Implement Azure AD Privileged Identity Management (PIM): PIM is primarily focused on managing, controlling, and monitoring access to privileged roles and resources. While PIM includes access reviews, it’s more geared towards managing elevated access rather than regular group membership for all users, including guest users. Using the standard access review feature is more appropriate for this scenario.

36
Q

HOTSPOT

You have an Azure subscription that is linked to an Azure Active Directory Premium Plan 2 tenant.

The tenant has multi-factor authentication (MFA) enabled for all users.

You have the named locations shown in the following table.
Name IP address range Trusted
NY 192.168.2.0/27 Yes
DC 192.168.1.0/27 No
LA 192.168.3.0/27 No
You have the users shown in the following table.
Name Device operating system User-risk level Matching compliance policies
User1 Windows 10 High None
User2 Windows 10 Medium None
User3 macOS Low None
You plan to deploy the Conditional Access policies shown in the following table.
Name Assignment Conditions: Locations Conditions: User risk Conditions: Sign-in risk Access Control: Grant
CA1 All users Trusted locations High, Medium None Block access
CA2 All users NY None High, Medium Block access
CA3 All users LA None None Grant access: Require device to be marked as compliant

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements
To ensure that the conditions in CA1 can be evaluated, you must enforce an Azure Active Directory (Azure AD) Identity Protection user risk policy.
To ensure that the conditions in CA2 can be evaluated, you must enforce an Azure Active Directory (Azure AD) Identity Protection sign-in risk policy.
To ensure that the conditions in CA3 can be evaluated, you must deploy Microsoft Endpoint Manager.

A

Here’s the breakdown of the answers for each statement:

Statements:

To ensure that the conditions in CA1 can be evaluated, you must enforce an Azure Active Directory (Azure AD) Identity Protection user risk policy. Yes

Explanation: CA1’s conditions are based on “User risk,” specifically “High” and “Medium.” User risk is a core feature of Azure AD Identity Protection. Identity Protection analyzes user sign-in patterns and signals to assign a risk level. Without an Identity Protection user risk policy enabled and running, there would be no user risk levels to evaluate in the Conditional Access policy.

To ensure that the conditions in CA2 can be evaluated, you must enforce an Azure Active Directory (Azure AD) Identity Protection sign-in risk policy. Yes

Explanation: CA2’s conditions are based on “Sign-in risk,” specifically “High” and “Medium.” Similar to user risk, sign-in risk is a feature of Azure AD Identity Protection. It analyzes the characteristics of a specific sign-in attempt (e.g., unusual location, unfamiliar device) to determine the risk. An Identity Protection sign-in risk policy needs to be enabled to provide these risk levels for the Conditional Access policy to evaluate.

To ensure that the conditions in CA3 can be evaluated, you must deploy Microsoft Endpoint Manager. Yes

Explanation: CA3’s grant control is “Require device to be marked as compliant.” Device compliance is a feature managed by a Mobile Device Management (MDM) solution. In the Microsoft ecosystem, Microsoft Endpoint Manager (specifically Intune) is the primary service used to manage and enforce device compliance policies. For Conditional Access to evaluate if a device is compliant, that device needs to be enrolled and its compliance status reported by Microsoft Endpoint Manager.

37
Q

You have an on-premises application named App1 that uses an Oracle database.

You plan to use Azure Databricks to transform and load data from App1 to an Azure Synapse Analytics instance.

You need to ensure that the App1 data is available to Databricks.

Which two Azure services should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Azure Data Box Edge
Azure Data Lake Storage
Azure Data Factory
Azure Data Box Gateway
Azure Import/Export service

A

The two Azure services that should be included in the solution are:

Azure Data Factory

Azure Data Lake Storage

Here’s why:

Azure Data Factory: Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows for orchestrating and automating data movement and data transformation. In this scenario, Azure Data Factory would be used to:

Connect to the on-premises Oracle database (App1). ADF has connectors for various databases, including Oracle.

Extract data from the Oracle database.

Load the extracted data into Azure Data Lake Storage.

Azure Data Lake Storage: Azure Data Lake Storage is a highly scalable and cost-effective data lake solution built on Azure Blob Storage. It’s optimized for big data analytics workloads, which is the purpose of using Azure Databricks.

Store the data extracted from App1: ADF would land the data from the Oracle database into Azure Data Lake Storage.

Provide accessible storage for Azure Databricks: Azure Databricks can natively connect to and process data stored in Azure Data Lake Storage.

Why the other options are not the best fit:

Azure Data Box Edge/Azure Data Box Gateway/Azure Import/Export service: These services are primarily used for large-scale data transfers, especially when network bandwidth is a constraint. While they could be used, they add unnecessary complexity for a scenario where ongoing data availability for Databricks is required. Azure Data Factory provides a more streamlined and automated approach for this type of data integration.

38
Q

You are designing an order processing system in Azure that will contain the Azure resources shown in the following table.

Name | Type | Purpose
App1 | Web app | Processes customer orders
Function1 | Function | Check product availability at vendor 1
Function2 | Function | Check product availability at vendor 2
storage1 | Storage account | Stores order processing logs

The order processing system will have the following transaction flow:

✑ A customer will place an order by using App1.

✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.

✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.

✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.

✑ All the steps of the transaction will be logged to storage1.

Which type of resource should you recommend for the integration component?

an Azure Data Factory pipeline
an Azure Service Bus queue
an Azure Event Grid domain
an Azure Event Hubs capture

A

The most suitable type of resource for the integration component is an Azure Event Grid domain. Here’s why:

Event-Driven Architecture: The transaction flow describes an event-driven pattern where App1 generates a message (an event) that needs to be routed to the appropriate function. Event Grid is specifically designed for this type of architecture.

Conditional Routing: Event Grid allows you to define event subscriptions with filters based on the event data. You can configure subscriptions within the Event Grid domain to route the messages to either Function1 or Function2 based on the “type of order” information contained within the message generated by App1.

Scalability and Reliability: Event Grid is a highly scalable and reliable service, ensuring messages are delivered efficiently.

Loose Coupling: Using Event Grid promotes loose coupling between App1 and the functions. App1 only needs to publish the event; it doesn’t need to know the specific functions that will process it.

Here’s why the other options are less suitable:

Azure Data Factory pipeline: While ADF can handle data movement and transformation, it’s more oriented towards batch processing and scheduled workflows. It’s not the ideal choice for real-time event routing and conditional triggering of functions.

Azure Service Bus queue: Service Bus queues are excellent for reliable asynchronous messaging. However, to achieve the conditional triggering of functions, you would need an additional component (like an Azure Function or Logic App) to listen to the queue, inspect the message, and then invoke the appropriate function. Event Grid provides this routing capability directly.

Azure Event Hubs capture: Event Hubs is designed for high-throughput ingestion of event streams, often for telemetry or analytics purposes. While it can handle events, its primary focus is not on conditional routing and triggering of individual functions based on event content. The “capture” feature is for persisting events to storage, which isn’t the core requirement here.
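
A minimal sketch of App1 publishing an order event to an Event Grid domain with the azure-eventgrid Python SDK; subscriptions on the domain then filter on the event type (or data fields) to route messages to Function1 or Function2. The endpoint, key, topic, and field values are assumptions:

    # Sketch: publish an order event to an Event Grid domain. Subscriptions with
    # filters on event_type (or subject/data) deliver it to Function1 or Function2.
    # Endpoint, key, and field values are hypothetical.
    from azure.core.credentials import AzureKeyCredential
    from azure.eventgrid import EventGridEvent, EventGridPublisherClient

    client = EventGridPublisherClient(
        "https://orders-domain.eastus-1.eventgrid.azure.net/api/events",
        AzureKeyCredential("<domain-access-key>"),
    )

    client.send(
        EventGridEvent(
            topic="orders",                    # domain topic the event is published to
            event_type="Orders.Vendor1Check",  # subscriptions filter on this value
            subject="orders/12345",
            data={"orderId": "12345", "vendor": "vendor1"},
            data_version="1.0",
        )
    )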

39
Q

You have an Azure Functions microservice app named App1 that is hosted in the Consumption plan.

App1 uses an Azure Queue Storage trigger.

You plan to migrate App1 to an Azure Kubernetes Service (AKS) cluster.

You need to prepare the AKS cluster to support App1.

The solution must meet the following requirements:

  • Use the same scaling mechanism as the current deployment.
  • Support kubenet and Azure Container Networking Interface (CNI) networking.

Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct answer is worth one point.

Configure the horizontal pod autoscaler.
Install Virtual Kubelet.
Configure the AKS cluster autoscaler.
Configure the virtual node add-on.
Install Kubernetes-based Event Driven Autoscaling (KEDA).

A

Here are the two correct actions you should perform:

Install Kubernetes-based Event Driven Autoscaling (KEDA).

Configure the AKS cluster autoscaler.

Here’s why:

Install Kubernetes-based Event Driven Autoscaling (KEDA):

Same Scaling Mechanism: KEDA is specifically designed to bring event-driven scaling capabilities to Kubernetes. It can monitor the length of Azure Queue Storage queues and automatically scale the number of pods in your deployment or stateful set that are processing messages from the queue. This directly replicates the scaling behavior of the Azure Functions Consumption plan triggered by a queue.

Configure the AKS cluster autoscaler:

Support Scaling: While KEDA handles scaling the number of pods for your application, the AKS cluster autoscaler is responsible for scaling the number of nodes in your AKS cluster. As KEDA scales up your application pods in response to queue messages, the cluster autoscaler ensures that there are enough underlying nodes in the AKS cluster to accommodate these new pods.

Why other options are incorrect:

Configure the horizontal pod autoscaler (HPA): While HPA is a standard Kubernetes autoscaler, it typically scales based on CPU or memory utilization of the pods. It doesn’t directly integrate with Azure Queue Storage triggers like KEDA does. You could potentially use HPA in conjunction with custom metrics based on queue length, but KEDA provides a more direct and integrated solution for this specific scenario.

Install Virtual Kubelet: Virtual Kubelet allows you to connect your AKS cluster to other compute platforms like Azure Container Instances (ACI). While it can help with bursting scenarios, it doesn’t directly address the requirement of using the same scaling mechanism as the Consumption plan for queue triggers.

Configure the virtual node add-on: Similar to Virtual Kubelet, the virtual node add-on leverages ACI to run pods. It doesn’t provide the event-driven scaling based on Azure Queue Storage that KEDA offers.
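
For context, the signal KEDA's azure-queue scaler polls is simply the queue's approximate message count. A minimal sketch of reading that same metric with the azure-storage-queue Python SDK (connection string and queue name are assumptions):

    # Sketch: read the approximate message count of the storage queue that triggers
    # App1. KEDA's azure-queue scaler uses this metric to decide how many pods to run.
    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        conn_str="<storage-connection-string>",
        queue_name="app1-orders",
    )
    properties = queue.get_queue_properties()
    print(f"approximate messages waiting: {properties.approximate_message_count}")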

40
Q

You plan to migrate App1 to Azure. The solution must meet the authentication and authorization requirements.

Which type of endpoint should App1 use to obtain an access token?

Azure Instance Metadata Service (IMDS)
Azure AD
Azure Service Management
Microsoft identity platform

A

The correct type of endpoint for App1 to obtain an access token is the Microsoft identity platform.

Here’s why:

Microsoft identity platform: This is Microsoft’s evolution of the Azure AD developer platform. It provides the OAuth 2.0 and OpenID Connect compliant authentication service that Azure resources use for authentication and authorization. When App1 uses its managed identity, it interacts with the Microsoft identity platform to request and receive access tokens.

Here’s why the other options are less suitable:

Azure Instance Metadata Service (IMDS): IMDS is used by Azure VMs to retrieve information about the VM itself, such as its identity, resource group, and location. While App1 will use IMDS to discover its assigned managed identity, it doesn’t directly obtain the access token from IMDS. Instead, it uses the information from IMDS to request a token from the Microsoft identity platform endpoint.

Azure AD: While Azure AD is the underlying identity provider, the Microsoft identity platform is the specific developer platform and endpoint that applications interact with to get tokens. Think of Azure AD as the identity database, and the Microsoft identity platform as the service and endpoints that allow applications to authenticate against it.

Azure Service Management: This is the older, classic deployment model for Azure. Modern applications using managed identities rely on the Azure Resource Manager model and the Microsoft identity platform for authentication.

In summary, App1, leveraging its managed identity, will communicate with the Microsoft identity platform endpoint to request and obtain the necessary access tokens for accessing other Azure resources.
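
To make the flow concrete, here is a minimal sketch of how code on the VM could request a token for its managed identity. The VM calls the local Instance Metadata Service endpoint, which obtains the token from the Microsoft identity platform on its behalf; the target resource shown is an assumption:

    # Sketch: request an access token for the VM's managed identity. The request
    # goes to the local IMDS endpoint, which fetches the token from the Microsoft
    # identity platform. The target resource is only an example.
    import requests

    response = requests.get(
        "http://169.254.169.254/metadata/identity/oauth2/token",
        params={
            "api-version": "2018-02-01",
            "resource": "https://storage.azure.com/",  # audience App1 needs a token for
        },
        headers={"Metadata": "true"},
    )
    access_token = response.json()["access_token"]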

41
Q

Your company currently has an application that is hosted on their on-premises environment. The application currently connects to two databases in the on-premises environment. The databases are named whizlabdb1 and whizlabdb2.

You have to move the databases onto Azure. The databases have to support server-side transactions across both of the databases.

Solution: You decide to deploy the databases to an Azure SQL Database managed instance.

Would this fulfill the requirement?

Yes
No

A

Yes, this would fulfill the requirement.

Azure SQL Managed Instance does support server-side transactions that span multiple databases within the same Managed Instance. This is a key feature of Managed Instance that aligns with the capabilities of on-premises SQL Server.

Therefore, deploying whizlabdb1 and whizlabdb2 to the same Azure SQL Managed Instance will allow the application to continue using server-side transactions across both databases.

42
Q

HOTSPOT

You have an on-premises Microsoft SQL Server database named SQL1.

You plan to migrate SQL1 to Azure.

You need to recommend a hosting solution for SQL1.

The solution must meet the following requirements:

  • Support the deployment of multiple secondary, read-only replicas.
  • Support automatic replication between primary and secondary replicas.
  • Support failover between primary and secondary replicas within a 15-minute recovery time objective (RTO).

Answer Area
Azure service or service tier:
Azure SQL Database
Azure SQL Managed Instance
The Hyperscale service tier
Replication mechanism:
Active geo-replication
Auto-failover groups
Standard geo-replication

A

Answer Area:

Azure service or service tier: Azure SQL Managed Instance

Replication mechanism: Auto-failover groups

Explanation:

Azure SQL Managed Instance:

Multiple Read-Only Replicas: Azure SQL Managed Instance (Business Critical tier) supports multiple readable secondary replicas within the same managed instance.

Automatic Replication: Managed Instance has built-in automatic replication between the primary and secondary replicas.

Auto-failover groups:

Failover with 15-minute RTO: Auto-failover groups are specifically designed to provide simplified deployment and management of geo-replicated databases, enabling automatic failover to a secondary region in case of a primary region outage. While designed for geo-replication, they can also be used within the same region for enhanced availability and can meet the 15-minute RTO requirement.

Why other options are less suitable:

Azure SQL Database: While Azure SQL Database (Business Critical or Premium tiers) supports readable secondary replicas and active geo-replication, achieving automatic failover with a guaranteed 15-minute RTO is best accomplished using auto-failover groups. Single databases don’t natively support multiple read-only replicas in the same way Managed Instance does.

The Hyperscale service tier: Hyperscale in Azure SQL Database supports read scale-out with multiple readable secondary replicas and has automatic replication. However, while failover is fast, relying solely on the service tier’s failover mechanism might not explicitly guarantee a 15-minute RTO in all scenarios. Auto-failover groups provide a more explicit control over the failover process and RTO.

Active geo-replication: While it provides readable secondary replicas and automatic data replication to a secondary region, the failover process is typically manual or requires DNS changes, which might not meet the 15-minute RTO requirement.

Standard geo-replication: This provides basic asynchronous data replication for disaster recovery but does not offer readable secondary replicas or automatic failover within a 15-minute RTO.

43
Q

DRAG DROP

You need to recommend a solution that meets the file storage requirements for App2.

What should you deploy to the Azure subscription and the on-premises network? To answer, drag the appropriate services to the correct locations. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Services
Azure Blob Storage
Azure Data Box
Azure Data Box Gateway
Azure Data Lake Storage
Azure File Sync
Azure Files

Answer Area
Azure subscription: Service
On-premises network: Service

A

Answer Area

Azure subscription: Azure Files

On-premises network: Azure File Sync

Explanation:

Azure Files (Azure subscription): Azure Files provides fully managed file shares in the cloud that can be accessed via the standard Server Message Block (SMB) protocol. This makes it a natural fit for applications needing file storage.

Azure File Sync (On-premises network): Azure File Sync is installed on your on-premises Windows Server and synchronizes files and folders between your on-premises file shares and Azure Files. This creates a hybrid file sharing solution, making the same data accessible both on-premises and in Azure.

Why other options are not the primary solution for this scenario:

Azure Blob Storage: While Blob storage can store file data, it’s primarily object storage and requires different APIs for access compared to traditional file shares. It’s not the direct solution for providing SMB access to on-premises applications.

Azure Data Box & Azure Data Box Gateway: These are typically used for large-scale data transfers into Azure. While they can be part of a migration strategy, they aren’t the ongoing solution for providing file access to App2 from both locations. Azure Data Box is for offline transfer, and Azure Data Box Gateway acts as a network file share gateway to Azure Blob Storage, not direct Azure Files synchronization.

Azure Data Lake Storage: This is designed for big data analytics and while it can store file data, it doesn’t provide the standard SMB access needed for seamless integration with existing applications like Azure Files does.

44
Q

You have an Azure subscription that contains two applications named App1 and App2. App1 is a sales processing application. When a transaction in App1 requires shipping, a message is added to an Azure Storage account queue, and then App2 listens to the queue for relevant transactions.

In the future, additional applications will be added that will process some of the shipping requests based on the specific details of the transactions.

You need to recommend a replacement for the storage account queue to ensure that each additional application will be able to read the relevant transactions.

What should you recommend?

one Azure Service Bus queue
one Azure Service Bus topic
one Azure Data Factory pipeline
multiple storage account queues

A

The correct recommendation is one Azure Service Bus topic.

Here’s why:

Publish/Subscribe Pattern: Azure Service Bus topics implement a publish/subscribe messaging pattern. This allows App1 to publish a single message to the topic, and then multiple independent subscribers (the additional applications) can create subscriptions to that topic and receive a copy of the messages relevant to them.

Filtering: Service Bus topics support filtering. This means that when the additional applications are added, they can create subscriptions with filters that only receive the shipping requests relevant to their specific processing needs based on the transaction details.

Decoupling: Using a Service Bus topic decouples App1 from the specific applications that process the shipping requests. App1 only needs to know how to send a message to the topic; it doesn’t need to know about the individual applications.

Here’s why the other options are not as suitable:

One Azure Service Bus queue: Azure Service Bus queues follow a point-to-point messaging pattern. Once a message is processed and removed from a queue, it’s generally not available to other consumers. This doesn’t meet the requirement for multiple applications to read the same transactions.

One Azure Data Factory pipeline: Azure Data Factory is a data integration service for building ETL (Extract, Transform, Load) processes. It’s not designed for real-time message distribution to multiple independent consumers.

Multiple storage account queues: While you could create multiple storage account queues and have App1 send the same message to each queue, this creates tight coupling between App1 and all the downstream applications. It also adds management overhead and complexity as you add more applications. Service Bus topics provide a more elegant and scalable solution.
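
A minimal sketch of the pattern with the azure-servicebus Python SDK: App1 publishes once to the topic and stamps the shipping details as application properties, and each downstream application reads from its own (optionally filtered) subscription. Connection string, names, and property values are assumptions:

    # Sketch: App1 publishes a shipping message to a Service Bus topic; each
    # processing app reads its own subscription. SQL filters on a subscription
    # can match application_properties so each app sees only relevant messages.
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    conn_str = "<service-bus-connection-string>"

    # Publisher (App1)
    with ServiceBusClient.from_connection_string(conn_str) as client:
        with client.get_topic_sender(topic_name="shipping-requests") as sender:
            sender.send_messages(
                ServiceBusMessage(
                    "order 12345 ready to ship",
                    application_properties={"orderType": "express"},
                )
            )

    # One of the subscriber applications (for example, App2)
    with ServiceBusClient.from_connection_string(conn_str) as client:
        receiver = client.get_subscription_receiver(
            topic_name="shipping-requests", subscription_name="app2"
        )
        with receiver:
            for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
                print(str(message))
                receiver.complete_message(message)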

45
Q

DRAG DROP

You have two app registrations named App1 and App2 in Azure AD. App1 supports role-based access control (RBAC) and includes a role named Writer.

You need to ensure that when App2 authenticates to access App1, the tokens issued by Azure AD include the Writer role claim.

Which blade should you use to modify each app registration? To answer, drag the appropriate blades to the correct app registrations. Each blade may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Blades
API permissions
App roles
Token configuration
Answer Area
App1: Blade
App2: Blade

A

Answer Area

App1: App roles

App2: API permissions

Explanation:

App1: App roles: The App roles blade in the App1 registration is where you define the roles that your application exposes. Since the Writer role is part of App1, you need to ensure this role is defined within App1’s app registration under the App roles blade.

App2: API permissions: The API permissions blade in the App2 registration is where you configure which APIs and permissions App2 needs to access. To get the Writer role claim in its tokens when accessing App1, App2 needs to request this permission. You would add a permission to access App1 and then select the Writer role from the list of roles exposed by App1.

Why other blades are not the primary choice:

Token configuration: The Token configuration blade is used to customize the claims included in tokens issued for the application itself (e.g., adding optional claims, configuring group claims). It’s not the primary place to manage permissions to another application’s roles.

46
Q

You are designing an app that will use Azure Cosmos DB to collate sales data from multiple countries. You need to recommend an API for the app.

The solution must meet the following requirements:

  • Support SQL queries.
  • Support geo-replication.
  • Store and access data relationally.

Which API should you recommend?

PostgreSQL
NoSQL
Apache Cassandra
MongoDB

A

The correct API to recommend is PostgreSQL. Here’s why:

Support SQL queries: The Azure Cosmos DB API for PostgreSQL natively supports the PostgreSQL query language, which is a standard SQL dialect.

Support geo-replication: Azure Cosmos DB for PostgreSQL supports geo-replication through cross-region cluster read replicas, so sales data collected in one region can be replicated to other Azure regions for low-latency reads and disaster recovery.

Store and access data relationally: PostgreSQL is a relational database. The Azure Cosmos DB API for PostgreSQL provides a fully managed, scalable, and distributed relational database service compatible with PostgreSQL.
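As a quick illustration of the "standard SQL, relational" point, the sketch below connects with an ordinary PostgreSQL driver. The host, user, and password are placeholders, and create_distributed_table() is provided by the Citus extension that backs the service; none of this comes from the question itself.

```python
# Illustrative sketch only: host, user, and password are placeholders, and
# create_distributed_table() comes from the Citus extension that backs the service.
import psycopg2

conn = psycopg2.connect(
    host="<cluster-name>.postgres.cosmos.azure.com",
    port=5432,
    dbname="citus",
    user="citus",
    password="<password>",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    # One-time setup: create the table and shard it by country.
    cur.execute("CREATE TABLE IF NOT EXISTS sales (id bigserial, country text, amount numeric)")
    cur.execute("SELECT create_distributed_table('sales', 'country')")
    # Ordinary relational SQL from then on.
    cur.execute("INSERT INTO sales (country, amount) VALUES (%s, %s)", ("DE", 199.99))
    cur.execute("SELECT country, sum(amount) FROM sales GROUP BY country")
    print(cur.fetchall())
```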

Why the other options are incorrect:

NoSQL API: While the Azure Cosmos DB for NoSQL API (formerly the Core/SQL API) supports a SQL-like query language, it is a document database and does not store or access data relationally in the way a traditional relational database does.

Apache Cassandra API: Apache Cassandra is a NoSQL database known for its high availability and scalability. While it has a query language (CQL) that is similar to SQL, it’s not standard SQL and it’s not a relational database.

MongoDB API: MongoDB is a document database (NoSQL). It uses its own query language, which is different from SQL, and it does not store data relationally.

47
Q

You have an on-premises storage solution.

You need to migrate the solution to Azure. The solution must support Hadoop Distributed File System (HDFS).

What should you use?

Azure Data Lake Storage Gen2
Azure NetApp Files
Azure Data Share
Azure Table storage

A

The correct answer is Azure Data Lake Storage Gen2.

Here’s why:

Azure Data Lake Storage Gen2: This service is specifically designed for big data analytics workloads and is built on top of Azure Blob Storage. A key feature of Data Lake Storage Gen2 is its hierarchical namespace, which enables it to function as a fully managed, Hadoop-compatible file system. This means you can interact with data using the same HDFS semantics as your on-premises Hadoop environment.
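A minimal sketch of what that looks like from code follows; the account name, key, file system, and paths are placeholders. It shows the hierarchical namespace (real directories and files) and a POSIX-style ACL, which is what the on-premises Hadoop-compatible storage in this exam series typically requires.

```python
# Minimal sketch; the account name, key, file system, and paths are placeholders.
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential="<account-key>",
)
fs = service.get_file_system_client("app1-data")

# Real directories and files (hierarchical namespace), with POSIX-style ACLs.
directory = fs.create_directory("raw/2024")
directory.set_access_control(acl="user::rwx,group::r-x,other::---")

file_client = directory.create_file("events.json")
file_client.upload_data(b'{"event": "sample"}', overwrite=True)
```

Hadoop workloads would reach the same data through the ABFS driver (abfss:// URIs) instead of the SDK.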

Here’s why the other options are not the correct choice:

Azure NetApp Files: This is a high-performance, enterprise-grade file storage service that provides NFS and SMB access. It does not directly support HDFS.

Azure Data Share: This is a service for securely sharing data with external organizations. It’s not a storage solution itself designed to replace an on-premises HDFS.

Azure Table storage: This is a NoSQL key-value store, which is fundamentally different from a file system like HDFS.

48
Q

HOTSPOT

You have an Azure subscription that contains 50 Azure SQL databases.

You create an Azure Resource Manager (ARM) template named Template1 that enables Transparent Data Encryption (TDE).

You need to create an Azure Policy definition named Policy1 that will use Template1 to enable TDE for any noncompliant Azure SQL databases.

How should you configure Policy1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Set available effects to:
DeployIfNotExists
EnforceRegoPolicy
Modify
Include in the definition:
roles required to perform the remediation task
The identity required to perform the remediation task
The scopes of the policy assignments
The role-based access control (RBAC) roles required to perform the remediation task

A

Set available effects to: DeployIfNotExists

Why correct: The primary goal of the policy is to enable TDE if it’s not already enabled. The DeployIfNotExists effect is specifically designed for this purpose. It will check if the TDE configuration exists (or meets the desired state) and if not, it will deploy the ARM template to enable it.

Include in the definition: The role-based access control (RBAC) roles required to perform the remediation task

Why correct: A DeployIfNotExists policy definition must list, in its details section (roleDefinitionIds), the RBAC roles that the remediation deployment needs, for example SQL DB Contributor to enable TDE. When the policy is later assigned, Azure uses this list to grant those roles to the assignment's managed identity so remediation tasks can run.

Why other options are less suitable as single choices:

EnforceRegoPolicy: This effect is used with Azure Policy for Kubernetes (Gatekeeper/Rego). It is not the appropriate effect for deploying an ARM template.

Modify: While Modify can add, update, or remove properties such as tags on a resource, DeployIfNotExists is the effect designed to deploy an ARM template when a required configuration is missing.

The identity required to perform the remediation task: The managed identity is specified on the policy assignment, not in the policy definition. The definition only declares which RBAC roles that identity will need.

The scopes of the policy assignments: Scope is a property of each assignment, not part of the definition of how the remediation is performed.

49
Q

HOTSPOT

You have an Azure subscription that contains the resources shown in the following table.

Name Type Kind Location
storage1 Azure Storage account Storage East US
storage2 Azure Storage account StorageV2 East US
Workspace1 Azure Log Analytics workspace Not applicable East US
Workspace2 Azure Log Analytics workspace Not applicable East US
Hub1 Azure event hub Not applicable East US

You create an Azure SQL database named DB1 that is hosted in the East US region.

To DB1, you add a diagnostic setting named Settings1. Settings1 archives SQL Insights to storage1 and sends SQL Insights to Workspace1.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements
You can add a new diagnostic setting that archives SQLInsights logs to storage2.
You can add a new diagnostic setting that sends SQLInsights logs to Workspace2.
You can add a new diagnostic setting that sends SQLInsights logs to Hub1.

A

Here’s the breakdown of the answers for each statement:

You can add a new diagnostic setting that archives SQLInsights logs to storage2. Yes

Explanation: Azure SQL Database diagnostic settings can archive logs to Azure Storage accounts. The “Kind” of the storage account (Storage V2 in the case of storage2) doesn’t prevent it from being a valid target for archiving diagnostic logs.

You can add a new diagnostic setting that sends SQLInsights logs to Workspace2. Yes

Explanation: Azure SQL Database diagnostic settings can send logs to multiple Azure Log Analytics workspaces. You can configure additional diagnostic settings to send logs to Workspace2 in addition to the existing setting sending logs to Workspace1.

You can add a new diagnostic setting that sends SQLInsights logs to Hub1. Yes

Explanation: Azure SQL Database diagnostic settings can send logs to Azure Event Hubs. Hub1 is an Azure Event Hub, making it a valid destination for SQLInsights logs via a diagnostic setting.
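As an illustration of the point that you can add further diagnostic settings alongside Settings1, the hedged sketch below uses the azure-mgmt-monitor SDK to create a second setting on DB1 that targets storage2, Workspace2, and Hub1 at the same time. All resource IDs and the namespace name are placeholders.

```python
# Hedged sketch: every resource ID below is a placeholder, and the existing
# diagnostic setting (Settings1) is left untouched; this adds a second one.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

SUB = "<subscription-id>"
DB1_ID = (
    f"/subscriptions/{SUB}/resourceGroups/<rg>"
    "/providers/Microsoft.Sql/servers/<server>/databases/DB1"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUB)
client.diagnostic_settings.create_or_update(
    resource_uri=DB1_ID,
    name="Settings2",
    parameters=DiagnosticSettingsResource(
        storage_account_id=f"/subscriptions/{SUB}/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/storage2",
        workspace_id=f"/subscriptions/{SUB}/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/Workspace2",
        event_hub_authorization_rule_id=f"/subscriptions/{SUB}/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey",
        event_hub_name="Hub1",
        logs=[LogSettings(category="SQLInsights", enabled=True)],
    ),
)
```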

50
Q

DRAG DROP

You have an on-premises network that uses an IP address space of 172.16.0.0/16.

You plan to deploy 25 virtual machines to a new Azure subscription.

You identify the following technical requirements:

✑ All Azure virtual machines must be placed on the same subnet, named Subnet1.

✑ All the Azure virtual machines must be able to communicate with all on-premises servers.

✑ The servers must be able to communicate between the on-premises network and Azure by using a site-to-site VPN.

You need to recommend a subnet design that meets the technical requirements.

What should you include in the recommendation? To answer, drag the appropriate network addresses to the correct subnet. Each network address may be used once, more than once or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Network Addresses
172.16.0.0/16
172.16.1.0/28
192.168.0.0/24
192.168.1.0/28

Answer Area
Subnet1: Network address
Gateway subnet: Network address

A

Correct Answer:

Subnet1: Network address: 192.168.0.0/24

Gateway subnet: Network address: 192.168.1.0/28

Explanation:

Subnet1 (192.168.0.0/24):

Meets the VM requirement: A /24 subnet provides 256 addresses (251 usable, because Azure reserves five addresses in every subnet), which is more than enough to accommodate 25 virtual machines.

Avoids overlap: This IP address range (192.168.0.0/24) does not overlap with the on-premises network’s IP address space (172.16.0.0/16). This is crucial for successful VPN connectivity.

Gateway subnet (192.168.1.0/28):

Dedicated for the VPN Gateway: Azure requires a dedicated subnet for the VPN gateway. This subnet should not contain any other resources.

Appropriate size: A /28 subnet provides 16 addresses, which is enough for an Azure VPN gateway, although Microsoft recommends a gateway subnet of /27 or larger to allow for future growth and additional gateway configurations.

Avoids overlap: This IP address range (192.168.1.0/28) also does not overlap with the on-premises network’s IP address space.

Why other options are incorrect:

172.16.0.0/16 for Subnet1: This is incorrect because it overlaps directly with the on-premises network’s IP address space. Using overlapping address spaces will cause routing conflicts and prevent the VPN from working correctly.

172.16.1.0/28 for Subnet1: This range sits inside the on-premises 172.16.0.0/16 address space, so it overlaps and would cause routing conflicts over the VPN. In addition, a /28 subnet (11 usable addresses after Azure's reservations) is too small for 25 VMs.

Using any 172.16.x.x range for the gateway subnet: Any part of the 172.16.0.0/16 range overlaps the on-premises address space, which breaks routing over the site-to-site VPN, so it cannot be used.
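The overlap and sizing reasoning can be checked with nothing more than the Python standard library; the sketch below is illustrative only.

```python
# Quick check of the reasoning above using only the Python standard library: the
# proposed Azure ranges must not overlap the on-premises 172.16.0.0/16 space, and
# Subnet1 must hold at least 25 VMs (Azure reserves 5 addresses per subnet).
import ipaddress

on_prem = ipaddress.ip_network("172.16.0.0/16")
candidates = {
    "Subnet1 (192.168.0.0/24)": ipaddress.ip_network("192.168.0.0/24"),
    "GatewaySubnet (192.168.1.0/28)": ipaddress.ip_network("192.168.1.0/28"),
    "Rejected (172.16.1.0/28)": ipaddress.ip_network("172.16.1.0/28"),
}

for name, net in candidates.items():
    usable = net.num_addresses - 5  # Azure reserves 5 IPs in every subnet
    print(f"{name}: overlaps on-prem={net.overlaps(on_prem)}, usable IPs={usable}")
```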

51
Q

Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview. General Overview

Litware, Inc. is a medium-sized finance company.

Overview. Physical Locations

Litware has a main office in Boston.

Existing Environment. Identity Environment

The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.

Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.

The Litware.com tenant has a conditional acess policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by

using the Azure portal, they must connect from a hybrid Azure AD-joined device.

Existing Environment. Azure Environment

Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).

The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.

Existing Environment. On-premises Environment

The on-premises network of Litware contains the resources shown in the following table.

Name Type Configuration
SERVER1 Ubuntu 18.04 vitual machines hosted on Hyper-V The vitual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
SERVER10 Server that runs Windows Server 2016 The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.

Existing Environment. Network Environment

Litware has ExpressRoute connectivity to Azure.

Planned Changes and Requirements. Planned Changes

Litware plans to implement the following changes:

✑ Migrate DB1 and DB2 to Azure.

✑ Migrate App1 to Azure virtual machines.

✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.

Planned Changes and Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).

✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.

✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.

✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.

✑ RBAC roles must be applied at the highest level possible.

Planned Changes and Requirements. Resiliency Requirements

Litware identifies the following resiliency requirements:

✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:

  • Maintain availability if two availability zones in the local Azure region fail.
  • Fail over automatically.
  • Minimize I/O latency.

✑ App1 must meet the following requirements:

  • Be hosted in an Azure region that supports availability zones.
  • Be hosted on Azure virtual machines that support automatic scaling.
  • Maintain availability if two availability zones in the local Azure region fail.

Planned Changes and Requirements. Security and Compliance Requirements

Litware identifies the following security and compliance requirements:

✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.

✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.

✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.

✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

✑ App1 must not share physical hardware with other workloads.

Planned Changes and Requirements. Business Requirements

Litware identifies the following business requirements:

✑ Minimize administrative effort.

✑ Minimize costs.

HOTSPOT

You plan to migrate DB1 and DB2 to Azure.

You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.

What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose

A

The correct answer is:

Database: A single Azure SQL database
Service tier: Business Critical

Explanation:

Here’s why this combination is the best fit based on the requirements:

Database:

A single Azure SQL database: This is the most suitable option for migrating individual SQL Server databases like DB1 and DB2 to Azure while minimizing cost and administrative effort. It offers service tiers tailored for different performance and availability needs.

Azure SQL Managed Instance: While Managed Instance is a good choice for migrating entire SQL Server instances with minimal changes, it’s generally more complex and might be more expensive than a single Azure SQL Database, especially for just two databases. It might introduce more administrative overhead than necessary.

Azure SQL Database elastic pool: Elastic pools are ideal when you have many databases with unpredictable usage patterns, allowing them to share resources efficiently. However, with only DB1 and DB2 and specific resiliency requirements, an elastic pool would add unnecessary complexity.

Service tier:

Business Critical: This tier is specifically designed for mission-critical applications that demand high availability, low latency, and resilience. It directly addresses the following requirements:

“Maintain availability if two availability zones in the local Azure region fail”: With zone redundancy enabled, Business Critical distributes its synchronous replicas across different availability zones, so the database can remain available even if two zones become unavailable.

“Fail over automatically”: Business Critical provides automatic failover to a secondary replica in case of an outage.

“Minimize I/O latency”: Business Critical leverages local SSD storage for the lowest possible I/O latency.

Hyperscale: While Hyperscale offers high availability and scalability, its primary focus is on very large databases (up to 100 TB). It’s likely overkill for migrating two regular-sized databases and could be more expensive than Business Critical.

General Purpose: This tier provides a balance of compute, memory, and I/O resources for a wide range of workloads, and it can also be configured as zone redundant. However, it uses remote premium storage rather than local SSDs, so it cannot meet the requirement to minimize I/O latency, and its single compute replica does not match the resiliency of Business Critical.

Why other options are less suitable:

Azure SQL Managed Instance with any tier: While a viable option, it’s likely an over-engineered solution for just two databases and could lead to higher costs and complexity.

Azure SQL Database with General Purpose: Does not meet the requirement of maintaining availability if two availability zones fail.

Azure SQL Database with Hyperscale: While meeting the availability requirement, it might be an unnecessary cost for the scale of DB1 and DB2.

Azure SQL Database elastic pool with any tier: Adds complexity without a clear benefit for only two databases with specific requirements.

52
Q

HOTSPOT

You are designing an application that will use Azure Linux virtual machines to analyze video files. The files will be uploaded from corporate offices that connect to Azure by using ExpressRoute.

You plan to provision an Azure Storage account to host the files.

You need to ensure that the storage account meets the following requirements:

  • Supports video files of up to 7 TB
  • Provides the highest availability possible
  • Ensures that storage is optimized for the large video files
  • Ensures that files from the on-premises network are uploaded by using ExpressRoute

How should you configure the storage account? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Storage account type:
Premium file shares
Premium page blobs
Standard general-purpose v2
Data redundancy:
Geo-redundant storage (GRS)
Locally-redundant storage (LRS)
Zone-redundant storage (ZRS)
Networking:
Azure Route Server
A private endpoint
A service endpoint

A

Answer Area

Storage account type: Standard general-purpose v2

Data redundancy: Geo-redundant storage (GRS)

Networking: A private endpoint

Explanation:

  1. Storage account type:

Standard general-purpose v2: This is the most appropriate choice because:

Supports large files: General-purpose v2 accounts support very large files through the use of block blobs, which is suitable for video files up to approximately 190.7 TiB.

Cost-effective for large files: It’s generally more cost-effective than Premium options for large file storage, especially when the primary need is capacity rather than ultra-high IOPS.

Hierarchical Namespace: General purpose V2 accounts support enabling the Hierarchical Namespace feature, turning the storage account into Azure Data Lake Storage Gen2, providing file-level ACLs and POSIX-like permissions.

Premium file shares: Premium file shares are designed for high-performance, low-latency file access. While they can handle large files, they are significantly more expensive and not necessary for this scenario where the primary concern is storage capacity and availability.

Premium page blobs: Premium page blobs are optimized for random read/write operations and are typically used for VHDs. They are not the best fit for storing and sequentially accessing large video files.
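For context, uploading a multi-terabyte video as a block blob to a general-purpose v2 account is a one-call operation with the Python SDK; the account URL, container, and file names below are placeholders.

```python
# Minimal sketch (account URL, container, and file names are placeholders): uploading
# a large video file as a block blob. The SDK splits the file into blocks and uploads
# them in parallel, which suits multi-terabyte media files.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob = service.get_blob_client(container="videos", blob="raw/session-01.mp4")

with open("session-01.mp4", "rb") as data:
    blob.upload_blob(data, overwrite=True, max_concurrency=8)
```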

  2. Data redundancy:

Geo-redundant storage (GRS): This option provides the highest level of durability and availability by replicating data to a secondary region hundreds of miles away from the primary region. If there’s a major outage in the primary region, the data remains accessible in the secondary region (read access is available if RA-GRS is configured).

Read-access geo-redundant storage (RA-GRS): Although not listed among the choices, RA-GRS builds on GRS by also allowing read access to the secondary region, which further improves availability for read workloads.

Locally-redundant storage (LRS): LRS only replicates data within a single data center. It does not offer protection against regional outages.

Zone-redundant storage (ZRS): ZRS replicates data across three availability zones within a single region. While better than LRS, it still does not provide protection against a complete regional outage.

If high availability is the main objective, then GRS and RA-GRS are the best options.

  3. Networking:

A private endpoint: This is the most secure and efficient way to connect to the storage account from the on-premises network via ExpressRoute. A private endpoint assigns a private IP address from your virtual network to the storage account, effectively bringing the storage account into your private network space.

ExpressRoute Integration: Private endpoints work seamlessly with ExpressRoute, allowing traffic to flow directly between your on-premises network and the storage account over your private ExpressRoute connection.

Azure Route Server: Azure Route Server simplifies dynamic routing between your network virtual appliance (NVA) and your virtual network. It is not directly related to storage account connectivity.

A service endpoint: Service endpoints secure access to Azure services only for traffic that originates inside an Azure virtual network. They do not extend to on-premises networks, so clients connecting over ExpressRoute would still reach the storage account through its public endpoint, which does not satisfy the requirement.

53
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is being deployed and configured for on-premises to Azure connectivity.

Several virtual machines exhibit network connectivity issues.

You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Traffic Analytics in Azure Network Watcher to analyze the network traffic.

Does this meet the goal?

Yes
No

A

No

Explanation:

While Azure Traffic Analytics is a powerful tool for analyzing network traffic, it does not directly address troubleshooting network connectivity issues between on-premises virtual machines and Azure virtual machines over ExpressRoute, specifically to identify if packets are being allowed or denied.

Here’s why:

Traffic Analytics relies on NSG flow logs: Traffic Analytics analyzes NSG flow logs to provide insight into traffic patterns. Flow logs capture only traffic that traverses a network security group (NSG) with flow logging enabled, so traffic between on-premises servers and Azure VMs may not be captured, and on-premises devices are never covered.

Limited Scope: Traffic Analytics primarily focuses on Azure-to-Azure or internet-to-Azure traffic. The traffic between on-premises and Azure VMs via ExpressRoute might not be fully captured if the traffic does not pass through NSGs that have flow logging enabled.

Not for real-time troubleshooting: Traffic Analytics provides aggregated views and trends, making it suitable for understanding overall traffic patterns but not ideal for pinpointing real-time packet drops or connectivity issues.

Better Solution:

To troubleshoot network connectivity issues and determine if packets are being allowed or denied between on-premises and Azure virtual machines over ExpressRoute, you should use a combination of the following:

Connection Troubleshoot in Network Watcher: Checks connectivity from a source to a destination, reports latency, and shows whether traffic is permitted or blocked by network security groups (NSGs) or user-defined routes (UDRs).

Packet Capture in Network Watcher: This feature allows you to capture network traffic on Azure virtual machines, similar to using Wireshark or tcpdump. You can analyze the captured packets to diagnose connectivity problems.

ExpressRoute Monitoring: Use Azure Monitor for Networks and ExpressRoute-specific metrics to check the health and connectivity of your ExpressRoute circuit.

IP flow verify in Network Watcher: Checks whether packets to or from a specific virtual machine are allowed or denied based on the effective NSG rules, and reports the rule responsible (see the sketch after this list).

Next hop in Network Watcher: It helps determine the next hop for a packet and identify if it’s following the expected route.

On-premises Network Monitoring Tools: Utilize your existing network monitoring tools on-premises to check connectivity and packet flow up to the ExpressRoute edge.

VPN Troubleshoot in Network Watcher: Diagnoses issues with virtual network gateways and connections, which is useful if a VPN is used alongside ExpressRoute.
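Because the question hinges on whether packets are allowed or denied, IP flow verify is the closest fit. The hedged sketch below calls it through the azure-mgmt-network SDK; the subscription ID, resource names, and IP addresses are placeholders.

```python
# Hedged sketch (resource names and IPs are placeholders): using Network Watcher's
# IP flow verify to ask whether a specific packet to a VM would be allowed or denied,
# and which NSG rule makes that decision.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

result = client.network_watchers.begin_verify_ip_flow(
    resource_group_name="NetworkWatcherRG",
    network_watcher_name="NetworkWatcher_eastus",
    parameters=VerificationIPFlowParameters(
        target_resource_id="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>",
        direction="Inbound",
        protocol="TCP",
        local_port="1433",
        remote_port="50000",
        local_ip_address="10.0.0.4",        # the Azure VM's private IP
        remote_ip_address="172.16.5.10",    # an on-premises client
    ),
).result()

print(result.access, result.rule_name)  # e.g. "Deny" plus the NSG rule that blocked it
```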

54
Q

Case Study -

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study -

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Existing Environment: Technical Environment

The on-premises network contains a single Active Directory domain named contoso.com.

Contoso has a single Azure subscription.

Existing Environment: Business Partnerships

Contoso has a business partnership with Fabrikam, Inc. Fabrikam users access some Contoso applications over the internet by using Azure Active Directory

(Azure AD) guest accounts.

Requirements: Planned Changes -

Contoso plans to deploy two applications named App1 and App2 to Azure.

Requirements: App1 -

App1 will be a Python web app hosted in Azure App Service that requires a Linux runtime. Users from Contoso and Fabrikam will access App1.

App1 will access several services that require third-party credentials and access strings. The credentials and access strings are stored in Azure Key Vault.

App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.

App1 has the following data requirements:

Each instance will write data to a data store in the same availability zone as the instance.

Data written by any App1 instance must be visible to all App1 instances.

App1 will only be accessible from the internet. App1 has the following connection requirements:

Connections to App1 must pass through a web application firewall (WAF).

Connections to App1 must be active-active load balanced between instances.

All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.

Every hour, you will run a maintenance task by invoking a PowerShell script that copies files from all the App1 instances. The PowerShell script will run from a central location.

Requirements: App2 -

App2 will be a .NET app hosted in App Service that requires a Windows runtime. App2 has the following file storage requirements:

Save files to an Azure Storage account.

Replicate files to an on-premises location.

Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.

You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.

Application Development Requirements

Application developers will constantly develop new versions of App1 and App2. The development process must meet the following requirements:

A staging instance of a new application version must be deployed to the application host before the new version is used in production.

After testing the new version, the staging version of the application will replace the production version.

The switch to the new application version from staging to production must occur without any downtime of the application.

Identity Requirements -

Contoso identifies the following requirements for managing Fabrikam access to resources:

Every month, an account manager at Fabrikam must review which Fabrikam users have access permissions to App1. Accounts that no longer need permissions must be removed as guests.

The solution must minimize development effort.

Security Requirement -

All secrets used by Azure services must be stored in Azure Key Vault.

Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.

You need to recommend a solution that meets the data requirements for App1.

What should you recommend deploying to each availability zone that contains an instance of App1?

A. an Azure Cosmos DB that uses multi-region writes
B. an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)
C. an Azure SQL database that uses active geo-replication
D. an Azure Storage account that uses geo-zone-redundant storage (GZRS)

A

The correct answer is A. an Azure Cosmos DB that uses multi-region writes.

Here’s why:

App1’s data requirements:

Each instance writes data to a data store in the same availability zone.

Data written by any instance must be visible to all instances.

Why Azure Cosmos DB with multi-region writes is the best fit:

Availability Zone Support: Azure Cosmos DB can be deployed across availability zones within a region, ensuring that each App1 instance can write to a local Cosmos DB instance within the same availability zone for low latency and high availability.

Global Data Visibility: Multi-region writes in Cosmos DB enable data written to any region to be automatically replicated to all other regions configured for the database. This ensures that all App1 instances, regardless of their location, have access to the same data.

Strong Consistency (Optional): While not explicitly stated as a requirement, Cosmos DB offers strong consistency models if needed, ensuring that reads always reflect the latest writes. If eventual consistency is acceptable, it can offer even higher availability and lower latency.

Why other options are not suitable:

B. an Azure Data Lake store that uses geo-zone-redundant storage (GZRS): While GZRS provides high availability and data redundancy, it is primarily designed for storing large amounts of unstructured data. It is not the ideal choice for an application that likely needs a structured or semi-structured database with query capabilities. Additionally, it might not provide the required consistency model.

C. an Azure SQL database that uses active geo-replication: Active geo-replication creates readable secondary replicas in other regions, but those secondaries are read-only and failover is not automatic (auto-failover groups would be needed for that). Because only the primary accepts writes, each App1 instance could not write to a local data store, so this does not meet the requirement.

D. an Azure Storage account that uses geo-zone-redundant storage (GZRS): Similar to Azure Data Lake Store, Azure Storage accounts (Blob storage, in particular) are best suited for unstructured data. While suitable for storing files, they are not designed for database-like workloads and lack the querying and consistency features needed by App1.

55
Q

HOTSPOT

You need to implement the Azure RBAC role assignment. The solution must meet the authentication and authorization requirements.

How many assignments should you configure for the Network Contributor role and for Role1? To answer, select the appropriate options in the answer area.
Answer Area
Network Contributor:
1
2
15

Role1:
1
2
15

A

Network Contributor: 2

Role1: 2

56
Q

You have an Azure subscription that contains a Basic Azure Virtual WAN named VirtualWAN1 and the virtual hubs shown in the following table.
Name Azure region
Hub1 US East
Hub2 US West

You have an ExpressRoute circuit in the US East region.

You need to create an ExpressRoute association to VirtualWAN1.

What should you do first?

Upgrade VirtualWAN1 to Standard.
Create a gateway on Hub1.
Create a hub virtual network in US East.
Enable the ExpressRoute premium add-on.

A

The correct answer is Upgrade VirtualWAN1 to Standard.

Here’s why:

Basic Virtual WAN Limitations: Basic Azure Virtual WANs do not support ExpressRoute connections. You need a Standard Azure Virtual WAN to create ExpressRoute associations.

Order of Operations: You need to upgrade the Virtual WAN to a supporting tier before you can perform any actions related to connecting an ExpressRoute circuit.

Let’s look at why the other options are not the correct first step:

Create a gateway on Hub1: You will eventually need to create an ExpressRoute gateway within Hub1 to connect the ExpressRoute circuit. However, you can’t create this gateway on a Basic Virtual WAN. The Virtual WAN needs to be Standard first.

Create a hub virtual network in US East: Hub1 already is a virtual hub in the US East region. You don’t need to create a separate hub virtual network. The hub itself is the managed virtual network within the Virtual WAN.

Enable the ExpressRoute premium add-on: The premium add-on extends an ExpressRoute circuit with capabilities such as increased route limits and broader global connectivity, but enabling it is not the first step; the Virtual WAN itself must be upgraded to Standard before any ExpressRoute association can be created.

57
Q

HOTSPOT

You plan to deploy a custom database solution that will have multiple instances as shown in the following table.

Host virtual machine Azure Availability Zone Azure region
USDB1 1 US East
USDB2 2 US East
USDB3 3 US East
EUDB1 1 West Europe
EUDB2 2 West Europe
EUDB3 3 West Europe

Client applications will access database servers by using db.contoso.com.

You need to recommend load balancing services for the planned deployment.

The solution must meet the following requirements:

✑ Access to at least one database server must be maintained in the event of a regional outage.

✑ The virtual machines must not connect to the internet directly.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Global load balancing service:
Azure Application Gateway
Azure Front Door
Azure Load Balancer
Azure Traffic Manager

Availability Zone load balancing service:
Azure Application Gateway
Azure Front Door
Azure Load Balancer
Azure Traffic Manager

A

To address the requirement of maintaining access during a regional outage, you need a global load balancing service. For load balancing within each region’s availability zones, you need a regional load balancing service that supports availability zones.

Here’s the breakdown:

Global load balancing service:

Azure Traffic Manager: This is the most suitable option for global load balancing in this scenario. Traffic Manager works at the DNS level and can direct traffic to healthy endpoints in different regions. In case of a regional outage, it will automatically redirect traffic to the healthy region.

Availability Zone load balancing service:

Azure Load Balancer (Standard tier is required for availability zone support): This is the appropriate choice for distributing traffic across virtual machines within the same region, specifically across different availability zones. It ensures that if one availability zone fails, the other healthy zones continue to serve traffic.

Therefore, the correct selections are:

Global load balancing service:
✔️ Azure Traffic Manager

Availability Zone load balancing service:
✔️ Azure Load Balancer

Explanation of why other options are less suitable:

Azure Application Gateway: While it offers advanced features and can be deployed across availability zones, it’s primarily a web traffic (HTTP/HTTPS) load balancer and is regional. It doesn’t provide the global failover capability in case of a regional outage like Traffic Manager does.

Azure Front Door: Front Door is a global, scalable HTTP(S) load balancer and web application accelerator. Database traffic to db.contoso.com is not HTTP(S), so Front Door is not suitable here; Traffic Manager, which works at the DNS level and is protocol-agnostic, is the better fit.

Not Selecting Azure Application Gateway for Availability Zones: While Application Gateway can be deployed across availability zones, Azure Load Balancer is the more fundamental and often cost-effective solution for basic TCP/UDP load balancing within a region and across availability zones.

58
Q

Overview:

Existing Environment

Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

Active Directory Environment:

The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.

Network Infrastructure:

Each office contains at least one domain controller from the corp.fabrikam.com domain.

The main office contains all the domain controllers for the rd.fabrikam.com forest.

All the offices have a high-speed connection to the Internet.

An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.

The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.

Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

Problem Statement:

The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Requirements:

Planned Changes:

Fabrikam plans to move most of its production workloads to Azure during the next few years.

As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft Office 365 deployment. All R&D operations will remain on-premises.

Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

Technical Requirements:

Fabrikam identifies the following technical requirements:

  • Web site content must be easily updated from a single point.
  • User input must be minimized when provisioning new app instances.
  • Whenever possible, existing on-premises licenses must be used to reduce cost.
  • Users must always authenticate by using their corp.fabrikam.com UPN identity.
  • Any new deployments to Azure must be redundant in case an Azure region fails.
  • Whenever possible, solutions must be deployed to Azure by using platform as a service (PaaS).
  • An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
  • Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

Database Requirements:

Fabrikam identifies the following database requirements:

  • Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
  • To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
  • Database backups must be retained for a minimum of seven years to meet compliance requirements.

Security Requirements:

Fabrikam identifies the following security requirements:

  • Company information, including policies, templates, and data, must be inaccessible to anyone outside the company.
  • Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
  • Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
  • All administrative access to the Azure portal must be secured by using multi-factor authentication.
  • The testing of WebApp1 updates must not be visible to anyone outside the company.

HOTSPOT

To meet the authentication requirements of Fabrikam, what should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Minimum number of Azure AD tenants:
0
1
2
3
4
Minimum number of custom domains to add:
0
1
2
3
4
Minimum number of conditional access policies to create:
0
1
2
3
4

A

1. Minimum number of Azure AD tenants
Answer: 1

Reason:

Fabrikam needs a single Azure AD tenant to synchronize identities from the corp.fabrikam.com domain for hybrid identity and the Office 365 deployment.
The rd.fabrikam.com forest does not require Azure AD integration because the R&D department is restricted to on-premises resources only.
One Azure AD tenant is sufficient to meet the requirement of authenticating with corp.fabrikam.com UPN identities and allowing administrators to access the Azure portal.

2. Minimum number of custom domains to add
Answer: 1

Reason:

Fabrikam needs to add corp.fabrikam.com as a custom domain in Azure AD to support authentication using the UPNs from this forest.
The rd.fabrikam.com domain is not part of the Azure AD integration and remains solely on-premises, so no additional custom domain is required.

3. Minimum number of conditional access policies to create
Answer: 2

Reason:

Policy 1: A conditional access policy is required to enforce multi-factor authentication (MFA) for administrative access to the Azure portal, meeting the security requirement.
Policy 2: A second conditional access policy can ensure that only users from trusted locations (for example, on-premises or compliant devices) can authenticate to Azure AD, addressing the requirement to keep company information inaccessible to anyone outside the company.

Summary of correct choices:
Minimum number of Azure AD tenants: 1
Minimum number of custom domains to add: 1
Minimum number of conditional access policies to create: 2

59
Q

You plan to deploy an Azure SQL database that will store Personally Identifiable Information (PII). You need to ensure that only privileged users can view the PII.

What should you include in the solution?

Transparent Data Encryption (TDE)
Data Discovery & Classification
dynamic data masking
role-based access control (RBAC)

A

The correct answers are:

dynamic data masking

role-based access control (RBAC)

Here’s why:

Dynamic Data Masking: This feature allows you to obscure sensitive data from non-privileged users. You can define masking rules at the column level to hide the actual PII data while still allowing users with the correct permissions to see the unmasked data. This directly addresses the requirement of preventing unauthorized viewing.

Role-Based Access Control (RBAC): RBAC is essential for granting specific permissions to users based on their roles. You would create roles that have permission to view the columns containing PII and assign those roles only to privileged users. This controls who has access to the sensitive information in the first place.

Let’s look at why the other options are important but don’t directly address the specific requirement of controlling viewing of PII:

Transparent Data Encryption (TDE): TDE encrypts the database, backups, and transaction log files at rest. While crucial for security and compliance, it does not prevent authorized database users from viewing the data once they are connected; it protects against offline access to the physical files, not against queries by signed-in users.

Data Discovery & Classification: This feature helps you identify and categorize sensitive data like PII within your database. It’s a very valuable step for understanding your data landscape and applying appropriate security measures. However, it doesn’t enforce access restrictions on its own. You need other mechanisms like RBAC and dynamic data masking to control who can view the classified data.

In summary:

RBAC controls who can access the data.

Dynamic Data Masking controls what data they see based on their privileges.

60
Q

Your network contains an on-premises Active Directory forest.

You discover that when users change jobs within your company, the memberships of the user groups are not being updated. As a result, the users can access resources that are no longer relevant to their jobs.

You plan to integrate Active Directory and Azure Active Directory (Azure AD) by using Azure AD Connect.

You need to recommend a solution to ensure that group owners are emailed monthly about the group memberships they manage.

What should you include in the recommendation?

conditional access policies
Tenant Restrictions
Azure AD access reviews
Azure AD Identity Protection

A

The correct answer is Azure AD access reviews.

Here’s why:

Azure AD access reviews allow you to automate the process of reviewing group memberships. You can configure a recurring access review for each group, designating the group owner(s) as the reviewers. During the review process, the owners will receive notifications (including email) to review the current members and approve or remove them. This directly addresses the requirement of emailing group owners monthly about their group memberships.

Let’s look at why the other options are not the best fit:

Conditional access policies: These policies are used to enforce access controls based on specific conditions. While they are important for security, they don’t directly address the problem of keeping group memberships up to date.

Tenant Restrictions: This feature allows you to control which external Azure AD tenants your users can access. It’s not related to internal group membership management.

Azure AD Identity Protection: This service helps you detect and respond to identity-based risks, such as leaked credentials or suspicious sign-ins. It doesn’t directly address the need for group membership reviews and notifications.

61
Q

Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview. General Overview

Litware, Inc. is a medium-sized finance company.

Overview. Physical Locations

Litware has a main office in Boston.

Existing Environment. Identity Environment

The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.

Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.

The Litware.com tenant has a conditional acess policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by

using the Azure portal, they must connect from a hybrid Azure AD-joined device.

Existing Environment. Azure Environment

Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).

The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.

Existing Environment. On-premises Environment

The on-premises network of Litware contains the resources shown in the following table.

Name Type Configuration
SERVER1 Ubuntu 18.04 vitual machines hosted on Hyper-V The vitual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
SERVER10 Server that runs Windows Server 2016 The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.

Existing Environment. Network Environment

Litware has ExpressRoute connectivity to Azure.

Planned Changes and Requirements. Planned Changes

Litware plans to implement the following changes:

✑ Migrate DB1 and DB2 to Azure.

✑ Migrate App1 to Azure virtual machines.

✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.

Planned Changes and Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).

✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.

✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.

✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.

✑ RBAC roles must be applied at the highest level possible.

Planned Changes and Requirements. Resiliency Requirements

Litware identifies the following resiliency requirements:

✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:

  • Maintain availability if two availability zones in the local Azure region fail.
  • Fail over automatically.
  • Minimize I/O latency.

✑ App1 must meet the following requirements:

  • Be hosted in an Azure region that supports availability zones.
  • Be hosted on Azure virtual machines that support automatic scaling.
  • Maintain availability if two availability zones in the local Azure region fail.

Planned Changes and Requirements. Security and Compliance Requirements

Litware identifies the following security and compliance requirements:

✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.

✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.

✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.

✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

✑ App1 must not share physical hardware with other workloads.

Planned Changes and Requirements. Business Requirements

Litware identifies the following business requirements:

✑ Minimize administrative effort.

✑ Minimize costs.

HOTSPOT

You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.

What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To register the users for Azure MFA, use:
Azure AD Identity Protection
Security defaults in Azure AD
Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure:
Grant control in capolicy1
Session control in capolicy1
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant

A

To register the users for Azure MFA, use:

Per-user MFA in the MFA management UI: While modern Conditional Access policies are the preferred method for enforcing MFA, registering users can still be done via the per-user MFA settings in the Azure AD admin center. Users need to go through the registration process to set up their MFA methods.

To enforce Azure MFA authentication, configure:

Grant control in capolicy1: The existing conditional access policy, capolicy1, already targets users managing the production environment via the Azure portal. To enforce MFA, you would modify the Grant controls within this policy to require multi-factor authentication.
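
As a concrete illustration, the change can be scripted against the Microsoft Graph conditional access API. The sketch below is a minimal example, assuming a Graph access token that carries the Policy.ReadWrite.ConditionalAccess permission and the object ID of capolicy1 (both placeholders); it adds the MFA grant control alongside the existing hybrid Azure AD-joined device control.

```python
import requests

# Placeholder values: a Graph token with Policy.ReadWrite.ConditionalAccess and the
# object ID of the existing capolicy1 conditional access policy.
GRAPH_TOKEN = "<graph-access-token>"
POLICY_ID = "<capolicy1-object-id>"

# Require MFA in addition to the hybrid Azure AD-joined device control.
payload = {
    "grantControls": {
        "operator": "AND",
        "builtInControls": ["mfa", "domainJoinedDevice"],
    }
}

resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/{POLICY_ID}",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()  # 204 No Content means the policy was updated
```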

Explanation of why other options are less suitable:

Registration:

Azure AD Identity Protection: While Identity Protection can trigger MFA based on risk, it’s not the primary mechanism for initially registering users for MFA.

Security defaults in Azure AD: Security defaults enforce MFA for all users and administrators. This is a broad approach and doesn’t leverage the existing conditional access policy or target specific users.

Enforcement:

Session control in capolicy1: Session controls in Conditional Access focus on what happens after a user has authenticated. They are not used to enforce the initial MFA requirement.

Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant: While sign-in risk policies can trigger MFA, the requirement is to enforce MFA for a specific action (managing the production environment via the portal), which is better handled by a dedicated Conditional Access policy. Modifying the existing capolicy1 is more efficient and aligned with the current setup.

Therefore, the correct selections are:

To register the users for Azure MFA, use: ✔️ Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure: ✔️ Grant control in capolicy1

62
Q

You have an Azure subscription that contains an Azure SQL database.

You are evaluating whether to use Azure reservations on the Azure SQL database.

Which tool should you use to estimate the potential savings?

The Purchase reservations blade in the Azure portal
The Advisor blade in the Azure portal
The SQL database blade in the Azure portal

A

The correct answer is The Purchase reservations blade in the Azure portal.

Here’s why:

Purchase reservations blade: This blade within the Azure portal is specifically designed for purchasing and managing Azure reservations. It typically includes tools to analyze your current usage and estimate the potential cost savings you could achieve by purchasing reservations for your Azure SQL database. You can specify the type and quantity of resources you want to reserve, and the tool will calculate the estimated savings compared to pay-as-you-go pricing.

Let’s look at why the other options are less suitable:

The Advisor blade in the Azure portal: Azure Advisor can provide recommendations for cost optimization, including suggesting the purchase of reservations. However, it’s more of a reactive tool that analyzes your existing usage patterns. The “Purchase reservations” blade is the more proactive tool for exploring potential savings before making a purchase.

The SQL database blade in the Azure portal: This blade is primarily for managing the configuration and monitoring of your individual Azure SQL database. While you can see the current cost of your database here, it doesn’t have built-in tools for estimating the savings from purchasing reservations.

63
Q

HOTSPOT

You plan to develop a new app that will store business critical data.

The app must meet the following requirements:

✑ Prevent new data from being modified for one year.

✑ Minimize read latency.

✑ Maximize data resiliency.

You need to recommend a storage solution for the app.

What should you recommend? To answer, select the appropriate options in the answer area.
Azure Storage account kind:
StorageV2
BlobStorage
BlockBlobStorage
Replication:
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)
Read-access geo-redundant storage (RA-GRS)

A

Azure Storage account kind:

StorageV2

Replication:

Read-access geo-redundant storage (RA-GRS)

Explanation:

Azure Storage account kind: StorageV2

StorageV2 (General-purpose v2 accounts) are the recommended account type for most scenarios. They support all Azure Storage services (Blobs, Files, Queues, Tables) and importantly for your requirements, they support immutability policies for Blob storage. This is crucial for preventing modification of new data for one year. While BlobStorage is an option for blob-specific scenarios, StorageV2 is more versatile and generally the best choice. BlockBlobStorage is optimized for high-throughput and large objects but doesn’t inherently fulfill the immutability requirement as directly as the immutability policies within standard Blob storage (available under StorageV2).

Replication: Read-access geo-redundant storage (RA-GRS)

Read-access geo-redundant storage (RA-GRS) provides the highest level of data resiliency. It replicates your data to a secondary region hundreds of miles away, ensuring that your data is protected even in the event of a regional outage. The “read-access” part of RA-GRS also helps in minimizing read latency, as you can potentially read data from the secondary region if the primary region is unavailable or experiencing issues.

Zone-redundant storage (ZRS) provides good resiliency within a single region by replicating data across three availability zones. However, it doesn’t protect against regional failures as well as RA-GRS.

Locally-redundant storage (LRS) only replicates data within a single data center, offering the least amount of resiliency and not meeting the “maximize data resiliency” requirement.

Therefore, StorageV2 with RA-GRS provides the best combination of features to meet all the stated requirements:

Immutability (within StorageV2’s Blob service) prevents data modification.

RA-GRS helps minimize read latency (through potential secondary reads).

RA-GRS maximizes data resiliency by replicating to a secondary region.

63
Q

DRAG DROP

You plan to import data from your on-premises environment to Azure.

The data is shown in the following table.
On-premises source Azure target
A Microsoft SQL Server 2012 database An Azure SQL database
A table in a Microsoft SQL Server 2014 database An Azure Cosmos DB account that uses the SQL API

What should you recommend using to migrate the data? To answer, drag the appropriate tools to the correct data sources. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Tools
AzCopy
Azure Cosmos DB Data Migration Tool
Data Management Gateway
Data Migration Assistant

Answer Area
From the SQL Server 2012 database: Tool
From the table in the SQL Server 2014 database: Tool

A

From the SQL Server 2012 database:

Data Migration Assistant

Explanation: The Data Migration Assistant (DMA) is a free tool from Microsoft designed specifically for migrating SQL Server databases to Azure SQL Database. It assesses your database for compatibility issues, identifies feature changes, and recommends performance improvements before you migrate. While you could potentially use other methods, DMA is the most direct and recommended tool for this scenario.

From the table in the SQL Server 2014 database:

Azure Cosmos DB Data Migration Tool

Explanation: The Azure Cosmos DB Data Migration Tool is specifically designed for importing data into Azure Cosmos DB from various sources, including SQL Server. Since you’re targeting an Azure Cosmos DB account with the SQL API, this is the most appropriate tool. It allows you to select specific tables and map the schema for the Cosmos DB target.

Why other tools are less suitable:

AzCopy: This is primarily used for copying blobs and files to and from Azure Storage. It’s not designed for structured database migrations.

Data Management Gateway: This is a component of Azure Data Factory and Azure Synapse Pipelines used to provide a secure connection between your on-premises environment and Azure data services. While it can be used in data migration scenarios within a pipeline, it’s not the direct tool you would use to initiate and execute the migration like DMA or the Cosmos DB Data Migration Tool.

Therefore, the correct drag-and-drop is:

From the SQL Server 2012 database: ✔️ Data Migration Assistant
From the table in the SQL Server 2014 database: ✔️ Azure Cosmos DB Data Migration Tool

64
Q

You are designing an app that will include two components. The components will communicate by sending messages via a queue. You need to recommend a solution to process the messages by using a First In, First Out (FIFO) pattern.

What should you include in the recommendation?

storage queues with a custom metadata setting
Azure Service Bus queues with sessions enabled
Azure Service Bus queues with partitioning enabled
storage queues with a stored access policy

A

The correct answer is Azure Service Bus queues with sessions enabled.

Here’s why:

Azure Service Bus queues with sessions enabled: Service Bus sessions provide a guaranteed first-in, first-out (FIFO) delivery of messages. Messages within the same session are processed in the order they were enqueued. This is the primary mechanism in Azure for ensuring strict FIFO message processing.
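
As an illustration of the pattern, the sketch below uses the Python azure-servicebus package (connection string, queue name, and session ID are placeholders); the queue itself must be created with sessions required, and all messages that share a session ID are delivered in FIFO order.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholders; the queue must be created with the "requires session" option enabled.
CONN_STR = "<service-bus-connection-string>"
QUEUE_NAME = "orders"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Sender: every message stamped with the same session ID is kept in order.
    with client.get_queue_sender(QUEUE_NAME) as sender:
        for step in ("created", "billed", "shipped"):
            sender.send_messages(ServiceBusMessage(step, session_id="order-42"))

    # Receiver: locking the session yields the messages first in, first out.
    with client.get_queue_receiver(QUEUE_NAME, session_id="order-42") as receiver:
        for msg in receiver.receive_messages(max_message_count=3, max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)
```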

Let’s look at why the other options are not the correct choice for strict FIFO:

Storage queues with a custom metadata setting: While storage queues offer basic queue functionality, they do not inherently guarantee FIFO order in all circumstances, especially under heavy load or failure scenarios. Custom metadata can add information to messages but doesn’t change the fundamental delivery order guarantees.

Azure Service Bus queues with partitioning enabled: Partitioning in Service Bus is primarily for scaling and increasing throughput by distributing the queue across multiple brokers. While it can improve performance, it does not inherently guarantee FIFO order across all partitions unless sessions are also used.

Storage queues with a stored access policy: Stored access policies are used to grant time-bound access to the storage queue. They do not influence the message processing order.

65
Q

Your company has an on-premises Hyper-V cluster that contains 20 virtual machines. Some of the virtual machines run Windows and some run Linux. You have to migrate the virtual machines to Azure.

You have to recommend a solution that would be used to replicate the disks of the virtual machines to Azure. The solution needs to ensure that the virtual machines remain available when the migration of the disks is in progress.

You decide to create an Azure storage account and then run AzCopy. Would this fulfill the requirement?

Yes
No

A

No, this would not fulfill the requirement.

Here’s why:

AzCopy is a file copy utility, not a live migration tool. While you can use AzCopy to copy the VHD/VHDX files of the virtual machines to an Azure storage account, this process typically requires the virtual machines to be shut down to ensure data consistency during the copy.

Downtime is required. If you shut down the VMs to copy their disks with AzCopy, they will be unavailable during the entire transfer process.

No ongoing replication. AzCopy performs a one-time copy. It doesn’t provide a mechanism for ongoing synchronization or replication while the VMs are running.

To meet the requirement of the virtual machines remaining available during disk migration, you would need to use a solution that supports live migration or continuous replication, such as:

Azure Site Recovery (ASR): ASR is the recommended service for migrating on-premises virtual machines to Azure with minimal downtime. It performs continuous replication of the VM disks to Azure while the VMs are running on-premises. You can then perform a planned failover to Azure with a short outage window.

Azure Migrate: Azure Migrate is a hub for migration tools, including agentless and agent-based migration options. It can leverage Azure Site Recovery for VM replication.

66
Q

HOTSPOT

Your company deploys an Azure App Service Web App.

During testing, the application fails under load. The application cannot handle more than 100 concurrent user sessions. You enable the Always On feature. You also configure auto-scaling to increase the instance count from two to 10 based on HTTP queue length.

You need to improve the performance of the application.

Which solution should you use for each application scenario? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Solution
Azure Redis Cache
Azure Traffic Manager
Azure Content Delivery Network
Azure Application Gateway

Scenario
Store content close to end users: Solution
Store content close to the application: Solution

A

Store content close to end users:

Azure Content Delivery Network

Explanation: Azure CDN is specifically designed to cache static content (like images, CSS, JavaScript files) at edge servers located geographically closer to end users. This reduces latency for users accessing this content, improving the application’s perceived performance and reducing the load on the web app itself.

Store content close to the application:

Azure Redis Cache

Explanation: Azure Redis Cache is an in-memory data store that can be used to cache frequently accessed data by the web app. By storing data in cache, the application can retrieve it much faster than fetching it from the primary data store. This significantly reduces latency and improves the application’s responsiveness under load.
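
For illustration, a minimal cache-aside sketch using the redis-py client against an Azure Cache for Redis endpoint (the host name, access key, and the database helper are placeholders):

```python
import json
import redis

# Placeholder Azure Cache for Redis host and access key (SSL port 6380).
cache = redis.StrictRedis(
    host="myappcache.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

def load_product_from_database(product_id: str) -> dict:
    # Placeholder for the real data-access call to the primary data store.
    return {"id": product_id, "name": "sample product"}

def get_product(product_id: str) -> dict:
    """Cache-aside: check Redis first, fall back to the database, then populate the cache."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)
    product = load_product_from_database(product_id)
    cache.setex(key, 300, json.dumps(product))  # keep the entry for five minutes
    return product
```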

Why other options are less suitable for these scenarios:

Azure Traffic Manager: Traffic Manager is a DNS-based load balancer that directs traffic to different endpoints based on routing methods (like performance or geographic). It doesn’t store or cache content.

Azure Application Gateway: Application Gateway is a web traffic load balancer that provides features like SSL termination, web application firewall (WAF), and session affinity. It doesn’t primarily focus on caching content for performance.

Therefore, the correct selections are:

Store content close to end users: ✔️ Azure Content Delivery Network
Store content close to the application: ✔️ Azure Redis Cache

67
Q

HOTSPOT

You have an Azure App Service web app that uses a system-assigned managed identity.

You need to recommend a solution to store the settings of the web app as secrets in an Azure key vault.

The solution must meet the following requirements:

  • Minimize changes to the app code.
  • Use the principle of least privilege.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.
Answer Area
Key Vault integration method:
Key Vault references in Application settings
Key Vault references in Appsettings.json
Key Vault references in Web.config
Key Vault SDK
Key Vault permissions for the managed identity:
Keys: Get
Keys: List and Get
Secrets: Get
Secrets: List and Get

A

Key Vault integration method:

Key Vault references in Application settings

Key Vault permissions for the managed identity:

Secrets: Get

Explanation:

Key Vault references in Application settings: This is the recommended way to integrate Azure Key Vault secrets with Azure App Service without making significant changes to the application code. You can define app settings with a specific syntax that tells App Service to fetch the value from Key Vault. This keeps the secret management external to the application’s core logic.

Secrets: Get: To adhere to the principle of least privilege, the managed identity of the web app only needs the permission to retrieve the secrets. It doesn’t need permissions to list all secrets or manage keys. Granting the “Secrets: Get” permission specifically on the key vault and the relevant secrets is the most secure approach.
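
To make the pattern concrete, the sketch below shows the documented Key Vault reference syntax placed in an App Service application setting (the setting name, vault name, and secret name are placeholders) and how the app then reads it with no Key Vault SDK code at all:

```python
import os

# App Service application setting (configured in the portal, CLI, or a template):
#   DB_PASSWORD = @Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/Db-Password/)
# App Service resolves the reference using the web app's managed identity (Secrets: Get),
# so the application simply reads an ordinary environment variable.
db_password = os.environ["DB_PASSWORD"]
```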

Why other options are not the best fit:

Key Vault references in Appsettings.json / Web.config: While technically possible, using Application settings for Key Vault integration is the more direct and recommended approach by Microsoft for Azure App Service. It’s better integrated with the platform’s features.

Key Vault SDK: Using the Key Vault SDK would require code changes within the application to retrieve secrets, which violates the “minimize changes to the app code” requirement.

Keys: Get / Keys: List and Get: These permissions are related to managing cryptographic keys in the Key Vault, not secrets. Since you’re storing application settings as secrets, you need the Secrets permissions.

Secrets: List and Get: While this would work, it grants the managed identity the ability to list all secrets in the Key Vault, which is more privilege than necessary. “Secrets: Get” is sufficient and adheres to the principle of least privilege.

Therefore, the correct selections are:

Key Vault integration method: ✔️ Key Vault references in Application settings
Key Vault permissions for the managed identity: ✔️ Secrets: Get

68
Q

HOTSPOT

You have an Azure AD tenant that contains a management group named MG1.

You have the Azure subscriptions shown in the following table.
Name Management group
Sub1 MG1
Sub2 MG1
Sub3 Tenant Root Group

The subscriptions contain the resource groups shown in the following table.
Name Subscription
RG1 Sub1
RG2 Sub2
RG3 Sub3

The tenant contains the Azure AD security groups shown in the following table.
Name Member of
Group1 Group3
Group2 Group3
Group3 None

The tenant contains the user accounts shown in the following table.
Name Member of
User1 Group1
User2 Group2
User3 Group1, Group2

You perform the following actions:

  • Assign User3 the Contributor role for Sub1.
  • Assign Group1 the Virtual Machine Contributor role for MG1.
  • Assign Group3 the Contributor role for the Tenant Root Group.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area
Statements Yes No
User1 can create a new virtual machine in RG1.
User2 can grant permissions to Group2.
User3 can create a storage account in RG2.

A

User1 can create a new virtual machine in RG1.

Yes.

Explanation: User1 is a member of Group1. Group1 has the “Virtual Machine Contributor” role assigned at the MG1 level. MG1 contains Sub1, and Sub1 contains RG1. Therefore, User1 inherits the “Virtual Machine Contributor” role for RG1 and can create virtual machines there.

User2 can grant permissions to Group2.

No.

Explanation: User2 is a member of Group2, and Group2 is a member of Group3, which has the “Contributor” role at the Tenant Root Group. However, the “Contributor” role does not include Microsoft.Authorization write permissions, so it cannot create role assignments or otherwise grant permissions. Granting permissions to Group2 would require a role such as “Owner” or “User Access Administrator” at the relevant scope.

User3 can create a storage account in RG2.

Yes.

Explanation: User3 is a member of Group1 and Group2, and both groups are members of Group3. Group3 has the “Contributor” role at the Tenant Root Group, so User3 inherits “Contributor” on every subscription in the tenant, including Sub2 and its resource group RG2. The “Contributor” role allows the creation of storage accounts, so User3 can create a storage account in RG2. (The direct “Contributor” assignment for User3 on Sub1 applies only to Sub1 and would not, on its own, grant access to RG2.)

Therefore, the answers are:

Statements Yes No
User1 can create a new virtual machine in RG1.: Yes
User2 can grant permissions to Group2.: No
User3 can create a storage account in RG2.: Yes

69
Q

HOTSPOT

You have an on-premises file server that stores 2 TB of data files.

You plan to move the data files to Azure Blob Storage in the West Europe Azure region.

You need to recommend a storage account type to store the data files and a replication solution for the storage account.

The solution must meet the following requirements:

  • Be available if a single Azure datacenter fails.
  • Support storage tiers.
  • Minimize cost.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Storage Account type:
Premium block blobs
Standard general-purpose v1
Standard general-purpose v2
Redundancy:
Geo-redundant storage (GRS)
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)
Read-access geo-redundant storage (RA-GRS)

A

The correct answer is:

Storage Account type: Standard general-purpose v2
Redundancy: Zone-redundant storage (ZRS)

Explanation:

Let’s break down why this combination is the best fit based on the requirements:

Storage Account Type: Standard general-purpose v2

Cost Minimization: Standard general-purpose v2 storage accounts are designed for most storage scenarios and are generally the most cost-effective option for storing data files compared to Premium Block Blobs and Standard general-purpose v1.

Support Storage Tiers: Standard general-purpose v2 accounts support all Azure Storage tiers (Hot, Cool, and Archive) for blob storage, which is a requirement.

General Purpose: For storing data files, a general-purpose v2 account is perfectly suitable and efficient.

Why not Premium block blobs? Premium block blob storage is optimized for high transaction rates and low latency, typically used for virtual machine disks or high-performance applications. It is more expensive than Standard and not necessary for storing data files where performance is not the primary concern and cost minimization is important.

Why not Standard general-purpose v1? Standard general-purpose v1 accounts are the legacy version and are not recommended for new deployments. v2 accounts offer better performance, features, and are generally more cost-optimized.

Redundancy: Zone-redundant storage (ZRS)

Availability during a single datacenter failure: ZRS replicates your data synchronously across three availability zones in the West Europe region. Availability Zones are physically separate datacenters within the same Azure region. If one datacenter (availability zone) fails, your data remains available from the other zones. This directly meets the requirement of being available if a single Azure datacenter fails.

Cost Minimization (compared to GRS/RA-GRS): ZRS is more expensive than LRS but less expensive than GRS and RA-GRS. It provides a good balance between cost and availability for datacenter-level failures within a region.

Support Storage Tiers: ZRS is compatible with storage tiers in Standard general-purpose v2 accounts.

Why not Locally-redundant storage (LRS)? LRS replicates data within a single datacenter. If that entire datacenter fails, data in an LRS account might be lost or unavailable until the datacenter is recovered. LRS does not meet the requirement of availability during a single datacenter failure.

Why not Geo-redundant storage (GRS) or Read-access geo-redundant storage (RA-GRS)? GRS and RA-GRS replicate data to a secondary region (in this case, paired region of West Europe). While they offer protection against regional outages, they are more expensive than ZRS. The requirement is only for availability during a single datacenter failure. ZRS achieves this at a lower cost than GRS/RA-GRS. If regional disaster recovery was a requirement, GRS or RA-GRS would be considered, but for just datacenter failure within West Europe, ZRS is sufficient and more cost-effective.
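
Putting the two selections together, the following is a minimal management-plane sketch using the azure-mgmt-storage and azure-identity packages (the subscription ID, resource group, and account name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

# Placeholder subscription, resource group, and storage account names.
SUBSCRIPTION_ID = "<subscription-id>"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.storage_accounts.begin_create(
    resource_group_name="rg-files",
    account_name="litwarefiles001",
    parameters=StorageAccountCreateParameters(
        location="westeurope",
        kind="StorageV2",              # general-purpose v2, so storage tiers are available
        sku=Sku(name="Standard_ZRS"),  # zone-redundant: survives a single datacenter failure
        access_tier="Cool",            # infrequently accessed data files
    ),
)
account = poller.result()
print(account.primary_endpoints.blob)
```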

70
Q

You plan to migrate App1 to Azure. The solution must meet the authentication and authorization requirements.

Which endpoint should App1 use to obtain an access token?

Microsoft identity platform
Azure AD
Azure Instance Metadata Service (IMDS)
Azure Service Management

A

The correct answer is Azure Instance Metadata Service (IMDS).

Explanation:

When an Azure VM has a system-assigned or user-assigned managed identity, the application running on that VM can request an access token from the Azure Instance Metadata Service (IMDS).

Here’s how it works:

The application makes an HTTP request to a specific non-routable IP address (169.254.169.254) and port (80) on the VM. This address is only accessible from within the VM itself.

The IMDS endpoint authenticates the request based on the VM’s identity.

The IMDS endpoint retrieves an access token for the requested Azure resource based on the permissions granted to the managed identity.

The IMDS endpoint returns the access token to the application.
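
A minimal sketch of that token request from inside the VM (the target resource URI below uses Azure Storage as an example; any Azure AD-protected resource URI can be substituted):

```python
import requests

# Request an access token from the Instance Metadata Service. The 169.254.169.254
# address is only reachable from inside the VM, and the "Metadata: true" header is required.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

resp = requests.get(
    IMDS_TOKEN_URL,
    params={"api-version": "2018-02-01", "resource": "https://storage.azure.com/"},
    headers={"Metadata": "true"},
    timeout=5,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```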

Why other options are incorrect:

Microsoft identity platform: This is a broader term that encompasses all of Microsoft’s identity services, including Azure AD. While the IMDS uses the Microsoft identity platform under the hood, it’s not the specific endpoint the application directly interacts with.

Azure AD: While Azure AD is the identity provider, the application running on the VM doesn’t typically need to directly authenticate with Azure AD to get a token. The IMDS acts as an intermediary.

Azure Service management: This refers to the older Azure Service Management APIs (ASM), which are largely superseded by Azure Resource Manager (ARM). IMDS is the mechanism for acquiring tokens for resources managed through ARM.

71
Q

You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.

You need to recommend a load balancing service for the planned deployment.

The solution must meet the following requirements:

✑ Maintain access to the app in the event of a regional outage.

✑ Support Azure Web Application Firewall (WAF).

✑ Support cookie-based affinity.

✑ Support URL routing.

What should you include in the recommendation?

Azure Front Door
Azure Load Balancer
Azure Traffic Manager
Azure Application Gateway

A

The correct answer is Azure Front Door.

Here’s why:

Maintain access in the event of a regional outage: Azure Front Door is a global service. It can route traffic to healthy backend instances in different regions if one region experiences an outage.

Support Azure Web Application Firewall (WAF): Azure Front Door has an integrated WAF service that provides centralized protection of your web applications from common exploits and vulnerabilities.

Support cookie-based affinity: Azure Front Door allows you to configure session affinity (sticky sessions) based on cookies, ensuring that requests from the same user are directed to the same backend instance within a session.

Support URL routing: Azure Front Door allows you to configure routing rules based on the URL path of the incoming request, enabling you to direct different types of requests to different backend pools.

Let’s look at why the other options are not the best fit:

Azure Load Balancer: Azure Load Balancer is a regional load balancer. It cannot provide resilience in the event of a regional outage. While it can provide basic load balancing and session affinity, it does not offer built-in WAF or advanced URL routing capabilities.

Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic routing service. While it can provide regional outage resilience, it doesn’t operate at the HTTP/HTTPS level and thus does not offer WAF, cookie-based affinity (in the same way as HTTP/HTTPS load balancers), or advanced URL routing capabilities.

Azure Application Gateway: Azure Application Gateway is a regional web traffic load balancer that offers WAF, cookie-based affinity, and URL routing. However, being regional, it cannot maintain access to the app if the region where it’s deployed experiences an outage.

72
Q

HOTSPOT

You need to recommend a solution to ensure that App1 can access the third-party credentials and access strings. The solution must meet the security requirements.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Authenticate App1 by using:
A certificate
A service principal
A system-assigned managed identity
A user-assigned managed identity
Authorize App1 to retrieve Key Vault secrets by using:
An access policy
A connected service
A private link
A role assignment

A

Authenticate App1 by using:

A system-assigned managed identity

Authorize App1 to retrieve Key Vault secrets by using:

A role assignment

Explanation:

Authenticate App1 by using a system-assigned managed identity: This is the most straightforward and secure way for an application running on an Azure resource (like a VM, which is where App1 will be hosted) to authenticate to other Azure services, like Key Vault. With a system-assigned managed identity, Azure automatically manages the credentials, and you don’t need to embed any secrets in your application code.

Authorize App1 to retrieve Key Vault secrets by using a role assignment: Azure role-based access control (RBAC) is the recommended way to manage permissions for Azure resources. You can assign a role (like the “Key Vault Secrets User” built-in role) to the managed identity of the VM where App1 is running. This role assignment grants the managed identity the necessary permissions to retrieve secrets from the Key Vault. This adheres to the principle of least privilege by granting only the necessary permissions.
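
A minimal sketch of what App1 itself would do, using the azure-identity and azure-keyvault-secrets packages (the vault URL and secret names are placeholders); no credentials appear in code or configuration:

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL; the VM's system-assigned identity must hold a secrets-read
# role assignment (for example, "Key Vault Secrets User") on this vault.
VAULT_URL = "https://app1-vault.vault.azure.net"

credential = ManagedIdentityCredential()  # uses the VM's system-assigned identity
client = SecretClient(vault_url=VAULT_URL, credential=credential)

third_party_key = client.get_secret("ThirdPartyApiKey").value
storage_conn_str = client.get_secret("StorageConnectionString").value
```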

Why other options are less suitable:

A certificate/A service principal: While these can be used, managed identities are generally preferred for applications running within Azure because they simplify credential management and rotation.

A user-assigned managed identity: While user-assigned managed identities offer more flexibility for sharing identities across resources, system-assigned is simpler for a single application running on a VM and still meets the security requirements.

An access policy: Access policies are another way to grant permissions to Key Vault, and they would work. However, using Azure RBAC with role assignments is the more modern and increasingly recommended approach for managed identities. It provides better integration with the overall Azure permission model.

A connected service: Connected services are more about integrating different Azure services, not directly about authorizing access to Key Vault secrets.

A private link: Private Link provides private connectivity to Azure services, but it doesn’t handle authorization.

73
Q

You have an on-premises Microsoft SQL Server 2008 instance that hosts a 50-GB database.

You need to migrate the database to an Azure SQL managed instance. The solution must minimize downtime.

What should you use?

Azure Migrate
WANdisco LiveData Platform for Azure
Azure Data Studio
SQL Server Management Studio (SSMS)

A

The correct answer is WANdisco LiveData Platform for Azure.

Here’s why:

WANdisco LiveData Platform for Azure is specifically designed for near-zero downtime migrations of large datasets, including databases. It uses a patented technology to keep the source and target databases synchronized in real-time. This allows you to continue using the on-premises SQL Server 2008 instance until you’re ready to cut over to the Azure SQL Managed Instance, minimizing downtime.

Let’s look at why the other options are less suitable for minimizing downtime:

Azure Migrate: While Azure Migrate can migrate SQL Server databases to Azure SQL Managed Instance, the standard approach involves taking a backup and restoring it. This will involve downtime while the restore operation is in progress. While Azure Migrate can use the Azure Database Migration Service (DMS) for online migrations with minimal downtime, DMS has limitations regarding source database versions, and SQL Server 2008 might not be fully supported for online migrations with DMS.

Azure Data Studio: Azure Data Studio is a tool for managing databases. It can be used to connect to and query both the on-premises SQL Server and the Azure SQL Managed Instance. However, it doesn’t provide a built-in mechanism for minimizing downtime during the migration itself. You would likely use it to execute scripts or initiate backups/restores, which involve downtime.

SQL Server Management Studio (SSMS): Similar to Azure Data Studio, SSMS is a management tool. You could use SSMS to perform backup and restore operations or generate scripts for migration, but these methods typically involve a significant downtime window.

74
Q

You have a .NET web service named Service1 that has the following requirements:

✑ Must read and write to the local file system.

✑ Must write to the Windows Application event log.

You need to recommend a solution to host Service1 in Azure.

The solution must meet the following requirements:

✑ Minimize maintenance overhead.

✑ Minimize costs.

What should you include in the recommendation?

an Azure App Service web app
an Azure virtual machine scale set
an App Service Environment (ASE)
an Azure Functions app

A

The correct answer is an Azure App Service web app.

Here’s why:

Minimize maintenance overhead: Azure App Service is a Platform-as-a-Service (PaaS) offering. Azure handles the underlying infrastructure, operating system patching, and server maintenance. This significantly reduces the maintenance burden compared to managing virtual machines.

Minimize costs: Azure App Service is generally a cost-effective option for hosting web applications. You pay for the compute resources used by your app, and there are various pricing tiers to suit different needs.

Read and write to the local file system: While Azure App Service has a sandboxed environment, your application can read and write to the D:\home directory.

Write to the Windows Application event log: Azure App Service allows your application to write to the Windows Application event log.

Let’s look at why the other options are less suitable:

An Azure virtual machine scale set: While a virtual machine scale set provides flexibility, it falls under Infrastructure-as-a-Service (IaaS). This means you are responsible for managing the operating system, patching, and other server maintenance tasks, which increases the maintenance overhead and potentially the cost.

An App Service Environment (ASE): An ASE provides an isolated and dedicated environment for running App Service apps at a large scale. It’s significantly more expensive than a regular App Service plan and is typically used for large, mission-critical workloads with strict isolation requirements. It doesn’t align with the “minimize costs” requirement for a single web service.

An Azure Functions app: Azure Functions are designed for event-driven, serverless workloads. While you could potentially adapt your web service to run as a Function, it’s not the primary use case for web services that require continuous operation and might involve more significant code changes. Also, writing to the local file system and the Windows Event Log might be more complex or have limitations in a Functions environment.

75
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You need to deploy resources to host a stateless web app in an Azure subscription.

The solution must meet the following requirements:

✑ Provide access to the full .NET framework.

✑ Provide redundancy if an Azure region fails.

✑ Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy two Azure virtual machines to two Azure regions, and you create a Traffic Manager profile.

Does this meet the goal?

Yes
No

A

Yes

Explanation:

Providing access to the full .NET framework: Deploying to Azure Virtual Machines (VMs) allows you to install and run any version of the .NET Framework you need.

Providing redundancy if an Azure region fails: Deploying VMs to two different Azure regions and using Azure Traffic Manager provides cross-region redundancy. If one region experiences an outage, Traffic Manager can route traffic to the healthy region.

Granting administrators access to the operating system: Azure VMs provide full administrative access to the operating system, allowing administrators to install custom dependencies.

76
Q

You need to design a highly available Azure SQL database that meets the following requirements:

✑ Failover between replicas of the database must occur without any data loss.

✑ The database must remain available in the event of a zone outage.

✑ Costs must be minimized

Which deployment option should you use?

Azure SQL Database Standard
Azure SQL Database Serverless
Azure SQL Managed Instance General Purpose
Azure SQL Database Premium

A

The correct answer is Azure SQL Database Premium. Here’s why:

Failover between replicas of the database must occur without any data loss: The Premium tier of Azure SQL Database uses synchronous replication between the replicas in its high-availability architecture. Each transaction is committed to a quorum of replicas before it is acknowledged to the client, guaranteeing no data loss during failover.

The database must remain available in the event of a zone outage: The Premium tier supports zone redundancy. This means the primary replica and at least one secondary replica are located in different availability zones within the same Azure region. If one zone fails, the database automatically fails over to a replica in another healthy zone with no data loss due to synchronous replication.

Costs must be minimized: While the Premium tier is more expensive than Standard or General Purpose, it’s necessary to meet the stringent requirements of zero data loss and zone outage resilience.

Let’s look at why the other options are not suitable:

Azure SQL Database Standard: The Standard tier typically uses asynchronous replication, which can lead to data loss in a failover scenario. While it can be configured for zone redundancy, the replication lag might not guarantee zero data loss.

Azure SQL Database Serverless: Similar to the Standard tier, Serverless relies on asynchronous replication and might not guarantee zero data loss during failover.

Azure SQL Managed Instance General Purpose: While General Purpose can be configured for zone redundancy, it typically uses remote storage with asynchronous replication by default. To achieve zero data loss, you would need to configure it for synchronous replication, which might increase costs and potentially approach the cost of a Premium tier.

77
Q

You are developing a sales application that will contain several Azure cloud services and will handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using REST messages.

What should you include in the recommendation?

Azure Service Bus
Azure Blob storage
Azure Notification Hubs
Azure Application Gateway

A

The correct answer is Azure Service Bus.

Here’s why:

Asynchronous Communication: Azure Service Bus is a fully managed enterprise integration message broker. It allows services to send and receive messages asynchronously, meaning the sender doesn’t have to wait for a response from the receiver. This is ideal for decoupling the different components of the sales application.

REST Support: Azure Service Bus supports sending and receiving messages using REST APIs. This makes it compatible with services that communicate using RESTful principles.
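
As an illustration of the REST path, the sketch below posts a message to a Service Bus queue with a plain HTTP call (the namespace, queue name, and pre-generated SAS token are placeholders; in practice the cloud services could equally use the Service Bus SDKs):

```python
import requests

# Placeholder namespace, queue, and a pre-generated SAS token from a shared access policy.
NAMESPACE = "contoso-sales"
QUEUE = "orders"
SAS_TOKEN = "SharedAccessSignature sr=...&sig=...&se=...&skn=..."

# "/messages" is the Service Bus REST send operation for a queue.
resp = requests.post(
    f"https://{NAMESPACE}.servicebus.windows.net/{QUEUE}/messages",
    headers={"Authorization": SAS_TOKEN, "Content-Type": "application/json"},
    json={"orderId": "42", "status": "billed"},
    timeout=30,
)
resp.raise_for_status()  # 201 Created indicates the message was enqueued
```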

Let’s look at why the other options are not the best fit:

Azure Blob storage: Azure Blob storage is primarily for storing unstructured data. While services could potentially use blobs to exchange information, it’s not a direct mechanism for asynchronous messaging. It would require more complex logic for managing message queues and delivery guarantees.

Azure Notification Hubs: Azure Notification Hubs is designed for pushing notifications to mobile applications. It’s not intended for general-purpose service-to-service communication.

Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and reverse proxy for managing HTTP and HTTPS traffic to web applications. It’s not designed for general asynchronous service communication or message brokering.

78
Q

A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that is integrated with Microsoft Office 365 and an Azure subscription.

Contoso has an on-premises identity infrastructure. The infrastructure includes servers that run Active Directory Domain Services (AD DS) and Azure AD Connect.

Contoso has a partnership with a company named Fabrikam, Inc. Fabrikam has an Active Directory forest and an Office 365 tenant. Fabrikam has the same on-premises identity infrastructure as Contoso.

A team of 10 developers from Fabrikam will work on an Azure solution that will be hosted in the Azure subscription of Contoso. The developers must be added to the Contributor role for a resource in the Contoso subscription.

You need to recommend a solution to ensure that Contoso can assign the role to the 10 Fabrikam developers. The solution must ensure that the Fabrikam developers use their existing credentials to access resources.

What should you recommend?

Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.
Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.
In the Azure AD tenant of Contoso, use MIM to create guest accounts for the Fabrikam developers.
Configure an AD FS relying party trust between the Fabrikam and Contoso AD FS infrastructures.

A

The correct answer is In the Azure AD tenant of Contoso, use MIM to create guest accounts for the Fabrikam developers.

Here’s a breakdown of why this is the closest and why the other options are less suitable:

Why Option 3 (MIM to create guest accounts) is the closest:

Guest Accounts and External Access: The core requirement is to allow Fabrikam developers to access Contoso’s Azure resources using their existing credentials. Guest accounts in Azure AD are specifically designed for this scenario – granting external users access to resources within your Azure AD tenant.

Existing Credentials: While MIM itself doesn’t directly enable using existing Fabrikam credentials in the Fabrikam tenant, the concept of creating guest accounts in Contoso Azure AD is the right approach. The question might be slightly simplified or outdated in its wording. In modern Azure AD, you would use Azure AD B2B Collaboration, which is the evolution of guest accounts and directly supports using external identities (like Fabrikam’s Azure AD accounts) without requiring new credentials in Contoso. MIM could be used to automate the creation of guest accounts, although it’s not the primary or recommended tool for this specific scenario in a cloud-first approach.

Role Assignment: Once guest accounts are created in Contoso Azure AD (whether manually, via MIM, or ideally via B2B), you can then assign the “Contributor” role to these guest accounts for the specific Azure resource in Contoso’s subscription.

Why other options are incorrect or less suitable:

Option 1: Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.

Incorrect: Forest trusts are primarily for on-premises Active Directory environments. While they establish trust between domains, they don’t directly address granting access to Azure resources. Azure AD is a cloud-based identity provider, and while it syncs with on-premises AD via Azure AD Connect, a forest trust is not the correct mechanism for cross-tenant Azure resource access. Forest trusts are complex to set up between organizations and are not the modern cloud-centric approach.

Option 2: Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.

Incorrect: Organization relationships (federation) in Office 365 are primarily for enabling features like calendar sharing and free/busy lookups between Office 365 tenants. They don’t directly grant access to Azure resources or enable Azure Role-Based Access Control (RBAC). While it establishes a level of trust for Office 365 services, it’s not relevant for Azure subscription access.

Option 4: Configure an AD FS relying party trust between the Fabrikam and Contoso AD FS infrastructures.

Incorrect: AD FS (Active Directory Federation Services) is a federation technology, but it’s more complex than necessary for this scenario. While AD FS could be used for cross-organization authentication, Azure AD B2B Collaboration (which is conceptually related to guest accounts) is the simpler, more modern, and recommended approach for granting external users access to Azure resources. Setting up AD FS relying party trusts between organizations is also complex and often requires more infrastructure overhead. It’s not the “cloud-native” solution for this problem.

Why Option 3 is “closest” and likely the intended answer in an exam context:

Focus on Guest Accounts: The option that mentions “guest accounts” is the most relevant because guest accounts (or Azure AD B2B Collaboration) are the direct feature in Azure AD designed for external user access.

Exam Context and Simplification: Exam questions are sometimes simplified and may not always reflect the absolute latest best practices. The question might be testing the fundamental concept of granting external access via guest accounts in Azure AD. While MIM might not be the best tool to implement B2B in a modern context, the option points towards the correct concept of guest accounts in Contoso’s Azure AD.

Elimination of Other Options: The other options are clearly less relevant or incorrect for granting Azure resource access to external users with their existing credentials.

In a real-world scenario, the best solution would be to use Azure AD B2B Collaboration:

Invite Fabrikam developers as guest users to Contoso’s Azure AD tenant. You can do this by inviting them directly using their Fabrikam Office 365 accounts (which are backed by Fabrikam Azure AD).

Assign the “Contributor” role to these guest users at the desired scope (resource group, resource, or subscription) in Contoso’s Azure subscription.

This Azure AD B2B approach directly allows Fabrikam developers to use their Fabrikam credentials to authenticate and access the Contoso Azure resources.
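
For illustration, the invitation step can be automated with the Microsoft Graph invitations API; the sketch below assumes a Graph token for the Contoso tenant with the User.Invite.All permission and uses a placeholder Fabrikam address:

```python
import requests

# Placeholder token and invitee address.
GRAPH_TOKEN = "<contoso-graph-token>"

invitation = {
    "invitedUserEmailAddress": "dev1@fabrikam.com",
    "inviteRedirectUrl": "https://portal.azure.com",
    "sendInvitationMessage": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}", "Content-Type": "application/json"},
    json=invitation,
    timeout=30,
)
resp.raise_for_status()
guest_user_id = resp.json()["invitedUser"]["id"]  # this guest can now be assigned the Contributor role
```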

79
Q

HOTSPOT

You manage a database environment for a Microsoft Volume Licensing customer named Contoso, Ltd. Contoso uses License Mobility through Software Assurance.

You need to deploy 50 databases.

The solution must meet the following requirements:

✑ Support automatic scaling.

✑ Minimize Microsoft SQL Server licensing costs.

What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Purchase model:
DTU
vCore
Azure reserved virtual machine instances
Deployment option:
An Azure SQL managed instance
An Azure SQL Database elastic pool
A SQL Server Always On availability group

A

Purchase model:

vCore

Deployment option:

An Azure SQL Database elastic pool

Explanation:

Purchase model: vCore: The vCore purchasing model allows you to leverage your existing SQL Server licenses with Software Assurance through the Azure Hybrid Benefit. This significantly reduces the SQL Server licensing costs, directly addressing the requirement to minimize these costs.

Deployment option: An Azure SQL Database elastic pool: Azure SQL Database elastic pools are designed for scenarios with multiple databases that have varying and unpredictable usage patterns. They allow you to share a pool of resources (compute and storage) among multiple databases, optimizing cost efficiency. The elastic pool can automatically scale resources up or down based on the demand of the databases within the pool, fulfilling the requirement for automatic scaling.

Why other options are less suitable:

DTU: The DTU purchase model includes the SQL Server license cost in the price, so you can’t leverage your existing Software Assurance for License Mobility.

Azure reserved virtual machine instances: While you can use reserved instances to reduce the cost of virtual machines, this option requires you to deploy SQL Server on Azure VMs, which increases management overhead and doesn’t directly leverage the automatic scaling capabilities of Azure SQL Database. Also, you would need to manage the licensing yourself even with reserved instances.

An Azure SQL managed instance: While Managed Instance supports License Mobility, it’s generally more expensive per database than using an elastic pool for a large number of databases with potentially varying workloads. Scaling is also done at the instance level rather than individual database level within a pool.

A SQL Server Always On availability group: This involves deploying SQL Server on Azure Virtual Machines, which increases management overhead and does not directly utilize the PaaS benefits like automatic scaling of Azure SQL Database. While License Mobility can be used, managing licensing and the infrastructure for 50 databases would be more complex and potentially costly.

80
Q

The accounting department at your company migrates to a new financial accounting software. The accounting department must keep file-based database backups for seven years for compliance purposes. It is unlikely that the backups will be used to recover data.

You need to move the backups to Azure. The solution must minimize costs.

Where should you store the backups?

Azure Blob storage that uses the Archive tier
Azure SQL Database
Azure Blob storage that uses the Cool tier
a Recovery Services vault

A

The correct answer is Azure Blob storage that uses the Archive tier.

Here’s why:

Minimize Costs: The Archive tier in Azure Blob storage is the most cost-effective option for storing data that is infrequently accessed and has high latency requirements for retrieval. This perfectly aligns with the requirement to minimize costs for backups that are unlikely to be used for recovery.

Long-Term Storage: The Archive tier is designed for long-term data retention, making it suitable for the seven-year compliance requirement.

Suitability for Backups: File-based database backups are a type of unstructured data that can be effectively stored in Azure Blob storage.
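
As an illustration, backups can be written directly into the Archive tier with the azure-storage-blob package (the connection string, container, and file names are placeholders):

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string, container, and backup file name.
CONN_STR = "<storage-account-connection-string>"

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client("db-backups")

with open("finance-2024-full.bak", "rb") as backup_file:
    container.upload_blob(
        name="2024/finance-2024-full.bak",
        data=backup_file,
        standard_blob_tier="Archive",  # write straight into the Archive tier
    )
```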

Let’s look at why the other options are less suitable:

Azure SQL Database: Azure SQL Database is a fully managed relational database service. It’s designed for actively used databases, not for storing file-based backups. Storing backups directly in a live database would be significantly more expensive and inefficient.

Azure Blob storage that uses the Cool tier: The Cool tier is more cost-effective than the Hot tier but more expensive than the Archive tier. Since the backups are unlikely to be used, the extra cost for the slightly faster retrieval time of the Cool tier is not justified.

A Recovery Services vault: Recovery Services vaults are primarily used for backing up and restoring Azure resources (like VMs, databases). While you could potentially store file backups there, it’s typically more expensive and designed for a more active recovery strategy (e.g., point-in-time restores). For simple archival, Azure Blob storage with the Archive tier is more cost-effective.

81
Q

You are designing an application that will aggregate content for users.

You need to recommend a database solution for the application.

The solution must meet the following requirements:

✑ Support SQL commands.

✑ Support multi-master writes.

✑ Guarantee low latency read operations.

What should you include in the recommendation?

Azure Cosmos DB SQL API
Azure SQL Database that uses active geo-replication
Azure SQL Database Hyperscale
Azure Database for PostgreSQL

A

The correct answer is Azure Cosmos DB SQL API.

Here’s why:

Support SQL commands: The Azure Cosmos DB SQL API allows you to query and manipulate data using SQL-like syntax.

Support multi-master writes: Azure Cosmos DB is a globally distributed, multi-model database service that natively supports multi-master writes. This means you can have replicas in multiple regions where writes can be performed simultaneously, providing high availability and low latency writes for users across different locations.

Guarantee low latency read operations: Azure Cosmos DB is designed for low latency reads and writes at a global scale. Its multi-master capabilities and tunable consistency levels allow you to configure the system to prioritize low latency reads for your application’s needs.
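
For illustration, a minimal sketch using the azure-cosmos package (the endpoint, key, database, and container names are placeholders); the account itself would be provisioned with multi-region writes enabled:

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint, key, database, and container names.
client = CosmosClient("https://contoso-feed.documents.azure.com:443/", credential="<account-key>")
container = client.get_database_client("contentdb").get_container_client("items")

# SQL-style query against the SQL (Core) API.
items = container.query_items(
    query="SELECT c.id, c.title FROM c WHERE c.category = @category",
    parameters=[{"name": "@category", "value": "news"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"], item["title"])
```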

Let’s look at why the other options are not the best fit:

Azure SQL Database that uses active geo-replication: While active geo-replication provides readable secondary replicas for disaster recovery and read scale-out, it doesn’t support multi-master writes. Writes are primarily directed to the primary replica.

Azure SQL Database Hyperscale: Azure SQL Database Hyperscale is designed for very large databases with high performance and scalability. While it offers excellent read scale-out, it doesn’t inherently support multi-master writes.

Azure Database for PostgreSQL: While PostgreSQL is a robust relational database and can be configured for read replicas, it doesn’t natively offer a built-in, fully managed multi-master write capability in the same way as Azure Cosmos DB.

81
Q

You are designing a message application that will run on an on-premises Ubuntu virtual machine.

The application will use Azure Storage queues.

You need to recommend a processing solution for the application to interact with the storage queues.

The solution must meet the following requirements:

✑ Create and delete queues daily.

✑ Be scheduled by using a CRON job.

✑ Upload messages every five minutes.

What should developers use to interact with the queues?

Azure CLI
AzCopy
Azure Data Factory
.NET Core

A

The correct answer is .NET Core.

Here’s why:

Both the Azure CLI and .NET Core can create and delete Azure Storage queues and upload messages. However, the Azure CLI is a command-line tool whose syntax and behavior can change between versions, and invoking command-line tools from application code is not a good practice. A .NET Core application can use the Azure Storage SDK to perform these operations, and the CRON job can simply run the compiled application every five minutes, so .NET Core is the recommended choice. AzCopy only copies data to and from storage and cannot manage queues, and Azure Data Factory is an orchestration service that is unnecessarily heavyweight for this scenario.
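
Although the recommended stack here is .NET Core (the Azure SDK for .NET exposes the same queue operations), this guide has no C# samples, so the SDK pattern is sketched below with the Python azure-storage-queue package purely for illustration; the connection string and queue naming scheme are placeholders, and a cron entry such as */5 * * * * would run the script every five minutes.

```python
import datetime
from azure.core.exceptions import ResourceExistsError, ResourceNotFoundError
from azure.storage.queue import QueueClient

# Placeholder connection string; queue names are derived from the date so that a
# new queue is created daily and the previous day's queue is deleted.
CONN_STR = "<storage-account-connection-string>"
today = datetime.date.today()

queue = QueueClient.from_connection_string(CONN_STR, queue_name=f"work-{today:%Y%m%d}")
try:
    queue.create_queue()
except ResourceExistsError:
    pass  # already created by an earlier run today

queue.send_message("heartbeat")  # upload a message on each five-minute run

yesterday = today - datetime.timedelta(days=1)
old_queue = QueueClient.from_connection_string(CONN_STR, queue_name=f"work-{yesterday:%Y%m%d}")
try:
    old_queue.delete_queue()
except ResourceNotFoundError:
    pass  # already removed
```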

82
Q

DRAG DROP

You have an on-premises application named App1.

Customers use App1 to manage digital images.

You plan to migrate App1 to Azure.

You need to recommend a data storage solution for App1.

The solution must meet the following image storage requirements:

✑ Encrypt images at rest.

✑ Allow files up to 50 MB.

Services
Azure Blob storage
Azure Cosmos DB
Azure SQL Database
Azure Table storage

Answer Area
Image storage: Service
Customer accounts: Service

A

Answer Area

Image storage: Azure Blob storage
Customer accounts: Azure SQL Database

Explanation:

Image storage: Azure Blob storage

Encryption at Rest: Azure Blob storage automatically encrypts data at rest.

File Size: Azure Blob storage can handle individual blobs much larger than 50MB.

Cost-Effective: Blob storage is a cost-effective solution for storing large amounts of unstructured data like images.

Customer accounts: Azure SQL Database

Structured Data: Customer account information is typically structured data that benefits from a relational database.

Querying: Azure SQL Database allows you to use SQL to efficiently query and manage customer account data.

Transaction Support: It provides robust transaction support, important for managing customer account details.

Why other options are not the best fit:

Azure Cosmos DB: While Cosmos DB can store binary data and supports encryption at rest, it’s primarily a NoSQL database designed for high scalability and global distribution. It might be overkill and potentially more expensive for simple image storage compared to Blob storage. For customer accounts, it’s a viable option but might be more complex than needed if the requirements are primarily relational.

Azure Table storage: Azure Table storage is a NoSQL key-value store. While it’s cost-effective, it has limitations on individual property sizes and is generally not the best fit for storing large binary files like images. It could be used for simple customer account data but lacks the full relational capabilities of SQL Database.

83
Q

HOTSPOT

You have an Azure App Service web app named Webapp1 that connects to an Azure SQL database named DB1. Webapp1 and DB1 are deployed to the East US Azure region.

You need to ensure that all the traffic between Weoapp1 and DB1 is sent via a private connection.

What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Create a virtual network that contains at least:
1 subnet
2 subnets
3 subnets
From the virtual network, configure name resolution to use:
A private DNS zone
A public DNS zone
The Azure DNS Private Resolver

A

The correct answer is:

Create a virtual network that contains at least: 2 subnets
From the virtual network, configure name resolution to use: A private DNS zone

Explanation:

Why 2 subnets?

Subnet 1: For Azure App Service VNet Integration: The App Service needs to be integrated with the virtual network. This is done by assigning a range of private IP addresses from a dedicated subnet within your virtual network to the App Service instances.

Subnet 2: For Azure SQL Database Private Endpoint: To establish a private connection to the SQL Database, you need to create a Private Endpoint for the database within your virtual network. This Private Endpoint gets an IP address from a subnet within your virtual network.

Why a private DNS zone?

When you create a Private Endpoint for the Azure SQL Database, Azure creates a network interface within your virtual network. To access the database through this private endpoint, the web app needs to resolve the database’s fully qualified domain name (FQDN) to the private IP address of the Private Endpoint, not the public IP address.

A private DNS zone allows you to create DNS records that are only resolvable within your virtual network. You would create an “A” record in the private DNS zone that maps the SQL Database’s FQDN to the private IP address of its Private Endpoint.
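
A quick way to confirm the setup is to check, from inside the virtual network, that the database FQDN now resolves to a private address; the sketch below uses only the Python standard library (the server name is a placeholder):

```python
import ipaddress
import socket

# Placeholder logical server name; run this from Webapp1 or another host in the VNet.
SERVER_FQDN = "db1-server.database.windows.net"

resolved_ip = socket.gethostbyname(SERVER_FQDN)
print(resolved_ip, "is private:", ipaddress.ip_address(resolved_ip).is_private)
```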

Why not 1 or 3 subnets?

While you could theoretically use more subnets for organizational purposes, the minimum required for this specific scenario is two.

One subnet is insufficient because App Service regional VNet integration requires a dedicated, delegated subnet, and that subnet cannot also host the Private Endpoint, so the Private Endpoint needs a subnet of its own.

Why not a public DNS zone?

A public DNS zone resolves to public IP addresses. The goal is to establish a private connection, so resolving to the public IP address of the SQL Database would defeat the purpose.

Why not the Azure DNS Private Resolver?

The Azure DNS Private Resolver is used to query on-premises DNS servers from Azure or vice-versa. While it can play a role in hybrid scenarios, it’s not strictly necessary for establishing a private connection between Azure services within the same Azure region. A private DNS zone is the more direct and simpler solution for this scenario.