Exam Questions Flashcards

1
Q

You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam,
Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the
Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
✑ If the manager does not verify an access permission, automatically revoke that permission.
✑ Minimize development effort.
What should you recommend?

A. In Azure Active Directory (Azure AD), create an access review of Application1.
B. Create an Azure Automation runbook that runs the Get-AzRoleAssignment cmdlet.
C. In Azure Active Directory (Azure AD) Privileged Identity Management, create a custom role assignment for the Application1 resources.
D. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet.

A

Correct Answer: A

Recommendation: A. In Azure Active Directory (Azure AD), create an access review of Application1.

Explanation:
* Access reviews are designed specifically for this purpose: periodically evaluating access permissions and requiring approval to maintain them.
* Automatic revocation: Access reviews can be configured to automatically revoke permissions if not verified by the manager.
* Minimal development effort: Access reviews are a built-in Azure AD feature, requiring minimal configuration and no custom development.
* Monthly email reports: Access reviews can be scheduled to send email notifications to the manager with a list of permissions to review.

Comparison to other options:
* B. Azure Automation runbook: While this option could technically be used, it would require significant development effort to create the script, send emails, and manage access revocations.
* C. Privileged Identity Management (PIM): PIM is primarily for managing privileged roles and doesn’t fit the requirement of reviewing all access permissions.
* D. Get-AzureADUserAppRoleAssignment cmdlet: Similar to option B, this would require custom scripting and development effort.

Therefore, creating an access review in Azure AD is the most efficient and effective solution to meet the given requirements.

Reference

2
Q

You have an Azure subscription. The subscription has a blob container that contains multiple blobs.
Ten users in the finance department of your company plan to access the blobs during the month of April.
You need to recommend a solution to enable access to the blobs during the month of April only.
Which security solution should you include in the recommendation?

A. shared access signatures (SAS)
B. Conditional Access policies
C. certificates
D. access keys

A

Correct Answer: A

A shared access signature (SAS) provides time-limited, fine-grained access control to resources. You can generate a URL, set its validity period to the month of April, and distribute the URL to the 10 team members. On May 1, the SAS token expires automatically, denying the team members continued access.
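The expiry mechanics can be illustrated with a small stdlib-only Python sketch. This is a simplified model of a signed, time-limited URL, not the real Azure SAS token format (real tokens are generated by the storage service or an SDK), and the account key and blob path are hypothetical:

```python
import hashlib
import hmac

ACCOUNT_KEY = b"demo-account-key"  # stands in for the storage account key

def make_sas(blob_path: str, expires_at: int) -> str:
    """Create a toy SAS-like URL: expiry plus an HMAC signature over path and expiry."""
    msg = f"{blob_path}\n{expires_at}".encode()
    sig = hmac.new(ACCOUNT_KEY, msg, hashlib.sha256).hexdigest()
    return f"{blob_path}?se={expires_at}&sig={sig}"

def is_valid(sas_url: str, now: int) -> bool:
    """Check the signature and the expiry, mirroring how the service rejects expired tokens."""
    path, _, query = sas_url.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires_at = int(params["se"])
    expected = hmac.new(ACCOUNT_KEY, f"{path}\n{expires_at}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["sig"]) and now < expires_at

april_end = 1714521600  # 2024-05-01 00:00:00 UTC
url = make_sas("/finance/report.pdf", april_end)
print(is_valid(url, april_end - 86400))  # during April -> True
print(is_valid(url, april_end + 1))      # after expiry -> False
```

No server-side action is needed on May 1: any request presenting the token after the expiry simply fails validation.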

Reference

3
Q

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.
You have an internal web app named WebApp1 that is hosted on-premises. WebApp1 uses Integrated Windows authentication.
Some users work remotely and do NOT have VPN access to the on-premises network.
You need to provide the remote users with single sign-on (SSO) access to WebApp1.
Which two features should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Azure AD Application Proxy
B. Azure AD Privileged Identity Management (PIM)
C. Conditional Access policies
D. Azure Arc
E. Azure AD enterprise applications
F. Azure Application Gateway

A

Correct Answer: AE

A: Application Proxy is a feature of Azure AD that enables users to access on-premises web applications from a remote client. Application Proxy includes both the
Application Proxy service which runs in the cloud, and the Application Proxy connector which runs on an on-premises server.
You can configure single sign-on to an Application Proxy application.
E: Add an on-premises app to Azure AD
Now that you’ve prepared your environment and installed a connector, you’re ready to add on-premises applications to Azure AD.
1. Sign in as an administrator in the Azure portal.
2. In the left navigation panel, select Azure Active Directory.
3. Select Enterprise applications, and then select New application.
4. Select Add an on-premises application button which appears about halfway down the page in the On-premises applications section. Alternatively, you can select Create your own application at the top of the page and then select Configure Application Proxy for secure remote access to an on-premises application.
5. In the Add your own on-premises application section, provide the following information about your application.
6. Etc.
Incorrect:
Not C: Conditional Access policies are not required.

Reference

4
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned membership. Group1 has 50 members, including 20 guest users.
You need to recommend a solution for evaluating the membership of Group1. The solution must meet the following requirements:
✑ The evaluation must be repeated automatically every three months.
✑ Every member must be able to report whether they need to be in Group1.
✑ Users who report that they do not need to be in Group1 must be removed from Group1 automatically.
✑ Users who do not report whether they need to be in Group1 must be removed from Group1 automatically.
What should you include in the recommendation?

A. Implement Azure AD Identity Protection.
B. Change the Membership type of Group1 to Dynamic User.
C. Create an access review.
D. Implement Azure AD Privileged Identity Management (PIM).

A

Correct Answer: C

Azure Active Directory (Azure AD) access reviews enable organizations to efficiently manage group memberships, access to enterprise applications, and role assignments. User’s access can be reviewed on a regular basis to make sure only the right people have continued access.

Reference

5
Q

You plan to deploy Azure Databricks to support a machine learning application. Data engineers will mount an Azure Data Lake Storage account to the Databricks file system. Permissions to folders are granted directly to the data engineers.
You need to recommend a design for the planned Databricks deployment. The solution must meet the following requirements:
✑ Ensure that the data engineers can only access folders to which they have permissions.
✑ Minimize development effort.
✑ Minimize costs.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: Premium -
The Premium Databricks SKU is required for credential passthrough.

Box 2: Credential passthrough - (Credential passthrough is being deprecated; Unity Catalog is now the preferred approach, in which case the Standard Databricks SKU can be used.)
Authenticate automatically to Azure Data Lake Storage Gen1 (ADLS Gen1) and Azure Data Lake Storage Gen2 (ADLS Gen2) from Azure Databricks clusters by using the same Azure Active Directory (Azure AD) identity that you use to log in to Azure Databricks. When you enable Azure Data Lake Storage credential passthrough for your cluster, commands that you run on that cluster can read and write data in Azure Data Lake Storage without requiring you to configure service principal credentials for access to storage.

Reference

6
Q

A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that is integrated with Microsoft 365 and an Azure subscription.
Contoso has an on-premises identity infrastructure. The infrastructure includes servers that run Active Directory Domain Services (AD DS) and Azure AD Connect.
Contoso has a partnership with a company named Fabrikam, Inc. Fabrikam has an Active Directory forest and a Microsoft 365 tenant. Fabrikam has the same on-premises identity infrastructure components as Contoso.
A team of 10 developers from Fabrikam will work on an Azure solution that will be hosted in the Azure subscription of Contoso. The developers must be added to the Contributor role for a resource group in the Contoso subscription.
You need to recommend a solution to ensure that Contoso can assign the role to the 10 Fabrikam developers. The solution must ensure that the Fabrikam developers use their existing credentials to access resources.
What should you recommend?

A. In the Azure AD tenant of Contoso, create cloud-only user accounts for the Fabrikam developers.
B. Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.
C. Configure an organization relationship between the Microsoft 365 tenants of Fabrikam and Contoso.
D. In the Azure AD tenant of Contoso, create guest accounts for the Fabrikam developers.

A

Correct Answer: D

You can use the capabilities in Azure Active Directory B2B to collaborate with external guest users and you can use Azure RBAC to grant just the permissions that guest users need in your environment.
Incorrect:
Not B: A forest trust links on-premises Active Directory forests; it does not enable role assignments to Azure resources in the Contoso subscription.

Reference

7
Q

You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.
App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.
You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Correct Answer

Box 1: An Azure AD app registration
Azure Active Directory (Azure AD) provides cloud-based directory and identity management services. You can use Azure AD to manage the users of your application and to authenticate access to your applications.
You register your application with an Azure AD tenant.
Box 2: A conditional access policy
Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action.
By using Conditional Access policies, you can apply the right access controls when needed to keep your organization secure and stay out of your user’s way when not needed.

Reference

8
Q

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Traffic Analytics in Azure Network Watcher to analyze the network traffic.
Does this meet the goal?

A. Yes
B. No

A

Correct Answer: B

Instead use Azure Network Watcher IP Flow Verify, which allows you to detect traffic filtering issues at a VM level.
Note: IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned. While any source or destination IP can be chosen, IP flow verify helps administrators quickly diagnose connectivity issues from or to the internet and from or to the on-premises environment.

Reference

9
Q

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Advisor to analyze the network traffic.

Does this meet the goal?

A. Yes
B. No

A

Correct Answer: B

Instead use Azure Network Watcher IP Flow Verify, which allows you to detect traffic filtering issues at a VM level.
Note: IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned. While any source or destination IP can be chosen, IP flow verify helps administrators quickly diagnose connectivity issues from or to the internet and from or to the on-premises environment.

Reference

10
Q

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic.
Does this meet the goal?

A. Yes
B. No

A

Correct Answer: A

Azure Network Watcher IP Flow Verify allows you to detect traffic filtering issues at a VM level.
IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned. While any source or destination IP can be chosen,
IP flow verify helps administrators quickly diagnose connectivity issues from or to the internet and from or to the on-premises environment.
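The decision that IP flow verify reports can be sketched in stdlib Python: evaluate a packet against NSG-style rules in priority order and return the first match, including the name of the rule that allowed or denied the flow. The rule set below is hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Rule:
    name: str
    priority: int        # lower number = evaluated first, as in an NSG
    port: Optional[int]  # None matches any port
    action: str          # "Allow" or "Deny"

def ip_flow_verify(rules: List[Rule], remote_port: int) -> Tuple[str, str]:
    """Return (action, rule_name) for the first rule matching the packet,
    mirroring how IP flow verify names the NSG rule that decided the flow."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.port is None or rule.port == remote_port:
            return rule.action, rule.name
    return "Deny", "DefaultDenyAll"  # NSGs end with an implicit deny

nsg = [
    Rule("AllowHTTPS", 100, 443, "Allow"),
    Rule("DenyAllInbound", 65500, None, "Deny"),
]
print(ip_flow_verify(nsg, 443))   # ('Allow', 'AllowHTTPS')
print(ip_flow_verify(nsg, 3389))  # ('Deny', 'DenyAllInbound')
```

Returning the matched rule name is what makes the real feature useful for diagnosing which security rule is dropping traffic.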

Reference

11
Q

You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016 and Linux.
You need to use Azure Monitor to design an alerting strategy for security-related events.
Which Azure Monitor Logs tables should you query? To answer, drag the appropriate tables to the correct log types. Each table may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Select and Place:

Reference

12
Q

You are designing a large Azure environment that will contain many subscriptions.
You plan to use Azure Policy as part of a governance solution.
To which three scopes can you assign Azure Policy definitions? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. subscriptions
D. compute resources
E. resource groups
F. management groups

A

Correct Answer: CEF

Azure Policy evaluates resources in Azure by comparing the properties of those resources to business rules. Once your business rules have been formed, the policy definition or initiative is assigned to any scope of resources that Azure supports, such as management groups, subscriptions, resource groups, or individual resources.

Reference

13
Q

Your on-premises network contains a server named Server1 that runs an ASP.NET application named App1.
You have a hybrid deployment of Azure Active Directory (Azure AD).
You need to recommend a solution to ensure that users sign in by using their Azure AD account and Azure Multi-Factor Authentication (MFA) when they connect to App1 from the internet.

Which three features should you recommend be deployed and configured in sequence? To answer, move the appropriate features from the list of features to the answer area and arrange them in the correct order.
Select and Place: Answer Area

A

Correct Answer

Step 1: Azure AD Application Proxy
Start by enabling communication to Azure data centers to prepare your environment for Azure AD Application Proxy.
Step 2: an Azure AD enterprise application
Add an on-premises app to Azure AD.
Now that you’ve prepared your environment and installed a connector, you’re ready to add on-premises applications to Azure AD.
1. Sign in as an administrator in the Azure portal.
2. In the left navigation panel, select Azure Active Directory.
3. Select Enterprise applications, and then select New application.
4. Etc.
Step 3: Setup a conditional Access Policy to ensure MFA

Reference

14
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.
What should you include in the recommendation?

A. Azure Activity Log
B. Azure Advisor
C. Azure Analysis Services
D. Azure Monitor action groups

A

Correct Answer: A

Activity logs are kept for 90 days. You can query for any range of dates, as long as the starting date isn’t more than 90 days in the past.
Through activity logs, you can determine:
✑ what operations were taken on the resources in your subscription
✑ who started the operation
✑ when the operation occurred
✑ the status of the operation
✑ the values of other properties that might help you research the operation
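If the activity log is also routed to a Log Analytics workspace, the monthly report can be produced with a query against the AzureActivity table. The sketch below uses the documented column names; the status string value should be verified against your workspace schema:

```kusto
AzureActivity
| where TimeGenerated > ago(30d)
| where OperationNameValue endswith "/WRITE"   // resource create/update operations
| where ActivityStatusValue == "Success"
| summarize Deployments = count() by ResourceGroup, ResourceProviderValue
```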

Reference

15
Q

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Install and configure the Azure Monitoring agent and the Dependency Agent on all the virtual machines. Use VM insights in Azure Monitor to analyze the network traffic.
Does this meet the goal?

A. Yes
B. No

A

Correct Answer: B

Use the Azure Monitor agent if you need to:
Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises.
Use the Dependency agent if you need to:
Use the Map feature VM insights or the Service Map solution.
Note: Instead, use Azure Network Watcher IP flow verify, which allows you to detect traffic filtering issues at a VM level.
IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned. While any source or destination IP can be chosen,
IP flow verify helps administrators quickly diagnose connectivity issues from or to the internet and from or to the on-premises environment.

Reference 1
Reference 2

16
Q

DRAG DROP -
You need to design an architecture to capture the creation of users and the assignment of roles. The captured data must be stored in Azure Cosmos DB.

Which services should you include in the design? To answer, drag the appropriate services to the correct targets. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Select and Place:

A

Correct Answer

Box 1: Azure Event Hubs -
You can route Azure Active Directory (Azure AD) activity logs to several endpoints for long term retention and data insights.
The Event Hub is used for streaming.

Box 2: Azure Function -
Use an Azure Function along with a cosmos DB change feed, and store the data in Cosmos DB.

Reference

17
Q

Your company, named Contoso, Ltd., implements several Azure logic apps that have HTTP triggers. The logic apps provide access to an on-premises web service.
Contoso establishes a partnership with another company named Fabrikam, Inc.
Fabrikam does not have an existing Azure Active Directory (Azure AD) tenant and uses third-party OAuth 2.0 identity management to authenticate its users.
Developers at Fabrikam plan to use a subset of the logic apps to build applications that will integrate with the on-premises web service of Contoso.
You need to design a solution to provide the Fabrikam developers with access to the logic apps. The solution must meet the following requirements:
✑ Requests to the logic apps from the developers must be limited to lower rates than the requests from the users at Contoso.
✑ The developers must be able to rely on their existing OAuth 2.0 provider to gain access to the logic apps.
✑ The solution must NOT require changes to the logic apps.
✑ The solution must NOT use Azure AD guest accounts.
What should you include in the solution?

A. Azure Front Door
B. Azure AD Application Proxy
C. Azure AD business-to-business (B2B)
D. Azure API Management

A

Correct Answer: D

The best solution to provide Fabrikam developers with access to the logic apps while meeting the given requirements is:

D. Azure API Management

Here’s why:

  • Rate limiting: Azure API Management allows you to set rate limits for different API consumers, ensuring that requests from Fabrikam developers are limited to lower rates than those from Contoso users.
  • OAuth 2.0 integration: Azure API Management supports integration with various identity providers, including third-party OAuth 2.0 providers. This means Fabrikam developers can use their existing OAuth 2.0 provider to authenticate and gain access to the logic apps.
  • No changes to logic apps: Azure API Management acts as a gateway, handling authentication, authorization, and rate limiting without requiring any modifications to the existing logic apps.
  • No Azure AD guest accounts: The solution relies on the existing OAuth 2.0 provider, eliminating the need for Azure AD guest accounts.

While Azure Front Door and Azure AD Application Proxy can be used for other purposes, they do not directly address the specific requirements of this scenario. Azure AD B2B is not suitable because it involves creating guest accounts in Azure AD, which is explicitly prohibited in the requirements.

Therefore, Azure API Management is the most appropriate solution to provide Fabrikam developers with access to the logic apps while meeting the given constraints.
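As a sketch, the two key requirements map to standard API Management inbound policies. The OpenID configuration URL and the rate figures below are hypothetical placeholders:

```xml
<inbound>
    <!-- Accept tokens from Fabrikam's existing (non-Azure AD) OAuth 2.0 provider -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
        <openid-config url="https://login.fabrikam.example/.well-known/openid-configuration" />
    </validate-jwt>
    <!-- Throttle the Fabrikam developer subscription to a lower rate than Contoso users -->
    <rate-limit-by-key calls="10" renewal-period="60"
                       counter-key="@(context.Subscription.Id)" />
</inbound>
```

Because these policies run in the API Management gateway, the logic apps behind it remain unchanged.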

Reference

18
Q

You have an Azure subscription that contains 300 virtual machines that run Windows Server 2019.
You need to centrally monitor all warning events in the System logs of the virtual machines.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Correct Answer

Box 1: A Log Analytics workspace
Send resource logs to a Log Analytics workspace to enable the features of Azure Monitor Logs.
You must create a diagnostic setting for each Azure resource to send its resource logs to a Log Analytics workspace to use with Azure Monitor Logs.
Box 2: Install the Azure Monitor agent
Use the Azure Monitor agent if you need to:
Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises.
Manage data collection configuration centrally
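With the agents reporting to the workspace, the warning events can then be queried from the Event table, which holds Windows event log records. A sketch (adjust the time range and grouping as needed):

```kusto
Event
| where EventLog == "System" and EventLevelName == "Warning"
| summarize WarningCount = count() by Computer, Source
| order by WarningCount desc
```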

Reference

Reference 2

19
Q

You have several Azure App Service web apps that use Azure Key Vault to store data encryption keys.
Several departments have the following requests to support the web apps:

Which service should you recommend for each department’s request? To answer, configure the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: Azure AD Privileged Identity Management
Privileged Identity Management provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions on resources that you care about. Here are some of the key features of Privileged Identity Management:
Provide just-in-time privileged access to Azure AD and Azure resources
Assign time-bound access to resources using start and end dates
Require approval to activate privileged roles
Enforce multi-factor authentication to activate any role
Use justification to understand why users activate
Get notifications when privileged roles are activated
Conduct access reviews to ensure users still need roles
Download audit history for internal or external audit
Prevent removal of the last active Global Administrator role assignment

Box 2: Azure Managed Identity -
Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication.
Applications may use the managed identity to obtain Azure AD tokens. With Azure Key Vault, developers can use managed identities to access resources. Key
Vault stores credentials in a secure manner and gives access to storage accounts.
Box 3: Azure AD Privileged Identity Management
Privileged Identity Management provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions on resources that you care about. Here are some of the key features of Privileged Identity Management:
Provide just-in-time privileged access to Azure AD and Azure resources
Assign time-bound access to resources using start and end dates

Reference 1
Reference 2

20
Q

Your company has the divisions shown in the following table

You plan to deploy a custom application to each subscription. The application will contain the following:
✑ A resource group
✑ An Azure web app
✑ Custom role assignments
✑ An Azure Cosmos DB account
You need to use Azure Blueprints to deploy the application to each subscription.
What is the minimum number of objects required to deploy the application? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: 2 -
One management group for each Azure AD tenant
Azure management groups provide a level of scope above subscriptions.
All subscriptions within a management group automatically inherit the conditions applied to the management group.
All subscriptions within a single management group must trust the same Azure Active Directory tenant.

Box 2: 1 -
One single blueprint definition can be assigned to different existing management groups or subscriptions.
When creating a blueprint definition, you’ll define where the blueprint is saved. Blueprints can be saved to a management group or subscription that you have
Contributor access to. If the location is a management group, the blueprint is available to assign to any child subscription of that management group.

Box 3: 2 -

Blueprint assignment -
Each Published Version of a blueprint can be assigned (with a max name length of 90 characters) to an existing management group or subscription.
Assigning a blueprint definition to a management group means the assignment object exists at the management group. The deployment of artifacts still targets a subscription.

Management Groups Overview
Blueprints overview

21
Q

You need to design an Azure policy that will implement the following functionality:

✑ For new resources, assign tags and values that match the tags and values of the resource group to which the resources are deployed.
✑ For existing resources, identify whether the tags and values match the tags and values of the resource group that contains the resources.
✑ For any non-compliant resources, trigger auto-generated remediation tasks to create missing tags and values.
The solution must use the principle of least privilege.
What should you include in the design? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: Modify -
Modify is used to add, update, or remove properties or tags on a subscription or resource during creation or update. A common example is updating tags on resources such as costCenter. Existing non-compliant resources can be remediated with a remediation task. A single Modify rule can have any number of operations. Policy assignments with effect set as Modify require a managed identity to do remediation.
Incorrect:
* The following effects are deprecated: EnforceOPAConstraint EnforceRegoPolicy
* Append is used to add additional fields to the requested resource during creation or update. A common example is specifying allowed IPs for a storage resource.
Append is intended for use with non-tag properties. While Append can add tags to a resource during a create or update request, it’s recommended to use the
Modify effect for tags instead.
Box 2: A managed identity with the Contributor role
The managed identity must be granted the roles required to remediate the non-compliant resources.
Contributor - Can create and manage all types of Azure resources but can’t grant access to others.
Incorrect:
User Access Administrator: lets you manage user access to Azure resources.
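A Modify rule that inherits a tag from the resource group typically looks like the following sketch. The costCenter tag name is illustrative; the role definition ID shown is the built-in Contributor role, which the policy's managed identity needs for remediation:

```json
{
  "if": {
    "field": "tags['costCenter']",
    "notEquals": "[resourceGroup().tags['costCenter']]"
  },
  "then": {
    "effect": "modify",
    "details": {
      "roleDefinitionIds": [
        "/providers/microsoft.authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "operations": [
        {
          "operation": "addOrReplace",
          "field": "tags['costCenter']",
          "value": "[resourceGroup().tags['costCenter']]"
        }
      ]
    }
  }
}
```

The same definition evaluates new deployments at creation time and flags existing resources as non-compliant until a remediation task applies the tag operation.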

Governance Policy Effects
Remediate Resources
RBAC Built-in roles

22
Q

Monitoring

You have an Azure subscription that contains the resources shown in the following table

You create an Azure SQL database named DB1 that is hosted in the East US Azure region.
To DB1, you add a diagnostic setting named Settings1. Settings1 archives SQLInsights to storage1 and sends SQLInsights to Workspace1.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area

A

Answer

Analyzing the Statements

Given the information provided, here’s the breakdown of the statements:

  1. You can add a new diagnostic setting that archives SQLInsights logs to storage2.
    • Yes. You can create a new diagnostic setting for DB1 that archives SQLInsights logs to storage2. This would be in addition to the existing setting that archives to storage1.
  2. You can add a new diagnostic setting that sends SQLInsights logs to Workspace2.
    • Yes. You can create a new diagnostic setting for DB1 that sends SQLInsights logs to Workspace2. This would be in addition to the existing setting that sends to Workspace1.
  3. You can add a new diagnostic setting that sends SQLInsights logs to Hub1.
    • No. Hub1 is an Azure event hub, which is primarily designed for streaming data. It’s not directly suitable for storing and analyzing log data like SQLInsights. While you might be able to configure a custom pipeline to send SQLInsights data to Hub1, it’s not a straightforward or recommended approach.

In summary:

  • You can configure multiple diagnostic settings for a single Azure SQL database.
  • You can choose different storage accounts and Log Analytics workspaces for archiving and analyzing SQLInsights logs.
  • Sending SQLInsights logs to an event hub is supported only when the event hub is in the same region as the database, which is why Hub1 cannot be used here.
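The destination rules can be modeled as a small check. This is a conceptual sketch based on the documented Azure Monitor constraints (a resource supports multiple diagnostic settings, historically up to 5; each setting names at most one destination of each type; an event hub destination must be in the resource's region); it is not SDK code, and the setting names are from this question:

```python
# Conceptual model of Azure Monitor diagnostic-setting rules: not SDK code.
MAX_SETTINGS_PER_RESOURCE = 5
DESTINATION_TYPES = {"storage_account", "log_analytics_workspace", "event_hub"}

def can_add_setting(existing_settings, new_setting):
    """new_setting maps destination type -> destination name."""
    if len(existing_settings) >= MAX_SETTINGS_PER_RESOURCE:
        return False
    # Within one setting, only known destination types may appear.
    return set(new_setting) <= DESTINATION_TYPES

def event_hub_allowed(resource_region, hub_region):
    # The event hub namespace must be in the same region as the resource.
    return resource_region == hub_region

# DB1 already has Settings1 (archive to storage1, send to Workspace1);
# a second setting targeting storage2 or Workspace2 is fine.
settings1 = {"storage_account": "storage1", "log_analytics_workspace": "Workspace1"}
print(can_add_setting([settings1], {"storage_account": "storage2"}))  # True
```

Under this model, statements 1 and 2 succeed as additional settings, while statement 3 fails the region check.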

Azure Monitor: Diagnostic Settings
Azure Sql: Diagnostic Telemetry

23
Q

You plan to deploy an Azure SQL database that will store Personally Identifiable Information (PII).
You need to ensure that only privileged users can view the PII.
What should you include in the solution?

A. dynamic data masking
B. role-based access control (RBAC)
C. Data Discovery & Classification
D. Transparent Data Encryption (TDE)

A

A. dynamic data masking

Dynamic Data Masking (DDM) is a feature in Azure SQL Database that helps you protect sensitive data by obfuscating it from non-privileged users. DDM allows you to define masking rules on specific columns, so that the data in those columns is automatically replaced with a masked value when queried by users without the appropriate permissions. This ensures that only privileged users can view the actual Personally Identifiable Information (PII), while other users will see the masked data.
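As a conceptual illustration only (this is not how DDM is configured, and the masking format below is made up), the effect of a masking rule on a query result can be sketched as:

```python
# Conceptual sketch of dynamic data masking: the real feature is applied by
# Azure SQL Database at query time; this only illustrates the behaviour of an
# email-style masking rule for non-privileged users.

def mask_email(value):
    """Illustrative email mask: expose only the first character."""
    local = value.split("@", 1)[0]
    return local[0] + "XXX@XXXX.com"

def query_column(value, user_is_privileged, mask=mask_email):
    # Privileged users see the real value; everyone else sees the mask.
    return value if user_is_privileged else mask(value)

print(query_column("alice@contoso.com", user_is_privileged=False))  # aXXX@XXXX.com
print(query_column("alice@contoso.com", user_is_privileged=True))   # alice@contoso.com
```

In Azure SQL Database the rule is defined on the column itself, and users granted the UNMASK permission see the unmasked data.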

24
Q

You plan to deploy an app that will use an Azure Storage account.
You need to deploy the storage account. The storage account must meet the following requirements:
✑ Store the data for multiple users.
✑ Encrypt each user’s data by using a separate key.
✑ Encrypt all the data in the storage account by using customer-managed keys.
What should you deploy?

A. files in a premium file share storage account
B. blobs in a general purpose v2 storage account
C. blobs in an Azure Data Lake Storage Gen2 account
D. files in a general purpose v2 storage account

A

Correct Answer: B

Blob storage in a general-purpose v2 account supports encryption scopes, which let you encrypt different sets of blobs (for example, each user's data) with separate customer-managed keys, while the account as a whole is also protected by customer-managed keys. A client can additionally include a customer-provided key on individual Blob storage read or write requests for granular control over how blob data is encrypted and decrypted. These per-key capabilities are available only for blobs, which is why blobs in a general-purpose v2 storage account is the correct choice.

Reference

25
Q

You have an Azure App Service web app that uses a system-assigned managed identity.
You need to recommend a solution to store the settings of the web app as secrets in an Azure key vault. The solution must meet the following requirements:
✑ Minimize changes to the app code.
✑ Use the principle of least privilege.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: Key Vault references in Application settings
Source Application Settings from Key Vault.
Key Vault references can be used as values for Application Settings, allowing you to keep secrets in Key Vault instead of the site config. Application Settings are securely encrypted at rest, but if you need secret management capabilities, they should go into Key Vault.
To use a Key Vault reference for an app setting, set the reference as the value of the setting. Your app can reference the secret through its key as normal. No code changes are required.

Box 2: Secrets: Get -
In order to read secrets from Key Vault, you need to have a vault created and give your app permission to access it.
1. Create a key vault by following the Key Vault quickstart.
2. Create a managed identity for your application.
3. Key Vault references will use the app’s system assigned identity by default, but you can specify a user-assigned identity.
4. Create an access policy in Key Vault for the application identity you created earlier. Enable the “Get” secret permission on this policy.
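The Key Vault reference syntax for app settings is documented; as an illustration, here is a minimal parser for the two reference forms (the vault and secret names are hypothetical):

```python
import re

# Minimal parser for App Service Key Vault reference strings. The
# "@Microsoft.KeyVault(...)" syntax is the documented app-settings format;
# the vault/secret names used below are made up.
KV_REF = re.compile(r"^@Microsoft\.KeyVault\((?P<body>.+)\)$")

def parse_kv_reference(value):
    """Return the reference's key=value pairs as a dict, or None when the
    setting holds a plain value rather than a Key Vault reference."""
    m = KV_REF.match(value.strip())
    if not m:
        return None
    return dict(pair.split("=", 1) for pair in m.group("body").split(";"))

ref = parse_kv_reference(
    "@Microsoft.KeyVault(SecretUri=https://kv1.vault.azure.net/secrets/DbPassword/)"
)
print(ref["SecretUri"])
```

At runtime, App Service resolves such a reference with the app's managed identity and hands the app the secret value, which is why no code changes are needed.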

Reference

26
Q

You plan to deploy an application named App1 that will run on five Azure virtual machines. Additional virtual machines will be deployed later to run App1.
You need to recommend a solution to meet the following requirements for the virtual machines that will run App1:
✑ Ensure that the virtual machines can authenticate to Azure Active Directory (Azure AD) to gain access to an Azure key vault, Azure Logic Apps instances, and an Azure SQL database.
✑ Avoid assigning new roles and permissions for Azure services when you deploy additional virtual machines.
✑ Avoid storing secrets and certificates on the virtual machines.
✑ Minimize administrative effort for managing identities.
Which type of identity should you include in the recommendation?

A. a system-assigned managed identity
B. a service principal that is configured to use a certificate
C. a service principal that is configured to use a client secret
D. a user-assigned managed identity

A

Correct Answer: D

Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication.
A user-assigned managed identity:
Can be shared.
The same user-assigned managed identity can be associated with more than one Azure resource.
Common usage:
Workloads that run on multiple resources and can share a single identity.
For example, a workload where multiple virtual machines need to access the same resource.
Incorrect:
Not A: A system-assigned managed identity can’t be shared. It can only be associated with a single Azure resource.
Typical usage:
Workloads that are contained within a single Azure resource.
Workloads for which you need independent identities.
For example, an application that runs on a single virtual machine.

Reference

27
Q

You have the resources shown in the following table

CDB1 hosts a container that stores continuously updated operational data.
You are designing a solution that will use AS1 to analyze the operational data daily.
You need to recommend a solution to analyze the data without affecting the performance of the operational data store.
What should you include in the recommendation?

A. Azure Cosmos DB change feed
B. Azure Data Factory with Azure Cosmos DB and Azure Synapse Analytics connectors
C. Azure Synapse Link for Azure Cosmos DB
D. Azure Synapse Analytics with PolyBase data loading

A

Correct Answer: C

Azure Synapse Link for Azure Cosmos DB creates a tight integration between Azure Cosmos DB and Azure Synapse Analytics. It enables customers to run near real-time analytics over their operational data with full performance isolation from their transactional workloads and without an ETL pipeline.

Reference

28
Q

You deploy several Azure SQL Database instances.
You plan to configure the Diagnostics settings on the databases as shown in the following exhibit

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: 90 days -
As per exhibit.

Box 2: 730 days -
How long is the data kept?
Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days.

Reference

29
Q

You have an application that is used by 6,000 users to validate their vacation requests. The application manages its own credential store.
Users must enter a username and password to access the application. The application does NOT support identity providers.
You plan to upgrade the application to use single sign-on (SSO) authentication by using an Azure Active Directory (Azure AD) application registration.
Which SSO method should you use?

A. header-based
B. SAML
C. password-based
D. OpenID Connect

A

Correct Answer: C

Password - On-premises applications can use a password-based method for SSO. This choice works when applications are configured for Application Proxy.
With password-based SSO, users sign in to the application with a username and password the first time they access it. After the first sign-on, Azure AD provides the username and password to the application. Password-based SSO enables secure application password storage and replay using a web browser extension or mobile app. This option uses the existing sign-in process provided by the application, enables an administrator to manage the passwords, and doesn’t require the user to know the password.
Incorrect:
Choosing an SSO method depends on how the application is configured for authentication. Cloud applications can use federation-based options, such as OpenID
Connect, OAuth, and SAML.
Federation - When you set up SSO to work between multiple identity providers, it’s called federation.

Reference

30
Q

You have an Azure subscription that contains a virtual network named VNET1 and 10 virtual machines. The virtual machines are connected to VNET1.
You need to design a solution to manage the virtual machines from the internet. The solution must meet the following requirements:
✑ Incoming connections to the virtual machines must be authenticated by using Azure Multi-Factor Authentication (MFA) before network connectivity is allowed.
✑ Incoming connections must use TLS and connect to TCP port 443.
✑ The solution must support RDP and SSH.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: Just-in-time (JIT) VM access
Lock down inbound traffic to your Azure Virtual Machines with Microsoft Defender for Cloud’s just-in-time (JIT) virtual machine (VM) access feature. This reduces exposure to attacks while providing easy access when you need to connect to a VM.
Note: Threat actors actively hunt accessible machines with open management ports, like RDP or SSH. Your legitimate users also use these ports, so it’s not practical to keep them closed.
When you enable just-in-time VM access, you can select the ports on the VM to which inbound traffic will be blocked.
To solve this dilemma, Microsoft Defender for Cloud offers JIT. With JIT, you can lock down the inbound traffic to your VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed.
Box 2: A conditional Access policy that has Cloud Apps assignment set to Azure Windows VM Sign-In
You can enforce Conditional Access policies such as multi-factor authentication or user sign-in risk check before authorizing access to Windows VMs in Azure that are enabled with Azure AD sign in. To apply Conditional Access policy, you must select the “Azure Windows VM Sign-In” app from the cloud apps or actions assignment option and then use Sign-in risk as a condition and/or require multi-factor authentication as a grant access control.

Reference
Reference 2

31
Q

You are designing an Azure governance solution.
All Azure resources must be easily identifiable based on the following operational information: environment, owner, department and cost center.
You need to ensure that you can use the operational information when you generate reports for the Azure resources.
What should you include in the solution?

A. an Azure data catalog that uses the Azure REST API as a data source
B. an Azure management group that uses parent groups to create a hierarchy
C. an Azure policy that enforces tagging rules
D. Azure Active Directory (Azure AD) administrative units

A

Correct Answer: C

You apply tags to your Azure resources, resource groups, and subscriptions to logically organize them into a taxonomy. Each tag consists of a name and a value pair.
You use Azure Policy to enforce tagging rules and conventions. By creating a policy, you avoid the scenario of resources being deployed to your subscription that don’t have the expected tags for your organization. Instead of manually applying tags or searching for resources that aren’t compliant, you create a policy that automatically applies the needed tags during deployment.
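What such a tagging policy enforces can be sketched as a simple audit over resource tags; the sample resources below are hypothetical:

```python
# Illustrative audit of a require-tags rule, i.e. what an Azure Policy with an
# audit/deny effect checks at deployment time. Resource data is made up.
REQUIRED_TAGS = {"environment", "owner", "department", "costcenter"}

def missing_tags(resource):
    return REQUIRED_TAGS - set(resource.get("tags", {}))

resources = [
    {"name": "vm1", "tags": {"environment": "prod", "owner": "alice",
                             "department": "finance", "costcenter": "cc-100"}},
    {"name": "vm2", "tags": {"environment": "dev"}},
]
# Non-compliant resources and the tags they are missing.
non_compliant = {r["name"]: sorted(missing_tags(r)) for r in resources if missing_tags(r)}
print(non_compliant)  # {'vm2': ['costcenter', 'department', 'owner']}
```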

Reference

32
Q

Your company has the divisions shown in the following table

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.
You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.
What should you recommend?

A. Configure the Azure AD provisioning service.
B. Enable Azure AD pass-through authentication and update the sign-in endpoint.
C. Use Azure AD entitlement management to govern external users.
D. Configure Azure AD join.

33
Q

Your company has 20 web APIs that were developed in-house.
The company is developing 10 web apps that will use the web APIs. The web apps and the APIs are registered in the company's Azure Active Directory (Azure AD) tenant. The web APIs are published by using Azure API Management.
You need to recommend a solution to block unauthorized requests originating from the web apps from reaching the web APIs. The solution must meet the following requirements:
✑ Use Azure AD-generated claims.
✑ Minimize configuration and management effort.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

The correct options to select are:

  • Grant permissions to allow the web apps to access the web APIs by using: Azure AD
  • Configure a JSON Web Token (JWT) validation policy by using: Azure API Management

Here’s why:

  • Azure AD is the most appropriate choice for granting permissions to the web apps to access the web APIs. Azure AD provides a robust and secure mechanism for managing access control and authorization. By using Azure AD, you can leverage the built-in features and capabilities of the platform to ensure that only authorized web apps can access the web APIs.
  • Azure API Management is the best option for configuring a JSON Web Token (JWT) validation policy. Azure API Management provides a centralized platform for managing and securing APIs. By configuring a JWT validation policy in Azure API Management, you can enforce authorization rules and prevent unauthorized access to the web APIs. This approach minimizes configuration and management effort, as you can manage the policy centrally rather than having to configure it in each individual web API.

The other options are not as suitable:

  • Azure API Management and The web APIs are not appropriate for granting permissions to the web apps. While Azure API Management can be used to manage access control for APIs, it is not the best choice for granting permissions to web apps. The web APIs themselves are not responsible for granting permissions.
  • The web APIs is not appropriate for configuring a JWT validation policy. The web APIs are designed to provide functionality, not to enforce security policies.
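Conceptually, the validate-jwt policy in API Management inspects claims such as the audience on each incoming request. The sketch below models only that claim check on an unsigned sample token; real validation also verifies the token signature against Azure AD's signing keys, and the audience value here is hypothetical:

```python
import base64
import json

def b64url_decode(seg):
    # JWT segments use base64url without padding; restore padding first.
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def audience_ok(jwt, expected_audience):
    header, payload, signature = jwt.split(".")
    claims = json.loads(b64url_decode(payload))
    return claims.get("aud") == expected_audience

def make_token(claims):
    # Build an unsigned sample token for demonstration only.
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj).encode()).rstrip(b"=").decode()
    return f"{enc({'alg': 'none'})}.{enc(claims)}."

token = make_token({"aud": "api://web-api-1"})
print(audience_ok(token, "api://web-api-1"))  # True
```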
34
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.
What should you include in the recommendation?

A. Azure Log Analytics
B. Azure Arc
C. Azure Analysis Services
D. Application Insights

A

Correct Answer: A

The Activity log is a platform log in Azure that provides insight into subscription-level events. Activity log includes such information as when a resource is modified or when a virtual machine is started.
Activity log events are retained in Azure for 90 days and then deleted.
For more functionality, you should create a diagnostic setting to send the Activity log to one or more of these locations for the following reasons: to Azure Monitor Logs for more complex querying and alerting, and longer retention (up to two years) to Azure Event Hubs to forward outside of Azure to Azure Storage for cheaper, long-term archiving
Note: Azure Monitor builds on top of Log Analytics, the platform service that gathers log and metrics data from all your resources. The easiest way to think about it is that Azure Monitor is the marketing name, whereas Log Analytics is the technology that powers it.
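As a sketch of the reporting step, new deployments surface in the Activity log as succeeded write operations. The event shape below is simplified, not the exact Activity Log schema, and note that write operations cover updates as well as creations:

```python
from datetime import datetime

# Filter activity-log-like records down to succeeded ARM writes in one month.
def new_deployments(events, year, month):
    return [
        e["resourceId"] for e in events
        if e["operationName"].endswith("/write")
        and e["status"] == "Succeeded"
        and e["eventTimestamp"].year == year
        and e["eventTimestamp"].month == month
    ]

events = [
    {"resourceId": "vm1", "operationName": "Microsoft.Compute/virtualMachines/write",
     "status": "Succeeded", "eventTimestamp": datetime(2024, 5, 3)},
    {"resourceId": "vm2", "operationName": "Microsoft.Compute/virtualMachines/delete",
     "status": "Succeeded", "eventTimestamp": datetime(2024, 5, 9)},
]
print(new_deployments(events, 2024, 5))  # ['vm1']
```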

Reference

35
Q

Your company has the divisions shown in the following table

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.
You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.
What should you recommend?

A. Configure the Azure AD provisioning service.
B. Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).
C. Use Azure AD entitlement management to govern external users.
D. Configure Azure AD Identity Protection.

A

Correct Answer: C

Entitlement management is an identity governance capability that enables organizations to manage identity and access lifecycle at scale by automating access request workflows, access assignments, reviews, and expiration. Entitlement management allows delegated non-admins to create access packages that external users from other organizations can request access to. One and multi-stage approval workflows can be configured to evaluate requests, and provision users for time-limited access with recurring reviews. Entitlement management enables policy-based provisioning and deprovisioning of external accounts.

Note: Access Packages -
An access package is the foundation of entitlement management. Access packages are groupings of policy-governed resources a user needs to collaborate on a project or do other tasks. For example, an access package might include: access to specific SharePoint sites. enterprise applications including your custom in-house and SaaS apps like Salesforce.
Microsoft Teams.
Microsoft 365 Groups.
Incorrect:
Not A: Automatic provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change.
Not B: Privileged Identity Management provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions on resources that you care about. Here are some of the key features of Privileged Identity Management:
Provide just-in-time privileged access to Azure AD and Azure resources
Assign time-bound access to resources using start and end dates
Etc.

Reference 1
Reference 2
Reference 3

36
Q

You are developing an app that will read activity logs for an Azure subscription by using Azure Functions.

You need to recommend an authentication solution for Azure Functions. The solution must minimize administrative effort.

What should you include in the recommendation?

A. an enterprise application in Azure AD
B. system-assigned managed identities
C. shared access signatures (SAS)
D. application registration in Azure AD

A

Correct Answer: B

Recommendation: B. system-assigned managed identities

Explanation:
* Minimal administrative effort: Managed identities are automatically created and managed by Azure, requiring minimal configuration.
* Strong security: Managed identities provide a secure way for Azure resources to authenticate to other Azure services without exposing credentials.
* Ideal for Azure Functions: Azure Functions seamlessly integrates with managed identities, making it easy to access resources like Azure Monitor logs.

Additional Considerations:
* Enterprise application in Azure AD (A): While this option can be used for authentication, it involves more administrative overhead in creating and managing the application and assigning permissions.
* Shared access signatures (SAS): SAS tokens provide temporary access to resources, but they require careful management and rotation to maintain security.
* Application registration in Azure AD (D): Similar to enterprise applications, this option also involves additional administrative tasks.

Therefore, system-assigned managed identities offer the best balance of security and minimal administrative effort for this scenario.

37
Q

Your company has the divisions shown in the following table

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.

You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.

What should you recommend?

A. Configure Azure AD join.
B. Use Azure AD entitlement management to govern external users.
C. Enable Azure AD pass-through authentication and update the sign-in endpoint.
D. Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).

A

Correct Answer: B

What can I do with entitlement Management?

Here are some of the capabilities of entitlement management:
- Select connected organizations whose users can request access. When a user who isn’t yet in your directory requests access, and is approved, they’re automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.

38
Q

Your company has the divisions shown in the following table

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.

You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.

What should you recommend?

A. Configure Azure AD join.
B. Configure Azure AD Identity Protection.
C. Use Azure AD entitlement management to govern external users.
D. Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).

A

Correct Answer: C

What can I do with entitlement Management?

Here are some of the capabilities of entitlement management:
- Select connected organizations whose users can request access. When a user who isn’t yet in your directory requests access, and is approved, they’re automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.

39
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.

What should you include in the recommendation?

A. Azure Activity Log
B. Azure Arc
C. Azure Analysis Services
D. Azure Monitor metrics

A

Correct Answer: A

This question has two variants up to this point.
If you don't see **Log Analytics Workspace** among the options, choose **Azure Activity Log**. If you don't see Activity Log, choose Log Analytics.

Azure Activity Log provides insights into subscription-level events that have occurred in your Azure account. It includes information about resource creation, deletion, and modification events, making it an excellent choice for monitoring new ARM resource deployments in your Azure subscription. You can export the Activity Log data to a storage account, Event Hubs, or Log Analytics workspace for further analysis and reporting. By creating a custom query or using the built-in tools for filtering and visualization, you can generate a monthly report of all the new ARM resource deployments in your Azure subscription.

40
Q

You have an Azure subscription that contains an Azure key vault named KV1 and a virtual machine named VM1. VM1 runs Windows Server 2022: Azure Edition.

You plan to deploy an ASP.Net Core-based application named App1 to VM1.

You need to configure App1 to use a system-assigned managed identity to retrieve secrets from KV1. The solution must minimize development effort.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

  1. Client credentials grant flows
  2. Azure Instance Metadata (IMDS) endpoint

Note: some answer keys give an Azure AD OAuth 2.0 token endpoint for the second answer, but that is not correct here.

The key difference in this scenario is that we are using a Managed Identity, which is a feature of Azure AD, and in that case, access tokens are obtained through the Azure Instance Metadata Service (IMDS) API. The managed identity is responsible for managing the lifecycle of these credentials.

Therefore, for the case of an application in an Azure VM that uses a managed identity to authenticate with Key Vault, the IMDS would be used, not an OAuth 2.0 endpoint directly.
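For reference, the documented IMDS token request uses a fixed link-local endpoint, a `Metadata: true` header, and a `resource` parameter. The sketch below only constructs the request; it would work only when actually sent from inside an Azure VM, so it is not sent here:

```python
# Build the IMDS managed-identity token request. The endpoint, api-version,
# header, and Key Vault resource URI are documented Azure values.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_request(resource, api_version="2018-02-01"):
    return {
        "url": IMDS_TOKEN_URL,
        "params": {"api-version": api_version, "resource": resource},
        # IMDS requires this header so forwarded (SSRF-style) requests fail.
        "headers": {"Metadata": "true"},
    }

req = imds_token_request("https://vault.azure.net")
print(req["url"])
```

In practice the ASP.NET Core app would not build this by hand; the Azure Identity client libraries perform the IMDS call for you, which is what minimizes development effort.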

Get a token using http

41
Q

Your company has the divisions shown in the following table

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.

You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.

What should you recommend?

A. Configure Azure AD join.
B. Configure Azure AD Identity Protection.
C. Configure a Conditional Access policy.
D. Configure Supported account types in the application registration and update the sign-in endpoint.

A

Correct Answer: D

To enable users in the fabrikam.com tenant to authenticate to App1, you need to configure the application registration for App1 in Azure AD to support users from both contoso.com and fabrikam.com. This can be done by updating the “Supported account types” in the application registration to allow users from any organizational directory (Any Azure AD directory - Multitenant). Once this is done, you need to update the sign-in endpoint for the application to include the fabrikam.com tenant.

This will allow users from the fabrikam.com tenant to authenticate to App1 using their Azure AD credentials.
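The mapping from the registration's Supported account types (the `signInAudience` values) to the sign-in endpoint can be sketched as follows; the audience values and endpoints are the documented ones, while the helper function itself is hypothetical:

```python
# Map an app registration's signInAudience value to its sign-in authority.
def authority(sign_in_audience, tenant_id=None):
    base = "https://login.microsoftonline.com"
    if sign_in_audience == "AzureADMyOrg":          # single tenant
        return f"{base}/{tenant_id}"
    if sign_in_audience == "AzureADMultipleOrgs":   # any Azure AD tenant
        return f"{base}/organizations"
    return f"{base}/common"                          # incl. personal accounts

print(authority("AzureADMultipleOrgs"))  # https://login.microsoftonline.com/organizations
```

For this scenario, App1 moves from the tenant-specific contoso.com authority to `/organizations`, so fabrikam.com users can sign in.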

Identity Supported account types

42
Q

You have an Azure AD tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned memberships. Group1 has 50 members, including 20 guest users.

You need to recommend a solution for evaluating the membership of Group1. The solution must meet the following requirements:

  • The evaluation must be repeated automatically every three months.
  • Every member must be able to report whether they need to be in Group1.
  • Users who report that they do not need to be in Group1 must be removed from Group1 automatically.
  • Users who do not report whether they need to be in Group1 must be removed from Group1 automatically.

What should you include in the recommendation?

A. Implement Azure AD Identity Protection.
B. Change the Membership type of Group1 to Dynamic User.
C. Create an access review.
D. Implement Azure AD Privileged Identity Management (PIM).

A

Correct Answer: C

Based on the requirements below:

The evaluation must be repeated automatically every three months.
* Every member must be able to report whether they need to be in Group1.
* Users who report that they do not need to be in Group1 must be removed from Group1 automatically.
* Users who do not report whether they need to be in Group1 must be removed from Group1 automatically.

The correct answer should be: Create an access review
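The required behavior (remove members who decline or who do not respond) can be modeled in a few lines; the member names are made up:

```python
# Model of the access-review outcome: self-attestation where "no" answers and
# non-responses both result in automatic removal from the group.
def apply_review(members, responses):
    """responses maps member -> 'yes' or 'no'; absent members gave no response.
    Returns the membership that remains after the review."""
    return [m for m in members if responses.get(m) == "yes"]

members = ["user1", "user2", "guest1"]
responses = {"user1": "yes", "user2": "no"}  # guest1 did not respond
print(apply_review(members, responses))  # ['user1']
```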

43
Q

You have an Azure subscription named Sub1 that is linked to an Azure AD tenant named contoso.com.

You plan to implement two ASP.NET Core apps named App1 and App2 that will be deployed to 100 virtual machines in Sub1. Users will sign in to App1 and App2 by using their contoso.com credentials.

App1 requires read permissions to access the calendar of the signed-in user. App2 requires write permissions to access the calendar of the signed-in user.

You need to recommend an authentication and authorization solution for the apps. The solution must meet the following requirements:

  • Use the principle of least privilege.
  • Minimize administrative effort.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

  1. Application registration
  2. Delegated permissions

The important point here is that both apps are deployed to the same machines, so managed identities would violate the principle of least privilege: a single system-assigned or user-assigned identity would have to be granted both read and write permissions to the user's calendar.

An app registration provides a service principal per app, so each app can be granted exactly the permissions it requires.
Use delegated permissions to access the signed-in user's data; the user (or an admin, on behalf of users) consents to delegate those permissions to the app.

44
Q

Your company has the divisions shown in the following table

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.

You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.

What should you recommend?

A. Enable Azure AD pass-through authentication and update the sign-in endpoint.
B. Use Azure AD entitlement management to govern external users.
C. Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).
D. Configure Azure AD Identity Protection.

A

Correct Answer: B

What can I do with entitlement management

Here are some of the capabilities of entitlement management:
- Select connected organizations whose users can request access. When a user who isn’t yet in your directory requests access, and is approved, they’re automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.

45
Q

Your company has the divisions shown in the following table

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.

You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.

What should you recommend?

A. Configure the Azure AD provisioning service.
B. Enable Azure AD pass-through authentication and update the sign-in endpoint.
C. Configure Supported account types in the application registration and update the sign-in endpoint.
D. Configure Azure AD join.

A

Correct Answer: C

This question has two possible answers, but they never appear together:
1. Use Azure AD entitlement management to govern external users.
2. Configure Supported account types in the application registration and update the sign-in endpoint.

46
Q

You have an Azure AD tenant that contains a management group named MG1.

You have the Azure subscriptions shown in the following table

The subscriptions contain the resource groups shown in the following table

The subscription contains the Azure AD security groups shown in the following table

The subscription contains the user accounts shown in the following table

You perform the following actions:

Assign User3 the Contributor role for Sub1.
Assign Group1 the Virtual Machine Contributor role for MG1.
Assign Group3 the Contributor role for the Tenant Root Group.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Because Group1 is assigned the Virtual Machine Contributor role for MG1, its members can create a new VM in RG1.
User2 cannot grant permissions to Group2 because User2 only has the Contributor role, which does not include permission to assign roles.
Because Group3 has the Contributor role for the Tenant Root Group, User3 can create a storage account in RG2.

You can add an existing Security group to another Security group (also known as nested groups). Depending on the group types, you can add a group as a member of another group, just like a user, which applies settings like roles and access to the nested groups.

47
Q

You have an Azure subscription that contains 1,000 resources.

You need to generate compliance reports for the subscription. The solution must ensure that the resources can be grouped by department.

What should you use to organize the resources?

A. application groups and quotas
B. Azure Policy and tags
C. administrative units and Azure Lighthouse
D. resource groups and role assignments

A

Correct Answer: B

Azure Policy allows you to define and enforce rules and regulations for your resources, ensuring compliance with organizational standards and industry regulations. You can create policies that specify the required tags for resources, such as department, and enforce their usage across the subscription. This will help you categorize and group resources based on departments.

Tags, on the other hand, are key-value pairs that you can assign to resources. By assigning tags to resources with the department information, you can easily filter and group resources based on departments when generating compliance reports.
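As a toy illustration of the tag-based grouping described above (the resource list below is hypothetical; a real report would pull the inventory from Azure Resource Graph or the Azure SDK):

```python
from collections import defaultdict

# Hypothetical resource inventory; a real report would query this
# from Azure Resource Graph, not a hard-coded list.
resources = [
    {"name": "vm1", "tags": {"department": "Finance"}},
    {"name": "vm2", "tags": {"department": "HR"}},
    {"name": "sa1", "tags": {"department": "Finance"}},
    {"name": "sa2", "tags": {}},  # untagged: a policy could flag or auto-tag this
]

# Group resource names by the value of their "department" tag.
by_department = defaultdict(list)
for r in resources:
    dept = r["tags"].get("department", "untagged")
    by_department[dept].append(r["name"])

print(dict(by_department))
# {'Finance': ['vm1', 'sa1'], 'HR': ['vm2'], 'untagged': ['sa2']}
```

An Azure Policy definition with a deny or modify effect is what keeps the `department` tag populated in the first place, so the grouping stays reliable.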

48
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.

What should you include in the recommendation?

A. Azure Arc
B. Azure Monitor metrics
C. Azure Advisor
D. Azure Log Analytics

A

Correct Answer: D

Azure Log Analytics is a service that collects and analyzes data from various sources, including Azure resources, applications, and operating systems. It provides a centralized location for storing and querying log data, making it an ideal solution for monitoring and analyzing resource deployments.

By configuring Log Analytics to collect and store the deployment logs, you can easily query and filter the data to generate a report of all the new ARM resource deployments within a specific time frame, such as a month.

Therefore, the correct answer is D. Azure Log Analytics.

49
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.

What should you include in the recommendation?

A. Azure Monitor action groups
B. Azure Arc
C. Azure Monitor metrics
D. Azure Activity Log

A

Correct Answer: D

D. Azure Activity Log

Explanation:
* Azure Activity Log captures all actions performed on Azure resources.
* It provides detailed information about when, who, and what changes were made.
* You can query the Activity Log for specific resource types, operations, and timeframes.
* By filtering for resource creation events within a specific month, you can generate a report of new resource deployments.

Additional Considerations:
* Azure Monitor metrics are primarily for numerical data points and wouldn’t capture resource creation events.
* Azure Monitor action groups are used for alerting based on specific conditions, not for generating reports.
* Azure Arc is for managing on-premises and multi-cloud resources, which is not relevant to this scenario.

Therefore, Azure Activity Log is the most suitable option for generating a monthly report of new Azure Resource Manager resource deployments.
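The filtering described above can be sketched as follows; the log entries are hypothetical stand-ins for exported Activity Log records:

```python
from datetime import datetime, timezone

# Hypothetical Activity Log entries; real entries would be exported from
# the Azure Activity Log, which records every ARM write operation.
events = [
    {"operation": "Microsoft.Compute/virtualMachines/write",
     "time": datetime(2024, 5, 3, tzinfo=timezone.utc)},
    {"operation": "Microsoft.Compute/virtualMachines/delete",
     "time": datetime(2024, 5, 10, tzinfo=timezone.utc)},
    {"operation": "Microsoft.Storage/storageAccounts/write",
     "time": datetime(2024, 4, 28, tzinfo=timezone.utc)},
]

def deployments_in_month(events, year, month):
    """Keep only resource write (create/update) operations in the given month."""
    return [e for e in events
            if e["operation"].endswith("/write")
            and e["time"].year == year and e["time"].month == month]

report = deployments_in_month(events, 2024, 5)
print([e["operation"] for e in report])
# ['Microsoft.Compute/virtualMachines/write']
```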

50
Q

You have an Azure AD tenant that contains an administrative unit named MarketingAU. MarketingAU contains 100 users.

You create two users named User1 and User2.

You need to ensure that the users can perform the following actions in MarketingAU:

  • User1 must be able to create user accounts.
  • User2 must be able to reset user passwords.

Which role should you assign to each user? To answer, drag the appropriate roles to the correct users. Each role may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Here’s an explanation:

The roles that you need to assign are:

User1: User Administrator for the MarketingAU administrative unit.

User2: Password Administrator or Helpdesk Administrator for the MarketingAU administrative unit.

The User Administrator role provides permissions to manage user accounts, including creating new users. The Password Administrator and Helpdesk Administrator roles provide permissions to reset user passwords.

Therefore, User1 needs the User Administrator role for the MarketingAU administrative unit to be able to create new user accounts.

User2 needs either the Password Administrator or Helpdesk Administrator role for the MarketingAU administrative unit to be able to reset user passwords.

Note that assigning the Helpdesk Administrator role at the tenant scope to User2 would grant permission to reset passwords for all users in the Azure AD tenant, not just those in the MarketingAU administrative unit.

51
Q

You are designing an app that will be hosted on Azure virtual machines that run Ubuntu. The app will use a third-party email service to send email messages to users. The third-party email service requires that the app authenticate by using an API key.

You need to recommend an Azure Key Vault solution for storing and accessing the API key. The solution must minimize administrative effort.

What should you recommend using to store and access the key? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

  1. Storage: Secret.
    API keys are typically stored as secrets in Azure Key Vault. The key vault can store and manage secrets like API keys, passwords, or database connection strings.
  2. Access: a service principal.

A service principal is indeed the more appropriate choice for accessing a third-party email service using an API key.

Here’s a breakdown of why:

Managed Service Identity (MSI) is primarily designed for accessing other Azure resources. While it can be used for external resources, it’s often more complex to set up and manage.
Service Principal is specifically designed for applications to authenticate to other services, including external ones. It provides a clear separation of concerns and simplifies the authentication process.
To summarize:

Store the API key as a secret

in Azure Key Vault.
Use a service principal to authenticate to the third-party email service using the API key.
By following these steps, you’ll ensure secure storage of the API key and efficient authentication to the external service.

52
Q

You have two app registrations named App1 and App2 in Azure AD. App1 supports role-based access control (RBAC) and includes a role named Writer.

You need to ensure that when App2 authenticates to access App1, the tokens issued by Azure AD include the Writer role claim.

Which blade should you use to modify each app registration? To answer, drag the appropriate blades to the correct app registrations. Each blade may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

  1. App1: b. App roles
  2. App2: c. Token configuration

This assumes the exam expects you to know that an application requesting a token (App2) would have the roles claim added via Token configuration. In practice, this is not the exact place to assign a role to an application, but given the choices provided, it is the most appropriate answer.

This is because token configuration does indeed impact the claims present in a token, and since no other suitable choice is available (API Permissions would not be used to assign a role to the application), it seems this would be the expected answer.

However, please note this is not entirely accurate based on the full capabilities of Azure AD, but it’s the best choice given the options. Normally, you would assign the app role to the service principal of App2 in the context of Enterprise Applications, which is not an option here.

53
Q

You have an Azure subscription.

You plan to deploy a monitoring solution that will include the following:

  • Azure Monitor Network Insights
  • Application Insights
  • Microsoft Sentinel
  • VM insights

The monitoring solution will be managed by a single team.

What is the minimum number of Azure Monitor workspaces required?

A. 1
B. 2
C. 3
D. 4

A

A. 1

You only need a single Azure Monitor Log Analytics workspace for all these monitoring solutions.

Here’s why:

  • Azure Monitor Network Insights, Application Insights, Microsoft Sentinel, and VM insights can all send their data to a Log Analytics workspace.
  • The workspace is a unique environment for Azure Monitor log data. Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a workspace.

Therefore, a single Azure Monitor Log Analytics workspace can be utilized to collect and analyze data from all the components of the monitoring solution. This will also enable a unified management and analysis of the collected data.

54
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.

What should you include in the recommendation?

A. Application Insights
B. Azure Analysis Services
C. Azure Advisor
D. Azure Activity Log

A

D. Azure Activity Log

The Azure Activity Log records all ARM resource deployments in your Azure subscription, making it the appropriate choice for generating a monthly report of new resource deployments.

55
Q

You have an Azure subscription that contains 10 web apps. The apps are integrated with Azure AD and are accessed by users on different project teams.

The users frequently move between projects.

You need to recommend an access management solution for the web apps. The solution must meet the following requirements:

  • The users must only have access to the app of the project to which they are assigned currently.
  • Project managers must verify which users have access to their project’s app and remove users that are no longer assigned to their project.
  • Once every 30 days, the project managers must be prompted automatically to verify which users are assigned to their projects.

What should you include in the recommendation?

A. Azure AD Identity Protection
B. Microsoft Defender for Identity
C. Microsoft Entra Permissions Management
D. Microsoft Entra ID Governance

A

Correct Answer: D

Azure AD Identity Governance (now Microsoft Entra ID Governance) allows you to balance your organization’s need for security and employee productivity with the right processes and visibility. It provides capabilities to ensure that the right people have the right access to the right resources.

56
Q

You have an Azure subscription that contains 50 Azure SQL databases.

You create an Azure Resource Manager (ARM) template named Template1 that enables Transparent Data Encryption (TDE).

You need to create an Azure Policy definition named Policy1 that will use Template1 to enable TDE for any noncompliant Azure SQL databases.

How should you configure Policy1? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer Box

Box 1: DeployIfNotExists

DeployIfNotExists policy definition executes a template deployment when the condition is met. Policy assignments with effect set as DeployIfNotExists require a managed identity to do remediation.

Box 2: The role-based access control (RBAC) roles required to perform the remediation task

The question asks what you must include in the policy definition.
Among the DeployIfNotExists properties is roleDefinitionIds (required): an array of strings that match role-based access control role IDs accessible by the subscription.
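The shape of such a policy rule can be sketched as a Python dict. This is a simplified, hypothetical sketch: the role-definition GUID and template body are placeholders, and a real definition would embed Template1 in the deployment section:

```python
# Simplified sketch of a DeployIfNotExists policy rule for TDE.
# The role-definition GUID and template body are placeholders, not real values.
policy_rule = {
    "if": {
        "field": "type",
        "equals": "Microsoft.Sql/servers/databases",
    },
    "then": {
        "effect": "deployIfNotExists",
        "details": {
            "type": "Microsoft.Sql/servers/databases/transparentDataEncryption",
            "existenceCondition": {
                "field": "Microsoft.Sql/transparentDataEncryption.status",
                "equals": "Enabled",
            },
            # Required: RBAC roles the policy's managed identity needs
            # in order to run the remediation deployment.
            "roleDefinitionIds": [
                "/providers/Microsoft.Authorization/roleDefinitions/<role-guid>"
            ],
            "deployment": {
                "properties": {
                    "mode": "incremental",
                    "template": {},  # Template1 would be embedded here
                }
            },
        },
    },
}
```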

57
Q

You have an Azure subscription. The subscription contains a tiered app named App1 that is distributed across multiple containers hosted in Azure Container Instances.

You need to deploy an Azure Monitor monitoring solution for App1. The solution must meet the following requirements:

  • Support using synthetic transaction monitoring to monitor traffic between the App1 components.
  • Minimize development effort.

What should you include in the solution?

A. Network insights
B. Application Insights
C. Container insights
D. Log Analytics Workspace insights

A

Correct Answer: B

B. Application Insights

Explanation:
* Application Insights is the ideal choice for monitoring a distributed application like App1 running on Azure Container Instances.
* It provides comprehensive application performance monitoring (APM) capabilities, including:
* Performance monitoring
* Dependency tracking
* Availability testing
* Synthetic transaction monitoring (essential for your requirement)
* Log management
* It integrates seamlessly with Azure Container Instances, making it easy to set up and use.
* It offers a rich set of features and visualizations, minimizing development effort.

Why not other options:
* Network Insights: Focuses on network connectivity and troubleshooting, not application performance monitoring.
* Container Insights: Primarily for monitoring container health and resource utilization within a Kubernetes cluster, not suitable for distributed applications.
* Log Analytics Workspace insights: While capable of collecting and analyzing logs, it lacks the built-in features and visualizations for application performance monitoring.

By choosing Application Insights, you get a powerful and comprehensive monitoring solution that meets all your requirements.

58
Q

You have an Azure subscription that contains the resources shown in the following table:

Log files from App1 are ingested into App1Logs. An average of 120 GB of log data is ingested per day.

You configure an Azure Monitor alert that will be triggered if the App1 logs contain error messages.

You need to minimize the Log Analytics costs associated with App1. The solution must meet the following requirements:
* Ensure that all the log files from App1 are ingested to App1Logs.
* Minimize the impact on the Azure Monitor alert.

Which resource should you modify, and which modification should you perform? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Workspace1: This is the Log Analytics workspace where the logs are ingested. Modifying this resource helps manage costs associated with log ingestion.

Change to a commitment pricing tier: Commitment tiers offer discounted rates for log ingestion based on a fixed commitment, which can significantly reduce costs compared to the pay-as-you-go pricing tier, especially when dealing with large volumes of data like 120 GB per day. This change ensures that all log files are ingested while minimizing costs and impact on the Azure Monitor alert.
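A back-of-the-envelope comparison shows why a commitment tier helps at 120 GB/day. The rates below are illustrative placeholders, not actual Azure pricing:

```python
# Illustrative rates only -- NOT actual Azure pricing.
pay_as_you_go_per_gb = 2.30          # hypothetical $/GB ingested
commitment_100gb_per_day = 196.00    # hypothetical flat $/day for a 100 GB/day tier
commitment_overage_per_gb = 1.96     # hypothetical $/GB beyond the commitment

daily_ingest_gb = 120

payg_cost = daily_ingest_gb * pay_as_you_go_per_gb
commit_cost = (commitment_100gb_per_day
               + (daily_ingest_gb - 100) * commitment_overage_per_gb)

print(f"Pay-as-you-go: ${payg_cost:.2f}/day, commitment tier: ${commit_cost:.2f}/day")
```

Because the change is purely a pricing-tier change on the workspace, all logs still arrive in App1Logs and the alert query is unaffected.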

59
Q

You have 12 Azure subscriptions and three projects. Each project uses resources across multiple subscriptions.

You need to use Microsoft Cost Management to monitor costs on a per project basis. The solution must minimize administrative effort.

Which two components should you include in the solution? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. budgets
B. resource tags
C. custom role-based access control (RBAC) roles
D. management groups
E. Azure boards

A

Correct Answer: AB

B. Resource tags

Why: Tags allow categorizing and tracking costs for resources by project across multiple subscriptions. This enables detailed cost analysis and reporting for each project.Use tags to assign metadata to resources (e.g., project name), making it easier to filter and analyze costs per project.

A. Budgets

Why: Budgets enable setting cost limits and alerts for each project. When combined with tags, budgets can help track and control costs effectively, ensuring each project stays within its allocated budget.Set up budgets for each project to monitor spending, receive alerts, and enforce cost controls based on tagged resources.
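A minimal sketch of per-project cost tracking with tags and budgets (all records and budget figures below are hypothetical; real data would come from Microsoft Cost Management exports):

```python
from collections import defaultdict

# Hypothetical cost records spanning several subscriptions, each tagged
# with the project that owns the resource.
cost_records = [
    {"subscription": "sub-01", "tags": {"project": "Alpha"}, "cost": 120.0},
    {"subscription": "sub-07", "tags": {"project": "Alpha"}, "cost": 80.0},
    {"subscription": "sub-03", "tags": {"project": "Beta"},  "cost": 50.0},
]
budgets = {"Alpha": 150.0, "Beta": 100.0}  # hypothetical monthly budgets

# Sum costs per project tag, regardless of which subscription they came from.
spend = defaultdict(float)
for rec in cost_records:
    spend[rec["tags"]["project"]] += rec["cost"]

for project, total in spend.items():
    status = "OVER budget" if total > budgets[project] else "within budget"
    print(f"{project}: ${total:.2f} ({status})")
```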

60
Q

You have an Azure subscription that contains multiple storage accounts.

You assign Azure Policy definitions to the storage accounts.

You need to recommend a solution to meet the following requirements:

  • Trigger on-demand Azure Policy compliance scans.
  • Raise Azure Monitor non-compliance alerts by querying logs collected by Log Analytics.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Provided answers look correct:

To trigger the compliance scans, use Azure CLI

An evaluation scan for a subscription or a resource group can be started with the Azure CLI, Azure PowerShell, a call to the REST API, or the Azure Policy Compliance Scan GitHub Action. The scan is an asynchronous process.

To generate alerts, configure diagnostic settings for the Azure activity logs

61
Q

You have an Azure subscription.

You plan to deploy five storage accounts that will store block blobs and five storage accounts that will host file shares. The file shares will be accessed by using the SMB protocol.

You need to recommend an access authorization solution for the storage accounts. The solution must meet the following requirements:

  • Maximize security.
  • Prevent the use of shared keys.
  • Whenever possible, support time-limited access.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

The correct answer is:

For the blobs:

  • A user delegation shared access signature (SAS) and a stored access policy
    For the file shares:
  • Azure AD credentials
    Explanation:
    For the blobs:
  • A user delegation shared access signature (SAS) provides fine-grained control over access to individual blobs or containers within a storage account.
  • A stored access policy allows you to define access rules that can be applied to multiple SAS tokens, simplifying management.
  • Combining a user delegation SAS and a stored access policy offers the best security and flexibility by enabling you to grant time-limited access to specific blobs or containers while centralizing access control.

For the file shares:

  • Azure AD credentials are the most secure option for accessing file shares over SMB. They provide strong authentication and authorization, eliminating the need for shared keys.
  • Azure AD credentials also support time-limited access through features like conditional access policies.
  • SAS tokens are not supported for SMB access to Azure file shares; identity-based (Azure AD) authentication is the way to avoid shared keys entirely.

By following these recommendations, you can ensure that your storage accounts are protected against unauthorized access and that access is granted only to authorized users for specific time periods.
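The time-limited aspect of a SAS can be illustrated with a toy token model. This only mimics the expiry check; it is not the real SAS format or signing process:

```python
from datetime import datetime, timedelta, timezone

# Toy model of a time-limited token, mirroring how a user delegation SAS
# carries an expiry that the service enforces. Not the real SAS format.
def issue_token(resource, ttl_hours):
    return {"resource": resource,
            "expires": datetime.now(timezone.utc) + timedelta(hours=ttl_hours)}

def is_valid(token, now=None):
    # The service rejects any request presented after the expiry time.
    now = now or datetime.now(timezone.utc)
    return now < token["expires"]

tok = issue_token("container1/blob1", ttl_hours=1)
print(is_valid(tok))                                                        # True
print(is_valid(tok, now=datetime.now(timezone.utc) + timedelta(hours=2)))   # False
```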

62
Q

You have an Azure subscription. The subscription contains 100 virtual machines that run Windows Server 2022 and have the Azure Monitor Agent installed.

You need to recommend a solution that meets the following requirements:

  • Forwards JSON-formatted logs from the virtual machines to a Log Analytics workspace
  • Transforms the logs and stores the data in a table in the Log Analytics workspace

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

63
Q

You have five Azure subscriptions. Each subscription is linked to a separate Azure AD tenant and contains virtual machines that run Windows Server 2022.

You plan to collect Windows security events from the virtual machines and send them to a single Log Analytics workspace.

You need to recommend a solution that meets the following requirements:

  • Collects event logs from multiple subscriptions
  • Supports the use of data collection rules (DCRs) to define which events to collect

What should you recommend for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Final Answer:

  1. To collect the event logs: Azure Lighthouse
  • Azure Lighthouse provides a centralized management experience across multiple subscriptions. It allows you to delegate administrative access to other tenants, enabling you to manage resources in those subscriptions as if they were your own.
  2. To support DCRs: the Azure Monitor agent
  • The Azure Monitor agent is the core component for collecting and sending data to Azure Monitor. It supports DCRs, allowing you to define which events to collect and send to Log Analytics.

Explanation:
* Azure Lighthouse is essential for managing resources across multiple subscriptions and tenants.
* Azure Monitor agent is the primary tool for collecting and filtering data from virtual machines. DCRs are a powerful feature of the Azure Monitor agent for customizing data collection.

By combining Azure Lighthouse and Azure Monitor agents with DCRs, you can effectively collect Windows security events from multiple subscriptions and send them to a single Log Analytics workspace for centralized monitoring and analysis.

64
Q

You have 100 servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2014 instances. The instances host databases that have the following characteristics:
✑ Stored procedures are implemented by using CLR.
✑ The largest database is currently 3 TB. None of the databases will ever exceed 4 TB.
You plan to move all the data from SQL Server to Azure.
You need to recommend a service to host the databases. The solution must meet the following requirements:
✑ Whenever possible, minimize management overhead for the migrated databases.
✑ Ensure that users can authenticate by using Azure Active Directory (Azure AD) credentials.
✑ Minimize the number of database changes required to facilitate the migration.
What should you include in the recommendation?

A. Azure SQL Database elastic pools
B. Azure SQL Managed Instance
C. Azure SQL Database single databases
D. SQL Server 2016 on Azure virtual machines

A

Correct Answer: B

SQL Managed Instance allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, SQL Managed Instance preserves all PaaS capabilities (automatic patching and version updates, automated backups, high availability) that drastically reduce management overhead and TCO.

65
Q

You have an Azure subscription that contains an Azure Blob Storage account named store1.
You have an on-premises file server named Server1 that runs Windows Server 2016. Server1 stores 500 GB of company files.
You need to store a copy of the company files from Server1 in store1.
Which two possible Azure services achieve this goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

A. an Azure Logic Apps integration account
B. an Azure Import/Export job
C. Azure Data Factory
D. an Azure Analysis services On-premises data gateway
E. an Azure Batch account

A

Correct Answer: BC

B: You can use the Azure Import/Export service to securely import large amounts of data into Azure Blob storage. The service requires you to ship disk drives that contain your data to an Azure datacenter, where the data is copied into your storage account.
C: Big data requires a service that can orchestrate and operationalize processes to refine these enormous stores of raw data into actionable business insights.
Azure Data Factory is a managed cloud service that’s built for these complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.

66
Q

You have an Azure subscription that contains two applications named App1 and App2. App1 is a sales processing application. When a transaction in App1 requires shipping, a message is added to an Azure Storage account queue, and then App2 listens to the queue for relevant transactions.
In the future, additional applications will be added that will process some of the shipping requests based on the specific details of the transactions.
You need to recommend a replacement for the storage account queue to ensure that each additional application will be able to read the relevant transactions.
What should you recommend?

A. one Azure Data Factory pipeline
B. multiple storage account queues
C. one Azure Service Bus queue
D. one Azure Service Bus topic

A

Correct Answer: D

A queue allows processing of a message by a single consumer. In contrast to queues, topics and subscriptions provide a one-to-many form of communication in a publish and subscribe pattern. It’s useful for scaling to large numbers of recipients. Each published message is made available to each subscription registered with the topic. Publisher sends a message to a topic and one or more subscribers receive a copy of the message, depending on filter rules set on these subscriptions.
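The publish-subscribe behavior can be mimicked with a toy in-memory model. A real solution would use the azure-servicebus SDK; this sketch only illustrates how subscription filters deliver a copy of each message to every interested consumer:

```python
class Topic:
    """Toy in-memory stand-in for a Service Bus topic with filtered subscriptions."""
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name, rule):
        # rule: a predicate over the message, analogous to a subscription filter.
        self.subscriptions[name] = {"rule": rule, "messages": []}

    def publish(self, message):
        # Each subscription whose filter matches receives its own copy.
        for sub in self.subscriptions.values():
            if sub["rule"](message):
                sub["messages"].append(message)

shipping = Topic()
shipping.subscribe("app2", lambda m: True)  # App2 sees every transaction
shipping.subscribe("express-app", lambda m: m["priority"] == "express")

shipping.publish({"order": 1, "priority": "standard"})
shipping.publish({"order": 2, "priority": "express"})

print(len(shipping.subscriptions["app2"]["messages"]))         # 2
print(len(shipping.subscriptions["express-app"]["messages"]))  # 1
```

With a single queue, only one consumer would receive each message; the topic lets future applications each add a subscription with its own filter, without changing App1.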

67
Q

You need to design a storage solution for an app that will store large amounts of frequently used data. The solution must meet the following requirements:
✑ Maximize data throughput.
✑ Prevent the modification of data for one year.
✑ Minimize latency for read and write operations.
Which Azure Storage account type and storage service should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: BlockBlobStorage -
Block Blob is a premium storage account type for block blobs and append blobs. Recommended for scenarios with high transactions rates, or scenarios that use smaller objects or require consistently low storage latency.

Box 2: Blob -
Blob storage supports immutability policies. A time-based retention policy prevents blobs from being modified or deleted for a specified interval (one year in this scenario), while the data remains in the online Hot tier of the premium block blob account, so read and write latency stays low and throughput is maximized. (The Archive tier is unsuitable here: it is an offline tier for rarely accessed data with high retrieval latency.)

68
Q

You have an Azure subscription that contains the storage accounts shown in the following table

You plan to implement two new apps that have the requirements shown in the following table

Which storage accounts should you recommend using for each app? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Choosing Storage Accounts for App1 and App2

For App1:
* Storage1 and Storage2 only

Explanation:

  • Lifecycle management is a feature that allows you to automatically transition blobs between storage tiers based on defined policies.
  • To utilize this feature, you need at least two storage tiers: one for hot storage (Storage2: Premium) and one for cold storage (Storage1: Standard).
  • Storage3 (BlobStorage) is not suitable for lifecycle management as it’s specifically designed for block blobs.
  • Storage4 (FileStorage) is not relevant for storing blobs.

For App2:
* Storage4 only

Explanation:

  • Azure file shares are used for storing files and are accessible through the SMB protocol.
  • Storage4 is the only file storage account among the given options, making it the ideal choice for App2.
  • Storage1, Storage2, and Storage3 are not designed for file storage.

In summary:

  • App1 should use Storage1 (Standard) and Storage2 (Premium) for lifecycle management.
  • App2 should use Storage4 (Premium File Storage) for storing app data in a file share.

By selecting these storage accounts, you ensure optimal performance, cost-efficiency, and data management for both applications.

69
Q

You are designing an application that will be hosted in Azure.
The application will host video files that range from 50 MB to 12 GB. The application will use certificate-based authentication and will be available to users on the internet.
You need to recommend a storage option for the video files. The solution must provide the fastest read performance and must minimize storage costs.
What should you recommend?

A. Azure Files
B. Azure Data Lake Storage Gen2
C. Azure Blob Storage
D. Azure SQL Database

A

Correct Answer: C

Blob Storage: Stores large amounts of unstructured data, such as text or binary data, that can be accessed from anywhere in the world via HTTP or HTTPS. You can use Blob storage to expose data publicly to the world, or to store application data privately.
The maximum size of a single block blob is approximately 4.75 TiB (up to about 190.7 TiB with the larger block sizes supported today), far larger than the 12 GB maximum video file.

70
Q

You are designing a SQL database solution. The solution will include 20 databases that will be 20 GB each and have varying usage patterns.
You need to recommend a database platform to host the databases. The solution must meet the following requirements:
✑ The solution must meet a Service Level Agreement (SLA) of 99.99% uptime.
✑ The compute resources allocated to the databases must scale dynamically.
✑ The solution must have reserved capacity.
✑ Compute charges must be minimized.

What should you include in the recommendation?

A. an elastic pool that contains 20 Azure SQL databases
B. 20 databases on a Microsoft SQL server that runs on an Azure virtual machine in an availability set
C. 20 databases on a Microsoft SQL server that runs on an Azure virtual machine
D. 20 instances of Azure SQL Database serverless

A

Correct Answer: A

Compute and storage redundancy is built in for Business Critical databases and elastic pools, with an SLA of 99.99%.
Reserved capacity provides you with the flexibility to temporarily move your hot databases in and out of elastic pools (within the same region and performance tier) as part of your normal operations without losing the reserved capacity benefit.

Reference

71
Q

You have an on-premises database that you plan to migrate to Azure.
You need to design the database architecture to meet the following requirements:
✑ Support scaling up and down.
✑ Support geo-redundant backups.
✑ Support a database of up to 75 TB.
✑ Be optimized for online transaction processing (OLTP).
What should you include in the design? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Box 1: Azure SQL Database -
Azure SQL Database:
Database size always depends on the underlying service tiers (e.g. Basic, Business Critical, Hyperscale).
It supports databases of up to 100 TB with Hyperscale service tier model.
Active geo-replication is a feature that lets you create a continuously synchronized readable secondary database for a primary database. The readable secondary database may be in the same Azure region as the primary or, more commonly, in a different region. These readable secondary databases are also known as geo-secondaries or geo-replicas.
Azure SQL Database and SQL Managed Instance enable you to dynamically add more resources to your database with minimal downtime.

Box 2: Hyperscale -
Incorrect Answers:
✑ SQL Server on Azure VM: geo-replication not supported.
✑ Azure Synapse Analytics is not optimized for online transaction processing (OLTP).
✑ Azure SQL Managed Instance max database size is up to currently available instance size (depending on the number of vCores).
Max instance storage size (reserved) - 2 TB for 4 vCores
- 8 TB for 8 vCores
- 16 TB for other sizes
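The managed instance limits listed above can be checked directly against the 75 TB requirement (the tier limits are hard-coded here from the answer text):

```python
# Max reserved instance storage for Azure SQL Managed Instance,
# as listed above (vCores -> TB); 16 TB covers "other sizes".
mi_max_storage_tb = {4: 2, 8: 8, 16: 16}

required_tb = 75
fits = any(limit >= required_tb for limit in mi_max_storage_tb.values())
print(fits)  # False: no managed instance size can hold a 75 TB database
```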

Reference
Reference 2

72
Q

You are planning an Azure IoT Hub solution that will include 50,000 IoT devices.
Each device will stream data, including temperature, device ID, and time data. Approximately 50,000 records will be written every second. The data will be visualized in near real time.
You need to recommend a service to store and query the data.
Which two services can you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

A. Azure Table Storage
B. Azure Event Grid
C. Azure Cosmos DB SQL API
D. Azure Time Series Insights

A

Correct Answer: CD

D: Time Series Insights is a fully managed service for time series data. In this architecture, Time Series Insights performs the roles of stream processing, data store, and analytics and reporting. It accepts streaming data from either IoT Hub or Event Hubs and stores, processes, analyzes, and displays the data in near real time.
C: The processed data is stored in an analytical data store, such as Azure Data Explorer, HBase, Azure Cosmos DB, Azure Data Lake, or Blob Storage.

Reference

73
Q

You are designing an application that will aggregate content for users.
You need to recommend a database solution for the application. The solution must meet the following requirements:
✑ Support SQL commands.
✑ Support multi-master writes.
✑ Guarantee low latency read operations.
What should you include in the recommendation?

A. Azure Cosmos DB SQL API
B. Azure SQL Database that uses active geo-replication
C. Azure SQL Database Hyperscale
D. Azure Database for PostgreSQL

A

Correct Answer: A

With Cosmos DB’s novel multi-region (multi-master) writes replication protocol, every region supports both writes and reads. The multi-region writes capability also enables:
Unlimited elastic write and read scalability.
99.999% read and write availability all around the world.
Guaranteed reads and writes served in less than 10 milliseconds at the 99th percentile.

Reference

74
Q

You have an Azure subscription that contains the SQL servers on Azure shown in the following table

The subscription contains the storage accounts shown in the following table

You create the Azure SQL databases shown in the following table

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: Yes -
Auditing works fine for a Standard account.

Box 2: No -
Auditing limitations: Premium storage is currently not supported, and BlockBlobStorage accounts are premium-only, so a general-purpose v2 (StorageV2) account is required.

Box 3: No -
Auditing limitations: Premium storage is currently not supported.

Reference

75
Q

Storage

You plan to import data from your on-premises environment to Azure. The data is shown in the following table

What should you recommend using to migrate the data? To answer, drag the appropriate tools to the correct data sources. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place

A

Answer

Box 1: Data Migration Assistant -
The Data Migration Assistant (DMA) helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality in your new version of SQL Server or Azure SQL Database. DMA recommends performance and reliability improvements for your target environment and allows you to move your schema, data, and uncontained objects from your source server to your target server.
Incorrect:
AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account.
Box 2: Azure Cosmos DB Data Migration Tool
The Azure Cosmos DB Data Migration Tool can be used to migrate a SQL Server database table to Azure Cosmos DB.

Reference
Reference 2

76
Q

You store web access logs data in Azure Blob Storage.
You plan to generate monthly reports from the access logs.
You need to recommend an automated process to upload the data to Azure SQL Database every month.
What should you include in the recommendation?

A. Microsoft SQL Server Migration Assistant (SSMA)
B. Data Migration Assistant (DMA)
C. AzCopy
D. Azure Data Factory

A

Correct Answer: D

You can create Data Factory pipelines that copies data from Azure Blob Storage to Azure SQL Database. The configuration pattern applies to copying from a file- based data store to a relational data store.
Required steps:
Create a data factory.
Create Azure Storage and Azure SQL Database linked services.
Create Azure Blob and Azure SQL Database datasets.
Create a pipeline that contains a Copy activity.
Start a pipeline run.
Monitor the pipeline and activity runs.
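The steps above translate into a pipeline definition with a single Copy activity. A minimal, illustrative sketch of that JSON shape as a Python dict — the dataset and activity names (`BlobLogsDataset`, `SqlReportTable`, `CopyLogsToSql`) are hypothetical, and exact source/sink type names vary by connector version:

```python
# Illustrative Data Factory pipeline body with one Copy activity.
# All names here are made up for this example.
pipeline = {
    "name": "MonthlyLogLoad",
    "properties": {
        "activities": [
            {
                "name": "CopyLogsToSql",
                "type": "Copy",
                "inputs": [{"referenceName": "BlobLogsDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "SqlReportTable", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},  # reads the blob logs
                    "sink": {"type": "AzureSqlSink"},           # writes to SQL Database
                },
            }
        ]
    },
}
print(pipeline["properties"]["activities"][0]["type"])  # Copy
```

A monthly schedule trigger attached to this pipeline satisfies the "every month" requirement without custom code.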

Reference

77
Q

You have an Azure subscription.
Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely.
You plan to copy the files to Azure Storage.
You need to implement a storage solution for the files that meets the following requirements:
✑ The files must be available within 24 hours of being requested.
✑ Storage costs must be minimized.
Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Create an Azure Blob Storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
B. Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
C. Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
D. Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
E. Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.

A

Correct Answer: AD

To minimize costs: The Archive tier is optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).
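The cost argument can be made concrete. With illustrative per-GB monthly prices (placeholders, not current Azure pricing), archiving the rarely accessed 5 TB is an order of magnitude cheaper than keeping it Hot:

```python
# Illustrative per-GB monthly storage prices (placeholders only,
# NOT current Azure pricing) to show the relative ordering.
PRICE_PER_GB = {"hot": 0.018, "cool": 0.010, "archive": 0.001}

data_gb = 5 * 1024  # the 5 TB of rarely accessed files
monthly_cost = {tier: round(price * data_gb, 2)
                for tier, price in PRICE_PER_GB.items()}
print(monthly_cost)  # {'hot': 92.16, 'cool': 51.2, 'archive': 5.12}
```

Standard-priority rehydration from Archive completes within roughly 15 hours, which fits the 24-hour availability requirement.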

Reference

78
Q

You have an app named App1 that uses two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to migrate DB1 and DB2 to Azure
You need to recommend an Azure solution to host DB1 and DB2. The solution must meet the following requirements:
✑ Support server-side transactions across DB1 and DB2.
✑ Minimize administrative effort to update the solution.
What should you recommend?
A. two Azure SQL databases in an elastic pool
B. two databases on the same Azure SQL managed instance
C. two databases on the same SQL Server instance on an Azure virtual machine
D. two Azure SQL databases on different Azure SQL Database servers

A

Correct Answer: B

Elastic database transactions for Azure SQL Database and Azure SQL Managed Instance allow you to run transactions that span several databases.
SQL Managed Instance enables system administrators to spend less time on administrative tasks because the service either performs them for you or greatly simplifies those tasks.

Reference

79
Q

You need to design a highly available Azure SQL database that meets the following requirements:
✑ Failover between replicas of the database must occur without any data loss.
✑ The database must remain available in the event of a zone outage.
✑ Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Database Hyperscale
B. Azure SQL Database Premium
C. Azure SQL Database Basic
D. Azure SQL Managed Instance General Purpose

A

Correct Answer: B

Azure SQL Database Premium tier supports multiple redundant replicas for each database that are automatically provisioned in the same datacenter within a region. This design leverages SQL Server Always On technology and provides resilience to server failures with a 99.99% availability SLA and RPO=0.
With the introduction of Azure Availability Zones, we are happy to announce that SQL Database now offers built-in support of Availability Zones in its Premium service tier.
Incorrect:
Not A: Hyperscale is more expensive than Premium.
Not C: Need Premium for Availability Zones.
Not D: Zone redundant configuration that is free on Azure SQL Premium is not available on Azure SQL Managed Instance.

Reference

80
Q

You are designing a data storage solution to support reporting.
The solution will ingest high volumes of data in the JSON format by using Azure Event Hubs. As the data arrives, Event Hubs will write the data to storage. The solution must meet the following requirements:
✑ Organize data in directories by date and time.
✑ Allow stored data to be queried directly, transformed into summarized tables, and then stored in a data warehouse.
✑ Ensure that the data warehouse can store 50 TB of relational data and support between 200 and 300 concurrent read operations.
Which service should you recommend for each type of data store? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Box 1: Azure Data Lake Storage Gen2
Azure Data Explorer integrates with Azure Blob Storage and Azure Data Lake Storage (Gen1 and Gen2), providing fast, cached, and indexed access to data stored in external storage. You can analyze and query data without prior ingestion into Azure Data Explorer. You can also query across ingested and uningested external data simultaneously.
Azure Data Lake Storage is optimized storage for big data analytics workloads.
Use cases: Batch, interactive, streaming analytics and machine learning data such as log files, IoT data, click streams, large datasets
Box 2: Azure SQL Database Hyperscale
Azure SQL Database Hyperscale is optimized for OLTP and high throughput analytics workloads with storage up to 100TB.
A Hyperscale database supports up to 100 TB of data and provides high throughput and performance, as well as rapid scaling to adapt to the workload requirements. Connectivity, query processing, database engine features, etc. work like any other database in Azure SQL Database.
Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload.
Compare to:
General purpose: 500 IOPS per vCore with 7,000 maximum IOPS
Business critical: 5,000 IOPS with 200,000 maximum IOPS
Incorrect:
* Azure Synapse Analytics dedicated SQL pool: max database size is 240 TB, but a maximum of 128 concurrent queries will execute and the remaining queries are queued, which cannot satisfy 200 to 300 concurrent read operations.
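The concurrency limit alone rules out the dedicated SQL pool here; a one-line check against the stated requirement:

```python
# Requirement from the question: 200-300 concurrent read operations.
# Dedicated SQL pool limit from the answer text: 128 concurrent queries.
required_min = 200
dedicated_pool_limit = 128

print(dedicated_pool_limit >= required_min)  # False -> excess queries queue
```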

Data Lake Query Data
Service Tier Hyperscale
Sql Data warehouse service capacity limits

81
Q

You have an app named App1 that uses an on-premises Microsoft SQL Server database named DB1.

You plan to migrate DB1 to an Azure SQL managed instance.

You need to enable customer managed Transparent Data Encryption (TDE) for the instance. The solution must maximize encryption strength.

Which type of encryption algorithm and key length should you use for the TDE protector?

A. RSA 3072
B. AES 256
C. RSA 4096
D. RSA 2048

A

Correct Answer: A

A. RSA 3072

RSA 3072 provides a higher level of encryption strength compared to RSA 2048. While RSA 4096 offers even stronger encryption, it is not supported by Azure SQL Database and Azure SQL Managed Instance for TDE protectors.
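The relative strength of the candidate key sizes can be framed as estimated symmetric-equivalent security bits. The figures below are rounded assumptions in the spirit of the NIST SP 800-57 estimates, not exact values:

```python
# Rounded symmetric-equivalent security estimates for RSA key sizes
# (assumed values, in the spirit of NIST SP 800-57).
rsa_security_bits = {2048: 112, 3072: 128, 4096: 140}

# Only 2048 and 3072 are accepted for the TDE protector, so pick the
# strongest supported size.
supported = [2048, 3072]
best = max(supported, key=lambda size: rsa_security_bits[size])
print(best)  # 3072
```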

By choosing RSA 3072 for the TDE protector, you ensure strong encryption for your Azure SQL Managed Instance while complying with the platform’s requirements. This will help protect sensitive data and maintain compliance with relevant security standards and regulations.

Transparent Data Encription

82
Q

You are planning an Azure IoT Hub solution that will include 50,000 IoT devices.

Each device will stream data, including temperature, device ID, and time data. Approximately 50,000 records will be written every second. The data will be visualized in near real time.

You need to recommend a service to store and query the data.

Which two services can you recommend? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Azure Table Storage
B. Azure Event Grid
C. Azure Cosmos DB for NoSQL
D. Azure Time Series Insights

A

Correct Answer: CD

A. Azure Table Storage -> Throughput: scalability limit of 20,000 operations/s. -> Not enough for this question
B. Azure Event Grid -> It is only a broker, not a storage solution
Therefore, C and D are right
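The throughput argument against option A can be checked directly (the 20,000 operations/s account target is taken from the answer text above):

```python
# Ingest requirement vs. the Table Storage scalability target quoted above.
ingest_rate = 50_000          # records written per second
table_storage_limit = 20_000  # max entity operations per second per account

print(ingest_rate <= table_storage_limit)  # False: Table Storage can't keep up
```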

Cosmos DB Table
Event Grid Overview

83
Q

You are planning an Azure Storage solution for sensitive data. The data will be accessed daily. The dataset is less than 10 GB.

You need to recommend a storage solution that meets the following requirements:

  • All the data written to storage must be retained for five years.
  • Once the data is written, the data can only be read. Modifications and deletion must be prevented.
  • After five years, the data can be deleted, but never modified.
  • Data access charges must be minimized.

What should you recommend? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

  1. Correct: a general-purpose v2 account with the Hot access tier for blobs (the data is accessed daily, so Hot minimizes data access charges).
  2. A container-level immutability (time-based retention) policy. A resource lock does not prevent modification or deletion of individual blobs; it only prevents deletion of the resource itself at the management plane.
84
Q

You are designing a data analytics solution that will use Azure Synapse and Azure Data Lake Storage Gen2.

You need to recommend Azure Synapse pools to meet the following requirements:

  • Ingest data from Data Lake Storage into hash-distributed tables.
  • Implement query, and update data in Delta Lake.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Recommended Azure Synapse Pools

Box 1: To ingest data from Data Lake Storage into hash-distributed tables:
* A dedicated SQL pool

Explanation:
* Hash-distributed tables are a feature of dedicated SQL pools only; serverless SQL pools and Apache Spark pools do not support them.
* Data can be loaded from Data Lake Storage into a dedicated SQL pool by using PolyBase or the COPY INTO statement.

Box 2: To implement, query, and update data in Delta Lake:
* A serverless Apache Spark pool

Explanation:
* Apache Spark pools in Azure Synapse provide full read and write support for Delta Lake tables, including inserts, updates, deletes, and merges.
* A serverless SQL pool can query Delta Lake files but cannot update them, so it does not satisfy the update requirement.

Additional Considerations:
* A serverless SQL pool remains useful for ad hoc, read-only exploration of the Delta Lake data alongside the Spark pool.

By pairing a dedicated SQL pool for ingestion into hash-distributed tables with an Apache Spark pool for Delta Lake workloads, you cover both requirements.

85
Q

You have an on-premises storage solution.

You need to migrate the solution to Azure. The solution must support Hadoop Distributed File System (HDFS).

What should you use?

A. Azure Data Lake Storage Gen2
B. Azure NetApp Files
C. Azure Data Share
D. Azure Table storage

A

Correct Answer: A

A. Azure Data Lake Storage Gen2

Azure Data Lake Storage Gen2 is the best choice for migrating your on-premises storage solution to Azure with support for Hadoop Distributed File System (HDFS). It is a highly scalable and cost-effective storage service designed for big data analytics, providing integration with Azure HDInsight, Azure Databricks, and other Azure services. It is built on Azure Blob Storage and combines the advantages of HDFS with Blob Storage, offering a hierarchical file system, fine-grained security, and high-performance analytics.

86
Q

You have an on-premises app named App1.

Customers use App1 to manage digital images.

You plan to migrate App1 to Azure.

You need to recommend a data storage solution for App1. The solution must meet the following image storage requirements:

  • Encrypt images at rest.
  • Allow files up to 50 MB.
  • Manage access to the images by using Azure Web Application Firewall (WAF) on Azure Front Door.

The solution must meet the following customer account requirements:

  • Support automatic scale out of the storage.
  • Maintain the availability of App1 if a datacenter fails.
  • Support reading and writing data from multiple Azure regions.

Which service should you include in the recommendation for each type of data? To answer, drag the appropriate services to the correct type of data. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct answer is worth one point.
Drag Drop

A

Answer

Box 1 - Image storage: A. Azure Blob Storage

Azure Blob Storage is a suitable choice for storing digital images, as it supports encryption at rest, handles large file sizes (up to 50 MB or even larger), and can be used in conjunction with Azure Web Application Firewall (WAF) on Azure Front Door.

Box 2 - Customer accounts: B. Azure Cosmos DB

Azure Cosmos DB is a highly scalable, globally distributed, multi-model database service that supports automatic scale-out, ensures high availability even in the event of a datacenter failure, and allows for reading and writing data from multiple Azure regions. This makes it an ideal choice for storing customer account data in your scenario.

87
Q

You are designing an application that will aggregate content for users.

You need to recommend a database solution for the application. The solution must meet the following requirements:

  • Support SQL commands.
  • Support multi-master writes.
  • Guarantee low latency read operations.

What should you include in the recommendation?

A. Azure Cosmos DB for NoSQL
B. Azure SQL Database that uses active geo-replication
C. Azure SQL Database Hyperscale
D. Azure Cosmos DB for PostgreSQL

A

Correct Answer: A

A. Azure Cosmos DB for NoSQL

Azure Cosmos DB is a globally distributed, multi-model database service that supports SQL commands, multi-master writes, and guarantees low latency read operations. It supports a variety of NoSQL data models including document, key-value, graph, and column-family. Azure Cosmos DB provides automatic and instant scalability, high availability, and low latency globally by replicating and synchronizing data across multiple Azure regions.

On the other hand, Azure SQL Database and Azure SQL Database Hyperscale are traditional relational database services that do not natively support multi-master writes.

88
Q

You plan to migrate on-premises MySQL databases to Azure Database for MySQL Flexible Server.

You need to recommend a solution for the Azure Database for MySQL Flexible Server configuration. The solution must meet the following requirements:

  • The databases must be accessible if a datacenter fails.
  • Costs must be minimized.

Which compute tier should you recommend?

A. Burstable
B. General Purpose
C. Memory Optimized

A

Correct Answer: B

B. General Purpose

The General Purpose compute tier provides a balance between performance and cost. It is suitable for most common workloads and offers a good combination of CPU and memory resources. It provides high availability and fault tolerance by utilizing Azure’s infrastructure across multiple datacenters. This ensures that the databases remain accessible even if a datacenter fails.

The Burstable compute tier (option A) is designed for workloads with variable or unpredictable usage patterns. It does not support zone-redundant high availability, so it cannot keep the databases accessible if a datacenter fails.

The Memory Optimized compute tier (option C) is designed for memory-intensive workloads that require high memory capacity. While it provides excellent performance for memory-bound workloads, it may not be necessary for minimizing costs or meeting the specified requirements.

89
Q

You are designing an app that will use Azure Cosmos DB to collate sales from multiple countries.

You need to recommend an API for the app. The solution must meet the following requirements:

  • Support SQL queries.
  • Support geo-replication.
  • Store and access data relationally.

Which API should you recommend?

A. Apache Cassandra
B. PostgreSQL
C. MongoDB
D. NoSQL

A

Correct Answer: B

Choose Api

Store data relationally:
- NoSQL stores data in document format
- MongoDB stores data in a document structure (BSON format)

Support SQL Queries:
- Apache Cassandra uses Cassandra Query Language (CQL)

If you’re looking for a managed open source relational database with high performance and geo-replication, Azure Cosmos DB for PostgreSQL is the recommended choice.

90
Q

You have an app that generates 50,000 events daily.

You plan to stream the events to an Azure event hub and use Event Hubs Capture to implement cold path processing of the events. The output of Event Hubs Capture will be consumed by a reporting system.

You need to identify which type of Azure storage must be provisioned to support Event Hubs Capture, and which inbound data format the reporting system must support.

What should you identify? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

A

Answer

Storage Type: Event Hubs Capture allows captured data to be written to either Azure Blob Storage or Azure Data Lake Storage Gen2. However, for cold path processing scenarios, which involve analyzing historical data, Azure Data Lake Storage Gen2 is the more suitable choice. It’s designed for big data analytics workloads and offers better performance and scalability for working with large datasets captured from event hubs.

Inbound Data Format: Event Hubs Capture uses Avro format for the captured data. Avro is a widely used open-source data format specifically designed for data exchange. It’s a row-oriented, binary format that provides rich data structures with inline schema definition. This makes it efficient for storage and easy for various analytics tools and reporting systems to understand and process the captured event data.
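Each Avro file produced by Event Hubs Capture wraps the event payload in a fixed record envelope. A sketch of that record shape — the field names follow the documented Capture schema, while the sample values are made up:

```python
import json

# Shape of one record in an Event Hubs Capture Avro file. Field names
# follow the documented Capture schema; the sample values are invented.
capture_record = {
    "SequenceNumber": 42,                       # long
    "Offset": "12345",                          # string
    "EnqueuedTimeUtc": "1/1/2024 12:00:00 AM",  # string
    "SystemProperties": {},                     # map
    "Properties": {},                           # map
    "Body": b'{"deviceId": "d1", "temp": 21.5}',  # the event payload, as bytes
}

# The reporting system decodes the Body field after reading the Avro record.
payload = json.loads(capture_record["Body"])
print(payload["deviceId"])  # d1
```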

91
Q

You have the resources shown in the following table.

CDB1 hosts a container that stores continuously updated operational data.

You are designing a solution that will use AS1 to analyze the operational data daily.

You need to recommend a solution to analyze the data without affecting the performance of the operational data store.

What should you include in the recommendation?

A. Azure Data Factory with Azure Cosmos DB and Azure Synapse Analytics connectors
B. Azure Synapse Analytics with PolyBase data loading
C. Azure Synapse Link for Azure Cosmos DB
D. Azure Cosmos DB change feed

A

The correct answer is C. Azure Synapse Link for Azure Cosmos DB.

Azure Synapse Link for Azure Cosmos DB creates a tight integration between Azure Cosmos DB and Azure Synapse Analytics, allowing you to run near real-time analytics over operational data in Azure Cosmos DB. It creates a “no-ETL” (Extract, Transform, Load) environment that allows you to analyze data directly without affecting the performance of the transactional workload, which is exactly what is required in this scenario.

A. Azure Data Factory with Azure Cosmos DB and Azure Synapse Analytics connectors would require ETL operations which might impact the performance of the operational data store.

B. Azure Synapse Analytics with PolyBase data loading is more appropriate for loading data from external data sources such as Azure Blob Storage or Azure Data Lake Storage.

D. Azure Cosmos DB change feed doesn’t directly address the need for analytics without affecting the performance of the operational data store.

Cosmos DB - Synapse Link

92
Q

You have an Azure subscription. The subscription contains an Azure SQL managed instance that stores employee details, including social security numbers and phone numbers.

You need to configure the managed instance to meet the following requirements:

  • The helpdesk team must see only the last four digits of an employee’s phone number.
  • Cloud administrators must be prevented from seeing the employee’s social security numbers.

What should you enable for each column in the managed instance? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer Area

Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal with minimal effect on the application layer.
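The helpdesk requirement maps to dynamic data masking's partial masking function, which reveals only a chosen suffix (in T-SQL the rule would look like `partial(0, "XXX-XXX-", 4)`). A minimal Python sketch that mimics the effect:

```python
# Mimics the effect of dynamic data masking's partial() rule:
# reveal only the last 4 digits of the phone number.
def mask_phone(phone: str) -> str:
    return "XXX-XXX-" + phone[-4:]

print(mask_phone("555-123-4567"))  # XXX-XXX-4567
```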

Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national/regional identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database, Azure SQL Managed Instance, and SQL Server databases.

93
Q

You plan to use an Azure Storage account to store data assets.

You need to recommend a solution that meets the following requirements:

  • Supports immutable storage
  • Disables anonymous access to the storage account
  • Supports access control list (ACL)-based Azure AD permissions

What should you include in the recommendation?

A. Azure Files
B. Azure Data Lake Storage
C. Azure NetApp Files
D. Azure Blob Storage

A

Correct Answer: B

The correct answer is B. Azure Data Lake Storage.
Here’s a breakdown of why Azure Data Lake Storage is the best fit based on the given requirements:
Supports immutable storage:
* Azure Data Lake Storage Gen2 offers immutable storage through features like:
* Append-only writes: Once data is written to a file, it cannot be modified.
* Time-based retention policies: Files can be retained for a specified duration, preventing accidental deletion or modification.
* Legal hold: Files can be placed on legal hold, restricting any changes or deletions.
Disables anonymous access to the storage account:
* Azure Data Lake Storage Gen2 allows you to configure network rules and access control lists (ACLs) to strictly control who can access the storage account. You can disable public access entirely, ensuring that only authorized users can interact with the data.
Supports access control list (ACL)-based Azure AD permissions:
* Azure Data Lake Storage Gen2 integrates with Azure Active Directory (AD) to provide granular access control. You can use ACLs to assign permissions to individual users, groups, or service principals, allowing fine-grained control over who can access and modify data within the storage account.
Additional considerations:
* Azure Files: While Azure Files supports ACL-based permissions, it doesn’t offer immutable storage or the same level of granular access control as Azure Data Lake Storage Gen2.
* Azure NetApp Files: Azure NetApp Files is primarily designed for enterprise-grade file shares and doesn’t offer immutable storage or the same level of integration with Azure AD as Azure Data Lake Storage Gen2.
* Azure Blob Storage: Blob Storage does support immutable storage and can disable anonymous access, but ACL-based Azure AD permissions require the hierarchical namespace that only Azure Data Lake Storage Gen2 provides.
By choosing Azure Data Lake Storage, you can ensure that your data assets are stored securely, with strict control over who can access and modify them, while also benefiting from the immutability features to protect against accidental or malicious data changes.

Azure Data Lake storage comparison with Azure Blob storage

94
Q

You are designing a storage solution that will ingest, store, and analyze petabytes (PBs) of structured, semi-structured, and unstructured text data. The analyzed data will be offloaded to Azure Data Lake Storage Gen2 for long-term retention.

You need to recommend a storage and analytics solution that meets the following requirements:
* Stores the processed data
* Provides interactive analytics
* Supports manual scaling, built-in autoscaling, and custom autoscaling

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hotspot

A

Answer

Recommendation:

  1. For storage and interactive analytics: Azure Data Explorer
  • Azure Data Explorer is optimized for rapid ingestion and querying of large volumes of data, making it suitable for petabyte-scale data processing.
  • It supports a variety of data formats, including structured, semi-structured, and unstructured text data.
  • It offers interactive query capabilities, allowing for rapid exploration and analysis of data.
  • It provides built-in autoscaling and supports manual and custom scaling to handle varying workloads.
  2. Query language: KQL (Kusto Query Language)
  • KQL is specifically designed for Azure Data Explorer and offers powerful capabilities for querying and analyzing large datasets.
  • It provides a rich set of functions and operators for data manipulation and exploration.

Explanation:

  • Azure Data Explorer’s high performance, scalability, and support for various data formats make it an ideal choice for storing and analyzing petabytes of text data.
  • KQL provides the necessary tools for efficient and interactive data exploration within Azure Data Explorer.

By combining Azure Data Explorer and KQL, you can effectively ingest, store, analyze, and offload petabytes of text data to Azure Data Lake Storage Gen2 for long-term retention.

95
Q

You plan to use Azure SQL as a database platform.

You need to recommend an Azure SQL product and service tier that meets the following requirements:
* Automatically scales compute resources based on the workload demand
* Provides per second billing

What should you recommend? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

The correct options to select are:

  • Azure SQL product: A single Azure SQL database
  • Service tier: General Purpose (serverless compute tier)

Here’s why:

  • A single Azure SQL database is the most appropriate choice for this scenario, as it provides a fully managed database service that can be scaled to meet your specific needs.
  • The serverless compute tier, offered in the General Purpose service tier, automatically scales compute resources based on workload demand and can pause the database during inactive periods. This ensures optimal performance and cost-efficiency.
  • Serverless compute is billed per second based on the vCores actually used, resulting in accurate and granular billing. Provisioned compute tiers, by contrast, bill for the provisioned capacity regardless of demand.

The other options are not as suitable:

  • An Azure SQL Database elastic pool shares a provisioned set of resources across multiple databases; it does not automatically scale compute for a single database on demand or bill per second.
  • Azure SQL Managed Instance is better suited for migrating on-premises SQL Server workloads to Azure and does not offer a serverless, per-second-billed compute tier.
  • The remaining service tiers (Basic, Standard, Business Critical, and Hyperscale) use provisioned compute, which neither scales automatically with the workload nor bills per second.
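Per-second billing in Azure SQL Database comes from the serverless compute tier, which bills for the vCores used each second between a configured minimum and maximum. That behavior can be sketched with a simplified model (hypothetical: real serverless billing also accounts for memory usage, and the unit price below is a placeholder):

```python
def serverless_compute_bill(cpu_samples, min_vcores, max_vcores, price_per_vcore_second):
    """Simplified Azure SQL serverless billing model.

    For each one-second sample, the billed vCores are the CPU actually
    used, floored at the configured minimum and capped at the maximum.
    """
    billed_vcore_seconds = sum(
        min(max(cpu, min_vcores), max_vcores) for cpu in cpu_samples
    )
    return billed_vcore_seconds * price_per_vcore_second
```

For example, with a 0.5-4 vCore configuration, one-second samples of 0.2, 1.0, and 5.0 vCores bill as 0.5, 1.0, and 4.0 vCore-seconds respectively.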
96
Q

Q34 T2

You have an Azure subscription.

You need to deploy a solution that will provide point-in-time restore for blobs in storage accounts that have blob versioning and blob soft delete enabled.

Which type of blob should you create, and what should you enable for the accounts? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

A

Answer

Blob Type: Block
Enable: The change feed

By selecting block blobs and enabling the change feed, you complete the configuration required for point-in-time restore, given that blob versioning and blob soft delete are already enabled.

Explanation:

Point-in-time restore applies only to block blobs in standard general-purpose v2 storage accounts, and it requires three features on the account: blob soft delete, blob versioning, and the change feed. The change feed records every change made to the blobs so those changes can be reverted to restore containers to an earlier point in time. Because versioning and soft delete are already enabled in this scenario, the change feed is the remaining feature to enable.
The other options are not related to point-in-time restore: a stored access policy controls shared access signature permissions, immutable blob storage prevents any modification, and object replication copies blobs between accounts.
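The prerequisite check for point-in-time restore can be expressed as a small sketch (the feature names are descriptive labels for account settings, not SDK identifiers):

```python
# Point-in-time restore for block blobs requires all three account features.
PITR_PREREQUISITES = {"blob soft delete", "blob versioning", "change feed"}

def missing_for_point_in_time_restore(enabled_features):
    """Return the features that must still be enabled before
    point-in-time restore can be turned on for the account."""
    return PITR_PREREQUISITES - set(enabled_features)
```

In this scenario, versioning and soft delete are already on, so the check returns only the change feed.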

97
Q

Q35 T2

Your company, named Contoso, Ltd., has an Azure subscription that contains the following resources:

  • An Azure Synapse Analytics workspace named contosoworkspace1
  • An Azure Data Lake Storage account named contosolake1
  • An Azure SQL database named contososql1

The product data of Contoso is copied from contososql1 to contosolake1.

Contoso has a partner company named Fabrikam Inc. Fabrikam has an Azure subscription that contains the following resources:

  • A virtual machine named FabrikamVM1 that runs Microsoft SQL Server 2019
  • An Azure Storage account named fabrikamsa1

Contoso plans to upload the research data on FabrikamVM1 to contosolake1. During the upload, the research data must be transformed to the data formats used by Contoso.

The data in contosolake1 will be analyzed by using contosoworkspace1.

You need to recommend a solution that meets the following requirements:

  • Upload and transform the FabrikamVM1 research data.
  • Provide Fabrikam with restricted access to snapshots of the data in contosoworkspace1.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

For ETL operations (uploading and transforming the FabrikamVM1 research data), use Azure Synapse Analytics pipelines; Azure Synapse pipelines are based on Azure Data Factory and can copy and transform the data as it is ingested into contosolake1.

For restricted access use Azure Data Share:
Azure Data Share enables organizations to securely share data with multiple customers and partners. Data providers are always in control of the data that they’ve shared and Azure Data Share makes it simple to manage and monitor what data was shared, when and by whom.
In this case snapshot-based sharing should be used

98
Q

Q36 T2

You are designing a data pipeline that will integrate large amounts of data from multiple on-premises Microsoft SQL Server databases into an analytics platform in Azure. The pipeline will include the following actions:

  • Database updates will be exported periodically into a staging area in Azure Blob storage.
  • Data from the blob storage will be cleansed and transformed by using a highly parallelized load process.
  • The transformed data will be loaded to a data warehouse.
  • Each batch of updates will be used to refresh an online analytical processing (OLAP) model in a managed serving layer.
  • The managed serving layer will be used by thousands of end users.

You need to implement the data warehouse and serving layers.

What should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Data Warehouse: An Azure Synapse Analytics dedicated SQL pool
Serving Layer: Azure Analysis Services

The prompt asks us to select the appropriate options for implementing the data warehouse and serving layer in an Azure data pipeline.

Data Warehouse:

  • An Azure Synapse Analytics dedicated SQL pool (the selected answer): This is the best fit for a data warehouse loaded by a highly parallelized process. Its massively parallel processing (MPP) architecture is optimized for analytical SQL queries and can be scaled to meet the needs of large organizations.
  • An Apache Spark pool in Azure Synapse Analytics: Spark is a distributed computing framework well suited to cleansing and transforming large datasets, but it is a processing engine rather than a relational data warehouse for serving the loaded data.
  • Azure Data Lake Analytics: This pay-as-you-go job service supports ad-hoc analysis of data in place, but it is not a data warehouse for storing the transformed data.

Serving Layer:

  • Azure Analysis Services: This is a good choice for serving OLAP models to thousands of end users. It is a fully managed service that provides high performance and scalability.
  • An Apache Spark pool in Azure Synapse Analytics: While Apache Spark can be used for serving OLAP models, it is not as optimized for this purpose as Azure Analysis Services.
  • An Azure Synapse Analytics dedicated SQL pool: While a dedicated SQL pool can be used for serving OLAP models, it is not as optimized for this purpose as Azure Analysis Services.

This combination will provide high performance, scalability, and flexibility for both the data warehouse and serving layer.

99
Q

Q37 T2

You have an Azure subscription.

You need to deploy a relational database. The solution must meet the following requirements:

  • Support multiple read-only replicas.
  • Automatically load balance read-only requests across all the read-only replicas.
  • Minimize administrative effort

What should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Azure SQL product: A single Azure SQL database
Service tier: Hyperscale

As part of the requirements (support multiple read-only replicas and automatically load balance read-only requests across them), Hyperscale is the right choice: it supports multiple read-only replicas, and connections that specify ApplicationIntent=ReadOnly are automatically load balanced across them. The Business Critical tier provides only one readable secondary endpoint, so it cannot meet these requirements.
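Read-only routing is driven by the connection string: sessions that declare ApplicationIntent=ReadOnly are sent to a read-only replica. A hedged sketch of building such a connection string (the helper function, server, and database names are illustrative):

```python
def sql_connection_string(server, database, read_only=False):
    """Build an ODBC-style connection string for Azure SQL.

    Setting ApplicationIntent=ReadOnly asks the gateway to route the
    session to one of the read-only replicas instead of the primary.
    """
    parts = [
        "Driver={ODBC Driver 18 for SQL Server}",
        "Server=tcp:%s,1433" % server,
        "Database=%s" % database,
        "Encrypt=yes",
    ]
    if read_only:
        parts.append("ApplicationIntent=ReadOnly")
    return ";".join(parts)
```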

100
Q

You have an app named App1 that uses an Azure Blob Storage container named app1data.

App1 uploads a cumulative transaction log file named File1.txt to a block blob in app1data once every hour. File1.txt only stores transaction data from the current day.

You need to ensure that you can restore the last uploaded version of File1.txt from any day for up to 30 days after the file was overwritten. The solution must minimize storage space.

What should you include in the solution?

A. container soft delete
B. blob snapshots
C. blob soft delete
D. blob versioning

A

Correct Answer: D

Justification:

Blob Versioning: Automatically captures the previous version of a blob each time it is overwritten, so the last uploaded state of File1.txt from any day can be restored. A lifecycle management rule that deletes versions older than 30 days keeps only the required history.

Storage Efficiency: For block blobs, versions share unchanged blocks with the base blob, so only changed data consumes additional storage.
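A toy model of how versioning combined with a 30-day retention rule satisfies the requirement (illustrative only, not the Azure Storage SDK):

```python
from datetime import datetime, timedelta

class VersionedBlob:
    """Toy model of blob versioning plus a lifecycle rule that deletes
    versions older than the retention window."""

    def __init__(self, retention_days=30):
        self.retention = timedelta(days=retention_days)
        self.versions = []  # list of (timestamp, content) pairs

    def upload(self, when, content):
        # Each overwrite automatically captures a new version.
        self.versions.append((when, content))
        # The lifecycle rule prunes versions outside the retention window.
        cutoff = when - self.retention
        self.versions = [v for v in self.versions if v[0] >= cutoff]

    def restore_last_of_day(self, day):
        """Return the last version uploaded on the given calendar day."""
        same_day = [c for t, c in self.versions if t.date() == day.date()]
        return same_day[-1] if same_day else None
```

Each hourly upload becomes a version, so the last version of any given day remains restorable until the 30-day window has passed.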

101
Q

You have 12 on-premises data sources that contain customer information and consist of Microsoft SQL Server, MySQL, and Oracle databases.

You have an Azure subscription.

You plan to create an Azure Data Lake Storage account that will consolidate the customer information for analysis and reporting.

You need to recommend a solution to automatically copy new information from the data sources to the Data Lake Storage account by using extract, transform and load (ETL). The solution must minimize administrative effort.

What should you include in the recommendation?

A. Azure Data Factory
B. Azure Data Explorer
C. Azure Data Share
D. Azure Data Studio

A

Correct Answer: A

The correct answer is A. Azure Data Factory.
Here’s why:
* Azure Data Factory is a fully managed cloud-based ETL service that allows you to automate data movement and transformation between on-premises and cloud data sources.
* It provides a drag-and-drop interface for creating and managing data pipelines, making it easy to design and automate the ETL process.
* Azure Data Factory supports various data sources, including Microsoft SQL Server, MySQL, and Oracle databases, making it compatible with your on-premises data sources.
* It offers built-in data transformation capabilities, allowing you to clean, filter, and aggregate data before loading it into the Data Lake Storage account.
* Azure Data Factory also supports scheduling and monitoring of data pipelines, ensuring that data is copied consistently and reliably.
Here’s a breakdown of why the other options are not as suitable:
* B. Azure Data Explorer: It’s a fast and highly scalable data exploration service that is optimized for ad-hoc queries on large datasets. While it can be used to analyze data in the Data Lake Storage account, it doesn’t provide the ETL capabilities needed to automatically copy data from on-premises sources.
* C. Azure Data Share: It’s a service that allows you to share data between Azure subscriptions and external parties. While it can be used to share data from the Data Lake Storage account, it doesn’t provide the ETL capabilities needed to copy data from on-premises sources.
* D. Azure Data Studio: It’s a SQL Server management tool that provides a graphical interface for managing and querying SQL Server databases. While it can be used to extract data from SQL Server databases, it doesn’t provide the ETL capabilities needed to automate data movement and transformation between different data sources.
Therefore, based on the requirements of automating data copying from on-premises data sources to the Data Lake Storage account, Azure Data Factory is the most suitable solution. It offers a comprehensive set of features for ETL, including data source support, data transformation, scheduling, and monitoring, while minimizing administrative effort.

Introduction to Azure Data Factory

102
Q

Q1 T3

You have SQL Server on an Azure virtual machine. The databases are written to nightly as part of a batch process.
You need to recommend a disaster recovery solution for the data. The solution must meet the following requirements:
✑ Provide the ability to recover in the event of a regional outage.
✑ Support a recovery time objective (RTO) of 15 minutes.
✑ Support a recovery point objective (RPO) of 24 hours.
✑ Support automated recovery.
✑ Minimize costs.
What should you include in the recommendation?

A. Azure virtual machine availability sets
B. Azure Disk Backup
C. an Always On availability group
D. Azure Site Recovery

A

Correct Answer: D

Replication with Azure Site Recovery:
✑ RTO is typically less than 15 minutes.
✑ RPO: One hour for application consistency and five minutes for crash consistency.
Incorrect Answers:
B: Too slow.
C: Always On availability group RPO: Because replication to the secondary replica is asynchronous, there’s some data loss.
Reference
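The elimination logic can be sketched as a filter over the answer choices. The RTO/RPO figures and relative cost ranks below are rough, illustrative values chosen to reflect the explanation above, not official numbers:

```python
# Each option: can it recover from a regional outage, typical RTO (minutes),
# typical RPO (hours), automated recovery, and a relative cost rank.
OPTIONS = [
    {"name": "Availability sets", "regional": False, "rto_min": 0, "rpo_h": 0, "automated": True, "cost": 1},
    {"name": "Azure Disk Backup", "regional": True, "rto_min": 720, "rpo_h": 24, "automated": False, "cost": 1},
    {"name": "Always On availability group", "regional": True, "rto_min": 5, "rpo_h": 0, "automated": True, "cost": 3},
    {"name": "Azure Site Recovery", "regional": True, "rto_min": 15, "rpo_h": 1, "automated": True, "cost": 2},
]

def recommend(options):
    """Pick the cheapest option meeting every stated requirement:
    regional recovery, RTO <= 15 min, RPO <= 24 h, automated recovery."""
    viable = [o for o in options
              if o["regional"] and o["rto_min"] <= 15
              and o["rpo_h"] <= 24 and o["automated"]]
    return min(viable, key=lambda o: o["cost"])["name"]
```

With these assumptions, an Always On availability group also meets the requirements but at higher cost (it needs an always-running secondary), so Site Recovery wins on the cost criterion.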

103
Q

Q2 T3

You plan to deploy the backup policy shown in the following exhibit

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Hot Area

104
Q

Q3 T3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
✑ Provide access to the full .NET framework.
✑ Provide redundancy if an Azure region fails.
✑ Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy two Azure virtual machines to two Azure regions, and you create an Azure Traffic Manager profile.
Does this meet the goal?
A. Yes
B. No

A

Correct Answer: A

Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness.

105
Q

Q4 T3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
✑ Provide access to the full .NET framework.
✑ Provide redundancy if an Azure region fails.
✑ Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy two Azure virtual machines to two Azure regions, and you deploy an Azure Application Gateway.
Does this meet the goal?

A. Yes
B. No

A

Correct Answer: B

App Gateway will balance the traffic between VMs deployed in the same region. Create an Azure Traffic Manager profile instead.

While Azure Application Gateway is a powerful tool for handling application traffic at the application layer and can assist with routing, load balancing, and other functions, it operates within a single region. It doesn’t automatically provide geo-redundancy across multiple Azure regions.

For redundancy across regions, Azure Traffic Manager or Azure Front Door would be more suitable. They operate at the DNS level and are designed to route traffic across different regions for high availability and failover purposes.

So, in this case, deploying two Azure virtual machines to two Azure regions and deploying an Azure Application Gateway would not fully meet the stated goals due to the lack of a regional failover strategy.

106
Q

Q5 T3

You plan to create an Azure Storage account that will host file shares. The shares will be accessed from on-premises applications that are transaction intensive.
You need to recommend a solution to minimize latency when accessing the file shares. The solution must provide the highest-level of resiliency for the selected storage tier.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Box 1: Premium -
Premium: Premium file shares are backed by solid-state drives (SSDs) and provide consistent high performance and low latency, within single-digit milliseconds for most IO operations, for IO-intensive workloads.
Incorrect Answers:
✑ Hot: Hot file shares offer storage optimized for general purpose file sharing scenarios such as team shares. Hot file shares are offered on the standard storage hardware backed by HDDs.
✑ Transaction optimized: Transaction optimized file shares enable transaction heavy workloads that don’t need the latency offered by premium file shares.
Transaction optimized file shares are offered on the standard storage hardware backed by hard disk drives (HDDs). Transaction optimized has historically been called “standard”, however this refers to the storage media type rather than the tier itself (the hot and cool are also “standard” tiers, because they are on standard storage hardware).
Box 2: Zone-redundant storage (ZRS):
Premium Azure file shares only support LRS and ZRS.
Zone-redundant storage (ZRS): With ZRS, three copies of each file stored, however these copies are physically isolated in three distinct storage clusters in different Azure availability zones.

Reference
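The tier and redundancy reasoning can be captured in a small lookup (a sketch; the ordering reflects the documented durability ranking, and premium file shares support only LRS and ZRS):

```python
# Redundancy options supported per Azure Files tier.
SUPPORTED_REDUNDANCY = {
    "premium": ["LRS", "ZRS"],
    "standard": ["LRS", "ZRS", "GRS", "GZRS"],
}
# From least to most resilient, per the durability ranking in the docs.
RESILIENCE_ORDER = ["LRS", "ZRS", "GRS", "GZRS"]

def most_resilient(tier):
    """Return the highest-resiliency replication supported by a tier."""
    return max(SUPPORTED_REDUNDANCY[tier], key=RESILIENCE_ORDER.index)
```

For the premium tier chosen in Box 1, the most resilient supported option is ZRS, matching Box 2.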

107
Q

Q6 T3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
✑ Provide access to the full .NET framework.
✑ Provide redundancy if an Azure region fails.
✑ Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy an Azure virtual machine scale set that uses autoscaling.
Does this meet the goal?

A. Yes
B. No

A

Correct Answer: B

Instead, you should deploy two Azure virtual machines to two Azure regions, and you create a Traffic Manager profile.
Note: Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness.

Reference

A virtual machine scale set with autoscaling can meet the requirements of providing access to the full .NET Framework and granting administrators access to the operating system to install custom application dependencies. However, a scale set is deployed within a single region, so it does not provide redundancy if an Azure region fails.

To provide redundancy against a regional outage, the virtual machines must be deployed to two Azure regions with an Azure Traffic Manager profile routing traffic between them, as described above.

108
Q

Q7 T3

You need to recommend an Azure Storage account configuration for two applications named Application1 and Application2. The configuration must meet the following requirements:
✑ Storage for Application1 must provide the highest possible transaction rates and the lowest possible latency.
✑ Storage for Application2 must provide the lowest possible storage costs per GB.
✑ Storage for both applications must be available in an event of datacenter failure.
✑ Storage for both applications must be optimized for uploads and downloads.
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

The prompt asks us to recommend an Azure Storage account configuration for two applications, Application1 and Application2, based on specific requirements.

Application1 Requirements:
* Highest possible transaction rates and lowest possible latency
* Availability in an event of datacenter failure
* Optimized for uploads and downloads

Application2 Requirements:
* Lowest possible storage costs per GB
* Availability in an event of datacenter failure
* Optimized for uploads and downloads

Recommendations:

Application1:

  • BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication: This configuration offers the highest possible transaction rates and lowest possible latency, making it ideal for applications that require real-time data access. ZRS replication ensures data durability and availability in case of a datacenter failure. BlockBlobStorage is optimized for storing large amounts of unstructured data, which is often the case for applications that require high transaction rates.

Application2:

  • BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication: This configuration provides the lowest possible storage costs per GB while still ensuring data durability and availability. The Cool access tier is suitable for data that is infrequently accessed and is ideal for long-term storage. GRS replication ensures data redundancy across multiple regions, providing high availability. BlobStorage is suitable for storing large amounts of unstructured data, making it a good choice for applications that require low storage costs.

By selecting these configurations, we meet all the requirements for both applications and provide the most suitable storage options based on their specific needs.

109
Q

Q8 T3

You plan to develop a new app that will store business critical data. The app must meet the following requirements:

✑ Prevent new data from being modified for one year.
✑ Maximize data resiliency.
✑ Minimize read latency.

What storage solution should you recommend for the app? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

The appropriate storage solution for the app, based on the given requirements, is:

Storage Account Type: Premium block blobs

Redundancy: Zone-redundant storage (ZRS)

Here’s why:

  • Prevent new data from being modified for one year: Block blob storage supports immutable storage policies. A time-based retention policy of one year makes newly written data read-only (write once, read many) for that period, protecting it from accidental or malicious alteration.
  • Maximize data resiliency: ZRS provides the highest level of durability available for premium block blob accounts, which support only LRS and ZRS. It replicates data synchronously across three availability zones (physically separate data centers) within a region, protecting the data against the failure of any single data center. This is crucial for business-critical data that needs to be highly resilient.
  • Minimize read latency: Premium block blobs are backed by solid-state drives and are optimized for low-latency reads and high transaction rates, which is essential for applications that require fast access to data. This storage type is designed to deliver high performance and responsiveness, even for large datasets.
    Therefore, Premium block blobs with ZRS redundancy offer the best combination of data immutability, resiliency, and low latency, making it the ideal storage solution for the given requirements.
110
Q

Q9 T3

You plan to deploy 10 applications to Azure. The applications will be deployed to two Azure Kubernetes Service (AKS) clusters. Each cluster will be deployed to a separate Azure region.
The application deployment must meet the following requirements:
✑ Ensure that the applications remain available if a single AKS cluster fails.
✑ Ensure that the connection traffic over the internet is encrypted by using SSL without having to configure SSL on each container.
Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer

A

Correct Answer: A

Azure Front Door supports SSL: it terminates SSL/TLS at its edge (SSL offload), so internet traffic is encrypted without configuring SSL on each container.
Azure Front Door focuses on global load balancing and site acceleration, while Azure CDN Standard offers static content caching and acceleration.
The new Azure Front Door brings together security and CDN technology for a cloud-based CDN with threat protection and additional capabilities.

Reference

Front Door is an application delivery network that provides global load balancing and site acceleration service for web applications. It offers Layer 7 capabilities for your application like SSL offload, path-based routing, fast failover, caching, etc. to improve performance and high-availability of your applications.

Traffic Manager does not provide SSL Offloading.
And the other options are not global options (multi-region)

111
Q

Q10 T3

You have an on-premises file server that stores 2 TB of data files.
You plan to move the data files to Azure Blob Storage in the West Europe Azure region.
You need to recommend a storage account type to store the data files and a replication solution for the storage account. The solution must meet the following requirements:
✑ Be available if a single Azure datacenter fails.
✑ Support storage tiers.
✑ Minimize cost.
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area

A

Answer

Box 1: Standard general-purpose v2
Standard general-purpose v2 meets the requirements and minimizes the costs.
Box 2: Zone-redundant storage (ZRS)
ZRS protects against a Datacenter failure, while minimizing the costs.

Reference

112
Q

Q11 T3

You have an Azure web app named App1 and an Azure key vault named KV1.
App1 stores database connection strings in KV1.
App1 performs the following types of requests to KV1:
✑ Get
✑ List
✑ Wrap
✑ Delete
✑ Backup
✑ Decrypt
✑ Encrypt
You are evaluating the continuity of service for App1.
You need to identify the following if the Azure region that hosts KV1 becomes unavailable:
✑ To where will KV1 fail over?
✑ During the failover, which request type will be unavailable?
What should you identify? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: A server in the paired region
The contents of your key vault are replicated within the region and to a secondary region at least 150 miles away, but within the same geography to maintain high durability of your keys and secrets.
Regions are paired for cross-region replication based on proximity and other factors.

Box 2: Delete -
During failover, your key vault is in read-only mode.

Reference
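The failover reasoning can be sketched as a set difference: while the vault is failed over it is in read-only mode, so only operations that do not change vault state succeed, and App1's one state-changing request type fails. (The classification below follows that read-only-mode behavior; cryptographic operations such as Encrypt, Decrypt, and Wrap do not modify the vault.)

```python
# Operations that do not change vault state still work in read-only mode.
READ_ONLY_SAFE = {"Get", "List", "Backup", "Decrypt", "Encrypt", "Wrap", "Unwrap"}

def unavailable_during_failover(request_types):
    """Return the request types that fail while the vault is read-only."""
    return set(request_types) - READ_ONLY_SAFE

# The request types App1 performs, per the question.
APP1_REQUESTS = {"Get", "List", "Wrap", "Delete", "Backup", "Decrypt", "Encrypt"}
```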

113
Q

Q12 T3

Your company identifies the following business continuity and disaster recovery objectives for virtual machines that host sales, finance, and reporting applications in the company’s on-premises data center:
✑ The sales application must be able to fail over to a second on-premises data center.
✑ The reporting application must be able to recover point-in-time data at a daily granularity. The RTO is eight hours.
✑ The finance application requires that data be retained for seven years. In the event of a disaster, the application must be able to run from Azure. The recovery time objective (RTO) is 10 minutes.
You need to recommend which services meet the business continuity and disaster recovery objectives. The solution must minimize costs.
What should you recommend for each application? To answer, drag the appropriate services to the correct applications. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place

A

Answer

Box 1: Azure Site Recovery -

Azure Site Recovery -
Coordinates virtual-machine and physical-server replication, failover, and failback.
DR solutions have low Recovery point objectives; DR copy can be behind by a few seconds/minutes.
DR needs only operational recovery data, which can take hours to a day. Using DR data for long-term retention is not recommended because of the fine-grained data capture.
Disaster recovery solutions have smaller Recovery time objectives because they are more in sync with the source.
Remote monitor the health of machines and create customizable recovery plans.
Box 2: Azure Site Recovery and Azure Backup
Backup ensures that your data is safe and recoverable while Site Recovery keeps your workloads available when/if an outage occurs.

Box 3: Azure Backup only -

Azure Backup -
Backs up data on-premises and in the cloud.
Backups have wide variability in their acceptable RPOs: VM backups usually have an RPO of one day, while database backups can be as low as 15 minutes.
Backup data is typically retained for 30 days or less, but from a compliance view, data may need to be saved for years; backup data is ideal for archiving in such instances.
Because of the larger RPO, the amount of data a backup solution needs to process is usually much higher, which leads to a longer RTO.
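The RPO contrast drawn above between DR replication and periodic backup can be illustrated with a small calculation. This is a hypothetical sketch: the 30-second replication lag and the nightly backup time are illustrative values, not Azure guarantees.

```python
from datetime import datetime, timedelta

def worst_case_data_loss(failure_time: datetime,
                         last_recovery_point: datetime) -> timedelta:
    """Data written after the last recovery point is lost (the effective RPO)."""
    return failure_time - last_recovery_point

failure = datetime(2024, 1, 1, 14, 30)

# DR replication (e.g. Site Recovery): the replica is only seconds behind.
dr_recovery_point = failure - timedelta(seconds=30)

# Daily backup (e.g. Azure Backup): the last recovery point may be hours old.
backup_recovery_point = datetime(2024, 1, 1, 2, 0)  # last nightly backup

print(worst_case_data_loss(failure, dr_recovery_point))      # 0:00:30
print(worst_case_data_loss(failure, backup_recovery_point))  # 12:30:00
```

The larger loss window also implies more changed data to replay on restore, which is why the backup path carries the longer RTO.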


114
Q

Q13 T3

You need to design a highly available Azure SQL database that meets the following requirements:
✑ Failover between replicas of the database must occur without any data loss.
✑ The database must remain available in the event of a zone outage.
✑ Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Managed Instance Business Critical
B. Azure SQL Database Premium
C. Azure SQL Database Basic
D. Azure SQL Managed Instance General Purpose

A

Correct Answer: B

Zone-redundant configuration is not available in SQL Managed Instance. In SQL Database this feature is only available when the Gen5 hardware is selected.

To prevent Data Loss, Premium/Business Critical is required:
The primary node constantly pushes changes to the secondary nodes in order and ensures that the data is persisted to at least one secondary replica before committing each transaction. This process guarantees that if the primary node crashes for any reason, there is always a fully synchronized node to fail over to.
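The commit rule described above (persist to at least one secondary before acknowledging) can be sketched as a simplified model. This is an illustration of the technique only, not the actual SQL Server replication internals; the class and function names are hypothetical.

```python
class Replica:
    """A secondary node with a durable transaction log."""
    def __init__(self, name: str):
        self.name = name
        self.log = []  # persisted transactions

    def persist(self, txn: str) -> bool:
        self.log.append(txn)
        return True  # acknowledge once the change is durably written

def commit(txn: str, secondaries: list, required_acks: int = 1) -> str:
    """Commit only after at least `required_acks` secondaries persist the change,
    so a fully synchronized node always exists to fail over to."""
    acks = sum(1 for replica in secondaries if replica.persist(txn))
    if acks < required_acks:
        raise RuntimeError("not enough synchronous acknowledgements; aborting")
    return "committed"

secondaries = [Replica("sec1"), Replica("sec2"), Replica("sec3")]
print(commit("txn-42", secondaries))  # committed
```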


115
Q

Q14 T3

You need to design a highly available Azure SQL database that meets the following requirements:
✑ Failover between replicas of the database must occur without any data loss.
✑ The database must remain available in the event of a zone outage.
✑ Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Managed Instance Business Critical
B. Azure SQL Database Premium
C. Azure SQL Database Basic
D. Azure SQL Database Hyperscale

A

Correct Answer: B

Azure SQL Database Premium meets the requirements and is the least expensive.
Note: There are two high availability architectural models:
* Standard availability model that is based on a separation of compute and storage. It relies on high availability and reliability of the remote storage tier. This architecture targets budget-oriented business applications that can tolerate some performance degradation during maintenance activities.
* Premium availability model that is based on a cluster of database engine processes. It relies on the fact that there is always a quorum of available database engine nodes. This architecture targets mission-critical applications with high IO performance, high transaction rate and guarantees minimal performance impact to your workload during maintenance activities.
Note: Zone-redundant configuration for the general purpose service tier is offered for both serverless and provisioned compute. This configuration utilizes Azure
Availability Zones to replicate databases across multiple physical locations within an Azure region. By selecting zone-redundancy, you can make your new and existing serverless and provisioned general purpose single databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes of the application logic.
Incorrect:
Not A: Azure SQL Managed Instance Business Critical is more expensive.
Not C: Azure SQL Database Basic and General Purpose provide only locally redundant availability.


116
Q

Q15T3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
✑ Provide access to the full .NET framework.
✑ Provide redundancy if an Azure region fails.
✑ Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy a web app in an Isolated App Service plan.
Does this meet the goal?
A. Yes
B. No

A

Correct Answer: B

Instead: You deploy two Azure virtual machines to two Azure regions, and you create an Azure Traffic Manager profile.
Note: Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness.


App Service does not provide admin access to the OS

Linux apps in App Service run in their own containers. You have root access to the container but no access to the host operating system is allowed. Likewise, for apps running in Windows containers, you have administrative access to the container but no access to the host operating system.

117
Q

Q16 T3

You need to design a highly available Azure SQL database that meets the following requirements:
✑ Failover between replicas of the database must occur without any data loss.
✑ The database must remain available in the event of a zone outage.
✑ Costs must be minimized.
Which deployment option should you use?

A. Azure SQL Database Serverless
B. Azure SQL Database Business Critical
C. Azure SQL Database Basic
D. Azure SQL Database Standard

A

Correct Answer: B

The correct answer is B. Azure SQL Database Business Critical.

Here’s why:

  • Failover without data loss: Business Critical uses a synchronous replication model, ensuring that all writes are committed to both primary and secondary replicas before acknowledging the transaction. This guarantees data consistency and prevents data loss during failover.
  • Zone outage resilience: Business Critical supports zone redundancy, meaning replicas are distributed across different availability zones within a region. If one zone experiences an outage, the database can failover to a replica in a different zone, ensuring continued availability.
  • Cost minimization: Although Business Critical is the most expensive option listed, it is the only one that meets both the zero-data-loss and zone-redundancy requirements, making it the lowest-cost option that satisfies all the constraints.
    Other options don’t meet all requirements:
  • Serverless: While cost-effective for unpredictable workloads, it runs on the General Purpose architecture and doesn’t guarantee zone redundancy, making it unsuitable for these high availability requirements.
  • Basic: Doesn’t support synchronous secondary replicas or zone redundancy, making it unsuitable for high availability.
  • Standard: Uses the standard availability model (remote storage) and doesn’t offer zone redundancy, making it less resilient to zone outages.

Note: A zone-redundant configuration for the Azure SQL Database serverless compute tier is available in public preview.

118
Q

Q17 T3

You have an on-premises Microsoft SQL Server database named SQL1.

You plan to migrate SQL1 to Azure.

You need to recommend a hosting solution for SQL1. The solution must meet the following requirements:

  • Support the deployment of multiple secondary, read-only replicas.
  • Support automatic replication between primary and secondary replicas.
  • Support failover between primary and secondary replicas within a 15-minute recovery time objective (RTO).

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Recommended Solution

Azure Service or service tier:
* Azure SQL Managed Instance

Replication mechanism:
* Auto-failover groups

Explanation

Azure SQL Managed Instance
* Supports multiple secondary, read-only replicas: Managed Instance allows for multiple secondary replicas to be created and managed efficiently.
* Supports automatic replication: Auto-failover groups handle the automatic replication between primary and secondary instances.
* Supports failover within a 15-minute RTO: Auto-failover groups can achieve RTOs of less than 15 minutes.

Auto-failover groups
* Manages replication and failover: This feature simplifies the management of multiple replicas and automates failover processes.
* Achieves desired RTO: Auto-failover groups are designed to meet stringent RTO requirements.

Why not other options?

  • Azure SQL Database: While it supports geo-replication and some level of high availability, it might not be suitable for complex workloads and large databases that typically require the capabilities of a managed instance.
  • Hyperscale service tier: While it offers high performance and scalability, it doesn’t inherently provide the same level of high availability and disaster recovery features as managed instance with auto-failover groups.
  • Active geo-replication: While a core technology, it requires more manual management compared to auto-failover groups.

By choosing Azure SQL Managed Instance with auto-failover groups, you get a robust and reliable solution that meets your specific requirements.

119
Q

Q18 T3

You have two on-premises Microsoft SQL Server 2017 instances that host an Always On availability group named AG1. AG1 contains a single database named DB1.

You have an Azure subscription that contains a virtual machine named VM1. VM1 runs Linux and contains a SQL Server 2019 instance.

You need to migrate DB1 to VM1. The solution must minimize downtime on DB1.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Prepare for the migration by:

Box 1: Adding a secondary replica to AG1: This is crucial for minimizing downtime. By having a secondary replica, you can perform the migration on it without affecting the primary replica and its availability.
Perform the migration by using:

Box 2: Azure Migrate: This is the best option for migrating an on-premises SQL Server database to Azure. It provides a streamlined process for assessing, migrating, and modernizing your database workloads.
Explanation:

Adding a secondary replica: This ensures high availability and provides a safe environment for performing the migration without impacting the primary replica.
Azure Migrate: Provides tools for assessing compatibility, migrating data efficiently, and managing the migration process. It is specifically designed for cloud migrations and offers features to minimize downtime.
Other options and why they are not suitable:

Creating an Always On availability group on VM1: This is unnecessary as you already have an existing AG1.
Upgrading the on-premises SQL Server instances: This is not required for the migration and might introduce additional complexities.
Distributed availability group: While this is a possibility for extending an availability group across on-premises and Azure, it’s more complex and not necessary for this scenario.
Log shipping: This is an older method that is less efficient and more complex than using Azure Migrate for this type of migration.
By following these steps, you can effectively migrate DB1 to VM1 with minimal downtime.

120
Q

Q19 T3

You are building an Azure web app that will store the Personally Identifiable Information (PII) of employees.

You need to recommend an Azure SQL Database solution for the web app. The solution must meet the following requirements:

  • Maintain availability in the event of a single datacenter outage.
  • Support the encryption of specific columns that contain PII.
  • Automatically scale up during payroll operations.
  • Minimize costs.

What should you include in the recommendations? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

A

Answer

Service Tier and Compute Tier: Business Critical service tier and Serverless compute tier
Encryption Method: Always Encrypted

Service Tier and Compute Tier:

Recommendation: Business Critical service tier and Serverless compute tier

Justification:
* Business Critical service tier: This tier provides the highest level of availability, ensuring the database stays online even during a single-datacenter outage. This is crucial for storing PII, which requires uninterrupted access.
* Serverless compute tier: This option allows the database to automatically scale up and down based on workload, handling the increased load during payroll operations. It minimizes costs by avoiding over-provisioning.

Why other options are discarded:
* General Purpose service tier: While more cost-effective, it doesn’t guarantee the same level of availability as Business Critical. This is unacceptable for PII storage.
* Hyperscale service tier: While suitable for extremely large databases, it’s overkill for this scenario and leads to higher costs.
* Provisioned compute tier: Doesn’t offer the required auto-scaling capability.

Encryption Method:

Recommendation: Always Encrypted

Justification:
* Always Encrypted provides the highest level of protection for sensitive data, ensuring that PII remains encrypted at rest, in transit, and in use (the database engine never sees the plaintext). This is essential for complying with data privacy regulations.
* It allows for the encryption of specific columns: This feature is crucial for protecting only the PII columns while leaving other data unencrypted, optimizing performance.

Why other options are discarded:
* Microsoft SQL Server and database encryption keys: This option is part of the implementation of Always Encrypted, not a separate encryption method.
* Transparent Data Encryption (TDE): While TDE encrypts the entire database, it doesn’t provide the granular control needed to protect specific columns containing PII.

By combining Business Critical service tier with Serverless compute tier and implementing Always Encrypted, you achieve a highly available, cost-effective, and secure solution for storing employee PII.
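Column-level protection with Always Encrypted is declared per column in the table definition. The following T-SQL is a sketch; the table, column, and key names are hypothetical, and a column master key plus the column encryption key (CEK1 here) must already have been created:

```sql
CREATE TABLE dbo.Employees (
    EmployeeId INT IDENTITY(1,1) PRIMARY KEY,
    FullName   NVARCHAR(100) NOT NULL,  -- left in plaintext
    SSN        CHAR(11) COLLATE Latin1_General_BIN2
               ENCRYPTED WITH (
                   COLUMN_ENCRYPTION_KEY = CEK1,
                   ENCRYPTION_TYPE = DETERMINISTIC,  -- allows equality lookups
                   ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    Salary     MONEY
               ENCRYPTED WITH (
                   COLUMN_ENCRYPTION_KEY = CEK1,
                   ENCRYPTION_TYPE = RANDOMIZED,  -- stronger, but no searches
                   ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
```

Only the PII columns carry the ENCRYPTED WITH clause, which is the granular, column-level control that TDE cannot provide.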

121
Q

Q20 T3

You plan to deploy an Azure Database for MySQL flexible server named Server1 to the East US Azure region.

You need to implement a business continuity solution for Server1. The solution must minimize downtime in the event of a failover to a paired region.

What should you do?

A. Create a read replica.
B. Store the database files in Azure premium file shares.
C. Implement Geo-redundant backup.
D. Configure native MySQL replication.

A

Correct Answer: C

The answer should be C, geo-redundant backup.
Read replicas are not meant for business continuity; failover to a replica is manual, which doesn’t meet the requirement of minimizing downtime.
“There’s no automated failover between source and replica servers.
Read replicas is meant for scaling of read intensive workloads and isn’t designed to meet high availability needs of a server. Stopping the replication on read replica to bring it online in read write mode is the means by which this manual failover is performed.”


122
Q

Q21 T3

You have an Azure subscription that contains the resources shown in the following table

You need to recommend a load balancing solution that will distribute incoming traffic for VMSS1 across NVA1 and NVA2. The solution must minimize administrative effort.

What should you include in the recommendation?

A. Gateway Load Balancer
B. Azure Front Door
C. Azure Application Gateway
D. Azure Traffic Manager

A

Correct Answer: A

Gateway Load Balancer is a SKU of the Azure Load Balancer portfolio catered for high performance and high availability scenarios with third-party Network Virtual Appliances (NVAs). With the capabilities of Gateway Load Balancer, you can easily deploy, scale, and manage NVAs. Chaining a Gateway Load Balancer to your public endpoint only requires one selection.

123
Q

Q22 T3

You have the Azure subscriptions shown in the following table

Contoso.onmicrosoft.com contains a user named User1.

You need to deploy a solution to protect against ransomware attacks. The solution must meet the following requirements:

  • Ensure that all the resources in Sub1 are backed up by using Azure Backup.
  • Require that User1 first be assigned a role for Sub2 before the user can make major changes to the backup configuration.

What should you create in each subscription? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point

Answer Area

A

Answer

Sub 1: A Recovery Services Vault
Sub 2: A Resource Guard

Best Options for Each Subscription

Sub1 (Production Subscription)

  • Recovery Services vault: This is essential for backing up the resources in Sub1 by using Azure Backup. It provides a centralized location to store and manage backups.

Sub2 (Recovery Subscription)

  • Resource Guard: A Resource Guard enables multi-user authorization (MUA) for Azure Backup. When the vault in Sub1 is associated with a Resource Guard in Sub2, critical changes to the backup configuration (such as disabling soft delete or stopping protection) are blocked unless the user also holds a role on the Resource Guard.

Justification

Sub1 (Production Subscription)
* Recovery Services vault: The primary goal of Sub1 is to protect data from ransomware attacks. A Recovery Services vault is the core component for backing up your data and ensuring recovery options.

Sub2 (Recovery Subscription)
* Resource Guard: Because the Resource Guard lives in Sub2, User1 must first be assigned a role for Sub2 before making major changes to the backup configuration, exactly as the requirement states. This added layer of protection helps prevent accidental or malicious actions, such as ransomware-driven backup deletion, from compromising the recovery environment.

By combining a Recovery Services vault in Sub1 and a Resource Guard in Sub2, you create a robust protection strategy that addresses both data protection and change control.


124
Q

Q23 T3

You have 10 on-premises servers that run Windows Server.

You need to perform daily backups of the servers to a Recovery Services vault. The solution must meet the following requirements:

  • Back up all the files and folders on the servers.
  • Maintain three copies of the backups in Azure.
  • Minimize costs.

What should you configure? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Box 1: The Microsoft Azure Recovery Services (MARS) agent
The MARS agent is a free and easy-to-use agent that can be installed on Windows servers to back up files and folders to Azure.
Volume Shadow Copy Service (VSS) is a Windows service that provides a snapshot of the server’s file system, which is used to create consistent backups. The VSS service is already installed and enabled on Windows Server by default, so it is not necessary to select it as a configuration option.

Box 2: Locally-redundant storage (LRS)
LRS is the most cost-effective storage option for Azure Backup. It replicates data three times within a single data center in the primary region, which provides sufficient durability for most workloads.

125
Q

Q24 T3

You plan to deploy a containerized web-app that will be hosted in five Azure Kubernetes Service (AKS) clusters. Each cluster will be hosted in a different Azure region.

You need to provide access to the app from the internet. The solution must meet the following requirements:

  • Incoming HTTPS requests must be routed to the cluster that has the lowest network latency.
  • HTTPS traffic to individual pods must be routed via an ingress controller.
  • In the event of an AKS cluster outage, failover time must be minimized.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

A

Answer

Box 1: Azure Front Door
Both Azure Front Door and Traffic Manager are global load balancers. However, Azure Front Door is recommended for HTTP(S) traffic, while Traffic Manager is recommended for non-HTTP(S) traffic.

Box 2: Azure Application Gateway
The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure’s native Application Gateway L7 load-balancer to expose cloud software to the Internet.
AGIC helps eliminate the need to have another load balancer/public IP address in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster.
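As an illustration, routing HTTPS traffic to individual pods through AGIC uses a standard Kubernetes Ingress resource annotated for Application Gateway. This is a sketch; the host name, service name, and TLS secret are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - app.contoso.com
      secretName: app-tls-cert     # certificate for HTTPS termination
  rules:
    - host: app.contoso.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-svc  # the app's pods behind this service
                port:
                  number: 443
```

Azure Front Door then sits in front of the five regional Application Gateway endpoints and directs each request to the lowest-latency healthy cluster.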

126
Q

Q25 T3

You have an Azure subscription.

You create a storage account that will store documents.

You need to configure the storage account to meet the following requirements:

  • Ensure that retention policies are standardized across the subscription.
  • Ensure that data can be purged if the data is copied to an unauthorized location.

Which two settings should you enable? To answer, select the appropriate settings in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

The two settings that should be enabled are:

  • Enable version-level immutability support under Access control.
  • Enable permanent delete for soft deleted items under Recovery.

Here’s why:

Enable version-level immutability support:

  • This setting ensures that data cannot be modified or deleted for a specified period, providing a strong retention policy.
  • It helps prevent accidental or malicious deletion of important documents.
  • By enabling version-level immutability, you can enforce a standardized retention policy across your subscription.

Enable permanent delete for soft deleted items:

  • This setting allows you to permanently delete soft-deleted items after a specified retention period.
  • It helps prevent unauthorized data retention and reduces storage costs.
  • By enabling permanent delete, you can ensure that data is purged if it is copied to an unauthorized location.

The other settings are not directly related to the given requirements:

  • Enable operational backup with Azure Backup: This setting is for backing up data to Azure Backup, which is not directly related to retention policies or data purging.
  • Enable point-in-time restore for containers: This setting allows you to restore containers to a previous state, which is not directly related to retention policies or data purging.
  • Enable soft delete for blobs and containers: These settings prevent accidental deletion of blobs and containers, but they do not enforce retention policies or prevent data purging.
  • Enable versioning for blobs: This setting enables versioning for blobs, which can be helpful for tracking changes, but it does not enforce retention policies or prevent data purging.
  • Enable blob change feed: This setting enables a change feed for blobs, which can be helpful for tracking changes, but it does not enforce retention policies or prevent data purging.

127
Q

Q26 T3

You have an Azure subscription.

You are designing a solution for containerized apps. The solution must meet the following requirements:

  • Automatically scale the apps by creating additional instances.
  • Minimize administrative effort to maintain nodes and clusters.
  • Ensure that containerized apps are highly available across multiple availability zones.
  • Provide a central location for the lifecycle management and storage of container images.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

A

Answer

  1. Azure Container Apps
  2. Azure Container Registry

In this scenario, we need automatic scaling, which is not supported by Azure Container Instances (ACI).

It is, however, a feature of Azure Container Apps (ACA), enabled by setting scaling rules.

128
Q

Q27 T3

You plan to use Azure Storage to store data assets.

You need to identify the procedure to fail over a general-purpose v2 account as part of a disaster recovery plan. The solution must meet the following requirements:

  • Apps must be able to access the storage account after a failover.
  • You must be able to fail back the storage account to the original location.
  • Downtime must be minimized.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Answer Area

A

Answer

The answer is correct.
Prerequisites:
Before you can perform an account failover on your storage account, make sure that your storage account is configured for geo-replication (GRS, GZRS, RA-GRS, or RA-GZRS).


129
Q

Q1 T4

You have an Azure subscription that contains a Basic Azure virtual WAN named VirtualWAN1 and the virtual hubs shown in the following table.

You have an ExpressRoute circuit in the US East Azure region.
You need to create an ExpressRoute association to VirtualWAN1.
What should you do first?
A. Upgrade VirtualWAN1 to Standard.
B. Create a gateway on Hub1.
C. Enable the ExpressRoute premium add-on.
D. Create a hub virtual network in US East.

A

Correct Answer: A

A Basic Azure Virtual WAN does not support ExpressRoute. You have to upgrade to Standard.

130
Q

Q2 T4

You have an Azure subscription that contains a storage account.
An application sometimes writes duplicate files to the storage account.
You have a PowerShell script that identifies and deletes duplicate files in the storage account. Currently, the script is run manually after approval from the operations manager.
You need to recommend a serverless solution that performs the following actions:
✑ Runs the script once an hour to identify whether duplicate files exist
✑ Sends an email notification to the operations manager requesting approval to delete the duplicate files
✑ Processes an email response from the operations manager specifying whether the deletion was approved
✑ Runs the script if the deletion was approved
What should you include in the recommendation?
A. Azure Logic Apps and Azure Event Grid
B. Azure Logic Apps and Azure Functions
C. Azure Pipelines and Azure Service Fabric
D. Azure Functions and Azure Batch

A

Correct Answer: B

You can schedule a PowerShell script with Azure Logic Apps.
When you want to run code that performs a specific job in your logic apps, you can create your own function by using Azure Functions. This service helps you create Node.js, C#, and F# functions so you don’t have to build a complete app or infrastructure to run code. You can also call logic apps from inside Azure functions.


Azure Logic Apps is a serverless solution that enables you to create and run workflows that integrate with various services and systems. You can use Azure Logic Apps to create a workflow that runs the PowerShell script once an hour using a time-based trigger, sends an email notification to the operations manager for approval, and processes the email response.

Azure Functions is a serverless compute service that allows you to run event-driven code without having to manage infrastructure explicitly. You can use Azure Functions to host the PowerShell script, which can be triggered by the Logic App when the operations manager approves the deletion.

Combining Azure Logic Apps and Azure Functions will provide the necessary components to meet the requirements of the scenario.
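The hourly schedule and the hand-off to the script can be sketched as a fragment of a Logic Apps workflow definition. This is illustrative only; the function resource ID is a placeholder, and the trigger and action names are hypothetical:

```json
{
  "triggers": {
    "Every_hour": {
      "type": "Recurrence",
      "recurrence": { "frequency": "Hour", "interval": 1 }
    }
  },
  "actions": {
    "Check_for_duplicates": {
      "type": "Function",
      "inputs": {
        "function": { "id": "<resource ID of the duplicate-check function>" }
      }
    }
  }
}
```

The approval step would typically use the Office 365 Outlook connector's "Send approval email" action, followed by a condition action that invokes the deletion function only when the manager's response is Approve.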

131
Q

Q3 T4

Your company has the infrastructure shown in the following table

The on-premises Active Directory domain syncs with Azure Active Directory (Azure AD).
Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
You plan to migrate Server1 to a virtual machine in Subscription1.
A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.
You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.
What should you include in the recommendation?
A. Azure AD Application Proxy
B. the Active Directory Domain Services role on a virtual machine
C. an Azure VPN gateway
D. Azure AD Domain Services (Azure AD DS)

A

Correct Answer: D

Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join, group policy, lightweight directory access protocol (LDAP), and Kerberos/NTLM authentication.

Active Directory Domain Services Overview

Justification:
Azure AD DS: Offers LDAP, Kerberos, and NTLM authentication without requiring a direct connection to on-premises AD, ensuring compliance with the security policy.
Functionality: Allows App1 to perform LDAP queries to verify user identities using the synchronized data from Azure AD.
Security: Prevents virtual machines in Subscription1 from accessing the on-premises network directly.

132
Q

Q4 T4

You need to design a solution that will execute custom C# code in response to an event routed to Azure Event Grid. The solution must meet the following requirements:

✑ The executed code must be able to access the private IP address of a Microsoft SQL Server instance that runs on an Azure virtual machine.
✑ Costs must be minimized.

What should you include in the solution?

A. Azure Logic Apps in the Consumption plan
B. Azure Functions in the Premium plan
C. Azure Functions in the Consumption plan
D. Azure Logic Apps in the integrated service environment

A

C. Azure Functions in the Consumption plan

Reasoning:

  • Azure Functions is the best fit for executing custom C# code in response to events.
  • Consumption plan is the most cost-effective option as you only pay for the compute time used, which is ideal for event-driven workloads.
  • Access to private IP: Azure Functions can access private endpoints, allowing it to communicate with the SQL Server instance on the Azure VM.

Breakdown of other options:

  • Azure Logic Apps in the Consumption plan: While Logic Apps can handle events, they are better suited for orchestrating workflows than executing custom code.
  • Azure Functions in the Premium plan: This would be overkill for a simple event-driven workload and would incur higher costs.
  • Azure Logic Apps in the integrated service environment: This option is even more expensive and complex than the Premium plan for Azure Functions, and it’s not necessary for this scenario.

By choosing Azure Functions in the Consumption plan, you meet the requirements of executing custom C# code, accessing the private IP address of the SQL Server instance, and minimizing costs.

Azure Functions: Hosting plans comparison

133
Q

Q5 T4

You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.
A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices.
You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.

What should you include in the recommendation?

A. a Recovery Services vault and Windows Server Backup
B. Azure blob containers and Azure File Sync
C. a Recovery Services vault and Azure Backup
D. an Azure file share and Azure File Sync

A

Correct Answer: D

Use Azure File Sync to centralize your organization’s file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share.

Design for Azure Files

Storage Sync Files deployment guide

134
Q

Q6 T4

You have an Azure subscription named Subscription1 that is linked to a hybrid Azure Active Directory (Azure AD) tenant.
You have an on-premises datacenter that does NOT have a VPN connection to Subscription1. The datacenter contains a computer named Server1 that has
Microsoft SQL Server 2016 installed. Server1 is prevented from accessing the internet.
An Azure logic app resource named LogicApp1 requires write access to a database on Server1.
You need to recommend a solution to provide LogicApp1 with the ability to access Server1.
What should you recommend deploying on-premises and in Azure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Box 1: An on-premises data gateway
For logic apps in global, multi-tenant Azure that connect to on-premises SQL Server, you need to have the on-premises data gateway installed on a local computer and a data gateway resource that’s already created in Azure.
Box 2: A connection gateway resource

Analyzing the Requirements
Given:
* You have an Azure subscription and a hybrid Azure AD tenant.
* Your on-premises datacenter has no VPN connection to Azure.
* Server1 (with SQL Server 2016) is on-premises and cannot access the internet.
* LogicApp1 needs write access to the database on Server1.
Requirement:
* Provide a solution for LogicApp1 to access Server1.
Recommended Solution
On-premises:
* On-premises data gateway: required to securely connect LogicApp1 to the on-premises SQL Server database.
Azure:
* A connection gateway resource: needed to configure the connection between the on-premises data gateway and LogicApp1.
Explanation:
* On-premises data gateway: installed on a local computer that can reach both Server1 and the internet. Because Server1 itself is blocked from accessing the internet, the gateway cannot run on Server1; it acts as a bridge between LogicApp1 and the SQL Server database, using only outbound, encrypted connections to Azure.
* Connection gateway resource: created in the Azure subscription, this resource represents the gateway installation and is selected when you configure the SQL Server connection used by LogicApp1.
Benefits of this solution:
* Secure connection: the gateway communicates over outbound TLS connections only, so no inbound firewall ports need to be opened and sensitive data stays protected.
* No VPN required: this solution does not require a VPN connection between the on-premises datacenter and Azure, simplifying the setup and reducing costs.
Therefore, the recommended components to deploy on-premises and in Azure are:
* On-premises: On-premises data gateway
* Azure: Connection gateway resource

Connectors: Create API SQL
Logic Apps Gateway Connection
Logic Apps Gateway Install: prerequisites

135
Q

Q7 T4

Your company develops a web service that is deployed to an Azure virtual machine named VM1. The web service allows an API to access real-time data from
VM1.
The current virtual machine deployment is shown in the Deployment exhibit

The chief technology officer (CTO) sends you the following email message: “Our developers have deployed the web service to a virtual machine named VM1.
Testing has shown that the API is accessible from VM1 and VM2. Our partners must be able to connect to the API over the Internet. Partners will use this data in applications that they develop.”
You deploy an Azure API Management (APIM) service. The relevant API Management configuration is shown in the API exhibit

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Api Management Using with Vnet

Yes - because the APIM instance is deployed to a VNet but configured as “External”, it is reachable from the internet.

Yes - because the APIM instance is deployed in the same VNet as VM1, just in a different subnet. Communication between subnets is enabled by default, and there is no mention of otherwise.

No - no VPN is required, because the APIM instance is already accessible from the internet by virtue of being configured as “External”.

136
Q

Q8 T4

Your company has an existing web app that runs on Azure virtual machines.
You need to ensure that the app is protected from SQL injection attempts and uses a layer-7 load balancer. The solution must minimize disruptions to the code of the app.
What should you recommend? To answer, drag the appropriate services to the correct targets. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Drag & Drop

A

Answer

Box 1: Azure Application Gateway
The Azure Application Gateway Web Application Firewall (WAF) provides protection for web applications. These protections are provided by the Open Web
Application Security Project (OWASP) Core Rule Set (CRS).
Box 2: Web Application Firewall (WAF)

Application Gateway: Customize WAF rules

App Gateway and WAF.
WAF v2 supports the latest OWASP Core Rule Set (3.2, in preview at the time of writing) and requires an Application Gateway deployed into its own subnet (a /24 subnet is recommended).

137
Q

Q9 T4

You are designing a microservices architecture that will be hosted in an Azure Kubernetes Service (AKS) cluster. Apps that will consume the microservices will be hosted on Azure virtual machines. The virtual machines and the AKS cluster will reside on the same virtual network.
You need to design a solution to expose the microservices to the consumer apps. The solution must meet the following requirements:
✑ Ingress access to the microservices must be restricted to a single private IP address and protected by using mutual TLS authentication.
✑ The number of incoming microservice calls must be rate-limited.
✑ Costs must be minimized.
What should you include in the solution?
A. Azure App Gateway with Azure Web Application Firewall (WAF)
B. Azure API Management Standard tier with a service endpoint
C. Azure Front Door with Azure Web Application Firewall (WAF)
D. Azure API Management Premium tier with virtual network connection

A

Correct Answer: D. Azure API Management Premium tier with virtual network connection

Reasoning:
* Restricting ingress to a single private IP address requires deploying API Management into the virtual network (VNet injection). VNet connectivity is available only in the Developer and Premium tiers, and Developer carries no SLA, so Premium is the viable production choice despite its higher cost.
* API Management supports mutual TLS (client certificate) authentication and rate limiting through policies such as rate-limit and rate-limit-by-key, covering the remaining requirements.

Breakdown of other options:
* Azure App Gateway with Azure Web Application Firewall (WAF): can terminate TLS and load-balance, but lacks API management features such as rate-limiting policies.
* Azure API Management Standard tier with a service endpoint: the Standard tier cannot be injected into a virtual network, so its gateway keeps a public endpoint. A service endpoint secures traffic from a subnet to a service; it does not give API Management a private IP address.
* Azure Front Door with Azure Web Application Firewall (WAF): a global, internet-facing entry point; it cannot expose the microservices on a single private IP address inside the virtual network.

Although costs must be minimized, the Premium tier is the least expensive option that satisfies all of the stated requirements.

138
Q

Q10 T4

You have a .NET web service named Service1 that performs the following tasks:
✑ Reads and writes temporary files to the local file system.
✑ Writes to the Application event log.
You need to recommend a solution to host Service1 in Azure. The solution must meet the following requirements:
✑ Minimize maintenance overhead.
✑ Minimize costs.
What should you include in the recommendation?
A. an Azure App Service web app
B. an Azure virtual machine scale set
C. an App Service Environment (ASE)
D. an Azure Functions app

A

Correct Answer: D

The correct answer is D. an Azure Functions app.

Here’s a breakdown of why Azure Functions is the best choice based on the given requirements:

Minimizes maintenance overhead:

  • Azure Functions is a fully managed serverless platform, meaning you don’t have to worry about managing underlying infrastructure like virtual machines or operating systems.
  • Updates and patching are handled automatically by Azure, reducing your maintenance burden.
  • You only pay for the resources consumed when your function executes, eliminating the need to maintain idle resources.

Minimizes costs:

  • Azure Functions offers a pay-as-you-go pricing model, where you only pay for the compute time and resources used by your function.
  • This can be significantly more cost-effective than running a dedicated VM or App Service instance, especially if your service has intermittent usage.
  • You can set up automatic scaling to handle varying workloads without overprovisioning resources.

Additional benefits of Azure Functions for this scenario:

  • Easy to deploy and manage using the Azure portal, Visual Studio, or the Azure CLI.
  • Can be triggered by various events, such as HTTP requests, timers, or messages from other Azure services.
  • Integrates seamlessly with other Azure services like Azure Storage for file storage and Azure Event Hubs for event logging.

While other options like Azure App Service or Azure Virtual Machines might be suitable in some scenarios, they wouldn’t be as cost-effective or require as little maintenance as Azure Functions for this specific use case.
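The first of Service1’s tasks, working with temporary files on the local file system, maps directly to standard library calls on any of these hosts; a minimal sketch (writing to the Windows Application event log has no cross-platform equivalent and is not shown, and the function name is illustrative):

```python
import os
import tempfile

# Sketch of Service1's temp-file behavior: write data to a temporary
# file in the host's local temp directory, then read it back.
def process_chunk(data: bytes) -> bytes:
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(data)
        path = f.name
    try:
        with open(path, "rb") as f:
            return f.read()
    finally:
        os.remove(path)  # clean up the temporary file

print(process_chunk(b"frame-01"))  # b'frame-01'
```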

139
Q

Q11 T4

You have the Azure resources shown in the following table

You need to deploy a new Azure Firewall policy that will contain mandatory rules for all Azure Firewall deployments. The new policy will be configured as a parent policy for the existing policies.
What is the minimum number of additional Azure Firewall policies you should create?
A. 0
B. 1
C. 2
D. 3

A

Correct Answer: D

Firewall policies work across subscriptions, but a parent policy must be in the same region as its child policies.
Place all your global configurations in the parent policy.
Each of the three regions must therefore have its own new parent policy.

You can confirm this when creating a firewall policy: the Parent Policy drop-down list only shows policies in the same region.
The existing firewall policies are located in three different regions, so linking them to a parent policy requires a new parent policy per region => 3 new policies.
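The counting logic is simple enough to sketch: one parent policy per distinct region among the existing child policies. The region names below are hypothetical stand-ins for the scenario’s table, which is not reproduced here:

```python
# One parent Azure Firewall policy is needed per distinct region,
# because a parent policy must reside in the same region as its children.
def parent_policies_needed(child_policy_regions):
    return len(set(child_policy_regions))

# Hypothetical regions for the three existing policies in the scenario.
existing = ["eastus", "westeurope", "southeastasia"]
print(parent_policies_needed(existing))  # 3
```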

140
Q

Q12 T4

Your company has an app named App1 that uses data from the on-premises Microsoft SQL Server databases shown in the following table

App1 and the data are used on the first day of the month only. The data is not expected to grow more than 3 percent each year.
The company is rewriting App1 as an Azure web app and plans to migrate all the data to Azure.
You need to migrate the data to Azure SQL Database and ensure that the database is only available on the first day of each month.
Which service tier should you use?
A. vCore-based General Purpose
B. DTU-based Standard
C. vCore-based Business Critical
D. DTU-based Basic

A

Correct Answer: A

Note: App1 and the data are used on the first day of the month only. See Serverless compute tier below.
The vCore based purchasing model.
The term vCore refers to the Virtual Core. In this purchasing model of Azure SQL Database, you can choose from the provisioned compute tier and serverless compute tier.
* Provisioned compute tier: You choose the exact compute resources for the workload.
* Serverless compute tier: Azure automatically pauses and resumes the database based on workload activity in the serverless tier. During the pause period, Azure does not charge you for the compute resources.

Azure Sql Databases: DTU & VCore base models

Use the serverless compute tier in the vCore purchasing model.

While the provisioned compute tier provides a specific amount of compute resources that are continuously provisioned independent of workload activity, the serverless compute tier auto-scales compute resources based on workload activity.
While the provisioned compute tier bills for the amount of compute provisioned at a fixed price per hour, the serverless compute tier bills for the amount of compute used, per second.
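To see why the serverless tier fits a first-day-of-the-month workload, compare the two billing models. The rates below are made-up placeholders, not real Azure prices:

```python
# Illustrative comparison of provisioned vs. serverless compute billing.
# The hourly rates are placeholders; real rates vary by region and SKU.
HOURS_PER_MONTH = 730
ACTIVE_HOURS = 24          # the database is used on the first day only

provisioned_rate = 0.50    # $ per vCore-hour, billed continuously
serverless_rate = 0.60     # $ per vCore-hour, billed only while active

provisioned_cost = provisioned_rate * HOURS_PER_MONTH   # billed all month
serverless_cost = serverless_rate * ACTIVE_HOURS        # paused the rest

print(round(provisioned_cost, 2))  # 365.0
print(round(serverless_cost, 2))   # 14.4
```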

Choosing the Right Azure SQL Database Service Tier

Understanding the Requirements
* Data Usage: Intensive but infrequent (first day of the month only).
* Data Size: Relatively large (900GB total).
* Performance: Needs to handle the application’s workload efficiently on the first day of the month.
* Cost Optimization: The database is idle for most of the month.

Evaluating Service Tier Options

A. vCore-based General Purpose:
* Offers flexibility in scaling compute, memory, and storage independently.
* Suitable for most workloads, including this one.
* Could be cost-effective if properly managed with elastic pools or serverless compute.

B. DTU-based Standard:
* Less flexible than vCore-based.
* Might not provide the necessary performance or scalability for the initial day’s workload.
* Generally less cost-effective than vCore-based for this scenario.

C. vCore-based Business Critical:
* Offers highest performance and availability but is also the most expensive.
* Overkill for this use case given the infrequent usage.

D. DTU-based Basic:
* Lowest performance and resource limits.
* Definitely not suitable for this workload due to the data size and expected load.

Recommendation: vCore-based General Purpose with Serverless Compute

Given the infrequent but intensive usage pattern, vCore-based General Purpose with Serverless Compute is the most suitable option.

  • vCore-based General Purpose: Offers the flexibility to scale resources as needed.
  • Serverless Compute: Automatically pauses the database when idle, significantly reducing costs for most of the month. It automatically resumes when activity starts on the first day of the month.

Additional Considerations:

  • Elastic Pools: If you have multiple databases with similar usage patterns, consider using elastic pools to share resources and potentially reduce costs further.
  • Performance Testing: Before going live, conduct thorough performance testing to ensure the chosen configuration can handle the expected workload.
  • Backup and Restore: Implement a robust backup and restore strategy to protect your data.

By carefully considering these factors, you can optimize the performance and cost-effectiveness of your Azure SQL Database deployment.

141
Q

Q13 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.
What should you include in the recommendation?
A. Azure Service Fabric
B. Azure Data Lake
C. Azure Service Bus
D. Azure Traffic Manager

A

Correct Answer: C

Asynchronous messaging options in Azure include Azure Service Bus, Event Grid, and Event Hubs.

Azure Messaging
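The XML transaction payload itself can be built and parsed with the standard library; actually queuing it would use an SDK such as azure-servicebus (ServiceBusClient / ServiceBusMessage), which is omitted here so the sketch stays self-contained. The element names are illustrative:

```python
import xml.etree.ElementTree as ET

# Build an XML transaction message that one cloud service could place on
# a Service Bus queue for another service (e.g. billing) to consume.
def build_order_message(order_id, amount):
    root = ET.Element("Transaction")
    ET.SubElement(root, "OrderId").text = str(order_id)
    ET.SubElement(root, "Amount").text = f"{amount:.2f}"
    return ET.tostring(root, encoding="unicode")

# The consuming service parses the message body back into fields.
def parse_order_message(xml_text):
    root = ET.fromstring(xml_text)
    return root.findtext("OrderId"), float(root.findtext("Amount"))

msg = build_order_message(1001, 59.9)
print(parse_order_message(msg))  # ('1001', 59.9)
```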

142
Q

Q14 T4

Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.
You plan to move all the virtual machines to Azure.
You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.
What should you use to make the recommendation?
A. Azure Pricing calculator
B. Azure Advisor
C. Azure Migrate
D. Azure Cost Management

A

Correct Answer: C

Azure Migrate provides a centralized hub to assess and migrate on-premises servers, infrastructure, applications, and data to Azure. It provides the following:
* Unified migration platform: a single portal to start, run, and track your migration to Azure.
* Range of tools: a range of tools for assessment and migration.

143
Q

Q15 T4

You plan to provision a High Performance Computing (HPC) cluster in Azure that will use a third-party scheduler.
You need to recommend a solution to provision and manage the HPC cluster node.
What should you include in the recommendation?

A. Azure Automation
B. Azure CycleCloud
C. Azure Purview
D. Azure Lighthouse

A

Correct Answer: B

The correct answer is B. Azure CycleCloud.
Here’s why:

  • Azure CycleCloud is specifically designed for provisioning and managing HPC clusters. It supports various HPC schedulers (like Slurm, PBS, and LSF) and provides features like workload management, resource allocation, and cluster monitoring.
  • Azure Automation is a general-purpose automation platform that can be used for various tasks, but it’s not specifically designed for HPC cluster management.
  • Azure Purview is a data governance platform that helps manage and discover data assets. It’s not relevant to HPC cluster provisioning and management.
  • Azure Lighthouse is a management platform that allows service providers to manage their customers’ Azure environments. It’s not directly related to HPC cluster provisioning.
Therefore, Azure CycleCloud is the most suitable solution for provisioning and managing an HPC cluster in Azure that uses a third-party scheduler.

144
Q

Q16 T4

You are designing an Azure App Service web app.
You plan to deploy the web app to the North Europe Azure region and the West Europe Azure region.
You need to recommend a solution for the web app. The solution must meet the following requirements:
✑ Users must always access the web app from the North Europe region, unless the region fails.
✑ The web app must be available to users if an Azure region is unavailable.
✑ Deployment costs must be minimized.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Box 1: A Traffic Manager profile
To support load balancing across the regions we need a Traffic Manager.

Box 2: Priority traffic-routing method

Often an organization wants to provide reliability for their services. To do so, they deploy one or more backup services in case their primary goes down. The
‘Priority’ traffic-routing method allows Azure customers to easily implement this failover pattern.
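The ‘Priority’ failover pattern can be sketched as: always route to the healthy endpoint with the best (lowest) priority value. Endpoint names below are hypothetical:

```python
# Sketch of Traffic Manager 'Priority' routing: traffic goes to the
# lowest priority number among healthy endpoints; if that endpoint
# fails its health probe, traffic falls back to the next priority.
def select_endpoint(endpoints):
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda e: e["priority"])["name"]

endpoints = [
    {"name": "north-europe", "priority": 1, "healthy": True},
    {"name": "west-europe", "priority": 2, "healthy": True},
]
print(select_endpoint(endpoints))  # north-europe
endpoints[0]["healthy"] = False    # North Europe region fails
print(select_endpoint(endpoints))  # west-europe
```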

145
Q

Q17 T4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app. The solution must meet the following replication requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Traffic Manager to provide access to the app.
Does this meet the goal?
A. Yes
B. No

A

Correct Answer: B

Azure Traffic Manager is a DNS-based traffic load balancer. This service allows you to distribute traffic to your public facing applications across the global Azure regions. Traffic Manager also provides your public endpoints with high availability and quick responsiveness. It does not provide rate limiting.
Note: Azure Front Door would meet the requirements. The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door controls the number of requests allowed from clients during a one-minute duration.
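A WAF rate-limit rule of this kind behaves roughly like a fixed-window counter per client; a minimal sketch (the limit and window values are illustrative, not Front Door defaults):

```python
# Minimal fixed-window rate limiter, approximating a WAF rule that
# allows at most `limit` requests per client per one-minute window.
class RateLimiter:
    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (client, window index) -> request count

    def allow(self, client, now):
        key = (client, int(now // self.window))
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit

rl = RateLimiter(limit=3)
print([rl.allow("10.0.0.4", t) for t in [0, 1, 2, 3]])  # [True, True, True, False]
print(rl.allow("10.0.0.4", 61))  # True (a new one-minute window began)
```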

146
Q

Q18 T4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app. The solution must meet the following replication requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Load Balancer to provide access to the app.
Does this meet the goal?
A. Yes
B. No

A

Correct Answer: B

Azure Application Gateway and Azure Load Balancer do not support rate or connection limits.
Note: Azure Front Door would meet the requirements. The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door controls the number of requests allowed from clients during a one-minute duration.

147
Q

Q19 T4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app. The solution must meet the following replication requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Application Gateway to provide access to the app.
Does this meet the goal?
A. Yes
B. No

A

Correct Answer: B

Azure Application Gateway and Azure Load Balancer do not support rate or connection limits.
Note: Azure Front Door would meet the requirements. The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door controls the number of requests allowed from clients during a one-minute duration.

No, this solution does not meet the goal.

Azure Application Gateway is a Layer 7 load balancer that provides features like SSL termination, cookie-based session affinity, and URL-based routing. However, it operates within a single region and cannot distribute traffic across multiple regions.

To meet the requirements of supporting rate limiting, balancing requests between instances across multiple regions, and ensuring app accessibility during regional outages, you should use Azure Front Door with Web Application Firewall (WAF). Azure Front Door is a global load balancer that can distribute traffic optimally to services across multiple regions, ensuring high availability in the event of a regional outage. By enabling WAF, you can configure custom rate limiting rules to control incoming traffic to your web app.

148
Q

Q20 T4

Your company has two on-premises sites in New York and Los Angeles and Azure virtual networks in the East US Azure region and the West US Azure region.

Each on-premises site has ExpressRoute Global Reach circuits to both regions.

You need to recommend a solution that meets the following requirements:

✑ Outbound traffic to the internet from workloads hosted on the virtual networks must be routed through the closest available on-premises site.
✑ If an on-premises site fails, traffic from the workloads on the virtual networks to the internet must reroute automatically to the other site.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Box 1: Border Gateway Protocol (BGP)
An on-premises network gateway can exchange routes with an Azure virtual network gateway using the border gateway protocol (BGP). Using BGP with an Azure virtual network gateway is dependent on the type you selected when you created the gateway. If the type you selected were:
ExpressRoute: You must use BGP to advertise on-premises routes to the Microsoft Edge router. You cannot create user-defined routes to force traffic to the
ExpressRoute virtual network gateway if you deploy a virtual network gateway deployed as type: ExpressRoute. You can use user-defined routes for forcing traffic from the Express Route to, for example, a Network Virtual Appliance.

Box 2: Border Gateway Protocol (BGP)
Incorrect:
Microsoft does not support HSRP or VRRP for high availability configurations.

Design for disaster recovery with expressroute private peering

149
Q

Q21 T4

You are designing an application that will use Azure Linux virtual machines to analyze video files. The files will be uploaded from corporate offices that connect to
Azure by using ExpressRoute.
You plan to provision an Azure Storage account to host the files.
You need to ensure that the storage account meets the following requirements:
✑ Supports video files of up to 7 TB
✑ Provides the highest availability possible
✑ Ensures that storage is optimized for the large video files
✑ Ensures that files from the on-premises network are uploaded by using ExpressRoute
How should you configure the storage account? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Storage Account Configuration

Storage Account Type:
* 2. Premium page blobs

Data redundancy:
* 1. ZRS (Zone Redundant Storage)

Networking:
* 2. A private endpoint

Explanation:

  • Premium page blobs are optimized for random read/write IOPS, which is ideal for video analysis workloads. They also support large file sizes, up to 8 TB, meeting the requirement for 7 TB video files.
  • ZRS provides the highest level of availability by replicating data across three availability zones within a region. This ensures data durability even in case of a zone failure.
  • A private endpoint gives the storage account a private IP address inside the virtual network, so the on-premises offices can upload files over ExpressRoute private peering and the Linux virtual machines can access the files privately and securely.

By configuring the storage account with these settings, you ensure that the video files are stored securely, reliably, and efficiently, meeting all the specified requirements.

Why Not Other Options?

Storage Account Type
* Premium file shares: While suitable for file-based workloads, they are not optimized for large, random read/write operations typical of video analysis.
* Standard general-purpose v2: While cost-effective, it doesn’t offer the performance required for video processing, which often involves high IOPS.

Data Redundancy
* LRS (Locally Redundant Storage): Offers lower availability compared to ZRS and GRS.
* GRS (Geo-Redundant Storage): Provides geo-replication but with potential read latency, which is not ideal for video processing requiring low latency access.

Networking
* Azure Route Server: exchanges routes between network virtual appliances and virtual networks; it is not a mechanism for reaching a storage account.
* A service endpoint: secures access from subnets inside Azure virtual networks, but it is not reachable from the on-premises network over ExpressRoute private peering, whereas a private endpoint is.

In conclusion, the chosen options of Premium page blobs, ZRS, and a private endpoint offer the best combination of performance, availability, and cost-efficiency for the given scenario.

150
Q

Q22 T4

A company plans to implement an HTTP-based API to support a web app. The web app allows customers to check the status of their orders.
The API must meet the following requirements:
✑ Implement Azure Functions.
✑ Provide public read-only operations.
✑ Prevent write operations.
You need to recommend which HTTP methods and authorization level to configure.
What should you recommend? To answer, configure the appropriate options in the dialog box in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Box 1: GET only
GET for read-only operations.

Box 2: Anonymous
Anonymous for public operations.
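The effect of the GET-only, anonymous configuration can be sketched without any framework; the handler below is a hypothetical stand-in for the function body, not the Azure Functions programming model itself:

```python
# Sketch of a read-only HTTP handler: GET is served, every other method
# is rejected with 405, and no API key is required (anonymous access).
ORDERS = {"1001": "shipped", "1002": "processing"}

def handle(method, order_id):
    if method != "GET":
        return 405, "Method Not Allowed"
    status = ORDERS.get(order_id)
    if status is None:
        return 404, "Not Found"
    return 200, status

print(handle("GET", "1001"))   # (200, 'shipped')
print(handle("POST", "1001"))  # (405, 'Method Not Allowed')
```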

151
Q

Q23 T4

You have an Azure subscription.
You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements:
✑ Only allow the creation of the virtual machines in specific regions.
✑ Only allow the creation of specific sizes of virtual machines.
What should you include in the recommendation?
A. Azure Resource Manager (ARM) templates
B. Azure Policy
C. Conditional Access policies
D. role-based access control (RBAC)

A

Correct Answer: B

Azure Policies allows you to specify allowed locations, and allowed VM SKUs.

Allowed virtual machine size SKUs This policy enables you to specify a set of virtual machine size SKUs that your organization can deploy.
Allowed locations This policy enables you to restrict the locations your organization can specify when deploying resources. Use to enforce your geo-compliance requirements. Excludes resource groups, Microsoft.AzureActiveDirectory/b2cDirectories, and resources that use the ‘global’ region.
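Azure Policy definitions are JSON documents, but their gating effect is easy to sketch: a deployment request is denied unless its location and VM size appear in the allowed lists. The allowed values below are examples only:

```python
# Sketch of how the 'Allowed locations' and 'Allowed virtual machine
# size SKUs' built-in policies gate a VM deployment request.
ALLOWED_LOCATIONS = {"westeurope", "northeurope"}
ALLOWED_SKUS = {"Standard_D2s_v5", "Standard_D4s_v5"}

def evaluate(request):
    if request["location"] not in ALLOWED_LOCATIONS:
        return "Deny: disallowed location"
    if request["vm_size"] not in ALLOWED_SKUS:
        return "Deny: disallowed VM size"
    return "Allow"

print(evaluate({"location": "westeurope", "vm_size": "Standard_D2s_v5"}))  # Allow
print(evaluate({"location": "eastus", "vm_size": "Standard_D2s_v5"}))      # Deny: disallowed location
```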

Reference

152
Q

Q24 T4

You have an on-premises network that uses an IP address space of 172.16.0.0/16.
You plan to deploy 30 virtual machines to a new Azure subscription.
You identify the following technical requirements:
✑ All Azure virtual machines must be placed on the same subnet named Subnet1.
✑ All the Azure virtual machines must be able to communicate with all on-premises servers.
✑ The servers must be able to communicate between the on-premises network and Azure by using a site-to-site VPN.
You need to recommend a subnet design that meets the technical requirements.
What should you include in the recommendation? To answer, drag the appropriate network addresses to the correct subnets. Each network address may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place

A

Answer

The given answer is correct, and it is easy to spot even without much subnetting knowledge:
The private IP ranges for on-premises and Azure must NOT overlap. The on-premises network already uses 172.16.0.0/16, so you can eliminate any 172.16.x.x options for the Azure side.
Also, the gateway subnet for the site-to-site VPN must be at least /27, with /26 or /25 preferable.
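Both checks can be verified with Python’s standard ipaddress module:

```python
import ipaddress

# The on-premises address space from the scenario.
on_prem = ipaddress.ip_network("172.16.0.0/16")

# Candidate Azure address spaces: anything under 172.16.x.x overlaps.
print(on_prem.overlaps(ipaddress.ip_network("172.16.1.0/24")))   # True  -> reject
print(on_prem.overlaps(ipaddress.ip_network("192.168.0.0/24")))  # False -> usable

# A site-to-site VPN gateway subnet should be /27 or larger.
gateway = ipaddress.ip_network("192.168.0.224/27")
print(gateway.num_addresses)  # 32
```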

153
Q

Q25 T4

You have data files in Azure Blob Storage.
You plan to transform the files and move them to Azure Data Lake Storage.
You need to transform the data by using mapping data flow.
Which service should you use?
A. Azure Databricks
B. Azure Storage Sync
C. Azure Data Factory
D. Azure Data Box Gateway

A

Correct Answer: C

You can copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics.

Reference

What are mapping data flows?

Mapping data flows are visually designed data transformations in Azure Data Factory. Data flows allow data engineers to develop data transformation logic without writing code. The resulting data flows are executed as activities within Azure Data Factory pipelines that use scaled-out Apache Spark clusters. Data flow activities can be operationalized using existing Azure Data Factory scheduling, control, flow, and monitoring capabilities.

154
Q

Q26 T4

You have an Azure subscription.

You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:

✑ Minimize the time it takes to provision compute resources during scale-out operations.
✑ Support autoscaling of Windows Server containers.

Which scaling option should you recommend?

A. Kubernetes version 1.20.2 or newer
B. Virtual nodes with Virtual Kubelet ACI
C. cluster autoscaler
D. horizontal pod autoscaler

A

Correct Answer: C

Deployments can scale across AKS with no delay as cluster autoscaler deploys new nodes in your AKS cluster.
Note: AKS clusters can scale in one of two ways:
* The cluster autoscaler watches for pods that can’t be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes.
* The horizontal pod autoscaler uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand.

Incorrect:

Not D: If your application needs to rapidly scale, the horizontal pod autoscaler may schedule more pods than the existing compute resources in the node pool can provide. If configured, this scenario would then trigger the cluster autoscaler to deploy additional nodes in the node pool, but it may take a few minutes for those nodes to successfully provision and allow the Kubernetes scheduler to run pods on them.
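For contrast with the cluster autoscaler, the horizontal pod autoscaler described above is configured as a Kubernetes manifest like this (illustrative only; the deployment name and thresholds are hypothetical):

```yaml
# Scales pod replicas, not nodes - node capacity still comes from the
# cluster autoscaler when the pool runs out of room.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app1-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app1
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

This makes the distinction in the answer concrete: the HPA adds pods when CPU demand rises, while the cluster autoscaler adds the Windows Server nodes those pods need.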

Reference

155
Q

Q27 T4

Your on-premises network contains a file server named Server1 that stores 500 GB of data.

You need to use Azure Data Factory to copy the data from Server1 to Azure Storage.

You add a new data factory.

What should you do next? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Box 1: Install a self-hosted integration runtime.
If your data store is located inside an on-premises network, an Azure virtual network, or Amazon Virtual Private Cloud, you need to configure a self-hosted integration runtime to connect to it.
The Integration Runtime to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in private network). If not specified, it uses the default Azure Integration Runtime.

Box 2: Create a pipeline.
You perform the Copy activity with a pipeline.

Reference

156
Q

Q28 T4

You have an Azure subscription.

You need to recommend an Azure Kubernetes Service (AKS) solution that will use Linux nodes. The solution must meet the following requirements:

✑ Minimize the time it takes to provision compute resources during scale-out operations.
✑ Support autoscaling of Linux containers.
✑ Minimize administrative effort.

Which scaling option should you recommend?

A. horizontal pod autoscaler
B. cluster autoscaler
C. virtual nodes
D. Virtual Kubelet

A

Correct Answer: C

To rapidly scale application workloads in an AKS cluster, you can use virtual nodes. With virtual nodes, you have quick provisioning of pods, and only pay per second for their execution time. You don’t need to wait for Kubernetes cluster autoscaler to deploy VM compute nodes to run the additional pods. Virtual nodes are only supported with Linux pods and nodes.

157
Q

Q29 T4

You are designing an order processing system in Azure that will contain the Azure resources shown in the following table

The order processing system will have the following transaction flow:

✑ A customer will place an order by using App1.
✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.
✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.
✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.
✑ All the steps of the transaction will be logged to storage1.

Which type of resource should you recommend for the integration component?

A. an Azure Service Bus queue
B. an Azure Data Factory pipeline
C. an Azure Event Grid domain
D. an Azure Event Hubs capture

A

Correct Answer: B

Azure Data Factory is the cloud-based ETL and data integration service that allows you to create data-driven workflows for orchestrating data movement and transforming data at scale. Using Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores.
Data Factory contains a series of interconnected systems that provide a complete end-to-end platform for data engineers.

Reference

Please note the question asks for a component that will "process the message". A Service Bus queue cannot process a message; it is just a message queue.
ADF has control activities (such as the If Condition activity) that provide if-then flow.

158
Q

Q30 T4

You have 100 Microsoft SQL Server Integration Services (SSIS) packages that are configured to use 10 on-premises SQL Server databases as their destinations.
You plan to migrate the 10 on-premises databases to Azure SQL Database.
You need to recommend a solution to create Azure-SQL Server Integration Services (SSIS) packages. The solution must ensure that the packages can target the
SQL Database instances as their destinations.
What should you include in the recommendation?
A. Data Migration Assistant (DMA)
B. Azure Data Factory
C. Azure Data Catalog
D. SQL Server Migration Assistant (SSMA)

A

Correct Answer: B

Migrate on-premises SSIS workloads to SSIS using ADF (Azure Data Factory).
When you migrate your database workloads from SQL Server on-premises to Azure database services, namely Azure SQL Database or Azure SQL Managed Instance, your ETL workloads on SQL Server Integration Services (SSIS), one of the primary value-added services, will need to be migrated as well.
Azure-SSIS Integration Runtime (IR) in Azure Data Factory (ADF) supports running SSIS packages. Once Azure-SSIS IR is provisioned, you can then use familiar tools, such as SQL Server Data Tools (SSDT)/SQL Server Management Studio (SSMS), and command-line utilities, such as dtinstall/dtutil/dtexec, to deploy and run your packages in Azure.

Reference

You should include Azure Data Factory in the recommendation to create Azure-SQL Server Integration Services (SSIS) packages. Azure Data Factory supports running SSIS packages in the cloud using Azure-SSIS Integration Runtime, which allows you to target Azure SQL Database instances as the destinations for your SSIS packages. This enables you to continue using your existing SSIS packages while migrating your on-premises databases to Azure SQL Database.

159
Q

Q31 T4

You have an Azure virtual machine named VM1 that runs Windows Server 2019 and contains 500 GB of data files.
You are designing a solution that will use Azure Data Factory to transform the data files, and then load the files to Azure Data Lake Storage.
What should you deploy on VM1 to support the design?
A. the On-premises data gateway
B. the Azure Pipelines agent
C. the self-hosted integration runtime
D. the Azure File Sync agent

A

Correct Answer: C

The integration runtime (IR) is the compute infrastructure that Azure Data Factory and Synapse pipelines use to provide data-integration capabilities across different network environments.
A self-hosted integration runtime can run copy activities between a cloud data store and a data store in a private network. It also can dispatch transform activities against compute resources in an on-premises network or an Azure virtual network. The installation of a self-hosted integration runtime needs an on-premises machine or a virtual machine inside a private network.

Reference

The Integration Runtime (IR) is the compute infrastructure used by Azure Data Factory and Azure Synapse pipelines to provide the following data integration capabilities across different network environments:

Data Flow: Execute a Data Flow in a managed Azure compute environment.
Data movement: Copy data across data stores in public or private networks (both on-premises and virtual networks). The service provides support for built-in connectors, format conversion, column mapping, and performant and scalable data transfer.
Activity dispatch: Dispatch and monitor transformation activities running on a variety of compute services such as Azure Databricks, Azure HDInsight, ML Studio (classic), Azure SQL Database, SQL Server, and more.
SSIS package execution: Natively execute SQL Server Integration Services (SSIS) packages in a managed Azure compute environment.

160
Q

Q32 T4

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.
Your company has a line-of-business (LOB) application that was developed internally.
You need to implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) when users attempt to access the application from an unknown location.
Which two features should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Azure AD Privileged Identity Management (PIM)
B. Azure Application Gateway
C. Azure AD enterprise applications
D. Azure AD Identity Protection
E. Conditional Access policies

A

Correct Answer: CE

C. Azure AD enterprise applications
E. Conditional Access policies

C. Azure AD enterprise applications: You need to configure the LOB application as an enterprise application in Azure AD. This will allow you to configure SAML-based SSO for the application, enabling users to sign in using their Azure AD credentials.

E. Conditional Access policies: You can create a Conditional Access policy in Azure AD to enforce MFA when users attempt to access the application from an unknown location. Conditional Access policies allow you to set specific conditions, such as location or device state, and apply security requirements, like MFA, when those conditions are met.

161
Q

Q33 T4

You plan to automate the deployment of resources to Azure subscriptions.
What is a difference between using Azure Blueprints and Azure Resource Manager (ARM) templates?
A. ARM templates remain connected to the deployed resources.
B. Only blueprints can contain policy definitions.
C. Only ARM templates can contain policy definitions.
D. Blueprints remain connected to the deployed resources.

A

Correct Answer: D

With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and the blueprint assignment (what was deployed) is preserved.
This connection supports improved tracking and auditing of deployments.
Incorrect:
Not A: An ARM template is a document that doesn’t exist natively in Azure - each is stored either locally or in source control or in Templates (preview). The template gets used for deployments of one or more Azure resources, but once those resources deploy there’s no active connection or relationship to the template.
Not C: Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts, such as:
* Role assignments
* Policy assignments
* Azure Resource Manager templates (ARM templates)
* Resource groups

162
Q

Q34 T4

You have the resources shown in the following table

You create a new resource group in Azure named RG2.
You need to move the virtual machines to RG2.
What should you use to move each virtual machine? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Box 1: Azure Resource Mover -
To move Azure VMs to another region, Microsoft now recommends using Azure Resource Mover.
Incorrect:
Not Azure Migrate: We are not migrating, only moving a VM between resource groups.

Box 2: Azure Migrate -
Azure Migrate provides a centralized hub to assess and migrate on-premises servers, infrastructure, applications, and data to Azure.
Azure Migrate includes Azure Migrate: Server Migration, which migrates VMware VMs, Hyper-V VMs, physical servers, other virtualized servers, and public cloud VMs to Azure.
Incorrect:
Not Arc: Azure Migrate is adequate. No need to use Azure Arc.
Not Data Migration Assistant: Data Migration Assistant is a stand-alone tool to assess SQL Servers.
It is used to assess SQL Server databases for migration to Azure SQL Database, Azure SQL Managed Instance, or Azure VMs running SQL Server.
Not Lighthouse: Azure Lighthouse enables multi-tenant management with scalability, higher automation, and enhanced governance across resources.
With Azure Lighthouse, service providers can deliver managed services using comprehensive and robust tooling built into the Azure platform. Customers maintain control over who has access to their tenant, which resources they can access, and what actions can be taken.

163
Q

Q35 T4

You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.
You need to recommend a load balancing service for the planned deployment. The solution must meet the following requirements:
✑ Maintain access to the app in the event of a regional outage.
✑ Support Azure Web Application Firewall (WAF).
✑ Support cookie-based affinity.
✑ Support URL routing.
What should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. Azure Application Gateway
D. Azure Load Balancer

A

Correct Answer: C

The best option to meet the given requirements is C. Azure Application Gateway.

Here’s why:
Requirements and how Azure Application Gateway fulfills them:

  • Maintain access to the app in the event of a regional outage: Application Gateway is deployed per region, so deploying instances in multiple regions keeps the app reachable if one region experiences an outage.
  • Support Azure Web Application Firewall (WAF): Azure Application Gateway comes with built-in WAF capabilities, allowing you to protect your web applications from common web attacks.
  • Support cookie-based affinity: Azure Application Gateway can maintain session affinity based on cookies, ensuring that requests from the same client are always routed to the same instance.
  • Support URL routing: Azure Application Gateway allows you to route traffic based on URL patterns, enabling you to create different routing rules for different web applications or paths within your application.
While Azure Traffic Manager can also provide load balancing and regional failover, it doesn't offer WAF capabilities or cookie-based affinity. Azure Load Balancer primarily balances network traffic at the transport layer and doesn't support URL routing or WAF. Azure Front Door is suitable for global traffic distribution but might not be the best choice for regional load balancing and WAF requirements in this specific scenario.

164
Q

Q36 T4

You have the Azure resources shown in the following table

You need to design a solution that provides on-premises network connectivity to SQLDB1 through PE1.
How should you configure name resolution? To answer select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area

A

Answer

Box 1 is wrong: the VNet default configuration is to use Azure DNS.
The correct answer for Box 1 should be to configure VM1 to forward contoso.com to the Azure-provided DNS at 168.63.129.16, converting VM1 into a DNS forwarder.

Reasoning: This setup allows the DNS server on VM1 to forward DNS queries for the contoso.com domain to the Azure-provided DNS. The Azure DNS can resolve the private endpoint (PE1) to its private IP address.
On-premises DNS Configuration
Option: Forward contoso.com to VM1

Reasoning: By configuring the on-premises DNS to forward requests for contoso.com to VM1, DNS queries for this domain will be directed to the DNS server on VM1, which will then forward them to the Azure-provided DNS.

Box 2: Forward contoso.com to VM1
Forward to the DNS server VM1.
Note: You can use the following options to configure your DNS settings for private endpoints:
* Use the host file (only recommended for testing). You can use the host file on a virtual machine to override the DNS.
* Use a private DNS zone. You can use private DNS zones to override the DNS resolution for a private endpoint. A private DNS zone can be linked to your virtual network to resolve specific domains.
* Use your DNS forwarder (optional). You can use your DNS forwarder to override the DNS resolution for a private link resource. Create a DNS forwarding rule to use a private DNS zone on your DNS server hosted in a virtual network.

165
Q

Q37 T4

You are designing a microservices architecture that will support a web application.
The solution must meet the following requirements:
✑ Deploy the solution on-premises and to Azure.
✑ Support low-latency and hyper-scale operations.
✑ Allow independent upgrades to each microservice.
✑ Set policies for performing automatic repairs to the microservices.
You need to recommend a technology.
What should you recommend?

A. Azure Container Instance
B. Azure Logic App
C. Azure Service Fabric
D. Azure virtual machine scale set

A

Correct Answer: C

Azure Service Fabric enables you to create Service Fabric clusters on premises or in other clouds.
Azure Service Fabric is low-latency and scales up to thousands of machines.

Azure Service Fabric is the recommended technology for the microservices architecture you are designing, as it meets all the specified requirements:

✓ Supports deployment both on-premises and to Azure, providing a consistent platform for managing and deploying microservices.
✓ Enables low-latency and hyper-scale operations, as it is designed for building scalable and reliable applications.
✓ Allows independent upgrades to each microservice, as it supports versioning and rolling upgrades.
✓ Provides built-in health monitoring and automatic repairs for the microservices with configurable policies.

166
Q

Q38 T4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app. The solution must meet the following replication requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Front Door to provide access to the app.
Does this meet the goal?

A. Yes
B. No

A

Correct Answer: A

Azure Front Door meets the requirements. The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door controls the number of requests allowed from clients during a one-minute duration.
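The one-minute rate-limit window mentioned above can be illustrated with a small fixed-window counter. This is a conceptual sketch only, not Front Door's actual implementation, and the client address and limit are hypothetical:

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Illustrative fixed one-minute window with a per-client counter,
    a simplification of how a WAF rate-limit rule behaves."""

    def __init__(self, limit_per_minute):
        self.limit = limit_per_minute
        self.counts = defaultdict(int)  # (client, window) -> request count

    def allow(self, client, now=None):
        # Bucket requests by which one-minute window they fall into.
        window = int((now if now is not None else time.time()) // 60)
        key = (client, window)
        if self.counts[key] >= self.limit:
            return False  # over the limit for this window: block
        self.counts[key] += 1
        return True

limiter = FixedWindowRateLimiter(limit_per_minute=3)
results = [limiter.allow("10.0.0.5", now=120.0) for _ in range(4)]
print(results)  # [True, True, True, False]

# A new one-minute window resets the counter.
print(limiter.allow("10.0.0.5", now=181.0))  # True
```

Real WAF rate limiting adds sliding windows, distributed counters, and configurable match conditions, but the window-and-threshold idea is the same.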

NGINX Plus and Microsoft Azure Load Balancers

167
Q

Q39 T4

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.
What should you include in the recommendation?
A. Azure Activity Log
B. Azure Arc
C. Azure Analysis Services
D. Azure Monitor action groups

A

Correct Answer: A

Activity logs are kept for 90 days. You can query for any range of dates, as long as the starting date isn’t more than 90 days in the past.
Through activity logs, you can determine:
✑ what operations were taken on the resources in your subscription
✑ who started the operation
✑ when the operation occurred
✑ the status of the operation
✑ the values of other properties that might help you research the operation

168
Q

Q40 T4

You have an Azure subscription.
You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements:
✑ Only allow the creation of the virtual machines in specific regions.
✑ Only allow the creation of specific sizes of virtual machines.
What should you include in the recommendation?
A. Attribute-based access control (ABAC)
B. Azure Policy
C. Conditional Access policies
D. role-based access control (RBAC)

A

Correct Answer: B

Azure Policies allows you to specify allowed locations, and allowed VM SKUs.
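A sketch of what such a policy rule might look like, denying VM creation outside allowed regions and sizes. The region and SKU values are hypothetical; the built-in "Allowed locations" and "Allowed virtual machine size SKUs" policies cover the same ground:

```json
{
  "properties": {
    "displayName": "Allowed VM sizes in allowed regions",
    "policyRule": {
      "if": {
        "allOf": [
          { "field": "type", "equals": "Microsoft.Compute/virtualMachines" },
          {
            "anyOf": [
              { "field": "location", "notIn": [ "eastus", "westeurope" ] },
              {
                "field": "Microsoft.Compute/virtualMachines/sku.name",
                "notIn": [ "Standard_D2s_v5", "Standard_D4s_v5" ]
              }
            ]
          }
        ]
      },
      "then": { "effect": "deny" }
    }
  }
}
```

Assigning this at the subscription scope blocks non-compliant deployments regardless of which developer submits them.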

169
Q

Q41 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.
What should you include in the recommendation?
A. Azure Notification Hubs
B. Azure Data Lake
C. Azure Service Bus
D. Azure Blob Storage

A

Correct Answer: C

Asynchronous messaging options.

There are different types of messages, and different entities participate in a messaging infrastructure. Based on the requirements of each message type, Microsoft recommends one of its Azure messaging services: Azure Service Bus, Event Grid, or Event Hubs.
Azure Service Bus queues are well suited for transferring commands from producers to consumers.
Data is transferred between different applications and services using messages. A message is a container decorated with metadata that contains data. The data can be any kind of information, including structured data encoded in common formats such as JSON, XML, Apache Avro, or plain text.
Reference:
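A minimal sketch of the XML payload side of this flow, using only the standard library. The element names and values are hypothetical; sending would use the `azure-servicebus` SDK, which this sketch deliberately does not call:

```python
import xml.etree.ElementTree as ET

# Build an illustrative order message as XML.
order = ET.Element("Order", id="12345")
ET.SubElement(order, "Customer").text = "Contoso"
ET.SubElement(order, "Item", sku="SKU-001").text = "2"
ET.SubElement(order, "Total", currency="USD").text = "59.98"

payload = ET.tostring(order, encoding="unicode")
print(payload)

# Sending it would wrap the payload in a Service Bus message, e.g.:
#   ServiceBusMessage(payload, content_type="application/xml")
# The billing/shipping services then receive and parse it asynchronously:
parsed = ET.fromstring(payload)
print(parsed.find("Customer").text)  # Contoso
```

Because the queue decouples producer and consumer, each cloud service can pull and parse these XML messages at its own pace.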

170
Q

Q42 T4

You have 100 devices that write performance data to Azure Blob Storage.
You plan to store and analyze the performance data in an Azure SQL database.
You need to recommend a solution to continually copy the performance data to the Azure SQL database.
What should you include in the recommendation?
A. Azure Data Factory
B. Data Migration Assistant (DMA)
C. Azure Data Box
D. Azure Database Migration Service

A

Correct Answer: A

Azure Data Factory is a cloud-based data integration service that allows you to create, schedule, and manage data pipelines. It can be used to continually copy data from various sources, including Azure Blob Storage, to multiple destinations such as an Azure SQL Database. The other options aren’t suitable for continual data copying in the scenario described.

171
Q

Q43 T4

You need to recommend a storage solution for the records of a mission critical application. The solution must provide a Service Level Agreement (SLA) for the latency of write operations and the throughput.
What should you include in the recommendation?
A. Azure Data Lake Storage Gen2
B. Azure Blob Storage
C. Azure SQL
D. Azure Cosmos DB

A

Correct Answer: D

Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. It offers turnkey global distribution across any number of Azure regions by transparently scaling and replicating your data wherever your users are. The service offers comprehensive 99.99% SLAs covering throughput, consistency, availability, and latency for database accounts scoped to a single Azure region (configured with any of the five consistency levels) or spanning multiple Azure regions (configured with any of the four relaxed consistency levels). Azure Cosmos DB allows configuring multiple Azure regions as writable endpoints for a database account; in this configuration, it offers a 99.999% SLA for both read and write availability.

The correct answer is D. Azure Cosmos DB.
Here’s why:
* Azure Cosmos DB is a fully managed NoSQL database that offers a high level of availability, performance, and scalability.
* It provides a SLA for both write latency and throughput, making it suitable for mission-critical applications that require predictable performance.
* Azure Cosmos DB also supports various data models, including document, key-value, graph, and table, making it flexible for different data structures.
* It offers global distribution and automatic failover, ensuring data redundancy and high availability across multiple regions.
Here’s a breakdown of why the other options are not as suitable:
* A. Azure Data Lake Storage Gen2: While it’s a good option for storing large datasets, it doesn’t provide the same level of SLA for latency and throughput as Azure Cosmos DB. It’s more suited for analytics and data warehousing scenarios.
* B. Azure Blob Storage: It’s primarily used for storing unstructured data like images, videos, and documents. It doesn’t offer the same level of performance and scalability as Azure Cosmos DB, especially for high-throughput write operations.
* C. Azure SQL: While it’s a good choice for relational databases, it might not be the most efficient for storing large amounts of unstructured data or for applications that require high-throughput write operations.
Therefore, based on the requirements of a mission-critical application with a need for SLA on latency and throughput, Azure Cosmos DB is the most suitable storage solution.

172
Q

Q44 T4

You are planning a storage solution. The solution must meet the following requirements:
✑ Support at least 500 requests per second.
✑ Support a large image, video, and audio streams.
Which type of Azure Storage account should you provision?
A. standard general-purpose v2
B. premium block blobs
C. premium page blobs
D. premium file shares

A

Correct Answer: B

Use Azure Blobs if you want your application to support streaming and random access scenarios.
It’s ideal for applications that require high transaction rates or consistent low-latency storage.
Incorrect:
Not A: Standard storage accounts have a default maximum request rate of 20,000 requests per second per storage account, but are not optimized for video and audio streams.
Not C: Page blobs are best suited for random reads and random writes.
Not D: FileStorage (premium) storage accounts have a maximum concurrent request rate of 100,000 IOPS and a maximum file size of 4 TB, but are not optimized for video and audio streams.

Reference

173
Q

Q45 T4

You need to recommend a data storage solution that meets the following requirements:
✑ Ensures that applications can access the data by using a REST connection
✑ Hosts 20 independent tables of varying sizes and usage patterns
✑ Automatically replicates the data to a second Azure region
✑ Minimizes costs
What should you recommend?
A. an Azure SQL Database elastic pool that uses active geo-replication
B. tables in an Azure Storage account that use geo-redundant storage (GRS)
C. tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
D. an Azure SQL database that uses active geo-replication

A

Correct Answer: B

The Table service offers structured storage in the form of tables. The Table service API is a REST API for working with tables and the data that they contain.
Geo-redundant storage (GRS) has a lower cost than read-access geo-redundant storage (RA-GRS).
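A sketch of the REST access requirement: building a Table service query URI with the standard library. The account and table names are hypothetical:

```python
from urllib.parse import quote

# Illustrative Table service REST request.
account = "mystorageacct"
table = "DeviceMetrics"
filter_expr = ("PartitionKey eq 'device01' and "
               "Timestamp ge datetime'2024-01-01T00:00:00Z'")

url = (f"https://{account}.table.core.windows.net/{table}()"
       f"?$filter={quote(filter_expr)}")
print(url)

# With RA-GRS, a readable secondary endpoint exists at the account name
# suffixed with "-secondary"; plain GRS replicates but exposes no readable
# secondary, which is why it costs less.
secondary = url.replace(f"{account}.table", f"{account}-secondary.table")
print(secondary)
```

The secondary-endpoint line is exactly the cost trade-off in the answer: you pay for RA-GRS only if the application needs to read from the secondary region.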

174
Q

Q46 T4

You are designing a software as a service (SaaS) application that will enable Azure Active Directory (Azure AD) users to create and publish online surveys. The
SaaS application will have a front-end web app and a back-end web API. The web app will rely on the web API to handle updates to customer surveys.
You need to design an authorization flow for the SaaS application. The solution must meet the following requirements:
✑ To access the back-end web API, the web app must authenticate by using OAuth 2 bearer tokens.
✑ The web app must authenticate by using the identities of individual users.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: Azure AD -
The Azure AD server issues tokens (access & refresh token). See step 5 below in graphic.
OAuth 2.0 authentication with Azure Active Directory.
OAuth 2.0 is the industry protocol for authorization. It allows a user to grant limited access to their protected resources. Designed to work with Hypertext Transfer Protocol (HTTP), OAuth separates the role of the client from the resource owner. The client requests access to resources controlled by the resource owner and hosted by the resource server (here, the Azure AD server). The resource server issues access tokens with the approval of the resource owner. The client uses the access tokens to access the protected resources hosted by the resource server.

Box 2: A web API - Delegated access is used
The bearer token sent to the web API contains the user identity.
The web API makes authorization decisions based on the user identity.
Reference:
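To make the two boxes concrete, here is a stdlib-only sketch of where the bearer token travels and where the user identity lives inside it. The token is an unsigned, hand-built illustration; real Azure AD tokens are signed and must be validated by the web API before any claim is trusted:

```python
import base64
import json

def b64url(data):
    # JWT segments are base64url-encoded without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical claims for illustration only.
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
claims = b64url(json.dumps({"name": "user1@contoso.com",
                            "scp": "Surveys.ReadWrite"}).encode())
token = f"{header}.{claims}."

# Box 1: the web app calls the back-end web API with the bearer token.
request_headers = {"Authorization": f"Bearer {token}"}

# Box 2: the web API reads the payload and authorizes based on the
# individual user identity it carries (delegated access).
payload_b64 = token.split(".")[1]
payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
identity = json.loads(base64.urlsafe_b64decode(payload_b64))
print(identity["name"])  # user1@contoso.com
```

In production the web API would verify the token signature and issuer against Azure AD's keys rather than decoding the payload directly.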

175
Q

Q47 T4

You plan to create an Azure environment that will contain a root management group and 10 child management groups. Each child management group will contain five Azure subscriptions. You plan to have between 10 and 30 resource groups in each subscription.
You need to design an Azure governance solution. The solution must meet the following requirements:
✑ Use Azure Blueprints to control governance across all the subscriptions and resource groups.
✑ Ensure that Blueprints-based configurations are consistent across all the subscriptions and resource groups.
✑ Minimize the number of blueprint definitions and assignments.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1. The root management group
When creating a blueprint definition, you’ll define where the blueprint is saved. Blueprints can be saved to a management group or subscription that you have
Contributor access to. If the location is a management group, the blueprint is available to assign to any child subscription of that management group.
The root management group is built into the hierarchy to have all management groups and subscriptions fold up to it. This root management group allows for global policies and Azure role assignments to be applied at the directory level.
Box 2. The root management group

On July 11, 2026, Blueprints (Preview) will be deprecated. Migrate your existing blueprint definitions and assignments to Template Specs and Deployment Stacks

176
Q

Q48 T4

You are designing a virtual machine that will run Microsoft SQL Server and contain two data disks. The first data disk will store log files, and the second data disk will store data. Both disks are P40 managed disks.
You need to recommend a host caching method for each disk. The method must provide the best overall performance for the virtual machine while preserving the integrity of the SQL data and logs.
Which host caching method should you recommend for each disk? To answer, drag the appropriate methods to the correct disks. Each method may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place

A

Answer

The recommended host caching methods for the log and data disks are:

  • Log disk: None
  • Data disk: ReadOnly

Here's why:

Log disk:

  • The transaction log workload consists of sequential writes, so read caching provides no benefit.
  • Read/Write caching must not be used on disks that hold SQL Server data or log files: if the host fails before cached writes reach durable storage, data can be lost, which violates the requirement to preserve the integrity of the logs. With caching set to None, every log write goes directly to durable storage.

Data disk:

  • ReadOnly caching is recommended for SQL Server data disks because their workload is read-heavy. Serving reads from the host cache reduces I/O operations against the underlying storage and improves read latency.
  • Write operations bypass the read cache and go directly to the disk, so durability is preserved.

These settings match Microsoft's storage guidance for SQL Server on Azure Virtual Machines: enable ReadOnly caching on data-file disks and set caching to None on transaction log disks.

177
Q

Q49 T4

You are designing a solution that calculates 3D geometry from height-map data.
You need to recommend a solution that meets the following requirements:
✑ Performs calculations in Azure.
✑ Ensures that each node can communicate data to every other node.
✑ Maximizes the number of nodes to calculate multiple scenes as fast as possible.
✑ Minimizes the amount of effort to implement the solution.

Which two actions should you include in the recommendation? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Enable parallel file systems on Azure.
B. Create a render farm that uses virtual machines.
C. Create a render farm that uses virtual machine scale sets.
D. Create a render farm that uses Azure Batch.
E. Enable parallel task execution on compute nodes.

A

Correct Answer: DE

Multi-instance tasks allow you to run an Azure Batch task on multiple compute nodes simultaneously. These tasks enable high performance computing scenarios like Message Passing Interface (MPI) applications in Batch.
You configure compute nodes for parallel task execution at the pool level.
Azure Batch allows you to set task slots per node up to (4x) the number of node cores.

How it works

A common scenario for Batch involves scaling out intrinsically parallel work, such as the rendering of images for 3D scenes, on a pool of compute nodes. This pool can be your “render farm” that provides tens, hundreds, or even thousands of cores to your rendering job.

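The "up to 4x the number of node cores" limit above can be expressed as a quick check. This is a sketch, assuming the documented per-node cap of 256 task slots; the VM sizes and core counts in the loop are illustrative examples, not part of the question.

```python
# Sketch: largest taskSlotsPerNode value Azure Batch accepts for a node size,
# per the documented limits (4x the core count, capped at 256 slots per node).
def max_task_slots(node_cores: int) -> int:
    """Return the maximum valid taskSlotsPerNode for a given core count."""
    return min(4 * node_cores, 256)

# Illustrative node sizes (core counts are assumptions for the example).
for size, cores in [("Standard_D2s_v3", 2), ("Standard_D64s_v3", 64), ("Standard_M128", 128)]:
    print(size, max_task_slots(cores))
```

Note how the cap kicks in for large nodes: a 128-core node still gets at most 256 slots, not 512.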

178
Q

Q50 T4

You have an on-premises application that consumes data from multiple databases. The application code references database tables by using a combination of the server, database, and table name.

You need to migrate the application data to Azure.

To which two services can you migrate the application data to achieve the goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. SQL Server Stretch Database
B. SQL Server on an Azure virtual machine
C. Azure SQL Database
D. Azure SQL Managed Instance

A

Correct Answer: BD

Answer:

B. SQL Server on an Azure virtual machine
D. Azure SQL Managed Instance

Explanation:

Both options provide the flexibility to maintain the original connection strings without significant code modifications.

  • SQL Server on an Azure virtual machine: This option replicates the on-premises environment closely, preserving the server, database, and table structure in the connection string.
  • Azure SQL Managed Instance: While it abstracts away the underlying infrastructure, it maintains the familiar server, database, and table structure in the connection string, allowing for minimal code changes.

These options offer the most straightforward migration path while preserving the application’s existing logic.

Cross-database queries are supported by SQL Server, for example on an Azure virtual machine, and also supported by an Azure SQL Managed Instance.

179
Q

Q51 T4

You plan to migrate on-premises Microsoft SQL Server databases to Azure.
You need to recommend a deployment and resiliency solution that meets the following requirements:
✑ Supports user-initiated backups
✑ Supports multiple automatically replicated instances across Azure regions
✑ Minimizes administrative effort to implement and maintain business continuity
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: An Azure SQL database -
Incorrect answers:
User-initiated backups are not supported by Azure SQL Managed Instance.

Box 2: Active geo-replication -
Active geo-replication is required to provide multiple automatically replicated instances across Azure regions.
Incorrect:
Not SQL Server: Active geo-replication requires an Azure SQL database.

180
Q

Q52 T4

You need to design a highly available Azure SQL database that meets the following requirements:
✑ Failover between replicas of the database must occur without any data loss.
✑ The database must remain available in the event of a zone outage.
✑ Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Managed Instance Business Critical
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Business Critical
D. Azure SQL Database Serverless

A

The best option is C. Azure SQL Database Business Critical, for the following reasons:

  • The Business Critical tier uses synchronous replication, ensuring no data loss during failover, which meets the "failover without any data loss" requirement.
  • It offers zone-redundant high availability, so the database remains available during a zone outage.
  • Azure SQL Managed Instance Business Critical also offers synchronous replication and high availability, but it is typically more expensive than Azure SQL Database Business Critical, so it does not align with the cost-minimization requirement.

Azure SQL Managed Instance General Purpose and Azure SQL Database Serverless are more cost-effective but do not meet the zero-data-loss and zone-outage requirements.

Conclusion:

Option C: Azure SQL Database Business Critical is the best fit for achieving high availability, preventing data loss, and balancing costs.

181
Q

Q53 T4

You have an Azure web app that uses an Azure key vault named KeyVault1 in the West US Azure region.
You are designing a disaster recovery plan for KeyVault1.
You plan to back up the keys in KeyVault1.
You need to identify to where you can restore the backup.
What should you identify?

A. any region worldwide
B. the same region only
C. KeyVault1 only
D. the same geography only

A

Correct Answer: D

The correct answer is D. the same geography only.

Azure Key Vault backups can be restored only to a key vault in the same Azure subscription and the same Azure geography as the source vault. This is an enforced security boundary, not a performance recommendation: the backup blob is encrypted and bound to the geography, so a restore attempted in a different geography fails.

Here's a breakdown of the options:

  • A. any region worldwide: Incorrect. A Key Vault backup cannot be restored outside the geography in which it was created.
  • B. the same region only: Incorrect. The backup can be restored to any region within the same geography (for example, from West US to East US), not only the original region.
  • C. KeyVault1 only: Incorrect. The backup can be restored to a different key vault, as long as it is in the same subscription and geography.
  • D. the same geography only: Correct.

Therefore, when designing a disaster recovery plan for KeyVault1, identify a restore target within the same geography as the original key vault.

182
Q

Q54 T4

You have an on-premises line-of-business (LOB) application that uses a Microsoft SQL Server instance as the backend.
You plan to migrate the on-premises SQL Server instance to Azure virtual machines.
You need to recommend a highly available SQL Server deployment that meets the following requirements:
✑ Minimizes costs
✑ Minimizes failover time if a single server fails

What should you include in the recommendation?
A. an Always On availability group that has premium storage disks and a virtual network name (VNN)
B. an Always On Failover Cluster Instance that has a virtual network name (VNN) and a standard file share
C. an Always On availability group that has premium storage disks and a distributed network name (DNN)
D. an Always On Failover Cluster Instance that has a virtual network name (VNN) and a premium file share

A

Correct Answer: C

Always On availability groups on Azure Virtual Machines are similar to Always On availability groups on-premises, and rely on the underlying Windows Server
Failover Cluster.
If you deploy your SQL Server VMs to a single subnet, you can configure a virtual network name (VNN) and an Azure Load Balancer, or a distributed network name (DNN) to route traffic to your availability group listener.
There are some behavior differences between the functionality of the VNN listener and DNN listener that are important to note:
* Failover time: Failover time is faster when using a DNN listener since there is no need to wait for the network load balancer to detect the failure event and change its routing.
Incorrect:
Not B, not D: Migrate to an Always On availability group, not an Always on Failover cluster Instance.

183
Q

Q55 T4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.

Does this meet the goal?
A. Yes
B. No

A

Correct Answer: B

Instead, you should recommend using an Azure Policy initiative to enforce the location.
Note: Azure Policy definitions can be applied to a specific resource group that contains the App Service instances.
In Azure Policy, we offer several built-in policies that are available by default. For example:
* Allowed Locations (Deny): Restricts the available locations for new resources. Its effect is used to enforce your geo-compliance requirements.

184
Q

Q55(56) T4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using the Regulatory compliance dashboard in Microsoft Defender for Cloud.

Does this meet the goal?
A. Yes
B. No

A

Correct Answer: B

Instead, you should recommend using an Azure Policy initiative to enforce the location.
Note: Azure Policy definitions can be applied to a specific resource group that contains the App Service instances.
In Azure Policy, we offer several built-in policies that are available by default. For example:
* Allowed Locations (Deny): Restricts the available locations for new resources. Its effect is used to enforce your geo-compliance requirements.

185
Q

Q55(57) T4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using an Azure Policy initiative to enforce the location.

Does this meet the goal?
A. Yes
B. No

A

Correct Answer: A

Azure Resource Policy Definitions can be used which can be applied to a specific Resource Group with the App Service instances.
In Azure Policy, we offer several built-in policies that are available by default. For example:
* Allowed Locations (Deny): Restricts the available locations for new resources. Its effect is used to enforce your geo-compliance requirements.

186
Q

Q58 T4

You plan to move a web app named App1 from an on-premises datacenter to Azure.
App1 depends on a custom COM component that is installed on the host server.
You need to recommend a solution to host App1 in Azure. The solution must meet the following requirements:
✑ App1 must be available to users if an Azure datacenter becomes unavailable.
✑ Costs must be minimized.
What should you include in the recommendation?
A. In two Azure regions, deploy a load balancer and a web app.
B. In two Azure regions, deploy a load balancer and a virtual machine scale set.
C. Deploy a load balancer and a virtual machine scale set across two availability zones.
D. In two Azure regions, deploy an Azure Traffic Manager profile and a web app.

A

Correct Answer: C

A virtual machine is required because Azure App Service does not support COM components.
Deploying across two availability zones protects against the failure of a single Azure datacenter, and a single-region deployment costs less than deploying to two regions.
Incorrect:
Not A, Not D: Cannot use a web app.
Azure App Service does not allow the registration of COM components on the platform. If your app makes use of any COM components, these need to be rewritten in managed code and deployed with the site or application.

187
Q

Q59 T4

You plan to deploy an application named App1 that will run in containers on Azure Kubernetes Service (AKS) clusters. The AKS clusters will be distributed across four Azure regions.
You need to recommend a storage solution to ensure that updated container images are replicated automatically to all the Azure regions hosting the AKS clusters.
Which storage solution should you recommend?
A. geo-redundant storage (GRS) accounts
B. Premium SKU Azure Container Registry
C. Azure Content Delivery Network (CDN)
D. Azure Cache for Redis

A

Correct Answer: B

Enable geo-replication for container images.

Best practice: Store your container images in Azure Container Registry and geo-replicate the registry to each AKS region.

To deploy and run your applications in AKS, you need a way to store and pull the container images. Container Registry integrates with AKS, so it can securely store your container images or Helm charts. Container Registry supports multi-master geo-replication to automatically replicate your images to Azure regions around the world.

Geo-replication is a feature of Premium SKU container registries.

Note:
When you use Container Registry geo-replication to pull images from the same region, the results are:
Faster: You pull images from high-speed, low-latency network connections within the same Azure region.
More reliable: If a region is unavailable, your AKS cluster pulls the images from an available container registry.
Cheaper: There’s no network egress charge between datacenters.

188
Q

Q60 T4

You have an Azure Active Directory (Azure AD) tenant.
You plan to deploy Azure Cosmos DB databases that will use the SQL API.
You need to recommend a solution to provide specific Azure AD user accounts with read access to the Cosmos DB databases.
What should you include in the recommendation?
A. shared access signatures (SAS) and Conditional Access policies
B. certificates and Azure Key Vault
C. master keys and Azure Information Protection policies
D. a resource token and an Access control (IAM) role assignment

A

Correct Answer: D

The Access control (IAM) pane in the Azure portal is used to configure role-based access control on Azure Cosmos resources. The roles are applied to users, groups, service principals, and managed identities in Active Directory. You can use built-in roles or custom roles for individuals and groups.

Note: To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account’s primary key, you have to pass an instance of a TokenCredential class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an Azure AD (AAD) token on behalf of the identity you wish to use.

189
Q

Q61 T4

You need to recommend an Azure Storage solution that meets the following requirements:
✑ The storage must support 1 PB of data.
✑ The data must be stored in blob storage.
✑ The storage must support three levels of subfolders.
✑ The storage must support access control lists (ACLs).
What should you include in the recommendation?
A. a premium storage account that is configured for block blobs
B. a general purpose v2 storage account that has hierarchical namespace enabled
C. a premium storage account that is configured for page blobs
D. a premium storage account that is configured for file shares and supports large file shares

A

Correct Answer: B

Default limits for Azure general-purpose v2 (GPv2), general-purpose v1 (GPv1), and Blob storage accounts include:
* Default maximum storage account capacity: 5 PiB
Blob storage supports Azure Data Lake Storage Gen2, Microsoft’s enterprise big data analytics solution for the cloud. Azure Data Lake Storage Gen2 offers a hierarchical file system as well as the advantages of Blob storage.
Incorrect:
Not D: In a Premium FileStorage account, storage size is limited to 100 TB.

190
Q

Q62 T4

You manage a database environment for a Microsoft Volume Licensing customer named Contoso, Ltd. Contoso uses License Mobility through Software
Assurance.
You need to deploy 50 databases. The solution must meet the following requirements:
✑ Support automatic scaling.
✑ Minimize Microsoft SQL Server licensing costs.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: vCore -
You can only apply the Azure Hybrid licensing model when you choose a vCore-based purchasing model and the provisioned compute tier for your Azure SQL
Database. Azure Hybrid Benefit isn’t available for service tiers under the DTU-based purchasing model or for the serverless compute tier.

Box 2: An Azure SQL Database elastic pool
Azure SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single server and share a set number of resources at a set price. Elastic pools in SQL Database enable software as a service (SaaS) developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.

191
Q

Q63 T4

You have an on-premises application named App1 that uses an Oracle database.
You plan to use Azure Databricks to transform and load data from App1 to an Azure Synapse Analytics instance.
You need to ensure that the App1 data is available to Databricks.
Which two Azure services should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Azure Data Box Gateway
B. Azure Import/Export service
C. Azure Data Lake Storage
D. Azure Data Box Edge
E. Azure Data Factory

A

Correct Answer: CE

Azure Data Factory moves the data from the on-premises Oracle database to Data Lake Storage, which makes the data available to Databricks.

Databricks then transforms the data ("ETL") and loads it into Synapse.

To ensure that the data from App1 is available to Azure Databricks, you should include the following Azure services in your solution:

  1. Azure Data Factory (E): Azure Data Factory can be used to create a data pipeline for ETL (Extract, Transform, Load) processes, which can move your data from the on-premises Oracle database to Azure.
  2. Azure Data Lake Storage (C): Azure Data Lake Storage can act as the intermediary storage area where the transformed data can be placed. Azure Databricks is tightly integrated with Azure Data Lake Storage, making it an ideal choice for storing your data.

Please note that while Azure Data Box Gateway and Azure Data Box Edge are used for offline transfer of large amounts of data, and Azure Import/Export service is used for importing large amounts of data into Azure, they might not be necessary if your data can be transferred online or isn’t extremely large.

192
Q

Q64 T4

You are designing a cost-optimized solution that uses Azure Batch to run two types of jobs on Linux nodes. The first job type will consist of short-running tasks for a development environment. The second job type will consist of long-running Message Passing Interface (MPI) applications for a production environment that requires timely job completion.
You need to recommend the pool type and node type for each job type. The solution must minimize compute charges and leverage Azure Hybrid Benefit whenever possible.
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: User subscription and low-priority virtual machines
The first job type consists of short-running tasks for a development environment.
Among the many ways to purchase and consume Azure resources are Azure low-priority VMs and Spot VMs. These virtual machines are compute instances allocated from spare capacity, offered at a highly discounted rate compared to on-demand VMs. This makes them a great option for cost savings for the right workloads.

Box 2: Batch service and dedicated virtual machines
The second job type consists of long-running Message Passing Interface (MPI) applications for a production environment that requires timely job completion.
Azure Batch is a cloud-based job scheduling and compute management platform that enables running large-scale parallel and high-performance computing applications efficiently in the cloud. Azure Batch provides job scheduling and automatically scales and manages the virtual machines that run those jobs.

193
Q

Q65 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.
What should you include in the recommendation?

A. Azure Notification Hubs
B. Azure Service Fabric
C. Azure Queue Storage
D. Azure Data Lake

A

Correct Answer: C

Queue Storage delivers asynchronous messaging between application components, whether they are running in the cloud, on the desktop, on an on-premises server, or on a mobile device.

The maximum message size supported by Azure Storage Queues is 64KB while Azure Service Bus Queues support messages up to 256KB. This becomes an important factor especially when the message format is padded (such as XML).
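To illustrate, a transaction message can be serialized as UTF-8 XML and checked against the 64 KB Queue Storage limit before it is enqueued. This is a minimal sketch using only the standard library; the element names (`transaction`, `orderId`, `amount`) are invented for the example and are not a required schema.

```python
import xml.etree.ElementTree as ET

MAX_QUEUE_MESSAGE_BYTES = 64 * 1024  # Azure Queue Storage per-message limit

def build_order_message(order_id: str, amount: str) -> bytes:
    # Element names here are illustrative, not a required schema.
    root = ET.Element("transaction")
    ET.SubElement(root, "orderId").text = order_id
    ET.SubElement(root, "amount").text = amount
    # Queue messages must be UTF-8; serialize with an XML declaration.
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

def fits_in_queue(message: bytes) -> bool:
    """Check the serialized size against the 64 KB Queue Storage limit."""
    return len(message) <= MAX_QUEUE_MESSAGE_BYTES

msg = build_order_message("1001", "19.99")
print(msg.decode("utf-8"))
print(fits_in_queue(msg))
```

Padded formats like XML inflate message size, which is why the size check matters before choosing Queue Storage over Service Bus for larger payloads.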

194
Q

Q66 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.
What should you include in the recommendation?
A. Azure Notification Hubs
B. Azure Service Fabric
C. Azure Queue Storage
D. Azure Application Gateway

A

Correct Answer: C

Queue storage is often used to create a backlog of work to process asynchronously.
A queue message must be in a format compatible with an XML request using UTF-8 encoding.

195
Q

Q67 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Hyperscale
B. Azure SQL Database Premium
C. Azure SQL Database Basic
D. Azure SQL Database Standard

A

Correct Answer: B

Premium is the correct answer.
Whenever zone redundancy (availability across zones within the same region) is required, you can only choose:
- General Purpose
- Premium
- Business Critical

196
Q

Q68 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Service Bus
B. Azure Data Lake
C. Azure Traffic Manager
D. Azure Blob Storage

A

Correct Answer: A

Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics (in a namespace). Service Bus is used to decouple applications and services from each other, providing the following benefits:
- Load-balancing work across competing workers
- Safely routing and transferring data and control across service and application boundaries
- Coordinating transactional work that requires a high-degree of reliability

197
Q

Q69 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Basic
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Business Critical
D. Azure SQL Managed Instance Business Critical

A

Correct Answer: C

The Business Critical service tier is designed for OLTP applications with high transaction rates and low latency I/O requirements. It offers the highest resilience to failures by using several isolated replicas.

198
Q

Q70 T4

You have an Azure subscription.

You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:

  • Minimize the time it takes to provision compute resources during scale-out operations.
  • Support autoscaling of Windows Server containers.

Which scaling option should you recommend?

A. horizontal pod autoscaler
B. Virtual nodes
C. Kubernetes version 1.20.2 or newer
D. cluster autoscaler

A

Correct Answer: D

To keep up with application demands in Azure Kubernetes Service (AKS), you may need to adjust the number of nodes that run your workloads. The cluster autoscaler component can watch for pods in your cluster that can’t be scheduled because of resource constraints. When issues are detected, the number of nodes in a node pool is increased to meet the application demand. Nodes are also regularly checked for a lack of running pods, with the number of nodes then decreased as needed. This ability to automatically scale up or down the number of nodes in your AKS cluster lets you run an efficient, cost-effective cluster.

199
Q

Q71 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Service Fabric
B. Azure Data Lake
C. Azure Service Bus
D. Azure Application Gateway

A

Correct Answer: C

Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics (in a namespace). Service Bus is used to decouple applications and services from each other, providing the following benefits:
- Load-balancing work across competing workers
- Safely routing and transferring data and control across service and application boundaries
- Coordinating transactional work that requires a high-degree of reliability

200
Q

Q72 T4

Your company has offices in North America and Europe.

You plan to migrate to Azure.

You need to recommend a networking solution for the new Azure infrastructure. The solution must meet the following requirements:

  • The Point-to-Site (P2S) VPN connections of mobile users must connect automatically to the closest Azure region.
  • The offices in each region must connect to their local Azure region by using an ExpressRoute circuit.
  • Transitive routing between virtual networks and on-premises networks must be supported.
  • The network traffic between virtual networks must be filtered by using FQDNs.

What should you include in the recommendation?

A. Azure Virtual WAN with a secured virtual hub
B. virtual network peering and application security groups
C. virtual network gateways and network security groups (NSGs)
D. Azure Route Server and Azure Network Function Manager

A

Correct Answer: A

Option A, Azure Virtual WAN with a secured virtual hub,
is the best recommendation for this scenario as it allows for automatic connection of mobile users to the closest Azure region, connection of offices to their local Azure region via ExpressRoute circuits, support for transitive routing, and filtering of network traffic between virtual networks by using FQDNs.

Option B, virtual network peering and application security groups,
does not provide automatic connection of mobile users to the closest Azure region or support for transitive routing.

Option C, virtual network gateways and network security groups (NSGs),
does not provide automatic connection of mobile users to the closest Azure region or support for transitive routing, and NSGs cannot filter network traffic between virtual networks by using FQDNs.

Option D, Azure Route Server and Azure Network Function Manager,
does not provide automatic connection of mobile users to the closest Azure region or support for filtering network traffic between virtual networks by using FQDNs.

201
Q

Q73 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Business Critical
B. Azure SQL Managed Instance Business Critical
C. Azure SQL Database Standard
D. Azure SQL Managed Instance General Purpose

A

Correct Answer: A

Zone-redundant availability is available to databases in the General Purpose, Premium, Business Critical, and Hyperscale service tiers of the vCore purchasing model, but not in the Basic and Standard service tiers of the DTU-based purchasing model. Zone-redundant availability provides a Recovery Point Objective (RPO) of zero, meaning failover occurs without any data loss.

202
Q

Q74 T4

You are designing a point of sale (POS) solution that will be deployed across multiple locations and will use an Azure Databricks workspace in the Standard tier. The solution will include multiple apps deployed to the on-premises network of each location.

You need to configure the authentication method that will be used by the app to access the workspace. The solution must minimize the administrative effort associated with staff turnover and credential management.

What should you configure?

A. a managed identity
B. a service principal
C. a personal access token

A

Correct Answer: B

The key phrase in the question is that the authentication method will be used by the app to access the workspace. A service principal is created for and used by an app, whereas a managed identity is created for an Azure resource; the apps here run on the on-premises network of each location, where managed identities are not available, so a managed identity is not what is required. A personal access token is tied to an individual user, so staff turnover would force token rotation, which is exactly the administrative effort the question wants to avoid.

*A service principal is "…An application whose tokens can be used to authenticate and grant access to specific Azure resources from a user-app, service or automation tool, when an organization is using Azure Active Directory…"

*Managed identities are, in essence, identical in functionality and use case to service principals; in fact, they actually are service principals.
What makes them different is that they are always linked to an Azure resource, not to an application or third-party connector, and they are created for you automatically, including the credentials; the big benefit is that no one knows the credentials.

203
Q

Q75 T4

You have two Azure AD tenants named contoso.com and fabrikam.com. Each tenant is linked to 50 Azure subscriptions. Contoso.com contains two users named User1 and User2.

You need to meet the following requirements:

  • Ensure that User1 can change the Azure AD tenant linked to specific Azure subscriptions.
  • If an Azure subscription is linked to a new Azure AD tenant, and no available Azure AD accounts have full subscription-level permissions to the subscription, elevate the access of User2 to the subscription.

The solution must use the principle of least privilege.

Which role should you assign to each user? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

User1: b. Owner
User2: b. Owner

Although this does not follow the principle of least privilege, Owner is the correct selection for both users given the available options.

Note that roles sometimes suggested for this question, such as "Subscription Contributor", "Subscription User", "Co-owner", and "System Administrator", do not exist as Azure built-in roles. A realistic least-privilege refinement for User2 would be temporary, just-in-time (JIT) elevation, for example through Privileged Identity Management, so that elevated access is granted only when needed and for a limited duration. Among the roles actually offered, however, Owner is the answer for both users.

Per the Microsoft documentation, before you can associate or add a subscription to a different directory, you must sign in with an account that has an Owner role assignment for the subscription.

For User1 who needs to change the Azure AD tenant linked to specific Azure subscriptions, they need to be assigned the role of “Owner”. This is because to change the Azure AD tenant linked to a subscription, the user must have enough permissions, which are available at the Owner level.

For User2 who needs to have the access elevated to the subscription if no available Azure AD accounts have full subscription-level permissions to the subscription, they need to be assigned the “Owner” role as well. This role provides full access to all resources, including the right to delegate access to others. In this scenario, the “Owner” role would allow User2 to gain access to the subscription in the absence of any other account with full permissions.

204
Q

Q76 T4

Your company has the divisions shown in the following table.

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.

You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.

What should you recommend?

A. Configure a Conditional Access policy.
B. Use Azure AD entitlement management to govern external users.
C. Configure the Azure AD provisioning service.
D. Configure Azure AD Identity Protection.

A

Correct Answer: B

Here are some of the capabilities of entitlement management:
- Select connected organizations whose users can request access. When a user who isn’t yet in your directory requests access, and is approved, they’re automatically invited into your directory and assigned access. When their access expires, if they have no other access package assignments, their B2B account in your directory can be automatically removed.

205
Q

Q77 T4

You have a multi-tier app named App1 and an Azure SQL database named SQL1. The backend service of App1 writes data to SQL1. Users use the App1 client to read the data from SQL1.

During periods of high utilization, the users experience delays retrieving the data.

You need to minimize how long it takes for data requests.

What should you include in the solution?

A. Azure Cache for Redis
B. Azure Content Delivery Network (CDN)
C. Azure Data Factory
D. Azure Synapse Analytics

A

A. Azure Cache for Redis

Explanation:

Azure Cache for Redis provides an in-memory data store based on the open-source Redis software. It can be used to cache the most frequently accessed data, significantly reducing latency and increasing throughput for the application's data requests. By storing frequently accessed data in a cache, you reduce the load on the main database and keep the app responsive even during high traffic.
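The idea can be illustrated with a minimal cache-aside sketch. A plain dict stands in for Azure Cache for Redis, and `query_database` is a hypothetical stand-in for the SQL1 query; both names are made up for illustration.

```python
import time

# Illustrative cache-aside pattern; a dict stands in for Azure Cache for Redis.
cache: dict[str, tuple[str, float]] = {}
TTL_SECONDS = 60.0
db_reads = 0  # counts how often we fall through to the database

def query_database(key: str) -> str:
    # Hypothetical stand-in for a query against SQL1.
    global db_reads
    db_reads += 1
    return f"row-for-{key}"

def get(key: str) -> str:
    entry = cache.get(key)
    if entry is not None and time.monotonic() < entry[1]:
        return entry[0]          # cache hit: no database round trip
    value = query_database(key)  # cache miss: load from the database, then cache
    cache[key] = (value, time.monotonic() + TTL_SECONDS)
    return value

get("sku-42"); get("sku-42"); get("sku-42")
print(db_reads)  # → 1: repeated reads are served from the cache
```

The TTL keeps cached rows from going stale indefinitely; tuning it is the usual trade-off between freshness and database load.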

Azure Content Delivery Network (CDN) is more for delivering static content to users, and not designed for database queries.

A. Azure Cache for Redis - caches the data so users can retrieve it more quickly
B. Azure Content Delivery Network (CDN) - delivers static content; the issue here is database load, not networking
C. Azure Data Factory - a data integration and ETL orchestration service, not a tool for speeding up an existing database
D. Azure Synapse Analytics - an analytics and data warehousing service, not relevant in this case

206
Q

Q78 T4

You have an Azure subscription that contains the resources shown in the following table.

You create peering between VNet1 and VNet2 and between VNet1 and VNet3.

The virtual machines host an HTTPS-based client/server application and are accessible only via the private IP address of each virtual machine.

You need to implement a load balancing solution for VM2 and VM3. The solution must ensure that if VM2 fails, requests will be routed automatically to VM3, and if VM3 fails, requests will be routed automatically to VM2.

What should you include in the solution?

A. Azure Firewall Premium
B. Azure Application Gateway v2
C. a cross-region load balancer
D. Azure Front Door Premium

A

Correct Answer: B

The best solution for this scenario is B. Azure Application Gateway v2.

Here's why:

  • Client/server application on private IPs: The VMs are accessible only via their private IP addresses, which rules out internet-facing services such as Azure Front Door.
  • High availability: Requests must be routed automatically to VM3 if VM2 fails, and vice versa. Application Gateway v2 uses health probes against its backend pool to detect failed instances and route traffic only to healthy VMs.
  • VNet peering: The VMs reside in peered virtual networks (VNet1, VNet2, VNet3). Application Gateway v2 can be deployed in a virtual network and reach backend VMs in peered virtual networks.

Other options considered:

  • Azure Firewall Premium: Primarily a network security service, not a load balancer; it lacks the application-layer routing and health probing required for this client/server communication.
  • Cross-region load balancer: Distributes traffic across regional load balancers via public frontends; it is unnecessary here because the VMs are in the same region and reachable only by private IP.

In conclusion, Azure Application Gateway v2 provides load balancing for VMs in peered virtual networks while ensuring high availability for the client/server application.
207
Q

Q79 T4

You are designing an app that will include two components. The components will communicate by sending messages via a queue.

You need to recommend a solution to process the messages by using a First in, First out (FIFO) pattern.

What should you include in the recommendation?

A. storage queues with a custom metadata setting
B. Azure Service Bus queues with partitioning enabled
C. Azure Service Bus queues with sessions enabled
D. storage queues with a stored access policy

A

C. Azure Service Bus queues with sessions enabled

Explanation:

Azure Service Bus supports a FIFO pattern through the use of sessions. A session is a sequence of ordered messages. All messages in a session are handled in the order they arrive. This ensures that messages are processed in the order they were added to the queue.
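A minimal model of how sessions preserve ordering (illustrative only; real code would use the azure-servicebus SDK's session receivers, and the session IDs below are made up):

```python
from collections import defaultdict

# Illustrative model of Service Bus sessions: messages that share a session ID
# are delivered to a single receiver in FIFO order; different sessions can be
# processed independently and in parallel.
incoming = [
    ("order-1", "created"), ("order-2", "created"),
    ("order-1", "paid"),    ("order-2", "cancelled"),
    ("order-1", "shipped"),
]

sessions: dict[str, list[str]] = defaultdict(list)
for session_id, body in incoming:
    sessions[session_id].append(body)  # the broker groups by session, keeping order

# A session receiver drains one session at a time, preserving FIFO within it.
print(sessions["order-1"])  # → ['created', 'paid', 'shipped']
```

Within each session the arrival order is preserved, which is the per-session FIFO guarantee the answer relies on.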

Options A and D are incorrect because Azure Storage queues do not guarantee strict First In, First Out (FIFO) ordering. Custom metadata and stored access policies are real Storage queue features, but neither has anything to do with message ordering.

Option B is incorrect because while partitioning in Azure Service Bus can improve performance by spreading the load across multiple message brokers and stores, it doesn’t enforce FIFO ordering across partitions. FIFO ordering is maintained within a partition, but not across partitions. Hence, for strict FIFO, you would not want to enable partitioning, you would want to use sessions.

208
Q

Q80 T4

You need to deploy an instance of SQL Server on Azure Virtual Machines. The solution must meet the following requirements:

  • Support 15,000 disk IOPS.
  • Support SR-IOV.
  • Minimize costs.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Azure Virtual Machine:
Use a high-performance Azure Virtual Machine such as the Dv3 or Ev3 series, which are optimized for workloads that require low latency and high throughput.

SR-IOV: Enable SR-IOV on the virtual machine by enabling Accelerated Networking. SR-IOV allows direct communication between the virtual NIC and the physical NIC, reducing latency and increasing throughput.

Azure Premium SSD Disks:
Use Azure Premium SSD Disks as they are optimized for performance-sensitive workloads and have a high IOPS and throughput limit.

209
Q

Q81 T4

You are developing an app that will use Azure Functions to process Azure Event Hubs events. Request processing is estimated to take between five and 20 minutes.

You need to recommend a hosting solution that meets the following requirements:

  • Supports estimates of request processing runtimes
  • Supports event-driven autoscaling for the app

Which hosting plan should you recommend?

A. Dedicated
B. Consumption
C. App Service
D. Premium

A

D. Premium

The Premium plan is the best fit for this scenario. It supports both longer execution times and event-driven scaling, which are the requirements specified in the question.

Azure Functions on a Premium plan can run for a longer period, up to 60 minutes (or indefinitely if the host.json “functionTimeout” setting is null), making it suitable for the estimated request processing times of five to 20 minutes. The Premium plan also supports event-driven autoscaling.
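The `functionTimeout` setting mentioned above lives in the function app's host.json; a minimal sketch (the 30-minute value is illustrative, and on the Premium plan it can be raised, or made effectively unbounded):

```json
{
  "version": "2.0",
  "functionTimeout": "00:30:00"
}
```

The value uses `hh:mm:ss` format; on the Consumption plan it is capped at 10 minutes, which is why that plan fails the 20-minute requirement.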

The Consumption plan supports event-driven autoscaling but only allows functions to run for up to 10 minutes, so it wouldn’t support the estimated request processing times of five to 20 minutes.

The Dedicated and App Service plans can run for a longer period, but they do not support event-driven autoscaling. The Dedicated plan is also the most costly option and should be used when you need the most control over the function app environment.

210
Q

Q82 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Notification Hubs
B. Azure Application Gateway
C. Azure Service Bus
D. Azure Traffic Manager

A

C is the answer.

Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics (in a namespace). Service Bus is used to decouple applications and services from each other, providing the following benefits:
- Load-balancing work across competing workers
- Safely routing and transferring data and control across service and application boundaries
- Coordinating transactional work that requires a high-degree of reliability

211
Q

Q83 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Notification Hubs
B. Azure Application Gateway
C. Azure Queue Storage
D. Azure Traffic Manager

A

C is the answer.

Azure Queue Storage is a service for storing large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously.
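The pattern can be sketched with the standard library, using an in-process queue as a stand-in for an Azure Storage queue (real code would use the azure-storage-queue SDK; the function names and XML schema are illustrative):

```python
import queue
import xml.etree.ElementTree as ET

# Illustrative sketch: an in-process queue stands in for Azure Queue Storage.
# One service enqueues a transaction as an XML message; another dequeues and
# parses it asynchronously.
q: "queue.Queue[str]" = queue.Queue()

def send_transaction(order_id: str, amount: str) -> None:
    root = ET.Element("transaction")
    ET.SubElement(root, "orderId").text = order_id
    ET.SubElement(root, "amount").text = amount
    q.put(ET.tostring(root, encoding="unicode"))  # must stay under 64 KB

def receive_transaction() -> dict:
    root = ET.fromstring(q.get())
    return {child.tag: child.text for child in root}

send_transaction("1001", "49.99")
print(receive_transaction())  # → {'orderId': '1001', 'amount': '49.99'}
```

The producer and consumer never call each other directly; the queue decouples them, which is the asynchronous communication the question asks for.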

212
Q

Q84 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Basic
B. Azure SQL Database Business Critical
C. Azure SQL Database Standard
D. Azure SQL Managed Instance General Purpose

A

B. Azure SQL Database Business Critical

Azure SQL Database Business Critical tier is designed to provide high availability with zero data loss during failover, which meets one of the main requirements of the scenario.

Additionally, Azure SQL Database Business Critical tier offers zone redundant configurations, which means that replicas of the data are stored in different availability zones. This means the database will remain available in the event of a zone outage, meeting another requirement of the scenario.

Azure SQL Managed Instance General Purpose, while providing automatic backups and high availability within a single region, doesn’t support the required zone redundancy.

Please note that while the Business Critical tier might appear costly, the requirement is to minimize costs while still meeting the other requirements, not to choose the cheapest option outright. Given the zero-data-loss and zone-redundancy requirements, Business Critical is the most cost-effective choice among these options.

213
Q

Q85 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Hyperscale
B. Azure SQL Database Premium
C. Azure SQL Database Standard
D. Azure SQL Managed Instance General Purpose

A

Correct Answer: B

In the Premium, Business Critical, and Hyperscale service tiers, SQL Database supports read-only replicas to offload read-only query workloads, using the ApplicationIntent=ReadOnly parameter in the connection string. Premium also supports a zone-redundant configuration with zero data loss on failover.

Because costs must be minimized, the answer is Premium.

214
Q

Q86 T4

Your company has offices in New York City, Sydney, Paris, and Johannesburg.

The company has an Azure subscription.

You plan to deploy a new Azure networking solution that meets the following requirements:

  • Connects to ExpressRoute circuits in the Azure regions of East US, Southeast Asia, North Europe, and South Africa
  • Minimizes latency by supporting connection in three regions
  • Supports Site-to-site VPN connections
  • Minimizes costs

You need to identify the minimum number of Azure Virtual WAN hubs that you must deploy, and which virtual WAN SKU to use.

What should you identify? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Minimum number of Azure Virtual WAN hubs: 4
Virtual WAN SKU: Standard

The requirement to "minimize latency by supporting connection in three regions" suggests optimizing connections across three regions, but the solution must also connect to ExpressRoute circuits in four specific Azure regions: East US, Southeast Asia, North Europe, and South Africa.

To meet all the requirements, a hub should be deployed in each of those four regions, so each region has a local connection point that keeps latency low. So while three hubs might seem sufficient based on one requirement alone, considering all the requirements together shows that four hubs are needed. This is a common scenario in network planning, where multiple requirements must be balanced.

As for the SKU, Standard is required: the Basic virtual WAN SKU supports only Site-to-site VPN, while ExpressRoute connections (and combined ExpressRoute and Site-to-site VPN) require the Standard SKU.

215
Q

Q87 T4

You have an Azure Functions microservice app named App1 that is hosted in the Consumption plan. App1 uses an Azure Queue Storage trigger.

You plan to migrate App1 to an Azure Kubernetes Service (AKS) cluster.

You need to prepare the AKS cluster to support App1. The solution must meet the following requirements:

  • Use the same scaling mechanism as the current deployment.
  • Support kubenet and Azure Container Networking Interface (CNI) networking.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct answer is worth one point.

A. Configure the horizontal pod autoscaler.
B. Install Virtual Kubelet.
C. Configure the AKS cluster autoscaler.
D. Configure the virtual node add-on.
E. Install Kubernetes-based Event Driven Autoscaling (KEDA).

A

**Correct Answer: A and E**

A. Configure the horizontal pod autoscaler.
E. Install Kubernetes-based Event Driven Autoscaling (KEDA).

Kubernetes uses the horizontal pod autoscaler (HPA) to monitor the resource demand and automatically scale the number of replicas. By default, the horizontal pod autoscaler checks the Metrics API every 15 seconds for any required changes in replica count, but the Metrics API retrieves data from the Kubelet every 60 seconds. Effectively, the HPA is updated every 60 seconds. When changes are required, the number of replicas is increased or decreased accordingly. Horizontal pod autoscaler works with AKS clusters that have deployed the Metrics Server for Kubernetes 1.8+.

Kubernetes Event-driven Autoscaling (KEDA) is a single-purpose and lightweight component that strives to make application autoscaling simple and is a CNCF Incubation project.

It applies event-driven autoscaling to scale your application to meet demand in a sustainable and cost-efficient manner with scale-to-zero.
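A KEDA deployment for the queue trigger might look like the following ScaledObject; the names, thresholds, and environment variable are illustrative, while the `azure-queue` scaler type and its metadata fields come from the KEDA documentation:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: app1-scaler            # illustrative name
spec:
  scaleTargetRef:
    name: app1-deployment      # illustrative deployment hosting the function
  minReplicaCount: 0           # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: app1-queue
        queueLength: "5"       # target messages per replica
        connectionFromEnv: AzureWebJobsStorage
```

KEDA translates the queue depth into metrics that the horizontal pod autoscaler consumes, which is why the two actions (HPA and KEDA) together reproduce the Consumption plan's event-driven scaling on AKS.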

216
Q

Q88 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Application Gateway
B. Azure Queue Storage
C. Azure Data Lake
D. Azure Traffic Manager

A

Correct Answer: B

Azure Queue Storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. It provides cloud messaging between application components, which would be ideal for this scenario where different components (customer orders, billing, payment, inventory, and shipping) need to communicate transaction information asynchronously.

217
Q

Q89 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Managed Instance General Purpose
B. Azure SQL Database Hyperscale
C. Azure SQL Database Premium
D. Azure SQL Managed Instance Business Critical

A

Correct Answer: C

Azure SQL Database Premium tier offers the best high availability with an always-on model, with automatic failover and zero data loss in case of failure (RPO = 0). It also supports availability across multiple zones, meaning that it can remain available in the event of a zone outage.

Here’s why other options may not be suitable:

Azure SQL Managed Instance General Purpose: While it provides high availability, it does not offer a zone-redundant configuration, so it cannot remain available during a zone outage.
Azure SQL Database Hyperscale: While it supports high scale and rapid growth, it may not necessarily be the most cost-effective option for the scenario described.
Azure SQL Managed Instance Business Critical: While it supports automatic failover with zero data loss, and has built-in zone redundancy, it is typically more expensive than the Azure SQL Database Premium.

218
Q

Q90 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Hyperscale
B. Azure SQL Database Premium
C. Azure SQL Database Basic
D. Azure SQL Database Serverless

A

Correct Answer: B

Azure SQL Database Hyperscale is a scalable option for very large workloads with flexible storage management, but it is not the most cost-effective way to meet the zone-redundancy and zero-data-loss requirements in this scenario.

Azure SQL Database Basic is the most cost-effective option but lacks advanced features such as automatic failover and high availability.

Azure SQL Database Serverless is a cost-effective option for light and intermittent workloads but may not be suitable for an application requiring high availability without interruptions.

Azure SQL Database Premium is the recommended option, as it offers advanced features such as a zone-redundant configuration and automatic, data-loss-free failover. It supports high availability during zone outages, ensuring the database remains accessible even if a specific zone experiences an interruption.

219
Q

Q92 T4

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Service Fabric
B. Azure Traffic Manager
C. Azure Queue Storage
D. Azure Notification Hubs

A

C. Azure Queue Storage.

To enable asynchronous communication between cloud services using XML messages, you should use a messaging system. Out of the options provided:

A. Azure Service Fabric: It’s a distributed systems platform for deploying and managing microservices and containers. While it can be used to build resilient applications, it’s not a messaging system per se.

B. Azure Traffic Manager: It’s a DNS-based traffic load balancer. It doesn’t deal with asynchronous messaging.

C. Azure Queue Storage: This service allows you to decouple cloud components and ensure asynchronous message delivery. Messages can be placed into a queue, where another service can pick them up and process them, which is exactly what’s described in the scenario. The messages can be in XML format or any other format that suits your needs.

D. Azure Notification Hubs: This is for sending push notifications to mobile devices. It’s not designed for inter-service communication.

220
Q

Q93 T4

You have an on-premises Microsoft SQL Server 2008 instance that hosts a 50-GB database.

You need to migrate the database to an Azure SQL managed instance. The solution must minimize downtime.

What should you use?

A. Azure Migrate
B. Azure Data Studio
C. WANdisco LiveData Platform for Azure
D. SQL Server Management Studio (SSMS)

A

Recommended Solution: A. Azure Migrate

Using Azure Migrate will ensure a smooth and efficient migration of the 50-GB SQL Server 2008 database to an Azure SQL managed instance with minimal downtime. It provides comprehensive support and tools designed specifically for such migration scenarios.

  • A. Azure Migrate
    • Overview: Azure Migrate is a centralized hub that supports the assessment and migration of on-premises workloads to Azure. It provides tools and guidance for migrating servers, databases, and applications with minimal downtime.
    • Suitability: Specifically designed to handle complex migrations, including databases like Microsoft SQL Server 2008. It offers end-to-end tracking and minimizes downtime through efficient data transfer and synchronization methods.
  • B. Azure Data Studio
    • Overview: Azure Data Studio is a cross-platform database tool primarily used for data management and development tasks. It offers a modern editor experience for querying, designing, and managing databases.
    • Suitability: While useful for database administration and development, it is not optimized for large-scale migration tasks and does not provide features to minimize downtime during migrations.
  • C. WANdisco LiveData Platform for Azure
    • Overview: WANdisco LiveData Platform is designed for the continuous replication and migration of large-scale data, particularly in big data and Hadoop environments.
    • Suitability: Not the ideal choice for migrating Microsoft SQL Server databases as it is tailored more towards big data scenarios rather than traditional relational databases.
  • D. SQL Server Management Studio (SSMS)
    • Overview: SSMS is a comprehensive tool for configuring, managing, and administering all components within Microsoft SQL Server. It supports tasks like querying, designing, and managing databases.
    • Suitability: While SSMS can perform database backups and restores, using it alone for migration may result in extended downtime. It lacks advanced migration features and automation provided by specialized tools like Azure Migrate.
221
Q

Q94 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Managed Instance Business Critical
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Standard
D. Azure SQL Database Premium

A

Correct Answer: D

This question appears many times, with different answer options.
The answers are always (in this order of preference):
1. Azure SQL Database Premium
2. Azure SQL Database Serverless
3. Azure SQL Database Business Critical
If only one of them appears among the options, select it. If two of them appear, select the one that comes first in the order shown here.

222
Q

Q95 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Business Critical
B. Azure SQL Database Basic
C. Azure SQL Managed Instance General Purpose
D. Azure SQL Database Hyperscale

A

Correct Answer: A

This question appears many times, with different answer options each time.
The correct answers, in order of preference, are:
1. Azure SQL Database Premium
2. Azure SQL Database Serverless
3. Azure SQL Database Business Critical
If only one of these appears among the options, select it. If two of them appear, select the one that comes earlier in this list.

223
Q

Q102 T4

You are developing a multi-tier app named App1 that will be hosted on Azure virtual machines. The peak utilization periods for App1 will be from 8 AM to 9 AM and 4 PM to 5 PM on weekdays.

You need to deploy the infrastructure for App1. The solution must meet the following requirements:

  • Support virtual machines deployed to four availability zones across two Azure regions.
  • Minimize costs by accumulating CPU credits during periods of low utilization.

What is the minimum number of virtual networks you should deploy, and which virtual machine size should you use? To answer, select the appropriate options in the answer area

A

Answer

Number of Virtual networks:
✔ 2
Virtual machine size
✔ B-Series

Explanation:

Number of Virtual networks:
A virtual network cannot span Azure regions, so you need at least one virtual network per region for the local resources. With two Azure regions, the minimum is two virtual networks.

Virtual machine size:
The B-Series VM size is the best choice here because of the ability to bank CPU credits during periods of low utilization. The B-series are burstable VMs that accumulate CPU credits during idle times and then consume these credits during periods of high CPU usage. This matches well with your requirement to minimize costs by accumulating CPU credits during periods of low utilization. Other series like A-Series, D-Series, and M-Series do not have this functionality.
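The banking behavior described above can be illustrated with a toy model (the baseline percentage and credit cap below are made-up numbers for illustration, not actual B-series values):

```python
# Simplified model of burstable-VM CPU credit banking (illustrative
# numbers only; real B-series baselines and caps differ per VM size).
def simulate_credits(utilization_per_hour, baseline=0.4, max_credits=100):
    """Accrue credits when CPU use is below the baseline, spend when above.

    utilization_per_hour: CPU utilization (0.0-1.0) for each hour.
    Returns the credit balance after each hour, in "baseline-hours".
    """
    credits = 0.0
    history = []
    for used in utilization_per_hour:
        # Below-baseline hours earn credits; above-baseline hours spend them.
        credits += baseline - used
        credits = max(0.0, min(credits, max_credits))
        history.append(round(credits, 2))
    return history

# Quiet overnight hours bank credits; the 8 AM peak spends them.
print(simulate_credits([0.1, 0.1, 0.1, 0.9]))  # → [0.3, 0.6, 0.9, 0.4]
```

This matches the App1 profile: long stretches of low utilization accumulate credits that pay for the two one-hour peaks each weekday.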

224
Q

Q104 T4

You have an on-premises Microsoft SQL server named SQL1 that hosts 50 databases.

You plan to migrate SQL1 to Azure SQL Managed Instance.

You need to perform an offline migration of SQL1. The solution must minimize administrative effort.

What should you include in the solution?

A. Azure Migrate
B. Azure Database Migration Service
C. SQL Server Migration Assistant (SSMA)
D. Data Migration Assistant (DMA)

A

B. Azure Database Migration Service

Azure Database Migration Service is a tool that helps you simplify, guide, and automate your database migration to Azure. Specifically for SQL Server to Azure SQL Managed Instance migrations, it provides an option for offline (one-time) migrations which is suitable for your scenario.

The Data Migration Assistant (DMA) tool can be used beforehand to assess your SQL Server databases for any feature parity and compatibility issues that could impact the database functionality in Azure SQL Managed Instance.

Azure Migrate is a service that helps you assess and migrate applications, infrastructure, and data at the datacenter level, but it does not orchestrate the database migration itself the way Azure Database Migration Service does. SQL Server Migration Assistant (SSMA) is designed for heterogeneous migrations (for example, Oracle, MySQL, or DB2 to SQL Server or Azure SQL) and is not the right tool for a SQL Server-to-SQL Server migration.

225
Q

Q108 T4

You plan to deploy an infrastructure solution that will contain the following configurations:
* External users will access the infrastructure by using Azure Front Door.
* External user access to the backend APIs hosted in Azure Kubernetes Service (AKS) will be controlled by using Azure API Management.
* External users will be authenticated by an Azure AD B2C tenant that uses OpenID Connect-based federation with a third-party identity provider.

Which function does each service provide? To answer, drag the appropriate functions to the correct services. Each function may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Correct answers should be:
- Front Door → protection against OWASP vulnerabilities
- APIM → validation of Azure AD B2C JWTs

Here’s how the different services would map to the functions provided in your infrastructure solution:

Azure Front Door:
- Function: Protection against OWASP vulnerabilities
- Azure Front Door provides web application firewall (WAF) capabilities, which include protection against OWASP (Open Web Application Security Project) top 10 vulnerabilities. This is the primary function that Azure Front Door would provide in your infrastructure.

Azure API Management:
- Function: IP filtering on a per-API level
- Azure API Management allows you to configure IP filtering on a per-API basis. This feature lets you control which IP addresses can access specific APIs, providing an additional layer of security.

- Function: Validation of Azure AD B2C JSON Web Tokens (JWTs)
- Azure API Management is also responsible for validating the JWTs issued by Azure AD B2C. When a user is authenticated via Azure AD B2C, the resulting JWT can be validated by Azure API Management before allowing access to the backend services.

Summary:
- Azure Front Door: protection against OWASP vulnerabilities
- Azure API Management: IP filtering on a per-API level; validation of Azure AD B2C JSON Web Tokens (JWTs)

226
Q

Q109 T4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using an Azure Policy initiative to enforce the location of resource groups.

Does this meet the goal?

A. Yes
B. No

A

Correct Answer: No

No, the proposed solution does not fully meet the goal.

Explanation:
- Azure Policy Initiative: While an Azure Policy initiative can be used to enforce certain governance rules, including the location of resources, enforcing the location of resource groups alone does not necessarily ensure that the individual resources (such as Azure App Service instances and Azure SQL databases) within those resource groups are deployed to the specific Azure regions required by the regulatory mandate.

What would work:
- You should use Azure Policy to enforce the location of the specific resources (i.e., Azure App Service instances and Azure SQL databases) rather than just the resource groups. This can be done by creating a policy that restricts the locations where these resources can be deployed and assigning that policy to the relevant subscriptions or resource groups.

Summary:
- The solution proposed (enforcing the location of resource groups) does not meet the goal because it does not directly enforce the location of the Azure App Service instances or Azure SQL databases. Instead, you should enforce the location of the individual resources.
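For reference, the core of the built-in "Allowed locations" policy rule targets each resource's `location` field directly (abbreviated sketch; the parameter name matches the built-in definition, but treat the exact shape as illustrative):

```json
{
  "if": {
    "field": "location",
    "notIn": "[parameters('listOfAllowedLocations')]"
  },
  "then": {
    "effect": "deny"
  }
}
```

Assigning a policy like this at the subscription scope denies any App Service instance or SQL database created outside the approved regions, which enforcing resource group locations alone cannot do.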

227
Q

Q110 T4

Your on-premises datacenter contains a server that runs Linux and hosts a Java app named App1. App1 has the following characteristics:

  • App1 is an interactive app that users access by using HTTPS connections.
  • The number of connections to App1 changes significantly throughout the day.
  • App1 runs multiple concurrent instances.
  • App1 requires major changes to run in a container.

You plan to migrate App1 to Azure.

You need to recommend a compute solution for App1. The solution must meet the following requirements:

  • The solution must run multiple instances of App1.
  • The number of instances must be managed automatically depending on the load.
  • Administrative effort must be minimized.

What should you include in the recommendation?

A. Azure Batch
B. Azure App Service
C. Azure Kubernetes Service (AKS)
D. Azure Virtual Machine Scale Sets

A

Correct Answer: B

Recommendation: Azure App Service

Reasons for Choosing Azure App Service:

  • Auto-scaling: Azure App Service provides built-in auto-scaling capabilities based on various metrics, including CPU, memory, and request count. This automatically adjusts the number of instances to handle fluctuating loads, ensuring optimal performance and resource utilization.
  • Multiple Instances: App Service can easily manage multiple instances of your application, allowing for horizontal scaling to meet demand.
  • HTTPS Support: App Service inherently supports HTTPS connections, making it suitable for your interactive app.
  • Reduced Administrative Overhead: As a Platform as a Service (PaaS), App Service handles many infrastructure management tasks, such as operating system updates, patching, and load balancing. This significantly reduces administrative effort.
  • Ease of Migration: While App Service is optimized for cloud-native applications, it can also host existing applications like yours with minimal code changes.

Additional Considerations:

  • App Service Plan: Choose a suitable App Service plan based on your application’s performance and scalability requirements.
  • Deployment Slots: Use deployment slots for staging and testing updates before deploying them to production.
  • Monitoring and Diagnostics: Leverage Azure Application Insights to monitor your application’s performance and identify potential issues.

By selecting Azure App Service, you can effectively migrate App1 to Azure while meeting the requirements of automatic scaling, multiple instances, and reduced administrative overhead.

228
Q

Q111 T4

You have an Azure App Service web app named Webapp1 that connects to an Azure SQL database named DB1. Webapp1 and DB1 are deployed to the East US Azure region.

You need to ensure that all the traffic between Webapp1 and DB1 is sent via a private connection.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Box 1: 2 subnets
Create a virtual network that contains at least two subnets: one for the App Service regional VNet integration and another for the private endpoint (Azure Private Link) of DB1.

Box 2: a private DNS zone
Configure name resolution to use a private DNS zone (for Azure SQL, privatelink.database.windows.net) so that Webapp1 resolves DB1's host name to the private endpoint IP address instead of the public endpoint.

229
Q

Q112 T4

Your on-premises network contains an Active Directory Domain Services (AD DS) domain. The domain contains a server named Server1. Server1 contains an app named App1 that uses AD DS authentication. Remote users access App1 by using a VPN connection to the on-premises network.

You have an Azure AD tenant that syncs with the AD DS domain by using Azure AD Connect.

You need to ensure that the remote users can access App1 without using a VPN. The solution must meet the following requirements:

  • Ensure that the users authenticate by using Azure Multi-Factor Authentication (MFA).
  • Minimize administrative effort.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Box 1: An app registration
This allows App1 to use Azure AD for authentication.

Box 2: A server that runs Windows Server and has the Azure AD Application Proxy connector installed
On-premises, this server hosts the connector that allows App1 to be accessed from outside the on-premises network without a VPN.

In Azure AD:
- An app registration: This represents your on-premises App1 in Azure AD and is necessary for authentication and authorization.
- An enterprise application: This is the external representation of your on-premises application and will be used by users to access App1.

While both the app registration and the enterprise application are components of the overall solution, the app registration is the foundational building block. It establishes the identity of App1 in Azure AD and provides the credentials needed for authentication and authorization; without it, you cannot create the enterprise application or configure Azure AD Application Proxy. If forced to choose only one option from the Azure AD part, the app registration is therefore the most critical component.

On-premises:
- A server that runs Windows Server and has the Azure AD Application Proxy connector installed: The connector establishes a secure outbound connection between Azure AD and your on-premises network, which is what makes remote access to App1 possible.

By combining these components, you achieve:
- Secure remote access: Azure AD Application Proxy allows users to access App1 securely without a VPN connection.
- Azure AD authentication: Users authenticate with their Azure AD credentials, which can be enforced with Azure MFA for added security.
- Minimal administrative effort: Application Proxy simplifies publishing on-premises applications and managing access.

Additional considerations:
- Configure the app registration with the appropriate permissions and redirect URIs.
- Assign users or groups to the enterprise application, and customize access policies as needed.
- Install and configure the Application Proxy connector on the designated server, then publish App1 through the Azure portal.
- Test thoroughly to ensure users can access App1 using Azure AD authentication and MFA.

230
Q

Q113 T4

You have an Azure subscription that contains an Azure Kubernetes Service (AKS) instance named AKS1. AKS1 hosts microservice-based APIs that are configured to listen on non-default HTTP ports.

You plan to deploy a Standard tier Azure API Management instance named APIM1 that will make the APIs available to external users.

You need to ensure that the AKS1 APIs are accessible to APIM1. The solution must meet the following requirements:

  • Implement MTLS authentication between APIM1 and AKS1.
  • Minimize development effort.
  • Minimize costs.

What should you do?

A. Implement an external load balancer on AKS1.
B. Redeploy APIM1 to the virtual network that contains AKS1.
C. Implement an ExternalName service on AKS1.
D. Deploy an ingress controller to AKS1.

A

Correct Answer: D

Mutual TLS (mTLS) authentication is natively supported by Azure API Management and can be enabled in Kubernetes by installing an Ingress Controller. This approach simplifies the microservices as the authentication will be performed in the Ingress Controller. This solution also meets the requirements of implementing mTLS authentication between APIM1 and AKS1, minimizing development effort, and minimizing costs.

Note that the ingress controller you deploy to AKS1 must support mTLS. Enterprise-grade options include the NGINX Ingress Controller and the Application Gateway Ingress Controller (AGIC).
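With the NGINX ingress controller, for example, client-certificate (mTLS) verification is enabled through annotations. A sketch (the resource names, namespace, and port are placeholders for this scenario; the annotation keys are the documented NGINX ones):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apis-ingress                 # placeholder name
  annotations:
    # Require and verify a client certificate (presented by APIM1)
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # Secret holding the CA bundle used to validate the client certificate
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service    # placeholder backend service
                port:
                  number: 8080       # non-default HTTP port from the scenario
```

Terminating mTLS at the ingress keeps the microservices themselves unchanged, which is what minimizes the development effort.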

231
Q

Q114 T4

You need to recommend a solution to integrate Azure Cosmos DB and Azure Synapse. The solution must meet the following requirements:

  • Traffic from an Azure Synapse workspace to the Azure Cosmos DB account must be sent via the Microsoft backbone network.
  • Traffic from the Azure Synapse workspace to the Azure Cosmos DB account must NOT be routed over the internet.
  • Implementation effort must be minimized.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Box 1: Configure a dedicated managed virtual network
Managed private endpoints are only supported in Azure Synapse workspaces with a Managed workspace Virtual Network.

Box 2: Managed private endpoints
When you use Managed private endpoints, traffic between your Azure Synapse workspace and other Azure resources traverse entirely over the Microsoft backbone network.

232
Q

Q115 T4

You have an Azure subscription that contains an Azure Cosmos DB for NoSQL account named account1 and an Azure Synapse Analytics workspace named Workspace1. The account1 account contains a container named Contained that has the analytical store enabled.

You need to recommend a solution that will process the data stored in Contained in near-real-time (NRT) and output the results to a data warehouse in Workspace1 by using a runtime engine in the workspace. The solution must minimize data movement.

Which pool in Workspace1 should you use?

A. Apache Spark
B. serverless SQL
C. dedicated SQL
D. Data Explorer

A

Correct Answer: A

For processing data stored in the ‘Contained’ container of Cosmos DB in near-real-time (NRT) and outputting results to a data warehouse in Workspace1, leveraging an Apache Spark pool within Azure Synapse Analytics is highly recommended. This approach is particularly effective due to Apache Spark’s robust in-memory processing capabilities, which can handle large volumes of data swiftly. Additionally, by utilizing Azure Synapse Link for seamless integration with Cosmos DB’s analytical store, this solution ensures minimal data movement. This not only enhances performance by enabling direct real-time data access but also optimizes resource utilization and reduces latency, making it an ideal setup for real-time data analytics.
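In a Synapse notebook, reading the analytical store through Synapse Link looks like the sketch below (`CosmosDbLinkedService` is a placeholder linked-service name; the `cosmos.olap` format and the two options shown are the documented Synapse Link reader settings, everything else is scaffolding):

```python
# Sketch: reading the Cosmos DB analytical store from a Synapse Apache
# Spark pool via Azure Synapse Link. Inside a Synapse notebook, `spark`
# is predefined; "CosmosDbLinkedService" is a placeholder name.
OLAP_OPTIONS = {
    "spark.synapse.linkedService": "CosmosDbLinkedService",  # placeholder
    "spark.cosmos.container": "Contained",
}

def read_analytical_store(spark):
    """Load the analytical store as a DataFrame. Reads bypass the
    transactional store, so no request units are consumed and data
    movement is minimized."""
    reader = spark.read.format("cosmos.olap")
    for key, value in OLAP_OPTIONS.items():
        reader = reader.option(key, value)
    return reader.load()
```

The resulting DataFrame can then be written to a dedicated SQL pool table in Workspace1 to populate the data warehouse.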

233
Q

Q116 T4

You have an on-premises datacenter named Site1. Site1 contains a VMware vSphere cluster named Cluster1 that hosts 100 virtual machines. Cluster1 is managed by using VMware vCenter.

You have an Azure subscription named Sub1.

You plan to migrate the virtual machines from Cluster1 to Sub1.

You need to identify which resources are required to run the virtual machines in Azure. The solution must minimize administrative effort.

What should you configure? To answer, drag the appropriate resources to the correct targets. Each resource may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

234
Q

Q117 T4

Your on-premises datacenter contains a server named Server1 that runs Microsoft SQL Server 2022. Server1 contains a 30-TB database named DB1 that stores customer data. Server1 runs a custom application named App1 that verifies the compliance of records in DB1. App1 must run on the same server as DB1.

You have an Azure subscription.

You need to migrate DB1 to Azure. The solution must minimize administrative effort.

To which service should you migrate DB1, and what should you use to perform the migration? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Box 1: SQL Server on Azure Virtual Machines

Box 2: By using: Azure Migrate
Of the listed tools, only Azure Migrate and the Data Migration Assistant (DMA) can handle a SQL Server migration to an Azure VM; since DMA is not among the available options, Azure Migrate is the answer.

It's SQL Server on an Azure VM with Azure Migrate. Because the existing server runs a custom app that must be installed alongside the database and SQL Server is already at version 2022, nothing more than a lift-and-shift is required: Azure Migrate simply moves the server to Azure with minimal administrative effort.

235
Q

Q118 T4

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Managed Instance Business Critical
B. Azure SQL Database Business Critical
C. Azure SQL Database Basic
D. Azure SQL Database Standard

A

Correct Answer: B

Azure SQL Database Business Critical uses synchronous replicas (no data loss on failover) and supports zone redundancy, and it is cheaper than SQL Managed Instance Business Critical. The Basic and Standard tiers do not support zone redundancy.

236
Q

Q124 T4

You have an Azure subscription that contains the resources shown in the following table

VNet1, VNet2, and VNet3 each has multiple virtual machines connected. The virtual machines use the Azure DNS service for name resolution.

You need to recommend an Azure Monitor log routing solution that meets the following requirements:

  • Ensures that the logs collected from the virtual machines and sent to Workspace1 are routed over the Microsoft backbone network
  • Minimizes administrative effort

What should you include in the recommendation? To answer, select the appropriate options in the answer area

A

Answer

Box 1: 1 Azure Monitor Private Link Scope (AMPLS)
Box 2: 2 private endpoints

One private endpoint serves VNet1 and VNet2, since they are peered, and one serves VNet3, which is isolated from VNet1 and VNet2.

Here is explanation:

Peered networks
Network peering is used in various topologies, other than hub and spoke. Such networks can share each other’s IP addresses, and most likely share the same DNS. In such cases, create a single private link on a network that’s accessible to your other networks. Avoid creating multiple private endpoints and AMPLS objects because ultimately only the last one set in the DNS applies.

Isolated networks
If your networks aren’t peered, you must also separate their DNS to use private links. After that’s done, create a separate private endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components or to different ones.

237
Q

Q126 T4

You have 100 Azure Storage accounts.

Access to the accounts is restricted by using Azure role-based access control (Azure RBAC) assignments.

You need to recommend a solution that uses role assignment conditions based on the tags assigned to individual resources within the storage accounts.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

A

Answer

To implement role assignment conditions based on tags assigned to individual resources within Azure Storage accounts, you should use Attribute-Based Access Control (ABAC).

Explanation:

  • Implement role assignment conditions by using:
    • ABAC (Attribute-Based Access Control): ABAC allows you to create role assignments with conditions based on resource attributes (such as tags). This is particularly useful for fine-grained access control where you want to apply permissions based on specific characteristics of the resources.
  • Assign permissions to:
    • Blobs, Files, Tables: ABAC can be used to assign permissions to specific data types within Azure Storage, such as blobs, files, and tables, based on the tags or attributes assigned to those resources. This ensures that only users with the correct conditions in their role assignments can access these resources.

Summary:

  • Implement role assignment conditions by using: ABAC
  • Assign permissions to: Blobs, Files, Tables

ABAC provides the flexibility needed to implement role assignment conditions based on tags, which is ideal for your scenario with 100 Azure Storage accounts.
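An ABAC condition is attached to the role assignment itself (for example, via the `--condition` parameter of `az role assignment create`). A sketch of the condition grammar follows the documented format; the tag key `Project`, the value `Cascade`, and the choice of the blob-read action are made-up examples for illustration:

```python
# Illustrative Azure ABAC role-assignment condition: allow blob reads
# only when the blob's index tag "Project" equals "Cascade" (tag key,
# value, and action are made-up examples of the documented syntax).
ABAC_CONDITION = (
    "("
    "(!(ActionMatches{'Microsoft.Storage/storageAccounts"
    "/blobServices/containers/blobs/read'}))"
    " OR "
    "(@Resource[Microsoft.Storage/storageAccounts/blobServices"
    "/containers/blobs/tags:Project<$key_case_sensitive$>]"
    " StringEquals 'Cascade')"
    ")"
)
```

The first clause exempts all other actions from the condition; the second restricts the read action to resources carrying the matching tag.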
238
Q

Q1 T5

Litware - Case Study

Question
You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.
What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

The correct answers are:

To register the users using MFA:

  • Security defaults in Azure AD

To force Azure MFA authentication:

  • Conditional Access policy (Capolicy1) with Session control

Here’s why:

  • Azure AD Identity Protection: While this service detects and mitigates identity risks, it doesn't directly register users for MFA.
  • Azure AD authentication methods policy: This allows controlling which authentication methods users can choose during sign-in, but it doesn't enforce mandatory MFA usage.
  • Security defaults in Azure AD: This is the recommended approach for enforcing MFA registration for all users or specific user groups within a tenant.
  • Grant control: This option within Conditional Access defines which applications or resources users are granted access to. While it can restrict access based on MFA enrollment, it doesn’t enforce actual MFA usage during sign-in.
  • Session control: This option within Conditional Access allows defining requirements for user sign-in sessions. By configuring session control in Capolicy1 to require MFA, you enforce MFA for users accessing the Azure portal when managing the production environment.
  • Sign-In risk policy: This option within Azure AD Identity Protection identifies risky sign-in attempts based on risk levels. While it can trigger additional authentication steps like MFA for risky attempts, it doesn’t enforce mandatory MFA for all users.

Therefore, using Security defaults in Azure AD for registration and Session control in the existing Capolicy1 for enforcement best aligns with Litware’s requirement.

239
Q

Q2 T5

Litware - Case Study

Question

After you migrate App1 to Azure, you need to enforce the data modification requirements to meet the security and compliance requirements.
What should you do?

A. Create an access policy for the blob service.
B. Implement Azure resource locks.
C. Create Azure RBAC assignments.
D. Modify the access level of the blob service.

A

Correct Answer: A

Scenario: Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
Resource locks (option B) can prevent users from deleting or modifying Azure resources, but they operate on the resource objects themselves, not on the data inside the storage account, and they cannot enforce a time-boxed retention interval.

Immutable storage for Azure Blob Storage enables users to store business-critical data in a WORM (Write Once, Read Many) state. While in a WORM state, data cannot be modified or deleted for a user-specified interval. By configuring immutability policies for blob data, you can protect your data from overwrites and deletes.

240
Q

Q2-2 T5

Litware - Case Study

Question

You plan to migrate App1 to Azure.
You need to recommend a high-availability solution for App1. The solution must meet the resiliency requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Here is a high-availability solution recommendation for App1 that meets Litware’s resiliency requirements:

  • Number of host groups: 1
  • Number of virtual machine scale sets: 1

Rationale:

  • Availability zones: Since Litware requires App1 to be available even if two availability zones fail, using a single availability zone with a single virtual machine scale set won’t meet this requirement. However, Azure offers Availability Sets within an availability zone. An Availability Set distributes VMs across multiple isolated fault domains within a zone. This means that if a hardware or software failure within a zone disrupts one VM, the other VMs in the availability set located on different fault domains are not affected and can continue to operate.

Recommendation:

  • Deploy App1 to a virtual machine scale set within a single Azure availability zone.
  • Configure at least two virtual machines within the scale set.
  • Use Azure Managed Disks with Azure Premium Storage for the VMs. This will provide high availability and redundancy for the VM disks.
  • Configure Azure Load Balancer for the virtual machine scale set. This will distribute traffic across the VMs in the scale set and provide a single endpoint for client applications.

Benefits of this solution:

  • High Availability: If a hardware or software failure disrupts one VM in the availability set, the other VMs will continue to operate, maintaining application availability.
  • Automatic scaling: Azure virtual machine scale sets can automatically scale the number of VMs up or down based on predefined metrics. This can help to ensure that App1 has the resources it needs to meet demand.
  • Minimizes administrative effort: By using a virtual machine scale set, you can manage a group of VMs as a single unit. This can help to reduce the administrative overhead of managing individual VMs.

Additional Considerations:

  • For disaster recovery, consider replicating the Azure VMs and Azure Storage accounts to a secondary Azure region.
  • Litware can implement Azure Site Recovery service to automate the disaster recovery process.
241
Q

Q2-3 T5

Litware - Case Study

Question

You plan to migrate App1 to Azure.
You need to recommend a storage solution for App1 that meets the security and compliance requirements.
Which type of storage should you recommend, and how should you recommend configuring the storage? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Box 1: Standard general-purpose v2
Standard general-purpose v2 supports Blob Storage.
Azure Storage provides data protection for Blob Storage and Azure Data Lake Storage Gen2.
Scenario:
Litware identifies the following security and compliance requirements:
✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

✑ App1 must NOT share physical hardware with other workloads.

Box 2: Hierarchical namespace -
Scenario: Plan: Migrate App1 to Azure virtual machines.
Azure Data Lake Storage Gen2 implements an access control model that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs).
Data Lake Storage Gen2 and the Network File System (NFS) 3.0 protocol both require a storage account with a hierarchical namespace enabled.

242
Q

Q2-4 T5

Litware - Case Study

Question

How should the migrated databases DB1 and DB2 be implemented in Azure?

Hot Area

A

Answer

Box 1: SQL Managed Instance -
Scenario: Once migrated to Azure, DB1 and DB2 must meet the following requirements:
✑ Maintain availability if two availability zones in the local Azure region fail.
✑ Fail over automatically.
✑ Minimize I/O latency.
The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a server or all databases in a managed instance to another region. It is a declarative abstraction on top of the existing active geo-replication feature, designed to simplify deployment and management of geo- replicated databases at scale. You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined policy. The latter option allows you to automatically recover multiple related databases in a secondary region after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or SQL Managed Instance availability in the primary region.

Box 2: Business critical -
SQL Managed Instance is available in two service tiers:
General purpose: Designed for applications with typical performance and I/O latency requirements.
Business critical: Designed for applications with low I/O latency requirements and minimal impact of underlying maintenance operations on the workload.

243
Q

Q2-4 T5

Litware - Case Study

Question

You plan to migrate App1 to Azure.
You need to recommend a network connectivity solution for the Azure Storage account that will host the App1 data. The solution must meet the security and compliance requirements.
What should you include in the recommendation?

A. Microsoft peering for an ExpressRoute circuit
B. Azure public peering for an ExpressRoute circuit
C. a service endpoint that has a service endpoint policy
D. a private endpoint

A

Correct Answer: D

A private endpoint lets you securely connect to the storage account from on-premises networks that reach the VNet over VPN or ExpressRoute with private peering.

A private endpoint also lets you secure the storage account by configuring the storage firewall to block all connections on the public endpoint of the storage service.

Incorrect Answers:

A: Microsoft peering provides access to Azure public services via public endpoints with public IP addresses, which should not be allowed.
B: Azure public peering has been deprecated.
C: Service endpoints are configured on subnets within an Azure virtual network. They can't be used for traffic that originates on-premises, so they can't give on-premises users access while the public endpoint is blocked.
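A minimal Azure CLI sketch of this design (resource names are placeholders): create a private endpoint for the storage account inside the VNet, then block the public endpoint.

```shell
# Sketch, names are placeholders: private endpoint for the storage
# account's blob service inside the existing VNet.
az network private-endpoint create \
  --name pe-app1-storage \
  --resource-group rg-app1 \
  --vnet-name vnet-hub \
  --subnet snet-private-endpoints \
  --private-connection-resource-id "$(az storage account show -n stapp1data -g rg-app1 --query id -o tsv)" \
  --group-id blob \
  --connection-name app1-storage-conn

# Block all traffic to the public endpoint so only the private endpoint
# (reachable over ExpressRoute private peering) can be used.
az storage account update \
  --name stapp1data \
  --resource-group rg-app1 \
  --public-network-access Disabled
```

On-premises clients also need name resolution for the private endpoint, typically via a privatelink private DNS zone forwarded from on-premises DNS.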

244
Q

Q2-5 T5

Litware - Case Study

Question

You need to implement the Azure RBAC role assignments for the Network Contributor role. The solution must meet the authentication and authorization requirements.
What is the minimum number of assignments that you must use?

A. 1
B. 2
C. 5
D. 10
E. 15

A

Correct Answer: B

Scenario: The Network Contributor built-in RBAC role must be used to grant permissions to the network administrators for all the virtual networks in all the Azure subscriptions.
RBAC roles must be applied at the highest level possible.
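The "highest level possible" requirement means assigning the role at management group scope rather than per subscription. A hedged sketch (the group object ID and management group ID are placeholders):

```shell
# Sketch, IDs are placeholders: a single assignment at a management group
# grants Network Contributor on every subscription beneath it.
az role assignment create \
  --assignee "<networkAdminsGroupObjectId>" \
  --role "Network Contributor" \
  --scope "/providers/Microsoft.Management/managementGroups/<mg-id>"
```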

245
Q

Q2-7 T5

Litware - Case Study

Question

You plan to migrate DB1 and DB2 to Azure.
You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.
What should you configure? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Database: A single Azure SQL database

Rationale: While Azure SQL Managed Instance provides more flexibility and control, a single Azure SQL database is often sufficient for most workloads. It’s simpler to manage and offers a cost-effective solution.

Service tier: Hyperscale

Rationale: The Hyperscale service tier provides:
High availability: Hyperscale maintains multiple compute replicas and supports zone-redundant configurations, helping ensure continuous availability.

Performance: It offers excellent performance for most database workloads, including those with high transaction rates.
Scalability: It allows for independent scaling of compute and storage resources, providing flexibility to meet changing demands.
Cost-effectiveness: It can be cost-effective, especially for large databases or those with fluctuating workloads.

Additional Considerations:

Geo-replication: To ensure data redundancy and disaster recovery, consider enabling geo-replication for the databases. This will replicate data to a secondary region, providing protection against regional failures.
Transparent Data Encryption (TDE): To comply with Litware’s security requirements, enable TDE on the databases. This will encrypt data at rest, protecting it from unauthorized access.
High availability options: If even higher availability is required, explore options like zone redundancy or availability groups within the Hyperscale service tier.
By configuring a single Azure SQL database with the Hyperscale service tier, Litware can meet its resiliency and business requirements while minimizing administrative overhead and costs.

246
Q

Q2-6 T5

Litware - Case Study

Question

You need to configure an Azure policy to ensure that the Azure SQL databases have Transparent Data Encryption (TDE) enabled. The solution must meet the security and compliance requirements.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Select and Place

A

Answer

Step 1: Create an Azure policy definition that uses the deployIfNotExists effect.
The first step is to define, in the policy definition (roleDefinitionIds), the roles that deployIfNotExists and modify need to successfully deploy the content of your included template.

Step 2: Create an Azure policy assignment
When creating an assignment using the portal, Azure Policy both generates the managed identity and grants it the roles defined in roleDefinitionIds.

Step 3: Invoke a remediation task.
Resources that are non-compliant to a deployIfNotExists or modify policy can be put into a compliant state through Remediation. Remediation is accomplished by instructing Azure Policy to run the deployIfNotExists effect or the modify operations of the assigned policy on your existing resources and subscriptions, whether that assignment is to a management group, a subscription, a resource group, or an individual resource.
During evaluation, the policy assignment with deployIfNotExists or modify effects determines if there are non-compliant resources or subscriptions. When non-compliant resources or subscriptions are found, the details are provided on the Remediation page.
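These three steps can be sketched with the Azure CLI (the policy definition, scope, and names are placeholders; exact parameters vary by CLI version):

```shell
# Steps 1-2 sketch: assign a deployIfNotExists policy (assumed here to be
# a built-in TDE deployment definition), letting Azure Policy create the
# managed identity and grant it the role it needs.
az policy assignment create \
  --name enforce-sql-tde \
  --policy "<tde-deployIfNotExists-definition-name-or-id>" \
  --scope "/subscriptions/<sub-id>" \
  --mi-system-assigned \
  --location eastus \
  --role "SQL DB Contributor" \
  --identity-scope "/subscriptions/<sub-id>"

# Step 3 sketch: remediate existing non-compliant databases.
az policy remediation create \
  --name remediate-sql-tde \
  --policy-assignment enforce-sql-tde
```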

247
Q

Q1(3) T6

Contoso - Case Study

Question

You need to recommend a solution that meets the file storage requirements for App2.
What should you deploy to the Azure subscription and the on-premises network? To answer, drag the appropriate services to the correct locations. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
Select and Place

A

Answer

https://docs.microsoft.com/en-us/azure/storage/file-sync/file-sync-deployment-guide
Scenario: App2 has the following file storage requirements:
✑ Save files to an Azure Storage account.
✑ Replicate files to an on-premises location.
✑ Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.

Box 2: Azure File Sync -
Use Azure File Sync to centralize your organization’s file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share. You can use any protocol that’s available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.
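A hedged outline of the Azure-side setup (requires the `storagesync` CLI extension; all names are placeholders; the on-premises Windows Server is registered separately from the server itself):

```shell
# Sketch: create the Storage Sync Service and a sync group, then add the
# Azure file share as the cloud endpoint.
az storagesync create --name sync-app2 --resource-group rg-app2 --location eastus
az storagesync sync-group create --name app2-files \
  --storage-sync-service sync-app2 --resource-group rg-app2
az storagesync sync-group cloud-endpoint create \
  --storage-sync-service sync-app2 \
  --sync-group-name app2-files \
  --resource-group rg-app2 \
  --name app2-cloud-endpoint \
  --storage-account stapp2files \
  --azure-file-share-name app2share
# The server endpoint (the local cache path that LAN clients read over
# SMB) is added after registering the on-premises Windows Server.
```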

248
Q

Q1(4) T6

Contoso - Case Study

Question

You need to recommend a solution that meets the data requirements for App1.
What should you recommend deploying to each availability zone that contains an instance of App1?

A. an Azure Cosmos DB that uses multi-region writes
B. an Azure Data Lake store that uses geo-zone-redundant storage (GZRS)
C. an Azure Storage account that uses geo-zone-redundant storage (GZRS)

A

Correct Answer: A

Scenario: App1 has the following data requirements:
✑ Each instance will write data to a data store in the same availability zone as the instance.
✑ Data written by any App1 instance must be visible to all App1 instances.
Azure Cosmos DB: Each partition across all the regions is replicated. Each region contains all the data partitions of an Azure Cosmos container and can serve reads as well as serve writes when multi-region writes is enabled.

Incorrect Answers:

B, C: GZRS protects against regional failures, which is not what this scenario needs. Geo-redundant storage (GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to the secondary region, and all writes still go to the primary region rather than to a data store in each availability zone.

249
Q

Q1(5) T6

Contoso - Case Study

Question

You are evaluating whether to use Azure Traffic Manager and Azure Application Gateway to meet the connection requirements for App1.
What is the minimum numbers of instances required for each service? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

A

Answer

Box 1: 1 -
App1 will only be accessible from the internet. App1 has the following connection requirements:
Connections to App1 must be active-active load balanced between instances.

All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.
App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.
Note: Azure Traffic Manager is a DNS-based traffic load balancer. This service allows you to distribute traffic to your public-facing applications across the global Azure regions.

Box 2: 2 -
For production workloads, run at least two gateway instances.
A single Application Gateway deployment can run multiple instances of the gateway.
Use one Application Gateway in East US Region, and one in the West Europe region.
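A hedged CLI sketch of the single Traffic Manager profile with geographic routing (DNS names and resource IDs are placeholders):

```shell
# Sketch: one Traffic Manager profile using Geographic routing.
az network traffic-manager profile create \
  --name tm-app1 \
  --resource-group rg-app1 \
  --routing-method Geographic \
  --unique-dns-name app1-contoso

# North America -> East US endpoint; everything else -> West Europe.
az network traffic-manager endpoint create \
  --name eastus-appgw --profile-name tm-app1 --resource-group rg-app1 \
  --type azureEndpoints --target-resource-id "<eastus-appgw-public-ip-id>" \
  --geo-mapping GEO-NA
az network traffic-manager endpoint create \
  --name westeurope-appgw --profile-name tm-app1 --resource-group rg-app1 \
  --type azureEndpoints --target-resource-id "<westeurope-appgw-public-ip-id>" \
  --geo-mapping WORLD
```

With geographic routing, WORLD acts as the catch-all mapping for every region not matched by another endpoint.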

250
Q

Q1(5) T6

Contoso - Case Study

Question

What should you implement to meet the identity requirements? To answer, select the appropriate options in the answer area.

A

Answer

A. Azure AD Identity Governance
Feature:
B. Access reviews
Explanation:

Azure AD Identity Governance service allows organizations to manage, control, and monitor access to resources, which is necessary for the management of Fabrikam users’ access to resources.

Access reviews is a feature of Azure AD Identity Governance that allows for periodic review of access permissions, fulfilling the requirement for the monthly review of Fabrikam users’ access permissions to App1.

251
Q

Q1(6) T6

Contoso - Case Study

Question

What should you recommend to meet the monitoring requirements for App2?

A. VM insights
B. Azure Application Insights
C. Microsoft Sentinel
D. Container insights

A

Correct Answer: B

Scenario: You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.
Unified cross-component transaction diagnostics.

The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn’t matter if you have multiple resources. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure.

Note: Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.

252
Q

Q1 T6

Contoso - Case Study

Question
You need to recommend a solution for the App1 maintenance task. The solution must minimize costs.
What should you include in the recommendation?

A. an Azure logic app
B. an Azure function
C. an Azure virtual machine
D. an App Service WebJob

A

The best option for the App1 maintenance task to minimize costs is:

B. An Azure Function

Here’s why:

  • Cost-Effectiveness: Azure Functions are serverless, meaning you only pay for the resources consumed when the function executes. This is ideal for a task that runs hourly, as it minimizes idle resource costs.
  • Scalability: Azure Functions automatically scale based on demand, ensuring sufficient resources during execution without manual configuration.
  • Ease of Development: Azure Functions support several languages, including PowerShell, so the existing maintenance script can run inside the function with little or no rework.
  • Integration with Key Vault: Azure Functions can access secrets securely from Azure Key Vault using managed identities, eliminating the need to store sensitive information directly in the script.

Why other options are not ideal:

  • A. Azure Logic App: While a logic app can be used for automation, it might be more complex and expensive compared to a simple script running in an Azure Function.
  • C. An Azure Virtual Machine: This would be an overkill for running a single hourly script. It incurs costs for continuously running a VM, even during idle periods.
  • D. An App Service WebJob: Though similar to Azure Functions, WebJobs are typically tied to a specific App Service plan, which can incur fixed costs regardless of usage. Azure Functions offer more flexibility and cost optimization in this scenario.

Therefore, an Azure Function provides the most cost-effective and efficient way to execute the hourly maintenance script for App1.
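A hedged sketch of wiring the function to Key Vault with a managed identity (resource names are placeholders):

```shell
# Sketch: give the function app a system-assigned identity so the script
# never stores credentials.
principalId=$(az functionapp identity assign \
  --name func-app1-maintenance --resource-group rg-app1 \
  --query principalId -o tsv)

# Grant it read access to secrets (access-policy model shown here).
az keyvault set-policy \
  --name kv-app1 \
  --object-id "$principalId" \
  --secret-permissions get list

# Surface a secret to the function as an app setting via a Key Vault reference.
az functionapp config appsettings set \
  --name func-app1-maintenance --resource-group rg-app1 \
  --settings "ApiKey=@Microsoft.KeyVault(SecretUri=https://kv-app1.vault.azure.net/secrets/ApiKey/)"
```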

253
Q

Q1(3) T6

Contoso - Case Study

Question
You need to recommend a solution that meets the application development requirements.
What should you include in the recommendation?

A. the Azure App Configuration service
B. an Azure Container Registry instance
C. deployment slots
D. Continuous Integration/Continuous Deployment (CI/CD) sources

A

Recommendation: C. Deployment slots

Explanation:

Deployment slots provide the ideal solution for the application development requirements outlined in the case study.

Here’s how they address the specific needs:

  • Staging and production environments: Deployment slots allow you to create isolated environments for testing new application versions before deploying them to production. This aligns with the requirement of deploying a staging instance before using the new version in production.
  • Zero-downtime deployments: By swapping the staging slot with the production slot, you can seamlessly transition to the new application version without any downtime, fulfilling the requirement for a smooth switch between versions.

How other options don’t fully address the requirements:

  • A. Azure App Configuration: While useful for managing application settings, it doesn’t directly address the need for staging and production environments or zero-downtime deployments.
  • B. Azure Container Registry: Primarily for storing container images, it doesn’t provide the necessary environment management features for this scenario.
  • D. CI/CD sources: While essential for the development process, they focus on building and deploying applications, not managing the deployment lifecycle and zero-downtime transitions.

By leveraging deployment slots, Contoso can efficiently manage the development and deployment process for App1 and App2, ensuring minimal downtime and a streamlined workflow.
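The slot workflow can be sketched with the CLI (app and slot names are placeholders):

```shell
# Sketch: create a staging slot, validate the new version there, then
# swap it into production with no downtime.
az webapp deployment slot create \
  --name app1-web --resource-group rg-app1 --slot staging

# ...deploy and test the new version against the staging slot...

az webapp deployment slot swap \
  --name app1-web --resource-group rg-app1 \
  --slot staging --target-slot production
```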

254
Q

Q1(3) T6

Contoso - Case Study

Question
You need to recommend an App Service architecture that meets the requirements for App1. The solution must minimize costs.
What should you recommend?

A. one App Service Environment (ASE) per availability zone
B. one App Service Environment (ASE) per region
C. one App Service plan per region
D. one App Service plan per availability zone

A

Recommendation: C. One App Service plan per region

Explanation:

One App Service plan per region is the most cost-effective and efficient option for App1 given the requirements.

Here’s a breakdown of why:

  • Cost-efficiency: This option avoids the overhead of multiple App Service Environments (ASEs), which can be costly.
  • High availability: By distributing App Service plans across regions, you achieve high availability and fault tolerance.
  • Load balancing: Azure App Service provides built-in load balancing across the instances within a plan; traffic distribution across regions is handled by a separate service such as Azure Traffic Manager or Azure Front Door.
  • Scalability: You can independently scale instances within each App Service plan based on regional demand.
  • Simplicity: Managing a single App Service plan per region is simpler than managing multiple ASEs.

Considerations:

  • Traffic Distribution: Use traffic manager to direct traffic to the appropriate region based on geographic location.
  • Data Consistency: Ensure data consistency across regions using appropriate data storage and synchronization mechanisms.
  • Monitoring: Implement robust monitoring to track performance and identify potential issues.

By adopting this architecture, Contoso can effectively deploy and manage App1 while optimizing costs and ensuring high availability.

255
Q

Q1(4) T6

Contoso - Case Study

Question
You need to recommend a solution to ensure that App1 can access the third-party credentials and access strings. The solution must meet the security requirements.

What should you include in the recommendation?

Authenticate App1 by using:
1. A certificate
2. A system-assigned managed identity
3. A user-assigned managed identity

Authorize App1 to retrieve Key Vault secrets by using:
1. An access policy
2. A connected service
3. A private link
4. A role assignment

A

Recommendation:

Authenticate App1 by using:
* 2. A system-assigned managed identity

Authorize App1 to retrieve Key Vault secrets by using:
* 4. A role assignment

Explanation:
* System-assigned managed identity: This is a secure and convenient way to authenticate App1 without managing credentials directly. It is automatically created when you deploy the App Service instance.
* Role assignment: Granting App1 the necessary permissions to access Key Vault secrets through a role assignment adheres to the principle of least privilege and ensures strong security.

Additional considerations:
* Key Vault access policies: While not directly required in this scenario, creating fine-grained access policies can further enhance security by limiting the permissions granted to App1.

By adopting this approach, App1 can securely access the required credentials and access strings from Key Vault without compromising security.
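This combination can be sketched with the CLI (names are placeholders); App1 then reads secrets at runtime without any stored credentials:

```shell
# Sketch: enable the system-assigned managed identity on App1's App Service.
principalId=$(az webapp identity assign \
  --name app1 --resource-group rg-app1 --query principalId -o tsv)

# Authorize it on the vault with an RBAC role assignment.
az role assignment create \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --role "Key Vault Secrets User" \
  --scope "$(az keyvault show -n kv-app1 --query id -o tsv)"
```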

256
Q

Q1-6 T7

Fabrikam - Case Study

Question:

You design a solution for the web tier of WebApp1 as shown in the exhibit.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Hot Area

A

Answer

Box 1: Yes -
Any new deployments to Azure must be redundant in case an Azure region fails.
Traffic Manager is resilient to failure, including the failure of an entire Azure region.

Box 2: No -
Traffic Manager provides load balancing, but not auto-scaling.

Box 3: No -
Automatic failover using Azure Traffic Manager: when you have complex architectures and multiple sets of resources capable of performing the same function, you can configure Azure Traffic Manager (based on DNS) to check the health of your resources and route the traffic from the non-healthy resource to the healthy resource.

257
Q

Q1-4 T7

Fabrikam - Case Study

Question:

You need to recommend a data storage strategy for WebApp1.
What should you include in the recommendation?

A. an Azure virtual machine that runs SQL Server
B. a fixed-size DTU Azure SQL database
C. an Azure SQL Database elastic pool
D. a vCore-based Azure SQL database

A

Correct Answer: D

The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.

Note: A virtual core (vCore) represents a logical CPU and offers you the option to choose between generations of hardware and the physical characteristics of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based purchasing model gives you flexibility, control, transparency of individual resource consumption, and a straightforward way to translate on-premises workload requirements to the cloud. This model optimizes price and allows you to choose compute, memory, and storage resources based on your workload needs.

Incorrect:

Not C: Azure SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases, not for a single database.

258
Q

Q1-5 T7

Fabrikam - Case Study

Question:

To meet the authentication requirements of Fabrikam, what should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

A

Answer

Minimum Number of Azure AD Tenants: 1

Fabrikam requires a single Azure AD tenant to manage identities from its corp.fabrikam.com forest and synchronize them for cloud-based services and applications.

Minimum Number of Custom Domains: 1

Fabrikam should add at least one custom domain (corp.fabrikam.com) to their Azure AD tenant to provide users with familiar email addresses and sign-in experiences.

Minimum Number of Conditional Access Policies: 2

Fabrikam needs at least two conditional access policies:

Policy 1: Require MFA for administrative access to the Azure portal. This policy enhances security by enforcing multi-factor authentication for administrative actions.
Policy 2: Enforce authentication using the corp.fabrikam.com UPN and conditional access rules. This policy ensures users authenticate with their corporate credentials and can carry additional controls such as device compliance or location-based restrictions.

By implementing these components, Fabrikam can establish a solid foundation for its hybrid identity environment and meet the specified authentication requirements.

259
Q

Q1-6 T7

Fabrikam - Case Study

Question:

You need to recommend a notification solution for the IT Support distribution group.
What should you include in the recommendation?

A. a SendGrid account with advanced reporting
B. an action group
C. Azure Network Watcher
D. Azure AD Connect Health

A

Correct Answer: D. Azure AD Connect Health

Explanation:

  • Azure AD Connect Health is specifically designed to monitor the health of your on-premises Active Directory and Azure AD Connect infrastructure.
  • It provides real-time alerts and reports on directory synchronization status, synchronization errors, and other critical issues.
  • You can configure Azure AD Connect Health to send notifications to the IT Support distribution group whenever there are problems with directory synchronization services.

Why not the other options:

  • SendGrid: While useful for email delivery and advanced reporting, it’s not directly tied to monitoring directory synchronization health.
  • Action group: While flexible for creating custom actions based on alerts, it requires integration with a monitoring service like Azure AD Connect Health to provide the necessary data.
  • Azure Network Watcher: Primarily used for monitoring network connectivity and performance, it’s not the best fit for directory synchronization issues.

By choosing Azure AD Connect Health, you ensure that the IT Support team is promptly notified of any issues that could impact user authentication and access to resources.

An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.

Note: You can configure the Azure AD Connect Health service to send email notifications when alerts indicate that your identity infrastructure is not healthy. This occurs when an alert is generated and when it is resolved.


260
Q

Q1-6 T7

Fabrikam - Case Study

Question:

You need to recommend a solution to meet the database retention requirements.
What should you recommend?

A. Configure a long-term retention policy for the database.
B. Configure Azure Site Recovery.
C. Use automatic Azure SQL Database backups.
D. Configure geo-replication of the database.

A

Recommendation: C. Use automatic Azure SQL Database backups

Explanation:

  • Azure SQL Database automatically creates full database backups and differential backups.
  • You can configure long-term retention policies for these backups to meet the requirement of retaining database backups for a minimum of seven years.
  • This option is cost-effective, reliable, and ensures that you have multiple recovery points available in case of data loss or corruption.

Why not the other options:

  • Configure a long-term retention policy for the database: The retention policy is part of the full solution, but it operates on the automatic backups; without those backups there is nothing for the policy to retain.
  • Configure Azure Site Recovery: Primarily used for disaster recovery and replicating entire virtual machines, not specific for database backups.
  • Configure geo-replication of the database: While this provides high availability and disaster recovery, it’s not designed for long-term retention of database backups.

By choosing automatic Azure SQL Database backups with long-term retention, you effectively address the database retention requirement.

Scenario: Database backups must be retained for a minimum of seven years to meet compliance requirements.
Many applications have regulatory, compliance, or other business purposes that require you to retain database backups beyond the 7-35 days provided by Azure SQL Database and Azure SQL Managed Instance automatic backups. By using the long-term retention (LTR) feature, you can store specified SQL Database and SQL Managed Instance full backups in Azure Blob storage with configured redundancy for up to 10 years. LTR backups can then be restored as a new database.
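The LTR policy is configured per database; a hedged sketch meeting the seven-year requirement (server and database names are placeholders):

```shell
# Sketch: keep the yearly full backup taken in week 1 for 7 years, plus
# weekly backups for 4 weeks, on the migrated database.
az sql db ltr-policy set \
  --resource-group rg-data \
  --server sql-fabrikam \
  --database webapp1db \
  --weekly-retention P4W \
  --yearly-retention P7Y \
  --week-of-year 1
```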

261
Q

Q1 T7

Fabrikam - Case Study

Question:

To meet the authentication requirements of Fabrikam, what should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

Minimum number of Azure AD tenants: 1

Explanation:

A single Azure AD tenant can manage identities from multiple Active Directory forests using Azure AD Connect and conditional access policies to meet the specified requirements.

Minimum number of conditional access policies to create: 2

Explanation:

Policy 1: Require MFA for administrative access to the Azure portal.
Policy 2: Ensure users authenticate using their corp.fabrikam.com credentials and enforce policies to restrict access to authorized users.

262
Q

Q1-2 T7

Fabrikam - Case Study

Question:

You are evaluating the components of the migration to Azure that require you to provision an Azure Storage account. For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Answer Area

A

Answer

No: Azure SQL Managed Instance: A storage account is not strictly required; Azure Database Migration Service (DMS) can migrate the database without one, although a storage account can be used to stage backup files.

No: Azure App Service: Azure App Service can scale and update a web app from a single point with its own built-in storage.

No: Azure SQL Database Monitoring: A storage account is not mandatory for Azure SQL Database monitoring with a Log Analytics workspace. You can use a Log Analytics workspace for streaming data, but for archiving large amounts of data, a storage account is a cheaper option.

Azure Storage Account Necessity
1. For SQL server database migration
Not strictly required. While Azure Storage can be used for storing database backups during the migration process, it’s not mandatory. Azure offers several other options for database migration, such as:
* Azure Database Migration Service: This service handles the migration process without requiring an Azure Storage account.
* Bulk data import/export: For large datasets, you can use Azure Blob Storage to transfer data, but it’s not the only option.
2. For the website content storage
Not required. Azure App Service, a likely choice for hosting the website, provides built-in storage for website content. You can use Azure Blob Storage for static content or large files, but it’s not essential for the website itself.
3. For the database metric monitoring
Not directly required. Azure Monitor can collect and store database metrics without explicitly using an Azure Storage account. It offers various options for storing and analyzing metrics, including Log Analytics workspaces and Azure Monitor metrics.
However, indirectly, an Azure Storage account might be involved:
* Log Analytics workspace: While not a storage account itself, Log Analytics workspaces use underlying storage to store collected data.
* Backup storage: If you’re backing up database metrics for long-term retention, Azure Blob Storage can be a suitable option.
In conclusion, while an Azure Storage account is not strictly necessary for any of the three scenarios, it can be leveraged for specific use cases within those scenarios, such as backup, bulk data transfer, or long-term metric storage.

263
Q

Q1-3 T7

Fabrikam - Case Study

Question:

What should you include in the identity management strategy to support the planned changes?

A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.
C. Deploy a new Azure AD tenant for the authentication of new R&D projects.
D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure.

A

Correct Answer: A

Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on- premises network. (This requires domain controllers in Azure).

Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails. (This requires domain controllers on-premises).

Identity Management Strategy for Fabrikam

The identity management strategy for Fabrikam should focus on a hybrid approach that leverages both on-premises Active Directory and Azure AD to meet the planned changes. Here's how each option measures up:

  • A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure: Correct. Domain controllers in Azure keep directory synchronization working even if the link between Azure and the on-premises network fails, while the domain controllers that remain on-premises continue to authenticate users if the Internet link fails.
  • B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure: This is not recommended as R&D will remain on-premises. Keeping some domain controllers on-premises ensures seamless authentication for those users.
  • C. Deploy a new Azure AD tenant for the authentication of new R&D projects: Since R&D will be on-premises, a separate Azure AD tenant wouldn’t be beneficial. It’s best to leverage the existing corp.fabrikam.com for authentication.
  • D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure: Unnecessary for this scenario. R&D remains on-premises, so managing domain controllers in Azure isn’t required.

Here’s what should be included in the identity management strategy:

  • Azure AD Connect: Implement Azure AD Connect to synchronize user identities and groups from corp.fabrikam.com to Azure AD. This allows R&D users and other on-premises users to authenticate with their existing credentials against Azure AD for access to cloud resources.
  • Password Hash Synchronization (PHS): Configure PHS for secure on-premises password verification during Azure AD authentication. This ensures users only have to manage one password for both on-premises and cloud services.
  • Azure AD Connect Health: Monitor the health of directory synchronization and configure alerts to notify the IT Support group (already recommended earlier) of any issues.
  • Azure AD Pass-through Authentication (PTA): Implement PTA as a failover mechanism for authentication if the internet link to Azure fails. This allows on-premises users to authenticate against corp.fabrikam.com even during an outage.

By implementing these elements, Fabrikam can achieve a secure and centralized identity management solution that supports both on-premises and cloud resources for their planned changes.