test0 Flashcards

1
Q

You are designing an Azure governance solution.
All Azure resources must be easily identifiable based on the following operational information: environment, owner, department and cost center.
You need to ensure that you can use the operational information when you generate reports for the Azure resources.
What should you include in the solution?

A. an Azure data catalog that uses the Azure REST API as a data source
B. an Azure management group that uses parent groups to create a hierarchy
C. an Azure policy that enforces tagging rules
D. Azure Active Directory (Azure AD) administrative units

A

The correct answer is C. an Azure policy that enforces tagging rules.

Here’s why:

Tags are the Key: Azure tags are key-value pairs that you can apply to Azure resources. They are specifically designed to store metadata like environment, owner, department, and cost center. This allows you to easily filter, group, and report on your resources based on these operational details.

Azure Policy Enforces Consistency: Using Azure Policy, you can define rules that require specific tags to be present when resources are created or updated. This ensures that all resources are consistently tagged with the necessary information. Without policy, users might forget or apply tags inconsistently, making reporting difficult.

Let’s look at why the other options are not the best fit:

A. an Azure data catalog that uses the Azure REST API as a data source: Azure Data Catalog is a metadata management service that helps you discover, understand, and consume data. While it could potentially be used to collect and store tag information, it’s not the primary tool for enforcing tagging or making it consistently available.

B. an Azure management group that uses parent groups to create a hierarchy: Management groups are for organizing and managing subscriptions, not for tagging individual resources. They can help you apply policy at a high level, but they don’t provide the granular operational information you need for each resource.

D. Azure Active Directory (Azure AD) administrative units: Administrative units in Azure AD are for delegating administrative permissions to specific sets of users and resources. They do not directly relate to resource tagging and reporting of operational information.

In summary, to ensure resources are consistently tagged with operational information for reporting, you need to enforce tagging rules, which is best achieved with an Azure Policy.
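To make this concrete, here is a minimal sketch of what such a tagging policy's rule looks like, expressed as a Python dict mirroring the policy JSON (the tag name costCenter and the deny effect are illustrative choices; built-in definitions such as "Require a tag and its value on resources" follow the same shape):

```python
# Minimal sketch of an Azure Policy definition that denies creation of
# resources missing a required tag. "costCenter" is an illustrative tag
# name; in practice you would parameterize it and repeat the rule for
# environment, owner, and department.
policy_definition = {
    "properties": {
        "displayName": "Require a costCenter tag on resources",
        "mode": "Indexed",  # evaluate only resource types that support tags
        "policyRule": {
            "if": {
                # tags['costCenter'] does not exist on the incoming resource
                "field": "tags['costCenter']",
                "exists": "false",
            },
            "then": {
                "effect": "deny",
            },
        },
    }
}
```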

2
Q

You are designing a large Azure environment that will contain many subscriptions.
You plan to use Azure Policy as part of a governance solution.
To which three scopes can you assign Azure Policy definitions? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. subscriptions
D. compute resources
E. resource groups
F. management groups

A

The correct answers are C. subscriptions, E. resource groups, and F. management groups.

Here’s why:

Subscriptions (C): Azure Policies can be assigned directly to Azure subscriptions. This allows you to enforce policies across all resources within that subscription. This is a very common level for applying policies.

Resource Groups (E): Policies can be assigned at the resource group level, which provides granular control over a specific collection of resources. This is useful for applying different policies to different application environments or projects.

Management Groups (F): Management groups are designed to create a hierarchy above subscriptions, allowing you to apply policies to entire groups of subscriptions within your Azure environment. This is useful for establishing overarching governance rules for many subscriptions at once.

Let’s look at why the other options are not correct scopes for assigning Azure Policy definitions:

A. Azure Active Directory (Azure AD) administrative units: Azure AD administrative units are used for managing users and groups within Azure AD, not for managing resource policies.

B. Azure Active Directory (Azure AD) tenants: While Azure Policy definitions are created at the tenant level (so they can be used across subscriptions), policies are assigned to management groups, subscriptions or resource groups, not directly to the tenant.

D. Compute resources: While Azure Policy can apply to compute resources, you can’t directly assign a policy to an individual compute resource. Policies must be applied to management groups, subscriptions, or resource groups which contain resources.

Key takeaway: Azure Policy scopes are hierarchical, starting with Management Groups at the top, then Subscriptions, and then Resource Groups. This allows you to enforce consistent governance across your Azure environment at various levels of granularity.
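As a hedged illustration of the three valid scope formats, the sketch below uses the azure-mgmt-resource Python SDK; the subscription ID, management group name, resource group name, and policy definition GUID are all placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = PolicyClient(DefaultAzureCredential(), subscription_id)

# The three scopes at which Azure Policy definitions can be assigned:
scopes = [
    "/providers/Microsoft.Management/managementGroups/contoso-mg",  # management group
    f"/subscriptions/{subscription_id}",                            # subscription
    f"/subscriptions/{subscription_id}/resourceGroups/rg-app1",     # resource group
]

for i, scope in enumerate(scopes):
    client.policy_assignments.create(
        scope=scope,
        policy_assignment_name=f"require-costcenter-{i}",
        parameters={
            # ID of an existing definition (placeholder GUID):
            "policy_definition_id": (
                "/providers/Microsoft.Authorization/policyDefinitions/"
                "00000000-0000-0000-0000-000000000000"
            ),
        },
    )
```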

3
Q

HOTSPOT -
You plan to create an Azure environment that will contain a root management group and 10 child management groups. Each child management group will contain five Azure subscriptions. You plan to have between 10 and 30 resource groups in each subscription.
You need to design an Azure governance solution. The solution must meet the following requirements:
✑ Use Azure Blueprints to control governance across all the subscriptions and resource groups.
✑ Ensure that Blueprints-based configurations are consistent across all the subscriptions and resource groups.
✑ Minimize the number of blueprint definitions and assignments.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Level at which to define the blueprints:
The child management groups
The root management group
The subscriptions
Level at which to create the blueprint assignments:
The child management groups
The root management group
The subscriptions

A

Level at which to define the blueprints:

The root management group

Level at which to create the blueprint assignments:

The child management groups

Explanation:

Blueprint Definition Scope: Blueprints can be defined at the management group or subscription level, but in this scenario the management group level is the better choice. To minimize the number of blueprint definitions, define your blueprints at the root management group level. This gives you a single source of truth for governance configurations, because all child management groups inherit the blueprints defined at the root management group level.

Blueprint Assignment Scope: You want governance applied consistently to all 50 subscriptions and the resources they contain. You should create blueprint assignments at the child management group level. When you assign a blueprint to a management group, all subscriptions within that group inherit the assigned configurations.

Why this is the best approach:

Centralized Governance: Defining the blueprint at the root allows you to have a central definition that you can manage in one place.

Consistent Application: Assigning the blueprint to the child management groups ensures that all subscriptions within each child management group have the same policy and resource settings, providing consistent governance.

Minimize Effort: You avoid creating multiple blueprint definitions or assigning to each subscription individually.

Why other options are incorrect:

Defining blueprints at the child management group level: This would require you to potentially have multiple blueprint definitions which contradicts the requirement of minimizing blueprint definitions.

Defining blueprints at subscription level: This would require you to potentially have multiple blueprint definitions and would require more effort to assign blueprints to all the subscriptions.

Assigning blueprints at the root management group level: While this would technically apply the blueprint to every child management group, assigning at the child management group level allows for greater flexibility if you later want to specify settings specific to a particular child management group.

Assigning blueprints at subscription level: This would require you to create an assignment for each of the 50 individual subscriptions, which contradicts the requirement to minimize blueprint assignments.

4
Q

HOTSPOT -
You need to design an Azure policy that will implement the following functionality:
✑ For new resources, assign tags and values that match the tags and values of the resource group to which the resources are deployed.
✑ For existing resources, identify whether the tags and values match the tags and values of the resource group that contains the resources.
✑ For any non-compliant resources, trigger auto-generated remediation tasks to create missing tags and values.
The solution must use the principle of least privilege.
What should you include in the design? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Azure Policy effect to use:
Append
EnforceOPAConstraint
EnforceRegoPolicy
Modify
Azure Active Directory (Azure AD) object and role-based
access control (RBAC) role to use for the remediation tasks:
A managed identity with the Contributor role
A managed identity with the User Access Administrator role
A service principal with the Contributor role
A service principal with the User Access Administrator role

A

Azure Policy effect to use:

Modify

Azure Active Directory (Azure AD) object and role-based access control (RBAC) role to use for the remediation tasks:

A managed identity with the Contributor role

Explanation:

Modify Effect: The Modify effect in Azure Policy is the correct choice because it can add, update, or remove tags on resources. It handles both new resource deployments (by assigning tags at creation) and existing resources (by updating tags as needed), and it supports the auto-generated remediation tasks the scenario requires. The other options do not provide this functionality.

Append can only add tags during resource creation or update; it cannot change existing values and does not support remediation tasks.

EnforceOPAConstraint is related to Open Policy Agent, which is not the correct approach in this case.

EnforceRegoPolicy is related to Rego policies, which is not the correct approach in this case.

Managed Identity with Contributor Role:

Managed Identity: Using a managed identity is the best practice for security in Azure. It eliminates the need to manage credentials and provides a secure way for Azure Policy to access resources.

Contributor Role: The Contributor role is sufficient for Azure Policy to create and update tags on resources within the scope where the policy is applied, and of the roles offered it is the lesser privilege for this task. The User Access Administrator role can grant access to other users, which is far more than tag remediation requires and therefore violates the principle of least privilege.

A service principal could be used in a similar capacity, but a managed identity is the better practice because it removes the need to manage credentials.

Why this is the best approach:

Correct Functionality: The Modify effect allows for the desired behavior to create and modify tags.

Principle of Least Privilege: The Contributor role provides the necessary permissions to update tags without granting unnecessary access.

Security Best Practice: Using a managed identity avoids credential management and is the recommended approach for accessing Azure resources securely.

In summary: The Modify effect along with a managed identity using the Contributor role is the most effective way to implement this policy with the principle of least privilege.
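For reference, the built-in "Inherit a tag from the resource group if missing" definitions use a Modify rule shaped roughly like the sketch below, expressed here as a Python dict (the GUID shown is the built-in Contributor role ID that the policy's managed identity is granted for remediation):

```python
# Sketch of a Modify rule that copies a missing tag from the parent
# resource group onto the resource, shaped after the built-in
# "Inherit a tag from the resource group if missing" policy definition.
CONTRIBUTOR_ROLE_ID = (
    "/providers/microsoft.authorization/roleDefinitions/"
    "b24988ac-6180-42a0-ab88-20f7382dd24c"  # built-in Contributor role
)

policy_rule = {
    "if": {
        "allOf": [
            # The resource is missing the tag...
            {
                "field": "[concat('tags[', parameters('tagName'), ']')]",
                "exists": "false",
            },
            # ...and the resource group has a value available to inherit.
            {
                "value": "[resourceGroup().tags[parameters('tagName')]]",
                "notEquals": "",
            },
        ]
    },
    "then": {
        "effect": "modify",
        "details": {
            # Role granted to the policy's managed identity for remediation:
            "roleDefinitionIds": [CONTRIBUTOR_ROLE_ID],
            "operations": [
                {
                    "operation": "addOrReplace",
                    "field": "[concat('tags[', parameters('tagName'), ']')]",
                    "value": "[resourceGroup().tags[parameters('tagName')]]",
                }
            ],
        },
    },
}
```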

5
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.
What should you include in the recommendation?

A. Azure Activity Log
B. Azure Advisor
C. Azure Analysis Services
D. Azure Monitor action groups

A

The correct answer is A. Azure Activity Log.

Here’s why:

Azure Activity Log’s Purpose: The Azure Activity Log is a service that provides a record of all operations that occur in your Azure subscription. This includes creation, modification, and deletion of resources. It’s essentially an audit log for your Azure environment. This includes the information on new ARM resource deployments.

Reporting on Deployments: Because the Activity Log records all resource deployment events, it is the ideal place to extract data for your monthly report of new deployments. You can filter and export the Activity Log data to analyze and build your report.

Let’s look at why the other options are not the best fit:

B. Azure Advisor: Azure Advisor analyzes your Azure resources and provides recommendations for performance, security, cost, and high availability improvements. While useful, it does not directly provide a report of new resource deployments.

C. Azure Analysis Services: Azure Analysis Services is a fully managed platform-as-a-service (PaaS) that provides enterprise-grade data modeling, analysis, and reporting capabilities. It’s typically used for complex data analysis, not for basic reporting on resource deployments.

D. Azure Monitor action groups: Azure Monitor action groups are used to trigger actions when certain alerts are fired from Azure Monitor. While it’s great for real-time alerts, it’s not intended for generating monthly reports on resource deployments.

Key Takeaway: Azure Activity Log is the core service in Azure that records all operations on your resources, making it the best choice for generating reports on new deployments.

6
Q

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.
Solution: Install and configure the Azure Monitoring agent and the Dependency Agent on all the virtual machines. Use VM insights in Azure Monitor to analyze the network traffic.
Does this meet the goal?

A. Yes
B. No

A

The answer is B. No.

Here’s why:

While the Azure Monitor agent and the Dependency Agent are indeed the components that power VM insights, they are not the correct tools for analyzing packet-level network traffic to determine whether packets are being allowed or denied. They provide a view of network connections and dependencies, but not the detailed packet-level information this scenario requires.

Here’s a more detailed explanation:

VM Insights: VM Insights in Azure Monitor provides information about the performance and dependencies of your virtual machines. It can show you which machines are communicating with each other and the network connections between them, but not the detail of whether packets are being allowed or dropped based on firewall rules, for example.

Dependency Agent: This agent discovers and maps the connections and dependencies between processes, but it does not capture packet-level information.

Network Connectivity Issues: To diagnose network connectivity issues, particularly when trying to determine if packets are being allowed or denied, you typically need more detailed tools that operate at the network layer.

What should be used instead?

To analyze if packets are being allowed or denied, you would typically use:

Network Watcher: Azure Network Watcher is a service that allows you to monitor and diagnose network conditions. Key features include:

Packet Capture: This feature lets you capture packets going to and from virtual machines, allowing for deep packet inspection.

IP Flow Verify: This feature allows you to test if packets are being allowed or denied based on the configured security rules.

Connection Troubleshooter: Helps troubleshoot connection issues by verifying the path of the traffic and the security rules in place.

Network Security Group (NSG) Flow Logs: This allows you to capture information about the IP traffic flowing through an NSG. You can use this data to analyze whether traffic is being allowed or denied.

In summary: While the proposed solution is useful for monitoring and visualizing network connections, it’s not suitable for analyzing the specific packet-level details needed for diagnosing packet allowance or denial issues. Network Watcher or NSG Flow Logs are more appropriate tools for the required task.
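As a rough sketch of the IP flow verify check using the azure-mgmt-network Python SDK (the subscription ID, resource names, and IP addresses are placeholders; the watcher name follows the default NetworkWatcher_<region> convention):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

client = NetworkManagementClient(
    DefaultAzureCredential(),
    "00000000-0000-0000-0000-000000000000",  # placeholder subscription ID
)

# Ask Network Watcher whether an inbound TCP packet to a VM would be
# allowed or denied by the effective NSG rules (all names/IPs assumed).
poller = client.network_watchers.begin_verify_ip_flow(
    resource_group_name="NetworkWatcherRG",
    network_watcher_name="NetworkWatcher_eastus",
    parameters=VerificationIPFlowParameters(
        target_resource_id=(
            "/subscriptions/00000000-0000-0000-0000-000000000000"
            "/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1"
        ),
        direction="Inbound",
        protocol="TCP",
        local_ip_address="10.0.0.4",    # the VM's private IP
        local_port="443",
        remote_ip_address="10.1.0.10",  # the on-premises source
        remote_port="50000",
    ),
)
result = poller.result()
print(result.access, result.rule_name)  # e.g. "Deny" and the matching NSG rule
```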

7
Q

DRAG DROP -
You need to design an architecture to capture the creation of users and the assignment of roles. The captured data must be stored in Azure Cosmos DB.
Which services should you include in the design? To answer, drag the appropriate services to the correct targets. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Azure Services
Azure Event Grid
Azure Event Hubs
Azure Functions
Azure Monitor Logs
Azure Notification Hubs
Answer Area (a pipeline with two Azure service slots to fill):
Azure Active Directory audit log → [Azure service] → [Azure service] → Cosmos DB

A

Azure Active Directory audit log → Azure Event Hubs → Azure Functions → Cosmos DB

Explanation:

Azure Active Directory audit log: This is the source of the data, containing the records of user creations and role assignments within your Azure AD tenant.

Azure Event Hubs: Azure Event Hubs is a highly scalable event ingestion service, ideal for capturing the audit log events from Azure AD. It can handle high volumes of data, which is crucial for logging events. It acts as a buffer or intermediary between the event source (Audit log) and the destination where the event data will be stored (Cosmos DB).

Azure Functions: Azure Functions provides a serverless compute platform, which makes it suitable for processing and transforming the raw event data from Event Hubs before storing it into Cosmos DB. We need an intermediary service to transform the event data before passing it to Cosmos DB. It also allows you to add logic to extract the specific fields you want from the raw audit log events for efficient querying.

Cosmos DB: Azure Cosmos DB is a NoSQL database that can store a large variety of data. In this case, it will store the transformed data of user creations and role assignments in a database.

Why other services are not appropriate:

Azure Event Grid: Event Grid is primarily for near real-time reactive event routing, not for high volume continuous data ingestion for storage, which we need here. It’s often used for more immediate actions like triggering alerts or other events.

Azure Monitor Logs: Azure Monitor Logs is used for storing and querying log and metrics data from Azure resources. It can be used for analyzing logs, but it’s not the appropriate intermediary to move the data to the Cosmos DB instance.

Azure Notification Hubs: Notification Hubs is for sending push notifications to various platforms and is not relevant for this scenario.

In summary: The correct flow is to capture the events from the Azure AD Audit Log with Azure Event Hubs, transform and prepare data using Azure Functions and then save the results in Azure Cosmos DB.
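A hedged sketch of the middle hop, using the Azure Functions Python v1 programming model (the binding names, operation-name filters, and audit payload shape are assumptions; the Event Hub trigger and Cosmos DB output bindings themselves would be configured in function.json):

```python
import json

import azure.functions as func


def main(event: func.EventHubEvent, docs: func.Out[func.DocumentList]):
    # Diagnostic-settings exports wrap Azure AD audit events in a
    # {"records": [...]} envelope (payload shape assumed here).
    payload = json.loads(event.get_body().decode("utf-8"))

    captured = func.DocumentList()
    for record in payload.get("records", []):
        # Keep only user-creation and role-assignment audit events.
        if record.get("operationName") in ("Add user", "Add member to role"):
            captured.append(func.Document.from_dict(record))

    # The Cosmos DB output binding (configured in function.json) writes
    # every document in the list to the target container.
    docs.set(captured)
```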

8
Q

HOTSPOT -
What should you implement to meet the identity requirements? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Service:
Azure AD Identity Governance
Azure AD Identity Protection
Azure AD Privileged Identity Management (PIM)
Azure Automation
Feature:
Access packages
Access reviews
Approvals
Runbooks

A

Service:
Azure AD Privileged Identity Management (PIM)

Feature:
Access reviews

Here’s why these are the correct choices:

For Service - Azure AD PIM:
• Provides time-based and approval-based role activation
• Minimizes risks from excessive permissions
• Manages, controls, and monitors access within Azure AD
• Essential for privileged account security
• Implements just-in-time access

For Feature - Access reviews:
• Part of an identity governance strategy
• Ensures the right people have appropriate access
• Helps maintain compliance
• Enables regular review of access rights
• Reduces security risks through periodic validation

Important notes for the AZ-305 exam:

Understand the differences between:
• Identity Governance (overall strategy)
• Identity Protection (risk-based security)
• PIM (privileged access management)

Know the key features:
• How access reviews work
• PIM workflow and configuration
• Identity governance implementation
• Security best practices

Focus on:
• Security principles
• Compliance requirements
• Access management lifecycle
• Privileged account protection

Remember to understand how these services integrate with other Azure security features for comprehensive identity management.

9
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.

What should you include in the recommendation?

A. Application Insights
B. Azure Arc
C. Azure Log Analytics
D. Azure Monitor metrics

A

The correct answer is C. Azure Log Analytics.

Here’s why:

Azure Log Analytics and the Activity Log: Azure Log Analytics is the service within Azure Monitor that allows you to collect and analyze logs and other data, including the Azure Activity Log. The Activity Log contains the records of all operations performed on resources within your subscription, including the creation of new resources. Log Analytics provides powerful querying and reporting capabilities, which allows you to extract and format the information about new resource deployments into a monthly report.

Data Collection: The Activity Log is routed to a Log Analytics workspace by creating a subscription-level diagnostic setting; once connected, the events are stored in the AzureActivity table, ready to query.

Querying: Using Kusto Query Language (KQL), you can write specific queries against the Activity Log data to filter for resource creation events, sort them by time, and create a monthly summary.

Reporting: You can then use Azure Log Analytics features like dashboards and workbooks to build visualizations for your monthly report, or export the data to another reporting tool.

Let’s examine why the other options are less suitable:

A. Application Insights: Application Insights is primarily for monitoring the performance and behavior of applications. While it does capture logs related to application usage and errors, it’s not designed to track resource deployment events from the Azure Activity Log.

B. Azure Arc: Azure Arc is a service that extends Azure management capabilities to other platforms, including on-premises and other clouds. It does not have a direct relationship with reporting on Azure resource deployments.

D. Azure Monitor metrics: Azure Monitor metrics collect numeric data over time, like CPU usage, memory utilization, etc. While these are valuable for performance monitoring, they don’t provide the detailed event information, or creation events, needed for the deployment reporting requirement.

In conclusion: Azure Log Analytics is the correct service because it is designed for collecting, storing, querying and reporting on log data, including the Azure Activity Log, which contains the necessary information to report on new ARM resource deployments.
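As an illustration, the monthly report query could be run with the azure-monitor-query Python SDK against the AzureActivity table (the workspace GUID is a placeholder, and the status filter is an assumption about how the events appear in your data):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL over the AzureActivity table (populated by routing the Activity Log
# to the workspace). Column names follow the documented AzureActivity schema.
query = """
AzureActivity
| where OperationNameValue =~ "Microsoft.Resources/deployments/write"
| where ActivityStatusValue == "Success"
| summarize deployments = count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
"""

response = client.query_workspace(
    workspace_id="00000000-0000-0000-0000-000000000000",  # placeholder GUID
    query=query,
    timespan=timedelta(days=30),  # the monthly reporting window
)

for table in response.tables:
    for row in table.rows:
        print(row)
```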

10
Q

You have an Azure subscription.

You plan to deploy a monitoring solution that will include the following:

  • Azure Monitor Network Insights
  • Application Insights
  • Microsoft Sentinel
  • VM insights

The monitoring solution will be managed by a single team.

What is the minimum number of Azure Monitor workspaces required?

A. 1
B. 2
C. 3
D. 4

A

The correct answer is A. 1.

Here’s why:

Azure Monitor Workspace (Log Analytics Workspace): An Azure Monitor workspace, also known as a Log Analytics workspace, is a fundamental component of Azure Monitor. It’s where log data and other telemetry are stored for analysis and visualization. All the services you listed (Network Insights, Application Insights, Microsoft Sentinel, and VM insights) can send data to the same workspace.

Single Team Management: Since the monitoring solution will be managed by a single team, there’s no need to separate the data into multiple workspaces for access or organizational purposes.

Cost-Effectiveness: Using a single workspace is generally more cost-effective, as you avoid the overhead of managing multiple workspaces and potential data transfer charges between them.

Why Not More Workspaces?

Multiple workspaces are often used when:

You need to separate data for different environments (e.g., development, testing, production).

You have different teams that require segregated access to specific data.

You have different regulatory or compliance requirements that require isolating data from different sources.

None of those conditions apply here: The problem specifies a single team managing the monitoring solution, which means that data separation and access control do not require multiple workspaces. Therefore, one workspace is the most efficient option.

In conclusion: For a single team managing a monitoring solution that includes the specified services, a single Azure Monitor workspace is sufficient and the most cost-effective choice.

11
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.

What should you include in the recommendation?

A. Application Insights
B. Azure Analysis Services
C. Azure Advisor
D. Azure Activity Log

A

The correct answer is D. Azure Activity Log.

Here’s why:

Azure Activity Log’s Function: The Azure Activity Log is a service that provides a detailed record of all operations that occur in your Azure subscription. This includes the creation, modification, and deletion of resources. It is essentially an audit log for your Azure environment. Specifically, it records when new ARM resources are deployed.

Generating Reports: The Activity Log’s data can be filtered, exported, and analyzed to create a monthly report of new resource deployments. You can export the log to various destinations, such as Azure Storage, Azure Event Hubs, or Azure Log Analytics, for further analysis and reporting.

Purpose-Built: The Activity Log is designed for tracking operational events, such as the creation of resources. It’s the most appropriate tool for generating this kind of report.

Let’s review why the other options are not the best fit:

A. Application Insights: Application Insights is a service designed to monitor the performance and usage of applications. While it can log some operational events from within the application, it doesn’t track resource deployment events at the subscription level from ARM.

B. Azure Analysis Services: Azure Analysis Services is a data analytics service used for creating complex data models for reporting. It does not contain the data on new resource deployments at the ARM level.

C. Azure Advisor: Azure Advisor is a recommendation engine that analyses your Azure resources and provides recommendations for cost, performance, and security improvements. It’s not designed for reporting on new resource deployments.

In conclusion: The Azure Activity Log is the ideal service for providing the necessary data to generate a monthly report of new ARM resource deployments, as it records all resource operations within your Azure subscription.

12
Q

HOTSPOT
Case Study
Overview
Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

Existing Environment: Active Directory Environment

The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests.

Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication.

Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.

Existing Environment: Network Infrastructure

Each office contains at least one domain controller from the corp.fabrikam.com domain. The main office contains all the domain controllers for the rd.fabrikam.com forest.

All the offices have a high-speed connection to the internet.

An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.

The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.

Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

Existing Environment: Problem Statements

The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Fabrikam plans to move most of its production workloads to Azure during the next few years, including virtual machines that rely on Active Directory for authentication.

As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft 365 deployment.

All R&D operations will remain on-premises.

Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

Requirements: Technical Requirements

Fabrikam identifies the following technical requirements:

  • Website content must be easily updated from a single point.
  • User input must be minimized when provisioning new web app instances.
  • Whenever possible, existing on-premises licenses must be used to reduce cost.
  • Users must always authenticate by using their corp.fabrikam.com UPN identity.
  • Any new deployments to Azure must be redundant in case an Azure region fails.
  • Whenever possible, solutions must be deployed to Azure by using the Standard pricing tier of Azure App Service.
  • An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
  • In the event that a link fails between Azure and the on-premises network, ensure that the virtual machines hosted in Azure can authenticate to Active Directory.
  • Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

Requirements: Database Requirements

Fabrikam identifies the following database requirements:

  • Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
  • To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
  • Database backups must be retained for a minimum of seven years to meet compliance requirements.

Requirements: Security Requirements

Fabrikam identifies the following security requirements:

  • Company information including policies, templates, and data must be inaccessible to anyone outside the company.
  • Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an internet link fails.
  • Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
  • All administrative access to the Azure portal must be secured by using multi-factor authentication (MFA).
  • The testing of WebApp1 updates must not be visible to anyone outside the company.

To meet the authentication requirements of Fabrikam, what should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Minimum number of Azure AD tenants:
0
1
2
3
4
Minimum number of custom domains to add:
0
1
2
3
4
Minimum number of conditional access policies to create:
0
1
2
3
4

A

Minimum number of Azure AD tenants:

Answer: 1

Explanation: Fabrikam needs a single Azure AD tenant to represent their organization in Azure. This tenant will be used to synchronize user identities from the on-premises corp.fabrikam.com domain to the cloud, allowing users to authenticate to Azure resources and Microsoft 365 services using their existing corporate credentials. They also want to be able to access the Azure portal with their on-premises credentials. There is no need for multiple tenants as there is only one organization.

Minimum number of custom domains to add:

Answer: 1

Explanation: Fabrikam needs to add a single custom domain that will be used as their UPN (User Principal Name) suffix. This domain should match their on-premises domain, corp.fabrikam.com, so that users can log in to Azure resources and services with the same UPN they use on-premises. This is a key step to establish a hybrid identity environment and to allow users to use the same credentials in the cloud.

Minimum number of conditional access policies to create:

Answer: 2

Explanation: Fabrikam requires at least two conditional access policies to meet the requirements:

MFA for Azure Portal Administrators: One policy to enforce MFA for all administrators when accessing the Azure portal using their corp.fabrikam.com accounts. This meets the requirement that “All administrative access to the Azure portal must be secured by using multi-factor authentication (MFA).”

Access From On-Premises Networks: One policy to ensure that users can still authenticate to resources (including Azure virtual machines) using their corp.fabrikam.com credentials even if the connection between Azure and the on-premises network fails. This addresses the need: “Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an internet link fails.”

Why Other Options Are Incorrect:

Zero or Multiple Azure AD Tenants: Only one Azure AD tenant is needed to centralize identity management for the organization. Using more than one would add unnecessary complexity.

Zero or Multiple Custom Domains: They need at least 1 custom domain to manage the login for Azure AD, using the same user principal name as they use on premise.

Zero or One Conditional Access Policy: Two policies are required to fulfil the security requirements. One policy is for MFA and another policy to ensure on-premises access. Three or more may be needed but the problem asks for the minimum amount.

In Summary:

One Azure AD tenant is needed to centralize identity management.

One custom domain needs to be added to allow users to authenticate with their UPN.

Two conditional access policies are the minimum needed to secure administrator access to the portal with MFA, and to allow for on-premises access if the internet link fails.

13
Q

You have an Azure subscription that contains an Azure SQL database named DB1. Several queries that query the data in DB1 take a long time to execute.

You need to recommend a solution to identify the queries that take the longest to execute.

What should you include in the recommendation?

A. SQL Database Advisor
B. Azure Monitor
C. Performance Recommendations
D. Query Performance Insight

A

The correct answer is D. Query Performance Insight.

Here’s why:

Purpose-Built for Query Analysis: Query Performance Insight is a feature of Azure SQL Database specifically designed to identify and analyze the performance of database queries. It provides detailed information about query execution, including duration, resource consumption (CPU, I/O, etc.), and execution counts. This makes it ideal for pinpointing the queries that are taking the longest to execute.

Direct Identification of Slow Queries: It directly surfaces the slowest running queries, making it easy to identify the problem areas in your database workload.

Historical Data: It also shows historical query performance, which is useful for trend analysis and for identifying regressions after changes.

Let’s look at why the other options are not the best fit:

A. SQL Database Advisor: The SQL Database Advisor offers recommendations for improving database performance, such as indexing or schema adjustments. While these recommendations might indirectly improve query performance, it doesn’t directly identify which queries are running slowly. It’s more proactive than reactive in addressing performance issues. It will not directly show which queries are slow.

B. Azure Monitor: Azure Monitor is a general monitoring service for Azure resources. While it can collect metrics and logs for your Azure SQL Database, it does not provide the specific query performance insights that Query Performance Insight provides. You would have to write your own custom logs to track these slow queries if using Azure Monitor, and it’s not as easy as using Query Performance Insight.

C. Performance Recommendations: “Performance Recommendations” is a general term rather than a specific tool or service. Azure SQL Database has the Database Advisor, which gives recommendations, but it does not directly identify the slowest queries.

In summary: Query Performance Insight is the correct choice because it is specifically designed for analyzing the performance of queries and will directly show the queries that take the longest to execute.

14
Q

You have an Azure App Service Web App that includes Azure Blob storage and an Azure SQL Database instance. The application is instrumented by using the Application Insights SDK.

1.) Correlate Azure resource usage and performance data with app configuration and performance data

2.) Visualize the relationships between application components

3.) Track requests and exceptions to a specific line of code within the application

4.) Analyze how many users return to the application and how often they select a particular dropdown value

You need to design a monitoring solution for the web app. Which Azure monitoring services should you use for each?

a. Azure Application Insights
b. Azure Service Map
c. Azure Monitor Logs
d. Azure Activity Log

A
  1. Correlate Azure resource usage and performance data with app configuration and performance data:

Answer: a. Azure Application Insights

Explanation: Application Insights is specifically designed to monitor applications and provides deep insights into their performance. It automatically collects telemetry data like request rates, response times, exception rates, and dependency calls. When combined with the Azure Monitor metrics that Application Insights collects on the host and other Azure resources, you can correlate application performance with underlying infrastructure performance to identify bottlenecks and performance issues. It can also show performance information in the application itself via traces.

  2. Visualize the relationships between application components:

Answer: b. Azure Service Map

Explanation: Azure Service Map automatically discovers application components and maps the dependencies between them. It provides a visual representation of the application architecture, allowing you to quickly identify how different components are connected and the network traffic flows between them. This visualization is crucial for understanding complex application architectures.

  3. Track requests and exceptions to a specific line of code within the application:

Answer: a. Azure Application Insights

Explanation: Using the Application Insights SDK, you can implement custom telemetry, including logging specific trace statements, and exceptions within the code. This capability allows developers to track requests as they pass through the various parts of the application and to pinpoint issues with specific lines of code. With the Application Insights code level diagnostics, you can track execution flow to see which line of code is causing errors.

  4. Analyze how many users return to the application and how often they select a particular dropdown value:

Answer: a. Azure Application Insights

Explanation: Application Insights provides out-of-the-box user session tracking and event tracking. You can analyze user activity, including how many users return to your application and frequency. You can create custom event telemetry to track particular actions, such as selecting a dropdown value and using this data to generate usage patterns. You can track events within the code and also with client-side JavaScript.

In summary:

a. Azure Application Insights: Used for application performance monitoring, correlating infrastructure and application metrics, custom logging, code level diagnostics, and user behavior tracking.

b. Azure Service Map: Used for visualizing the relationships and dependencies between application components.

c. Azure Monitor Logs: This is not the best answer here as it would require a separate custom log that would need to be configured and managed separately. These logs are not automatically available for use.

d. Azure Activity Log: The Activity Log is more for administrative actions and not for application monitoring.

Therefore, you should use:

1 -> a

2 -> b

3 -> a

4 -> a
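For item 4, a hedged sketch of custom event telemetry using the classic applicationinsights Python package (the instrumentation key, event name, and properties are placeholders; newer apps would typically use the azure-monitor-opentelemetry distro instead):

```python
# Sketch using the classic "applicationinsights" Python package.
from applicationinsights import TelemetryClient

tc = TelemetryClient("00000000-0000-0000-0000-000000000000")  # placeholder key

# Custom event telemetry: each dropdown selection becomes a countable
# event, queryable later in Application Insights' customEvents table.
tc.track_event("DropdownSelected", {"control": "regionPicker", "value": "East US"})
tc.flush()
```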

15
Q

You have an on-premises Hyper-V cluster. The cluster contains Hyper-V hosts that run Windows Server 2016 Datacenter. The hosts are licensed under a Microsoft Enterprise Agreement that has Software Assurance.

The Hyper-V cluster contains 30 virtual machines that run Windows Server 2012 R2. Each virtual machine runs a different workload. The workloads have predictable consumption patterns.

You plan to replace the virtual machines with Azure virtual machines that run Windows Server 2016. The virtual machines will be sized according to the consumption pattern of each workload.

You need to recommend a solution to minimize the compute costs of the Azure virtual machines. Which two recommendations should you include in the solution?

A. Configure a spending limit in the Azure account center.
B. Create a virtual machine scale set that uses autoscaling.
C. Activate Azure Hybrid Benefit for the Azure virtual machines.
D. Purchase Azure Reserved Virtual Machine Instances for the Azure virtual machines.
E. Create a lab in Azure DevTest Labs and place the Azure virtual machines in the lab.
Discussion

A

The two correct recommendations are C. Activate Azure Hybrid Benefit for the Azure virtual machines and D. Purchase Azure Reserved Virtual Machine Instances for the Azure virtual machines.

Here’s why:

C. Activate Azure Hybrid Benefit:

How it Works: Azure Hybrid Benefit allows you to use your existing on-premises Windows Server licenses with Software Assurance to reduce the cost of running Windows Server virtual machines in Azure. Because the Hyper-V hosts are licensed under Software Assurance, you can apply the benefit to the Azure virtual machines and significantly reduce licensing costs.

Cost Savings: This directly lowers the per-hour cost of the virtual machines.

D. Purchase Azure Reserved Virtual Machine Instances:

How it Works: Reserved Instances (RIs) allow you to commit to using specific virtual machine sizes for one or three years, in exchange for a significant discount compared to pay-as-you-go pricing.

Cost Savings: Given the predictable consumption patterns of the workloads, using Reserved Instances for the virtual machines provides a huge cost savings. The problem states the virtual machines will be sized according to the consumption pattern.

Why other options are not the best fit for minimizing compute costs:

A. Configure a spending limit in the Azure account center: While spending limits are crucial for cost management, they don’t directly reduce compute costs. They can prevent surprise bills by limiting consumption but do not reduce the actual cost of resources consumed.

B. Create a virtual machine scale set that uses autoscaling: While autoscaling can reduce overall costs by scaling down VMs when not needed, this can lead to more complex management. Given that the workload is predictable, it is better to purchase reserved instances of the VMs, which will provide more cost savings and is less complex to manage. This approach can provide cost benefits but not as much as reserved instances. Autoscaling is better for unpredictable workloads.

E. Create a lab in Azure DevTest Labs and place the Azure virtual machines in the lab: Azure DevTest Labs can help with cost management in development and test environments but doesn’t directly reduce the cost of production virtual machines. The problem states that each VM runs a different workload which suggests that they are for production. DevTest Labs also does not provide cost benefits like Reserved Instances and Hybrid Benefit.

In summary: To minimize compute costs of the Azure VMs when the workload is predictable, you should use Azure Hybrid Benefit to reduce licensing costs, and Azure Reserved Instances for a substantial discount.

16
Q

You have an Azure subscription that contains the SQL servers:
SQLsvr1 –> RG1 –> East US
SQLsvr2 –> RG2 –> West US

The subscription contains the storage accounts:
Storage1 (StorageV2) –> RG1 –> East US
Storage2 (BlobStorage) –> RG2 –> West US

You create the Azure SQL databases:
SQLdb1 –> RG1 –> SQLsvr1 –> Standard pricing tier
SQLdb2 –> RG1 –> SQLsvr1 –> Standard pricing tier
SQLdb3 –> RG2 –> SQLsvr2 –> Premium pricing tier

1.) When you enable auditing for SQLdb1, can you store the audit info to storage1?

2.) When you enable auditing for SQLdb2, can you store the audit info to storage2?

3.) When you enable auditing for SQLdb3, can you store the audit info to storage2?

A

Key Concept: For Azure SQL Database auditing, you need a storage account in the same region as the SQL Server.

Here are the answers to your questions:

  1. When you enable auditing for SQLdb1, can you store the audit info to Storage1?

Answer: Yes

Explanation: SQLdb1 is located in the same resource group RG1 as SQLsvr1 and in the East US region. Storage1 is also in RG1 and the East US region. Because the storage account is in the same region as the SQL Server, it can be used to store audit logs.

  2. When you enable auditing for SQLdb2, can you store the audit info to Storage2?

Answer: No

Explanation: SQLdb2 is located in resource group RG1 and in the East US region. However, Storage2 is in resource group RG2 and the West US region. Since the storage account must be located in the same region as the SQL Server for auditing, Storage2 cannot be used to store audit logs for SQLdb2. You would need to use Storage1 or another storage account in the East US region.

  3. When you enable auditing for SQLdb3, can you store the audit info to Storage2?

Answer: Yes

Explanation: SQLdb3 is located in resource group RG2 in the West US region, the same region as SQLsvr2. Storage2 is also in RG2 and the West US region. Because they are in the same region, Storage2 is a valid storage account for SQLdb3’s audit logs.

In summary:

SQLdb1 (East US) can store audit logs in Storage1 (East US).

SQLdb2 (East US) CANNOT store audit logs in Storage2 (West US).

SQLdb3 (West US) can store audit logs in Storage2 (West US).

Important Note: When configuring auditing for Azure SQL Database, the storage account must be in the same Azure region as the SQL Server. It does not matter if the storage account is in the same resource group. It is also important to note that the storage account type can be either blob storage or general purpose V2 storage for SQL auditing.
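As a hedged sketch, enabling auditing for SQLdb1 against Storage1 with the azure-mgmt-sql Python SDK might look like this (the subscription ID and storage key are placeholders, and the dict-style parameters assume the SDK's usual model serialization):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(
    DefaultAzureCredential(),
    "00000000-0000-0000-0000-000000000000",  # placeholder subscription ID
)

# Point SQLdb1's auditing at Storage1 - both live in East US, satisfying
# the same-region rule discussed above. Names follow this card's scenario.
client.database_blob_auditing_policies.create_or_update(
    resource_group_name="RG1",
    server_name="SQLsvr1",
    database_name="SQLdb1",
    parameters={
        "state": "Enabled",
        "storage_endpoint": "https://storage1.blob.core.windows.net",
        "storage_account_access_key": "<storage-key>",  # placeholder
        "retention_days": 90,
    },
)
```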

17
Q

A company has a hybrid ASP.NET Web API application that is based on a software as a service (SaaS) offering.

Users report general issues with the data. You advise the company to implement live monitoring and use ad hoc queries on stored JSON data. You also advise the company to set up smart alerting to detect anomalies in the data.

You need to recommend a solution to set up smart alerting.
What should you recommend?

A. Azure Site Recovery and Azure Monitor Logs
B. Azure Data Lake Analytics and Azure Monitor Logs
C. Azure Application Insights and Azure Monitor Logs
D. Azure Security Center and Azure Data Lake Store

A

The correct answer is C. Azure Application Insights and Azure Monitor Logs.

Here’s why:

Azure Application Insights for Smart Alerting: Application Insights is a powerful Application Performance Monitoring (APM) service specifically designed to monitor web applications and their underlying services. It includes:

Smart Detection: Application Insights has built-in smart detection capabilities that use machine learning to automatically detect anomalies in your application’s performance, including response times, request rates, and exception rates. This is ideal for detecting unusual data issues.

Metrics and Telemetry: It collects a wealth of telemetry data that can be used for analysis and alerting. The data collected can include: application requests, traces, dependency calls, exceptions, and metrics. This is required to detect anomalies.

Custom Metrics: It allows you to create custom metrics and alerts specific to your application’s data patterns if needed.

Azure Monitor Logs (Log Analytics) for Data Analysis: While Application Insights handles smart alerting well, it’s useful to use Azure Monitor Logs (Log Analytics) in conjunction with it.

JSON Data Storage: Application Insights stores collected data, including logs and telemetry, in a Log Analytics workspace. This allows you to query and analyze your JSON data using Kusto Query Language (KQL).

Alerts based on Log queries: While Application Insights can trigger alerts directly, you can also create complex alerts based on log queries in Log Analytics. You can write queries that detect specific data patterns or anomalies. You can then create alerts based on these queries.

Why the other options are not the best fit:

A. Azure Site Recovery and Azure Monitor Logs: Azure Site Recovery is primarily for business continuity and disaster recovery. It doesn’t provide the application performance monitoring and anomaly detection capabilities required here. It also does not monitor the data inside the application.

B. Azure Data Lake Analytics and Azure Monitor Logs: Azure Data Lake Analytics is designed for batch processing large datasets and performing advanced analytics. While it’s useful for analyzing data, it’s not the best fit for live monitoring of a web application or for setting up smart alerts for anomaly detection.

D. Azure Security Center and Azure Data Lake Store: Azure Security Center is focused on security posture management and threat protection. It does not provide the application monitoring and smart alerting capability required for application performance and data anomaly detection. Azure Data Lake Store is a storage service and does not have the ability to monitor anomalies.

In conclusion: Application Insights provides the built-in smart detection and the rich telemetry needed for monitoring your application. Combining it with Azure Monitor Logs allows you to store and query JSON data and create complex alerts, thus meeting all the requirements.

18
Q

You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant. The subscription contains 10 resource groups, one for each department at your company.

Each department has a specific spending limit for its Azure resources.

You need to ensure that when a department reaches its spending limit, the compute resources of the department shut down automatically.

Which two features should you include in the solution?

A. Azure Logic Apps
B. Azure Monitor alerts
C. the spending limit of an Azure account
D. Cost Management budgets
E. Azure Log Analytics alerts

A

The two correct features to include in the solution are B. Azure Monitor alerts and D. Cost Management budgets. Here’s why:

D. Cost Management Budgets:

Purpose: Cost Management budgets allow you to set spending limits for a specific scope, such as a subscription, resource group, or management group. They also allow you to be notified when cost spending has reached certain thresholds.

Role in Solution: You’ll use budgets to define the spending limits for each department’s resource group. Once that limit is met, an action or alert should be triggered.

B. Azure Monitor Alerts:

Purpose: Azure Monitor alerts can trigger actions based on certain events or conditions that are evaluated.

Role in Solution: In this solution, you configure the Cost Management budget to raise a notification when a spending threshold is met. Azure Monitor then creates an alert from that notification, and the alert can trigger an action.

How These Two Work Together
Cost management budgets track the budget usage and generate notifications to Azure Monitor. Azure Monitor can then generate an alert based on the notification that it received from the budget service. Then, the Azure Monitor alert can trigger an action such as a Logic App, Automation Runbook, Function App, or webhook. You can choose an appropriate action that will shut down your compute resources.

Why Other Options Are Not Correct (or not the complete solution):

A. Azure Logic Apps: Logic Apps are great for automating workflows, and you could use one to shut down the resources. However, they do not provide the budgeting functionality required to track the spending limits of a department. A Logic App is more of an action for the alert to call.

C. the spending limit of an Azure account: The spending limit for an entire Azure account is not granular enough. It doesn’t allow you to apply limits for each department separately based on their resource groups. A single Azure spending limit cannot be set for multiple departments.

E. Azure Log Analytics alerts: Azure Log Analytics alerts are based on log queries. While you can use logs to track costs, this service is not the best for this task. It can’t directly trigger an action based on a cost budget. Azure Log Analytics is not a direct requirement for this scenario.

In Summary: Cost Management budgets will provide the ability to track spending and trigger an alert when that limit is reached, while Azure Monitor alerts provides the ability to define the action to take (shutting down the compute resources).
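A hedged sketch of the budget half using the azure-mgmt-consumption Python SDK (the scope, amount, emails, and the action group resource ID that would trigger the shutdown automation are all placeholders):

```python
from datetime import datetime

from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

client = ConsumptionManagementClient(
    DefaultAzureCredential(),
    "00000000-0000-0000-0000-000000000000",  # placeholder subscription ID
)

# Budget scoped to one department's resource group; names/amounts assumed.
scope = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/rg-finance"
)

client.budgets.create_or_update(
    scope=scope,
    budget_name="finance-monthly-budget",
    parameters={
        "category": "Cost",
        "amount": 5000,            # the department's spending limit
        "time_grain": "Monthly",
        "time_period": {
            "start_date": datetime(2024, 1, 1),
            "end_date": datetime(2024, 12, 31),
        },
        "notifications": {
            "actual-100-percent": {
                "enabled": True,
                "operator": "GreaterThanOrEqualTo",
                "threshold": 100,  # percent of the budget amount
                "contact_emails": ["finance-admins@contoso.com"],
                # Action group that runs the shutdown automation (placeholder):
                "contact_groups": [
                    "/subscriptions/00000000-0000-0000-0000-000000000000"
                    "/resourceGroups/rg-ops/providers/microsoft.insights"
                    "/actionGroups/shutdown-ag"
                ],
            }
        },
    },
)
```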

19
Q

You have an Azure subscription that contains the resources

storage1, storage account, storage in East US
storage2, storage account, storageV2 in East US
Workspace1, log analytics workspace in East US
Workspace2, log analytics workspace in East US
Hub1, Event hub in East US

You create an Azure SQL database named DB1 that is hosted in the East US region.

To DB1, you add a diagnostic setting named Settings1. Settings1 archives SQLInsights to storage1 and sends SQLInsights to Workspace1.

1.) Can you add a new diagnostic setting to archive SQLInsights logs to storage2?

2.) Can you add a new diagnostic setting that sends SQLInsights logs to Workspace2?

3.) Can you add a new diagnostic setting that sends SQLInsights logs to Hub1?

A

Key Concepts:

Diagnostic Settings: Diagnostic settings for Azure resources allow you to route logs and metrics to different destinations for analysis and storage.

Storage Accounts: Storage accounts must be in the same region as the SQL Server resource. The storage account can be a blob storage or a storageV2 type.

Log Analytics Workspaces: A Log Analytics workspace can be in the same region as the SQL database or in a different region, though a different region is not recommended because it introduces higher latency and cost.

Event Hubs: Event Hubs can also be in the same region as the SQL database, or a different region.

Here are the answers to your questions:

1. Can you add a new diagnostic setting to archive SQLInsights logs to Storage2?

Answer: Yes

Explanation: Storage2 is a storage account of type storageV2 located in the same region as the SQL Database DB1 (East US). A storage account in the same region is a valid destination for the diagnostic logs. Also, the type of the storage account can be either blob storage or a general purpose V2 type.

2. Can you add a new diagnostic setting that sends SQLInsights logs to Workspace2?

Answer: Yes

Explanation: Workspace2 is a log analytics workspace located in the same region as the SQL Database DB1 (East US). A log analytics workspace in the same region is a valid destination for the diagnostic logs.

3. Can you add a new diagnostic setting that sends SQLInsights logs to Hub1?

Answer: Yes

Explanation: Hub1 is an Event Hub located in the same region as the SQL Database DB1 (East US). An Event Hub in the same region is a valid destination for the diagnostic logs.

In summary:

You can add a new diagnostic setting that archives SQLInsights logs to storage2 because it is in the same region as DB1.

You can add a new diagnostic setting that sends SQLInsights logs to Workspace2 because it is in the same region as DB1.

You can add a new diagnostic setting that sends SQLInsights logs to Hub1 because it is in the same region as DB1.

Important Note:
While all three destinations here are valid, keep diagnostic destinations in the same region as the monitored resource where possible for optimal performance; cross-region destinations can add latency and data transfer costs.
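As an illustration, here is a minimal sketch of adding such a diagnostic setting with the Python SDK, assuming the azure-identity and azure-mgmt-monitor packages; the resource IDs, the Event Hub namespace ns1, and the setting name Settings2 are placeholder assumptions:

```python
# Minimal sketch: add a diagnostic setting on DB1 that archives the
# SQLInsights category to storage2 and streams it to Hub1.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

sub = "<subscription-id>"
client = MonitorManagementClient(DefaultAzureCredential(), sub)

db1_id = (
    f"/subscriptions/{sub}/resourceGroups/rg-data"
    "/providers/Microsoft.Sql/servers/sql1/databases/DB1"
)

client.diagnostic_settings.create_or_update(
    db1_id,
    "Settings2",
    {
        "storage_account_id": (
            f"/subscriptions/{sub}/resourceGroups/rg-data"
            "/providers/Microsoft.Storage/storageAccounts/storage2"
        ),
        "event_hub_authorization_rule_id": (
            f"/subscriptions/{sub}/resourceGroups/rg-data"
            "/providers/Microsoft.EventHub/namespaces/ns1"
            "/authorizationRules/RootManageSharedAccessKey"
        ),
        "event_hub_name": "Hub1",
        "logs": [{"category": "SQLInsights", "enabled": True}],
    },
)
```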

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

You deploy several Azure SQL Database instances. You plan to configure the Diagnostics settings on the databases with the following settings:
Diagnostic setting named Diagnostic1
Archive to a storage account is enabled.
SQLInsights log is enabled and has a retention of 90 days. AutomaticTuning log is enabled and has a retention of 30 days.
All other logs are disabled.
Send to Log Analytics is enabled.
Archive to storage account is enabled.
Stream to event hub is disabled.

1.) What is the amount of time SQLInsights data will be stored in blob storage?

30 days
90 days
730 days
indefinite

2.) What is the maximum amount of time SQLInsights data can be stored in Azure Log Analytics?

30 days
90 days
730 days
indefinite

A

Key Concepts:

Diagnostic Settings: These settings define where Azure resources send their logs and metrics, and how long that data is retained.

Storage Account Retention: When you configure diagnostic settings to archive logs to a storage account, you specify a retention period in days. After that time, the logs are deleted from the storage account.

Log Analytics Workspace Retention: When you configure diagnostic settings to send logs to a Log Analytics workspace, retention is managed in the workspace itself, independently of the diagnostic setting. You can configure the retention period at the workspace level or per table.
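For example, workspace-level retention can be set when creating or updating the workspace; a minimal sketch assuming the azure-identity and azure-mgmt-loganalytics packages, with placeholder names:

```python
# Minimal sketch: set workspace-level retention in Log Analytics.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

client = LogAnalyticsManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

workspace = client.workspaces.begin_create_or_update(
    "rg-monitoring",
    "Workspace1",
    {
        "location": "eastus",
        "sku": {"name": "PerGB2018"},
        # Interactive retention in days; 730 is the classic maximum
        # configurable at the workspace level.
        "retention_in_days": 730,
    },
).result()
```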

Answers:

1. What is the amount of time SQLInsights data will be stored in blob storage?

Answer: 90 days

Explanation: In the diagnostic setting named Diagnostic1, you explicitly enabled the SQLInsights log and set its retention to 90 days when archiving to a storage account. This setting directly controls how long the data persists in storage.

2. What is the maximum amount of time SQLInsights data can be stored in Azure Log Analytics?

Answer: 730 days

Explanation: Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days.

Reference:
https://docs.microsoft.com/en-us/azure/azure-monitor/app/data-retention-privacy

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

Your company has the following divisions:

| Division | Azure subscriptions | Domain |
|---|---|---|
| East | sub1, sub2 | East.contoso.com |
| West | sub3, sub4 | West.contoso.com |

You plan to deploy a custom application to each subscription. The application will contain the following:
✑ A resource group
✑ An Azure web app
✑ Custom role assignments
✑ An Azure Cosmos DB account
You need to use Azure Blueprints to deploy the application to each subscription.

What is the minimum number of objects required to deploy the application?

Management Groups:

Blueprint definitions:

Blueprint assignments:

A

Understanding the Requirements

Two Divisions: The company has two divisions, East and West, with two subscriptions each (total of 4 subscriptions).

Consistent Application: Each subscription needs the same application components: a resource group, an Azure web app, custom role assignments, and a Cosmos DB account.

Azure Blueprints: Blueprints allow you to create repeatable deployment packages for Azure resources.

Minimize Objects: We need to determine the minimum number of management groups, blueprint definitions, and blueprint assignments to achieve the desired outcome.

Minimum Objects

Management Groups:

Answer: 1

Explanation: Since all subscriptions are in the same organization and there is no requirement for separate policies per division, no additional management groups are required; every Azure AD tenant already has a root management group, and you do not need to deploy management groups to deploy Azure Blueprints. The minimum is therefore 1.

Blueprint Definitions:

Answer: 1

Explanation: You can define one blueprint that includes all the common components needed for the application (resource group, web app, custom role assignments, and Cosmos DB account). Because all subscriptions will contain the same application, a single definition is sufficient. The blueprint definitions are for managing the blueprint itself, and do not need to match the quantity of subscriptions.

Blueprint Assignments:

Answer: 4

Explanation: While one blueprint can define the application, we need to assign that blueprint to each subscription where you want to deploy the application. Because we have 4 subscriptions we will need a blueprint assignment for each subscription.

Why Other Configurations Are Not Minimal:

Multiple Management Groups: While you could use separate management groups for East and West divisions, it’s not required for the scenario. The blueprints can be applied directly to the subscriptions, and since there is nothing specified in the requirements about needing management groups, they are not needed for this scenario.

Multiple Blueprint Definitions: Creating multiple blueprint definitions for the same application components in each subscription would be redundant, increasing maintenance.

Fewer blueprint assignments: A blueprint assignment is scoped to a single subscription, so with 4 subscriptions you cannot reduce the count below 4; one assignment per subscription is the minimum.

In summary: To deploy the application with Azure Blueprints using the minimum number of objects, you need:

Management Groups: 1 (the root management group that every Azure AD tenant already has; you do not need to deploy any additional management groups to use Blueprints).

Blueprint Definitions: 1

Blueprint Assignments: 4 (one for each subscription)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

You have an Azure Active Directory (Azure AD) tenant.
You plan to deploy Azure Cosmos DB databases that will use the SQL API.
You need to recommend a solution to provide specific Azure AD user accounts with read access to the Cosmos DB databases.

What should you include in the recommendation?

A. shared access signatures (SAS) and conditional access policies
B. certificates and Azure Key Vault
C. a resource token and an Access control (IAM) role assignment
D. master keys and Azure Information Protection policies

A

The correct answer is C. a resource token and an Access control (IAM) role assignment.

Here’s why:

Resource Tokens for Cosmos DB: A resource token is a scoped, time-bound token that grants access to specific Cosmos DB resources (databases, containers, or items). Resource tokens are created through the Cosmos DB users and permissions feature, which lets you hand a client read-only access to a particular database without ever sharing the account’s master keys.

Azure Role-Based Access Control (RBAC) and IAM: Azure role-based access control (RBAC) allows you to grant specific permissions to Azure AD users, groups, or service principals over various scopes. For Cosmos DB, you use IAM (Identity and Access Management) to assign roles to Azure AD user accounts.

Built-in and Custom Roles: You can use built-in roles, such as Cosmos DB Account Reader Role, or create custom roles to provide fine-grained control over access. For example, you can create a role that only grants read access to specific databases or containers.

Granting Access: By assigning an appropriate role with read permissions to the Azure AD user and scoping the user’s data access with a read-only resource token, you grant that user access to a specific Cosmos DB database.
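To make the resource-token half of the answer concrete, here is a minimal sketch of minting a read-only token with the azure-cosmos (v4) Python package. The account, key, database, container, and user names are placeholders; in practice a mid-tier service runs this with the master key and hands only the short-lived token to the end user:

```python
# Minimal sketch: create a Cosmos DB user and a read-only permission,
# which yields a scoped, time-bound resource token.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<account>.documents.azure.com", credential="<master-key>"
)
db = client.get_database_client("appdb")

user = db.create_user({"id": "reader1"})
permission = user.create_permission({
    "id": "read-orders",
    "permissionMode": "Read",              # read-only access
    "resource": "dbs/appdb/colls/orders",  # scope: one container
})

# The resource token is returned in the permission's properties and is
# what the client application uses instead of the master key.
token = permission.properties.get("_token")
```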

Let’s review why the other options are not the right fit:

A. shared access signatures (SAS) and conditional access policies: Shared Access Signatures (SAS) are used for providing access to storage accounts, not Cosmos DB databases. While conditional access policies are useful for enforcing authentication policies based on conditions, they are not a direct way of granting access to specific Cosmos DB database resources.

B. certificates and Azure Key Vault: Certificates and Azure Key Vault are primarily used for securing sensitive information such as API keys, not for providing read access to Cosmos DB resources for users. While you can use certificates to provide client-side authentication for applications, certificates are not used to grant user access.

D. master keys and Azure Information Protection policies: Master keys provide full access to Cosmos DB account resources. Sharing these would violate the principle of least privilege, and they should be managed securely. Azure Information Protection policies are primarily used for securing document access.

In Summary: Combining IAM role assignments (so access is tied to specific Azure AD user accounts) with scoped resource tokens is the best way to provide those users with read access to Cosmos DB databases while adhering to security best practices.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

You need to design a resource governance solution for an Azure subscription. The solution must meet the following requirements:

✑ Ensure that all ExpressRoute resources are created in a resource group named RG1.
✑ Delegate the creation of the ExpressRoute resources to an Azure Active Directory (Azure AD) group named Networking.
✑ Use the principle of least privilege.

1.) Ensure all ExpressRoute resources are created in RG1

2.) Delegate the creation of the ExpressRoute resources to Networking

a. A custom RBAC role assignment at the level of RG1
b. A custom RBAC role assignment at the subscription level
c. An Azure Blueprints assignment that sets locking mode for the level of RG1
d. An Azure Policy assignment at the subscription level that has an exclusion
e. Multiple Azure Policy assignments at the resource group level except for RG1

A
1. Ensure all ExpressRoute resources are created in RG1:

Correct Answer: d. An Azure Policy assignment at the subscription level that has an exclusion

Explanation: An Azure Policy assignment can enforce that ExpressRoute resources are created only in RG1. You assign, at the subscription level, a policy that denies the creation of ExpressRoute resources, and you exclude RG1 from the assignment. The deny rule then applies everywhere in the subscription except RG1, so new ExpressRoute resources can only be created in the correct resource group. While other options could be made to work, this is the simplest and most appropriate way to achieve the requirement; a sketch of the pattern follows the list below.

Why other options are not best here

An Azure Blueprint can also enforce these settings, but it is overkill for this specific requirement.

While you could create multiple resource-group-level policies for every resource group other than RG1, it would take extra effort to keep those assignments up to date as resource groups are added or removed.

A single resource-group-level policy would not work either, since you would need one per resource group; this is not the best solution.
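As a rough sketch of the deny-with-exclusion pattern, assuming the azure-identity and azure-mgmt-resource packages; the definition name, rule, and scopes are illustrative rather than the exam’s exact policy:

```python
# Minimal sketch: deny ExpressRoute circuits everywhere in the
# subscription except RG1, which is excluded from the assignment.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

sub = "<subscription-id>"
client = PolicyClient(DefaultAzureCredential(), sub)

definition = client.policy_definitions.create_or_update(
    "deny-expressroute",
    {
        "policy_type": "Custom",
        "mode": "All",
        "policy_rule": {
            "if": {
                "field": "type",
                "equals": "Microsoft.Network/expressRouteCircuits",
            },
            "then": {"effect": "deny"},
        },
    },
)

client.policy_assignments.create(
    f"/subscriptions/{sub}",
    "deny-er-outside-rg1",
    {
        "policy_definition_id": definition.id,
        # RG1 is carved out, so ExpressRoute circuits can only be
        # created there.
        "not_scopes": [f"/subscriptions/{sub}/resourceGroups/RG1"],
    },
)
```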

2. Delegate the creation of the ExpressRoute resources to Networking:

Correct Answer: a. A custom RBAC role assignment at the level of RG1

Explanation: To delegate permission to create ExpressRoute resources, you should use Role-Based Access Control (RBAC). Create a custom role that has only the permissions to create and manage ExpressRoute resources. Then, assign this custom role to the Networking Azure AD group at the level of the resource group RG1. This adheres to the principle of least privilege because it gives the group only the necessary permissions, and only within the context of the resource group needed.

Why Other Options Are Incorrect

Creating a role assignment at the subscription level would give the group more permissions than are necessary and would therefore violate the principle of least privilege.

While blueprints can also manage roles, this would be overkill for what is required. Blueprints are not used to delegate permissions to groups.

Azure Policy doesn’t manage permissions directly.

In Summary:

To ensure resources are created in the correct resource group, use Azure Policy.

To delegate permissions, use RBAC roles with a custom role for ExpressRoute management on the correct resource group.

This combination of Azure Policy and RBAC roles provides an efficient and secure governance solution.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

You have an Azure Active Directory (Azure AD) tenant and Windows 10 devices.

MFA Policy Configuration:
Enable Policy set to off
Grant
Select the controls to be enforced
Grant access selected.
Require multi-factor authentication: yes
Require device to be marked as compliant: no
Require hybrid azure ad joined devices: yes
Require approved client apps: no
Require app protection policy: no
For multiple controls: require one of the selected controls.

What is the result of the policy?

A. All users will always be prompted for multi-factor authentication (MFA).
B. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are NOT joined to Azure AD.
C. All users will be able to sign in without using multi-factor authentication (MFA).
D. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are joined to Azure AD.

A

The correct answer is C. All users will be able to sign in without using multi-factor authentication (MFA).

Here’s why:

Understanding the Conditional Access Policy:

Enable Policy set to off: Because the policy is turned off, none of the other settings take effect. No one will be prompted for MFA based on this policy while it is disabled.

Grant Access Selected: The policy is set up to grant access (rather than block it) when its conditions are met, subject to the selected controls below.

Require multi-factor authentication: yes: MFA is one of the selected grant controls.

Require device to be marked as compliant: no: Device compliance status is not evaluated by this policy.

Require hybrid azure ad joined devices: yes: Signing in from a hybrid Azure AD joined device is the other selected grant control.

Require approved client apps: no: Not required by this policy.

Require app protection policy: no: Not required by this policy.

For multiple controls: require one of the selected controls: Two controls are selected (MFA and hybrid Azure AD join), so satisfying either one of them is enough to be granted access.

Result: The policy is disabled, so it has no effect, and all users can sign in without using MFA. If the policy were enabled, users on hybrid Azure AD joined devices could sign in without an MFA prompt (the device control alone satisfies the grant), while users on all other devices would be prompted for MFA.

Let’s analyze the incorrect options:

A. All users will always be prompted for multi-factor authentication (MFA). This is incorrect: the policy is disabled and has no impact, so no one is prompted for MFA because of it. Even if it were enabled, users on hybrid Azure AD joined devices would satisfy the grant without MFA.

B. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are NOT joined to Azure AD. This is incorrect while the policy is disabled; it describes what would happen only if the policy were enabled.

D. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are joined to Azure AD. This is incorrect: joined devices would satisfy the grant control on their own, so they are exactly the devices that would not be prompted, and in any case the disabled policy has no effect.

In summary: Because the policy is disabled, no users are affected by it and all users will be able to sign in without using MFA.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

You are designing an Azure resource deployment that will use Azure Resource Manager templates. The deployment will use Azure Key Vault to store secrets.

You need to recommend a solution to meet the following requirements:

✑ Prevent the IT staff that will perform the deployment from retrieving the secrets directly from Key Vault.
✑ Use the principle of least privilege.

Which two actions should you recommend?

A. Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions.

B. From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment.

C. Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions.

D. Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.

E. Assign the Key Vault Contributor role to the IT staff.

A

The two correct actions are B. From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment. and D. Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.

Here’s why:

B. Enable Access for Azure Resource Manager:

How it works: Key Vault has a specific feature to grant access to Azure Resource Manager for template deployment. By enabling this, you allow ARM to fetch secrets from Key Vault during the deployment, without granting the deployment user direct access to those secrets. This allows you to store secrets in Key Vault, and still allow ARM templates to use those secrets without having to give the user or service account access to those secrets. This satisfies the requirement that the IT staff not be able to access the secrets.

Principle of Least Privilege: This provides a secure way for the template deployment to read the secrets, without giving direct access to the IT staff that are running the deployment.
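For illustration, the portal toggle corresponds to the vault property enabledForTemplateDeployment; here is a minimal sketch of setting it with the Python SDK, assuming the azure-identity and azure-mgmt-keyvault packages, with placeholder names and IDs:

```python
# Minimal sketch: create (or update) a vault with template-deployment
# access enabled so ARM can read secrets during deployments.
from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient

client = KeyVaultManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

vault = client.vaults.begin_create_or_update(
    "rg-deploy",
    "kv-contoso",
    {
        "location": "eastus",
        "properties": {
            "tenant_id": "<tenant-id>",
            "sku": {"family": "A", "name": "standard"},
            # Corresponds to the portal option "Azure Resource Manager
            # for template deployment" in the vault's access settings.
            "enabled_for_template_deployment": True,
            "access_policies": [],
        },
    },
).result()
```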

D. Assign a Custom Role for Deployment:

How it works: The Microsoft.KeyVault/Vaults/Deploy/Action permission allows a user or service principal to use Key Vault secrets during an ARM template deployment. By assigning a custom role that includes only this permission, you limit the permissions given to the IT staff. They will only be able to deploy resources to Azure, and will not be able to list or view the secrets themselves.

Principle of Least Privilege: This approach adheres to the principle of least privilege by not granting the IT staff any other unnecessary permissions within the Key Vault (like read, delete, list).

Why Other Options Are Incorrect:

A. Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions: This is incorrect as it gives too much permission to the IT staff. The IT staff should not be able to get the secrets directly from Key Vault.

C. Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions: This is incorrect as it gives too much permission to the IT staff. The IT staff should not be able to list the secrets directly from Key Vault.

E. Assign the Key Vault Contributor role to the IT staff: This role provides far too many permissions, and goes against the principle of least privilege. The Key Vault contributor can manage everything within a Key Vault, including deleting the vault itself.

In Summary:

Enabling the Azure Resource Manager access policy allows ARM to fetch secrets for deployments.

Assigning a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission to IT staff allows the template deployments, but does not allow the IT staff to retrieve secrets directly.

These two settings ensure that the IT staff who run the deployment cannot directly access secrets in Key Vault, and also uses the principle of least privilege.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

You have an Azure subscription that contains resources in three Azure regions.
You need to implement Azure Key Vault to meet the following requirements:
✑ In the event of a regional outage, all keys must be readable.
✑ All the resources in the subscription must be able to access Key Vault.
✑ The number of Key Vault resources to be deployed and managed must be minimized.

How many instances of Key Vault should you implement?

A. 1
B. 2
C. 3
D. 6

A

The correct answer is A. 1

Here’s why:

Requirement 1: Regional Outage Resilience: Azure Key Vault is resilient by design. The contents of a vault are replicated within its region and to the secondary region of its geography; if the primary region becomes unavailable, requests automatically fail over to the secondary region in read-only mode. Keys therefore remain readable during a regional outage without deploying additional vaults.

Requirement 2: Access for all Subscription Resources: Key Vault access is controlled through Azure Active Directory (Azure AD) authentication combined with access policies or role-based access control (RBAC). A single vault is reachable from any region, so every resource in the subscription (for example, through its managed identity) can be granted access to the one Key Vault.

Requirement 3: Minimize Number of Key Vaults: Creating a single Key Vault reduces the management overhead. Having multiple vaults would require additional administrative effort.

Why other options are incorrect:

B. 2: While two Key Vaults would give you a vault in two regions, this is unnecessary (a single vault already replicates to its paired region), adds management overhead, and works against the requirement to minimize the number of Key Vault resources.

C. 3: Three Key Vaults would be redundant and increase management complexity unnecessarily. The single vault with replication is sufficient.

D. 6: Six Key Vaults are not needed, given that a single vault with built-in replication is sufficient.

Exam Tip: Focus on requirements that emphasize minimizing management overhead or reducing the number of resources. In these cases, a single instance of a service that has built in capabilities would be preferred.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

You have an Azure Active Directory (Azure AD) tenant.
You plan to provide users with access to shared files by using Azure Storage. The users will be provided with different levels of access to various Azure file shares based on their user account or their group membership.
You need to recommend which additional Azure services must be used to support the planned deployment.
What should you include in the recommendation?

A. an Azure AD enterprise application
B. Azure Information Protection
C. an Azure AD Domain Services (Azure AD DS) instance
D. an Azure Front Door instance

A

The correct answer is C. an Azure AD Domain Services (Azure AD DS) instance.

Here’s why:

Azure AD Domain Services (Azure AD DS): Azure Files supports identity-based authentication over SMB, but the storage account must be joined to a domain service that can handle the Kerberos authentication. For a tenant whose accounts exist only in Azure AD, that additional service is Azure AD DS. Here’s how it works:

Managed Domain: You enable an Azure AD DS managed domain; user accounts and groups from the tenant are synchronized into it automatically.

Storage Account Configuration: You enable Azure AD DS authentication for Azure Files on the storage account, which joins the account to the managed domain.

Role Assignments: You grant users or groups share-level access using built-in roles such as Storage File Data SMB Share Reader and Storage File Data SMB Share Contributor, scoped to the storage account or an individual file share, and fine-tune access with NTFS permissions on directories and files.

Authentication: When a user accesses a file share, the managed domain validates their Kerberos credentials, and authorization is evaluated from the role assignments and NTFS ACLs.

Why other options are incorrect:

A. An Azure AD enterprise application: An enterprise application is the service-principal representation of an application in your tenant. It supports application sign-in and single sign-on, but it is not the additional service that enables identity-based access to Azure file shares for users and groups.

B. Azure Information Protection: Azure Information Protection is used to protect files, such as documents or emails, with sensitivity labels, encryption, and access permissions. While this could complement security on Azure Files, it does not provide the identity-based access management you are looking for.

D. An Azure Front Door instance: Azure Front Door is a global HTTP(S) load balancer and application delivery service. It’s not relevant for providing access control to file shares.

Exam Tip: Pay close attention to requirements around identity-based access management. In these cases, the correct answer is usually related to how a service integrates with Azure Active Directory. Understanding the difference between authentication (verifying identity) and authorization (granting permissions) is also key.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

Your company has users who work remotely from laptops.
You plan to move some of the applications accessed by the remote users to Azure virtual machines. The users will access the applications in Azure by using a point-to-site VPN connection. You will use certificates generated from an on-premises-based Certification authority (CA).

You need to recommend which certificates are required for the deployment.

1.) Trusted Root Certification Authorities certificate store on each laptop

2.) The user’s Personal store on each laptop

3.) The Azure VPN Gateway

Which certificates should be used for each

A. A root CA certificate that has the private key
B. A root CA certificate that has the public key only
C. A user certificate that has the private key
D. A user certificate that has the public key only

A

Okay, let’s break down the certificate requirements for a point-to-site VPN connection using an on-premises CA. Here’s the correct answer:

1) Trusted Root Certification Authorities certificate store on each laptop: B. A root CA certificate that has the public key only

Explanation: The Trusted Root Certification Authorities store is used to verify the identity of the server (in this case, the Azure VPN Gateway). You need to install the public key of the root CA that issued the VPN server certificate to establish trust. The private key should never be installed on client machines.

2) The user’s Personal store on each laptop: C. A user certificate that has the private key

Explanation: The Personal store is used for client authentication. Each user needs their own unique user certificate with the private key to prove their identity to the VPN gateway during the connection process. This private key must not be shared with other users.

3) The Azure VPN Gateway: B. A root CA certificate that has the public key only

Explanation: The Azure VPN Gateway, similar to the client machines, needs to verify the certificate used by connecting clients. This requires the public key of the root CA that issued the user certificates. You do not install the user certificates or their associated private keys on the VPN Gateway.

Therefore, the correct matching is:

1) Trusted Root Certification Authorities certificate store on each laptop: B

2) The user’s Personal store on each laptop: C

3) The Azure VPN Gateway: B

Why the other options are incorrect:

A. A root CA certificate that has the private key: The private key of the root CA should only be used to issue certificates and should be secured, not installed on client machines or the VPN gateway.

D. A user certificate that has the public key only: The private key is necessary for the user to be authenticated by the VPN gateway.

Key concepts for the Exam:

Root CA Certificate (Public Key): Used to establish trust by validating that the certificate is issued by a trusted source. Distributed widely.

User Certificate (Private Key): Used to uniquely identify and authenticate a user to access resources or services. Private keys must never be shared.

Certificate Store: A local location on a Windows system (such as the Trusted Root Certification Authorities or Personal store) used to manage certificates.

Exam Tip: When you see a question about certificates and authentication, focus on whether a private key or a public key is being used. Also, think about trust and what needs to verify the identity of whom or what. Remember, Private keys must always be kept secret, and you would not upload a private key anywhere.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

You are building an application that will run in a virtual machine (VM). The application will use Azure Managed Identity.

The application uses Azure Key Vault, Azure SQL Database, and Azure Cosmos DB.

You need to ensure the application can use secure credentials to access these services.

Functionality

1.) Azure Key vault

2.) Azure SQL

3.) CosmosDB

Which authentication method should you recommend for each functionality?

Authorization methods:

A. Hash-based message authentication code (HMAC)

B. Azure Managed Identity

C. Role-Based Access Controls (RBAC)

D. HTTPS encryption

A

Here’s the correct answer mapping:

1) Azure Key Vault: B. Azure Managed Identity

Explanation: Azure Managed Identity is the ideal method for authenticating to Azure Key Vault from an Azure resource (like a VM). Managed identities provide an automatically managed identity in Microsoft Entra ID, eliminating the need to store and manage credentials within the application code or configuration. The application can retrieve secrets directly from the Key Vault using its managed identity.

2) Azure SQL Database: B. Azure Managed Identity

Explanation: Azure SQL Database supports Azure AD authentication. When combined with Managed Identity, this enables a secure connection to the database without storing credentials. By enabling Azure AD authentication and then configuring the SQL server with an admin account that is an Azure AD principal, the application can use its managed identity to authenticate.

3) Azure Cosmos DB: B. Azure Managed Identity

Explanation: Azure Cosmos DB also supports Azure AD authentication. Using managed identity, a secure connection can be established with Cosmos DB without needing API keys or connection strings in the application code. Once the application’s managed identity is authorized to access the Cosmos DB resources, connections are made using access tokens obtained from Azure Active Directory by the managed identity.
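As a concrete illustration of the Key Vault case, here is a minimal sketch of code running inside the VM, assuming the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders. The same credential object can also request tokens for the Azure SQL and Cosmos DB endpoints:

```python
# Minimal sketch: from inside the VM, use its managed identity to pull
# a secret from Key Vault. No credentials are stored in the app.
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential()

secrets = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

# e.g. a connection string the app then uses for SQL or Cosmos DB
secret = secrets.get_secret("app-connection-string")
print(secret.name)
```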

Therefore, the correct matching is:

1) Azure Key Vault: B

2) Azure SQL Database: B

3) Azure Cosmos DB: B

Why the other options are incorrect:

A. Hash-based message authentication code (HMAC): HMAC is a cryptographic technique for verifying data integrity and authenticity. While important for secure communication, it’s not a method for authenticating to services.

C. Role-Based Access Controls (RBAC): RBAC is an authorization system that controls what actions principals (like user accounts, groups, or applications) can perform on Azure resources; it is not an authentication method. You use RBAC to grant the managed identity rights to use the other services.

D. HTTPS encryption: HTTPS provides secure communication channels via encryption, but is not an authentication method.

Key Concepts for the Exam:

Azure Managed Identity: Automatically managed identity in Microsoft Entra ID for use by Azure resources. Eliminates credential management.

Authentication vs. Authorization: Authentication validates the identity; Authorization grants access permissions.

Principle of Least Privilege: Grant only necessary permissions to services and applications.

Exam Tip: When a question describes a scenario using multiple Azure services and requires secure authentication with minimal credential management, Azure Managed Identities are the preferred method. Also, remember that RBAC is for authorization, while Managed Identities are for authentication. Understand the difference between authentication (verifying the identity) and authorization (granting permission).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam,
Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the
Microsoft 365 E5 plan.

You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
✑ If the manager does not verify an access permission, automatically revoke that permission.
✑ Minimize development effort.

What should you recommend?

A. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet.
B. Create an Azure Automation runbook that runs the Get-AzureRoleAssignment cmdlet.
C. In Azure Active Directory (Azure AD), create an access review of Application1.
D. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources.

A

The correct answer is C. In Azure Active Directory (Azure AD), create an access review of Application1.

Here’s why:

Azure AD Access Reviews: Azure AD Access Reviews are specifically designed to meet the requirements outlined in the question. They provide:

Monthly Email Notifications: You can configure an access review to send monthly email messages to the manager of the Fabrikam developers. These messages would list the permissions the developers have for the resources related to Application1.

Automatic Revocation: If the manager does not verify an access permission during the review, you can configure the review to automatically revoke that permission. This keeps access aligned with the principle of least privilege.

Minimized Development Effort: Access Reviews are a built-in feature of Azure AD and require no custom coding to implement.

Why the other options are incorrect:

A. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: This option would require custom development of a PowerShell runbook, plus scheduling, which works against minimizing development effort. While it could retrieve the role assignments, it lacks the built-in review and automatic revocation features of Azure AD access reviews, and it would not send the manager an email.

B. Create an Azure Automation runbook that runs the Get-AzureRoleAssignment cmdlet: This is similar to option A, except the cmdlet is not specific to application role assignments. It likewise lacks the built-in review and automatic revocation features of Azure AD access reviews and provides no mechanism to notify the manager.

D. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: Privileged Identity Management (PIM) provides just-in-time elevation for privileged roles; it is designed for managing privileged accounts rather than reviewing standard application access. Monthly verification emails to a manager with automatic revocation are access review features, not PIM role-assignment features.

Key Concepts for the Exam:

Azure AD Access Reviews: A feature that enables you to regularly review who has access to Azure AD resources, groups and applications. They simplify access management and minimize the risk of over provisioned or stale access.

Privileged Identity Management (PIM): Used to manage and control privileged access to resources and is different from Access Reviews.

Least Privilege: Grant users only the necessary permissions, and reduce the surface area by removing permissions that are no longer needed.

Exam Tip: When a question asks about reviewing access, look for answers that involve Azure AD Access Reviews. Pay close attention to keywords like “regularly verify,” “automatically revoke,” and “minimize development effort.” You might see scenarios with a mixture of solutions, but usually the access review is the best fit.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

You have an Azure subscription that contains 10 web apps. The apps are integrated with Azure AD and are accessed by users on different project teams.

The users frequently move between projects.

You need to recommend an access management solution for the web apps. The solution must meet the following requirements:

  • The users must only have access to the app of the project to which they are assigned currently.
  • Project managers must verify which users have access to their project’s app and remove users that are no longer assigned to their project.
  • Once every 30 days, the project managers must be prompted automatically to verify which users are assigned to their projects.

What should you include in the recommendation?

A. Azure AD Identity Protection
B. Microsoft Defender for Identity
C. Microsoft Entra Permissions Management
D. Azure AD Identity Governance

A

Here’s a breakdown of why the correct answer is D. Azure AD Identity Governance and why the others aren’t suitable:

D. Azure AD Identity Governance

Correct Choice: This is the ideal solution because it directly addresses all the requirements:

Access Based on Project: Azure AD Identity Governance, specifically through Access Packages, allows you to create collections of resources (like the web apps) that are tied to a specific project. Users can be granted access to these access packages.

Project Manager Verification: Access packages allow you to delegate the management and approval to the project managers. Project managers can see who has access to their project’s resources.

Periodic Access Reviews: Access Reviews are a core feature of Azure AD Identity Governance. They allow you to set up recurring reviews where project managers are prompted to verify and remove users as needed. You can configure the reviews to occur every 30 days, meeting the prompt’s requirement.

A. Azure AD Identity Protection

Incorrect Choice: Azure AD Identity Protection focuses on detecting and mitigating risks to user identities. It helps with things like identifying compromised accounts, preventing risky sign-ins, and enforcing MFA. It doesn’t address the access management requirements of the scenario.

B. Microsoft Defender for Identity

Incorrect Choice: Microsoft Defender for Identity is a security solution for on-premises Active Directory environments. It detects suspicious activities by monitoring domain controllers. While security-focused, it doesn’t manage access to cloud-based web apps in the way that’s needed.

C. Microsoft Entra Permissions Management

Incorrect Choice: While Permissions Management is important for understanding and controlling access to cloud resources, it doesn’t offer the access review and self-service capabilities that the scenario requires. It mainly focuses on providing visibility and remediating excessive permissions.

Therefore, the best recommendation is Azure AD Identity Governance because it provides the necessary access packages and access review functionalities to meet all of the stated requirements.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

Your company has the divisions shown in the following table.

| Division | Azure subscription | Azure Active Directory (Azure AD) tenant |
|---|---|---|
| East | Sub1 | Contoso.com |
| West | Sub2 | Fabrikam.com |

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.
You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.
What should you recommend?
A. Configure the Azure AD provisioning service.
B. Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).
C. Use Azure AD entitlement management to govern external users.
D. Configure Azure AD Identity Protection.


A

Let’s analyze the requirements and why the correct answer is the best fit:

Understanding the Problem

Single-Tenant App: App1 is set up to only accept authentication from users within the contoso.com Azure AD tenant.

Cross-Tenant Access Needed: Users from the fabrikam.com Azure AD tenant need to access App1.

Analyzing the Options

A. Configure the Azure AD provisioning service:

Incorrect. The Azure AD provisioning service is used for automating the creation, updating, and deletion of user identities and groups in applications and directories. It doesn’t directly enable cross-tenant authentication to an existing application. While you might use it to create user objects, this doesn’t address the primary issue of authentication from another tenant.

B. Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM):

Incorrect. Azure AD PIM is for managing, controlling, and monitoring privileged access (e.g., administrators) within your own Azure AD tenant. It’s not designed for granting access to users from a completely different Azure AD tenant for a standard application.

C. Use Azure AD entitlement management to govern external users.

Correct. Entitlement Management in Azure AD is specifically designed to handle requests, approvals, and reviews of access to resources for external users (users from other organizations or Azure AD tenants). This is the most appropriate way to allow users in fabrikam.com to access App1, providing you with proper governance and management of external user access.

This functionality allows fabrikam.com users to request access to a resource in your tenant, which requires approval. The process also allows for time-bound access.

D. Configure Azure AD Identity Protection:

Incorrect. Azure AD Identity Protection is for identifying and mitigating risks and vulnerabilities related to your user accounts and logins. It does not handle providing external access to applications.

Why Entitlement Management is the Right Choice

Cross-Tenant Access: It’s explicitly designed for managing access by external users, which aligns directly with the requirement of allowing users from fabrikam.com to access App1.

Controlled Access: Entitlement management provides mechanisms for controlling who can request access, requires approvals, and allows for time-bound access, which helps govern access by external users.

Proper Governance: It provides a proper access request process, ensures proper access is granted, and provides an audit trail.

Therefore, the best recommendation is C. Use Azure AD entitlement management to govern external users.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

Your company, named Contoso, Ltd., implements several Azure logic apps that have HTTP triggers. The logic apps provide access to an on-premises web service.
Contoso establishes a partnership with another company named Fabrikam, Inc.
Fabrikam does not have an existing Azure Active Directory (Azure AD) tenant and uses third-party OAuth 2.0 identity management to authenticate its users.
Developers at Fabrikam plan to use a subset of the logic apps to build applications that will integrate with the on-premises web service of Contoso.
You need to design a solution to provide the Fabrikam developers with access to the logic apps. The solution must meet the following requirements:
✑ Requests to the logic apps from the developers must be limited to lower rates than the requests from the users at Contoso.
✑ The developers must be able to rely on their existing OAuth 2.0 provider to gain access to the logic apps.
✑ The solution must NOT require changes to the logic apps.
✑ The solution must NOT use Azure AD guest accounts.
What should you include in the solution?

A. Azure Front Door
B. Azure AD Application Proxy
C. Azure AD business-to-business (B2B)
D. Azure API Management

A

Understanding the Requirements

Access to Logic Apps: Fabrikam developers need to access specific Contoso Logic Apps that expose HTTP triggers.

Rate Limiting: Access from Fabrikam needs to be rate-limited compared to internal Contoso traffic.

External OAuth: Fabrikam uses a third-party OAuth 2.0 provider, and the solution must integrate with this.

No Logic App Changes: The existing Logic Apps cannot be modified.

No Azure AD Guest Accounts: The solution must avoid using Azure AD guest accounts.

Analyzing the Options

A. Azure Front Door

Incorrect: Azure Front Door is primarily a global, scalable entry point for web applications. It’s useful for caching, routing, and accelerating web traffic, but it is not designed to integrate with an external OAuth 2.0 identity provider or to apply per-consumer rate limits to HTTP-triggered logic apps.

B. Azure AD Application Proxy

Incorrect: Azure AD Application Proxy is used to publish on-premises web applications to the internet securely using Azure AD. While it can handle authentication, it is specifically designed for applications behind the firewall. Also, it would require using Azure AD guest accounts and would be a poor fit for authenticating third-party OAuth 2.0 users.

C. Azure AD business-to-business (B2B)

Incorrect: Azure AD B2B is designed for inviting users from other organizations as guest users in your Azure AD tenant. The prompt specifically mentions that no Azure AD guest accounts should be used, therefore, this is not a good solution.

D. Azure API Management

Correct: Azure API Management (APIM) is the best fit for this scenario because:

Abstraction and Decoupling: APIM acts as an intermediary layer between the Fabrikam developers and the Contoso Logic Apps. This decouples the apps from direct access.

Rate Limiting: APIM offers built-in policies to enforce rate limiting on a per-subscription, per-API, or other granular levels. You can set specific rate limits for Fabrikam users.

External OAuth Integration: APIM can integrate with any OAuth 2.0 compliant identity provider. You can configure APIM to accept tokens from the Fabrikam OAuth provider and then pass authenticated requests to the Logic Apps.

No Logic App Changes: Since APIM sits in front of the Logic Apps, no modifications to the Logic App themselves are needed.

No Guest Accounts: APIM manages access through its own API subscriptions and policies. It doesn’t directly rely on Azure AD guest users.

Why API Management is the Right Choice

Azure API Management provides a controlled and manageable gateway for accessing your Logic Apps, ensuring all requirements are met:

Centralized Access: It centralizes access to the logic apps, simplifying management and security.

Security and Authentication: It allows integration with external OAuth 2.0 providers while also securing access to the Logic Apps.

Rate Limiting: Provides built-in capabilities for controlling the number of requests from external developers.

No Code Changes: Requires no changes to the Logic Apps.

No Guest Accounts: Doesn’t rely on Azure AD Guest Accounts, which is one of the requirements of this scenario.

Therefore, the correct answer is D. Azure API Management.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

HOTSPOT -
You have an Azure subscription that contains 300 virtual machines that run Windows Server 2019.
You need to centrally monitor all warning events in the System logs of the virtual machines.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Resource to create in Azure:
An event hub
A Log Analytics workspace
A search service
A storage account

Configuration to perform on the virtual machines:
Create event subscriptions
Configure Continuous delivery
Install the Azure Monitor agent
Modify the membership of the Event Log Readers group

A

Understanding the Goal

The goal is to collect warning events from the System logs of 300 Windows Server virtual machines and monitor them centrally in Azure.

Correct Selections:

Resource to create in Azure: A Log Analytics workspace

Why? A Log Analytics workspace is the central repository for collecting, storing, and analyzing log and performance data from Azure resources and on-premises servers. This is where the logs collected from the VMs will be sent and analyzed. An Event Hub could potentially be used but would require more customization of the solution.

Configuration to perform on the virtual machines: Install the Azure Monitor agent

Why? The Azure Monitor Agent (AMA) is the modern method of collecting telemetry data (including logs) from Azure VMs and other resources. You install this agent on each VM to collect the desired logs.

The legacy Azure Log Analytics agent is deprecated, so this is not a good answer.

Also, Modify the membership of the Event Log Readers group is not needed with the Azure Monitor Agent. The agent authenticates by using the virtual machine’s managed identity rather than a user account, so membership of that group is not required.
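Once the agent streams System-log events to the workspace, the warnings can be queried centrally; here is a minimal sketch, assuming the azure-identity and azure-monitor-query packages, with a placeholder workspace ID:

```python
# Minimal sketch: query the centralized warning events collected from
# the VMs' System logs.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<workspace-guid>",
    query='Event | where EventLog == "System" and EventLevelName == "Warning"',
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```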

Incorrect Selections and Why

An event hub: While Event Hubs can ingest telemetry data, they don’t provide the same level of analysis and querying capabilities as Log Analytics. You would typically use Event Hubs as an intermediate step before sending data to a data store like a Log Analytics workspace.

A search service: Azure Search is for indexing and searching content. It isn’t meant for log analysis.

A storage account: Storage accounts are useful for storing logs, but not for analysis and monitoring in this scenario.

Create event subscriptions: Event subscriptions are generally used to react to events within Azure. They are not directly used to monitor logs on VMs.

Configure Continuous Delivery: Continuous delivery is a development practice and has no direct impact on monitoring logs from VMs.

Modify the membership of the Event Log Readers group: The Azure Monitor agent authenticates by using the virtual machine’s managed identity rather than a user account, so membership of this group is not required.

Therefore, the correct answer is:

Resource to create in Azure: A Log Analytics workspace

Configuration to perform on the virtual machines: Install the Azure Monitor agent

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

HOTSPOT
You have several Azure App Service web apps that use Azure Key Vault to store data encryption keys.
Several departments have the following requests to support the web app:

Security

Review the membership of administrative roles and require users to provide a justification for continued membership.
Get alerts about changes in administrator assignments.
See a history of administrator activation, including which changes administrators made to Azure resources.
Development

Enable the applications to access Key Vault and retrieve keys for use in code.
Quality Assurance

Receive temporary administrator access to create and configure additional web apps in the test environment.

Which service should you recommend for each department’s request? To answer, configure the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Security:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection

Development:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection

Quality Assurance:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection

A

Understanding the Needs

Security: Requires auditing and control over administrative roles and changes.

Development: Needs secure access from code to retrieve keys from Key Vault.

Quality Assurance: Requires temporary elevated access for testing purposes.

Analyzing the Options and Making Selections:

Security Department:

Azure AD Privileged Identity Management (PIM): Correct

Why: PIM is specifically designed to manage, control, and monitor privileged access within Azure AD. It allows you to:

Review membership of admin roles and require justification.

Get alerts about changes in admin assignments.

Track admin activation history and changes.

Azure Managed Identity: Incorrect. Managed identities are for applications and services to authenticate with other Azure resources.

Azure AD Connect: Incorrect. Azure AD Connect is for synchronizing on-premises Active Directory with Azure AD.

Azure AD Identity Protection: Incorrect. Identity Protection focuses on detecting and mitigating risks to user accounts, not managing role assignments.

Development Department:

Azure Managed Identity: Correct

Why: Managed identities provide a secure way for applications to authenticate with other Azure resources (like Key Vault) without needing to manage secrets or credentials. This is the preferred method for accessing Key Vault from application code.

Azure AD Privileged Identity Management: Incorrect. PIM manages privileged roles, not application access to resources.

Azure AD Connect: Incorrect. Azure AD Connect is for synchronizing on-premises Active Directory with Azure AD.

Azure AD Identity Protection: Incorrect. Identity Protection focuses on detecting and mitigating risks to user accounts, not managing app access to resources.

Quality Assurance Department:

Azure AD Privileged Identity Management (PIM): Correct

Why: PIM is ideal for providing temporary elevated access. It allows you to:

Grant users temporary admin roles.

Require activation with justification.

Set time-bound access limits, ensuring the elevated permissions expire.

Azure Managed Identity: Incorrect. Managed identities are for applications and services to authenticate with other Azure resources.

Azure AD Connect: Incorrect. Azure AD Connect is for synchronizing on-premises Active Directory with Azure AD.

Azure AD Identity Protection: Incorrect. Identity Protection focuses on detecting and mitigating risks to user accounts, not temporary privileged access.

Therefore, the correct answers are:

Security: Azure AD Privileged Identity Management

Development: Azure Managed Identity

Quality Assurance: Azure AD Privileged Identity Management

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

Overview:

Existing Environment

Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

Active Directory Environment:

The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.

Network Infrastructure:

Each office contains at least one domain controller from the corp.fabrikam.com domain.

The main office contains all the domain controllers for the rd.fabrikam.com forest.

All the offices have a high-speed connection to the Internet.

An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.

The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.

Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

Problem Statement:

The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Requirements:

Planned Changes:

Fabrikam plans to move most of its production workloads to Azure during the next few years.

As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft Office 365 deployment. All R&D operations will remain on-premises.

Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

Technical Requirements:

Fabrikam identifies the following technical requirements:

  • Web site content must be easily updated from a single point.
  • User input must be minimized when provisioning new app instances.
  • Whenever possible, existing on-premises licenses must be used to reduce cost.
  • Users must always authenticate by using their corp.fabrikam.com UPN identity.
  • Any new deployments to Azure must be redundant in case an Azure region fails.
  • Whenever possible, solutions must be deployed to Azure by using platform as a service (PaaS).
  • An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
  • Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

Database Requirements:

Fabrikam identifies the following database requirements:

  • Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
  • To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
  • Database backups must be retained for a minimum of seven years to meet compliance requirements.

Security Requirements:

Fabrikam identifies the following security requirements:

  • Company information, including policies, templates, and data, must be inaccessible to anyone outside the company.
  • Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
  • Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
  • All administrative access to the Azure portal must be secured by using multi-factor authentication.
  • The testing of WebApp1 updates must not be visible to anyone outside the company.

You need to recommend a strategy for migrating the database content of WebApp1 to Azure.

What should you include in the recommendation?
Use Azure Site Recovery to replicate the SQL servers to Azure.
Use SQL Server transactional replication.
Copy the BACPAC file that contains the Azure SQL database file to Azure Blob storage.
Copy the VHD that contains the Azure SQL database files to Azure Blob storage.

A

Understanding the Requirements and Constraints

Minimize Downtime: The migration process must minimize disruption to customer access.

Long-Term Backups: Backups must be retained for seven years.

Hybrid Identity: Authentication will be tied to corp.fabrikam.com (so we need AD Sync).

PaaS Preference: Prefer PaaS solutions where possible.

Redundancy: The solution must provide redundancy.

Security: Data must be kept private, including testing.

Database Analysis: Performance metrics should be available for analysis.

SQL Server 2016: The current on-premises database is running SQL Server 2016.

Analyzing the Options

Use Azure Site Recovery to replicate the SQL servers to Azure.

Incorrect. Azure Site Recovery (ASR) is great for replicating entire VMs, but it is a VM-based solution, which goes against the technical requirement to use PaaS solutions whenever possible. Also, it doesn’t address the requirement for long-term backups. ASR is not ideal for migrating a database to a PaaS offering.

Use SQL Server transactional replication.

Incorrect. While transactional replication is great for keeping data synchronized between databases, it’s complex to set up and doesn’t directly migrate the data into a PaaS Azure SQL Database solution. It’s typically used for ongoing replication, not a one-time migration. Transactional Replication does not handle the backup and retention requirements.

Copy the BACPAC file that contains the Azure SQL database file to Azure Blob storage.

Correct. A BACPAC file is a self-contained package containing the schema and data from a SQL Server database. This makes it suitable for migrating SQL databases. Additionally, a BACPAC file can be directly used to create an Azure SQL Database. This meets the PaaS and minimization of disruption requirements. The database can be backed up and retained for 7 years via the automated backup process for Azure SQL databases.

Copy the VHD that contains the Azure SQL database files to Azure Blob storage.

Incorrect. Copying the VHD (virtual hard disk) file is a good approach for migrating an IaaS-based SQL Server VM to Azure; however, it is not appropriate when migrating to an Azure SQL Database (a PaaS solution). The BACPAC approach is better because it can directly create a PaaS SQL database.

Why BACPAC is the Best Choice

PaaS Alignment: It’s suitable for migrating to an Azure SQL Database, a PaaS offering, aligning with the PaaS preference requirement.

Minimal Downtime: Can be used for a relatively quick migration process, reducing impact on customer access.

Direct Migration: The BACPAC file is directly usable to create or update an Azure SQL Database.

Backup Handling: Azure SQL Database handles long-term backups (including 7-year retention) which addresses the requirement.

Efficiency: More efficient than setting up replication for a one-time migration.

Therefore, the recommendation should include: Copy the BACPAC file that contains the Azure SQL database file to Azure Blob storage.
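As a rough illustration of the Blob storage copy step named in the answer, here is a minimal Python sketch assuming the azure-identity and azure-storage-blob packages; the storage account URL, container, and file names are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Hypothetical storage account URL.
service = BlobServiceClient(
    account_url="https://fabrikamstore.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)

# Upload the exported BACPAC so it can be imported into Azure SQL Database.
blob = service.get_blob_client(container="migrations", blob="webapp1.bacpac")
with open("webapp1.bacpac", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```

The export of the BACPAC from the on-premises SQL Server and the subsequent import into Azure SQL Database would typically be performed with tooling such as SqlPackage or the Azure portal; this sketch covers only the copy to Blob storage.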

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.

Several VMs are exhibiting network connectivity issues.

You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.

Solution: Use the Azure Traffic Analytics solution in Azure Log Analytics to analyze the network traffic.

Does the solution meet the goal?
Yes
No

A

Understanding the Goal

The goal is to:

Analyze network traffic to VMs both on-premises and in Azure.

Determine whether packets are being allowed or denied.

Diagnose network connectivity issues.

Understanding the Solution: Azure Traffic Analytics

What it Does: Azure Traffic Analytics is a cloud-based solution that analyzes NSG (Network Security Group) flow logs to provide insights into network traffic patterns and security posture within your Azure environment.

How it Works:

Flow logs are captured by Azure Network Watcher for NSGs.

These logs are sent to a storage account.

Traffic Analytics processes these flow logs to extract actionable information.
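To make the allow/deny signal concrete, here is a small Python sketch that parses one NSG flow-log flow tuple; it assumes the version-1 tuple layout (timestamp, source IP, destination IP, source port, destination port, protocol T/U, direction I/O, decision A/D), and the sample tuple is made up:

```python
def parse_flow_tuple(flow: str) -> dict:
    """Split a version-1 NSG flow tuple and interpret its fields."""
    ts, src_ip, dst_ip, src_port, dst_port, proto, direction, decision = flow.split(",")[:8]
    return {
        "source": f"{src_ip}:{src_port}",
        "destination": f"{dst_ip}:{dst_port}",
        "protocol": "TCP" if proto == "T" else "UDP",
        "inbound": direction == "I",
        "allowed": decision == "A",  # 'A' = allowed by an NSG rule, 'D' = denied
    }

# Hypothetical sample tuple.
print(parse_flow_tuple("1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A"))
```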

Analyzing if the Solution Meets the Goal

Network Analysis: Traffic Analytics does analyze network traffic patterns, which helps with determining what traffic is flowing.

Allow/Deny Decisions: Traffic Analytics can show if a connection attempt was allowed or denied by an NSG based on its rules.

Connectivity Issues: Traffic Analytics can help identify the source or destination of connectivity issues related to VMs.

Limitations with On-Premises:

Crucially, Traffic Analytics only works with flow logs generated by Azure Network Security Groups (NSGs).

It does not analyze traffic for on-premises VMs directly.

While it can show traffic that comes from on-premises through the Azure ExpressRoute circuit and hits Azure NSGs, it doesn’t provide visibility of on-premises network traffic that does not traverse an Azure NSG.

You would need additional analysis tools or logs on-premises to achieve full visibility of on-premises traffic.

Conclusion

While Azure Traffic Analytics is an excellent tool for understanding network traffic and identifying allowed or denied packets within Azure, it does not meet the goal of analyzing network traffic to all VMs, both on-premises and in Azure.

Therefore, the answer is No.

To fully analyze the network traffic to all VMs, you would need a solution that can collect network flow data from both the Azure NSGs and the on-premises network.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:

  • Provide access to the full .NET framework.
  • Provide redundancy if an Azure region fails.
  • Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy an Azure virtual machine to two Azure regions, and you deploy an Azure Application Gateway.

Does this meet the goal?
Yes
No

A

Understanding the Requirements

Stateless Web App: The application doesn’t store user session data locally and can be scaled horizontally.

Full .NET Framework: The application needs access to the complete .NET Framework, not just .NET Core/.NET 5+.

Regional Redundancy: The application must remain operational if an Azure region goes down.

OS Access: Administrators must have access to the underlying operating system to install dependencies.

Analyzing the Solution

Azure Virtual Machines in Two Regions:

Meets Full .NET Framework Requirement: Yes, VMs allow you to install the full .NET framework and have full OS control.

Meets OS Access Requirement: Yes, administrators can access the OS of a virtual machine.

Provides Redundancy: Yes, deploying VMs in two regions provides redundancy, because if one region fails, the application would still be available in another.

Azure Application Gateway:

Meets Redundancy: Yes, Application Gateway can distribute traffic across the VMs in the two regions.

Provides Access to Web App: Yes, it provides a single entry point to access the web application running on the VMs.

Evaluation

The solution, deploying VMs in multiple regions with Azure Application Gateway, does meet the stated requirements.

Full .NET Framework: VMs allow installation of the full framework.

Redundancy: VMs in two regions with Application Gateway provide redundancy and high availability.

OS Access: VMs provide administrators with OS-level access.

Therefore, the answer is Yes.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

DRAG DROP

Your on-premises network contains a server named Server1 that runs an ASP.NET application named App1.

You have a hybrid deployment of Azure Active Directory (Azure AD).

You need to recommend a solution to ensure that users sign in by using their Azure AD account and Azure Multi-Factor Authentication (MFA) when they connect to App1 from the internet.

Which three Azure services should you recommend be deployed and configured in sequence? To answer, move the appropriate services from the list of services to the answer area and arrange them in the correct order.
Services
an internal Azure Load Balancer
an Azure AD conditional access policy
Azure AD Application Proxy
an Azure AD managed identity
a public Azure Load Balancer
an Azure AD enterprise application
an App Service plan
Answer Area

A

Understanding the Requirements

On-Premises App: App1 is hosted on-premises.

Azure AD Authentication: Users must authenticate using their Azure AD accounts.

Azure MFA: Users must be prompted for MFA when connecting from the internet.

Internet Access: Users access App1 from the internet.

Analyzing the Azure Services

Here’s a breakdown of how each service fits into the solution:

Azure AD Enterprise Application:

Purpose: This represents App1 in Azure AD, making it possible to authenticate users and manage access.

Why It’s Needed First: You need to register the application in Azure AD before you can authenticate users against it. This sets the base for Azure AD to recognize the application as a target.

Azure AD Application Proxy:

Purpose: This securely publishes App1 to the internet without requiring changes to your network infrastructure. It’s a component of Azure AD that gives users secure access to on-premises applications through Azure AD.

Why It’s Needed Second: Application Proxy connects to your on-premises app through a connector and can then use Azure AD for authentication. It also enables the use of Conditional Access policies.

Azure AD Conditional Access Policy:

Purpose: Enforces MFA and other security requirements for access to App1 based on user location, device, etc.

Why It’s Needed Last: You configure a conditional access policy after you’ve configured the Enterprise Application in Azure AD and configured it to use Application Proxy. This allows you to define the authentication conditions.

Incorrect Services

An internal Azure Load Balancer: Internal load balancers are used for distributing traffic within a virtual network. They do not make applications accessible from the internet.

An Azure AD managed identity: Managed identities are for allowing resources to securely authenticate with other Azure services, not for application access.

A public Azure Load Balancer: While public load balancers can direct internet traffic, they do not implement authentication for Azure AD users or apply conditional access policies.

An App Service plan: App Service plans are used to define the resources for hosting Azure App Service web applications and do not play a role in authenticating against on-premises apps.

Correct Order

The correct order to deploy and configure these services is:

Azure AD enterprise application

Azure AD Application Proxy

Azure AD conditional access policy

Therefore, drag and drop the three services in the order listed above into the answer area.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

HOTSPOT
You have an Azure subscription that contains the SQL servers on Azure shown in the following table:

SQL Servers Table

| Name | Resource group | Location |
|---|---|---|
| SQLsvr1 | RG1 | East US |
| SQLsvr2 | RG2 | West US |
The subscription contains the storage accounts shown in the following table:

Storage Accounts Table

| Name | Resource group | Location | Account kind |
|---|---|---|---|
| storage1 | RG1 | East US | StorageV2 (general purpose v2) |
| storage2 | RG2 | Central US | BlobStorage |
You create the Azure SQL databases shown in the following table:

Azure SQL Databases Table

| Name | Resource group | Server | Pricing tier |
|---|---|---|---|
| SQLdb1 | RG1 | SQLsvr1 | Standard |
| SQLdb2 | RG1 | SQLsvr1 | Standard |
| SQLdb3 | RG2 | SQLsvr2 | Premium |

Answer Area
Statements
When you enable auditing for SQLdb1, you can store the audit information to storage1.
When you enable auditing for SQLdb2, you can store the audit information to storage2.
When you enable auditing for SQLdb3, you can store the audit information to storage2.

A

Key Concepts:

Azure SQL Auditing: This feature tracks database events and writes them to audit logs. These logs can be stored in Azure Storage accounts.

Storage Account Requirements: When configuring auditing for Azure SQL databases, you need to specify a storage account for audit log storage. There are limitations around storage accounts that can be used for audit logs.

Important Considerations:

Storage Account Type: Azure SQL Database auditing requires a storage account of type StorageV2 (general-purpose v2). BlobStorage accounts cannot be used.

Storage Account Location: The storage account used for auditing must be in the same region as the SQL server it is auditing. If it is not, audit logs cannot be written to the given storage account.
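These two constraints can be captured in a tiny Python check, shown here purely to make the analysis below mechanical (the region and kind strings mirror the tables above):

```python
def can_store_audit_logs(server_region: str, account_region: str, account_kind: str) -> bool:
    """Return True if the storage account can receive the server's audit logs."""
    same_region = server_region == account_region
    supported_kind = account_kind != "BlobStorage"  # StorageV2 is supported
    return same_region and supported_kind

print(can_store_audit_logs("East US", "East US", "StorageV2"))       # True  (SQLdb1 -> storage1)
print(can_store_audit_logs("East US", "Central US", "BlobStorage"))  # False (SQLdb2 -> storage2)
print(can_store_audit_logs("West US", "Central US", "BlobStorage"))  # False (SQLdb3 -> storage2)
```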

Statement Analysis:

When you enable auditing for SQLdb1, you can store the audit information to storage1.

Analysis: True. SQLdb1 is on the SQLsvr1 server located in East US. storage1 is also located in East US and it’s a StorageV2 account. This meets the requirements for SQL Auditing.

When you enable auditing for SQLdb2, you can store the audit information to storage2.

Analysis: False. SQLdb2 is on the SQLsvr1 server located in East US. However, storage2 is in Central US, so it is in a different region. Also, storage2 is a BlobStorage account, which is not supported for SQL auditing. Because of both the location and the account type, the logs cannot be sent to storage2.

When you enable auditing for SQLdb3, you can store the audit information to storage2.

Analysis: False. SQLdb3 is on the SQLsvr2 server located in West US. However, storage2 is located in Central US, which is a different region, and the account type of storage2 is BlobStorage, which is also not supported. The server’s and storage’s regions must match, and the storage must be StorageV2.

Therefore, the correct answer is:

When you enable auditing for SQLdb1, you can store the audit information to storage1: Yes

When you enable auditing for SQLdb2, you can store the audit information to storage2: No

When you enable auditing for SQLdb3, you can store the audit information to storage2: No

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

You have 100 servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2012 R2 instances. The instances host databases that have the following characteristics:

✑ The largest database is currently 3 TB. None of the databases will ever exceed 4 TB.

✑ Stored procedures are implemented by using CLR.

You plan to move all the data from SQL Server to Azure.

You need to recommend an Azure service to host the databases. The solution must meet the following requirements:

✑ Whenever possible, minimize management overhead for the migrated databases.

✑ Minimize the number of database changes required to facilitate the migration.

✑ Ensure that users can authenticate by using their Active Directory credentials.

What should you include in the recommendation?
Azure SQL Database single databases
Azure SQL Database Managed Instance
Azure SQL Database elastic pools
SQL Server 2016 on Azure virtual machines

A

Understanding the Requirements

Large Databases: Databases up to 3 TB, with a max of 4 TB.

CLR Stored Procedures: Databases use CLR for stored procedures.

Minimize Management: Reduce overhead related to database maintenance and administration.

Minimize Changes: Reduce the number of database changes necessary for migration.

Active Directory Authentication: Use existing Active Directory credentials.

Analyzing the Options

Azure SQL Database Single Databases:

Pros: Highly managed PaaS offering. Low management overhead.

Cons: Limited database size (up to 4 TB, and only in certain configurations). Does not support CLR.

Conclusion: Fails to meet the CLR requirement.

Azure SQL Database Managed Instance:

Pros: PaaS offering with high compatibility with on-premises SQL Server; supports CLR. Supports Azure Active Directory authentication. Up to 16 TB of storage in a single instance.

Cons: Higher cost than single databases.

Conclusion: Meets all requirements and is the best fit.

Azure SQL Database Elastic Pools:

Pros: PaaS offering for managing multiple databases with shared resources.

Cons: Designed for databases with varying usage patterns. Databases in an Elastic Pool cannot exceed the size limits of Azure SQL Database single databases, and therefore will not meet the requirement for large databases. Also does not support CLR.

Conclusion: Fails to meet the CLR and database size requirements.

SQL Server 2016 on Azure Virtual Machines:

Pros: Full control over the SQL Server instance. Full CLR support. Allows for full AD authentication.

Cons: Requires significantly more management overhead because you’re responsible for patching, backups, high availability, etc.

Conclusion: Fails to minimize management overhead.

Why Managed Instance is the Best Choice

Compatibility: Managed Instance has great compatibility with on-premises SQL Server. This will reduce the number of database changes required for the migration.

CLR Support: It supports CLR, unlike single databases and elastic pools.

Database Size: It can accommodate the large databases (up to 16TB) which also covers the largest database and projected growth.

Managed Service: It’s a PaaS offering, minimizing management overhead, as Azure manages the underlying infrastructure.

Active Directory Integration: Supports Active Directory authentication.

Therefore, the correct recommendation is Azure SQL Database Managed Instance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

You have an Azure subscription that contains an Azure Blob storage account named store1.

You have an on-premises file server named Server1 that runs Windows Server 2016.

Server1 stores 500 GB of company files.

You need to store a copy of the company files from Server1 in store1.

Which two possible Azure services achieve this goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.
an Azure Batch account
an integration account
an On-premises data gateway
an Azure Import/Export job
Azure Data factory

A

Correct Solutions:

An On-premises data gateway: This is a crucial component for connecting on-premises data sources to Azure services. The gateway acts as a secure bridge, allowing services like Azure Data Factory to access Server1’s files. The data gateway enables a hybrid approach to data migration and transfer to your cloud environment. Using this gateway to send the data from on-premises servers to Azure Storage enables the required migration.

Azure Data Factory: ADF is a cloud-based data integration service that orchestrates the movement and transformation of data. With the On-premises Data Gateway, ADF can copy the 500 GB of files from Server1 to store1 using the Copy activity. This is a standard use case for ADF and is a very appropriate approach for moving large amounts of data to the cloud.

Incorrect Solutions:

An Azure Batch account: Azure Batch is a service for running large-scale parallel and high-performance computing jobs. It is not used for direct data transfer or file copying from on-premises file servers.

An integration account: Integration Accounts are part of Azure Logic Apps and are used for storing integration artifacts such as schemas, maps, and partners information. It’s not used for data movement in the way required here.

An Azure Import/Export job: Import/Export jobs are primarily for migrating extremely large datasets to Azure by shipping physical storage devices (like hard drives). This solution is not required when you have a good internet connection that you can utilize to transfer 500 GB of data to Azure. It would be slower, more complicated, and involve manual shipping.

In summary:

The correct options are an On-premises data gateway and Azure Data Factory. These options work together to enable secure data transfer from an on-premises file server to an Azure Blob Storage.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

You have the Azure resources shown in the following table.

| Name | Type | Location |
|---|---|---|
| US-Central-Firewall-policy | Azure Firewall policy | Central US |
| US-East-Firewall-policy | Azure Firewall policy | East US |
| EU-Firewall-policy | Azure Firewall policy | West Europe |
| USEastfirewall | Azure Firewall | Central US |
| USWestfirewall | Azure Firewall | East US |
| EUFirewall | Azure Firewall | West Europe |
You need to deploy a new Azure Firewall policy that will contain mandatory rules for all Azure Firewall deployments. The new policy will be configured as a parent policy for the existing policies.

What is the minimum number of additional Azure Firewall policies you should create?
0
1
2
3

A

Understanding Parent Policies

An Azure Firewall Policy can be a parent policy. This means its rules and settings are inherited by other (child) policies.

A parent policy is not assigned to an Azure Firewall directly; instead, you set it as the base (parent) of other policies, and those child policies are assigned to the firewalls.

You can assign multiple firewalls to a policy.

You need to create a new firewall policy to be a parent for all existing firewall policies.

Analysis

Goal: You need a single parent policy that applies to all Azure Firewall deployments.

Current Setup: You have three existing Azure Firewall policies (US-Central-Firewall-policy, US-East-Firewall-policy, and EU-Firewall-policy) each associated with a specific Azure Firewall.

Solution: Create one new policy and set it as the parent of all the existing policies. Therefore, you need to create one parent policy.

Minimum Additional Policies:

To achieve the objective, we only need 1 additional Azure Firewall policy. The one new policy will act as the parent policy, and the existing policies will become its child policies.
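As a hedged sketch of what that looks like with the azure-mgmt-network Python SDK (the names, resource group, and subscription ID are hypothetical, and parent/child linking has real-world constraints such as matching policy tiers):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 1. Create the new parent policy that will hold the mandatory rules.
parent = client.firewall_policies.begin_create_or_update(
    "rg-network", "Global-parent-policy", {"location": "eastus"}
).result()

# 2. Update an existing policy so it inherits from the parent via base_policy.
client.firewall_policies.begin_create_or_update(
    "rg-network", "US-East-Firewall-policy",
    {"location": "eastus", "base_policy": {"id": parent.id}},
).result()
```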

Answer:
1

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

HOTSPOT

You have an Azure subscription that contains the storage accounts shown in the following table.
| Name | Type | Performance |
|---|---|---|
| storage1 | StorageV2 | Standard |
| storage2 | StorageV2 | Premium |
| storage3 | BlobStorage | Standard |
| storage4 | FileStorage | Premium |

You plan to implement two new apps that have the requirements shown in the following table.

| Name | Requirement |
|---|---|
| App1 | Use lifecycle management to migrate app data between storage tiers |
| App2 | Store app data in an Azure file share |
Which storage accounts should you recommend using for each app? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
App1:
Storage1 and storage2 only
Storage1 and storage3 only
Storage1, storage2, and storage3 only
Storage1, storage2, storage3, and storage4
App2:
Storage4 only
Storage1 and storage4 only
Storage1, storage2, and storage4 only
Storage1, storage2, storage3, and storage4

A

Understanding Storage Account Types

StorageV2 (General-purpose v2): Supports all storage services (blobs, queues, tables, files) and offers blob access tiers (Hot, Cool, Archive). Suitable for most general-purpose scenarios.

BlobStorage: Specifically designed for storing unstructured data (blobs). It supports access tiers (Hot, Cool, Archive), making it suitable for lifecycle management.

FileStorage: Specifically designed for creating Azure file shares that can be accessed via SMB.

Analyzing Requirements

App1: Requires lifecycle management to migrate data between storage tiers. This means it needs to use Hot/Cool/Archive tiers.

App2: Requires storing data in an Azure file share.
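For App1's tiering requirement, a lifecycle management rule might look like the following hedged Python sketch using the azure-mgmt-storage package; the resource group, account name, blob prefix, and day thresholds are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Move App1 blobs to Cool after 30 days and Archive after 90 days.
policy = {
    "policy": {
        "rules": [{
            "enabled": True,
            "name": "tier-app1-data",
            "type": "Lifecycle",
            "definition": {
                "filters": {"blob_types": ["blockBlob"], "prefix_match": ["app1data/"]},
                "actions": {"base_blob": {
                    "tier_to_cool": {"days_after_modification_greater_than": 30},
                    "tier_to_archive": {"days_after_modification_greater_than": 90},
                }},
            },
        }]
    }
}
client.management_policies.create_or_update("rg-storage", "storage1", "default", policy)
```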

Selections

App1: Storage1, storage2, and storage3 only.

Storage1 is a StorageV2 account, which is perfect for general-purpose storage including lifecycle management.

Storage2 is also a StorageV2, which also supports lifecycle management.

Storage3 is a BlobStorage account, which is perfect for blob storage including lifecycle management.

Storage4 is a FileStorage account, which does not support lifecycle management.

App2: Storage4 only

Storage4 is the only FileStorage account and therefore the only account type that can fulfill the requirement.

Therefore, the correct answer is:

App1: Storage1, storage2, and storage3 only

App2: Storage4 only

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

HOTSPOT

You are designing an Azure web app.

You plan to deploy the web app to the North Europe Azure region and the West Europe Azure region.

You need to recommend a solution for the web app. The solution must meet the following requirements:

✑ Users must always access the web app from the North Europe region, unless the region fails.

✑ The web app must be available to users if an Azure region is unavailable.

✑ Deployment costs must be minimized.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Request routing method:
A Traffic Manager profile
Azure Application Gateway
Azure Load Balancer
Request routing configuration:
Cookie-based session affinity
Performance traffic routing
Priority traffic routing
Weighted traffic routing

A

Understanding the Requirements

Primary Region: Users should always access the North Europe region unless it is unavailable. This indicates the need for a failover mechanism.

High Availability: The app must remain accessible even if one of the regions fails.

Cost Optimization: Deployment costs need to be minimized.

Analyzing Azure Services

Request Routing Method:

A Traffic Manager profile: This is the best choice for routing traffic based on priority, performance, or geographic location. It offers automatic failover and is specifically designed for these scenarios.

Azure Application Gateway: Primarily designed for web traffic load balancing, web application firewall capabilities, and more advanced routing based on HTTP headers and other parameters. It’s not the right tool for handling primary/failover logic like this.

Azure Load Balancer: Primarily for balancing traffic within a region. It doesn’t provide the cross-region routing required for failover in this scenario.

Request Routing Configuration:

Cookie-based session affinity: Ensures requests from the same user are routed to the same instance. This isn’t relevant to the core requirement of failover and routing between regions.

Performance traffic routing: Routes traffic to the endpoint with the lowest latency. While helpful, it does not guarantee that users are always sent to North Europe first, which is the stated requirement.

Priority traffic routing: Routes traffic to a primary endpoint, and if that endpoint is unhealthy, the traffic is routed to the next available endpoint. This is perfect for the primary/failover scenario.

Weighted traffic routing: Routes traffic to different endpoints based on a percentage, typically for scenarios like testing different versions. This is not optimal for the specified requirement.

Solution

Based on the analysis:

Request routing method: A Traffic Manager profile is the appropriate service for managing the failover between two regions.

Request routing configuration: Priority traffic routing fits the requirement to route users to the primary region, which will be North Europe, and automatically direct traffic to the secondary region (West Europe) if the primary region is unavailable.

Therefore, the correct answer is:

Request routing method: A Traffic Manager profile

Request routing configuration: Priority traffic routing
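For illustration, a Priority-routed profile could be created with the azure-mgmt-trafficmanager Python SDK roughly as follows; all names, resource IDs, and probe settings are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.profiles.create_or_update(
    "rg-web", "webapp-tm",
    {
        "location": "global",
        "traffic_routing_method": "Priority",
        "dns_config": {"relative_name": "webapp-contoso", "ttl": 30},
        "monitor_config": {"protocol": "HTTPS", "port": 443, "path": "/"},
        "endpoints": [
            # North Europe is always preferred while healthy.
            {"name": "north-europe", "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
             "target_resource_id": "<north-europe-web-app-id>", "priority": 1},
            # West Europe receives traffic only if North Europe is unhealthy.
            {"name": "west-europe", "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
             "target_resource_id": "<west-europe-web-app-id>", "priority": 2},
        ],
    },
)
```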

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.

Does this meet the goal?
Yes
No

A

Understanding the Requirements

App Service and SQL Database Co-location: App Service instances and their associated Azure SQL databases must be deployed in the same Azure region.

Regional Regulatory Requirement: App Service instances can only be deployed to specific allowed Azure regions.

Simultaneous Deployment: Both the App Service instances and the Azure SQL databases will be deployed at the same time.

Evaluating the Proposed Solution

Resource Groups Based on Location: Creating resource groups named based on Azure regions is a sound practice. This helps organize resources logically and makes it easy to manage deployments within specific regions. It is common to create a resource group for each region you want to deploy resources in (e.g. rg-eastus, rg-westus).

Resource Locks: Resource locks prevent accidental deletion or modification of resources. If resource locks are placed on resource groups, they prevent any resource from being deleted or modified. However, they do not enforce the creation of resources in certain regions, and therefore will not enforce the regional regulatory requirement.

Why the Solution Doesn’t Fully Meet the Goal

The proposed solution addresses the organization and prevention of deletion of the resources, but it does not enforce the actual deployment of resources only to specific allowed regions.

While creating resource groups by location helps in the organization of resources, it does not prevent the creation of resources in the incorrect region.

Resource locks only protect existing resources, and don’t have a role during the creation of resources. Resource locks will not stop resources from being deployed to a resource group in an incorrect region.
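For contrast, what would enforce the requirement is an Azure Policy assignment such as the built-in Allowed locations definition. A hedged Python sketch using the azure-mgmt-resource package follows; the subscription ID and region list are hypothetical, and the definition GUID shown is the commonly documented built-in ID:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient

client = PolicyClient(DefaultAzureCredential(), "<subscription-id>")

scope = "/subscriptions/<subscription-id>"
client.policy_assignments.create(
    scope, "allowed-locations",
    {
        # Built-in "Allowed locations" policy definition.
        "policy_definition_id": "/providers/Microsoft.Authorization/policyDefinitions/"
                                "e56962a6-4747-49cd-b67b-bf8b01975c4c",
        "parameters": {"listOfAllowedLocations": {"value": ["eastus", "westus"]}},
    },
)
```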

Conclusion

The solution is a good practice for resource organization, but does not enforce regional deployment.

Answer:

No

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

Your company has an app named App1 that uses data from the on-premises Microsoft SQL Server databases shown in the following table.
| Name | Size |
|---|---|
| DB1 | 450 GB |
| DB2 | 250 GB |
| DB3 | 300 GB |
| DB4 | 50 GB |

App1 and the data are used on the first day of the month only. The data is not expected to grow more than 3% each year.

The company is rewriting App1 as an Azure web app and plans to migrate all the data to Azure.

You need to migrate the data to Azure SQL Database. The solution must minimize costs.

Which service tier should you use?
vCore-based Business Critical
vCore-based General Purpose
DTU-based Standard
DTU-based Basic

A

Understanding the Requirements

Data Migration: All data from on-premises SQL Server databases needs to be migrated to Azure SQL Database.

Usage Pattern: The application (and thus the database) is used only on the first day of each month, and the data does not grow excessively (3% annually).

Cost Minimization: The goal is to choose the most cost-effective service tier for this usage pattern.

Analyzing Azure SQL Database Service Tiers

vCore-based Service Tiers:

Business Critical: Designed for mission-critical applications with the highest resilience, high availability, and the fastest performance (using local SSD storage). Offers a very high level of performance but is the most expensive option.

General Purpose: Suitable for most business workloads and is typically the default choice. Provides good performance with a balance between cost and features.

DTU-based Service Tiers:

Standard: Offers a good balance of features and performance.

Basic: The most cost-effective DTU-based option, designed for low-throughput and less demanding workloads, typically with small databases.

Evaluation for this scenario

Infrequent Usage: The application’s usage pattern is highly periodic (only one day per month). Therefore, paying for high performance during the rest of the month is not ideal; a tier with low compute cost for most of the month is optimal for this cost-conscious requirement.

Performance Needs: The data needs to be available and performant on that first day of the month, however, there is no requirement that the performance must be very high.

Data Size: The total data size is around 1050 GB (450+250+300+50), which does not qualify as “small”. However the low monthly usage makes a lower tier optimal.

vCore-based Business Critical is not suitable because it is intended for very high throughput, mission-critical systems and therefore the most expensive service.

vCore-based General Purpose is suitable for the performance requirements, however it will incur too much cost when the system is not being used, due to the compute required.

DTU-based Standard is not as expensive as a vCore option, however it does not provide the best optimization for low compute for the majority of the month, and will be more expensive than a basic tier.

DTU-based Basic is the best fit because it allows for a very cost effective tier, while also providing sufficient performance on the single day of the month the database is being used.

Conclusion

Given the low, infrequent usage of the database (one day a month), the DTU-based Basic tier is the most cost-effective option and will be sufficient for the performance requirement. You can scale the database up for the one day each month it is used and scale it back down for all other days.

Answer:

DTU-based Basic

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

You have a .NET web service named Service1 that has the following requirements:

✑ Must read and write to the local file system.

✑ Must write to the Windows Application event log.

You need to recommend a solution to host Service1 in Azure. The solution must meet the following requirements:

✑ Minimize maintenance overhead.

✑ Minimize costs.

What should you include in the recommendation?
an Azure App Service web app
an Azure virtual machine scale set
an App Service Environment (ASE)
an Azure Functions app

A

Understanding the Requirements

Service 1 Functionality: The service needs to:

Read and write to the local file system.

Write to the Windows Application event log.

Hosting Goals:

Minimize maintenance overhead.

Minimize costs.

Analyzing Azure Hosting Options

Azure App Service Web App:

Pros: Fully managed platform, low maintenance, good for web applications. Supports deployment of .NET web services.

Cons:

Limited local file system access: App Service web apps have a sandbox environment, making direct file system access limited. Can write to D:\home, but this is a network-based file system and it has some limitations. It is not the same as true local storage.

No direct access to the Windows Event Log: Writing to the Windows Event Log is not directly supported in a standard App Service Web App. You would typically need to use another logging mechanism.

Azure Virtual Machine Scale Set (VMSS):

Pros: Provides scalability and high availability for virtual machines. Full access to the VM and its operating system.

Cons: Higher maintenance overhead than PaaS offerings like App Service. This includes patching, configuring, and managing the underlying OS. Also higher cost due to the compute usage.

App Service Environment (ASE):

Pros: Provides an isolated, dedicated environment for App Service apps. Offers more control than a standard App Service. It is a private version of the standard Azure App Service.

Cons: Much more expensive than a standard App Service. Also, has similar limitations regarding local filesystem and Windows Event Log.

Azure Functions App:

Pros: Serverless compute service, event-driven architecture. Very low maintenance overhead and cost-effective.

Cons: Primarily for running code in response to events, not designed for hosting long-running services. Limited file system access, same as App Services. No direct access to the Windows event log.

Evaluation

File System Access: Both Azure App Service and Azure Functions have limited file system access, especially for non-temporary storage. VMSS is the only option here that would satisfy the local file system requirement.

Windows Event Log: Standard App Service Web Apps and Functions do not have direct access to the Windows Application event log. However, VMSS is ideal for this.

Maintenance: The goal is to minimize maintenance. VMSS requires much more maintenance than the other options, due to the underlying Operating system that requires patching and administration.

Cost: A VMSS will be the most expensive option due to compute usage. App Service and Functions would be much cheaper, with no OS patching or administration. However, neither App Service nor Functions offers full local file system access or the ability to write to the Windows event log.

Conclusion

None of the options fulfill every goal perfectly. The closest solution is an Azure virtual machine scale set (VMSS), as it provides full control of the file system and the Windows event log. It incurs more maintenance overhead, so it is not a perfect fit, but it is the best option in this scenario.

An Azure App Service web app does not support writing to the Windows event log.

An Azure Functions app does not support writing to the Windows event log.

An App Service Environment (ASE) does not support writing to the Windows event log.

Answer:

an Azure virtual machine scale set

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

You have SQL Server on an Azure virtual machine. The databases are written to nightly as part of a batch process.

You need to recommend a disaster recovery solution for the data. The solution must meet the following requirements:

✑ Provide the ability to recover in the event of a regional outage.

✑ Support a recovery time objective (RTO) of 15 minutes.

✑ Support a recovery point objective (RPO) of 24 hours.

✑ Support automated recovery.

✑ Minimize costs.

What should you include in the recommendation?
Azure virtual machine availability sets
Azure Disk Backup
an Always On availability group
Azure Site Recovery

A

Understanding the Requirements

Regional Outage Protection: The solution must protect against complete Azure regional failures.

RTO (Recovery Time Objective) of 15 minutes: The maximum acceptable downtime for the service should be 15 minutes after a disaster.

RPO (Recovery Point Objective) of 24 hours: The maximum acceptable data loss should be 24 hours in a disaster. This means you can lose at most 24 hours of data.

Automated Recovery: The failover process should be automated, minimizing the need for manual intervention.

Cost Minimization: The chosen solution should be cost-effective.

Analyzing Disaster Recovery Options

Azure Virtual Machine Availability Sets:

Pros: Provides high availability within a single Azure region, protecting against hardware failures within the region.

Cons: Does NOT protect against regional outages. It does not provide disaster recovery to another region. This option is for availability, and not disaster recovery.

Does NOT meet requirements.

Azure Disk Backup:

Pros: Provides point-in-time backups of Azure VM disks to a recovery services vault. It is typically used for recovery within the same region, but can be configured for cross region restoration.

Cons: Backup and restore are not instantaneous. Restoring a database from backup and performing a manual recovery operation will exceed the 15-minute RTO. It requires human interaction to initiate the recovery process.

Does NOT meet the automated recovery or RTO requirements.

Always On Availability Group (AG):

Pros: Provides database-level high availability within a single region or across regions, with automatic failover. The RPO of a secondary replica is typically seconds, which easily satisfies the 24-hour RPO requirement.

Cons: Can be complex to configure and manage. Requires a SQL Server license for each node in the availability group, which adds significant cost.

Does NOT meet the cost-minimization requirement.

Azure Site Recovery (ASR):

Pros: Provides a disaster recovery service that replicates virtual machines to another Azure region, enabling failover in case of a regional outage. Supports automated failover and failback. Replication typically achieves an RPO of minutes, comfortably within the 24-hour requirement. It is less expensive than an Always On availability group.

Cons: Recovery can take some time (can be optimized for shorter RTOs). It is not instantaneous recovery, however it can easily meet the 15-minute RTO requirement.

Evaluation

Regional Outage Protection: Azure Site Recovery is the only option here that protects against a regional outage. Availability sets do not protect against a regional outage, as they are in the same region.

RTO of 15 Minutes: Azure Site Recovery can support a 15-minute RTO by pre-configuring a recovery plan so that failover runs without manual steps. The other options either do not provide protection in another region or will not meet the RTO requirement.

RPO of 24 hours: Azure Site Recovery keeps the replica well within 24 hours of the source, meeting the RPO requirement.

Automated Recovery: Azure Site Recovery provides automated failover.

Cost Minimization: Azure Site Recovery will be more cost-effective than an Always On Availability group, due to not requiring the SQL Server Licenses. Azure Disk Backup is cheaper than ASR however, it requires manual intervention during failover and will not meet the RTO.

Conclusion

Given the requirements, Azure Site Recovery (ASR) is the best option for the disaster recovery of a SQL Server virtual machine in Azure. It meets the regional protection, RTO, RPO, and automated recovery requirements and is more cost-effective than an Always On Availability Group.

Answer:

Azure Site Recovery

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

HOTSPOT

You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
To register the users for Azure MFA, use:
Azure AD Identity Protection
Security defaults in Azure AD
Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure:
Grant control in capolicy1
Session control in capolicy1
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant

A

Understanding the Requirements

Azure MFA Enrollment: Users responsible for production environment management must be registered for Azure Multi-Factor Authentication (MFA).

MFA Enforcement: These users must be required to use Azure MFA when they sign in to the Azure portal.

Authentication and Authorization: The solution must meet both authentication (verifying the user’s identity) and authorization (granting access) requirements.

Analyzing Azure MFA Options

Registering Users for Azure MFA:

Azure AD Identity Protection: This service detects potential vulnerabilities and risks regarding user accounts, but it is not directly used to enroll users for MFA.

Security defaults in Azure AD: This provides basic security settings to all users of the tenant, including MFA registration. It doesn’t allow for targeting only specific users, such as in this case, those that manage the production environment.

Per-user MFA in the MFA management UI: This is the classic and direct way to enable MFA for specific users, letting you manage MFA settings for each account individually. That precision makes it the best fit for targeting only the users who manage the production environment, and it is the best answer.

Enforcing Azure MFA Authentication:

Grant control in capolicy1: A “grant control” in a Conditional Access policy is used to enforce certain actions such as requiring MFA. If the correct user or group is specified, this is the perfect solution for enforcing MFA for those specific users.

Session control in capolicy1: Session control in conditional access policies is used to configure features such as “sign-in frequency” and is not directly used to enforce MFA.

Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant: Risk policies in Identity Protection are designed to detect and respond to sign-ins that are considered risky. Although this option provides security, it is not the perfect answer to the specified requirements.

Solution

Based on the analysis:

To register users for MFA: Use Per-user MFA in the MFA management UI. This option allows for precise control over who is enabled for MFA.

To enforce MFA authentication: Configure Grant control in capolicy1. This will ensure that the correct users must satisfy the MFA requirement in order to access the Azure portal.

Therefore, the correct answers are:

To register the users for Azure MFA, use: Per-user MFA in the MFA management UI

To enforce Azure MFA authentication, configure: Grant control in capolicy1

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

Your company has the divisions shown in the following table.
| Division | Azure subscription | Azure AD tenant |
|---|---|---|
| East | Sub1 | Contoso.com |
| West | Sub2 | Fabrikam.com |

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.

You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.

What should you recommend?

A. Configure Azure AD join.
B. Configure Azure AD Identity Protection.
C. Configure a Conditional Access policy.
D. Configure Supported account types in the application registration and update the sign-in endpoint.

A

Understanding the Scenario

App1: An Azure App Service web app using Azure AD for authentication.

Current Setup: App1 is configured for single-tenant authentication, only allowing users from the contoso.com Azure AD tenant to sign in.

Requirement: You need to allow users from the fabrikam.com Azure AD tenant to authenticate to App1.

Analyzing Solution Options

A. Configure Azure AD join: Azure AD join is used to register devices with Azure AD. This is used for device authentication and does not allow another tenant’s users to authenticate against the application.

B. Configure Azure AD Identity Protection: Azure AD Identity Protection is for detecting and responding to risky sign-in behaviors. It does not enable cross-tenant authentication.

C. Configure a Conditional Access Policy: Conditional Access policies are used to control access based on criteria such as location, device, and app, but it does not directly enable users from another tenant to authenticate.

D. Configure Supported account types in the application registration and update the sign-in endpoint: This is the correct way to enable multi-tenant authentication. By changing the supported account types in the Azure AD Application Registration settings and updating the sign-in endpoint, you can enable the app to accept users from other tenants.

Explanation of the Correct Solution

When you register an application in Azure AD, the default behavior is single-tenant. To allow users from another Azure AD tenant to authenticate, you need to:

Change Supported Account Types: In the application registration settings in Azure AD, you need to configure the application to support multiple tenants. This tells Azure AD that the app can accept users from any Azure AD tenant. This setting allows for “Accounts in this organizational directory only” (single tenant) or “Accounts in any organizational directory” (multi-tenant).

Update the Sign-in Endpoint: The application's sign-in endpoint must change from the tenant-specific endpoint to a multi-tenant endpoint so that users from the other tenant can authenticate.
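The endpoint change is easiest to see in code. A hedged sketch with the msal Python package follows (the client ID and secret are hypothetical; the client-credentials flow is used only to keep the example short, and the authority change applies equally to user sign-in flows):

```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app1-client-id>",
    client_credential="<client-secret>",
    # Single-tenant: "https://login.microsoftonline.com/contoso.com"
    # Multi-tenant:  "https://login.microsoftonline.com/organizations"
    authority="https://login.microsoftonline.com/organizations",
)

result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
print(result.get("access_token", result))
```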

Conclusion

The correct approach is to modify the application registration and the sign-in endpoint to enable multi-tenant authentication. This will allow the app to recognize and authenticate users from the fabrikam.com tenant.

Answer:

D. Configure Supported account types in the application registration and update the sign-in endpoint.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

HOTSPOT -
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Statements
Authorization to access Azure resources can be provided only to Azure Active Directory (Azure AD) users.
Identities stored in Azure Active Directory (Azure AD), third-party cloud services, and on-premises Active Directory can be used to access Azure resources.
Azure has built-in authentication and authorization services that provide secure access to Azure resources.

A

Understanding Azure Authentication and Authorization

Authentication: The process of verifying a user’s identity (e.g., by checking their username and password).

Authorization: The process of granting permissions to access specific resources based on the user’s identity and their role or group membership.

Azure Active Directory (Azure AD): Microsoft’s cloud-based identity and access management service. It’s the primary identity provider for Azure resources.

Analyzing the Statements

Statement 1: Authorization to access Azure resources can be provided only to Azure Active Directory (Azure AD) users.

No. While Azure AD is the primary identity provider, you can also use other identities. For example, you can use service principals (which are application identities) to grant access to resources. Also, you can use guest users from other Azure AD tenants.

Statement 2: Identities stored in Azure Active Directory (Azure AD), third-party cloud services, and on-premises Active Directory can be used to access Azure resources.

Yes. This statement accurately reflects the flexibility of Azure’s identity management.

Azure AD Identities: This is the most common scenario where your cloud-based users are managed directly in Azure AD.

Third-party cloud services: You can federate with third-party identity providers (IdPs) to provide single sign-on for your cloud services. This enables integration and collaboration with other cloud service providers.

On-premises Active Directory: Through Azure AD Connect or federation, you can integrate on-premises Active Directory users so they can sign into cloud-based resources.

Statement 3: Azure has built-in authentication and authorization services that provide secure access to Azure resources.

Yes. This statement is correct. Azure provides several services for authentication and authorization. Azure AD is the core service, providing a centralized identity provider. Other services, such as Azure Role-Based Access Control (RBAC) and Azure Active Directory B2C (for consumer-facing applications) are built-in services that enhance the security of Azure resources. These services provide secure and flexible methods to control access to resources.

Answers:

Statement 1: No

Statement 2: Yes

Statement 3: Yes

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

You have an application that is used by 6,000 users to validate their vacation requests. The application manages its own credential store.
Users must enter a username and password to access the application. The application does NOT support identity providers.
You plan to upgrade the application to use single sign-on (SSO) authentication by using an Azure Active Directory (Azure AD) application registration.
Which SSO method should you use?

A. header-based
B. SAML
C. password-based
D. OpenID Connect

A

Understanding the Situation

Current Setup: The application has its own user credential store and requires users to enter a username and password directly in the application. It does not support any external identity providers.

Goal: Upgrade the application to use single sign-on (SSO) using an Azure AD application registration. The application itself cannot be changed to support identity providers.

Constraint: The application does not directly support identity providers such as SAML or OIDC.

Analyzing SSO Methods

A. Header-based:

Mechanism: Header-based authentication passes authentication information in HTTP headers and is typically used in conjunction with a reverse proxy or web application firewall. This application, however, does not support any identity providers or header-based sign-in, so this is not a valid option.

Suitability: Requires the application to be modified and will not integrate directly with Azure AD.

B. SAML:

Mechanism: SAML (Security Assertion Markup Language) is an XML-based protocol for exchanging authentication and authorization data between identity providers (like Azure AD) and applications.

Suitability: Requires the application to directly support SAML integration. In this situation, we are unable to modify the application.

C. Password-based:

Mechanism: Password-based SSO, in the context of Azure AD, involves a secure way to store and manage the credentials for an application that doesn’t natively support federation. When a user accesses the application through Azure AD, Azure AD securely provides the application with the stored credentials.

Suitability: This method can be used when the application does not support any identity providers and cannot be changed. It will not modify the application.

D. OpenID Connect (OIDC):

Mechanism: OIDC is an authentication protocol built on top of OAuth 2.0. It is a modern protocol used for authentication.

Suitability: Requires the application to be modified to directly support OIDC.

Evaluation

Application Constraint: The application cannot be modified to use SAML or OIDC. This rules out options B and D.

Azure AD Compatibility: Azure AD supports password-based SSO for applications that do not directly support federation. This mechanism involves storing the application’s username and password in Azure AD and securely providing it to the application when required.

No Code Changes: By using password-based SSO, there will be no code changes required to the application.

Conclusion

Given the requirements and the constraint that the application cannot be modified, password-based SSO is the only viable option. It allows users to sign in to the application through Azure AD without the application being modified to authenticate against Azure AD directly.

Answer:

C. password-based

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

You are designing a point of sale (POS) solution that will be deployed across multiple locations and will use an Azure Databricks workspace in the Standard tier. The solution will include multiple apps deployed to the on-premises network of each location.

You need to configure the authentication method that will be used by the app to access the workspace. The solution must minimize the administrative effort associated with staff turnover and credential management.

What should you configure?

A. a managed identity
B. a service principal
C. a personal access token

A

Understanding the Situation

POS Solution: A point-of-sale solution deployed at multiple locations with applications accessing an Azure Databricks workspace.

Authentication Requirement: The application needs to authenticate to the Databricks workspace.

Administrative Goal: Minimize administrative effort, particularly related to staff turnover and credential management.

Analyzing Authentication Options

A. a managed identity:

Mechanism: Managed identities provide an automatically managed identity in Azure AD. This eliminates the need for you to store credentials in code or configuration files. Azure services are assigned an identity with defined permissions and are then allowed to access other resources.

Suitability: Managed identities are available only to workloads running on Azure services. In this scenario the applications run on-premises, so this is not a suitable option.

B. a service principal:

Mechanism: A service principal is an identity for an application within Azure AD. It’s like a user, but it’s intended for applications rather than humans. You create a service principal in Azure AD and then configure it to access specific resources, for example the Databricks workspace. The app uses a client ID and client secret to connect to Azure AD.

Suitability: This works for the application, but the secret must be managed, rotated, and protected, which adds operational overhead. Among the available options, however, it is the most suitable for an app running outside Azure.

C. a personal access token:

Mechanism: A personal access token is a string that acts like a password for a user, granting access to specific resources. These tokens are typically linked to individual user accounts.

Suitability: Personal access tokens would be very difficult to manage with staff turnover and would create additional administrative overhead. Therefore this option is not appropriate.

Evaluation

Minimizing Administrative Effort:

Managed identities: Are the most secure and easiest to manage because they require no credential management. However, they are not available to applications running on-premises.

Service principals: Require managing credentials (a client ID and secret), which introduces management overhead.

Personal access tokens: Require managing tokens for each user, which increases overhead and complexity, especially with staff turnover.

On-premises applications: The applications are deployed on-premises, not in the cloud, so managed identities are not an option.

Conclusion

Although managed identities are ideal for code running in the cloud, the applications here run on-premises, so a service principal is the best option. Service principals let the application authenticate without relying on user credentials or individual tokens; the client secret, however, must be managed carefully.

Answer:

B. a service principal
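
For illustration, here is a minimal sketch of how an on-premises app might authenticate with a service principal and call the Databricks REST API, using the azure-identity Python package. The tenant, client, secret, and workspace values are placeholders; the GUID scope below is the well-known Azure Databricks resource ID.

```python
import requests
from azure.identity import ClientSecretCredential

# Service principal credentials issued in Azure AD (placeholder values).
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",  # must be rotated and protected
)

# Request a token for the Azure Databricks resource
# (2ff814a6-... is the well-known Azure Databricks application ID).
token = credential.get_token("2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default")

# Call the Databricks REST API with the bearer token.
resp = requests.get(
    "https://<workspace-url>/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token.token}"},
)
resp.raise_for_status()
print(resp.json())
```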

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

You are developing an app that will read activity logs for an Azure subscription by using Azure Functions.

You need to recommend an authentication solution for Azure Functions. The solution must minimize administrative effort.

What should you include in the recommendation?

A. an enterprise application in Azure AD
B. system-assigned managed identities
C. shared access signatures (SAS)
D. application registration in Azure AD

A

Understanding the Situation

App: An Azure Function app needs to read activity logs for an Azure subscription.

Authentication Goal: Minimize the administrative effort associated with managing credentials.

Analyzing Authentication Options

A. an enterprise application in Azure AD:

Mechanism: An enterprise application is a representation of an application within an Azure AD tenant. You register an application and grant it permissions to access Azure resources.

Suitability: This would work; however, it involves managing credentials such as a client secret, so it is not the ideal solution.

B. system-assigned managed identities:

Mechanism: Managed identities provide an automatically managed identity in Azure AD. This eliminates the need for you to store credentials in code or configuration files. When the function app is assigned a system-assigned managed identity, a corresponding service principal is automatically created in your Azure AD tenant.

Suitability: Managed identities are designed to simplify the process of authenticating to Azure resources. They are the ideal solution for minimizing administrative effort and managing credentials. It is the optimal solution for this scenario.

C. shared access signatures (SAS):

Mechanism: SAS provides delegated access to Azure Storage resources only. It cannot be used to authenticate to other Azure resources such as the activity log.

Suitability: Not suitable, because SAS is limited to storage account access.

D. application registration in Azure AD:

Mechanism: Application registration is the process of registering your application within an Azure AD tenant. It is required to allow an application to request authentication to Azure AD.

Suitability: Application registration is a prerequisite for most authentication methods, but registration alone does not minimize administrative overhead; you would still need to manage a credential such as a client secret. It is a prerequisite step, but it is not the best answer.

Evaluation

Minimize Administrative Effort:

Managed Identities: Do not require you to manage secrets or credentials. Azure automatically rotates the credentials of the managed identity, eliminating credential-management overhead.

Enterprise applications: Require managing secrets and credentials, and therefore require additional administrative effort.

SAS: Not appropriate for the specific scenario.

Application registration: Only part of the process and requires the use of a method (such as client secret) which requires management.

Conclusion

System-assigned managed identities are the ideal solution because they eliminate the need to manage and secure credentials explicitly, greatly reducing administrative effort. The Azure function will automatically be assigned a service principal identity within Azure AD, and this identity can then be authorized to read from the activity logs.

Answer:

B. system-assigned managed identities
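
As a sketch of the resulting setup, assuming the azure-identity and azure-mgmt-monitor Python packages and that the function's managed identity has been granted a suitable role (such as Monitoring Reader) on the subscription, the function code holds no credentials at all:

```python
import datetime
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"  # placeholder

# Inside Azure Functions, DefaultAzureCredential resolves to the function
# app's system-assigned managed identity; no secrets are configured.
credential = DefaultAzureCredential()
client = MonitorManagementClient(credential, subscription_id)

# List activity log events from the last 24 hours.
start = (datetime.datetime.utcnow() - datetime.timedelta(days=1)).isoformat()
for event in client.activity_logs.list(filter=f"eventTimestamp ge '{start}'"):
    print(event.operation_name.localized_value, event.event_timestamp)
```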

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.
Your company has a line-of-business (LOB) application that was developed internally.
You need to implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) when users attempt to access the application from an unknown location.
Which two features should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Azure AD Privileged Identity Management (PIM)
B. Azure Application Gateway
C. Azure AD enterprise applications
D. Azure AD Identity Protection
E. Conditional Access policies

A

Understanding the Situation

Hybrid Environment: An Azure AD tenant synced with on-premises Active Directory.

LOB Application: An internally developed application needs to be integrated with Azure AD for SSO.

SSO Requirement: SAML-based single sign-on is needed.

MFA Enforcement: Multi-factor authentication (MFA) must be enforced when users access the application from unknown locations.

Analyzing Azure AD Features

A. Azure AD Privileged Identity Management (PIM):

Purpose: PIM is used to manage, control, and monitor access to important resources in your organization. It focuses on just-in-time access for privileged roles, not for general application SSO. It does not fulfill the requirement here.

Suitability: Not directly related to SAML SSO or location-based MFA enforcement.

B. Azure Application Gateway:

Purpose: Application Gateway is a web traffic load balancer with a WAF.

Suitability: This might be part of a broader solution if a web application firewall were required, but it is not needed for SAML SSO or location-based Conditional Access, so it is not relevant to the requirements.

C. Azure AD enterprise applications:

Purpose: Enterprise applications are used to represent applications within Azure AD that use the directory for authentication, such as applications that use SAML SSO.

Suitability: Crucial for implementing SAML SSO with Azure AD for a custom application. You need to create an Enterprise Application to define how users authenticate against the application and to configure SAML.

D. Azure AD Identity Protection:

Purpose: Identity Protection detects and responds to risky sign-in behaviors based on machine learning and other analytics. It does not provide location-based MFA enforcement.

Suitability: It can be used for risk-based MFA policies but is not used to enforce MFA based on location directly.

E. Conditional Access policies:

Purpose: Conditional Access policies allow you to control access to cloud apps based on conditions such as location, device, and user risk.

Suitability: This is essential for implementing location-based MFA enforcement. This will require a sign-in policy that specifies the use of MFA when the location is unknown.

Evaluation

SAML SSO: Azure AD enterprise applications are necessary to configure the application with SAML SSO. This will allow the application to trust the authentication process from Azure AD.

Location-based MFA: Conditional Access policies are required to enforce MFA when a user tries to access the application from an unfamiliar location.

Conclusion

The two required features are:

Azure AD enterprise applications: This allows for SAML-based authentication for the application.

Conditional Access policies: To enforce MFA based on sign-in location.

Answer:

C. Azure AD enterprise applications
E. Conditional Access policies

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

Your on-premises network contains an Active Directory Domain Services (AD DS) domain. The domain contains a server named Server1. Server1 contains an app named App1 that uses AD DS authentication. Remote users access App1 by using a VPN connection to the on-premises network.

You have an Azure AD tenant that syncs with the AD DS domain by using Azure AD Connect.

You need to ensure that the remote users can access App1 without using a VPN. The solution must meet the following requirements:

  • Ensure that the users authenticate by using Azure Multi-Factor Authentication (MFA).
  • Minimize administrative effort.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
— —
Answer Area
In Azure AD:
A managed identity
An access package
An app registration
An enterprise application
On-premises:
A server that runs Windows Server and has the Azure AD Application Proxy connector installed
A server that runs Windows Server and has the on-premises data gateway (standard mode) installed
A server that runs Windows Server and has the Web Application Proxy role service installed
— —

A

Understanding the Situation

On-premises App: App1 is an on-premises application that uses AD DS for authentication.

Current Access: Remote users connect via VPN to access App1.

Goal: Enable remote users to access App1 without a VPN.

Requirements:

Use Azure MFA for authentication.

Minimize administrative effort.

Analyzing Azure AD Components

A managed identity: Managed identities are used for authenticating Azure resources to other Azure services. They cannot be used to publish or authenticate on-premises applications, so this is not a suitable option.

An access package: Access packages are used to govern access to resources within an organization. This is a good tool to provide user access, but not an appropriate mechanism to publish on-premises apps to the internet.

An app registration: An app registration is required to enable an application to authenticate with Azure AD. It is a prerequisite for a number of solutions, but it is not the correct answer here.

An enterprise application: An enterprise application represents the application within Azure AD and acts as the point of contact when integrating with an external service. It is required for Azure AD Application Proxy. This is the correct solution.

Analyzing On-Premises Components

A server that runs Windows Server and has the Azure AD Application Proxy connector installed: This is the correct on-premises component. The Azure AD Application Proxy connector acts as a reverse proxy, securely publishing your on-premises applications to the internet, enabling external users to access them without needing a VPN. This component facilitates the secure connection with the enterprise application, allowing for SSO.

A server that runs Windows Server and has the on-premises data gateway (standard mode) installed: The on-premises data gateway is used for connecting on-premises data sources to Azure services. It is not used for publishing applications to the internet, therefore it is not the right solution.

A server that runs Windows Server and has the Web Application Proxy role service installed: Web Application Proxy is the older, AD FS-based publishing solution and requires more infrastructure to maintain. Azure AD Application Proxy is the more appropriate choice here.

Evaluation

VPN Removal: Azure AD Application Proxy allows remote users to access on-premises applications without the need for a VPN.

Azure MFA: The Azure AD Application Proxy integrates seamlessly with Azure AD’s authentication services, including MFA.

AD Authentication: Using Kerberos constrained delegation, the Azure AD Application Proxy connector authenticates to the on-premises application on the user’s behalf, so App1’s AD DS authentication continues to work.

Minimizing Admin Effort: This solution uses managed services, reducing the overall administrative overhead compared to managing complex VPN connections.

Conclusion

The correct components for this solution are:

In Azure AD: An enterprise application.

On-premises: A server that runs Windows Server and has the Azure AD Application Proxy connector installed.

Answer:

In Azure AD: An enterprise application

On-premises: A server that runs Windows Server and has the Azure AD Application Proxy connector installed

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
53
Q

You need to implement the Azure RBAC role assignments for the Network Contributor role. The solution must meet the authentication and authorization requirements.
What is the minimum number of assignments that you must use?

A. 1
B. 2
C. 5
D. 10
E. 15

A

Understanding the Requirements

Azure RBAC: You need to use Azure Role-Based Access Control (RBAC).

Network Contributor Role: You specifically need to assign the built-in Network Contributor role.

Goal: You need to determine the minimum number of role assignments required.

Key Concepts

Role Definition: A role (like Network Contributor) defines the set of permissions.

Role Assignment: A role assignment links a role definition to a specific user, group, or service principal at a specific scope (like a resource group, subscription, or management group).

Minimum Number of Assignments

The key to answering this question is that a single role assignment can grant access to multiple users if the assignment is done to a security group. Therefore:

One Assignment: You can create a security group in Azure AD, assign the Network Contributor role to that group at the appropriate scope (e.g., a specific resource group or the subscription), and then add all users who need that level of access to this security group.

Therefore, you can implement the requirements with only 1 role assignment.

Why not more?

You could assign the Network Contributor role individually to multiple users. However, that would not be the minimum.

Creating many individual role assignments is generally considered poor practice compared to using groups because it makes management complex.

You could create multiple groups, but there is no requirement for this.

Conclusion

The minimum number of role assignments required to provide the Network Contributor role to multiple users is 1, using a security group.

Answer:

A. 1
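
A minimal sketch of that single assignment, assuming azure-mgmt-authorization v1 or later; the group object ID is a placeholder, and the GUID in the role definition path is the built-in Network Contributor role definition ID:

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"  # placeholder
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

scope = f"/subscriptions/{subscription_id}"
# 4d97b98b-... is the built-in Network Contributor role definition ID.
role_definition_id = (
    f"{scope}/providers/Microsoft.Authorization/roleDefinitions/"
    "4d97b98b-1d4f-4787-a291-c67834d212e7"
)

# One assignment: the role is granted to a security group, so adding or
# removing users from the group requires no further role assignments.
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<security-group-object-id>",
        principal_type="Group",
    ),
)
```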

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
53
Q

You have an Azure subscription that contains an Azure Kubernetes Service (AKS) instance named AKS1. AKS1 hosts microservice-based APIs that are configured to listen on non-default HTTP ports.

You plan to deploy a Standard tier Azure API Management instance named APIM1 that will make the APIs available to external users.

You need to ensure that the AKS1 APIs are accessible to APIM1. The solution must meet the following requirements:

  • Implement MTLS authentication between APIM1 and AKS1.
  • Minimize development effort.
  • Minimize costs.

What should you do?

A. Implement an external load balancer on AKS1.
B. Redeploy APIM1 to the virtual network that contains AKS1.
C. Implement an ExternalName service on AKS1.
D. Deploy an ingress controller to AKS1.

A

Understanding the Situation

AKS1: An AKS cluster with microservice APIs on non-default HTTP ports.

APIM1: A Standard tier API Management instance that needs to expose the AKS1 APIs to external users.

Security: Mutual TLS (MTLS) authentication is required between APIM1 and AKS1.

Goals:

Minimize development effort.

Minimize costs.

Analyzing Solution Options

A. Implement an external load balancer on AKS1:

Mechanism: This would expose the services in AKS to the internet via an external IP address, bypassing API Management.

Suitability: While a load balancer is common for exposing services to the internet, it does not directly address the requirement for MTLS authentication with APIM. In addition, exposing the microservices directly to the internet without API management is not the most secure or best practice solution. It also adds unnecessary costs.

B. Redeploy APIM1 to the virtual network that contains AKS1:

Mechanism: This places the API management service inside the same virtual network as the AKS cluster, allowing them to communicate via the private IP.

Suitability: This solves the networking requirement but does not by itself implement MTLS. Virtual network injection also requires the Developer or Premium tier of classic API Management, so the Standard tier APIM1 could not simply be redeployed into the virtual network, and redeploying API Management is a costly, time-consuming operation. This option does not meet the goals of minimal development effort and cost.

C. Implement an ExternalName service on AKS1:

Mechanism: An ExternalName service in Kubernetes maps a DNS alias to an external hostname, allowing you to direct internal traffic to the desired endpoint.

Suitability: This is the correct approach to expose the services in AKS to API Management. When used in conjunction with MTLS authentication in APIM, this will fulfill all the requirements while also minimizing cost and development effort. The service in AKS will also require MTLS configured on the API itself, which is minimal.

D. Deploy an ingress controller to AKS1:

Mechanism: An ingress controller manages external access to services inside the Kubernetes cluster, often using layer 7 (HTTP) rules.

Suitability: An ingress controller can be part of exposing services, but it does not by itself address MTLS or the requirement to expose the APIs to API Management, and it adds components to deploy and manage. This approach will not fulfill the requirements.

Evaluation

MTLS Authentication: Requires server authentication on the AKS side, and client authentication on the APIM side. This configuration can be done without changing the AKS service configuration using an ExternalName service, which will keep configuration and costs minimal.

Minimize Development Effort: Using an ExternalName service requires minimal changes to AKS, and minimal configuration changes on API Management, and therefore the development effort is minimized.

Minimize Costs: Redeploying API Management would incur additional costs, as would implementing a load balancer. An ExternalName service has minimal cost.

Conclusion

The best approach that fulfills all the requirements is to implement an ExternalName service on AKS1, which exposes the service to API Management, and configure the MTLS connection on both the service, and API Management.

Answer:

C. Implement an ExternalName service on AKS1.
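
For illustration, a minimal sketch of creating an ExternalName service with the official Kubernetes Python client; the service name and target hostname are placeholder assumptions:

```python
from kubernetes import client, config

# Assumes kubectl credentials for AKS1 are already available locally.
config.load_kube_config()

# An ExternalName service: a DNS alias inside the cluster that resolves
# to the hostname below (placeholder values).
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="apim-backend"),
    spec=client.V1ServiceSpec(
        type="ExternalName",
        external_name="api.internal.contoso.com",
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```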

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
54
Q

You have an Azure subscription named Sub1 that is linked to an Azure AD tenant named contoso.com.

You plan to implement two ASP.NET Core apps named App1 and App2 that will be deployed to 100 virtual machines in Sub1. Users will sign in to App1 and App2 by using their contoso.com credentials.

App1 requires read permissions to access the calendar of the signed-in user. App2 requires write permissions to access the calendar of the signed-in user.

You need to recommend an authentication and authorization solution for the apps. The solution must meet the following requirements:

  • Use the principle of least privilege.
  • Minimize administrative effort.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Authentication:
Application registration in Azure AD
A system-assigned managed identity
A user-assigned managed identity
Authorization:
Application permissions
Azure role-based access control (Azure RBAC)
Delegated permissions

A

Understanding the Situation

Apps: Two ASP.NET Core apps (App1, App2) deployed to virtual machines.

Authentication: Users will sign in using their contoso.com (Azure AD) credentials.

Authorization:

App1 needs read access to the user’s calendar.

App2 needs write access to the user’s calendar.

Goals:

Principle of least privilege (granting only necessary permissions).

Minimize administrative effort.

Analyzing Authentication Options

Application registration in Azure AD:

Mechanism: Creating an app registration is a prerequisite for an application to authenticate with Azure AD; it provides an identity for the application in the directory.

Suitability: Required in all of the scenarios here, and therefore is not specific enough to be the correct answer.

A system-assigned managed identity:

Mechanism: A system-assigned managed identity is an identity automatically assigned to an Azure resource by Azure. The identity is bound to the lifecycle of the Azure resource. This option will work if the virtual machine and application are using managed identities.

Suitability: Suitable if you configure your VM with managed identities, this will simplify credentials management, and reduce administrative overhead. This is the ideal solution for this part.

A user-assigned managed identity:

Mechanism: A user-assigned managed identity is created as a standalone resource that you can then assign to one or more Azure resources.

Suitability: This is also an option. However, a system-assigned managed identity is simpler in this scenario because its lifecycle matches that of the resource it is assigned to.

Analyzing Authorization Options

Application permissions:

Mechanism: Application permissions grant access to an application to an API for data that is not specific to a user. They do not act on behalf of a user.

Suitability: This is not the correct approach in this case as the application needs to access the user’s calendar, not all calendars in the organization.

Azure role-based access control (Azure RBAC):

Mechanism: RBAC is used to manage access to Azure resources (e.g., VMs, networks, storage) and is not appropriate for managing access to Microsoft Graph data such as a user’s calendar.

Suitability: Not suitable for managing access to the user’s calendar information.

Delegated permissions:

Mechanism: Delegated permissions grant an application access to specific resources on behalf of a signed-in user. These can be set on the application registration or the enterprise application.

Suitability: This is the appropriate authorization type for this situation. App1 will require a delegated permission to “read the user’s calendar” and App2 will require a delegated permission to “write to the user’s calendar”.

Evaluation

Authentication: Using a system-assigned managed identity for authentication on the virtual machines to simplify authentication and reduce administrative overhead. Each virtual machine will have a unique identity.

Authorization: Using delegated permissions will enable the application to act on behalf of the user, allowing access to their calendar. This is ideal for implementing the principle of least privilege.

Conclusion

The ideal combination for the given requirements is:

Authentication: A system-assigned managed identity

Authorization: Delegated permissions

Answer:

Authentication: A system-assigned managed identity

Authorization: Delegated permissions
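
To illustrate the delegated-permission side, here is a minimal sketch of App1 acquiring a token on behalf of a signed-in user with MSAL’s authorization code flow. All IDs, the redirect URI, and the authorization code are placeholders; App2 would differ only by requesting Calendars.ReadWrite.

```python
import msal

# App registration values are placeholders.
app = msal.ConfidentialClientApplication(
    client_id="<app1-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# App1 only ever asks for the delegated Calendars.Read scope, so it holds
# the minimum permission it needs -- the principle of least privilege.
result = app.acquire_token_by_authorization_code(
    code="<auth-code-from-sign-in-redirect>",
    scopes=["Calendars.Read"],
    redirect_uri="https://app1.contoso.com/signin-oidc",
)
print(result.get("access_token"))
```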

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
55
Q

HOTSPOT -
You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.
App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.
You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
The users can connect to App1 without being
prompted for authentication:
An Azure AD app registration
An Azure AD managed identity
Azure AD Application Proxy
The users can access App1 only from
company-owned computers:
A Conditional Access policy
An Azure AD administrative unit
Azure Application Gateway
Azure Blueprints
Azure Policy

A

Understanding the Situation

App1: An Azure web app using Azure AD authentication.

Access: Users need to access App1 from the internet.

Requirements:

Seamless Access: Users should not be prompted for authentication (SSO).

Device Restriction: Only company-owned (Azure AD joined) Windows 10 devices should be allowed to access App1.

Analyzing Options for Seamless Access

An Azure AD app registration:

Mechanism: An app registration is the first step when integrating an application with Azure AD for authentication, so it is a prerequisite step.

Suitability: While necessary for enabling authentication, it does not directly ensure seamless authentication. It does not manage the device configuration.

An Azure AD managed identity:

Mechanism: Managed identities are used by Azure resources to authenticate to other Azure services, not the user accessing the web app directly.

Suitability: Not applicable for the situation.

Azure AD Application Proxy:

Mechanism: Primarily designed to publish on-premises web applications to the internet.

Suitability: Not required for applications already hosted in Azure.

Analyzing Options for Device Restriction

A Conditional Access policy:

Mechanism: Conditional Access policies allow you to define access rules based on various conditions including device state.

Suitability: This is the correct tool to enforce device-based access restrictions. It enables access from Azure AD Joined devices.

An Azure AD administrative unit:

Mechanism: Administrative units are used to scope permissions within an Azure AD tenant.

Suitability: Not applicable to device-based restrictions.

Azure Application Gateway:

Mechanism: Provides load balancing and web application firewall features.

Suitability: Not designed for controlling access based on device state.

Azure Blueprints:

Mechanism: Blueprints are used to deploy and update collections of Azure resources.

Suitability: Not designed to provide device based access restrictions.

Azure Policy:

Mechanism: Used to enforce organizational standards and assess compliance.

Suitability: Not designed to control access based on the device.

Evaluation

Seamless Authentication: When a user on an Azure AD joined computer connects to a resource that uses Azure AD for authentication, they are automatically authenticated with their current Windows sign-in credentials. This provides single sign-on with no additional work. An app registration is a prerequisite, but not the answer by itself.

Device Restriction: Azure AD Conditional Access policies are specifically designed for this type of scenario, they can be set up to restrict application access to specific device types or device states such as only Azure AD joined devices.

Conclusion

The correct components for this solution are:

The users can connect to App1 without being prompted for authentication: An Azure AD app registration, in conjunction with Azure AD joined devices, will automatically authenticate users.

The users can access App1 only from company-owned computers: A Conditional Access policy.

Answer:

The users can connect to App1 without being prompted for authentication: An Azure AD app registration

The users can access App1 only from company-owned computers: A Conditional Access policy

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
56
Q

HOTSPOT -
You have several Azure App Service web apps that use Azure Key Vault to store data encryption keys.
Several departments have the following requests to support the web app:
Department: Security
Request:
* Review the membership of administrative roles and require users to provide a justification for continued membership.
* Get alerts about changes in administrator assignments.
* See a history of administrator activation, including which changes administrators made to Azure resources.
Department: Development
Request:
* Enable the applications to access Key Vault and retrieve keys for use in code.
Department: Quality Assurance
Request:
* Receive temporary administrator access to create and configure additional web apps in the test environment.
Which service should you recommend for each department’s request? To answer, configure the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Security:
Development:
Quality Assurance:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection

A

Understanding the Departments and Their Requirements

Security:

Needs to review administrator role memberships.

Wants alerts on administrator changes.

Needs a history of administrator actions.

Development:

Needs to enable applications to access Key Vault to retrieve keys.

Quality Assurance (QA):

Needs temporary admin access to create and configure resources.

Analyzing Azure Services

Azure AD Privileged Identity Management (PIM):

Purpose: Manages, controls, and monitors access to important resources. Allows for just-in-time (JIT) access to privileged roles, can provide alerts for role changes, requires justification for continued roles, and has an audit history.

Suitability: Matches the Security and Quality Assurance requirements.

Azure Managed Identity:

Purpose: Provides an identity for Azure services to use when authenticating to other Azure services. It is used by applications to securely retrieve keys from Key Vault.

Suitability: Matches the Development requirement.

Azure AD Connect:

Purpose: Synchronizes on-premises identities with Azure AD.

Suitability: Not directly related to any of these requests.

Azure AD Identity Protection:

Purpose: Detects and responds to risky sign-in behaviors.

Suitability: Not directly related to any of these requests.

Matching Departments to Services

Security: The requirement for role membership review, alerts on administrator changes, and history of admin actions all point to the use of Azure AD Privileged Identity Management (PIM).

Development: The need for the application to securely retrieve keys from Key Vault is best met with Azure Managed Identity, removing the requirement to store secrets or connection strings in the application.

Quality Assurance: The requirement for temporary access to create and configure resources is best met using Azure AD Privileged Identity Management (PIM), as it will provide just-in-time access for the required purpose.

Conclusion

The correct service recommendations are:

Security: Azure AD Privileged Identity Management

Development: Azure Managed Identity

Quality Assurance: Azure AD Privileged Identity Management

Answer:

Security: Azure AD Privileged Identity Management

Development: Azure Managed Identity

Quality Assurance: Azure AD Privileged Identity Management
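
For the Development request, here is a minimal sketch, assuming the azure-identity and azure-keyvault-keys Python packages, of a web app retrieving a key with its managed identity; the vault URL and key name are placeholders:

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.keys import KeyClient

# The web app's managed identity authenticates to Key Vault directly;
# no connection string or secret lives in app settings.
credential = ManagedIdentityCredential()
key_client = KeyClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

key = key_client.get_key("data-encryption-key")  # hypothetical key name
print(key.id, key.key_type)
```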

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
57
Q

HOTSPOT -
You are designing a software as a service (SaaS) application that will enable Azure Active Directory (Azure AD) users to create and publish online surveys. The SaaS application will have a front-end web app and a back-end web API. The web app will rely on the web API to handle updates to customer surveys.
You need to design an authorization flow for the SaaS application. The solution must meet the following requirements:
✑ To access the back-end web API, the web app must authenticate by using OAuth 2 bearer tokens.
✑ The web app must authenticate by using the identities of individual users.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
The access tokens will be generated by:
Azure AD
A web app
A web API
Authorization decisions will be performed by:
Azure AD
A web app
A web API

A

Requirements:

SaaS Application: A SaaS application with a web app and a web API.

OAuth 2.0 Bearer Tokens: Web app must use OAuth 2.0 bearer tokens to access the web API.

User Authentication: Web app must authenticate on behalf of individual users.

Authorization Decisions: Authorization must be performed based on the user’s identity.

Answer Area:

The access tokens will be generated by:

Azure AD

Authorization decisions will be performed by:

A web API

Explanation:

The access tokens will be generated by:

Azure AD:

Why it’s correct: In an OAuth 2.0 flow, the authorization server (in this case Azure AD) issues access tokens to clients (the web app) after successful authentication. The access token is then used to access the resource API. This is the standard flow for Azure AD authentication.

Why not others:

A web app: The web app is the client, it cannot issue the tokens, instead, it requests them from Azure AD.

A web API: A web API is the resource to be protected by the tokens. It does not issue tokens for clients to call other APIs.

Authorization decisions will be performed by:

A web API:

Why it’s correct: The resource (the web API) is responsible for validating the access token and making authorization decisions. It will check if the client (web app) has the correct permissions to access specific data or operations based on claims in the token.

Why not others:

Azure AD: Azure AD is responsible for authentication and issuing tokens; it does not evaluate what the caller is allowed to do inside the API.

A web app: The web app consumes the API; it does not make the authorization decisions for the API.
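
As a sketch of that validation step in the web API, assuming the PyJWT package; the tenant ID, audience, and scope name are placeholders:

```python
import jwt  # PyJWT

# Placeholders for the tenant and the web API's application ID URI.
TENANT_ID = "<tenant-id>"
AUDIENCE = "api://<web-api-client-id>"
JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"

def authorize(bearer_token: str) -> dict:
    """Validate an Azure AD-issued access token and return its claims."""
    # Fetch the tenant's signing keys and pick the one used for this token.
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(bearer_token)
    claims = jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=f"https://login.microsoftonline.com/{TENANT_ID}/v2.0",
    )
    # The API makes its own authorization decision from the claims,
    # e.g. by checking the delegated scopes granted to the web app.
    if "Surveys.ReadWrite" not in claims.get("scp", "").split():  # hypothetical scope
        raise PermissionError("caller lacks the required scope")
    return claims
```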

Important Notes for the AZ-304 Exam:

OAuth 2.0: Be very familiar with the OAuth 2.0 authorization flow and the roles of the client application, the authorization server, and the resource server (the API).

Azure AD as Authorization Server: Understand that Azure AD is used as an authentication and authorization server in Azure-based applications.

Access Tokens: Understand what access tokens are, how they are used, and that they are generated by the auth server.

Authorization Decisions: Know that APIs are responsible for authorizing access based on the identity that is included in the token.

Security Best Practices: Secure your applications, and do not embed secrets.

Exam Focus: Always look for the components that match the specific authorization workflow, and know the specific purpose of each.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
✑ If the manager does not verify an access permission, automatically revoke that permission.
✑ Minimize development effort.
What should you recommend?

A. In Azure Active Directory (Azure AD), create an access review of Application1.
B. Create an Azure Automation runbook that runs the Get-AzRoleAssignment cmdlet.
C. In Azure Active Directory (Azure AD) Privileged Identity Management, create a custom role assignment for the Application1 resources.
D. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet.

A

Understanding the Situation

Application1: A custom application in Azure with RBAC permissions assigned to Fabrikam developers.

Goal: Regularly verify whether Fabrikam developers still need access to Application1.

Requirements:

Monthly email to the developers’ manager listing access permissions.

Automatic revocation if access is not verified by the manager.

Minimize development effort.

Analyzing the Options

A. In Azure Active Directory (Azure AD), create an access review of Application1.

Mechanism: Access reviews are a feature in Azure AD that allow you to regularly review user access to resources. You can configure them to send out review requests to users or managers. These reviews can also be set up to automatically revoke access if not confirmed.

Suitability: This solution directly addresses the requirements, is easy to set up, and minimizes development effort. This is the best solution.

B. Create an Azure Automation runbook that runs the Get-AzRoleAssignment cmdlet.

Mechanism: Get-AzRoleAssignment is a PowerShell cmdlet that retrieves role assignments. You could run it in an Automation runbook to enumerate the assignments, but it provides no built-in way to notify the manager or revoke access, so that logic would have to be developed, incurring higher development overhead.

Suitability: Requires a lot of custom development to achieve the requirements.

C. In Azure Active Directory (Azure AD) Privileged Identity Management, create a custom role assignment for the Application1 resources.

Mechanism: Privileged Identity Management (PIM) is designed to control just-in-time access to highly privileged roles; it is not intended to run periodic reviews of standard RBAC assignments on application resources.

Suitability: Not appropriate for this scenario; it is not designed for this purpose.

D. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet.

Mechanism: Get-AzureADUserAppRoleAssignment is a PowerShell cmdlet that retrieves application role assignments for users. It could be used in an Automation runbook to list users and their roles, but it does not support access reviews or manager notification, so that logic would have to be developed.

Suitability: This requires significant additional development to notify the manager and to revoke access, and therefore does not minimize development effort.

Evaluation

Access Review with Manager Approval: Azure AD Access Reviews allows for the implementation of manager access reviews, automatic reminders to the manager, and automatic revocation upon review timeout.

Automation: Access reviews provide automatic reminders and automatic revocation, minimizing development overhead.

Least Privilege: Access reviews and automatic revocation will ensure users do not have access for longer than required, enforcing the principle of least privilege.

Conclusion

Creating an access review of Application1 in Azure AD is the most suitable solution. It directly meets the access review, notification, automatic revocation, and minimal development effort requirements.

Answer:

A. In Azure Active Directory (Azure AD), create an access review of Application1.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

Your company has the infrastructure shown in the following table.
Location: Azure
Resources:
* Azure subscription named Subscription1
* 20 Azure web apps

Location: On-premises datacenter
Resources:
* Active Directory domain
* Server running Azure AD Connect
* Linux computer named Server1

The on-premises Active Directory domain syncs with Azure Active Directory (Azure AD).
Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
You plan to migrate Server1 to a virtual machine in Subscription1.
A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.
You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.
What should you include in the recommendation?

A. Azure AD Application Proxy
B. the Active Directory Domain Services role on a virtual machine
C. an Azure VPN gateway
D. Azure AD Domain Services (Azure AD DS)

A

Understanding the Situation

App1: An application running on-premises, that uses LDAP queries to authenticate with on-premises Active Directory Domain Services (AD DS).

Migration: Server1 (and thus App1) is being moved to an Azure VM in Subscription1.

Security Policy: Azure resources in Subscription1 must not access the on-premises network.

Goal: App1 must continue to function (i.e., authenticate users) after the migration without violating the security policy.

Analyzing the Options

A. Azure AD Application Proxy:

Mechanism: Azure AD Application Proxy is designed to publish on-premises web applications to the internet so users can access them remotely without using a VPN. It is also useful for providing SSO for web applications.

Suitability: Application Proxy is not suited for the situation, as the goal is not to publish a web application.

B. the Active Directory Domain Services role on a virtual machine:

Mechanism: Deploying a Windows Server VM with the Active Directory Domain Services (AD DS) role lets you run a domain controller in Azure.

Suitability: Extending the existing domain into Azure would require network connectivity to the on-premises domain controllers, which the security policy prohibits, and a new isolated forest would not contain the existing user accounts. This option does not solve the problem.

C. an Azure VPN gateway:

Mechanism: A VPN gateway connects your on-premises network to Azure, creating a secure bridge between them.

Suitability: This option would violate the security policy by connecting the Azure network and on-premises network.

D. Azure AD Domain Services (Azure AD DS):

Mechanism: Azure AD DS provides a managed domain service in the cloud. It is completely separate from the on-premises Active Directory, yet offers the same authentication methods, including LDAP. Because the managed domain is populated from Azure AD, which syncs with the on-premises domain, the same user identities are available.

Suitability: This is the correct solution, as it can be used to provide the same authentication services within Azure, without connecting to the on-premise Active Directory.

Evaluation

Security Policy: The security policy prohibits any network access to the on-premise environment, therefore VPN should not be used.

App1 Functionality: App1 requires LDAP to query the user database for authentication purposes. Azure AD DS can provide this functionality.

Minimal Modification: Azure AD DS provides the same core functionality as Active Directory without requiring you to deploy and manage domain controllers.

Conclusion

The recommended solution is to use Azure AD Domain Services (Azure AD DS). It provides the required AD DS functionality in Azure, which is used for LDAP queries. This ensures that App1 can function in Azure and remains compliant with the security policy.

Answer:

D. Azure AD Domain Services (Azure AD DS)
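
For illustration, a minimal sketch of App1’s LDAP query pointed at the managed domain, using the ldap3 Python package; the hostname, service account, and search values are placeholders:

```python
from ldap3 import Server, Connection, ALL

# Placeholders: the managed domain's secure LDAP endpoint and a service
# account available in the managed domain.
server = Server("aadds.contoso.com", port=636, use_ssl=True, get_info=ALL)
conn = Connection(
    server,
    user="CONTOSO\\svc-app1",
    password="<service-account-password>",
    auto_bind=True,
)

# The same kind of LDAP query App1 ran on-premises, now answered by Azure AD DS.
conn.search(
    search_base="dc=contoso,dc=com",
    search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
    attributes=["displayName", "memberOf"],
)
print(conn.entries)
```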

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
60
Q

You are designing an app that will be hosted on Azure virtual machines that run Ubuntu. The app will use a third-party email service to send email messages to users. The third-party email service requires that the app authenticate by using an API key.

You need to recommend an Azure Key Vault solution for storing and accessing the API key. The solution must minimize administrative effort.

What should you recommend using to store and access the key? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
— —
Answer Area
Storage:
Certificate
Key
Secret
Access:
An API token
A managed service identity (MSI)
A service principal
— —

A

Understanding the Situation

App: An application on Ubuntu VMs needs to use an API key for a third-party email service.

Security: The API key needs to be stored securely.

Goal: Minimize administrative effort for key storage and access.

Analyzing Key Vault Storage Options

Certificate:

Purpose: Stores digital certificates used for encryption and authentication. Certificates are typically used for encryption or TLS/SSL.

Suitability: Not the correct data type for storing an API key (a string).

Key:

Purpose: Stores cryptographic keys used for encryption and signing.

Suitability: Not the correct data type for storing an API key (a string).

Secret:

Purpose: Stores arbitrary strings securely. It is the correct data type for storing the API key.

Suitability: The most appropriate Key Vault item type for storing the API key.

Analyzing Key Vault Access Options

An API token:

Purpose: An API token is a generic authentication mechanism; Key Vault does not authenticate callers with arbitrary API tokens.

Suitability: Does not address the need to authenticate securely to Key Vault, and a token is itself another secret that must be managed.

A managed service identity (MSI):

Purpose: Managed identities provide an automatically managed identity in Azure AD. This eliminates the need for you to store credentials in code or configuration files.

Suitability: Ideal for this scenario, because it provides a secure way for Azure resources to access other Azure resources without managing credentials, and therefore minimizes the administrative overhead.

A service principal:

Purpose: A service principal is an application identity within Azure AD that can be granted access to Azure resources, but it comes with a credential that must be managed, so it is not the optimal solution.

Suitability: Could be used, but involves more administrative overhead than a managed identity because a client ID and secret must be managed.

Evaluation

API Key Storage: Using a Key Vault secret will store the key string securely.

Access: Using a managed service identity (MSI) will allow the application running on the virtual machine to securely authenticate to Key Vault without storing credentials within the virtual machine configuration, and will minimize administrative effort.

Conclusion

The correct options are:

Storage: Secret

Access: A managed service identity (MSI)

Answer:

Storage: Secret

Access: A managed service identity (MSI)
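
A minimal sketch of the resulting access pattern, assuming the azure-identity and azure-keyvault-secrets Python packages; the vault URL and secret name are placeholders:

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# The Ubuntu VM's managed identity signs the request; no credentials are
# stored on the VM itself.
credential = ManagedIdentityCredential()
client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

# Retrieve the third-party email service's API key (hypothetical name).
api_key = client.get_secret("email-service-api-key").value
```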

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.
You have an internal web app named WebApp1 that is hosted on-premises. WebApp1 uses Integrated Windows authentication.
Some users work remotely and do NOT have VPN access to the on-premises network.
You need to provide the remote users with single sign-on (SSO) access to WebApp1.
Which two features should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Azure AD Application Proxy
B. Azure AD Privileged Identity Management (PIM)
C. Conditional Access policies
D. Azure Arc
E. Azure AD enterprise applications
F. Azure Application Gateway

A

Understanding the Situation

WebApp1: An on-premises web application using Integrated Windows Authentication (IWA).

Remote Users: Remote users do not have VPN access to the on-premises network.

SSO Requirement: Remote users need single sign-on (SSO) access to WebApp1, which uses IWA.

Hybrid Environment: Azure AD syncs with the on-premises Active Directory domain.

Analyzing the Options

A. Azure AD Application Proxy:

Mechanism: Azure AD Application Proxy is designed to securely publish on-premises web applications to the internet, enabling remote users to access them without needing a VPN. It supports Integrated Windows Authentication with Kerberos constrained delegation for SSO.

Suitability: This is a key component of the solution because it provides the reverse proxy functionality for users outside the network.

B. Azure AD Privileged Identity Management (PIM):

Mechanism: PIM is used to manage, control, and monitor access to important resources in your organization, focusing on just-in-time access for privileged roles.

Suitability: Not related to the requirement of enabling remote access to a web app.

C. Conditional Access policies:

Mechanism: Conditional Access policies control access to cloud apps based on conditions such as location, device, and user risk. These are good for enhancing security and ensuring compliance, but do not provide the main access point to the web app.

Suitability: This can enhance the solution, however it is not a key component for enabling SSO and access.

D. Azure Arc:

Mechanism: Azure Arc allows you to manage and govern your infrastructure across on-premises, multicloud, and edge environments.

Suitability: Not directly involved in enabling remote access to the on-premise web app.

E. Azure AD enterprise applications:

Mechanism: Enterprise applications in Azure AD represent the applications that your organization uses, enabling you to set up SSO for these applications.

Suitability: You need an Enterprise application configured in Azure AD to use Application Proxy. This represents the app for authentication, and will use the Application Proxy to connect to the web app. This is the second key component of the solution.

F. Azure Application Gateway:

Mechanism: Application Gateway is a web traffic load balancer with a web application firewall.

Suitability: Not involved in the direct access to an on-premise application.

Evaluation

Remote Access: Azure AD Application Proxy provides a secure mechanism for remote users to connect to on-premises applications without a VPN.

SSO with IWA: Azure AD Application Proxy can be configured to use Kerberos constrained delegation to enable SSO for applications that use Integrated Windows Authentication.

Azure AD Authentication: You also need an enterprise application that users authenticate against in Azure AD; Application Proxy uses the enterprise application’s configuration to publish the app.

Conclusion

The two required features are:

Azure AD Application Proxy: This is required to publish the on-premises app securely to remote users.

Azure AD enterprise applications: This is required to represent the app in Azure AD and enable SSO with the application proxy.

Answer:

A. Azure AD Application Proxy
E. Azure AD enterprise applications

62
Q

You have an Azure subscription that contains an Azure key vault named KV1 and a virtual machine named VM1. VM1 runs Windows Server 2022: Azure Edition.

You plan to deploy an ASP.NET Core-based application named App1 to VM1.

You need to configure App1 to use a system-assigned managed identity to retrieve secrets from KV1. The solution must minimize development effort.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Configure App 1 to use OAuth 2.0:
Authorization code grant flows
Client credentials grant flows
Implicit grant flows
Configure App 1 to use a REST API call
to retrieve an authentication token from the:
Azure Instance Metadata Service (IMDS) endpoint
OAuth 2.0 access token endpoint of Azure AD
OAuth 2.0 access token endpoint of Microsoft Identity Platform

A

Key Requirements:

App1 is an ASP.NET Core application running on VM1.

App1 must use a system-assigned managed identity to authenticate to KV1.

Minimize development effort.

Understanding System-Assigned Managed Identities:

A system-assigned managed identity is an identity automatically created and managed by Azure for a resource, such as a VM. The application can then use this identity to authenticate with other Azure resources without having to manage credentials directly.

Analyzing the Options:

Configure App 1 to use OAuth 2.0:

Correct Answer: Client credentials grant flows

Explanation: The client credentials grant flow is used when the application is authenticating by itself, without a user context. In this case, the app is authenticating with the managed identity itself, and not in a user’s context, which means that client credentials grant flow is the appropriate method.

Why other options are not correct:

Authorization code grant flows: This flow is used for authenticating users, and it requires user intervention. It is not ideal for automated applications that need to perform a process without the presence of an interactive user.

Implicit grant flows: This flow is used for browser-based applications. It lacks the security of the client credentials grant flow and is not designed for server-to-server requests.

Configure App 1 to use a REST API call to retrieve an authentication token from the:

Correct Answer: Azure Instance Metadata Service (IMDS) endpoint

Explanation: The Azure Instance Metadata Service (IMDS) is a REST API endpoint that is available on Azure virtual machines. This service allows the virtual machine to access the data about the VM itself, including the managed identity access token. By making a call to the IMDS endpoint, the application can receive the token needed to authenticate with Key Vault, without requiring any extra libraries or complicated authentication code.

Why other options are not correct:

OAuth 2.0 access token endpoint of Azure AD: This endpoint is used when a client presents its own credentials to get a token for another service (such as the Graph API). When using a managed identity on a VM, the app does not call the Azure AD endpoint directly; the local IMDS endpoint brokers the token request and returns a token for any Azure resource the identity is allowed to access.

OAuth 2.0 access token endpoint of Microsoft Identity Platform: Calling the Microsoft identity platform token endpoint directly would require App1 to register and present its own credentials (a secret or certificate), which defeats the purpose of a managed identity and adds development effort. It also does not use the local Instance Metadata Service.

Therefore, the correct answers are:

Configure App 1 to use OAuth 2.0: Client credentials grant flows

Configure App 1 to use a REST API call to retrieve an authentication token from the: Azure Instance Metadata Service (IMDS) endpoint
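To make the flow concrete, here is a minimal sketch of the IMDS token request from inside VM1, assuming Python with the requests library; the vault URL https://kv1.vault.azure.net and the secret name ExampleSecret are assumptions for illustration.

```python
import requests

# Ask the local IMDS endpoint for a Key Vault token. No credentials appear in
# code; Azure resolves the VM's system-assigned managed identity implicitly.
resp = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={"api-version": "2018-02-01", "resource": "https://vault.azure.net"},
    headers={"Metadata": "true"},  # required header; rejects forwarded requests
    timeout=5,
)
resp.raise_for_status()
token = resp.json()["access_token"]

# Use the token against the Key Vault REST API (vault URL and secret name are
# placeholders, not from the question).
secret = requests.get(
    "https://kv1.vault.azure.net/secrets/ExampleSecret",
    params={"api-version": "7.4"},
    headers={"Authorization": f"Bearer {token}"},
    timeout=5,
)
print(secret.json().get("value"))
```

In a real ASP.NET Core app, the Azure.Identity library's ManagedIdentityCredential performs this IMDS call internally, which is what keeps the development effort minimal.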

63
Q

You have an Azure subscription.

You plan to deploy five storage accounts that will store block blobs and five storage accounts that will host file shares. The file shares will be accessed by using the SMB protocol.

You need to recommend an access authorization solution for the storage accounts. The solution must meet the following requirements:

  • Maximize security.
  • Prevent the use of shared keys.
  • Whenever possible, support time-limited access.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
For the blobs:
A user delegation shared access signature (SAS) only
A shared access signature (SAS) and a stored access policy
A user delegation shared access signature (SAS) and a stored access policy
For the file shares:
Azure AD credentials
A user delegation shared access signature (SAS) only
A user delegation shared access signature (SAS) and a stored access policy

A

Understanding the Situation

Storage:

Five storage accounts for block blobs.

Five storage accounts for file shares (SMB access).

Access Requirements:

Maximize security.

Prevent the use of shared keys.

Support time-limited access whenever possible.

Analyzing Authorization Options for Block Blobs

A user delegation shared access signature (SAS) only:

Mechanism: A user delegation SAS is signed with a user delegation key obtained by using Azure AD credentials rather than the storage account key. It can be time-limited and can grant specific permissions, and it does not rely on shared keys.

Suitability: This satisfies the time limitation and avoids the use of shared keys.

A shared access signature (SAS) and a stored access policy:

Mechanism: A service SAS (and its stored access policy) is signed with the storage account key, so it relies on shared keys and does not provide the same level of security as a user delegation SAS.

Suitability: Not suitable due to the use of shared keys.

A user delegation shared access signature (SAS) and a stored access policy:

Mechanism: Stored access policies are supported only for a service SAS; they cannot be combined with a user delegation SAS. The user delegation SAS itself carries the time limit.

Suitability: Because a stored access policy cannot be used with a user delegation SAS, this option is invalid and would only add complexity.

Analyzing Authorization Options for File Shares

Azure AD credentials:

Mechanism: Azure file shares support identity-based authentication over SMB, meaning that users can access file shares by using their Azure AD credentials. This enables central identity management, avoids the use of shared keys, and is the strongest security option.

Suitability: This is the most secure and manageable option, as it does not involve the generation of keys. This is the optimal solution, due to central identity management and the elimination of shared keys.

A user delegation shared access signature (SAS) only:

Mechanism: A user delegation SAS allows access via delegated permissions, without requiring a shared key.

Suitability: A user delegation SAS is supported only for the Blob service, not for Azure Files, so it cannot be used for the SMB file shares.

A user delegation shared access signature (SAS) and a stored access policy:

Mechanism: As with the blobs, stored access policies cannot be combined with a user delegation SAS, and a user delegation SAS applies only to the Blob service in any case.

Suitability: Not applicable to SMB file shares.

Evaluation

Security: User delegation SAS and Azure AD credentials provide the best security by avoiding shared keys.

Time-Limited Access: Both User delegation SAS and Azure AD tokens can be time-limited.

File Shares: Azure AD credentials are the most secure and optimal solution for accessing file shares via SMB.

Conclusion

The correct solutions are:

For the blobs: A user delegation shared access signature (SAS) only

For the file shares: Azure AD credentials

Answer:

For the blobs: A user delegation shared access signature (SAS) only

For the file shares: Azure AD credentials
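As an illustration of the blob answer, the sketch below issues a short-lived user delegation SAS with the azure-storage-blob Python SDK; the account, container, and blob names are placeholders.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

service = BlobServiceClient(
    "https://examplestorage.blob.core.windows.net",
    credential=DefaultAzureCredential(),  # Azure AD sign-in; no account key used
)

start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)

# The user delegation key is requested with Azure AD credentials and is itself
# time-limited; the SAS it signs cannot outlive it.
udk = service.get_user_delegation_key(key_start_time=start, key_expiry_time=expiry)

sas = generate_blob_sas(
    account_name="examplestorage",
    container_name="data",
    blob_name="report.csv",
    user_delegation_key=udk,
    permission=BlobSasPermissions(read=True),
    expiry=expiry,
)
print(f"https://examplestorage.blob.core.windows.net/data/report.csv?{sas}")
```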

64
Q

HOTSPOT -
You have an Azure subscription that contains a virtual network named VNET1 and 10 virtual machines. The virtual machines are connected to VNET1.
You need to design a solution to manage the virtual machines from the internet. The solution must meet the following requirements:
✑ Incoming connections to the virtual machines must be authenticated by using Azure Multi-Factor Authentication (MFA) before network connectivity is allowed.
✑ Incoming connections must use TLS and connect to TCP port 443.
✑ The solution must support RDP and SSH.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
To provide access to virtual machines on VNET1, use:
Azure Bastion
Just-in-time (JIT) VM access
Azure Web Application Firewall (WAF) in Azure Front Door
To enforce Azure MFA, use:
An Azure Identity Governance access package
A Conditional Access policy that has the Cloud apps
assignment set to Azure Windows VM Sign-In
A Conditional Access policy that has the Cloud apps
assignment set to Microsoft Azure Management

A

Understanding the Situation

Network: VNET1 with 10 virtual machines.

Access: Access to the virtual machines from the internet is required.

Requirements:

Azure MFA: All connections must use Azure Multi-Factor Authentication.

TLS and Port 443: Connections must use TLS (HTTPS) and connect to TCP port 443.

RDP/SSH Support: The solution must support both RDP and SSH.

Analyzing Access Options

Azure Bastion:

Mechanism: Azure Bastion provides secure RDP/SSH access to virtual machines directly from the Azure portal.

Suitability: Azure Bastion meets the requirement to connect via RDP/SSH, and sessions are delivered over TLS on TCP port 443 through the Azure portal. Because sign-in flows through Azure AD, MFA can be enforced with Conditional Access. This is the best solution.

Just-in-time (JIT) VM access:

Mechanism: JIT access allows you to control when and how ports are opened for RDP/SSH traffic, limiting exposure time and risk.

Suitability: JIT access enhances security but is a supplementary feature rather than the access mechanism itself, and it does not enforce MFA.

Azure Web Application Firewall (WAF) in Azure Front Door:

Mechanism: WAF protects web applications from common vulnerabilities and attacks and is an HTTP/HTTPS-based service.

Suitability: This option is designed for protecting HTTP applications and is not suitable for RDP/SSH traffic. It also plays no role in user authentication.

Analyzing MFA Enforcement Options

An Azure Identity Governance access package:

Mechanism: Access packages are used for managing access to resources, primarily controlling who can access.

Suitability: Access packages govern entitlement (who can request and receive access); they are not a mechanism for enforcing MFA at sign-in.

A Conditional Access policy that has the Cloud apps assignment set to Azure Windows VM Sign-In:

Mechanism: This specific Conditional Access configuration is used to enforce policies when users are accessing virtual machines via Azure AD.

Suitability: This is the correct approach for enforcing MFA specifically for VM access. This will enable MFA on top of the Azure Bastion connection.

A Conditional Access policy that has the Cloud apps assignment set to Microsoft Azure Management:

Mechanism: This configures MFA for access to the Azure management plane, such as the Azure portal. It protects portal access in general but does not specifically target sign-in to the virtual machines.

Suitability: This option is not optimal here. It is important for protecting portal access, but it does not enforce MFA on the VM sign-in itself.

Evaluation

Access: Azure Bastion is the best option to access the VMs using RDP/SSH over TLS on port 443.

Azure MFA: A Conditional Access policy with the Cloud apps set to Azure Windows VM Sign-In will enforce Azure MFA for the virtual machine access.

Conclusion

The correct components of the solution are:

To provide access to virtual machines on VNET1, use: Azure Bastion

To enforce Azure MFA, use: A Conditional Access policy that has the Cloud apps assignment set to Azure Windows VM Sign-In

Answer:

To provide access to virtual machines on VNET1, use: Azure Bastion

To enforce Azure MFA, use: A Conditional Access policy that has the Cloud apps assignment set to Azure Windows VM Sign-In
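For reference, a Conditional Access policy like the one selected above can also be created through Microsoft Graph. The sketch below is illustrative only: GRAPH_TOKEN and VM_SIGNIN_APP_ID are placeholders (the token needs the Policy.ReadWrite.ConditionalAccess permission, and the app ID is that of the Azure Windows VM Sign-In application in your tenant).

```python
import requests

GRAPH_TOKEN = "<token-with-Policy.ReadWrite.ConditionalAccess>"  # placeholder
VM_SIGNIN_APP_ID = "<Azure-Windows-VM-Sign-In-app-id>"           # placeholder

policy = {
    "displayName": "Require MFA for VM sign-in",
    "state": "enabled",
    "conditions": {
        "clientAppTypes": ["all"],
        "applications": {"includeApplications": [VM_SIGNIN_APP_ID]},
        "users": {"includeUsers": ["All"]},
    },
    # Grant access only after MFA succeeds.
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=policy,
    timeout=10,
)
resp.raise_for_status()
```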

65
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned membership. Group1 has 50 members, including 20 guest users.
You need to recommend a solution for evaluating the membership of Group1. The solution must meet the following requirements:
✑ The evaluation must be repeated automatically every three months.
✑ Every member must be able to report whether they need to be in Group1.
✑ Users who report that they do not need to be in Group1 must be removed from Group1 automatically.
✑ Users who do not report whether they need to be in Group1 must be removed from Group1 automatically.
What should you include in the recommendation?

A. Implement Azure AD Identity Protection.
B. Change the Membership type of Group1 to Dynamic User.
C. Create an access review.
D. Implement Azure AD Privileged Identity Management (PIM).

A

Understanding the Situation

Group1: A security group in Azure AD with assigned membership, containing both internal and guest users.

Membership Evaluation: Requires a process for regularly evaluating the membership of Group1.

Requirements:

Automatic review every three months.

Self-attestation: each member should report whether they need to be in the group.

Automatic removal of users who report they don’t need to be in the group.

Automatic removal of users who do not report whether they need to be in the group.

Analyzing the Options

A. Implement Azure AD Identity Protection.

Mechanism: Azure AD Identity Protection focuses on identifying and mitigating risks related to user accounts and sign-in activities.

Suitability: Identity Protection is not designed for managing group membership reviews.

B. Change the Membership type of Group1 to Dynamic User.

Mechanism: Dynamic groups calculate membership automatically from rules based on user attributes (for example, department). They do not support self-attestation.

Suitability: Not suitable, because the requirement calls for users to self-attest to their need for access, and Group1's membership is assigned rather than rule-based.

C. Create an access review.

Mechanism: Access reviews in Azure AD allow you to periodically review user access to resources, including group memberships. Review requests can be sent to the group members themselves, their managers, or another delegated reviewer, and the review can be configured to automatically revoke access for users who report that they no longer need it or who do not respond.

Suitability: This is the correct solution because it is designed to meet the specified needs.

D. Implement Azure AD Privileged Identity Management (PIM).

Mechanism: Azure PIM is designed to manage just-in-time access to privileged roles. It doesn’t address the requirement of regular membership reviews for standard groups.

Suitability: Not suitable for this scenario, it is used for privileged roles and does not perform access reviews for security groups.

Evaluation

Regular Reviews: Azure AD Access Reviews can be configured to run every three months.

Self-Attestation: Access reviews allow users to attest to their need for continued membership.

Automatic Revocation: Access reviews can be configured to automatically remove users who self-attest that they no longer need access, or do not respond to the review request.

Least Privilege: Enforces the principle of least privilege by ensuring that users retain access only while it is required.

Cost Effective: Requires no additional development work and is a low-cost solution.

Conclusion

Creating an access review is the only solution that fulfills all of the specified requirements: automatic recurring reviews, user self-attestation, and automatic removal based on responses.

Answer:

C. Create an access review.
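As a rough sketch of how such a review could be defined programmatically, the payload below targets the Microsoft Graph access reviews API; the group ID, token, and start date are placeholders, and the exact schema should be verified against current Graph documentation.

```python
import requests

GRAPH_TOKEN = "<token-with-AccessReview.ReadWrite.All>"  # placeholder
GROUP_ID = "<object-id-of-Group1>"                       # placeholder

definition = {
    "displayName": "Quarterly review of Group1",
    "scope": {
        "query": f"/groups/{GROUP_ID}/transitiveMembers",
        "queryType": "MicrosoftGraph",
    },
    # No reviewers listed: members review their own access (self-attestation).
    "reviewers": [],
    "settings": {
        "mailNotificationsEnabled": True,
        "instanceDurationInDays": 14,
        "autoApplyDecisionsEnabled": True,  # apply removals automatically
        "defaultDecisionEnabled": True,
        "defaultDecision": "Deny",          # non-responders are removed
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 3},  # every 3 months
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=definition,
    timeout=10,
)
resp.raise_for_status()
```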

66
Q

What should you include in the identity management strategy to support the planned changes?

A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.
C. Deploy a new Azure AD tenant for the authentication of new R&D projects.
D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure.

A

Understanding the Scenario (Implied)

While we don’t have a detailed scenario, the core requirements seem to be related to identity management in a hybrid environment, likely for applications being migrated to Azure. We can infer that:

There’s an existing on-premises Active Directory domain (corp.fabrikam.com).

There’s a need for identity management for applications being deployed to Azure.

There may be a separate forest rd.fabrikam.com.

Analyzing the Options

Let’s evaluate each option based on its relevance and practicality:

Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.

Why not ideal? Moving everything would bring the identity provider into Azure and would let Azure resources connect to the on-premises domain, but it is very costly, and there are better ways to address the identity needs in Azure.

Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.

Why ideal? This is the most appropriate approach if the company still plans to use on-premises Active Directory as its primary identity provider. Deploying additional domain controllers in Azure provides highly available authentication for resources both in Azure and on-premises.

Deploy a new Azure AD tenant for the authentication of new R&D projects.

Why not ideal? Although this is a valid use case, it does not address the authentication requirement for the existing on-premises users.

Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure.

Why not ideal? This would only address authentication for resources in that forest, and does not provide identity for any of the existing users in the corp.fabrikam.com domain.

The Closest Correct Answer

Based on the analysis, the most suitable option is:

Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.

Explanation

Extending Existing AD: Deploying domain controllers for the existing corp.fabrikam.com domain into Azure allows Azure-based resources to leverage the same on-premises directory for authentication and authorization. This is often the quickest path to the cloud when existing users and resources must continue to authenticate against the same directory.

Hybrid Identity: This approach facilitates a hybrid identity setup, allowing for integration between on-premises and cloud resources.

Centralized Identity: It allows users to access Azure resources with their existing credentials.

Why Other Options are Less Ideal

Moving All DCs: While technically feasible, this is far more complex and costly than extending the domain, and you lose the ability to authenticate on-premises if connectivity to Azure is down.

New Azure AD Tenant: Does not address the requirements for the existing corp.fabrikam.com users, and a completely separate directory adds unneeded complexity.

rd.fabrikam.com DCs: Only covers that separate forest and does not address the authentication needs of users in corp.fabrikam.com.

67
Q

You plan to deploy an application named App1 that will run on five Azure virtual machines. Additional virtual machines will be deployed later to run App1.
You need to recommend a solution to meet the following requirements for the virtual machines that will run App1:
✑ Ensure that the virtual machines can authenticate to Azure Active Directory (Azure AD) to gain access to an Azure key vault, Azure Logic Apps instances, and an Azure SQL database.
✑ Avoid assigning new roles and permissions for Azure services when you deploy additional virtual machines.
✑ Avoid storing secrets and certificates on the virtual machines.
✑ Minimize administrative effort for managing identities.
Which type of identity should you include in the recommendation?

A. a system-assigned managed identity
B. a service principal that is configured to use a certificate
C. a service principal that is configured to use a client secret
D. a user-assigned managed identity

A

Understanding the Situation

App1: Application running on Azure virtual machines, with additional VMs planned.

Authentication: VMs need to authenticate to Azure Key Vault, Azure Logic Apps, and Azure SQL Database.

Requirements:

Avoid assigning roles/permissions for new VMs.

Avoid storing secrets/certs on VMs.

Minimize administrative effort.

Analyzing Identity Options

A. a system-assigned managed identity:

Mechanism: A system-assigned managed identity is automatically created and associated with an Azure resource (in this case, the VM). It is bound to the lifecycle of the resource. It eliminates the need for manually managing credentials.

Suitability: This approach works for the scenario, but each new VM receives its own identity, which must be granted roles and permissions individually. That conflicts with the requirement to avoid assigning new roles and permissions when additional virtual machines are deployed.

B. a service principal that is configured to use a certificate:

Mechanism: Service principals represent applications within Azure AD. Using a certificate will avoid the requirement to manage a secret.

Suitability: A shared service principal would require distributing and rotating its certificate on every VM, which violates the requirement to avoid storing secrets and certificates on the virtual machines and does not scale efficiently.

C. a service principal that is configured to use a client secret:

Mechanism: Service principals represent applications within Azure AD. This approach involves managing a client secret.

Suitability: A client secret would have to be stored on each VM and rotated regularly, which violates the requirement to avoid storing secrets on the virtual machines and adds ongoing management effort.

D. a user-assigned managed identity:

Mechanism: A user-assigned managed identity is a standalone resource that you create and manage, and can then be assigned to multiple Azure resources (in this case, the VMs).

Suitability: This is the best option. As you add new Virtual Machines, the same user-assigned identity can be assigned, without having to modify existing role assignments.

Evaluation

Authentication: All of these options can authenticate to the required services; they differ in administrative overhead and the level of effort.

Reusability: Using a user-assigned managed identity allows permissions and access to be shared across multiple virtual machines.

No Secrets on VMs: Managed identities avoid the need to store secrets directly on VMs.

Minimize Effort: User-assigned managed identities can be created once, and then attached to each new VM. When a role is assigned to this identity, all VMs using that identity will automatically gain the required permission.

Conclusion

A user-assigned managed identity is the best option. It allows for shared permissions across multiple virtual machines, avoids storing credentials on the virtual machines, and minimizes administrative effort. As new virtual machines are deployed, the existing user-assigned identity can be assigned and no additional permissions will be required.

Answer:

D. a user-assigned managed identity
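To illustrate why this minimizes effort, the same code runs unchanged on every VM the identity is attached to. A sketch with the azure-identity and azure-keyvault-secrets Python packages follows; the client ID, vault URL, and secret name are placeholders.

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# The client ID of the shared user-assigned identity. Every VM that has this
# identity attached authenticates the same way; no per-VM role assignments.
credential = ManagedIdentityCredential(client_id="<user-assigned-client-id>")

secrets = SecretClient(
    vault_url="https://example-vault.vault.azure.net", credential=credential
)
print(secrets.get_secret("ExampleSecret").value)
```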

68
Q

You are designing an application that will be hosted in Azure.
The application will host video files that range from 50 MB to 12 GB. The application will use certificate-based authentication and will be available to users on the internet.
You need to recommend a storage option for the video files. The solution must provide the fastest read performance and must minimize storage costs.
What should you recommend?

A. Azure Files
B. Azure Data Lake Storage Gen2
C. Azure Blob Storage
D. Azure SQL Database

A

Understanding the Situation

Application: An application hosted in Azure that needs to store video files of varying sizes (50 MB to 12 GB).

Access: Users on the internet will access these files.

Authentication: The application uses certificate-based authentication (but this does not impact the storage selection directly)

Requirements:

Fastest read performance.

Minimize storage costs.

Analyzing Storage Options

A. Azure Files:

Mechanism: Azure Files provides fully managed file shares in the cloud, accessed via the SMB protocol.

Suitability: Azure Files is best suited for applications that access file shares over SMB. Its read performance is less optimal than the blob-based options in this scenario, and it is not designed for the high-throughput reads that internet video delivery requires.

B. Azure Data Lake Storage Gen2:

Mechanism: Azure Data Lake Storage Gen2 is built on top of Azure Blob Storage and designed for large-scale analytics workloads, including big data. This storage solution is the best for large files and high throughput.

Suitability: This storage option will provide the fastest read performance for large files. Data Lake storage Gen2 can also act as standard blob storage, if required. This option will provide the best performance.

C. Azure Blob Storage:

Mechanism: Azure Blob Storage is a service for storing large amounts of unstructured data, such as video files. It is designed for high throughput and scalability.

Suitability: Blob storage handles large files well and provides better performance and lower costs than Azure Files. It is also more cost-effective than Azure Data Lake Storage Gen2.

D. Azure SQL Database:

Mechanism: Azure SQL Database is a managed database service. It is not designed for storing files.

Suitability: Not suitable for storing large video files.

Evaluation

Read Performance: Azure Data Lake Storage Gen2 provides the fastest read performance due to its architecture and its optimization for large-scale analytics and high throughput. Blob Storage also performs well, but Data Lake provides the best throughput.

Cost: Azure Blob Storage is the most cost-effective option; Data Lake Storage includes additional features that make it more expensive. Azure Files is not designed for high-throughput video streaming, and Azure SQL Database is not designed for storing large files.

Conclusion

While Azure Data Lake Storage Gen2 would provide the best performance, Azure Blob Storage is the recommended solution because it meets the performance need while minimizing costs. It is the right balance of performance and cost for storing video files.

Answer:

C. Azure Blob Storage
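For illustration, here is a sketch of uploading a large video to Blob Storage with the Python SDK, assuming placeholder account, container, and file names; the SDK splits large files into blocks automatically.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient(
    "https://examplestorage.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob = service.get_blob_client(container="videos", blob="intro.mp4")

with open("intro.mp4", "rb") as data:
    # max_concurrency parallelizes block uploads, which matters for 12 GB files.
    blob.upload_blob(
        data,
        overwrite=True,
        max_concurrency=4,
        content_settings=ContentSettings(content_type="video/mp4"),
    )
```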

69
Q

You have the Azure subscriptions shown in the following table.
| Name | Location | Azure AD tenant |
|---|---|---|
| Sub1 | East US | contoso.onmicrosoft.com |
| Sub2 | East US | contoso-recovery.onmicrosoft.com |
Contoso.onmicrosoft.com contains a user named User1.
You need to deploy a solution to protect against ransomware attacks. The solution must meet the following requirements:
* Ensure that all the resources in Sub1 are backed up by using Azure Backup.
* Require that User1 first be assigned a role for Sub2 before the user can make major changes to the backup configuration.
What should you create in each subscription? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Sub1:
A Recovery Services vault
A Resource Guard
An Azure Site Recovery job
Microsoft Azure Backup Server (MABS)
The Microsoft Azure Recovery Services (MARS) agent
Sub2:
A Recovery Services vault
A Resource Guard
An Azure Site Recovery job
Microsoft Azure Backup Server (MABS)
The Microsoft Azure Recovery Services (MARS) agent

A

Correct Answer:

Sub1: A Recovery Services vault

Sub2: A Resource Guard

Explanation:

Sub1: A Recovery Services vault:

Azure Backup uses Recovery Services vaults as the primary location to store backups for various Azure resources (Virtual Machines, SQL Databases, Storage Accounts, etc.).

You need a vault in Sub1 to hold the backups for all the resources within that subscription, as specified by the requirement.

Sub2: A Resource Guard:

Resource Guard provides an additional layer of protection for Azure Backup. It implements multi-user authorization and helps prevent unauthorized changes to backup configurations, which is exactly what the second requirement describes.

With a Resource Guard in Sub2, User1 (from the contoso.onmicrosoft.com tenant) can make major changes to the backup configuration only after being assigned an appropriate role in Sub2.

The key point is that the Resource Guard should live in a different subscription (and, ideally, a different tenant) from the resources it protects.

Why other options are incorrect:

An Azure Site Recovery job: Azure Site Recovery is used for disaster recovery, not for backup. It’s used to replicate VMs between different regions for DR purposes. This is not relevant to this question.

Microsoft Azure Backup Server (MABS): MABS is an on-premises solution used to back up on-premises resources to Azure. It is not applicable in this scenario.

The Microsoft Azure Recovery Services (MARS) agent: The MARS agent is used to back up files and folders from Windows machines, it’s not about securing the backup policy itself.

A Recovery Services vault (in Sub2): A Recovery Services vault stores backups; it does not provide the multi-user authorization required here.

70
Q

HOTSPOT -
You plan to deploy Azure Databricks to support a machine learning application. Data engineers will mount an Azure Data Lake Storage account to the Databricks file system. Permissions to folders are granted directly to the data engineers.
You need to recommend a design for the planned Databricks deployment. The solution must meet the following requirements:
✑ Ensure that the data engineers can only access folders to which they have permissions.
✑ Minimize development effort.
✑ Minimize costs.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Databricks SKU:
Premium
Standard
Cluster configuration:
Credential passthrough
Managed identities
MLflow
A runtime that contains Photon
Secret scope

A

Understanding the Situation

Application: Machine learning application using Azure Databricks.

Data Access: Data engineers need to mount an Azure Data Lake Storage account to Databricks file system.

Permissions: Folder permissions are assigned to individual data engineers.

Requirements:

Data engineers should only access folders to which they have permissions.

Minimize development effort.

Minimize costs.

Analyzing Databricks SKU Options

Premium:

Mechanism: The Premium tier adds security and governance features on top of the core platform, including Azure AD credential passthrough, role-based access control for clusters and jobs, and IP access lists.

Suitability: Credential passthrough is available only in the Premium tier, so Premium is required to enforce per-user folder permissions, even though it costs more than Standard.

Standard:

Mechanism: The Standard tier offers core Databricks functionality without the advanced security features.

Suitability: Although cheaper, the Standard tier does not support credential passthrough, so it cannot ensure that data engineers access only the folders to which they have permissions.

Analyzing Cluster Configuration Options

Credential passthrough:

Mechanism: Enables a user to access resources based on their own Azure AD credentials. This will ensure the user is accessing data with their permissions that were specifically defined for the Data Lake Storage account.

Suitability: This is the correct choice because it enables access control via the logged-in user’s identity, and therefore they can only access data that they are authorized to access.

Managed identities:

Mechanism: Managed identities provide a way for Azure resources to authenticate with other Azure services, without requiring the management of secrets.

Suitability: Not the correct solution here. A managed identity would give the cluster a single shared identity for accessing the data, so per-user folder permissions could not be enforced.

MLflow:

Mechanism: MLflow is a platform for tracking machine learning experiments and managing models.

Suitability: Not related to access control and therefore not suitable here.

A runtime that contains Photon:

Mechanism: A runtime that contains Photon is optimized for high performance on Databricks.

Suitability: Not related to access control and therefore not suitable here.

Secret scope:

Mechanism: Secret scopes provide a way to securely store and access secrets.

Suitability: Not related to the requirements.

Evaluation

Folder Access: Credential passthrough ensures that the data engineers can only access folders to which they have permissions on the Data Lake Storage account.

Development Effort: Credential passthrough is easy to set up, with minimal development overhead.

Costs: Premium costs more than Standard, but it is the lowest-cost configuration that supports credential passthrough; the folder-level access requirement cannot be met on the Standard tier.

Conclusion

The correct recommendations are:

Databricks SKU: Premium

Cluster configuration: Credential passthrough

Answer:

Databricks SKU: Premium

Cluster configuration: Credential passthrough
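For reference, the documented pattern for mounting ADLS Gen2 with credential passthrough looks like the following notebook sketch. The container, account, and mount point are placeholders; spark and dbutils are the notebook's built-in globals, and the cluster must have spark.databricks.passthrough.enabled set to true.

```python
# Databricks notebook cell; requires a Premium-tier workspace and a
# passthrough-enabled cluster.
configs = {
    "fs.azure.account.auth.type": "CustomAccessToken",
    "fs.azure.account.custom.token.provider.class": spark.conf.get(
        "spark.databricks.passthrough.adls.gen2.tokenProviderClassName"
    ),
}

# Each reader's own Azure AD identity is used, so the Data Lake permissions
# granted to individual data engineers are enforced on every access.
dbutils.fs.mount(
    source="abfss://data@examplelake.dfs.core.windows.net/",
    mount_point="/mnt/data",
    extra_configs=configs,
)
```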

71
Q

Your company has the divisions shown in the following table.

| Division | Azure subscription | Azure Active Directory (Azure AD) tenant |
|---|---|---|
| East | Sub1 | East.contoso.com |
| West | Sub2 | West.contoso.com |

Sub1 contains an Azure web app that runs an ASP.NET application named App1. App1 uses the Microsoft identity platform (v2.0) to handle user authentication.
Users from east.contoso.com can authenticate to App1.
You need to recommend a solution to allow users from west.contoso.com to authenticate to App1.
What should you recommend for the west.contoso.com Azure AD tenant?

A. a conditional access policy
B. pass-through authentication
C. guest accounts
D. an app registration


A

Understanding the Situation

App1: An ASP.NET application running on an Azure web app, using the Microsoft identity platform (v2.0).

Current Authentication: Only users from the east.contoso.com Azure AD tenant can authenticate to App1.

Requirement: Users from the west.contoso.com Azure AD tenant should be able to authenticate to App1.

Analyzing the Options

A. a conditional access policy:

Mechanism: Conditional Access policies are used to control access to resources based on conditions (location, device, and so on). They do not grant access; they control it.

Suitability: Not suitable for this scenario. Conditional access policies are used to enforce rules after a user has been authenticated. They do not allow users from another tenant to access the application.

B. pass-through authentication:

Mechanism: Pass-through authentication allows users to authenticate against your on-premises Active Directory domain, rather than directly using Azure AD.

Suitability: Not suitable for this scenario, because the requirement is that the user accounts in another Azure AD tenant can access the application. Pass-through authentication does not solve the problem.

C. guest accounts:

Mechanism: You can invite users from west.contoso.com as guest users into east.contoso.com. This will enable users to access the application as guests in east.contoso.com.

Suitability: This is a possible solution, it will allow the user to authenticate. However, it would require each user in the west tenant to be added individually to the east tenant, which will incur significant overhead.

D. an app registration:

Mechanism: Application registration provides an identity for your application in Azure AD. By configuring the application to be multi-tenant, the application is able to authenticate users from any tenant.

Suitability: This is the correct solution. You must configure the app to be a multi-tenant application by changing the supported account types in the application manifest. This enables the application to authenticate users from other tenants, without the requirement for guest accounts.

Evaluation

Authentication Scope: You need users from a different Azure AD tenant to authenticate.

User Management: You need a solution that doesn’t require adding each individual user from west.contoso.com into east.contoso.com.

Azure AD: The app registration that represents App1 must be configured as multi-tenant so that users from other tenants can sign in.

Conclusion

The best approach is to make App1 a multi-tenant application by changing the supported account types on its app registration. When users from west.contoso.com first sign in and grant consent, a corresponding enterprise application (service principal) is created in their tenant. Of the options offered, this maps to an app registration.

Answer:

D. an app registration
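To show the multi-tenant switch concretely, this hedged sketch updates the App1 registration's signInAudience through Microsoft Graph; the object ID and token are placeholders, and the same change can be made in the portal under Supported account types.

```python
import requests

GRAPH_TOKEN = "<token-with-Application.ReadWrite.All>"  # placeholder
APP_OBJECT_ID = "<App1-application-object-id>"          # placeholder

resp = requests.patch(
    f"https://graph.microsoft.com/v1.0/applications/{APP_OBJECT_ID}",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    # "AzureADMultipleOrgs" = accounts in any organizational directory.
    json={"signInAudience": "AzureADMultipleOrgs"},
    timeout=10,
)
resp.raise_for_status()
```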

72
Q

You plan to migrate App1 to Azure. The solution must meet the authentication and authorization requirements.
Which type of endpoint should App1 use to obtain an access token?

A. Azure Instance Metadata Service (IMDS)
B. Azure AD
C. Azure Service Management
D. Microsoft identity platform

A

Understanding the Situation

App1: An application being migrated to Azure.

Authentication and Authorization: App1 needs to obtain an access token for secure access to Azure resources.

Goal: Identify the correct endpoint for App1 to obtain this token.

Analyzing the Options

A. Azure Instance Metadata Service (IMDS):

Mechanism: The Azure Instance Metadata Service is a REST API that provides information about the running Azure VM or scale set. It can also provide access tokens for managed identities.

Suitability: IMDS is primarily used when the application is running in an Azure virtual machine or other Azure compute resources and using managed identities. In such cases, it provides the appropriate endpoint to obtain tokens for these identities.

B. Azure AD:

Mechanism: Azure AD is the identity provider and provides the access tokens. It is where applications can authenticate with.

Suitability: Azure AD is the identity provider, but "Azure AD" is not itself the endpoint from which the access token is requested; the token endpoints are exposed through the Microsoft identity platform. Therefore, it is not the best option.

C. Azure Service Management:

Mechanism: Azure Service Management is the old (classic) Azure management model.

Suitability: Not suitable. The current management plane is Azure Resource Manager (ARM), and this classic endpoint is not used to retrieve access tokens.

D. Microsoft identity platform:

Mechanism: The Microsoft identity platform is the evolution of the Azure AD authentication service. It provides endpoints for various authentication and authorization scenarios.

Suitability: The Microsoft identity platform is the best approach here; it is the platform that provides the required endpoints for authenticating with Azure AD.

Evaluation

Azure Resource: If the application was deployed to an Azure resource, such as an Azure virtual machine, using a managed identity, then IMDS is the optimal solution.

Application Endpoint: For a general application that needs to obtain an access token from Azure AD, the Microsoft identity platform provides the correct endpoint.

Conclusion

While Azure Instance Metadata Service (IMDS) is appropriate for managed identities running in Azure resources, the best option for a generic application to obtain an access token is the Microsoft identity platform.

Answer:

D. Microsoft identity platform
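As a sketch of what "use the Microsoft identity platform" means in code, the MSAL library targets that platform's v2.0 endpoints; the tenant, client ID, secret, and scope below are placeholders.

```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="<client-id>",
    client_credential="<client-secret>",
    # MSAL derives the Microsoft identity platform v2.0 token endpoint from
    # this authority: https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token
    authority="https://login.microsoftonline.com/<tenant-id>",
)

result = app.acquire_token_for_client(scopes=["https://vault.azure.net/.default"])
if "access_token" in result:
    print("token acquired")
else:
    print(result.get("error_description"))
```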

73
Q

You are designing a microservices architecture that will use Azure Kubernetes Service (AKS) to host pods that run containers. Each pod deployment will host a separate API. Each API will be implemented as a separate service.
You need to recommend a solution to make the APIs available to external users from Azure API Management. The solution must meet the following requirements:
✑ Control access to the APIs by using mutual TLS authentication between API Management and the AKS-based APIs.
✑ Provide access to the APIs by using a single IP address.
What should you recommend to provide access to the APIs?

A. the LoadBalancer service in AKS
B. custom network security groups (NSGs)
C. the Ingress Controller in AKS

A

Understanding the Situation

Architecture: Microservices architecture with each API deployed as a separate service in AKS pods.

External Access: APIs need to be accessible through Azure API Management.

Requirements:

Mutual TLS (mTLS) authentication between API Management and the AKS APIs.

Single IP address for external access to all APIs.

Analyzing the Options

A. the LoadBalancer service in AKS:

Mechanism: The LoadBalancer service in AKS exposes applications to the internet via an external load balancer with an external IP address.

Suitability: A LoadBalancer service exposes each Kubernetes service on its own external IP, so the APIs could not share a single IP address, and it provides no mutual TLS capability.

B. custom network security groups (NSGs):

Mechanism: NSGs are used to filter network traffic to Azure resources. They can be used to allow inbound traffic to the AKS cluster.

Suitability: NSGs provide network-level filtering but handle neither application-level routing (a single IP fronting multiple services) nor mTLS, so they are not a sufficient solution.

C. the Ingress Controller in AKS:

Mechanism: An Ingress Controller acts as a reverse proxy for services running in AKS. It routes traffic to different services based on host names, URL paths, and other rules, can terminate TLS, and exposes all services through a single IP address.

Suitability: This is the correct solution. It routes traffic to multiple AKS services through a single IP and provides the TLS capability needed for the mutual TLS connection required by API Management.

Evaluation

mTLS: Ingress controllers can be configured to terminate TLS connections and pass traffic securely to the underlying pods, enabling mutual TLS authentication. This requires configuration on both the API Management side and the Ingress controller.

Single IP: Ingress controllers can manage traffic for multiple services on a single IP address using a variety of routing methods (e.g., host headers, path-based routing).

Flexibility: Provides centralized management of external access to the services.

Conclusion

The Ingress Controller in AKS is the best solution because it meets the requirements for mTLS authentication, provides access to multiple APIs through a single IP address, and manages traffic routing within the AKS cluster.

Answer:

C. the Ingress Controller in AKS

74
Q

You plan to use an Azure Storage account to store data assets.

You need to recommend a solution that meets the following requirements:

  • Supports immutable storage
  • Disables anonymous access to the storage account
  • Supports access control list (ACL)-based Azure AD permissions

What should you include in the recommendation?

A. Azure Files
B. Azure Data Lake Storage
C. Azure NetApp Files
D. Azure Blob Storage

A

The correct answer is B. Azure Data Lake Storage. Here’s why:

Azure Data Lake Storage (ADLS) Gen2:

Supports immutable storage: ADLS Gen2 allows you to configure immutability policies at the container or blob level using features like Write-Once Read-Many (WORM), which is crucial for compliance and data protection.

Disables anonymous access: ADLS Gen2, when properly configured, requires authentication and authorization to access data, effectively disabling anonymous access.

Supports ACL-based Azure AD permissions: ADLS Gen2 leverages Azure AD for authentication and authorization. It supports granular access control using POSIX-like ACLs for directories and files, allowing you to grant specific permissions to users, groups, and service principals.

Let’s look at why the other options are incorrect:

A. Azure Files: Azure Files is primarily designed for providing shared file storage accessible via SMB. While it can integrate with Azure AD for authentication, it doesn’t natively support the same level of ACL-based permissions as ADLS Gen2 and is not designed for immutable storage in the same way.

C. Azure NetApp Files: Azure NetApp Files is a high-performance enterprise-grade storage service built on NetApp technology. It’s excellent for demanding workloads requiring speed and supports data protection features such as snapshots but is not designed for immutable storage and Azure AD ACLs in the same way as ADLS Gen2.

D. Azure Blob Storage: Azure Blob Storage supports immutability policies (on blobs and containers) and can disable anonymous access, and it supports Azure AD authorization through Azure RBAC. However, RBAC applies at the account or container level; without a hierarchical namespace, Blob Storage does not support the POSIX-style ACLs on directories and individual files that ADLS Gen2 provides.

In Summary:

Azure Data Lake Storage Gen2 is the ideal choice for scenarios requiring immutable storage, disabled anonymous access, and Azure AD ACL-based permissions due to its tight integration with Azure Active Directory and the ability to set access control at the directory and file level.
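A short sketch of the ACL model with the azure-storage-file-datalake Python package; the account, file system, directory, and the principal's object ID are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    "https://examplelake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
directory = service.get_file_system_client("assets").get_directory_client("videos")

# Grant one Azure AD principal read+execute on the directory tree. This updates
# (rather than replaces) matching ACL entries, recursively; the GUID stands in
# for the user's or group's object ID.
directory.update_access_control_recursive(
    acl="user:00000000-0000-0000-0000-000000000000:r-x"
)
print(directory.get_access_control()["acl"])
```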

75
Q

HOTSPOT -
A company plans to implement an HTTP-based API to support a web app. The web app allows customers to check the status of their orders.
The API must meet the following requirements:
✑ Implement Azure Functions.
✑ Provide public read-only operations.
✑ Do not allow write operations.
You need to recommend configuration options.
What should you recommend? To answer, configure the appropriate options in the dialog box in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Topic: HTTP methods
Value:

API methods
GET only
GET and POST only
GET, POST, and OPTIONS only
Topic: Authorization level
Value:

Function
Anonymous
Admin

A

Topic: HTTP methods

Value: GET only

Explanation: The requirement states that the API must provide "public read-only operations" and must not allow "write operations." In HTTP, GET is the standard method for retrieving data (read-only), while methods like POST, PUT, and DELETE create, modify, or remove data (write operations). Since only read access is needed, GET only is the appropriate choice.

Topic: Authorization level

Value: Anonymous

Explanation: The requirement states that the API provides “public read-only operations.” Public access implies that no authorization is required for read operations. The “Anonymous” authorization level allows anyone to access the function without needing any authentication or credentials.

Here’s how it should look in the answer area:

Topic: HTTP methods
Value: GET only

Topic: Authorization level
Value: Anonymous
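A minimal sketch of such a function using the Azure Functions Python v2 programming model; the route and response payload are illustrative, not from the question.

```python
import azure.functions as func

app = func.FunctionApp()

# Only GET is accepted, so the API is read-only; ANONYMOUS means no key or
# credential is required, making the endpoint publicly readable.
@app.route(
    route="orders/{order_id}",
    methods=["GET"],
    auth_level=func.AuthLevel.ANONYMOUS,
)
def get_order_status(req: func.HttpRequest) -> func.HttpResponse:
    order_id = req.route_params.get("order_id")
    # A real implementation would look the order up in a data store.
    return func.HttpResponse(
        f'{{"orderId": "{order_id}", "status": "shipped"}}',
        mimetype="application/json",
    )
```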

76
Q

HOTSPOT -
Your organization has developed and deployed several Azure App Service Web and API applications. The applications use Azure Key Vault to store several authentication,
storage account, and data encryption keys. Several departments have the following requests to support the applications:

Department: Security
Requests:
* Review membership of administrative roles and require users to provide a justification for continued membership.
* Get alerts about changes in administrator assignments.
* See a history of administrator activation, including which changes administrators made to Azure resources.

Department: Development
Request:
* Enable the applications to access Azure Key Vault and retrieve keys for use in code.

Department: Quality Assurance
Request:
* Receive temporary administrator access to create and configure additional Web and API applications in the test environment.

You need to recommend the appropriate Azure service for each department request.
What should you recommend? To answer, configure the appropriate options in the dialog box in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Department
Azure Service
Security:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Development:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Quality Assurance:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection

A

Answer Area:

Department: Security
Azure Service: Azure AD Privileged Identity Management

Explanation: Azure AD Privileged Identity Management (PIM) is designed specifically for managing, controlling, and monitoring access to important resources in Azure. It directly addresses all the security department’s requirements:

Review membership of administrative roles: PIM allows you to conduct access reviews for administrative roles, ensuring that users with elevated privileges are still justified in having them.

Get alerts about changes in administrator assignments: PIM sends alerts when there are changes in role assignments, so security can monitor and react to them.

See a history of administrator activation: PIM provides a history of administrator activations and the actions performed while in that role. This is crucial for auditing and accountability.

Department: Development
Azure Service: Azure Managed Identity

Explanation: Azure Managed Identity enables applications to connect to services that support Azure AD authentication without having to manage credentials within the application’s code. It allows:

Enable applications to access Azure Key Vault and retrieve keys for use in code: The application can use its managed identity to authenticate to Azure Key Vault and retrieve the keys securely without storing them directly in config files. This simplifies the process and enhances security.

Department: Quality Assurance
Azure Service: Azure AD Privileged Identity Management

Explanation: Azure AD Privileged Identity Management is designed to handle temporary elevated access:

Receive temporary administrator access: PIM enables time-bound access requests, allowing users to activate an administrative role for a specific period of time for specific projects. This meets the quality assurance department’s need for temporary access and ensures these rights expire to reduce security risks. This is done through PIM for Resource Roles.

Here’s how the answer area should be configured:

Department: Security
Azure Service: Azure AD Privileged Identity Management

Department: Development
Azure Service: Azure Managed Identity

Department: Quality Assurance
Azure Service: Azure AD Privileged Identity Management

77
Q

HOTSPOT -
You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.
App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.
You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area.
Hot Area:
Answer Area
The users can connect to App1 without
being prompted for authentication:
An Azure AD app registration
An Azure AD managed identity
Azure AD Application Proxy

The users can access App1 only from
company-owned computers:
A conditional access policy
An Azure AD administrative unit
Azure Application Gateway
Azure Blueprints
Azure Policy

A

The users can connect to App1 without being prompted for authentication:

Correct Answer: An Azure AD app registration

Explanation: When you create an Azure web app and want to use Azure AD authentication, you must create an app registration in Azure AD. This registration represents your application and enables it to integrate with Azure AD. When a user attempts to access App1, the browser is redirected to Azure AD, which seamlessly authenticates the user with their existing sign-in session because their Windows 10 computers are joined to Azure AD.

The users can access App1 only from company-owned computers:

Correct Answer: A conditional access policy

Explanation: Conditional Access Policies in Azure AD allow you to enforce access controls based on various conditions, including the device state. You can create a Conditional Access Policy to require that users access App1 only from devices that are marked as compliant in Azure AD (meaning they are managed and compliant according to your company’s policies).

Therefore, the correct answer is:

The users can connect to App1 without being prompted for authentication: An Azure AD app registration

The users can access App1 only from company-owned computers: A conditional access policy

78
Q

HOTSPOT -
You have the Free edition of a hybrid Azure Active Directory (Azure AD) tenant. The tenant uses password hash synchronization.
You need to recommend a solution to meet the following requirements:
✑ Prevent Active Directory domain user accounts from being locked out as the result of brute force attacks targeting Azure AD user accounts.
✑ Block legacy authentication attempts to Azure AD integrated apps.
✑ Minimize costs.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
To protect against brute force attacks:
Azure AD Password Protection
Conditional access policies
Pass-through authentication
Smart lockout
To block legacy authentication attempts:
Azure AD Application Proxy
Azure AD Password Protection
Conditional access policies
Enable Security defaults

A

Requirement 1: Prevent Active Directory domain user accounts from being locked out as the result of brute force attacks targeting Azure AD user accounts.

Correct Answer: Smart lockout

Explanation: Smart lockout is an Azure AD feature designed to block brute force attacks on user accounts. By analyzing failed sign-in attempts, it temporarily locks out suspicious activity (for example, repeated failures in a short time) and learns to distinguish legitimate sign-ins from attacks. With password hash synchronization, authentication happens in Azure AD, so the lockout applies in the cloud and is not propagated to the on-premises Active Directory account.

Requirement 2: Block legacy authentication attempts to Azure AD integrated apps.

Correct Answer: Enable Security defaults

Explanation: Security defaults are a free, tenant-wide baseline that, among other protections, blocks legacy authentication protocols such as POP3, IMAP, and SMTP basic authentication, which cannot enforce modern authentication or MFA. Conditional Access policies can also block legacy authentication, but Conditional Access requires an Azure AD Premium P1 license, which is not available in the Free edition.

Requirement 3: Minimize costs.

Explanation: Because this is a Free edition tenant, the solution must use features that carry no extra cost. Smart lockout (with its default settings) and security defaults are both included in Azure AD Free; Conditional Access is not.

Therefore, the correct answer is:

To protect against brute force attacks: Smart lockout

To block legacy authentication attempts: Enable Security defaults

79
Q

HOTSPOT -
You need to recommend a solution to ensure that App1 can access the third-party credentials and access strings. The solution must meet the security requirements.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Authenticate App1 by using:
A certificate
A system-assigned managed identity
A user-assigned managed identity
Authorize App1 to retrieve Key Vault
secrets by using:
An access policy
A connected service
A private link
A role assignment

A

Requirement 1: Authenticate App1 by using:

Correct Answer: A user-assigned managed identity

Why this is correct: User-assigned managed identities offer the most flexibility and control. They are created as standalone resources and can be assigned to multiple resources, which helps when different applications need to access the same secrets in Key Vault, and their lifecycle is independent of any single resource.

Why the other options are incorrect:

A certificate: While certificates can be used for authentication, they are not the ideal solution for this scenario, as they require more manual handling, including key rotation. Managed identities are designed to make this process much more secure and simpler for Azure resources.

A system-assigned managed identity: System-assigned identities are tied to a single resource and their lifecycle is bound to it as well. They can only be assigned to one resource, which is less flexible than user-assigned managed identities. While system-assigned identities could be used here, user-assigned is the recommended practice.

Requirement 2: Authorize App1 to retrieve Key Vault secrets by using:

Correct Answer: An access policy

Why this is correct: Access policies are the primary way to grant specific permissions on a Key Vault (secrets, keys, certificates) to different identities, such as managed identities. They provide granular control over who can perform which operations.

Why the other options are incorrect:

A connected service: Connected services are typically used for connecting to external services outside of the Azure platform. They are not suitable for authorizing access to Key Vault secrets.

A private link: Private links are used for providing secure access to Azure PaaS services from your VNet by using a private endpoint. They do not manage authorization of access within the service itself.

A role assignment: Role assignments manage access to the Key Vault resource itself (management plane), not access to the secrets, keys, or certificates within the vault (data plane). While a role assignment on the Key Vault resource can grant broad access, granular data-plane access is better granted through access policies in the classic access-policy permission model.

Therefore, the finalized answer with explanations is:

Authenticate App1 by using: A user-assigned managed identity (Certificates are harder to manage, system-assigned managed identities are less flexible)

Authorize App1 to retrieve Key Vault secrets by using: An access policy (Connected Services are for external services, Private Links for secure network access, Role Assignments for management plane access)
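To make the moving parts concrete, here is a minimal Python sketch of how App1 might retrieve a secret using a user-assigned managed identity and the Key Vault SDK. It assumes the azure-identity and azure-keyvault-secrets packages are installed; the vault URL, client ID, and secret name are placeholder assumptions.

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Authenticate as the user-assigned managed identity; no credentials are
# stored in the app. The client ID identifies which identity to use when
# several are assigned to the hosting resource. Placeholder value.
credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")

# The identity must have been granted "get" permission on secrets through
# the vault's access policy for this call to succeed.
client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

secret = client.get_secret("third-party-connection-string")  # placeholder name
print(f"Retrieved secret: {secret.name}")
```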

80
Q

HOTSPOT -
You manage a network that includes an on-premises Active Directory domain and an Azure Active Directory (Azure AD).
Employees are required to use different accounts when using on-premises or cloud resources. You must recommend a solution that lets employees sign in to all company resources by using a single account. The solution must implement an identity provider.
You need to provide guidance on the different identity providers.
How should you describe each identity provider? To answer, select the appropriate description from each list in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Identity Provider
Description
synchronized identity
User management occurs on-premises. Azure AD authenticates employees by using on-premises passwords.
User management occurs on-premises. The on-premises domain controller authenticates employee credentials.
Both user management and authentication occur in Azure AD.
federated identity
User management occurs on-premises. Azure AD authenticates employees by using on-premises passwords.
User management occurs on-premises. The on-premises domain controller authenticates employee credentials.
Both user management and authentication occur in Azure AD.

A

Synchronized Identity:

Correct Description: User management occurs on-premises. Azure AD authenticates employees by using on-premises passwords.

Explanation: With synchronized identity (specifically, password hash synchronization), user accounts are created and managed in the on-premises Active Directory. The password hashes of these accounts are synchronized to Azure AD, meaning that the user password itself never leaves the on-premises environment. When a user signs into a cloud resource, Azure AD uses these synchronized password hashes to authenticate them. While the management of the user objects is done on-premises, the user is directly authenticated by Azure AD and not the on-premises domain controllers.

Federated Identity:

Correct Description: User management occurs on-premises. The on-premises domain controller authenticates employee credentials.

Explanation: With federated identity (typically using Active Directory Federation Services - ADFS), users are also managed in the on-premises Active Directory, but the authentication process is different. When a user signs into a cloud resource, Azure AD redirects the authentication request to the on-premises ADFS server, which then authenticates the user against the on-premises domain controller. So, the domain controller does the authentication and not Azure AD.

Therefore, the correct answers are:

Synchronized Identity: User management occurs on-premises. Azure AD authenticates employees by using on-premises passwords.

Federated Identity: User management occurs on-premises. The on-premises domain controller authenticates employee credentials.

81
Q

You are designing a large Azure environment that will contain many subscriptions.
You plan to use Azure Policy as part of a governance solution.
To which three scopes can you assign Azure Policy definitions? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

A. management groups
B. subscriptions
C. Azure Active Directory (Azure AD) tenants
D. resource groups
E. Azure Active Directory (Azure AD) administrative units
F. compute resources

A

Azure Policy allows you to enforce organizational standards and assess compliance at different levels within your Azure environment. Here are the three valid scopes where you can assign Azure Policy definitions:

A. Management groups: Management groups are containers above subscriptions and allow you to manage access, policies, and compliance across multiple subscriptions in your Azure hierarchy. This is useful for applying policies to a broad set of resources.

B. Subscriptions: You can assign policies directly to subscriptions. This lets you enforce compliance standards within the boundaries of a specific subscription.

D. Resource groups: Policies can also be assigned to resource groups. This gives you the most granular control for applying policies to specific collections of resources within a subscription.

Here’s why the other options are incorrect:

C. Azure Active Directory (Azure AD) tenants: While Azure AD has its own policies (like Conditional Access), Azure Policy definitions are not assigned directly to Azure AD tenants. Azure Policy is focused on Azure resource governance, not Azure AD configuration.

E. Azure Active Directory (Azure AD) administrative units: Azure AD administrative units are used to scope administrative permissions in Azure AD, not for applying Azure Policy definitions.

F. Compute resources: You cannot directly assign Azure policies to individual compute resources. Policies are assigned at the resource group level or higher.

Therefore, the three correct scopes for assigning Azure Policy definitions are:

A. management groups

B. subscriptions

D. resource groups
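As a quick reference, the three assignable scopes differ only in the shape of their resource IDs. The sketch below shows the ID formats, with placeholder names:

```python
# The three scope formats that an Azure Policy assignment accepts.
# Angle-bracket values are placeholders.
scopes = [
    "/providers/Microsoft.Management/managementGroups/<mg-name>",  # management group
    "/subscriptions/<subscription-id>",                            # subscription
    "/subscriptions/<subscription-id>/resourceGroups/<rg-name>",   # resource group
]

for scope in scopes:
    print(scope)
```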

82
Q

HOTSPOT -
You need to design a resource governance solution for an Azure subscription. The solution must meet the following requirements:
✑ Ensure that all ExpressRoute resources are created in a resource group named RG1.
✑ Delegate the creation of the ExpressRoute resources to an Azure Active Directory (Azure AD) group named Networking.
✑ Use the principle of least privilege.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Ensure that all ExpressRoute resources are created in RG1:
Delegate the creation of the ExpressRoute resources to Networking:
A custom RBAC role assignment at the level of RG1
A custom RBAC role assignment at the subscription level
An Azure Blueprints assignment that sets locking mode for the level of RG1
An Azure Policy assignment at the subscription level that has an exclusion
Multiple Azure Policy assignments at the resource group level except for RG1
A custom RBAC role assignment at the level of RG1
A custom RBAC role assignment at the subscription level
An Azure Blueprints assignment that sets locking mode for the level of RG1
An Azure Policy assignment at the subscription level that has an exclusion
Multiple Azure Policy assignments at the resource group level except for RG1

A

Requirement 1: Ensure that all ExpressRoute resources are created in RG1.

Correct Answer: An Azure Policy assignment at the subscription level that has an exclusion

Explanation: To enforce that all ExpressRoute resources are created in the designated resource group RG1, use Azure Policy, which evaluates resources in scope and denies non-compliant deployments. Assigning the deny policy at the subscription level applies it to every resource group in the subscription; adding RG1 to the assignment's notScopes excludes RG1 from the deny, so ExpressRoute resources can be created only there.

Why other options are not the correct choices:

A custom RBAC role assignment at the level of RG1: RBAC controls what actions users/groups can do, not where they can create resources.

A custom RBAC role assignment at the subscription level: RBAC controls what actions users/groups can do, not where they can create resources.

An Azure Blueprints assignment that sets locking mode for the level of RG1: Blueprints are used for deployment standards rather than enforcing placement.

Multiple Azure Policy assignments at the resource group level except for RG1: While this would work, you would have to explicitly assign the policy to every other resource group and keep that list current as resource groups are added, which is far less maintainable than a single subscription-level policy with an exclusion.

Requirement 2: Delegate the creation of the ExpressRoute resources to the Networking group using the principle of least privilege.

Correct Answer: A custom RBAC role assignment at the level of RG1

Explanation: To delegate permission to create ExpressRoute resources, you must use Azure Role-Based Access Control (RBAC). You need to create a custom role for creating ExpressRoute resources or use an existing one, then assign it to the Networking group only on RG1. This satisfies the principle of least privilege, by giving the group access to create the resources only within RG1, where the resource is to be created.

Why other options are not correct:

A custom RBAC role assignment at the subscription level: This would grant the group permissions to create ExpressRoute resources in any resource group within the subscription, not just RG1, which violates least privilege.

An Azure Blueprints assignment that sets locking mode for the level of RG1: Blueprints handle deployment standards, not permission delegation.

An Azure Policy assignment at the subscription level that has an exclusion: Policy controls standards and compliance. It does not handle permission management.

Multiple Azure Policy assignments at the resource group level except for RG1: Policy controls standards and compliance. It does not handle permission management.

Therefore, the correct answer is:

Ensure that all ExpressRoute resources are created in RG1: An Azure Policy assignment at the subscription level that has an exclusion

Delegate the creation of the ExpressRoute resources to Networking: A custom RBAC role assignment at the level of RG1
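As a sketch of how the two pieces fit together, the snippet below builds the deny-policy rule, the subscription-level assignment with RG1 excluded via notScopes, and a custom role scoped to RG1. The role name and IDs are illustrative assumptions.

```python
import json

# 1) Deny ExpressRoute circuits everywhere the assignment applies.
policy_rule = {
    "if": {"field": "type", "equals": "Microsoft.Network/expressRouteCircuits"},
    "then": {"effect": "deny"},
}

# 2) Assign at the subscription, excluding RG1 via notScopes, so circuits
#    can only be created in RG1. Placeholder subscription ID.
assignment = {
    "scope": "/subscriptions/<subscription-id>",
    "notScopes": ["/subscriptions/<subscription-id>/resourceGroups/RG1"],
}

# 3) Custom role granting only ExpressRoute actions, assignable (and
#    assigned to the Networking group) at RG1 for least privilege.
custom_role = {
    "Name": "ExpressRoute Creator",  # illustrative name
    "Actions": ["Microsoft.Network/expressRouteCircuits/*"],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>/resourceGroups/RG1"],
}

print(json.dumps(
    {"policyRule": policy_rule, "assignment": assignment, "customRole": custom_role},
    indent=2,
))
```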

83
Q

HOTSPOT -
You plan to create an Azure environment that will contain a root management group and 10 child management groups. Each child management group will contain five Azure subscriptions. You plan to have between 10 and 30 resource groups in each subscription.
You need to design an Azure governance solution. The solution must meet the following requirements:
✑ Use Azure Blueprints to control governance across all the subscriptions and resource groups.
✑ Ensure that Blueprints-based configurations are consistent across all the subscriptions and resource groups.
✑ Minimize the number of blueprint definitions and assignments.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Level at which to define the blueprints:
The child management groups
The root management group
The subscriptions
Level at which to create the blueprint assignments:
The child management groups
The root management group
The subscriptions

A

Requirement 1: Use Azure Blueprints to control governance across all the subscriptions and resource groups.

This dictates that Blueprints will be our chosen method for enforcing governance, so it will be used for both the definitions and assignments.

Requirement 2: Ensure that Blueprints-based configurations are consistent across all the subscriptions and resource groups.

This implies that the blueprint definition should be set at a higher level so that it can be applied across a large number of subscriptions and resource groups. The assignments will also need to be done at a level that impacts the needed subscriptions.

Requirement 3: Minimize the number of blueprint definitions and assignments.

This implies that we should use the higher level options, which can manage multiple lower-level items, to avoid repetition.

Based on these requirements, here is how we should implement our Azure Blueprints solution:

Level at which to define the blueprints:

Correct Answer: The root management group

Explanation: Defining blueprints at the root management group allows you to apply these definitions to all child management groups, and thus, all underlying subscriptions and resource groups. This is the most efficient approach for consistent governance across your entire environment and to minimize the number of definitions needed.

Level at which to create the blueprint assignments:

Correct Answer: The child management groups

Explanation: By assigning the blueprint at the child management group level, the policies will automatically be applied to the subscriptions and resource groups that reside inside the group, thus minimizing the effort and number of assignments required. You would want to avoid assigning at the root, as that would mean the subscriptions within the root will also be impacted (if there are any). Also, assigning at the subscription level would force more assignments and make managing them more difficult.

Therefore, the correct answer is:

Level at which to define the blueprints: The root management group

Level at which to create the blueprint assignments: The child management groups

83
Q

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend using an Azure policy to enforce the resource group location.
Does this meet the goal?

A. Yes
B. No

A

Requirements:

Deploy App Service instances and Azure SQL databases simultaneously: This is a deployment consideration, and not something policies directly enforce.

Deploy App Service instances only to specific Azure regions: This is a key requirement for regional compliance.

App Service resources must reside in the same region: This is another key requirement for consistent regional deployment.

Proposed Solution: Use an Azure policy to enforce the resource group location.

Analysis:

Does it address simultaneous deployment? No, the solution doesn’t address the simultaneous deployment requirement. Azure Policy doesn’t control the timing of deployment, it only validates resources once they exist.

Does it address specific regions? No. Azure Policy can enforce the location of resource groups, but a resource group's location does not constrain the location of the resources inside it. If a resource group is created in region A, you are still free to create the resources within it in any other region. A policy that validates the locations of the App Service and SQL resource types themselves would work, but restricting only the resource group location leaves the App Service resources free to be deployed to other regions.

Does it address same region for all App Service resources? No. Enforcing the resource group location does not force the individual resources to be in the same region as the resource group, or as each other.

Conclusion:

The proposed solution does not meet the goal. While it provides some control over the resource group location, it fails to enforce the regional placement of the App Service instances and their associated SQL resources, which is the core regulatory requirement.

Therefore, the answer is:

B. No

A better solution would be to use an Azure Policy that enforces specific locations for the App Service and SQL resources themselves.

84
Q

You have an Azure subscription.
You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements:
✑ Only allow the creation of the virtual machines in specific regions.
✑ Only allow the creation of specific sizes of virtual machines.
What should you include in the recommendation?

A. Azure Resource Manager templates
B. Azure Policy
C. conditional access policies
D. role-based access control (RBAC)

A

Requirements:

Provide developers with the ability to provision Azure virtual machines: This implies the need to allow developers to have the ability to perform create operations for the virtual machines.

Only allow the creation of virtual machines in specific regions: This requires enforcing a location constraint on deployments.

Only allow the creation of specific sizes of virtual machines: This requires enforcing a SKU/size constraint on deployments.

Analyzing the options:

A. Azure Resource Manager templates: While ARM templates are used to define and deploy resources, they do not inherently enforce constraints. Templates can specify locations and sizes, but they do not prevent users from creating VMs with a template that doesn’t adhere to the standards. While templates help with the deployment consistency, they can’t enforce compliance policies.

B. Azure Policy: Azure Policy is designed to enforce organizational standards and assess compliance. You can create policies to restrict the locations where virtual machines can be deployed and also restrict which SKUs (sizes) of virtual machines can be used. This solution can meet both requirements, and would also prevent users from deploying outside of this compliance.

C. Conditional access policies: Conditional Access policies are used to control access to Azure resources based on conditions. These policies are not related to controlling which Azure resource is created. These are for authentication and authorization controls, not resource controls.

D. Role-based access control (RBAC): RBAC is used for managing access to Azure resources. RBAC is used for permissions to perform actions, not for which type of resources can be created. RBAC would allow developers access to the resources, but it wouldn’t enforce which resources they can create.

Conclusion:

Based on the analysis, Azure Policy is the best option to fulfill the requirements of enforcing specific regions and virtual machine sizes. It provides the necessary mechanisms for governing resource creation.

Therefore, the correct answer is:

B. Azure Policy
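For illustration, the two constraints map to two policy rules whose shapes mirror the built-in "Allowed locations" and "Allowed virtual machine size SKUs" definitions. The regions and sizes below are placeholder assumptions:

```python
import json

# Deny VMs outside an allowed-region list (shape of the built-in
# "Allowed locations" policy, with the list hard-coded for brevity).
allowed_locations_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {"not": {"field": "location", "in": ["westeurope", "northeurope"]}},
        ]
    },
    "then": {"effect": "deny"},
}

# Deny VM sizes outside an allowed-SKU list (shape of the built-in
# "Allowed virtual machine size SKUs" policy).
allowed_sizes_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {
                "not": {
                    "field": "Microsoft.Compute/virtualMachines/sku.name",
                    "in": ["Standard_D2s_v5", "Standard_D4s_v5"],
                }
            },
        ]
    },
    "then": {"effect": "deny"},
}

print(json.dumps([allowed_locations_rule, allowed_sizes_rule], indent=2))
```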

85
Q

You plan to automate the deployment of resources to Azure subscriptions.
What is a difference between using Azure Blueprints and Azure Resource Manager templates?

A. Azure Resource Manager templates remain connected to the deployed resources.
B. Only Azure Resource Manager templates can contain policy definitions.
C. Azure Blueprints remain connected to the deployed resources.
D. Only Azure Blueprints can contain policy definitions.

A

Key Concepts:

Azure Resource Manager (ARM) Templates: These are JSON files that define the infrastructure and configuration for your Azure resources. They are used for repeatable deployments. Once a resource is deployed via an ARM template, the template has completed its function, and is not connected to the resource.

Azure Blueprints: Blueprints are a higher-level service that combines resource deployments (using ARM templates) with other governance elements like policies and role assignments. They are used to create reusable, consistent, and compliant environments, and they do remain connected to the deployed resources.

Analyzing the options:

A. Azure Resource Manager templates remain connected to the deployed resources. This statement is incorrect. ARM templates are used for deployments, but they are not “connected” after deployment. They are a definition, not a stateful connection.

B. Only Azure Resource Manager templates can contain policy definitions. This statement is incorrect. ARM templates deploy resources and configuration, but they cannot enforce policy. Policy is applied through Azure Policy, which can be included in Blueprints as policy assignment artifacts.

C. Azure Blueprints remain connected to the deployed resources. This statement is correct. Blueprints maintain a connection to the deployed resources and enforce consistency and compliance going forward. After a Blueprint assignment, the resources will be tied to it.

D. Only Azure Blueprints can contain policy definitions. This statement is incorrect. Blueprints can include policy assignments as artifacts, but the policy definitions themselves are created and managed in Azure Policy and can be assigned with or without Blueprints.

Conclusion:

The key difference is that Azure Blueprints remain connected to the deployed resources, while ARM templates are used to deploy the resources and then are no longer connected to them.

Therefore, the correct answer is:

C. Azure Blueprints remain connected to the deployed resources.

86
Q

HOTSPOT -
You have an Azure blueprint named BP1.
The properties of BP1 are shown in the Properties exhibit. (Click the Properties tab.)
All services > Blueprints | Blueprint definitions >
BP1

Publish Blueprint | Edit Blueprint | Delete Blueprint

BP1

Name: BP1
ID:
Definition location: All PAYG Subscriptions
Description: Assigns policies to address specific recommendations from the Azure Security Benchmark.
Version: Draft
State: Draft
Definition location ID:
Edit Blueprint
Basics
Blueprint Name: BP1

Blueprint Description:
Assigns policies to address specific recommendations from the Azure Security Benchmark.

Definition location*: All PAYG Subscriptions
The management group or subscription where the blueprint is saved. The definition location determines the scope that the blueprint may be assigned to. Learn more at aka.ms/BlueLocation.

Artifacts
The artifacts attached to BP1 are shown in the Artifacts exhibit.

NAME

Subscription
ARTIFACT TYPE: Subscription
Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions
ARTIFACT TYPE: Policy assignment
PARAMETERS: 0 out of 12 parameters populated
Database Resource Group
ARTIFACT TYPE: Resource group
PARAMETERS: 0 out of 2 parameters populated
+ Add artifact…

Answer Area
Statements

You can assign BP1 in its current state.
BP1 has a role assignment defined.
When BP1 is assigned, you will need to provide a resource group name.

A

Exhibit Analysis:

Properties Tab:

Name: BP1

Definition Location: All PAYG Subscriptions

Description: “Assigns policies to address specific recommendations from the Azure Security Benchmark.”

Version: Draft

State: Draft

Artifacts Tab:

Subscription Artifact: A subscription level artifact (likely for adding subscription level configurations)

Policy Assignment Artifact: “Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions” with 0 of 12 parameters populated

Resource Group Artifact: “Database Resource Group” with 0 of 2 parameters populated

Statement Analysis:

You can assign BP1 in its current state.

Analysis: False. The blueprint is in the “Draft” state, as shown in the Properties tab. Blueprints need to be published before they can be assigned. A draft blueprint cannot be assigned.

BP1 has a role assignment defined.

Analysis: False. The artifacts list includes a Subscription artifact, a Policy Assignment artifact, and a Resource Group artifact. There is no role assignment artifact explicitly defined in this blueprint configuration.

When BP1 is assigned, you will need to provide a resource group name.

Analysis: True. The Artifacts tab shows a “Database Resource Group” artifact. The parameters field is shown as “0 out of 2 parameters populated” which implies that this resource group is not pre-defined, and that you will need to provide at least a name when you are deploying this blueprint as an assignment.

Therefore, the correct answers are:

You can assign BP1 in its current state: No

BP1 has a role assignment defined: No

When BP1 is assigned, you will need to provide a resource group name: Yes

87
Q

Overview:

Existing Environment

Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

Active Directory Environment:

The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.

Network Infrastructure:

Each office contains at least one domain controller from the corp.fabrikam.com domain.

The main office contains all the domain controllers for the rd.fabrikam.com forest.

All the offices have a high-speed connection to the Internet.

An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.

The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.

Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

Problem Statement:

The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Requirements:

Planned Changes:

Fabrikam plans to move most of its production workloads to Azure during the next few years.

As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft Office 365 deployment. All R&D operations will remain on-premises.

Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

Technical Requirements:

Fabrikam identifies the following technical requirements:

  • Web site content must be easily updated from a single point.
  • User input must be minimized when provisioning new app instances.
  • Whenever possible, existing on-premises licenses must be used to reduce cost.
  • Users must always authenticate by using their corp.fabrikam.com UPN identity.
  • Any new deployments to Azure must be redundant in case an Azure region fails.
  • Whenever possible, solutions must be deployed to Azure by using platform as a service (PaaS).
  • An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
  • Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

Database Requirements:

Fabrikam identifies the following database requirements:

  • Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
  • To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
  • Database backups must be retained for a minimum of seven years to meet compliance requirements.

Security Requirements:

Fabrikam identifies the following security requirements:

  • Company information, including policies, templates, and data, must be inaccessible to anyone outside the company.
  • Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
  • Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
  • All administrative access to the Azure portal must be secured by using multi-factor authentication.
  • The testing of WebApp1 updates must not be visible to anyone outside the company.

HOTSPOT

You are evaluating the components of the migration to Azure that require you to provision an Azure Storage account.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Statements
You must provision an Azure Storage account for the
SQL Server database migration.
You must provision an Azure Storage account for the
Web site content storage.
You must provision an Azure Storage account for the
Database metric monitoring.

A

Key Concepts:

Azure Storage Account: A service for storing various types of data in the cloud.

Azure SQL Migration: Database migration to Azure can use storage accounts for backups, migration tools, etc.

Web App Content: Content can be stored in various locations, including Azure Storage or directly in the App Service plan.

Database Metrics Monitoring: Monitoring data is typically written to Azure Monitor, not directly to storage accounts.

Statement Analysis:

You must provision an Azure Storage account for the SQL Server database migration.

Analysis: Yes. While there are multiple ways to migrate SQL databases to Azure, using an Azure Storage Account is a typical method. The storage account can be used to store backup files before restoring to the Azure SQL instances. The migration can be done using the Azure Database Migration Service, which typically relies on a storage account to store the database backups. This allows for more efficient and faster data transfer.

You must provision an Azure Storage account for the Web site content storage.

Analysis: No. While an Azure Storage account can be used to store website content (e.g., using Azure Blob Storage or static website hosting), it is not a must. For a PaaS web app, content is often deployed directly to the app service. Alternatively, you could use Azure Content Delivery Network (CDN) linked to an Azure Storage account if there is a need for a CDN solution. However, this problem suggests PaaS as the option to use, which would not require the use of an Azure Storage account for web site content storage.

You must provision an Azure Storage account for the Database metric monitoring.

Analysis: No. Database metrics in Azure SQL are primarily written to Azure Monitor and Log Analytics. While Log Analytics can store the logs on a storage account, this is not something that must be done for database metric monitoring. The requirement is to have access to the database metrics and this can be done directly using Azure Monitor. You are not required to use Azure Storage.

Therefore, the correct answers are:

You must provision an Azure Storage account for the SQL Server database migration: Yes

You must provision an Azure Storage account for the Web site content storage: No

You must provision an Azure Storage account for the Database metric monitoring: No
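To illustrate the storage account's role in the migration, here is a minimal sketch of staging a database backup in Blob storage with the azure-storage-blob package; the connection string, container, and file names are placeholder assumptions.

```python
from azure.storage.blob import BlobServiceClient

# Connect to the storage account (placeholder connection string).
service = BlobServiceClient.from_connection_string("<storage-connection-string>")

# Upload the on-premises backup file so the migration tooling can restore
# it into the target Azure SQL instance. Names are illustrative.
blob = service.get_blob_client(container="backups", blob="webapp1.bak")
with open("webapp1.bak", "rb") as backup_file:
    blob.upload_blob(backup_file, overwrite=True)
```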

87
Q

You need to recommend a strategy for the web tier of WebApp1. The solution must minimize costs. What should you recommend?

Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours.
Configure the Scale Up settings for a web app.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold.
Configure the Scale Out settings for a web app.

A

Understanding the Requirement (Implied)

While we don’t have a full scenario, the prompt emphasizes cost minimization for the web tier of an application, which is very common in Azure. We need a solution that can automatically scale based on usage patterns.

Analyzing the Options

Let’s evaluate each option:

Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours.

Why this is not ideal: This option requires custom logic and scripts to resize the VM, and resizing typically restarts the machine. More fundamentally, the web tier is best scaled by adding or removing instances rather than by resizing a single VM.

Configure the Scale Up settings for a web app.

Why not ideal? Scale up, or vertical scaling, is the act of increasing the size of the instance that is running the app. This will increase the costs, not minimize them.

Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold.

Why this is not ideal: A VM scale set is designed to scale out VMs to handle increased load, but hosting the web tier on VMs is more expensive and contradicts the scenario's preference for PaaS, where App Service is the natural fit.

Configure the Scale Out settings for a web app.

Why this is ideal: Scale out lets the web app add or remove instances based on metrics, rather than resizing the underlying VMs, so capacity tracks demand and cost is minimized.

The Closest Correct Answer

Based on the analysis, the most suitable and correct option is:

Configure the Scale Out settings for a web app.

Explanation

App Service Scale Out: Configuring scale out settings for a web app enables the App Service to automatically adjust the number of instances based on defined metrics, such as CPU usage, memory usage, or request queue length.

Cost Optimization: Scaling out only when needed minimizes the resource consumption and costs, compared to keeping instances constantly running.

Automatic Scaling: Automatic scaling reduces administrative overhead by responding to varying usage automatically.

Why Other Options are Less Ideal

Runbook for VM Resizing: While it can reduce costs, resizing a VM does not provide high availability, and is not designed for web app scaling. It is more complex to manage than built-in scaling features of App Service, and also can disrupt service.

Scale Up Settings: Scale up will only increase the costs, and does not address the need to minimize cost.

VM Scale Sets: Virtual machine scale sets are more complex to manage for a web app. App service web apps are more suitable for this scenario.
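To show what a scale-out configuration looks like, here is a sketch of an Azure Monitor autoscale profile for the App Service plan; the thresholds, instance counts, and resource ID are illustrative assumptions.

```python
import json

# One autoscale profile: scale out above 70% CPU, scale in below 30%.
# "CpuPercentage" is the App Service plan metric; the resource URI is a
# placeholder.
autoscale_profile = {
    "name": "scale-on-cpu",
    "capacity": {"minimum": "1", "maximum": "5", "default": "1"},
    "rules": [
        {
            "metricTrigger": {
                "metricName": "CpuPercentage",
                "metricResourceUri": "<app-service-plan-resource-id>",
                "timeGrain": "PT1M",
                "statistic": "Average",
                "timeWindow": "PT10M",
                "timeAggregation": "Average",
                "operator": "GreaterThan",
                "threshold": 70,
            },
            "scaleAction": {
                "direction": "Increase", "type": "ChangeCount",
                "value": "1", "cooldown": "PT5M",
            },
        },
        {
            "metricTrigger": {
                "metricName": "CpuPercentage",
                "metricResourceUri": "<app-service-plan-resource-id>",
                "timeGrain": "PT1M",
                "statistic": "Average",
                "timeWindow": "PT10M",
                "timeAggregation": "Average",
                "operator": "LessThan",
                "threshold": 30,
            },
            "scaleAction": {
                "direction": "Decrease", "type": "ChangeCount",
                "value": "1", "cooldown": "PT5M",
            },
        },
    ],
}

print(json.dumps(autoscale_profile, indent=2))
```

The scale-in rule is what actually minimizes cost: instances added during peaks are removed again once load subsides.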

88
Q

You need to recommend a strategy for migrating the database content of WebApp1 to Azure.

What should you include in the recommendation?

Use Azure Site Recovery to replicate the SQL servers to Azure.
Use SQL Server transactional replication.
Copy the BACPAC file that contains the Azure SQL database file to Azure Blob storage.
Copy the VHD that contains the Azure SQL database files to Azure Blob storage

A

Requirements:

Migrate the database content of WebApp1 to Azure.

Minimize database downtime during the migration. (This is implied by the requirement to “avoid disrupting customer access.”)

Analyzing the Options:

Use Azure Site Recovery to replicate the SQL servers to Azure.

Analysis: Azure Site Recovery is a good option for migrating entire servers and the SQL server itself. However, this is not recommended when migrating to PaaS. Also, while Azure Site Recovery can reduce downtime, it doesn’t directly address a database-specific migration strategy. There are also faster ways of migrating data using other methods without replicating the entire SQL server. This method would have unnecessary overhead.

Use SQL Server transactional replication.

Analysis: Transactional replication is a viable method for minimizing downtime during database migrations. It allows for ongoing synchronization of data between the on-premises SQL Server and an Azure SQL Database. This approach allows for a cutover time when the application is pointed to the newly migrated SQL database. This solution satisfies the requirement of minimal downtime.

Copy the BACPAC file that contains the Azure SQL database file to Azure Blob storage.

Analysis: A BACPAC export can move the schema and data to Azure via Blob storage, but producing a consistent BACPAC requires stopping writes to the source database, and the subsequent import into Azure SQL also takes time. The resulting outage during the export and import operations does not meet the minimal-downtime requirement.

Copy the VHD that contains the Azure SQL database files to Azure Blob storage.

Analysis: This approach is for migrating entire virtual machines, not just the database content, also VHDs are not the correct method of migrating to PaaS services. Copying the VHD would require an outage for the creation and restore which would not meet the minimal downtime requirement. This solution is also geared for Infrastructure-as-a-Service migration, and the requirement is to deploy to PaaS, so this would not meet the requirement.

Conclusion:

SQL Server transactional replication is the best option to satisfy the minimal downtime requirement.

Therefore, the correct answer is:

Use SQL Server transactional replication.

89
Q

You need to recommend a notification solution for the IT Support distribution group.

What should you include in the recommendation?

Azure Network Watcher
an action group
a SendGrid account with advanced reporting
Azure AD Connect Health

A

Requirement:

An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.

Analyzing the options:

Azure Network Watcher: Azure Network Watcher is a service for monitoring and diagnosing network-related issues. It is not designed for sending notifications about directory synchronization problems. This would not satisfy the notification requirement.

An action group: This is a good solution. Action groups are the best way to send notifications for Azure services, including alerts generated by Azure Monitor. You can configure an action group to send email notifications to a specific email address or distribution list when an alert is triggered. This would be the correct option to use for the solution.

A SendGrid account with advanced reporting: While SendGrid is a good service for sending emails, it doesn’t directly tie into Azure Monitor or the notification process for Azure services. Also, setting up a SendGrid account is not needed for sending emails to a distribution group for basic email alerts, which is what is required here. You only need an action group.

Azure AD Connect Health: Azure AD Connect Health monitors the health and performance of the directory synchronization service (Azure AD Connect) and surfaces alerts when issues occur. On its own, however, it is a monitoring service; in this solution the email delivery to the distribution group is handled by an action group wired to the alerts.

Conclusion:

Using an action group is the best and most direct approach for sending email notifications to the IT Support distribution group when issues with directory synchronization occur. It is directly integrated with Azure Monitor, which can be configured to send email alerts about the health of the synchronization services.

Therefore, the correct answer is:

an action group
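For reference, an action group that emails a distribution group is a small resource. Below is a sketch of a microsoft.insights/actionGroups body; the short name and email address are illustrative assumptions.

```python
import json

# Action group with a single email receiver pointing at the IT Support
# distribution list. Alert rules that reference this group will send
# email to that address when they fire. Values are placeholders.
action_group = {
    "location": "Global",
    "properties": {
        "groupShortName": "ITSupport",
        "enabled": True,
        "emailReceivers": [
            {
                "name": "IT Support DL",
                "emailAddress": "itsupport@fabrikam.com",
                "useCommonAlertSchema": True,
            }
        ],
    },
}

print(json.dumps(action_group, indent=2))
```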

90
Q

You need to recommend a data storage strategy for WebApp1.

What should you include in the recommendation?

an Azure SQL Database elastic pool
a vCore-based Azure SQL database
an Azure virtual machine that runs SQL Server
a fixed-size DTU Azure SQL database

A

Requirements:

The database needs to support WebApp1, which experiences unpredictable usage with peak times and underutilized periods.

The requirement to use PaaS solutions whenever possible also needs to be considered.

Analyzing the Options:

An Azure SQL Database elastic pool:

Analysis: This is a good option. An elastic pool allows you to share resources among multiple databases in the pool. This approach is ideal for applications with unpredictable usage, because it provides flexibility by allowing the pool to dynamically allocate resources to the databases within it, depending on their load. This is a good choice for the given requirements. Also, it is a PaaS solution.

A vCore-based Azure SQL database:

Analysis: This is also a reasonable option; the vCore purchasing model is a flexible and scalable way to size a single Azure SQL database, and it is a PaaS offering. However, for an unpredictable workload, an elastic pool is the better fit: pooled resources are shared across databases, so spikes are absorbed without over-provisioning a single database for its peak.

An Azure virtual machine that runs SQL Server:

Analysis: While this is an option, it is not recommended because it violates the requirement to use a PaaS service when possible. This is an IaaS approach, which is not desired per the requirements. This also requires more manual administration, including patching and backups.

A fixed-size DTU Azure SQL database:

Analysis: While this is a PaaS offering, a fixed-size DTU database is not good for unpredictable workloads. This would require sizing for the peak workload, and would result in high costs when the workload is low. This does not satisfy the requirement of optimizing for unpredictable usage.

Conclusion:

Given the unpredictable nature of WebApp1’s usage, an Azure SQL Database elastic pool is the most appropriate solution. It allows for efficient resource allocation, scaling on demand, and cost optimization for variable workloads.

Therefore, the correct answer is:

an Azure SQL Database elastic pool
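As an illustration of how a pool shares resources, here is a sketch of a Microsoft.Sql/servers/elasticPools resource body; the SKU and capacity numbers are illustrative assumptions.

```python
import json

# A General Purpose pool with 4 vCores shared across its databases.
# perDatabaseSettings lets each database use between 0.25 and 2 vCores,
# so quiet databases release capacity that busy ones can consume.
elastic_pool = {
    "location": "westeurope",
    "sku": {"name": "GP_Gen5", "tier": "GeneralPurpose", "capacity": 4},
    "properties": {
        "perDatabaseSettings": {"minCapacity": 0.25, "maxCapacity": 2},
    },
}

print(json.dumps(elastic_pool, indent=2))
```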

91
Q

You design a solution for the web tier of WebApp1 as shown in the exhibit.

[Diagram Description]

Azure Traffic Manager is shown at the top.
Traffic Manager is connected to two Azure Web Apps:
One in North Europe.
One in West Europe.
Question:
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Statements:

The design supports the technical requirements for redundancy.
The design supports autoscaling.
The design requires a manual configuration if an Azure region fails.

A

Understanding the Diagram:

The diagram shows the following setup:

Azure Traffic Manager: Used to distribute traffic across multiple endpoints (in this case, two Web Apps).

Two Azure Web Apps:

One in the North Europe region.

One in the West Europe region.

Analyzing the Statements:

The design supports the technical requirements for redundancy.

Analysis: Yes. The deployment of the application in two different regions (North Europe and West Europe), and the use of Traffic Manager to route traffic to those deployments, ensures that the application has redundancy. If one region fails, Traffic Manager will redirect the traffic to the other healthy region. Thus, the system will remain accessible even if an entire region is down.

The design supports autoscaling.

Analysis: Yes. While not explicitly shown, Azure Web Apps support autoscaling. Therefore, the web app itself can automatically scale out and scale in based on the web app load. This will automatically scale the web tier when there is more load on the service, therefore supporting the requirement for auto-scaling.

The design requires a manual configuration if an Azure region fails.

Analysis: No. The beauty of using Azure Traffic Manager is that it automatically detects when one of the regions is down and directs all traffic to the healthy region. There is no manual intervention needed to fail over the traffic to another region. The failover is automatic.

Therefore, the correct answers are:

The design supports the technical requirements for redundancy: Yes

The design supports autoscaling: Yes

The design requires a manual configuration if an Azure region fails: No

92
Q

You need to recommend a solution to meet the database retention requirement.

What should you recommend?

Configure a long-term retention policy for the database.
Configure Azure Site Recovery.
Configure geo replication of the database.
Use automatic Azure SQL Database backups.

A

Requirement:

Database backups must be retained for a minimum of seven years to meet compliance requirements.

Analyzing the Options:

Configure a long-term retention policy for the database.

Analysis: This is the correct solution. Azure SQL Database offers long-term backup retention options, allowing you to retain backups for up to 10 years. You can configure backup retention using a combination of weekly, monthly, and yearly backups. This allows you to meet the requirement of retaining backups for a minimum of seven years.

Configure Azure Site Recovery.

Analysis: Azure Site Recovery is a service for disaster recovery, and is used to replicate servers to other regions or data centers. It is not a database backup or long-term retention solution. This option does not meet the database retention requirements.

Configure geo replication of the database.

Analysis: Geo-replication creates a readable replica of your database in another region. This is used for disaster recovery purposes, not for long-term backup and retention. This option does not meet the database retention requirements.

Use automatic Azure SQL Database backups.

Analysis: While automatic backups are essential for recovery purposes, they typically have a shorter retention period by default. They only provide point-in-time restore functionality, and are not intended for long-term backups that must be available for seven years. You have to configure a long-term retention policy to satisfy the retention requirement.

Conclusion:

To meet the requirement of retaining database backups for a minimum of seven years, you must configure a long-term retention policy for the database. This is the appropriate option.

Therefore, the correct answer is:

Configure a long-term retention policy for the database.
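To make the retention settings concrete, here is a sketch of a long-term retention policy body for an Azure SQL database; the weekly and monthly values are illustrative assumptions, while the yearly value is what satisfies the seven-year requirement.

```python
import json

# Long-term retention policy using ISO 8601 durations. The yearly backup
# kept for P7Y meets the seven-year compliance requirement; the weekly
# and monthly values are illustrative.
ltr_policy = {
    "weeklyRetention": "P4W",    # keep weekly backups for 4 weeks
    "monthlyRetention": "P12M",  # keep monthly backups for 12 months
    "yearlyRetention": "P7Y",    # keep yearly backups for 7 years
    "weekOfYear": 1,             # which week's backup becomes the yearly backup
}

print(json.dumps(ltr_policy, indent=2))
```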

93
Q

Overview

Contoso, Ltd. is a US-based financial services company that has a main office in New York and an office in San Francisco.

Payment Processing System

Contoso hosts a business-critical payment processing system in its New York data center. The system has three tiers: a front-end web app, a middle-tier API, and a back-end data store implemented as a Microsoft SQL Server 2014 database. All servers run Windows Server 2012 R2.

The front-end and middle-tier components are hosted by using Microsoft Internet Information Services (IIS). The application code is written in C#, and the middle-tier API uses Entity Framework to communicate with the SQL Server database. Maintenance of the database is performed by using SQL Server Agent jobs.

The database is currently 2 TB and is not expected to grow beyond 3 TB.

The payment processing system has the following compliance-related requirements:

  • Encrypt data in transit and at rest. Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.
  • Keep backups in two separate physical locations that are at least 200 miles apart and can be restored for up to seven years.
  • Support blocking inbound and outbound traffic based on the source IP address, the destination IP address, and the port number.
  • Collect Windows security logs from all the middle-tier servers and retain the logs for a period of seven years.
  • Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.
  • Only allow access to all the tiers from the internal network of Contoso.

Tape backups are configured by using an on-premises deployment of Microsoft System Center Data Protection Manager (DPM), and are then shipped offsite for long-term storage.

Historical Transaction Query System

Contoso recently migrated a business-critical workload to Azure. The workload contains a .NET web service for querying the historical transaction data residing in Azure Table storage. The .NET service is accessible from a client app that was developed in-house and runs on the client computers in the New York office. The data in the table storage is 50 GB and is not expected to increase.


Planned Changes

Contoso plans to implement the following changes:

  • Migrate the payment processing system to Azure.
  • Migrate the historical transaction data to Azure Cosmos DB to address the performance issues.

Migration Requirements

Contoso identifies the following general migration requirements:

Infrastructure services must remain available if a region or a data center fails. Failover must occur without any administrative intervention.

  • Whenever possible, Azure managed services must be used to minimize management overhead.
  • Whenever possible, costs must be minimized.

Contoso identifies the following requirements for the payment processing system:

  • If a data center fails, ensure that the payment processing system remains available without any administrative intervention. The middle tier and the web front end must continue to operate without any additional configuration.
  • Ensure that the number of compute nodes of the front-end and middle tiers of the payment processing system can increase or decrease automatically based on CPU utilization.
  • Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99 percent availability.
  • Minimize the effort required to modify the middle-tier API and the back-end tier of the payment processing system.
  • Generate alerts when unauthorized login attempts occur on the middle-tier virtual machines.
  • Ensure that the payment processing system preserves its current compliance status.
  • Host the middle tier of the payment processing system on a virtual machine.

Contoso identifies the following requirements for the historical transaction query system:

  • Minimize the use of on-premises infrastructure services.
  • Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.
  • If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.

Current Issue

The Contoso IT team discovers poor performance of the historical transaction query system, as the queries frequently cause table scans.

Information Security Requirements

The IT security team wants to ensure that identity management is performed by using Active Directory.

Password hashes must be stored on-premises only.

Access to all business-critical systems must rely on Active Directory credentials. Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically. Legitimate users must be able to authenticate successfully by using multi-factor authentication.

HOTSPOT

You need to recommend a solution for configuring the Azure Multi-Factor Authentication (MFA) settings.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area
Azure AD license:
Free
Basic
Premium P1
Premium P2
Access control for the sign-in risk policy:
Allow access and require multi-factor authentication
Block access and require multi-factor authentication
Allow access and require Azure MFA registration
Block access
Access control for the multi-factor
authentication registration policy:
Allow access and require multi-factor authentication
Block access and require multi-factor authentication
Allow access and require Azure MFA registration
Block access

A

Key Requirements:

Access to all business-critical systems must rely on Active Directory credentials.

Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically.

Legitimate users must be able to authenticate successfully by using multi-factor authentication.

Analysis of Azure MFA Options:

Azure AD License:

Correct Answer: Premium P2

Explanation: The sign-in risk policy and the multi-factor authentication registration policy are Azure AD Identity Protection features, and Identity Protection requires an Azure AD Premium P2 license. Azure AD Free and Basic do not offer risk-based policies at all, and Premium P1 includes Conditional Access but not risk-based policies, so P2 is the minimum license that can trigger MFA automatically on suspicious sign-ins.

Access control for the sign-in risk policy:

Correct Answer: Allow access and require multi-factor authentication

Explanation: For any suspicious sign-in attempt, the policy should allow access but enforce multi-factor authentication (MFA). This provides the needed security without completely blocking legitimate users.

Access control for the multi-factor authentication registration policy:

Correct Answer: Allow access and require Azure MFA registration

Explanation: Users need to have the ability to register for MFA. This policy ensures that users will be prompted to complete the MFA registration process. If they do not register, they will not be able to log on with MFA enabled, and thus cannot gain access to the resources.

Therefore, the correct answer is:

Azure AD license: Premium P2

Access control for the sign-in risk policy: Allow access and require multi-factor authentication

Access control for the multi-factor authentication registration policy: Allow access and require Azure MFA registration

94
Q

HOTSPOT

You need to recommend a solution for the users at Contoso to authenticate to the cloud-based services and the Azure AD-integrated applications.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Install Azure AD Connect and set the user sign-in option to:
Federation with AD FS
Pass-through Authentication
Password Synchronization
Implement load balancing for the components of the authentication solution by using:
Azure Application Gateway and a Basic Load Balancer
Azure Application Gateway and a Standard Load Balancer
Traffic Manager and a Basic Load Balancer
Traffic Manager and a Standard Load Balancer

A

Key Requirements:

Access to all business-critical systems must rely on Active Directory credentials.

Password hashes must be stored on-premises only.

Analyzing Authentication Options:

Install Azure AD Connect and set the user sign-in option to:

Correct Answer: Pass-through Authentication

Explanation: Since password hashes must be stored on-premises only, you cannot use password hash synchronization. Pass-through authentication allows users to authenticate directly against their on-premises Active Directory domain controller. This solution also fulfills the requirement to use on-premises Active Directory for credentials while also authenticating to Azure and cloud based services.

Why other options are incorrect:

Federation with AD FS: While this would also work, AD FS has a higher complexity for maintenance. Also, Pass-through Authentication is a more modern solution with less infrastructure needed compared to AD FS.

Password Synchronization: This stores password hashes in Azure AD, which violates the requirement for keeping password hashes on-premises only.

Implement load balancing for the components of the authentication solution by using:

Correct Answer: Traffic Manager and a Standard Load Balancer

Explanation:

You need a way to load balance authentication requests between the on-premises environment and Azure, which sit on different networks. Traffic Manager directs user traffic to a healthy authentication endpoint based on the configured routing rules. The pass-through authentication agents themselves do not need an external load balancer, because high availability is handled by the service, so what remains is balancing traffic across the on-premises authentication points.

You therefore need a way to balance traffic to the local Active Directory domain controllers, which is done with a Standard Load Balancer. It must be the Standard SKU because the endpoint must be reachable from Azure, where Traffic Manager performs its health checks.

Why other options are incorrect:

Azure Application Gateway and a Basic Load Balancer: Azure Application Gateway is not suitable for load balancing authentication requests. Also, basic load balancers do not work with Azure Traffic Manager.

Azure Application Gateway and a Standard Load Balancer: Application Gateway is a web traffic load balancer, not a load balancer for the authentication traffic.

Traffic Manager and a Basic Load Balancer: Basic Load Balancers are not compatible with the Traffic Manager solution, therefore standard load balancers are needed.

Therefore, the correct answer is:

Install Azure AD Connect and set the user sign-in option to: Pass-through Authentication

Implement load balancing for the components of the authentication solution by using: Traffic Manager and a Standard Load Balancer

95
Q

HOTSPOT

You need to recommend a solution for the data store of the historical transaction query system.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Sizing requirements:
A table that has unlimited capacity
A table that has a fixed capacity
Multiple tables that have unlimited capacity
Multiple tables that have fixed capacity
Resiliency:
An additional read region
An availability set
An availability zone

A

Key Requirements:

Migrate historical transaction data to Azure Cosmos DB to address performance issues.

Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.

If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.

Historical data is 50 GB and is not expected to increase, although it needs to be migrated.

Analyzing the Data Store Options:

Sizing Requirements:

Correct Answer: A table that has unlimited capacity

Explanation: The data is being migrated to Azure Cosmos DB, which exposes tables through its Table API. A table with unlimited capacity is required because a fixed-capacity table is limited to 10 GB, too small for the 50 GB of historical data. A single table is sufficient: Cosmos DB scales it automatically, and nothing in the requirements calls for more than one.

Resiliency:

Correct Answer: An additional read region

Explanation: To meet the regional failover requirement, add a read region. With an additional region and automatic failover enabled, Azure Cosmos DB keeps the data available without administrative intervention if the primary region fails. Availability sets and availability zones apply to virtual machines, and a zone would only provide resiliency within the primary region; neither addresses regional failover.

Therefore, the correct answer is:

Sizing requirements: A table that has unlimited capacity

Resiliency: An additional read region
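
As a rough illustration (not part of the question), a Table API account with a second read region and automatic failover could be created with Azure PowerShell along these lines, assuming the Az.CosmosDB module and placeholder names:

# Two regions give the account an additional read region; automatic
# failover removes the need for administrative intervention.
New-AzCosmosDBAccount -ResourceGroupName "rg-history" -Name "contoso-history" `
    -ApiKind "Table" -Location @("East US", "West US") -EnableAutomaticFailover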

96
Q

You need to recommend a solution for protecting the content of the back-end tier of the payment processing system.

What should you include in the recommendations?

Always Encrypted with deterministic encryption
Transparent Data Encryption (TDE)
Azure Storage Service Encryption
Always Encrypted with randomized encryption

A

Key Requirements:

Encrypt data in transit and at rest.

Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.

Analyzing the Options:

Always Encrypted with deterministic encryption:

Analysis: Always Encrypted protects sensitive data by encrypting it in the client application before it is sent to the database, so it can satisfy the requirement that only the front-end and middle-tier components hold the encryption keys. However, deterministic encryption always produces the same ciphertext for a given plaintext; this supports equality queries but leaks patterns that can be exploited through a frequency analysis attack. Randomized encryption avoids that weakness, so deterministic encryption is not the best choice here.

Transparent Data Encryption (TDE):

Analysis: TDE encrypts the database at rest (on disk) at the storage layer. The encryption is transparent to the application, which means that the application does not need to be modified to use TDE. TDE does not prevent unauthorized users from accessing the data as long as they have access to the database (or a backup of it). Also, this method does not meet the requirements for only the front-end and middle tiers to access the encryption keys. This method also does not protect data during transit.

Azure Storage Service Encryption:

Analysis: Azure Storage Service Encryption encrypts data stored in Azure Storage, not SQL databases. Since this method of encryption is not applicable to the database, this option is incorrect.

Always Encrypted with randomized encryption:

Analysis: This is the correct answer. With Always Encrypted using randomized encryption, the keys are held by the application, so only the front-end and middle-tier components can decrypt the data, and because encryption happens client-side the data is protected both at rest and in transit. Randomized encryption also offers stronger protection than deterministic encryption, since identical plaintexts produce different ciphertexts.

Conclusion:

To meet the requirements of data encryption and controlled access to encryption keys, Always Encrypted with randomized encryption is the best solution for protecting the database content of the back-end tier of the payment processing system.

Therefore, the correct answer is:

Always Encrypted with randomized encryption
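
To make the recommendation concrete, the sketch below defines a randomized-encrypted column with Transact-SQL, run from PowerShell via the SqlServer module. The table, column encryption key (CEK1), server, and connection details are hypothetical, and the key is assumed to exist already:

# Randomized encryption: identical plaintexts yield different
# ciphertexts, so no frequency patterns leak.
$ddl = @"
CREATE TABLE dbo.Payments (
    PaymentId INT PRIMARY KEY,
    CardNumber NVARCHAR(25)
        COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK1,
            ENCRYPTION_TYPE = RANDOMIZED,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
"@
# Supply credentials appropriate to your environment.
Invoke-Sqlcmd -ServerInstance "backend.database.windows.net" `
    -Database "payments" -Query $ddl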

97
Q

HOTSPOT

You need to recommend a solution for the data store of the historical transaction query system.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area
Sizing requirements:
A table that has unlimited capacity
A table that has a fixed capacity
Multiple tables that have unlimited capacity
Multiple tables that have fixed capacity
Resiliency:
An additional read region
An availability set
An availability zone

A

Key Requirements:

Migrate historical transaction data to Azure Cosmos DB to address performance issues.

Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.

If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.

Historical data is 50 GB and is not expected to increase, although it needs to be migrated.

Analyzing the Data Store Options:

Sizing Requirements:

Correct Answer: A table that has unlimited capacity

Explanation: The data is being migrated to Azure Cosmos DB and must be queried by the existing .NET web service with minimal changes, which points to the Table API. A single table with unlimited capacity is the best fit: a fixed-capacity table is limited to 10 GB, well below the 50 GB of historical data, and the requirements give no reason to split the data across multiple tables.

Resiliency:

Correct Answer: An additional read region

Explanation: To keep the application available through the failure of an entire region, replicate the data to a read region in another region. If the primary region fails, the data remains available from the read region, meeting the requirement for regional failover without manual intervention. Availability zones only provide resiliency within a single region and do not satisfy the regional failover requirement.

Therefore, the correct answer is:

Sizing requirements: A table that has unlimited capacity

Resiliency: An additional read region

98
Q

You need to recommend a solution for the network configuration of the front-end tier of the payment processing system.

What should you include in the recommendation?

Azure Application Gateway
Traffic Manager
a Standard Load Balancer
a Basic Load Balancer

A

Key Requirements:

Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.

Allow access to all the tiers only from the internal network of Contoso.

Ensure that the payment processing system remains available without any administrative intervention. The middle-tier and the web front end must continue to operate without any additional configurations if a data center fails.

The front-end tier needs to scale automatically based on CPU utilization.

Analyzing the Options:

Azure Application Gateway:

Analysis: This is the correct answer. Azure Application Gateway is a web traffic load balancer that provides SSL offloading and integrates with a web application firewall (WAF) for traffic inspection. It is highly available and, in its v2 SKU, scales automatically, satisfying the requirements for inspection, high availability, and autoscaling of the front-end tier.

Traffic Manager:

Analysis: Traffic Manager is a DNS-based load balancer for cross-region traffic distribution. It is useful for high availability and regional failover, but it cannot inspect traffic with network appliances, so it does not satisfy the inspection requirement. Traffic Manager also provides no autoscaling.

A Standard Load Balancer:

Analysis: A Standard Load Balancer is a Layer 4 load balancer that distributes traffic across virtual machines. It provides load balancing but offers no traffic inspection through appliances or a WAF, and it is not ideal for load balancing web requests.

A Basic Load Balancer:

Analysis: This is not a good option. Basic Load Balancers provide only basic load balancing, with no inspection functionality or autoscaling, and they cannot be used for cross-region traffic.

Conclusion:

To satisfy the requirements of traffic inspection, high availability, and autoscaling, Azure Application Gateway is the appropriate choice for the front-end tier network configuration. It can also satisfy the requirement to provide access to internal requests, by only allowing access to the virtual network where the application is deployed.

Therefore, the correct answer is:

Azure Application Gateway

98
Q

You need to recommend a solution for the users at Contoso to authenticate to the cloud-based services and the Azure AD-integrated applications.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Install Azure AD Connect and set the
user sign-in option to:
Federation with AD FS
Pass-through Authentication
Password Synchronization
Implement load balancing for the
components of the authentication
solution by using:
Azure Application Gateway and a Basic Load Balancer
Azure Application Gateway and a Standard Load Balancer
Traffic Manager and a Basic Load Balancer
Traffic Manager and a Standard Load Balancer

A

Key Requirements:

Access to all business-critical systems must rely on Active Directory credentials.

Password hashes must be stored on-premises only.

Analyzing the Authentication Options:

Install Azure AD Connect and set the user sign-in option to:

Correct Answer: Pass-through Authentication

Explanation: Given the requirement to keep password hashes on-premises only, you cannot use password hash synchronization. Pass-through authentication allows users to authenticate directly against their on-premises Active Directory domain controller without storing password hashes in Azure AD.

Why other options are incorrect:

Federation with AD FS: This is a more complex solution than Pass-through Authentication and not needed for the current requirements.

Password Synchronization: This option violates the requirement to keep password hashes on-premises.

Implement load balancing for the components of the authentication solution by using:

Correct Answer: Traffic Manager and a Standard Load Balancer

Explanation: Traffic Manager directs user traffic to a healthy authentication endpoint based on the configured routing rules (for example, the user's location). In addition, you need a Standard Load Balancer in front of the on-premises domain controllers so that authentication traffic to them is balanced and the endpoint remains reachable from Azure.

Why other options are incorrect:

Azure Application Gateway and a Basic Load Balancer: Azure Application Gateway is for web traffic load balancing, not authentication traffic. A basic load balancer is also not sufficient.

Azure Application Gateway and a Standard Load Balancer: Same as above, application gateway is not for authentication traffic.

Traffic Manager and a Basic Load Balancer: Basic Load Balancer will not work with Traffic Manager as it is not routable from Azure.

Therefore, the correct answer is:

Install Azure AD Connect and set the user sign-in option to: Pass-through Authentication

Implement load balancing for the components of the authentication solution by using: Traffic Manager and a Standard Load Balancer

98
Q

You need to recommend a solution for implementing the back-end tier of the payment processing system in Azure.

What should you include in the recommendation?

an Azure SQL Database managed instance
a SQL Server database on an Azure virtual machine
an Azure SQL Database single database
an Azure SQL Database elastic pool

A

Key Requirements:

The back-end data store is implemented as a Microsoft SQL Server 2014 database, and requires minimal changes.

The data store size is 1 TB and is not expected to grow beyond 3 TB.

The solution must preserve its current compliance status.

Minimize the effort required to modify the back-end tier.

Whenever possible, costs must be minimized.

Analyzing the Options:

an Azure SQL Database managed instance:

Analysis: This is a good option. Managed instances are designed for migrating existing SQL Server databases with minimal changes. They provide almost 100% compatibility with SQL Server and support many features (including compliance). This option would be ideal for maintaining the existing compliance status while also minimizing the effort needed for modifications. This is also the preferred PaaS method.

a SQL Server database on an Azure virtual machine:

Analysis: This option would work, but it adds management overhead because you must maintain the underlying operating system and SQL Server instance yourself, which conflicts with the preference for PaaS. It also does little to minimize costs, since you pay for and operate the full VM stack.

an Azure SQL Database single database:

Analysis: Azure SQL Database single databases are ideal for new development, but they lack instance-scoped features (such as SQL Agent and cross-database queries) and therefore offer less compatibility with an existing SQL Server 2014 database than a managed instance does. A migration would likely require application changes, which conflicts with the minimal-change requirement.

an Azure SQL Database elastic pool:

Analysis: While elastic pools are a good fit for multiple databases that have unpredictable usage, the requirements specify only one single SQL database instance. The requirement is also to minimize the effort required to modify the back-end tier. An elastic pool is not the appropriate fit in this scenario. Also, the cost would not be minimal for a single database.

Conclusion:

To meet the requirements of minimal changes, preserving compliance, and using PaaS while being mindful of costs, an Azure SQL Database managed instance is the most appropriate solution.

Therefore, the correct answer is:

an Azure SQL Database managed instance
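
A bare-bones provisioning sketch with Azure PowerShell (Az.Sql) might look like the following; the subnet ID, names, and sizing are placeholders, and the instance must land in a properly delegated subnet:

# General Purpose managed instance; storage can be scaled up later
# toward the 3 TB ceiling the workload may reach.
New-AzSqlInstance -Name "payments-mi" -ResourceGroupName "rg-payments" `
    -Location "eastus" -SubnetId $miSubnetId `
    -AdministratorCredential (Get-Credential) `
    -LicenseType "BasePrice" -StorageSizeInGB 2048 -VCore 8 `
    -Edition "GeneralPurpose" -ComputeGeneration "Gen5"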

99
Q

You need to recommend a compute solution for the middle tier of the payment processing system.

What should you include in the recommendation?

Azure Kubernetes Service (AKS)
virtual machine scale sets
availability sets
App Service Environments (ASEs)

A

Key Requirements:

Host the middle tier of the payment processing system on a virtual machine.

The number of compute nodes of the middle tier can increase or decrease automatically based on CPU utilization.

Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99% availability.

Ensure that the payment processing system remains available without any administrative intervention if a data center fails.

Minimize the effort required to modify the middle tier API.

Whenever possible, costs must be minimized.

Analyzing the Options:

Azure Kubernetes Service (AKS):

Analysis: While AKS is a good platform for containerized applications, it introduces extra complexity that is not required in this scenario. Also, the requirement states that the middle tier must be hosted on a virtual machine, and AKS is a container orchestration platform, not a virtual machine platform.

virtual machine scale sets:

Analysis: This is the correct choice. Virtual machine scale sets allow you to create and manage a group of identical, load-balanced virtual machines. It supports automatic scaling based on metrics like CPU utilization, and can be easily set up in multiple availability zones or regions. Virtual machine scale sets also provide high availability, which meets the SLA requirement. It also uses virtual machines, which matches the given requirements.

availability sets:

Analysis: Availability sets provide high availability by distributing virtual machines across multiple fault domains and update domains within a single data center, but they offer no automatic scaling based on CPU utilization and no protection if the entire data center fails.

App Service Environments (ASEs):

Analysis: App Service Environments are designed for hosting web apps, APIs, and mobile back ends; they are not a platform for hosting virtual machines, and the requirements specifically state that the middle tier must run on virtual machines.

Conclusion:

To meet the requirements of virtual machine hosting, automatic scaling, and high availability while minimizing management overhead and costs, virtual machine scale sets are the most appropriate solution for the middle tier of the payment processing system.

Therefore, the correct answer is:

virtual machine scale sets
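
To sketch the CPU-based autoscaling piece, the older Az.Monitor cmdlets can attach a scale-out rule to a scale set roughly as follows; cmdlet and parameter names vary across Az.Monitor versions, and $vmssId plus all names are placeholders:

# Scale out by one instance when average CPU exceeds 70% for 5 minutes.
$rule = New-AzAutoscaleRule -MetricName "Percentage CPU" `
    -MetricResourceId $vmssId -Operator GreaterThan -MetricStatistic Average `
    -Threshold 70 -TimeGrain 00:01:00 -TimeWindow 00:05:00 `
    -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase `
    -ScaleActionScaleType ChangeCount -ScaleActionValue 1

# Keep between 2 and 10 instances, defaulting to 2.
$asProfile = New-AzAutoscaleProfile -Name "cpu" -DefaultCapacity 2 `
    -MinimumCapacity 2 -MaximumCapacity 10 -Rule $rule

Add-AzAutoscaleSetting -Name "mid-tier-autoscale" -ResourceGroupName "rg-payments" `
    -Location "eastus" -TargetResourceId $vmssId -AutoscaleProfile $asProfile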

100
Q

You need to recommend a disaster recovery solution for the back-end tier of the payment processing system.

What should you include in the recommendation?

Always On Failover Cluster Instances
active geo-replication
Azure Site Recovery
an auto-failover group

A

Key Requirements:

If a data center fails, the payment processing system must remain available without any administrative intervention.

The back-end tier is an Azure SQL Database managed instance.

Analyzing the Options:

Always On Failover Cluster Instances:

Analysis: While Always On Failover Cluster Instances provide high availability, they are for SQL Server on virtual machines, not Azure SQL Database managed instances. This option is not appropriate for the back-end tier of the payment processing system since it is an Azure SQL Database managed instance. Also, while they are highly available, they are not designed for regional failover.

active geo-replication:

Analysis: Geo-replication is a sound disaster recovery technique, but active geo-replication for Azure SQL Database requires a manually initiated failover, which violates the requirement to avoid administrative intervention. It is also not supported for managed instances, which use auto-failover groups instead.

Azure Site Recovery:

Analysis: Azure Site Recovery replicates virtual machines, not PaaS database services, and its failover must be initiated manually. It is not applicable here because the back-end tier is an Azure SQL Database managed instance rather than an IaaS workload.

an auto-failover group:

Analysis: This is the correct answer. An auto-failover group in Azure SQL Database Managed Instance is a feature specifically designed to provide automatic failover to a secondary region in case of a data center outage, with no need for manual intervention. Also, this satisfies the requirement of using PaaS solutions.

Conclusion:

To meet the requirements of automatic failover with no administrative intervention for the back-end tier, an auto-failover group is the most appropriate solution.

Therefore, the correct answer is:

an auto-failover group
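
A hedged Azure PowerShell sketch of such a group (Az.Sql; instance and region names are placeholders, and both managed instances are assumed to exist):

# Automatic failover policy: Azure initiates the failover itself,
# so no administrative intervention is needed.
New-AzSqlDatabaseInstanceFailoverGroup -Name "payments-fog" `
    -ResourceGroupName "rg-payments" -Location "eastus" `
    -PrimaryManagedInstanceName "payments-mi" `
    -PartnerRegion "westus" -PartnerManagedInstanceName "payments-mi-dr" `
    -FailoverPolicy Automatic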

101
Q

You need to recommend a backup solution for the data store of the payment processing system.

What should you include in the recommendation?

Microsoft System Center Data Protection Manager (DPM)
Azure Backup Server
Azure SQL long-term backup retention
Azure Managed Disks

A

Key Requirements:

Database backups must be retained for a minimum of seven years to meet compliance requirements.

Minimize the management overhead by using Azure managed services whenever possible.

The back-end tier is an Azure SQL Database managed instance

Analyzing the Options:

Microsoft System Center Data Protection Manager (DPM):

Analysis: DPM is an on-premises backup product and is not designed for Azure SQL Database managed instances. It would add management overhead rather than minimize it, and it does not satisfy the preference for Azure managed (PaaS) services.

Azure Backup Server:

Analysis: Azure Backup Server is a hybrid backup solution that requires on-premises infrastructure. While it covers some cloud backup scenarios, it is not suitable for a PaaS service such as an Azure SQL Database managed instance and would add management overhead.

Azure SQL long-term backup retention:

Analysis: This is the correct answer. Azure SQL Database managed instance provides built-in long-term backup retention policies that can retain backups for up to 10 years. This approach satisfies the retention period and minimizes management overhead by utilizing the built-in features of a PaaS database. Also, this solution is specifically for the selected PaaS database solution (managed instance).

Azure Managed Disks:

Analysis: Azure Managed Disks are used to manage virtual machine disks, not Azure SQL database backups. This is not the appropriate solution for backing up an Azure SQL Database managed instance.

Conclusion:

To meet the requirements of a seven-year backup retention, minimal management overhead, and the selected PaaS service, Azure SQL long-term backup retention is the most appropriate solution.

Therefore, the correct answer is:

Azure SQL long-term backup retention
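
As an illustration, a policy that keeps one yearly backup for seven years could be set with Azure PowerShell roughly like this (Az.Sql; all names are placeholders):

# Keep the full backup taken in week 1 of each year for 7 years.
Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy `
    -ResourceGroupName "rg-payments" -InstanceName "payments-mi" `
    -DatabaseName "payments" -YearlyRetention "P7Y" -WeekOfYear 1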

102
Q

HOTSPOT

You need to design a solution for securing access to the historical transaction data.

What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
The Azure Cosmos DB account
will be used to:
Create users and generate resource tokens
Create users and request resource tokens
Generate resource tokens and perform authentication
Request resource tokens and perform authentication
The .NET web service
will be used to:
Create users and generate resource tokens
Create users and request resource tokens
Generate resource tokens and perform authentication
Request resource tokens and perform authentication

A

Key Concepts:

Azure Cosmos DB Resource Tokens: These tokens provide secure, limited-time access to specific resources within Cosmos DB (e.g., a single document, a collection).

Principle of Least Privilege: Grant only the necessary permissions to access the data.

Analyzing the Options:

The Azure Cosmos DB account will be used to:

Correct Answer: Create users and generate resource tokens

Explanation: The Azure Cosmos DB account is responsible for managing users and generating resource tokens. Resource tokens are associated with a user and allow access to specific resources. The tokens are generated by the Cosmos DB account using its keys. The Cosmos DB account is the administrative entity.

The .NET web service will be used to:

Correct Answer: Request resource tokens and perform authentication

Explanation: The .NET web service has no administrative access and cannot generate tokens itself. Instead, it requests a resource token from the Cosmos DB account (which validates the request) and then authenticates with that token to read the permitted data.

Therefore, the correct answer is:

The Azure Cosmos DB account will be used to: Create users and generate resource tokens

The .NET web service will be used to: Request resource tokens and perform authentication

103
Q

You need to recommend a solution for protecting the content of the payment processing system.

What should you include in the recommendation?

Transparent Data Encryption (TDE)
Azure Storage Service Encryption
Always Encrypted with randomized encryption
Always Encrypted with deterministic encryption

A

Key Requirements:

Encrypt data in transit and at rest.

Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.

Analyzing the Options:

Transparent Data Encryption (TDE):

Analysis: TDE encrypts data at rest (on disk) at the storage layer, and the encryption keys are managed at the server level, where database administrators can access them. It therefore cannot restrict key access to only the front-end and middle-tier components, and it does not protect data in transit.

Azure Storage Service Encryption:

Analysis: Azure Storage Service Encryption encrypts data stored in Azure Storage, not SQL databases. This option is not applicable for protecting the data of the database itself.

Always Encrypted with randomized encryption:

Analysis: This is the correct solution. Always Encrypted with randomized encryption protects data both in transit and at rest, because encryption and decryption happen in the client drivers. Only the front-end and middle-tier components, which hold the encryption keys, can read the protected data, and randomized encryption is the stronger of the two Always Encrypted modes.

Always Encrypted with deterministic encryption:

Analysis: While this would work, it is not the most secure method of implementing Always Encrypted. Deterministic encryption results in patterns that may expose the values through a frequency analysis attack.

Conclusion:

To meet the requirements of data encryption and controlled access to encryption keys, Always Encrypted with randomized encryption is the most appropriate choice for the database content of the payment processing system.

Therefore, the correct answer is:

Always Encrypted with randomized encryption

104
Q

You need to recommend a solution for the collection of security logs from the middle tier of the payment processing system.

What should you include in the recommendation?

Azure Notification Hubs
the Azure Diagnostics agent
Azure Event Hubs
the Azure Log Analytics agent

A

Key Requirements:

Collect Windows security logs from all the middle-tier servers.

Retain the logs for a period of seven years.

The middle tier is deployed on virtual machines

Analyzing the Options:

Azure Notification Hubs:

Analysis: Azure Notification Hubs is a service for sending push notifications to mobile devices. This service is not applicable for the requirement of collecting security logs from servers.

the Azure Diagnostics agent:

Analysis: The Azure Diagnostics agent can collect guest OS data, but it writes that data to Azure Storage, which is not well suited to long-term retention and analysis of security logs. It is not the recommended way to centralize Windows security events.

Azure Event Hubs:

Analysis: While Azure Event Hubs is a great solution for ingesting large streams of data (such as logs), it does not directly collect security logs from virtual machines. You would need another agent to send the data to Event Hubs.

the Azure Log Analytics agent:

Analysis: This is the correct solution. The Azure Log Analytics agent (also known as the Microsoft Monitoring Agent, MMA) can be installed on virtual machines to collect Windows security logs and send them to a Log Analytics workspace. The workspace is designed for storing and analyzing log data and supports multi-year retention, satisfying the seven-year requirement.

Conclusion:

To meet the requirements of collecting and retaining security logs from virtual machines, the Azure Log Analytics agent is the most appropriate solution.

Therefore, the correct answer is:

the Azure Log Analytics agent
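
Once the agent is forwarding Windows security events, the workspace can be queried with KQL. A small example, run here through Azure PowerShell (Az.OperationalInsights; the workspace ID is a placeholder):

# Count failed logons (event ID 4625) per middle-tier server.
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId `
    -Query "SecurityEvent | where EventID == 4625 | summarize count() by Computer"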

105
Q

You need to recommend a backup solution for the data store of the payment processing system.

What should you include in the recommendation?

Microsoft System Center Data Protection Manager (DPM)
long-term retention
a Recovery Services vault
Azure Backup Server

A

Key Requirements:

Database backups must be retained for a minimum of seven years to meet compliance requirements.

The back-end tier is an Azure SQL Database managed instance.

Whenever possible, costs must be minimized and use Azure managed services

Analyzing the Options:

Microsoft System Center Data Protection Manager (DPM):

Analysis: DPM is an on-premises backup solution and is not suitable for backing up an Azure SQL Database managed instance, especially since there is a preference to utilize PaaS methods. It also goes against the requirement to utilize Azure managed services when possible and will increase overhead.

long-term retention:

Analysis: This is the correct answer. Azure SQL Database managed instance has a built-in long-term backup retention policy that can retain backups for up to 10 years. This solution satisfies the requirement to retain backups for at least seven years and uses a built-in PaaS service feature.

a Recovery Services vault:

Analysis: A Recovery Services vault is a container for Azure Backup and Azure Site Recovery data; it is not what backs up an Azure SQL Database managed instance. The managed instance's automated backups and long-term retention are built into the service itself, so a vault adds nothing here.

Azure Backup Server:

Analysis: Azure Backup Server targets IaaS VMs and on-premises workloads, not Azure SQL Database managed instances, and it is not a PaaS approach. It does not address the requirement to back up the database.

Conclusion:

To meet the requirements of seven-year retention and using Azure managed services, long-term retention is the most appropriate solution. This is a built-in feature of the Azure SQL Database managed instance service.

Therefore, the correct answer is:

long-term retention

106
Q

You need to recommend a high-availability solution for the middle tier of the payment processing system.

What should you include in the recommendation?

availability zones
an availability set
the Premium App Service plan
the Isolated App Service plan

A

Key Requirements:

The middle tier is hosted on virtual machines (which implies virtual machine scale sets per previous recommendations).

Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99% availability.

Ensure that the payment processing system remains available without any administrative intervention if a data center fails.

Analyzing the Options:

availability zones:

Analysis: This is the correct solution. Availability zones provide high availability within a single Azure region by distributing virtual machine instances across physically separated data centers. Virtual Machine Scale Sets can be configured to span availability zones, therefore meeting the need to be deployed on virtual machines while also providing regional high availability and a 99.99% SLA. This satisfies the requirement.

an availability set:

Analysis: Availability sets offer high availability by spreading virtual machines across fault domains and update domains inside a single data center, not across zones or regions. They therefore cannot protect against the failure of an entire data center, and they carry a 99.95% SLA, short of the required 99.99%.

the Premium App Service plan:

Analysis: The Premium App Service plan is used for hosting web apps in Azure App Service and not for virtual machines, which is the requirement.

the Isolated App Service plan:

Analysis: Isolated App Service plans host web apps that need network isolation and high availability. Although highly available, an App Service plan does not meet the requirement to host the middle tier on virtual machine scale sets.

Conclusion:

To meet the requirements of regional high availability and the needed SLA while also being used with virtual machine scale sets, deploying virtual machine scale sets across availability zones is the most appropriate solution.

Therefore, the correct answer is:

availability zones
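
For illustration, the simple-mode New-AzVmss cmdlet can pin a scale set across zones; this is a hedged sketch with placeholder names, and the region must support availability zones:

# Instances are spread across zones 1-3, surviving a data center failure.
New-AzVmss -ResourceGroupName "rg-payments" -VMScaleSetName "mid-tier" `
    -Location "eastus" -Zone @("1", "2", "3") `
    -VirtualNetworkName "vnet-payments" -SubnetName "mid-tier" `
    -LoadBalancerName "lb-mid-tier" -InstanceCount 3 -Credential (Get-Credential)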

107
Q

HOTSPOT

You plan to deploy a network-intensive application to several Azure virtual machines.

You need to recommend a solution that meets the following requirements:

✑ Minimizes the use of the virtual machine processors to transfer data

✑ Minimizes network latency

Which virtual machine size and feature should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Virtual machine size:
Compute optimized Standard_F8s
General purpose Standard_B8ms
High performance compute Standard_H16r
Memory optimized Standard_E16s_v3
Feature:
Receive side scaling (RSS)
Remote Direct Memory Access (RDMA)
Single root I/O virtualization (SR-IOV)
Virtual Machine Multi-Queue (VMMQ)

A

Key Requirements:

Minimize the use of virtual machine processors to transfer data.

Minimize network latency.

Analyzing the Options:

Virtual machine size:

Correct Answer: High performance compute Standard_H16r

Explanation: High-performance compute VMs such as the Standard_H16r are built for network-intensive workloads. The 'r' suffix denotes RDMA capability: these sizes provide low-latency, high-throughput networking in which data transfers bypass the virtual machine's CPU.

Why other options are incorrect:

Compute optimized Standard_F8s: While these are compute optimized, they do not have features that are designed to minimize CPU utilization when transferring data.

General purpose Standard_B8ms: These virtual machines are designed for general-purpose workloads, and are not a good fit for network intensive workloads.

Memory optimized Standard_E16s_v3: These virtual machines are designed for memory-intensive applications, not for network-intensive ones.

Feature:

Correct Answer: Remote Direct Memory Access (RDMA)

Explanation: RDMA is a feature that allows data transfer to bypass the virtual machine’s CPU. This minimizes processor usage and drastically reduces network latency by having the data transfer handled by the hardware layer itself.

Why other options are incorrect:

Receive side scaling (RSS): RSS distributes network traffic across multiple CPU cores, which still involves CPU processing of the data. While it will increase throughput, it does not minimize CPU utilization during the transfer.

Single root I/O virtualization (SR-IOV): SR-IOV gives the virtual machine more direct access to the NIC and reduces CPU overhead, but it does not cut processor usage or latency as far as RDMA does for this class of workload.

Virtual Machine Multi-Queue (VMMQ): VMMQ improves network throughput, but it still relies on the virtual machine’s CPU to process the data, and does not minimize processor utilization and latency as much as RDMA does.

Therefore, the correct answer is:

Virtual machine size: High performance compute Standard_H16r

Feature: Remote Direct Memory Access (RDMA)

108
Q

DRAG DROP

Your company identifies the following business continuity and disaster recovery objectives for virtual machines that host sales, finance, and reporting applications in the company's on-premises data center:

  • The finance application requires that data be retained for seven years. In the event of a disaster, the application must be able to run from Azure. The recovery time objective (RTO) is 10 minutes.
  • The reporting application must be able to recover point-in-time data at a daily granularity. The RTO is eight hours.
  • The sales application must be able to fail over to a second on-premises data center.

You need to recommend which Azure services meet the business continuity and disaster recovery objectives. The solution must minimize costs.

What should you recommend for each application? To answer, drag the appropriate services to the correct application. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

Actions

Azure Backup only
Azure Site Recovery only
Azure Site Recovery and Azure Backup
Answer Area

Sales: Service or Services
Finance: Service or Services
Reporting: Service or Services

A

Key Concepts:

Azure Backup: Primarily for backing up data and virtual machines with varying retention periods.

Azure Site Recovery: For replicating virtual machines for DR and migration, with lower RTO than Azure Backup.

RTO (Recovery Time Objective): The target time to restore an application or service after a disruption.

Analyzing Each Application’s Requirements:

Sales Application:

Requirements: Failover to a second on-premises data center.

Solution: Azure Site Recovery can orchestrate replication and failover between two on-premises data centers (for example, Hyper-V to a secondary site), so Azure Site Recovery only meets this requirement. Azure Backup is not needed, because no retention requirement is stated for the sales application.

Finance Application:

Requirements: Seven-year data retention, RTO of 10 minutes, and ability to run from Azure in the event of a disaster.

Solution: Since the RTO is only 10 minutes, the data needs to be available in Azure. For this, Azure Site Recovery and Azure Backup is the correct solution. Azure Site Recovery will handle the replication of the virtual machine to Azure, which allows a quick failover if necessary. The data that has been replicated in Azure needs to be backed up using Azure Backup to satisfy the retention requirements.

Reporting Application:

Requirements: Daily point-in-time data recovery, RTO of eight hours.

Solution: For a data recovery of daily granularity and the RTO of eight hours, Azure Backup is sufficient. Since there is no requirement for immediate failover, the use of only Azure Backup is sufficient.

Therefore, the correct answers are:

Sales Application: Azure Site Recovery only

Finance Application: Azure Site Recovery and Azure Backup

Reporting Application: Azure Backup only

108
Q

HOTSPOT
You configure the Diagnostics settings for an Azure SQL database as shown in the following exhibit.

[Diagnostic Settings Panel Description]

Name: “Elags” (a text box for the name of the settings).
Options:
Archive to a storage account (not selected).
Stream to an event hub (not selected).
Send to Log Analytics (selected).
Log Analytics: OMSWorkspace1

LOG Section (Options):

SQLInsights (checked)
AutomaticTuning (checked)
QueryStoreRuntimeStatistics (checked)
QueryStoreWaitStatistics (checked)
Errors (checked)
DatabaseWaitStatistics (checked)
Timeouts (checked)
Blocks (checked)
Deadlocks (checked)
Audit (checked)
SQLSecurityAuditEvents (checked)

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

Answer Area
To perform real-time reporting by using
Microsoft Power BI, you must first
[answer choice].
clear Send to Log Analytics
clear SQLInsights
select Archive to a storage account
select Stream to an event hub

Diagnostics data can be reviewed in
[answer choice].
Azure Analysis Services
Azure Application Insights
Azure SQL Analytics
Microsoft SQL Server Analysis Services (SSAS)
SQL Health Check

A

Exhibit Analysis:

Name: Elags

Destination: Send to Log Analytics (OMSWorkspace1)

Logs Selected:

SQLInsights

AutomaticTuning

QueryStoreRuntimeStatistics

QueryStoreWaitStatistics

Errors

DatabaseWaitStatistics

Timeouts

Blocks

Deadlocks

Audit

SQLSecurityAuditEvents

Statement Analysis:

To perform real-time reporting by using Microsoft Power BI, you must first [answer choice].

Correct Answer: select Stream to an event hub

Explanation: Power BI can consume data from an event hub (typically through Azure Stream Analytics) to drive real-time reporting. In the current configuration the diagnostics go only to Log Analytics, which does not support real-time reporting, so you must first select Stream to an event hub.

Diagnostics data can be reviewed in [answer choice].

Correct Answer: Azure SQL Analytics

Explanation: Since the diagnostics are being sent to Log Analytics, the correct solution to review the data is to use Azure SQL Analytics. Azure SQL Analytics is a solution built on top of Log Analytics and can visualize the data collected from SQL Servers.

Therefore, the correct answers are:

To perform real-time reporting by using Microsoft Power BI, you must first: select Stream to an event hub

Diagnostics data can be reviewed in: Azure SQL Analytics
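
A hedged sketch of adding the event hub destination with Azure PowerShell; Az.Monitor cmdlet and parameter names vary by module version, and the resource IDs are placeholders:

# Stream the database's diagnostics to an event hub for Power BI.
Set-AzDiagnosticSetting -ResourceId $sqlDatabaseId -Enabled $true `
    -EventHubAuthorizationRuleId $eventHubRuleId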

109
Q

You deploy an Azure virtual machine that runs an ASP.NET application. The application will be accessed from the internet by the users at your company.

You need to recommend a solution to ensure that the users are pre-authenticated by using their Azure Active Directory (Azure AD) account before they can connect to the ASP.NET application

What should you include in the recommendation?

an Azure AD enterprise application
Azure Traffic Manager
a public Azure Load Balancer
Azure Application Gateway

A

Key Requirements:

The ASP.NET application is hosted on an Azure virtual machine.

Users must be pre-authenticated using their Azure AD accounts.

The application will be accessed from the internet.

Analyzing the Options:

an Azure AD enterprise application:

Analysis: This is the correct solution. Publishing the application through Azure AD, for example with Azure AD Application Proxy, surfaces it as an enterprise application in the tenant. Users must then sign in with their Azure AD credentials before their requests reach the ASP.NET application, which satisfies the pre-authentication requirement. The enterprise application does not modify the application itself; it lets the application delegate authentication to Azure AD.

Azure Traffic Manager:

Analysis: Azure Traffic Manager is a DNS-based load balancer for distributing traffic across multiple endpoints (e.g., across regions). It does not provide pre-authentication functionality using Azure AD. This option does not meet the authentication requirement.

a public Azure Load Balancer:

Analysis: A public Azure Load Balancer is a layer 4 load balancer that distributes network traffic across multiple virtual machines. It does not provide authentication capabilities. This option does not meet the authentication requirement.

Azure Application Gateway:

Analysis: Azure Application Gateway is a layer 7 load balancer that can provide web traffic load balancing and features such as SSL termination. While it does provide WAF features and other security settings, it does not provide pre-authentication with Azure AD, which is the key requirement.

Conclusion:

To pre-authenticate users with their Azure AD accounts before they access the ASP.NET application, you must create an Azure AD enterprise application.

Therefore, the correct answer is:

an Azure AD enterprise application

110
Q

DRAG DROP

You are designing a virtual machine that will run Microsoft SQL Server and will contain two data disks. The first data disk will store log files, and the second data disk will store data. Both disks are P40 managed disks.

You need to recommend a caching policy for each disk. The policy must provide the best overall performance for the virtual machine.

Which caching policy should you recommend for each disk? To answer, drag the appropriate policies to the correct disks. Each policy may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Policies
None
ReadOnly
ReadWrite

Answer Area
Log: Policy
Data: Policy

A

Key Concepts:

Managed Disks Caching Policies:

None: No caching is applied.

ReadOnly: Data can be read from cache, but writes go directly to disk.

ReadWrite: Data can be both read from and written to the cache.

SQL Server Disk Usage:

Log Disk: Primarily used for sequential write operations.

Data Disk: Used for both read and write operations.

Analyzing Each Disk:

Log Disk:

Caching Policy: None

Explanation: Log files in SQL Server are predominantly written sequentially. Caching writes on the log disk can add additional overhead, and can also delay the writes to the disk. Since they are mostly write operations, using caching is not ideal. With the none setting, all data will be written directly to the disk.

Data Disk:

Caching Policy: ReadOnly

Explanation: SQL Server data files are read-heavy, and Microsoft's guidance for SQL Server on Azure VMs is to enable ReadOnly host caching on the disks that hold data files. Reads are served from the cache while writes go straight to durable storage. ReadWrite caching is not recommended for SQL Server data or log disks, because cached writes risk data loss if the host fails before the cache is flushed.

Therefore, the correct answers are:

Log: Policy: None

Data: Policy: ReadOnly
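
A short sketch of applying those policies to an existing VM with Azure PowerShell; the LUNs, names, and resource group are placeholders:

# Per SQL Server guidance: no host caching on the log disk,
# read caching on the data disk.
$vm = Get-AzVM -ResourceGroupName "rg-sql" -Name "sqlvm01"
Set-AzVMDataDisk -VM $vm -Lun 0 -Caching None      # log disk
Set-AzVMDataDisk -VM $vm -Lun 1 -Caching ReadOnly  # data disk
Update-AzVM -ResourceGroupName "rg-sql" -VM $vm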

110
Q

HOTSPOT

You plan to deploy logical Azure SQL Database servers to the East US Azure region and the West US Azure region. Each server will contain 20 databases. Each database will be accessed by a different user who resides in a different on-premises location. The databases will be configured to use active geo-replication.

You need to recommend a solution that meets the following requirements:

✑ Restricts user access to each database

✑ Restricts network access to each database based on each user’s respective location

✑ Ensures that the databases remain accessible from client applications if the local Azure region fails

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Configure user access by using:
Azure PowerShell
The REST API
Transact-SQL
Configure database-level firewall rules
by using:
Azure PowerShell
The REST API
Transact-SQL

A

Key Requirements:

Restrict user access to each database.

Restrict network access to each database based on each user’s location.

Ensure database accessibility if the local Azure region fails.

There are 20 databases per server and each database is associated with a different user at a different location, and each database is configured to use active geo-replication.

Analyzing the Options:

Configure user access by using:

Correct Answer: Transact-SQL

Explanation: Transact-SQL (T-SQL) is the appropriate method for managing database-level access control. You can use T-SQL commands to create users and assign granular permissions to those users on a per-database basis. T-SQL also provides fine-grained control over specific objects inside the database itself.

Why other options are incorrect:

Azure PowerShell and The REST API: These methods are useful for managing server-level access control and configurations, but not suitable for the granular access controls needed at the database level.

Configure database-level firewall rules by using:

Correct Answer: Transact-SQL

Explanation: Database-level firewall rules can be created and managed only by using Transact-SQL (the sp_set_database_firewall_rule stored procedure, executed inside the database). They are also the right design for this scenario: because the rules live in the database, active geo-replication copies them to the secondary server, so each user's IP restriction keeps working after a failover.

Why other options are incorrect:

Azure PowerShell and The REST API: These manage server-level firewall rules, which apply to every database on the logical server, are not replicated by geo-replication, and cannot express per-database restrictions for 20 different users. Neither can create database-level rules.

High Availability: The requirement to ensure database accessibility in case of a region failure is satisfied by using active geo-replication. With active geo-replication, if the primary region becomes unavailable, the database in the secondary region will become the primary and remain available to client applications.

Therefore, the correct answers are:

Configure user access by using: Transact-SQL

Configure database-level firewall rules by using: Transact-SQL

110
Q

You have an on-premises Hyper-V cluster. The cluster contains Hyper-V hosts that run Windows Server 2016 Datacenter. The hosts are licensed under a Microsoft Enterprise Agreement that has Software Assurance.

The Hyper-V cluster hosts 3 virtual machines that run Windows Server 2012 R2. Each virtual machine runs a different workload. The workloads have predictable consumption patterns.

You plan to replace the virtual machines with Azure virtual machines that run Windows

Server 2016. The virtual machines will be sized according to the consumption pattern of each workload.

You need to recommend a solution to minimize the compute costs of the Azure virtual machines.

Which two recommendations should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Purchase Azure Reserved Virtual Machine Instances for the Azure virtual machines
Create a virtual machine scale set that uses autoscaling
Configure a spending limit in the Azure account center
Create a lab in Azure DevTest Labs and place the Azure virtual machines in the lab
Activate Azure Hybrid Benefit for the Azure virtual machines

A

Key Requirements:

Replace on-premises VMs with Azure VMs running Windows Server 2016.

VMs sized according to consumption patterns.

Minimize compute costs.

The on-prem Hyper-V hosts are licensed under an enterprise agreement with software assurance.

Workloads have predictable consumption patterns.

Analyzing the Options:

Purchase Azure Reserved Virtual Machine Instances for the Azure virtual machines:

Analysis: This is a correct solution. Reserved VM Instances provide significant discounts (up to 72%) compared to pay-as-you-go rates. Since the workloads have predictable consumption patterns, you can buy a reserved instance for the base capacity of the workload, and use autoscaling to handle any spikes of traffic. This directly addresses the cost minimization requirement.

Create a virtual machine scale set that uses autoscaling:

Analysis: Autoscaling can reduce costs for bursty workloads, but these workloads have predictable consumption patterns and the VMs will already be sized to match them, so a scale set with autoscaling is not one of the two best answers for minimizing compute costs here.

Configure a spending limit in the Azure account center:

Analysis: A spending limit can prevent overspending, but it does not reduce the compute cost of the virtual machines themselves. It is a useful control for overall cost management, not a way to minimize compute costs.

Create a lab in Azure DevTest Labs and place the Azure virtual machines in the lab:

Analysis: Azure DevTest Labs is designed for non-production environments like development and testing, and it is not suitable for the planned production virtual machines. This option does not minimize costs.

Activate Azure Hybrid Benefit for the Azure virtual machines:

Analysis: This is a correct solution. The Azure Hybrid Benefit allows you to use your on-premises Windows Server licenses with Software Assurance to reduce the cost of Windows Server virtual machines in Azure. Since your on-premises hosts have software assurance this directly reduces compute costs.

Conclusion:

The two recommendations that most directly minimize compute costs are Azure Reserved VM Instances, which discount the predictable base load, and Azure Hybrid Benefit, which reuses the existing Windows Server licenses covered by Software Assurance.

Therefore, the two correct answers are:

Purchase Azure Reserved Virtual Machine Instances for the Azure virtual machines

Activate Azure Hybrid Benefit for the Azure virtual machines
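
For the Hybrid Benefit half, the documented pattern for an existing VM is a one-line license change (the names here are placeholders); reserved instances are purchased separately through the portal or the Reservations APIs:

# Apply the on-premises Windows Server license (Software Assurance).
$vm = Get-AzVM -ResourceGroupName "rg-prod" -Name "app-vm1"
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "rg-prod" -VM $vm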

111
Q

Your company wants to use an Azure Active Directory (Azure AD) hybrid identity solution.

You need to ensure that users can authenticate if the internet connection to the on-premises Active Directory is unavailable. The solution must minimize authentication prompts for the users.

What should you include in the solution?

an Active Directory Federation Services (AD FS) server
pass-through authentication and Azure AD Seamless Single Sign-On (Azure AD Seamless SSO)
password hash synchronization and Azure AD Seamless Single Sign-On (Azure AD Seamless SSO)

A

Key Requirements:

Azure AD hybrid identity solution.

Users must be able to authenticate even if the internet connection to on-premises AD is unavailable.

Minimize authentication prompts.

Analyzing the Options:

an Active Directory Federation Services (AD FS) server:

Analysis: AD FS can provide hybrid identity, but it depends on connectivity to the on-premises infrastructure: if the internet link between the cloud and the on-premises environment is down, cloud sign-ins that federate to AD FS fail. AD FS is also more complex to deploy and maintain.

pass-through authentication and Azure AD Seamless Single Sign-On (Azure AD Seamless SSO):

Analysis: Pass-through authentication relays each sign-in request to the on-premises domain controllers, so if the internet connection to the on-premises AD is unavailable, authentication fails. Azure AD Seamless SSO improves the sign-in experience by minimizing prompts, but it does not remove the dependency on on-premises AD. This option does not meet the requirement.

password hash synchronization and Azure AD Seamless Single Sign-On (Azure AD Seamless SSO):

Analysis: This is the correct answer. Password hash synchronization synchronizes password hashes to Azure AD. With the hashes available in Azure AD, even if the internet connection to the on-premises AD is unavailable, users can still authenticate with their synchronized password hashes. The Azure AD Seamless SSO can also allow the user to authenticate without any prompts, as long as the machine is domain joined.

Conclusion:

To ensure that users can authenticate even when the internet connection to on-premises AD is unavailable, and to minimize authentication prompts, the best solution is to use password hash synchronization and Azure AD Seamless Single Sign-On (Azure AD Seamless SSO).

Therefore, the correct answer is:

password hash synchronization and Azure AD Seamless Single Sign-On (Azure AD Seamless SSO)

111
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company has deployed several virtual machines (VMs) on-premises and to Azure.

Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.

Several VMs are exhibiting network connectivity issues.

You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.

Solution: Install and configure the Microsoft Monitoring Agent and the Dependency Agent on all VMs. Use the Wire Data solution in Azure Monitor to analyze the network traffic.

Does the solution meet the goal?

Yes
No

A

Key Requirements:

Analyze network traffic to determine whether packets are allowed or denied to VMs.

VMs are located both on-premises and in Azure.

Azure ExpressRoute is used for on-premises to Azure connectivity.

Proposed Solution: Install and configure the Microsoft Monitoring Agent and the Dependency Agent on all VMs. Use the Wire Data solution in Azure Monitor to analyze the network traffic.

Analysis:

Microsoft Monitoring Agent (MMA) and Dependency Agent:

The MMA agent collects data, including performance and log data. The Dependency agent collects network connection data and can identify dependencies between virtual machines and processes.

Wire Data Solution in Azure Monitor:

The Wire Data solution uses network data to show connection information such as ports and protocols. It will give information about which VM has network connectivity issues.

Traffic Analysis:

While the agents and the Wire Data solution can surface network connection information, they cannot tell you whether packets are being allowed or denied by network rules. Wire Data reports connection details such as ports and protocols, but it gives no insight into firewall or NSG rules, which is what you need in order to see whether traffic is being blocked.

Conclusion:

The proposed solution only provides connectivity information; it does not show whether network traffic is being allowed or denied by network rules, so it cannot answer the question the scenario asks.

Therefore, the correct answer is:

No

112
Q

DRAG DROP

Your company has users who work remotely from laptops.

You plan to move some of the applications accessed by the remote users to Azure virtual machines. The users will access the applications in Azure by using a point-to-site VPN connection. You will use certificates generated from an on-premises-based certification authority (CA).

You need to recommend which certificates are required for the deployment.

What should you include in the recommendation? To answer, drag the appropriate certificates to the correct targets. Each certificate may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Certificates

A root CA certificate that has the private key
A root CA certificate that has the public key only
A user certificate that has the private key
A user certificate that has the public key only

Answer Area
Trusted Root Certification Authorities certificate store on each laptop: Certificate
The users’ Personal store on each laptop: Certificate
The Azure VPN gateway: Certificate

A

Key Concepts:

Root CA Certificate: Used to establish trust in certificates issued by the CA.

User Certificate: Used to identify and authenticate a user connecting to the VPN.

Public Key: Used for encryption and verifying signatures.

Private Key: Used for decryption and signing.

Trusted Root Certification Authorities Store: This is used to establish the trust chain.

Personal Store: This is the local certificate store on the device.

Azure VPN Gateway: This component needs to have the public key of the certificate.

Analyzing the Certificate Requirements:

Trusted Root Certification Authorities certificate store on each laptop:

Correct Certificate: A root CA certificate that has the public key only

Explanation: To trust certificates issued by your on-premises CA, the root CA certificate’s public key must be installed in the Trusted Root Certification Authorities store on each laptop. This establishes the chain of trust needed for client authentication.

The users’ Personal store on each laptop:

Correct Certificate: A user certificate that has the private key

Explanation: The user certificate is used for authenticating the client. It must include the private key as this is required for the authentication process. This certificate is installed in the Personal store of the user’s device and is used by the VPN client software for authentication.

The Azure VPN gateway:

Correct Certificate: A root CA certificate that has the public key only

Explanation: The Azure VPN Gateway needs the public key of the root CA certificate so that it can establish the trust chain and accept certificates issued by that CA.

Therefore, the correct answers are:

Trusted Root Certification Authorities certificate store on each laptop: A root CA certificate that has the public key only

The users’ Personal store on each laptop: A user certificate that has the private key

The Azure VPN gateway: A root CA certificate that has the public key only
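
To make the key distribution concrete, here is a minimal sketch using the Python cryptography package (an assumption for illustration; in the scenario the certificates would come from the on-premises CA, and all names below are hypothetical). It generates a self-signed root certificate, issues a client certificate from it, and marks which artifact goes where:

import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

now = datetime.datetime.utcnow()
root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "P2SRootCA")])

# Self-signed root CA certificate (stands in for the on-premises CA's root).
root_cert = (
    x509.CertificateBuilder()
    .subject_name(root_name)
    .issuer_name(root_name)
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(root_key, hashes.SHA256())
)

# Client (user) certificate issued by the root. It is signed with the root's
# private key, but that private key never leaves the CA.
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
client_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "P2SClient")]))
    .issuer_name(root_name)
    .public_key(client_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(root_key, hashes.SHA256())
)

# Public-key-only root certificate: upload this to the Azure VPN gateway and
# install it in each laptop's Trusted Root Certification Authorities store.
root_public_pem = root_cert.public_bytes(serialization.Encoding.PEM)

# Client certificate plus its private key: install in the user's Personal store.
client_cert_pem = client_cert.public_bytes(serialization.Encoding.PEM)
client_key_pem = client_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)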

113
Q

You have 100 Microsoft SQL Server Integration Services (SSIS) packages that are configured to use 10 on-premises SQL Server databases as their destinations.

You plan to migrate the 10 on-premises databases to Azure SQL Database.

You need to recommend a solution to host the SSIS packages in Azure. The solution must ensure that the packages can target the SQL Database instances as their destinations.

What should you include in the recommendation?

SQL Server Migration Assistant (SSMA)
Azure Data Catalog
Data Migration Assistant
Azure Data Factory

A

Key Requirements:

Migrate 10 on-premises SQL Server databases to Azure SQL Database.

Host 100 SSIS packages in Azure.

SSIS packages must target the migrated Azure SQL Database instances.

Analyzing the Options:

SQL Server Migration Assistant (SSMA):

Analysis: SSMA is a tool for migrating databases from on-premises SQL Server to Azure SQL Database, and other databases. While it helps with migrating the databases, it does not address the need for hosting the SSIS packages, which is the main issue. This option is not correct.

Azure Data Catalog:

Analysis: Azure Data Catalog is a metadata management service and not suitable for hosting SSIS packages or running any type of workflows. This option does not meet the requirements.

Data Migration Assistant:

Analysis: Data Migration Assistant helps assess and migrate databases but does not provide a platform for hosting or running SSIS packages. This would assist with the database migrations, but not with the SSIS packages. This option is not correct.

Azure Data Factory:

Analysis: This is the correct answer. Azure Data Factory (ADF) provides the ability to execute SSIS packages. It includes the Azure-SSIS Integration Runtime, which allows for running SSIS packages in Azure. This service can run existing packages that target SQL databases in Azure. It also supports connecting to on-premises data sources, if needed.

Conclusion:

To host the SSIS packages in Azure and enable them to target Azure SQL Database instances, Azure Data Factory with its Azure-SSIS Integration Runtime is the ideal solution.

Therefore, the correct answer is:

Azure Data Factory
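
As a rough illustration of what the recommendation involves, the following hedged sketch provisions an Azure-SSIS Integration Runtime in an existing data factory through the ARM REST API. The subscription, resource group, factory name, and sizing values are placeholders, not values from the question, and the exact schema may need adjusting for your SDK or API version:

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    "https://management.azure.com/subscriptions/<sub-id>"
    "/resourceGroups/<rg>/providers/Microsoft.DataFactory"
    "/factories/<factory>/integrationRuntimes/ssis-ir"
    "?api-version=2018-06-01"
)
body = {
    "properties": {
        "type": "Managed",  # managed (Azure-SSIS) integration runtime
        "typeProperties": {
            "computeProperties": {
                "location": "West Europe",
                "nodeSize": "Standard_D2_v3",
                "numberOfNodes": 1,
            },
            "ssisProperties": {"edition": "Standard"},
        },
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()  # once running, the IR executes the existing SSIS packages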

114
Q

The developers at your company are building a containerized Python Django app.

You need to recommend a platform to host the app.

The solution must meet the following requirements:

✑ Support autoscaling.

✑ Support continuous deployment from an Azure Container Registry.

✑ Provide built-in functionality to authenticate app users by using Azure Active Directory (Azure AD).

Which platform should you include in the recommendation?

Azure Container Instances
an Azure App Service instance that uses containers
Azure Kubernetes Service (AKS)

A

Key Requirements:

Host a containerized Python Django app.

Support autoscaling.

Support continuous deployment from Azure Container Registry.

Provide built-in functionality for Azure AD authentication.

Analyzing the Options:

Azure Container Instances:

Analysis: While Azure Container Instances (ACI) is great for running individual containers, it does not have built-in autoscaling capabilities, nor does it provide built-in support for Azure AD authentication. While you can pull the image from an Azure Container Registry, the deployment is not continuous. This does not meet the requirements.

an Azure App Service instance that uses containers:

Analysis: This is the correct solution. Azure App Service with container support allows you to deploy containers, and it has built-in autoscaling and continuous deployment capabilities. Also, Azure App Service can easily be integrated with Azure AD for authentication, which satisfies the requirement.

Azure Kubernetes Service (AKS):

Analysis: AKS is a powerful container orchestration platform, but it’s more complex to manage than App Service, and not necessarily required for this type of application. While AKS supports autoscaling and continuous deployment, it does not provide built-in functionality to handle the authentication needs. This method of deployment would add unneeded overhead.

Conclusion:

To meet the requirements of autoscaling, continuous deployment, and built-in Azure AD authentication for a containerized web app, an Azure App Service instance that uses containers is the best option.

Therefore, the correct answer is:

an Azure App Service instance that uses containers
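
To ground the built-in authentication point, here is a hedged sketch that enables the App Service built-in (Easy Auth) Azure AD provider through the ARM authsettingsV2 endpoint. All names, the app registration client ID, and the api-version are placeholders, and the payload is abbreviated:

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Web/sites/<app-name>/config/authsettingsV2"
    "?api-version=2022-03-01"
)
body = {
    "properties": {
        "globalValidation": {
            "requireAuthentication": True,
            "unauthenticatedClientAction": "RedirectToLoginPage",
        },
        "identityProviders": {
            "azureActiveDirectory": {
                "enabled": True,
                "registration": {
                    "clientId": "<aad-app-registration-client-id>",
                    "openIdIssuer": "https://sts.windows.net/<tenant-id>/",
                },
            }
        },
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()  # app now requires Azure AD sign-in, with no code changes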

114
Q

Your company purchases an app named App1.

You need to recommend a solution to ensure that App1 can read and modify access reviews.

What should you recommend?

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.

A

Key Concepts:

Access Reviews: A feature in Azure AD that allows you to review and manage user access to resources.

Microsoft Graph API: The primary API for interacting with Microsoft 365 services, including Azure AD.

API Permissions: Applications need specific permissions to access data through the Microsoft Graph API.

Analyzing the Options:

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API:

Analysis: This is the correct answer. Registering the app in Azure AD creates an application identity, and then assigning the required Graph API permissions (such as AccessReview.ReadWrite.All) allows the application to read and modify access reviews.

From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions:

Analysis: While the Access control (IAM) blade allows you to delegate permissions on Azure resources, this is not the method for delegating API permissions for an Azure AD application. IAM is for role assignments, not API permissions.

From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API:

Analysis: API Management is used for managing APIs and exposing them to consumers; it does not register an application identity in Azure AD. Publishing App1's API would not grant the application access to the Graph API or to access reviews.

From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions:

Analysis: API Management is used for managing APIs and exposing them to consumers, and does not delegate permissions for Azure AD applications using IAM. Also, IAM is not for API permissions.

Conclusion:

To grant App1 permissions to read and modify access reviews, you must first register the application in Azure AD and then grant the necessary permissions to the Microsoft Graph API.

Therefore, the correct answer is:

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API
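
A hedged sketch of what App1 can do once it is registered and the AccessReview.ReadWrite.All application permission has been granted and admin-consented. The tenant ID, client ID, and secret are placeholders:

import requests
from azure.identity import ClientSecretCredential

cred = ClientSecretCredential("<tenant-id>", "<app1-client-id>", "<app1-secret>")
token = cred.get_token("https://graph.microsoft.com/.default").token

# List access review definitions (a read); the same permission also allows
# the POST/PATCH calls that modify reviews.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for definition in resp.json().get("value", []):
    print(definition["displayName"])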

114
Q

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

Azure SQL Database Hyperscale
Azure SQL Database Premium
Azure SQL Database Serverless
Azure SQL Database Managed Instance General Purpose

A

Key Requirements:

Zero data loss failover: Failover between replicas must occur without any data loss.

Zone outage resilience: The database must remain available in the event of an availability zone outage.

Minimize costs: The solution should be cost-effective.

Analyzing the Options:

Azure SQL Database Hyperscale:

Analysis: Hyperscale offers high availability and geo-replication options and is designed for extremely large databases that need high-performance scaling. While it supports zero-data-loss failover, it is the most expensive of these options, so it does not meet the cost-minimization requirement.

Azure SQL Database Premium:

Analysis: Premium tier databases can provide high availability, but they do not support zone-redundancy. Also, while they can provide data loss prevention using geo-replication, this option requires manual intervention, which is not ideal for a high availability solution. This does not satisfy the requirements.

Azure SQL Database Serverless:

Analysis: Serverless is a compute tier for single Azure SQL databases with autoscaling capabilities, but it does not offer zone redundancy, as its primary design goal is on-demand compute. Serverless also does not guarantee data-loss prevention without additional configuration (such as geo-replication), and it supports only single databases, not pools, making it less flexible.

Azure SQL Database Managed Instance General Purpose:

Analysis: This option is ideal for the given requirements. Azure SQL Database Managed Instance is zone redundant, providing a 99.99% SLA, and its built-in high availability can deliver automatic zero-data-loss failover at a lower cost than Hyperscale. This meets the zero-data-loss requirement, is zone redundant, and is a PaaS solution.

Conclusion:

To meet the requirements of zero data loss failover, zone outage resilience, and cost minimization, Azure SQL Database Managed Instance General Purpose is the best option.

Therefore, the correct answer is:

Azure SQL Database Managed Instance General Purpose

114
Q

HOTSPOT

You plan to deploy the backup policy shown in the following exhibit.

Policy1

Backup frequency

Daily
At: 6:00 PM
(UTC) Coordinated Universal Time
Retention range

Retention of daily backup point.

At: 6:00 PM
For: 90 Day(s)
Retention of weekly backup point.

On: Sunday
At: 6:00 PM
For: 26 Week(s)
Retention of monthly backup point.

Week Based / Day Based
On: First
Day: Sunday
At: 6:00 PM
For: 36 Month(s)
Retention of yearly backup point.

Not Configured

Use the drop-down menus to select the answer choice that completes each statement based on
the information presented in the graphic. NOTE: Each correct selection is worth one point.
Virtual machines that are backed up using the policy can
be recovered for up to a maximum of [answer choice].
90 days
26 weeks
36 months
45 months
The minimum recovery point objective (RPO) for virtual
machines that are backed up by using the policy is
[answer choice].
1 hour
1 day
1 week
1 month
1 year

A

Exhibit Analysis:

Policy Name: Policy1

Backup Frequency:

Daily at 6:00 PM UTC

Retention:

Daily backup points: 90 days

Weekly backup points (Sunday at 6:00 PM UTC): 26 weeks

Monthly backup points (first Sunday at 6:00 PM UTC): 36 months

Yearly backup points: Not configured

Statement Analysis:

Virtual machines that are backed up using the policy can be recovered for up to a maximum of [answer choice].

Correct Answer: 36 months

Explanation: The maximum retention for the policy is the monthly backup retention, which is set to 36 months.

The minimum recovery point objective (RPO) for virtual machines that are backed up by using the policy is [answer choice].

Correct Answer: 1 day

Explanation: The RPO (Recovery Point Objective) is the maximum acceptable period in which data might be lost due to a disruption. This policy takes one backup per day at 6:00 PM, so the minimum RPO is 1 day (the interval between consecutive daily backups).

Therefore, the correct answers are:

Virtual machines that are backed up using the policy can be recovered for up to a maximum of: 36 months

The minimum recovery point objective (RPO) for virtual machines that are backed up by using the policy is: 1 day
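
A quick arithmetic check of both answers, using only the policy values from the exhibit (pure Python, no assumptions beyond the exhibit):

daily_retention_days = 90
weekly_retention_days = 26 * 7    # 182 days
monthly_retention_days = 36 * 30  # about 1,080 days; yearly point not configured

# Oldest recoverable point: the longest retention wins, i.e. the 36-month
# monthly backup point.
print(max(daily_retention_days, weekly_retention_days, monthly_retention_days))

backups_per_day = 1               # one daily backup at 6:00 PM UTC
rpo_hours = 24 / backups_per_day
print(rpo_hours)                  # 24 hours, i.e. a minimum RPO of 1 day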

115
Q

You have 500 Azure web apps in the same Azure region. The apps use a premium Azure Key Vault for authentication. A developer reports that some authentication requests are being throttled.

You need to recommend a solution to increase the available throughput of the key vault. The solution must minimize costs.

What should you recommend?

Increase the number of key vaults in the subscription
Configure geo-replication.
Change the pricing tier.
Configure load balancing for the apps.

A

Key Requirements:

500 Azure web apps in the same region.

Apps use a premium Azure Key Vault for authentication.

Authentication requests are being throttled.

Increase key vault throughput.

Minimize costs.

Analyzing the Options:

Increase the number of key vaults in the subscription:

Analysis: While this could work, it adds maintenance overhead and complexity, and it would cost more. This is not a good solution.

Configure geo-replication:

Analysis: Geo-replication is for making the service resilient to regional outages, not for increasing throughput. This does not help with resolving the authentication throttling issues. It will also add to cost.

Change the pricing tier:

Analysis: Changing the pricing tier can increase the available operations per second, but the vault is already in the Premium tier, so there is no higher tier to move to. This is therefore not a viable way to improve throughput here.

Configure load balancing for the apps:

Analysis: This is the correct solution. Load balancing distributes the request load across app instances so that authentication calls are spread out rather than arriving in bursts, which reduces the likelihood of the key vault throttling requests. Note that the balancing happens on the application side; there is no Azure load balancer that sits in front of Key Vault itself. This is also the least costly option for improving effective throughput.

Conclusion:

To increase key vault throughput while minimizing cost, the best solution is to configure load balancing for the apps.

Therefore, the correct answer is:

Configure load balancing for the apps.
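
The recommended change is on the application side. As a complementary client-side illustration (not the literal answer), here is a hedged sketch that caches Key Vault reads so each app instance makes far fewer calls; the vault URL, secret name, and TTL are placeholders:

import time
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient("https://<vault-name>.vault.azure.net", DefaultAzureCredential())
_cache: dict[str, tuple[float, str]] = {}

def get_secret_cached(name: str, ttl_seconds: int = 300) -> str:
    """Return a secret, hitting Key Vault at most once per TTL window."""
    now = time.monotonic()
    hit = _cache.get(name)
    if hit and now - hit[0] < ttl_seconds:
        return hit[1]
    value = client.get_secret(name).value  # one vault call per TTL, not per request
    _cache[name] = (now, value)
    return value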

116
Q

You use Azure Application Insights.

You plan to use continuous export.

You need to store Application Insights data for five years.

Which Azure service should you use?

Azure Backup
Azure SQL Database
Azure Storage
Azure Monitor Logs

A

Key Requirements:

Use continuous export from Azure Application Insights.

Store the data for five years.

Analyzing the Options:

Azure Backup:

Analysis: Azure Backup is designed for backing up virtual machines, databases, and files; it is not an appropriate destination for Application Insights telemetry, and it cannot be used to back up Application Insights data directly. This is not the correct option.

Azure SQL Database:

Analysis: While Azure SQL Database can store data, it is not well suited to the raw, semi-structured telemetry that Application Insights exports. The service is optimized for transactional data, not for log-based events. This option is not the best choice.

Azure Storage:

Analysis: This is the correct solution. Continuous export writes Application Insights telemetry to Azure Storage in its raw format, and Storage costs less than Azure Monitor Logs for long-term retention. Azure Storage also supports long retention periods, easily accommodating the five-year requirement.

Azure Monitor Logs:

Analysis: Azure Monitor Logs can store Application Insights data, but it is intended for querying and analyzing log data. The requirement is simply to store the data for five years, not to analyze it, and storing data in Azure Monitor Logs costs more than storing it in Azure Storage. This option is not ideal because of cost.

Conclusion:

To store Application Insights data for five years using continuous export, Azure Storage is the most appropriate and cost-effective service.

Therefore, the correct answer is:

Azure Storage
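
Continuous export writes telemetry as line-delimited JSON blobs, which any consumer can read back later. A hedged sketch with the azure-storage-blob SDK; the account URL and container name are placeholders:

import json
from azure.identity import DefaultAzureCredential
from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://<account>.blob.core.windows.net",
    container_name="appinsights-export",
    credential=DefaultAzureCredential(),
)
for blob in container.list_blobs():
    data = container.download_blob(blob.name).readall()
    for line in data.splitlines():   # one JSON telemetry document per line
        event = json.loads(line)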

117
Q

HOTSPOT

You have a virtual machine scale set named SS1.

You configure autoscaling as shown in the following exhibit.

Default Profile1

Delete warning
The very last or default recurrence rule cannot be deleted. Instead, you can disable autoscale to turn off autoscale.

Scale mode
- Scale based on a metric (selected)
- Scale to a specific instance count

Scale out
When SS1 (Average) Percentage CPU > 75
Increase instance count by 3

Scale in
When SS1 (Average) Percentage CPU < 25
Decrease instance count by 2

+ Add a rule

Instance limits
Minimum: 3
Maximum: 15
Default: 6

Schedule
This scale condition is executed when none of the other scale condition(s) match.

You configure the scale out and scale in rules to have a duration of 10 minutes and a cool down time of 10 minutes.

Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

If SS1 scales to nine virtual machines, what is the minimum amount of time before SS1 will scale up?
10 minutes
20 minutes
30 minutes
60 minutes

If SS1 scales to nine virtual machines, and then the average processor utilization is 30 percent for one hour, how many virtual machines will be in SS1?
1
3
6
9
12
15

A

Exhibit Analysis:

Scale Mode: Based on a metric.

Scale Out Rule:

Trigger: Average CPU > 75%

Action: Increase instance count by 3

Duration: 10 minutes

Cooldown: 10 minutes

Scale In Rule:

Trigger: Average CPU < 25%

Action: Decrease instance count by 2

Duration: 10 minutes

Cooldown: 10 minutes

Instance Limits:

Minimum: 3

Maximum: 15

Default: 6

The rules have a duration of 10 minutes, and a cool down time of 10 minutes.

Statement Analysis:

If SS1 scales to nine virtual machines, what is the minimum amount of time before SS1 will scale up?

Correct Answer: 20 minutes

Explanation: A scale action fires only after the metric has stayed beyond its threshold for the 10-minute duration, and after any scale action a 10-minute cooldown must elapse before the next one. The minimum time before SS1 scales up again is therefore 10 + 10 = 20 minutes.

If SS1 scales to nine virtual machines, and then the average processor utilization is 30 percent for one hour, how many virtual machines will be in SS1?

Correct Answer: 6

Explanation: At 30 percent, the average CPU is above the 25 percent scale-in threshold and below the 75 percent scale-out threshold, so neither rule matches. Per the exhibit, the default condition is executed when none of the other scale conditions match, and the instance count returns to the profile's default value, which is 6.

Therefore, the correct answers are:

If SS1 scales to nine virtual machines, what is the minimum amount of time before SS1 will scale up?: 20 minutes

If SS1 scales to nine virtual machines, and then the average processor utilization is 30 percent for one hour, how many virtual machines will be in SS1?: 6
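
A small worked check of the numbers, following the reasoning above (pure Python; thresholds and limits come straight from the exhibit):

SCALE_OUT_CPU, SCALE_IN_CPU = 75, 25
DURATION_MIN, COOLDOWN_MIN = 10, 10
MINIMUM, MAXIMUM, DEFAULT = 3, 15, 6

# Minimum time before the next scale-out: the metric must exceed the threshold
# for the full 10-minute duration, and the previous action's 10-minute
# cooldown must also elapse.
print(DURATION_MIN + COOLDOWN_MIN, "minutes")  # 20

def rule_fired(avg_cpu: float) -> str:
    """Return which autoscale rule, if any, matches a sustained CPU average."""
    if avg_cpu > SCALE_OUT_CPU:
        return "scale out: +3 instances"
    if avg_cpu < SCALE_IN_CPU:
        return "scale in: -2 instances"
    return "no rule matches; default applies (6 instances)"

print(rule_fired(30.0))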

117
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.

Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.

You need to enable single sign-on (SSO) for company users.

Solution: Install and configure an Azure AD Connect server to use password hash synchronization and select the Enable single sign-on option.

Does the solution meet the goal?

Yes
No

A

Key Requirements:

On-premises Active Directory Domain Services (AD DS) and Azure Active Directory (Azure AD) environment.

Users should be automatically signed in to cloud apps while on their corporate desktops connected to the corporate network.

Enable single sign-on (SSO) for company users.

Proposed Solution: Install and configure an Azure AD Connect server to use password hash synchronization and select the Enable single sign-on option.

Analysis:

Password Hash Synchronization: Password hash synchronization copies the password hashes from the on-premises domain controllers to Azure AD, so that users can use the same username and password to sign in to cloud services.

Enable single sign-on option: The enable single sign-on option within Azure AD Connect will enable Azure AD Seamless Single Sign-On.

Azure AD Seamless SSO: This feature provides automatic sign-on for users on domain-joined machines that are connected to the corporate network. It will allow users to authenticate to Azure without requiring them to re-enter their credentials.

Conclusion:

The proposed solution, by implementing Password Hash Synchronization and Azure AD Seamless SSO, will allow users on the corporate network to automatically authenticate to Azure services without being prompted.

Therefore, the correct answer is:

Yes

117
Q

Your company plans to publish APIs for its services by using Azure API Management.

You discover that service responses include the AspNet-Version header.

You need to recommend a solution to remove AspNet-Version from the response of the published APIs.

What should you include in the recommendation?

a new product
a modification to the URL scheme
a new policy

A

Key Requirement:

Remove the AspNet-Version header from the response of APIs published using Azure API Management.

Analyzing the Options:

a new product:

Analysis: Products in API Management are used to group APIs and configure access controls. They do not provide a mechanism for modifying response headers. This would not help remove the AspNet-Version header.

a modification to the URL scheme:

Analysis: Modifying the URL scheme would not have any impact on response headers. This would not help remove the AspNet-Version header.

a new policy:

Analysis: This is the correct answer. API Management policies can be used to customize the behavior of APIs. There is a specific policy that can set, modify, or remove headers in both the request and the response. This can be used to remove the AspNet-Version header from API responses.

Conclusion:

To remove the AspNet-Version header from API responses in Azure API Management, you should use a new policy. This allows you to control the behavior of the API and remove the unwanted header from the response.

Therefore, the correct answer is:

a new policy
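
The mechanism is the set-header policy with exists-action="delete" in the outbound section. A hedged sketch that applies it at API scope through the ARM REST API; the service and API names and the api-version are placeholders, and the header name follows the question:

import requests
from azure.identity import DefaultAzureCredential

policy_xml = """<policies>
  <inbound><base /></inbound>
  <backend><base /></backend>
  <outbound>
    <base />
    <set-header name="AspNet-Version" exists-action="delete" />
  </outbound>
  <on-error><base /></on-error>
</policies>"""

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.ApiManagement/service/<apim-name>"
    "/apis/<api-id>/policies/policy?api-version=2021-08-01"
)
resp = requests.put(
    url,
    json={"properties": {"format": "xml", "value": policy_xml}},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()  # responses from this API no longer carry the header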

118
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using the Regulatory compliance dashboard in Azure Security Center.

Does this meet the goal?

Yes
No

A

Key Requirements:

Deploy App Service instances and Azure SQL databases simultaneously. (Not directly addressed by the solution, but a context to consider).

Deploy App Service instances only to specific Azure regions.

App Service resources must reside in the same region.

Proposed Solution: Using the Regulatory Compliance Dashboard in Azure Security Center.

Analysis:

Regulatory Compliance Dashboard: The Regulatory Compliance Dashboard in Azure Security Center helps you track your compliance status against various standards and regulations (e.g., ISO 27001, SOC 2).

Resource Location Enforcement: While the dashboard can report on resource locations and detect resources that are not compliant with set standards and policies, it does not enforce the location when creating new resources. It only tells you the current compliance state of the subscription.

Simultaneous Deployment: This feature does not assist in simultaneous deployment of App Service instances and Azure SQL databases.

Conclusion:

The Regulatory Compliance Dashboard can detect non-compliant resources after deployment, but it does not prevent resources from being deployed to the wrong regions, nor does it assist with simultaneous deployments. It does not enforce any of the required items.

Therefore, the correct answer is:

No

118
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are designing an Azure solution for a company that has four departments. Each department will deploy several Azure app services and Azure SQL databases.

You need to recommend a solution to report the costs for each department to deploy the app services and the databases. The solution must provide a consolidated view for cost reporting that displays cost broken down by department.

Solution: Place all resources in the same resource group. Assign tags to each resource.

Does the solution meet the goal?

Yes
No

A

Key Requirements:

Report costs for each department’s resources (app services and SQL databases).

Provide a consolidated view for cost reporting.

Display cost breakdown by department.

The company has 4 departments, and each will have multiple app services and databases.

Proposed Solution: Place all resources in the same resource group and assign tags to each resource.

Analysis:

Single Resource Group: Resource groups are for resource management, not cost separation. Placing all resources in one resource group simplifies some management tasks, but it provides no per-department boundary for cost reporting; any departmental breakdown would still have to come from tags.

Tags: Tags are key-value pairs that can be assigned to Azure resources, and this is an ideal way of categorizing the resources. By tagging resources with department information, you can then generate cost reports filtered by the specific tag.

Consolidated View: A single resource group does not provide a consolidated view for cost reporting. The cost breakdown would still need to be based on tags.

Conclusion:

While tags are a correct way to categorize the resources, placing all resources in the same resource group contributes nothing to the per-department cost requirement and adds management overhead. The resource group itself is not a useful unit for this cost reporting.

Therefore, the correct answer is:

No

119
Q

You are designing a microservices architecture that will be hosted in an Azure Kubernetes Service (AKS) cluster. Apps that will consume the microservices will be hosted on Azure virtual machines. The virtual machines and the AKS cluster will reside on the same virtual network.

You need to design a solution to expose the microservices to the consumer apps.

The solution must meet the following requirements:

  • Ingress access to the microservices must be restricted to a single private IP address and protected by using mutual TLS authentication.
  • The number of incoming microservice calls must be rate-limited.
  • Costs must be minimized.

What should you include in the solution?

Azure API Management Premium tier with virtual network connection
Azure Front Door with Azure Web Application Firewall (WAF)
Azure API Management Standard tier with a service endpoint
Azure App Gateway with Azure Web Application Firewall (WAF)

A

Key Requirements:

Microservices hosted in AKS.

Consumer apps on Azure VMs within the same VNet.

Ingress access restricted to a single private IP address.

Mutual TLS authentication for access.

Rate limiting on incoming calls.

Minimize costs.

Analyzing the Options:

Azure API Management Premium tier with virtual network connection:

Analysis: This is a good solution. The Premium tier of API Management supports VNet integration, mutual TLS authentication, and rate limiting, and it can be deployed internally so that the gateway is reachable on a single private IP address. It is the most expensive of the options, but it is the only one that meets every stated requirement.

Azure Front Door with Azure Web Application Firewall (WAF):

Analysis: Azure Front Door is a global HTTP load balancer and web application firewall. It’s not the right choice for this use case, since the apps are in the same VNet and the requirement is for a private IP. Also, the pricing for this is not ideal and would not minimize cost.

Azure API Management Standard tier with a service endpoint:

Analysis: The standard tier of API Management does not support all of the required features, including mutual TLS authentication. Also, service endpoints are used to secure the traffic from a virtual network to a specific Azure service (like Azure SQL database), and is not the best way of accessing the APIs internally.

Azure App Gateway with Azure Web Application Firewall (WAF):

Analysis: While Azure App Gateway supports VNet integration and can be used as an ingress for an AKS cluster, it does not natively support mutual TLS authentication. Also, this is a layer 7 load balancer, while the requirement is to expose the API through a single private IP. While you could technically expose the services using App Gateway and a private IP, it does not support mutual TLS, nor does it have the rate limiting features, which makes it not a good fit.

Conclusion:

To meet the requirements for a single private IP address, mutual TLS authentication, rate limiting, and cost control, Azure API Management Premium tier with virtual network connection is the best option. Although it costs more than the Standard tier, it is the only option that provides all of the required features.

Therefore, the correct answer is:

Azure API Management Premium tier with virtual network connection

120
Q

HOTSPOT

You are designing an access policy for the sales department at your company.

Occasionally, the developers at the company must stop, start, and restart Azure virtual machines. The development team changes often.

You need to recommend a solution to provide the developers with the required access to the virtual machines.

The solution must meet the following requirements:

✑ Provide permissions only when needed.

✑ Use the principle of least privilege.

✑ Minimize costs.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Azure Active Directory (Azure AD)
license:
Free
Basic
Premium P1
Premium P2
Security feature:
Just in time VM access
A conditional access policy
Privileged Identity Management for the Azure resources

A

Key Requirements:

Developers must be able to stop, start, and restart Azure virtual machines.

Permissions only when needed (just-in-time access).

Principle of least privilege (only the required permissions).

Minimize costs.

Analyzing the Options:

Azure Active Directory (Azure AD) license:

Correct Answer: Premium P2

Explanation: Privileged Identity Management is an Azure AD Premium P2 feature; Premium P1 does not include PIM. Premium P2 is therefore the least expensive license that satisfies the just-in-time access requirement.

Security feature:

Correct Answer: Privileged Identity Management for the Azure resources

Explanation: Privileged Identity Management (PIM) is the ideal solution for providing just-in-time access to Azure resources. This feature allows you to grant temporary role assignments to users. This will satisfy the requirements for just-in-time access using the principle of least privilege, with a time limit as needed. It is also the ideal solution that provides a good audit trail.

Why other options are incorrect:

Just in time VM access: Just-in-time (JIT) VM access is used to control remote access (RDP/SSH) to virtual machines, and does not handle the requirement for role assignments.

A conditional access policy: Conditional access policies are used to control access based on conditions, but this is not appropriate for the given requirements, as conditional access policies do not provide a solution for assigning role based access with time limits.

Therefore, the correct answers are:

Azure Active Directory (Azure AD) license: Premium P2

Security feature: Privileged Identity Management for the Azure resources
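
A hedged sketch of what a PIM-for-Azure-resources assignment looks like: an eligible (not permanent) role assignment created through the ARM roleEligibilityScheduleRequests API. The scope, principal, role definition GUID, and dates are placeholders, and the request schema is abbreviated and may need adjusting:

import uuid

import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
scope = "/subscriptions/<sub-id>/resourceGroups/<dev-vm-rg>"
url = (
    f"https://management.azure.com{scope}"
    f"/providers/Microsoft.Authorization/roleEligibilityScheduleRequests/{uuid.uuid4()}"
    "?api-version=2020-10-01"
)
body = {
    "properties": {
        "principalId": "<developers-group-object-id>",
        # e.g. a role limited to start/stop/restart actions on VMs
        "roleDefinitionId": "/subscriptions/<sub-id>/providers"
                            "/Microsoft.Authorization/roleDefinitions/<role-guid>",
        "requestType": "AdminAssign",
        "scheduleInfo": {
            "startDateTime": "2024-01-01T00:00:00Z",
            "expiration": {"type": "AfterDuration", "duration": "P90D"},
        },
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()  # developers can now activate the role only when needed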

121
Q

HOTSPOT

You have an on-premises file server that stores 2 TB of data files.

You plan to move the data files to Azure Blob storage in the Central Europe region.

You need to recommend a storage account type to store the data files and a replication solution for the storage account.

The solution must meet the following requirements:

✑ Be available if a single Azure datacenter fails.

✑ Support storage tiers.

✑ Minimize cost.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Account type:
Blob storage
Storage (general purpose v1)
StorageV2 (general purpose v2)
Replication solution:
Geo-redundant storage (GRS)
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)
Read-access geo-redundant storage (RA-GRS)

A

Key Requirements:

Store 2 TB of data files in Azure Blob Storage in the Central Europe region.

Data must be available if a single Azure data center fails.

Support storage tiers (e.g., hot, cool, archive).

Minimize cost.

Analyzing the Options:

Account type:

Correct Answer: StorageV2 (general purpose v2)

Explanation: StorageV2 (general purpose v2) accounts support all storage services, including the Blob service, and allow access to all of the storage tiers. While blob storage accounts can be used for unstructured data, they do not allow for other storage features, such as tables, queues, etc.

Why other options are incorrect:

Blob storage: While blob storage accounts are useful, they are specifically for blob storage, and do not provide a general purpose solution.

Storage (general purpose v1): General purpose v1 is a legacy account type that does not support blob access tiers and should not be used for new deployments.

Replication solution:

Correct Answer: Zone-redundant storage (ZRS)

Explanation: ZRS replicates your data synchronously across multiple availability zones within the same region; each zone consists of one or more datacenters, so if a single datacenter (or its entire zone) fails, the data remains accessible from the other zones. This satisfies the availability requirement and costs less than GRS.

Why other options are incorrect:

Geo-redundant storage (GRS): While GRS provides protection against region-wide outages by replicating data to a secondary region, it incurs higher costs compared to ZRS, and is not needed in this scenario, as the requirement is to have availability when a data center fails.

Locally-redundant storage (LRS): LRS only provides redundancy within a single data center, which does not satisfy the requirement to be available if a single data center fails.

Read-access geo-redundant storage (RA-GRS): RA-GRS offers read access to the secondary region, but is more expensive and does not provide any benefit over GRS. Also, this option is not needed, since the requirement is for a single data center failure.

Therefore, the correct answers are:

Account type: StorageV2 (general purpose v2)

Replication solution: Zone-redundant storage (ZRS)
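
A hedged sketch of creating the recommended account with the azure-mgmt-storage SDK. The subscription, resource group, account name, and region string are placeholders (the scenario's "Central Europe" is represented here by "westeurope"):

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.storage_accounts.begin_create(
    "<resource-group>",
    "<storageaccountname>",
    {
        "location": "westeurope",
        "kind": "StorageV2",              # general purpose v2: supports access tiers
        "sku": {"name": "Standard_ZRS"},  # zone-redundant replication
    },
)
account = poller.result()
print(account.name, account.sku.name)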

122
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage account that contains two 1-GB data files named File1 and File2. The data files are set to use the archive access tier.

You need to ensure that File1 is accessible immediately when a retrieval request is initiated.

Solution: For File1, you set Access tier to Hot.

Does this meet the goal?

Yes
No

A

Key Requirements:

Two data files (File1 and File2) in Azure Storage, set to the archive access tier.

File1 must be accessible immediately upon a retrieval request.

Proposed Solution: Set the access tier for File1 to Hot.

Analysis:

Archive Access Tier: The archive access tier is designed for rarely accessed data and has a retrieval latency measured in hours. Setting a file to archive puts that file offline, and a rehydration operation needs to be done in order to get that file online.

Hot Access Tier: The hot access tier is designed for frequently accessed data and has low retrieval latency. Files in the hot tier are immediately accessible.

Conclusion:

By setting the access tier for File1 to Hot, the blob is rehydrated out of the archive tier; once it is in the hot tier, it is available immediately when a retrieval request is initiated, which satisfies the requirement.

Therefore, the correct answer is:

Yes
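
A hedged sketch of the tier change with the azure-storage-blob SDK; the account URL and container name are placeholders. Note that the tier change starts a rehydration that can take hours, and File1 becomes instantly readable only after it completes:

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob = BlobClient(
    account_url="https://<account>.blob.core.windows.net",
    container_name="<container>",
    blob_name="File1",
    credential=DefaultAzureCredential(),
)
blob.set_standard_blob_tier("Hot", rehydrate_priority="High")  # Standard or High
props = blob.get_blob_properties()
print(props.blob_tier, props.archive_status)  # archive_status clears when done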

122
Q

A company has a hybrid ASP.NET Web API application that is based on a software as a service (SaaS) offering.

Users report general issues with the data. You advise the company to implement live monitoring and use ad hoc queries on stored JSON data. You also advise the company to set up smart alerting to detect anomalies in the data.

You need to recommend a solution to set up smart alerting.

What should you recommend?

Azure Application Insights and Azure Monitor Logs
Azure Site Recovery and Azure Monitor Logs
Azure Data Lake Analytics and Azure Monitor Logs
Azure Security Center and Azure Data Lake Store

A

Key Requirements:

Hybrid ASP.NET Web API application.

Live monitoring and ad-hoc queries on JSON data.

Smart alerting to detect data anomalies.

Analyzing the Options:

Azure Application Insights and Azure Monitor Logs:

Analysis: This is the correct solution. Azure Application Insights is designed for monitoring application performance and detecting anomalies. It also allows for ad-hoc queries using the Kusto Query Language (KQL) that supports JSON formatted data. Azure Monitor Logs (which is the underlying data store for Application Insights) can be used for setting up smart alerts based on telemetry data patterns and also for log queries for debugging purposes. This solution perfectly matches the requirements for the problem.

Azure Site Recovery and Azure Monitor Logs:

Analysis: Azure Site Recovery is a disaster recovery service, and is not used for monitoring application telemetry or setting up alerts. Azure Monitor Logs are useful for log aggregation and analysis, but will not help in setting up smart alerts, as they do not provide this functionality on their own. This option is incorrect.

Azure Data Lake Analytics and Azure Monitor Logs:

Analysis: Azure Data Lake Analytics is used for processing large datasets, and is not an ideal solution for live monitoring and smart alerting. This is used for large scale data processing, and not for monitoring and alerts. Also, Azure Monitor Logs are not the ideal solution for this, as they will only store log data and not the raw JSON data, which is required.

Azure Security Center and Azure Data Lake Store:

Analysis: Azure Security Center is a security management service and is not used for application performance monitoring or setting up alerts for data anomalies. Azure Data Lake Store is a storage solution, but is not suited for live monitoring and alerting of application data anomalies. Also, this solution would not handle the ad-hoc query requirement.

Conclusion:

To set up smart alerting for detecting data anomalies, and satisfy all the other requirements, Azure Application Insights and Azure Monitor Logs is the best solution.

Therefore, the correct answer is:

Azure Application Insights and Azure Monitor Logs
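
To illustrate the ad hoc query side, here is a hedged sketch using the azure-monitor-query package against workspace-based Application Insights telemetry. The workspace ID is a placeholder, the AppRequests table assumes a workspace-based resource, and smart detection itself is a built-in Application Insights feature that needs no code:

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
query = """
AppRequests
| where Success == false
| summarize failures = count() by bin(TimeGenerated, 5m)
| order by TimeGenerated desc
"""
result = client.query_workspace("<workspace-id>", query, timespan=timedelta(hours=24))
for table in result.tables:
    for row in table.rows:
        print(row)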

122
Q

Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.

You plan to move all the virtual machines to Azure.

You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.

What should you use to make the recommendation?

Azure Cost Management
Azure Pricing calculator
Azure Migrate
Azure Advisor

A

Key Requirements:

Migrate 300 virtual machines from a VMware environment to Azure.

Virtual machines vary in size and utilization.

Recommend the number and size of Azure virtual machines.

Minimize administrative effort.

Analyzing the Options:

Azure Cost Management:

Analysis: Azure Cost Management is used for managing and analyzing cloud spending. It does not perform analysis of on-premises virtual machines for sizing or migration purposes, and does not help in the recommendation. This option does not address the requirement.

Azure Pricing calculator:

Analysis: The Azure Pricing calculator estimates the cost of Azure services. It can help price VMs once their sizes are known, but it does not analyze on-premises virtual machines to determine the appropriate sizes, so another tool would still be needed first. This option does not meet the requirements.

Azure Migrate:

Analysis: This is the correct answer. Azure Migrate is specifically designed for assessing and migrating on-premises virtual machines to Azure. It can analyze the performance data of your on-premises VMs and recommend the appropriate size for Azure VMs. Azure Migrate also handles the migration process itself. This is also the best solution for the least amount of administrative overhead.

Azure Advisor:

Analysis: Azure Advisor analyzes existing Azure resources and provides recommendations for cost optimization, security, performance, and high availability. However, it does not assess on-premises VMs or recommend Azure VM sizes for migration purposes. This option does not address the requirement.

Conclusion:

To analyze your on-premises VMware environment and recommend the required number and size of Azure VMs for migration, while minimizing administrative effort, Azure Migrate is the most appropriate solution.

Therefore, the correct answer is:

Azure Migrate

122
Q

You have an Azure Storage account that contains the data shown in the following exhibit.

Authentication method: Access key (Switch to Azure AD User Account)
Location: container1

Search blobs by prefix (case-sensitive)

NAME MODIFIED ACCESS TIER BLOB TYPE SIZE LEASE STATE
File1.bin 5/4/2019, 5:57:06 PM Cool (Inferred) Block blob 1.25 GiB Available
File2.bin 5/4/2019, 6:09:57 PM Hot Block blob 2.5 GiB Available
File3.bin 5/4/2019, 6:26:26 PM Archive Block blob 1.97 GiB Available

You need to identify which files can be accessed immediately from the storage account.

Which files should you identify?

File1.bin only
File2.bin only
File3.bin only
File1.bin and File2.bin only
File1.bin, File2.bin, and File3.bin

A

Key Concepts:

Azure Blob Storage Access Tiers:

Hot: Optimized for frequently accessed data. Low latency, higher cost.

Cool: Optimized for infrequently accessed data. Moderate latency, lower cost than hot.

Archive: Optimized for rarely accessed data. High latency (hours to rehydrate), lowest cost.

Analyzing the Files:

File1.bin: Access tier is set to “Cool (Inferred)”.

Files set to the “Cool” tier are available immediately.

File2.bin: Access tier is set to “Hot”.

Files set to the “Hot” tier are available immediately.

File3.bin: Access tier is set to “Archive”.

Files set to the “Archive” tier are not immediately accessible. They need to be rehydrated from the archive tier before they can be accessed, and it can take several hours for the rehydration to complete.

Conclusion:

Only files in the Hot and Cool tiers are accessible immediately. Files in the Archive tier are not immediately accessible and must first be rehydrated.

Therefore, the correct answer is:

File1.bin and File2.bin only

123
Q

DRAG DROP

Your on-premises network contains a server named Server1 that runs an ASP.NET application named App1.

You have a hybrid deployment of Azure Active Directory (Azure AD).

You need to recommend a solution to ensure that users sign in by using their Azure AD account and Azure Multi-Factor Authentication (MFA) when they connect to App1 from the internet.

Which three Azure services should you recommend be deployed and configured in sequence? To answer, move the appropriate services from the list of services to the answer area and arrange them in the correct order.

Services
an Azure AD managed identity
an internal Azure Load Balancer
an Azure AD enterprise application
Azure AD Application Proxy
a public Azure Load Balancer
an App Service plan
an Azure AD conditional access policy
Answer Area
1:
2:
3:

A

Key Requirements:

App1 is an on-premises ASP.NET application.

Users must sign in with their Azure AD account.

Azure MFA must be enforced.

App1 is accessed from the internet.

Understanding the Azure Services:

Azure AD Managed Identity: Provides an identity for Azure resources to authenticate to other services. Not relevant for on-premises apps.

Internal Azure Load Balancer: For load balancing within Azure VNets, not for public access from the internet.

Azure AD Enterprise Application: Represents your application in Azure AD, enabling it to integrate with Azure AD for authentication. This is needed for the application to use Azure AD for authentication.

Azure AD Application Proxy: Provides secure access to on-premises web applications from the internet using Azure AD authentication. This enables the app to be accessed from the internet.

Public Azure Load Balancer: Distributes traffic to backend servers, not for authentication requirements.

App Service Plan: For hosting Azure web apps, not for an on-premises application.

Azure AD Conditional Access Policy: Enforces access controls, including MFA, based on conditions.

Analyzing the Correct Sequence:

Azure AD Enterprise Application: You must first create an application registration in Azure AD to represent your on-premises application. This enables the application to be integrated with Azure AD.

Azure AD Application Proxy: You need to use Application Proxy to expose the on-premises App1 to the internet and enforce pre-authentication using Azure AD. This will provide the authentication of users, and allow access to the on-premise app.

Azure AD Conditional Access Policy: After setting up the enterprise app and the application proxy, you can then create a conditional access policy that requires MFA for users accessing App1.

Therefore, the correct sequence is:

Azure AD enterprise application

Azure AD Application Proxy

Azure AD conditional access policy
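
A hedged sketch of the final step as a Microsoft Graph call that creates a conditional access policy requiring MFA for the published app. The tenant, credential values, and App1's client ID are placeholders, and the caller is assumed to hold the Policy.ReadWrite.ConditionalAccess permission:

import requests
from azure.identity import ClientSecretCredential

cred = ClientSecretCredential("<tenant-id>", "<automation-app-id>", "<secret>")
token = cred.get_token("https://graph.microsoft.com/.default").token

policy = {
    "displayName": "Require MFA for App1",
    "state": "enabled",
    "conditions": {
        "applications": {"includeApplications": ["<app1-client-id>"]},
        "users": {"includeUsers": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    json=policy,
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()  # internet access to App1 now requires Azure AD + MFA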

123
Q

DRAG DROP

You are designing a network connectivity strategy for a new Azure subscription.

You identify the following requirements:

✑ The Azure virtual machines on a subnet named Subnet1 must be accessible only from the computers in your London office.

✑ Engineers require access to the Azure virtual machine on a subnet named Subnet2 over the Internet on a specific TCP/IP management port.

✑ The Azure virtual machines in the West Europe Azure region must be able to communicate on all ports to the Azure virtual machines in the North Europe Azure region.

You need to recommend which components must be used to meet the requirements. The solution must minimize costs and administrative effort whenever possible.

What should you include in the recommendation? To answer, drag the appropriate components to the correct requirements. Each component may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Components

An Azure ExpressRoute connection
A network security group (NSG)
A new virtual network
A site-to-site VPN
Virtual network peering
Answer Area

The Azure virtual machines on Subnet1 must be accessible only from the computers in the London office: Component
Engineers require access to the Azure virtual machines on Subnet2 over the Internet on a specific TCP/IP management port: Component
The Azure virtual machines in the West Europe region must be able to communicate on all ports to the Azure virtual machines in the North Europe region: Component

A

Correct Answer Area:

The Azure virtual machines on Subnet1 must be accessible only from the computers in the London office: A site-to-site VPN

Engineers require access to the Azure virtual machines on Subnet2 over the Internet on a specific TCP/IP management port: A network security group (NSG)

The Azure virtual machines in the West Europe region must be able to communicate on all ports to the Azure virtual machines in the North Europe region: Virtual network peering

Explanation of Each Choice:

Subnet1 Access from London Office:

Requirement: The key here is to establish a secure, private connection between your on-premises London office and Azure, so that only traffic from the office network can reach Subnet1 rather than allowing open access over the public internet.

Correct Component: A site-to-site VPN is the best option. It creates an encrypted tunnel over the internet between your on-premises network and your Azure virtual network. This ensures that only traffic originating from your London office (with the correct VPN configuration) can reach Subnet1.

Why not others:

Azure ExpressRoute: Although also used for on-premises connectivity, it is overkill and more expensive for a simple requirement like this. ExpressRoute is preferred for high-bandwidth or low-latency connectivity, which is not needed here.

Network Security Group: NSGs filter traffic into and out of subnets and network interfaces; they do not create a VPN connection.

Virtual network peering: Peering connects virtual networks inside Azure, not an on-premises network to an Azure environment.

A new virtual network: Not required in this scenario and does not address this requirement.

Subnet2 Access via Internet on Specific Port:

Requirement: You need to allow specific access from the public internet via a specific port.

Correct Component: A network security group (NSG) is the ideal choice. You can create a rule in the NSG that allows inbound traffic to the specific port from any IP or a range of IPs on the virtual machines in Subnet2.

Why not others:

Azure ExpressRoute/Site-to-site VPN: These are for private connections, not public internet access.

Virtual network peering: Used for connectivity between VNets, not for public internet access.
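
A hedged sketch of such a rule with the azure-mgmt-network SDK, assuming an NSG already exists on Subnet2. The names, port, and priority are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.security_rules.begin_create_or_update(
    "<resource-group>",
    "<subnet2-nsg>",
    "Allow-Mgmt-Port",
    {
        "protocol": "Tcp",
        "source_address_prefix": "Internet",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "5986",  # the specific management port
        "access": "Allow",
        "direction": "Inbound",
        "priority": 310,
    },
)
rule = poller.result()  # inbound traffic is now allowed on that one port only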

Inter-Region Communication (West Europe to North Europe):

Requirement: Virtual machines in two different regions need to communicate freely.

Correct Component: Virtual network peering is the most cost-effective and simplest way to achieve this. It allows you to connect two virtual networks as if they were on the same network.

Why not others:

Azure ExpressRoute/Site-to-site VPN: Although also used for connectivity, they are not the preferred way to connect Azure virtual networks to each other.

Network security group: NSGs filter traffic; they do not create connections between virtual networks.

Important Notes for the AZ-304 Exam:

NSGs are essential for securing Azure resources: Understand how to configure inbound and outbound rules based on source/destination, ports, and protocols.

Understand when to use Site-to-Site VPNs: Ideal for connecting your on-premise infrastructure to Azure. Know the components required for setting up a VPN gateway.

Know the use cases for Virtual Network Peering: It is most often used to connect virtual networks in Azure.

Understand what ExpressRoute does: Know when it is the more appropriate choice, such as for high-bandwidth, low-latency connectivity.

Cost optimization: The exam often looks for the most efficient solution. VPN connections are less expensive than ExpressRoute.

Practical application: The exam will test not just your knowledge of components, but your ability to choose the right tool for the job, which means understanding the use cases and trade-offs.

123
Q

DRAG DROP

A company has an existing web application that runs on virtual machines (VMs) in Azure.

You need to ensure that the application is protected from SQL injection attempts and uses a layer-7 load balancer. The solution must minimize disruption to the code for the existing web application.

What should you recommend? To answer, drag the appropriate values to the correct items. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Services:
Web Application Firewall (WAF)
Azure Application Gateway
Azure Load Balancer
Azure Traffic Manager
SSL offloading
URL-based content routing

Answer Area:
Azure service: Service
Feature: Service

A

Correct Answer Area (with single selections):

Azure service: Azure Application Gateway

Feature: Web Application Firewall (WAF)

Feature: URL-based content routing

Feature: SSL offloading

Explanation of Each Choice (Single Selection Focus):

Azure Service:

Requirement: The core need is for a service that provides both layer-7 load balancing and protection against SQL injection with minimal code changes.

Correct Component (single selection): Azure Application Gateway is the only suitable option here. It is a layer-7 load balancer, integrates directly with WAF, and is designed for web application traffic.

Why not others (single selection):

Azure Load Balancer: This is a layer-4 load balancer that operates on TCP/UDP. It does not inspect HTTP traffic, so it can neither protect against SQL injection nor perform URL-based routing.

Azure Traffic Manager: A DNS-based traffic router. It is not used for layer-7 load balancing of web application traffic.

Feature:

Requirement: The need is for a feature that provides protection against SQL injection attacks.

Correct Component (single selection): Web Application Firewall (WAF) is the direct answer. It is the feature that specifically handles web application security, including protection against SQL injection.

Why not others (single selection):

URL-based content routing: This is a feature of Azure Application Gateway that allows for routing based on the URL. However, it doesn’t offer direct protection against SQL injection.

SSL offloading: This is also a feature of Azure Application Gateway that offloads SSL decryption from the web servers to the Application Gateway. However, it doesn’t offer direct protection against SQL injection.

Important Notes for the AZ-304 Exam (with a focus on single selection questions):

Layer 4 vs. Layer 7: This is fundamental. Load Balancer is L4; Application Gateway is L7. Be clear on the distinctions, capabilities, and appropriate use cases.

Web Application Firewall (WAF): Must know it protects against SQL injection, XSS, and other web exploits and how it works with Application Gateway.

Application Gateway Deep Dive: Know the core features:

Layer-7 load balancing.

SSL termination (offloading).

URL-based content routing.

WAF Integration.

Minimize disruption: Choose options that require minimal changes to the existing application and infrastructure.

Cost Considerations: Always keep cost in mind, even if not the primary factor in a given question. Understand that the Application Gateway and WAF are more expensive than the Basic Load Balancer.

Practical Application: Know when to use each service. The exam is not about simple definitions, but rather the ability to apply your knowledge to different scenarios.

Exam Technique - Single Selection: Pay very close attention to the question requirements. If it states to “choose one” per category, do just that. This is a common question format on the exam.

Avoid Redundancy: In single-selection scenarios, don’t pick things that are highly related, especially if they come from the same service (e.g., you wouldn’t pick both Application Gateway AND URL-based routing as a “service” and “feature” respectively).

123
Q

You are designing an Azure web app that will use Azure Active Directory (Azure AD) for authentication.

You need to recommend a solution to provide users from multiple Azure AD tenants with access to App1. The solution must ensure that the users use Azure Multi-Factor Authentication (MFA) when they connect to App1.

Which two types of objects should you include in the recommendation? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Azure AD managed identities
an Identity Experience Framework policy
Azure AD conditional access policies
a Microsoft Intune app protection policy
an Azure application security group
Azure AD guest accounts

A

Requirements:

Multiple Azure AD Tenants: The solution must allow users from different Azure AD tenants to access the application (App1).

Azure AD Authentication: Authentication must be managed through Azure AD.

Multi-Factor Authentication (MFA): All users accessing the app must be required to use Azure MFA.

Correct Object Types:

Azure AD guest accounts

Azure AD conditional access policies

Explanation:

Azure AD guest accounts:

Why it’s correct: To allow users from different Azure AD tenants to access your application, you must invite them as “guest users” into your application’s Azure AD tenant. This allows them to authenticate using their own organization’s credentials, while still being recognized as authorized users for your app.

Why not others:

Azure AD managed identities: This is for authenticating Azure resources, not end-users from different tenants.

An Identity Experience Framework policy: This is for creating custom sign-in flows, but not for granting multi-tenant access or directly enforcing MFA.

Microsoft Intune app protection policy: This policy protects company data within specific applications; it is not about user authentication.

An Azure application security group: This is used for network security, not authentication.

Azure AD conditional access policies:

Why it’s correct: Conditional access policies in Azure AD are the mechanism to enforce MFA based on various conditions. You can configure a policy that requires MFA when users from all or specific guest users try to access the application (App1).


Important Notes for the AZ-304 Exam:

Azure AD Guest Users: Understand how to use guest accounts for multi-tenant access. Know the process of inviting and managing external users in your Azure AD.

Azure AD Conditional Access Policies: This is a core concept for securing access to Azure resources. Know how to:

Create conditional access policies.

Set conditions (users, locations, devices, apps).

Configure access controls (MFA, block, etc.).

Multi-Factor Authentication (MFA): Be familiar with different MFA options and how to enforce them using conditional access. Understand when MFA should be required for added security.

Distinguish Identity Concepts: Be able to differentiate between the following concepts for different use cases:

Azure AD Managed Identities vs. User Authentication

Conditional Access vs. Identity Experience Framework

Real-world scenarios: The exam focuses on applying these technologies to practical business needs. Be ready to explain why you choose each component.

123
Q

You are designing a microservices architecture that will support a web application.

The solution must meet the following requirements:

✑ Allow independent upgrades to each microservice

✑ Deploy the solution on-premises and to Azure

✑ Set policies for performing automatic repairs to the microservices

✑ Support low-latency and hyper-scale operations

You need to recommend a technology.

What should you recommend?

Azure Service Fabric
Azure Container Service
Azure Container Instance
Azure Virtual Machine Scale Set

A

Requirements:

Independent Upgrades: Each microservice must be independently updatable. This means changes to one microservice shouldn’t force updates to others.

On-Premises and Azure Deployment: The solution must be deployable both on-premises and in Azure. This implies a level of portability.

Automatic Repairs: The platform should support policies for automatic repairs of failed microservices. This means self-healing capabilities.

Low-Latency and Hyper-Scale: It needs to support low latency operations and the ability to scale to very high volumes of traffic (hyper-scale).

Recommended Technology:

Azure Service Fabric

Explanation:

Azure Service Fabric:

Why it’s the best fit:

Independent Upgrades: Service Fabric excels at managing microservices, allowing independent upgrades and deployments for each service.

On-Premises and Azure: Service Fabric can be deployed both on-premises (using Windows Server) and in Azure (as a PaaS). This portability is a key requirement.

Automatic Repairs: Service Fabric provides built-in features for monitoring the health of services and automatically repairing failing instances.

Low-Latency and Hyper-Scale: It’s designed for building highly scalable, low-latency applications, and can handle large numbers of concurrent operations.

State Management: Service Fabric provides built-in state management, which simplifies building stateful microservices.

Why not others:

Azure Container Service (AKS): A good container-orchestration platform, but it does not offer the same portability to on-premises environments, and its self-healing behavior is not as tightly built in.

Azure Container Instances (ACI): Best for lightweight container deployments, not for complex microservices architectures that require on-premises deployment and automated self-healing. It also offers no state management.

Azure Virtual Machine Scale Sets (VMSS): Primarily focused on scaling VMs, not on managing and orchestrating microservices. It suits a single application that scales across identical VMs, not a complex system of independent microservices.

Important Notes for the AZ-304 Exam:

Service Fabric Use Cases: Be very clear on the types of applications that Service Fabric is best suited for. This includes microservices architectures, stateful and stateless services.

Microservices Architecture: Understand the core principles behind microservices, including independent deployment, scalability, and resilience.

Deployment Flexibility: Know the options for deploying Service Fabric (on-premises, Azure PaaS).

Self-Healing Capabilities: Know how to configure and use Service Fabric’s automatic repair mechanisms.

Scale and Performance: Be aware that Service Fabric is engineered to support very high-scale, low-latency applications.

Alternatives: Know the general purpose of the other offerings for containers in Azure (AKS, ACI), and when they would be more appropriate.

Exam Focus: The exam will not just ask for a definition, it will want you to select the right component based on business requirements.

124
Q

You need to recommend a data storage solution that meets the following requirements:

  • Ensures that applications can access the data by using a REST connection
  • Hosts 20 independent tables of varying sizes and usage patterns
  • Automatically replicates the data to a second Azure region
  • Minimizes costs

What should you recommend?

an Azure SQL Database that uses active geo-replication
tables in an Azure Storage account that use geo-redundant storage (GRS)
tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
an Azure SQL Database elastic database pool that uses active geo-replication

A

Requirements:

REST API Access: The data must be accessible through a REST interface.

Independent Tables: The solution must support 20 independent tables of different sizes and usage patterns.

Automatic Geo-Replication: The data must be automatically replicated to a secondary Azure region.

Minimize Costs: The solution should be cost-effective.

Recommended Solution:

Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)

Explanation:

Azure Storage Account with RA-GRS Tables:

REST Access: Azure Storage tables are directly accessible through a REST API, which is a fundamental part of their design (a short usage sketch follows this explanation).

Independent Tables: A single Azure Storage account can hold many independent tables, meeting the 20-table requirement.

Automatic Geo-Replication (RA-GRS): RA-GRS automatically replicates the data to a secondary region and additionally provides read access to that secondary location, satisfying the geo-redundancy requirement.

Minimize Cost: Azure Storage tables handle varied usage patterns and are far more cost-effective than the SQL options.

Why not others:

Azure SQL Database with active geo-replication: While it provides strong SQL capabilities and geo-replication, a SQL database is more costly for simple table storage and carries higher operational overhead. Azure SQL Database is also accessed by using SQL over the TDS protocol, not through a native REST data interface.

Azure SQL Database elastic database pool with active geo-replication: Same reasons as above, but with the added complication of an elastic pool, which is unnecessary for the stated requirements and would add even more costs.

Tables in an Azure Storage account that use geo-redundant storage (GRS): GRS meets the geo-replication requirement, but it does not allow reads from the secondary location, so it is not as good a choice as RA-GRS.
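
As a minimal sketch of the REST-based table access (using the azure-data-tables Python SDK, which wraps the Table service REST API), assuming a storage account created with RA-GRS redundancy; the connection string, table name, and entity values are placeholders:

from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-account-connection-string>")
table = service.create_table_if_not_exists("CostCenters")  # one of the 20 tables

# Entities are schemaless key-value rows; each call below is an HTTPS REST request.
table.upsert_entity({
    "PartitionKey": "finance",
    "RowKey": "cc-1001",
    "Owner": "alice@contoso.com",
})
for entity in table.query_entities("PartitionKey eq 'finance'"):
    print(entity["RowKey"])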

Important Notes for the AZ-304 Exam:

Azure Storage Tables: Know what they are designed for and their features (scalability, cost-effectiveness, REST API access). Be able to explain where they are appropriate.

Geo-Redundancy: Understand the differences between GRS and RA-GRS and how they affect performance, availability, and cost.

Cost-Effective Solutions: The exam often asks for the most cost-effective solution. Be aware of the pricing models of different Azure services.

SQL Database Use Cases: Understand when to use SQL DBs and when other options (like Table storage) are more appropriate. SQL DBs are better suited for complex queries, transactions, and relational data models.

REST API Access: Know which Azure services offer a REST interface for data access and when it might be required.

Exam Technique: Ensure you fully read the requirements, so you don’t pick a more expensive or complex solution than is needed.

125
Q

You manage an Azure environment for a company. The environment has over 25,000 licensed users and 100 mission-critical applications. You need to recommend a solution that provides advanced user threat detection and remediation strategies.

What should you recommend?

Azure Active Directory (Azure AD) Identity Protection
Azure Active Directory Federation Services (AD FS)
Azure Active Directory (Azure AD) authentication
Microsoft Identity Manager
Azure Active Directory (Azure AD) Connect

A

Requirements:

Large User Base: The environment has over 25,000 licensed users, indicating a large scale.

Mission-Critical Applications: The environment supports 100 mission-critical applications, requiring a strong focus on security.

Advanced Threat Detection: The solution must provide advanced threat detection capabilities related to user behavior.

Remediation Strategies: The solution must offer automated remediation options.

Recommended Solution:

Azure Active Directory (Azure AD) Identity Protection

Explanation:

Azure AD Identity Protection:

Why it’s the best fit:

Advanced Threat Detection: Azure AD Identity Protection uses machine learning to detect risky sign-ins, user behavior anomalies, and other suspicious activities.

Remediation Strategies: It provides automated remediation actions, such as requiring a password change, enforcing MFA, or blocking access altogether when risky activity is detected.

Scale: It is designed to operate at scale, handling a user base of over 25,000 licensed users without any issues.

Integrated: This offering works directly with Azure AD, which is the identity platform for the applications being secured.

Why not others:

Azure Active Directory Federation Services (AD FS): AD FS is for on-premises federated identity, not for advanced threat detection and remediation.

Azure Active Directory (Azure AD) authentication: While this is the primary method of authentication in Azure, it doesn’t offer the advanced threat detection and remediation strategies needed for this scenario.

Microsoft Identity Manager: This is an on-prem identity management solution, also not offering the required threat detection or being fully integrated with cloud applications.

Azure Active Directory (Azure AD) Connect: A tool that synchronizes identities from on-premises Active Directory to Azure AD; it offers no threat detection or remediation.

Important Notes for the AZ-304 Exam:

Azure AD Identity Protection: Be very familiar with this service, and know how to configure and use it for security. Know its capabilities to detect risky users, anomalous sign-in behavior, and user compromise. Be able to select this offering when you see these as required capabilities.

Threat Detection: Understand the differences between standard security measures and advanced threat detection.

Automated Remediation: Be familiar with how to configure automatic actions when risks are detected.

Scalability: Be aware of how services scale to handle a large user base.

Identity-Centric Security: Know how Azure AD and its components like Identity Protection form a core part of a cloud security strategy.

When to use other Identity Services: Know what AD FS, MIM, and AD Connect are for and when they would be a more appropriate selection.

Exam Focus: It is key to understand how the solutions align with business requirements, and the exam is not just about knowing definitions.

126
Q

HOTSPOT

Your on-premises network contains a file server named Server1 that stores 500 GB of data.

You need to use Azure Data Factory to copy the data from Server1 to Azure Storage.

You add a new data factory.

What should you do next? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
From Server1:
Install an Azure File Sync agent
Install a self-hosted integration runtime
Install the File Server Resource Manager role service
From the data factory:
Create a pipeline
Create an import/export job
Provision an Azure-SQL Server Integration Services (SSIS) integration runtime

A

Answer Area:

From Server1:

Install a self-hosted integration runtime

From the data factory:

Create a pipeline

Explanation:

From Server1:

Install a self-hosted integration runtime:

Why it’s correct: A self-hosted integration runtime (SHIR) establishes a secure bridge between Azure Data Factory and your on-premises data source (Server1). The SHIR is installed on a server within the on-premises network and handles the data movement. This agent is required whenever you copy data from an on-premises location to Azure (a registration sketch follows this explanation).

Why not others:

Install an Azure File Sync agent: This is for syncing files between on-prem and Azure File Shares, not directly related to Data Factory’s data copying processes.

Install the File Server Resource Manager role service: This is for managing file server resources, not for enabling Data Factory connectivity.

From the data factory:

Create a pipeline:

Why it’s correct: In Azure Data Factory, a pipeline is the core object for defining the sequence of activities involved in data movement. You’ll need to create a pipeline that specifies the data source (Server1 via the SHIR), the data destination (Azure Storage), and the copy operation.

Why not others:

Create an import/export job: The Azure Import/Export service ships data on physical drives; it is not how Azure Data Factory copies data.

Provision an Azure-SQL Server Integration Services (SSIS) integration runtime: An Azure-SSIS integration runtime is specific to running SSIS packages; it is not required for copying data from a file server.
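
As a sketch of this "next step" in code, the following uses the azure-mgmt-datafactory Python SDK to register a SHIR in the factory and retrieve the key that the agent installed on Server1 uses to register itself. The subscription, resource group, and factory names are assumptions:

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeResource,
    SelfHostedIntegrationRuntime,
)

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "<subscription-id>")

# Create the self-hosted integration runtime resource in the data factory.
adf.integration_runtimes.create_or_update(
    "rg-data",
    "adf-contoso",
    "shir-server1",
    IntegrationRuntimeResource(properties=SelfHostedIntegrationRuntime()),
)

# Retrieve the authentication key used when installing the SHIR agent on Server1.
keys = adf.integration_runtimes.list_auth_keys("rg-data", "adf-contoso", "shir-server1")
print(keys.auth_key1)

With the SHIR registered, the copy pipeline's linked services reference it to reach Server1.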

Important Notes for the AZ-304 Exam:

Azure Data Factory (ADF): Be familiar with the core concepts of ADF:

Pipelines: For orchestrating data movement.

Datasets: Defining data sources and destinations.

Linked services: Defining connections to data sources and destinations.

Integration runtimes: Managing the execution environment for data movement.

Self-Hosted Integration Runtime (SHIR): Know when a SHIR is required (on-prem data sources or data sources within an Azure VNET), how to install it, and configure it in ADF. This will appear very often on the exam.

Cloud vs. Self-Hosted IR: Know when to use each option. For most cloud-to-cloud scenarios, the Azure integration runtime is sufficient; for connecting to on-premises data sources, a SHIR is required.

Data Movement: Understand the high-level process of copying data in ADF.

Other Components: Know the different types of integrations available with ADF.

Exam Technique: Carefully read the prompts and select the next step in the process. Always follow the correct logical steps in each scenario.

127
Q

You have an Azure subscription that contains a storage account.

An application sometimes writes duplicate files to the storage account.

You have a PowerShell script that identifies and deletes duplicate files in the storage account. Currently, the script is run manually after approval from the operations manager.

You need to recommend a serverless solution that performs the following actions:

✑ Runs the script once an hour to identify whether duplicate files exist

✑ Sends an email notification to the operations manager requesting approval to delete the duplicate files

✑ Processes an email response from the operations manager specifying whether the deletion was approved

✑ Runs the script if the deletion was approved

What should you include in the recommendation?

Azure Logic Apps and Azure Functions
Azure Pipelines and Azure Service Fabric
Azure Logic Apps and Azure Event Grid
Azure Functions and Azure Batch

A

Requirements:

Scheduled Script Execution: The script needs to run automatically once per hour.

Email Notification with Approval: An email needs to be sent to the operations manager to approve the deletion.

Process Approval Response: The system must handle the email response indicating if the deletion is approved.

Conditional Script Execution: The script must run only if the deletion is approved.

Serverless: The solution must be serverless, meaning there are no virtual machines to manage.

Recommended Solution:

Azure Logic Apps and Azure Functions

Explanation:

Azure Logic Apps:

Why it’s correct:

Orchestration: Logic Apps excels at orchestrating workflows and automating tasks. It is the main component for managing the flow of operations.

Scheduling: It can be triggered on a schedule, meeting the hourly execution requirement.

Email Operations: Logic Apps has built-in connectors for sending and processing emails (using Outlook, Office 365, or other connectors).

Conditional Logic: It can use conditional branches to determine if the deletion should proceed based on the email response.

Azure Functions:

Why it’s correct:

Script Execution: Azure Functions is ideal for running the PowerShell script to identify and delete duplicate files.

Integration: It integrates well with Logic Apps as an action in the workflow.

How the solution works:

A Logic App is configured to run on a schedule, for example, every hour.

The Logic App invokes an Azure Function to run the PowerShell script that checks for duplicate files.

The Logic App generates and sends an email to the operations manager asking for approval to delete the files.

The Logic App watches for the email response.

If the response indicates approval, the Logic App invokes a second Azure Function that performs the deletion (a sketch of the duplicate-detection logic follows these steps).
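
To make the Function's role concrete, here is a minimal sketch of duplicate detection over a blob container, grouping blobs by their service-computed MD5 hash. The account URL and container name are assumptions, and content_md5 may be unset for some upload paths:

from collections import defaultdict
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

def find_duplicates(account_url: str, container: str) -> dict:
    """Group blob names by content hash; any group with more than one name is a duplicate set."""
    service = BlobServiceClient(account_url, credential=DefaultAzureCredential())
    container_client = service.get_container_client(container)

    by_hash = defaultdict(list)
    for blob in container_client.list_blobs():
        md5 = blob.content_settings.content_md5  # populated by the service on most uploads
        if md5:
            by_hash[bytes(md5)].append(blob.name)
    return {h: names for h, names in by_hash.items() if len(names) > 1}

# The hourly Logic App run would invoke a Function wrapping this and email the result.
dupes = find_duplicates("https://<account>.blob.core.windows.net", "files")
print(f"{len(dupes)} duplicate group(s) found")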

Why not others:

Azure Pipelines and Azure Service Fabric: Azure Pipelines is a CI/CD service, not meant for orchestrating this type of workflow. Azure Service Fabric is for building microservices, which this problem does not require.

Azure Logic Apps and Azure Event Grid: While Event Grid can trigger actions, this scenario is schedule-driven rather than event-driven, and Event Grid provides no way to run the PowerShell script.

Azure Functions and Azure Batch: Azure Batch is designed for large-scale parallel compute, which is unnecessary here, and neither service provides the email approval workflow.

Important Notes for the AZ-304 Exam:

Azure Logic Apps: Be very familiar with Logic Apps, and know how to:

Create workflows.

Use different triggers (schedule, HTTP, etc.).

Integrate with connectors (email, databases, APIs).

Add conditions and control flow.

Azure Functions: Know the use cases for Functions and how to:

Write and deploy function code.

Connect with other Azure services.

Handle triggers and bindings.

Serverless Architecture: Understand what “serverless” means and the benefits and limitations.

Orchestration vs. Execution: Know the difference between orchestration and execution, and how Logic Apps and Functions are used in this scenario.

Integration: Understand the integration points between different Azure services.

Exam Focus: The exam will test your ability to choose the right serverless solution based on a set of requirements.

127
Q

HOTSPOT

You have an Azure Active Directory (Azure AD) tenant.

You plan to use Azure Monitor to monitor user sign-ins and generate alerts based on specific user sign-in events.

You need to recommend a solution to trigger the alerts based on the events.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Send Azure AD logs to:
An Azure event hub
An Azure Log Analytics workspace
An Azure Storage account
Signal type to use for triggering the alerts:
Activity log
Log
Metric

A

Answer Area:

Send Azure AD logs to:

An Azure Log Analytics workspace

Signal type to use for triggering the alerts:

Log

Explanation:

Send Azure AD logs to:

An Azure Log Analytics workspace:

Why it’s correct: Azure Log Analytics is the correct destination for collecting, storing, and analyzing log data, including Azure AD sign-in logs. It provides a powerful query language (Kusto Query Language or KQL) to perform complex analysis and create custom alerts.

Why not others:

An Azure event hub: While Event Hubs is great for streaming data, it’s not the best choice for storing and querying logs for analysis or creating alerts based on them.

An Azure Storage account: While you can store logs in Azure Storage, querying and creating alerts directly from Storage is not as efficient or powerful as using Log Analytics.

Signal type to use for triggering the alerts:

Log:

Why it’s correct: Azure AD sign-in events are stored as log data, and the alerts must be driven by analyzing that data, which is exactly what Log Analytics does. Log-based alerts are created from the Log Analytics workspace after the log data has been ingested into it (a query sketch follows this explanation).

Why not others:

Activity log: The Azure activity log audits operations performed on Azure resources. Azure AD sign-in events are not activity-log events.

Metric: Metrics are numerical values measured over time, such as CPU usage. They are not the right mechanism for alerting on the content of a log.
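
As a sketch of what a log-based alert condition looks like, the following KQL (run here via the azure-monitor-query Python SDK) counts failed sign-ins per user over the past hour; the workspace ID and the threshold of 5 are assumptions:

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# SigninLogs is available once Azure AD diagnostic settings stream sign-in
# logs to the Log Analytics workspace.
query = """
SigninLogs
| where ResultType != "0"        // non-zero result types are failed sign-ins
| summarize failures = count() by UserPrincipalName
| where failures > 5
"""
response = client.query_workspace("<workspace-id>", query, timespan=timedelta(hours=1))
for table in response.tables:
    for row in table.rows:
        print(row)

In the portal, the same query becomes the condition of a log alert rule on the workspace.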

Important Notes for the AZ-304 Exam:

Azure Monitor: Understand the core concepts of Azure Monitor.

Log Analytics: Be familiar with Log Analytics, KQL, and using queries to get data. Understand that logs are stored and analyzed in Log Analytics workspaces.

Alerting in Azure Monitor: Know how to create alerts based on log data and metric data. Be able to select the correct method.

Azure AD Logging: Know that Azure AD logs are available for analysis in Log Analytics. Know what kind of data can be collected via the diagnostic settings.

Activity Log vs. Log: Know the difference. Activity logs record resource-management events, whereas logs (like sign-in logs) are stored and queried in Log Analytics.

Event Hubs: Understand the use cases for Event Hubs. It’s best for streaming data, not for long-term storage and analysis.

Exam Technique: Carefully read the question, and be sure to select the right option for log based alerts.

127
Q

Your company, named Contoso, Ltd., implements several Azure logic apps that have HTTP triggers. The logic apps provide access to an on-premises web service.

Contoso establishes a partnership with another company named Fabrikam, Inc.

Fabrikam does not have an existing Azure Active Directory (Azure AD) tenant and uses third-party OAuth 2.0 identity management to authenticate its users.

Developers at Fabrikam plan to use a subset of the logic apps to build applications that will integrate with the on-premises web service of Contoso.

You need to design a solution to provide the Fabrikam developers with access to the logic apps.

The solution must meet the following requirements:

✑ Requests to the logic apps from the developers must be limited to lower rates than the requests from the users at Contoso.

✑ The developers must be able to rely on their existing OAuth 2.0 provider to gain access to the logic apps.

✑ The solution must NOT require changes to the logic apps.

✑ The solution must NOT use Azure AD guest accounts.

What should you include in the solution?

Azure AD business-to-business (B2B)
Azure Front Door
Azure API Management
Azure AD Application Proxy

A

Requirements:

Rate Limiting: Requests from Fabrikam must be rate-limited to a lower rate than Contoso users.

External OAuth 2.0 Provider: Fabrikam users should authenticate using their existing OAuth 2.0 provider (not Azure AD).

No Logic App Changes: The solution must not require modifications to the logic apps themselves.

No Azure AD Guest Accounts: The solution must not use Azure AD guest accounts.

Access to Logic Apps via HTTP Triggers: The logic apps are accessed via HTTP triggers.

Recommended Solution:

Azure API Management

Explanation:

Azure API Management (APIM):

Why it’s the best fit:

Rate Limiting: API Management lets you define rate-limit policies based on subscriptions, API keys, or other criteria. You can create a specific, lower rate limit for the Fabrikam developers that differs from Contoso users (a sample policy sketch follows this explanation).

External OAuth 2.0 Provider Support: APIM can integrate with external OAuth 2.0 providers, allowing Fabrikam users to authenticate using their existing system.

No Logic App Changes: You can expose your logic apps through APIM without having to make any modifications to the logic apps. APIM acts as a facade for the logic apps.

No Guest Accounts: APIM manages access via API keys and OAuth 2.0, so there is no need for guest accounts in your Azure AD.

Centralized Management: API Management provides a central point to manage and control access to your APIs, implement policies, and monitor usage.

Why not others:

Azure AD business-to-business (B2B): While B2B allows collaboration with users from other Azure AD tenants, it requires the partner organization to also use Azure AD or a Microsoft account, and it does not directly provide the rate limiting and custom authentication mechanisms required here.

Azure Front Door: Azure Front Door is a global content delivery network (CDN) and load balancer, not designed for API access control or user authentication.

Azure AD Application Proxy: Azure AD Application Proxy is used for publishing on-premises web apps for remote access via Azure AD. It does not provide rate-limiting capabilities or integration with arbitrary OAuth 2.0 providers.
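
For illustration, a minimal sketch of an APIM inbound policy (XML, shown here as a Python string constant) that validates tokens from Fabrikam's OAuth 2.0 provider and imposes a lower rate limit; the call count, renewal period, and issuer URL are assumptions:

# APIM inbound policy applied to the product/subscription used by Fabrikam.
FABRIKAM_INBOUND_POLICY = """
<inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
        <openid-config url="https://login.fabrikam-idp.example/.well-known/openid-configuration" />
    </validate-jwt>
    <rate-limit-by-key calls="10" renewal-period="60"
                       counter-key="@(context.Subscription.Id)" />
</inbound>
"""

Contoso traffic would flow through a different product or subscription with a higher (or no) rate limit, and the logic apps behind APIM remain unchanged.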

Important Notes for the AZ-304 Exam:

Azure API Management: Be very familiar with APIM, including how to:

Create and manage APIs.

Implement access policies.

Integrate with OAuth 2.0.

Apply rate limits.

Set up developer portals.

OAuth 2.0: Understand the basics of OAuth 2.0 and its role in API security.

Rate Limiting: Understand how to use rate limits to protect APIs from misuse and overuse.

External Identity Providers: Know how to integrate external identity providers with APIM.

API Gateway Pattern: Understand the role of an API gateway like APIM in managing and securing APIs.

Exam Focus: The exam will often test you on choosing a solution that fits multiple requirements and doesn’t rely on the “simplest” choice. You will need to know each service and where it applies to each scenario.

128
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains several administrative user accounts. You need to recommend a solution to identify which administrative user accounts have NOT signed in during the previous 30 days.

Which service should you include in the recommendation?

Azure AD Identity Protection
Azure AD Privileged identity Management (PIM)
Azure Advisor
Azure Activity log

A

Requirement:

Identify administrative user accounts in an Azure AD tenant that have not signed in during the previous 30 days.

Recommended Solution:

Azure AD Privileged Identity Management (PIM)

Explanation:

Azure AD Privileged Identity Management (PIM):

Why it’s the best fit:

Access Reviews: PIM provides “access reviews”, which periodically audit which users have access to a resource. A review can incorporate sign-in activity as part of its criteria, so you can set one up to find admin users who have not signed in for 30 days.

Auditing: While not the primary goal here, PIM also provides detailed auditing capabilities.

Governance: PIM is designed for securing privileged accounts and is ideal for this specific scenario of inactive administrative accounts.

Reports: PIM provides reports that show who your administrative users are and when they last signed in.

Why not others:

Azure AD Identity Protection: This is primarily focused on detecting risky sign-in behaviors and compromised accounts, not on identifying inactive users.

Azure Advisor: Azure Advisor provides recommendations for best practices on Azure resources and will not help in identifying user sign-in activity.

Azure Activity log: The Azure activity log is an audit trail of actions performed on Azure resources, not of user sign-ins. While you could filter and analyze activity logs, that is not their purpose, and they will not reveal sign-in inactivity.

Important Notes for the AZ-304 Exam:

Azure AD Privileged Identity Management (PIM): Be very familiar with PIM, and understand:

How to use PIM for managing access to administrative roles.

How to perform access reviews.

How to enable just-in-time (JIT) access.

The PIM reporting capabilities.

The use cases for PIM.

Identity Security: Know that identifying inactive admin accounts is a common security practice.

Risk Management: Know how to reduce security risks by detecting and addressing potentially compromised or inactive admin accounts.

Exam Focus: Understand the difference between the services and when you would use each.

128
Q

You have an on-premises network to which you deploy a virtual appliance.

You plan to deploy several Azure virtual machines and connect the on-premises network to Azure by using a Site-to-Site connection.

All network traffic that will be directed from the Azure virtual machines to a specific subnet must flow through the virtual appliance.

You need to recommend solutions to manage network traffic.

Which two options should you recommend? Each correct answer presents a complete solution.

Configure Azure Traffic Manager.
Implement an Azure virtual network.
Implement Azure ExpressRoute.
Configure a routing table.

A

Requirements:

On-premises Connection: Connect the on-premises network to Azure using a Site-to-Site VPN.

Virtual Appliance Traffic: All traffic from the Azure VMs to a specific subnet must flow through a virtual appliance (which implies the appliance must sit in the traffic path).

Correct Solutions:

Implement an Azure virtual network.

Configure a routing table.

Explanation:

Implement an Azure virtual network:

Why it’s correct: An Azure virtual network is the fundamental building block for a private network in Azure. Every Azure VM must be deployed into a virtual network, so a VNet is the foundation for deploying the VMs and connecting them to the on-premises network.

Why not others (by themselves): Every other option in this scenario depends on a virtual network to operate, so the VNet is the baseline requirement.

Configure a routing table:

Why it’s correct: Azure route tables let you define custom, user-defined routes. By creating a route table and associating it with the subnet where your VMs reside, you can force all traffic destined for the specific subnet through the virtual appliance (a sketch follows this explanation).

Why not others:

Configure Azure Traffic Manager: Traffic Manager is a DNS-based load balancer for distributing traffic across different endpoints. It does not handle routing at the network level.

Implement Azure ExpressRoute: While ExpressRoute provides dedicated private connectivity to Azure, it’s not required for this scenario and it does not handle the custom routing requirements.
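
A minimal sketch of such a route table with the azure-mgmt-network SDK follows; the address prefix, appliance IP, names, and region are assumptions:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
client = NetworkManagementClient(credential, "<subscription-id>")

# User-defined route: send traffic bound for the specific subnet to the appliance.
poller = client.route_tables.begin_create_or_update(
    "rg-network",
    "rt-via-appliance",
    {
        "location": "westeurope",
        "routes": [
            {
                "name": "to-target-subnet",
                "address_prefix": "10.20.0.0/24",      # the specific subnet
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.1.4",     # the appliance's private IP
            }
        ],
    },
)
poller.result()
# The route table must then be associated with the subnet hosting the Azure VMs.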

Important Notes for the AZ-304 Exam:

Azure Virtual Networks: Understand the fundamental role of VNETs for deploying resources and creating isolated networks in Azure.

Azure Route Tables: Be very familiar with Route Tables, how to create them, how to associate them with subnets, and how to define custom routes. Understand that these allow for granular control of network traffic flow.

Virtual Appliances: Know what a virtual appliance is and the common use cases.

Site-to-Site VPN: Understand the need for a Site-to-Site VPN connection between on-premises and Azure, and the components required.

Network Routing: Understand the principles of network routing, including default routes and custom routes.

Traffic Manager: Be familiar with Traffic Manager as a DNS based load balancer.

Express Route: Understand the purpose of express route and when to use that.

Exam Focus: Be sure to pay attention to the specific requirements of each question. If there is a routing requirement, a routing table is likely required.

128
Q

You have multiple Azure deployments.

You plan to implement Azure Blueprints.

Which artifact types can be added to a blueprint?

Policy assignment, Resource group, Role assignment, virtual machines, and virtual networks only
Subscriptions, tenants, Resource group, and Key Vault only
Subscriptions, tenants, Resource group, and Azure Active Directory only
Azure Resource Manager template (Subscription), Policy assignment, Resource group, and Role assignment only

A

Understanding Azure Blueprints:

Azure Blueprints are a way to define a repeatable set of Azure resources and configurations that can be deployed consistently across different environments. Think of it as a template for deploying your Azure environment.

Correct Artifact Types:

Azure Resource Manager template (Subscription), Policy assignment, Resource group, and Role assignment only

Explanation:

Why it’s correct:

Azure Resource Manager (ARM) Templates: These templates define the actual resources to be deployed, such as virtual machines, storage accounts, networks, etc. In this case the scope must be set at the subscription level.

Policy Assignments: These artifacts allow you to enforce compliance requirements across your environment. You can define policies to ensure that deployed resources meet your organization’s standards.

Resource Groups: These are containers for organizing Azure resources. A blueprint can define resource group deployments, enabling a consistent grouping of resources.

Role Assignments: These are used to set permissions that are assigned to resources or resource groups.

Why not others:

Policy assignment, Resource group, Role assignment, virtual machines, and virtual networks only: While policy assignments, resource groups, and role assignments are valid artifacts, virtual machines and virtual networks are defined inside ARM templates and are not added directly.

Subscriptions, tenants, Resource group, and Key Vault only: While subscriptions, tenants, and resource groups are related to the overall structure of Azure, subscriptions and tenants are not artifacts that you can define within a blueprint. Key Vault is a resource that can be deployed via an ARM template, not directly in a blueprint.

Subscriptions, tenants, Resource group, and Azure Active Directory only: Similar to the previous option, subscriptions, tenants, and Azure AD are not resources you would deploy with a blueprint.

Important Notes for the AZ-304 Exam:

Azure Blueprints: Understand the purpose of Blueprints and how they help to create a consistent deployment environment.

Blueprint Artifacts: Know the specific types of artifacts that you can include in a blueprint. You must know these core resources.

ARM Templates: Know that Azure Resource Manager templates are the base component for Azure deployments.

Policy Assignments: Understand how to use Azure Policy to enforce compliance and standards within your deployments.

Role Assignments: Understand how to define permissions for resources.

Use Cases: Know when to use Azure Blueprints for resource deployment.

Exam Focus: The exam will require a good understanding of the Azure services, and how they work together.

129
Q

HOTSPOT

You are building an application that will run in a virtual machine (VM). The application will use Azure Managed Identity.

The application uses Azure Key Vault, Azure SQL Database, and Azure Cosmos DB.

You need to ensure the application can use secure credentials to access these services.

Which authentication method should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

a) Hash-based message authentication code (HMAC)
b) Azure Managed Identity
c) Role-Based Access Control (RBAC)
d) HTTPS encryption

Answer Area
Functionality
Azure Key Vault
Azure SQL
Cosmos DB

Authorization method

A

Requirements:

Application in a VM: The application is running on an Azure Virtual Machine.

Azure Managed Identity: The application will use Azure Managed Identity for authentication.

Access to Key Vault, SQL Database, Cosmos DB: The application needs to securely access these Azure services.

Secure Credentials: Credentials must be secure (not embedded in the app).

Answer Area:

Functionality: Authorization method:
Azure Key Vault: Azure Managed Identity
Azure SQL: Azure Managed Identity
Cosmos DB: Azure Managed Identity
Explanation:

Azure Managed Identity:

Why it’s the best fit (for all services):

Secure Credential Management: Managed identities remove the need to manage credentials in your code and prevent credentials from being stored in the application or on the VM. They provide the most secure way for applications to authenticate to Azure services (a usage sketch follows this explanation).

Simplified Authentication: Once the managed identity is set up, the code will access the services with minimal changes.

No Key Rotation: The platform manages and rotates the underlying credentials automatically, so there are no keys or secrets for you to rotate. This makes managed identities much easier to operate.

Integration: All of these services support Azure Managed Identity authentication.

Why not others:

Hash-based message authentication code (HMAC): HMAC is for verifying the integrity and authenticity of a message. It is not the main authorization method for Azure services.

Role-Based Access Control (RBAC): RBAC provides permission management based on roles. RBAC is how you grant a managed identity permissions on each service; it is an authorization mechanism, not an authentication method.

HTTPS encryption: Provides transport-layer security; it ensures data is encrypted in transit but is not an authentication mechanism.
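
A minimal sketch of what this looks like in application code, assuming the VM has a managed identity; the vault and account URLs shown are placeholders:

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient
from azure.cosmos import CosmosClient

# The credential is obtained from the VM's local instance metadata endpoint;
# no secret is stored in code or configuration.
credential = ManagedIdentityCredential()

# Key Vault: read a secret with the identity.
secrets = SecretClient("https://kv-contoso.vault.azure.net", credential)
secret = secrets.get_secret("app-setting")

# Cosmos DB: the same credential authenticates the data-plane client.
cosmos = CosmosClient("https://cosmos-contoso.documents.azure.com", credential=credential)

# Azure SQL: the credential yields an Azure AD access token that the SQL
# driver accepts in place of a username and password.
token = credential.get_token("https://database.windows.net/.default")

RBAC (or, for SQL, a contained database user mapped to the identity) then controls what each of these calls is allowed to do.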

Important Notes for the AZ-304 Exam:

Azure Managed Identities: Understand what managed identities are, how they work, the benefits of using them and when to use them. This will be a very common topic on the exam.

Secure Authentication: The exam will often focus on secure credential management. Use of managed identities are recommended over secrets or connection strings.

RBAC: Understand its role in access control and granting permissions to managed identities.

Key Vault: Understand how Managed Identities can securely access Key Vault secrets and certificates.

Azure SQL and Cosmos DB: Understand how Managed Identities work with these services for authentication.

Exam Focus: The exam will not just ask you to identify the service to use, it will also test if you know the best practice and security implications. Managed identities should be a strong area of study.

130
Q

You have a hybrid deployment of Azure Active Directory (Azure AD).

You need to recommend a solution to ensure that the Azure AD tenant can be managed only from the computers on your on-premises network.

What should you include in the recommendation?

Azure AD roles and administrators
Azure AD Privileged Identity Management
A conditional access policy
Azure AD Application Proxy

A

Requirement:

Ensure that the Azure AD tenant can be managed only from computers located on the on-premises network.

Recommended Solution:

A conditional access policy

Explanation:

Conditional Access Policy:

Why it’s the best fit:

Location-Based Access Control: Conditional access policies allow you to control access based on various conditions, including network location (IP address ranges). You can define a policy that restricts Azure AD administrative access only to requests coming from your on-premises network’s IP address range or a specific named location.

Granular Control: You can apply conditional access policies to specific user groups, roles, or applications. This allows for granular control of the access.

Real-time Access Control: The access control decision is made in real time, when the user attempts to sign in.

Why not others:

Azure AD roles and administrators: While Azure AD roles determine who has administrative rights, they do not control where those users can perform those tasks from.

Azure AD Privileged Identity Management (PIM): PIM is for managing and governing privileged accounts, it does not control where those users can perform those tasks from.

Azure AD Application Proxy: Application Proxy is for publishing on-premises applications for remote access via Azure AD. This is not related to management of Azure AD itself.

Important Notes for the AZ-304 Exam:

Conditional Access Policies: Be very familiar with Azure AD conditional access and understand how to create and configure these policies to limit user access based on different requirements.

Location-Based Conditions: Know how to configure conditional access policies based on IP address ranges or named locations.

Azure AD Roles: Understand the different built-in administrative roles and what permissions they grant.

Privileged Identity Management (PIM): Understand its purpose and when to use it to control access to privileged roles.

Application Proxy: Know the purpose and use cases for Application Proxy.

Exam Focus: The exam is all about selecting the best service to meet the business needs. Be sure you are selecting the best answer, and not just one that sounds good.

130
Q

HOTSPOT

You plan to create an Azure environment that will contain a root management group and 10 child management groups. Each child management group will contain five Azure subscriptions. You plan to have between 10 and 30 resource groups in each subscription.

You need to design an Azure governance solution.

The solution must meet the following requirements:

  • Use Azure Blueprints to control governance across all the subscriptions and resource groups.
  • Ensure that Blueprints-based configurations are consistent across all the subscriptions and resource groups.
  • Minimize the number of blueprint definitions and assignments.

What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Level at which to define the blueprints:
The child management groups
The root management group
The subscriptions
Level at which to create the blueprint assignments:
The child management groups
The root management group
The subscriptions

A

Requirements:

Root and Child Management Groups: A hierarchy of management groups.

Consistent Governance: Consistent configurations across all subscriptions and resource groups.

Minimize Blueprints: Minimize the number of blueprint definitions and assignments.

Answer Area:

Level at which to define the blueprints:

The root management group

Level at which to create the blueprint assignments:

The child management groups

Explanation:

Level at which to define the blueprints:

The root management group:

Why it’s correct: Defining the blueprint at the root management group level ensures that the blueprint definition is available to all child management groups and their subscriptions. This is important in order to meet the requirement to minimize the number of definitions, as you can create a single blueprint and use it everywhere.

Why not others:

The child management groups: Defining blueprints at the child management groups would require re-creating the same definition in each of the 10 groups, because a definition in one child management group is not visible to the others.

The subscriptions: Defining the blueprint at the subscription level is the lowest level and would mean defining the same blueprint for each of the 50 subscriptions.

Level at which to create the blueprint assignments:

The child management groups:

Why it’s correct: Assigning the blueprint at the child management group level applies it consistently to all subscriptions within each child group. It also limits the number of assignments: at most 10 assignments per blueprint, instead of the 50 that assigning at the subscription level would require.

Why not others:

The root management group: While you could assign at the root management group, every subscription would inherit the assignment, so you would lose the per-group control that assignments at the child level provide.

The subscriptions: Assigning at the subscription level would require managing assignments on every single subscription, and would be a lot of management overhead.

Important Notes for the AZ-304 Exam:

Management Groups: Be very familiar with management groups, their purpose, their structure, and how they’re used to organize subscriptions.

Azure Blueprints: Understand their purpose in providing repeatable deployments. Know how to create blueprint definitions and assignments.

Blueprint Scope: Understand that blueprints have a scope (root, management group, subscription). Select the appropriate scope for the task at hand.

Hierarchy Inheritance: Understand how management groups and blueprints use inheritance.

Minimize Effort: The exam usually emphasizes solutions that minimize administrative effort and complexity.

Exam Focus: Be sure you read the requirements fully. Look for any hints that may give you the clue to the correct answer. The correct answer must minimize the number of assignments and provide consistent control.

130
Q

HOTSPOT

You have a web application that uses a MongoDB database. You plan to migrate the web application to Azure.

You must migrate to Cosmos DB while minimizing code and configuration changes.

You need to design the Cosmos DB configuration.

What should you recommend? To answer, select the appropriate values in the answer area. NOTE: Each correct selection is worth one point.

Option
MongoDB compatibility:
Database
Collection
Account

API:
Cassandra API
DocumentDB API
Graph API
MongoDB API
Table API

A

Requirements:

Existing MongoDB: The web application uses a MongoDB database.

Migrate to Cosmos DB: The migration target is Azure Cosmos DB.

Minimize Changes: Minimize code and configuration changes in the web application.

Answer Area:

Option Value
MongoDB compatibility: Account
API: MongoDB API
Explanation:

MongoDB compatibility:

Why “Account” is correct: The API, including MongoDB compatibility, is chosen when you create the Cosmos DB account; compatibility is set at the account level, so Account is the correct option.

Why not others: The API is not something you configure at the database or collection level.

API:

Why “MongoDB API” is correct: Cosmos DB exposes different APIs for different database models (SQL, Cassandra, MongoDB, Gremlin, Table). Selecting the MongoDB API lets the application keep its existing MongoDB data structures and drivers, minimizing code changes (a driver sketch follows this explanation).

Why not others:

Cassandra API: This is for Apache Cassandra-compatible databases, not suitable for a MongoDB migration.

DocumentDB API: This is the older name for the SQL (Core) API; it is not compatible with MongoDB.

Graph API: This is used to model data with graph functionality. Not applicable for this scenario.

Table API: This is for Azure Table Storage, not compatible with a MongoDB database.
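
To see why changes are minimized, note that the existing MongoDB driver code keeps working; only the connection string changes. A sketch with pymongo, where the account name and key are placeholders:

from pymongo import MongoClient

# Connection string copied from the Cosmos DB (MongoDB API) account in the portal.
client = MongoClient(
    "mongodb://cosmos-contoso:<key>@cosmos-contoso.mongo.cosmos.azure.com:10255/"
    "?ssl=true&replicaSet=globaldb&retrywrites=false"
)

# Identical driver calls to those used against the original MongoDB server.
db = client["webappdb"]
db["orders"].insert_one({"orderId": 1, "status": "open"})
print(db["orders"].find_one({"orderId": 1}))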

Important Notes for the AZ-304 Exam:

Azure Cosmos DB: Be very familiar with Cosmos DB and its capabilities.

Multiple APIs: Know that Cosmos DB supports different data models via different APIs, such as the SQL API, MongoDB API, Cassandra API, Gremlin (Graph) API, and Table API.

MongoDB API: Know when to use the MongoDB API when migrating from a MongoDB database to Cosmos DB.

Minimize Changes: Know when selecting a specific option will cause less disruption to the application.

Exam Focus: The exam will often test your practical skills and ability to choose the correct service to minimize changes and optimize the outcome.

130
Q

A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that is integrated with Microsoft Office 365 and an Azure subscription.

Contoso has an on-premises identity infrastructure. The infrastructure includes servers that run Active Directory Domain Services (AD DS), Active Directory Federation Services (AD FS), Azure AD Connect, and Microsoft Identity Manager (MIM).

Contoso has a partnership with a company named Fabrikam, Inc. Fabrikam has an Active Directory forest and an Office 365 tenant. Fabrikam has the same on-premises identity infrastructure as Contoso.

A team of 10 developers from Fabrikam will work on an Azure solution that will be hosted in the Azure subscription of Contoso. The developers must be added to the Contributor role for a resource in the Contoso subscription.

You need to recommend a solution to ensure that Contoso can assign the role to the 10 Fabrikam developers. The solution must ensure that the Fabrikam developers use their existing credentials to access resources.

What should you recommend?

Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.
Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.
In the Azure AD tenant of Contoso, use MIM to create guest accounts for the Fabrikam developers.
Configure an AD FS relying party trust between the Fabrikam and Contoso AD FS infrastructures.

A

Requirements:

Cross-Company Access: Fabrikam developers need access to Contoso’s Azure resources.

Contributor Role: The Fabrikam developers need the Contributor role on a resource in Contoso’s subscription.

Existing Credentials: Fabrikam developers should use their existing credentials (from Fabrikam’s domain) for access.

Existing Infrastructure: Both companies use a similar on-premises identity infrastructure (AD DS, AD FS, Azure AD Connect, and MIM).

No additional synchronization: The solution should not require additional identity synchronization between the environments.

Recommended Solution:

Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.

Explanation:

Office 365 Organizational Relationship:

Why it’s the best fit:

Guest Accounts: An organizational relationship lets Fabrikam users be invited as guest users (Azure AD B2B) in the Contoso tenant, without creating local accounts for them.

Existing Credentials: When the Fabrikam users are added as guests, they will use their existing organizational credentials from their Fabrikam Office 365 tenant.

Role Assignment: Once the Fabrikam developers are guests in the Contoso Azure AD tenant, they can be assigned the Contributor role on a resource in the Contoso subscription (a sketch of the role assignment follows this list).

No additional synchronization: The users are not duplicated or synchronized; they simply use their own Fabrikam credentials to access Contoso resources.

Easy to set up: An organizational relationship is simple to configure and requires no complex federation infrastructure.
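
As a minimal sketch, once a Fabrikam developer has accepted a guest invitation (sent from the Azure portal or through the Microsoft Graph invitations API), granting the Contributor role is a standard RBAC assignment. The guest UPN and resource scope below are hypothetical placeholders:

# Hedged sketch: assign Contributor on a single resource to a B2B guest user.
az role assignment create \
  --assignee "dev1_fabrikam.com#EXT#@contoso.onmicrosoft.com" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-project/providers/Microsoft.Web/sites/contoso-dev-app"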

Why not others:

Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam: A forest trust enables authentication between on-premises resources, but it is complex to establish and does not grant access to Azure resources with existing credentials. It is also unnecessary, since both companies already have Azure AD tenants.

In the Azure AD tenant of Contoso, use MIM to create guest accounts for the Fabrikam developers: MIM manages on-premises identities; it is not designed for external user access to Azure and is unnecessary here. It would also require a manual process to create and maintain the guest accounts.

Configure an AD FS relying party trust between the Fabrikam and Contoso AD FS infrastructures: A relying party trust does not, by itself, give the developers access to Azure resources with their existing credentials. While it could be configured, it is far more complex and still would not place Fabrikam identities in Contoso’s Azure AD tenant, which is what Azure role assignment requires.

Important Notes for the AZ-304 Exam:

Azure AD B2B Collaboration: Be familiar with how Azure AD B2B collaboration (guest accounts) works. Know the methods to add users, and that they can use their existing organizational credentials.

Azure AD Organizational Relationships: Understand how to use organizational relationships in Azure AD to enable B2B collaboration.

Role Assignment: Be familiar with the process of assigning roles to users in Azure subscriptions and to resources.

Federation vs. B2B: Know the difference between federation and B2B collaboration in Azure AD.

Use Existing Infrastructure: The exam often emphasizes leveraging existing infrastructure where possible.

Exam Focus: Be sure to select the simplest and most direct method to achieve the goal.

131
Q

You have 70 TB of files on your on-premises file server.

You need to recommend a solution for importing the data to Azure. The solution must minimize cost.

What Azure service should you recommend?

Azure StorSimple
Azure Batch
Azure Data Box
Azure Stack

A

Requirements:

Large Data Volume: 70 TB of files need to be imported.

On-Premises Data: The data is currently on an on-premises file server.

Minimize Cost: The solution should be the most cost-effective option.

Recommended Solution:

Azure Data Box

Explanation:

Azure Data Box:

Why it’s the best fit:

Large Data Import: Data Box is designed for moving large volumes of data (terabytes) to Azure.

Cost-Effective: For very large datasets (like 70 TB), Data Box is typically more cost-effective than transferring the data over a network (see the calculation after this list).

Secure: Data copied to the device is protected with AES 256-bit encryption while the device is in transit.

Physical Transport: You request a Data Box from Azure. You copy your data to the device, ship it back to Azure, and the data is then uploaded into your storage account.
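
A back-of-the-envelope calculation shows why offline transfer wins at this scale. Assuming a sustained 100 Mbps uplink (an assumption, not stated in the question):

70 TB ≈ 70 × 8 × 10^12 bits = 5.6 × 10^14 bits
5.6 × 10^14 bits ÷ 10^8 bits/s = 5.6 × 10^6 s ≈ 65 days

Roughly two months of saturated bandwidth is rarely acceptable, which is why a shipped Data Box is usually both faster and cheaper at this volume.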

Why not others:

Azure StorSimple: StorSimple is a hybrid cloud storage solution and is not ideal for a one-time data migration.

Azure Batch: Azure Batch is designed for large-scale parallel compute tasks, not for data migration.

Azure Stack: Azure Stack is a hybrid cloud platform for hosting Azure services in an on-premises environment, not for data migration.

Important Notes for the AZ-304 Exam:

Azure Data Box: Be familiar with different Data Box options (Data Box Disk, Data Box, Data Box Heavy) and when to use each, as well as the process of ordering and using them.

Large Data Transfer: Know when to use Azure Data Box for large amounts of data to be transferred.

Cost Optimization: The exam often emphasizes cost-effective solutions. Be aware of the pricing models for different Azure services.

Network Constraints: Know that Data Box may be appropriate when network constraints prevent transferring large amounts of data over the network.

Hybrid Scenarios: Understand the various ways that hybrid cloud scenarios are managed by Azure.

Exam Focus: Be sure to fully understand the requirements, especially related to the data migration scenario. Select the most cost-effective option that meets all requirements.

132
Q

HOTSPOT

Your company has 20 web APIs that were developed in-house.

The company is developing 10 web apps that will use the web APIs. The web apps and the APIs are registered in the company’s Azure Active Directory (Azure AD) tenant. The web APIs are published by using Azure API Management.

You need to recommend a solution to block unauthorized requests originating from the web apps from reaching the web APIs.

The solution must meet the following requirements:

✑ Use Azure AD-generated claims.

✑ Minimize configuration and management effort.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Grant permissions to allow the web apps to access the web APIs by using:
Azure AD
Azure API Management
The web APIs

Configure a JSON Web Token (JWT) validation policy by using:
Azure AD
Azure API Management
The web APIs

A

Requirements:

Secure Web APIs: Protect 20 web APIs from unauthorized requests.

Authorized Web Apps: Only allow the 10 authorized web apps to access the APIs.

Azure AD Claims: Use claims generated by Azure AD to authorize the requests.

Minimize Effort: Minimize configuration and management.

API Management: The APIs are published using Azure API Management.

Answer Area:

Grant permissions to allow the web apps to access the web APIs by using:

Azure AD

Configure a JSON Web Token (JWT) validation policy by using:

Azure API Management

Explanation:

Grant permissions to allow the web apps to access the web APIs by using:

Azure AD:

Why it’s correct: You use Azure AD to control which web apps can call which APIs. In Azure AD, you configure API permissions on each web app (client) registration to grant it access to the required web APIs; once the permissions are granted and consented to, the web apps can acquire the appropriate access tokens from Azure AD. The permissions are recorded in the app registration’s manifest.
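
As an illustrative sketch, granting and consenting to an API permission can also be done from the Azure CLI; all IDs below are hypothetical placeholders:

# Hedged sketch: grant a client web app access to a scope exposed by a web API.
az ad app permission add --id <client-app-id> --api <web-api-app-id> --api-permissions <scope-id>=Scope
# Grant tenant-wide admin consent so individual users are not prompted.
az ad app permission admin-consent --id <client-app-id>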

Why not others:

Azure API Management: API Management is not used to configure the permissions that allow web apps to acquire a token.

The web APIs: An API exposes permissions (scopes and app roles), but granting a client application access to them is done in Azure AD, not in the API itself.

Configure a JSON Web Token (JWT) validation policy by using:

Azure API Management:

Why it’s correct: Azure API Management can be used to validate the incoming access tokens that are provided by Azure AD. In API Management, you would add a policy that validates the JWT access token to ensure that it was issued by Azure AD and that it has the correct claims. This policy would ensure that only tokens with the correct audience (the API), issued by the correct tenant are allowed to proceed.
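
A minimal sketch of such a check, using API Management’s validate-jwt policy; the tenant ID and audience value are hypothetical placeholders:

<!-- Hedged sketch: reject requests whose bearer token was not issued by the
     expected Azure AD tenant for the expected API audience. -->
<validate-jwt header-name="Authorization" failed-validation-httpcode="401"
              failed-validation-error-message="Unauthorized. Token missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/<tenant-id>/.well-known/openid-configuration" />
    <audiences>
        <audience>api://<web-api-app-id></audience>
    </audiences>
</validate-jwt>

Placing this at a scope that applies to all APIs means the check is configured once rather than 20 times.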

Why not others:

Azure AD: Azure AD issues the access tokens; it does not inspect or validate tokens on incoming requests to the APIs.

The web APIs: Each API could validate tokens itself, but that would duplicate the validation logic across all 20 APIs. Centralizing the check in API Management minimizes configuration and management effort.

Important Notes for the AZ-304 Exam:

Azure AD Authentication and Authorization: Understand the basic principles of Azure AD authentication (users/apps obtaining tokens) and authorization (validating tokens and granting access).

API Permissions in Azure AD: Know how to configure API permissions to grant an application access to an API.

JSON Web Tokens (JWT): Understand what a JWT token is and what it contains (claims).

Azure API Management: Know the role of API Management in securing APIs. Know how to implement policies to validate tokens.

Centralized Policy: The exam often rewards solutions that implement policies in a centralized manner, instead of doing it in all the individual resources.

Exam Focus: Read the requirement closely and select the best way to implement a given solution.

133
Q

HOTSPOT

You have the application architecture shown in the following exhibit.

Components in the architecture (as described in the exhibit):

Azure Active Directory (Azure AD): provides authentication for the system.

Azure DNS: manages domain name resolution for the application.

Traffic Manager: handles global traffic routing to the appropriate region (active or standby).

Active region: contains a Web App (the primary application host) and a SQL Database (the primary application database).

Standby region: contains a secondary Web App and a secondary SQL Database (likely configured for replication or failover).

Flow of data: Traffic Manager routes user requests from the internet to either the active or standby region based on availability, and Azure AD provides authentication services for the application.

Use the drop-down menus to select choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

To change the front end to an active/active architecture in which both regions process incoming connections, you must [answer choice].
add a load balancer to each region
add an Azure Application Gateway to each region
add an Azure content delivery network (CDN)
modify the Traffic Manager routing method

To control the threshold for failing over the front end to the standby region, you must configure [answer choice].
an Application Insights availability test
Azure SQL Database failover groups
Connection Monitor in Azure Network Watcher
Endpoint monitor settings in Traffic Manager

A

Architecture Summary:

The application uses Azure AD for authentication, Azure DNS for domain resolution, and Traffic Manager for global routing.

It has an active and a standby region for high availability and disaster recovery.

Traffic Manager directs traffic to either the active or standby region, but not both simultaneously.

Answer Area:

To change the front end to an active/active architecture in which both regions process incoming connections, you must

modify the Traffic Manager routing method

To control the threshold for failing over the front end to the standby region, you must configure the

Endpoint monitor settings in Traffic Manager

Explanation:

To change the front end to an active/active architecture in which both regions process incoming connections, you must:

modify the Traffic Manager routing method:

Why it’s correct: Traffic Manager’s routing method determines how it directs traffic. To make the setup active/active, where both regions receive traffic concurrently, change the routing method from one that sends all traffic to a single region (Priority) to one that distributes traffic across regions, such as Performance, Weighted, or Geographic.
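
As a sketch, switching an existing profile’s routing method via the Azure CLI; the profile and resource group names are hypothetical:

# Hedged sketch: change the profile from Priority to Performance routing
# so both regional endpoints receive traffic concurrently.
az network traffic-manager profile update \
  --name contoso-tm --resource-group rg-frontend \
  --routing-method Performance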

Why not others:

add a load balancer to each region: A load balancer within each region only balances the traffic within a specific region and does not solve for the active/active requirement, where both regions must process traffic.

add an Azure Application Gateway to each region: Application Gateway is also an in-region load balancer for HTTP traffic. It is not the solution to make two regions process traffic concurrently.

add an Azure content delivery network (CDN): A CDN caches static assets closer to users. It does not make the application active/active.

To control the threshold for failing over the front end to the standby region, you must configure the:

Endpoint monitor settings in Traffic Manager:

Why it’s correct: Traffic Manager probes each endpoint to determine its health. The endpoint monitor settings define the probe protocol, port, and path, the probing interval, the timeout, and the tolerated number of failures; together these determine when Traffic Manager considers an endpoint unhealthy and fails traffic over to the standby region.
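
For illustration, the same profile’s monitor settings can be tuned from the CLI; the values shown are illustrative and the names are hypothetical:

# Hedged sketch: probe /health over HTTPS every 10 s, time out after 5 s,
# and treat the endpoint as degraded after 3 consecutive failures.
az network traffic-manager profile update \
  --name contoso-tm --resource-group rg-frontend \
  --protocol HTTPS --port 443 --path "/health" \
  --interval 10 --timeout 5 --max-failures 3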

Why not others:

an Application Insights availability test: Application Insights monitors applications for performance and errors. It is not directly responsible for Traffic Manager endpoint monitoring or failover decisions.

Azure SQL Database failover groups: This is for database failover, not application failover. While related, it does not control Traffic Manager endpoint health checks.

Connection Monitor in Azure Network Watcher: This is used for network performance monitoring, and is not used to control Traffic Manager routing decisions.

Important Notes for the AZ-304 Exam:

Traffic Manager: Be very familiar with the different routing methods (Priority, Performance, Weighted, Geographic) and when to use each. Know how Traffic Manager monitors endpoint health. This is a common topic.

Active/Active Architecture: Understand the differences between active-passive and active-active architectures.

Load Balancing: Know the differences between global load balancers like Traffic Manager, and local load balancers like Azure Load Balancer or Application Gateway.

Endpoint Monitoring: Know how to configure endpoint health probes in Traffic Manager.

Failover: Understand how failover is triggered and the common use cases for disaster recovery scenarios.

High Availability: Be familiar with the different ways to implement high availability, both at the application level and at the data level.

Exam Focus: Identify the specific problem being asked about and select the service that performs that function. For example, Traffic Manager handles global routing of traffic, while a load balancer handles in-region balancing of connections.