test0 Flashcards
You are designing an Azure governance solution.
All Azure resources must be easily identifiable based on the following operational information: environment, owner, department and cost center.
You need to ensure that you can use the operational information when you generate reports for the Azure resources.
What should you include in the solution?
A. an Azure data catalog that uses the Azure REST API as a data source
B. an Azure management group that uses parent groups to create a hierarchy
C. an Azure policy that enforces tagging rules
D. Azure Active Directory (Azure AD) administrative units
The correct answer is C. an Azure policy that enforces tagging rules.
Here’s why:
Tags are the Key: Azure tags are key-value pairs that you can apply to Azure resources. They are specifically designed to store metadata like environment, owner, department, and cost center. This allows you to easily filter, group, and report on your resources based on these operational details.
Azure Policy Enforces Consistency: Using Azure Policy, you can define rules that require specific tags to be present when resources are created or updated. This ensures that all resources are consistently tagged with the necessary information. Without policy, users might forget or apply tags inconsistently, making reporting difficult.
Let’s look at why the other options are not the best fit:
A. an Azure data catalog that uses the Azure REST API as a data source: Azure Data Catalog is a metadata management service that helps you discover, understand, and consume data. While it could potentially be used to collect and store tag information, it’s not the primary tool for enforcing tagging or making it consistently available.
B. an Azure management group that uses parent groups to create a hierarchy: Management groups are for organizing and managing subscriptions, not for tagging individual resources. They can help you apply policy at a high level, but they don’t provide the granular operational information you need for each resource.
D. Azure Active Directory (Azure AD) administrative units: Administrative units in Azure AD are for delegating administrative permissions to specific sets of users and resources. They do not directly relate to resource tagging and reporting of operational information.
In summary, to ensure resources are consistently tagged with operational information for reporting, you need to enforce tagging rules, which is best achieved with an Azure Policy.
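Once tagging is enforced, the reporting itself is straightforward. As an illustration, here is a minimal Azure Resource Graph query (KQL) that pulls the operational tags into a report; the tag names are assumptions based on the scenario, and tag-key lookups are case-sensitive:

```kql
// List every resource with its governance tags (tag names assumed from the scenario).
Resources
| extend environment = tostring(tags['environment']),
         owner       = tostring(tags['owner']),
         department  = tostring(tags['department']),
         costCenter  = tostring(tags['costcenter'])
| project name, type, resourceGroup, environment, owner, department, costCenter
| order by department asc, costCenter asc
```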
You are designing a large Azure environment that will contain many subscriptions.
You plan to use Azure Policy as part of a governance solution.
To which three scopes can you assign Azure Policy definitions? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. subscriptions
D. compute resources
E. resource groups
F. management groups
The correct answers are C. subscriptions, E. resource groups, and F. management groups.
Here’s why:
Subscriptions (C): Azure Policies can be assigned directly to Azure subscriptions. This allows you to enforce policies across all resources within that subscription. This is a very common level for applying policies.
Resource Groups (E): Policies can be assigned at the resource group level, which provides granular control over a specific collection of resources. This is useful for applying different policies to different application environments or projects.
Management Groups (F): Management groups are designed to create a hierarchy above subscriptions, allowing you to apply policies to entire groups of subscriptions within your Azure environment. This is useful for establishing overarching governance rules for many subscriptions at once.
Let’s look at why the other options are not correct scopes for assigning Azure Policy definitions:
A. Azure Active Directory (Azure AD) administrative units: Azure AD administrative units are used for managing users and groups within Azure AD, not for managing resource policies.
B. Azure Active Directory (Azure AD) tenants: While Azure Policy definitions are created at the tenant level (so they can be used across subscriptions), policies are assigned to management groups, subscriptions or resource groups, not directly to the tenant.
D. Compute resources: While Azure Policy can apply to compute resources, you can’t directly assign a policy to an individual compute resource. Policies must be applied to management groups, subscriptions, or resource groups which contain resources.
Key takeaway: Azure Policy scopes are hierarchical, starting with Management Groups at the top, then Subscriptions, and then Resource Groups. This allows you to enforce consistent governance across your Azure environment at various levels of granularity.
HOTSPOT -
You plan to create an Azure environment that will contain a root management group and 10 child management groups. Each child management group will contain five Azure subscriptions. You plan to have between 10 and 30 resource groups in each subscription.
You need to design an Azure governance solution. The solution must meet the following requirements:
✑ Use Azure Blueprints to control governance across all the subscriptions and resource groups.
✑ Ensure that Blueprints-based configurations are consistent across all the subscriptions and resource groups.
✑ Minimize the number of blueprint definitions and assignments.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Level at which to define the blueprints:
The child management groups
The root management group
The subscriptions
Level at which to create the blueprint assignments:
The child management groups
The root management group
The subscriptions
Level at which to define the blueprints:
The root management group
Level at which to create the blueprint assignments:
The child management groups
Explanation:
Blueprint Definition Scope: Blueprints can be defined at the management group or subscription level; in this scenario the management group level is the better choice. To minimize the number of blueprint definitions, define the blueprints at the root management group. This gives you a single source of truth for governance configurations, because all child management groups inherit the blueprints defined at the root management group level.
Blueprint Assignment Scope: You want governance applied consistently to all 50 subscriptions and the resources they contain. You should create blueprint assignments at the child management group level. When you assign a blueprint to a management group, all subscriptions within that group inherit the assigned configurations.
Why this is the best approach:
Centralized Governance: Defining the blueprint at the root allows you to have a central definition that you can manage in one place.
Consistent Application: Assigning the blueprint to the child management groups ensures that all subscriptions within each child management group have the same policy and resource settings, providing consistent governance.
Minimize Effort: You avoid creating multiple blueprint definitions or assigning to each subscription individually.
Why other options are incorrect:
Defining blueprints at the child management group level: This would require you to potentially have multiple blueprint definitions which contradicts the requirement of minimizing blueprint definitions.
Defining blueprints at subscription level: This would require you to potentially have multiple blueprint definitions and would require more effort to assign blueprints to all the subscriptions.
Assigning blueprints at the root management group level: While this would technically propagate the configuration to the child management groups, assigning at the child management group level allows for greater flexibility if you later need settings specific to a child management group.
Assigning blueprints at the subscription level: This would require you to assign blueprints to each individual subscription, which contradicts the requirement to minimize blueprint assignments.
HOTSPOT -
You need to design an Azure policy that will implement the following functionality:
✑ For new resources, assign tags and values that match the tags and values of the resource group to which the resources are deployed.
✑ For existing resources, identify whether the tags and values match the tags and values of the resource group that contains the resources.
✑ For any non-compliant resources, trigger auto-generated remediation tasks to create missing tags and values.
The solution must use the principle of least privilege.
What should you include in the design? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Azure Policy effect to use:
Append
EnforceOPAConstraint
EnforceRegoPolicy
Modify
Azure Active Directory (Azure AD) object and role-based access control (RBAC) role to use for the remediation tasks:
A managed identity with the Contributor role
A managed identity with the User Access Administrator role
A service principal with the Contributor role
A service principal with the User Access Administrator role
Azure Policy effect to use:
Modify
Azure Active Directory (Azure AD) object and role-based access control (RBAC) role to use for the remediation tasks:
A managed identity with the Contributor role
Explanation:
Modify Effect: The Modify effect in Azure Policy is the correct choice because it can add, update, or remove tags on resources. This effect handles both new deployments (by assigning tags) and existing resources (by updating tags), and it supports the remediation tasks this scenario requires. The other options do not provide the appropriate functionality.
Append can only add tags and values when resources are created or updated; it cannot change existing values and does not support remediation tasks.
EnforceOPAConstraint is related to Open Policy Agent, which is not the correct approach in this case.
EnforceRegoPolicy is related to Rego policies, which is not the correct approach in this case.
Managed Identity with Contributor Role:
Managed Identity: Using a managed identity is the best practice for security in Azure. It eliminates the need to manage credentials and provides a secure way for Azure Policy to access resources.
Contributor Role: The Contributor role is sufficient for Azure Policy to create and update tags on resources within the scope where the policy is applied, and among the options it is the least-privileged role that can do so. The User Access Administrator role manages access to resources (role assignments), which is not needed for tagging and would violate the principle of least privilege.
A service principal could be used in a similar capacity, but a managed identity is the better practice because its credentials are managed automatically by the platform.
Why this is the best approach:
Correct Functionality: The Modify effect allows for the desired behavior to create and modify tags.
Principle of Least Privilege: The Contributor role provides the necessary permissions to update tags without granting unnecessary access.
Security Best Practice: Using a managed identity avoids credential management and is the recommended approach for accessing Azure resources securely.
In summary: The Modify effect along with a managed identity using the Contributor role is the most effective way to implement this policy with the principle of least privilege.
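As a sketch of how you would then inspect the non-compliant resources that the remediation tasks will target, here is an Azure Resource Graph query over policy states (standard PolicyResources schema assumed):

```kql
// Resources currently evaluated as non-compliant by Azure Policy.
PolicyResources
| where type =~ 'microsoft.policyinsights/policystates'
| where properties.complianceState =~ 'NonCompliant'
| project resourceId = tostring(properties.resourceId),
          assignment = tostring(properties.policyAssignmentName)
```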
You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.
What should you include in the recommendation?
A. Azure Activity Log
B. Azure Advisor
C. Azure Analysis Services
D. Azure Monitor action groups
The correct answer is A. Azure Activity Log.
Here’s why:
Azure Activity Log’s Purpose: The Azure Activity Log is a service that provides a record of all operations that occur in your Azure subscription. This includes creation, modification, and deletion of resources. It’s essentially an audit log for your Azure environment. This includes the information on new ARM resource deployments.
Reporting on Deployments: Because the Activity Log records all resource deployment events, it is the ideal place to extract data for your monthly report of new deployments. You can filter and export the Activity Log data to analyze and build your report.
Let’s look at why the other options are not the best fit:
B. Azure Advisor: Azure Advisor analyzes your Azure resources and provides recommendations for performance, security, cost, and high availability improvements. While useful, it does not directly provide a report of new resource deployments.
C. Azure Analysis Services: Azure Analysis Services is a fully managed platform-as-a-service (PaaS) that provides enterprise-grade data modeling, analysis, and reporting capabilities. It’s typically used for complex data analysis, not for basic reporting on resource deployments.
D. Azure Monitor action groups: Azure Monitor action groups are used to trigger actions when certain alerts are fired from Azure Monitor. While it’s great for real-time alerts, it’s not intended for generating monthly reports on resource deployments.
Key Takeaway: Azure Activity Log is the core service in Azure that records all operations on your resources, making it the best choice for generating reports on new deployments.
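As an illustration, if the Activity Log is also routed to a Log Analytics workspace (a common way to build such a report), a query like the following lists the successful write operations from the last month; column names assume the standard AzureActivity schema:

```kql
// Successful resource write (create/update) operations in the last 30 days.
AzureActivity
| where TimeGenerated > ago(30d)
| where OperationNameValue endswith "/WRITE"
| where ActivityStatusValue == "Success"   // may appear as "Succeeded" in older schemas
| project TimeGenerated, Caller, ResourceGroup, _ResourceId, OperationNameValue
| order by TimeGenerated desc
```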
Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.
Solution: Install and configure the Azure Monitoring agent and the Dependency Agent on all the virtual machines. Use VM insights in Azure Monitor to analyze the network traffic.
Does this meet the goal?
A. Yes
B. No
The answer is B. No.
Here’s why:
While the Azure Monitor Agent and the Dependency Agent are indeed components of VM insights, they are not the right tools for analyzing packet-level network traffic to determine whether packets are being allowed or denied. They provide a view of network connections and dependencies, not the packet-level detail this scenario requires.
Here’s a more detailed explanation:
VM Insights: VM Insights in Azure Monitor provides information about the performance and dependencies of your virtual machines. It can show you which machines are communicating with each other and the network connections between them, but not the detail of whether packets are being allowed or dropped based on firewall rules, for example.
Dependency Agent: This agent discovers and maps the connections and dependencies between processes, but it does not capture packet-level information.
Network Connectivity Issues: To diagnose network connectivity issues, particularly when trying to determine if packets are being allowed or denied, you typically need more detailed tools that operate at the network layer.
What should be used instead?
To analyze if packets are being allowed or denied, you would typically use:
Network Watcher: Azure Network Watcher is a service that allows you to monitor and diagnose network conditions. Key features include:
Packet Capture: This feature lets you capture packets going to and from virtual machines, allowing for deep packet inspection.
IP Flow Verify: This feature allows you to test if packets are being allowed or denied based on the configured security rules.
Connection Troubleshooter: Helps troubleshoot connection issues by verifying the path of the traffic and the security rules in place.
Network Security Group (NSG) Flow Logs: This allows you to capture information about the IP traffic flowing through an NSG. You can use this data to analyze whether traffic is being allowed or denied.
In summary: While the proposed solution is useful for monitoring and visualizing network connections, it’s not suitable for analyzing the specific packet-level details needed for diagnosing packet allowance or denial issues. Network Watcher or NSG Flow Logs are more appropriate tools for the required task.
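As a sketch, if Traffic Analytics processes the NSG flow logs into a Log Analytics workspace, denied flows can be listed with a query like this (AzureNetworkAnalytics_CL custom-table schema assumed):

```kql
// Flows denied by NSG rules; FlowStatus_s is 'A' (allowed) or 'D' (denied).
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog"
| where FlowStatus_s == "D"
| project TimeGenerated, SrcIP_s, DestIP_s, DestPort_d, NSGRule_s
| order by TimeGenerated desc
```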
DRAG DROP -
You need to design an architecture to capture the creation of users and the assignment of roles. The captured data must be stored in Azure Cosmos DB.
Which services should you include in the design? To answer, drag the appropriate services to the correct targets. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Azure Services
Azure Event Grid
Azure Event Hubs
Azure Functions
Azure Monitor Logs
Azure Notification Hubs
Answer Area
Azure Active Directory audit log
↓
Azure service (unspecified in the image)
↓
Azure service (unspecified in the image)
↓
Cosmos DB
Azure Active Directory audit log
↓
Azure Event Hubs
↓
Azure Functions
↓
Cosmos DB
Explanation:
Azure Active Directory audit log: This is the source of the data, containing the records of user creations and role assignments within your Azure AD tenant.
Azure Event Hubs: Azure Event Hubs is a highly scalable event ingestion service, ideal for capturing the audit log events from Azure AD. It can handle high volumes of data, which is crucial for logging events. It acts as a buffer or intermediary between the event source (Audit log) and the destination where the event data will be stored (Cosmos DB).
Azure Functions: Azure Functions provides a serverless compute platform, which makes it suitable for processing and transforming the raw event data from Event Hubs before storing it into Cosmos DB. We need an intermediary service to transform the event data before passing it to Cosmos DB. It also allows you to add logic to extract the specific fields you want from the raw audit log events for efficient querying.
Cosmos DB: Azure Cosmos DB is a NoSQL database that can store a large variety of data. In this case, it will store the transformed data of user creations and role assignments in a database.
Why other services are not appropriate:
Azure Event Grid: Event Grid is primarily for near real-time reactive event routing, not for high volume continuous data ingestion for storage, which we need here. It’s often used for more immediate actions like triggering alerts or other events.
Azure Monitor Logs: Azure Monitor Logs is used for storing and querying log and metrics data from Azure resources. It can be used for analyzing logs, but it’s not the appropriate intermediary to move the data to the Cosmos DB instance.
Azure Notification Hubs: Notification Hubs is for sending push notifications to various platforms and is not relevant for this scenario.
In summary: The correct flow is to capture the events from the Azure AD Audit Log with Azure Event Hubs, transform and prepare data using Azure Functions and then save the results in Azure Cosmos DB.
HOTSPOT -
What should you implement to meet the identity requirements? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Service:
Azure AD Identity Governance
Azure AD Identity Protection
Azure AD Privileged Identity Management (PIM)
Azure Automation
Feature:
Access packages
Access reviews
Approvals
Runbooks
Service:
Azure AD Privileged Identity Management (PIM)
Feature:
Access reviews
Here’s why these are the correct choices:
For Service - Azure AD PIM:
Provides time-based and approval-based role activation
Minimizes risks from excessive permissions
Manages, controls, and monitors access within Azure AD
Essential for privileged account security
Implements just-in-time access
For Feature - Access reviews:
Part of identity governance strategy
Ensures right people have appropriate access
Helps maintain compliance
Enables regular review of access rights
Reduces security risks through periodic validation
Important notes for the AZ-305 exam:
Understand the differences between:
Identity Governance (overall strategy)
Identity Protection (risk-based security)
PIM (privileged access management)
Know key features:
How access reviews work
PIM workflow and configuration
Identity governance implementation
Security best practices
Focus on:
Security principles
Compliance requirements
Access management lifecycle
Privileged account protection
Remember to understand how these services integrate with other Azure security features for comprehensive identity management.
You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.
What should you include in the recommendation?
A. Application Insights
B. Azure Arc
C. Azure Log Analytics
D. Azure Monitor metrics
The correct answer is C. Azure Log Analytics.
Here’s why:
Azure Log Analytics and the Activity Log: Azure Log Analytics is the service within Azure Monitor that allows you to collect and analyze logs and other data, including the Azure Activity Log. The Activity Log contains the records of all operations performed on resources within your subscription, including the creation of new resources. Log Analytics provides powerful querying and reporting capabilities that let you extract and format the information about new resource deployments into a monthly report.
Data Collection: The Activity Log is routed to a Log Analytics workspace by configuring a diagnostic setting at the subscription level; no agent is required.
Querying: Using Kusto Query Language (KQL), you can write specific queries against the Activity Log data to filter for resource creation events, sort them by time, and create a monthly summary.
Reporting: You can then use Azure Log Analytics features like dashboards and workbooks to build visualizations for your monthly report, or export the data to another reporting tool.
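A sketch of such a monthly summary query, assuming the standard AzureActivity schema (note that write operations cover updates as well as creations):

```kql
// Monthly count of successful resource writes per resource provider.
AzureActivity
| where CategoryValue == "Administrative"
| where OperationNameValue endswith "/WRITE" and ActivityStatusValue == "Success"
| summarize deployments = count() by month = startofmonth(TimeGenerated), ResourceProviderValue
| order by month desc
```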
Let’s examine why the other options are less suitable:
A. Application Insights: Application Insights is primarily for monitoring the performance and behavior of applications. While it does capture logs related to application usage and errors, it’s not designed to track resource deployment events from the Azure Activity Log.
B. Azure Arc: Azure Arc is a service that extends Azure management capabilities to other platforms, including on-premises and other clouds. It does not have a direct relationship with reporting on Azure resource deployments.
D. Azure Monitor metrics: Azure Monitor metrics collect numeric data over time, like CPU usage, memory utilization, etc. While these are valuable for performance monitoring, they don’t provide the detailed event information, or creation events, needed for the deployment reporting requirement.
In conclusion: Azure Log Analytics is the correct service because it is designed for collecting, storing, querying and reporting on log data, including the Azure Activity Log, which contains the necessary information to report on new ARM resource deployments.
You have an Azure subscription.
You plan to deploy a monitoring solution that will include the following:
- Azure Monitor Network Insights
- Application Insights
- Microsoft Sentinel
- VM insights
The monitoring solution will be managed by a single team.
What is the minimum number of Azure Monitor workspaces required?
A. 1
B. 2
C. 3
D. 4
The correct answer is A. 1.
Here’s why:
Azure Monitor Workspace (Log Analytics Workspace): An Azure Monitor workspace, also known as a Log Analytics workspace, is a fundamental component of Azure Monitor. It’s where log data and other telemetry are stored for analysis and visualization. All the services you listed (Network Insights, Application Insights, Microsoft Sentinel, and VM insights) can send data to the same workspace.
Single Team Management: Since the monitoring solution will be managed by a single team, there’s no need to separate the data into multiple workspaces for access or organizational purposes.
Cost-Effectiveness: Using a single workspace is generally more cost-effective, as you avoid the overhead of managing multiple workspaces and potential data transfer charges between them.
Why Not More Workspaces?
Multiple workspaces are often used when:
You need to separate data for different environments (e.g., development, testing, production).
You have different teams that require segregated access to specific data.
You have different regulatory or compliance requirements that require isolating data from different sources.
None of those conditions apply here: The problem specifies a single team managing the monitoring solution, which means that data separation and access control do not require multiple workspaces. Therefore, one workspace is the most efficient option.
In conclusion: For a single team managing a monitoring solution that includes the specified services, a single Azure Monitor workspace is sufficient and the most cost-effective choice.
You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.
What should you include in the recommendation?
A. Application Insights
B. Azure Analysis Services
C. Azure Advisor
D. Azure Activity Log
The correct answer is D. Azure Activity Log.
Here’s why:
Azure Activity Log’s Function: The Azure Activity Log is a service that provides a detailed record of all operations that occur in your Azure subscription. This includes the creation, modification, and deletion of resources. It is essentially an audit log for your Azure environment. Specifically, it records when new ARM resources are deployed.
Generating Reports: The Activity Log’s data can be filtered, exported, and analyzed to create a monthly report of new resource deployments. You can export the log to various destinations, such as Azure Storage, Azure Event Hubs, or Azure Log Analytics, for further analysis and reporting.
Purpose-Built: The Activity Log is designed for tracking operational events, such as the creation of resources. It’s the most appropriate tool for generating this kind of report.
Let’s review why the other options are not the best fit:
A. Application Insights: Application Insights is a service designed to monitor the performance and usage of applications. While it can log some operational events from within the application, it doesn’t track resource deployment events at the subscription level from ARM.
B. Azure Analysis Services: Azure Analysis Services is a data analytics service used for creating complex data models for reporting. It does not contain the data on new resource deployments at the ARM level.
C. Azure Advisor: Azure Advisor is a recommendation engine that analyzes your Azure resources and provides recommendations for cost, performance, and security improvements. It's not designed for reporting on new resource deployments.
In conclusion: The Azure Activity Log is the ideal service for providing the necessary data to generate a monthly report of new ARM resource deployments, as it records all resource operations within your Azure subscription.
HOTSPOT
Case Study
Overview
Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.
Existing Environment: Active Directory Environment
The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests.
Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication.
Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.
Existing Environment: Network Infrastructure
Each office contains at least one domain controller from the corp.fabrikam.com domain. The main office contains all the domain controllers for the rd.fabrikam.com forest.
All the offices have a high-speed connection to the internet.
An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.
The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.
Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.
Existing Environment: Problem Statements
The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.
Fabrikam plans to move most of its production workloads to Azure during the next few years, including virtual machines that rely on Active Directory for authentication.
As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft 365 deployment.
All R&D operations will remain on-premises.
Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.
Requirements: Technical Requirements
Fabrikam identifies the following technical requirements:
- Website content must be easily updated from a single point.
- User input must be minimized when provisioning new web app instances.
- Whenever possible, existing on-premises licenses must be used to reduce cost.
- Users must always authenticate by using their corp.fabrikam.com UPN identity.
- Any new deployments to Azure must be redundant in case an Azure region fails.
- Whenever possible, solutions must be deployed to Azure by using the Standard pricing tier of Azure App Service.
- An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
- In the event that a link fails between Azure and the on-premises network, ensure that the virtual machines hosted in Azure can authenticate to Active Directory.
- Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.
Requirements: Database Requirements
Fabrikam identifies the following database requirements:
- Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
- To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
- Database backups must be retained for a minimum of seven years to meet compliance requirements.
Requirements: Security Requirements
Fabrikam identifies the following security requirements:
- Company information including policies, templates, and data must be inaccessible to anyone outside the company.
- Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an internet link fails.
- Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
- All administrative access to the Azure portal must be secured by using multi-factor authentication (MFA).
- The testing of WebApp1 updates must not be visible to anyone outside the company.
To meet the authentication requirements of Fabrikam, what should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Minimum number of Azure AD tenants:
0
1
2
3
4
Minimum number of custom domains to add:
0
1
2
3
4
Minimum number of conditional access policies to create:
0
1
2
3
4
Minimum number of Azure AD tenants:
Answer: 1
Explanation: Fabrikam needs a single Azure AD tenant to represent the organization in Azure. This tenant will be used to synchronize user identities from the on-premises corp.fabrikam.com domain to the cloud, allowing users to authenticate to Azure resources and Microsoft 365 services, including the Azure portal, with their existing corporate credentials. There is no need for multiple tenants because there is only one organization.
Minimum number of custom domains to add:
Answer: 1
Explanation: Fabrikam needs to add a single custom domain that will be used as the UPN (User Principal Name) suffix. This domain should match the on-premises domain, corp.fabrikam.com, so that users can sign in to Azure resources and services with the same UPN they use on-premises. This is a key step in establishing a hybrid identity environment and allows users to use the same credentials in the cloud.
Minimum number of conditional access policies to create:
Answer: 2
Explanation: Fabrikam requires at least two conditional access policies to meet the requirements:
MFA for Azure Portal Administrators: One policy to enforce MFA for all administrators when accessing the Azure portal using their corp.fabrikam.com accounts. This meets the requirement that “All administrative access to the Azure portal must be secured by using multi-factor authentication (MFA).”
Access From On-Premises Networks: One policy to ensure that users can still authenticate to resources (including Azure virtual machines) using their corp.fabrikam.com credentials even if the connection between Azure and the on-premises network fails. This addresses the need: “Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an internet link fails.”
Why Other Options Are Incorrect:
Zero or Multiple Azure AD Tenants: Only one Azure AD tenant is needed to centralize identity management for the organization. Using more than one would add unnecessary complexity.
Zero or Multiple Custom Domains: At least one custom domain is needed so that Azure AD sign-ins use the same user principal name as on-premises.
Zero or One Conditional Access Policy: Two policies are required to fulfil the security requirements: one for MFA and one to ensure on-premises access. Three or more might be configured, but the question asks for the minimum number.
In Summary:
One Azure AD tenant is needed to centralize identity management.
One custom domain needs to be added to allow users to authenticate with their UPN.
Two conditional access policies are the minimum needed to secure administrator access to the portal with MFA, and to allow for on-premises access if the internet link fails.
You have an Azure subscription that contains an Azure SQL database named DB1. Several queries that query the data in DB1 take a long time to execute.
You need to recommend a solution to identify the queries that take the longest to execute.
What should you include in the recommendation?
A. SQL Database Advisor
B. Azure Monitor
C. Performance Recommendations
D. Query Performance Insight
The correct answer is D. Query Performance Insight.
Here’s why:
Purpose-Built for Query Analysis: Query Performance Insight is a feature of Azure SQL Database specifically designed to identify and analyze the performance of database queries. It provides detailed information about query execution, including duration, resource consumption (CPU, I/O, etc.), and execution counts. This makes it ideal for pinpointing the queries that are taking the longest to execute.
Direct Identification of Slow Queries: It directly surfaces the slowest running queries, making it easy to identify the problem areas in your database workload.
Historical Data: It also shows historical query performance, which is useful for trend analysis and for identifying regressions after changes.
Let’s look at why the other options are not the best fit:
A. SQL Database Advisor: The SQL Database Advisor offers recommendations for improving database performance, such as indexing or schema adjustments. While these recommendations might indirectly improve query performance, it doesn’t directly identify which queries are running slowly. It’s more proactive than reactive in addressing performance issues. It will not directly show which queries are slow.
B. Azure Monitor: Azure Monitor is a general monitoring service for Azure resources. While it can collect metrics and logs for your Azure SQL Database, it does not provide the specific query performance insights that Query Performance Insight provides. You would have to write your own custom logs to track these slow queries if using Azure Monitor, and it’s not as easy as using Query Performance Insight.
C. Performance Recommendations: “Performance Recommendations” is a general term rather than a specific tool or service. Azure SQL Database has the Database Advisor, which gives recommendations, but it does not directly identify the slowest queries.
In summary: Query Performance Insight is the correct choice because it is specifically designed for analyzing the performance of queries and will directly show the queries that take the longest to execute.
You have an Azure App Service Web App that includes Azure Blob storage and an Azure SQL Database instance. The application is instrumented by using the Application Insights SDK.
1.) Correlate Azure resource usage and performance data with app configuration and performance data
2.) Visualize the relationships between application components
3.) Track requests and exceptions to a specific line of code within the application
4.) Analyze how many users return to the application and how often they select a particular dropdown value
You need to design a monitoring solution for the web app. Which Azure monitoring services should you use for each?
a. Azure Application Insights
b. Azure Service Map
c. Azure Monitor Logs
d. Azure Activity Log
- Correlate Azure resource usage and performance data with app configuration and performance data:
Answer: a. Azure Application Insights
Explanation: Application Insights is specifically designed to monitor applications and provides deep insights into their performance. It automatically collects telemetry data like request rates, response times, exception rates, and dependency calls. When combined with the Azure Monitor metrics that Application Insights collects on the host and other Azure resources, you can correlate application performance with underlying infrastructure performance to identify bottlenecks and performance issues. It can also show performance information in the application itself via traces.
- Visualize the relationships between application components:
Answer: b. Azure Service Map
Explanation: Azure Service Map automatically discovers application components and maps the dependencies between them. It provides a visual representation of the application architecture, allowing you to quickly identify how different components are connected and the network traffic flows between them. This visualization is crucial for understanding complex application architectures.
- Track requests and exceptions to a specific line of code within the application:
Answer: a. Azure Application Insights
Explanation: Using the Application Insights SDK, you can implement custom telemetry, including logging specific trace statements, and exceptions within the code. This capability allows developers to track requests as they pass through the various parts of the application and to pinpoint issues with specific lines of code. With the Application Insights code level diagnostics, you can track execution flow to see which line of code is causing errors.
- Analyze how many users return to the application and how often they select a particular dropdown value:
Answer: a. Azure Application Insights
Explanation: Application Insights provides out-of-the-box user session tracking and event tracking. You can analyze user activity, including how many users return to your application and frequency. You can create custom event telemetry to track particular actions, such as selecting a dropdown value and using this data to generate usage patterns. You can track events within the code and also with client-side JavaScript.
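As an illustration of the dropdown analysis, here is a query over the classic Application Insights schema; the event name 'DropdownSelected' and the 'value' dimension are hypothetical names you would emit from your own custom telemetry:

```kql
// How often each dropdown value is selected, and by how many distinct users.
customEvents
| where name == "DropdownSelected"
| extend selectedValue = tostring(customDimensions["value"])
| summarize selections = count(), users = dcount(user_Id) by selectedValue
| order by selections desc
```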
In summary:
a. Azure Application Insights: Used for application performance monitoring, correlating infrastructure and application metrics, custom logging, code level diagnostics, and user behavior tracking.
b. Azure Service Map: Used for visualizing the relationships and dependencies between application components.
c. Azure Monitor Logs: This is not the best answer here as it would require a separate custom log that would need to be configured and managed separately. These logs are not automatically available for use.
d. Azure Activity Log: The Activity Log is more for administrative actions and not for application monitoring.
Therefore, you should use:
1 -> a
2 -> b
3 -> a
4 -> a
You have an on-premises Hyper-V cluster. The cluster contains Hyper-V hosts that run Windows Server 2016 Datacenter. The hosts are licensed under a Microsoft Enterprise Agreement that has Software Assurance.
The Hyper-V cluster contains 30 virtual machines that run Windows Server 2012 R2. Each virtual machine runs a different workload. The workloads have predictable consumption patterns.
You plan to replace the virtual machines with Azure virtual machines that run Windows Server 2016. The virtual machines will be sized according to the consumption pattern of each workload.
You need to recommend a solution to minimize the compute costs of the Azure virtual machines. Which two recommendations should you include in the solution?
A. Configure a spending limit in the Azure account center.
B. Create a virtual machine scale set that uses autoscaling.
C. Activate Azure Hybrid Benefit for the Azure virtual machines.
D. Purchase Azure Reserved Virtual Machine Instances for the Azure virtual machines.
E. Create a lab in Azure DevTest Labs and place the Azure virtual machines in the lab.
Discussion
The two correct recommendations are C. Activate Azure Hybrid Benefit for the Azure virtual machines and D. Purchase Azure Reserved Virtual Machine Instances for the Azure virtual machines.
Here’s why:
C. Activate Azure Hybrid Benefit:
How it Works: Azure Hybrid Benefit allows you to use your existing on-premises Windows Server licenses with Software Assurance to reduce the cost of running Windows Server virtual machines in Azure. Because the Hyper-V hosts are licensed under Software Assurance, you can apply the benefit to the Azure virtual machines and significantly reduce licensing costs.
Cost Savings: This directly lowers the per-hour cost of the virtual machines.
D. Purchase Azure Reserved Virtual Machine Instances:
How it Works: Reserved Instances (RIs) allow you to commit to using specific virtual machine sizes for one or three years, in exchange for a significant discount compared to pay-as-you-go pricing.
Cost Savings: Given the predictable consumption patterns of the workloads, Reserved Instances provide significant cost savings. The problem states the virtual machines will be sized according to the consumption pattern of each workload.
Why other options are not the best fit for minimizing compute costs:
A. Configure a spending limit in the Azure account center: While spending limits are crucial for cost management, they don’t directly reduce compute costs. They can prevent surprise bills by limiting consumption but do not reduce the actual cost of resources consumed.
B. Create a virtual machine scale set that uses autoscaling: While autoscaling can reduce overall costs by scaling down VMs when not needed, this can lead to more complex management. Given that the workload is predictable, it is better to purchase reserved instances of the VMs, which will provide more cost savings and is less complex to manage. This approach can provide cost benefits but not as much as reserved instances. Autoscaling is better for unpredictable workloads.
E. Create a lab in Azure DevTest Labs and place the Azure virtual machines in the lab: Azure DevTest Labs can help with cost management in development and test environments but doesn’t directly reduce the cost of production virtual machines. The problem states that each VM runs a different workload which suggests that they are for production. DevTest Labs also does not provide cost benefits like Reserved Instances and Hybrid Benefit.
In summary: To minimize compute costs of the Azure VMs when the workload is predictable, you should use Azure Hybrid Benefit to reduce licensing costs, and Azure Reserved Instances for a substantial discount.
You have an Azure subscription that contains the SQL servers:
SQLsvr1 –> RG1 –> East US
SQLsvr2 –> RG2 –> West US
The subscription contains the storage accounts:
Storage1 (StorageV2) –> RG1 –> East US
Storage2 (BlobStorage) –> RG2 –> West US
You create the Azure SQL databases:
SQLdb1 –> RG1 –> SQLsvr1 –> Standard pricing tier
SQLdb2 –> RG1 –> SQLsvr1 –> Standard pricing tier
SQLdb3 –> RG2 –> SQLsvr2 –> Premium pricing tier
1.) When you enable auditing for SQLdb1, can you store the audit info to storage1?
2.) When you enable auditing for SQLdb2, can you store the audit info to storage2?
3.) When you enable auditing for SQLdb3, can you store the audit info to storage2?
Key Concept: For Azure SQL Database auditing, you need a storage account in the same region as the SQL Server.
Here are the answers to your questions:
- When you enable auditing for SQLdb1, can you store the audit info to Storage1?
Answer: Yes
Explanation: SQLdb1 is located in the same resource group RG1 as SQLsvr1 and in the East US region. Storage1 is also in RG1 and the East US region. Because the storage account is in the same region as the SQL Server, it can be used to store audit logs.
- When you enable auditing for SQLdb2, can you store the audit info to Storage2?
Answer: No
Explanation: SQLdb2 is located in resource group RG1 and in the East US region. However, Storage2 is in resource group RG2 and the West US region. Since the storage account must be located in the same region as the SQL Server for auditing, Storage2 cannot be used to store audit logs for SQLdb2. You would need to use Storage1 or another storage account in the East US region.
- When you enable auditing for SQLdb3, can you store the audit info to Storage2?
Answer: Yes
Explanation: SQLdb3 is located in resource group RG2 and in the West US region, as well as SQLsvr2. Storage2 is in RG2 and the West US region. Because they are in the same region, Storage2 is a valid storage account to store audit logs for SQLdb3.
In summary:
SQLdb1 (East US) can store audit logs in Storage1 (East US).
SQLdb2 (East US) CANNOT store audit logs in Storage2 (West US).
SQLdb3 (West US) can store audit logs in Storage2 (West US).
Important Note: When configuring auditing for Azure SQL Database, the storage account must be in the same Azure region as the SQL Server. It does not matter if the storage account is in the same resource group. It is also important to note that the storage account type can be either blob storage or general purpose V2 storage for SQL auditing.
A company has a hybrid ASP.NET Web API application that is based on a software as a service (SaaS) offering.
Users report general issues with the data. You advise the company to implement live monitoring and use ad hoc queries on stored JSON data. You also advise the company to set up smart alerting to detect anomalies in the data.
You need to recommend a solution to set up smart alerting.
What should you recommend?
A. Azure Site Recovery and Azure Monitor Logs
B. Azure Data Lake Analytics and Azure Monitor Logs
C. Azure Application Insights and Azure Monitor Logs
D. Azure Security Center and Azure Data Lake Store
The correct answer is C. Azure Application Insights and Azure Monitor Logs.
Here’s why:
Azure Application Insights for Smart Alerting: Application Insights is a powerful Application Performance Monitoring (APM) service specifically designed to monitor web applications and their underlying services. It includes:
Smart Detection: Application Insights has built-in smart detection capabilities that use machine learning to automatically detect anomalies in your application’s performance, including response times, request rates, and exception rates. This is ideal for detecting unusual data issues.
Metrics and Telemetry: It collects a wealth of telemetry data that can be used for analysis and alerting. The data collected can include: application requests, traces, dependency calls, exceptions, and metrics. This is required to detect anomalies.
Custom Metrics: It allows you to create custom metrics and alerts specific to your application’s data patterns if needed.
Azure Monitor Logs (Log Analytics) for Data Analysis: While Application Insights handles smart alerting well, it’s useful to use Azure Monitor Logs (Log Analytics) in conjunction with it.
JSON Data Storage: Application Insights stores collected data, including logs and telemetry, in a Log Analytics workspace. This allows you to query and analyze your JSON data using Kusto Query Language (KQL).
Alerts based on Log queries: While Application Insights can trigger alerts directly, you can also create complex alerts based on log queries in Log Analytics. You can write queries that detect specific data patterns or anomalies. You can then create alerts based on these queries.
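As a sketch of such a log-based anomaly query (classic Application Insights 'requests' schema assumed), KQL's built-in series_decompose_anomalies function can flag unusual failure spikes:

```kql
// Flag anomalous hourly failure counts over the last 7 days.
requests
| make-series failedCount = countif(success == false) default = 0
    on timestamp from ago(7d) to now() step 1h
| extend (anomalies, score, baseline) = series_decompose_anomalies(failedCount, 1.5)
```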
Why the other options are not the best fit:
A. Azure Site Recovery and Azure Monitor Logs: Azure Site Recovery is primarily for business continuity and disaster recovery. It doesn’t provide the application performance monitoring and anomaly detection capabilities required here. It also does not monitor the data inside the application.
B. Azure Data Lake Analytics and Azure Monitor Logs: Azure Data Lake Analytics is designed for batch processing large datasets and performing advanced analytics. While it’s useful for analyzing data, it’s not the best fit for live monitoring of a web application or for setting up smart alerts for anomaly detection.
D. Azure Security Center and Azure Data Lake Store: Azure Security Center is focused on security posture management and threat protection. It does not provide the application monitoring and smart alerting capability required for application performance and data anomaly detection. Azure Data Lake Store is a storage service and does not have the ability to monitor anomalies.
In conclusion: Application Insights provides the built-in smart detection and the rich telemetry needed for monitoring your application. Combining it with Azure Monitor Logs allows you to store and query JSON data and create complex alerts, thus meeting all the requirements.
You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant. The subscription contains 10 resource groups, one for each department at your company.
Each department has a specific spending limit for its Azure resources.
You need to ensure that when a department reaches its spending limit, the compute resources of the department shut down automatically.
Which two features should you include in the solution?
A. Azure Logic Apps
B. Azure Monitor alerts
C. the spending limit of an Azure account
D. Cost Management budgets
E. Azure Log Analytics alerts
The two correct features to include in the solution are B. Azure Monitor alerts and D. Cost Management budgets. Here’s why:
D. Cost Management Budgets:
Purpose: Cost Management budgets allow you to set spending limits for a specific scope, such as a subscription, resource group, or management group. They also allow you to be notified when cost spending has reached certain thresholds.
Role in Solution: You’ll use budgets to define the spending limits for each department’s resource group. Once that limit is met, an action or alert should be triggered.
B. Azure Monitor Alerts:
Purpose: Azure Monitor alerts can trigger actions based on certain events or conditions that are evaluated.
Role in Solution: In this solution, you can configure cost management budgets to notify Azure Monitor of when a spending limit is met. Then you can set up Azure monitor to create an alert based on that threshold being met. This alert can then trigger an action.
How These Two Work Together
Cost management budgets track the budget usage and generate notifications to Azure Monitor. Azure Monitor can then generate an alert based on the notification that it received from the budget service. Then, the Azure Monitor alert can trigger an action such as a Logic App, Automation Runbook, Function App, or webhook. You can choose an appropriate action that will shut down your compute resources.
Why Other Options Are Not Correct (or not the complete solution):
A. Azure Logic Apps: Logic Apps are great for automating workflows, and you could use one to shut down the resources. However, they do not provide the budgeting functionality required to track the spending limits of a department. A Logic App is more of an action for the alert to call.
C. the spending limit of an Azure account: The spending limit for an entire Azure account is not granular enough. It doesn’t allow you to apply limits for each department separately based on their resource groups. A single Azure spending limit cannot be set for multiple departments.
E. Azure Log Analytics alerts: Azure Log Analytics alerts are based on log queries. While you can use logs to track costs, they cannot directly react to a Cost Management budget threshold being reached, so Log Analytics alerts are not required for this scenario.
In Summary: Cost Management budgets will provide the ability to track spending and trigger an alert when that limit is reached, while Azure Monitor alerts provides the ability to define the action to take (shutting down the compute resources).
You have an Azure subscription that contains the resources
storage1, storage account (kind: Storage), East US
storage2, storage account (kind: StorageV2), East US
Workspace1, Log Analytics workspace, East US
Workspace2, Log Analytics workspace, East US
Hub1, event hub, East US
You create an Azure SQL database named DB1 that is hosted in the East US region.
To DB1, you add a diagnostic setting named Settings1. Settings1 archives SQLInsights to storage1 and sends SQLInsights to Workspace1.
1.) Can you add a new diagnostic setting to archive SQLInsights logs to storage2?
2.) Can you add a new diagnostic setting that sends SQLInsights logs to Workspace2?
3.) Can you add a new diagnostic setting that sends SQLInsights logs to Hub1?
Key Concepts:
Diagnostic Settings: Diagnostic settings for Azure resources allow you to route logs and metrics to different destinations for analysis and storage.
Storage Accounts: Storage accounts must be in the same region as the SQL Server resource. The storage account can be a blob storage or a storageV2 type.
Log Analytics Workspaces: A Log Analytics workspace can be in the same region as the SQL database or in a different region, although a cross-region destination is not recommended because it introduces higher latency and cost.
Event Hubs: Event Hubs can also be in the same region as the SQL database, or a different region.
Here are the answers to your questions:
- Can you add a new diagnostic setting to archive SQLInsights logs to Storage2?
Answer: Yes
Explanation: Storage2 is a storage account of type storageV2 located in the same region as the SQL Database DB1 (East US). A storage account in the same region is a valid destination for the diagnostic logs. Also, the type of the storage account can be either blob storage or a general purpose V2 type.
- Can you add a new diagnostic setting that sends SQLInsights logs to Workspace2?
Answer: Yes
Explanation: Workspace2 is a log analytics workspace located in the same region as the SQL Database DB1 (East US). A log analytics workspace in the same region is a valid destination for the diagnostic logs.
- Can you add a new diagnostic setting that sends SQLInsights logs to Hub1?
Answer: Yes
Explanation: Hub1 is an Event Hub located in the same region as the SQL Database DB1 (East US). An Event Hub in the same region is a valid destination for the diagnostic logs.
In summary:
You can add a new diagnostic setting to archive SQLInsights logs to Storage2 as it is in the same region as the SQL server.
You can add a new diagnostic setting that sends SQLInsights logs to Workspace2 as it is in the same region as the SQL server.
You can add a new diagnostic setting that sends SQLInsights logs to Hub1 as it is in the same region as the SQL server.
Important Note:
While all of these destinations are valid because the resources are in the same region, it is best practice to keep diagnostic destinations in the same region as the source resource; cross-region destinations can result in higher latency and data transfer costs.
You deploy several Azure SQL Database instances. You plan to configure the Diagnostics settings on the databases with the following settings:
Diagnostic setting named Diagnostic1
Archive to a storage account is enabled.
SQLInsights log is enabled and has a retention of 90 days. AutomaticTuning log is enabled and has a retention of 30 days.
All other logs are disabled.
Send to Log Analytics is enabled.
Stream to an event hub is disabled.
1.) What is the amount of time the SQLInsights data will be stored in blob storage?
30 days
90 days
730 days
indefinite
2.) What is the maximum amount of time SQLInsights data can be stored in Azure Log Analytics?
30 days
90 days
730 days
indefinite
Key Concepts:
Diagnostic Settings: These settings define where Azure resources send their logs and metrics, and how long that data is retained.
Storage Account Retention: When you configure diagnostic settings to archive logs to a storage account, you specify a retention period in days. After that time, the logs are deleted from the storage account.
Log Analytics Workspace Retention: When you configure diagnostic settings to send logs to a Log Analytics workspace, the retention is managed in the Log Analytics workspace itself. It’s independent of the diagnostic settings. Log Analytics can store the logs indefinitely or for a specific period based on your settings for either the table or the workspace.
Answers:
- What is the amount of time the SQLInsights data will be stored in blob storage?
Answer: 90 days
Explanation: In the diagnostic setting named Diagnostic1, you explicitly enabled the SQLInsights log and set its retention to 90 days when archiving to a storage account. This setting directly controls how long the data persists in storage.
- What is the maximum amount of time SQLInsights data can be stored in Azure Log Analytics?
Answer: 730 days
Explanation:
How long is the data kept?
Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days.
Reference:
https://docs.microsoft.com/en-us/azure/azure-monitor/app/data-retention-privacy
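The workspace-side retention discussed above is configured on the workspace itself, not in the diagnostic setting. A minimal sketch, assuming the azure-identity and azure-mgmt-loganalytics Python packages; names are placeholders.

```python
# Minimal sketch: set the data retention of a Log Analytics workspace to the
# 730-day maximum discussed above. Resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient
from azure.mgmt.loganalytics.models import Workspace

subscription_id = "<subscription-id>"
client = LogAnalyticsManagementClient(DefaultAzureCredential(), subscription_id)

workspace = Workspace(location="eastus", retention_in_days=730)

client.workspaces.begin_create_or_update(
    "<resource-group>", "Workspace1", workspace
).result()
```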
Your company has the following divisions:

| Division | Azure subscriptions | Azure AD tenant |
|---|---|---|
| East | Sub1, Sub2 | East.contoso.com |
| West | Sub3, Sub4 | West.contoso.com |
You plan to deploy a custom application to each subscription. The application will contain the following:
✑ A resource group
✑ An Azure web app
✑ Custom role assignments
✑ An Azure Cosmos DB account
You need to use Azure Blueprints to deploy the application to each subscription.
What is the minimum number of objects required to deploy the application?
Management Groups:
Blueprint definitions:
Blueprint assignments:
Understanding the Requirements
Two Divisions: The company has two divisions, East and West, with two subscriptions each (total of 4 subscriptions).
Consistent Application: Each subscription needs the same application components: a resource group, an Azure web app, custom role assignments, and a Cosmos DB account.
Azure Blueprints: Blueprints allow you to create repeatable deployment packages for Azure resources.
Minimize Objects: We need to determine the minimum number of management groups, blueprint definitions, and blueprint assignments to achieve the desired outcome.
Minimum Objects
Management Groups:
Answer: 1
Explanation: Every Azure AD tenant has a single root management group by default, so no additional management groups must be created. There is no requirement for separate policies per division, and blueprints can be assigned directly to subscriptions, so the existing root management group is the only one involved.
Blueprint Definitions:
Answer: 1
Explanation: You can define one blueprint that includes all the common components needed for the application (resource group, web app, custom role assignments, and Cosmos DB account). Because all subscriptions will contain the same application, a single definition is sufficient. The blueprint definitions are for managing the blueprint itself, and do not need to match the quantity of subscriptions.
Blueprint Assignments:
Answer: 4
Explanation: While one blueprint can define the application, we need to assign that blueprint to each subscription where you want to deploy the application. Because we have 4 subscriptions we will need a blueprint assignment for each subscription.
Why Other Configurations Are Not Minimal:
Multiple Management Groups: While you could create separate management groups for the East and West divisions, nothing in the requirements calls for them; the blueprints can be assigned directly to the subscriptions.
Multiple Blueprint Definitions: Creating multiple blueprint definitions for the same application components in each subscription would be redundant, increasing maintenance.
More blueprint assignments: Because there are four subscriptions, you need one blueprint assignment per subscription; a single assignment cannot target more than one subscription at once.
In summary: To deploy the application with Azure Blueprints using the minimum number of objects, you need:
Management Groups: 1 (the root management group, which exists by default). Note that you do not need to create additional management groups to deploy blueprints.
Blueprint Definitions: 1
Blueprint Assignments: 4 (one for each subscription)
You have an Azure Active Directory (Azure AD) tenant.
You plan to deploy Azure Cosmos DB databases that will use the SQL API.
You need to recommend a solution to provide specific Azure AD user accounts with read access to the Cosmos DB databases.
What should you include in the recommendation?
A. shared access signatures (SAS) and conditional access policies
B. certificates and Azure Key Vault
C. a resource token and an Access control (IAM) role assignment
D. master keys and Azure Information Protection policies
The correct answer is C. a resource token and an Access control (IAM) role assignment.
Here’s why:
Resource Tokens for Cosmos DB: A resource token grants scoped, time-bound access to specific Cosmos DB resources (a database, container, or item) without exposing the account’s master keys. This makes it possible to hand a user access to only the data they need.
Azure Role-Based Access Control (RBAC) and IAM: Azure role-based access control (RBAC) allows you to grant specific permissions to Azure AD users, groups, or service principals at various scopes. For Cosmos DB, you use IAM (Identity and Access Management) to assign roles to Azure AD user accounts.
Built-in and Custom Roles: You can use built-in roles, such as “Cosmos DB Reader,” or create custom roles to provide fine-grained control over access. For example, you can create a role that only grants read access to specific databases or containers.
Granting Access: By assigning a role with read permissions to a user at the appropriate scope, you grant that user read access to the specific Cosmos DB database without sharing account keys.
Let’s review why the other options are not the right fit:
A. shared access signatures (SAS) and conditional access policies: Shared Access Signatures (SAS) are used for providing access to storage accounts, not Cosmos DB databases. While conditional access policies are useful for enforcing authentication policies based on conditions, they are not a direct way of granting access to specific Cosmos DB database resources.
B. certificates and Azure Key Vault: Certificates and Azure Key Vault are primarily used for securing sensitive information such as API keys, not for providing read access to Cosmos DB resources for users. While you can use certificates to provide client-side authentication for applications, certificates are not used to grant user access.
D. master keys and Azure Information Protection policies: Master keys provide full access to Cosmos DB account resources. Sharing these would violate the principle of least privilege, and they should be managed securely. Azure Information Protection policies are primarily used for securing document access.
In Summary: Combining resource tokens with IAM role assignments is the best way to provide specific Azure AD users with read access to Cosmos DB databases while adhering to security best practices.
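To make the idea concrete, here is a minimal sketch of a read performed with Azure AD credentials rather than master keys, assuming the azure-identity and azure-cosmos Python packages and that the user already holds a read role assignment on the database; the account URL, database, and container names are placeholders.

```python
# Minimal sketch: read from Cosmos DB as an Azure AD identity. Assumes the
# signed-in identity has already been granted a read role via IAM; no master
# keys or connection strings appear in the code.
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<account>.documents.azure.com:443/",
    credential=DefaultAzureCredential(),  # Azure AD token, not a master key
)

container = client.get_database_client("<database>").get_container_client("<container>")

for item in container.query_items(
    query="SELECT TOP 5 * FROM c",
    enable_cross_partition_query=True,
):
    print(item)
```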
You need to design a resource governance solution for an Azure subscription. The solution must meet the following requirements:
✑ Ensure that all ExpressRoute resources are created in a resource group named RG1.
✑ Delegate the creation of the ExpressRoute resources to an Azure Active Directory (Azure AD) group named Networking.
✑ Use the principle of least privilege.
1.) Ensure all ExpressRoute resources are created in RG1
2.) Delegate the creation of the ExpressRoute resources to Networking
a. A custom RBAC role assignment at the level of RG1
b. A custom RBAC role assignment at the subscription level
c. An Azure Blueprints assignment that sets locking mode for the level of RG1
d. An Azure Policy assignment at the subscription level that has an exclusion
e. Multiple Azure Policy assignments at the resource group level except for RG1
- Ensure all ExpressRoute resources are created in RG1:
Correct Answer: d. An Azure Policy assignment at the subscription level that has an exclusion
Explanation: An Azure Policy assignment can enforce that all ExpressRoute resources are created in the RG1 resource group. The policy is assigned at the subscription level so it applies to every resource group in the subscription, and it denies the creation of ExpressRoute resources; RG1 is then specified as the exclusion, so ExpressRoute resources can only be created there. This ensures that all new ExpressRoute resources are always created in the correct resource group. While other options could be made to work, this is the simplest and most appropriate way to meet the requirement.
Why other options are not best here
An Azure Blueprint could also enforce these settings, but it is overkill for this single requirement.
While you could create policies at the resource group level for every group other than RG1, keeping those assignments up to date as resource groups are added would require extra effort.
For the same reason, a resource-group-level policy is not the best solution: you would need one policy per resource group.
- Delegate the creation of the ExpressRoute resources to Networking:
Correct Answer: a. A custom RBAC role assignment at the level of RG1
Explanation: To delegate permission to create ExpressRoute resources, you should use Role-Based Access Control (RBAC). Create a custom role that has only the permissions to create and manage ExpressRoute resources. Then, assign this custom role to the Networking Azure AD group at the level of the resource group RG1. This adheres to the principle of least privilege because it gives the group only the necessary permissions, and only within the context of the resource group needed.
Why Other Options Are Incorrect
Creating a role assignment at the subscription level would give the group more permissions than are necessary and would therefore violate the principle of least privilege.
While blueprints can also manage roles, this would be overkill for what is required. Blueprints are not used to delegate permissions to groups.
Azure Policy doesn’t manage permissions directly.
In Summary:
To ensure resources are created in the correct resource group, use Azure Policy.
To delegate permissions, use RBAC roles with a custom role for ExpressRoute management on the correct resource group.
This combination of Azure Policy and RBAC roles provides an efficient and secure governance solution.
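A minimal sketch of the policy half of this combination, assuming the azure-identity and azure-mgmt-resource Python packages; the IDs are placeholders and the policy rule is illustrative rather than a complete list of ExpressRoute resource types.

```python
# Minimal sketch: a subscription-level policy assignment that denies
# ExpressRoute circuits, with RG1 excluded so circuits can only be created
# there. All IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment, PolicyDefinition

subscription_id = "<subscription-id>"
scope = f"/subscriptions/{subscription_id}"
client = PolicyClient(DefaultAzureCredential(), subscription_id)

definition = client.policy_definitions.create_or_update(
    "deny-expressroute",
    PolicyDefinition(
        policy_rule={
            "if": {"field": "type", "equals": "Microsoft.Network/expressRouteCircuits"},
            "then": {"effect": "deny"},
        }
    ),
)

client.policy_assignments.create(
    scope,
    "deny-expressroute-assignment",
    PolicyAssignment(
        policy_definition_id=definition.id,
        not_scopes=[f"{scope}/resourceGroups/RG1"],  # RG1 is the exclusion
    ),
)
```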
You have an Azure Active Directory (Azure AD) tenant and Windows 10 devices.
MFA Policy Configuration:
Enable Policy set to off
Grant
Select the controls to be enforced
Grant access selected.
Require multi-factor authentication: yes
Require device to be marked as compliant: no
Require hybrid azure ad joined devices: yes
Require approved client apps: no
Require app protection policy: no
For multiple controls: require one of the selected controls.
What is the result of the policy?
A. All users will always be prompted for multi-factor authentication (MFA).
B. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are NOT joined to Azure AD.
C. All users will be able to sign in without using multi-factor authentication (MFA).
D. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are joined to Azure AD.
The correct answer is C. All users will be able to sign in without using multi-factor authentication (MFA).
Here’s why:
Understanding the Conditional Access Policy:
Enable Policy set to off: Since the policy is turned off, all other settings are meaningless. You will not be prompted for MFA based on this policy as it is disabled.
Grant Access Selected: This means the policy is set up to grant access, subject to the selected controls, whenever its conditions are met.
Require multi-factor authentication: yes: MFA is one of the selected grant controls. Because “For multiple controls” is set to “require one of the selected controls,” a sign-in can satisfy the grant with either MFA or a hybrid Azure AD joined device.
Require device to be marked as compliant: no: This means the device compliance status is not required for this policy to be enforced.
Require hybrid azure ad joined devices: yes: Using a hybrid Azure AD joined device is the second selected grant control.
Require approved client apps: no: This is not required for this policy to be enforced.
Require app protection policy: no: This is not required for this policy to be enforced.
For multiple controls: require one of the selected controls: Two controls are selected (require MFA and require a hybrid Azure AD joined device), so a sign-in needs to satisfy only one of them for access to be granted.
Result: The policy is disabled, so it has no impact, and all users will be able to sign in without using MFA. Even if the policy were enabled, a hybrid Azure AD joined device would satisfy one of the selected controls on its own, so only sign-ins from other devices would be prompted for MFA.
Let’s analyze the incorrect options:
A. All users will always be prompted for multi-factor authentication (MFA). This is incorrect: the policy is disabled and has no effect, so no user is prompted for MFA because of it.
B. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are NOT joined to Azure AD. This is incorrect because the policy is disabled; this would describe the behavior only if the policy were enabled, since devices that are not hybrid Azure AD joined could satisfy the grant only through MFA.
D. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are joined to Azure AD. This is incorrect: the policy is disabled, and even if it were enabled, a hybrid Azure AD joined device would itself satisfy one of the selected controls, so those sign-ins would not be prompted for MFA.
In summary: Because the policy is disabled, no users are affected and all users can sign in without using MFA. If the policy were enabled, only sign-ins from devices that are not hybrid Azure AD joined would be prompted for MFA, because either selected control can satisfy the grant.
You are designing an Azure resource deployment that will use Azure Resource Manager templates. The deployment will use Azure Key Vault to store secrets.
You need to recommend a solution to meet the following requirements:
✑ Prevent the IT staff that will perform the deployment from retrieving the secrets directly from Key Vault.
✑ Use the principle of least privilege.
Which two actions should you recommend?
A. Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions.
B. From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment.
C. Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions.
D. Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.
E. Assign the Key Vault Contributor role to the IT staff.
The two correct actions are B. From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment. and D. Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.
Here’s why:
B. Enable Access for Azure Resource Manager:
How it works: Key Vault has a specific feature to grant access to Azure Resource Manager for template deployment. By enabling this, you allow ARM to fetch secrets from Key Vault during the deployment, without granting the deployment user direct access to those secrets. This allows you to store secrets in Key Vault, and still allow ARM templates to use those secrets without having to give the user or service account access to those secrets. This satisfies the requirement that the IT staff not be able to access the secrets.
Principle of Least Privilege: This provides a secure way for the template deployment to read the secrets, without giving direct access to the IT staff that are running the deployment.
D. Assign a Custom Role for Deployment:
How it works: The Microsoft.KeyVault/Vaults/Deploy/Action permission allows a user or service principal to use Key Vault secrets during an ARM template deployment. By assigning a custom role that includes only this permission, you limit the permissions given to the IT staff. They will only be able to deploy resources to Azure, and will not be able to list or view the secrets themselves.
Principle of Least Privilege: This approach adheres to the principle of least privilege by not granting the IT staff any other unnecessary permissions within the Key Vault (like read, delete, list).
Why Other Options Are Incorrect:
A. Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions: This is incorrect as it gives too much permission to the IT staff. The IT staff should not be able to get the secrets directly from Key Vault.
C. Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions: This is incorrect as it gives too much permission to the IT staff. The IT staff should not be able to list the secrets directly from Key Vault.
E. Assign the Key Vault Contributor role to the IT staff: This role provides far too many permissions, and goes against the principle of least privilege. The Key Vault contributor can manage everything within a Key Vault, including deleting the vault itself.
In Summary:
Enabling the Azure Resource Manager access policy allows ARM to fetch secrets for deployments.
Assigning a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission to IT staff allows the template deployments, but does not allow the IT staff to retrieve secrets directly.
These two settings ensure that the IT staff who run the deployment cannot directly access secrets in Key Vault, and also uses the principle of least privilege.
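For illustration, the access-policy half (action B) can be scripted. A minimal sketch, assuming the azure-identity and azure-mgmt-keyvault Python packages; resource names are placeholders.

```python
# Minimal sketch: flip the enabledForTemplateDeployment flag on an existing
# vault so Azure Resource Manager can read secrets during template deployment.
# Resource group and vault names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient
from azure.mgmt.keyvault.models import VaultPatchParameters, VaultPatchProperties

subscription_id = "<subscription-id>"
client = KeyVaultManagementClient(DefaultAzureCredential(), subscription_id)

client.vaults.update(
    "<resource-group>",
    "<vault-name>",
    VaultPatchParameters(
        properties=VaultPatchProperties(enabled_for_template_deployment=True)
    ),
)
```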
You have an Azure subscription that contains resources in three Azure regions.
You need to implement Azure Key Vault to meet the following requirements:
✑ In the event of a regional outage, all keys must be readable.
✑ All the resources in the subscription must be able to access Key Vault.
✑ The number of Key Vault resources to be deployed and managed must be minimized.
How many instances of Key Vault should you implement?
A. 1
B. 2
C. 3
D. 6
The correct answer is A. 1
Here’s why:
Requirement 1: Regional Outage Resilience: Azure Key Vault has built-in redundancy. The contents of a vault are automatically replicated within the region and to a secondary (paired) region; if the primary region becomes unavailable, requests are served from the secondary region in read-only mode. Keys therefore remain readable during a regional outage without deploying additional vaults.
Requirement 2: Access for all Subscription Resources: Key Vault access is controlled through Azure Active Directory (Azure AD) and role-based access control (RBAC). Access policies or role assignments can grant every resource in the subscription (for example, via their managed identities) access to a single Key Vault.
Requirement 3: Minimize Number of Key Vaults: Creating a single Key Vault reduces the management overhead. Having multiple vaults would require additional administrative effort.
Why other options are incorrect:
B. 2: While two Key Vaults would provide redundancy across two regions, it works against the requirement to minimize the number of Key Vault resources and would be more complex to manage, particularly when granting all resources access across regions.
C. 3: Three Key Vaults would be redundant and increase management complexity unnecessarily. The single vault with replication is sufficient.
D. 6: Six Key Vaults are simply not needed, given that a single vault with built-in replication meets the requirements.
Exam Tip: Focus on requirements that emphasize minimizing management overhead or reducing the number of resources. In these cases, a single instance of a service that has built in capabilities would be preferred.
You have an Azure Active Directory (Azure AD) tenant.
You plan to provide users with access to shared files by using Azure Storage. The users will be provided with different levels of access to various Azure file shares based on their user account or their group membership.
You need to recommend which additional Azure services must be used to support the planned deployment.
What should you include in the recommendation?
A. an Azure AD enterprise application
B. Azure Information Protection
C. an Azure AD Domain Services (Azure AD DS) instance
D. an Azure Front Door instance
The correct answer is A. an Azure AD enterprise application.
Here’s why:
Azure AD Enterprise Application: To control access to Azure file shares based on user accounts or group membership, you need to integrate Azure Storage with Azure AD. This is done through an Azure AD enterprise application, which acts as a service principal. Here’s how it works:
Storage Account Configuration: You enable Azure AD authentication on the storage account.
Azure AD Application: An enterprise application is created in Azure AD representing your storage account.
Role Assignments: You grant users or groups specific role assignments to the storage account using Azure AD roles such as Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor etc. The roles are set on the scope of either the storage account itself, the individual file share, or even individual directories/files.
Authentication: When a user tries to access a file share, Azure AD validates their credentials and authorizes their access based on these role assignments.
Why other options are incorrect:
B. Azure Information Protection: Azure Information Protection is used to protect files, such as documents or emails, with sensitivity labels, encryption, and access permissions. While this could complement security on Azure files, it does not provide the identity-based access management you are looking for.
C. An Azure AD Domain Services (Azure AD DS) instance: While Azure AD DS provides managed domain services, you do not need it simply to control access to Azure file shares. You might use it to manage on-premises devices connected to your hybrid network, but it does not directly grant access to storage resources.
D. An Azure Front Door instance: Azure Front Door is a global HTTP(S) load balancer and application delivery service. It’s not relevant for providing access control to file shares.
Exam Tip: Pay close attention to requirements around identity-based access management. In these cases, the correct answer is usually related to how a service integrates with Azure Active Directory. Understanding the difference between authentication (verifying identity) and authorization (granting permissions) is also key.
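As a rough illustration of the role-assignment step described above, here is a minimal sketch assuming the azure-identity and azure-mgmt-authorization Python packages; the share scope and the built-in role's definition GUID are placeholders.

```python
# Minimal sketch: grant a group identity-based access to a file share by
# creating a role assignment at the share's scope. The role-definition GUID
# stands in for a built-in role such as Storage File Data SMB Share Reader.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

share_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>"
    "/fileServices/default/fileshares/<share>"
)

client.role_assignments.create(
    scope=share_scope,
    role_assignment_name=str(uuid.uuid4()),  # assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=(
            f"/subscriptions/{subscription_id}/providers/"
            "Microsoft.Authorization/roleDefinitions/<built-in-role-guid>"
        ),
        principal_id="<azure-ad-group-object-id>",
    ),
)
```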
Your company has users who work remotely from laptops.
You plan to move some of the applications accessed by the remote users to Azure virtual machines. The users will access the applications in Azure by using a point-to-site VPN connection. You will use certificates generated from an on-premises-based Certification authority (CA).
You need to recommend which certificates are required for the deployment.
1.) Trusted Root Certification Authorities certificate store on each laptop
2.) The user's Personal store on each laptop
3.) The Azure VPN Gateway
Which certificates should be used for each
A. A root CA certificate that has the private key
B. A root CA certificate that has the public key only
C. A user certificate that has the private key
D. A user certificate that has the public key only
Okay, let’s break down the certificate requirements for a point-to-site VPN connection using an on-premises CA. Here’s the correct answer:
1) Trusted Root Certification Authorities certificate store on each laptop: B. A root CA certificate that has the public key only
Explanation: The Trusted Root Certification Authorities store is used to verify the identity of the server (in this case, the Azure VPN Gateway). You need to install the public key of the root CA that issued the VPN server certificate to establish trust. The private key should never be installed on client machines.
2) The user's Personal store on each laptop: C. A user certificate that has the private key
Explanation: The Personal store is used for client authentication. Each user needs their own unique user certificate with the private key to prove their identity to the VPN gateway during the connection process. This private key must not be shared with other users.
3) The Azure VPN Gateway: B. A root CA certificate that has the public key only
Explanation: The Azure VPN Gateway, similar to the client machines, needs to verify the certificate used by connecting clients. This requires the public key of the root CA that issued the user certificates. You do not install the user certificates or their associated private keys on the VPN Gateway.
Therefore, the correct matching is:
1) Trusted Root Certification Authorities certificate store on each laptop: B
2) The user's Personal store on each laptop: C
3) The Azure VPN Gateway: B
Why the other options are incorrect:
A. A root CA certificate that has the private key: The private key of the root CA should only be used to issue certificates and should be secured, not installed on client machines or the VPN gateway.
D. A user certificate that has the public key only: The private key is necessary for the user to be authenticated by the VPN gateway.
Key concepts for the Exam:
Root CA Certificate (Public Key): Used to establish trust by validating that the certificate is issued by a trusted source. Distributed widely.
User Certificate (Private Key): Used to uniquely identify and authenticate a user to access resources or services. Private keys must never be shared.
Certificate Store: Local location on Windows systems (like the “Trusted Root Certification Authorities” or “Personal” stores) that are used to manage certificates.
Exam Tip: When you see a question about certificates and authentication, focus on whether a private key or a public key is being used. Also, think about trust and what needs to verify the identity of whom or what. Remember, Private keys must always be kept secret, and you would not upload a private key anywhere.
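To make the public-key/private-key split concrete, here is a minimal sketch using the third-party cryptography package. It imitates the artifacts only (a public-only .cer for the root, a PFX with the private key for the user); it is not the actual enterprise CA issuance process.

```python
# Minimal sketch: why the root certificate is distributed with its public key
# only, while each user keeps a certificate bundled WITH its private key.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import (
    BestAvailableEncryption,
    Encoding,
    pkcs12,
)
from cryptography.x509.oid import NameOID

root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "P2SRootCA")])
root_cert = (
    x509.CertificateBuilder()
    .subject_name(root_name)
    .issuer_name(root_name)                       # self-signed root
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .sign(root_key, hashes.SHA256())
)

# What is uploaded to the VPN gateway / Trusted Root store: public part only.
with open("root.cer", "wb") as f:
    f.write(root_cert.public_bytes(Encoding.PEM))

# What goes into a user's Personal store: a client cert plus its private key.
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "user1")]))
    .issuer_name(root_name)                       # issued by the root CA
    .public_key(user_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(root_key, hashes.SHA256())              # signed with the ROOT's key
)

with open("user1.pfx", "wb") as f:
    f.write(
        pkcs12.serialize_key_and_certificates(
            name=b"user1",
            key=user_key,                         # private key stays with the user
            cert=user_cert,
            cas=None,
            encryption_algorithm=BestAvailableEncryption(b"P@ssw0rd"),
        )
    )
```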
You are building an application that will run in a virtual machine (VM). The application will use Azure Managed Identity.
The application uses Azure Key Vault, Azure SQL Database, and Azure Cosmos DB.
You need to ensure the application can use secure credentials to access these services.
Functionality
1.) Azure Key vault
2.)Azure SQL
3.) CosmosDB
Which authentication method should you recommend for each functionality?
Authorization methods:
A. Hash-based message authentication code (HMAC)
B. Azure Managed Identity
C. Role-Based Access Controls (RBAC)
D. HTTPS encryption
Here’s the correct answer mapping:
1) Azure Key Vault: B. Azure Managed Identity
Explanation: Azure Managed Identity is the ideal method for authenticating to Azure Key Vault from an Azure resource (like a VM). Managed identities provide an automatically managed identity in Microsoft Entra ID, eliminating the need to store and manage credentials within the application code or configuration. The application can retrieve secrets directly from the Key Vault using its managed identity.
2) Azure SQL Database: B. Azure Managed Identity
Explanation: Azure SQL Database supports Azure AD authentication. When combined with Managed Identity, this enables a secure connection to the database without storing credentials. By enabling Azure AD authentication and then configuring the SQL server with an admin account that is an Azure AD principal, the application can use its managed identity to authenticate.
3) Azure Cosmos DB: B. Azure Managed Identity
Explanation: Azure Cosmos DB also supports Azure AD authentication. Using managed identity, a secure connection can be established with Cosmos DB without needing API keys or connection strings in the application code. Once the application’s managed identity is authorized to access the Cosmos DB resources, connections are made using access tokens obtained from Azure Active Directory by the managed identity.
Therefore, the correct matching is:
1) Azure Key Vault: B
2) Azure SQL Database: B
3) Azure Cosmos DB: B
Why the other options are incorrect:
A. Hash-based message authentication code (HMAC): HMAC is a cryptographic technique for verifying data integrity and authenticity. While important for secure communication, it’s not a method for authenticating to services.
C. Role-Based Access Controls (RBAC): RBAC is an authorization system that controls what actions principals (like user accounts, groups, or applications) can perform on Azure resources, however, it is not an authentication method. You use RBAC to grant the managed identity rights to use the other services.
D. HTTPS encryption: HTTPS provides secure communication channels via encryption, but is not an authentication method.
Key Concepts for the Exam:
Azure Managed Identity: Automatically managed identity in Microsoft Entra ID for use by Azure resources. Eliminates credential management.
Authentication vs. Authorization: Authentication validates the identity; Authorization grants access permissions.
Principle of Least Privilege: Grant only necessary permissions to services and applications.
Exam Tip: When a question describes a scenario using multiple Azure services and requires secure authentication with minimal credential management, Azure Managed Identities are the preferred method. Also, remember that RBAC is for authorization, while Managed Identities are for authentication. Understand the difference between authentication (verifying the identity) and authorization (granting permission).
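A minimal sketch of the Key Vault case, assuming the azure-identity and azure-keyvault-secrets Python packages; the vault URL and secret name are placeholders. The same credential object can be handed to the SQL and Cosmos DB clients.

```python
# Minimal sketch: an app on an Azure VM retrieving a Key Vault secret with the
# VM's managed identity. No credentials appear in code or configuration.
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential()  # token issued to the VM's identity

client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

db_password = client.get_secret("<secret-name>").value
print("retrieved secret of length", len(db_password))
```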
You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam,
Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the
Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
✑ If the manager does not verify an access permission, automatically revoke that permission.
✑ Minimize development effort.
What should you recommend?
A. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet.
B. Create an Azure Automation runbook that runs the Get-AzureRoleAssignment cmdlet.
C. In Azure Active Directory (Azure AD), create an access review of Application1.
D. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources.
The correct answer is C. In Azure Active Directory (Azure AD), create an access review of Application1.
Here’s why:
Azure AD Access Reviews: Azure AD Access Reviews are specifically designed to meet the requirements outlined in the question. They provide:
Monthly Email Notifications: You can configure an access review to send monthly email messages to the manager of the Fabrikam developers. These messages would list the permissions the developers have for the resources related to Application1.
Automatic Revocation: If the manager does not verify an access permission during the review process, you can set the review to automatically revoke that permission. This removes stale access and keeps permissions aligned with actual need.
Minimized Development Effort: Access Reviews are a built-in feature of Azure AD and require no custom coding to implement.
Why the other options are incorrect:
A. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: This option would require custom development of a PowerShell script using Azure Automation Runbooks, plus scheduling, to implement all of the components, which works against minimizing development effort. While it could retrieve the role assignments, it lacks the built-in review and automatic revocation features of Azure AD access reviews, and it would not send the manager an email on its own.
B. Create an Azure Automation runbook that runs the Get-AzureRoleAssignment cmdlet: This is similar to option A, but it is not specific to application-based role assignments and it lacks the built-in review and automatic revocation features of Azure AD access reviews; it also provides no mechanism to notify users.
D. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: While Privileged Identity Management (PIM) provides just-in-time elevation for roles, it is not designed for access reviews; it is more useful for managing privileged accounts than standard application access rights. It would also require custom code, since sending managers notifications to approve access is not a built-in PIM feature.
Key Concepts for the Exam:
Azure AD Access Reviews: A feature that enables you to regularly review who has access to Azure AD resources, groups and applications. They simplify access management and minimize the risk of over provisioned or stale access.
Privileged Identity Management (PIM): Used to manage and control privileged access to resources and is different from Access Reviews.
Least Privilege: Grant users only the necessary permissions, and reduce the surface area by removing permissions that are no longer needed.
Exam Tip: When a question asks about reviewing access, look for answers that involve Azure AD Access Reviews. Pay close attention to keywords like “regularly verify,” “automatically revoke,” and “minimize development effort.” You might see scenarios with a mixture of solutions, but usually the access review is the best fit.
You have an Azure subscription that contains 10 web apps. The apps are integrated with Azure AD and are accessed by users on different project teams.
The users frequently move between projects.
You need to recommend an access management solution for the web apps. The solution must meet the following requirements:
- The users must only have access to the app of the project to which they are assigned currently.
- Project managers must verify which users have access to their project’s app and remove users that are no longer assigned to their project.
- Once every 30 days, the project managers must be prompted automatically to verify which users are assigned to their projects.
What should you include in the recommendation?
A. Azure AD Identity Protection
B. Microsoft Defender for Identity
C. Microsoft Entra Permissions Management
D. Azure AD Identity Governance
Here’s a breakdown of why the correct answer is D. Azure AD Identity Governance and why the others aren’t suitable:
D. Azure AD Identity Governance
Correct Choice: This is the ideal solution because it directly addresses all the requirements:
Access Based on Project: Azure AD Identity Governance, specifically through Access Packages, allows you to create collections of resources (like the web apps) that are tied to a specific project. Users can be granted access to these access packages.
Project Manager Verification: Access packages allow you to delegate the management and approval to the project managers. Project managers can see who has access to their project’s resources.
Periodic Access Reviews: Access Reviews are a core feature of Azure AD Identity Governance. They allow you to set up recurring reviews where project managers are prompted to verify and remove users as needed. You can configure the reviews to occur every 30 days, meeting the prompt’s requirement.
A. Azure AD Identity Protection
Incorrect Choice: Azure AD Identity Protection focuses on detecting and mitigating risks to user identities. It helps with things like identifying compromised accounts, preventing risky sign-ins, and enforcing MFA. It doesn’t address the access management requirements of the scenario.
B. Microsoft Defender for Identity
Incorrect Choice: Microsoft Defender for Identity is a security solution for on-premises Active Directory environments. It detects suspicious activities by monitoring domain controllers. While security-focused, it doesn’t manage access to cloud-based web apps in the way that’s needed.
C. Microsoft Entra Permissions Management
Incorrect Choice: While Permissions Management is important for understanding and controlling access to cloud resources, it doesn’t offer the access review and self-service capabilities that the scenario requires. It mainly focuses on providing visibility and remediating excessive permissions.
Therefore, the best recommendation is Azure AD Identity Governance because it provides the necessary access packages and access review functionalities to meet all of the stated requirements.
Your company has the divisions shown in the following table.
| Division | Azure subscription | Azure Active Directory (Azure AD) tenant |
|---|---|---|
| East | Sub1 | Contoso.com |
| West | Sub2 | Fabrikam.com |
Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.
You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.
What should you recommend?
A. Configure the Azure AD provisioning service.
B. Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).
C. Use Azure AD entitlement management to govern external users.
D. Configure Azure AD Identity Protection.
Let’s analyze the requirements and why the correct answer is the best fit:
Understanding the Problem
Single-Tenant App: App1 is set up to only accept authentication from users within the contoso.com Azure AD tenant.
Cross-Tenant Access Needed: Users from the fabrikam.com Azure AD tenant need to access App1.
Analyzing the Options
A. Configure the Azure AD provisioning service:
Incorrect. The Azure AD provisioning service is used for automating the creation, updating, and deletion of user identities and groups in applications and directories. It doesn’t directly enable cross-tenant authentication to an existing application. While you might use it to create user objects, this doesn’t address the primary issue of authentication from another tenant.
B. Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM):
Incorrect. Azure AD PIM is for managing, controlling, and monitoring privileged access (e.g., administrators) within your own Azure AD tenant. It’s not designed for granting access to users from a completely different Azure AD tenant for a standard application.
C. Use Azure AD entitlement management to govern external users.
Correct. Entitlement Management in Azure AD is specifically designed to handle requests, approvals, and reviews of access to resources for external users (users from other organizations or Azure AD tenants). This is the most appropriate way to allow users in fabrikam.com to access App1, providing you with proper governance and management of external user access.
This functionality allows fabrikam.com users to request access to a resource in your tenant, which requires approval. The process also allows for time-bound access.
D. Configure Azure AD Identity Protection:
Incorrect. Azure AD Identity Protection is for identifying and mitigating risks and vulnerabilities related to your user accounts and logins. It does not handle providing external access to applications.
Why Entitlement Management is the Right Choice
Cross-Tenant Access: It’s explicitly designed for managing access by external users, which aligns directly with the requirement of allowing users from fabrikam.com to access App1.
Controlled Access: Entitlement management provides mechanisms for controlling who can request access, requires approvals, and allows for time-bound access, which helps govern access by external users.
Proper Governance: It provides a proper access request process, ensures proper access is granted, and provides an audit trail.
Therefore, the best recommendation is C. Use Azure AD entitlement management to govern external users.
Your company, named Contoso, Ltd., implements several Azure logic apps that have HTTP triggers. The logic apps provide access to an on-premises web service.
Contoso establishes a partnership with another company named Fabrikam, Inc.
Fabrikam does not have an existing Azure Active Directory (Azure AD) tenant and uses third-party OAuth 2.0 identity management to authenticate its users.
Developers at Fabrikam plan to use a subset of the logic apps to build applications that will integrate with the on-premises web service of Contoso.
You need to design a solution to provide the Fabrikam developers with access to the logic apps. The solution must meet the following requirements:
✑ Requests to the logic apps from the developers must be limited to lower rates than the requests from the users at Contoso.
✑ The developers must be able to rely on their existing OAuth 2.0 provider to gain access to the logic apps.
✑ The solution must NOT require changes to the logic apps.
✑ The solution must NOT use Azure AD guest accounts.
What should you include in the solution?
A. Azure Front Door
B. Azure AD Application Proxy
C. Azure AD business-to-business (B2B)
D. Azure API Management
Understanding the Requirements
Access to Logic Apps: Fabrikam developers need to access specific Contoso Logic Apps that expose HTTP triggers.
Rate Limiting: Access from Fabrikam needs to be rate-limited compared to internal Contoso traffic.
External OAuth: Fabrikam uses a third-party OAuth 2.0 provider, and the solution must integrate with this.
No Logic App Changes: The existing Logic Apps cannot be modified.
No Azure AD Guest Accounts: The solution must avoid using Azure AD guest accounts.
Analyzing the Options
A. Azure Front Door
Incorrect: Azure Front Door is primarily a global, scalable entry point for web applications. It’s great for caching, routing, and accelerating web traffic, but it is not designed to integrate with external OAuth 2.0 identity providers for authentication or rate limiting requests to specific HTTP triggered logic apps. Additionally, it would require changes to the application, which is stated as not being allowed.
B. Azure AD Application Proxy
Incorrect: Azure AD Application Proxy is used to publish on-premises web applications to the internet securely using Azure AD. While it can handle authentication, it is specifically designed for applications behind the firewall. Also, it would require using Azure AD guest accounts and would be a poor fit for authenticating third-party OAuth 2.0 users.
C. Azure AD business-to-business (B2B)
Incorrect: Azure AD B2B is designed for inviting users from other organizations as guest users in your Azure AD tenant. The prompt specifically mentions that no Azure AD guest accounts should be used, therefore, this is not a good solution.
D. Azure API Management
Correct: Azure API Management (APIM) is the best fit for this scenario because:
Abstraction and Decoupling: APIM acts as an intermediary layer between the Fabrikam developers and the Contoso Logic Apps. This decouples the apps from direct access.
Rate Limiting: APIM offers built-in policies to enforce rate limiting on a per-subscription, per-API, or other granular levels. You can set specific rate limits for Fabrikam users.
External OAuth Integration: APIM can integrate with any OAuth 2.0 compliant identity provider. You can configure APIM to accept tokens from the Fabrikam OAuth provider and then pass authenticated requests to the Logic Apps.
No Logic App Changes: Since APIM sits in front of the Logic Apps, no modifications to the Logic App themselves are needed.
No Guest Accounts: APIM manages access through its own API subscriptions and policies. It doesn’t directly rely on Azure AD guest users.
Why API Management is the Right Choice
Azure API Management provides a controlled and manageable gateway for accessing your Logic Apps, ensuring all requirements are met:
Centralized Access: It centralizes access to the logic apps, simplifying management and security.
Security and Authentication: It allows integration with external OAuth 2.0 providers while also securing access to the Logic Apps.
Rate Limiting: Provides built-in capabilities for controlling the number of requests from external developers.
No Code Changes: Requires no changes to the Logic Apps.
No Guest Accounts: Doesn’t rely on Azure AD Guest Accounts, which is one of the requirements of this scenario.
Therefore, the correct answer is D. Azure API Management.
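To illustrate what a Fabrikam developer's call might look like once APIM is in front of the logic apps, here is a minimal sketch using the third-party requests package; the gateway URL, token, and subscription key are placeholders, and the 429 handling assumes a rate-limit policy has been configured.

```python
# Minimal sketch: calling a logic app through the API Management gateway,
# presenting both an OAuth 2.0 bearer token from Fabrikam's own provider and
# the APIM subscription key that the rate-limit policy is keyed on.
import requests

response = requests.post(
    "https://<apim-instance>.azure-api.net/orders/submit",
    headers={
        "Authorization": "Bearer <token-from-fabrikam-oauth-provider>",
        "Ocp-Apim-Subscription-Key": "<fabrikam-subscription-key>",
    },
    json={"orderId": 12345},
    timeout=30,
)

if response.status_code == 429:
    # The rate-limit policy rejects calls above the quota set for Fabrikam.
    print("Rate limited; retry after", response.headers.get("Retry-After"))
else:
    response.raise_for_status()
    print(response.json())
```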
HOTSPOT -
You have an Azure subscription that contains 300 virtual machines that run Windows Server 2019.
You need to centrally monitor all warning events in the System logs of the virtual machines.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Resource to create in Azure:
An event hub
A Log Analytics workspace
A search service
A storage account
Configuration to perform on the virtual machines:
Create event subscriptions
Configure Continuous delivery
Install the Azure Monitor agent
Modify the membership of the Event Log Readers group
Understanding the Goal
The goal is to collect warning events from the System logs of 300 Windows Server virtual machines and monitor them centrally in Azure.
Correct Selections:
Resource to create in Azure: A Log Analytics workspace
Why? A Log Analytics workspace is the central repository for collecting, storing, and analyzing log and performance data from Azure resources and on-premises servers. This is where the logs collected from the VMs will be sent and analyzed. An Event Hub could potentially be used but would require more customization of the solution.
Configuration to perform on the virtual machines: Install the Azure Monitor agent
Why? The Azure Monitor Agent (AMA) is the modern method of collecting telemetry data (including logs) from Azure VMs and other resources. You install this agent on each VM to collect the desired logs.
The legacy Log Analytics agent is deprecated, so it is not a viable alternative.
Modifying the membership of the Event Log Readers group is also unnecessary, as explained below.
Incorrect Selections and Why
An event hub: While Event Hubs can ingest telemetry data, they don’t provide the same level of analysis and querying capabilities as Log Analytics. You would typically use Event Hubs as an intermediate step before sending data to a data store like a Log Analytics workspace.
A search service: Azure Search is for indexing and searching content. It isn’t meant for log analysis.
A storage account: Storage accounts are useful for storing logs, but not for analysis and monitoring in this scenario.
Create event subscriptions: Event subscriptions are generally used to react to events within Azure. They are not directly used to monitor logs on VMs.
Configure Continuous Delivery: Continuous delivery is a development practice and has no direct impact on monitoring logs from VMs.
Modify the membership of the Event Log Readers group: The Azure Monitor Agent authenticates by using the virtual machine's managed identity rather than a user account, so membership of this group is not required.
Therefore, the correct answer is:
Resource to create in Azure: A Log Analytics workspace
Configuration to perform on the virtual machines: Install the Azure Monitor agent
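Once the agent is collecting data, the warning events can be queried out of the workspace. A minimal sketch, assuming the azure-identity and azure-monitor-query Python packages and that the agent routes Windows events to the Event table; the workspace ID is a placeholder.

```python
# Minimal sketch: query warning events from the VMs' System logs in the
# Log Analytics workspace and count them per computer.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
Event
| where EventLog == "System" and EventLevelName == "Warning"
| summarize count() by Computer
| order by count_ desc
"""

result = client.query_workspace(
    workspace_id="<workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)

for table in result.tables:
    for row in table.rows:
        print(row)
```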
HOTSPOT
You have several Azure App Service web apps that use Azure Key Vault to store data encryption keys.
Several departments have the following requests to support the web app:
Security
Review the membership of administrative roles and require users to provide a justification for continued membership.
Get alerts about changes in administrator assignments.
See a history of administrator activation, including which changes administrators made to Azure resources.
Development
Enable the applications to access Key Vault and retrieve keys for use in code.
Quality Assurance
Receive temporary administrator access to create and configure additional web apps in the test environment.
Which service should you recommend for each department’s request? To answer, configure the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Security:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Development:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Quality Assurance:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Understanding the Needs
Security: Requires auditing and control over administrative roles and changes.
Development: Needs secure access from code to retrieve keys from Key Vault.
Quality Assurance: Requires temporary elevated access for testing purposes.
Analyzing the Options and Making Selections:
Security Department:
Azure AD Privileged Identity Management (PIM): Correct
Why: PIM is specifically designed to manage, control, and monitor privileged access within Azure AD. It allows you to:
Review membership of admin roles and require justification.
Get alerts about changes in admin assignments.
Track admin activation history and changes.
Azure Managed Identity: Incorrect. Managed identities are for applications and services to authenticate with other Azure resources.
Azure AD Connect: Incorrect. Azure AD Connect is for synchronizing on-premises Active Directory with Azure AD.
Azure AD Identity Protection: Incorrect. Identity Protection focuses on detecting and mitigating risks to user accounts, not managing role assignments.
Development Department:
Azure Managed Identity: Correct
Why: Managed identities provide a secure way for applications to authenticate with other Azure resources (like Key Vault) without needing to manage secrets or credentials. This is the preferred method for accessing Key Vault from application code.
Azure AD Privileged Identity Management: Incorrect. PIM manages privileged roles, not application access to resources.
Azure AD Connect: Incorrect. Azure AD Connect is for synchronizing on-premises Active Directory with Azure AD.
Azure AD Identity Protection: Incorrect. Identity Protection focuses on detecting and mitigating risks to user accounts, not managing app access to resources.
Quality Assurance Department:
Azure AD Privileged Identity Management (PIM): Correct
Why: PIM is ideal for providing temporary elevated access. It allows you to:
Grant users temporary admin roles.
Require activation with justification.
Set time-bound access limits, ensuring the elevated permissions expire.
Azure Managed Identity: Incorrect. Managed identities are for applications and services to authenticate with other Azure resources.
Azure AD Connect: Incorrect. Azure AD Connect is for synchronizing on-premises Active Directory with Azure AD.
Azure AD Identity Protection: Incorrect. Identity Protection focuses on detecting and mitigating risks to user accounts, not temporary privileged access.
Therefore, the correct answers are:
Security: Azure AD Privileged Identity Management
Development: Azure Managed Identity
Quality Assurance: Azure AD Privileged Identity Management
Overview:
Existing Environment
Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.
Active Directory Environment:
The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.
Network Infrastructure:
Each office contains at least one domain controller from the corp.fabrikam.com domain.
The main office contains all the domain controllers for the rd.fabrikam.com forest.
All the offices have a high-speed connection to the Internet.
An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.
The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.
Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.
Problem Statement:
The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.
Requirements:
Planned Changes:
Fabrikam plans to move most of its production workloads to Azure during the next few years.
As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft Office 365 deployment. All R&D operations will remain on-premises.
Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.
Technical Requirements:
Fabrikam identifies the following technical requirements:
- Web site content must be easily updated from a single point.
- User input must be minimized when provisioning new app instances.
- Whenever possible, existing on-premises licenses must be used to reduce cost.
- Users must always authenticate by using their corp.fabrikam.com UPN identity.
- Any new deployments to Azure must be redundant in case an Azure region fails.
- Whenever possible, solutions must be deployed to Azure by using platform as a service (PaaS).
- An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
- Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.
Database Requirements:
Fabrikam identifies the following database requirements:
- Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
- To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
- Database backups must be retained for a minimum of seven years to meet compliance requirements.
Security Requirements:
Fabrikam identifies the following security requirements:
* Company information, including policies, templates, and data, must be inaccessible to anyone outside the company.
* Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
* Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
* All administrative access to the Azure portal must be secured by using multi-factor authentication.
* The testing of WebApp1 updates must not be visible to anyone outside the company.
You need to recommend a strategy for migrating the database content of WebApp1 to Azure.
What should you include in the recommendation?
Use Azure Site Recovery to replicate the SQL servers to Azure.
Use SQL Server transactional replication.
Copy the BACPAC file that contains the Azure SQL database file to Azure Blob storage.
Copy the VHD that contains the Azure SQL database files to Azure Blob storage.
Understanding the Requirements and Constraints
Minimize Downtime: The migration process must minimize disruption to customer access.
Long-Term Backups: Backups must be retained for seven years.
Hybrid Identity: Authentication will be tied to corp.fabrikam.com (so we need AD Sync).
PaaS Preference: Prefer PaaS solutions where possible.
Redundancy: The solution must provide redundancy.
Security: Data must be kept private, including testing.
Database Analysis: Performance metrics should be available for analysis.
SQL Server 2016: The current on-premises database is running SQL Server 2016.
Analyzing the Options
Use Azure Site Recovery to replicate the SQL servers to Azure.
Incorrect. Azure Site Recovery (ASR) is great for replicating entire VMs, but it is a VM-based solution, which goes against the technical requirement to use PaaS solutions whenever possible. Also, it doesn’t address the requirement for long-term backups. ASR is not ideal for migrating a database to a PaaS offering.
Use SQL Server transactional replication.
Incorrect. While transactional replication is great for keeping data synchronized between databases, it’s complex to set up and doesn’t directly migrate the data into a PaaS Azure SQL Database solution. It’s typically used for ongoing replication, not a one-time migration. Transactional Replication does not handle the backup and retention requirements.
Copy the BACPAC file that contains the Azure SQL database file to Azure Blob storage.
Correct. A BACPAC file is a self-contained package containing the schema and data from a SQL Server database. This makes it suitable for migrating SQL databases. Additionally, a BACPAC file can be directly used to create an Azure SQL Database. This meets the PaaS and minimization of disruption requirements. The database can be backed up and retained for 7 years via the automated backup process for Azure SQL databases.
Copy the VHD that contains the Azure SQL database files to Azure Blob storage.
Incorrect. Copying the VHD (virtual hard disk) file is a good solution for migrating an IaaS-based SQL Server VM to Azure; however, it is not appropriate when migrating to an Azure SQL Database (PaaS) solution. It is better to use the BACPAC approach, which can directly create a PaaS SQL database.
Why BACPAC is the Best Choice
PaaS Alignment: It’s suitable for migrating to an Azure SQL Database, a PaaS offering, aligning with the PaaS preference requirement.
Minimal Downtime: Can be used for a relatively quick migration process, reducing impact on customer access.
Direct Migration: The BACPAC file is directly usable to create or update an Azure SQL Database.
Backup Handling: Azure SQL Database handles long-term backups (including 7-year retention) which addresses the requirement.
Efficiency: More efficient than setting up replication for a one-time migration.
Therefore, the recommendation should include: Copy the BACPAC file that contains the Azure SQL database file to Azure Blob storage.
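As a reference point, the export-and-upload step can be scripted. The sketch below assumes the SqlPackage command-line tool is installed and on PATH; the server, database, and storage names are illustrative placeholders, not values from the case study.

```python
# Export a BACPAC from the on-premises SQL Server 2016 instance, then copy it
# to Azure Blob storage, from where it can be imported into Azure SQL Database.
import subprocess

from azure.storage.blob import BlobServiceClient

BACPAC_PATH = r"C:\temp\webapp1.bacpac"

# 1. Export schema + data to a self-contained BACPAC file.
subprocess.run(
    [
        "SqlPackage",
        "/Action:Export",
        "/SourceServerName:LON-SQL01",    # hypothetical on-premises server
        "/SourceDatabaseName:WebApp1Db",  # hypothetical database name
        f"/TargetFile:{BACPAC_PATH}",
    ],
    check=True,
)

# 2. Upload the BACPAC to Blob storage.
blob_service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = blob_service.get_blob_client(container="migrations", blob="webapp1.bacpac")
with open(BACPAC_PATH, "rb") as data:
    blob.upload_blob(data, overwrite=True)
```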
Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.
Several VMs are exhibiting network connectivity issues.
You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.
Solution: Use the Azure Traffic Analytics solution in Azure Log Analytics to analyze the network traffic.
Does the solution meet the goal?
Yes
No
Understanding the Goal
The goal is to:
Analyze network traffic to VMs both on-premises and in Azure.
Determine whether packets are being allowed or denied.
Diagnose network connectivity issues.
Understanding the Solution: Azure Traffic Analytics
What it Does: Azure Traffic Analytics is a cloud-based solution that analyzes NSG (Network Security Group) flow logs to provide insights into network traffic patterns and security posture within your Azure environment.
How it Works:
Flow logs are captured by Azure Network Watcher for NSGs.
These logs are sent to a storage account.
Traffic Analytics processes these flow logs to extract actionable information.
Analyzing if the Solution Meets the Goal
Network Analysis: Traffic Analytics does analyze network traffic patterns, which helps with determining what traffic is flowing.
Allow/Deny Decisions: Traffic Analytics can show if a connection attempt was allowed or denied by an NSG based on its rules.
Connectivity Issues: Traffic Analytics can help identify the source or destination of connectivity issues related to VMs.
Limitations with On-Premises:
Crucially, Traffic Analytics only works with flow logs generated by Azure Network Security Groups (NSGs).
It does not analyze traffic for on-premises VMs directly.
While it can show traffic that comes from on-premises through the Azure ExpressRoute circuit and hits Azure NSGs, it doesn’t provide visibility of on-premises network traffic that does not traverse an Azure NSG.
You would need additional analysis tools or logs on-premises to achieve full visibility of on-premises traffic.
Conclusion
While Azure Traffic Analytics is an excellent tool for understanding network traffic and identifying allowed or denied packets within Azure, it does not meet the goal of analyzing network traffic to all VMs, both on-premises and in Azure.
Therefore, the answer is No.
To fully analyze the network traffic to all VMs, you would need a solution that can collect network flow data from both the Azure NSGs and the on-premises network.
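To illustrate the Azure-side half of that analysis, here is a minimal sketch of querying Traffic Analytics data in a Log Analytics workspace for denied flows. It assumes the azure-monitor-query package; the workspace ID is a placeholder, and the AzureNetworkAnalytics_CL field names (for example, FlowStatus_s == "D" for denied) reflect the Traffic Analytics schema and should be verified for your environment.

```python
# Query Traffic Analytics flow records in Log Analytics for NSG-denied flows.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowStatus_s == "D"   // denied flows only
| summarize DeniedFlows = count() by NSGRule_s, DestIP_s
| order by DeniedFlows desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-guid>",
    query=query,
    timespan=timedelta(hours=24),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```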
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
- Provide access to the full .NET framework.
- Provide redundancy if an Azure region fails.
- Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy an Azure virtual machine to two Azure regions, and you deploy an Azure Application Gateway.
Does this meet the goal?
Yes
No
Understanding the Requirements
Stateless Web App: The application doesn’t store user session data locally and can be scaled horizontally.
Full .NET Framework: The application needs access to the complete .NET Framework, not just .NET Core or .NET 5+.
Regional Redundancy: The application must remain operational if an Azure region goes down.
OS Access: Administrators must have access to the underlying operating system to install dependencies.
Analyzing the Solution
Azure Virtual Machines in Two Regions:
Meets Full .NET Framework Requirement: Yes, VMs allow you to install the full .NET framework and have full OS control.
Meets OS Access Requirement: Yes, administrators can access the OS of a virtual machine.
Provides Redundancy: Yes, deploying VMs in two regions provides redundancy, because if one region fails, the application would still be available in another.
Azure Application Gateway:
Meets Redundancy: Yes, Application Gateway allows you to load balance traffic across the VMs in the two regions.
Provides Access to Web App: Yes, it provides a single entry point to access the web application running on the VMs.
Evaluation
The solution, deploying VMs in multiple regions with Azure Application Gateway, does meet the stated requirements.
Full .NET Framework: VMs allow installation of the full framework.
Redundancy: VMs in two regions with Application Gateway provide redundancy and high availability.
OS Access: VMs provide administrators with OS-level access.
Therefore, the answer is Yes.
DRAG DROP
Your on-premises network contains a server named Server1 that runs an ASP.NET application named App1.
You have a hybrid deployment of Azure Active Directory (Azure AD).
You need to recommend a solution to ensure that users sign in by using their Azure AD account and Azure Multi-Factor Authentication (MFA) when they connect to App1 from the internet.
Which three Azure services should you recommend be deployed and configured in sequence? To answer, move the appropriate services from the list of services to the answer area and arrange them in the correct order.
Services
an internal Azure Load Balancer
an Azure AD conditional access policy
Azure AD Application Proxy
an Azure AD managed identity
a public Azure Load Balancer
an Azure AD enterprise application
an App Service plan
Answer Area
Understanding the Requirements
On-Premises App: App1 is hosted on-premises.
Azure AD Authentication: Users must authenticate using their Azure AD accounts.
Azure MFA: Users must be prompted for MFA when connecting from the internet.
Internet Access: Users access App1 from the internet.
Analyzing the Azure Services
Here’s a breakdown of how each service fits into the solution:
Azure AD Enterprise Application:
Purpose: This represents App1 in Azure AD, making it possible to authenticate users and manage access.
Why It’s Needed First: You need to register the application in Azure AD before you can authenticate users against it. This sets the base for Azure AD to recognize the application as a target.
Azure AD Application Proxy:
Purpose: This securely publishes App1 to the internet without requiring changes to your network infrastructure. It’s a component of Azure AD that gives users secure access to on-premises applications through Azure AD.
Why It’s Needed Second: Application Proxy connects to your on-premises app using a connector, and can then use Azure AD for authentication. It also enables the use of conditional access policies.
Azure AD Conditional Access Policy:
Purpose: Enforces MFA and other security requirements for access to App1 based on user location, device, etc.
Why It’s Needed Last: You configure a conditional access policy after you’ve configured the Enterprise Application in Azure AD and configured it to use Application Proxy. This allows you to define the authentication conditions.
Incorrect Services
An internal Azure Load Balancer: Internal load balancers are used for distributing traffic within a virtual network. They do not make applications accessible from the internet.
An Azure AD managed identity: Managed identities are for allowing resources to securely authenticate with other Azure services, not for application access.
A public Azure Load Balancer: While public load balancers can direct internet traffic, they do not implement authentication for Azure AD users or apply conditional access policies.
An App Service plan: App Service plans are used to define the resources for hosting Azure App Service web applications and do not play a role in authenticating against on-premises apps.
Correct Order
The correct order to deploy and configure these services is:
Azure AD enterprise application
Azure AD Application Proxy
Azure AD conditional access policy
Therefore, drag and drop the three services in the order listed above into the answer area.
HOTSPOT
You have an Azure subscription that contains the SQL servers on Azure shown in the following table:
SQL Servers Table
| Name | Resource group | Location |
|---|---|---|
| SQLsvr1 | RG1 | East US |
| SQLsvr2 | RG2 | West US |
The subscription contains the storage accounts shown in the following table:
Storage Accounts Table
| Name | Resource group | Location | Account kind |
|---|---|---|---|
| storage1 | RG1 | East US | StorageV2 (general purpose v2) |
| storage2 | RG2 | Central US | BlobStorage |
You create the Azure SQL databases shown in the following table:
Azure SQL Databases Table
| Name | Resource group | Server | Pricing tier |
|---|---|---|---|
| SQLdb1 | RG1 | SQLsvr1 | Standard |
| SQLdb2 | RG1 | SQLsvr1 | Standard |
| SQLdb3 | RG2 | SQLsvr2 | Premium |
Answer Area
Statements
When you enable auditing for SQLdb1, you can store the audit information to storage1.
When you enable auditing for SQLdb2, you can store the audit information to storage2.
When you enable auditing for SQLdb3, you can store the audit information to storage2.
Key Concepts:
Azure SQL Auditing: This feature tracks database events and writes them to audit logs. These logs can be stored in Azure Storage accounts.
Storage Account Requirements: When configuring auditing for Azure SQL databases, you need to specify a storage account for audit log storage. There are limitations around storage accounts that can be used for audit logs.
Important Considerations:
Storage Account Type: Azure SQL Database Auditing requires storage accounts of types StorageV2 (general purpose v2). Blob Storage accounts cannot be used.
Storage Account Location: The storage account used for auditing must be in the same region as the SQL server it is auditing. If it is not, audit logs cannot be written to the given storage account.
Statement Analysis:
When you enable auditing for SQLdb1, you can store the audit information to storage1.
Analysis: True. SQLdb1 is on the SQLsvr1 server located in East US. storage1 is also located in East US and it’s a StorageV2 account. This meets the requirements for SQL Auditing.
When you enable auditing for SQLdb2, you can store the audit information to storage2.
Analysis: False. SQLdb2 is on the SQLsvr1 server located in East US. However, storage2 is in Central US, a different region. In addition, storage2 is a BlobStorage account, which is not supported for SQL auditing. Because of both the location and the account type, the logs cannot be sent to storage2.
When you enable auditing for SQLdb3, you can store the audit information to storage2.
Analysis: False. SQLdb3 is on the SQLsvr2 server located in West US. However, storage2 is located in Central US, which is a different region and the account type of storage2 is BlobStorage, which is also not supported. The server’s and storage’s regions must match, and the storage must be StorageV2.
Therefore, the correct answer is:
When you enable auditing for SQLdb1, you can store the audit information to storage1: Yes
When you enable auditing for SQLdb2, you can store the audit information to storage2: No
When you enable auditing for SQLdb3, you can store the audit information to storage2: No
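For illustration, enabling auditing on SQLdb1 against storage1 might look like the following sketch with the azure-mgmt-sql SDK; the subscription ID and storage key are placeholders, and the parameter names should be checked against your SDK version.

```python
# Enable blob auditing on SQLdb1, writing audit logs to storage1.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.database_blob_auditing_policies.create_or_update(
    resource_group_name="RG1",
    server_name="SQLsvr1",
    database_name="SQLdb1",
    parameters={
        "state": "Enabled",
        # storage1 is StorageV2 and in East US, matching SQLsvr1's region.
        "storage_endpoint": "https://storage1.blob.core.windows.net",
        "storage_account_access_key": "<storage1-access-key>",
    },
)
```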
You have 100 servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2012 R2 instances. The instances host databases that have the following characteristics:
✑ The largest database is currently 3 TB. None of the databases will ever exceed 4 TB.
✑ Stored procedures are implemented by using CLR.
You plan to move all the data from SQL Server to Azure.
You need to recommend an Azure service to host the databases. The solution must meet the following requirements:
✑ Whenever possible, minimize management overhead for the migrated databases.
✑ Minimize the number of database changes required to facilitate the migration.
✑ Ensure that users can authenticate by using their Active Directory credentials.
What should you include in the recommendation?
Azure SQL Database single databases
Azure SQL Database Managed Instance
Azure SQL Database elastic pools
SQL Server 2016 on Azure virtual machines
Understanding the Requirements
Large Databases: Databases up to 3 TB, with a max of 4 TB.
CLR Stored Procedures: Databases use CLR for stored procedures.
Minimize Management: Reduce overhead related to database maintenance and administration.
Minimize Changes: Reduce the number of database changes necessary for migration.
Active Directory Authentication: Use existing Active Directory credentials.
Analyzing the Options
Azure SQL Database Single Databases:
Pros: Highly managed PaaS offering. Low management overhead.
Cons: Limited database size (up to 4 TB for some configurations, but may be more costly for 4 TB). Does not support CLR.
Conclusion: Fails to meet the CLR requirement.
Azure SQL Database Managed Instance:
Pros: PaaS offering with high compatibility with on-premises SQL Server; supports CLR. Supports authentication with Azure AD credentials, which can be synchronized from on-premises Active Directory. Up to 16 TB of storage in a single instance.
Cons: Higher cost than single databases.
Conclusion: Meets all requirements and is the best fit.
Azure SQL Database Elastic Pools:
Pros: PaaS offering for managing multiple databases with shared resources.
Cons: Designed for databases with varying usage patterns. Databases in an Elastic Pool cannot exceed the size limits of Azure SQL Database single databases, and therefore will not meet the requirement for large databases. Also does not support CLR.
Conclusion: Fails to meet the CLR and database size requirements.
SQL Server 2016 on Azure Virtual Machines:
Pros: Full control over the SQL Server instance. Full CLR support. Allows for full AD authentication.
Cons: Requires significantly more management overhead because you’re responsible for patching, backups, high availability, etc.
Conclusion: Fails to minimize management overhead.
Why Managed Instance is the Best Choice
Compatibility: Managed Instance has great compatibility with on-premises SQL Server. This will reduce the number of database changes required for the migration.
CLR Support: It supports CLR, unlike single databases and elastic pools.
Database Size: It can accommodate the large databases (up to 16TB) which also covers the largest database and projected growth.
Managed Service: It’s a PaaS offering, minimizing management overhead, as Azure manages the underlying infrastructure.
Active Directory Integration: Supports Active Directory authentication.
Therefore, the correct recommendation is Azure SQL Database Managed Instance.
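To illustrate the Active Directory authentication point, a connection to a managed instance can use AD credentials rather than SQL logins. A minimal sketch, assuming the Microsoft ODBC Driver 18 for SQL Server is installed and using a hypothetical managed instance host name:

```python
# Connect to an Azure SQL Managed Instance with Azure AD credentials via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:mi-prod.abc123.database.windows.net,1433;"  # hypothetical MI host
    "Database=AppDb;"
    "Authentication=ActiveDirectoryPassword;"  # AD-based sign-in, no SQL logins
    "UID=user@corp.example.com;PWD=<password>;"
    "Encrypt=yes;"
)
print(conn.execute("SELECT @@VERSION").fetchone()[0])
```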
You have an Azure subscription that contains an Azure Blob storage account named store1.
You have an on-premises file server named Server1 that runs Windows Server 2016.
Server1 stores 500 GB of company files.
You need to store a copy of the company files from Server1 in store1.
Which two possible Azure services achieve this goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
an Azure Batch account
an integration account
an On-premises data gateway
an Azure Import/Export job
Azure Data factory
Correct Solutions:
An On-premises data gateway: This is a crucial component for connecting on-premises data sources to Azure services. The gateway acts as a secure bridge that allows services such as Azure Data Factory to reach the files on Server1, enabling a hybrid approach to data migration. Using the gateway, data from on-premises servers can be sent to Azure Storage to complete the required transfer.
Azure Data Factory: ADF is a cloud-based data integration service that orchestrates the movement and transformation of data. With the On-premises Data Gateway, ADF can copy the 500 GB of files from Server1 to store1 using the Copy activity. This is a standard use case for ADF and is a very appropriate approach for moving large amounts of data to the cloud.
Incorrect Solutions:
An Azure Batch account: Azure Batch is a service for running large-scale parallel and high-performance computing jobs. It is not used for direct data transfer or file copying from on-premises file servers.
An integration account: Integration Accounts are part of Azure Logic Apps and are used for storing integration artifacts such as schemas, maps, and partners information. It’s not used for data movement in the way required here.
An Azure Import/Export job: Import/Export jobs are primarily for migrating extremely large datasets to Azure by shipping physical storage devices (like hard drives). This solution is not required when you have a good internet connection that you can utilize to transfer 500 GB of data to Azure. It would be slower, more complicated, and involve manual shipping.
In summary:
The correct options are an On-premises data gateway and Azure Data Factory. These options work together to enable secure data transfer from an on-premises file server to an Azure Blob Storage.
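For illustration, the ADF side of this pairing is a pipeline with a Copy activity reading from a file-share dataset and writing to a blob dataset. A sketch with the azure-mgmt-datafactory SDK, assuming the linked services and datasets (reachable through the gateway or a self-hosted integration runtime) already exist; all names are placeholders.

```python
# Define an ADF pipeline that copies files from an on-premises share to Blob storage.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink,
    CopyActivity,
    DatasetReference,
    FileSystemSource,
    PipelineResource,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

copy_files = CopyActivity(
    name="CopyServer1ToStore1",
    inputs=[DatasetReference(reference_name="Server1FilesDataset")],
    outputs=[DatasetReference(reference_name="Store1BlobDataset")],
    source=FileSystemSource(),  # reads from the on-premises file share
    sink=BlobSink(),            # writes to the store1 blob container
)

adf.pipelines.create_or_update(
    resource_group_name="<rg>",
    factory_name="<factory-name>",
    pipeline_name="CopyCompanyFiles",
    pipeline=PipelineResource(activities=[copy_files]),
)
```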
You have the Azure resources shown in the following table.
| Name | Type | Location |
|---|---|---|
| US-Central-Firewall-policy | Azure Firewall policy | Central US |
| US-East-Firewall-policy | Azure Firewall policy | East US |
| EU-Firewall-policy | Azure Firewall policy | West Europe |
| USEastfirewall | Azure Firewall | Central US |
| USWestfirewall | Azure Firewall | East US |
| EUFirewall | Azure Firewall | West Europe |
You need to deploy a new Azure Firewall policy that will contain mandatory rules for all Azure Firewall deployments. The new policy will be configured as a parent policy for the existing policies.
What is the minimum number of additional Azure Firewall policies you should create?
0
1
2
3
Understanding Parent Policies
An Azure Firewall Policy can be a parent policy. This means its rules and settings are inherited by other (child) policies.
A parent policy is not assigned to an Azure Firewall directly; instead, it is set as the base policy of child policies, and those child policies are associated with the firewalls.
You can assign multiple firewalls to a policy.
You need to create a new firewall policy to be a parent for all existing firewall policies.
Analysis
Goal: You need a single parent policy that applies to all Azure Firewall deployments.
Current Setup: You have three existing Azure Firewall policies (US-Central-Firewall-policy, US-East-Firewall-policy, and EU-Firewall-policy) each associated with a specific Azure Firewall.
Solution: Create one new policy and configure it as the parent (base) policy of all the existing policies. Therefore, you need to create one parent policy.
Minimum Additional Policies:
To achieve the objective, we only need 1 additional Azure Firewall policy. The one new policy will act as the parent policy, and the existing policies will become its child policies.
Answer:
1
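For illustration, a sketch of creating the one additional parent policy and linking a child to it with the azure-mgmt-network SDK. Names and resource groups are placeholders; note that Azure may require the base policy to be set when the child policy is created, so re-parenting an existing policy can mean redeploying it.

```python
# Create the parent firewall policy, then create a child that inherits from it.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import FirewallPolicy, SubResource

net = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 1. The single additional policy that holds the mandatory rules.
parent = net.firewall_policies.begin_create_or_update(
    "<rg>",
    "Global-Parent-Firewall-policy",
    FirewallPolicy(location="eastus"),
).result()

# 2. Each regional policy becomes a child by referencing the parent as its
#    base policy; rules in the parent are then inherited.
net.firewall_policies.begin_create_or_update(
    "<rg>",
    "US-East-Firewall-policy",
    FirewallPolicy(location="eastus", base_policy=SubResource(id=parent.id)),
).result()
```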
HOTSPOT
You have an Azure subscription that contains the storage accounts shown in the following table.
| Name | Type | Performance |
|---|---|---|
| storage1 | StorageV2 | Standard |
| storage2 | StorageV2 | Premium |
| storage3 | BlobStorage | Standard |
| storage4 | FileStorage | Premium |
You plan to implement two new apps that have the requirements shown in the following table.
| Name | Requirement |
|---|---|
| App1 | Use lifecycle management to migrate app data between storage tiers |
| App2 | Store app data in an Azure file share |
Which storage accounts should you recommend using for each app? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
App1:
Storage1 and storage2 only
Storage1 and storage3 only
Storage1, storage2, and storage3 only
Storage1, storage2, storage3, and storage4
App2:
Storage4 only
Storage1 and storage4 only
Storage1, storage2, and storage4 only
Storage1, storage2, storage3, and storage4
Understanding Storage Account Types
StorageV2 (General-purpose v2): Supports all storage services (blobs, queues, tables, files) and offers different performance tiers (Hot, Cool, Archive). Suitable for most general-purpose scenarios.
BlobStorage: Specifically designed for storing unstructured data (blobs). It supports access tiers (Hot, Cool, Archive), making it suitable for lifecycle management.
FileStorage: Specifically designed for creating Azure file shares that can be accessed via SMB.
Analyzing Requirements
App1: Requires lifecycle management to migrate data between storage tiers. This means it needs to use Hot/Cool/Archive tiers.
App2: Requires storing data in an Azure file share.
Selections
App1: Storage1, storage2 and storage3 only.
Storage1 is a StorageV2 account, which is perfect for general-purpose storage including lifecycle management.
Storage2 is also a StorageV2, which also supports lifecycle management.
Storage3 is a BlobStorage account, which is perfect for blob storage including lifecycle management.
Storage4 is a FileStorage account, so it does not support lifecycle management.
App2: Storage4 only
Storage4 is the only FileStorage account and therefore the only account type that can fulfill the requirement.
Therefore, the correct answer is:
App1: Storage1, storage2, and storage3 only
App2: Storage4 only
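For illustration, a lifecycle management rule of the kind App1 needs might be defined as follows with the azure-mgmt-storage SDK. The rule name and day thresholds are illustrative; only the StorageV2 and BlobStorage accounts (storage1 through storage3) accept such a policy, which is the point of the answer above.

```python
# Create a lifecycle rule on storage1: tier blobs to Cool after 30 days,
# then to Archive after 180 days.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

storage.management_policies.create_or_update(
    resource_group_name="<rg>",
    account_name="storage1",
    management_policy_name="default",
    properties={
        "policy": {
            "rules": [
                {
                    "enabled": True,
                    "name": "app1-tiering",
                    "type": "Lifecycle",
                    "definition": {
                        "filters": {"blob_types": ["blockBlob"]},
                        "actions": {
                            "base_blob": {
                                "tier_to_cool": {"days_after_modification_greater_than": 30},
                                "tier_to_archive": {"days_after_modification_greater_than": 180},
                            }
                        },
                    },
                }
            ]
        }
    },
)
```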
HOTSPOT
You are designing an Azure web app.
You plan to deploy the web app to the North Europe Azure region and the West Europe Azure region.
You need to recommend a solution for the web app. The solution must meet the following requirements:
✑ Users must always access the web app from the North Europe region, unless the region fails.
✑ The web app must be available to users if an Azure region is unavailable.
✑ Deployment costs must be minimized.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Request routing method:
A Traffic Manager profile
Azure Application Gateway
Azure Load Balancer
Request routing configuration:
Cookie-based session affinity
Performance traffic routing
Priority traffic routing
Weighted traffic routing
Understanding the Requirements
Primary Region: Users should always access the North Europe region unless it is unavailable. This indicates the need for a failover mechanism.
High Availability: The app must remain accessible even if one of the regions fails.
Cost Optimization: Deployment costs need to be minimized.
Analyzing Azure Services
Request Routing Method:
A Traffic Manager profile: This is the best choice for routing traffic based on priority, performance, or geographic location. It offers automatic failover and is specifically designed for these scenarios.
Azure Application Gateway: Primarily designed for web traffic load balancing, web application firewall capabilities, and more advanced routing based on HTTP headers and other parameters. It’s not the right tool for handling primary/failover logic like this.
Azure Load Balancer: Primarily for balancing traffic within a region. It doesn’t provide the cross-region routing required for failover in this scenario.
Request Routing Configuration:
Cookie-based session affinity: Ensures requests from the same user are routed to the same instance. This isn’t relevant to the core requirement of failover and routing between regions.
Performance traffic routing: Routes traffic to the endpoint with the lowest latency for the user. While useful, it does not guarantee that North Europe is always preferred, which is the core requirement here.
Priority traffic routing: Routes traffic to a primary endpoint, and if that endpoint is unhealthy, the traffic is routed to the next available endpoint. This is perfect for the primary/failover scenario.
Weighted traffic routing: Routes traffic to different endpoints based on a percentage, typically for scenarios like testing different versions. This is not optimal for the specified requirement.
Solution
Based on the analysis:
Request routing method: A Traffic Manager profile is the appropriate service for managing the failover between two regions.
Request routing configuration: Priority traffic routing fits the requirement to route users to the primary region, which will be North Europe, and automatically direct traffic to the secondary region (West Europe) if the primary region is unavailable.
Therefore, the correct answer is:
Request routing method: A Traffic Manager profile
Request routing configuration: Priority traffic routing
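For illustration, a sketch of such a profile with the azure-mgmt-trafficmanager SDK, using priority 1 for North Europe and priority 2 for West Europe; resource IDs and names are placeholders.

```python
# Create a Traffic Manager profile with priority routing and two endpoints.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient

tm = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

tm.profiles.create_or_update(
    resource_group_name="<rg>",
    profile_name="webapp-failover",
    parameters={
        "location": "global",
        "traffic_routing_method": "Priority",
        "dns_config": {"relative_name": "webapp-failover", "ttl": 30},
        "monitor_config": {"protocol": "HTTPS", "port": 443, "path": "/"},
        "endpoints": [
            {
                "name": "north-europe",  # primary: always used while healthy
                "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
                "target_resource_id": "<north-europe-web-app-resource-id>",
                "priority": 1,
            },
            {
                "name": "west-europe",   # failover target
                "type": "Microsoft.Network/trafficManagerProfiles/azureEndpoints",
                "target_resource_id": "<west-europe-web-app-resource-id>",
                "priority": 2,
            },
        ],
    },
)
```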
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.
Does this meet the goal?
Yes
No
Understanding the Requirements
App Service and SQL Database Co-location: App Service instances and their associated Azure SQL databases must be deployed in the same Azure region.
Regional Regulatory Requirement: App Service instances can only be deployed to specific allowed Azure regions.
Simultaneous Deployment: Both the App Service instances and the Azure SQL databases will be deployed at the same time.
Evaluating the Proposed Solution
Resource Groups Based on Location: Creating resource groups named based on Azure regions is a sound practice. This helps organize resources logically and makes it easy to manage deployments within specific regions. It is common to create a resource group for each region you want to deploy resources in (e.g. rg-eastus, rg-westus).
Resource Locks: Resource locks prevent accidental deletion or modification of resources. A CanNotDelete lock on a resource group prevents deletion of its resources, and a ReadOnly lock prevents modification. However, locks do not control which regions resources are created in, and therefore will not enforce the regional regulatory requirement.
Why the Solution Doesn’t Fully Meet the Goal
The proposed solution addresses the organization and prevention of deletion of the resources, but it does not enforce the actual deployment of resources only to specific allowed regions.
While creating resource groups by location helps in the organization of resources, it does not prevent the creation of resources in the incorrect region.
Resource locks only protect resources after they exist; they do not evaluate where a resource is being deployed and will not stop resources from being deployed to a resource group in an incorrect region.
Conclusion
The solution is a good practice for resource organization, but does not enforce regional deployment.
Answer:
No
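For contrast, the control that would enforce the regional requirement is an Azure Policy assignment, for example the built-in Allowed locations definition. A sketch with the azure-mgmt-resource SDK, assuming a placeholder subscription; the definition GUID shown is the well-known built-in at the time of writing and should be verified in your tenant.

```python
# Assign the built-in "Allowed locations" policy at subscription scope.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

policy = PolicyClient(DefaultAzureCredential(), "<subscription-id>")

policy.policy_assignments.create(
    scope="/subscriptions/<subscription-id>",
    policy_assignment_name="allowed-locations",
    parameters={
        "policy_definition_id": (
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "e56962a6-4747-49cd-b67b-bf8b01975c4c"  # built-in Allowed locations
        ),
        "parameters": {
            "listOfAllowedLocations": {"value": ["eastus", "westus"]}
        },
    },
)
```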
Your company has an app named App1 that uses data from the on-premises Microsoft SQL Server databases shown in the following table.
| Name | Size |
|---|---|
| DB1 | 450 GB |
| DB2 | 250 GB |
| DB3 | 300 GB |
| DB4 | 50 GB |
App1 and the data are used on the first day of the month only. The data is not expected to grow more than 3% each year.
The company is rewriting App1 as an Azure web app and plans to migrate all the data to Azure.
You need to migrate the data to Azure SQL Database. The solution must minimize costs.
Which service tier should you use?
vCore-based Business Critical
vCore-based General Purpose
DTU-based Standard
DTU-based Basic
Understanding the Requirements
Data Migration: All data from on-premises SQL Server databases needs to be migrated to Azure SQL Database.
Usage Pattern: The application (and thus the database) is used only on the first day of each month, and the data does not grow excessively (3% annually).
Cost Minimization: The goal is to choose the most cost-effective service tier for this usage pattern.
Analyzing Azure SQL Database Service Tiers
vCore-based Service Tiers:
Business Critical: Designed for mission-critical applications with the highest resilience, high availability, and the fastest performance (using local SSD storage). Offers a very high level of performance but is the most expensive option.
General Purpose: Suitable for most business workloads and is typically the default choice. Provides good performance with a balance between cost and features.
DTU-based Service Tiers:
Standard: Offers a good balance of features and performance.
Basic: The most cost-effective DTU-based option, designed for low-throughput and less demanding workloads, typically with small databases.
Evaluation for this scenario
Infrequent Usage: The application’s usage pattern is highly periodic - only once per month. Therefore, paying for very high performance during the rest of the month is not ideal. A tier with low compute capability for most of the month is optimal for this cost-conscious requirement.
Performance Needs: The data needs to be available and performant on that first day of the month, however, there is no requirement that the performance must be very high.
Data Size: The total data size is around 1050 GB (450+250+300+50), which does not qualify as “small”. However, the low monthly usage makes a lower tier optimal.
vCore-based Business Critical is not suitable because it is intended for high-throughput, mission-critical systems and is the most expensive option.
vCore-based General Purpose could meet the performance requirements, but it would incur unnecessary compute cost during the long periods when the system is idle.
DTU-based Standard is cheaper than the vCore options, but it is still more expensive than the Basic tier and offers no advantage for a workload that is idle most of the month.
DTU-based Basic is the best fit because it is the most cost-effective tier while still providing sufficient performance for the single day each month the database is used.
Conclusion
Given the low, infrequent usage of the database (one day a month), the DTU-based Basic tier is the most cost-effective option and is sufficient for the performance requirement. You can scale the database up for the one day it is needed and scale it back down for the rest of the month, as sketched below.
Answer:
DTU-based Basic
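For illustration, the scale-up/scale-down pattern mentioned above might be scripted as follows with the azure-mgmt-sql SDK; server and database names are placeholders.

```python
# Scale the database up for the monthly batch run, then back down to Basic.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

sql = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

def set_tier(sku_name: str, tier: str) -> None:
    """Change the service objective of the App1 database."""
    sql.databases.begin_update(
        resource_group_name="<rg>",
        server_name="<server>",
        database_name="App1Db",
        parameters={"sku": {"name": sku_name, "tier": tier}},
    ).result()

set_tier("S3", "Standard")   # before the monthly batch run
set_tier("Basic", "Basic")   # after the run completes
```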
You have a .NET web service named Service1 that has the following requirements:
✑ Must read and write to the local file system.
✑ Must write to the Windows Application event log.
You need to recommend a solution to host Service1 in Azure. The solution must meet the following requirements:
✑ Minimize maintenance overhead.
✑ Minimize costs.
What should you include in the recommendation?
an Azure App Service web app
an Azure virtual machine scale set
an App Service Environment (ASE)
an Azure Functions app
Understanding the Requirements
Service1 Functionality: The service needs to:
Read and write to the local file system.
Write to the Windows Application event log.
Hosting Goals:
Minimize maintenance overhead.
Minimize costs.
Analyzing Azure Hosting Options
Azure App Service Web App:
Pros: Fully managed platform, low maintenance, good for web applications. Supports deployment of .NET web services.
Cons:
Limited local file system access: App Service web apps have a sandbox environment, making direct file system access limited. Can write to D:\home, but this is a network-based file system and it has some limitations. It is not the same as true local storage.
No direct access to the Windows Event Log: Writing to the Windows Event Log is not directly supported in a standard App Service Web App. You would typically need to use another logging mechanism.
Azure Virtual Machine Scale Set (VMSS):
Pros: Provides scalability and high availability for virtual machines. Full access to the VM and its operating system.
Cons: Higher maintenance overhead than PaaS offerings like App Service. This includes patching, configuring, and managing the underlying OS. Also higher cost due to the compute usage.
App Service Environment (ASE):
Pros: Provides an isolated, dedicated environment for App Service apps. Offers more control than a standard App Service. It is a private version of the standard Azure App Service.
Cons: Much more expensive than a standard App Service. Also, has similar limitations regarding local filesystem and Windows Event Log.
Azure Functions App:
Pros: Serverless compute service, event-driven architecture. Very low maintenance overhead and cost-effective.
Cons: Primarily for running code in response to events, not designed for hosting long-running services. Limited file system access, same as App Services. No direct access to the Windows event log.
Evaluation
File System Access: Both Azure App Service and Azure Functions have limited file system access, especially for non-temporary storage. VMSS is the only option here that would satisfy the local file system requirement.
Windows Event Log: Standard App Service Web Apps and Functions do not have direct access to the Windows Application event log. However, VMSS is ideal for this.
Maintenance: The goal is to minimize maintenance. VMSS requires much more maintenance than the other options, due to the underlying Operating system that requires patching and administration.
Cost: A VMSS will be the most expensive option due to compute usage. App Service and Functions are much cheaper, with no OS patching or administration. However, neither App Service nor Functions offers full local file system access or Windows event log writing.
Conclusion
None of the options fulfills all of the requirements. The closest is an Azure virtual machine scale set (VMSS), which provides full access to the local file system and the Windows event log. It incurs more maintenance overhead than the PaaS options, so it is not a perfect fit, but it is the best choice in this scenario.
An Azure App Service web app does not support writing to the Windows event log.
An Azure Functions app does not support writing to the Windows event log.
An App Service Environment (ASE) does not support writing to the Windows event log.
Answer:
an Azure virtual machine scale set
You have SQL Server on an Azure virtual machine. The databases are written to nightly as part of a batch process.
You need to recommend a disaster recovery solution for the data. The solution must meet the following requirements:
✑ Provide the ability to recover in the event of a regional outage.
✑ Support a recovery time objective (RTO) of 15 minutes.
✑ Support a recovery point objective (RPO) of 24 hours.
✑ Support automated recovery.
✑ Minimize costs.
What should you include in the recommendation?
Azure virtual machine availability sets
Azure Disk Backup
an Always On availability group
Azure Site Recovery
Understanding the Requirements
Regional Outage Protection: The solution must protect against complete Azure regional failures.
RTO (Recovery Time Objective) of 15 minutes: The maximum acceptable downtime for the service should be 15 minutes after a disaster.
RPO (Recovery Point Objective) of 24 hours: The maximum acceptable data loss should be 24 hours in a disaster. This means you can lose at most 24 hours of data.
Automated Recovery: The failover process should be automated, minimizing the need for manual intervention.
Cost Minimization: The chosen solution should be cost-effective.
Analyzing Disaster Recovery Options
Azure Virtual Machine Availability Sets:
Pros: Provides high availability within a single Azure region, protecting against hardware failures within the region.
Cons: Does NOT protect against regional outages. It does not provide disaster recovery to another region. This option is for availability, and not disaster recovery.
Does NOT meet requirements.
Azure Disk Backup:
Pros: Provides point-in-time backups of Azure VM disks to a Recovery Services vault. It is typically used for recovery within the same region, but can be configured for cross-region restore.
Cons: Backup and restore are not instantaneous. Restoring a database from backup and performing a manual recovery operation will exceed the 15-minute RTO. It requires human interaction to initiate the recovery process.
Does NOT meet the automated recovery or RTO requirements.
Always On Availability Group (AG):
Pros: Provides database-level high availability within a single region or across regions, with automatic failover. The RPO of a secondary replica is typically measured in seconds, far tighter than the 24-hour RPO actually required.
Cons: Can be complex to configure and manage. Requires a SQL Server license for each node in the availability group, which adds significant cost that the modest 24-hour RPO does not justify.
Does NOT meet the cost-minimization requirement.
Azure Site Recovery (ASR):
Pros: Provides a disaster recovery service that replicates virtual machines to another Azure region, enabling you to fail over in case of a regional outage. Supports automated failover and failback. Its replication frequency is well within the 24-hour RPO requirement, and it is less expensive than an Always On availability group.
Cons: Recovery can take some time (can be optimized for shorter RTOs). It is not instantaneous recovery, however it can easily meet the 15-minute RTO requirement.
Evaluation
Regional Outage Protection: Azure Site Recovery is the only option here that protects against a regional outage. Availability sets do not protect against a regional outage, as they are in the same region.
RTO of 15 Minutes: Azure Site Recovery can meet a 15-minute RTO by pre-configuring a recovery plan so that failover runs without manual steps. The other options either do not provide protection in another region or cannot meet the RTO.
RPO of 24 hours: Azure Site Recovery replicates changes frequently, comfortably meeting the 24-hour RPO requirement.
Automated Recovery: Azure Site Recovery provides automated failover.
Cost Minimization: Azure Site Recovery is more cost-effective than an Always On availability group because it does not require additional SQL Server licenses. Azure Disk Backup is cheaper than ASR; however, it requires manual intervention during recovery and will not meet the RTO.
Conclusion
Given the requirements, Azure Site Recovery (ASR) is the best option for the disaster recovery of a SQL Server virtual machine in Azure. It meets the regional protection, RTO, RPO, and automated recovery requirements and is more cost-effective than an Always On Availability Group.
Answer:
Azure Site Recovery
HOTSPOT
You need to ensure that users managing the production environment are registered for Azure MFA and must authenticate by using Azure MFA when they sign in to the Azure portal. The solution must meet the authentication and authorization requirements.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
To register the users for Azure MFA, use:
Azure AD Identity Protection
Security defaults in Azure AD
Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure:
Grant control in capolicy1
Session control in capolicy1
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant
Understanding the Requirements
Azure MFA Enrollment: Users responsible for production environment management must be registered for Azure Multi-Factor Authentication (MFA).
MFA Enforcement: These users must be required to use Azure MFA when they sign in to the Azure portal.
Authentication and Authorization: The solution must meet both authentication (verifying the user’s identity) and authorization (granting access) requirements.
Analyzing Azure MFA Options
Registering Users for Azure MFA:
Azure AD Identity Protection: This service detects potential vulnerabilities and risks regarding user accounts, but it is not directly used to enroll users for MFA.
Security defaults in Azure AD: This provides basic security settings to all users of the tenant, including MFA registration. It doesn’t allow for targeting only specific users, such as in this case, those that manage the production environment.
Per-user MFA in the MFA management UI: This is the classic, direct way to enable MFA for specific users, because it lets you manage MFA settings for each account individually. Since it can target only the users who manage the production environment, it is the best fit for this situation.
Enforcing Azure MFA Authentication:
Grant control in capolicy1: A “grant control” in a Conditional Access policy is used to enforce certain actions such as requiring MFA. If the correct user or group is specified, this is the perfect solution for enforcing MFA for those specific users.
Session control in capolicy1: Session control in conditional access policies is used to configure features such as “sign-in frequency” and is not directly used to enforce MFA.
Sign-in risk policy in Azure AD Identity Protection for the Litware.com tenant: Risk policies in Identity Protection are designed to detect and respond to sign-ins that are considered risky. Although this option provides security, it is not the perfect answer to the specified requirements.
Solution
Based on the analysis:
To register users for MFA: Use Per-user MFA in the MFA management UI. This option allows for precise control over who is enabled for MFA.
To enforce MFA authentication: Configure Grant control in capolicy1. This will ensure that the correct users must satisfy the MFA requirement in order to access the Azure portal.
Therefore, the correct answers are:
To register the users for Azure MFA, use: Per-user MFA in the MFA management UI
To enforce Azure MFA authentication, configure: Grant control in capolicy1
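For illustration, such a grant control can be expressed through the Microsoft Graph Conditional Access API. A sketch using plain REST calls, assuming a placeholder group of production administrators; the payload mirrors the Graph conditionalAccess schema and should be validated against current documentation.

```python
# Create a Conditional Access policy whose grant control requires MFA.
import requests

from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default")

policy = {
    "displayName": "capolicy1-require-mfa-prod-admins",
    "state": "enabled",
    "conditions": {
        "clientAppTypes": ["all"],
        "users": {"includeGroups": ["<prod-admins-group-id>"]},
        # "Microsoft Azure Management" covers Azure portal sign-ins.
        "applications": {"includeApplications": ["797f4846-ba00-4fd7-ba43-dac1f8f63013"]},
    },
    # The grant control is what actually enforces MFA.
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token.token}"},
    json=policy,
)
resp.raise_for_status()
```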
Your company has the divisions shown in the following table.
Division Azure subscription Azure AD tenant
East Sub1 Contoso.com
West Sub2 Fabrikam.com
Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.
You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.
What should you recommend?
A. Configure Azure AD join.
B. Configure Azure AD Identity Protection.
C. Configure a Conditional Access policy.
D. Configure Supported account types in the application registration and update the sign-in endpoint.
Understanding the Scenario
App1: An Azure App Service web app using Azure AD for authentication.
Current Setup: App1 is configured for single-tenant authentication, only allowing users from the contoso.com Azure AD tenant to sign in.
Requirement: You need to allow users from the fabrikam.com Azure AD tenant to authenticate to App1.
Analyzing Solution Options
A. Configure Azure AD join: Azure AD join is used to register devices with Azure AD. This is used for device authentication and does not allow another tenant’s users to authenticate against the application.
B. Configure Azure AD Identity Protection: Azure AD Identity Protection is for detecting and responding to risky sign-in behaviors. It does not enable cross-tenant authentication.
C. Configure a Conditional Access Policy: Conditional Access policies are used to control access based on criteria such as location, device, and app, but it does not directly enable users from another tenant to authenticate.
D. Configure Supported account types in the application registration and update the sign-in endpoint: This is the correct way to enable multi-tenant authentication. By changing the supported account types in the Azure AD Application Registration settings and updating the sign-in endpoint, you can enable the app to accept users from other tenants.
Explanation of the Correct Solution
When you register an application in Azure AD, the default behavior is single-tenant. To allow users from another Azure AD tenant to authenticate, you need to:
Change Supported Account Types: In the application registration settings in Azure AD, you need to configure the application to support multiple tenants. This tells Azure AD that the app can accept users from any Azure AD tenant. This setting allows for “Accounts in this organizational directory only” (single tenant) or “Accounts in any organizational directory” (multi-tenant).
Update the sign-in Endpoint: The application's sign-in requests must use a multi-tenant endpoint (for example, https://login.microsoftonline.com/common or /organizations) rather than the contoso.com tenant-specific endpoint, so users from other tenants can authenticate.
Conclusion
The correct approach is to modify the application registration and the sign-in endpoint to enable multi-tenant authentication. This will allow the app to recognize and authenticate users from the fabrikam.com tenant.
Answer:
D. Configure Supported account types in the application registration and update the sign-in endpoint.
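For illustration, here is what the sign-in endpoint change looks like from the application side with MSAL for Python. The client ID is a placeholder; the before/after difference is the authority URL.

```python
# Single-tenant vs. multi-tenant sign-in endpoints with MSAL for Python.
import msal

# Before: only contoso.com users can authenticate.
single_tenant = msal.PublicClientApplication(
    client_id="<app1-client-id>",
    authority="https://login.microsoftonline.com/contoso.com",
)

# After: any Azure AD organizational account (including fabrikam.com) can
# authenticate, once the app registration allows multiple tenants.
multi_tenant = msal.PublicClientApplication(
    client_id="<app1-client-id>",
    authority="https://login.microsoftonline.com/organizations",
)

result = multi_tenant.acquire_token_interactive(scopes=["User.Read"])
print(result.get("id_token_claims", {}).get("tid"))  # the signing user's tenant
```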
HOTSPOT -
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Statements
Authorization to access Azure resources can be provided only to Azure Active Directory (Azure AD) users.
Identities stored in Azure Active Directory (Azure AD), third-party cloud services, and on-premises Active Directory can be used to access Azure resources.
Azure has built-in authentication and authorization services that provide secure access to Azure resources.
Understanding Azure Authentication and Authorization
Authentication: The process of verifying a user’s identity (e.g., by checking their username and password).
Authorization: The process of granting permissions to access specific resources based on the user’s identity and their role or group membership.
Azure Active Directory (Azure AD): Microsoft’s cloud-based identity and access management service. It’s the primary identity provider for Azure resources.
Analyzing the Statements
Statement 1: Authorization to access Azure resources can be provided only to Azure Active Directory (Azure AD) users.
No. While Azure AD is the primary identity provider, you can also use other identities. For example, you can use service principals (which are application identities) to grant access to resources. Also, you can use guest users from other Azure AD tenants.
Statement 2: Identities stored in Azure Active Directory (Azure AD), third-party cloud services, and on-premises Active Directory can be used to access Azure resources.
Yes. This statement accurately reflects the flexibility of Azure’s identity management.
Azure AD Identities: This is the most common scenario where your cloud-based users are managed directly in Azure AD.
Third-party cloud services: You can federate with third-party identity providers to provide single sign-on for your cloud services. This enables integration and collaboration with other cloud service providers.
On-premises Active Directory: Through Azure AD Connect or federation, you can integrate on-premises Active Directory users so they can sign into cloud-based resources.
Statement 3: Azure has built-in authentication and authorization services that provide secure access to Azure resources.
Yes. This statement is correct. Azure provides several services for authentication and authorization. Azure AD is the core service, providing a centralized identity provider. Other services, such as Azure Role-Based Access Control (RBAC) and Azure Active Directory B2C (for consumer-facing applications) are built-in services that enhance the security of Azure resources. These services provide secure and flexible methods to control access to resources.
Answers:
Statement 1: No
Statement 2: Yes
Statement 3: Yes
You have an application that is used by 6,000 users to validate their vacation requests. The application manages its own credential store.
Users must enter a username and password to access the application. The application does NOT support identity providers.
You plan to upgrade the application to use single sign-on (SSO) authentication by using an Azure Active Directory (Azure AD) application registration.
Which SSO method should you use?
A. header-based
B. SAML
C. password-based
D. OpenID Connect
Understanding the Situation
Current Setup: The application has its own user credential store and requires users to enter a username and password directly in the application. It does not support any external identity providers.
Goal: Upgrade the application to use single sign-on (SSO) using an Azure AD application registration. The application itself cannot be changed to support identity providers.
Constraint: The application does not directly support identity providers such as SAML or OIDC.
Analyzing SSO Methods
A. Header-based:
Mechanism: Header-based authentication typically involves passing authentication information in HTTP headers, commonly in conjunction with a reverse proxy or web application firewall. In this case, however, the application does not support consuming identity information at all, so this is not a valid option.
Suitability: Requires the application to be modified and will not integrate directly with Azure AD.
B. SAML:
Mechanism: SAML (Security Assertion Markup Language) is an XML-based protocol for exchanging authentication and authorization data between identity providers (like Azure AD) and applications.
Suitability: Requires the application to directly support SAML integration. In this situation, we are unable to modify the application.
C. Password-based:
Mechanism: Password-based SSO, in the context of Azure AD, involves a secure way to store and manage the credentials for an application that doesn’t natively support federation. When a user accesses the application through Azure AD, Azure AD securely provides the application with the stored credentials.
Suitability: This method can be used when the application does not support any identity providers and cannot be changed. It will not modify the application.
D. OpenID Connect (OIDC):
Mechanism: OIDC is an authentication protocol built on top of OAuth 2.0. It is a modern protocol used for authentication.
Suitability: Requires the application to be modified to directly support OIDC.
Evaluation
Application Constraint: The application cannot be modified to use SAML or OIDC. This rules out options B and D.
Azure AD Compatibility: Azure AD supports password-based SSO for applications that do not directly support federation. This mechanism involves storing the application’s username and password in Azure AD and securely providing it to the application when required.
No Code Changes: By using password-based SSO, there will be no code changes required to the application.
Conclusion
Given the requirements and the constraint that the application cannot be modified, password-based SSO is the only viable option. It allows users to log in to the application through Azure AD while the application does not need to be modified to support authentication against Azure AD directly.
Answer:
C. password-based
You are designing a point of sale (POS) solution that will be deployed across multiple locations and will use an Azure Databricks workspace in the Standard tier. The solution will include multiple apps deployed to the on-premises network of each location.
You need to configure the authentication method that will be used by the app to access the workspace. The solution must minimize the administrative effort associated with staff turnover and credential management.
What should you configure?
A. a managed identity
B. a service principal
C. a personal access token
Understanding the Situation
POS Solution: A point-of-sale solution deployed at multiple locations with applications accessing an Azure Databricks workspace.
Authentication Requirement: The application needs to authenticate to the Databricks workspace.
Administrative Goal: Minimize administrative effort, particularly related to staff turnover and credential management.
Analyzing Authentication Options
A. a managed identity:
Mechanism: Managed identities provide an automatically managed identity in Azure AD. This eliminates the need for you to store credentials in code or configuration files. Azure services are assigned an identity with defined permissions and are then allowed to access other resources.
Suitability: Managed identities are best used when the application is running in Azure services. In this scenario, the applications run on-premises, so this is not a suitable option.
B. a service principal:
Mechanism: A service principal is an identity for an application within Azure AD. It’s like a user, but it’s intended for applications rather than humans. You create a service principal in Azure AD and then configure it to access specific resources, for example the Databricks workspace. The app uses a client ID and client secret to authenticate to Azure AD.
Suitability: This would work for the application, but the secret needs to be managed, rotated and protected. This adds operational overhead and is not the optimal solution.
C. a personal access token:
Mechanism: A personal access token is a string that acts like a password for a user, granting access to specific resources. These tokens are typically linked to individual user accounts.
Suitability: Personal access tokens would be very difficult to manage with staff turnover and would create additional administrative overhead. Therefore this option is not appropriate.
Evaluation
Minimizing Administrative Effort:
Managed identities: Are the most secure and easiest to manage because they require no credential management. However, they are not available to applications running on-premises.
Service principals: Require managing credentials (a client ID and secret), which introduces some management overhead.
Personal access tokens: Require managing tokens for individual users, which increases overhead and complexity, especially with staff turnover.
On-premises applications: The applications are deployed on-premises, not in Azure, so managed identities are not an option.
Conclusion
Although managed identities are ideal when code runs in Azure, the applications here run on-premises, so a service principal is the best option. A service principal lets the applications authenticate without relying on user credentials or individual tokens; the secret simply must be managed carefully.
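As a rough sketch of what this looks like in app code (all IDs and the secret are placeholders, and in practice the secret should come from a vault rather than configuration), the app can acquire an Azure AD token for the Databricks workspace like this:

```python
from azure.identity import ClientSecretCredential

# Well-known Azure AD resource ID of the Azure Databricks service.
DATABRICKS_RESOURCE_ID = "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d"

# Placeholder values for the service principal created for the POS apps.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<service-principal-client-id>",
    client_secret="<service-principal-secret>",
)

# Acquire a bearer token scoped to the Databricks resource; the token is
# then sent on calls to https://<workspace-url>/api/2.0/... endpoints.
token = credential.get_token(f"{DATABRICKS_RESOURCE_ID}/.default")
headers = {"Authorization": f"Bearer {token.token}"}
```

Because every location shares the service principal rather than a person's identity, staff turnover never forces a credential change; only the scheduled secret rotation does.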
Answer:
B. a service principal
You are developing an app that will read activity logs for an Azure subscription by using Azure Functions.
You need to recommend an authentication solution for Azure Functions. The solution must minimize administrative effort.
What should you include in the recommendation?
A. an enterprise application in Azure AD
B. system-assigned managed identities
C. shared access signatures (SAS)
D. application registration in Azure AD
Understanding the Situation
App: An Azure Function app needs to read activity logs for an Azure subscription.
Authentication Goal: Minimize the administrative effort associated with managing credentials.
Analyzing Authentication Options
A. an enterprise application in Azure AD:
Mechanism: An enterprise application is a representation of an application within an Azure AD tenant. You register an application and grant it permissions to access Azure resources.
Suitability: This would work, but it involves managing credentials such as a client secret, so it is not the ideal solution.
B. system-assigned managed identities:
Mechanism: Managed identities provide an automatically managed identity in Azure AD, eliminating the need to store credentials in code or configuration files. When the function app is assigned a managed identity, a corresponding service principal is created automatically in your Azure AD tenant.
Suitability: Managed identities are designed to simplify the process of authenticating to Azure resources. They are the ideal solution for minimizing administrative effort and managing credentials. It is the optimal solution for this scenario.
C. shared access signatures (SAS):
Mechanism: SAS provides delegated access to Azure Storage resources.
Suitability: Not suitable here; activity logs are not a storage resource, and SAS cannot be used to authenticate to Azure AD-protected APIs.
D. application registration in Azure AD:
Mechanism: Application registration is the process of registering your application in an Azure AD tenant so that it can request tokens from Azure AD.
Suitability: Application registration is a prerequisite for several authentication methods, but on its own it does nothing to reduce administrative overhead; you would still have to manage a credential such as a client secret. It is a preparatory step, not the best answer.
Evaluation
Minimize Administrative Effort:
Managed Identities: Do not require you to manage secrets or credentials. Azure rotates the managed identity's credentials automatically, eliminating nearly all administrative overhead.
Enterprise applications: Require managing secrets and credentials, and therefore require additional administrative effort.
SAS: Not appropriate for the specific scenario.
Application registration: Only part of the process; it still requires a credential (such as a client secret) that must be managed.
Conclusion
System-assigned managed identities are the ideal solution because they eliminate the need to manage and secure credentials explicitly, greatly reducing administrative effort. The Azure function will automatically be assigned a service principal identity within Azure AD, and this identity can then be authorized to read from the activity logs.
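A minimal sketch of the function code, assuming the system-assigned identity has been granted a role such as Monitoring Reader on the subscription (the subscription ID and time window are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

# Inside the function app, DefaultAzureCredential resolves to the
# system-assigned managed identity; no secrets are stored anywhere.
credential = DefaultAzureCredential()
monitor = MonitorManagementClient(credential, "<subscription-id>")

# Query activity log entries for a time window (Azure Monitor filter syntax).
entries = monitor.activity_logs.list(
    filter="eventTimestamp ge '2024-01-01T00:00:00Z' "
           "and eventTimestamp le '2024-01-31T00:00:00Z'"
)
for entry in entries:
    print(entry.event_timestamp, entry.operation_name.value)
```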
Answer:
B. system-assigned managed identities
You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.
Your company has a line-of-business (LOB) application that was developed internally.
You need to implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) when users attempt to access the application from an unknown location.
Which two features should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Azure AD Privileged Identity Management (PIM)
B. Azure Application Gateway
C. Azure AD enterprise applications
D. Azure AD Identity Protection
E. Conditional Access policies
Understanding the Situation
Hybrid Environment: An Azure AD tenant synced with on-premises Active Directory.
LOB Application: An internally developed application needs to be integrated with Azure AD for SSO.
SSO Requirement: SAML-based single sign-on is needed.
MFA Enforcement: Multi-factor authentication (MFA) must be enforced when users access the application from unknown locations.
Analyzing Azure AD Features
A. Azure AD Privileged Identity Management (PIM):
Purpose: PIM is used to manage, control, and monitor access to important resources in your organization, focusing on just-in-time access for privileged roles rather than general application SSO.
Suitability: Not related to SAML SSO or location-based MFA enforcement, so it does not fulfill the requirements here.
B. Azure Application Gateway:
Purpose: Application Gateway is a web traffic load balancer with an optional web application firewall (WAF).
Suitability: It plays no role in SAML SSO or location-based MFA, so it is not relevant to the requirements.
C. Azure AD enterprise applications:
Purpose: Enterprise applications are used to represent applications within Azure AD that use the directory for authentication, such as applications that use SAML SSO.
Suitability: Crucial for implementing SAML SSO with Azure AD for a custom application. You need to create an Enterprise Application to define how users authenticate against the application and to configure SAML.
D. Azure AD Identity Protection:
Purpose: Identity Protection detects and responds to risky sign-in behaviors by using machine learning and other analytics.
Suitability: It can drive risk-based MFA policies, but it does not enforce MFA based on sign-in location.
E. Conditional Access policies:
Purpose: Conditional Access policies allow you to control access to cloud apps based on conditions such as location, device, and user risk.
Suitability: This is essential for the location-based MFA requirement. A policy can require MFA whenever the sign-in originates from a location that is not marked as trusted.
Evaluation
SAML SSO: Azure AD enterprise applications are necessary to configure the application with SAML SSO. This will allow the application to trust the authentication process from Azure AD.
Location-based MFA: Conditional Access policies are required to enforce MFA when a user tries to access the application from an unfamiliar location.
Conclusion
The two required features are:
Azure AD enterprise applications: This allows for SAML-based authentication for the application.
Conditional Access policies: To enforce MFA based on sign-in location.
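To make the Conditional Access half concrete, here is a hedged sketch of the policy object that could be created through Microsoft Graph (the app ID and bearer token are placeholders, and the call needs the Policy.ReadWrite.ConditionalAccess permission):

```python
import requests

# Require MFA for the LOB app whenever the sign-in comes from outside
# the named locations marked as trusted in Azure AD.
policy = {
    "displayName": "Require MFA for LOB app from unknown locations",
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["<lob-app-client-id>"]},
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["AllTrusted"],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": "Bearer <graph-access-token>"},
    json=policy,
)
resp.raise_for_status()
```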
Answer:
C. Azure AD enterprise applications
E. Conditional Access policies
Your on-premises network contains an Active Directory Domain Services (AD DS) domain. The domain contains a server named Server1. Server1 contains an app named App1 that uses AD DS authentication. Remote users access App1 by using a VPN connection to the on-premises network.
You have an Azure AD tenant that syncs with the AD DS domain by using Azure AD Connect.
You need to ensure that the remote users can access App1 without using a VPN. The solution must meet the following requirements:
- Ensure that the users authenticate by using Azure Multi-Factor Authentication (MFA).
- Minimize administrative effort.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
— —
Answer Area
In Azure AD:
A managed identity
An access package
An app registration
An enterprise application
On-premises:
A server that runs Windows Server and has the Azure AD Application Proxy connector installed
A server that runs Windows Server and has the on-premises data gateway (standard mode) installed
A server that runs Windows Server and has the Web Application Proxy role service installed
— —
Understanding the Situation
On-premises App: App1 is an on-premises application that uses AD DS for authentication.
Current Access: Remote users connect via VPN to access App1.
Goal: Enable remote users to access App1 without a VPN.
Requirements:
Use Azure MFA for authentication.
Minimize administrative effort.
Analyzing Azure AD Components
A managed identity: Managed identities authenticate Azure resources to other Azure services. They are not used to publish or authenticate on-premises applications, so this is not a suitable option.
An access package: Access packages are used to govern access to resources within an organization. This is a good tool to provide user access, but not an appropriate mechanism to publish on-premises apps to the internet.
An app registration: An app registration enables an application to authenticate with Azure AD. It is a prerequisite for a number of solutions, but it is not the correct answer here.
An enterprise application: An enterprise application represents the application within Azure AD and acts as the point of contact when integrating with an external service. It is required for Azure AD Application Proxy. This is the correct solution.
Analyzing On-Premises Components
A server that runs Windows Server and has the Azure AD Application Proxy connector installed: This is the correct on-premises component. The Azure AD Application Proxy connector acts as a reverse proxy, securely publishing your on-premises applications to the internet, enabling external users to access them without needing a VPN. This component facilitates the secure connection with the enterprise application, allowing for SSO.
A server that runs Windows Server and has the on-premises data gateway (standard mode) installed: The on-premises data gateway is used for connecting on-premises data sources to Azure services. It is not used for publishing applications to the internet, therefore it is not the right solution.
A server that runs Windows Server and has the Web Application Proxy role service installed: Web Application Proxy is the legacy, AD FS-based approach and requires more infrastructure to maintain. Azure AD Application Proxy is the more appropriate, lower-effort choice.
Evaluation
VPN Removal: Azure AD Application Proxy allows remote users to access on-premises applications without the need for a VPN.
Azure MFA: The Azure AD Application Proxy integrates seamlessly with Azure AD’s authentication services, including MFA.
AD Authentication: The Azure AD Application Proxy connector can pass the user's identity to the on-premises application (for example, by using Kerberos constrained delegation), so App1 continues to authenticate against AD DS.
Minimizing Admin Effort: This solution uses managed services, reducing the overall administrative overhead compared to managing complex VPN connections.
Conclusion
The correct components for this solution are:
In Azure AD: An enterprise application.
On-premises: A server that runs Windows Server and has the Azure AD Application Proxy connector installed.
Answer:
In Azure AD: An enterprise application
On-premises: A server that runs Windows Server and has the Azure AD Application Proxy connector installed
You need to implement the Azure RBAC role assignments for the Network Contributor role. The solution must meet the authentication and authorization requirements.
What is the minimum number of assignments that you must use?
A. 1
B. 2
C. 5
D. 10
E. 15
Understanding the Requirements
Azure RBAC: You need to use Azure Role-Based Access Control (RBAC).
Network Contributor Role: You specifically need to assign the built-in Network Contributor role.
Goal: You need to determine the minimum number of role assignments required.
Key Concepts
Role Definition: A role (like Network Contributor) defines the set of permissions.
Role Assignment: A role assignment links a role definition to a specific user, group, or service principal at a specific scope (like a resource group, subscription, or management group).
Minimum Number of Assignments
The key to answering this question is that a single role assignment can grant access to multiple users if the assignment is done to a security group. Therefore:
One Assignment: You can create a security group in Azure AD, assign the Network Contributor role to that group at the appropriate scope (e.g., a specific resource group or the subscription), and then add all users who need that level of access to this security group.
Therefore, you can implement the requirements with only 1 role assignment.
Why not more?
You could assign the Network Contributor role to each user individually, but that would not be the minimum.
Creating many individual role assignments is considered poor practice compared with using groups because it complicates management.
You could create multiple groups, but nothing in the requirements calls for that.
Conclusion
The minimum number of role assignments required to provide the Network Contributor role to multiple users is 1, using a security group.
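As an illustration (the subscription and group IDs are placeholders; the role-definition GUID shown is the built-in Network Contributor ID), the single assignment could be created with the Azure SDK like this:

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

SUB = "<subscription-id>"
# Built-in role definition ID for Network Contributor.
NETWORK_CONTRIBUTOR = "4d97b98b-1d4f-4787-a291-c67834d212e7"

client = AuthorizationManagementClient(DefaultAzureCredential(), SUB)
scope = f"/subscriptions/{SUB}"

# One assignment: the role is granted to a security group, so adding or
# removing users only changes group membership, never role assignments.
client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=(
            f"{scope}/providers/Microsoft.Authorization/"
            f"roleDefinitions/{NETWORK_CONTRIBUTOR}"
        ),
        principal_id="<security-group-object-id>",
        principal_type="Group",
    ),
)
```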
Answer:
A. 1
You have an Azure subscription that contains an Azure Kubernetes Service (AKS) instance named AKS1. AKS1 hosts microservice-based APIs that are configured to listen on non-default HTTP ports.
You plan to deploy a Standard tier Azure API Management instance named APIM1 that will make the APIs available to external users.
You need to ensure that the AKS1 APIs are accessible to APIM1. The solution must meet the following requirements:
- Implement MTLS authentication between APIM1 and AKS1.
- Minimize development effort.
- Minimize costs.
What should you do?
A. Implement an external load balancer on AKS1.
B. Redeploy APIM1 to the virtual network that contains AKS1.
C. Implement an ExternalName service on AKS1.
D. Deploy an ingress controller to AKS1.
Understanding the Situation
AKS1: An AKS cluster with microservice APIs on non-default HTTP ports.
APIM1: A Standard tier API Management instance that needs to expose the AKS1 APIs to external users.
Security: Mutual TLS (MTLS) authentication is required between APIM1 and AKS1.
Goals:
Minimize development effort.
Minimize costs.
Analyzing Solution Options
A. Implement an external load balancer on AKS1:
Mechanism: An external load balancer would expose the AKS services directly to the internet through a public IP address.
Suitability: A load balancer does not address the MTLS requirement, and exposing the microservices directly to the internet when API Management is supposed to front them is neither secure nor necessary. It also adds cost.
B. Redeploy APIM1 to the virtual network that contains AKS1:
Mechanism: This places the API management service inside the same virtual network as the AKS cluster, allowing them to communicate via the private IP.
Suitability: This would solve the networking path but does not itself implement MTLS. More importantly, the Standard tier of API Management does not support virtual network injection, so this would mean upgrading to the Premium tier, a costly and time-consuming change that conflicts with the goals of minimal effort and cost.
C. Implement an ExternalName service on AKS1:
Mechanism: An ExternalName service in Kubernetes maps a DNS alias to an external hostname, allowing you to direct internal traffic to the desired endpoint.
Suitability: This is the correct approach for exposing the AKS-hosted APIs to API Management. Combined with MTLS configured in APIM and on the API itself, it fulfills all the requirements while keeping cost and development effort minimal.
D. Deploy an ingress controller to AKS1:
Mechanism: An ingress controller manages external access to services inside the Kubernetes cluster, often using layer 7 (HTTP) rules.
Suitability: Ingress controllers can be part of exposing services, but they do not by themselves address the MTLS requirement toward API Management and add deployment effort, so this approach does not fulfill the requirements.
Evaluation
MTLS Authentication: MTLS requires certificates on both sides: the AKS-hosted API presents a server certificate and APIM presents a client certificate. With an ExternalName service providing the routing, this can be configured without changing the AKS service architecture, keeping configuration minimal.
Minimize Development Effort: An ExternalName service requires minimal changes in AKS and minimal configuration in API Management.
Minimize Costs: Redeploying API Management would incur additional costs, as would a load balancer. An ExternalName service costs effectively nothing.
Conclusion
The best approach is to implement an ExternalName service on AKS1 to expose the APIs to API Management, and then configure the MTLS connection on both the service and API Management.
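A minimal sketch of such an ExternalName service, created here with the Kubernetes Python client (the namespace and DNS names are placeholders; the same object is usually declared in a YAML manifest):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

# An ExternalName service is a pure DNS alias: it publishes a CNAME record
# inside the cluster and does not proxy traffic or terminate TLS itself.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="apim-backend-alias", namespace="default"),
    spec=client.V1ServiceSpec(
        type="ExternalName",
        external_name="api-backend.internal.contoso.com",  # placeholder
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```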
Answer:
C. Implement an ExternalName service on AKS1.
You have an Azure subscription named Sub1 that is linked to an Azure AD tenant named contoso.com.
You plan to implement two ASP.NET Core apps named App1 and App2 that will be deployed to 100 virtual machines in Sub1. Users will sign in to App1 and App2 by using their contoso.com credentials.
App1 requires read permissions to access the calendar of the signed-in user. App2 requires write permissions to access the calendar of the signed-in user.
You need to recommend an authentication and authorization solution for the apps. The solution must meet the following requirements:
- Use the principle of least privilege.
- Minimize administrative effort.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Authentication:
Application registration in Azure AD
A system-assigned managed identity
A user-assigned managed identity
Authorization:
Application permissions
Azure role-based access control (Azure RBAC)
Delegated permissions
Understanding the Situation
Apps: Two ASP.NET Core apps (App1, App2) deployed to virtual machines.
Authentication: Users will sign in using their contoso.com (Azure AD) credentials.
Authorization:
App1 needs read access to the user’s calendar.
App2 needs write access to the user’s calendar.
Goals:
Principle of least privilege (granting only necessary permissions).
Minimize administrative effort.
Analyzing Authentication Options
Application registration in Azure AD:
Mechanism: Creating an app registration is a prerequisite for an application to authenticate with Azure AD; it gives the application an identity in the tenant.
Suitability: Required in every scenario here, and therefore not specific enough to be the correct answer.
A system-assigned managed identity:
Mechanism: A system-assigned managed identity is an identity that Azure creates automatically for an Azure resource; it is bound to that resource's lifecycle.
Suitability: Configuring the virtual machines with system-assigned managed identities simplifies credential management and reduces administrative overhead. This is the ideal choice for this part.
A user-assigned managed identity:
Mechanism: A user-assigned managed identity is created as a standalone resource that you can then assign to one or more Azure resources.
Suitability: This would also work, but a system-assigned managed identity is simpler here because the identity's lifecycle matches the resource's.
Analyzing Authorization Options
Application permissions:
Mechanism: Application permissions grant an application access to an API in its own right, without a signed-in user; the app does not act on behalf of anyone.
Suitability: This is not the correct approach in this case as the application needs to access the user’s calendar, not all calendars in the organization.
Azure role-based access control (Azure RBAC):
Mechanism: RBAC manages access to Azure resources (for example, VMs, networks, and storage) and is not used to authorize access to Microsoft Graph data such as calendars.
Suitability: Not suitable for managing access to the user’s calendar information.
Delegated permissions:
Mechanism: Delegated permissions grant an application access to specific resources on behalf of a signed-in user. These can be set on the application registration or the enterprise application.
Suitability: This is the appropriate authorization type for this situation. App1 will require a delegated permission to “read the user’s calendar” and App2 will require a delegated permission to “write to the user’s calendar”.
Evaluation
Authentication: A system-assigned managed identity on each virtual machine simplifies authentication and reduces administrative overhead; each virtual machine receives its own identity.
Authorization: Using delegated permissions will enable the application to act on behalf of the user, allowing access to their calendar. This is ideal for implementing the principle of least privilege.
Conclusion
The ideal combination for the given requirements is:
Authentication: A system-assigned managed identity
Authorization: Delegated permissions
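To make the delegated-permission half concrete, here is a hedged MSAL sketch (all IDs, secrets, and URLs are placeholders) of App1 exchanging a user's authorization code for a token carrying the Calendars.Read delegated scope; App2 would request Calendars.ReadWrite instead:

```python
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app1-client-id>",
    authority="https://login.microsoftonline.com/contoso.com",
    client_credential="<client-secret-or-certificate>",
)

# After the user signs in with their contoso.com account, the redirect
# delivers an authorization code that is exchanged for a delegated token.
result = app.acquire_token_by_authorization_code(
    code="<authorization-code-from-redirect>",
    scopes=["Calendars.Read"],  # App2 would use Calendars.ReadWrite
    redirect_uri="https://app1.contoso.com/signin-callback",
)

# Error handling elided; on success the token acts only on behalf of the
# signed-in user, which is what keeps the design at least privilege.
access_token = result["access_token"]
```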
Answer:
Authentication: A system-assigned managed identity
Authorization: Delegated permissions
HOTSPOT -
You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.
App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.
You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
The users can connect to App1 without being prompted for authentication:
An Azure AD app registration
An Azure AD managed identity
Azure AD Application Proxy
The users can access App1 only from company-owned computers:
A Conditional Access policy
An Azure AD administrative unit
Azure Application Gateway
Azure Blueprints
Azure Policy
Understanding the Situation
App1: An Azure web app using Azure AD authentication.
Access: Users need to access App1 from the internet.
Requirements:
Seamless Access: Users should not be prompted for authentication (SSO).
Device Restriction: Only company-owned (Azure AD joined) Windows 10 devices should be allowed to access App1.
Analyzing Options for Seamless Access
An Azure AD app registration:
Mechanism: An app registration is the first step when integrating an application with Azure AD for authentication.
Suitability: Necessary for enabling Azure AD authentication. On its own it does not produce silent sign-in, but combined with Azure AD-joined devices it is the option that enables it.
An Azure AD managed identity:
Mechanism: Managed identities are used by Azure resources to authenticate to other Azure services, not the user accessing the web app directly.
Suitability: Not applicable for the situation.
Azure AD Application Proxy:
Mechanism: Primarily designed to publish on-premises web applications to the internet.
Suitability: Not required for applications already hosted in Azure.
Analyzing Options for Device Restriction
A Conditional Access policy:
Mechanism: Conditional Access policies allow you to define access rules based on various conditions including device state.
Suitability: This is the correct tool to enforce device-based access restrictions, for example allowing access only from Azure AD-joined devices.
An Azure AD administrative unit:
Mechanism: Administrative units are used to scope permissions within an Azure AD tenant.
Suitability: Not applicable to device-based restrictions.
Azure Application Gateway:
Mechanism: Provides load balancing and web application firewall features.
Suitability: Not designed for controlling access based on device state.
Azure Blueprints:
Mechanism: Blueprints are used to deploy and update collections of Azure resources.
Suitability: Not designed to provide device based access restrictions.
Azure Policy:
Mechanism: Used to enforce organizational standards and assess compliance.
Suitability: Not designed to control access based on the device.
Evaluation
Seamless Authentication: When a user on an Azure AD-joined computer connects to a resource that uses Azure AD for authentication, they are signed in automatically with their existing Windows session, providing single sign-on with no extra work. The app registration is the prerequisite that integrates App1 with Azure AD so this flow can occur.
Device Restriction: Azure AD Conditional Access policies are designed for exactly this scenario; they can restrict application access to specific device states, such as Azure AD-joined devices only.
Conclusion
The correct components for this solution are:
The users can connect to App1 without being prompted for authentication: An Azure AD app registration, in conjunction with Azure AD joined devices, will automatically authenticate users.
The users can access App1 only from company-owned computers: A Conditional Access policy.
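For the device restriction, the policy can use a device filter. A hedged sketch of the relevant Microsoft Graph policy body (the app ID is a placeholder; the object would be posted to the same conditionalAccess/policies endpoint used for any Conditional Access policy):

```python
# Block every sign-in to App1 except those coming from Azure AD joined
# (company-owned) devices: the filter excludes joined devices from the
# policy, and the grant control blocks everything the policy still covers.
policy = {
    "displayName": "App1 - company-owned devices only",
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["<app1-client-id>"]},
        "devices": {
            "deviceFilter": {
                "mode": "exclude",
                "rule": 'device.trustType -eq "AzureAD"',
            }
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```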
Answer:
The users can connect to App1 without being prompted for authentication: An Azure AD app registration
The users can access App1 only from company-owned computers: A Conditional Access policy
HOTSPOT -
You have several Azure App Service web apps that use Azure Key Vault to store data encryption keys.
Several departments have the following requests to support the web app:
Department: Security
Request:
* Review the membership of administrative roles and require users to provide a justification for continued membership.
* Get alerts about changes in administrator assignments.
* See a history of administrator activation, including which changes administrators made to Azure resources.
Department: Development
Request:
* Enable the applications to access Key Vault and retrieve keys for use in code.
Department: Quality Assurance
Request:
* Receive temporary administrator access to create and configure additional web apps in the test environment.
Which service should you recommend for each department’s request? To answer, configure the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Security:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Development:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Quality Assurance:
Azure AD Privileged Identity Management
Azure Managed Identity
Azure AD Connect
Azure AD Identity Protection
Understanding the Departments and Their Requirements
Security:
Needs to review administrator role memberships.
Wants alerts on administrator changes.
Needs a history of administrator actions.
Development:
Needs to enable applications to access Key Vault to retrieve keys.
Quality Assurance (QA):
Needs temporary admin access to create and configure resources.
Analyzing Azure Services
Azure AD Privileged Identity Management (PIM):
Purpose: Manages, controls, and monitors access to important resources. Allows for just-in-time (JIT) access to privileged roles, can provide alerts for role changes, requires justification for continued roles, and has an audit history.
Suitability: Matches the Security and Quality Assurance requirements.
Azure Managed Identity:
Purpose: Provides an identity for Azure services to use when authenticating to other Azure services. It is used by applications to securely retrieve keys from Key Vault.
Suitability: Matches the Development requirement.
Azure AD Connect:
Purpose: Synchronizes on-premises identities with Azure AD.
Suitability: Not directly related to any of these requests.
Azure AD Identity Protection:
Purpose: Detects and responds to risky sign-in behaviors.
Suitability: Not directly related to any of these requests.
Matching Departments to Services
Security: The requirement for role membership review, alerts on administrator changes, and history of admin actions all point to the use of Azure AD Privileged Identity Management (PIM).
Development: The need for the application to securely retrieve keys from Key Vault is best met with Azure Managed Identity, removing the requirement to store secrets or connection strings in the application.
Quality Assurance: The requirement for temporary access to create and configure resources is best met using Azure AD Privileged Identity Management (PIM), as it will provide just-in-time access for the required purpose.
Conclusion
The correct service recommendations are:
Security: Azure AD Privileged Identity Management
Development: Azure Managed Identity
Quality Assurance: Azure AD Privileged Identity Management
Answer:
Security: Azure AD Privileged Identity Management
Development: Azure Managed Identity
Quality Assurance: Azure AD Privileged Identity Management
HOTSPOT -
You are designing a software as a service (SaaS) application that will enable Azure Active Directory (Azure AD) users to create and publish online surveys. The SaaS application will have a front-end web app and a back-end web API. The web app will rely on the web API to handle updates to customer surveys.
You need to design an authorization flow for the SaaS application. The solution must meet the following requirements:
✑ To access the back-end web API, the web app must authenticate by using OAuth 2 bearer tokens.
✑ The web app must authenticate by using the identities of individual users.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
The access tokens will be generated by:
Azure AD
A web app
A web API
Authorization decisions will be performed by:
Azure AD
A web app
A web API
Requirements:
SaaS Application: A SaaS application with a web app and a web API.
OAuth 2.0 Bearer Tokens: Web app must use OAuth 2.0 bearer tokens to access the web API.
User Authentication: Web app must authenticate on behalf of individual users.
Authorization Decisions: Authorization must be performed based on the user’s identity.
Answer Area:
The access tokens will be generated by:
Azure AD
Authorization decisions will be performed by:
A web API
Explanation:
The access tokens will be generated by:
Azure AD:
Why it’s correct: In an OAuth 2.0 flow, the authorization server (in this case Azure AD) issues access tokens to clients (the web app) after successful authentication. The access token is then used to access the resource API. This is the standard flow for Azure AD authentication.
Why not others:
A web app: The web app is the client; it cannot issue tokens. Instead, it requests them from Azure AD.
A web API: The web API is the protected resource; it consumes and validates tokens, it does not issue them.
Authorization decisions will be performed by:
A web API:
Why it’s correct: The resource (the web API) is responsible for validating the access token and making authorization decisions. It will check if the client (web app) has the correct permissions to access specific data or operations based on claims in the token.
Why not others:
Azure AD: Azure AD authenticates the user and issues the token (including the granted scopes), but the request-time decision about what the caller may do is made by the API.
A web app: The web app consumes the API; it does not make the API's authorization decisions.
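As a rough illustration of the API's side (using the PyJWT library; the tenant ID, audience, and the Surveys.ReadWrite scope name are all placeholders):

```python
import jwt
from jwt import PyJWKClient

TENANT_ID = "<tenant-id>"
AUDIENCE = "api://<web-api-app-id>"  # placeholder App ID URI

# Azure AD publishes its token-signing keys at a well-known JWKS endpoint.
jwks_client = PyJWKClient(
    f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
)

def authorize_request(bearer_token: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(bearer_token)
    # decode() rejects the token if the signature, issuer, audience, or
    # expiry is invalid; this is the validation step.
    claims = jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=f"https://login.microsoftonline.com/{TENANT_ID}/v2.0",
    )
    # The authorization decision belongs to the API: check the delegated
    # scopes granted to the web app on behalf of the signed-in user.
    if "Surveys.ReadWrite" not in claims.get("scp", "").split():
        raise PermissionError("caller lacks the required scope")
    return claims
```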
Important Notes for the AZ-304 Exam:
OAuth 2.0: Be very familiar with the OAuth 2.0 authorization flow and the roles of the client application, the authorization server, and the resource server (the API).
Azure AD as Authorization Server: Understand that Azure AD is used as an authentication and authorization server in Azure-based applications.
Access Tokens: Understand what access tokens are, how they are used, and that they are generated by the auth server.
Authorization Decisions: Know that APIs are responsible for authorizing access based on the identity that is included in the token.
Security Best Practices: Secure your applications, and do not embed secrets.
Exam Focus: Always look for the components that match the specific authorization workflow, and know the specific purpose of each.
You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
✑ If the manager does not verify an access permission, automatically revoke that permission.
✑ Minimize development effort.
What should you recommend?
A. In Azure Active Directory (Azure AD), create an access review of Application1.
B. Create an Azure Automation runbook that runs the Get-AzRoleAssignment cmdlet.
C. In Azure Active Directory (Azure AD) Privileged Identity Management, create a custom role assignment for the Application1 resources.
D. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet.
Understanding the Situation
Application1: A custom application in Azure with RBAC permissions assigned to Fabrikam developers.
Goal: Regularly verify whether Fabrikam developers still need access to Application1.
Requirements:
Monthly email to the developers’ manager listing access permissions.
Automatic revocation if access is not verified by the manager.
Minimize development effort.
Analyzing the Options
A. In Azure Active Directory (Azure AD), create an access review of Application1.
Mechanism: Access reviews are a feature in Azure AD that allow you to regularly review user access to resources. You can configure them to send out review requests to users or managers. These reviews can also be set up to automatically revoke access if not confirmed.
Suitability: This solution directly addresses the requirements, is easy to set up, and minimizes development effort. This is the best solution.
B. Create an Azure Automation runbook that runs the Get-AzRoleAssignment cmdlet.
Mechanism: Get-AzRoleAssignment is a PowerShell cmdlet that retrieves role assignments. An Automation runbook could use it to list the assignments, but the cmdlet provides no way to notify a manager or revoke access, so that logic would have to be built, incurring higher development overhead.
Suitability: Requires a lot of custom development to achieve the requirements.
C. In Azure Active Directory (Azure AD) Privileged Identity Management, create a custom role assignment for the Application1 resources.
Mechanism: Privileged Identity Management (PIM) controls just-in-time access to privileged roles. Creating a custom role assignment does nothing to review existing permissions or notify a manager.
Suitability: Not appropriate for this scenario; it is not designed for periodic reviews of existing RBAC assignments.
D. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet.
Mechanism: Get-AzureADUserAppRoleAssignment is a PowerShell cmdlet that retrieves the application role assignments of users. A runbook could use it to list users and their roles, but it does not support access reviews or manager notification, so that logic would have to be developed separately.
Suitability: This requires significant additional development to notify the manager and to revoke access, and therefore does not minimize development effort.
Evaluation
Access Review with Manager Approval: Azure AD access reviews support manager-driven reviews, automatic reminders to the manager, and automatic revocation when a review is not completed in time.
Automation: Access reviews provide automatic reminders and automatic revocation, minimizing development overhead.
Least Privilege: Access reviews and automatic revocation will ensure users do not have access for longer than required, enforcing the principle of least privilege.
Conclusion
Creating an access review of Application1 in Azure AD is the most suitable solution. It directly meets the access review, notification, automatic revocation, and minimal development effort requirements.
Answer:
A. In Azure Active Directory (Azure AD), create an access review of Application1.
Your company has the infrastructure shown in the following table.
Location: Azure
Resource:
Azure subscription named Subscription1
20 Azure web apps
Location: On-premises datacenter
Resource:
Active Directory domain
Server running Azure AD Connect
Linux computer named Server1
The on-premises Active Directory domain syncs with Azure Active Directory (Azure AD).
Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
You plan to migrate Server1 to a virtual machine in Subscription1.
A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.
You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.
What should you include in the recommendation?
A. Azure AD Application Proxy
B. the Active Directory Domain Services role on a virtual machine
C. an Azure VPN gateway
D. Azure AD Domain Services (Azure AD DS)
Understanding the Situation
App1: An application running on-premises, that uses LDAP queries to authenticate with on-premises Active Directory Domain Services (AD DS).
Migration: Server1 (and thus App1) is being moved to an Azure VM in Subscription1.
Security Policy: Azure resources in Subscription1 must not access the on-premises network.
Goal: App1 must continue to function (i.e., authenticate users) after the migration without violating the security policy.
Analyzing the Options
A. Azure AD Application Proxy:
Mechanism: Azure AD Application Proxy is designed to publish on-premises web applications to the internet so users can access them remotely without using a VPN. It is also useful for providing SSO for web applications.
Suitability: Application Proxy is not suited for the situation, as the goal is not to publish a web application.
B. the Active Directory Domain Services role on a virtual machine:
Mechanism: Deploying a Windows Server VM in Azure with the Active Directory Domain Services (AD DS) role allows you to create a domain controller in the cloud, and migrate your AD services into the cloud.
Suitability: Deploying domain controllers on Azure VMs would either require connectivity to the on-premises domain, which the security policy prohibits, or create a brand-new, unsynchronized directory, while also adding the overhead of managing the domain controllers. It does not solve the problem.
C. an Azure VPN gateway:
Mechanism: A VPN gateway connects your on-premises network to Azure, creating a secure bridge between them.
Suitability: This option would violate the security policy by connecting the Azure network and on-premises network.
D. Azure AD Domain Services (Azure AD DS):
Mechanism: Azure AD DS provides a managed domain service in Azure. The managed domain is populated from Azure AD (which in turn syncs from the on-premises directory) and supports traditional protocols such as LDAP, Kerberos, and NTLM, without any connection to the on-premises network.
Suitability: This is the correct solution; it provides the same LDAP-based authentication inside Azure without connecting to the on-premises Active Directory.
Evaluation
Security Policy: The security policy prohibits network access to the on-premises environment, so a VPN cannot be used.
App1 Functionality: App1 requires LDAP to query the user database for authentication purposes. Azure AD DS can provide this functionality.
Minimal Modification: Azure AD DS delivers the directory functionality App1 expects without requiring you to deploy and manage your own domain controllers.
Conclusion
The recommended solution is to use Azure AD Domain Services (Azure AD DS). It provides the required AD DS functionality in Azure, which is used for LDAP queries. This ensures that App1 can function in Azure and remains compliant with the security policy.
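A hedged sketch of the kind of LDAP lookup App1 could keep issuing after the move, pointed at the managed domain's secure LDAP endpoint (all host names, account names, and credentials are placeholders; ldap3 is one common Python LDAP library):

```python
from ldap3 import ALL, NTLM, Connection, Server

# Azure AD DS exposes secure LDAP on port 636 once it is enabled on the
# managed domain; the host name below is a placeholder.
server = Server("ldaps://ldaps.aadds.contoso.com", port=636,
                use_ssl=True, get_info=ALL)

conn = Connection(
    server,
    user="CONTOSO\\svc-app1",  # placeholder service account
    password="<service-account-password>",
    authentication=NTLM,
    auto_bind=True,
)

# The same style of identity lookup App1 performed against on-premises
# AD DS, now answered entirely inside Azure.
conn.search(
    search_base="dc=contoso,dc=com",
    search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
    attributes=["displayName", "mail"],
)
print(conn.entries)
```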
Answer:
D. Azure AD Domain Services (Azure AD DS)
You are designing an app that will be hosted on Azure virtual machines that run Ubuntu. The app will use a third-party email service to send email messages to users. The third-party email service requires that the app authenticate by using an API key.
You need to recommend an Azure Key Vault solution for storing and accessing the API key. The solution must minimize administrative effort.
What should you recommend using to store and access the key? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
— —
Answer Area
Storage:
Certificate
Key
Secret
Access:
An API token
A managed service identity (MSI)
A service principal
— —
Understanding the Situation
App: An application on Ubuntu VMs needs to use an API key for a third-party email service.
Security: The API key needs to be stored securely.
Goal: Minimize administrative effort for key storage and access.
Analyzing Key Vault Storage Options
Certificate:
Purpose: Stores digital certificates used for encryption and authentication. Certificates are typically used for encryption or TLS/SSL.
Suitability: Not the correct data type for storing an API key (a string).
Key:
Purpose: Stores cryptographic keys used for encryption and signing.
Suitability: Not the correct data type for storing an API key (a string).
Secret:
Purpose: Stores arbitrary strings securely. It is the correct data type for storing the API key.
Suitability: The most appropriate Key Vault item type for storing the API key.
Analyzing Key Vault Access Options
An API token:
Purpose: A generic authentication string; not a mechanism for authenticating to Key Vault.
Suitability: Does not address the need to authenticate securely to Key Vault; a token is just another secret that would have to be stored and managed.
A managed service identity (MSI):
Purpose: Managed identities provide an automatically managed identity in Azure AD. This eliminates the need for you to store credentials in code or configuration files.
Suitability: Ideal for this scenario, because it provides a secure way for Azure resources to access other Azure resources without managing credentials, and therefore minimizes the administrative overhead.
A service principal:
Purpose: A service principal is an application identity in Azure AD that can be granted access to Azure resources.
Suitability: It could be granted access to Key Vault, but its client ID and secret must themselves be stored and rotated, which adds administrative overhead compared with a managed identity.
Evaluation
API Key Storage: Using a Key Vault secret will store the key string securely.
Access: Using a managed service identity (MSI) will allow the application running on the virtual machine to securely authenticate to Key Vault without storing credentials within the virtual machine configuration, and will minimize administrative effort.
Conclusion
The correct options are:
Storage: Secret
Access: A managed service identity (MSI)
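A minimal sketch of the app code on the Ubuntu VM (the vault URL and secret name are placeholders; the identity must first be granted secret-read access to the vault):

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# On the VM, the managed identity is obtained from the Azure Instance
# Metadata Service; no credentials are stored on the machine.
credential = ManagedIdentityCredential()
client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=credential,
)

# Retrieve the stored API key and hand it to the email-service client.
api_key = client.get_secret("EmailServiceApiKey").value
```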
Answer:
Storage: Secret
Access: A managed service identity (MSI)