test1 Flashcards

https://www.dumpsbase.com/freedumps/?s=az+304

1
Q

Your network contains an on-premises Active Directory domain.

The domain contains the Hyper-V clusters shown in the following table.
| Name | Number of nodes | Number of virtual machines running on cluster |
|---|---|---|
| Cluster1 | 4 | 20 |
| Cluster2 | 3 | 15 |

You plan to implement Azure Site Recovery to protect six virtual machines running on Cluster1 and three virtual machines running on Cluster2. Virtual machines are running on all Cluster1 and Cluster2 nodes.

You need to identify the minimum number of Azure Site Recovery Providers that must be installed on premises.

How many Providers should you identify?

1
7
9
16

A

Understanding Azure Site Recovery Providers:

The Azure Site Recovery (ASR) Provider is a software component that must be installed on each Hyper-V host that you want to protect with ASR.

The Provider communicates with the Azure Recovery Services Vault and facilitates replication and failover.

Requirements:

On-Premises Hyper-V: There are two Hyper-V clusters (Cluster1 and Cluster2).

Protection Scope: Six VMs from Cluster1 and three VMs from Cluster2 need to be protected by Azure Site Recovery.

Minimum Providers: Identify the minimum number of ASR Providers needed.

Analysis:

Cluster1: Has 4 nodes.

Cluster2: Has 3 nodes.

Provider per Host: One ASR Provider is needed on each Hyper-V host that will be replicated.

Protected VMs: Six VMs from Cluster1 and three from Cluster2 need protection.

VMs are running on all nodes: All VMs are running across all nodes, which means that we need an ASR Provider installed on all nodes.

Minimum Number of Providers:

Cluster1 requires a provider on each host: 4 providers

Cluster2 requires a provider on each host: 3 providers

Total: 4 + 3 = 7

Correct Answer:

7

Explanation:

You must install an Azure Site Recovery Provider on every Hyper-V host that contains virtual machines that you want to protect with ASR. Because protected VMs run on all nodes in both clusters, you must install a Provider on every Hyper-V host: 4 Providers for Cluster1 and 3 Providers for Cluster2, for a total of 7 Providers.

Why not others:

1: Not enough, because there are 7 Hyper-V hosts in total that run protected VMs.

9: Incorrect because it does not match the total number of Hyper-V hosts (7).

16: Incorrect because it does not match the total number of Hyper-V hosts (7).

Important Notes for the AZ-304 Exam:

Azure Site Recovery: Understand the architecture, requirements, and components of ASR.

ASR Provider: Know that the ASR Provider must be installed on each Hyper-V host to be protected.

Minimum Requirements: The exam often focuses on minimum requirements, not the total capacity or other metrics.

Hyper-V Integration: Understand how ASR integrates with Hyper-V for replication.

Exam Focus: Read the question carefully and identify the specific information related to required components.

2
Q

You need to recommend a scaling strategy for the web tier of WebApp1. The solution must minimize costs.

What should you recommend?

Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours.
Configure the Scale Up settings for a web app.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold.
Configure the Scale Out settings for a web app.

A

Requirements:

Web Tier Scaling: A strategy for scaling the web tier of WebApp1.

Minimize Cost: The solution must focus on minimizing cost.

Recommended Solution:

Configure the Scale Out settings for a web app.

Explanation:

Configure the Scale Out settings for a web app:

Why it’s the best fit:

Cost Minimization: Web apps (App Services) have a pay-as-you-go model and scale out to add more instances when demand increases and automatically scale back in when the demand decreases. This is cost-effective because you only pay for what you use.

Automatic Scaling: You can configure automatic scaling based on different performance metrics (CPU, memory, or custom metrics), ensuring that you scale out and in based on load.

Managed Service: It is a fully managed service, so it minimizes operational overhead.
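
For illustration, here is a minimal sketch of what the Scale Out configuration could look like through the Azure Monitor autoscale API from Python, assuming the azure-mgmt-monitor and azure-identity packages; the subscription ID, resource group, App Service plan name, and thresholds are placeholders, not values from the question:

```python
# Sketch: autoscale (Scale Out / Scale In) rules for the App Service plan hosting WebApp1.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    AutoscaleSettingResource, AutoscaleProfile, ScaleCapacity,
    ScaleRule, MetricTrigger, ScaleAction,
)

SUBSCRIPTION_ID = "<subscription-id>"
RG = "rg-webapp1"
PLAN_ID = (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}"
           "/providers/Microsoft.Web/serverfarms/plan-webapp1")

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

scale_out = ScaleRule(
    metric_trigger=MetricTrigger(
        metric_name="CpuPercentage", metric_resource_uri=PLAN_ID,
        time_grain="PT1M", statistic="Average", time_window="PT10M",
        time_aggregation="Average", operator="GreaterThan", threshold=70),
    scale_action=ScaleAction(direction="Increase", type="ChangeCount",
                             value="1", cooldown="PT10M"))

scale_in = ScaleRule(
    metric_trigger=MetricTrigger(
        metric_name="CpuPercentage", metric_resource_uri=PLAN_ID,
        time_grain="PT1M", statistic="Average", time_window="PT10M",
        time_aggregation="Average", operator="LessThan", threshold=30),
    scale_action=ScaleAction(direction="Decrease", type="ChangeCount",
                             value="1", cooldown="PT10M"))

client.autoscale_settings.create_or_update(
    RG, "webapp1-autoscale",
    AutoscaleSettingResource(
        location="eastus", target_resource_uri=PLAN_ID, enabled=True,
        profiles=[AutoscaleProfile(
            name="default",
            capacity=ScaleCapacity(minimum="1", maximum="5", default="1"),
            rules=[scale_out, scale_in])]))
```

The scale-in rule matters for cost: without it, instances added under load are never removed.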

Why not others:

Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours: While this can help minimize cost, this is not ideal because VMs are still running all the time. Also, it is more complex to implement and manage.

Configure the Scale Up settings for a web app: Scale Up is more costly because you increase the compute resources of the existing instances.

Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold: While it is possible to deploy and scale with scale sets, this is more costly since VMs are billed per hour and are more complex to manage than web apps.

Important Notes for the AZ-304 Exam:

Azure App Service: Be very familiar with Azure App Service and its scaling capabilities.

Web App Scale Out: Know the different scaling options for web apps, and when to scale out versus scale up.

Automatic Scaling: Understand how to configure automatic scaling based on performance metrics.

Cost Optimization: The exam often emphasizes cost-effective solutions. Be aware of the pricing models for different Azure services.

PaaS vs. IaaS: Understand the benefits of using PaaS services over IaaS for cost optimization.

Exam Focus: Be sure to select the best service that meets the requirements and provides the most cost effective solution.

3
Q

You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.

You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1.

The solution must meet the following requirements:

  • To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
  • If the manager does not verify access permission, automatically revoke that permission.
  • Minimize development effort.

What should you recommend?

In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
In Azure Active Directory (Azure AD), create an access review of Application1

A

Requirements:

External Developer Access: Fabrikam developers have RBAC permissions to an Azure application.

Access Verification: Need to verify if the Fabrikam developers still need access.

Monthly Email to Manager: Send a monthly email to the manager with access information.

Automatic Revocation: Revoke permissions if the manager does not approve.

Minimize Development: Minimize custom code development and use available services.

Recommended Solution:

In Azure Active Directory (Azure AD), create an access review of Application1

Explanation:

Azure AD Access Reviews:

Why it’s the best fit:

Automated Review: Azure AD Access Reviews provides a way to schedule recurring access reviews for groups, applications, or roles. It will automatically send notifications to the assigned reviewers (in this case, the manager).

Manager Review: You can configure the access review to have the manager review and approve or deny access for their developers.

Automatic Revocation: You can configure the access review to automatically remove access for users when they are not approved.

Minimal Development: Access reviews are a built-in feature of Azure AD that requires minimal configuration and no custom coding.
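
As a rough sketch, an access review like this can also be created programmatically through the Microsoft Graph access reviews API. The exam scenario needs no code; the payload below is abridged and should be checked against the Graph accessReviewScheduleDefinition reference, and the token and service principal ID are placeholders:

```python
# Sketch: create a recurring access review of Application1 via Microsoft Graph.
import requests

TOKEN = "<graph-access-token>"          # e.g. obtained via azure-identity
APP_SP_ID = "<application1-service-principal-id>"

definition = {
    "displayName": "Monthly review - Application1 (Fabrikam developers)",
    "scope": {  # review the users assigned to Application1
        "query": f"/servicePrincipals/{APP_SP_ID}/appRoleAssignedTo",
        "queryType": "MicrosoftGraph",
    },
    "reviewers": [  # each user's manager receives the review
        {"query": "./manager", "queryType": "MicrosoftGraph", "queryRoot": "decisions"}
    ],
    "settings": {
        "instanceDurationInDays": 25,
        "autoApplyDecisionsEnabled": True,   # apply results automatically
        "defaultDecisionEnabled": True,
        "defaultDecision": "Deny",           # revoke access if the manager does not respond
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 1},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=definition,
    timeout=30,
)
resp.raise_for_status()
```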

Why not others:

In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: While PIM is great for managing and governing privileged roles, it’s not the best choice for regular access reviews of permissions, and it does not provide a way to have a review based on user accounts.

Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: While possible, this requires custom development and management. Azure Access Reviews provides the functionality natively, therefore this is not the optimal solution for the requirements.

Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet: Similar to the previous option, this is not the ideal solution since access reviews provides all of this functionality natively.

Important Notes for the AZ-304 Exam:

Azure AD Access Reviews: Be very familiar with Azure AD Access Reviews, and how they can be used to manage user access, and know the methods that you can use to perform them (for example, by a manager or by self review).

Access Management: Understand the importance of access reviews as part of an overall security strategy.

Access Reviews vs. PIM: Understand when to use PIM, and when to use Access Reviews.

Minimize Development: The exam often emphasizes solutions that minimize development effort.

Exam Focus: Select the simplest and most direct method to achieve the desired outcome.

4
Q

HOTSPOT -
You have an Azure SQL database named DB1.
You need to recommend a data security solution for DB1. The solution must meet the following requirements:
✑ When helpdesk supervisors query DB1, they must see the full number of each credit card.
✑ When helpdesk operators query DB1, they must see only the last four digits of each credit card number.
✑ A column named Credit Rating must never appear in plain text within the database system, and only client applications must be able to decrypt the Credit Rating column.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Helpdesk requirements:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)
Credit Rating requirement:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)

A

Requirements:

Helpdesk Supervisors: Must see full credit card numbers.

Helpdesk Operators: Must see only the last four digits of credit card numbers.

Credit Rating Column: The Credit Rating column must never appear in plain text within the database system and must be decrypted by the client applications.

Answer Area:

Helpdesk requirements:

Dynamic data masking

Credit Rating requirement:

Always Encrypted

Explanation:

Helpdesk requirements:

Dynamic data masking:

Why it’s correct: Dynamic data masking allows you to obfuscate sensitive data based on the user’s role. You can configure masking rules to show the full credit card numbers to supervisors and only the last four digits to the operators. The underlying data is not modified, and the masking is applied at the query output level.
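
A minimal sketch of how such masking rules could be defined with T-SQL, run here from Python via pyodbc; the server, table, column, and role names are illustrative placeholders, not objects from the question:

```python
# Sketch: apply dynamic data masking to the credit card column with T-SQL (via pyodbc).
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;Database=DB1;"
    "Uid=admin_user;Pwd=<password>;Encrypt=yes;"
)
cursor = conn.cursor()

# Mask all but the last four digits for anyone without UNMASK permission.
cursor.execute("""
    ALTER TABLE dbo.Orders
    ALTER COLUMN CreditCardNumber ADD MASKED
        WITH (FUNCTION = 'partial(0, "XXXX-XXXX-XXXX-", 4)');
""")

# Helpdesk supervisors see the full number; operators keep the masked view.
cursor.execute("GRANT UNMASK TO HelpdeskSupervisors;")
conn.commit()
```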

Why not others:

Always Encrypted: This encrypts the data, but doesn’t allow for different visibility of the data based on user roles.

Azure Advanced Threat Protection (ATP): This is for detecting malicious behavior, not for data masking.

Transparent Data Encryption (TDE): This encrypts data at rest, but does not apply specific policies based on user access or perform masking.

Credit Rating requirement:

Always Encrypted:

Why it’s correct: Always Encrypted ensures that sensitive data is always encrypted, both at rest and in transit. The encryption keys are stored and managed in the client application and are not accessible to database administrators. This satisfies the requirement that the column must never appear in plain text in the database system, and it is only decrypted in the client application.
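
For illustration, a client application could read the encrypted column as sketched below, assuming ODBC Driver 17 or later with the ColumnEncryption connection keyword and access to the column master key (for example in Azure Key Vault); the server, database, table, and column names are placeholders:

```python
# Sketch: client-side decryption of an Always Encrypted column; plaintext never exists
# inside the database engine.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;Database=DB1;"
    "Uid=app_user;Pwd=<password>;Encrypt=yes;"
    "ColumnEncryption=Enabled;"          # enables transparent client-side decryption
)

cursor = conn.cursor()
# Parameterized queries are required when filtering on encrypted columns.
cursor.execute("SELECT CustomerId, CreditRating FROM dbo.Customers WHERE CustomerId = ?", 42)
for customer_id, credit_rating in cursor.fetchall():
    print(customer_id, credit_rating)    # plaintext exists only in the client process
```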

Why not others:

Azure Advanced Threat Protection (ATP): It doesn’t encrypt or mask the data. It is meant for threat detection.

Dynamic data masking: Dynamic data masking only masks the data for specific users, but it does not encrypt the data.

Transparent Data Encryption (TDE): TDE encrypts data at rest, but it does not encrypt data in transit or protect against database administrators viewing the unencrypted data.

Important Notes for the AZ-304 Exam:

Always Encrypted: Understand what it does, how it encrypts data, where the encryption keys are managed, and the purpose of this approach for security.

Dynamic Data Masking: Know the purpose and configuration of dynamic data masking and how it helps control the data that users can see.

Transparent Data Encryption (TDE): Understand that TDE is used for encrypting data at rest, but it doesn’t protect data in transit, and does not provide different views of data.

Azure Advanced Threat Protection (ATP): Know that it is used for threat detection, not for masking or encrypting data.

Data Security: Be familiar with the different data security features in Azure SQL Database.

Exam Focus: You must be able to understand a complex scenario, and pick the different Azure components that meet each requirement.

4
Q

You have an Azure subscription.
Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely.
You plan to copy the files to Azure Storage.
You need to implement a storage solution for the files that meets the following requirements:
✑ The files must be available within 24 hours of being requested.
✑ Storage costs must be minimized.
Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

A. Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
B. Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
C. Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
D. Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
E. Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.

A

The correct answers are B and E.

Here’s why:

Understanding the Requirements:

Availability within 24 hours: This requirement strongly suggests using the Archive access tier in Azure Blob Storage. The Archive tier has the lowest storage cost but also has a rehydration latency. Rehydration from Archive tier typically takes several hours, and is guaranteed within 24 hours.

Minimize storage costs: The Archive access tier is the most cost-effective storage tier in Azure Blob Storage for data that is rarely accessed.

Analyzing each option:

A. Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.

Incorrect. General-purpose v1 accounts are older and less cost-optimized than v2 or Blob storage accounts. This option doesn’t specify any access tier, so it would likely default to Hot or Cool, which are more expensive than Archive and not suitable for rarely accessed data when cost minimization is a key requirement. It also doesn’t explicitly address the 24-hour availability through Archive tier.

B. Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.

Correct. General-purpose v2 accounts are recommended and more cost-effective than v1. By setting the default tier to Hot (initially - though this default doesn’t really matter as we are overriding per blob) and then explicitly setting each file to the Archive access tier, we achieve the lowest storage cost and meet the 24-hour availability requirement. Setting individual blobs to Archive overrides the default account tier for those specific blobs.

C. Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.

Incorrect. Azure File Shares are designed for file system access (SMB, NFS) and are generally more expensive than Blob Storage for large amounts of data, especially for archive scenarios. File shares do not have access tiers like Archive. This option does not minimize cost and is not designed for rarely accessed, large datasets like this.

D. Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.

Incorrect. Similar to option C, using Azure File Shares is not cost-effective for this scenario. While Cool tier is cheaper than Hot, it’s still more expensive than Archive, and File Shares themselves are generally pricier than Blob Storage. File Shares also don’t offer the Archive tier.

E. Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.

Correct. Creating an Azure Blob storage account is specifically designed for blob data and can be more cost-optimized for blob storage compared to general-purpose accounts in some scenarios. Like option B, setting the default tier to Cool (or even Hot) is less important as long as we explicitly set each file to the Archive access tier. This option also effectively uses Archive tier for cost minimization and meets the 24-hour availability requirement. Azure Blob Storage accounts are designed to be cost-effective for blob data.

Why B and E are the best solutions:

Both options B and E leverage the Archive access tier of Azure Blob Storage, which is crucial for meeting both the cost minimization and 24-hour availability requirements. They use Blob containers which are the appropriate storage for files in this scenario. While they differ slightly in the type of storage account (general-purpose v2 vs. Azure Blob storage account), both are valid and effective solutions for storing rarely accessed files at the lowest cost with 24-hour retrieval.

Final Answer: B and E
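
As an illustration of the pattern in options B and E, here is a short sketch that copies files into a blob container and places each blob in the Archive tier, assuming the azure-storage-blob package; the connection string, container name, and local path are placeholders:

```python
# Sketch: upload each file and move it to the Archive access tier for lowest storage cost.
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("companyfiles")  # container assumed to exist

SOURCE = "/mnt/server1-share"
for name in os.listdir(SOURCE):
    with open(os.path.join(SOURCE, name), "rb") as data:
        blob = container.upload_blob(name=name, data=data, overwrite=True)
        # Archive tier: lowest cost, rehydration latency acceptable for the 24-hour SLA.
        blob.set_standard_blob_tier("Archive")
```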

5
Q

HOTSPOT

You have an existing implementation of Microsoft SQL Server Integration Services (SSIS) packages stored in an SSISDB catalog on your on-premises network. The on-premises network does not have hybrid connectivity to Azure by using Site-to-Site VPN or ExpressRoute.

You want to migrate the packages to Azure Data Factory.

You need to recommend a solution that facilitates the migration while minimizing changes to the existing packages. The solution must minimize costs.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Store the SSISDB catalog by using:
Azure SQL Database
Azure Synapse Analytics
SQL Server on an Azure virtual machine
SQL Server on an on-premises computer
Implement a runtime engine for
package execution by using:
Self-hosted integration runtime only
Azure-SQL Server Integration Services Integration Runtime (IR) only
Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime

A

Requirements:

Existing SSIS Packages: The packages are stored in an SSISDB catalog on-premises.

Migrate to ADF: The migration target is Azure Data Factory.

Minimize Changes: The solution should minimize changes to the existing SSIS packages.

Minimize Costs: The solution should be cost-effective.

No connectivity: There is no hybrid connectivity from the on-premises environment to Azure.

Answer Area:

Store the SSISDB catalog by using:

Azure SQL Database

Implement a runtime engine for package execution by using:

Azure-SQL Server Integration Services Integration Runtime (IR) only

Explanation:

Store the SSISDB catalog by using:

Azure SQL Database:

Why it’s correct: To migrate SSIS packages to Azure Data Factory, the SSISDB catalog needs to be stored in Azure. Azure SQL Database is the recommended and supported method of storing the SSISDB catalog when you are using the Azure SSIS Integration Runtime in ADF.

Why not others:

Azure Synapse Analytics: While Synapse Analytics also supports SQL functionality, it is not the recommended platform to host the SSISDB.

SQL Server on an Azure virtual machine: While SQL Server on a VM would work, it is an IaaS solution, which requires additional management overhead and is not as cost-effective as using the PaaS Azure SQL Database.

SQL Server on an on-premises computer: The SSISDB must be in Azure to be used by the Azure-SSIS Integration Runtime.

Implement a runtime engine for package execution by using:

Azure-SQL Server Integration Services Integration Runtime (IR) only:

Why it’s correct: An Azure SSIS Integration Runtime is a fully managed service for executing SSIS packages in Azure. Because there is no hybrid network connectivity, you must use the Azure version, instead of a self-hosted IR. The Azure SSIS IR is the only way to run the SSIS packages that were migrated in Azure.

Why not others:

Self-hosted integration runtime only: The self-hosted integration runtime needs hybrid network connectivity to Azure to work. Because there is no VPN or ExpressRoute, this is not an option.

Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime: The self-hosted integration runtime is not necessary in this scenario because there is no need to connect to an on-premises resource.

Important Notes for the AZ-304 Exam:

Azure Data Factory: Be very familiar with ADF, its core concepts, and how to execute SSIS packages.

Azure SSIS IR: Know the purpose of an Azure SSIS Integration Runtime and how to set it up. Understand that it is used when running SSIS packages in Azure.

SSISDB in Azure: Understand how the SSISDB catalog is managed and stored in Azure when migrating from an on-prem environment.

Self-Hosted IR: Understand when the self-hosted IR is required and why it is not the appropriate answer for this specific scenario.

Hybrid Connectivity: Understand how hybrid connectivity affects the choice of integration runtime.

Cost Minimization: Know how to minimize costs by choosing the appropriate services (PaaS over IaaS).

Exam Focus: The exam emphasizes choosing the most appropriate solution while minimizing effort and cost.

6
Q

You use Azure virtual machines to run a custom application that uses an Azure SQL database on the back end.

The IT department at your company recently enabled forced tunneling. Since the configuration change, developers have noticed degraded performance when they access the database.

You need to recommend a solution to minimize latency when accessing the database. The solution must minimize costs.

What should you include in the recommendation?

Azure SQL Database Managed instance
Azure virtual machines that run Microsoft SQL Server servers
Always On availability groups
virtual network (VNET) service endpoint

A

Understanding Forced Tunneling:

Forced tunneling in Azure directs all internet-bound traffic from a subnet through a virtual network appliance (like a firewall or proxy), on-premises network, or specific Azure service. This can increase latency since traffic to Azure services is routed through the forced tunnel, instead of going directly.

Requirements:

Azure SQL Database: Custom app on Azure VMs uses an Azure SQL database.

Forced Tunneling: Forced tunneling is enabled, causing performance degradation.

Minimize Latency: Minimize the latency when accessing the database.

Minimize Costs: The solution should be cost-effective.

Recommended Solution:

virtual network (VNET) service endpoint

Explanation:

Virtual Network Service Endpoints:

Why it’s the best fit: VNet service endpoints extend your virtual network identity to supported Azure services and keep traffic to those services on the Azure backbone network. By enabling a service endpoint for Azure SQL Database on the subnet that hosts the VMs, traffic from the VMs to the database bypasses the forced tunnel and takes an optimal, direct route. This significantly reduces latency, and service endpoints carry no additional charge, so costs are minimized.
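
A minimal sketch of enabling such a service endpoint on the VM subnet with the azure-mgmt-network SDK follows; the resource names and address prefix are placeholders, and a virtual network rule must still be added on the SQL server to allow the subnet:

```python
# Sketch: add a Microsoft.Sql service endpoint to the subnet hosting the app VMs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Subnet, ServiceEndpointPropertiesFormat

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Note: create_or_update replaces the subnet definition, so include the existing
# settings (address prefix, NSG, route table) alongside the new service endpoint.
client.subnets.begin_create_or_update(
    "rg-app", "vnet-app", "snet-app",
    Subnet(
        address_prefix="10.0.1.0/24",
        service_endpoints=[ServiceEndpointPropertiesFormat(service="Microsoft.Sql")],
    ),
).result()
```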

Why not others:

Azure SQL Database Managed Instance: While Managed Instance is a good choice for many SQL scenarios, it is not the ideal solution for this problem. It does not help with the forced tunneling, and it also does not minimize cost since it is a more expensive offering.

Azure virtual machines that run Microsoft SQL Server servers: Moving the database to a VM in IaaS will not fix the problem. It will not address the latency issues created by the forced tunneling.

Always On availability groups: This helps with HA and DR, but it does not help with the latency issues caused by the forced tunneling. Also, it would add significant costs to the deployment.

Important Notes for the AZ-304 Exam:

Virtual Network Service Endpoints: Understand the benefits of using service endpoints.

Forced Tunneling: Know what forced tunneling is and how it can impact traffic flow.

Cost Minimization: Know the different ways to minimize costs when architecting a solution.

Network Performance: Understand the different ways to diagnose and improve performance when dealing with Azure network configurations.

Azure SQL: Know the different deployment options for Azure SQL.

Exam Focus: The exam will often require you to select the most appropriate solution that meets all of the requirements.

7
Q

You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant. The subscription contains 10 resource groups, one for each department at your company.

Each department has a specific spending limit for its Azure resources.

You need to ensure that when a department reaches its spending limit, the compute resources of the department shut down automatically.

Which two features should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Azure Logic Apps
Azure Monitor alerts
the spending limit of an Azure account
Cost Management budgets
Azure Log Analytics alerts

A

Requirements:

Departmental Limits: Each department has a specific spending limit for its Azure resources.

Resource Shutdown: Compute resources must shut down automatically when the spending limit is reached.

Correct Features:

Cost Management budgets

Azure Logic Apps

Explanation:

Cost Management budgets:

Why it’s correct: Cost Management budgets allow you to define a spending limit for a specific scope (resource group, subscription, management group). When the actual spend reaches the budget threshold, you can trigger alerts and take actions. Budgets is the way to monitor and alert based on the cost.

Why not others (by itself): Cost management budgets cannot automatically stop resources, it is a monitoring and alert mechanism, and needs other services in order to take action.

Azure Logic Apps:

Why it’s correct: Azure Logic Apps can be triggered by a budget alert. In the logic app, you can add actions that automatically shut down the compute resources. For example, you can use the Azure Resource Management connector to stop virtual machines.

Why not others (by itself): Logic apps require a trigger to start. Therefore, a budget alert must be configured.
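
For illustration, the shutdown action that the Logic App (or an Azure Function it calls) might perform could look like the following sketch, assuming the azure-mgmt-compute package; the subscription ID and resource group are placeholders that would come from the budget alert payload:

```python
# Sketch: deallocate every VM in a department's resource group once its budget is hit.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

def stop_department_vms(subscription_id: str, resource_group: str) -> None:
    compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)
    for vm in compute.virtual_machines.list(resource_group):
        # Deallocate (not just power off) so compute charges stop accruing.
        compute.virtual_machines.begin_deallocate(resource_group, vm.name).wait()

stop_department_vms("<subscription-id>", "rg-finance")
```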

Why not others:

Azure Monitor alerts: Azure Monitor alerts are for performance monitoring. While they can monitor costs, they cannot perform actions on those costs.

the spending limit of an Azure account: While the Azure Account might have a total spending limit, this does not allow for the control on resource groups, or the automation of stopping resources.

Azure Log Analytics alerts: Log Analytics is a great way to analyze logs, but does not work with cost alerts.

Important Notes for the AZ-304 Exam:

Cost Management Budgets: Be very familiar with Cost Management budgets and how they can be used to control spending, and know that they are the mechanism that you should use for cost alerts.

Azure Logic Apps: Know how to use Logic Apps to automate actions based on triggers, and how they integrate with Azure Management connectors.

Automated Actions: Understand that Logic Apps can be triggered by alerts and can be used to perform actions, such as shutting down resources.

Cost Control: Be familiar with the best practices for cost control and optimization in Azure.

Alerts: Know the difference between cost alerts and metrics alerts.

Exam Focus: Carefully read the requirement. You must know which services do what function. You need to know that you need a budget to alert when the spend is reached, and that you need Logic apps to automate an action when the alert is triggered.

8
Q

HOTSPOT

You configure OAuth2 authorization in API Management as shown in the exhibit.

Add OAuth2 service

Display name: (Empty field)
Id: (Empty field)
Description: (Empty field)
Client registration page URL: https://contoso.com/register
Authorization grant types:

Authorization code: Enabled

Implicit: Disabled

Resource owner password: Disabled

Client credentials: Disabled

Authorization endpoint URL: https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize

Support state parameter: Disabled

Authorization Request method

GET: Enabled
POST: Disabled
Token endpoint URL: (Empty field)

Additional body parameters: (Empty field)

Button: Create

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
The selected authorization grant type is for
Background services
Headless device authentication
Single page applications
Web applications
To enable custom data in the grant flow, select
Client credentials
Implicit
Resource owner password
Support state parameter

A

OAuth2 Configuration Summary:

Authorization Grant Types: The configuration shows the “Authorization code” grant type as the only one enabled.

Authorization Endpoint URL: This is set to Microsoft’s OAuth2 authorization endpoint for the contoso.onmicrosoft.com tenant.

Other Settings: Various other settings related to authorization and token endpoints are displayed.

Answer Area:

The selected authorization grant type is for:

Web applications

To enable custom data in the grant flow, select

Support state parameter

Explanation:

The selected authorization grant type is for:

Web applications:

Why it’s correct: The authorization code grant type is the most secure and recommended method to obtain access tokens for web applications. In this flow the client (web app) first gets an authorization code from the authorization server, and then uses it to obtain an access token.

Why not others:

Background services: Background services (also known as daemon apps) typically use the client credentials flow, which is not enabled in this configuration.

Headless device authentication: Headless devices often use the device code flow, which is not a grant type present here.

Single-page applications: Single-page applications (SPAs) can use the authorization code flow, but often use the implicit grant type, which is disabled in this configuration.

To enable custom data in the grant flow, select:

Support state parameter:

Why it’s correct: The “Support state parameter” setting enables passing an opaque value in the authorization request, and will be returned by the authorization server with the code. This can be used to pass custom data that needs to be included in the authorization flow.
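
A small sketch of an authorization request that carries custom data in the state parameter; the authorization endpoint matches the exhibit, while the client ID, redirect URI, and scopes are placeholders:

```python
# Sketch: build an authorization code request whose state value round-trips custom data
# (and doubles as CSRF protection).
import base64
import json
import secrets
from urllib.parse import urlencode

AUTHORIZE_URL = ("https://login.microsoftonline.com/contoso.onmicrosoft.com"
                 "/oauth2/v2.0/authorize")

# Opaque state: an anti-forgery nonce plus whatever custom data the app needs back.
state = base64.urlsafe_b64encode(json.dumps({
    "nonce": secrets.token_urlsafe(16),
    "return_to": "/orders/checkout",
}).encode()).decode()

params = {
    "client_id": "<client-id>",
    "response_type": "code",            # authorization code grant
    "redirect_uri": "https://contoso.com/signin-callback",
    "scope": "openid profile",
    "response_mode": "query",
    "state": state,                     # returned unchanged with the authorization code
}

print(f"{AUTHORIZE_URL}?{urlencode(params)}")
```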

Why not others:

Client credentials: This is for service-to-service authentication without a user present.

Implicit: This is an older, less secure grant type for single-page applications. It does not enable passing custom data.

Resource owner password: This is a less secure grant type that should be avoided in most scenarios. It also does not enable passing custom data.

Important Notes for the AZ-304 Exam:

OAuth 2.0 Grant Types: Be very familiar with the different OAuth 2.0 grant types:

Authorization Code

Implicit

Client Credentials

Resource Owner Password

Device Code

API Management OAuth2 Settings: Understand how to configure OAuth 2.0 settings in Azure API Management.

“State” Parameter: Know the importance of the “state” parameter in OAuth flows and how it helps prevent CSRF attacks. Understand how this can be used to pass custom data.

API Security: Know how to properly secure APIs with OAuth 2.0.

Exam Focus: Be sure to select the answer based on a close inspection of the provided details.

9
Q

You are designing an order processing system in Azure that will contain the Azure resources shown in the following table.

| Name | Type | Purpose |
|---|---|---|
| App1 | Web app | Processes customer orders |
| Function1 | Function | Check product availability at vendor 1 |
| Function2 | Function | Check product availability at vendor 2 |
| storage1 | Storage account | Stores order processing logs |

The order processing system will have the following transaction flow:

✑ A customer will place an order by using App1.

✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.

✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.

✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.

✑ All the steps of the transaction will be logged to storage1.

Which type of resource should you recommend for the integration component?

an Azure Data Factory pipeline
an Azure Service Bus queue
an Azure Event Grid domain
an Azure Event Hubs capture

A

The correct answer is an Azure Service Bus queue.

Here’s why:

Message Brokering: Azure Service Bus queues are designed for reliable, asynchronous message queuing. This perfectly fits the scenario where App1 generates a message and the integration component processes it to trigger the appropriate function.

Decoupling: Service Bus decouples App1 from Function1 and Function2. App1 simply sends a message to the queue, and it doesn’t need to know which function will eventually process it. This improves resilience and scalability.

Guaranteed Delivery: Service Bus ensures that messages are delivered at least once, which is crucial for order processing.

First-In, First-Out (FIFO) Ordering (Optional but useful): If the order of processing is important, Service Bus queues can be configured for FIFO delivery.
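
A minimal sketch of this flow with the azure-servicebus SDK, where App1 sends the message and the integration component receives it and routes by order type; the connection string, queue name, and payload fields are placeholders:

```python
# Sketch: send an order message to a Service Bus queue, then receive and route it.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"
QUEUE = "orders"

# App1 side: publish the availability-check request.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE) as sender:
        sender.send_messages(ServiceBusMessage(
            json.dumps({"orderId": "12345", "orderType": "vendor1"})))

# Integration side: pull messages and dispatch to the right function.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            order = json.loads(str(msg))
            target = "Function1" if order["orderType"] == "vendor1" else "Function2"
            print(f"Routing order {order['orderId']} to {target}")
            receiver.complete_message(msg)
```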

Why other options are less suitable:

Azure Data Factory pipeline: While ADF can orchestrate workflows and trigger activities (including Azure Functions), it’s primarily designed for data integration and ETL (Extract, Transform, Load) tasks. It’s overkill for this simple message routing scenario.

Azure Event Grid domain: Event Grid is ideal for event-driven architectures where publishers emit events and subscribers react to them. While it can trigger functions, it is better suited to scenarios where many subscribers may be interested in the same event. In this case the routing is deterministic (either Function1 or Function2 based on the order type), making a queue a more direct fit. Event Grid also uses a push delivery model, where it attempts to deliver events to subscribers; Service Bus uses a pull-based model, giving the consumer more control over when messages are processed.

Azure Event Hubs capture: Event Hubs is designed for high-throughput ingestion of streaming data. The “capture” feature is for persisting this data to storage. It’s not primarily designed for routing messages to specific functions based on message content.

9
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.

Does this meet the goal?

Yes
No

A

Goal:

Deploy Azure App Service instances and Azure SQL databases simultaneously.

App Service instances must be deployed only to specific Azure regions.

Resources for the App Service instances must reside in the same region.

Proposed Solution:

Create resource groups based on locations.

Implement resource locks on the resource groups.

Analysis:

Resource Groups Based on Location:

Creating resource groups based on locations is a good practice for organizing resources in Azure. It makes it easier to manage resources and ensures that all the resources that belong to a specific geographic region are grouped together. This is an important step in reaching the goal.

Resource Locks

Resource locks, however, are only for preventing accidental deletion of resource groups and the resources within. They do not enforce which resources are deployed or where they are deployed, meaning that a user could still deploy a VM outside of the required location.

Does It Meet the Goal?: No

Explanation:

Resource Groups by Location (Partial Fulfillment): Creating resource groups by location does help with organizing resources and ensures they’re deployed in the same region, meeting part of the requirement of keeping all resources in the same location.

Resource Locks - These will not solve for the region requirement, because you can still create a resource in any region.

Missing Enforcement: The solution lacks any mechanism to enforce that the resources are only deployed in the correct Azure regions. This is a regulatory requirement, so a simple organization of resource groups is not enough.

No Region Enforcement: Resource locks prevent accidental deletion or modification of resources, but they do not restrict resource deployments to specific regions.

Correct Answer:

No

Important Notes for the AZ-304 Exam:

Resource Groups: Understand the purpose and use of resource groups.

Resource Locks: Know the purpose and limitations of resource locks.

Regulatory Requirements: Recognize that solutions must enforce compliance requirements. This is a key element of many questions.

Enforcement Mechanisms: Look for mechanisms that enforce policies instead of simply organizing resources.

Exam Focus: Read the proposed solution and verify if it truly meets the goal. If any part of the solution does not achieve the goal, then the answer is “No”.
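
For context only: the enforcement this proposed solution lacks is what Azure Policy provides. A rough sketch of assigning the built-in "Allowed locations" policy to a resource group with the azure-mgmt-resource SDK follows; the definition ID, scope, and region are placeholders to verify in your tenant:

```python
# Sketch: assign the built-in "Allowed locations" policy so new resources are blocked
# outside the approved region.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

SUBSCRIPTION_ID = "<subscription-id>"
SCOPE = f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/rg-hr"
ALLOWED_LOCATIONS_DEFINITION = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "e56962a6-4747-49cd-b67b-bf8b01975c4c"  # built-in "Allowed locations" (verify the ID)
)

policy = PolicyClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
policy.policy_assignments.create(
    SCOPE,
    "allowed-locations-rg-hr",
    PolicyAssignment(
        policy_definition_id=ALLOWED_LOCATIONS_DEFINITION,
        parameters={"listOfAllowedLocations": {"value": ["westeurope"]}},
    ),
)
```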

10
Q

You need to recommend a data storage solution that meets the following requirements:

  • Ensures that applications can access the data by using a REST connection
  • Hosts 20 independent tables of varying sizes and usage patterns
  • Automatically replicates the data to a second Azure region
  • Minimizes costs

What should you recommend?

an Azure SQL Database that uses active geo-replication
tables in an Azure Storage account that use geo-redundant storage (GRS)
tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
an Azure SQL Database elastic database pool that uses active geo-replication

A

Requirements:

REST API Access: The data must be accessible through a REST interface.

Independent Tables: The solution must support 20 independent tables of different sizes and usage patterns.

Automatic Geo-Replication: The data must be automatically replicated to a secondary Azure region.

Minimize Costs: The solution should be cost-effective.

Recommended Solution:

Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)

Explanation:

Azure Storage Account with RA-GRS Tables:

REST Access: Azure Storage tables are directly accessible using a REST API, which is a fundamental part of their design.

Independent Tables: A single Azure Storage account can hold many independent tables, meeting the 20-table requirement.

Automatic Geo-Replication (RA-GRS): RA-GRS ensures that the data is replicated to a secondary region, and provides read access to that secondary location. This satisfies the HA and geo-redundancy requirements.

Minimize Cost: Azure Storage tables are designed to handle different patterns and are cost effective compared to SQL options.
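
A short sketch of working with such a table through the azure-data-tables SDK, which is a thin wrapper over the Table service REST API; the connection string, table name, and entity values are placeholders, and RA-GRS replication is configured on the storage account itself rather than in code:

```python
# Sketch: create a table, upsert an entity, and read it back over the Table REST API.
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
table = service.create_table_if_not_exists("Department01")

table.upsert_entity({
    "PartitionKey": "orders-2024",
    "RowKey": "000123",
    "Status": "Shipped",
})

entity = table.get_entity(partition_key="orders-2024", row_key="000123")
print(entity["Status"])
```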

Why not others:

Azure SQL Database with active geo-replication: While it provides strong SQL capabilities and geo-replication, SQL databases are more costly for simple table storage and have higher operational overhead. Azure SQL Database also does not expose a native REST API for data access; data is accessed over SQL (TDS) connections.

Azure SQL Database elastic database pool with active geo-replication: Same reasons as above, but with the added complication of an elastic pool, which is unnecessary for the stated requirements and would add even more costs.

Tables in an Azure Storage account that use geo-redundant storage (GRS): This would meet the geo-replication requirements but it would not provide the ability to read from the secondary location, and so is not as good a choice as RA-GRS.

Important Notes for the AZ-304 Exam:

Azure Storage Tables: Know what they are designed for and their features (scalability, cost-effectiveness, REST API access). Be able to explain where they are appropriate.

Geo-Redundancy: Understand the differences between GRS, RA-GRS and how they impact performance, availability and cost.

Cost-Effective Solutions: The exam often asks for the most cost-effective solution. Be aware of the pricing models of different Azure services.

SQL Database Use Cases: Understand when to use SQL DBs and when other options (like Table storage) are more appropriate. SQL DBs are better suited for complex queries, transactions, and relational data models.

REST API Access: Know which Azure services offer a REST interface for data access and when it might be required.

Exam Technique: Ensure you fully read the requirements, so you don’t pick a more expensive or complex solution than is needed.

11
Q

HOTSPOT

Your company has two on-premises sites in New York and Los Angeles and Azure virtual networks in the East US Azure region and the West US Azure region. Each on-premises site has Azure ExpressRoute circuits to both regions.

You need to recommend a solution that meets the following requirements:

✑ Outbound traffic to the Internet from workloads hosted on the virtual networks must be routed through the closest available on-premises site.

✑ If an on-premises site fails, traffic from the workloads on the virtual networks to the Internet must reroute automatically to the other site.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Routing from the virtual networks to
the on-premises locations must be
configured by using:
Azure default routes
Border Gateway Protocol (BGP)
User-defined routes
The automatic routing configuration
following a failover must be
handled by using:
Border Gateway Protocol (BGP)
Hot Standby Routing Protocol (HSRP)
Virtual Router Redundancy Protocol (VRRP)

A

Correct Answers and Why

Routing from the virtual networks to the on-premises locations must be configured by using:

Border Gateway Protocol (BGP)

Why?

ExpressRoute Standard: ExpressRoute relies on BGP for exchanging routes between your on-premises networks and Azure virtual networks. It’s the fundamental routing protocol for this type of connectivity.

Dynamic Routing: BGP allows for dynamic route learning, meaning routes are automatically adjusted based on network changes (like a site going down). This is essential for the failover requirement.

Path Selection: BGP allows for attributes like Local Preference to choose the best path. The path to the nearest on-prem location can be preferred by setting a higher local preference.

Why Not the Others?

Azure Default Routes: These routes are for basic internal Azure connectivity and internet access within Azure. They don’t handle routing to on-premises networks over ExpressRoute.

User-defined routes (UDRs): While UDRs can force traffic through a specific path they do not facilitate dynamic failover without manual intervention and are therefore unsuitable in this scenario.

The automatic routing configuration following a failover must be handled by using:

Border Gateway Protocol (BGP)

Why?

BGP Convergence: BGP’s inherent nature is to dynamically adapt to network changes. If an on-premises site or an ExpressRoute path becomes unavailable, BGP automatically detects this and withdraws routes from the failed path.

Automatic Rerouting: BGP then advertises the available paths, leading to the rerouting of traffic through the remaining healthy site, achieving the automatic failover requirement.

Why Not the Others?

Hot Standby Routing Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP): These protocols are used for first-hop redundancy on local networks which is not applicable in Azure environments or to Expressroute configurations. They do not facilitate the end-to-end routing and failover required.

Important Notes for the AZ-304 Exam

ExpressRoute Routing is BGP-Based: Understand that BGP is the routing protocol for ExpressRoute. If a question involves routing over ExpressRoute, BGP is highly likely to be involved.

BGP for Dynamic Routing and Failover: Know that BGP not only provides routing but also provides failover capabilities through its dynamic path selection and convergence features.

Local Preference: Understand how BGP attributes like Local Preference can be used to influence path selection. This is key for scenarios where you want to force a primary path and have a secondary backup path.

Azure Networking Core Concepts: You should have a solid understanding of:

Virtual Networks: How they’re used, subnetting, IP addressing.

Route Tables: Both default and User-Defined, and how they control traffic routing.

ExpressRoute: The different connection options and associated routing implications.

Dynamic vs. Static Routing: Know the difference between dynamic routing (BGP) and static routing (User Defined Routes) and where they are best suited.

Hybrid Networking: Be prepared to deal with hybrid scenarios that connect on-premises and Azure resources.

Failover: Be aware of the failover options and be able to choose the best solutions for different circumstances. BGP is the most common solution for failover between on-prem and Azure.

HSRP and VRRP Applicability: These are first hop redundancy protocols used locally and are not suitable for Azure cloud environments. They should not be suggested for Azure routing scenarios.

12
Q

You have an Azure subscription. The subscription contains an app that is hosted in the East US, Central Europe, and East Asia regions. You need to recommend a data-tier solution for the app.

The solution must meet the following requirements:

  • Support multiple consistency levels.
  • Be able to store at least 1 TB of data.
  • Be able to perform read and write operations in the Azure region that is local to the app instance

What should you include in the recommendation?

a Microsoft SQL Server Always On availability group on Azure virtual machines
an Azure Cosmos DB database
an Azure SQL database in an elastic pool
Azure Table storage that uses geo-redundant storage (GRS) replication

A

Understanding the Requirements

Global Distribution: The application is deployed in multiple regions (East US, Central Europe, East Asia), meaning the data layer also needs to be globally accessible.

Multiple Consistency Levels: The solution must support different levels of data consistency (e.g., strong, eventual).

Scalability: It needs to store at least 1 TB of data.

Local Read/Write: Each application instance should be able to perform read and write operations in its local region for performance.

Evaluating the Options

a) Microsoft SQL Server Always On Availability Group on Azure Virtual Machines:

Pros:

Offers strong consistency.

Can store large amounts of data (1 TB+).

Cons:

Complex to manage: Requires setting up and maintaining virtual machines, clustering, and replication manually.

Not designed for low-latency multi-regional access: While you can do replication, it’s typically not optimized for providing very low-latency access to every region at the same time.

Does not inherently offer multiple consistency levels:

Verdict: Not the best fit. It’s too complex and doesn’t easily meet the multi-region, multiple consistency requirement.

b) An Azure Cosmos DB database:

Pros:

Globally Distributed: Designed for multi-region deployments and provides low-latency reads/writes in local regions.

Multiple Consistency Levels: Supports various consistency levels, from strong to eventual, that can be set per request.

Scalable: Can easily store 1 TB+ of data and scale as needed.

Fully Managed: Much easier to manage than SQL Server on VMs.

Cons:

Has different way of managing data and database design than relational solutions.

Verdict: Excellent fit. It directly addresses all the requirements.

c) An Azure SQL Database in an elastic pool:

Pros:

Scalable in terms of performance and resources.

Familiar relational database platform.

Cons:

Not inherently multi-regional: While you can do active geo-replication, it has limitations with low-latency reads from remote regions.

Limited consistency options: Primarily provides strong consistency, not multiple levels.

Not as horizontally scalable: It’s designed for relational data, not the more flexible scalability needed for a globally distributed app.

Does not provide local read/write in each region.

Verdict: Not the best choice. It doesn’t meet the multi-region low-latency and consistency requirements.

d) Azure Table storage that uses geo-redundant storage (GRS) replication:

Pros:

Highly scalable.

Relatively inexpensive.

GRS provides data replication.

Cons:

No multi-master writes: No local read/write in each region. Reads can come from a different location.

Limited consistency: Primarily eventual consistency, not the range required by the problem statement.

No SQL: Designed for non-relational data storage only.

Verdict: Not suitable. Lacks multiple consistency options, multi-master writes, and suitable performance for low latency reads.

Recommendation

Based on the analysis, the best solution is:

An Azure Cosmos DB database

Explanation

Azure Cosmos DB is purpose-built for globally distributed applications. It offers:

Global Distribution and Low Latency: Data can be replicated to multiple Azure regions, allowing applications to read and write data in their local region with low latency.

Multiple Consistency Levels: You can fine-tune the consistency level per request. Options range from strong consistency (data is guaranteed to be the same everywhere) to eventual consistency (data will eventually be consistent across regions).

Scalability: Cosmos DB can easily store 1 TB+ of data and automatically scales to handle increased traffic.

Ease of Management: As a fully managed service, it reduces operational overhead.
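
For illustration, a minimal sketch of an app instance using the azure-cosmos SDK with a per-client consistency level and a preferred (local) region; the endpoint, key, database, container, and item shape are placeholders, and the account's region list is configured on the Cosmos DB account itself:

```python
# Sketch: read and write against the closest replica with a chosen consistency level.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://contoso-orders.documents.azure.com:443/",
    credential="<account-key>",
    consistency_level="Session",          # one of the five Cosmos DB consistency levels
    preferred_locations=["East US"],      # prefer the replica local to this app instance
)

db = client.create_database_if_not_exists("appdata")
container = db.create_container_if_not_exists(
    id="orders", partition_key=PartitionKey(path="/customerId"))

container.upsert_item({"id": "order-1", "customerId": "c-42", "total": 18.50})
print(container.read_item("order-1", partition_key="c-42")["total"])
```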

12
Q

Your company purchases an app named App1.

You plan to run App1 on seven Azure virtual machines in an Availability Set. The number of fault domains is set to 3. The number of update domains is set to 20.

You need to identify how many App1 instances will remain available during a period of planned maintenance.

How many App1 instances should you identify?

1
2
6
7

A

Understanding Availability Sets

Purpose: Availability Sets are used to protect your applications from planned and unplanned downtime within an Azure datacenter.

Fault Domains (FDs): Fault Domains define groups of virtual machines that share a common power source and network switch. In the event of a power or switch failure, VMs in different FDs will be affected independently of each other.

Update Domains (UDs): Update Domains define groups of virtual machines that can be rebooted simultaneously during an Azure maintenance window. Azure applies planned maintenance to UDs one at a time.

The Key Rule

During planned maintenance, Azure updates VMs within a single Update Domain at a time. Azure moves to the next UD only after completing an update to the current UD. This means that while an update is being done on one UD, the other UDs are not affected.

Analyzing the Scenario

7 VMs in total

3 Fault Domains: This is important for unplanned maintenance, but doesn’t directly impact our answer here.

20 Update Domains: This is the important factor for planned maintenance.

It does not mean there are 20 physical UDs in the set; it just means up to 20 UDs can be used. The 7 VMs will therefore each be placed in one of 7 unique UDs within the set of 20 UDs.

Calculating Availability During Planned Maintenance

Minimum VMs per Update Domain: Since you have 7 VMs and 20 available UDs, each virtual machine is placed in its own update domain.

Impact of Maintenance: During a planned maintenance event, Azure will update one UD at a time. Therefore during maintenance one of those 7 VMs will be unavailable while the update is applied.

Available VMs: That means that at any given time when maintenance is applied to one single UD, the remaining VMs in the other UDs will remain available. In this case 7-1=6 VMs.

Correct Answer

6

Important Notes for the AZ-304 Exam

Availability Sets vs. Virtual Machine Scale Sets: Know the difference. Availability Sets provide fault tolerance for individual VMs, while Scale Sets provide scalability and resilience for groups of identical VMs (often used for autoscaling). This question specifically used an availability set.

Fault Domains (FDs) vs. Update Domains (UDs): Be clear on the purpose of each. FDs for unplanned maintenance, UDs for planned maintenance.

Impact of UDs on Planned Maintenance: During planned maintenance, only one UD is updated at a time, ensuring that your application can remain available.

Distribution of VMs: In an availability set, Azure evenly distributes VMs across FDs and UDs.

Maximum FDs and UDs: Understand that the maximum number of FDs is 3 and UDs are 20 in Availability Sets.

Real-World Scenario: Be aware that real production workloads can have other availability and redundancy concerns and that more advanced redundancy can be achieved by using multiple availability sets in the same region or a combination of Availability sets and Availability zones.

Calculations: Be able to determine the availability of VMs during planned or unplanned maintenance based on the number of FDs and UDs as well as the number of VMs in a given configuration.

Best Practice: Best practice is to have at least 2 VMs in an availability set, and 2 availability sets in your region to provide redundancy in the event of zonal failures as well as UD / FD maintenance.

12
Q

Your company has the infrastructure shown in the following table:

| Location | Resource |
|---|---|
| Azure | Azure subscription named Subscription1; 20 Azure web apps |
| On-premises datacenter | Active Directory domain; server running Azure AD Connect; Linux computer named Server1 |

The on-premises Active Directory domain syncs to Azure Active Directory (Azure AD).

Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.

You plan to migrate Server1 to a virtual machine in Subscription1.

A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.

You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.

What should you include in the recommendation?

Azure AD Domain Services (Azure AD DS)
an Azure VPN gateway
the Active Directory Domain Services role on a virtual machine
Azure AD Application Proxy

A

Understanding the Requirements

Application (App1): Uses LDAP queries to authenticate users in the on-premises Active Directory.

Migration: Moving from an on-premises Linux server to an Azure VM.

Security Policy: VMs and services in Azure are not allowed to access the on-premises network.

Functionality: The migrated application must still be able to authenticate users.

Analyzing the Options

Azure AD Domain Services (Azure AD DS)

Pros:

Provides a managed domain controller in Azure, allowing VMs to join the domain.

Supports LDAP queries for authentication.

Independent of the on-premises network.

Synchronizes user information from Azure AD.

Fully managed, eliminating the need for maintaining domain controllers.

Cons:

Cost implications from running an additional service.

Verdict: This is the most suitable option. It meets the functional requirements without violating the security policy.

An Azure VPN Gateway

Pros:

Provides a secure connection between Azure and on-premises networks.

Cons:

Violates the security policy that prevents Azure resources from connecting to on-premises.

Would allow the VM access to the entire on-premises network (if set up as site-to-site), including AD.

Verdict: Not a valid option because it directly contradicts the security policy.

The Active Directory Domain Services role on a virtual machine

Pros:

Provides the needed domain services

Cons:

Would require setting up and managing a domain controller in Azure.

Would need a VPN connection to sync with on-premises AD, which would violate the security policy.

Requires ongoing maintenance.

Verdict: Not a valid option because it would be hard to maintain and the connection to on-prem would violate the security policy.

Azure AD Application Proxy

Pros:

Allows external users to connect to internal resources.

Cons:

Not relevant for this use case. Application Proxy does not manage or provide LDAP access to users.

Verdict: Not a good fit as it does not help with authentication for the application.

Correct Recommendation

The best solution is Azure AD Domain Services (Azure AD DS).

Explanation

LDAP Compatibility: Azure AD DS provides a managed domain service compatible with LDAP queries, which is precisely what App1 needs for user authentication.

Isolated Azure Environment: Azure AD DS is entirely contained within Azure and does not require a connection to the on-premises network. This allows you to satisfy the security policy.

Azure AD Synchronization: Azure AD DS syncs users from Azure AD, meaning users will be able to authenticate after the migration.

Ease of Use: Azure AD DS is a fully managed service so you will not need to worry about the underlying infrastructure.
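For illustration, the kind of LDAP query App1 could issue against the managed domain after migration might look like the sketch below. The domain name, service account, and search base are hypothetical placeholders; it assumes the Python ldap3 package and that secure LDAP (LDAPS) is enabled on the Azure AD DS managed domain.

from ldap3 import Server, Connection, ALL, SUBTREE

# Secure LDAP endpoint of the managed domain (hypothetical name).
server = Server("ldaps://aadds.contoso.com", port=636, use_ssl=True, get_info=ALL)

conn = Connection(
    server,
    user="svc-app1@aadds.contoso.com",    # service account synced from Azure AD
    password="<secret-from-key-vault>",   # placeholder
    auto_bind=True,
)

# Verify that a user exists and read a few attributes (default Azure AD DS users OU).
conn.search(
    search_base="OU=AADDC Users,DC=aadds,DC=contoso,DC=com",
    search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
    search_scope=SUBTREE,
    attributes=["displayName", "memberOf"],
)
print(conn.entries)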

Important Notes for the AZ-304 Exam

Azure AD DS Use Cases: Know that Azure AD DS is designed for scenarios where you need domain services (including LDAP) in Azure but cannot/should not connect to on-premises domain controllers.

Hybrid Identity: Be familiar with hybrid identity options, such as using Azure AD Connect to sync on-premises Active Directory users to Azure AD.

Security Policies: Pay close attention to security policies described in exam questions. The correct answer must satisfy all stated security requirements.

Service Selection: Be able to choose the correct Azure service based on the stated requirements of the question. For example, know when to use Azure AD DS as opposed to spinning up a domain controller in a VM.

Alternatives: You should know what other options there are that could theoretically be used, but also understand their pros and cons. For instance, you should be able to state that a VPN could facilitate the connection, but that the security policy would need to be updated.

LDAP Authentication: Understand LDAP as the core functionality for Active Directory authentication.

Fully Managed Services: Be aware of the benefits of managed services (like Azure AD DS) in reducing management overhead.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
13
Q

You are reviewing an Azure architecture as shown in the Architecture exhibit (Click the Architecture tab.)

Log Files
|
v
Azure Data Factory ——-> Azure Data Lake Storage
| |
| |
| |
v |
Azure Databricks <—————-
|
v
Azure Synapse Analytics ——-> Azure Analysis Services
|
v
Power BI
Steps:
Ingest: Log Files → Azure Data Factory
Store: Azure Data Factory → Azure Data Lake Storage
Prep and Train: Azure Data Lake Storage ⇄ Azure Databricks
Model and Serve: Azure Synapse Analytics → Azure Analysis Services
Visualize: Azure Analysis Services → Power BI

The estimated monthly costs for the architecture are shown in the Costs exhibit. (Click the Costs tab.)

| Service | Description | Cost |
|---|---|---|
| Azure Synapse Analytics | Tier: Compute-optimised Gen2, Compute: DWU 100 x 1 | US$998.88 |
| Data Factory | Azure Data Factory V2 Type, Data Pipeline Service type, | US$4,993.14 |
| Azure Analysis Services | Developer (hours), 5 Instance(s), 720 Hours | US$475.20 |
| Power BI Embedded | 1 node(s) x 1 Month, Node type: A1, 1 Virtual Core(s), | US$735.91 |
| Storage Accounts | Block Blob Storage, General Purpose V2, LRS Redundant, | US$21.84 |
| Azure Databricks | Data Analytics Workload, Premium Tier, 1 D3V2 (4 vCPU) | US$515.02 |
| Estimate total: | | US$7,739.99 |

The log files are generated by user activity on Apache web servers. The log files are in a consistent format. Approximately 1 GB of logs is generated per day. Microsoft Power BI is used to display weekly reports of the user activity.

You need to recommend a solution to minimize costs while maintaining the functionality of the architecture.

What should you recommend?

Replace Azure Data Factory with CRON jobs that use AzCopy.
Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.
Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.
Replace Azure Databricks with Azure Machine Learning.

A

Understanding the Existing Architecture

Data Ingestion: Log files from Apache web servers are ingested into Azure Data Lake Storage via Azure Data Factory.

Data Processing: Azure Databricks is used to prep and train the data.

Data Warehousing: Azure Synapse Analytics is used to model and serve data.

Data Visualization: Azure Analysis Services and Power BI are used for visualization.

Cost Breakdown and Bottlenecks

The cost breakdown shows the following areas as significant expenses:

Azure Data Factory: $4,993.14 (by far the most expensive item)

Azure Synapse Analytics: $998.88

Power BI Embedded: $735.91

The other items (Analysis services, Databricks, and storage) are relatively low cost.

Analyzing the Recommendations

Replace Azure Data Factory with CRON jobs that use AzCopy.

Pros:

Significant cost reduction: AzCopy is free and can be used with a simple CRON job.

Suitable for the relatively small amount of data that is being moved.

Cons:

Less feature-rich than Data Factory (no orchestration, error handling, monitoring, etc.).

Adds management overhead as you need to create and maintain the CRON jobs.

Verdict: This is the best option. Given the small data volume, the complexity of Data Factory is overkill and the cost can be reduced dramatically.

Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.

Pros:

Can be more cost effective for smaller workloads and can scale up or down easily.

Cons:

May need changes to the way the data is stored and managed.

Hyperscale is designed for transactional workloads and may not be the best replacement for a data warehouse.

Verdict: Not the best option, as it may impact the architecture of the solution and the query patterns used.

Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.

Pros:

Could be less expensive than the managed service for small workloads.

Cons:

Significantly more management overhead, less scalable.

Would reduce the overall functionality of the solution, having to implement multiple services in one VM.

Would not reduce costs, as the total cost of the VM, the SQL licenses, and the management effort would likely be higher.

Verdict: Not recommended. Introduces complexity and management overhead.

Replace Azure Databricks with Azure Machine Learning.

Pros:

Azure Machine Learning can also do data processing.

May be more cost efficient depending on workload.

Cons:

Azure Machine Learning is more focused on ML than on processing/preparation of data.

More geared towards predictive analytics than general data processing.

May require a significant rework of the existing process.

Verdict: Not a suitable option, as it is not a like-for-like replacement.

Recommendation

The best recommendation is:

Replace Azure Data Factory with CRON jobs that use AzCopy.

Explanation

Cost Savings: The primary issue is the high cost of Azure Data Factory. Using CRON jobs and AzCopy is a simple, low-cost alternative for the relatively small volume of data being moved.

Functionality: The CRON job will simply move the data from the source location to the Azure data lake, with the processing steps remaining the same.

Complexity: While this adds more management overhead by requiring you to create and maintain the CRON job, the simplicity of the requirements outweighs the added complexity.
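For concreteness, the replacement could be as small as the script below, run from a cron entry such as 0 2 * * * /usr/bin/python3 /opt/scripts/upload_logs.py. The paths, storage account, container, and SAS token are hypothetical placeholders, and it assumes the AzCopy v10 binary is installed on the web server.

import subprocess
from datetime import date

LOG_DIR = "/var/log/apache2"  # Apache access logs on the web server (placeholder)
DEST = (
    "https://contosodatalake.blob.core.windows.net/weblogs/"
    f"{date.today():%Y/%m/%d}?<sas-token>"
)

# azcopy copy <source> <destination> --recursive uploads the whole log folder.
subprocess.run(["azcopy", "copy", LOG_DIR, DEST, "--recursive"], check=True)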

Important Notes for the AZ-304 Exam

Cost Optimization: Know that the exam may test your ability to identify cost drivers and suggest cost optimizations.

Azure Data Factory: Understand when ADF is the right tool and when a simpler tool will suffice. It’s often beneficial to use a tool as simple as possible, while still meeting requirements.

Data Transfer: Be aware of options like AzCopy for moving data in a low-cost way.

CRON jobs: Understand how CRON jobs can be used to schedule operations.

Azure Synapse Analytics: Understand how Azure Synapse Analytics can provide insights and processing power, but can also be expensive.

SQL Database Hyperscale: Understand when it is more beneficial to use Hyperscale over Synapse.

SQL Server on Azure VM: Know the use cases of where a traditional SQL server may be appropriate.

Azure Analysis Services: Know that it is designed for fast data queries and reporting through tools like Power BI, but can add significant cost.

Azure Databricks and ML: Understand the difference and which scenarios are more suited for each.

Service selection: Know how to select a service based on the requirements provided.

Simplicity: Consider solutions that may be less feature-rich, but provide simpler (and lower cost) solutions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
14
Q

You have an Azure Active Directory (Azure AD) tenant.

You plan to provide users with access to shared files by using Azure Storage. The users will be provided with different levels of access to various Azure file shares based on their user account or their group membership.

You need to recommend which additional Azure services must be used to support the planned deployment.

What should you include in the recommendation?

an Azure AD enterprise application
Azure Information Protection
an Azure AD Domain Services (Azure AD DS) instance
an Azure Front Door instance

A

The correct answer is C. Azure AD Domain Services (Azure AD DS) instance.

Here’s why:

Understanding the Requirement: The core requirement is to control access to Azure File Shares based on user identities and group memberships defined in Azure AD. Azure File Shares, on their own, don’t natively understand Azure AD identities for access control.

How Azure AD DS Helps:

Extends Azure AD: Azure AD DS provides a managed domain controller service in Azure. It essentially creates a traditional Windows Server Active Directory domain synced with your Azure AD tenant.

Enables Kerberos Authentication: File Shares need a way to authenticate users who want to access them. Azure AD DS enables Kerberos authentication, which is the protocol used by Windows Server-based file servers. With Kerberos authentication enabled, you can assign specific NTFS permissions to individual users and groups on your Azure File Shares which directly translates into allowing or disallowing access.

Seamless Integration: After setting up Azure AD DS, the file shares can be joined to the domain, enabling users to authenticate using their Azure AD credentials seamlessly.

Access Control: This integration provides the capability to define granular NTFS style access control lists (ACLs) for file shares, allowing you to give users/groups specific permissions to the shares and folders.

Why other options are not the best fit:

A. Azure AD enterprise application: Azure AD enterprise applications are primarily used to manage authentication and authorization for cloud-based applications (SaaS). They don’t directly provide the means to manage access to files on Azure file shares in the way described in the scenario.

B. Azure Information Protection: Azure Information Protection (now part of Microsoft Purview Information Protection) classifies and labels sensitive data. It doesn't directly control access to Azure file shares based on users and their group memberships.

D. Azure Front Door instance: Azure Front Door is a global, scalable entry-point for web applications and services. It is not relevant to accessing files on Azure File Shares.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
15
Q

DRAG DROP

You are planning an Azure solution that will host production databases for a high-performance application. The solution will include the following components:

✑ Two virtual machines that will run Microsoft SQL Server 2016, will be deployed to different data centers in the same Azure region, and will be part of an Always On availability group.

✑ SQL Server data that will be backed up by using the Automated Backup feature of the SQL Server IaaS Agent Extension (SQLIaaSExtension)

You identify the storage priorities for various data types as shown in the following table.

| Data type | Storage priority |
|---|---|
| Operating system | Speed and availability |
| Databases and logs | Speed and availability |
| Backups | Lowest cost |

Which storage type should you recommend for each data type? To answer, drag the appropriate storage types to the correct data types. Each storage type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Storage Types
A geo-redundant storage (GRS) account
A locally-redundant storage (LRS) account
A premium managed disk
A standard managed disk

Answer Area
Operating system:
Databases and logs:
Backups:

A

Understanding the Requirements

High-Performance Application: The application demands high speed and availability.

SQL Server Always On: Data is critical and must be resilient and highly available.

Automated Backups: Backups are important but not as critical as the operational data.

Storage Priorities:

Operating System: Speed and availability.

Databases and Logs: Speed and availability.

Backups: Lowest cost.

Analyzing the Storage Options

A geo-redundant storage (GRS) account:

Pros:

Provides data replication across a secondary region.

Best for disaster recovery and high availability.

Cons:

Highest cost among the storage options.

Higher latency than locally redundant storage (LRS) or premium storage.

Use Case: Best for backups when recovery from a regional outage is critical, or when backups need to be available from a different location.

A locally-redundant storage (LRS) account:

Pros:

Lowest cost storage.

Cons:

Data redundancy is limited to within the same data center.

Use Case: Suitable for backups where availability is less of a concern and lowest cost is the primary priority.

A premium managed disk:

Pros:

Highest performance with SSD storage.

Designed for high IOPS and low latency.

Cons:

Highest cost.

Use Case: Ideal for operating system disks, databases, and logs for high-performance applications.

A standard managed disk:

Pros:

Lower cost than premium disks.

Cons:

Uses HDD storage, offering less performance than SSD storage.

Use Case: Suitable for less performance-sensitive workloads and backups, where cost is an important factor.

Matching Storage to Data Types

Here’s how we should match the storage types:

Operating system:

Premium managed disk is the correct option. The operating system requires high-speed disk access for good virtual machine performance.

Databases and logs:

Premium managed disk is the correct option. Databases and logs require very low-latency and high IOPS. Premium disks are the only disks that provide these performance requirements.

Backups:

A locally-redundant storage (LRS) account is the best option. The automated backup configuration for SQL Server (SQLIaaSExtension) should use LRS storage for backups by default due to the cost benefits.

Answer Area

| Data type | Storage type |
|---|---|
| Operating system | A premium managed disk |
| Databases and logs | A premium managed disk |
| Backups | A locally-redundant storage (LRS) account |

Important Notes for the AZ-304 Exam

Managed Disks vs Unmanaged Disks: Know the difference between them and be aware that managed disks are the default option and almost always recommended.

Premium SSD vs Standard HDD: Understand the use cases of Premium disks for high IOPS/low-latency and Standard for cost sensitive workloads.

Storage Redundancy Options: Understand the difference between LRS, GRS, ZRS, and how to choose the best options for availability and durability requirements.

SQL Server on Azure VMs: Know best practices for SQL Server VM deployments including storage and backup configuration.

Performance Needs: Recognize which workloads need performance (like databases, operating systems) and which can tolerate lower performance and be cost-optimized (backups)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

You have 200 resource groups across 20 Azure subscriptions.

Your company's security policy states that the security administrator must verify all assignments of the Owner role for the subscriptions and resource groups once a month. All assignments that are not approved by the security administrator must be removed automatically. The security administrator must be prompted every month to perform the verification.

What should you use to implement the security policy?

Access reviews in identity Governance
role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Identity Secure Score in Azure Security Center
the user risk policy Azure Active Directory (Azure AD) Identity Protection

A

Understanding the Requirements

Scope: 20 Azure subscriptions and 200 resource groups.

Policy: Monthly verification of Owner role assignments.

Verification: A security administrator must approve or remove role assignments.

Automation: Unapproved assignments should be automatically removed.

Monthly Reminders: Security administrator must be prompted each month for verification.

Analyzing the Options

Access reviews in Identity Governance:

Pros:

Role Assignment Review: Specifically designed for reviewing and managing role assignments, including the Owner role.

Scheduled Reviews: Can be configured to run monthly.

Automatic Removal: Supports automatic removal of assignments not approved by the reviewer.

Reviewer Reminders: Notifies designated reviewers (security administrator) when reviews are due.

Scope: Can be used for both subscriptions and resource groups.

Cons:

Requires correct configuration of the governance policy and assignments to ensure the policy is enforced.

Verdict: This is the correct option as it directly meets all the requirements.

Role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM):

Pros:

Allows for just-in-time (JIT) role elevation.

Cons:

Does not directly facilitate regular reviews of role assignments.

PIM is generally used for just-in-time, temporary access elevation, not for the recurring review and automatic removal of assignments required here.

Verdict: Not suitable. Does not fulfil the requirement for monthly verification of role assignments.

Identity Secure Score in Azure Security Center:

Pros:

Provides a security score based on configurations and recommendations.

Cons:

Does not manage, monitor, or remove role assignments.

Only provides a score of the security posture but does not take actions to remove permissions.

Verdict: Not suitable. It is only used to monitor your posture.

The user risk policy in Azure Active Directory (Azure AD) Identity Protection:

Pros:

Detects and manages user risk based on suspicious activities.

Cons:

Does not manage role assignments, it is only used for user based risks and not for permissions.

Not relevant for the requirements for scheduled reviews of role assignments.

Verdict: Not suitable. Not used for role assignment reviews.

Recommendation

The best solution is:

Access reviews in Identity Governance

Explanation

Designed for Role Assignment Reviews: Access reviews are specifically built for reviewing and managing user access to resources.

Scheduled Monthly Reviews: You can configure the access reviews to occur every month.

Automatic Remediation: Unapproved role assignments can be automatically removed, which fulfills the security policy requirement.

Notifications: The security administrator will be notified when the monthly review is due and will be required to take action, or the review will complete automatically.

Comprehensive Scope: Access reviews can be configured at the subscription and resource group levels.

Important Notes for the AZ-304 Exam

Identity Governance: Know that Identity Governance provides access reviews and other features for managing user access.

Access Reviews: Understand how to use access reviews for recurring role assignment validation.

Privileged Identity Management (PIM): Know when to use PIM for JIT role activation and when it is not suitable, such as in this scenario.

Azure Security Center: Understand that it gives you a security posture but not a way to resolve assignment review issues, it only recommends remediation steps.

Azure AD Identity Protection: Understand its purpose in monitoring and dealing with user risk.

Role Assignments: Know that RBAC is used to control roles and that they can be assigned at multiple scopes in Azure.

Automation: Be aware of how Azure Governance tools can help automate security tasks, such as removing assignments and sending out alerts.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
17
Q

Your company purchases an app named App1.

You need to recommend a solution to ensure that App1 can read and modify access reviews.

What should you recommend?

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.

A

Understanding the Requirements

App1 Functionality: Needs to read and modify access reviews.

Azure Environment: Using Azure Active Directory (Azure AD).

Authorization: Must be authorized to perform these actions.

Analyzing the Options

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.

Pros:

Application Registration: The correct way to enable an application to be able to access protected resources in Azure AD.

Microsoft Graph API: The Microsoft Graph API is the correct API to access Azure AD, including access reviews.

Delegated Permissions: Permissions to access Microsoft Graph APIs must be delegated to applications, and this can be done using Azure AD application registrations.

Cons:

None. This is the correct approach.

Verdict: This is the correct solution.

From the Azure Active Directory admin center, register App1. from the Access control (IAM) blade, delegate permissions.

Pros:

Application Registration: Required to allow your app to integrate with Azure.

Cons:

Access Control (IAM): IAM is used for resource-level access control and not for delegating permissions for application access to Azure AD or Graph API resources.

Delegation of permissions to specific APIs, such as the Microsoft Graph API, is not performed using the IAM blade.

Verdict: This is incorrect. IAM is not used to delegate permissions to the Microsoft Graph API.

From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.

Pros:

API Management is useful when you want to expose your app as a third-party API.

Cons:

API Management: Not required for App1 to interact with the Graph API; API Management plays no role in granting Graph API permissions.

Does not support direct delegation of application permissions.

Verdict: This is incorrect. API Management is not the correct service for this task.

From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.

Pros:

API Management is useful when you want to expose your app as a third-party API.

Cons:

API Management: Not required for App1 to interact with the Graph API.

IAM: IAM is not used to delegate access to the Graph API.

Verdict: This is incorrect. API Management is not the correct service, and IAM is not the correct way to configure permissions for the Graph API.

Recommendation

The correct recommendation is:

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.

Explanation

Application Registration: Registering App1 in Azure AD creates an application object which represents your application and is used to identify your application within the directory.

Microsoft Graph API: The Microsoft Graph API is the unified endpoint for accessing Microsoft 365, Azure AD and other Microsoft cloud resources. Access reviews are also exposed through this API.

Delegated Permissions: You must delegate permissions to allow App1 to access the Graph API. By providing delegated permissions through the application registration, you allow the app to access resources on behalf of the logged in user. In the case of app-only access, this can be configured by granting application permissions rather than delegated permissions.

Authorization: After App1 is registered with delegated permissions it is allowed to perform actions on the Graph API such as accessing access reviews.
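As a sketch of the app-only variant, once App1 is registered and granted the AccessReview.Read.All (or AccessReview.ReadWrite.All) application permission with admin consent, it could call Microsoft Graph as shown below. The tenant ID, client ID, and secret are hypothetical placeholders; it assumes the Python msal and requests packages.

import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app1-client-id>"
CLIENT_SECRET = "<app1-client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# App-only token for Microsoft Graph (uses the permissions granted to App1).
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# List the access review schedule definitions App1 can read.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()
for definition in resp.json().get("value", []):
    print(definition["displayName"])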

Important Notes for the AZ-304 Exam

Application Registration: Know how to register applications in Azure AD and why it is a required step to allow apps to access resources.

Microsoft Graph API: Understand that the Graph API is the primary way to access Microsoft 365 and Azure AD resources, including access reviews.

Delegated Permissions vs. Application Permissions: Be able to differentiate between these two types of permissions. Delegated permissions require an authenticated user. Application permissions are app-only and do not need a logged in user.

Access Control (IAM): Know that IAM is for resource level access and not for granting permission for applications.

API Management: Understand its purpose in publishing and securing APIs, but note that it is not necessary in this use case.

Security Principles: Understand the best practices for securing access to resources such as ensuring that the app is registered and given correct permissions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
18
Q

HOTSPOT

Your company deploys several Linux and Windows virtual machines (VMs) to Azure. The VMs are deployed with the Microsoft Dependency Agent and the Log Analytics Agent installed by using Azure VM extensions. On-premises connectivity has been enabled by using Azure ExpressRoute.

You need to design a solution to monitor the VMs.

Which Azure monitoring services should you use? To answer, select the appropriate Azure monitoring services in the answer area. NOTE: Each correct selection is worth one point.

Scenario | Azure Monitoring Service

Analyze Network Security Group (NSG) flow logs for VMs
attempting Internet access:

Azure Traffic Analytics
Azure ExpressRoute Monitor
Azure Service Endpoint Monitor
Azure DNS Analytics

Visualize the VMs with their different processes and
dependencies on other computers and external processes:
Azure Service Map
Azure Activity Log
Azure Service Health
Azure Advisor

A

Understanding the Requirements

Monitoring Scope: Linux and Windows VMs in Azure.

Connectivity: On-premises connectivity via Azure ExpressRoute.

Microsoft Dependency Agent and Log Analytics Agent: Already deployed to VMs via extensions.

Monitoring Scenarios:

Analyzing NSG flow logs for VMs attempting Internet access.

Visualizing VMs with processes and dependencies.

Analyzing the Options

Azure Traffic Analytics:

Pros:

Analyzes NSG flow logs to identify traffic patterns and security risks.

Can detect VMs attempting Internet access by inspecting the flow logs.

Provides visualisations of traffic patterns for easy interpretation.

Cons:

Does not provide dependencies of VMs or processes.

Verdict: The correct service for the first scenario.

Azure ExpressRoute Monitor:

Pros:

Monitors the health and performance of ExpressRoute circuits.

Cons:

Does not analyse the flow logs or provide visibility of VM processes and dependencies.

Verdict: Not suitable for the described requirements.

Azure Service Endpoint Monitor:

Pros:

Monitors endpoints in Azure and provides status for services.

Cons:

Does not monitor the flow logs or provide visibility of VM processes and dependencies.

Verdict: Not suitable for the described requirements.

Azure DNS Analytics:

Pros:

Provides insights into DNS performance and traffic.

Cons:

Does not monitor the flow logs or provide visibility of VM processes and dependencies.

Verdict: Not suitable for the described requirements.

Azure Service Map:

Pros:

Automatically discovers application components on Windows and Linux systems.

Visualizes VMs, processes, and dependencies.

Requires the Microsoft Dependency Agent which has already been installed.

Cons:

Not used to monitor NSG flow logs.

Verdict: Correct choice for the second scenario.

Azure Activity Log:

Pros:

Provides audit logs and tracks events at the subscription and resource level.

Cons:

Does not monitor NSG flow logs or provide process/dependency visualization.

Verdict: Not suitable. It is more related to platform events.

Azure Service Health:

Pros:

Provides insights into the health of Azure services.

Cons:

Does not monitor NSG flow logs or provide process/dependency visualization for individual VMs.

Verdict: Not suitable for the described requirements.

Azure Advisor:

Pros:

Provides recommendations on cost, performance, reliability, and security.

Cons:

Does not monitor the flow logs or provide visibility of VM processes and dependencies.

Verdict: Not suitable for the described requirements.

Answer Area

| Scenario | Azure Monitoring Service |
|---|---|
| Analyze Network Security Group (NSG) flow logs for VMs attempting Internet access | Azure Traffic Analytics |
| Visualize the VMs with their different processes and dependencies on other computers and external processes | Azure Service Map |

Important Notes for the AZ-304 Exam

Traffic Analytics: Understand how to use Traffic Analytics to analyze NSG flow logs for security and network traffic monitoring.

Service Map: Know that service map can be used to map services and their dependencies.

Microsoft Dependency Agent: Know that Service Map requires this dependency agent to be deployed on the VMs.

Log Analytics Agent: Be aware that these agents collect logs and forward them to a log analytics workspace and is a pre-requisite for some of these solutions.

Azure Monitor: Know the purpose of all Azure Monitoring services in the overall Azure monitoring landscape.

Application Monitoring vs. Infrastructure Monitoring: Understand that there are a number of monitoring solutions in Azure that target different services. For this question you will need to identify the solution that facilitates monitoring the infrastructure.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

You store web access logs data in Azure Blob storage.

You plan to generate monthly reports from the access logs.

You need to recommend an automated process to upload the data to Azure SQL Database every month.

What should you include in the recommendation?

Azure Data Factory
Data Migration Assistant
Microsoft SQL Server Migration Assistant (SSMA)
AzCopy

A

Understanding the Requirements

Source: Web access logs in Azure Blob storage.

Destination: Azure SQL Database.

Frequency: Monthly.

Automation: The process needs to be automated.

Transformation: No complex transformations are specified, so the service doesn’t need to be a powerful ETL tool.

Analyzing the Options

Azure Data Factory (ADF):

Pros:

Automated Data Movement: Designed to move data between different sources and sinks.

Scheduling: Supports scheduling pipelines for recurring execution (monthly).

Integration: Has built-in connectors for Blob storage and SQL Database.

Scalable: Can handle various data volumes and complexities.

Transformation: Supports data transformation if needed.

Cons:

Slightly more complex to configure than other options, however a simple ADF pipeline is quite easy to configure.

Verdict: This is the best fit. It can orchestrate the entire process from data extraction to data loading, and scheduling.

Data Migration Assistant (DMA):

Pros:

Helps with migrating databases to Azure, including schema and data migration.

Cons:

Not designed for continuous, scheduled data movement.

More of an interactive tool rather than an automated service.

Not suited to ingest logs into an existing database.

Verdict: Not suitable for recurring data uploads. It is more suited for migrations.

Microsoft SQL Server Migration Assistant (SSMA):

Pros:

Helps with migrating databases from on-premises to Azure SQL Database.

Cons:

Not designed for recurring data uploads from Blob Storage.

Primarily used for database migrations not for data ingestion.

Verdict: Not a valid option. This is used for migrations and not for scheduled data uploads.

AzCopy:

Pros:

Command-line tool to copy data to and from Azure Storage.

Cons:

Not a managed service; it does not handle scheduling itself and must be scheduled externally using OS tools (for example, cron or Task Scheduler).

Does not support loading data directly into a database, so you would need to build a custom solution for the load step.

Does not support any data transformation logic.

Verdict: Not the best option. Requires building a custom solution and does not directly fulfil the requirement to load data into a database.

Recommendation

The correct recommendation is:

Azure Data Factory

Explanation

Automation and Scheduling: Azure Data Factory allows you to create pipelines that can be scheduled to run monthly.

Built-in Connectors: It has connectors for both Azure Blob Storage (to read the logs) and Azure SQL Database (to load data).

Data Integration: It integrates all steps of data extraction, transformation (optional), and loading into a single pipeline.

Monitoring: It provides monitoring and logging for debugging and audit purposes.

Scalability: It can handle a large amount of data if required, and can scale up resources as needed.
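The Copy activity inside an ADF pipeline performs the step below for you; this hand-rolled sketch is only meant to show what the monthly load amounts to (read last month's blobs, insert rows into Azure SQL Database). The connection strings, container, blob prefix, and table are hypothetical placeholders; it assumes the azure-storage-blob and pyodbc packages.

import pyodbc
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<storage-connection-string>", container_name="weblogs"
)
sql = pyodbc.connect("<azure-sql-odbc-connection-string>")
cursor = sql.cursor()

# "2024/06/" is a placeholder prefix for the previous month's log blobs.
for blob in container.list_blobs(name_starts_with="2024/06/"):
    lines = container.download_blob(blob.name).readall().decode().splitlines()
    cursor.executemany(
        "INSERT INTO dbo.AccessLog (RawLine) VALUES (?)",
        [(line,) for line in lines],
    )

sql.commit()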

Important Notes for the AZ-304 Exam

Azure Data Factory (ADF): Understand its capabilities as an ETL and data orchestration tool.

Automated Data Movement: Know how to set up ADF pipelines for recurring data movement.

Data Integration Tools: Familiarize yourself with the available connectors for different data sources and destinations.

Data Migration vs. Data Ingestion: Understand the difference between tools that are used for migration (e.g. DMA, SSMA) and tools for scheduled data uploads (e.g. ADF).

AzCopy: Know the purpose of AzCopy, and its use cases.

Transformation: Understand that transformation is often a requirement and that you can use data factory for this if needed.

Ease of Use: Although ADF is not the simplest tool, it is the easiest to maintain for scheduled recurring events when compared to a custom solution.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

You are designing a data protection strategy for Azure virtual machines. All the virtual machines use managed disks.

You need to recommend a solution that meets the following requirements:

  • The use of encryption keys is audited.
  • All the data is encrypted at rest always.
  • You manage the encryption keys, not Microsoft.

What should you include in the recommendation?

Azure Disk Encryption
Azure Storage Service Encryption
BitLocker Drive Encryption (BitLocker)
client-side encryption

A

Understanding the Requirements

Managed Disks: The virtual machines use Azure managed disks.

Encryption at Rest: All data must be encrypted when stored on disk.

Customer-Managed Keys: You must manage the encryption keys, not Microsoft.

Auditing: The use of encryption keys must be auditable.

Analyzing the Options

Azure Disk Encryption (ADE):

Pros:

Encrypts managed disks for both Windows and Linux VMs.

Supports customer-managed keys (CMK) with Azure Key Vault.

Data is encrypted at rest, meeting the security requirement.

Cons:

Does not support auditing of key usage.

Verdict: Does not fully satisfy the requirements due to lack of key usage auditing.

Azure Storage Service Encryption (SSE):

Pros:

Encrypts data at rest in Azure storage (including managed disks) by default.

Supports Microsoft-managed keys or customer-managed keys.

Cons:

Provides basic encryption for data at rest, but does not encrypt the OS disks of VMs.

Does not support the auditing of key usage.

Verdict: Does not provide full coverage of encryption for managed disks, and does not support auditing, therefore not a suitable choice.

BitLocker Drive Encryption (BitLocker):

Pros:

Encrypts drives in Windows operating systems.

Cons:

Would require manual setup and management for every VM.

Does not support auditing of key usage.

Does not support customer managed keys out of the box.

Verdict: Not the correct option. Too much manual overhead, lacks key auditing, and can be complex to manage.

Client-Side Encryption:

Pros:

The data is encrypted before it is sent to Azure.

The encryption key is managed by the client.

Cons:

This method requires custom implementations and additional effort from the client.

Does not support management or auditing of the keys in Azure.

Verdict: Not suitable. Requires custom implementations, and is not a managed solution.

Recommendation

The recommendation should be Azure Disk Encryption with customer-managed keys in Azure Key Vault, as this is the closest match to the requirements; however, additional steps are required to implement the auditing requirement.

Explanation

Azure Disk Encryption (ADE): ADE provides encryption for both OS and data disks, using platform-managed keys or customer-managed keys.

Customer-Managed Keys (CMK): By using CMK with Azure Key Vault, you maintain full control over your encryption keys, which satisfies that requirement.

Azure Key Vault Auditing: Azure Key Vault logs every access to secrets and keys, and these logs can be monitored through Azure Log Analytics.

Encryption at Rest: The data at rest on the managed disks is always encrypted using the configured CMK keys.

Full coverage: This method fully encrypts all disks for the VM.

Steps to implement auditing:

Create an Azure Key Vault

Create a customer managed key in Azure Key Vault.

Configure ADE for the VM to use the customer managed key.

Configure Diagnostic settings on Azure Key Vault to send all logs to Azure Log Analytics.

Configure alerts on Key Vault events in Azure Log Analytics to ensure that you are notified when keys are used or modified.
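As a sketch of step 2 (creating a customer-managed key in Key Vault), assuming the Python azure-identity and azure-keyvault-keys packages and a hypothetical vault name; the diagnostic settings and alerts (steps 4 and 5) are configured separately on the vault.

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://contoso-ade-kv.vault.azure.net",  # placeholder vault
    credential=DefaultAzureCredential(),
)

# RSA key held in Key Vault as the customer-managed key referenced by ADE.
key = client.create_rsa_key("ade-kek", size=2048)
print(key.id)  # key identifier used when enabling encryption on the VM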

Important Notes for the AZ-304 Exam

Azure Disk Encryption (ADE): Know the options for ADE (platform-managed vs. customer-managed keys) and their implications.

Azure Key Vault: Understand its purpose for storing and managing secrets, keys, and certificates.

Encryption at Rest: Be aware of the different ways to achieve encryption at rest in Azure storage and databases.

Customer-Managed Keys: Know the benefits and implications of using customer-managed keys (CMK) for encryption.

Auditing: Be aware that auditing is a critical aspect of encryption and compliance.

Managed Disks: Understand that managed disks are now the default type in Azure and that encryption applies to them.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

Your company has the divisions shown in the following table.

| Division | Azure subscription | Azure Active Directory (Azure AD) tenant |
|---|---|---|
| East | Sub1 | East.contoso.com |
| West | Sub2 | West.contoso.com |

Sub1 contains an Azure web app that runs an ASP.NET application named App1. App1 uses the Microsoft identity platform (v2.0) to handle user authentication. Users from east.contoso.com can authenticate to App1.

You need to recommend a solution to allow users from west.contoso.com to authenticate to App1.

What should you recommend for the west.contoso.com Azure AD tenant?

guest accounts
an app registration
pass-through authentication
a conditional access policy

A

Understanding the Requirements

App1: An ASP.NET application using the Microsoft identity platform (v2.0) for authentication.

Current Authentication: east.contoso.com users can already authenticate to App1.

New Authentication: Users from west.contoso.com must also be able to authenticate to App1.

Authentication: Using Microsoft Identity platform and not on-premises authentication.

Azure AD Tenants: The different divisions have different Azure AD tenants.

Analyzing the Options

Guest accounts:

Pros:

Cross-Tenant Access: Allows users from one Azure AD tenant to access resources in another Azure AD tenant.

Easy to Setup: Relatively easy to create and manage.

Azure AD Integration: Fully compatible with Azure AD and Microsoft identity platform (v2.0).

App Access: This allows users from west.contoso.com to be added to the east.contoso.com tenant as guests and granted access to the app.

Cons:

Requires users to be invited.

Verdict: This is the correct solution.

An app registration:

Pros:

Required for all applications that need to authenticate with Azure AD.

Cons:

The app registration is already done, and an additional app registration is not required.

Verdict: Not required. An app registration is already in place.

Pass-through authentication:

Pros:

Allows users to use their on-premises password to sign in to Azure AD.

Cons:

Not suitable in this scenario as it is designed to use local passwords and is not relevant for cloud identity authentication.

Not designed for this use case, which is authentication across different Azure AD tenants.

Verdict: Not a good solution. It is not applicable to cloud authentication and is designed for on-prem identity.

A conditional access policy:

Pros:

Used to enforce access control based on various conditions.

Cons:

Does not enable the required functionality to allow a new tenant access to an existing application.

Used to control which users can access a particular resource, but the user must be configured to authenticate first.

Verdict: Not the correct choice. Conditional access can be added later to restrict which users can access the app, but it will not provide the access needed for the app to work for the new tenant.

Recommendation

The correct recommendation is:

Guest accounts

Explanation

Azure AD Guest Accounts: Guest accounts in Azure AD allow you to invite external users into your Azure AD tenant. These users can then access the applications that are hosted on that tenant.

Cross-Tenant Access: Guest accounts enable cross-tenant collaboration, which is exactly what is needed in this scenario.

Microsoft Identity Platform Compatibility: Guest accounts fully integrate with the Microsoft identity platform (v2.0), making them compatible with the authentication mechanisms used by App1.

Access to the App: After a user is added as a guest in the east.contoso.com tenant, they are able to authenticate to the app using their existing credentials from the west.contoso.com tenant.
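A minimal sketch of how such a guest invitation could be sent through the Microsoft Graph invitations API, assuming the Python requests package and a Graph token for the east.contoso.com tenant with the User.Invite.All permission; the guest address, redirect URL, and token are hypothetical placeholders.

import requests

GRAPH_TOKEN = "<token-for-the-east.contoso.com-tenant>"

invitation = {
    "invitedUserEmailAddress": "jdoe@west.contoso.com",     # placeholder guest
    "inviteRedirectUrl": "https://app1.azurewebsites.net",  # placeholder app URL
    "sendInvitationMessage": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=invitation,
)
resp.raise_for_status()
print(resp.json()["status"])  # e.g. PendingAcceptance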

Important Notes for the AZ-304 Exam

Azure AD Guest Accounts: Understand the purpose of Azure AD guest accounts for cross-tenant collaboration.

Cross-Tenant Access: Know when and how to configure cross-tenant access with Azure AD.

Microsoft Identity Platform (v2.0): Understand that this platform is used for authentication of modern web and mobile applications.

Application Registrations: Know that an app registration is required to allow applications to access resources from Azure AD.

Pass-through Authentication: Understand that this is used to authenticate on-prem identities, not cloud identities.

Conditional Access: Know that this can control access, but cannot provide access on its own.

Authentication: Have a good understanding of authentication in Azure and how to configure it to work across multiple tenants.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

HOTSPOT

You are designing a solution for a stateless front-end application named Application1.

Application1 will be hosted on two Azure virtual machines named VM1 and VM2.

You plan to load balance connections to VM1 and VM2 from the Internet by using one Azure load balancer.

You need to recommend the minimum number of required public IP addresses.

How many public IP addresses should you recommend using for each resource? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Load balancer:
0
1
2
3
VM1:
0
1
2
3
VM2:
0
1
2
3

A

Understanding the Requirements

Application1: Stateless front-end application.

Hosting: On two Azure VMs (VM1 and VM2).

Load Balancing: Incoming traffic from the Internet must be load balanced across the two VMs.

Public IP Addresses: The goal is to determine the minimum number of public IP addresses required.

Analyzing the Setup

Load Balancer: An Azure load balancer, which provides the entry point for internet traffic and distributes it between the virtual machines.

Virtual Machines: VM1 and VM2 host the application. In this scenario, we want to know how many public IP addresses are required for each VM.

Public IP Addresses Needed

Load Balancer:

A load balancer needs a public IP address to be accessible from the internet. This IP address will be the entry point that the outside world connects to, and the load balancer will handle directing traffic to the back end VMs.

You would typically use one single IP address for this type of scenario.

Therefore the correct answer is 1

Virtual Machines (VM1 and VM2):

The application is being load balanced. It is therefore not required to have the virtual machines individually exposed to the public internet.

The Load Balancer will direct traffic to the virtual machines using a private IP address.

It is therefore not required for these to have public IP addresses.

Therefore the correct answer is 0

Answer Area

| Resource | Public IP addresses |
|---|---|
| Load balancer | 1 |
| VM1 | 0 |
| VM2 | 0 |

Explanation

Load Balancer:

The load balancer needs a single public IP address for internet access. This is the public entry point for all inbound connections. The Load Balancer is responsible for directing the traffic to the VMs in a balanced way.

Virtual Machines (VM1 and VM2):

Since the traffic is going to the VMs via the load balancer they do not require public IP addresses.

The load balancer will connect to the virtual machines using their private IP address, which are on the same network as the Load Balancer.

This allows the virtual machines to be protected from direct internet access, as the public facing IP is managed by the Load Balancer.

Important Notes for the AZ-304 Exam

Azure Load Balancer: Understand the role of load balancers in distributing traffic across VMs.

Public IP Addresses: Know when public IP addresses are required and when they are not.

Private IP Addresses: Understand that communication can happen within a virtual network using private IP addresses.

Stateless Applications: Recognize the purpose of stateless applications, and how load balancers are used.

Load Balancer Configuration: Know how load balancers work and how back end pools are configured to handle the traffic.

Security: Remember that it is a best practice not to expose VMs directly to the internet; a load balancer with a public IP should be used instead.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

You need to deploy resources to host a stateless web app in an Azure subscription.

The solution must meet the following requirements:

  • Provide access to the full .NET framework.
  • Provide redundancy if an Azure region fails.
  • Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy a web app in an Isolated App Service plan.

Does this meet the goal?

Yes
No

A

Understanding the Requirements

Stateless Web App: The application is stateless.

Full .NET Framework: The application requires access to the full .NET Framework.

Regional Redundancy: The application must continue to function if an Azure region fails.

OS Access: Administrators need access to the operating system to install custom dependencies.

Analyzing the Proposed Solution: Isolated App Service Plan

Isolated App Service Plan: This plan provides the highest level of isolation and resources for a web app.

Now, let’s evaluate how the solution meets each requirement:

Provide access to the full .NET framework.

Analysis: An isolated app service plan allows you to select the operating system (Windows) and provides the full .NET framework, therefore meeting the requirement.

Verdict: Meets Requirement

Provide redundancy if an Azure region fails.

Analysis: Isolated App Service plans do not provide automatic multi-region redundancy. You would need to deploy the web app and app service plan to multiple regions, and manually configure traffic redirection using a tool like Azure Traffic Manager or Front Door.

Verdict: Does NOT meet requirement

Grant administrators access to the operating system to install custom application dependencies.

Analysis: App Service, including Isolated plans, do not grant administrators access to the underlying operating system. You are restricted to installing dependencies within the supported context of the web app.

Verdict: Does NOT meet requirement

Conclusion

The Isolated App Service plan meets one of the three requirements. Therefore, the answer is No.

Reasoning:

While an Isolated App Service plan offers a great amount of resource allocation and isolation, it does not give access to the underlying operating system to administrators, or provide automatic redundancy in the event of an outage. These limitations make the solution unsuitable for the requirements.

Correct Answer

No

Important Notes for the AZ-304 Exam

Azure App Service: Understand the different App Service plans (Free, Shared, Basic, Standard, Premium, and Isolated) and their features.

.NET Framework: Be aware of the support for the full .NET Framework in App Service plans and the limitations.

Regional Redundancy: Know how to achieve regional redundancy using traffic managers and other services.

OS Access: Remember that App Service generally does not provide access to the underlying OS.

Use Cases: Know when to select Azure VMs over App Services, particularly when you need control of the underlying operating system.

Service Selection: Know how to select the correct Azure service that fits all the requirements.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

Your network contains an on-premises Active Directory domain.

The domain contains the Hyper-V clusters shown in the following table.

| Name | Number of nodes | Number of virtual machines running on cluster |
|---|---|---|
| Cluster1 | 4 | 20 |
| Cluster2 | 3 | 15 |

You plan to implement Azure Site Recovery to protect six virtual machines running on Cluster1 and three virtual machines running on Cluster2. Virtual machines are running on all Cluster1 and Cluster2 nodes.

You need to identify the minimum number of Azure Site Recovery Providers that must be installed on premises.

How many Providers should you identify?

1
7
9
16

A

Understanding the Requirements

On-Premises Environment: An on-premises Active Directory domain with two Hyper-V clusters.

Azure Site Recovery: Used to protect virtual machines.

Protected VMs: Six VMs from Cluster1 and three VMs from Cluster2.

Goal: Determine the minimum number of ASR Providers needed.

Understanding Azure Site Recovery Providers

Purpose: The Azure Site Recovery Provider is a component installed on each Hyper-V host that communicates with Azure Site Recovery to facilitate replication and failover of virtual machines.

Placement: The Provider is installed on each Hyper-V host that is part of a cluster that contains virtual machines to be protected.

Minimum Requirement: You need the Provider installed on every node of a cluster that hosts virtual machines to be protected.

Analyzing the Scenario

Cluster1: Has 4 nodes. Six virtual machines are to be protected.

Cluster2: Has 3 nodes. Three virtual machines are to be protected.

Calculating the Required Providers

Cluster1: Although only six virtual machines from Cluster1 are being protected, VMs run on all of the cluster's nodes, so every node that can host a protected VM needs the ASR Provider installed.

Since there are four nodes in the cluster, a minimum of four providers is required for the virtual machines in Cluster1.

Cluster2: Only three virtual machines in Cluster2 need to be protected, but because VMs run on all of the cluster's nodes, each node that can host a protected VM requires the ASR Provider.

Since there are three nodes in the cluster, a minimum of three providers is required for the virtual machines in Cluster2.

Total Providers: The total minimum number of ASR Providers is therefore 4+3 = 7

Note that even if only 1 vm was protected on each cluster, the total number of providers would be 4 + 3 = 7.

Correct Answer

7

Important Notes for the AZ-304 Exam

Azure Site Recovery (ASR): Understand the purpose and function of ASR for disaster recovery.

ASR Provider: Know that the ASR Provider needs to be installed on every Hyper-V host in order to protect its virtual machines.

Hyper-V Clusters: Understand how to use Azure Site Recovery with Hyper-V clusters.

Agent Requirements: You need to know what components are required to be deployed on the virtual machines as well as on the hyper-v hosts.

Deployment Requirements: You should know the pre-requisites for deploying a DR strategy in Azure, and be aware of any limitations.

Minimum Requirements: ASR needs a minimum of one Provider per Hyper-V host that contains VMs to be protected by ASR.

ASR Components: Be aware of the different components required for an ASR setup.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

HOTSPOT

You are designing a cost-optimized solution that uses Azure Batch to run two types of jobs on Linux nodes. The first job type will consist of short-running tasks for a development environment. The second job type will consist of long-running Message Passing Interface (MPI) applications for a production environment that requires timely job completion.

You need to recommend the pool type and node type for each job type. The solution must minimize compute charges and leverage Azure Hybrid Benefit whenever possible.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

First job:
Batch service and dedicated virtual machines
User subscription and dedicated virtual machines
User subscription and low-priority virtual machines
Second job:
Batch service and dedicated virtual machines
User subscription and dedicated virtual machines
User subscription and low-priority virtual machines

A

First Job (Short-running, Development):

Pool Type: User subscription

Node Type: Low-priority virtual machines

Why?

User subscription: In this pool allocation mode, the pool's virtual machines are created directly in your own Azure subscription rather than in Batch-managed subscriptions, so the development environment can draw on the subscription's existing quotas and pricing agreements.

Low-priority virtual machines: Low-priority VMs offer significant cost savings (up to 80% compared to dedicated VMs). They are ideal for workloads that are not time-sensitive and can be interrupted, which is typical of development tasks. If Azure needs to reclaim the capacity, it will preempt these VMs. However, for short-running tasks, the risk of preemption is less impactful.

Second Job (Long-running MPI, Production):

Pool Type: Batch service

Node Type: Dedicated virtual machines

Why?

Batch service: For production workloads, especially those involving MPI and requiring timely completion, the Batch service allocation mode is preferred. It offers better control over the pool’s lifecycle and resources, and in some cases, can result in a lower cost due to how the subscription is billed.

Dedicated virtual machines: Long-running MPI applications are sensitive to interruptions. Dedicated VMs ensure that the nodes won’t be preempted, providing the stability needed for reliable and timely job completion.

Azure Hybrid Benefit:

Azure Hybrid Benefit can be applied to dedicated and low-priority VMs to further reduce costs if you have on-premises licenses for Windows Server or SQL Server. Because the question specifies Linux nodes, you would not be able to use Azure Hybrid Benefit in this scenario.

Therefore, the correct answer is:

First Job: User subscription and low-priority virtual machines

Second Job: Batch service and dedicated virtual machines
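
To make the cost argument for the first job concrete, here is a rough Python sketch. The hourly rate, node count, and hours are hypothetical values, and the 80 percent figure is the "up to 80%" savings quoted above rather than a published price.

  # Illustrative cost comparison for the short-running development job.
  dedicated_rate_per_hour = 0.10      # hypothetical price for one Linux node
  low_priority_discount = 0.80        # "up to 80%" savings for low-priority nodes
  nodes, hours = 10, 200              # hypothetical dev-pool size and monthly usage

  dedicated_cost = dedicated_rate_per_hour * nodes * hours
  low_priority_cost = dedicated_cost * (1 - low_priority_discount)
  print(f"dedicated: ${dedicated_cost:.2f}  low-priority: ${low_priority_cost:.2f}")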

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

You have an Azure Active Directory (Azure AD) tenant named Contoso.com. The tenant contains a group named Group1. Group1 contains all the administrator user accounts.

You discover several login attempts to the Azure portal from countries where the administrator users do NOT work.

You need to ensure that all login attempts to the portal from those countries require Azure Multi-Factor Authentication (MFA).

Solution: You implement an access package.

Does this meet the goal?

Yes
No

A

Understanding the Requirements

Azure AD Tenant: Contoso.com

Admin Group: Group1 contains all administrator user accounts.

Problem: Login attempts from unauthorized countries.

Goal: Enforce MFA for all login attempts from these countries for administrator users.

Analyzing the Proposed Solution: Access Package

Access Package: A tool in Azure AD Identity Governance that allows you to manage access to resources (such as applications, groups, or SharePoint sites) by grouping the resources and their associated access policies together.

Let’s see if an access package meets the needs:

Enforce MFA for all login attempts to the portal from those countries.

Analysis: Access packages manage access to resources. It does not provide controls based on the location of the user, or specifically, the sign-in of the user. It cannot be used to enforce MFA based on location.

Verdict: Does NOT meet requirement

Conclusion

The solution does not meet the goal, as an access package does not enforce MFA based on location. Therefore, the answer is No.

Correct Answer

No

Explanation

Access packages are used to manage access to resources. Access policies can be created to control how users are granted access to a particular resource, but they can’t be used to control authentication requirements for all login attempts from different locations.

The Correct Solution

The correct way to implement this scenario is to use a Conditional Access Policy. Conditional access policies are designed to control access to applications and services based on conditions such as:

Location (Countries/Regions)

User or Group (e.g., the administrators in Group1)

Device State

Application

With a Conditional Access Policy, you can specify that any login attempts from certain countries for users in Group1 must use MFA.
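
As a sketch of what such a policy can look like when created programmatically, the following Python snippet builds a Conditional Access policy payload and posts it to Microsoft Graph. The access token, group ID, and named-location ID are placeholders, and the field names follow the Graph conditionalAccessPolicy resource as commonly documented; verify them before relying on this.

  import requests

  # Hypothetical placeholders: a Graph access token, the Group1 object ID, and the ID of a
  # named location that represents the listed countries.
  token = "<graph-access-token>"
  group1_id = "<group1-object-id>"
  listed_countries_location_id = "<named-location-id>"

  policy = {
      "displayName": "Require MFA for admins signing in from listed countries",
      "state": "enabled",
      "conditions": {
          "users": {"includeGroups": [group1_id]},
          "applications": {"includeApplications": ["All"]},
          "locations": {"includeLocations": [listed_countries_location_id]},
      },
      "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
  }

  resp = requests.post(
      "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
      headers={"Authorization": f"Bearer {token}"},
      json=policy,
  )
  print(resp.status_code)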

Important Notes for the AZ-304 Exam

Azure AD Conditional Access: Know the purpose and use of Conditional Access policies.

Access Packages: Understand the use cases of access packages in Azure AD Identity Governance.

MFA Enforcement: Know how to use conditional access to enforce MFA.

User and Group Scope: Know how to use conditions to target policies to specific users or groups.

Location Based Access: Understand how to configure conditional access based on geographical location.

Policy Selection: You should know when to select conditional access vs access policies and the use cases of each.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

HOTSPOT

You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.

App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.

You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

The users can connect to App1 without
being prompted for authentication:
An Azure AD app registration
An Azure AD managed identity
Azure AD Application Proxy

The users can access App1 only from
company-owned computers:
A conditional access policy
An Azure AD administrative unit
Azure Application Gateway
Azure Blueprints
Azure Policy

A

Understanding the Requirements

App1: An Azure web app using Azure AD authentication.

Users: Company users with Windows 10 computers joined to Azure AD.

Seamless Authentication: Users should be able to connect to App1 without any prompts for their credentials.

Company-Owned Devices: Access to App1 should only be allowed from company-owned computers.

Analyzing the Options

An Azure AD app registration:

Pros:

Required for all applications that use Azure AD.

Configures authentication for the application.

Cons:

Does not enable silent sign in or restrict access based on devices.

Verdict: Not sufficient to fulfil either of the requirements.

An Azure AD managed identity:

Pros:

Provides an identity for Azure services for accessing other Azure resources.

Cons:

Not applicable for the user authentication scenario.

Verdict: Not suitable. Not used for user access.

Azure AD Application Proxy:

Pros:

Enables access to internal web applications from the internet.

Cons:

Does not manage user credentials and does not restrict access to company owned machines.

Verdict: Not relevant for this scenario.

A conditional access policy:

Pros:

Can enforce authentication policies based on conditions, such as location, device compliance and other factors.

Can enforce access restrictions to only allow access from compliant or hybrid joined devices (company owned).

Cons:

Requires careful configuration

Verdict: This is the correct answer for the “company owned” devices requirement.

An Azure AD administrative unit:

Pros:

Used to scope management permissions and policies to a subset of users.

Cons:

Does not enable silent authentication and does not restrict access to devices.

Verdict: Not suitable for these requirements.

Azure Application Gateway:

Pros:

Load balances traffic to multiple backends.

Cons:

Does not manage user credentials and does not restrict access to devices.

Verdict: Not relevant for this scenario.

Azure Blueprints:

Pros:

Used to deploy resources using pre-defined templates.

Cons:

Does not manage user credentials and does not restrict access to devices.

Verdict: Not suitable for these requirements.

Azure Policy:

Pros:

Used to enforce specific resource configurations.

Cons:

Does not manage user credentials and does not restrict access to devices.

Verdict: Not suitable for these requirements.

Recommendations

Here’s how we should match the services to the requirements:

The users can connect to App1 without being prompted for authentication:

An Azure AD app registration enables Azure AD authentication for App1. Because the users' Windows 10 computers are joined to Azure AD, they get single sign-on to Azure AD-integrated applications, so they can connect to App1 without being prompted for credentials.

The users can access App1 only from company-owned computers:

A conditional access policy is required. Conditional Access can restrict access to only compliant or hybrid joined devices, and therefore prevent users from logging on from personal machines.

Answer Area

Requirement Recommended Solution
The users can connect to App1 without being prompted for authentication: An Azure AD app registration
The users can access App1 only from company-owned computers: A conditional access policy
Explanation

Azure AD app registration:

User Authentication: An app registration configures the authentication for the application. It does not ensure seamless authentication, but it is required to implement authentication for an application.

Conditional Access Policy:

Device-Based Restriction: Conditional access can restrict access based on device compliance, hybrid-joined state, and other factors to guarantee the user is on a company owned device.

Important Notes for the AZ-304 Exam

Azure AD Authentication: Know how Azure AD is used for authentication.

Conditional Access: Understand the purpose and functions of Conditional Access policies and how they can facilitate secure access based on various conditions.

Device Compliance: Know how devices can be marked as compliant or non-compliant within Azure.

Seamless Sign-in: Know that conditional access can facilitate seamless sign in with device based authentication.

Company Owned Devices: Know how conditional access can restrict access to company-owned devices only.

Policy Based Access: Understand that conditional access policies are used to enforce controls for users as they attempt to access resources.

Service Selection: Know how to select the service that best fits the requirements.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

You are developing a web application that provides streaming video to users. You configure the application to use continuous integration and deployment.

The app must be highly available and provide a continuous streaming experience for users.

You need to recommend a solution that allows the application to store data in a geographical location that is closest to the user.

What should you recommend?

Azure App Service Web Apps
Azure App Service Isolated
Azure Redis Cache
Azure Content Delivery Network (CDN)

A

The correct answer is Azure Content Delivery Network (CDN).

Explanation:

Here’s why Azure CDN is the best recommendation and why the other options are less suitable:

Azure Content Delivery Network (CDN):

Geographical Proximity: CDNs are designed specifically to store and serve content from geographically distributed servers (edge servers) that are closer to users. When a user requests video content, the CDN automatically routes the request to the nearest edge server that has the content cached. This significantly reduces latency and improves the streaming experience by delivering data faster.

High Availability and Continuous Streaming: CDNs are built for high availability. They have multiple points of presence (POPs) globally, and if one edge server fails, users are automatically routed to another nearby edge server. This ensures continuous streaming even in case of server failures.

Video Streaming Optimization: CDNs are optimized for delivering streaming media content like videos. They often have features like adaptive bitrate streaming (ABR) support, which dynamically adjusts video quality based on the user’s network conditions, further enhancing the streaming experience.

Why other options are incorrect:

Azure App Service Web Apps: While Azure App Service is excellent for hosting web applications and provides high availability and scalability, it primarily hosts the application code and not the large video files themselves in a geographically distributed manner. You could deploy Web Apps in multiple regions for redundancy, but it doesn’t inherently solve the problem of geographically close data storage for video streaming. Web Apps would likely serve the application logic that uses a CDN or storage service to deliver the video content.

Azure App Service Isolated: App Service Isolated is just a more isolated and resource-dedicated tier of App Service. It doesn’t change the fundamental purpose of App Service, which is application hosting, not geographically distributed data storage for streaming. It also wouldn’t inherently place video data closer to the user.

Azure Redis Cache: Azure Redis Cache is an in-memory data store used for caching frequently accessed data to improve application performance. It’s not designed for storing and streaming large video files. While Redis can be geo-replicated, it’s primarily for caching smaller, frequently accessed pieces of data (like session data, frequently accessed database queries), not for serving large video streams. Redis Cache could be used to cache metadata or streaming session information, but not the video content itself for geographical proximity.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

DRAG DROP

A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that uses the Basic license. You plan to deploy two applications to Azure.

The applications have the requirements shown in the following table.

| Application name | Requirement |
|---|---|
| Customer | Users must authenticate by using a personal Microsoft account and multi-factor authentication. |
| Reporting | Users must authenticate by using either Contoso credentials or a personal Microsoft account. You must be able to manage the accounts from Azure AD. |

Which authentication strategy should you recommend for each application? To answer, drag the appropriate authentication strategies to the correct applications. Each authentication strategy may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Authentication Strategies
An Azure AD B2C tenant
An Azure AD v1.0 endpoint
An Azure AD v2.0 endpoint

Answer Area
Customer: Authentication strategy
Reporting: Authentication strategy


A

Understanding the Requirements

Contoso, Ltd. Azure AD Tenant: Using the Basic license.

Two Applications:

Customer: External users authenticate with a personal Microsoft account and require MFA.

Reporting: Internal and external users can use Contoso credentials or a personal Microsoft account, which must be managed from Azure AD.

Analyzing the Authentication Strategies

An Azure AD B2C tenant:

Pros:

Designed for customer-facing applications.

Supports social identities (like Microsoft accounts).

Supports MFA for all authentication types.

Offers customization of the login experience.

Allows management of external identities and authentication policies.

Cons:

Requires an additional Azure AD tenant.

Use Case: Best suited for customer-facing applications that need to support different kinds of identity providers, such as personal Microsoft Accounts.

An Azure AD v1.0 endpoint:

Pros:

Supports Azure AD accounts.

Supports multi factor authentication.

Basic authentication framework

Cons:

Does not support personal Microsoft accounts.

Has a more limited set of features than v2.0.

Not designed for external customer authentication.

Use Case: Good for authenticating internal users, but not the best solution for external users.

An Azure AD v2.0 endpoint:

Pros:

Supports Azure AD accounts.

Supports personal Microsoft accounts.

Supports MFA for all authentication types.

Supports modern application development.

Cons:

Does not provide full B2C customization.

Does not manage external accounts within Azure AD.

Use Case: Ideal for authenticating internal (Azure AD) users and external personal accounts, however it does not offer the same level of configuration as B2C.

Matching Authentication Strategies to Applications

Here’s the correct mapping:

Customer:

An Azure AD B2C tenant is the best fit. It is specifically designed for customer-facing applications, supports personal Microsoft accounts and MFA, and offers good customization options.

Reporting:

An Azure AD v2.0 endpoint is the most suitable. It can authenticate both internal Azure AD users and external personal Microsoft account users, which matches the requirement. Because the application does not need the level of customization that B2C offers, this is the best option.
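
To illustrate the difference, here is a short Python sketch of the v2.0 authorization URL the Reporting app could use; signing in through the v2.0 endpoint with the common tenant accepts both Contoso (Azure AD) accounts and personal Microsoft accounts. The client ID and redirect URI are placeholders.

  from urllib.parse import urlencode

  tenant = "common"   # "common" accepts both Azure AD (Contoso) and personal Microsoft accounts
  params = {
      "client_id": "<reporting-app-client-id>",
      "response_type": "code",
      "redirect_uri": "https://reporting.contoso.com/auth/callback",
      "scope": "openid profile",
  }
  v2_authorize_url = (
      f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?" + urlencode(params)
  )
  print(v2_authorize_url)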

Answer Area

Application Authentication Strategy
Customer An Azure AD B2C tenant
Reporting An Azure AD v2.0 endpoint
Important Notes for the AZ-304 Exam

Azure AD B2C: Understand its purpose and use for customer-facing applications.

Azure AD v1.0 vs. v2.0: Know the differences between the v1 and v2 endpoints and how they impact authentication.

Microsoft Accounts: Understand that Azure AD v1.0 does not support personal Microsoft accounts, and therefore you would need to use v2.0, or B2C.

MFA: Know how to enforce MFA for different authentication types.

Authentication Strategies: Understand which strategy is best for different types of applications (e.g., internal vs. customer-facing).

Azure AD Licenses: Know that Azure AD B2C requires separate licensing from Azure AD basic.

Service Selection: Be able to select the correct Azure service that fits your requirements.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

You deploy Azure App Service Web Apps that connect to on-premises Microsoft SQL Server instances by using Azure ExpressRoute. You plan to migrate the SQL Server instances to Azure.

Migration of the SQL Server instances to Azure must:

  • Support automatic patching and version updates to SQL Server.
  • Provide automatic backup services.
  • Allow for high-availability of the instances.
  • Provide a native VNET with private IP addressing.
  • Encrypt all data in transit.
  • Be in a single-tenant environment with dedicated underlying infrastructure (compute, storage).

You need to migrate the SQL Server instances to Azure.

Which Azure service should you use?

SQL Server Infrastructure-as-a-Service (IaaS) virtual machine (VM)
Azure SQL Database with elastic pools
SQL Server in a Docker container running on Azure Container Instances (ACI)
Azure SQL Database Managed Instance
SQL Server in Docker containers running on Azure Kubernetes Service (AKS)

A

The correct answer is Azure SQL Database Managed Instance.

Here’s why:

Automatic patching and version updates to SQL Server: Azure SQL Database Managed Instance handles these tasks automatically, as it’s a Platform-as-a-Service (PaaS) offering.

Provide automatic backup services: Managed Instance includes automatic backups that you can configure for retention and frequency.

Allow for high-availability of the instances: Managed Instance provides built-in high availability.

Provide a native VNET with private IP addressing: Managed Instances are deployed directly into your Azure Virtual Network and have private IP addresses.

Encrypt all data in transit: Encryption in transit is enabled by default for connections to Managed Instances.

Be in a single-tenant environment with dedicated underlying infrastructure (compute, storage): This is a key characteristic of Managed Instance. While it’s a PaaS offering, it provides a more isolated environment compared to Azure SQL Database with elastic pools, which is multi-tenant.

Let’s look at why the other options are not the best fit:

SQL Server Infrastructure-as-a-Service (IaaS) virtual machine (VM): While you have full control, you are responsible for patching, backups, and setting up high availability yourself. This doesn't meet the automation requirements.

Azure SQL Database with elastic pools: This is a multi-tenant service where resources are shared among multiple customers. It doesn't provide a dedicated underlying infrastructure. While it offers automatic patching, backups, and high availability, it doesn't meet the single-tenant requirement. Also, it does not offer direct, native VNet placement; VNet service endpoints are an option, but that is not the same as being deployed into your VNet with a private IP address.

SQL Server in a Docker container running on Azure Container Instances (ACI): You would be responsible for managing the SQL Server instance within the container, including patching and backups. High availability would also require manual configuration. While it can be in a VNET, it doesn’t inherently provide the managed services needed.

SQL Server in Docker containers running on Azure Kubernetes Service (AKS): Similar to ACI, you’d manage the SQL Server instance within the containers. While AKS offers robust orchestration for HA, it doesn’t provide the automatic patching and backup services at the SQL Server level that Managed Instance does. You’d need to implement those yourself.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage v2 account named Storage1.

You plan to archive data to Storage1.

You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.

Solution: You create a file share, and you configure an access policy.

Does this meet the goal?

Yes
No

A

Understanding the Requirements

Azure Storage v2 Account: Storage1

Archival: Data will be archived in the storage account.

Retention Policy: Archived data must be protected from deletion for five years.

Administrator Protection: This protection must prevent even administrators from deleting the data.

Analyzing the Proposed Solution: Access Policy on a File Share

File Share Access Policy: Access policies on Azure file shares primarily control who can access the share, and what actions they can perform on the share, such as read, write, or delete.

Let’s evaluate if a file share access policy meets the stated needs:

Prevent Data Deletion for Five Years (including administrators):

Analysis: File share access policies can be used to prevent certain users or groups from deleting files on a file share, but not for a specific retention period like five years.

Access policies can be overridden by users with sufficient rights (like the storage account administrator).

Access policies do not apply a time based restriction to deletion.

Verdict: Does NOT meet the requirement to prevent deletion for five years, or to block admin users.

Conclusion

The proposed solution does not meet the goal because an access policy will not prevent all users, including administrators, from deleting data, and will also not impose a time based restriction on the deletion of data. Therefore, the answer is No.

Correct Answer

No

Explanation

File share access policies are about authorization to perform specific actions, but they do not implement immutability or retention. To implement a time based retention, you would need an Immutability policy on a blob container. This setting is designed to provide a time based retention mechanism and protect data from deletion even by the administrators.

Important Notes for the AZ-304 Exam

Azure Storage Access Policies: Understand their purpose and limitations in controlling access to data, and that they do not implement a time-based retention policy.

Azure Storage Immutability Policies: Understand that they provide a way to protect data from modification and deletion, and how you can set these policies.

Data Archival: You need to understand the ways that data can be archived, and how retention can be applied.

Admin Roles: Remember that administrators can override many security configurations and policies unless specifically protected by a service such as an immutability policy.

Security Best Practices: Be aware that security should be a consideration in every component of Azure.

Service Selection: Be able to select the correct Azure service that fits your requirements.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

HOTSPOT

You need to recommend a solution for configuring the Azure Multi-Factor Authentication (MFA) settings.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area
Azure AD license:
Free
Basic
Premium P1
Premium P2
Access control for the sign-in risk policy:
Allow access and require multi-factor authentication
Block access and require multi-factor authentication
Allow access and require Azure MFA registration
Block access
Access control for the multi-factor
authentication registration policy:
Allow access and require multi-factor authentication
Block access and require multi-factor authentication
Allow access and require Azure MFA registration
Block access

A

Understanding the Requirements

Azure MFA: The goal is to recommend a solution for configuring MFA.

Components to Configure:

Azure AD license

Access control for the sign-in risk policy

Access control for the multi-factor authentication registration policy

Analyzing the Options

Azure AD license:

Free: Basic MFA is available for all users with the free Azure AD license, however it does not allow conditional access or risk based MFA.

Basic: This license is very similar to the free tier.

Premium P1: Includes Conditional Access and advanced reporting, which is required for the requirements of the question.

Premium P2: Includes advanced Identity Protection and identity governance features.

Access control for the sign-in risk policy:

Allow access and require multi-factor authentication: Allows access, but requires MFA, which is suitable to mitigate the risk.

Block access and require multi-factor authentication: This does not make sense, as the user would not be able to log in.

Allow access and require Azure MFA registration: Allows access, and requires the user to register for MFA.

Block access: Blocks all access.

Access control for the multi-factor authentication registration policy:

Allow access and require multi-factor authentication: The user must already have MFA registered to log in.

Block access and require multi-factor authentication: This would lock users out, if they have not registered for MFA.

Allow access and require Azure MFA registration: This allows the user access, but requires them to register for MFA.

Block access: Blocks all access.

Recommendations

Here is the correct combination for each requirement:

Azure AD license: Premium P1

Reason: Conditional Access, which is required to configure MFA, requires an Azure AD Premium P1 license or higher. Free and Basic licenses do not support conditional access.

Access control for the sign-in risk policy: Allow access and require multi-factor authentication

Reason: Sign-in is not blocked. When the policy detects a risky sign-in, the user must complete multi-factor authentication before access is allowed.

Access control for the multi-factor authentication registration policy: Allow access and require Azure MFA registration

Reason: To ensure that users have MFA configured for the account, we should force them to register for MFA before they are able to proceed. This will ensure that all users are set up correctly.

Answer Area

Requirement Recommended Option
Azure AD license: Premium P1
Access control for the sign-in risk policy: Allow access and require multi-factor authentication
Access control for the multi-factor authentication registration policy: Allow access and require Azure MFA registration
Important Notes for the AZ-304 Exam

Azure AD Licensing: Understand the licensing options and which features are included in each.

Azure MFA: Know how to configure MFA, including registration policies and sign-in risk based policies.

Conditional Access: Understand the purpose of conditional access, and its requirements.

MFA Registration Policies: Know that these are important for ensuring that all users are set up correctly, before allowing them access to resources.

Risk Based Policies: Know that these are an essential component of a good security architecture.

Security Policies: Be aware of the best practices when setting up security policies.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an on-premises Hyper-V cluster that hosts 20 virtual machines. Some virtual machines run Windows Server 2016 and some run Linux.
You plan to migrate the virtual machines to an Azure subscription.
You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.
Solution: You recommend implementing an Azure Storage account, and then using Azure Migrate.
Does this meet the goal?

A. Yes
B. No

A

Understanding the Requirements

On-Premises: Hyper-V cluster hosting 20 VMs (Windows Server 2016 and Linux).

Migration: Move the VMs to Azure.

Disk Replication: The disk data must be copied to Azure.

Availability: The VMs must remain available during the disk migration process.

Analyzing the Proposed Solution: Azure Storage Account and Azure Migrate

Azure Storage account: Provides storage in Azure that can receive the replicated disk data.

Azure Migrate: A service used to discover, assess, and migrate on-premises servers, including Hyper-V virtual machines, to Azure.

Let’s assess whether this solution meets the stated requirements:

Replicate Virtual Machine Disks to Azure:

Analysis: Azure Migrate: Server Migration replicates Hyper-V virtual machine disks to Azure; for Hyper-V, its replication is built on Azure Site Recovery technology.

Verdict: Meets Requirement

Ensure Virtual Machine Availability During Disk Migration:

Analysis: Replication runs continuously in the background while the virtual machines keep running on the on-premises Hyper-V cluster, so users experience no downtime while the disks are copied to Azure.

Verdict: Meets Requirement

Conclusion

The proposed solution meets all requirements: Azure Migrate replicates the virtual machine disks to Azure while replication runs continuously in the background, so the VMs remain available during the process. Therefore, the answer is Yes.

Correct Answer

Yes

Explanation

Azure Migrate: Azure Migrate replicates virtual machine disks from on-premises Hyper-V environments to Azure while keeping the VMs running.

Continuous Replication: Replication runs continuously, so the VMs stay online throughout the migration process.

Migration Support: Azure Migrate provides discovery, assessment, and migration of on-premises environments to Azure; for Hyper-V, its replication is based on Azure Site Recovery technology.

Disaster Recovery: Azure Site Recovery, which uses the same replication approach, can also provide disaster recovery to Azure if a primary data center fails.

Important Notes for the AZ-304 Exam

Azure Site Recovery (ASR): Know the purpose and functionality of ASR, including how to set up replication.

Recovery Services Vault: Understand that ASR requires a Recovery Services vault to store the replication metadata.

Replication Options: Be aware of the different replication methods that ASR can perform, specifically that it will replicate continuously in the background.

Migration Strategies: Understand how to migrate workloads from on-prem to Azure using different services, such as Azure Migrate and ASR.

On-prem Considerations: Remember that pre-requisites such as installing the ASR agent, configuring networking, and other actions are required to facilitate the process.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

HOTSPOT

You have the application architecture shown in the following exhibit.

        Azure Active Directory
                  |
                  v
               Internet
                  |
                  v
           Traffic Manager
                  |
                  v
               Azure DNS
               /        \
              v          v
     Active Region    Standby Region
          |                 |
       Web App           Web App
          |                 |
          v                 v
     SQL Database      SQL Database

Use the drop-down menus to select choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

To change the front end to an active/active
architecture in which both regions process
incoming connections, you must [answer
choice].
add a load balancer to each region
add an Azure Application Gateway to each region
add an Azure content delivery network (CDN)
modify the Traffic Manager routing method

To control the threshold for failing over the
front end to the standby region, you must
configure the [answer choice].
an Application Insights availability test
Azure SQL Database failover groups
Connection Monitor in Azure Network Watcher
Endpoint monitor settings in Traffic Manager

A

Statement 1: To change the front end to an active/active architecture in which both regions process incoming connections, you must [modify the Traffic Manager routing method].

Why this is correct: As explained previously, Traffic Manager is responsible for routing traffic across regions. To have an active/active setup, you must use a Traffic Manager routing method that sends traffic to multiple regions simultaneously. Options like “Weighted” or “Performance” are suitable for active/active.

Why other options are not correct:

Add a load balancer to each region: Load balancers distribute traffic within a region, not between regions.

Add an Azure Application Gateway to each region: Similar to load balancers, Application Gateway is regional.

Add an Azure content delivery network (CDN): CDNs cache static content and do not handle dynamic traffic distribution across regions.

Statement 2: To control the threshold for failing over the front end to the standby region, you must configure the [Endpoint monitor settings in Traffic Manager].

Why this is correct: Traffic Manager’s endpoint monitoring is what determines if an endpoint is healthy and triggers a failover to a backup endpoint. The specific settings (probe interval, tolerated failures, status codes) define the conditions for failing over.

Why other options are not correct:

An Application Insights availability test: Application Insights provides monitoring, but does not directly control failover behavior of traffic manager.

Azure SQL Database failover groups: These manage database failover, not traffic routing at the web app level.

Connection Monitor in Azure Network Watcher: Connection Monitor is for network connectivity troubleshooting, not Traffic Manager endpoint failover.

Summary of Correct Answers:

Statement 1: modify the Traffic Manager routing method

Statement 2: Endpoint monitor settings in Traffic Manager
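
A conceptual Python sketch of the Weighted routing method (not Traffic Manager's implementation): with both endpoints enabled and equally weighted, new connections are spread across the two regions, which is what makes the front end active/active.

  import random

  # Both regions stay active and receive a share of new connections proportional to their weights.
  weights = {"active-region": 50, "standby-region": 50}   # equal split for active/active

  def pick_endpoint():
      return random.choices(list(weights), weights=weights.values(), k=1)[0]

  sample = [pick_endpoint() for _ in range(10_000)]
  for region in weights:
      print(region, sample.count(region))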

29
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains two administrative user accounts named Admin1 and Admin2.

You create two Azure virtual machines named VM1 and VM2.

You need to ensure that Admin1 and Admin2 are notified when more than five events are added to the security log of VM1 or VM2 during a period of 120 seconds. The solution must minimize administrative tasks.

What should you create?

two action groups and two alert rules
one action group and one alert rule
five action groups and one alert rule
two action groups and one alert rule

A

The correct answer is one action group and one alert rule.

Here’s why:

Alert Rules: An Azure Alert rule defines the condition that triggers a notification. In this case, the condition is “more than five security log events in 120 seconds.” We only need one alert rule because we’re monitoring the same condition on both VMs.

Action Groups: Action groups define what happens when an alert is triggered. This could include sending an email, an SMS, or a push notification. In this scenario, we need to notify both Admin1 and Admin2. Since the notification method is the same for both admins, we only need one action group. We then specify both Admin1 and Admin2 in the recipients of this action group. This minimizes administrative effort by letting us manage both notifications in one place.

Explanation of why other options are incorrect:

Two action groups and two alert rules: This would work but is unnecessarily complex. It would mean you have to maintain and update alert rules and action groups separately which is more administration overhead.

Five action groups and one alert rule: This is not related to the requirements.

Two action groups and one alert rule: This is incorrect because we only need one action group with both admins.

Breakdown of the required steps (in summary):

Create an Alert Rule:

Set the resource scope to include both VM1 and VM2.

Set the signal to be the security log with the event count exceeding 5 in 120 seconds.

Create an Action Group:

Add both Admin1’s and Admin2’s contact information as recipients (typically email addresses) within one action group.

Link the Action Group to the Alert Rule:

When configuring the alert rule, link it to the single action group you created.
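
A minimal Python sketch of the alert condition itself, using illustrative timestamps: more than five security-log events within a 120-second window triggers the notification.

  from datetime import datetime, timedelta

  window = timedelta(seconds=120)
  threshold = 5

  def should_alert(event_times, now):
      # Count only the events that fall inside the 120-second evaluation window.
      recent = [t for t in event_times if now - t <= window]
      return len(recent) > threshold

  now = datetime(2024, 1, 1, 12, 2, 0)
  events = [now - timedelta(seconds=s) for s in (5, 20, 40, 60, 80, 100)]
  print(should_alert(events, now))   # True: six events in the last 120 seconds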

Key Concepts for Azure 304 Exam:

Azure Monitor Alerts: Understand how to create and configure alert rules.

Action Groups: Know how to create and use action groups to trigger notifications.

Scope: Understand how to target alert rules to one or more resources (in this case, both VMs).

Minimize Administrative Task: Recognize that the correct solution should accomplish the objective with the least amount of overhead. (DRY Principle - Don’t Repeat Yourself)

Alert Logic: Be familiar with the logic of alert conditions (e.g. count, threshold, time windows).

29
Q

Overview:

Existing Environment

Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

Active Directory Environment:

The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only.

Network Infrastructure:

Each office contains at least one domain controller from the corp.fabrikam.com domain. The main office contains all the domain controllers for the rd.fabrikam.com forest.

All the offices have a high-speed connection to the Internet.

An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.

The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.

Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

Problem Statement:

The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Requirements:

Planned Changes:

Fabrikam plans to move most of its production workloads to Azure during the next few years.

As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft Office 365 deployment

All R&D operations will remain on-premises.

Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

Technical Requirements:

Fabrikam identifies the following technical requirements:

  • Web site content must be easily updated from a single point.
  • User input must be minimized when provisioning new app instances.
  • Whenever possible, existing on premises licenses must be used to reduce cost.
  • Users must always authenticate by using their corp.fabrikam.com UPN identity.
  • Any new deployments to Azure must be redundant in case an Azure region fails.
  • Whenever possible, solutions must be deployed to Azure by using platform as a service (PaaS).
  • An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
  • Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

Database Requirements:

Fabrikam identifies the following database requirements:

  • Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
  • To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
  • Database backups must be retained for a minimum of seven years to meet compliance requirements.

Security Requirements:

Fabrikam identifies the following security requirements:

  • Company information including policies, templates, and data must be inaccessible to anyone outside the company.
  • Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
  • Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
  • All administrative access to the Azure portal must be secured by using multi-factor authentication.
  • The testing of WebApp1 updates must not be visible to anyone outside the company.

You need to recommend a notification solution for the IT Support distribution group.

What should you include in the recommendation?

Azure Network Watcher
an action group
a SendGrid account with advanced reporting
Azure AD Connect Health

A

The correct answer is an action group.

Here’s why:

Requirement: The requirement is to notify an email distribution group (IT Support) about issues relating to the directory synchronization services.

Action Groups in Azure Monitor: Azure Monitor uses action groups to define a collection of notification preferences that are triggered by an alert. You can configure an action group to send email notifications to the IT Support distribution group when an alert related to directory synchronization fires.

Why other options are less suitable:

Azure Network Watcher: This is a network performance monitoring and diagnostic service. While it can detect network connectivity issues that might affect directory synchronization, it doesn’t directly monitor the health of the synchronization service itself or send notifications in the way required.

A SendGrid account with advanced reporting: SendGrid is an email delivery service. While you could potentially integrate it with custom alerting solutions, it’s not the built-in mechanism for receiving notifications about Azure AD Connect health. It would require more complex configuration than using action groups. Also, the focus is on notification, not advanced email reporting in this specific requirement.

Azure AD Connect Health: This service monitors the health and performance of your on-premises Azure AD Connect servers and the synchronization process. It can identify issues. However, to actually send notifications based on these issues, you need to configure alerts within Azure Monitor and associate them with an action group. Azure AD Connect Health integrates with Azure Monitor for alerting.

29
Q

HOTSPOT

Your company has 20 web APIs that were developed in-house.

The company is developing 10 web apps that will use the web APIs. The web apps and the APIs are registered in the company’s Azure Active Directory (Azure AD) tenant. The web APIs are published by using Azure API Management.

You need to recommend a solution to block unauthorized requests originating from the web apps from reaching the web APIs.

The solution must meet the following requirements:

  • Use Azure AD-generated claims.
  • Minimize configuration and management effort.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Grant permissions to allow the web apps to
access the web APIs by using:
Azure AD
Azure API Management
The web APIs
Configure a JSON Web Token (JWT) validation
policy by using:
Azure AD
Azure API Management
The web APIs

A

Correct Answers:

Grant permissions to allow the web apps to access the web APIs by using: Azure AD

Configure a JSON Web Token (JWT) validation policy by using: Azure API Management

Explanation:

Granting Permissions Using Azure AD:

Why it’s correct: In an Azure AD-secured environment, applications (like your web apps) need explicit permissions to access other applications (like your web APIs). This is done through application permissions or delegated permissions granted within Azure AD.

How it works: You would register both your web apps and your web APIs in Azure AD. Then, for each web app, you would grant it the specific permissions it needs to call the web APIs it uses. This usually involves using the Azure portal or the Azure CLI to define the required API permissions that the web app must have before accessing the web APIs.

Key Concept: Azure AD manages authentication and authorization for your applications. This ensures that only authorized web apps can access your web APIs.

Why other options are wrong: The permissions are not granted directly through API Management or the web APIs themselves. Those components enforce access but do not manage the initial permission grant.

Configuring JWT Validation Policy Using Azure API Management:

Why it’s correct: When a web app makes a request to a web API, it includes a JSON Web Token (JWT) in the Authorization header. This JWT contains claims about the user (if delegated permission) or the application (if application permissions). Azure API Management can validate this JWT to confirm that:

The token was issued by a trusted Azure AD tenant.

The token is not expired.

The token contains the correct claims for this API and permissions.

The calling web app is who they claim to be in Azure AD.

How it works: You would configure an API Management validate-jwt policy that instructs API Management to check an incoming token against Azure AD. API Management uses the application ID (audience) of the web API it is protecting to ensure that the token being validated was actually issued for that specific API.

Key Concept: JWT validation is a common method to secure web APIs. API Management is designed for securing APIs and centralizes the management of policies for many APIs.

Why other options are wrong: Azure AD is the system that manages the tokens, not the system that validates the token. The Web APIs can do token validation, but using API Management is the correct option since it is a common place to configure policies for a large number of APIs.

In summary, the solution works as follows:

Azure AD grants permissions: The web apps are explicitly granted permissions in Azure AD to access the web APIs.

API Management validates JWTs: When a request arrives at API Management, it checks the JWT included in the request. API Management uses the application id of the protected web API as well as the token’s issuer to validate the token is valid for the API. If the JWT is valid, the request is forwarded to the web API; otherwise, the request is rejected.
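
The validate-jwt policy performs these checks declaratively inside API Management. To make them concrete, here is a Python sketch of the equivalent validation using the PyJWT library (assuming PyJWT 2.x and placeholder tenant and application values).

  import jwt  # PyJWT

  # Hypothetical inputs: the incoming bearer token, the protected API's application ID URI,
  # and the tenant that issued the token.
  token = "<incoming-bearer-token>"
  expected_audience = "api://<web-api-app-id>"
  tenant_id = "<tenant-id>"

  jwks_url = f"https://login.microsoftonline.com/{tenant_id}/discovery/v2.0/keys"
  signing_key = jwt.PyJWKClient(jwks_url).get_signing_key_from_jwt(token)

  claims = jwt.decode(
      token,
      signing_key.key,
      algorithms=["RS256"],
      audience=expected_audience,                                    # token is for this API
      issuer=f"https://login.microsoftonline.com/{tenant_id}/v2.0",  # issued by the trusted tenant
  )
  print(claims.get("appid") or claims.get("azp"))   # the calling web app's identity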

Important Notes for Azure 304 Exam:

Azure AD Authentication and Authorization: Be clear on how Azure AD is used for both authentication (verifying the user or application identity) and authorization (determining what they’re allowed to access).

API Security Best Practices: Understand common API security concepts like JWT validation, scopes, and least privilege.

Azure API Management: Recognize how API Management is used to secure, manage, and publish APIs. Understand the key features of API Management, like policies.

Separation of Concerns: Note the separation of concerns: Azure AD for identity and permissions, API Management for API security enforcement and JWT validation.

Minimize Configuration: API Management policies can be applied to many APIs at once, minimizing config effort compared to implementing the validation logic in each web API individually.

Claims: Be familiar with the concept of claims in a JWT and how they are used to authorize access to API resources.

29
Q

You need to design a solution that will execute custom C# code in response to an event routed to Azure Event Grid. The solution must meet the following requirements:

The executed code must be able to access the private IP address of a Microsoft SQL Server instance that runs on an Azure virtual machine.

Costs must be minimized.

What should you include in the solution?

Azure Logic Apps in the integrated service environment
Azure Functions in the Dedicated plan and the Basic Azure App Service plan
Azure Logic Apps in the Consumption plan
Azure Functions in the Consumption plan

A

The correct answer is: Azure Functions in the Consumption plan

Explanation:

Azure Functions in the Consumption Plan:

Why it’s correct for the scenario:

Custom C# Code: Azure Functions natively supports running C# code. You can deploy custom C# logic as a Function app and trigger it from Event Grid.

Private IP Address Access: Azure Functions can use virtual network integration to securely reach resources inside an Azure virtual network, allowing the function to connect to the private IP address of the SQL Server virtual machine.

Cost Optimization: The Consumption plan for Azure Functions is a serverless plan that is designed to minimize costs because you only pay when the function is executed. This aligns with the requirement to keep costs down.

Event Grid Trigger: Azure Functions have a built-in trigger for Event Grid events making the two services work well together.

How it Works: You create an Azure Function app, write your C# code, configure it to be triggered by Event Grid events, and set up VNet integration.
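
As a rough illustration of that flow (shown in Python for brevity, even though the scenario's custom code is C#), the sketch below assumes an Event Grid trigger binding in function.json, a placeholder private IP for the SQL Server VM, and placeholder credentials and table names.

  import azure.functions as func
  import pyodbc

  def main(event: func.EventGridEvent):
      payload = event.get_json()   # data of the routed Event Grid event
      # 10.0.0.4 stands in for the SQL Server VM's private IP, reachable through VNet integration.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=10.0.0.4;DATABASE=OrdersDb;UID=<user>;PWD=<password>"
      )
      cursor = conn.cursor()
      cursor.execute(
          "INSERT INTO dbo.Events (Subject, Body) VALUES (?, ?)", event.subject, str(payload)
      )
      conn.commit()
      conn.close()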

Why Other Options Are Incorrect:

Azure Logic Apps in the Integrated Service Environment (ISE): While Logic Apps can be triggered by Event Grid and can integrate with virtual networks in an ISE, it is generally more expensive than Azure Functions. Also, Logic Apps are a visual orchestration tool that is primarily designed to use prebuilt connectors rather than custom C# code. Thus, Azure Functions is the more appropriate solution for this use case.

Azure Functions in the Dedicated plan and the Basic Azure App Service plan: The Dedicated plan for Azure Functions (an App Service plan) keeps compute resources allocated even when no events arrive, which makes it more expensive than the Consumption plan. This option is therefore not optimized for cost, which is an explicit requirement.

Azure Logic Apps in the Consumption plan: While Logic Apps in the Consumption plan can be triggered by Event Grid events and can invoke Azure Functions or use built-in connectors, the requirement here is to execute custom C# code. It is better practice to use Azure Functions for executing custom code and Logic Apps for orchestration. In addition, Logic Apps in the Consumption plan cannot reach private VNet addresses.

Summary:

The best fit for the requirements of this problem is Azure Functions in the Consumption plan. It allows for custom C# execution, private network access via vnet integration, and is the most cost-effective option by only paying for code execution time.

Important Notes for Azure 304 Exam:

Azure Functions Plans: Understand the differences between the Consumption plan and the Dedicated (App Service) plan, especially when to use which plan. Consumption is serverless and cost-effective; Dedicated is for consistent performance and more control.

VNet Integration: Learn how services like Azure Functions can integrate with virtual networks for secure access to private resources. Understand which services allow VNet integration and which plans support it (Consumption plan for functions).

Event Grid Integration: Understand how to use Azure Event Grid to trigger different types of services, including Azure Functions.

Cost Optimization: Prioritize cost-effective solutions when explicitly mentioned in the requirements. Consumption-based services are often a good choice for this scenario.

Service Selection: Choose the correct Azure service based on its strengths. Azure Functions for custom code, Logic Apps for orchestration and workflow, Event Grid for event routing.

Serverless: Be familiar with serverless compute and its cost and scale benefits.

30
Q

You have an Azure SQL Database elastic pool.

You need to monitor the resource usage of the elastic pool for anomalous database activity based on historic usage patterns. The solution must minimize administrative effort.

What should you include in the solution?

a metric alert that uses a dynamic threshold
a metric alert that uses a static threshold
a log alert that uses a dynamic threshold
a log alert that uses a static threshold

A

The correct answer is: a metric alert that uses a dynamic threshold

Explanation:

Metric Alert with Dynamic Threshold:

Why it’s correct:

Resource Usage Monitoring: Metric alerts are specifically designed to monitor numerical values (metrics) that are emitted by Azure resources. Resource usage of a database (CPU, Data IO, Log IO etc) are available as numerical metrics.

Anomalous Activity: Dynamic thresholds are the key here. They use machine learning algorithms to establish a baseline of normal behavior based on historical data. When the current usage deviates significantly from this baseline, the alert triggers. This is ideal for detecting anomalies because it automatically adapts to changing usage patterns. This removes the human element needed for defining static thresholds, fulfilling the requirement to minimize administrative tasks.

Minimized Administrative Effort: Because the threshold is dynamic, the administrator does not have to adjust the threshold over time.

Applicable to Elastic Pools: Metric alerts can be applied to elastic pools, allowing you to monitor the overall pool resource usage.

How it works: You configure a metric alert on the elastic pool that monitors the specific resource usage metrics you’re interested in (e.g., CPU percentage, Data IO percentage, Storage Space). You set the alert to use a dynamic threshold. The system automatically learns the typical patterns and then alerts when there is a significant deviation.
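
Azure Monitor's dynamic thresholds use machine learning, but the idea can be illustrated with a simple baseline-and-deviation check in Python (a conceptual sketch with made-up CPU samples, not the actual algorithm).

  from statistics import mean, stdev

  # A value is treated as anomalous when it falls outside the learned mean +/- 3 standard deviations.
  history = [42, 45, 40, 47, 44, 43, 46, 41, 45, 44]   # hypothetical CPU % samples
  current = 78

  baseline, spread = mean(history), stdev(history)
  upper, lower = baseline + 3 * spread, baseline - 3 * spread
  print(not (lower <= current <= upper))   # True -> flag as anomalous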

Why Other Options are Incorrect:

Metric Alert with Static Threshold: Static thresholds require you to manually set a fixed value that triggers the alert (e.g., “alert when CPU usage is above 80%”). This is problematic because:

Difficult to Define: Determining an appropriate static threshold can be challenging because normal resource usage varies and is difficult to predict. A static threshold that is correct for one time period may be inaccurate for a different time period.

Requires Maintenance: Static thresholds require constant manual adjustments to stay relevant to current usage patterns. This increases administrative overhead and therefore does not fulfill the requirement to minimize administrative effort.

Log Alert with Dynamic Threshold: Log alerts evaluate the results of log queries on a schedule. While logs can contain valuable data about database activity, they are not the natural way to monitor resource usage metrics (CPU, storage, IO), and dynamic thresholds are a feature of metric alerts; log search alerts do not offer them.

Log Alert with Static Threshold: Similar to a metric alert with a static threshold, this will require more administration to tune a threshold and is not as effective as dynamic threshold with metrics. Log alerts are also not suitable for monitoring resource utilization metrics.

In summary, a metric alert with a dynamic threshold provides the best combination of accurate anomaly detection, minimizes administrative effort, and works well with the type of data that you’re trying to analyze.

Important Notes for Azure 304 Exam:

Metric Alerts: Be familiar with using metric alerts to monitor numerical values, including the different operators and threshold types.

Dynamic Thresholds: Deeply understand dynamic thresholds, their use cases, benefits, and limitations.

Log Alerts: Understand log alerts and how they differ from metric alerts.

Appropriate Monitoring Tools: Know which Azure monitoring tool (metric alerts, log alerts, Application Insights, etc.) is best suited for a particular task.

Anomaly Detection: Learn the concept of anomaly detection based on historical data.

Elastic Pools: Understand the structure and behavior of Azure SQL Database elastic pools.

Minimize Admin Effort: Be able to choose the most efficient solution that achieves the required monitoring with the least amount of manual configuration and upkeep.

31
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage v2 account named storage1.

You plan to archive data to storage1.

You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.

Solution: You create an Azure Blob storage container, and you configure a legal hold access policy.

Does this meet the goal?

Yes
No

A

The Correct Answer is: No

Explanation:

Legal Holds:

Legal holds are designed to preserve data for litigation or compliance purposes, but they are not intended to prevent administrators from deleting data. An administrator with the correct permissions can remove a legal hold from a blob or container and subsequently delete it. They provide a mechanism to mark data as protected but do not offer immutable storage.

Requirement: The key requirement here is that the archived data “cannot be deleted for five years” and that it prevents even administrators from doing so. A legal hold does not provide this guarantee.

Immutable Storage: To achieve true immutability, you need to use Azure Blob Storage’s immutable storage feature with a time-based retention policy (ideally a locked one), not a legal hold.

Azure Storage Immutability Policies:

Time-based retention policies: You can set a retention period on a container (or, with version-level immutability, on individual blob versions) so that blobs cannot be deleted or overwritten until the period has passed.

Locked retention policies: While a time-based retention policy is unlocked it can still be changed or removed, so for compliance scenarios you lock it. Once locked, the retention period cannot be shortened and the policy cannot be removed, even by an administrator.

Why the Provided Solution Fails: Using only a legal hold will not prevent administrators from removing the legal hold and subsequently deleting the data.
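
As a rough illustration of the difference, the sketch below (plain Python, not the Azure Storage SDK) contrasts the two protections: a legal hold blocks deletion only while the hold is present and can be cleared by an authorized administrator, whereas a locked time-based retention policy blocks deletion until the retention period expires no matter who asks. The field names are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

def can_delete(blob, now=None):
    """Toy model of Azure immutability semantics (not the real SDK).

    blob is a dict with:
      'created'        - datetime the blob was written
      'legal_hold'     - True while a legal hold tag is present
      'retention_days' - locked time-based retention period, or None
    """
    now = now or datetime.now(timezone.utc)
    if blob.get("legal_hold"):
        return False  # blocked, but an admin could clear the hold first
    retention = blob.get("retention_days")
    if retention is not None:
        return now >= blob["created"] + timedelta(days=retention)
    return True

archived = {"created": datetime.now(timezone.utc), "legal_hold": False,
            "retention_days": 5 * 365}          # ~5-year locked retention
print(can_delete(archived))                      # False until the period expires
```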

Why the other option is not correct:

Yes: The legal hold is not the correct mechanism to achieve immutability and prevent administrator deletion of the data.

Important Notes for the Azure 304 Exam:

Legal Holds vs. Immutable Storage: Know the fundamental differences between legal holds and immutable storage policies. Understand when to use each.

Immutable Storage: Be familiar with how time-based retention policies and legal holds work in Azure Storage. Understand how a locked time-based retention policy prevents data deletion for a specified duration, and why that satisfies requirements a legal hold cannot.

Administrator Privileges: Recognize that legal holds are often used within an administrative context, while immutable storage should explicitly prevent even administrators from deleting the data.

Retention Periods: Understand the use of retention periods for data immutability.

Data Compliance: Be able to apply appropriate Azure Storage solutions to meet compliance requirements related to data retention and protection.

31
Q

You plan to deploy an API by using Azure API Management.

You need to recommend a solution to protect the API from a distributed denial of service (DDoS) attack.

What should you recommend?

Create network security groups (NSGs).
Enable quotas.
Strip the Powered-By response header.
Enable rate limiting

A

The correct answer is: Enable rate limiting.

Explanation:

Rate Limiting:

Why it’s the most effective for DDoS: Rate limiting is a fundamental technique for mitigating DDoS attacks. It works by restricting the number of requests a client (or IP address) can make within a given time window. This prevents an attacker from overwhelming the API with a high volume of requests, which is the core of a DDoS attack.

How it Works in API Management: Azure API Management allows you to configure rate limiting policies, specifying the maximum number of calls allowed per subscription key, IP address, or other criteria. When requests exceed the limit, API Management can return an error response, preventing the requests from reaching your backend API.

DDoS Protection: Rate limiting is most effective against application-layer (HTTP) floods and resource-exhaustion attacks aimed at the API itself; large network-layer volumetric attacks are handled by platform services such as Azure DDoS Protection, but among the options given, rate limiting is the control that directly throttles request floods.

Relevance to the scenario: The goal is to protect against DDoS attacks, and rate limiting directly addresses that goal.
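
The sketch below (plain Python, not the API Management policy engine) shows the core mechanic a rate-limit policy applies per subscription key or caller: count calls within a renewal window and reject the excess. The limit of 100 calls per 60 seconds is an illustrative assumption.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: allow at most `calls` per `period` seconds per caller."""

    def __init__(self, calls=100, period=60):
        self.calls, self.period = calls, period
        self.history = defaultdict(deque)   # caller id -> timestamps of recent calls

    def allow(self, caller_id):
        now = time.monotonic()
        window = self.history[caller_id]
        while window and now - window[0] > self.period:
            window.popleft()                # drop calls that fell outside the window
        if len(window) >= self.calls:
            return False                    # APIM would return 429 Too Many Requests
        window.append(now)
        return True

limiter = RateLimiter(calls=100, period=60)
print(all(limiter.allow("key-123") for _ in range(100)))   # True: within the limit
print(limiter.allow("key-123"))                            # False: 101st call rejected
```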

Why Other Options are Incorrect:

Create network security groups (NSGs): NSGs are essential for controlling network traffic at the virtual network level. However, while they can filter traffic based on IP addresses and ports, they do not effectively mitigate DDoS attacks against an API. NSGs provide network-level filtering, not application-level protection against malicious HTTP requests: an HTTP flood cannot be mitigated with network rules alone, and DDoS traffic typically originates from many source IP addresses that are impractical to block individually. Rate limiting at the API layer is the better control.

Enable quotas: Quotas control the overall usage of a resource (e.g., number of API calls per month, bandwidth limits). While quotas can help manage costs, they are not designed to prevent a sudden flood of malicious requests as is typically found in a DDoS attack, nor do they address rate limiting requirements. They are more about capacity planning, not active mitigation of a DDoS attack.

Strip the Powered-By response header: Removing the Powered-By response header is a security best practice because it avoids revealing the technology stack behind your API, but it does nothing to protect against DDoS attacks. It is useful hardening, yet it does not meet the requirement to protect against a distributed denial of service attack.

Summary:

Rate limiting provides the best defense against DDoS attacks by preventing an overwhelming number of requests. Other methods will not be as effective.

Important Notes for the Azure 304 Exam:

DDoS Mitigation: Be able to recognize which solutions best prevent DDoS attacks.

API Management Policies: Understand the different policies available in Azure API Management and how to apply them to achieve specific goals.

Rate Limiting: Deeply understand how rate limiting works and how it can mitigate various attacks. Know how to configure rate-limiting policies in Azure API Management.

Defense in Depth: Understand that security is a layered approach. NSGs and other security controls are necessary, but rate limiting provides a strong defense at the API level.

Best Practices: When presented with a security question, always look for solutions that align with recognized security best practices.

Application-Level Security: Remember that many security threats, including DDoS, occur at the application level (HTTP requests). Focus on services and techniques that operate at this level, such as rate limiting and request filtering.

31
Q

You are designing an Azure resource deployment that will use Azure Resource Manager templates. The deployment will use Azure Key Vault to store secrets.

You need to recommend a solution to meet the following requirements:

Prevent the IT staff that will perform the deployment from retrieving the secrets directly from Key Vault.

Use the principle of least privilege.

Which two actions should you recommend? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions.
From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment.
Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions.
Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.
Assign the Key Vault Contributor role to the IT staff.

A

Correct Answers:

From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment.

Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.

Explanation:

Enable Access for Azure Resource Manager in Key Vault Access Policy:

Why it’s correct: This is a critical step. When you deploy resources via ARM templates, the Azure Resource Manager service needs permission to retrieve secrets from Key Vault. By enabling this specific access in the Key Vault Access Policies, you allow ARM to get the secrets during deployment but you do not grant direct access to individual users.

How it Works: This option sets the key vault’s enabledForTemplateDeployment property. When it is enabled, Azure Resource Manager can retrieve secret values from the vault during a template deployment (the template references the secret by its Key Vault resource ID), without granting any user direct read access to the secrets.

Least Privilege: It avoids granting direct secret read permissions to the IT staff, thus minimizing the privilege they are granted for the deployment.

Assign IT Staff a Custom Role with Microsoft.KeyVault/Vaults/Deploy/Action Permission:

Why it’s correct: The Microsoft.KeyVault/Vaults/Deploy/Action permission allows the user to deploy ARM templates that reference secrets from Key Vault, but it does not grant the user the permission to view or modify the secrets directly. This precisely meets the requirement to prevent direct access to the secret.

How it Works: You create a custom role in Azure RBAC with just this specific permission. You then assign this role to the IT staff responsible for deployment.

Least Privilege: By granting only this permission, you follow the principle of least privilege. IT staff can deploy using secrets, but they cannot view, edit, or delete the secrets in Key Vault, fulfilling the requirements of the question.
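
A sketch of what such a custom role definition might look like, expressed here as a Python dictionary in the general shape Azure RBAC role definitions use (Actions, NotActions, AssignableScopes). The role name, description, and subscription ID are illustrative placeholders.

```python
import json

# Hypothetical custom role: can trigger deployments that reference Key Vault
# secrets, but has no permission to read, list, or manage the secrets themselves.
key_vault_deployer_role = {
    "Name": "Key Vault Template Deployer (example)",
    "Description": "Can deploy ARM templates that reference Key Vault secrets.",
    "Actions": [
        "Microsoft.KeyVault/vaults/deploy/action"
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/subscriptions/00000000-0000-0000-0000-000000000000"  # placeholder
    ],
}

print(json.dumps(key_vault_deployer_role, indent=2))
```

Because only the deploy action appears under Actions, someone holding this role can trigger deployments that pull secret values, but has no data-plane permission to read or list the secrets themselves.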

Why Other Options Are Incorrect:

Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions: This is incorrect as it grants far too many permissions. This would allow the IT staff to read all secrets directly and violate the first requirement and the principle of least privilege. This policy should not be granted to any specific user account, and especially not users responsible for deployment, as this would allow them to see all the values of the secrets directly.

Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions: While less powerful than the option above, this is still incorrect because it allows users to see the list of secrets that are in the Key Vault, which is still more than is required. This does violate the least privilege rule as IT staff do not need to list secrets to deploy ARM templates.

Assign the Key Vault Contributor role to the IT staff: This role grants excessive permissions to the Key Vault, including the ability to read, create, modify, and delete secrets. This violates the principle of least privilege and is not correct because IT staff only need to be able to trigger an ARM deployment, and not manipulate the contents of the Key Vault.

Summary:

The correct approach is to grant the Azure Resource Manager access via Key Vault access policy, and grant IT staff a custom role with minimal necessary permissions (i.e. Microsoft.KeyVault/Vaults/Deploy/Action). This combination ensures that the required deployment occurs without granting excessive access to the staff performing the deployment.

Important Notes for Azure 304 Exam:

Key Vault Access Policies: Understand how access policies control access to Key Vault resources (secrets, keys, certificates).

Azure Resource Manager Access: Know how to grant the Azure Resource Manager service access to Key Vault secrets.

Azure RBAC: Deeply understand how to use Azure RBAC (custom roles and built-in roles) to manage permissions.

Least Privilege: Always adhere to the principle of least privilege when designing security solutions. Grant only the necessary permissions.

ARM Template Deployment: Be familiar with how ARM templates utilize Key Vault for secure parameterization.

Custom Roles: Know how to define and assign custom roles in Azure RBAC, and when this is the better option.

32
Q

Your company plans to publish APIs for its services by using Azure API Management.

You discover that service responses include the AspNet-Version header.

You need to recommend a solution to remove AspNet-Version from the response of the published APIs.

What should you include in the recommendation?

a new product
a modification to the URL scheme
a new policy
a new revision

A

The correct answer is: a new policy

Explanation:

API Management Policies:

Why it’s the right solution: Azure API Management policies are designed to modify request and response behavior. You can use policies to control a wide range of operations, including header manipulation, rate limiting, caching, authentication, and more. Specifically, you can use a set-header policy to remove the header.

How it Works: You would create a policy, most likely an outbound policy, that removes the AspNet-Version header from the HTTP response. This policy can be applied at various scopes in API Management: global, product, API, or operation.

Direct Header Modification: This is the most direct and efficient way to solve the problem. It avoids changing API structure or deployment and provides a centralized mechanism for removing the header.

Relevance to the scenario: The main goal is to remove the specific header. A policy is the most appropriate tool for header manipulation.
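
Conceptually, the outbound policy performs the kind of transformation sketched below (plain Python, not API Management’s policy XML): take the backend response headers and drop the ones you do not want to expose. In API Management itself this is done with a set-header policy in the outbound section using exists-action="delete"; the header list here is an illustrative assumption.

```python
def strip_headers(response_headers,
                  blocked=("AspNet-Version", "X-AspNet-Version", "X-Powered-By")):
    """Return a copy of the response headers without the blocked ones.

    Mirrors what an outbound API Management policy does before the
    response is returned to the caller (case-insensitive match).
    """
    blocked_lower = {name.lower() for name in blocked}
    return {k: v for k, v in response_headers.items() if k.lower() not in blocked_lower}

backend_response = {
    "Content-Type": "application/json",
    "AspNet-Version": "4.0.30319",
    "X-Powered-By": "ASP.NET",
}
print(strip_headers(backend_response))   # {'Content-Type': 'application/json'}
```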

Why Other Options are Incorrect:

A new product: Products in API Management are used to group and manage APIs for different audiences and usage tiers. While you could apply a policy to a product, creating a new product just to remove a header is overkill. It’s not the right tool for this specific task, and creates unnecessary administrative overhead.

A modification to the URL scheme: Modifying the URL scheme changes the way that the API is accessed, and is unrelated to header management. This would be overkill for a header modification, as it is far more involved and is unrelated to header manipulation.

A new revision: API revisions are intended for making and testing non-breaking changes to an API before making them current. Creating a revision just to remove a response header is far more work than necessary; a policy handles this directly.

In Summary:

The most efficient way to remove the AspNet-Version header from API responses is to use an API Management policy. The policy allows you to manipulate headers easily, and is a specific tool designed for this task.

Important Notes for the Azure 304 Exam:

API Management Policies: You MUST be familiar with API Management policies: where to use them, what they can do (including header management), and how to use them.

Outbound Policies: Understand the difference between inbound and outbound policies, and when to use each. (You should use an outbound policy when modifying responses from the API)

HTTP Headers: Be familiar with HTTP headers and how they’re used in API communication.

Security Best Practices: Removing unnecessary headers (such as the AspNet-Version header) is considered a security best practice, as it can disclose the underlying tech stack of the API.

API Management Features: Understand all the key features of API Management: policies, products, subscriptions, revisions, gateways, etc.

Appropriate Solution: Choosing the most appropriate solution (a policy) instead of a more complex one (new product, new revision, changing URL) for a specific problem, is an important skill to have for the exam.

32
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains a group named Group1. Group1 contains all the administrative user accounts.

You discover several login attempts to the Azure portal from countries where administrative users do NOT work.

You need to ensure that all login attempts to the Azure portal from those countries require Azure Multi-Factor Authentication (MFA).

Solution: Create an Access Review for Group1.

Does this solution meet the goal?

Yes
No

A

The correct answer is: No

Explanation:

Access Reviews:

What they do: Access reviews in Azure AD are designed to periodically review and recertify user access to resources. The focus is on reviewing who has access, and whether they should continue to have it. They are used to ensure that users have the correct access rights over time and can be used to automate the removal of unused or inappropriate access.

What they don’t do: Access reviews do not enforce MFA or modify authentication policies. They are not the right tool for adding conditional access policy rules.

Relevance to the scenario: Access reviews do not meet the requirement to mandate MFA based on login location.

Conditional Access Policies:

Why needed: To enforce MFA based on the login location, you must use Azure AD Conditional Access policies. These policies allow you to set conditions for access, such as location, device, application, and more.

How it would work: You would create a conditional access policy that applies to all users in Group1 that are accessing the Azure portal. You would add a condition for locations that are not allowed, and enforce MFA if login attempts originate from these locations.

Direct solution to the problem: Conditional access policies allow the administrator to directly control the authentication behavior based on specific conditions.
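
A minimal sketch, assuming a small admin group and two trusted countries, of the decision such a Conditional Access policy expresses (plain Python, not Azure AD itself): the group membership and the sign-in country determine whether MFA is demanded.

```python
ADMIN_GROUP = {"alice@contoso.com", "bob@contoso.com"}   # stand-in for Group1
ALLOWED_COUNTRIES = {"US", "GB"}                         # named locations where admins work

def access_decision(user, country, mfa_completed):
    """Toy evaluation of a 'require MFA from unexpected locations' policy."""
    if user not in ADMIN_GROUP:
        return "not in scope"
    if country in ALLOWED_COUNTRIES:
        return "grant"
    return "grant" if mfa_completed else "challenge for MFA"

print(access_decision("alice@contoso.com", "US", mfa_completed=False))  # grant
print(access_decision("alice@contoso.com", "RU", mfa_completed=False))  # challenge for MFA
```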

Why other option is not correct:

Yes: The access review is not designed to meet the goal of enforcing MFA based on specific user or location combinations.

In Summary:

Creating an access review for Group1 will not enforce MFA for login attempts from specified countries. Access reviews are designed to manage who has access, not the conditions under which they access resources. You need to use Azure AD Conditional Access for that purpose.

Important Notes for the Azure 304 Exam:

Conditional Access: You must understand how Azure AD Conditional Access works, including how to create policies, configure conditions, and enforce access controls.

Access Reviews: You must understand the use of Azure AD Access Reviews and their purpose. Know when to use access reviews as opposed to conditional access policies.

Multi-Factor Authentication (MFA): Know how to enforce MFA for different user scenarios.

Location-Based Access: Be familiar with how to use locations to define conditions for conditional access policies.

Azure AD Security: Know the various ways to protect your Azure AD environment. Be familiar with the different security features available in Azure AD.

Correct Solution: Be able to choose the correct solution for a problem by understanding what tools are designed for which tasks.

Key Takeaway: The primary takeaway from this question is that Access Reviews are for access governance (who has access), and Conditional Access is for controlling authentication conditions (how users access resources).

32
Q

You have an Azure Active Directory (Azure AD) tenant and Windows 10 devices.

You configure a conditional access policy as shown in the exhibit. (Click the Exhibit tab.)

MFA Policy

Name: MFA policy
Assignments

Users and groups: All users included and specific…
Cloud apps or actions: All cloud apps
Conditions: 0 conditions selected
Access controls

Grant: 2 controls selected
Session: 0 controls selected
Enable policy: Off

Grant

Select the controls to be enforced:
☐ Block access
☑ Grant access
☑ Require multi-factor authentication
☐ Require device to be marked as compliant
☑ Require Hybrid Azure AD joined device
☐ Require approved client app
☐ Require app protection policy (Preview)
For multiple controls:

☐ Require all the selected controls
☑ Require one of the selected controls
Warning: Don’t lock yourself out! Make sure that your device is Hybrid Azure AD Joined.

What is the result of the policy?

A. All users will always be prompted for multi-factor authentication (MFA).
B. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are NOT joined to Azure AD.
C. All users will be able to sign in without using multi-factor authentication (MFA).
D. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are joined to Azure AD.

A

The correct answer is: C. All users will be able to sign in without using multi-factor authentication (MFA).

Explanation:

Here’s a breakdown of why this is the case:

Policy is Disabled: The most important thing to notice is that the Enable policy switch is set to Off. This means the conditional access policy is not active and has no effect on the sign-in process.

Conditional Access Policies must be Enabled: For a conditional access policy to have any effect, it must be enabled. Since this is not the case, the specified users will not be prompted to use multi-factor authentication (MFA).

Why the Other Options are Incorrect:

A. All users will always be prompted for multi-factor authentication (MFA). This is incorrect because the policy is disabled. Even if the policy were enabled, not every sign-in would be prompted for MFA: the grant controls use “Require one of the selected controls”, so a sign-in from a Hybrid Azure AD joined device would satisfy the policy without an MFA prompt.

B. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are NOT joined to Azure AD. This is incorrect because the policy is disabled, so no sign-in is prompted at all. Note also that the control in the exhibit is “Require Hybrid Azure AD joined device”, which is not the same as simply being joined to Azure AD.

D. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are joined to Azure AD. This is incorrect because the policy is disabled. Even if it were enabled, a device that satisfies the Hybrid Azure AD joined control would not be prompted for MFA, so this statement would still be wrong.

In Summary:

Because the policy is disabled, the configured requirements for multi-factor authentication (MFA) will not be enforced. All users will be able to sign in without being prompted for multi-factor authentication.

Important Notes for Azure 304 Exam:

Conditional Access Policy Status: Always pay close attention to whether a conditional access policy is enabled or disabled. This is a critical detail often overlooked.

Conditions: Remember that conditional access policies are enforced only when the specified conditions are met. If there are no conditions, then the policy will apply to all sign-in attempts.

Controls: The “grant” settings in a conditional access policy define the requirements that need to be met for access.

“Require one of the selected controls” vs. “Require all the selected controls”: The “Require one of the selected controls” setting means that the user only needs to satisfy at least one of the required controls. “Require all the selected controls” means that the user must satisfy every control.
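
This distinction matters for the exhibit above: with “Require one of the selected controls”, a sign-in from a Hybrid Azure AD joined device would satisfy the policy without any MFA prompt (if the policy were enabled). A small sketch, assuming the two controls shown:

```python
def grant(satisfied_controls, required=("mfa", "hybrid_joined"), mode="any"):
    """mode='any' -> 'Require one of the selected controls'
       mode='all' -> 'Require all the selected controls'"""
    check = any if mode == "any" else all
    return check(control in satisfied_controls for control in required)

# Hybrid-joined corporate desktop, no MFA performed:
print(grant({"hybrid_joined"}, mode="any"))   # True  - access granted, no MFA prompt
print(grant({"hybrid_joined"}, mode="all"))   # False - MFA would also be required
```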

Policy Evaluation: Understand that all Conditional Access policies that apply to a sign-in are evaluated together; their grant controls are combined, and a block in any applicable policy takes precedence over a grant.

Testing and Planning: It is highly recommended to test conditional access policies, and have a plan in place if you were to lock yourself out of the system.

Azure AD Security: Be proficient with Azure AD conditional access as it is an important part of securing your Azure AD environment.

33
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an on-premises Hyper-V cluster that hosts 20 virtual machines. Some virtual machines run Windows Server 2016 and some run Linux.

You plan to migrate the virtual machines to an Azure subscription.

You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.

Solution: You recommend implementing an Azure Storage account and then running AzCopy.

Does this meet the goal?

Yes
No

A

The correct answer is: No

Explanation:

AzCopy:

What it does: AzCopy is a command-line utility for copying data to and from Azure Storage. It’s great for bulk transfers, but it doesn’t provide the mechanisms for live replication or migration of virtual machine disks.

Why it’s unsuitable: When you use AzCopy to copy a VHD file (the virtual hard disk file) from Hyper-V to Azure Storage, the virtual machine will need to be powered down to ensure data consistency. This does not meet the requirement of maintaining availability during the data transfer.

Data transfer method: AzCopy is a copy tool, not a replication tool, meaning that it will create a duplicate of the data at the point in time you copy it. It is not designed to be a live synchronization method.

Requirement of “Availability”: The critical requirement here is that “the virtual machines remain available during the migration of the disks”. AzCopy does not provide live transfer or synchronization, and will require that you shut down your VM before the transfer can occur.

Azure Migrate for Live Migration: For migrating virtual machines with minimal downtime, you must use services like Azure Migrate that provide live migration functionality through agents or replication features. Azure Migrate specifically provides a method to replicate and migrate Hyper-V virtual machines to Azure with minimal disruption.

Why the Other Option is Incorrect:

Yes: The provided solution does not satisfy the requirement of keeping the VMs available during the transfer and therefore this is the wrong answer.

In Summary:

AzCopy is a useful tool for transferring data but not for live migration or replication. AzCopy does not allow the VMs to remain available, which is a core requirement, so the solution does not meet the goal.

Important Notes for the Azure 304 Exam:

Azure Migrate: Be familiar with the various migration tools, especially Azure Migrate. Understand how it can be used to migrate virtual machines (including Hyper-V) to Azure with minimal downtime using live replication technologies.

Replication vs. Copy: Understand the difference between replicating data and simply copying it. Replication implies an ongoing synchronization process.

Migration Methods: Understand different methods of migrating virtual machines. Know when a full migration is required versus a live migration.

AzCopy: Be familiar with AzCopy’s role for data transfers, but understand its limitations for live migration scenarios.

Virtual Machine Availability: Always prioritize keeping virtual machines available during a migration scenario. Understand how to use Azure tools to meet that objective.

Data Consistency: Understand how powering down virtual machines before copying data can help ensure data consistency.

Key Takeaway: You must understand the specific tool for the specific task. AzCopy is great for data transfer, but for migrating live VM disks, you should use Azure Migrate.

33
Q

CORRECT TEXT

You have an Azure subscription named Subscription1 that is linked to a hybrid Azure Active Directory (Azure AD) tenant.

You have an on-premises datacenter that does NOT have a VPN connection to Subscription1. The datacenter contains a computer named Server1 that has Microsoft SQL Server 2016 installed. Server1 is prevented from accessing the internet.

An Azure logic app named LogicApp1 requires write access to a database on Server1.

You need to recommend a solution to provide LogicApp1 with the ability to access Server1.

What should you recommend deploying on-premises and in Azure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area
On-premises:
A Web Application Proxy for Windows Server
An Azure AD Application Proxy connector
An On-premises data gateway
Hybrid Connection Manager
Azure:
A connection gateway resource
An Azure Application Gateway
An Azure Event Grid domain
An enterprise application

A

Correct Answers:

On-premises: An On-premises data gateway

Azure: A connection gateway resource

Explanation:

On-premises Data Gateway:

Why it’s correct: The On-premises Data Gateway is the essential component for providing secure access from cloud services like Logic Apps to on-premises data sources. It acts as a bridge between your Azure environment and your private network.

How it Works: You install the gateway on a computer within your on-premises network, and then the gateway is registered with the Azure service that needs to access your on-premise data. The gateway manages the connection securely and provides a communication channel between Azure and your on-premises environment without exposing your on-premises network to the public internet.

Relevance to the scenario: It directly addresses the need for LogicApp1 to reach the on-prem SQL Server.

Connection Gateway Resource (Azure):

Why it’s correct: In Azure, you will require a connection gateway that is used to register your on-prem data gateway. This Azure resource provides a bridge between the cloud and the on-premises environment. This resource is also used when configuring your logic app to access the on-premises resource.

How it Works: When you connect to an on-premises data source in Logic Apps (or other services like Power BI), you’ll select the connection gateway from a drop down list. Logic Apps use this Azure resource to send queries to the on-premises data gateway to be processed in your private network, enabling secure access to the SQL Server database.

Relevance to the scenario: This Azure component works in conjunction with the on-premises data gateway to establish the connection and is a required part of the access model.

Why Other Options Are Incorrect:

On-premises Options:

A Web Application Proxy for Windows Server: WAP is used for publishing web applications to the internet, not for connecting cloud services to on-premises databases. It is typically used for HTTP proxying and authentication, not direct database access from the cloud.

An Azure AD Application Proxy connector: This is used to publish internal web applications, not access databases, and is not designed for data sources. It also does not create the secure channel between Azure and the on-prem network.

Hybrid Connection Manager: This component supports App Service Hybrid Connections, which give App Service and Azure Functions apps TCP access to specific on-premises endpoints. The Logic Apps SQL Server connector relies on the on-premises data gateway instead, so Hybrid Connection Manager is not the component required here.

Azure Options:

An Azure Application Gateway: This is a web traffic load balancer for HTTP traffic, not used for connecting to on-premises data sources. Application gateway is used to proxy HTTP traffic from the public internet, and is not used as an intermediary for connecting cloud services to on-premise data.

An Azure Event Grid domain: Event Grid is a message broker used for event-driven architectures, not for database access. While an event could trigger a logic app, this service is not relevant to this use case.

An enterprise application: An enterprise application in Azure AD is a representation of a cloud service or an application for authentication, not used for direct data connectivity. It also does not create the secure channel between Azure and the on-prem network.

In Summary:

The correct combination is the on-premises data gateway and a connection gateway in Azure. The on-premises data gateway acts as a secure bridge from Azure to on-premises data, while the connection gateway provides the link to this bridge from Azure.

Important Notes for Azure 304 Exam:

On-premises Data Gateway: You MUST know what this is, how it works, and when to use it for cloud-to-on-premises connectivity for services like Logic Apps, Power BI, and others.

Hybrid Connectivity: Understand the challenges of hybrid architectures, including connectivity, security, and data access.

Logic Apps: Be familiar with Logic Apps and their ability to connect to a variety of data sources, including on-premises resources.

Azure Network Services: Understand different Azure networking services and be able to pick the correct one for the job, such as Virtual Network Gateways, Application Gateway, and Hybrid Connections.

Security: Be familiar with the various security methods available and best practices for securely connecting to on-premise resources.

Data Integration: Be able to connect and query various types of data sources in the cloud.

33
Q

You have 70 TB of files on your on-premises file server.

You need to recommend a solution for importing data to Azure. The solution must minimize cost.

What Azure service should you recommend?

Azure StorSimple
Azure Batch
Azure Data Box
Azure Stack

A

The correct answer is: Azure Data Box

Explanation:

Azure Data Box:

Why it’s the best choice for large data imports: Azure Data Box is a physical appliance that Microsoft ships to your location. You copy your data to the device, ship it back to Microsoft, and they upload the data to your Azure Storage account. This method is optimized for large data transfers like 70 TB, where network transfer can be slow and costly.

Cost Optimization: Azure Data Box is designed for cost efficiency. It avoids the need for significant bandwidth upgrades, which can be very expensive for a dataset as large as 70 TB. You pay for the Data Box device and shipping, plus the normal Azure Storage costs once the data is uploaded; this is typically far cheaper than pushing 70 TB over the internet.

Relevance to the scenario: The scenario is large data transfer and minimizing costs. Data Box is the best option.

Data Transfer: It allows you to copy data from your on-premises file server to the Data Box device and return the data to Microsoft to be uploaded to your Azure storage.
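
A quick back-of-the-envelope calculation shows why shipping a device usually beats uploading 70 TB over a typical internet link; the link speeds and the 80 percent effective utilization are illustrative assumptions.

```python
def upload_days(data_tb, link_mbps, efficiency=0.8):
    """Days needed to push `data_tb` terabytes over a `link_mbps` uplink."""
    bits = data_tb * 1e12 * 8                    # decimal terabytes -> bits
    effective_bps = link_mbps * 1e6 * efficiency
    return bits / effective_bps / 86_400         # seconds per day

print(f"{upload_days(70, 100):.0f} days at 100 Mbps")   # roughly 81 days
print(f"{upload_days(70, 500):.0f} days at 500 Mbps")   # roughly 16 days
```

A standard Data Box offers roughly 80 TB of usable capacity, so a single device comfortably covers this 70 TB transfer.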

Why Other Options are Incorrect:

Azure StorSimple: Azure StorSimple is a hybrid cloud storage solution that primarily focuses on tiered storage, cloud backup, and disaster recovery. While it can store large volumes of data in Azure, its primary purpose is not for bulk data migration like this scenario, but rather for active, tiered storage and backups. Additionally, StorSimple is now end of life and should not be used.

Azure Batch: Azure Batch is a service for running large-scale parallel compute jobs, not for data transfers from on-premises environments. It’s used for processing data in the cloud, not getting data into the cloud.

Azure Stack: Azure Stack is an on-premises extension of Azure, designed to run Azure services within your own datacenter. It is not a data import tool; it is a complex solution for running a local private cloud and would not be appropriate for this situation.

In Summary:

Azure Data Box is the most cost-effective and practical option for transferring a large dataset like 70 TB from an on-premises file server to Azure. It avoids the high costs and slow speeds associated with network data transfer for that volume.

Important Notes for Azure 304 Exam:

Data Import Options: Be familiar with the different ways to import data into Azure, including Azure Data Box, Azure Import/Export service, AzCopy, and direct network transfers.

Azure Data Box Family: Understand the different types of Data Box devices (Data Box Disk, Data Box, Data Box Heavy), and when to use each based on the amount of data and the transfer speed requirements.

Cost Optimization: Be able to choose the most cost-effective solution for different types of data transfer based on the volume of data and the limitations of internet connectivity.

Bandwidth Limitations: Understand the limitations of internet bandwidth and when a physical transfer method (Data Box) is more appropriate than a network transfer.

Data Transfer Scenarios: Know which Azure services are designed for data migration vs. other purposes (e.g., StorSimple for hybrid storage, Batch for computation).

Service Selection: Be able to choose the best Azure service for a given task, especially the appropriate data transfer service based on requirements and scenario constraints.

33
Q

Your company has several Azure subscriptions that are part of a Microsoft Enterprise Agreement.

The company’s compliance team creates automatic alerts by using Azure Monitor.

You need to recommend a solution to automatically recreate the alerts in the new Azure subscriptions that are added to the Enterprise Agreement.

What should you include in the recommendation?

Azure Automation runbooks
Azure Log Analytics alerts
Azure Monitor action groups
Azure Resource Manager templates
Azure Policy

A

The correct answer is: Azure Policy

Explanation:

Azure Policy:

Why it’s the best fit: Azure Policy is the ideal solution for enforcing compliance and standardization across Azure subscriptions. It allows you to define policies that automatically deploy resources (including alerts) to new subscriptions as they are added to the management scope.

How it works for alerts: You can create an Azure Policy definition that specifies the configuration of your desired Azure Monitor alerts, typically using the deployIfNotExists effect so the alert resources are deployed automatically wherever they are missing. You then assign this policy at the management group level (or at the root management group covering the Enterprise Agreement). Any new subscription created in or moved into that management group automatically has the policy applied.

Automatic Deployment: When the policy is applied to a new subscription, it will automatically deploy the required Azure Monitor alerts, eliminating manual configuration.

Compliance Enforcement: Azure Policy ensures that new subscriptions are compliant to the required alert configurations.

Relevance to the scenario: The requirement is automated deployment of alerts to new subscriptions, and Azure Policy is the best tool to achieve this.
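
A heavily trimmed sketch of the shape such a policy definition takes, written as a Python dictionary rather than the full policy JSON. The display name is a placeholder, and the alert criteria, existence condition, and embedded deployment template are deliberately omitted; the key point is the deployIfNotExists effect, which deploys the alert when the policy finds it missing.

```python
import json

# Hypothetical, heavily simplified policy definition: "every subscription in scope
# must have the compliance team's Azure Monitor alert; deploy it if it is missing."
alert_policy = {
    "properties": {
        "displayName": "Deploy compliance metric alert (example)",
        "mode": "All",
        "policyRule": {
            "if": {"field": "type", "equals": "Microsoft.Resources/subscriptions"},
            "then": {
                "effect": "deployIfNotExists",
                "details": {
                    "type": "Microsoft.Insights/metricAlerts",
                    # existenceCondition, roleDefinitionIds and the embedded ARM
                    # deployment template are omitted from this sketch.
                },
            },
        },
    }
}

print(json.dumps(alert_policy, indent=2))
```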

Why Other Options are Incorrect:

Azure Automation runbooks: While Automation runbooks can be used to deploy resources, they are not the best choice for automated enforcement. You’d need to write and trigger the runbook yourself, or create a complicated external system to trigger this, so it does not fulfill the “automatic” requirement. This is not the best practice for compliance.

Azure Log Analytics alerts: Log Analytics alerts are a type of alert that can be deployed, but they don’t provide the mechanism for automatically creating the alerts. Log analytics is the alert source, not the deployment mechanism.

Azure Monitor action groups: Action groups are used to define what happens when an alert is triggered (e.g., send an email), but they do not provide the means to create the alerts. Action groups are not the correct service to solve this use case.

Azure Resource Manager templates: ARM templates are used to define infrastructure as code. While you could use ARM templates to create alerts, you would still need a mechanism to automatically deploy them to new subscriptions. ARM templates alone do not handle the requirement to deploy to all new subscriptions automatically.

In Summary:

Azure Policy is the correct solution because it provides the best way to automatically and consistently deploy the Azure Monitor alerts to every new subscription within the scope of an Azure Management Group, or the entire Enterprise Agreement. This meets the requirement to ensure automatic deployment and management.

Important Notes for the Azure 304 Exam:

Azure Policy: You must be very comfortable with Azure Policy: how to define policies, how to assign policies, and how to use them to enforce compliance and standards.

Management Groups: Understand the use of Azure Management Groups for organizing subscriptions and managing policies at scale.

Azure Monitor: Be familiar with the various Azure Monitor capabilities, including alerts, metrics, logs, and action groups.

Infrastructure as Code (IaC): While ARM templates can be used to provision resources, they’re not the best tool for automated enforcement across an organization.

Automatic Resource Deployment: Be able to choose a solution to provide the automated deployment of services, and prioritize tools that automate the compliance configuration.

Compliance: Understand the importance of compliance and standardization in enterprise environments, and know that Azure Policy is the tool to solve most compliance problems.

Service Selection: Choose the right Azure service for the job. Azure Policy for compliance and automated enforcement, and other services for resource deployment, and for different types of alerts.

34
Q

Note: This question is part of series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.

Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.

You need to enable single sign-on (SSO) for company users.

Solution: Configure an AD DS server in an Azure virtual machine (VM). Configure bidirectional replication.

Does the solution meet the goal?

Yes
No

A

The correct answer is: No

Explanation:

The Problem: The goal is to enable SSO for cloud apps from corporate desktops connected to the local network. The provided solution is too complex and does not directly solve this.

The Provided Solution (Azure VM with AD DS):

What it does: This solution involves creating a domain controller in an Azure VM and setting up bidirectional replication with the on-premises AD DS. While this provides a backup domain controller and can handle authentication for Azure-based resources, it does not directly enable SSO from the on-premises desktops to cloud applications.

Why it doesn’t meet the goal: Having a domain controller in Azure does not inherently enable SSO between on-premise computers and cloud based applications. The on-premise computers are still authenticated with the on-premise AD DS domain. There would be no SSO for the cloud apps if this was implemented.

Required Solution: Seamless SSO (Azure AD Connect) or Password Hash Synchronization

Why it’s correct: To achieve SSO with cloud applications, you need to have some method to synchronize authentication information from on-premises AD to Azure AD, and then use a technology like seamless SSO to automatically authenticate users without requiring a password prompt. The most commonly used technologies to achieve this are:

Password Hash Synchronization: This option allows you to use the same password for Azure AD as your on-prem AD password, so the user does not have to use a different password when logging into a cloud application.

Pass-through Authentication: You can also configure pass-through authentication to have Azure AD authenticate directly with your on-premises AD.

Seamless SSO: Azure AD Seamless Single Sign-On works alongside either of the two options above and provides a zero-touch sign-in experience. With Seamless SSO, users on domain-joined corporate devices connected to the corporate network are signed in to Azure AD applications without being prompted for their username and password, which is exactly the scenario described.

Why other options are not correct:

Yes: The solution does not meet the requirement of providing seamless single sign-on. A domain controller in Azure is not enough to achieve SSO for cloud apps from on-premise desktops.

In summary:

While an Azure-based domain controller and bidirectional replication are important for redundancy, it doesn’t address the actual requirement of SSO for cloud applications from on-premises desktops. You need Azure AD Connect with a method such as seamless SSO for this.

Important Notes for the Azure 304 Exam:

Azure AD Connect: You MUST know how Azure AD Connect works, including the synchronization options.

Single Sign-On (SSO): Be familiar with SSO and how it enables users to access multiple applications with a single set of credentials.

Seamless SSO: Understand how Seamless SSO works and what it requires.

Password Hash Synchronization and Pass-through Authentication: Understand the options for how Azure AD will authenticate users, and the differences between them.

Hybrid Identity: Understand the different components of a hybrid identity model.

On-Premises AD DS: Understand what it is, and how to connect it to Azure AD.

Correct Tool: Be able to choose the correct tool and service for the job. Be able to map a requirement to a solution, and to differentiate between solutions that seem similar.

Key Takeaway: Be able to distinguish between a solution that replicates AD data, and a solution that solves the problem of SSO with cloud apps, they are distinct requirements.

35
Q

HOTSPOT

You configure the Diagnostics settings for an Azure SQL database as shown in the following exhibit.

*Name
Diags

[ ] Archive to a storage account
[ ] Stream to an event hub
[x] Send to Log Analytics

Log Analytics
OMSWKspace1

LOG
[x] SQLInsights
[x] AutomaticTuning
[x] QueryStoreRuntimeStatistics
[x] QueryStoreWaitStatistics
[x] Errors
[x] DatabaseWaitStatistics
[ ] Timeouts
[ ] Blocks
[x] Deadlocks
[ ] Audit
[x] SQLSecurityAuditEvents

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

Answer Area
To perform real-time reporting by using
Microsoft Power BI, you must first
[answer choice]
clear Send to Log Analytics
clear SQLInsights
select Archive to a storage account
select Stream to an event hub
Diagnostics data can be reviewed in
[answer choice]
Azure Analysis Services
Azure Application Insights
Azure SQL Analytics
Microsoft SQL Server Analysis Services (SSAS)
SQL Health Check

A

Statement 1: To perform real-time reporting by using Microsoft Power BI, you must first [select Stream to an event hub].

Why this is correct:

Real-Time Data: Power BI is excellent for data visualization, but it relies on having a real-time data source. Streaming data to an event hub is the best method to allow Power BI to perform analysis on that stream of data.

Event Hub Integration: Event Hubs are designed for high-throughput, real-time data ingestion. Once the diagnostics are streamed to an event hub, they can feed real-time Power BI reporting, typically via Azure Stream Analytics, which reads from the event hub and writes to a Power BI streaming dataset. Streaming to an event hub is therefore the prerequisite step.
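
For reference, here is a sketch of reading that diagnostics stream with the azure-eventhub Python package; in a real real-time dashboard this role is typically played by Azure Stream Analytics feeding a Power BI streaming dataset, and the connection string, event hub name, and consumer group below are placeholders.

```python
from azure.eventhub import EventHubConsumerClient  # pip install azure-eventhub

CONN_STR = "<event-hub-namespace-connection-string>"  # placeholder
EVENTHUB_NAME = "insights-logs-sqlinsights"           # placeholder name

def on_event(partition_context, event):
    # Each event contains a batch of diagnostic log records serialized as JSON.
    print(partition_context.partition_id, event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME)

with client:
    # Blocks and invokes on_event for every record streamed by the diagnostic setting.
    client.receive(on_event=on_event, starting_position="-1")
```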

Other Options:

Clear Send to Log Analytics would disable logging to Azure Log Analytics, it does not allow for real time reporting.

Clear SQLInsights also disables logging to Azure Log Analytics, which does not enable real time reporting.

Select Archive to a storage account is for retaining data, but it is not suitable for real-time Power BI reporting: the data is written to blobs rather than a stream, so Power BI cannot consume it in real time.

Important Azure 304 Exam Note: Understand different methods for real time ingestion of data, and be familiar with which azure services can consume a stream of data from an event hub.

Statement 2: Diagnostics data can be reviewed in [Azure SQL Analytics].

Why this is correct:

Azure SQL Analytics: This is specifically designed for analyzing data that is sent to log analytics from SQL databases. Azure SQL Analytics is a pre-built solution that provides dashboards and analytics to visualize and monitor SQL Database performance metrics that are sent to Log Analytics.

Log Analytics Integration: Because the diagnostic settings are configured to send data to Log Analytics, this data is stored and available to be consumed by Azure SQL Analytics.

Other Options:

Azure Analysis Services is used for creating an enterprise BI solution but cannot directly review the data being sent to Log Analytics.

Azure Application Insights is primarily used for application performance monitoring. It does not receive the logging information from Azure SQL DB.

Microsoft SQL Server Analysis Services (SSAS) is used for on-premises SQL analysis and is not suitable for data from Azure SQL.

SQL Health Check is a feature that helps identify potential problems in a SQL database; it does not provide the analytic tools to review the diagnostics data being sent to Log Analytics.

Important Azure 304 Exam Note: Understand how diagnostic settings integrate with Azure Monitor Logs and Azure SQL Analytics. Be able to pick out the correct tool for analyzing and visualizing diagnostic logs.

Summary of Correct Answers:

Statement 1: select Stream to an event hub

Statement 2: Azure SQL Analytics

Key Takeaways for Azure 304 Exam:

Diagnostic Settings: You should understand the options for configuring diagnostic settings for various Azure resources.

Real-Time Data Streaming: Be familiar with using Event Hubs for real-time ingestion and consumption of data streams.

Log Analytics Integration: Understand how diagnostic data is sent to Log Analytics, and how you can query and analyze it.

Azure SQL Analytics: Know how Azure SQL Analytics provides monitoring and analysis capabilities for Azure SQL databases.

Power BI Integration: Be familiar with how Power BI connects with Azure services for data analysis, and recognize the best data ingestion methods for streaming data to Power BI.

Service Selection: Be able to select the most appropriate Azure service for a given task. For example: Azure SQL Analytics to review SQL diagnostics data, event hubs for streaming data, and Log Analytics for log storage and analysis.

36
Q

DRAG DROP

You have an on-premises datacenter named Site1. Site1 contains a VMware vSphere cluster named Cluster1 that hosts 100 virtual machines. Cluster1 is managed by using VMware vCenter.

You have an Azure subscription named Sub1.

You plan to migrate the virtual machines from Cluster1 to Sub1.

You need to identify which resources are required to run the virtual machines in Azure. The solution must minimize administrative effort.

What should you configure? To answer, drag the appropriate resources to the correct targets. Each resource may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Resources
An Azure Migrate appliance
An Azure Migrate project
An Azure VMware Solution private cloud
An Azure VMware Solution host
Answer Area
Sub1: Resource
Cluster1: Resource

A

For Sub1 (Azure Subscription):

From the options:

An Azure Migrate appliance: This is deployed on-premises, not in Azure.

An Azure Migrate project: This is created in Azure to manage and orchestrate the migration process. It’s essential for managing the migration within Azure.

An Azure VMware Solution private cloud: This is the Azure service that allows you to run VMware environments natively in Azure. It’s the target environment in Azure for running the VMs with minimal changes, thus minimizing administrative effort post-migration because it’s VMware-compatible.

An Azure VMware Solution host: Hosts are part of the AVS private cloud infrastructure but are managed by the AVS service, not directly by the user in this context.

The Azure VMware Solution private cloud is the most critical resource in Sub1 for running the VMs in Azure with minimal administrative effort. It provides the VMware-compatible environment in Azure. While Azure Migrate project is important for managing the migration, the AVS private cloud is the destination and core requirement for running the VMs in a VMware-like way in Azure.

Therefore, for Sub1, the closest with correct answer is: An Azure VMware Solution private cloud.

For Cluster1 (On-premises VMware):

From the options:

An Azure Migrate appliance: This is deployed on-premises to discover, assess, and migrate VMs from VMware. It’s the essential component on the on-premises side to interact with vCenter and facilitate the migration to Azure using Azure Migrate.

An Azure Migrate project: This is in Azure.

An Azure VMware Solution private cloud: This is in Azure.

An Azure VMware Solution host: Hosts are in Azure.

The Azure Migrate appliance is the most critical resource within Cluster1 (Site1) to enable the migration process itself. Without it, Azure Migrate cannot discover and migrate the VMs from the on-premises vSphere environment.

Therefore, for Cluster1, the closest with correct answer is: An Azure Migrate appliance.

Final Answer based on single selection “closest with correct answer” interpretation:

Sub1: Resource: An Azure VMware Solution private cloud
Cluster1: Resource: An Azure Migrate appliance

37
Q

HOTSPOT

You plan to deploy a containerized web app that will be hosted in five Azure Kubernetes Service (AKS) clusters. Each cluster will be hosted in a different Azure region.

You need to provide access to the app from the internet.

The solution must meet the following requirements:

  • Incoming HTTPS requests must be routed to the cluster that has the lowest network latency.
  • HTTPS traffic to individual pods must be routed via an ingress controller.
  • In the event of an AKS cluster outage, failover time must be minimized.

What should you include in the solution? To answer, select the appropriate options in the answer area.
Answer Area
For global load balancing:
Azure Front Door
Azure Traffic Manager
Cross-region load balancing in Azure
Standard Load Balancer
As the ingress controller:
Azure Application Gateway
Azure Standard Load Balancer
Basic Azure Load Balancer

A

To address the requirements for a globally distributed containerized web app across five AKS clusters, we need to select the most appropriate Azure services for global load balancing and ingress control within AKS.

For global load balancing (routing to the cluster with the lowest network latency and minimizing failover time):

Azure Front Door: Azure Front Door is a global, scalable entry point that uses Microsoft’s global edge network to create fast, secure, and widely scalable web applications. It offers:

Lowest latency routing: Front Door can route requests to the backend with the lowest latency among the configured AKS clusters in different regions.

Fast failover: In case of an AKS cluster outage, Front Door can quickly detect the unhealthy backend and automatically route traffic to healthy clusters, minimizing failover time.

HTTPS support and WAF: Front Door inherently supports HTTPS and can be integrated with Web Application Firewall (WAF) for security.

Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. While it can route traffic based on performance (latency), the failover process is slower compared to Front Door because it relies on DNS propagation, which can take time to update across the internet. It is also less equipped for web application specific traffic management compared to Front Door.

Cross-region load balancing in Azure: This is a general concept and not a specific Azure service name from the provided options. We need to select from the given services.

Standard Load Balancer: Azure Standard Load Balancer is a regional load balancer. It operates within a single Azure region and cannot provide global load balancing or cross-region failover across the five AKS clusters located in different regions.

Therefore, for global load balancing, Azure Front Door is the most suitable and closest correct answer.

As the ingress controller (HTTPS traffic routing to individual pods within AKS):

Azure Application Gateway: Azure Application Gateway is a regional web traffic load balancer that can act as an ingress controller for AKS. It offers:

Layer 7 Load Balancing: Application Gateway operates at the application layer (Layer 7), enabling advanced routing decisions based on HTTP headers, paths, and cookies, which is necessary for routing HTTPS traffic to individual pods.

SSL Termination: Application Gateway can handle SSL termination, offloading the SSL encryption/decryption process from the AKS clusters.

WAF Integration: Application Gateway can be integrated with Web Application Firewall (WAF) for enhanced security at the ingress point.

Azure Standard Load Balancer: Azure Standard Load Balancer is a regional load balancer that operates at Layer 4 (Transport Layer). While it can be used as an ingress controller, it primarily provides basic load balancing based on IP addresses and ports. It’s less feature-rich for web application ingress scenarios compared to Application Gateway, especially for HTTPS routing and advanced features like SSL termination and WAF.

Basic Azure Load Balancer: Basic Azure Load Balancer is also a Layer 4 load balancer with even fewer features than Standard Load Balancer. It is not recommended for production web applications and is less suitable as an ingress controller compared to Application Gateway or even Standard Load Balancer in complex scenarios.

Therefore, as the ingress controller, Azure Application Gateway is the most suitable and closest correct answer because it is specifically designed for web application traffic and provides the necessary features for HTTPS routing, SSL termination, and potential WAF integration within AKS.

Final Answer:

For global load balancing: Azure Front Door
As the ingress controller: Azure Application Gateway
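
As a purely conceptual sketch (no Azure SDK involved; the cluster names and latency figures below are invented), this is the selection behavior that makes the combination work: Front Door's health probes remove a failed cluster from rotation immediately, each request goes to the healthy backend with the lowest measured latency, and the Application Gateway ingress controller then routes it to the right pod inside that cluster.

# Conceptual illustration only: latency-based routing with health checks.
# Backend names and latency values are invented for the example.

backends = {
    "aks-westeurope": {"healthy": True, "latency_ms": 18},
    "aks-eastus": {"healthy": True, "latency_ms": 95},
    "aks-southeastasia": {"healthy": False, "latency_ms": 40},  # failed health probe
}

def pick_backend(backends):
    """Return the healthy backend with the lowest latency, or None if all are down."""
    healthy = {name: b for name, b in backends.items() if b["healthy"]}
    if not healthy:
        return None
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(pick_backend(backends))  # -> aks-westeurope

# Simulate a regional outage: the health probe marks the backend down, and the very
# next request is routed to the next-best healthy cluster (no DNS TTL to wait out,
# which is why failover is faster here than with DNS-based Traffic Manager).
backends["aks-westeurope"]["healthy"] = False
print(pick_backend(backends))  # -> aks-eastus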

38
Q

HOTSPOT

You have five .NET Core applications that run on 10 Azure virtual machines in the same subscription.

You need to recommend a solution to ensure that the applications can authenticate by using the same Azure Active Directory (Azure AD) identity.

The solution must meet the following requirements:

✑ Ensure that the applications can authenticate only when running on the 10 virtual machines.

✑ Minimize administrative effort.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To provision the Azure AD identity:
Create a system-assigned Managed Service Identity
Create a user-assigned Managed Service Identity
Register each application in Azure AD
To authenticate, request a token by using:
An Azure AD v1.0 endpoint
An Azure AD v2.0 endpoint
An Azure Instance Metadata Service Identity
OAuth2 endpoint

A

To meet the requirements of allowing .NET Core applications on 10 Azure VMs to authenticate using the same Azure AD identity, while ensuring authentication is limited to these VMs and minimizing administrative effort, the following options are the most appropriate:

To provision the Azure AD identity:

Create a user-assigned Managed Service Identity

Why correct: User-assigned Managed Service Identities (MSIs) are standalone Azure resources that can be associated with multiple Azure resources, such as virtual machines. By creating a single user-assigned MSI and associating it with all 10 VMs, you achieve the goal of having all applications authenticate using the same Azure AD identity. This identity is managed independently and can be granted specific permissions.

Why preferred over system-assigned: While system-assigned MSIs could also be used (and each VM would get its own MSI), using a user-assigned MSI is generally considered better for scenarios where you want to share a single identity across multiple resources. It provides more explicit control and reusability of the identity. It also better aligns with the requirement of “same Azure AD identity” as it is literally a single identity resource.

Why not “Register each application in Azure AD”: Registering each application in Azure AD is more relevant if each application needed its own distinct identity and permissions. In this case, the requirement is for them to use the same identity. Application registration is also a more complex approach if the goal is primarily VM-based authentication. Managed Identities simplify the process for Azure resources.

To authenticate, request a token by using:

An Azure Instance Metadata Service Identity

Why correct: The Azure Instance Metadata Service (IMDS) is the recommended and secure way for Azure resources (like VMs) with Managed Identities to request access tokens. IMDS is a REST API available at a well-known non-routable IP address (169.254.169.254) within the VM. It’s designed specifically for VMs to securely retrieve metadata and tokens related to their identities.

Why not “An Azure AD v1.0 endpoint / An Azure AD v2.0 endpoint”: While Azure AD v1.0 and v2.0 endpoints are the authorization endpoints for Azure AD, directly using these endpoints from within a VM to obtain tokens for Managed Identities is less secure and more complex than using IMDS. IMDS simplifies the process and is the intended method for Azure resources to get tokens for their assigned identities. IMDS handles the underlying complexities of token acquisition securely within the Azure environment.

Why not “OAuth2 endpoint”: “OAuth2 endpoint” is too generic. Azure AD v1.0 and v2.0 are OAuth2 endpoints, but IMDS is the specific mechanism for Azure resources with MSIs to interact with Azure AD to get tokens. IMDS abstracts away the direct OAuth2 interaction for the VM, making it simpler and more secure.

In summary, the best and most aligned choices are:

To provision the Azure AD identity: Create a user-assigned Managed Service Identity (for sharing a single identity across VMs and explicit control).

To authenticate, request a token by using: An Azure Instance Metadata Service Identity (the secure and recommended method for VMs with MSIs to get tokens).

Final Answer:

To provision the Azure AD identity: Create a user-assigned Managed Service Identity
To authenticate, request a token by using: An Azure Instance Metadata Service Identity
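
As a minimal sketch of what this looks like from inside one of the 10 VMs (assuming the user-assigned identity has already been assigned to the VM; the client ID value is a placeholder), the app calls the IMDS endpoint with the required Metadata header and receives a token it can present to Azure AD-protected APIs:

import json
import urllib.parse
import urllib.request

# IMDS is only reachable from inside the VM at this well-known, non-routable address.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

params = {
    "api-version": "2018-02-01",
    "resource": "https://management.azure.com/",   # audience of the requested token
    # For a user-assigned identity, specify which identity to use (placeholder value).
    "client_id": "<client-id-of-the-user-assigned-identity>",
}

request = urllib.request.Request(
    IMDS_TOKEN_URL + "?" + urllib.parse.urlencode(params),
    headers={"Metadata": "true"},   # required header; requests without it are rejected
)

with urllib.request.urlopen(request) as response:
    token = json.load(response)

print(token["token_type"], token["access_token"][:20] + "...")

Because the identity is attached to the VMs themselves, the same call fails anywhere else, which is what satisfies the "only when running on the 10 virtual machines" requirement.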

39
Q

You have an Azure web app that uses an Azure key vault named KeyVault1 in the West US Azure region.

You are designing a disaster recovery plan for KeyVault1.

You plan to back up the keys in KeyVault1.

You need to identify to where you can restore the backup.

What should you identify?

KeyVault1 only
the same region only
the same geography only
any region worldwide

A

The correct answer is the same geography only.

Here’s why:

Azure Geography: Azure geographies are defined areas of the world that contain at least one Azure region. They are designed to ensure data residency, compliance, and resilience. Examples of geographies include the United States, Europe, and Asia Pacific. West US is part of the United States geography.

Key Vault Backup and Restore Limitations: Azure Key Vault’s backup and restore functionality is designed for disaster recovery and migration within the same geography. You can restore a Key Vault backup to a different region within the same geography, but not across geographies.

Let’s look at why the other options are incorrect:

KeyVault1 only: Restoring to the same Key Vault isn’t disaster recovery. If KeyVault1 is unavailable due to a regional outage, restoring to itself won’t help. Disaster recovery aims to restore services in a different location if the primary location fails.

The same region only: While you can restore to the same region, this limits your disaster recovery options. A region-wide disaster could impact the entire region, making restoring to the same region ineffective. Restoring within the same geography but to a different region provides better resilience.

Any region worldwide: This is incorrect and against the design principles of Azure Key Vault backup and restore and Azure geographies in general. Restoring to any region worldwide would raise several concerns:

Compliance and Data Residency: Many regulations (like GDPR, HIPAA, etc.) require data to reside within specific geographic boundaries. Restoring to any region could violate these regulations.

Latency and Performance: Restoring and accessing secrets from a Key Vault in a geographically distant region could introduce significant latency and performance issues for your web app.

Security and Control: Restricting restore to the same geography provides a level of control and security by keeping your secrets within a defined geopolitical boundary.

Why “the same geography only” is the closest and correct answer:

Disaster Recovery Focus: It allows for disaster recovery by enabling restoration to a different region, protecting against regional outages.

Geography Resilience: Azure geographies are designed to be isolated from failures in other geographies. Restoring within the same geography leverages this resilience.

Compliance and Data Residency: It helps maintain compliance and data residency requirements by keeping your secrets within the defined geography.

Performance and Latency: Restoring to a region within the same geography generally minimizes latency compared to cross-geography restoration.

In summary: For Azure Key Vault disaster recovery backups, you should plan to restore your backups to the same geography only. This provides a balance of resilience, compliance, and performance for your disaster recovery strategy.

Therefore, the answer is the same geography only.
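
A minimal sketch of the backup/restore flow with the Azure SDK for Python (azure-keyvault-keys), assuming suitable Key Vault permissions; the vault URLs and key name are placeholders. The important constraint is that the target vault must be in the same Azure geography (and subscription) as the source:

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()

# Back up a key from KeyVault1 in West US.
source = KeyClient(vault_url="https://keyvault1.vault.azure.net", credential=credential)
backup_blob = source.backup_key("app1-key")   # returns an opaque, encrypted backup blob

# Restore it to a vault in another region of the SAME geography (for example, East US 2).
# Restoring the blob into a vault in a different geography is rejected.
target = KeyClient(vault_url="https://keyvault1-dr.vault.azure.net", credential=credential)
restored = target.restore_key_backup(backup_blob)
print(restored.name, restored.id)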

40
Q

You plan to provision a High Performance Computing (HPC) cluster in Azure that will use a third-party scheduler.

You need to recommend a solution to provision and manage the HPC cluster nodes.

What should you include in the recommendation?

Azure Lighthouse
Azure CycleCloud
Azure Purview
Azure Automation

A

Let’s analyze each option in the context of provisioning and managing HPC cluster nodes in Azure, especially when using a third-party scheduler:

Azure Lighthouse: Azure Lighthouse is an Azure service that enables multi-tenant management. It allows service providers to manage Azure resources across multiple customer tenants from within their own Azure tenant. Lighthouse is focused on delegated administration and cross-tenant access management, not on HPC cluster provisioning or node management. Therefore, Azure Lighthouse is not the correct choice.

Azure CycleCloud: Azure CycleCloud is an Azure service specifically designed for orchestrating and managing HPC environments in the cloud. It is designed to provision and manage HPC clusters, including:

Node provisioning: CycleCloud automates the creation and configuration of compute nodes in Azure.

Scheduler integration: CycleCloud is designed to work with various HPC schedulers, including third-party schedulers like Slurm, PBS Pro, Grid Engine, and others. It can integrate with these schedulers to dynamically scale and manage cluster resources based on job demands.

Cluster lifecycle management: CycleCloud handles the entire lifecycle of an HPC cluster, from initial provisioning and scaling to decommissioning.

Customization and flexibility: CycleCloud provides flexibility to customize cluster configurations and integrates with various Azure compute options, including virtual machines and scale sets.

Given that the requirement is to provision and manage HPC cluster nodes with a third-party scheduler, Azure CycleCloud is the most directly relevant and purpose-built service.

Azure Purview: Azure Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and SaaS data. It focuses on data discovery, data lineage, data cataloging, and data security. Purview is not involved in provisioning or managing compute resources like HPC cluster nodes. Therefore, Azure Purview is not the correct choice.

Azure Automation: Azure Automation is a cloud-based automation service. You could potentially use Azure Automation to script the provisioning and management of HPC nodes using PowerShell or Python runbooks. However:

More manual effort: Using Azure Automation for HPC cluster management would require significant manual scripting and configuration to handle node provisioning, scheduler integration, scaling, and lifecycle management.

Less specialized: Azure Automation is a general-purpose automation tool, not specifically designed for the complexities of HPC cluster management.

CycleCloud is more purpose-built: Azure CycleCloud is specifically built to simplify HPC cluster deployment and management in Azure and is a much more streamlined and efficient solution for this scenario compared to building a solution from scratch using Azure Automation.

Conclusion:

Azure CycleCloud is the most appropriate and purpose-built Azure service for provisioning and managing HPC cluster nodes, especially when using a third-party scheduler. It minimizes the administrative effort and provides the necessary features for HPC cluster lifecycle management.

Final Answer: Azure CycleCloud

41
Q

You have an app named App1 that uses two on-premises Microsoft SQL Server databases named DB1 and DB2.

You plan to migrate DB1 and DB2 to Azure.

You need to recommend an Azure solution to host DB1 and DB2.

The solution must meet the following requirements:

  • Support server-side transactions across DB1 and DB2.
  • Minimize administrative effort to update the solution.

What should you recommend?

two SQL Server databases on an Azure virtual machine
two Azure SQL databases on different Azure SQL Database servers
two Azure SQL databases in an elastic pool
two Azure SQL databases on the same Azure SQL Database managed instance

A

Let’s analyze each option against the two requirements:

Requirement 1: Support server-side transactions across DB1 and DB2.

two SQL Server databases on an Azure virtual machine: Yes. SQL Server on a Virtual Machine is essentially a full SQL Server installation. You can create multiple databases within the same SQL Server instance on the VM. Within a single SQL Server instance, server-side transactions across databases (DB1 and DB2 in this case) are fully supported using standard SQL Server transaction mechanisms, including distributed transactions if needed (though typically cross-database transactions within the same instance are not considered “distributed” in the full sense and are simpler).

two Azure SQL databases on different Azure SQL Database servers: No. Azure SQL Database (single database) is a PaaS offering where each database resides on its own logical server. Server-side transactions directly across databases residing on different Azure SQL Database servers are NOT supported. While you can query data across databases using features like elastic queries (external tables), these are not true server-side transactions that guarantee full ACID properties across independent database servers. Distributed transactions are more complex and typically involve limitations in this scenario.

two Azure SQL databases in an elastic pool: Limited support. Azure SQL databases within the same elastic pool reside on the same logical server and share a pool of resources, but they remain separate Azure SQL databases. Cross-database access requires elastic query (external tables), and support for server-side transactions spanning databases in an elastic pool is more limited than in a full SQL Server instance or a Managed Instance. While you might achieve some level of transactional consistency, that is not the design goal of elastic pools, which focus on resource sharing and cost optimization rather than cross-database transactions.

two Azure SQL databases on the same Azure SQL Database managed instance: Yes. Azure SQL Managed Instance is designed to provide near 100% compatibility with on-premises SQL Server. It fully supports server-side transactions across databases within the same Managed Instance, just like on-premises SQL Server. Features like cross-database queries and transactions work seamlessly across databases in the same Managed Instance.

Requirement 2: Minimize administrative effort to update the solution.

two SQL Server databases on an Azure virtual machine: No. SQL Server on Azure VM is IaaS (Infrastructure as a Service). While Azure manages the underlying infrastructure, you are responsible for managing the SQL Server instance itself, including patching, backups, high availability configuration, and updates of the SQL Server software. This requires significant administrative effort.

two Azure SQL databases on different Azure SQL Database servers: Yes. Azure SQL Database (single database) is PaaS (Platform as a Service). Microsoft handles most administrative tasks, including patching, backups, and high availability of the underlying SQL Server engine. This minimizes administrative effort for updating the database platform.

two Azure SQL databases in an elastic pool: Yes. Azure SQL Database (elastic pool) is also PaaS. Microsoft handles the administrative tasks for the underlying SQL Server engine and the elastic pool infrastructure, minimizing administrative effort.

two Azure SQL databases on the same Azure SQL Database managed instance: Yes. Azure SQL Managed Instance is also PaaS. Microsoft manages the underlying SQL Server engine, OS patching, backups, and high availability. This significantly reduces administrative effort compared to managing SQL Server on a VM, although Managed Instance generally requires slightly more management than single Azure SQL databases or elastic pools in certain areas (like maintenance windows).

Conclusion:

Considering both requirements, the option that best satisfies both is two Azure SQL databases on the same Azure SQL Database managed instance.

It fully supports server-side transactions across databases within the same instance, meeting the first requirement.

It is a PaaS offering, minimizing administrative effort for updates and maintenance, meeting the second requirement better than SQL Server on Azure VMs.

While “two Azure SQL databases in an elastic pool” is a PaaS option with lower admin effort, its support for server-side transactions across databases is not as robust or guaranteed as in Managed Instance. “two Azure SQL databases on different Azure SQL Database servers” fails to support server-side transactions. “two SQL Server databases on an Azure virtual machine” fails to minimize administrative effort.

Therefore, the closest and most correct answer is:

two Azure SQL databases on the same Azure SQL Database managed instance
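
To illustrate why the managed instance option satisfies the transaction requirement, here is a hedged sketch (the instance endpoint, login, and table names are placeholders invented for the example) of a single server-side transaction that writes to both databases, which works because DB1 and DB2 live on the same instance:

import pyodbc

# One connection to the managed instance; DB1 and DB2 are databases on that same instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=app1-mi.public.abc123.database.windows.net,3342;"   # placeholder endpoint
    "DATABASE=DB1;UID=app1_user;PWD=<password>;Encrypt=yes;",
    autocommit=False,
)

cursor = conn.cursor()
try:
    # Both statements run inside one transaction; three-part names cross databases
    # within the same instance, just as they do on an on-premises SQL Server.
    cursor.execute("INSERT INTO DB1.dbo.Orders (OrderId, Amount) VALUES (?, ?)", 1001, 250.0)
    cursor.execute("INSERT INTO DB2.dbo.AuditLog (OrderId, Action) VALUES (?, ?)", 1001, "created")
    conn.commit()    # both databases commit together
except Exception:
    conn.rollback()  # neither database keeps a partial change
    raise
finally:
    conn.close()

The same code against two databases on different Azure SQL Database logical servers would fail on the cross-database reference, which is the behavioral difference the question is testing.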

42
Q

You have an Azure App Service app named App1.

You need to recommend a solution to monitor the response times of App1 for end users. The solution must minimize administrative effort.

What should you include in the recommendation?

Health check in App Service
Log Analytics
Application Insights
Azure Network Watcher connection monitor

A

The correct answer is Application Insights. Here’s why:

Application Insights is designed for Application Performance Monitoring (APM): Application Insights is a service within Azure Monitor specifically built to monitor the performance, availability, and usage of web applications, including those hosted in App Service. Its core purpose is to provide deep insights into how your application is behaving, including end-user response times.

Directly Measures End-User Response Times: Application Insights automatically collects telemetry data from your application, including:

Request Duration: How long it takes for requests to your application to be processed. This directly reflects response times.

Page Load Times (for client-side monitoring): If you include the Application Insights JavaScript snippet in your web pages, it can track how long it takes for pages to load in users’ browsers, giving you true end-user perceived performance.

Dependency Performance: It tracks the performance of calls your application makes to other services (databases, APIs, etc.), helping you pinpoint bottlenecks.

Minimal Administrative Effort: Integrating Application Insights with App Service is incredibly easy and requires minimal administrative effort. You can typically enable it directly from the App Service configuration blade in the Azure portal. Often, it’s just a matter of turning it “on” and configuring a connection string. No code changes are usually required for basic setup (though you can add custom telemetry if needed).

Out-of-the-box Dashboards and Analytics: Application Insights provides pre-built dashboards and analytics tools that visualize key performance indicators (KPIs) like response times, request rates, error rates, and more. You can quickly see trends, identify performance issues, and drill down into specific requests or transactions.

Let’s look at why the other options are less suitable:

Health check in App Service:

Purpose: Primarily used to monitor the availability and health of your App Service instance itself. It periodically pings your application’s endpoint to ensure it’s responding.

Limitations: It’s a very basic check. While it can detect if your app is down, it doesn’t provide detailed response time metrics for end users. It’s not designed for performance monitoring in the way Application Insights is. It just checks if the app is alive, not how performant it is for users.

Log Analytics:

Purpose: A powerful service for collecting and analyzing logs and metrics from various Azure resources. You could potentially use Log Analytics to analyze App Service logs to infer response times, but it would require significant configuration and custom queries.

Limitations: It’s not optimized for end-user response time monitoring in the same way Application Insights is. Extracting and analyzing response times from logs would be more complex and require more administrative effort compared to the out-of-the-box capabilities of Application Insights. Log Analytics is more general-purpose and requires more manual setup for this specific task.

Azure Network Watcher connection monitor:

Purpose: Designed for monitoring network connectivity and performance between different points in your network (e.g., between Azure VMs, from Azure to on-premises). It measures network latency, packet loss, etc.

Limitations: While network latency can influence end-user response times, Connection Monitor is not directly measuring the application’s response time from the end user’s perspective. It’s focused on network infrastructure, not application performance. It’s also more complex to set up and manage for this specific requirement compared to Application Insights.

In summary:

For monitoring end-user response times of an App Service app with minimal administrative effort, Application Insights is the clear and best choice. It’s specifically built for this purpose, integrates seamlessly with App Service, and provides comprehensive performance monitoring with minimal configuration.

Therefore, the correct answer is Application Insights.
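
Once Application Insights is enabled for App1 (which can be done from the App Service blade without code changes), the collected response times can be queried. A hedged sketch, assuming workspace-based Application Insights, that the default role name matches the app name, and a placeholder workspace ID:

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Average and 95th-percentile server response time of App1 requests,
# in 15-minute buckets over the last 24 hours.
query = """
AppRequests
| where AppRoleName == 'App1'
| summarize avg(DurationMs), percentile(DurationMs, 95) by bin(TimeGenerated, 15m)
| order by TimeGenerated asc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",   # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)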

43
Q

You architect a solution that calculates 3D geometry from height-map data.

You have the following requirements:

✑ Perform calculations in Azure.

✑ Each node must communicate data to every other node.

✑ Maximize the number of nodes to calculate multiple scenes as fast as possible.

✑ Require the least amount of effort to implement.

You need to recommend a solution.

Which two actions should you recommend? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Create a render farm that uses Azure Batch.
Enable parallel file systems on Azure.
Enable parallel task execution on compute nodes.
Create a render farm that uses virtual machine (VM) scale sets.
Create a render farm that uses virtual machines (VMs).

A

Let’s analyze each option in the context of the requirements for calculating 3D geometry from height-map data in Azure.

Requirements:

Perform calculations in Azure. (All options are Azure-based)

Each node must communicate data to every other node. (Requires inter-node communication capabilities)

Maximize the number of nodes to calculate multiple scenes as fast as possible. (Scalability is crucial)

Require the least amount of effort to implement. (Simplicity and ease of use are important)

Option Analysis:

Create a render farm that uses Azure Batch:

Pros: Azure Batch is designed for running large-scale parallel and HPC workloads. It handles job scheduling, task distribution, and scaling of compute resources. It simplifies the management of a compute cluster in Azure. Batch supports direct inter-node communication when it is enabled on the pool (used for MPI-style workloads), in addition to coordination through task dependencies and shared storage accounts. Batch is designed for scalability and efficient resource utilization, and it reduces implementation effort by providing a managed service.

Cons: Might require some learning curve to set up Batch jobs and task dependencies.

Relevance: High. Azure Batch is a strong candidate for HPC workloads and meets the scalability and communication requirements.

Enable parallel file systems on Azure:

Pros: Parallel file systems (like Azure HPC Cache or Azure NetApp Files in some configurations) are crucial for HPC workloads where nodes need to share data efficiently. They improve data access performance and facilitate inter-node communication by providing a shared namespace. For geometry calculations, input height-map data and intermediate geometry data might benefit from a parallel file system.

Cons: Adds some complexity to setup and management of the file system. Cost can be a factor for high-performance parallel file systems.

Relevance: Medium to High. Parallel file systems can significantly enhance performance and simplify data sharing in a node-to-node communication scenario.

Enable parallel task execution on compute nodes:

Pros: This is a fundamental principle of parallel computing. Executing tasks in parallel on each node is essential to maximize the utilization of compute resources and speed up processing.

Cons: This is not a specific Azure service or action you “enable”. It’s a coding and architectural approach within the application itself. It’s assumed that any HPC solution will utilize parallel task execution.

Relevance: Low. This is more of a best practice than a specific recommendation for the Azure infrastructure setup.

Create a render farm that uses virtual machine (VM) scale sets:

Pros: VM scale sets allow for easy creation and management of a group of identical VMs. They provide scalability and load balancing capabilities. You could build a render farm on VM scale sets.

Cons: Requires more manual configuration and management compared to Azure Batch. You would need to handle task scheduling, distribution, and inter-node communication yourself or with additional tools. More implementation effort than Azure Batch.

Relevance: Medium. VM scale sets provide scalability, but managing the cluster and inter-node communication is less streamlined than with Azure Batch.

Create a render farm that uses virtual machines (VMs):

Pros: Direct control over individual VMs.

Cons: Highest implementation effort. Requires manual provisioning, configuration, scaling, load balancing, and managing inter-node communication. Not ideal for maximizing nodes or minimizing effort.

Relevance: Low. Too much manual effort, contradicts the requirement for least implementation effort.

Choosing the best two actions:

Considering the requirements, especially “node-to-node communication,” “maximize nodes,” and “least implementation effort,” the most impactful and efficient combination is:

Create a render farm that uses Azure Batch: This directly addresses scalability and node management, and it reduces implementation effort by providing a managed HPC service. Batch pools can be created with inter-node communication enabled, and tasks can also coordinate through task dependencies and shared storage.

Enable parallel file systems on Azure: This addresses the “node-to-node communication” requirement more directly by providing a shared, high-performance file system accessible to all compute nodes. This can be crucial for efficient data sharing and access in geometry calculations where nodes might need to exchange or access common datasets.

“Enable parallel task execution” is assumed and not a specific action in Azure infrastructure. VM scale sets and VMs are less efficient and more effort for this scenario compared to Azure Batch for a managed HPC solution.

Final Answer:

Create a render farm that uses Azure Batch.
Enable parallel file systems on Azure.
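
A hedged sketch with the azure-batch SDK of the two settings that matter here: a pool sized for many nodes and inter-node communication switched on. The account details, VM size, and image are examples or placeholders, not a prescription:

from azure.batch import BatchServiceClient
from azure.batch import models as batch_models
from azure.batch.batch_auth import SharedKeyCredentials

credentials = SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")   # placeholders
client = BatchServiceClient(
    credentials, batch_url="https://<batch-account>.<region>.batch.azure.com"
)

pool = batch_models.PoolAddParameter(
    id="heightmap-render-pool",
    vm_size="Standard_HC44rs",                 # example RDMA-capable HPC size
    target_dedicated_nodes=50,                 # scale out to as many nodes as limits allow
    enable_inter_node_communication=True,      # lets nodes talk to each other (e.g., MPI)
    virtual_machine_configuration=batch_models.VirtualMachineConfiguration(
        image_reference=batch_models.ImageReference(
            publisher="microsoft-azure-batch",
            offer="ubuntu-server-container",
            sku="20-04-lts",
            version="latest",
        ),
        node_agent_sku_id="batch.node.ubuntu 20.04",
    ),
)

client.pool.add(pool)   # Batch then handles node provisioning, scheduling, and scaling

A parallel file system (for example, Azure HPC Cache or a Lustre deployment) would then be mounted on the pool nodes via a start task, so every node sees the same height-map input and geometry output data.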

44
Q

Your company has 50 business units across the globe. The business units operate from 08:00 AM to 06:00 PM from Monday to Friday in their local time zone. Transactions are only processed during business hours.

You have an Azure subscription.

You plan to deploy an app named App1 that will manage the transactions for the business units.

App1 will use a separate Azure SQL database for each business unit.

You need to recommend an Azure SQL Database configuration for App1.

The solution must meet the following requirements:

  • Support Azure Hybrid Benefit licensing.
  • Minimize costs.

What should you recommend?

a vCore purchasing model and multiple single database instances
a DTU purchasing model and multiple single database instances
a vCore purchasing model and multiple database instances in an elastic pool
a DTU purchasing model and multiple database instances in an elastic pool

A

Let’s analyze each option based on the requirements for cost minimization and Azure Hybrid Benefit support, considering the business hours operation and multiple databases.

Requirements:

Support Azure Hybrid Benefit licensing: This is crucial for cost reduction if the company already has SQL Server licenses.

Minimize costs: This is the primary goal, considering the usage pattern and number of databases.

Separate Azure SQL database for each business unit: This implies managing multiple databases.

Business hours operation (08:00 AM to 06:00 PM Monday to Friday local time): This indicates that the databases will have periods of low or no activity outside business hours.

Analyzing each option:

a vCore purchasing model and multiple single database instances:

vCore model: Supports Azure Hybrid Benefit, which can significantly reduce licensing costs if the company has eligible SQL Server licenses.

Multiple single database instances: Each database is provisioned and billed independently. This can be costly if each database is provisioned for peak capacity but is mostly idle outside business hours. Provisioning 50 separate single databases can be expensive if not carefully managed.

Cost efficiency: Potentially less cost-efficient compared to elastic pools, especially given the business hour usage pattern.

a DTU purchasing model and multiple single database instances:

DTU model: Does not directly support Azure Hybrid Benefit. This means you cannot leverage existing SQL Server licenses, leading to higher costs.

Multiple single database instances: Similar cost concerns as with the vCore model for single instances - potentially inefficient resource utilization and higher costs for 50 databases.

Cost efficiency: Less cost-efficient due to lack of Hybrid Benefit and potentially inefficient resource utilization for single databases.

a vCore purchasing model and multiple database instances in an elastic pool:

vCore model: Supports Azure Hybrid Benefit, enabling significant cost savings on licensing.

Multiple database instances in an elastic pool: Elastic pools are designed for scenarios with multiple databases that have variable and unpredictable usage patterns. Databases in an elastic pool share a pool of resources (compute and storage). This is highly cost-effective in this scenario because the business units operate only during business hours. Databases can share resources during peak hours, and resources can be scaled down or shared more efficiently during off-peak hours and weekends. This is ideal for the described on-off usage pattern.

Cost efficiency: Most cost-efficient option due to Hybrid Benefit and efficient resource sharing with elastic pools, perfectly matching the business hours usage.

a DTU purchasing model and multiple database instances in an elastic pool:

DTU model: Does not directly support Azure Hybrid Benefit, increasing licensing costs.

Multiple database instances in an elastic pool: Elastic pools still offer cost savings through resource sharing, but the lack of Hybrid Benefit makes this option less cost-optimized than the vCore elastic pool option.

Cost efficiency: Less cost-efficient than vCore elastic pool due to lack of Hybrid Benefit.

Final Answer:
a vCore purchasing model and multiple database instances in an elastic pool
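
A hedged sketch with the azure-mgmt-sql SDK of what the recommended configuration amounts to: a vCore-based (General Purpose) elastic pool with license_type set to BasePrice, which is how Azure Hybrid Benefit is expressed. The names, region, and sizing are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import ElasticPool, Sku

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")   # placeholder

pool = ElasticPool(
    location="westeurope",                                        # placeholder region
    sku=Sku(name="GP_Gen5", tier="GeneralPurpose", capacity=8),   # vCore purchasing model
    license_type="BasePrice",                                     # Azure Hybrid Benefit
    max_size_bytes=500 * 1024 ** 3,
)

poller = client.elastic_pools.begin_create_or_update(
    resource_group_name="rg-app1",        # placeholder
    server_name="app1-sqlserver",         # placeholder
    elastic_pool_name="app1-bu-pool",
    parameters=pool,
)
print(poller.result().id)

# The 50 per-business-unit databases are then created in (or moved into) this pool,
# where they share the pooled vCores; databases that are idle outside their local
# business hours simply leave capacity for the business units that are active.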

45
Q

You are designing an Azure governance solution.

All Azure resources must be easily identifiable based on the following operational information: environment, owner, department, and cost center.

You need to ensure that you can use the operational information when you generate reports for the Azure resources.

What should you include in the solution?

Azure Active Directory (Azure AD) administrative units
an Azure data catalog that uses the Azure REST API as a data source
an Azure policy that enforces tagging rules
an Azure management group that uses parent groups to create a hierarchy

A

The correct answer is an Azure policy that enforces tagging rules.

Here’s why:

Azure Policy for Tagging Enforcement: Azure Policy is the Azure service designed to enforce organizational standards and compliance at scale. One of its key capabilities is to enforce tagging rules. You can create Azure policies that:

Require specific tags: Policies can mandate that certain tags (like environment, owner, department, cost center) must be present on all Azure resources.

Enforce tag values: Policies can also enforce specific allowed values or patterns for tags. For example, you could define a policy that the environment tag can only accept values like dev, test, prod.

Prevent non-compliant deployments: If a resource is created or updated without the required tags or with incorrect tag values, Azure Policy can prevent the deployment or mark the resource as non-compliant.

Audit existing resources: Policies can also audit existing resources to check for tag compliance and generate reports on non-compliant resources.

Tags for Reporting and Identification: Azure tags are key-value pairs that you apply to Azure resources. They are the fundamental mechanism in Azure for:

Organizing resources: Grouping resources logically based on various criteria.

Reporting and cost management: Tags are extensively used in Azure Cost Management to group and analyze costs based on different dimensions like environment, department, etc.

Automation and management: Tags can be used in scripts and automation to identify and manage resources based on their operational characteristics.

Resource identification: Tags make resources easily identifiable based on the operational information you define.

Why other options are less suitable:

Azure Active Directory (Azure AD) administrative units: Administrative units are used to delegate administrative permissions within Azure AD. They are not directly related to tagging Azure resources for operational information and reporting. While you could indirectly use AAD to manage who owns resources, it’s not the direct mechanism for tagging and reporting on resource operational information.

An Azure data catalog that uses the Azure REST API as a data source: Azure Data Catalog is a metadata management service. You could catalog Azure resources and their tags in a data catalog. However, a data catalog does not enforce tagging. It’s a tool for discovering and documenting data assets, not for governance and policy enforcement. It would be useful after tagging is enforced to help users find and understand resources based on tags, but it’s not the solution to ensure tagging happens.

An Azure management group that uses parent groups to create a hierarchy: Azure Management Groups help organize and govern Azure subscriptions in a hierarchy. You can apply policies at the management group level, including tagging policies. However, management groups themselves are not the tagging mechanism. They are a scope for applying policies, and you would still need Azure Policy to define and enforce the tagging rules within those management groups. Management groups are more about hierarchical management of subscriptions, not directly about tagging resources.

In summary:

To ensure all Azure resources are easily identifiable based on environment, owner, department, and cost center, and that this information is usable in reports, you need a mechanism to enforce the application of this operational information to every resource. Azure Policy enforcing tagging rules is the most direct and effective way to achieve this. Tags are the standard Azure mechanism for metadata and reporting, and Azure Policy is the service to enforce their consistent use.

Therefore, the correct answer is an Azure policy that enforces tagging rules.
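
As a rough sketch of the kind of rule involved, modeled on the built-in "Require a tag on resources" policy (the definition name, tag names, and the choice of a deny effect are assumptions for the example), the following builds a policy rule that denies any resource missing one of the required tags:

import json

# Tags every resource must carry so they can be used in cost and inventory reports.
required_tags = ["environment", "owner", "department", "costcenter"]

# Deny any resource that is missing one of the required tags.
policy_rule = {
    "if": {
        "anyOf": [
            {"field": f"tags['{tag}']", "exists": "false"} for tag in required_tags
        ]
    },
    "then": {"effect": "deny"},
}

policy_definition = {
    "properties": {
        "displayName": "Require environment, owner, department and cost center tags",
        "mode": "Indexed",   # evaluate only resource types that support tags and location
        "policyRule": policy_rule,
    }
}

# This JSON is what you would supply when creating the policy definition (portal, CLI,
# ARM/Bicep, or the Azure SDK) and then assign at a subscription or management group
# scope, after which untagged deployments are blocked.
print(json.dumps(policy_definition, indent=2))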

46
Q

HOTSPOT

You have an Azure Load Balancer named LB1 that balances requests to five Azure virtual machines.

You need to develop a monitoring solution for LB1.

The solution must generate an alert when any of the following conditions are met:

✑ A virtual machine is unavailable.

✑ Connection attempts exceed 50,000 per minute.

Which signal should you include in the solution for each condition? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
An unavailable virtual machine:
Byte Count
Data Path Availability
Health Probe Status
Packet Count
SYN Count
More than 50,000 connection attempts per minute:
Byte Count
Data Path Availability
Health Probe Status
Packet Count
SYN Count

A

To monitor Azure Load Balancer (LB1) and generate alerts for the specified conditions, we need to select the appropriate signals that directly correspond to each condition.

An unavailable virtual machine:

Byte Count: This metric tracks the number of bytes processed by the load balancer. While it can indicate traffic volume, it does not directly reflect the availability of a backend virtual machine. A VM might be unavailable even if byte count is non-zero for other healthy VMs.

Data Path Availability: This metric generally refers to the availability of the network path between the load balancer and the backend pool. While path availability is important, it doesn’t directly indicate the health of a specific virtual machine application or service running on it. A data path could be available, but the application on the VM might be down.

Health Probe Status: This is the most direct and appropriate signal for monitoring VM availability. Azure Load Balancers use health probes to periodically check the health of backend instances. If a health probe fails (e.g., the VM doesn’t respond on the configured port, or the probe endpoint returns an error), the Load Balancer marks that VM as unhealthy and stops sending new traffic to it. Monitoring the Health Probe Status directly tells you if the Load Balancer considers a VM to be available or not.

Packet Count: Similar to Byte Count, Packet Count reflects traffic volume but not VM availability.

SYN Count: SYN Count measures the number of SYN packets received, which is related to connection attempts. It doesn’t directly indicate VM availability.

Therefore, for an unavailable virtual machine, the most appropriate signal is Health Probe Status.

More than 50,000 connection attempts per minute:

Byte Count: Byte Count is not directly related to connection attempts.

Data Path Availability: Data Path Availability reflects the health of the network path, not the volume of connection attempts.

Health Probe Status: Health Probe Status reflects VM availability, not connection attempts.

Packet Count: Packet Count measures the total number of packets, which could include packets related to connections, data transfer, etc. It’s not specific to connection attempts.

SYN Count: SYN Count is the most relevant signal for monitoring connection attempts. In TCP, a new connection is initiated with a SYN (synchronize) packet. Monitoring the SYN Count provides a direct measure of the number of connection initiation attempts. A high SYN Count per minute indicates a high volume of connection attempts.

Therefore, for more than 50,000 connection attempts per minute, the most appropriate signal is SYN Count.

Answer Area:
An unavailable virtual machine: Health Probe Status
More than 50,000 connection attempts per minute: SYN Count
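
A hedged sketch of pulling the two signals with the azure-monitor-query SDK; to the best of my knowledge the underlying metric IDs are DipAvailability (shown as Health Probe Status in the portal) and SYNCount, and the resource ID is a placeholder. Metric alert rules would then be defined on these same metrics with the thresholds from the question:

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

LB1_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/loadBalancers/LB1"
)   # placeholder resource ID

response = client.query_resource(
    LB1_ID,
    metric_names=["DipAvailability", "SYNCount"],   # Health Probe Status and SYN Count
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=1),
    aggregations=[MetricAggregationType.AVERAGE, MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average, point.total)

# Alert logic: an average DipAvailability below 100 means at least one backend VM is
# failing its health probe; a SYNCount total above 50,000 in a 1-minute window
# exceeds the connection-attempt threshold.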

47
Q

HOTSPOT

You have an Azure subscription. The subscription contains 100 virtual machines that run Windows Server.

You need to recommend a solution that will provide monitoring and an audit trail of the following modifications:

  • Changes to the Windows registry on the virtual machines
  • Changes to the DNS settings of the virtual machines

The solution must minimize administrative effort.

What should you recommend using for each change? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Windows registry changes:
Azure Automation Change Tracking
Azure Monitor Change Analysis
Azure Monitor for VM Insights
DNS settings changes:
Azure Automation Change Tracking
Azure Monitor Change Analysis
Azure Monitor for VM Insights

A

Let’s analyze each option for tracking changes to Windows Registry and DNS settings on Azure VMs, keeping in mind the requirement to minimize administrative effort.

Windows registry changes:

Azure Automation Change Tracking:

Correct. Azure Automation Change Tracking is specifically designed to track changes to files and registry keys on virtual machines. You can configure Change Tracking to monitor specific registry keys and values. When changes occur (additions, modifications, deletions), Change Tracking records these changes, providing an audit trail. This feature is part of Azure Automation and is built for configuration management and change auditing within VMs.

Administrative Effort: Once configured, Change Tracking automatically monitors and reports changes, minimizing ongoing administrative effort.

Azure Monitor Change Analysis:

Incorrect. Azure Monitor Change Analysis is designed to analyze changes in Azure resources and diagnose operational issues. It focuses on changes at the Azure Resource Manager level (e.g., changes to VM configurations, network configurations, etc.). It is not designed to track granular operating system level changes like Windows Registry modifications within VMs. While it might detect some high-level VM configuration changes, it’s not the tool for detailed registry auditing.

Azure Monitor for VM Insights:

Incorrect. Azure Monitor for VM Insights focuses on the performance and health monitoring of virtual machines. It collects metrics related to CPU, memory, disk, network, and process performance. It is not designed to provide audit trails of configuration changes like Windows Registry modifications. VM Insights is for performance monitoring, not configuration auditing.

For Windows registry changes, Azure Automation Change Tracking is the most appropriate and direct solution.

DNS settings changes:

Azure Automation Change Tracking:

Correct. On Windows VMs, DNS client settings are stored in the operating system configuration, primarily under registry keys such as HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces. Azure Automation Change Tracking can be configured to monitor these registry keys (and any related configuration files), so changes to the DNS settings are recorded and provide an audit trail. While Change Tracking doesn’t “understand” DNS settings semantically, it captures the registry and file changes that represent DNS configuration changes.

Administrative Effort: Similar to registry changes, once configured to monitor the relevant files, Change Tracking automatically tracks and reports changes, minimizing ongoing effort.

Azure Monitor Change Analysis:

Incorrect. Azure Monitor Change Analysis, as previously mentioned, tracks Azure resource configuration changes at the Resource Manager level. DNS settings within the VM operating system are not directly tracked as Azure Resource Manager properties by Change Analysis.

Azure Monitor for VM Insights:

Incorrect. Azure Monitor for VM Insights is for performance and health monitoring, and not for auditing DNS configuration changes.

For DNS settings changes, Azure Automation Change Tracking, by monitoring the relevant registry keys and configuration files, provides the closest solution for auditing these changes with minimized administrative effort from the given options. Note that Change Tracking records the registry and file changes that represent DNS changes, rather than monitoring “DNS settings” as an abstract entity.

Final Answer:

Answer Area
Windows registry changes: Azure Automation Change Tracking
DNS settings changes: Azure Automation Change Tracking
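
Once Change Tracking is enabled and pointed at the registry keys of interest, the recorded changes land in the ConfigurationChange table of the linked Log Analytics workspace, where they can be audited with a query like the hedged sketch below (the workspace ID is a placeholder, the registry-path filter is an example, and the column names follow the Change Tracking schema as I recall it):

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Registry changes recorded by Change Tracking over the last 7 days, filtered to the
# keys that hold the Windows DNS client settings.
query = r"""
ConfigurationChange
| where ConfigChangeType == "Registry"
| where RegistryKey contains @"Tcpip\Parameters\Interfaces"
| project TimeGenerated, Computer, RegistryKey, ValueName, ChangeCategory
| order by TimeGenerated desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",   # placeholder
    query=query,
    timespan=timedelta(days=7),
)

for table in response.tables:
    for row in table.rows:
        print(row)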

48
Q

What should you recommend to meet the monitoring requirements for App2?

Microsoft Sentinel
Azure Application Insights
Container insights
VM insights

A

Azure Application Insights

49
Q

HOTSPOT

You have an Azure subscription.

You plan to deploy five storage accounts that will store block blobs and five storage accounts that will host file shares. The file shares will be accessed by using the SMB protocol.

You need to recommend an access authorization solution for the storage accounts.

The solution must meet the following requirements:

  • Maximize security.
  • Prevent the use of shared keys.
  • Whenever possible, support time-limited access.

What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
For the blobs:
A user delegation shared access signature (SAS) only
A shared access signature (SAS) and a stored access policy
A user delegation shared access signature (SAS) and a stored access policy
For the file shares:
Azure AD credentials
A user delegation shared access signature (SAS) only
A user delegation shared access signature (SAS) and a stored access policy

A

To determine the best access authorization solution, let’s analyze each requirement and the options for blobs and file shares separately.

Requirements:

Maximize security: This implies using the most secure authentication and authorization methods available.

Prevent the use of shared keys: Shared keys provide full access to the storage account and are less secure. We should avoid them.

Whenever possible, support time-limited access: Time-limited access reduces the window of opportunity for misuse if credentials are compromised.

For the Blobs:

Shared access signature (SAS): SAS provides delegated access to storage resources with specific permissions and a time limit, without exposing the storage account key.

User delegation SAS: This type of SAS is secured with Azure Active Directory (Azure AD) credentials. It’s more secure than service SAS or account SAS because it doesn’t rely on the storage account key. User delegation SAS is created using Azure AD credentials and signed by Azure AD.

Stored access policy: A stored access policy provides an additional layer of control for SAS. It allows you to centrally manage and revoke SAS tokens. However, stored access policies apply only to service SAS tokens (which are signed with the account key); they cannot be used with a user delegation SAS, and a SAS can be created without one.

Considering the requirements for blobs:

A user delegation shared access signature (SAS) only: This option meets all requirements. It maximizes security by using Azure AD for authorization and avoiding shared keys. User delegation SAS is inherently time-limited.

A shared access signature (SAS) and a stored access policy: While stored access policies add a layer of management for SAS, the core security benefit comes from using a SAS that doesn’t rely on shared keys. Stored access policies apply only to service SAS tokens, which are signed with the storage account key, so this option conflicts with the requirement to prevent shared keys.

A user delegation shared access signature (SAS) and a stored access policy: A user delegation SAS avoids shared keys, but stored access policies cannot be applied to a user delegation SAS, so this combination is not valid.

For maximizing security and preventing shared keys, and supporting time-limited access for blobs, A user delegation shared access signature (SAS) only is the most direct and secure option among the provided choices.

For the File Shares:

Azure AD credentials: For SMB file shares, Azure AD authentication provides a highly secure way to access file shares. It leverages centralized identity management, role-based access control (RBAC), and Kerberos authentication. It avoids the use of storage account keys.

User delegation shared access signature (SAS) only: A user delegation SAS is supported only for Blob storage (and Data Lake Storage Gen2), not for Azure Files, so it cannot authorize SMB access to the file shares. For SMB access in enterprise environments, Azure AD authentication is the preferred approach for manageability and integration with existing identity infrastructure.

A user delegation shared access signature (SAS) and a stored access policy: The same limitation applies, and stored access policies cannot be combined with a user delegation SAS in any case. For ongoing SMB file share access, Azure AD is the more robust solution; SAS is better suited to specific, short-term access scenarios.

Considering the requirements for file shares accessed via SMB, especially “maximize security” and “prevent shared keys”, Azure AD credentials is the strongest option. It provides the most secure and manageable solution for SMB access in an enterprise context.

Final Answer:

For the blobs: A user delegation shared access signature (SAS) only
For the file shares: Azure AD credentials
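
For the blob side, a hedged sketch of issuing a time-limited user delegation SAS with the azure-storage-blob SDK (the storage account, container, and blob names are placeholders). The delegation key is obtained with Azure AD credentials, so no account key is involved at any point:

from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

account_url = "https://<storageaccount>.blob.core.windows.net"   # placeholder
service = BlobServiceClient(account_url, credential=DefaultAzureCredential())

start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)   # time-limited access

# The user delegation key is requested with Azure AD credentials, not the account key.
delegation_key = service.get_user_delegation_key(key_start_time=start, key_expiry_time=expiry)

sas = generate_blob_sas(
    account_name="<storageaccount>",   # placeholder
    container_name="images",           # placeholder
    blob_name="report.pdf",            # placeholder
    user_delegation_key=delegation_key,
    permission=BlobSasPermissions(read=True),
    expiry=expiry,
)

print(f"{account_url}/images/report.pdf?{sas}")   # read-only link that expires in 1 hour

The SMB file shares, by contrast, are accessed with the callers' Azure AD credentials directly (Azure AD DS or Azure AD Kerberos authentication for Azure Files), so no SAS is generated for them at all.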

50
Q

HOTSPOT

You have an Azure subscription.

You plan to deploy five storage accounts that will store block blobs and five storage accounts that will host file shares. The file shares will be accessed by using the SMB protocol.

You need to recommend an access authorization solution for the storage accounts.

The solution must meet the following requirements:

  • Maximize security.
  • Prevent the use of shared keys.
  • Whenever possible, support time-limited access.

What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
For the blobs:
A user delegation shared access signature (SAS) only
A shared access signature (SAS) and a stored access policy
A user delegation shared access signature (SAS) and a stored access policy
For the file shares:
Azure AD credentials
A user delegation shared access signature (SAS) only
A user delegation shared access signature (SAS) and a stored access policy

A

Let’s analyze each requirement and the provided options for both blobs and file shares.

Requirements:

Maximize security: This means using the most secure authorization methods.

Prevent the use of shared keys: Shared keys grant full access and are less secure.

Whenever possible, support time-limited access: Time-limited access reduces the window of opportunity for misuse if credentials are compromised.

For the blobs:

A user delegation shared access signature (SAS) only:

Pros: Secured with Azure AD credentials, not storage account keys. Prevents the use of shared keys. Supports time-limited access. Offers better security than account SAS or service SAS.

Cons: While secure, Azure AD RBAC is generally considered even more secure for managing access to blobs as it provides centralized access management through Azure AD. However, User delegation SAS is a strong option when SAS is required and shared keys must be avoided.

A shared access signature (SAS) and a stored access policy:

Pros: Stored access policies provide a centralized way to manage and revoke SAS tokens and support time-limited access through expiry settings.

Cons: Stored access policies apply only to service SAS tokens, which are signed with the storage account key, so this option conflicts with the requirement to prevent shared keys. A stored access policy cannot be combined with a user delegation SAS.

Considering the “maximize security” and “prevent shared keys” requirements, A user delegation shared access signature (SAS) only is the strongest of the listed options for blobs: it avoids account keys, is signed with Azure AD credentials, and is inherently time-limited.

For the file shares:

Azure AD credentials:

Pros: Maximizes security. Uses Azure AD for authentication and authorization, eliminating the need for shared keys. Supports granular access control through Azure RBAC. Provides a centralized identity management approach. This is the most secure and recommended method for SMB file shares, especially for enterprise environments already using Azure AD.

Cons: Requires Azure AD DS or Azure AD Kerberos for Azure Files to be enabled. Might have a slightly higher initial setup complexity compared to SAS, but offers significantly better long-term security and manageability.

A user delegation shared access signature (SAS) only:

Pros: A user delegation SAS avoids shared keys and supports time-limited access.

Cons: User delegation SAS is supported only for Blob storage (and Data Lake Storage Gen2); SAS tokens for Azure Files are service or account SAS signed with the account key. Even where SAS applies, it is less secure and less manageable than using Azure AD credentials directly for SMB file shares, and it is better suited to specific, short-term access scenarios than to general user access.

A user delegation shared access signature (SAS) and a stored access policy:

Pros: Similar to “A user delegation shared access signature (SAS) only”, with the idea of added management for SAS tokens.

Cons: Stored access policies cannot be applied to a user delegation SAS, and SAS in general is less secure and less manageable than using Azure AD credentials directly for SMB file shares in most enterprise use cases.

Selecting the Closest Correct Options:

For blobs, given the options and requirements, A user delegation shared access signature (SAS) only is a good choice as it avoids shared keys and supports time-limited access. While RBAC with Azure AD is even more secure, it’s not listed as a direct option here.

For file shares, Azure AD credentials is definitively the best option to maximize security and prevent shared keys, and is the recommended method for SMB access in enterprise scenarios.

Therefore, the closest correct options are:

For the blobs: A user delegation shared access signature (SAS) only
For the file shares: Azure AD credentials
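
To make the blob recommendation concrete, here is a minimal hedged sketch using the azure-identity and azure-storage-blob Python packages: it requests a user delegation key with Azure AD credentials and then signs a short-lived, read-only SAS for a single blob with that key, so no storage account key is ever used. The storage account, container, and blob names are placeholders.

    from datetime import datetime, timedelta, timezone

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import (
        BlobSasPermissions,
        BlobServiceClient,
        generate_blob_sas,
    )

    # Placeholder account; authentication uses Azure AD, not the account key.
    ACCOUNT = "mystorageaccount"
    service = BlobServiceClient(
        account_url=f"https://{ACCOUNT}.blob.core.windows.net",
        credential=DefaultAzureCredential(),
    )

    # The user delegation key is obtained with Azure AD credentials and is itself time-limited.
    start = datetime.now(timezone.utc)
    expiry = start + timedelta(hours=1)
    delegation_key = service.get_user_delegation_key(key_start_time=start, key_expiry_time=expiry)

    # Generate a read-only SAS for one blob, signed with the delegation key (no shared key involved).
    sas_token = generate_blob_sas(
        account_name=ACCOUNT,
        container_name="container1",
        blob_name="report.pdf",
        user_delegation_key=delegation_key,
        permission=BlobSasPermissions(read=True),
        expiry=expiry,
    )
    print(f"https://{ACCOUNT}.blob.core.windows.net/container1/report.pdf?{sas_token}")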

51
Q

You have to design a data engineering solution for your company. The company currently has an Azure subscription. They also have application data hosted in a Microsoft SQL Server database in their on-premises data center. They want to implement the following requirements: transfer transactional data from the on-premises SQL Server to a data warehouse in Azure. The data needs to be transferred every night as a scheduled job.

A managed Spark cluster needs to be in place for data engineers to perform analysis on the data stored in the SQL data warehouse. Here the data engineers should have the ability to develop notebooks in Scala, R, and Python.

They also need a data lake store in place for the ingestion of data from multiple data sources. Which of the following would you use for hosting the data warehouse in Azure?

Azure Data Factory
Azure Databricks
Azure Data Lake Gen2 Storage accounts
Azure Synapse Analytics

A

The correct answer is Azure Synapse Analytics.

Here’s why it’s the most appropriate choice and why the others are less suitable:

Why Azure Synapse Analytics is the best fit:

Data Warehouse in Azure: Azure Synapse Analytics is specifically designed as a fully managed, scalable data warehouse service in Azure. It provides a massively parallel processing (MPP) SQL engine (dedicated SQL pools) optimized for analytical workloads, which perfectly fits the requirement of a data warehouse.

Transactional Data Transfer: Azure Synapse Analytics integrates seamlessly with Azure Data Factory. You would typically use Azure Data Factory to create pipelines to extract data from the on-premises SQL Server database and load it into Azure Synapse Analytics on a daily schedule. Synapse itself is the data warehouse destination, and ADF is the tool for the data transfer.

Managed Spark Cluster for Analysis: Azure Synapse Analytics also includes Synapse Spark pools. These are serverless Apache Spark pools within the Synapse workspace, providing a managed Spark environment. Data engineers can use these Spark pools to perform analysis on the data stored in the data warehouse. Synapse Spark supports notebooks in Scala, R, Python, and .NET Spark, meeting the language requirements.

Data Lake Store: Azure Synapse Analytics is designed to work seamlessly with Azure Data Lake Storage Gen2. While the question asks where to host the data warehouse, it also mentions a data lake store for ingestion. Synapse Analytics is often used in conjunction with Data Lake Storage Gen2. Data can be staged in Data Lake Storage Gen2 before being loaded into the Synapse data warehouse, or Synapse Spark can directly process data residing in Data Lake Storage Gen2.

Why other options are less suitable:

Azure Data Factory: Azure Data Factory (ADF) is a data integration service, excellent for orchestrating data movement and transformation. While ADF is essential for transferring data from the on-premises SQL Server to Azure, it is not a data warehouse itself. ADF is a tool to load data into a data warehouse, but it doesn’t host the data warehouse.

Azure Databricks: Azure Databricks is a powerful Apache Spark-based analytics platform. It provides managed Spark clusters and is excellent for data engineering and data science tasks, including notebook development in Scala, R, and Python. However, while you can build a data warehouse-like solution on top of a data lake using Databricks, Databricks itself is not a dedicated data warehouse service in the same way as Azure Synapse Analytics SQL pools. Databricks is more focused on data processing and analytics on data lakes, rather than being a structured data warehouse.

Azure Data Lake Gen2 Storage accounts: Azure Data Lake Storage Gen2 is a highly scalable and cost-effective data lake storage service. It’s perfect for storing large volumes of data in various formats. However, Data Lake Storage Gen2 is just storage; it’s not a data warehouse or a Spark processing engine. It’s where you would store the data lake, and it can be used as storage for a data warehouse and for Spark to process data, but it doesn’t fulfill the requirement of hosting the data warehouse itself.

In Summary:

Azure Synapse Analytics is the only option that directly addresses all the core requirements:

Hosts a Data Warehouse: Synapse dedicated SQL pools are designed for data warehousing.

Supports Managed Spark for Analysis: Synapse Spark pools provide a managed Spark environment within the same service.

Integrates with Data Lake Storage Gen2: Synapse is built to work alongside Data Lake Storage Gen2.

Facilitates Data Transfer (via ADF integration): Although ADF is used for the actual transfer, Synapse is the destination and is designed to receive data from services like ADF.

Therefore, Azure Synapse Analytics is the most comprehensive and correct solution for hosting the data warehouse in this scenario.
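
To illustrate the Spark-pool piece of this design, here is a small hedged sketch of PySpark code that could run in a Synapse Spark notebook to analyze data staged in Azure Data Lake Storage Gen2; the storage account, container, path, and column names are hypothetical.

    # Runs inside a Synapse Spark notebook, where the 'spark' session is provided by the runtime.
    staged_path = "abfss://staging@contosodatalake.dfs.core.windows.net/sales/"

    # Read the staged transactional data and compute a simple daily aggregate.
    sales = spark.read.parquet(staged_path)
    daily_totals = (
        sales.groupBy("order_date")
             .sum("amount")
             .withColumnRenamed("sum(amount)", "total_amount")
    )
    daily_totals.show(10)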

52
Q

You plan to deploy an Azure Databricks Data Science & Engineering workspace and ingest data into the workspace.

Where should you persist the ingested data?

Azure Files
Azure Data Lake
Azure SQL Database
Azure Cosmos DB

A

Azure Databricks is a data analytics platform optimized for big data and machine learning. When you ingest data into a Databricks workspace, you need a storage layer to persist this data for processing and analysis. Let’s evaluate the options:

Azure Files: Azure Files provides fully managed file shares in the cloud that are accessible via SMB or NFS protocols. While Databricks can technically access Azure Files, it’s not the ideal storage for the primary data lake in a data science and engineering workspace. Azure Files is better suited for scenarios requiring file share access, like lift-and-shift applications or shared configuration files, not for large-scale data analytics workloads.

Azure Data Lake (specifically Azure Data Lake Storage Gen2): Azure Data Lake Storage Gen2 (ADLS Gen2) is built on top of Azure Blob Storage and is designed specifically for big data analytics. It provides:

Scalability and Cost-effectiveness: ADLS Gen2 is highly scalable and cost-effective for storing massive amounts of data.

Hierarchical Namespace: ADLS Gen2 offers a hierarchical namespace, which organizes data into directories and files, making data management and discovery easier.

Performance Optimized for Analytics: It’s optimized for big data analytics workloads and integrates seamlessly with Azure Databricks, Azure Synapse Analytics, and other Azure data services.

Security and Compliance: ADLS Gen2 provides robust security features and compliance certifications.

For a Databricks Data Science & Engineering workspace, ADLS Gen2 is the recommended and most common storage layer for persisting ingested data. It serves as the foundation for the data lake, where raw and processed data can be stored and accessed by Databricks clusters for various analytics tasks.

Azure SQL Database: Azure SQL Database is a fully managed relational database service. While Databricks can connect to and process data in Azure SQL Database, it’s not the primary storage for ingested data in a data science and engineering context. Azure SQL Database is better suited for structured, transactional data, and operational data stores, rather than as the main data lake for diverse and potentially unstructured or semi-structured ingested data.

Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service. Databricks can also connect to and process data in Cosmos DB. However, like Azure SQL Database, Cosmos DB is not the typical primary storage for ingested data in a general data science and engineering workspace. Cosmos DB is more appropriate for NoSQL workloads, real-time applications, or when global distribution and low latency are primary requirements for the processed data, not necessarily for the initial raw ingested data.

Conclusion:

For persisting ingested data in an Azure Databricks Data Science & Engineering workspace, Azure Data Lake (ADLS Gen2) is the most appropriate and widely used storage solution due to its scalability, cost-effectiveness, performance for analytics, and native integration with Databricks.

Final Answer: Azure Data Lake
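
For illustration, the sketch below shows the common pattern of a Databricks notebook persisting ingested data to ADLS Gen2 as a Delta table. The storage account, container, and paths are placeholders, and the cluster is assumed to already have credentials for the storage account (for example, a service principal configured in the Spark settings).

    # Runs in a Databricks notebook; the 'spark' session is provided by the cluster runtime.
    raw_path = "abfss://raw@contosodatalake.dfs.core.windows.net/ingest/events/"
    bronze_path = "abfss://lake@contosodatalake.dfs.core.windows.net/bronze/events/"

    # Read newly ingested JSON files and persist them to the data lake in Delta format.
    events = spark.read.json(raw_path)
    events.write.format("delta").mode("append").save(bronze_path)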

53
Q

HOTSPOT

You are developing a multi-tier app named App1 that will be hosted on Azure virtual machines. The peak utilization periods for App1 will be from 8 AM to 9 AM and 4 PM to 5 PM on weekdays.

You need to deploy the infrastructure for App1.

The solution must meet the following requirements:

  • Support virtual machines deployed to four availability zones across two Azure regions.
  • Minimize costs by accumulating CPU credits during periods of low utilization.

What is the minimum number of virtual networks you should deploy, and which virtual machine size should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Number of virtual networks:
1
2
3
4
Virtual machine size:
A-Series
B-Series
D-Series
M-Series

A

Let’s analyze each requirement to determine the correct number of virtual networks and virtual machine size.

Requirement 1: Support virtual machines deployed to four availability zones across two Azure regions.

Availability Zones (AZs) are region-specific. Each Azure region that supports Availability Zones has multiple isolated locations within that region.

To deploy VMs across two Azure regions, you inherently need to use resources in both regions.

Virtual Networks (VNets) are regional resources. A VNet is confined to a single Azure region.

To deploy VMs in two different Azure regions, you must have at least one VNet in each region.

Within each region, you can deploy VMs to different Availability Zones within the same VNet in that region.

Therefore, to span two Azure regions and utilize availability zones within those regions, you need a minimum of two virtual networks, one in each region. You don’t need more than two VNets for this specific requirement as you can place multiple Availability Zones within a single VNet per region.

Minimum Number of virtual networks: 2

Requirement 2: Minimize costs by accumulating CPU credits during periods of low utilization.

This requirement directly points to burstable virtual machine sizes. Burstable VMs are designed for workloads that have periods of low CPU usage interspersed with occasional spikes in demand. These VMs accumulate CPU credits during periods of low utilization and can use these credits to burst above their baseline performance when needed.

Among the provided options, the B-Series virtual machines are specifically designed as burstable VMs.

Let’s look at the characteristics of the listed VM sizes:

A-Series: Basic, entry-level VMs. While cost-effective for very basic workloads, they are not designed for bursting or accumulating CPU credits in the same way as B-series. They are older generation and generally less performant per dollar than B-series for burstable workloads.

B-Series: Burstable VMs. Designed to accumulate CPU credits during idle periods and burst when needed. This is the ideal choice for workloads with predictable peak and off-peak usage patterns, directly meeting the cost minimization requirement for this scenario.

D-Series: General-purpose VMs. Good for many workloads, offering a balance of CPU, memory, and disk. However, they are not burstable VMs and do not accumulate CPU credits. They are designed for consistent performance, not cost optimization through bursting.

M-Series: Memory-optimized VMs. High-performance VMs designed for memory-intensive workloads like large databases or in-memory analytics. They are generally more expensive and not relevant for cost minimization through burstable credits.

Virtual machine size: B-Series

Final Answer:

Number of virtual networks: 2
Virtual machine size: B-Series

54
Q

You have the resources shown in the following table.

Name Type
AS1 Azure Synapse Analytics instance
CDB1 Azure Cosmos DB SQL API account

CDB1 hosts a container that stores continuously updated operational data.

You are designing a solution that will use AS1 to analyze the operational data daily.

You need to recommend a solution to analyze the data without affecting the performance of the operational data store.

What should you include in the recommendation?

Azure Data Factory with Azure Cosmos DB and Azure Synapse Analytics connectors
Azure Synapse Analytics with PolyBase data loading
Azure Cosmos DB change feed

A

The correct answer is Azure Cosmos DB change feed.

Here’s why:

Azure Cosmos DB Change Feed for Minimal Impact: The Azure Cosmos DB change feed is designed specifically for scenarios where you need to track changes in your data without directly querying the operational store in a way that could impact its performance. It provides a near real-time stream of all changes (inserts, updates, deletes) that occur within a Cosmos DB container.

Asynchronous Data Processing: By using the change feed, you can create a separate process (e.g., an Azure Function, Azure Stream Analytics job, or custom application) that consumes this feed and asynchronously processes the changes. This process can then load the relevant data into Azure Synapse Analytics for analysis. This approach decouples the analytical workload from the operational workload, ensuring that the operational data store (CDB1) is not burdened by analytical queries.

No Direct Analytical Queries to Operational Store: The change feed method avoids directly querying CDB1 from Synapse Analytics for analytical purposes. Instead, data is extracted as changes occur and then loaded into Synapse. This significantly reduces the potential for analytical queries to consume Request Units (RUs) on CDB1 and impact its performance.

Let’s look at why the other options are less suitable:

Azure Data Factory with Azure Cosmos DB and Azure Synapse Analytics connectors:

Potential Performance Impact: While Azure Data Factory (ADF) is a valid tool for moving data, using it to directly extract data from CDB1 and load it into AS1 for daily analysis could still impact CDB1’s performance. ADF would need to perform read operations against CDB1, and depending on the volume of data and the frequency of extraction, this could consume RUs and potentially slow down operational transactions. Even with optimized connectors, direct data extraction for analytical purposes can introduce load.

Azure Synapse Analytics with PolyBase data loading:

Direct Query Impact: PolyBase in Synapse Analytics allows you to query external data sources, including Cosmos DB. However, querying Cosmos DB directly from Synapse using PolyBase for analytical workloads will consume RUs in CDB1. Analytical queries can be resource-intensive and could significantly impact the performance and latency of CDB1’s operational workload if they are executed against the same container. PolyBase essentially makes Synapse query CDB1 in place, which is exactly what you want to avoid to minimize performance impact.

In summary:

To analyze operational data in AS1 without affecting the performance of CDB1, the Azure Cosmos DB change feed is the most appropriate and recommended solution. It provides an efficient, low-impact way to extract data changes from Cosmos DB and move them to Synapse for analysis without directly burdening the operational data store with analytical queries.

Therefore, the correct answer is Azure Cosmos DB change feed.
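
To make the decoupling concrete, here is a minimal hedged sketch using the azure-cosmos Python SDK to pull documents from the container's change feed so they can be staged (for example in ADLS Gen2) and then loaded into Synapse. The account, database, and container names are hypothetical; a production design would more likely use an Azure Functions Cosmos DB trigger or Azure Synapse Link rather than a hand-rolled loop.

    from azure.cosmos import CosmosClient
    from azure.identity import DefaultAzureCredential

    # Hypothetical account, database, and container names.
    client = CosmosClient(
        "https://cdb1.documents.azure.com:443/",
        credential=DefaultAzureCredential(),
    )
    container = client.get_database_client("opsdb").get_container_client("operations")

    # Read the change feed from the beginning; later runs would resume from a saved continuation.
    for change in container.query_items_change_feed(is_start_from_beginning=True):
        # Forward each changed document to a staging area from which it is loaded into Synapse.
        print(change["id"])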

55
Q

Your company develops Azure applications.

You need to recommend a solution for the deployment of Azure subscriptions.

The solution must meet the following requirements:

What should you include in the recommendation?

Provision resource groups.
Support deployments across all Azure regions.
Create custom role-based access control (RBAC) roles.
Provide consistent virtual machine and virtual network configurations.

A

The question as presented is incomplete: the four items listed above are the solution requirements rather than answer choices, and the actual answer options to select from are missing.

However, based on the requirements listed:

Provision resource groups.

Support deployments across all Azure regions.

Create custom role-based access control (RBAC) roles.

Provide consistent virtual machine and virtual network configurations.

We can infer that the question is asking for a solution to consistently and programmatically deploy Azure subscriptions for application development within a company. The goal is to ensure consistency, security, and repeatability across these subscriptions.

Given these requirements, the most appropriate Azure service to recommend for subscription deployment and management that addresses all the points is Azure Blueprints.

Rationale for Azure Blueprints:

Provision resource groups: Azure Blueprints can define and deploy resource groups as part of its blueprint definition. You can specify the resource groups that should be created in each deployed subscription.

Support deployments across all Azure regions: Azure Blueprints can be designed to deploy resources into any Azure region. The resources defined within the blueprints (often using ARM templates) can specify the target Azure regions for deployment. Blueprints themselves are globally available.

Create custom role-based access control (RBAC) roles: Azure Blueprints allows you to define and assign custom RBAC roles as part of the blueprint. This ensures consistent access control across all subscriptions deployed using the blueprint. You can include role assignments for built-in roles or define and assign custom roles within the blueprint definition.

Provide consistent virtual machine and virtual network configurations: Azure Blueprints leverages Azure Resource Manager (ARM) templates. ARM templates (or Bicep files, which compile to ARM templates) are the ideal way to define consistent infrastructure-as-code. Blueprints can include ARM templates that define the exact configurations for virtual machines, virtual networks, and other Azure resources, guaranteeing consistency across deployments.

Why other potential options might be less suitable as a single best recommendation for subscription deployment:

Azure Resource Manager (ARM) Templates: While ARM templates are excellent for defining resource configurations and can meet the consistent VM/VNet configuration requirement, they are primarily focused on resource deployments within a subscription. They don’t inherently manage the subscription deployment itself or the consistent application of RBAC and policies at the subscription level in the same structured way as Blueprints. You’d need to build a more complex orchestration system around ARM templates to achieve subscription-level consistency.

Terraform: Terraform is a powerful Infrastructure-as-Code tool and could be used to deploy Azure subscriptions and resources. It can meet all the requirements. However, within the Azure ecosystem, Azure Blueprints is designed specifically for subscription governance and consistency, making it a more native and often simpler choice for this particular scenario within Azure, especially if the goal is to “recommend an Azure solution.” Terraform is a third-party tool, while Blueprints is a native Azure service focused on subscription-level governance.

Azure Policy: Azure Policy enforces organizational standards and assesses compliance. While Policy is crucial for governance and compliance of subscriptions (and can enforce RBAC and configuration standards), it’s not the deployment mechanism itself. Policy is used after subscriptions are deployed to ensure they adhere to standards. Blueprints can include policy assignments to enforce policies during and after deployment, making Blueprints a more comprehensive solution for deployment and governance.

Azure Automation: Azure Automation can be used to script deployments. However, it’s a more general-purpose automation service. Using Automation for subscription deployment would require significant scripting effort to handle all the requirements (resource groups, regions, RBAC, consistent configurations). Blueprints provides a more structured and declarative approach specifically for this purpose.

Therefore, assuming the question is asking for the single best Azure service recommendation to deploy Azure subscriptions meeting the listed requirements, the closest with correct answer is Azure Blueprints.

Final Answer: Azure Blueprints (assuming it would appear among the answer options if the question were complete).

56
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned membership. Group1 has 50 members, including 20 guest users.
You need to recommend a solution for evaluating the membership of Group1. The solution must meet the following requirements:
✑ The evaluation must be repeated automatically every three months.
✑ Every member must be able to report whether they need to be in Group1.
✑ Users who report that they do not need to be in Group1 must be removed from Group1 automatically.
✑ Users who do not report whether they need to be in Group1 must be removed from Group1 automatically.
What should you include in the recommendation?

A. Implement Azure AD Identity Protection.
B. Change the Membership type of Group1 to Dynamic User.
C. Create an access review.
D. Implement Azure AD Privileged Identity Management (PIM).

A

The correct answer is C. Create an access review.

Explanation:

Let’s analyze each option against the requirements:

A. Implement Azure AD Identity Protection.

Why Incorrect: Azure AD Identity Protection is a security service that focuses on detecting, investigating, and remediating risk-based vulnerabilities and identity compromises. It helps protect user accounts and the organization from threats. While Identity Protection is crucial for security, it does not directly address the requirement of evaluating and managing group membership based on user self-attestation and periodic reviews. It’s not designed for group membership management or reviews.

B. Change the Membership type of Group1 to Dynamic User.

Why Incorrect: Dynamic User groups manage membership automatically based on rules defined using user attributes. While dynamic groups automate membership management, they are rule-based, not review-based. They do not support the requirement for:

Self-attestation: Dynamic groups do not allow members to report whether they need to be in the group.

Periodic Reviews: Dynamic groups continuously evaluate membership based on rules, not on a periodic review cycle with user input.

Automatic Removal based on Self-Reporting or Non-Response: Dynamic groups automatically manage membership based on rules; they don’t incorporate user feedback or non-response as criteria for removal.

C. Create an access review.

Why Correct: Azure AD Access Reviews are specifically designed for reviewing and managing access to groups, applications, and roles. They directly address all the requirements:

Automatic, Periodic Evaluation: Access reviews can be configured to run automatically on a recurring schedule, such as every three months.

Self-Attestation: Access reviews can be configured to allow group members to review their own access and indicate whether they still need to be in the group.

Automatic Removal Based on Self-Denial: When configuring an access review, you can set the action to “Apply results automatically”. If a user denies their need for access during the review, or if they don’t respond within the review period, they can be automatically removed from the group.

Automatic Removal Based on Non-Response: Access reviews can also be configured to automatically remove users who do not respond to the review within the specified time frame, fulfilling this requirement as well.

D. Implement Azure AD Privileged Identity Management (PIM).

Why Incorrect: Azure AD Privileged Identity Management (PIM) is primarily focused on managing, controlling, and monitoring access to privileged roles and resources. While PIM includes access reviews for role assignments, it’s not the primary tool for regular membership reviews of standard security groups like Group1. PIM is more geared towards controlling and auditing elevated access, not general group membership cleanup and self-service review for all group members. While you could potentially use PIM for access reviews of group membership, Access Reviews (Option C) is the more direct and appropriate feature for the described scenario of regular group membership evaluation and self-attestation for a general security group.
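
As a hedged sketch of how such a review could be created programmatically, the snippet below posts an access review schedule definition to Microsoft Graph with a quarterly recurrence, self-review (no explicit reviewers), and automatic application of decisions with Deny as the default for non-responders. The group ID is a placeholder, and the payload field names are assumptions that should be checked against the current accessReviewScheduleDefinition documentation.

    import requests
    from azure.identity import DefaultAzureCredential

    # The caller needs the AccessReview.ReadWrite.All Microsoft Graph permission.
    token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
    group_id = "<group1-object-id>"  # placeholder

    definition = {
        "displayName": "Group1 quarterly membership review",
        "scope": {
            "query": f"/groups/{group_id}/transitiveMembers",
            "queryType": "MicrosoftGraph",
        },
        # No reviewers listed: members review their own membership (self-review).
        "reviewers": [],
        "settings": {
            "mailNotificationsEnabled": True,
            "defaultDecisionEnabled": True,
            "defaultDecision": "Deny",          # non-responders are denied ...
            "autoApplyDecisionsEnabled": True,  # ... and removals are applied automatically
            "instanceDurationInDays": 14,
            "recurrence": {
                "pattern": {"type": "absoluteMonthly", "interval": 3},
                "range": {"type": "noEnd", "startDate": "2025-01-01"},
            },
        },
    }

    response = requests.post(
        "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
        headers={"Authorization": f"Bearer {token}"},
        json=definition,
    )
    response.raise_for_status()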

57
Q

You are designing a solution that will include containerized applications running in an Azure Kubernetes Service (AKS) cluster.

You need to recommend a load balancing solution for HTTPS traffic.

The solution must meet the following requirements:

✑ Automatically configure load balancing rules as the applications are deployed to the cluster.

✑ Support Azure Web Application Firewall (WAF).

✑ Support cookie-based affinity.

✑ Support URL routing.

What should you include in the recommendation?

an NGINX ingress controller
Application Gateway Ingress Controller (AGIC)
an HTTP application routing ingress controller
the Kubernetes load balancer service

A

The correct answer is Application Gateway Ingress Controller (AGIC).

Here’s why it’s the best choice and why the others are less suitable:

Why Application Gateway Ingress Controller (AGIC) is the correct answer:

Automatic Configuration: AGIC is specifically designed to automatically configure Azure Application Gateway based on Kubernetes Ingress resources. When you deploy or update an Ingress resource in your AKS cluster, AGIC automatically translates those configurations into Application Gateway settings, eliminating manual configuration of the load balancer itself. This directly addresses the “automatically configure load balancing rules” requirement.

Azure Web Application Firewall (WAF) Support: AGIC leverages Azure Application Gateway, which natively supports Azure Web Application Firewall (WAF). You can easily enable WAF on your Application Gateway to protect your containerized applications from common web exploits and vulnerabilities. This fulfills the “Support Azure Web Application Firewall (WAF)” requirement.

Cookie-based Affinity (Session Affinity): Azure Application Gateway, and therefore AGIC, supports cookie-based session affinity (also known as sticky sessions). This ensures that requests from the same client session are routed to the same backend pod within your AKS cluster, which is crucial for applications that maintain session state. This meets the “Support cookie-based affinity” requirement.

URL Routing (Path-based Routing and Host-based Routing): Application Gateway provides rich URL routing capabilities, including path-based routing (e.g., routing /app1 to one backend and /app2 to another) and host-based routing (routing requests based on the hostname). AGIC exposes these capabilities through Kubernetes Ingress resources, allowing you to define sophisticated URL routing rules for your applications. This satisfies the “Support URL routing” requirement.

Why other options are less suitable:

An NGINX ingress controller:

While NGINX is a very popular and powerful ingress controller, it requires more manual configuration to integrate with Azure services like WAF. You would typically need to deploy NGINX as a separate service in your AKS cluster and then configure it, possibly with additional components, to achieve WAF integration and potentially automate the configuration. It’s not as “out-of-the-box” and automatically managed for Azure integration as AGIC. While NGINX can support all the requirements, AGIC is designed for easier and more integrated Azure management.

An HTTP application routing ingress controller:

This is a simpler, basic ingress controller provided by AKS. It’s often used for basic HTTP routing scenarios but typically lacks advanced features like WAF integration and sophisticated routing rules required for production-grade applications. It’s unlikely to natively support WAF or offer the same level of cookie-based affinity and URL routing as Application Gateway or NGINX with extra configuration. It’s designed for simpler use cases and not for the listed requirements.

The Kubernetes load balancer service:

The Kubernetes LoadBalancer service in Azure creates an Azure Load Balancer (Standard Load Balancer). While it can expose services to the internet, it operates at Layer 4 (TCP/UDP) and is not designed for HTTP/HTTPS application-level load balancing and routing. It lacks URL routing, cookie-based affinity at the application level, and WAF capabilities. It’s primarily for basic TCP/UDP load balancing and not suitable for complex web application traffic management and security.

In summary:

For the given requirements of automatic configuration, WAF support, cookie-based affinity, and URL routing for HTTPS traffic in AKS, Application Gateway Ingress Controller (AGIC) is the most appropriate and recommended solution because it’s specifically designed to integrate Azure Application Gateway’s advanced features directly with AKS in an automated and managed way.

Therefore, the correct answer is Application Gateway Ingress Controller (AGIC).
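
As a rough illustration rather than an authoritative manifest, the sketch below uses the official Kubernetes Python client to create an Ingress that AGIC would translate into Application Gateway configuration, with a hypothetical backend service name, path-based routing, and the AGIC cookie-based-affinity annotation; in practice the same resource is usually declared in YAML.

    from kubernetes import client, config

    # Assumes kubectl credentials for the AKS cluster are already available locally.
    config.load_kube_config()
    networking = client.NetworkingV1Api()

    ingress = client.V1Ingress(
        api_version="networking.k8s.io/v1",
        kind="Ingress",
        metadata=client.V1ObjectMeta(
            name="app1-ingress",
            annotations={
                # Tells AGIC to configure Application Gateway for this Ingress.
                "kubernetes.io/ingress.class": "azure/application-gateway",
                # Cookie-based session affinity on Application Gateway.
                "appgw.ingress.kubernetes.io/cookie-based-affinity": "true",
            },
        ),
        spec=client.V1IngressSpec(
            rules=[
                client.V1IngressRule(
                    http=client.V1HTTPIngressRuleValue(
                        paths=[
                            client.V1HTTPIngressPath(
                                path="/app1",          # URL (path-based) routing
                                path_type="Prefix",
                                backend=client.V1IngressBackend(
                                    service=client.V1IngressServiceBackend(
                                        name="app1-svc",  # hypothetical service name
                                        port=client.V1ServiceBackendPort(number=80),
                                    )
                                ),
                            )
                        ]
                    )
                )
            ]
        ),
    )

    networking.create_namespaced_ingress(namespace="default", body=ingress)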

58
Q

You need to recommend an App Service architecture that meets the requirements for App1. The solution must minimize costs.
What should you recommend?

A. one App Service Environment (ASE) per availability zone
B. one App Service Environment (ASE) per region
C. one App Service plan per region
D. one App Service plan per availability zone

A

The question specifically asks for an App Service architecture that minimizes costs for App1. Let’s analyze each option in terms of cost and typical use cases.

A. one App Service Environment (ASE) per availability zone:

Cost: App Service Environments (ASEs) are the most expensive App Service offering. Deploying multiple ASEs (especially one per availability zone) would be extremely costly due to the dedicated infrastructure and flat monthly fee associated with each ASE, in addition to the App Service plan costs within them. This option is definitively not cost-minimizing.

Use Case: ASEs are designed for very large, isolated, and highly secure workloads that require network isolation and extreme scale, often within large enterprises. They are not intended for cost-sensitive scenarios.

B. one App Service Environment (ASE) per region:

Cost: Still uses ASEs, which are very expensive. While slightly less costly than option A (fewer ASEs overall), it’s still a high-cost solution due to the fundamental ASE pricing model. Not cost-minimizing.

Use Case: Regional isolation and high scale, but still within the context of very large, enterprise-grade applications.

C. one App Service plan per region:

Cost: App Service plans are significantly more cost-effective than ASEs. You pay for the compute resources (VM instances) within the plan based on the chosen tier (e.g., Basic, Standard, Premium). Having one plan per region allows for regional deployments and scaling within each region’s plan. This is a much more cost-conscious approach than using ASEs. You can choose a suitable App Service plan tier (e.g., Standard or Premium) to balance cost and performance.

Use Case: Typical web applications, APIs, and backend services that require scalability, reliability, and regional presence, but do not demand the extreme isolation and scale of ASEs. This is a common and often cost-effective architecture for many applications.

D. one App Service plan per availability zone:

Cost: More expensive than option C because you would likely have multiple App Service plans within a single region (one per availability zone). While still less expensive than ASEs, it’s more costly than simply having one plan per region. You are paying for redundancy across availability zones, which adds cost.

Use Case: Applications that require high availability and zone redundancy within a single Azure region. This provides better fault tolerance within a region compared to a single plan per region, but at a higher cost.

Conclusion:

When the primary goal is to minimize costs, the most appropriate and cost-effective App Service architecture among the options provided is C. one App Service plan per region.

Options A and B (using ASEs) are extremely expensive and not cost-minimizing.

Option D (one App Service plan per availability zone) provides high availability within a region but at a higher cost than option C.

Option C (one App Service plan per region) offers a balance of regional deployment and scalability at a significantly lower cost than the other options. It is the most direct answer to the “minimize costs” requirement, assuming basic regional deployment and scaling are sufficient for App1’s unstated requirements.

Final Answer: C. one App Service plan per region

59
Q

You have 12 Azure subscriptions and three projects. Each project uses resources across multiple subscriptions.

You need to use Microsoft Cost Management to monitor costs on a per project basis. The solution must minimize administrative effort.

Which two components should you include in the solution? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. budgets
B. resource tags
C. custom role-based access control (RBAC) roles
D. management groups
E. Azure boards

A

To effectively monitor costs on a per-project basis across multiple Azure subscriptions using Microsoft Cost Management while minimizing administrative effort, you need mechanisms to group and categorize costs by project. Let’s evaluate each option:

A. budgets

Explanation: Budgets in Azure Cost Management allow you to set spending limits for scopes like subscriptions, resource groups, or management groups. You can configure alerts to notify you when spending reaches a certain percentage of the budget.

Why it’s partially correct: Budgets are useful for cost control and alerting. You could create budgets for each project, potentially at the resource group level if all project resources are neatly organized within resource groups. However, budgets alone do not inherently provide a consolidated view of costs across subscriptions for each project. They are more about setting spending limits and getting notifications. While helpful for cost management, they are not the primary tool for cost monitoring by project across subscriptions.

B. resource tags

Explanation: Resource tags are key-value pairs that you can apply to Azure resources. Tags are inherited by cost management and billing systems.

Why it’s correct: Resource tags are essential for cost allocation and monitoring by project. By tagging all resources belonging to a specific project with a consistent tag (e.g., “Project: ProjectA”), you can then use Cost Management features to filter, group, and analyze costs based on these tags. This allows you to see the aggregated cost for each project across all subscriptions where project resources are deployed. This minimizes administrative effort because once tags are applied, Cost Management automatically uses them for reporting.

C. custom role-based access control (RBAC) roles

Explanation: Custom RBAC roles allow you to define granular permissions for accessing and managing Azure resources.

Why it’s incorrect: Custom RBAC roles are focused on security and access control. They do not directly contribute to cost monitoring or grouping costs by project. RBAC is about who can do what with Azure resources, not about cost tracking.

D. management groups

Explanation: Management groups provide a hierarchical container above Azure subscriptions. They allow you to organize subscriptions into a management hierarchy and apply policies and governance at scale.

Why it’s correct: Management groups are crucial for aggregating cost data across subscriptions for projects. By organizing subscriptions under management groups that represent projects (or project portfolios), you can use Cost Management to view consolidated costs at the management group level. This provides a high-level, aggregated view of project costs across all underlying subscriptions. This minimizes administrative effort because you organize subscriptions once into management groups, and Cost Management then automatically aggregates data at that level.

E. Azure boards

Explanation: Azure Boards is a service within Azure DevOps for work management, tracking tasks, bugs, and features.

Why it’s incorrect: Azure Boards is a project management tool and is not related to Azure Cost Management or cost monitoring. It doesn’t provide any cost aggregation or reporting capabilities.

Conclusion:

The two components that are essential for monitoring costs on a per-project basis across multiple subscriptions with minimal administrative effort are:

B. resource tags: To categorize resources by project within subscriptions.

D. management groups: To aggregate costs across subscriptions at the project level.
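
As a small illustration of the tagging half of the solution, the hedged sketch below uses the azure-mgmt-resource Python package to create (or update) a resource group with a Project tag; with per-resource tags or tag inheritance enabled in Cost Management, costs can then be grouped and filtered by that tag. The subscription ID, resource group name, and tag values are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    # Placeholder subscription ID and resource group name.
    client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Apply the Project tag at the resource group scope.
    client.resource_groups.create_or_update(
        "rg-projecta-web",
        {
            "location": "eastus",
            "tags": {"Project": "ProjectA"},
        },
    )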

60
Q

HOTSPOT

You have a Microsoft Entra tenant named contoso.com that contains multiple enterprise apps. Your company has a business partner that has a Microsoft Entra tenant named fabrikam.com. You need to recommend an identity governance solution that will provide users in fabrikam.com with access to the enterprise apps in contoso.com.

The solution must meet the following requirements:

  • Ensure that administrators in fabrikam.com can provide the fabrikam.com users with access to the contoso.com enterprise apps.
  • Automatically provision identities for the fabrikam.com users to enable access to the contoso.com enterprise apps.
  • Provide policy-based management of access assignments, approvals, and expirations.
  • Minimize administrative effort.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
For identity provisioning:
B2B collaboration
B2B direct connect
Cross-tenant synchronization
For access management:
Entitlement management
Permissions Management
Privileged Identity Management (PIM)

A

Answer:

For identity provisioning: Cross-tenant synchronization

For access management: Entitlement management

Explanation:

For Identity Provisioning: Cross-tenant synchronization

Cross-tenant synchronization is a feature in Microsoft Entra ID Governance specifically designed to automate the creation, update, and deletion of users across Microsoft Entra tenants. In this scenario, it perfectly addresses the requirement to automatically provision identities for fabrikam.com users in contoso.com.

How it works for this scenario: You would configure cross-tenant synchronization to flow users from fabrikam.com to contoso.com. Based on defined rules (e.g., users in a specific group in fabrikam.com), identities (user accounts) will be automatically created in contoso.com. This eliminates the need for manual invitation or individual user creation for each fabrikam.com user needing access.

Why other Identity Provisioning options are less suitable:

B2B collaboration (Guest accounts): While B2B collaboration allows inviting guest users from fabrikam.com to contoso.com, it doesn’t automatically provision identities in the same way. Guest accounts are essentially pointers to the external identity provider. It requires initial invitation and is less automated for large-scale provisioning compared to cross-tenant synchronization.

B2B direct connect: B2B direct connect is primarily for enabling Teams Shared Channels and is not directly related to provisioning identities for enterprise application access in the way described in the question.

For Access Management: Entitlement management

Entitlement management is part of Microsoft Entra ID Governance and is specifically built for policy-based identity governance. It provides a way to manage who has access to what resources (like enterprise applications) through access packages.

How it works for this scenario:

You would create access packages in contoso.com that grant access to the relevant enterprise applications.

You can configure policies within these access packages to define:

Who can request access: You can allow users from fabrikam.com to request access.

Approval workflows: You can set up approval workflows, potentially involving fabrikam.com administrators to approve access requests from their users.

Access review and expiration: You can configure periodic access reviews and automatic expiration of access assignments.

You can delegate the management of these access packages to designated individuals, potentially including administrators from fabrikam.com, enabling them to manage access for their users to contoso.com applications in a self-service manner.

Why other Access Management options are less suitable:

Permissions Management: Permissions Management is focused on discovering, remediating, and monitoring permissions across multi-cloud environments. It’s more about managing existing permissions and less about the policy-driven access assignment, approval, and lifecycle management required in this scenario.

Privileged Identity Management (PIM): PIM is designed to manage, control, and monitor access to privileged roles and resources within an organization. While it does involve access assignments and approvals, it’s primarily for internal privileged access management and not for managing access for external partner users to enterprise applications in the way entitlement management is designed for.

61
Q

You have an app named App1 that uses an on-premises Microsoft SQL Server database named DB1.

You plan to migrate DB1 to an Azure SQL managed instance.

You need to enable customer-managed Transparent Data Encryption (TDE) for the instance. The solution must maximize encryption strength.

Which type of encryption algorithm and key length should you use for the TDE protector?

AES256
RSA4096
RSA2048
RSA3072

A

To maximize encryption strength for customer-managed Transparent Data Encryption (TDE) in Azure SQL Managed Instance, you need to choose the strongest algorithm and key length for the TDE protector. The TDE protector is the asymmetric key used to encrypt the Database Encryption Key (DEK).

Let’s evaluate each option:

AES256: AES (Advanced Encryption Standard) is a symmetric encryption algorithm and AES256 uses a 256-bit key. While AES256 is a strong encryption algorithm and is used by TDE to encrypt the data itself (using the DEK), it is not used as the TDE protector algorithm. The TDE protector in Azure SQL Managed Instance is an asymmetric key algorithm, specifically RSA. Therefore, AES256 is not the correct algorithm for the TDE protector.

RSA4096: RSA (Rivest-Shamir-Adleman) is an asymmetric encryption algorithm. RSA4096 uses a 4096-bit key length. RSA is the algorithm used for the TDE protector in Azure SQL Managed Instance when you choose customer-managed keys. Among the RSA options provided (RSA2048, RSA3072, RSA4096), RSA4096 offers the longest key length. Longer key lengths in RSA generally provide stronger encryption as they are more resistant to brute-force attacks and cryptanalysis.

RSA2048: RSA2048 is also an asymmetric encryption algorithm, but with a 2048-bit key length. While RSA2048 is still considered secure for many applications, it offers a lower level of encryption strength compared to RSA4096.

RSA3072: RSA3072 is an asymmetric encryption algorithm with a 3072-bit key length. It provides stronger encryption than RSA2048 but is still less secure than RSA4096.

Conclusion:

To maximize encryption strength for the TDE protector in Azure SQL Managed Instance when using customer-managed keys, you should choose the RSA algorithm with the longest key length available from the options. RSA4096 provides the strongest encryption strength among the given RSA key length options.

Final Answer: RSA4096
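
As a short, hedged sketch, the snippet below uses the azure-keyvault-keys Python package to create an RSA 4096-bit key that could then be assigned as the TDE protector for the managed instance. The vault URL and key name are placeholders; assigning the protector itself is configured on the managed instance, for example through the portal or the SQL management API.

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.keys import KeyClient

    # Placeholder vault URL; the caller needs permission to create keys in this vault.
    key_client = KeyClient(
        vault_url="https://mi-tde-vault.vault.azure.net",
        credential=DefaultAzureCredential(),
    )

    # Create an RSA key with a 4096-bit modulus to use as the customer-managed TDE protector.
    tde_key = key_client.create_rsa_key("mi-tde-protector", size=4096)
    print(tde_key.id)  # key identifier to reference when configuring TDE on the managed instance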

62
Q

HOTSPOT

You have an Azure subscription. The subscription contains an Azure SQL managed instance that stores employee details, including social security numbers and phone numbers.

You need to configure the managed instance to meet the following requirements:

  • The helpdesk team must see only the last four digits of an employee’s phone number.
  • Cloud administrators must be prevented from seeing the employee’s social security numbers.

What should you enable for each column in the managed instance? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Phone numbers:
Always Encrypted
Column encryption
Dynamic data masking
Transparent Data Encryption (TDE)
Social security numbers:
Always Encrypted
Column encryption
Dynamic data masking
Transparent Data Encryption (TDE)

A

Let’s analyze each requirement and the Azure SQL Managed Instance security features to determine the best options.

Requirement 1: The helpdesk team must see only the last four digits of an employee’s phone number.

Dynamic data masking: This feature is designed to limit sensitive data exposure by masking it to non-privileged users. You can configure masking rules to show only a portion of the data, such as the last four digits of a phone number. This perfectly aligns with the requirement for the helpdesk team.

Always Encrypted: While Always Encrypted encrypts data both at rest and in use, it is not designed for masking. To show the last four digits, you would need to decrypt the entire phone number and then apply masking logic in the application, which is not efficient for this scenario. Always Encrypted is more about full data protection and access control through encryption keys.

Column encryption: This is a general term and might be confused with Always Encrypted. If it refers to Always Encrypted, the same limitations apply as described above. If it refers to something else (less likely in Azure SQL context), it’s not a standard feature for masking data.

Transparent Data Encryption (TDE): TDE encrypts data at rest (data files, log files, and backups). It does not control what users see when they query the data. TDE protects against offline attacks by encrypting the physical storage, but it does not mask data for authorized users within the database system.

Therefore, for Phone numbers, Dynamic data masking is the most appropriate solution.

Requirement 2: Cloud administrators must be prevented from seeing the employee’s social security numbers.

Dynamic data masking: You can use dynamic data masking to completely mask the social security number column for cloud administrators. By setting a masking rule that replaces the entire value with a fixed string or a partial value, you can prevent them from seeing the actual social security numbers when they query the database.

Always Encrypted: Always Encrypted with randomized or deterministic encryption can effectively prevent cloud administrators from seeing the actual social security numbers, if they are not granted access to the decryption keys. This is a strong security measure. However, implementing and managing Always Encrypted can be more complex than Dynamic Data Masking.

Column encryption: Again, if this refers to Always Encrypted, it could work as described above.

Transparent Data Encryption (TDE): TDE encrypts data at rest, but it does not prevent authorized database users, including cloud administrators who typically have high-level SQL permissions, from seeing the data when querying. TDE is not for access control within the database system.

Considering both requirements and aiming for the most straightforward solution with minimized administrative effort for these specific masking and access control needs, Dynamic data masking is suitable for both columns. While Always Encrypted provides stronger encryption for Social Security Numbers, Dynamic Data Masking effectively prevents visibility for cloud administrators as required and is simpler to implement and manage for this scenario. For phone numbers, Dynamic Data Masking is clearly the best fit for showing partial data.

Final Answer:

Answer Area
Phone numbers: Dynamic data masking
Social security numbers: Dynamic data masking
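
For reference, here is a hedged sketch of what the underlying masking rules could look like in T-SQL (executed here through pyodbc): a partial() mask that leaves only the last four digits of the phone number visible and a default() mask that hides the social security number completely. The table, column names, and connection details are hypothetical.

    import pyodbc

    # Hypothetical connection string for the managed instance.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:mi1.public.abc123.database.windows.net,3342;"
        "Database=HR;Authentication=ActiveDirectoryInteractive;"
    )
    cursor = conn.cursor()

    # Expose only the last four digits of the phone number to non-privileged users.
    cursor.execute(
        "ALTER TABLE dbo.Employees "
        "ALTER COLUMN PhoneNumber ADD MASKED WITH (FUNCTION = 'partial(0,\"XXX-XXX-\",4)');"
    )

    # Replace the social security number with the default mask for non-privileged users.
    cursor.execute(
        "ALTER TABLE dbo.Employees "
        "ALTER COLUMN SSN ADD MASKED WITH (FUNCTION = 'default()');"
    )
    conn.commit()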

62
Q

HOTSPOT

You have an Azure subscription that contains the resources shown in the following table.
Name Type Description
App1 Azure App Service app None
Workspace1 Log Analytics workspace Configured to use a pay-as-you-go pricing tier
App1Logs Log Analytics table Hosted in Workspace1; configured to use the Analytics Logs data plan

Log files from App1 are ingested into App1Logs. An average of 120 GB of log data is ingested per day.

You configure an Azure Monitor alert that will be triggered if the App1 logs contain error messages.

You need to minimize the Log Analytics costs associated with App1.

The solution must meet the following requirements:

  • Ensure that all the log files from App1 are ingested into App1Logs.
  • Minimize the impact on the Azure Monitor alert.

Which resource should you modify, and which modification should you perform? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Resource:
App1
App1Logs
Workspace1
Modification:
Change to a commitment pricing tier.
Change to the Basic Logs data plan.
Set a daily cap.

A

To minimize Log Analytics costs associated with App1 logs while ensuring ingestion and minimizing impact on the Azure Monitor alert, you should modify the App1Logs resource and change the data plan to Basic Logs.

Here’s why:

Resource: App1Logs: The costs are directly associated with the App1Logs table where the log data is ingested and stored. Modifying this specific resource is the most targeted approach to cost optimization for these logs.

Modification: Change to the Basic Logs data plan: Basic Logs in Azure Monitor are designed for high-volume, low-cost ingestion of verbose logs that are primarily used for debugging, troubleshooting, and auditing, but less frequently queried for complex analytics. They offer significantly reduced ingestion and retention costs compared to Analytics Logs.

Let’s analyze why this combination is correct and why other options are incorrect:

Why App1Logs and Change to Basic Logs is correct:

Cost Minimization: Basic Logs are cheaper than Analytics Logs, directly reducing costs associated with App1Logs.

Log Ingestion: Basic Logs still ingest all log files, fulfilling the requirement to “Ensure that all the log files from App1 are ingested into App1Logs.”

Minimal Impact on Alert: Azure Monitor alerts can still be configured to work with Basic Logs. While Basic Logs have limitations in query capabilities compared to Analytics Logs, simple alerts checking for the presence of error messages (as described in the question “alert that will be triggered if the App1 logs contain error messages”) can generally function with Basic Logs. The impact is minimized because basic alerts can still work.

Why other combinations are incorrect:

Workspace1 and Change to a commitment pricing tier: While commitment tiers can reduce costs for high-volume ingestion at the workspace level, they might not be the most minimal change for optimizing costs specifically for App1Logs. Changing the data plan of App1Logs is a more targeted and potentially more cost-effective approach for this specific scenario. Also, commitment tier is more about predictable cost, not necessarily the lowest possible cost if Basic Logs are sufficient.

Workspace1 and Set a daily cap: Setting a daily cap on the workspace would reduce costs, but it directly violates the requirement to “Ensure that all the log files from App1 are ingested into App1Logs.” If the daily data volume exceeds the cap, logs will be dropped.

App1 and any modification: Modifying App1 (the App Service app) itself is not directly relevant to minimizing Log Analytics costs. The costs are incurred in Log Analytics based on data ingestion and retention, not in the App Service itself.

App1Logs and Change to a commitment pricing tier: Similar to Workspace1 commitment tier, it might be less targeted and potentially less optimal than directly using Basic Logs if the alert can function with Basic Logs’ limitations.

App1Logs and Set a daily cap: Setting a daily cap on App1Logs would also violate the requirement to ingest all logs.

Therefore, the most appropriate and cost-effective solution that meets the requirements is to modify App1Logs and change the data plan to Basic Logs.

Final Answer:

Answer Area

Resource: App1Logs

Modification: Change to the Basic Logs data plan.
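
For reference, changing a table's plan is an update on the workspace's tables sub-resource in Azure Resource Manager. The hedged sketch below patches App1Logs to the Basic plan through the ARM REST API, using azure-identity for a token; the subscription ID, resource group, table name, and api-version are assumptions that should be verified against the current Microsoft.OperationalInsights tables reference.

    import requests
    from azure.identity import DefaultAzureCredential

    # Placeholders: subscription ID, resource group, workspace, and table name.
    SUB = "<subscription-id>"
    RG = "<resource-group>"
    WORKSPACE = "Workspace1"
    TABLE = "App1Logs"  # as named in the scenario; custom log tables often carry a _CL suffix

    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

    url = (
        f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.OperationalInsights/workspaces/{WORKSPACE}"
        f"/tables/{TABLE}?api-version=2022-10-01"  # assumed api-version; verify before use
    )

    # Switch the table from the Analytics plan to the Basic plan.
    resp = requests.patch(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json={"properties": {"plan": "Basic"}},
    )
    resp.raise_for_status()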

62
Q

Your company has the divisions shown in the following table.
Division Azure subscription Azure AD tenant
East Sub1 Contoso.com
West Sub2 Fabrikam.com

Sub1 contains an Azure App Service web app named App1. App1 uses Azure AD for single-tenant user authentication. Users from contoso.com can authenticate to App1.

You need to recommend a solution to enable users in the fabrikam.com tenant to authenticate to App1.

What should you recommend?

Configure the Azure AD provisioning service.
Configure Supported account types in the application registration and update the sign-in endpoint.
Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).
Enable Azure AD pass-through authentication and update the sign-in endpoint

A

Let’s analyze each option to determine the best solution for enabling users from the fabrikam.com Azure AD tenant to authenticate to App1 in the contoso.com tenant.

Configure the Azure AD provisioning service.

Explanation: Azure AD provisioning service is used to automate the creation, modification, and deletion of user identities and roles in target applications. It’s primarily concerned with user lifecycle management and synchronizing identities between Azure AD and applications.

Relevance to the requirement: This option is not directly relevant to enabling cross-tenant authentication for a web application. Provisioning service is not designed for authentication flows or allowing users from external directories to access an application in a different directory.

Why incorrect: Provisioning service does not handle the authentication challenge of users from fabrikam.com accessing App1.

Configure Supported account types in the application registration and update the sign-in endpoint.

Explanation: In Azure AD application registrations, the “Supported account types” setting determines who can authenticate to the application. For single-tenant applications, it’s typically set to “Accounts in this organizational directory only.” To allow users from other Azure AD tenants, you need to change this setting to a multi-tenant option, such as “Accounts in any organizational directory”. Updating the sign-in endpoint might also be necessary to ensure the application correctly handles multi-tenant authentication flows, often using the /common endpoint for multi-tenant applications.

Relevance to the requirement: This option directly addresses the requirement. By configuring the application registration to support accounts from other directories, users from fabrikam.com will be able to authenticate.

Why correct: This is the most direct and standard approach to enable multi-tenant authentication for an Azure AD application. By changing the supported account types and potentially adjusting the sign-in endpoint, App1 can be configured to accept authentication requests from fabrikam.com users.

Configure assignments for the fabrikam.com users by using Azure AD Privileged Identity Management (PIM).

Explanation: Azure AD Privileged Identity Management (PIM) is used to manage, control, and monitor access to important resources in your organization by providing just-in-time privileged access to Azure resources and Azure AD. PIM is about managing privileged roles and access, not general user authentication for applications across tenants.

Relevance to the requirement: PIM is not the correct tool for enabling basic authentication for fabrikam.com users to App1. PIM is for managing elevated permissions, not for granting initial access for external users.

Why incorrect: PIM is not designed for this scenario. It’s about managing privileged access within an organization, not enabling cross-tenant application authentication for regular users.

Enable Azure AD pass-through authentication and update the sign-in endpoint.

Explanation: Azure AD Pass-through authentication is a user sign-in method that lets your users sign in to both on-premises and Azure AD-based applications with the same passwords. It works by validating the users’ passwords directly against your on-premises Active Directory.

Relevance to the requirement: Pass-through authentication applies to hybrid scenarios that include an on-premises Active Directory. In this case there are two Azure AD tenants and no on-premises dependency in scope, so pass-through authentication does nothing to let users in fabrikam.com sign in to an app registered in contoso.com. Updating the sign-in endpoint on its own does not change which account types the app registration accepts.

Why incorrect: Pass-through authentication is not the appropriate solution for enabling users from a different Azure AD tenant to authenticate to a web app in another tenant. It’s more relevant for hybrid identity scenarios with on-premises Active Directory.

Conclusion:

The most direct and appropriate solution to enable users from fabrikam.com to authenticate to App1 in contoso.com is to configure the application registration of App1 to support multi-tenant authentication by changing the “Supported account types” and potentially updating the sign-in endpoint. This makes the application capable of accepting users from other Azure AD directories, including fabrikam.com.

Final Answer: Configure Supported account types in the application registration and update the sign-in endpoint.
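
For illustration only (this is not part of the original answer), the hedged sketch below shows how a client could acquire a token once App1's registration is multi-tenant: the authority moves from the contoso.com tenant-specific endpoint to /organizations (or /common). The client ID and scope are placeholders, and the snippet assumes the MSAL for Python library.

# Minimal sketch using MSAL for Python (pip install msal).
# The client ID is a hypothetical placeholder.
import msal

# Single-tenant authority: only contoso.com accounts are accepted.
# authority = "https://login.microsoftonline.com/contoso.onmicrosoft.com"

# Multi-tenant authority: accounts from any Azure AD organization
# (including fabrikam.com) can sign in once the app registration's
# "Supported account types" is set to a multi-tenant option.
authority = "https://login.microsoftonline.com/organizations"

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",  # placeholder app (client) ID
    authority=authority,
)

# Opens a browser so the user (for example, a fabrikam.com user) can sign in.
result = app.acquire_token_interactive(scopes=["User.Read"])
print(result.get("access_token", result))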

63
Q

Your on-premises datacenter contains a server that runs Linux and hosts a Java app named App1.

App1 has the following characteristics:

  • App1 is an interactive app that users access by using HTTPS connections.
  • The number of connections to App1 changes significantly throughout the day.
  • App1 runs multiple concurrent instances.
  • App1 requires major changes to run in a container.

You plan to migrate App1 to Azure.

You need to recommend a compute solution for App1.

The solution must meet the following requirements:

  • The solution must run multiple instances of App1.
  • The number of instances must be managed automatically depending on the load.
  • Administrative effort must be minimized.

What should you include in the recommendation?

Azure Batch
Azure App Service
Azure Kubernetes Service (AKS)
Azure Virtual Machine Scale Sets

A

The correct answer is Azure App Service.

Explanation:

Let’s analyze each option against the requirements:

Requirement 1: The solution must run multiple instances of App1.

Azure App Service: Azure App Service is designed to run multiple instances of web applications for high availability and scalability. You can easily configure the number of instances.

Azure Kubernetes Service (AKS): AKS is designed to run multiple instances of containerized applications (pods), which can host App1 if it’s containerized.

Azure Virtual Machine Scale Sets: Azure Virtual Machine Scale Sets allow you to create and manage a group of identical, load-balanced VMs. You can deploy App1 to each VM instance in the scale set.

Azure Batch: Azure Batch is designed for batch processing and large-scale parallel compute jobs, not for continuously running, interactive web applications. It’s not the right choice for running multiple instances of a web application.

Requirement 2: The number of instances must be managed automatically depending on the load.

Azure App Service: Azure App Service has built-in autoscaling features. You can configure autoscaling rules based on various metrics (CPU usage, memory usage, request queue length, etc.) to automatically increase or decrease the number of instances based on load fluctuations.

Azure Kubernetes Service (AKS): AKS supports autoscaling through Horizontal Pod Autoscaler (HPA) for scaling pods (container instances) and Cluster Autoscaler for scaling the number of nodes (VMs) in the AKS cluster. Autoscaling is a core feature of Kubernetes.

Azure Virtual Machine Scale Sets: Azure Virtual Machine Scale Sets have built-in autoscaling capabilities. You can configure autoscaling rules based on VM metrics to automatically adjust the number of VMs in the scale set based on load.

Azure Batch: Azure Batch is less focused on automatic scaling based on real-time load changes for a continuously running application. Scaling in Batch is more about adjusting resources for batch jobs.

Requirement 3: Administrative effort must be minimized.

Azure App Service: Azure App Service is a Platform-as-a-Service (PaaS) offering. It minimizes administrative effort because Microsoft manages the underlying infrastructure, operating system patching, load balancing, and scaling. You primarily focus on deploying and managing your application code.

Azure Kubernetes Service (AKS): AKS is a managed Kubernetes service, which simplifies Kubernetes management compared to self-managed Kubernetes. However, it still requires more administrative effort than App Service. You are responsible for managing the Kubernetes cluster configuration, deployments, and some aspects of the underlying infrastructure.

Azure Virtual Machine Scale Sets: Azure Virtual Machine Scale Sets are Infrastructure-as-a-Service (IaaS). You have more control but also more administrative responsibility. You are responsible for managing the operating system, web server, application deployment, and patching within the VMs in the scale set. This requires more administrative effort than PaaS options like App Service and AKS.

Azure Batch: Azure Batch requires managing batch jobs, pools, and nodes, which is a different kind of administrative effort, but not minimized for hosting a web application.

App1 requires major changes to run in a container: This statement is a crucial constraint. It implies that containerization (and therefore AKS) is not the preferred path due to the effort involved. Because Azure App Service can run the Java app directly without containerization, scales out automatically based on load, and, as a PaaS offering, minimizes administrative effort, it is the best fit for all three requirements.
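
As a supplementary, hedged sketch (not part of the original answer), the dict below shows the shape of an App Service plan autoscale rule, mirroring the Microsoft.Insights/autoscaleSettings ARM schema. The subscription ID, resource group, and plan name are placeholders; you would deploy this through ARM/Bicep, the portal, or an SDK of your choice.

# Hedged sketch: a CPU-based autoscale rule for an App Service plan, expressed as a
# Python dict that mirrors the Microsoft.Insights/autoscaleSettings ARM schema.
# All resource IDs and names are placeholders.
plan_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-app1"
    "/providers/Microsoft.Web/serverfarms/plan-app1"  # placeholder App Service plan
)

autoscale_setting = {
    "location": "eastus",
    "properties": {
        "enabled": True,
        "targetResourceUri": plan_id,
        "profiles": [{
            "name": "default",
            "capacity": {"minimum": "1", "maximum": "5", "default": "1"},
            "rules": [{
                # Scale out by one instance when average CPU > 70% over 10 minutes.
                "metricTrigger": {
                    "metricName": "CpuPercentage",
                    "metricResourceUri": plan_id,
                    "timeGrain": "PT1M",
                    "statistic": "Average",
                    "timeWindow": "PT10M",
                    "timeAggregation": "Average",
                    "operator": "GreaterThan",
                    "threshold": 70,
                },
                "scaleAction": {
                    "direction": "Increase",
                    "type": "ChangeCount",
                    "value": "1",
                    "cooldown": "PT5M",
                },
            }],
        }],
    },
}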

64
Q

You plan to migrate data to Azure.

The IT department at your company identifies the following requirements:

✑ The storage must support 1 PB of data.

✑ The data must be stored in blob storage.

✑ The storage must support three levels of subfolders.

✑ The storage must support access control lists (ACLs).

You need to meet the requirements.

What should you use?

a premium storage account that is configured for block blobs
a general purpose v2 storage account that has hierarchical namespace enabled
a premium storage account that is configured for page blobs
a premium storage account that is configured for file shares and supports large file shares

A

Let’s break down each requirement and evaluate the provided options:

Requirements:

1 PB of data: All Azure storage account types can scale to petabytes, so this requirement is not a differentiator between the options.

Blob storage: We need to store data in blob storage, which eliminates the “premium storage account that is configured for file shares and supports large file shares” option, because it refers to Azure Files, not Blob storage.

Three levels of subfolders: This strongly indicates the need for a hierarchical namespace. Traditional blob storage is flat and does not natively support subfolders in the way a file system does.

Access control lists (ACLs): ACLs are necessary for granular permission management, especially within a hierarchical structure.

Option Analysis:

a premium storage account that is configured for block blobs:

Blob storage: Yes, it’s block blob storage.

1 PB of data: Yes, premium storage can handle this volume.

Three levels of subfolders: No. Unless hierarchical namespace is enabled (which this option does not mention), block blob storage has a flat namespace. You can simulate folders by using prefixes in blob names, but that is not a true hierarchy and does not provide directory-level ACLs.

ACLs: Without hierarchical namespace, access to blobs is controlled through Azure RBAC and shared access signatures; POSIX-style ACLs on blobs or folders are not supported.

a general purpose v2 storage account that has hierarchical namespace enabled:

Blob storage: Yes, enabling hierarchical namespace on a general-purpose v2 account turns it into Azure Data Lake Storage Gen2, which is built on blob storage.

1 PB of data: Yes, general-purpose v2 accounts can handle this volume.

Three levels of subfolders: Yes. Hierarchical namespace is specifically designed to enable a file system-like structure within blob storage, supporting directories and subdirectories (folders).

ACLs: Yes. Hierarchical namespace fully supports Access Control Lists (ACLs) at the directory and file level, providing POSIX-compliant access control, which is essential for managing permissions within a folder structure.

a premium storage account that is configured for page blobs:

Blob storage: Yes, it’s page blob storage.

1 PB of data: Yes, premium storage can handle this volume.

Three levels of subfolders: No. Premium page blob storage accounts, like standard blob storage and premium block blob storage, are also flat namespace storage. Page blobs are primarily designed for random read/write operations, such as virtual machine disks.

ACLs: Page blobs do not support POSIX-style ACLs; access is controlled through Azure RBAC and shared access signatures.

a premium storage account that is configured for file shares and supports large file shares:

Blob storage: No. This option is for Azure Files, which provides fully managed file shares in the cloud using the SMB protocol. Azure Files is not blob storage.

1 PB of data: Azure Files can support large file shares.

Three levels of subfolders: Yes, Azure Files supports a full file system hierarchy with folders and subfolders.

ACLs: Yes, Azure Files supports standard file system ACLs (NTFS ACLs over SMB or POSIX-style ACLs over NFS).

Conclusion:

Only a general purpose v2 storage account that has hierarchical namespace enabled (Azure Data Lake Storage Gen2) fully satisfies all the requirements, particularly the need for three levels of subfolders and ACLs within blob storage. Hierarchical namespace is the key feature that provides the folder structure and ACL capabilities necessary for this scenario.

Final Answer:
a general purpose v2 storage account that has hierarchical namespace enabled
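
To illustrate what hierarchical namespace adds (a hedged sketch, not part of the original answer), the snippet below creates three levels of subfolders and sets a POSIX-style ACL by using the azure-storage-file-datalake package. The account name, container, folder path, and object ID are placeholders.

# Hedged sketch using azure-storage-file-datalake and azure-identity
# (pip install azure-storage-file-datalake azure-identity).
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storageaccount>.dfs.core.windows.net",  # placeholder account
    credential=DefaultAzureCredential(),
)

fs = service.get_file_system_client("images")  # a blob container acts as the file system

# Three levels of subfolders - possible only because hierarchical namespace is enabled.
directory = fs.create_directory("customers/contoso/2024")

# POSIX-style ACL on the directory (read/execute for a specific Azure AD object ID).
directory.set_access_control(
    acl="user::rwx,group::r-x,other::---,user:<object-id>:r-x"  # placeholder object ID
)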

65
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are designing an Azure solution for a company that has four departments. Each department will deploy several Azure app services and Azure SQL databases.

You need to recommend a solution to report the costs for each department to deploy the app services and the databases. The solution must provide a consolidated view for cost reporting that displays cost broken down by department.

Solution: Create a separate resource group for each department. Place the resources for each department in its respective resource group.

Does this meet the goal?

Yes
No

A

The proposed solution is to create a separate resource group for each department and place their respective Azure App Services and Azure SQL databases within those resource groups. Let’s evaluate if this solution meets the goal of reporting costs for each department and providing a consolidated view broken down by department.

Resource Groups as Cost Boundaries: Resource groups in Azure act as logical containers. By placing all resources for a specific department within a dedicated resource group, you effectively create a logical boundary for cost tracking.

Azure Cost Management Capabilities: Azure Cost Management allows you to analyze and manage your Azure costs. One of the key features of Azure Cost Management is the ability to filter and group costs by various dimensions, including Resource Group.

Reporting Costs by Department: If you consistently place resources for each department into their designated resource groups, you can use Azure Cost Management to:

Filter costs: Filter costs to show only resources within a specific resource group.

Group costs: Group costs by resource group to see a summary of costs for each resource group.

Export data: Export cost data, which includes the resource group information, for further analysis and reporting.

Consolidated View: Azure Cost Management provides a consolidated view of costs across your Azure subscriptions. By using the grouping and filtering capabilities based on resource groups, you can easily create a consolidated report that breaks down costs by department (resource group).

Benefits of using Resource Groups for Cost Reporting:

Simplicity: It’s a straightforward and easy-to-implement approach. Resource groups are a fundamental Azure concept.

Native Azure Feature: It leverages built-in Azure features (Resource Groups and Cost Management) without requiring additional services or complex configurations.

Granularity: Resource groups provide a good level of granularity for cost breakdown at the department level.

Integration with Cost Management: Azure Cost Management is designed to work effectively with resource groups for cost analysis.

Limitations (Minor for this specific goal):

Enforcement: Relies on organizational discipline to ensure resources are placed in the correct resource groups. Governance policies and processes may be needed to enforce this.

Cross-department Resources: If resources are shared across departments and reside in a common resource group, cost allocation for those shared resources might require additional tagging strategies or more complex cost allocation rules. However, for the stated scenario where each department deploys their own app services and databases, this is less of a concern.

Conclusion:

Creating separate resource groups for each department and placing their resources within those groups does meet the goal. It provides a simple and effective way to organize resources for departmental cost reporting using Azure Cost Management. You can easily filter and group costs by resource group to achieve the required cost breakdown and consolidated view.

Answer: Yes
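
As a supplementary, heavily hedged sketch (not part of the original answer), the request below shows roughly how Azure Cost Management can group month-to-date actual cost by resource group. The subscription ID is a placeholder, and the api-version and body field names are assumptions that should be verified against the current Cost Management Query REST API documentation.

# Heavily hedged sketch: query cost grouped by resource group via the Cost Management
# Query REST API (pip install requests azure-identity).
import requests
from azure.identity import DefaultAzureCredential

scope = "/subscriptions/<subscription-id>"  # placeholder; a management group scope also works
url = (
    "https://management.azure.com" + scope +
    "/providers/Microsoft.CostManagement/query?api-version=2023-03-01"  # version assumed
)

body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "None",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        "grouping": [{"type": "Dimension", "name": "ResourceGroup"}],
    },
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
response = requests.post(url, json=body, headers={"Authorization": "Bearer " + token})
print(response.json())  # one cost total per resource group, i.e., per department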

65
Q

HOTSPOT

You have a resource group named RG1 that contains the objects shown in the following table.
Name      Type              Location
ASP-RG1   App Service plan  East US
KV1       Azure Key Vault   East US
KV2       Azure Key Vault   West Europe
App1      Azure Logic Apps  West US

You need to configure permissions so that App1 can copy all the secrets from KV1 to KV2. App1 currently has the Get permission for the secrets in KV1.

Which additional permissions should you assign to App1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Permission to assign so that App1 can copy the secrets from KV1:
  Add
  Backup
  Create
  List
  Unwrap Key

Permission to assign so that App1 can copy the secrets to KV2:
  Create
  Import
  List
  Wrap Key

A

To copy secrets from KV1 to KV2 using App1, App1 needs specific permissions on both Key Vaults. Let’s break down the required permissions for each step.

Permission to assign so that App1 can copy the secrets from KV1:

Requirement: App1 needs to copy secrets from KV1. App1 already has “Get” permission, which allows it to read the content of a secret. To copy all secrets, App1 needs to be able to discover or enumerate the secrets available in KV1.

Analyze Permissions:

Add: Permission to add a new version of an existing secret. Not relevant for copying from KV1.

Backup: Permission to back up the entire Key Vault. While Backup would include the secrets, it’s an overly broad permission for just copying secrets and not the most efficient way to copy individual secrets. Also, Backup is more for vault-level operations, not secret-level copying.

Create: Permission to create new secrets. Not relevant for copying from KV1.

List: Permission to list the secrets in the Key Vault. This is essential. To copy all secrets, App1 first needs to list the names of the secrets in KV1, and then use the “Get” permission (which it already has) to retrieve the content of each secret.

Unwrap Key: Permission to decrypt a key that was previously encrypted with a key. Not relevant for copying secrets.

Conclusion for copying from KV1: The essential additional permission is List.

Permission to assign so that App1 can copy the secrets to KV2:

Requirement: App1 needs to copy secrets to KV2. This means App1 needs to be able to create or add secrets in KV2.

Analyze Permissions:

Create: Permission to create new secrets in the Key Vault. This is essential. To copy secrets to KV2, App1 needs to have the permission to create new secrets in KV2.

Import: In Key Vault, import applies to keys and certificates (bringing existing key material or certificate files into the vault); there is no import operation for secrets, so it does not fit the copy-to-KV2 step.

List: Permission to list secrets. Not relevant for copying to KV2 (writing/creating).

Wrap Key: Permission to encrypt a key using a key in the Key Vault. Not relevant for copying secrets.

Conclusion for copying to KV2: App1 must be able to write secrets into KV2. Because Import applies to keys and certificates rather than secrets, Create is the listed permission that allows App1 to add the copied secrets to KV2. (Strictly speaking, the data-plane operation that writes a secret is Set, but Create is the intended answer among the available choices.)

Final Answer:

Permission to assign so that App1 can copy the secrets from KV1: List
Permission to assign so that App1 can copy the secrets to KV2: Create
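
To make the required permissions concrete, here is a minimal hedged sketch of the copy operation itself, using the azure-keyvault-secrets package; the vault URLs are placeholders. Listing requires List on KV1, reading each value requires Get on KV1, and writing into KV2 requires the permission to create (set) secrets there.

# Hedged sketch using azure-keyvault-secrets and azure-identity
# (pip install azure-keyvault-secrets azure-identity). Vault URLs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
kv1 = SecretClient(vault_url="https://kv1.vault.azure.net", credential=credential)
kv2 = SecretClient(vault_url="https://kv2.vault.azure.net", credential=credential)

# Requires the List permission on KV1: enumerate secret names (values are not returned).
for secret_properties in kv1.list_properties_of_secrets():
    name = secret_properties.name
    # Requires the Get permission on KV1: read the secret value.
    value = kv1.get_secret(name).value
    # Requires permission to write secrets in KV2 (the data-plane operation is "set").
    kv2.set_secret(name, value)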

66
Q

A company is planning to deploy an application to Azure. The application will be based on .NET Core and will be hosted by using Azure Web Apps. Below are some of the requirements for the application:

Give the ability to correlate Azure resource usage and the performance data with the actual application configuration and performance data

Give the ability to visualize the relationships between application components

Give the ability to track requests and exceptions to specific lines of code from within the application

Give the ability to analyse how users return to an application and see how often they only select a particular drop-down value

Which of the following service would be best suited for fulfilling the requirement of “Give the ability to correlate Azure resource usage and the performance data with the actual application configuration and performance data”

Azure Application Insights
Azure Service Map
Azure Log Analytics
Azure Activity Log

A

The question specifically asks for a service that provides the ability to correlate Azure resource usage and performance data with the actual application configuration and performance data. Let’s evaluate each option against this specific requirement.

Azure Application Insights: This is an Application Performance Monitoring (APM) service designed to monitor live web applications. It automatically detects performance anomalies and includes powerful analytics tools to help diagnose issues and understand user behavior. Crucially, Application Insights integrates application performance data (requests, response times, exceptions, etc.) with Azure platform metrics (CPU usage, memory usage, etc. of the underlying Azure resources hosting the application). This integration is designed to directly correlate resource usage with application performance. It also allows you to track application configuration through custom telemetry and dependencies. This option directly addresses the stated requirement.

Azure Service Map: Azure Service Map automatically discovers application components and maps dependencies between services. While it visualizes relationships between components, it is less focused on the direct correlation between Azure resource usage and application performance data. Service Map is more about understanding the topology and dependencies of your application infrastructure. It doesn’t inherently combine application performance metrics with Azure resource metrics in a way that directly answers the correlation requirement.

Azure Log Analytics: Azure Log Analytics is a service to collect and analyze logs and metrics from various Azure resources. You can collect both Azure resource logs and application logs in Log Analytics. You could potentially create queries to try and correlate resource usage and application performance data if you ingest both types of data and know how to join them. However, Log Analytics is a general-purpose data analysis tool, not specifically designed for the out-of-the-box correlation of Azure resource usage with application configuration and performance in the same way Application Insights is. It requires more manual effort and configuration to achieve this specific correlation compared to the built-in capabilities of Application Insights.

Azure Activity Log: Azure Activity Log provides audit logs of operations performed on Azure resources. It is primarily focused on auditing and operational awareness of changes in your Azure environment. It does not provide application performance data and is not designed for correlating resource usage with application configuration and performance. It’s focused on control plane operations, not application telemetry and performance.

Conclusion:

Azure Application Insights is specifically designed to fulfill the requirement of correlating Azure resource usage and performance data with application configuration and performance data. It is built as an APM tool with this core functionality in mind, offering out-of-the-box features to achieve this correlation for web applications like Azure Web Apps. The other options either focus on different aspects (Service Map - dependencies, Activity Log - audit trails) or require more manual effort to achieve the desired correlation (Log Analytics).

Therefore, the best-suited service for the specified requirement is Azure Application Insights.

Final Answer: Azure Application Insights
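
As a hedged, language-neutral illustration (the app in the question is .NET Core, which has its own Application Insights SDK), the Python sketch below shows how an application can emit telemetry to Application Insights by using the azure-monitor-opentelemetry distro. The connection string and span name are placeholders.

# Hedged sketch: send telemetry to Application Insights from Python
# (pip install azure-monitor-opentelemetry).
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"  # placeholder
)

tracer = trace.get_tracer(__name__)

# Each span becomes a request/dependency record that Application Insights can correlate
# with the Azure resource metrics of the hosting Web App.
with tracer.start_as_current_span("process-order"):
    pass  # application logic would go here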

66
Q

You have an on-premises app named App1.

Customers use App1 to manage digital images.

You plan to migrate App1 to Azure.

You need to recommend a data storage solution for App1. The solution must meet the following image storage requirements:

  • Encrypt images at rest.
  • Allow files up to 50 MB.
  • Manage access to the images by using Azure Web Application Firewall (WAF) on Azure Front Door.

The solution must meet the following customer account requirements:

  • Support automatic scale out of the storage.
  • Maintain the availability of App1 if a datacenter fails.
  • Support reading and writing data from multiple Azure regions.

Which service should you include in the recommendation for each type of data? To answer, drag the appropriate services to the correct type of data. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct answer is worth one point.
Services
Azure Blob storage
Azure Cosmos DB
Azure SQL Database
Azure Table storage
Answer Area
Image storage: Service
Customer accounts: Service

A

To choose the single “closest with correct answer” for each category, we need to re-evaluate the services based on the given requirements and select the most fitting service for each data type.

Image storage: Service

Azure Blob storage: Azure Blob storage is specifically designed for storing large amounts of unstructured data like images. It inherently supports encryption at rest, easily handles files up to 50MB (and much larger), and integrates seamlessly with Azure Front Door for CDN and WAF capabilities. This is the most direct and best fit for image storage requirements.

Azure Cosmos DB: While Cosmos DB can store binary data, it’s not optimized for efficiently serving large binary files like images directly through a CDN and WAF in the same way Blob storage is. Cosmos DB is better suited for structured and semi-structured data.

Azure SQL Database: Azure SQL Database is a relational database service, not designed for storing and efficiently serving large binary files like images. It’s not a suitable primary storage for images in this scenario.

Azure Table storage: Azure Table storage is a NoSQL key-value store. It’s not designed for storing and serving binary image files efficiently.

Therefore, for Image storage, the closest with correct answer is Azure Blob storage.

Customer accounts: Service

Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service that excels in scenarios requiring automatic scale-out, high availability across datacenter failures, and multi-region read/write capabilities. It’s designed for globally distributed applications and is well-suited for managing customer account data that needs to be highly available and scalable. This is the most direct and best fit for customer account requirements.

Azure SQL Database: Azure SQL Database is a relational database service. While it can be scaled and made highly available, achieving multi-region read/write and automatic global distribution with the same ease and native capabilities as Cosmos DB is more complex and potentially less cost-effective.

Azure Blob storage: Azure Blob storage is designed for unstructured data and not optimized for managing structured customer account data that requires querying, indexing, and transactional consistency.

Azure Table storage: Azure Table storage is a NoSQL key-value store. While scalable, it is less feature-rich and less globally distributed out-of-the-box compared to Cosmos DB. It may not be the best choice for complex customer account management requiring robust global availability and multi-region write.

Therefore, for Customer accounts, the closest with correct answer is Azure Cosmos DB.

Answer Area:
Image storage: Service - Azure Blob storage
Customer accounts: Service - Azure Cosmos DB
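
As a hedged sketch (not part of the original answer), the snippet below shows the two services side by side: an image uploaded to Blob storage and a customer account document written to Cosmos DB. Account URLs, container names, and the document are placeholders, and the Cosmos container's partition key is assumed to be /id.

# Hedged sketch using azure-storage-blob, azure-cosmos, and azure-identity
# (pip install azure-storage-blob azure-cosmos azure-identity). All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient
from azure.cosmos import CosmosClient

credential = DefaultAzureCredential()

# Image storage: Azure Blob storage (encrypted at rest by default, fronted by Front Door + WAF).
blobs = BlobServiceClient(
    account_url="https://<storageaccount>.blob.core.windows.net", credential=credential
)
with open("photo.jpg", "rb") as image:
    blobs.get_blob_client(container="images", blob="photo.jpg").upload_blob(
        image, overwrite=True
    )

# Customer accounts: Azure Cosmos DB (multi-region writes are enabled on the account itself).
cosmos = CosmosClient(url="https://<cosmos-account>.documents.azure.com", credential=credential)
container = cosmos.get_database_client("app1").get_container_client("customers")
container.upsert_item({"id": "customer-001", "email": "user@example.com"})  # partition key /id assumed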

67
Q

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.

The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using the Regulatory compliance dashboard in Microsoft Defender for Cloud.

Does this meet the goal?

Yes
No

A

The proposed solution is to use the Regulatory compliance dashboard in Microsoft Defender for Cloud to meet the regulatory requirement of deploying Azure App Service instances only to specific Azure regions. Let’s evaluate if this is sufficient.

Goal:

Enforce deployment of App Service instances only to specific Azure regions.

Ensure resources for App Service instances reside in the same region (primarily a constraint, but the main goal is regional enforcement).

Solution: Regulatory compliance dashboard in Microsoft Defender for Cloud

Functionality: The Regulatory compliance dashboard in Microsoft Defender for Cloud is designed to:

Assess compliance: Evaluate your Azure environment against various regulatory standards and security benchmarks.

Provide visibility: Show your compliance posture and identify areas of non-compliance.

Offer recommendations: Suggest actions to improve compliance.

Report: Generate reports on compliance status.

Limitations in Enforcement: The Regulatory compliance dashboard is primarily a monitoring and reporting tool. It does not actively prevent the deployment of resources in non-compliant regions. It can only detect and report on non-compliance after resources have been deployed.

Why the Solution Does NOT Meet the Goal:

The core of the regulatory requirement is to prevent deployments in unapproved Azure regions. The Regulatory compliance dashboard is designed to monitor and report on compliance, but it does not have the capability to block or prevent deployments in real-time.

To enforce regional deployment restrictions, you need a service that can actively prevent non-compliant deployments before they happen.

The correct tool for enforcement is Azure Policy. Azure Policy allows you to define policies that:

Specify allowed regions: You can create a policy that lists the permitted Azure regions.

Assign policies to scopes: You can assign this policy to your subscription or resource groups.

Enforce with “Deny” effect: You can set the policy effect to “Deny,” which will prevent any deployment that violates the policy (e.g., deploying an App Service in a non-allowed region).

Conclusion:

The Regulatory compliance dashboard in Microsoft Defender for Cloud is a valuable tool for monitoring and improving your security and compliance posture. However, it is not designed to enforce deployment restrictions like region limitations. Therefore, using it alone does not meet the goal of ensuring App Service instances are deployed only to specific Azure regions.

Answer:

No
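
For reference, here is a hedged sketch of the Azure Policy approach described above: a policy rule that denies deployments outside an approved list of regions (Azure also ships a built-in "Allowed locations" policy). It is expressed as a Python dict mirroring the policy JSON; the region names are examples only.

# Hedged sketch: shape of an "allowed locations" policy rule with a deny effect.
allowed_locations_policy = {
    "mode": "Indexed",
    "parameters": {
        "allowedLocations": {
            "type": "Array",
            "defaultValue": ["eastus", "westeurope"],  # example approved regions
        }
    },
    "policyRule": {
        # Deny any resource whose location is not in the approved list.
        "if": {
            "not": {"field": "location", "in": "[parameters('allowedLocations')]"}
        },
        "then": {"effect": "deny"},
    },
}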

68
Q

DRAG DROP

You have an on-premises network that uses an IP address space of 172.16.0.0/16. You plan to deploy 25 virtual machines to a new Azure subscription.

You identify the following technical requirements:

  • All Azure virtual machines must be placed on the same subnet named Subnet1.
  • All the Azure virtual machines must be able to communicate with all on-premises servers.
  • The servers must be able to communicate between the on-premises network and Azure by using a site-to-site VPN.

You need to recommend a subnet design that meets the technical requirements.

What should you include in the recommendation? To answer, drag the appropriate network addresses to the correct subnets. Each network address may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Network Addresses
172.16.0.0/16
172.16.1.0/27
192.168.0.0/24
192.168.1.0/27

Answer Area
Subnet1: Network address
Gateway subnet: Network address

A

Subnet1: Network address: 192.168.0.0/24
Gateway subnet: Network address: 192.168.1.0/27

Explanation:

Let’s break down why this is the correct solution based on the requirements and available options:

  1. On-premises Network IP Address Space:

The on-premises network uses 172.16.0.0/16. This is a crucial piece of information. When you connect Azure to your on-premises network via VPN, you must avoid overlapping IP address ranges. If you use the same or overlapping ranges in Azure, routing will become ambiguous and communication will fail.

  2. Subnet1 Requirements (for VMs):

25 Virtual Machines: Subnet1 needs to be large enough to accommodate at least 25 VMs.

Same Subnet: All VMs must be in Subnet1.

  3. Gateway Subnet Requirement (for VPN Gateway):

Site-to-Site VPN: A site-to-site VPN requires a dedicated subnet named GatewaySubnet, into which the Azure VPN gateway is deployed. Microsoft recommends a gateway subnet of /27 or larger (the smallest supported size is /29).

Analyzing the Network Address Options:

172.16.0.0/16: This is the on-premises network range. You cannot use this range or any subnet within it for your Azure virtual network subnets that need to communicate with the on-premises network. Doing so would create an IP address overlap and break routing.

172.16.1.0/27: This subnet falls within the 172.16.0.0/16 range (specifically within 172.16.0.0 - 172.16.255.255). Therefore, using this for Subnet1 or Gateway subnet in Azure would create an overlap with the on-premises network, which is incorrect.

192.168.0.0/24: This subnet is in the 192.168.0.0/16 private IP range, which is completely separate from the on-premises 172.16.0.0/16 range. This is a suitable candidate for an Azure subnet because it avoids IP address overlap. A /24 subnet provides 256 addresses, with 251 usable addresses (Azure reserves 5 addresses within each subnet), which is more than sufficient for 25 VMs.

192.168.1.0/27: This subnet is also in the 192.168.0.0/16 private IP range and is completely separate from the on-premises 172.16.0.0/16 range. This is also a suitable candidate for an Azure subnet. A /27 subnet provides 32 addresses, with 27 usable addresses, which is sufficient for the Gateway subnet (which doesn’t host VMs, only the VPN Gateway).

Putting it Together - Why the Recommended Answer is Correct:

Subnet1: 192.168.0.0/24:

Non-overlapping: Completely separate from the on-premises 172.16.0.0/16 range.

Sufficient Size: /24 provides enough IP addresses for 25 VMs and future growth.

Meets VM Requirement: All VMs can be placed in this subnet.

Gateway subnet: 192.168.1.0/27:

Non-overlapping: Completely separate from the on-premises 172.16.0.0/16 range.

Sufficient Size: /27 is a recommended size for the Gateway subnet in Azure.

Meets VPN Requirement: Dedicated subnet for the VPN Gateway.

Why other combinations would be incorrect:

Using 172.16.0.0/16 or 172.16.1.0/27 for either Subnet1 or Gateway subnet would create an IP address overlap with the on-premises network, making VPN communication unreliable or impossible.

Using a very small subnet like /27 for Subnet1 might be technically possible for exactly 25 VMs, but it’s not best practice and offers no room for growth or other resources in the same subnet. /24 for Subnet1 is a more common and flexible choice.
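
A quick check with Python's standard ipaddress module confirms the reasoning above: the proposed Azure subnets must not overlap the on-premises 172.16.0.0/16 range, and Subnet1 must fit 25 VMs after Azure's 5 reserved addresses per subnet.

# Overlap and capacity check using only the Python standard library.
import ipaddress

on_prem = ipaddress.ip_network("172.16.0.0/16")
subnet1 = ipaddress.ip_network("192.168.0.0/24")
gateway_subnet = ipaddress.ip_network("192.168.1.0/27")

print(subnet1.overlaps(on_prem))          # False - safe to use for Subnet1
print(gateway_subnet.overlaps(on_prem))   # False - safe to use for the gateway subnet
print(ipaddress.ip_network("172.16.1.0/27").overlaps(on_prem))  # True - would conflict

print(subnet1.num_addresses - 5)          # 251 usable addresses, enough for 25 VMs
print(gateway_subnet.num_addresses - 5)   # 27 usable addresses for the VPN gateway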

68
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.

Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.

You need to enable single sign-on (SSO) for company users.

Solution: Install and configure an on-premises Active Directory Federation Services (AD FS) server with a trust established between the AD FS server and Azure AD.

Does the solution meet the goal?

Yes
No

A

The solution proposes using Active Directory Federation Services (AD FS) to achieve single sign-on (SSO) in a hybrid environment with on-premises Active Directory Domain Services (AD DS) and Azure Active Directory (Azure AD). Let’s break down why this solution works and meets the goal:

Goal: Users should be automatically signed in to cloud apps when on corporate desktops connected to the corporate network. This means seamless SSO without requiring users to explicitly enter their credentials again for each cloud application.

Solution: On-premises AD FS with Azure AD Trust: AD FS is a Microsoft technology designed to provide federated identity and access management. In this context, it acts as an identity provider (IdP) for Azure AD.

How AD FS enables SSO in this scenario:

Federation Trust: Establishing a federation trust between AD FS and Azure AD means that Azure AD trusts AD FS to authenticate users from the on-premises domain.

User Access Cloud App: When a user on a corporate desktop (domain-joined and on the corporate network) attempts to access a cloud application that relies on Azure AD for authentication:

The cloud application redirects the user to Azure AD for authentication.

Azure AD identifies that the user’s domain is federated with AD FS.

Azure AD redirects the authentication request to the on-premises AD FS server.

Integrated Windows Authentication (IWA): Because the user is on a domain-joined machine within the corporate network, AD FS can leverage Integrated Windows Authentication (IWA) using Kerberos or NTLM. This allows AD FS to authenticate the user silently in the background without prompting for credentials again, because the user is already logged into their Windows domain account.

Token Issuance: After successful authentication via IWA, AD FS issues a security token (typically SAML) to Azure AD.

Azure AD Trust and Authorization: Azure AD trusts the token issued by AD FS and uses it to grant the user access to the cloud application.

Why this meets the goal:

Automatic Sign-in: Users on corporate desktops within the network will experience automatic sign-in to cloud applications because AD FS can authenticate them silently using their existing domain credentials through IWA. They won’t be prompted to re-enter usernames and passwords for each cloud app, achieving seamless SSO.

Corporate Desktops and Network: The solution specifically works for users on corporate desktops connected to the corporate network, as IWA relies on domain membership and network connectivity to the AD FS server.

Established Azure AD Environment: The solution is designed to integrate with an existing Azure AD environment, leveraging federation to extend on-premises identity to the cloud.

Conclusion:

Installing and configuring an on-premises AD FS server with a trust to Azure AD is a valid and commonly used solution to achieve seamless single sign-on for users in a hybrid environment as described in the scenario. It allows users to automatically access cloud applications without re-entering credentials when they are on their corporate network and domain-joined devices.

Answer: Yes