test1 Flashcards

https://www.dumpsbase.com/freedumps/?s=az+304

1
Q

Your network contains an on-premises Active Directory domain.

The domain contains the Hyper-V clusters shown in the following table.

| Name | Number of nodes | Number of virtual machines running on cluster |
|---|---|---|
| Cluster1 | 4 | 20 |
| Cluster2 | 3 | 15 |

You plan to implement Azure Site Recovery to protect six virtual machines running on Cluster1 and three virtual machines running on Cluster2. Virtual machines are running on all Cluster1 and Cluster2 nodes.

You need to identify the minimum number of Azure Site Recovery Providers that must be installed on premises.

How many Providers should you identify?

1
7
9
16

A

Understanding Azure Site Recovery Providers:

The Azure Site Recovery (ASR) Provider is a software component that must be installed on each Hyper-V host that you want to protect with ASR.

The Provider communicates with the Azure Recovery Services Vault and facilitates replication and failover.

Requirements:

On-Premises Hyper-V: There are two Hyper-V clusters (Cluster1 and Cluster2).

Protection Scope: Six VMs from Cluster1 and three VMs from Cluster2 need to be protected by Azure Site Recovery.

Minimum Providers: Identify the minimum number of ASR Providers needed.

Analysis:

Cluster1: Has 4 nodes.

Cluster2: Has 3 nodes.

Provider per Host: One ASR Provider is needed on each Hyper-V host that will be replicated.

Protected VMs: Six VMs from Cluster1 and three from Cluster2 need protection.

VMs are running on all nodes: All VMs are running across all nodes, which means that we need an ASR Provider installed on all nodes.

Minimum Number of Providers:

Cluster1 requires a provider on each host: 4 providers

Cluster2 requires a provider on each host: 3 providers

Total: 4 + 3 = 7
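As a quick illustration, the minimum Provider count is simply the sum of the node counts of every cluster that hosts protected VMs (a plain-Python sketch; the dictionary mirrors the table above):

```python
# One ASR Provider per Hyper-V host (cluster node); VMs run on all nodes,
# so every node of both clusters needs the Provider.
cluster_nodes = {"Cluster1": 4, "Cluster2": 3}

min_providers = sum(cluster_nodes.values())
print(min_providers)  # 7
```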

Correct Answer:

7

Explanation:

You must install an Azure Site Recovery Provider on every Hyper-V host that contains virtual machines that you want to protect by using ASR. Because you need to protect VMs on all nodes in both clusters, you must install a Provider on every Hyper-V host. This means you must install 4 Providers on Cluster1 and 3 Providers on Cluster2, for a total of 7 Providers.

Why not others:

1: This is not enough, since there are 7 Hyper-V hosts in total.

9: Incorrect, because it does not match the total number of Hyper-V hosts.

16: Incorrect, because it does not match the total number of Hyper-V hosts.

Important Notes for the AZ-304 Exam:

Azure Site Recovery: Understand the architecture, requirements, and components of ASR.

ASR Provider: Know that the ASR Provider must be installed on each Hyper-V host to be protected.

Minimum Requirements: The exam often focuses on minimum requirements, not the total capacity or other metrics.

Hyper-V Integration: Understand how ASR integrates with Hyper-V for replication.

Exam Focus: Read the question carefully and identify the specific information related to required components.

2
Q

You need to recommend a strategy for the web tier of WebApp1. The solution must minimize costs.

What should you recommend?

Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours.
Configure the Scale Up settings for a web app.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold.
Configure the Scale Out settings for a web app.

A

Requirements:

Web Tier Scaling: A strategy for scaling the web tier of WebApp1.

Minimize Cost: The solution must focus on minimizing cost.

Recommended Solution:

Configure the Scale Out settings for a web app.

Explanation:

Configure the Scale Out settings for a web app:

Why it’s the best fit:

Cost Minimization: Web apps (App Services) have a pay-as-you-go model and scale out to add more instances when demand increases and automatically scale back in when the demand decreases. This is cost-effective because you only pay for what you use.

Automatic Scaling: You can configure automatic scaling based on different performance metrics (CPU, memory, or custom metrics), ensuring that you scale out and in based on load.

Managed Service: It is a fully managed service, so it minimizes operational overhead.
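To make the scale-out/scale-in behavior concrete, here is a minimal, self-contained Python model of one autoscale evaluation; the thresholds and instance limits are hypothetical, not values from the question:

```python
def autoscale_decision(cpu_percent: float, instances: int,
                       scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                       min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the new instance count for one autoscale evaluation."""
    if cpu_percent > scale_out_at and instances < max_instances:
        return instances + 1   # scale out: add an instance under load
    if cpu_percent < scale_in_at and instances > min_instances:
        return instances - 1   # scale in: remove an instance to save cost
    return instances           # within the band: no change

print(autoscale_decision(85.0, 2))  # 3 -> scale out under load
print(autoscale_decision(10.0, 3))  # 2 -> scale in, reducing cost
```

Azure App Service evaluates rules like this for you; no runbook or custom scheduler is needed.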

Why not others:

Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours: While this can help minimize cost, this is not ideal because VMs are still running all the time. Also, it is more complex to implement and manage.

Configure the Scale Up settings for a web app: Scale Up is more costly because you increase the compute resources of the existing instances.

Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold: While it is possible to deploy and scale with scale sets, this is more costly since VMs are billed per hour and are more complex to manage than web apps.

Important Notes for the AZ-304 Exam:

Azure App Service: Be very familiar with Azure App Service and its scaling capabilities.

Web App Scale Out: Know the different scaling options for web apps, and when to scale out versus scale up.

Automatic Scaling: Understand how to configure automatic scaling based on performance metrics.

Cost Optimization: The exam often emphasizes cost-effective solutions. Be aware of the pricing models for different Azure services.

PaaS vs. IaaS: Understand the benefits of using PaaS services over IaaS for cost optimization.

Exam Focus: Be sure to select the best service that meets the requirements and provides the most cost effective solution.

3
Q

You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.

You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1.

The solution must meet the following requirements:

  • To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
  • If the manager does not verify an access permission, automatically revoke that permission.
  • Minimize development effort.

What should you recommend?

In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
In Azure Active Directory (Azure AD), create an access review of Application1

A

Requirements:

External Developer Access: Fabrikam developers have RBAC permissions to an Azure application.

Access Verification: Need to verify if the Fabrikam developers still need access.

Monthly Email to Manager: Send a monthly email to the manager with access information.

Automatic Revocation: Revoke permissions if the manager does not approve.

Minimize Development: Minimize custom code development and use available services.

Recommended Solution:

In Azure Active Directory (Azure AD), create an access review of Application1

Explanation:

Azure AD Access Reviews:

Why it’s the best fit:

Automated Review: Azure AD Access Reviews provides a way to schedule recurring access reviews for groups, applications, or roles. It will automatically send notifications to the assigned reviewers (in this case, the manager).

Manager Review: You can configure the access review to have the manager review and approve or deny access for their developers.

Automatic Revocation: You can configure the access review to automatically remove access for users when they are not approved.

Minimal Development: Access reviews are a built-in feature of Azure AD that requires minimal configuration and no custom coding.
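For orientation, access review definitions can also be inspected programmatically through Microsoft Graph. A minimal sketch with the requests package, assuming you already hold a Graph access token with the appropriate permissions (token acquisition is omitted):

```python
import requests

# Hypothetical placeholder; obtain a real token via MSAL or azure-identity.
ACCESS_TOKEN = "<graph-access-token>"

# List the access review schedule definitions in the tenant.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
for definition in resp.json().get("value", []):
    print(definition["displayName"], definition["status"])
```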

Why not others:

In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: While PIM is great for managing and governing privileged roles, it is not designed for recurring, manager-driven reviews of application permissions, and it does not automatically revoke access that a manager fails to verify.

Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: While possible, this requires custom development and management. Azure Access Reviews provides the functionality natively, therefore this is not the optimal solution for the requirements.

Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet: Similar to the previous option, this is not the ideal solution since access reviews provides all of this functionality natively.

Important Notes for the AZ-304 Exam:

Azure AD Access Reviews: Be very familiar with Azure AD Access Reviews, and how they can be used to manage user access, and know the methods that you can use to perform them (for example, by a manager or by self review).

Access Management: Understand the importance of access reviews as part of an overall security strategy.

Access Reviews vs. PIM: Understand when to use PIM, and when to use Access Reviews.

Minimize Development: The exam often emphasizes solutions that minimize development effort.

Exam Focus: Select the simplest and most direct method to achieve the desired outcome.

4
Q

You have an Azure SQL database named DB1.

You need to recommend a data security solution for DB1. The solution must meet the following requirements:

  • When helpdesk supervisors query DB1, they must see the full number of each credit card.
  • When helpdesk operators query DB1, they must see only the last four digits of each credit card number.
  • A column named Credit Rating must never appear in plain text within the database system, and only client applications must be able to decrypt the Credit Rating column.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Helpdesk requirements:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)
Credit Rating requirement:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)

A

Requirements:

Helpdesk Supervisors: Must see full credit card numbers.

Helpdesk Operators: Must see only the last four digits of credit card numbers.

Credit Rating Column: The Credit Rating column must never appear in plain text within the database system and must be decrypted by the client applications.

Answer Area:

Helpdesk requirements:

Dynamic data masking

Credit Rating requirement:

Always Encrypted

Explanation:

Helpdesk requirements:

Dynamic data masking:

Why it’s correct: Dynamic data masking allows you to obfuscate sensitive data based on the user’s role. You can configure masking rules to show the full credit card numbers to supervisors and only the last four digits to the operators. The underlying data is not modified, and the masking is applied at the query output level.
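Purely as an illustration of what a partial masking rule produces for the operators (in Azure SQL this is configured as a masking rule on the column, not written as application code):

```python
def mask_credit_card(number: str, visible: int = 4) -> str:
    """Mimic a partial dynamic-data-masking rule: expose only the last N digits."""
    digits = [c for c in number if c.isdigit()]
    return "XXXX-XXXX-XXXX-" + "".join(digits[-visible:])

print(mask_credit_card("4111-1111-1111-1234"))  # XXXX-XXXX-XXXX-1234
```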

Why not others:

Always Encrypted: This encrypts the data, but doesn’t allow for different visibility of the data based on user roles.

Azure Advanced Threat Protection (ATP): This is for detecting malicious behavior, not for data masking.

Transparent Data Encryption (TDE): This encrypts data at rest, but does not apply specific policies based on user access or perform masking.

Credit Rating requirement:

Always Encrypted:

Why it’s correct: Always Encrypted ensures that sensitive data is always encrypted, both at rest and in transit. The encryption keys are stored and managed in the client application and are not accessible to database administrators. This satisfies the requirement that the column must never appear in plain text in the database system, and it is only decrypted in the client application.

Why not others:

Azure Advanced Threat Protection (ATP): It doesn’t encrypt or mask the data. It is meant for threat detection.

Dynamic data masking: Dynamic data masking only masks the data for specific users, but it does not encrypt the data.

Transparent Data Encryption (TDE): TDE encrypts data at rest, but it does not encrypt data in transit or protect against database administrators viewing the unencrypted data.

Important Notes for the AZ-304 Exam:

Always Encrypted: Understand what it does, how it encrypts data, where the encryption keys are managed, and the purpose of this approach for security.

Dynamic Data Masking: Know the purpose and configuration of dynamic data masking and how it helps control the data that users can see.

Transparent Data Encryption (TDE): Understand that TDE is used for encrypting data at rest, but it doesn’t protect data in transit, and does not provide different views of data.

Azure Advanced Threat Protection (ATP): Know that it is used for threat detection, not for masking or encrypting data.

Data Security: Be familiar with the different data security features in Azure SQL Database.

Exam Focus: You must be able to understand a complex scenario, and pick the different Azure components that meet each requirement.

5
Q

You have an Azure subscription.

Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely.

You plan to copy the files to Azure Storage.

You need to implement a storage solution for the files that meets the following requirements:

  • The files must be available within 24 hours of being requested.
  • Storage costs must be minimized.

Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.

A

Requirements:

Infrequent Access: The files are rarely accessed.

24-Hour Retrieval: Files must be available within 24 hours of a request.

Cost Minimization: Storage costs must be minimized.

5 TB: Size of data to be stored.

On-Premises Data: Data currently located on a file server.

Correct Solutions:

Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.

Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.

Explanation:

Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier:

Why it’s correct:

Archive Access Tier: Setting the files to the Archive tier will result in the lowest storage costs, and it guarantees that files can be available within 24 hours of requesting a rehydration.

General Purpose v2: This is the recommended storage account type for most scenarios.

Blob Container: This is the correct storage type to store a large amount of files in Azure.

Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier:

Why it’s correct:

Archive Access Tier: Setting the files to the Archive tier will result in the lowest storage costs, and it guarantees that files can be available within 24 hours of requesting a rehydration.
Azure Blob Storage: This storage account type is optimized for blob storage.

Blob Container: This is the correct storage type to store a large amount of files in Azure.
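A minimal sketch of archiving a file with the azure-storage-blob package; the connection string, container, and blob names are hypothetical:

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical connection string for the target storage account.
service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="companyfiles", blob="report-2020.docx")

with open("report-2020.docx", "rb") as data:
    blob.upload_blob(data, overwrite=True)

# Move the blob to the cheapest tier; rehydration back to Hot or Cool can
# take hours, which is acceptable given the 24-hour retrieval requirement.
blob.set_standard_blob_tier("Archive")

# Later, when the file is requested, start rehydration:
# blob.set_standard_blob_tier("Cool")
```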

Why not others:

Create a general-purpose v2 storage account that is configured for the Hot default access tier: The Hot tier is the most expensive, and does not meet the requirements for minimizing costs.

Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container: General-purpose v1 storage accounts are an older offering and are not recommended. In addition, the files would need to be set to the Archive tier to meet the cost requirement.

Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share: File shares are not cost-effective for large amounts of archival data and do not support the Archive tier.

Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share: File shares are not cost-effective for large amounts of archival data and do not support the Archive tier.

Important Notes for the AZ-304 Exam:

Azure Storage Access Tiers: Be very familiar with the different access tiers: Hot, Cool, and Archive. Know their use cases, costs, and retrieval time implications.

Storage Account Types: Understand the differences between general-purpose v1, v2, and blob storage accounts, and when to use each.

Blob Storage: Know how to store data in blob storage using containers.

File Shares: Understand how Azure file shares are used. They are not designed for storing large amounts of data for archival.

Cost Minimization: The exam often emphasizes cost-effective solutions. Know the pricing implications of different Azure services and tiers.

Exam Focus: Be sure to read the full requirement to choose the correct service and tier combination.

6
Q

HOTSPOT

You have an existing implementation of Microsoft SQL Server Integration Services (SSIS) packages stored in an SSISDB catalog on your on-premises network. The on-premises network does not have hybrid connectivity to Azure by using Site-to-Site VPN or ExpressRoute.

You want to migrate the packages to Azure Data Factory.

You need to recommend a solution that facilitates the migration while minimizing changes to the existing packages. The solution must minimize costs.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Store the SSISDB catalog by using:
Azure SQL Database
Azure Synapse Analytics
SQL Server on an Azure virtual machine
SQL Server on an on-premises computer
Implement a runtime engine for
package execution by using:
Self-hosted integration runtime only
Azure-SQL Server Integration Services Integration Runtime (IR) only
Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime

A

Requirements:

Existing SSIS Packages: The packages are stored in an SSISDB catalog on-premises.

Migrate to ADF: The migration target is Azure Data Factory.

Minimize Changes: The solution should minimize changes to the existing SSIS packages.

Minimize Costs: The solution should be cost-effective.

No connectivity: There is no hybrid connectivity from the on-premises environment to Azure.

Answer Area:

Store the SSISDB catalog by using:

Azure SQL Database

Implement a runtime engine for package execution by using:

Azure-SQL Server Integration Services Integration Runtime (IR) only

Explanation:

Store the SSISDB catalog by using:

Azure SQL Database:

Why it’s correct: To migrate SSIS packages to Azure Data Factory, the SSISDB catalog needs to be stored in Azure. Azure SQL Database is the recommended and supported method of storing the SSISDB catalog when you are using the Azure SSIS Integration Runtime in ADF.

Why not others:

Azure Synapse Analytics: While Synapse Analytics also supports SQL functionality, it is not the recommended platform to host the SSISDB.
SQL Server on an Azure virtual machine: While SQL Server on a VM would work, it is an IaaS solution, which requires additional management overhead and is not as cost-effective as using the PaaS Azure SQL Database.

SQL Server on an on-premises computer: The SSISDB must be in Azure to be used by the Azure-SSIS Integration Runtime.

Implement a runtime engine for package execution by using:

Azure-SQL Server Integration Services Integration Runtime (IR) only:

Why it’s correct: An Azure-SSIS Integration Runtime is a fully managed service for executing SSIS packages in Azure. Because there is no hybrid network connectivity, you must use the Azure-SSIS IR instead of a self-hosted IR. The Azure-SSIS IR is the only way to run the migrated SSIS packages in Azure.

Why not others:

Self-hosted integration runtime only: A self-hosted integration runtime requires hybrid network connectivity to Azure in order to work. Because there is no VPN or ExpressRoute connection, this is not an option.

Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime: The self-hosted integration runtime is not necessary in this scenario because there is no need to connect to an on-premises resource.

Important Notes for the AZ-304 Exam:

Azure Data Factory: Be very familiar with ADF, its core concepts, and how to execute SSIS packages.

Azure SSIS IR: Know the purpose of an Azure SSIS Integration Runtime and how to set it up. Understand that it is used when running SSIS packages in Azure.

SSISDB in Azure: Understand how the SSISDB catalog is managed and stored in Azure when migrating from an on-prem environment.

Self-Hosted IR: Understand when the self-hosted IR is required and why it is not the appropriate answer for this specific scenario.

Hybrid Connectivity: Understand how hybrid connectivity affects the choice of integration runtime.

Cost Minimization: Know how to minimize costs by choosing the appropriate services (PaaS over IaaS).

Exam Focus: The exam emphasizes choosing the most appropriate solution while minimizing effort and cost.

7
Q

You use Azure virtual machines to run a custom application that uses an Azure SQL database on the back end.

The IT department at your company recently enabled forced tunneling. Since the configuration change, developers have noticed degraded performance when they access the database.

You need to recommend a solution to minimize latency when accessing the database. The solution must minimize costs.

What should you include in the recommendation?

Azure SQL Database Managed instance
Azure virtual machines that run Microsoft SQL Server servers
Always On availability groups
virtual network (VNET) service endpoint

A

Understanding Forced Tunneling:

Forced tunneling in Azure redirects Internet-bound traffic from a subnet back through the on-premises network (over a VPN or ExpressRoute tunnel) or through a network virtual appliance for inspection. This can increase latency, because traffic to Azure services is routed through the forced tunnel instead of going directly over the Azure backbone.

Requirements:

Azure SQL Database: Custom app on Azure VMs uses an Azure SQL database.

Forced Tunneling: Forced tunneling is enabled, causing performance degradation.

Minimize Latency: Minimize the latency when accessing the database.

Minimize Costs: The solution should be cost-effective.

Recommended Solution:

virtual network (VNET) service endpoint

Explanation:

Virtual Network Service Endpoints:

Why it’s the best fit: VNet service endpoints extend your virtual network's identity to supported Azure services and carry traffic to those services directly over the Azure backbone network. By enabling a service endpoint for Azure SQL Database on the subnet that hosts the VMs, traffic from the VMs to the database bypasses the forced tunnel. This significantly reduces latency, and service endpoints incur no additional charge, which keeps costs down.
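A sketch of enabling the service endpoint on the VMs' subnet, assuming the azure-identity and azure-mgmt-network packages; all resource names and the address prefix are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical subscription and resource names.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Enable a Microsoft.Sql service endpoint on the VMs' subnet so that
# traffic to Azure SQL Database bypasses the forced tunnel.
poller = client.subnets.begin_create_or_update(
    resource_group_name="rg-app",
    virtual_network_name="vnet-app",
    subnet_name="subnet-vms",
    subnet_parameters={
        "address_prefix": "10.0.1.0/24",
        "service_endpoints": [{"service": "Microsoft.Sql"}],
    },
)
print(poller.result().name)
```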

Why not others:

Azure SQL Database Managed Instance: While Managed Instance is a good choice for many SQL scenarios, it is not the ideal solution for this problem. It does not help with the forced tunneling, and it also does not minimize cost since it is a more expensive offering.

Azure virtual machines that run Microsoft SQL Server servers: Moving the database to a VM in IaaS will not fix the problem. It will not address the latency issues created by the forced tunneling.

Always On availability groups: This helps with HA and DR, but it does not help with the latency issues caused by the forced tunneling. Also, it would add significant costs to the deployment.

Important Notes for the AZ-304 Exam:

Virtual Network Service Endpoints: Understand the benefits of using service endpoints.

Forced Tunneling: Know what forced tunneling is and how it can impact traffic flow.

Cost Minimization: Know the different ways to minimize costs when architecting a solution.

Network Performance: Understand the different ways to diagnose and improve performance when dealing with Azure network configurations.

Azure SQL: Know the different deployment options for Azure SQL.

Exam Focus: The exam will often require you to select the most appropriate solution that meets all of the requirements.

8
Q

You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant. The subscription contains 10 resource groups, one for each department at your company.

Each department has a specific spending limit for its Azure resources.

You need to ensure that when a department reaches its spending limit, the compute resources of the department shut down automatically.

Which two features should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Azure Logic Apps
Azure Monitor alerts
the spending limit of an Azure account
Cost Management budgets
Azure Log Analytics alerts

A

Requirements:

Departmental Limits: Each department has a specific spending limit for its Azure resources.

Resource Shutdown: Compute resources must shut down automatically when the spending limit is reached.

Correct Features:

Cost Management budgets

Azure Logic Apps

Explanation:

Cost Management budgets:

Why it’s correct: Cost Management budgets allow you to define a spending limit for a specific scope (resource group, subscription, or management group). When actual spend reaches a budget threshold, you can trigger alerts and actions. Budgets are the mechanism for monitoring and alerting on cost.

Why not by itself: A Cost Management budget cannot automatically stop resources; it is a monitoring and alerting mechanism and needs another service in order to take action.

Azure Logic Apps:

Why it’s correct: Azure Logic Apps can be triggered by a budget alert. In the logic app, you can add actions that automatically shut down the compute resources. For example, you can use the Azure Resource Management connector to stop virtual machines.

Why not by itself: A Logic App requires a trigger to start; therefore, a budget alert must be configured.
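A sketch of the shutdown action that the Logic App could trigger, assuming the azure-identity and azure-mgmt-compute packages; the subscription ID and resource group name are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Hypothetical subscription and department resource group.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deallocate every VM in the department's resource group so that
# compute charges stop once the budget alert fires.
for vm in client.virtual_machines.list("rg-finance"):
    client.virtual_machines.begin_deallocate("rg-finance", vm.name).result()
    print(f"Deallocated {vm.name}")
```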

Why not others:

Azure Monitor alerts: Azure Monitor alerts fire on resource metrics and logs, not on accumulated spend; cost thresholds are evaluated by Cost Management budgets.

the spending limit of an Azure account: While the Azure Account might have a total spending limit, this does not allow for the control on resource groups, or the automation of stopping resources.

Azure Log Analytics alerts: Log Analytics is a great way to analyze logs, but does not work with cost alerts.

Important Notes for the AZ-304 Exam:

Cost Management Budgets: Be very familiar with Cost Management budgets and how they can be used to control spending, and know that they are the mechanism that you should use for cost alerts.

Azure Logic Apps: Know how to use Logic Apps to automate actions based on triggers, and how they integrate with Azure Management connectors.

Automated Actions: Understand that Logic Apps can be triggered by alerts and can be used to perform actions, such as shutting down resources.

Cost Control: Be familiar with the best practices for cost control and optimization in Azure.

Alerts: Know the difference between cost alerts and metrics alerts.

Exam Focus: Carefully read the requirement. You must know which services do what function. You need to know that you need a budget to alert when the spend is reached, and that you need Logic apps to automate an action when the alert is triggered.

9
Q

HOTSPOT

You configure OAuth2 authorization in API Management as shown in the exhibit.

Add OAuth2 service

Display name: (Empty field)
Id: (Empty field)
Description: (Empty field)
Client registration page URL: https://contoso.com/register
Authorization grant types:

Authorization code: Enabled

Implicit: Disabled

Resource owner password: Disabled

Client credentials: Disabled

Authorization endpoint URL: https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize

Support state parameter: Disabled

Authorization Request method

GET: Enabled
POST: Disabled
Token endpoint URL: (Empty field)

Additional body parameters: (Empty field)

Button: Create

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
The selected authorization grant type is for
Background services
Headless device authentication
Single page applications
Web applications
To enable custom data in the grant flow, select
Client credentials
Implicit
Resource owner password
Support state parameter

A

OAuth2 Configuration Summary:

Authorization Grant Types: The configuration shows the “Authorization code” grant type as the only one enabled.

Authorization Endpoint URL: This is set to Microsoft’s OAuth2 authorization endpoint for the contoso.onmicrosoft.com tenant.

Other Settings: Various other settings related to authorization and token endpoints are displayed.

Answer Area:

The selected authorization grant type is for:

Web applications

To enable custom data in the grant flow, select

Support state parameter

Explanation:

The selected authorization grant type is for:

Web applications:

Why it’s correct: The authorization code grant type is the most secure and recommended method to obtain access tokens for web applications. In this flow the client (web app) first gets an authorization code from the authorization server, and then uses it to obtain an access token.

Why not others:

Background services: Background services (also known as daemon apps) typically use the client credentials flow, which is not enabled in this configuration.

Headless device authentication: Headless devices often use the device code flow, which is not a grant type present here.

Single-page applications: Single-page applications (SPAs) can use the authorization code flow, but often use the implicit grant type, which is disabled in this configuration.

To enable custom data in the grant flow, select:

Support state parameter:

Why it’s correct: The “Support state parameter” setting enables passing an opaque value in the authorization request; the authorization server returns that value unchanged along with the authorization code. This can be used to carry custom data through the authorization flow, and it also helps protect against CSRF.
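A small illustration of where the state value travels in an authorization-code request. The endpoint is taken from the exhibit; the client ID, redirect URI, and state value are hypothetical:

```python
from urllib.parse import urlencode

params = {
    "client_id": "<app-client-id>",
    "response_type": "code",                 # authorization code grant
    "redirect_uri": "https://contoso.com/callback",
    "scope": "openid",
    "state": "order=1234;csrf=af0ifjsldkj",  # opaque custom data, echoed back
}
url = ("https://login.microsoftonline.com/contoso.onmicrosoft.com"
       "/oauth2/v2.0/authorize?" + urlencode(params))
print(url)
# The server later redirects to redirect_uri with ?code=...&state=order%3D1234...
```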

Why not others:

Client credentials: This is for service-to-service authentication without a user present.

Implicit: This is an older, less secure grant type for single-page applications. It does not enable passing custom data.

Resource owner password: This is a less secure grant type that should be avoided in most scenarios. It also does not enable passing custom data.

Important Notes for the AZ-304 Exam:

OAuth 2.0 Grant Types: Be very familiar with the different OAuth 2.0 grant types:

Authorization Code

Implicit

Client Credentials

Resource Owner Password

Device Code

API Management OAuth2 Settings: Understand how to configure OAuth 2.0 settings in Azure API Management.

“State” Parameter: Know the importance of the “state” parameter in OAuth flows and how it helps prevent CSRF attacks. Understand how this can be used to pass custom data.

API Security: Know how to properly secure APIs with OAuth 2.0.

Exam Focus: Be sure to select the answer based on a close inspection of the provided details.

10
Q

You are designing an order processing system in Azure that will contain the Azure resources shown in the following table.

| Name | Type | Purpose |
|---|---|---|
| App1 | Web app | Processes customer orders |
| Function1 | Function | Check product availability at vendor 1 |
| Function2 | Function | Check product availability at vendor 2 |
| storage1 | Storage account | Stores order processing logs |

The order processing system will have the following transaction flow:

• A customer will place an order by using App1.

• When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.

• An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.

• Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.

• All the steps of the transaction will be logged to storage1.

Which type of resource should you recommend for the integration component?

an Azure Data Factory pipeline
an Azure Service Bus queue
an Azure Event Grid domain
an Azure Event Hubs capture


A

Requirements:

Message Processing: A component is needed to process messages generated by App1.

Conditional Triggering: The component must trigger either Function1 or Function2 based on the order type.

Logging: All steps of the transaction must be logged in storage1.

Recommended Resource:

an Azure Service Bus queue

Explanation:

Azure Service Bus queue:

Why it’s the best fit:

Message Broker: Service Bus is a reliable message broker that can decouple components in your system, and provide a way for them to communicate asynchronously.

Message Routing and Filtering: Service Bus queues and topics provide mechanisms for message routing and filtering. You can configure the service bus to send messages from App1 to different queues or topics, and then have Function1 and Function2 subscribe to those queues, based on the different order types.

Reliable Messaging: Service Bus ensures reliable message delivery, even if a function fails.

Logging: By integrating the queue with Logic Apps, you can add steps in order to log the activity in storage1.
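A minimal sketch of App1's side of this pattern with the azure-servicebus package; the connection string and queue names are hypothetical, and a Service Bus topic with filtered subscriptions would work equally well:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

ORDER = {"order_id": 42, "order_type": "vendor1"}

# Route the availability check to the queue that the matching function listens on.
queue_name = "vendor1-orders" if ORDER["order_type"] == "vendor1" else "vendor2-orders"

with ServiceBusClient.from_connection_string("<service-bus-connection-string>") as client:
    with client.get_queue_sender(queue_name) as sender:
        sender.send_messages(ServiceBusMessage(f'{{"order_id": {ORDER["order_id"]}}}'))
```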

Why not others:

an Azure Data Factory pipeline: Data Factory is for data integration, ETL, and data transformations, not suitable for processing and routing messages. Also, it does not integrate well with functions.

an Azure Event Grid domain: Event Grid is designed for reactive, event-based systems, not for processing sequential workflows. It also does not provide queued, guaranteed delivery in the same way as Service Bus.

an Azure Event Hubs capture: Event Hubs is built for high-throughput ingestion of event streams, and Event Hubs Capture merely archives those streams to storage. It is not suited to routing individual messages to functions with guaranteed delivery.

Important Notes for the AZ-304 Exam:

Azure Service Bus: Be very familiar with Service Bus, including queues and topics, the different delivery guarantees, and its use cases for reliable message queuing and routing.

Message Brokers: Understand the purpose of message brokers, decoupling systems, and asynchronous processing.

Azure Functions Integration: Know how Azure Functions can be triggered by messages from Service Bus queues or topics.

Event-Driven Architectures: Understand the difference between messaging and event-driven architectures.

Data Integration: Know the use cases for Azure Data Factory.

Exam Focus: Carefully consider the specific requirements of the problem and select the component that best fits those requirements.

11
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.

Does this meet the goal?

Yes
No

A

Goal:

Deploy Azure App Service instances and Azure SQL databases simultaneously.

App Service instances must be deployed only to specific Azure regions.

Resources for the App Service instances must reside in the same region.

Proposed Solution:

Create resource groups based on locations.

Implement resource locks on the resource groups.

Analysis:

Resource Groups Based on Location:

Creating resource groups based on locations is a good practice for organizing resources in Azure. It makes it easier to manage resources and ensures that all the resources that belong to a specific geographic region are grouped together. This is an important step in reaching the goal.

Resource Locks

Resource locks, however, only prevent accidental deletion or modification of resource groups and the resources within them. They do not enforce which resources are deployed or where they are deployed, meaning that a user could still deploy a resource outside of the required region.

Does It Meet the Goal?: No

Explanation:

Resource Groups by Location (Partial Fulfillment): Creating resource groups by location does help with organizing resources and ensures they’re deployed in the same region, meeting part of the requirement of keeping all resources in the same location.

Resource Locks - These will not solve for the region requirement, because you can still create a resource in any region.

Missing Enforcement: The solution lacks any mechanism to enforce that the resources are only deployed in the correct Azure regions. This is a regulatory requirement, so a simple organization of resource groups is not enough.

No Region Enforcement: Resource locks prevent accidental deletion or modification of resources, but they do not restrict resource deployments to specific regions.
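For contrast, the kind of enforcement the proposed solution lacks is an Azure Policy assignment. The rule at the heart of the built-in "Allowed locations" policy looks roughly like the following, shown here as a Python dict; the region list is hypothetical:

```python
# Sketch of an "allowed locations" policy rule: deny any deployment whose
# location is not in the approved list. In practice you would assign the
# built-in "Allowed locations" policy definition at the subscription scope.
allowed_locations_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": ["eastus", "westeurope"],  # hypothetical approved regions
        }
    },
    "then": {"effect": "deny"},
}
```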

Correct Answer:

No

Important Notes for the AZ-304 Exam:

Resource Groups: Understand the purpose and use of resource groups.

Resource Locks: Know the purpose and limitations of resource locks.

Regulatory Requirements: Recognize that solutions must enforce compliance requirements. This is a key element of many questions.

Enforcement Mechanisms: Look for mechanisms that enforce policies instead of simply organizing resources.

Exam Focus: Read the proposed solution and verify if it truly meets the goal. If any part of the solution does not achieve the goal, then the answer is “No”.

12
Q

You need to recommend a data storage solution that meets the following requirements:

  • Ensures that applications can access the data by using a REST connection
  • Hosts 20 independent tables of varying sizes and usage patterns
  • Automatically replicates the data to a second Azure region
  • Minimizes costs

What should you recommend?

an Azure SQL Database that uses active geo-replication
tables in an Azure Storage account that use geo-redundant storage (GRS)
tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
an Azure SQL Database elastic database pool that uses active geo-replication

A

Requirements:

REST API Access: The data must be accessible through a REST interface.

Independent Tables: The solution must support 20 independent tables of different sizes and usage patterns.

Automatic Geo-Replication: The data must be automatically replicated to a secondary Azure region.

Minimize Costs: The solution should be cost-effective.

Recommended Solution:

Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)

Explanation:

Azure Storage Account with RA-GRS Tables:

REST Access: Azure Storage tables are directly accessible using a REST API, which is a fundamental part of their design.

Independent Tables: A single Azure Storage account can hold many independent tables, meeting the 20-table requirement.

Automatic Geo-Replication (RA-GRS): RA-GRS ensures that the data is replicated to a secondary region, and provides read access to that secondary location. This satisfies the HA and geo-redundancy requirements.

Minimize Cost: Azure Storage tables are designed to handle different patterns and are cost effective compared to SQL options.
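For reference, a short sketch of the two REST endpoints an RA-GRS storage account exposes for its tables; the account name is hypothetical and authentication is omitted:

```python
account = "contosostorage"  # hypothetical storage account name

# Primary endpoint: read/write table operations.
primary = f"https://{account}.table.core.windows.net/"

# RA-GRS adds a read-only secondary endpoint in the paired region,
# addressed with the "-secondary" suffix.
secondary = f"https://{account}-secondary.table.core.windows.net/"

print(primary, secondary, sep="\n")
```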

Why not others:

Azure SQL Database with active geo-replication: While it provides strong SQL capabilities and geo-replication, a SQL database is more costly for simple table storage and has higher operational overhead. Azure SQL Database also does not expose a REST interface for data access; clients connect by using SQL.

Azure SQL Database elastic database pool with active geo-replication: Same reasons as above, but with the added complication of an elastic pool, which is unnecessary for the stated requirements and would add even more costs.

Tables in an Azure Storage account that use geo-redundant storage (GRS): This would meet the geo-replication requirements but it would not provide the ability to read from the secondary location, and so is not as good a choice as RA-GRS.

Important Notes for the AZ-304 Exam:

Azure Storage Tables: Know what they are designed for and their features (scalability, cost-effectiveness, REST API access). Be able to explain where they are appropriate.

Geo-Redundancy: Understand the differences between GRS, RA-GRS and how they impact performance, availability and cost.

Cost-Effective Solutions: The exam often asks for the most cost-effective solution. Be aware of the pricing models of different Azure services.

SQL Database Use Cases: Understand when to use SQL DBs and when other options (like Table storage) are more appropriate. SQL DBs are better suited for complex queries, transactions, and relational data models.

REST API Access: Know which Azure services offer a REST interface for data access and when it might be required.

Exam Technique: Ensure you fully read the requirements, so you don’t pick a more expensive or complex solution than is needed.

13
Q

HOTSPOT

Your company has two on-premises sites in New York and Los Angeles and Azure virtual networks in the East US Azure region and the West US Azure region. Each on-premises site has Azure ExpressRoute circuits to both regions.

You need to recommend a solution that meets the following requirements:

• Outbound traffic to the Internet from workloads hosted on the virtual networks must be routed through the closest available on-premises site.

• If an on-premises site fails, traffic from the workloads on the virtual networks to the Internet must reroute automatically to the other site.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Routing from the virtual networks to
the on-premises locations must be
configured by using:
Azure default routes
Border Gateway Protocol (BGP)
User-defined routes
The automatic routing configuration
following a failover must be
handled by using:
Border Gateway Protocol (BGP)
Hot Standby Routing Protocol (HSRP)
Virtual Router Redundancy Protocol (VRRP)

A

Correct Answers and Why

Routing from the virtual networks to the on-premises locations must be configured by using:

Border Gateway Protocol (BGP)

Why?

ExpressRoute Standard: ExpressRoute relies on BGP for exchanging routes between your on-premises networks and Azure virtual networks. It’s the fundamental routing protocol for this type of connectivity.

Dynamic Routing: BGP allows for dynamic route learning, meaning routes are automatically adjusted based on network changes (like a site going down). This is essential for the failover requirement.

Path Selection: BGP allows for attributes like Local Preference to choose the best path. The path to the nearest on-prem location can be preferred by setting a higher local preference.
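To make the path-selection idea concrete, here is a tiny Python model of local-preference-based route selection with automatic failover when a route is withdrawn; all values are hypothetical:

```python
# Candidate BGP routes to the Internet via the two on-premises sites.
# Higher local_pref wins; withdrawing a route removes it from the table.
routes = [
    {"next_hop": "NewYork",    "local_pref": 200},  # preferred (closest) site
    {"next_hop": "LosAngeles", "local_pref": 100},  # backup site
]

def best_path(table):
    return max(table, key=lambda r: r["local_pref"])["next_hop"] if table else None

print(best_path(routes))  # NewYork

# Site failure: the New York routes are withdrawn, and traffic
# automatically fails over to the remaining site.
routes = [r for r in routes if r["next_hop"] != "NewYork"]
print(best_path(routes))  # LosAngeles
```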

Why Not the Others?

Azure Default Routes: These routes are for basic internal Azure connectivity and internet access within Azure. They don’t handle routing to on-premises networks over ExpressRoute.

User-defined routes (UDRs): While UDRs can force traffic through a specific path, they do not provide dynamic failover without manual intervention and are therefore unsuitable in this scenario.

The automatic routing configuration following a failover must be handled by using:

Border Gateway Protocol (BGP)

Why?

BGP Convergence: BGP’s inherent nature is to dynamically adapt to network changes. If an on-premises site or an ExpressRoute path becomes unavailable, BGP automatically detects this and withdraws routes from the failed path.

Automatic Rerouting: BGP then advertises the available paths, leading to the rerouting of traffic through the remaining healthy site, achieving the automatic failover requirement.

Why Not the Others?

Hot Standby Routing Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP): These protocols provide first-hop redundancy on local networks, which is not applicable to Azure environments or to ExpressRoute configurations. They do not provide the end-to-end routing and failover required.

Important Notes for the AZ-304 Exam

ExpressRoute Routing is BGP-Based: Understand that BGP is the routing protocol for ExpressRoute. If a question involves routing over ExpressRoute, BGP is highly likely to be involved.

BGP for Dynamic Routing and Failover: Know that BGP not only provides routing but also provides failover capabilities through its dynamic path selection and convergence features.

Local Preference: Understand how BGP attributes like Local Preference can be used to influence path selection. This is key for scenarios where you want to force a primary path and have a secondary backup path.

Azure Networking Core Concepts: You should have a solid understanding of:

Virtual Networks: How they’re used, subnetting, IP addressing.

Route Tables: Both default and User-Defined, and how they control traffic routing.

ExpressRoute: The different connection options and associated routing implications.

Dynamic vs. Static Routing: Know the difference between dynamic routing (BGP) and static routing (User Defined Routes) and where they are best suited.

Hybrid Networking: Be prepared to deal with hybrid scenarios that connect on-premises and Azure resources.

Failover: Be aware of the failover options and be able to choose the best solutions for different circumstances. BGP is the most common solution for failover between on-prem and Azure.

HSRP and VRRP Applicability: These are first hop redundancy protocols used locally and are not suitable for Azure cloud environments. They should not be suggested for Azure routing scenarios.

14
Q

You have an Azure subscription. The subscription contains an app that is hosted in the East US, Central Europe, and East Asia regions. You need to recommend a data-tier solution for the app.

The solution must meet the following requirements:

  • Support multiple consistency levels.
  • Be able to store at least 1 TB of data.
  • Be able to perform read and write operations in the Azure region that is local to the app instance.

What should you include in the recommendation?

a Microsoft SQL Server Always On availability group on Azure virtual machines
an Azure Cosmos DB database
an Azure SQL database in an elastic pool
Azure Table storage that uses geo-redundant storage (GRS) replication

A

Understanding the Requirements

Global Distribution: The application is deployed in multiple regions (East US, Central Europe, East Asia), meaning the data layer also needs to be globally accessible.

Multiple Consistency Levels: The solution must support different levels of data consistency (e.g., strong, eventual).

Scalability: It needs to store at least 1 TB of data.

Local Read/Write: Each application instance should be able to perform read and write operations in its local region for performance.

Evaluating the Options

a) Microsoft SQL Server Always On Availability Group on Azure Virtual Machines:

Pros:

Offers strong consistency.

Can store large amounts of data (1 TB+).

Cons:

Complex to manage: Requires setting up and maintaining virtual machines, clustering, and replication manually.

Not designed for low-latency multi-regional access: While you can do replication, it’s typically not optimized for providing very low-latency access to every region at the same time.

Does not inherently offer multiple consistency levels.

Verdict: Not the best fit. It’s too complex and doesn’t easily meet the multi-region, multiple consistency requirement.

b) An Azure Cosmos DB database:

Pros:

Globally Distributed: Designed for multi-region deployments and provides low-latency reads/writes in local regions.

Multiple Consistency Levels: Supports various consistency levels, from strong to eventual, that can be set per request.

Scalable: Can easily store 1 TB+ of data and scale as needed.

Fully Managed: Much easier to manage than SQL Server on VMs.

Cons:

Has a different data-modeling and database-design approach than relational solutions.

Verdict: Excellent fit. It directly addresses all the requirements.

c) An Azure SQL Database in an elastic pool:

Pros:

Scalable in terms of performance and resources.

Familiar relational database platform.

Cons:

Not inherently multi-regional: While you can do active geo-replication, it has limitations with low-latency reads from remote regions.

Limited consistency options: Primarily provides strong consistency, not multiple levels.

Not as horizontally scalable: It’s designed for relational data, not the more flexible scalability needed for a globally distributed app.

Does not provide local read/write in each region.

Verdict: Not the best choice. It doesn’t meet the multi-region low-latency and consistency requirements.

d) Azure Table storage that uses geo-redundant storage (GRS) replication:

Pros:

Highly scalable.

Relatively inexpensive.

GRS provides data replication.

Cons:

No multi-master writes: No local read/write in each region. Reads can come from a different location.

Limited consistency: Primarily eventual consistency, not the range required by the problem statement.

No SQL: Designed for non-relational data storage only.

Verdict: Not suitable. Lacks multiple consistency options, multi-master writes, and suitable performance for low latency reads.

Recommendation

Based on the analysis, the best solution is:

An Azure Cosmos DB database

Explanation

Azure Cosmos DB is purpose-built for globally distributed applications. It offers:

Global Distribution and Low Latency: Data can be replicated to multiple Azure regions, allowing applications to read and write data in their local region with low latency.

Multiple Consistency Levels: You can fine-tune the consistency level per request. Options range from strong consistency (data is guaranteed to be the same everywhere) to eventual consistency (data will eventually be consistent across regions).

Scalability: Cosmos DB can easily store 1 TB+ of data and automatically scales to handle increased traffic.

Ease of Management: As a fully managed service, it reduces operational overhead.
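A minimal sketch with the azure-cosmos package, assuming an account already replicated to the three regions and an existing database and container; the endpoint, key, and names are hypothetical, and treating consistency_level as a per-client override is an assumption about the SDK:

```python
from azure.cosmos import CosmosClient

# Hypothetical endpoint and key; the account is assumed to be replicated
# to all three regions, with a database "ordersdb" and container "orders".
client = CosmosClient(
    "https://contoso-orders.documents.azure.com:443/",
    credential="<account-key>",
    consistency_level="Session",  # request a weaker level than the account default
)

container = client.get_database_client("ordersdb").get_container_client("orders")

# Writes and reads are served from the region local to this app instance.
container.upsert_item({"id": "1", "pk": "eu", "status": "placed"})
```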

15
Q

Your company purchases an app named App1.

You plan to run App1 on seven Azure virtual machines in an Availability Set. The number of fault domains is set to 3. The number of update domains is set to 20.

You need to identify how many App1 instances will remain available during a period of planned maintenance.

How many App1 instances should you identify?

1
2
6
7

A

Understanding Availability Sets

Purpose: Availability Sets are used to protect your applications from planned and unplanned downtime within an Azure datacenter.

Fault Domains (FDs): Fault Domains define groups of virtual machines that share a common power source and network switch. In the event of a power or switch failure, VMs in different FDs will be affected independently of each other.

Update Domains (UDs): Update Domains define groups of virtual machines that can be rebooted simultaneously during an Azure maintenance window. Azure applies planned maintenance to UDs one at a time.

The Key Rule

During planned maintenance, Azure updates VMs within a single Update Domain at a time. Azure moves to the next UD only after completing an update to the current UD. This means that while an update is being done on one UD, the other UDs are not affected.

Analyzing the Scenario

7 VMs in total

3 Fault Domains: This is important for unplanned maintenance, but doesn’t directly impact our answer here.

20 Update Domains: This is the important factor for planned maintenance.

Twenty update domains does not mean there are 20 populated UDs in the set; it means up to 20 UDs can be used. The 7 VMs will therefore each be placed in their own UD, using 7 of the 20 possible UDs.

Calculating Availability During Planned Maintenance

VMs per Update Domain: Since there are 7 VMs and up to 20 UDs, each virtual machine is placed in its own update domain.

Impact of Maintenance: During a planned maintenance event, Azure will update one UD at a time. Therefore during maintenance one of those 7 VMs will be unavailable while the update is applied.

Available VMs: That means that at any given time when maintenance is applied to one single UD, the remaining VMs in the other UDs will remain available. In this case 7-1=6 VMs.
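A small worked calculation of the minimum instances that remain available during planned maintenance (illustrative Python; the ceiling covers the general case where VMs outnumber update domains):

```python
from math import ceil

def min_available_during_planned_maintenance(vms: int, update_domains: int) -> int:
    used_uds = min(vms, update_domains)  # VMs spread across at most this many UDs
    largest_ud = ceil(vms / used_uds)    # worst case: the UD being updated is the fullest
    return vms - largest_ud

print(min_available_during_planned_maintenance(7, 20))  # 6
```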

Correct Answer

6

Important Notes for the AZ-304 Exam

Availability Sets vs. Virtual Machine Scale Sets: Know the difference. Availability Sets provide fault tolerance for individual VMs, while Scale Sets provide scalability and resilience for groups of identical VMs (often used for autoscaling). This question specifically used an availability set.

Fault Domains (FDs) vs. Update Domains (UDs): Be clear on the purpose of each. FDs for unplanned maintenance, UDs for planned maintenance.

Impact of UDs on Planned Maintenance: During planned maintenance, only one UD is updated at a time, ensuring that your application can remain available.

Distribution of VMs: In an availability set, Azure evenly distributes VMs across FDs and UDs.

Maximum FDs and UDs: Understand that the maximum number of FDs is 3 and UDs are 20 in Availability Sets.

Real-World Scenario: Be aware that real production workloads can have other availability and redundancy concerns and that more advanced redundancy can be achieved by using multiple availability sets in the same region or a combination of Availability sets and Availability zones.

Calculations: Be able to determine the availability of VMs during planned or unplanned maintenance based on the number of FDs and UDs as well as the number of VMs in a given configuration.

Best Practice: Best practice is to have at least 2 VMs in an availability set, and 2 availability sets in your region to provide redundancy in the event of zonal failures as well as UD / FD maintenance.

12
Q

Your company has the infrastructure shown in the following table:

| Location | Resource |
|---|---|
| Azure | Azure subscription named Subscription1; 20 Azure web apps |
| On-premises datacenter | Active Directory domain; server running Azure AD Connect; Linux computer named Server1 |

The on-premises Active Directory domain syncs to Azure Active Directory (Azure AD).

Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.

You plan to migrate Server1 to a virtual machine in Subscription1.

A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.

You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.

What should you include in the recommendation?

Azure AD Domain Services (Azure AD DS)
an Azure VPN gateway
the Active Directory Domain Services role on a virtual machine
Azure AD Application Proxy

A

Understanding the Requirements

Application (App1): Uses LDAP queries to authenticate users in the on-premises Active Directory.

Migration: Moving from an on-premises Linux server to an Azure VM.

Security Policy: VMs and services in Azure are not allowed to access the on-premises network.

Functionality: The migrated application must still be able to authenticate users.

Analyzing the Options

Azure AD Domain Services (Azure AD DS)

Pros:

Provides a managed domain controller in Azure, allowing VMs to join the domain.

Supports LDAP queries for authentication.

Independent of the on-premises network.

Synchronizes user information from Azure AD.

Fully managed, eliminating the need for maintaining domain controllers.

Cons:

Cost implications from running an additional service.

Verdict: This is the most suitable option. It meets the functional requirements without violating the security policy.

An Azure VPN Gateway

Pros:

Provides a secure connection between Azure and on-premises networks.

Cons:

Violates the security policy that prevents Azure resources from connecting to on-premises.

Would allow the VM access to the entire on-premises network (if set up as site-to-site), including AD.

Verdict: Not a valid option because it directly contradicts the security policy.

The Active Directory Domain Services role on a virtual machine

Pros:

Provides the needed domain services

Cons:

Would require setting up and managing a domain controller in Azure.

Would need to set up a VPN connection to sync with the on-premises domain, which would violate the security policy.

Requires ongoing maintenance.

Verdict: Not a valid option because it would be hard to maintain and the connection to on-premises would violate the security policy.

Azure AD Application Proxy

Pros:

Allows external users to connect to internal resources.

Cons:

Not relevant for this use case. Application Proxy does not manage or provide LDAP access to users.

Verdict: Not a good fit as it does not help with authentication for the application.

Correct Recommendation

The best solution is Azure AD Domain Services (Azure AD DS).

Explanation

LDAP Compatibility: Azure AD DS provides a managed domain service compatible with LDAP queries, which is precisely what App1 needs for user authentication.

Isolated Azure Environment: Azure AD DS is entirely contained within Azure and does not require a connection to the on-premises network. This allows you to satisfy the security policy.

Azure AD Synchronization: Azure AD DS syncs users from Azure AD, meaning users will be able to authenticate after the migration.

Ease of Use: Azure AD DS is a fully managed service so you will not need to worry about the underlying infrastructure.
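As a concrete illustration of the LDAP side, here is a hedged Python sketch using the ldap3 library against an Azure AD DS managed domain over secure LDAP; the host name, service account, and search base are placeholders, and App1's real query logic is unknown.

```python
# Sketch of an LDAP lookup against an Azure AD DS managed domain (LDAPS).
# Host, credentials, and search base below are placeholders.
from ldap3 import Server, Connection, SIMPLE

server = Server("ldaps://aadds.contoso.com", use_ssl=True)  # secure LDAP endpoint
conn = Connection(
    server,
    user="svc-app1@aadds.contoso.com",   # hypothetical service account
    password="<password>",
    authentication=SIMPLE,
    auto_bind=True,
)

# Verify a user identity the way App1 might: search for the account.
conn.search(
    "dc=aadds,dc=contoso,dc=com",
    "(&(objectClass=user)(sAMAccountName=jsmith))",
    attributes=["displayName", "memberOf"],
)
print(conn.entries)
```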

Important Notes for the AZ-304 Exam

Azure AD DS Use Cases: Know that Azure AD DS is designed for scenarios where you need domain services (including LDAP) in Azure but cannot/should not connect to on-premises domain controllers.

Hybrid Identity: Be familiar with hybrid identity options, such as using Azure AD Connect to sync on-premises Active Directory users to Azure AD.

Security Policies: Pay close attention to security policies described in exam questions. The correct answer must satisfy every stated security requirement.

Service Selection: Be able to choose the correct Azure service based on the stated requirements of the question. For example, know when to use Azure AD DS as opposed to spinning up a domain controller in a VM.

Alternatives: You should know what other options there are that could theoretically be used, but also understand their pros and cons. For instance, you should be able to state that a VPN could facilitate the connection, but that the security policy would need to be updated.

LDAP Authentication: Understand LDAP as the core functionality for Active Directory authentication.

Fully Managed Services: Be aware of the benefits of managed services (like Azure AD DS) in reducing management overhead.

13
Q

You are reviewing an Azure architecture as shown in the Architecture exhibit (Click the Architecture tab.)

Log Files → Azure Data Factory → Azure Data Lake Storage ⇄ Azure Databricks → Azure Synapse Analytics → Azure Analysis Services → Power BI

Steps:
Ingest: Log Files → Azure Data Factory
Store: Azure Data Factory → Azure Data Lake Storage
Prep and Train: Azure Data Lake Storage ⇄ Azure Databricks
Model and Serve: Azure Synapse Analytics → Azure Analysis Services
Visualize: Azure Analysis Services → Power BI

The estimated monthly costs for the architecture are shown in the Costs exhibit. (Click the Costs tab.)

| Service | Description | Cost |
|---|---|---|
| Azure Synapse Analytics | Tier: Compute-optimised Gen2, Compute: DWU 100 x 1 | US$998.88 |
| Data Factory | Azure Data Factory V2 Type, Data Pipeline Service type | US$4,993.14 |
| Azure Analysis Services | Developer (hours), 5 Instance(s), 720 Hours | US$475.20 |
| Power BI Embedded | 1 node(s) x 1 Month, Node type: A1, 1 Virtual Core(s) | US$735.91 |
| Storage Accounts | Block Blob Storage, General Purpose V2, LRS Redundant | US$21.84 |
| Azure Databricks | Data Analytics Workload, Premium Tier, 1 D3V2 (4 vCPU) | US$515.02 |
| Estimate total: | | US$7,739.99 |

The log files are generated by user activity to Apache web servers. The log files are in a consistent format. Approximately 1 GB of logs are generated per day. Microsoft Power BI is used to display weekly reports of the user activity.

You need to recommend a solution to minimize costs while maintaining the functionality of the architecture.

What should you recommend?

Replace Azure Data Factory with CRON jobs that use AzCopy.
Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.
Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.
Replace Azure Databricks with Azure Machine Learning.


A

Understanding the Existing Architecture

Data Ingestion: Log files from Apache web servers are ingested into Azure Data Lake Storage via Azure Data Factory.

Data Processing: Azure Databricks is used to prep and train the data.

Data Warehousing: Azure Synapse Analytics is used to model and serve data.

Data Visualization: Azure Analysis Services and Power BI are used for visualization.

Cost Breakdown and Bottlenecks

The cost breakdown shows the following areas as significant expenses:

Azure Data Factory: $4,993.14 (by far the most expensive item)

Azure Synapse Analytics: $998.88

Power BI Embedded: $735.91

The other items (Analysis services, Databricks, and storage) are relatively low cost.

Analyzing the Recommendations

Replace Azure Data Factory with CRON jobs that use AzCopy.

Pros:

Significant cost reduction: AzCopy is free and can be used with a simple CRON job.

Suitable for the relatively small amount of data that is being moved.

Cons:

Less feature-rich than Data Factory (no orchestration, error handling, monitoring, etc.).

Adds management overhead as you need to create and maintain the CRON jobs.

Verdict: This is the best option. Given the small data volume, the complexity of Data Factory is overkill and the cost can be reduced dramatically.

Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.

Pros:

Can be more cost effective for smaller workloads and can scale up or down easily.

Cons:

May need changes to the way the data is stored and managed.

Hyperscale is designed for transactional workloads and may not be the best replacement for a data warehouse.

Verdict: Not the best option, as it may impact the architecture of the solution and the query patterns used.

Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.

Pros:

Could be less expensive than the managed service for small workloads.

Cons:

Significantly more management overhead, less scalable.

Would reduce the overall functionality of the solution, having to implement multiple services in one VM.

Would not reduce costs, as the total cost of the VM, the SQL licences, and the management effort would likely be higher.

Verdict: Not recommended. Introduces complexity and management overhead.

Replace Azure Databricks with Azure Machine Learning.

Pros:

Azure Machine Learning can also do data processing.

May be more cost efficient depending on workload.

Cons:

Azure Machine Learning is more focused on ML than on the processing/preparation of data.

More geared towards predictive analytics than general data processing.

May require a significant rework of the existing process.

Verdict: Not a suitable option as it is not a like-for-like replacement.

Recommendation

The best recommendation is:

Replace Azure Data Factory with CRON jobs that use AzCopy.

Explanation

Cost Savings: The primary issue is the high cost of Azure Data Factory. Using CRON jobs and AzCopy is a simple, low-cost alternative for the relatively small volume of data being moved.

Functionality: The CRON job will simply move the data from the source location to Azure Data Lake Storage, with the processing steps remaining the same.

Complexity: While this adds management overhead by requiring you to create and maintain the CRON job, the simplicity of the requirements outweighs the added complexity.
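For illustration, a minimal sketch of what the scheduled job could look like: a crontab entry that invokes a small Python wrapper around AzCopy. The paths, storage URL, and SAS token are placeholders.

```python
# Example crontab entry (nightly at 01:00):
#   0 1 * * * /usr/bin/python3 /opt/jobs/upload_logs.py
#
# The wrapper shells out to AzCopy; source path and destination SAS URL are placeholders.
import subprocess
from datetime import date

LOG_DIR = f"/var/log/apache2/{date.today():%Y-%m-%d}"              # hypothetical log path
DEST = "https://contosologs.blob.core.windows.net/raw-logs?<SAS>"  # placeholder SAS URL

subprocess.run(["azcopy", "copy", LOG_DIR, DEST, "--recursive"], check=True)
```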

Important Notes for the AZ-304 Exam

Cost Optimization: Know that the exam may test your ability to identify cost drivers and suggest cost optimizations.

Azure Data Factory: Understand when ADF is the right tool and when a simpler tool will suffice. It’s often beneficial to use a tool as simple as possible, while still meeting requirements.

Data Transfer: Be aware of options like AzCopy for moving data in a low-cost way.

CRON jobs: Understand how CRON jobs can be used to schedule operations.

Azure Synapse Analytics: Understand how Azure Synapse Analytics can provide insights and processing power, but can also be expensive.

SQL Database Hyperscale: Understand when it is more beneficial to use Hyperscale over Synapse

SQL Server on Azure VM: Know the use cases of where a traditional SQL server may be appropriate.

Azure Analysis Services: Know that it is designed for fast data queries and reporting through tools like Power BI, but can add significant cost.

Azure Databricks and ML: Understand the difference and which scenarios are more suited for each.

Service selection: Know how to select a service based on the requirements provided.

Simplicity: Consider solutions that may be less feature-rich, but provide simpler (and lower cost) solutions.

14
Q

You have an Azure Active Directory (Azure AD) tenant.

You plan to provide users with access to shared files by using Azure Storage. The users will be provided with different levels of access to various Azure file shares based on their user account or their group membership.

You need to recommend which additional Azure services must be used to support the planned deployment.

What should you include in the recommendation?

an Azure AD enterprise application
Azure Information Protection
an Azure AD Domain Services (Azure AD DS) instance
an Azure Front Door instance

A

Understanding the Requirements

Azure File Shares: Using Azure Storage to host shared files.

Granular Access Control: Users need different levels of access to different file shares.

User/Group-Based Permissions: Access should be based on the user’s account and their Azure AD group memberships.

Azure AD Authentication: Users will be using their Azure AD credentials.

Analyzing the Options

An Azure AD enterprise application:

Pros:

Allows you to register an application with Azure AD for authentication and authorization.

This can be used to allow users to access other resources including file shares based on their claims (group membership).

Cons:

Would require custom logic and code development to implement access based on group membership.

Verdict: Not the correct choice. An application registration is required for service principals, but not directly for file share access.

Azure Information Protection:

Pros:

Provides information protection through labeling, classification, and encryption.

Cons:

It doesn’t directly control access to Azure File Shares and does not provide role based access control (RBAC)

Verdict: Not the correct choice. While AIP can help protect the files themselves, it’s not what is needed to control access based on the identity and group memberships of the users.

An Azure AD Domain Services (Azure AD DS) instance:

Pros:

Provides domain services in Azure.

Can be used to join Azure VMs to the managed domain.

Cons:

Not required. You can use the Azure AD authentication directly, so a domain service is not required to provide access to the fileshares.

Verdict: Not the correct choice. While Azure AD DS provides a domain for Azure resources, it’s not required for this use case and would be an unnecessary complexity.

An Azure Front Door instance:

Pros:

Provides a global entry point for your web applications.

Cons:

Azure Front Door cannot be used for Azure file share access.

Verdict: Not the correct choice, as it does not help with Azure file share access.

Recommendation

None of the listed services directly supports the planned deployment; what the scenario actually requires is:

Role-Based Access Control (RBAC)

Explanation

RBAC: Role-Based Access Control (RBAC) in Azure allows you to define roles (like “Reader,” “Contributor,” “Owner”) and assign these roles to users or groups at the file share, storage account, or resource group levels.

Azure AD Identities: RBAC integrates directly with Azure AD, so you can easily grant permissions based on a user’s Azure AD account or their group memberships.

Granular Permissions: You can use RBAC to configure different permission levels for different users and groups on different file shares, meeting the stated requirements.

No Additional Services: Unlike the incorrect answers, RBAC is a core feature of Azure and does not require additional services.
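As a sketch of what the assignment could look like, the snippet below shells out to the Azure CLI to grant one of the built-in Azure Files data-plane roles at share scope; the IDs are placeholders, and the exact scope path is an assumption to verify against the documentation.

```python
# Hedged sketch: assigning a built-in Azure Files role to a group at share scope.
import subprocess

# Placeholder scope; verify the exact file-share resource path for your account.
scope = ("/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/"
         "storageAccounts/<account>/fileServices/default/fileshares/<share>")

subprocess.run([
    "az", "role", "assignment", "create",
    "--assignee", "<group-object-id>",                    # Azure AD user or group
    "--role", "Storage File Data SMB Share Contributor",  # read/write/delete on the share
    "--scope", scope,
], check=True)
```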

Important Notes for the AZ-304 Exam

Azure RBAC: Know RBAC inside and out. How to create roles, assign roles, scopes, and use in Azure.

Azure Storage Security: Understand how security works in Azure Storage (using access keys, Shared Access Signatures (SAS), and RBAC)

Azure AD Authentication: Know how Azure AD can be used to grant access to Azure resources

Azure File Shares: Understand how they work, and their security options.

Common Use Cases: RBAC is used everywhere in Azure, so be comfortable applying it in different scenarios.

Role Creation: Know when you will need to create a custom role, and how to do so.

15
Q

DRAG DROP

You are planning an Azure solution that will host production databases for a high-performance application. The solution will include the following components:

✑ Two virtual machines that will run Microsoft SQL Server 2016, will be deployed to different data centers in the same Azure region, and will be part of an Always On availability group.

✑ SQL Server data that will be backed up by using the Automated Backup feature of the SQL Server IaaS Agent Extension (SQLIaaSExtension)

You identify the storage priorities for various data types as shown in the following table.

| Data type | Storage priority |
|---|---|
| Operating system | Speed and availability |
| Databases and logs | Speed and availability |
| Backups | Lowest cost |

Which storage type should you recommend for each data type? To answer, drag the appropriate storage types to the correct data types. Each storage type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Storage Types
A geo-redundant storage (GRS) account
A locally-redundant storage (LRS) account
A premium managed disk
A standard managed disk

Answer Area
Operating system:
Databases and logs:
Backups:


A

Understanding the Requirements

High-Performance Application: The application demands high speed and availability.

SQL Server Always On: Data is critical and must be resilient and highly available.

Automated Backups: Backups are important but not as critical as the operational data.

Storage Priorities:

Operating System: Speed and availability.

Databases and Logs: Speed and availability.

Backups: Lowest cost.

Analyzing the Storage Options

A geo-redundant storage (GRS) account:

Pros:

Provides data replication across a secondary region.

Best for disaster recovery and high availability.

Cons:

Highest cost among the storage options.

Higher latency than locally redundant storage (LRS) or premium storage.

Use Case: Best for backups when recovery from a regional outage is critical, or when backups need to be available from a different location.

A locally-redundant storage (LRS) account:

Pros:

Lowest cost storage.

Cons:

Data redundancy is limited to within the same data center.

Use Case: Suitable for backups where availability is less of a concern and lowest cost is the primary priority.

A premium managed disk:

Pros:

Highest performance with SSD storage.

Designed for high IOPS and low latency.

Cons:

Highest cost.

Use Case: Ideal for operating system disks, databases, and logs for high-performance applications.

A standard managed disk:

Pros:

Lower cost than premium disks.

Cons:

Uses HDD storage, offering less performance than SSD storage.

Use Case: Suitable for less performance-sensitive workloads and backups, where cost is an important factor.

Matching Storage to Data Types

Here’s how we should match the storage types:

Operating system:

Premium managed disk is the correct option. The operating system requires high-speed disk access for good virtual machine performance.

Databases and logs:

Premium managed disk is the correct option. Databases and logs require very low-latency and high IOPS. Premium disks are the only disks that provide these performance requirements.

Backups:

A locally-redundant storage (LRS) account is the best option. The Automated Backup feature of the SQL Server IaaS Agent Extension (SQLIaaSExtension) can target an LRS storage account, which is the lowest-cost redundancy option and matches the stated priority.

Answer Area

| Data type | Storage type |
|---|---|
| Operating system | A premium managed disk |
| Databases and logs | A premium managed disk |
| Backups | A locally-redundant storage (LRS) account |

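Purely as an illustration of the matching logic on this card (not an Azure API), the table can be encoded as a lookup:

```python
# Illustrative encoding of this card's matching logic.
def recommend_storage(priority: str) -> str:
    return {
        "speed and availability": "A premium managed disk",
        "lowest cost": "A locally-redundant storage (LRS) account",
    }[priority.lower()]

for data_type, priority in [
    ("Operating system", "Speed and availability"),
    ("Databases and logs", "Speed and availability"),
    ("Backups", "Lowest cost"),
]:
    print(f"{data_type}: {recommend_storage(priority)}")
```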
Important Notes for the AZ-304 Exam

Managed Disks vs Unmanaged Disks: Know the difference between them and be aware that managed disks are the default option and almost always recommended.

Premium SSD vs Standard HDD: Understand the use cases of Premium disks for high IOPS/low-latency and Standard for cost sensitive workloads.

Storage Redundancy Options: Understand the difference between LRS, GRS, ZRS, and how to choose the best options for availability and durability requirements.

SQL Server on Azure VMs: Know best practices for SQL Server VM deployments including storage and backup configuration.

Performance Needs: Recognize which workloads need performance (like databases, operating systems) and which can tolerate lower performance and be cost-optimized (backups)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

You are developing a sales application that will contain several Azure cloud services and will handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using REST messages.

What should you include in the recommendation?

Azure Queue storage
Azure Data Lake
Azure Service Fabric
Azure Traffic Manager

A

Understanding the Requirements

Asynchronous Communication: Cloud services need to communicate without waiting for a response from each other.

REST Messages: Communication should be done via HTTP-based REST messages.

Transaction Information: The messages contain data related to customer orders, billing, payment, inventory, and shipping.

Multiple Cloud Services: The solution must enable several cloud services to communicate effectively.

Analyzing the Options

Azure Queue Storage:

Pros:

Asynchronous: Supports asynchronous message queuing.

HTTP-Based API: Provides a REST API for sending and receiving messages.

Scalable: Can handle high message volumes.

Simple to Use: Relatively easy to set up and use for message queuing.

Cost-Effective: One of the most cost-effective options for asynchronous messaging.

Cons:

Does not support message filtering, prioritization, or sessions.

Messages have a size limit of 64 KB, which may not suit more complex messages.

Verdict: This is the best fit for the given requirements.

Azure Data Lake:

Pros:

Scalable storage for large data sets.

Cons:

Not designed for message queuing or asynchronous communication.

Does not have a REST based messaging API.

Verdict: Not a suitable choice as it does not provide messaging functionality.

Azure Service Fabric:

Pros:

Platform for building microservices.

Provides reliable communication patterns between services.

Cons:

More complex than needed for simple message queuing.

Not designed for simple asynchronous communication with REST messages.

Adds unnecessary operational overhead if a simple messaging system is required.

Verdict: Not suitable. Too complex for the given scenario.

Azure Traffic Manager:

Pros:

Provides traffic routing based on performance and priority.

Cons:

Does not handle messaging or asynchronous communication.

Not applicable to the scenario.

Verdict: Not a relevant option. Its functionality is outside the requirements for the question.

Recommendation

The correct recommendation is:

Azure Queue Storage

Explanation

Asynchronous Messaging: Azure Queue Storage is designed to facilitate asynchronous communication between different components of an application. Services can add messages to the queue, and other services can read those messages from the queue independently of each other.

REST API: Queue Storage exposes a REST API that allows services to interact with queues through HTTP requests.

Scalability: Azure Queue Storage can scale to accommodate a large number of messages and message senders/receivers.

Cost-Effectiveness: It is one of the most cost-effective services for asynchronous messaging on Azure.
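A minimal sketch with the azure-storage-queue SDK (which wraps the Queue Storage REST API) shows the producer/consumer pattern; the connection string, queue name, and message shape are placeholders.

```python
# Sketch: one service enqueues a transaction step, another dequeues it.
import json
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<connection-string>", "orders")  # placeholders

# Producer (e.g. the ordering service): enqueue a transaction message.
queue.send_message(json.dumps({"orderId": 42, "step": "payment", "amount": 19.99}))

# Consumer (e.g. the payment service): read and remove messages independently.
for msg in queue.receive_messages():
    payload = json.loads(msg.content)
    # ... process the transaction step here ...
    queue.delete_message(msg)
```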

Important Notes for the AZ-304 Exam

Asynchronous Messaging: Understand when and why to use asynchronous messaging.

Azure Queue Storage: Know that it’s a great option for simple messaging, with its REST API, ease of use, scalability, and cost effectiveness.

Azure Service Bus: Be aware of when to use Azure Service Bus over queue storage, particularly if more complex features are needed such as message filtering, prioritization, sessions, or publish/subscribe.

REST API: Recognize that many Azure services use REST for API access.

Microservices: Know that services communicate with one another in a microservices environment using various methods.

Appropriate Use Cases: Focus on matching the right service with the appropriate use case.

17
Q

You have 200 resource groups across 20 Azure subscriptions.

Your company's security policy states that the security administrator must verify all assignments of the Owner role for the subscriptions and resource groups once a month. All assignments that are not approved by the security administrator must be removed automatically. The security administrator must be prompted every month to perform the verification.

What should you use to implement the security policy?

Access reviews in Identity Governance
role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Identity Secure Score in Azure Security Center
the user risk policy in Azure Active Directory (Azure AD) Identity Protection

A

Understanding the Requirements

Scope: 20 Azure subscriptions and 200 resource groups.

Policy: Monthly verification of Owner role assignments.

Verification: A security administrator must approve or remove role assignments.

Automation: Unapproved assignments should be automatically removed.

Monthly Reminders: Security administrator must be prompted each month for verification.

Analyzing the Options

Access reviews in Identity Governance:

Pros:

Role Assignment Review: Specifically designed for reviewing and managing role assignments, including the Owner role.

Scheduled Reviews: Can be configured to run monthly.

Automatic Removal: Supports automatic removal of assignments not approved by the reviewer.

Reviewer Reminders: Notifies designated reviewers (security administrator) when reviews are due.

Scope: Can be used for both subscriptions and resource groups.

Cons:

Requires correct configuration of the governance policy and assignments to ensure the policy is enforced.

Verdict: This is the correct option as it directly meets all the requirements.

Role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM):

Pros:

Allows for just-in-time (JIT) role elevation.

Cons:

Does not directly facilitate regular reviews of role assignments.

PIM is generally used for temporary, just-in-time access, not for the recurring review and removal of assignments that this scenario requires.

Verdict: Not suitable. Does not fulfil the requirement for monthly verification of role assignments.

Identity Secure Score in Azure Security Center:

Pros:

Provides a security score based on configurations and recommendations.

Cons:

Does not manage, monitor, or remove role assignments.

Only provides a score of the security posture but does not take actions to remove permissions.

Verdict: Not suitable. It is only used to monitor your posture.

The user risk policy in Azure Active Directory (Azure AD) Identity Protection:

Pros:

Detects and manages user risk based on suspicious activities.

Cons:

Does not manage role assignments; it is only used for user-based risks, not for permissions.

Not relevant for the requirements for scheduled reviews of role assignments.

Verdict: Not suitable. Not used for role assignment reviews.

Recommendation

The best solution is:

Access reviews in Identity Governance

Explanation

Designed for Role Assignment Reviews: Access reviews are specifically built for reviewing and managing user access to resources.

Scheduled Monthly Reviews: You can configure the access reviews to occur every month.

Automatic Remediation: Unapproved role assignments can be automatically removed, which fulfills the security policy requirement.

Notifications: The security administrator will be notified when the monthly review is due and will be required to take action, or the review will complete automatically.

Comprehensive Scope: Access reviews can be configured at the subscription and resource group levels.
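For a sense of how this is automated, here is a hedged sketch that creates a monthly, auto-applying review definition through the Microsoft Graph access reviews API; the token and the exact payload shape are assumptions to verify against the accessReviewScheduleDefinition reference.

```python
# Hedged sketch: a monthly access review that auto-removes unapproved assignments.
import requests

definition = {
    "displayName": "Monthly Owner role review",
    "scope": {  # illustrative scope; see the Graph docs for role-assignment scopes
        "query": "/subscriptions/<sub-id>/providers/Microsoft.Authorization/roleAssignments",
        "queryType": "MicrosoftGraph",
    },
    "settings": {
        "autoApplyDecisionsEnabled": True,  # apply decisions when the review ends
        "defaultDecision": "Deny",          # unreviewed assignments are removed
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 1},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": "Bearer <token>", "Content-Type": "application/json"},
    json=definition,
    timeout=30,
)
```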

Important Notes for the AZ-304 Exam

Identity Governance: Know that Identity Governance provides access reviews and other features for managing user access.

Access Reviews: Understand how to use access reviews for recurring role assignment validation.

Privileged Identity Management (PIM): Know when to use PIM for JIT role activation and when it is not suitable, such as in this scenario.

Azure Security Center: Understand that it gives you a security posture but not a way to resolve assignment review issues; it only recommends remediation steps.

Azure AD Identity Protection: Understand its purpose in monitoring and dealing with user risk.

Role Assignments: Know that RBAC is used to control roles, and that they can be assigned at multiple levels in Azure.

Automation: Be aware of how Azure Governance tools can help automate security tasks, such as removing assignments and sending out alerts.

18
Q

Your company purchases an app named App1.

You need to recommend a solution to ensure that App1 can read and modify access reviews.

What should you recommend?

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.

A

Understanding the Requirements

App1 Functionality: Needs to read and modify access reviews.

Azure Environment: Using Azure Active Directory (Azure AD).

Authorization: Must be authorized to perform these actions.

Analyzing the Options

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.

Pros:

Application Registration: The correct way to enable an application to be able to access protected resources in Azure AD.

Microsoft Graph API: The Microsoft Graph API is the correct API to access Azure AD, including access reviews.

Delegated Permissions: Permissions to access Microsoft Graph APIs must be delegated to applications, and this can be done using Azure AD application registrations.

Cons:

None. This is the correct approach.

Verdict: This is the correct solution.

From the Azure Active Directory admin center, register App1. from the Access control (IAM) blade, delegate permissions.

Pros:

Application Registration: Required to allow your app to integrate with Azure.

Cons:

Access Control (IAM): IAM is used for resource-level access control and not for delegating permissions for application access to Azure AD or Graph API resources.

Delegations to specific APIs such as the Microsoft Graph API are not performed using the IAM blade.

Verdict: This is incorrect. IAM is not used to delegate permissions to the Microsoft Graph API.

From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.

Pros:

API Management is useful when you want to expose your app as a third-party API.

Cons:

API Management: Not required for App1 to interact with the Graph API.

Does not support direct delegation of application permissions.

Verdict: This is incorrect. API Management is not the correct service for this task.

From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.

Pros:

API Management is useful when you want to expose your app as a third-party API.

Cons:

API Management: Not required for App1 to interact with the Graph API.

IAM: IAM is not used to delegate access to the Graph API.

Verdict: This is incorrect. API Management is not the correct service, and IAM is not the correct way to configure delegation for the Graph API.

Recommendation

The correct recommendation is:

From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.

Explanation

Application Registration: Registering App1 in Azure AD creates an application object which represents your application and is used to identify your application within the directory.

Microsoft Graph API: The Microsoft Graph API is the unified endpoint for accessing Microsoft 365, Azure AD and other Microsoft cloud resources. Access reviews are also exposed through this API.

Delegated Permissions: You must delegate permissions to allow App1 to access the Graph API. By providing delegated permissions through the application registration, you allow the app to access resources on behalf of the logged in user. In the case of app-only access, this can be configured by granting application permissions rather than delegated permissions.

Authorization: After App1 is registered with delegated permissions it is allowed to perform actions on the Graph API such as accessing access reviews.
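A minimal sketch of the app-only flow with the msal library: the registered App1 acquires a token with its client credentials and calls the Graph access reviews endpoint. The IDs and secret are placeholders, and the AccessReview.ReadWrite.All application permission is assumed to have been granted and consented.

```python
# Sketch: app-only call to Microsoft Graph from a registered application.
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<app-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# ".default" requests whatever Graph application permissions were granted.
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    timeout=30,
)
print(resp.status_code)
```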

Important Notes for the AZ-304 Exam

Application Registration: Know how to register applications in Azure AD and why it is a required step to allow apps to access resources.

Microsoft Graph API: Understand that the Graph API is the primary way to access Microsoft 365 and Azure AD resources, including access reviews.

Delegated Permissions vs. Application Permissions: Be able to differentiate between these two types of permissions. Delegated permissions require an authenticated user. Application permissions are app-only and do not need a logged in user.

Access Control (IAM): Know that IAM is for resource level access and not for granting permission for applications.

API Management: Understand its purpose in publishing and securing APIs, but note that it is not necessary in this use case.

Security Principles: Understand the best practices for securing access to resources such as ensuring that the app is registered and given correct permissions.

19
Q

HOTSPOT

Your company deploys several Linux and Windows virtual machines (VMs) to Azure. The VMs are deployed with the Microsoft Dependency Agent and the Log Analytics Agent installed by using Azure VM extensions. On-premises connectivity has been enabled by using Azure ExpressRoute.

You need to design a solution to monitor the VMs.

Which Azure monitoring services should you use? To answer, select the appropriate Azure monitoring services in the answer area. NOTE: Each correct selection is worth one point.

Scenario | Azure Monitoring Service

Analyze Network Security Group (NSG) flow logs for VMs
attempting Internet access:

Azure Traffic Analytics
Azure ExpressRoute Monitor
Azure Service Endpoint Monitor
Azure DNS Analytics

Visualize the VMs with their different processes and
dependencies on other computers and external processes:
Azure Service Map
Azure Activity Log
Azure Service Health
Azure Advisor

A

Understanding the Requirements

Monitoring Scope: Linux and Windows VMs in Azure.

Connectivity: On-premises connectivity via Azure ExpressRoute.

Microsoft Dependency Agent and Log Analytics Agent: Already deployed to VMs via extensions.

Monitoring Scenarios:

Analyzing NSG flow logs for VMs attempting Internet access.

Visualizing VMs with processes and dependencies.

Analyzing the Options

Azure Traffic Analytics:

Pros:

Analyzes NSG flow logs to identify traffic patterns and security risks.

Can detect VMs attempting Internet access by inspecting the flow logs.

Provides visualisations of traffic patterns for easy interpretation.

Cons:

Does not provide dependencies of VMs or processes.

Verdict: The correct service for the first scenario.

Azure ExpressRoute Monitor:

Pros:

Monitors the health and performance of ExpressRoute circuits.

Cons:

Does not analyse the flow logs or provide visibility of VM processes and dependencies.

Verdict: Not suitable for the described requirements.

Azure Service Endpoint Monitor:

Pros:

Monitors endpoints in Azure and provides status for services.

Cons:

Does not monitor the flow logs or provide visibility of VM processes and dependencies.

Verdict: Not suitable for the described requirements.

Azure DNS Analytics:

Pros:

Provides insights into DNS performance and traffic.

Cons:

Does not monitor the flow logs or provide visibility of VM processes and dependencies.

Verdict: Not suitable for the described requirements.

Azure Service Map:

Pros:

Automatically discovers application components on Windows and Linux systems.

Visualizes VMs, processes, and dependencies.

Requires the Microsoft Dependency Agent which has already been installed.

Cons:

Not used to monitor NSG flow logs.

Verdict: Correct choice for the second scenario.

Azure Activity Log:

Pros:

Provides audit logs and tracks events at the subscription and resource level.

Cons:

Does not monitor NSG flow logs or provide process/dependency visualization.

Verdict: Not suitable. It is more related to platform events.

Azure Service Health:

Pros:

Provides insights into the health of Azure services.

Cons:

Does not monitor NSG flow logs or provide process/dependency visualization for individual VMs.

Verdict: Not suitable for the described requirements.

Azure Advisor:

Pros:

Provides recommendations on cost, performance, reliability, and security.

Cons:

Does not monitor the flow logs or provide visibility of vm processes and dependencies.

Verdict: Not suitable for the described requirements.

Answer Area

| Scenario | Azure Monitoring Service |
|---|---|
| Analyze Network Security Group (NSG) flow logs for VMs attempting Internet access | Azure Traffic Analytics |
| Visualize the VMs with their different processes and dependencies on other computers and external processes | Azure Service Map |

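Both services surface their data in a Log Analytics workspace, so the results can also be queried programmatically. Below is a hedged sketch with the azure-monitor-query SDK; the Traffic Analytics table and column names (AzureNetworkAnalytics_CL, FlowType_s) are assumptions to verify against your workspace schema.

```python
# Hedged sketch: count public-internet flows per VM from Traffic Analytics data.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowType_s == "ExternalPublic"
| summarize Flows = count() by VM_s
"""

result = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(row)
```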
Important Notes for the AZ-304 Exam

Traffic Analytics: Understand how to use Traffic Analytics to analyze NSG flow logs for security and network traffic monitoring.

Service Map: Know that service map can be used to map services and their dependencies.

Microsoft Dependency Agent: Know that Service Map requires this dependency agent to be deployed on the VMs.

Log Analytics Agent: Be aware that these agents collect logs and forward them to a Log Analytics workspace, and are a prerequisite for some of these solutions.

Azure Monitor: Know the purpose of all Azure Monitoring services in the overall Azure monitoring landscape.

Application Monitoring vs. Infrastructure Monitoring: Understand that there are a number of monitoring solutions in Azure that target different services. For this question you will need to identify the solution that facilitates monitoring the infrastructure.

20
Q

You store web access logs data in Azure Blob storage.

You plan to generate monthly reports from the access logs.

You need to recommend an automated process to upload the data to Azure SQL Database every month.

What should you include in the recommendation?

Azure Data Factory
Data Migration Assistant
Microsoft SQL Server Migration Assistant (SSMA)
AzCopy

A

Understanding the Requirements

Source: Web access logs in Azure Blob storage.

Destination: Azure SQL Database.

Frequency: Monthly.

Automation: The process needs to be automated.

Transformation: No complex transformations are specified, so the service doesn’t need to be a powerful ETL tool.

Analyzing the Options

Azure Data Factory (ADF):

Pros:

Automated Data Movement: Designed to move data between different sources and sinks.

Scheduling: Supports scheduling pipelines for recurring execution (monthly).

Integration: Has built-in connectors for Blob storage and SQL Database.

Scalable: Can handle various data volumes and complexities.

Transformation: Supports data transformation if needed.

Cons:

Slightly more complex to configure than other options; however, a simple ADF pipeline is quite easy to set up.

Verdict: This is the best fit. It can orchestrate the entire process from data extraction to data loading, and scheduling.

Data Migration Assistant (DMA):

Pros:

Helps with migrating databases to Azure, including schema and data migration.

Cons:

Not designed for continuous, scheduled data movement.

More of an interactive tool rather than an automated service.

Not suited to ingest logs into an existing database.

Verdict: Not suitable for recurring data uploads. It is more suited for migrations.

Microsoft SQL Server Migration Assistant (SSMA):

Pros:

Helps with migrating databases from on-premises to Azure SQL Database.

Cons:

Not designed for recurring data uploads from Blob Storage.

Primarily used for database migrations not for data ingestion.

Verdict: Not a valid option. This is used for migrations and not for scheduled data uploads.

AzCopy:

Pros:

Command-line tool to copy data to and from Azure Storage.

Cons:

Not a managed service, it does not handle scheduled operations, it has to be scheduled externally using OS tools (e.g. CRON, task scheduler).

Does not support loading data directly into a database; you would therefore need to build a custom solution for that step.

Does not support any data transformation logic.

Verdict: Not the best option. Requires building a custom solution and does not directly fulfil the requirement to load data into a database.

Recommendation

The correct recommendation is:

Azure Data Factory

Explanation

Automation and Scheduling: Azure Data Factory allows you to create pipelines that can be scheduled to run monthly.

Built-in Connectors: It has connectors for both Azure Blob Storage (to read the logs) and Azure SQL Database (to load data).

Data Integration: It integrates all steps of data extraction, transformation (optional), and loading into a single pipeline.

Monitoring: It provides monitoring and logging for debugging and audit purposes.

Scalability: It can handle a large amount of data if required, and can scale up resources as needed.
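Illustratively, the monthly schedule is expressed as a trigger on the pipeline. The sketch below shows the approximate JSON shape of an ADF schedule trigger as a Python dict; the names are placeholders and deployment (portal, ARM, or SDK) is omitted.

```python
# Approximate shape of an ADF schedule trigger that runs a pipeline monthly.
monthly_trigger = {
    "name": "MonthlyLogLoad",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Month",                 # once per month
                "interval": 1,
                "startTime": "2024-01-01T02:00:00Z",  # placeholder start time
            }
        },
        "pipelines": [
            {"pipelineReference": {"referenceName": "BlobToSqlPipeline",  # hypothetical
                                   "type": "PipelineReference"}}
        ],
    },
}
print(monthly_trigger["properties"]["typeProperties"]["recurrence"])
```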

Important Notes for the AZ-304 Exam

Azure Data Factory (ADF): Understand its capabilities as an ETL and data orchestration tool.

Automated Data Movement: Know how to set up ADF pipelines for recurring data movement.

Data Integration Tools: Familiarize yourself with the available connectors for different data sources and destinations.

Data Migration vs. Data Ingestion: Understand the difference between tools that are used for migration (e.g. DMA, SSMA) and tools for scheduled data uploads (e.g. ADF).

AzCopy: Know the purpose of AzCopy, and its use cases.

Transformation: Understand that transformation is often a requirement and that you can use data factory for this if needed.

Ease of Use: Although ADF is not the simplest tool, it is the easiest to maintain for scheduled recurring events when compared to a custom solution.

21
Q

You are designing a data protection strategy for Azure virtual machines. All the virtual machines use managed disks.

You need to recommend a solution that meets the following requirements:

  • The use of encryption keys is audited.
  • All the data is encrypted at rest always.
  • You manage the encryption keys, not Microsoft.

What should you include in the recommendation?

Azure Disk Encryption
Azure Storage Service Encryption
BitLocker Drive Encryption (BitLocker)
client-side encryption

A

Understanding the Requirements

Managed Disks: The virtual machines use Azure managed disks.

Encryption at Rest: All data must be encrypted when stored on disk.

Customer-Managed Keys: You must manage the encryption keys, not Microsoft.

Auditing: The use of encryption keys must be auditable.

Analyzing the Options

Azure Disk Encryption (ADE):

Pros:

Encrypts managed disks for both Windows and Linux VMs.

Supports customer-managed keys (CMK) with Azure Key Vault.

Data is encrypted at rest, meeting the security requirement.

Cons:

Does not support auditing of key usage.

Verdict: Does not fully satisfy the requirements due to lack of key usage auditing.

Azure Storage Service Encryption (SSE):

Pros:

Encrypts data at rest in Azure storage (including managed disks) by default.

Supports Microsoft-managed keys or customer-managed keys.

Cons:

Provides basic encryption for data at rest, but does not encrypt the OS disks of VMs.

Does not support the auditing of key usage.

Verdict: Does not provide full coverage of encryption for managed disks, and does not support auditing, therefore not a suitable choice.

BitLocker Drive Encryption (BitLocker):

Pros:

Encrypts drives in Windows operating systems.

Cons:

Would require manual setup and management for every VM.

Does not support auditing of key usage.

Does not support customer managed keys out of the box.

Verdict: Not the correct option. Too much manual overhead, lacks key auditing, and can be complex to manage.

Client-Side Encryption:

Pros:

The data is encrypted before it is sent to Azure.

The encryption key is managed by the client.

Cons:

This method requires custom implementations and additional effort from the client.

Does not support management or auditing of the keys in Azure.

Verdict: Not suitable. Requires custom implementations, and is not a managed solution.

Recommendation

The recommendation should be Azure Disk Encryption with customer-managed keys and Azure Key Vault, as this is the closest match to the requirements; note that additional configuration (described below) is needed to meet the auditing requirement.

Explanation

Azure Disk Encryption (ADE): ADE provides encryption for both OS and data disks, using platform-managed keys or customer-managed keys.

Customer-Managed Keys (CMK): By using CMK with Azure Key Vault, you maintain full control over your encryption keys, which satisfies that requirement.

Azure Key Vault Auditing: Azure Key Vault logs every access to keys and secrets, and these logs can be monitored through Azure Log Analytics.

Encryption at Rest: The data at rest on the managed disks is always encrypted using the configured CMK keys.

Full coverage: This method fully encrypts all disks for the VM.

Steps to implement auditing:

Create an Azure Key Vault

Create a customer managed key in Azure Key Vault.

Configure ADE for the VM to use the customer managed key.

Configure Diagnostic settings on Azure Key Vault to send all logs to Azure Log Analytics.

Configure alerts on Key Vault events using Azure Log Analytics to ensure that you are notified when keys are used or modified.
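Once diagnostics flow to the workspace, key usage can be queried. A hedged sketch follows; the AzureDiagnostics column and operation names are assumptions to check against your own workspace.

```python
# Hedged sketch: surface Key Vault key-usage events from Log Analytics.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT" and Category == "AuditEvent"
| project TimeGenerated, OperationName, CallerIPAddress
"""

result = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=30))
for table in result.tables:
    for row in table.rows:
        print(row)
```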

Important Notes for the AZ-304 Exam

Azure Disk Encryption (ADE): Know the options for ADE (platform-managed vs. customer-managed keys) and their implications.

Azure Key Vault: Understand its purpose for storing and managing secrets, keys, and certificates.

Encryption at Rest: Be aware of the different ways to achieve encryption at rest in Azure storage and databases.

Customer-Managed Keys: Know the benefits and implications of using customer-managed keys (CMK) for encryption.

Auditing: Be aware that auditing is a critical aspect of encryption and compliance.

Managed Disks: Understand that managed disks are now the default type in Azure and that encryption applies to them.

22
Q

Your company has the divisions shown in the following table.

| Division | Azure subscription | Azure Active Directory (Azure AD) tenant |
|---|---|---|
| East | Sub1 | east.contoso.com |
| West | Sub2 | west.contoso.com |

Sub1 contains an Azure web app that runs an ASP.NET application named App1. App1 uses the Microsoft identity platform (v2.0) to handle user authentication. Users from east.contoso.com can authenticate to App1.

You need to recommend a solution to allow users from west.contoso.com to authenticate to App1.

What should you recommend for the west.contoso.com Azure AD tenant?

guest accounts
an app registration
pass-through authentication
a conditional access policy


A

Understanding the Requirements

App1: An ASP.NET application using the Microsoft identity platform (v2.0) for authentication.

Current Authentication: east.contoso.com users can already authenticate to App1.

New Authentication: Users from west.contoso.com must also be able to authenticate to App1.

Authentication: Using Microsoft Identity platform and not on-premises authentication.

Azure AD Tenants: The different divisions have different Azure AD tenants.

Analyzing the Options

Guest accounts:

Pros:

Cross-Tenant Access: Allows users from one Azure AD tenant to access resources in another Azure AD tenant.

Easy to Setup: Relatively easy to create and manage.

Azure AD Integration: Fully compatible with Azure AD and Microsoft identity platform (v2.0).

App Access: This will allow the users to be added to the east.contoso.com tenant and allow access to the app.

Cons:

Requires users to be invited.

Verdict: This is the correct solution.

An app registration:

Pros:

Required for all applications that require authentication from Azure AD.

Cons:

The app registration is already done, and an additional app registration is not required.

Verdict: Not required. An app registration is already in place.

Pass-through authentication:

Pros:

Allows users to use their on-premises password to sign in to Azure AD.

Cons:

Not suitable in this scenario as it is designed to use on-premises passwords and is not relevant for cloud identity authentication.

Not designed for this use case, which is authentication between different Azure AD tenants.

Verdict: Not a good solution. It is not applicable to cloud authentication and is designed for on-prem identity.

A conditional access policy:

Pros:

Used to enforce access control based on various conditions.

Cons:

Does not enable the required functionality to allow a new tenant access to an existing application.

Used to control which users can access a particular resource, but the user must be configured to authenticate first.

Verdict: Not the correct choice. Conditional access can be added later to restrict which users can access the app, but it will not provide the access needed for the app to work for the new tenant.

Recommendation

The correct recommendation is:

Guest accounts

Explanation

Azure AD Guest Accounts: Guest accounts in Azure AD allow you to invite external users into your Azure AD tenant. These users can then access the applications that are hosted on that tenant.

Cross-Tenant Access: Guest accounts enable cross-tenant collaboration, which is exactly what is needed in this scenario.

Microsoft Identity Platform Compatibility: Guest accounts fully integrate with the Microsoft identity platform (v2.0), making them compatible with the authentication mechanisms used by App1.

Access to the App: After a user is added as a guest in the east.contoso.com tenant, they are able to authenticate to the app using their existing credentials from the west.contoso.com tenant.
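A minimal sketch of the invitation step via the Microsoft Graph invitations API; the token, addresses, and redirect URL are placeholders.

```python
# Sketch: invite a west.contoso.com user into the east.contoso.com tenant as a guest.
import requests

invitation = {
    "invitedUserEmailAddress": "user@west.contoso.com",
    "inviteRedirectUrl": "https://app1.azurewebsites.net",  # hypothetical App1 URL
    "sendInvitationMessage": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": "Bearer <token>", "Content-Type": "application/json"},
    json=invitation,
    timeout=30,
)
print(resp.status_code)
```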

Important Notes for the AZ-304 Exam

Azure AD Guest Accounts: Understand the purpose of Azure AD guest accounts for cross-tenant collaboration.

Cross-Tenant Access: Know when and how to configure cross-tenant access with Azure AD.

Microsoft Identity Platform (v2.0): Understand that this platform is used for authentication of modern web and mobile applications.

Application Registrations: Know that an app registration is required to allow applications to access resources from Azure AD.

Pass-through Authentication: Understand that this is used to authenticate on-prem identities, not cloud identities.

Conditional Access: Know that this can control access, but cannot provide access on its own.

Authentication: Have a good understanding of authentication in Azure and how to configure it to work across multiple tenants.

23
Q

HOTSPOT

You are designing a solution for a stateless front-end application named Application1.

Application1 will be hosted on two Azure virtual machines named VM1 and VM2.

You plan to load balance connections to VM1 and VM2 from the Internet by using one Azure load balancer.

You need to recommend the minimum number of required public IP addresses.

How many public IP addresses should you recommend using for each resource? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Load balancer:
0
1
2
3
VM1:
0
1
2
3
VM2:
0
1
2
3

A

Understanding the Requirements

Application1: Stateless front-end application.

Hosting: On two Azure VMs (VM1 and VM2).

Load Balancing: Incoming traffic from the Internet must be load balanced across the two VMs.

Public IP Addresses: The goal is to determine the minimum number of public IP addresses required.

Analyzing the Setup

Load Balancer: An Azure load balancer, which provides the entry point for internet traffic and distributes it between the virtual machines.

Virtual Machines: VM1 and VM2 host the application. In this scenario, we want to know how many public IP addresses are required for each VM.

Public IP Addresses Needed

Load Balancer:

A load balancer needs a public IP address to be accessible from the internet. This IP address will be the entry point that the outside world connects to, and the load balancer will handle directing traffic to the back end VMs.

You would typically use one single IP address for this type of scenario.

Therefore the correct answer is 1

Virtual Machines (VM1 and VM2):

The application is being load balanced. It is therefore not required to have the virtual machines individually exposed to the public internet.

The Load Balancer will direct traffic to the virtual machines using a private IP address.

It is therefore not required for these to have public IP addresses.

Therefore the correct answer is 0

Answer Area

| Resource | Public IP addresses |
|---|---|
| Load balancer | 1 |
| VM1 | 0 |
| VM2 | 0 |

Explanation

Load Balancer:

The load balancer needs a single public IP address for internet access. This is the public entry point for all inbound connections. The Load Balancer is responsible for directing the traffic to the VMs in a balanced way.

Virtual Machines (VM1 and VM2):

Since the traffic is going to the VMs via the load balancer they do not require public IP addresses.

The load balancer will connect to the virtual machines using their private IP address, which are on the same network as the Load Balancer.

This allows the virtual machines to be protected from direct internet access, as the public facing IP is managed by the Load Balancer.
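As a hedged sketch of the 1 / 0 / 0 answer in practice, the snippet below shells out to the Azure CLI to create one public IP and attach it to the load balancer's frontend; resource names are placeholders, and wiring the VM NICs into the backend pool is omitted.

```python
# Sketch: one public IP on the load balancer, none on the VMs.
import subprocess

def az(*args: str) -> None:
    subprocess.run(["az", *args], check=True)

az("network", "public-ip", "create",
   "--resource-group", "rg-app1", "--name", "pip-lb", "--sku", "Standard")

az("network", "lb", "create",
   "--resource-group", "rg-app1", "--name", "lb-app1", "--sku", "Standard",
   "--public-ip-address", "pip-lb",     # the single internet-facing IP
   "--frontend-ip-name", "fe-app1",
   "--backend-pool-name", "be-app1")    # VM1/VM2 join via their private NICs
```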

Important Notes for the AZ-304 Exam

Azure Load Balancer: Understand the role of load balancers in distributing traffic across VMs.

Public IP Addresses: Know when public IP addresses are required and when they are not.

Private IP Addresses: Understand that communication can happen within a virtual network using private IP addresses.

Stateless Applications: Recognize the purpose of stateless applications, and how load balancers are used.

Load Balancer Configuration: Know how load balancers work and how back end pools are configured to handle the traffic.

Security: Remember that it's a best practice not to expose VMs directly to the internet; a load balancer with a public IP should be used instead.

23
Q

You need to deploy resources to host a stateless web app in an Azure subscription.

The solution must meet the following requirements:

  • Provide access to the full .NET framework.
  • Provide redundancy if an Azure region fails.
  • Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy a web app in an Isolated App Service plan.

Does this meet the goal?

Yes
No

A

Understanding the Requirements

Stateless Web App: The application is stateless.

Full .NET Framework: The application requires access to the full .NET Framework.

Regional Redundancy: The application must continue to function if an Azure region fails.

OS Access: Administrators need access to the operating system to install custom dependencies.

Analyzing the Proposed Solution: Isolated App Service Plan

Isolated App Service Plan: This plan provides the highest level of isolation and resources for a web app.

Now, let’s evaluate how the solution meets each requirement:

Provide access to the full .NET framework.

Analysis: An Isolated App Service plan allows you to select the operating system (Windows) and provides the full .NET Framework, therefore meeting the requirement.

Verdict: Meets Requirement

Provide redundancy if an Azure region fails.

Analysis: Isolated App Service plans do not provide automatic multi-region redundancy. You would need to deploy the web app and app service plan to multiple regions, and manually configure traffic redirection using a tool like Azure Traffic Manager or Front Door.

Verdict: Does NOT meet requirement

Grant administrators access to the operating system to install custom application dependencies.

Analysis: App Service, including Isolated plans, does not grant administrators access to the underlying operating system. You are restricted to installing dependencies within the supported context of the web app.

Verdict: Does NOT meet requirement

Conclusion

The Isolated App Service plan meets one of the three requirements. Therefore, the answer is No.

Reasoning:

While an Isolated App Service plan offers a great amount of resource allocation and isolation, it does not give access to the underlying operating system to administrators, or provide automatic redundancy in the event of an outage. These limitations make the solution unsuitable for the requirements.

Correct Answer

No

Important Notes for the AZ-304 Exam

Azure App Service: Understand the different App Service plans (Free, Shared, Basic, Standard, Premium, and Isolated) and their features.

.NET Framework: Be aware of the support for the full .NET Framework in App Service plans and the limitations.

Regional Redundancy: Know how to achieve regional redundancy using traffic managers and other services.

OS Access: Remember that App Service generally does not provide access to the underlying OS.

Use Cases: Know when to select Azure VMs over App Services, particularly when you need control of the underlying operating system.

Service Selection: Know how to select the correct Azure service that fits all the requirements.

24
Q

Your network contains an on-premises Active Directory domain.

The domain contains the Hyper-V clusters shown in the following table.

| Name | Number of nodes | Number of virtual machines running on cluster |
|---|---|---|
| Cluster1 | 4 | 20 |
| Cluster2 | 3 | 15 |

You plan to implement Azure Site Recovery to protect six virtual machines running on Cluster1 and three virtual machines running on Cluster2. Virtual machines are running on all Cluster1 and Cluster2 nodes.

You need to identify the minimum number of Azure Site Recovery Providers that must be installed on premises.

How many Providers should you identify?

1
7
9
16

A

Understanding the Requirements

On-Premises Environment: An on-premises Active Directory domain with two Hyper-V clusters.

Azure Site Recovery: Used to protect virtual machines.

Protected VMs: Six VMs from Cluster1 and three VMs from Cluster2.

Goal: Determine the minimum number of ASR Providers needed.

Understanding Azure Site Recovery Providers

Purpose: The Azure Site Recovery Provider is a component installed on each Hyper-V host that communicates with Azure Site Recovery to facilitate replication and failover of virtual machines.

Placement: The Provider is installed on each Hyper-V host that is part of a cluster that contains virtual machines to be protected.

Minimum Requirement: You need at least one Provider installed on every Hyper-V host that runs virtual machines to be protected.

Analyzing the Scenario

Cluster1: Has 4 nodes. Six virtual machines are to be protected.

Cluster2: Has 3 nodes. Three virtual machines are to be protected.

Calculating the Required Providers

Cluster1: Although only 6 virtual machines from cluster 1 are being protected, these are hosted on nodes within the cluster and these nodes need to have the ASR provider installed.

Since there are four nodes in the cluster, a minimum of four providers is required for the virtual machines in cluster1.

Cluster2: Only 3 virtual machines need to be protected in cluster 2 and therefore the nodes in the cluster that host these virtual machines will require the ASR provider.

Since there are three nodes in the cluster, a minimum of three providers are required for the virtual machines in cluster 2.

Total Providers: The total minimum number of ASR Providers is therefore 4+3 = 7

Note that because clustered virtual machines can move between nodes, even if only one VM were protected on each cluster, a Provider would still be needed on every node, so the total would remain 4 + 3 = 7.
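
As a sanity check, the arithmetic reduces to a one-line sum over the node counts from the table (a toy Python illustration):

    # One ASR Provider per Hyper-V host; protected VMs can run on any node,
    # so every node of both clusters needs the Provider.
    nodes_per_cluster = {"Cluster1": 4, "Cluster2": 3}
    min_providers = sum(nodes_per_cluster.values())
    print(min_providers)  # 7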

Correct Answer

7

Important Notes for the AZ-304 Exam

Azure Site Recovery (ASR): Understand the purpose and function of ASR for disaster recovery.

ASR Provider: Know that the ASR Provider needs to be installed on every Hyper-V host in order to protect its virtual machines.

Hyper-V Clusters: Understand how to use Azure Site Recovery with Hyper-V clusters.

Agent Requirements: You need to know what components are required to be deployed on the virtual machines as well as on the hyper-v hosts.

Deployment Requirements: You should know the pre-requisites for deploying a DR strategy in Azure, and be aware of any limitations.

Minimum Requirements: ASR needs a minimum of 1 provider per hyper-v host that contains VMs that need to be protected by ASR.

ASR Components: Be aware of the different components required for an ASR setup.

25
Q

HOTSPOT

You are designing a cost-optimized solution that uses Azure Batch to run two types of jobs on Linux nodes. The first job type will consist of short-running tasks for a development environment. The second job type will consist of long-running Message Passing Interface (MPI) applications for a production environment that requires timely job completion.

You need to recommend the pool type and node type for each job type. The solution must minimize compute charges and leverage Azure Hybrid Benefit whenever possible.

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

First job:
Batch service and dedicated virtual machines
User subscription and dedicated virtual machines
User subscription and low-priority virtual machines
Second job:
Batch service and dedicated virtual machines
User subscription and dedicated virtual machines
User subscription and low-priority virtual machines

A

Understanding the Requirements

Two Job Types:

Short-running tasks (development).

Long-running MPI applications (production, timely completion).

Linux Nodes: All jobs will run on Linux nodes.

Cost Optimization: Minimize compute charges.

Azure Hybrid Benefit: Leverage whenever possible (for Windows licenses only, but this is still a good consideration).

Analyzing the Options

Batch Service and Dedicated Virtual Machines:

Pros:

Reliable, Consistent Performance: Dedicated VMs are not preempted and therefore have predictable performance.

Suitable for Production Workloads: Provides a good option for production workloads that need reliable compute.

Cons:

More expensive than low priority VMs.

Use Case: Suitable for both types of jobs where reliability is a factor, but may be more suitable for production workloads.

User Subscription and Dedicated Virtual Machines:

Pros:

Reliable Consistent Performance: Dedicated VMs are not preempted and therefore have predictable performance.

Azure Hybrid Benefit: You can use your own existing licensing if applicable.

Cons:

More expensive than low priority VMs.

Use Case: Suitable for both types of jobs where reliability is a factor, but may be more suitable for production workloads and where you want to leverage your existing licensing.

User Subscription and Low-Priority Virtual Machines:

Pros:

Lower cost: Low-priority VMs are cheaper than dedicated VMs because they can be preempted when Azure needs capacity.

Suitable for Development Workloads: Good choice for development where cost is more important than reliability.

Cons:

Can be preempted: May not be suitable for time-sensitive workloads or production environments.

Better suited to short tasks than to long-running ones, because the risk of preemption grows with runtime.

Use Case: Best for development jobs where reliability is not as critical.

Recommendations

Here’s the best combination of pool type and node type for each job:

First Job (Short-running, Development):

Pool Type: User subscription and low-priority virtual machines

Reason: Low-priority VMs are the most cost-effective option for development tasks. Since they are short-running, the risk of preemption is minimized. You will also get the benefits of the licensing.

Second Job (Long-running MPI, Production):

Pool Type: User subscription and dedicated virtual machines

Reason: Dedicated VMs are best for production workloads that require reliability, consistency, and timely completion of tasks, and you will get the benefit of your own licenses, and consistent performance with no possibility of being preempted.

Answer Area

Job Type Pool Type
First job (short-running, development) User subscription and low-priority virtual machines
Second job (long-running MPI, production) User subscription and dedicated virtual machines
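
To illustrate the recommendation, here is a hedged sketch using the azure-batch Python SDK; the account URL, pool IDs, VM size, and image values are placeholders, and note that the Batch service versus user subscription choice is an allocation-mode property of the Batch account itself, not of the pool.

    # Requires the azure-batch package; shared-key auth is one of several options.
    from azure.batch import BatchServiceClient
    from azure.batch.batch_auth import SharedKeyCredentials
    import azure.batch.models as batchmodels

    creds = SharedKeyCredentials("<account-name>", "<account-key>")
    client = BatchServiceClient(creds, batch_url="https://<account>.<region>.batch.azure.com")

    vm_config = batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical", offer="0001-com-ubuntu-server-focal",
            sku="20_04-lts", version="latest"),
        node_agent_sku_id="batch.node.ubuntu 20.04")

    # Development pool: low-priority (preemptible) nodes only, to minimize cost.
    client.pool.add(batchmodels.PoolAddParameter(
        id="dev-pool", vm_size="Standard_D2s_v3",
        virtual_machine_configuration=vm_config,
        target_dedicated_nodes=0, target_low_priority_nodes=4))

    # Production MPI pool: dedicated nodes only, so jobs are never preempted.
    client.pool.add(batchmodels.PoolAddParameter(
        id="prod-mpi-pool", vm_size="Standard_D2s_v3",
        virtual_machine_configuration=vm_config,
        target_dedicated_nodes=8, target_low_priority_nodes=0))
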
Important Notes for the AZ-304 Exam

Azure Batch: Understand the purpose and use of Azure Batch for running large scale compute jobs.

Pool Types: Know the difference between Batch service and user subscription pools, and how to leverage your licensing when you choose a user subscription pool.

Node Types: Understand the pros and cons of dedicated and low-priority nodes, and how they impact reliability and cost.

Azure Hybrid Benefit: Be aware of how it can be leveraged for Windows VM licenses. This is applicable to all user subscription pools, which are suitable if you have your own licenses.

Cost Optimization: Understand how to balance cost and performance when selecting VM types.

Development vs Production: Know that it is a good practice to have separate environments for development and production.

Timely Completion: If time sensitive jobs are required to complete without being interrupted, then low priority VMs are not suitable.

26
Q

You have an Azure Active Directory (Azure AD) tenant named Contoso.com. The tenant contains a group named Group1. Group1 contains all the administrator user accounts.

You discover several login attempts to the Azure portal from countries in which the administrator users do NOT work.

You need to ensure that all login attempts to the portal from those countries require Azure Multi-Factor Authentication (MFA).

Solution: You implement an access package.

Does this meet the goal?

Yes
No

A

Understanding the Requirements

Azure AD Tenant: Contoso.com

Admin Group: Group1 contains all administrator user accounts.

Problem: Login attempts from unauthorized countries.

Goal: Enforce MFA for all login attempts from these countries for administrator users.

Analyzing the Proposed Solution: Access Package

Access Package: A tool in Azure AD Identity Governance that allows you to manage access to resources (such as applications, groups, or SharePoint sites) by grouping the resources and their associated access policies together.

Let’s see if an access package meets the needs:

Enforce MFA for all login attempts to the portal from those countries.

Analysis: Access packages manage access to resources. It does not provide controls based on the location of the user, or specifically, the sign-in of the user. It cannot be used to enforce MFA based on location.

Verdict: Does NOT meet requirement

Conclusion

The solution does not meet the goal, as an access package does not enforce MFA based on location. Therefore, the answer is No.

Correct Answer

No

Explanation

Access packages are used to manage access to resources. Access policies can be created to control how users are granted access to a particular resource, but they can’t be used to control authentication requirements for all login attempts from different locations.

The Correct Solution

The correct way to implement this scenario is to use a Conditional Access Policy. Conditional access policies are designed to control access to applications and services based on conditions such as:

Location (Countries/Regions)

User or Group (e.g., the administrators in Group1)

Device State

Application

With a Conditional Access Policy, you can specify that any login attempts from certain countries for users in Group1 must use MFA.
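
For illustration, a Conditional Access policy of this shape can be created through the Microsoft Graph API. The sketch below is hedged: the group object ID, named-location ID, and bearer token are placeholders, and the caller needs the Policy.ReadWrite.ConditionalAccess permission.

    # A hypothetical policy: require MFA for Group1 sign-ins from named locations.
    import requests

    policy = {
        "displayName": "Require MFA for Group1 from risky countries",
        "state": "enabled",
        "conditions": {
            "users": {"includeGroups": ["<group1-object-id>"]},
            "locations": {"includeLocations": ["<named-location-id>"]},
            "applications": {"includeApplications": ["All"]},
            "clientAppTypes": ["all"],
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }

    resp = requests.post(
        "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
        headers={"Authorization": "Bearer <token>"},
        json=policy,
    )
    resp.raise_for_status()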

Important Notes for the AZ-304 Exam

Azure AD Conditional Access: Know the purpose and use of Conditional Access policies.

Access Packages: Understand the use cases of access packages in Azure AD Identity Governance.

MFA Enforcement: Know how to use conditional access to enforce MFA.

User and Group Scope: Know how to use conditions to target policies to specific users or groups.

Location Based Access: Understand how to configure conditional access based on geographical location.

Policy Selection: You should know when to select conditional access vs access policies and the use cases of each.

27
Q

HOTSPOT

You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.

App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.

You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

The users can connect to App1 without
being prompted for authentication:
An Azure AD app registration
An Azure AD managed identity
Azure AD Application Proxy

The users can access App1 only from
company-owned computers:
A conditional access policy
An Azure AD administrative unit
Azure Application Gateway
Azure Blueprints
Azure Policy

A

Understanding the Requirements

App1: An Azure web app using Azure AD authentication.

Users: Company users with Windows 10 computers joined to Azure AD.

Seamless Authentication: Users should be able to connect to App1 without any prompts for their credentials.

Company-Owned Devices: Access to App1 should only be allowed from company-owned computers.

Analyzing the Options

An Azure AD app registration:

Pros:

Required for all applications that use Azure AD.

Configures authentication for the application.

Cons:

Does not enable silent sign in or restrict access based on devices.

Verdict: Not sufficient to fulfil either of the requirements.

An Azure AD managed identity:

Pros:

Provides an identity for Azure services for accessing other Azure resources.

Cons:

Not applicable for the user authentication scenario.

Verdict: Not suitable. Not used for user access.

Azure AD Application Proxy:

Pros:

Enables access to internal web applications from the internet.

Cons:

Does not manage user credentials and does not restrict access to company owned machines.

Verdict: Not relevant for this scenario.

A conditional access policy:

Pros:

Can enforce authentication policies based on conditions, such as location, device compliance and other factors.

Can enforce access restrictions to only allow access from compliant or hybrid joined devices (company owned).

Cons:

Requires careful configuration

Verdict: This is the correct answer for the “company owned” devices requirement.

An Azure AD administrative unit:

Pros:

Used to scope management permissions and policies to a subset of users.

Cons:

Does not enable silent authentication and does not restrict access to devices.

Verdict: Not suitable for these requirements.

Azure Application Gateway:

Pros:

Load balances traffic to multiple backends.

Cons:

Does not manage user credentials and does not restrict access to devices.

Verdict: Not relevant for this scenario.

Azure Blueprints:

Pros:

Used to deploy resources using pre-defined templates.

Cons:

Does not manage user credentials and does not restrict access to devices.

Verdict: Not suitable for these requirements.

Azure Policy:

Pros:

Used to enforce specific resource configurations.

Cons:

Does not manage user credentials and does not restrict access to devices.

Verdict: Not suitable for these requirements.

Recommendations

Here’s how we should match the services to the requirements:

The users can connect to App1 without being prompted for authentication:

An Azure AD app registration is required: it configures Azure AD authentication for App1, and because the users’ Windows 10 computers are joined to Azure AD, sign-in can occur silently, without credential prompts.

The users can access App1 only from company-owned computers:

A conditional access policy is required. Conditional Access can restrict access to only compliant or hybrid joined devices, and therefore prevent users from logging on from personal machines.

Answer Area

Requirement Recommended Solution
The users can connect to App1 without being prompted for authentication: An Azure AD app registration
The users can access App1 only from company-owned computers: A conditional access policy
Explanation

Azure AD app registration:

User Authentication: An app registration configures Azure AD authentication for the application. Combined with the Azure AD-joined Windows 10 devices, this enables single sign-on so users are not prompted for credentials.

Conditional Access Policy:

Device-Based Restriction: Conditional access can restrict access based on device compliance, hybrid-joined state, and other factors to guarantee the user is on a company owned device.

Important Notes for the AZ-304 Exam

Azure AD Authentication: Know how Azure AD is used for authentication.

Conditional Access: Understand the purpose and functions of Conditional Access policies and how they can facilitate secure access based on various conditions.

Device Compliance: Know how devices can be marked as compliant or non-compliant within Azure.

Seamless Sign-in: Know that conditional access can facilitate seamless sign in with device based authentication.

Company Owned Devices: Know how conditional access can restrict access to company-owned devices only.

Policy Based Access: Understand that conditional access policies are used to enforce controls for users as they attempt to access resources.

Service Selection: Know how to select the service that best fits the requirements.

28
Q

You are developing a web application that provides streaming video to users. You configure the application to use continuous integration and deployment.

The app must be highly available and provide a continuous streaming experience for users.

You need to recommend a solution that allows the application to store data in a geographical location that is closest to the user.

What should you recommend?

Azure App Service Web Apps
Azure App Service Isolated
Azure Redis Cache
Azure Content Delivery Network (CDN)

A

Understanding the Requirements

Streaming Video Application: The application streams video content to users.

High Availability: The application must be highly available.

Continuous Streaming: Users need a smooth and uninterrupted streaming experience.

Geographical Proximity: Data (video content) must be stored geographically close to the user.

Analyzing the Options

Azure App Service Web Apps:

Pros:

Platform-as-a-Service (PaaS) for hosting web applications.

Supports continuous integration and deployment.

Can be deployed to multiple regions to increase availability.

Cons:

Doesn’t inherently store content geographically close to the user, though multiple regional deployments can be used to help with this.

Not designed for caching video content.

App service itself does not facilitate the storage of the content.

Verdict: Not the best choice. Does not provide the storage capability based on the location of the users.

Azure App Service Isolated:

Pros:

Offers the most isolation and resources for running web apps.

Supports continuous integration and deployment.

Can be deployed to multiple regions to increase availability.

Cons:

Doesn’t inherently store content geographically close to the user.

Not designed for caching video content.

App service itself does not facilitate the storage of the content.

Verdict: Not the best choice, for the same reasons as regular app service.

Azure Redis Cache:

Pros:

A high-performance, in-memory data store.

Can be used to cache frequently accessed data to reduce database load.

Cons:

Not designed for large media files.

Does not provide a content delivery service.

Verdict: Not the best option. Redis cache is designed to cache small key/value pairs, not large media files.

Azure Content Delivery Network (CDN):

Pros:

Geographic Content Delivery: Caches content in edge servers located around the world.

Low Latency: Delivers content from the closest edge server to the user, resulting in low latency and a better streaming experience.

High Availability: Highly scalable and fault-tolerant.

Optimized for Media: Designed for streaming large media files such as videos.

Cons:

Content must first be populated into the CDN from an origin; the CDN caches content rather than acting as the primary store.

Verdict: This is the best option. It is specifically designed for delivering content geographically closer to the user.

Recommendation

The correct recommendation is:

Azure Content Delivery Network (CDN)

Explanation

Geographic Caching: A CDN caches content in edge servers located around the world, minimizing the distance between the user and the data.

Low-Latency Streaming: Delivering content from the nearest edge server reduces latency and results in a smoother streaming experience.

High Availability and Scalability: A CDN provides a highly available and scalable solution for delivering content.

Media Optimisation: The CDN is designed specifically for delivering media files such as video, with different optimisations specific to this type of content delivery.
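
As a toy illustration of the routing idea only (Azure CDN’s actual PoP selection is far more sophisticated), the following picks the nearest edge location by straight-line distance; the edge names and coordinates are made up.

    import math

    # Hypothetical edge locations: (latitude, longitude).
    edges = {"Amsterdam": (52.4, 4.9), "Virginia": (38.9, -77.0), "Singapore": (1.4, 103.8)}

    def nearest_edge(user_lat, user_lon):
        # Straight-line distance as a rough stand-in for network latency.
        return min(edges, key=lambda e: math.hypot(edges[e][0] - user_lat,
                                                   edges[e][1] - user_lon))

    print(nearest_edge(48.9, 2.4))  # a Paris user is served from "Amsterdam"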

Important Notes for the AZ-304 Exam

Content Delivery Networks (CDNs): Understand their purpose in delivering content from edge locations to reduce latency.

Geographical Distribution: Know how to use CDNs to store content in locations closest to the user.

Streaming Video Applications: Understand how to use Azure to deliver streaming media content.

Azure App Service: Understand that although App service supports multi-regional deployments, they do not provide content delivery and caching.

Azure Redis Cache: Understand the purpose of Redis Cache, and what scenarios it is most suitable for.

Service Selection: Understand how to select the correct Azure service that fits your requirements.

29
Q

DRAG DROP

A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that uses the Basic license. You plan to deploy two applications to Azure.

The applications have the requirements shown in the following table.

| Application name | Requirement |
|---|---|
| Customer | Users must authenticate by using a personal Microsoft account and multi-factor authentication. |
| Reporting | Users must authenticate by using either Contoso credentials or a personal Microsoft account. You must be able to manage the accounts from Azure AD. |

Which authentication strategy should you recommend for each application? To answer, drag the appropriate authentication strategies to the correct applications. Each authentication strategy may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Authentication Strategies
An Azure AD B2C tenant
An Azure AD v1.0 endpoint
An Azure AD v2.0 endpoint

Answer Area
Customer: Authentication strategy
Reporting: Authentication strategy

A

Understanding the Requirements

Contoso, Ltd. Azure AD Tenant: Using the Basic license.

Two Applications:

Customer: External users authenticate with a personal Microsoft account and require MFA.

Reporting: Internal and external users can use Contoso credentials or a personal Microsoft account, which must be managed from Azure AD.

Analyzing the Authentication Strategies

An Azure AD B2C tenant:

Pros:

Designed for customer-facing applications.

Supports social identities (like Microsoft accounts).

Supports MFA for all authentication types.

Offers customization of the login experience.

Allows management of external identities and authentication policies.

Cons:

Requires an additional Azure AD tenant.

Use Case: Best suited for customer-facing applications that need to support different kinds of identity providers, such as personal Microsoft Accounts.

An Azure AD v1.0 endpoint:

Pros:

Supports Azure AD accounts.

Supports multi factor authentication.

Basic authentication framework

Cons:

Does not support personal Microsoft accounts.

Has a more limited set of features than v2.0.

Not designed for external customer authentication.

Use Case: Good for authenticating internal users, but not the best solution for external users.

An Azure AD v2.0 endpoint:

Pros:

Supports Azure AD accounts.

Supports personal Microsoft accounts.

Supports MFA for all authentication types.

Supports modern application development.

Cons:

Does not provide full B2C customization.

Does not manage external accounts within Azure AD.

Use Case: Ideal for authenticating internal (Azure AD) users and external personal accounts, however it does not offer the same level of configuration as B2C.

Matching Authentication Strategies to Applications

Here’s the correct mapping:

Customer:

An Azure AD B2C tenant is the best fit. It is specifically designed for customer-facing applications, supports personal Microsoft accounts and MFA, and has good customisation options.

Reporting:

An Azure AD v2.0 endpoint is the most suitable. It can facilitate authentication for internal Azure AD users and external personal Microsoft account users, which matches the requirement. As the application does not require the level of customisation that B2C offers, this is the best option.

Answer Area

Application Authentication Strategy
Customer An Azure AD B2C tenant
Reporting An Azure AD v2.0 endpoint
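
For illustration, a sign-in against the v2.0 endpoint with the MSAL Python library looks roughly like the sketch below; the client ID is a placeholder, and the common authority is what allows both Contoso (Azure AD) accounts and personal Microsoft accounts.

    import msal

    app = msal.PublicClientApplication(
        client_id="<reporting-app-client-id>",
        authority="https://login.microsoftonline.com/common",  # v2.0, both account types
    )

    # Opens a browser for sign-in; the result dict holds tokens on success.
    result = app.acquire_token_interactive(scopes=["User.Read"])
    if "access_token" in result:
        claims = result.get("id_token_claims", {})
        print("Signed in as:", claims.get("preferred_username"))
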
Important Notes for the AZ-304 Exam

Azure AD B2C: Understand its purpose and use for customer-facing applications.

Azure AD v1.0 vs. v2.0: Know the differences between the v1 and v2 endpoints and how they impact authentication.

Microsoft Accounts: Understand that Azure AD v1.0 does not support personal Microsoft accounts, and therefore you would need to use v2.0, or B2C.

MFA: Know how to enforce MFA for different authentication types.

Authentication Strategies: Understand which strategy is best for different types of applications (e.g., internal vs. customer-facing).

Azure AD Licenses: Know that Azure AD B2C requires separate licensing from Azure AD basic.

Service Selection: Be able to select the correct Azure service that fits your requirements.

30
Q

You deploy Azure App Service Web Apps that connect to on-premises Microsoft SQL Server instances by using Azure ExpressRoute. You plan to migrate the SQL Server instances to Azure.

Migration of the SQL Server instances to Azure must:

  • Support automatic patching and version updates to SQL Server.
  • Provide automatic backup services.
  • Allow for high availability of the instances.
  • Provide a native VNET with private IP addressing.
  • Encrypt all data in transit.
  • Be in a single-tenant environment with dedicated underlying infrastructure (compute, storage).

You need to migrate the SQL Server instances to Azure.

Which Azure service should you use?

SQL Server Infrastructure-as-a-Service (IaaS) virtual machine (VM)
Azure SQL Database with elastic pools
SQL Server in a Docker container running on Azure Container Instances (ACI)
Azure SQL Database Managed Instance
SQL Server in Docker containers running on Azure Kubernetes Service (AKS)

A

Understanding the Requirements

Migration Target: SQL Server instances migrating from on-premises to Azure.

Automatic Patching/Updates: SQL Server must be automatically patched and updated.

Automatic Backups: Automated backup services are needed.

High Availability: Instances must be highly available.

Native VNET: Must support private IP addressing within a native virtual network (VNET).

Data Encryption: All data must be encrypted in transit.

Single-Tenant Environment: Requires dedicated underlying infrastructure (compute, storage).

Analyzing the Azure Services

SQL Server Infrastructure-as-a-Service (IaaS) virtual machine (VM):

Pros:

Full control over SQL Server.

Supports private IPs within a VNET.

Data in transit can be encrypted.

Can achieve high availability by setting up an availability group.

Cons:

Requires manual patching and version updates.

Requires manual configuration of backup services.

Not a fully managed service.

Verdict: Does not meet the requirement for automatic patching/updates or automatic backups.

Azure SQL Database with elastic pools:

Pros:

Fully managed service.

Automatic patching, version updates, and backups.

High availability built-in.

Supports encryption in transit.

Cons:

Does not support a dedicated single-tenant environment. Elastic pools provide shared resources within the service.

Does not provide a native VNET integration.

Verdict: Not suitable because it does not provide single tenant or VNET.

SQL Server in a Docker container running on Azure Container Instances (ACI):

Pros:

Simple platform for running containerized applications.

Can be used to host a SQL Server image in a container.

Cons:

Does not provide high availability.

Does not support automatic patching and version updates.

Does not support backup services.

Does not support a native VNET integration.

Does not provide single tenant infrastructure.

Verdict: Does not meet the majority of the requirements.

Azure SQL Database Managed Instance:

Pros:

Fully managed service, almost 100% compatible with on-prem SQL Server.

Automatic patching, version updates, and backups.

Built-in high availability.

Supports private IP within a native VNET, as it is deployed into your VNET.

Data in transit is encrypted.

Provides a single-tenant environment with dedicated resources.

Cons:

More expensive than other options.

Verdict: This is the best fit. It meets all requirements.

SQL Server in Docker containers running on Azure Kubernetes Service (AKS):

Pros:

Highly scalable and resilient platform for running containerised apps.

Cons:

Requires manual configuration of patching, version updates, backups, and high availability.

Requires an additional layer of management of AKS itself.

Not a fully managed service.

Does not provide single-tenant infrastructure.

Verdict: Does not fulfil the majority of the requirements.

Recommendation

The correct recommendation is:

Azure SQL Database Managed Instance

Explanation

Fully Managed Service: Managed Instance handles patching, updates, and backups automatically.

High Availability: It has built-in high availability features.

Native VNET Support: Managed Instance is deployed into your own VNET with private IP addresses.

Encryption in Transit: Data is automatically encrypted in transit.

Single-Tenant: It provides a dedicated environment on the underlying infrastructure (compute, storage).
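
As a small illustration of the encryption-in-transit point, a client connecting to the Managed Instance over its VNET-private endpoint requests an encrypted channel explicitly; the host name, database, and credentials below are placeholders.

    import pyodbc

    # Encrypt=yes enforces TLS on the wire; the server name resolves to a
    # private IP inside the VNET that the Managed Instance is deployed into.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:<mi-name>.<dns-zone>.database.windows.net,1433;"
        "Database=AppDb;Uid=<user>;Pwd=<password>;"
        "Encrypt=yes;TrustServerCertificate=no;"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION")
    print(cursor.fetchone()[0])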

Important Notes for the AZ-304 Exam

Azure SQL Database Options: Know the difference between single database, elastic pool, and Managed Instance.

Managed Instance: Understand its key features, including native VNET integration, high availability, and automatic updates.

Fully Managed Services: Recognize the benefits of a fully managed service such as patching and backups, and that these features are often included.

IaaS vs. PaaS: Know when to select IaaS (VMs) or PaaS (Managed Instance) and how they differ.

VNET Integration: Be aware of which services provide native VNET integration.

Single and Multi-Tenant: Understand the differences between single-tenant and multi-tenant environments.

High Availability: Be aware that Managed instances provide high availability by default.

31
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage v2 account named Storage1.

You plan to archive data to Storage1.

You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.

Solution: You create a file share, and you configure an access policy.

Does this meet the goal?

Yes
No

A

Understanding the Requirements

Azure Storage v2 Account: Storage1

Archival: Data will be archived in the storage account.

Retention Policy: Archived data must be protected from deletion for five years.

Administrator Protection: This protection must prevent even administrators from deleting the data.

Analyzing the Proposed Solution: Access Policy on a File Share

File Share Access Policy: Access policies on Azure file shares primarily control who can access the share, and what actions they can perform on the share, such as read, write, or delete.

Let’s evaluate if a file share access policy meets the stated needs:

Prevent Data Deletion for Five Years (including administrators):

Analysis: File share access policies can be used to prevent certain users or groups from deleting files on a file share, but not for a specific retention period like five years.

Access policies can be overridden by users with sufficient rights (like the storage account administrator).

Access policies do not apply a time based restriction to deletion.

Verdict: Does NOT meet the requirement to prevent deletion for five years, or to block admin users.

Conclusion

The proposed solution does not meet the goal because an access policy will not prevent all users, including administrators, from deleting data, and will also not impose a time based restriction on the deletion of data. Therefore, the answer is No.

Correct Answer

No

Explanation

File share access policies are about authorization to perform specific actions, but they do not implement immutability or retention. To implement a time based retention, you would need an Immutability policy on a blob container. This setting is designed to provide a time based retention mechanism and protect data from deletion even by the administrators.
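
For illustration, the immutability-policy approach can be sketched with the azure-mgmt-storage Python SDK (method names per recent SDK versions; the resource names are placeholders, and five years is approximated as 1,825 days).

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Time-based retention on a blob container of the hypothetical account.
    policy = client.blob_containers.create_or_update_immutability_policy(
        resource_group_name="rg1",
        account_name="storage1",
        container_name="archive",
        parameters={"immutability_period_since_creation_in_days": 1825},
    )

    # Locking makes the retention binding even for administrators; a locked
    # policy's period can be extended but never shortened or removed.
    client.blob_containers.lock_immutability_policy(
        "rg1", "storage1", "archive", if_match=policy.etag,
    )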

Important Notes for the AZ-304 Exam

Azure Storage Access Policies: Understand their purpose and limitations in controlling access to data, and that they do not implement a time-based retention policy.

Azure Storage Immutability Policies: Understand that they provide a way to protect data from modification and deletion, and how you can set these policies.

Data Archival: You need to understand the ways that data can be archived, and how retention can be applied.

Admin Roles: Remember that administrators can override many security configurations and policies unless specifically protected by a service such as an immutability policy.

Security Best Practices: Be aware that security should be a consideration in every component of Azure.

Service Selection: Be able to select the correct Azure service that fits your requirements.

32
Q

HOTSPOT

You need to recommend a solution for configuring the Azure Multi-Factor Authentication (MFA) settings.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area
Azure AD license:
Free
Basic
Premium P1
Premium P2
Access control for the sign-in risk policy:
Allow access and require multi-factor authentication
Block access and require multi-factor authentication
Allow access and require Azure MFA registration
Block access
Access control for the multi-factor
authentication registration policy:
Allow access and require multi-factor authentication
Block access and require multi-factor authentication
Allow access and require Azure MFA registration
Block access

A

Understanding the Requirements

Azure MFA: The goal is to recommend a solution for configuring MFA.

Components to Configure:

Azure AD license

Access control for the sign-in risk policy

Access control for the multi-factor authentication registration policy

Analyzing the Options

Azure AD license:

Free: Basic MFA is available for all users with the free Azure AD license, however it does not allow conditional access or risk based MFA.

Basic: This license is very similar to the free tier.

Premium P1: Includes Conditional Access and advanced reporting, which is required for the requirements of the question.

Premium P2: Includes advanced Identity Protection and identity governance features.

Access control for the sign-in risk policy:

Allow access and require multi-factor authentication: Allows access, but requires MFA, which is suitable to mitigate the risk.

Block access and require multi-factor authentication: This does not make sense, as the user would not be able to log in.

Allow access and require Azure MFA registration: Allows access, and requires the user to register for MFA.

Block access: Blocks all access.

Access control for the multi-factor authentication registration policy:

Allow access and require multi-factor authentication: The user must already have MFA registered to log in.

Block access and require multi-factor authentication: This would lock users out, if they have not registered for MFA.

Allow access and require Azure MFA registration: This allows the user access, but requires them to register for MFA.

Block access: Blocks all access.

Recommendations

Here is the correct combination for each requirement:

Azure AD license: Premium P1

Reason: Conditional Access, which is required to configure MFA, requires an Azure AD Premium P1 license or higher. Free and Basic licenses do not support conditional access.

Access control for the sign-in risk policy: Allow access and require multi-factor authentication

Reason: We are not blocking sign in. When the policy is activated and user risk is detected, it will be required for them to authenticate using MFA before access is allowed.

Access control for the multi-factor authentication registration policy: Allow access and require Azure MFA registration

Reason: To ensure that users have MFA configured for the account, we should force them to register for MFA before they are able to proceed. This will ensure that all users are set up correctly.

Answer Area

Requirement Recommended Option
Azure AD license: Premium P1
Access control for the sign-in risk policy: Allow access and require multi-factor authentication
Access control for the multi-factor authentication registration policy: Allow access and require Azure MFA registration
Important Notes for the AZ-304 Exam

Azure AD Licensing: Understand the licensing options and which features are included in each.

Azure MFA: Know how to configure MFA, including registration policies and sign-in risk based policies.

Conditional Access: Understand the purpose of conditional access, and its requirements.

MFA Registration Policies: Know that these are important for ensuring that all users are set up correctly, before allowing them access to resources.

Risk Based Policies: Know that these are an essential component of a good security architecture.

Security Policies: Be aware of the best practices when setting up security policies.

33
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an on-premises Hyper-V cluster that hosts 20 virtual machines. Some virtual machines run Windows Server 2016 and some run Linux.

You plan to migrate the virtual machines to an Azure subscription.

You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.

Solution: You recommend implementing a Recovery Services vault and then using Azure Site Recovery.

Does this meet the goal?

Yes
No

A

Understanding the Requirements

On-Premises: Hyper-V cluster hosting 20 VMs (Windows Server 2016 and Linux).

Migration: Move the VMs to Azure.

Disk Replication: The disk data must be copied to Azure.

Availability: The VMs must remain available during the disk migration process.

Analyzing the Proposed Solution: Recovery Services Vault and Azure Site Recovery

Recovery Services Vault: A management container in Azure for ASR and backups.

Azure Site Recovery (ASR): A service used for replicating virtual machines for disaster recovery and migration.

Let’s assess if this solution meets the stated requirements:

Replicate Virtual Machine Disks to Azure:

Analysis: Azure Site Recovery is specifically designed for replicating virtual machine disks to Azure.

Verdict: Meets Requirement

Ensure Virtual Machine Availability During Disk Migration:

Analysis: Azure Site Recovery uses continuous asynchronous replication. This means that the VMs will continue to run in the on-premises environment while a copy of their disks is being transferred to Azure. This ensures that users will not experience any downtime during the migration process.

Verdict: Meets Requirement

Conclusion

The proposed solution meets all requirements as it facilitates the replication of VM disks using Azure Site Recovery, and it provides continuous asynchronous replication which allows VMs to remain available during the process. Therefore, the answer is Yes.

Correct Answer

Yes

Explanation

Azure Site Recovery: ASR replicates virtual machine disks from on-premises Hyper-V environments to Azure, while keeping the VMs running.

Continuous Replication: ASR uses continuous replication which allows the VMs to be running during the migration process.

Migration Support: ASR can facilitate the migration of on-prem environments to Azure.

Disaster Recovery: ASR can also be used to facilitate disaster recovery to Azure if a primary data centre fails.

Important Notes for the AZ-304 Exam

Azure Site Recovery (ASR): Know the purpose and functionality of ASR, including how to set up replication.

Recovery Services Vault: Understand that ASR requires a Recovery Services vault to store the replication metadata.

Replication Options: Be aware of the different replication methods that ASR can perform, specifically that it will replicate continuously in the background.

Migration Strategies: Understand how to migrate workloads from on-prem to Azure using different services, such as ASR.

On-prem Considerations: Remember that pre-requisites such as installing the ASR agent, configuring networking, and other actions are required to facilitate the process.

34
Q

HOTSPOT

You have the application architecture shown in the following exhibit.

Azure Active Directory
          |
          v
       Internet
          |
          v
+-----------------+
| Traffic Manager |
+-----------------+
          |
          v
      Azure DNS
       /     \
      v       v
Active Region     Standby Region
      |                 |
   Web App           Web App
      |                 |
      v                 v
 SQL Database      SQL Database

Use the drop-down menus to select choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

To change the front end to an active/active
architecture in which both regions process
incoming connections, you must [answer
choice].
add a load balancer to each region
add an Azure Application Gateway to each region
add an Azure content delivery network (CDN)
modify the Traffic Manager routing method

To control the threshold for failing over the
front end to the standby region, you must
configure the [answer choice].
an Application Insights availability test
Azure SQL Database failover groups
Connection Monitor in Azure Network Watcher
Endpoint monitor settings in Traffic Manager

A

Statement 1: To change the front end to an active/active architecture in which both regions process incoming connections, you must [modify the Traffic Manager routing method].

Why this is correct: As explained previously, Traffic Manager is responsible for routing traffic across regions. To have an active/active setup, you must use a Traffic Manager routing method that sends traffic to multiple regions simultaneously. Options like “Weighted” or “Performance” are suitable for active/active.

Why other options are not correct:

Add a load balancer to each region: Load balancers distribute traffic within a region, not between regions.

Add an Azure Application Gateway to each region: Similar to load balancers, Application Gateway is regional.

Add an Azure content delivery network (CDN): CDNs cache static content and do not handle dynamic traffic distribution across regions.

Statement 2: To control the threshold for failing over the front end to the standby region, you must configure the [Endpoint monitor settings in Traffic Manager].

Why this is correct: Traffic Manager’s endpoint monitoring is what determines if an endpoint is healthy and triggers a failover to a backup endpoint. The specific settings (probe interval, tolerated failures, status codes) define the conditions for failing over.

Why other options are not correct:

An Application Insights availability test: Application Insights provides monitoring, but does not directly control failover behavior of traffic manager.

Azure SQL Database failover groups: These manage database failover, not traffic routing at the web app level.

Connection Monitor in Azure Network Watcher: Connection Monitor is for network connectivity troubleshooting, not Traffic Manager endpoint failover.

Summary of Correct Answers:

Statement 1: modify the Traffic Manager routing method

Statement 2: Endpoint monitor settings in Traffic Manager
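
To show where both levers live, here is a hedged sketch using the azure-mgmt-trafficmanager Python SDK (subscription, resource group, and profile names are placeholders; the per-region endpoints would be attached to the same profile).

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.trafficmanager import TrafficManagerManagementClient

    client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

    client.profiles.create_or_update(
        "rg1", "tm-profile",
        {
            "location": "global",
            # Statement 1: the routing method decides active/active vs active/passive.
            "traffic_routing_method": "Weighted",
            "dns_config": {"relative_name": "contoso-frontend", "ttl": 30},
            # Statement 2: endpoint monitor settings control the failover threshold.
            "monitor_config": {
                "protocol": "HTTPS",
                "port": 443,
                "path": "/health",
                "interval_in_seconds": 30,
                "timeout_in_seconds": 10,
                "tolerated_number_of_failures": 3,
            },
        },
    )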

35
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains two administrative user accounts named Admin1 and Admin2.

You create two Azure virtual machines named VM1 and VM2.

You need to ensure that Admin1 and Admin2 are notified when more than five events are added to the security log of VM1 or VM2 during a period of 120 seconds. The solution must minimize administrative tasks.

What should you create?

two action groups and two alert rules
one action group and one alert rule
five action groups and one alert rule
two action groups and one alert rule

A

The correct answer is one action group and one alert rule.

Here’s why:

Alert Rules: An Azure Alert rule defines the condition that triggers a notification. In this case, the condition is “more than five security log events in 120 seconds.” We only need one alert rule because we’re monitoring the same condition on both VMs.

Action Groups: Action groups define what happens when an alert is triggered. This could include sending an email, an SMS, or a push notification. In this scenario, we need to notify both Admin1 and Admin2. Since the notification method is the same for both admins, we only need one action group. We then specify both Admin1 and Admin2 in the recipients of this action group. This minimizes administrative effort by letting us manage both notifications in one place.

Explanation of why other options are incorrect:

Two action groups and two alert rules: This would work but is unnecessarily complex. It would mean you have to maintain and update alert rules and action groups separately which is more administration overhead.

Five action groups and one alert rule: This is not related to the requirements.

Two action groups and one alert rule: This is incorrect because we only need one action group with both admins.

Breakdown of the required steps (in summary):

Create an Alert Rule:

Set the resource scope to include both VM1 and VM2.

Set the signal to be the security log with the event count exceeding 5 in 120 seconds.

Create an Action Group:

Add both Admin1’s and Admin2’s contact information as recipients (typically email addresses) within one action group.

Link the Action Group to the Alert Rule:

When configuring the alert rule, link it to the single action group you created.
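
The alert condition itself is easy to picture. The following standard-library-only toy reproduces the "more than five events in 120 seconds" logic with a stand-in for the action group fan-out; it is illustrative only, not how Azure Monitor evaluates the rule.

    from collections import deque

    WINDOW_SECONDS = 120
    THRESHOLD = 5
    events = deque()  # timestamps (in seconds) of security-log events

    def notify(message):
        # Stand-in for the single action group: one place reaching both admins.
        for admin in ("admin1@contoso.com", "admin2@contoso.com"):
            print(f"email {admin}: {message}")

    def record_event(ts):
        events.append(ts)
        while events and events[0] <= ts - WINDOW_SECONDS:
            events.popleft()  # drop events outside the 120-second window
        if len(events) > THRESHOLD:
            notify("more than 5 security events within 120 seconds")

    for t in (0, 10, 20, 30, 40, 50):  # six events in under 120 s -> alert fires
        record_event(t)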

Key Concepts for Azure 304 Exam:

Azure Monitor Alerts: Understand how to create and configure alert rules.

Action Groups: Know how to create and use action groups to trigger notifications.

Scope: Understand how to target alert rules to one or more resources (in this case, both VMs).

Minimize Administrative Task: Recognize that the correct solution should accomplish the objective with the least amount of overhead. (DRY Principle - Don’t Repeat Yourself)

Alert Logic: Be familiar with the logic of alert conditions (e.g. count, threshold, time windows).

36
Q

Overview:

Existing Environment

Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

Active Directory Environment:

The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only.

Network Infrastructure:

Each office contains at least one domain controller from the corp.fabrikam.com domain. The main office contains all the domain controllers for the rd.fabrikam.com forest.

All the offices have a high-speed connection to the Internet.

An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V.

The IT department currently uses a separate Hyper-V environment to test updates to WebApp1.

Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

Problem Statement:

The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Requirements:

Planned Changes:

Fabrikam plans to move most of its production workloads to Azure during the next few years.

As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft Office 365 deployment.

All R&D operations will remain on-premises.

Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

Technical Requirements:

Fabrikam identifies the following technical requirements:

  • Web site content must be easily updated from a single point.
  • User input must be minimized when provisioning new app instances.
  • Whenever possible, existing on-premises licenses must be used to reduce cost.
  • Users must always authenticate by using their corp.fabrikam.com UPN identity.
  • Any new deployments to Azure must be redundant in case an Azure region fails.
  • Whenever possible, solutions must be deployed to Azure by using platform as a service (PaaS).
  • An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
  • Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

Database Requirements:

Fabrikam identifies the following database requirements:

  • Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
  • To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
  • Database backups must be retained for a minimum of seven years to meet compliance requirements.

Security Requirements:

Fabrikam identifies the following security requirements:

  • Company information including policies, templates, and data must be inaccessible to anyone outside the company.
  • Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an Internet link fails.
  • Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
  • All administrative access to the Azure portal must be secured by using multi-factor authentication.
  • The testing of WebApp1 updates must not be visible to anyone outside the company.

You need to recommend a notification solution for the IT Support distribution group.

What should you include in the recommendation?

Azure Network Watcher
an action group
a SendGrid account with advanced reporting
Azure AD Connect Health

A

Okay, let’s break down this question.

The correct recommendation is: an action group and Azure AD Connect Health

Explanation:

Why Action Group is correct: As discussed in the previous response, action groups are the essential component for sending notifications triggered by Azure alerts or services. In this case, they are how the email notification gets sent to the “IT Support” distribution group. Action Groups are specifically designed for sending notifications, and no other options do this.

Why Azure AD Connect Health is correct: Azure AD Connect health is the primary monitoring service for any hybrid Azure AD environment. If there is an issue with the synchronization, Azure AD Connect Health will detect this and trigger alerts. The purpose of this requirement is to notify IT support of synchronization issues, without the monitoring service, there will be no alerts for IT support.

Why other options are incorrect (and why we must select only two):

Azure Network Watcher: Azure Network Watcher is designed for monitoring network performance and troubleshooting network issues. While network problems could indirectly impact directory sync, it’s not the primary or most efficient way to detect and alert on directory sync issues. It’s a separate function that doesn’t meet the main requirement.

A SendGrid account with advanced reporting: While SendGrid can handle email delivery, it does not provide monitoring on the Azure AD Connect health, nor does it act as a trigger for alert notifications. Azure Action groups are purpose built and the more direct option for sending email notifications based on alert triggers.

Why selecting only one is wrong:

You need BOTH the alert source AND the notification method:

Action Group by itself: Action Groups are the notification mechanism, but they need an alert source to trigger them. Without a service that monitors the synchronization health, the action group won’t have anything to respond to.

Azure AD Connect Health by itself: Azure AD Connect Health provides monitoring, but it cannot directly notify the IT support group without some type of notification service to send the emails.

Therefore, the best recommendation is an action group and Azure AD Connect Health, because they work together to meet the requirement to both detect the issue and notify the appropriate team.

Important Notes for Azure 304 Exam:

Understand Service Roles: The key to these questions is understanding the specific role that each Azure service plays. Azure AD Connect Health is for directory sync monitoring, action groups are for notifications.

Alerting Mechanisms: Learn the complete chain of alerts. There’s always an alert source (e.g., Azure AD Connect Health), a trigger condition, and a notification method (e.g., action groups).

Context is Critical: The problem scenario outlines the need for directory sync notifications. Don’t get distracted by options that deal with general monitoring; focus on services that directly address the requirement.

Minimize Complexity: When multiple options can solve a problem, choose the most direct and simple solution that meets the requirement, without adding unnecessary overhead or services. This is why SendGrid is not the best option; the more direct notification solution is Azure Action Groups.

Read the Questions Carefully: Make sure to read the questions and requirements carefully. In this example it says “You need to recommend a notification solution for the IT Support distribution group” which requires the combination of both the monitoring and notification services.

37
Q

HOTSPOT

Your company has 20 web APIs that were developed in-house.

The company is developing 10 web apps that will use the web APIs. The web apps and the APIs are registered in the company’s Azure Active Directory (Azure AD) tenant. The web APIs are published by using Azure API Management.

You need to recommend a solution to block unauthorized requests originating from the web apps from reaching the web APIs.

The solution must meet the following requirements:

  • Use Azure AD-generated claims.
  • Minimize configuration and management effort.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Grant permissions to allow the web apps to access the web APIs by using: Azure AD / Azure API Management / The web APIs

Configure a JSON Web Token (JWT) validation policy by using: Azure AD / Azure API Management / The web APIs

A

Correct Answers:

Grant permissions to allow the web apps to access the web APIs by using: Azure AD

Configure a JSON Web Token (JWT) validation policy by using: Azure API Management

Explanation:

Granting Permissions Using Azure AD:

Why it’s correct: In an Azure AD-secured environment, applications (like your web apps) need explicit permissions to access other applications (like your web APIs). This is done through application permissions or delegated permissions granted within Azure AD.

How it works: You would register both your web apps and your web APIs in Azure AD. Then, for each web app, you would grant it the specific permissions it needs to call the web APIs it uses. This usually involves using the Azure portal or the Azure CLI to define the required API permissions that the web app must have before accessing the web APIs.

Key Concept: Azure AD manages authentication and authorization for your applications. This ensures that only authorized web apps can access your web APIs.

Why other options are wrong: The permissions are not granted directly through API Management or the web APIs themselves. Those components enforce access but do not manage the initial permission grant.

Configuring JWT Validation Policy Using Azure API Management:

Why it’s correct: When a web app makes a request to a web API, it includes a JSON Web Token (JWT) in the Authorization header. This JWT contains claims about the user (for delegated permissions) or the application (for application permissions). Azure API Management can validate this JWT to confirm that:

The token was issued by a trusted Azure AD tenant.

The token is not expired.

The token contains the correct claims for this API and permissions.

The calling web app is who they claim to be in Azure AD.

How it works: You configure a validate-jwt policy in API Management that checks each incoming token against Azure AD. API Management uses the application ID of the web API it is protecting to ensure that the token being validated was issued for that specific API.
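API Management performs these checks declaratively at the gateway. To make the checks concrete, here is a rough Python sketch of the same validation using the PyJWT library; the tenant ID and app ID are placeholders, and this is a conceptual analogue of what validate-jwt verifies, not what the gateway runs internally.

```python
# pip install pyjwt[crypto]
import jwt  # PyJWT
from jwt import PyJWKClient

TENANT_ID = "<tenant-id>"        # assumed placeholder
API_APP_ID = "<web-api-app-id>"  # the audience the token must be issued for

# Azure AD publishes its token-signing keys at a well-known JWKS endpoint.
jwks = PyJWKClient(f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys")

def validate(token: str) -> dict:
    signing_key = jwks.get_signing_key_from_jwt(token)
    # decode() verifies signature, expiry, audience, and issuer in one call,
    # mirroring the checks the validate-jwt policy performs at the gateway.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=API_APP_ID,
        issuer=f"https://login.microsoftonline.com/{TENANT_ID}/v2.0",
    )
```

jwt.decode raises an exception if the signature, expiry, audience, or issuer check fails, which is the failure the gateway maps to a 401 response.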

Key Concept: JWT validation is a common method to secure web APIs. API Management is designed for securing APIs and centralizes the management of policies for many APIs.

Why other options are wrong: Azure AD issues and manages the tokens; it is not the component that validates them at the API boundary. The web APIs could validate tokens themselves, but API Management is the better option because it provides a central place to configure validation policies for a large number of APIs.

In summary, the solution works as follows:

Azure AD grants permissions: The web apps are explicitly granted permissions in Azure AD to access the web APIs.

API Management validates JWTs: When a request arrives at API Management, it checks the JWT included in the request. API Management uses the application ID of the protected web API, as well as the token’s issuer, to validate that the token is valid for the API. If the JWT is valid, the request is forwarded to the web API; otherwise, the request is rejected.

Important Notes for Azure 304 Exam:

Azure AD Authentication and Authorization: Be clear on how Azure AD is used for both authentication (verifying the user or application identity) and authorization (determining what they’re allowed to access).

API Security Best Practices: Understand common API security concepts like JWT validation, scopes, and least privilege.

Azure API Management: Recognize how API Management is used to secure, manage, and publish APIs. Understand the key features of API Management, like policies.

Separation of Concerns: Note the separation of concerns: Azure AD for identity and permissions, API Management for API security enforcement and JWT validation.

Minimize Configuration: API Management policies can be applied to many APIs at once, minimizing config effort compared to implementing the validation logic in each web API individually.

Claims: Be familiar with the concept of claims in a JWT and how they are used to authorize access to API resources.

30
Q

You need to design a solution that will execute custom C# code in response to an event routed to Azure Event Grid. The solution must meet the following requirements:

The executed code must be able to access the private IP address of a Microsoft SQL Server instance that runs on an Azure virtual machine.

Costs must be minimized.

What should you include in the solution?

Azure Logic Apps in the integrated service environment
Azure Functions in the Dedicated plan and the Basic Azure App Service plan
Azure Logic Apps in the Consumption plan
Azure Functions in the Consumption plan

A

The correct answer is: Azure Functions in the Consumption plan

Explanation:

Azure Functions in the Consumption Plan:

Why it’s correct for the scenario:

Custom C# Code: Azure Functions natively supports running C# code. You can deploy custom C# logic as a Function app and trigger it from Event Grid.

Private IP Address Access: Azure Functions can integrate with an Azure virtual network, which lets the function code securely reach resources inside that network, including the private IP address of your SQL Server VM.

Cost Optimization: The Consumption plan for Azure Functions is a serverless plan that is designed to minimize costs because you only pay when the function is executed. This aligns with the requirement to keep costs down.

Event Grid Trigger: Azure Functions has a built-in trigger for Event Grid events, making the two services work well together.

How it Works: You create an Azure Function app, write your C# code, configure it to be triggered by Event Grid events, and set up VNet integration.
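The exam scenario calls for C#, but the trigger shape is the same in every Functions language. Below is a minimal sketch (in Python, to match the other examples in this document), assuming a function.json with an eventGridTrigger binding named event, plus a placeholder private IP, database name, and credentials for the SQL Server VM.

```python
# __init__.py -- function.json for this function is assumed to declare an
# "eventGridTrigger" input binding named "event".
import logging

import azure.functions as func
import pyodbc  # pip install pyodbc


def main(event: func.EventGridEvent) -> None:
    logging.info("Handling Event Grid event %s (%s)", event.id, event.event_type)

    # 10.0.0.4 stands in for the SQL Server VM's private IP; it is reachable
    # only because the Function App is integrated with that virtual network.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=10.0.0.4;DATABASE=AppDb;UID=appuser;PWD=<secret>;"
        "Encrypt=yes;TrustServerCertificate=yes"
    )
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO dbo.EventLog (EventId, Payload) VALUES (?, ?)",
        event.id, str(event.get_json()),
    )
    conn.commit()
    conn.close()
```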

Why Other Options Are Incorrect:

Azure Logic Apps in the Integrated Service Environment (ISE): While Logic Apps can be triggered by Event Grid and can integrate with virtual networks in an ISE, it is generally more expensive than Azure Functions. Also, Logic Apps are a visual orchestration tool that is primarily designed to use prebuilt connectors rather than custom C# code. Thus, Azure Functions is the more appropriate solution for this use case.

Azure Functions in the Dedicated plan and the Basic Azure App Service plan: The Dedicated plan for Azure Functions (similar to an App Service plan) provides always-on, dedicated compute resources, which cost more than the Consumption plan. Because cost minimization is an explicit requirement, this option is not appropriate.

Azure Logic Apps in the Consumption plan: While Logic Apps in the Consumption plan can be triggered by Event Grid events and can invoke Azure Functions or use built-in connectors, the primary requirement here is to execute custom C# code. The better practice is to use Azure Functions for executing custom code and Logic Apps for orchestration. In addition, Logic Apps in the Consumption plan cannot reach private virtual network addresses.

Summary:

The best fit for the requirements of this problem is Azure Functions in the Consumption plan. It allows custom C# execution and private network access via VNet integration, and it is the most cost-effective option because you pay only for execution time.

Important Notes for Azure 304 Exam:

Azure Functions Plans: Understand the differences between the Consumption plan and the Dedicated (App Service) plan, especially when to use which plan. Consumption is serverless and cost-effective; Dedicated is for consistent performance and more control.

VNet Integration: Learn how services like Azure Functions can integrate with virtual networks for secure access to private resources. Understand which services allow VNet integration and which hosting plans support it.

Event Grid Integration: Understand how to use Azure Event Grid to trigger different types of services, including Azure Functions.

Cost Optimization: Prioritize cost-effective solutions when explicitly mentioned in the requirements. Consumption-based services are often a good choice for this scenario.

Service Selection: Choose the correct Azure service based on its strengths. Azure Functions for custom code, Logic Apps for orchestration and workflow, Event Grid for event routing.

Serverless: Be familiar with serverless compute and its cost and scale benefits.

31
Q

You have an Azure SQL Database elastic pool.

You need to monitor the resource usage of the elastic pool for anomalous database activity based on historic usage patterns. The solution must minimize administrative effort.

What should you include in the solution?

a metric alert that uses a dynamic threshold
a metric alert that uses a static threshold
a log alert that uses a dynamic threshold
a log alert that uses a static threshold

A

The correct answer is: a metric alert that uses a dynamic threshold

Explanation:

Metric Alert with Dynamic Threshold:

Why it’s correct:

Resource Usage Monitoring: Metric alerts are specifically designed to monitor numerical values (metrics) emitted by Azure resources. A database’s resource usage (CPU, Data IO, Log IO, and so on) is exposed as numerical metrics.

Anomalous Activity: Dynamic thresholds are the key here. They use machine learning algorithms to establish a baseline of normal behavior based on historical data. When the current usage deviates significantly from this baseline, the alert triggers. This is ideal for detecting anomalies because it automatically adapts to changing usage patterns. This removes the human element needed for defining static thresholds, fulfilling the requirement to minimize administrative tasks.

Minimized Administrative Effort: Because the threshold is dynamic, the administrator does not have to adjust it over time.

Applicable to Elastic Pools: Metric alerts can be applied to elastic pools, allowing you to monitor the overall pool resource usage.

How it works: You configure a metric alert on the elastic pool that monitors the specific resource usage metrics you’re interested in (e.g., CPU percentage, Data IO percentage, Storage Space). You set the alert to use a dynamic threshold. The system automatically learns the typical patterns and then alerts when there is a significant deviation.
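Azure’s dynamic thresholds are built on a proprietary, seasonality-aware model, but the underlying idea can be shown with a toy rolling baseline in Python. This is an illustration of the concept only, not Azure Monitor’s actual algorithm.

```python
import statistics

def anomalies(series, window=24, sensitivity=3.0):
    """Flag points that deviate from a rolling mean by more than
    `sensitivity` standard deviations -- a toy stand-in for the
    seasonality-aware model Azure Monitor actually trains."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9
        if abs(series[i] - mean) > sensitivity * stdev:
            flagged.append(i)
    return flagged

# Example: steady ~50% CPU with one spike at the end.
cpu = [50 + (i % 5) for i in range(48)] + [95]
print(anomalies(cpu))  # -> [48]
```

The point of the example is that no one ever typed "alert above 80%": the threshold falls out of the observed history, which is what removes the manual tuning burden.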

Why Other Options are Incorrect:

Metric Alert with Static Threshold: Static thresholds require you to manually set a fixed value that triggers the alert (e.g., “alert when CPU usage is above 80%”). This is problematic because:

Difficult to Define: Determining an appropriate static threshold can be challenging because normal resource usage varies and is difficult to predict. A static threshold that is correct for one time period may be inaccurate for a different time period.

Requires Maintenance: Static thresholds require ongoing manual adjustment to stay relevant to current usage patterns. This increases administrative overhead and therefore does not fulfill the requirement to minimize administrative effort.

Log Alert with Dynamic Threshold: Log alerts are designed to monitor events stored in logs. While logs can contain valuable data about database activity, they are not effective for directly monitoring resource usage metrics (e.g., CPU, storage, IO). Moreover, dynamic thresholds are a feature of metric alerts; log alerts do not support them.

Log Alert with Static Threshold: As with a static metric alert, this requires ongoing administration to tune the threshold and is less effective than a dynamic threshold on metrics. Log alerts are also not suitable for monitoring resource utilization metrics.

In summary, a metric alert with a dynamic threshold provides the best combination of accurate anomaly detection, minimizes administrative effort, and works well with the type of data that you’re trying to analyze.

Important Notes for Azure 304 Exam:

Metric Alerts: Be familiar with using metric alerts to monitor numerical values, including the different operators and threshold types.

Dynamic Thresholds: Deeply understand dynamic thresholds, their use cases, benefits, and limitations.

Log Alerts: Understand log alerts and how they differ from metric alerts.

Appropriate Monitoring Tools: Know which Azure monitoring tool (metric alerts, log alerts, Application Insights, etc.) is best suited for a particular task.

Anomaly Detection: Learn the concept of anomaly detection based on historical data.

Elastic Pools: Understand the structure and behavior of Azure SQL Database elastic pools.

Minimize Admin Effort: Be able to choose the most efficient solution that achieves the required monitoring with the least amount of manual configuration and upkeep.

32
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Storage v2 account named storage1.

You plan to archive data to storage1.

You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.

Solution: You create an Azure Blob storage container, and you configure a legal hold access policy.

Does this meet the goal?

Yes
No

A

The Correct Answer is: No

Explanation:

Legal Holds:

Legal holds are designed to preserve data for litigation or compliance purposes, but they are not intended to enforce a fixed retention period. While a legal hold is in place the data is immutable, but an administrator with the correct permissions can remove the hold from a blob or container at any time and subsequently delete the data.

Requirement: The key requirement here is that the archived data “cannot be deleted for five years” and that it prevents even administrators from doing so. A legal hold does not provide this guarantee.

Immutable Storage: To guarantee that data cannot be deleted for five years, you need Azure’s immutable blob storage feature with a time-based retention policy, locked so that not even an administrator can remove the policy or shorten the retention interval.

Azure Storage Immutability Policies:

Time-based retention policies: You set a retention interval (here, five years) on a container or blob version; the data cannot be deleted or overwritten until the interval has elapsed.

Locking the policy: While a time-based policy is unlocked it can still be modified or removed for testing. Once locked, the policy cannot be removed and its interval cannot be shortened, which is what prevents even administrators from deleting the data. A sketch of creating and locking such a policy follows.
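A minimal sketch of that flow with the Azure Storage management SDK for Python, assuming placeholder resource names and approximating five years as 1825 days; the operation names are taken from the azure-mgmt-storage package, and the body is passed as a plain dict, which these SDKs accept in place of their model classes.

```python
# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a time-based retention policy of ~5 years on a container.
policy = client.blob_containers.create_or_update_immutability_policy(
    resource_group_name="rg-archive",
    account_name="storage1",
    container_name="archive",
    parameters={"immutability_period_since_creation_in_days": 1825},
)

# Locking is the irreversible step: once locked, the policy cannot be
# removed and the interval cannot be shortened -- even by administrators.
client.blob_containers.lock_immutability_policy(
    resource_group_name="rg-archive",
    account_name="storage1",
    container_name="archive",
    if_match=policy.etag,
)
```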

Why the Provided Solution Fails: Using only a legal hold will not prevent administrators from removing the legal hold and subsequently deleting the data.

Why the other option is not correct:

Yes: The legal hold is not the correct mechanism to achieve immutability and prevent administrator deletion of the data.

Important Notes for the Azure 304 Exam:

Legal Holds vs. Immutable Storage: Know the fundamental differences between legal holds and immutable storage policies. Understand when to use each.

Immutable Storage: Be familiar with how time-based retention policies and legal holds work in Azure Storage, and with the effect of locking a time-based policy. Understand how these prevent data deletion for a specified duration.

Administrator Privileges: Recognize that legal holds are often used within an administrative context, while immutable storage should explicitly prevent even administrators from deleting the data.

Retention Periods: Understand the use of retention periods for data immutability.

Data Compliance: Be able to apply appropriate Azure Storage solutions to meet compliance requirements related to data retention and protection.

32
Q

You plan to deploy an API by using Azure API Management

You need to recommend a solution to protect the API from a distributed denial of service (DDoS) attack.

What should you recommend?

Create network security groups (NSGs).
Enable quotas.
Strip the Powered-By response header.
Enable rate limiting

A

The correct answer is: Enable rate limiting.

Explanation:

Rate Limiting:

Why it’s the most effective for DDoS: Rate limiting is a fundamental technique for mitigating DDoS attacks. It works by restricting the number of requests a client (or IP address) can make within a given time window. This prevents an attacker from overwhelming the API with a high volume of requests, which is the core of a DDoS attack.

How it Works in API Management: Azure API Management allows you to configure rate limiting policies, specifying the maximum number of calls allowed per subscription key, IP address, or other criteria. When requests exceed the limit, API Management can return an error response, preventing the requests from reaching your backend API.
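Conceptually, APIM’s rate-limit and rate-limit-by-key policies implement the kind of per-caller sliding window sketched below. This is a toy Python illustration of the idea, not API Management’s implementation.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `calls` requests per `period` seconds per key --
    the same idea APIM's rate-limit-by-key policy applies at the gateway."""

    def __init__(self, calls: int, period: float):
        self.calls, self.period = calls, period
        self.hits: dict[str, deque] = defaultdict(deque)

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        window = self.hits[key]
        while window and now - window[0] > self.period:
            window.popleft()          # drop hits outside the window
        if len(window) >= self.calls:
            return False              # would map to HTTP 429 at the gateway
        window.append(now)
        return True

limiter = SlidingWindowLimiter(calls=100, period=60)
print(limiter.allow("203.0.113.7"))   # True until the caller exceeds 100/min
```

APIM adds counting across gateway instances and returns an HTTP 429 when the limit is exceeded, but the counting idea is the same.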

DDoS Protection: Rate limiting is most effective against application-layer and resource-exhaustion attacks, where it caps the request volume any single caller can generate before traffic reaches the backend.

Relevance to the scenario: The goal is to protect against DDoS attacks, and rate limiting directly addresses that goal.

Why Other Options are Incorrect:

Create network security groups (NSGs): NSGs are essential for controlling network traffic at the virtual network level. However, while they can filter traffic based on IP addresses and ports, they do not effectively mitigate DDoS attacks. NSGs are suited to network-level security, not application-level security against malicious HTTP requests. You cannot effectively mitigate an HTTP flood with network rules alone, and because DDoS attacks can come from many IP addresses that are difficult to enumerate in NSG rules, it is better to apply rate limiting at the application level.

Enable quotas: Quotas control the overall usage of a resource (e.g., number of API calls per month, bandwidth limits). While quotas can help manage costs, they are not designed to prevent a sudden flood of malicious requests as is typically found in a DDoS attack, nor do they address rate limiting requirements. They are more about capacity planning, not active mitigation of a DDoS attack.

Strip the Powered-By response header: Removing the Powered-By response header is a security best practice to avoid revealing the tech stack for your API, but it doesn’t do anything to protect against DDoS attacks. While useful for security, it does not mitigate the requirement to protect against a distributed denial of service.

Summary:

Rate limiting provides the best defense against DDoS attacks by preventing an overwhelming number of requests. Other methods will not be as effective.

Important Notes for the Azure 304 Exam:

DDoS Mitigation: Be able to recognize which solutions best prevent DDoS attacks.

API Management Policies: Understand the different policies available in Azure API Management and how to apply them to achieve specific goals.

Rate Limiting: Deeply understand how rate limiting works and how it can mitigate various attacks. Know how to configure rate-limiting policies in Azure API Management.

Defense in Depth: Understand that security is a layered approach. NSGs and other security controls are necessary, but rate limiting provides a strong defense at the API level.

Best Practices: When presented with a security question, always look for solutions that align with recognized security best practices.

Application-Level Security: Remember that many security threats, including DDoS, occur at the application level (HTTP requests). Focus on services and techniques that operate at this level, such as rate limiting and request filtering.

32
Q

You are designing an Azure resource deployment that will use Azure Resource Manager templates. The deployment will use Azure Key Vault to store secrets.

You need to recommend a solution to meet the following requirements:

Prevent the IT staff that will perform the deployment from retrieving the secrets directly from Key Vault.

Use the principle of least privilege.

Which two actions should you recommend? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions.
From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment.
Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions.
Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.
Assign the Key Vault Contributor role to the IT staff.

A

Correct Answers:

From Access policies in Key Vault, enable access to the Azure Resource Manager for template deployment.

Assign the IT staff a custom role that includes the Microsoft.KeyVault/Vaults/Deploy/Action permission.

Explanation:

Enable Access for Azure Resource Manager in Key Vault Access Policy:

Why it’s correct: This is a critical step. When you deploy resources via ARM templates, the Azure Resource Manager service needs permission to retrieve secrets from Key Vault. By enabling this specific access in the Key Vault Access Policies, you allow ARM to get the secrets during deployment but you do not grant direct access to individual users.

How it Works: You enable the Key Vault option “Azure Resource Manager for template deployment” (the enabledForTemplateDeployment property on the vault). This allows Azure Resource Manager to retrieve secrets that a template references during deployment, without granting any user direct read access to those secrets.

Least Privilege: It avoids granting direct secret read permissions to the IT staff, thus minimizing the privilege they are granted for the deployment.

Assign IT Staff a Custom Role with Microsoft.KeyVault/Vaults/Deploy/Action Permission:

Why it’s correct: The Microsoft.KeyVault/Vaults/Deploy/Action permission allows the user to deploy ARM templates that reference secrets from Key Vault, but it does not grant the user the permission to view or modify the secrets directly. This precisely meets the requirement to prevent direct access to the secret.

How it Works: You create a custom role in Azure RBAC with just this specific permission. You then assign this role to the IT staff responsible for deployment.

Least Privilege: By granting only this permission, you follow the principle of least privilege. IT staff can deploy templates that use the secrets, but they cannot view, edit, or delete the secrets in Key Vault, fulfilling the requirements of the question.
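A minimal sketch of defining such a custom role with the Azure authorization management SDK for Python; the subscription ID is a placeholder, and the payload shape is an assumption based on the azure-mgmt-authorization package (a plain dict is accepted in place of the model classes).

```python
# pip install azure-identity azure-mgmt-authorization
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUBSCRIPTION = "<subscription-id>"
scope = f"/subscriptions/{SUBSCRIPTION}"

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION)

role = client.role_definitions.create_or_update(
    scope=scope,
    role_definition_id=str(uuid.uuid4()),  # custom roles are keyed by GUID
    role_definition={
        "role_name": "Key Vault Template Deployer",
        "description": "Can reference Key Vault secrets in ARM template "
                       "deployments without reading the secret values.",
        "permissions": [
            {"actions": ["Microsoft.KeyVault/vaults/deploy/action"]},
        ],
        "assignable_scopes": [scope],
    },
)
print(role.id)
```

Assigning this role to the IT staff grants deploy-time secret referencing and nothing else.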

Why Other Options Are Incorrect:

Create a Key Vault access policy that allows all get key permissions, get secret permissions, and get certificate permissions: This is incorrect because it grants far too many permissions. It would allow the IT staff to read all secrets directly, violating both the first requirement and the principle of least privilege. A policy this broad should not be granted to any user account, least of all the staff performing deployments, because it exposes every secret value.

Create a Key Vault access policy that allows all list key permissions, list secret permissions, and list certificate permissions: While less powerful than the option above, this is still incorrect because it allows users to see the list of secrets that are in the Key Vault, which is still more than is required. This does violate the least privilege rule as IT staff do not need to list secrets to deploy ARM templates.

Assign the Key Vault Contributor role to the IT staff: This role grants excessive permissions to the Key Vault, including the ability to read, create, modify, and delete secrets. This violates the principle of least privilege and is not correct because IT staff only need to be able to trigger an ARM deployment, and not manipulate the contents of the Key Vault.

Summary:

The correct approach is to grant the Azure Resource Manager access via Key Vault access policy, and grant IT staff a custom role with minimal necessary permissions (i.e. Microsoft.KeyVault/Vaults/Deploy/Action). This combination ensures that the required deployment occurs without granting excessive access to the staff performing the deployment.

Important Notes for Azure 304 Exam:

Key Vault Access Policies: Understand how access policies control access to Key Vault resources (secrets, keys, certificates).

Azure Resource Manager Access: Know how to grant the Azure Resource Manager service access to Key Vault secrets.

Azure RBAC: Deeply understand how to use Azure RBAC (custom roles and built-in roles) to manage permissions.

Least Privilege: Always adhere to the principle of least privilege when designing security solutions. Grant only the necessary permissions.

ARM Template Deployment: Be familiar with how ARM templates utilize Key Vault for secure parameterization.

Custom Roles: Know how to define and assign custom roles in Azure RBAC, and when this is the better option.

33
Q

Your company plans to publish APIs for its services by using Azure API Management.

You discover that service responses include the AspNet-Version header.

You need to recommend a solution to remove AspNet-Version from the response of the published APIs.

What should you include in the recommendation?

a new product
a modification to the URL scheme
a new policy
a new revision

A

The correct answer is: a new policy

Explanation:

API Management Policies:

Why it’s the right solution: Azure API Management policies are designed to modify request and response behavior. You can use policies to control a wide range of operations, including header manipulation, rate limiting, caching, authentication, and more. Specifically, you can use a set-header policy to remove the header.

How it Works: You create an outbound policy that removes the AspNet-Version header from the HTTP response. The policy can be applied at various scopes in API Management: global, product, API, or operation.
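A minimal sketch of applying such an outbound policy at API scope with the API Management SDK for Python; the subscription, resource group, service, and API names are placeholders, the SDK call shape is an assumption based on the azure-mgmt-apimanagement package, and the embedded XML follows the standard APIM policy document format.

```python
# pip install azure-identity azure-mgmt-apimanagement
from azure.identity import DefaultAzureCredential
from azure.mgmt.apimanagement import ApiManagementClient

# The <outbound> section runs on the response path, which is where the
# header has to be stripped.
POLICY_XML = """
<policies>
  <inbound><base /></inbound>
  <backend><base /></backend>
  <outbound>
    <base />
    <set-header name="AspNet-Version" exists-action="delete" />
  </outbound>
  <on-error><base /></on-error>
</policies>
"""

client = ApiManagementClient(DefaultAzureCredential(), "<subscription-id>")
client.api_policy.create_or_update(
    resource_group_name="rg-apim",
    service_name="contoso-apim",
    api_id="orders-api",
    policy_id="policy",  # APIM policies use the fixed id "policy"
    parameters={"value": POLICY_XML, "format": "rawxml"},
)
```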

Direct Header Modification: This is the most direct and efficient way to solve the problem. It avoids changing API structure or deployment and provides a centralized mechanism for removing the header.

Relevance to the scenario: The main goal is to remove the specific header. A policy is the most appropriate tool for header manipulation.

Why Other Options are Incorrect:

A new product: Products in API Management are used to group and manage APIs for different audiences and usage tiers. While you could apply a policy to a product, creating a new product just to remove a header is overkill. It’s not the right tool for this specific task, and creates unnecessary administrative overhead.

A modification to the URL scheme: Modifying the URL scheme changes how the API is addressed and has nothing to do with headers. It would be a far more invasive change than the task requires.

A new revision: API revisions are used to stage and test changes to an API. Creating a revision does not by itself remove a header, and it is far more work than necessary for such a minor change.

In Summary:

The most efficient way to remove the AspNet-Version header from API responses is to use an API Management policy. The policy allows you to manipulate headers easily, and is a specific tool designed for this task.

Important Notes for the Azure 304 Exam:

API Management Policies: You MUST be familiar with API Management policies: where to use them, what they can do (including header management), and how to use them.

Outbound Policies: Understand the difference between inbound and outbound policies, and when to use each. (You should use an outbound policy when modifying responses from the API)

HTTP Headers: Be familiar with HTTP headers and how they’re used in API communication.

Security Best Practices: Removing unnecessary headers (such as the AspNet-Version header) is considered a security best practice, as it can disclose the underlying tech stack of the API.

API Management Features: Understand all the key features of API Management: policies, products, subscriptions, revisions, gateways, etc.

Appropriate Solution: Choosing the most appropriate solution (a policy) instead of a more complex one (new product, new revision, changing URL) for a specific problem, is an important skill to have for the exam.

33
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains a group named Group1. Group1 contains all the administrative user accounts.

You discover several login attempts to the Azure portal from countries where administrative users do NOT work.

You need to ensure that all login attempts to the Azure portal from those countries require Azure Multi-Factor Authentication (MFA).

Solution: Create an Access Review for Group1.

Does this solution meet the goal?

Yes
No

A

The correct answer is: No

Explanation:

Access Reviews:

What they do: Access reviews in Azure AD are designed to periodically review and recertify user access to resources. The focus is on reviewing who has access, and whether they should continue to have it. They are used to ensure that users have the correct access rights over time and can be used to automate the removal of unused or inappropriate access.

What they don’t do: Access reviews do not enforce MFA or modify authentication policies. They are not the right tool for adding conditional access policy rules.

Relevance to the scenario: Access reviews do not meet the requirement to mandate MFA based on login location.

Conditional Access Policies:

Why needed: To enforce MFA based on the login location, you must use Azure AD Conditional Access policies. These policies allow you to set conditions for access, such as location, device, application, and more.

How it would work: You would create a conditional access policy that applies to all users in Group1 that are accessing the Azure portal. You would add a condition for locations that are not allowed, and enforce MFA if login attempts originate from these locations.

Direct solution to the problem: Conditional access policies allow the administrator to directly control the authentication behavior based on specific conditions.
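A minimal sketch of creating such a policy via the Microsoft Graph REST API, using Python’s requests library; the group ID, named-location ID, and application ID are placeholders, and token acquisition is omitted.

```python
# pip install requests  (token acquisition, e.g. via azure-identity, omitted)
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"  # placeholder

policy = {
    "displayName": "Require MFA for admins from disallowed countries",
    "state": "enabled",
    "conditions": {
        "users": {"includeGroups": ["<Group1-object-id>"]},
        "applications": {"includeApplications": ["<azure-portal-app-id>"]},
        "locations": {
            "includeLocations": ["All"],
            # Named location covering the countries where admins DO work.
            "excludeLocations": ["<trusted-named-location-id>"],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
)
resp.raise_for_status()
```

Setting "state" to "enabledForReportingButNotEnforced" first is a common way to trial such a policy before enforcing it.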

Why other option is not correct:

Yes: The access review is not designed to meet the goal of enforcing MFA based on specific user or location combinations.

In Summary:

Creating an access review for Group1 will not enforce MFA for login attempts from specified countries. Access reviews are designed to manage who has access, not the conditions under which they access resources. You need to use Azure AD Conditional Access for that purpose.

Important Notes for the Azure 304 Exam:

Conditional Access: You must understand how Azure AD Conditional Access works, including how to create policies, configure conditions, and enforce access controls.

Access Reviews: You must understand the use of Azure AD Access Reviews and their purpose. Know when to use access reviews as opposed to conditional access policies.

Multi-Factor Authentication (MFA): Know how to enforce MFA for different user scenarios.

Location-Based Access: Be familiar with how to use locations to define conditions for conditional access policies.

Azure AD Security: Know the various ways to protect your Azure AD environment. Be familiar with the different security features available in Azure AD.

Correct Solution: Be able to choose the correct solution for a problem by understanding what tools are designed for which tasks.

Key Takeaway: The primary takeaway from this question is that Access Reviews are for access governance (who has access), and Conditional Access is for controlling authentication conditions (how users access resources).

33
Q

You have an Azure Active Directory (Azure AD) tenant and Windows 10 devices.

You configure a conditional access policy as shown in the exhibit. (Click the Exhibit tab.)

MFA Policy

Name: MFA policy
Assignments

Users and groups: All users included and specific…
Cloud apps or actions: All cloud apps
Conditions: 0 conditions selected
Access controls

Grant: 2 controls selected
Session: 0 controls selected
Enable policy: Off

Grant

Select the controls to be enforced:
☐ Block access
☑ Grant access
☑ Require multi-factor authentication
☑ Require device to be marked as compliant
☑ Require Hybrid Azure AD joined device
☑ Require approved client app
☑ Require app protection policy (Preview)
For multiple controls:

☐ Require all the selected controls
☑ Require one of the selected controls
Warning: Don’t lock yourself out! Make sure that your device is Hybrid Azure AD Joined.

What is the result of the policy?

A. All users will always be prompted for multi-factor authentication (MFA).
B. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are NOT joined to Azure AD.
C. All users will be able to sign in without using multi-factor authentication (MFA).
D. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are joined to Azure AD.

A

The correct answer is: C. All users will be able to sign in without using multi-factor authentication (MFA).

Explanation:

Here’s a breakdown of why this is the case:

Policy is Disabled: The most important thing to notice is that the Enable policy switch is set to Off. This means the conditional access policy is not active and has no effect on the sign-in process.

Conditional Access Policies must be Enabled: For a conditional access policy to have any effect, it must be enabled. Since this is not the case, the specified users will not be prompted to use multi-factor authentication (MFA).

Why the Other Options are Incorrect:

A. All users will always be prompted for multi-factor authentication (MFA). This is incorrect because the policy is disabled. Even if the policy were enabled, the grant setting requires only one of the selected controls, so a sign-in from, for example, a compliant or Hybrid Azure AD joined device would satisfy the policy without an MFA prompt.

B. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are NOT joined to Azure AD. This is incorrect for the same reason: the policy is disabled. This answer also assumes a “NOT joined device” condition that the policy does not contain.

D. Users will be prompted for multi-factor authentication (MFA) only when they sign in from devices that are joined to Azure AD. Again, the policy is disabled, and the policy contains no condition based on joined devices.

In Summary:

Because the policy is disabled, the configured requirements for multi-factor authentication (MFA) will not be enforced. All users will be able to sign in without being prompted for multi-factor authentication.

Important Notes for Azure 304 Exam:

Conditional Access Policy Status: Always pay close attention to whether a conditional access policy is enabled or disabled. This is a critical detail often overlooked.

Conditions: Remember that conditional access policies are enforced only when the specified conditions are met. If there are no conditions, then the policy will apply to all sign-in attempts.

Controls: The “grant” settings in a conditional access policy define the requirements that need to be met for access.

“Require one of the selected controls” vs. “Require all the selected controls”: The “Require one of the selected controls” setting means that the user needs to satisfy at least one of the selected controls. “Require all the selected controls” means that the user must satisfy every one of them. The difference is easy to see in the toy evaluation below.
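A toy Python illustration of the two modes, using the controls from this exhibit (an illustration only, not how Azure AD’s policy engine is implemented):

```python
# Which controls a hypothetical sign-in satisfies.
satisfied = {
    "mfa": False,
    "compliant_device": True,
    "hybrid_joined": False,
    "approved_client_app": False,
    "app_protection_policy": False,
}

require_one = any(satisfied.values())  # "Require one of the selected controls"
require_all = all(satisfied.values())  # "Require all the selected controls"

print(require_one)  # True  -> granted without MFA (compliant device suffices)
print(require_all)  # False -> access denied until every control is met
```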

Policy Evaluation: Understand that all applicable conditional access policies are evaluated together and their controls are combined; a sign-in must satisfy every policy that applies to it. There is no first-match ordering.

Testing and Planning: It is highly recommended to test conditional access policies, and have a plan in place if you were to lock yourself out of the system.

Azure AD Security: Be proficient with Azure AD conditional access as it is an important part of securing your Azure AD environment.

34
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an on-premises Hyper-V cluster that hosts 20 virtual machines. Some virtual machines run Windows Server 2016 and some run Linux.

You plan to migrate the virtual machines to an Azure subscription.

You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.

Solution: You recommend implementing an Azure Storage account and then running AzCopy.

Does this meet the goal?

Yes
No

A

The correct answer is: No

Explanation:

AzCopy:

What it does: AzCopy is a command-line utility for copying data to and from Azure Storage. It’s great for bulk transfers, but it doesn’t provide the mechanisms for live replication or migration of virtual machine disks.

Why it’s unsuitable: When you use AzCopy to copy a VHD file (the virtual hard disk file) from Hyper-V to Azure Storage, the virtual machine will need to be powered down to ensure data consistency. This does not meet the requirement of maintaining availability during the data transfer.

Data transfer method: AzCopy is a copy tool, not a replication tool, meaning that it will create a duplicate of the data at the point in time you copy it. It is not designed to be a live synchronization method.

Requirement of “Availability”: The critical requirement here is that “the virtual machines remain available during the migration of the disks”. AzCopy does not provide live transfer or synchronization, and will require that you shut down your VM before the transfer can occur.

Azure Migrate for Live Migration: For migrating virtual machines with minimal downtime, you must use services like Azure Migrate that provide live migration functionality through agents or replication features. Azure Migrate specifically provides a method to replicate and migrate Hyper-V virtual machines to Azure with minimal disruption.

Why the Other Option is Incorrect:

Yes: The provided solution does not satisfy the requirement of keeping the VMs available during the transfer and therefore this is the wrong answer.

In Summary:

AzCopy is a useful tool for transferring data but not for live migration or replication. AzCopy does not allow the VMs to remain available, which is a core requirement, so the solution does not meet the goal.

Important Notes for the Azure 304 Exam:

Azure Migrate: Be familiar with the various migration tools, especially Azure Migrate. Understand how it can be used to migrate virtual machines (including Hyper-V) to Azure with minimal downtime using live replication technologies.

Replication vs. Copy: Understand the difference between replicating data and simply copying it. Replication implies an ongoing synchronization process.

Migration Methods: Understand different methods of migrating virtual machines. Know when a full migration is required versus a live migration.

AzCopy: Be familiar with AzCopy’s role for data transfers, but understand its limitations for live migration scenarios.

Virtual Machine Availability: Always prioritize keeping virtual machines available during a migration scenario. Understand how to use Azure tools to meet that objective.

Data Consistency: Understand how powering down virtual machines before copying data can help ensure data consistency.

Key Takeaway: You must understand the specific tool for the specific task. AzCopy is great for data transfer, but for migrating live VM disks, you should use Azure Migrate.

34
Q

CORRECT TEXT

You have an Azure subscription named Subscription1 that is linked to a hybrid Azure Active Directory (Azure AD) tenant.

You have an on-premises datacenter that does NOT have a VPN connection to Subscription1. The datacenter contains a computer named Server1 that has Microsoft SQL Server 2016 installed. Server1 is prevented from accessing the internet.

An Azure logic app named LogicApp1 requires write access to a database on Server1.

You need to recommend a solution to provide LogicApp1 with the ability to access Server1.

What should you recommend deploying on-premises and in Azure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area
On-premises:
A Web Application Proxy for Windows Server
An Azure AD Application Proxy connector
An On-premises data gateway
Hybrid Connection Manager
Azure:
A connection gateway resource
An Azure Application Gateway
An Azure Event Grid domain
An enterprise application

A

Correct Answers:

On-premises: An On-premises data gateway

Azure: A connection gateway resource

Explanation:

On-premises Data Gateway:

Why it’s correct: The On-premises Data Gateway is the essential component for providing secure access from cloud services like Logic Apps to on-premises data sources. It acts as a bridge between your Azure environment and your private network.

How it Works: You install the gateway on a computer within your on-premises network, and then the gateway is registered with the Azure service that needs to access your on-premises data. The gateway manages the connection securely and provides a communication channel between Azure and your on-premises environment without exposing your on-premises network to the public internet.

Relevance to the scenario: It directly addresses the need for LogicApp1 to reach the on-prem SQL Server.

Connection Gateway Resource (Azure):

Why it’s correct: In Azure, you will require a connection gateway that is used to register your on-prem data gateway. This Azure resource provides a bridge between the cloud and the on-premises environment. This resource is also used when configuring your logic app to access the on-premises resource.

How it Works: When you connect to an on-premises data source in Logic Apps (or other services like Power BI), you’ll select the connection gateway from a drop down list. Logic Apps use this Azure resource to send queries to the on-premises data gateway to be processed in your private network, enabling secure access to the SQL Server database.

Relevance to the scenario: This Azure component works in conjunction with the on-premises data gateway to establish the connection and is a required part of the access model.

Why Other Options Are Incorrect:

On-premises Options:

A Web Application Proxy for Windows Server: WAP is used for publishing web applications to the internet, not for connecting cloud services to on-premises databases. It is typically used for HTTP proxying and authentication, not direct database access from the cloud.

An Azure AD Application Proxy connector: This is used to publish internal web applications, not access databases, and is not designed for data sources. It also does not create the secure channel between Azure and the on-prem network.

Hybrid Connection Manager: Hybrid Connection Manager supports Azure App Service Hybrid Connections, which let App Service apps reach individual on-premises TCP endpoints. It is not the mechanism that Logic Apps connectors use to reach on-premises data sources; that role belongs to the on-premises data gateway.

Azure Options:

An Azure Application Gateway: This is a web traffic load balancer for HTTP traffic, not used for connecting to on-premises data sources. Application gateway is used to proxy HTTP traffic from the public internet, and is not used as an intermediary for connecting cloud services to on-premise data.

An Azure Event Grid domain: Event Grid is a message broker used for event-driven architectures, not for database access. While an event could trigger a logic app, this service is not relevant to this use case.

An enterprise application: An enterprise application in Azure AD is a representation of a cloud service or an application for authentication, not used for direct data connectivity. It also does not create the secure channel between Azure and the on-prem network.

In Summary:

The correct combination is the on-premises data gateway and a connection gateway in Azure. The on-premises data gateway acts as a secure bridge from Azure to on-premises data, while the connection gateway provides the link to this bridge from Azure.

Important Notes for Azure 304 Exam:

On-premises Data Gateway: You MUST know what this is, how it works, and when to use it for cloud-to-on-premises connectivity for services like Logic Apps, Power BI, and others.

Hybrid Connectivity: Understand the challenges of hybrid architectures, including connectivity, security, and data access.

Logic Apps: Be familiar with Logic Apps and their ability to connect to a variety of data sources, including on-premises resources.

Azure Network Services: Understand different Azure networking services and be able to pick the correct one for the job, such as Virtual Network Gateways, Application Gateway, and Hybrid Connections.

Security: Be familiar with the various security methods available and best practices for securely connecting to on-premise resources.

Data Integration: Be able to connect and query various types of data sources in the cloud.

34
Q

You have 70 TB of files on your on-premises file server.

You need to recommend a solution for importing data to Azure. The solution must minimize cost.

What Azure service should you recommend?

Azure StorSimple
Azure Batch
Azure Data Box
Azure Stack

A

The correct answer is: Azure Data Box

Explanation:

Azure Data Box:

Why it’s the best choice for large data imports: Azure Data Box is a physical appliance that Microsoft ships to your location. You copy your data to the device, ship it back to Microsoft, and they upload the data to your Azure Storage account. This method is optimized for large data transfers like 70 TB, where network transfer can be slow and costly.

Cost Optimization: Azure Data Box is designed for cost efficiency. It avoids the need for significant bandwidth upgrades, which can be very expensive for a dataset as large as 70 TB. You pay for the Data Box device usage and for copying the data into Azure Storage; however, this is typically cheaper than transferring such a volume over the internet.

Relevance to the scenario: The scenario is large data transfer and minimizing costs. Data Box is the best option.

Data Transfer: You copy the data from your on-premises file server to the Data Box device and ship the device back to Microsoft, which uploads the data to your Azure Storage account.

Why Other Options are Incorrect:

Azure StorSimple: Azure StorSimple is a hybrid cloud storage solution that primarily focuses on tiered storage, cloud backup, and disaster recovery. While it can store large volumes of data in Azure, its primary purpose is not for bulk data migration like this scenario, but rather for active, tiered storage and backups. Additionally, StorSimple is now end of life and should not be used.

Azure Batch: Azure Batch is a service for running large-scale parallel compute jobs, not for data transfers from on-premises environments. It’s used for processing data in the cloud, not getting data into the cloud.

Azure Stack: Azure Stack is an on-premises extension of Azure, designed to run Azure services within your own data center. It’s not a tool for data import. Additionally, Azure Stack is a complex solution for running local private cloud. It would not be appropriate for this situation.

In Summary:

Azure Data Box is the most cost-effective and practical option for transferring a large dataset like 70 TB from an on-premises file server to Azure. It avoids the high costs and slow speeds associated with network data transfer for that volume.

Important Notes for Azure 304 Exam:

Data Import Options: Be familiar with the different ways to import data into Azure, including Azure Data Box, Azure Import/Export service, AzCopy, and direct network transfers.

Azure Data Box Family: Understand the different types of Data Box devices (Data Box Disk, Data Box, Data Box Heavy), and when to use each based on the amount of data and the transfer speed requirements.

Cost Optimization: Be able to choose the most cost-effective solution for different types of data transfer based on the volume of data and the limitations of internet connectivity.

Bandwidth Limitations: Understand the limitations of internet bandwidth and when a physical transfer method (Data Box) is more appropriate than a network transfer.

Data Transfer Scenarios: Know which Azure services are designed for data migration vs. other purposes (e.g., StorSimple for hybrid storage, Batch for computation).

Service Selection: Be able to choose the best Azure service for a given task, especially the appropriate data transfer service based on requirements and scenario constraints.

34
Q

Your company has several Azure subscriptions that are part of a Microsoft Enterprise Agreement.

The company’s compliance team creates automatic alerts by using Azure Monitor.

You need to recommend a solution to automatically recreate the alerts in the new Azure subscriptions that are added to the Enterprise Agreement.

What should you include in the recommendation?

Azure Automation runbooks
Azure Log Analytics alerts
Azure Monitor action groups
Azure Resource Manager templates
Azure Policy

A

The correct answer is: Azure Policy

Explanation:

Azure Policy:

Why it’s the best fit: Azure Policy is the ideal solution for enforcing compliance and standardization across Azure subscriptions. It allows you to define policies that automatically deploy resources (including alerts) to new subscriptions as they are added to the management scope.

How it works for alerts: You can create an Azure Policy definition that specifies the configuration of your desired Azure Monitor alerts. Then, you assign this policy at the management group level (or at the root of your enterprise agreement). Any new subscription created or moved into the management group will automatically have the policy applied.
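A minimal sketch of registering such a definition with the Python management SDK, using a skeletal deployIfNotExists rule. The names, the trigger resource type, and the empty deployment template are placeholders to keep the sketch short; the compliance team’s actual alert resources would fill the template.

```python
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

POLICY_RULE = {
    "if": {"field": "type", "equals": "Microsoft.Sql/servers/databases"},
    "then": {
        "effect": "deployIfNotExists",
        "details": {
            "type": "Microsoft.Insights/metricAlerts",
            "roleDefinitionIds": [
                "/providers/Microsoft.Authorization/roleDefinitions/"
                "b24988ac-6180-42a0-ab88-20f7382dd24c"  # built-in Contributor
            ],
            "deployment": {
                "properties": {
                    "mode": "incremental",
                    "template": {
                        "$schema": "https://schema.management.azure.com/"
                                   "schemas/2019-04-01/deploymentTemplate.json#",
                        "contentVersion": "1.0.0.0",
                        # The compliance team's alert resources go here.
                        "resources": [],
                    },
                }
            },
        },
    },
}

client = PolicyClient(DefaultAzureCredential(), "<subscription-id>")
client.policy_definitions.create_or_update(
    policy_definition_name="deploy-compliance-alerts",
    parameters={
        "policy_type": "Custom",
        "mode": "All",
        "display_name": "Deploy compliance alerts if missing",
        "policy_rule": POLICY_RULE,
    },
)
```

In practice you would create and assign the definition at the management group that receives new subscriptions, so the policy takes effect automatically as subscriptions arrive.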

Automatic Deployment: When the policy is applied to a new subscription, it will automatically deploy the required Azure Monitor alerts, eliminating manual configuration.

Compliance Enforcement: Azure Policy ensures that new subscriptions are compliant to the required alert configurations.

Relevance to the scenario: The requirement is automated deployment of alerts to new subscriptions, and Azure Policy is the best tool to achieve this.

Why Other Options are Incorrect:

Azure Automation runbooks: While Automation runbooks can be used to deploy resources, they are not the best choice for automated enforcement. You’d need to write and trigger the runbook yourself, or create a complicated external system to trigger this, so it does not fulfill the “automatic” requirement. This is not the best practice for compliance.

Azure Log Analytics alerts: Log Analytics alerts are a type of alert that can be deployed, but they don’t provide the mechanism for automatically creating the alerts. Log Analytics is the alert source, not the deployment mechanism.

Azure Monitor action groups: Action groups are used to define what happens when an alert is triggered (e.g., send an email), but they do not provide the means to create the alerts. Action groups are not the correct service to solve this use case.

Azure Resource Manager templates: ARM templates are used to define infrastructure as code. While you could use ARM templates to create alerts, you would still need a mechanism to automatically deploy them to new subscriptions. ARM templates alone do not handle the requirement to deploy to all new subscriptions automatically.

In Summary:

Azure Policy is the correct solution because it provides the best way to automatically and consistently deploy the Azure Monitor alerts to every new subscription within the scope of an Azure Management Group, or the entire Enterprise Agreement. This meets the requirement to ensure automatic deployment and management.

Important Notes for the Azure 304 Exam:

Azure Policy: You must be very comfortable with Azure Policy: how to define policies, how to assign policies, and how to use them to enforce compliance and standards.

Management Groups: Understand the use of Azure Management Groups for organizing subscriptions and managing policies at scale.

Azure Monitor: Be familiar with the various Azure Monitor capabilities, including alerts, metrics, logs, and action groups.

Infrastructure as Code (IaC): While ARM templates can be used to provision resources, they’re not the best tool for automated enforcement across an organization.

Automatic Resource Deployment: Be able to choose a solution to provide the automated deployment of services, and prioritize tools that automate the compliance configuration.

Compliance: Understand the importance of compliance and standardization in enterprise environments, and know that Azure Policy is the tool to solve most compliance problems.

Service Selection: Choose the right Azure service for the job. Azure Policy for compliance and automated enforcement, and other services for resource deployment, and for different types of alerts.

35
Q

Note: This question is part of series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.

Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.

You need to enable single sign-on (SSO) for company users.

Solution: Configure an AD DS server in an Azure virtual machine (VM). Configure bidirectional replication.

Does the solution meet the goal?

Yes
No

A

The correct answer is: No

Explanation:

The Problem: The goal is to enable SSO for cloud apps from corporate desktops connected to the local network. The provided solution is too complex and does not directly solve this.

The Provided Solution (Azure VM with AD DS):

What it does: This solution involves creating a domain controller in an Azure VM and setting up bidirectional replication with the on-premises AD DS. While this provides a backup domain controller and can handle authentication for Azure-based resources, it does not directly enable SSO from the on-premises desktops to cloud applications.

Why it doesn’t meet the goal: Having a domain controller in Azure does not inherently enable SSO between on-premises computers and cloud-based applications. The on-premises computers still authenticate against the on-premises AD DS domain, so there would be no SSO for the cloud apps if this were implemented.

Required Solution: Seamless SSO (Azure AD Connect) or Password Hash Synchronization

Why it’s correct: To achieve SSO with cloud applications, you need to have some method to synchronize authentication information from on-premises AD to Azure AD, and then use a technology like seamless SSO to automatically authenticate users without requiring a password prompt. The most commonly used technologies to achieve this are:

Password Hash Synchronization: Synchronizes a hash of each user’s on-premises AD password to Azure AD, so users sign in to cloud applications with the same password they use on-premises.

Pass-through Authentication: An on-premises agent validates each sign-in directly against your on-premises AD, so password hashes never need to be stored in Azure AD.

Seamless SSO: Works on top of either of the two options above and provides a zero-touch experience: users on domain-joined corporate desktops are signed in to Azure AD applications without entering their username and password.
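The end effect on an application can be illustrated with the MSAL Python library: once hybrid SSO is in place, token acquisition succeeds silently instead of prompting. This is only a sketch of the outcome, not how Seamless SSO is configured (that happens in Azure AD Connect); the client ID and tenant shown are hypothetical placeholders.

import msal  # pip install msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",  # hypothetical app registration
    authority="https://login.microsoftonline.com/contoso.com",  # hypothetical tenant
)

scopes = ["User.Read"]
result = None

accounts = app.get_accounts()
if accounts:
    # With SSO (or any cached session) in place, this succeeds without
    # prompting the user for credentials.
    result = app.acquire_token_silent(scopes, account=accounts[0])

if not result:
    # Without SSO, the user falls back to an interactive sign-in prompt.
    result = app.acquire_token_interactive(scopes)

print("token acquired" if "access_token" in result else result.get("error"))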

Why other options are not correct:

Yes: The solution does not meet the requirement of providing seamless single sign-on. A domain controller in Azure is not enough to achieve SSO for cloud apps from on-premises desktops.

In summary:

While an Azure-based domain controller and bidirectional replication are useful for redundancy, they do not address the actual requirement of SSO for cloud applications from on-premises desktops. You need Azure AD Connect with a method such as Seamless SSO for this.

Important Notes for the AZ-304 Exam:

Azure AD Connect: You MUST know how Azure AD Connect works, including the synchronization options.

Single Sign-On (SSO): Be familiar with SSO and how it enables users to access multiple applications with a single set of credentials.

Seamless SSO: Understand how Seamless SSO works and what it requires.

Password Hash Synchronization and Pass-through Authentication: Understand the options for how Azure AD will authenticate users, and the differences between them.

Hybrid Identity: Understand the different components of a hybrid identity model.

On-Premises AD DS: Understand what it is, and how to connect it to Azure AD.

Correct Tool: Be able to choose the correct tool and service for the job. Be able to map a requirement to a solution, and to differentiate between solutions that seem similar.

Key Takeaway: Be able to distinguish between a solution that replicates AD data and a solution that provides SSO to cloud apps; they are distinct requirements.

36
Q

HOTSPOT

You configure the Diagnostics settings for an Azure SQL database as shown in the following exhibit.

*Name
Diags

[ ] Archive to a storage account
[ ] Stream to an event hub
[x] Send to Log Analytics

Log Analytics
OMSWKspace1

LOG
[x] SQLInsights
[x] AutomaticTuning
[x] QueryStoreRuntimeStatistics
[x] QueryStoreWaitStatistics
[x] Errors
[x] DatabaseWaitStatistics
[ ] Timeouts
[ ] Blocks
[x] Deadlocks
[ ] Audit
[x] SQLSecurityAuditEvents

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

Answer Area
To perform real-time reporting by using
Microsoft Power BI, you must first
[answer choice]
clear Send to Log Analytics
clear SQLInsights
select Archive to a storage account
select Stream to an event hub
Diagnostics data can be reviewed in
[answer choice]
Azure Analysis Services
Azure Application Insights
Azure SQL Analytics
Microsoft SQL Server Analysis Services (SSAS)
SQL Health Check

A

Statement 1: To perform real-time reporting by using Microsoft Power BI, you must first [select Stream to an event hub].

Why this is correct:

Real-Time Data: Power BI is excellent for data visualization, but real-time reporting requires a real-time data source. Streaming the diagnostics data to an event hub is the method that produces such a stream.

Event Hub Integration: Event Hubs is designed for high-throughput, real-time data ingestion. Power BI can consume the incoming stream (typically via Azure Stream Analytics) for real-time reporting, making this the right first step.
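For context, a minimal sketch of reading that diagnostics stream with the azure-eventhub Python package, assuming Stream to an event hub has been enabled; the connection string and hub name are hypothetical placeholders. In a real-time Power BI pipeline, Azure Stream Analytics would normally sit in this consumer role.

import json
from azure.eventhub import EventHubConsumerClient  # pip install azure-eventhub

# Hypothetical connection details for the event hub receiving diagnostics.
CONN_STR = "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=..."
HUB_NAME = "sqldiagnostics"

def on_event(partition_context, event):
    # Azure diagnostic logs arrive as JSON with a top-level "records" array.
    for record in json.loads(event.body_as_str()).get("records", []):
        print(record.get("category"), record.get("operationName"))

client = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group="$Default", eventhub_name=HUB_NAME)

with client:
    # Blocks and invokes on_event for each message as it arrives.
    client.receive(on_event=on_event, starting_position="-1")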

Other Options:

Clear Send to Log Analytics would disable logging to Azure Log Analytics; it does not enable real-time reporting.

Clear SQLInsights merely disables one log category sent to Log Analytics; it does not enable real-time reporting either.

Select Archive to a storage account stores the data, but it is not suitable for real-time Power BI reporting: the data is written to files, not a stream, so Power BI cannot consume it in real time.

Important AZ-304 Exam Note: Understand the different methods for real-time data ingestion, and be familiar with which Azure services can consume a stream of data from an event hub.

Statement 2: Diagnostics data can be reviewed in [Azure SQL Analytics].

Why this is correct:

Azure SQL Analytics: This pre-built monitoring solution is specifically designed to analyze the diagnostics data that SQL databases send to Log Analytics, providing dashboards to visualize and monitor SQL Database performance.

Log Analytics Integration: Because the diagnostic settings send data to Log Analytics, that data is stored in the workspace and available for Azure SQL Analytics to consume.
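The same workspace data can also be queried directly with KQL. A minimal sketch using the azure-monitor-query Python package, assuming classic (AzureDiagnostics) log collection; the workspace ID is a hypothetical placeholder.

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient  # pip install azure-monitor-query

client = LogsQueryClient(DefaultAzureCredential())

# Classic diagnostics land in the AzureDiagnostics table; this pulls the
# most recent deadlock records, one of the categories enabled in the exhibit.
query = """
AzureDiagnostics
| where Category == "Deadlocks"
| project TimeGenerated, Resource, OperationName
| take 20
"""

response = client.query_workspace(
    workspace_id="00000000-0000-0000-0000-000000000000",  # hypothetical workspace ID
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(list(row))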

Other Options:

Azure Analysis Services is used for building enterprise BI semantic models; it does not review the data being sent to Log Analytics.

Azure Application Insights is primarily used for application performance monitoring. It does not receive the logging information from Azure SQL DB.

Microsoft SQL Server Analysis Services (SSAS) is used for on-premises SQL analysis and is not suitable for data from Azure SQL.

SQL Health Check is a portal feature that helps identify potential problems in a SQL database; it does not provide tools to analyze the diagnostics data sent to Log Analytics.

Important AZ-304 Exam Note: Understand how diagnostic settings integrate with Azure Monitor Logs and Azure SQL Analytics. Be able to pick the correct tool for analyzing and visualizing diagnostic logs.

Summary of Correct Answers:

Statement 1: select Stream to an event hub

Statement 2: Azure SQL Analytics

Key Takeaways for the AZ-304 Exam:

Diagnostic Settings: Understand the options for configuring diagnostic settings on various Azure resources (a configuration sketch follows this list).

Real-Time Data Streaming: Be familiar with using Event Hubs for real-time ingestion and consumption of data streams.

Log Analytics Integration: Understand how diagnostic data is sent to Log Analytics, and how you can query and analyze it.

Azure SQL Analytics: Know how Azure SQL Analytics provides monitoring and analysis capabilities for Azure SQL databases.

Power BI Integration: Be familiar with how Power BI connects with Azure services for data analysis, and recognize the best data ingestion methods for streaming data to Power BI.

Service Selection: Be able to select the most appropriate Azure service for a given task. For example: Azure SQL Analytics to review SQL diagnostics data, Event Hubs for streaming data, and Log Analytics for log storage and analysis.
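As referenced in the takeaways above, a hedged sketch of creating a diagnostic setting like the exhibit’s Diags with the azure-mgmt-monitor Python SDK. The subscription, resource group, server, and database names are hypothetical, only a subset of the exhibit’s categories is shown, and this assumes the track-2 SDK accepts a plain dict for the settings payload.

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient  # pip install azure-mgmt-monitor

SUB = "00000000-0000-0000-0000-000000000000"  # hypothetical subscription
DB_ID = (f"/subscriptions/{SUB}/resourceGroups/rg1/providers"
         "/Microsoft.Sql/servers/sql1/databases/db1")  # hypothetical database
WORKSPACE = (f"/subscriptions/{SUB}/resourceGroups/rg1/providers"
             "/Microsoft.OperationalInsights/workspaces/OMSWKspace1")

client = MonitorManagementClient(DefaultAzureCredential(), SUB)

client.diagnostic_settings.create_or_update(
    resource_uri=DB_ID,
    name="Diags",
    parameters={
        # Send to Log Analytics, mirroring the exhibit's configuration.
        "workspace_id": WORKSPACE,
        "logs": [
            {"category": "SQLInsights", "enabled": True},
            {"category": "Errors", "enabled": True},
            {"category": "Deadlocks", "enabled": True},
        ],
    },
)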