test1 Flashcards
https://www.dumpsbase.com/freedumps/?s=az+304
Your network contains an on-premises Active Directory domain.
The domain contains the Hyper-V clusters shown in the following table.
| Name | Number of nodes | Number of virtual machines running on cluster |
|—|—|—|
| Cluster1 | 4 | 20 |
| Cluster2 | 3 | 15 |
You plan to implement Azure Site Recovery to protect six virtual machines running on Cluster1 and three virtual machines running on Cluster2. Virtual machines are running on all Cluster1 and Cluster2 nodes.
You need to identify the minimum number of Azure Site Recovery Providers that must be installed on premises.
How many Providers should you identify?
1
7
9
16
Understanding Azure Site Recovery Providers:
The Azure Site Recovery (ASR) Provider is a software component that must be installed on each Hyper-V host that you want to protect with ASR.
The Provider communicates with the Azure Recovery Services Vault and facilitates replication and failover.
Requirements:
On-Premises Hyper-V: There are two Hyper-V clusters (Cluster1 and Cluster2).
Protection Scope: Six VMs from Cluster1 and three VMs from Cluster2 need to be protected by Azure Site Recovery.
Minimum Providers: Identify the minimum number of ASR Providers needed.
Analysis:
Cluster1: Has 4 nodes.
Cluster2: Has 3 nodes.
Provider per Host: One ASR Provider is needed on each Hyper-V host that will be replicated.
Protected VMs: Six VMs from Cluster1 and three from Cluster2 need protection.
VMs are running on all nodes: All VMs are running across all nodes, which means that we need an ASR Provider installed on all nodes.
Minimum Number of Providers:
Cluster1 requires a provider on each host: 4 providers
Cluster2 requires a provider on each host: 3 providers
Total: 4 + 3 = 7
Correct Answer:
7
Explanation:
You must install an Azure Site Recovery Provider on every Hyper-V host that runs virtual machines you want to protect by using ASR. Because you need to protect VMs on all nodes in both clusters, you must install a Provider on every Hyper-V host. This means 4 Providers for Cluster1 and 3 Providers for Cluster2, for a total of 7 Providers.
Why not others:
1: Not enough; there are 7 Hyper-V hosts in total, and each host needs its own Provider.
9: Incorrect; 9 matches the number of protected VMs (6 + 3), but Providers are installed per host, not per VM.
16: Incorrect; this does not correspond to the number of Hyper-V hosts.
Important Notes for the AZ-304 Exam:
Azure Site Recovery: Understand the architecture, requirements, and components of ASR.
ASR Provider: Know that the ASR Provider must be installed on each Hyper-V host to be protected.
Minimum Requirements: The exam often focuses on minimum requirements, not the total capacity or other metrics.
Hyper-V Integration: Understand how ASR integrates with Hyper-V for replication.
Exam Focus: Read the question carefully and identify the specific information related to required components.
You need to recommend a strategy for the web tier of WebApp1. The solution must minimize costs.
What should you recommend?
Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours.
Configure the Scale Up settings for a web app.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold.
Configure the Scale Out settings for a web app.
Requirements:
Web Tier Scaling: A strategy for scaling the web tier of WebApp1.
Minimize Cost: The solution must focus on minimizing cost.
Recommended Solution:
Configure the Scale Out settings for a web app.
Explanation:
Configure the Scale Out settings for a web app:
Why it’s the best fit:
Cost Minimization: Web apps (App Services) have a pay-as-you-go model and scale out to add more instances when demand increases and automatically scale back in when the demand decreases. This is cost-effective because you only pay for what you use.
Automatic Scaling: You can configure automatic scaling based on different performance metrics (CPU, memory, or custom metrics), ensuring that you scale out and in based on load.
Managed Service: It is a fully managed service, so it minimizes operational overhead.
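For illustration, here is a minimal sketch of one scale-out rule for an App Service plan, using the azure-mgmt-monitor Python SDK. All resource names, IDs, and thresholds are hypothetical placeholders, not values from the question, and the exact model fields can vary between SDK versions:

```python
# Minimal sketch: an autoscale scale-out rule for an App Service plan.
# All names, IDs, and thresholds are hypothetical placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    AutoscaleProfile, AutoscaleSettingResource, MetricTrigger,
    ScaleAction, ScaleCapacity, ScaleRule,
)

sub_id = "<subscription-id>"
plan_id = ("/subscriptions/<subscription-id>/resourceGroups/rg-web"
           "/providers/Microsoft.Web/serverfarms/plan-webapp1")

client = MonitorManagementClient(DefaultAzureCredential(), sub_id)

scale_out = ScaleRule(
    metric_trigger=MetricTrigger(
        metric_name="CpuPercentage",           # scale on average CPU of the plan
        metric_resource_uri=plan_id,
        time_grain=timedelta(minutes=1),
        statistic="Average",
        time_window=timedelta(minutes=10),
        time_aggregation="Average",
        operator="GreaterThan",
        threshold=70,
    ),
    scale_action=ScaleAction(
        direction="Increase", type="ChangeCount",
        value="1", cooldown=timedelta(minutes=5),
    ),
)

client.autoscale_settings.create_or_update(
    "rg-web", "webapp1-autoscale",
    AutoscaleSettingResource(
        location="eastus",
        target_resource_uri=plan_id,
        enabled=True,
        profiles=[AutoscaleProfile(
            name="default",
            capacity=ScaleCapacity(minimum="1", maximum="5", default="1"),
            rules=[scale_out],
        )],
    ),
)
```

In practice you would pair this with a matching scale-in rule so the instance count also drops, and costs fall, when load decreases.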
Why not others:
Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours: While this can help reduce cost, it is not ideal because the VMs keep running, and incurring charges, around the clock. It is also more complex to implement and manage.
Configure the Scale Up settings for a web app: Scale Up is more costly because you increase the compute resources of the existing instances.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold: While you can deploy and scale with scale sets, this is more costly because the VMs are billed per hour, and scale sets are more complex to manage than web apps.
Important Notes for the AZ-304 Exam:
Azure App Service: Be very familiar with Azure App Service and its scaling capabilities.
Web App Scale Out: Know the different scaling options for web apps, and when to scale out versus scale up.
Automatic Scaling: Understand how to configure automatic scaling based on performance metrics.
Cost Optimization: The exam often emphasizes cost-effective solutions. Be aware of the pricing models for different Azure services.
PaaS vs. IaaS: Understand the benefits of using PaaS services over IaaS for cost optimization.
Exam Focus: Be sure to select the service that meets the requirements and provides the most cost-effective solution.
You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1.
The solution must meet the following requirements:
- To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
- If the manager does not verify access permission, automatically revoke that permission.
- Minimize development effort.
What should you recommend?
In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
In Azure Active Directory (Azure AD), create an access review of Application1
Requirements:
External Developer Access: Fabrikam developers have RBAC permissions to an Azure application.
Access Verification: Need to verify if the Fabrikam developers still need access.
Monthly Email to Manager: Send a monthly email to the manager with access information.
Automatic Revocation: Revoke permissions if the manager does not approve.
Minimize Development: Minimize custom code development and use available services.
Recommended Solution:
In Azure Active Directory (Azure AD), create an access review of Application1
Explanation:
Azure AD Access Reviews:
Why it’s the best fit:
Automated Review: Azure AD Access Reviews provides a way to schedule recurring access reviews for groups, applications, or roles. It will automatically send notifications to the assigned reviewers (in this case, the manager).
Manager Review: You can configure the access review to have the manager review and approve or deny access for their developers.
Automatic Revocation: You can configure the access review to automatically remove access for users when they are not approved.
Minimal Development: Access reviews are a built-in feature of Azure AD that requires minimal configuration and no custom coding.
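As a rough sketch of how such a review could also be created programmatically (the Azure portal works just as well), the Microsoft Graph access review API can define a monthly, manager-reviewed, auto-apply review. The bearer token, service principal object ID, and exact payload values below are assumptions for illustration:

```python
# Minimal sketch: create a recurring access review of an application's role
# assignments via the Microsoft Graph REST API. Token acquisition and the
# service principal object ID are hypothetical placeholders.
import requests

token = "<bearer-token-with-AccessReview.ReadWrite.All>"
app_object_id = "<application1-service-principal-object-id>"

definition = {
    "displayName": "Monthly review of Application1 access",
    "scope": {
        "query": f"/servicePrincipals/{app_object_id}/appRoleAssignedTo",
        "queryType": "MicrosoftGraph",
    },
    "reviewers": [
        # The reviewed users' managers perform the review.
        {"query": "./manager", "queryType": "MicrosoftGraph",
         "queryRoot": "decisions"}
    ],
    "settings": {
        "autoApplyDecisionsEnabled": True,   # apply results automatically
        "defaultDecision": "Deny",           # revoke if the manager does not respond
        "defaultDecisionEnabled": True,
        "instanceDurationInDays": 25,
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 1},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {token}"},
    json=definition,
)
resp.raise_for_status()
```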
Why not others:
In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: While PIM is great for managing and governing privileged roles, it is not designed for recurring access reviews of application permissions, and it does not provide manager-driven verification with automatic revocation.
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: While possible, this requires custom development and ongoing maintenance. Azure AD Access Reviews provides the functionality natively, so a runbook is not the optimal solution for the requirements.
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet: Similar to the previous option, this is not ideal because Access Reviews provides all of this functionality natively.
Important Notes for the AZ-304 Exam:
Azure AD Access Reviews: Be very familiar with Azure AD Access Reviews, how they can be used to manage user access, and the methods you can use to perform them (for example, review by a manager or a self-review).
Access Management: Understand the importance of access reviews as part of an overall security strategy.
Access Reviews vs. PIM: Understand when to use PIM, and when to use Access Reviews.
Minimize Development: The exam often emphasizes solutions that minimize development effort.
Exam Focus: Select the simplest and most direct method to achieve the desired outcome.
HOTSPOT -
You have an Azure SQL database named DB1.
You need to recommend a data security solution for DB1. The solution must meet the following requirements:
✑ When helpdesk supervisors query DB1, they must see the full number of each credit card.
✑ When helpdesk operators query DB1, they must see only the last four digits of each credit card number.
✑ A column named Credit Rating must never appear in plain text within the database system, and only client applications must be able to decrypt the Credit Rating column.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Helpdesk requirements:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)
Credit Rating requirement:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)
Requirements:
Helpdesk Supervisors: Must see full credit card numbers.
Helpdesk Operators: Must see only the last four digits of credit card numbers.
Credit Rating Column: The Credit Rating column must never appear in plain text within the database system and must be decrypted by the client applications.
Answer Area:
Helpdesk requirements:
Dynamic data masking
Credit Rating requirement:
Always Encrypted
Explanation:
Helpdesk requirements:
Dynamic data masking:
Why it’s correct: Dynamic data masking allows you to obfuscate sensitive data based on the user’s role. You can configure masking rules to show the full credit card numbers to supervisors and only the last four digits to the operators. The underlying data is not modified, and the masking is applied at the query output level.
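As an illustration, a masking rule of this kind is defined in T-SQL; the sketch below executes it through pyodbc and uses a hypothetical table, column, and database role:

```python
# Minimal sketch (pyodbc + T-SQL): mask a credit card column and unmask it
# for supervisors. Table, column, and role names are hypothetical.
import pyodbc

conn = pyodbc.connect("<odbc-connection-string-for-DB1>", autocommit=True)
cur = conn.cursor()

# Operators see only the last four digits; the rest is replaced by a constant prefix.
cur.execute("""
    ALTER TABLE dbo.Payments
    ALTER COLUMN CreditCardNumber ADD MASKED
    WITH (FUNCTION = 'partial(0, "XXXX-XXXX-XXXX-", 4)');
""")

# Supervisors are granted UNMASK, so their queries return the full number.
cur.execute("GRANT UNMASK TO HelpdeskSupervisors;")
```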
Why not others:
Always Encrypted: This encrypts the data, but doesn’t allow for different visibility of the data based on user roles.
Azure Advanced Threat Protection (ATP): This is for detecting malicious behavior, not for data masking.
Transparent Data Encryption (TDE): This encrypts data at rest, but does not apply specific policies based on user access or perform masking.
Credit Rating requirement:
Always Encrypted:
Why it’s correct: Always Encrypted ensures that sensitive data is always encrypted, both at rest and in transit. The encryption keys are stored and managed in the client application and are not accessible to database administrators. This satisfies the requirement that the column must never appear in plain text in the database system, and it is only decrypted in the client application.
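A minimal sketch of the client side, assuming pyodbc with ODBC Driver 17 for SQL Server: the ColumnEncryption connection keyword turns on client-side decryption, provided the client also has access to the column master key (for example, in Azure Key Vault). Server, table, and column names are placeholders:

```python
# Minimal sketch: the client opts in to Always Encrypted decryption through
# the connection string (ODBC Driver 17+ for SQL Server). Values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=DB1;Uid=<user>;Pwd=<password>;"
    "ColumnEncryption=Enabled;"   # decrypt the Credit Rating column client-side
)

row = conn.cursor().execute(
    "SELECT CreditRating FROM dbo.Customers WHERE CustomerId = ?", 42
).fetchone()
# Without ColumnEncryption=Enabled (and access to the column master key),
# the same query returns only ciphertext.
```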
Why not others:
Azure Advanced Threat Protection (ATP): It doesn’t encrypt or mask the data. It is meant for threat detection.
Dynamic data masking: Dynamic data masking only masks the data for specific users, but it does not encrypt the data.
Transparent Data Encryption (TDE): TDE encrypts data at rest, but it does not encrypt data in transit or protect against database administrators viewing the unencrypted data.
Important Notes for the AZ-304 Exam:
Always Encrypted: Understand what it does, how it encrypts data, where the encryption keys are managed, and the purpose of this approach for security.
Dynamic Data Masking: Know the purpose and configuration of dynamic data masking and how it helps control the data that users can see.
Transparent Data Encryption (TDE): Understand that TDE is used for encrypting data at rest, but it doesn’t protect data in transit, and does not provide different views of data.
Azure Advanced Threat Protection (ATP): Know that it is used for threat detection, not for masking or encrypting data.
Data Security: Be familiar with the different data security features in Azure SQL Database.
Exam Focus: You must be able to understand a complex scenario, and pick the different Azure components that meet each requirement.
You have an Azure subscription.
Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely.
You plan to copy the files to Azure Storage.
You need to implement a storage solution for the files that meets the following requirements:
✑ The files must be available within 24 hours of being requested.
✑ Storage costs must be minimized.
Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
B. Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
C. Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
D. Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
E. Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
The correct answers are B and E.
Here’s why:
Understanding the Requirements:
Availability within 24 hours: This requirement strongly suggests using the Archive access tier in Azure Blob Storage. The Archive tier has the lowest storage cost but also has a rehydration latency. Rehydration from Archive tier typically takes several hours, and is guaranteed within 24 hours.
Minimize storage costs: The Archive access tier is the most cost-effective storage tier in Azure Blob Storage for data that is rarely accessed.
Analyzing each option:
A. Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
Incorrect. General-purpose v1 accounts are older and less cost-optimized than v2 or Blob storage accounts. This option doesn’t specify any access tier, so it would likely default to Hot or Cool, which are more expensive than Archive and not suitable for rarely accessed data when cost minimization is a key requirement. It also doesn’t explicitly address the 24-hour availability through Archive tier.
B. Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Correct. General-purpose v2 accounts are recommended and more cost-effective than v1. By setting the default tier to Hot (initially - though this default doesn’t really matter as we are overriding per blob) and then explicitly setting each file to the Archive access tier, we achieve the lowest storage cost and meet the 24-hour availability requirement. Setting individual blobs to Archive overrides the default account tier for those specific blobs.
C. Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
Incorrect. Azure File Shares are designed for file system access (SMB, NFS) and are generally more expensive than Blob Storage for large amounts of data, especially for archive scenarios. File shares do not have access tiers like Archive. This option does not minimize cost and is not designed for rarely accessed, large datasets like this.
D. Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
Incorrect. Similar to option C, using Azure File Shares is not cost-effective for this scenario. While Cool tier is cheaper than Hot, it’s still more expensive than Archive, and File Shares themselves are generally pricier than Blob Storage. File Shares also don’t offer the Archive tier.
E. Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Correct. An Azure Blob storage account is specifically designed for blob data and can be more cost-optimized than general-purpose accounts in some scenarios. As with option B, the default tier (Cool or even Hot) matters little as long as each file is explicitly set to the Archive access tier. This option also uses the Archive tier for cost minimization and meets the 24-hour availability requirement.
Why B and E are the best solutions:
Both options B and E leverage the Archive access tier of Azure Blob Storage, which is crucial for meeting both the cost minimization and 24-hour availability requirements. They use Blob containers which are the appropriate storage for files in this scenario. While they differ slightly in the type of storage account (general-purpose v2 vs. Azure Blob storage account), both are valid and effective solutions for storing rarely accessed files at the lowest cost with 24-hour retrieval.
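A minimal sketch of the per-blob tier change, using the azure-storage-blob Python SDK; the connection string and container name are placeholders:

```python
# Minimal sketch (azure-storage-blob): move each uploaded blob to the Archive
# tier, overriding the account's default access tier per blob.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("company-files")

for blob in container.list_blobs():
    # Archived blobs cost the least to store; reading one again requires
    # rehydration, which completes within the 24-hour requirement.
    container.get_blob_client(blob.name).set_standard_blob_tier("Archive")
```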
Final Answer: B and E
HOTSPOT
You have an existing implementation of Microsoft SQL Server Integration Services (SSIS) packages stored in an SSISDB catalog on your on-premises network. The on-premises network does not have hybrid connectivity to Azure by using Site-to-Site VPN or ExpressRoute.
You want to migrate the packages to Azure Data Factory.
You need to recommend a solution that facilitates the migration while minimizing changes to the existing packages. The solution must minimize costs.
What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Store the SSISDB catalog by using:
Azure SQL Database
Azure Synapse Analytics
SQL Server on an Azure virtual machine
SQL Server on an on-premises computer
Implement a runtime engine for
package execution by using:
Self-hosted integration runtime only
Azure-SQL Server Integration Services Integration Runtime (IR) only
Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime
Requirements:
Existing SSIS Packages: The packages are stored in an SSISDB catalog on-premises.
Migrate to ADF: The migration target is Azure Data Factory.
Minimize Changes: The solution should minimize changes to the existing SSIS packages.
Minimize Costs: The solution should be cost-effective.
No connectivity: There is no hybrid connectivity from the on-premises environment to Azure.
Answer Area:
Store the SSISDB catalog by using:
Azure SQL Database
Implement a runtime engine for package execution by using:
Azure-SQL Server Integration Services Integration Runtime (IR) only
Explanation:
Store the SSISDB catalog by using:
Azure SQL Database:
Why it’s correct: To migrate SSIS packages to Azure Data Factory, the SSISDB catalog needs to be stored in Azure. Azure SQL Database is the recommended and supported method of storing the SSISDB catalog when you are using the Azure SSIS Integration Runtime in ADF.
Why not others:
Azure Synapse Analytics: While Synapse Analytics also supports SQL functionality, it is not the recommended platform to host the SSISDB.
SQL Server on an Azure virtual machine: While SQL Server on a VM would work, it is an IaaS solution, which requires additional management overhead and is not as cost-effective as using the PaaS Azure SQL Database.
SQL Server on an on-premises computer: The SSISDB must be in Azure to be used by the Azure SSIS Integration Runtime.
Implement a runtime engine for package execution by using:
Azure-SQL Server Integration Services Integration Runtime (IR) only:
Why it’s correct: An Azure-SSIS Integration Runtime is a fully managed service for executing SSIS packages in Azure. Because there is no hybrid network connectivity, you must use the Azure-SSIS IR instead of a self-hosted IR. The Azure-SSIS IR is the only way to run the migrated SSIS packages in Azure.
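A minimal sketch of provisioning such a runtime with the azure-mgmt-datafactory Python SDK; all names and credentials are placeholders, and the exact model fields can vary between SDK versions:

```python
# Minimal sketch (azure-mgmt-datafactory): provision an Azure-SSIS IR whose
# SSISDB catalog is hosted in Azure SQL Database. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeComputeProperties, IntegrationRuntimeResource,
    IntegrationRuntimeSsisCatalogInfo, IntegrationRuntimeSsisProperties,
    ManagedIntegrationRuntime, SecureString,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

ir = ManagedIntegrationRuntime(
    compute_properties=IntegrationRuntimeComputeProperties(
        location="EastUS", node_size="Standard_D2_v3",
        number_of_nodes=1, max_parallel_executions_per_node=2,
    ),
    ssis_properties=IntegrationRuntimeSsisProperties(
        catalog_info=IntegrationRuntimeSsisCatalogInfo(
            catalog_server_endpoint="<server>.database.windows.net",
            catalog_admin_user_name="<admin>",
            catalog_admin_password=SecureString(value="<password>"),
            catalog_pricing_tier="Basic",  # SSISDB hosted in Azure SQL Database
        ),
    ),
)

adf.integration_runtimes.create_or_update(
    "rg-adf", "factory1", "azure-ssis-ir",
    IntegrationRuntimeResource(properties=ir),
)
```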
Why not others:
Self-hosted integration runtime only: A self-hosted integration runtime requires hybrid network connectivity to Azure in order to work. Because there is no VPN or ExpressRoute connection, this is not an option.
Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime: The self-hosted integration runtime is unnecessary in this scenario because there is no need to connect to an on-premises resource.
Important Notes for the AZ-304 Exam:
Azure Data Factory: Be very familiar with ADF, its core concepts, and how to execute SSIS packages.
Azure SSIS IR: Know the purpose of an Azure SSIS Integration Runtime and how to set it up. Understand that it is used when running SSIS packages in Azure.
SSISDB in Azure: Understand how the SSISDB catalog is managed and stored in Azure when migrating from an on-prem environment.
Self-Hosted IR: Understand when the self-hosted IR is required and why it is not the appropriate answer for this specific scenario.
Hybrid Connectivity: Understand how hybrid connectivity affects the choice of integration runtime.
Cost Minimization: Know how to minimize costs by choosing the appropriate services (PaaS over IaaS).
Exam Focus: The exam emphasizes choosing the most appropriate solution while minimizing effort and cost.
You use Azure virtual machines to run a custom application that uses an Azure SQL database on the back end.
The IT department at your company recently enabled forced tunneling. Since the configuration change, developers have noticed degraded performance when they access the database.
You need to recommend a solution to minimize latency when accessing the database. The solution must minimize costs.
What should you include in the recommendation?
Azure SQL Database Managed instance
Azure virtual machines that run Microsoft SQL Server servers
Always On availability groups
virtual network (VNET) service endpoint
Understanding Forced Tunneling:
Forced tunneling in Azure redirects all Internet-bound traffic from a subnet through the on-premises network or a network virtual appliance (such as a firewall or proxy) instead of sending it directly to the Internet. This can increase latency, because traffic to Azure services is routed through the forced tunnel instead of going directly over the Azure backbone.
Requirements:
Azure SQL Database: Custom app on Azure VMs uses an Azure SQL database.
Forced Tunneling: Forced tunneling is enabled, causing performance degradation.
Minimize Latency: Minimize the latency when accessing the database.
Minimize Costs: The solution should be cost-effective.
Recommended Solution:
virtual network (VNET) service endpoint
Explanation:
Virtual Network Service Endpoints:
Why it’s the best fit: VNet service endpoints extend your virtual network’s identity and an optimal route to supported Azure services over the Azure backbone. When you enable a service endpoint for Azure SQL Database on the VMs’ subnet, traffic from those VMs to the database bypasses the forced tunnel and travels directly over the Azure backbone. This significantly reduces latency while remaining cost-effective, because service endpoints carry no additional charge.
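A minimal sketch of enabling the endpoint on the VMs’ subnet with the azure-mgmt-network Python SDK (resource names are placeholders); a virtual network rule on the SQL server would then admit that subnet:

```python
# Minimal sketch (azure-mgmt-network): add a Microsoft.Sql service endpoint
# to the VMs' subnet so database traffic stays on the Azure backbone.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import ServiceEndpointPropertiesFormat

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

subnet = network.subnets.get("rg-app", "vnet-app", "subnet-vms")
subnet.service_endpoints = [ServiceEndpointPropertiesFormat(service="Microsoft.Sql")]

network.subnets.begin_create_or_update(
    "rg-app", "vnet-app", "subnet-vms", subnet
).result()
# A virtual network rule on the SQL server then restricts access to this subnet.
```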
Why not others:
Azure SQL Database Managed Instance: While Managed Instance is a good choice for many SQL scenarios, it is not the ideal solution for this problem. It does not help with the forced tunneling, and it also does not minimize cost since it is a more expensive offering.
Azure virtual machines that run Microsoft SQL Server servers: Moving the database to a VM in IaaS will not fix the problem. It will not address the latency issues created by the forced tunneling.
Always On availability groups: This helps with HA and DR, but it does not help with the latency issues caused by the forced tunneling. Also, it would add significant costs to the deployment.
Important Notes for the AZ-304 Exam:
Virtual Network Service Endpoints: Understand the benefits of using service endpoints.
Forced Tunneling: Know what forced tunneling is and how it can impact traffic flow.
Cost Minimization: Know the different ways to minimize costs when architecting a solution.
Network Performance: Understand the different ways to diagnose and improve performance when dealing with Azure network configurations.
Azure SQL: Know the different deployment options for Azure SQL.
Exam Focus: The exam will often require you to select the most appropriate solution that meets all of the requirements.
You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant. The subscription contains 10 resource groups, one for each department at your company.
Each department has a specific spending limit for its Azure resources.
You need to ensure that when a department reaches its spending limit, the compute resources of the department shut down automatically.
Which two features should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Azure Logic Apps
Azure Monitor alerts
the spending limit of an Azure account
Cost Management budgets
Azure Log Analytics alerts
Requirements:
Departmental Limits: Each department has a specific spending limit for its Azure resources.
Resource Shutdown: Compute resources must shut down automatically when the spending limit is reached.
Correct Features:
Cost Management budgets
Azure Logic Apps
Explanation:
Cost Management budgets:
Why it’s correct: Cost Management budgets allow you to define a spending limit for a specific scope (resource group, subscription, or management group). When actual spend reaches a budget threshold, you can trigger alerts and follow-up actions. Budgets are the mechanism for monitoring and alerting on cost.
Why not sufficient by itself: A budget cannot stop resources on its own; it is a monitoring and alerting mechanism and needs another service in order to take action.
Azure Logic Apps:
Why it’s correct: Azure Logic Apps can be triggered by a budget alert. In the logic app, you can add actions that automatically shut down the compute resources. For example, you can use the Azure Resource Management connector to stop virtual machines.
Why not sufficient by itself: A Logic App requires a trigger to start; therefore, a budget alert must be configured to invoke it. The sketch below shows the kind of action the Logic App would run.
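For illustration, the shutdown action the Logic App (or a runbook) would perform can be sketched with the azure-mgmt-compute Python SDK; the resource group name is a placeholder:

```python
# Minimal sketch (azure-mgmt-compute): deallocate every VM in a department's
# resource group when the budget alert fires. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for vm in compute.virtual_machines.list("rg-finance"):
    # Deallocation (not just a guest OS shutdown) stops compute billing.
    compute.virtual_machines.begin_deallocate("rg-finance", vm.name)
```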
Why not others:
Azure Monitor alerts: These are designed for platform metrics and log signals, not for defining departmental spending limits; the cost threshold itself must come from a Cost Management budget.
the spending limit of an Azure account: An account-level spending limit applies to the subscription as a whole; it does not provide per-resource-group control or automate shutting down resources.
Azure Log Analytics alerts: Log Analytics is a great way to analyze logs, but it does not work with cost alerts.
Important Notes for the AZ-304 Exam:
Cost Management Budgets: Be very familiar with Cost Management budgets and how they can be used to control spending, and know that they are the mechanism that you should use for cost alerts.
Azure Logic Apps: Know how to use Logic Apps to automate actions based on triggers, and how they integrate with Azure Management connectors.
Automated Actions: Understand that Logic Apps can be triggered by alerts and can be used to perform actions, such as shutting down resources.
Cost Control: Be familiar with the best practices for cost control and optimization in Azure.
Alerts: Know the difference between cost alerts and metrics alerts.
Exam Focus: Read the requirement carefully and know which service performs which function: a budget raises an alert when the spending threshold is reached, and a Logic App automates the action when the alert fires.
HOTSPOT
You configure OAuth2 authorization in API Management as shown in the exhibit.
Add OAuth2 service
Display name: (Empty field)
Id: (Empty field)
Description: (Empty field)
Client registration page URL: https://contoso.com/register
Authorization grant types:
Authorization code: Enabled
Implicit: Disabled
Resource owner password: Disabled
Client credentials: Disabled
Authorization endpoint URL: https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize
Support state parameter: Disabled
Authorization Request method
GET: Enabled
POST: Disabled
Token endpoint URL: (Empty field)
Additional body parameters: (Empty field)
Button: Create
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
The selected authorization grant type is for
Background services
Headless device authentication
Single page applications
Web applications
To enable custom data in the grant flow, select
Client credentials
Implicit
Resource owner password
Support state parameter
OAuth2 Configuration Summary:
Authorization Grant Types: The configuration shows the “Authorization code” grant type as the only one enabled.
Authorization Endpoint URL: This is set to Microsoft’s OAuth2 authorization endpoint for the contoso.onmicrosoft.com tenant.
Other Settings: Various other settings related to authorization and token endpoints are displayed.
Answer Area:
The selected authorization grant type is for:
Web applications
To enable custom data in the grant flow, select
Support state parameter
Explanation:
The selected authorization grant type is for:
Web applications:
Why it’s correct: The authorization code grant type is the most secure and recommended method to obtain access tokens for web applications. In this flow the client (web app) first gets an authorization code from the authorization server, and then uses it to obtain an access token.
Why not others:
Background services: Background services (also known as daemon apps) typically use the client credentials flow, which is not enabled in this configuration.
Headless device authentication: Headless devices often use the device code flow, which is not a grant type present here.
Single-page applications: Single-page applications (SPAs) can use the authorization code flow, but often use the implicit grant type, which is disabled in this configuration.
To enable custom data in the grant flow, select:
Support state parameter:
Why it’s correct: The “Support state parameter” setting enables passing an opaque value in the authorization request, which the authorization server returns unchanged along with the authorization code. This value can be used to round-trip custom data through the authorization flow, as the sketch below illustrates.
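A minimal sketch of an authorization-code request that carries custom data in the state parameter; the client ID, redirect URI, and payload are placeholders:

```python
# Minimal sketch: build an authorization-code request whose state parameter
# round-trips custom data. Client ID and redirect URI are placeholders.
import base64
import json
from urllib.parse import urlencode

custom_data = {"csrf": "<random-token>", "returnTo": "/orders"}
state = base64.urlsafe_b64encode(json.dumps(custom_data).encode()).decode()

params = {
    "client_id": "<client-id>",
    "response_type": "code",                 # authorization code grant
    "redirect_uri": "https://contoso.com/callback",
    "scope": "openid",
    "state": state,                          # echoed back with the code
}
url = ("https://login.microsoftonline.com/contoso.onmicrosoft.com"
       "/oauth2/v2.0/authorize?" + urlencode(params))
print(url)
```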
Why not others:
Client credentials: This is for service-to-service authentication without a user present.
Implicit: This is an older, less secure grant type for single-page applications. It does not enable passing custom data.
Resource owner password: This is a less secure grant type that should be avoided in most scenarios. It also does not enable passing custom data.
Important Notes for the AZ-304 Exam:
OAuth 2.0 Grant Types: Be very familiar with the different OAuth 2.0 grant types:
Authorization Code
Implicit
Client Credentials
Resource Owner Password
Device Code
API Management OAuth2 Settings: Understand how to configure OAuth 2.0 settings in Azure API Management.
“State” Parameter: Know the importance of the “state” parameter in OAuth flows and how it helps prevent CSRF attacks. Understand how this can be used to pass custom data.
API Security: Know how to properly secure APIs with OAuth 2.0.
Exam Focus: Be sure to select the answer based on a close inspection of the provided details.
You are designing an order processing system in Azure that will contain the Azure resources shown in the following table.
| Name | Type | Purpose |
|—|—|—|
| App1 | Web app | Processes customer orders |
| Function1 | Function | Check product availability at vendor 1 |
| Function2 | Function | Check product availability at vendor 2 |
| storage1 | Storage account | Stores order processing logs |
The order processing system will have the following transaction flow:
✑ A customer will place an order by using App1.
✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.
✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.
✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.
✑ All the steps of the transaction will be logged to storage1.
Which type of resource should you recommend for the integration component?
an Azure Data Factory pipeline
an Azure Service Bus queue
an Azure Event Grid domain
an Azure Event Hubs capture
The correct answer is an Azure Service Bus queue.
Here’s why:
Message Brokering: Azure Service Bus queues are designed for reliable, asynchronous message queuing. This perfectly fits the scenario where App1 generates a message and the integration component processes it to trigger the appropriate function.
Decoupling: Service Bus decouples App1 from Function1 and Function2. App1 simply sends a message to the queue, and it doesn’t need to know which function will eventually process it. This improves resilience and scalability.
Guaranteed Delivery: Service Bus ensures that messages are delivered at least once, which is crucial for order processing.
First-In, First-Out (FIFO) Ordering (Optional but useful): If the order of processing is important, Service Bus queues can be configured for FIFO delivery.
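A minimal sketch of the queue-based flow with the azure-servicebus Python SDK; the connection string, queue name, and message shape are assumptions for illustration:

```python
# Minimal sketch (azure-servicebus): App1 enqueues an order-check message;
# the integration component receives it and routes to the right function.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<service-bus-connection-string>"

with ServiceBusClient.from_connection_string(conn_str) as client:
    # App1 side: publish the availability-check request.
    with client.get_queue_sender("order-checks") as sender:
        sender.send_messages(ServiceBusMessage(
            json.dumps({"orderId": 1001, "vendor": 1})))

    # Integration side: pull messages and dispatch by order type.
    with client.get_queue_receiver("order-checks", max_wait_time=5) as receiver:
        for msg in receiver:
            order = json.loads(str(msg))
            target = "Function1" if order["vendor"] == 1 else "Function2"
            print(f"Routing order {order['orderId']} to {target}")
            receiver.complete_message(msg)
```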
Why other options are less suitable:
Azure Data Factory pipeline: While ADF can orchestrate workflows and trigger activities (including Azure Functions), it’s primarily designed for data integration and ETL (Extract, Transform, Load) tasks. It’s overkill for this simple message routing scenario.
Azure Event Grid domain: Event Grid is ideal for event-driven architectures where publishers emit events and subscribers react to them. While it can trigger functions, it’s more suited for scenarios where you have many subscribers potentially interested in the same event. In this case, the routing is deterministic (either Function1 or Function2 based on the order type), making a queue a more direct fit. Event Grid also has a “push” delivery model, where it attempts to deliver events. Service Bus offers both “push” and “pull” models, giving more control over message consumption.
Azure Event Hubs capture: Event Hubs is designed for high-throughput ingestion of streaming data. The “capture” feature is for persisting this data to storage. It’s not primarily designed for routing messages to specific functions based on message content.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.
Does this meet the goal?
Yes
No
Goal:
Deploy Azure App Service instances and Azure SQL databases simultaneously.
App Service instances must be deployed only to specific Azure regions.
Resources for the App Service instances must reside in the same region.
Proposed Solution:
Create resource groups based on locations.
Implement resource locks on the resource groups.
Analysis:
Resource Groups Based on Location:
Creating resource groups based on locations is a good practice for organizing resources in Azure. It makes it easier to manage resources and ensures that all the resources that belong to a specific geographic region are grouped together. This is an important step in reaching the goal.
Resource Locks
Resource locks, however, only prevent accidental deletion or modification of resource groups and the resources within them. They do not enforce which resources are deployed or where, meaning that a user could still deploy resources outside of the required regions.
Does It Meet the Goal?: No
Explanation:
Resource Groups by Location (Partial Fulfillment): Creating resource groups by location does help with organizing resources and ensures they’re deployed in the same region, meeting part of the requirement of keeping all resources in the same location.
Resource Locks - These do not solve the region requirement, because resources can still be created in any region.
Missing Enforcement: The solution lacks any mechanism to enforce that the resources are only deployed in the correct Azure regions. This is a regulatory requirement, so a simple organization of resource groups is not enough.
No Region Enforcement: Resource locks prevent accidental deletion or modification of resources, but they do not restrict resource deployments to specific regions.
Correct Answer:
No
Important Notes for the AZ-304 Exam:
Resource Groups: Understand the purpose and use of resource groups.
Resource Locks: Know the purpose and limitations of resource locks.
Regulatory Requirements: Recognize that solutions must enforce compliance requirements. This is a key element of many questions.
Enforcement Mechanisms: Look for mechanisms that enforce compliance, such as Azure Policy with an allowed-locations assignment, instead of approaches that merely organize resources.
Exam Focus: Read the proposed solution and verify if it truly meets the goal. If any part of the solution does not achieve the goal, then the answer is “No”.
You need to recommend a data storage solution that meets the following requirements:
- Ensures that applications can access the data by using a REST connection
- Hosts 20 independent tables of varying sizes and usage patterns
- Automatically replicates the data to a second Azure region
- Minimizes costs
What should you recommend?
an Azure SQL Database that uses active geo-replication
tables in an Azure Storage account that use geo-redundant storage (GRS)
tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
an Azure SQL Database elastic database pool that uses active geo-replication
Requirements:
REST API Access: The data must be accessible through a REST interface.
Independent Tables: The solution must support 20 independent tables of different sizes and usage patterns.
Automatic Geo-Replication: The data must be automatically replicated to a secondary Azure region.
Minimize Costs: The solution should be cost-effective.
Recommended Solution:
Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
Explanation:
Azure Storage Account with RA-GRS Tables:
REST Access: Azure Storage tables are directly accessible using a REST API, which is a fundamental part of their design.
Independent Tables: A single Azure Storage account can hold many independent tables, meeting the 20-table requirement.
Automatic Geo-Replication (RA-GRS): RA-GRS ensures that the data is replicated to a secondary region, and provides read access to that secondary location. This satisfies the HA and geo-redundancy requirements.
Minimize Cost: Azure Storage tables are designed to handle different patterns and are cost effective compared to SQL options.
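A minimal sketch using the azure-data-tables Python SDK, which calls the Table service REST API under the hood; the connection string and entity are placeholders:

```python
# Minimal sketch (azure-data-tables): create one of the tables in an RA-GRS
# account and insert an entity. Connection string and entity are placeholders.
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string(
    "<ra-grs-account-connection-string>")
table = service.create_table_if_not_exists("Inventory")

table.create_entity({
    "PartitionKey": "widgets",
    "RowKey": "sku-1001",
    "Quantity": 42,
})
# Every call above is an HTTPS REST request under the hood; with RA-GRS the
# secondary endpoint (<account>-secondary.table.core.windows.net) is readable.
```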
Why not others:
Azure SQL Database with active geo-replication: While it provides strong SQL capabilities and geo-replication, SQL databases are more costly for simple table storage and carry higher operational overhead. Azure SQL Database also does not expose a REST interface for data access; it is queried by using SQL.
Azure SQL Database elastic database pool with active geo-replication: Same reasons as above, but with the added complication of an elastic pool, which is unnecessary for the stated requirements and would add even more costs.
Tables in an Azure Storage account that use geo-redundant storage (GRS): This would meet the geo-replication requirements but it would not provide the ability to read from the secondary location, and so is not as good a choice as RA-GRS.
Important Notes for the AZ-304 Exam:
Azure Storage Tables: Know what they are designed for and their features (scalability, cost-effectiveness, REST API access). Be able to explain where they are appropriate.
Geo-Redundancy: Understand the differences between GRS, RA-GRS and how they impact performance, availability and cost.
Cost-Effective Solutions: The exam often asks for the most cost-effective solution. Be aware of the pricing models of different Azure services.
SQL Database Use Cases: Understand when to use SQL DBs and when other options (like Table storage) are more appropriate. SQL DBs are better suited for complex queries, transactions, and relational data models.
REST API Access: Know which Azure services offer a REST interface for data access and when it might be required.
Exam Technique: Ensure you fully read the requirements, so you don’t pick a more expensive or complex solution than is needed.
HOTSPOT
Your company has two on-premises sites in New York and Los Angeles and Azure virtual networks in the East US Azure region and the West US Azure region. Each on-premises site has Azure ExpressRoute circuits to both regions.
You need to recommend a solution that meets the following requirements:
✑ Outbound traffic to the Internet from workloads hosted on the virtual networks must be routed through the closest available on-premises site.
✑ If an on-premises site fails, traffic from the workloads on the virtual networks to the Internet must reroute automatically to the other site.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Routing from the virtual networks to
the on-premises locations must be
configured by using:
Azure default routes
Border Gateway Protocol (BGP)
User-defined routes
The automatic routing configuration
following a failover must be
handled by using:
Border Gateway Protocol (BGP)
Hot Standby Routing Protocol (HSRP)
Virtual Router Redundancy Protocol (VRRP)
Correct Answers and Why
Routing from the virtual networks to the on-premises locations must be configured by using:
Border Gateway Protocol (BGP)
Why?
ExpressRoute Standard: ExpressRoute relies on BGP for exchanging routes between your on-premises networks and Azure virtual networks. It’s the fundamental routing protocol for this type of connectivity.
Dynamic Routing: BGP allows for dynamic route learning, meaning routes are automatically adjusted based on network changes (like a site going down). This is essential for the failover requirement.
Path Selection: BGP allows for attributes like Local Preference to choose the best path. The path to the nearest on-prem location can be preferred by setting a higher local preference.
Why Not the Others?
Azure Default Routes: These routes are for basic internal Azure connectivity and internet access within Azure. They don’t handle routing to on-premises networks over ExpressRoute.
User-defined routes (UDRs): While UDRs can force traffic through a specific path, they do not facilitate dynamic failover without manual intervention and are therefore unsuitable in this scenario.
The automatic routing configuration following a failover must be handled by using:
Border Gateway Protocol (BGP)
Why?
BGP Convergence: BGP’s inherent nature is to dynamically adapt to network changes. If an on-premises site or an ExpressRoute path becomes unavailable, BGP automatically detects this and withdraws routes from the failed path.
Automatic Rerouting: BGP then advertises the available paths, leading to the rerouting of traffic through the remaining healthy site, achieving the automatic failover requirement.
Why Not the Others?
Hot Standby Routing Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP): These protocols are used for first-hop redundancy on local networks which is not applicable in Azure environments or to Expressroute configurations. They do not facilitate the end-to-end routing and failover required.
Important Notes for the AZ-304 Exam
ExpressRoute Routing is BGP-Based: Understand that BGP is the routing protocol for ExpressRoute. If a question involves routing over ExpressRoute, BGP is highly likely to be involved.
BGP for Dynamic Routing and Failover: Know that BGP not only provides routing but also provides failover capabilities through its dynamic path selection and convergence features.
Local Preference: Understand how BGP attributes like Local Preference can be used to influence path selection. This is key for scenarios where you want to force a primary path and have a secondary backup path.
Azure Networking Core Concepts: You should have a solid understanding of:
Virtual Networks: How they’re used, subnetting, IP addressing.
Route Tables: Both default and User-Defined, and how they control traffic routing.
ExpressRoute: The different connection options and associated routing implications.
Dynamic vs. Static Routing: Know the difference between dynamic routing (BGP) and static routing (User Defined Routes) and where they are best suited.
Hybrid Networking: Be prepared to deal with hybrid scenarios that connect on-premises and Azure resources.
Failover: Be aware of the failover options and be able to choose the best solutions for different circumstances. BGP is the most common solution for failover between on-prem and Azure.
HSRP and VRRP Applicability: These are first hop redundancy protocols used locally and are not suitable for Azure cloud environments. They should not be suggested for Azure routing scenarios.
You have an Azure subscription. The subscription contains an app that is hosted in the East US, Central Europe, and East Asia regions. You need to recommend a data-tier solution for the app.
The solution must meet the following requirements:
- Support multiple consistency levels.
- Be able to store at least 1 TB of data.
- Be able to perform read and write operations in the Azure region that is local to the app instance.
What should you include in the recommendation?
a Microsoft SQL Server Always On availability group on Azure virtual machines
an Azure Cosmos DB database
an Azure SQL database in an elastic pool
Azure Table storage that uses geo-redundant storage (GRS) replication
Understanding the Requirements
Global Distribution: The application is deployed in multiple regions (East US, Central Europe, East Asia), meaning the data layer also needs to be globally accessible.
Multiple Consistency Levels: The solution must support different levels of data consistency (e.g., strong, eventual).
Scalability: It needs to store at least 1 TB of data.
Local Read/Write: Each application instance should be able to perform read and write operations in its local region for performance.
Evaluating the Options
a) Microsoft SQL Server Always On Availability Group on Azure Virtual Machines:
Pros:
Offers strong consistency.
Can store large amounts of data (1 TB+).
Cons:
Complex to manage: Requires setting up and maintaining virtual machines, clustering, and replication manually.
Not designed for low-latency multi-regional access: While you can do replication, it’s typically not optimized for providing very low-latency access to every region at the same time.
Does not inherently offer multiple consistency levels.
Verdict: Not the best fit. It’s too complex and doesn’t easily meet the multi-region, multiple consistency requirement.
b) An Azure Cosmos DB database:
Pros:
Globally Distributed: Designed for multi-region deployments and provides low-latency reads/writes in local regions.
Multiple Consistency Levels: Supports various consistency levels, from strong to eventual, that can be set per request.
Scalable: Can easily store 1 TB+ of data and scale as needed.
Fully Managed: Much easier to manage than SQL Server on VMs.
Cons:
Uses a different data model and approach to database design than relational solutions.
Verdict: Excellent fit. It directly addresses all the requirements.
c) An Azure SQL Database in an elastic pool:
Pros:
Scalable in terms of performance and resources.
Familiar relational database platform.
Cons:
Not inherently multi-regional: While you can do active geo-replication, it has limitations with low-latency reads from remote regions.
Limited consistency options: Primarily provides strong consistency, not multiple levels.
Not as horizontally scalable: It’s designed for relational data, not the more flexible scalability needed for a globally distributed app.
Does not provide local read/write in each region.
Verdict: Not the best choice. It doesn’t meet the multi-region low-latency and consistency requirements.
d) Azure Table storage that uses geo-redundant storage (GRS) replication:
Pros:
Highly scalable.
Relatively inexpensive.
GRS provides data replication.
Cons:
No multi-master writes: Writes go only to the primary region, so there is no local read/write in each region; with GRS, the secondary copy is not even readable.
Limited consistency: Primarily eventual consistency, not the range required by the problem statement.
No SQL: Designed for non-relational data storage only.
Verdict: Not suitable. Lacks multiple consistency options, multi-master writes, and suitable performance for low latency reads.
Recommendation
Based on the analysis, the best solution is:
An Azure Cosmos DB database
Explanation
Azure Cosmos DB is purpose-built for globally distributed applications. It offers:
Global Distribution and Low Latency: Data can be replicated to multiple Azure regions, allowing applications to read and write data in their local region with low latency.
Multiple Consistency Levels: You can fine-tune the consistency level per request. Options range from strong consistency (data is guaranteed to be the same everywhere) to eventual consistency (data will eventually be consistent across regions).
Scalability: Cosmos DB can easily store 1 TB+ of data and automatically scales to handle increased traffic.
Ease of Management: As a fully managed service, it reduces operational overhead.
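A minimal sketch with the azure-cosmos Python SDK, showing the tunable consistency level and region preference; the endpoint, key, and names are placeholders:

```python
# Minimal sketch (azure-cosmos): a client that reads/writes in its local
# region with session consistency. Endpoint, key, and names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<account-key>",
    consistency_level="Session",          # one of the five tunable levels
    preferred_locations=["East US"],      # read from the region local to the app
)

db = client.create_database_if_not_exists("orders")
container = db.create_container_if_not_exists(
    "items", partition_key=PartitionKey(path="/orderId"))

container.upsert_item({"id": "1", "orderId": "1", "status": "placed"})
```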
Your company purchases an app named App1.
You plan to run App1 on seven Azure virtual machines in an Availability Set. The number of fault domains is set to 3. The number of update domains is set to 20.
You need to identify how many App1 instances will remain available during a period of planned maintenance.
How many App1 instances should you identify?
1
2
6
7
Understanding Availability Sets
Purpose: Availability Sets are used to protect your applications from planned and unplanned downtime within an Azure datacenter.
Fault Domains (FDs): Fault Domains define groups of virtual machines that share a common power source and network switch. In the event of a power or switch failure, VMs in different FDs will be affected independently of each other.
Update Domains (UDs): Update Domains define groups of virtual machines that can be rebooted simultaneously during an Azure maintenance window. Azure applies planned maintenance to UDs one at a time.
The Key Rule
During planned maintenance, Azure updates VMs within a single Update Domain at a time. Azure moves to the next UD only after completing an update to the current UD. This means that while an update is being done on one UD, the other UDs are not affected.
Analyzing the Scenario
7 VMs in total
3 Fault Domains: This is important for unplanned maintenance, but doesn’t directly impact our answer here.
20 Update Domains: This is the important factor for planned maintenance.
This does not mean there are 20 physical UDs in the set; it just means up to 20 UDs can be used. The 7 VMs will therefore each be placed in one of 7 unique UDs within the set of 20.
Calculating Availability During Planned Maintenance
Minimum VMs per Update Domain: Because there are 7 VMs and up to 20 UDs, each virtual machine is placed in its own update domain.
Impact of Maintenance: During a planned maintenance event, Azure updates one UD at a time. Therefore, one of the 7 VMs will be unavailable while its update is applied.
Available VMs: At any given time, while maintenance is applied to a single UD, the VMs in the other UDs remain available: 7 - 1 = 6 VMs.
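The arithmetic can be sketched in a few lines:

```python
# Minimal sketch: availability during planned maintenance for this scenario.
vms, fault_domains, update_domains = 7, 3, 20

# Azure spreads the VMs across UDs; with more UDs than VMs, each VM
# effectively gets its own UD.
used_uds = min(vms, update_domains)            # 7
max_vms_per_ud = -(-vms // used_uds)           # ceil(7 / 7) = 1

# Planned maintenance reboots one UD at a time.
available_during_maintenance = vms - max_vms_per_ud
print(available_during_maintenance)            # 6
```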
Correct Answer
6
Important Notes for the AZ-304 Exam
Availability Sets vs. Virtual Machine Scale Sets: Know the difference. Availability Sets provide fault tolerance for individual VMs, while Scale Sets provide scalability and resilience for groups of identical VMs (often used for autoscaling). This question specifically used an availability set.
Fault Domains (FDs) vs. Update Domains (UDs): Be clear on the purpose of each. FDs for unplanned maintenance, UDs for planned maintenance.
Impact of UDs on Planned Maintenance: During planned maintenance, only one UD is updated at a time, ensuring that your application can remain available.
Distribution of VMs: In an availability set, Azure evenly distributes VMs across FDs and UDs.
Maximum FDs and UDs: Understand that the maximum number of FDs is 3 and UDs are 20 in Availability Sets.
Real-World Scenario: Be aware that real production workloads can have other availability and redundancy concerns and that more advanced redundancy can be achieved by using multiple availability sets in the same region or a combination of Availability sets and Availability zones.
Calculations: Be able to determine the availability of VMs during planned or unplanned maintenance based on the number of FDs and UDs as well as the number of VMs in a given configuration.
Best Practice: Best practice is to have at least 2 VMs in an availability set, and 2 availability sets in your region to provide redundancy in the event of zonal failures as well as UD / FD maintenance.
Your company has the infrastructure shown in the following table:
| Location | Resource |
|—|—|
| Azure | Azure subscription named Subscription1, 20 Azure web apps |
| On-premises datacenter | Active Directory domain, Server running Azure AD Connect, Linux computer named Server1 |
The on-premises Active Directory domain syncs to Azure Active Directory (Azure AD).
Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
You plan to migrate Server1 to a virtual machine in Subscription1.
A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.
You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.
What should you include in the recommendation?
Azure AD Domain Services (Azure AD DS)
an Azure VPN gateway
the Active Directory Domain Services role on a virtual machine
Azure AD Application Proxy
Understanding the Requirements
Application (App1): Uses LDAP queries to authenticate users in the on-premises Active Directory.
Migration: Moving from an on-premises Linux server to an Azure VM.
Security Policy: VMs and services in Azure are not allowed to access the on-premises network.
Functionality: The migrated application must still be able to authenticate users.
Analyzing the Options
Azure AD Domain Services (Azure AD DS)
Pros:
Provides a managed domain controller in Azure, allowing VMs to join the domain.
Supports LDAP queries for authentication.
Independent of the on-premises network.
Synchronizes user information from Azure AD.
Fully managed, eliminating the need for maintaining domain controllers.
Cons:
Cost implications from running an additional service.
Verdict: This is the most suitable option. It meets the functional requirements without violating the security policy.
An Azure VPN Gateway
Pros:
Provides a secure connection between Azure and on-premises networks.
Cons:
Violates the security policy that prevents Azure resources from connecting to on-premises.
Would give the VM access to the entire on-premises network (if set up as site-to-site), including AD.
Verdict: Not a valid option because it directly contradicts the security policy.
The Active Directory Domain Services role on a virtual machine
Pros:
Provides the needed domain services
Cons:
Would require setting up and managing a domain controller in Azure.
Would need a VPN connection to sync with on-premises, which would violate the security policy.
Requires ongoing maintenance.
Verdict: Not a valid option because it would be harder to maintain, and the connection to on-premises would violate the security policy.
Azure AD Application Proxy
Pros:
Allows external users to connect to internal resources.
Cons:
Not relevant for this use case. Application Proxy does not manage or provide LDAP access to users.
Verdict: Not a good fit as it does not help with authentication for the application.
Correct Recommendation
The best solution is Azure AD Domain Services (Azure AD DS).
Explanation
LDAP Compatibility: Azure AD DS provides a managed domain service compatible with LDAP queries, which is precisely what App1 needs for user authentication (see the sketch after this list).
Isolated Azure Environment: Azure AD DS is entirely contained within Azure and does not require a connection to the on-premises network. This allows you to satisfy the security policy.
Azure AD Synchronization: Azure AD DS syncs users from Azure AD, meaning users will be able to authenticate after the migration.
Ease of Use: Azure AD DS is a fully managed service so you will not need to worry about the underlying infrastructure.
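To make the LDAP dependency concrete, here is a minimal sketch of the kind of query App1 performs, using the third-party ldap3 Python library. The managed domain name, service account, and user are placeholders, and in practice you would use secure LDAP (LDAPS, port 636) with a proper certificate:

```python
from ldap3 import ALL, SUBTREE, Connection, Server

# Placeholders: aadds.contoso.com stands in for the Azure AD DS managed
# domain; never hard-code real credentials.
server = Server("ldaps://aadds.contoso.com:636", get_info=ALL)
conn = Connection(
    server,
    user="CONTOSO\\svc-app1",   # hypothetical service account
    password="<password>",
    auto_bind=True,
)

# Verify a user identity: exactly the kind of lookup App1 performs.
conn.search(
    search_base="dc=aadds,dc=contoso,dc=com",
    search_filter="(&(objectClass=user)(sAMAccountName=jsmith))",
    search_scope=SUBTREE,
    attributes=["displayName", "memberOf"],
)
print(conn.entries)
```

Because Azure AD DS exposes a standard LDAP endpoint, code like this keeps working after the migration without any connection back to the on-premises domain.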
Important Notes for the AZ-304 Exam
Azure AD DS Use Cases: Know that Azure AD DS is designed for scenarios where you need domain services (including LDAP) in Azure but cannot/should not connect to on-premises domain controllers.
Hybrid Identity: Be familiar with hybrid identity options, such as using Azure AD Connect to sync on-premises Active Directory users to Azure AD.
Security Policies: Pay close attention to security policies described in exam questions. The correct answer must satisfy every stated security requirement.
Service Selection: Be able to choose the correct Azure service based on the stated requirements of the question. For example, know when to use Azure AD DS as opposed to spinning up a domain controller in a VM.
Alternatives: You should know what other options there are that could theoretically be used, but also understand their pros and cons. For instance, you should be able to state that a VPN could facilitate the connection, but that the security policy would need to be updated.
LDAP Authentication: Understand LDAP as the core functionality for Active Directory authentication.
Fully Managed Services: Be aware of the benefits of managed services (like Azure AD DS) in reducing management overhead.
You are reviewing an Azure architecture as shown in the Architecture exhibit (Click the Architecture tab.)
Architecture flow: Log Files → Azure Data Factory → Azure Data Lake Storage ⇄ Azure Databricks → Azure Synapse Analytics → Azure Analysis Services → Power BI
Steps:
Ingest: Log Files → Azure Data Factory
Store: Azure Data Factory → Azure Data Lake Storage
Prep and Train: Azure Data Lake Storage ⇄ Azure Databricks
Model and Serve: Azure Synapse Analytics → Azure Analysis Services
Visualize: Azure Analysis Services → Power BI
The estimated monthly costs for the architecture are shown in the Costs exhibit. (Click the Costs tab.)
| Service | Description | Cost |
|---|---|---|
| Azure Synapse Analytics | Tier: Compute-optimised Gen2, Compute: DWU 100 x 1 | US$998.88 |
| Data Factory | Azure Data Factory V2 Type, Data Pipeline Service type, | US$4,993.14 |
| Azure Analysis Services | Developer (hours), 5 Instance(s), 720 Hours | US$475.20 |
| Power BI Embedded | 1 node(s) x 1 Month, Node type: A1, 1 Virtual Core(s), | US$735.91 |
| Storage Accounts | Block Blob Storage, General Purpose V2, LRS Redundant, | US$21.84 |
| Azure Databricks | Data Analytics Workload, Premium Tier, 1 D3V2 (4 vCPU) | US$515.02 |
| Estimate total: | | US$7,739.99 |
The log files are generated by user activity on Apache web servers. The log files are in a consistent format. Approximately 1 GB of logs is generated per day. Microsoft Power BI is used to display weekly reports of the user activity.
You need to recommend a solution to minimize costs while maintaining the functionality of the architecture.
What should you recommend?
Replace Azure Data Factory with CRON jobs that use AzCopy.
Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.
Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.
Replace Azure Databricks with Azure Machine Learning.
Understanding the Existing Architecture
Data Ingestion: Log files from Apache web servers are ingested into Azure Data Lake Storage via Azure Data Factory.
Data Processing: Azure Databricks is used to prep and train the data.
Data Warehousing: Azure Synapse Analytics is used to model and serve data.
Data Visualization: Azure Analysis Services and Power BI are used for visualization.
Cost Breakdown and Bottlenecks
The cost breakdown shows the following areas as significant expenses:
Azure Data Factory: $4,993.14 (by far the most expensive item)
Azure Synapse Analytics: $998.88
Power BI Embedded: $735.91
The other items (Analysis services, Databricks, and storage) are relatively low cost.
Analyzing the Recommendations
Replace Azure Data Factory with CRON jobs that use AzCopy.
Pros:
Significant cost reduction: AzCopy is free and can be used with a simple CRON job.
Suitable for the relatively small amount of data that is being moved.
Cons:
Less feature-rich than Data Factory (no orchestration, error handling, monitoring, etc.).
Adds management overhead as you need to create and maintain the CRON jobs.
Verdict: This is the best option. Given the small data volume, the complexity of Data Factory is overkill and the cost can be reduced dramatically.
Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.
Pros:
Can be more cost effective for smaller workloads and can scale up or down easily.
Cons:
May need changes to the way the data is stored and managed.
Hyperscale is designed for transactional workloads and may not be the best replacement for a data warehouse.
Verdict: Not the best option, as it may impact the architecture of the solution and the query patterns used.
Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.
Pros:
Could be less expensive than the managed service for small workloads.
Cons:
Significantly more management overhead, less scalable.
Would reduce the overall functionality of the solution, having to implement multiple services in one VM.
Would not reduce costs, as the combined cost of the VM, SQL licensing, and management effort would likely be higher.
Verdict: Not recommended. Introduces complexity and management overhead.
Replace Azure Databricks with Azure Machine Learning.
Pros:
Azure Machine Learning can also do data processing.
May be more cost efficient depending on workload.
Cons:
Azure Machine Learning is focused on ML rather than on general processing and preparation of data.
More geared towards predictive analytics than general data processing.
May require a significant rework of the existing process.
Verdict: Not a suitable option, as it is not a like-for-like replacement.
Recommendation
The best recommendation is:
Replace Azure Data Factory with CRON jobs that use AzCopy.
Explanation
Cost Savings: The primary issue is the high cost of Azure Data Factory. Using CRON jobs and AzCopy is a simple, low-cost alternative for the relatively small volume of data being moved.
Functionality: The CRON job will simply move the data from the source location to the Azure data lake, with the processing steps remaining the same.
Complexity: While this adds management overhead by requiring you to create and maintain the CRON job, the simplicity of the requirements outweighs the added complexity (a sketch of the replacement follows this list).
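As a rough sketch of the replacement (storage account, container, and SAS token are placeholders), a small Python wrapper around AzCopy can be invoked from a cron entry such as `15 0 * * *`:

```python
import subprocess

# Placeholders: substitute your own storage account, container, and SAS token.
SOURCE = "/var/log/apache2/*.log"
DEST = "https://<account>.blob.core.windows.net/weblogs?<sas-token>"

def upload_logs() -> None:
    """Copy Apache access logs to Azure storage using AzCopy.

    AzCopy must be installed and on PATH; it expands the wildcard in the
    source path itself, so no shell is needed.
    """
    subprocess.run(["azcopy", "copy", SOURCE, DEST], check=True)

if __name__ == "__main__":
    upload_logs()
```

A single scheduled cron entry pointing at this script replaces the Data Factory copy step while the downstream processing stays unchanged.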
Important Notes for the AZ-304 Exam
Cost Optimization: Know that the exam may test your ability to identify cost drivers and suggest cost optimizations.
Azure Data Factory: Understand when ADF is the right tool and when a simpler tool will suffice. It’s often beneficial to use a tool as simple as possible, while still meeting requirements.
Data Transfer: Be aware of options like AzCopy for moving data in a low-cost way.
CRON jobs: Understand how CRON jobs can be used to schedule operations.
Azure Synapse Analytics: Understand how Azure Synapse Analytics can provide insights and processing power, but can also be expensive.
SQL Database Hyperscale: Understand when it is more beneficial to use Hyperscale over Synapse.
SQL Server on Azure VM: Know the use cases of where a traditional SQL server may be appropriate.
Azure Analysis Services: Know that it is designed for fast data queries and reporting through tools like Power BI, but can add significant cost.
Azure Databricks and ML: Understand the difference and which scenarios are more suited for each.
Service selection: Know how to select a service based on the requirements provided.
Simplicity: Consider solutions that may be less feature-rich, but provide simpler (and lower cost) solutions.
You have an Azure Active Directory (Azure AD) tenant.
You plan to provide users with access to shared files by using Azure Storage. The users will be provided with different levels of access to various Azure file shares based on their user account or their group membership.
You need to recommend which additional Azure services must be used to support the planned deployment.
What should you include in the recommendation?
an Azure AD enterprise application
Azure Information Protection
an Azure AD Domain Services (Azure AD DS) instance
an Azure Front Door instance
The correct answer is C. Azure AD Domain Services (Azure AD DS) instance.
Here’s why:
Understanding the Requirement: The core requirement is to control access to Azure File Shares based on user identities and group memberships defined in Azure AD. Azure File Shares, on their own, don’t natively understand Azure AD identities for access control.
How Azure AD DS Helps:
Extends Azure AD: Azure AD DS provides a managed domain controller service in Azure. It essentially creates a traditional Windows Server Active Directory domain synced with your Azure AD tenant.
Enables Kerberos Authentication: File Shares need a way to authenticate users who want to access them. Azure AD DS enables Kerberos authentication, which is the protocol used by Windows Server-based file servers. With Kerberos authentication enabled, you can assign specific NTFS permissions to individual users and groups on your Azure File Shares which directly translates into allowing or disallowing access.
Seamless Integration: After setting up Azure AD DS, the storage account can be joined to the managed domain, enabling users to authenticate using their Azure AD credentials seamlessly.
Access Control: This integration provides the capability to define granular NTFS style access control lists (ACLs) for file shares, allowing you to give users/groups specific permissions to the shares and folders.
Why other options are not the best fit:
A. Azure AD enterprise application: Azure AD enterprise applications are primarily used to manage authentication and authorization for cloud-based applications (SaaS). They don’t directly provide the means to manage access to files on Azure file shares in the way described in the scenario.
B. Azure Information Protection: Azure Information Protection (now part of Microsoft Purview) protects sensitive data by classifying and labeling it. It does not directly help with granting users and groups access to Azure file shares.
D. Azure Front Door instance: Azure Front Door is a global, scalable entry-point for web applications and services. It is not relevant to accessing files on Azure File Shares.
DRAG DROP
You are planning an Azure solution that will host production databases for a high-performance application. The solution will include the following components:
✑ Two virtual machines that will run Microsoft SQL Server 2016, will be deployed to different data centers in the same Azure region, and will be part of an Always On availability group.
✑ SQL Server data that will be backed up by using the Automated Backup feature of the SQL Server IaaS Agent Extension (SQLIaaSExtension)
You identify the storage priorities for various data types as shown in the following table.
| Data type | Storage priority |
|---|---|
| Operating system | Speed and availability |
| Databases and logs | Speed and availability |
| Backups | Lowest cost |
Which storage type should you recommend for each data type? To answer, drag the appropriate storage types to the correct data types. Each storage type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Storage Types
A geo-redundant storage (GRS) account
A locally-redundant storage (LRS) account
A premium managed disk
A standard managed disk
Answer Area
Operating system:
Databases and logs:
Backups:
Understanding the Requirements
High-Performance Application: The application demands high speed and availability.
SQL Server Always On: Data is critical and must be resilient and highly available.
Automated Backups: Backups are important but not as critical as the operational data.
Storage Priorities:
Operating System: Speed and availability.
Databases and Logs: Speed and availability.
Backups: Lowest cost.
Analyzing the Storage Options
A geo-redundant storage (GRS) account:
Pros:
Provides data replication across a secondary region.
Best for disaster recovery and high availability.
Cons:
Highest cost among the storage options.
Higher latency than locally redundant storage (LRS) or premium storage.
Use Case: Best for backups when recovery from a regional outage is critical, or when backups need to be available from a different location.
A locally-redundant storage (LRS) account:
Pros:
Lowest cost storage.
Cons:
Data redundancy is limited to within the same data center.
Use Case: Suitable for backups where availability is less of a concern and lowest cost is the primary priority.
A premium managed disk:
Pros:
Highest performance with SSD storage.
Designed for high IOPS and low latency.
Cons:
Highest cost.
Use Case: Ideal for operating system disks, databases, and logs for high-performance applications.
A standard managed disk:
Pros:
Lower cost than premium disks.
Cons:
Uses HDD storage, offering less performance than SSD storage.
Use Case: Suitable for less performance-sensitive workloads and backups, where cost is an important factor.
Matching Storage to Data Types
Here’s how we should match the storage types:
Operating system:
Premium managed disk is the correct option. The operating system requires high-speed disk access for good virtual machine performance.
Databases and logs:
Premium managed disk is the correct option. Databases and logs require very low-latency and high IOPS. Premium disks are the only disks that provide these performance requirements.
Backups:
A locally-redundant storage (LRS) account is the best option. The automated backup configuration for SQL Server (SQLIaaSExtension) can use an LRS storage account for backups, which satisfies the lowest-cost priority.
Answer Area
| Data type | Storage type |
|---|---|
| Operating system | A premium managed disk |
| Databases and logs | A premium managed disk |
| Backups | A locally-redundant storage (LRS) account |
Important Notes for the AZ-304 Exam
Managed Disks vs Unmanaged Disks: Know the difference between them and be aware that managed disks are the default option and almost always recommended.
Premium SSD vs Standard HDD: Understand the use cases of Premium disks for high IOPS/low-latency and Standard for cost sensitive workloads.
Storage Redundancy Options: Understand the difference between LRS, GRS, ZRS, and how to choose the best options for availability and durability requirements.
SQL Server on Azure VMs: Know best practices for SQL Server VM deployments including storage and backup configuration.
Performance Needs: Recognize which workloads need performance (like databases and operating systems) and which can tolerate lower performance and be cost-optimized (backups).
You have 200 resource groups across 20 Azure subscriptions.
Your company's security policy states that the security administrator must verify all assignments of the Owner role for the subscriptions and resource groups once a month. All assignments that are not approved by the security administrator must be removed automatically. The security administrator must be prompted every month to perform the verification.
What should you use to implement the security policy?
Access reviews in Identity Governance
role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Identity Secure Score in Azure Security Center
the user risk policy Azure Active Directory (Azure AD) Identity Protection
Understanding the Requirements
Scope: 20 Azure subscriptions and 200 resource groups.
Policy: Monthly verification of Owner role assignments.
Verification: A security administrator must approve or remove role assignments.
Automation: Unapproved assignments should be automatically removed.
Monthly Reminders: Security administrator must be prompted each month for verification.
Analyzing the Options
Access reviews in Identity Governance:
Pros:
Role Assignment Review: Specifically designed for reviewing and managing role assignments, including the Owner role.
Scheduled Reviews: Can be configured to run monthly.
Automatic Removal: Supports automatic removal of assignments not approved by the reviewer.
Reviewer Reminders: Notifies designated reviewers (security administrator) when reviews are due.
Scope: Can be used for both subscriptions and resource groups.
Cons:
Requires correct configuration of the governance policy and assignments to ensure the policy is enforced.
Verdict: This is the correct option as it directly meets all the requirements.
Role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM):
Pros:
Allows for just-in-time (JIT) role elevation.
Cons:
Does not directly facilitate regular reviews of role assignments.
PIM is generally used for temporary, just-in-time access; it does not address the requirement for recurring review and removal of assignments.
Verdict: Not suitable. Does not fulfil the requirement for monthly verification of role assignments.
Identity Secure Score in Azure Security Center:
Pros:
Provides a security score based on configurations and recommendations.
Cons:
Does not manage, monitor, or remove role assignments.
Only reports on your security posture; it does not take action to remove permissions.
Verdict: Not suitable. It is only used to monitor your posture.
The user risk policy in Azure Active Directory (Azure AD) Identity Protection:
Pros:
Detects and manages user risk based on suspicious activities.
Cons:
Does not manage role assignments; it is only used for user-based risks, not for permissions.
Not relevant for the requirements for scheduled reviews of role assignments.
Verdict: Not suitable. Not used for role assignment reviews.
Recommendation
The best solution is:
Access reviews in Identity Governance
Explanation
Designed for Role Assignment Reviews: Access reviews are specifically built for reviewing and managing user access to resources.
Scheduled Monthly Reviews: You can configure the access reviews to occur every month.
Automatic Remediation: Unapproved role assignments can be automatically removed, which fulfills the security policy requirement (see the Graph API sketch after this list).
Notifications: The security administrator will be notified when the monthly review is due and will be required to take action, or the review will complete automatically.
Comprehensive Scope: Access reviews can be configured at the subscription and resource group levels.
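For orientation, access review schedules are exposed through the Microsoft Graph API. The sketch below shows roughly what a monthly, auto-applying review definition looks like using plain `requests`; the endpoint is real, but treat the payload values (scope query, reviewer ID, dates) as illustrative placeholders and consult the Graph `accessReviewScheduleDefinition` documentation for the exact schema:

```python
import requests

TOKEN = "<access-token-with-AccessReview.ReadWrite.All>"  # placeholder

definition = {
    "displayName": "Monthly review of Owner role assignments",
    # Illustrative scope: the subscription (or resource group) under review.
    "scope": {"query": "/subscriptions/<sub-id>", "queryType": "MicrosoftGraph"},
    "reviewers": [
        {"query": "/users/<security-admin-object-id>", "queryType": "MicrosoftGraph"}
    ],
    "settings": {
        "autoApplyDecisionsEnabled": True,  # remove unapproved assignments
        "defaultDecision": "Deny",
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 1},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=definition,
)
resp.raise_for_status()
```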
Important Notes for the AZ-304 Exam
Identity Governance: Know that Identity Governance provides access reviews and other features for managing user access.
Access Reviews: Understand how to use access reviews for recurring role assignment validation.
Privileged Identity Management (PIM): Know when to use PIM for JIT role activation and when it is not suitable, such as in this scenario.
Azure Security Center: Understand that it shows your security posture but does not provide a way to review role assignments; it only recommends remediation steps.
Azure AD Identity Protection: Understand its purpose in monitoring and dealing with user risk.
Role Assignments: Know that RBAC is used to control roles and that they can be assigned at multiple levels in Azure.
Automation: Be aware of how Azure Governance tools can help automate security tasks, such as removing assignments and sending out alerts.
Your company purchases an app named App1.
You need to recommend a solution to ensure that App1 can read and modify access reviews.
What should you recommend?
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.
Understanding the Requirements
App1 Functionality: Needs to read and modify access reviews.
Azure Environment: Using Azure Active Directory (Azure AD).
Authorization: Must be authorized to perform these actions.
Analyzing the Options
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
Pros:
Application Registration: The correct way to enable an application to be able to access protected resources in Azure AD.
Microsoft Graph API: The Microsoft Graph API is the correct API to access Azure AD, including access reviews.
Delegated Permissions: Permissions to access Microsoft Graph APIs must be delegated to applications, and this can be done using Azure AD application registrations.
Cons:
None. This is the correct approach.
Verdict: This is the correct solution.
From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.
Pros:
Application Registration: Required to allow your app to integrate with Azure.
Cons:
Access Control (IAM): IAM is used for resource-level access control and not for delegating permissions for application access to Azure AD or Graph API resources.
Delegating access to specific APIs such as the Microsoft Graph API is not performed through the IAM blade.
Verdict: This is incorrect. IAM is not used to delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
Pros:
API Management is useful when you want to expose your app as a third-party API.
Cons:
API Management: Not required for App1 to interact with the Microsoft Graph API.
Does not support direct delegation of application permissions.
Verdict: This is incorrect. API Management is not the correct service for this task.
From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.
Pros:
API Management is useful when you want to expose your app as a third-party API.
Cons:
API Management: Not required for App1 to interact with the Graph API.
IAM: IAM is not used to delegate access to the Graph API.
Verdict: This is incorrect. API Management is not the correct service, and IAM is not the correct way to configure delegation for the Graph API.
Recommendation
The correct recommendation is:
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
Explanation
Application Registration: Registering App1 in Azure AD creates an application object which represents your application and is used to identify your application within the directory.
Microsoft Graph API: The Microsoft Graph API is the unified endpoint for accessing Microsoft 365, Azure AD and other Microsoft cloud resources. Access reviews are also exposed through this API.
Delegated Permissions: You must delegate permissions to allow App1 to access the Graph API. By providing delegated permissions through the application registration, you allow the app to access resources on behalf of the logged in user. In the case of app-only access, this can be configured by granting application permissions rather than delegated permissions.
Authorization: After App1 is registered with delegated permissions, it is allowed to perform actions against the Graph API, such as reading and modifying access reviews (a token-acquisition sketch follows this list).
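As a sketch of what this looks like from App1's side, the snippet below uses the `msal` library to acquire a delegated token for the access reviews scope. The client and tenant IDs are placeholders taken from a hypothetical App1 registration:

```python
import msal

CLIENT_ID = "<app1-client-id>"   # placeholder from the app registration
TENANT_ID = "<tenant-id>"        # placeholder

app = msal.PublicClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# Delegated permissions: the signed-in user consents, and App1 then acts
# on that user's behalf when calling the Graph access reviews endpoints.
result = app.acquire_token_interactive(
    scopes=["https://graph.microsoft.com/AccessReview.ReadWrite.All"]
)

if "access_token" in result:
    print("Token acquired; App1 can call the Microsoft Graph API.")
else:
    print(result.get("error_description"))
```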
Important Notes for the AZ-304 Exam
Application Registration: Know how to register applications in Azure AD and why it is a required step to allow apps to access resources.
Microsoft Graph API: Understand that the Graph API is the primary way to access Microsoft 365 and Azure AD resources, including access reviews.
Delegated Permissions vs. Application Permissions: Be able to differentiate between these two types of permissions. Delegated permissions require an authenticated user. Application permissions are app-only and do not need a logged in user.
Access Control (IAM): Know that IAM is for resource level access and not for granting permission for applications.
API Management: Understand its purpose in publishing and securing APIs, but note that it is not necessary in this use case.
Security Principles: Understand the best practices for securing access to resources such as ensuring that the app is registered and given correct permissions.
HOTSPOT
Your company deploys several Linux and Windows virtual machines (VMs) to Azure. The VMs are deployed with the Microsoft Dependency Agent and the Log Analytics Agent installed by using Azure VM extensions. On-premises connectivity has been enabled by using Azure ExpressRoute.
You need to design a solution to monitor the VMs.
Which Azure monitoring services should you use? To answer, select the appropriate Azure monitoring services in the answer area. NOTE: Each correct selection is worth one point.
Scenario | Azure Monitoring Service
Analyze Network Security Group (NSG) flow logs for VMs
attempting Internet access:
Azure Traffic Analytics
Azure ExpressRoute Monitor
Azure Service Endpoint Monitor
Azure DNS Analytics
Visualize the VMs with their different processes and
dependencies on other computers and external processes:
Azure Service Map
Azure Activity Log
Azure Service Health
Azure Advisor
Understanding the Requirements
Monitoring Scope: Linux and Windows VMs in Azure.
Connectivity: On-premises connectivity via Azure ExpressRoute.
Microsoft Dependency Agent and Log Analytics Agent: Already deployed to VMs via extensions.
Monitoring Scenarios:
Analyzing NSG flow logs for VMs attempting Internet access.
Visualizing VMs with processes and dependencies.
Analyzing the Options
Azure Traffic Analytics:
Pros:
Analyzes NSG flow logs to identify traffic patterns and security risks.
Can detect VMs attempting Internet access by inspecting the flow logs.
Provides visualisations of traffic patterns for easy interpretation.
Cons:
Does not provide dependencies of VMs or processes.
Verdict: The correct service for the first scenario.
Azure ExpressRoute Monitor:
Pros:
Monitors the health and performance of ExpressRoute circuits.
Cons:
Does not analyse the flow logs or provide visibility of VM processes and dependencies.
Verdict: Not suitable for the described requirements.
Azure Service Endpoint Monitor:
Pros:
Monitors endpoints in Azure and provides status for services.
Cons:
Does not monitor the flow logs or provide visibility of VM processes and dependencies.
Verdict: Not suitable for the described requirements.
Azure DNS Analytics:
Pros:
Provides insights into DNS performance and traffic.
Cons:
Does not monitor the flow logs or provide visibility of VM processes and dependencies.
Verdict: Not suitable for the described requirements.
Azure Service Map:
Pros:
Automatically discovers application components on Windows and Linux systems.
Visualizes VMs, processes, and dependencies.
Requires the Microsoft Dependency Agent which has already been installed.
Cons:
Not used to monitor NSG flow logs.
Verdict: Correct choice for the second scenario.
Azure Activity Log:
Pros:
Provides audit logs and tracks events at the subscription and resource level.
Cons:
Does not monitor NSG flow logs or provide process/dependency visualization.
Verdict: Not suitable. It is more related to platform events.
Azure Service Health:
Pros:
Provides insights into the health of Azure services.
Cons:
Does not monitor NSG flow logs or provide process/dependency visualization for individual VMs.
Verdict: Not suitable for the described requirements.
Azure Advisor:
Pros:
Provides recommendations on cost, performance, reliability, and security.
Cons:
Does not monitor the flow logs or provide visibility of VM processes and dependencies.
Verdict: Not suitable for the described requirements.
Answer Area
| Scenario | Azure Monitoring Service |
|---|---|
| Analyze Network Security Group (NSG) flow logs for VMs attempting Internet access | Azure Traffic Analytics |
| Visualize the VMs with their different processes and dependencies on other computers and external processes | Azure Service Map |
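For flavor, Traffic Analytics writes its results to the Log Analytics workspace, where they can be queried with KQL. Here is a minimal sketch using the `azure-monitor-query` SDK; the table and column names come from the documented Traffic Analytics schema, while the workspace ID is a placeholder:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL: outbound flows from VMs to the public internet, per the Traffic
# Analytics schema (custom columns carry the *_s suffix).
QUERY = """
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowDirection_s == "O"
| where FlowType_s in ("ExternalPublic", "MaliciousFlow")
| summarize Flows = count() by VM_s
"""

response = client.query_workspace(
    "<log-analytics-workspace-id>",  # placeholder
    QUERY,
    timespan=timedelta(days=7),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```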
Important Notes for the AZ-304 Exam
Traffic Analytics: Understand how to use Traffic Analytics to analyze NSG flow logs for security and network traffic monitoring.
Service Map: Know that Service Map can be used to map services and their dependencies.
Microsoft Dependency Agent: Know that Service Map requires this dependency agent to be deployed on the VMs.
Log Analytics Agent: Be aware that these agents collect logs and forward them to a Log Analytics workspace, and that they are a prerequisite for some of these solutions.
Azure Monitor: Know the purpose of all Azure Monitoring services in the overall Azure monitoring landscape.
Application Monitoring vs. Infrastructure Monitoring: Understand that there are a number of monitoring solutions in Azure that target different services. For this question you will need to identify the solution that facilitates monitoring the infrastructure.
You store web access logs data in Azure Blob storage.
You plan to generate monthly reports from the access logs.
You need to recommend an automated process to upload the data to Azure SQL Database every month.
What should you include in the recommendation?
Azure Data Factory
Data Migration Assistant
Microsoft SQL Server Migration Assistant (SSMA)
AzCopy
Understanding the Requirements
Source: Web access logs in Azure Blob storage.
Destination: Azure SQL Database.
Frequency: Monthly.
Automation: The process needs to be automated.
Transformation: No complex transformations are specified, so the service doesn’t need to be a powerful ETL tool.
Analyzing the Options
Azure Data Factory (ADF):
Pros:
Automated Data Movement: Designed to move data between different sources and sinks.
Scheduling: Supports scheduling pipelines for recurring execution (monthly).
Integration: Has built-in connectors for Blob storage and SQL Database.
Scalable: Can handle various data volumes and complexities.
Transformation: Supports data transformation if needed.
Cons:
Slightly more complex to configure than other options; however, a simple ADF pipeline is quite easy to set up.
Verdict: This is the best fit. It can orchestrate the entire process from data extraction to data loading, and scheduling.
Data Migration Assistant (DMA):
Pros:
Helps with migrating databases to Azure, including schema and data migration.
Cons:
Not designed for continuous, scheduled data movement.
More of an interactive tool rather than an automated service.
Not suited to ingest logs into an existing database.
Verdict: Not suitable for recurring data uploads. It is more suited for migrations.
Microsoft SQL Server Migration Assistant (SSMA):
Pros:
Helps with migrating databases from on-premises to Azure SQL Database.
Cons:
Not designed for recurring data uploads from Blob Storage.
Primarily used for database migrations not for data ingestion.
Verdict: Not a valid option. This is used for migrations and not for scheduled data uploads.
AzCopy:
Pros:
Command-line tool to copy data to and from Azure Storage.
Cons:
Not a managed service; it does not handle scheduling and has to be scheduled externally using OS tools (e.g., cron or Task Scheduler).
Does not support loading data directly into a database; you would need to build a custom solution to load the data.
Does not support any data transformation logic.
Verdict: Not the best option. Requires building a custom solution and does not directly fulfil the requirement to load data into a database.
Recommendation
The correct recommendation is:
Azure Data Factory
Explanation
Automation and Scheduling: Azure Data Factory allows you to create pipelines that can be scheduled to run monthly.
Built-in Connectors: It has connectors for both Azure Blob Storage (to read the logs) and Azure SQL Database (to load data).
Data Integration: It integrates all steps of data extraction, transformation (optional), and loading into a single pipeline.
Monitoring: It provides monitoring and logging for debugging and audit purposes.
Scalability: It can handle large amounts of data if required and can scale up resources as needed (a pipeline sketch follows this list).
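Here is a compressed sketch of what that pipeline looks like when created with the `azure-mgmt-datafactory` SDK (mirroring Microsoft's Python quickstart). It assumes the factory, its linked services, and two datasets with the hypothetical names `BlobLogsDataset` and `SqlReportTable` already exist:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSource, CopyActivity, DatasetReference, PipelineResource, SqlSink,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Copy activity: read the month's logs from Blob storage, write to SQL.
# SqlSink is used here as a generic SQL sink; newer SDK versions also
# offer AzureSqlSink for Azure SQL Database specifically.
copy = CopyActivity(
    name="CopyLogsToSql",
    inputs=[DatasetReference(reference_name="BlobLogsDataset")],
    outputs=[DatasetReference(reference_name="SqlReportTable")],
    source=BlobSource(),
    sink=SqlSink(),
)

client.pipelines.create_or_update(
    "<resource-group>", "<factory-name>", "MonthlyLogLoad",
    PipelineResource(activities=[copy]),
)
# A schedule trigger (frequency "Month", interval 1) would then be
# attached to run the pipeline automatically every month.
```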
Important Notes for the AZ-304 Exam
Azure Data Factory (ADF): Understand its capabilities as an ETL and data orchestration tool.
Automated Data Movement: Know how to set up ADF pipelines for recurring data movement.
Data Integration Tools: Familiarize yourself with the available connectors for different data sources and destinations.
Data Migration vs. Data Ingestion: Understand the difference between tools that are used for migration (e.g. DMA, SSMA) and tools for scheduled data uploads (e.g. ADF).
AzCopy: Know the purpose of AzCopy, and its use cases.
Transformation: Understand that transformation is often a requirement and that you can use data factory for this if needed.
Ease of Use: Although ADF is not the simplest tool, it is the easiest to maintain for scheduled recurring events when compared to a custom solution.
You are designing a data protection strategy for Azure virtual machines. All the virtual machines use managed disks.
You need to recommend a solution that meets the following requirements:
- The use of encryption keys is audited.
- All the data is encrypted at rest always.
- You manage the encryption keys, not Microsoft.
What should you include in the recommendation?
Azure Disk Encryption
Azure Storage Service Encryption
BitLocker Drive Encryption (BitLocker)
client-side encryption
Understanding the Requirements
Managed Disks: The virtual machines use Azure managed disks.
Encryption at Rest: All data must be encrypted when stored on disk.
Customer-Managed Keys: You must manage the encryption keys, not Microsoft.
Auditing: The use of encryption keys must be auditable.
Analyzing the Options
Azure Disk Encryption (ADE):
Pros:
Encrypts managed disks for both Windows and Linux VMs.
Supports customer-managed keys (CMK) with Azure Key Vault.
Data is encrypted at rest, meeting the security requirement.
Cons:
Does not audit key usage by itself; auditing requires additional configuration through Azure Key Vault logging.
Verdict: Does not fully satisfy the requirements on its own due to the lack of key-usage auditing.
Azure Storage Service Encryption (SSE):
Pros:
Encrypts data at rest in Azure storage (including managed disks) by default.
Supports Microsoft-managed keys or customer-managed keys.
Cons:
Provides platform-level encryption for data at rest, but does not provide in-guest encryption of the VM disks the way ADE does.
Does not support the auditing of key usage.
Verdict: Does not provide full coverage of encryption for managed disks, and does not support auditing, therefore not a suitable choice.
BitLocker Drive Encryption (BitLocker):
Pros:
Encrypts drives in Windows operating systems.
Cons:
Would require manual setup and management for every VM.
Does not support auditing of key usage.
Does not support customer managed keys out of the box.
Verdict: Not the correct option. Too much manual overhead, lacks key auditing, and can be complex to manage.
Client-Side Encryption:
Pros:
The data is encrypted before it is sent to Azure.
The encryption key is managed by the client.
Cons:
This method requires custom implementations and additional effort from the client.
Does not support management or auditing of the keys in Azure.
Verdict: Not suitable. Requires custom implementations, and is not a managed solution.
Recommendation
The recommendation should be Azure Disk Encryption with customer-managed keys in Azure Key Vault. This is the closest match to the requirements; however, additional configuration is needed to implement the auditing requirement.
Explanation
Azure Disk Encryption (ADE): ADE provides encryption for both OS and data disks, using platform-managed keys or customer-managed keys.
Customer-Managed Keys (CMK): By using CMK with Azure Key Vault, you maintain full control over your encryption keys, which satisfies that requirement.
Azure Key Vault Auditing: Azure Key Vault can log every access to keys and secrets, and these logs can be monitored through Azure Log Analytics.
Encryption at Rest: The data at rest on the managed disks is always encrypted using the configured CMK keys.
Full coverage: This method fully encrypts all disks for the VM.
Steps to implement auditing (a sketch of the diagnostic-settings step follows this list):
Create an Azure Key Vault
Create a customer managed key in Azure Key Vault.
Configure ADE for the VM to use the customer managed key.
Configure Diagnostic settings on Azure Key Vault to send all logs to Azure Log Analytics.
Configure alerts on Key Vault events using Azure Log Analytics to ensure that you are notified when keys are used or modified.
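As a sketch of step 4, the `azure-mgmt-monitor` SDK can wire Key Vault's AuditEvent log category to a Log Analytics workspace; the resource IDs below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

VAULT_ID = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
            "/providers/Microsoft.KeyVault/vaults/<vault-name>")
WORKSPACE_ID = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
                "/providers/Microsoft.OperationalInsights/workspaces/<workspace>")

# Send Key Vault audit events (every key and secret access) to Log Analytics.
client.diagnostic_settings.create_or_update(
    resource_uri=VAULT_ID,
    name="keyvault-audit",
    parameters=DiagnosticSettingsResource(
        workspace_id=WORKSPACE_ID,
        logs=[LogSettings(category="AuditEvent", enabled=True)],
    ),
)
```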
Important Notes for the AZ-304 Exam
Azure Disk Encryption (ADE): Know the options for ADE (platform-managed vs. customer-managed keys) and their implications.
Azure Key Vault: Understand its purpose for storing and managing secrets, keys, and certificates.
Encryption at Rest: Be aware of the different ways to achieve encryption at rest in Azure storage and databases.
Customer-Managed Keys: Know the benefits and implications of using customer-managed keys (CMK) for encryption.
Auditing: Be aware that auditing is a critical aspect of encryption and compliance.
Managed Disks: Understand that managed disks are now the default type in Azure and that encryption applies to them.
Your company has the divisions shown in the following table.
| Division | Azure subscription | Azure Active Directory (Azure AD) tenant |
|---|---|---|
| East | Sub1 | east.contoso.com |
| West | Sub2 | west.contoso.com |
Sub1 contains an Azure web app that runs an ASP.NET application named App1. App1 uses the Microsoft identity platform (v2.0) to handle user authentication. Users from east.contoso.com can authenticate to App1.
You need to recommend a solution to allow users from west.contoso.com to authenticate to App1.
What should you recommend for the west.contoso.com Azure AD tenant?
guest accounts
an app registration
pass-through authentication
a conditional access policy
Understanding the Requirements
App1: An ASP.NET application using the Microsoft identity platform (v2.0) for authentication.
Current Authentication: east.contoso.com users can already authenticate to App1.
New Authentication: Users from west.contoso.com must also be able to authenticate to App1.
Authentication: Using Microsoft Identity platform and not on-premises authentication.
Azure AD Tenants: The different divisions have different Azure AD tenants.
Analyzing the Options
Guest accounts:
Pros:
Cross-Tenant Access: Allows users from one Azure AD tenant to access resources in another Azure AD tenant.
Easy to Setup: Relatively easy to create and manage.
Azure AD Integration: Fully compatible with Azure AD and Microsoft identity platform (v2.0).
App Access: This will allow the users to be added to the east.contoso.com tenant and allow access to the app.
Cons:
Requires users to be invited.
Verdict: This is the correct solution.
An app registration:
Pros:
Required for all applications that require authentication through Azure AD.
Cons:
The app registration is already done, and an additional app registration is not required.
Verdict: Not required. An app registration is already in place.
Pass-through authentication:
Pros:
Allows users to use their on-premises password to sign in to Azure AD.
Cons:
Not suitable in this scenario; it is designed for on-premises password validation and is not relevant to cloud identity authentication.
Not designed for this use case, which is authentication across different Azure AD tenants.
Verdict: Not a good solution. It is not applicable to cloud authentication and is designed for on-prem identity.
A conditional access policy:
Pros:
Used to enforce access control based on various conditions.
Cons:
Does not enable the required functionality to allow a new tenant access to an existing application.
Used to control which users can access a particular resource, but the user must be configured to authenticate first.
Verdict: Not the correct choice. Conditional access can be added later to restrict which users can access the app, but it will not provide the access needed for the app to work for the new tenant.
Recommendation
The correct recommendation is:
Guest accounts
Explanation
Azure AD Guest Accounts: Guest accounts in Azure AD allow you to invite external users into your Azure AD tenant. These users can then access the applications that are hosted on that tenant.
Cross-Tenant Access: Guest accounts enable cross-tenant collaboration, which is exactly what is needed in this scenario.
Microsoft Identity Platform Compatibility: Guest accounts fully integrate with the Microsoft identity platform (v2.0), making them compatible with the authentication mechanisms used by App1.
Access to the App: After a user is added as a guest in the east.contoso.com tenant, they can authenticate to the app using their existing credentials from the west.contoso.com tenant (an invitation sketch follows this list).
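As a rough sketch, a guest invitation can be created through the Microsoft Graph invitations API; the token, addresses, and redirect URL below are placeholders:

```python
import requests

TOKEN = "<token-with-User.Invite.All>"  # placeholder

invitation = {
    "invitedUserEmailAddress": "user@west.contoso.com",
    # Where the guest lands after redeeming the invitation; here, App1.
    "inviteRedirectUrl": "https://app1.azurewebsites.net",
    "sendInvitationMessage": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=invitation,
)
resp.raise_for_status()
print(resp.json()["invitedUser"]["id"])  # object ID of the new guest
```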
Important Notes for the AZ-304 Exam
Azure AD Guest Accounts: Understand the purpose of Azure AD guest accounts for cross-tenant collaboration.
Cross-Tenant Access: Know when and how to configure cross-tenant access with Azure AD.
Microsoft Identity Platform (v2.0): Understand that this platform is used for authentication of modern web and mobile applications.
Application Registrations: Know that an app registration is required to allow applications to access resources from Azure AD.
Pass-through Authentication: Understand that this is used to authenticate on-prem identities, not cloud identities.
Conditional Access: Know that this can control access, but cannot provide access on its own.
Authentication: Have a good understanding of authentication in Azure and how to configure it to work across multiple tenants.
HOTSPOT
You are designing a solution for a stateless front-end application named Application1.
Application1 will be hosted on two Azure virtual machines named VM1 and VM2.
You plan to load balance connections to VM1 and VM2 from the Internet by using one Azure load balancer.
You need to recommend the minimum number of required public IP addresses.
How many public IP addresses should you recommend using for each resource? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Load balancer:
0
1
2
3
VM1:
0
1
2
3
VM2:
0
1
2
3
Understanding the Requirements
Application1: Stateless front-end application.
Hosting: On two Azure VMs (VM1 and VM2).
Load Balancing: Incoming traffic from the Internet must be load balanced across the two VMs.
Public IP Addresses: The goal is to determine the minimum number of public IP addresses required.
Analyzing the Setup
Load Balancer: An Azure load balancer, which provides the entry point for internet traffic and distributes it between the virtual machines.
Virtual Machines: VM1 and VM2 host the application. In this scenario, we want to know how many public IP addresses are required for each VM.
Public IP Addresses Needed
Load Balancer:
A load balancer needs a public IP address to be accessible from the internet. This IP address will be the entry point that the outside world connects to, and the load balancer will handle directing traffic to the back end VMs.
You would typically use a single IP address for this type of scenario.
Therefore, the correct answer is 1.
Virtual Machines (VM1 and VM2):
The application is being load balanced, so the virtual machines do not need to be individually exposed to the public internet.
The load balancer directs traffic to the virtual machines using their private IP addresses.
The VMs therefore do not require public IP addresses.
Therefore, the correct answer is 0.
Answer Area
| Resource | Public IP addresses |
|---|---|
| Load balancer | 1 |
| VM1 | 0 |
| VM2 | 0 |
Explanation
Load Balancer:
The load balancer needs a single public IP address for internet access. This is the public entry point for all inbound connections. The Load Balancer is responsible for directing the traffic to the VMs in a balanced way.
Virtual Machines (VM1 and VM2):
Since the traffic reaches the VMs via the load balancer, they do not require public IP addresses.
The load balancer will connect to the virtual machines using their private IP address, which are on the same network as the Load Balancer.
This allows the virtual machines to be protected from direct internet access, as the public facing IP is managed by the Load Balancer.
Important Notes for the AZ-304 Exam
Azure Load Balancer: Understand the role of load balancers in distributing traffic across VMs.
Public IP Addresses: Know when public IP addresses are required and when they are not.
Private IP Addresses: Understand that communication can happen within a virtual network using private IP addresses.
Stateless Applications: Recognize the purpose of stateless applications, and how load balancers are used.
Load Balancer Configuration: Know how load balancers work and how back end pools are configured to handle the traffic.
Security: Remember that it is a best practice not to expose VMs directly to the internet; a load balancer with a public IP should be used instead.
You need to deploy resources to host a stateless web app in an Azure subscription.
The solution must meet the following requirements:
- Provide access to the full .NET framework.
- Provide redundancy if an Azure region fails.
- Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy a web app in an Isolated App Service plan.
Does this meet the goal?
Yes
No
Understanding the Requirements
Stateless Web App: The application is stateless.
Full .NET Framework: The application requires access to the full .NET Framework.
Regional Redundancy: The application must continue to function if an Azure region fails.
OS Access: Administrators need access to the operating system to install custom dependencies.
Analyzing the Proposed Solution: Isolated App Service Plan
Isolated App Service Plan: This plan provides the highest level of isolation and resources for a web app.
Now, let’s evaluate how the solution meets each requirement:
Provide access to the full .NET framework.
Analysis: An isolated app service plan allows you to select the operating system (Windows) and provides the full .NET framework, therefore meeting the requirement.
Verdict: Meets Requirement
Provide redundancy if an Azure region fails.
Analysis: Isolated App Service plans do not provide automatic multi-region redundancy. You would need to deploy the web app and app service plan to multiple regions, and manually configure traffic redirection using a tool like Azure Traffic Manager or Front Door.
Verdict: Does NOT meet requirement
Grant administrators access to the operating system to install custom application dependencies.
Analysis: App Service, including Isolated plans, does not grant administrators access to the underlying operating system. You are restricted to installing dependencies within the supported context of the web app.
Verdict: Does NOT meet requirement
Conclusion
The Isolated App Service plan meets one of the three requirements. Therefore, the answer is No.
Reasoning:
While an Isolated App Service plan offers substantial resources and isolation, it does not give administrators access to the underlying operating system or provide automatic redundancy in the event of a regional outage. These limitations make the solution unsuitable for the requirements.
Correct Answer
No
Important Notes for the AZ-304 Exam
Azure App Service: Understand the different App Service plans (Free, Shared, Basic, Standard, Premium, and Isolated) and their features.
.NET Framework: Be aware of the support for the full .NET Framework in App Service plans and the limitations.
Regional Redundancy: Know how to achieve regional redundancy using traffic managers and other services.
OS Access: Remember that App Service generally does not provide access to the underlying OS.
Use Cases: Know when to select Azure VMs over App Services, particularly when you need control of the underlying operating system.
Service Selection: Know how to select the correct Azure service that fits all the requirements.
Your network contains an on-premises Active Directory domain.
The domain contains the Hyper-V clusters shown in the following table.
| Name | Number of nodes | Number of virtual machines running on cluster |
|---|---|---|
| Cluster1 | 4 | 20 |
| Cluster2 | 3 | 15 |
You plan to implement Azure Site Recovery to protect six virtual machines running on Cluster1 and three virtual machines running on Cluster2. Virtual machines are running on all Cluster1 and Cluster2 nodes.
You need to identify the minimum number of Azure Site Recovery Providers that must be installed on premises.
How many Providers should you identify?
1
7
9
16
Understanding the Requirements
On-Premises Environment: An on-premises Active Directory domain with two Hyper-V clusters.
Azure Site Recovery: Used to protect virtual machines.
Protected VMs: Six VMs from Cluster1 and three VMs from Cluster2.
Goal: Determine the minimum number of ASR Providers needed.
Understanding Azure Site Recovery Providers
Purpose: The Azure Site Recovery Provider is a component installed on each Hyper-V host that communicates with Azure Site Recovery to facilitate replication and failover of virtual machines.
Placement: The Provider is installed on each Hyper-V host that is part of a cluster that contains virtual machines to be protected.
Minimum Requirement: You need a Provider installed on every Hyper-V host that runs virtual machines to be protected.
Analyzing the Scenario
Cluster1: Has 4 nodes. Six virtual machines are to be protected.
Cluster2: Has 3 nodes. Three virtual machines are to be protected.
Calculating the Required Providers
Cluster1: Although only six virtual machines from Cluster1 are being protected, VMs run on all nodes in the cluster, so every node needs the ASR Provider installed.
Since there are four nodes in the cluster, a minimum of four Providers is required for the virtual machines in Cluster1.
Cluster2: Only three virtual machines need to be protected in Cluster2, but again the nodes that host these virtual machines require the ASR Provider.
Since there are three nodes in the cluster, a minimum of three Providers is required for the virtual machines in Cluster2.
Total Providers: The total minimum number of ASR Providers is therefore 4 + 3 = 7.
Note that because clustered VMs can move between nodes, even if only one VM were protected on each cluster, the total number of Providers would still be 4 + 3 = 7.
Correct Answer
7
Important Notes for the AZ-304 Exam
Azure Site Recovery (ASR): Understand the purpose and function of ASR for disaster recovery.
ASR Provider: Know that the ASR Provider needs to be installed on every Hyper-V host in order to protect its virtual machines.
Hyper-V Clusters: Understand how to use Azure Site Recovery with Hyper-V clusters.
Agent Requirements: You need to know which components must be deployed on the virtual machines as well as on the Hyper-V hosts.
Deployment Requirements: You should know the pre-requisites for deploying a DR strategy in Azure, and be aware of any limitations.
Minimum Requirements: ASR needs a minimum of one Provider per Hyper-V host that contains VMs that need to be protected by ASR.
ASR Components: Be aware of the different components required for an ASR setup.
HOTSPOT
You are designing a cost-optimized solution that uses Azure Batch to run two types of jobs on Linux nodes. The first job type will consist of short-running tasks for a development environment. The second job type will consist of long-running Message Passing Interface (MPI) applications for a production environment that requires timely job completion.
You need to recommend the pool type and node type for each job type. The solution must minimize compute charges and leverage Azure Hybrid Benefit whenever possible.
What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
First job:
Batch service and dedicated virtual machines
User subscription and dedicated virtual machines
User subscription and low-priority virtual machines
Second job:
Batch service and dedicated virtual machines
User subscription and dedicated virtual machines
User subscription and low-priority virtual machines
First Job (Short-running, Development):
Pool Type: User subscription
Node Type: Low-priority virtual machines
Why?
User subscription: This pool allocation mode is generally simpler for development environments. Azure manages fewer resources, reducing complexity for the developer.
Low-priority virtual machines: Low-priority VMs offer significant cost savings (up to 80% compared to dedicated VMs). They are ideal for workloads that are not time-sensitive and can be interrupted, which is typical of development tasks. If Azure needs to reclaim the capacity, it will preempt these VMs. However, for short-running tasks, the risk of preemption is less impactful.
Second Job (Long-running MPI, Production):
Pool Type: Batch service
Node Type: Dedicated virtual machines
Why?
Batch service: For production workloads, especially those involving MPI and requiring timely completion, the Batch service allocation mode is preferred. It offers better control over the pool’s lifecycle and resources, and in some cases, can result in a lower cost due to how the subscription is billed.
Dedicated virtual machines: Long-running MPI applications are sensitive to interruptions. Dedicated VMs ensure that the nodes won’t be preempted, providing the stability needed for reliable and timely job completion.
Azure Hybrid Benefit:
Azure Hybrid Benefit can be applied to both dedicated and low-priority VMs in either pool type to further reduce costs if you have on-premises licenses for Windows Server or SQL Server. Because the question specifies Linux nodes, you would not be able to utilize AHB in this scenario.
Therefore, the correct answer is:
First Job: User subscription and low-priority virtual machines
Second Job: Batch service and dedicated virtual machines
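To illustrate the node-type halves of those answers, here is a minimal sketch using the `azure-batch` SDK; the account details, VM size, image, and node counts are placeholders. Note that the pool allocation mode (Batch service vs. user subscription) is chosen when the Batch account is created, not per pool:

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
from azure.batch.models import (
    ImageReference, PoolAddParameter, VirtualMachineConfiguration,
)

creds = SharedKeyCredentials("<account-name>", "<account-key>")
# Older SDK versions name this parameter base_url instead of batch_url.
client = BatchServiceClient(creds, batch_url="https://<account>.<region>.batch.azure.com")

vm_config = VirtualMachineConfiguration(
    image_reference=ImageReference(
        publisher="canonical", offer="0001-com-ubuntu-server-jammy",
        sku="22_04-lts", version="latest",
    ),
    node_agent_sku_id="batch.node.ubuntu 22.04",  # must match the image
)

# Dev pool: interruptible low-priority nodes minimize compute charges.
client.pool.add(PoolAddParameter(
    id="dev-pool", vm_size="STANDARD_D2S_V3",
    virtual_machine_configuration=vm_config,
    target_dedicated_nodes=0, target_low_priority_nodes=4,
))

# Production MPI pool: dedicated nodes avoid preemption mid-job.
client.pool.add(PoolAddParameter(
    id="prod-mpi-pool", vm_size="STANDARD_D2S_V3",
    virtual_machine_configuration=vm_config,
    target_dedicated_nodes=4,
))
```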
You have an Azure Active Directory (Azure AD) tenant named Contoso.com. The tenant contains a group named Group1. Group1 contains all the administrator user accounts.
You discover several login attempts to the Azure portal from countries where the administrator users do NOT work.
You need to ensure that all login attempts to the portal from those countries require Azure Multi-Factor Authentication (MFA).
Solution: You implement an access package.
Does this meet the goal?
Yes
No
Understanding the Requirements
Azure AD Tenant: Contoso.com
Admin Group: Group1 contains all administrator user accounts.
Problem: Login attempts from unauthorized countries.
Goal: Enforce MFA for all login attempts from these countries for administrator users.
Analyzing the Proposed Solution: Access Package
Access Package: A tool in Azure AD Identity Governance that allows you to manage access to resources (such as applications, groups, or SharePoint sites) by grouping the resources and their associated access policies together.
Let’s see if an access package meets the needs:
Enforce MFA for all login attempts to the portal from those countries.
Analysis: Access packages manage access to resources. They do not provide controls based on the user’s location or sign-in conditions, and they cannot be used to enforce MFA based on location.
Verdict: Does NOT meet requirement
Conclusion
The solution does not meet the goal, as an access package does not enforce MFA based on location. Therefore, the answer is No.
Correct Answer
No
Explanation
Access packages are used to manage access to resources. Access policies can be created to control how users are granted access to a particular resource, but they can’t be used to control authentication requirements for all login attempts from different locations.
The Correct Solution
The correct way to implement this scenario is to use a Conditional Access Policy. Conditional access policies are designed to control access to applications and services based on conditions such as:
Location (Countries/Regions)
User or Group (e.g., the administrators in Group1)
Device State
Application
With a Conditional Access Policy, you can specify that any login attempts from certain countries for users in Group1 must use MFA.
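As an illustration, a hedged sketch of what such a policy could look like when created through the Microsoft Graph Conditional Access API. The group and named-location IDs below are hypothetical placeholders, and the access token is assumed to carry the Policy.ReadWrite.ConditionalAccess permission.

```python
import requests

token = "<graph-access-token>"  # placeholder; acquire via MSAL or similar

# Hypothetical IDs for Group1 and a named location listing the risky countries.
policy = {
    "displayName": "Require MFA for Group1 sign-ins from listed countries",
    "state": "enabled",
    "conditions": {
        "users": {"includeGroups": ["<group1-object-id>"]},
        "applications": {"includeApplications": ["All"]},
        "locations": {"includeLocations": ["<named-location-id>"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {token}"},
    json=policy,
)
resp.raise_for_status()
```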
Important Notes for the AZ-304 Exam
Azure AD Conditional Access: Know the purpose and use of Conditional Access policies.
Access Packages: Understand the use cases of access packages in Azure AD Identity Governance.
MFA Enforcement: Know how to use conditional access to enforce MFA.
User and Group Scope: Know how to use conditions to target policies to specific users or groups.
Location Based Access: Understand how to configure conditional access based on geographical location.
Policy Selection: You should know when to select conditional access vs access policies and the use cases of each.
HOTSPOT
You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.
App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.
You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
The users can connect to App1 without
being prompted for authentication:
An Azure AD app registration
An Azure AD managed identity
Azure AD Application Proxy
The users can access App1 only from
company-owned computers:
A conditional access policy
An Azure AD administrative unit
Azure Application Gateway
Azure Blueprints
Azure Policy
Understanding the Requirements
App1: An Azure web app using Azure AD authentication.
Users: Company users with Windows 10 computers joined to Azure AD.
Seamless Authentication: Users should be able to connect to App1 without any prompts for their credentials.
Company-Owned Devices: Access to App1 should only be allowed from company-owned computers.
Analyzing the Options
An Azure AD app registration:
Pros:
Required for all applications that use Azure AD.
Configures authentication for the application.
Cons:
Does not enable silent sign in or restrict access based on devices.
Verdict: Not sufficient to fulfil either of the requirements.
An Azure AD managed identity:
Pros:
Provides an identity for Azure services for accessing other Azure resources.
Cons:
Not applicable for the user authentication scenario.
Verdict: Not suitable. Not used for user access.
Azure AD Application Proxy:
Pros:
Enables access to internal web applications from the internet.
Cons:
Does not manage user credentials and does not restrict access to company owned machines.
Verdict: Not relevant for this scenario.
A conditional access policy:
Pros:
Can enforce authentication policies based on conditions, such as location, device compliance and other factors.
Can enforce access restrictions to only allow access from compliant or hybrid joined devices (company owned).
Cons:
Requires careful configuration
Verdict: This is the correct answer for the “company owned” devices requirement.
An Azure AD administrative unit:
Pros:
Used to scope management permissions and policies to a subset of users.
Cons:
Does not enable silent authentication and does not restrict access to devices.
Verdict: Not suitable for these requirements.
Azure Application Gateway:
Pros:
Load balances traffic to multiple backends.
Cons:
Does not manage user credentials and does not restrict access to devices.
Verdict: Not relevant for this scenario.
Azure Blueprints:
Pros:
Used to deploy resources using pre-defined templates.
Cons:
Does not manage user credentials and does not restrict access to devices.
Verdict: Not suitable for these requirements.
Azure Policy:
Pros:
Used to enforce specific resource configurations.
Cons:
Does not manage user credentials and does not restrict access to devices.
Verdict: Not suitable for these requirements.
Recommendations
Here’s how we should match the services to the requirements:
The users can connect to App1 without being prompted for authentication:
An Azure AD app registration: configures Azure AD authentication for App1. Because the users’ Windows 10 computers are joined to Azure AD, they receive single sign-on and can connect without being prompted for credentials.
The users can access App1 only from company-owned computers:
A conditional access policy is required. Conditional Access can restrict access to only compliant or hybrid joined devices, and therefore prevent users from logging on from personal machines.
Answer Area
Requirement Recommended Solution
The users can connect to App1 without being prompted for authentication: An Azure AD app registration
The users can access App1 only from company-owned computers: A conditional access policy
Explanation
Azure AD app registration:
User Authentication: An app registration configures Azure AD authentication for the application. Combined with the Azure AD-joined Windows 10 devices, which provide single sign-on, users can connect without credential prompts.
Conditional Access Policy:
Device-Based Restriction: Conditional access can restrict access based on device compliance, hybrid-joined state, and other factors to guarantee the user is on a company owned device.
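In the Graph Conditional Access schema used in the earlier sketch, the device requirement maps to grant controls rather than a location condition. A minimal, assumed fragment:

```python
# Grant controls for the company-owned-device requirement.
# "domainJoinedDevice" = hybrid Azure AD joined; "compliantDevice" = marked
# compliant by the MDM. Either control satisfies "company-owned" here.
grant_controls = {
    "operator": "OR",
    "builtInControls": ["compliantDevice", "domainJoinedDevice"],
}
```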
Important Notes for the AZ-304 Exam
Azure AD Authentication: Know how Azure AD is used for authentication.
Conditional Access: Understand the purpose and functions of Conditional Access policies and how they can facilitate secure access based on various conditions.
Device Compliance: Know how devices can be marked as compliant or non-compliant within Azure.
Seamless Sign-in: Know that conditional access can facilitate seamless sign in with device based authentication.
Company Owned Devices: Know how conditional access can restrict access to company-owned devices only.
Policy Based Access: Understand that conditional access policies are used to enforce controls for users as they attempt to access resources.
Service Selection: Know how to select the service that best fits the requirements.
You are developing a web application that provides streaming video to users. You configure the application to use continuous integration and deployment.
The app must be highly available and provide a continuous streaming experience for users.
You need to recommend a solution that allows the application to store data in a geographical location that is closest to the user.
What should you recommend?
Azure App Service Web Apps
Azure App Service Isolated
Azure Redis Cache
Azure Content Delivery Network (CDN)
The correct answer is Azure Content Delivery Network (CDN).
Explanation:
Here’s why Azure CDN is the best recommendation and why the other options are less suitable:
Azure Content Delivery Network (CDN):
Geographical Proximity: CDNs are designed specifically to store and serve content from geographically distributed servers (edge servers) that are closer to users. When a user requests video content, the CDN automatically routes the request to the nearest edge server that has the content cached. This significantly reduces latency and improves the streaming experience by delivering data faster.
High Availability and Continuous Streaming: CDNs are built for high availability. They have multiple points of presence (POPs) globally, and if one edge server fails, users are automatically routed to another nearby edge server. This ensures continuous streaming even in case of server failures.
Video Streaming Optimization: CDNs are optimized for delivering streaming media content like videos. They often have features like adaptive bitrate streaming (ABR) support, which dynamically adjusts video quality based on the user’s network conditions, further enhancing the streaming experience.
Why other options are incorrect:
Azure App Service Web Apps: While Azure App Service is excellent for hosting web applications and provides high availability and scalability, it primarily hosts the application code and not the large video files themselves in a geographically distributed manner. You could deploy Web Apps in multiple regions for redundancy, but it doesn’t inherently solve the problem of geographically close data storage for video streaming. Web Apps would likely serve the application logic that uses a CDN or storage service to deliver the video content.
Azure App Service Isolated: App Service Isolated is just a more isolated and resource-dedicated tier of App Service. It doesn’t change the fundamental purpose of App Service, which is application hosting, not geographically distributed data storage for streaming. It also wouldn’t inherently place video data closer to the user.
Azure Redis Cache: Azure Redis Cache is an in-memory data store used for caching frequently accessed data to improve application performance. It’s not designed for storing and streaming large video files. While Redis can be geo-replicated, it’s primarily for caching smaller, frequently accessed pieces of data (like session data, frequently accessed database queries), not for serving large video streams. Redis Cache could be used to cache metadata or streaming session information, but not the video content itself for geographical proximity.
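As a sketch, assuming the azure-mgmt-cdn track-2 SDK and placeholder resource names, provisioning a CDN endpoint in front of a storage origin could look like the following; this is illustrative, not a definitive deployment recipe.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cdn import CdnManagementClient
from azure.mgmt.cdn.models import DeepCreatedOrigin, Endpoint, Profile, Sku

client = CdnManagementClient(DefaultAzureCredential(), "<subscription-id>")

# CDN profiles are global resources; the SKU choice affects POP coverage.
client.profiles.begin_create(
    "rg-video", "video-cdn",
    Profile(location="global", sku=Sku(name="Standard_Microsoft")),
).result()

# The endpoint caches video content at edge POPs close to each user.
client.endpoints.begin_create(
    "rg-video", "video-cdn", "video-endpoint",
    Endpoint(
        location="global",
        origins=[DeepCreatedOrigin(
            name="video-origin",
            host_name="videostorage.blob.core.windows.net")],  # placeholder origin
        is_http_allowed=False,  # HTTPS only
    ),
).result()
```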
DRAG DROP
A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that uses the Basic license. You plan to deploy two applications to Azure.
The applications have the requirements shown in the following table.
| Application name | Requirement |
|---|---|
| Customer | Users must authenticate by using a personal Microsoft account and multi-factor authentication. |
| Reporting | Users must authenticate by using either Contoso credentials or a personal Microsoft account. You must be able to manage the accounts from Azure AD. |
Which authentication strategy should you recommend for each application? To answer, drag the appropriate authentication strategies to the correct applications. Each authentication strategy may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Authentication Strategies
An Azure AD B2C tenant
An Azure AD v1.0 endpoint
An Azure AD v2.0 endpoint
Answer Area
Customer: Authentication strategy
Reporting: Authentication strategy
Understanding the Requirements
Contoso, Ltd. Azure AD Tenant: Using the Basic license.
Two Applications:
Customer: External users authenticate with a personal Microsoft account and require MFA.
Reporting: Internal and external users can use Contoso credentials or a personal Microsoft account, which must be managed from Azure AD.
Analyzing the Authentication Strategies
An Azure AD B2C tenant:
Pros:
Designed for customer-facing applications.
Supports social identities (like Microsoft accounts).
Supports MFA for all authentication types.
Offers customization of the login experience.
Allows management of external identities and authentication policies.
Cons:
Requires an additional Azure AD tenant.
Use Case: Best suited for customer-facing applications that need to support different kinds of identity providers, such as personal Microsoft Accounts.
An Azure AD v1.0 endpoint:
Pros:
Supports Azure AD accounts.
Supports multi factor authentication.
Basic authentication framework
Cons:
Does not support personal Microsoft accounts.
Has a more limited set of features than v2.0.
Not designed for external customer authentication.
Use Case: Good for authenticating internal users, but not the best solution for external users.
An Azure AD v2.0 endpoint:
Pros:
Supports Azure AD accounts.
Supports personal Microsoft accounts.
Supports MFA for all authentication types.
Supports modern application development.
Cons:
Does not provide full B2C customization.
Does not manage external accounts within Azure AD.
Use Case: Ideal for authenticating internal (Azure AD) users and external personal accounts, however it does not offer the same level of configuration as B2C.
Matching Authentication Strategies to Applications
Here’s the correct mapping:
Customer:
An Azure AD B2C tenant is the best fit. It is specifically designed for customer-facing applications, supports personal Microsoft accounts and MFA, and offers good customization options.
Reporting:
An Azure AD v2.0 endpoint is the most suitable. It can authenticate both internal Azure AD users and external personal Microsoft account users, which matches the requirement. Because the application does not need the level of customization that B2C offers, this is the best option.
Answer Area
Application Authentication Strategy
Customer An Azure AD B2C tenant
Reporting An Azure AD v2.0 endpoint
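To make the v2.0 endpoint choice concrete, here is a minimal MSAL Python sketch. The client ID is a placeholder, and the "common" authority is what allows both Contoso (Azure AD) accounts and personal Microsoft accounts to sign in, assuming the app registration is configured to accept both account types.

```python
import msal

# Placeholder app registration; must be configured as a multi-tenant app
# that also allows personal Microsoft accounts.
app = msal.PublicClientApplication(
    "<reporting-app-client-id>",
    authority="https://login.microsoftonline.com/common",
)

# Opens a browser prompt; either a Contoso account or a personal
# Microsoft account can complete the sign-in.
result = app.acquire_token_interactive(scopes=["User.Read"])
if "access_token" in result:
    claims = result.get("id_token_claims", {})
    print("Signed in as:", claims.get("preferred_username"))
```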
Important Notes for the AZ-304 Exam
Azure AD B2C: Understand its purpose and use for customer-facing applications.
Azure AD v1.0 vs. v2.0: Know the differences between the v1 and v2 endpoints and how they impact authentication.
Microsoft Accounts: Understand that Azure AD v1.0 does not support personal Microsoft accounts, and therefore you would need to use v2.0, or B2C.
MFA: Know how to enforce MFA for different authentication types.
Authentication Strategies: Understand which strategy is best for different types of applications (e.g., internal vs. customer-facing).
Azure AD Licenses: Know that Azure AD B2C requires separate licensing from Azure AD Basic.
Service Selection: Be able to select the correct Azure service that fits your requirements.
You deploy Azure App Service Web Apps that connect to on-premises Microsoft SQL Server instances by using Azure ExpressRoute. You plan to migrate the SQL Server instances to Azure.
Migration of the SQL Server instances to Azure must:
- Support automatic patching and version updates to SQL Server.
- Provide automatic backup services.
- Allow for high availability of the instances.
- Provide a native VNET with private IP addressing.
- Encrypt all data in transit.
- Be in a single-tenant environment with dedicated underlying infrastructure (compute, storage).
You need to migrate the SQL Server instances to Azure.
Which Azure service should you use?
SQL Server Infrastructure-as-a-Service (IaaS) virtual machine (VM)
Azure SQL Database with elastic pools
SQL Server in a Docker container running on Azure Container Instances (ACI)
Azure SQL Database Managed Instance
SQL Server in Docker containers running on Azure Kubernetes Service (AKS)
The correct answer is Azure SQL Database Managed Instance.
Here’s why:
Automatic patching and version updates to SQL Server: Azure SQL Database Managed Instance handles these tasks automatically, as it’s a Platform-as-a-Service (PaaS) offering.
Provide automatic backup services: Managed Instance includes automatic backups that you can configure for retention and frequency.
Allow for high-availability of the instances: Managed Instance provides built-in high availability.
Provide a native VNET with private IP addressing: Managed Instances are deployed directly into your Azure Virtual Network and have private IP addresses.
Encrypt all data in transit: Encryption in transit is enabled by default for connections to Managed Instances.
Be in a single-tenant environment with dedicated underlying infrastructure (compute, storage): This is a key characteristic of Managed Instance. While it’s a PaaS offering, it provides a more isolated environment compared to Azure SQL Database with elastic pools, which is multi-tenant.
Let’s look at why the other options are not the best fit:
SQL Server Infrastructure-as-a-Service (IaaS) virtual machine (VM): While you have full control, you are responsible for patching, backups, and setting up high availability yourself. This doesn’t meet the automation requirements.
Azure SQL Database with elastic pools: This is a multi-tenant service in which resources are shared among multiple customers, so it does not provide dedicated underlying infrastructure. While it offers automatic patching, backups, and high availability, it fails the single-tenant requirement. It also lacks native VNet placement; VNet service endpoints are an option, but that is not the same as deployment into your VNet with a private IP address.
SQL Server in a Docker container running on Azure Container Instances (ACI): You would be responsible for managing the SQL Server instance within the container, including patching and backups. High availability would also require manual configuration. While it can be in a VNET, it doesn’t inherently provide the managed services needed.
SQL Server in Docker containers running on Azure Kubernetes Service (AKS): Similar to ACI, you’d manage the SQL Server instance within the containers. While AKS offers robust orchestration for HA, it doesn’t provide the automatic patching and backup services at the SQL Server level that Managed Instance does. You’d need to implement those yourself.
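To illustrate the encryption-in-transit point, a hedged pyodbc sketch. The host name follows the Managed Instance pattern <instance>.<dns-zone>.database.windows.net, and all server, database, and credential values below are placeholders.

```python
import pyodbc

# Encrypt=yes enforces TLS for data in transit; ODBC Driver 18 defaults to
# encrypted connections as well. Server and credentials are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=mi-contoso.abc123de45.database.windows.net;"
    "DATABASE=AppDb;UID=sqladmin;PWD=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;"
)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```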
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Storage v2 account named Storage1.
You plan to archive data to Storage1.
You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.
Solution: You create a file share, and you configure an access policy.
Does this meet the goal?
Yes
No
Understanding the Requirements
Azure Storage v2 Account: Storage1
Archival: Data will be archived in the storage account.
Retention Policy: Archived data must be protected from deletion for five years.
Administrator Protection: This protection must prevent even administrators from deleting the data.
Analyzing the Proposed Solution: Access Policy on a File Share
File Share Access Policy: Access policies on Azure file shares primarily control who can access the share, and what actions they can perform on the share, such as read, write, or delete.
Let’s evaluate if a file share access policy meets the stated needs:
Prevent Data Deletion for Five Years (including administrators):
Analysis: File share access policies can prevent certain users or groups from deleting files on a file share, but they cannot enforce a retention period such as five years.
Access policies can be overridden by users with sufficient rights (such as the storage account administrator).
Access policies do not apply a time-based restriction to deletion.
Verdict: Does NOT meet the requirement to prevent deletion for five years or to block administrative users.
Conclusion
The proposed solution does not meet the goal because an access policy will not prevent all users, including administrators, from deleting data, and it does not impose a time-based restriction on deletion. Therefore, the answer is No.
Correct Answer
No
Explanation
File share access policies are about authorization to perform specific actions; they do not implement immutability or retention. To enforce time-based retention, you would apply an immutability policy to a blob container. Immutability policies provide a time-based retention mechanism and protect data from deletion, even by administrators.
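A minimal sketch, assuming the azure-mgmt-storage track-2 SDK and placeholder resource names, of how a five-year (1825-day) time-based retention policy could be applied and then locked on a blob container:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import ImmutabilityPolicy

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Roughly five years of time-based retention on the archive container.
policy = client.blob_containers.create_or_update_immutability_policy(
    "rg-archive", "storage1", "archive",
    parameters=ImmutabilityPolicy(immutability_period_since_creation_in_days=1825),
)

# Locking makes the policy irreversible; even administrators cannot delete
# blobs until the retention interval elapses.
client.blob_containers.lock_immutability_policy(
    "rg-archive", "storage1", "archive", if_match=policy.etag,
)
```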
Important Notes for the AZ-304 Exam
Azure Storage Access Policies: Understand their purpose and limitations in controlling access to data, and that they do not implement a time-based retention policy.
Azure Storage Immutability Policies: Understand that they provide a way to protect data from modification and deletion, and how you can set these policies.
Data Archival: You need to understand the ways that data can be archived, and how retention can be applied.
Admin Roles: Remember that administrators can override many security configurations and policies unless specifically protected by a service such as an immutability policy.
Security Best Practices: Be aware that security should be a consideration in every component of Azure.
Service Selection: Be able to select the correct Azure service that fits your requirements.
HOTSPOT
You need to recommend a solution for configuring the Azure Multi-Factor Authentication (MFA) settings.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Azure AD license:
Free
Basic
Premium P1
Premium P2
Access control for the sign-in risk policy:
Allow access and require multi-factor authentication
Block access and require multi-factor authentication
Allow access and require Azure MFA registration
Block access
Access control for the multi-factor
authentication registration policy:
Allow access and require multi-factor authentication
Block access and require multi-factor authentication
Allow access and require Azure MFA registration
Block access
Understanding the Requirements
Azure MFA: The goal is to recommend a solution for configuring MFA.
Components to Configure:
Azure AD license
Access control for the sign-in risk policy
Access control for the multi-factor authentication registration policy
Analyzing the Options
Azure AD license:
Free: Basic MFA is available to all users with the free Azure AD license; however, it does not support Conditional Access or risk-based policies.
Basic: This license is very similar to the free tier and also lacks Conditional Access and Identity Protection.
Premium P1: Includes Conditional Access and advanced reporting, but not the risk-based policies of Azure AD Identity Protection.
Premium P2: Includes Azure AD Identity Protection, which provides the sign-in risk policy and the MFA registration policy, in addition to all P1 features.
Access control for the sign-in risk policy:
Allow access and require multi-factor authentication: Allows access, but requires MFA, which is suitable to mitigate the risk.
Block access and require multi-factor authentication: This does not make sense, as the user would not be able to log in.
Allow access and require Azure MFA registration: Allows access, and requires the user to register for MFA.
Block access: Blocks all access.
Access control for the multi-factor authentication registration policy:
Allow access and require multi-factor authentication: The user must already have MFA registered to log in.
Block access and require multi-factor authentication: This would lock users out, if they have not registered for MFA.
Allow access and require Azure MFA registration: This allows the user access, but requires them to register for MFA.
Block access: Blocks all access.
Recommendations
Here is the correct combination for each requirement:
Azure AD license: Premium P2
Reason: The sign-in risk policy and the MFA registration policy are Azure AD Identity Protection features, which require an Azure AD Premium P2 license. Premium P1 adds Conditional Access, but the risk-based policies configured here need P2; Free and Basic support neither.
Access control for the sign-in risk policy: Allow access and require multi-factor authentication
Reason: We are not blocking sign-in. When the policy detects sign-in risk, the user is required to authenticate with MFA before access is allowed.
Access control for the multi-factor authentication registration policy: Allow access and require Azure MFA registration
Reason: To ensure that users have MFA configured for the account, we should force them to register for MFA before they are able to proceed. This will ensure that all users are set up correctly.
Answer Area
Requirement Recommended Option
Azure AD license: Premium P2
Access control for the sign-in risk policy: Allow access and require multi-factor authentication
Access control for the multi-factor authentication registration policy: Allow access and require Azure MFA registration
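For reference, the sign-in risk condition can also be expressed in the Graph Conditional Access schema used earlier in this document. A hedged fragment (risk-based conditions require Premium P2); POST it to the same Graph endpoint shown in the location-based example:

```python
# Conditional Access policy body keyed on sign-in risk; all scoping values
# here are illustrative defaults, not the only valid configuration.
risk_policy = {
    "displayName": "Require MFA on medium or high sign-in risk",
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```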
Important Notes for the AZ-304 Exam
Azure AD Licensing: Understand the licensing options and which features are included in each.
Azure MFA: Know how to configure MFA, including registration policies and sign-in risk based policies.
Conditional Access: Understand the purpose of conditional access, and its requirements.
MFA Registration Policies: Know that these are important for ensuring that all users are set up correctly, before allowing them access to resources.
Risk Based Policies: Know that these are an essential component of a good security architecture.
Security Policies: Be aware of the best practices when setting up security policies.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an on-premises Hyper-V cluster that hosts 20 virtual machines. Some virtual machines run Windows Server 2016 and some run Linux.
You plan to migrate the virtual machines to an Azure subscription.
You need to recommend a solution to replicate the disks of the virtual machines to Azure. The solution must ensure that the virtual machines remain available during the migration of the disks.
Solution: You recommend creating a Recovery Services vault, and then using Azure Site Recovery.
Does this meet the goal?
Yes
No
Understanding the Requirements
On-Premises: Hyper-V cluster hosting 20 VMs (Windows Server 2016 and Linux).
Migration: Move the VMs to Azure.
Disk Replication: The disk data must be copied to Azure.
Availability: The VMs must remain available during the disk migration process.
Analyzing the Proposed Solution: Recovery Services Vault and Azure Site Recovery
Recovery Services Vault: A management container in Azure for ASR and backups.
Azure Site Recovery (ASR): A service used for replicating virtual machines for disaster recovery and migration.
Let’s assess if this solution meets the stated requirements:
Replicate Virtual Machine Disks to Azure:
Analysis: Azure Site Recovery is specifically designed for replicating virtual machine disks to Azure.
Verdict: Meets Requirement
Ensure Virtual Machine Availability During Disk Migration:
Analysis: Azure Site Recovery uses continuous asynchronous replication. This means that the VMs will continue to run in the on-premises environment while a copy of their disks is being transferred to Azure. This ensures that users will not experience any downtime during the migration process.
Verdict: Meets Requirement
Conclusion
The proposed solution meets all requirements as it facilitates the replication of VM disks using Azure Site Recovery, and it provides continuous asynchronous replication which allows VMs to remain available during the process. Therefore, the answer is Yes.
Correct Answer
Yes
Explanation
Azure Site Recovery: ASR replicates virtual machine disks from on-premises Hyper-V environments to Azure, while keeping the VMs running.
Continuous Replication: ASR uses continuous replication which allows the VMs to be running during the migration process.
Migration Support: ASR can facilitate the migration of on-prem environments to Azure.
Disaster Recovery: ASR can also be used to facilitate disaster recovery to Azure if a primary data centre fails.
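As a small sketch of the prerequisite step, creating the Recovery Services vault with the azure-mgmt-recoveryservices track-2 SDK; the resource group, vault name, and region below are placeholders, and the Hyper-V site registration and Provider installation then happen against this vault.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.recoveryservices import RecoveryServicesClient
from azure.mgmt.recoveryservices.models import Sku, Vault, VaultProperties

client = RecoveryServicesClient(DefaultAzureCredential(), "<subscription-id>")

# The vault stores replication metadata for Azure Site Recovery.
client.vaults.begin_create_or_update(
    "rg-migration", "hyperv-migration-vault",
    Vault(location="westeurope", properties=VaultProperties(), sku=Sku(name="Standard")),
).result()
```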
Important Notes for the AZ-304 Exam
Azure Site Recovery (ASR): Know the purpose and functionality of ASR, including how to set up replication.
Recovery Services Vault: Understand that ASR requires a Recovery Services vault to store the replication metadata.
Replication Options: Be aware of the different replication methods that ASR can perform, specifically that it will replicate continuously in the background.
Migration Strategies: Understand how to migrate workloads from on-prem to Azure using different services, such as ASR.
On-prem Considerations: Remember that pre-requisites such as installing the ASR agent, configuring networking, and other actions are required to facilitate the process.