test1 Flashcards
https://www.dumpsbase.com/freedumps/?s=az+304
Your network contains an on-premises Active Directory domain.
The domain contains the Hyper-V clusters shown in the following table.
| Name | Number of nodes | Number of virtual machines running on cluster |
| --- | --- | --- |
| Cluster1 | 4 | 20 |
| Cluster2 | 3 | 15 |
You plan to implement Azure Site Recovery to protect six virtual machines running on Cluster1 and three virtual machines running on Cluster2. Virtual machines are running on all Cluster1 and Cluster2 nodes.
You need to identify the minimum number of Azure Site Recovery Providers that must be installed on premises.
How many Providers should you identify?
1
7
9
16
Understanding Azure Site Recovery Providers:
The Azure Site Recovery (ASR) Provider is a software component that must be installed on each Hyper-V host that you want to protect with ASR.
The Provider communicates with the Azure Recovery Services Vault and facilitates replication and failover.
Requirements:
On-Premises Hyper-V: There are two Hyper-V clusters (Cluster1 and Cluster2).
Protection Scope: Six VMs from Cluster1 and three VMs from Cluster2 need to be protected by Azure Site Recovery.
Minimum Providers: Identify the minimum number of ASR Providers needed.
Analysis:
Cluster1: Has 4 nodes.
Cluster2: Has 3 nodes.
Provider per Host: One ASR Provider is needed on each Hyper-V host that will be replicated.
Protected VMs: Six VMs from Cluster1 and three from Cluster2 need protection.
VMs are running on all nodes: All VMs are running across all nodes, which means that we need an ASR Provider installed on all nodes.
Minimum Number of Providers:
Cluster1 requires a provider on each host: 4 providers
Cluster2 requires a provider on each host: 3 providers
Total: 4 + 3 = 7
Correct Answer:
7
Explanation:
You must install an Azure Site Recovery Provider on every Hyper-V host that contains virtual machines you want to protect by using ASR. Because you need to protect VMs on all nodes in both clusters, you must install a Provider on every Hyper-V host: 4 Providers for Cluster1 and 3 Providers for Cluster2, for a total of 7 Providers.
Why not others:
1: Not enough; there are seven Hyper-V hosts in total, and each host running protected VMs needs its own Provider.
9: Incorrect; this matches the number of protected virtual machines (6 + 3), but the Provider is installed per Hyper-V host, not per VM.
16: Incorrect; this does not correspond to the number of Hyper-V hosts in the two clusters.
Important Notes for the AZ-304 Exam:
Azure Site Recovery: Understand the architecture, requirements, and components of ASR.
ASR Provider: Know that the ASR Provider must be installed on each Hyper-V host to be protected.
Minimum Requirements: The exam often focuses on minimum requirements, not the total capacity or other metrics.
Hyper-V Integration: Understand how ASR integrates with Hyper-V for replication.
Exam Focus: Read the question carefully and identify the specific information related to required components.
You need to recommend a strategy for the web tier of WebApp1. The solution must minimize costs.
What should you recommend?
Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours.
Configure the Scale Up settings for a web app.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold.
Configure the Scale Out settings for a web app.
Requirements:
Web Tier Scaling: A strategy for scaling the web tier of WebApp1.
Minimize Cost: The solution must focus on minimizing cost.
Recommended Solution:
Configure the Scale Out settings for a web app.
Explanation:
Configure the Scale Out settings for a web app:
Why it’s the best fit:
Cost Minimization: Web apps (App Services) have a pay-as-you-go model and scale out to add more instances when demand increases and automatically scale back in when the demand decreases. This is cost-effective because you only pay for what you use.
Automatic Scaling: You can configure automatic scaling based on different performance metrics (CPU, memory, or custom metrics), ensuring that you scale out and in based on load.
Managed Service: It is a fully managed service, so it minimizes operational overhead.
Why not others:
Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours: While this can help minimize cost, this is not ideal because VMs are still running all the time. Also, it is more complex to implement and manage.
Configure the Scale Up settings for a web app: Scale Up is more costly because you increase the compute resources of the existing instances.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold: While it is possible to deploy and scale with scale sets, this is more costly since VMs are billed per hour and are more complex to manage than web apps.
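For context, the Scale Out setting is an Azure Monitor autoscale rule attached to the web app's App Service plan. The sketch below shows one way to configure it with the azure-mgmt-monitor Python SDK; the subscription, resource group, plan name, thresholds, and the plain-dict payload shape are assumptions for illustration, not values from the question.

```python
# Minimal sketch: add a scale-out and a scale-in rule to the App Service plan
# that hosts WebApp1. Resource names and thresholds are illustrative only;
# requires the azure-identity and azure-mgmt-monitor packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUB = "<subscription-id>"
RG = "rg-webapp1"  # hypothetical resource group
PLAN = f"/subscriptions/{SUB}/resourceGroups/{RG}/providers/Microsoft.Web/serverfarms/plan-webapp1"

def cpu_rule(operator, threshold, direction):
    """Build one autoscale rule keyed on the plan's CpuPercentage metric."""
    return {
        "metric_trigger": {
            "metric_name": "CpuPercentage",
            "metric_resource_uri": PLAN,
            "time_grain": "PT1M",
            "statistic": "Average",
            "time_window": "PT10M",
            "time_aggregation": "Average",
            "operator": operator,
            "threshold": threshold,
        },
        "scale_action": {"direction": direction, "type": "ChangeCount", "value": "1", "cooldown": "PT5M"},
    }

client = MonitorManagementClient(DefaultAzureCredential(), SUB)
client.autoscale_settings.create_or_update(
    RG,
    "webapp1-autoscale",
    {
        "location": "eastus",
        "target_resource_uri": PLAN,
        "enabled": True,
        "profiles": [{
            "name": "default",
            "capacity": {"minimum": "1", "maximum": "5", "default": "1"},
            "rules": [
                cpu_rule("GreaterThan", 70, "Increase"),  # scale out under load
                cpu_rule("LessThan", 30, "Decrease"),     # scale back in to save cost
            ],
        }],
    },
)
```

The scale-in rule is what keeps the bill low outside business hours, which is the cost argument behind this answer.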
Important Notes for the AZ-304 Exam:
Azure App Service: Be very familiar with Azure App Service and its scaling capabilities.
Web App Scale Out: Know the different scaling options for web apps, and when to scale out versus scale up.
Automatic Scaling: Understand how to configure automatic scaling based on performance metrics.
Cost Optimization: The exam often emphasizes cost-effective solutions. Be aware of the pricing models for different Azure services.
PaaS vs. IaaS: Understand the benefits of using PaaS services over IaaS for cost optimization.
Exam Focus: Be sure to select the best service that meets the requirements and provides the most cost effective solution.
You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1.
The solution must meet the following requirements:
- To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
- If the manager does not verify access permission, automatically revoke that permission.
- Minimize development effort.
What should you recommend?
In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
In Azure Active Directory (Azure AD), create an access review of Application1
Requirements:
External Developer Access: Fabrikam developers have RBAC permissions to an Azure application.
Access Verification: Need to verify if the Fabrikam developers still need access.
Monthly Email to Manager: Send a monthly email to the manager with access information.
Automatic Revocation: Revoke permissions if the manager does not approve.
Minimize Development: Minimize custom code development and use available services.
Recommended Solution:
In Azure Active Directory (Azure AD), create an access review of Application1
Explanation:
Azure AD Access Reviews:
Why it’s the best fit:
Automated Review: Azure AD Access Reviews provides a way to schedule recurring access reviews for groups, applications, or roles. It will automatically send notifications to the assigned reviewers (in this case, the manager).
Manager Review: You can configure the access review to have the manager review and approve or deny access for their developers.
Automatic Revocation: You can configure the access review to automatically remove access for users when they are not approved.
Minimal Development: Access reviews are a built-in feature of Azure AD that requires minimal configuration and no custom coding.
Why not others:
In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: While PIM is well suited to managing and governing privileged role assignments, it is not designed for recurring, manager-driven reviews of application access, and it does not provide the revoke-on-no-response behavior required here.
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: While possible, this requires custom development and management. Azure Access Reviews provides the functionality natively, therefore this is not the optimal solution for the requirements.
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet: Similar to the previous option, this is not the ideal solution since access reviews provides all of this functionality natively.
Important Notes for the AZ-304 Exam:
Azure AD Access Reviews: Be very familiar with Azure AD Access Reviews, and how they can be used to manage user access, and know the methods that you can use to perform them (for example, by a manager or by self review).
Access Management: Understand the importance of access reviews as part of an overall security strategy.
Access Reviews vs. PIM: Understand when to use PIM, and when to use Access Reviews.
Minimize Development: The exam often emphasizes solutions that minimize development effort.
Exam Focus: Select the simplest and most direct method to achieve the desired outcome.
HOTSPOT -
You have an Azure SQL database named DB1.
You need to recommend a data security solution for DB1. The solution must meet the following requirements:
✑ When helpdesk supervisors query DB1, they must see the full number of each credit card.
✑ When helpdesk operators query DB1, they must see only the last four digits of each credit card number.
✑ A column named Credit Rating must never appear in plain text within the database system, and only client applications must be able to decrypt the Credit Rating column.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Helpdesk requirements:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)
Credit Rating requirement:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)
Requirements:
Helpdesk Supervisors: Must see full credit card numbers.
Helpdesk Operators: Must see only the last four digits of credit card numbers.
Credit Rating Column: The Credit Rating column must never appear in plain text within the database system and must be decrypted by the client applications.
Answer Area:
Helpdesk requirements:
Dynamic data masking
Credit Rating requirement:
Always Encrypted
Explanation:
Helpdesk requirements:
Dynamic data masking:
Why it’s correct: Dynamic data masking allows you to obfuscate sensitive data based on the user’s role. You can configure masking rules to show the full credit card numbers to supervisors and only the last four digits to the operators. The underlying data is not modified, and the masking is applied at the query output level.
Why not others:
Always Encrypted: This encrypts the data, but doesn’t allow for different visibility of the data based on user roles.
Azure Advanced Threat Protection (ATP): This is for detecting malicious behavior, not for data masking.
Transparent Data Encryption (TDE): This encrypts data at rest, but does not apply specific policies based on user access or perform masking.
Credit Rating requirement:
Always Encrypted:
Why it’s correct: Always Encrypted ensures that sensitive data is always encrypted, both at rest and in transit. The encryption keys are stored and managed in the client application and are not accessible to database administrators. This satisfies the requirement that the column must never appear in plain text in the database system, and it is only decrypted in the client application.
Why not others:
Azure Advanced Threat Protection (ATP): It doesn’t encrypt or mask the data. It is meant for threat detection.
Dynamic data masking: Dynamic data masking only masks the data for specific users, but it does not encrypt the data.
Transparent Data Encryption (TDE): TDE encrypts data at rest, but it does not encrypt data in transit or protect against database administrators viewing the unencrypted data.
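To make the helpdesk half concrete: dynamic data masking is defined per column with T-SQL, and the UNMASK permission controls who sees the real values. The sketch below runs that T-SQL from Python with pyodbc; the server, table, column, and role names are hypothetical. (The Always Encrypted setup for the Credit Rating column is driven from the client side with a column master key and column encryption key and is not shown here.)

```python
# Minimal sketch: apply a dynamic data masking rule and grant UNMASK.
# Server, table, column, and role names are hypothetical; requires pyodbc
# and an ODBC driver for SQL Server.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:db1server.database.windows.net,1433;Database=DB1;"
    "Uid=admin_user;Pwd=<password>;Encrypt=yes;"
)
cursor = conn.cursor()

# Mask the credit card column: by default, users see only the last four digits.
cursor.execute(
    "ALTER TABLE dbo.Payments ALTER COLUMN CreditCardNumber "
    "ADD MASKED WITH (FUNCTION = 'partial(0,\"XXXX-XXXX-XXXX-\",4)');"
)

# Supervisors get UNMASK, so their queries return the full number.
cursor.execute("GRANT UNMASK TO HelpdeskSupervisors;")

conn.commit()
conn.close()
```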
Important Notes for the AZ-304 Exam:
Always Encrypted: Understand what it does, how it encrypts data, where the encryption keys are managed, and the purpose of this approach for security.
Dynamic Data Masking: Know the purpose and configuration of dynamic data masking and how it helps control the data that users can see.
Transparent Data Encryption (TDE): Understand that TDE is used for encrypting data at rest, but it doesn’t protect data in transit, and does not provide different views of data.
Azure Advanced Threat Protection (ATP): Know that it is used for threat detection, not for masking or encrypting data.
Data Security: Be familiar with the different data security features in Azure SQL Database.
Exam Focus: You must be able to understand a complex scenario, and pick the different Azure components that meet each requirement.
You have an Azure subscription.
Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely.
You plan to copy the files to Azure Storage.
You need to implement a storage solution for the files that meets the following requirements:
✑ The files must be available within 24 hours of being requested.
✑ Storage costs must be minimized.
Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
B. Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
C. Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
D. Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
E. Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
The correct answers are B and E.
Here’s why:
Understanding the Requirements:
Availability within 24 hours: This requirement points to the Archive access tier in Azure Blob Storage. The Archive tier has the lowest storage cost but requires rehydration before the data can be read; standard-priority rehydration can take up to about 15 hours, which is within the 24-hour availability requirement.
Minimize storage costs: The Archive access tier is the most cost-effective storage tier in Azure Blob Storage for data that is rarely accessed.
Analyzing each option:
A. Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
Incorrect. General-purpose v1 accounts are older and less cost-optimized than v2 or Blob storage accounts, and they do not support blob access tiers, so the files cannot be placed in the Archive tier. That makes this option unsuitable for rarely accessed data when cost minimization is a key requirement.
B. Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Correct. General-purpose v2 accounts are recommended and more cost-effective than v1. By setting the default tier to Hot (initially - though this default doesn’t really matter as we are overriding per blob) and then explicitly setting each file to the Archive access tier, we achieve the lowest storage cost and meet the 24-hour availability requirement. Setting individual blobs to Archive overrides the default account tier for those specific blobs.
C. Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
Incorrect. Azure File Shares are designed for file system access (SMB, NFS) and are generally more expensive than Blob Storage for large amounts of data, especially for archive scenarios. File shares do not have access tiers like Archive. This option does not minimize cost and is not designed for rarely accessed, large datasets like this.
D. Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
Incorrect. Similar to option C, using Azure File Shares is not cost-effective for this scenario. While Cool tier is cheaper than Hot, it’s still more expensive than Archive, and File Shares themselves are generally pricier than Blob Storage. File Shares also don’t offer the Archive tier.
E. Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Correct. Creating an Azure Blob storage account is specifically designed for blob data and can be more cost-optimized for blob storage compared to general-purpose accounts in some scenarios. Like option B, setting the default tier to Cool (or even Hot) is less important as long as we explicitly set each file to the Archive access tier. This option also effectively uses Archive tier for cost minimization and meets the 24-hour availability requirement. Azure Blob Storage accounts are designed to be cost-effective for blob data.
Why B and E are the best solutions:
Both options B and E leverage the Archive access tier of Azure Blob Storage, which is crucial for meeting both the cost minimization and 24-hour availability requirements. They use Blob containers which are the appropriate storage for files in this scenario. While they differ slightly in the type of storage account (general-purpose v2 vs. Azure Blob storage account), both are valid and effective solutions for storing rarely accessed files at the lowest cost with 24-hour retrieval.
Final Answer: B and E
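In both correct options the decisive step is putting each blob into the Archive tier, either at upload time or afterwards. A minimal sketch with the azure-storage-blob Python SDK; the connection string, container, and file names are placeholders.

```python
# Minimal sketch: upload files and move them to the Archive tier.
# Account, container, and path names are hypothetical; requires the
# azure-storage-blob package.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("companyfiles")

# Upload directly into the Archive tier (overrides the account default tier).
with open(r"\\Server1\files\report.docx", "rb") as data:
    container.upload_blob(
        name="report.docx",
        data=data,
        standard_blob_tier=StandardBlobTier.ARCHIVE,
    )

# Or demote an existing blob that was uploaded to the Hot or Cool tier.
blob = container.get_blob_client("report.docx")
blob.set_standard_blob_tier(StandardBlobTier.ARCHIVE)

# When a file is requested, rehydrate it; standard priority can take hours,
# which still meets the 24-hour availability requirement.
blob.set_standard_blob_tier(StandardBlobTier.HOT, rehydrate_priority="Standard")
```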
HOTSPOT
You have an existing implementation of Microsoft SQL Server Integration Services (SSIS) packages stored in an SSISDB catalog on your on-premises network. The on-premises network does not have hybrid connectivity to Azure by using Site-to-Site VPN or ExpressRoute.
You want to migrate the packages to Azure Data Factory.
You need to recommend a solution that facilitates the migration while minimizing changes to the existing packages. The solution must minimize costs.
What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Store the SSISDB catalog by using:
Azure SQL Database
Azure Synapse Analytics
SQL Server on an Azure virtual machine
SQL Server on an on-premises computer
Implement a runtime engine for
package execution by using:
Self-hosted integration runtime only
Azure-SQL Server Integration Services Integration Runtime (IR) only
Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime
Requirements:
Existing SSIS Packages: The packages are stored in an SSISDB catalog on-premises.
Migrate to ADF: The migration target is Azure Data Factory.
Minimize Changes: The solution should minimize changes to the existing SSIS packages.
Minimize Costs: The solution should be cost-effective.
No connectivity: There is no hybrid connectivity from the on-premises environment to Azure.
Answer Area:
Store the SSISDB catalog by using:
Azure SQL Database
Implement a runtime engine for package execution by using:
Azure-SQL Server Integration Services Integration Runtime (IR) only
Explanation:
Store the SSISDB catalog by using:
Azure SQL Database:
Why it’s correct: To migrate SSIS packages to Azure Data Factory, the SSISDB catalog needs to be stored in Azure. Azure SQL Database is the recommended and supported method of storing the SSISDB catalog when you are using the Azure SSIS Integration Runtime in ADF.
Why not others:
Azure Synapse Analytics: While Synapse Analytics also supports SQL functionality, it is not the recommended platform to host the SSISDB.
SQL Server on an Azure virtual machine: While SQL Server on a VM would work, it is an IaaS solution, which requires additional management overhead and is not as cost-effective as the PaaS Azure SQL Database.
SQL Server on an on-premises computer: The SSISDB must be hosted in Azure to be used by the Azure-SSIS Integration Runtime, and there is no hybrid connectivity to reach an on-premises server anyway.
Implement a runtime engine for package execution by using:
Azure-SQL Server Integration Services Integration Runtime (IR) only:
Why it’s correct: An Azure SSIS Integration Runtime is a fully managed service for executing SSIS packages in Azure. Because there is no hybrid network connectivity, you must use the Azure version, instead of a self-hosted IR. The Azure SSIS IR is the only way to run the SSIS packages that were migrated in Azure.
Why not others:
Self-hosted integration runtime only: A self-hosted integration runtime exists to reach data on a private or on-premises network. With no VPN or ExpressRoute and no need to access on-premises data after migration, it is not the right choice, and it cannot run SSIS packages by itself.
Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime: The self-hosted integration runtime is unnecessary in this scenario because nothing needs to connect back to an on-premises resource.
Important Notes for the AZ-304 Exam:
Azure Data Factory: Be very familiar with ADF, its core concepts, and how to execute SSIS packages.
Azure SSIS IR: Know the purpose of an Azure SSIS Integration Runtime and how to set it up. Understand that it is used when running SSIS packages in Azure.
SSISDB in Azure: Understand how the SSISDB catalog is managed and stored in Azure when migrating from an on-prem environment.
Self-Hosted IR: Understand when the self-hosted IR is required and why it is not the appropriate answer for this specific scenario.
Hybrid Connectivity: Understand how hybrid connectivity affects the choice of integration runtime.
Cost Minimization: Know how to minimize costs by choosing the appropriate services (PaaS over IaaS).
Exam Focus: The exam emphasizes choosing the most appropriate solution while minimizing effort and cost.
You use Azure virtual machines to run a custom application that uses an Azure SQL database on the back end.
The IT department at your company recently enabled forced tunneling. Since the configuration change, developers have noticed degraded performance when they access the database.
You need to recommend a solution to minimize latency when accessing the database. The solution must minimize costs.
What should you include in the recommendation?
Azure SQL Database Managed instance
Azure virtual machines that run Microsoft SQL Server servers
Always On availability groups
virtual network (VNET) service endpoint
Understanding Forced Tunneling:
Forced tunneling in Azure directs all Internet-bound traffic from a subnet back through the on-premises network (or a network virtual appliance) instead of sending it directly to the Internet, typically via a 0.0.0.0/0 route advertised over VPN or ExpressRoute. This increases latency for Azure services such as Azure SQL Database, because that traffic is hairpinned through the tunnel instead of going directly over the Azure backbone.
Requirements:
Azure SQL Database: Custom app on Azure VMs uses an Azure SQL database.
Forced Tunneling: Forced tunneling is enabled, causing performance degradation.
Minimize Latency: Minimize the latency when accessing the database.
Minimize Costs: The solution should be cost-effective.
Recommended Solution:
virtual network (VNET) service endpoint
Explanation:
Virtual Network Service Endpoints:
Why it’s the best fit: VNet service endpoints extend your subnet's identity to selected Azure services and provide a direct, optimized route to those services over the Azure backbone. By enabling the Microsoft.Sql service endpoint on the subnet, traffic from the Azure VMs to Azure SQL Database bypasses the forced tunnel, which significantly reduces latency. Service endpoints have no additional charge, so the solution is also cost-effective.
Why not others:
Azure SQL Database Managed Instance: While Managed Instance is a good choice for many SQL scenarios, it is not the ideal solution for this problem. It does not help with the forced tunneling, and it also does not minimize cost since it is a more expensive offering.
Azure virtual machines that run Microsoft SQL Server servers: Moving the database to a VM in IaaS will not fix the problem. It will not address the latency issues created by the forced tunneling.
Always On availability groups: This helps with HA and DR, but it does not help with the latency issues caused by the forced tunneling. Also, it would add significant costs to the deployment.
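Enabling the endpoint is a subnet-level change. A minimal sketch with the azure-mgmt-network Python SDK; the resource names and address prefix are hypothetical, and the prefix must match the existing subnet.

```python
# Minimal sketch: enable the Microsoft.Sql service endpoint on the subnet that
# hosts the application VMs, so SQL traffic bypasses the forced tunnel.
# Names and the address prefix are hypothetical; requires azure-identity and
# azure-mgmt-network.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
network = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = network.subnets.begin_create_or_update(
    "rg-app",          # hypothetical resource group
    "vnet-app",        # hypothetical virtual network
    "subnet-web",      # subnet that hosts the application VMs
    {
        "address_prefix": "10.0.1.0/24",                      # keep the existing prefix
        "service_endpoints": [{"service": "Microsoft.Sql"}],  # direct path to Azure SQL
    },
)
poller.result()
# A matching virtual network rule is then added on the Azure SQL server so it
# accepts traffic from this subnet.
```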
Important Notes for the AZ-304 Exam:
Virtual Network Service Endpoints: Understand the benefits of using service endpoints.
Forced Tunneling: Know what forced tunneling is and how it can impact traffic flow.
Cost Minimization: Know the different ways to minimize costs when architecting a solution.
Network Performance: Understand the different ways to diagnose and improve performance when dealing with Azure network configurations.
Azure SQL: Know the different deployment options for Azure SQL.
Exam Focus: The exam will often require you to select the most appropriate solution that meets all of the requirements.
You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant. The subscription contains 10 resource groups, one for each department at your company.
Each department has a specific spending limit for its Azure resources.
You need to ensure that when a department reaches its spending limit, the compute resources of the department shut down automatically.
Which two features should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Azure Logic Apps
Azure Monitor alerts
the spending limit of an Azure account
Cost Management budgets
Azure Log Analytics alerts
Requirements:
Departmental Limits: Each department has a specific spending limit for its Azure resources.
Resource Shutdown: Compute resources must shut down automatically when the spending limit is reached.
Correct Features:
Cost Management budgets
Azure Logic Apps
Explanation:
Cost Management budgets:
Why it’s correct: Cost Management budgets allow you to define a spending limit for a specific scope (resource group, subscription, management group). When the actual spend reaches the budget threshold, you can trigger alerts and take actions. Budgets is the way to monitor and alert based on the cost.
Why not others (by itself): Cost management budgets cannot automatically stop resources, it is a monitoring and alert mechanism, and needs other services in order to take action.
Azure Logic Apps:
Why it’s correct: Azure Logic Apps can be triggered by a budget alert. In the logic app, you can add actions that automatically shut down the compute resources. For example, you can use the Azure Resource Management connector to stop virtual machines.
Why not others (by itself): Logic apps require a trigger to start. Therefore, a budget alert must be configured.
Why not others:
Azure Monitor alerts: These fire on metric and log signals (for example, CPU or memory). Spending thresholds are tracked by Cost Management budgets, so a Monitor alert alone cannot detect when a department reaches its spending limit.
the spending limit of an Azure account: An account-level spending limit is available only on certain subscription offers and applies to the whole subscription; it cannot enforce per-department (per resource group) limits or automatically shut down compute resources.
Azure Log Analytics alerts: Log Analytics is a great way to analyze logs, but does not work with cost alerts.
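In practice the budget's action group calls a Logic App (or other automation), and that workflow performs the shutdown. The sketch below shows only the shutdown step, using the azure-mgmt-compute Python SDK; the subscription ID and resource group name are hypothetical.

```python
# Minimal sketch of the shutdown action a Logic App (or similar automation)
# could run when a Cost Management budget alert fires. Names are hypothetical;
# requires azure-identity and azure-mgmt-compute.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
DEPARTMENT_RG = "rg-finance"  # the department's resource group

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deallocate (not just power off) every VM in the department's resource group
# so compute charges stop accruing once the budget is exhausted.
for vm in compute.virtual_machines.list(DEPARTMENT_RG):
    poller = compute.virtual_machines.begin_deallocate(DEPARTMENT_RG, vm.name)
    poller.wait()
    print(f"Deallocated {vm.name}")
```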
Important Notes for the AZ-304 Exam:
Cost Management Budgets: Be very familiar with Cost Management budgets and how they can be used to control spending, and know that they are the mechanism that you should use for cost alerts.
Azure Logic Apps: Know how to use Logic Apps to automate actions based on triggers, and how they integrate with Azure Management connectors.
Automated Actions: Understand that Logic Apps can be triggered by alerts and can be used to perform actions, such as shutting down resources.
Cost Control: Be familiar with the best practices for cost control and optimization in Azure.
Alerts: Know the difference between cost alerts and metrics alerts.
Exam Focus: Carefully read the requirement and know which service performs which function: a Cost Management budget raises the alert when the departmental spend is reached, and a Logic App automates the action (shutting down the compute resources) when that alert is triggered.
HOTSPOT
You configure OAuth2 authorization in API Management as shown in the exhibit.
Add OAuth2 service
Display name: (Empty field)
Id: (Empty field)
Description: (Empty field)
Client registration page URL: https://contoso.com/register
Authorization grant types:
Authorization code: Enabled
Implicit: Disabled
Resource owner password: Disabled
Client credentials: Disabled
Authorization endpoint URL: https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize
Support state parameter: Disabled
Authorization Request method
GET: Enabled
POST: Disabled
Token endpoint URL: (Empty field)
Additional body parameters: (Empty field)
Button: Create
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
The selected authorization grant type is for
Background services
Headless device authentication
Single page applications
Web applications
To enable custom data in the grant flow, select
Client credentials
Implicit
Resource owner password
Support state parameter
OAuth2 Configuration Summary:
Authorization Grant Types: The configuration shows the “Authorization code” grant type as the only one enabled.
Authorization Endpoint URL: This is set to Microsoft’s OAuth2 authorization endpoint for the contoso.onmicrosoft.com tenant.
Other Settings: Various other settings related to authorization and token endpoints are displayed.
Answer Area:
The selected authorization grant type is for:
Web applications
To enable custom data in the grant flow, select
Support state parameter
Explanation:
The selected authorization grant type is for:
Web applications:
Why it’s correct: The authorization code grant type is the most secure and recommended method to obtain access tokens for web applications. In this flow the client (web app) first gets an authorization code from the authorization server, and then uses it to obtain an access token.
Why not others:
Background services: Background services (also known as daemon apps) typically use the client credentials flow, which is not enabled in this configuration.
Headless device authentication: Headless devices often use the device code flow, which is not a grant type present here.
Single-page applications: Single-page applications (SPAs) can use the authorization code flow, but often use the implicit grant type, which is disabled in this configuration.
To enable custom data in the grant flow, select:
Support state parameter:
Why it’s correct: The “Support state parameter” setting enables an opaque state value to be included in the authorization request; the authorization server returns it unchanged along with the authorization code. This value can carry custom data that needs to round-trip through the authorization flow (and also helps protect against CSRF).
Why not others:
Client credentials: This is for service-to-service authentication without a user present.
Implicit: This is an older, less secure grant type for single-page applications. It does not enable passing custom data.
Resource owner password: This is a less secure grant type that should be avoided in most scenarios. It also does not enable passing custom data.
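To make the selected settings concrete, here is a small sketch of the authorization-code request a web application would build, with the state parameter carrying a CSRF token plus a piece of custom data. The client ID, redirect URI, and scope are placeholders, not values from the exhibit.

```python
# Minimal sketch: building an authorization-code request with a state value.
# The state both protects against CSRF and can carry custom data that the
# authorization server echoes back with the code. Client ID, redirect URI,
# and scope are placeholders.
import base64
import json
import secrets
from urllib.parse import urlencode

AUTHORIZE_URL = (
    "https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize"
)

# Pack a CSRF token plus custom data into the opaque state value.
state = base64.urlsafe_b64encode(
    json.dumps({"csrf": secrets.token_urlsafe(16), "return_to": "/orders"}).encode()
).decode()

params = {
    "client_id": "<client-id>",
    "response_type": "code",          # authorization code grant
    "redirect_uri": "https://contoso.com/callback",
    "scope": "openid profile",
    "state": state,
}

print(f"{AUTHORIZE_URL}?{urlencode(params)}")

# On the redirect back, the app verifies that the returned state matches the
# one it sent, then exchanges the code for tokens at the token endpoint.
```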
Important Notes for the AZ-304 Exam:
OAuth 2.0 Grant Types: Be very familiar with the different OAuth 2.0 grant types:
Authorization Code
Implicit
Client Credentials
Resource Owner Password
Device Code
API Management OAuth2 Settings: Understand how to configure OAuth 2.0 settings in Azure API Management.
“State” Parameter: Know the importance of the “state” parameter in OAuth flows and how it helps prevent CSRF attacks. Understand how this can be used to pass custom data.
API Security: Know how to properly secure APIs with OAuth 2.0.
Exam Focus: Be sure to select the answer based on a close inspection of the provided details.
You are designing an order processing system in Azure that will contain the Azure resources shown in the following table.
| Name | Type | Purpose |
| --- | --- | --- |
| App1 | Web app | Processes customer orders |
| Function1 | Function | Checks product availability at vendor 1 |
| Function2 | Function | Checks product availability at vendor 2 |
| storage1 | Storage account | Stores order processing logs |
The order processing system will have the following transaction flow:
✑ A customer will place an order by using App1.
✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.
✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.
✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.
✑ All the steps of the transaction will be logged to storage1.
Which type of resource should you recommend for the integration component?
an Azure Data Factory pipeline
an Azure Service Bus queue
an Azure Event Grid domain
an Azure Event Hubs capture
Correct Answer:
An Azure Service Bus queue
Why Azure Service Bus Queue is Correct:
Message Processing: The integration component needs to receive a message from App1, process it (e.g., determine the order type), and trigger either Function1 or Function2. A Service Bus queue supports this by holding the message until a consumer (e.g., a Logic App, Function, or custom code) dequeues it, evaluates the order type, and invokes the appropriate function.
Decoupling: Queues decouple App1 from the Functions, ensuring reliable, asynchronous communication—critical for an order processing system where availability checks may take time.
Function Integration: Azure Functions have Service Bus trigger bindings, making it seamless to process queue messages and trigger downstream actions (e.g., calling Function1 or Function2).
Reliability: Features like message persistence, retries, and dead-letter queues ensure no orders are lost, aligning with the production nature of the system.
Logging Alignment: While the queue itself doesn’t log to storage1, it fits into the workflow where Functions or other components can log steps, as specified.
AZ-304 Relevance: Service Bus is a common choice in AZ-304 for designing transactional, message-driven architectures, balancing scalability, reliability, and integration simplicity.
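As a rough illustration of the pattern (not the only possible implementation), the sketch below uses the azure-servicebus Python SDK: App1 enqueues an availability-check message, and the integration component dequeues it and routes to Function1 or Function2 by order type. The connection string, queue name, and routing values are assumptions.

```python
# Minimal sketch of the queue-based integration: App1 sends a message, a
# consumer reads it and routes to Function1 or Function2 by order type.
# Connection string, queue name, and routing rules are hypothetical;
# requires the azure-servicebus package.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"
QUEUE = "order-availability-checks"

# --- App1 side: enqueue an availability-check request ------------------------
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE) as sender:
        sender.send_messages(
            ServiceBusMessage(json.dumps({"order_id": "12345", "order_type": "vendor1"}))
        )

# --- Integration component: dequeue and dispatch -----------------------------
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            order = json.loads(str(msg))
            if order["order_type"] == "vendor1":
                print(f"Trigger Function1 for order {order['order_id']}")
            else:
                print(f"Trigger Function2 for order {order['order_id']}")
            receiver.complete_message(msg)  # remove from the queue on success
```

In a real deployment the consuming side would more likely be a Service Bus trigger binding on the Functions or a Logic App, so no polling loop is needed.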
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.
Does this meet the goal?
Yes
No
Goal:
Deploy Azure App Service instances and Azure SQL databases simultaneously.
App Service instances must be deployed only to specific Azure regions.
Resources for the App Service instances must reside in the same region.
Proposed Solution:
Create resource groups based on locations.
Implement resource locks on the resource groups.
Analysis:
Resource Groups Based on Location:
Creating resource groups based on locations is a good practice for organizing resources in Azure. It makes it easier to manage resources and ensures that all the resources that belong to a specific geographic region are grouped together. This is an important step in reaching the goal.
Resource Locks
Resource locks, however, only prevent accidental deletion or modification of resource groups and the resources within them. They do not control which resources are deployed or where, so a user could still deploy resources to a region outside the approved locations.
Does It Meet the Goal?: No
Explanation:
Resource Groups by Location (Partial Fulfillment): Creating resource groups by location does help with organizing resources and ensures they’re deployed in the same region, meeting part of the requirement of keeping all resources in the same location.
Resource Locks: Locks do not solve the region requirement; a resource can still be created in any region, because locks only prevent accidental deletion or modification of existing resources.
Missing Enforcement: The solution lacks a mechanism, such as an Azure Policy assignment with an allowed-locations rule, to enforce that resources are deployed only to the approved Azure regions. Because this is a regulatory requirement, simply organizing resource groups by location is not enough.
Correct Answer:
No
Important Notes for the AZ-304 Exam:
Resource Groups: Understand the purpose and use of resource groups.
Resource Locks: Know the purpose and limitations of resource locks.
Regulatory Requirements: Recognize that solutions must enforce compliance requirements. This is a key element of many questions.
Enforcement Mechanisms: Look for mechanisms that enforce policies instead of simply organizing resources.
Exam Focus: Read the proposed solution and verify if it truly meets the goal. If any part of the solution does not achieve the goal, then the answer is “No”.
You need to recommend a data storage solution that meets the following requirements:
- Ensures that applications can access the data by using a REST connection
- Hosts 20 independent tables of varying sizes and usage patterns
- Automatically replicates the data to a second Azure region
- Minimizes costs
What should you recommend?
an Azure SQL Database that uses active geo-replication
tables in an Azure Storage account that use geo-redundant storage (GRS)
tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
an Azure SQL Database elastic database pool that uses active geo-replication
Requirements:
REST API Access: The data must be accessible through a REST interface.
Independent Tables: The solution must support 20 independent tables of different sizes and usage patterns.
Automatic Geo-Replication: The data must be automatically replicated to a secondary Azure region.
Minimize Costs: The solution should be cost-effective.
Recommended Solution:
Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
Explanation:
Azure Storage Account with RA-GRS Tables:
REST Access: Azure Storage tables are directly accessible using a REST API, which is a fundamental part of their design.
Independent Tables: A single Azure Storage account can hold many independent tables, meeting the 20-table requirement.
Automatic Geo-Replication (RA-GRS): RA-GRS ensures that the data is replicated to a secondary region, and provides read access to that secondary location. This satisfies the HA and geo-redundancy requirements.
Minimize Cost: Azure Storage tables are designed to handle different patterns and are cost effective compared to SQL options.
Why not others:
Azure SQL Database with active geo-replication: While it provides strong SQL capabilities and geo-replication, SQL Database is more costly than Table storage for simple table data and carries higher operational overhead. Its data is also accessed through SQL (TDS) client libraries rather than a native REST data API, so it does not directly satisfy the REST requirement.
Azure SQL Database elastic database pool with active geo-replication: Same reasons as above, but with the added complication of an elastic pool, which is unnecessary for the stated requirements and would add even more costs.
Tables in an Azure Storage account that use geo-redundant storage (GRS): This would meet the geo-replication requirements but it would not provide the ability to read from the secondary location, and so is not as good a choice as RA-GRS.
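To illustrate the REST-based access and the read-only secondary endpoint that RA-GRS provides, here is a minimal sketch using the azure-data-tables Python SDK (which calls the Table service REST API). The account name, key, table, and entity values are placeholders.

```python
# Minimal sketch: Table storage access via the Table service REST API using
# the azure-data-tables SDK, plus a read from the RA-GRS secondary endpoint.
# Account name, key, table, and entity values are hypothetical.
from azure.core.credentials import AzureNamedKeyCredential
from azure.data.tables import TableServiceClient

ACCOUNT = "companydata"
credential = AzureNamedKeyCredential(ACCOUNT, "<account-key>")

# Primary endpoint: read/write access.
primary = TableServiceClient(
    endpoint=f"https://{ACCOUNT}.table.core.windows.net", credential=credential
)
table = primary.create_table_if_not_exists("Orders")
table.upsert_entity({"PartitionKey": "2024-06", "RowKey": "order-001", "Total": 42.5})

# Secondary endpoint (RA-GRS): read-only access to the replicated data.
secondary = TableServiceClient(
    endpoint=f"https://{ACCOUNT}-secondary.table.core.windows.net", credential=credential
)
for entity in secondary.get_table_client("Orders").query_entities(
    "PartitionKey eq '2024-06'"
):
    print(entity["RowKey"], entity["Total"])
```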
Important Notes for the AZ-304 Exam:
Azure Storage Tables: Know what they are designed for and their features (scalability, cost-effectiveness, REST API access). Be able to explain where they are appropriate.
Geo-Redundancy: Understand the differences between GRS, RA-GRS and how they impact performance, availability and cost.
Cost-Effective Solutions: The exam often asks for the most cost-effective solution. Be aware of the pricing models of different Azure services.
SQL Database Use Cases: Understand when to use SQL DBs and when other options (like Table storage) are more appropriate. SQL DBs are better suited for complex queries, transactions, and relational data models.
REST API Access: Know which Azure services offer a REST interface for data access and when it might be required.
Exam Technique: Ensure you fully read the requirements, so you don’t pick a more expensive or complex solution than is needed.
HOTSPOT
Your company has two on-premises sites in New York and Los Angeles and Azure virtual networks in the East US Azure region and the West US Azure region. Each on-premises site has Azure ExpressRoute circuits to both regions.
You need to recommend a solution that meets the following requirements:
✑ Outbound traffic to the Internet from workloads hosted on the virtual networks must be routed through the closest available on-premises site.
✑ If an on-premises site fails, traffic from the workloads on the virtual networks to the Internet must reroute automatically to the other site.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Routing from the virtual networks to
the on-premises locations must be
configured by using:
Azure default routes
Border Gateway Protocol (BGP)
User-defined routes
The automatic routing configuration
following a failover must be
handled by using:
Border Gateway Protocol (BGP)
Hot Standby Routing Protocol (HSRP)
Virtual Router Redundancy Protocol (VRRP)
Correct Answer:
Routing from the virtual networks to the on-premises locations must be configured by using:
Border Gateway Protocol (BGP)
The automatic routing configuration following a failover must be handled by using:
Border Gateway Protocol (BGP)
Correct Answers and Why
Routing from the virtual networks to the on-premises locations must be configured by using:
Border Gateway Protocol (BGP)
Why?
ExpressRoute Standard: ExpressRoute relies on BGP for exchanging routes between your on-premises networks and Azure virtual networks. It’s the fundamental routing protocol for this type of connectivity.
Dynamic Routing: BGP allows for dynamic route learning, meaning routes are automatically adjusted based on network changes (like a site going down). This is essential for the failover requirement.
Path Selection: BGP allows for attributes like Local Preference to choose the best path. The path to the nearest on-prem location can be preferred by setting a higher local preference.
Why Not the Others?
Azure Default Routes: These routes are for basic internal Azure connectivity and internet access within Azure. They don’t handle routing to on-premises networks over ExpressRoute.
User-defined routes (UDRs): While UDRs can force traffic through a specific path they do not facilitate dynamic failover without manual intervention and are therefore unsuitable in this scenario.
The automatic routing configuration following a failover must be handled by using:
Border Gateway Protocol (BGP)
Why?
BGP Convergence: BGP’s inherent nature is to dynamically adapt to network changes. If an on-premises site or an ExpressRoute path becomes unavailable, BGP automatically detects this and withdraws routes from the failed path.
Automatic Rerouting: BGP then advertises the available paths, leading to the rerouting of traffic through the remaining healthy site, achieving the automatic failover requirement.
Why Not the Others?
Hot Standby Routing Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP): These protocols are used for first-hop redundancy on local networks which is not applicable in Azure environments or to Expressroute configurations. They do not facilitate the end-to-end routing and failover required.
Important Notes for the AZ-304 Exam
ExpressRoute Routing is BGP-Based: Understand that BGP is the routing protocol for ExpressRoute. If a question involves routing over ExpressRoute, BGP is highly likely to be involved.
BGP for Dynamic Routing and Failover: Know that BGP not only provides routing but also provides failover capabilities through its dynamic path selection and convergence features.
Local Preference: Understand how BGP attributes like Local Preference can be used to influence path selection. This is key for scenarios where you want to force a primary path and have a secondary backup path.
Azure Networking Core Concepts: You should have a solid understanding of:
Virtual Networks: How they’re used, subnetting, IP addressing.
Route Tables: Both default and User-Defined, and how they control traffic routing.
ExpressRoute: The different connection options and associated routing implications.
Dynamic vs. Static Routing: Know the difference between dynamic routing (BGP) and static routing (User Defined Routes) and where they are best suited.
Hybrid Networking: Be prepared to deal with hybrid scenarios that connect on-premises and Azure resources.
Failover: Be aware of the failover options and be able to choose the best solutions for different circumstances. BGP is the most common solution for failover between on-prem and Azure.
HSRP and VRRP Applicability: These are first hop redundancy protocols used locally and are not suitable for Azure cloud environments. They should not be suggested for Azure routing scenarios.
You have an Azure subscription. The subscription contains an app that is hosted in the East US, Central Europe, and East Asia regions.
You need to recommend a data-tier solution for the app. The solution must meet the following requirements:
✑ Support multiple consistency levels.
✑ Be able to store at least 1 TB of data.
✑ Be able to perform read and write operations in the Azure region that is local to the app instance.
What should you include in the recommendation?
A. an Azure Cosmos DB database
B. a Microsoft SQL Server Always On availability group on Azure virtual machines
C. an Azure SQL database in an elastic pool
D. Azure Table storage that uses geo-redundant storage (GRS) replication
The correct option for the data-tier solution that meets all the specified requirements is:
A. an Azure Cosmos DB database
Here’s why this option is the best choice:
Support for Multiple Consistency Levels: Azure Cosmos DB provides five consistency levels (strong, bounded staleness, session, consistent prefix, and eventual), allowing you to choose the level of consistency that best fits your application’s needs.
Storage Capacity: Azure Cosmos DB can easily store more than 1 TB of data, making it suitable for applications with large datasets.
Local Read and Write Operations: Azure Cosmos DB is designed for global distribution and can automatically replicate your data across multiple regions. This means that read and write operations can occur in the Azure region that is local to the app instance, ensuring low latency and high availability.
High Availability: Cosmos DB is built to provide high availability and resilience, automatically handling failover between regions if one becomes unavailable.
The other options are less suitable for the following reasons:
B. a Microsoft SQL Server Always On availability group on Azure virtual machines: While this solution provides high availability, it does not inherently support multiple consistency levels or automatic regional replication without additional configuration.
C. an Azure SQL database in an elastic pool: This option supports scaling but does not provide the same level of global distribution or multiple consistency levels as Cosmos DB.
D. Azure Table storage that uses geo-redundant storage (GRS) replication: While GRS provides redundancy, it does not support multiple consistency levels or allow for local read/write operations in different regions effectively.
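A minimal sketch of the two capabilities that drive this answer, using the azure-cosmos Python SDK: selecting one of the five consistency levels and preferring the region local to the app instance for reads. The endpoint, key, database and container names, and the consistency_level/preferred_locations keyword arguments as shown are assumptions for illustration.

```python
# Minimal sketch: connect to a multi-region Cosmos DB account, preferring the
# region local to this app instance and choosing a consistency level.
# Endpoint, key, and names are hypothetical; requires the azure-cosmos package.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://contoso-orders.documents.azure.com:443/",
    "<account-key>",
    consistency_level="Session",        # one of the five selectable levels
    preferred_locations=["East US"],    # read from the region local to this instance
)

db = client.create_database_if_not_exists("appdata")
container = db.create_container_if_not_exists(
    id="orders", partition_key=PartitionKey(path="/customerId")
)

container.upsert_item({"id": "order-001", "customerId": "c-42", "total": 19.99})
print(container.read_item("order-001", partition_key="c-42"))
```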
Your company purchases an app named App1.
You plan to run App1 on seven Azure virtual machines in an Availability Set. The number of fault domains is set to 3. The number of update domains is set to 20.
You need to identify how many App1 instances will remain available during a period of planned maintenance.
How many App1 instances should you identify?
A. 1
B. 2
C. 6
D. 7
Understanding Availability Sets
Purpose: Availability Sets are used to protect your applications from planned and unplanned downtime within an Azure datacenter.
Fault Domains (FDs): Fault Domains define groups of virtual machines that share a common power source and network switch. In the event of a power or switch failure, VMs in different FDs will be affected independently of each other.
Update Domains (UDs): Update Domains define groups of virtual machines that can be rebooted simultaneously during an Azure maintenance window. Azure applies planned maintenance to UDs one at a time.
The Key Rule
During planned maintenance, Azure updates VMs within a single Update Domain at a time. Azure moves to the next UD only after completing an update to the current UD. This means that while an update is being done on one UD, the other UDs are not affected.
Analyzing the Scenario
7 VMs in total
3 Fault Domains: This is important for unplanned maintenance, but doesn’t directly impact our answer here.
20 Update Domains: This is the important factor for planned maintenance.
It does not mean that 20 update domains are physically in use; 20 is simply the maximum. With 7 VMs, each VM lands in its own update domain, so 7 of the possible 20 UDs are used.
Calculating Availability During Planned Maintenance
Minimum VMs per Update Domain: With 7 VMs and up to 20 UDs available, Azure places each virtual machine in its own update domain, so no UD contains more than one of the VMs.
Impact of Maintenance: During a planned maintenance event, Azure will update one UD at a time. Therefore during maintenance one of those 7 VMs will be unavailable while the update is applied.
Available VMs: That means that at any given time when maintenance is applied to one single UD, the remaining VMs in the other UDs will remain available. In this case 7-1=6 VMs.
Correct Answer
6
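The calculation generalizes to any availability set: spread the VMs across the update domains actually in use and subtract the largest single-UD group, because only one UD is serviced at a time. A small illustrative sketch using this question's numbers:

```python
# Minimal sketch: minimum VMs available during planned maintenance,
# assuming round-robin placement across update domains.
def min_available_during_planned_maintenance(vm_count: int, update_domains: int) -> int:
    used_uds = min(vm_count, update_domains)  # UDs that actually hold VMs
    largest_ud = -(-vm_count // used_uds)     # ceiling division: biggest UD group
    return vm_count - largest_ud              # one UD is down at a time

print(min_available_during_planned_maintenance(7, 20))  # 6
```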
Important Notes for the AZ-304 Exam
Availability Sets vs. Virtual Machine Scale Sets: Know the difference. Availability Sets provide fault tolerance for individual VMs, while Scale Sets provide scalability and resilience for groups of identical VMs (often used for autoscaling). This question specifically used an availability set.
Fault Domains (FDs) vs. Update Domains (UDs): Be clear on the purpose of each. FDs for unplanned maintenance, UDs for planned maintenance.
Impact of UDs on Planned Maintenance: During planned maintenance, only one UD is updated at a time, ensuring that your application can remain available.
Distribution of VMs: In an availability set, Azure evenly distributes VMs across FDs and UDs.
Maximum FDs and UDs: Understand that the maximum number of FDs is 3 and UDs are 20 in Availability Sets.
Real-World Scenario: Be aware that real production workloads can have other availability and redundancy concerns and that more advanced redundancy can be achieved by using multiple availability sets in the same region or a combination of Availability sets and Availability zones.
Calculations: Be able to determine the availability of VMs during planned or unplanned maintenance based on the number of FDs and UDs as well as the number of VMs in a given configuration.
Best Practice: Best practice is to have at least 2 VMs in an availability set, and 2 availability sets in your region to provide redundancy in the event of zonal failures as well as UD / FD maintenance.
Your company has the infrastructure shown in the following table:
| Location | Resources |
| --- | --- |
| Azure | Azure subscription named Subscription1; 20 Azure web apps |
| On-premises datacenter | Active Directory domain; server running Azure AD Connect; Linux computer named Server1 |
The on-premises Active Directory domain syncs to Azure Active Directory (Azure AD).
Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
You plan to migrate Server1 to a virtual machine in Subscription1.
A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.
You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.
What should you include in the recommendation?
Azure AD Domain Services (Azure AD DS)
an Azure VPN gateway
the Active Directory Domain Services role on a virtual machine
Azure AD Application Proxy
Understanding the Requirements
Application (App1): Uses LDAP queries to authenticate users in the on-premises Active Directory.
Migration: Moving from an on-premises Linux server to an Azure VM.
Security Policy: VMs and services in Azure are not allowed to access the on-premises network.
Functionality: The migrated application must still be able to authenticate users.
Analyzing the Options
Azure AD Domain Services (Azure AD DS)
Pros:
Provides a managed domain controller in Azure, allowing VMs to join the domain.
Supports LDAP queries for authentication.
Independent of the on-premises network.
Synchronizes user information from Azure AD.
Fully managed, eliminating the need for maintaining domain controllers.
Cons:
Cost implications from running an additional service.
Verdict: This is the most suitable option. It meets the functional requirements without violating the security policy.
An Azure VPN Gateway
Pros:
Provides a secure connection between Azure and on-premises networks.
Cons:
Violates the security policy that prevents Azure resources from connecting to on-premises.
Would give the VM access to the entire on-premises network (if set up as a site-to-site connection), including Active Directory.
Verdict: Not a valid option because it directly contradicts the security policy.
The Active Directory Domain Services role on a virtual machine
Pros:
Provides the needed domain services
Cons:
Would require setting up and managing a domain controller in Azure.
Would require a VPN connection to replicate with the on-premises domain, which would violate the security policy.
Requires ongoing maintenance.
Verdict: Not a valid option because it would be hard to maintain and the connection to on-prem would violate the security policy.
Azure AD Application Proxy
Pros:
Allows external users to connect to internal resources.
Cons:
Not relevant for this use case. Application Proxy does not manage or provide LDAP access to users.
Verdict: Not a good fit as it does not help with authentication for the application.
Correct Recommendation
The best solution is Azure AD Domain Services (Azure AD DS).
Explanation
LDAP Compatibility: Azure AD DS provides a managed domain service compatible with LDAP queries, which is precisely what App1 needs for user authentication.
Isolated Azure Environment: Azure AD DS is entirely contained within Azure and does not require a connection to the on-premises network. This allows you to satisfy the security policy.
Azure AD Synchronization: Azure AD DS syncs users from Azure AD, meaning users will be able to authenticate after the migration.
Ease of Use: Azure AD DS is a fully managed service so you will not need to worry about the underlying infrastructure.
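Once the migrated VM is joined to the managed domain, App1 can keep issuing the same kind of LDAP queries against the Azure AD DS secure LDAP endpoint. A minimal sketch with the ldap3 Python library; the domain name, bind account, and search base are hypothetical.

```python
# Minimal sketch: LDAP bind and user lookup against an Azure AD DS managed
# domain over secure LDAP. Domain, account, and search base are hypothetical;
# requires the ldap3 package.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldaps://aadds.contoso.com", port=636, use_ssl=True, get_info=ALL)

conn = Connection(
    server,
    user="svc-app1@aadds.contoso.com",
    password="<password>",
    auto_bind=True,  # raises if the bind (authentication) fails
)

conn.search(
    search_base="DC=aadds,DC=contoso,DC=com",
    search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
    search_scope=SUBTREE,
    attributes=["displayName", "mail"],
)
print(conn.entries)
```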
Important Notes for the AZ-304 Exam
Azure AD DS Use Cases: Know that Azure AD DS is designed for scenarios where you need domain services (including LDAP) in Azure but cannot/should not connect to on-premises domain controllers.
Hybrid Identity: Be familiar with hybrid identity options, such as using Azure AD Connect to sync on-premises Active Directory users to Azure AD.
Security Policies: Pay close attention to security policies described in exam questions. The correct answer must satisfy every stated security requirement.
Service Selection: Be able to choose the correct Azure service based on the stated requirements of the question. For example, know when to use Azure AD DS as opposed to spinning up a domain controller in a VM.
Alternatives: You should know what other options there are that could theoretically be used, but also understand their pros and cons. For instance, you should be able to state that a VPN could facilitate the connection, but that the security policy would need to be updated.
LDAP Authentication: Understand LDAP as the core functionality for Active Directory authentication.
Fully Managed Services: Be aware of the benefits of managed services (like Azure AD DS) in reducing management overhead.
You are reviewing an Azure architecture as shown in the Architecture exhibit (Click the Architecture tab.)
Log Files → Azure Data Factory → Azure Data Lake Storage ⇄ Azure Databricks → Azure Synapse Analytics → Azure Analysis Services → Power BI
Steps:
Ingest: Log Files → Azure Data Factory
Store: Azure Data Factory → Azure Data Lake Storage
Prep and Train: Azure Data Lake Storage ⇄ Azure Databricks
Model and Serve: Azure Synapse Analytics → Azure Analysis Services
Visualize: Azure Analysis Services → Power BI
The estimated monthly costs for the architecture are shown in the Costs exhibit. (Click the Costs tab.)
| Service | Description | Cost |
|---|---|---|
| Azure Synapse Analytics | Tier: Compute-optimised Gen2, Compute: DWU 100 x 1 | US$998.88 |
| Data Factory | Azure Data Factory V2, Data Pipeline service type | US$4,993.14 |
| Azure Analysis Services | Developer (hours), 5 instance(s), 720 hours | US$475.20 |
| Power BI Embedded | 1 node(s) x 1 month, Node type: A1, 1 virtual core(s) | US$735.91 |
| Storage Accounts | Block Blob Storage, General Purpose V2, LRS redundancy | US$21.84 |
| Azure Databricks | Data Analytics workload, Premium tier, 1 D3v2 (4 vCPU) | US$515.02 |
| Estimate total | | US$7,739.99 |
The log files are generated by user activity on Apache web servers. The log files are in a consistent format. Approximately 1 GB of logs is generated per day. Microsoft Power BI is used to display weekly reports of the user activity.
You need to recommend a solution to minimize costs while maintaining the functionality of the architecture.
What should you recommend?
Replace Azure Data Factory with CRON jobs that use AzCopy.
Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.
Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.
Replace Azure Databricks with Azure Machine Learning.
Understanding the Existing Architecture
Data Ingestion: Log files from Apache web servers are ingested into Azure Data Lake Storage via Azure Data Factory.
Data Processing: Azure Databricks is used to prep and train the data.
Data Warehousing: Azure Synapse Analytics is used to model and serve data.
Data Visualization: Azure Analysis Services and Power BI are used for visualization.
Cost Breakdown and Bottlenecks
The cost breakdown shows the following areas as significant expenses:
Azure Data Factory: $4,993.14 (by far the most expensive item)
Azure Synapse Analytics: $998.88
Power BI Embedded: $735.91
The other items (Analysis services, Databricks, and storage) are relatively low cost.
Analyzing the Recommendations
Replace Azure Data Factory with CRON jobs that use AzCopy.
Pros:
Significant cost reduction: AzCopy is free and can be used with a simple CRON job.
Suitable for the relatively small amount of data that is being moved.
Cons:
Less feature-rich than Data Factory (no orchestration, error handling, monitoring, etc.).
Adds management overhead as you need to create and maintain the CRON jobs.
Verdict: This is the best option. Given the small data volume, the complexity of Data Factory is overkill and the cost can be reduced dramatically.
Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.
Pros:
Can be more cost effective for smaller workloads and can scale up or down easily.
Cons:
May need changes to the way the data is stored and managed.
Hyperscale is designed for transactional workloads and may not be the best replacement for a data warehouse.
Verdict: Not the best option, as it may impact the architecture of the solution and the query patterns used.
Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.
Pros:
Could be less expensive than the managed service for small workloads.
Cons:
Significantly more management overhead, less scalable.
Would reduce the overall functionality of the solution, since multiple services would have to be consolidated onto one VM.
Would not necessarily reduce costs, because the VM, the SQL Server licences, and the management effort combined would likely cost more.
Verdict: Not recommended. Introduces complexity and management overhead.
Replace Azure Databricks with Azure Machine Learning.
Pros:
Azure Machine Learning can also do data processing.
May be more cost efficient depending on workload.
Cons:
Azure Machine Learning is geared towards machine learning and predictive analytics rather than general data processing and preparation.
May require a significant rework of the existing process.
Verdict: Not a suitable option, as it is not a like-for-like replacement.
Recommendation
The best recommendation is:
Replace Azure Data Factory with CRON jobs that use AzCopy.
Explanation
Cost Savings: The primary issue is the high cost of Azure Data Factory. Using CRON jobs and AzCopy is a simple, low-cost alternative for the relatively small volume of data being moved.
Functionality: The CRON job will simply move the data from the source location to Azure Data Lake Storage, with the downstream processing steps remaining the same.
Complexity: While this adds management overhead, because you must create and maintain the CRON job, the simplicity of the requirements outweighs the added complexity (an illustrative script follows below).
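As a rough illustration of the replacement (not part of the original answer), the sketch below shows a small Python script that shells out to AzCopy to push the previous day's Apache logs into Data Lake Storage, with a cron entry to run it daily. The paths, storage account, and SAS token are placeholder assumptions.

```python
# Illustrative replacement for the Data Factory copy step: shell out to AzCopy
# to push yesterday's Apache logs into Data Lake Storage, scheduled from cron.
# Paths, the storage account name, and the SAS token are placeholders.
#
# Example crontab entry (daily at 01:00):
#   0 1 * * * /usr/bin/python3 /opt/scripts/upload_logs.py >> /var/log/upload_logs.log 2>&1
import datetime
import subprocess

LOG_DIR = "/var/log/apache2"  # where the web servers write their daily log folders
DEST = ("https://contosodatalake.dfs.core.windows.net/logs/"
        "{date}?<sas-token>")  # SAS token kept as a placeholder

def upload_for(date: datetime.date) -> None:
    dest = DEST.format(date=date.isoformat())
    # "azcopy copy <source> <destination> --recursive" copies a local directory tree.
    subprocess.run(
        ["azcopy", "copy", f"{LOG_DIR}/{date.isoformat()}", dest, "--recursive"],
        check=True,
    )

if __name__ == "__main__":
    upload_for(datetime.date.today() - datetime.timedelta(days=1))
```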
Important Notes for the AZ-304 Exam
Cost Optimization: Know that the exam may test your ability to identify cost drivers and suggest cost optimizations.
Azure Data Factory: Understand when ADF is the right tool and when a simpler tool will suffice. It is often beneficial to use the simplest tool that still meets the requirements.
Data Transfer: Be aware of options like AzCopy for moving data in a low-cost way.
CRON jobs: Understand how CRON jobs can be used to schedule operations.
Azure Synapse Analytics: Understand how Azure Synapse Analytics can provide insights and processing power, but can also be expensive.
SQL Database Hyperscale: Understand when it is more beneficial to use Hyperscale than Azure Synapse Analytics.
SQL Server on Azure VM: Know the use cases of where a traditional SQL server may be appropriate.
Azure Analysis Services: Know that it is designed for fast data queries and reporting through tools like Power BI, but can add significant cost.
Azure Databricks and ML: Understand the difference and which scenarios are more suited for each.
Service selection: Know how to select a service based on the requirements provided.
Simplicity: Consider solutions that may be less feature-rich, but provide simpler (and lower cost) solutions.
You have an Azure Active Directory (Azure AD) tenant.
You plan to provide users with access to shared files by using Azure Storage. The users will be provided with different levels of access to various Azure file shares based on their user account or their group membership.
You need to recommend which additional Azure services must be used to support the planned deployment.
What should you include in the recommendation?
an Azure AD enterprise application
Azure Information Protection
an Azure AD Domain Services (Azure AD DS) instance
an Azure Front Door instance
The correct answer is C. Azure AD Domain Services (Azure AD DS) instance.
Here’s why:
Understanding the Requirement: The core requirement is to control access to Azure File Shares based on user identities and group memberships defined in Azure AD. Azure File Shares, on their own, don’t natively understand Azure AD identities for access control.
How Azure AD DS Helps:
Extends Azure AD: Azure AD DS provides a managed domain controller service in Azure. It essentially creates a traditional Windows Server Active Directory domain synced with your Azure AD tenant.
Enables Kerberos Authentication: File Shares need a way to authenticate users who want to access them. Azure AD DS enables Kerberos authentication, which is the protocol used by Windows Server-based file servers. With Kerberos authentication enabled, you can assign specific NTFS permissions to individual users and groups on your Azure File Shares which directly translates into allowing or disallowing access.
Seamless Integration: After setting up Azure AD DS, the file shares can be joined to the domain, enabling users to authenticate using their Azure AD credentials seamlessly.
Access Control: This integration provides the capability to define granular NTFS style access control lists (ACLs) for file shares, allowing you to give users/groups specific permissions to the shares and folders.
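As a minimal sketch of the share-level permission step (not from the source), the snippet below assigns a built-in Azure Files SMB data role to an Azure AD group at the storage account scope through the ARM role assignments REST API; directory- and file-level NTFS ACLs are then set with normal Windows tooling after the share is mounted. All IDs, names, and the token are placeholders.

```python
# A hedged sketch of granting share-level access: assign a built-in
# "Storage File Data SMB Share ..." role to an Azure AD group via the ARM
# role assignments REST API. Subscription, resource, role, and group IDs,
# and the bearer token are placeholders.
import uuid
import requests

SCOPE = ("/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
         "Microsoft.Storage/storageAccounts/<account>")  # a narrower share-level scope is also possible
ROLE_ID = "<guid-of-Storage-File-Data-SMB-Share-Contributor>"  # built-in role definition GUID
GROUP_OBJECT_ID = "<azure-ad-group-object-id>"
TOKEN = "<arm-bearer-token>"

url = (f"https://management.azure.com{SCOPE}/providers/Microsoft.Authorization/"
       f"roleAssignments/{uuid.uuid4()}?api-version=2022-04-01")
body = {
    "properties": {
        "roleDefinitionId": f"/subscriptions/<sub-id>/providers/"
                            f"Microsoft.Authorization/roleDefinitions/{ROLE_ID}",
        "principalId": GROUP_OBJECT_ID,
        "principalType": "Group",
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json()["id"])
```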
Why other options are not the best fit:
A. Azure AD enterprise application: Azure AD enterprise applications are primarily used to manage authentication and authorization for cloud-based applications (SaaS). They don’t directly provide the means to manage access to files on Azure file shares in the way described in the scenario.
B. Azure Information Protection: Azure Information Protection (now part of Microsoft Purview Information Protection) classifies and labels sensitive data. It does not control access to Azure file shares based on users and their group memberships.
D. Azure Front Door instance: Azure Front Door is a global, scalable entry-point for web applications and services. It is not relevant to accessing files on Azure File Shares.
You have 200 resource groups across 20 Azure subscriptions.
Your company’s security policy states that the security administrator must verify all assignments of the Owner role for the subscriptions and resource groups once a month. All assignments that are not approved by the security administrator must be removed automatically. The security administrator must be prompted every month to perform the verification.
What should you use to implement the security policy?
Access reviews in Identity Governance
role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Identity Secure Score in Azure Security Center
the user risk policy in Azure Active Directory (Azure AD) Identity Protection
Final Answer:
Access reviews in Identity Governance
Why Access reviews in Identity Governance is Correct:
Monthly Verification: Access reviews can be scheduled to run monthly, prompting the security administrator to verify all Owner role assignments across 20 subscriptions and 200 resource groups.
Automatic Removal: Configurable to automatically revoke unapproved assignments at the end of the review period, enforcing the policy without manual intervention.
Prompting: Sends automated notifications to the security administrator each month to initiate the review, ensuring compliance with the policy.
AZ-304 Relevance: Access reviews are a key Identity Governance feature for governance and compliance on the AZ-304 exam, specifically designed to manage and audit access to Azure resources at scale, making them well suited to this multi-subscription, multi-resource-group scenario.
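To illustrate how such a review could be defined programmatically, the sketch below posts an accessReviewScheduleDefinition to Microsoft Graph with a monthly recurrence, auto-apply enabled, and a default decision of Deny. The scope query targeting Owner role assignments is left as a placeholder, and the reviewer, token, and start date are assumptions; in practice the review is usually configured in the Azure portal.

```python
# A hedged sketch of creating a monthly access review via Microsoft Graph.
# The token, reviewer object ID, and the scope query for Owner role assignments
# are placeholders; the settings show the monthly recurrence and auto-apply behavior.
import requests

TOKEN = "<graph-bearer-token>"
definition = {
    "displayName": "Monthly review of Owner role assignments",
    "descriptionForAdmins": "Verify Owner assignments; unapproved assignments are removed.",
    "scope": {
        "@odata.type": "#microsoft.graph.accessReviewQueryScope",
        "query": "<query selecting Owner role assignments on the subscriptions>",  # placeholder
        "queryType": "MicrosoftGraph",
    },
    "reviewers": [
        {"query": "/users/<security-admin-object-id>", "queryType": "MicrosoftGraph"}
    ],
    "settings": {
        "mailNotificationsEnabled": True,      # prompt the reviewer each month
        "reminderNotificationsEnabled": True,
        "instanceDurationInDays": 7,
        "autoApplyDecisionsEnabled": True,     # apply results automatically
        "defaultDecisionEnabled": True,
        "defaultDecision": "Deny",             # unapproved assignments are removed
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 1, "dayOfMonth": 1},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    json=definition,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["id"])
```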
Your company purchases an app named App1.
You need to recommend a solution to ensure that App1 can read and modify access reviews.
What should you recommend?
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.
Understanding the Requirements
App1 Functionality: Needs to read and modify access reviews.
Azure Environment: Using Azure Active Directory (Azure AD).
Authorization: Must be authorized to perform these actions.
Analyzing the Options
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
Pros:
Application Registration: The correct way to enable an application to be able to access protected resources in Azure AD.
Microsoft Graph API: The Microsoft Graph API is the correct API to access Azure AD, including access reviews.
Delegated Permissions: Permissions to access Microsoft Graph APIs must be delegated to applications, and this can be done using Azure AD application registrations.
Cons:
None. This is the correct approach.
Verdict: This is the correct solution.
From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.
Pros:
Application Registration: Required to allow your app to integrate with Azure.
Cons:
Access Control (IAM): IAM is used for resource-level access control and not for delegating permissions for application access to Azure AD or Graph API resources.
Delegation of permissions to specific APIs, such as the Microsoft Graph API, is not performed through the IAM blade.
Verdict: This is incorrect. IAM is not used to delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
Pros:
API Management is useful when you want to expose your app as a third-party API.
Cons:
API Management: Not required for App1 to interact with the Microsoft Graph API.
Does not support direct delegation of application permissions.
Verdict: This is incorrect. API Management is not the correct service for this task.
From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.
Pros:
API Management is useful when you want to expose your app as a third-party API.
Cons:
API Management: Not required for App1 to interact with the Graph API.
IAM: IAM is not used to delegate access to the Graph API.
Verdict: This is incorrect. API Management is not the correct service, and IAM is not the correct way to configure delegation to the Microsoft Graph API.
Recommendation
The correct recommendation is:
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
Explanation
Application Registration: Registering App1 in Azure AD creates an application object which represents your application and is used to identify your application within the directory.
Microsoft Graph API: The Microsoft Graph API is the unified endpoint for accessing Microsoft 365, Azure AD and other Microsoft cloud resources. Access reviews are also exposed through this API.
Delegated Permissions: You must grant permissions to allow App1 to access the Graph API. Delegated permissions granted through the application registration allow the app to access resources on behalf of the signed-in user. For app-only access, you grant application permissions instead of delegated permissions.
Authorization: After App1 is registered with delegated permissions it is allowed to perform actions on the Graph API such as accessing access reviews.
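A minimal sketch of what this looks like from App1's side, assuming a Python client using the msal package, an app-only (client credentials) flow, and that the registration was granted the AccessReview.ReadWrite.All application permission with admin consent. Tenant ID, client ID, and secret are placeholders.

```python
# A hedged sketch: App1 acquires an app-only token and lists access review
# definitions via Microsoft Graph. Tenant ID, client ID, and secret are placeholders.
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<app1-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# App-only token: the .default scope requests whatever application permissions
# were granted (and admin-consented) on the app registration.
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()
for definition in resp.json().get("value", []):
    print(definition["displayName"])
```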
Important Notes for the AZ-304 Exam
Application Registration: Know how to register applications in Azure AD and why it is a required step to allow apps to access resources.
Microsoft Graph API: Understand that the Graph API is the primary way to access Microsoft 365 and Azure AD resources, including access reviews.
Delegated Permissions vs. Application Permissions: Be able to differentiate between these two types of permissions. Delegated permissions require a signed-in user. Application permissions are app-only and do not need a signed-in user.
Access Control (IAM): Know that IAM is for resource level access and not for granting permission for applications.
API Management: Understand its purpose in publishing and securing APIs, but note that it is not necessary in this use case.
Security Principles: Understand the best practices for securing access to resources such as ensuring that the app is registered and given correct permissions.
You store web access logs data in Azure Blob storage.
You plan to generate monthly reports from the access logs.
You need to recommend an automated process to upload the data to Azure SQL Database every month.
What should you include in the recommendation?
Azure Data Factory
Data Migration Assistant
Microsoft SQL Server Migration Assistant (SSMA)
AzCopy
Understanding the Requirements
Source: Web access logs in Azure Blob storage.
Destination: Azure SQL Database.
Frequency: Monthly.
Automation: The process needs to be automated.
Transformation: No complex transformations are specified, so the service doesn’t need to be a powerful ETL tool.
Analyzing the Options
Azure Data Factory (ADF):
Pros:
Automated Data Movement: Designed to move data between different sources and sinks.
Scheduling: Supports scheduling pipelines for recurring execution (monthly).
Integration: Has built-in connectors for Blob storage and SQL Database.
Scalable: Can handle various data volumes and complexities.
Transformation: Supports data transformation if needed.
Cons:
Slightly more complex to configure than other options, however a simple ADF pipeline is quite easy to configure.
Verdict: This is the best fit. It can orchestrate the entire process from data extraction to data loading, and scheduling.
Data Migration Assistant (DMA):
Pros:
Helps with migrating databases to Azure, including schema and data migration.
Cons:
Not designed for continuous, scheduled data movement.
More of an interactive tool rather than an automated service.
Not suited to ingesting logs into an existing database.
Verdict: Not suitable for recurring data uploads. It is more suited for migrations.
Microsoft SQL Server Migration Assistant (SSMA):
Pros:
Helps with migrating databases from on-premises to Azure SQL Database.
Cons:
Not designed for recurring data uploads from Blob Storage.
Primarily used for database migrations, not for data ingestion.
Verdict: Not a valid option. This is used for migrations and not for scheduled data uploads.
AzCopy:
Pros:
Command-line tool to copy data to and from Azure Storage.
Cons:
Not a managed service; it does not handle scheduling itself and must be scheduled externally using OS tools (e.g., cron or Task Scheduler).
Does not load data directly into a database, so you would need to build a custom solution to load the data into Azure SQL Database.
Does not support any data transformation logic.
Verdict: Not the best option. Requires building a custom solution and does not directly fulfil the requirement to load data into a database.
Recommendation
The correct recommendation is:
Azure Data Factory
Explanation
Automation and Scheduling: Azure Data Factory allows you to create pipelines that can be scheduled to run monthly.
Built-in Connectors: It has connectors for both Azure Blob Storage (to read the logs) and Azure SQL Database (to load data).
Data Integration: It integrates all steps of data extraction, transformation (optional), and loading into a single pipeline.
Monitoring: It provides monitoring and logging for debugging and audit purposes.
Scalability: It can handle a large amount of data if required, and can scale up resources as needed.
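To make the shape of the solution concrete, here is a hedged sketch of the two Data Factory artifacts involved: a pipeline with a single Copy activity (delimited-text Blob source to Azure SQL sink) and a schedule trigger with a monthly recurrence, both deployed through the ARM REST API. The factory, dataset, and trigger names, the start time, and the bearer token are placeholder assumptions.

```python
# A hedged sketch of the Data Factory artifacts for the monthly load: a copy
# pipeline (Blob -> Azure SQL) and a monthly schedule trigger, PUT to the ARM
# REST API. Names, IDs, and the token are placeholders.
import requests

BASE = ("https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/"
        "providers/Microsoft.DataFactory/factories/<factory>")
HEADERS = {"Authorization": "Bearer <arm-bearer-token>"}
API = {"api-version": "2018-06-01"}

pipeline = {
    "properties": {
        "activities": [{
            "name": "CopyLogsToSql",
            "type": "Copy",
            "inputs": [{"referenceName": "AccessLogsBlobDataset", "type": "DatasetReference"}],
            "outputs": [{"referenceName": "AccessLogsSqlDataset", "type": "DatasetReference"}],
            "typeProperties": {
                "source": {"type": "DelimitedTextSource"},   # delimited-text logs in Blob storage
                "sink": {"type": "AzureSqlSink"},             # Azure SQL Database destination
            },
        }]
    }
}

trigger = {
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {"frequency": "Month", "interval": 1,
                           "startTime": "2024-01-01T02:00:00Z", "timeZone": "UTC"}
        },
        "pipelines": [{"pipelineReference": {"referenceName": "MonthlyLogLoad",
                                             "type": "PipelineReference"}}],
    }
}

requests.put(f"{BASE}/pipelines/MonthlyLogLoad", params=API, headers=HEADERS,
             json=pipeline).raise_for_status()
requests.put(f"{BASE}/triggers/MonthlyLogLoadTrigger", params=API, headers=HEADERS,
             json=trigger).raise_for_status()
```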
Important Notes for the AZ-304 Exam
Azure Data Factory (ADF): Understand its capabilities as an ETL and data orchestration tool.
Automated Data Movement: Know how to set up ADF pipelines for recurring data movement.
Data Integration Tools: Familiarize yourself with the available connectors for different data sources and destinations.
Data Migration vs. Data Ingestion: Understand the difference between tools that are used for migration (e.g. DMA, SSMA) and tools for scheduled data uploads (e.g. ADF).
AzCopy: Know the purpose of AzCopy, and its use cases.
Transformation: Understand that transformation is often a requirement and that you can use data factory for this if needed.
Ease of Use: Although ADF is not the simplest tool, it is the easiest to maintain for scheduled recurring events when compared to a custom solution.
You are designing a data protection strategy for Azure virtual machines. All the virtual machines use managed disks.
You need to recommend a solution that meets the following requirements:
- The use of encryption keys is audited.
- All the data is encrypted at rest always.
- You manage the encryption keys, not Microsoft.
What should you include in the recommendation?
Azure Disk Encryption
Azure Storage Service Encryption
BitLocker Drive Encryption (BitLocker)
client-side encryption
Understanding the Requirements
Managed Disks: The virtual machines use Azure managed disks.
Encryption at Rest: All data must be encrypted when stored on disk.
Customer-Managed Keys: You must manage the encryption keys, not Microsoft.
Auditing: The use of encryption keys must be auditable.
Analyzing the Options
Azure Disk Encryption (ADE):
Pros:
Encrypts managed disks for both Windows and Linux VMs.
Supports customer-managed keys (CMK) with Azure Key Vault.
Data is encrypted at rest, meeting the security requirement.
Cons:
Auditing of key usage is not provided by ADE itself; it requires customer-managed keys in Azure Key Vault with diagnostic logging enabled.
Verdict: The best fit. Combined with customer-managed keys in Azure Key Vault and Key Vault diagnostic logging, it satisfies all three requirements.
Azure Storage Service Encryption (SSE):
Pros:
Encrypts data at rest in Azure storage (including managed disks) by default.
Supports Microsoft-managed keys or customer-managed keys.
Cons:
Operates only at the storage service layer; it does not provide the in-guest volume encryption that Azure Disk Encryption provides.
On its own it does not demonstrate the customer-managed key control and key-usage auditing that this scenario requires.
Verdict: Not the intended answer; Azure Disk Encryption with customer-managed keys is a better match for the requirements.
BitLocker Drive Encryption (BitLocker):
Pros:
Encrypts drives in Windows operating systems.
Cons:
Would require manual setup and management on every VM.
Does not provide centralized auditing of key usage.
Does not integrate with customer-managed keys in Azure Key Vault out of the box.
Verdict: Not the correct option. Too much manual overhead, no centralized key auditing, and complex to manage at scale.
Client-Side Encryption:
Pros:
The data is encrypted before it is sent to Azure.
The encryption key is managed by the client.
Cons:
This method requires custom implementations and additional effort from the client.
Does not provide key management or key-usage auditing in Azure.
Verdict: Not suitable. It requires a custom implementation and is not a managed solution.
Recommendation
The recommendation is Azure Disk Encryption with customer-managed keys stored in Azure Key Vault. The key-usage auditing requirement is met by enabling Key Vault diagnostic logging, as described in the steps below.
Explanation
Azure Disk Encryption (ADE): ADE provides encryption for both OS and data disks, using platform-managed keys or customer-managed keys.
Customer-Managed Keys (CMK): By using CMK with Azure Key Vault, you maintain full control over your encryption keys, which satisfies that requirement.
Azure Key Vault Auditing: With diagnostic logging enabled, Azure Key Vault records every access to keys and secrets, and the logs can be monitored in Azure Log Analytics.
Encryption at Rest: The data at rest on the managed disks is always encrypted using the configured CMK keys.
Full coverage: This method fully encrypts all disks for the VM.
Steps to implement auditing:
Create an Azure Key Vault
Create a customer-managed key in Azure Key Vault.
Configure ADE for the VM to use the customer-managed key.
Configure diagnostic settings on Azure Key Vault to send all logs to Azure Log Analytics (a sketch of this step follows the list).
Configure alerts on Key Vault events in Azure Log Analytics so that you are notified when keys are used or modified.
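A minimal sketch of the diagnostic-settings step, assuming the ARM REST API is called directly; the vault and workspace resource IDs and the bearer token are placeholders. The AuditEvent category is what captures key and secret access in Key Vault.

```python
# A hedged sketch of enabling Key Vault diagnostic logging to Log Analytics so
# that key usage is auditable. Resource IDs and the bearer token are placeholders.
import requests

VAULT_ID = ("/subscriptions/<sub-id>/resourceGroups/<rg>/"
            "providers/Microsoft.KeyVault/vaults/<vault-name>")
WORKSPACE_ID = ("/subscriptions/<sub-id>/resourceGroups/<rg>/"
                "providers/Microsoft.OperationalInsights/workspaces/<workspace-name>")

url = (f"https://management.azure.com{VAULT_ID}/providers/Microsoft.Insights/"
       f"diagnosticSettings/keyvault-audit?api-version=2021-05-01-preview")
body = {
    "properties": {
        "workspaceId": WORKSPACE_ID,
        # AuditEvent records key/secret access operations in Key Vault.
        "logs": [{"category": "AuditEvent", "enabled": True}],
    }
}
resp = requests.put(url, json=body, headers={"Authorization": "Bearer <arm-bearer-token>"})
resp.raise_for_status()
print("Diagnostic setting created:", resp.json()["name"])
```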
Important Notes for the AZ-304 Exam
Azure Disk Encryption (ADE): Know the options for ADE (platform-managed vs. customer-managed keys) and their implications.
Azure Key Vault: Understand its purpose for storing and managing secrets, keys, and certificates.
Encryption at Rest: Be aware of the different ways to achieve encryption at rest in Azure storage and databases.
Customer-Managed Keys: Know the benefits and implications of using customer-managed keys (CMK) for encryption.
Auditing: Be aware that auditing is a critical aspect of encryption and compliance.
Managed Disks: Understand that managed disks are now the default type in Azure and that encryption applies to them.
Your company has the divisions shown in the following table.
| Division | Azure subscription | Azure Active Directory (Azure AD) tenant |
|---|---|---|
| East | Sub1 | east.contoso.com |
| West | Sub2 | west.contoso.com |
Sub1 contains an Azure web app that runs an ASP.NET application named App1. App1 uses the Microsoft identity platform (v2.0) to handle user authentication. Users from east.contoso.com can authenticate to App1.
You need to recommend a solution to allow users from west.contoso.com to authenticate to App1.
What should you recommend for the west.contoso.com Azure AD tenant?
guest accounts
an app registration
pass-through authentication
a conditional access policy
Understanding the Requirements
App1: An ASP.NET application using the Microsoft identity platform (v2.0) for authentication.
Current Authentication: east.contoso.com users can already authenticate to App1.
New Authentication: Users from west.contoso.com must also be able to authenticate to App1.
Authentication: Using Microsoft Identity platform and not on-premises authentication.
Azure AD Tenants: The different divisions have different Azure AD tenants.
Analyzing the Options
Guest accounts:
Pros:
Cross-Tenant Access: Allows users from one Azure AD tenant to access resources in another Azure AD tenant.
Easy to Setup: Relatively easy to create and manage.
Azure AD Integration: Fully compatible with Azure AD and Microsoft identity platform (v2.0).
App Access: This will allow the users to be added to the east.contoso.com tenant and allow access to the app.
Cons:
Requires users to be invited.
Verdict: This is the correct solution.
An app registration:
Pros:
Required for any application that authenticates against Azure AD.
Cons:
The app registration is already done, and an additional app registration is not required.
Verdict: Not required. An app registration is already in place.
Pass-through authentication:
Pros:
Allows users to sign in to Azure AD with their on-premises password.
Cons:
Not suitable in this scenario; it validates on-premises passwords and is not relevant for cloud identity authentication.
Not designed for this use case, which is authentication across different Azure AD tenants.
Verdict: Not a good solution. It applies to on-premises identities, not to authentication between Azure AD tenants.
A conditional access policy:
Pros:
Used to enforce access control based on various conditions.
Cons:
Does not enable the required functionality to allow a new tenant access to an existing application.
Used to control which users can access a particular resource, but the user must already be able to authenticate to the tenant first.
Verdict: Not the correct choice. Conditional access can be added later to restrict which users can access the app, but it will not provide the access needed for the app to work for the new tenant.
Recommendation
The correct recommendation is:
Guest accounts
Explanation
Azure AD Guest Accounts: Guest accounts in Azure AD allow you to invite external users into your Azure AD tenant. These users can then access the applications that are hosted on that tenant.
Cross-Tenant Access: Guest accounts enable cross-tenant collaboration, which is exactly what is needed in this scenario.
Microsoft Identity Platform Compatibility: Guest accounts fully integrate with the Microsoft identity platform (v2.0), making them compatible with the authentication mechanisms used by App1.
Access to the App: After a user is added as a guest in the east.contoso.com tenant, they are able to authenticate to the app using their existing credentials from the west.contoso.com tenant.
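As a small illustration (not from the source), the sketch below sends a B2B invitation for a west.contoso.com user through the Microsoft Graph invitations API in the east.contoso.com tenant. The token, email address, and redirect URL are placeholders, and the caller needs an appropriate permission such as User.Invite.All.

```python
# A hedged sketch of inviting a west.contoso.com user as a guest into the
# east.contoso.com tenant via the Microsoft Graph invitation API.
# Token, email address, and redirect URL are placeholders.
import requests

TOKEN = "<graph-bearer-token-for-east-tenant>"
invitation = {
    "invitedUserEmailAddress": "someone@west.contoso.com",
    "inviteRedirectUrl": "https://app1.azurewebsites.net",  # where the guest lands after redeeming
    "sendInvitationMessage": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    json=invitation,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
# The created guest user can then be granted access to App1 (for example via an
# app role assignment or group membership) in the east tenant.
print(resp.json()["invitedUser"]["id"])
```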
Important Notes for the AZ-304 Exam
Azure AD Guest Accounts: Understand the purpose of Azure AD guest accounts for cross-tenant collaboration.
Cross-Tenant Access: Know when and how to configure cross-tenant access with Azure AD.
Microsoft Identity Platform (v2.0): Understand that this platform is used for authentication of modern web and mobile applications.
Application Registrations: Know that an app registration is required to allow applications to access resources from Azure AD.
Pass-through Authentication: Understand that this is used to authenticate on-prem identities, not cloud identities.
Conditional Access: Know that this can control access, but cannot provide access on its own.
Authentication: Have a good understanding of authentication in Azure and how to configure it to work across multiple tenants.
HOTSPOT
You are designing a solution for a stateless front-end application named Application1.
Application1 will be hosted on two Azure virtual machines named VM1 and VM2.
You plan to load balance connections to VM1 and VM2 from the Internet by using one Azure load balancer.
You need to recommend the minimum number of required public IP addresses.
How many public IP addresses should you recommend using for each resource? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Load balancer:
0
1
2
3
VM1:
0
1
2
3
VM2:
0
1
2
3
Final Answer:
Load balancer: 1
VM1: 0
VM2: 0
Why Correct:
Load Balancer (1): One public IP is the minimum needed for an Internet-facing load balancer (Standard SKU) to expose Application1 to external clients. The stateless nature of the app allows simple distribution to VM1 and VM2 via this single IP. This aligns with Azure best practices for public-facing applications and the exam's focus on efficient resource use.
VM1 and VM2 (0 each): Assigning no public IPs to the VMs minimizes resource usage and cost while maintaining the intended design (load-balanced access). The VMs only need private IPs within the VNet, as the load balancer handles all external communication. This is standard for load-balanced architectures in Azure.
Why Others Are Incorrect:
Load Balancer (0): An internal load balancer (0 public IPs) can’t serve Internet traffic, failing the requirement.
Load Balancer (2+): Multiple IPs are only needed for multiple services or advanced scenarios (e.g., multi-region), not specified here.
VM1/VM2 (1+): Public IPs on VMs would bypass the load balancer, contradicting the design, and increase costs unnecessarily.
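To make the IP-count answer concrete, here is a hedged sketch of the relevant fragment of a Standard public load balancer sent to the ARM REST API: one frontend IP configuration referencing a single public IP, and a backend pool that the NICs of VM1 and VM2 join (the VMs themselves get no public IPs). Names, IDs, the region, and the API version are placeholder assumptions, and a working balancer would also need a health probe and a load-balancing rule.

```python
# A hedged sketch of the frontend/backend skeleton of a Standard public load
# balancer. All names, IDs, and the token are placeholders; probes and rules
# are omitted for brevity.
import requests

SUB = "<sub-id>"
RG = "<rg>"
PIP_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}/providers/"
          f"Microsoft.Network/publicIPAddresses/app1-pip")  # the single public IP

load_balancer = {
    "location": "westeurope",
    "sku": {"name": "Standard"},
    "properties": {
        "frontendIPConfigurations": [{
            "name": "app1-frontend",
            "properties": {"publicIPAddress": {"id": PIP_ID}},  # one frontend, one public IP
        }],
        "backendAddressPools": [{"name": "app1-backend"}],  # VM1/VM2 NICs join this pool
    },
}

url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}/providers/"
       f"Microsoft.Network/loadBalancers/app1-lb?api-version=2021-05-01")
requests.put(url, json=load_balancer,
             headers={"Authorization": "Bearer <arm-bearer-token>"}).raise_for_status()
```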