test1 Flashcards
Your network contains an on-premises Active Directory domain.
The domain contains the Hyper-V clusters shown in the following table.
| Name | Number of nodes | Number of virtual machines running on cluster |
|---|---|---|
| Cluster1 | 4 | 20 |
| Cluster2 | 3 | 15 |
You plan to implement Azure Site Recovery to protect six virtual machines running on Cluster1 and three virtual machines running on Cluster2. Virtual machines are running on all Cluster1 and Cluster2 nodes.
You need to identify the minimum number of Azure Site Recovery Providers that must be installed on premises.
How many Providers should you identify?
1
7
9
16
Understanding Azure Site Recovery Providers:
The Azure Site Recovery (ASR) Provider is a software component that must be installed on each Hyper-V host that you want to protect with ASR.
The Provider communicates with the Azure Recovery Services Vault and facilitates replication and failover.
Requirements:
On-Premises Hyper-V: There are two Hyper-V clusters (Cluster1 and Cluster2).
Protection Scope: Six VMs from Cluster1 and three VMs from Cluster2 need to be protected by Azure Site Recovery.
Minimum Providers: Identify the minimum number of ASR Providers needed.
Analysis:
Cluster1: Has 4 nodes.
Cluster2: Has 3 nodes.
Provider per Host: One ASR Provider is needed on each Hyper-V host that will be replicated.
Protected VMs: Six VMs from Cluster1 and three from Cluster2 need protection.
VMs are running on all nodes: All VMs are running across all nodes, which means that we need an ASR Provider installed on all nodes.
Minimum Number of Providers:
Cluster1 requires a provider on each host: 4 providers
Cluster2 requires a provider on each host: 3 providers
Total: 4 + 3 = 7
Correct Answer:
7
Explanation:
You must install an Azure Site Recovery Provider on every Hyper-V host that runs virtual machines you want to protect with ASR. Because the VMs to be protected run on all nodes in both clusters, you must install a Provider on every Hyper-V host: 4 Providers on Cluster1 and 3 Providers on Cluster2, for a total of 7 Providers.
Why not others:
1: Not enough, because there are 7 Hyper-V hosts in total.
9: Incorrect, because it does not match the total number of Hyper-V hosts (7).
16: Incorrect, because it does not match the total number of Hyper-V hosts (7).
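As a quick sanity check, the minimum Provider count is simply the sum of the node counts, because every node in both clusters runs VMs that must be protected. A tiny worked calculation in Python:

```python
# One ASR Provider is required per Hyper-V host that runs protected VMs.
clusters = {"Cluster1": 4, "Cluster2": 3}  # cluster name -> number of nodes

providers_needed = sum(clusters.values())
print(f"Minimum ASR Providers: {providers_needed}")  # -> 7
```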
Important Notes for the AZ-304 Exam:
Azure Site Recovery: Understand the architecture, requirements, and components of ASR.
ASR Provider: Know that the ASR Provider must be installed on each Hyper-V host to be protected.
Minimum Requirements: The exam often focuses on minimum requirements, not the total capacity or other metrics.
Hyper-V Integration: Understand how ASR integrates with Hyper-V for replication.
Exam Focus: Read the question carefully and identify the specific information related to required components.
You need to recommend a strategy for the web tier of WebApp1. The solution must minimize costs.
What should you recommend?
Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours.
Configure the Scale Up settings for a web app.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold.
Configure the Scale Out settings for a web app.
Requirements:
Web Tier Scaling: A strategy for scaling the web tier of WebApp1.
Minimize Cost: The solution must focus on minimizing cost.
Recommended Solution:
Configure the Scale Out settings for a web app.
Explanation:
Configure the Scale Out settings for a web app:
Why it’s the best fit:
Cost Minimization: Web apps (App Services) have a pay-as-you-go model and scale out to add more instances when demand increases and automatically scale back in when the demand decreases. This is cost-effective because you only pay for what you use.
Automatic Scaling: You can configure automatic scaling based on different performance metrics (CPU, memory, or custom metrics), ensuring that you scale out and in based on load.
Managed Service: It is a fully managed service, so it minimizes operational overhead.
Why not others:
Create a runbook that resizes virtual machines automatically to a smaller size outside of business hours: While this can help minimize cost, this is not ideal because VMs are still running all the time. Also, it is more complex to implement and manage.
Configure the Scale Up settings for a web app: Scale Up is more costly because you increase the compute resources of the existing instances.
Deploy a virtual machine scale set that scales out on a 75 percent CPU threshold: While a scale set can also scale out, the VMs are billed for every hour they run and are more complex to manage than a web app, making this the costlier option.
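For illustration only, here is a hedged sketch of defining scale-out and scale-in rules programmatically with the azure-mgmt-monitor Python SDK (the same rules can be configured in the portal under the web app's Scale Out settings). The subscription ID, resource names, and thresholds are placeholders, and dict-style parameters are used in place of the SDK model classes:

```python
# Sketch: an autoscale setting for an App Service plan that adds an instance
# on high CPU and removes one on low CPU. All names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
resource_group = "rg-webapp1"
plan_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.Web/serverfarms/plan-webapp1"
)

def cpu_rule(operator: str, threshold: int, direction: str) -> dict:
    """Build one autoscale rule keyed on average CPU over 10 minutes."""
    return {
        "metric_trigger": {
            "metric_name": "CpuPercentage",
            "metric_resource_uri": plan_id,
            "time_grain": "PT1M",
            "statistic": "Average",
            "time_window": "PT10M",
            "time_aggregation": "Average",
            "operator": operator,
            "threshold": threshold,
        },
        "scale_action": {
            "direction": direction,
            "type": "ChangeCount",
            "value": "1",
            "cooldown": "PT5M",
        },
    }

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)
client.autoscale_settings.create_or_update(
    resource_group,
    "webapp1-autoscale",
    {
        "location": "eastus",
        "target_resource_uri": plan_id,
        "enabled": True,
        "profiles": [{
            "name": "default",
            "capacity": {"minimum": "1", "maximum": "5", "default": "1"},
            "rules": [
                cpu_rule("GreaterThan", 70, "Increase"),  # scale out
                cpu_rule("LessThan", 30, "Decrease"),     # scale back in
            ],
        }],
    },
)
```

The scale-in rule matters for cost: without it, instances added under load are never removed when demand drops.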
Important Notes for the AZ-304 Exam:
Azure App Service: Be very familiar with Azure App Service and its scaling capabilities.
Web App Scale Out: Know the different scaling options for web apps, and when to scale out versus scale up.
Automatic Scaling: Understand how to configure automatic scaling based on performance metrics.
Cost Optimization: The exam often emphasizes cost-effective solutions. Be aware of the pricing models for different Azure services.
PaaS vs. IaaS: Understand the benefits of using PaaS services over IaaS for cost optimization.
Exam Focus: Be sure to select the service that meets the requirements and provides the most cost-effective solution.
You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1.
The solution must meet the following requirements:
- To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
- If the manager does not verify access permission, automatically revoke that permission.
- Minimize development effort.
What should you recommend?
In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
In Azure Active Directory (Azure AD), create an access review of Application1
Requirements:
External Developer Access: Fabrikam developers have RBAC permissions to an Azure application.
Access Verification: Need to verify if the Fabrikam developers still need access.
Monthly Email to Manager: Send a monthly email to the manager with access information.
Automatic Revocation: Revoke permissions if the manager does not approve.
Minimize Development: Minimize custom code development and use available services.
Recommended Solution:
In Azure Active Directory (Azure AD), create an access review of Application1
Explanation:
Azure AD Access Reviews:
Why it’s the best fit:
Automated Review: Azure AD Access Reviews provides a way to schedule recurring access reviews for groups, applications, or roles. It will automatically send notifications to the assigned reviewers (in this case, the manager).
Manager Review: You can configure the access review to have the manager review and approve or deny access for their developers.
Automatic Revocation: You can configure the access review to automatically remove access for users when they are not approved.
Minimal Development: Access reviews are a built-in feature of Azure AD that requires minimal configuration and no custom coding.
Why not others:
In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: While PIM is great for governing privileged role assignments, it is not designed for recurring manager-driven reviews of application access, and it does not provide the monthly review-and-revoke workflow required here.
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: While possible, this requires custom development and management. Azure Access Reviews provides the functionality natively, therefore this is not the optimal solution for the requirements.
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet: Similar to the previous option, this is not the ideal solution since access reviews provides all of this functionality natively.
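As a minimal sketch of what the recommended feature looks like programmatically: access review schedule definitions can be created through Microsoft Graph. This assumes access to Application1 is granted through a single Azure AD group; the group ID and token are placeholders, and the payload fields follow the accessReviewScheduleDefinition resource, so verify the exact shape against the Graph documentation:

```python
# Sketch: create a recurring monthly access review via Microsoft Graph.
# Assumes Application1 access is granted through one Azure AD group; the
# group ID and bearer token are placeholders.
import requests

token = "<bearer-token-with-AccessReview.ReadWrite.All>"
group_id = "<object-id-of-the-Fabrikam-developers-group>"

definition = {
    "displayName": "Monthly review of Application1 access",
    "scope": {
        "query": f"/groups/{group_id}/transitiveMembers",
        "queryType": "MicrosoftGraph",
    },
    "reviewers": [{"query": "./manager", "queryType": "MicrosoftGraph"}],
    "settings": {
        "mailNotificationsEnabled": True,
        "instanceDurationInDays": 7,
        "autoApplyDecisionsEnabled": True,  # apply results automatically
        "defaultDecisionEnabled": True,
        "defaultDecision": "Deny",          # revoke if the manager never responds
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 1},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {token}"},
    json=definition,
)
resp.raise_for_status()
```

Setting defaultDecision to Deny with auto-apply enabled is what implements the requirement to revoke access automatically when the manager does not respond.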
Important Notes for the AZ-304 Exam:
Azure AD Access Reviews: Be very familiar with Azure AD Access Reviews, how they can be used to manage user access, and the methods available to perform them (for example, review by a manager or self-review).
Access Management: Understand the importance of access reviews as part of an overall security strategy.
Access Reviews vs. PIM: Understand when to use PIM, and when to use Access Reviews.
Minimize Development: The exam often emphasizes solutions that minimize development effort.
Exam Focus: Select the simplest and most direct method to achieve the desired outcome.
You have an Azure SQL database named DB1.
You need to recommend a data security solution for DB1. The solution must meet the following requirements:
- When helpdesk supervisors query DB1, they must see the full number of each credit card.
- When helpdesk operators query DB1, they must see only the last four digits of each credit card number.
- A column named Credit Rating must never appear in plain text within the database system, and only client applications must be able to decrypt the Credit Rating column.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Helpdesk requirements:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)
Credit Rating requirement:
Always Encrypted
Azure Advanced Threat Protection (ATP)
Dynamic data masking
Transparent Data Encryption (TDE)
Requirements:
Helpdesk Supervisors: Must see full credit card numbers.
Helpdesk Operators: Must see only the last four digits of credit card numbers.
Credit Rating Column: The Credit Rating column must never appear in plain text within the database system and must be decrypted by the client applications.
Answer Area:
Helpdesk requirements:
Dynamic data masking
Credit Rating requirement:
Always Encrypted
Explanation:
Helpdesk requirements:
Dynamic data masking:
Why it’s correct: Dynamic data masking allows you to obfuscate sensitive data based on the user’s role. You can configure masking rules to show the full credit card numbers to supervisors and only the last four digits to the operators. The underlying data is not modified, and the masking is applied at the query output level.
Why not others:
Always Encrypted: This encrypts the data, but doesn’t allow for different visibility of the data based on user roles.
Azure Advanced Threat Protection (ATP): This is for detecting malicious behavior, not for data masking.
Transparent Data Encryption (TDE): This encrypts data at rest, but does not apply specific policies based on user access or perform masking.
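For illustration, masking is defined per column with T-SQL, and unmasking is granted per principal. A minimal sketch executed through pyodbc, assuming a Customers table with a CreditCard column and a database role named HelpdeskSupervisors (all hypothetical names):

```python
# Sketch: mask the credit card column so only the last four digits show,
# then grant supervisors the right to see the full value. Table, column,
# and role names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;Database=DB1;"  # placeholders
    "Uid=<admin>;Pwd=<password>;Encrypt=yes;"
)
cur = conn.cursor()

# Anyone without UNMASK permission sees e.g. XXXX-XXXX-XXXX-1234.
cur.execute("""
    ALTER TABLE dbo.Customers
    ALTER COLUMN CreditCard ADD MASKED
    WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');
""")

# Supervisors see the full number; operators keep the masked view.
cur.execute("GRANT UNMASK TO HelpdeskSupervisors;")
conn.commit()
```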
Credit Rating requirement:
Always Encrypted:
Why it’s correct: Always Encrypted ensures that sensitive data is always encrypted, both at rest and in transit. The encryption keys are stored and managed in the client application and are not accessible to database administrators. This satisfies the requirement that the column must never appear in plain text in the database system, and it is only decrypted in the client application.
Why not others:
Azure Advanced Threat Protection (ATP): It doesn’t encrypt or mask the data. It is meant for threat detection.
Dynamic data masking: Dynamic data masking only masks the data for specific users, but it does not encrypt the data.
Transparent Data Encryption (TDE): TDE encrypts data at rest, but it does not encrypt data in transit or protect against database administrators viewing the unencrypted data.
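On the client side, Always Encrypted is enabled through the driver, which is where decryption happens. A minimal pyodbc sketch, assuming the client can reach the column master key; the server name, credentials, and the CreditRating table/column names are placeholder assumptions:

```python
# Sketch: with ColumnEncryption=Enabled, the ODBC driver decrypts the
# Credit Rating column on the client; the database only sees ciphertext.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;Database=DB1;"  # placeholders
    "Uid=<appuser>;Pwd=<password>;Encrypt=yes;"
    "ColumnEncryption=Enabled;"  # enable client-side decryption
)
row = conn.execute("SELECT CreditRating FROM dbo.Customers;").fetchone()
print(row.CreditRating)  # plaintext only here, in the client application
```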
Important Notes for the AZ-304 Exam:
Always Encrypted: Understand what it does, how it encrypts data, where the encryption keys are managed, and the purpose of this approach for security.
Dynamic Data Masking: Know the purpose and configuration of dynamic data masking and how it helps control the data that users can see.
Transparent Data Encryption (TDE): Understand that TDE is used for encrypting data at rest, but it doesn’t protect data in transit, and does not provide different views of data.
Azure Advanced Threat Protection (ATP): Know that it is used for threat detection, not for masking or encrypting data.
Data Security: Be familiar with the different data security features in Azure SQL Database.
Exam Focus: You must be able to understand a complex scenario, and pick the different Azure components that meet each requirement.
You have an Azure subscription.
Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely.
You plan to copy the files to Azure Storage.
You need to implement a storage solution for the files that meets the following requirements:
- The files must be available within 24 hours of being requested.
- Storage costs must be minimized.
Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container.
Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share.
Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share.
Requirements:
Infrequent Access: The files are rarely accessed.
24-Hour Retrieval: Files must be available within 24 hours of a request.
Cost Minimization: Storage costs must be minimized.
5 TB: Size of data to be stored.
On-Premises Data: Data currently located on a file server.
Correct Solutions:
Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier.
Explanation:
Create a general-purpose v2 storage account that is configured for the Hot default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier:
Why it’s correct:
Archive Access Tier: Because every file is explicitly set to the Archive tier, the account’s default access tier is irrelevant to these files. Archive gives the lowest storage cost, and standard rehydration completes well within the 24-hour availability requirement.
General Purpose v2: This is the recommended storage account type for most scenarios and supports blob access tiers.
Blob Container: Blob storage is the correct place to store a large number of archival files in Azure.
Create an Azure Blob storage account that is configured for the Cool default access tier. Create a blob container, copy the files to the blob container, and set each file to the Archive access tier:
Why it’s correct:
Archive Access Tier: As above, each file is set to the Archive tier, which minimizes storage costs while keeping the files retrievable within 24 hours through rehydration.
Azure Blob Storage: This storage account type is optimized for blob storage and supports access tiers.
Blob Container: Blob storage is the correct place to store a large number of archival files in Azure.
Why not others:
Create a general-purpose v1 storage account. Create a blob container and copy the files to the blob container: General-purpose v1 accounts do not support blob access tiers, so the files cannot be moved to the Archive tier and storage costs are not minimized.
Create a general-purpose v1 storage account. Create a file share in the storage account and copy the files to the file share: Azure file shares do not support the Archive tier and are not cost-effective for rarely accessed archival data.
Create a general-purpose v2 storage account that is configured for the Cool default access tier. Create a file share in the storage account and copy the files to the file share: File shares do not support the Archive tier, so storage costs are not minimized.
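For illustration, setting a blob to the Archive tier is a per-blob operation in the azure-storage-blob SDK. A minimal sketch; the connection string, container, and file names are placeholders:

```python
# Sketch: upload a file to a blob container, then set it to the Archive tier.
# Connection string, container, and file names are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("companyfiles")

with open("report.docx", "rb") as data:
    container.upload_blob(name="report.docx", data=data)

blob = container.get_blob_client("report.docx")
blob.set_standard_blob_tier("Archive")  # lowest cost; rehydration needed before reads
```

Reading an archived blob later requires rehydrating it to Hot or Cool first, which is why the 24-hour availability requirement is the deciding factor here.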
Important Notes for the AZ-304 Exam:
Azure Storage Access Tiers: Be very familiar with the different access tiers: Hot, Cool, and Archive. Know their use cases, costs, and retrieval time implications.
Storage Account Types: Understand the differences between general-purpose v1, v2, and blob storage accounts, and when to use each.
Blob Storage: Know how to store data in blob storage using containers.
File Shares: Understand how Azure file shares are used. They are not designed for storing large amounts of data for archival.
Cost Minimization: The exam often emphasizes cost-effective solutions. Know the pricing implications of different Azure services and tiers.
Exam Focus: Be sure to read the full requirement to choose the correct service and tier combination.
HOTSPOT
You have an existing implementation of Microsoft SQL Server Integration Services (SSIS) packages stored in an SSISDB catalog on your on-premises network. The on-premises network does not have hybrid connectivity to Azure by using Site-to-Site VPN or ExpressRoute.
You want to migrate the packages to Azure Data Factory.
You need to recommend a solution that facilitates the migration while minimizing changes to the existing packages. The solution must minimize costs.
What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Store the SSISDB catalog by using:
Azure SQL Database
Azure Synapse Analytics
SQL Server on an Azure virtual machine
SQL Server on an on-premises computer
Implement a runtime engine for
package execution by using:
Self-hosted integration runtime only
Azure-SQL Server Integration Services Integration Runtime (IR) only
Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime
Requirements:
Existing SSIS Packages: The packages are stored in an SSISDB catalog on-premises.
Migrate to ADF: The migration target is Azure Data Factory.
Minimize Changes: The solution should minimize changes to the existing SSIS packages.
Minimize Costs: The solution should be cost-effective.
No connectivity: There is no hybrid connectivity from the on-premises environment to Azure.
Answer Area:
Store the SSISDB catalog by using:
Azure SQL Database
Implement a runtime engine for package execution by using:
Azure-SQL Server Integration Services Integration Runtime (IR) only
Explanation:
Store the SSISDB catalog by using:
Azure SQL Database:
Why it’s correct: To migrate SSIS packages to Azure Data Factory, the SSISDB catalog needs to be stored in Azure. Azure SQL Database is the recommended and supported method of storing the SSISDB catalog when you are using the Azure SSIS Integration Runtime in ADF.
Why not others:
Azure Synapse Analytics: While Synapse Analytics also supports SQL functionality, it is not the recommended platform to host the SSISDB.
SQL Server on an Azure virtual machine: While SQL Server on a VM would work, it is an IaaS solution, which requires additional management overhead and is not as cost-effective as the PaaS Azure SQL Database.
SQL Server on an on-premises computer: The SSISDB must be in Azure to be used by the Azure-SSIS Integration Runtime.
Implement a runtime engine for package execution by using:
Azure-SQL Server Integration Services Integration Runtime (IR) only:
Why it’s correct: The Azure-SSIS Integration Runtime is a fully managed service for executing SSIS packages in Azure. Because there is no hybrid network connectivity, the packages must run entirely in Azure, and the Azure-SSIS IR is the engine that runs the migrated packages there.
Why not others:
Self-hosted integration runtime only: A self-hosted integration runtime requires hybrid network connectivity to Azure. Because there is no VPN or ExpressRoute, this is not an option.
Azure-SQL Server Integration Services Integration Runtime and self-hosted integration runtime: The self-hosted integration runtime is unnecessary in this scenario because there is no on-premises resource to connect to.
Important Notes for the AZ-304 Exam:
Azure Data Factory: Be very familiar with ADF, its core concepts, and how to execute SSIS packages.
Azure SSIS IR: Know the purpose of an Azure SSIS Integration Runtime and how to set it up. Understand that it is used when running SSIS packages in Azure.
SSISDB in Azure: Understand how the SSISDB catalog is managed and stored in Azure when migrating from an on-prem environment.
Self-Hosted IR: Understand when the self-hosted IR is required and why it is not the appropriate answer for this specific scenario.
Hybrid Connectivity: Understand how hybrid connectivity affects the choice of integration runtime.
Cost Minimization: Know how to minimize costs by choosing the appropriate services (PaaS over IaaS).
Exam Focus: The exam emphasizes choosing the most appropriate solution while minimizing effort and cost.
You use Azure virtual machines to run a custom application that uses an Azure SQL database on the back end.
The IT department at your company recently enabled forced tunneling. Since the configuration change, developers have noticed degraded performance when they access the database.
You need to recommend a solution to minimize latency when accessing the database. The solution must minimize costs.
What should you include in the recommendation?
Azure SQL Database Managed instance
Azure virtual machines that run Microsoft SQL Server servers
Always On availability groups
virtual network (VNET) service endpoint
Understanding Forced Tunneling:
Forced tunneling in Azure redirects all Internet-bound traffic from a subnet to your on-premises network (typically through a VPN or ExpressRoute connection, or a network virtual appliance) instead of sending it directly to the Internet. This increases latency for traffic to Azure services, because that traffic is hairpinned through the tunnel rather than taking a direct path.
Requirements:
Azure SQL Database: Custom app on Azure VMs uses an Azure SQL database.
Forced Tunneling: Forced tunneling is enabled, causing performance degradation.
Minimize Latency: Minimize the latency when accessing the database.
Minimize Costs: The solution should be cost-effective.
Recommended Solution:
virtual network (VNET) service endpoint
Explanation:
Virtual Network Service Endpoints:
Why it’s the best fit: VNet service endpoints give a subnet a direct, optimized route to specific Azure services over the Azure backbone. By enabling a Microsoft.Sql service endpoint on the subnet, traffic from the Azure VMs to the database bypasses the forced tunnel and goes directly over the Azure backbone, which significantly reduces latency at no extra cost (a configuration sketch follows the list below).
Why not others:
Azure SQL Database Managed Instance: While Managed Instance is a good choice for many SQL scenarios, it is not the ideal solution for this problem. It does not help with the forced tunneling, and it also does not minimize cost since it is a more expensive offering.
Azure virtual machines that run Microsoft SQL Server servers: Moving the database to a VM in IaaS will not fix the problem. It will not address the latency issues created by the forced tunneling.
Always On availability groups: This helps with HA and DR, but it does not help with the latency issues caused by the forced tunneling. Also, it would add significant costs to the deployment.
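As the sketch promised above: a hedged example of enabling the Microsoft.Sql service endpoint on the VMs' subnet with the azure-mgmt-network SDK, using dict-style parameters; the subscription ID, resource names, and address prefix are placeholders:

```python
# Sketch: enable a Microsoft.Sql service endpoint on the VMs' subnet so
# database traffic bypasses the forced tunnel. Names and prefix are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.subnets.begin_create_or_update(
    "rg-app",            # resource group (placeholder)
    "vnet-app",          # virtual network (placeholder)
    "snet-vms",          # subnet hosting the VMs (placeholder)
    {
        "address_prefix": "10.0.1.0/24",
        "service_endpoints": [{"service": "Microsoft.Sql"}],
    },
).result()
```

A matching virtual network rule on the logical SQL server is also required so the database accepts traffic from that subnet.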
Important Notes for the AZ-304 Exam:
Virtual Network Service Endpoints: Understand the benefits of using service endpoints.
Forced Tunneling: Know what forced tunneling is and how it can impact traffic flow.
Cost Minimization: Know the different ways to minimize costs when architecting a solution.
Network Performance: Understand the different ways to diagnose and improve performance when dealing with Azure network configurations.
Azure SQL: Know the different deployment options for Azure SQL.
Exam Focus: The exam will often require you to select the most appropriate solution that meets all of the requirements.
You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant. The subscription contains 10 resource groups, one for each department at your company.
Each department has a specific spending limit for its Azure resources.
You need to ensure that when a department reaches its spending limit, the compute resources of the department shut down automatically.
Which two features should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Azure Logic Apps
Azure Monitor alerts
the spending limit of an Azure account
Cost Management budgets
Azure Log Analytics alerts
Requirements:
Departmental Limits: Each department has a specific spending limit for its Azure resources.
Resource Shutdown: Compute resources must shut down automatically when the spending limit is reached.
Correct Features:
Cost Management budgets
Azure Logic Apps
Explanation:
Cost Management budgets:
Why it’s correct: Cost Management budgets let you define a spending limit for a specific scope (resource group, subscription, or management group). When actual spend reaches a budget threshold, alerts fire and can trigger actions. Budgets are the mechanism for monitoring and alerting on cost.
Why not others (by itself): A budget cannot stop resources on its own; it is a monitoring and alerting mechanism and needs another service to take action.
Azure Logic Apps:
Why it’s correct: Azure Logic Apps can be triggered by a budget alert. In the logic app, you can add actions that automatically shut down the compute resources. For example, you can use the Azure Resource Management connector to stop virtual machines.
Why not others (by itself): Logic apps require a trigger to start. Therefore, a budget alert must be configured.
Why not others:
Azure Monitor alerts: Azure Monitor alerts target platform metrics and logs; they cannot evaluate departmental budgets or shut down resources on their own.
the spending limit of an Azure account: An account-level spending limit applies to the whole account; it offers no per-resource-group control and no automated shutdown of resources.
Azure Log Analytics alerts: Log Analytics is a great way to analyze logs, but it does not work with cost alerts.
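For illustration, the action end of this pairing can be very small. Here is a hedged sketch of what the Logic App ultimately triggers, written with the azure-mgmt-compute SDK (in practice the Logic App would invoke the same Deallocate operation through its Azure Resource Manager connector); the subscription ID and resource group are placeholders:

```python
# Sketch: deallocate every VM in a department's resource group once the
# budget alert fires. Subscription ID and resource group are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
resource_group = "rg-finance"  # the department that hit its budget (placeholder)

for vm in compute.virtual_machines.list(resource_group):
    # Deallocate (not just power off) so compute charges stop accruing.
    compute.virtual_machines.begin_deallocate(resource_group, vm.name).result()
```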
Important Notes for the AZ-304 Exam:
Cost Management Budgets: Be very familiar with Cost Management budgets and how they can be used to control spending, and know that they are the mechanism that you should use for cost alerts.
Azure Logic Apps: Know how to use Logic Apps to automate actions based on triggers, and how they integrate with Azure Management connectors.
Automated Actions: Understand that Logic Apps can be triggered by alerts and can be used to perform actions, such as shutting down resources.
Cost Control: Be familiar with the best practices for cost control and optimization in Azure.
Alerts: Know the difference between cost alerts and metrics alerts.
Exam Focus: Read the requirement carefully and know which service performs which function: a budget raises the alert when spending reaches the threshold, and a Logic App automates the action when that alert fires.
HOTSPOT
You configure OAuth2 authorization in API Management as shown in the exhibit.
Add OAuth2 service
Display name: (Empty field)
Id: (Empty field)
Description: (Empty field)
Client registration page URL: https://contoso.com/register
Authorization grant types:
Authorization code: Enabled
Implicit: Disabled
Resource owner password: Disabled
Client credentials: Disabled
Authorization endpoint URL: https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize
Support state parameter: Disabled
Authorization Request method
GET: Enabled
POST: Disabled
Token endpoint URL: (Empty field)
Additional body parameters: (Empty field)
Button: Create
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
The selected authorization grant type is for
Background services
Headless device authentication
Single page applications
Web applications
To enable custom data in the grant flow, select
Client credentials
Implicit
Resource owner password
Support state parameter
OAuth2 Configuration Summary:
Authorization Grant Types: The configuration shows the “Authorization code” grant type as the only one enabled.
Authorization Endpoint URL: This is set to Microsoft’s OAuth2 authorization endpoint for the contoso.onmicrosoft.com tenant.
Other Settings: Various other settings related to authorization and token endpoints are displayed.
Answer Area:
The selected authorization grant type is for:
Web applications
To enable custom data in the grant flow, select
Support state parameter
Explanation:
The selected authorization grant type is for:
Web applications:
Why it’s correct: The authorization code grant type is the most secure and recommended method to obtain access tokens for web applications. In this flow the client (web app) first gets an authorization code from the authorization server, and then uses it to obtain an access token.
Why not others:
Background services: Background services (also known as daemon apps) typically use the client credentials flow, which is not enabled in this configuration.
Headless device authentication: Headless devices often use the device code flow, which is not a grant type present here.
Single-page applications: Single-page applications (SPAs) can use the authorization code flow, but often use the implicit grant type, which is disabled in this configuration.
To enable custom data in the grant flow, select:
Support state parameter:
Why it’s correct: The “Support state parameter” setting enables passing an opaque value in the authorization request that the authorization server returns unchanged along with the authorization code. This value can carry custom data that needs to round-trip through the authorization flow.
Why not others:
Client credentials: This is for service-to-service authentication without a user present.
Implicit: This is an older, less secure grant type for single-page applications. It does not enable passing custom data.
Resource owner password: This is a less secure grant type that should be avoided in most scenarios. It also does not enable passing custom data.
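To make the mechanism concrete, here is a small sketch that builds the authorization request for the endpoint shown in the exhibit, round-tripping application data through state. The client ID, redirect URI, and payload are placeholders; in production, state should always include an unguessable anti-CSRF value, as below:

```python
# Sketch: build an authorization-code request whose state parameter carries
# custom data. Client ID, redirect URI, and the payload are placeholders.
import base64
import json
import secrets
from urllib.parse import urlencode

AUTHORIZE_URL = ("https://login.microsoftonline.com/"
                 "contoso.onmicrosoft.com/oauth2/v2.0/authorize")

custom_data = {"cart_id": "12345", "csrf": secrets.token_urlsafe(16)}
state = base64.urlsafe_b64encode(json.dumps(custom_data).encode()).decode()

params = {
    "client_id": "<client-id>",                      # placeholder
    "response_type": "code",                         # authorization code grant
    "redirect_uri": "https://contoso.com/callback",  # placeholder
    "scope": "openid profile",
    "state": state,  # returned unchanged with the authorization code
}
print(f"{AUTHORIZE_URL}?{urlencode(params)}")
```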
Important Notes for the AZ-304 Exam:
OAuth 2.0 Grant Types: Be very familiar with the different OAuth 2.0 grant types:
Authorization Code
Implicit
Client Credentials
Resource Owner Password
Device Code
API Management OAuth2 Settings: Understand how to configure OAuth 2.0 settings in Azure API Management.
“State” Parameter: Know the importance of the “state” parameter in OAuth flows and how it helps prevent CSRF attacks. Understand how this can be used to pass custom data.
API Security: Know how to properly secure APIs with OAuth 2.0.
Exam Focus: Be sure to select the answer based on a close inspection of the provided details.
You are designing an order processing system in Azure that will contain the Azure resources shown in the following table.
| Name | Type | Purpose |
|---|---|---|
| App1 | Web app | Processes customer orders |
| Function1 | Function | Checks product availability at vendor 1 |
| Function2 | Function | Checks product availability at vendor 2 |
| storage1 | Storage account | Stores order processing logs |
The order processing system will have the following transaction flow:
✑ A customer will place an order by using App1.
✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.
✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.
✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.
✑ All the steps of the transaction will be logged to storage1.
Which type of resource should you recommend for the integration component?
an Azure Data Factory pipeline
an Azure Service Bus queue
an Azure Event Grid domain
an Azure Event Hubs capture
Requirements:
Message Processing: A component is needed to process messages generated by App1.
Conditional Triggering: The component must trigger either Function1 or Function2 based on the order type.
Logging: All steps of the transaction must be logged in storage1.
Recommended Resource:
an Azure Service Bus queue
Explanation:
Azure Service Bus queue:
Why it’s the best fit:
Message Broker: Service Bus is a reliable message broker that can decouple components in your system, and provide a way for them to communicate asynchronously.
Message Routing and Filtering: Service Bus queues and topics provide mechanisms for message routing and filtering. You can configure the service bus to send messages from App1 to different queues or topics, and then have Function1 and Function2 subscribe to those queues, based on the different order types.
Reliable Messaging: Service Bus ensures reliable message delivery, even if a function fails.
Logging: By integrating the queue with Logic Apps, you can add steps in order to log the activity in storage1.
Why not others:
an Azure Data Factory pipeline: Data Factory is for data integration, ETL, and data transformation; it is not suited to processing and routing individual messages, and it is not designed to trigger functions per message.
an Azure Event Grid domain: Event Grid is designed for reactive, event-based fan-out, not for brokered message workflows; it does not provide the queued, guaranteed-delivery semantics of Service Bus.
an Azure Event Hubs capture: Event Hubs is built for high-throughput ingestion of event streams; capture writes those streams to storage and is not intended for message routing or per-message guaranteed delivery.
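As an illustration of the routing idea, the order type can travel as a message property, which a topic subscription rule (or the receiving function binding) can filter on. A minimal sending sketch with the azure-servicebus SDK; the connection string, queue name, and property values are placeholders:

```python
# Sketch: App1 publishes an availability-check message; the order type rides
# along as an application property used for routing. Names are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<service-bus-connection-string>"  # placeholder

with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_sender("order-checks") as sender:
        message = ServiceBusMessage(
            '{"orderId": "12345", "product": "widget"}',
            application_properties={"vendor": "vendor1"},  # routes to Function1
        )
        sender.send_messages(message)
```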
Important Notes for the AZ-304 Exam:
Azure Service Bus: Be very familiar with Service Bus, including queues and topics, the different delivery guarantees, and its use cases for reliable message queuing and routing.
Message Brokers: Understand the purpose of message brokers, decoupling systems, and asynchronous processing.
Azure Functions Integration: Know how Azure Functions can be triggered by messages from Service Bus queues or topics.
Event-Driven Architectures: Understand the difference between messaging and event-driven architectures.
Data Integration: Know the use cases for Azure Data Factory.
Exam Focus: Carefully consider the specific requirements of the problem and select the component that best fits those requirements.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.
Does this meet the goal?
Yes
No
Goal:
Deploy Azure App Service instances and Azure SQL databases simultaneously.
App Service instances must be deployed only to specific Azure regions.
Resources for the App Service instances must reside in the same region.
Proposed Solution:
Create resource groups based on locations.
Implement resource locks on the resource groups.
Analysis:
Resource Groups Based on Location:
Creating resource groups based on locations is a good practice for organizing resources. However, a resource group’s location only determines where the group’s metadata is stored; it does not restrict the regions in which resources inside the group can be deployed.
Resource Locks
Resource locks, however, only prevent accidental deletion or modification of resource groups and the resources within them. They do not control which resources are deployed or where, so a user could still deploy resources outside the required regions.
Does It Meet the Goal?: No
Explanation:
Resource Groups by Location (Organization Only): Creating resource groups by location helps organize resources, but it does not guarantee that the resources placed in them are deployed to the matching region.
Resource Locks: These do not solve the region requirement, because a resource can still be created in any region.
Missing Enforcement: The solution lacks any mechanism to enforce that resources are deployed only to the approved Azure regions. This is a regulatory requirement, so organizing resource groups is not enough; an Azure Policy with an allowed-locations rule would provide the enforcement.
No Region Enforcement: Resource locks prevent accidental deletion or modification of resources, but they do not restrict deployments to specific regions.
Correct Answer:
No
Important Notes for the AZ-304 Exam:
Resource Groups: Understand the purpose and use of resource groups.
Resource Locks: Know the purpose and limitations of resource locks.
Regulatory Requirements: Recognize that solutions must enforce compliance requirements. This is a key element of many questions.
Enforcement Mechanisms: Look for mechanisms that enforce policies instead of simply organizing resources.
Exam Focus: Read the proposed solution and verify if it truly meets the goal. If any part of the solution does not achieve the goal, then the answer is “No”.
You need to recommend a data storage solution that meets the following requirements:
- Ensures that applications can access the data by using a REST connection
- Hosts 20 independent tables of varying sizes and usage patterns
- Automatically replicates the data to a second Azure region
- Minimizes costs
What should you recommend?
an Azure SQL Database that uses active geo-replication
tables in an Azure Storage account that use geo-redundant storage (GRS)
tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
an Azure SQL Database elastic database pool that uses active geo-replication
Requirements:
REST API Access: The data must be accessible through a REST interface.
Independent Tables: The solution must support 20 independent tables of different sizes and usage patterns.
Automatic Geo-Replication: The data must be automatically replicated to a secondary Azure region.
Minimize Costs: The solution should be cost-effective.
Recommended Solution:
Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
Explanation:
Azure Storage Account with RA-GRS Tables:
REST Access: Azure Storage tables are directly accessible using a REST API, which is a fundamental part of their design.
Independent Tables: A single Azure Storage account can hold many independent tables, meeting the 20-table requirement.
Automatic Geo-Replication (RA-GRS): RA-GRS ensures that the data is replicated to a secondary region, and provides read access to that secondary location. This satisfies the HA and geo-redundancy requirements.
Minimize Cost: Azure Storage tables are designed to handle different patterns and are cost effective compared to SQL options.
Why not others:
Azure SQL Database with active geo-replication: While it provides strong SQL capabilities and geo-replication, SQL databases are more costly for simple table storage and carry higher operational overhead. Azure SQL Database also exposes data over TDS/SQL rather than a REST interface.
Azure SQL Database elastic database pool with active geo-replication: Same reasons as above, but with the added complication of an elastic pool, which is unnecessary for the stated requirements and would add even more costs.
Tables in an Azure Storage account that use geo-redundant storage (GRS): This would meet the geo-replication requirements but it would not provide the ability to read from the secondary location, and so is not as good a choice as RA-GRS.
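To make the REST and RA-GRS points concrete: tables are queried over plain HTTPS, and RA-GRS exposes a read-only secondary endpoint with a -secondary suffix on the account name. A small sketch using an SAS token; the account, table, and SAS values are placeholders:

```python
# Sketch: read entities from an Azure Storage table over REST, falling back
# to the RA-GRS secondary endpoint. Account, table, and SAS are placeholders.
import requests

account = "mystorageacct"   # placeholder
table = "Customers"         # placeholder
sas = "?sv=<sas-token>"     # placeholder SAS token with table read rights

primary = f"https://{account}.table.core.windows.net/{table}(){sas}"
secondary = f"https://{account}-secondary.table.core.windows.net/{table}(){sas}"

headers = {"Accept": "application/json;odata=nometadata"}
try:
    resp = requests.get(primary, headers=headers, timeout=10)
    resp.raise_for_status()
except requests.RequestException:
    # RA-GRS: the read-only secondary stays available if the primary is not.
    resp = requests.get(secondary, headers=headers, timeout=10)

print(resp.json()["value"])  # list of entities
```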
Important Notes for the AZ-304 Exam:
Azure Storage Tables: Know what they are designed for and their features (scalability, cost-effectiveness, REST API access). Be able to explain where they are appropriate.
Geo-Redundancy: Understand the differences between GRS, RA-GRS and how they impact performance, availability and cost.
Cost-Effective Solutions: The exam often asks for the most cost-effective solution. Be aware of the pricing models of different Azure services.
SQL Database Use Cases: Understand when to use SQL DBs and when other options (like Table storage) are more appropriate. SQL DBs are better suited for complex queries, transactions, and relational data models.
REST API Access: Know which Azure services offer a REST interface for data access and when it might be required.
Exam Technique: Ensure you fully read the requirements, so you don’t pick a more expensive or complex solution than is needed.
HOTSPOT
Your company has two on-premises sites in New York and Los Angeles and Azure virtual networks in the East US Azure region and the West US Azure region. Each on-premises site has Azure ExpressRoute circuits to both regions.
You need to recommend a solution that meets the following requirements:
✑ Outbound traffic to the Internet from workloads hosted on the virtual networks must be routed through the closest available on-premises site.
✑ If an on-premises site fails, traffic from the workloads on the virtual networks to the Internet must reroute automatically to the other site.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Routing from the virtual networks to
the on-premises locations must be
configured by using:
Azure default routes
Border Gateway Protocol (BGP)
User-defined routes
The automatic routing configuration
following a failover must be
handled by using:
Border Gateway Protocol (BGP)
Hot Standby Routing Protocol (HSRP)
Virtual Router Redundancy Protocol (VRRP)
Correct Answers and Why
Routing from the virtual networks to the on-premises locations must be configured by using:
Border Gateway Protocol (BGP)
Why?
ExpressRoute Standard: ExpressRoute relies on BGP for exchanging routes between your on-premises networks and Azure virtual networks. It’s the fundamental routing protocol for this type of connectivity.
Dynamic Routing: BGP allows for dynamic route learning, meaning routes are automatically adjusted based on network changes (like a site going down). This is essential for the failover requirement.
Path Selection: BGP allows for attributes like Local Preference to choose the best path. The path to the nearest on-prem location can be preferred by setting a higher local preference.
Why Not the Others?
Azure Default Routes: These routes are for basic internal Azure connectivity and internet access within Azure. They don’t handle routing to on-premises networks over ExpressRoute.
User-defined routes (UDRs): While UDRs can force traffic through a specific path, they are static and do not fail over automatically without manual intervention, making them unsuitable here.
The automatic routing configuration following a failover must be handled by using:
Border Gateway Protocol (BGP)
Why?
BGP Convergence: BGP’s inherent nature is to dynamically adapt to network changes. If an on-premises site or an ExpressRoute path becomes unavailable, BGP automatically detects this and withdraws routes from the failed path.
Automatic Rerouting: BGP then advertises the available paths, leading to the rerouting of traffic through the remaining healthy site, achieving the automatic failover requirement.
Why Not the Others?
Hot Standby Routing Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP): These protocols are used for first-hop redundancy on local networks, which is not applicable in Azure environments or to ExpressRoute configurations. They do not facilitate the end-to-end routing and failover required.
Important Notes for the AZ-304 Exam
ExpressRoute Routing is BGP-Based: Understand that BGP is the routing protocol for ExpressRoute. If a question involves routing over ExpressRoute, BGP is highly likely to be involved.
BGP for Dynamic Routing and Failover: Know that BGP not only provides routing but also provides failover capabilities through its dynamic path selection and convergence features.
Local Preference: Understand how BGP attributes like Local Preference can be used to influence path selection. This is key for scenarios where you want to force a primary path and have a secondary backup path.
Azure Networking Core Concepts: You should have a solid understanding of:
Virtual Networks: How they’re used, subnetting, IP addressing.
Route Tables: Both default and User-Defined, and how they control traffic routing.
ExpressRoute: The different connection options and associated routing implications.
Dynamic vs. Static Routing: Know the difference between dynamic routing (BGP) and static routing (User Defined Routes) and where they are best suited.
Hybrid Networking: Be prepared to deal with hybrid scenarios that connect on-premises and Azure resources.
Failover: Be aware of the failover options and be able to choose the best solutions for different circumstances. BGP is the most common solution for failover between on-prem and Azure.
HSRP and VRRP Applicability: These are first hop redundancy protocols used locally and are not suitable for Azure cloud environments. They should not be suggested for Azure routing scenarios.
You have an Azure subscription. The subscription contains an app that is hosted in the East US, Central Europe, and East Asia regions. You need to recommend a data-tier solution for the app.
The solution must meet the following requirements:
- Support multiple consistency levels.
- Be able to store at least 1 TB of data.
- Be able to perform read and write operations in the Azure region that is local to the app instance.
What should you include in the recommendation?
a Microsoft SQL Server Always On availability group on Azure virtual machines
an Azure Cosmos DB database
an Azure SQL database in an elastic pool
Azure Table storage that uses geo-redundant storage (GRS) replication
Understanding the Requirements
Global Distribution: The application is deployed in multiple regions (East US, Central Europe, East Asia), meaning the data layer also needs to be globally accessible.
Multiple Consistency Levels: The solution must support different levels of data consistency (e.g., strong, eventual).
Scalability: It needs to store at least 1 TB of data.
Local Read/Write: Each application instance should be able to perform read and write operations in its local region for performance.
Evaluating the Options
a) Microsoft SQL Server Always On Availability Group on Azure Virtual Machines:
Pros:
Offers strong consistency.
Can store large amounts of data (1 TB+).
Cons:
Complex to manage: Requires setting up and maintaining virtual machines, clustering, and replication manually.
Not designed for low-latency multi-regional access: While replication is possible, it is typically not optimized for very low-latency access from every region at once.
Does not inherently offer multiple consistency levels.
Verdict: Not the best fit. It’s too complex and doesn’t easily meet the multi-region, multiple consistency requirement.
b) An Azure Cosmos DB database:
Pros:
Globally Distributed: Designed for multi-region deployments and provides low-latency reads/writes in local regions.
Multiple Consistency Levels: Supports various consistency levels, from strong to eventual, that can be set per request.
Scalable: Can easily store 1 TB+ of data and scale as needed.
Fully Managed: Much easier to manage than SQL Server on VMs.
Cons:
Uses a different data model and database design approach than relational solutions.
Verdict: Excellent fit. It directly addresses all the requirements.
c) An Azure SQL Database in an elastic pool:
Pros:
Scalable in terms of performance and resources.
Familiar relational database platform.
Cons:
Not inherently multi-regional: While you can do active geo-replication, it has limitations with low-latency reads from remote regions.
Limited consistency options: Primarily provides strong consistency, not multiple levels.
Not as horizontally scalable: It’s designed for relational data, not the more flexible scalability needed for a globally distributed app.
Does not provide local read/write in each region.
Verdict: Not the best choice. It doesn’t meet the multi-region low-latency and consistency requirements.
d) Azure Table storage that uses geo-redundant storage (GRS) replication:
Pros:
Highly scalable.
Relatively inexpensive.
GRS provides data replication.
Cons:
No multi-master writes: With GRS, writes always go to the primary region, so app instances in other regions cannot write locally, and reading from the secondary requires RA-GRS.
Limited consistency: Primarily eventual consistency, not the range required by the problem statement.
No SQL: Designed for non-relational data storage only.
Verdict: Not suitable. Lacks multiple consistency options, multi-master writes, and suitable performance for low latency reads.
Recommendation
Based on the analysis, the best solution is:
An Azure Cosmos DB database
Explanation
Azure Cosmos DB is purpose-built for globally distributed applications. It offers:
Global Distribution and Low Latency: Data can be replicated to multiple Azure regions, allowing applications to read and write data in their local region with low latency.
Multiple Consistency Levels: You can fine-tune the consistency level per request. Options range from strong consistency (data is guaranteed to be the same everywhere) to eventual consistency (data will eventually be consistent across regions).
Scalability: Cosmos DB can easily store 1 TB+ of data and automatically scales to handle increased traffic.
Ease of Management: As a fully managed service, it reduces operational overhead.
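As a brief sketch of both capabilities, the azure-cosmos Python SDK lets you choose the consistency level per client (it can also be set per request) and prefer the region closest to the app instance. The endpoint, key, database, container, and partition key names are placeholder assumptions:

```python
# Sketch: connect to a multi-region Cosmos DB account, preferring the region
# local to this app instance and choosing the consistency level per client.
# Endpoint, key, database/container names, and the /pk partition key path
# are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",  # placeholder endpoint
    credential="<account-key>",                    # placeholder key
    consistency_level="Session",       # one of: Strong, BoundedStaleness,
                                       # Session, ConsistentPrefix, Eventual
    preferred_locations=["East US"],   # read from the closest replica
)

container = client.get_database_client("orders").get_container_client("items")
container.upsert_item({"id": "1", "pk": "orders", "status": "placed"})
```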
Your company purchases an app named App1.
You plan to run App1 on seven Azure virtual machines in an Availability Set. The number of fault domains is set to 3. The number of update domains is set to 20.
You need to identify how many App1 instances will remain available during a period of planned maintenance.
How many App1 instances should you identify?
1
2
6
7
Understanding Availability Sets
Purpose: Availability Sets are used to protect your applications from planned and unplanned downtime within an Azure datacenter.
Fault Domains (FDs): Fault Domains define groups of virtual machines that share a common power source and network switch. In the event of a power or switch failure, VMs in different FDs will be affected independently of each other.
Update Domains (UDs): Update Domains define groups of virtual machines that can be rebooted simultaneously during an Azure maintenance window. Azure applies planned maintenance to UDs one at a time.
The Key Rule
During planned maintenance, Azure updates VMs within a single Update Domain at a time. Azure moves to the next UD only after completing an update to the current UD. This means that while an update is being done on one UD, the other UDs are not affected.
Analyzing the Scenario
7 VMs in total
3 Fault Domains: This is important for unplanned maintenance, but doesn’t directly impact our answer here.
20 Update Domains: This is the important factor for planned maintenance.
It does not mean there are 20 physical UDs in use. It means up to 20 UDs can be used; the seven VMs will therefore each land in one of seven distinct UDs within the set of 20.
Calculating Availability During Planned Maintenance
Minimum VMs per Update Domain: With 7 VMs and up to 20 UDs, each virtual machine is placed in its own update domain.
Impact of Maintenance: During a planned maintenance event, Azure updates one UD at a time, so exactly one of the 7 VMs is unavailable while its update is applied.
Available VMs: At any given moment, while maintenance runs on a single UD, the VMs in all other UDs remain available: 7 - 1 = 6 VMs.
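The same reasoning as a tiny worked calculation (pure arithmetic, no Azure SDK involved):

```python
# Worked example: availability during planned maintenance in an availability set.
import math

vms, update_domains = 7, 20

# Azure spreads VMs round-robin across UDs; with more UDs than VMs,
# the most heavily loaded UD holds just one VM.
largest_ud = math.ceil(vms / min(vms, update_domains))

available_during_maintenance = vms - largest_ud
print(available_during_maintenance)  # -> 6
```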
Correct Answer
6
Important Notes for the AZ-304 Exam
Availability Sets vs. Virtual Machine Scale Sets: Know the difference. Availability Sets provide fault tolerance for individual VMs, while Scale Sets provide scalability and resilience for groups of identical VMs (often used for autoscaling). This question specifically used an availability set.
Fault Domains (FDs) vs. Update Domains (UDs): Be clear on the purpose of each. FDs for unplanned maintenance, UDs for planned maintenance.
Impact of UDs on Planned Maintenance: During planned maintenance, only one UD is updated at a time, ensuring that your application can remain available.
Distribution of VMs: In an availability set, Azure evenly distributes VMs across FDs and UDs.
Maximum FDs and UDs: Understand that the maximum number of FDs is 3 and UDs are 20 in Availability Sets.
Real-World Scenario: Be aware that real production workloads can have other availability and redundancy concerns and that more advanced redundancy can be achieved by using multiple availability sets in the same region or a combination of Availability sets and Availability zones.
Calculations: Be able to determine the availability of VMs during planned or unplanned maintenance based on the number of FDs and UDs as well as the number of VMs in a given configuration.
Best Practice: Best practice is to have at least 2 VMs in an availability set, and 2 availability sets in your region to provide redundancy in the event of zonal failures as well as UD / FD maintenance.
Your company has the infrastructure shown in the following table:
| Location | Resources |
|---|---|
| Azure | Azure subscription named Subscription1; 20 Azure web apps |
| On-premises datacenter | Active Directory domain; server running Azure AD Connect; Linux computer named Server1 |
The on-premises Active Directory domain syncs to Azure Active Directory (Azure AD).
Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
You plan to migrate Server1 to a virtual machine in Subscription1.
A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.
You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.
What should you include in the recommendation?
Azure AD Domain Services (Azure AD DS)
an Azure VPN gateway
the Active Directory Domain Services role on a virtual machine
Azure AD Application Proxy
Understanding the Requirements
Application (App1): Uses LDAP queries to authenticate users in the on-premises Active Directory.
Migration: Moving from an on-premises Linux server to an Azure VM.
Security Policy: VMs and services in Azure are not allowed to access the on-premises network.
Functionality: The migrated application must still be able to authenticate users.
Analyzing the Options
Azure AD Domain Services (Azure AD DS)
Pros:
Provides a managed domain controller in Azure, allowing VMs to join the domain.
Supports LDAP queries for authentication.
Independent of the on-premises network.
Synchronizes user information from Azure AD.
Fully managed, eliminating the need for maintaining domain controllers.
Cons:
Cost implications from running an additional service.
Verdict: This is the most suitable option. It meets the functional requirements without violating the security policy.
An Azure VPN Gateway
Pros:
Provides a secure connection between Azure and on-premises networks.
Cons:
Violates the security policy that prevents Azure resources from connecting to on-premises.
Would allow the VM access to the entire on-premises network (if set up using site-to-site), including AD.
Verdict: Not a valid option because it directly contradicts the security policy.
The Active Directory Domain Services role on a virtual machine
Pros:
Provides the needed domain services
Cons:
Would require setting up and managing a domain controller in Azure.
Would need to set up a VPN connection to sync with on-premises AD, which would violate the security policy.
Requires ongoing maintenance.
Verdict: Not a valid option; it adds maintenance overhead, and the connection to on-premises would violate the security policy.
Azure AD Application Proxy
Pros:
Allows external users to connect to internal resources.
Cons:
Not relevant for this use case; Application Proxy publishes internal web apps to external users and does not provide LDAP access.
Verdict: Not a good fit as it does not help with authentication for the application.
Correct Recommendation
The best solution is Azure AD Domain Services (Azure AD DS).
Explanation
LDAP Compatibility: Azure AD DS provides a managed domain service compatible with LDAP queries, which is precisely what App1 needs for user authentication.
Isolated Azure Environment: Azure AD DS is entirely contained within Azure and does not require a connection to the on-premises network. This allows you to satisfy the security policy.
Azure AD Synchronization: Azure AD DS syncs users from Azure AD, meaning users will be able to authenticate after the migration.
Ease of Use: Azure AD DS is a fully managed service so you will not need to worry about the underlying infrastructure.
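For illustration, this is roughly what App1's LDAP lookup could look like against the managed domain, using the third-party Python `ldap3` package. This is a minimal sketch: the domain name, service account, base DN, and filter are placeholders, and Azure AD DS must have secure LDAP (LDAPS) enabled for binds from outside the managed domain's virtual network.

```python
# pip install ldap3
from ldap3 import Server, Connection, ALL

# Placeholder managed-domain endpoint; Azure AD DS exposes LDAPS on port 636
# once secure LDAP is enabled.
server = Server("ldaps://aadds.contoso.com", port=636, use_ssl=True, get_info=ALL)

# Bind with a service account that was synchronized from Azure AD (placeholder).
conn = Connection(
    server,
    user="svc-app1@aadds.contoso.com",
    password="<service-account-password>",
    auto_bind=True,
)

# Verify a user's identity the way App1 does, via an LDAP query.
conn.search(
    search_base="DC=aadds,DC=contoso,DC=com",
    search_filter="(&(objectClass=user)(userPrincipalName=alice@aadds.contoso.com))",
    attributes=["displayName", "memberOf"],
)
print(conn.entries)
```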
Important Notes for the AZ-304 Exam
Azure AD DS Use Cases: Know that Azure AD DS is designed for scenarios where you need domain services (including LDAP) in Azure but cannot/should not connect to on-premises domain controllers.
Hybrid Identity: Be familiar with hybrid identity options, such as using Azure AD Connect to sync on-premises Active Directory users to Azure AD.
Security Policies: Pay close attention to security policies described in exam questions. The correct answer must satisfy every stated security requirement.
Service Selection: Be able to choose the correct Azure service based on the stated requirements of the question. For example, know when to use Azure AD DS as opposed to spinning up a domain controller in a VM.
Alternatives: You should know what other options there are that could theoretically be used, but also understand their pros and cons. For instance, you should be able to state that a VPN could facilitate the connection, but that the security policy would need to be updated.
LDAP Authentication: Understand LDAP as the core functionality for Active Directory authentication.
Fully Managed Services: Be aware of the benefits of managed services (like Azure AD DS) in reducing management overhead.
You are reviewing an Azure architecture as shown in the Architecture exhibit (Click the Architecture tab.)
Log Files → Azure Data Factory → Azure Data Lake Storage ⇄ Azure Databricks → Azure Synapse Analytics → Azure Analysis Services → Power BI
Steps:
Ingest: Log Files → Azure Data Factory
Store: Azure Data Factory → Azure Data Lake Storage
Prep and Train: Azure Data Lake Storage ⇄ Azure Databricks
Model and Serve: Azure Synapse Analytics → Azure Analysis Services
Visualize: Azure Analysis Services → Power BI
The estimated monthly costs for the architecture are shown in the Costs exhibit. (Click the Costs tab.)
| Service | Description | Cost |
| Azure Synapse Analytics | Tier: Compute-optimised Gen2, Compute: DWU 100 x 1 | US$998.88 |
| Data Factory | Azure Data Factory V2 Type, Data Pipeline Service type, | US$4,993.14 |
| Azure Analysis Services | Developer (hours), 5 Instance(s), 720 Hours | US$475.20 |
| Power BI Embedded | 1 node(s) x 1 Month, Node type: A1, 1 Virtual Core(s), | US$735.91 |
| Storage Accounts | Block Blob Storage, General Purpose V2, LRS Redundant, | US$21.84 |
| Azure Databricks | Data Analytics Workload, Premium Tier, 1 D3V2 (4 vCPU) | US$515.02 |
| Estimate total: | | US$7,739.99 |
The log files are generated by user activity on Apache web servers. The log files are in a consistent format. Approximately 1 GB of logs is generated per day. Microsoft Power BI is used to display weekly reports of the user activity.
You need to recommend a solution to minimize costs while maintaining the functionality of the architecture.
What should you recommend?
Replace Azure Data Factory with CRON jobs that use AzCopy.
Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.
Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.
Replace Azure Databricks with Azure Machine Learning.
Understanding the Existing Architecture
Data Ingestion: Log files from Apache web servers are ingested into Azure Data Lake Storage via Azure Data Factory.
Data Processing: Azure Databricks is used to prep and train the data.
Data Warehousing: Azure Synapse Analytics is used to model and serve data.
Data Visualization: Azure Analysis Services and Power BI are used for visualization.
Cost Breakdown and Bottlenecks
The cost breakdown shows the following areas as significant expenses:
Azure Data Factory: $4,993.14 (by far the most expensive item)
Azure Synapse Analytics: $998.88
Power BI Embedded: $735.91
The other items (Analysis services, Databricks, and storage) are relatively low cost.
Analyzing the Recommendations
Replace Azure Data Factory with CRON jobs that use AzCopy.
Pros:
Significant cost reduction: AzCopy is free and can be used with a simple CRON job.
Suitable for the relatively small amount of data that is being moved.
Cons:
Less feature-rich than Data Factory (no orchestration, error handling, or monitoring).
Adds management overhead as you need to create and maintain the CRON jobs.
Verdict: This is the best option. Given the small data volume, the complexity of Data Factory is overkill and the cost can be reduced dramatically.
Replace Azure Synapse Analytics with Azure SQL Database Hyperscale.
Pros:
Can be more cost effective for smaller workloads and can scale up or down easily.
Cons:
May need changes to the way the data is stored and managed.
Hyperscale is designed primarily for transactional workloads and may not be the best replacement for a data warehouse.
Verdict: Not the best option, as it may impact the architecture of the solution and the query patterns used.
Replace Azure Synapse Analytics and Azure Analysis Services with SQL Server on an Azure virtual machine.
Pros:
Could be less expensive than the managed service for small workloads.
Cons:
Significantly more management overhead, less scalable.
Would reduce the overall functionality of the solution, since multiple services would have to be consolidated onto one VM.
Would be unlikely to reduce costs: the VM, the SQL Server licenses, and the management effort would likely cost more in total.
Verdict: Not recommended. Introduces complexity and management overhead.
Replace Azure Databricks with Azure Machine Learning.
Pros:
Azure Machine Learning can also do data processing.
May be more cost efficient depending on workload.
Cons:
Azure Machine Learning is more focused on ML than on processing and preparing data.
More geared towards predictive analytics than general data processing.
May require a significant rework of the existing process.
Verdict: Not a suitable option, as it is not a like-for-like replacement.
Recommendation
The best recommendation is:
Replace Azure Data Factory with CRON jobs that use AzCopy.
Explanation
Cost Savings: The primary issue is the high cost of Azure Data Factory. Using CRON jobs and AzCopy is a simple, low-cost alternative for the relatively small volume of data being moved.
Functionality: The CRON job simply moves the data from the source location to Azure Data Lake Storage; the downstream processing steps remain the same.
Complexity: While this adds management overhead (you must create and maintain the CRON job), the simplicity of the requirements outweighs the added complexity.
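As a sketch of what the replacement could look like (the paths, destination URL, and SAS token are placeholders, and the AzCopy binary is assumed to be installed on the server), a nightly job might be as simple as:

```python
# Scheduled via a crontab entry such as:
#   0 1 * * * /usr/bin/python3 /opt/jobs/upload_logs.py
# (schedule and paths are illustrative, not from the question)
import subprocess

SOURCE = "/var/log/apache2/"
DESTINATION = "https://<account>.dfs.core.windows.net/logs/raw?<SAS-token>"

result = subprocess.run(
    ["azcopy", "copy", SOURCE, DESTINATION, "--recursive"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    # Unlike Data Factory, there is no built-in monitoring or retry,
    # so error handling must be added by hand.
    raise RuntimeError(f"AzCopy upload failed: {result.stderr}")
```

At roughly 1 GB of logs per day, this trades Data Factory's orchestration features for a near-zero-cost transfer.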
Important Notes for the AZ-304 Exam
Cost Optimization: Know that the exam may test your ability to identify cost drivers and suggest cost optimizations.
Azure Data Factory: Understand when ADF is the right tool and when a simpler tool will suffice. It is often best to use the simplest tool that still meets the requirements.
Data Transfer: Be aware of options like AzCopy for moving data in a low-cost way.
CRON jobs: Understand how CRON jobs can be used to schedule operations.
Azure Synapse Analytics: Understand how Azure Synapse Analytics can provide insights and processing power, but can also be expensive.
SQL Database Hyperscale: Understand when it is more beneficial to use Hyperscale over Synapse Analytics.
SQL Server on Azure VM: Know the use cases of where a traditional SQL server may be appropriate.
Azure Analysis Services: Know that it is designed for fast data queries and reporting through tools like Power BI, but can add significant cost.
Azure Databricks and ML: Understand the difference and which scenarios are more suited for each.
Service selection: Know how to select a service based on the requirements provided.
Simplicity: Consider solutions that may be less feature-rich, but provide simpler (and lower cost) solutions.
You have an Azure Active Directory (Azure AD) tenant.
You plan to provide users with access to shared files by using Azure Storage. The users will be provided with different levels of access to various Azure file shares based on their user account or their group membership.
You need to recommend which additional Azure services must be used to support the planned deployment.
What should you include in the recommendation?
an Azure AD enterprise application
Azure Information Protection
an Azure AD Domain Services (Azure AD DS) instance
an Azure Front Door instance
Understanding the Requirements
Azure File Shares: Using Azure Storage to host shared files.
Granular Access Control: Users need different levels of access to different file shares.
User/Group-Based Permissions: Access should be based on the user’s account and their Azure AD group memberships.
Azure AD Authentication: Users will be using their Azure AD credentials.
Analyzing the Options
An Azure AD enterprise application:
Pros:
Allows you to register an application with Azure AD for authentication and authorization.
This can be used to allow users to access other resources including file shares based on their claims (group membership).
Cons:
Would require custom logic and code development to implement access based on group membership.
Verdict: Not the correct choice. An application registration is required for service principals, but not directly for file share access.
Azure Information Protection:
Pros:
Provides information protection through labeling, classification, and encryption.
Cons:
It doesn’t directly control access to Azure file shares and does not provide role-based access control (RBAC).
Verdict: Not the correct choice. While AIP can help protect the files themselves, it’s not what is needed to control access based on the identity and group memberships of the users.
An Azure AD Domain Services (Azure AD DS) instance:
Pros:
Provides domain services in Azure.
Can be used to join Azure VMs to the managed domain.
Cons:
Not required: Azure AD authentication can be used directly, so a domain service is not needed to provide access to the file shares.
Verdict: Not the correct choice. While Azure AD DS provides a domain for Azure resources, it’s not required for this use case and would be an unnecessary complexity.
An Azure Front Door instance:
Pros:
Provides a global entry point for your web applications.
Cons:
Azure Front Door cannot be used for Azure file share access.
Verdict: Not the correct choice, as it does not help with Azure file share access.
Recommendation
None of the mentioned services directly supports the planned deployment; however, the best option is the following:
Role-Based Access Control (RBAC):
Explanation
RBAC: Role-Based Access Control (RBAC) in Azure allows you to define roles (like “Reader,” “Contributor,” “Owner”) and assign these roles to users or groups at the file share, storage account, or resource group levels.
Azure AD Identities: RBAC integrates directly with Azure AD, so you can easily grant permissions based on a user’s Azure AD account or their group memberships.
Granular Permissions: You can use RBAC to configure different permission levels for different users and groups on different file shares, meeting the stated requirements.
No Additional Services: Unlike the incorrect answers, RBAC is a core feature of Azure and does not require additional services.
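As an illustration of how such an assignment could be created programmatically, the sketch below calls the ARM REST API directly with `azure-identity` and `requests`. All identifiers (subscription, resource group, storage account, role-definition GUID, and group object ID) are placeholders; the assignment is scoped to the storage account here, but narrower scopes are possible.

```python
# pip install azure-identity requests
import uuid
import requests
from azure.identity import DefaultAzureCredential

subscription = "<subscription-id>"
# Scope of the assignment: a storage account (placeholder names).
scope = (
    f"/subscriptions/{subscription}/resourceGroups/rg-files"
    "/providers/Microsoft.Storage/storageAccounts/stshared"
)
# GUID of a built-in role, e.g. Storage File Data SMB Share Reader (placeholder).
role_definition_id = (
    f"/subscriptions/{subscription}/providers/Microsoft.Authorization"
    "/roleDefinitions/<role-definition-guid>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.put(
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization"
    f"/roleAssignments/{uuid.uuid4()}",  # role assignments are named by GUID
    params={"api-version": "2022-04-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    json={
        "properties": {
            "roleDefinitionId": role_definition_id,
            "principalId": "<azure-ad-group-object-id>",
            "principalType": "Group",
        }
    },
)
resp.raise_for_status()
```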
Important Notes for the AZ-304 Exam
Azure RBAC: Know RBAC inside and out. How to create roles, assign roles, scopes, and use in Azure.
Azure Storage Security: Understand how security works in Azure Storage (access keys, shared access signatures (SAS), and RBAC).
Azure AD Authentication: Know how Azure AD can be used to grant access to Azure resources.
Azure File Shares: Understand how they work, and their security options.
Common Use Cases: RBAC is used everywhere in Azure, so be comfortable applying it in different scenarios.
Role Creation: Know when you will need to create a custom role, and how to do so.
DRAG DROP
You are planning an Azure solution that will host production databases for a high-performance application. The solution will include the following components:
✑ Two virtual machines that will run Microsoft SQL Server 2016, will be deployed to different data centers in the same Azure region, and will be part of an Always On availability group.
✑ SQL Server data that will be backed up by using the Automated Backup feature of the SQL Server IaaS Agent Extension (SQLIaaSExtension)
You identify the storage priorities for various data types as shown in the following table.
| Data type | Storage priority |
| Operating system | Speed and availability |
| Databases and logs | Speed and availability |
| Backups | Lowest cost |
Which storage type should you recommend for each data type? To answer, drag the appropriate storage types to the correct data types. Each storage type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Storage Types
A geo-redundant storage (GRS) account
A locally-redundant storage (LRS) account
A premium managed disk
A standard managed disk
Answer Area
Operating system:
Databases and logs:
Backups:
Understanding the Requirements
High-Performance Application: The application demands high speed and availability.
SQL Server Always On: Data is critical and must be resilient and highly available.
Automated Backups: Backups are important but not as critical as the operational data.
Storage Priorities:
Operating System: Speed and availability.
Databases and Logs: Speed and availability.
Backups: Lowest cost.
Analyzing the Storage Options
A geo-redundant storage (GRS) account:
Pros:
Provides data replication across a secondary region.
Best for disaster recovery and high availability.
Cons:
Highest cost among the storage options.
Higher latency than locally redundant storage (LRS) or premium storage.
Use Case: Best for backups when recovery from a regional outage is critical, or when backups need to be available from a different location.
A locally-redundant storage (LRS) account:
Pros:
Lowest cost storage.
Cons:
Data redundancy is limited to within the same data center.
Use Case: Suitable for backups where availability is less of a concern and lowest cost is the primary priority.
A premium managed disk:
Pros:
Highest performance with SSD storage.
Designed for high IOPS and low latency.
Cons:
Highest cost.
Use Case: Ideal for operating system disks, databases, and logs for high-performance applications.
A standard managed disk:
Pros:
Lower cost than premium disks.
Cons:
Uses HDD storage, offering less performance than SSD storage.
Use Case: Suitable for less performance-sensitive workloads and backups, where cost is an important factor.
Matching Storage to Data Types
Here’s how we should match the storage types:
Operating system:
Premium managed disk is the correct option. The operating system requires high-speed disk access for good virtual machine performance.
Databases and logs:
Premium managed disk is the correct option. Databases and logs require very low latency and high IOPS; of the listed storage types, only premium disks meet these performance requirements.
Backups:
A locally-redundant storage (LRS) account is the best option. Backups have the lowest-cost priority, and LRS is the cheapest redundancy option; the Automated Backup feature of the SQL Server IaaS Agent Extension writes backups to a storage account that you specify.
Answer Area
| Data type | Storage type |
| Operating system | A premium managed disk |
| Databases and logs | A premium managed disk |
| Backups | A locally-redundant storage (LRS) account |
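For reference, these choices map onto ARM deployment properties roughly as follows (a sketch only; the names and sizes are placeholders, not part of the question):

```python
# Fragment of a VM definition: OS disk and data disks on premium SSDs.
vm_storage_profile = {
    "storageProfile": {
        "osDisk": {  # Operating system: speed and availability
            "createOption": "FromImage",
            "managedDisk": {"storageAccountType": "Premium_LRS"},
        },
        "dataDisks": [
            {  # Databases and logs: speed and availability
                "lun": 0,
                "createOption": "Empty",
                "diskSizeGB": 1024,
                "managedDisk": {"storageAccountType": "Premium_LRS"},
            }
        ],
    }
}

# Storage account that the Automated Backup feature targets.
backup_storage_account = {
    "sku": {"name": "Standard_LRS"},  # Backups: lowest cost
    "kind": "StorageV2",
}
```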
Important Notes for the AZ-304 Exam
Managed Disks vs Unmanaged Disks: Know the difference between them and be aware that managed disks are the default option and almost always recommended.
Premium SSD vs Standard HDD: Understand the use cases of Premium disks for high IOPS/low-latency and Standard for cost sensitive workloads.
Storage Redundancy Options: Understand the difference between LRS, GRS, ZRS, and how to choose the best options for availability and durability requirements.
SQL Server on Azure VMs: Know best practices for SQL Server VM deployments including storage and backup configuration.
Performance Needs: Recognize which workloads need performance (like databases and operating systems) and which can tolerate lower performance and be cost-optimized (backups).
You are developing a sales application that will contain several Azure cloud services and will handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using REST messages.
What should you include in the recommendation?
Azure Queue storage
Azure Data Lake
Azure Service Fabric
Azure Traffic Manager
Understanding the Requirements
Asynchronous Communication: Cloud services need to communicate without waiting for a response from each other.
REST Messages: Communication should be done via HTTP-based REST messages.
Transaction Information: The messages contain data related to customer orders, billing, payment, inventory, and shipping.
Multiple Cloud Services: The solution must enable several cloud services to communicate effectively.
Analyzing the Options
Azure Queue Storage:
Pros:
Asynchronous: Supports asynchronous message queuing.
HTTP-Based API: Provides a REST API for sending and receiving messages.
Scalable: Can handle high message volumes.
Simple to Use: Relatively easy to set up and use for message queuing.
Cost-Effective: One of the most cost-effective options for asynchronous messaging.
Cons:
Does not support message filtering, prioritization, or sessions.
Messages have a size limit of 64 KB, which may not suit more complex payloads.
Verdict: This is the best fit for the given requirements.
Azure Data Lake:
Pros:
Scalable storage for large data sets.
Cons:
Not designed for message queuing or asynchronous communication.
Does not have a REST based messaging API.
Verdict: Not a suitable choice as it does not provide messaging functionality.
Azure Service Fabric:
Pros:
Platform for building microservices.
Provides reliable communication patterns between services.
Cons:
More complex than needed for simple message queuing.
Not designed for simple asynchronous communication with REST messages.
Adds unnecessary operational overhead if a simple messaging system is required.
Verdict: Not suitable. Too complex for the given scenario.
Azure Traffic Manager:
Pros:
Provides traffic routing based on performance and priority.
Cons:
Does not handle messaging or asynchronous communication.
Not applicable to the scenario.
Verdict: Not a relevant option. Its functionality is outside the requirements for the question.
Recommendation
The correct recommendation is:
Azure Queue Storage
Explanation
Asynchronous Messaging: Azure Queue Storage is designed to facilitate asynchronous communication between different components of an application. Services can add messages to the queue, and other services can read those messages from the queue independently of each other.
REST API: Queue Storage exposes a REST API that allows services to interact with queues through HTTP requests.
Scalability: Azure Queue Storage can scale to accommodate a large number of messages and message senders/receivers.
Cost-Effectiveness: It is one of the most cost-effective services for asynchronous messaging on Azure.
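A minimal producer/consumer sketch with the `azure-storage-queue` SDK is shown below (the connection string and queue name are placeholders, and the queue is assumed to already exist; under the hood the SDK talks to the service's REST endpoint over HTTPS):

```python
# pip install azure-storage-queue
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<connection-string>", "orders")

# Producer: the ordering service enqueues transaction data and moves on.
queue.send_message('{"orderId": 42, "step": "billing", "amount": 19.99}')

# Consumer: the billing service reads messages on its own schedule.
for message in queue.receive_messages():
    print("billing service handling:", message.content)
    queue.delete_message(message)  # delete only after successful processing
```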
Important Notes for the AZ-304 Exam
Asynchronous Messaging: Understand when and why to use asynchronous messaging.
Azure Queue Storage: Know that it’s a great option for simple messaging, with its REST API, ease of use, scalability, and cost effectiveness.
Azure Service Bus: Be aware of when to use Azure Service Bus over queue storage, particularly if more complex features are needed such as message filtering, prioritization, sessions, or publish/subscribe.
REST API: Recognize that many Azure services use REST for API access.
Microservices: Know that services communicate with one another in a microservices environment using various methods.
Appropriate Use Cases: Focus on matching the right service with the appropriate use case.
You have 200 resource groups across 20 Azure subscriptions.
Your company’s security policy states that the security administrator must verify all assignments of the Owner role for the subscriptions and resource groups once a month. All assignments that are not approved by the security administrator must be removed automatically. The security administrator must be prompted every month to perform the verification.
What should you use to implement the security policy?
Access reviews in Identity Governance
role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Identity Secure Score in Azure Security Center
the user risk policy in Azure Active Directory (Azure AD) Identity Protection
Understanding the Requirements
Scope: 20 Azure subscriptions and 200 resource groups.
Policy: Monthly verification of Owner role assignments.
Verification: A security administrator must approve or remove role assignments.
Automation: Unapproved assignments should be automatically removed.
Monthly Reminders: Security administrator must be prompted each month for verification.
Analyzing the Options
Access reviews in Identity Governance:
Pros:
Role Assignment Review: Specifically designed for reviewing and managing role assignments, including the Owner role.
Scheduled Reviews: Can be configured to run monthly.
Automatic Removal: Supports automatic removal of assignments not approved by the reviewer.
Reviewer Reminders: Notifies designated reviewers (security administrator) when reviews are due.
Scope: Can be used for both subscriptions and resource groups.
Cons:
Requires correct configuration of the governance policy and assignments to ensure the policy is enforced.
Verdict: This is the correct option as it directly meets all the requirements.
Role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM):
Pros:
Allows for just-in-time (JIT) role elevation.
Cons:
Does not directly facilitate regular reviews of role assignments.
PIM is generally used for just-in-time, temporary access; it does not meet the requirement for recurring review and removal of assignments.
Verdict: Not suitable. Does not fulfil the requirement for monthly verification of role assignments.
Identity Secure Score in Azure Security Center:
Pros:
Provides a security score based on configurations and recommendations.
Cons:
Does not manage, monitor, or remove role assignments.
It only provides a score of your security posture; it does not take action to remove permissions.
Verdict: Not suitable. It is only used to monitor your posture.
The user risk policy in Azure Active Directory (Azure AD) Identity Protection:
Pros:
Detects and manages user risk based on suspicious activities.
Cons:
Does not manage role assignments; it addresses user-based risks, not permissions.
Not relevant to the requirement for scheduled reviews of role assignments.
Verdict: Not suitable. Not used for role assignment reviews.
Recommendation
The best solution is:
Access reviews in Identity Governance
Explanation
Designed for Role Assignment Reviews: Access reviews are specifically built for reviewing and managing user access to resources.
Scheduled Monthly Reviews: You can configure the access reviews to occur every month.
Automatic Remediation: Unapproved role assignments can be automatically removed, which fulfills the security policy requirement.
Notifications: The security administrator will be notified when the monthly review is due and will be required to take action, or the review will complete automatically.
Comprehensive Scope: Access reviews can be configured at the subscription and resource group levels.
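For orientation, the sketch below shows the general shape of a monthly, auto-applying access review created through the Microsoft Graph API. Note the hedges: all IDs are placeholders, the payload reviews a group's members rather than Azure resource roles (reviews of Owner assignments on subscriptions and resource groups are normally configured in the portal's Identity Governance/PIM experience), and the exact schema should be verified against the Graph documentation.

```python
import requests

payload = {
    "displayName": "Monthly Owner-assignment verification",
    "scope": {
        "@odata.type": "#microsoft.graph.accessReviewQueryScope",
        "query": "/groups/<group-id>/transitiveMembers",
        "queryType": "MicrosoftGraph",
    },
    "reviewers": [
        {"query": "/users/<security-admin-object-id>", "queryType": "MicrosoftGraph"}
    ],
    "settings": {
        "instanceDurationInDays": 3,
        "autoApplyDecisionsEnabled": True,   # apply decisions automatically
        "defaultDecisionEnabled": True,
        "defaultDecision": "Deny",           # unapproved access is removed
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 1},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

requests.post(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": "Bearer <token>"},
    json=payload,
).raise_for_status()
```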
Important Notes for the AZ-304 Exam
Identity Governance: Know that Identity Governance provides access reviews and other features for managing user access.
Access Reviews: Understand how to use access reviews for recurring role assignment validation.
Privileged Identity Management (PIM): Know when to use PIM for JIT role activation and when it is not suitable, such as in this scenario.
Azure Security Center: Understand that it reports on your security posture but does not provide a way to review role assignments; it only recommends remediation steps.
Azure AD Identity Protection: Understand its purpose in monitoring and dealing with user risk.
Role Assignments: Know that RBAC is used to control roles and that they can be assigned at multiple levels (scopes) in Azure.
Automation: Be aware of how Azure Governance tools can help automate security tasks, such as removing assignments and sending out alerts.
Your company purchases an app named App1.
You need to recommend a solution to ensure that App1 can read and modify access reviews.
What should you recommend?
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
From the Azure Active Directory admin center, register App1, and then from the Access control (IAM) blade, delegate permissions.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1, and then from the Access control (IAM) blade, delegate permissions.
Understanding the Requirements
App1 Functionality: Needs to read and modify access reviews.
Azure Environment: Using Azure Active Directory (Azure AD).
Authorization: Must be authorized to perform these actions.
Analyzing the Options
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
Pros:
Application Registration: The correct way to enable an application to be able to access protected resources in Azure AD.
Microsoft Graph API: The Microsoft Graph API is the correct API to access Azure AD, including access reviews.
Delegated Permissions: Permissions to access Microsoft Graph APIs must be delegated to applications, and this can be done using Azure AD application registrations.
Cons:
None. This is the correct approach.
Verdict: This is the correct solution.
From the Azure Active Directory admin center, register App1, and then from the Access control (IAM) blade, delegate permissions.
Pros:
Application Registration: Required to allow your app to integrate with Azure.
Cons:
Access Control (IAM): IAM is used for resource-level access control, not for delegating permissions for application access to Azure AD or Microsoft Graph API resources.
Delegation of permissions to specific APIs such as the Microsoft Graph API is not performed in the IAM blade.
Verdict: This is incorrect. IAM is not used to delegate permissions to the Microsoft Graph API.
From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
Pros:
API Management is useful when you want to expose your app as a third-party API.
Cons:
API Management: Not required for App1 to interact with the Microsoft Graph API.
Does not support direct delegation of application permissions.
Verdict: This is incorrect. API Management is not the correct service for this task.
From API Management services, publish the API of App1, and then from the Access control (IAM) blade, delegate permissions.
Pros:
API Management is useful when you want to expose your app as a third-party API.
Cons:
API Management: Not required for App1 to interact with the Graph API.
IAM: IAM is not used to delegate access to the Graph API.
Verdict: This is incorrect. API Management is not the correct service, and IAM is not the correct way to configure delegation for the Microsoft Graph API.
Recommendation
The correct recommendation is:
From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
Explanation
Application Registration: Registering App1 in Azure AD creates an application object which represents your application and is used to identify your application within the directory.
Microsoft Graph API: The Microsoft Graph API is the unified endpoint for accessing Microsoft 365, Azure AD and other Microsoft cloud resources. Access reviews are also exposed through this API.
Delegated Permissions: You must delegate permissions to allow App1 to access the Graph API. By granting delegated permissions through the application registration, you allow the app to access resources on behalf of the signed-in user. For app-only access, you would grant application permissions rather than delegated permissions.
Authorization: After App1 is registered with delegated permissions it is allowed to perform actions on the Graph API such as accessing access reviews.
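As a sketch, once App1 is registered and an administrator has consented to the relevant Graph permission (for example, AccessReview.ReadWrite.All for app-only access), the app could acquire a token and read access reviews like this; the tenant ID, client ID, and secret are placeholders:

```python
# pip install msal requests
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<app1-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# ".default" requests whatever Graph application permissions were consented.
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
print(resp.json())
```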
Important Notes for the AZ-304 Exam
Application Registration: Know how to register applications in Azure AD and why it is a required step to allow apps to access resources.
Microsoft Graph API: Understand that the Graph API is the primary way to access Microsoft 365 and Azure AD resources, including access reviews.
Delegated Permissions vs. Application Permissions: Be able to differentiate between these two types of permissions. Delegated permissions require an authenticated user. Application permissions are app-only and do not require a signed-in user.
Access Control (IAM): Know that IAM is for resource level access and not for granting permission for applications.
API Management: Understand its purpose in publishing and securing APIs, but note that it is not necessary in this use case.
Security Principles: Understand the best practices for securing access to resources such as ensuring that the app is registered and given correct permissions.