Case Study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Overview. General Overview
Litware, Inc. is a medium-sized finance company.
Overview. Physical Locations
Litware has a main office in Boston.
Existing Environment. Identity Environment
The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.
Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.
The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by using the Azure portal, they must connect from a hybrid Azure AD-joined device.
Existing Environment. Azure Environment
Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).
The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.
Existing Environment. On-premises Environment
The on-premises network of Litware contains the resources shown in the following table.
Name | Type | Configuration
SERVER1 | Ubuntu 18.04 virtual machines hosted on Hyper-V | The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
SERVER10 | Server that runs Windows Server 2016 | The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.
Existing Environment. Network Environment
Litware has ExpressRoute connectivity to Azure.
Planned Changes and Requirements. Planned Changes
Litware plans to implement the following changes:
✑ Migrate DB1 and DB2 to Azure.
✑ Migrate App1 to Azure virtual machines.
✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.
Planned Changes and Requirements. Authentication and Authorization Requirements
Litware identifies the following authentication and authorization requirements:
✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).
✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.
✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.
✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.
✑ RBAC roles must be applied at the highest level possible.
Planned Changes and Requirements. Resiliency Requirements
Litware identifies the following resiliency requirements:
✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:
- Maintain availability if two availability zones in the local Azure region fail.
- Fail over automatically.
- Minimize I/O latency.
✑ App1 must meet the following requirements:
- Be hosted in an Azure region that supports availability zones.
- Be hosted on Azure virtual machines that support automatic scaling.
- Maintain availability if two availability zones in the local Azure region fail.
Planned Changes and Requirements. Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.
✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.
✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.
✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.
✑ App1 must not share physical hardware with other workloads.
Planned Changes and Requirements. Business Requirements
Litware identifies the following business requirements:
✑ Minimize administrative effort.
✑ Minimize costs.
You migrate App1 to Azure. You need to ensure that the data storage for App1 meets the security and compliance requirements.
What should you do?
Create an access policy for the blob
Modify the access level of the blob service.
Implement Azure resource locks.
Create Azure RBAC assignments.
The security and compliance requirement states: “Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.” This is a requirement for data immutability or Write-Once-Read-Many (WORM) storage.
Let’s examine each option:
Create an access policy for the blob: Azure Blob Storage offers a feature called Immutable Storage for Blob Storage, which allows you to store business-critical data in a WORM state. You can implement time-based retention policies to retain data for a specified period, during which blobs cannot be modified or deleted. This directly addresses the requirement of preventing modification for three years. An access policy in this context would refer to configuring an immutability policy.
Modify the access level of the blob service: Blob storage access tiers (Hot, Cool, Archive) are related to data access frequency and cost. Changing the access tier does not provide any immutability or write protection for the data. This option is irrelevant to the requirement.
Implement Azure resource locks: Azure Resource Locks are used to protect Azure resources (like storage accounts, virtual machines, etc.) from accidental deletion or modification at the Azure Resource Manager level. While you can lock a storage account to prevent deletion of the account itself, resource locks do not prevent modifications to the data within the blobs in the storage account. Resource locks are not designed for data immutability within a storage service.
Create Azure RBAC assignments: Azure Role-Based Access Control (RBAC) is used to manage access to Azure resources. You can use RBAC to control who can read, write, or delete blobs. However, RBAC is about authorization and permissions, not about enforcing immutability or retention policies. RBAC cannot prevent authorized users from modifying data within the retention period.
Considering the requirement for data immutability and prevention of modification for three years, the most appropriate solution is to Create an access policy for the blob. This refers to using the Immutable Storage feature of Azure Blob Storage and setting up a time-based retention policy for a duration of three years. This will ensure that once data is written, it cannot be modified or deleted for the specified period, meeting the security and compliance requirement.
Final Answer: Create an access policy for the blob
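As an illustration only, the following Azure PowerShell sketch (Az.Storage module, with placeholder resource names) shows how a time-based retention policy of roughly three years could be applied to the container that will hold the App1 data:
# Placeholder names; 1,095 days is approximately 3 years of retention.
Set-AzRmStorageContainerImmutabilityPolicy `
    -ResourceGroupName "rg-app1" `
    -StorageAccountName "stapp1data" `
    -ContainerName "app1-data" `
    -ImmutabilityPeriod 1095
# After validation, the policy can be locked (Lock-AzRmStorageContainerImmutabilityPolicy)
# so that it can no longer be shortened or deleted.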
You need to implement the Azure RBAC role assignments for the Network Contributor role. The solution must meet the authentication and authorization requirements.
What is the minimum number of assignments that you must use?
1
2
5
10
15
The requirement is to use the Network Contributor built-in RBAC role to grant permission to all virtual networks in all Azure subscriptions. The principle is to apply RBAC roles at the highest level possible to minimize administrative effort.
Litware has:
10 Azure subscriptions in the Litware.com tenant (production environment)
5 Azure subscriptions in the dev.Litware.com tenant (development environment)
Total of 15 Azure subscriptions
The requirement is to grant the Network Contributor role to all virtual networks in all Azure subscriptions. This implies we need to cover all 15 subscriptions.
The highest level at which you can apply an RBAC role assignment that would affect all virtual networks within a subscription is the subscription level itself.
If a management group structure were in place and all 15 subscriptions sat under a single management group, assigning the Network Contributor role at that management group would require only one assignment. The case study, however, does not mention management groups.
In their absence, the subscription is the highest level at which a single assignment covers every virtual network in that subscription, so covering all virtual networks in all 15 subscriptions requires one assignment per subscription: 15 assignments in total. Assigning at a lower level, such as individual resource groups, would require many more assignments and would conflict with the requirement to apply RBAC roles at the highest level possible.
Because the question asks for the minimum number of assignments while applying the role at the highest level available in the described environment, the answer is 15.
Final Answer: 15
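For illustration, a hedged Azure PowerShell sketch (the group object ID is a hypothetical placeholder) of one Network Contributor assignment per subscription, scoped at the subscription level. Get-AzSubscription returns the subscriptions of the signed-in tenant, so the loop would be run once against each tenant:
$principalId = "<object-id-of-the-network-admins-group>"   # hypothetical placeholder
foreach ($sub in Get-AzSubscription) {
    New-AzRoleAssignment -ObjectId $principalId `
        -RoleDefinitionName "Network Contributor" `
        -Scope "/subscriptions/$($sub.Id)"
}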
HOTSPOT
You plan to migrate DB1 and DB2 to Azure.
You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.
What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Database:
A single Azure SQL database
Azure SQL Managed Instance
An Azure SQL Database elastic pool
Service tier:
Hyperscale
Business Critical
General Purpose
Explanation:
Box 1: SQL Managed Instance
Scenario: Once migrated to Azure, DB1 and DB2 must meet the following requirements:
✑ Maintain availability if two availability zones in the local Azure region fail.
✑ Fail over automatically.
✑ Minimize I/O latency.
The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a server or all databases in a managed instance to another region. It is a declarative abstraction on top of the existing active geo-replication feature, designed to simplify deployment and management of geo-replicated databases at scale. You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined policy. The latter option allows you to automatically recover multiple related databases in a secondary region after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or SQL Managed Instance availability in the primary region.
Box 2: Business critical
SQL Managed Instance is available in two service tiers:
General purpose: Designed for applications with typical performance and I/O latency requirements.
Business critical: Designed for applications with low I/O latency requirements and minimal impact of underlying maintenance operations on the workload.
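As a hedged sketch only (placeholder names and sizes, not values from the case study), provisioning a Business Critical managed instance with Azure PowerShell could look like the following; zone-redundant deployment and the auto-failover group are configured separately:
$cred = Get-Credential   # SQL admin credentials for the managed instance
New-AzSqlInstance `
    -Name "sqlmi-litware" `
    -ResourceGroupName "rg-data" `
    -Location "eastus" `
    -SubnetId "/subscriptions/<sub-id>/resourceGroups/rg-data/providers/Microsoft.Network/virtualNetworks/vnet-data/subnets/snet-sqlmi" `
    -AdministratorCredential $cred `
    -Edition "BusinessCritical" `
    -ComputeGeneration "Gen5" `
    -VCore 8 `
    -StorageSizeInGB 256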
Overview
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Existing Environment: Technical Environment
The on-premises network contains a single Active Directory domain named contoso.com.
Contoso has a single Azure subscription.
Existing Environment: Business Partnerships
Contoso has a business partnership with Fabrikam, Inc. Fabrikam users access some Contoso applications over the internet by using Azure Active Directory (Azure AD) guest accounts.
Requirements: Planned Changes
Contoso plans to deploy two applications named App1 and App2 to Azure.
Requirements: App1
App1 will be a Python web app hosted in Azure App Service that requires a Linux runtime.
Users from Contoso and Fabrikam will access App1.
App1 will access several services that require third-party credentials and access strings.
The credentials and access strings are stored in Azure Key Vault.
App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.
App1 has the following data requirements:
✑ Each instance will write data to a data store in the same availability zone as the instance.
✑ Data written by any App1 instance must be visible to all App1 instances.
App1 will only be accessible from the internet. App1 has the following connection requirements:
✑ Connections to App1 must pass through a web application firewall (WAF).
✑ Connections to App1 must be active-active load balanced between instances.
✑ All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.
Every hour, you will run a maintenance task by invoking a PowerShell script that copies files from all the App1 instances. The PowerShell script will run from a central location.
Requirements: App2
App2 will be a .NET app hosted in App Service that requires a Windows runtime.
App2 has the following file storage requirements:
✑ Save files to an Azure Storage account.
✑ Replicate files to an on-premises location.
✑ Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.
You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.
Application Development Requirements
Application developers will constantly develop new versions of App1 and App2.
The development process must meet the following requirements:
✑ A staging instance of a new application version must be deployed to the application host before the new version is used in production.
✑ After testing the new version, the staging version of the application will replace the production version.
✑ The switch to the new application version from staging to production must occur without any downtime of the application.
Identity Requirements
Contoso identifies the following requirements for managing Fabrikam access to resources:
✑ The solution must minimize development effort.
Security Requirement
All secrets used by Azure services must be stored in Azure Key Vault.
Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.
You need to recommend a solution that meets the application development requirements.
What should you include in the recommendation?
an Azure Container Registry instance
deployment slots
Continuous Integration/Continuous Deployment (CI/CD) sources
the Azure App Configuration service
The application development requirements explicitly state the need for a staging instance, testing, and zero-downtime deployment for new application versions. Let’s evaluate each option in the context of these requirements:
an Azure Container Registry instance: Azure Container Registry (ACR) is a service for building, storing, and managing container images. While ACR is crucial for containerized applications and can be part of a CI/CD pipeline, App1 and App2 are deployed to Azure App Service, which, according to the description, doesn’t explicitly mention containerization. ACR, by itself, does not directly enable staging or zero-downtime deployment for App Service applications.
deployment slots: Azure App Service deployment slots are a feature specifically designed to address the application development requirements outlined. Deployment slots allow you to:
Deploy a new version of your application to a staging slot.
Test the staged application in an environment that mirrors production.
Swap the staging slot into the production slot with minimal to zero downtime. This swap operation is very quick because it primarily involves changing the virtual IP addresses associated with the slots, not redeploying the application.
This option directly and effectively addresses all three application development requirements.
Continuous Integration/Continuous Deployment (CI/CD) sources: CI/CD sources like Azure DevOps, GitHub, or Bitbucket are tools and platforms that facilitate the automation of the software development lifecycle, including building, testing, and deploying applications. While CI/CD pipelines are essential for automating deployments to deployment slots, CI/CD sources themselves are not the mechanism for staging and zero-downtime deployment. They are used to manage and drive deployments, potentially to deployment slots, but they are not the solution itself for the stated requirement.
the Azure App Configuration service: Azure App Configuration is a service for centrally managing application settings and feature flags. It helps decouple configuration from code, enabling dynamic configuration updates without application redeployments. While App Configuration is valuable for managing application settings and can be integrated with CI/CD pipelines, it does not directly address the core requirement of staging new application versions and achieving zero-downtime swaps between versions.
Considering the explicit requirements for staging, testing, and zero-downtime deployment, deployment slots are the most direct and effective Azure App Service feature to meet these needs. They provide the necessary infrastructure to deploy a staging version, test it, and then swap it into production without downtime.
Final Answer: deployment slots
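For illustration, a minimal Azure PowerShell sketch (assumed app and resource group names) of the staging-then-swap flow:
# Create a staging slot, deploy and validate the new version there, then swap.
New-AzWebAppSlot -ResourceGroupName "rg-web" -Name "app1" -Slot "staging"
# ... deploy the new build to the staging slot and test it ...
Switch-AzWebAppSlot -ResourceGroupName "rg-web" -Name "app1" `
    -SourceSlotName "staging" -DestinationSlotName "production"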
What should you recommend to meet the monitoring requirements for App2?
Azure Application Insights
Container insights
Microsoft Sentinel
VM insights
The requirement is to monitor App2 to analyze transaction times without modifying the application code. App2 is a .NET application hosted in Azure App Service. Let’s evaluate each option:
Azure Application Insights: Application Insights is an Application Performance Monitoring (APM) service in Azure. It is designed specifically for web applications, including those hosted in Azure App Service. Application Insights can automatically instrument .NET applications running in App Service without requiring code changes through the use of the Application Insights Extension or Auto-Instrumentation. This feature automatically collects performance data, including request durations and transaction traces, which directly addresses the requirement to analyze transaction times.
Container insights: Container insights is a feature of Azure Monitor designed to monitor the performance and health of containerized workloads, primarily in Azure Kubernetes Service (AKS) and Azure Container Instances (ACI). Since App2 is hosted in Azure App Service (which is a PaaS service, not containers directly managed by the user), Container insights is not the appropriate monitoring solution for App2.
Microsoft Sentinel: Microsoft Sentinel is Azure’s cloud-native SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) solution. Sentinel is focused on security monitoring, threat detection, and incident response. While Sentinel can ingest data from various sources, including Azure Monitor logs (which could include Application Insights data), it is not primarily designed for application performance monitoring in the way that Application Insights is. Using Sentinel for this specific transaction monitoring requirement would be an indirect and overly complex approach compared to using Application Insights directly.
VM insights: VM insights is designed to monitor the performance and health of virtual machines and virtual machine scale sets. While Azure App Service instances run on virtual machines in the backend, VM insights focuses on monitoring the infrastructure level metrics of the VMs (CPU, memory, disk, network). It does not provide application-level transaction monitoring or analysis for applications running within App Service. VM insights is not the right tool to analyze application transaction times.
Considering the requirement for monitoring App2 transactions without code changes, and App2 being an App Service .NET application, Azure Application Insights is the most suitable and direct recommendation. It provides automatic instrumentation for App Service applications, enabling transaction monitoring without requiring any modifications to the application’s code.
Final Answer: Azure Application Insights
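As a hedged illustration (assumed resource names; the app settings shown are the ones documented for codeless attach on Windows App Service), agent-based Application Insights can be enabled for App2 without touching its code by setting two app settings:
# Note: Set-AzWebApp -AppSettings replaces the existing settings collection,
# so in practice the current settings would be merged in first.
$settings = @{
    "APPLICATIONINSIGHTS_CONNECTION_STRING"      = "<connection-string-of-the-App-Insights-resource>"
    "ApplicationInsightsAgent_EXTENSION_VERSION" = "~2"   # Windows App Service agent
}
Set-AzWebApp -ResourceGroupName "rg-web" -Name "app2" -AppSettings $settings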
What should you recommend to meet the monitoring requirements for App2?
Microsoft Sentinel
Azure Application Insights
Container insights
VM insights
The requirement is to monitor App2 to analyze transaction times without requiring any changes to the application code. App2 is a .NET application hosted in Azure App Service.
Let’s evaluate each option again:
Microsoft Sentinel: Microsoft Sentinel is a cloud-native SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) solution. It is primarily focused on security monitoring, threat detection, and incident response. While Sentinel can ingest logs and metrics from various Azure services, it is not designed for application performance monitoring of transaction times in the way that APM tools are. It is not the appropriate service for this specific requirement.
Azure Application Insights: Azure Application Insights is an Application Performance Monitoring (APM) service in Azure. It is specifically designed for web applications and services, including those hosted in Azure App Service. A key feature of Application Insights is its ability to automatically instrument applications running in App Service without requiring changes to the application code. For .NET applications in App Service, you can enable the Application Insights Extension or Auto-Instrumentation. This automatically collects performance data, including request durations, dependencies, exceptions, and traces, which directly addresses the requirement to analyze transaction times within App2.
Container insights: Container insights is a feature of Azure Monitor that is designed to monitor the performance and health of containerized workloads, primarily in Azure Kubernetes Service (AKS) and Azure Container Instances (ACI). Since App2 is hosted in Azure App Service, which is a Platform-as-a-Service (PaaS) offering and not directly containerized by the user in the same way as AKS or ACI, Container insights is not the appropriate monitoring solution for App2.
VM insights: VM insights is a feature of Azure Monitor designed to monitor the performance and health of virtual machines and virtual machine scale sets. It collects data about the operating system and hardware metrics of VMs, such as CPU utilization, memory pressure, disk I/O, and network traffic. While App Service instances run on VMs in the backend, VM insights focuses on monitoring the infrastructure level metrics of these VMs, not the application-level transaction performance within App2. VM insights will not provide the detailed transaction timing analysis required for App2.
Considering the specific requirement of monitoring App2 transaction times without code changes for a .NET application in Azure App Service, Azure Application Insights is the most suitable and direct solution. It provides automatic instrumentation and is designed exactly for this type of application performance monitoring scenario.
Final Answer: Azure Application Insights
Overview
An insurance company, HABInsurance, operates in three states and provides home, auto, and boat insurance. Besides the head office, HABInsurance has three regional offices.
Current environment
General
An insurance company, HABInsurance, operates in three states and provides home, auto, and boat insurance. Besides the head office, HABInsurance has three regional offices.
Technology assessment
The company has two Active Directory forests: main.habinsurance.com and region.habinsurance.com. HABInsurance’s primary internal system is the Insurance Processing System (IPS). It is an ASP.NET/C# application running on IIS/Windows Servers hosted in a data center. IPS has three tiers: web, business logic API, and a datastore on the back end. The company uses Microsoft SQL Server and MongoDB for the backend. The system has two parts: Customer data and Insurance forms and documents. Customer data is stored in Microsoft SQL Server, and Insurance forms and documents are stored in MongoDB. The company also has 10 TB of Human Resources (HR) data stored on NAS at the head office location.
Requirements
General
HABInsurance plans to migrate its workloads to Azure. They purchased an Azure subscription.
Changes
During a transition period, HABInsurance wants to create a hybrid identity model along with a Microsoft Office 365 deployment. The company intends to sync its AD forests to Azure AD and benefit from Azure AD administrative units functionality.
HABInsurance needs to migrate the current IPSCustomers SQL database to a new fully managed SQL database in Azure that would be budget-oriented, balanced with scalable compute and storage options. The management team expects the Azure database service to scale the database resources dynamically with minimal downtime. The technical team proposes implementing a DTU-based purchasing model for the new database.
HABInsurance wants to migrate Insurance forms and documents to an Azure database service. HABInsurance plans to move the first two tiers of IPS to Azure without any modifications. The technology team discusses the possibility of running the IPS tiers on a set of virtual machine instances. The number of instances should be adjusted automatically based on CPU utilization. An SLA of 99.95% must be guaranteed for the compute infrastructure.
The company needs to move HR data to Azure File shares.
In their new Azure ecosystem, HABInsurance plans to use internal and third-party applications. The company is considering adding user consent for data access to the registered applications.
Later, the technology team contemplates adding a customer self-service portal to IPS and deploying a new IPS to multi-region AKS. However, the management team is worried about the performance and availability of multi-region AKS deployments during regional outages.
A company has an on-premises file server cbflserver that runs Windows Server 2019. Windows Admin Center manages this server. The company owns an Azure subscription. You need to provide an Azure solution to prevent data loss if the file server fails.
Solution: You decide to create an Azure Recovery Services vault. You then decide to install the Azure Backup agent and then schedule the backup.
Would this meet the requirement?
Yes
No
The requirement is to prevent data loss if the on-premises file server cbflserver running Windows Server 2019 fails. The proposed solution involves using Azure Recovery Services vault and the Azure Backup agent. Let’s break down why this solution is effective:
Azure Recovery Services Vault: Creating an Azure Recovery Services vault is the foundational step for setting up Azure Backup. The vault acts as a management container for backup and recovery points, and it handles the storage and management of backup data in Azure. This is the correct Azure service to use for backup purposes.
Azure Backup Agent: Installing the Azure Backup agent (also known as the MARS agent - Microsoft Azure Recovery Services agent) on the cbflserver is the correct approach for backing up files and folders from an on-premises Windows Server to Azure. This agent is specifically designed to communicate with the Azure Recovery Services vault and securely transfer backup data to Azure storage.
Scheduling Backup: Scheduling backups is essential for data protection. By scheduling backups, you ensure that data is regularly copied to Azure. In the event of a file server failure, you can restore the data from the latest backup stored in the Azure Recovery Services vault, thus preventing data loss.
By combining these three steps - creating a Recovery Services vault, installing the Azure Backup agent, and scheduling backups - you establish a functional backup system for the cbflserver. This system will create copies of the server’s data in Azure on a regular basis. If the cbflserver fails, the data can be restored from these backups, effectively preventing data loss.
Therefore, the proposed solution directly addresses the requirement of preventing data loss in case of file server failure.
Final Answer: Yes
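For illustration only, a hedged sketch of the backup policy configuration using the MARS agent's MSOnlineBackup cmdlets, run on cbflserver after the agent has been registered with the vault (paths, days, and retention values are placeholders):
$policy    = New-OBPolicy
$schedule  = New-OBSchedule -DaysOfWeek Monday, Wednesday, Friday -TimesOfDay 21:00
Set-OBSchedule -Policy $policy -Schedule $schedule
$retention = New-OBRetentionPolicy -RetentionDays 30
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
$files     = New-OBFileSpec -FileSpec @("D:\Shares")      # placeholder file-server path
Add-OBFileSpec -Policy $policy -FileSpec $files
Set-OBPolicy -Policy $policy -Confirm:$false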
A company is planning on deploying an application onto Azure. The application will be based on the .Net core programming language. The application would be hosted using Azure Web apps. Below is part of the various requirements for the application
Give the ability to correlate Azure resource usage and the performance data with the actual application configuration and performance data
Give the ability to visualize the relationships between application components
Give the ability to track requests and exceptions to specific lines of code from within the application
Give the ability to analyze how users return to an application and see how often they only select a particular drop-down value
Which of the following services would be best suited for fulfilling the requirement of “Give the ability to correlate Azure resource usage and the performance data with the actual application configuration and performance data”?
Azure Application Insights
Azure Service Map
Azure Log Analytics
Azure Activity Log
The question specifically asks for a service that provides the ability to correlate Azure resource usage and performance data with application configuration and performance data. Let’s analyze each option in relation to this requirement:
Azure Application Insights: Azure Application Insights is an Application Performance Monitoring (APM) service designed for web applications and services. It excels at collecting and analyzing application performance data such as request rates, response times, exceptions, and dependencies. Critically, Application Insights also integrates with Azure Monitor metrics. This integration allows you to see Azure resource utilization (like CPU usage, memory consumption, etc. of the underlying App Service plan) alongside your application performance data within the same interface. Furthermore, Application Insights allows you to track custom properties and telemetry, which can include application configuration data if you choose to send it. Therefore, Application Insights directly facilitates the correlation of Azure resource usage and performance data with application configuration and performance data.
Azure Service Map: Azure Service Map automatically discovers application components and their dependencies, visualizing the relationships between servers, processes, and third-party services. While it provides a great visual representation of application architecture and dependencies, it is not primarily focused on correlating Azure resource usage metrics with detailed application performance and configuration data. Service Map is more about understanding the topology and connections within your application environment.
Azure Log Analytics: Azure Log Analytics is a powerful service for collecting and analyzing log and metric data from various sources across your Azure and on-premises environments. You could potentially use Log Analytics to collect both Azure resource logs (containing resource usage metrics) and application performance logs (which might include performance and configuration data). Then, you could write complex queries to try and correlate this data. However, this approach is more manual and requires significant configuration and query writing effort. Application Insights provides a more direct and out-of-the-box solution for this specific correlation requirement, especially for web applications hosted in Azure App Service.
Azure Activity Log: Azure Activity Log provides audit logs for operations performed on Azure resources. It records control plane operations like creating, updating, or deleting Azure resources. Activity Log is primarily for auditing and governance purposes, not for monitoring application performance or correlating resource usage with application configuration data. It does not contain the detailed performance metrics or application-level data needed for this requirement.
Considering the specific requirement to “correlate Azure resource usage and the performance data with the actual application configuration and performance data,” Azure Application Insights is the most directly and effectively suited service. It is designed for APM and has built-in features to integrate resource usage metrics with application performance telemetry, making correlation straightforward.
Final Answer: Azure Application Insights
A company has an on-premises file server cbflserver that runs Windows Server 2019. Windows Admin Center manages this server. The company owns an Azure subscription. You need to provide an Azure solution to prevent data loss if the file server fails.
Solution: You decide to register Windows Admin Center in Azure and then configure Azure Backup.
Would this meet the requirement?
Yes
No
The requirement is to prevent data loss for an on-premises file server cbflserver running Windows Server 2019 in case of failure. The proposed solution is to register Windows Admin Center in Azure and then configure Azure Backup. Let’s analyze if this solution meets the requirement.
Registering Windows Admin Center in Azure: Windows Admin Center (WAC) is a browser-based management tool for Windows Servers. Registering Windows Admin Center in Azure connects your on-premises WAC instance to your Azure subscription. This provides several benefits, including:
Hybrid Management: Allows you to manage your on-premises servers from within the Azure portal.
Azure Service Integration: Enables easier integration and configuration of Azure services for your on-premises servers directly from the WAC interface.
Configuring Azure Backup: Azure Backup is a cloud-based backup service that is part of Azure Recovery Services. It is designed to backup data from various sources, including on-premises Windows Servers. By configuring Azure Backup for cbflserver, you will be able to create backups of the server’s data in Azure.
How Windows Admin Center facilitates Azure Backup:
Windows Admin Center provides a user-friendly interface to configure Azure Backup for servers it manages. When you register WAC in Azure and then use WAC to configure Azure Backup for cbflserver, it simplifies the process by:
Guiding you through the Azure Backup setup: WAC can help you create a Recovery Services vault in Azure if you don’t already have one.
Simplifying agent installation: WAC can assist in deploying the Azure Backup agent to cbflserver.
Providing a centralized management point: You can manage backups for cbflserver directly from the WAC interface, which is integrated with Azure.
Does this solution meet the requirement of preventing data loss?
Yes. By configuring Azure Backup for cbflserver, regardless of whether you initiate the configuration through Windows Admin Center or directly through the Azure portal, you are setting up a backup process that will store copies of your server’s data in Azure. In the event of a failure of the cbflserver, you can restore the data from the backups stored in Azure, thus preventing data loss.
Registering Windows Admin Center in Azure is not strictly necessary for Azure Backup to function. You can configure Azure Backup directly from the Azure portal or using PowerShell. However, using Windows Admin Center, especially when it’s already used for server management, simplifies the configuration and management of Azure Backup for on-premises servers.
Therefore, the solution of registering Windows Admin Center in Azure and then configuring Azure Backup is a valid and effective way to prevent data loss for the on-premises file server cbflserver.
Final Answer: Yes
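As a small illustration (assumed names), the Recovery Services vault behind this configuration can be created with Azure PowerShell before Windows Admin Center or the portal is used to configure the backup itself:
New-AzResourceGroup -Name "rg-backup" -Location "eastus"
New-AzRecoveryServicesVault -Name "rsv-cbflserver" -ResourceGroupName "rg-backup" -Location "eastus"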
You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1.
The solution must meet the following requirements:
- To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
- If the manager does not verify an access permission, automatically revoke that permission.
- Minimize development effort.
What should you recommend?
In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
In Azure Active Directory (Azure AD), create an access review of Application1
The question asks for the best solution to verify if Fabrikam developers still require permissions to Application1, with specific requirements for monthly email notifications to managers, automatic revocation upon non-verification, and minimal development effort. Let’s evaluate each option against these requirements.
In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: Azure AD Privileged Identity Management (PIM) is primarily used for managing, controlling, and monitoring access within an organization by enforcing just-in-time access for privileged roles. While PIM can manage role assignments, it is not inherently designed for periodic access reviews and automated revocations based on manager verification in the way described in the requirements. Creating a custom role assignment in PIM does not directly address the need for a monthly review and automatic revocation workflow.
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: This option involves using Azure Automation and PowerShell scripting. Get-AzureADUserAppRoleAssignment cmdlet can retrieve application role assignments in Azure AD. An Azure Automation runbook could be created to:
Run on a monthly schedule.
Use Get-AzureADUserAppRoleAssignment to list Fabrikam developers’ permissions to Application1.
Send an email to the managers with this list, requesting verification.
Implement logic to track responses and, if no response is received within a timeframe, use PowerShell cmdlets to revoke the permissions.
While technically feasible, this solution requires significant development effort to create the automation runbook, handle email notifications, track responses, and implement the revocation logic. It does not minimize development effort.
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet: Get-AzureRmRoleAssignment (or its modern equivalent Get-AzRoleAssignment in Az PowerShell module) retrieves Azure Role-Based Access Control (RBAC) assignments at the resource level. Similar to the previous option, an Azure Automation runbook could be developed to retrieve RBAC assignments for Application1 resources, notify managers, and revoke permissions if not verified. This option also suffers from the same drawback: it requires considerable custom development effort to build the entire verification and revocation process within the runbook.
In Azure Active Directory (Azure AD), create an access review of Application1: Azure AD Access Reviews are a built-in feature in Azure AD Premium P2 (which the users have with Microsoft 365 E5 licenses) specifically designed for this type of access governance scenario. Azure AD Access Reviews provide a streamlined way to:
Define the scope of the review: In this case, access to Application1.
Select reviewers: Managers of the Fabrikam developers.
Set a review schedule: Monthly.
Configure automatic actions: Specifically, “Auto-apply results to resource” which can be set to “Remove access” if reviewers don’t respond or deny access.
Send notifications: Reviewers (managers) are automatically notified by email to perform the review.
Track review progress and results: Azure AD provides a dashboard to monitor the review process.
Azure AD Access Reviews directly address all the specified requirements with minimal configuration and essentially zero development effort. It is a built-in feature designed for access governance and periodic reviews, making it the most efficient and appropriate solution.
Final Answer: In Azure Active Directory (Azure AD), create an access review of Application1
You have an Azure subscription. The subscription has a blob container that contains multiple blobs. Ten users in the finance department of your company plan to access the blobs during the month of April. You need to recommend a solution to enable access to the blobs during the month of April only.
Which security solution should you include in the recommendation?
shared access signatures (SAS)
access keys
conditional access policies
certificates
The correct security solution is shared access signatures (SAS).
Here’s why:
Temporary Access: SAS tokens provide a way to grant temporary, limited access to Azure Storage resources, such as blobs. This perfectly fits the requirement to enable access only during the month of April.
Granular Control: With SAS, you can define the specific permissions (read, write, delete, etc.) and the exact time interval for which the access is valid.
No Account Key Sharing: SAS tokens allow you to grant access without sharing your storage account keys, which is a critical security best practice.
Here’s why the other options are not as suitable:
Access Keys: Access keys provide full access to the entire storage account. Sharing access keys would grant the finance department users far more permission than necessary and would not limit access to the month of April. This violates the principle of least privilege.
Conditional Access Policies: Conditional Access policies are used to enforce organizational policies based on identity, device, location, and other signals. While useful for many scenarios, they are not designed for granting temporary, time-bound access to specific storage resources.
Certificates: Certificates are typically used for authentication and encryption, not for granting temporary access to storage resources.
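For illustration, a hedged Azure PowerShell sketch (assumed account, container, and calendar year) that issues a read/list SAS token valid only for the month of April:
$ctx = (Get-AzStorageAccount -ResourceGroupName "rg-finance" -Name "stfinance").Context
New-AzStorageContainerSASToken -Name "finance-docs" `
    -Permission "rl" `
    -StartTime  ([datetime]"2025-04-01") `
    -ExpiryTime ([datetime]"2025-05-01") `
    -Context $ctx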
You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.
You have an internal web app named WebApp1 that is hosted on-premises. WebApp1 uses Integrated Windows authentication.
Some users work remotely and do NOT have VPN access to the on-premises network.
You need to provide the remote users with single sign-on (SSO) access to WebApp1.
Which two features should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Azure AD Application Proxy
Azure AD Privileged Identity Management (PIM)
Conditional Access policies
Azure Arc
Azure AD enterprise applications
Azure Application Gateway
To provide remote users with single sign-on (SSO) access to an on-premises web application (WebApp1) that uses Integrated Windows Authentication (IWA), without VPN access, you should use the following two Azure AD features:
Azure AD Application Proxy
Azure AD enterprise applications
Here’s why these two features are the correct combination:
- Azure AD Application Proxy:
Purpose: Azure AD Application Proxy is specifically designed to publish on-premises web applications to remote users securely through Azure AD authentication. It acts as a reverse proxy, sitting between the internet and your on-premises application.
How it helps in this scenario:
Secure Remote Access without VPN: It eliminates the need for users to connect via VPN to access WebApp1. Remote users access the application through an external URL provided by Application Proxy.
SSO with Azure AD: Application Proxy integrates with Azure AD for authentication. Users authenticate with their Azure AD credentials.
Handles Integrated Windows Authentication (IWA): Application Proxy can be configured to handle the backend Integrated Windows Authentication required by WebApp1. It does this by using Kerberos Constrained Delegation (KCD) and a Connector agent installed on-premises. The Connector agent performs the IWA on behalf of the user within the on-premises network.
- Azure AD enterprise applications:
Purpose: Azure AD enterprise applications are the representation of applications within your Azure AD tenant. They are used to manage authentication and authorization for applications that you want to integrate with Azure AD.
How it helps in this scenario:
Application Registration: You need to register WebApp1 as an enterprise application in your Azure AD tenant. This registration allows Azure AD to understand and manage authentication for WebApp1.
Configuration for Application Proxy: When you set up Azure AD Application Proxy for WebApp1, you will configure it based on this enterprise application registration. The enterprise application defines the authentication methods, user assignments, and other settings for accessing WebApp1 through Application Proxy.
Why other options are not the primary solution:
Azure AD Privileged Identity Management (PIM): PIM is for managing, controlling, and monitoring privileged access to Azure resources and Azure AD roles. It’s not directly involved in providing SSO access to web applications for remote users.
Conditional Access policies: Conditional Access policies are used to enforce authentication requirements based on conditions (like location, device, risk level). While you can use Conditional Access to enhance the security of access to WebApp1 through Application Proxy, it’s not the feature that enables the SSO access in the first place. Conditional Access would be a secondary security layer, not the core solution for SSO.
Azure Arc: Azure Arc is for managing on-premises and multi-cloud infrastructure from Azure. It does not provide SSO capabilities for web applications.
Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and WAF for Azure-hosted web applications. It is not designed to provide reverse proxy and SSO for on-premises applications like Azure AD Application Proxy.
Therefore, the correct two features are Azure AD Application Proxy and Azure AD enterprise applications.
Final Answer: Azure AD Application Proxy and Azure AD enterprise applications
You have an Azure Active Directory (Azure AD) tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned membership. Group1 has 50 members, including 20 guest users.
You need to recommend a solution for evaluating the membership of Group1.
The solution must meet the following requirements:
- The evaluation must be repeated automatically every three months.
- Every member must be able to report whether they need to be in Group1.
- Users who report that they do not need to be in Group1 must be removed from Group1 automatically.
- Users who do not report whether they need to be in Group1 must be removed from Group1 automatically.
What should you include in the recommendation?
Implement Azure AD Identity Protection.
Change the Membership type of Group1 to Dynamic User.
Implement Azure AD Privileged Identity Management.
Create an access review.
The question requires a solution for evaluating and managing the membership of an Azure AD Security Group (Group1) with specific requirements for automation, self-attestation, and automatic removal. Let’s analyze each option:
Implement Azure AD Identity Protection: Azure AD Identity Protection is focused on security and risk management for user identities. It detects risky sign-ins and vulnerabilities, and helps to remediate them. It does not provide features for group membership reviews, self-attestation, or automated removal based on user feedback regarding group membership. Therefore, this option does not meet the requirements.
Change the Membership type of Group1 to Dynamic User: Dynamic User groups manage membership based on rules that are evaluated against user attributes. While this automates group membership management based on predefined rules, it does not address the requirements for periodic reviews, self-attestation, or automatic removal based on user feedback or lack of response. Dynamic groups are rule-driven, not review-driven. Therefore, this option does not meet the requirements.
Implement Azure AD Privileged Identity Management (PIM): Azure AD Privileged Identity Management is used to manage, control, and monitor privileged access to resources in Azure AD and Azure. While PIM can be used for group membership management, it is primarily focused on roles that grant elevated privileges and managing just-in-time access. It is not designed for general group membership reviews and self-attestation across a broad group like Group1. Although PIM has some review capabilities, it’s not the most appropriate tool for this scenario compared to Access Reviews.
Create an access review: Azure AD Access Reviews are specifically designed to manage and review access to groups, applications, and roles. Access Reviews can be configured to meet all the stated requirements:
Periodic Reviews: Access Reviews can be set up to run automatically on a recurring schedule, such as every three months.
Self-Attestation: Access Reviews can be configured to allow users to self-attest to their need for continued access to the group. In this case, members of Group1 can be reviewers and attest if they need to remain in the group.
Automatic Removal Based on User Report: Access Reviews can be configured to automatically remove users who, during the review process, indicate that they no longer need access to the group.
Automatic Removal for Non-Response: Access Reviews can be configured to automatically remove users who do not respond to the access review within a specified time period.
Azure AD Access Reviews directly address all the requirements of the question and are the intended feature for managing group memberships in this way.
Final Answer: Create an access review.
HOTSPOT
You plan to deploy Azure Databricks to support a machine learning application. Data engineers will mount an Azure Data Lake Storage account to the Databricks file system. Permissions to folders are granted directly to the data engineers.
You need to recommend a design for the planned Databricks deployment.
The solution must meet the following requirements:
✑ Ensure that the data engineers can only access folders to which they have permissions.
✑ Minimize development effort.
✑ Minimize costs.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Databricks SKU:
Premium
Standard
Cluster configuration:
Credential passthrough
Managed identities
MLflow
A runtime that contains Photon
Secret scope
Databricks SKU: Premium
Requirement: Ensure that data engineers can only access folders to which they have permissions.
Explanation: Premium SKU is required to enable credential passthrough. Credential passthrough allows Databricks clusters to leverage the Azure Active Directory identity of the user submitting queries to access Azure Data Lake Storage (ADLS). This means that Databricks will use the data engineer’s own Azure AD credentials to authenticate and authorize access to ADLS. If the data engineer has permissions to a specific folder in ADLS, they can access it through Databricks; otherwise, they will be denied access. Standard SKU does not support credential passthrough for ADLS Gen2.
Cluster configuration: Credential passthrough
Requirement: Ensure that data engineers can only access folders to which they have permissions.
Explanation: Credential passthrough is the key feature that directly addresses the requirement of granular access control based on user permissions in ADLS. When credential passthrough is enabled on a Databricks cluster, the identity of the user running a job is passed through to ADLS. ADLS then uses its own access control mechanisms (like ACLs or RBAC) to determine if the user has permission to access the requested data. This directly ensures that data engineers can only access folders they are permitted to access.
Why other options are not the best fit or incorrect:
Standard Databricks SKU: Standard SKU does not support credential passthrough for Azure Data Lake Storage Gen2, which is essential for enforcing user-level permissions on folders in ADLS as described in the scenario.
Managed identities: While managed identities are a secure way for Azure resources to authenticate to other Azure services, they do not directly address the requirement of individual data engineers accessing data based on their own permissions. Managed identities would require granting permissions to the Databricks cluster’s managed identity, not to individual data engineers. This would mean all users of the cluster would have the same level of access, which contradicts the requirement of granular user-based permissions.
MLflow: MLflow is a platform for managing the machine learning lifecycle. It’s not directly related to data access control or minimizing costs in the context of storage access permissions. While useful for ML projects, it doesn’t contribute to solving the specific requirements outlined.
A runtime that contains Photon: Photon is a high-performance query engine optimized for Databricks. While it can improve performance and potentially reduce costs in the long run by running jobs faster, it is not directly related to data access control or minimizing development effort in the context of setting up permissions. Choosing a runtime with or without Photon does not address the core security and access control requirements.
Secret scope: Secret scopes are used to securely store and manage secrets (like passwords, API keys, etc.) in Databricks. While important for security in general, secret scopes are not directly related to the requirement of user-based folder permissions in ADLS. They are more relevant for managing credentials used by the Databricks cluster itself, not for enforcing user-level data access control using Azure AD identities.
Minimizing Development Effort & Costs:
Credential passthrough minimizes development effort because it leverages the existing Azure AD and ADLS permissions model. No custom access control mechanisms need to be developed within Databricks.
Standard runtime is generally less costly than Photon if performance gains are not a primary driver.
Choosing the Premium SKU is necessary for credential passthrough, even though it’s more expensive than Standard, because it’s the only way to meet the core security requirement of user-based folder permissions with minimal development effort. Trying to implement a custom permission system with Standard SKU and Managed Identities would be significantly more complex and potentially more costly in development time.
Therefore, the optimal solution to meet all requirements with minimal development effort and cost-effectiveness, while ensuring secure user-based access to folders in ADLS, is to choose Premium Databricks SKU and configure the cluster with Credential passthrough.
Final Answer:
Databricks SKU: Premium
Cluster configuration: Credential passthrough
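For reference, the sketch below is a minimal, hedged example of creating such a cluster through the Databricks Clusters REST API (POST /api/2.0/clusters/create). The workspace URL, token, runtime label, node type, and the spark_conf key shown are assumptions for illustration; verify the exact configuration names against the Databricks documentation for your workspace. Because passthrough relies on each caller's Azure AD token, no storage keys or service principal secrets need to be handed to the data engineers.

```python
import requests

# Assumed placeholder values for illustration only.
WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "<personal-access-token>"

cluster_spec = {
    "cluster_name": "ml-passthrough-cluster",
    "spark_version": "13.3.x-scala2.12",   # assumed runtime label
    "node_type_id": "Standard_DS3_v2",      # assumed VM size
    "num_workers": 2,
    "spark_conf": {
        # Assumed key: enables Azure AD credential passthrough so that ADLS
        # access is evaluated against each data engineer's own identity.
        "spark.databricks.passthrough.enabled": "true"
    },
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # returns the new cluster_id on success
```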
HOTSPOT
You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.
App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.
You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
The users can connect to App1 without being prompted for authentication:
The users can access App1 only from company-owned computers:
An Azure AD app registration
An Azure AD managed identity
Azure AD Application Proxy
A conditional access policy
An Azure AD administrative unit
Azure Application Gateway
Azure Blueprints
Azure Policy
The users can connect to App1 without being prompted for authentication: An Azure AD app registration
Explanation: To enable Azure AD authentication for App1, you must first register App1 as an application in Azure AD. This app registration establishes a trust relationship between App1 and Azure AD, allowing Azure AD to authenticate users for App1.
Why it enables SSO (Single Sign-On): When a user on an Azure AD joined Windows 10 computer attempts to access App1, and App1 is configured for Azure AD authentication, the web browser on the user’s machine can automatically pass the user’s existing Azure AD credentials to App1’s authentication request. This happens seamlessly in the background because the user is already logged into Azure AD on their Windows 10 machine. App registration is the fundamental step to enable this authentication flow, which leads to SSO in this scenario.
Why other options are not suitable for SSO in this context:
Azure AD managed identity: Managed identities are for Azure resources (like App1 itself) to authenticate to other Azure services, not for user authentication to App1.
Azure AD Application Proxy: Application Proxy is for publishing on-premises web applications to the internet via Azure AD. App1 is already an Azure web app and internet-facing, so Application Proxy is not needed for basic internet access or SSO for it.
A conditional access policy: Conditional access policies enforce conditions after authentication. While they can contribute to a better user experience, they are not the primary mechanism for enabling SSO itself.
An Azure AD administrative unit: Administrative units are for organizational management and delegation within Azure AD, not related to authentication flows or SSO.
Azure Application Gateway: Application Gateway is a web traffic load balancer and WAF. It doesn’t directly handle Azure AD authentication or SSO in this context.
Azure Blueprints & Azure Policy: These are for resource deployment and governance, not related to application authentication or SSO.
The users can access App1 only from company-owned computers: A conditional access policy
Explanation: Azure AD Conditional Access policies are specifically designed to enforce access controls based on various conditions, including device state. You can create a Conditional Access policy that targets App1 and requires devices to be marked as “compliant” or “hybrid Azure AD joined” to grant access.
How it works for company-owned computers: For Windows 10 computers joined to Azure AD, you can configure them to be either Hybrid Azure AD joined (if also domain-joined to on-premises AD) or simply Azure AD joined and managed by Intune (or other MDM). You can then use Conditional Access to require that devices accessing App1 are either Hybrid Azure AD joined or marked as compliant by Intune. This effectively restricts access to only company-managed and compliant devices, which are considered “company-owned” in this context.
Why other options are not suitable for device-based access control:
An Azure AD app registration: App registration is necessary for authentication but doesn’t enforce device-based restrictions.
Azure AD managed identity: Irrelevant to device-based access control for users.
Azure AD Application Proxy: Not relevant to device-based access control for Azure web apps.
An Azure AD administrative unit: Not relevant to device-based access control.
Azure Application Gateway, Azure Blueprints, Azure Policy: These are not directly designed for enforcing device-based access control for Azure AD authenticated applications.
Therefore, the most appropriate recommendations are:
The users can connect to App1 without being prompted for authentication: An Azure AD app registration
The users can access App1 only from company-owned computers: A conditional access policy
Final Answer:
The users can connect to App1 without being prompted for authentication: An Azure AD app registration
The users can access App1 only from company-owned computers: A conditional access policy
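As a hedged illustration of how App1 would consume that app registration, the sketch below uses the MSAL for Python library to run the OpenID Connect authorization-code flow. The tenant ID, client ID, secret, and redirect URI are placeholders, and a real App Service deployment could instead rely on the platform's built-in authentication rather than application code.

```python
import msal

# Placeholder values taken from the App1 app registration (assumptions).
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app1-client-id>"
CLIENT_SECRET = "<app1-client-secret>"
REDIRECT_URI = "https://app1.azurewebsites.net/auth/callback"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# Step 1: redirect the browser here. An Azure AD-joined Windows 10 device can
# satisfy this sign-in silently, which is what gives the SSO experience.
auth_url = app.get_authorization_request_url(
    scopes=["User.Read"],
    redirect_uri=REDIRECT_URI,
)

# Step 2: after Azure AD redirects back with ?code=..., exchange it for tokens.
def complete_sign_in(auth_code: str) -> dict:
    result = app.acquire_token_by_authorization_code(
        auth_code,
        scopes=["User.Read"],
        redirect_uri=REDIRECT_URI,
    )
    return result  # contains id_token / access_token on success
```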
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is being deployed and configured for on-premises to Azure connectivity.
Several virtual machines exhibit network connectivity issues.
You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.
Solution: Use Azure Traffic Analytics in Azure Network Watcher to analyze the network traffic.
Does this meet the goal?
Yes
No
The goal is to analyze network traffic to identify whether packets are being allowed or denied to virtual machines in a hybrid environment (on-premises and Azure connected via ExpressRoute). The proposed solution is to use Azure Traffic Analytics in Azure Network Watcher.
Let’s evaluate if Azure Traffic Analytics meets this goal:
Azure Traffic Analytics:
Functionality: Azure Traffic Analytics analyzes Network Security Group (NSG) flow logs, Azure Firewall logs, and Virtual Network Gateway logs to provide insights into network traffic in Azure. It helps visualize traffic patterns, identify security threats, and pinpoint network misconfigurations.
Scope: Traffic Analytics is focused on analyzing network traffic within Azure. It primarily works with Azure network resources like NSGs, Azure Firewalls, and Virtual Network Gateways.
Data Source: It relies on logs generated by Azure network components.
Hybrid Environment and ExpressRoute:
ExpressRoute Connectivity: ExpressRoute provides a private connection between on-premises networks and Azure.
Network Traffic Flow: Traffic flows between on-premises VMs and Azure VMs through the ExpressRoute connection.
On-premises VMs Visibility: Azure Traffic Analytics does not have direct visibility into the network traffic of on-premises virtual machines. It cannot analyze NSG flow logs or Azure Firewall logs for on-premises resources because these logs are generated by Azure network security components, which are not directly involved in securing on-premises networks.
Analyzing Network Connectivity Issues:
Azure VM Issues: For VMs in Azure that are protected by NSGs or Azure Firewall, Traffic Analytics can be helpful to understand if traffic is being allowed or denied by these Azure security components.
On-premises VM Issues: For VMs located on-premises, Azure Traffic Analytics is not directly applicable. Network connectivity issues for on-premises VMs would need to be analyzed using on-premises network monitoring tools and firewall logs.
Conclusion:
Azure Traffic Analytics is a valuable tool for analyzing network traffic and identifying allowed/denied packets within Azure.
However, it is not designed to analyze network traffic for on-premises virtual machines, even when they are connected to Azure via ExpressRoute. It lacks visibility into the on-premises network infrastructure.
Therefore, using Azure Traffic Analytics alone is insufficient to meet the goal of analyzing network traffic for all virtual machines (both on-premises and Azure) exhibiting network connectivity issues in this hybrid scenario. It will only provide insights into the Azure-side network traffic.
Final Answer: No
Why No is the correct answer: Azure Traffic Analytics is limited to analyzing network traffic within the Azure environment based on Azure network component logs (NSGs, Azure Firewall, etc.). It does not have visibility into on-premises network traffic, even when connected to Azure via ExpressRoute. Since the scenario involves VMs both on-premises and in Azure, and the need is to analyze network traffic to identify allowed/denied packets for all VMs, Azure Traffic Analytics by itself is not a sufficient solution. It can help with Azure VMs but not on-premises VMs.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.
Several VMs are exhibiting network connectivity issues.
You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.
Solution: Use the Azure Advisor to analyze the network traffic.
Does the solution meet the goal?
Yes
No
The goal is to analyze network traffic to determine whether packets are being allowed or denied to VMs in a hybrid environment. The proposed solution is to use Azure Advisor.
Let’s evaluate if Azure Advisor is suitable for this task:
Azure Advisor’s Purpose: Azure Advisor is a service in Azure that provides recommendations on how to optimize your Azure deployments for cost, security, reliability, operational excellence, and performance. It analyzes your Azure resource configurations and usage telemetry.
Azure Advisor’s Capabilities Related to Networking: Azure Advisor can provide recommendations related to networking, such as:
Security Recommendations: Suggesting improvements to Network Security Groups (NSGs) to enhance security, like closing exposed ports or recommending the use of Azure Firewall.
Performance Recommendations: Identifying potential network bottlenecks or underutilized network resources.
Cost Optimization: Identifying potential cost savings in network configurations.
Reliability: Recommending configurations for better network resilience.
Limitations of Azure Advisor for Network Traffic Analysis:
Not a Packet-Level Analyzer: Azure Advisor does not perform real-time or detailed packet-level network traffic analysis. It does not capture network packets or analyze packet headers to determine if packets are being allowed or denied by network security rules.
Recommendation-Based, Not Diagnostic: Azure Advisor provides recommendations based on configuration and usage patterns. It’s not a diagnostic tool to troubleshoot specific network connectivity issues by analyzing traffic flow in real-time or near real-time.
Focus on Azure Resources: Azure Advisor primarily focuses on Azure resources and their configurations. It does not have direct visibility into on-premises network traffic or detailed configurations of on-premises network devices.
Analyzing Network Connectivity Issues: To determine if packets are being allowed or denied, you need tools that can inspect network traffic flows, such as:
Network Watcher (Packet Capture, NSG Flow Logs, Connection Troubleshoot): These tools in Azure Network Watcher are designed for diagnosing network connectivity issues by capturing packets, analyzing NSG rule hits, and testing connectivity.
Network Monitoring Tools (e.g., Wireshark, tcpdump): These tools can capture and analyze network traffic at the packet level on both on-premises and Azure VMs (if installed and configured appropriately).
Firewall Logs: Analyzing logs from firewalls (Azure Firewall or on-premises firewalls) can show which traffic is being allowed or denied based on firewall rules.
Conclusion: Azure Advisor is a valuable tool for getting recommendations to improve your Azure environment, including some aspects of networking. However, it is not designed for or capable of analyzing network traffic at the packet level to determine if packets are being allowed or denied. It’s not a network traffic analysis tool in the sense required to troubleshoot network connectivity issues at a detailed level.
Final Answer: No
Explanation: Azure Advisor is not designed for real-time or packet-level network traffic analysis. It provides recommendations based on configuration and usage patterns but does not have the capability to analyze network traffic flows to determine if packets are being allowed or denied. To achieve the goal of analyzing network traffic for allowed/denied packets, tools like Azure Network Watcher (Packet Capture, NSG Flow Logs) or traditional network monitoring tools are required, not Azure Advisor.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.
Several VMs are exhibiting network connectivity issues.
You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.
Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic.
Does the solution meet the goal?
Yes
No
The goal is to analyze network traffic to determine if packets are being allowed or denied to VMs in a hybrid environment. The proposed solution is to use Azure Network Watcher’s IP flow verify.
Let’s analyze if Azure Network Watcher’s IP flow verify is suitable for this goal:
Azure Network Watcher IP Flow Verify: This tool allows you to specify a source and destination IP address, port, and protocol, and then it checks the configured Network Security Groups (NSGs) and Azure Firewall rules in Azure to determine if the traffic would be allowed or denied.
How it helps in the hybrid scenario:
Azure VMs: For VMs in Azure, IP flow verify is directly applicable. You can use it to check if NSGs or Azure Firewall rules are blocking traffic to or from these VMs. This is crucial for diagnosing connectivity issues related to Azure network security configurations.
On-premises VMs communicating with Azure VMs: When on-premises VMs are experiencing connectivity issues with Azure VMs, IP flow verify can be used to check the Azure side of the connection. You can test if traffic from the on-premises VM’s IP range (or a representative IP) to the Azure VM is being blocked by Azure NSGs or Azure Firewall. This helps isolate whether the problem lies within Azure’s network security rules. While it doesn’t directly analyze on-premises firewalls or network configurations, it can pinpoint if the block is happening at the Azure perimeter.
Limitations: IP flow verify is primarily focused on the Azure network security layer (NSGs and Azure Firewall). It does not analyze on-premises firewalls, routers, or network configurations. Therefore, it will not provide a complete picture of the entire network path from on-premises to Azure.
Does it meet the goal? Yes, in part. IP flow verify does directly address the need to analyze network traffic to determine if packets are being allowed or denied, specifically in the context of Azure network security. For the Azure side of the hybrid connection, and for understanding if Azure NSGs or Firewall are causing the issues, IP flow verify is a valuable and relevant tool. While it doesn’t cover the on-premises network completely, it’s a significant step in diagnosing network connectivity problems in a hybrid environment, especially when Azure resources are involved in the communication path.
Because IP flow verify analyzes whether traffic is allowed or denied by Azure security rules, which form part of the hybrid environment, the answer is Yes. It identifies potential packet blocking on the Azure side of the path. It is not a complete end-to-end hybrid solution, but it directly addresses the core requirement within the scope of Azure networking, which is the relevant portion of this hybrid connectivity scenario.
Final Answer: Yes
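A hedged sketch of invoking IP flow verify from Python with the azure-mgmt-network SDK is shown below. The resource names and addresses are placeholders, and the exact operation name (begin_verify_ip_flow) and model shape can differ between SDK versions, so treat this as an outline rather than a definitive call.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Ask Network Watcher whether a TCP packet from an on-premises address to the
# Azure VM would be allowed or denied by the effective NSG rules.
params = VerificationIPFlowParameters(
    target_resource_id=(
        "/subscriptions/<sub>/resourceGroups/RG1"
        "/providers/Microsoft.Compute/virtualMachines/VM1"
    ),
    direction="Inbound",
    protocol="TCP",
    local_port="1433",
    remote_port="50000",
    local_ip_address="10.1.0.4",        # Azure VM private IP (placeholder)
    remote_ip_address="192.168.10.20",  # on-premises VM IP (placeholder)
)

poller = client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG", "NetworkWatcher_eastus", params
)
result = poller.result()
print(result.access, result.rule_name)  # e.g. 'Deny' and the matching NSG rule
```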
DRAG DROP
You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016 and Linux.
You need to use Azure Log Analytics to design an alerting strategy for security-related events.
Which Log Analytics tables should you query? To answer, drag the appropriate tables to the correct log types. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Tables
AzureActivity
AzureDiagnostics
Event
Syslog
Answer Area
Events from Linux system logging: Table
Events from Windows event logs: Table
To design an alerting strategy for security-related events using Azure Log Analytics for both Windows and Linux VMs, you need to query the tables that specifically store operating system level logs, especially security logs.
Let’s analyze each table and determine its purpose:
AzureActivity: This table stores Azure subscription activity logs. These logs provide insights into the operations performed on Azure resources at the subscription level. While it may contain some security-related activities like changes to security configurations in Azure, it is not the primary source for OS-level security events from within the VMs.
AzureDiagnostics: This table stores diagnostic logs for various Azure services and resources. For Virtual Machines, Azure Diagnostics can collect guest OS logs and performance metrics. However, by default, it might not be configured to collect detailed security event logs. You would need to specifically configure Azure Diagnostics to collect Windows Security Events or Linux Security logs and send them to this table, which is less common for standard security event monitoring.
Event: This table is specifically designed to store Windows Event Logs collected from Windows VMs. Windows Security Events are a critical source of security-related information in Windows environments. Therefore, the Event table is the correct table to query for security events from Windows VMs.
Syslog: This table is specifically designed to store Syslog messages collected from Linux VMs. Syslog is the standard logging facility in Linux systems, and security-related events are often logged via Syslog. Therefore, the Syslog table is the correct table to query for security events from Linux VMs.
Based on this understanding:
Events from Linux system logging: The appropriate table is Syslog.
Events from Windows event logs: The appropriate table is Event.
Answer Area:
Events from Linux system logging: Table Syslog
Events from Windows event logs: Table Event
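To turn these table choices into alert queries, the hedged sketch below uses the azure-monitor-query package to run one KQL query against each table. The workspace ID is a placeholder, and the column filters (EventLevelName, Facility) are illustrative examples of typical security-related filters, not a prescribed alerting rule.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder
client = LogsQueryClient(DefaultAzureCredential())

QUERIES = {
    # Windows event logs land in the Event table.
    "windows": "Event | where EventLevelName == 'Error' | take 20",
    # Linux system logging (syslog) lands in the Syslog table.
    "linux": "Syslog | where Facility in ('auth', 'authpriv') | take 20",
}

for name, kql in QUERIES.items():
    response = client.query_workspace(
        workspace_id=WORKSPACE_ID,
        query=kql,
        timespan=timedelta(days=1),
    )
    for table in response.tables:
        print(name, len(table.rows), "rows")
```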
You are designing a large Azure environment that will contain many subscriptions.
You plan to use Azure Policy as part of a governance solution.
To which three scopes can you assign Azure Policy definitions? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
management groups
subscriptions
Azure Active Directory (Azure AD) tenants
resource groups
Azure Active Directory (Azure AD) administrative units
compute resources
Azure Policy is a service in Azure that enables you to create, assign, and manage policies that enforce different rules and effects over your resources. These policies help you stay compliant with your corporate standards and service level agreements. A key aspect of Azure Policy is understanding the scope at which policies can be applied. Scope determines the resources to which the policy will be enforced.
Let’s examine each option and determine if it’s a valid scope for Azure Policy assignment:
management groups: Correct. Management groups are containers for managing access, policy, and compliance across multiple Azure subscriptions. Azure Policy can be assigned at the management group level. Policies assigned at this level apply to all subscriptions within that management group and all resource groups and resources within those subscriptions. This is useful for enforcing organization-wide policies.
subscriptions: Correct. Subscriptions are a fundamental unit in Azure and represent a logical container for your resources. Azure Policy can be assigned at the subscription level. Policies assigned at this level apply to all resource groups and resources within that subscription. This is a common scope for enforcing policies specific to a project, department, or environment represented by a subscription.
Azure Active Directory (Azure AD) tenants: Incorrect. While Azure Policy is managed and integrated within the Azure AD tenant, the Azure AD tenant itself is not a direct scope for assigning Azure Policy definitions in the context of resource governance. Azure Policy is primarily concerned with the governance of Azure resources within subscriptions and management groups. While policies can interact with Azure AD in terms of identity and access management, the scope of policy assignment for resource governance is not the Azure AD tenant itself.
resource groups: Correct. Resource groups are logical containers for Azure resources within a subscription. Azure Policy can be assigned at the resource group level. Policies assigned at this level apply only to the resources within that specific resource group. This allows for very granular policy enforcement, tailored to specific applications or workloads within a resource group.
Azure Active Directory (Azure AD) administrative units: Incorrect. Azure AD administrative units are used for delegated administration within Azure AD. They allow you to grant administrative permissions to a subset of users and groups within your Azure AD organization. While they are related to Azure AD and management, they are not scopes for Azure Policy definitions in the context of Azure resource governance. Azure Policy focuses on the Azure resource hierarchy (management groups, subscriptions, resource groups).
compute resources: Incorrect. Compute resources, such as virtual machines, virtual machine scale sets, or Azure Kubernetes Service clusters, are individual Azure resources. While Azure Policy effects can be applied to compute resources to control their configuration and behavior, you do not directly assign Azure Policy definitions to individual compute resources as a scope. Policy definitions are assigned at the container levels (management groups, subscriptions, resource groups), and then they apply to the resources within those containers, including compute resources.
Therefore, the three correct scopes for assigning Azure Policy definitions are:
management groups
subscriptions
resource groups
Final Answer:
management groups
subscriptions
resource groups
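The three valid scopes correspond to the resource ID formats shown in the short sketch below. The GUIDs and names are placeholders; these IDs are the values you would pass as the scope of a policy assignment (for example with az policy assignment create --scope or the equivalent SDK call).

```python
# Placeholder IDs illustrating the three scopes at which an Azure Policy
# definition can be assigned (broadest to narrowest).
MANAGEMENT_GROUP_SCOPE = (
    "/providers/Microsoft.Management/managementGroups/contoso-mg"
)
SUBSCRIPTION_SCOPE = "/subscriptions/00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP_SCOPE = (
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/RG1"
)

for scope in (MANAGEMENT_GROUP_SCOPE, SUBSCRIPTION_SCOPE, RESOURCE_GROUP_SCOPE):
    print(scope)
```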
DRAG DROP
Your on-premises network contains a server named Server1 that runs an ASP.NET application named App1.
You have a hybrid deployment of Azure Active Directory (Azure AD).
You need to recommend a solution to ensure that users sign in by using their Azure AD account and Azure Multi-Factor Authentication (MFA) when they connect to App1 from the internet.
Which three Azure services should you recommend be deployed and configured in sequence? To answer, move the appropriate services from the list of services to the answer area and arrange them in the correct order.
Services
an internal Azure Load Balancer
an Azure AD conditional access policy
Azure AD Application Proxy
an Azure AD managed identity
a public Azure Load Balancer
an Azure AD enterprise application
an App Service plan
Answer Area
Azure AD enterprise application
Azure AD Application Proxy
an Azure AD conditional access policy
Explanation:
Here’s the step-by-step rationale for the recommended sequence:
Azure AD enterprise application:
Reason: Before you can use Azure AD to manage authentication and access to App1, you must first register App1 as an application within your Azure AD tenant. This is done by creating an Azure AD enterprise application.
Function: Registering App1 as an enterprise application establishes an identity for App1 in Azure AD. This identity is crucial for Azure AD to understand that it needs to manage authentication for requests directed to App1. It also allows you to configure settings specific to App1, such as authentication methods and Conditional Access policies.
Azure AD Application Proxy:
Reason: Azure AD Application Proxy is the core service that enables secure remote access to on-premises web applications like App1 using Azure AD authentication.
Function:
Publishing to the Internet: Application Proxy publishes App1 to the internet through a public endpoint. Users access App1 via this public endpoint.
Reverse Proxy: It acts as a reverse proxy, intercepting user requests to App1 from the internet.
Azure AD Authentication Gateway: It handles the Azure AD authentication process. When a user accesses the Application Proxy endpoint, they are redirected to Azure AD for sign-in.
Secure Connection to On-premises: After successful Azure AD authentication, Application Proxy securely connects to Server1 (where App1 is hosted) on your on-premises network using an outbound connection from the Application Proxy connector.
an Azure AD conditional access policy:
Reason: To enforce Azure Multi-Factor Authentication (MFA) specifically when users access App1 from the internet, you need to configure an Azure AD Conditional Access policy.
Function:
Policy Enforcement: Conditional Access policies allow you to define conditions under which users can access specific applications.
MFA Requirement: You create a Conditional Access policy that targets the Azure AD enterprise application representing App1. Within this policy, you specify that MFA is required for users accessing App1, especially when accessing from outside the corporate network (which is implied when accessing from the internet).
Granular Control: Conditional Access provides granular control over access based on user, location, device, application, and risk signals.
Why other options are not in the sequence or not used:
an internal Azure Load Balancer / a public Azure Load Balancer: While load balancers are important in many architectures, they are not directly part of the core sequence for enabling Azure AD authentication and MFA for an on-premises app via Application Proxy in this basic scenario. Application Proxy itself handles the initial internet-facing endpoint. Load balancers could be relevant for scaling the application behind Server1 on-premises, but not for the core authentication and publishing flow using Application Proxy.
an Azure AD managed identity: Managed identities are used for Azure resources to authenticate to other Azure services. They are not relevant for user authentication to an on-premises application via Application Proxy.
an App Service plan: App Service plans are for hosting Azure App Services (PaaS). App1 is an on-premises application, not an Azure App Service, so App Service Plan is not needed.
Correct Sequence and Justification Summary:
The sequence Azure AD enterprise application -> Azure AD Application Proxy -> Azure AD conditional access policy is the correct order because it represents the logical flow of setting up Azure AD authentication and MFA for an on-premises application:
Register the Application: First, you must register App1 in Azure AD as an enterprise application.
Publish via Application Proxy: Then, you use Azure AD Application Proxy to publish App1 to the internet and handle the initial authentication handshake with Azure AD.
Enforce MFA: Finally, you create a Conditional Access policy to enforce MFA for access to App1, ensuring enhanced security.
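For the final step, below is a hedged sketch of a request body that could be POSTed to the Microsoft Graph conditional access endpoint (/identity/conditionalAccess/policies) to require MFA for the App1 enterprise application. The application ID and token are placeholders, the exact schema may require additional properties, and the same policy can of course be created in the Azure portal instead.

```python
import requests

GRAPH_TOKEN = "<graph-access-token>"          # placeholder
APP1_APP_ID = "<app1-application-client-id>"  # placeholder

policy = {
    "displayName": "Require MFA for App1 from the internet",
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": [APP1_APP_ID]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa"],  # require multi-factor authentication
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
```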
You have 100 servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2012 R2 instances.
The instances host databases that have the following characteristics:
✑ The largest database is currently 3 TB. None of the databases will ever exceed 4 TB.
✑ Stored procedures are implemented by using CLR.
You plan to move all the data from SQL Server to Azure.
You need to recommend an Azure service to host the databases.
The solution must meet the following requirements:
✑ Whenever possible, minimize management overhead for the migrated databases.
✑ Minimize the number of database changes required to facilitate the migration.
✑ Ensure that users can authenticate by using their Active Directory credentials.
What should you include in the recommendation?
Azure SQL Database single databases
Azure SQL Database Managed Instance
Azure SQL Database elastic pools
SQL Server 2016 on Azure virtual machines
Let’s analyze each option based on the requirements:
- Azure SQL Database single databases
Minimize management overhead: Azure SQL Database single databases are a Platform-as-a-Service (PaaS) offering. Microsoft manages most of the underlying infrastructure, including patching, backups, and high availability. This significantly minimizes management overhead compared to managing SQL Server on VMs.
Minimize database changes: Migration to Azure SQL Database single databases might require some database changes. While the T-SQL surface area is largely compatible, there can be differences in features and behaviors. Specifically, CLR support in single databases has some limitations compared to on-premises SQL Server or Managed Instance. You might need to review and potentially adjust your CLR stored procedures.
Ensure Active Directory authentication: Azure SQL Database single databases support Azure Active Directory (Azure AD) authentication, allowing users to authenticate using their Active Directory credentials.
- Azure SQL Database Managed Instance
Minimize management overhead: Azure SQL Database Managed Instance is also a PaaS offering, but it provides more features and control compared to single databases, resembling a traditional SQL Server instance. Microsoft still manages the underlying infrastructure, reducing management overhead compared to VMs, although slightly more than single databases due to the instance-level management capabilities.
Minimize database changes: Azure SQL Database Managed Instance is designed for near 100% compatibility with on-premises SQL Server, including feature parity for SQL Server 2012 R2 and later. This includes full CLR support with fewer restrictions than single databases. Migration to Managed Instance generally requires minimal database changes, making it ideal for applications with complex dependencies or features like CLR.
Ensure Active Directory authentication: Azure SQL Database Managed Instance fully supports Azure Active Directory (Azure AD) authentication and integration, allowing users to use their Active Directory credentials.
- Azure SQL Database elastic pools
Azure SQL Database elastic pools are a deployment method for Azure SQL Database single databases, not a separate service tier with different capabilities. They are used to cost-effectively manage and scale multiple single databases that have variable usage patterns. The characteristics regarding management overhead, database changes, and AD authentication are the same as for Azure SQL Database single databases. Therefore, this option doesn’t fundamentally change the analysis compared to single databases.
- SQL Server 2016 on Azure virtual machines
Minimize management overhead: SQL Server on Azure virtual machines is an Infrastructure-as-a-Service (IaaS) offering. You have full control over the SQL Server and operating system, but you are also responsible for managing everything, including OS patching, SQL Server patching, backups, high availability configuration, and infrastructure maintenance. This option has the highest management overhead.
Minimize database changes: Migrating to SQL Server on Azure VMs requires the fewest database changes. It is essentially running SQL Server in a VM, providing the highest compatibility with your existing on-premises SQL Server 2012 R2 instances, including full CLR support without changes.
Ensure Active Directory authentication: SQL Server on Azure VMs can be joined to an Active Directory domain (if you extend your on-premises AD to Azure or use Azure AD Domain Services) and can use Windows Authentication for users to authenticate with their Active Directory credentials.
Comparison and Best Choice:
Feature (Azure SQL DB single database / Azure SQL DB Managed Instance / SQL Server on Azure VMs)
Management overhead: Lowest / Low / Highest
Database changes: Medium / Minimal / Minimal
AD authentication: Yes / Yes / Yes
CLR support: Limited / Full / Full
Considering all requirements:
Minimize management overhead: Azure SQL Database options (Single DB and Managed Instance) are better than SQL Server on VMs.
Minimize database changes: Azure SQL Database Managed Instance and SQL Server on VMs are better than single databases, especially for CLR compatibility.
Ensure AD authentication: All options satisfy this requirement.
Azure SQL Database Managed Instance strikes the best balance. It significantly minimizes management overhead compared to VMs, minimizes database changes (especially important for CLR), and supports Active Directory authentication. While single databases have even lower management overhead, the potential for database changes due to CLR limitations makes Managed Instance a more suitable recommendation for minimizing database changes and ensuring feature compatibility, especially for applications relying on CLR. SQL Server on Azure VMs minimizes database changes the most but fails to minimize management overhead.
Final Answer: Azure SQL Database Managed Instance
You have an Azure subscription that contains an Azure Blob storage account named store1.
You have an on-premises file server named Server1 that runs Windows Server 2016. Server1 stores 500 GB of company files.
You need to store a copy of the company files from Server1 in store1.
Which two possible Azure services achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point
an Azure Batch account
an integration account
an On-premises data gateway
an Azure Import/Export job
Azure Data Factory
Let’s analyze each Azure service option to determine its suitability for copying files from an on-premises file server to Azure Blob storage:
An Azure Batch account: Azure Batch is designed for large-scale parallel compute workloads. While technically you could write a custom application using Azure Batch to copy files, it’s not the intended use case, and it would be an overly complex solution for a simple file copy task. It’s not a direct file transfer service.
An integration account: Integration accounts are used in Azure Logic Apps and Azure Functions to store integration artifacts like schemas, maps, and certificates. They are not related to directly transferring files from on-premises to Azure Blob storage.
An On-premises data gateway: The On-premises data gateway acts as a bridge between on-premises data sources and Azure cloud services. It enables Azure services like Azure Data Factory, Logic Apps, Power BI, and Power Apps to securely access data behind a firewall in your on-premises network. For copying files from an on-premises file server to Azure Blob Storage, the On-premises data gateway is a crucial component to establish connectivity and secure data transfer.
An Azure Import/Export job: Azure Import/Export service is used for transferring large amounts of data to Azure Blob Storage and Azure Files by physically shipping disk drives to an Azure datacenter. This is suitable for very large datasets when network bandwidth is limited or slow, but it’s not ideal for a routine file copy of 500 GB from an active file server if a network connection is available. This method is not an online transfer service.
Azure Data Factory: Azure Data Factory (ADF) is a cloud-based data integration service. It allows you to create data-driven workflows to orchestrate and automate data movement and transformation. ADF has connectors for various data sources and sinks, including on-premises file systems (via a Self-hosted Integration Runtime, which is based on the same technology as the On-premises data gateway) and Azure Blob Storage. ADF is a well-suited and efficient service for copying files from an on-premises file server to Azure Blob storage.
Considering the requirements and the options:
On-premises data gateway is essential to enable Azure services to access the on-premises file server securely.
Azure Data Factory is a service designed for data movement and can utilize the On-premises data gateway to connect to the on-premises file server and copy files to Azure Blob storage.
Therefore, the two Azure services that, when used together, achieve the goal of copying files from an on-premises server to Azure Blob storage are:
An On-premises data gateway (required to provide secure access to the on-premises file server).
Azure Data Factory (to orchestrate the data copy process using the gateway to connect to the on-premises source and write to Azure Blob storage).
While they work together, the question asks for two possible Azure services that achieve this goal. In the context of the options provided and typical Azure hybrid scenarios, Azure Data Factory and On-premises data gateway are the most relevant and commonly used services for this type of task.
Final Answer:
An On-premises data gateway
Azure Data Factory
HOTSPOT
You have an Azure subscription that contains the storage accounts shown in the following table.
Name Type Performance
storage1 StorageV2 Standard
storage2 StorageV2 Premium
storage3 BlobStorage Standard
storage4 FileStorage Premium
You plan to implement two new apps that have the requirements shown in the following table.
Name Requirement
App1 Use lifecycle management to migrate app data between storage tiers
App2 Store app data in an Azure file share
Which storage accounts should you recommend using for each app? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
App1:
Storage1 and storage2 only
Storage1 and storage3 only
Storage1, storage2, and storage3 only
Storage1, storage2, storage3, and storage4
App2:
Storage4 only
Storage1 and storage4 only
Storage1, storage2, and storage4 only
Storage1, storage2, storage3, and storage4
Final Answer:
App1: Storage1, storage2, and storage3 only
App2: Storage1, storage2, and storage4 only
App1 Requirement: Use lifecycle management to migrate app data between storage tiers
Lifecycle Management Feature: Azure Blob Storage lifecycle management is a feature that allows you to automatically transition blobs to different storage tiers (Hot, Cool, Archive) based on predefined rules. This feature is supported by General-purpose v2 (StorageV2) and Blob Storage accounts. Premium performance storage accounts are designed for low latency and high throughput and typically do not require lifecycle management as the data is intended to be accessed frequently. FileStorage accounts are for Azure File Shares and do not use lifecycle management in the same way as Blob Storage.
Analyzing Storage Accounts for App1:
storage1 (StorageV2, Standard): Supports lifecycle management.
storage2 (StorageV2, Premium): Supports lifecycle management (technically possible, though less typical for premium accounts because lifecycle management is primarily a cost-optimization feature).
storage3 (BlobStorage, Standard): Supports lifecycle management.
storage4 (FileStorage, Premium): Does not support lifecycle management for blobs. FileStorage is for Azure File Shares.
Correct Option for App1: Storage accounts that support lifecycle management are storage1, storage2, and storage3. Therefore, the correct option for App1 is Storage1, storage2, and storage3 only.
App2 Requirement: Store app data in an Azure file share
Azure File Share Feature: Azure File Shares are fully managed file shares in the cloud, accessible via the Server Message Block (SMB) protocol. Azure File Shares can be hosted on General-purpose v2 (StorageV2) accounts and FileStorage accounts. FileStorage accounts are specifically designed for premium, high-performance file shares.
Analyzing Storage Accounts for App2:
storage1 (StorageV2, Standard): Supports Azure File Shares (standard file shares).
storage2 (StorageV2, Premium): Supports Azure File Shares (premium file shares).
storage3 (BlobStorage, Standard): Does not support Azure File Shares. BlobStorage accounts are designed for blobs (object storage), not file shares.
storage4 (FileStorage, Premium): Supports Azure File Shares (premium file shares).
Correct Option for App2: Storage accounts that support Azure File Shares are storage1, storage2, and storage4. Therefore, the correct option for App2 is Storage1, storage2, and storage4 only.
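For App1, the sketch below shows the general shape of a lifecycle management policy that a StorageV2 or BlobStorage account accepts. The rule name, prefix, and day thresholds are arbitrary example values, and the dictionary mirrors the JSON you would supply through the portal, CLI, or the storage management SDK.

```python
# Example lifecycle policy (illustrative values) for a StorageV2/BlobStorage
# account: move blobs to Cool after 30 days, Archive after 90, delete after 365.
lifecycle_policy = {
    "rules": [
        {
            "name": "app1-data-tiering",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["app1data/"],
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                        "delete": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}
```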
You are planning an Azure IoT Hub solution that will include 50,000 IoT devices.
Each device will stream data, including temperature, device ID, and time data. Approximately 50,000 records will be written every second. The data will be visualized in near real time.
You need to recommend a service to store and query the data.
Which two services can you recommend? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Azure Table Storage
Azure Event Grid
Azure Cosmos DB SQL API
Azure Time Series Insights
Final Answer:
Azure Cosmos DB SQL API
Azure Time Series Insights
To address the requirement of storing and querying high-velocity IoT data for near real-time visualization, we need to evaluate each service based on its capabilities for handling such data:
Azure Table Storage:
Pros: NoSQL key-value store, scalable, cost-effective for large volumes of data, can handle high write throughput.
Cons: Limited querying capabilities, not optimized for complex queries or aggregations needed for real-time analytics and visualization. Queries are most efficient when filtering by partition key and row key. Less suitable for complex time-series analysis. Not ideal for “near real-time” complex visualizations that might require flexible querying.
Azure Event Grid:
Pros: Publishes and subscribes to events from Azure services and custom sources. Excellent for event-driven architectures.
Cons: Not a data storage service. Event Grid is for routing events, not storing and querying data for visualization. It would be used to trigger actions when data arrives at IoT Hub, but not for storing the data itself for querying and visualization.
Azure Cosmos DB SQL API:
Pros: NoSQL document database, highly scalable, globally distributed, supports high write throughput and low latency. Rich SQL query API allows for flexible and complex queries, including time-series queries and aggregations. Suitable for near real-time analytics and visualization. Can handle the 50,000 records/second ingestion rate.
Cons: Can be more expensive than Table Storage for very simple data access patterns, but the rich query and scalability features justify the cost for complex real-time scenarios.
Azure Time Series Insights:
Pros: Purpose-built for time-series data from IoT devices. Highly scalable for ingestion and querying of time-series data. Optimized for time-based queries, aggregations, and analytics. Provides near real-time dashboards and visualization capabilities out-of-the-box for time-series data. Designed to handle high-velocity data streams from IoT devices. Excellent for visualizing temperature, device ID, and time data in near real-time.
Cons: Specifically for time-series data. Less flexible for general-purpose NoSQL data storage compared to Cosmos DB if you have other data types beyond time-series.
Considering the requirements:
Store and query data: Azure Table Storage, Azure Cosmos DB, and Azure Time Series Insights are storage services. Azure Event Grid is not.
Near real-time visualization: Azure Cosmos DB and Azure Time Series Insights are well-suited for near real-time visualization due to their query capabilities and low latency. Azure Table Storage is less ideal for complex real-time visualizations.
High ingestion rate (50,000 records per second): Azure Cosmos DB and Azure Time Series Insights are designed for high-throughput data ingestion. Azure Table Storage can handle high throughput, but its querying limitations become more pronounced at scale for complex analytics.
Based on this analysis, the two best services for storing and querying IoT data for near real-time visualization, considering the high ingestion rate and the need for querying and visualization, are Azure Cosmos DB SQL API and Azure Time Series Insights. Azure Time Series Insights is purpose-built for this scenario and offers out-of-the-box visualization capabilities, making it a very strong choice. Azure Cosmos DB provides more general-purpose NoSQL capabilities and global distribution if needed, while still being excellent for time-series data and real-time querying.
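A hedged sketch of the Cosmos DB SQL API side of this design follows, using the azure-cosmos Python SDK to write a telemetry record partitioned by device ID and to run a time-ordered query for near real-time visualization. The endpoint, key, database, container, and partition key names are placeholders.

```python
from datetime import datetime, timezone

from azure.cosmos import CosmosClient

ENDPOINT = "https://<account>.documents.azure.com:443/"  # placeholder
KEY = "<primary-key>"                                     # placeholder

client = CosmosClient(ENDPOINT, credential=KEY)
container = client.get_database_client("iot").get_container_client("telemetry")

# Write one telemetry record; the ingestion pipeline would do this roughly
# 50,000 times per second across many partitions keyed by deviceId.
container.upsert_item({
    "id": "dev-001-2024-01-01T00:00:00Z",
    "deviceId": "dev-001",  # assumed partition key
    "temperature": 21.7,
    "timestamp": datetime.now(timezone.utc).isoformat(),
})

# Query recent readings for one device.
items = container.query_items(
    query=(
        "SELECT c.timestamp, c.temperature FROM c "
        "WHERE c.deviceId = @id ORDER BY c.timestamp DESC"
    ),
    parameters=[{"name": "@id", "value": "dev-001"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)
```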
You are designing an application that will aggregate content for users.
You need to recommend a database solution for the application.
The solution must meet the following requirements:
✑ Support SQL commands.
✑ Support multi-master writes.
✑ Guarantee low latency read operations.
What should you include in the recommendation?
Azure Cosmos DB SQL API
Azure SQL Database that uses active geo-replication
Azure SQL Database Hyperscale
Azure Database for PostgreSQL
Let’s analyze each option against the given requirements:
Azure Cosmos DB SQL API:
Support SQL commands: Yes. Azure Cosmos DB SQL API uses a subset of ANSI SQL, extended for JSON and NoSQL features. It’s designed to be familiar for SQL developers.
Support multi-master writes: Yes. Azure Cosmos DB is natively designed for multi-master writes. You can configure your Cosmos DB account to have multiple write regions, allowing you to perform write operations in any of the chosen regions. This is a core feature of Cosmos DB’s global distribution and low-latency write capabilities.
Guarantee low latency read operations: Yes. Cosmos DB is designed for low latency reads and writes at a global scale. By using the globally distributed nature of Cosmos DB and choosing read regions close to your users, you can ensure low latency read operations.
Azure SQL Database that uses active geo-replication:
Support SQL commands: Yes. Azure SQL Database fully supports T-SQL, the standard SQL dialect for SQL Server and Azure SQL Database.
Support multi-master writes: No. Azure SQL Database with active geo-replication is not multi-master. It operates on a primary-secondary model. Writes are only performed on the primary replica, and then asynchronously replicated to secondary replicas. While secondary replicas provide read scale and disaster recovery, they are read-only and do not support writes.
Guarantee low latency read operations: Yes, for read operations from the secondary replicas, especially if geographically close to users. However, write operations are always directed to the primary replica, which might introduce latency for writes and does not fulfill the multi-master write requirement.
Azure SQL Database Hyperscale:
Support SQL commands: Yes. Azure SQL Database Hyperscale fully supports T-SQL.
Support multi-master writes: No. Azure SQL Database Hyperscale is not multi-master. While Hyperscale has a distributed architecture with multiple read replicas for scalability, write operations are still processed through a single primary compute replica. It’s designed for read-heavy workloads and scalability, not for multi-master writes for globally distributed low-latency writes.
Guarantee low latency read operations: Yes. Hyperscale is designed for very high read scalability and performance, providing low latency reads from multiple replicas. However, it does not provide multi-master write capability.
Azure Database for PostgreSQL:
Support SQL commands: Yes. PostgreSQL is a relational database that supports SQL (ANSI SQL standard).
Support multi-master writes: No, not in the standard managed Azure Database for PostgreSQL service. While PostgreSQL has extensions and architectures that can achieve multi-master setups (like BDR - Bi-Directional Replication or Citus distributed PostgreSQL), these are not part of the standard Azure managed offering and add significant complexity. Azure Database for PostgreSQL Flexible Server offers read replicas for read scalability but not multi-master writes in the context asked. For a simple managed service comparison, it’s primarily single-master.
Guarantee low latency read operations: Read replicas in PostgreSQL can offer low latency reads, but the primary database is still the single point for writes, thus not fulfilling the multi-master write requirement.
Conclusion:
Only Azure Cosmos DB SQL API fully meets all three requirements: SQL command support, multi-master writes, and guaranteed low latency read operations. The other options fail on the multi-master write requirement, which is crucial for applications needing low-latency writes in a globally distributed or highly available manner.
Final Answer: Azure Cosmos DB SQL API
HOTSPOT
You have an Azure subscription that contains the SQL servers shown in the following table.
Name Resource group Location
SQLsvr1 RG1 East US
SQLsvr2 RG2 West US
The subscription contains the storage accounts shown in the following table.
Name Resource group Location Account kind
storage1 RG1 East US StorageV2 (general purpose v2)
storage2 RG2 Central US BlobStorage
You create the Azure SQL databases shown in the following table.
Name Resource group Server Pricing tier
SQLdb1 RG1 SQLsvr1 Standard
SQLdb2 RG1 SQLsvr1 Standard
SQLdb3 RG2 SQLsvr2 Premium
Answer Area
Statements Yes No
When you enable auditing for SQLdb1, you can store the audit information to storage1.
When you enable auditing for SQLdb2, you can store the audit information to storage2.
When you enable auditing for SQLdb3, you can store the audit information to storage2.
Answer:
Statements Yes No
When you enable auditing for SQLdb1, you can store the audit information to storage1. Yes
When you enable auditing for SQLdb2, you can store the audit information to storage2. No
When you enable auditing for SQLdb3, you can store the audit information to storage2. No
Explanation:
Statement 1: When you enable auditing for SQLdb1, you can store the audit information to storage1.
Yes. SQLdb1 is on SQLsvr1, which is in East US. storage1 is also in East US. Azure SQL Database auditing requires the storage account to be in the same region as the SQL server. storage1 is a StorageV2 account, which is compatible with Azure SQL Auditing.
Statement 2: When you enable auditing for SQLdb2, you can store the audit information to storage2.
No. SQLdb2 is on SQLsvr1, which is in East US. storage2 is in Central US. The storage account must be in the same region as the SQL server. storage2 is in a different region (Central US) than SQLsvr1 (East US).
Statement 3: When you enable auditing for SQLdb3, you can store the audit information to storage2.
No. SQLdb3 is on SQLsvr2, which is in West US. storage2 is in Central US. The storage account must be in the same region as the SQL server. storage2 is in a different region (Central US) than SQLsvr2 (West US).
Key takeaway for Azure SQL Database Auditing and Storage Accounts:
Region Co-location is Mandatory: The storage account used for storing Azure SQL Database audit logs must be in the same Azure region as the Azure SQL server or Managed Instance.
Storage Account Type: Generally, StorageV2 (general purpose v2) and BlobStorage account kinds are suitable for storing audit logs. FileStorage is not used for Azure SQL Auditing.
Resource Group is Irrelevant for Region Constraint: The resource group placement of the SQL server and storage account does not affect the region constraint for auditing. The critical factor is the Azure region of both resources.
You have SQL Server on an Azure virtual machine. The databases are written to nightly as part of a batch process.
You need to recommend a disaster recovery solution for the data.
The solution must meet the following requirements:
✑ Provide the ability to recover in the event of a regional outage.
✑ Support a recovery time objective (RTO) of 15 minutes.
✑ Support a recovery point objective (RPO) of 24 hours.
✑ Support automated recovery.
✑ Minimize costs.
What should you include in the recommendation?
Azure virtual machine availability sets
Azure Disk Backup
an Always On availability group
Azure Site Recovery
Final Answer: Azure Site Recovery
Let’s analyze each option against the disaster recovery requirements:
Azure virtual machine availability sets:
Regional outage recovery: No. Availability sets protect against hardware failures within a single datacenter, not regional outages.
RTO of 15 minutes: No. Availability sets do not directly address RTO in a disaster recovery scenario.
RPO of 24 hours: No. Availability sets do not directly address RPO in a disaster recovery scenario.
Automated recovery: No. Availability sets do not provide automated recovery in a disaster recovery scenario.
Minimize costs: Yes, availability sets are a basic feature and do not add significant cost beyond the VMs themselves.
Conclusion: Availability sets do not meet the requirements for regional disaster recovery, RTO, RPO, or automated recovery.
Azure Disk Backup:
Regional outage recovery: Yes. Azure Disk Backup, especially with Geo-redundant storage for backups, can allow recovery in a different region if the primary region fails.
RTO of 15 minutes: No. Restoring a VM and SQL Server from Azure Disk Backup can take significantly longer than 15 minutes, especially for large VMs and databases.
RPO of 24 hours: Yes. Azure Disk Backup can be configured to take backups frequently (e.g., daily or more often), easily meeting an RPO of 24 hours.
Automated recovery: No. While backup schedules are automated, the recovery process (restoring a VM and SQL Server) is not fully automated in the sense of automatic failover during a disaster. It requires manual steps or scripting.
Minimize costs: Yes. Azure Disk Backup is a relatively cost-effective backup solution.
Conclusion: Azure Disk Backup is cost-effective and easily meets the 24-hour RPO, but it does not protect against a regional outage, cannot meet the RTO of 15 minutes, and does not provide automated recovery.
An Always On availability group:
Regional outage recovery: Yes. By configuring an Always On Availability Group with synchronous or asynchronous replicas in a secondary Azure region, you can recover from a regional outage.
RTO of 15 minutes: Yes. Always On Availability Groups are designed for high availability and disaster recovery with fast failover times, typically within seconds to minutes, easily meeting the 15-minute RTO.
RPO of 24 hours: Yes. Always On Availability Groups, especially with synchronous replication (though often asynchronous is used for cross-region DR for performance reasons), can achieve a very low RPO, well within 24 hours, and practically close to zero data loss in many scenarios.
Automated recovery: Yes. Always On Availability Groups support automatic failover to a secondary replica in case of a primary replica failure, including regional outages (depending on configuration).
Minimize costs: No. Always On Availability Groups are the most expensive option. They require multiple VMs (at least two SQL Server VMs), SQL Server licensing for each VM, and potentially additional storage and networking costs.
Conclusion: Always On Availability Groups meet all functional requirements (regional outage recovery, RTO, RPO, automated recovery) but do not minimize costs.
Azure Site Recovery:
Regional outage recovery: Yes. Azure Site Recovery is specifically designed for disaster recovery, including regional outages. It replicates VMs to a secondary Azure region.
RTO of 15 minutes: Yes. Azure Site Recovery is designed to achieve low RTOs. With proper planning, runbooks, and pre-warming of standby resources, an RTO of 15 minutes is achievable.
RPO of 24 hours: Yes. Azure Site Recovery supports continuous replication, allowing for very low RPO, well within 24 hours, and typically in minutes. Point-in-time recovery is also available.
Automated recovery: Yes. Azure Site Recovery supports recovery plans that can automate the failover process, including VM startup order, script execution, and IP address updates, enabling automated recovery.
Minimize costs: No, but more cost-effective than Always On Availability Groups. Azure Site Recovery costs are incurred for replication, storage, and compute resources used in the recovery region only during testing or failover. You don’t need to pay for a fully licensed hot standby SQL Server VM continuously.
Conclusion: Azure Site Recovery meets all functional requirements (regional outage recovery, RTO, RPO, automated recovery) and is more cost-effective than Always On Availability Groups, although not as cheap as Azure Disk Backup.
Comparing and Choosing the Best Option:
Given the requirements and the need to “minimize costs” whenever possible, while still meeting all functional requirements, Azure Site Recovery is the most appropriate recommendation.
Always On Availability Groups are overkill and significantly more expensive for a 24-hour RPO.
Azure Disk Backup is cheaper but does not protect against a regional outage and fails to meet the 15-minute RTO and automated recovery requirements.
Availability Sets are irrelevant for regional DR.
Azure Site Recovery provides the best balance of meeting all the DR requirements (regional outage recovery, RTO of 15 mins, RPO of 24 hours, automated recovery) while being more cost-conscious than Always On Availability Groups. It’s not the absolute cheapest solution, but it effectively minimizes costs while still delivering the necessary DR capabilities.
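The comparison above can be summarized as a small decision table. The Python sketch below encodes the capability flags exactly as assessed in this answer (they come from the analysis above, not from any Azure API) and then picks the cheapest option that satisfies every functional requirement.

# Each entry: (option, regional DR, 15-min RTO, 24-h RPO, automated recovery,
#              relative cost rank - lower is cheaper).
options = [
    ("Availability sets",            False, False, False, False, 1),
    ("Azure Disk Backup",            False, False, True,  False, 2),
    ("Azure Site Recovery",          True,  True,  True,  True,  3),
    ("Always On availability group", True,  True,  True,  True,  4),
]

candidates = [o for o in options if all(o[1:5])]      # meet every functional requirement
recommendation = min(candidates, key=lambda o: o[5])  # then minimize cost
print(recommendation[0])                              # Azure Site Recovery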
You need to deploy resources to host a stateless web app in an Azure subscription.
The solution must meet the following requirements:
✑ Provide access to the full .NET framework.
✑ Provide redundancy if an Azure region fails.
✑ Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy two Azure virtual machines to two Azure regions, and you create a Traffic Manager profile.
Does this meet the goal?
Yes
No
The answer is Yes. Here’s why:
Stateless web app: The application being stateless is key. This means any of the VM instances can handle requests.
Azure Virtual Machines: Deploying to Azure Virtual Machines gives you full control over the operating system. This allows administrators to install the custom application dependencies, including those related to the full .NET Framework, fulfilling that requirement.
Two Azure Regions: Deploying to two Azure regions provides redundancy in case one region fails.
Traffic Manager: Azure Traffic Manager can be used to direct traffic to the available region, providing failover capabilities.
Therefore, deploying two Azure virtual machines in two regions with a Traffic Manager profile meets all the stated requirements for a stateless web app that needs access to the full .NET framework, redundancy, and administrator access to the operating system.
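To illustrate the failover piece, the sketch below builds a priority-routed Traffic Manager profile as a Python dictionary in ARM-template style, with one endpoint per region. The profile name, DNS labels, regions, and API version are hypothetical, and the property layout should be checked against the current Traffic Manager ARM reference before use.

import json

# Illustrative only: a priority-routed Traffic Manager profile fronting two
# hypothetical VMs, one per region, so traffic fails over if the primary
# region becomes unavailable.
tm_profile = {
    "type": "Microsoft.Network/trafficManagerProfiles",
    "apiVersion": "2018-08-01",
    "name": "webapp-tm",
    "location": "global",
    "properties": {
        "trafficRoutingMethod": "Priority",
        "dnsConfig": {"relativeName": "webapp-tm", "ttl": 30},
        "monitorConfig": {"protocol": "HTTP", "port": 80, "path": "/"},
        "endpoints": [
            {
                "name": "eastus-vm",
                "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                "properties": {"target": "webapp-eastus.eastus.cloudapp.azure.com",
                               "endpointLocation": "East US", "priority": 1},
            },
            {
                "name": "westus-vm",
                "type": "Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                "properties": {"target": "webapp-westus.westus.cloudapp.azure.com",
                               "endpointLocation": "West US", "priority": 2},
            },
        ],
    },
}

print(json.dumps(tm_profile, indent=2))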
You need to deploy resources to host a stateless web app in an Azure subscription.
The solution must meet the following requirements:
- Provide access to the full .NET framework.
- Provide redundancy if an Azure region fails.
- Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy an Azure virtual machine to two Azure regions, and you deploy an Azure Application Gateway.
Does this meet the goal?
Yes
No
The answer is No.
Here’s why:
Azure Virtual Machines: Deploying to Azure Virtual Machines gives administrators operating system-level access, so custom application dependencies and the full .NET Framework can be installed. These two requirements are met.
Stateless Web App: Because the app is stateless, any VM instance can serve requests, which simplifies failover between regions.
Azure Application Gateway is a regional service: An Application Gateway instance is deployed into a virtual network in a single Azure region. Its backend pool can reference endpoints in other regions, but the gateway itself remains a single regional entry point. If the region that hosts the Application Gateway fails, traffic can no longer reach either virtual machine.
Missing requirement: The solution therefore does not provide redundancy if an Azure region fails. Cross-region failover requires a global load-balancing service such as Azure Traffic Manager or Azure Front Door in front of the regional deployments.
Therefore, deploying virtual machines in two regions behind a single Azure Application Gateway does not meet all the stated requirements.
HOTSPOT
You plan to create an Azure Storage account that will host file shares. The shares will be accessed from on-premises applications that are transaction-intensive.
You need to recommend a solution to minimize latency when accessing the file shares. The solution must provide the highest-level of resiliency for the selected storage tier.
What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage tier:
Hot
Premium
Transaction optimized
Resiliency:
Geo-redundant storage (GRS)
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)
Answer:
Storage tier: Premium
Resiliency: Zone-redundant storage (ZRS)
Explanation:
Storage tier: Premium
Minimize Latency: For transaction-intensive applications accessing file shares, the Premium storage tier is the optimal choice. Premium file shares run on SSD (solid-state drive) storage and are designed for low latency and high IOPS (Input/Output Operations Per Second), whereas the standard Azure Files tiers run on HDD-based hardware.
Hot and Transaction optimized are not suitable here:
Hot is a standard (HDD-backed) tier intended for frequently accessed data; it still has higher latency than Premium.
Transaction optimized is also a standard (HDD-backed) Azure Files tier aimed at transaction-heavy workloads, but it cannot match the latency and IOPS of the SSD-based Premium tier, so it does not minimize latency for this scenario.
Resiliency: Zone-redundant storage (ZRS)
Highest Level of Resiliency for Premium: For Premium file shares, the available redundancy options are Locally-redundant storage (LRS) and Zone-redundant storage (ZRS).
Locally-redundant storage (LRS): Replicates your data three times within a single physical location in the primary region. It’s the lowest-cost redundancy option and protects against server rack and drive failures.
Zone-redundant storage (ZRS): Replicates your data synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS provides high availability by protecting against datacenter failures within a region.
Geo-redundant storage (GRS): Geo-redundant storage replicates your data to a secondary region that is hundreds of miles away from the primary region. While GRS offers the highest level of data durability and protection against regional disasters, it is not available for Premium file shares.
Why ZRS is the highest resiliency for Premium: Since GRS is not an option for Premium file shares, Zone-redundant storage (ZRS) becomes the highest level of resiliency available for the Premium tier. ZRS provides better resiliency than LRS by protecting against availability zone failures, which is more robust than just single datacenter protection offered by LRS.
In summary:
To minimize latency for transaction-intensive workloads, Premium storage tier is necessary. For the highest level of resiliency available within the Premium tier for file shares, Zone-redundant storage (ZRS) should be selected because GRS is not supported for Premium file shares.
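For reference, a premium file share is created in a FileStorage-kind storage account with a Premium_ZRS SKU. The sketch below expresses that as a Python dictionary in ARM-template style; the account name, region, and API version are placeholders, and the chosen region must be one where ZRS for premium file shares is available.

import json

# Illustrative ARM-style definition of a storage account that supports
# premium (SSD-backed) file shares with zone-redundant storage. Premium file
# shares require the FileStorage account kind, and the redundancy options for
# that kind are limited to Premium_LRS and Premium_ZRS.
premium_files_account = {
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2023-01-01",
    "name": "contosofilespremium",   # hypothetical name
    "location": "eastus",            # must be a region that offers ZRS for premium files
    "kind": "FileStorage",
    "sku": {"name": "Premium_ZRS"},
}

print(json.dumps(premium_files_account, indent=2))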
You need to deploy resources to host a stateless web app in an Azure subscription.
The solution must meet the following requirements:
- Provide access to the full .NET framework.
- Provide redundancy if an Azure region fails.
- Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy an Azure virtual machine scale set that uses autoscaling.
Does this meet the goal?
Yes
No
Let’s examine the requirements and evaluate whether deploying an Azure virtual machine scale set (VMSS) with autoscaling meets them.
Requirements:
Provide access to the full .NET framework:
Azure Virtual Machine Scale Sets are based on Azure Virtual Machines. You can choose the operating system for the VMs in the scale set, including Windows Server.
On Windows Server, you can install and run applications that require the full .NET Framework.
Yes, VMSS can provide access to the full .NET framework.
Provide redundancy if an Azure region fails:
Azure Virtual Machine Scale Sets are designed for high availability and scalability within a single Azure region.
A single VMSS deployment is confined to a specific Azure region. If the entire Azure region experiences an outage, the VMSS and the web app hosted on it will be unavailable.
To achieve redundancy across Azure regions, you would need to deploy multiple VMSSs in different Azure regions and use a service like Azure Traffic Manager or Azure Front Door to distribute traffic and provide failover capabilities.
No, deploying a single Azure virtual machine scale set, even with autoscaling, does not provide redundancy in case of an Azure region failure. The solution description only mentions deploying an Azure virtual machine scale set, implying a single deployment within a single region.
Grant administrators access to the operating system to install custom application dependencies:
Azure Virtual Machine Scale Sets are built upon Azure Virtual Machines. You can configure the VM instances within a VMSS just like individual VMs.
Administrators can access the operating system of the VM instances in a VMSS using methods like RDP (for Windows) or SSH (for Linux) and install custom application dependencies.
Yes, VMSS grants administrators access to the operating system to install custom application dependencies.
Evaluation of the Solution:
The solution of deploying a single Azure virtual machine scale set with autoscaling meets two out of the three requirements: providing access to the full .NET framework and granting administrator access to the OS. However, it fails to meet the crucial requirement of providing redundancy if an Azure region fails. A single VMSS is region-bound and will be affected by a regional outage.
To achieve regional redundancy, you would need a more complex setup involving multiple VMSS deployments across different regions and a global load balancing solution, which is not described in the proposed solution.
Conclusion:
The proposed solution, as described, does not fully meet the goal because it does not provide redundancy in the event of an Azure region failure. A single VMSS, even with autoscaling, is not designed for cross-region disaster recovery.
Final Answer: No
You plan to move a web app named App1 from an on-premises datacenter to Azure.
App1 depends on a custom COM component that is installed on the host server.
You need to recommend a solution to host App1 in Azure. The solution must meet the following requirements:
✑ App1 must be available to users if an Azure datacenter becomes unavailable.
✑ Costs must be minimized.
What should you include in the recommendation?
A. In two Azure regions, deploy a load balancer and a web app.
B. In two Azure regions, deploy a load balancer and a virtual machine scale set.
C. Deploy a load balancer and a virtual machine scale set across two availability zones.
D. In two Azure regions, deploy an Azure Traffic Manager profile and a web app.
Final Answer: C. Deploy a load balancer and a virtual machine scale set across two availability zones.
App1 depends on a custom COM component installed on the host server, which cannot be installed on Azure App Service (web apps), so options A and D are eliminated. The availability requirement is only that App1 remain available if an Azure datacenter becomes unavailable; availability zones within a single region provide datacenter-level resiliency, so paying for a second region (option B) is unnecessary. A load balancer in front of a virtual machine scale set spread across two availability zones therefore meets the availability requirement while minimizing costs.
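As a small illustration of the zone-spreading part of option C, the fragment below shows, as a Python dictionary in ARM-template style, the properties a virtual machine scale set uses to span availability zones. The name, API version, SKU, and zone choice are hypothetical, and the rest of the definition (image reference, network profile, and the load balancer in front) is omitted.

# Illustrative, partial ARM-style definition: only the properties relevant to
# zone spreading are shown; a real scale set also needs an image reference,
# a network profile, and a load balancer in front of it.
vmss_zone_fragment = {
    "type": "Microsoft.Compute/virtualMachineScaleSets",
    "apiVersion": "2023-03-01",
    "name": "app1-vmss",
    "location": "eastus2",
    "zones": ["1", "2"],                  # spread instances across two availability zones
    "sku": {"name": "Standard_D2s_v3", "capacity": 2},
    "properties": {"zoneBalance": True},  # keep instance counts balanced across zones
}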
Your company has the infrastructure shown in the following table.
Location Resource
Azure
* Azure subscription named Subscription1
* 20 Azure web apps
On-premises datacenter
* Active Directory domain
* Server running Azure AD Connect
* Linux computer named Server1
The on-premises Active Directory domain syncs to Azure Active Directory (Azure AD).
Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
You plan to migrate Server1 to a virtual machine in Subscription1.
A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.
You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.
What should you include in the recommendation?
Azure AD Domain Services (Azure AD DS)
an Azure VPN gateway
the Active Directory Domain Services role on a virtual machine
Azure AD Application Proxy
The core requirement is to allow the migrated application (App1) to continue using LDAP queries for user identity verification, but without violating the security policy that prohibits Azure resources from accessing the on-premises network.
Let’s evaluate each option:
Azure AD Domain Services (Azure AD DS):
Functionality: Azure AD DS provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication within Azure. It’s essentially a domain controller hosted in Azure but managed by Microsoft.
LDAP Support: Azure AD DS explicitly supports LDAP queries. Applications running on VMs joined to an Azure AD DS domain can perform LDAP queries against the Azure AD DS domain controllers.
Security Policy Compliance: By using Azure AD DS, the VM hosting App1 will authenticate against the Azure AD DS domain controllers hosted within Azure. This completely avoids accessing the on-premises Active Directory. Therefore, it fully complies with the security policy.
Suitability: This is a strong candidate as it directly addresses the need for LDAP functionality within Azure while adhering to the security policy.
An Azure VPN gateway:
Functionality: Azure VPN Gateway creates a secure, encrypted connection between your Azure virtual network and your on-premises network.
LDAP Support: With a VPN connection, the VM in Azure could directly query the on-premises Active Directory domain controllers via LDAP.
Security Policy Compliance: This option directly violates the security policy. A VPN gateway enables access from Azure resources to the on-premises network, which is explicitly prohibited.
Suitability: This option is incorrect because it violates the core security requirement.
The Active Directory Domain Services role on a virtual machine:
Functionality: This involves deploying a Windows Server virtual machine in Azure and installing the Active Directory Domain Services role to create a new domain controller within Azure.
LDAP Support: Applications on VMs joined to this newly created Azure-based domain can perform LDAP queries against this Azure-hosted domain controller.
Security Policy Compliance: A domain controller hosted on an Azure VM keeps LDAP traffic inside Azure, so this approach does not itself violate the security policy. However, a brand-new domain created on that VM would not contain the existing on-premises user accounts, and obtaining those accounts would require replication with the on-premises domain, which needs exactly the kind of connectivity the policy prohibits.
Suitability: This option can provide LDAP inside Azure, but it cannot verify the existing user identities without replicating from on-premises, and it adds the overhead of managing the domain controller VM yourself.
Azure AD Application Proxy:
Functionality: Azure AD Application Proxy is used to publish on-premises web applications to the internet through Azure AD authentication. It provides secure remote access to web applications.
LDAP Support: Azure AD Application Proxy is not related to enabling LDAP queries for an application running within Azure. It’s for providing reverse proxy and authentication for web applications, not for backend LDAP authentication for applications themselves.
Security Policy Compliance: Irrelevant in the context of this security policy, as it doesn’t address the LDAP query requirement for App1 within Azure.
Suitability: This option is incorrect as it does not address the requirement of LDAP queries for App1.
Comparison and Best Recommendation:
Comparing the viable options, Azure AD Domain Services (Azure AD DS) is the better recommendation. It is a managed service, it supports LDAP, and, critically, it automatically contains the user identities that are already synchronized from the on-premises Active Directory into Azure AD, so App1 can verify the same users without any connection to the on-premises network.
Azure AD DS offers a managed service, reducing operational overhead for managing domain controllers. It’s designed for Azure environments and integrates well with Azure AD.
AD DS role on a VM requires managing the VM and the domain controller infrastructure yourself, which adds complexity and management overhead.
Considering the need for a recommendation that is efficient and aligned with Azure best practices for cloud-based identity and access management, Azure AD Domain Services (Azure AD DS) is the most appropriate and recommended solution.
Final Answer: Azure AD Domain Services (Azure AD DS)
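To illustrate the kind of LDAP verification App1 performs, here is a minimal Python sketch using the ldap3 library against an Azure AD DS managed domain. The domain name, bind account, and search base are hypothetical placeholders, and in practice Azure AD DS should be queried over secure LDAP (LDAPS) with a certificate configured on the managed domain.

from ldap3 import Server, Connection, ALL

# Hypothetical Azure AD DS managed domain and service account.
server = Server("ldaps://aadds.contoso.com", port=636, use_ssl=True, get_info=ALL)
conn = Connection(server,
                  user="svc-ldap@aadds.contoso.com",
                  password="<service-account-password>",   # placeholder
                  auto_bind=True)

# Verify that a user exists in the managed domain (identities are synced from
# Azure AD, so no call to the on-premises network is needed).
conn.search(search_base="dc=aadds,dc=contoso,dc=com",
            search_filter="(sAMAccountName=jdoe)",
            attributes=["cn", "userPrincipalName"])
print(conn.entries)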
You need to design a solution that will execute custom C# code in response to an event routed to Azure Event Grid.
The solution must meet the following requirements:
✑ The executed code must be able to access the private IP address of a Microsoft SQL Server instance that runs on an Azure virtual machine.
Costs must be minimized.
What should you include in the solution?
Azure Logic Apps in the integrated service environment
Azure Functions in the Dedicated plan and the Basic Azure App Service plan
Azure Logic Apps in the Consumption plan
Azure Functions in the Consumption plan
Let’s break down the requirements and evaluate each option:
Requirements:
Execute custom C# code: The solution must be capable of running custom C# code.
Access private IP of SQL Server VM: The code needs to connect to a SQL Server instance using its private IP address within an Azure Virtual Network.
Minimize costs: The solution should be cost-effective.
Option Analysis:
Azure Logic Apps in the integrated service environment (ISE):
Custom C# code: Logic Apps are primarily workflow orchestration services. While you can execute code within a Logic App, it’s not directly custom C# code. You would typically call an Azure Function or use inline code actions, which are more for expressions and data manipulation than complex C# logic.
Private IP access: Logic Apps in an ISE run within your Azure Virtual Network. This means they have direct access to resources within that VNet, including VMs with private IPs like the SQL Server VM.
Cost minimization: ISE is the most expensive deployment option for Logic Apps. It is designed for large enterprises and mission-critical workloads, and it incurs a fixed cost regardless of usage. This option does not minimize costs.
Azure Functions in the Dedicated plan and the Basic Azure App Service plan:
Custom C# code: Azure Functions fully support writing and executing custom C# code.
Private IP access: When Azure Functions run in a Dedicated App Service plan, they can be integrated into an Azure Virtual Network. VNet integration allows the Function App to access resources within the VNet using private IPs, including the SQL Server VM.
Cost minimization: A Dedicated (App Service) plan has a fixed monthly cost regardless of the number of executions, so it is not as cheap as a purely serverless plan. However, the Basic tier is a low-cost Dedicated tier that supports regional VNet integration, making it the least expensive of the listed options that can actually reach the private IP address.
Azure Logic Apps in the Consumption plan:
Custom C# code: Similar to ISE, Logic Apps in the Consumption plan are workflow services, not direct C# code execution environments. You would likely need to integrate with Azure Functions to execute custom C# code.
Private IP access: Logic Apps in the Consumption (multi-tenant) plan do not support virtual network integration, so a Consumption logic app cannot reach the SQL Server VM on its private IP address directly. Workarounds such as the on-premises data gateway add extra components and cost; native VNet connectivity requires Logic Apps Standard or an integrated service environment.
Cost minimization: Consumption plan Logic Apps are billed per execution, which is attractive for event-driven scenarios, but the extra components needed to reach a private IP would offset those savings, and the plan still would not execute custom C# code directly.
Azure Functions in the Consumption plan:
Custom C# code: Azure Functions fully support writing and executing custom C# code.
Private IP access: No. Azure Functions in the classic Consumption plan do not support virtual network integration, so a Consumption-plan function cannot reach resources on private IP addresses inside a virtual network, including the SQL Server VM. VNet integration for Functions requires the Premium plan, a Dedicated (App Service) plan, or the newer Flex Consumption plan.
Cost minimization: The Consumption plan is the cheapest hosting option for event-driven code, but the price advantage is irrelevant here because the plan cannot satisfy the private IP requirement.
Best Option based on Requirements and Cost:
Considering all factors, Azure Functions in the Dedicated plan and the Basic Azure App Service plan is the most suitable recommendation.
Azure Functions directly supports custom C# code and can be triggered by Azure Event Grid.
With regional VNet integration, which the Basic App Service plan supports, the function app can securely reach the SQL Server VM on its private IP address.
The Consumption plan would be cheaper, but it cannot access the private IP address, so it fails a mandatory requirement. Logic Apps are workflow orchestration services rather than C# execution environments, the Consumption Logic App also lacks VNet integration, and the integrated service environment is by far the most expensive option. Among the options that meet every functional requirement, the Basic Dedicated plan minimizes costs.
Final Answer: Azure Functions in the Dedicated plan and the Basic Azure App Service plan
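The scenario calls for custom C# code, but as a language-neutral sketch of the connectivity requirement, the Python fragment below shows an event handler writing to SQL Server by its private IP address. The IP address, database, credentials, and table are placeholders, and the connection only succeeds when the function app has regional VNet integration into the virtual network that hosts the SQL Server VM.

import pyodbc

# Hypothetical private IP of the SQL Server VM inside the VNet; reachable only
# because the function app is integrated into that virtual network.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=10.0.1.4,1433;"          # private IP : port (placeholder)
    "DATABASE=AppDb;"
    "UID=appuser;PWD=<secret>;"
    "Encrypt=yes;TrustServerCertificate=yes;"
)

def handle_event(event_payload: dict) -> None:
    """Write the Event Grid payload into the database over the private IP."""
    with pyodbc.connect(CONN_STR, timeout=10) as cnxn:
        cursor = cnxn.cursor()
        cursor.execute("INSERT INTO dbo.Events (Subject) VALUES (?)",
                       event_payload.get("subject", ""))
        cnxn.commit()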
You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.
A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server.
Users access the shared files on VM1 from all the offices.
You need to recommend a solution to ensure that the users can access the shares files as quickly as possible if the Toronto branch office is inaccessible.
What should you include in the recommendation?
a Recovery Services vault and Azure Backup
an Azure file share and Azure File Sync
Azure blob containers and Azure File Sync
a Recovery Services vault and Windows Server Backup
The goal is to provide users with fast access to shared files, even if the Toronto branch office (where VM1 file server is located) is inaccessible. This implies the need for a solution that replicates the file shares and allows access from alternative locations when Toronto is down.
Let’s evaluate each option:
a Recovery Services vault and Azure Backup:
Functionality: Azure Backup, in conjunction with a Recovery Services vault, is used for backing up and restoring data. It is primarily a data protection solution, not a solution for providing continuous file access during a site outage.
Fast Access if Toronto Inaccessible: No. If Toronto is inaccessible, users would need to initiate a restore process from the Recovery Services vault to access the files, which is not a fast or seamless access method for users during an outage. Backup is for recovery, not continuous availability.
Suitability: This option is not designed for providing fast access to files during a branch office outage.
an Azure file share and Azure File Sync:
Functionality: Azure File Share is a fully managed cloud file share accessible via SMB protocol. Azure File Sync is a service that can synchronize on-premises file servers with Azure File Shares.
Fast Access if Toronto Inaccessible: Yes. If the Toronto branch office becomes inaccessible, users can be redirected to access the Azure File Share directly. The Azure File Share is hosted in Azure and is independent of the Toronto office’s availability. Users from other offices can access the files through the internet connection to Azure. Additionally, Azure File Sync can be used to cache the Azure File Share content on file servers in other branch offices for even faster local access if required.
Suitability: This option directly addresses the requirement for fast file access during a Toronto office outage. Azure File Share provides a cloud-based, always-available copy of the files.
Azure blob containers and Azure File Sync:
Functionality: Azure Blob containers are object storage, designed for storing large amounts of unstructured data. Azure File Sync is designed to synchronize on-premises file servers with Azure File Shares, not Blob containers.
Fast Access if Toronto Inaccessible: No. Azure Blob containers are not directly accessed as file shares by users using standard file protocols (like SMB). While data could be in Blob storage, it’s not a solution for providing fast file share access to users during an outage. Azure File Sync is not compatible with Blob containers in this scenario.
Suitability: This option is not a valid or practical solution for providing file share access.
a Recovery Services vault and Windows Server Backup:
Functionality: Windows Server Backup is an on-premises backup tool. Combined with a Recovery Services vault in Azure, it provides offsite backups.
Fast Access if Toronto Inaccessible: No. Similar to the “Azure Backup” option, this is a backup and restore solution. It does not provide fast or continuous file access during an outage. Users would need to restore from backup, which is not designed for immediate access.
Suitability: This option is also not designed for providing fast access to files during a branch office outage.
Conclusion:
The most suitable recommendation to ensure users can access shared files quickly even if the Toronto branch office is inaccessible is an Azure file share and Azure File Sync. This solution provides a cloud-based, highly available copy of the files (Azure File Share) that can be accessed from any location, including other branch offices, when the primary file server in Toronto is unavailable. Azure File Sync can further enhance performance by caching the Azure File Share content on-premises in other offices if needed.
Final Answer: an Azure file share and Azure File Sync
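As a small illustration that the cloud endpoint is reachable independently of the Toronto office, the Python sketch below reads a file directly from the Azure file share using the azure-storage-file-share SDK. The connection string, share name, and file path are placeholders.

from azure.storage.fileshare import ShareFileClient

# Placeholders: storage account connection string, share, and file path.
CONN_STR = "<storage-account-connection-string>"

file_client = ShareFileClient.from_connection_string(
    conn_str=CONN_STR,
    share_name="corp-files",
    file_path="finance/q4-report.xlsx",
)

# Because the share lives in Azure, this read works from any office, even
# while the Toronto file server (VM1) is unreachable.
data = file_client.download_file().readall()
print(f"Downloaded {len(data)} bytes")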
HOTSPOT
You have an Azure subscription named Subscription1 that is linked to a hybrid Azure Active Directory (Azure AD) tenant.
You have an on-premises datacenter that does NOT have a VPN connection to Subscription1. The datacenter contains a computer named Server1 that has Microsoft SQL Server 2016 installed. Server1 is prevented from accessing the internet.
An Azure logic app named LogicApp1 requires write access to a database on Server1.
You need to recommend a solution to provide LogicApp1 with the ability to access Server1.
What should you recommend deploying on-premises and in Azure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
On-premises:
A Web Application Proxy for Windows Server
An Azure AD Application Proxy connector
An On-premises data gateway
Hybrid Connection Manager
Azure:
A connection gateway resource
An Azure Application Gateway
An Azure Event Grid domain
An enterprise application
The scenario requires secure access from an Azure Logic App to an on-premises SQL Server that is not directly exposed to the internet and lacks a VPN connection to Azure. Let’s evaluate the options for both on-premises and Azure components.
On-premises Options:
A Web Application Proxy for Windows Server: This is designed for publishing web applications to the internet and providing reverse proxy and pre-authentication. It is not suitable for connecting Logic Apps to on-premises databases.
An Azure AD Application Proxy connector: Similar to Web Application Proxy, this is for publishing web applications for remote access with Azure AD authentication. Not relevant for database connectivity in this scenario.
An On-premises data gateway: This is specifically designed for securely connecting Azure services like Logic Apps, Power Automate, Power BI, and Azure Analysis Services to on-premises data sources. It acts as a bridge, allowing Azure services to access on-premises resources without requiring direct internet exposure or VPN tunnels for each service. This is the most appropriate on-premises component.
Hybrid Connection Manager: Hybrid Connections, part of Azure Relay, allow secure point-to-point connections between applications in Azure and on-premises. While it can provide connectivity, the On-premises data gateway is the recommended and more streamlined solution for Logic Apps to connect to on-premises data sources like SQL Server because it is specifically designed for this purpose and integrates directly with Logic App connectors.
Azure Options:
A connection gateway resource: After the On-premises data gateway software is installed on a server in the datacenter, a corresponding gateway resource (an "on-premises data gateway" resource) must be created in Azure and linked to that installation. Logic App connections, such as the SQL Server connection used by LogicApp1, then reference this gateway resource so that traffic is routed through the gateway to Server1. This is the required Azure-side component when using the On-premises data gateway.
An Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and reverse proxy for web applications. It is not relevant for connecting Logic Apps to on-premises databases.
An Azure Event Grid domain: Azure Event Grid is an event routing service. It is not related to establishing data connectivity for Logic Apps to on-premises SQL Server.
An enterprise application: Enterprise applications in Azure AD represent applications for identity management purposes. While authentication might be a consideration in some scenarios, it’s not the primary component for establishing the data connectivity bridge itself in this context. The On-premises data gateway handles the secure connection, and the “connection” within Logic Apps manages the details of using that gateway.
Justification for choosing “On-premises data gateway” and “A connection gateway resource”:
The On-premises data gateway is the Microsoft-recommended solution for securely connecting Azure services, including Logic Apps, to on-premises data sources such as SQL Server that sit behind a firewall, have no internet access, and have no VPN to Azure. On the Azure side, a connection gateway resource is created to represent the installed gateway, and the Logic App's SQL Server connection is configured to use that gateway resource to reach the database on Server1.
Therefore, the combination of "An On-premises data gateway" on-premises and "A connection gateway resource" in Azure provides the required solution.
Final Answer:
Answer Area
On-premises:
An On-premises data gateway
Azure:
A connection gateway resource
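For orientation only, the sketch below shows roughly where the gateway resource ID appears in a Logic Apps SQL connection, expressed as a Python dictionary in ARM-template style. The resource names, subscription and resource group placeholders, and the exact parameter names are assumptions; in practice the connection is usually created from the Logic App designer or exported as a template and then reused.

import json

# Rough ARM-style sketch (property names are assumptions; verify against an
# exported template). The key point: the connection's parameters reference the
# connection gateway resource that represents the installed on-premises data gateway.
sql_connection = {
    "type": "Microsoft.Web/connections",
    "apiVersion": "2016-06-01",
    "name": "sqlserver-server1",
    "location": "eastus",
    "properties": {
        "displayName": "Server1 via on-premises data gateway",
        "api": {"id": "/subscriptions/<sub-id>/providers/Microsoft.Web/"
                      "locations/eastus/managedApis/sql"},
        "parameterValues": {
            "server": "Server1",
            "database": "<database-name>",
            "authType": "windows",
            "gateway": {"id": "/subscriptions/<sub-id>/resourceGroups/<rg>/"
                              "providers/Microsoft.Web/connectionGateways/<gateway-name>"},
        },
    },
}

print(json.dumps(sql_connection, indent=2))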
Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.
You plan to move all the virtual machines to Azure.
You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.
What should you use to make the recommendation?
Azure Cost Management
Azure Pricing calculator
Azure Migrate
Azure Advisor
To determine the optimal number and size of Azure virtual machines for migrating 300 on-premises VMware VMs while minimizing administrative effort, you need a tool that can assess the existing VMware environment and provide Azure VM sizing recommendations. Let’s evaluate each option:
Azure Cost Management: Azure Cost Management is a tool for monitoring, managing, and optimizing Azure spending. It helps you analyze costs, set budgets, and identify cost-saving opportunities for existing Azure resources. It does not directly assess on-premises VMware environments to recommend Azure VM sizes for migration. While it can inform cost considerations after you’ve chosen VM sizes, it doesn’t help in determining those sizes for migration.
Azure Pricing calculator: The Azure Pricing calculator is a tool to estimate the cost of Azure services. You can manually configure different Azure VM sizes and tiers to get cost estimates. However, it requires you to manually input the specifications (like VM size, OS, etc.) and does not automatically analyze your on-premises VMware environment to provide sizing recommendations. It’s useful for cost estimation once you have decided on the VM sizes, but not for determining the sizes initially based on on-premises workload characteristics.
Azure Migrate: Azure Migrate is a service specifically designed to simplify, guide, and accelerate your migration to Azure. It provides tools for:
Discovery: Discovering on-premises VMware, Hyper-V VMs, and physical servers.
Assessment: Assessing discovered VMs for Azure readiness and providing Azure VM size recommendations based on performance data and compatibility. Azure Migrate can analyze the CPU, memory, and disk utilization of your VMware VMs to suggest appropriate Azure VM sizes.
Migration: Tools to migrate VMs to Azure.
Azure Migrate directly addresses the need to recommend Azure VM sizes based on your existing VMware environment while minimizing administrative effort through automated discovery and assessment.
Azure Advisor: Azure Advisor analyzes your existing Azure resources and provides recommendations to optimize cost, security, reliability, operational excellence, and performance. It does not assess on-premises environments for migration planning. Azure Advisor helps optimize resources already in Azure, not for sizing recommendations during migration from on-premises.
Conclusion:
Azure Migrate is the most appropriate tool to use for recommending the number and size of Azure virtual machines needed to migrate your 300 VMware VMs to Azure while minimizing administrative effort. It is specifically designed for migration assessments and provides Azure VM size recommendations based on analyzing your on-premises VM configurations and performance data. The other options are not designed for this specific purpose.
Final Answer: Azure Migrate
You plan to provision a High Performance Computing (HPC) cluster in Azure that will use a third-party scheduler.
You need to recommend a solution to provision and manage the HPC cluster node.
What should you include in the recommendation?
Azure Lighthouse
Azure CycleCloud
Azure Purview
Azure Automation
The correct answer is Azure CycleCloud.
Here’s why:
Azure CycleCloud is specifically designed for creating, managing, operating, and optimizing High Performance Computing (HPC) clusters in Azure. It’s tailored to handle the complexities of HPC environments, including:
Provisioning HPC Nodes: CycleCloud automates the deployment and configuration of virtual machines that serve as compute nodes in your HPC cluster. It can handle different VM sizes, operating systems, and networking configurations suitable for HPC workloads.
Third-Party Scheduler Integration: Crucially, CycleCloud is built to work with various schedulers, including popular third-party options like Slurm, PBS Pro, LSF, and Grid Engine. It understands how to integrate with these schedulers to manage job submissions and node allocation within the cluster. You can configure CycleCloud to deploy and manage the scheduler itself or integrate with an existing scheduler setup.
Cluster Lifecycle Management: CycleCloud goes beyond just provisioning. It handles the entire lifecycle of the cluster, including:
Scaling: Dynamically adding or removing nodes based on workload demands and scheduler requirements.
Monitoring: Providing visibility into cluster health and performance.
Termination: Gracefully shutting down the cluster when it’s no longer needed.
Infrastructure as Code: CycleCloud uses declarative configuration files to define your cluster, allowing you to version control and easily reproduce your HPC environment.
Let’s look at why the other options are less suitable:
Azure Lighthouse: Azure Lighthouse is for delegated resource management across multiple tenants. It’s primarily used by Managed Service Providers (MSPs) to manage Azure resources for their customers. While it’s related to management, it’s not directly focused on provisioning and managing HPC cluster nodes within a single tenant. It’s more about who can manage resources, not how to build and run an HPC cluster.
Azure Purview: Azure Purview is a data governance service. It helps you discover, understand, and govern your data assets across your organization. While data is crucial for HPC, Purview is not involved in provisioning or managing the compute infrastructure (HPC nodes) itself. It focuses on data cataloging, lineage, and security, not cluster orchestration.
Azure Automation: Azure Automation is a general-purpose automation service. You could potentially use Azure Automation to script the deployment of VMs and configure them as HPC nodes. However, it’s a much more manual and complex approach compared to using CycleCloud. Azure Automation lacks the HPC-specific features and scheduler integrations that CycleCloud provides out-of-the-box. You would need to write a significant amount of custom scripting to achieve the same level of functionality as CycleCloud, and it would be less robust and harder to manage for HPC cluster lifecycle management.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.
The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend using an Azure policy initiative to enforce the location.
Does this meet the goal?
Yes
No
The correct answer is Yes.
Here’s why:
Azure Policy Initiatives for Location Enforcement: Azure Policy Initiatives (formerly called Policy Sets) are a powerful tool for managing and enforcing organizational standards and compliance at scale in Azure. One of the most common and effective uses of Azure Policy is to control resource locations.
How Azure Policy Enforces Location: You can create an Azure Policy (and include it in an initiative) that specifically restricts the locations where resources can be deployed within a subscription, resource group, or management group. For example, you can define a policy that only allows resources to be created in “East US 2” and “West US 2” regions.
Meeting the Regulatory Requirement: The company has a regulatory requirement to deploy App Service instances only to specific Azure regions. By implementing an Azure Policy Initiative that includes a policy to restrict allowed locations for App Service and Azure SQL Database resources, you directly address this requirement. When a deployment is attempted in a non-compliant region, Azure Policy will prevent the deployment from succeeding, ensuring that the regulatory requirement is met.
Simultaneous Deployment and Same Region: While Azure Policy itself doesn’t orchestrate the deployment of App Service and SQL Database at the same time, it works seamlessly with any deployment method (ARM templates, Bicep, Azure CLI, PowerShell, etc.). When you attempt to deploy both App Service and Azure SQL database (simultaneously or not), the location policy will be evaluated during the deployment process. If either resource is specified to be deployed in a disallowed region, the policy will block the deployment. To ensure both App Service and SQL Database are in the same region, you would configure your deployment template or script to specify the same region for both resource types. The location policy will then ensure that this chosen region is within the allowed regions.
Why other options are less relevant (or not applicable in this context): (Though not explicitly asked in this specific question, understanding why other options from the initial HPC question are not relevant here is helpful)
Azure Lighthouse: Lighthouse is for delegated access management across tenants, not for location enforcement within a single tenant to meet regulatory requirements.
Azure CycleCloud: CycleCloud is for HPC cluster management. It’s not directly related to enforcing location policies for App Service and SQL Database deployments.
Azure Purview: Purview is for data governance and cataloging, not resource location enforcement.
Azure Automation: While you could use Azure Automation to check locations after deployment, or even as part of a more complex deployment script, Azure Policy is the native and recommended Azure service for proactively enforcing location constraints during deployment. Policy is much more efficient and integrated for this specific purpose.
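As an illustration of the enforcement mechanism described above, here is a minimal sketch of an "allowed locations" policy definition, built as a Python dictionary so it can be serialized to the JSON body of a policy definition. The display name and parameter wiring are illustrative; Azure also ships a built-in "Allowed locations" policy that implements the same rule and can simply be added to an initiative.

import json

# Illustrative Azure Policy definition (the classic "allowed locations"
# pattern): deny any resource whose location is not in the approved list.
allowed_locations_policy = {
    "properties": {
        "displayName": "Allowed locations for regulated workloads",
        "mode": "Indexed",
        "parameters": {
            "allowedLocations": {
                "type": "Array",
                "metadata": {"description": "Regions where resources may be deployed."},
            }
        },
        "policyRule": {
            "if": {"not": {"field": "location", "in": "[parameters('allowedLocations')]"}},
            "then": {"effect": "deny"},
        },
    }
}

print(json.dumps(allowed_locations_policy, indent=2))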
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.
The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend using the Regulatory compliance dashboard in Azure Security Center.
Does this meet the goal?
Yes
No
The correct answer is No.
Here’s why:
Azure Security Center Regulatory Compliance Dashboard’s Purpose: The Regulatory Compliance dashboard in Azure Security Center is designed to provide visibility and reporting on your Azure environment’s compliance posture against various regulatory standards and industry benchmarks (like PCI DSS, SOC 2, ISO 27001, Azure CIS, etc.).
What the Dashboard Does:
Assesses Compliance: It continuously assesses your Azure resources against the selected regulatory standards and security benchmarks.
Provides Insights: It shows you which controls are passing, failing, or need attention.
Offers Recommendations: It provides actionable recommendations to improve your compliance and security posture based on the identified issues.
Reporting: It generates reports on your compliance status.
What the Dashboard Does NOT Do:
Enforce Deployment Policies: The Regulatory Compliance dashboard does not actively prevent deployments of resources in non-compliant regions. It’s a monitoring and reporting tool, not an enforcement mechanism.
Control Resource Location During Deployment: It does not have the capability to block or redirect deployments based on region. It identifies compliance issues after resources are deployed.
Why it Doesn’t Meet the Goal: The company’s requirement is to ensure App Service instances are deployed only to specific Azure regions. The Regulatory Compliance dashboard can tell you if resources are deployed in compliant regions after they are deployed, but it cannot prevent deployments to non-compliant regions in the first place. Therefore, it does not meet the regulatory requirement of ensuring deployment only in specific regions.
Better Solution (as seen in the previous question): As discussed in the previous question, Azure Policy is the correct tool for enforcing location restrictions during resource deployment. Azure Policy can be configured to deny the creation of resources in regions that are not allowed, thus directly meeting the regulatory requirement.
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.
The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend using an Azure policy to enforce the resource group location.
Does this meet the goal?
Yes
No
The correct answer is No.
Here’s why:
Resource Group Location vs. Resource Location: It’s crucial to understand the difference between resource group location and resource location in Azure.
Resource Group Location: The location you specify when creating a resource group is primarily for metadata storage and management operations related to the resource group itself. It doesn’t directly dictate where the resources you deploy within that resource group will be located.
Resource Location: Each Azure resource (like App Service, Azure SQL Database, VM, etc.) has its own independent location setting. This is the location where the actual service and its data are physically hosted.
Azure Policy for Resource Group Location: You can use Azure Policy to enforce the location where resource groups can be created. For example, you can create a policy that only allows resource groups to be created in “East US 2” and “West US 2”.
Why Enforcing Resource Group Location Doesn’t Meet the Goal:
No Control over Resource Location: Enforcing the resource group location does not automatically enforce the location of the resources deployed within that resource group. You can create a resource group in “East US” and then deploy an App Service in “West US” and an Azure SQL database in “Central US” within that “East US” resource group.
Regulatory Requirement on Resource Location: The regulatory requirement is about the location of the App Service instances (and by extension, the Azure SQL databases), not just the resource group. Enforcing resource group location alone does not guarantee that these resources will be in the required specific regions.
What would meet the goal (and was discussed in a previous question): To meet the goal, you need to use Azure Policy to enforce the location of the App Service and Azure SQL Database resources themselves. You would create policies that specify the allowed locations for resource types like Microsoft.Web/sites (App Service) and Microsoft.Sql/servers (SQL Database servers and databases).
In summary: While enforcing resource group location is a form of location control in Azure, it is not the correct or effective way to meet the regulatory requirement of ensuring App Service instances and Azure SQL databases are deployed to specific Azure regions. It only controls where the resource group’s metadata is stored, not the location of the actual services within it. Therefore, the solution of using Azure Policy to enforce resource group location does not meet the goal.
Final Answer: No
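To make the distinction concrete, the sketch below shows a policy rule targeting the locations of the resources themselves (the Microsoft.Web/sites and Microsoft.Sql/servers types mentioned above) rather than the resource group location. The structure is a minimal illustration and assumes an allowedLocations array parameter like the one in the earlier sketch.

import json

# Illustrative policy rule that constrains where App Service apps and SQL
# servers may be created, independent of the resource group's location.
resource_location_policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "in": ["Microsoft.Web/sites", "Microsoft.Sql/servers"]},
            {"not": {"field": "location", "in": "[parameters('allowedLocations')]"}},
        ]
    },
    "then": {"effect": "deny"},
}

print(json.dumps(resource_location_policy_rule, indent=2))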
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.
The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.
Does this meet the goal?
Yes
No
The correct answer is No.
Here’s why:
Resource Groups for Organization, Not Location Enforcement: Creating resource groups based on locations is a good organizational practice. It helps in logically grouping resources deployed in a specific region, making management and billing easier to understand. However, resource groups themselves do not enforce the location of the resources deployed within them.
Resource Locks for Protection, Not Location Control: Resource locks are used to protect resources from accidental deletion or modification. They can be applied at the resource group level or individual resource level. Resource locks provide different levels of protection (CanNotDelete, ReadOnly). However, resource locks do not control or enforce the location where resources are deployed. They only come into play after resources have been deployed.
Why this Solution Fails to Meet the Goal:
No Location Enforcement During Deployment: This solution does not prevent a user from deploying an App Service or Azure SQL database to a region that is not one of the specific allowed regions. Someone could create a resource group named “EastUS2-Resources” (suggesting East US 2 location) but still deploy an App Service within it to West US or any other region.
Organizational, Not Enforceable: Creating resource groups by location is purely an organizational and naming convention. It’s helpful for humans to understand the intended location, but it’s not enforced by Azure itself.
Locks are Post-Deployment: Resource locks only prevent actions after the resources are deployed. They have no bearing on the initial deployment location choice.
The Regulatory Requirement is about Enforcement: The company has a regulatory requirement to deploy App Service instances only to specific regions. This implies a need for a mechanism that actively prevents deployments in non-compliant regions. Resource groups and resource locks, in combination or separately, do not provide this proactive enforcement.
The Correct Solution (from previous questions): As established in earlier questions, Azure Policy is the proper tool for enforcing location restrictions. Azure Policy can be configured to deny the creation of resources in regions that are not allowed, directly meeting the regulatory requirement.
In summary: While creating location-based resource groups and using resource locks are good management practices, they do not address the regulatory requirement of enforcing resource location during deployment. They do not prevent deployments in non-compliant regions. Therefore, this solution does not meet the goal.
Final Answer: No
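For contrast, here is a minimal sketch of what a resource lock looks like as an ARM-style definition, built as a Python dictionary; the name, API version, and notes are illustrative. Nothing in it references a region, which is exactly why a lock cannot enforce deployment locations.

import json

# Illustrative ARM-style resource lock: it protects already-deployed resources
# from deletion, but it has no effect on which region resources are deployed to.
cannot_delete_lock = {
    "type": "Microsoft.Authorization/locks",
    "apiVersion": "2016-09-01",
    "name": "do-not-delete",
    "properties": {
        "level": "CanNotDelete",
        "notes": "Protects deployed resources; does not constrain their location.",
    },
}

print(json.dumps(cannot_delete_lock, indent=2))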
HOTSPOT
You are planning an Azure Storage solution for sensitive data. The data will be accessed daily. The data set is less than 10 GB.
You need to recommend a storage solution that meets the following requirements:
- All the data written to storage must be retained for five years.
- Once the data is written, the data can only be read. Modifications and deletion must be prevented.
- After five years, the data can be deleted, but never modified.
- Data access charges must be minimized
What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
General purpose v2 with Archive access tier for blobs
General purpose v2 with Cool access tier for blobs
General purpose v2 with Hot access tier for blobs
Configuration to prevent modifications and deletions:
Container access level
Container access policy
Storage account resource lock
Answer:
Storage account type: General purpose v2 with Cool access tier for blobs
Configuration to prevent modifications and deletions: Storage account resource lock
Explanation:
Let’s break down each requirement and why these selections are the closest fit within the given options:
- Data Retention for Five Years:
General purpose v2 with Cool access tier for blobs: Both Cool and Archive access tiers are suitable for long-term retention. Cool is designed for data that is infrequently accessed but still needs to be available. Archive is for rarely accessed data with higher retrieval latency. Since the data is accessed daily, Archive might introduce unacceptable latency for daily reads. Cool tier offers a better balance of cost and accessibility for data that needs to be retained long-term but still accessed periodically.
- Write-Once, Read-Many (WORM) & Prevention of Modifications and Deletions (Initially):
Storage account resource lock: While ideally, for true WORM compliance and to prevent modifications and deletions of blob data itself, you would use Azure Blob Storage Immutability policies (Time-based retention policies or Legal Hold policies). However, these options are not provided in the “Configuration to prevent modifications and deletions” choices.
Container access level and Container access policy are related to controlling access to the container and blobs (authorization and authentication), not preventing modifications or deletions once data is written. They can restrict who can perform actions, but not inherently prevent actions by authorized users.
Storage account resource lock is the closest option from the provided list to preventing modifications and deletions, although it’s not the ideal solution for WORM at the blob level. A Resource Lock can be set at the Storage Account level (or Resource Group level containing the storage account) with a ReadOnly or CanNotDelete lock. This would:
CanNotDelete: Prevent accidental deletion of the entire storage account (and indirectly the data within it). While it doesn’t prevent modifying blob data, it adds a layer of protection against accidental account-level deletion, which could lead to data loss.
ReadOnly: Prevents any management-plane (Azure Resource Manager) changes to the storage account, including deleting or reconfiguring it. Resource locks do not block data-plane writes to blobs directly, although a ReadOnly lock also blocks the key-listing operation, which can break clients that authenticate with account keys.
Important Note: Using a Storage account resource lock is NOT the same as implementing true WORM immutability policies on blobs. Resource locks are a broader Azure Resource Manager feature, not a blob storage-specific WORM feature. For true WORM and regulatory compliance, Azure Blob Storage Immutability policies are the recommended approach. However, given the limited options in the question, Storage account resource lock is the closest option to provide some level of prevention against modifications and deletions at the account level (primarily deletion).
- Deletion After Five Years, Never Modified:
Cool access tier and potential Lifecycle Management: The Cool tier imposes an early-deletion charge only if data is removed within 30 days, so retaining the data for five years incurs no penalty and it can be deleted afterwards. After five years, you would need a process (for example, Azure Automation or a blob lifecycle management policy; a sketch follows this answer) to identify and delete the data if required. The "never modified" part is addressed, as far as the limited options allow, by the storage account resource lock; ideally, immutability policies would guarantee it.
- Minimize Data Access Charges:
Cool access tier: Cool tier has lower storage costs compared to Hot and higher access costs. Since the data is accessed daily, but the dataset is relatively small (10GB), the access costs for Cool are likely to be acceptable and still significantly lower than Hot tier storage costs over five years. Archive tier would minimize storage costs further, but the higher access costs and retrieval latency might be detrimental for daily access. Cool tier is a good compromise to minimize data access charges while still allowing reasonable daily access.
Why other options are less suitable:
Hot access tier: Unnecessarily expensive for long-term storage, especially if the daily access isn’t extremely frequent or high-bandwidth.
Archive access tier: While cheapest for storage, the high retrieval latency and access costs make it unsuitable for “daily access” even if the data set is small.
General purpose v2 with Archive access tier for blobs: Same issues as Archive tier above regarding daily access.
Container access level/Container access policy: These control access authorization, not data immutability or prevention of modifications/deletions after data is written. They don’t meet the WORM requirement
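As noted above, deleting the data after five years can be automated with a blob lifecycle management policy. The following is a minimal sketch of such a policy, expressed as a Python dictionary that mirrors the lifecycle policy JSON; the rule name is a placeholder and 1,825 days approximates five years.

# Sketch of a blob lifecycle management policy that deletes block blobs roughly
# five years (1825 days) after their last modification. The dict mirrors the JSON
# accepted by the management policy API; the rule name is a placeholder.
lifecycle_policy = {
    "rules": [
        {
            "enabled": True,
            "name": "delete-after-five-years",
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        "delete": {"daysAfterModificationGreaterThan": 1825}
                    }
                },
            },
        }
    ]
}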
You have an Azure subscription.
You need to recommend an Azure Kubernetes service (AKS) solution that will use Linux nodes.
The solution must meet the following requirements:
- Minimize the time it takes to provision compute resources during scale-out operations.
- Support autoscaling of Linux containers.
- Minimize administrative effort.
Which scaling option should you recommend?
Virtual Kubetet
cluster autoscaler
virtual nodes
horizontal pod autoscaler
The correct answer is virtual nodes.
Here’s why:
Virtual Nodes and Minimized Provisioning Time: Virtual nodes in AKS leverage Azure Container Instances (ACI) to quickly provision compute resources. When you scale out with virtual nodes, pods are scheduled directly onto ACI, which can provision containers much faster than traditional virtual machines used by the cluster autoscaler. This directly addresses the requirement to “minimize the time it takes to provision compute resources during scale-out operations.”
Virtual Nodes and Autoscaling of Linux Containers: Virtual nodes are fully compatible with Linux containers. They are designed to seamlessly run Linux-based containerized workloads within AKS. The autoscaling capabilities of virtual nodes are inherently tied to the demand for pods, automatically scaling as needed to accommodate Linux containers.
Virtual Nodes and Minimized Administrative Effort: Virtual nodes significantly reduce administrative overhead because you don’t need to manage the underlying virtual machines that host the nodes. Azure manages the infrastructure for ACI. You focus solely on managing your Kubernetes workloads. This directly addresses the requirement to “minimize administrative effort.”
Let’s look at why the other options are less suitable:
Virtual Kubetet: This is not a recognized or valid term in Azure Kubernetes Service (AKS) or Kubernetes. It seems to be a misspelling or a non-existent option.
Cluster Autoscaler: While the cluster autoscaler is a valid and important component for AKS, it scales the number of nodes (VMs in the node pool) in your AKS cluster. While it does automate node scaling, it still relies on the provisioning of virtual machines, which takes longer than provisioning containers in ACI (as used by virtual nodes). Therefore, it doesn’t minimize provisioning time to the same extent as virtual nodes. Also, while it reduces admin effort, you still manage and configure node pools, which is more administrative overhead than virtual nodes.
Horizontal Pod Autoscaler (HPA): The Horizontal Pod Autoscaler (HPA) scales the number of pods within a deployment or replica set based on CPU utilization or other metrics. HPA does not directly provision compute resources (nodes). While HPA is crucial for autoscaling applications, it relies on having enough underlying compute capacity (nodes) available. If you only use HPA without a mechanism to scale the nodes themselves, your pods might be pending if there isn’t enough node capacity. HPA addresses application scaling, not node scaling for compute resource provisioning.
In Summary:
Virtual nodes are the best fit because they directly address all three requirements: minimizing provisioning time, supporting Linux container autoscaling, and minimizing administrative effort. They offer the fastest scale-out by leveraging serverless container instances and reduce management overhead by abstracting away node management. While Cluster Autoscaler is also a valid autoscaling option, virtual nodes are superior in terms of speed and reduced management for this specific scenario focusing on minimizing provisioning time and administrative effort.
Final Answer: virtual nodes
You have an Azure subscription.
You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes.
The solution must meet the following requirements:
Minimize the time it takes to provision compute resources during scale-out operations.
Support autoscaling of Windows Server containers.
Which scaling option should you recommend?
A. cluster autoscaler
B. horizontal pod autoscaler
C. Kubernetes version 1.20.2 or newer
D. Virtual nodes with Virtual Kubelet ACI
Correct Answer:
A. Cluster Autoscaler
Why Cluster Autoscaler is Correct:
Minimize provisioning time during scale-out: The cluster autoscaler in AKS dynamically adjusts the number of nodes in the cluster based on resource demand. When a scale-out operation is triggered (for example, by unschedulable pods), it adds nodes to the cluster. Provisioning new nodes is not instantaneous, but AKS optimizes the process, and the cluster autoscaler works seamlessly with Windows Server 2019 node pools. The alternative, virtual nodes backed by Azure Container Instances, supports only Linux pods in AKS and therefore cannot run Windows Server containers, so the cluster autoscaler is the only listed option that satisfies both requirements.
Support autoscaling of Windows Server containers: Cluster Autoscaler supports Windows Server nodes in AKS, ensuring that as the demand for Windows containers increases, additional nodes are automatically added to the cluster. It works in tandem with the Horizontal Pod Autoscaler (HPA) to scale pods, but Cluster Autoscaler specifically addresses node-level scaling, which is critical for Windows container support in AKS.
Fit for AZ-305: The AZ-305 exam focuses on designing infrastructure solutions, and Cluster Autoscaler is a standard AKS feature for node-level autoscaling, widely applicable to both Linux and Windows environments.
You plan to deploy 10 applications to Azure. The applications will be deployed to two Azure Kubernetes Service (AKS) clusters. Each cluster will be deployed to a separate Azure region.
The application deployment must meet the following requirements:
- Ensure that the applications remain available if a single AKS cluster fails.
- Ensure that the connection traffic over the internet is encrypted by using SSL without having to configure SSL on each container.
Which service should you include in the recommendation?
AKS ingress controller
Azure Traffic Manager
Azure Front Door
Azure Load Balancer
The correct answer is Azure Front Door.
Here’s why:
Ensure application availability if a single AKS cluster fails: Azure Front Door is a global, scalable entry point that uses Microsoft’s global edge network. It can route traffic to the closest and healthiest AKS cluster based on various routing methods, including priority-based routing for failover scenarios. If one AKS cluster fails, Azure Front Door can automatically direct traffic to the healthy cluster in the other region, ensuring application availability.
Ensure SSL encryption over the internet without configuring SSL on each container: Azure Front Door provides SSL termination at the edge. You can upload your SSL certificate to Azure Front Door, and it will handle the SSL encryption and decryption for all incoming traffic. This means you don’t need to configure SSL certificates and management within each AKS cluster or on each individual container application. Front Door will decrypt the traffic before forwarding it to the backend AKS clusters (using HTTP or HTTPS based on your backend configuration).
Let’s look at why the other options are less suitable:
AKS Ingress Controller: An Ingress Controller is essential for routing HTTP/HTTPS traffic within a single AKS cluster. It can handle SSL termination within the cluster, but it’s primarily a cluster-level component. It doesn’t inherently provide cross-region failover or global load balancing across multiple AKS clusters in different regions. While you can configure ingress controllers in both AKS clusters, you’d still need another service in front to distribute traffic and handle failover across regions.
Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. It can route traffic to different endpoints (such as the public IPs of your AKS cluster load balancers) based on DNS resolution. While it can provide failover across regions, it operates purely at the DNS level, never proxies the traffic itself, and therefore cannot perform SSL termination. You would still need to configure SSL termination within each AKS cluster or on your application containers if you used Traffic Manager for regional failover. Traffic Manager is far less capable for web application traffic management than Front Door.
Azure Load Balancer: Azure Load Balancer is a regional, Layer 4 load balancer. It distributes traffic within a virtual network or exposes services to the internet within a single Azure region. It is not designed for cross-region failover or global routing of web application traffic across multiple AKS clusters in different regions, and because it operates at Layer 4 it cannot terminate SSL; encryption would still have to be handled by the backend services or containers. It is not the right solution for global SSL termination and cross-region application availability in this scenario.
In summary:
Azure Front Door is the most appropriate service because it directly addresses both requirements: ensuring application availability across regions through global routing and providing SSL termination at the edge, simplifying SSL management and improving security and performance.
Final Answer: Azure Front Door
HOTSPOT
You have an Azure web app named App1 and an Azure key vault named KV1.
App1 stores database connection strings in KV1.
App1 performs the following types of requests to KV1:
✑ Get
✑ List
✑ Wrap
✑ Delete
✑ Unwrap
✑ Backup
✑ Decrypt
✑ Encrypt
You are evaluating the continuity of service for App1.
You need to identify the following if the Azure region that hosts KV1 becomes unavailable:
✑ To where will KV1 fail over?
✑ During the failover, which request type will be unavailable?
What should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To where will KV1 fail over?
A server in the same Availability Set
A server in the same fault domain
A server in the same paired region
A virtual machine in a scale set
During the failover, which request type will be unavailable?
Backup
Decrypt
Delete
Encrypt
Get
List
Unwrap
Wrap
To where will KV1 fail over?
A server in the same paired region
Explanation: Azure Key Vault is designed for high availability and disaster recovery. In the event of a regional outage, Azure Key Vault is designed to failover to its paired region. Azure paired regions are geographically separated to provide resilience against regional disasters but are still within the same geography to meet data residency and compliance requirements.
Paired Regions: Azure regions are often paired. For example, East US is paired with West US. In case of a regional disaster in East US, services are designed to failover to West US. Key Vault, as a critical service, follows this pattern.
Availability Sets and Fault Domains: These are mechanisms for high availability within a single region. They protect against hardware failures within a datacenter but do not protect against a regional outage.
Virtual machine in a scale set: VM scale sets are for compute resources and not relevant to Key Vault’s failover mechanism.
During the failover, which request type will be unavailable?
Delete
Explanation: When the region that hosts KV1 becomes unavailable, requests are automatically routed (failed over) to the secondary vault in the paired region, and the key vault runs in read-only mode until the primary region is restored.
Available in read-only mode: Get, List, Encrypt, Decrypt, Wrap, Unwrap, and Backup are read or cryptographic operations, and Key Vault continues to serve them from the secondary region. App1 can therefore keep retrieving and using its database connection strings throughout the failover.
Unavailable in read-only mode: Operations that change the contents of the vault, such as Delete (along with create, update, import, and restore), are rejected while the vault is read-only. Of the request types App1 performs, Delete is the only write operation, so it is the request type that becomes unavailable during the failover.
Therefore, the correct selections are:
To where will KV1 fail over? A server in the same paired region
During the failover, which request type will be unavailable? Delete
Final Answer:
To where will KV1 fail over? A server in the same paired region
During the failover, which request type will be unavailable? Delete
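For context, App1's Get requests correspond to data-plane secret reads like the sketch below, which uses the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders. These reads continue to be served from the paired region while the vault is in read-only mode.

# Sketch: how App1 might retrieve a database connection string from KV1.
# The vault URL and secret name are placeholders; requires azure-identity and
# azure-keyvault-secrets.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://kv1.vault.azure.net",
    credential=DefaultAzureCredential(),
)

secret = client.get_secret("db-connection-string")  # a Get (data-plane read) request
print(secret.name)  # the connection string itself is in secret.value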
HOTSPOT
You have an Azure App Service web app named Webapp1 that connects to an Azure SQL database named DB1. Webapp1 and DB1 are deployed to the East US Azure region.
You need to ensure that all the traffic between Webapp1 and DB1 is sent via a private connection.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Create a virtual network that contains at least:
1 subnet
2 subnets
3 subnets
From the virtual network, configure name resolution to use:
A private DNS zone
A public DNS zone
The Azure DNS Private Resolver
Answer Area:
Create a virtual network that contains at least: 1 subnet
From the virtual network, configure name resolution to use: A private DNS zone
Explanation:
To ensure that traffic between Webapp1 and DB1 is sent via a private connection, you need to implement Azure Private Link for Azure SQL Database and integrate your App Service with a virtual network. Here’s a breakdown of why the selected options are correct:
- Create a virtual network that contains at least: 1 subnet
Why a Virtual Network is Necessary: Azure Private Link works by extending Azure services into your virtual network via private endpoints. A virtual network provides the private network space within Azure where you can establish this private connection.
Why at least 1 subnet is sufficient: You need at least one subnet in the virtual network to host the private endpoint for the Azure SQL Database. While you might have other subnets in a real-world scenario for different components or for subnet delegation, a minimum of one subnet is required for the private endpoint itself. You will place the Private Endpoint for SQL Database in this subnet.
- From the virtual network, configure name resolution to use: A private DNS zone
Why a Private DNS Zone is Crucial: When you create a Private Endpoint for Azure SQL Database, Azure creates a network interface card (NIC) within your subnet and assigns it a private IP address from your virtual network’s address space. To access the SQL Database via this private IP, you need to resolve the SQL Database’s fully qualified domain name (FQDN) to this private IP address within your virtual network.
Private DNS Zones are designed for this: Azure Private DNS Zones allow you to manage DNS records for Azure services within your virtual network. When you create a Private Endpoint, Azure automatically integrates it with a Private DNS Zone (or you can manually configure it). This ensures that when Webapp1 (which will be integrated with the VNet) attempts to resolve the SQL Database’s FQDN, it will receive the private IP address of the Private Endpoint, directing traffic over the private connection.
Why not a public DNS zone: A public DNS zone resolves to public IP addresses, which is the opposite of what you want for a private connection.
Why not Azure DNS Private Resolver (directly): While Azure DNS Private Resolver is used for hybrid DNS resolution scenarios (e.g., resolving on-premises DNS from Azure or vice versa), for a purely Azure-to-Azure private connection within a VNet, a Private DNS Zone is the direct and simpler solution for name resolution. Private Resolver is more relevant when you have more complex hybrid networking requirements.
Steps to Achieve Private Connection (Implied by the Hotspot Options):
Create a Virtual Network and a Subnet: You would first create a virtual network in the East US region and at least one subnet within it.
Create a Private Endpoint for Azure SQL Database: You would create a Private Endpoint for your DB1 Azure SQL database. During Private Endpoint creation, you would:
Select the SQL Server resource type.
Select your DB1 SQL Server.
Choose the target subnet you created in the VNet.
Choose to integrate with a private DNS zone (or manually configure DNS later).
Integrate App Service Web App with the Virtual Network (VNet Integration): You would configure VNet Integration for Webapp1 to connect it to the subnet in the VNet. This makes the Web App part of the private network.
Name Resolution (Automatic with Private DNS Zone): If you chose to integrate with a Private DNS Zone during Private Endpoint creation (which is highly recommended and often automatic), Azure will handle the DNS configuration. Webapp1, being in the same VNet, will automatically use the Private DNS Zone and resolve the SQL Database’s FQDN to the private IP of the Private Endpoint.
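For illustration, step 2 could be scripted with the azure-mgmt-network Python package roughly as follows. This is a sketch only: the subscription, resource group, subnet, and SQL server IDs are placeholders, and the private DNS zone integration from step 4 is assumed to be configured separately (or handled automatically when the endpoint is created through the portal).

# Sketch: create a private endpoint for the Azure SQL logical server that hosts DB1
# inside an existing subnet, using the azure-mgmt-network package. All names and
# resource IDs are placeholders; private DNS zone integration is configured separately.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
resource_group = "rg-webapp1"
subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-webapp1"
    "/providers/Microsoft.Network/virtualNetworks/vnet1/subnets/subnet1"
)
sql_server_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-webapp1"
    "/providers/Microsoft.Sql/servers/sqlserver1"
)

network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

poller = network_client.private_endpoints.begin_create_or_update(
    resource_group,
    "pe-db1",
    {
        "location": "eastus",
        "subnet": {"id": subnet_id},
        "private_link_service_connections": [
            {
                "name": "db1-connection",
                "private_link_service_id": sql_server_id,
                "group_ids": ["sqlServer"],  # the sub-resource for Azure SQL
            }
        ],
    },
)
print(poller.result().provisioning_state)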
HOTSPOT
Your on-premises network contains an Active Directory Domain Services (AD DS) domain. The domain contains a server named Server1. Server1 contains an app named App1 that uses AD DS authentication. Remote users access App1 by using a VPN connection to the on-premises network.
You have an Azure AD tenant that syncs with the AD DS domain by using Azure AD Connect.
You need to ensure that the remote users can access App1 without using a VPN. The solution must meet the following requirements:
- Ensure that the users authenticate by using Azure Multi-Factor Authentication (MFA).
- Minimize administrative effort.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
In Azure AD:
A managed identity
An access package
An app registration
An enterprise application
On-premises:
A server that runs Windows Server and has the Azure AD Application Proxy connector installed
A server that runs Windows Server and has the on-premises data gateway (standard mode) installed
A server that runs Windows Server and has the Web Application Proxy role service installed
Answer Area:
In Azure AD: An enterprise application
On-premises: A server that runs Windows Server and has the Azure AD Application Proxy connector installed
Explanation:
Let’s break down why this is the correct solution and why the other options are not as suitable:
In Azure AD: An enterprise application
Why Enterprise Application? Azure AD Application Proxy, the core component of the solution, is configured as an enterprise application in Azure AD. When you set up Application Proxy, you are essentially registering your on-premises application with Azure AD so that Azure AD can manage authentication and access to it.
Functionality: Enterprise applications in Azure AD are used to manage single sign-on, provisioning, and access control for applications, including those published through Application Proxy.
Why not other options in Azure AD?
A managed identity: Managed identities are used for Azure resources to authenticate to other Azure services. They are not relevant for authenticating users accessing an on-premises application.
An access package: Access packages are used for managing user access to groups, applications, and SharePoint sites, typically within Azure AD and related cloud services. While they manage access, they are not the primary mechanism for exposing an on-premises app securely to the internet with Azure AD authentication.
An app registration: App registrations are used for registering applications with Azure AD, primarily for applications that directly use the Microsoft Identity Platform for authentication (like cloud-native apps or apps using OAuth/OIDC). While related to authentication in Azure AD, it’s not the direct component for publishing on-premises apps via Application Proxy. Enterprise Application is the higher-level concept that encompasses the Application Proxy setup.
On-premises: A server that runs Windows Server and has the Azure AD Application Proxy connector installed
Why Azure AD Application Proxy Connector? Azure AD Application Proxy is specifically designed to securely publish on-premises web applications to the internet, enabling access for remote users without requiring a VPN. The Azure AD Application Proxy connector is the essential on-premises component. It’s a lightweight agent that you install on a Windows Server within your on-premises network.
How it works:
Connector Installation: You install the connector on a server inside your on-premises network. This server needs outbound internet access to communicate with Azure AD Application Proxy services in the cloud.
Application Publishing: You configure an Enterprise Application in Azure AD, specifying the internal URL of App1 on Server1 and the external URL users will use to access it. You also configure pre-authentication to use Azure AD.
User Access: When a remote user tries to access the external URL, they are redirected to Azure AD for authentication. Azure AD enforces MFA as required.
Secure Proxy: After successful Azure AD authentication, Azure AD Application Proxy securely forwards the request to the connector on-premises.
Connector Access: The connector, acting on behalf of the user, then accesses App1 on Server1 using standard protocols (like HTTP/HTTPS) within your internal network.
Response: The response from App1 follows the reverse path back to the user through the connector and Azure AD Application Proxy.
Why not other on-premises options?
A server that runs Windows Server and has the on-premises data gateway (standard mode) installed: The on-premises data gateway is used to connect Azure services like Power BI, Logic Apps, and Power Automate to on-premises data sources (databases, file shares, etc.). It is not for publishing web applications for direct user access with Azure AD authentication.
A server that runs Windows Server and has the Web Application Proxy role service installed: Web Application Proxy (WAP) is an older technology, primarily used with Active Directory Federation Services (AD FS) for publishing web applications. While WAP can provide external access, Azure AD Application Proxy is the more modern, Azure AD-integrated, and simpler solution for this scenario, especially when the goal is to use Azure AD MFA and minimize administrative effort in an Azure AD environment. Azure AD Application Proxy is the direct successor and recommended replacement for WAP in Azure AD scenarios.
HOTSPOT
You need to recommend a solution to integrate Azure Cosmos DB and Azure Synapse. The solution must meet the following requirements:
- Traffic from an Azure Synapse workspace to the Azure Cosmos DB account must be sent via the Microsoft backbone network.
- Traffic from the Azure Synapse workspace to the Azure Cosmos DB account must NOT be routed over the internet.
- Implementation effort must be minimized.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
When provisioning the Azure Synapse workspace:
Configure a dedicated managed virtual network.
Disable public network access to the workspace endpoints.
Enable the use of the Azure AD authentication.
When configuring the Azure Cosmos DB account, enable:
Managed private endpoints
Server-level firewall rules
Service endpoint policies
Answer Area:
When provisioning the Azure Synapse workspace:
Configure a dedicated managed virtual network.
When configuring the Azure Cosmos DB account, enable:
Managed private endpoints
Explanation:
Let’s break down each selection and why they are the correct choices to meet the requirements:
When provisioning the Azure Synapse workspace:
Configure a dedicated managed virtual network.
Correct: Configuring a dedicated managed virtual network for the Azure Synapse workspace is crucial. A Managed Virtual Network (VNet) isolates the Synapse workspace within its own private network environment. This is the foundation for ensuring private connectivity and preventing internet exposure. By deploying Synapse within a Managed VNet, you ensure that all outbound connections from Synapse can be routed through private links.
Why it’s necessary: To establish private connections to services like Azure Cosmos DB, Synapse needs to be within a virtual network. Managed VNets simplify this by Azure managing the VNet infrastructure for Synapse.
Disable public network access to the workspace endpoints.
Correct: Disabling public network access to the workspace endpoints is essential to prevent traffic from being routed over the internet. This forces all traffic to go through private connections. By disabling public access, you explicitly restrict access to the Synapse workspace to only those networks and services that have private connectivity established.
Why it’s necessary: This enforces the “no internet routing” requirement and enhances security by limiting the attack surface.
Enable the use of the Azure AD authentication.
Incorrect: While Azure AD authentication is important for securing access to Azure Synapse and Azure Cosmos DB, it does not directly address the requirement of network traffic routing over the Microsoft backbone network and avoiding the internet. Azure AD authentication is about authentication and authorization, not network connectivity path. It’s a good security practice, but not directly relevant to the private networking requirement in this question.
When configuring the Azure Cosmos DB account, enable:
Managed private endpoints
Correct: Enabling Managed private endpoints on the Azure Cosmos DB account is the key to establishing a private link from the Synapse Managed VNet to Cosmos DB. Managed private endpoints in Synapse allow you to create private endpoints to other Azure PaaS services, including Cosmos DB, from within the Synapse Managed VNet. This ensures that the traffic between Synapse and Cosmos DB flows privately over the Microsoft backbone network and does not traverse the public internet.
Why it’s necessary: Private endpoints are the Azure Private Link technology that provides private connectivity to Azure services. Managed private endpoints simplify the creation and management of these private endpoints from Synapse.
Server-level firewall rules
Incorrect: While server-level firewall rules on Azure Cosmos DB can restrict access to specific IP ranges or virtual networks, they do not inherently guarantee that traffic will be routed via the Microsoft backbone network and avoid the internet. Firewall rules are primarily for access control, not for enforcing a private network path. While you can use firewall rules in conjunction with other private networking solutions, they are not the primary solution for achieving private connectivity in this scenario. They are more about authorization (who can connect) than routing path.
Service endpoint policies
Incorrect: Service endpoint policies are used in conjunction with service endpoints. Service endpoints provide secure and direct connectivity from virtual networks to Azure services, keeping traffic on the Azure backbone. However, service endpoints are typically configured on the subnet level and are generally being superseded by Private Link for many scenarios, especially for PaaS-to-PaaS private connections. Managed private endpoints are the more modern and recommended approach for private connections from Synapse to Cosmos DB and offer a simpler configuration for this integration. Service endpoints are also less granular and less flexible than Private Endpoints for this specific scenario.
In summary, to meet the requirements of private connectivity, Microsoft backbone network traffic, no internet routing, and minimized implementation effort, the optimal solution is to:
Provision Azure Synapse with a dedicated managed virtual network.
Disable public network access to the Synapse workspace.
Enable Managed private endpoints for the Azure Cosmos DB account and create a managed private endpoint from Synapse to Cosmos DB.
You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.
What should you include in the recommendation?
A. Azure Notification Hubs
B. Azure Application Gateway
C. Azure Queue Storage
D. Azure Traffic Manager
The correct answer is C. Azure Queue Storage
Explanation:
Here’s why Azure Queue Storage is the most appropriate recommendation and why the other options are not suitable for this scenario:
Azure Queue Storage:
Asynchronous Communication: Azure Queue Storage is specifically designed for asynchronous message queuing. Cloud services can enqueue messages into a queue, and other services can independently dequeue and process these messages. This decouples the services and enables asynchronous communication.
XML Messages: Azure Queue Storage can handle messages in various formats, including XML. You can serialize your transaction information into XML and place it in the message body of queue messages.
Service-to-Service Communication: Queue Storage is ideal for communication between different cloud services within an application. Different services can access the same queue to send and receive messages, facilitating communication between order processing, billing, payment, inventory, and shipping services in your application.
Reliability and Scalability: Azure Queue Storage is a highly reliable and scalable service, ensuring message delivery and handling even under heavy load.
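As a minimal sketch of this pattern using the azure-storage-queue Python package (the connection string, queue name, and XML payload are placeholders, and the queue is assumed to already exist):

# Sketch: one service enqueues an XML transaction message and another dequeues it,
# using the azure-storage-queue package. The connection string, queue name, and
# payload are placeholders; the queue is assumed to already exist.
from azure.storage.queue import QueueClient

connection_string = "<storage-account-connection-string>"
queue = QueueClient.from_connection_string(connection_string, queue_name="orders")

# Producer: the order service enqueues the transaction as XML.
order_xml = "<order><id>12345</id><total>99.90</total></order>"
queue.send_message(order_xml)

# Consumer: the billing service dequeues, processes, then deletes each message.
for message in queue.receive_messages():
    print("Processing:", message.content)
    queue.delete_message(message)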
Why other options are incorrect:
A. Azure Notification Hubs: Azure Notification Hubs is designed for sending push notifications to mobile devices (iOS, Android, Windows, etc.). It is not intended for service-to-service communication or processing transaction information.
B. Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and Application Delivery Controller (ADC). It operates at Layer 7 of the OSI model and is used to manage and route HTTP/HTTPS traffic to web applications. It’s not meant for general-purpose asynchronous message queuing between cloud services.
D. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. It directs user traffic to different endpoints based on factors like performance, geography, or priority. It is primarily used for improving the availability and responsiveness of web applications by distributing traffic across different Azure regions or services. It’s not designed for asynchronous service-to-service communication.
You are developing an app that will use Azure Functions to process Azure Event Hubs events. Request processing is estimated to take between five and 20 minutes.
You need to recommend a hosting solution that meets the following requirements:
- Supports estimates of request processing runtimes
- Supports event-driven autoscaling for the app
Which hosting plan should you recommend?
A. Dedicated
B. Consumption
C. App Service
D. Premium
The correct answer is D. Premium
Explanation:
Let’s analyze each hosting plan against the requirements:
A. Dedicated (App Service Plan):
Supports estimates of request processing runtimes: Yes, App Service plans have no inherent time limits on function execution duration (beyond the overall app service timeouts if applicable, but not typically an issue for Event Hub triggers). You can run functions for 20 minutes or longer within the resources allocated to your App Service plan.
Supports event-driven autoscaling for the app: While App Service plans offer autoscaling, it’s primarily based on metrics like CPU utilization, memory consumption, and queue length (for Service Bus queues, for example). It’s not directly event-driven in the same way as Consumption or Premium plans are for Event Hubs. You would need to configure metric-based autoscaling rules, which are less reactive to immediate event bursts.
Cost: Dedicated plans can be more expensive, especially if your event processing is sporadic, as you pay for dedicated resources continuously, even when idle.
B. Consumption:
Supports estimates of request processing runtimes: No, not reliably. Consumption plan functions have a default timeout of 5 minutes, which can be raised only to a maximum of 10 minutes. A 20-minute processing time exceeds what the Consumption plan allows.
Supports event-driven autoscaling for the app: Yes, absolutely. Consumption plan is designed for event-driven scaling. It automatically scales based on the number of incoming events in the Event Hub. This is a key strength of the Consumption plan.
Cost: Consumption plan is generally the most cost-effective for event-driven workloads because you only pay for the actual compute time used when your functions are running.
C. App Service:
This is essentially the same as option A - Dedicated (App Service Plan). The analysis for option A applies here.
D. Premium:
Supports estimates of request processing runtimes: Yes. The Premium plan significantly extends the execution timeout limits compared to Consumption: the default is 30 minutes, and it can be increased further (effectively unbounded, with at least 60 minutes guaranteed). 20 minutes is well within the capabilities of the Premium plan.
Supports event-driven autoscaling for the app: Yes. Premium plan also provides event-driven autoscaling, similar to the Consumption plan. It scales elastically based on the event load from Event Hubs. Premium plan also offers more control over scaling behavior and instance sizes compared to Consumption.
Cost: Premium plan is more expensive than Consumption but generally less expensive than Dedicated (App Service) plans for event-driven workloads, especially if your load is variable. It offers a balance of scalability, features, and cost.
Why Premium is the best choice:
Given the requirement for processing times of up to 20 minutes, the Consumption plan (B) is immediately ruled out because its maximum execution timeout is 10 minutes.
Dedicated (App Service) plan (A and C) can handle the runtime and offers scaling, but the autoscaling is less directly event-driven, and it’s generally more costly for event-driven workloads than Premium.
Premium plan (D) is the ideal solution because it:
Easily supports the 20-minute processing time with its extended execution timeout.
Provides event-driven autoscaling specifically designed for event sources like Event Hubs.
Offers a good balance of cost and features for event-driven scenarios, being more cost-effective than dedicated plans and providing more guarantees and features than Consumption.
Therefore, the most appropriate hosting plan recommendation is D. Premium.
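If you do choose the Premium plan, the per-function execution limit is governed by the functionTimeout setting in host.json. The snippet below is a small sketch (written in Python only for consistency with the other examples here) that generates a host.json allowing 30-minute executions, comfortably above the 20-minute estimate; the exact value is an assumption, not a required setting.

# Sketch: generate a host.json that raises the function timeout to 30 minutes,
# which comfortably covers the estimated 5 to 20 minute processing time on the
# Premium plan.
import json

host_json = {
    "version": "2.0",
    "functionTimeout": "00:30:00",  # hh:mm:ss
}

with open("host.json", "w") as f:
    json.dump(host_json, f, indent=2)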
You need to design a highly available Azure SQL database that meets the following requirements:
- Failover between replicas of the database must occur without any data loss.
- The database must remain available in the event of a zone outage.
- Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Database Basic
B. Azure SQL Database Business Critical
C. Azure SQL Database Standard
D. Azure SQL Managed Instance General Purpose
The correct answer is B. Azure SQL Database Business Critical
Explanation:
Let’s break down why Business Critical is the best option based on each requirement:
Failover between replicas of the database must occur without any data loss.
Azure SQL Database Business Critical is designed for mission-critical applications with the highest performance and availability requirements. It maintains multiple replicas of the database (an Always On availability group with a primary and three secondaries) and uses synchronous replication, so a transaction is not acknowledged to the client until it has been persisted to a secondary replica. This guarantees zero data loss during failover because a secondary is always transactionally consistent with the primary.
The database must remain available in the event of a zone outage.
Azure SQL Database Business Critical supports zone redundancy. When configured as zone-redundant, the three replicas are placed in different availability zones within the Azure region. If one availability zone fails, the database remains available because the other replicas in the healthy zones continue to operate.
Costs must be minimized.
While Azure SQL Database Business Critical is the most expensive deployment option among the single database options, it is the only option that fully guarantees zero data loss and zone outage resilience as explicitly stated in the requirements. The “minimize costs” requirement is important, but it must be balanced against the critical availability and data loss prevention requirements. In this scenario, the availability and zero data loss requirements are paramount, and Business Critical is the only option that fully satisfies them.
Let’s look at why the other options are less suitable:
A. Azure SQL Database Basic:
Basic tier is the least expensive option, but it does not offer high availability or zone redundancy. It is a single instance database and is not designed for zero data loss failover or zone outage resilience.
C. Azure SQL Database Standard:
Azure SQL Database Standard uses the standard availability model, in which compute is separated from the storage layer and resilience comes from the durability of that storage within a single datacenter (in the non-zone-redundant configuration). It provides good availability and data durability, and it can be configured for zone redundancy, giving zone outage resilience. However, the guarantee of zero data loss during failover is stronger in Business Critical, whose premium local storage and synchronous replica set are designed specifically for that purpose. Standard is more cost-effective than Business Critical but does not guarantee zero data loss as strongly.
D. Azure SQL Managed Instance General Purpose:
Azure SQL Managed Instance General Purpose also offers high availability and can be configured for zone redundancy. It uses standard storage and provides good performance. However, similar to Standard single database, while it aims for minimal data loss, it doesn’t have the same explicit guarantee of zero data loss failover as Business Critical. Also, for a single database, Managed Instance is typically more expensive and more complex to manage than a single Azure SQL Database.
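For illustration, a zone-redundant Business Critical database could be provisioned with the azure-mgmt-sql Python package roughly as follows; the subscription, resource group, server, database name, and SKU are placeholders, not values from the scenario.

# Sketch: provision a zone-redundant Business Critical database with azure-mgmt-sql.
# Subscription, resource group, server, database name, and SKU are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

sql_client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = sql_client.databases.begin_create_or_update(
    resource_group_name="rg-data",
    server_name="sqlserver1",
    database_name="db1",
    parameters={
        "location": "eastus",
        "sku": {"name": "BC_Gen5_2", "tier": "BusinessCritical"},
        "zone_redundant": True,  # spread the replicas across availability zones
    },
)
database = poller.result()
print(database.status)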
You need to design a highly available Azure SQL database that meets the following requirements:
- Failover between replicas of the database must occur without any data loss.
- The database must remain available in the event of a zone outage.
- Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Database Hyperscale
B. Azure SQL Database Premium
C. Azure SQL Database Standard
D. Azure SQL Managed Instance General Purpose
The correct answer is B. Azure SQL Database Premium.
Rationale:
Let’s break down why Azure SQL Database Premium is the most suitable option based on the requirements:
Failover between replicas of the database must occur without any data loss.
Azure SQL Database Premium (and Business Critical, which is often considered the evolution of Premium) tiers are designed for mission-critical workloads that require the highest levels of availability and data durability. These tiers utilize synchronous replication. Synchronous replication means that a transaction is not considered committed until it is written to both the primary replica and at least one secondary replica. This ensures zero data loss in the event of a failover because the secondary replicas are always transactionally consistent with the primary.
The database must remain available in the event of a zone outage.
Azure SQL Database Premium (and Business Critical) supports Zone Redundancy. When you configure a database as zone-redundant in the Premium tier, Azure automatically provisions and maintains replicas of your database across multiple availability zones within the same Azure region. Availability Zones are physically separate datacenters within an Azure region. If one zone experiences an outage, the database remains available because the replicas in the other zones continue to function.
Costs must be minimized.
While Azure SQL Database Premium is more expensive than Standard and Hyperscale tiers, it is the most cost-effective option that fully meets the zero data loss and zone outage availability requirements. The “minimize costs” requirement must be balanced with the other critical requirements. In this scenario, the need for zero data loss and zone redundancy takes precedence over minimizing costs to the absolute lowest possible level. Basic and Standard tiers are cheaper but do not guarantee zero data loss and zone outage resilience to the same degree as Premium. Hyperscale, while potentially cost-effective for very large databases, might be more expensive for smaller to medium-sized databases than Premium and is not specifically designed for the same level of guaranteed zero data loss in failovers as Premium/Business Critical.
Let’s look at why the other options are less suitable:
A. Azure SQL Database Hyperscale:
Hyperscale is designed for very large databases and high scalability. While it offers high availability and can be zone-redundant, its architecture prioritizes scalability and performance for massive datasets. While it aims for high data durability, it doesn’t offer the same explicit guarantee of zero data loss during failover as the Premium/Business Critical tiers with synchronous replication across replicas designed for that specific purpose. Also, for smaller databases, Hyperscale might be more complex and not necessarily the most cost-effective for the specific needs outlined.
C. Azure SQL Database Standard:
Azure SQL Database Standard offers high availability and can be configured for zone redundancy. However, it relies on the standard availability model (remote storage with a single compute replica in the non-zone-redundant configuration) and does not provide the same guarantee of zero data loss during failover as the Premium/Business Critical tiers. Failovers in the Standard tier are generally fast, but there is a very slight potential for data loss in extreme scenarios.
D. Azure SQL Managed Instance General Purpose:
Azure SQL Managed Instance General Purpose also offers high availability and can be zone-redundant. However, for a single database requirement, using Managed Instance is often overkill and more complex and potentially more expensive than using a single Azure SQL Database. While General Purpose Managed Instance is cheaper than Business Critical Managed Instance, it still doesn’t offer the same guaranteed zero data loss as Azure SQL Database Premium/Business Critical.
Important Note: The term “Azure SQL Database Premium” is sometimes used interchangeably with “Azure SQL Database Business Critical” in older documentation or exam questions. Business Critical is the current name for the tier that provides the highest level of availability, zero data loss, and zone redundancy for single Azure SQL Databases. If “Premium” in this question is intended to refer to the current highest availability tier, then it means Business Critical.
HOTSPOT
Your company has offices in New York City, Sydney, Paris, and Johannesburg.
The company has an Azure subscription.
You plan to deploy a new Azure networking solution that meets the following requirements:
- Connects to ExpressRoute circuits in the Azure regions of East US, Southeast Asia, North Europe, and South Africa
- Minimizes latency by supporting connection in three regions
- Supports Site-to-site VPN connections
- Minimizes costs
You need to identify the minimum number of Azure Virtual WAN hubs that you must deploy, and which virtual WAN SKU to use.
What should you identify? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Number of Virtual WAN hubs:
1
2
3
4
Virtual WAN SKU:
Basic
Standard
Answer Area:
Number of Virtual WAN hubs: 3
Virtual WAN SKU: Standard
Explanation:
Number of Virtual WAN hubs: 3
Requirement for ExpressRoute in Four Regions: The company has ExpressRoute circuits in East US, Southeast Asia, North Europe, and South Africa. Virtual WAN hubs act as central connectivity points within Azure for these ExpressRoute circuits.
Minimizing Latency in Three Regions: To minimize latency for users in three of the four office locations, deploying Virtual WAN hubs in or near three of the four Azure regions is the most effective approach. You would strategically choose three locations that best serve the majority of your users and traffic patterns. For example, placing hubs in East US (for New York), North Europe (for Paris), and Southeast Asia (for Sydney) would cover three major office locations.
Connectivity to All Four Regions: Even with three hubs, you can still connect to ExpressRoute circuits in all four regions. A single Virtual WAN hub can connect to multiple ExpressRoute circuits, even if those circuits are in different Azure regions. The hubs act as aggregation points. You do not need a one-to-one mapping of hubs to ExpressRoute regions to achieve connectivity.
Minimizing Costs: Deploying three hubs is the minimum required to meet the latency requirement for three regions while still connecting to all four ExpressRoute circuits. Deploying four hubs would also technically work but would unnecessarily increase costs without providing additional benefit beyond the stated requirements.
Virtual WAN SKU: Standard
Requirement for ExpressRoute and Site-to-site VPN: The requirements explicitly state the need to connect to ExpressRoute circuits and support Site-to-site VPN connections.
SKU Capabilities:
Basic SKU: The Basic Virtual WAN SKU is limited. It only supports Site-to-site VPN connections. It does not support ExpressRoute connections.
Standard SKU: The Standard Virtual WAN SKU provides full functionality and supports both ExpressRoute and Site-to-site VPN connections, along with other advanced features like VPN encryption, routing policies, and more.
Choosing the Correct SKU: Since the solution must connect to ExpressRoute circuits, the Standard Virtual WAN SKU is mandatory. The Basic SKU is insufficient to meet the ExpressRoute connectivity requirement.
You have an Azure Functions microservice app named App1 that is hosted in the Consumption plan. App1 uses an Azure Queue Storage trigger.
You plan to migrate App1 to an Azure Kubernetes Service (AKS) cluster.
You need to prepare the AKS cluster to support App1. The solution must meet the following requirements:
- Use the same scaling mechanism as the current deployment.
- Support kubenet and Azure Container Networking Interface (CNI) networking.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.
A. Configure the horizontal pod autoscaler.
B. Install Virtual Kubelet.
C. Configure the AKS cluster autoscaler.
D. Configure the virtual node add-on.
E. Install Kubernetes-based Event Driven Autoscaling (KEDA).
The two correct actions are:
E. Install Kubernetes-based Event Driven Autoscaling (KEDA).
C. Configure the AKS cluster autoscaler.
Here’s why:
E. Install Kubernetes-based Event Driven Autoscaling (KEDA): The scenario specifies that you need to use the same scaling mechanism as the current deployment (Consumption plan) for Azure Functions. When Azure Functions are deployed to AKS, KEDA is the recommended way to achieve event-driven autoscaling, similar to the Consumption plan. KEDA allows you to scale your function app based on the number of messages in the Azure Queue Storage queue, mirroring the Consumption plan's behavior.
C. Configure the AKS cluster autoscaler: While KEDA handles scaling the function pods based on events, the AKS cluster autoscaler is responsible for scaling the number of nodes in your AKS cluster. If KEDA determines that more function pods are needed, the cluster autoscaler ensures that there are enough nodes available to accommodate those pods. This ensures that your cluster can handle the workload dynamically.
Here’s why the other options are incorrect:
A. Configure the horizontal pod autoscaler: While the Horizontal Pod Autoscaler (HPA) is a Kubernetes autoscaling mechanism, KEDA is more suitable for event-driven scaling of Azure Functions in AKS. KEDA simplifies the configuration and provides scaling triggers specific to Azure services like Queue Storage.
B. Install Virtual Kubelet: Virtual Kubelet allows you to extend your AKS cluster to Azure Container Instances (ACI). This isn't necessary for simply migrating Azure Functions to AKS with similar scaling capabilities.
D. Configure the virtual node add-on: The virtual node add-on is related to Virtual Kubelet and is also not required for this migration scenario.
Therefore, installing KEDA and configuring the AKS cluster autoscaler are the essential steps to prepare the AKS cluster to support App1 with the same scaling mechanism as the Consumption plan, while also supporting kubenet and Azure CNI networking.
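For illustration, the KEDA piece of this setup is a ScaledObject that scales the App1 deployment based on queue length. The sketch below builds the manifest as a Python dictionary and renders it to YAML (assumes the PyYAML package); the deployment name, queue name, replica bounds, and the environment variable holding the storage connection string are placeholders.

# Sketch: KEDA ScaledObject that scales the App1 deployment based on the length of
# an Azure Storage queue. The deployment name, queue name, replica bounds, and the
# environment variable holding the storage connection string are placeholders.
import yaml  # assumes PyYAML is installed

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "app1-scaler"},
    "spec": {
        "scaleTargetRef": {"name": "app1"},  # the Deployment that runs App1
        "minReplicaCount": 0,
        "maxReplicaCount": 20,
        "triggers": [
            {
                "type": "azure-queue",
                "metadata": {
                    "queueName": "app1-queue",
                    "queueLength": "5",  # target messages per replica
                    "connectionFromEnv": "AzureWebJobsStorage",
                },
            }
        ],
    },
}

print(yaml.safe_dump(scaled_object, sort_keys=False))  # pipe to kubectl apply -f -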
You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.
What should you include in the recommendation?
A. Azure Application Gateway
B. Azure Queue Storage
C. Azure Data Lake
D. Azure Traffic Manager
The correct answer is B. Azure Queue Storage.
Explanation:
Azure Queue Storage is a service specifically designed for asynchronous message queuing. It allows different components of an application to communicate reliably and asynchronously by sending messages to a queue.
Here’s why Azure Queue Storage is the best fit for the requirements and why the other options are not:
Asynchronous Communication: Azure Queue Storage excels at enabling asynchronous communication. Services that process customer orders, billing, payment, inventory, and shipping can operate independently and communicate by placing messages in queues. This decouples the services, improving resilience and scalability. One service can enqueue a message (e.g., “Order Placed”) and other services (billing, inventory) can dequeue and process that message at their own pace.
XML Messages: Azure Queue Storage can store messages in various formats, including XML. You can serialize your transaction information into XML format and use it as the message body in Azure Queue Storage.
Service-to-Service Communication: Azure Queue Storage is ideal for communication between different cloud services within an application architecture. The different cloud services in your sales application can use queues to exchange transaction information without needing to directly connect or wait for each other.
Reliability: Azure Queue Storage provides reliable message delivery. Messages are persisted and will be delivered even if components fail.
Why other options are incorrect:
A. Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and Application Delivery Controller (ADC). It is used to manage and route HTTP/HTTPS traffic to web applications. It is not designed for general-purpose asynchronous message queuing between services. Application Gateway is for client-to-application traffic management, not service-to-service messaging.
C. Azure Data Lake: Azure Data Lake is a massively scalable and secure data lake for big data analytics workloads. It is designed for storing and analyzing large volumes of data, typically in batch processing scenarios. It’s not meant for real-time or near real-time asynchronous communication between services that process transactions. Data Lake is for data at rest and analytics, not for transactional messaging.
D. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. It directs client traffic to different endpoints based on routing methods like performance, priority, or geographic location. It is primarily used for improving the availability and responsiveness of web applications by distributing traffic across different Azure regions or services. It’s not designed for asynchronous service-to-service communication. Traffic Manager is also for client-to-application traffic management, focusing on global routing and availability.
Therefore, Azure Queue Storage is the most appropriate and recommended solution for enabling asynchronous communication of transaction information using XML messages between the different cloud services in your sales application.
Final Answer: B
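To make the recommendation concrete, here is a minimal enqueue-side sketch using the azure-storage-queue Python SDK. It is illustrative only: the connection string, queue name, and XML fields are placeholders rather than values from the scenario.

    from azure.core.exceptions import ResourceExistsError
    from azure.storage.queue import QueueClient

    # Example XML payload for a transaction event (fields are illustrative).
    xml_message = (
        "<transaction>"
        "<orderId>1001</orderId>"
        "<status>OrderPlaced</status>"
        "<amount currency='USD'>49.99</amount>"
        "</transaction>"
    )

    queue = QueueClient.from_connection_string("<storage-connection-string>", "transactions")
    try:
        queue.create_queue()          # create once; ignore if the queue already exists
    except ResourceExistsError:
        pass
    queue.send_message(xml_message)   # billing, inventory, and shipping services dequeue later

Because the message sits durably in the queue, the ordering service does not wait for downstream services; they process it at their own pace.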
You need to design a highly available Azure SQL database that meets the following requirements:
- Failover between replicas of the database must occur without any data loss.
- The database must remain available in the event of a zone outage.
- Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Managed Instance General Purpose
B. Azure SQL Database Hyperscale
C. Azure SQL Database Premium
D. Azure SQL Managed Instance Business Critical
The correct answer is C. Azure SQL Database Premium.
Explanation:
Let’s break down why Azure SQL Database Premium is the most suitable choice for meeting all the requirements, while also considering the other options:
Failover between replicas of the database must occur without any data loss.
Azure SQL Database Premium (and Business Critical) tiers are designed for mission-critical applications and guarantee zero data loss during failovers. They achieve this through synchronous replication. In synchronous replication, a transaction is not considered committed until it is written to both the primary and secondary replicas. This ensures that in the event of a failover, the secondary replica is transactionally consistent with the primary, and no data is lost.
The database must remain available in the event of a zone outage.
Azure SQL Database Premium (and Business Critical) support Zone Redundancy. When you configure a database as zone-redundant in the Premium tier, Azure automatically provisions and maintains replicas across multiple availability zones within the same Azure region. If one zone fails, the database remains available as replicas in other zones continue to operate.
Costs must be minimized.
Azure SQL Database Premium is more expensive than Standard and Hyperscale, but it is less expensive than Azure SQL Managed Instance Business Critical. While “minimize costs” is a requirement, it’s balanced against the critical need for zero data loss and zone outage resilience. For achieving these high availability requirements, Premium offers a more cost-effective solution compared to Business Critical Managed Instance.
Let’s analyze why the other options are less suitable:
A. Azure SQL Managed Instance General Purpose:
Data Loss Failover: General Purpose Managed Instance aims for high availability but does not guarantee zero data loss in all failover scenarios. It uses standard storage and while it uses synchronous replication within a single availability zone (and across zones if zone-redundant), it might have a small potential for data loss (RPO > 0).
Zone Outage Resilience: General Purpose Managed Instance can be configured for zone redundancy.
Cost: General Purpose Managed Instance is generally less expensive than Business Critical Managed Instance, but often more expensive than Azure SQL Database Premium for comparable single database scenarios.
B. Azure SQL Database Hyperscale:
Data Loss Failover: Hyperscale is designed for very large databases and high performance. While it has high data durability and availability, its architecture, which separates compute and storage tiers, might not guarantee absolute zero data loss in all failover scenarios compared to the synchronous replication of Premium/Business Critical.
Zone Outage Resilience: Hyperscale can be configured for zone redundancy.
Cost: Hyperscale can be cost-effective for very large databases, but might be more expensive than Premium for smaller to medium-sized databases and is not specifically optimized for zero data loss guarantees in the same way as Premium/Business Critical.
D. Azure SQL Managed Instance Business Critical:
Data Loss Failover: Azure SQL Managed Instance Business Critical is designed for the highest levels of performance and availability and guarantees zero data loss during failover due to synchronous replication.
Zone Outage Resilience: Azure SQL Managed Instance Business Critical is zone-redundant by default and is designed to survive zone outages.
Cost: Business Critical Managed Instance is the most expensive option listed. While it meets the zero data loss and zone outage requirements, it is not the option that minimizes costs while meeting these requirements.
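As an illustrative sketch only of the selected option, the snippet below provisions a zone-redundant Premium database with the track-2 azure-mgmt-sql Python SDK; the subscription ID, resource group, server, database name, and region are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.sql import SqlManagementClient
    from azure.mgmt.sql.models import Database, Sku

    client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
    poller = client.databases.begin_create_or_update(
        "rg-sales",                    # placeholder resource group
        "sql-litware",                 # placeholder logical server
        "salesdb",                     # placeholder database name
        Database(
            location="eastus2",
            sku=Sku(name="P1", tier="Premium"),
            zone_redundant=True,       # spread replicas across availability zones
        ),
    )
    print(poller.result().zone_redundant)

The combination of the Premium sku and zone_redundant=True gives synchronous replicas in separate zones, which is what delivers the zero-data-loss failover and zone-outage resilience discussed above.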
You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.
What should you include in the recommendation?
A. Azure Service Fabric
B. Azure Traffic Manager
C. Azure Queue Storage
D. Azure Notification Hubs
The correct answer is C. Azure Queue Storage.
Explanation:
Azure Queue Storage is a service specifically designed for asynchronous message queuing. It enables different components of an application (in this case, the cloud services for orders, billing, payment, inventory, and shipping) to communicate reliably and asynchronously by sending and receiving messages from queues.
Here’s why Azure Queue Storage is the best fit and why the other options are not:
Asynchronous Communication: Azure Queue Storage’s primary purpose is to facilitate asynchronous communication. Services can enqueue messages into a queue without needing to wait for an immediate response from the receiving service. This is ideal for decoupling components and improving the overall responsiveness and resilience of the application.
XML Message Support: Azure Queue Storage can handle messages in various formats, including text-based formats like XML. You can easily serialize your transaction information into XML and use it as the message payload within Azure Queue Storage.
Service-to-Service Communication: Azure Queue Storage is designed for communication between different services within an application architecture. The various cloud services in the sales application can use queues to exchange transaction information without direct, synchronous dependencies.
Reliability and Scalability: Azure Queue Storage is a highly reliable and scalable service. Messages are persisted and guaranteed to be delivered, even if components fail or experience transient issues.
Why the other options are incorrect:
A. Azure Service Fabric: Azure Service Fabric is a distributed systems platform for packaging, deploying, and managing microservices and containerized applications. While Service Fabric can be used for building applications that communicate asynchronously (and is a powerful platform), it’s a much more complex and comprehensive platform than necessary for simply enabling asynchronous XML message communication between services. It’s overkill for this specific requirement. Service Fabric is for building and managing microservices architectures, not just message queuing.
B. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. It’s used to direct client traffic to different endpoints based on various routing methods (performance, geographic location, etc.). It’s designed for managing client-to-application traffic and improving the availability and responsiveness of web applications. It’s not intended for service-to-service asynchronous communication within the application backend.
D. Azure Notification Hubs: Azure Notification Hubs is a service for sending push notifications to mobile devices (iOS, Android, Windows, etc.) and other client applications. It’s designed for broadcasting notifications to end-users, not for service-to-service communication within a backend system.
Therefore, Azure Queue Storage is the most straightforward, cost-effective, and appropriate solution for enabling asynchronous XML message communication between the cloud services in the sales application.
Final Answer: C
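As a complement to the enqueue sketch shown after the earlier, identically worded question, this receive-side sketch (same assumptions and placeholders) shows a downstream service such as billing dequeuing and parsing the XML message with the azure-storage-queue Python SDK.

    import xml.etree.ElementTree as ET

    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string("<storage-connection-string>", "transactions")

    # Receive up to 10 messages and hide them from other consumers for 30 seconds.
    for message in queue.receive_messages(messages_per_page=10, visibility_timeout=30):
        transaction = ET.fromstring(message.content)         # parse the XML payload
        print(transaction.findtext("orderId"), transaction.findtext("status"))
        queue.delete_message(message)                         # remove once processed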
You need to design a highly available Azure SQL database that meets the following requirements:
- Failover between replicas of the database must occur without any data loss.
- The database must remain available in the event of a zone outage.
- Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Managed Instance Business Critical
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Standard
D. Azure SQL Database Premium
The correct answer is D. Azure SQL Database Premium.
Explanation:
Let’s analyze each option against the given requirements:
A. Azure SQL Managed Instance Business Critical:
Failover without data loss: Yes. Business Critical uses synchronous replication to three replicas, ensuring zero data loss during failover.
Availability in zone outage: Yes. Business Critical is zone-redundant, placing replicas in different availability zones.
Minimize costs: No. Business Critical is the most expensive option among those listed. While it provides the highest level of performance and availability, it’s not the most cost-minimized solution.
B. Azure SQL Managed Instance General Purpose:
Failover without data loss: No. General Purpose uses standard storage and asynchronous replication for the data tier (though synchronous for compute). While data loss is minimized, it’s not guaranteed to be zero in all failover scenarios.
Availability in zone outage: Yes. General Purpose can be configured for zone redundancy.
Minimize costs: More cost-effective than Business Critical Managed Instance, but generally more expensive than single Azure SQL Database options for comparable workloads.
C. Azure SQL Database Standard:
Failover without data loss: No. Standard tier uses standard storage and synchronous replication within a single datacenter (in non-zone redundant configuration). While it aims for minimal data loss, it’s not guaranteed to be zero in all failover scenarios, especially compared to Premium/Business Critical.
Availability in zone outage: Yes. Standard tier can be configured for zone redundancy.
Minimize costs: More cost-effective than Premium and Managed Instance options.
D. Azure SQL Database Premium:
Failover without data loss: Yes. Premium tier uses synchronous replication to ensure zero data loss during failover.
Availability in zone outage: Yes. Premium tier can be configured to be zone-redundant.
Minimize costs: More cost-effective than Business Critical Managed Instance while still meeting the zero data loss and zone outage requirements. It is more expensive than Standard and Hyperscale, but delivers on the critical requirements.
Rationale for choosing Azure SQL Database Premium:
Azure SQL Database Premium strikes the best balance between the requirements:
It guarantees zero data loss during failover due to synchronous replication.
It provides zone redundancy, ensuring availability during zone outages.
It is more cost-effective than Azure SQL Managed Instance Business Critical, while still meeting the stringent availability and data loss prevention requirements.
While Business Critical Managed Instance also meets the first two requirements, it is significantly more expensive. Standard tier is cheaper but does not guarantee zero data loss. Hyperscale is not listed, but it also doesn’t provide the same level of zero data loss guarantee as Premium/Business Critical and might not be the most cost-effective for all scenarios.
Therefore, Azure SQL Database Premium is the most suitable deployment option when considering zero data loss failover, zone outage availability, and cost minimization.
Final Answer: D
You need to design a highly available Azure SQL database that meets the following requirements:
- Failover between replicas of the database must occur without any data loss.
- The database must remain available in the event of a zone outage.
- Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Database Serverless
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Basic
D. Azure SQL Database Business Critical
The correct answer is D. Azure SQL Database Business Critical.
Explanation:
Let’s break down each deployment option against the requirements:
A. Azure SQL Database Serverless:
Failover without data loss: Not guaranteed. The serverless compute tier is available only in the General Purpose (and Hyperscale) service tiers, not in Premium or Business Critical, so it does not provide the synchronous-replication, zero-data-loss failover guarantee required here.
Availability in zone outage: Serverless databases in the General Purpose tier can be configured for zone redundancy, subject to regional support.
Minimize costs: Serverless can be cost-effective for intermittent workloads due to its auto-pausing and auto-scaling compute. However, for continuously available databases, the cost savings might be less significant, and the complexity of managing serverless compute scaling might outweigh the benefits for this specific scenario.
B. Azure SQL Managed Instance General Purpose:
Failover without data loss: No. General Purpose Managed Instance uses standard storage and asynchronous replication for the data tier (though synchronous for compute). This means that in the event of a failover, there is a potential for data loss (though typically minimal).
Availability in zone outage: Yes. General Purpose Managed Instance can be configured to be zone-redundant.
Minimize costs: More cost-effective than Business Critical Managed Instance, but generally more expensive than single Azure SQL Database options (like Standard, Premium) for comparable single database scenarios.
C. Azure SQL Database Basic:
Failover without data loss: No. Basic tier is a single instance database with no high availability. Failover will likely result in data loss and downtime.
Availability in zone outage: No. Basic tier is not zone-redundant and offers no protection against zone outages.
Minimize costs: Yes. Basic is the least expensive option, but it fails to meet the other critical requirements.
D. Azure SQL Database Business Critical:
Failover without data loss: Yes. Business Critical is designed for mission-critical workloads and guarantees zero data loss during failovers. It uses synchronous replication to three replicas, ensuring that every transaction is committed to multiple replicas before being acknowledged.
Availability in zone outage: Yes. Business Critical is zone-redundant by default, placing replicas in different availability zones to ensure availability even if a zone fails.
Minimize costs: No. Business Critical is the most expensive option among those listed. However, it is the only option that definitively meets the zero data loss and zone outage availability requirements. The “minimize costs” requirement is important but must be balanced against the other critical needs.
Rationale for choosing Azure SQL Database Business Critical:
Azure SQL Database Business Critical is the only option that definitively and reliably meets all the core requirements, especially the critical ones:
Guaranteed zero data loss failover: Achieved through synchronous replication.
Zone outage availability: Achieved through built-in zone redundancy.
You need to design a highly available Azure SQL database that meets the following requirements:
- Failover between replicas of the database must occur without any data loss.
- The database must remain available in the event of a zone outage.
- Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Database Standard
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Serverless
D. Azure SQL Database Premium
The correct answer is D. Azure SQL Database Premium.
Explanation:
Let’s analyze each option against the requirements:
A. Azure SQL Database Standard:
Failover without data loss: No. While Standard tier offers high availability, it uses synchronous replication within a single datacenter (for non-zone redundant deployments). It does not guarantee zero data loss in all failover scenarios, especially during zone outages. There’s a potential for minimal data loss in asynchronous replication scenarios or during certain types of failovers.
Availability in zone outage: Yes, Azure SQL Database Standard can be configured for zone redundancy.
Minimize costs: Yes, Standard is generally less expensive than Premium and Managed Instance options. However, it compromises on the zero data loss guarantee.
B. Azure SQL Managed Instance General Purpose:
Failover without data loss: No. General Purpose Managed Instance uses standard storage and asynchronous replication for the data tier (though synchronous for compute). This means that it does not guarantee zero data loss during failovers. There’s a potential for data loss, although Azure aims to minimize it.
Availability in zone outage: Yes, Azure SQL Managed Instance General Purpose can be configured for zone redundancy.
Minimize costs: More expensive than Azure SQL Database Standard, but generally less expensive than Business Critical Managed Instance and often less than Azure SQL Database Premium for comparable resource levels in some scenarios.
C. Azure SQL Database Serverless:
Failover without data loss: Not guaranteed. The serverless compute tier is offered only in the General Purpose (and Hyperscale) service tiers, not Premium or Business Critical, so it does not inherently provide a zero-data-loss failover guarantee.
Availability in zone outage: Serverless databases in the General Purpose tier can be configured for zone redundancy, depending on regional support.
Minimize costs: Azure SQL Database Serverless is designed to minimize costs for intermittent workloads. For a database that needs to be highly available and continuously running, the cost benefits of Serverless might be less pronounced, and the management of auto-scaling might add complexity.
D. Azure SQL Database Premium:
Failover without data loss: Yes. Azure SQL Database Premium is designed for mission-critical workloads and guarantees zero data loss during failovers. It achieves this through synchronous replication.
Availability in zone outage: Yes. Azure SQL Database Premium can be configured to be zone-redundant, placing replicas across availability zones.
Minimize costs: While Azure SQL Database Premium is more expensive than Standard, it is generally less expensive than Azure SQL Managed Instance Business Critical, and it is the most cost-effective option among those listed that fully meets both the zero data loss and zone outage availability requirements.
Rationale for selecting Azure SQL Database Premium:
Azure SQL Database Premium is the optimal choice because it effectively balances all three requirements:
Guaranteed Zero Data Loss Failover: Achieved through synchronous replication.
Zone Outage Availability: Achieved through zone redundancy configuration.
Cost Minimization (within the context of the HA requirements): It provides these HA features at a lower cost than Azure SQL Managed Instance Business Critical, which is the only other option that definitively guarantees zero data loss and zone outage resilience among the choices.
Therefore, for a highly available Azure SQL database requiring zero data loss failover, zone outage availability, and minimized costs (while still meeting the HA needs), Azure SQL Database Premium is the most appropriate deployment option.
Final Answer: D
DRAG DROP
–
You plan to deploy an infrastructure solution that will contain the following configurations:
- External users will access the infrastructure by using Azure Front Door.
- External user access to the backend APIs hosted in Azure Kubernetes Service (AKS) will be controlled by using Azure API Management.
- External users will be authenticated by an Azure AD B2C tenant that uses OpenID Connect-based federation with a third-party identity provider.
Which function does each service provide? To answer, drag the appropriate functions to the correct services. Each function may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Functions
Protection against Open Web Application Security Project (OWASP) vulnerabilities
IP filtering on a per-API level
Validation of Azure B2C JSON Web Tokens (JWTs)
Answer Area
Front Door: Function
API Management: Function
Selecting the single most relevant function for each service, based on typical Azure architecture practice, the correct answer is:
Front Door: Function
Protection against Open Web Application Security Project (OWASP) vulnerabilities
API Management: Function
Validation of Azure B2C JSON Web Tokens (JWTs)
Why this is the closest and most correct answer (with single selection per service):
Let’s break down each function and service again, focusing on the primary role of each service in the described architecture:
Protection against Open Web Application Security Project (OWASP) vulnerabilities:
Front Door is primarily designed to be the edge security layer. Its Web Application Firewall (WAF) capability is specifically built to protect against OWASP top 10 vulnerabilities. Placing WAF at the edge, in Front Door, is a best practice to filter out malicious traffic before it even reaches the backend services (AKS and API Management).
While API Management can have some security policies, including some basic request filtering, it is not the primary service for comprehensive OWASP protection. Front Door’s WAF is the more robust and appropriate service for this function in this architecture.
IP filtering on a per-API level:
API Management is well-suited for granular, per-API level access control. API Management policies can be defined at different scopes, including the API level, allowing you to implement IP filtering specifically for individual APIs. This is important for scenarios where different APIs might have different access requirements based on source IP.
Front Door can also perform IP filtering, but it’s generally at a more global routing rule level rather than specifically tied to individual backend APIs. API Management’s policy engine is designed for this level of API-specific control.
Validation of Azure B2C JSON Web Tokens (JWTs):
API Management is the central point for API security and authorization in this architecture. When external users are authenticated by Azure AD B2C, they receive JWTs. API Management, acting as the API gateway, is responsible for validating these JWTs to ensure that only authenticated and authorized users can access the backend APIs hosted in AKS. API Management has built-in policies (like validate-jwt) for JWT validation.
Front Door is not designed for JWT validation in the context of API authorization. Its role is more about edge routing, performance, and WAF. While it handles TLS termination, it doesn’t typically delve into application-level authorization like JWT validation.
Rationale for Single Selection (Primary Function):
If the question format forces you to select only one function for each service, you should choose the function that represents the primary and most defining security role of that service in the given architecture.
For Front Door, the most prominent security function is edge WAF and OWASP protection.
For API Management, the most prominent security function in this context is API authorization through JWT validation.
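In API Management itself this check is normally expressed with the validate-jwt policy. Purely to illustrate what that validation involves (this is not APIM configuration), the sketch below verifies a B2C-issued token in Python with the PyJWT library; the tenant, user-flow policy, and audience values are placeholders.

    import jwt
    from jwt import PyJWKClient

    # Placeholder B2C metadata: <tenant> and <policy> identify the B2C user flow.
    JWKS_URL = "https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/<policy>/discovery/v2.0/keys"

    def validate_b2c_token(token: str) -> dict:
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
        # Rejects tokens with a bad signature, wrong audience, or expired lifetime.
        return jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            audience="<api-client-id>",   # placeholder application (client) ID
        )

API Management performs the equivalent checks at the gateway, before requests ever reach the AKS-hosted APIs.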
HOTSPOT
–
You are developing a multi-tier app named App1 that will be hosted on Azure virtual machines. The peak utilization periods for App1 will be from 8 AM to 9 AM and 4 PM to 5 PM on weekdays.
You need to deploy the infrastructure for App1. The solution must meet the following requirements:
- Support virtual machines deployed to four availability zones across two Azure regions.
- Minimize costs by accumulating CPU credits during periods of low utilization.
What is the minimum number of virtual networks you should deploy, and which virtual machine size should you use? To answer, select the appropriate options in the answer area.
Answer Area
Number of virtual networks:
1
2
3
4
Virtual machine size:
A-Series
B-Series
D-Series
M-Series
Answer Area:
Number of virtual networks: 2
Virtual machine size: B-Series
Explanation:
Number of virtual networks: 2
Virtual Networks and Azure Regions: Azure Virtual Networks are regional resources. They cannot span across multiple Azure regions. To deploy resources in two different Azure regions (as required for four availability zones across two regions), you need at least two virtual networks, one in each region.
Availability Zones within Regions: Availability Zones are physically separate datacenters within the same Azure region. You can deploy VMs across multiple Availability Zones within a single virtual network in a given region. However, to extend to another region, you need a separate virtual network in that second region.
Minimum Number: To cover two Azure regions, the minimum number of virtual networks is two. You would have one virtual network in the first region and another virtual network in the second region. Within each virtual network, you can then deploy VMs across the desired number of availability zones available in that region.
Virtual machine size: B-Series
CPU Credits and Burstable Performance: The requirement to “minimize costs by accumulating CPU credits during periods of low utilization” directly points to B-Series virtual machines.
B-Series VMs: B-Series VMs are designed to be economical for workloads that do not need to run at full CPU utilization continuously, such as web servers, development/test environments, and small databases. They operate on a credit system:
Credit Accumulation: When the VM utilizes less CPU than its baseline performance, it accumulates CPU credits.
Credit Consumption (Bursting): When the VM needs to perform at higher CPU levels (like during peak utilization from 8-9 AM and 4-5 PM), it can “burst” above its baseline performance by consuming the accumulated credits.
Cost Optimization: By accumulating credits during low utilization, you effectively pay less for the compute resources during those periods and use those credits for periods of higher demand, minimizing overall costs for variable workloads.
Why not other VM sizes:
A-Series: A-Series VMs are basic entry-level VMs and are not designed for bursting or credit accumulation. They are generally used for very light workloads or dev/test scenarios where consistent performance is not critical.
D-Series: D-Series VMs are general-purpose VMs that offer a good balance of compute, memory, and storage. They are designed for a wide range of workloads, but they do not have the burstable performance and credit accumulation feature of B-Series VMs. They are better suited for workloads with more consistent CPU demands.
M-Series: M-Series VMs are memory-optimized VMs, designed for memory-intensive workloads like large databases or in-memory analytics. They are not focused on CPU bursting or credit accumulation for cost optimization.
Therefore, the minimum number of virtual networks is 2, and the optimal virtual machine size to minimize costs using CPU credits is B-Series.
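Purely to illustrate the banking-and-bursting behavior described above, here is a toy Python model of a day of App1 traffic. The baseline percentage and usage numbers are invented for the example and are not official B-Series figures.

    # Toy model of B-Series CPU credit banking: credits accrue while usage stays
    # below the baseline and are spent when App1 bursts during the 8-9 AM and
    # 4-5 PM peaks. All numbers are illustrative assumptions.
    BASELINE = 0.20                                   # assumed baseline CPU fraction
    hourly_usage = [0.05] * 8 + [0.90] + [0.05] * 7 + [0.90] + [0.05] * 7
    credits = 0.0
    for hour, usage in enumerate(hourly_usage):
        credits = max(credits + (BASELINE - usage) * 60, 0.0)   # minutes of credit
        print(f"{hour:02d}:00  usage={usage:.0%}  credit balance={credits:6.1f}")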
You need to recommend an Azure Storage solution that meets the following requirements:
✑ The storage must support 1 PB of data.
✑ The data must be stored in blob storage.
✑ The storage must support three levels of subfolders.
✑ The storage must support access control lists (ACLs).
What should you include in the recommendation?
A. a premium storage account that is configured for block blobs
B. a general purpose v2 storage account that has hierarchical namespace enabled
C. a premium storage account that is configured for page blobs
D. a premium storage account that is configured for file shares and supports large file shares
The correct answer is B.
Let’s analyze each requirement and how the options address them:
Requirement 1: The storage must support 1 PB of data.
All Azure Storage account types, including Premium and General Purpose v2, can scale to petabytes of data, well beyond 1 PB. This requirement doesn’t eliminate any of the options.
Requirement 2: The data must be stored in blob storage.
Options A, B, and C explicitly mention blob storage (block blobs, hierarchical namespace in blob storage, page blobs).
Option D mentions file shares. While Azure File Shares are built on Azure Storage and use underlying blob storage, they are accessed via SMB protocol and are conceptually different from directly using blob storage APIs. Option D is less directly aligned with this requirement than A, B, and C.
Requirement 3: The storage must support three levels of subfolders.
Option A (Premium Block Blobs) and Option C (Premium Page Blobs): Standard blob storage (including premium without hierarchical namespace) is flat. While you can simulate folders using prefixes in blob names (e.g., folder1/folder2/blob.txt), this is not a true hierarchical namespace and doesn’t offer native folder management or ACLs at folder levels.
Option B (General Purpose v2 with Hierarchical Namespace): Hierarchical Namespace (HNS) for Azure Blob Storage is specifically designed to provide a true file system-like directory structure within blob storage. It supports multiple levels of subfolders, easily meeting the requirement of three levels.
Option D (Premium File Shares): Azure File Shares inherently support hierarchical folders and subfolders as they are designed as network file shares accessed via SMB protocol.
Requirement 4: The storage must support access control lists (ACLs).
Option A (Premium Block Blobs) and Option C (Premium Page Blobs): Standard blob storage supports container-level ACLs and blob-level ACLs. However, ACL management at the simulated folder level in flat blob storage is complex and not natively supported.
Option B (General Purpose v2 with Hierarchical Namespace): Hierarchical Namespace (HNS) enables POSIX-like ACLs on directories and files (blobs within the hierarchy). This allows for granular permission management at folder and file levels, exactly as required.
Option D (Premium File Shares): Azure File Shares support NTFS-style ACLs which are the standard ACLs used in Windows file systems. This provides robust access control at folder and file levels.
Analyzing the best fit:
Option B (General Purpose v2 storage account that has hierarchical namespace enabled) directly and completely satisfies all four requirements. It provides scalable blob storage, supports hierarchical folders, and offers ACLs for those folders and blobs within them.
Option D (Premium storage account that is configured for file shares and supports large file shares) is close in terms of folder and ACL support, but it deviates from the “must be stored in blob storage” requirement (requirement 2). While File Shares are built on Azure Storage, they are not directly accessed as blob storage.
Options A and C (Premium storage accounts without hierarchical namespace) fail to adequately meet the subfolder (requirement 3) and folder-level ACLs (requirement 4) requirements, as they are flat blob storage structures.
Conclusion:
Option B is the most correct and closest answer. It is the only option that fully addresses all four requirements, especially the crucial requirements for hierarchical subfolders and ACLs within blob storage, using the intended Azure feature (Hierarchical Namespace) for this purpose.
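A minimal sketch of the recommended configuration, assuming the azure-storage-file-datalake Python SDK; the account URL, container name, directory path, and ACL string are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    # Hierarchical-namespace (ADLS Gen2) endpoint of a GPv2 account (placeholder).
    service = DataLakeServiceClient(
        "https://<account>.dfs.core.windows.net", credential=DefaultAzureCredential()
    )
    filesystem = service.create_file_system("data")               # blob container
    folder = filesystem.create_directory("level1/level2/level3")  # three subfolder levels
    folder.set_access_control(acl="user::rwx,group::r-x,other::---")  # POSIX-style ACL

The three-level directory path and the POSIX-style ACL string are exactly the capabilities that require the hierarchical namespace of option B.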
You need to design a highly available Azure SQL database that meets the following requirements:
- Failover between replicas of the database must occur without any data loss.
- The database must remain available in the event of a zone outage.
- Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Database Basic
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Business Critical
D. Azure SQL Managed Instance Business Critical
Let’s break down each requirement and evaluate the Azure SQL Database deployment options:
Requirements:
Failover without data loss: This necessitates synchronous data replication. In synchronous replication, a transaction is committed only after it is written to both the primary and secondary replicas. This ensures that in case of a failover, no committed data is lost.
Database remains available in a zone outage: This requires zone redundancy. Zone redundancy means the database replicas are spread across different availability zones within an Azure region. If one zone fails, the database remains available in another zone.
Costs must be minimized: We need to choose the least expensive option that meets the above two requirements.
Analyzing each option:
A. Azure SQL Database Basic:
High Availability: Basic tier offers local redundancy within a single data center. It does not provide zone redundancy. Failovers are possible, but they are not guaranteed to be without data loss as it uses standard storage and asynchronous replication concepts.
Zone Outage Resilience: No. Basic tier is not zone-redundant.
Cost: Basic is the least expensive tier.
Meets Requirements? No. Fails on data loss prevention and zone outage resilience.
B. Azure SQL Managed Instance General Purpose:
High Availability: General Purpose tier offers high availability using remote storage (Azure Premium Storage) with locally redundant storage (LRS). While failovers are generally fast, it uses asynchronous replication to the remote storage layer, which means there is a potential for data loss during a failover, especially for the most recent transactions not yet replicated. While it can be configured for zone redundancy, the underlying storage replication is still not synchronous across zones in the same way as Business Critical.
Zone Outage Resilience: General Purpose can be configured for zone redundancy.
Cost: General Purpose is less expensive than Business Critical.
Meets Requirements? No. While it can be zone-redundant, it does not guarantee zero data loss failover.
C. Azure SQL Database Business Critical:
High Availability: Business Critical tier is designed for the highest level of availability and performance. It uses synchronous replication to maintain data consistency between replicas. Failovers are designed to be zero data loss. It uses local SSD storage for very low latency.
Zone Outage Resilience: Business Critical can be configured for zone redundancy. Zone-redundant Business Critical deployments place replicas in different availability zones, ensuring database availability even during a zone outage.
Cost: Business Critical is more expensive than General Purpose and Basic, but less expensive than Managed Instance Business Critical in many scenarios.
Meets Requirements? Yes. Meets both zero data loss failover and zone outage resilience requirements when configured for zone redundancy.
D. Azure SQL Managed Instance Business Critical:
High Availability: Business Critical Managed Instance offers the same high availability characteristics as Business Critical Azure SQL Database, including synchronous replication and zero data loss failover.
Zone Outage Resilience: Business Critical Managed Instance can also be configured for zone redundancy.
Cost: Business Critical Managed Instance is generally more expensive than Business Critical Azure SQL Database for equivalent resources because of the added instance-level features and isolation.
Meets Requirements? Yes. Meets both zero data loss failover and zone outage resilience requirements when configured for zone redundancy.
Choosing the best option for cost minimization:
Both Business Critical options (C and D) meet the high availability and zone redundancy requirements. However, to minimize costs, we should choose the less expensive option between them. Azure SQL Database Business Critical (Option C) is generally less expensive than Azure SQL Managed Instance Business Critical (Option D) for similar performance and capacity, as Managed Instance includes additional management and instance-level features that add to the cost.
Conclusion:
Option C. Azure SQL Database Business Critical is the most appropriate deployment option. It meets all the requirements: zero data loss failover (due to synchronous replication), zone outage resilience (when configured for zone redundancy), and is the less expensive option compared to Business Critical Managed Instance while still fulfilling the high availability needs.
Final Answer: C
HOTSPOT –
You manage a database environment for a Microsoft Volume Licensing customer named Contoso, Ltd. Contoso uses License Mobility through Software Assurance.
You need to deploy 50 databases. The solution must meet the following requirements:
✑ Support automatic scaling.
✑ Minimize Microsoft SQL Server licensing costs.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Purchase model:
DTU
vCore
Azure reserved virtual machine instances
Deployment option:
An Azure SQL managed instance
An Azure SQL Database elastic pool
A SQL Server Always On availability group
Hot Area:
Purchase model: vCore
Deployment option: An Azure SQL Database elastic pool
To address the requirements, we need to select the purchase model and deployment option that best align with License Mobility, automatic scaling, and cost minimization for 50 databases.
Purchase Model:
vCore: The vCore-based purchasing model is the correct choice for leveraging License Mobility through Software Assurance. With vCore, you can choose to pay for the infrastructure (compute and storage) separately and apply your existing SQL Server licenses through License Mobility. This directly minimizes Microsoft SQL Server licensing costs because you are using licenses you already own.
DTU: The DTU (Database Transaction Unit) model includes the SQL Server license cost in the price. Using DTU would mean you are paying for new SQL Server licenses, even though Contoso has existing licenses through Software Assurance and License Mobility. This does not minimize licensing costs in this scenario.
Azure reserved virtual machine instances: Reserved VM instances are a pricing option for virtual machines. While you could technically deploy SQL Server on Azure VMs and use reserved instances for the VMs, this is not the most efficient or cost-effective way to deploy 50 databases with automatic scaling requirements in Azure SQL Database PaaS. It also doesn’t directly relate to the purchase model within Azure SQL Database itself (DTU vs vCore). Reserved instances are more relevant for IaaS deployments, not PaaS Azure SQL Database scenarios directly addressing License Mobility and automatic scaling as efficiently as vCore and Elastic Pools.
Deployment Option:
An Azure SQL Database elastic pool: Elastic pools are specifically designed to manage the performance and cost of a large number of databases with varying usage patterns. Databases in an elastic pool share a pool of resources, which allows for efficient resource utilization and cost optimization. Elastic pools support automatic scaling of resources allocated to the pool, and individual databases within the pool can benefit from these resources as needed. This is ideal for deploying 50 databases and managing them efficiently with automatic scaling while minimizing costs. Elastic pools also support License Mobility at the pool level when using vCore purchase model.
An Azure SQL managed instance: Managed Instance is a good option for migrating on-premises SQL Server instances to Azure with high compatibility. It also supports License Mobility and automatic scaling. However, for deploying 50 databases, using 50 separate Managed Instances would likely be more expensive and complex to manage than using an elastic pool, especially if these databases are not individually very large or resource-intensive. Managed instances are generally more suited for migrating entire applications with existing SQL Server instances, rather than deploying a large number of new databases from scratch where resource sharing and cost efficiency are primary concerns.
A SQL Server Always On availability group: Always On Availability Groups are a high-availability and disaster recovery (HA/DR) solution for SQL Server. While you can deploy Always On AGs in Azure VMs, this is a more complex Infrastructure-as-a-Service (IaaS) approach. It is not the best option for automatically scaling and minimizing costs for 50 databases compared to PaaS Azure SQL Database options like Elastic Pools. Setting up and managing Always On AGs is more complex and generally more expensive than using Elastic Pools, and it’s primarily focused on HA/DR, not on efficient management of a large number of databases with automatic scaling and License Mobility benefits.
Conclusion:
The combination of vCore purchase model and Azure SQL Database elastic pool is the most appropriate solution to meet all the given requirements: License Mobility, automatic scaling, and minimizing Microsoft SQL Server licensing costs for deploying 50 databases.
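As a rough sketch of the selected combination, assuming the track-2 azure-mgmt-sql Python SDK and that the ElasticPool model accepts license_type as in recent SDK versions; all resource names are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.sql import SqlManagementClient
    from azure.mgmt.sql.models import ElasticPool, Sku

    client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
    pool = client.elastic_pools.begin_create_or_update(
        "rg-contoso",                  # placeholder resource group
        "sql-contoso",                 # placeholder logical server
        "pool1",
        ElasticPool(
            location="eastus",
            sku=Sku(name="GP_Gen5", tier="GeneralPurpose", capacity=4),  # vCore model
            license_type="BasePrice",  # bring existing SQL Server licenses
        ),
    ).result()
    print(pool.state)

Each of the 50 databases would then be created with its elastic_pool_id pointing at this pool so they share the pool's vCores, and license_type="BasePrice" applies Contoso's existing licenses instead of paying for new ones.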
You have an on-premises application named App1 that uses an Oracle database.
You plan to use Azure Databricks to transform and load data from App1 to an Azure Synapse Analytics instance.
You need to ensure that the App1 data is available to Databricks.
Which two Azure services should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Azure Data Box Gateway
B. Azure Import/Export service
C. Azure Data Lake Storage
D. Azure Data Box Edge
E. Azure Data Factory
Correct Answer: CE
Azure Data Factory is a data integration service that provides a low-code or no-code approach to building extract, transform, and load (ETL) pipelines. Using a self-hosted integration runtime, Data Factory can connect to the on-premises Oracle database used by App1 and copy its data into Azure.
Azure Data Lake Storage provides the landing zone for the copied data. Azure Databricks can then read the staged data directly from Data Lake Storage, transform it, and load it into the Azure Synapse Analytics instance. Together, Data Factory (E) and Data Lake Storage (C) make the App1 data available to Databricks.
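Once Data Factory has landed the Oracle data in Data Lake Storage, a Databricks notebook can read it and push it to Synapse. The following PySpark sketch assumes a Databricks notebook (where spark is predefined) and the built-in Azure Synapse connector; the storage account, paths, table name, and JDBC URL are placeholders.

    # Read the App1 data that Data Factory staged in Data Lake Storage.
    orders = spark.read.parquet("abfss://staging@<account>.dfs.core.windows.net/app1/orders/")

    # Write the transformed data to Azure Synapse Analytics via the Synapse connector.
    (orders.write
        .format("com.databricks.spark.sqldw")
        .option("url", "<synapse-jdbc-connection-string>")
        .option("tempDir", "abfss://staging@<account>.dfs.core.windows.net/tmp/")
        .option("forwardSparkAzureStorageCredentials", "true")
        .option("dbTable", "dbo.Orders")
        .save())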
You have an Azure subscription.
You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:
- Minimize the time it takes to provision compute resources during scale-out operations.
- Support autoscaling of Windows Server containers.
Which scaling option should you recommend?
A. horizontal pod autoscaler
B. Virtual nodes
C. Kubernetes version 1.20.2 or newer
D. cluster autoscaler
Let’s analyze each option in the context of the requirements for an AKS solution with Windows Server 2019 nodes, focusing on minimizing provisioning time during scale-out operations and supporting Windows container autoscaling.
A. Horizontal Pod Autoscaler (HPA):
Function: HPA automatically scales the number of pods in a deployment, replica set, or stateful set based on observed CPU utilization, memory utilization, or custom metrics.
Provisioning Time: HPA operates within the existing nodes in the AKS cluster. It does not provision new compute resources (nodes). Therefore, it does not directly minimize the time it takes to provision compute resources during scale-out operations. It scales pods, but if there aren’t enough nodes to place those pods, it will not help with node provisioning speed.
Windows Container Autoscaling: HPA is compatible with Windows Server containers and can be used to autoscale Windows-based applications.
B. Virtual nodes:
Function: Virtual nodes use Azure Container Instances (ACI) to provision pods. When you deploy pods to virtual nodes, they are run in serverless ACI, bypassing the need to provision and manage Kubernetes nodes directly.
Provisioning Time: Virtual nodes are designed to significantly minimize the time it takes to provision compute resources. ACI instances can start very quickly compared to provisioning new virtual machines for node pools. This directly addresses the requirement to minimize provisioning time.
Windows Container Autoscaling: Virtual nodes do not support Windows Server containers; the virtual node add-on schedules Linux pods only. Despite the fast scale-out, this rules it out for this scenario.
Limitations: In addition to the Linux-only restriction, virtual nodes have limitations compared to standard AKS nodes in terms of features, networking, and persistent storage options.
C. Kubernetes version 1.20.2 or newer:
Function: Kubernetes versions introduce new features, bug fixes, and performance improvements. However, a specific Kubernetes version is not a scaling option itself. It might enable or improve certain scaling features, but it’s not the primary mechanism for scaling.
Provisioning Time: Upgrading to a newer Kubernetes version might have general performance improvements, but it does not directly minimize the time it takes to provision compute resources during scale-out.
Windows Container Autoscaling: Kubernetes version compatibility is important for Windows container support, but version alone is not the scaling mechanism.
Relevance: While keeping Kubernetes updated is best practice, it’s not the scaling option asked for in the question.
D. Cluster autoscaler:
Function: Cluster autoscaler automatically adjusts the number of nodes in an AKS cluster. When pods cannot be scheduled due to insufficient resources, the cluster autoscaler provisions new nodes (VMs) in the node pool.
Provisioning Time: Cluster autoscaler provisions new compute resources (nodes), so scale-out time is bounded by how quickly Azure can create virtual machines. It fully automates the process, and among the options that actually support Windows Server node pools it is the mechanism that adds compute capacity during scale-out.
Windows Container Autoscaling: Cluster autoscaler is compatible with Windows Server node pools and can be used to autoscale Windows containers by adding more Windows nodes.
Comparing Options for Minimizing Provisioning Time and Windows Support:
Virtual nodes (B) offer the fastest pod start-up through ACI, but they support Linux pods only and therefore cannot autoscale Windows Server containers.
Cluster autoscaler (D) automates node scaling for Windows Server 2019 node pools; its scale-out speed is bounded by VM provisioning, but it is the only option here that both adds compute automatically and supports Windows containers.
HPA (A) scales pods within existing nodes, not compute resources.
Kubernetes version (C) is not a scaling option.
Conclusion:
Because the solution must autoscale Windows Server containers, and virtual nodes schedule Linux pods only, the cluster autoscaler (D) is the scaling option that satisfies both requirements. It automatically provisions additional Windows Server nodes when pods cannot be scheduled, which keeps scale-out as fast as the platform allows for Windows node pools.
Final Answer: D
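As a rough sketch of the recommended option, the snippet below adds a Windows Server node pool with the cluster autoscaler enabled, assuming the track-2 azure-mgmt-containerservice Python SDK; the subscription, resource group, cluster, and pool names are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.containerservice import ContainerServiceClient
    from azure.mgmt.containerservice.models import AgentPool

    client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")
    pool = client.agent_pools.begin_create_or_update(
        "rg-aks",                        # placeholder resource group
        "aks-cluster1",                  # placeholder AKS cluster name
        "win1",                          # Windows pool names are limited to 6 characters
        AgentPool(
            os_type="Windows",           # Windows Server node pool
            vm_size="Standard_D4s_v3",
            mode="User",
            enable_auto_scaling=True,    # cluster autoscaler adds/removes nodes
            min_count=2,
            max_count=10,
            count=2,
        ),
    ).result()
    print(pool.provisioning_state)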
Your company has offices in North America and Europe.
You plan to migrate to Azure.
You need to recommend a networking solution for the new Azure infrastructure. The solution must meet the following requirements:
- The Point-to-Site (P2S) VPN connections of mobile users must connect automatically to the closest Azure region.
- The offices in each region must connect to their local Azure region by using an ExpressRoute circuit.
- Transitive routing between virtual networks and on-premises networks must be supported.
- The network traffic between virtual networks must be filtered by using FQDNs.
What should you include in the recommendation?
A. Azure Virtual WAN with a secured virtual hub
B. virtual network peering and application security groups
C. virtual network gateways and network security groups (NSGs)
D. Azure Route Server and Azure Network Function Manager
The correct answer is A. Azure Virtual WAN with a secured virtual hub.
Let’s break down each requirement and see how the options align:
Requirement 1: P2S VPN connections connect automatically to the closest Azure region.
Azure Virtual WAN (Option A): Virtual WAN is designed for global connectivity and has built-in capabilities to route Point-to-Site VPN connections to the nearest virtual hub. This is a key feature of Virtual WAN.
Virtual network peering and application security groups (Option B): Virtual network peering does not inherently handle P2S VPN connections or regional routing. You would need separate VPN gateways in each region and manually configure clients or use complex routing mechanisms, which is not automatic.
Virtual network gateways and network security groups (NSGs) (Option C): While you can set up P2S VPN on virtual network gateways, achieving automatic routing to the closest region would require significant manual configuration and is not a built-in feature. You would need to manage multiple gateways and distribute different VPN client profiles based on user location, which is not “automatic” for the user.
Azure Route Server and Azure Network Function Manager (Option D): Azure Route Server is focused on simplifying routing within VNets and with NVAs, not directly on P2S VPN regional routing. Network Function Manager helps deploy and manage NVAs, but doesn’t inherently solve the P2S closest region routing requirement.
Requirement 2: Offices in each region connect to their local Azure region by using an ExpressRoute circuit.
Azure Virtual WAN (Option A): Virtual WAN is designed to connect to on-premises locations via ExpressRoute. You can connect ExpressRoute circuits to virtual hubs in each region, ensuring local connectivity.
Virtual network peering and application security groups (Option B): Virtual network peering is for connecting VNets. ExpressRoute circuits would be connected directly to individual VNets. While possible, it’s not the centralized and scalable approach for managing multiple ExpressRoute connections as Virtual WAN.
Virtual network gateways and network security groups (NSGs) (Option C): Virtual network gateways are used to terminate ExpressRoute circuits. You would place gateways in VNets in each region and connect ExpressRoute circuits, which is feasible but less centrally managed than Virtual WAN.
Azure Route Server and Azure Network Function Manager (Option D): Azure Route Server can enhance routing with ExpressRoute within VNets, but it’s not the primary solution for setting up and managing ExpressRoute connections to local regions.
Requirement 3: Transitive routing between virtual networks and on-premises networks must be supported.
Azure Virtual WAN (Option A): Virtual WAN inherently supports transitive routing. Traffic can flow between VNets connected to a virtual hub, and between VNets and on-premises branches connected to the same hub. This is a core design principle of Virtual WAN.
Virtual network peering and application security groups (Option B): Virtual network peering is not transitive, whether regional or global. To achieve transitive routing you would need a hub-and-spoke design with user-defined routes and gateway transit or a routing appliance, and routing on-premises traffic through to spokes still requires extra configuration. This is far less natural than the transitive routing Virtual WAN provides.
Virtual network gateways and network security groups (NSGs) (Option C): Virtual network gateways alone do not automatically provide transitive routing. You would need to configure complex routing tables, potentially VNet peering, and possibly VPN connections between VNets to achieve transitive routing, which is complex and not the primary purpose of basic VNet gateways.
Azure Route Server and Azure Network Function Manager (Option D): Azure Route Server can simplify routing within a VNet and with NVAs, but it doesn’t automatically make the entire network transitive across VNets and on-premises locations without careful design and configuration.
Requirement 4: The network traffic between virtual networks must be filtered by using FQDNs.
Azure Virtual WAN (Option A): Azure Virtual WAN secured hubs include Azure Firewall. Azure Firewall is a cloud-native firewall service that can perform FQDN-based filtering in network rules. This directly meets the requirement.
Virtual network peering and application security groups (Option B): Application Security Groups (ASGs) work with NSGs, and while NSGs can use FQDN tags in rules, NSGs are primarily IP address and port based and less efficient and manageable for comprehensive FQDN-based filtering between VNets compared to a dedicated firewall.
Virtual network gateways and network security groups (NSGs) (Option C): NSGs are primarily IP address and port based. While FQDN tags exist, they are not the ideal solution for robust FQDN-based filtering between VNets at scale.
Azure Route Server and Azure Network Function Manager (Option D): Azure Route Server is about routing. Azure Network Function Manager can be used to deploy NVAs like firewalls. While you could deploy a firewall NVA, this option is not directly providing the FQDN filtering capability out of the box; it’s about the infrastructure to deploy such a solution.
Conclusion:
Option A, Azure Virtual WAN with a secured virtual hub, is the only option that comprehensively addresses all four requirements in a scalable and manageable way. It is specifically designed for global, transitively routed networks with built-in security features like Azure Firewall for FQDN filtering and optimized for both P2S VPN and ExpressRoute connectivity with regional considerations.
HOTSPOT
–
You have two Azure AD tenants named contoso.com and fabrikam.com. Each tenant is linked to 50 Azure subscriptions. Contoso.com contains two users named User1 and User2.
You need to meet the following requirements:
- Ensure that User1 can change the Azure AD tenant linked to specific Azure subscriptions.
- If an Azure subscription is linked to a new Azure AD tenant, and no available Azure AD accounts have full subscription-level permissions to the subscription, elevate the access of User2 to the subscription.
The solution must use the principle of least privilege.
Which role should you assign to each user? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
User1:
Co-administrator
Owner
Service administrator
User2:
Co-administrator
Owner
Service administrator
Answer Area:
User1: Owner
User2: Owner
Let’s break down each requirement and analyze the roles:
Requirement 1: User1 can change the Azure AD tenant linked to specific Azure subscriptions.
To change the Azure AD tenant associated with an Azure subscription, a user needs permissions at the subscription level to modify the directory property. In Azure RBAC, the Owner role is the built-in role that grants full access to manage all resources, including access to manage the subscription itself and its properties like the associated Azure AD tenant.
Owner: Owners have full control over the Azure subscription. This includes the ability to manage the subscription’s properties, such as changing the associated Azure AD tenant. Therefore, the Owner role can fulfill this requirement.
Co-administrator: Co-administrator is a legacy role from the Classic deployment model. Although it grants high levels of access within a subscription, it is not the recommended role in the modern Azure Resource Manager (ARM) and RBAC context, and it is not clearly documented whether it retains the specific permission to change the Azure AD tenant association. It is better to rely on the modern RBAC Owner role.
Service administrator: Service Administrator is also a legacy role from the Classic deployment model, and there is only one Service Administrator per Azure account. Similar to Co-administrator, while it has broad access, it’s less relevant in the modern RBAC context compared to built-in RBAC roles like Owner for managing subscriptions.
Requirement 2: If an Azure subscription is linked to a new Azure AD tenant, and no available Azure AD accounts have full subscription-level permissions to the subscription, elevate the access of User2 to the subscription.
This requirement addresses a break-glass scenario. When a subscription is moved to a new Azure AD tenant, there might be a situation where no users from the new tenant automatically have access. Azure has a built-in mechanism to handle this, allowing the original subscription administrator (often the account administrator or service administrator from the original subscription setup) to regain access and assign roles in the new tenant.
In the context of RBAC, the Owner role is the role that aligns most closely with having full administrative control to manage access. When access is elevated in such a scenario, the user performing the elevation effectively gains administrative rights over the subscription to re-establish access control within the new tenant. Assigning Owner role to User2 would enable User2 to manage access after the elevation process (assuming User2 is the intended person to manage access in this recovery scenario).
Owner: As stated above, Owner is the role with full control. In a break-glass scenario, having Owner role after elevation would allow User2 to fully manage access and re-establish administration within the new Azure AD tenant.
Co-administrator: Similar to requirement 1, Co-administrator is a legacy role and less relevant in modern RBAC for this type of scenario. While it grants high privileges, it’s not the most appropriate role to recommend in the context of modern Azure RBAC and break-glass access management.
Service administrator: Service Administrator is a legacy role. While conceptually related to subscription administration, in the modern RBAC context, “Owner” is the more fitting and recommended RBAC role for full subscription control and management, including recovery scenarios.
Principle of Least Privilege:
We need to assign the least privileged role that still meets the requirements. In this case, for both requirements, the Owner role is the most appropriate built-in RBAC role that provides the necessary permissions. While Owner is a highly privileged role, it is the role designed for full subscription management, which aligns with the actions described in the requirements. The legacy roles (Co-administrator, Service administrator) are less aligned with modern RBAC best practices and are not as clearly defined for these specific tasks in the ARM/RBAC context.
Conclusion:
To meet both requirements and adhere to the principle of least privilege (within the available role options and in the context of modern Azure RBAC), assigning the Owner role to both User1 and User2 is the most appropriate solution.
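As a minimal sketch of how such an Owner assignment might be made programmatically, the snippet below calls the Azure Resource Manager REST API with azure-identity and requests. The subscription ID, principal object ID, and assignment GUID are placeholders (not values from the scenario), and the Owner role definition ID and api-version should be confirmed against the current ARM reference.

```python
# Sketch: assign the built-in Owner role at subscription scope via the ARM REST API.
# Assumes azure-identity and requests are installed and the caller is already
# authorized to create role assignments. All IDs below are placeholders.
import uuid
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
principal_id = "11111111-1111-1111-1111-111111111111"      # object ID of User1/User2 (placeholder)
owner_role_definition = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635"  # built-in Owner role ID (verify)
)

scope = f"/subscriptions/{subscription_id}"
assignment_name = str(uuid.uuid4())  # each role assignment needs a unique GUID name

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/"
    f"roleAssignments/{assignment_name}?api-version=2022-04-01"
)
body = {"properties": {"roleDefinitionId": owner_role_definition, "principalId": principal_id}}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json().get("id"))
```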
You are designing a point of sale (POS) solution that will be deployed across multiple locations and will use an Azure Databricks workspace in the Standard tier. The solution will include multiple apps deployed to the on-premises network of each location.
You need to configure the authentication method that will be used by the app to access the workspace. The solution must minimize the administrative effort associated with staff turnover and credential management.
What should you configure?
A. a managed identity
B. a service principal
C. a personal access token
Let’s analyze each option in the context of the requirements:
A. a managed identity
Pros: Managed identities are the most secure and least administrative overhead method for Azure resources to authenticate to other Azure services. Azure automatically manages the lifecycle of the credentials. They are tied to the lifecycle of the Azure resource they are assigned to.
Cons: Managed identities are designed to be used by Azure resources (like VMs, Function Apps, App Services, etc.) to authenticate to other Azure services within Azure. Managed identities cannot be directly used by applications running on-premises. On-premises applications are outside of the Azure control plane that manages these identities.
B. a service principal
Pros: Service principals are application identities in Azure Active Directory (Azure AD). They are designed for applications to authenticate to Azure services.
Security: Service principals are more secure than personal access tokens because they are application-specific identities, not tied to individual users. You can control the permissions granted to the service principal specifically for the application’s needs.
Centralized Credential Management: Service principal credentials (client secrets or certificates) are managed within Azure AD. Credential rotation policies can be implemented.
Applicable to On-premises Applications: Service principals can be used by applications running anywhere, including on-premises, by using an authentication library (like MSAL) and providing the service principal’s credentials (client ID and secret or certificate).
Minimized Administrative Effort: While initial setup is required to create and configure the service principal, ongoing management is relatively low, especially compared to personal access tokens. Staff turnover is less of an issue because the identity is application-based, not user-based.
C. a personal access token
Pros: Personal access tokens (PATs) are simple to generate initially within Azure Databricks.
Cons:
Security Risk: PATs are tied to a specific user’s account in Azure Databricks. If a user leaves the company or their account is compromised, the PAT needs to be revoked and all applications using it need to be updated.
High Administrative Overhead: Managing PATs across multiple locations and applications becomes complex and error-prone, especially with staff turnover. Each time a staff member leaves or roles change, PATs might need to be regenerated and applications reconfigured.
Not Ideal for Application Authentication: PATs are designed for user authentication, typically for CLI or API access as a specific user. Using them for application authentication in a production system is not a best practice due to the management and security concerns.
Credential Management Nightmare: Distributing and managing PATs securely across multiple on-premises locations for multiple apps is a significant credential management challenge.
Considering the Requirements:
Minimize administrative effort associated with staff turnover and credential management: Managed identities are best in Azure, but not applicable on-premises. Service principals offer significantly lower administrative overhead than personal access tokens for application authentication and credential management.
On-premises applications: Service principals and personal access tokens can be used by on-premises apps. Managed identities cannot be directly used.
Security: Service principals are much more secure and manageable for application authentication compared to personal access tokens.
Conclusion:
Given the requirements, a service principal (B) is the correct choice: it works for on-premises applications, keeps credentials application-scoped rather than user-scoped, and minimizes the administrative effort associated with staff turnover and credential management.
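As a rough sketch of how an on-premises app might authenticate with a service principal, the snippet below uses azure-identity and requests. The workspace URL is a placeholder, and the Azure Databricks resource ID used as the token scope is an assumption you should confirm in the Databricks documentation.

```python
# Sketch: on-premises app authenticating to Azure Databricks with a service principal.
# Assumes azure-identity + requests; tenant, client, secret, and workspace URL are placeholders.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<service-principal-app-id>",
    client_secret="<client-secret>",   # store in a vault, not in code
)

# Scope for the Azure Databricks first-party application (confirm in the Databricks docs).
DATABRICKS_SCOPE = "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default"
aad_token = credential.get_token(DATABRICKS_SCOPE).token

workspace_url = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
resp = requests.get(
    f"{workspace_url}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {aad_token}"},
)
resp.raise_for_status()
print(resp.json())
```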
You have a multi-tier app named App1 and an Azure SQL database named SQL1. The backend service of App1 writes data to SQL1. Users use the App1 client to read the data from SQL1.
During periods of high utilization, the users experience delays retrieving the data.
You need to minimize how long it takes for data requests.
What should you include in the solution?
A. Azure Cache for Redis
B. Azure Content Delivery Network (CDN)
C. Azure Data Factory
D. Azure Synapse Analytics
The correct answer is A. Azure Cache for Redis.
Here’s why:
Reducing Data Retrieval Latency: The problem states that users experience delays retrieving data from SQL1 during high utilization. Azure Cache for Redis is designed to store frequently accessed data in memory, which significantly reduces the time it takes to retrieve that data. This directly addresses the stated need to minimize the time for data requests.
Complementing Azure SQL Database: Azure Cache for Redis is often used in conjunction with Azure SQL Database to improve performance. It acts as a layer between the application and the database, caching frequently accessed data to reduce the load on the database and improve response times.
Cache-Aside Pattern: Azure Cache for Redis is typically used with a “cache-aside” pattern, storing and sharing database query results, session state, and static content to make applications more scalable and nimble.
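As a rough illustration of the cache-aside pattern, the sketch below uses the redis package; the cache host, access key, cache key format, and the query function standing in for SQL1 are all placeholders.

```python
# Sketch of the cache-aside pattern against Azure Cache for Redis.
# Assumes the redis package; host name, key, and the query function are placeholders.
import json
import redis

cache = redis.Redis(
    host="<cache-name>.redis.cache.windows.net",
    port=6380,
    password="<access-key>",
    ssl=True,
)

def query_sql1(product_id: int) -> dict:
    """Placeholder for the real query against SQL1 (e.g. via pyodbc)."""
    return {"id": product_id, "name": "example"}

def get_product(product_id: int) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)                     # 1. try the cache first
    if cached is not None:
        return json.loads(cached)
    result = query_sql1(product_id)             # 2. cache miss: read from SQL1
    cache.setex(key, 300, json.dumps(result))   # 3. populate the cache with a 5-minute TTL
    return result
```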
Here’s why the other options are not as suitable:
B. Azure Content Delivery Network (CDN): CDNs are used to cache static content (images, CSS, JavaScript files) closer to users. They do not cache dynamic data from a database. The scenario describes delays in retrieving data, not static content.
C. Azure Data Factory: Azure Data Factory is an integration service used for creating data workflows and pipelines to move and transform data. It does not directly address the need to reduce data retrieval latency for user requests in real-time.
D. Azure Synapse Analytics: Azure Synapse Analytics is a data warehouse and big data analytics service. While it can be used for data analysis and reporting, it is not the best choice for caching frequently accessed data to improve real-time data retrieval performance for an application.
You have an Azure subscription that contains the resources shown in the following table.
Name Type Description
VM1 Virtual machine Frontend component in the Central US Azure region
VM2 Virtual machine Backend component in the East US Azure region
VM3 Virtual machine Backend component in the West US 2 Azure region
VNet1 Virtual network Hosts VM1
VNet2 Virtual network Hosts VM2
VNet3 Virtual network Hosts VM3
You create peering between VNet1 and VNet2 and between VNet1 and VNet3.
The virtual machines host an HTTPS-based client/server application and are accessible only via the private IP address of each virtual machine.
You need to implement a load balancing solution for VM2 and VM3. The solution must ensure that if VM2 fails, requests will be routed automatically to VM3, and if VM3 fails, requests will be routed automatically to VM2.
What should you include in the solution?
A. Azure Firewall Premium
B. Azure Application Gateway v2
C. a cross-region load balancer
D. Azure Front Door Premium
The requirement is to load balance traffic between VM2 and VM3, which are located in different Azure regions (East US and West US 2). The load balancing must ensure automatic failover between these VMs if one of them fails. The application is HTTPS-based and accessible via private IP addresses, indicating internal application load balancing within Azure.
Let’s analyze each option:
A. Azure Firewall Premium:
Azure Firewall Premium is a network security service that provides advanced threat protection, TLS inspection, and IDPS capabilities. It is primarily used for securing network traffic and enforcing security policies, not for load balancing application traffic between virtual machines, especially across regions for high availability. Firewall is about security, not application load balancing or failover.
B. Azure Application Gateway v2:
Azure Application Gateway v2 is a regional web traffic load balancer that operates at Layer 7 (Application Layer). It is excellent for load balancing within a single Azure region and offers features like SSL termination, URL-based routing, and Web Application Firewall (WAF). While Application Gateway can provide high availability within a region, it is not inherently designed for cross-region load balancing and automatic failover of backend VMs across different regions. Although you can configure Application Gateway in each region and use traffic manager for DNS-based routing, this is not the most direct and efficient way to achieve automatic VM failover between regions at the application level.
C. a cross-region load balancer:
Azure Cross-Region Load Balancer is specifically designed for this scenario. It enables you to load balance traffic across different Azure regions, ensuring high availability and resilience. It allows you to distribute traffic to healthy backend instances across regions and automatically failover to a healthy region if one region experiences an outage or VM failure. This option directly addresses the need for load balancing VM2 and VM3 across East US and West US 2 with automatic failover capability. This is the most suitable service for active-active or active-passive cross-region application deployment and high availability.
D. Azure Front Door Premium:
Azure Front Door Premium is a global, scalable web application acceleration and security service. While Front Door can route traffic to backends in different regions and provides global load balancing and failover, it is primarily designed for public-facing web applications and content delivery networks. Front Door is optimized for internet-facing applications to improve performance and availability for users worldwide. While technically it could be used, it’s an over-engineered solution for internal application load balancing between VMs within Azure regions, especially when the VMs are accessed via private IPs. Front Door is more complex to configure for this specific private IP backend scenario compared to a Cross-Region Load Balancer.
Rationale for choosing Cross-Region Load Balancer:
Cross-Region Requirement: The primary requirement is to load balance VMs in different regions (East US and West US 2). Cross-Region Load Balancer is built for this purpose.
Automatic Failover: Cross-Region Load Balancer provides automatic failover capabilities. If VM2 (in East US) fails, traffic will be automatically routed to VM3 (in West US 2), and vice versa.
HTTPS Application: Cross-Region Load Balancer supports HTTPS traffic.
Private IP Access: While Cross-Region Load Balancer is often associated with public endpoints, it can also be configured to load balance backend instances accessed via private IPs within peered virtual networks.
Therefore, the most appropriate and direct solution is C. a cross-region load balancer.
HOTSPOT
–
You need to deploy an instance of SQL Server on Azure Virtual Machines. The solution must meet the following requirements:
- Support 15,000 disk IOPS.
- Support SR-IOV.
- Minimize costs.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Virtual machine series:
DS
NC
NV
Disk type:
Standard SSD
Premium SSD
Ultra Disk
To determine the correct options, let’s analyze each requirement against the provided choices for Virtual Machine series and Disk types.
Requirement 1: Support 15,000 disk IOPS.
Disk Type Consideration:
Standard SSD: Standard SSDs offer moderate IOPS and throughput. They are not typically designed for high IOPS workloads like SQL Server requiring 15,000 IOPS. They are the least expensive SSD option.
Premium SSD: Premium SSDs are designed for I/O intensive workloads and offer high IOPS and throughput with low latency. They can easily support 15,000 IOPS. They are more expensive than Standard SSD but less expensive than Ultra Disk.
Ultra Disk: Ultra Disks provide the highest IOPS, throughput, and lowest latency. They are designed for the most demanding, transaction-heavy workloads. Ultra Disks can also easily support 15,000 IOPS, but are the most expensive disk option.
To meet the 15,000 IOPS requirement, Premium SSD or Ultra Disk are viable options. Standard SSD is likely insufficient.
Requirement 2: Support SR-IOV.
Virtual Machine Series Consideration:
DS series: DS-series VMs (especially DSv3 and later) generally support SR-IOV. DS-series are general-purpose VMs, offering a balance of compute and memory.
NC series: NC-series VMs (optimized for compute and GPU) do support SR-IOV.
NV series: NV-series VMs (optimized for visualization and GPU) do support SR-IOV.
All three VM series listed (DS, NC, NV) can potentially support SR-IOV, depending on the specific sub-series and size chosen within each family. For cost minimization, DS-series is generally more cost-effective for general SQL Server workloads compared to GPU-focused NC and NV series, unless GPU acceleration is explicitly required (which is not indicated in the question).
Requirement 3: Minimize costs.
Cost Analysis:
Virtual Machine Series: DS-series VMs are generally more cost-effective for general-purpose workloads than NC or NV series, which are specialized and often more expensive due to GPU components.
Disk Type: Standard SSD is the least expensive, followed by Premium SSD, and then Ultra Disk being the most expensive.
To minimize costs while meeting the other requirements, we should prioritize the DS series for VMs and choose the least expensive disk type that can still achieve 15,000 IOPS, which is Premium SSD.
Combining the Requirements for the Optimal Solution:
Virtual Machine Series: DS series is a good balance of performance and cost for SQL Server and can support SR-IOV.
Disk Type: Premium SSD provides sufficient IOPS (15,000+) for SQL Server and is more cost-effective than Ultra Disk, while being performant enough. Standard SSD is unlikely to meet the IOPS requirement.
Therefore, the solution that meets all requirements while minimizing costs is to use a DS-series Virtual Machine and Premium SSD disks.
Final Answer:
Answer Area
Virtual machine series: DS
Disk type: Premium SSD
You have 100 devices that write performance data to Azure Blob Storage.
You plan to store and analyze the performance data in an Azure SQL database.
You need to recommend a solution to continually copy the performance data to the Azure SQL database.
What should you include in the recommendation?
A. Azure Data Factory
B. Data Migration Assistant (DMA)
C. Azure Data Box
D. Azure Database Migration Service
Correct Answer:
A. Azure Data Factory
Why Azure Data Factory is Correct:
Continual Copying: Data Factory’s pipelines can be scheduled or event-triggered to copy performance data from Blob Storage to Azure SQL Database on an ongoing basis, meeting the “continually copy” requirement directly.
Source and Sink: Natively supports Azure Blob Storage as a source and Azure SQL Database as a sink, handling the data flow from 100 devices seamlessly.
Scalability and Automation: Scales to process data from multiple devices and automates the process without manual intervention.
Data Transformation: Can transform unstructured performance data (e.g., JSON, CSV) into a structured format for SQL Database if needed, enhancing usability for analysis.
AZ-304 Relevance: Data Factory is a cornerstone of AZ-304 for designing data integration solutions, commonly recommended for ongoing ETL workflows between Azure services like Blob Storage and SQL Database.
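As a rough sketch of what such a pipeline might look like, the dictionary below mirrors the general shape of a Data Factory Copy activity; the pipeline, dataset names, and source/sink types are illustrative assumptions, and the real definitions would be authored in ADF Studio or deployed via ARM or the SDK, typically with a schedule or tumbling-window trigger to satisfy the "continually copy" requirement.

```python
# Illustrative shape of a Data Factory pipeline with a single Copy activity that
# moves blob data into Azure SQL Database. Names and source/sink types are assumptions.
copy_pipeline = {
    "name": "CopyPerfDataToSql",
    "properties": {
        "activities": [
            {
                "name": "CopyPerfData",
                "type": "Copy",
                "inputs": [{"referenceName": "PerfBlobDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "PerfSqlDataset", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},  # device files in Blob Storage
                    "sink": {"type": "AzureSqlSink"},           # target Azure SQL Database table
                },
            }
        ],
    },
}
```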
You are planning a storage solution. The solution must meet the following requirements:
✑ Support at least 500 requests per second.
✑ Support large image, video, and audio streams.
Which type of Azure Storage account should you provision?
A. standard general-purpose v2
B. premium block blobs
C. premium page blobs
D. premium file shares
Let’s break down the requirements and evaluate each Azure Storage account type:
Requirements:
Support at least 500 requests per second: This indicates a need for high throughput and low latency, especially for read operations if we are serving media streams.
Support large image, video, and audio streams: This implies storing and serving large Block Blobs, which are the most suitable storage type for media files in Azure Storage.
Analyzing each Storage Account Type:
A. Standard general-purpose v2:
Pros:
Cost-effective for general storage needs.
Supports all three blob types (block, append, page), files, queues, and tables.
Cons:
Performance is not guaranteed to be consistently high for very high request rates.
Latency might be higher compared to premium storage options.
Might struggle to consistently deliver 500+ requests per second, especially with large media files being served simultaneously.
Designed for general-purpose workloads, not specifically optimized for high-performance media streaming at scale.
B. Premium block blobs:
Pros:
Optimized for high transaction rates and low latency for block blobs. This directly addresses the “500 requests per second” requirement.
Designed for scenarios requiring high throughput, such as ingestion, event logging, and serving web content (including media).
Offers significantly better performance for block blob operations compared to standard storage, especially in terms of IOPS and latency.
Can handle large block blobs suitable for images, videos, and audio.
Cons:
More expensive than standard general-purpose v2 storage.
C. Premium page blobs:
Pros:
Optimized for random read and write operations and low latency for page blobs.
Ideal for Virtual Machine disks and databases where random access to data within a blob is frequent.
Cons:
Page blobs are not the primary storage type for large media files like images, videos, and audio streams. Block blobs are the standard for this.
Premium page blobs are more focused on VM disk performance and are not as cost-effective or optimized for serving large media streams as premium block blobs.
D. Premium file shares:
Pros:
Based on SSD media and optimized for IO-intensive and latency-sensitive file share workloads (SMB and NFS).
Designed for enterprise applications requiring high-performance file storage.
Cons:
File shares are not typically used for serving large image, video, and audio streams over the internet or to applications. Blob storage is the standard and more scalable solution for this.
File shares are accessed using SMB or NFS protocols, not directly over HTTP/HTTPS like blob storage for web content delivery.
Final Answer: The final answer is
B
DRAG DROP –
You are designing a virtual machine that will run Microsoft SQL Server and contain two data disks. The first data disk will store log files, and the second data disk will store data. Both disks are P40 managed disks.
You need to recommend a host caching method for each disk. The method must provide the best overall performance for the virtual machine while preserving the integrity of the SQL data and logs.
Which host caching method should you recommend for each disk? To answer, drag the appropriate methods to the correct disks. Each method may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Methods
None
ReadOnly
ReadWrite
Answer Area
Log: Method
Data: Method
Final Answer
Log: None
Data: ReadOnly
Why This Is Correct
Log: None
Integrity: Transaction logs require immediate durability to disk to avoid losing committed transactions. None ensures writes bypass caching, preserving SQL consistency.
Performance: P40’s high throughput (250 MB/s) supports sequential log writes without caching, sufficient for SQL Server.
Best Practice: Microsoft recommends disabling caching (or using write-through) for SQL log disks to prioritize durability.
Data: ReadOnly
Performance: Caching reads improves query response times, critical for data files with random read patterns. P40’s IOPS handle writes efficiently without caching.
Integrity: Writes bypass the cache, ensuring data durability while still benefiting from read caching.
Best Practice: Microsoft often recommends ReadOnly for SQL data disks on Premium SSDs to optimize performance safely.
Overall VM Performance
Log (None): Ensures fast, durable writes without compromising logs.
Data (ReadOnly): Boosts read performance for queries, leveraging host cache without risking write integrity.
P40 Disks: High-performance Premium SSDs complement this setup, minimizing bottlenecks.
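As a minimal sketch of how these caching settings appear in practice, the fragment below expresses the storageProfile data-disk definitions of an ARM template as a Python dict; LUNs, names, and sizes are placeholders.

```python
# Illustrative ARM-template fragment (expressed as a Python dict) for the two P40 data
# disks with the recommended host caching. LUNs, names, and sizes are placeholders.
data_disks = [
    {   # Log disk: no host caching, so log writes go straight to durable storage
        "lun": 0,
        "name": "sql-log-disk",
        "createOption": "Empty",
        "diskSizeGB": 2048,          # P40 size
        "caching": "None",
        "managedDisk": {"storageAccountType": "Premium_LRS"},
    },
    {   # Data disk: read caching speeds up queries; writes still bypass the cache
        "lun": 1,
        "name": "sql-data-disk",
        "createOption": "Empty",
        "diskSizeGB": 2048,
        "caching": "ReadOnly",
        "managedDisk": {"storageAccountType": "Premium_LRS"},
    },
]
```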
You are designing a solution that calculates 3D geometry from height-map data.
You need to recommend a solution that meets the following requirements:
✑ Performs calculations in Azure.
✑ Ensures that each node can communicate data to every other node.
✑ Maximizes the number of nodes to calculate multiple scenes as fast as possible.
✑ Minimizes the amount of effort to implement the solution.
Which two actions should you include in the recommendation? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Enable parallel file systems on Azure.
B. Create a render farm that uses virtual machines.
C. Create a render farm that uses virtual machine scale sets.
D. Create a render farm that uses Azure Batch.
E. Enable parallel task execution on compute nodes.
Let’s analyze each option against the requirements:
Requirement 1: Performs calculations in Azure. All options suggest Azure-based solutions.
Requirement 2: Ensures that each node can communicate data to every other node.
B. Create a render farm that uses virtual machines: VMs in Azure Virtual Network can communicate with each other.
C. Create a render farm that uses virtual machine scale sets: VMSS are built on VMs and also reside in a Virtual Network, enabling inter-node communication.
D. Create a render farm that uses Azure Batch: Azure Batch compute nodes can communicate with each other within a pool.
A. Enable parallel file systems on Azure: While parallel file systems help with data sharing, they don’t directly ensure compute node communication for processing.
E. Enable parallel task execution on compute nodes: This is a software/application level requirement and is independent of the infrastructure choice, but crucial for leveraging distributed computing.
Requirement 3: Maximizes the number of nodes to calculate multiple scenes as fast as possible.
B. Create a render farm that uses virtual machines: Scalable but managing a large number of individual VMs can be complex.
C. Create a render farm that uses virtual machine scale sets: VMSS are designed for scaling out a large number of VMs easily.
D. Create a render farm that uses Azure Batch: Azure Batch is designed for large-scale parallel and HPC workloads and excels at scaling compute resources.
A. Enable parallel file systems on Azure: Parallel file systems can support high-performance data access for a large number of nodes, but they are not the compute resource themselves.
E. Enable parallel task execution on compute nodes: This is necessary to utilize a large number of nodes effectively.
Requirement 4: Minimizes the amount of effort to implement the solution.
D. Create a render farm that uses Azure Batch: Azure Batch is a managed service that abstracts away much of the infrastructure management, reducing implementation effort significantly for parallel computing.
C. Create a render farm that uses virtual machine scale sets: VMSS simplifies VM management compared to individual VMs, but still requires more configuration and management than Azure Batch for a full render farm solution.
B. Create a render farm that uses virtual machines: Managing individual VMs for a large render farm is the most effort-intensive option.
A. Enable parallel file systems on Azure: Setting up and managing parallel file systems adds some effort but is not the core compute solution.
E. Enable parallel task execution on compute nodes: This is a development/application configuration task and is necessary regardless of the infrastructure choice, but doesn’t minimize the infrastructure implementation effort itself.
Considering all requirements, Azure Batch (D) stands out as the service that best addresses scalability, inter-node communication, and minimizing implementation effort for parallel computing in Azure.
To effectively utilize the chosen compute resource (Azure Batch), enabling parallel task execution on compute nodes (E) is crucial. Without parallel task execution, simply having many nodes wouldn’t speed up the calculations. The application needs to be designed to distribute the workload across the nodes.
Therefore, the two actions that best meet all requirements are D. Create a render farm that uses Azure Batch and E. Enable parallel task execution on compute nodes.
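A rough sketch of creating such a pool with the azure-batch Python SDK is shown below, with inter-node communication enabled and multiple task slots per node for parallel task execution. The account details, VM image, pool size, and model names are assumptions; verify them against your installed SDK version and the Batch documentation.

```python
# Sketch: Azure Batch pool for the render farm, with inter-node communication enabled
# and several task slots per node. All names, sizes, and the image are placeholders.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

creds = SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")
client = BatchServiceClient(creds, batch_url="https://<batch-account>.<region>.batch.azure.com")

pool = batchmodels.PoolAddParameter(
    id="render-pool",
    vm_size="STANDARD_D4S_V3",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical",
            offer="0001-com-ubuntu-server-focal",
            sku="20_04-lts",
        ),
        node_agent_sku_id="batch.node.ubuntu 20.04",
    ),
    target_dedicated_nodes=50,              # maximize nodes to calculate scenes faster
    enable_inter_node_communication=True,   # lets every node communicate with every other node
    task_slots_per_node=4,                  # run multiple tasks in parallel on each node
)
client.pool.add(pool)
```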
You have an on-premises application that consumes data from multiple databases. The application code references database tables by using a combination of the server, database, and table name.
You need to migrate the application data to Azure.
To which two services can you migrate the application data to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. SQL Server Stretch Database
B. SQL Server on an Azure virtual machine
C. Azure SQL Database
D. Azure SQL Managed Instance
Let’s analyze each option based on its ability to support the application’s requirement of referencing database tables using the server.database.table naming convention after migrating to Azure.
A. SQL Server Stretch Database
Incorrect. SQL Server Stretch Database is a feature, not a migration target service itself. It is used to transparently archive cold data from on-premises SQL Server to Azure, but it is not intended for migrating entire application databases to Azure as a primary solution. Moreover, Stretch Database is deprecated and not recommended for new solutions. It does not fulfill the requirement of migrating the application data to Azure in a way that maintains the application’s existing table referencing method for the entire dataset.
B. SQL Server on an Azure virtual machine
Correct. Migrating to SQL Server on an Azure virtual machine is essentially lifting and shifting the on-premises SQL Server to an Azure VM. This option provides the most compatibility with the on-premises environment. You can install SQL Server on an Azure VM and restore your databases there. The application can connect to this SQL Server instance in Azure using the same connection strings and server.database.table naming convention it uses on-premises. This is a complete solution for migrating the application data while maintaining the existing application code’s data access method.
C. Azure SQL Database
Not a complete solution on its own. Azure SQL Database is a Platform-as-a-Service (PaaS) offering, but a single database cannot reference tables in another database by using three-part names; cross-database access requires elastic query external tables, which changes how the application references data. Because the application code relies on the server.database.table convention, migrating to Azure SQL Database would require application changes, so it is not the best fit for this requirement.
D. Azure SQL Managed Instance
Correct. Azure SQL Managed Instance is designed to provide near 100% compatibility with on-premises SQL Server while being a PaaS offering. It supports instance-level features and cross-database queries across databases within the same Managed Instance. Applications can connect to Azure SQL Managed Instance and use the familiar server.database.table (and even four-part names server.database.schema.table.column) naming convention, just like with on-premises SQL Server. Migrating to Azure SQL Managed Instance is a very suitable option for applications that require high compatibility with on-premises SQL Server and want to move to a managed service. This is also a complete solution for migrating the application data while preserving the existing table referencing method.
Given the requirement to choose two services that present a complete solution, the most fitting and robust options that offer direct migration paths and strong support for the server.database.table naming convention are:
B. SQL Server on an Azure virtual machine (Provides maximum compatibility and control, essentially a lift-and-shift)
D. Azure SQL Managed Instance (Provides high compatibility as a PaaS, designed for easy migration of on-premises SQL Server workloads)
While Azure SQL Database (C) is a common migration target for SQL Server workloads, its lack of native cross-database three-part-name support makes it a weaker fit here; Managed Instance (D) offers a more direct and seamless migration path for applications that rely heavily on on-premises SQL Server compatibility and instance-level features.
Considering the most direct and “complete” solutions that minimize changes related to the table naming convention, SQL Server on an Azure virtual machine and Azure SQL Managed Instance are the most appropriate choices.
Final Answer: The final answer is
B,D
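As a small illustration of why options B and D preserve the existing data-access pattern, the snippet below runs a cross-database three-part-name query through pyodbc against SQL Server on an Azure VM or a SQL Managed Instance; the server name, credentials, and table are placeholders.

```python
# Sketch: cross-database query using a three-part name, as the application does today.
# Works against SQL Server on an Azure VM or Azure SQL Managed Instance.
# Connection details and the table name are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=tcp:<managed-instance-name>.<dns-zone>.database.windows.net,1433;"
    "DATABASE=DB1;UID=<user>;PWD=<password>;Encrypt=yes;"
)
cursor = conn.cursor()
# Three-part name (database.schema.table) referencing a table in another database on the
# same instance -- unchanged from the on-premises application code.
cursor.execute("SELECT TOP (10) * FROM DB2.dbo.PerformanceData")
for row in cursor.fetchall():
    print(row)
```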
HOTSPOT –
You plan to migrate on-premises Microsoft SQL Server databases to Azure.
You need to recommend a deployment and resiliency solution that meets the following requirements:
✑ Supports user-initiated backups
✑ Supports multiple automatically replicated instances across Azure regions
✑ Minimizes administrative effort to implement and maintain business continuity
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Deployment solution:
Azure SQL Managed Instance
SQL Server on Azure Virtual Machines
An Azure SQL Database single database
Resiliency solution:
Auto-failover group
Active geo-replication
Zone-redundant deployment
Let’s analyze each requirement against the provided Deployment and Resiliency solutions.
Deployment Solution Requirements:
Supports user-initiated backups:
Azure SQL Managed Instance: Supports user-initiated backups using T-SQL commands, Azure portal, and PowerShell.
SQL Server on Azure Virtual Machines: Supports user-initiated backups just like on-premises SQL Server, using SQL Server Management Studio, T-SQL, etc.
An Azure SQL Database single database: Relies on automated backups managed by the platform; native user-initiated backups (BACKUP DATABASE) are not supported, and the closest equivalent is exporting a BACPAC, which is a different mechanism.
Supports multiple automatically replicated instances across Azure regions:
Azure SQL Managed Instance: Supports this through Auto-failover groups and Active geo-replication.
SQL Server on Azure Virtual Machines: Can be configured for replication across regions using Always On Availability Groups or log shipping, but this is not fully automatic and requires more manual setup and management compared to PaaS solutions.
An Azure SQL Database single database: Supports this through Auto-failover groups and Active geo-replication.
Minimizes administrative effort to implement and maintain business continuity:
Azure SQL Managed Instance: PaaS offering, Microsoft handles underlying infrastructure, patching, and some aspects of HA/DR. Auto-failover groups simplify the setup and management of regional DR.
SQL Server on Azure Virtual Machines: IaaS offering, requires more administrative effort to manage the OS, SQL Server installation, patching, and configuring HA/DR solutions like Always On AG across regions.
An Azure SQL Database single database: PaaS offering, Microsoft manages infrastructure. Auto-failover groups and Active geo-replication minimize administrative effort for BC.
Resiliency Solution Requirements:
Auto-failover group:
Provides automatic failover to a secondary region in case of a regional outage.
Minimizes administrative effort for managing failover and DR.
Works with both Azure SQL Database and Azure SQL Managed Instance.
Active geo-replication:
Provides asynchronous replication to a secondary region.
Allows for manual failover (and can be configured for near-automatic failover with monitoring).
Relatively low administrative overhead for setup and maintenance.
Works with both Azure SQL Database and Azure SQL Managed Instance.
Zone-redundant deployment:
Provides high availability within a single Azure region by replicating across availability zones.
Does not protect against regional outages, thus not meeting the requirement for replication across Azure regions for business continuity.
Optimal Solution Selection:
Considering all requirements, especially minimizing administrative effort for implementing and maintaining business continuity with automatic regional replication and user-initiated backups, Azure SQL Managed Instance as the Deployment solution and Auto-failover group as the Resiliency solution are the most suitable choices.
Azure SQL Managed Instance is a PaaS offering that reduces administrative overhead compared to SQL Server on VMs. It inherently supports user backups and excels in regional DR scenarios.
Auto-failover group is specifically designed for simplifying regional disaster recovery for Azure SQL Database and Managed Instance, providing automatic failover and minimizing administrative effort for business continuity.
While Azure SQL Database single database with Auto-failover group or Active geo-replication is also a valid and highly manageable option, Azure SQL Managed Instance is often favored when migrating existing on-premises SQL Server databases due to its higher compatibility with on-premises SQL Server features and easier migration path for many workloads.
Hot Area Selection:
Deployment solution: Azure SQL Managed Instance
Resiliency solution: Auto-failover group
Final Answer:
Answer Area
Deployment solution: Azure SQL Managed Instance
Resiliency solution: Auto-failover group
You have an Azure web app that uses an Azure key vault named KeyVault1 in the West US Azure region.
You are designing a disaster recovery plan for KeyVault1.
You plan to back up the keys in KeyVault1.
You need to identify to where you can restore the backup.
What should you identify?
A. any region worldwide
B. the same region only
C. KeyVault1 only
D. the same geography only
The correct answer is D. the same geography only.
Explanation:
Azure Key Vault backups are designed for disaster recovery and data protection within a specific Azure geography. Here’s why and why the other options are incorrect:
D. the same geography only (Correct): Azure Key Vault backups can be restored to another Key Vault within the same Azure geography. Azure geographies are defined regions designed to meet data residency and compliance requirements. For example, if your Key Vault is in West US, which is part of the US geography, you would typically restore it to another region within the US geography (like East US, Central US, etc.). This ensures data remains within the designated geographical boundary, which is often crucial for compliance and regulatory reasons.
A. any region worldwide (Incorrect): Restoring a Key Vault backup to any region worldwide is not supported and is generally not a best practice for security and compliance reasons. Key Vaults often contain highly sensitive cryptographic keys, secrets, and certificates. Moving these across geographies without careful consideration and potentially violating data residency rules is strongly discouraged. Azure Key Vault’s design prioritizes security and regional isolation for these sensitive assets.
B. the same region only (Incorrect): While you can restore a backup to the same region, limiting it to only the same region defeats the purpose of disaster recovery to some extent. If the entire West US region were to experience a catastrophic event, restoring within the same region would be impossible. Disaster recovery usually implies having a secondary location outside the primary failure zone, but still within a geographically relevant and compliant boundary.
C. KeyVault1 only (Incorrect): You can restore a backup to KeyVault1 if you are recovering from data corruption or accidental deletion within the KeyVault itself. However, the question is about disaster recovery, which implies a broader failure scenario. Restricting restoration to only the original KeyVault1 doesn’t address scenarios where KeyVault1 itself is unavailable or irrecoverable due to a larger infrastructure issue. For disaster recovery, you would typically restore to a different Key Vault instance in a secondary location within the same geography to ensure business continuity if the primary Key Vault or region becomes unavailable.
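As a minimal sketch of the backup/restore flow, the snippet below backs up a key from KeyVault1 and restores it into a vault in a paired region within the same geography, assuming the azure-keyvault-keys and azure-identity packages; the vault URLs and key name are placeholders.

```python
# Sketch: back up a key from KeyVault1 (West US) and restore it into a vault in the
# same geography (e.g. East US). Vault URLs and the key name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()

source = KeyClient(vault_url="https://keyvault1.vault.azure.net", credential=credential)
target = KeyClient(vault_url="https://keyvault1-dr-eastus.vault.azure.net", credential=credential)

backup_blob = source.backup_key("app-signing-key")      # protected backup blob (bytes)
restored_key = target.restore_key_backup(backup_blob)   # restore must stay within the same geography
print(restored_key.id)
```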
You have an on-premises line-of-business (LOB) application that uses a Microsoft SQL Server instance as the backend.
You plan to migrate the on-premises SQL Server instance to Azure virtual machines.
You need to recommend a highly available SQL Server deployment that meets the following requirements:
✑ Minimizes costs
✑ Minimizes failover time if a single server fails
What should you include in the recommendation?
A. an Always On availability group that has premium storage disks and a virtual network name (VNN)
B. an Always On Failover Cluster Instance that has a virtual network name (VNN) and a standard file share
C. an Always On availability group that has premium storage disks and a distributed network name (DNN)
D. an Always On Failover Cluster Instance that has a virtual network name (VNN) and a premium file share
Let’s analyze each option based on the requirements of minimizing cost and minimizing failover time for a highly available SQL Server deployment in Azure VMs.
Requirement 1: Minimize Costs
Always On Availability Group (AG) vs. Always On Failover Cluster Instance (FCI): Generally, Always On Availability Groups are considered more cost-effective than Failover Cluster Instances. FCIs often require shared storage and more complex configurations, which can increase infrastructure costs.
Premium Storage Disks vs. Standard Storage Disks: Premium storage disks are more expensive than standard storage disks. However, they offer significantly better performance (IOPS and throughput), which can be crucial for SQL Server workloads, especially for minimizing failover time.
Virtual Network Name (VNN) vs. Distributed Network Name (DNN): DNN listeners for AGs generally simplify configuration and can potentially reduce costs associated with Azure Load Balancers in some scenarios compared to VNN listeners, although the direct cost difference is usually minimal.
Requirement 2: Minimize Failover Time
Always On Availability Group (AG) vs. Always On Failover Cluster Instance (FCI): Always On Availability Groups generally offer faster failover times than Failover Cluster Instances. AGs perform failover at the database level, while FCIs perform failover at the entire SQL Server instance level, which can involve more overhead.
Premium Storage Disks vs. Standard Storage Disks: Premium storage disks, with their lower latency and higher IOPS, can contribute to faster failover times as the system can recover and access data more quickly.
Virtual Network Name (VNN) vs. Distributed Network Name (DNN): Distributed Network Name (DNN) listeners for Availability Groups are designed to provide faster and more reliable failover times compared to Virtual Network Name (VNN) listeners. DNN listeners eliminate the dependency on Azure Load Balancer for client connections, which can be a point of delay in VNN-based configurations.
Analyzing the Options:
A. an Always On availability group that has premium storage disks and a virtual network name (VNN):
AG: Good for HA and generally more cost-effective than FCI.
Premium Storage: Improves performance and potentially failover time but increases storage costs.
VNN: Simpler to set up initially but can have slightly longer failover times compared to DNN.
B. an Always On Failover Cluster Instance that has a virtual network name (VNN) and a standard file share:
FCI: More complex and potentially more expensive than AG. Failover might be slower than AG.
VNN: Listener for FCI, requires Load Balancer.
Standard File Share: Reduces storage cost for the file share component but standard storage generally has lower performance. Using Standard file share for critical components in a HA SQL Server setup is not recommended for performance and reliability.
C. an Always On availability group that has premium storage disks and a distributed network name (DNN):
AG: Good for HA and cost-effective.
Premium Storage: Improves performance and failover time, worth the cost for minimizing failover time.
DNN: Offers faster failover and simplified management compared to VNN.
D. an Always On Failover Cluster Instance that has a virtual network name (VNN) and a premium file share:
FCI: More complex and expensive. Slower failover than AG.
VNN: Listener for FCI.
Premium File Share: Increases cost. While premium file share offers better performance than standard, FCI itself is generally more expensive and slower to failover compared to AG.
Final Answer: The final answer is
C
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.
Does this meet the goal?
A. Yes
B. No
The proposed solution is to create resource groups based on locations and implement resource locks on those resource groups. Let’s break down why this solution does not fully meet the goal and why the answer is No.
Goal Breakdown:
Deploy App Service instances only to specific Azure regions: This is the core regulatory requirement. The solution must prevent deployment in non-compliant regions.
Resources for App Service instances must reside in the same region: This is about resource colocation within a region, not about region restriction itself.
Solution Analysis:
Creating resource groups based on locations:
Benefit: This is a good organizational practice. Grouping resources by region makes management and tracking easier. It also helps in visualizing resource distribution across regions.
Limitation: Resource groups are logical containers. Creating them based on locations is a good convention, but it does not inherently prevent someone from deploying resources outside of the intended region for that resource group. Resource groups themselves do not enforce region-specific deployments.
Implementing resource locks on the resource groups:
Benefit: Resource locks (like CanNotDelete or ReadOnly) prevent accidental deletion or modification of resources after they have been deployed. This is good for stability and preventing unintended changes.
Limitation: Resource locks are applied after resources are already deployed within a resource group. They do not prevent the initial deployment of resources into a resource group in a non-compliant region. Locks are about post-deployment governance, not pre-deployment enforcement of region restrictions.
Why the Solution Fails to Meet the Goal:
The key issue is that neither resource groups nor resource locks prevent the initial deployment of an App Service instance (or any other resource) into an unapproved Azure region. A user with sufficient permissions can still choose any Azure region during the deployment process, regardless of the resource group’s name or whether locks are in place.
To truly enforce the regulatory requirement, you need to use Azure Policy. Azure Policy allows you to define and enforce organizational standards and assess compliance at-scale. Specifically, you would use Azure Policy to:
Define allowed locations: Create a policy that specifies the permitted Azure regions for deploying resources.
Assign the policy: Assign this policy at the subscription or resource group level (or management group for broader scope).
Enforce the policy (Deny effect): Configure the policy to have a “Deny” effect. This means that if someone attempts to deploy a resource in a region that is not allowed by the policy, the deployment will be blocked and fail.
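As a simplified sketch of what such a policy rule looks like, the dictionary below mirrors the policy definition JSON; the allowed regions are placeholder values and the built-in "Allowed locations" policy adds further refinements.

```python
# Simplified "allowed locations" policy rule, expressed as a Python dict that mirrors
# the Azure Policy definition JSON. Regions listed are placeholders for the approved regions.
allowed_locations_policy = {
    "mode": "Indexed",
    "parameters": {
        "listOfAllowedLocations": {
            "type": "Array",
            "defaultValue": ["eastus2", "westeurope"],
        }
    },
    "policyRule": {
        "if": {
            "not": {
                "field": "location",
                "in": "[parameters('listOfAllowedLocations')]",
            }
        },
        "then": {"effect": "deny"},  # blocks non-compliant deployments at request time
    },
}
```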
Answer:
B. No
You plan to move a web app named App1 from an on-premises datacenter to Azure.
App1 depends on a custom COM component that is installed on the host server.
You need to recommend a solution to host App1 in Azure. The solution must meet the following requirements:
✑ App1 must be available to users if an Azure datacenter becomes unavailable.
✑ Costs must be minimized.
What should you include in the recommendation?
A. In two Azure regions, deploy a load balancer and a web app.
B. In two Azure regions, deploy a load balancer and a virtual machine scale set.
C. Deploy a load balancer and a virtual machine scale set across two availability zones.
D. In two Azure regions, deploy an Azure Traffic Manager profile and a web app.
Correct Answer: C. Deploy a load balancer and a virtual machine scale set across two availability zones
Why it’s correct:
COM Component: VMSS allows Windows VMs where the COM component can be installed and configured, meeting the app’s dependency.
Availability: Spreading VMSS across two availability zones (within one region) ensures App1 remains available if one zone (datacenter) fails. Azure Load Balancer in zone-redundant mode supports this setup, providing high availability without requiring multi-region complexity.
Minimize Costs: A single-region, zone-redundant deployment avoids the higher costs of multi-region setups (e.g., data egress, duplicate resources). VMSS also scales efficiently, reducing costs compared to fixed VMs.
Exam Context (AZ-305): AZ-305 emphasizes practical, cost-effective architectures. Zones provide sufficient HA for many scenarios, and “datacenter” often implies a zone in Azure terminology unless regions are explicitly required.
How it works:
Deploy a VMSS with Windows VMs in a region supporting availability zones (e.g., East US 2).
Configure the VMSS to span two zones (e.g., Zone 1 and Zone 2).
Install the custom COM component and App1 on the VM image used by VMSS.
Deploy a zone-redundant Azure Load Balancer to distribute traffic to the VMSS instances.
Configure auto-scaling rules (optional) to optimize costs further.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.
The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.
You need to recommend a solution to meet the regulatory requirement.
Solution: You recommend using the Regulatory compliance dashboard in Microsoft Defender for Cloud.
Does this meet the goal?
A. Yes
B. No
The proposed solution is to use the Regulatory compliance dashboard in Microsoft Defender for Cloud to meet the regulatory requirement of deploying Azure App Service instances only to specific Azure regions. Let’s analyze if this approach effectively achieves the goal.
Goal Breakdown:
Deploy App Service instances only to specific Azure regions: This is the core requirement, implying a need for prevention of deployment in non-approved regions.
Resources for App Service instances must reside in the same region: This is a related constraint about resource colocation, but the primary focus is on regional compliance.
Solution Analysis: Regulatory Compliance Dashboard in Microsoft Defender for Cloud
Functionality of the Dashboard: The Regulatory Compliance Dashboard in Microsoft Defender for Cloud is designed to:
Assess your Azure environment against regulatory standards: It evaluates your Azure resources and configurations against various compliance benchmarks (like PCI DSS, HIPAA, SOC 2, ISO 27001, etc.).
Provide visibility into your compliance posture: It gives you a centralized view of your compliance status, highlighting areas where you are meeting or failing to meet regulatory requirements.
Offer recommendations for improvement: Based on the compliance assessments, it provides actionable recommendations to remediate non-compliant configurations and improve your overall security and compliance.
Generate reports and track compliance over time: It allows you to monitor your compliance posture and demonstrate compliance to auditors and stakeholders.
Limitations for Region Enforcement: The Regulatory Compliance Dashboard in Microsoft Defender for Cloud is primarily a monitoring and reporting tool. It does not inherently enforce or prevent resource deployments in specific Azure regions at the time of deployment.
Detection, not Prevention: The dashboard can detect after resources are deployed if they are in non-compliant regions based on predefined policies and regulatory standards. It can then report on this non-compliance.
No Real-time Deployment Blocking: It does not act as a gatekeeper during the deployment process to block deployments to unapproved regions. It provides visibility after the resources exist.
Why the Solution Fails to Meet the Goal:
The regulatory requirement is about enforcement – ensuring that App Service instances are only deployed to specific Azure regions. The Regulatory Compliance Dashboard is a monitoring tool. It can tell you if you are compliant after deployment, but it cannot make you compliant by preventing non-compliant deployments in the first place.
The Correct Tool for Region Enforcement: Azure Policy
As established in similar questions, Azure Policy is the Azure service designed for enforcing organizational standards and assessing compliance, including regional restrictions. Azure Policy allows you to:
Define Allowed Locations Policy: Create a policy definition that specifies the allowed Azure regions for resource deployments.
Assign the Policy: Assign this policy to the relevant scope (subscription, resource group, management group).
Enforce with “Deny” Effect: Set the policy effect to “Deny”. This will prevent any resource deployment that violates the policy (i.e., attempts to deploy to a non-allowed region).
Conclusion:
Using the Regulatory compliance dashboard in Microsoft Defender for Cloud is not sufficient to meet the goal of ensuring App Service instances are deployed only to specific Azure regions. It is a valuable tool for compliance monitoring and reporting, but not for active enforcement of deployment restrictions.
Answer:
B. No
You plan to deploy an application named App1 that will run in containers on Azure Kubernetes Service (AKS) clusters. The AKS clusters will be distributed across four Azure regions.
You need to recommend a storage solution to ensure that updated container images are replicated automatically to all the Azure regions hosting the AKS clusters.
Which storage solution should you recommend?
A. geo-redundant storage (GRS) accounts
B. Premium SKU Azure Container Registry
C. Azure Content Delivery Network (CDN)
D. Azure Cache for Redis
The correct answer is B. Premium SKU Azure Container Registry.
Explanation:
Let’s analyze each option in the context of the requirements:
A. Geo-redundant storage (GRS) accounts:
Functionality: Geo-redundant storage (GRS) replicates your storage account data to a secondary region that is hundreds of miles away from the primary region. This is primarily for disaster recovery and data durability.
Suitability for Container Images: While you could store container images in Azure Blob Storage (which can use GRS), GRS is not the intended mechanism for distributing and managing container images for AKS deployments across regions. GRS is asynchronous replication designed for data durability in case of a regional disaster, not for fast and automated propagation of container image updates for application deployment. It also doesn’t directly integrate with AKS for container image pull.
Why it’s not the best choice: GRS is more about data backup and DR for storage accounts themselves, not efficient or designed for the specific task of container image replication for AKS deployments.
B. Premium SKU Azure Container Registry:
Functionality: Azure Container Registry (ACR) Premium SKU offers geo-replication. This feature is specifically designed to replicate container images across multiple Azure regions.
Suitability for Container Images: ACR is the Azure service for storing and managing container images. The Premium SKU’s geo-replication feature directly addresses the requirement. When you push a new container image to your ACR in one region, ACR automatically replicates the image to all other regions you’ve configured for replication. This ensures that your AKS clusters in all four regions can pull the latest container images from a registry that is geographically close to them, reducing latency and improving deployment speed.
Why it’s the best choice: ACR Premium SKU with geo-replication is purpose-built for distributing container images across multiple regions, making it the ideal solution for this requirement. It provides automatic, managed replication, minimizing administrative effort.
C. Azure Content Delivery Network (CDN):
Functionality: Azure CDN is used to cache and deliver static web content (like images, videos, JavaScript, CSS files) to users from geographically distributed edge servers.
Suitability for Container Images: While CDN can cache content, it’s not designed for replicating and managing container images for AKS deployments. CDN is optimized for delivering static content to end-users over the internet, not for distributing container images within Azure datacenters for application deployment.
Why it’s not the best choice: CDN is for content delivery to end-users, not for container image replication for AKS clusters across regions. It doesn’t integrate with AKS for image pulling in the same way ACR does.
D. Azure Cache for Redis:
Functionality: Azure Cache for Redis is an in-memory data cache service used to improve the performance of applications by caching frequently accessed data.
Suitability for Container Images: Azure Cache for Redis is completely unrelated to container image storage or replication.
Why it’s not the best choice: Redis Cache serves a different purpose (caching application data) and is not relevant to container image management or replication.
Therefore, the most appropriate and effective storage solution for automatically replicating updated container images to AKS clusters in multiple Azure regions is B. Premium SKU Azure Container Registry.
Final Answer: The final answer is
B
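To make the geo-replication point concrete, the sketch below shows how replicas for additional regions might be added programmatically. It assumes the azure-mgmt-containerregistry and azure-identity Python packages; the subscription, resource group, registry name, and regions are placeholders, and exact parameter shapes can vary by SDK version.

```python
# Sketch: adding geo-replicas to a Premium Azure Container Registry.
# Requires the azure-identity and azure-mgmt-containerregistry packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "rg-containers"        # placeholder
registry_name = "contosoregistry"       # placeholder (must be a Premium SKU registry)

client = ContainerRegistryManagementClient(DefaultAzureCredential(), subscription_id)

# One replication resource per additional region; images pushed to the home
# region are then copied to every replica automatically.
for region in ["eastus2", "westeurope", "southeastasia"]:
    client.replications.begin_create(
        resource_group_name=resource_group,
        registry_name=registry_name,
        replication_name=region,
        replication={"location": region},
    ).result()
```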
You have an Azure Active Directory (Azure AD) tenant.
You plan to deploy Azure Cosmos DB databases that will use the SQL API.
You need to recommend a solution to provide specific Azure AD user accounts with read access to the Cosmos DB databases.
What should you include in the recommendation?
A. shared access signatures (SAS) and Conditional Access policies
B. certificates and Azure Key Vault
C. master keys and Azure Information Protection policies
D. a resource token and an Access control (IAM) role assignment
The correct answer is D. a resource token and an Access control (IAM) role assignment.
Explanation:
Let’s break down why this is the correct answer and why the others are not:
D. a resource token and an Access control (IAM) role assignment (Correct):
Access control (IAM) role assignment: This is the core of the solution. Azure Role-Based Access Control (RBAC) via Azure IAM is the standard and recommended way to manage access to Azure resources, including Azure Cosmos DB. You can assign built-in Cosmos DB roles (like Cosmos DB Built-in Data Reader, Cosmos DB Built-in Data Contributor, etc.) or custom roles to Azure AD user accounts. This assignment is done at the Cosmos DB account, database, or container level, allowing for granular control.
Resource token: In Cosmos DB, a resource token is a fine-grained, time-limited token that grants access to specific resources such as containers, items, or partition key values. A mid-tier service (which holds the account key) creates Cosmos DB users and permissions, and each permission returns a resource token that the client presents when calling the database. Paired with an Access control (IAM) role assignment for the Azure AD accounts, this lets specific users read data without ever handling the account’s master keys.
How it works in practice:
You assign an appropriate Cosmos DB built-in role (e.g., Cosmos DB Built-in Data Reader) to the specific Azure AD user accounts using Azure IAM on the Cosmos DB account or database.
The Azure AD user authenticates to Azure.
When the user’s application (using Cosmos DB SDK with Azure AD authentication) attempts to access Cosmos DB, Azure AD authenticates the user and, based on the IAM role assignment, authorizes the access.
Cosmos DB grants access based on the valid Azure AD token derived from the RBAC authorization.
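For illustration, here is a minimal client-side sketch of steps 3 and 4 above: an Azure AD identity that holds a data-reader role assignment querying Cosmos DB without any keys. It assumes the azure-cosmos and azure-identity Python packages; the account endpoint, database, and container names are placeholders.

```python
# Sketch: an Azure AD user with the "Cosmos DB Built-in Data Reader" role
# reading data without account keys. Requires azure-identity and azure-cosmos.
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

# DefaultAzureCredential picks up the signed-in Azure AD identity
# (Azure CLI login, VS Code, managed identity, and so on).
client = CosmosClient(
    "https://<account>.documents.azure.com:443/",   # placeholder endpoint
    credential=DefaultAzureCredential(),
)

container = client.get_database_client("ordersdb").get_container_client("orders")

# Read-only query; it succeeds only if the IAM role assignment grants
# the data-read actions on this account, database, or container.
for item in container.query_items(
        query="SELECT TOP 10 * FROM c",
        enable_cross_partition_query=True):
    print(item["id"])
```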
A. shared access signatures (SAS) and Conditional Access policies (Incorrect):
Shared Access Signatures (SAS): SAS are used to grant limited, time-bound access to Azure Storage resources (like Blob Storage, Queue Storage, Table Storage). SAS are not the primary mechanism for granting Azure AD users access to Cosmos DB databases, especially for ongoing read access. SAS are more suitable for scenarios like temporary access for applications or services, not for managing user access based on Azure AD identities.
Conditional Access Policies: Conditional Access policies in Azure AD enforce authentication policies based on various conditions (user location, device, risk level, etc.). While Conditional Access can enhance the security of Azure AD authentication, it’s not the mechanism for authorizing access to Cosmos DB resources. Conditional Access is about authentication, not authorization to Cosmos DB data.
B. certificates and Azure Key Vault (Incorrect):
Certificates and Azure Key Vault: Certificates are used for secure authentication (e.g., for service principals or applications). Azure Key Vault is used for securely storing and managing secrets, including certificates. While certificates and Key Vault are essential for secure application authentication to Cosmos DB (using service principals or managed identities), they are not the primary method for granting individual Azure AD user accounts read access. This option is more relevant for application-to-Cosmos DB authentication, not user-to-Cosmos DB access based on Azure AD identities.
C. master keys and Azure Information Protection policies (Incorrect):
Master keys: Master keys are the primary access keys for Cosmos DB accounts. Sharing master keys with individual users is a major security risk and violates the principle of least privilege. Master keys grant full administrative access to the entire Cosmos DB account, which is far beyond the requirement of providing read access to specific users.
Azure Information Protection policies: Azure Information Protection (AIP) is used for data classification and protection (labeling, encryption, rights management) for documents and emails. AIP policies are not related to database access control or authentication for Cosmos DB.
HOTSPOT –
You are designing an application that will use Azure Linux virtual machines to analyze video files. The files will be uploaded from corporate offices that connect to
Azure by using ExpressRoute.
You plan to provision an Azure Storage account to host the files.
You need to ensure that the storage account meets the following requirements:
✑ Supports video files of up to 7 TB
✑ Provides the highest availability possible
✑ Ensures that storage is optimized for the large video files
✑ Ensures that files from the on-premises network are uploaded by using ExpressRoute
How should you configure the storage account? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Storage account type:
Premium file shares
Premium page blobs
Standard general-purpose v2
Data redundancy:
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)
Geo-redundant storage (GRS)
Networking:
Azure Route Server
A private endpoint
A service endpoint
To address each requirement for the Azure Storage account configuration, we need to select the most appropriate options for storage account type, data redundancy, and networking.
Storage account type:
Requirement: Supports video files of up to 7 TB
Standard general-purpose v2: Supports Block Blobs which are ideal for storing large files like videos. Standard v2 accounts can handle large files and are cost-effective for general purposes.
Premium file shares: While Premium file shares offer high performance, they are primarily for file sharing scenarios and might be more expensive for storing large video files compared to block blobs. Also, the question implies storing files for processing, not necessarily file sharing in the traditional sense.
Premium page blobs: Premium page blobs are designed for random read/write operations and are primarily used for virtual machine disks. They are not optimized for storing and streaming large video files.
Best Choice for file size and general purpose video storage: Standard general-purpose v2
Data redundancy:
Requirement: Provides the highest availability possible
Zone-redundant storage (ZRS): Replicates data across availability zones in a single region, providing high availability within that region.
Locally-redundant storage (LRS): Replicates data within a single data center, offering the lowest level of redundancy and availability compared to ZRS and GRS.
Geo-redundant storage (GRS): Replicates data to a secondary region hundreds of miles away, providing the highest level of availability and disaster recovery by protecting against regional outages.
Best Choice for highest availability: Geo-redundant storage (GRS)
Networking:
Requirement: Ensures that files from the on-premises network are uploaded by using ExpressRoute
Azure Route Server: Azure Route Server simplifies dynamic routing between your network virtual appliance (NVA) in Azure and your virtual network. It’s not directly related to securing or directing ExpressRoute traffic to a storage account.
A private endpoint: Private Endpoint provides private connectivity to Azure services from your virtual network. By creating a private endpoint for the storage account within your Azure VNet, you ensure that traffic from your on-premises network (connected via ExpressRoute to the VNet) will access the storage account privately over the ExpressRoute connection, without traversing the public internet.
A service endpoint: Service endpoints restrict access to an Azure service so that it accepts traffic only from your virtual network, but that traffic still targets the service’s public endpoint. A private endpoint instead gives the storage account a private IP address inside your VNet, keeping the upload path entirely private from on-premises over ExpressRoute.
Best Choice for ensuring ExpressRoute upload and private connectivity: A private endpoint
Final Configuration:
Based on the analysis of each requirement, the optimal configuration is:
Storage account type: Standard general-purpose v2 (Cost-effective and suitable for large video files)
Data redundancy: Geo-redundant storage (GRS) (Highest availability for disaster recovery)
Networking: A private endpoint (Ensures secure and private upload over ExpressRoute)
Final Answer:
Answer Area
Storage account type: Standard general-purpose v2
Data redundancy: Geo-redundant storage (GRS)
Networking: A private endpoint
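As a rough sketch of how this recommended configuration might be provisioned, the example below creates the StorageV2 account with GRS redundancy using the azure-mgmt-storage package; all names are placeholders, and the private endpoint itself would be created separately (for example with azure-mgmt-network) against the account’s blob sub-resource.

```python
# Sketch: provisioning the recommended account (StorageV2 + GRS) with
# azure-mgmt-storage. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    resource_group_name="rg-video",
    account_name="videostore01",
    parameters={
        "location": "eastus",
        "kind": "StorageV2",                 # Standard general-purpose v2
        "sku": {"name": "Standard_GRS"},     # geo-redundant storage
        "properties": {
            # Keep uploads on the private path (ExpressRoute + private endpoint).
            "publicNetworkAccess": "Disabled",
        },
    },
)
account = poller.result()
print(account.primary_endpoints.blob)
```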
You have data files in Azure Blob Storage.
You plan to transform the files and move them to Azure Data Lake Storage.
You need to transform the data by using mapping data flow.
Which service should you use?
A. Azure Databricks
B. Azure Storage Sync
C. Azure Data Factory
D. Azure Data Box Gateway
The correct answer is C. Azure Data Factory.
Here’s why:
Azure Data Factory (ADF) is a fully managed, serverless data integration service for orchestrating and automating data movement and transformation at scale. Mapping data flows are a core feature of ADF, designed for transforming data in a visual, code-free environment; ADF runs them on managed Spark clusters, so no cluster administration is required.
A. Azure Databricks: Azure Databricks is an Apache Spark-based analytics service that is powerful for data processing but primarily code-driven (using Scala, Python, R, or SQL). It does not provide the mapping data flow feature, which belongs to Azure Data Factory, so it is less suited to this no-code/low-code transformation requirement.
B. Azure Storage Sync: Azure Storage Sync is primarily used for synchronizing file shares between on-premises file servers and Azure File Storage. It’s not for transforming data.
D. Azure Data Box Gateway: Azure Data Box Gateway is a virtual appliance that resides on-premises and enables you to transfer data to Azure in a hybrid cloud environment. It’s for data ingestion, not transformation.
Why Data Factory is the closest/most correct: Data Factory provides mapping data flows, which let you visually transform data by using a set of built-in transformations.
HOTSPOT –
Your on-premises network contains a file server named Server1 that stores 500 GB of data.
You need to use Azure Data Factory to copy the data from Server1 to Azure Storage.
You add a new data factory.
What should you do next? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
From Server1:
Install an Azure File Sync agent.
Install a self-hosted integration runtime.
Install the File Server Resource Manager role service.
From the data factory:
Create a pipeline.
Create an Azure Import/Export job.
Provision an Azure-SQL Server Integration Services (SSIS) integration runtime.
To copy data from an on-premises file server (Server1) to Azure Storage using Azure Data Factory, you need to establish a connection and define the data movement process. Let’s break down the necessary steps and evaluate each option:
From Server1:
Goal: Enable Azure Data Factory to access the on-premises file server (Server1). Azure Data Factory is a cloud service and cannot directly access on-premises resources without a bridge.
Option 1: Install an Azure File Sync agent.
Incorrect. Azure File Sync agent is used to synchronize on-premises file servers with Azure File Shares in the cloud. It’s for creating a hybrid file sharing solution, not for copying data from a file server to Azure Storage using Data Factory.
Option 2: Install a self-hosted integration runtime.
Correct. A self-hosted integration runtime (SHIR) is the component in Azure Data Factory that acts as a bridge between the cloud service and on-premises data sources. You need to install the SHIR on a machine within your on-premises network (ideally close to Server1 for performance). This SHIR will then be used by Azure Data Factory to access Server1 and copy data.
Option 3: Install the File Server Resource Manager role service.
Incorrect. File Server Resource Manager (FSRM) is a role service in Windows Server used for managing and classifying files on file servers. It’s not related to Azure Data Factory connectivity or data copying.
From the data factory:
Goal: Define the data copy operation within Azure Data Factory.
Option 1: Create a pipeline.
Correct. A pipeline in Azure Data Factory is a logical grouping of activities that perform a specific task. To copy data, you must create a pipeline and add a “Copy activity” to that pipeline. The Copy activity will be configured to use the SHIR to connect to Server1 (as the source) and Azure Storage (as the sink).
Option 2: Create an Azure Import/Export job.
Incorrect. Azure Import/Export service is used for bulk data transfer to Azure Storage by shipping physical disk drives. It’s not related to Azure Data Factory or orchestrated data copying. It’s for very large, one-time data migrations, not for ongoing or scheduled data movement using Data Factory.
Option 3: Provision an Azure-SQL Server Integration Services (SSIS) integration runtime.
Incorrect. Azure-SQL Server Integration Services (SSIS) integration runtime is used to run SSIS packages in Azure Data Factory. While SSIS packages can be used for file copying, it’s an over-engineered solution for a simple file copy task described in the question. Also, the question doesn’t imply the use of SSIS packages. Using a native Data Factory Copy activity with SHIR is more straightforward for this scenario.
Therefore, the correct next steps are:
From Server1: Install a self-hosted integration runtime.
From the data factory: Create a pipeline.
Final Answer:
Answer Area
From Server1: Install a self-hosted integration runtime.
From the data factory: Create a pipeline.
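To show what the second step produces, here is an illustrative pipeline definition containing a single Copy activity, expressed as a Python dictionary that mirrors Data Factory’s pipeline JSON. The dataset names are assumptions; the source dataset would be bound to a linked service that uses the self-hosted integration runtime installed on (or near) Server1.

```python
# Illustrative Copy-activity pipeline, as a Python dict mirroring the
# Data Factory pipeline JSON. Dataset names are placeholders.
copy_pipeline = {
    "name": "CopyServer1ToBlob",
    "properties": {
        "activities": [
            {
                "name": "CopyFiles",
                "type": "Copy",
                "inputs": [
                    {"referenceName": "Server1FileShareDataset",
                     "type": "DatasetReference"}
                ],
                "outputs": [
                    {"referenceName": "AzureBlobDataset",
                     "type": "DatasetReference"}
                ],
                "typeProperties": {
                    "source": {"type": "FileSystemSource", "recursive": True},
                    "sink": {"type": "BlobSink"},
                },
            }
        ]
    },
}
```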
You have an Azure subscription.
You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:
✑ Minimize the time it takes to provision compute resources during scale-out operations.
✑ Support autoscaling of Windows Server containers.
Which scaling option should you recommend?
A. Kubernetes version 1.20.2 or newer
B. Virtual nodes with Virtual Kubelet ACI
C. cluster autoscaler
D. horizontal pod autoscaler
The correct answer is C. cluster autoscaler.
Here’s why:
C. Cluster Autoscaler: The cluster autoscaler automatically adjusts the number of nodes in the AKS cluster based on the resource requests of pending pods. It watches the Kubernetes scheduler and adds or removes nodes as needed to meet demand, which addresses the requirement to minimize the time it takes to provision compute resources during scale-out operations. It also works with Windows Server node pools, so it supports autoscaling of Windows Server containers.
A. Kubernetes version 1.20.2 or newer: While using a recent version of Kubernetes is generally a good practice, it doesn’t directly solve the scaling requirements. Newer versions might have performance improvements, but they don’t automatically provision nodes.
B. Virtual nodes with Virtual Kubelet ACI: Virtual nodes allow you to extend your AKS cluster to Azure Container Instances (ACI), provisioning pods without managing the underlying virtual machines. While this might seem like a good option for quick scaling, there’s no support for Windows Server containers.
D. Horizontal Pod Autoscaler (HPA): HPA automatically scales the number of pods within a deployment or replica set based on observed CPU utilization or other select metrics. It doesn’t provision new nodes, only increases or decreases the number of pods on existing nodes.
Why cluster autoscaler is the closest/most correct: The cluster autoscaler is the only option that dynamically adjusts the number of nodes in the AKS cluster, which minimizes provisioning time during scale-out while also supporting autoscaling of Windows Server containers.
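For reference, the sketch below shows the kind of agent pool settings that enable the cluster autoscaler on a Windows Server node pool, written as a Python dictionary that mirrors the AKS agent pool resource shape; the pool name, VM size, and count limits are illustrative.

```python
# Illustrative Windows Server node pool with the cluster autoscaler enabled,
# as a Python dict mirroring the AKS agent pool resource shape.
windows_autoscaling_pool = {
    "name": "winpool",
    "properties": {
        "osType": "Windows",
        "osSKU": "Windows2019",
        "mode": "User",                 # user pool for application workloads
        "vmSize": "Standard_D4s_v3",
        "count": 1,
        "enableAutoScaling": True,      # cluster autoscaler on this pool
        "minCount": 1,
        "maxCount": 5,
    },
}
```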
You have an Azure subscription.
You need to recommend an Azure Kubernetes Service (AKS) solution that will use Linux nodes. The solution must meet the following requirements:
✑ Minimize the time it takes to provision compute resources during scale-out operations.
✑ Support autoscaling of Linux containers.
✑ Minimize administrative effort.
Which scaling option should you recommend?
A. horizontal pod autoscaler
B. cluster autoscaler
C. virtual nodes
D. Virtual Kubelet
The correct answer is C. virtual nodes.
Here’s why:
Minimize the time it takes to provision compute resources during scale-out operations:
Virtual Nodes leverage Azure Container Instances (ACI) to run containers. ACI provides serverless container execution, which means that provisioning new compute resources (containers in ACI) is extremely fast. There’s no need to wait for virtual machines to be created and configured.
Cluster Autoscaler (CA) scales by adding or removing virtual machines (nodes) in the AKS cluster’s node pools. Provisioning new VMs takes time, typically several minutes, which is significantly slower than Virtual Nodes/ACI.
Horizontal Pod Autoscaler (HPA) scales the number of pods within the existing nodes. It’s fast because it’s just creating more pod replicas on already provisioned nodes, but it doesn’t scale the underlying compute resources if the existing nodes are already fully utilized.
Virtual Kubelet is a more general term and an open-source project. In the context of AKS, Virtual Nodes are the AKS implementation of Virtual Kubelet using ACI as the backend. Therefore, Virtual Nodes benefit from the fast provisioning of ACI.
Support autoscaling of Linux containers:
Virtual Nodes fully support running Linux containers on ACI.
Cluster Autoscaler supports autoscaling of node pools which can contain Linux nodes and therefore Linux containers.
Horizontal Pod Autoscaler autoscales Linux containers within deployments or replica sets.
Virtual Kubelet (and AKS Virtual Nodes) supports Linux containers via ACI.
Minimize administrative effort:
Virtual Nodes are designed to minimize administrative effort. You don’t need to manage the underlying VMs that run the containers in ACI. AKS and ACI handle the infrastructure management.
Cluster Autoscaler reduces administrative effort compared to manually scaling node pools, but you still need to manage the node pools themselves, including node images, scaling configurations, etc.
Horizontal Pod Autoscaler is relatively easy to configure and manage, but it addresses pod scaling, not compute resource scaling.
Virtual Kubelet (AKS Virtual Nodes) is specifically aimed at reducing operational overhead by leveraging serverless container execution.
Why other options are less suitable:
A. horizontal pod autoscaler: While HPA is essential for scaling applications based on load, it doesn’t address the requirement of minimizing compute resource provisioning time during scale-out. HPA scales pods within the existing capacity, not the underlying compute nodes. If the cluster needs more compute capacity, HPA alone won’t solve that.
B. cluster autoscaler: Cluster Autoscaler does scale compute resources (nodes), but the VM provisioning process is slower than Virtual Nodes/ACI, and it involves more administrative overhead in managing node pools.
D. Virtual Kubelet: While technically related, Virtual Nodes is the specific AKS feature that utilizes Virtual Kubelet with ACI backend to achieve the desired scaling characteristics. Choosing “Virtual Nodes” is more direct and specific to AKS in this context. “Virtual Kubelet” is a broader, more conceptual term.
In summary, Virtual Nodes (C) is the best option because it directly addresses all three requirements: fast compute provisioning, Linux container support, and minimal administrative effort by leveraging the serverless nature of Azure Container Instances.
Final Answer: The final answer is
C
You have an Azure virtual machine named VM1 that runs Windows Server 2019 and contains 500 GB of data files.
You are designing a solution that will use Azure Data Factory to transform the data files, and then load the files to Azure Data Lake Storage.
What should you deploy on VM1 to support the design?
A. the On-premises data gateway
B. the Azure Pipelines agent
C. the self-hosted integration runtime
D. the Azure File Sync agent
The correct answer is C. the self-hosted integration runtime.
Here’s why:
C. Self-hosted Integration Runtime (SHIR): This is the correct choice. The SHIR is an Azure Data Factory component that you install in an on-premises or virtual network to provide secure data integration capabilities. It acts as a bridge between ADF in the cloud and your private data sources (like VM1’s file system). You’ll use it to access the files on VM1, transform them, and move them to Azure Data Lake Storage.
A. On-premises Data Gateway: This is used by Power BI, PowerApps, Logic Apps, and Microsoft Flow to connect to on-premises data sources. While it shares some similarities with the SHIR, it’s not directly used by Azure Data Factory.
B. Azure Pipelines agent: This agent is for Azure DevOps Pipelines, used for building and deploying applications. It’s irrelevant to Azure Data Factory’s data integration tasks.
D. Azure File Sync agent: Azure File Sync is used to synchronize files between on-premises file servers and Azure File Storage. Again, it’s unrelated to data transformation in Azure Data Factory.
Why Self-hosted integration runtime is the closest/most correct: To access data on VM1 for data transformation in ADF, you must use the Self-hosted integration runtime.
You are designing an order processing system in Azure that will contain the Azure resources shown in the following table.
Name Type Purpose
App1 App Service web app Processes customer orders
Function1 Function Checks product availability at vendor 1
Function2 Function Checks product availability at vendor 2
storage2 Storage account Stores order processing logs
The order processing system will have the following transaction flow:
✑ A customer will place an order by using App1.
✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.
✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.
✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.
✑ All the steps of the transaction will be logged to storage2.
Which type of resource should you recommend for the integration component?
A. an Azure Service Bus queue
B. an Azure Data Factory pipeline
C. an Azure Event Grid domain
D. an Azure Event Hubs capture
For the integration component in this Azure-based order processing system, the best choice among the options provided is:
A. an Azure Service Bus queue
Why Azure Service Bus Queue is Correct:
Azure Service Bus is a messaging service designed for reliable, asynchronous communication between distributed components of an application, which perfectly aligns with the requirements of this scenario. Here’s how it fits the transaction flow:
Message Generation by App1: When a customer places an order, App1 generates a message to check product availability. Azure Service Bus queues allow App1 to send messages asynchronously, decoupling the web app from the downstream processing.
Message Processing: The integration component needs to process the message and trigger either Function1 or Function2 based on the order type. Azure Service Bus supports this pattern: Function1 and Function2 can use Service Bus triggers, a queue consumer can inspect the order type and invoke the appropriate function, or topics and subscriptions with filters can route each message type automatically.
Reliability and Scalability: Service Bus ensures that messages are reliably delivered to the functions, even under high load or if a function is temporarily unavailable. It supports features like message queuing, dead-letter queues, and retries, which are critical for a robust order processing system.
Status Message Back to App1: After Function1 or Function2 confirms availability, a status message is sent back to App1. This can be achieved by having the functions send a response message to another Service Bus queue or topic that App1 listens to.
Logging: While the logging to storage2 is handled separately, Service Bus integrates well with Azure ecosystems, making it easy to log transaction steps alongside the messaging workflow.
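As a minimal sketch of the queue-based flow, the example below shows App1 enqueuing an availability-check message and a consumer receiving it, using the azure-servicebus Python package; the connection string, queue name, and message shape are placeholders, and in practice Function1 and Function2 would typically use Service Bus triggers instead of an explicit receiver.

```python
# Sketch: sending and receiving the availability-check message with the
# azure-servicebus package. Connection string, queue name, and message
# shape are placeholders.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<service-bus-connection-string>"   # placeholder
queue_name = "availability-checks"             # placeholder

# App1 side: enqueue an availability check for the integration component.
with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_sender(queue_name=queue_name) as sender:
        sender.send_messages(
            ServiceBusMessage(json.dumps({"orderId": "12345", "vendor": "vendor1"})))

# Consumer side: receive, inspect the order type, and complete the message.
with ServiceBusClient.from_connection_string(conn_str) as client:
    with client.get_queue_receiver(queue_name=queue_name) as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            order = json.loads(str(msg))       # message body as JSON
            print("Check availability at", order["vendor"])
            receiver.complete_message(msg)
```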
You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.
Your company has a line-of-business (LOB) application that was developed internally.
You need to implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) when users attempt to access the application from an unknown location.
Which two features should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Azure AD Privileged Identity Management (PIM)
B. Azure Application Gateway
C. Azure AD enterprise applications
D. Azure AD Identity Protection
E. Conditional Access policies
The two correct features to include in the solution are:
C. Azure AD enterprise applications
E. Conditional Access policies
Here’s why:
C. Azure AD enterprise applications: To enable SAML SSO for a line-of-business application, you need to register it as an enterprise application in Azure AD. This registration allows Azure AD to act as the identity provider (IdP) for the application. The enterprise application configuration will handle the trust relationship and SAML token exchange between Azure AD and the LOB application.
E. Conditional Access policies: This feature allows you to enforce MFA based on conditions, such as the user’s location. You can create a policy that requires MFA when users access the LOB application from an “unknown location.” This addresses the second requirement: MFA for unknown locations.
Why other Options are not correct:
A. Azure AD Privileged Identity Management (PIM): PIM is used to manage, control, and monitor access to important resources in your organization. While it’s good for security, it’s not directly related to implementing SAML SSO or enforcing location-based MFA.
B. Azure Application Gateway: Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Application Gateway is not directly related to SSO or MFA.
D. Azure AD Identity Protection: Identity Protection detects risks, allows users to remediate, and enables your organization to export risk detections to third-party SIEM tools. Though a useful security tool, it is not the tool for implementing SAML or conditional access policies.
Why Azure AD enterprise applications and Conditional Access policies are the closest/most correct: Registering the application as an enterprise application is essential for SAML SSO, and Conditional Access policies are the feature that allows you to enforce MFA based on location and other conditions.
You plan to automate the deployment of resources to Azure subscriptions.
What is a difference between using Azure Blueprints and Azure Resource Manager (ARM) templates?
A. ARM templates remain connected to the deployed resources.
B. Only blueprints can contain policy definitions.
C. Only ARM templates can contain policy definitions.
D. Blueprints remain connected to the deployed resources.
The correct answer is D. Blueprints remain connected to the deployed resources.
Explanation:
Here’s a breakdown of why option D is correct and why the other options are incorrect:
D. Blueprints remain connected to the deployed resources. (Correct)
Azure Blueprints are designed for governance and compliance in addition to infrastructure deployment. A key feature of Blueprints is that when you assign a blueprint to a subscription or management group, the resulting blueprint assignment maintains an ongoing relationship between the blueprint definition and the resources it deployed.
This connection allows for:
Versioning: Blueprints are versioned, so you can track changes and rollback if needed.
Updates: When you update a blueprint definition, you can roll out those changes to existing blueprint assignments, ensuring consistency across your environments.
Governance: Blueprints help enforce compliance by including policy assignments and role assignments as part of the blueprint definition. The ongoing connection helps ensure that deployed resources continue to adhere to these policies.
Lifecycle Management: Blueprints are designed to manage the lifecycle of environments, not just the initial deployment.
A. ARM templates remain connected to the deployed resources. (Incorrect)
Azure Resource Manager (ARM) templates are primarily for infrastructure-as-code deployments. They are declarative JSON files that define the resources you want to deploy to Azure.
Once you deploy an ARM template, the template itself is not actively connected to the deployed resources. It’s a one-time deployment mechanism.
If you want to make changes to resources deployed via an ARM template, you typically need to redeploy the template (or a modified version of it). There isn’t a persistent, managed connection between the original template and the resources after deployment.
B. Only blueprints can contain policy definitions. (Incorrect)
Both Azure Blueprints and ARM templates can work with policy definitions.
ARM templates can deploy and assign Azure Policies. You can include Microsoft.Authorization/policyAssignments resources within an ARM template to assign policies during resource deployment.
Azure Blueprints natively integrate with Azure Policy. Policy assignments are a core component of a blueprint definition. Blueprints are designed to manage and enforce policies across environments.
C. Only ARM templates can contain policy definitions. (Incorrect)
As explained above, Azure Blueprints are specifically designed to contain and manage policy definitions as a key part of their governance capabilities. They are not limited in this regard compared to ARM templates.
In summary, the crucial difference is the persistent connection and lifecycle management capabilities of Blueprints compared to the deployment-centric, disconnected nature of ARM templates. Blueprints are designed for ongoing governance and environment management, while ARM templates are primarily for initial infrastructure provisioning.
Final Answer: The final answer is
D
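To back up the point above that ARM templates can also carry policy, here is an illustrative policy assignment resource as it could appear in a template’s resources array, shown as a Python dictionary that mirrors the JSON. The assignment name, API version, and parameter values are assumptions; the definition ID shown is the built-in Allowed locations policy.

```python
# Illustrative policy assignment resource for an ARM template's "resources"
# array, as a Python dict mirroring the JSON. Values are assumptions.
policy_assignment_resource = {
    "type": "Microsoft.Authorization/policyAssignments",
    "apiVersion": "2021-06-01",
    "name": "allowed-locations-assignment",
    "properties": {
        "policyDefinitionId": (
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "e56962a6-4747-49cd-b67b-bf8b01975c4c"   # built-in "Allowed locations"
        ),
        "parameters": {
            "listOfAllowedLocations": {"value": ["eastus", "westeurope"]}
        },
    },
}
```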
HOTSPOT –
You have the resources shown in the following table.
Name Type Resource group
VM1 Azure virtual machine RG1
VM2 On-premises virtual machine Not applicable
You create a new resource group in Azure named RG2.
You need to move the virtual machines to RG2.
What should you use to move each virtual machine? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
VM1
Azure Arc
Azure Lighthouse
Azure Migrate
Azure Resource Mover
The Data Migration Assistant (DMA)
VM2
Azure Arc
Azure Lighthouse
Azure Migrate
Azure Resource Mover
The Data Migration Assistant (DMA)
VM1 (Azure virtual machine):
Problem: VM1 is an Azure virtual machine currently located in Resource Group RG1. You need to move it to a new Resource Group RG2 within Azure.
Solution: Azure Resource Mover is the service designed specifically for moving Azure resources, including virtual machines, between resource groups, regions, and subscriptions. It simplifies the process by handling dependencies and ensuring a consistent move.
Why other options are incorrect for VM1:
Azure Arc: Azure Arc is used to manage on-premises, multi-cloud, and edge servers, Kubernetes clusters, and applications from Azure. It’s not for moving Azure resources between resource groups.
Azure Lighthouse: Azure Lighthouse enables multi-tenant management, allowing service providers to manage resources across multiple customer tenants from their own Azure tenant. It’s not for moving resources within a single tenant.
Azure Migrate: Azure Migrate is used for migrating on-premises workloads to Azure. VM1 is already in Azure, so Azure Migrate is not the appropriate tool for moving it between resource groups.
The Data Migration Assistant (DMA): DMA is a tool used for database migration and upgrades. It’s not relevant for moving virtual machines.
VM2 (On-premises virtual machine):
Problem: VM2 is an on-premises virtual machine. You need to “move” it to Azure Resource Group RG2. In the context of on-premises VMs and Azure resource groups, “move” typically means managing or representing the on-premises VM within Azure and associating it with RG2.
Solution: Azure Arc is the service that allows you to project, manage, and govern your on-premises and multi-cloud servers, applications, and data as if they were native Azure resources. By connecting VM2 to Azure Arc, you can manage it from the Azure portal, organize it within Resource Group RG2, and apply Azure policies and management capabilities.
Why other options are incorrect for VM2:
Azure Lighthouse: As explained before, Azure Lighthouse is for multi-tenant management, not for managing on-premises resources within your Azure tenant.
Azure Migrate: While Azure Migrate could be used to migrate VM2 to Azure and place the resulting Azure VM in RG2, this is a full migration process, not simply “moving” an on-premises VM to be managed within RG2. The question implies a management scenario within Azure, not necessarily a full migration. Azure Arc addresses management of on-premises resources in Azure, which is a more direct interpretation of “move to RG2” in this context for an on-premises VM.
Azure Resource Mover: Azure Resource Mover is for moving Azure resources. It cannot directly move or manage on-premises virtual machines.
The Data Migration Assistant (DMA): DMA is for database migrations and is not relevant for managing or migrating virtual machines.
Therefore, the correct answer is:
VM1: Azure Resource Mover
VM2: Azure Arc
Answer Area:
VM1: Azure Resource Mover
VM2: Azure Arc
You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.
You need to recommend a load balancing service for the planned deployment. The solution must meet the following requirements:
✑ Maintain access to the app in the event of a regional outage.
✑ Support Azure Web Application Firewall (WAF).
✑ Support cookie-based affinity.
✑ Support URL routing.
What should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. Azure Application Gateway
D. Azure Load Balancer
The correct answer is A. Azure Front Door.
Explanation:
Let’s break down each requirement and see how Azure Front Door and the other options compare:
Maintain access to the app in the event of a regional outage.
Azure Front Door: Azure Front Door is a global, scalable entry point that uses Microsoft’s global edge network. It is designed to provide high availability and resilience, including protection against regional outages. It can automatically route traffic to the healthiest and closest backend in another region if one region becomes unavailable.
Azure Traffic Manager: Azure Traffic Manager is also a global, DNS-based traffic routing service that can route traffic to different regions based on various routing methods (Priority, Performance, Geographic, Weighted). It can be used for regional failover, but it is DNS-based, meaning failover is dependent on DNS propagation which can take time.
Azure Application Gateway: Azure Application Gateway is a regional service. It provides high availability and scalability within a single Azure region but is not designed for cross-region failover in the event of a regional outage without additional configuration (like using Traffic Manager in front).
Azure Load Balancer: Azure Load Balancer is also a regional service. It provides high availability within a single Azure region but does not provide cross-region failover.
Support Azure Web Application Firewall (WAF).
Azure Front Door: Azure Front Door has a built-in, globally distributed Web Application Firewall (WAF). This WAF protects your web application from common web exploits and vulnerabilities at the network edge.
Azure Traffic Manager: Azure Traffic Manager itself does not provide WAF capabilities. You would need to place a WAF service (like Application Gateway WAF or Azure Firewall Premium) in front of each regional endpoint that Traffic Manager is routing to.
Azure Application Gateway: Azure Application Gateway has a built-in WAF option. You can enable WAF on Application Gateway to protect web applications within a region.
Azure Load Balancer: Azure Load Balancer is a Layer 4 load balancer and does not provide WAF capabilities. WAF is a Layer 7 feature.
Support cookie-based affinity.
Azure Front Door: Azure Front Door supports session affinity, which can be configured to use cookies to ensure that requests from the same client session are routed to the same backend instance within a region (or the same region, depending on configuration).
Azure Traffic Manager: Azure Traffic Manager is DNS-based and does not provide cookie-based affinity. It routes traffic at the DNS level, not at the application session level.
Azure Application Gateway: Azure Application Gateway supports cookie-based session affinity (also known as cookie-based persistence or sticky sessions).
Azure Load Balancer: Azure Load Balancer (Standard SKU) supports session persistence based on source IP address, but it does not directly support cookie-based affinity.
Support URL routing.
Azure Front Door: Azure Front Door supports URL path-based routing. You can configure routing rules to direct traffic to different backends based on the URL path of the incoming request.
Azure Traffic Manager: Azure Traffic Manager is DNS-based and does not support URL routing. It routes traffic based on DNS names, not URL paths.
Azure Application Gateway: Azure Application Gateway is a Layer 7 load balancer and fully supports URL path-based routing.
Azure Load Balancer: Azure Load Balancer is a Layer 4 load balancer and does not support URL routing. It routes traffic based on IP protocol and ports, not URL paths.
Conclusion:
Based on the analysis, Azure Front Door (A) is the only service that comprehensively meets all four requirements: regional outage resilience, WAF support, cookie-based affinity, and URL routing.
Final Answer: The final answer is
A
HOTSPOT –
You have the Azure resources shown in the following table.
Name Type Description
VNET1 Virtual network Connected to an on-premises network by using ExpressRoute
VM1 Virtual machine Configured as a DNS server
SQLDB1 Azure SQL Database Single instance
PE1 Private endpoint Provides connectivity to SQLDB1
contoso.com Private DNS zone Linked to VNET1 and contains an A record for PE1
contoso.com Public DNS zone Contains a CNAME record for SQLDB1
You need to design a solution that provides on-premises network connectivity to SQLDB1 through PE1.
How should you configure name resolution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Azure configuration:
Configure VM1 to forward contoso.com to the public DNS zone
Configure VM1 to forward contoso.com to the Azure-provided DNS at 168.63.129.16
In VNet1, configure a custom DNS server set to the Azure provided DNS at 168.63.129.16
On-premises DNS configuration:
Forward contoso.com to VM1
Forward contoso.com to the public DNS zone
Forward contoso.com to the Azure-provisioned DNS at 168.63.129.16
The goal is to resolve the Azure SQL Database name to its private endpoint’s IP address from the on-premises network. Let’s analyze the components and how they should be configured for name resolution.
Azure Configuration:
VM1 as DNS Server: VM1 is configured as a DNS server within VNET1. This VM will be responsible for resolving DNS queries originating from on-premises for resources in Azure, specifically for SQLDB1 via its private endpoint.
Private DNS Zone contoso.com: This zone is linked to VNET1 and contains the A record for PE1. This ensures that within VNET1, PE1.contoso.com (or whatever the FQDN is in the private zone) resolves to the private IP address of PE1. Azure-provided DNS within VNET1 (168.63.129.16) automatically uses these linked private DNS zones.
Public DNS Zone contoso.com: This zone contains a CNAME record for SQLDB1, likely pointing to the public endpoint of SQLDB1. This is for public internet access and is not relevant to the on-premises private connectivity requirement.
On-premises Configuration:
ExpressRoute Connectivity: ExpressRoute provides a private connection between the on-premises network and VNET1. This is essential for connectivity to the private endpoint.
Name Resolution Flow from On-premises to SQLDB1 via PE1:
On-premises client attempts to resolve the FQDN of SQLDB1 (which should resolve to the private endpoint). Let’s assume the FQDN to be resolved is within the contoso.com domain or a subdomain of it.
On-premises DNS server receives the DNS query. To resolve names in Azure private DNS zones, the query needs to be forwarded to a DNS server that can access the Azure private DNS zone. VM1 in VNET1 is set up for this purpose.
On-premises DNS server should be configured to forward queries for the contoso.com domain (or relevant subdomain) to VM1. This ensures that queries for Azure private resources are directed to the Azure DNS infrastructure.
VM1 DNS server receives the forwarded query. VM1 needs to be able to resolve names within the contoso.com private DNS zone. To do this, VM1 should be configured to forward queries for the contoso.com domain to the Azure-provided DNS server at 168.63.129.16. Azure-provided DNS is authoritative for private DNS zones linked to the VNET.
Azure-provided DNS (168.63.129.16) within VNET1 receives the query. Because VNET1 is linked to the contoso.com private DNS zone, it will look up the record for SQLDB1 (or PE1’s FQDN) in the private zone.
The private DNS zone contoso.com contains an A record for PE1, which resolves to the private IP address of the private endpoint.
Azure-provided DNS returns the private IP address of PE1 to VM1.
VM1 DNS server returns the private IP address of PE1 to the on-premises DNS server.
On-premises DNS server returns the private IP address of PE1 to the on-premises client.
On-premises client can now connect to SQLDB1 using the private IP address of PE1 over the ExpressRoute connection.
Analyzing the Answer Options:
Azure configuration:
Configure VM1 to forward contoso.com to the public DNS zone: Incorrect. This would resolve to the public endpoint, not the private endpoint.
Configure VM1 to forward contoso.com to the Azure-provided DNS at 168.63.129.16: Correct. VM1 needs to use Azure DNS to resolve names within the private DNS zone.
In VNet1, configure a custom DNS server set to the Azure provided DNS at 168.63.129.16: Incorrect. Pointing VNET1 at 168.63.129.16 is effectively the default behavior and does nothing for on-premises clients, which cannot reach that address directly. The on-premises resolution path must go through VM1, so the required Azure-side change is configuring VM1 to forward to the Azure-provided DNS.
On-premises DNS configuration:
Forward contoso.com to VM1: Correct. On-premises DNS needs to forward queries for the relevant domain to VM1.
Forward contoso.com to the public DNS zone: Incorrect. This would resolve to the public endpoint.
Forward contoso.com to the Azure-provisioned DNS at 168.63.129.16: Incorrect. On-premises DNS cannot directly reach Azure’s internal DNS IP 168.63.129.16. On-premises needs to forward to VM1, which can then access Azure DNS internally.
Therefore, the correct answer is:
Azure configuration: Configure VM1 to forward contoso.com to the Azure-provided DNS at 168.63.129.16
On-premises DNS configuration: Forward contoso.com to VM1
Final Answer:
Answer Area:
Azure configuration: Configure VM1 to forward contoso.com to the Azure-provided DNS at 168.63.129.16
On-premises DNS configuration: Forward contoso.com to VM1
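As an optional check of the chain above, the sketch below queries the record through a chosen DNS server using the dnspython package; the server IP and FQDN are placeholders, and a correctly configured path should return PE1’s private IP rather than the public SQL endpoint.

```python
# Optional verification with the dnspython package: query the FQDN through a
# specific DNS server and confirm it returns PE1's private IP address.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.10.0.4"]           # assumed IP of the on-premises DNS or VM1

answer = resolver.resolve("sqldb1.contoso.com", "A")   # assumed FQDN of PE1's record
for record in answer:
    print(record.address)                      # expect a private (RFC 1918) address
```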
You are designing a microservices architecture that will support a web application.
The solution must meet the following requirements:
✑ Deploy the solution on-premises and to Azure.
✑ Support low-latency and hyper-scale operations.
✑ Allow independent upgrades to each microservice.
✑ Set policies for performing automatic repairs to the microservices.
You need to recommend a technology.
What should you recommend?
A. Azure Container Instance
B. Azure Logic App
C. Azure Service Fabric
D. Azure virtual machine scale set
The correct answer is C. Azure Service Fabric.
Explanation:
Let’s break down each requirement and how Azure Service Fabric and the other options align:
Deploy the solution on-premises and to Azure:
Azure Service Fabric (C): Azure Service Fabric is designed to be a hybrid platform. It can be deployed and run in Azure, on-premises in your own datacenter (as a standalone cluster), or in other clouds. This is a core strength of Service Fabric.
Azure Container Instance (A): Azure Container Instance (ACI) is a purely Azure-based service. It cannot be deployed on-premises.
Azure Logic App (B): Azure Logic Apps are primarily a cloud-based integration service, not a compute platform for hosting and running microservices on-premises. While Logic Apps can connect to on-premises systems, the Logic App runtime itself is not deployed on-premises in the way required for hosting microservices there.
Azure virtual machine scale set (D): Azure virtual machine scale sets (VMSS) are an Azure compute resource and are deployed in Azure. They are not designed for on-premises deployment.
Support low-latency and hyper-scale operations:
Azure Service Fabric (C): Service Fabric is built for building scalable, low-latency, and highly reliable microservices and distributed applications. It is designed to handle hyper-scale workloads.
Azure Container Instance (A): ACI is serverless and can scale quickly for individual containers, offering low latency. However, managing and orchestrating a complex microservices application solely with ACI might become more challenging at very large scale compared to a dedicated orchestrator like Service Fabric or Kubernetes.
Azure Logic App (B): Logic Apps are not primarily designed for low-latency, hyper-scale compute for microservices. They are more focused on integration and orchestration of workflows, which may involve calling microservices, but are not the microservice hosting platform itself.
Azure virtual machine scale set (D): VMSS can scale to a large number of VMs, and if microservices are properly architected and deployed within them (e.g., using containers orchestrated by Kubernetes or similar within the VMSS), they can support hyper-scale and low latency. However, VMSS alone is just the infrastructure scaling component; you need additional orchestration for microservices within VMSS.
Allow independent upgrades to each microservice:
Azure Service Fabric (C): Service Fabric is designed for microservices and supports independent, rolling upgrades of individual microservices without downtime.
Azure Container Instance (A): Containers inherently support independent upgrades, and you can upgrade individual ACI instances.
Azure Logic App (B): Logic Apps are individually managed and upgraded, but again, not in the context of microservices as compute units.
Azure virtual machine scale set (D): Upgrading applications within VMSS can be managed, but it requires orchestration and is not as natively designed for independent microservice upgrades as Service Fabric.
Set policies for performing automatic repairs to the microservices:
Azure Service Fabric (C): Service Fabric has built-in health monitoring and automatic repair capabilities. It can detect failed microservice instances and automatically restart or redeploy them based on health policies.
Azure Container Instance (A): ACI automatically restarts containers if they fail, offering basic automatic repair at the container level.
Azure Logic App (B): Logic Apps have retry policies and error handling, but not in the context of automatically repairing microservice instances.
Azure virtual machine scale set (D): VMSS has auto-repair features for VMs themselves (e.g., automatic instance replacement if a VM fails health checks), but not specifically designed for automatic repair of microservices running within the VMs. You would need additional application-level health checks and repair mechanisms.
Conclusion:
Azure Service Fabric is the technology that best fits all the requirements, especially the crucial requirement for on-premises and Azure deployment. It is specifically designed for building and managing scalable, reliable microservices and provides the necessary features for independent upgrades and automatic repairs. While other options may address some requirements, Service Fabric is the most comprehensive solution for the given scenario.
Final Answer: The final answer is
C
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app. The solution must meet the following requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Front Door to provide access to the app.
Does this meet the goal?
A. Yes
B. No
Let’s analyze each requirement against the capabilities of Azure Front Door:
Support rate limiting: Azure Front Door integrates with Web Application Firewall (WAF). WAF policies within Front Door can be configured to implement rate limiting rules based on various criteria like IP address, request frequency, and more. Therefore, Front Door does support rate limiting.
Balance requests between all instances: Azure Front Door is a global HTTP load balancer. It can distribute traffic across multiple backend instances (in this case, web app instances in different regions) using various routing methods. Latency-based routing, for example, automatically directs traffic to the backend with the lowest latency, effectively balancing requests across healthy instances. Front Door does balance requests between instances.
Ensure that users can access the app in the event of a regional outage: Azure Front Door is a globally distributed service. If one Azure region experiences an outage, Front Door can automatically detect the unhealthy backend instances in that region and route traffic to healthy instances in other regions. This provides high availability and ensures users can still access the app even if a regional outage occurs. Front Door does ensure access during regional outages.
Since Azure Front Door meets all three stated requirements (rate limiting, load balancing across instances, and regional outage resilience), the solution is valid.
Therefore, the answer is A. Yes.
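For illustration only, a Front Door WAF custom rule that implements rate limiting could be shaped like the dictionary below, which mirrors the WAF policy JSON; the rule name, threshold, window, and match condition are assumptions.

```python
# Illustrative Front Door WAF custom rate-limit rule, as a Python dict
# mirroring the WAF policy JSON. All values are assumptions.
rate_limit_rule = {
    "name": "RateLimitPerClientIP",
    "priority": 1,
    "ruleType": "RateLimitRule",
    "rateLimitThreshold": 100,            # max requests per client ...
    "rateLimitDurationInMinutes": 1,      # ... per one-minute window
    "matchConditions": [
        {
            "matchVariable": "RequestUri",
            "operator": "Contains",
            "matchValue": ["/"],          # apply to all request paths
        }
    ],
    "action": "Block",
}
```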
You have an Azure subscription.
You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements:
✑ Only allow the creation of the virtual machines in specific regions.
✑ Only allow the creation of specific sizes of virtual machines.
What should you include in the recommendation?
A. Attribute-based access control (ABAC)
B. Azure Policy
C. Conditional Access policies
D. role-based access control (RBAC)
The correct answer is B. Azure Policy.
Explanation:
Let’s analyze each option in the context of the requirements:
B. Azure Policy (Correct):
Functionality: Azure Policy is a service in Azure that allows you to define and enforce organizational standards and assess compliance at-scale. It provides capabilities to:
Define policies: Create policy definitions that specify rules and conditions for Azure resources.
Assign policies: Assign these policy definitions to scopes (subscriptions, resource groups, management groups).
Enforce policies: Policies can have different effects, including Deny (preventing non-compliant resource creation or modification), Audit (logging non-compliance), and Modify (automatically correcting non-compliant resources).
Meeting the Requirements:
Only allow the creation of virtual machines in specific regions: Azure Policy has built-in policy definitions and allows custom policy creation to restrict resource locations. You can define a policy that specifies the allowed Azure regions and set the effect to Deny. This will prevent developers from creating VMs in regions outside the allowed list.
Only allow the creation of specific sizes of virtual machines: Azure Policy also allows you to restrict allowed SKUs (sizes) for virtual machines. You can define a policy that specifies the allowed VM sizes (e.g., Standard_DS1_v2, Standard_DS2_v2) and set the effect to Deny. This will prevent developers from creating VMs of sizes not on the allowed list.
Ease of Implementation: Azure Policy is designed to be relatively straightforward to implement for these types of common governance requirements. You can use built-in policies or create custom policies with JSON definitions.
A. Attribute-based access control (ABAC):
Functionality: ABAC is a fine-grained authorization system that grants access based on attributes of the subject (user), resource, and environment. While ABAC is powerful and flexible, it is primarily focused on controlling access to resources (i.e., who can perform what actions), not on restricting resource properties during creation like location or size.
Suitability for Requirements: ABAC is not the primary tool for enforcing restrictions on VM regions and sizes during deployment. While you might be able to devise complex ABAC rules that indirectly achieve this, it would be an overly complex and less direct approach compared to Azure Policy. ABAC is better suited for dynamic and context-aware access control scenarios, not for static restrictions on resource properties.
C. Conditional Access policies:
Functionality: Conditional Access policies in Azure Active Directory (Azure AD) are used to control authentication and authorization to Azure AD-integrated applications and services. They enforce policies based on conditions like user location, device, risk level, etc., primarily at the authentication layer.
Suitability for Requirements: Conditional Access policies are not relevant for restricting the Azure region or VM size during resource deployment. They are about controlling who can log in and access applications, not about governing how resources are provisioned within Azure subscriptions.
D. role-based access control (RBAC):
Functionality: Role-Based Access Control (RBAC) in Azure manages access to Azure resources by assigning roles (like Owner, Contributor, Reader, Virtual Machine Contributor) to users, groups, or service principals. RBAC controls who has what permissions to manage resources.
Suitability for Requirements: RBAC controls who can create virtual machines (e.g., by assigning the “Virtual Machine Contributor” role), but it does not inherently restrict where or what size VMs can be created. While you could create custom roles and potentially try to limit actions based on resource properties, it would be a very complex and indirect way to achieve region and size restrictions. RBAC is not designed for this level of granular resource property control during deployment.
In summary, Azure Policy is the most direct, effective, and purpose-built solution for enforcing restrictions on Azure resource properties like allowed regions and VM sizes during deployment.
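To make the Azure Policy approach concrete, the deny rules for locations and VM sizes have the following general shape (shown here as Python dictionaries that mirror the policy JSON; the parameter names are illustrative assumptions, and in practice you would simply assign the built-in “Allowed locations” and “Allowed virtual machine size SKUs” definitions):

# Sketch of an "allowed locations" policy rule: deny VM creation in any
# region that is not in the allowedLocations parameter.
allowed_locations_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {"field": "location", "notIn": "[parameters('allowedLocations')]"},
        ]
    },
    "then": {"effect": "Deny"},
}

# Sketch of an "allowed VM sizes" policy rule: deny any VM whose SKU is not
# in the allowedSizes parameter.
allowed_sizes_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {
                "field": "Microsoft.Compute/virtualMachines/sku.name",
                "notIn": "[parameters('allowedSizes')]",
            },
        ]
    },
    "then": {"effect": "Deny"},
}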
Final Answer: The final answer is
B
You have an Azure subscription that contains a storage account.
An application sometimes writes duplicate files to the storage account.
You have a PowerShell script that identifies and deletes duplicate files in the storage account. Currently, the script is run manually after approval from the operations manager.
You need to recommend a serverless solution that performs the following actions:
✑ Runs the script once an hour to identify whether duplicate files exist
✑ Sends an email notification to the operations manager requesting approval to delete the duplicate files
✑ Processes an email response from the operations manager specifying whether the deletion was approved
✑ Runs the script if the deletion was approved
What should you include in the recommendation?
A. Azure Logic Apps and Azure Event Grid
B. Azure Logic Apps and Azure Functions
C. Azure Pipelines and Azure Service Fabric
D. Azure Functions and Azure Batch
Let’s analyze each requirement and how the proposed options fulfill them:
Requirements:
Runs the script once an hour: Needs a scheduling mechanism.
Identify duplicate files: Requires script execution.
Email notification for approval: Needs email sending capability.
Process email response: Needs to handle email responses and extract approval status.
Run script if approved: Conditional execution based on email response.
Option A: Azure Logic Apps and Azure Event Grid
Azure Logic Apps: Excellent for orchestration, scheduling (using recurrence triggers), email integration (send and potentially receive/process email responses with connectors), and conditional logic. Logic Apps can also integrate with other Azure services, including running scripts indirectly (e.g., by calling Azure Functions or Automation Runbooks).
Azure Event Grid: Primarily designed for event-driven architectures. It’s great for reacting to events happening within Azure services. While it can trigger Logic Apps, it’s not the ideal component for scheduling a task hourly or directly handling email responses for approval workflows.
Option B: Azure Logic Apps and Azure Functions
Azure Logic Apps: As described above, Logic Apps excels at orchestration, scheduling, email handling, and conditional workflows.
Azure Functions: Serverless compute service that can execute code in various languages, including PowerShell (using PowerShell Functions). Azure Functions are perfect for running the PowerShell script to identify and delete duplicate files. Logic Apps can easily call Azure Functions.
Option C: Azure Pipelines and Azure Service Fabric
Azure Pipelines: Primarily a CI/CD (Continuous Integration/Continuous Delivery) service. While Pipelines can be scheduled and run scripts, they are not designed for complex orchestration, email-based approval workflows, or serverless execution in the same way as Logic Apps and Functions. Email integration and response processing are not core strengths of Pipelines.
Azure Service Fabric: A distributed systems platform for packaging, deploying, and managing scalable and reliable microservices and containers. It’s overkill for this simple automation task and not serverless in the same sense as Logic Apps and Functions. Service Fabric is more for building and managing complex applications, not simple scheduled scripts with approval flows.
Option D: Azure Functions and Azure Batch
Azure Functions: Good for running the PowerShell script serverlessly. However, Functions alone are not ideal for scheduling, email handling, or complex orchestration of an approval workflow. While you could code email sending and response processing within a Function, it becomes more complex than using Logic Apps for orchestration.
Azure Batch: Designed for large-scale parallel and high-performance computing (HPC) workloads. It’s not relevant for this scenario of scheduled script execution and email-based approval. Batch is for processing large datasets or running compute-intensive tasks in parallel, not for orchestration and email workflows.
Why Option B is the best fit:
Option B (Azure Logic Apps and Azure Functions) provides the most comprehensive and suitable solution because:
Logic Apps handles orchestration, scheduling, email, and approval workflow: Logic Apps can be scheduled to run hourly, send email notifications requesting approval, process email responses (potentially by parsing email content or using actions that can wait for external events), and implement conditional logic to run the script based on approval.
Azure Functions executes the PowerShell script: Azure Functions provide serverless compute to run the existing PowerShell script that identifies and deletes duplicate files. Logic Apps can easily call Azure Functions to perform these script executions.
Detailed Flow using Option B (Logic Apps and Azure Functions):
Logic App Trigger: Use a Recurrence trigger to run the Logic App hourly.
Logic App Action 1: Call an Azure Function (Function 1 - Identify Duplicates). This Function executes the PowerShell script to identify duplicate files in the storage account. The Function returns a list of duplicate file names (or a boolean indicating if duplicates exist).
Logic App Condition 1: Check if duplicate files were found (based on Function 1 output).
If duplicates found (True branch):
Logic App Action 2: Send an email notification to the operations manager using an email connector (e.g., Office 365 Outlook, SendGrid). The email should request approval to delete the duplicate files and could include a list of the files. You can use “Send email with options” connector in Logic Apps to get direct approval/reject responses from email.
Logic App Action 3: Wait for a response from the operations manager (if using “Send email with options”, this step is implicitly handled). If not, you might need a mechanism to process incoming emails into a queue or storage and have the Logic App check for a response.
If no duplicates found (False branch):
Logic App Action 4: Terminate or do nothing.
Logic App Condition 2 (after receiving approval - if explicit waiting step is needed): Check the operations manager’s response.
If Approved (True branch):
Logic App Action 5: Call another Azure Function (Function 2 - Delete Duplicates). This Function executes the PowerShell script to delete the duplicate files.
If Rejected (False branch):
Logic App Action 6: Send a notification email (optional) and terminate.
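The identification step itself is just a matter of grouping blobs by content hash. The scenario already has a PowerShell script for this; purely as an illustration, a minimal Python sketch using the azure-storage-blob SDK is shown below. The connection-string environment variable and the container name are assumptions, not values from the scenario, and blobs without a stored MD5 would need to be hashed after download.

# Illustrative sketch of the "identify duplicates" step using azure-storage-blob.
# Groups blobs by the MD5 stored in their properties and returns the groups of
# duplicates. Connection string and container name are placeholder assumptions.
import os
from collections import defaultdict
from azure.storage.blob import BlobServiceClient

def find_duplicate_blobs(connection_string, container_name):
    service = BlobServiceClient.from_connection_string(connection_string)
    container = service.get_container_client(container_name)

    by_hash = defaultdict(list)
    for blob in container.list_blobs():
        md5 = blob.content_settings.content_md5  # MD5 stored in blob properties, when available
        if md5:
            by_hash[bytes(md5)].append(blob.name)

    # Only return hashes that map to more than one blob name.
    return [names for names in by_hash.values() if len(names) > 1]

if __name__ == "__main__":
    duplicates = find_duplicate_blobs(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"], "documents"
    )
    print(duplicates)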
Final Answer: B. Azure Logic Apps and Azure Functions
Your company has the infrastructure shown in the following table.
Location Resource
Azure:
* Azure subscription named Subscription1
* 20 Azure web apps
On-premises datacenter:
* Active Directory domain
* Server running Azure AD Connect
* Linux computer named Server1
The on-premises Active Directory domain syncs with Azure Active Directory (Azure AD).
Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.
You plan to migrate Server1 to a virtual machine in Subscription1.
A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.
You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.
What should you include in the recommendation?
A. Azure AD Application Proxy
B. the Active Directory Domain Services role on a virtual machine
C. an Azure VPN gateway
D. Azure AD Domain Services (Azure AD DS)
The correct answer is D. Azure AD Domain Services (Azure AD DS).
Explanation:
Let’s analyze each option in the context of the requirements and constraints:
Problem: App1 on Server1 currently uses LDAP queries to authenticate against the on-premises Active Directory. After migrating Server1 to an Azure VM, we need to maintain App1’s authentication functionality without allowing the Azure VM to access the on-premises network, due to a security policy.
A. Azure AD Application Proxy:
Incorrect. Azure AD Application Proxy is designed to publish on-premises web applications to the internet, enabling secure remote access through Azure AD pre-authentication. It is used for inbound access to on-premises applications from the internet.
It does not facilitate outbound LDAP queries from an Azure VM to an authentication service. Application Proxy is not relevant for the scenario where App1 needs to perform LDAP authentication.
B. the Active Directory Domain Services role on a virtual machine:
Incorrect. While installing the Active Directory Domain Services (AD DS) role on an Azure VM creates a domain controller in Azure, it would still require connectivity back to the on-premises Active Directory to authenticate against the on-premises user identities, or to replicate the on-premises domain to Azure.
Establishing connectivity (like VPN or ExpressRoute) to the on-premises network from Azure VMs is explicitly prohibited by the security policy. Creating a standalone AD DS forest in Azure would also not solve the problem because App1 is designed to authenticate against the on-premises domain users.
C. an Azure VPN gateway:
Incorrect. An Azure VPN gateway establishes a VPN connection between Azure and the on-premises network. This would enable the Azure VM to directly query the on-premises Active Directory via LDAP, which would allow App1 to function as before.
However, this solution directly violates the security policy that explicitly prevents Azure VMs and services in Subscription1 from accessing the on-premises network.
D. Azure AD Domain Services (Azure AD DS):
Correct. Azure AD Domain Services (Azure AD DS) provides managed domain services in Azure. It’s essentially a domain controller as a service, managed by Microsoft.
How it works for this scenario:
User Synchronization: Azure AD DS synchronizes users and groups from Azure AD. Since your on-premises Active Directory is already syncing with Azure AD using Azure AD Connect, the user accounts are already in Azure AD and will be available in Azure AD DS.
LDAP Support: Azure AD DS supports LDAP (Lightweight Directory Access Protocol) and Kerberos authentication.
Migration and Configuration: After migrating Server1 to an Azure VM, you can configure App1 to perform LDAP queries against the Azure AD DS managed domain instead of the on-premises Active Directory.
No On-premises Network Access: Azure AD DS is a standalone domain service within Azure and does not require any network connectivity back to the on-premises network for authentication purposes. The Azure VM running App1 will communicate with Azure AD DS entirely within Azure.
Advantages of Azure AD DS:
Meets Security Policy: Completely avoids accessing the on-premises network from Azure VMs.
Maintains Functionality: Allows App1 to continue using LDAP-based authentication.
Managed Service: Reduces administrative overhead as Azure AD DS is a managed service.
Therefore, the best solution is D. Azure AD Domain Services (Azure AD DS). It allows App1 to continue using LDAP authentication in Azure without violating the security policy of preventing Azure VMs from accessing the on-premises network.
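As an illustration of the end state, App1 would simply point its existing LDAP queries at the managed domain over secure LDAP. The sketch below uses the Python ldap3 package; the domain name, service account, and search base are placeholders, and it assumes secure LDAP (LDAPS) has been enabled on the Azure AD DS managed domain.

# Minimal sketch of an LDAP lookup against an Azure AD DS managed domain,
# assuming the ldap3 package and that LDAPS is enabled on the managed domain.
# The domain, account, and search base are placeholders, not scenario values.
import ssl
from ldap3 import Server, Connection, Tls, ALL

tls = Tls(validate=ssl.CERT_REQUIRED)
server = Server("ldaps://aaddscontoso.com", port=636, use_ssl=True, tls=tls, get_info=ALL)

conn = Connection(
    server,
    user="svc-app1@aaddscontoso.com",    # placeholder service account synced into the managed domain
    password="<service-account-password>",
    auto_bind=True,                      # raises an exception if the bind fails
)

# Example query App1 might issue to verify a user identity.
conn.search(
    search_base="dc=aaddscontoso,dc=com",
    search_filter="(&(objectClass=user)(sAMAccountName=user1))",
    attributes=["displayName", "mail"],
)
print(conn.entries)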
Final Answer: The final answer is
D
You need to design a solution that will execute custom C# code in response to an event routed to Azure Event Grid. The solution must meet the following requirements:
✑ The executed code must be able to access the private IP address of a Microsoft SQL Server instance that runs on an Azure virtual machine.
✑ Costs must be minimized.
What should you include in the solution?
A. Azure Logic Apps in the Consumption plan
B. Azure Functions in the Premium plan
C. Azure Functions in the Consumption plan
D. Azure Logic Apps in the integrated service environment
Correct Answer
B. Azure Functions in the Premium plan
B. Azure Functions in the Premium plan (Correct):
The executed code must reach the private IP address of a SQL Server instance inside an Azure virtual network, so the compute service must support virtual network integration. Azure Functions in the Consumption plan and Azure Logic Apps in the Consumption plan cannot connect to resources on a private IP, which eliminates options A and C.
Both the Functions Premium plan and a Logic Apps integration service environment (ISE) provide VNet connectivity, but the ISE carries a far higher fixed cost. The Premium plan runs custom C# code triggered by Azure Event Grid, offers VNet integration and pre-warmed instances, and is the cheaper of the two viable options, so it minimizes costs while meeting the connectivity requirement.
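The scenario calls for custom C# code, but the connectivity point is language-independent: an Event Grid-triggered function running on a VNet-integrated Premium plan can open a connection straight to the SQL Server VM's private IP. The sketch below is a Python equivalent for illustration only; it assumes the azure-functions and pyodbc packages, omits the function.json trigger binding, and uses a placeholder IP address, database, and credentials.

# Illustration only (the scenario uses C#): an Event Grid-triggered function
# that reaches a SQL Server instance by its private IP. Requires VNet
# integration on the Premium plan; IP, database, and credentials are placeholders.
import logging
import pyodbc
import azure.functions as func

def main(event: func.EventGridEvent) -> None:
    logging.info("Handling Event Grid event %s", event.id)

    # Reachable only because the Premium plan provides virtual network integration.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=10.0.1.4;DATABASE=AppDb;UID=appuser;PWD=<password>;"
        "Encrypt=yes;TrustServerCertificate=yes;"
    )
    cursor = conn.cursor()
    cursor.execute("INSERT INTO dbo.Events (EventId) VALUES (?)", event.id)
    conn.commit()
    conn.close()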
HOTSPOT –
You have an Azure subscription named Subscription1 that is linked to a hybrid Azure Active Directory (Azure AD) tenant.
You have an on-premises datacenter that does NOT have a VPN connection to Subscription1. The datacenter contains a computer named Server1 that has Microsoft SQL Server 2016 installed. Server1 is prevented from accessing the internet.
An Azure logic app resource named LogicApp1 requires write access to a database on Server1.
You need to recommend a solution to provide LogicApp1 with the ability to access Server1.
What should you recommend deploying on-premises and in Azure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
On-premises:
A Web Application Proxy for Windows Server
An Azure AD Application Proxy connector
An On-premises data gateway
Hybrid Connection Manager
Azure:
A connection gateway resource
An Azure Application Gateway
An Azure Event Grid domain
An enterprise application
To enable an Azure Logic App to access an on-premises SQL Server database without a VPN and with the server restricted from internet access, we need a secure bridge that allows outbound communication from the on-premises network to Azure, but not inbound access to the on-premises network from the internet.
Let’s evaluate the options for the On-premises component:
A Web Application Proxy for Windows Server: Web Application Proxy is used to publish on-premises web applications to the internet, providing reverse proxy functionality and pre-authentication. It’s not designed for connecting Logic Apps to on-premises databases.
An Azure AD Application Proxy connector: Azure AD Application Proxy connector is also for enabling secure remote access to on-premises web applications via Azure AD. It is not for connecting Logic Apps to databases.
An On-premises data gateway: The On-premises data gateway is specifically designed to act as a secure bridge between Azure cloud services (such as Logic Apps, Power Automate, Power BI, and Azure Analysis Services) and on-premises data sources (such as SQL Server and file shares). The gateway establishes an outbound connection to Azure, receives requests from Azure services, and relays them to the on-premises data source. Only the machine on which the gateway is installed needs outbound internet connectivity, and that machine can be a different on-premises computer than Server1, so Server1 itself never has to reach the internet. This fits the scenario, which has no VPN and an internet-restricted SQL Server.
Hybrid Connection Manager: Hybrid Connection Manager, part of Azure Relay Hybrid Connections, allows Azure services to securely access applications running on-premises. While it can be used for various scenarios, for connecting Logic Apps to on-premises SQL Server, the On-premises data gateway is generally the more direct and recommended solution, especially as Logic Apps have native connectors and integration with the data gateway.
Now let’s evaluate the options for the Azure component:
A connection gateway resource: When you install the On-premises data gateway, you register it in Azure as a gateway resource in a subscription, and each Logic Apps connection to an on-premises data source then references that gateway resource. Although “connection gateway resource” is not the exact term used in Azure documentation, it is the option that represents this Azure-side configuration required for LogicApp1 to use the data gateway.
An Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and Web Application Firewall (WAF). It is not designed for establishing secure data connections between Logic Apps and on-premises databases.
An Azure Event Grid domain: Azure Event Grid is a service for event routing and delivery. It is not relevant for connecting Logic Apps to on-premises SQL Server databases.
An enterprise application: “Enterprise application” in Azure refers to application registrations in Azure AD, used for authentication and authorization. While important for security, it’s not the component that directly facilitates the data connection between Logic Apps and on-premises SQL Server.
Based on the functionality and typical use cases, the On-premises data gateway is the correct on-premises component, and in the context of data gateway connections within Logic Apps, A connection gateway resource is the most fitting Azure-side component from the given options, even if not perfectly named in standard Azure documentation.
Final Answer:
Answer Area
On-premises: An On-premises data gateway
Azure: A connection gateway resource
You are designing a microservices architecture that will be hosted in an Azure Kubernetes Service (AKS) cluster. Apps that will consume the microservices will be hosted on Azure virtual machines. The virtual machines and the AKS cluster will reside on the same virtual network.
You need to design a solution to expose the microservices to the consumer apps. The solution must meet the following requirements:
✑ Ingress access to the microservices must be restricted to a single private IP address and protected by using mutual TLS authentication.
✑ The number of incoming microservice calls must be rate-limited.
✑ Costs must be minimized.
What should you include in the solution?
A. Azure App Gateway with Azure Web Application Firewall (WAF)
B. Azure API Management Standard tier with a service endpoint
C. Azure Front Door with Azure Web Application Firewall (WAF)
D. Azure API Management Premium tier with virtual network connection
Correct Answer: A. Azure App Gateway with Azure Web Application Firewall (WAF)
Why This Is Correct
Private IP: App Gateway can be deployed with a private IP in the VNet and integrated with AKS using AGIC, providing a single entry point for the microservices.
mTLS: Supports mutual TLS authentication, meeting the security requirement.
Rate-Limiting: While not as feature-rich as API Management, WAF rules can enforce basic rate-limiting (e.g., request thresholds), and additional logic could be implemented in the AKS microservices if needed. The requirement doesn’t specify advanced rate-limiting, so this suffices.
Cost: App Gateway is significantly cheaper than API Management Premium, aligning with the “minimize costs” goal while still meeting the core requirements.
AKS Context: App Gateway is a common choice for AKS ingress, making it a natural fit for this microservices scenario.
Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.
You plan to move all the virtual machines to Azure.
You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.
What should you use to make the recommendation?
A. Azure Pricing calculator
B. Azure Advisor
C. Azure Migrate
D. Azure Cost Management
The correct answer is C. Azure Migrate.
Here’s why:
Azure Migrate: Azure Migrate is a service specifically designed to assess and migrate on-premises servers to Azure. One of its key features is the ability to perform a sizing assessment. This assessment analyzes the performance characteristics of your on-premises VMware VMs (CPU, memory, disk I/O) and recommends appropriately sized Azure VMs for the migration target. It also takes into account utilization levels to optimize the recommendation and avoid over-provisioning.
Here’s why the other options are incorrect:
A. Azure Pricing calculator: The Azure Pricing calculator is useful for estimating the cost of Azure resources, but it doesn’t provide any information about the resource requirements of your on-premises VMs. You would need to manually determine the appropriate VM sizes before using the calculator, which defeats the purpose of minimizing administrative effort.
B. Azure Advisor: Azure Advisor provides recommendations for optimizing your existing Azure resources. It doesn’t analyze on-premises environments or provide sizing recommendations for migrating VMs.
D. Azure Cost Management: Azure Cost Management helps you analyze and manage your Azure spending, but it doesn’t have any capabilities for assessing on-premises environments or recommending Azure VM sizes for migration.
You have the Azure resources shown in the following table.
Name Type Location
US-Central-Firewall-policy Azure Firewall policy Central US
US-East-Firewall-policy Azure Firewall policy East US
EU-Firewall-policy Azure Firewall policy West Europe
USEastfirewall Azure Firewall Central US
USWestfirewall Azure Firewall East US
EUFirewall Azure Firewall West Europe
You need to deploy a new Azure Firewall policy that will contain mandatory rules for all Azure Firewall deployments. The new policy will be configured as a parent policy for the existing policies.
What is the minimum number of additional Azure Firewall policies you should create?
A. 0
B. 1
C. 2
D. 3
Correct Answer
B. 1
Why This Is Correct
Single Parent Policy: One new policy is sufficient to enforce mandatory rules across all firewalls by setting it as the parent of the existing policies. Azure Firewall’s hierarchical model allows a single parent policy to govern multiple child policies, regardless of region or firewall association.
Existing Policies Reused: The three existing policies (US-Central-Firewall-policy, US-East-Firewall-policy, EU-Firewall-policy) don’t need to be replaced or supplemented with additional policies; they just need their parent updated.
Minimization: Creating more than one additional policy (e.g., one per region) would be unnecessary and violate the “minimum number” constraint, as a single parent policy achieves the goal.
AZ-305 Context: This question tests your understanding of Azure Firewall policy hierarchy and efficient resource design, a key focus of the exam.
Your company has an app named App1 that uses data from the on-premises Microsoft SQL Server databases shown in the following table.
NAME SIZE
DB1 400 GB
DB2 250 GB
DB3 300 GB
DB4 50 GB
App1 and the data are used on the first day of the month only. The data is not expected to grow more than 3 percent each year.
The company is rewriting App1 as an Azure web app and plans to migrate all the data to Azure.
You need to migrate the data to Azure SQL Database and ensure that the database is only available on the first day of each month.
Which service tier should you use?
A. vCore-based General Purpose
B. DTU-based Standard
C. vCore-based Business Critical
D. DTU-based Basic
Let’s break down the requirements and evaluate each service tier option.
Requirements:
Migrate data to Azure SQL Database: The solution needs to be an Azure SQL Database service tier.
Database size: Total data is 400 GB + 250 GB + 300 GB + 50 GB = 1000 GB = 1 TB.
Availability: Only available on the first day of each month. This implies the database can be stopped/paused for the rest of the month to minimize costs.
Workload: Online Transaction Processing (OLTP).
Cost Optimization: Minimize cost as the database is used only one day a month.
Evaluating Service Tiers:
A. vCore-based General Purpose:
Scalability: Offers good scalability and performance for general workloads, including OLTP.
Cost: Generally cost-effective. The General Purpose tier offers a serverless compute option that automatically pauses compute when the database is idle; while paused, you pay only for storage, which dramatically reduces cost when the database is needed for only one day each month.
Data Size: Supports databases larger than 1 TB, easily accommodating the 1 TB requirement.
Availability (on first day only): Suitable because the serverless compute option in vCore-based General Purpose pauses compute during idle periods and resumes it when activity returns (or you can script the start). The database can therefore be available on the first day of each month while incurring almost no compute cost for the rest of the month.
B. DTU-based Standard:
Scalability: Less scalable than vCore-based tiers. For a 1 TB database, you would likely need a higher DTU level within the Standard tier, which might become less cost-effective compared to General Purpose vCore, especially when considering pausing compute.
Cost: Can be cheaper for smaller databases, but for a 1 TB database, the required DTU level might increase the cost, and DTU model doesn’t offer the same granular control over compute pausing and cost reduction as vCore.
Data Size: Standard tier has size limits, but higher DTU levels can support databases around 1 TB.
Availability (on first day only): Less optimal for pausing and resuming compute compared to vCore.
C. vCore-based Business Critical:
Scalability: Highest performance and scalability, designed for mission-critical applications.
Cost: Most expensive tier due to high performance and HA features. Overkill for a database used only one day a month. Stopping compute might still leave higher base costs compared to General Purpose.
Data Size: Supports databases larger than 1 TB.
Availability (on first day only): Designed for continuous, high availability, which is not needed for this scenario. Very expensive and not cost-effective.
D. DTU-based Basic:
Scalability: Very limited scalability and performance.
Cost: Cheapest DTU tier.
Data Size: Has very small database size limits (typically a few GB), which is far too small for the 1 TB requirement. Not suitable at all.
Conclusion:
Given the requirements, especially the need to minimize cost for a database that is used only one day a month, the vCore-based General Purpose tier is the most appropriate. It provides sufficient performance for the workload, handles the 1 TB data size, and, with the serverless compute option, automatically pauses compute while the database is idle. The company therefore pays primarily for storage for most of the month and for compute only around the first day, when the database is actually in use.
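A quick back-of-the-envelope check (hours only, no actual prices) shows why the serverless pause behavior matters here:

# Rough illustration: a database active for roughly one day per month consumes
# only a few percent of the compute hours of an always-on database.
active_hours = 24        # first day of the month
hours_in_month = 730     # average hours in a month
print(f"compute-hour fraction: {active_hours / hours_in_month:.1%}")  # about 3.3%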
Final Answer: The final answer is
A
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app. The solution must meet the following replication requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Load Balancer to provide access to the app.
Does this meet the goal?
A. Yes
B. No
Let’s analyze the proposed solution (Azure Load Balancer) against each of the stated requirements for a multi-region web app deployment.
Support rate limiting: Azure Load Balancer, in both its Basic and Standard SKUs, does not provide rate limiting capabilities. Rate limiting is a feature of services such as Azure Application Gateway with WAF, Azure Front Door, or API Management, or of the application itself. Azure Load Balancer operates at the network layer (Layer 4, TCP/UDP) and never inspects individual HTTP requests, so it cannot enforce request-rate rules.
Balance requests between all instances: Azure Load Balancer can balance requests across multiple instances of a web app. However, a key limitation is that Azure Load Balancer is a regional service. A single Azure Load Balancer instance is deployed within a specific Azure region and can only distribute traffic to backend instances within the same region. It cannot inherently balance requests across web app instances deployed in different Azure regions. To achieve cross-region load balancing with Azure Load Balancer alone, you would need to deploy multiple Load Balancers, one in each region, and then implement a separate global traffic management solution (like Azure Traffic Manager or Azure Front Door) in front of them to distribute traffic across regions.
Ensure that users can access the app in the event of a regional outage: Because Azure Load Balancer is a regional service, if the Azure region where the Load Balancer and the primary app instances are deployed experiences an outage, the Azure Load Balancer in that region will also be affected. A single regional Azure Load Balancer cannot provide regional outage resilience across multiple regions. To achieve regional failover, you would need a global load balancing solution that can direct traffic to healthy regions if one region becomes unavailable. Azure Load Balancer itself is not designed for this cross-region failover scenario.
Conclusion:
Azure Load Balancer, by itself, fails to meet all the stated requirements. It does not inherently support rate limiting, and it is a regional service, so it cannot directly balance requests across regions or ensure regional outage resilience without additional components. While it can balance requests within a region, the multi-region context and outage requirement are not met by a standalone Azure Load Balancer.
Therefore, the solution does not meet the goal.
Final Answer: The final answer is
B
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app. The solution must meet the following replication requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Traffic Manager to provide access to the app.
Does this meet the goal?
A. Yes
B. No
Let’s evaluate if Azure Traffic Manager meets each of the stated requirements for a multi-region web app access solution:
Support rate limiting: Azure Traffic Manager is a DNS-based traffic routing service. It directs traffic to different endpoints based on DNS resolution and the chosen routing method (e.g., Performance, Priority, Weighted, Geographic). Azure Traffic Manager itself does not provide rate limiting capabilities. Rate limiting is typically implemented at the application level, using a Web Application Firewall (WAF), or an API Gateway. Traffic Manager operates at the DNS layer, and decisions are made at the DNS resolution phase, not at the HTTP request level.
Balance requests between all instances: Azure Traffic Manager can distribute traffic across multiple instances of a web app deployed in different regions. Depending on the routing method chosen, Traffic Manager can direct users to the “closest” instance (Performance routing), distribute traffic based on weights (Weighted routing), or use other criteria to spread traffic across available endpoints. It is important to note that Traffic Manager is not a load balancer in the traditional sense: it works at the DNS layer and does not inspect or forward individual HTTP requests. Even so, for the purpose of distributing users and their requests across different regional instances, Traffic Manager can fulfill this requirement to a reasonable degree.
Ensure that users can access the app in the event of a regional outage: Azure Traffic Manager is specifically designed for this scenario and excels at providing regional outage resilience. It continuously monitors the health of the configured endpoints (which would be the regional web app instances). If an entire region or a specific web app instance becomes unhealthy, Traffic Manager can automatically detect this failure and stop directing traffic to the failed endpoint. It will then route users to the healthy instances in other regions, ensuring continued access to the application even if one region experiences an outage. This is a primary and very strong use case for Azure Traffic Manager.
Overall Assessment:
Azure Traffic Manager effectively addresses the requirements for balancing requests across multiple instances and ensuring access during regional outages. However, it does not support rate limiting.
Given that the question states that all replication requirements must be met, and Azure Traffic Manager does not offer rate limiting, the solution does not fully meet the goal.
Therefore, the answer is No.
Final Answer: The final answer is
B
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy multiple instances of an Azure web app across several Azure regions.
You need to design an access solution for the app. The solution must meet the following replication requirements:
✑ Support rate limiting.
✑ Balance requests between all instances.
✑ Ensure that users can access the app in the event of a regional outage.
Solution: You use Azure Application Gateway to provide access to the app.
Does this meet the goal?
A. Yes
B. No
Let’s analyze the proposed solution (Azure Application Gateway) against each of the stated requirements for a multi-region web app deployment.
Support rate limiting: Azure Application Gateway, particularly when configured with the Web Application Firewall (WAF) feature, does support rate limiting. WAF rules can be configured to limit the number of requests from a specific source (e.g., IP address) within a defined time window. This allows for protection against denial-of-service attacks and helps manage traffic volume. Thus, this requirement is met.
Balance requests between all instances: Azure Application Gateway excels at load balancing HTTP/HTTPS traffic. It can distribute requests across multiple backend servers. However, standard Azure Application Gateway is a regional service. A single instance of Application Gateway is deployed in a specific Azure region and can only directly load balance traffic to backend instances located within the same region. It cannot natively load balance across web app instances deployed in different Azure regions with a single Application Gateway instance. To achieve cross-region load balancing, you would typically need a global load balancer in front of regional Application Gateways (like Azure Front Door). For the question as posed, using only Azure Application Gateway, this requirement is not fully met for all instances across several regions. It can balance instances within a region, but not between regions in a single deployment.
Ensure that users can access the app in the event of a regional outage: Since Azure Application Gateway is a regional service, if the Azure region where the Application Gateway is deployed experiences a complete outage, the Application Gateway itself will also become unavailable. A single regional Azure Application Gateway deployment cannot inherently provide regional outage resilience across multiple regions. To ensure access during a regional outage, you would need a solution that operates at a global level and can route traffic away from a failed region to a healthy one. Azure Application Gateway alone does not provide this cross-region failover capability. Thus, this requirement is not met.
Conclusion:
While Azure Application Gateway effectively supports rate limiting and load balancing within a region, it is a regional service and falls short of meeting the requirements of balancing requests across instances in multiple regions and ensuring access during a regional outage when used as a standalone solution. To achieve cross-region capabilities, typically a global service like Azure Front Door would be placed in front of regional Application Gateways. However, the proposed solution only mentions “Azure Application Gateway” without mentioning a global service in front.
Therefore, the solution, as stated, does not meet the goal of providing a multi-region access solution that fulfills all the requirements.
Final Answer: The final answer is
B
You have an Azure subscription that contains the resources shown in the following table.
Name Type Description
VNet1 Virtual network None
LB1 Public load balancer Includes a backend pool named BP1
VMSS1 Azure Virtual Machine Scale Sets Included in BP1; connected to VNet1
NVA1 Network virtual appliance (NVA) Connected to VNet1; performs security filtering of traffic for VMSS1
NVA2 Network virtual appliance (NVA) Connected to VNet1; performs security filtering of traffic for VMSS1
You need to recommend a load balancing solution that will distribute incoming traffic for VMSS1 across NVA1 and NVA2. The solution must minimize administrative effort.
What should you include in the recommendation?
A. Gateway Load Balancer
B. Azure Front Door
C. Azure Application Gateway
D. Azure Traffic Manager
The requirement is to distribute incoming traffic for VMSS1 across NVA1 and NVA2 within the same Azure Virtual Network (VNet1), with minimal administrative effort. The NVAs are performing security filtering for VMSS1. Let’s evaluate each option based on its suitability for this scenario:
A. Gateway Load Balancer:
Purpose: Gateway Load Balancer is specifically designed to simplify the deployment and management of Network Virtual Appliances (NVAs) in Azure. It allows you to chain NVAs to your application traffic path without complex User Defined Routes (UDRs).
Functionality: It provides a single entry point for traffic, and can distribute this traffic across a pool of NVAs. This perfectly matches the requirement of distributing traffic for VMSS1 across NVA1 and NVA2.
Administrative Effort: Gateway Load Balancer is designed to minimize administrative effort by simplifying the NVA integration. It removes the need for complex UDR configurations to force traffic through NVAs.
B. Azure Front Door:
Purpose: Azure Front Door is a global, scalable web application acceleration and load balancing service. It’s designed for HTTP/HTTPS traffic and global routing, typically used for improving performance and availability of web applications across different regions.
Functionality: While Front Door can load balance, it’s intended for internet-facing web applications and global distribution. It’s not the appropriate tool for load balancing traffic within a VNet to NVAs for security filtering of VMSS traffic. It adds unnecessary complexity and is not optimized for this internal VNet scenario.
C. Azure Application Gateway:
Purpose: Azure Application Gateway is a web traffic load balancer for managing traffic to web applications. It operates at Layer 7 and offers features like SSL termination, WAF, and URL-based routing.
Functionality: Application Gateway is primarily used for load balancing web traffic to application servers. While it can load balance VMs, using it solely to distribute traffic to NVAs for security filtering within the same VNet is not its primary use case and is less efficient than Gateway Load Balancer for this particular scenario. It’s more complex than needed for simple NVA load balancing.
D. Azure Traffic Manager:
Purpose: Azure Traffic Manager is a DNS-based traffic routing service. It directs traffic based on DNS resolution and routing methods (Performance, Priority, etc.).
Functionality: Traffic Manager is used for global traffic management, directing users to different endpoints based on health and routing policies. It operates at the DNS layer and is not suitable for load balancing network traffic within a VNet to NVAs for traffic inspection. It is not designed for this type of internal load balancing scenario.
Conclusion:
Considering the specific requirement to load balance traffic across NVAs for VMSS1 within the same VNet and minimize administrative effort, Gateway Load Balancer is the most appropriate and purpose-built solution. It is designed exactly for this NVA integration scenario, simplifying the traffic routing and minimizing configuration complexity compared to other load balancing options.
Final Answer: The final answer is
A
HOTSPOT
–
You have the Azure subscriptions shown in the following table.
Name Location Azure AD tenant
Sub1 East US contoso.onmicrosoft.com
Sub2 East US contoso-recovery.onmicrosoft.com
Contoso.onmicrosoft.com contains a user named User1.
You need to deploy a solution to protect against ransomware attacks. The solution must meet the following requirements:
- Ensure that all the resources in Sub1 are backed up by using Azure Backup.
- Require that User1 first be assigned a role for Sub2 before the user can make major changes to the backup configuration.
What should you create in each subscription? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Sub1:
A Recovery Services vault
A Resource Guard
An Azure Site Recovery job
Microsoft Azure Backup Server (MABS)
The Microsoft Azure Recovery Services (MARS) agent
Sub2:
A Recovery Services vault
A Resource Guard
An Azure Site Recovery job
Microsoft Azure Backup Server (MABS)
The Microsoft Azure Recovery Services (MARS) agent
For Sub1, the most critical component to enable backups is A Recovery Services vault.
Why A Recovery Services vault is the closest and best single option for Sub1:
Core Backup Service: Azure Backup is fundamentally managed through a Recovery Services vault. It’s the central administrative entity where you configure backup policies, initiate backups, and manage recovery points for your resources.
Enabling Backups: Without a Recovery Services vault in Sub1, you cannot use Azure Backup to protect the resources within that subscription. It’s the prerequisite for fulfilling the requirement “Ensure that all the resources in Sub1 are backed up by using Azure Backup.”
Other Options are Secondary or Incorrect in this context:
Resource Guard: While Resource Guard enhances security and is part of a robust ransomware protection strategy, it protects the Recovery Services vault. You need the vault first. Resource Guard without a vault doesn’t enable backups.
Azure Site Recovery job: Azure Site Recovery is for disaster recovery (DR), not backup in the sense of regular data protection against ransomware. It’s a different service with a different purpose.
Microsoft Azure Backup Server (MABS) and The Microsoft Azure Recovery Services (MARS) agent: These are used for backing up on-premises resources or specific Azure VMs. The question states “all the resources in Sub1,” implying a broader Azure-native backup strategy, not just VM-level or on-premises backups.
For Sub2, the most critical component to enforce the authorization requirement is A Resource Guard.
Why A Resource Guard is the closest and best single option for Sub2:
Enforcing Authorization Control: The primary purpose of Resource Guard in this scenario is to “Require that User1 first be assigned a role for Sub2 before the user can make major changes to the backup configuration.” Resource Guard is specifically designed to enforce multi-user authorization and other security measures for Recovery Services vaults.
Cross-Subscription Authorization: Placing Resource Guard in Sub2, a separate Azure AD tenant, is key to enforcing the requirement that User1 needs permissions in Sub2 to affect backups in Sub1. This cross-subscription/tenant control is the core of the security enhancement.
Other Options are Irrelevant in this context:
A Recovery Services vault, An Azure Site Recovery job, Microsoft Azure Backup Server (MABS), and The Microsoft Azure Recovery Services (MARS) agent: These options in Sub2 do not directly contribute to the requirement of enforcing authorization control over backup configurations in Sub1. Sub2 is acting as a security administration subscription in this scenario, and Resource Guard is the component that enables that security function.
In summary, if you can only select ONE option for each subscription:
Sub1: A Recovery Services vault (because it’s the fundamental component for Azure Backup).
Sub2: A Resource Guard (because it directly enforces the authorization requirement).
HOTSPOT
–
You have 10 on-premises servers that run Windows Server.
You need to perform daily backups of the servers to a Recovery Services vault. The solution must meet the following requirements:
- Back up all the files and folders on the servers.
- Maintain three copies of the backups in Azure.
- Minimize costs.
What should you configure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
On the servers:
The Azure Site Recovery Mobility service
The Microsoft Azure Recovery Services (MARS) agent
Volume Shadow Copy Service (VSS)
For the storage:
Geo-redundant storage (GRS)
Locally-redundant storage (LRS)
Zone-redundant storage (ZRS)
Let’s analyze each option for the Hotspot question to determine the best single choice for “On the servers” and “For the storage” based on the requirements:
On the servers - Best Single Option: The Microsoft Azure Recovery Services (MARS) agent
Why correct: The Microsoft Azure Recovery Services (MARS) agent, also known as the Azure Backup agent, is specifically designed for backing up files, folders, and system state from on-premises Windows servers directly to an Azure Recovery Services vault. This directly addresses the requirement to “Back up all the files and folders on the servers.” It is the correct agent to install on the servers to enable Azure Backup for files and folders.
Why other options are less suitable as single choices:
The Azure Site Recovery Mobility service: This service is for Azure Site Recovery (ASR), used for replicating entire VMs for disaster recovery, not file/folder level backups for daily operations as specified in the question. It’s the wrong tool for the stated backup requirement.
Volume Shadow Copy Service (VSS): VSS is a Windows technology, not an Azure component that you directly “configure” for backups to Azure. VSS is used by backup applications (including the MARS agent) to ensure consistent backups, but it’s an underlying service, not the primary component to select for enabling backups to Azure.
For the storage - Best Single Option: Locally-redundant storage (LRS)
Why correct: Locally-redundant storage (LRS) is the lowest-cost storage redundancy option in Azure. It replicates your data three times within a single data center. This directly and minimally meets the requirement to “Maintain three copies of the backups in Azure” while also fulfilling the requirement to “Minimize costs.” LRS is sufficient for protecting against hardware failures within a data center.
Why other options are less suitable as single choices:
Geo-redundant storage (GRS): GRS provides higher redundancy and resilience by replicating data to a secondary region, maintaining six copies in total. However, GRS is significantly more expensive than LRS. The requirement is to minimize costs. GRS is overkill for the stated scenario and contradicts the cost minimization goal. While it provides more than three copies, LRS is sufficient and cheaper.
Zone-redundant storage (ZRS): ZRS replicates data across three availability zones within a region, providing higher availability than LRS and protection against datacenter failures within a region. However, ZRS is also more expensive than LRS. For simple daily backups aiming to minimize costs, and where datacenter-level redundancy might be considered sufficient, LRS is the more cost-effective and thus better single choice given the “minimize costs” requirement. While ZRS also maintains three copies, LRS is cheaper.
HOTSPOT
–
You plan to deploy a containerized web-app that will be hosted in five Azure Kubernetes Service (AKS) clusters. Each cluster will be hosted in a different Azure region.
You need to provide access to the app from the internet. The solution must meet the following requirements:
- Incoming HTTPS requests must be routed to the cluster that has the lowest network latency.
- HTTPS traffic to individual pods must be routed via an ingress controller.
- In the event of an AKS cluster outage, failover time must be minimized.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
For global load balancing:
Azure Front Door
Azure Traffic Manager
Cross-region load balancing in Azure
Standard Load Balancer
As the ingress controller:
Azure Application Gateway
Azure Standard Load Balancer
Basic Azure Load Balancer
Let’s analyze each requirement and the suitability of the provided options for global load balancing and ingress controller.
Requirements:
Latency-based Routing (HTTPS): Incoming HTTPS requests must be routed to the AKS cluster with the lowest network latency.
Ingress Controller Routing (HTTPS to Pods): HTTPS traffic to individual pods must be routed via an ingress controller.
Minimal Failover Time: In the event of an AKS cluster outage, failover time must be minimized.
For Global Load Balancing:
Azure Front Door:
Latency-based Routing: Yes. Azure Front Door’s “Performance routing” is specifically designed to route traffic to the backend with the lowest latency. This perfectly matches the first requirement.
Multi-region & Global: Yes. Azure Front Door is a global service and is designed for distributing traffic across multiple regions, fitting the multi-AKS cluster scenario.
Failover: Yes. Azure Front Door provides automatic failover to the next closest healthy backend in case of an outage, minimizing failover time.
HTTPS: Yes. Azure Front Door is designed for handling HTTPS traffic, including SSL termination and routing.
Azure Traffic Manager:
Latency-based Routing: Yes. Azure Traffic Manager can use the “Performance” routing method to direct traffic to the endpoint with the lowest latency.
Multi-region & Global: Yes. Azure Traffic Manager is a global, DNS-based traffic manager suitable for multi-region applications.
Failover: Yes. Azure Traffic Manager provides automatic failover by monitoring endpoint health and redirecting traffic away from failed endpoints.
HTTPS: Yes. Azure Traffic Manager works with HTTPS, although it is DNS-based and does not perform SSL termination itself.
Cross-region load balancing in Azure: Azure does offer a cross-region (global) tier of Standard Load Balancer, but it operates at Layer 4 and distributes connections to regional Standard Load Balancer front ends. It does not terminate HTTPS, route on Layer 7 rules, or integrate with an ingress controller, so it is not the right fit for this scenario.
Standard Load Balancer:
Latency-based Routing: No. Azure Standard Load Balancer is a regional load balancer. It does not provide global, latency-based routing across different Azure regions. It balances traffic within a region.
Multi-region & Global: No. Azure Standard Load Balancer is a regional service.
Failover: No. Azure Standard Load Balancer provides high availability within a region, but not cross-region failover.
Conclusion for Global Load Balancing: Azure Front Door is the superior choice because it directly addresses all the requirements for global load balancing, especially latency-based routing and fast failover in a multi-region AKS setup. Azure Traffic Manager is also a valid option, but Front Door is often preferred for web applications requiring Layer 7 features and faster failover in web scenarios.
As the Ingress Controller:
Azure Application Gateway:
Ingress Controller: Yes. Azure Application Gateway can be used as an ingress controller for AKS, especially with the Azure Application Gateway Ingress Controller (AGIC).
HTTPS Routing to Pods: Yes. Application Gateway can handle HTTPS termination and route traffic to pods based on ingress rules (path-based, host-based routing).
Azure Standard Load Balancer:
Ingress Controller (Indirectly): Yes, Azure Standard Load Balancer can act as the service load balancer in front of a software-based ingress controller (like Nginx Ingress Controller or Traefik) in AKS. However, it is not an ingress controller itself. It’s the underlying Layer 4 load balancer that exposes the ingress controller service. It does not provide the Layer 7 routing and HTTPS termination features required of an ingress controller for routing to pods based on HTTP rules.
Basic Azure Load Balancer:
Ingress Controller (Indirectly): Yes, similar to Standard Load Balancer, but Basic Load Balancer is less feature-rich, has limitations, and is not generally recommended for production AKS ingress scenarios.
Conclusion for Ingress Controller: Azure Application Gateway is the best choice as the ingress controller because it is a fully managed Layer 7 load balancer that can directly act as an ingress controller for AKS, providing HTTPS termination, advanced routing, and integration through AGIC. While Standard Load Balancer is necessary at a lower layer for exposing services, Application Gateway is the appropriate choice for the ingress controller role as per the requirements.
Final Answer:
For global load balancing: Azure Front Door
As the ingress controller: Azure Application Gateway
HOTSPOT
–
You have an Azure subscription.
You create a storage account that will store documents.
You need to configure the storage account to meet the following requirements:
- Ensure that retention policies are standardized across the subscription.
- Ensure that data can be purged if the data is copied to an unauthorized location.
Which two settings should you enable? To answer, select the appropriate settings in the answer area.
NOTE: Each correct selection is worth one point.
Recovery
Enable operational backup with Azure Backup
Enable point-in-time restore for containers
Enable soft delete for blobs
Enable soft delete for containers
Enable permanent delete for soft deleted items
Tracking
Enable versioning for blobs
Enable blob change feed
Access control
Enable version-level immutability support
Correct Answer:
Recovery: Enable soft delete for blobs
Access control: Enable version-level immutability support
Explanation:
- Ensure that retention policies are standardized across the subscription.
Enable version-level immutability support helps achieve this. With version-level immutability enabled, a default time-based retention policy can be configured at the storage account or container level and is inherited by the blob versions within it, so the same retention rules can be applied consistently; combined with Azure Policy, that configuration can be standardized across every storage account in the subscription.
- Ensure that data can be purged if the data is copied to an unauthorized location.
This requirement is a bit tricky, as Azure Storage doesn’t have a built-in feature to automatically purge data copied elsewhere. However, by combining immutability with soft delete, you can achieve a similar outcome:
Enable version-level immutability support: If immutability is enabled, even if data is copied, the original data within the storage account will be protected by the retention policy. It cannot be deleted or modified until the policy allows it.
Enable soft delete for blobs: Soft delete adds a safety net. If someone tries to delete a blob that’s protected by immutability, it will go into a soft-deleted state instead of being permanently deleted immediately. This gives you a window to recover the data if needed, but also allows for eventual deletion if the retention period of the soft-deleted blob expires.
Why other options are incorrect:
Enable operational backup with Azure Backup: While essential for disaster recovery, it doesn’t directly standardize retention policies or enable purging based on unauthorized copies.
Enable point-in-time restore for containers: This allows restoring containers to a previous state, but doesn’t address retention standardization or unauthorized copy purging.
Enable soft delete for containers: Similar to soft delete for blobs but applied to entire containers. It doesn’t address the core requirements as effectively as blob-level soft delete combined with immutability.
Enable permanent delete for soft deleted items: This feature accelerates the deletion process but is contrary to the requirement of being able to purge in specific cases of unauthorized copying.
Enable versioning for blobs: Useful for tracking changes but doesn’t ensure standardized retention or provide a direct mechanism for purging.
Enable blob change feed: Provides a log of changes to blobs, helpful for auditing but not directly related to the requirements.
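For reference, blob soft delete can also be turned on programmatically. A minimal sketch with the azure-storage-blob SDK, assuming the account connection string is available in an environment variable; version-level immutability support itself is an account/container property that is typically enabled at creation time through the portal, CLI, or azure-mgmt-storage.

    import os

    from azure.storage.blob import BlobServiceClient, RetentionPolicy

    # The connection string is assumed to be available in an environment variable.
    service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])

    # Turn on blob soft delete with a 14-day retention window (illustrative value).
    service.set_service_properties(
        delete_retention_policy=RetentionPolicy(enabled=True, days=14)
    )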
You have SQL Server on an Azure virtual machine. The databases are written to nightly as part of a batch process.
You need to recommend a disaster recovery solution for the data. The solution must meet the following requirements:
✑ Provide the ability to recover in the event of a regional outage.
✑ Support a recovery time objective (RTO) of 15 minutes.
✑ Support a recovery point objective (RPO) of 24 hours.
✑ Support automated recovery.
✑ Minimize costs.
What should you include in the recommendation?
A. Azure virtual machine availability sets
B. Azure Disk Backup
C. an Always On availability group
D. Azure Site Recovery
The best recommendation to meet the requirements is D. Azure Site Recovery.
Here’s why:
Azure Site Recovery
Regional Outage Recovery: Azure Site Recovery replicates your virtual machine to a secondary Azure region. In case of a regional outage in the primary region, you can fail over to the secondary region, ensuring business continuity.
RTO of 15 minutes: Azure Site Recovery can achieve an RTO of 15 minutes or even less, depending on the configuration and the size of the virtual machine. This meets the requirement.
RPO of 24 hours: Azure Site Recovery continuously replicates the VM's disks and creates crash-consistent recovery points every few minutes, with app-consistent recovery points on a configurable schedule. For a database that is only written to by a nightly batch process, this comfortably meets an RPO of 24 hours.
Automated Recovery: Azure Site Recovery provides automated failover and failback capabilities. You can define recovery plans that automate the entire failover process, including starting up VMs in the correct order and configuring network settings.
Cost-Effectiveness: While there are costs associated with Azure Site Recovery (replication storage, compute during failover), it’s generally more cost-effective than maintaining a full-fledged, continuously running secondary environment as you would with an Always On availability group, especially when your RPO is 24 hours. You only pay for the full compute resources when a failover occurs.
Why other options are less suitable:
A. Azure virtual machine availability sets
Regional Outage Recovery: Availability sets protect against hardware failures within a single Azure region. They do not provide protection against regional outages.
RTO, RPO, Automation: They don’t directly contribute to RTO/RPO goals or automated recovery in the context of a regional disaster.
Cost: They don’t incur additional costs but don’t address the core DR requirements.
B. Azure Disk Backup
Regional Outage Recovery: While Azure Disk Backup can be configured for cross-region backups, it’s primarily a backup solution, not a disaster recovery solution with a focus on quick recovery.
RTO of 15 minutes: Achieving an RTO of 15 minutes with disk backups would be challenging, as it would require restoring the disks and then attaching them to a new VM, which takes time.
RPO of 24 hours: Easily achievable with scheduled backups.
Automated Recovery: Automation of a full VM recovery from disk backups is less streamlined compared to Azure Site Recovery.
Cost: Relatively cost-effective for backup purposes, but not ideal for the required RTO.
C. an Always On availability group
Regional Outage Recovery: Always On availability groups can provide high availability and disaster recovery across regions by setting up a secondary replica in a different region.
RTO of 15 minutes: Possible with synchronous replication, but potentially more challenging with asynchronous replication which is often preferred for cross-region setups to avoid performance impact.
RPO of 24 hours: Achievable, even with asynchronous replication between regions, if configured appropriately. However, for an RPO of 24 hours, synchronous replication is not required, and would add unnecessary cost and complexity in this case.
Automated Recovery: Supports automated failover.
Cost: Can be expensive, especially for cross-region setups, as it requires running a secondary SQL Server instance continuously. This is not cost-effective if the main workload is a nightly batch process.
HOTSPOT –
You plan to deploy the backup policy shown in the following exhibit.
Policy 1
Backup schedule
*Frequency: Daily
*Time: 6:00 PM
*Timezone: (UTC) Coordinated Universal Time
Instant Restore
- Retain instant recovery snapshot(s) for 3 Day(s)
Retention range
- Retention of daily backup point.
*At: 6:00 PM For 90 Day(s)
- Retention of weekly backup point.
*On: Sunday
*At: 6:00 PM For 26 Week(s)
- Retention of monthly backup point.
Week Based / Day Based
*On: First
*Day: Sunday
*At: 6:00 PM For 36 Month(s)
- Retention of yearly backup point: Not Configured
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Virtual machines that are backed up by using the
policy can be recovered for up to a maximum of
[answer choice]:
90 days
26 weeks
36 months
45 months
The minimum recovery point objective (RPO) for
virtual machines that are backed up by using the
policy is [answer choice]:
1 hour
1 day
1 week
1 month
1 year
Answer Area:
Virtual machines that are backed up by using the policy can be recovered for up to a maximum of 36 months.
The minimum recovery point objective (RPO) for virtual machines that are backed up by using the policy is 1 day.
Explanation:
- Maximum Recovery Time:
The policy defines several retention periods:
Instant Restore: 3 days (this is for quick, operational recovery, not long-term)
Daily: 90 days
Weekly: 26 weeks (approximately 6 months)
Monthly: 36 months
Yearly: Not configured
The longest retention period determines the maximum time you can go back to recover a virtual machine. In this case, it’s the monthly backup, which is retained for 36 months.
- Minimum Recovery Point Objective (RPO):
RPO represents the maximum amount of data loss that is acceptable in a disaster. It’s essentially how far back in time your backups go.
The policy has the following backup frequencies:
Daily: Backups are taken every day at 6:00 PM.
Weekly: Backups are taken every Sunday at 6:00 PM.
Monthly: Backups are taken on the first Sunday of every month at 6:00 PM.
The most frequent backup defines the minimum RPO. Here, backups are taken once per day at 6:00 PM, so in the worst case (a failure just before the scheduled backup) you could lose up to 24 hours of data. Therefore, the minimum RPO is 1 day.
Why other options are incorrect:
Maximum Recovery Time:
90 days: This is only the daily retention, not the maximum.
26 weeks: This is the weekly retention, shorter than the monthly retention.
45 months: There’s no 45-month retention configured.
Minimum RPO:
1 hour: The policy doesn’t back up hourly.
1 week: Weekly backups don’t provide a 1-week RPO; daily backups do.
1 month: Monthly backups have a larger RPO.
1 year: Yearly backups are not configured, and even if they were, they would have the largest RPO.
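The retention and RPO reasoning above can be summarized in a few lines of Python; the values are taken from the policy, with the month length approximated.

    from datetime import timedelta

    # Retention ranges from the policy (the month length is approximated).
    retention = {
        "instant restore": timedelta(days=3),
        "daily": timedelta(days=90),
        "weekly": timedelta(weeks=26),
        "monthly": timedelta(days=36 * 30),
    }
    backup_interval = timedelta(days=1)  # one backup per day at 6:00 PM UTC

    max_recovery_window = max(retention.values())  # ~36 months (the monthly retention)
    min_rpo = backup_interval                      # 1 day (the most frequent backup)
    print(max_recovery_window, min_rpo)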
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
✑ Provide access to the full .NET framework.
✑ Provide redundancy if an Azure region fails.
✑ Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy two Azure virtual machines to two Azure regions, and you deploy an Azure Application Gateway.
Does this meet the goal?
A. Yes
B. No
A. Yes
This solution meets all the stated requirements. Let’s break down why:
Requirements and How the Solution Meets Them:
Provide access to the full .NET framework:
Solution: Azure virtual machines (VMs) allow you to choose the operating system, including Windows Server versions that fully support the .NET framework.
Provide redundancy if an Azure region fails:
Solution: Deploying two VMs in two different Azure regions provides geographical redundancy. If one region experiences an outage, the other region can continue to serve the application. The Application Gateway will handle routing traffic to the healthy region.
Grant administrators access to the operating system to install custom application dependencies:
Solution: Azure VMs give administrators full control over the operating system. They can log in (via RDP for Windows or SSH for Linux) and install any necessary software, including custom dependencies for the application.
Why Azure Application Gateway is Suitable Here:
Load Balancing: It distributes traffic between the two VMs, ensuring that the load is balanced and the application is highly available.
Regional Redundancy: Application Gateway is itself a regional resource, but its backend pool can reference the VMs in both regions by IP address or FQDN. With health probes configured, traffic is sent only to healthy backends, so if the VMs in one region fail, requests are routed to the VMs in the other region. This is key for meeting the redundancy requirement.
Web Application Firewall (WAF): (Optional, but recommended) You can enable the WAF functionality of Application Gateway to provide additional security for your web app.
Other Considerations (Not Explicitly Stated but Important):
Stateless App: The question specifies a stateless web app, which makes this solution even more suitable. Since there’s no shared state between VM instances, you can easily distribute traffic and fail over without worrying about data consistency issues.
Autoscaling (Optional): While not required, you could consider using virtual machine scale sets (VMSS) instead of individual VMs to enable autoscaling based on demand, further enhancing availability and scalability.
Therefore, deploying two Azure VMs to two regions and using an Azure Application Gateway is a valid solution that meets all the specified requirements.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
✑ Provide access to the full .NET framework.
✑ Provide redundancy if an Azure region fails.
✑ Grant administrators access to the operating system to install custom application dependencies.
Solution: You deploy an Azure virtual machine scale set that uses autoscaling.
Does this meet the goal?
A. Yes
B. No
B. No
While a virtual machine scale set (VMSS) with autoscaling offers some benefits, it doesn’t fully meet all the requirements on its own, specifically regarding regional redundancy.
Here’s a breakdown:
Requirements and How the Solution Fails or Succeeds:
Provide access to the full .NET framework:
Success: VMSS allows you to choose a VM image that supports the full .NET framework (e.g., a Windows Server image).
Provide redundancy if an Azure region fails:
Failure: A single VMSS, even with instances spread across availability zones, is still confined to a single Azure region. If that entire region experiences an outage, the VMSS and the application it hosts will become unavailable. To achieve regional redundancy, you would need to deploy the application infrastructure in another region.
Grant administrators access to the operating system to install custom application dependencies:
Success: Similar to individual VMs, administrators can access the underlying VMs in a VMSS to install custom dependencies. This can be done by customizing the VM image used by the scale set or by using extensions or scripts during deployment.
Why Autoscaling Isn’t Sufficient for Regional Redundancy:
Autoscaling primarily focuses on adjusting the number of VM instances within a scale set based on demand. It does not automatically distribute instances across multiple regions.
To achieve regional redundancy with VMSS, you would need to:
Deploy multiple VMSS instances: Create at least two VMSS instances, each in a separate Azure region.
Use a traffic manager: Implement a solution like Azure Traffic Manager or Azure Front Door to distribute traffic between the VMSS instances in different regions. In case of a regional outage, the traffic manager would automatically redirect traffic to the healthy region.
HOTSPOT –
You need to recommend an Azure Storage account configuration for two applications named Application1 and Application2. The configuration must meet the following requirements:
✑ Storage for Application1 must provide the highest possible transaction rates and the lowest possible latency.
✑ Storage for Application2 must provide the lowest possible storage costs per GB.
✑ Storage for both applications must be available in an event of datacenter failure.
✑ Storage for both applications must be optimized for uploads and downloads.
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Application1:
BlobStorage with Standard performance, Hot access tier, and Read-access geo-redundant storage (RA-GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Premium performance and Locally-redundant storage (LRS) replication
General purpose v2 with Standard performance, Hot access tier, and Locally-redundant storage (LRS) replication
Application2:
BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Standard performance and Read-access geo-redundant storage (RA-GRS) replication
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication
Answer Area:
Application1: BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
Application2: General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication
Explanation:
Application1 Requirements:
Highest possible transaction rates and lowest possible latency: This indicates a need for Premium performance tier. Premium storage uses SSDs and is optimized for I/O-intensive workloads.
Available in the event of a datacenter failure: ZRS (Zone-redundant storage) replicates your data synchronously across three availability zones in a single region. This provides high availability and protects against datacenter failures.
Optimized for uploads and downloads: Block blobs are specifically designed for storing and streaming large objects and thus best for upload/download optimization.
Why not other options for Application1:
BlobStorage with Standard performance, Hot access tier, RA-GRS: Standard performance won’t offer the lowest latency.
General purpose v1 with Premium performance, LRS: Premium performance on a general-purpose account applies only to page blobs (unmanaged disks), not to block blobs, so it does not help an upload/download workload. Also, LRS doesn’t protect against datacenter failures.
General purpose v2 with Standard performance, Hot access tier, LRS: Standard performance is not the highest possible, and LRS doesn’t protect against datacenter failures.
Application2 Requirements:
Lowest possible storage costs per GB: This calls for the Cool access tier, which is designed for infrequently accessed data and offers lower storage costs compared to the Hot tier.
Available in the event of a datacenter failure: Read-access geo-redundant storage (RA-GRS) provides redundancy. It replicates your data to a secondary region, and you can read from the secondary region. It is important to note that this option also guarantees that data will be available if there is a data center failure in the primary region.
Optimized for uploads and downloads: General purpose v2 accounts support block blobs, which are ideal for storing and retrieving documents.
Why not other options for Application2:
BlobStorage with Standard performance, Cool access tier, GRS: The legacy BlobStorage account kind does support the Cool access tier, but it is a legacy account type that Microsoft recommends replacing with general-purpose v2. More importantly, GRS does not provide read access to the secondary region, whereas RA-GRS does.
BlockBlobStorage with Premium performance, ZRS: Premium performance is much more expensive and not necessary for cost optimization.
General purpose v1 with Standard performance, RA-GRS: General-purpose v1 accounts are generally being superseded by v2, and it’s recommended to use v2 for new deployments. Also, it does not support cool access tier.
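For illustration, the two recommended account configurations could be provisioned with the azure-mgmt-storage SDK roughly as follows; the subscription ID, resource group, account names, and region are illustrative, and exact model names can vary slightly between SDK versions.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Application1: premium block blobs with zone-redundant replication.
    client.storage_accounts.begin_create(
        "rg-apps", "stapp1premium",
        StorageAccountCreateParameters(
            location="eastus", kind="BlockBlobStorage", sku=Sku(name="Premium_ZRS"),
        ),
    ).result()

    # Application2: general-purpose v2, Cool tier, read-access geo-redundant replication.
    client.storage_accounts.begin_create(
        "rg-apps", "stapp2docs",
        StorageAccountCreateParameters(
            location="eastus", kind="StorageV2", sku=Sku(name="Standard_RAGRS"),
            access_tier="Cool",
        ),
    ).result()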
HOTSPOT –
You plan to develop a new app that will store business critical data. The app must meet the following requirements:
✑ Prevent new data from being modified for one year.
✑ Maximize data resiliency.
✑ Minimize read latency.
What storage solution should you recommend for the app? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Storage Account type:
Premium block blobs
Standard general-purpose v1
Standard general-purpose v2
Redundancy:
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)
Answer Area:
Storage Account type: Standard general-purpose v2
Redundancy: Zone-redundant storage (ZRS)
Explanation:
Storage Account Type:
Standard general-purpose v2: This is the best choice for several reasons:
Immutability Support: General-purpose v2 accounts support blob immutability policies, which are essential for preventing data modification. You can set time-based retention policies to meet the requirement of preventing new data from being modified for one year.
Broad Feature Set: General-purpose v2 accounts provide access to all Azure Storage services (blobs, files, queues, tables), giving you flexibility for future needs.
Cost-Effectiveness: Standard performance is generally more cost-effective than Premium, and for this scenario, the performance of Standard is likely sufficient.
Why not other options:
Premium block blobs: Premium block blob storage accounts are designed for high-performance, low-latency scenarios. While they support immutability, they are significantly more expensive than Standard accounts and are not necessary when the primary requirement is data immutability and not extreme performance.
Standard general-purpose v1: General-purpose v1 accounts are older and lack some features found in v2 accounts, including full support for immutability policies at the account or container level. It’s generally recommended to use v2 for new deployments.
Redundancy:
Zone-redundant storage (ZRS):
Maximizes Data Resiliency: ZRS replicates your data synchronously across three different availability zones within a single region. This provides high availability and protects your data against datacenter failures within that region.
Meets Immutability Requirements: ZRS is compatible with immutability policies, so you can still enforce the one-year data modification restriction.
Why not Locally-redundant storage (LRS):
Lower Resiliency: LRS only replicates your data within a single datacenter. It does not protect against datacenter-level failures, which could lead to data loss.
LRS is therefore the weakest of the two offered redundancy options and does not meet the requirement to maximize data resiliency.
Geo-redundant storage (GRS) or Read-access geo-redundant storage (RA-GRS): These options are not offered in the answer area. They add asynchronous replication to a secondary region, which increases resiliency but does not reduce read latency, because reads are normally served from the primary region. Among the available choices, ZRS maximizes resiliency while keeping reads local to the primary region.
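As a rough sketch of the one-year restriction, a time-based retention policy can be applied to a container with the azure-mgmt-storage SDK; the resource group, account, and container names are illustrative, and the exact parameter shapes may differ slightly between SDK versions.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import ImmutabilityPolicy

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # A 365-day time-based retention policy on the container that holds the app's data.
    client.blob_containers.create_or_update_immutability_policy(
        resource_group_name="rg-app",
        account_name="stbizcritical",
        container_name="businessdata",
        parameters=ImmutabilityPolicy(immutability_period_since_creation_in_days=365),
    )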
HOTSPOT –
You have an Azure web app named App1 and an Azure key vault named KV1.
App1 stores database connection strings in KV1.
App1 performs the following types of requests to KV1:
✑ Get
✑ List
✑ Wrap
✑ Delete
✑ Unwrap
✑ Backup
✑ Decrypt
✑ Encrypt
You are evaluating the continuity of service for App1.
You need to identify the following if the Azure region that hosts KV1 becomes unavailable:
✑ To where will KV1 fail over?
✑ During the failover, which request type will be unavailable?
What should you identify? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
To where will KV1 fail over?
A server in the same availability set
A server in the same fault domain
A server in the paired region
A virtual machine in a scale set
During the failover, which request type will be unavailable?
Get
List
Wrap
Delete
Unwrap
Backup
Decrypt
Encrypt
Answer Area:
To where will KV1 fail over? A server in the paired region
During the failover, which request type will be unavailable? Delete
Explanation:
- Key Vault Failover:
Azure Key Vault automatically fails over to the paired region if the primary region becomes unavailable. Azure regions are paired to provide geo-redundancy for services like Key Vault. You can find a list of region pairs in the Azure documentation.
Why not other options:
A server in the same availability set: Availability sets provide redundancy within a single region, not across regions.
A server in the same fault domain: Fault domains are part of availability sets and also only offer redundancy within a region.
A virtual machine in a scale set: Key Vault is a managed Azure service and not hosted on user-managed VMs or scale sets.
- Request Type Unavailable During Failover:
Write Operations During Failover: When Key Vault fails over to the secondary region, it becomes read-only for a short period. This is because the data needs to be fully synchronized to the secondary region before write operations can be safely allowed.
Delete as a Write Operation: The Delete operation is a write operation because it modifies the state of the Key Vault (by removing a secret, key, or certificate). Therefore, Delete operations will be unavailable during the failover.
Other Operations:
Get, List: These are read operations and will be available during the failover once the read replica is ready.
Wrap, Unwrap, Encrypt, Decrypt: These operations rely on keys, and as long as the keys are available for read operations in the secondary region, these crypto operations should continue to work.
Backup: Although Backup produces a protected backup blob, it only reads the object, so it is treated as a read operation and remains available while the vault is in read-only mode during failover. That leaves Delete as the only listed operation that becomes unavailable.
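To make the behavior concrete, the sketch below uses the azure-keyvault-secrets SDK; the vault URI and secret name are illustrative. During a failover the read call keeps working against the read-only replica, while the delete call is the kind of write that is rejected until the primary region recovers.

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    client = SecretClient("https://kv1.vault.azure.net", DefaultAzureCredential())

    # Read operations continue to work against the read-only replica during failover.
    secret = client.get_secret("db-connection-string")

    # A write operation such as delete is rejected while the vault is read-only and
    # has to be retried after the primary region recovers.
    client.begin_delete_secret("db-connection-string")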
DRAG DROP –
Your company identifies the following business continuity and disaster recovery objectives for virtual machines that host sales, finance, and reporting applications in the company’s on-premises data center:
✑ The sales application must be able to fail over to a second on-premises data center.
✑ The reporting application must be able to recover point-in-time data at a daily granularity. The RTO is eight hours.
✑ The finance application requires that data be retained for seven years. In the event of a disaster, the application must be able to run from Azure. The recovery time objective (RTO) is 10 minutes.
You need to recommend which services meet the business continuity and disaster recovery objectives. The solution must minimize costs.
What should you recommend for each application? To answer, drag the appropriate services to the correct applications. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Services
Azure Backup only
Azure Site Recovery and Azure Backup
Azure Site Recovery only
Answer Area
Sales: Service or Services
Finance: Service or Services
Reporting: Service or Services
Final Answer
Select and Place:
Sales: Azure Site Recovery only
Finance: Azure Site Recovery and Azure Backup
Reporting: Azure Backup only
Summary of Why Each is Correct
Sales: ASR handles on-premises-to-on-premises failover, meeting the requirement without extra services, thus minimizing costs.
Finance: ASR ensures the app can run in Azure with a 10-minute RTO, and Backup meets the seven-year retention need—both are essential.
Reporting: Backup alone provides daily point-in-time recovery within the eight-hour RTO, avoiding the cost of ASR, which isn’t needed.
HOTSPOT
–
You have an on-premises Microsoft SQL Server database named SQL1.
You plan to migrate SQL1 to Azure.
You need to recommend a hosting solution for SQL1. The solution must meet the following requirements:
- Support the deployment of multiple secondary, read-only replicas.
- Support automatic replication between primary and secondary replicas.
- Support failover between primary and secondary replicas within a 15-minute recovery time objective (RTO).
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Azure service or service tier:
Azure SQL Database
Azure SQL Managed Instance
The Hyperscale service tier
Replication mechanism:
Active geo-replication
Auto-failover groups
Standard geo-replication
Final Answer
Answer Area:
Azure service or service tier: Azure SQL Managed Instance
Replication mechanism: Auto-failover groups
Why Correct?
Azure SQL Managed Instance: This service is tailored for migrating on-premises SQL Server databases to Azure with minimal changes. The Business Critical tier includes a built-in readable secondary replica, and a failover group adds a readable geo-secondary, so read-only replicas are available. Replication between the primary and its secondaries is automatic, and failover completes well within a 15-minute RTO. It is also a better fit for a full SQL Server migration than a single Azure SQL Database, and offers broader compatibility than the Hyperscale tier alone.
Auto-failover Groups: This mechanism builds on geo-replication to provide automatic replication and failover, ensuring an RTO of less than 1 minute (well within 15 minutes). It simplifies disaster recovery by automating the process and maintaining connection strings, aligning with AZ-305 exam best practices for high availability and resilience.
HOTSPOT
–
You have two on-premises Microsoft SQL Server 2017 instances that host an Always On availability group named AG1. AG1 contains a single database named DB1.
You have an Azure subscription that contains a virtual machine named VM1. VM1 runs Linux and contains a SQL Server 2019 instance.
You need to migrate DB1 to VM1. The solution must minimize downtime on DB1.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Prepare for the migration by:
Adding a secondary replica to AG1
Creating an Always On availability group on VM1
Upgrading the on-premises SQL Server instances
Perform the migration by using:
A distributed availability group
Azure Migrate
Log shipping
Answer Area:
Prepare for the migration by: Adding a secondary replica to AG1
Perform the migration by using: A distributed availability group
Explanation:
- Prepare for the migration by: Adding a secondary replica to AG1
Why this is necessary: To use a distributed availability group (which is the best method for minimal downtime migration in this scenario), you need to first add the Azure SQL Server instance (VM1) as a secondary replica to your existing on-premises availability group (AG1). This establishes the initial synchronization and replication between the on-premises environment and Azure.
Why other options are incorrect:
Creating an Always On availability group on VM1: You don’t need to create a separate availability group on VM1 initially. The distributed availability group will span both the on-premises AG and the Azure instance.
Upgrading the on-premises SQL Server instances: While it’s generally a good practice to keep your SQL Server instances up-to-date, it’s not strictly necessary for this migration. The key is that the Azure SQL Server instance (SQL Server 2019) is a later version than the on-premises instances (SQL Server 2017), which is the case here. Distributed availability groups support migration from older to newer SQL Server versions.
- Perform the migration by using: A distributed availability group
Why this is the best approach: A distributed availability group extends an existing Always On availability group across two separate availability groups: one on-premises and one in Azure. This setup provides a minimal-downtime migration path.
Synchronization: Once the Azure replica is added to AG1, data is continuously synchronized from the on-premises primary to the Azure secondary.
Failover: When you’re ready to migrate, you can perform a planned failover within the distributed availability group, making the Azure replica the new primary. This failover process is typically very quick, minimizing downtime.
Cutover: After the failover, the Azure SQL Server instance becomes the primary, and the on-premises instances can be removed from the configuration.
Why other options are incorrect:
Azure Migrate: Azure Migrate is a great tool for assessing and migrating entire servers or applications to Azure. However, for a minimal-downtime migration of a single database within an existing availability group, a distributed availability group is more efficient.
Log shipping: Log shipping is a more traditional method for database migration that involves backing up transaction logs on the primary and restoring them on the secondary. While it can work, it typically results in more downtime compared to using a distributed availability group.
Steps for Minimal Downtime Migration with a Distributed Availability Group:
Add VM1 as a Secondary Replica to AG1: Add the Azure SQL Server instance (VM1) as a secondary replica to your existing on-premises Always On availability group (AG1).
Create a Distributed Availability Group: Create a distributed availability group that spans AG1 (on-premises) and a second availability group that contains VM1 as its primary replica. Because VM1 runs SQL Server 2019 on Linux, that second availability group is typically created with Transact-SQL as a clusterless availability group (CLUSTER_TYPE = NONE).
Monitor Synchronization: Ensure that the data is being synchronized properly between the on-premises primary and the Azure replica.
Planned Failover: Perform a planned manual failover within the distributed availability group to make the Azure SQL Server instance (VM1) the new primary.
Remove On-Premises Instances: Once the failover is complete and you’ve verified that everything is working correctly, you can remove the on-premises SQL Server instances from the availability group and the distributed availability group.
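Once the availability group on VM1 (AG2 below) exists, the distributed availability group is defined with Transact-SQL on the current on-premises primary of AG1, and a matching ALTER AVAILABILITY GROUP ... JOIN statement is then run on VM1. A rough sketch, executed here through pyodbc; the server, listener, and availability group names are illustrative, and the endpoint port assumes the default 5022.

    import pyodbc

    # Run on the current on-premises primary replica of AG1 (server name is illustrative).
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql-onprem-1;"
        "DATABASE=master;Trusted_Connection=yes;",
        autocommit=True,
    )

    conn.execute("""
    CREATE AVAILABILITY GROUP [DAG1]
       WITH (DISTRIBUTED)
       AVAILABILITY GROUP ON
          'AG1' WITH (LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
                      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = MANUAL,
                      SEEDING_MODE = AUTOMATIC),
          'AG2' WITH (LISTENER_URL = 'tcp://vm1.contoso.com:5022',
                      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = MANUAL,
                      SEEDING_MODE = AUTOMATIC);
    """)

At cutover time, the distributed availability group is failed over so that AG2 (and therefore VM1) becomes the primary.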
HOTSPOT
–
You are building an Azure web app that will store the Personally Identifiable Information (PII) of employees.
You need to recommend an Azure SQL Database solution for the web app. The solution must meet the following requirements:
- Maintain availability in the event of a single datacenter outage.
- Support the encryption of specific columns that contain PII.
- Automatically scale up during payroll operations.
- Minimize costs.
What should you include in the recommendations? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Service tier and compute tier:
Business Critical service tier and Serverless compute tier
General Purpose service tier and Serverless compute tier
Hyperscale service tier and Provisioned compute tier
Encryption method:
Always Encrypted
Microsoft SQL Server and database encryption keys
Transparent Data Encryption (TDE)
Answer Area:
Service tier and compute tier: General Purpose service tier and Serverless compute tier
Encryption method: Always Encrypted
Explanation:
- Service Tier and Compute Tier:
General Purpose service tier and Serverless compute tier:
Availability: The General Purpose service tier, when configured with zone redundancy, provides high availability and can withstand a single datacenter outage by automatically failing over to a secondary replica in a different availability zone.
Automatic Scaling: The Serverless compute tier automatically scales the database up or down based on workload demands. This is ideal for situations with variable workloads like payroll operations, where you need more resources during specific periods.
Cost Minimization: Serverless is very cost-effective because you only pay for the compute resources used when the database is active. When the database is idle, you’re only charged for storage.
Why not other options:
Business Critical: Business Critical is designed for mission-critical workloads with the highest availability and performance requirements. It’s more expensive than General Purpose and not necessary for this scenario where cost minimization is a priority.
Hyperscale: Hyperscale is suitable for very large databases (over 100 TB) and high-throughput scenarios. It’s not the most cost-effective option for this use case.
Provisioned compute tier: The Provisioned compute tier requires you to pre-allocate a fixed amount of compute resources, which can lead to overspending if the workload is variable.
- Encryption Method:
Always Encrypted:
Column-Level Encryption: Always Encrypted allows you to encrypt specific columns within a table, such as those containing PII. This is ideal for protecting sensitive data while still allowing other parts of the database to be accessed without decryption.
Client-Side Encryption: The encryption and decryption keys are managed on the client-side (e.g., in the web app), meaning that the data is encrypted before it even reaches the Azure SQL Database server. This provides an extra layer of security because the database server and administrators do not have access to the unencrypted data.
Why not other options:
Microsoft SQL Server and database encryption keys: This refers to server-side key management of the kind used for database-level encryption. Because the database engine (and therefore its administrators) can access the plaintext data, it does not provide the column-level protection required for PII.
Transparent Data Encryption (TDE): TDE encrypts the entire database at rest, including all files and backups. While it’s a good security measure, it’s not granular enough for this requirement, which specifies encrypting only specific columns.
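Because Always Encrypted keys are handled on the client side, the web app only has to opt in through its database driver. A minimal sketch with pyodbc, assuming ODBC Driver 17 for SQL Server is installed and the column master key is accessible to the app identity; the server, database, credentials, and column names are illustrative.

    import pyodbc

    # ColumnEncryption=Enabled makes the ODBC driver encrypt and decrypt Always
    # Encrypted columns transparently; all other values are illustrative.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=tcp:payroll-sql.database.windows.net,1433;"
        "DATABASE=hrdb;UID=<app-user>;PWD=<password>;"
        "Encrypt=yes;ColumnEncryption=Enabled;"
    )
    row = conn.execute(
        "SELECT EmployeeId, NationalIdNumber FROM dbo.Employees"
    ).fetchone()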
You plan to deploy an Azure Database for MySQL flexible server named Server1 to the East US Azure region.
You need to implement a business continuity solution for Server1. The solution must minimize downtime in the event of a failover to a paired region.
What should you do?
A. Create a read replica.
B. Store the database files in Azure premium file shares.
C. Implement Geo-redundant backup.
D. Configure native MySQL replication.
Recommended Solution:
A. Create a read replica.
Why Correct:
Minimize Downtime: Among the options, a read replica in the paired region (e.g., West US) offers the lowest downtime for cross-region failover. After a failure in East US, you manually promote the replica to primary, typically taking a few minutes (e.g., 5-10 minutes, depending on replication lag and promotion speed). This is significantly faster than restoring from backups (hours) and aligns with “minimize downtime” better than other choices.
Paired Region Support: Read replicas can be deployed in a paired region, fulfilling the DR requirement. Azure handles replication asynchronously, ensuring data is near-current (minimal RPO).
Managed Feature: Flexible Server’s read replica is a native Azure feature, requiring no manual MySQL configuration, making it practical and supported.
AZ-305 Fit: Reflects the exam’s focus on leveraging Azure-managed features for business continuity (e.g., HA/DR) while optimizing RTO and RPO.
HOTSPOT
–
You are designing a data analytics solution that will use Azure Synapse and Azure Data Lake Storage Gen2.
You need to recommend Azure Synapse pools to meet the following requirements:
* Ingest data from Data Lake Storage into hash-distributed tables.
* Implement query, and update data in Delta Lake.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Ingest data from Data Lake Storage into hash-distributed tables:
A dedicated SQL pool
A serverless Apache Spark pool
A serverless SQL pool
Implement, query, and update data in Delta Lake:
A dedicated SQL pool
A serverless Apache Spark pool
A serverless SQL pool
Final Answer
Ingest data from Data Lake Storage into hash-distributed tables: A dedicated SQL pool
Implement, query, and update data in Delta Lake: A serverless Apache Spark pool
Ingest data from Data Lake Storage into hash-distributed tables
Correct Answer: A dedicated SQL pool
Why it’s correct:
In Azure Synapse Analytics, a dedicated SQL pool (formerly known as SQL Data Warehouse) is designed for structured data workloads and supports hash-distributed tables. These tables are a key feature of dedicated SQL pools, allowing data to be distributed across nodes using a hash function for efficient querying and scalability.
The requirement specifies ingesting data from Azure Data Lake Storage Gen2 into hash-distributed tables, which implies loading data into a structured, relational format optimized for performance. Dedicated SQL pools support PolyBase or COPY statements to efficiently ingest data from Data Lake Storage into these tables.
Implement, query, and update data in Delta Lake
Correct Answer: A serverless Apache Spark pool
Why it’s correct:
Delta Lake is an open-source storage layer that brings ACID transactions, scalability, and reliability to data lakes. In Azure Synapse, Delta Lake is natively supported by Apache Spark pools. A serverless Apache Spark pool allows you to implement, query, and update Delta Lake tables stored in Azure Data Lake Storage Gen2 using Spark SQL, Python, or Scala.
The requirement includes “implement” (create Delta tables), “query” (read data), and “update” (modify data), all of which require a compute engine that supports Delta Lake’s transactional capabilities. Spark pools provide this functionality seamlessly.
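As a rough illustration of the Delta Lake side, the sketch below writes, queries, and updates a Delta table from a Synapse Spark pool; the ADLS Gen2 paths and column names are illustrative.

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()  # returns the Synapse notebook session

    raw_path = "abfss://lake@datalakeacct.dfs.core.windows.net/raw/sales"
    delta_path = "abfss://lake@datalakeacct.dfs.core.windows.net/delta/sales"

    # Implement: create the Delta table from raw files in Data Lake Storage Gen2.
    spark.read.parquet(raw_path).write.format("delta").mode("overwrite").save(delta_path)

    # Query: read the Delta table with Spark SQL.
    spark.read.format("delta").load(delta_path).createOrReplaceTempView("sales")
    spark.sql("SELECT country, SUM(amount) AS total FROM sales GROUP BY country").show()

    # Update: modify rows in place, relying on Delta Lake's ACID transactions.
    DeltaTable.forPath(spark, delta_path).update(
        condition="country = 'US'",
        set={"currency": "'USD'"},
    )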
DRAG DROP
–
You have an on-premises app named App1.
Customers use App1 to manage digital images.
You plan to migrate App1 to Azure.
You need to recommend a data storage solution for App1. The solution must meet the following image storage requirements:
- Encrypt images at rest.
- Allow files up to 50 MB.
- Manage access to the images by using Azure Web Application Firewall (WAF) on Azure Front Door.
The solution must meet the following customer account requirements:
- Support automatic scale out of the storage.
- Maintain the availability of App1 if a datacenter fails.
- Support reading and writing data from multiple Azure regions.
Which service should you include in the recommendation for each type of data? To answer, drag the appropriate services to the correct type of data. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct answer is worth one point.
Services
Azure Blob storage
Azure Cosmos DB
Azure SQL Database
Answer Area
Image storage:
Customer accounts:
Image storage:
Azure Blob storage: This service is designed for storing large amounts of unstructured data, such as images. It meets all the requirements for image storage:
Encrypt images at rest: Azure Blob storage encrypts data at rest by default using Storage Service Encryption (SSE).
Allow files up to 50 MB: Blob storage can handle files much larger than 50 MB.
Manage access to the images by using Azure Web Application Firewall (WAF) on Azure Front Door: Azure Front Door can be configured to serve content from Azure Blob storage, and WAF can be applied to Front Door to protect access to the images.
Azure Cosmos DB and Azure SQL Database: These services are not designed for storing large binary files such as images efficiently and cost-effectively. While they can technically store images (for example, as base64-encoded strings or binary data types), it is not their optimal use case, especially for serving images through Azure Front Door and WAF.
Customer accounts:
Azure Cosmos DB: This NoSQL database is highly scalable and globally distributed, making it an excellent choice for customer accounts. It meets all the customer account requirements:
Support automatic scale out of the storage: Cosmos DB is designed for automatic and elastic scaling.
Maintain the availability of App1 if a datacenter fails: Cosmos DB offers global distribution and multi-region write capabilities, ensuring high availability and disaster recovery.
Support reading and writing data from multiple Azure regions: Cosmos DB is designed for multi-region writes, allowing low-latency access and high availability for applications distributed across regions.
Azure SQL Database: While Azure SQL Database can be scaled and made highly available, it is not as inherently globally distributed or multi-region-write capable as Cosmos DB without significant additional configuration and complexity, and it is generally more costly and less flexible for a globally distributed account store. Azure Blob Storage is not designed for structured customer account data.
Therefore, the correct answer is:
Answer Area
Image storage: Azure Blob storage
Customer accounts: Azure Cosmos DB
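For the customer-account side, a minimal azure-cosmos sketch is shown below; the endpoint, key, database, container, and partition key are illustrative, and multi-region writes are enabled on the Cosmos DB account itself (portal, CLI, or ARM) rather than in client code.

    from azure.cosmos import CosmosClient, PartitionKey

    # Endpoint, key, and names are illustrative.
    client = CosmosClient("https://app1-accounts.documents.azure.com:443/", credential="<key>")
    db = client.create_database_if_not_exists("app1")
    container = db.create_container_if_not_exists(
        id="customer-accounts",
        partition_key=PartitionKey(path="/customerId"),
    )
    container.upsert_item({"id": "1001", "customerId": "c-42", "plan": "premium"})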
You plan to migrate on-premises MySQL databases to Azure Database for MySQL Flexible Server.
You need to recommend a solution for the Azure Database for MySQL Flexible Server configuration. The solution must meet the following requirements:
- The databases must be accessible if a datacenter fails.
- Costs must be minimized.
Which compute tier should you recommend?
A. Burstable
B. General Purpose
C. Memory Optimized
The correct answer is B. General Purpose.
Explanation:
Let’s break down why General Purpose is the most suitable option and why the others are less appropriate based on the given requirements:
Requirements Analysis:
Datacenter Failure Protection: This is the primary requirement. It means we need a solution that offers High Availability (HA) and resilience against a complete datacenter outage. In Azure, this typically translates to leveraging Availability Zones or Geo-redundancy.
Cost Minimization: This is a secondary but important requirement. We need to choose the most cost-effective option that still satisfies the HA requirement.
Compute Tier Evaluation:
A. Burstable:
Pros: Burstable is the most cost-effective compute tier for Azure Database for MySQL Flexible Server. It’s designed for workloads with low CPU utilization that occasionally need to burst to higher performance levels.
Cons: Burstable does NOT inherently offer High Availability for datacenter failures. While you can configure read replicas in different availability zones with Flexible Server (regardless of tier), the Burstable tier itself doesn’t provide the built-in HA features like Zone Redundancy that are critical for automatic failover in case of a datacenter outage. Relying solely on read replicas for HA can be more complex to manage for failover and might not provide the same level of automated resilience as built-in HA options. Furthermore, the performance of a burstable tier might be less predictable during a failover event and under sustained load.
B. General Purpose:
Pros: General Purpose offers Zone Redundant High Availability (HA) within the same Azure region. This is crucial for meeting the datacenter failure requirement. Zone Redundant HA automatically deploys the server across multiple availability zones within a region, providing automatic failover in case of a zone outage. It also offers a balance of compute and memory resources suitable for most production workloads. It’s generally more cost-effective than Memory Optimized while still providing robust performance and HA.
Cons: More expensive than Burstable.
C. Memory Optimized:
Pros: Memory Optimized also supports Zone Redundant High Availability (HA). It’s designed for memory-intensive workloads requiring fast performance and low latency.
Cons: Memory Optimized is the most expensive compute tier. If the workload is not specifically memory-bound, choosing Memory Optimized solely for HA would be an overkill and unnecessarily costly solution, violating the cost minimization requirement.
Why General Purpose is the Best Choice:
General Purpose strikes the right balance between meeting both requirements:
Datacenter Failure Protection (Meets): General Purpose with Zone Redundant HA directly addresses the need for database accessibility even if a datacenter fails.
Cost Minimization (Closest to Meeting): While Burstable is cheaper, it compromises on the essential HA requirement for datacenter failure resilience. Memory Optimized is more expensive and unnecessary if the workload isn’t memory-intensive. General Purpose provides HA at a more reasonable cost compared to Memory Optimized, making it the most cost-effective option that still fulfills the primary requirement.
In Summary:
For production workloads requiring high availability and resilience against datacenter failures while aiming to minimize costs, General Purpose compute tier with Zone Redundant HA is the recommended and most appropriate solution for Azure Database for MySQL Flexible Server. Burstable is too risky for HA, and Memory Optimized is too expensive unless specifically needed for memory-intensive workloads.
You are designing an app that will use Azure Cosmos DB to collate sales from multiple countries.
You need to recommend an API for the app. The solution must meet the following requirements:
- Support SQL queries.
- Support geo-replication.
- Store and access data relationally.
Which API should you recommend?
A. Apache Cassandra
B. PostgreSQL
C. MongoDB
D. NoSQL
The correct answer is B. PostgreSQL.
Explanation:
Let’s analyze each requirement against the capabilities of the provided Azure Cosmos DB API options:
Support SQL queries.
PostgreSQL API: Yes. The PostgreSQL API for Azure Cosmos DB is designed to be wire-protocol compatible with native PostgreSQL. This means it directly supports standard SQL queries, including complex queries, joins, and aggregations, just like a traditional PostgreSQL database.
Apache Cassandra API: No. The Apache Cassandra API for Azure Cosmos DB uses Cassandra Query Language (CQL), which is not standard SQL. CQL is a NoSQL query language specific to Cassandra and is different in syntax and capabilities from SQL.
MongoDB API: No. The MongoDB API for Azure Cosmos DB uses the MongoDB Query Language, which is a document-based query language and not SQL.
NoSQL API: Partially. The API for NoSQL (previously called the Core (SQL) API) does provide a SQL-like query language over JSON documents, so SQL-style querying is possible. It is ruled out by the third requirement rather than this one, because it is a document store, not a relational database.
Support geo-replication.
PostgreSQL API: Yes. Azure Cosmos DB for PostgreSQL supports geo-replication through cross-region read replicas, so data can be replicated to another region, read there, and promoted in a disaster-recovery scenario.
Apache Cassandra API: Yes. Geo-replication is supported.
MongoDB API: Yes. Geo-replication is supported.
NoSQL API: Yes. Global distribution and geo-replication are core Cosmos DB capabilities and are supported by the API for NoSQL, just as they are by the other APIs.
Store and access data relationally.
PostgreSQL API: Yes. PostgreSQL is a relational database system. The PostgreSQL API in Cosmos DB allows you to create tables, define schemas, enforce relationships, and perform relational operations.
Apache Cassandra API: No. Apache Cassandra is a NoSQL wide-column store database. While you can model relationships to some extent, it’s not inherently relational in the same way as a relational database like PostgreSQL. Cassandra is optimized for scalability and fault tolerance, not for strict relational data modeling.
MongoDB API: No. MongoDB is a NoSQL document database. It stores data in JSON-like documents and is not relational. While you can embed related data in documents, it’s not based on relational database principles.
NoSQL API: No. Although the API for NoSQL exposes a SQL-like query language, it stores data as JSON documents. It does not provide relational tables, enforced schemas, or joins with referential integrity in the way PostgreSQL does, so it does not store and access data relationally.
Conclusion:
Based on the analysis, the PostgreSQL API (Option B) is the only API that directly and completely satisfies all three requirements: SQL queries, geo-replication, and relational data storage and access.
Final Answer: The final answer is
B
HOTSPOT
–
You have an app that generates 50,000 events daily.
You plan to stream the events to an Azure event hub and use Event Hubs Capture to implement cold path processing of the events. The output of Event Hubs Capture will be consumed by a reporting system.
You need to identify which type of Azure storage must be provisioned to support Event Hubs Capture, and which inbound data format the reporting system must support.
What should you identify? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Storage type:
Azure Data Lake Storage Gen2
Premium block blobs
Premium file shares
Data format:
Apache Parquet
Avro
JSON
Storage type:
Azure Data Lake Storage Gen2: Azure Data Lake Storage Gen2 is built on top of Azure Blob Storage and is designed for big data analytics workloads. It provides hierarchical namespace and is optimized for storing and processing large volumes of data. Event Hubs Capture is specifically designed to output data to Azure Blob Storage or Azure Data Lake Storage Gen2. ADLS Gen2 is a very suitable option for cold path processing and reporting systems that consume large volumes of event data.
Premium block blobs: Premium block blobs are designed for high transaction rates and low latency, typically used for hot storage scenarios. While Event Hubs Capture could technically write to standard tier block blobs within Azure Blob Storage, Premium block blobs are not the typical recommendation or cost-effective choice for cold path processing and large volume capture scenarios like this. Standard tier or ADLS Gen2 are more appropriate.
Premium file shares: Azure file shares are designed for file system semantics (SMB, NFS) and are not a supported storage target for Event Hubs Capture. Event Hubs Capture is designed to write streaming data to blob storage optimized for large data volumes.
Data format:
Avro: Avro is a row-oriented data serialization framework. It is the default and recommended format for Event Hubs Capture. Avro is designed for data-intensive applications and is efficient for both storage and processing. It also supports schema evolution, which is beneficial for event data that might change over time.
Apache Parquet: Parquet is a columnar storage format optimized for analytical queries. While Parquet is excellent for reporting systems, Event Hubs Capture does not directly output data in Parquet format. Event Hubs Capture outputs in Avro. To get Parquet, you would typically need to use another service (like Azure Data Factory, Azure Databricks, or Stream Analytics) to process the Avro files from Capture and convert them to Parquet for the reporting system.
JSON: JSON (JavaScript Object Notation) is a human-readable text-based format. While individual events within Event Hubs can be in JSON format, Event Hubs Capture does not output the captured data as raw JSON files. It outputs in Avro, which is a binary format optimized for efficiency.
Conclusion:
For Event Hubs Capture, Azure Data Lake Storage Gen2 is a highly suitable and recommended storage type for cold path processing and reporting. The default and primary data format outputted by Event Hubs Capture is Avro. Therefore, the reporting system must support Avro to directly consume the output.
Answer Area:
Storage type: Azure Data Lake Storage Gen2
Data format: Avro
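To give a feel for what the reporting system consumes, the sketch below reads one captured Avro file with the fastavro package; the blob path is illustrative, and each record exposes the event payload in the Body field alongside system metadata.

    from fastavro import reader

    # Path of one captured file downloaded from the storage container (illustrative).
    # Each record wraps the original event bytes in the Body field plus metadata.
    with open("app-events/0/2024/05/01/10/00/00.avro", "rb") as f:
        for record in reader(f):
            print(record["EnqueuedTimeUtc"], record["Body"][:80])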
HOTSPOT
–
You are designing a storage solution that will ingest, store, and analyze petabytes (PBs) of structured, semi-structured, and unstructured text data. The analyzed data will be offloaded to Azure Data Lake Storage Gen2 for long-term retention.
You need to recommend a storage and analytics solution that meets the following requirements:
- Stores the processed data
- Provides interactive analytics
- Supports manual scaling, built-in autoscaling, and custom autoscaling
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
For storage and interactive analytics:
Azure Data Explorer
Azure Data Lake Analytics
Log Analytics
Query language:
KQL
Transact-SQL
U-SQL
Let’s break down each option and why the selected answers are the most appropriate for the given scenario.
For storage and interactive analytics:
Azure Data Explorer: This is the correct choice. Azure Data Explorer (ADX) is a fully managed, high-performance, big data analytics service that is specifically designed for fast exploration and analysis of large volumes of diverse data. It excels at:
Ingesting and storing: Petabytes of structured, semi-structured, and unstructured data, including text data.
Interactive analytics: ADX is built for ad-hoc queries and fast insights using its Kusto Query Language (KQL).
Scaling: It supports manual scaling (resizing clusters), built-in autoscaling (based on workload), and custom autoscaling through APIs.
Data Types: Handles structured, semi-structured, and unstructured data effectively.
Azure Data Lake Analytics: While Azure Data Lake Analytics can process petabytes of data and uses Azure Data Lake Storage Gen2, it is not primarily designed for interactive analytics. It’s more suited for batch processing and complex data transformations using U-SQL. Interactive querying is not its strong point compared to ADX.
Log Analytics: Log Analytics (part of Azure Monitor) is primarily designed for collecting and analyzing log and telemetry data for monitoring and troubleshooting purposes. While it uses KQL and can perform analytics on large volumes of data, its primary focus is operational monitoring, not general-purpose interactive analytics across diverse data types for business insights like the scenario describes. It’s also less focused on storing processed data for long-term retention in the context of a data lake strategy.
Query language:
KQL (Kusto Query Language): This is the correct choice. KQL is the query language used by Azure Data Explorer and Log Analytics. Since Azure Data Explorer is the best choice for storage and interactive analytics in this scenario, KQL is the corresponding query language. KQL is specifically designed for querying large volumes of data quickly and efficiently, making it ideal for interactive analysis in ADX.
Transact-SQL (T-SQL): T-SQL is the query language for SQL Server and Azure SQL Database. It’s designed for relational databases and is not the native query language for Azure Data Explorer or Azure Data Lake Analytics. While you might be able to connect SQL tools to some of these services, it’s not the optimal or primary query method for interactive analytics in this context.
U-SQL: U-SQL is the query language used by Azure Data Lake Analytics. While relevant to Azure Data Lake services, it’s specifically tied to Data Lake Analytics and not used with Azure Data Explorer. Since Azure Data Explorer is the better choice for interactive analytics, KQL is the appropriate query language, not U-SQL.
Therefore, the best solution is:
For storage and interactive analytics: Azure Data Explorer
Query language: KQL
This combination directly addresses all the requirements: storing processed data, providing interactive analytics, and supporting various scaling options while being well-suited for petabyte-scale structured, semi-structured, and unstructured text data.
Answer Area:
For storage and interactive analytics: ✔️ Azure Data Explorer
Query language: ✔️ KQL
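To illustrate the interactive-analytics side, here is a minimal sketch, assuming the azure-kusto-data package, that runs an ad-hoc KQL query against a hypothetical Azure Data Explorer cluster. The cluster URI, database name, table name, and columns are placeholders, not part of the question.

```python
# Minimal sketch: run an ad-hoc KQL query against Azure Data Explorer.
# Assumes: pip install azure-kusto-data
# Cluster URI, database, table, and column names are hypothetical placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://contosoadx.eastus.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# KQL: daily document counts over the last 7 days in a hypothetical text-events table.
query = """
RawTextEvents
| where Timestamp > ago(7d)
| summarize Documents = count() by Day = bin(Timestamp, 1d)
| order by Day asc
"""

response = client.execute("telemetrydb", query)
for row in response.primary_results[0]:
    print(row["Day"], row["Documents"])
```

The same KQL runs unchanged in the Azure Data Explorer web UI, which is typically where the interactive, exploratory analysis described in the scenario would happen.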
HOTSPOT
–
You plan to use Azure SQL as a database platform.
You need to recommend an Azure SQL product and service tier that meets the following requirements:
- Automatically scales compute resources based on the workload demand
- Provides per second billing
What should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Azure SQL product:
A single Azure SQL database
An Azure SQL Database elastic pool
Azure SQL Managed Instance
Service tier:
Basic
Business Critical
General Purpose
Hyperscale
Standard
Let’s analyze each requirement and the provided options for Azure SQL product and service tier.
Requirement 1: Automatically scales compute resources based on workload demand
Azure SQL product:
A single Azure SQL database: A single Azure SQL database configured with the Serverless compute tier automatically scales compute resources based on workload demand. It can also pause automatically during inactive periods and resume when activity returns.
An Azure SQL Database elastic pool: Elastic pools share a set of provisioned resources among multiple databases. Individual databases scale within the pool's fixed limits, but the pool itself is sized in advance and is not automatically scaled up and down with demand the way the serverless model is.
Azure SQL Managed Instance: Azure SQL Managed Instance is a fully managed SQL Server instance. While it offers scalability, it does not inherently offer the same level of automatic, per-second scaling based on workload demand as the Serverless compute tier available in single databases and elastic pools. Managed Instance typically uses provisioned compute, though it can be resized.
Service tier:
Hyperscale: The Hyperscale service tier is built for highly scalable workloads. It scales storage (up to 100 TB) and compute independently and quickly, and it also supports the serverless compute model, which adds automatic compute scaling based on demand and per-second billing.
General Purpose: The General Purpose tier with provisioned compute offers a balance of compute and storage, but it does not automatically scale compute on demand or bill per second; those behaviors come from the serverless compute model rather than from the tier itself.
Business Critical: Business Critical is designed for mission-critical workloads that need high performance and availability, but it uses provisioned resources and offers neither automatic compute scaling nor per-second billing.
Standard, Basic: Basic and Standard are entry-level DTU-based tiers with limited performance and scalability options; they are not designed for automatic scaling based on demand or per-second billing.
Requirement 2: Provides per second billing
Azure SQL product (with Serverless compute tier):
A single Azure SQL database (Serverless): Yes. A single database configured with the Serverless compute tier is billed per second for the compute actually consumed while the database is active, and compute billing stops while the database is paused (storage is still billed).
An Azure SQL Database elastic pool: Not directly. Pool resources are provisioned and billed for the configured pool size rather than per second of compute actually consumed, so an elastic pool does not meet the per-second billing requirement the way a serverless single database does.
Azure SQL Managed Instance: No. Azure SQL Managed Instance uses a vCore-based pricing model, which is typically billed hourly, not per second.
Service tier:
Hyperscale: Yes, when the serverless compute model is used. Serverless compute in Hyperscale is billed per second for the vCores consumed; only provisioned Hyperscale compute is billed per hour.
General Purpose, Business Critical, Standard, Basic: General Purpose also offers a serverless option, but Business Critical, Standard, and Basic rely on provisioned or DTU-based resources and are not billed per second.
Conclusion:
Both requirements describe the serverless compute model: compute scales automatically with workload demand and is billed per second only while the database is active. Serverless is available for a single Azure SQL database, and among the listed service tiers Hyperscale is the intended match because it combines serverless per-second billing with the broadest compute and storage scalability. (General Purpose also supports serverless, but Hyperscale better fits a workload that must keep scaling with demand.)
Given the options and the core requirements, the best fit is:
Azure SQL product: A single Azure SQL database (using the serverless compute model)
Service tier: Hyperscale (serverless compute in Hyperscale provides automatic scaling and per-second billing)
Final Answer:
Answer Area
Azure SQL product: A single Azure SQL database
Service tier: Hyperscale
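As a rough sketch of how the recommended combination could be provisioned, the code below uses the azure-mgmt-sql management SDK to create a single database with a serverless SKU. The resource names are placeholders, and the SKU name HS_S_Gen5 (serverless Hyperscale, 2 vCores max) is an assumption; verify the exact SKU names available in your region (GP_S_Gen5 is the General Purpose serverless equivalent).

```python
# Minimal sketch: create a single Azure SQL database with a serverless SKU.
# Assumes: pip install azure-identity azure-mgmt-sql
# Resource names are placeholders; the SKU name "HS_S_Gen5" (serverless
# Hyperscale) is an assumption -- verify it against the current SKU list.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database, Sku

subscription_id = "00000000-0000-0000-0000-000000000000"
client = SqlManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.databases.begin_create_or_update(
    resource_group_name="rg-data",
    server_name="litware-sql",
    database_name="db-serverless",
    parameters=Database(
        location="eastus",
        # Serverless SKU: compute scales between min_capacity and the SKU's
        # max vCores and is billed per second of compute used.
        sku=Sku(name="HS_S_Gen5", tier="Hyperscale", family="Gen5", capacity=2),
        min_capacity=0.5,  # minimum vCores the database can scale down to
        # auto_pause_delay applies to General Purpose serverless; it is not set here.
    ),
)
db = poller.result()
print(db.name, db.sku.name)
```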
HOTSPOT
–
You have an Azure subscription.
You need to deploy a solution that will provide point-in-time restore for blobs in storage accounts that have blob versioning and blob soft delete enabled.
Which type of blob should you create, and what should you enable for the accounts? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Blob type:
Append
Block
Page
Enable:
A stored access policy
Immutable blob storage
Object replication
The change feed
To achieve point-in-time restore for blobs in Azure Storage accounts with blob versioning and soft delete enabled, we need to understand the prerequisites and supporting features for this functionality.
Blob Type:
Append, Block, and Page Blobs: Point-in-time restore is supported for standard general-purpose v2 storage accounts, and only block blobs are restored; append blobs and page blobs are not included in a restore operation. Block blobs are also the general-purpose type for object storage, so for a solution that must support point-in-time restore, Block is the blob type to create.
Enable:
Let’s evaluate the options for enabling features:
A stored access policy: Stored access policies are used to manage shared access signatures (SAS) for controlled access to storage resources. They do not directly enable point-in-time restore functionality.
Immutable blob storage: Immutable blob storage (with policy or time-based retention) is a feature that prevents blobs from being deleted or modified for a specified period. While immutability enhances data protection and can be used in conjunction with point-in-time restore for robust data governance, it is not the feature that enables point-in-time restore itself.
Object replication: Object replication is for asynchronously copying blobs between storage accounts for disaster recovery or other purposes. It’s not related to point-in-time restore within a single storage account.
The change feed: The change feed is the fundamental feature that enables point-in-time restore. Point-in-time restore relies on the change feed to understand the history of operations performed on the blobs within the storage account. The change feed provides a log of all create, modify, and delete operations. Azure Storage uses this change feed data to reconstruct the state of the storage account at a specified point in time. To use point-in-time restore, the change feed must be enabled on the storage account.
Therefore, to enable point-in-time restore for blobs, you should use Block Blobs (as it’s the general and relevant type for this scenario) and enable The change feed for the storage accounts.
Answer Area:
Blob type: Block
Enable: The change feed
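For completeness, here is a minimal sketch, assuming the azure-mgmt-storage SDK and placeholder resource names, that enables the prerequisites named in this question (versioning, soft delete, and the change feed) together with the point-in-time restore policy on a storage account's blob service.

```python
# Minimal sketch: enable the prerequisites for blob point-in-time restore
# (change feed, blob versioning, soft delete) plus the restore policy itself.
# Assumes: pip install azure-identity azure-mgmt-storage
# Subscription, resource group, and account names are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    BlobServiceProperties,
    ChangeFeed,
    DeleteRetentionPolicy,
    RestorePolicyProperties,
)

client = StorageManagementClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"
)

client.blob_services.set_service_properties(
    resource_group_name="rg-storage",
    account_name="litwarestorage01",
    parameters=BlobServiceProperties(
        is_versioning_enabled=True,                  # blob versioning
        change_feed=ChangeFeed(enabled=True),        # required for point-in-time restore
        delete_retention_policy=DeleteRetentionPolicy(enabled=True, days=14),  # soft delete
        restore_policy=RestorePolicyProperties(enabled=True, days=7),  # restore window
    ),
)
```

The restore window (restore_policy.days) must be shorter than the soft-delete retention period, which is why 7 and 14 days are used in this sketch.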
HOTSPOT
–
Your company, named Contoso, Ltd., has an Azure subscription that contains the following resources:
- An Azure Synapse Analytics workspace named contosoworkspace1
- An Azure Data Lake Storage account named contosolake1
- An Azure SQL database named contososql1
The product data of Contoso is copied from contososql1 to contosolake1.
Contoso has a partner company named Fabrikam Inc. Fabrikam has an Azure subscription that contains the following resources:
- A virtual machine named FabrikamVM1 that runs Microsoft SQL Server 2019
- An Azure Storage account named fabrikamsa1
Contoso plans to upload the research data on FabrikamVM1 to contosolake1. During the upload, the research data must be transformed to the data formats used by Contoso.
The data in contosolake1 will be analyzed by using contosoworkspace1.
You need to recommend a solution that meets the following requirements:
- Upload and transform the FabrikamVM1 research data.
- Provide Fabrikam with restricted access to snapshots of the data in contosoworkspace1.
What should you recommend for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Upload and transform the data:
Azure Data Box Gateway
Azure Data Share
Azure Synapse pipelines
Provide restricted access:
Azure Data Box Gateway
Azure Data Share
Azure Synapse pipelines
Let’s analyze each requirement and the available options:
Requirement 1: Upload and transform the FabrikamVM1 research data.
Azure Data Box Gateway: Azure Data Box Gateway is a hybrid cloud storage solution that enables you to transfer data to Azure in a fast and cost-effective manner. It is primarily used for large volumes of data and scenarios where network bandwidth is limited. While it can upload data to Azure, it is not designed for real-time data transformation during the upload process. Data Box Gateway is more focused on efficient transfer of data as-is.
Azure Data Share: Azure Data Share is a service for sharing data securely with external organizations. It’s about sharing existing data in Azure, not about uploading and transforming data from an external source into Azure.
Azure Synapse Pipelines: Azure Synapse Pipelines (within Azure Synapse Analytics workspace) are a powerful, serverless data integration service. They are designed for building complex ETL/ELT pipelines. You can create a pipeline that:
Connects to FabrikamVM1’s SQL Server: Using a Linked Service to access the SQL Server database on the VM.
Extracts the research data: Using a Copy Activity to read data from the SQL Server database.
Transforms the data: Using activities within the pipeline (such as mapping data flows, or activities that call external services like Azure Functions or Azure Databricks) to transform the data into Contoso's required formats.
Loads the transformed data into contosolake1: Using a Copy Activity to write the transformed data to Azure Data Lake Storage Gen2.
Therefore, Azure Synapse Pipelines is the most suitable option for uploading and transforming data.
Requirement 2: Provide Fabrikam with restricted access to snapshots of the data in contosoworkspace1.
Azure Data Box Gateway: Azure Data Box Gateway is for data transfer and not for providing access to existing data or snapshots.
Azure Data Share: Azure Data Share is specifically designed for securely sharing data with external organizations like Fabrikam. It allows you to:
Share data from various Azure data stores: Including Azure Data Lake Storage Gen2 (contosolake1, which is used by contosoworkspace1).
Share snapshots: You can share snapshots of the data, ensuring Fabrikam gets a consistent view of the data at a specific point in time.
Control access: You can define the level of access Fabrikam has to the shared data, restricting them to read-only access to the snapshots.
Manage sharing relationships: You can manage and monitor the data sharing relationship with Fabrikam.
Azure Synapse Pipelines: Azure Synapse Pipelines are for data integration and processing, not directly for managing external data access to snapshots. While pipelines can create snapshots or copies of data, they are not the service for sharing those snapshots with restricted access to external partners in a managed and secure way.
Therefore, Azure Data Share is the most suitable option for providing restricted access to snapshots of the data in contosoworkspace1.
Answer Area:
Upload and transform the data: ✔️ Azure Synapse pipelines
Provide restricted access: ✔️ Azure Data Share
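To tie the first requirement to something concrete, the sketch below, assuming the azure-synapse-artifacts package, triggers and polls a hypothetical Synapse pipeline in contosoworkspace1 that implements the steps described above (linked service to FabrikamVM1, copy activity, transformation, load into contosolake1). The pipeline name and the workspace endpoint shown are assumptions; the pipeline itself would be authored in the workspace.

```python
# Minimal sketch: trigger and monitor a (hypothetical) Synapse pipeline that
# copies and transforms the FabrikamVM1 research data into contosolake1.
# Assumes: pip install azure-identity azure-synapse-artifacts
# The pipeline name "IngestFabrikamResearch" is a hypothetical placeholder.
import time

from azure.identity import DefaultAzureCredential
from azure.synapse.artifacts import ArtifactsClient

client = ArtifactsClient(
    credential=DefaultAzureCredential(),
    endpoint="https://contosoworkspace1.dev.azuresynapse.net",
)

# Start a run of the pipeline authored in contosoworkspace1.
run = client.pipeline.create_pipeline_run("IngestFabrikamResearch")

# Poll the run until it reaches a terminal state.
while True:
    status = client.pipeline_run.get_pipeline_run(run.run_id).status
    print("Pipeline run status:", status)
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)
```

Sharing the resulting data with Fabrikam would then be handled separately through Azure Data Share, which manages snapshot-based, read-only access for the partner.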