test5 Flashcards

https://freedumps.certqueen.com/?s=AZ-304

1
Q

Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview. General Overview

Litware, Inc. is a medium-sized finance company.

Overview. Physical Locations

Litware has a main office in Boston.

Existing Environment. Identity Environment

The network contains an Active Directory forest named Litware.com that is linked to an Azure Active Directory (Azure AD) tenant named Litware.com. All users have Azure Active Directory Premium P2 licenses.

Litware has a second Azure AD tenant named dev.Litware.com that is used as a development environment.

The Litware.com tenant has a conditional access policy named capolicy1. Capolicy1 requires that when users manage the Azure subscription for a production environment by using the Azure portal, they must connect from a hybrid Azure AD-joined device.

Existing Environment. Azure Environment

Litware has 10 Azure subscriptions that are linked to the Litware.com tenant and five Azure subscriptions that are linked to the dev.Litware.com tenant. All the subscriptions are in an Enterprise Agreement (EA).

The Litware.com tenant contains a custom Azure role-based access control (Azure RBAC) role named Role1 that grants the DataActions read permission to the blobs and files in Azure Storage.

Existing Environment. On-premises Environment

The on-premises network of Litware contains the resources shown in the following table.

Name | Type | Configuration
SERVER1 | Ubuntu 18.04 virtual machines hosted on Hyper-V | The virtual machines host a third-party app named App1. App1 uses an external storage solution that provides Apache Hadoop-compatible data storage. The data storage supports POSIX access control list (ACL) file-level permissions.
SERVER10 | Server that runs Windows Server 2016 | The server contains a Microsoft SQL Server instance that hosts two databases named DB1 and DB2.

Existing Environment. Network Environment

Litware has ExpressRoute connectivity to Azure.

Planned Changes and Requirements. Planned Changes

Litware plans to implement the following changes:

✑ Migrate DB1 and DB2 to Azure.

✑ Migrate App1 to Azure virtual machines.

✑ Deploy the Azure virtual machines that will host App1 to Azure dedicated hosts.

Planned Changes and Requirements. Authentication and Authorization Requirements

Litware identifies the following authentication and authorization requirements:

✑ Users that manage the production environment by using the Azure portal must connect from a hybrid Azure AD-joined device and authenticate by using Azure Multi-Factor Authentication (MFA).

✑ The Network Contributor built-in RBAC role must be used to grant permission to all the virtual networks in all the Azure subscriptions.

✑ To access the resources in Azure, App1 must use the managed identity of the virtual machines that will host the app.

✑ Role1 must be used to assign permissions to the storage accounts of all the Azure subscriptions.

✑ RBAC roles must be applied at the highest level possible.

Planned Changes and Requirements. Resiliency Requirements

Litware identifies the following resiliency requirements:

✑ Once migrated to Azure, DB1 and DB2 must meet the following requirements:

  • Maintain availability if two availability zones in the local Azure region fail.
  • Fail over automatically.
  • Minimize I/O latency.

✑ App1 must meet the following requirements:

  • Be hosted in an Azure region that supports availability zones.
  • Be hosted on Azure virtual machines that support automatic scaling.
  • Maintain availability if two availability zones in the local Azure region fail.

Planned Changes and Requirements. Security and Compliance Requirements

Litware identifies the following security and compliance requirements:

✑ Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.

✑ On-premises users and services must be able to access the Azure Storage account that will host the data in App1.

✑ Access to the public endpoint of the Azure Storage account that will host the App1 data must be prevented.

✑ All Azure SQL databases in the production environment must have Transparent Data Encryption (TDE) enabled.

✑ App1 must not share physical hardware with other workloads.

Planned Changes and Requirements. Business Requirements

Litware identifies the following business requirements:

✑ Minimize administrative effort.

✑ Minimize costs.

You migrate App1 to Azure. You need to ensure that the data storage for App1 meets the security and compliance requirements.

What should you do?

Create an access policy for the blob
Modify the access level of the blob service.
Implement Azure resource locks.
Create Azure RBAC assignments.

A

The security and compliance requirement states: “Once App1 is migrated to Azure, you must ensure that new data can be written to the app, and the modification of new and existing data is prevented for a period of three years.” This is a requirement for data immutability or Write-Once-Read-Many (WORM) storage.

Let’s examine each option:

Create an access policy for the blob: Azure Blob Storage offers a feature called Immutable Storage for Blob Storage, which allows you to store business-critical data in a WORM state. You can implement time-based retention policies to retain data for a specified period, during which blobs cannot be modified or deleted. This directly addresses the requirement of preventing modification for three years. An access policy in this context would refer to configuring an immutability policy.
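
As a rough sketch of that approach with Az PowerShell, a time-based retention (immutability) policy can be applied to the container that will hold the App1 data and then locked. All resource names below are placeholder assumptions, and the cmdlets assume the Az.Storage module.

```powershell
# Minimal sketch (assumed names): apply a 3-year time-based retention policy to the
# container that stores the App1 data, then lock it so it cannot be shortened.
Import-Module Az.Storage

$rg        = 'rg-app1'        # placeholder resource group
$account   = 'stapp1data'     # placeholder storage account
$container = 'app1-data'      # placeholder container

# Three years expressed in days (the policy period is specified in days).
Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $rg `
    -StorageAccountName $account -ContainerName $container `
    -ImmutabilityPeriod 1095

# Locking makes the policy irreversible: new blobs can still be written, but
# existing blobs cannot be modified or deleted during the retention period.
$policy = Get-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $rg `
    -StorageAccountName $account -ContainerName $container
Lock-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName $rg `
    -StorageAccountName $account -ContainerName $container -Etag $policy.Etag -Force
```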

Modify the access level of the blob service: The access level of a blob container controls anonymous public read access (private, blob, or container), and the related access tiers (Hot, Cool, Archive) relate only to access frequency and cost. Neither setting provides immutability or write protection for the data, so this option is irrelevant to the requirement.

Implement Azure resource locks: Azure Resource Locks are used to protect Azure resources (like storage accounts, virtual machines, etc.) from accidental deletion or modification at the Azure Resource Manager level. While you can lock a storage account to prevent deletion of the account itself, resource locks do not prevent modifications to the data within the blobs in the storage account. Resource locks are not designed for data immutability within a storage service.

Create Azure RBAC assignments: Azure Role-Based Access Control (RBAC) is used to manage access to Azure resources. You can use RBAC to control who can read, write, or delete blobs. However, RBAC is about authorization and permissions, not about enforcing immutability or retention policies. RBAC cannot prevent authorized users from modifying data within the retention period.

Considering the requirement for data immutability and prevention of modification for three years, the most appropriate solution is to Create an access policy for the blob. This refers to using the Immutable Storage feature of Azure Blob Storage and setting up a time-based retention policy for a duration of three years. This will ensure that once data is written, it cannot be modified or deleted for the specified period, meeting the security and compliance requirement.

Final Answer: Create an access policy for the blob

2
Q

You need to implement the Azure RBAC role assignments for the Network Contributor role. The solution must meet the authentication and authorization requirements.

What is the minimum number of assignments that you must use?

1
2
5
10
15

A

The requirement is to use the Network Contributor built-in RBAC role to grant permission to all virtual networks in all Azure subscriptions. The principle is to apply RBAC roles at the highest level possible to minimize administrative effort.

Litware has:

10 Azure subscriptions in the Litware.com tenant (production environment)

5 Azure subscriptions in the dev.Litware.com tenant (development environment)

Total of 15 Azure subscriptions

The requirement is to grant the Network Contributor role to all virtual networks in all Azure subscriptions. This implies we need to cover all 15 subscriptions.

The highest level at which you can apply an RBAC role assignment that would affect all virtual networks within a subscription is the subscription level itself.

If there was a Management Group structure in place, and if all 15 subscriptions were under a single Management Group, then assigning the Network Contributor role at the Management Group level would be the most efficient way, requiring only 1 assignment. However, the case study does not explicitly mention the use of Management Groups.

In the absence of explicitly mentioned Management Groups that encompass all subscriptions, the highest level to apply RBAC to cover all virtual networks within each subscription is the subscription level.

Therefore, to grant the Network Contributor role to all virtual networks in all 15 subscriptions, and applying the role at the highest possible level (which we assume to be subscription level in this context), you would need to make 15 assignments, one assignment for each subscription.

If we were to assign at a lower level, such as resource group level, it would not meet the requirement of covering all virtual networks in all subscriptions with the minimum number of assignments. We would need many more assignments at the resource group level, and it would be much more complex to manage.

Since the question asks for the minimum number of assignments and to apply at the highest level possible, and assuming the highest manageable level to affect all virtual networks in a subscription is the subscription itself, the answer is 15. If a management group was implied and covered all subscriptions, the answer would be 1. However, based on the information provided, and to cover all subscriptions, 15 is the minimum number of assignments at the subscription level.
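
For reference, one such subscription-scope assignment might look like the following Az PowerShell sketch; the group object ID and subscription ID are placeholder assumptions, and the command would be repeated once per subscription to reach the count above.

```powershell
# Minimal sketch (assumed IDs): assign the built-in Network Contributor role at
# subscription scope. One assignment per subscription covers every virtual network
# in that subscription.
$groupObjectId  = '00000000-0000-0000-0000-000000000000'   # placeholder Azure AD group
$subscriptionId = '11111111-1111-1111-1111-111111111111'   # placeholder subscription

New-AzRoleAssignment -ObjectId $groupObjectId `
    -RoleDefinitionName 'Network Contributor' `
    -Scope "/subscriptions/$subscriptionId"
```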

Final Answer: 15

3
Q

HOTSPOT

You plan to migrate DB1 and DB2 to Azure.

You need to ensure that the Azure database and the service tier meet the resiliency and business requirements.

What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Database:
  • A single Azure SQL database
  • Azure SQL Managed Instance
  • An Azure SQL Database elastic pool
Service tier:
  • Hyperscale
  • Business Critical
  • General Purpose

A

Explanation:

Box 1: SQL Managed Instance

Scenario: Once migrated to Azure, DB1 and DB2 must meet the following requirements:

✑ Maintain availability if two availability zones in the local Azure region fail.

✑ Fail over automatically.

✑ Minimize I/O latency.

The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a server or all databases in a managed instance to another region. It is a declarative abstraction on top of the existing active geo-replication feature, designed to simplify deployment and management of geo-replicated databases at scale. You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined policy. The latter option allows you to automatically recover multiple related databases in a secondary region after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or SQL Managed Instance availability in the primary region.
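
A hedged Az PowerShell sketch of creating such an auto-failover group between two managed instances is shown below; every name, region, and resource group is a placeholder assumption, and the parameters should be verified against the installed Az.Sql version.

```powershell
# Minimal sketch (assumed names): create an auto-failover group between a primary
# and a partner SQL Managed Instance, with failover delegated to the Azure service.
New-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName 'rg-sql' `
    -Name 'fog-litware-db' `
    -Location 'eastus' `
    -PrimaryManagedInstanceName 'sqlmi-primary' `
    -PartnerRegion 'westus' `
    -PartnerManagedInstanceName 'sqlmi-partner' `
    -FailoverPolicy Automatic
```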

Box 2: Business critical

SQL Managed Instance is available in two service tiers:

General purpose: Designed for applications with typical performance and I/O latency requirements.

Business critical: Designed for applications with low I/O latency requirements and minimal impact of underlying maintenance operations on the workload.
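
For illustration, provisioning a Business Critical managed instance with Az PowerShell might look roughly like this; the subnet ID, credential, and sizing values are placeholder assumptions rather than values from the case study.

```powershell
# Minimal sketch (assumed values): create a Business Critical SQL Managed Instance,
# the tier aimed at low I/O latency workloads.
$cred = Get-Credential   # SQL admin credential supplied interactively

New-AzSqlInstance -Name 'sqlmi-litware' -ResourceGroupName 'rg-sql' `
    -Location 'eastus' `
    -SubnetId '/subscriptions/<sub-id>/resourceGroups/rg-net/providers/Microsoft.Network/virtualNetworks/vnet1/subnets/sqlmi-subnet' `
    -AdministratorCredential $cred `
    -SkuName 'BC_Gen5' -VCore 8 -StorageSizeInGB 256 `
    -LicenseType 'LicenseIncluded'
```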

4
Q

Overview

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Existing Environment: Technical Environment

The on-premises network contains a single Active Directory domain named contoso.com.

Contoso has a single Azure subscription.

Existing Environment: Business Partnerships

Contoso has a business partnership with Fabrikam, Inc. Fabrikam users access some Contoso applications over the internet by using Azure Active Directory (Azure AD) guest accounts.

Requirements: Planned Changes

Contoso plans to deploy two applications named App1 and App2 to Azure.

Requirements: App1

App1 will be a Python web app hosted in Azure App Service that requires a Linux runtime.

Users from Contoso and Fabrikam will access App1.

App1 will access several services that require third-party credentials and access strings.

The credentials and access strings are stored in Azure Key Vault.

App1 will have six instances: three in the East US Azure region and three in the West Europe Azure region.

App1 has the following data requirements:

✑ Each instance will write data to a data store in the same availability zone as the instance.

✑ Data written by any App1 instance must be visible to all App1 instances.

App1 will only be accessible from the internet. App1 has the following connection requirements:

✑ Connections to App1 must pass through a web application firewall (WAF).

✑ Connections to App1 must be active-active load balanced between instances.

✑ All connections to App1 from North America must be directed to the East US region. All other connections must be directed to the West Europe region.

Every hour, you will run a maintenance task by invoking a PowerShell script that copies files from all the App1 instances. The PowerShell script will run from a central location.

Requirements: App2

App2 will be a .NET app hosted in App Service that requires a Windows runtime.

App2 has the following file storage requirements:

✑ Save files to an Azure Storage account.

✑ Replicate files to an on-premises location.

✑ Ensure that on-premises clients can read the files over the LAN by using the SMB protocol.

You need to monitor App2 to analyze how long it takes to perform different transactions within the application. The solution must not require changes to the application code.

Application Development Requirements

Application developers will constantly develop new versions of App1 and App2.

The development process must meet the following requirements:

✑ A staging instance of a new application version must be deployed to the application host before the new version is used in production.

✑ After testing the new version, the staging version of the application will replace the production version.

✑ The switch to the new application version from staging to production must occur without any downtime of the application.

Identity Requirements

Contoso identifies the following requirements for managing Fabrikam access to resources:

✑ The solution must minimize development effort.

Security Requirement

All secrets used by Azure services must be stored in Azure Key Vault.

Services that require credentials must have the credentials tied to the service instance. The credentials must NOT be shared between services.

You need to recommend a solution that meets the application development requirements.

What should you include in the recommendation?

an Azure Container Registry instance
deployment slots
Continuous Integration/Continuous Deployment (CI/CD) sources
the Azure App Configuration service

A

The application development requirements explicitly state the need for a staging instance, testing, and zero-downtime deployment for new application versions. Let’s evaluate each option in the context of these requirements:

an Azure Container Registry instance: Azure Container Registry (ACR) is a service for building, storing, and managing container images. While ACR is crucial for containerized applications and can be part of a CI/CD pipeline, App1 and App2 are deployed to Azure App Service, which, according to the description, doesn’t explicitly mention containerization. ACR, by itself, does not directly enable staging or zero-downtime deployment for App Service applications.

deployment slots: Azure App Service deployment slots are a feature specifically designed to address the application development requirements outlined. Deployment slots allow you to:

Deploy a new version of your application to a staging slot.

Test the staged application in an environment that mirrors production.

Swap the staging slot into the production slot with minimal to zero downtime. This swap operation is very quick because it primarily involves changing the virtual IP addresses associated with the slots, not redeploying the application.
This option directly and effectively addresses all three application development requirements.

Continuous Integration/Continuous Deployment (CI/CD) sources: CI/CD sources like Azure DevOps, GitHub, or Bitbucket are tools and platforms that facilitate the automation of the software development lifecycle, including building, testing, and deploying applications. While CI/CD pipelines are essential for automating deployments to deployment slots, CI/CD sources themselves are not the mechanism for staging and zero-downtime deployment. They are used to manage and drive deployments, potentially to deployment slots, but they are not the solution itself for the stated requirement.

the Azure App Configuration service: Azure App Configuration is a service for centrally managing application settings and feature flags. It helps decouple configuration from code, enabling dynamic configuration updates without application redeployments. While App Configuration is valuable for managing application settings and can be integrated with CI/CD pipelines, it does not directly address the core requirement of staging new application versions and achieving zero-downtime swaps between versions.

Considering the explicit requirements for staging, testing, and zero-downtime deployment, deployment slots are the most direct and effective Azure App Service feature to meet these needs. They provide the necessary infrastructure to deploy a staging version, test it, and then swap it into production without downtime.
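
A brief Az PowerShell sketch of the slot workflow follows; the resource group, app, and slot names are placeholder assumptions.

```powershell
# Minimal sketch (assumed names): create a staging slot, then swap it into
# production once the new version has been validated. The swap is the
# zero-downtime step.
$rg  = 'rg-contoso-apps'   # placeholder resource group
$app = 'app1-webapp'       # placeholder App Service name

# One-time: create the staging slot alongside the production slot.
New-AzWebAppSlot -ResourceGroupName $rg -Name $app -Slot 'staging'

# After deploying and testing the new version in the staging slot:
Switch-AzWebAppSlot -ResourceGroupName $rg -Name $app `
    -SourceSlotName 'staging' -DestinationSlotName 'production'
```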

Final Answer: deployment slots

5
Q

What should you recommend to meet the monitoring requirements for App2?

Azure Application Insights
Container insights
Microsoft Sentinel
VM insights

A

The requirement is to monitor App2 to analyze transaction times without modifying the application code. App2 is a .NET application hosted in Azure App Service. Let’s evaluate each option:

Azure Application Insights: Application Insights is an Application Performance Monitoring (APM) service in Azure. It is designed specifically for web applications, including those hosted in Azure App Service. Application Insights can automatically instrument .NET applications running in App Service without requiring code changes through the use of the Application Insights Extension or Auto-Instrumentation. This feature automatically collects performance data, including request durations and transaction traces, which directly addresses the requirement to analyze transaction times.

Container insights: Container insights is a feature of Azure Monitor designed to monitor the performance and health of containerized workloads, primarily in Azure Kubernetes Service (AKS) and Azure Container Instances (ACI). Since App2 is hosted in Azure App Service (which is a PaaS service, not containers directly managed by the user), Container insights is not the appropriate monitoring solution for App2.

Microsoft Sentinel: Microsoft Sentinel is Azure’s cloud-native SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) solution. Sentinel is focused on security monitoring, threat detection, and incident response. While Sentinel can ingest data from various sources, including Azure Monitor logs (which could include Application Insights data), it is not primarily designed for application performance monitoring in the way that Application Insights is. Using Sentinel for this specific transaction monitoring requirement would be an indirect and overly complex approach compared to using Application Insights directly.

VM insights: VM insights is designed to monitor the performance and health of virtual machines and virtual machine scale sets. While Azure App Service instances run on virtual machines in the backend, VM insights focuses on monitoring the infrastructure level metrics of the VMs (CPU, memory, disk, network). It does not provide application-level transaction monitoring or analysis for applications running within App Service. VM insights is not the right tool to analyze application transaction times.

Considering the requirement for monitoring App2 transactions without code changes, and App2 being an App Service .NET application, Azure Application Insights is the most suitable and direct recommendation. It provides automatic instrumentation for App Service applications, enabling transaction monitoring without requiring any modifications to the application’s code.
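
As an illustrative sketch, codeless auto-instrumentation for a Windows App Service app is driven by app settings; the Az PowerShell fragment below shows the idea, with the resource names and connection string as placeholder assumptions (note that Set-AzWebApp replaces the whole app-settings collection, so existing settings are merged in first).

```powershell
# Minimal sketch (assumed names): enable Application Insights auto-instrumentation
# for a Windows .NET app in App Service without touching application code.
$rg  = 'rg-contoso-apps'   # placeholder resource group
$app = 'app2-webapp'       # placeholder App Service name

# Preserve existing settings, then add the Application Insights ones.
$webApp   = Get-AzWebApp -ResourceGroupName $rg -Name $app
$settings = @{}
foreach ($s in $webApp.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }

$settings['APPLICATIONINSIGHTS_CONNECTION_STRING']      = '<connection-string>'  # placeholder
$settings['ApplicationInsightsAgent_EXTENSION_VERSION'] = '~2'                   # Windows agent

Set-AzWebApp -ResourceGroupName $rg -Name $app -AppSettings $settings
```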

Final Answer: Azure Application Insights

6
Q

What should you recommend to meet the monitoring requirements for App2?

Microsoft Sentinel
Azure Application Insights
Container insights
VM insights

A

The requirement is to monitor App2 to analyze transaction times without requiring any changes to the application code. App2 is a .NET application hosted in Azure App Service.

Let’s evaluate each option again:

Microsoft Sentinel: Microsoft Sentinel is a cloud-native SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) solution. It is primarily focused on security monitoring, threat detection, and incident response. While Sentinel can ingest logs and metrics from various Azure services, it is not designed for application performance monitoring of transaction times in the way that APM tools are. It is not the appropriate service for this specific requirement.

Azure Application Insights: Azure Application Insights is an Application Performance Monitoring (APM) service in Azure. It is specifically designed for web applications and services, including those hosted in Azure App Service. A key feature of Application Insights is its ability to automatically instrument applications running in App Service without requiring changes to the application code. For .NET applications in App Service, you can enable the Application Insights Extension or Auto-Instrumentation. This automatically collects performance data, including request durations, dependencies, exceptions, and traces, which directly addresses the requirement to analyze transaction times within App2.

Container insights: Container insights is a feature of Azure Monitor that is designed to monitor the performance and health of containerized workloads, primarily in Azure Kubernetes Service (AKS) and Azure Container Instances (ACI). Since App2 is hosted in Azure App Service, which is a Platform-as-a-Service (PaaS) offering and not directly containerized by the user in the same way as AKS or ACI, Container insights is not the appropriate monitoring solution for App2.

VM insights: VM insights is a feature of Azure Monitor designed to monitor the performance and health of virtual machines and virtual machine scale sets. It collects data about the operating system and hardware metrics of VMs, such as CPU utilization, memory pressure, disk I/O, and network traffic. While App Service instances run on VMs in the backend, VM insights focuses on monitoring the infrastructure level metrics of these VMs, not the application-level transaction performance within App2. VM insights will not provide the detailed transaction timing analysis required for App2.

Considering the specific requirement of monitoring App2 transaction times without code changes for a .NET application in Azure App Service, Azure Application Insights is the most suitable and direct solution. It provides automatic instrumentation and is designed exactly for this type of application performance monitoring scenario.

Final Answer: Azure Application Insights

7
Q

Overview

An insurance company, HABInsurance, operates in three states and provides home, auto, and boat insurance. Besides the head office, HABInsurance has three regional offices.

Current environment

General

An insurance company, HABInsurance, operates in three states and provides home, auto, and boat insurance. Besides the head office, HABInsurance has three regional offices.

Technology assessment

The company has two Active Directory forests: main.habinsurance.com and region.habinsurance.com. HABInsurance's primary internal system is the Insurance Processing System (IPS). It is an ASP.NET/C# application running on IIS/Windows Servers hosted in a data center. IPS has three tiers: web, business logic API, and a datastore on the back end. The company uses Microsoft SQL Server and MongoDB for the back end. The system has two parts: Customer data and Insurance forms and documents. Customer data is stored in Microsoft SQL Server, and Insurance forms and documents are stored in MongoDB. The company also has 10 TB of Human Resources (HR) data stored on NAS at the head office location.

Requirements

General

HABInsurance plans to migrate its workloads to Azure. They purchased an Azure subscription.

Changes

During a transition period, HABInsurance wants to create a hybrid identity model along with a Microsoft Office 365 deployment. The company intends to sync its AD forests to Azure AD and benefit from Azure AD administrative units functionality.

HABInsurance needs to migrate the current IPSCustomers SQL database to a new fully managed SQL database in Azure that would be budget-oriented, balanced with scalable compute and storage options. The management team expects the Azure database service to scale the database resources dynamically with minimal downtime. The technical team proposes implementing a DTU-based purchasing model for the new database.

HABInsurance wants to migrate Insurance forms and documents to an Azure database service. HABInsurance plans to move the first two tiers of IPS to Azure without any modifications. The technology team discusses the possibility of running the IPS tiers on a set of virtual machine instances. The number of instances should be adjusted automatically based on CPU utilization. An SLA of 99.95% must be guaranteed for the compute infrastructure.

The company needs to move HR data to Azure File shares.

In their new Azure ecosystem, HABInsurance plans to use internal and third-party applications. The company considers adding user consent for data access to the registered applications.

Later, the technology team contemplates adding a customer self-service portal to IPS and deploying a new IPS to a multi-region AKS deployment. But the management team is worried about the performance and availability of multi-region AKS deployments during regional outages.

A company has an on-premises file server cbflserver that runs Windows Server 2019. Windows Admin Center manages this server. The company owns an Azure subscription. You need to provide an Azure solution to prevent data loss if the file server fails.

Solution: You decide to create an Azure Recovery Services vault. You then decide to install the Azure Backup agent and then schedule the backup.

Would this meet the requirement?

Yes
No

A

The requirement is to prevent data loss if the on-premises file server cbflserver running Windows Server 2019 fails. The proposed solution involves using Azure Recovery Services vault and the Azure Backup agent. Let’s break down why this solution is effective:

Azure Recovery Services Vault: Creating an Azure Recovery Services vault is the foundational step for setting up Azure Backup. The vault acts as a management container for backup and recovery points, and it handles the storage and management of backup data in Azure. This is the correct Azure service to use for backup purposes.

Azure Backup Agent: Installing the Azure Backup agent (also known as the MARS agent - Microsoft Azure Recovery Services agent) on the cbflserver is the correct approach for backing up files and folders from an on-premises Windows Server to Azure. This agent is specifically designed to communicate with the Azure Recovery Services vault and securely transfer backup data to Azure storage.

Scheduling Backup: Scheduling backups is essential for data protection. By scheduling backups, you ensure that data is regularly copied to Azure. In the event of a file server failure, you can restore the data from the latest backup stored in the Azure Recovery Services vault, thus preventing data loss.

By combining these three steps - creating a Recovery Services vault, installing the Azure Backup agent, and scheduling backups - you establish a functional backup system for the cbflserver. This system will create copies of the server’s data in Azure on a regular basis. If the cbflserver fails, the data can be restored from these backups, effectively preventing data loss.
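
The Azure-side pieces of that setup can be sketched with Az PowerShell as follows; the vault and resource group names are placeholder assumptions, and installing and scheduling the MARS agent itself still happens on the server using the downloaded installer and the vault credentials file.

```powershell
# Minimal sketch (assumed names): create the Recovery Services vault and download
# the vault credentials file used when registering the MARS (Azure Backup) agent
# on the on-premises server.
$rg    = 'rg-backup'        # placeholder resource group
$vault = 'rsv-cbflserver'   # placeholder vault name

New-AzResourceGroup -Name $rg -Location 'eastus' -Force
New-AzRecoveryServicesVault -Name $vault -ResourceGroupName $rg -Location 'eastus'

# Credentials file consumed by the MARS agent installer on cbflserver.
$rsv = Get-AzRecoveryServicesVault -ResourceGroupName $rg -Name $vault
Get-AzRecoveryServicesVaultSettingsFile -Vault $rsv -Backup -Path 'C:\Temp'
```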

Therefore, the proposed solution directly addresses the requirement of preventing data loss in case of file server failure.

Final Answer: Yes

8
Q

A company is planning on deploying an application onto Azure. The application will be based on the .NET Core programming language. The application will be hosted using Azure Web Apps. Below is part of the list of requirements for the application:

Give the ability to correlate Azure resource usage and the performance data with the actual application configuration and performance data

Give the ability to visualize the relationships between application components

Give the ability to track requests and exceptions to specific lines of code from within the application

Give the ability to analyse how users return to an application and see how often they only select a particular drop-down value

Which of the following services would be best suited for fulfilling the requirement of “Give the ability to correlate Azure resource usage and the performance data with the actual application configuration and performance data”?

Azure Application Insights
Azure Service Map
Azure Log Analytics
Azure Activity Log

A

The question specifically asks for a service that provides the ability to correlate Azure resource usage and performance data with application configuration and performance data. Let’s analyze each option in relation to this requirement:

Azure Application Insights: Azure Application Insights is an Application Performance Monitoring (APM) service designed for web applications and services. It excels at collecting and analyzing application performance data such as request rates, response times, exceptions, and dependencies. Critically, Application Insights also integrates with Azure Monitor metrics. This integration allows you to see Azure resource utilization (like CPU usage, memory consumption, etc. of the underlying App Service plan) alongside your application performance data within the same interface. Furthermore, Application Insights allows you to track custom properties and telemetry, which can include application configuration data if you choose to send it. Therefore, Application Insights directly facilitates the correlation of Azure resource usage and performance data with application configuration and performance data.

Azure Service Map: Azure Service Map automatically discovers application components and their dependencies, visualizing the relationships between servers, processes, and third-party services. While it provides a great visual representation of application architecture and dependencies, it is not primarily focused on correlating Azure resource usage metrics with detailed application performance and configuration data. Service Map is more about understanding the topology and connections within your application environment.

Azure Log Analytics: Azure Log Analytics is a powerful service for collecting and analyzing log and metric data from various sources across your Azure and on-premises environments. You could potentially use Log Analytics to collect both Azure resource logs (containing resource usage metrics) and application performance logs (which might include performance and configuration data). Then, you could write complex queries to try and correlate this data. However, this approach is more manual and requires significant configuration and query writing effort. Application Insights provides a more direct and out-of-the-box solution for this specific correlation requirement, especially for web applications hosted in Azure App Service.

Azure Activity Log: Azure Activity Log provides audit logs for operations performed on Azure resources. It records control plane operations like creating, updating, or deleting Azure resources. Activity Log is primarily for auditing and governance purposes, not for monitoring application performance or correlating resource usage with application configuration data. It does not contain the detailed performance metrics or application-level data needed for this requirement.

Considering the specific requirement to “correlate Azure resource usage and the performance data with the actual application configuration and performance data,” Azure Application Insights is the most directly and effectively suited service. It is designed for APM and has built-in features to integrate resource usage metrics with application performance telemetry, making correlation straightforward.

Final Answer: Azure Application Insights

9
Q

A company has an on-premises file server cbflserver that runs Windows Server 2019. Windows Admin Center manages this server. The company owns an Azure subscription. You need to provide an Azure solution to prevent data loss if the file server fails.

Solution: You decide to register Windows Admin Center in Azure and then configure Azure Backup.

Would this meet the requirement?

Yes
No

A

The requirement is to prevent data loss for an on-premises file server cbflserver running Windows Server 2019 in case of failure. The proposed solution is to register Windows Admin Center in Azure and then configure Azure Backup. Let’s analyze if this solution meets the requirement.

Registering Windows Admin Center in Azure: Windows Admin Center (WAC) is a browser-based management tool for Windows Servers. Registering Windows Admin Center in Azure connects your on-premises WAC instance to your Azure subscription. This provides several benefits, including:

Hybrid Management: Allows you to manage your on-premises servers from within the Azure portal.

Azure Service Integration: Enables easier integration and configuration of Azure services for your on-premises servers directly from the WAC interface.

Configuring Azure Backup: Azure Backup is a cloud-based backup service that is part of Azure Recovery Services. It is designed to backup data from various sources, including on-premises Windows Servers. By configuring Azure Backup for cbflserver, you will be able to create backups of the server’s data in Azure.

How Windows Admin Center facilitates Azure Backup:
Windows Admin Center provides a user-friendly interface to configure Azure Backup for servers it manages. When you register WAC in Azure and then use WAC to configure Azure Backup for cbflserver, it simplifies the process by:

Guiding you through the Azure Backup setup: WAC can help you create a Recovery Services vault in Azure if you don’t already have one.

Simplifying agent installation: WAC can assist in deploying the Azure Backup agent to cbflserver.

Providing a centralized management point: You can manage backups for cbflserver directly from the WAC interface, which is integrated with Azure.

Does this solution meet the requirement of preventing data loss?
Yes. By configuring Azure Backup for cbflserver, regardless of whether you initiate the configuration through Windows Admin Center or directly through the Azure portal, you are setting up a backup process that will store copies of your server’s data in Azure. In the event of a failure of the cbflserver, you can restore the data from the backups stored in Azure, thus preventing data loss.

Registering Windows Admin Center in Azure is not strictly necessary for Azure Backup to function. You can configure Azure Backup directly from the Azure portal or using PowerShell. However, using Windows Admin Center, especially when it’s already used for server management, simplifies the configuration and management of Azure Backup for on-premises servers.

Therefore, the solution of registering Windows Admin Center in Azure and then configuring Azure Backup is a valid and effective way to prevent data loss for the on-premises file server cbflserver.

Final Answer: Yes

10
Q

You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.

You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1.

The solution must meet the following requirements:

  • To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
  • If the manager does not verify access permission, automatically revoke that permission.
  • Minimize development effort.

What should you recommend?

In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
In Azure Active Directory (Azure AD), create an access review of Application1

A

The question asks for the best solution to verify if Fabrikam developers still require permissions to Application1, with specific requirements for monthly email notifications to managers, automatic revocation upon non-verification, and minimal development effort. Let’s evaluate each option against these requirements.

In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources: Azure AD Privileged Identity Management (PIM) is primarily used for managing, controlling, and monitoring access within an organization by enforcing just-in-time access for privileged roles. While PIM can manage role assignments, it is not inherently designed for periodic access reviews and automated revocations based on manager verification in the way described in the requirements. Creating a custom role assignment in PIM does not directly address the need for a monthly review and automatic revocation workflow.

Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet: This option involves using Azure Automation and PowerShell scripting. Get-AzureADUserAppRoleAssignment cmdlet can retrieve application role assignments in Azure AD. An Azure Automation runbook could be created to:

Run on a monthly schedule.

Use Get-AzureADUserAppRoleAssignment to list Fabrikam developers’ permissions to Application1.

Send an email to the managers with this list, requesting verification.

Implement logic to track responses and, if no response is received within a timeframe, use PowerShell cmdlets to revoke the permissions.
While technically feasible, this solution requires significant development effort to create the automation runbook, handle email notifications, track responses, and implement the revocation logic. It does not minimize development effort.

Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet: Get-AzureRmRoleAssignment (or its modern equivalent Get-AzRoleAssignment in Az PowerShell module) retrieves Azure Role-Based Access Control (RBAC) assignments at the resource level. Similar to the previous option, an Azure Automation runbook could be developed to retrieve RBAC assignments for Application1 resources, notify managers, and revoke permissions if not verified. This option also suffers from the same drawback: it requires considerable custom development effort to build the entire verification and revocation process within the runbook.

In Azure Active Directory (Azure AD), create an access review of Application1: Azure AD Access Reviews are a built-in feature in Azure AD Premium P2 (which the users have with Microsoft 365 E5 licenses) specifically designed for this type of access governance scenario. Azure AD Access Reviews provide a streamlined way to:

Define the scope of the review: In this case, access to Application1.

Select reviewers: Managers of the Fabrikam developers.

Set a review schedule: Monthly.

Configure automatic actions: Specifically, “Auto-apply results to resource” which can be set to “Remove access” if reviewers don’t respond or deny access.

Send notifications: Reviewers (managers) are automatically notified by email to perform the review.

Track review progress and results: Azure AD provides a dashboard to monitor the review process.

Azure AD Access Reviews directly address all the specified requirements with minimal configuration and essentially zero development effort. It is a built-in feature designed for access governance and periodic reviews, making it the most efficient and appropriate solution.

Final Answer: In Azure Active Directory (Azure AD), create an access review of Application1

11
Q

You have an Azure subscription. The subscription has a blob container that contains multiple blobs. Ten users in the finance department of your company plan to access the blobs during the month of April. You need to recommend a solution to enable access to the blobs during the month of April only.

Which security solution should you include in the recommendation?

shared access signatures (SAS)
access keys
conditional access policies
certificates

A

The requirement is to enable access to Azure Blob Storage for ten users in the finance department during the month of April only. Let’s examine each security solution in the context of this requirement:

Shared Access Signatures (SAS): Shared Access Signatures (SAS) are a powerful feature in Azure Storage that allows you to grant granular, time-bound, and restricted access to storage resources like blobs. You can create a SAS token with specific permissions (like read access) and set an expiry date (for example, April 30th). This SAS token can then be distributed to the ten finance users, allowing them access to the blobs only during April. After April 30th, the SAS token will expire, and access will be automatically revoked. SAS tokens are ideal for granting temporary access without sharing storage account keys.
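
As a rough Az PowerShell sketch, a container-level SAS with read and list permissions could be generated with an expiry at the end of April; the storage account, container, resource group, and dates are placeholder assumptions.

```powershell
# Minimal sketch (assumed names/dates): generate a SAS token for the 'finance-blobs'
# container that allows read and list access only until the end of April.
$key = (Get-AzStorageAccountKey -ResourceGroupName 'rg-finance' -Name 'stfinancedata')[0].Value
$ctx = New-AzStorageContext -StorageAccountName 'stfinancedata' -StorageAccountKey $key

New-AzStorageContainerSASToken -Name 'finance-blobs' `
    -Permission 'rl' `
    -StartTime (Get-Date '2025-04-01') `
    -ExpiryTime (Get-Date '2025-05-01') `
    -Context $ctx
```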

Access Keys: Storage account access keys provide full administrative access to the entire storage account. Sharing access keys is highly insecure and not recommended, especially for temporary access for multiple users. Access keys grant unrestricted access to all resources within the storage account, which is far more permission than needed for the finance department’s temporary blob access. Furthermore, access keys do not inherently provide a mechanism for time-limited access.

Conditional Access Policies: Conditional Access Policies in Azure Active Directory (Azure AD) are used to enforce organizational policies during authentication. They can control access based on various conditions like user location, device, application, and risk. While Conditional Access is excellent for enforcing broader security policies, it is not the right tool for granting time-limited access to specific storage resources for a group of users. Conditional Access is more about controlling who can access resources based on conditions, not for generating temporary access credentials with expiry dates for specific storage resources.

Certificates: Certificates are used for authentication and encryption. While client certificates can be used for authentication with Azure Storage, they are not designed for managing temporary access for multiple users in the way required. Managing and distributing certificates to ten users for temporary access would be complex and overkill compared to using SAS tokens. Certificates are more suitable for secure machine-to-machine communication or long-term authentication scenarios.

Considering the requirement for time-limited access (during April only) and the need to grant access to specific users (finance department) for blobs, Shared Access Signatures (SAS) is the most appropriate and recommended security solution. SAS tokens are specifically designed for this type of scenario, offering granular control over access permissions and expiry times, and minimizing security risks by avoiding the sharing of storage account keys.

Final Answer: shared access signatures (SAS)

12
Q

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.

You have an internal web app named WebApp1 that is hosted on-premises. WebApp1 uses Integrated Windows authentication.

Some users work remotely and do NOT have VPN access to the on-premises network.

You need to provide the remote users with single sign-on (SSO) access to WebApp1.

Which two features should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Azure AD Application Proxy
Azure AD Privileged Identity Management (PIM)
Conditional Access policies
Azure Arc
Azure AD enterprise applications
Azure Application Gateway

A

To provide remote users with single sign-on (SSO) access to an on-premises web application (WebApp1) that uses Integrated Windows Authentication (IWA), without VPN access, you should use the following two Azure AD features:

Azure AD Application Proxy

Azure AD enterprise applications

Here’s why these two features are the correct combination:

  1. Azure AD Application Proxy:

Purpose: Azure AD Application Proxy is specifically designed to publish on-premises web applications to remote users securely through Azure AD authentication. It acts as a reverse proxy, sitting between the internet and your on-premises application.

How it helps in this scenario:

Secure Remote Access without VPN: It eliminates the need for users to connect via VPN to access WebApp1. Remote users access the application through an external URL provided by Application Proxy.

SSO with Azure AD: Application Proxy integrates with Azure AD for authentication. Users authenticate with their Azure AD credentials.

Handles Integrated Windows Authentication (IWA): Application Proxy can be configured to handle the backend Integrated Windows Authentication required by WebApp1. It does this by using Kerberos Constrained Delegation (KCD) and a Connector agent installed on-premises. The Connector agent performs the IWA on behalf of the user within the on-premises network.

  2. Azure AD enterprise applications:

Purpose: Azure AD enterprise applications are the representation of applications within your Azure AD tenant. They are used to manage authentication and authorization for applications that you want to integrate with Azure AD.

How it helps in this scenario:

Application Registration: You need to register WebApp1 as an enterprise application in your Azure AD tenant. This registration allows Azure AD to understand and manage authentication for WebApp1.

Configuration for Application Proxy: When you set up Azure AD Application Proxy for WebApp1, you will configure it based on this enterprise application registration. The enterprise application defines the authentication methods, user assignments, and other settings for accessing WebApp1 through Application Proxy.

Why other options are not the primary solution:

Azure AD Privileged Identity Management (PIM): PIM is for managing, controlling, and monitoring privileged access to Azure resources and Azure AD roles. It’s not directly involved in providing SSO access to web applications for remote users.

Conditional Access policies: Conditional Access policies are used to enforce authentication requirements based on conditions (like location, device, risk level). While you can use Conditional Access to enhance the security of access to WebApp1 through Application Proxy, it’s not the feature that enables the SSO access in the first place. Conditional Access would be a secondary security layer, not the core solution for SSO.

Azure Arc: Azure Arc is for managing on-premises and multi-cloud infrastructure from Azure. It does not provide SSO capabilities for web applications.

Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and WAF for Azure-hosted web applications. It is not designed to provide reverse proxy and SSO for on-premises applications like Azure AD Application Proxy.

Therefore, the correct two features are Azure AD Application Proxy and Azure AD enterprise applications.
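
If a scripted setup is preferred, the legacy AzureAD PowerShell module exposes Application Proxy cmdlets; the sketch below uses placeholder names and URLs, and the connector group, Kerberos Constrained Delegation settings for Integrated Windows Authentication, and user assignment would still be configured separately.

```powershell
# Minimal sketch (assumed names/URLs): publish WebApp1 through Azure AD Application
# Proxy with Azure AD pre-authentication. Assumes the legacy AzureAD module and an
# Application Proxy connector already installed on-premises.
Connect-AzureAD

New-AzureADApplicationProxyApplication -DisplayName 'WebApp1' `
    -InternalUrl 'http://webapp1.contoso.local/' `
    -ExternalUrl 'https://webapp1-contoso.msappproxy.net/' `
    -ExternalAuthenticationType AadPreAuthentication
```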

Final Answer: Azure AD Application Proxy and Azure AD enterprise applications

13
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned membership. Group1 has 50 members, including 20 guest users.

You need to recommend a solution for evaluating the membership of Group1.

The solution must meet the following requirements:

  • The evaluation must be repeated automatically every three months.
  • Every member must be able to report whether they need to be in Group1.
  • Users who report that they do not need to be in Group1 must be removed from Group1 automatically.
  • Users who do not report whether they need to be in Group1 must be removed from Group1 automatically.

What should you include in the recommendation?

Implement Azure AD Identity Protection.
Change the Membership type of Group1 to Dynamic User.
Implement Azure AD Privileged Identity Management.
Create an access review.

A

The question requires a solution for evaluating and managing the membership of an Azure AD Security Group (Group1) with specific requirements for automation, self-attestation, and automatic removal. Let’s analyze each option:

Implement Azure AD Identity Protection: Azure AD Identity Protection is focused on security and risk management for user identities. It detects risky sign-ins and vulnerabilities, and helps to remediate them. It does not provide features for group membership reviews, self-attestation, or automated removal based on user feedback regarding group membership. Therefore, this option does not meet the requirements.

Change the Membership type of Group1 to Dynamic User: Dynamic User groups manage membership based on rules that are evaluated against user attributes. While this automates group membership management based on predefined rules, it does not address the requirements for periodic reviews, self-attestation, or automatic removal based on user feedback or lack of response. Dynamic groups are rule-driven, not review-driven. Therefore, this option does not meet the requirements.

Implement Azure AD Privileged Identity Management (PIM): Azure AD Privileged Identity Management is used to manage, control, and monitor privileged access to resources in Azure AD and Azure. While PIM can be used for group membership management, it is primarily focused on roles that grant elevated privileges and managing just-in-time access. It is not designed for general group membership reviews and self-attestation across a broad group like Group1. Although PIM has some review capabilities, it’s not the most appropriate tool for this scenario compared to Access Reviews.

Create an access review: Azure AD Access Reviews are specifically designed to manage and review access to groups, applications, and roles. Access Reviews can be configured to meet all the stated requirements:

Periodic Reviews: Access Reviews can be set up to run automatically on a recurring schedule, such as every three months.

Self-Attestation: Access Reviews can be configured to allow users to self-attest to their need for continued access to the group. In this case, members of Group1 can be reviewers and attest if they need to remain in the group.

Automatic Removal Based on User Report: Access Reviews can be configured to automatically remove users who, during the review process, indicate that they no longer need access to the group.

Automatic Removal for Non-Response: Access Reviews can be configured to automatically remove users who do not respond to the access review within a specified time period.

Azure AD Access Reviews directly address all the requirements of the question and are the intended feature for managing group memberships in this way.

Final Answer: Create an access review.

14
Q

HOTSPOT

You plan to deploy Azure Databricks to support a machine learning application. Data engineers will mount an Azure Data Lake Storage account to the Databricks file system. Permissions to folders are granted directly to the data engineers.

You need to recommend a design for the planned Databricks deployment.

The solution must meet the following requirements:

✑ Ensure that the data engineers can only access folders to which they have permissions.

✑ Minimize development effort.

✑ Minimize costs.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Databricks SKU:
Premium
Standard
Cluster configuration:
Credential passthrough
Managed identities
MLflow
A runtime that contains Photon
Secret scope

A

Databricks SKU: Premium

Requirement: Ensure that data engineers can only access folders to which they have permissions.

Explanation: Premium SKU is required to enable credential passthrough. Credential passthrough allows Databricks clusters to leverage the Azure Active Directory identity of the user submitting queries to access Azure Data Lake Storage (ADLS). This means that Databricks will use the data engineer’s own Azure AD credentials to authenticate and authorize access to ADLS. If the data engineer has permissions to a specific folder in ADLS, they can access it through Databricks; otherwise, they will be denied access. Standard SKU does not support credential passthrough for ADLS Gen2.

Cluster configuration: Credential passthrough

Requirement: Ensure that data engineers can only access folders to which they have permissions.

Explanation: Credential passthrough is the key feature that directly addresses the requirement of granular access control based on user permissions in ADLS. When credential passthrough is enabled on a Databricks cluster, the identity of the user running a job is passed through to ADLS. ADLS then uses its own access control mechanisms (like ACLs or RBAC) to determine if the user has permission to access the requested data. This directly ensures that data engineers can only access folders they are permitted to access.

Why other options are not the best fit or incorrect:

Standard Databricks SKU: Standard SKU does not support credential passthrough for Azure Data Lake Storage Gen2, which is essential for enforcing user-level permissions on folders in ADLS as described in the scenario.

Managed identities: While managed identities are a secure way for Azure resources to authenticate to other Azure services, they do not directly address the requirement of individual data engineers accessing data based on their own permissions. Managed identities would require granting permissions to the Databricks cluster’s managed identity, not to individual data engineers. This would mean all users of the cluster would have the same level of access, which contradicts the requirement of granular user-based permissions.

MLflow: MLflow is a platform for managing the machine learning lifecycle. It’s not directly related to data access control or minimizing costs in the context of storage access permissions. While useful for ML projects, it doesn’t contribute to solving the specific requirements outlined.

A runtime that contains Photon: Photon is a high-performance query engine optimized for Databricks. While it can improve performance and potentially reduce costs in the long run by running jobs faster, it is not directly related to data access control or minimizing development effort in the context of setting up permissions. Choosing a runtime with or without Photon does not address the core security and access control requirements.

Secret scope: Secret scopes are used to securely store and manage secrets (like passwords, API keys, etc.) in Databricks. While important for security in general, secret scopes are not directly related to the requirement of user-based folder permissions in ADLS. They are more relevant for managing credentials used by the Databricks cluster itself, not for enforcing user-level data access control using Azure AD identities.

Minimizing Development Effort & Costs:

Credential passthrough minimizes development effort because it leverages the existing Azure AD and ADLS permissions model. No custom access control mechanisms need to be developed within Databricks.

Standard runtime is generally less costly than Photon if performance gains are not a primary driver.

Choosing the Premium SKU is necessary for credential passthrough, even though it’s more expensive than Standard, because it’s the only way to meet the core security requirement of user-based folder permissions with minimal development effort. Trying to implement a custom permission system with Standard SKU and Managed Identities would be significantly more complex and potentially more costly in development time.

Therefore, the optimal solution to meet all requirements with minimal development effort and cost-effectiveness, while ensuring secure user-based access to folders in ADLS, is to choose Premium Databricks SKU and configure the cluster with Credential passthrough.
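For reference, here is a minimal sketch of what an ADLS Gen2 mount with credential passthrough can look like in a Databricks notebook, based on the pattern commonly shown in the Databricks documentation. The storage account, container, and mount point names are illustrative, and the cluster must be a Premium-tier cluster with Azure AD credential passthrough enabled.

```python
# Sketch only: mounting ADLS Gen2 with Azure AD credential passthrough from a
# Databricks notebook (Premium SKU cluster with passthrough enabled).
# Storage account, container, and mount point names are illustrative.
configs = {
    "fs.azure.account.auth.type": "CustomAccessToken",
    "fs.azure.account.custom.token.provider.class":
        spark.conf.get("spark.databricks.passthrough.adls.gen2.tokenProviderClassName"),
}

# dbutils and spark are provided by the Databricks notebook runtime.
dbutils.fs.mount(
    source="abfss://datalake@contosoadls.dfs.core.windows.net/",
    mount_point="/mnt/datalake",
    extra_configs=configs,
)

# Each engineer's own Azure AD identity is used for the read below, so the
# ADLS POSIX ACLs decide whether the folder is accessible to that engineer.
display(dbutils.fs.ls("/mnt/datalake/projects/engineering"))
```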

Final Answer:

Databricks SKU: Premium
Cluster configuration: Credential passthrough

15
Q

HOTSPOT

You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication.

App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD.

You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
The users can connect to App1 without
being prompted for authentication:
The users can access App1 only from
company-owned computers:
An Azure AD app registration
An Azure AD managed identity
Azure AD Application Proxy
A conditional access policy
An Azure AD administrative unit
Azure Application Gateway
Azure Blueprints
Azure Policy

A

The users can connect to App1 without being prompted for authentication: An Azure AD app registration

Explanation: To enable Azure AD authentication for App1, you must first register App1 as an application in Azure AD. This app registration establishes a trust relationship between App1 and Azure AD, allowing Azure AD to authenticate users for App1.

Why it enables SSO (Single Sign-On): When a user on an Azure AD joined Windows 10 computer attempts to access App1, and App1 is configured for Azure AD authentication, the web browser on the user’s machine can automatically pass the user’s existing Azure AD credentials to App1’s authentication request. This happens seamlessly in the background because the user is already logged into Azure AD on their Windows 10 machine. App registration is the fundamental step to enable this authentication flow, which leads to SSO in this scenario.

Why other options are not suitable for SSO in this context:

Azure AD managed identity: Managed identities are for Azure resources (like App1 itself) to authenticate to other Azure services, not for user authentication to App1.

Azure AD Application Proxy: Application Proxy is for publishing on-premises web applications to the internet via Azure AD. App1 is already an Azure web app and internet-facing, so Application Proxy is not needed for basic internet access or SSO for it.

A conditional access policy: Conditional access policies enforce conditions after authentication. While they can contribute to a better user experience, they are not the primary mechanism for enabling SSO itself.

An Azure AD administrative unit: Administrative units are for organizational management and delegation within Azure AD, not related to authentication flows or SSO.

Azure Application Gateway: Application Gateway is a web traffic load balancer and WAF. It doesn’t directly handle Azure AD authentication or SSO in this context.

Azure Blueprints & Azure Policy: These are for resource deployment and governance, not related to application authentication or SSO.

The users can access App1 only from company-owned computers: A conditional access policy

Explanation: Azure AD Conditional Access policies are specifically designed to enforce access controls based on various conditions, including device state. You can create a Conditional Access policy that targets App1 and requires devices to be marked as “compliant” or “hybrid Azure AD joined” to grant access.

How it works for company-owned computers: For Windows 10 computers joined to Azure AD, you can configure them to be either Hybrid Azure AD joined (if also domain-joined to on-premises AD) or simply Azure AD joined and managed by Intune (or other MDM). You can then use Conditional Access to require that devices accessing App1 are either Hybrid Azure AD joined or marked as compliant by Intune. This effectively restricts access to only company-managed and compliant devices, which are considered “company-owned” in this context.

Why other options are not suitable for device-based access control:

An Azure AD app registration: App registration is necessary for authentication but doesn’t enforce device-based restrictions.

Azure AD managed identity: Irrelevant to device-based access control for users.

Azure AD Application Proxy: Not relevant to device-based access control for Azure web apps.

An Azure AD administrative unit: Not relevant to device-based access control.

Azure Application Gateway, Azure Blueprints, Azure Policy: These are not directly designed for enforcing device-based access control for Azure AD authenticated applications.

Therefore, the most appropriate recommendations are:

The users can connect to App1 without being prompted for authentication: An Azure AD app registration

The users can access App1 only from company-owned computers: A conditional access policy
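To make the single sign-on part concrete, the sketch below uses the MSAL library for Python to show how a client of the registered application can reuse an existing Azure AD session and only fall back to an interactive prompt when no cached account is available. The client ID, tenant ID, and scope are placeholders; App1 itself would normally rely on the web app's built-in OpenID Connect sign-in rather than this console-style flow.

```python
# Illustrative only: acquiring a token for a registered application with MSAL,
# reusing an existing Azure AD session where possible. Client ID, tenant, and
# scope values are placeholders.
import msal

app = msal.PublicClientApplication(
    client_id="<app registration client ID>",
    authority="https://login.microsoftonline.com/<tenant ID>",
)

scopes = ["api://<app registration client ID>/user_impersonation"]  # placeholder scope

result = None
accounts = app.get_accounts()
if accounts:
    # Silent acquisition succeeds when a cached Azure AD session exists,
    # which gives the "no prompt" experience on an Azure AD-joined device.
    result = app.acquire_token_silent(scopes, account=accounts[0])

if not result:
    result = app.acquire_token_interactive(scopes)

print("access token acquired" if "access_token" in result else result.get("error_description"))
```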

Final Answer:

The users can connect to App1 without
being prompted for authentication: An Azure AD app registration
The users can access App1 only from
company-owned computers: A conditional access policy

16
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is being deployed and configured for on-premises to Azure connectivity.

Several virtual machines exhibit network connectivity issues.

You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines.

Solution: Use Azure Traffic Analytics in Azure Network Watcher to analyze the network traffic.

Does this meet the goal?

Yes
No

A

The goal is to analyze network traffic to identify whether packets are being allowed or denied to virtual machines in a hybrid environment (on-premises and Azure connected via ExpressRoute). The proposed solution is to use Azure Traffic Analytics in Azure Network Watcher.

Let’s evaluate if Azure Traffic Analytics meets this goal:

Azure Traffic Analytics:

Functionality: Azure Traffic Analytics analyzes Network Security Group (NSG) flow logs to provide insights into network traffic in Azure. It helps visualize traffic patterns, identify security threats, and pinpoint network misconfigurations.

Scope: Traffic Analytics is focused on analyzing network traffic within Azure. It primarily works with Azure network resources like NSGs, Azure Firewalls, and Virtual Network Gateways.

Data Source: It relies on logs generated by Azure network components.

Hybrid Environment and ExpressRoute:

ExpressRoute Connectivity: ExpressRoute provides a private connection between on-premises networks and Azure.

Network Traffic Flow: Traffic flows between on-premises VMs and Azure VMs through the ExpressRoute connection.

On-premises VMs Visibility: Azure Traffic Analytics does not have direct visibility into the network traffic of on-premises virtual machines. It cannot analyze NSG flow logs or Azure Firewall logs for on-premises resources because these logs are generated by Azure network security components, which are not directly involved in securing on-premises networks.

Analyzing Network Connectivity Issues:

Azure VM Issues: For VMs in Azure that are protected by NSGs or Azure Firewall, Traffic Analytics can be helpful to understand if traffic is being allowed or denied by these Azure security components.

On-premises VM Issues: For VMs located on-premises, Azure Traffic Analytics is not directly applicable. Network connectivity issues for on-premises VMs would need to be analyzed using on-premises network monitoring tools and firewall logs.

Conclusion:

Azure Traffic Analytics is a valuable tool for analyzing network traffic and identifying allowed/denied packets within Azure.

However, it is not designed to analyze network traffic for on-premises virtual machines, even when they are connected to Azure via ExpressRoute. It lacks visibility into the on-premises network infrastructure.

Therefore, using Azure Traffic Analytics alone is insufficient to meet the goal of analyzing network traffic for all virtual machines (both on-premises and Azure) exhibiting network connectivity issues in this hybrid scenario. It will only provide insights into the Azure-side network traffic.

Final Answer: No

Why No is the correct answer: Azure Traffic Analytics is limited to analyzing network traffic within the Azure environment based on Azure network component logs (NSGs, Azure Firewall, etc.). It does not have visibility into on-premises network traffic, even when connected to Azure via ExpressRoute. Since the scenario involves VMs both on-premises and in Azure, and the need is to analyze network traffic to identify allowed/denied packets for all VMs, Azure Traffic Analytics by itself is not a sufficient solution. It can help with Azure VMs but not on-premises VMs.

17
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.

Several VMs are exhibiting network connectivity issues.

You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.

Solution: Use the Azure Advisor to analyze the network traffic.

Does the solution meet the goal?

Yes
No

A

The goal is to analyze network traffic to determine whether packets are being allowed or denied to VMs in a hybrid environment. The proposed solution is to use Azure Advisor.

Let’s evaluate if Azure Advisor is suitable for this task:

Azure Advisor’s Purpose: Azure Advisor is a service in Azure that provides recommendations on how to optimize your Azure deployments for cost, security, reliability, operational excellence, and performance. It analyzes your Azure resource configurations and usage telemetry.

Azure Advisor’s Capabilities Related to Networking: Azure Advisor can provide recommendations related to networking, such as:

Security Recommendations: Suggesting improvements to Network Security Groups (NSGs) to enhance security, like closing exposed ports or recommending the use of Azure Firewall.

Performance Recommendations: Identifying potential network bottlenecks or underutilized network resources.

Cost Optimization: Identifying potential cost savings in network configurations.

Reliability: Recommending configurations for better network resilience.

Limitations of Azure Advisor for Network Traffic Analysis:

Not a Packet-Level Analyzer: Azure Advisor does not perform real-time or detailed packet-level network traffic analysis. It does not capture network packets or analyze packet headers to determine if packets are being allowed or denied by network security rules.

Recommendation-Based, Not Diagnostic: Azure Advisor provides recommendations based on configuration and usage patterns. It’s not a diagnostic tool to troubleshoot specific network connectivity issues by analyzing traffic flow in real-time or near real-time.

Focus on Azure Resources: Azure Advisor primarily focuses on Azure resources and their configurations. It does not have direct visibility into on-premises network traffic or detailed configurations of on-premises network devices.

Analyzing Network Connectivity Issues: To determine if packets are being allowed or denied, you need tools that can inspect network traffic flows, such as:

Network Watcher (Packet Capture, NSG Flow Logs, Connection Troubleshoot): These tools in Azure Network Watcher are designed for diagnosing network connectivity issues by capturing packets, analyzing NSG rule hits, and testing connectivity.

Network Monitoring Tools (e.g., Wireshark, tcpdump): These tools can capture and analyze network traffic at the packet level on both on-premises and Azure VMs (if installed and configured appropriately).

Firewall Logs: Analyzing logs from firewalls (Azure Firewall or on-premises firewalls) can show which traffic is being allowed or denied based on firewall rules.

Conclusion: Azure Advisor is a valuable tool for getting recommendations to improve your Azure environment, including some aspects of networking. However, it is not designed for or capable of analyzing network traffic at the packet level to determine if packets are being allowed or denied. It’s not a network traffic analysis tool in the sense required to troubleshoot network connectivity issues at a detailed level.

Final Answer: No

Explanation: Azure Advisor is not designed for real-time or packet-level network traffic analysis. It provides recommendations based on configuration and usage patterns but does not have the capability to analyze network traffic flows to determine if packets are being allowed or denied. To achieve the goal of analyzing network traffic for allowed/denied packets, tools like Azure Network Watcher (Packet Capture, NSG Flow Logs) or traditional network monitoring tools are required, not Azure Advisor.

18
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company has deployed several virtual machines (VMs) on-premises and to Azure. Azure ExpressRoute has been deployed and configured for on-premises to Azure connectivity.

Several VMs are exhibiting network connectivity issues.

You need to analyze the network traffic to determine whether packets are being allowed or denied to the VMs.

Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic.

Does the solution meet the goal?

Yes
No

A

The goal is to analyze network traffic to determine if packets are being allowed or denied to VMs in a hybrid environment. The proposed solution is to use Azure Network Watcher’s IP flow verify.

Let’s analyze if Azure Network Watcher’s IP flow verify is suitable for this goal:

Azure Network Watcher IP Flow Verify: This tool allows you to specify a source and destination IP address, port, and protocol, and then it checks the configured Network Security Groups (NSGs) and Azure Firewall rules in Azure to determine if the traffic would be allowed or denied.

How it helps in the hybrid scenario:

Azure VMs: For VMs in Azure, IP flow verify is directly applicable. You can use it to check if NSGs or Azure Firewall rules are blocking traffic to or from these VMs. This is crucial for diagnosing connectivity issues related to Azure network security configurations.

On-premises VMs communicating with Azure VMs: When on-premises VMs are experiencing connectivity issues with Azure VMs, IP flow verify can be used to check the Azure side of the connection. You can test if traffic from the on-premises VM’s IP range (or a representative IP) to the Azure VM is being blocked by Azure NSGs or Azure Firewall. This helps isolate whether the problem lies within Azure’s network security rules. While it doesn’t directly analyze on-premises firewalls or network configurations, it can pinpoint if the block is happening at the Azure perimeter.

Limitations: IP flow verify is primarily focused on the Azure network security layer (NSGs and Azure Firewall). It does not analyze on-premises firewalls, routers, or network configurations. Therefore, it will not provide a complete picture of the entire network path from on-premises to Azure.

Does it meet the goal? Yes, in part. IP flow verify does directly address the need to analyze network traffic to determine if packets are being allowed or denied, specifically in the context of Azure network security. For the Azure side of the hybrid connection, and for understanding if Azure NSGs or Firewall are causing the issues, IP flow verify is a valuable and relevant tool. While it doesn’t cover the on-premises network completely, it’s a significant step in diagnosing network connectivity problems in a hybrid environment, especially when Azure resources are involved in the communication path.

Considering the question asks “Does the solution meet the goal?”, and IP flow verify is a tool to analyze network traffic for allow/deny rules (within the Azure context which is part of the hybrid environment), the answer is Yes. It provides a mechanism to analyze a portion of the network path and identify potential packet blocking due to Azure security rules. It’s not a complete end-to-end hybrid solution, but it directly addresses the core requirement within the scope of Azure networking, which is relevant to the overall hybrid connectivity scenario.
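For context, IP flow verify can be run from the azure-mgmt-network SDK as well as from the portal. The sketch below checks whether inbound TCP traffic from an on-premises address would reach an Azure VM; the subscription, resource group, VM and Network Watcher names, and IP addresses are illustrative, and operation names can differ slightly between SDK versions.

```python
# Sketch: checking whether inbound TCP traffic from an on-premises address
# would be allowed to reach an Azure VM, using Network Watcher IP flow verify.
# Resource names, IDs, and addresses are illustrative.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

subscription_id = "<subscription ID>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

params = VerificationIPFlowParameters(
    target_resource_id=(
        f"/subscriptions/{subscription_id}/resourceGroups/RG1"
        "/providers/Microsoft.Compute/virtualMachines/VM1"
    ),
    direction="Inbound",
    protocol="TCP",
    local_ip_address="10.1.0.4",        # the Azure VM's NIC address
    local_port="3389",
    remote_ip_address="192.168.10.20",  # the on-premises source
    remote_port="50000",
)

result = client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG", "NetworkWatcher_westus", params
).result()

print(result.access, result.rule_name)  # e.g. Deny securityRules/DenyRDP
```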

Final Answer: Yes

19
Q

DRAG DROP

You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016 and Linux.

You need to use Azure Log Analytics to design an alerting strategy for security-related events.

Which Log Analytics tables should you query? To answer, drag the appropriate tables to the correct log types. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Tables
AzureActivity
AzureDiagnostics
Event
Syslog

Answer Area
Events from Linux system logging: Table
Events from Windows event logs: Table

A

To design an alerting strategy for security-related events using Azure Log Analytics for both Windows and Linux VMs, you need to query the tables that specifically store operating system level logs, especially security logs.

Let’s analyze each table and determine its purpose:

AzureActivity: This table stores Azure subscription activity logs. These logs provide insights into the operations performed on Azure resources at the subscription level. While it may contain some security-related activities like changes to security configurations in Azure, it is not the primary source for OS-level security events from within the VMs.

AzureDiagnostics: This table stores diagnostic logs for various Azure services and resources. For Virtual Machines, Azure Diagnostics can collect guest OS logs and performance metrics. However, by default, it might not be configured to collect detailed security event logs. You would need to specifically configure Azure Diagnostics to collect Windows Security Events or Linux Security logs and send them to this table, which is less common for standard security event monitoring.

Event: This table is specifically designed to store Windows Event Logs collected from Windows VMs. Windows Security Events are a critical source of security-related information in Windows environments. Therefore, the Event table is the correct table to query for security events from Windows VMs.

Syslog: This table is specifically designed to store Syslog messages collected from Linux VMs. Syslog is the standard logging facility in Linux systems, and security-related events are often logged via Syslog. Therefore, the Syslog table is the correct table to query for security events from Linux VMs.

Based on this understanding:

Events from Linux system logging: The appropriate table is Syslog.

Events from Windows event logs: The appropriate table is Event.

Answer Area:

Events from Linux system logging: Table Syslog
Events from Windows event logs: Table Event
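As a brief illustration, the sketch below queries both tables in a Log Analytics workspace with the azure-monitor-query client library. The workspace ID is a placeholder and the KQL strings are intentionally simple examples of targeting each table.

```python
# Sketch: querying the Event and Syslog tables in a Log Analytics workspace
# with the azure-monitor-query library. The workspace ID is a placeholder and
# the queries are simple examples of targeting each table.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
workspace_id = "<Log Analytics workspace ID>"

queries = {
    "Windows event log entries": "Event | where EventLevelName == 'Error' | take 10",
    "Linux syslog entries": "Syslog | where Facility in ('auth', 'authpriv') | take 10",
}

for label, kql in queries.items():
    response = client.query_workspace(workspace_id, kql, timespan=timedelta(hours=24))
    for table in response.tables:
        print(label, "-", len(table.rows), "rows returned")
```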

20
Q

You are designing a large Azure environment that will contain many subscriptions.

You plan to use Azure Policy as part of a governance solution.

To which three scopes can you assign Azure Policy definitions? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

management groups
subscriptions
Azure Active Directory (Azure AD) tenants
resource groups
Azure Active Directory (Azure AD) administrative units
compute resources

A

Azure Policy is a service in Azure that enables you to create, assign, and manage policies that enforce different rules and effects over your resources. These policies help you stay compliant with your corporate standards and service level agreements. A key aspect of Azure Policy is understanding the scope at which policies can be applied. Scope determines the resources to which the policy will be enforced.

Let’s examine each option and determine if it’s a valid scope for Azure Policy assignment:

management groups: Correct. Management groups are containers for managing access, policy, and compliance across multiple Azure subscriptions. Azure Policy can be assigned at the management group level. Policies assigned at this level apply to all subscriptions within that management group and all resource groups and resources within those subscriptions. This is useful for enforcing organization-wide policies.

subscriptions: Correct. Subscriptions are a fundamental unit in Azure and represent a logical container for your resources. Azure Policy can be assigned at the subscription level. Policies assigned at this level apply to all resource groups and resources within that subscription. This is a common scope for enforcing policies specific to a project, department, or environment represented by a subscription.

Azure Active Directory (Azure AD) tenants: Incorrect. While Azure Policy is managed and integrated within the Azure AD tenant, the Azure AD tenant itself is not a direct scope for assigning Azure Policy definitions in the context of resource governance. Azure Policy is primarily concerned with the governance of Azure resources within subscriptions and management groups. While policies can interact with Azure AD in terms of identity and access management, the scope of policy assignment for resource governance is not the Azure AD tenant itself.

resource groups: Correct. Resource groups are logical containers for Azure resources within a subscription. Azure Policy can be assigned at the resource group level. Policies assigned at this level apply only to the resources within that specific resource group. This allows for very granular policy enforcement, tailored to specific applications or workloads within a resource group.

Azure Active Directory (Azure AD) administrative units: Incorrect. Azure AD administrative units are used for delegated administration within Azure AD. They allow you to grant administrative permissions to a subset of users and groups within your Azure AD organization. While they are related to Azure AD and management, they are not scopes for Azure Policy definitions in the context of Azure resource governance. Azure Policy focuses on the Azure resource hierarchy (management groups, subscriptions, resource groups).

compute resources: Incorrect. Compute resources, such as virtual machines, virtual machine scale sets, or Azure Kubernetes Service clusters, are individual Azure resources. While Azure Policy effects can be applied to compute resources to control their configuration and behavior, you do not directly assign Azure Policy definitions to individual compute resources as a scope. Policy definitions are assigned at the container levels (management groups, subscriptions, resource groups), and then they apply to the resources within those containers, including compute resources.

Therefore, the three correct scopes for assigning Azure Policy definitions are:

management groups

subscriptions

resource groups

Final Answer:

management groups
subscriptions
resource groups
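As a quick reference, the three valid assignment scopes correspond to the following Azure resource ID formats; the management group name, subscription GUID, and resource group name below are illustrative.

```python
# The three valid Azure Policy assignment scopes expressed as resource ID
# strings. Names (mg-litware, the subscription GUID, RG1) are illustrative.
subscription_id = "00000000-0000-0000-0000-000000000000"

policy_scopes = {
    "management group": "/providers/Microsoft.Management/managementGroups/mg-litware",
    "subscription": f"/subscriptions/{subscription_id}",
    "resource group": f"/subscriptions/{subscription_id}/resourceGroups/RG1",
}

for level, scope in policy_scopes.items():
    print(f"{level:17} -> {scope}")
```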

21
Q

DRAG DROP

Your on-premises network contains a server named Server1 that runs an ASP.NET application named App1.

You have a hybrid deployment of Azure Active Directory (Azure AD).

You need to recommend a solution to ensure that users sign in by using their Azure AD account and Azure Multi-Factor Authentication (MFA) when they connect to App1 from the internet.

Which three Azure services should you recommend be deployed and configured in sequence? To answer, move the appropriate services from the list of services to the answer area and arrange them in the correct order.

Services
an internal Azure Load Balancer
an Azure AD conditional access policy
Azure AD Application Proxy
an Azure AD managed identity
a public Azure Load Balancer
an Azure AD enterprise application
an App Service plan
Answer Area

A

Answer Area

Azure AD enterprise application

Azure AD Application Proxy

an Azure AD conditional access policy

Explanation:

Here’s the step-by-step rationale for the recommended sequence:

Azure AD enterprise application:

Reason: Before you can use Azure AD to manage authentication and access to App1, you must first register App1 as an application within your Azure AD tenant. This is done by creating an Azure AD enterprise application.

Function: Registering App1 as an enterprise application establishes an identity for App1 in Azure AD. This identity is crucial for Azure AD to understand that it needs to manage authentication for requests directed to App1. It also allows you to configure settings specific to App1, such as authentication methods and Conditional Access policies.

Azure AD Application Proxy:

Reason: Azure AD Application Proxy is the core service that enables secure remote access to on-premises web applications like App1 using Azure AD authentication.

Function:

Publishing to the Internet: Application Proxy publishes App1 to the internet through a public endpoint. Users access App1 via this public endpoint.

Reverse Proxy: It acts as a reverse proxy, intercepting user requests to App1 from the internet.

Azure AD Authentication Gateway: It handles the Azure AD authentication process. When a user accesses the Application Proxy endpoint, they are redirected to Azure AD for sign-in.

Secure Connection to On-premises: After successful Azure AD authentication, Application Proxy securely connects to Server1 (where App1 is hosted) on your on-premises network using an outbound connection from the Application Proxy connector.

an Azure AD conditional access policy:

Reason: To enforce Azure Multi-Factor Authentication (MFA) specifically when users access App1 from the internet, you need to configure an Azure AD Conditional Access policy.

Function:

Policy Enforcement: Conditional Access policies allow you to define conditions under which users can access specific applications.

MFA Requirement: You create a Conditional Access policy that targets the Azure AD enterprise application representing App1. Within this policy, you specify that MFA is required for users accessing App1, especially when accessing from outside the corporate network (which is implied when accessing from the internet).

Granular Control: Conditional Access provides granular control over access based on user, location, device, application, and risk signals.

Why other options are not in the sequence or not used:

an internal Azure Load Balancer / a public Azure Load Balancer: While load balancers are important in many architectures, they are not directly part of the core sequence for enabling Azure AD authentication and MFA for an on-premises app via Application Proxy in this basic scenario. Application Proxy itself handles the initial internet-facing endpoint. Load balancers could be relevant for scaling the application behind Server1 on-premises, but not for the core authentication and publishing flow using Application Proxy.

an Azure AD managed identity: Managed identities are used for Azure resources to authenticate to other Azure services. They are not relevant for user authentication to an on-premises application via Application Proxy.

an App Service plan: App Service plans are for hosting Azure App Services (PaaS). App1 is an on-premises application, not an Azure App Service, so App Service Plan is not needed.

Correct Sequence and Justification Summary:

The sequence Azure AD enterprise application -> Azure AD Application Proxy -> Azure AD conditional access policy is the correct order because it represents the logical flow of setting up Azure AD authentication and MFA for an on-premises application:

Register the Application: First, you must register App1 in Azure AD as an enterprise application.

Publish via Application Proxy: Then, you use Azure AD Application Proxy to publish App1 to the internet and handle the initial authentication handshake with Azure AD.

Enforce MFA: Finally, you create a Conditional Access policy to enforce MFA for access to App1, ensuring enhanced security.

22
Q

You have 100 servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2012 R2 instances.

The instances host databases that have the following characteristics:

✑ The largest database is currently 3 TB. None of the databases will ever exceed 4 TB.

✑ Stored procedures are implemented by using CLR.

You plan to move all the data from SQL Server to Azure.

You need to recommend an Azure service to host the databases.

The solution must meet the following requirements:

✑ Whenever possible, minimize management overhead for the migrated databases.

✑ Minimize the number of database changes required to facilitate the migration.

✑ Ensure that users can authenticate by using their Active Directory credentials.

What should you include in the recommendation?

Azure SQL Database single databases
Azure SQL Database Managed Instance
Azure SQL Database elastic pools
SQL Server 2016 on Azure virtual machines

A

Let’s analyze each option based on the requirements:

  1. Azure SQL Database single databases

Minimize management overhead: Azure SQL Database single databases are a Platform-as-a-Service (PaaS) offering. Microsoft manages most of the underlying infrastructure, including patching, backups, and high availability. This significantly minimizes management overhead compared to managing SQL Server on VMs.

Minimize database changes: Migration to Azure SQL Database single databases might require some database changes. While the T-SQL surface area is largely compatible, there can be differences in features and behaviors. Specifically, CLR support in single databases has some limitations compared to on-premises SQL Server or Managed Instance. You might need to review and potentially adjust your CLR stored procedures.

Ensure Active Directory authentication: Azure SQL Database single databases support Azure Active Directory (Azure AD) authentication, allowing users to authenticate using their Active Directory credentials.

  2. Azure SQL Database Managed Instance

Minimize management overhead: Azure SQL Database Managed Instance is also a PaaS offering, but it provides more features and control compared to single databases, resembling a traditional SQL Server instance. Microsoft still manages the underlying infrastructure, reducing management overhead compared to VMs, although slightly more than single databases due to the instance-level management capabilities.

Minimize database changes: Azure SQL Database Managed Instance is designed for near 100% compatibility with on-premises SQL Server, including feature parity for SQL Server 2012 R2 and later. This includes full CLR support with fewer restrictions than single databases. Migration to Managed Instance generally requires minimal database changes, making it ideal for applications with complex dependencies or features like CLR.

Ensure Active Directory authentication: Azure SQL Database Managed Instance fully supports Azure Active Directory (Azure AD) authentication and integration, allowing users to use their Active Directory credentials.

  3. Azure SQL Database elastic pools

Azure SQL Database elastic pools are a deployment method for Azure SQL Database single databases, not a separate service tier with different capabilities. They are used to cost-effectively manage and scale multiple single databases that have variable usage patterns. The characteristics regarding management overhead, database changes, and AD authentication are the same as for Azure SQL Database single databases. Therefore, this option doesn’t fundamentally change the analysis compared to single databases.

  4. SQL Server 2016 on Azure virtual machines

Minimize management overhead: SQL Server on Azure virtual machines is an Infrastructure-as-a-Service (IaaS) offering. You have full control over the SQL Server and operating system, but you are also responsible for managing everything, including OS patching, SQL Server patching, backups, high availability configuration, and infrastructure maintenance. This option has the highest management overhead.

Minimize database changes: Migrating to SQL Server on Azure VMs requires the fewest database changes. It is essentially running SQL Server in a VM, providing the highest compatibility with your existing on-premises SQL Server 2012 R2 instances, including full CLR support without changes.

Ensure Active Directory authentication: SQL Server on Azure VMs can be joined to an Active Directory domain (if you extend your on-premises AD to Azure or use Azure AD Domain Services) and can use Windows Authentication for users to authenticate with their Active Directory credentials.

Comparison and Best Choice:

Comparison of the three main options:

Management overhead: single databases are lowest, Managed Instance is low, SQL Server on Azure VMs is highest.
Database changes: single databases require a medium amount, Managed Instance and SQL Server on Azure VMs require minimal changes.
AD authentication: supported by all three options.
CLR support: limited in single databases, full in Managed Instance and SQL Server on Azure VMs.
Considering all requirements:

Minimize management overhead: Azure SQL Database options (Single DB and Managed Instance) are better than SQL Server on VMs.

Minimize database changes: Azure SQL Database Managed Instance and SQL Server on VMs are better than single databases, especially for CLR compatibility.

Ensure AD authentication: All options satisfy this requirement.

Azure SQL Database Managed Instance strikes the best balance. It significantly minimizes management overhead compared to VMs, minimizes database changes (especially important for CLR), and supports Active Directory authentication. While single databases have even lower management overhead, the potential for database changes due to CLR limitations makes Managed Instance a more suitable recommendation for minimizing database changes and ensuring feature compatibility, especially for applications relying on CLR. SQL Server on Azure VMs minimizes database changes the most but fails to minimize management overhead.

Final Answer: Azure SQL Database Managed Instance

23
Q

You have an Azure subscription that contains an Azure Blob storage account named store1.

You have an on-premises file server named Server1 that runs Windows Server 2016. Server1 stores 500 GB of company files.

You need to store a copy of the company files from Server1 in store1.

Which two possible Azure services achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point

an Azure Batch account
an integration account
an On-premises data gateway
an Azure Import/Export job
Azure Data factory

A

Let’s analyze each Azure service option to determine its suitability for copying files from an on-premises file server to Azure Blob storage:

An Azure Batch account: Azure Batch is designed for large-scale parallel compute workloads. While technically you could write a custom application using Azure Batch to copy files, it’s not the intended use case, and it would be an overly complex solution for a simple file copy task. It’s not a direct file transfer service.

An integration account: Integration accounts are used in Azure Logic Apps and Azure Functions to store integration artifacts like schemas, maps, and certificates. They are not related to directly transferring files from on-premises to Azure Blob storage.

An On-premises data gateway: The On-premises data gateway acts as a bridge between on-premises data sources and Azure cloud services. It enables Azure services like Azure Data Factory, Logic Apps, Power BI, and Power Apps to securely access data behind a firewall in your on-premises network. For copying files from an on-premises file server to Azure Blob Storage, the On-premises data gateway is a crucial component to establish connectivity and secure data transfer.

An Azure Import/Export job: Azure Import/Export service is used for transferring large amounts of data to Azure Blob Storage and Azure Files by physically shipping disk drives to an Azure datacenter. This is suitable for very large datasets when network bandwidth is limited or slow, but it’s not ideal for a routine file copy of 500 GB from an active file server if a network connection is available. This method is not an online transfer service.

Azure Data Factory: Azure Data Factory (ADF) is a cloud-based data integration service. It allows you to create data-driven workflows to orchestrate and automate data movement and transformation. ADF has connectors for various data sources and sinks, including on-premises file systems (via a Self-hosted Integration Runtime, which is based on the same technology as the On-premises data gateway) and Azure Blob Storage. ADF is a well-suited and efficient service for copying files from an on-premises file server to Azure Blob storage.

Considering the requirements and the options:

On-premises data gateway is essential to enable Azure services to access the on-premises file server securely.

Azure Data Factory is a service designed for data movement and can utilize the On-premises data gateway to connect to the on-premises file server and copy files to Azure Blob storage.

Therefore, the two Azure services that, when used together, achieve the goal of copying files from an on-premises server to Azure Blob storage are:

An On-premises data gateway (required to provide secure access to the on-premises file server).

Azure Data Factory (to orchestrate the data copy process using the gateway to connect to the on-premises source and write to Azure Blob storage).

While they work together, the question asks for two possible Azure services that achieve this goal. In the context of the options provided and typical Azure hybrid scenarios, Azure Data Factory and On-premises data gateway are the most relevant and commonly used services for this type of task.

Final Answer:

An On-premises data gateway
Azure Data Factory

24
Q

HOTSPOT

You have an Azure subscription that contains the storage accounts shown in the following table.
Name Type Performance
storage1 StorageV2 Standard
storage2 StorageV2 Premium
storage3 BlobStorage Standard
storage4 FileStorage Premium

You plan to implement two new apps that have the requirements shown in the following table.
Name Requirement
App1 Use lifecycle management to migrate app data between storage tiers
App2 Store app data in an Azure file share

Which storage accounts should you recommend using for each app? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
App1:
Storage1 and storage2 only
Storage1 and storage3 only
Storage1, storage2, and storage3 only
Storage1, storage2, storage3, and storage4
App2:
Storage4 only
Storage1 and storage4 only
Storage1, storage2, and storage4 only
Storage1, storage2, storage3, and storage4

A

App1 Requirement: Use lifecycle management to migrate app data between storage tiers

Lifecycle Management Feature: Azure Blob Storage lifecycle management is a feature that allows you to automatically transition blobs to different storage tiers (Hot, Cool, Archive) based on predefined rules. This feature is supported by General-purpose v2 (StorageV2) and Blob Storage accounts. Premium performance storage accounts are designed for low latency and high throughput and typically do not require lifecycle management as the data is intended to be accessed frequently. FileStorage accounts are for Azure File Shares and do not use lifecycle management in the same way as Blob Storage.

Analyzing Storage Accounts for App1:

storage1 (StorageV2, Standard): Supports lifecycle management.

storage2 (StorageV2, Premium): Supports lifecycle management (though less typical for premium due to cost optimization focus of lifecycle management, technically possible).

storage3 (BlobStorage, Standard): Supports lifecycle management.

storage4 (FileStorage, Premium): Does not support lifecycle management for blobs. FileStorage is for Azure File Shares.

Correct Option for App1: Storage accounts that support lifecycle management are storage1, storage2, and storage3. Therefore, the correct option for App1 is Storage1, storage2, and storage3 only.
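For illustration, a lifecycle management policy for App1's blob data might look like the following JSON document (shown here as a Python dict) attached to a StorageV2 or BlobStorage account; the rule name, prefix, and day thresholds are illustrative.

```python
# Sketch of a lifecycle management policy for App1's blob data, expressed as a
# Python dict mirroring the JSON policy attached to a StorageV2 or BlobStorage
# account. Rule name, prefix, and day thresholds are illustrative.
import json

lifecycle_policy = {
    "rules": [
        {
            "name": "tier-app1-data",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["app1data/"]},
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 180},
                        "delete": {"daysAfterModificationGreaterThan": 730},
                    }
                },
            },
        }
    ]
}

print(json.dumps(lifecycle_policy, indent=2))
```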

App2 Requirement: Store app data in an Azure file share

Azure File Share Feature: Azure File Shares are fully managed file shares in the cloud, accessible via the Server Message Block (SMB) protocol. Azure File Shares can be hosted on General-purpose v2 (StorageV2) accounts and FileStorage accounts. FileStorage accounts are specifically designed for premium, high-performance file shares.

Analyzing Storage Accounts for App2:

storage1 (StorageV2, Standard): Supports Azure File Shares (standard file shares).

storage2 (StorageV2, Premium): Supports Azure File Shares (premium file shares).

storage3 (BlobStorage, Standard): Does not support Azure File Shares. BlobStorage accounts are designed for blobs (object storage), not file shares.

storage4 (FileStorage, Premium): Supports Azure File Shares (premium file shares).

Correct Option for App2: Storage accounts that support Azure File Shares are storage1, storage2, and storage4. Therefore, the correct option for App2 is Storage1, storage2, and storage4 only.
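For illustration, the sketch below writes App2 data to an Azure file share with the azure-storage-file-share library; the connection string, share name, and file paths are placeholders and could point at storage1, storage2, or storage4.

```python
# Sketch: storing App2 data in an Azure file share with the
# azure-storage-file-share library. Connection string, share name, and file
# paths are placeholders.
from azure.core.exceptions import ResourceExistsError
from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string(
    conn_str="<storage account connection string>",  # storage1, storage2, or storage4
    share_name="app2-data",
)

try:
    share.create_share()
except ResourceExistsError:
    pass  # the share was created earlier

# Directories must exist before files are written into them.
try:
    share.create_directory("reports")
except ResourceExistsError:
    pass

file_client = share.get_file_client("reports/summary.csv")
with open("summary.csv", "rb") as data:
    file_client.upload_file(data)
```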

Final Answer:

App1: Storage1, storage2, and storage3 only
App2: Storage1, storage2, and storage4 only

25
Q

You are planning an Azure IoT Hub solution that will include 50,000 IoT devices.

Each device will stream data, including temperature, device ID, and time data. Approximately 50,000 records will be written every second. The data will be visualized in near real time.

You need to recommend a service to store and query the data.

Which two services can you recommend? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Azure Table Storage
Azure Event Grid
Azure Cosmos DB SQL API
Azure Time Series Insights

A

To address the requirement of storing and querying high-velocity IoT data for near real-time visualization, we need to evaluate each service based on its capabilities for handling such data:

Azure Table Storage:

Pros: NoSQL key-value store, scalable, cost-effective for large volumes of data, can handle high write throughput.

Cons: Limited querying capabilities, not optimized for complex queries or aggregations needed for real-time analytics and visualization. Queries are most efficient when filtering by partition key and row key. Less suitable for complex time-series analysis. Not ideal for “near real-time” complex visualizations that might require flexible querying.

Azure Event Grid:

Pros: Publishes and subscribes to events from Azure services and custom sources. Excellent for event-driven architectures.

Cons: Not a data storage service. Event Grid is for routing events, not storing and querying data for visualization. It would be used to trigger actions when data arrives at IoT Hub, but not for storing the data itself for querying and visualization.

Azure Cosmos DB SQL API:

Pros: NoSQL document database, highly scalable, globally distributed, supports high write throughput and low latency. Rich SQL query API allows for flexible and complex queries, including time-series queries and aggregations. Suitable for near real-time analytics and visualization. Can handle the 50,000 records/second ingestion rate.

Cons: Can be more expensive than Table Storage for very simple data access patterns, but the rich query and scalability features justify the cost for complex real-time scenarios.

Azure Time Series Insights:

Pros: Purpose-built for time-series data from IoT devices. Highly scalable for ingestion and querying of time-series data. Optimized for time-based queries, aggregations, and analytics. Provides near real-time dashboards and visualization capabilities out-of-the-box for time-series data. Designed to handle high-velocity data streams from IoT devices. Excellent for visualizing temperature, device ID, and time data in near real-time.

Cons: Specifically for time-series data. Less flexible for general-purpose NoSQL data storage compared to Cosmos DB if you have other data types beyond time-series.

Considering the requirements:

Store and query data: Azure Table Storage, Azure Cosmos DB, and Azure Time Series Insights are storage services. Azure Event Grid is not.

Near real-time visualization: Azure Cosmos DB and Azure Time Series Insights are well-suited for near real-time visualization due to their query capabilities and low latency. Azure Table Storage is less ideal for complex real-time visualizations.

High ingestion rate (50,000 records per second): Azure Cosmos DB and Azure Time Series Insights are designed for high-throughput data ingestion. Azure Table Storage can handle high throughput, but its querying limitations become more pronounced at scale for complex analytics.

Based on this analysis, the two best services for storing and querying IoT data for near real-time visualization, considering the high ingestion rate and the need for querying and visualization, are Azure Cosmos DB SQL API and Azure Time Series Insights. Azure Time Series Insights is purpose-built for this scenario and offers out-of-the-box visualization capabilities, making it a very strong choice. Azure Cosmos DB provides more general-purpose NoSQL capabilities and global distribution if needed, while still being excellent for time-series data and real-time querying.
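To illustrate the Cosmos DB option, the sketch below queries recent telemetry for one device with the azure-cosmos library; the account URL, key, database and container names, and the partitioning by device ID are illustrative choices.

```python
# Sketch: querying recent device telemetry from Cosmos DB (SQL API) for a near
# real-time view. Account URL, key, database/container names, and the
# partitioning by deviceId are illustrative.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<account key>")
container = client.get_database_client("iot").get_container_client("telemetry")

query = (
    "SELECT c.deviceId, c.temperature, c.eventTime "
    "FROM c "
    "WHERE c.deviceId = @deviceId AND c.eventTime >= @since "
    "ORDER BY c.eventTime DESC"
)

items = container.query_items(
    query=query,
    parameters=[
        {"name": "@deviceId", "value": "device-0042"},
        {"name": "@since", "value": "2024-01-01T00:00:00Z"},
    ],
    enable_cross_partition_query=True,
)

for item in items:
    print(item["deviceId"], item["eventTime"], item["temperature"])
```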

Final Answer:

Azure Cosmos DB SQL API
Azure Time Series Insights

26
Q

You are designing an application that will aggregate content for users.

You need to recommend a database solution for the application.

The solution must meet the following requirements:

✑ Support SQL commands.

✑ Support multi-master writes.

✑ Guarantee low latency read operations.

What should you include in the recommendation?

Azure Cosmos DB SQL API
Azure SQL Database that uses active geo-replication
Azure SQL Database Hyperscale
Azure Database for PostgreSQL

A

Let’s analyze each option against the given requirements:

Azure Cosmos DB SQL API:

Support SQL commands: Yes. Azure Cosmos DB SQL API uses a subset of ANSI SQL, extended for JSON and NoSQL features. It’s designed to be familiar for SQL developers.

Support multi-master writes: Yes. Azure Cosmos DB is natively designed for multi-master writes. You can configure your Cosmos DB account to have multiple write regions, allowing you to perform write operations in any of the chosen regions. This is a core feature of Cosmos DB’s global distribution and low-latency write capabilities.

Guarantee low latency read operations: Yes. Cosmos DB is designed for low latency reads and writes at a global scale. By using the globally distributed nature of Cosmos DB and choosing read regions close to your users, you can ensure low latency read operations.

Azure SQL Database that uses active geo-replication:

Support SQL commands: Yes. Azure SQL Database fully supports T-SQL, the standard SQL dialect for SQL Server and Azure SQL Database.

Support multi-master writes: No. Azure SQL Database with active geo-replication is not multi-master. It operates on a primary-secondary model. Writes are only performed on the primary replica, and then asynchronously replicated to secondary replicas. While secondary replicas provide read scale and disaster recovery, they are read-only and do not support writes.

Guarantee low latency read operations: Yes, for read operations from the secondary replicas, especially if geographically close to users. However, write operations are always directed to the primary replica, which might introduce latency for writes and does not fulfill the multi-master write requirement.

Azure SQL Database Hyperscale:

Support SQL commands: Yes. Azure SQL Database Hyperscale fully supports T-SQL.

Support multi-master writes: No. Azure SQL Database Hyperscale is not multi-master. While Hyperscale has a distributed architecture with multiple read replicas for scalability, write operations are still processed through a single primary compute replica. It’s designed for read-heavy workloads and scalability, not for multi-master writes for globally distributed low-latency writes.

Guarantee low latency read operations: Yes. Hyperscale is designed for very high read scalability and performance, providing low latency reads from multiple replicas. However, it does not provide multi-master write capability.

Azure Database for PostgreSQL:

Support SQL commands: Yes. PostgreSQL is a relational database that supports SQL (ANSI SQL standard).

Support multi-master writes: No, not in the standard managed Azure Database for PostgreSQL service. While PostgreSQL has extensions and architectures that can achieve multi-master setups (like BDR - Bi-Directional Replication or Citus distributed PostgreSQL), these are not part of the standard Azure managed offering and add significant complexity. Azure Database for PostgreSQL Flexible Server offers read replicas for read scalability but not multi-master writes in the context asked. For a simple managed service comparison, it’s primarily single-master.

Guarantee low latency read operations: Read replicas in PostgreSQL can offer low latency reads, but the primary database is still the single point for writes, thus not fulfilling the multi-master write requirement.

Conclusion:

Only Azure Cosmos DB SQL API fully meets all three requirements: SQL command support, multi-master writes, and guaranteed low latency read operations. The other options fail on the multi-master write requirement, which is crucial for applications needing low-latency writes in a globally distributed or highly available manner.

Final Answer: Azure Cosmos DB SQL API

27
Q

HOTSPOT

You have an Azure subscription that contains the SQL servers shown in the following table.
Name Resource group Location
SQLsvr1 RG1 East US
SQLsvr2 RG2 West US
The subscription contains the storage accounts shown in the following table.
Name Resource group Location Account kind
storage1 RG1 East US StorageV2 (general purpose v2)
storage2 RG2 Central US BlobStorage
You create the Azure SQL databases shown in the following table.
Name Resource group Server Pricing tier
SQLdb1 RG1 SQLsvr1 Standard
SQLdb2 RG1 SQLsvr1 Standard
SQLdb3 RG2 SQLsvr2 Premium
Answer Area
Statements Yes No
When you enable auditing for SQLdb1, you can store the audit information to storage1.
When you enable auditing for SQLdb2, you can store the audit information to storage2.
When you enable auditing for SQLdb3, you can store the audit information to storage2.

A

Answer:

Statements Yes No
When you enable auditing for SQLdb1, you can store the audit information to storage1. Yes
When you enable auditing for SQLdb2, you can store the audit information to storage2. No
When you enable auditing for SQLdb3, you can store the audit information to storage2. No
Explanation:

Statement 1: When you enable auditing for SQLdb1, you can store the audit information to storage1.

Yes. SQLdb1 is on SQLsvr1, which is in East US. storage1 is also in East US. Azure SQL Database auditing requires the storage account to be in the same region as the SQL server. storage1 is a StorageV2 account, which is compatible with Azure SQL Auditing.

Statement 2: When you enable auditing for SQLdb2, you can store the audit information to storage2.

No. SQLdb2 is on SQLsvr1, which is in East US. storage2 is in Central US. The storage account must be in the same region as the SQL server. storage2 is in a different region (Central US) than SQLsvr1 (East US).

Statement 3: When you enable auditing for SQLdb3, you can store the audit information to storage2.

No. SQLdb3 is on SQLsvr2, which is in West US. storage2 is in Central US. The storage account must be in the same region as the SQL server. storage2 is in a different region (Central US) than SQLsvr2 (West US).

Key takeaway for Azure SQL Database Auditing and Storage Accounts:

Region Co-location is Mandatory: The storage account used for storing Azure SQL Database audit logs must be in the same Azure region as the Azure SQL server or Managed Instance.

Storage Account Type: Generally, StorageV2 (general purpose v2) and BlobStorage account kinds are suitable for storing audit logs. FileStorage is not used for Azure SQL Auditing.

Resource Group is Irrelevant for Region Constraint: The resource group placement of the SQL server and storage account does not affect the region constraint for auditing. The critical factor is the Azure region of both resources.
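
To make the reasoning concrete, the short pure-Python check below applies the region co-location rule to the tables in the question and reproduces the Yes/No/No result. It is purely illustrative and uses only the values given above.

```python
# Apply the "storage account must be in the same region as the SQL server" rule.
servers = {"SQLsvr1": "East US", "SQLsvr2": "West US"}
storage = {"storage1": "East US", "storage2": "Central US"}
databases = {"SQLdb1": "SQLsvr1", "SQLdb2": "SQLsvr1", "SQLdb3": "SQLsvr2"}

cases = [("SQLdb1", "storage1"), ("SQLdb2", "storage2"), ("SQLdb3", "storage2")]
for db, account in cases:
    same_region = servers[databases[db]] == storage[account]
    print(f"Audit {db} -> {account}: {'Yes' if same_region else 'No'}")
# Prints Yes, No, No - matching the answer above.
```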

28
Q

You have SQL Server on an Azure virtual machine. The databases are written to nightly as part of a batch process.

You need to recommend a disaster recovery solution for the data.

The solution must meet the following requirements:

✑ Provide the ability to recover in the event of a regional outage.

✑ Support a recovery time objective (RTO) of 15 minutes.

✑ Support a recovery point objective (RPO) of 24 hours.

✑ Support automated recovery.

✑ Minimize costs.

What should you include in the recommendation?

Azure virtual machine availability sets
Azure Disk Backup
an Always On availability group
Azure Site Recovery

A

Let’s analyze each option against the disaster recovery requirements:

Azure virtual machine availability sets:

Regional outage recovery: No. Availability sets protect against hardware failures within a single datacenter, not regional outages.

RTO of 15 minutes: No. Availability sets do not directly address RTO in a disaster recovery scenario.

RPO of 24 hours: No. Availability sets do not directly address RPO in a disaster recovery scenario.

Automated recovery: No. Availability sets do not provide automated recovery in a disaster recovery scenario.

Minimize costs: Yes, availability sets are a basic feature and do not add significant cost beyond the VMs themselves.

Conclusion: Availability sets do not meet the requirements for regional disaster recovery, RTO, RPO, or automated recovery.

Azure Disk Backup:

Regional outage recovery: Not reliably. Azure Disk Backup stores incremental snapshots in the operational datastore within the same region as the disk, so a regional outage can take the backups offline along with the VM. (Cross-region restore is a capability of Azure VM backup with a geo-redundant Recovery Services vault, not of Azure Disk Backup.)

RTO of 15 minutes: No. Restoring a VM and SQL Server from Azure Disk Backup can take significantly longer than 15 minutes, especially for large VMs and databases.

RPO of 24 hours: Yes. Azure Disk Backup can be configured to take backups frequently (e.g., daily or more often), easily meeting an RPO of 24 hours.

Automated recovery: No. While backup schedules are automated, the recovery process (restoring a VM and SQL Server) is not fully automated in the sense of automatic failover during a disaster. It requires manual steps or scripting.

Minimize costs: Yes. Azure Disk Backup is a relatively cost-effective backup solution.

Conclusion: Azure Disk Backup is cost-effective and easily meets the RPO requirement, but it does not reliably protect against a regional outage and fails both the 15-minute RTO and the automated recovery requirements.

An Always On availability group:

Regional outage recovery: Yes. By configuring an Always On Availability Group with synchronous or asynchronous replicas in a secondary Azure region, you can recover from a regional outage.

RTO of 15 minutes: Yes. Always On Availability Groups are designed for high availability and disaster recovery with fast failover times, typically within seconds to minutes, easily meeting the 15-minute RTO.

RPO of 24 hours: Yes. Always On availability groups can achieve a very low RPO, often near zero data loss. Synchronous commit is typical within a region, while asynchronous commit is usually used for cross-region DR for performance reasons; either way the 24-hour RPO is easily met.

Automated recovery: Yes. Always On Availability Groups support automatic failover to a secondary replica in case of a primary replica failure, including regional outages (depending on configuration).

Minimize costs: No. Always On Availability Groups are the most expensive option. They require multiple VMs (at least two SQL Server VMs), SQL Server licensing for each VM, and potentially additional storage and networking costs.

Conclusion: Always On Availability Groups meet all functional requirements (regional outage recovery, RTO, RPO, automated recovery) but do not minimize costs.

Azure Site Recovery:

Regional outage recovery: Yes. Azure Site Recovery is specifically designed for disaster recovery, including regional outages. It replicates VMs to a secondary Azure region.

RTO of 15 minutes: Yes. Azure Site Recovery is designed to achieve low RTOs. With proper planning, runbooks, and pre-warming of standby resources, an RTO of 15 minutes is achievable.

RPO of 24 hours: Yes. Azure Site Recovery supports continuous replication, allowing for very low RPO, well within 24 hours, and typically in minutes. Point-in-time recovery is also available.

Automated recovery: Yes. Azure Site Recovery supports recovery plans that can automate the failover process, including VM startup order, script execution, and IP address updates, enabling automated recovery.

Minimize costs: No, but more cost-effective than Always On Availability Groups. Azure Site Recovery costs are incurred for replication, storage, and compute resources used in the recovery region only during testing or failover. You don’t need to pay for a fully licensed hot standby SQL Server VM continuously.

Conclusion: Azure Site Recovery meets all functional requirements (regional outage recovery, RTO, RPO, automated recovery) and is more cost-effective than Always On Availability Groups, although not as cheap as Azure Disk Backup.

Comparing and Choosing the Best Option:

Given the requirements and the need to “minimize costs” whenever possible, while still meeting all functional requirements, Azure Site Recovery is the most appropriate recommendation.

Always On Availability Groups are overkill and significantly more expensive for a 24-hour RPO.

Azure Disk Backup is cheaper but fails to meet the critical RTO of 15 minutes and automated recovery.

Availability Sets are irrelevant for regional DR.

Azure Site Recovery provides the best balance of meeting all the DR requirements (regional outage recovery, RTO of 15 mins, RPO of 24 hours, automated recovery) while being more cost-conscious than Always On Availability Groups. It’s not the absolute cheapest solution, but it effectively minimizes costs while still delivering the necessary DR capabilities.

Final Answer: Azure Site Recovery
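
The tiny pure-Python summary below restates the comparison: score each option against the four functional requirements and then pick the cheapest viable option. The relative-cost weights are illustrative judgments taken from the discussion above, not published prices.

```python
# Summarize the evaluation: meet all functional requirements, then minimize cost.
options = {
    "Availability sets":   {"regional": False, "rto15": False, "rpo24h": False, "automated": False, "relative_cost": 1},
    "Azure Disk Backup":   {"regional": False, "rto15": False, "rpo24h": True,  "automated": False, "relative_cost": 1},
    "Always On AG":        {"regional": True,  "rto15": True,  "rpo24h": True,  "automated": True,  "relative_cost": 3},
    "Azure Site Recovery": {"regional": True,  "rto15": True,  "rpo24h": True,  "automated": True,  "relative_cost": 2},
}

functional = ["regional", "rto15", "rpo24h", "automated"]
viable = {name: o for name, o in options.items() if all(o[r] for r in functional)}
best = min(viable, key=lambda name: viable[name]["relative_cost"])
print(best)  # -> Azure Site Recovery
```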

29
Q

You need to deploy resources to host a stateless web app in an Azure subscription.

The solution must meet the following requirements:

✑ Provide access to the full .NET framework.

✑ Provide redundancy if an Azure region fails.

✑ Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy two Azure virtual machines to two Azure regions, and you create a Traffic Manager profile.

Does this meet the goal?

Yes
No

A

Let’s break down the requirements and analyze if the proposed solution meets them.

Requirements:

Provide access to the full .NET framework: Virtual Machines allow you to install and configure the operating system as needed. You can install Windows Server and the full .NET Framework on Azure VMs.

Provide redundancy if an Azure region fails: Deploying VMs in two different Azure regions inherently provides geographical redundancy. If one Azure region experiences a failure, the VMs in the other region can continue to operate.

Grant administrators access to the operating system to install custom application dependencies: Azure Virtual Machines provide full administrative access to the operating system. Administrators can log in and install any necessary custom application dependencies directly on the VM.

Solution:

Deploy two Azure virtual machines to two Azure regions: This directly addresses the redundancy requirement. Having VMs in separate regions ensures that if one region fails, the application can still be served from the VMs in the other region.

Create a Traffic Manager profile: Azure Traffic Manager is a DNS-based traffic load balancer. It can be configured to route traffic to healthy endpoints based on different routing methods, including failover. In this scenario, Traffic Manager can be configured to monitor the health of the web app running on the VMs in both regions. If a region fails and the VMs become unhealthy, Traffic Manager will automatically redirect traffic to the VMs in the healthy region.

Evaluation:

Full .NET Framework: Azure VMs allow you to install any version of Windows Server and the .NET Framework, fulfilling this requirement.

Redundancy if an Azure region fails: By deploying VMs in two regions and using Traffic Manager, the solution achieves regional redundancy and failover capabilities. If one region goes down, Traffic Manager will route traffic to the healthy region.

Administrator access to OS: Azure VMs provide full administrator access to the operating system, allowing for the installation of custom application dependencies.

Conclusion:

The proposed solution effectively addresses all the stated requirements. Deploying two Azure VMs in separate regions combined with Azure Traffic Manager provides a redundant and resilient architecture for a stateless web app, while also allowing for full .NET framework support and OS-level administrative access for custom dependencies.

Final Answer: Yes
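
To illustrate how Traffic Manager failover works at a conceptual level, the sketch below mimics priority routing: DNS resolution returns the highest-priority endpoint that is currently healthy. Endpoint names and regions are invented for illustration; this is not the Traffic Manager API.

```python
# Conceptual model of Traffic Manager priority (failover) routing.
endpoints = [
    {"name": "webvm-eastus", "priority": 1, "healthy": False},      # simulated region outage
    {"name": "webvm-westeurope", "priority": 2, "healthy": True},
]

def resolve(endpoints):
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("No healthy endpoints")
    return min(healthy, key=lambda e: e["priority"])["name"]

print(resolve(endpoints))  # -> webvm-westeurope while East US is down
```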

30
Q

You need to deploy resources to host a stateless web app in an Azure subscription.

The solution must meet the following requirements:

  • Provide access to the full .NET framework.
  • Provide redundancy if an Azure region fails.
  • Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy an Azure virtual machine to two Azure regions, and you deploy an Azure Application Gateway.

Does this meet the goal?

Yes
No

A

Let’s analyze the requirements and the proposed solution step-by-step.

Requirements:

Provide access to the full .NET framework: Azure Virtual Machines (VMs) allow you to install and configure the operating system, including installing the full .NET Framework. This requirement can be met by using Azure VMs.

Provide redundancy if an Azure region fails: To achieve regional redundancy, you need to deploy resources in multiple Azure regions.

Grant administrators access to the operating system to install custom application dependencies: Azure Virtual Machines provide full administrator access to the operating system, allowing administrators to install any custom dependencies. This requirement can be met by using Azure VMs.

Solution:

You deploy an Azure virtual machine to two Azure regions: This statement is fundamentally incorrect and impossible in Azure. You cannot deploy a single Azure virtual machine across two different Azure regions. Azure virtual machines are region-specific resources. To achieve regional redundancy using VMs, you would need to deploy separate sets of VMs in each region.

You deploy an Azure Application Gateway: Azure Application Gateway is a web traffic load balancer that operates at Layer 7 of the OSI model. It can provide features like SSL termination, web application firewall (WAF), and routing traffic to backend pools. Application Gateway can be configured to distribute traffic across backend pools located in different regions, which can contribute to regional redundancy. However, Application Gateway itself does not create the underlying compute resources in multiple regions. It merely distributes traffic to those resources.

Evaluation:

Full .NET framework: If we assume the intention was to deploy VMs in two regions (even though the wording is wrong), then VMs can support the full .NET framework. However, as described, it is deploying one VM.

Regional redundancy: Deploying one VM to two regions is not possible and therefore does not provide regional redundancy. Application Gateway can help with distributing traffic if there are redundant backends in different regions, but the solution description does not create redundant VMs in different regions. It starts with an impossible deployment scenario.

Administrator access to the operating system: If we assume the intention was to deploy VMs in two regions, then VMs provide admin access. However, as described, it is deploying one VM in two regions, which is not a valid setup.

Conclusion:

The core issue is the statement “You deploy an Azure virtual machine to two Azure regions.” This is technically incorrect and invalid in Azure. You cannot deploy a single VM across regions. To achieve regional redundancy with VMs, you need to deploy at least one VM in each region, and then use a service like Azure Traffic Manager or Application Gateway (in a multi-region setup) to distribute traffic and handle failover.

Because the fundamental premise of deploying a single VM to two regions is incorrect, the proposed solution does not meet the goal of providing regional redundancy as described. The solution description is flawed from the outset.

Final Answer: No

31
Q

HOTSPOT

You plan to create an Azure Storage account that will host file shares. The shares will be accessed from on-premises applications that are transaction-intensive.

You need to recommend a solution to minimize latency when accessing the file shares. The solution must provide the highest-level of resiliency for the selected storage tier.

What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage tier:
Hot
Premium
Transaction optimized
Resiliency:
Geo-redundant storage (GRS)
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)

A

Answer:
Storage tier: Premium
Resiliency: Zone-redundant storage (ZRS)

Explanation:

Storage tier: Premium

Minimize Latency: For transaction-intensive applications accessing file shares, the Premium tier is the optimal choice. Premium file shares are backed by SSDs and are designed for low latency and high IOPS (Input/Output Operations Per Second), which is significantly faster than the HDD-backed standard tiers (Transaction optimized, Hot, and Cool).

Hot and Transaction optimized are not suitable here:

Hot is a standard (HDD-backed) Azure Files tier intended for general-purpose, frequently accessed shares; its latency is higher than Premium.

Transaction optimized is also a standard (HDD-backed) Azure Files tier. It targets transaction-heavy workloads that do not need Premium latency, so it still cannot match the consistently low latency of SSD-based Premium shares.

Resiliency: Zone-redundant storage (ZRS)

Highest Level of Resiliency for Premium: For Premium file shares, the available redundancy options are Locally-redundant storage (LRS) and Zone-redundant storage (ZRS).

Locally-redundant storage (LRS): Replicates your data three times within a single physical location in the primary region. It’s the lowest-cost redundancy option and protects against server rack and drive failures.

Zone-redundant storage (ZRS): Replicates your data synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS provides high availability by protecting against datacenter failures within a region.

Geo-redundant storage (GRS): Geo-redundant storage replicates your data to a secondary region that is hundreds of miles away from the primary region. While GRS offers the highest level of data durability and protection against regional disasters, it is not available for Premium file shares.

Why ZRS is the highest resiliency for Premium: Since GRS is not an option for Premium file shares, Zone-redundant storage (ZRS) becomes the highest level of resiliency available for the Premium tier. ZRS provides better resiliency than LRS by protecting against availability zone failures, which is more robust than just single datacenter protection offered by LRS.

In summary:

To minimize latency for transaction-intensive workloads, Premium storage tier is necessary. For the highest level of resiliency available within the Premium tier for file shares, Zone-redundant storage (ZRS) should be selected because GRS is not supported for Premium file shares.
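
As a hedged sketch of this configuration, the snippet below uses the azure-identity and azure-mgmt-storage (track-2) Python SDKs to create a FileStorage account with the Premium_ZRS SKU and a premium SMB share. Subscription, resource group, account, and share names are placeholders, and exact method names (for example begin_create vs. create) can vary by SDK version.

```python
# Hedged sketch: Premium (SSD) file shares with zone-redundant storage.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import FileShare, Sku, StorageAccountCreateParameters

sub_id = "<subscription-id>"  # placeholder
client = StorageManagementClient(DefaultAzureCredential(), sub_id)

poller = client.storage_accounts.begin_create(
    resource_group_name="rg-files",           # placeholder
    account_name="premiumfiles001",           # placeholder
    parameters=StorageAccountCreateParameters(
        location="eastus",
        kind="FileStorage",                   # required for premium file shares
        sku=Sku(name="Premium_ZRS"),          # ZRS = highest resiliency for premium shares
    ),
)
account = poller.result()

# Create a 1 TiB premium SMB share in the new account.
client.file_shares.create(
    resource_group_name="rg-files",
    account_name="premiumfiles001",
    share_name="transactions",
    file_share=FileShare(share_quota=1024),
)
```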

32
Q

You need to deploy resources to host a stateless web app in an Azure subscription.

The solution must meet the following requirements:

  • Provide access to the full .NET framework.
  • Provide redundancy if an Azure region fails.
  • Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy an Azure virtual machine scale set that uses autoscaling.

Does this meet the goal?

Yes
No

A

Let’s examine the requirements and evaluate whether deploying an Azure virtual machine scale set (VMSS) with autoscaling meets them.

Requirements:

Provide access to the full .NET framework:

Azure Virtual Machine Scale Sets are based on Azure Virtual Machines. You can choose the operating system for the VMs in the scale set, including Windows Server.

On Windows Server, you can install and run applications that require the full .NET Framework.

Yes, VMSS can provide access to the full .NET framework.

Provide redundancy if an Azure region fails:

Azure Virtual Machine Scale Sets are designed for high availability and scalability within a single Azure region.

A single VMSS deployment is confined to a specific Azure region. If the entire Azure region experiences an outage, the VMSS and the web app hosted on it will be unavailable.

To achieve redundancy across Azure regions, you would need to deploy multiple VMSSs in different Azure regions and use a service like Azure Traffic Manager or Azure Front Door to distribute traffic and provide failover capabilities.

No, deploying a single Azure virtual machine scale set, even with autoscaling, does not provide redundancy in case of an Azure region failure. The solution description only mentions deploying an Azure virtual machine scale set, implying a single deployment within a single region.

Grant administrators access to the operating system to install custom application dependencies:

Azure Virtual Machine Scale Sets are built upon Azure Virtual Machines. You can configure the VM instances within a VMSS just like individual VMs.

Administrators can access the operating system of the VM instances in a VMSS using methods like RDP (for Windows) or SSH (for Linux) and install custom application dependencies.

Yes, VMSS grants administrators access to the operating system to install custom application dependencies.

Evaluation of the Solution:

The solution of deploying a single Azure virtual machine scale set with autoscaling meets two out of the three requirements: providing access to the full .NET framework and granting administrator access to the OS. However, it fails to meet the crucial requirement of providing redundancy if an Azure region fails. A single VMSS is region-bound and will be affected by a regional outage.

To achieve regional redundancy, you would need a more complex setup involving multiple VMSS deployments across different regions and a global load balancing solution, which is not described in the proposed solution.

Conclusion:

The proposed solution, as described, does not fully meet the goal because it does not provide redundancy in the event of an Azure region failure. A single VMSS, even with autoscaling, is not designed for cross-region disaster recovery.

Final Answer: No

33
Q

You plan to move a web application named App1 from an on-premises data center to Azure.

App1 depends on a custom COM component that is installed on the host server.

You need to recommend a solution to host App1 in Azure.

The solution must meet the following requirements:

✑ App1 must be available to users if an Azure data center becomes unavailable.

✑ Costs must be minimized.

What should you include in the recommendation?

In two Azure regions, deploy a load balancer and a virtual machine scale set.
In two Azure regions, deploy a Traffic Manager profile and a web app.
In two Azure regions, deploy a load balancer and a web app.
Deploy a load balancer and a virtual machine scale set across two availability zones.

A

Let’s analyze each option against the stated requirements:

Requirement 1: App1 must be available to users if an Azure data center becomes unavailable (Regional Redundancy).

Requirement 2: Costs must be minimized.

Requirement 3: App1 depends on a custom COM component.

Option 1: In two Azure regions, deploy a load balancer and a virtual machine scale set.

Regional Redundancy: Yes, deploying resources in two Azure regions directly addresses regional outages. Using a load balancer (like Azure Load Balancer or Application Gateway in each region) and VM scale sets in each region enables regional failover.

Cost Minimization: VM scale sets can be cost-effective for stateless web applications, especially when combined with autoscaling. You only pay for the VMs that are running.

COM Component Support: Yes, virtual machines provide full control over the operating system, allowing you to install and register custom COM components required by App1.

Overall: This option effectively addresses all requirements.

Option 2: In two Azure regions, deploy a Traffic Manager profile and a web app.

Regional Redundancy: Yes, Traffic Manager can route traffic to web apps in different regions, providing regional failover. Azure Web Apps (App Service) can be deployed in multiple regions.

Cost Minimization: Azure Web Apps are generally a cost-effective PaaS solution with less management overhead than VMs.

COM Component Support: No. Azure Web Apps (App Service) is a Platform-as-a-Service (PaaS) offering. You cannot install custom COM components on Azure Web Apps. Web Apps run in a managed environment where you do not have operating system level access to install custom components.

Overall: This option fails to meet the COM component requirement.

Option 3: In two Azure regions, deploy a load balancer and a web app.

Regional Redundancy: Yes, similar to option 2, if “load balancer” is interpreted as a global service like Traffic Manager or Front Door to distribute traffic across regional web app deployments.

Cost Minimization: Azure Web Apps are generally cost-effective.

COM Component Support: No. Same limitation as option 2; Azure Web Apps do not support custom COM components.

Overall: This option also fails to meet the COM component requirement.

Option 4: Deploy a load balancer and a virtual machine scale set across two availability zones.

Regional Redundancy: No. Availability zones provide high availability within a single Azure region by distributing resources across physically separate zones within the same region. They do not protect against regional outages. A regional outage will affect all availability zones within that region.

Cost Minimization: VM scale sets can be cost-effective. Availability zones do not drastically increase costs, but they don’t provide regional DR.

COM Component Support: Yes, VM scale sets allow you to install custom COM components on the VMs.

Overall: This option fails to meet the regional redundancy requirement.

Conclusion:

Considering all requirements, the only option that meets all of them is Option 1: In two Azure regions, deploy a load balancer and a virtual machine scale set. This option provides regional redundancy, can be cost-minimized, and most importantly, supports the custom COM component dependency by using virtual machines where you can install the component on the OS. Options involving Web Apps fail due to the COM component limitation, and the Availability Zone option fails to provide regional disaster recovery.

Final Answer: In two Azure regions, deploy a load balancer and a virtual machine scale set.

34
Q

Your company has the infrastructure shown in the following table.
Location Resource
Azure
* Azure subscription named Subscription1
* 20 Azure web apps
On-premises datacenter
* Active Directory domain
* Server running Azure AD Connect
* Linux computer named Server1

The on-premises Active Directory domain syncs to Azure Active Directory (Azure AD).

Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.

You plan to migrate Server1 to a virtual machine in Subscription1.

A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.

You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.

What should you include in the recommendation?

Azure AD Domain Services (Azure AD DS)
an Azure VPN gateway
the Active Directory Domain Services role on a virtual machine
Azure AD Application Proxy

A

The core requirement is to allow the migrated application (App1) to continue using LDAP queries for user identity verification, but without violating the security policy that prohibits Azure resources from accessing the on-premises network.

Let’s evaluate each option:

Azure AD Domain Services (Azure AD DS):

Functionality: Azure AD DS provides managed domain services such as domain join, group policy, LDAP, and Kerberos/NTLM authentication within Azure. It’s essentially a domain controller hosted in Azure but managed by Microsoft.

LDAP Support: Azure AD DS explicitly supports LDAP queries. Applications running on VMs joined to an Azure AD DS domain can perform LDAP queries against the Azure AD DS domain controllers.

Security Policy Compliance: By using Azure AD DS, the VM hosting App1 will authenticate against the Azure AD DS domain controllers hosted within Azure. This completely avoids accessing the on-premises Active Directory. Therefore, it fully complies with the security policy.

Suitability: This is a strong candidate as it directly addresses the need for LDAP functionality within Azure while adhering to the security policy.

An Azure VPN gateway:

Functionality: Azure VPN Gateway creates a secure, encrypted connection between your Azure virtual network and your on-premises network.

LDAP Support: With a VPN connection, the VM in Azure could directly query the on-premises Active Directory domain controllers via LDAP.

Security Policy Compliance: This option directly violates the security policy. A VPN gateway enables access from Azure resources to the on-premises network, which is explicitly prohibited.

Suitability: This option is incorrect because it violates the core security requirement.

The Active Directory Domain Services role on a virtual machine:

Functionality: This involves deploying a Windows Server virtual machine in Azure and installing the Active Directory Domain Services role to create a new domain controller within Azure.

LDAP Support: Applications on VMs joined to this newly created Azure-based domain can perform LDAP queries against this Azure-hosted domain controller.

Security Policy Compliance: This option complies with the security policy. The LDAP queries would be directed to the domain controller hosted within Azure, and not to the on-premises Active Directory. It isolates Azure resources from the on-premises network for authentication purposes.

Suitability: This is also a viable solution as it provides LDAP functionality within Azure and meets the security policy. However, it involves more management overhead compared to Azure AD DS, as you are responsible for managing the domain controller VM.

Azure AD Application Proxy:

Functionality: Azure AD Application Proxy is used to publish on-premises web applications to the internet through Azure AD authentication. It provides secure remote access to web applications.

LDAP Support: Azure AD Application Proxy is not related to enabling LDAP queries for an application running within Azure. It’s for providing reverse proxy and authentication for web applications, not for backend LDAP authentication for applications themselves.

Security Policy Compliance: Irrelevant in the context of this security policy, as it doesn’t address the LDAP query requirement for App1 within Azure.

Suitability: This option is incorrect as it does not address the requirement of LDAP queries for App1.

Comparison and Best Recommendation:

Both Azure AD Domain Services and The Active Directory Domain Services role on a virtual machine can technically solve the problem and meet the security policy. However, Azure AD Domain Services (Azure AD DS) is the more managed, streamlined, and generally recommended solution for this scenario.

Azure AD DS offers a managed service, reducing operational overhead for managing domain controllers. It’s designed for Azure environments and integrates well with Azure AD.

AD DS role on a VM requires managing the VM and the domain controller infrastructure yourself, which adds complexity and management overhead.

Considering the need for a recommendation that is efficient and aligned with Azure best practices for cloud-based identity and access management, Azure AD Domain Services (Azure AD DS) is the most appropriate and recommended solution.

Final Answer: Azure AD Domain Services (Azure AD DS)
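
To illustrate the kind of lookup App1 performs, the sketch below uses the ldap3 Python package to bind to a managed domain over secure LDAP and verify a user. The hostname, service account, password, search base, and account names are all placeholders; it simply shows that the same LDAP pattern can target Azure AD DS instead of the on-premises domain.

```python
# Illustrative LDAP verification against an Azure AD DS managed domain (placeholders throughout).
from ldap3 import Server, Connection, SUBTREE

server = Server("aadds.contoso.com", port=636, use_ssl=True)  # secure LDAP endpoint (placeholder)
conn = Connection(
    server,
    user="CONTOSO\\svc-app1",   # service account (placeholder)
    password="<password>",
    auto_bind=True,
)

# Verify that a user exists and read a couple of attributes.
conn.search(
    search_base="DC=aadds,DC=contoso,DC=com",
    search_filter="(&(objectClass=user)(sAMAccountName=jsmith))",
    search_scope=SUBTREE,
    attributes=["displayName", "memberOf"],
)
print(conn.entries)
conn.unbind()
```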

35
Q

You need to design a solution that will execute custom C# code in response to an event routed to Azure Event Grid.

The solution must meet the following requirements:

✑ The executed code must be able to access the private IP address of a Microsoft SQL Server instance that runs on an Azure virtual machine.

Costs must be minimized.

What should you include in the solution?

Azure Logic Apps in the integrated service environment
Azure Functions in the Dedicated plan and the Basic Azure App Service plan
Azure Logic Apps in the Consumption plan
Azure Functions in the Consumption plan

A

Let’s break down the requirements and evaluate each option:

Requirements:

Execute custom C# code: The solution must be capable of running custom C# code.

Access private IP of SQL Server VM: The code needs to connect to a SQL Server instance using its private IP address within an Azure Virtual Network.

Minimize costs: The solution should be cost-effective.

Option Analysis:

Azure Logic Apps in the integrated service environment (ISE):

Custom C# code: Logic Apps are primarily workflow orchestration services. While you can execute code within a Logic App, it’s not directly custom C# code. You would typically call an Azure Function or use inline code actions, which are more for expressions and data manipulation than complex C# logic.

Private IP access: Logic Apps in an ISE run within your Azure Virtual Network. This means they have direct access to resources within that VNet, including VMs with private IPs like the SQL Server VM.

Cost minimization: ISE is the most expensive deployment option for Logic Apps. It is designed for large enterprises and mission-critical workloads, and it incurs a fixed cost regardless of usage. This option does not minimize costs.

Azure Functions in the Dedicated plan and the Basic Azure App Service plan:

Custom C# code: Azure Functions fully support writing and executing custom C# code.

Private IP access: When Azure Functions run in a Dedicated App Service plan, they can be integrated into an Azure Virtual Network. VNet integration allows the Function App to access resources within the VNet using private IPs, including the SQL Server VM.

Cost minimization: Reasonable. With a Dedicated plan you pay for the App Service plan instance regardless of how often the function runs, so it costs more than a pure pay-per-execution model. However, the Basic tier is the least expensive Dedicated tier that still provides the VNet integration this scenario requires.

Azure Logic Apps in the Consumption plan:

Custom C# code: Similar to ISE, Logic Apps in the Consumption plan are workflow services, not direct C# code execution environments. You would likely need to integrate with Azure Functions to execute custom C# code.

Private IP access: No, not natively. Logic Apps in the Consumption plan do not support virtual network integration, so they cannot reach resources that are exposed only on private IP addresses. Reaching the SQL Server VM would require extra components such as the on-premises data gateway, an ISE, or moving to the Logic Apps Standard SKU (which does support VNet integration), all of which add complexity and cost.

Cost minimization: Consumption plan Logic Apps are pay-per-execution and inexpensive for event-driven workflows, but the lack of native VNet access and the extra components needed to work around it undermine both the cost and the simplicity of this option.

Azure Functions in the Consumption plan:

Custom C# code: Azure Functions fully support writing and executing custom C# code.

Private IP access: No. Azure Functions in the Consumption plan do not support virtual network integration, so a Consumption-plan function cannot connect to resources that are exposed only on private IP addresses, such as the SQL Server VM. (VNet integration requires an Elastic Premium plan or a Dedicated App Service plan.)

Cost minimization: Yes, the Consumption plan is the cheapest model for sporadic event-driven execution, but cost is irrelevant here because the function could not reach the database at all.

Best Option based on Requirements and Cost:

Considering all factors, Azure Functions in the Dedicated plan and the Basic Azure App Service plan is the most suitable recommendation.

It directly supports custom C# code execution in response to Event Grid events.

A Dedicated (App Service) plan at the Basic tier supports regional VNet integration, so the function can reach the SQL Server VM on its private IP address.

Among the options that meet both functional requirements, the Basic App Service plan is the least expensive. The Consumption plan would be cheaper but cannot reach private IPs, Logic Apps in the Consumption plan lack both native C# execution and VNet integration, and the integration service environment is by far the most expensive option.

Final Answer: Azure Functions in the Dedicated plan and the Basic Azure App Service plan
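
The hedged sketch below shows the overall pattern (written in Python for consistency with the other examples, even though the question calls for C#): an Event Grid-triggered function that writes to SQL Server over its private IP via pyodbc. It assumes the function app runs in a plan with VNet integration (for example the Basic App Service plan recommended above) and has a function.json Event Grid trigger binding; the IP address, database, table, and credentials are placeholders.

```python
# Hedged sketch of an Event Grid-triggered function writing to SQL Server on a private IP.
import azure.functions as func
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=10.0.0.4;"                     # private IP of the SQL Server VM (placeholder)
    "DATABASE=AppDb;UID=appuser;PWD=<password>"
)

def main(event: func.EventGridEvent) -> None:
    payload = event.get_json()             # event payload, available if needed
    with pyodbc.connect(CONN_STR) as conn:
        conn.execute(
            "INSERT INTO dbo.Events (Id, Subject) VALUES (?, ?)",
            str(event.id), event.subject,
        )
        conn.commit()
```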

36
Q

You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.

A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server.

Users access the shared files on VM1 from all the offices.

You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.

What should you include in the recommendation?

a Recovery Services vault and Azure Backup
an Azure file share and Azure File Sync
Azure blob containers and Azure File Sync
a Recovery Services vault and Windows Server Backup

A

The goal is to provide users with fast access to shared files, even if the Toronto branch office (where VM1 file server is located) is inaccessible. This implies the need for a solution that replicates the file shares and allows access from alternative locations when Toronto is down.

Let’s evaluate each option:

a Recovery Services vault and Azure Backup:

Functionality: Azure Backup, in conjunction with a Recovery Services vault, is used for backing up and restoring data. It is primarily a data protection solution, not a solution for providing continuous file access during a site outage.

Fast Access if Toronto Inaccessible: No. If Toronto is inaccessible, users would need to initiate a restore process from the Recovery Services vault to access the files, which is not a fast or seamless access method for users during an outage. Backup is for recovery, not continuous availability.

Suitability: This option is not designed for providing fast access to files during a branch office outage.

an Azure file share and Azure File Sync:

Functionality: Azure File Share is a fully managed cloud file share accessible via SMB protocol. Azure File Sync is a service that can synchronize on-premises file servers with Azure File Shares.

Fast Access if Toronto Inaccessible: Yes. If the Toronto branch office becomes inaccessible, users can be redirected to access the Azure File Share directly. The Azure File Share is hosted in Azure and is independent of the Toronto office’s availability. Users from other offices can access the files through the internet connection to Azure. Additionally, Azure File Sync can be used to cache the Azure File Share content on file servers in other branch offices for even faster local access if required.

Suitability: This option directly addresses the requirement for fast file access during a Toronto office outage. Azure File Share provides a cloud-based, always-available copy of the files.

Azure blob containers and Azure File Sync:

Functionality: Azure Blob containers are object storage, designed for storing large amounts of unstructured data. Azure File Sync is designed to synchronize on-premises file servers with Azure File Shares, not Blob containers.

Fast Access if Toronto Inaccessible: No. Azure Blob containers are not directly accessed as file shares by users using standard file protocols (like SMB). While data could be in Blob storage, it’s not a solution for providing fast file share access to users during an outage. Azure File Sync is not compatible with Blob containers in this scenario.

Suitability: This option is not a valid or practical solution for providing file share access.

a Recovery Services vault and Windows Server Backup:

Functionality: Windows Server Backup is an on-premises backup tool. Combined with a Recovery Services vault in Azure, it provides offsite backups.

Fast Access if Toronto Inaccessible: No. Similar to the “Azure Backup” option, this is a backup and restore solution. It does not provide fast or continuous file access during an outage. Users would need to restore from backup, which is not designed for immediate access.

Suitability: This option is also not designed for providing fast access to files during a branch office outage.

Conclusion:

The most suitable recommendation to ensure users can access shared files quickly even if the Toronto branch office is inaccessible is an Azure file share and Azure File Sync. This solution provides a cloud-based, highly available copy of the files (Azure File Share) that can be accessed from any location, including other branch offices, when the primary file server in Toronto is unavailable. Azure File Sync can further enhance performance by caching the Azure File Share content on-premises in other offices if needed.

Final Answer: an Azure file share and Azure File Sync
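
As a small illustration of why this works during an outage, the sketch below uses the azure-storage-file-share Python SDK to read a file directly from the Azure file share that Azure File Sync keeps in sync with the branch servers. The connection string, share name, and file path are placeholders.

```python
# Read a file straight from the Azure file share (placeholders throughout).
from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string(
    conn_str="<storage-account-connection-string>",  # placeholder
    share_name="corp-files",
)

file_client = share.get_file_client("reports/q1-summary.xlsx")
stream = file_client.download_file()
with open("q1-summary.xlsx", "wb") as local_file:
    local_file.write(stream.readall())
```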

37
Q

HOTSPOT

You have an Azure subscription named Subscription1 that is linked to a hybrid Azure Active Directory (Azure AD) tenant.

You have an on-premises datacenter that does NOT have a VPN connection to Subscription1. The datacenter contains a computer named Server1 that has Microsoft SQL Server 2016 installed. Server1 is prevented from accessing the internet.

An Azure logic app named LogicApp1 requires write access to a database on Server1.

You need to recommend a solution to provide LogicApp1 with the ability to access Server1.

What should you recommend deploying on-premises and in Azure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
On-premises:
A Web Application Proxy for Windows Server
An Azure AD Application Proxy connector
An On-premises data gateway
Hybrid Connection Manager
Azure:
A connection gateway resource
An Azure Application Gateway
An Azure Event Grid domain
An enterprise application

A

Answer:

On-premises: An On-premises data gateway
Azure: A connection gateway resource

Explanation:

On-premises: An On-premises data gateway

Reason: The On-premises data gateway acts as a bridge between Azure cloud services and on-premises data sources. Since Server1 (the SQL Server) is on-premises and not directly accessible from the internet or Azure, the Data Gateway is essential to establish a secure connection.

Functionality:

Secure Tunnel: The Data Gateway creates an outbound-only connection from your on-premises network to Azure. No inbound ports need to be opened on your firewall, enhancing security.

Data Transfer: It facilitates secure data transfer between the on-premises SQL Server and Azure Logic Apps.

Connectivity for Multiple Azure Services: The same Data Gateway can be used by other Azure services like Power Automate, Power BI, and Power Apps to access on-premises data sources.

Azure: A connection gateway resource

Reason: Installing the gateway on-premises is only half of the setup. The gateway installation must also be registered in Azure by creating a gateway resource (an "on-premises data gateway" resource) in the subscription. LogicApp1 then references this gateway resource when you create the connection to the SQL Server database on Server1.

Functionality (in this context):

Gateway registration: The Azure gateway resource links the on-premises gateway installation to your subscription, region, and resource group.

Logic App connection: When you add the SQL Server connector to LogicApp1 and choose to connect via the on-premises data gateway, you select this gateway resource; the connection then flows through the gateway to Server1 without any inbound ports or VPN.

Governance: Because the gateway is a first-class Azure resource, you can apply Azure RBAC and manage it like any other resource in the subscription.

Why other options are not correct or less suitable:

On-premises:

A Web Application Proxy for Windows Server and An Azure AD Application Proxy connector: These are specifically for publishing web applications to the internet using Azure AD pre-authentication. They are not designed for generic data connectivity between Logic Apps and on-premises databases.

Hybrid Connection Manager: While Hybrid Connections can also provide secure connectivity, the On-premises data gateway is generally the preferred and simpler solution for connecting Logic Apps to on-premises data sources like SQL Server. Hybrid Connections are often used for more general-purpose network connectivity scenarios.

Azure:

An enterprise application: An Azure AD enterprise application represents an application identity used for sign-in, single sign-on, and consent scenarios. Registering the on-premises data gateway does not require deploying an enterprise application, and an enterprise application by itself provides no data connectivity between LogicApp1 and Server1.

An Azure Application Gateway: Application Gateway is a web traffic load balancer and WAF. It’s not related to backend data connectivity for Logic Apps to on-premises databases.

An Azure Event Grid domain: Event Grid is an event routing service, not related to data connectivity for Logic Apps to on-premises SQL Server.

Therefore, the most accurate and relevant answer is:

On-premises: An On-premises data gateway

Azure: A connection gateway resource

This combination is the standard approach for securely connecting an Azure Logic App to an on-premises SQL Server that is not internet-accessible and has no VPN to Azure: install the on-premises data gateway in the datacenter, register it as a gateway resource in Azure, and reference that resource from the SQL Server connection used by LogicApp1.

38
Q

Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.

You plan to move all the virtual machines to Azure.

You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.

What should you use to make the recommendation?

Azure Cost Management
Azure Pricing calculator
Azure Migrate
Azure Advisor

A

To determine the optimal number and size of Azure virtual machines for migrating 300 on-premises VMware VMs while minimizing administrative effort, you need a tool that can assess the existing VMware environment and provide Azure VM sizing recommendations. Let’s evaluate each option:

Azure Cost Management: Azure Cost Management is a tool for monitoring, managing, and optimizing Azure spending. It helps you analyze costs, set budgets, and identify cost-saving opportunities for existing Azure resources. It does not directly assess on-premises VMware environments to recommend Azure VM sizes for migration. While it can inform cost considerations after you’ve chosen VM sizes, it doesn’t help in determining those sizes for migration.

Azure Pricing calculator: The Azure Pricing calculator is a tool to estimate the cost of Azure services. You can manually configure different Azure VM sizes and tiers to get cost estimates. However, it requires you to manually input the specifications (like VM size, OS, etc.) and does not automatically analyze your on-premises VMware environment to provide sizing recommendations. It’s useful for cost estimation once you have decided on the VM sizes, but not for determining the sizes initially based on on-premises workload characteristics.

Azure Migrate: Azure Migrate is a service specifically designed to simplify, guide, and accelerate your migration to Azure. It provides tools for:

Discovery: Discovering on-premises VMware, Hyper-V VMs, and physical servers.

Assessment: Assessing discovered VMs for Azure readiness and providing Azure VM size recommendations based on performance data and compatibility. Azure Migrate can analyze the CPU, memory, and disk utilization of your VMware VMs to suggest appropriate Azure VM sizes.

Migration: Tools to migrate VMs to Azure.

Azure Migrate directly addresses the need to recommend Azure VM sizes based on your existing VMware environment while minimizing administrative effort through automated discovery and assessment.

Azure Advisor: Azure Advisor analyzes your existing Azure resources and provides recommendations to optimize cost, security, reliability, operational excellence, and performance. It does not assess on-premises environments for migration planning. Azure Advisor helps optimize resources already in Azure, not for sizing recommendations during migration from on-premises.

Conclusion:

Azure Migrate is the most appropriate tool to use for recommending the number and size of Azure virtual machines needed to migrate your 300 VMware VMs to Azure while minimizing administrative effort. It is specifically designed for migration assessments and provides Azure VM size recommendations based on analyzing your on-premises VM configurations and performance data. The other options are not designed for this specific purpose.

Final Answer: Azure Migrate

39
Q

You plan to provision a High Performance Computing (HPC) cluster in Azure that will use a third-party scheduler.

You need to recommend a solution to provision and manage the HPC cluster nodes.

What should you include in the recommendation?

Azure Lighthouse
Azure CycleCloud
Azure Purview
Azure Automation

A

The correct answer is Azure CycleCloud.

Here’s why:

Azure CycleCloud is specifically designed for creating, managing, operating, and optimizing High Performance Computing (HPC) clusters in Azure. It’s tailored to handle the complexities of HPC environments, including:

Provisioning HPC Nodes: CycleCloud automates the deployment and configuration of virtual machines that serve as compute nodes in your HPC cluster. It can handle different VM sizes, operating systems, and networking configurations suitable for HPC workloads.

Third-Party Scheduler Integration: Crucially, CycleCloud is built to work with various schedulers, including popular third-party options like Slurm, PBS Pro, LSF, and Grid Engine. It understands how to integrate with these schedulers to manage job submissions and node allocation within the cluster. You can configure CycleCloud to deploy and manage the scheduler itself or integrate with an existing scheduler setup.

Cluster Lifecycle Management: CycleCloud goes beyond just provisioning. It handles the entire lifecycle of the cluster, including:

Scaling: Dynamically adding or removing nodes based on workload demands and scheduler requirements.

Monitoring: Providing visibility into cluster health and performance.

Termination: Gracefully shutting down the cluster when it’s no longer needed.

Infrastructure as Code: CycleCloud uses declarative configuration files to define your cluster, allowing you to version control and easily reproduce your HPC environment.

Let’s look at why the other options are less suitable:

Azure Lighthouse: Azure Lighthouse is for delegated resource management across multiple tenants. It’s primarily used by Managed Service Providers (MSPs) to manage Azure resources for their customers. While it’s related to management, it’s not directly focused on provisioning and managing HPC cluster nodes within a single tenant. It’s more about who can manage resources, not how to build and run an HPC cluster.

Azure Purview: Azure Purview is a data governance service. It helps you discover, understand, and govern your data assets across your organization. While data is crucial for HPC, Purview is not involved in provisioning or managing the compute infrastructure (HPC nodes) itself. It focuses on data cataloging, lineage, and security, not cluster orchestration.

Azure Automation: Azure Automation is a general-purpose automation service. You could potentially use Azure Automation to script the deployment of VMs and configure them as HPC nodes. However, it’s a much more manual and complex approach compared to using CycleCloud. Azure Automation lacks the HPC-specific features and scheduler integrations that CycleCloud provides out-of-the-box. You would need to write a significant amount of custom scripting to achieve the same level of functionality as CycleCloud, and it would be less robust and harder to manage for HPC cluster lifecycle management.

40
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.

The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using an Azure policy initiative to enforce the location.

Does this meet the goal?

Yes
No

A

The correct answer is Yes.

Here’s why:

Azure Policy Initiatives for Location Enforcement: Azure Policy Initiatives (formerly called Policy Sets) are a powerful tool for managing and enforcing organizational standards and compliance at scale in Azure. One of the most common and effective uses of Azure Policy is to control resource locations.

How Azure Policy Enforces Location: You can create an Azure Policy (and include it in an initiative) that specifically restricts the locations where resources can be deployed within a subscription, resource group, or management group. For example, you can define a policy that only allows resources to be created in “East US 2” and “West US 2” regions.

Meeting the Regulatory Requirement: The company has a regulatory requirement to deploy App Service instances only to specific Azure regions. By implementing an Azure Policy Initiative that includes a policy to restrict allowed locations for App Service and Azure SQL Database resources, you directly address this requirement. When a deployment is attempted in a non-compliant region, Azure Policy will prevent the deployment from succeeding, ensuring that the regulatory requirement is met.

Simultaneous Deployment and Same Region: While Azure Policy itself doesn’t orchestrate the deployment of App Service and SQL Database at the same time, it works seamlessly with any deployment method (ARM templates, Bicep, Azure CLI, PowerShell, etc.). When you attempt to deploy both App Service and Azure SQL database (simultaneously or not), the location policy will be evaluated during the deployment process. If either resource is specified to be deployed in a disallowed region, the policy will block the deployment. To ensure both App Service and SQL Database are in the same region, you would configure your deployment template or script to specify the same region for both resource types. The location policy will then ensure that this chosen region is within the allowed regions.

Why other tools are less relevant here (they are not answer choices for this yes/no question, but it helps to see why the options from the earlier HPC question do not apply):

Azure Lighthouse: Lighthouse is for delegated access management across tenants, not for location enforcement within a single tenant to meet regulatory requirements.

Azure CycleCloud: CycleCloud is for HPC cluster management. It’s not directly related to enforcing location policies for App Service and SQL Database deployments.

Azure Purview: Purview is for data governance and cataloging, not resource location enforcement.

Azure Automation: While you could use Azure Automation to check locations after deployment, or even as part of a more complex deployment script, Azure Policy is the native and recommended Azure service for proactively enforcing location constraints during deployment. Policy is much more efficient and integrated for this specific purpose.
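
To make the enforcement behavior concrete, the conceptual sketch below expresses an "allowed locations" rule as a dictionary in the same shape as the built-in policy definition (if the location is not in the allowed list, deny) and evaluates two sample resources against it. The evaluation function is a simplification for illustration only, not the Azure Policy engine.

```python
# Conceptual evaluation of an "allowed locations" deny rule.
ALLOWED_LOCATIONS = ["eastus2", "westus2"]

policy_rule = {
    "if": {"not": {"field": "location", "in": ALLOWED_LOCATIONS}},
    "then": {"effect": "deny"},
}

def evaluate(resource: dict) -> str:
    condition = policy_rule["if"]["not"]
    violates = resource.get(condition["field"]) not in condition["in"]
    return policy_rule["then"]["effect"] if violates else "allow"

print(evaluate({"type": "Microsoft.Web/sites", "location": "eastus2"}))      # allow
print(evaluate({"type": "Microsoft.Sql/servers", "location": "centralus"}))  # deny
```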

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.

The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using the Regulatory compliance dashboard in Azure Security Center.

Does this meet the goal?

Yes
No

A

The correct answer is No.

Here’s why:

Azure Security Center Regulatory Compliance Dashboard’s Purpose: The Regulatory Compliance dashboard in Azure Security Center is designed to provide visibility and reporting on your Azure environment’s compliance posture against various regulatory standards and industry benchmarks (like PCI DSS, SOC 2, ISO 27001, Azure CIS, etc.).

What the Dashboard Does:

Assesses Compliance: It continuously assesses your Azure resources against the selected regulatory standards and security benchmarks.

Provides Insights: It shows you which controls are passing, failing, or need attention.

Offers Recommendations: It provides actionable recommendations to improve your compliance and security posture based on the identified issues.

Reporting: It generates reports on your compliance status.

What the Dashboard Does NOT Do:

Enforce Deployment Policies: The Regulatory Compliance dashboard does not actively prevent deployments of resources in non-compliant regions. It’s a monitoring and reporting tool, not an enforcement mechanism.

Control Resource Location During Deployment: It does not have the capability to block or redirect deployments based on region. It identifies compliance issues after resources are deployed.

Why it Doesn’t Meet the Goal: The company’s requirement is to ensure App Service instances are deployed only to specific Azure regions. The Regulatory Compliance dashboard can tell you if resources are deployed in compliant regions after they are deployed, but it cannot prevent deployments to non-compliant regions in the first place. Therefore, it does not meet the regulatory requirement of ensuring deployment only in specific regions.

Better Solution (as seen in the previous question): As discussed in the previous question, Azure Policy is the correct tool for enforcing location restrictions during resource deployment. Azure Policy can be configured to deny the creation of resources in regions that are not allowed, thus directly meeting the regulatory requirement.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.

The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using an Azure policy to enforce the resource group location.

Does this meet the goal?

Yes
No

A

The correct answer is No.

Here’s why:

Resource Group Location vs. Resource Location: It’s crucial to understand the difference between resource group location and resource location in Azure.

Resource Group Location: The location you specify when creating a resource group is primarily for metadata storage and management operations related to the resource group itself. It doesn’t directly dictate where the resources you deploy within that resource group will be located.

Resource Location: Each Azure resource (like App Service, Azure SQL Database, VM, etc.) has its own independent location setting. This is the location where the actual service and its data are physically hosted.

Azure Policy for Resource Group Location: You can use Azure Policy to enforce the location where resource groups can be created. For example, you can create a policy that only allows resource groups to be created in “East US 2” and “West US 2”.

Why Enforcing Resource Group Location Doesn’t Meet the Goal:

No Control over Resource Location: Enforcing the resource group location does not automatically enforce the location of the resources deployed within that resource group. You can create a resource group in “East US” and then deploy an App Service in “West US” and an Azure SQL database in “Central US” within that “East US” resource group.

Regulatory Requirement on Resource Location: The regulatory requirement is about the location of the App Service instances (and by extension, the Azure SQL databases), not just the resource group. Enforcing resource group location alone does not guarantee that these resources will be in the required specific regions.

What would meet the goal (and was discussed in a previous question): To meet the goal, you need to use Azure Policy to enforce the location of the App Service and Azure SQL Database resources themselves. You would create policies that specify the allowed locations for resource types like Microsoft.Web/sites (App Service) and Microsoft.Sql/servers (SQL Database servers and databases).
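
As a hedged sketch of what "enforcing the location of the resources themselves" looks like (again expressed as a Python dictionary only to show the rule's shape; the resource provider types are real, while the region list is an example assumption), a rule scoped to the App Service and SQL resource types could take this form:

    # Sketch: deny App Service sites and SQL servers created outside the
    # allowed regions. Contrast this with a rule on the resource *group*
    # location, which would not constrain these resources at all.
    allowed_locations = ["eastus2", "westus2"]  # example values

    policy_rule = {
        "if": {
            "allOf": [
                {
                    "anyOf": [
                        {"field": "type", "equals": "Microsoft.Web/sites"},
                        {"field": "type", "equals": "Microsoft.Sql/servers"}
                    ]
                },
                {
                    "not": {"field": "location", "in": allowed_locations}
                }
            ]
        },
        "then": {"effect": "deny"}
    }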

In summary: While enforcing resource group location is a form of location control in Azure, it is not the correct or effective way to meet the regulatory requirement of ensuring App Service instances and Azure SQL databases are deployed to specific Azure regions. It only controls where the resource group’s metadata is stored, not the location of the actual services within it. Therefore, the solution of using Azure Policy to enforce resource group location does not meet the goal.

Final Answer: No

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases.

The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.

Does this meet the goal?

Yes
No

A

The correct answer is No.

Here’s why:

Resource Groups for Organization, Not Location Enforcement: Creating resource groups based on locations is a good organizational practice. It helps in logically grouping resources deployed in a specific region, making management and billing easier to understand. However, resource groups themselves do not enforce the location of the resources deployed within them.

Resource Locks for Protection, Not Location Control: Resource locks are used to protect resources from accidental deletion or modification. They can be applied at the resource group level or individual resource level. Resource locks provide different levels of protection (CanNotDelete, ReadOnly). However, resource locks do not control or enforce the location where resources are deployed. They only come into play after resources have been deployed.

Why this Solution Fails to Meet the Goal:

No Location Enforcement During Deployment: This solution does not prevent a user from deploying an App Service or Azure SQL database to a region that is not one of the specific allowed regions. Someone could create a resource group named “EastUS2-Resources” (suggesting East US 2 location) but still deploy an App Service within it to West US or any other region.

Organizational, Not Enforceable: Creating resource groups by location is purely an organizational and naming convention. It’s helpful for humans to understand the intended location, but it’s not enforced by Azure itself.

Locks are Post-Deployment: Resource locks only prevent actions after the resources are deployed. They have no bearing on the initial deployment location choice.

The Regulatory Requirement is about Enforcement: The company has a regulatory requirement to deploy App Service instances only to specific regions. This implies a need for a mechanism that actively prevents deployments in non-compliant regions. Resource groups and resource locks, in combination or separately, do not provide this proactive enforcement.

The Correct Solution (from previous questions): As established in earlier questions, Azure Policy is the proper tool for enforcing location restrictions. Azure Policy can be configured to deny the creation of resources in regions that are not allowed, directly meeting the regulatory requirement.

In summary: While creating location-based resource groups and using resource locks are good management practices, they do not address the regulatory requirement of enforcing resource location during deployment. They do not prevent deployments in non-compliant regions. Therefore, this solution does not meet the goal.

Final Answer: No

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

HOTSPOT

You are planning an Azure Storage solution for sensitive data. The data will be accessed daily. The data set is less than 10 GB.

You need to recommend a storage solution that meets the following requirements:

  • All the data written to storage must be retained for five years.
  • Once the data is written, the data can only be read. Modifications and deletion must be prevented.
  • After five years, the data can be deleted, but never modified.
  • Data access charges must be minimized

What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Storage account type:
General purpose v2 with Archive access tier for blobs
General purpose v2 with Cool access tier for blobs
General purpose v2 with Hot access tier for blobs
Configuration to prevent modifications and deletions:
Container access level
Container access policy
Storage account resource lock

A

Answer:

Storage account type: General purpose v2 with Cool access tier for blobs

Configuration to prevent modifications and deletions: Storage account resource lock

Explanation:

Let’s break down each requirement and why these selections are the closest fit within the given options:

  1. Data Retention for Five Years:

General purpose v2 with Cool access tier for blobs: Both Cool and Archive access tiers are suitable for long-term retention. Cool is designed for data that is infrequently accessed but still needs to be available. Archive is for rarely accessed data with higher retrieval latency. Since the data is accessed daily, Archive might introduce unacceptable latency for daily reads. Cool tier offers a better balance of cost and accessibility for data that needs to be retained long-term but still accessed periodically.

  2. Write-Once, Read-Many (WORM) & Prevention of Modifications and Deletions (Initially):

Storage account resource lock: While ideally, for true WORM compliance and to prevent modifications and deletions of blob data itself, you would use Azure Blob Storage Immutability policies (Time-based retention policies or Legal Hold policies). However, these options are not provided in the “Configuration to prevent modifications and deletions” choices.

Container access level and Container access policy are related to controlling access to the container and blobs (authorization and authentication), not preventing modifications or deletions once data is written. They can restrict who can perform actions, but not inherently prevent actions by authorized users.

Storage account resource lock is the closest option from the provided list to preventing modifications and deletions, although it’s not the ideal solution for WORM at the blob level. A Resource Lock can be set at the Storage Account level (or Resource Group level containing the storage account) with a ReadOnly or CanNotDelete lock. This would:

CanNotDelete: Prevent accidental deletion of the entire storage account (and indirectly the data within it). While it doesn’t prevent modifying blob data, it adds a layer of protection against accidental account-level deletion, which could lead to data loss.

ReadOnly: Prevents changes to the storage account at the management plane and blocks operations such as listing the account keys, which can indirectly disrupt key-based data access. It does not, by itself, stop an authorized client from modifying or deleting blob data, and its side effects could also interfere with writing new data in the future, which might not be desirable for ongoing operations.

Important Note: Using a Storage account resource lock is NOT the same as implementing true WORM immutability policies on blobs. Resource locks are a broader Azure Resource Manager feature, not a blob storage-specific WORM feature. For true WORM and regulatory compliance, Azure Blob Storage Immutability policies are the recommended approach. However, given the limited options in the question, Storage account resource lock is the closest option to provide some level of prevention against modifications and deletions at the account level (primarily deletion).

  3. Deletion After Five Years, Never Modified:

Cool access tier and potential Lifecycle Management: Data in the Cool tier can be deleted at any time; blobs removed or re-tiered before 30 days simply incur an early-deletion charge. After five years, you would need a process (typically an Azure Storage lifecycle management rule, or an Azure Automation runbook) to identify and delete the data if required; a sketch of such a rule appears after this list. The “never modified” part is addressed, as far as the limited options allow, by the storage account resource lock. Ideally, immutability policies would guarantee it.

  4. Minimize Data Access Charges:

Cool access tier: Cool has lower storage costs than Hot but higher read and data-retrieval charges. Because the data set is small (under 10 GB), the daily read charges remain modest, and the combined storage-plus-access cost over five years is likely lower than keeping the data in the Hot tier. Archive would minimize storage costs further, but its multi-hour retrieval latency and rehydration costs make it unsuitable for daily access. Cool is therefore the compromise that keeps overall charges low while still supporting daily reads.
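
To make the retention mechanics concrete, here is a minimal sketch (Python dictionaries mirroring the JSON that Azure Storage accepts; the rule name and the 1,825-day figure are assumptions derived from the five-year requirement) of the two pieces referred to above: a time-based immutability (WORM) policy, which is the ideal control the answer options omit, and a lifecycle management rule that deletes blobs once they are more than five years old:

    # Time-based retention (immutability) policy on a container - the true
    # WORM control. 1825 days is roughly 5 years (leap days ignored).
    immutability_policy = {
        "immutabilityPeriodSinceCreationInDays": 1825,
        "allowProtectedAppendWrites": False
    }

    # Lifecycle management rule that deletes blobs older than five years,
    # satisfying "can be deleted, but never modified, after five years"
    # when combined with the retention policy above.
    lifecycle_policy = {
        "rules": [
            {
                "name": "delete-after-five-years",  # example name
                "enabled": True,
                "type": "Lifecycle",
                "definition": {
                    "filters": {"blobTypes": ["blockBlob"]},
                    "actions": {
                        "baseBlob": {
                            "delete": {"daysAfterModificationGreaterThan": 1825}
                        }
                    }
                }
            }
        ]
    }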

Why other options are less suitable:

Hot access tier: Unnecessarily expensive for long-term storage, especially if the daily access isn’t extremely frequent or high-bandwidth.

Archive access tier: While cheapest for storage, the high retrieval latency and access costs make it unsuitable for “daily access” even if the data set is small.

General purpose v2 with Archive access tier for blobs: Same issues as Archive tier above regarding daily access.

Container access level/Container access policy: These control access authorization, not data immutability or prevention of modifications/deletions after data is written. They don’t meet the WORM requirement

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

You have an Azure subscription.

You need to recommend an Azure Kubernetes service (AKS) solution that will use Linux nodes.

The solution must meet the following requirements:

  • Minimize the time it takes to provision compute resources during scale-out operations.
  • Support autoscaling of Linux containers.
  • Minimize administrative effort.

Which scaling option should you recommend?

Virtual Kubelet
cluster autoscaler
virtual nodes
horizontal pod autoscaler

A

The correct answer is virtual nodes.

Here’s why:

Virtual Nodes and Minimized Provisioning Time: Virtual nodes in AKS leverage Azure Container Instances (ACI) to quickly provision compute resources. When you scale out with virtual nodes, pods are scheduled directly onto ACI, which can provision containers much faster than traditional virtual machines used by the cluster autoscaler. This directly addresses the requirement to “minimize the time it takes to provision compute resources during scale-out operations.”

Virtual Nodes and Autoscaling of Linux Containers: Virtual nodes are fully compatible with Linux containers. They are designed to seamlessly run Linux-based containerized workloads within AKS. The autoscaling capabilities of virtual nodes are inherently tied to the demand for pods, automatically scaling as needed to accommodate Linux containers.

Virtual Nodes and Minimized Administrative Effort: Virtual nodes significantly reduce administrative overhead because you don’t need to manage the underlying virtual machines that host the nodes. Azure manages the infrastructure for ACI. You focus solely on managing your Kubernetes workloads. This directly addresses the requirement to “minimize administrative effort.”
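
As a rough sketch of how a workload opts into virtual nodes (shown here as a Python dictionary mirroring the relevant part of a Kubernetes pod spec; treat the exact label and toleration keys as assumptions to verify against the current AKS virtual nodes documentation):

    # Sketch of the scheduling hints commonly used so a Linux pod can run on
    # AKS virtual nodes (ACI-backed). Keys follow the pattern in Microsoft's
    # virtual nodes walkthrough but should be verified before use.
    pod_spec = {
        "nodeSelector": {
            "kubernetes.io/role": "agent",
            "beta.kubernetes.io/os": "linux",
            "type": "virtual-kubelet"
        },
        "tolerations": [
            {"key": "virtual-kubelet.io/provider", "operator": "Exists"},
            {"key": "azure.com/aci", "effect": "NoSchedule"}
        ]
    }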

Let’s look at why the other options are less suitable:

Virtual Kubelet: Virtual Kubelet is the open-source connector that lets a Kubernetes cluster schedule pods onto external compute platforms such as Azure Container Instances. AKS virtual nodes are built on it, but as a raw open-source project it is not the managed, low-administration option this scenario calls for; in AKS you consume this capability through the virtual nodes feature rather than by installing and operating Virtual Kubelet yourself.

Cluster Autoscaler: While the cluster autoscaler is a valid and important component for AKS, it scales the number of nodes (VMs in the node pool) in your AKS cluster. While it does automate node scaling, it still relies on the provisioning of virtual machines, which takes longer than provisioning containers in ACI (as used by virtual nodes). Therefore, it doesn’t minimize provisioning time to the same extent as virtual nodes. Also, while it reduces admin effort, you still manage and configure node pools, which is more administrative overhead than virtual nodes.

Horizontal Pod Autoscaler (HPA): The Horizontal Pod Autoscaler (HPA) scales the number of pods within a deployment or replica set based on CPU utilization or other metrics. HPA does not directly provision compute resources (nodes). While HPA is crucial for autoscaling applications, it relies on having enough underlying compute capacity (nodes) available. If you only use HPA without a mechanism to scale the nodes themselves, your pods might be pending if there isn’t enough node capacity. HPA addresses application scaling, not node scaling for compute resource provisioning.

In Summary:

Virtual nodes are the best fit because they directly address all three requirements: minimizing provisioning time, supporting Linux container autoscaling, and minimizing administrative effort. They offer the fastest scale-out by leveraging serverless container instances and reduce management overhead by abstracting away node management. While Cluster Autoscaler is also a valid autoscaling option, virtual nodes are superior in terms of speed and reduced management for this specific scenario focusing on minimizing provisioning time and administrative effort.

Final Answer: virtual nodes

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

You have an Azure subscription.

You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes.

The solution must meet the following requirements:

  • Minimize the time it takes to provision compute resources during scale-out operations.
  • Support autoscaling of Windows Server containers.

Which scaling option should you recommend?

horizontal pod autoscaler
Kubernetes version 1.20.2 or newer
cluster autoscaler
Virtual nodes with Virtual Kubelet ACI

A

The correct answer is cluster autoscaler.

Here’s why:

Cluster Autoscaler for Node Scaling: The cluster autoscaler is specifically designed to automatically scale the number of nodes (virtual machines) in your AKS cluster based on the demands of your workloads. This is the primary mechanism in AKS to dynamically adjust compute resources. It monitors the Kubernetes scheduler for pending pods due to insufficient resources and adds new nodes to the node pool when needed.

Support for Autoscaling Windows Server Containers: The cluster autoscaler works seamlessly with Windows Server node pools in AKS. You can configure a dedicated node pool running Windows Server 2019, and the cluster autoscaler will scale this Windows node pool up and down based on the resource requests and limits of your Windows Server containers.

Provisioning Time for VMs: While the cluster autoscaler relies on provisioning virtual machines for scaling, and VM provisioning inherently takes time, it’s still the most effective and standard way to automatically scale compute resources (nodes) for both Linux and Windows workloads in AKS. While VM provisioning isn’t as instantaneous as container provisioning with ACI (Virtual Nodes for Linux), it’s the necessary approach for adding more Windows Server compute capacity to your AKS cluster.
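
For reference, only a handful of properties matter on the Windows node pool. The sketch below (a Python dictionary mirroring the ARM agent pool profile; the pool name, VM size, and count limits are example assumptions) shows a Windows Server node pool with the cluster autoscaler enabled:

    # Sketch of a Windows Server node pool definition with autoscaling
    # enabled. Names and counts are illustrative assumptions.
    windows_node_pool = {
        "name": "win19",              # Windows pool names are limited in length
        "mode": "User",
        "osType": "Windows",
        "vmSize": "Standard_D4s_v3",  # example size
        "enableAutoScaling": True,
        "minCount": 1,
        "maxCount": 5
    }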

Let’s examine why the other options are less suitable:

Horizontal Pod Autoscaler (HPA): As explained in the previous question, HPA scales the number of pods within a deployment or replica set. HPA does not scale the nodes themselves. While HPA is essential for scaling your application workloads within the existing node capacity, it does not address the need to provision more compute resources (nodes) when the cluster runs out of capacity. For Windows containers, you still need to scale the underlying Windows nodes, and HPA won’t do that.

Kubernetes version 1.20.2 or newer: While Kubernetes version is important for feature support and stability, it’s not a scaling option itself. A newer Kubernetes version might have performance improvements or bug fixes related to autoscaling, but it doesn’t directly provide the scaling mechanism. The cluster autoscaler is a component that works within Kubernetes, regardless of the specific minor version (within supported ranges). Upgrading Kubernetes version alone won’t scale your Windows nodes.

Virtual nodes with Virtual Kubelet ACI: Virtual nodes in AKS, backed by Azure Container Instances (ACI), are primarily designed and optimized for Linux containers. While technically you might be able to run Windows containers on ACI directly outside of AKS, the AKS Virtual Nodes feature (Virtual Kubelet ACI integration) is not generally supported or recommended for Windows Server containers. Virtual Nodes are intended to provide fast, serverless compute for Linux workloads. The architecture and underlying technology of ACI are more aligned with Linux container execution. Using Virtual Nodes for Windows containers in AKS would likely be unsupported, perform poorly, or not function as expected. Therefore, Virtual Nodes are not a viable scaling option for Windows Server 2019 nodes in AKS.

In Summary:

For an AKS cluster with Windows Server 2019 nodes that needs to autoscale, the cluster autoscaler is the correct and recommended solution. It is the standard and supported mechanism for scaling the number of Windows nodes in AKS based on workload demand. While VM provisioning takes time, it’s the necessary approach for adding Windows compute capacity. The other options are either not relevant for node scaling (HPA, Kubernetes version) or not applicable/recommended for Windows nodes (Virtual Nodes).

Final Answer: cluster autoscaler

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

You plan to deploy 10 applications to Azure. The applications will be deployed to two Azure Kubernetes Service (AKS) clusters. Each cluster will be deployed to a separate Azure region.

The application deployment must meet the following requirements:

  • Ensure that the applications remain available if a single AKS cluster fails.
  • Ensure that the connection traffic over the internet is encrypted by using SSL without having to configure SSL on each container.

Which service should you include in the recommendation?

AKS ingress controller
Azure Traffic Manager
Azure Front Door
Azure Load Balancer

A

The correct answer is Azure Front Door.

Here’s why:

Ensure application availability if a single AKS cluster fails: Azure Front Door is a global, scalable entry point that uses Microsoft’s global edge network. It can route traffic to the closest and healthiest AKS cluster based on various routing methods, including priority-based routing for failover scenarios. If one AKS cluster fails, Azure Front Door can automatically direct traffic to the healthy cluster in the other region, ensuring application availability.

Ensure SSL encryption over the internet without configuring SSL on each container: Azure Front Door provides SSL termination at the edge. You can upload your SSL certificate to Azure Front Door, and it will handle the SSL encryption and decryption for all incoming traffic. This means you don’t need to configure SSL certificates and management within each AKS cluster or on each individual container application. Front Door will decrypt the traffic before forwarding it to the backend AKS clusters (using HTTP or HTTPS based on your backend configuration).

Let’s look at why the other options are less suitable:

AKS Ingress Controller: An Ingress Controller is essential for routing HTTP/HTTPS traffic within a single AKS cluster. It can handle SSL termination within the cluster, but it’s primarily a cluster-level component. It doesn’t inherently provide cross-region failover or global load balancing across multiple AKS clusters in different regions. While you can configure ingress controllers in both AKS clusters, you’d still need another service in front to distribute traffic and handle failover across regions.

Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. It can route traffic to different endpoints (such as the public IPs of your AKS cluster load balancers) based on DNS resolution. While it can provide failover across regions, it works purely at the DNS level, never terminates client connections, and therefore cannot provide SSL termination. You would still need to configure SSL termination within each AKS cluster or on your application containers if you used Traffic Manager for regional failover. Traffic Manager is also less capable than Front Door for managing web application traffic.

Azure Load Balancer: Azure Load Balancer is a regional, Layer 4 (TCP/UDP) load balancer. It distributes traffic within a virtual network or exposes services to the internet within a single Azure region. It is not designed for cross-region failover or global routing of web application traffic across multiple AKS clusters in different regions. Because it is a pass-through Layer 4 service, it does not terminate SSL; encryption would still have to be handled inside each cluster or on each container. It is therefore not a suitable solution for global SSL termination and cross-region application availability in this scenario.

In summary:

Azure Front Door is the most appropriate service because it directly addresses both requirements: ensuring application availability across regions through global routing and providing SSL termination at the edge, simplifying SSL management and improving security and performance.

Final Answer: Azure Front Door

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

HOTSPOT

You have an Azure web app named App1 and an Azure key vault named KV1.

App1 stores database connection strings in KV1.

App1 performs the following types of requests to KV1:

✑ Get

✑ List

✑ Wrap

✑ Delete

✑ Unwrap

✑ Backup

✑ Decrypt

✑ Encrypt

You are evaluating the continuity of service for App1.

You need to identify the following if the Azure region that hosts KV1 becomes unavailable:

✑ To where will KV1 fail over?

✑ During the failover, which request type will be unavailable?

What should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
To where will KV1 fail over?
A server in the same Availability Set
A server in the same fault domain
A server in the same paired region
A virtual machine in a scale set
During the failover, which request type will be unavailable?
Backup
Decrypt
Delete
Encrypt
Get
List
Unwrap
Wrap

A

To where will KV1 fail over?

A server in the same paired region

Explanation: Azure Key Vault is designed for high availability and disaster recovery. In the event of a regional outage, Azure Key Vault fails over to its paired region. Azure paired regions are geographically separated to provide resilience against regional disasters but remain within the same geography to meet data residency and compliance requirements.

Paired Regions: Azure regions are often paired. For example, East US is paired with West US. In case of a regional disaster in East US, services are designed to failover to West US. Key Vault, as a critical service, follows this pattern.

Availability Sets and Fault Domains: These are mechanisms for high availability within a single region. They protect against hardware failures within a datacenter but do not protect against a regional outage.

Virtual machine in a scale set: VM scale sets are for compute resources and not relevant to Key Vault’s failover mechanism.

During the failover, which request type will be unavailable?

Delete

Explanation: During a failover to the paired region, Azure Key Vault operates in read-only mode. Requests that only read vault contents or perform cryptographic operations continue to be served, while requests that write to or change the vault are rejected until the primary region is restored.

Available during failover (read-only mode):

Get: Retrieves a secret, key, or certificate. A read operation, so it remains available.

List: Lists secrets, keys, or certificates. A read operation, so it remains available.

Encrypt and Decrypt: Cryptographic operations that use a key without changing the vault. They remain available.

Wrap and Unwrap: Key-wrapping and key-unwrapping operations. They remain available.

Backup: Reads the vault contents to produce a backup blob. It is supported in read-only mode and remains available.

Unavailable during failover:

Delete: Removing a secret, key, or certificate changes the state of the vault. Because the vault is read-only during the failover, Delete requests (like other write operations such as create, import, or restore) fail until the primary region recovers.

Impact on App1: App1 can keep retrieving its connection strings (Get, List) and performing its cryptographic operations (Encrypt, Decrypt, Wrap, Unwrap) throughout the failover; only its Delete requests will be unavailable.

Final Answer:

To where will KV1 fail over? A server in the same paired region

During the failover, which request type will be unavailable? Delete

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

HOTSPOT

You have an Azure App Service web app named Webapp1 that connects to an Azure SQL database named DB1. Webapp1 and DB1 are deployed to the East US Azure region.

You need to ensure that all the traffic between Webapp1 and DB1 is sent via a private connection.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Create a virtual network that contains at least:
1 subnet
2 subnets
3 subnets
From the virtual network, configure name resolution to use:
A private DNS zone
A public DNS zone
The Azure DNS Private Resolver

A

Answer Area:

Create a virtual network that contains at least: 1 subnet
From the virtual network, configure name resolution to use: A private DNS zone

Explanation:

To ensure that traffic between Webapp1 and DB1 is sent via a private connection, you need to implement Azure Private Link for Azure SQL Database and integrate your App Service with a virtual network. Here’s a breakdown of why the selected options are correct:

  1. Create a virtual network that contains at least: 1 subnet

Why a Virtual Network is Necessary: Azure Private Link works by extending Azure services into your virtual network via private endpoints. A virtual network provides the private network space within Azure where you can establish this private connection.

Why at least 1 subnet is sufficient: You need at least one subnet in the virtual network to host the private endpoint for the Azure SQL Database. While you might have other subnets in a real-world scenario for different components or for subnet delegation, a minimum of one subnet is required for the private endpoint itself. You will place the Private Endpoint for SQL Database in this subnet.

  2. From the virtual network, configure name resolution to use: A private DNS zone

Why a Private DNS Zone is Crucial: When you create a Private Endpoint for Azure SQL Database, Azure creates a network interface card (NIC) within your subnet and assigns it a private IP address from your virtual network’s address space. To access the SQL Database via this private IP, you need to resolve the SQL Database’s fully qualified domain name (FQDN) to this private IP address within your virtual network.

Private DNS Zones are designed for this: Azure Private DNS Zones allow you to manage DNS records for Azure services within your virtual network. When you create a Private Endpoint, Azure automatically integrates it with a Private DNS Zone (or you can manually configure it). This ensures that when Webapp1 (which will be integrated with the VNet) attempts to resolve the SQL Database’s FQDN, it will receive the private IP address of the Private Endpoint, directing traffic over the private connection.

Why not a public DNS zone: A public DNS zone resolves to public IP addresses, which is the opposite of what you want for a private connection.

Why not Azure DNS Private Resolver (directly): While Azure DNS Private Resolver is used for hybrid DNS resolution scenarios (e.g., resolving on-premises DNS from Azure or vice versa), for a purely Azure-to-Azure private connection within a VNet, a Private DNS Zone is the direct and simpler solution for name resolution. Private Resolver is more relevant when you have more complex hybrid networking requirements.

Steps to Achieve Private Connection (Implied by the Hotspot Options):

Create a Virtual Network and a Subnet: You would first create a virtual network in the East US region and at least one subnet within it.

Create a Private Endpoint for Azure SQL Database: You would create a Private Endpoint for your DB1 Azure SQL database. During Private Endpoint creation, you would:

Select the SQL Server resource type.

Select your DB1 SQL Server.

Choose the target subnet you created in the VNet.

Choose to integrate with a private DNS zone (or manually configure DNS later).

Integrate App Service Web App with the Virtual Network (VNet Integration): You would configure VNet Integration for Webapp1 to connect it to the subnet in the VNet. This makes the Web App part of the private network.

Name Resolution (Automatic with Private DNS Zone): If you chose to integrate with a Private DNS Zone during Private Endpoint creation (which is highly recommended and often automatic), Azure will handle the DNS configuration. Webapp1, being in the same VNet, will automatically use the Private DNS Zone and resolve the SQL Database’s FQDN to the private IP of the Private Endpoint.
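
To make the private endpoint and DNS pieces concrete, here is a hedged sketch (Python dictionaries mirroring the ARM resources; the resource names and the <...> placeholders are assumptions, and API versions are omitted) of what gets created: a private endpoint targeting the logical SQL server with the sqlServer group ID, and the privatelink.database.windows.net private DNS zone that the virtual network uses for name resolution:

    # Sketch: private endpoint for the logical SQL server hosting DB1.
    private_endpoint = {
        "type": "Microsoft.Network/privateEndpoints",
        "name": "pe-db1",                                   # example name
        "location": "eastus",
        "properties": {
            "subnet": {"id": "<subnet resource ID>"},       # placeholder
            "privateLinkServiceConnections": [
                {
                    "name": "sql-connection",               # example name
                    "properties": {
                        "privateLinkServiceId": "<SQL server resource ID>",
                        "groupIds": ["sqlServer"]
                    }
                }
            ]
        }
    }

    # Sketch: the private DNS zone that maps the server's FQDN to the
    # endpoint's private IP for resources inside the virtual network.
    private_dns_zone = {
        "type": "Microsoft.Network/privateDnsZones",
        "name": "privatelink.database.windows.net",
        "location": "global"
    }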

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

HOTSPOT

Your on-premises network contains an Active Directory Domain Services (AD DS) domain. The domain contains a server named Server1. Server1 contains an app named App1 that uses AD DS authentication. Remote users access App1 by using a VPN connection to the on-premises network.

You have an Azure AD tenant that syncs with the AD DS domain by using Azure AD Connect.

You need to ensure that the remote users can access App1 without using a VPN. The solution must meet the following requirements:

  • Ensure that the users authenticate by using Azure Multi-Factor Authentication (MFA).
  • Minimize administrative effort.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
In Azure AD:
A managed identity
An access package
An app registration
An enterprise application
On-premises:
A server that runs Windows Server and has the Azure AD Application Proxy connector installed
A server that runs Windows Server and has the on-premises data gateway (standard mode) installed
A server that runs Windows Server and has the Web Application Proxy role service installed

A

Answer Area:

In Azure AD: An enterprise application
On-premises: A server that runs Windows Server and has the Azure AD Application Proxy connector installed

Explanation:

Let’s break down why this is the correct solution and why the other options are not as suitable:

In Azure AD: An enterprise application

Why Enterprise Application? Azure AD Application Proxy, the core component of the solution, is configured as an enterprise application in Azure AD. When you set up Application Proxy, you are essentially registering your on-premises application with Azure AD so that Azure AD can manage authentication and access to it.

Functionality: Enterprise applications in Azure AD are used to manage single sign-on, provisioning, and access control for applications, including those published through Application Proxy.

Why not other options in Azure AD?

A managed identity: Managed identities are used for Azure resources to authenticate to other Azure services. They are not relevant for authenticating users accessing an on-premises application.

An access package: Access packages are used for managing user access to groups, applications, and SharePoint sites, typically within Azure AD and related cloud services. While they manage access, they are not the primary mechanism for exposing an on-premises app securely to the internet with Azure AD authentication.

An app registration: App registrations are used for registering applications with Azure AD, primarily for applications that directly use the Microsoft Identity Platform for authentication (like cloud-native apps or apps using OAuth/OIDC). While related to authentication in Azure AD, it’s not the direct component for publishing on-premises apps via Application Proxy. Enterprise Application is the higher-level concept that encompasses the Application Proxy setup.

On-premises: A server that runs Windows Server and has the Azure AD Application Proxy connector installed

Why Azure AD Application Proxy Connector? Azure AD Application Proxy is specifically designed to securely publish on-premises web applications to the internet, enabling access for remote users without requiring a VPN. The Azure AD Application Proxy connector is the essential on-premises component. It’s a lightweight agent that you install on a Windows Server within your on-premises network.

How it works:

Connector Installation: You install the connector on a server inside your on-premises network. This server needs outbound internet access to communicate with Azure AD Application Proxy services in the cloud.

Application Publishing: You configure an Enterprise Application in Azure AD, specifying the internal URL of App1 on Server1 and the external URL users will use to access it. You also configure pre-authentication to use Azure AD.

User Access: When a remote user tries to access the external URL, they are redirected to Azure AD for authentication. Azure AD enforces MFA as required.

Secure Proxy: After successful Azure AD authentication, Azure AD Application Proxy securely forwards the request to the connector on-premises.

Connector Access: The connector, acting on behalf of the user, then accesses App1 on Server1 using standard protocols (like HTTP/HTTPS) within your internal network.

Response: The response from App1 follows the reverse path back to the user through the connector and Azure AD Application Proxy.

Why not other on-premises options?

A server that runs Windows Server and has the on-premises data gateway (standard mode) installed: The on-premises data gateway is used to connect Azure services like Power BI, Logic Apps, and Power Automate to on-premises data sources (databases, file shares, etc.). It is not for publishing web applications for direct user access with Azure AD authentication.

A server that runs Windows Server and has the Web Application Proxy role service installed: Web Application Proxy (WAP) is an older technology, primarily used with Active Directory Federation Services (AD FS) for publishing web applications. While WAP can provide external access, Azure AD Application Proxy is the more modern, Azure AD-integrated, and simpler solution for this scenario, especially when the goal is to use Azure AD MFA and minimize administrative effort in an Azure AD environment. Azure AD Application Proxy is the direct successor and recommended replacement for WAP in Azure AD scenarios.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

HOTSPOT

You need to recommend a solution to integrate Azure Cosmos DB and Azure Synapse. The solution must meet the following requirements:

  • Traffic from an Azure Synapse workspace to the Azure Cosmos DB account must be sent via the Microsoft backbone network.
  • Traffic from the Azure Synapse workspace to the Azure Cosmos DB account must NOT be routed over the internet.
  • Implementation effort must be minimized.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer Area
When provisioning the Azure Synapse workspace:
Configure a dedicated managed virtual network.
Disable public network access to the workspace endpoints.
Enable the use of the Azure AD authentication.
When configuring the Azure Cosmos DB account, enable:
Managed private endpoints
Server-level firewall rules
Service endpoint policies

A

Answer Area:

When provisioning the Azure Synapse workspace:
Configure a dedicated managed virtual network.
Disable public network access to the workspace endpoints.

When configuring the Azure Cosmos DB account, enable:
Managed private endpoints

Explanation:

Let’s break down each selection and why they are the correct choices to meet the requirements:

When provisioning the Azure Synapse workspace:

Configure a dedicated managed virtual network.

Correct: Configuring a dedicated managed virtual network for the Azure Synapse workspace is crucial. A Managed Virtual Network (VNet) isolates the Synapse workspace within its own private network environment. This is the foundation for ensuring private connectivity and preventing internet exposure. By deploying Synapse within a Managed VNet, you ensure that all outbound connections from Synapse can be routed through private links.

Why it’s necessary: To establish private connections to services like Azure Cosmos DB, Synapse needs to be within a virtual network. Managed VNets simplify this by Azure managing the VNet infrastructure for Synapse.

Disable public network access to the workspace endpoints.

Correct: Disabling public network access to the workspace endpoints is essential to prevent traffic from being routed over the internet. This forces all traffic to go through private connections. By disabling public access, you explicitly restrict access to the Synapse workspace to only those networks and services that have private connectivity established.

Why it’s necessary: This enforces the “no internet routing” requirement and enhances security by limiting the attack surface.

Enable the use of the Azure AD authentication.

Incorrect: While Azure AD authentication is important for securing access to Azure Synapse and Azure Cosmos DB, it does not directly address the requirement of network traffic routing over the Microsoft backbone network and avoiding the internet. Azure AD authentication is about authentication and authorization, not network connectivity path. It’s a good security practice, but not directly relevant to the private networking requirement in this question.

When configuring the Azure Cosmos DB account, enable:

Managed private endpoints

Correct: Enabling Managed private endpoints on the Azure Cosmos DB account is the key to establishing a private link from the Synapse Managed VNet to Cosmos DB. Managed private endpoints in Synapse allow you to create private endpoints to other Azure PaaS services, including Cosmos DB, from within the Synapse Managed VNet. This ensures that the traffic between Synapse and Cosmos DB flows privately over the Microsoft backbone network and does not traverse the public internet.

Why it’s necessary: Private endpoints are the Azure Private Link technology that provides private connectivity to Azure services. Managed private endpoints simplify the creation and management of these private endpoints from Synapse.

Server-level firewall rules

Incorrect: While server-level firewall rules on Azure Cosmos DB can restrict access to specific IP ranges or virtual networks, they do not inherently guarantee that traffic will be routed via the Microsoft backbone network and avoid the internet. Firewall rules are primarily for access control, not for enforcing a private network path. While you can use firewall rules in conjunction with other private networking solutions, they are not the primary solution for achieving private connectivity in this scenario. They are more about authorization (who can connect) than routing path.

Service endpoint policies

Incorrect: Service endpoint policies are used in conjunction with service endpoints. Service endpoints provide secure and direct connectivity from virtual networks to Azure services, keeping traffic on the Azure backbone. However, service endpoints are typically configured on the subnet level and are generally being superseded by Private Link for many scenarios, especially for PaaS-to-PaaS private connections. Managed private endpoints are the more modern and recommended approach for private connections from Synapse to Cosmos DB and offer a simpler configuration for this integration. Service endpoints are also less granular and less flexible than Private Endpoints for this specific scenario.

In summary, to meet the requirements of private connectivity, Microsoft backbone network traffic, no internet routing, and minimized implementation effort, the optimal solution is to:

Provision Azure Synapse with a dedicated managed virtual network.

Disable public network access to the Synapse workspace.

Enable Managed private endpoints for the Azure Cosmos DB account and create a managed private endpoint from Synapse to Cosmos DB.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Notification Hubs
B. Azure Application Gateway
C. Azure Queue Storage
D. Azure Traffic Manager

A

The correct answer is C. Azure Queue Storage

Explanation:

Here’s why Azure Queue Storage is the most appropriate recommendation and why the other options are not suitable for this scenario:

Azure Queue Storage:

Asynchronous Communication: Azure Queue Storage is specifically designed for asynchronous message queuing. Cloud services can enqueue messages into a queue, and other services can independently dequeue and process these messages. This decouples the services and enables asynchronous communication.

XML Messages: Azure Queue Storage can handle messages in various formats, including XML. You can serialize your transaction information into XML and place it in the message body of queue messages.

Service-to-Service Communication: Queue Storage is ideal for communication between different cloud services within an application. Different services can access the same queue to send and receive messages, facilitating communication between order processing, billing, payment, inventory, and shipping services in your application.

Reliability and Scalability: Azure Queue Storage is a highly reliable and scalable service, ensuring message delivery and handling even under heavy load.
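
A minimal sketch of this pattern with the azure-storage-queue Python SDK is shown below. The connection string, queue name, and XML payload are assumptions for illustration, and the queue is assumed to already exist; the Base64 encode/decode policies simply keep the XML text intact in transit.

    # pip install azure-storage-queue
    from azure.storage.queue import (
        QueueClient,
        TextBase64EncodePolicy,
        TextBase64DecodePolicy,
    )

    # Assumption: a queue named "orders" shared by the order-processing and
    # billing services, reachable with this connection string.
    conn_str = "<storage account connection string>"

    queue = QueueClient.from_connection_string(
        conn_str,
        queue_name="orders",
        message_encode_policy=TextBase64EncodePolicy(),
        message_decode_policy=TextBase64DecodePolicy(),
    )

    # Producer side (for example, the order service): enqueue an XML message.
    order_xml = "<order><id>12345</id><amount currency='USD'>99.90</amount></order>"
    queue.send_message(order_xml)

    # Consumer side (for example, the billing service): dequeue and process
    # messages asynchronously, then delete them once handled.
    for message in queue.receive_messages():
        print("Processing:", message.content)
        queue.delete_message(message)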

Why other options are incorrect:

A. Azure Notification Hubs: Azure Notification Hubs is designed for sending push notifications to mobile devices (iOS, Android, Windows, etc.). It is not intended for service-to-service communication or processing transaction information.

B. Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and Application Delivery Controller (ADC). It operates at Layer 7 of the OSI model and is used to manage and route HTTP/HTTPS traffic to web applications. It’s not meant for general-purpose asynchronous message queuing between cloud services.

D. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. It directs user traffic to different endpoints based on factors like performance, geography, or priority. It is primarily used for improving the availability and responsiveness of web applications by distributing traffic across different Azure regions or services. It’s not designed for asynchronous service-to-service communication.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

You are developing an app that will use Azure Functions to process Azure Event Hubs events. Request processing is estimated to take between five and 20 minutes.

You need to recommend a hosting solution that meets the following requirements:

  • Supports estimates of request processing runtimes
  • Supports event-driven autoscaling for the app

Which hosting plan should you recommend?

A. Dedicated
B. Consumption
C. App Service
D. Premium

A

The correct answer is D. Premium

Explanation:

Let’s analyze each hosting plan against the requirements:

A. Dedicated (App Service Plan):

Supports estimates of request processing runtimes: Yes. In an App Service (Dedicated) plan, the function timeout defaults to 30 minutes and can be set to unlimited (with Always On enabled), so you can run functions for 20 minutes or longer within the resources allocated to your App Service plan.

Supports event-driven autoscaling for the app: While App Service plans offer autoscaling, it’s primarily based on metrics like CPU utilization, memory consumption, and queue length (for Service Bus queues, for example). It’s not directly event-driven in the same way as Consumption or Premium plans are for Event Hubs. You would need to configure metric-based autoscaling rules, which are less reactive to immediate event bursts.

Cost: Dedicated plans can be more expensive, especially if your event processing is sporadic, as you pay for dedicated resources continuously, even when idle.

B. Consumption:

Supports estimates of request processing runtimes: No. Consumption plan functions have a default timeout of 5 minutes and a maximum of 10 minutes. A processing time of up to 20 minutes exceeds what the Consumption plan can support.

Supports event-driven autoscaling for the app: Yes, absolutely. Consumption plan is designed for event-driven scaling. It automatically scales based on the number of incoming events in the Event Hub. This is a key strength of the Consumption plan.

Cost: Consumption plan is generally the most cost-effective for event-driven workloads because you only pay for the actual compute time used when your functions are running.

C. App Service:

This is essentially the same as option A - Dedicated (App Service Plan). The analysis for option A applies here.

D. Premium:

Supports estimates of request processing runtimes: Yes. The Premium plan significantly extends the execution timeout compared to Consumption: the default is 30 minutes, and the timeout can be raised further (effectively unbounded). A 20-minute processing time is well within the capabilities of the Premium plan.

Supports event-driven autoscaling for the app: Yes. Premium plan also provides event-driven autoscaling, similar to the Consumption plan. It scales elastically based on the event load from Event Hubs. Premium plan also offers more control over scaling behavior and instance sizes compared to Consumption.

Cost: Premium plan is more expensive than Consumption but generally less expensive than Dedicated (App Service) plans for event-driven workloads, especially if your load is variable. It offers a balance of scalability, features, and cost.

Why Premium is the best choice:

Given the requirement for processing times of up to 20 minutes, the Consumption plan (B) is immediately ruled out by its 10-minute maximum timeout.

Dedicated (App Service) plan (A and C) can handle the runtime and offers scaling, but the autoscaling is less directly event-driven, and it’s generally more costly for event-driven workloads than Premium.

Premium plan (D) is the ideal solution because it:

Easily supports the 20-minute processing time with its extended execution timeout.

Provides event-driven autoscaling specifically designed for event sources like Event Hubs.

Offers a good balance of cost and features for event-driven scenarios, being more cost-effective than dedicated plans and providing more guarantees and features than Consumption.

Therefore, the most appropriate hosting plan recommendation is D. Premium.
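
If you want the longer runtime to be explicit in configuration, the function app's host.json exposes a functionTimeout setting. The sketch below (plain Python writing that JSON file; the 30-minute value is an example assumption that comfortably covers the 5-20 minute estimate on the Premium plan) shows the shape of the setting:

    import json

    # host.json sits at the root of the function app project.
    # "functionTimeout" uses hh:mm:ss format.
    host_settings = {
        "version": "2.0",
        "functionTimeout": "00:30:00"  # example: 30 minutes on the Premium plan
    }

    with open("host.json", "w") as f:
        json.dump(host_settings, f, indent=2)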

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
53
Q

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Basic
B. Azure SQL Database Business Critical
C. Azure SQL Database Standard
D. Azure SQL Managed Instance General Purpose

A

The correct answer is B. Azure SQL Database Business Critical

Explanation:

Let’s break down why Business Critical is the best option based on each requirement:

Failover between replicas of the database must occur without any data loss.

Azure SQL Database Business Critical is designed for mission-critical applications with the highest performance and high availability requirements. It uses synchronous replication to three replicas across availability zones within a region. Synchronous replication ensures that every transaction is committed to all replicas before being acknowledged to the client. This guarantees zero data loss during failover because all replicas are always in sync.

The database must remain available in the event of a zone outage.

Azure SQL Database Business Critical supports zone redundancy. When configured as zone-redundant, the three replicas are placed in different availability zones within the Azure region. If one availability zone fails, the database remains available because the other replicas in the healthy zones continue to operate.

Costs must be minimized.

While Azure SQL Database Business Critical is the most expensive deployment option among the single database options, it is the only option that fully guarantees zero data loss and zone outage resilience as explicitly stated in the requirements. The “minimize costs” requirement is important, but it must be balanced against the critical availability and data loss prevention requirements. In this scenario, the availability and zero data loss requirements are paramount, and Business Critical is the only option that fully satisfies them.

Let’s look at why the other options are less suitable:

A. Azure SQL Database Basic:

Basic tier is the least expensive option, but it does not offer high availability or zone redundancy. It is a single instance database and is not designed for zero data loss failover or zone outage resilience.

C. Azure SQL Database Standard:

Azure SQL Database Standard uses the standard availability model, which relies on remote standard storage and, in the non-zone-redundant configuration, keeps the database within a single datacenter. It provides good availability and durability, but failovers carry a very small potential for data loss in extreme scenarios (Azure aims for near-zero data loss in typical failovers). Standard can be configured for zone redundancy, providing zone-outage resilience; even so, the zero-data-loss guarantee during failover is stronger in Business Critical because of its local SSD storage and synchronous replica architecture. Standard is more cost-effective than Business Critical but does not guarantee zero data loss as strongly.

D. Azure SQL Managed Instance General Purpose:

Azure SQL Managed Instance General Purpose also offers high availability and can be configured for zone redundancy. It uses standard storage and provides good performance. However, similar to Standard single database, while it aims for minimal data loss, it doesn’t have the same explicit guarantee of zero data loss failover as Business Critical. Also, for a single database, Managed Instance is typically more expensive and more complex to manage than a single Azure SQL Database.
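To make the Business Critical recommendation concrete, here is a hedged sketch (not from the original card) of provisioning a zone-redundant Business Critical database with the Azure SDK for Python. The subscription, resource group, server, and region names are hypothetical, and the exact model fields may vary by azure-mgmt-sql version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database, Sku

# Hypothetical subscription and resource names, for illustration only.
client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.databases.begin_create_or_update(
    resource_group_name="rg-sql",
    server_name="sql-server1",
    database_name="db1",
    parameters=Database(
        location="eastus",
        sku=Sku(name="BC_Gen5_2", tier="BusinessCritical"),
        zone_redundant=True,  # spread the synchronous replicas across zones
    ),
).result()
```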

54
Q

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Hyperscale
B. Azure SQL Database Premium
C. Azure SQL Database Standard
D. Azure SQL Managed Instance General Purpose

A

The correct answer is B. Azure SQL Database Premium.

Rationale:

Let’s break down why Azure SQL Database Premium is the most suitable option based on the requirements:

Failover between replicas of the database must occur without any data loss.

Azure SQL Database Premium (and Business Critical, which is often considered the evolution of Premium) tiers are designed for mission-critical workloads that require the highest levels of availability and data durability. These tiers utilize synchronous replication. Synchronous replication means that a transaction is not considered committed until it is written to both the primary replica and at least one secondary replica. This ensures zero data loss in the event of a failover because the secondary replicas are always transactionally consistent with the primary.

The database must remain available in the event of a zone outage.

Azure SQL Database Premium (and Business Critical) supports Zone Redundancy. When you configure a database as zone-redundant in the Premium tier, Azure automatically provisions and maintains replicas of your database across multiple availability zones within the same Azure region. Availability Zones are physically separate datacenters within an Azure region. If one zone experiences an outage, the database remains available because the replicas in the other zones continue to function.

Costs must be minimized.

While Azure SQL Database Premium is more expensive than Standard and Hyperscale tiers, it is the most cost-effective option that fully meets the zero data loss and zone outage availability requirements. The “minimize costs” requirement must be balanced with the other critical requirements. In this scenario, the need for zero data loss and zone redundancy takes precedence over minimizing costs to the absolute lowest possible level. Basic and Standard tiers are cheaper but do not guarantee zero data loss and zone outage resilience to the same degree as Premium. Hyperscale, while potentially cost-effective for very large databases, might be more expensive for smaller to medium-sized databases than Premium and is not specifically designed for the same level of guaranteed zero data loss in failovers as Premium/Business Critical.

Let’s look at why the other options are less suitable:

A. Azure SQL Database Hyperscale:

Hyperscale is designed for very large databases and high scalability. While it offers high availability and can be zone-redundant, its architecture prioritizes scalability and performance for massive datasets. While it aims for high data durability, it doesn’t offer the same explicit guarantee of zero data loss during failover as the Premium/Business Critical tiers with synchronous replication across replicas designed for that specific purpose. Also, for smaller databases, Hyperscale might be more complex and not necessarily the most cost-effective for the specific needs outlined.

C. Azure SQL Database Standard:

Azure SQL Database Standard offers high availability and can be configured for zone redundancy. However, it uses the standard availability model backed by remote standard storage, so it does not provide the same guaranteed zero data loss during failovers as the Premium/Business Critical tiers. Failovers in the Standard tier are generally fast but carry a very slight potential for data loss in extreme scenarios.

D. Azure SQL Managed Instance General Purpose:

Azure SQL Managed Instance General Purpose also offers high availability and can be zone-redundant. However, for a single database requirement, using Managed Instance is often overkill and more complex and potentially more expensive than using a single Azure SQL Database. While General Purpose Managed Instance is cheaper than Business Critical Managed Instance, it still doesn’t offer the same guaranteed zero data loss as Azure SQL Database Premium/Business Critical.

Important Note: The term “Azure SQL Database Premium” is sometimes used interchangeably with “Azure SQL Database Business Critical” in older documentation or exam questions. Business Critical is the current name for the tier that provides the highest level of availability, zero data loss, and zone redundancy for single Azure SQL Databases. If “Premium” in this question is intended to refer to the current highest availability tier, then it means Business Critical.

55
Q

HOTSPOT

Your company has offices in New York City, Sydney, Paris, and Johannesburg.

The company has an Azure subscription.

You plan to deploy a new Azure networking solution that meets the following requirements:

  • Connects to ExpressRoute circuits in the Azure regions of East US, Southeast Asia, North Europe, and South Africa
  • Minimizes latency by supporting connection in three regions
  • Supports Site-to-site VPN connections
  • Minimizes costs

You need to identify the minimum number of Azure Virtual WAN hubs that you must deploy, and which virtual WAN SKU to use.

What should you identify? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Number of Virtual WAN hubs:
1
2
3
4
Virtual WAN SKU:
Basic
Standard

A

Answer Area:
Number of Virtual WAN hubs: 3
Virtual WAN SKU: Standard

Explanation:

Number of Virtual WAN hubs: 3

Requirement for ExpressRoute in Four Regions: The company has ExpressRoute circuits in East US, Southeast Asia, North Europe, and South Africa. Virtual WAN hubs act as central connectivity points within Azure for these ExpressRoute circuits.

Minimizing Latency in Three Regions: To minimize latency for users in three of the four office locations, deploying Virtual WAN hubs in or near three of the four Azure regions is the most effective approach. You would strategically choose three locations that best serve the majority of your users and traffic patterns. For example, placing hubs in East US (for New York), North Europe (for Paris), and Southeast Asia (for Sydney) would cover three major office locations.

Connectivity to All Four Regions: Even with three hubs, you can still connect to ExpressRoute circuits in all four regions. A single Virtual WAN hub can connect to multiple ExpressRoute circuits, even if those circuits are in different Azure regions. The hubs act as aggregation points. You do not need a one-to-one mapping of hubs to ExpressRoute regions to achieve connectivity.

Minimizing Costs: Deploying three hubs is the minimum required to meet the latency requirement for three regions while still connecting to all four ExpressRoute circuits. Deploying four hubs would also technically work but would unnecessarily increase costs without providing additional benefit beyond the stated requirements.

Virtual WAN SKU: Standard

Requirement for ExpressRoute and Site-to-site VPN: The requirements explicitly state the need to connect to ExpressRoute circuits and support Site-to-site VPN connections.

SKU Capabilities:

Basic SKU: The Basic Virtual WAN SKU is limited. It only supports Site-to-site VPN connections. It does not support ExpressRoute connections.

Standard SKU: The Standard Virtual WAN SKU provides full functionality and supports both ExpressRoute and Site-to-site VPN connections, along with other advanced features like VPN encryption, routing policies, and more.

Choosing the Correct SKU: Since the solution must connect to ExpressRoute circuits, the Standard Virtual WAN SKU is mandatory. The Basic SKU is insufficient to meet the ExpressRoute connectivity requirement.

56
Q

You have an Azure Functions microservice app named App1 that is hosted in the Consumption plan. App1 uses an Azure Queue Storage trigger.

You plan to migrate App1 to an Azure Kubernetes Service (AKS) cluster.

You need to prepare the AKS cluster to support App1. The solution must meet the following requirements:

  • Use the same scaling mechanism as the current deployment.
  • Support kubenet and Azure Container Networking Interface (CNI) networking.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct answer is worth one point.

A. Configure the horizontal pod autoscaler.
B. Install Virtual Kubelet.
C. Configure the AKS cluster autoscaler.
D. Configure the virtual node add-on.
E. Install Kubernetes-based Event Driven Autoscaling (KEDA).

A

The correct answers are A. Configure the horizontal pod autoscaler and E. Install Kubernetes-based Event Driven Autoscaling (KEDA).

Here’s why:

E. Install Kubernetes-based Event Driven Autoscaling (KEDA):

Mimicking Consumption Plan Scaling: The key requirement is to use the “same scaling mechanism as the current deployment.” Azure Functions Consumption plan with a Queue Storage trigger scales automatically based on the number of messages in the queue. KEDA is specifically designed to bring this type of event-driven autoscaling to Kubernetes. KEDA can monitor the length of Azure Queue Storage queues (and many other event sources) and automatically scale the number of pods in your AKS deployment up or down based on queue message count. This directly mirrors the scaling behavior of the Consumption plan for queue triggers.

Support for Networking: KEDA operates at the application level within Kubernetes and is independent of the underlying AKS networking model (kubenet or Azure CNI). It will function correctly with both networking options.

A. Configure the horizontal pod autoscaler (HPA):

Kubernetes Pod Autoscaling: While KEDA provides the event-driven trigger for scaling based on queue length, the Horizontal Pod Autoscaler (HPA) is the standard Kubernetes component that actually performs the scaling of pods. HPA scales the number of pod replicas in a deployment or statefulset based on observed CPU utilization, memory utilization, or custom metrics.

Integration with KEDA: KEDA often works in conjunction with HPA. KEDA can act as a custom metrics provider for HPA. KEDA monitors the queue length and exposes this as a custom metric. HPA can then be configured to scale the deployment based on this custom metric provided by KEDA. This creates a complete event-driven autoscaling solution.

General Kubernetes Scaling: Even if not directly integrated with KEDA, HPA is a fundamental and essential component for autoscaling in Kubernetes in general. It’s good practice to have HPA configured in an AKS cluster to handle scaling based on resource utilization, even if event-driven scaling via KEDA is the primary mechanism for this specific application.

Why other options are incorrect:

B. Install Virtual Kubelet and D. Configure the virtual node add-on:

Purpose of Virtual Kubelet/Virtual Nodes: Virtual Kubelet and the virtual node add-on are about integrating AKS with Azure Container Instances (ACI). They allow you to offload pod execution to ACI to achieve serverless compute, potentially for cost optimization or burst scaling. However, they are not directly related to event-driven autoscaling based on Azure Queue Storage triggers or mimicking the Consumption plan scaling mechanism. They are more about where the pods run, not when and how many pods to run based on events. They don’t replace the need for an autoscaler like KEDA for event-driven scenarios.

C. Configure the AKS cluster autoscaler:

Cluster Autoscaler vs. Pod Autoscaler: The AKS cluster autoscaler scales the number of nodes in your AKS cluster. It adds or removes virtual machines from the AKS node pool based on the overall resource demands of the pods. While important for overall cluster capacity management, it’s not the same as autoscaling the application pods (App1) based on queue events, which is the requirement here. The cluster autoscaler reacts to resource pressure at the node level, not directly to application-specific events like queue messages. Pod autoscaling (HPA and KEDA) is what directly scales the application instances.
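To make the KEDA piece concrete, here is a hedged sketch, assuming KEDA is already installed on the AKS cluster, that registers a ScaledObject for an Azure Queue Storage trigger through the official Kubernetes Python client; the queue, deployment, and environment-variable names are hypothetical. KEDA then creates and drives the HPA for the App1 deployment based on queue length.

```python
from kubernetes import client, config

# Assumes KEDA is installed and kubeconfig points at the AKS cluster.
config.load_kube_config()

# ScaledObject that scales the "app1" Deployment on Azure Queue Storage length.
scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "app1-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "app1"},
        "triggers": [
            {
                "type": "azure-queue",
                "metadata": {
                    "queueName": "orders",
                    "queueLength": "5",
                    "connectionFromEnv": "AzureWebJobsStorage",
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```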

57
Q

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Application Gateway
B. Azure Queue Storage
C. Azure Data Lake
D. Azure Traffic Manager

A

The correct answer is B. Azure Queue Storage.

Explanation:

Azure Queue Storage is a service specifically designed for asynchronous message queuing. It allows different components of an application to communicate reliably and asynchronously by sending messages to a queue.

Here’s why Azure Queue Storage is the best fit for the requirements and why the other options are not:

Asynchronous Communication: Azure Queue Storage excels at enabling asynchronous communication. Services that process customer orders, billing, payment, inventory, and shipping can operate independently and communicate by placing messages in queues. This decouples the services, improving resilience and scalability. One service can enqueue a message (e.g., “Order Placed”) and other services (billing, inventory) can dequeue and process that message at their own pace.

XML Messages: Azure Queue Storage can store messages in various formats, including XML. You can serialize your transaction information into XML format and use it as the message body in Azure Queue Storage.

Service-to-Service Communication: Azure Queue Storage is ideal for communication between different cloud services within an application architecture. The different cloud services in your sales application can use queues to exchange transaction information without needing to directly connect or wait for each other.

Reliability: Azure Queue Storage provides reliable message delivery. Messages are persisted and will be delivered even if components fail.

Why other options are incorrect:

A. Azure Application Gateway: Azure Application Gateway is a web traffic load balancer and Application Delivery Controller (ADC). It is used to manage and route HTTP/HTTPS traffic to web applications. It is not designed for general-purpose asynchronous message queuing between services. Application Gateway is for client-to-application traffic management, not service-to-service messaging.

C. Azure Data Lake: Azure Data Lake is a massively scalable and secure data lake for big data analytics workloads. It is designed for storing and analyzing large volumes of data, typically in batch processing scenarios. It’s not meant for real-time or near real-time asynchronous communication between services that process transactions. Data Lake is for data at rest and analytics, not for transactional messaging.

D. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. It directs client traffic to different endpoints based on routing methods like performance, priority, or geographic location. It is primarily used for improving the availability and responsiveness of web applications by distributing traffic across different Azure regions or services. It’s not designed for asynchronous service-to-service communication. Traffic Manager is also for client-to-application traffic management, focusing on global routing and availability.

Therefore, Azure Queue Storage is the most appropriate and recommended solution for enabling asynchronous communication of transaction information using XML messages between the different cloud services in your sales application.

Final Answer: B
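As a small, hedged sketch of the pattern (the connection string, queue name, and XML schema are hypothetical), one service enqueues the transaction as XML with azure-storage-queue and another dequeues and parses it at its own pace:

```python
import xml.etree.ElementTree as ET

from azure.storage.queue import QueueClient

# Hypothetical connection string and queue name, for illustration only.
queue = QueueClient.from_connection_string("<connection-string>", "transactions")

# The ordering service enqueues transaction details as an XML document.
order_xml = "<order><id>1001</id><amount>59.90</amount></order>"
queue.send_message(order_xml)

# The billing service dequeues and parses the XML asynchronously.
for message in queue.receive_messages():
    order = ET.fromstring(message.content)
    print("Billing order", order.findtext("id"))
    queue.delete_message(message)
```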

58
Q

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Managed Instance General Purpose
B. Azure SQL Database Hyperscale
C. Azure SQL Database Premium
D. Azure SQL Managed Instance Business Critical

A

The correct answer is C. Azure SQL Database Premium.

Explanation:

Let’s break down why Azure SQL Database Premium is the most suitable choice for meeting all the requirements, while also considering the other options:

Failover between replicas of the database must occur without any data loss.

Azure SQL Database Premium (and Business Critical) tiers are designed for mission-critical applications and guarantee zero data loss during failovers. They achieve this through synchronous replication. In synchronous replication, a transaction is not considered committed until it is written to both the primary and secondary replicas. This ensures that in the event of a failover, the secondary replica is transactionally consistent with the primary, and no data is lost.

The database must remain available in the event of a zone outage.

Azure SQL Database Premium (and Business Critical) support Zone Redundancy. When you configure a database as zone-redundant in the Premium tier, Azure automatically provisions and maintains replicas across multiple availability zones within the same Azure region. If one zone fails, the database remains available as replicas in other zones continue to operate.

Costs must be minimized.

Azure SQL Database Premium is more expensive than Standard and Hyperscale, but it is less expensive than Azure SQL Managed Instance Business Critical. While “minimize costs” is a requirement, it’s balanced against the critical need for zero data loss and zone outage resilience. For achieving these high availability requirements, Premium offers a more cost-effective solution compared to Business Critical Managed Instance.

Let’s analyze why the other options are less suitable:

A. Azure SQL Managed Instance General Purpose:

Data Loss Failover: General Purpose Managed Instance aims for high availability but does not guarantee zero data loss in all failover scenarios. It uses standard storage and while it uses synchronous replication within a single availability zone (and across zones if zone-redundant), it might have a small potential for data loss (RPO > 0).

Zone Outage Resilience: General Purpose Managed Instance can be configured for zone redundancy.

Cost: General Purpose Managed Instance is generally less expensive than Business Critical Managed Instance, but often more expensive than Azure SQL Database Premium for comparable single database scenarios.

B. Azure SQL Database Hyperscale:

Data Loss Failover: Hyperscale is designed for very large databases and high performance. While it has high data durability and availability, its architecture, which separates compute and storage tiers, might not guarantee absolute zero data loss in all failover scenarios compared to the synchronous replication of Premium/Business Critical.

Zone Outage Resilience: Hyperscale can be configured for zone redundancy.

Cost: Hyperscale can be cost-effective for very large databases, but might be more expensive than Premium for smaller to medium-sized databases and is not specifically optimized for zero data loss guarantees in the same way as Premium/Business Critical.

D. Azure SQL Managed Instance Business Critical:

Data Loss Failover: Azure SQL Managed Instance Business Critical is designed for the highest levels of performance and availability and guarantees zero data loss during failover due to synchronous replication.

Zone Outage Resilience: Azure SQL Managed Instance Business Critical is zone-redundant by default and is designed to survive zone outages.

Cost: Business Critical Managed Instance is the most expensive option listed. While it meets the zero data loss and zone outage requirements, it is not the option that minimizes costs while meeting these requirements.

59
Q

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.

You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages.

What should you include in the recommendation?

A. Azure Service Fabric
B. Azure Traffic Manager
C. Azure Queue Storage
D. Azure Notification Hubs

A

The correct answer is C. Azure Queue Storage.

Explanation:

Azure Queue Storage is a service specifically designed for asynchronous message queuing. It enables different components of an application (in this case, the cloud services for orders, billing, payment, inventory, and shipping) to communicate reliably and asynchronously by sending and receiving messages from queues.

Here’s why Azure Queue Storage is the best fit and why the other options are not:

Asynchronous Communication: Azure Queue Storage’s primary purpose is to facilitate asynchronous communication. Services can enqueue messages into a queue without needing to wait for an immediate response from the receiving service. This is ideal for decoupling components and improving the overall responsiveness and resilience of the application.

XML Message Support: Azure Queue Storage can handle messages in various formats, including text-based formats like XML. You can easily serialize your transaction information into XML and use it as the message payload within Azure Queue Storage.

Service-to-Service Communication: Azure Queue Storage is designed for communication between different services within an application architecture. The various cloud services in the sales application can use queues to exchange transaction information without direct, synchronous dependencies.

Reliability and Scalability: Azure Queue Storage is a highly reliable and scalable service. Messages are persisted and guaranteed to be delivered, even if components fail or experience transient issues.

Why the other options are incorrect:

A. Azure Service Fabric: Azure Service Fabric is a distributed systems platform for packaging, deploying, and managing microservices and containerized applications. While Service Fabric can be used for building applications that communicate asynchronously (and is a powerful platform), it’s a much more complex and comprehensive platform than necessary for simply enabling asynchronous XML message communication between services. It’s overkill for this specific requirement. Service Fabric is for building and managing microservices architectures, not just message queuing.

B. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer. It’s used to direct client traffic to different endpoints based on various routing methods (performance, geographic location, etc.). It’s designed for managing client-to-application traffic and improving the availability and responsiveness of web applications. It’s not intended for service-to-service asynchronous communication within the application backend.

D. Azure Notification Hubs: Azure Notification Hubs is a service for sending push notifications to mobile devices (iOS, Android, Windows, etc.) and other client applications. It’s designed for broadcasting notifications to end-users, not for service-to-service communication within a backend system.

Therefore, Azure Queue Storage is the most straightforward, cost-effective, and appropriate solution for enabling asynchronous XML message communication between the cloud services in the sales application.

Final Answer: C

60
Q

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Managed Instance Business Critical
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Standard
D. Azure SQL Database Premium

A

The correct answer is D. Azure SQL Database Premium.

Explanation:

Let’s analyze each option against the given requirements:

A. Azure SQL Managed Instance Business Critical:

Failover without data loss: Yes. Business Critical uses synchronous replication to three replicas, ensuring zero data loss during failover.

Availability in zone outage: Yes. Business Critical is zone-redundant, placing replicas in different availability zones.

Minimize costs: No. Business Critical is the most expensive option among those listed. While it provides the highest level of performance and availability, it’s not the most cost-minimized solution.

B. Azure SQL Managed Instance General Purpose:

Failover without data loss: No. General Purpose relies on remote standard storage rather than synchronous database replicas, so while data loss is minimized, it is not guaranteed to be zero in all failover scenarios.

Availability in zone outage: Yes. General Purpose can be configured for zone redundancy.

Minimize costs: More cost-effective than Business Critical Managed Instance, but generally more expensive than single Azure SQL Database options for comparable workloads.

C. Azure SQL Database Standard:

Failover without data loss: No. Standard tier uses standard storage and synchronous replication within a single datacenter (in non-zone redundant configuration). While it aims for minimal data loss, it’s not guaranteed to be zero in all failover scenarios, especially compared to Premium/Business Critical.

Availability in zone outage: Yes. Standard tier can be configured for zone redundancy.

Minimize costs: More cost-effective than Premium and Managed Instance options.

D. Azure SQL Database Premium:

Failover without data loss: Yes. Premium tier uses synchronous replication to ensure zero data loss during failover.

Availability in zone outage: Yes. Premium tier can be configured to be zone-redundant.

Minimize costs: More cost-effective than Business Critical Managed Instance while still meeting the zero data loss and zone outage requirements. It is more expensive than Standard and Hyperscale, but delivers on the critical requirements.

Rationale for choosing Azure SQL Database Premium:

Azure SQL Database Premium strikes the best balance between the requirements:

It guarantees zero data loss during failover due to synchronous replication.

It provides zone redundancy, ensuring availability during zone outages.

It is more cost-effective than Azure SQL Managed Instance Business Critical, while still meeting the stringent availability and data loss prevention requirements.

While Business Critical Managed Instance also meets the first two requirements, it is significantly more expensive. Standard tier is cheaper but does not guarantee zero data loss. Hyperscale is not listed, but it also doesn’t provide the same level of zero data loss guarantee as Premium/Business Critical and might not be the most cost-effective for all scenarios.

Therefore, Azure SQL Database Premium is the most suitable deployment option when considering zero data loss failover, zone outage availability, and cost minimization.

Final Answer: D

61
Q

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Serverless
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Basic
D. Azure SQL Database Business Critical

A

The correct answer is D. Azure SQL Database Business Critical.

Explanation:

Let’s break down each deployment option against the requirements:

A. Azure SQL Database Serverless:

Failover without data loss: Not guaranteed. Serverless is a compute tier of the vCore General Purpose (and Hyperscale) service tiers, not of Business Critical, so it relies on the General Purpose availability model rather than the synchronous multi-replica architecture that guarantees zero data loss. Its auto-pausing and auto-scaling compute targets cost optimization for intermittent workloads rather than the strongest availability guarantees.

Availability in zone outage: Serverless databases can be configured for zone redundancy in supported service tiers and regions.

Minimize costs: Serverless can be cost-effective for intermittent workloads due to its auto-pausing and auto-scaling compute. However, for continuously available databases, the cost savings might be less significant, and the complexity of managing serverless compute scaling might outweigh the benefits for this specific scenario.

B. Azure SQL Managed Instance General Purpose:

Failover without data loss: No. General Purpose Managed Instance relies on remote standard storage rather than synchronous database replicas, so in the event of a failover there is a potential for data loss (though typically minimal).

Availability in zone outage: Yes. General Purpose Managed Instance can be configured to be zone-redundant.

Minimize costs: More cost-effective than Business Critical Managed Instance, but generally more expensive than single Azure SQL Database options (like Standard, Premium) for comparable single database scenarios.

C. Azure SQL Database Basic:

Failover without data loss: No. Basic tier is a single instance database with no high availability. Failover will likely result in data loss and downtime.

Availability in zone outage: No. Basic tier is not zone-redundant and offers no protection against zone outages.

Minimize costs: Yes. Basic is the least expensive option, but it fails to meet the other critical requirements.

D. Azure SQL Database Business Critical:

Failover without data loss: Yes. Business Critical is designed for mission-critical workloads and guarantees zero data loss during failovers. It uses synchronous replication to three replicas, ensuring that every transaction is committed to multiple replicas before being acknowledged.

Availability in zone outage: Yes. Business Critical is zone-redundant by default, placing replicas in different availability zones to ensure availability even if a zone fails.

Minimize costs: No. Business Critical is the most expensive option among those listed. However, it is the only option that definitively meets the zero data loss and zone outage availability requirements. The “minimize costs” requirement is important but must be balanced against the other critical needs.

Rationale for choosing Azure SQL Database Business Critical:

Azure SQL Database Business Critical is the only option that definitively and reliably meets all the core requirements, especially the critical ones:

Guaranteed zero data loss failover: Achieved through synchronous replication.

Zone outage availability: Achieved through built-in zone redundancy.

62
Q

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Standard
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Serverless
D. Azure SQL Database Premium

A

The correct answer is D. Azure SQL Database Premium.

Explanation:

Let’s analyze each option against the requirements:

A. Azure SQL Database Standard:

Failover without data loss: No. While Standard tier offers high availability, it uses synchronous replication within a single datacenter (for non-zone redundant deployments). It does not guarantee zero data loss in all failover scenarios, especially during zone outages. There’s a potential for minimal data loss in asynchronous replication scenarios or during certain types of failovers.

Availability in zone outage: Yes, Azure SQL Database Standard can be configured for zone redundancy.

Minimize costs: Yes, Standard is generally less expensive than Premium and Managed Instance options. However, it compromises on the zero data loss guarantee.

B. Azure SQL Managed Instance General Purpose:

Failover without data loss: No. General Purpose Managed Instance relies on remote standard storage rather than synchronous database replicas, so it does not guarantee zero data loss during failovers, although Azure aims to minimize any loss.

Availability in zone outage: Yes, Azure SQL Managed Instance General Purpose can be configured for zone redundancy.

Minimize costs: More expensive than Azure SQL Database Standard, but generally less expensive than Business Critical Managed Instance and often less than Azure SQL Database Premium for comparable resource levels in some scenarios.

C. Azure SQL Database Serverless:

Failover without data loss: Not guaranteed. Serverless is a compute tier available in the vCore General Purpose (and Hyperscale) service tiers, not in Premium or Business Critical, so it uses the General Purpose availability model rather than the synchronous multi-replica architecture that guarantees zero data loss during failover.

Availability in zone outage: Azure SQL Database Serverless can be configured for zone redundancy in supported service tiers and regions.

Minimize costs: Azure SQL Database Serverless is designed to minimize costs for intermittent workloads. For a database that needs to be highly available and continuously running, the cost benefits of Serverless might be less pronounced, and the management of auto-scaling might add complexity.

D. Azure SQL Database Premium:

Failover without data loss: Yes. Azure SQL Database Premium is designed for mission-critical workloads and guarantees zero data loss during failovers. It achieves this through synchronous replication.

Availability in zone outage: Yes. Azure SQL Database Premium can be configured to be zone-redundant, placing replicas across availability zones.

Minimize costs: While Azure SQL Database Premium is more expensive than Standard, it is generally less expensive than Azure SQL Managed Instance Business Critical, and it is the most cost-effective option among those listed that fully meets both the zero data loss and zone outage availability requirements.

Rationale for selecting Azure SQL Database Premium:

Azure SQL Database Premium is the optimal choice because it effectively balances all three requirements:

Guaranteed Zero Data Loss Failover: Achieved through synchronous replication.

Zone Outage Availability: Achieved through zone redundancy configuration.

Cost Minimization (within the context of the HA requirements): It provides these HA features at a lower cost than Azure SQL Managed Instance Business Critical, which is the only other option that definitively guarantees zero data loss and zone outage resilience among the choices.

Therefore, for a highly available Azure SQL database requiring zero data loss failover, zone outage availability, and minimized costs (while still meeting the HA needs), Azure SQL Database Premium is the most appropriate deployment option.

Final Answer: D

62
Q

DRAG DROP

You plan to deploy an infrastructure solution that will contain the following configurations:

  • External users will access the infrastructure by using Azure Front Door.
  • External user access to the backend APIs hosted in Azure Kubernetes Service (AKS) will be controlled by using Azure API Management.
  • External users will be authenticated by an Azure AD B2C tenant that uses OpenID Connect-based federation with a third-party identity provider.

Which function does each service provide? To answer, drag the appropriate functions to the correct services. Each function may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
Functions
Protection against Open Web Application Security Project (OWASP) vulnerabilities
IP filtering on a per-API level
Validation of Azure B2C JSON Web Tokens (JWTs)
Answer Area
Front Door: Function
API Management: Function

A

The correct answer, selecting one function for each service that is the most primary and relevant based on typical Azure architecture best practices, is:

Front Door: Function

Protection against Open Web Application Security Project (OWASP) vulnerabilities

API Management: Function

Validation of Azure B2C JSON Web Tokens (JWTs)

Why this is the closest and most correct answer (with single selection per service):

Let’s break down each function and service again, focusing on the primary role of each service in the described architecture:

Protection against Open Web Application Security Project (OWASP) vulnerabilities:

Front Door is primarily designed to be the edge security layer. Its Web Application Firewall (WAF) capability is specifically built to protect against OWASP top 10 vulnerabilities. Placing WAF at the edge, in Front Door, is a best practice to filter out malicious traffic before it even reaches the backend services (AKS and API Management).

While API Management can have some security policies, including some basic request filtering, it is not the primary service for comprehensive OWASP protection. Front Door’s WAF is the more robust and appropriate service for this function in this architecture.

IP filtering on a per-API level:

API Management is well-suited for granular, per-API level access control. API Management policies can be defined at different scopes, including the API level, allowing you to implement IP filtering specifically for individual APIs. This is important for scenarios where different APIs might have different access requirements based on source IP.

Front Door can also perform IP filtering, but it’s generally at a more global routing rule level rather than specifically tied to individual backend APIs. API Management’s policy engine is designed for this level of API-specific control.

Validation of Azure B2C JSON Web Tokens (JWTs):

API Management is the central point for API security and authorization in this architecture. When external users are authenticated by Azure AD B2C, they receive JWTs. API Management, acting as the API gateway, is responsible for validating these JWTs to ensure that only authenticated and authorized users can access the backend APIs hosted in AKS. API Management has built-in policies (like validate-jwt) for JWT validation.

Front Door is not designed for JWT validation in the context of API authorization. Its role is more about edge routing, performance, and WAF. While it handles TLS termination, it doesn’t typically delve into application-level authorization like JWT validation.

Rationale for Single Selection (Primary Function):

If the question format forces you to select only one function for each service, you should choose the function that represents the primary and most defining security role of that service in the given architecture.

For Front Door, the most prominent security function is edge WAF and OWASP protection.

For API Management, the most prominent security function in this context is API authorization through JWT validation.
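For intuition only, here is a hedged Python sketch of the kind of check API Management's validate-jwt policy performs against the B2C metadata. In practice this is configured declaratively in APIM policy, not in application code, and the tenant, user flow, and audience values below are hypothetical.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Hypothetical B2C tenant, user flow, and audience values.
JWKS_URI = (
    "https://contoso.b2clogin.com/contoso.onmicrosoft.com/"
    "b2c_1_signup_signin/discovery/v2.0/keys"
)
AUDIENCE = "<api-client-id>"


def validate_b2c_token(token: str) -> dict:
    # Fetch the signing key referenced by the token's "kid" header,
    # then verify the signature, expiry, and audience claims.
    signing_key = PyJWKClient(JWKS_URI).get_signing_key_from_jwt(token)
    return jwt.decode(token, signing_key.key, algorithms=["RS256"], audience=AUDIENCE)
```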

63
Q

HOTSPOT

You are developing a multi-tier app named App1 that will be hosted on Azure virtual machines. The peak utilization periods for App1 will be from 8 AM to 9 AM and 4 PM to 5 PM on weekdays.

You need to deploy the infrastructure for App1. The solution must meet the following requirements:

  • Support virtual machines deployed to four availability zones across two Azure regions.
  • Minimize costs by accumulating CPU credits during periods of low utilization.

What is the minimum number of virtual networks you should deploy, and which virtual machine size should you use? To answer, select the appropriate options in the answer area.
Answer Area
Number of virtual networks:
1
2
3
4
Virtual machine size:
A-Series
B-Series
D-Series
M-Series

A

Answer Area:
Number of virtual networks: 2
Virtual machine size: B-Series

Explanation:

Number of virtual networks: 2

Virtual Networks and Azure Regions: Azure Virtual Networks are regional resources. They cannot span across multiple Azure regions. To deploy resources in two different Azure regions (as required for four availability zones across two regions), you need at least two virtual networks, one in each region.

Availability Zones within Regions: Availability Zones are physically separate datacenters within the same Azure region. You can deploy VMs across multiple Availability Zones within a single virtual network in a given region. However, to extend to another region, you need a separate virtual network in that second region.

Minimum Number: To cover two Azure regions, the minimum number of virtual networks is two. You would have one virtual network in the first region and another virtual network in the second region. Within each virtual network, you can then deploy VMs across the desired number of availability zones available in that region.

Virtual machine size: B-Series

CPU Credits and Burstable Performance: The requirement to “minimize costs by accumulating CPU credits during periods of low utilization” directly points to B-Series virtual machines.

B-Series VMs: B-Series VMs are designed to be economical for workloads that do not need to run at full CPU utilization continuously, such as web servers, development/test environments, and small databases. They operate on a credit system:

Credit Accumulation: When the VM utilizes less CPU than its baseline performance, it accumulates CPU credits.

Credit Consumption (Bursting): When the VM needs to perform at higher CPU levels (like during peak utilization from 8-9 AM and 4-5 PM), it can “burst” above its baseline performance by consuming the accumulated credits.

Cost Optimization: By accumulating credits during low utilization, you effectively pay less for the compute resources during those periods and use those credits for periods of higher demand, minimizing overall costs for variable workloads.

Why not other VM sizes:

A-Series: A-Series VMs are basic entry-level VMs and are not designed for bursting or credit accumulation. They are generally used for very light workloads or dev/test scenarios where consistent performance is not critical.

D-Series: D-Series VMs are general-purpose VMs that offer a good balance of compute, memory, and storage. They are designed for a wide range of workloads, but they do not have the burstable performance and credit accumulation feature of B-Series VMs. They are better suited for workloads with more consistent CPU demands.

M-Series: M-Series VMs are memory-optimized VMs, designed for memory-intensive workloads like large databases or in-memory analytics. They are not focused on CPU bursting or credit accumulation for cost optimization.

Therefore, the minimum number of virtual networks is 2, and the optimal virtual machine size to minimize costs using CPU credits is B-Series.
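As a hedged sketch of the networking part (subscription, resource group, region, and address-space values are hypothetical), creating one virtual network per region with the Azure SDK for Python might look like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical subscription, resource group, regions, and address spaces.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One VNet per region, since a virtual network cannot span Azure regions.
for region, prefix in [("eastus", "10.1.0.0/16"), ("westeurope", "10.2.0.0/16")]:
    client.virtual_networks.begin_create_or_update(
        resource_group_name="rg-app1",
        virtual_network_name=f"vnet-app1-{region}",
        parameters={
            "location": region,
            "address_space": {"address_prefixes": [prefix]},
        },
    ).result()
```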

64
Q

You need to recommend an Azure Storage solution that meets the following requirements:

✑ The storage must support 1 PB of data.

✑ The data must be stored in blob storage.

✑ The storage must support three levels of subfolders.

✑ The storage must support access control lists (ACLs).

What should you include in the recommendation?

A. a premium storage account that is configured for block blobs
B. a general purpose v2 storage account that has hierarchical namespace enabled
C. a premium storage account that is configured for page blobs
D. a premium storage account that is configured for file shares and supports large file shares

A

Let’s analyze each requirement and how the options address them:

Requirement 1: The storage must support 1 PB of data.

All Azure Storage account types, including Premium and General Purpose v2, can scale to petabytes of data, well beyond 1 PB. This requirement doesn’t eliminate any of the options.

Requirement 2: The data must be stored in blob storage.

Options A, B, and C explicitly mention blob storage (block blobs, hierarchical namespace in blob storage, page blobs).

Option D mentions file shares. While Azure File Shares are built on Azure Storage and use underlying blob storage, they are accessed via SMB protocol and are conceptually different from directly using blob storage APIs. Option D is less directly aligned with this requirement than A, B, and C.

Requirement 3: The storage must support three levels of subfolders.

Option A (Premium Block Blobs) and Option C (Premium Page Blobs): Blob storage without a hierarchical namespace (whether standard or premium) is flat. While you can simulate folders using prefixes in blob names (e.g., folder1/folder2/blob.txt), this is not a true hierarchical namespace and doesn’t offer native folder management or ACLs at the folder level.

Option B (General Purpose v2 with Hierarchical Namespace): Hierarchical Namespace (HNS) for Azure Blob Storage is specifically designed to provide a true file system-like directory structure within blob storage. It supports multiple levels of subfolders, easily meeting the requirement of three levels.

Option D (Premium File Shares): Azure File Shares inherently support hierarchical folders and subfolders as they are designed as network file shares accessed via SMB protocol.

Requirement 4: The storage must support access control lists (ACLs).

Option A (Premium Block Blobs) and Option C (Premium Page Blobs): Flat blob storage offers container-level access policies, Azure RBAC, and shared access signatures, but not POSIX-style ACLs. ACL management at the simulated folder level in flat blob storage is not natively supported.

Option B (General Purpose v2 with Hierarchical Namespace): Hierarchical Namespace (HNS) enables POSIX-like ACLs on directories and files (blobs within the hierarchy). This allows for granular permission management at folder and file levels, exactly as required.

Option D (Premium File Shares): Azure File Shares support NTFS-style ACLs which are the standard ACLs used in Windows file systems. This provides robust access control at folder and file levels.

Analyzing the best fit:

Option B (General Purpose v2 storage account that has hierarchical namespace enabled) directly and completely satisfies all four requirements. It provides scalable blob storage, supports hierarchical folders, and offers ACLs for those folders and blobs within them.

Option D (Premium storage account that is configured for file shares and supports large file shares) is close in terms of folder and ACL support, but it deviates from the “must be stored in blob storage” requirement (requirement 2). While File Shares are built on Azure Storage, they are not directly accessed as blob storage.

Options A and C (Premium storage accounts without hierarchical namespace) fail to adequately meet the subfolder (requirement 3) and folder-level ACLs (requirement 4) requirements, as they are flat blob storage structures.

Conclusion:

Option B is the most correct and closest answer. It is the only option that fully addresses all four requirements, especially the crucial requirements for hierarchical subfolders and ACLs within blob storage, using the intended Azure feature (Hierarchical Namespace) for this purpose.

Final Answer: The final answer is B.
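As a hedged sketch of how option B is used in practice (account and container names are hypothetical, and the account must be a general purpose v2 account with the hierarchical namespace enabled), the azure-storage-file-datalake SDK can create nested folders and set POSIX-style ACLs on them:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical ADLS Gen2 (GPv2 + hierarchical namespace) account and container.
service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("data")

# Three levels of subfolders, as the requirement describes.
directory = fs.create_directory("finance/2024/q1")

# POSIX-style ACL on the directory: owner rwx, owning group r-x, others none.
directory.set_access_control(acl="user::rwx,group::r-x,other::---")
```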

65
Q

You need to design a highly available Azure SQL database that meets the following requirements:

  • Failover between replicas of the database must occur without any data loss.
  • The database must remain available in the event of a zone outage.
  • Costs must be minimized.

Which deployment option should you use?

A. Azure SQL Database Basic
B. Azure SQL Managed Instance General Purpose
C. Azure SQL Database Business Critical
D. Azure SQL Managed Instance Business Critical

A

Let’s break down each requirement and evaluate the Azure SQL Database deployment options:

Requirements:

Failover without data loss: This necessitates synchronous data replication. In synchronous replication, a transaction is committed only after it is written to both the primary and secondary replicas. This ensures that in case of a failover, no committed data is lost.

Database remains available in a zone outage: This requires zone redundancy. Zone redundancy means the database replicas are spread across different availability zones within an Azure region. If one zone fails, the database remains available in another zone.

Costs must be minimized: We need to choose the least expensive option that meets the above two requirements.

Analyzing each option:

A. Azure SQL Database Basic:

High Availability: The Basic tier offers only local redundancy within a single datacenter and does not provide zone redundancy. Failovers are possible, but zero data loss is not guaranteed because the tier relies on the standard availability model with remote standard storage.

Zone Outage Resilience: No. Basic tier is not zone-redundant.

Cost: Basic is the least expensive tier.

Meets Requirements? No. Fails on data loss prevention and zone outage resilience.

B. Azure SQL Managed Instance General Purpose:

High Availability: The General Purpose tier provides high availability by placing database files on remote Azure Premium storage (locally redundant by default). Failovers are generally fast, but the tier lacks the synchronous multi-replica architecture of Business Critical, so it does not offer the same explicit zero-data-loss guarantee during failover. It can be configured for zone redundancy, but its replication model still differs from Business Critical’s synchronous replicas across zones.

Zone Outage Resilience: General Purpose can be configured for zone redundancy.

Cost: General Purpose is less expensive than Business Critical.

Meets Requirements? No. While it can be zone-redundant, it does not guarantee zero data loss failover.

C. Azure SQL Database Business Critical:

High Availability: Business Critical tier is designed for the highest level of availability and performance. It uses synchronous replication to maintain data consistency between replicas. Failovers are designed to be zero data loss. It uses local SSD storage for very low latency.

Zone Outage Resilience: Business Critical can be configured for zone redundancy. Zone-redundant Business Critical deployments place replicas in different availability zones, ensuring database availability even during a zone outage.

Cost: Business Critical is more expensive than General Purpose and Basic, but less expensive than Managed Instance Business Critical in many scenarios.

Meets Requirements? Yes. Meets both zero data loss failover and zone outage resilience requirements when configured for zone redundancy.

D. Azure SQL Managed Instance Business Critical:

High Availability: Business Critical Managed Instance offers the same high availability characteristics as Business Critical Azure SQL Database, including synchronous replication and zero data loss failover.

Zone Outage Resilience: Business Critical Managed Instance can also be configured for zone redundancy.

Cost: Business Critical Managed Instance is generally more expensive than Business Critical Azure SQL Database for equivalent resources because of the added instance-level features and isolation.

Meets Requirements? Yes. Meets both zero data loss failover and zone outage resilience requirements when configured for zone redundancy.

Choosing the best option for cost minimization:

Both Business Critical options (C and D) meet the high availability and zone redundancy requirements. However, to minimize costs, we should choose the less expensive option between them. Azure SQL Database Business Critical (Option C) is generally less expensive than Azure SQL Managed Instance Business Critical (Option D) for similar performance and capacity, as Managed Instance includes additional management and instance-level features that add to the cost.

Conclusion:

Option C. Azure SQL Database Business Critical is the most appropriate deployment option. It meets all the requirements: zero data loss failover (due to synchronous replication), zone outage resilience (when configured for zone redundancy), and is the less expensive option compared to Business Critical Managed Instance while still fulfilling the high availability needs.

Final Answer: The final answer is C.

65
Q

HOTSPOT –

You manage a database environment for a Microsoft Volume Licensing customer named Contoso, Ltd. Contoso uses License Mobility through Software Assurance.

You need to deploy 50 databases. The solution must meet the following requirements:

✑ Support automatic scaling.

✑ Minimize Microsoft SQL Server licensing costs.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Purchase model:
DTU
vCore
Azure reserved virtual machine instances
Deployment option:
An Azure SQL managed instance
An Azure SQL Database elastic pool
A SQL Server Always On availability group

A

To address the requirements, we need to select the purchase model and deployment option that best align with License Mobility, automatic scaling, and cost minimization for 50 databases.

Purchase Model:

vCore: The vCore-based purchasing model is the correct choice for leveraging License Mobility through Software Assurance. With vCore, you can choose to pay for the infrastructure (compute and storage) separately and apply your existing SQL Server licenses through License Mobility. This directly minimizes Microsoft SQL Server licensing costs because you are using licenses you already own.

DTU: The DTU (Database Transaction Unit) model includes the SQL Server license cost in the price. Using DTU would mean you are paying for new SQL Server licenses, even though Contoso has existing licenses through Software Assurance and License Mobility. This does not minimize licensing costs in this scenario.

Azure reserved virtual machine instances: Reserved VM instances are a pricing commitment for virtual machines, not a purchase model within Azure SQL Database. You could deploy SQL Server on Azure VMs and reserve the VMs, but that is an IaaS approach that does not provide the automatic scaling or the License Mobility benefits of the vCore model with elastic pools, and it is not a cost-effective way to host 50 databases.

Deployment Option:

An Azure SQL Database elastic pool: Elastic pools are specifically designed to manage the performance and cost of a large number of databases with varying usage patterns. Databases in an elastic pool share a pool of resources, which allows for efficient resource utilization and cost optimization. Elastic pools support automatic scaling of resources allocated to the pool, and individual databases within the pool can benefit from these resources as needed. This is ideal for deploying 50 databases and managing them efficiently with automatic scaling while minimizing costs. Elastic pools also support License Mobility at the pool level when using vCore purchase model.

An Azure SQL managed instance: Managed Instance is a good option for migrating on-premises SQL Server instances to Azure with high compatibility. It also supports License Mobility and automatic scaling. However, for deploying 50 databases, using 50 separate Managed Instances would likely be more expensive and complex to manage than using an elastic pool, especially if these databases are not individually very large or resource-intensive. Managed instances are generally more suited for migrating entire applications with existing SQL Server instances, rather than deploying a large number of new databases from scratch where resource sharing and cost efficiency are primary concerns.

A SQL Server Always On availability group: Always On Availability Groups are a high-availability and disaster recovery (HA/DR) solution for SQL Server. While you can deploy Always On AGs in Azure VMs, this is a more complex Infrastructure-as-a-Service (IaaS) approach. It is not the best option for automatically scaling and minimizing costs for 50 databases compared to PaaS Azure SQL Database options like Elastic Pools. Setting up and managing Always On AGs is more complex and generally more expensive than using Elastic Pools, and it’s primarily focused on HA/DR, not on efficient management of a large number of databases with automatic scaling and License Mobility benefits.

Conclusion:

The combination of vCore purchase model and Azure SQL Database elastic pool is the most appropriate solution to meet all the given requirements: License Mobility, automatic scaling, and minimizing Microsoft SQL Server licensing costs for deploying 50 databases.

Hot Area:

Purchase model: vCore
Deployment option: An Azure SQL Database elastic pool
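
As a rough illustration of why pooling tends to win on cost, here is a back-of-the-envelope Python sketch; the vCore counts and prices below are invented for illustration only:

DB_COUNT = 50
PER_DB_VCORES = 2              # capacity each database would need at its own peak
POOL_VCORES = 16               # shared capacity sized for the combined, staggered peaks
PRICE_PER_VCORE_HOUR = 0.25    # invented rate; real pricing varies by region and tier
HOURS_PER_MONTH = 730

single_databases = DB_COUNT * PER_DB_VCORES * PRICE_PER_VCORE_HOUR * HOURS_PER_MONTH
elastic_pool = POOL_VCORES * PRICE_PER_VCORE_HOUR * HOURS_PER_MONTH

print(f"50 individually provisioned databases: ${single_databases:,.0f}/month")
print(f"One shared elastic pool:               ${elastic_pool:,.0f}/month")
# Because the 50 databases rarely peak at the same time, the pool can be sized far
# below the sum of the individual peaks, which is where the savings come from.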

66
Q

You have an on-premises application named App1 that uses an Oracle database.

You plan to use Azure Databricks to transform and load data from App1 to an Azure Synapse Analytics instance.

You need to ensure that the App1 data is available to Databricks.

Which two Azure services should you include in the solution? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Azure Data Box Gateway
B. Azure Import/Export service
C. Azure Data Lake Storage
D. Azure Data Box Edge
E. Azure Data Factory

A

Correct Answer: CE

Azure Data Factory is a data integration service that provides a low-code or no-code approach to building extract, transform, and load (ETL) processes. With a self-hosted integration runtime, it can copy the App1 data from the on-premises Oracle database into Azure on a schedule.

Azure Data Lake Storage provides the landing zone for that copied data. Azure Databricks can read the files directly from Data Lake Storage, transform them, and load the results into Azure Synapse Analytics.

66
Q

You have an Azure subscription.

You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:

  • Minimize the time it takes to provision compute resources during scale-out operations.
  • Support autoscaling of Windows Server containers.

Which scaling option should you recommend?

A. horizontal pod autoscaler
B. Virtual nodes
C. Kubernetes version 1.20.2 or newer
D. cluster autoscaler

A

Let’s analyze each option in the context of the requirements for an AKS solution with Windows Server 2019 nodes, focusing on minimizing provisioning time during scale-out operations and supporting Windows container autoscaling.

A. Horizontal Pod Autoscaler (HPA):

Function: HPA automatically scales the number of pods in a deployment, replica set, or stateful set based on observed CPU utilization, memory utilization, or custom metrics.

Provisioning Time: HPA operates within the existing nodes in the AKS cluster. It does not provision new compute resources (nodes). Therefore, it does not directly minimize the time it takes to provision compute resources during scale-out operations. It scales pods, but if there aren’t enough nodes to place those pods, it will not help with node provisioning speed.

Windows Container Autoscaling: HPA is compatible with Windows Server containers and can be used to autoscale Windows-based applications.
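
For reference, the scaling decision the HPA makes follows the documented formula desiredReplicas = ceil(currentReplicas x currentMetric / targetMetric); a minimal Python sketch with illustrative numbers:

import math

def hpa_desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Replica count the HPA requests; it never adds or removes nodes."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# Example: 4 replicas averaging 90% CPU against a 60% utilization target.
print(hpa_desired_replicas(4, 90, 60))   # -> 6 replicas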

B. Virtual nodes:

Function: Virtual nodes use Azure Container Instances (ACI) to provision pods. When you deploy pods to virtual nodes, they are run in serverless ACI, bypassing the need to provision and manage Kubernetes nodes directly.

Provisioning Time: Virtual nodes are designed to significantly minimize the time it takes to provision compute resources. ACI instances can start very quickly compared to provisioning new virtual machines for node pools. This directly addresses the requirement to minimize provisioning time.

Windows Container Autoscaling: Virtual nodes do support Windows Server containers. This makes it a viable option for the given scenario.

Limitations: While Virtual Nodes are fast for scaling pods, they may have limitations compared to standard AKS nodes in terms of features, networking, and persistent storage options. However, for scaling speed, they are a strong contender.

C. Kubernetes version 1.20.2 or newer:

Function: Kubernetes versions introduce new features, bug fixes, and performance improvements. However, a specific Kubernetes version is not a scaling option itself. It might enable or improve certain scaling features, but it’s not the primary mechanism for scaling.

Provisioning Time: Upgrading to a newer Kubernetes version might have general performance improvements, but it does not directly minimize the time it takes to provision compute resources during scale-out.

Windows Container Autoscaling: Kubernetes version compatibility is important for Windows container support, but version alone is not the scaling mechanism.

Relevance: While keeping Kubernetes updated is best practice, it’s not the scaling option asked for in the question.

D. Cluster autoscaler:

Function: Cluster autoscaler automatically adjusts the number of nodes in an AKS cluster. When pods cannot be scheduled due to insufficient resources, the cluster autoscaler provisions new nodes (VMs) in the node pool.

Provisioning Time: Cluster autoscaler does provision new compute resources (nodes), but the time it takes to provision these resources is limited by the time it takes to provision new virtual machines in Azure. While it automates the scaling process, it does not minimize the provisioning time to the same extent as Virtual Nodes. VM provisioning is inherently slower than ACI container instance startup.

Windows Container Autoscaling: Cluster autoscaler is compatible with Windows Server node pools and can be used to autoscale Windows containers by adding more Windows nodes.

Comparing Options for Minimizing Provisioning Time and Windows Support:

Virtual nodes (B) directly address the requirement to minimize the time it takes to provision compute resources due to the fast startup of ACI instances. They also support Windows Server containers.

Cluster autoscaler (D) automates node scaling but is limited by VM provisioning time.

HPA (A) scales pods within existing nodes, not compute resources.

Kubernetes version (C) is not a scaling option.

Conclusion:

Considering the primary requirement of minimizing the time it takes to provision compute resources during scale-out operations for Windows Server 2019 nodes, Virtual nodes (B) are the most suitable scaling option. While Cluster Autoscaler is a valid and commonly used autoscaling solution for AKS, Virtual nodes are specifically designed to provide rapid scale-out by leveraging the serverless nature of Azure Container Instances, thus minimizing provisioning time significantly.

Final Answer: The final answer is B.

67
Q

Your company has offices in North America and Europe.

You plan to migrate to Azure.

You need to recommend a networking solution for the new Azure infrastructure. The solution must meet the following requirements:

  • The Point-to-Site (P2S) VPN connections of mobile users must connect automatically to the closest Azure region.
  • The offices in each region must connect to their local Azure region by using an ExpressRoute circuit.
  • Transitive routing between virtual networks and on-premises networks must be supported.
  • The network traffic between virtual networks must be filtered by using FQDNs.

What should you include in the recommendation?

A. Azure Virtual WAN with a secured virtual hub
B. virtual network peering and application security groups
C. virtual network gateways and network security groups (NSGs)
D. Azure Route Server and Azure Network Function Manager

A
68
Q

HOTSPOT

You have two Azure AD tenants named contoso.com and fabrikam.com. Each tenant is linked to 50 Azure subscriptions. Contoso.com contains two users named User1 and User2.

You need to meet the following requirements:

  • Ensure that User1 can change the Azure AD tenant linked to specific Azure subscriptions.
  • If an Azure subscription is linked to a new Azure AD tenant, and no available Azure AD accounts have full subscription-level permissions to the subscription, elevate the access of User2 to the subscription.

The solution must use the principle of least privilege.

Which role should you assign to each user? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
User1:
Co-administrator
Owner
Service administrator
User2:
Co-administrator
Owner
Service administrator

A
69
Q

You are designing a point of sale (POS) solution that will be deployed across multiple locations and will use an Azure Databricks workspace in the Standard tier. The solution will include multiple apps deployed to the on-premises network of each location.

You need to configure the authentication method that will be used by the app to access the workspace. The solution must minimize the administrative effort associated with staff turnover and credential management.

What should you configure?

A. a managed identity
B. a service principal
C. a personal access token

A
70
Q

You have a multi-tier app named App1 and an Azure SQL database named SQL1. The backend service of App1 writes data to SQL1. Users use the App1 client to read the data from SQL1.

During periods of high utilization, the users experience delays retrieving the data.

You need to minimize how long it takes for data requests.

What should you include in the solution?

A. Azure Cache for Redis
B. Azure Content Delivery Network (CDN)
C. Azure Data Factory
D. Azure Synapse Analytics

A
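
A cache-aside pattern is the usual way Azure Cache for Redis removes read pressure from SQL1. A minimal Python sketch, assuming the redis-py package; the host name, key format, TTL, and the query_sql helper are placeholders:

import json
import redis

def query_sql(query: str, *params) -> dict:
    """Placeholder for the real SQL1 read (for example via pyodbc)."""
    raise NotImplementedError

cache = redis.Redis(
    host="app1-cache.redis.cache.windows.net",   # placeholder cache host name
    port=6380,
    password="<access-key>",
    ssl=True,
)

def get_order(order_id: str) -> dict:
    key = f"order:{order_id}"
    cached = cache.get(key)
    if cached is not None:                       # cache hit: the database is not touched
        return json.loads(cached)
    row = query_sql("SELECT ... FROM Orders WHERE Id = ?", order_id)
    cache.setex(key, 300, json.dumps(row))       # keep the result hot for 5 minutes
    return row
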
71
Q

You have an Azure subscription that contains the resources shown in the following table.

Name Type Description
VM1 Virtual machine Frontend component in the Central US Azure region
VM2 Virtual machine Backend component in the East US Azure region
VM3 Virtual machine Backend component in the West US 2 Azure region
VNet1 Virtual network Hosts VM1
VNet2 Virtual network Hosts VM2
VNet3 Virtual network Hosts VM3

You create peering between VNet1 and VNet2 and between VNet1 and VNet3.

The virtual machines host an HTTPS-based client/server application and are accessible only via the private IP address of each virtual machine.

You need to implement a load balancing solution for VM2 and VM3. The solution must ensure that if VM2 fails, requests will be routed automatically to VM3, and if VM3 fails, requests will be routed automatically to VM2.

What should you include in the solution?

A. Azure Firewall Premium
B. Azure Application Gateway v2
C. a cross-region load balancer
D. Azure Front Door Premium

A
72
Q

HOTSPOT

You need to deploy an instance of SQL Server on Azure Virtual Machines. The solution must meet the following requirements:

  • Support 15,000 disk IOPS.
  • Support SR-IOV.
  • Minimize costs.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Virtual machine series:
DS
NC
NV
Disk type:
Standard SSD
Premium SSD
Ultra Disk

A
73
Q

You have 100 devices that write performance data to Azure Blob Storage.

You plan to store and analyze the performance data in an Azure SQL database.

You need to recommend a solution to continually copy the performance data to the Azure SQL database.

What should you include in the recommendation?

A. Azure Data Factory
B. Data Migration Assistant (DMA)
C. Azure Data Box
D. Azure Database Migration Service

A
74
Q

You are planning a storage solution. The solution must meet the following requirements:

✑ Support at least 500 requests per second.

✑ Support large image, video, and audio streams.

Which type of Azure Storage account should you provision?

A. standard general-purpose v2
B. premium block blobs
C. premium page blobs
D. premium file shares

A
75
Q

DRAG DROP –

You are designing a virtual machine that will run Microsoft SQL Server and contain two data disks. The first data disk will store log files, and the second data disk will store data. Both disks are P40 managed disks.

You need to recommend a host caching method for each disk. The method must provide the best overall performance for the virtual machine while preserving the integrity of the SQL data and logs.

Which host caching method should you recommend for each disk? To answer, drag the appropriate methods to the correct disks. Each method may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:

Methods
None
ReadOnly
ReadWrite
Answer Area
Log: Method
Data: Method

A
76
Q

You are designing a solution that calculates 3D geometry from height-map data.

You need to recommend a solution that meets the following requirements:

✑ Performs calculations in Azure.

✑ Ensures that each node can communicate data to every other node.

✑ Maximizes the number of nodes to calculate multiple scenes as fast as possible.

✑ Minimizes the amount of effort to implement the solution.

Which two actions should you include in the recommendation? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Enable parallel file systems on Azure.
B. Create a render farm that uses virtual machines.
C. Create a render farm that uses virtual machine scale sets.
D. Create a render farm that uses Azure Batch.
E. Enable parallel task execution on compute nodes.

A
77
Q

You have an on-premises application that consumes data from multiple databases. The application code references database tables by using a combination of the server, database, and table name.

You need to migrate the application data to Azure.

To which two services can you migrate the application data to achieve the goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. SQL Server Stretch Database
B. SQL Server on an Azure virtual machine
C. Azure SQL Database
D. Azure SQL Managed Instance

A
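
The deciding factor is cross-database references. Below is a sketch of the kind of query App1 issues, assuming the pyodbc package; the server name and credentials are placeholders. Three-part (and four-part linked-server) names like this work against SQL Server on an Azure virtual machine or an Azure SQL Managed Instance, but not against a single Azure SQL Database:

import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=sqlmi1.public.contoso.database.windows.net,3342;"   # placeholder MI endpoint
    "Database=SalesDb;Uid=appuser;Pwd=<password>;Encrypt=yes;"
)

cursor = conn.cursor()
# Cross-database (three-part) reference, exactly the naming style App1 relies on.
cursor.execute("SELECT TOP 10 * FROM InventoryDb.dbo.StockLevels;")
for row in cursor.fetchall():
    print(row)
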
78
Q

HOTSPOT –

You plan to migrate on-premises Microsoft SQL Server databases to Azure.

You need to recommend a deployment and resiliency solution that meets the following requirements:

✑ Supports user-initiated backups

✑ Supports multiple automatically replicated instances across Azure regions

✑ Minimizes administrative effort to implement and maintain business continuity

What should you recommend? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Deployment solution:
Azure SQL Managed Instance
SQL Server on Azure Virtual Machines
An Azure SQL Database single database
Resiliency solution:
Auto-failover group
Active geo-replication
Zone-redundant deployment

A
79
Q

You have an Azure web app that uses an Azure key vault named KeyVault1 in the West US Azure region.

You are designing a disaster recovery plan for KeyVault1.

You plan to back up the keys in KeyVault1.

You need to identify to where you can restore the backup.

What should you identify?

A. any region worldwide
B. the same region only
C. KeyVault1 only
D. the same geography only

A
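
A minimal Python sketch of the backup and restore flow, assuming the azure-identity and azure-keyvault-keys packages; the vault URLs and key name are placeholders. The restore target must be a key vault in the same subscription and the same Azure geography as KeyVault1:

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
source = KeyClient(vault_url="https://keyvault1.vault.azure.net", credential=credential)
target = KeyClient(vault_url="https://keyvault1-dr.vault.azure.net", credential=credential)

backup_blob = source.backup_key("app-signing-key")     # opaque, encrypted backup bytes
restored = target.restore_key_backup(backup_blob)      # fails outside the source geography
print(restored.name, restored.properties.version)
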
80
Q

You have an on-premises line-of-business (LOB) application that uses a Microsoft SQL Server instance as the backend.

You plan to migrate the on-premises SQL Server instance to Azure virtual machines.

You need to recommend a highly available SQL Server deployment that meets the following requirements:

✑ Minimizes costs

✑ Minimizes failover time if a single server fails

What should you include in the recommendation?

A. an Always On availability group that has premium storage disks and a virtual network name (VNN)
B. an Always On Failover Cluster Instance that has a virtual network name (VNN) and a standard file share
C. an Always On availability group that has premium storage disks and a distributed network name (DNN)
D. an Always On Failover Cluster Instance that has a virtual network name (VNN) and a premium file share

A
81
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups.

Does this meet the goal?

A. Yes
B. No

A
82
Q

You plan to move a web app named App1 from an on-premises datacenter to Azure.

App1 depends on a custom COM component that is installed on the host server.

You need to recommend a solution to host App1 in Azure. The solution must meet the following requirements:

✑ App1 must be available to users if an Azure datacenter becomes unavailable.

✑ Costs must be minimized.

What should you include in the recommendation?

A. In two Azure regions, deploy a load balancer and a web app.
B. In two Azure regions, deploy a load balancer and a virtual machine scale set.
C. Deploy a load balancer and a virtual machine scale set across two availability zones.
D. In two Azure regions, deploy an Azure Traffic Manager profile and a web app.

A
83
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases.

The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region.

You need to recommend a solution to meet the regulatory requirement.

Solution: You recommend using the Regulatory compliance dashboard in Microsoft Defender for Cloud.

Does this meet the goal?

A. Yes
B. No

A
84
Q

You plan to deploy an application named App1 that will run in containers on Azure Kubernetes Service (AKS) clusters. The AKS clusters will be distributed across four Azure regions.

You need to recommend a storage solution to ensure that updated container images are replicated automatically to all the Azure regions hosting the AKS clusters.

Which storage solution should you recommend?

A. geo-redundant storage (GRS) accounts
B. Premium SKU Azure Container Registry
C. Azure Content Delivery Network (CDN)
D. Azure Cache for Redis

A
85
Q

You have an Azure Active Directory (Azure AD) tenant.

You plan to deploy Azure Cosmos DB databases that will use the SQL API.

You need to recommend a solution to provide specific Azure AD user accounts with read access to the Cosmos DB databases.

What should you include in the recommendation?

A. shared access signatures (SAS) and Conditional Access policies
B. certificates and Azure Key Vault
C. master keys and Azure Information Protection policies
D. a resource token and an Access control (IAM) role assignment

A
86
Q

HOTSPOT –

You are designing an application that will use Azure Linux virtual machines to analyze video files. The files will be uploaded from corporate offices that connect to Azure by using ExpressRoute.

You plan to provision an Azure Storage account to host the files.

You need to ensure that the storage account meets the following requirements:

✑ Supports video files of up to 7 TB

✑ Provides the highest availability possible

✑ Ensures that storage is optimized for the large video files

✑ Ensures that files from the on-premises network are uploaded by using ExpressRoute

How should you configure the storage account? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Storage account type:
Premium file shares
Premium page blobs
Standard general-purpose v2
Data redundancy:
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)
Geo-redundant storage (GRS)
Networking:
Azure Route Server
A private endpoint
A service endpoint

A
87
Q

You have data files in Azure Blob Storage.

You plan to transform the files and move them to Azure Data Lake Storage.

You need to transform the data by using mapping data flow.

Which service should you use?

A. Azure Databricks
B. Azure Storage Sync
C. Azure Data Factory
D. Azure Data Box Gateway

A
87
Q

HOTSPOT –

Your on-premises network contains a file server named Server1 that stores 500 GB of data.

You need to use Azure Data Factory to copy the data from Server1 to Azure Storage.

You add a new data factory.

What should you do next? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
From Server1:
Install an Azure File Sync agent.
Install a self-hosted integration runtime.
Install the File Server Resource Manager role service.
From the data factory:
Create a pipeline.
Create an Azure Import/Export job.
Provision an Azure-SQL Server Integration Services (SSIS) integration runtime.

A
88
Q

You have an Azure subscription.

You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:

✑ Minimize the time it takes to provision compute resources during scale-out operations.

✑ Support autoscaling of Windows Server containers.

Which scaling option should you recommend?

A. Kubernetes version 1.20.2 or newer
B. Virtual nodes with Virtual Kubelet ACI
C. cluster autoscaler
D. horizontal pod autoscaler

A
89
Q

You have an Azure subscription.

You need to recommend an Azure Kubernetes Service (AKS) solution that will use Linux nodes. The solution must meet the following requirements:

✑ Minimize the time it takes to provision compute resources during scale-out operations.

✑ Support autoscaling of Linux containers.

✑ Minimize administrative effort.

Which scaling option should you recommend?

A. horizontal pod autoscaler
B. cluster autoscaler
C. virtual nodes
D. Virtual Kubelet

A
90
Q

You have an Azure virtual machine named VM1 that runs Windows Server 2019 and contains 500 GB of data files.

You are designing a solution that will use Azure Data Factory to transform the data files, and then load the files to Azure Data Lake Storage.

What should you deploy on VM1 to support the design?

A. the On-premises data gateway
B. the Azure Pipelines agent
C. the self-hosted integration runtime
D. the Azure File Sync agent

A
91
Q

You are designing an order processing system in Azure that will contain the Azure resources shown in the following table.

Name Type Purpose
App1 App Service web app Processes customer orders
Function1 Function Checks product availability at vendor 1
Function2 Function Checks product availability at vendor 2
storage2 Storage account Stores order processing logs

The order processing system will have the following transaction flow:

✑ A customer will place an order by using App1.

✑ When the order is received, App1 will generate a message to check for product availability at vendor 1 and vendor 2.

✑ An integration component will process the message, and then trigger either Function1 or Function2 depending on the type of order.

✑ Once a vendor confirms the product availability, a status message for App1 will be generated by Function1 or Function2.

✑ All the steps of the transaction will be logged to storage2.

Which type of resource should you recommend for the integration component?

A. an Azure Service Bus queue
B. an Azure Data Factory pipeline
C. an Azure Event Grid domain
D. an Azure Event Hubs capture

A
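
A minimal Python sketch of the queue-based integration, assuming the azure-servicebus package; the connection string, queue name, and message body are placeholders, and in practice Function1 and Function2 would be wired up with a Service Bus trigger rather than an explicit receiver loop:

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"     # placeholder
QUEUE = "order-availability"                     # placeholder queue name

# App1 side: enqueue an availability-check request for one order.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        sender.send_messages(ServiceBusMessage('{"orderId": 42, "vendor": 1}'))

# Consumer side: read messages and dispatch to Function1 or Function2.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))                      # route by order type here
            receiver.complete_message(msg)       # remove the message once handled
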
92
Q

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain.

Your company has a line-of-business (LOB) application that was developed internally.

You need to implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) when users attempt to access the application from an unknown location.

Which two features should you include in the solution? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Azure AD Privileged Identity Management (PIM)
B. Azure Application Gateway
C. Azure AD enterprise applications
D. Azure AD Identity Protection
E. Conditional Access policies

A
93
Q

You plan to automate the deployment of resources to Azure subscriptions.

What is a difference between using Azure Blueprints and Azure Resource Manager (ARM) templates?

A. ARM templates remain connected to the deployed resources.
B. Only blueprints can contain policy definitions.
C. Only ARM templates can contain policy definitions.
D. Blueprints remain connected to the deployed resources.

A
94
Q

HOTSPOT –

You have the resources shown in the following table.

Name Type Resource group
VM1 Azure virtual machine RG1
VM2 On-premises virtual machine Not applicable

You create a new resource group in Azure named RG2.

You need to move the virtual machines to RG2.

What should you use to move each virtual machine? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
VM1
Azure Arc
Azure Lighthouse
Azure Migrate
Azure Resource Mover
The Data Migration Assistant (DMA)
VM2
Azure Arc
Azure Lighthouse
Azure Migrate
Azure Resource Mover
The Data Migration Assistant (DMA)

A
95
Q

You plan to deploy an Azure App Service web app that will have multiple instances across multiple Azure regions.

You need to recommend a load balancing service for the planned deployment. The solution must meet the following requirements:

✑ Maintain access to the app in the event of a regional outage.

✑ Support Azure Web Application Firewall (WAF).

✑ Support cookie-based affinity.

✑ Support URL routing.

What should you include in the recommendation?

A. Azure Front Door
B. Azure Traffic Manager
C. Azure Application Gateway
D. Azure Load Balancer

A
96
Q

HOTSPOT –

You have the Azure resources shown in the following table.

Name Type Description
VNET1 Virtual network Connected to an on-premises network by using ExpressRoute
VM1 Virtual machine Configured as a DNS server
SQLDB1 Azure SQL Database Single instance
PE1 Private endpoint Provides connectivity to SQLDB1
contoso.com Private DNS zone Linked to VNET1 and contains an A record for PE1
contoso.com Public DNS zone Contains a CNAME record for SQLDB1

You need to design a solution that provides on-premises network connectivity to SQLDB1 through PE1.

How should you configure name resolution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Azure configuration
On-premises DNS configuration
Configure VM1 to forward contoso.com to the public DNS zone
Configure VM1 to forward contoso.com to the Azure-provided DNS at 168.63.129.16
In VNet1, configure a custom DNS server set to the Azure provided DNS at 168.63.129.16
Forward contoso.com to VM1
Forward contoso.com to the public DNS zone
Forward contoso.com to the Azure-provisioned DNS at 168.63.129.16

A
97
Q

You are designing a microservices architecture that will support a web application.

The solution must meet the following requirements:

✑ Deploy the solution on-premises and to Azure.

✑ Support low-latency and hyper-scale operations.

✑ Allow independent upgrades to each microservice.

✑ Set policies for performing automatic repairs to the microservices.

You need to recommend a technology.

What should you recommend?

A. Azure Container Instance
B. Azure Logic App
C. Azure Service Fabric
D. Azure virtual machine scale set

A
98
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You plan to deploy multiple instances of an Azure web app across several Azure regions.

You need to design an access solution for the app. The solution must meet the following replication requirements:

✑ Support rate limiting.

✑ Balance requests between all instances.

✑ Ensure that users can access the app in the event of a regional outage.

Solution: You use Azure Front Door to provide access to the app.

Does this meet the goal?

A. Yes
B. No

A
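
Rate limiting at the edge is conceptually a token bucket per client. A small stdlib-only Python sketch with illustrative thresholds (this is not Front Door's implementation, just the idea):

import time
from collections import defaultdict

RATE = 100      # allowed requests per window per client (illustrative)
WINDOW = 60.0   # window length in seconds

_buckets = defaultdict(lambda: {"tokens": float(RATE), "stamp": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    bucket = _buckets[client_ip]
    now = time.monotonic()
    # Refill tokens in proportion to the time elapsed since the last request.
    bucket["tokens"] = min(RATE, bucket["tokens"] + (now - bucket["stamp"]) * RATE / WINDOW)
    bucket["stamp"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True     # forward to a healthy backend in the nearest region
    return False        # respond 429 Too Many Requests

print(allow_request("203.0.113.10"))   # True until the bucket is drained
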
99
Q

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription.

What should you include in the recommendation?

A. Azure Activity Log
B. Azure Arc
C. Azure Analysis Services
D. Azure Monitor action groups

A
100
Q

You have an Azure subscription.

You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements:

✑ Only allow the creation of the virtual machines in specific regions.

✑ Only allow the creation of specific sizes of virtual machines.

What should you include in the recommendation?

A. Attribute-based access control (ABAC)
B. Azure Policy
C. Conditional Access policies
D. role-based access control (RBAC)

A
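
The recommendation relies on two policy rules whose approximate structure is shown below as Python dictionaries; the parameter names are illustrative and the field aliases follow the built-in "Allowed locations" and "Allowed virtual machine size SKUs" definitions as remembered, so treat the exact shape as an assumption:

allowed_locations_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": "[parameters('listOfAllowedLocations')]",
        }
    },
    "then": {"effect": "deny"},
}

allowed_vm_sizes_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {
                "not": {
                    "field": "Microsoft.Compute/virtualMachines/sku.name",
                    "in": "[parameters('listOfAllowedSKUs')]",
                }
            },
        ]
    },
    "then": {"effect": "deny"},
}
# Assigning both policies at the subscription scope blocks any virtual machine created
# outside the approved regions or sizes, regardless of the developer's RBAC role.
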
101
Q

You have an Azure subscription that contains a storage account.

An application sometimes writes duplicate files to the storage account.

You have a PowerShell script that identifies and deletes duplicate files in the storage account. Currently, the script is run manually after approval from the operations manager.

You need to recommend a serverless solution that performs the following actions:

✑ Runs the script once an hour to identify whether duplicate files exist

✑ Sends an email notification to the operations manager requesting approval to delete the duplicate files

✑ Processes an email response from the operations manager specifying whether the deletion was approved

✑ Runs the script if the deletion was approved

What should you include in the recommendation?

A. Azure Logic Apps and Azure Event Grid
B. Azure Logic Apps and Azure Functions
C. Azure Pipelines and Azure Service Fabric
D. Azure Functions and Azure Batch

A
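
A minimal Python sketch of the hourly duplicate-detection step, assuming the azure-storage-blob package; the connection string and container name are placeholders, and a production version would hash blobs in chunks rather than reading them fully into memory:

import hashlib
from collections import defaultdict
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("uploads")    # placeholder container

seen = defaultdict(list)
for blob in container.list_blobs():
    data = container.download_blob(blob.name).readall()
    seen[hashlib.sha256(data).hexdigest()].append(blob.name)

duplicates = [names[1:] for names in seen.values() if len(names) > 1]
print(duplicates)   # the Logic App emails this list and waits for approval before deleting
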
102
Q

Your company has the infrastructure shown in the following table.
Location Resource
Azure:
* Azure subscription named Subscription1
* 20 Azure web apps
On-premises datacenter:
* Active Directory domain
* Server running Azure AD Connect
* Linux computer named Server1

The on-premises Active Directory domain syncs with Azure Active Directory (Azure AD).

Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain.

You plan to migrate Server1 to a virtual machine in Subscription1.

A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network.

You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy.

What should you include in the recommendation?

A. Azure AD Application Proxy
B. the Active Directory Domain Services role on a virtual machine
C. an Azure VPN gateway
D. Azure AD Domain Services (Azure AD DS)

A
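
A sketch of the LDAP lookup App1 performs, pointed at the Azure AD DS managed domain instead of the on-premises domain controllers; it assumes the ldap3 package, and the host, bind account, and base DN are placeholders:

from ldap3 import ALL, Connection, Server

server = Server("ldaps://aadds.contoso.com", use_ssl=True, get_info=ALL)   # placeholder host
conn = Connection(
    server,
    user="svc-app1@aadds.contoso.com",   # placeholder bind account
    password="<password>",
    auto_bind=True,
)

conn.search(
    search_base="DC=aadds,DC=contoso,DC=com",
    search_filter="(sAMAccountName=jdoe)",
    attributes=["displayName", "mail"],
)
print(conn.entries)
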
103
Q

You need to design a solution that will execute custom C# code in response to an event routed to Azure Event Grid. The solution must meet the following requirements:

✑ The executed code must be able to access the private IP address of a Microsoft SQL Server instance that runs on an Azure virtual machine.

✑ Costs must be minimized.

What should you include in the solution?

A. Azure Logic Apps in the Consumption plan
B. Azure Functions in the Premium plan
C. Azure Functions in the Consumption plan
D. Azure Logic Apps in the integrated service environment

A
104
Q

HOTSPOT –

You have an Azure subscription named Subscription1 that is linked to a hybrid Azure Active Directory (Azure AD) tenant.

You have an on-premises datacenter that does NOT have a VPN connection to Subscription1. The datacenter contains a computer named Server1 that has Microsoft SQL Server 2016 installed. Server1 is prevented from accessing the internet.

An Azure logic app resource named LogicApp1 requires write access to a database on Server1.

You need to recommend a solution to provide LogicApp1 with the ability to access Server1.

What should you recommend deploying on-premises and in Azure? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
On-premises:
A Web Application Proxy for Windows Server
An Azure AD Application Proxy connector
An On-premises data gateway
Hybrid Connection Manager
Azure:
A connection gateway resource
An Azure Application Gateway
An Azure Event Grid domain
An enterprise application

A
105
Q

You are designing a microservices architecture that will be hosted in an Azure Kubernetes Service (AKS) cluster. Apps that will consume the microservices will be hosted on Azure virtual machines. The virtual machines and the AKS cluster will reside on the same virtual network.

You need to design a solution to expose the microservices to the consumer apps. The solution must meet the following requirements:

✑ Ingress access to the microservices must be restricted to a single private IP address and protected by using mutual TLS authentication.

✑ The number of incoming microservice calls must be rate-limited.

✑ Costs must be minimized.

What should you include in the solution?

A. Azure App Gateway with Azure Web Application Firewall (WAF)
B. Azure API Management Standard tier with a service endpoint
C. Azure Front Door with Azure Web Application Firewall (WAF)
D. Azure API Management Premium tier with virtual network connection

A

The correct answer is A. an Azure App Service web app

Explanation:

Let’s break down why Azure App Service web app is the most suitable option and why the others are less ideal based on the requirements:

Requirements:

Minimize maintenance overhead: This favors Platform-as-a-Service (PaaS) solutions where Azure manages the underlying infrastructure.

Minimize costs: This also favors PaaS solutions and efficient resource utilization.

.NET web service: App Service Web Apps have excellent support for .NET applications.

Reads and writes temporary files to the local file system: App Service Web Apps provide access to local storage within their sandbox environment, specifically the D:\home directory. This can be used for temporary files. While this storage is not persistent across instances or restarts in all scenarios, it’s generally sufficient for temporary files.

Writes to the Application event log: App Service Web Apps provide robust logging capabilities, including application logs that can be streamed, viewed in the Azure portal, and integrated with Azure Monitor. While it’s not direct access to the Windows Application Event Log like on a traditional VM, it provides equivalent cloud-native application logging functionality.

Analyzing each option:

A. Azure App Service web app:

Maintenance: App Service is a PaaS offering, meaning Azure handles most of the infrastructure maintenance, including OS patching, server management, and scaling. This significantly minimizes maintenance overhead.

Cost: App Service is generally cost-effective, especially for web applications. There are various pricing tiers to choose from to optimize costs based on resource needs.

Local File System: Web Apps have access to local storage (within the D:\home directory) for temporary files.

Application Event Log: App Service provides application logging that fulfills the requirement, although it’s through Azure’s logging mechanisms rather than direct Windows Event Log.

B. Azure virtual machine scale set:

Maintenance: VM scale sets are Infrastructure-as-a-Service (IaaS). While they simplify managing multiple VMs, you are still responsible for OS patching, server maintenance, and VM configuration. This leads to higher maintenance overhead compared to PaaS.

Cost: VMs are generally more expensive than App Service for hosting web applications, especially when considering the management overhead.

Local File System: VMs have full access to the local file system, which perfectly meets the requirement.

Application Event Log: VMs are like traditional servers, so writing to the Application Event Log works as expected.

C. App Service Environment (ASE):

Maintenance: ASE is a more isolated and scaled-up version of App Service. While still PaaS, it involves more configuration and can have higher operational complexity than standard App Service. Maintenance is higher than a basic Web App.

Cost: ASE is the most expensive App Service offering. It’s designed for large-scale, isolated, and high-security applications. It’s likely overkill and not cost-effective for a single web service, especially when cost minimization is a requirement.

Local File System: Similar to standard App Service, ASE has access to local storage but it’s still ephemeral.

Application Event Log: Similar to standard App Service, logging is available through Azure’s logging mechanisms.

D. Azure Functions app:

Maintenance: Azure Functions are serverless and even more maintenance-free than App Service Web Apps. Azure fully manages the infrastructure and scaling.

Cost: Functions are extremely cost-effective, especially for event-driven workloads and can be very cost-efficient for web services with variable traffic.

Local File System: Functions have limited access to local storage (ephemeral storage). This storage is very small (typically around 500MB - 1GB) and might not be suitable if Service1 requires significant temporary file storage. This is the main potential limitation.

Application Event Log: Functions have their own logging mechanisms integrated with Azure Monitor, similar to App Service.

105
Q

Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.

You plan to move all the virtual machines to Azure.

You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.

What should you use to make the recommendation?

A. Azure Pricing calculator
B. Azure Advisor
C. Azure Migrate
D. Azure Cost Management

A

The correct answer is C. Azure Migrate.

Here’s why:

Azure Migrate: Azure Migrate is a service specifically designed to assess and migrate on-premises servers to Azure. One of its key features is the ability to perform a sizing assessment. This assessment analyzes the performance characteristics of your on-premises VMware VMs (CPU, memory, disk I/O) and recommends appropriately sized Azure VMs for the migration target. It also takes into account utilization levels to optimize the recommendation and avoid over-provisioning.

Here’s why the other options are incorrect:

A. Azure Pricing calculator: The Azure Pricing calculator is useful for estimating the cost of Azure resources, but it doesn’t provide any information about the resource requirements of your on-premises VMs. You would need to manually determine the appropriate VM sizes before using the calculator, which defeats the purpose of minimizing administrative effort.

B. Azure Advisor: Azure Advisor provides recommendations for optimizing your existing Azure resources. It doesn’t analyze on-premises environments or provide sizing recommendations for migrating VMs.

D. Azure Cost Management: Azure Cost Management helps you analyze and manage your Azure spending, but it doesn’t have any capabilities for assessing on-premises environments or recommending Azure VM sizes for migration.

106
Q

You have the Azure resources shown in the following table.

Name Type Location
US-Central-Firewall-policy Azure Firewall policy Central US
US-East-Firewall-policy Azure Firewall policy East US
EU-Firewall-policy Azure Firewall policy West Europe
USEastfirewall Azure Firewall Central US
USWestfirewall Azure Firewall East US
EUFirewall Azure Firewall West Europe

You need to deploy a new Azure Firewall policy that will contain mandatory rules for all Azure Firewall deployments. The new policy will be configured as a parent policy for the existing policies.

What is the minimum number of additional Azure Firewall policies you should create?

A. 0
B. 1
C. 2
D. 3

A

The question asks for the minimum number of additional Azure Firewall policies needed to create a new parent policy for the existing policies. Let’s analyze the resources and constraints.

Existing Azure Firewall Policies:

US-Central-Firewall-policy (Central US)

US-East-Firewall-policy (East US)

EU-Firewall-policy (West Europe)

Key Constraint: Child Azure Firewall Policies must be in the same region as the Parent Azure Firewall Policy.

Requirement: Deploy a new Azure Firewall policy as a parent policy for the existing policies.

To minimize the number of additional policies, we should consider if we can re-use one of the existing policies to serve as the parent policy.

If we select US-Central-Firewall-policy to be the parent policy, it is already in Central US. Since child policies must be in the same region as the parent, we could potentially make other policies in Central US children of US-Central-Firewall-policy if there were any. However, in this scenario, the other existing policies are in different regions (East US and West Europe).

According to the documentation and understanding of Azure Firewall Policies, a parent policy can manage rules that are inherited by child policies. However, the critical constraint is the region. Child policies must be in the same region as the parent. Therefore, we cannot make US-East-Firewall-policy (East US) or EU-Firewall-policy (West Europe) children of a parent policy in Central US.

However, the question asks for the minimum number of additional policies. If we interpret the question as finding the most efficient way to implement a parent policy for rule management, we could consider re-using one of the existing policies to act as the central “parent” for rule management purposes, even if not in a strict parent-child relationship with the other regional policies due to the region constraint.

If we designate US-Central-Firewall-policy as the parent policy, and decide to manage mandatory rules within it, we do not need to create any additional Azure Firewall policies. We can modify US-Central-Firewall-policy to include the mandatory rules. While US-East-Firewall-policy and EU-Firewall-policy cannot become technical “child” policies of US-Central-Firewall-policy because of region differences, we could still manage mandatory rules from US-Central-Firewall-policy and potentially apply similar rule sets manually or through scripting to the other regional policies to maintain consistency.

Given the options, and the emphasis on “minimum number of additional policies,” option A (0) is the most fitting if we consider re-using an existing policy as the basis for the parent policy without creating new ones.

Therefore, the minimum number of additional Azure Firewall policies we need to create is 0, by repurposing US-Central-Firewall-policy as the parent.

Final Answer: The final answer is A.

106
Q

Your company has an app named App1 that uses data from the on-premises Microsoft SQL Server databases shown in the following table.
NAME SIZE
DB1 400 GB
DB2 250 GB
DB3 300 GB
DB4 50 GB

App1 and the data are used on the first day of the month only. The data is not expected to grow more than 3 percent each year.

The company is rewriting App1 as an Azure web app and plans to migrate all the data to Azure.

You need to migrate the data to Azure SQL Database and ensure that the database is only available on the first day of each month.

Which service tier should you use?

A. vCore-based General Purpose
B. DTU-based Standard
C. vCore-based Business Critical
D. DTU-based Basic

A

Let’s break down the requirements and evaluate each service tier option.

Requirements:

Migrate data to Azure SQL Database: The solution needs to be an Azure SQL Database service tier.

Database size: Total data is 400 GB + 250 GB + 300 GB + 50 GB = 1000 GB = 1 TB.

Availability: Only available on the first day of each month. This implies the database can be stopped/paused for the rest of the month to minimize costs.

Workload: Online Transaction Processing (OLTP).

Cost Optimization: Minimize cost as the database is used only one day a month.

Evaluating Service Tiers:

A. vCore-based General Purpose:

Scalability: Offers good scalability and performance for general workloads, including OLTP.

Cost: Generally cost-effective, especially when compute can be paused while not in use. The vCore-based General Purpose tier offers a serverless compute option that auto-pauses the database and bills only for storage during idle periods, significantly reducing costs when the database is needed on only one day per month.

Data Size: Supports databases larger than 1 TB, easily accommodating the 1 TB requirement.

Availability (on first day only): Suitable because the General Purpose serverless compute tier can pause and resume compute automatically. You can resume the database on the first day of each month and let it auto-pause afterward to minimize costs for the rest of the month.

B. DTU-based Standard:

Scalability: Less scalable than vCore-based tiers. For a 1 TB database, you would likely need a higher DTU level within the Standard tier, which might become less cost-effective compared to General Purpose vCore, especially when considering pausing compute.

Cost: Can be cheaper for smaller databases, but for a 1 TB database, the required DTU level might increase the cost, and DTU model doesn’t offer the same granular control over compute pausing and cost reduction as vCore.

Data Size: Standard tier has size limits, but higher DTU levels can support databases around 1 TB.

Availability (on first day only): Less optimal for pausing and resuming compute compared to vCore.

C. vCore-based Business Critical:

Scalability: Highest performance and scalability, designed for mission-critical applications.

Cost: Most expensive tier due to high performance and HA features. Overkill for a database used only one day a month. Stopping compute might still leave higher base costs compared to General Purpose.

Data Size: Supports databases larger than 1 TB.

Availability (on first day only): Designed for continuous, high availability, which is not needed for this scenario. Very expensive and not cost-effective.

D. DTU-based Basic:

Scalability: Very limited scalability and performance.

Cost: Cheapest DTU tier.

Data Size: Has very small database size limits (typically a few GB), which is far too small for the 1 TB requirement. Not suitable at all.

Conclusion:

Given the requirements, especially the need to minimize cost and the database being used only one day a month, the vCore-based General Purpose tier is the most appropriate. It provides sufficient performance for OLTP, can handle the 1 TB data size, and crucially, allows for pausing and resuming compute resources. By automating the start and stop of the database around the first day of each month, the company can significantly reduce costs, paying primarily for storage for most of the month and compute only for the day it’s needed.

Final Answer: The final answer is A.
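
A back-of-the-envelope Python sketch of the cost argument; all rates are invented for illustration, and the point is only that pausing compute for 29 days a month leaves storage as the dominant charge:

VCORE_SECOND_RATE = 0.000145    # invented serverless compute rate per vCore-second
STORAGE_GB_MONTH_RATE = 0.12    # invented storage rate per GB-month
VCORES = 4
DATA_GB = 1024                  # roughly 1 TB of migrated data

active_hours = 24               # first day of the month only
compute_paused = VCORES * active_hours * 3600 * VCORE_SECOND_RATE
compute_always_on = VCORES * 730 * 3600 * VCORE_SECOND_RATE
storage = DATA_GB * STORAGE_GB_MONTH_RATE

print(f"Compute, auto-paused 29 days: ${compute_paused:,.0f}/month")
print(f"Compute, never paused:        ${compute_always_on:,.0f}/month")
print(f"Storage (billed either way):  ${storage:,.0f}/month")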

106
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You plan to deploy multiple instances of an Azure web app across several Azure regions.

You need to design an access solution for the app. The solution must meet the following replication requirements:

✑ Support rate limiting.

✑ Balance requests between all instances.

✑ Ensure that users can access the app in the event of a regional outage.

Solution: You use Azure Load Balancer to provide access to the app.

Does this meet the goal?

A. Yes
B. No

A

Let’s analyze the proposed solution (Azure Load Balancer) against each of the stated requirements for a multi-region web app deployment.

Support rate limiting: Azure Load Balancer, in its standard configurations (Basic and Standard Load Balancer), does not inherently provide rate limiting capabilities. Rate limiting is typically a feature handled by services like Azure Application Gateway with WAF, Azure Front Door, or API Management at the edge or within the application itself. Azure Load Balancer focuses on load distribution based on network layer (Layer 4) or application layer (Layer 7 for Application Gateway, but question specifies Azure Load Balancer, implying Basic or Standard).

Balance requests between all instances: Azure Load Balancer can balance requests across multiple instances of a web app. However, a key limitation is that Azure Load Balancer is a regional service. A single Azure Load Balancer instance is deployed within a specific Azure region and can only distribute traffic to backend instances within the same region. It cannot inherently balance requests across web app instances deployed in different Azure regions. To achieve cross-region load balancing with Azure Load Balancer alone, you would need to deploy multiple Load Balancers, one in each region, and then implement a separate global traffic management solution (like Azure Traffic Manager or Azure Front Door) in front of them to distribute traffic across regions.

Ensure that users can access the app in the event of a regional outage: Because Azure Load Balancer is a regional service, if the Azure region where the Load Balancer and the primary app instances are deployed experiences an outage, the Azure Load Balancer in that region will also be affected. A single regional Azure Load Balancer cannot provide regional outage resilience across multiple regions. To achieve regional failover, you would need a global load balancing solution that can direct traffic to healthy regions if one region becomes unavailable. Azure Load Balancer itself is not designed for this cross-region failover scenario.

Conclusion:

Azure Load Balancer, by itself, fails to meet all the stated requirements. It does not inherently support rate limiting, and it is a regional service, so it cannot directly balance requests across regions or ensure regional outage resilience without additional components. While it can balance requests within a region, the multi-region context and outage requirement are not met by a standalone Azure Load Balancer.

Therefore, the solution does not meet the goal.

Final Answer: The final answer is B.

106
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You plan to deploy multiple instances of an Azure web app across several Azure regions.

You need to design an access solution for the app. The solution must meet the following replication requirements:

✑ Support rate limiting.

✑ Balance requests between all instances.

✑ Ensure that users can access the app in the event of a regional outage.

Solution: You use Azure Traffic Manager to provide access to the app.

Does this meet the goal?

A. Yes
B. No

A

Let’s evaluate if Azure Traffic Manager meets each of the stated requirements for a multi-region web app access solution:

Support rate limiting: Azure Traffic Manager is a DNS-based traffic routing service. It directs traffic to different endpoints based on DNS resolution and the chosen routing method (e.g., Performance, Priority, Weighted, Geographic). Azure Traffic Manager itself does not provide rate limiting capabilities. Rate limiting is typically implemented at the application level, using a Web Application Firewall (WAF), or an API Gateway. Traffic Manager operates at the DNS layer, and decisions are made at the DNS resolution phase, not at the HTTP request level.

Balance requests between all instances: Azure Traffic Manager can distribute traffic across multiple instances of a web app deployed in different regions. Depending on the routing method chosen, Traffic Manager can direct users to the "closest" instance (Performance routing), distribute traffic based on weights (Weighted routing), or use other criteria to distribute traffic across available endpoints. Note that Traffic Manager is not a load balancer in the traditional sense: it does not inspect individual HTTP requests or balance them in real time, because it is a DNS-based traffic director. However, for the purpose of distributing users and requests across different regional instances, Traffic Manager can fulfill this requirement to a degree.

Ensure that users can access the app in the event of a regional outage: Azure Traffic Manager is specifically designed for this scenario and excels at providing regional outage resilience. It continuously monitors the health of the configured endpoints (which would be the regional web app instances). If an entire region or a specific web app instance becomes unhealthy, Traffic Manager can automatically detect this failure and stop directing traffic to the failed endpoint. It will then route users to the healthy instances in other regions, ensuring continued access to the application even if one region experiences an outage. This is a primary and very strong use case for Azure Traffic Manager.
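As a concrete illustration of what a DNS-based traffic director does, the sketch below (plain Python, not the Traffic Manager API; the endpoint names, IPs, and health flags are made up) resolves a name to the highest-priority healthy endpoint and fails over when the primary region goes unhealthy.

```python
# Minimal sketch of DNS-based priority routing with health checks. Endpoint
# data is illustrative; real health probes and DNS TTLs are handled by the
# platform.
ENDPOINTS = [
    {"name": "app-eastus", "ip": "20.0.0.1", "priority": 1, "healthy": True},
    {"name": "app-westeurope", "ip": "20.0.0.2", "priority": 2, "healthy": True},
]

def resolve(endpoints):
    """Return the IP of the highest-priority healthy endpoint (the DNS answer)."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return min(healthy, key=lambda e: e["priority"])["ip"]

print(resolve(ENDPOINTS))        # 20.0.0.1 (primary region)
ENDPOINTS[0]["healthy"] = False  # simulate a regional outage detected by probes
print(resolve(ENDPOINTS))        # 20.0.0.2 (DNS now points at the healthy region)
```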

Overall Assessment:

Azure Traffic Manager effectively addresses the requirements for balancing requests across multiple instances and ensuring access during regional outages. However, it does not support rate limiting.

Given that the question states that all replication requirements must be met, and Azure Traffic Manager does not offer rate limiting, the solution does not fully meet the goal.

Therefore, the answer is No.

Final Answer: B

107
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You plan to deploy multiple instances of an Azure web app across several Azure regions.

You need to design an access solution for the app. The solution must meet the following replication requirements:

✑ Support rate limiting.

✑ Balance requests between all instances.

✑ Ensure that users can access the app in the event of a regional outage.

Solution: You use Azure Application Gateway to provide access to the app.

Does this meet the goal?

A. Yes
B. No

A

Let’s analyze the proposed solution (Azure Application Gateway) against each of the stated requirements for a multi-region web app deployment.

Support rate limiting: Azure Application Gateway, particularly when configured with the Web Application Firewall (WAF) feature, does support rate limiting. WAF rules can be configured to limit the number of requests from a specific source (e.g., IP address) within a defined time window. This allows for protection against denial-of-service attacks and helps manage traffic volume. Thus, this requirement is met.

Balance requests between all instances: Azure Application Gateway excels at load balancing HTTP/HTTPS traffic. It can distribute requests across multiple backend servers. However, standard Azure Application Gateway is a regional service. A single instance of Application Gateway is deployed in a specific Azure region and can only directly load balance traffic to backend instances located within the same region. It cannot natively load balance across web app instances deployed in different Azure regions with a single Application Gateway instance. To achieve cross-region load balancing, you would typically need a global load balancer in front of regional Application Gateways (like Azure Front Door). For the question as posed, using only Azure Application Gateway, this requirement is not fully met for all instances across several regions. It can balance instances within a region, but not between regions in a single deployment.

Ensure that users can access the app in the event of a regional outage: Since Azure Application Gateway is a regional service, if the Azure region where the Application Gateway is deployed experiences a complete outage, the Application Gateway itself will also become unavailable. A single regional Azure Application Gateway deployment cannot inherently provide regional outage resilience across multiple regions. To ensure access during a regional outage, you would need a solution that operates at a global level and can route traffic away from a failed region to a healthy one. Azure Application Gateway alone does not provide this cross-region failover capability. Thus, this requirement is not met.

Conclusion:

While Azure Application Gateway effectively supports rate limiting and load balancing within a region, it is a regional service and falls short of meeting the requirements of balancing requests across instances in multiple regions and ensuring access during a regional outage when used as a standalone solution. To achieve cross-region capabilities, typically a global service like Azure Front Door would be placed in front of regional Application Gateways. However, the proposed solution only mentions “Azure Application Gateway” without mentioning a global service in front.

Therefore, the solution, as stated, does not meet the goal of providing a multi-region access solution that fulfills all the requirements.

Final Answer: B

108
Q

You have an Azure subscription that contains the resources shown in the following table.
Name Type Description
VNet1 Virtual network None
LB1 Public load balancer Includes a backend pool named BP1
VMSS1 Azure Virtual Machine Scale Sets Included in BP1. Connected to VNet1.
NVA1 Network virtual appliance (NVA) Connected to VNet1. Performs security filtering of traffic for VMSS1.
NVA2 Network virtual appliance (NVA) Connected to VNet1. Performs security filtering of traffic for VMSS1.

You need to recommend a load balancing solution that will distribute incoming traffic for VMSS1 across NVA1 and NVA2. The solution must minimize administrative effort.

What should you include in the recommendation?

A. Gateway Load Balancer
B. Azure Front Door
C. Azure Application Gateway
D. Azure Traffic Manager

A

The requirement is to distribute incoming traffic for VMSS1 across NVA1 and NVA2 within the same Azure Virtual Network (VNet1), with minimal administrative effort. The NVAs are performing security filtering for VMSS1. Let’s evaluate each option based on its suitability for this scenario:

A. Gateway Load Balancer:

Purpose: Gateway Load Balancer is specifically designed to simplify the deployment and management of Network Virtual Appliances (NVAs) in Azure. It allows you to chain NVAs to your application traffic path without complex User Defined Routes (UDRs).

Functionality: It provides a single entry point for traffic, and can distribute this traffic across a pool of NVAs. This perfectly matches the requirement of distributing traffic for VMSS1 across NVA1 and NVA2.

Administrative Effort: Gateway Load Balancer is designed to minimize administrative effort by simplifying the NVA integration. It removes the need for complex UDR configurations to force traffic through NVAs.

B. Azure Front Door:

Purpose: Azure Front Door is a global, scalable web application acceleration and load balancing service. It’s designed for HTTP/HTTPS traffic and global routing, typically used for improving performance and availability of web applications across different regions.

Functionality: While Front Door can load balance, it’s intended for internet-facing web applications and global distribution. It’s not the appropriate tool for load balancing traffic within a VNet to NVAs for security filtering of VMSS traffic. It adds unnecessary complexity and is not optimized for this internal VNet scenario.

C. Azure Application Gateway:

Purpose: Azure Application Gateway is a web traffic load balancer for managing traffic to web applications. It operates at Layer 7 and offers features like SSL termination, WAF, and URL-based routing.

Functionality: Application Gateway is primarily used for load balancing web traffic to application servers. While it can load balance VMs, using it solely to distribute traffic to NVAs for security filtering within the same VNet is not its primary use case and is less efficient than Gateway Load Balancer for this particular scenario. It’s more complex than needed for simple NVA load balancing.

D. Azure Traffic Manager:

Purpose: Azure Traffic Manager is a DNS-based traffic routing service. It directs traffic based on DNS resolution and routing methods (Performance, Priority, etc.).

Functionality: Traffic Manager is used for global traffic management, directing users to different endpoints based on health and routing policies. It operates at the DNS layer and is not suitable for load balancing network traffic within a VNet to NVAs for traffic inspection. It is not designed for this type of internal load balancing scenario.

Conclusion:

Considering the specific requirement to load balance traffic across NVAs for VMSS1 within the same VNet and minimize administrative effort, Gateway Load Balancer is the most appropriate and purpose-built solution. It is designed exactly for this NVA integration scenario, simplifying the traffic routing and minimizing configuration complexity compared to other load balancing options.
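As a rough illustration of what the Gateway Load Balancer does with the two NVAs, the sketch below (plain Python; the NVA IP addresses are placeholders) hashes each network flow's 5-tuple so that every flow is pinned to one appliance while the overall load is spread across the pool. In the real service, the hashing and traffic chaining are handled by the platform with no UDRs to maintain.

```python
# Minimal sketch of flow distribution across an NVA pool: each 5-tuple maps
# deterministically to one NVA, so packets of a flow stay on the same appliance.
import hashlib

NVA_POOL = ["10.0.1.4", "10.0.1.5"]  # placeholder addresses for NVA1 and NVA2

def pick_nva(src_ip, src_port, dst_ip, dst_port, protocol="TCP"):
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}"
    digest = hashlib.sha256(flow.encode()).digest()
    return NVA_POOL[int.from_bytes(digest[:4], "big") % len(NVA_POOL)]

print(pick_nva("203.0.113.7", 51000, "10.0.0.10", 443))   # one NVA
print(pick_nva("198.51.100.9", 62000, "10.0.0.10", 443))  # possibly the other
```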

Final Answer: A

109
Q

HOTSPOT

You have the Azure subscriptions shown in the following table.

Name Location Azure AD tenant
Sub1 East US contoso.onmicrosoft.com
Sub2 East US contoso-recovery.onmicrosoft.com

Contoso.onmicrosoft.com contains a user named User1.

You need to deploy a solution to protect against ransomware attacks. The solution must meet the following requirements:

  • Ensure that all the resources in Sub1 are backed up by using Azure Backup.
  • Require that User1 first be assigned a role for Sub2 before the user can make major changes to the backup configuration.

What should you create in each subscription? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Sub1:
A Recovery Services vault
A Resource Guard
An Azure Site Recovery job
Microsoft Azure Backup Server (MABS)
The Microsoft Azure Recovery Services (MARS) agent
Sub2:
A Recovery Services vault
A Resource Guard
An Azure Site Recovery job
Microsoft Azure Backup Server (MABS)
The Microsoft Azure Recovery Services (MARS) agent

A

For Sub1, the most critical component to enable backups is A Recovery Services vault.

Why A Recovery Services vault is the closest and best single option for Sub1:

Core Backup Service: Azure Backup is fundamentally managed through a Recovery Services vault. It’s the central administrative entity where you configure backup policies, initiate backups, and manage recovery points for your resources.

Enabling Backups: Without a Recovery Services vault in Sub1, you cannot use Azure Backup to protect the resources within that subscription. It’s the prerequisite for fulfilling the requirement “Ensure that all the resources in Sub1 are backed up by using Azure Backup.”

Other Options are Secondary or Incorrect in this context:

Resource Guard: While Resource Guard enhances security and is part of a robust ransomware protection strategy, it protects the Recovery Services vault. You need the vault first. Resource Guard without a vault doesn’t enable backups.

Azure Site Recovery job: Azure Site Recovery is for disaster recovery (DR), not backup in the sense of regular data protection against ransomware. It’s a different service with a different purpose.

Microsoft Azure Backup Server (MABS) and The Microsoft Azure Recovery Services (MARS) agent: These are used for backing up on-premises resources or specific Azure VMs. The question states “all the resources in Sub1,” implying a broader Azure-native backup strategy, not just VM-level or on-premises backups.

For Sub2, the most critical component to enforce the authorization requirement is A Resource Guard.

Why A Resource Guard is the closest and best single option for Sub2:

Enforcing Authorization Control: The primary purpose of Resource Guard in this scenario is to “Require that User1 first be assigned a role for Sub2 before the user can make major changes to the backup configuration.” Resource Guard is specifically designed to enforce multi-user authorization and other security measures for Recovery Services vaults.

Cross-Subscription Authorization: Placing the Resource Guard in Sub2, which is linked to a separate Azure AD tenant, is key to enforcing the requirement that User1 must first be assigned a role for Sub2 before making major changes to the backup configuration in Sub1. This cross-subscription, cross-tenant control is the core of the security enhancement.

Other Options are Irrelevant in this context:

A Recovery Services vault, An Azure Site Recovery job, Microsoft Azure Backup Server (MABS), and The Microsoft Azure Recovery Services (MARS) agent: These options in Sub2 do not directly contribute to the requirement of enforcing authorization control over backup configurations in Sub1. Sub2 is acting as a security administration subscription in this scenario, and Resource Guard is the component that enables that security function.

In summary, if you can only select ONE option for each subscription:

Sub1: A Recovery Services vault (because it’s the fundamental component for Azure Backup).

Sub2: A Resource Guard (because it directly enforces the authorization requirement).

110
Q

HOTSPOT

You have 10 on-premises servers that run Windows Server.

You need to perform daily backups of the servers to a Recovery Services vault. The solution must meet the following requirements:

  • Back up all the files and folders on the servers.
  • Maintain three copies of the backups in Azure.
  • Minimize costs.

What should you configure? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
On the servers:
The Azure Site Recovery Mobility service
The Microsoft Azure Recovery Services (MARS) agent
Volume Shadow Copy Service (VSS)
For the storage:
Geo-redundant storage (GRS)
Locally-redundant storage (LRS)
Zone-redundant storage (ZRS)

A

Let’s analyze each option for the Hotspot question to determine the best single choice for “On the servers” and “For the storage” based on the requirements:

On the servers - Best Single Option: The Microsoft Azure Recovery Services (MARS) agent

Why correct: The Microsoft Azure Recovery Services (MARS) agent, also known as the Azure Backup agent, is specifically designed for backing up files, folders, and system state from on-premises Windows servers directly to an Azure Recovery Services vault. This directly addresses the requirement to “Back up all the files and folders on the servers.” It is the correct agent to install on the servers to enable Azure Backup for files and folders.

Why other options are less suitable as single choices:

The Azure Site Recovery Mobility service: This service is for Azure Site Recovery (ASR), used for replicating entire VMs for disaster recovery, not file/folder level backups for daily operations as specified in the question. It’s the wrong tool for the stated backup requirement.

Volume Shadow Copy Service (VSS): VSS is a Windows technology, not an Azure component that you directly “configure” for backups to Azure. VSS is used by backup applications (including the MARS agent) to ensure consistent backups, but it’s an underlying service, not the primary component to select for enabling backups to Azure.

For the storage - Best Single Option: Locally-redundant storage (LRS)

Why correct: Locally-redundant storage (LRS) is the lowest-cost storage redundancy option in Azure. It replicates your data three times within a single data center. This directly and minimally meets the requirement to “Maintain three copies of the backups in Azure” while also fulfilling the requirement to “Minimize costs.” LRS is sufficient for protecting against hardware failures within a data center.

Why other options are less suitable as single choices:

Geo-redundant storage (GRS): GRS provides higher redundancy and resilience by replicating data to a secondary region, maintaining six copies in total. However, GRS is significantly more expensive than LRS. The requirement is to minimize costs. GRS is overkill for the stated scenario and contradicts the cost minimization goal. While it provides more than three copies, LRS is sufficient and cheaper.

Zone-redundant storage (ZRS): ZRS replicates data across three availability zones within a region, providing higher availability than LRS and protection against datacenter failures within the region. However, ZRS is more expensive than LRS. Both ZRS and LRS maintain three copies, but for daily backups where the stated goal is to minimize costs, LRS is the cheaper and therefore better single choice.
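The storage choice reduces to a small constraint check, sketched below in plain Python; the copy counts reflect how each redundancy option replicates data, while the relative cost figures are rough assumptions for illustration, not Azure prices.

```python
# Minimal sketch: pick the cheapest redundancy option that keeps at least
# three copies of the data in Azure. Relative costs are illustrative only.
REDUNDANCY_OPTIONS = [
    {"name": "LRS", "copies": 3, "relative_cost": 1.0},   # 3 copies in one datacenter
    {"name": "ZRS", "copies": 3, "relative_cost": 1.25},  # 3 copies across availability zones
    {"name": "GRS", "copies": 6, "relative_cost": 2.0},   # 3 local + 3 in the paired region
]

def cheapest_with_min_copies(options, min_copies=3):
    eligible = [o for o in options if o["copies"] >= min_copies]
    return min(eligible, key=lambda o: o["relative_cost"])["name"]

print(cheapest_with_min_copies(REDUNDANCY_OPTIONS))  # LRS
```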

111
Q

HOTSPOT

You plan to deploy a containerized web-app that will be hosted in five Azure Kubernetes Service (AKS) clusters. Each cluster will be hosted in a different Azure region.

You need to provide access to the app from the internet. The solution must meet the following requirements:

  • Incoming HTTPS requests must be routed to the cluster that has the lowest network latency.
  • HTTPS traffic to individual pods must be routed via an ingress controller.
  • In the event of an AKS cluster outage, failover time must be minimized.

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
For global load balancing:
Azure Front Door
Azure Traffic Manager
Cross-region load balancing in Azure
Standard Load Balancer
As the ingress controller:
Azure Application Gateway
Azure Standard Load Balancer
Basic Azure Load Balancer

A

Let’s analyze each requirement and the suitability of the provided options for global load balancing and ingress controller.

Requirements:

Latency-based Routing (HTTPS): Incoming HTTPS requests must be routed to the AKS cluster with the lowest network latency.

Ingress Controller Routing (HTTPS to Pods): HTTPS traffic to individual pods must be routed via an ingress controller.

Minimal Failover Time: In the event of an AKS cluster outage, failover time must be minimized.

For Global Load Balancing:

Azure Front Door:

Latency-based Routing: Yes. Azure Front Door’s “Performance routing” is specifically designed to route traffic to the backend with the lowest latency. This perfectly matches the first requirement.

Multi-region & Global: Yes. Azure Front Door is a global service and is designed for distributing traffic across multiple regions, fitting the multi-AKS cluster scenario.

Failover: Yes. Azure Front Door provides automatic failover to the next closest healthy backend in case of an outage, minimizing failover time.

HTTPS: Yes. Azure Front Door is designed for handling HTTPS traffic, including SSL termination and routing.

Azure Traffic Manager:

Latency-based Routing: Yes. Azure Traffic Manager can use the “Performance” routing method to direct traffic to the endpoint with the lowest latency.

Multi-region & Global: Yes. Azure Traffic Manager is a global, DNS-based traffic manager suitable for multi-region applications.

Failover: Yes. Azure Traffic Manager provides automatic failover by monitoring endpoint health and redirecting traffic away from failed endpoints.

HTTPS: Yes. Azure Traffic Manager works with HTTPS, although it is DNS-based and does not perform SSL termination itself.

Cross-region load balancing in Azure: Azure does provide a cross-region (global tier) load balancer built on Standard Load Balancer, but it operates at Layer 4, so it cannot terminate HTTPS or make per-request routing decisions. It is not the best fit for this web scenario.

Standard Load Balancer:

Latency-based Routing: No. Azure Standard Load Balancer is a regional load balancer. It does not provide global, latency-based routing across different Azure regions. It balances traffic within a region.

Multi-region & Global: No. Azure Standard Load Balancer is a regional service.

Failover: No. Azure Standard Load Balancer provides high availability within a region, but not cross-region failover.

Conclusion for Global Load Balancing: Azure Front Door is the superior choice because it directly addresses all the requirements for global load balancing, especially latency-based routing and fast failover in a multi-region AKS setup. Azure Traffic Manager is also a valid option, but Front Door is often preferred for web applications requiring Layer 7 features and faster failover in web scenarios.
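To show the routing decision concretely, here is a small sketch (plain Python, not the Front Door API; the cluster names and latency figures are invented) that picks the healthy regional endpoint with the lowest measured latency and moves to the next-best region when one cluster fails.

```python
# Minimal sketch of lowest-latency (Performance) routing across regional AKS
# endpoints, skipping unhealthy regions. Values are illustrative only.
CLUSTERS = {
    "aks-eastus":        {"latency_ms": 35,  "healthy": True},
    "aks-westeurope":    {"latency_ms": 110, "healthy": True},
    "aks-southeastasia": {"latency_ms": 220, "healthy": True},
}

def route(clusters):
    healthy = {name: c for name, c in clusters.items() if c["healthy"]}
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(route(CLUSTERS))                     # aks-eastus
CLUSTERS["aks-eastus"]["healthy"] = False  # simulate a cluster/region outage
print(route(CLUSTERS))                     # aks-westeurope (fast failover target)
```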

As the Ingress Controller:

Azure Application Gateway:

Ingress Controller: Yes. Azure Application Gateway can be used as an ingress controller for AKS, especially with the Azure Application Gateway Ingress Controller (AGIC).

HTTPS Routing to Pods: Yes. Application Gateway can handle HTTPS termination and route traffic to pods based on ingress rules (path-based, host-based routing).

Azure Standard Load Balancer:

Ingress Controller (Indirectly): Azure Standard Load Balancer can act as the service load balancer in front of a software-based ingress controller (such as the NGINX Ingress Controller or Traefik) in AKS, but it is not an ingress controller itself. It is the underlying Layer 4 load balancer that exposes the ingress controller service, and it does not provide the Layer 7 routing and HTTPS termination required to route traffic to pods based on HTTP rules.

Basic Azure Load Balancer:

Ingress Controller (Indirectly): Similar to the Standard Load Balancer, but the Basic SKU is less feature-rich, has scale and availability limitations, and is not recommended for production AKS ingress scenarios.

Conclusion for Ingress Controller: Azure Application Gateway is the best choice as the ingress controller because it is a fully managed Layer 7 load balancer that can directly act as an ingress controller for AKS, providing HTTPS termination, advanced routing, and integration through AGIC. While Standard Load Balancer is necessary at a lower layer for exposing services, Application Gateway is the appropriate choice for the ingress controller role as per the requirements.

Final Answer:

For global load balancing: Azure Front Door

As the ingress controller: Azure Application Gateway

112
Q

HOTSPOT

You have an Azure subscription.

You create a storage account that will store documents.

You need to configure the storage account to meet the following requirements:

  • Ensure that retention policies are standardized across the subscription.
  • Ensure that data can be purged if the data is copied to an unauthorized location.

Which two settings should you enable? To answer, select the appropriate settings in the answer area.

NOTE: Each correct selection is worth one point.
Recovery
Enable operational backup with Azure Backup
Enable point-in-time restore for containers
Enable soft delete for blobs
Enable soft delete for containers
Enable permanent delete for soft deleted items
Tracking
Enable versioning for blobs
Enable blob change feed
Access control
Enable version-level immutability support

A

Correct Answer:

Recovery: Enable soft delete for blobs

Access control: Enable version-level immutability support

Explanation:

  1. Ensure that retention policies are standardized across the subscription.

Enable version-level immutability support helps achieve this. With version-level immutability, a default time-based retention policy can be set on the storage account and applied to the blob versions in its containers, and Azure Policy can be used to enforce a consistent configuration across the subscription, keeping retention standardized.

  2. Ensure that data can be purged if the data is copied to an unauthorized location.

This requirement is a bit tricky, as Azure Storage doesn’t have a built-in feature to automatically purge data copied elsewhere. However, by combining immutability with soft delete, you can achieve a similar outcome:

Enable version-level immutability support: If immutability is enabled, even if data is copied, the original data within the storage account will be protected by the retention policy. It cannot be deleted or modified until the policy allows it.

Enable soft delete for blobs: Soft delete adds a safety net. When a blob is deleted (once it is no longer under an active retention policy), it moves into a soft-deleted state instead of being permanently removed immediately. This gives you a window to recover the data if needed, while still allowing the data to be purged after the soft-delete retention period expires.
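The interplay of the two settings can be sketched as a simple time check (plain Python; the one-year immutability period matches the scenario, while the 14-day soft-delete window is an assumed example): a blob version cannot be purged while its retention policy is active, and soft delete then keeps deleted versions recoverable for a further window before they can be permanently removed.

```python
# Minimal sketch of a time-based retention check for a blob version protected
# by version-level immutability, followed by a soft-delete recovery window.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365      # immutability policy from the scenario (one year)
SOFT_DELETE_DAYS = 14     # assumed soft-delete retention for recovery

def retention_expired(version_created, now):
    """True once the immutability retention on the blob version has elapsed."""
    return now >= version_created + timedelta(days=RETENTION_DAYS)

def purge_allowed(version_created, deleted_at, now):
    """A version can be permanently purged only after retention has expired
    and its soft-delete window has passed."""
    return (retention_expired(version_created, now)
            and deleted_at is not None
            and now >= deleted_at + timedelta(days=SOFT_DELETE_DAYS))

created = datetime(2024, 1, 10, tzinfo=timezone.utc)
print(retention_expired(created, datetime(2024, 6, 1, tzinfo=timezone.utc)))   # False
print(purge_allowed(created,
                    deleted_at=datetime(2025, 1, 15, tzinfo=timezone.utc),
                    now=datetime(2025, 2, 1, tzinfo=timezone.utc)))            # True
```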

Why other options are incorrect:

Enable operational backup with Azure Backup: While essential for disaster recovery, it doesn’t directly standardize retention policies or enable purging based on unauthorized copies.

Enable point-in-time restore for containers: This allows restoring containers to a previous state, but doesn’t address retention standardization or unauthorized copy purging.

Enable soft delete for containers: Similar to soft delete for blobs but applied to entire containers. It doesn’t address the core requirements as effectively as blob-level soft delete combined with immutability.

Enable permanent delete for soft deleted items: This feature lets soft-deleted items be removed before their soft-delete retention expires. It speeds up deletion but does nothing to standardize retention policies, so it does not address the requirements.

Enable versioning for blobs: Useful for tracking changes but doesn’t ensure standardized retention or provide a direct mechanism for purging.

Enable blob change feed: Provides a log of changes to blobs, helpful for auditing but not directly related to the requirements.

113
Q

You have SQL Server on an Azure virtual machine. The databases are written to nightly as part of a batch process.

You need to recommend a disaster recovery solution for the data. The solution must meet the following requirements:

✑ Provide the ability to recover in the event of a regional outage.

✑ Support a recovery time objective (RTO) of 15 minutes.

✑ Support a recovery point objective (RPO) of 24 hours.

✑ Support automated recovery.

✑ Minimize costs.

What should you include in the recommendation?

A. Azure virtual machine availability sets
B. Azure Disk Backup
C. an Always On availability group
D. Azure Site Recovery

A

The best recommendation to meet the requirements is D. Azure Site Recovery.

Here’s why:

Azure Site Recovery

Regional Outage Recovery: Azure Site Recovery replicates your virtual machine to a secondary Azure region. In case of a regional outage in the primary region, you can fail over to the secondary region, ensuring business continuity.

RTO of 15 minutes: Azure Site Recovery can achieve an RTO of 15 minutes or even less, depending on the configuration and the size of the virtual machine. This meets the requirement.

RPO of 24 hours: Azure Site Recovery replicates changes continuously (or at short intervals, depending on the scenario) and retains multiple recovery points. For databases that are only written to by a nightly batch process, an RPO of 24 hours is comfortably met.

Automated Recovery: Azure Site Recovery provides automated failover and failback capabilities. You can define recovery plans that automate the entire failover process, including starting up VMs in the correct order and configuring network settings.

Cost-Effectiveness: While there are costs associated with Azure Site Recovery (replication storage, compute during failover), it’s generally more cost-effective than maintaining a full-fledged, continuously running secondary environment as you would with an Always On availability group, especially when your RPO is 24 hours. You only pay for the full compute resources when a failover occurs.
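The elimination logic in this answer can be written out as a small filter (plain Python); the RTO, RPO, and cost ranks assigned to each option below summarize the discussion in this explanation and are simplifications for illustration, not service guarantees.

```python
# Minimal sketch: keep the options that survive a regional outage and fit the
# required RTO/RPO, then take the cheapest. Attribute values are illustrative.
REQUIRED_RTO_MIN = 15
REQUIRED_RPO_HOURS = 24

OPTIONS = [
    {"name": "Availability sets",            "regional_dr": False, "rto_min": None, "rpo_hours": None, "cost_rank": 1},
    {"name": "Azure Disk Backup",            "regional_dr": True,  "rto_min": 120,  "rpo_hours": 24,   "cost_rank": 2},
    {"name": "Always On availability group", "regional_dr": True,  "rto_min": 5,    "rpo_hours": 1,    "cost_rank": 4},
    {"name": "Azure Site Recovery",          "regional_dr": True,  "rto_min": 15,   "rpo_hours": 24,   "cost_rank": 3},
]

def viable(option):
    return (option["regional_dr"]
            and option["rto_min"] is not None and option["rto_min"] <= REQUIRED_RTO_MIN
            and option["rpo_hours"] is not None and option["rpo_hours"] <= REQUIRED_RPO_HOURS)

print(min((o for o in OPTIONS if viable(o)), key=lambda o: o["cost_rank"])["name"])
# -> Azure Site Recovery
```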

Why other options are less suitable:

A. Azure virtual machine availability sets

Regional Outage Recovery: Availability sets protect against hardware failures within a single Azure region. They do not provide protection against regional outages.

RTO, RPO, Automation: They don’t directly contribute to RTO/RPO goals or automated recovery in the context of a regional disaster.

Cost: They don’t incur additional costs but don’t address the core DR requirements.

B. Azure Disk Backup

Regional Outage Recovery: While Azure Disk Backup can be configured for cross-region backups, it’s primarily a backup solution, not a disaster recovery solution with a focus on quick recovery.

RTO of 15 minutes: Achieving an RTO of 15 minutes with disk backups would be challenging, as it would require restoring the disks and then attaching them to a new VM, which takes time.

RPO of 24 hours: Easily achievable with scheduled backups.

Automated Recovery: Automation of a full VM recovery from disk backups is less streamlined compared to Azure Site Recovery.

Cost: Relatively cost-effective for backup purposes, but not ideal for the required RTO.

C. an Always On availability group

Regional Outage Recovery: Always On availability groups can provide high availability and disaster recovery across regions by setting up a secondary replica in a different region.

RTO of 15 minutes: Possible with synchronous replication, but potentially more challenging with asynchronous replication which is often preferred for cross-region setups to avoid performance impact.

RPO of 24 hours: Achievable, even with asynchronous replication between regions, if configured appropriately. However, for an RPO of 24 hours, synchronous replication is not required, and would add unnecessary cost and complexity in this case.

Automated Recovery: Supports automated failover.

Cost: Can be expensive, especially for cross-region setups, as it requires running a secondary SQL Server instance continuously. This is not cost-effective if the main workload is a nightly batch process.

114
Q

HOTSPOT –

You plan to deploy the backup policy shown in the following exhibit.
Policy 1

Backup schedule

*Frequency: Daily
*Time: 6:00 PM
*Timezone: (UTC) Coordinated Universal Time

Instant Restore
- Retain instant recovery snapshot(s) for 3 Day(s)

Retention range
- Retention of daily backup point.
*At: 6:00 PM For 90 Day(s)

  • Retention of weekly backup point.
    *On: Sunday
    *At: 6:00 PM For 26 Week(s)
  • Retention of monthly backup point.
    Week Based / Day Based
    *On: First
    *Day: Sunday
    *At: 6:00 PM For 36 Month(s)
  • Retention of yearly backup point: Not Configured

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Virtual machines that are backed up by using the
policy can be recovered for up to a maximum of
[answer choice]:
90 days
26 weeks
36 months
45 months
The minimum recovery point objective (RPO) for
virtual machines that are backed up by using the
policy is [answer choice]:
1 hour
1 day
1 week
1 month
1 year

A

Answer Area:

Virtual machines that are backed up by using the policy can be recovered for up to a maximum of 36 months.

The minimum recovery point objective (RPO) for virtual machines that are backed up by using the policy is 1 day.

Explanation:

  1. Maximum Recovery Time:

The policy defines several retention periods:

Instant Restore: 3 days (this is for quick, operational recovery, not long-term)

Daily: 90 days

Weekly: 26 weeks (approximately 6 months)

Monthly: 36 months

Yearly: Not configured

The longest retention period determines the maximum time you can go back to recover a virtual machine. In this case, it’s the monthly backup, which is retained for 36 months.

  2. Minimum Recovery Point Objective (RPO):

RPO represents the maximum amount of data loss that is acceptable in a disaster. With backups, it is determined by the time between successive recovery points.

The policy has the following backup frequencies:

Daily: Backups are taken every day at 6:00 PM.

Weekly: Backups are taken every Sunday at 6:00 PM.

Monthly: Backups are taken on the first Sunday of every month at 6:00 PM.

The most frequent backup defines the minimum RPO. Here, backups are taken daily, so in the worst case (a failure just before the next scheduled 6:00 PM backup) you could lose up to 24 hours of data. Therefore, the minimum RPO is 1 day.
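The two drop-down answers follow directly from the policy's numbers; the short sketch below (plain Python) just makes the arithmetic explicit, approximating a month as 30 days.

```python
# Minimal sketch of the two calculations: the longest retention sets the
# maximum restore window, and the shortest backup interval sets the minimum RPO.
RETENTION_DAYS = {
    "instant restore": 3,
    "daily":   90,
    "weekly":  26 * 7,    # 182 days
    "monthly": 36 * 30,   # ~1080 days, i.e. 36 months
}
BACKUP_INTERVAL_DAYS = {"daily": 1, "weekly": 7, "monthly": 30}

longest = max(RETENTION_DAYS, key=RETENTION_DAYS.get)
min_rpo_days = min(BACKUP_INTERVAL_DAYS.values())

print(longest)       # monthly  -> recoverable for up to 36 months
print(min_rpo_days)  # 1        -> daily backups give a 1-day RPO
```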

Why other options are incorrect:

Maximum Recovery Time:

90 days: This is only the daily retention, not the maximum.

26 weeks: This is the weekly retention, shorter than the monthly retention.

45 months: There’s no 45-month retention configured.

Minimum RPO:

1 hour: The policy doesn’t back up hourly.

1 week: Weekly backups don’t provide a 1-week RPO; daily backups do.

1 month: Monthly backups have a larger RPO.

1 year: Yearly backups are not configured, and even if they were, they would have the largest RPO.

115
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:

✑ Provide access to the full .NET framework.

✑ Provide redundancy if an Azure region fails.

✑ Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy two Azure virtual machines to two Azure regions, and you deploy an Azure Application Gateway.

Does this meet the goal?

A. Yes
B. No

A

A. Yes

This solution meets all the stated requirements. Let’s break down why:

Requirements and How the Solution Meets Them:

Provide access to the full .NET framework:

Solution: Azure virtual machines (VMs) allow you to choose the operating system, including Windows Server versions that fully support the .NET framework.

Provide redundancy if an Azure region fails:

Solution: Deploying two VMs in two different Azure regions provides geographical redundancy. If one region experiences an outage, the other region can continue to serve the application. The Application Gateway will handle routing traffic to the healthy region.

Grant administrators access to the operating system to install custom application dependencies:

Solution: Azure VMs give administrators full control over the operating system. They can log in (via RDP for Windows or SSH for Linux) and install any necessary software, including custom dependencies for the application.

Why Azure Application Gateway is Suitable Here:

Load Balancing: It distributes traffic between the two VMs, ensuring that the load is balanced and the application is highly available.

Regional Redundancy: Application Gateway can include backend targets in other regions (added to the backend pool by public IP address or FQDN), so it can be configured to send traffic to the VM in the healthy region if the VM in one region fails. This is what allows the design to meet the redundancy requirement here.

Web Application Firewall (WAF): (Optional, but recommended) You can enable the WAF functionality of Application Gateway to provide additional security for your web app.

Other Considerations (Not Explicitly Stated but Important):

Stateless App: The question specifies a stateless web app, which makes this solution even more suitable. Since there’s no shared state between VM instances, you can easily distribute traffic and fail over without worrying about data consistency issues.

Autoscaling (Optional): While not required, you could consider using virtual machine scale sets (VMSS) instead of individual VMs to enable autoscaling based on demand, further enhancing availability and scalability.

Therefore, deploying two Azure VMs to two regions and using an Azure Application Gateway is a valid solution that meets all the specified requirements.

116
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:

✑ Provide access to the full .NET framework.

✑ Provide redundancy if an Azure region fails.

✑ Grant administrators access to the operating system to install custom application dependencies.

Solution: You deploy an Azure virtual machine scale set that uses autoscaling.

Does this meet the goal?

A. Yes
B. No

A

B. No

While a virtual machine scale set (VMSS) with autoscaling offers some benefits, it doesn’t fully meet all the requirements on its own, specifically regarding regional redundancy.

Here’s a breakdown:

Requirements and How the Solution Fails or Succeeds:

Provide access to the full .NET framework:

Success: VMSS allows you to choose a VM image that supports the full .NET framework (e.g., a Windows Server image).

Provide redundancy if an Azure region fails:

Failure: A single VMSS, even with instances spread across availability zones, is still confined to a single Azure region. If that entire region experiences an outage, the VMSS and the application it hosts will become unavailable. To achieve regional redundancy, you would need to deploy the application infrastructure in another region.

Grant administrators access to the operating system to install custom application dependencies:

Success: Similar to individual VMs, administrators can access the underlying VMs in a VMSS to install custom dependencies. This can be done by customizing the VM image used by the scale set or by using extensions or scripts during deployment.

Why Autoscaling Isn’t Sufficient for Regional Redundancy:

Autoscaling primarily focuses on adjusting the number of VM instances within a scale set based on demand. It does not automatically distribute instances across multiple regions.

To achieve regional redundancy with VMSS, you would need to:

Deploy multiple scale sets: Create at least two virtual machine scale sets, each in a separate Azure region.

Use a traffic manager: Implement a solution like Azure Traffic Manager or Azure Front Door to distribute traffic between the VMSS instances in different regions. In case of a regional outage, the traffic manager would automatically redirect traffic to the healthy region.

117
Q

HOTSPOT –

You need to recommend an Azure Storage account configuration for two applications named Application1 and Application2. The configuration must meet the following requirements:

✑ Storage for Application1 must provide the highest possible transaction rates and the lowest possible latency.

✑ Storage for Application2 must provide the lowest possible storage costs per GB.

✑ Storage for both applications must be available in an event of datacenter failure.

✑ Storage for both applications must be optimized for uploads and downloads.

What should you recommend? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Application1:
BlobStorage with Standard performance, Hot access tier, and Read-access geo-redundant storage (RA-GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Premium performance and Locally-redundant storage (LRS) replication
General purpose v2 with Standard performance, Hot access tier, and Locally-redundant storage (LRS) replication
Application2:
BlobStorage with Standard performance, Cool access tier, and Geo-redundant storage (GRS) replication
BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication
General purpose v1 with Standard performance and Read-access geo-redundant storage (RA-GRS) replication
General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication

A

Answer Area:

Application1: BlockBlobStorage with Premium performance and Zone-redundant storage (ZRS) replication

Application2: General purpose v2 with Standard performance, Cool access tier, and Read-access geo-redundant storage (RA-GRS) replication

Explanation:

Application1 Requirements:

Highest possible transaction rates and lowest possible latency: This indicates a need for Premium performance tier. Premium storage uses SSDs and is optimized for I/O-intensive workloads.

Available in the event of a datacenter failure: ZRS (Zone-redundant storage) replicates your data synchronously across three availability zones in a single region. This provides high availability and protects against datacenter failures.

Optimized for uploads and downloads: Block blobs are specifically designed for storing and streaming large objects and thus best for upload/download optimization.

Why not other options for Application1:

BlobStorage with Standard performance, Hot access tier, RA-GRS: Standard performance won’t offer the lowest latency.

General purpose v1 with Premium performance, LRS: General-purpose v1 accounts do not support the Premium performance tier. Also, LRS doesn’t protect against datacenter failures.

General purpose v2 with Standard performance, Hot access tier, LRS: Standard performance is not the highest possible, and LRS doesn’t protect against datacenter failures.

Application2 Requirements:

Lowest possible storage costs per GB: This calls for the Cool access tier, which is designed for infrequently accessed data and offers lower storage costs compared to the Hot tier.

Available in the event of a datacenter failure: Read-access geo-redundant storage (RA-GRS) replicates your data to a secondary region and lets you read from that secondary, so the data remains available even if the primary datacenter (or the entire primary region) fails.

Optimized for uploads and downloads: General purpose v2 accounts support block blobs, which are ideal for storing and retrieving documents.

Why not other options for Application2:

BlobStorage with Standard performance, Cool access tier, GRS: BlobStorage is a legacy account type, and Microsoft recommends general-purpose v2 for new deployments. Also, GRS (unlike RA-GRS) does not allow read access to the secondary region.

BlockBlobStorage with Premium performance, ZRS: Premium performance is much more expensive and not necessary for cost optimization.

General purpose v1 with Standard performance, RA-GRS: General-purpose v1 accounts are generally being superseded by v2, and it’s recommended to use v2 for new deployments. Also, it does not support cool access tier.

118
Q

HOTSPOT –

You plan to develop a new app that will store business critical data. The app must meet the following requirements:

✑ Prevent new data from being modified for one year.

✑ Maximize data resiliency.

✑ Minimize read latency.

What storage solution should you recommend for the app? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Storage Account type:
Premium block blobs
Standard general-purpose v1
Standard general-purpose v2
Redundancy:
Zone-redundant storage (ZRS)
Locally-redundant storage (LRS)

A

Answer Area:

Storage Account type: Standard general-purpose v2

Redundancy: Zone-redundant storage (ZRS)

Explanation:

Storage Account Type:

Standard general-purpose v2: This is the best choice for several reasons:

Immutability Support: General-purpose v2 accounts support blob immutability policies, which are essential for preventing data modification. You can set time-based retention policies to meet the requirement of preventing new data from being modified for one year.

Broad Feature Set: General-purpose v2 accounts provide access to all Azure Storage services (blobs, files, queues, tables), giving you flexibility for future needs.

Cost-Effectiveness: Standard performance is generally more cost-effective than Premium, and for this scenario, the performance of Standard is likely sufficient.

Why not other options:

Premium block blobs: Premium block blob storage accounts are designed for high-performance, low-latency scenarios. While they support immutability, they are significantly more expensive than Standard accounts and are not necessary when the primary requirement is data immutability and not extreme performance.

Standard general-purpose v1: General-purpose v1 accounts are older and lack some features found in v2 accounts, including full support for immutability policies at the account or container level. It’s generally recommended to use v2 for new deployments.

Redundancy:

Zone-redundant storage (ZRS):

Maximizes Data Resiliency: ZRS replicates your data synchronously across three different availability zones within a single region. This provides high availability and protects your data against datacenter failures within that region.

Meets Immutability Requirements: ZRS is compatible with immutability policies, so you can still enforce the one-year data modification restriction.

Why not Locally-redundant storage (LRS):

Lower Resiliency: LRS replicates your data only within a single datacenter. It does not protect against datacenter-level failures, which could lead to data loss, so it does not maximize data resiliency.

Geo-redundant storage (GRS) or Read-access geo-redundant storage (RA-GRS): These options replicate data asynchronously to a distant secondary region, which adds cost, and they are not offered as choices here. ZRS already maximizes resiliency among the available options while keeping all reads in the primary region, which keeps read latency low.

119
Q

HOTSPOT –

You have an Azure web app named App1 and an Azure key vault named KV1.

App1 stores database connection strings in KV1.

App1 performs the following types of requests to KV1:

✑ Get

✑ List

✑ Wrap

✑ Delete

Unwrap –

✑ Backup

✑ Decrypt

✑ Encrypt

You are evaluating the continuity of service for App1.

You need to identify the following if the Azure region that hosts KV1 becomes unavailable:

✑ To where will KV1 fail over?

✑ During the failover, which request type will be unavailable?

What should you identify? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
To where will KV1 fail over?
A server in the same availability set
A server in the same fault domain
A server in the paired region
A virtual machine in a scale set
During the failover, which request type will be unavailable?
Get
List
Wrap
Delete
Unwrap
Backup
Decrypt
Encrypt

A

Answer Area:

To where will KV1 fail over? A server in the paired region

During the failover, which request type will be unavailable? Delete

Explanation:

  1. Key Vault Failover:

Azure Key Vault automatically fails over to the paired region if the primary region becomes unavailable. Azure regions are paired to provide geo-redundancy for services like Key Vault. You can find a list of region pairs in the Azure documentation.

Why not other options:

A server in the same availability set: Availability sets provide redundancy within a single region, not across regions.

A server in the same fault domain: Fault domains are part of availability sets and also only offer redundancy within a region.

A virtual machine in a scale set: Key Vault is a managed Azure service and not hosted on user-managed VMs or scale sets.

  2. Request Type Unavailable During Failover:

Write Operations During Failover: When Key Vault fails over to its paired secondary region, the vault runs in read-only mode. Operations that change the vault's contents are not accepted until the primary region recovers and the vault fails back.

Delete as a Write Operation: The Delete operation is a write operation because it modifies the state of the Key Vault (by removing a secret, key, or certificate). Therefore, Delete operations will be unavailable during the failover.

Other Operations:

Get, List: These are read operations and remain available during the failover.

Wrap, Unwrap, Encrypt, Decrypt: These cryptographic operations only require read access to the keys, so they continue to work while the vault is read-only.

Backup: Backup reads the vault's contents without changing them, so it also remains available in read-only mode. Of the listed request types, Delete is the one that becomes unavailable.
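To summarize the reasoning, the sketch below (plain Python, not a Key Vault SDK call) classifies the listed request types by whether they change the vault's contents, which is what determines availability while the vault is read-only.

```python
# Minimal sketch: during a failover the vault is read-only, so only operations
# that change its contents are rejected. The classification follows the
# reasoning above; it is not a Key Vault API.
READ_ONLY_SAFE = {"Get", "List", "Wrap", "Unwrap", "Encrypt", "Decrypt", "Backup"}
STATE_CHANGING = {"Delete"}

def available_during_failover(operation):
    return operation in READ_ONLY_SAFE

for op in ["Get", "List", "Wrap", "Delete", "Unwrap", "Backup", "Decrypt", "Encrypt"]:
    status = "available" if available_during_failover(op) else "unavailable"
    print(f"{op:8s} -> {status}")
```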

120
Q

DRAG DROP –

Your company identifies the following business continuity and disaster recovery objectives for virtual machines that host sales, finance, and reporting applications in the company’s on-premises data center:

✑ The sales application must be able to fail over to a second on-premises data center.

✑ The reporting application must be able to recover point-in-time data at a daily granularity. The RTO is eight hours.

✑ The finance application requires that data be retained for seven years. In the event of a disaster, the application must be able to run from Azure. The recovery time objective (RTO) is 10 minutes.

You need to recommend which services meet the business continuity and disaster recovery objectives. The solution must minimize costs.

What should you recommend for each application? To answer, drag the appropriate services to the correct applications. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:
Services
Azure Backup only
Azure Site Recovery and Azure Backup
Azure Site Recovery only

Answer Area
Sales: Service or Services
Finance: Service or Services
Reporting: Service or Services

A

Answer Area:

Sales: Azure Site Recovery only

Finance: Azure Site Recovery only

Reporting: Azure Backup only

Explanation:

  1. Sales Application:

Requirement: Failover to a second on-premises data center.

Solution: Azure Site Recovery only

Azure Site Recovery can be used to replicate on-premises virtual machines to another on-premises data center. This is achieved by deploying Site Recovery components in both on-premises locations.

Why not other options:

Azure Backup only: Azure Backup is primarily for backing up data to Azure, not for replicating and failing over to another on-premises location.

Azure Site Recovery and Azure Backup: While you could use Azure Backup for additional data protection, it’s not required to meet the primary requirement of on-premises to on-premises failover. Site Recovery alone is sufficient and minimizes costs.

  2. Finance Application:

Requirement: Data retention for seven years, failover to Azure, RTO of 10 minutes.

Solution: Azure Site Recovery only

Failover to Azure with RTO of 10 minutes: Azure Site Recovery can replicate the finance application’s virtual machines to Azure. In case of a disaster, you can quickly fail over to Azure, and Site Recovery can achieve an RTO of 10 minutes or less.

Data Retention (Indirectly): Site Recovery does not have built-in long-term retention capabilities. However, once the application is failed over to Azure, you can leverage Azure Backup or other retention mechanisms within Azure to meet the 7-year data retention policy.

Why not other options:

Azure Backup only: While Azure Backup can satisfy the retention requirement, it cannot achieve an RTO of 10 minutes for a full application failover.

Azure Site Recovery and Azure Backup: Using both is acceptable, but to minimize cost as per the requirement, Azure Site Recovery is sufficient. You would set up Azure Backup after failing over to Azure if needed to ensure ongoing backups.

  3. Reporting Application:

Requirement: Daily point-in-time recovery, RTO of eight hours.

Solution: Azure Backup only

Daily Point-in-Time Recovery: Azure Backup allows you to schedule daily backups of your on-premises virtual machines.

RTO of Eight Hours: Restoring from a backup in Azure Backup will take some time, but an RTO of eight hours is generally achievable, especially if you are only restoring data and not performing a full application failover.

Why not other options:

Azure Site Recovery only: Site Recovery is overkill for this scenario. It’s designed for quick failover, which is not the primary requirement here.

Azure Site Recovery and Azure Backup: Using both is redundant. Azure Backup alone meets the requirements.

121
Q

HOTSPOT

You have an on-premises Microsoft SQL Server database named SQL1.

You plan to migrate SQL1 to Azure.

You need to recommend a hosting solution for SQL1. The solution must meet the following requirements:

  • Support the deployment of multiple secondary, read-only replicas.
  • Support automatic replication between primary and secondary replicas.
  • Support failover between primary and secondary replicas within a 15-minute recovery time objective (RTO).

What should you include in the solution? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Azure service or service tier:
Azure SQL Database
Azure SQL managed Instance
The Hyperscale service tier
Replication mechanism:
Active geo-replication
Auto-failover groups
Standard geo-replication

A

Answer Area:

Azure service or service tier: The Hyperscale service tier (of Azure SQL Database)

Replication mechanism: Active geo-replication

Explanation:

  1. Azure Service or Service Tier:

Azure SQL Database Hyperscale:

Multiple Read-Only Replicas: The Hyperscale service tier of Azure SQL Database is designed for very large databases and supports up to four high-availability secondary replicas, plus additional named replicas for read scale-out. All secondary replicas are read-only (only the primary accepts writes), which meets the requirement for multiple secondary, read-only replicas.

Automatic Replication: Hyperscale uses a unique architecture with page servers that automatically handle data replication to secondary replicas.

RTO: Failover of a geo-replicated Hyperscale database is initiated manually (or by automation), but the failover operation itself completes quickly, so a 15-minute RTO is achievable.

Why not other options:

Azure SQL Database (other tiers): The General Purpose and Business Critical tiers do not provide multiple readable secondary replicas within a single database; at most, Business Critical exposes one read-only replica through read scale-out. Named replicas are a Hyperscale-only feature, so only Hyperscale meets the multiple read-only replica requirement.

Azure SQL Managed Instance: SQL Managed Instance is closer to a full SQL Server instance and is a good option for lift-and-shift migrations. However, it does not natively provide multiple read-only replicas the way Hyperscale does, and its failover groups target a recovery time longer than 15 minutes.

  2. Replication Mechanism:

Active Geo-Replication:

Multiple Replicas: Active geo-replication allows you to create up to four readable secondary replicas in different Azure regions.

Automatic Replication: It automatically and asynchronously replicates data from the primary database to the secondary replicas.

Read-Scale: The secondary replicas in active geo-replication can be used for read workloads, offloading traffic from the primary database.

Manual Failover: If the primary database becomes unavailable, you can manually fail over to one of the secondary replicas, making it the new primary.

Why not other options:

Auto-failover groups: Auto-failover groups are great for automatic failover, but each database gets only a single geo-secondary, which does not satisfy the requirement for multiple read-only replicas, and their failover objectives make a 15-minute RTO harder to guarantee.

Standard geo-replication: This is a legacy feature and doesn’t offer the same flexibility and scalability as active geo-replication, especially regarding multiple read-only replicas.

Important Considerations about RTO:

Manual Failover: Active geo-replication requires manual failover. This means that to achieve an RTO of 15 minutes, your operations team must be prepared to execute the failover process within that timeframe.

Hyperscale Architecture: Hyperscale’s unique architecture with page servers can result in very fast failover times, often significantly faster than traditional SQL Database tiers. However, the exact failover time can vary based on factors like database size and workload.
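
As a rough illustration of how the manual failover step can be kept inside a 15-minute RTO, the following Python sketch (using pyodbc; the server name, database name, and credentials are placeholders, not values from the case) connects to the master database on the geo-secondary's logical server and issues the documented ALTER DATABASE ... FAILOVER command, which promotes that secondary to primary after synchronizing.

# Sketch only: promotes an active geo-replication secondary to primary.
# Server, database, user, and password are illustrative placeholders.
import pyodbc

SECONDARY_SERVER = "sql1-secondary.database.windows.net"  # hypothetical
DATABASE_NAME = "SQL1Db"                                   # hypothetical

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={SECONDARY_SERVER};DATABASE=master;"
    "UID=sqladmin;PWD=<password>;Encrypt=yes;",
    autocommit=True,  # ALTER DATABASE cannot run inside a user transaction
)

# Planned failover: waits for the secondary to synchronize, so no data loss.
conn.cursor().execute(f"ALTER DATABASE [{DATABASE_NAME}] FAILOVER;")
conn.close()

During an actual regional outage, FORCE_FAILOVER_ALLOW_DATA_LOSS would be used instead of FAILOVER, at the cost of possible data loss from asynchronous replication.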

122
Q

HOTSPOT

You have two on-premises Microsoft SQL Server 2017 instances that host an Always On availability group named AG1. AG1 contains a single database named DB1.

You have an Azure subscription that contains a virtual machine named VM1. VM1 runs Linux and contains a SQL Server 2019 instance.

You need to migrate DB1 to VM1. The solution must minimize downtime on DB1.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Prepare for the migration by:
Adding a secondary replica to AG1
Creating an Always On availability group on VM1
Upgrading the on-premises SQL Server instances
Perform the migration by using:
A distributed availability group
Azure Migrate
Log shipping

A

Answer Area:

Prepare for the migration by: Adding a secondary replica to AG1

Perform the migration by using: A distributed availability group

Explanation:

  1. Prepare for the migration by: Adding a secondary replica to AG1

Why this is necessary: To use a distributed availability group (which is the best method for minimal downtime migration in this scenario), you need to first add the Azure SQL Server instance (VM1) as a secondary replica to your existing on-premises availability group (AG1). This establishes the initial synchronization and replication between the on-premises environment and Azure.

Why other options are incorrect:

Creating an Always On availability group on VM1: You don’t need to create a separate availability group on VM1 initially. The distributed availability group will span both the on-premises AG and the Azure instance.

Upgrading the on-premises SQL Server instances: While it’s generally a good practice to keep your SQL Server instances up-to-date, it’s not strictly necessary for this migration. The key is that the Azure SQL Server instance (SQL Server 2019) is a later version than the on-premises instances (SQL Server 2017), which is the case here. Distributed availability groups support migration from older to newer SQL Server versions.

  2. Perform the migration by using: A distributed availability group

Why this is the best approach: A distributed availability group extends an existing Always On availability group across two separate availability groups: one on-premises and one in Azure. This setup provides a minimal-downtime migration path.

Synchronization: Once the Azure replica is added to AG1, data is continuously synchronized from the on-premises primary to the Azure secondary.

Failover: When you’re ready to migrate, you can perform a planned failover within the distributed availability group, making the Azure replica the new primary. This failover process is typically very quick, minimizing downtime.

Cutover: After the failover, the Azure SQL Server instance becomes the primary, and the on-premises instances can be removed from the configuration.

Why other options are incorrect:

Azure Migrate: Azure Migrate is a great tool for assessing and migrating entire servers or applications to Azure. However, for a minimal-downtime migration of a single database within an existing availability group, a distributed availability group is more efficient.

Log shipping: Log shipping is a more traditional method for database migration that involves backing up transaction logs on the primary and restoring them on the secondary. While it can work, it typically results in more downtime compared to using a distributed availability group.

Steps for Minimal Downtime Migration with a Distributed Availability Group:

Add VM1 as a Secondary Replica to AG1: Add the Azure SQL Server instance (VM1) as a secondary replica to your existing on-premises Always On availability group (AG1).

Create a Distributed Availability Group: Create a distributed availability group that spans AG1 (on-premises) and the availability group on the Azure side that contains VM1 as its primary replica (see the sketch after these steps).

Monitor Synchronization: Ensure that the data is being synchronized properly between the on-premises primary and the Azure replica.

Planned Failover: Perform a planned manual failover within the distributed availability group to make the Azure SQL Server instance (VM1) the new primary.

Remove On-Premises Instances: Once the failover is complete and you’ve verified that everything is working correctly, you can remove the on-premises SQL Server instances from the availability group and the distributed availability group.
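
The following is a minimal sketch of what the distributed availability group definition could look like, driven from Python with pyodbc. It assumes an availability group named AG2 containing VM1 already exists on the Azure side; all names, listener URLs, ports, and credentials are illustrative placeholders rather than values taken from the question.

# Sketch only: creates a distributed availability group spanning the
# on-premises AG (AG1) and an Azure-side AG (AG2) whose primary is VM1.
import pyodbc

def run(server, sql):
    # Helper: run one T-SQL batch against the master database of a server.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        f"SERVER={server};DATABASE=master;UID=sa;PWD=<password>;Encrypt=yes;",
        autocommit=True,
    )
    conn.cursor().execute(sql)
    conn.close()

create_dag = """
CREATE AVAILABILITY GROUP [DAG1]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG1' WITH (LISTENER_URL = 'tcp://ag1-listener:5022',
                  AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = MANUAL,
                  SEEDING_MODE = AUTOMATIC),
      'AG2' WITH (LISTENER_URL = 'tcp://vm1-listener:5022',
                  AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = MANUAL,
                  SEEDING_MODE = AUTOMATIC);
"""

join_dag = """
ALTER AVAILABILITY GROUP [DAG1]
   JOIN AVAILABILITY GROUP ON
      'AG1' WITH (LISTENER_URL = 'tcp://ag1-listener:5022',
                  AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = MANUAL,
                  SEEDING_MODE = AUTOMATIC),
      'AG2' WITH (LISTENER_URL = 'tcp://vm1-listener:5022',
                  AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = MANUAL,
                  SEEDING_MODE = AUTOMATIC);
"""

run("onprem-ag1-primary", create_dag)  # run on the global primary (AG1 primary)
run("vm1", join_dag)                   # run on the forwarder (AG2 primary, VM1)

A planned failover would later be performed by switching both sides to synchronous commit, waiting for synchronization, and then moving the distributed availability group role to the Azure side, as described in the steps above.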

123
Q

HOTSPOT

You are building an Azure web app that will store the Personally Identifiable Information (PII) of employees.

You need to recommend an Azure SQL Database solution for the web app. The solution must meet the following requirements:

  • Maintain availability in the event of a single datacenter outage.
  • Support the encryption of specific columns that contain PII.
  • Automatically scale up during payroll operations.
  • Minimize costs.

What should you include in the recommendations? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Service tier and compute tier:
Business Critical service tier and Serverless compute tier
General Purpose service tier and Serverless compute tier
Hyperscale service tier and Provisioned compute tier
Encryption method:
Always Encrypted
Microsoft SQL Server and database encryption keys
Transparent Data Encryption (TDE)

A

Answer Area:

Service tier and compute tier: General Purpose service tier and Serverless compute tier

Encryption method: Always Encrypted

Explanation:

  1. Service Tier and Compute Tier:

General Purpose service tier and Serverless compute tier:

Availability: The General Purpose service tier, when configured with zone redundancy, provides high availability and can withstand a single datacenter outage by automatically failing over to a secondary replica in a different availability zone.

Automatic Scaling: The Serverless compute tier automatically scales the database up or down based on workload demands. This is ideal for situations with variable workloads like payroll operations, where you need more resources during specific periods.

Cost Minimization: Serverless is very cost-effective because you only pay for the compute resources used when the database is active. When the database is idle, you’re only charged for storage.

Why not other options:

Business Critical: Business Critical is designed for mission-critical workloads with the highest availability and performance requirements. It’s more expensive than General Purpose and not necessary for this scenario where cost minimization is a priority.

Hyperscale: Hyperscale is designed for very large databases (up to 100 TB) and high-throughput scenarios. It is not the most cost-effective option for this use case.

Provisioned compute tier: The Provisioned compute tier requires you to pre-allocate a fixed amount of compute resources, which can lead to overspending if the workload is variable.
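
As a hedged sketch of how this tier combination might be provisioned, the Python snippet below uses the azure-mgmt-sql management SDK to create a zone-redundant, serverless General Purpose database. The subscription ID, resource group, server, and database names are hypothetical, and zone redundancy for serverless General Purpose databases is subject to regional availability.

# Sketch only: provisions a zone-redundant serverless General Purpose database.
# All resource names and the subscription ID are illustrative placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database, Sku

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.databases.begin_create_or_update(
    resource_group_name="rg-payroll",          # hypothetical
    server_name="payroll-sql-server",          # hypothetical logical server
    database_name="payroll-db",                # hypothetical
    parameters=Database(
        location="eastus",
        sku=Sku(name="GP_S_Gen5", tier="GeneralPurpose", family="Gen5", capacity=4),
        zone_redundant=True,      # survive a single-datacenter (zone) outage
        auto_pause_delay=60,      # minutes of inactivity before auto-pause
        min_capacity=0.5,         # serverless minimum vCores
    ),
)
db = poller.result()
print(db.name, db.sku.name)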

  2. Encryption Method:

Always Encrypted:

Column-Level Encryption: Always Encrypted allows you to encrypt specific columns within a table, such as those containing PII. This is ideal for protecting sensitive data while still allowing other parts of the database to be accessed without decryption.

Client-Side Encryption: The encryption and decryption keys are managed on the client-side (e.g., in the web app), meaning that the data is encrypted before it even reaches the Azure SQL Database server. This provides an extra layer of security because the database server and administrators do not have access to the unencrypted data.

Why not other options:

Microsoft SQL Server and database encryption keys: With this option, the keys are managed by the database engine itself, so database administrators could access the unencrypted data. It does not provide the client-side, column-level protection required for PII.

Transparent Data Encryption (TDE): TDE encrypts the entire database at rest, including all files and backups. While it’s a good security measure, it’s not granular enough for this requirement, which specifies encrypting only specific columns.
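
A minimal client-side sketch, assuming the Microsoft ODBC Driver 18 for SQL Server and a column master key stored in Azure Key Vault: the pyodbc connection string below enables Always Encrypted so the driver transparently encrypts parameters and decrypts the PII columns. The server, database, table, and Key Vault identifiers are placeholders, not values from the question.

# Sketch only: app-side connection with Always Encrypted enabled.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=hr-sql-server.database.windows.net;DATABASE=hrdb;"
    "UID=appuser;PWD=<password>;Encrypt=yes;"
    "ColumnEncryption=Enabled;"                     # turn on Always Encrypted
    "KeyStoreAuthentication=KeyVaultClientSecret;"  # column master key in Key Vault
    "KeyStorePrincipalId=<app-client-id>;"
    "KeyStoreSecret=<app-client-secret>;"
)

cur = conn.cursor()
# Parameter values are encrypted by the driver before they reach the server.
cur.execute("INSERT INTO dbo.Employees (EmployeeId, SSN) VALUES (?, ?)", 1001, "123-45-6789")
conn.commit()

# Values in the encrypted SSN column are decrypted by the driver on read.
cur.execute("SELECT EmployeeId, SSN FROM dbo.Employees WHERE EmployeeId = ?", 1001)
print(cur.fetchone())
conn.close()

The PII columns themselves would be defined with ENCRYPTED WITH (...) in the table schema, and only parameterized queries can target encrypted columns.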

124
Q

You plan to deploy an Azure Database for MySQL flexible server named Server1 to the East US Azure region.

You need to implement a business continuity solution for Server1. The solution must minimize downtime in the event of a failover to a paired region.

What should you do?

A. Create a read replica.
B. Store the database files in Azure premium file shares.
C. Implement Geo-redundant backup.
D. Configure native MySQL replication.

A

The correct answer is A. Create a read replica.

Here’s why:

A. Create a read replica:

Minimized Downtime: Read replicas in Azure Database for MySQL flexible server are designed for business continuity and disaster recovery. They provide asynchronous replication to a read-only replica in a different region (including paired regions).

Fast Failover: In case of a failure in the primary region (East US in this case), you can promote the read replica to become a standalone, read-write server. This failover process is typically much faster than restoring from a backup.

Paired Region Support: You can create read replicas in the paired region of your primary server, which is the recommended practice for disaster recovery.

Note: When high availability is enabled, Azure Database for MySQL flexible server maintains a standby replica in another availability zone within the same region. Protection against a regional outage still requires a cross-region read replica (or geo-restore from geo-redundant backups).

Why other options are not as effective:

B. Store the database files in Azure premium file shares: Azure Database for MySQL is a managed database service. You don’t have direct access to the underlying storage mechanism like Azure file shares. This option is not applicable.

C. Implement Geo-redundant backup: Geo-redundant backups are essential for disaster recovery, but they are designed for restoring data in case of data loss or corruption. The restore process from a backup takes significantly longer than failing over to a read replica, thus not minimizing downtime as effectively.

D. Configure native MySQL replication: While you could theoretically set up native MySQL replication, it’s more complex to manage and less integrated with Azure’s features than using the built-in read replica functionality. Azure’s read replica feature handles the complexities of replication, monitoring, and failover for you.

How to Implement a Read Replica for Business Continuity:

Create a Read Replica: In the Azure portal, go to your Azure Database for MySQL flexible server (Server1) and select “Replication” under “Settings.”

Choose a Region: Select the paired region (or another region if desired) for your read replica.

Promote in Case of Failover: If the primary region fails, go to the read replica in the portal and choose the “Promote” option to make it a read-write server.

Update Application: Update your application’s connection string to point to the new primary server (the former read replica).
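
A hedged sketch of creating the cross-region read replica with the Python management SDK is shown below. It assumes the azure-mgmt-rdbms package (the flexible-server module path and model names can differ between SDK versions), and the subscription ID, resource group, server names, and resource ID are all illustrative placeholders.

# Sketch only: creates a read replica of Server1 in the paired region.
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.mysql_flexibleservers import MySQLManagementClient
from azure.mgmt.rdbms.mysql_flexibleservers.models import Server

client = MySQLManagementClient(DefaultAzureCredential(), "<subscription-id>")

source_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-mysql"
    "/providers/Microsoft.DBforMySQL/flexibleServers/server1"
)

replica = client.servers.begin_create(
    resource_group_name="rg-mysql",
    server_name="server1-replica-westus",
    parameters=Server(
        location="westus",                    # paired region of East US
        create_mode="Replica",                # provision as a read replica
        source_server_resource_id=source_id,
    ),
).result()
print(replica.name)

Promoting the replica during a failover remains the manual "Promote" step described above, after which the application connection string is updated.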

125
Q

HOTSPOT

You are designing a data analytics solution that will use Azure Synapse and Azure Data Lake Storage Gen2.

You need to recommend Azure Synapse pools to meet the following requirements:

  • Ingest data from Data Lake Storage into hash-distributed tables.
  • Implement, query, and update data in Delta Lake.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Ingest data from Data Lake Storage into hash-distributed tables:
A dedicated SQL pool
A serverless Apache Spark pool
A serverless SQL pool
Implement, query, and update data in Delta Lake:
A dedicated SQL pool
A serverless Apache Spark pool
A serverless SQL pool

A
126
Q

DRAG DROP

You have an on-premises app named App1.

Customers use App1 to manage digital images.

You plan to migrate App1 to Azure.

You need to recommend a data storage solution for App1. The solution must meet the following image storage requirements:

  • Encrypt images at rest.
  • Allow files up to 50 MB.
  • Manage access to the images by using Azure Web Application Firewall (WAF) on Azure Front Door.

The solution must meet the following customer account requirements:

  • Support automatic scale out of the storage.
  • Maintain the availability of App1 if a datacenter fails.
  • Support reading and writing data from multiple Azure regions.

Which service should you include in the recommendation for each type of data? To answer, drag the appropriate services to the correct type of data. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct answer is worth one point.

Services
Azure Blob storage
Azure Cosmos DB
Azure SQL Database
Azure Table storage
Answer Area
Image storage:
Customer accounts:

A
127
Q

You plan to migrate on-premises MySQL databases to Azure Database for MySQL Flexible Server.

You need to recommend a solution for the Azure Database for MySQL Flexible Server configuration. The solution must meet the following requirements:

  • The databases must be accessible if a datacenter fails.
  • Costs must be minimized.

Which compute tier should you recommend?

A. Burstable
B. General Purpose
C. Memory Optimized

A
128
Q

You are designing an app that will use Azure Cosmos DB to collate sales from multiple countries.

You need to recommend an API for the app. The solution must meet the following requirements:

  • Support SQL queries.
  • Support geo-replication.
  • Store and access data relationally.

Which API should you recommend?

A. Apache Cassandra
B. PostgreSQL
C. MongoDB
D. NoSQL

A
129
Q

HOTSPOT

You have an app that generates 50,000 events daily.

You plan to stream the events to an Azure event hub and use Event Hubs Capture to implement cold path processing of the events. The output of Event Hubs Capture will be consumed by a reporting system.

You need to identify which type of Azure storage must be provisioned to support Event Hubs Capture, and which inbound data format the reporting system must support.

What should you identify? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Storage type:
Azure Data Lake Storage Gen2
Premium block blobs
Premium file shares
Data format:
Apache Parquet
Avro
JSON

A
130
Q

HOTSPOT

You are designing a storage solution that will ingest, store, and analyze petabytes (PBs) of structured, semi-structured, and unstructured text data. The analyzed data will be offloaded to Azure Data Lake Storage Gen2 for long-term retention.

You need to recommend a storage and analytics solution that meets the following requirements:

  • Stores the processed data
  • Provides interactive analytics
  • Supports manual scaling, built-in autoscaling, and custom autoscaling

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
For storage and interactive analytics:
Azure Data Explorer
Azure Data Lake Analytics
Log Analytics
Query language:
KQL
Transact-SQL
U-SQL

A
131
Q

HOTSPOT

You plan to use Azure SQL as a database platform.

You need to recommend an Azure SQL product and service tier that meets the following requirements:

  • Automatically scales compute resources based on the workload demand
  • Provides per second billing

What should you recommend? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Azure SQL product:
A single Azure SQL database
An Azure SQL Database elastic pool
Azure SQL Managed Instance
Service tier:
Basic
Business Critical
General Purpose
Hyperscale
Standard

A
132
Q

HOTSPOT

You have an Azure subscription.

You need to deploy a solution that will provide point-in-time restore for blobs in storage accounts that have blob versioning and blob soft delete enabled.

Which type of blob should you create, and what should you enable for the accounts? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Blob type:
Append
Block
Page
Enable:
A stored access policy
Immutable blob storage
Object replication
The change feed

A
133
Q

HOTSPOT

Your company, named Contoso, Ltd., has an Azure subscription that contains the following resources:

  • An Azure Synapse Analytics workspace named contosoworkspace1
  • An Azure Data Lake Storage account named contosolake1
  • An Azure SQL database named contososql1

The product data of Contoso is copied from contososql1 to contosolake1.

Contoso has a partner company named Fabrikam Inc. Fabrikam has an Azure subscription that contains the following resources:

  • A virtual machine named FabrikamVM1 that runs Microsoft SQL Server 2019
  • An Azure Storage account named fabrikamsa1

Contoso plans to upload the research data on FabrikamVM1 to contosolake1. During the upload, the research data must be transformed to the data formats used by Contoso.

The data in contosolake1 will be analyzed by using contosoworkspace1.

You need to recommend a solution that meets the following requirements:

  • Upload and transform the FabrikamVM1 research data.
  • Provide Fabrikam with restricted access to snapshots of the data in contosoworkspace1.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
Upload and transform the data:
Azure Data Box Gateway
Azure Data Share
Azure Synapse pipelines
Provide restricted access:
Azure Data Box Gateway
Azure Data Share
Azure Synapse pipelines

A