tom Flashcards

1
Q

What should you include in the identity management strategy to accommodate the planned changes?
A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.
C. Deploy a new Azure AD tenant for authenticating new R&D projects.

A

Answer: A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.

Reasoning: The question asks how the identity management strategy should accommodate the planned changes. Deploying domain controllers for corp.fabrikam.com to virtual networks in Azure extends the on-premises Active Directory environment into Azure, which is a common hybrid identity pattern. This approach integrates seamlessly with the existing infrastructure and provides flexibility for scaling and managing identities in the cloud.

Breakdown of non-selected options:
- B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure: Moving all domain controllers to Azure might not be suitable if on-premises infrastructure must be retained for redundancy, compliance, or performance reasons. It could also introduce risk if connectivity to Azure is disrupted.
- C. Deploy a new Azure AD tenant for authenticating new R&D projects: A new Azure AD tenant would separate identity management for R&D projects from the existing corp.fabrikam.com domain, which does not align with accommodating the planned changes within the existing identity management framework. It would also add the complexity of managing multiple identity systems.
2
Q

You have an Azure subscription that includes a virtual network. You need to ensure that the traffic between this virtual network and an on-premises network is encrypted. What should you recommend?
A. Azure AD Privileged Identity Management
B. Azure AD Conditional Access
C. Azure VPN Gateway
D. Azure Security Center

A

Answer: C. Azure VPN Gateway

Reasoning:
The requirement is to ensure that the traffic between an Azure virtual network and an on-premises network is encrypted. The most suitable solution for this scenario is a VPN (virtual private network) connection, which encrypts the data transmitted between the two networks. Azure VPN Gateway is specifically designed to provide secure cross-premises connectivity, making it the appropriate choice for encrypting traffic between an Azure virtual network and an on-premises network.

Breakdown of non-selected options:
A. Azure AD Privileged Identity Management - This service is used for managing, controlling, and monitoring privileged access within Azure AD, not for encrypting network traffic between Azure and on-premises networks.

B. Azure AD Conditional Access - This feature is used to enforce access controls on Azure AD resources based on conditions, not for encrypting network traffic.

D. Azure Security Center - This service provides security management and threat protection for Azure resources, but it does not handle encryption of network traffic between Azure and on-premises networks.
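To make "encrypted in transit" concrete, here is a toy sketch of what a tunnel does with a shared key: the sender XORs each packet payload with a keystream derived from the key, and the receiver applies the same operation to recover it. This is illustration only; a real VPN Gateway negotiates IPsec/IKE with vetted ciphers, not this hand-rolled scheme.

```python
# Toy keystream cipher illustrating encryption in transit between two
# gateways that share a key. NOT real IPsec -- for illustration only.
import hashlib

def keystream(key, nonce, length):
    """Derive a pseudo-random keystream from the shared key and a per-packet nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_packet(key, nonce, payload):
    """Encrypt or decrypt a packet payload (XOR is its own inverse)."""
    ks = keystream(key, nonce, len(payload))
    return bytes(a ^ b for a, b in zip(payload, ks))

# An eavesdropper on the path between the networks sees only ciphertext.
ciphertext = xor_packet(b"shared-key", b"pkt-0001", b"hello on-prem")
```

The point of the sketch is the symmetry: both gateways hold the key, so the same function encrypts on one side and decrypts on the other, which is what makes the traffic opaque in transit.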

3
Q

You have an application that uses three on-premises Microsoft SQL Server databases. You plan to migrate these databases to Azure. The application requires server-side transactions across all three databases. What Azure solution should you recommend to meet this requirement?
A. Azure SQL Database Hyperscale
B. Azure SQL Database Managed Instance
C. Azure SQL Database Elastic Pool
D. Azure SQL Database Single Database

A

Answer: B. Azure SQL Database Managed Instance

Reasoning: The requirement is to support server-side transactions across three databases, which implies the need for distributed or cross-database transactions. Azure SQL Database Managed Instance supports distributed transactions across multiple databases, making it suitable for this scenario. It provides near 100% compatibility with on-premises SQL Server, including support for cross-database queries and transactions, which are essential for the application in question.

Breakdown of non-selected options:
- A. Azure SQL Database Hyperscale: This option is designed for single databases with high scalability needs. It does not support cross-database transactions, which are required in this scenario.
- C. Azure SQL Database Elastic Pool: Elastic pools are used to manage and scale multiple databases with varying and unpredictable usage demands. However, they do not support cross-database transactions.
- D. Azure SQL Database Single Database: This option is for single, isolated databases and does not support the cross-database transactions the application needs across the three databases.
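The cross-database requirement can be sketched as a two-phase commit: every database must be able to commit before any of them does, otherwise all of them roll back. The dict-backed "databases" below are purely illustrative, not a SQL Server or Azure API:

```python
# Toy two-phase-commit coordinator: illustrates the all-or-nothing
# semantics of a server-side transaction spanning multiple databases.
class ToyDatabase:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self._staged = None

    def prepare(self, updates):
        """Phase 1: stage the writes and vote on whether we can commit."""
        if any(v is None for v in updates.values()):
            return False          # simulate a constraint violation
        self._staged = updates
        return True

    def commit(self):
        """Phase 2: make the staged writes durable."""
        self.data.update(self._staged)
        self._staged = None

    def rollback(self):
        self._staged = None

def run_transaction(databases, updates_per_db):
    # Commit everywhere only if every participant votes "yes" in phase 1.
    if all(db.prepare(u) for db, u in zip(databases, updates_per_db)):
        for db in databases:
            db.commit()
        return True
    for db in databases:          # otherwise roll back everywhere
        db.rollback()
    return False

orders = ToyDatabase("orders")
billing = ToyDatabase("billing")
committed = run_transaction([orders, billing],
                            [{"o1": "placed"}, {"b1": 10}])
```

If any participant cannot prepare, no database keeps the staged writes, which is exactly the guarantee the application needs across its three databases.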

4
Q

You have an on-premises server named Server1 running Windows Server 2016. Server1 hosts a SQL Server database that is 4 TB in size. You need to migrate this database to an Azure Blob Storage account named store1. The migration process must be secure and encrypted. Which Azure service should you recommend?
A. Azure Data Box
B. Azure Site Recovery
C. Azure Database Migration Service
D. Azure Import/Export

A

Answer: A. Azure Data Box

Reasoning:
Azure Data Box is a service designed to transfer large amounts of data to Azure securely and efficiently. Given the size of the SQL Server database (4 TB), Azure Data Box is suitable because it provides a physical device that is shipped to the customer, loaded with data, and sent back to Microsoft for upload to Azure. Data on the device is encrypted, making this method ideal for large datasets where network transfer is impractical due to bandwidth limitations or time constraints.

Breakdown of non-selected options:
- B. Azure Site Recovery: This service is primarily used for disaster recovery and business continuity, allowing you to replicate on-premises servers to Azure for failover purposes. It is not designed for one-time data migrations to Azure Blob Storage.
- C. Azure Database Migration Service: This service is typically used for migrating databases to Azure SQL Database or Azure SQL Managed Instance, not directly to Azure Blob Storage. It focuses on schema and data migration rather than bulk data transfer to storage accounts.
- D. Azure Import/Export: While this service can transfer data to Azure by shipping hard drives, it is less streamlined than Azure Data Box for data sizes like 4 TB because you must supply and prepare your own drives. Azure Data Box is designed specifically for such scenarios.
5
Q

Your company is migrating its on-premises virtual machines to Azure. These virtual machines will communicate with each other within the same virtual network using private IP addresses. You need to recommend a solution to prevent virtual machines that are not part of the migration from communicating with the migrating virtual machines. Which solution should you recommend?
A. Azure ExpressRoute
B. Network Security Groups (NSGs)
C. Azure Bastion
D. Azure Private Link

A

Answer: B. Network Security Groups (NSGs)

Reasoning: Network Security Groups (NSGs) filter network traffic to and from Azure resources in an Azure virtual network. They control inbound and outbound traffic at the network interface, VM, and subnet level, making them suitable for isolating the migrating virtual machines from those not part of the migration. NSG rules allow or deny traffic based on source and destination IP addresses, ports, and protocols, effectively preventing unwanted communication.

Breakdown of non-selected options:
- A. Azure ExpressRoute: This is a service that provides a private connection between an on-premises network and Azure, bypassing the public internet. It is not used for controlling communication between virtual machines within a virtual network.
- C. Azure Bastion: This is a service that provides secure and seamless RDP and SSH connectivity to virtual machines directly through the Azure portal. It is not used for controlling network traffic between virtual machines.
- D. Azure Private Link: This service provides private connectivity to Azure services over a private endpoint in your virtual network. It is not designed to control communication between virtual machines within the same virtual network.
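The way NSG rules isolate the migrated VMs can be sketched as priority-ordered matching with an implicit deny: rules are evaluated from the lowest priority number up, the first match decides, and unmatched inbound traffic is denied. The rule set and the prefix-string matching below are simplified illustrations, not Azure's actual evaluation engine:

```python
# Minimal sketch of NSG-style rule evaluation: lower priority number wins,
# first match decides, implicit deny when nothing matches.
def evaluate(rules, src_ip, dst_port):
    """Return 'Allow' or 'Deny' for a flow, NSG-style."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        src_ok = rule["source"] == "*" or src_ip.startswith(rule["source"])
        port_ok = rule["port"] in ("*", dst_port)
        if src_ok and port_ok:
            return rule["access"]
    return "Deny"   # implicit default: deny unmatched inbound traffic

# Hypothetical rules isolating the migrated subnet (10.0.1.0/24):
# allow traffic from its own range, deny everything else.
rules = [
    {"priority": 100, "source": "10.0.1.", "port": "*", "access": "Allow"},
    {"priority": 200, "source": "*",       "port": "*", "access": "Deny"},
]
```

With this rule set, VMs inside the migrated range can talk to each other while VMs outside it are denied, which is the isolation the scenario asks for.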

6
Q

You plan to deploy a microservices-based application to Azure. The application consists of several containerized services that need to communicate with each other. The application deployment must meet the following requirements: ✑ Ensure that each service can scale independently. ✑ Ensure that internet traffic is encrypted using SSL without configuring SSL on each container. Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Application Gateway

A

Answer: C. AKS ingress controller

Reasoning: The question requires a solution for deploying a microservices-based application with containerized services that can scale independently and have internet traffic encrypted using SSL without configuring SSL on each container. An AKS ingress controller is suitable because it manages external access to the services in a Kubernetes cluster, including SSL termination, which means SSL can be handled at the ingress level rather than on each individual container. Each service can still scale independently within the Kubernetes environment.

Breakdown of non-selected options:
- A. Azure Front Door: While Azure Front Door can handle SSL termination and provide global load balancing, it is more suited for routing traffic across multiple regions and does not inherently support scaling individual microservices within a Kubernetes cluster.
- B. Azure Traffic Manager: This service is primarily used for DNS-based traffic routing and does not handle SSL termination or provide the ability to scale individual services within a microservices architecture.
- D. Azure Application Gateway: Although it supports SSL termination and can route traffic to backend services, it is more suited for traditional web applications than containerized microservices that require independent scaling within a Kubernetes environment.
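The SSL-termination idea behind the ingress controller can be sketched as follows: TLS is handled once at the cluster edge, and decrypted requests are routed to backends by path, so the containers themselves never touch certificates. The routes and service names are invented for the example; a real AKS ingress controller (e.g. NGINX) expresses this with Ingress resources, not Python:

```python
# Sketch of the ingress pattern: terminate TLS once at the edge, then
# route the decrypted request to a backend service by path prefix.
ROUTES = {          # path prefix -> backend service (hypothetical names)
    "/orders":  "orders-svc",
    "/billing": "billing-svc",
}

def handle(request):
    if request["scheme"] != "https":
        # Force TLS at the edge instead of on each container.
        return {"status": 301, "backend": None}
    # TLS is already terminated here; backends receive plain HTTP and
    # therefore need no certificate configuration of their own.
    for prefix, service in ROUTES.items():
        if request["path"].startswith(prefix):
            return {"status": 200, "backend": service}
    return {"status": 404, "backend": None}
```

Because the certificate lives only at the ingress, adding replicas of `orders-svc` or `billing-svc` scales each service independently without any SSL work per container.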

7
Q

You have an on-premises storage solution that supports the Hadoop Distributed File System (HDFS) and uses Kerberos for authentication. You need to migrate this solution to Azure while ensuring it continues to use Kerberos. What should you use?
A. Azure Data Lake Storage Gen2
B. Azure NetApp Files
C. Azure Files
D. Azure Blob Storage

A

Answer: A. Azure Data Lake Storage Gen2

Reasoning: Azure Data Lake Storage Gen2 is designed for big data analytics and exposes an HDFS-compatible interface. It integrates with Azure Active Directory and, combined with Azure AD Domain Services, can provide Kerberos-based authentication for Hadoop workloads. This makes it the most suitable option for migrating an on-premises HDFS solution that uses Kerberos authentication to Azure.

Breakdown of non-selected options:

B. Azure NetApp Files: While Azure NetApp Files is a high-performance file storage service that supports NFS and SMB protocols, it is not specifically designed for HDFS workloads and does not natively support Kerberos authentication for HDFS.

C. Azure Files: Azure Files provides fully managed file shares in the cloud that are accessible via the SMB protocol. It is not designed for HDFS workloads and does not natively support Kerberos authentication for HDFS.

D. Azure Blob Storage: Azure Blob Storage is a scalable object storage solution for unstructured data. It does not natively support HDFS or Kerberos authentication, making it unsuitable for this scenario.

8
Q

You are designing an application that requires a MySQL database in Azure. The application must be highly available and support automatic failover. Which service tier should you recommend?

A. Basic
B. General Purpose
C. Memory Optimized
D. Serverless

A

Answer: B. General Purpose

Reasoning: The requirement is for a MySQL database in Azure that is highly available and supports automatic failover. Azure Database for MySQL offers different service tiers, each with specific features and capabilities. The General Purpose tier is designed to provide balanced compute and memory resources with high availability and automatic failover capabilities, making it suitable for most business workloads that require these features.

Breakdown of non-selected options:

A. Basic - The Basic tier is designed for workloads that do not require high availability or automatic failover. It is more suitable for development or testing environments than for production environments that require high availability.

C. Memory Optimized - While the Memory Optimized tier provides high performance for memory-intensive workloads, it is not specifically designed for high availability and automatic failover. It focuses on performance rather than availability.

D. Serverless - The Serverless tier is designed for intermittent, unpredictable workloads and offers automatic scaling with usage-based billing. It does not inherently provide high availability and automatic failover, which are the key requirements in this scenario.

9
Q

You are designing an IoT solution that involves 100,000 devices. These devices will stream data, including device ID, location, and sensor data, at a rate of 100 messages per second. The solution must store and analyze the data in real time. Which Azure service should you recommend?

A. Azure Data Explorer
B. Azure Stream Analytics
C. Azure Cosmos DB
D. Azure IoT Hub

A

Answer: B. Azure Stream Analytics

Reasoning: Azure Stream Analytics is specifically designed for real-time data processing and analysis. It can handle large volumes of data streaming from IoT devices, making it suitable for scenarios where data needs to be analyzed in real time. Given the requirement to store and analyze data in real time from 100,000 devices streaming at 100 messages per second, Azure Stream Analytics is the most appropriate choice.

Breakdown of non-selected options:
- A. Azure Data Explorer: While Azure Data Explorer is excellent for analyzing large volumes of data, it is more suited for exploratory data analysis and interactive analytics than for real-time streaming analytics.
- C. Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service. It is ideal for storing data with low latency but does not provide real-time analytics capabilities.
- D. Azure IoT Hub: Azure IoT Hub is a service for managing IoT devices and ingesting data from them. While it is essential to the IoT solution, it does not provide real-time data analysis on its own.
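The kind of real-time query Stream Analytics runs can be sketched as a tumbling-window aggregate: events are grouped into fixed, non-overlapping time windows and aggregated per device. The event shape (`device_id`, `ts`, `value`) is an assumption for the example, not taken from the card:

```python
# Sketch of a tumbling-window aggregate: average sensor value per device
# per 10-second window, the kind of query a streaming engine evaluates.
from collections import defaultdict

def tumbling_avg(events, window_seconds=10):
    """Group events into fixed, non-overlapping windows and average them."""
    sums = defaultdict(lambda: [0.0, 0])
    for e in events:
        window_start = (e["ts"] // window_seconds) * window_seconds
        key = (e["device_id"], window_start)
        sums[key][0] += e["value"]
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

events = [
    {"device_id": "d1", "ts": 3,  "value": 20.0},
    {"device_id": "d1", "ts": 8,  "value": 30.0},
    {"device_id": "d1", "ts": 12, "value": 50.0},
]
result = tumbling_avg(events)
```

In Stream Analytics this would be a SQL-like query with `TUMBLINGWINDOW`; the sketch just shows the windowing logic that makes per-interval, per-device analysis possible on a continuous stream.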

10
Q

You are designing a highly available Azure web application that must remain operational during a regional outage. You need to minimize costs while ensuring no data loss during failover. Which Azure service should you use?
A. Azure App Service Standard
B. Azure App Service Premium
C. Azure Kubernetes Service (AKS)
D. Azure Service Fabric

A

Answer: B. Azure App Service Premium

Reasoning:
To ensure high availability and operational continuity during a regional outage, the application must be able to fail over to another region without data loss. Azure App Service Premium provides features such as Traffic Manager integration and geo-distribution, which are essential for maintaining availability across regions. It also includes built-in backup and restore capabilities, which help minimize data loss during failover. Additionally, the Premium tier offers better performance and scaling options than the Standard tier, which is important for handling increased load during failover scenarios.

Breakdown of non-selected options:
- A. Azure App Service Standard: While this option provides basic scaling and availability features, it lacks the advanced geo-distribution and traffic management capabilities of the Premium tier, which are necessary for handling regional outages effectively.
- C. Azure Kubernetes Service (AKS): AKS is a container orchestration service that can provide high availability, but it requires more complex setup and management than Azure App Service. It may not be the most cost-effective solution for a web application that must minimize costs while ensuring no data loss.
- D. Azure Service Fabric: This is a distributed systems platform that can provide high availability and resilience. However, it is more complex to manage and may not be as cost-effective for a simple web application as Azure App Service Premium, which offers built-in features for high availability and disaster recovery.
11
Q

You are developing a sales application that will include several Azure cloud services to manage various components of a transaction. These services will handle customer orders, billing, payment, inventory, and shipping. You need to recommend a solution that allows these cloud services to communicate transaction information asynchronously using XML messages. What should you include in your recommendation?

A. Azure Service Fabric
B. Azure Data Lake
C. Azure Service Bus
D. Azure Traffic Manager

A

Answer: C. Azure Service Bus

Reasoning: Azure Service Bus is a messaging service that facilitates asynchronous communication between different services and applications. It is payload-agnostic, so messages can carry XML, and it is designed to handle complex messaging workflows, making it suitable for scenarios where different components of a system need to communicate asynchronously. In this case, the sales application requires asynchronous communication between services handling customer orders, billing, payment, inventory, and shipping, which aligns well with the capabilities of Azure Service Bus.

Breakdown of non-selected options:

A. Azure Service Fabric: Azure Service Fabric is a distributed systems platform used to build and manage scalable and reliable microservices and containers. While it is useful for developing applications, it is not specifically designed for asynchronous messaging between services, which is the requirement in this scenario.

B. Azure Data Lake: Azure Data Lake is a storage service optimized for big data analytics workloads. It is not designed for messaging or communication between services, making it unsuitable for asynchronous communication using XML messages.

D. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer that distributes traffic optimally to services across global Azure regions. It is not related to messaging between services and therefore does not meet the requirement for asynchronous communication using XML messages.
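The asynchronous messaging pattern can be sketched with a plain in-process queue: the producer enqueues an XML message and returns immediately, and the consumer processes it on its own schedule. Here `queue.Queue` stands in for a Service Bus queue, and the order schema is invented for the example:

```python
# Toy illustration of queue-based asynchronous messaging with XML payloads.
# queue.Queue stands in for a Service Bus queue; the <order> schema is
# made up for this sketch.
import queue
import xml.etree.ElementTree as ET

bus = queue.Queue()   # stand-in for a Service Bus queue

def send_order(order_id, amount):
    """Producer: serialize an order to XML and enqueue it, then return."""
    msg = ET.Element("order", id=order_id)
    ET.SubElement(msg, "amount").text = str(amount)
    bus.put(ET.tostring(msg))          # producer does not wait for processing

def process_next():
    """Consumer: dequeue and parse the next XML message on its own schedule."""
    raw = bus.get()
    order = ET.fromstring(raw)
    return order.get("id"), float(order.find("amount").text)

send_order("42", 19.99)
received = process_next()
```

The decoupling is the point: the billing, inventory, and shipping services would each consume from their own queues (or topic subscriptions) without the order service having to wait for any of them.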

12
Q

You need to implement disaster recovery for an on-premises Hadoop cluster that uses HDFS, with Azure as the replication target. Which Azure service should you use?
A. Azure Blob Storage
B. Azure Data Lake Storage Gen2
C. Azure Backup
D. Azure Site Recovery

A

Answer: B. Azure Data Lake Storage Gen2

Reasoning:
The question requires a solution for disaster recovery of an on-premises Hadoop cluster using HDFS, with Azure as the replication target. Azure Data Lake Storage Gen2 is specifically designed for big data analytics and is optimized for Hadoop workloads. It provides a hierarchical namespace and is compatible with HDFS, making it the most suitable choice for replicating Hadoop data.

Breakdown of non-selected options:
- A. Azure Blob Storage: While Azure Blob Storage can store large amounts of unstructured data, it does not provide the hierarchical namespace and HDFS compatibility that Azure Data Lake Storage Gen2 offers, which are crucial for Hadoop workloads.
- C. Azure Backup: Azure Backup is primarily used for backing up and restoring data, but it is not designed for replicating Hadoop clusters or handling HDFS data specifically.
- D. Azure Site Recovery: Azure Site Recovery is used for disaster recovery of entire virtual machines and applications, but it is not tailored to Hadoop clusters or HDFS data replication.
13
Q

You have been tasked with implementing a governance solution for a large Azure environment containing numerous resource groups. You need to ensure that all resource groups comply with the organization’s policies. Which Azure Policy scope should you use?

A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. Subscriptions
D. Compute resources
E. Resource groups
F. Management groups

A

Answer: F. Management groups

Reasoning:
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce rules and effects over your resources so that those resources stay compliant with your corporate standards and service level agreements. When dealing with a large Azure environment containing numerous resource groups, it is important to apply policies at a level that encompasses all of them efficiently. Management groups are designed to help manage access, policy, and compliance across multiple subscriptions. By applying policies at the management group level, you can ensure that all underlying subscriptions and their respective resource groups comply with the organization's policies.

Breakdown of non-selected options:
A. Azure Active Directory (Azure AD) administrative units - These are used to delegate administrative permissions within Azure AD and are not related to Azure Policy scope for resource compliance.
B. Azure Active Directory (Azure AD) tenants - A tenant is a dedicated instance of Azure AD that an organization receives when it signs up for a Microsoft cloud service. It is not used for Azure Policy scope.
C. Subscriptions - While policies can be applied at the subscription level, using management groups allows for broader policy application across multiple subscriptions, which is more suitable for large environments.
D. Compute resources - This is a specific type of resource and not a scope for applying Azure Policy.
E. Resource groups - Policies can be applied at the resource group level, but this would require applying policies individually to each resource group, which is not efficient for a large environment.
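Policy inheritance is what makes the management-group scope efficient: an assignment at a scope applies to every descendant scope. A minimal sketch, with a hypothetical hierarchy and policy name:

```python
# Sketch of scope inheritance: one assignment at the management group is
# in effect for every subscription and resource group beneath it.
HIERARCHY = {                     # child scope -> parent scope
    "mg-root":  None,             # management group (top)
    "sub-prod": "mg-root",        # subscriptions
    "sub-dev":  "mg-root",
    "rg-web":   "sub-prod",       # resource groups
    "rg-data":  "sub-dev",
}

ASSIGNMENTS = {"mg-root": ["require-tags"]}   # single top-level assignment

def effective_policies(scope):
    """Collect policies assigned at this scope and at every ancestor scope."""
    policies = []
    while scope is not None:
        policies += ASSIGNMENTS.get(scope, [])
        scope = HIERARCHY[scope]
    return policies
```

One assignment at `mg-root` reaches every resource group; assigning at the resource-group level instead would mean one assignment per group, which is the inefficiency option E describes.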

14
Q

You have an on-premises data center hosting several SQL Server instances. You plan to migrate some of these databases to Azure SQL Database Managed Instance. You need to recommend a migration solution that meets the following requirements: • Ensures minimal downtime during migration. • Supports on-premises instances running SQL Server 2008 R2. • Allows the migration of multiple databases in parallel. • Maintains compatibility with all SQL Server features. What should you include in your recommendation?

A. Use Azure Database Migration Service to migrate the databases.
B. Use SQL Server Integration Services to migrate the databases.
C. Upgrade the on-premises instances to SQL Server 2016, then use Azure Database Migration Service to migrate the databases.
D. Use Data Migration Assistant to migrate the databases.

A

Answer: A. Use Azure Database Migration Service to migrate the databases.

Reasoning:
Azure Database Migration Service (DMS) is designed to facilitate the migration of databases to Azure with minimal downtime, which is a key requirement in this scenario. It supports SQL Server 2008 R2 sources, allows multiple databases to be migrated in parallel, and, when targeting Azure SQL Database Managed Instance, maintains compatibility with SQL Server features. DMS is built to handle such migrations efficiently and is the most suitable option given the requirements.

Breakdown of non-selected options:
B. Use SQL Server Integration Services to migrate the databases.
- SQL Server Integration Services (SSIS) is primarily used for data transformation and ETL processes rather than full database migrations. It does not inherently support minimal downtime or parallel migrations of multiple databases as effectively as DMS.

C. Upgrade the on-premises instances to SQL Server 2016, then use Azure Database Migration Service to migrate the databases.
- While upgrading to SQL Server 2016 could be beneficial for other reasons, it is not necessary for the migration itself. Azure DMS supports SQL Server 2008 R2 directly, making this step redundant and adding downtime rather than minimizing it.

D. Use Data Migration Assistant to migrate the databases.
- Data Migration Assistant (DMA) is a tool used to assess and identify compatibility issues when migrating to Azure SQL Database, but it is not designed for the actual migration process, especially when minimal downtime and parallel migrations are required.

15
Q

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements: Provide access to the full .NET Framework; ensure redundancy in case an Azure region fails; and allow administrators access to the operating system to install custom application dependencies. Solution: You deploy an Azure VM Scale Set across two Azure regions and use an Azure Load Balancer to distribute traffic between the VMs in the Scale Set. Does this meet the goal?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: The requirements for hosting a stateless web app include providing access to the full .NET Framework; ensuring redundancy in case an Azure region fails; and allowing administrators access to the operating system to install custom application dependencies. Deploying an Azure VM Scale Set across two Azure regions with an Azure Load Balancer meets these requirements as follows:

- Access to the full .NET Framework: Azure VMs can run Windows Server, which supports the full .NET Framework.
- Redundancy in case an Azure region fails: By deploying the VM Scale Set across two regions, the solution ensures that if one region fails, the other can continue to serve the application.
- Administrator access to the operating system: Azure VMs provide full access to the OS, allowing administrators to install custom application dependencies.

Breakdown of non-selected answer option:
B. No: This option is incorrect because the proposed solution does meet all the specified requirements. Deploying an Azure VM Scale Set across two regions with a load balancer provides the necessary redundancy, access to the full .NET Framework, and administrative access to the OS.

16
Q

You have an Azure subscription that includes an Azure Storage account. You plan to implement Azure File Sync. What is the first step you should take to prepare the storage account for Azure File Sync?

A. Register the Microsoft.Storage resource provider.
B. Create a file share in the storage account.
C. Create a virtual network.
D. Install the Azure File Sync agent on a server.

A

Answer: B. Create a file share in the storage account.

Reasoning: To implement Azure File Sync, the first step is to create a file share in the Azure Storage account. Azure File Sync requires a file share to sync files between the on-premises server and the Azure cloud. This file share acts as the cloud endpoint for the sync process.

Breakdown of non-selected answer options:
- A. Register the Microsoft.Storage resource provider: This step is not necessary for preparing the storage account specifically for Azure File Sync. The Microsoft.Storage resource provider is typically registered by default in Azure subscriptions and is not a specific requirement for Azure File Sync setup.
- C. Create a virtual network: Creating a virtual network is not directly related to setting up Azure File Sync, which does not require a virtual network configuration as part of its initial setup.
- D. Install the Azure File Sync agent on a server: While installing the Azure File Sync agent is a necessary step in the overall process, it is not the first step in preparing the storage account itself. The agent is installed on the on-premises server that will sync with the Azure file share.

17
Q

You have a highly available application running on an AKS cluster in Azure. You need to ensure that the application is accessible over HTTPS without configuring SSL on each container. Which Azure service should you use?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS Ingress Controller
D. Azure Application Gateway

A

Answer: C. AKS Ingress Controller

Reasoning: An AKS ingress controller can perform SSL/TLS termination at the ingress layer, so HTTPS is handled once at the edge of the cluster and certificates do not have to be configured on each container. Traffic is decrypted at the ingress and routed to the backend services over the cluster network.

Breakdown of non-selected options:
- A. Azure Front Door: Provides global load balancing and SSL offload across regions, which is more than a single AKS cluster requires for this scenario.
- B. Azure Traffic Manager: A DNS-based traffic routing service; it does not terminate SSL.
- D. Azure Application Gateway: Can also terminate SSL in front of AKS, but it is an additional resource outside the cluster, whereas an ingress controller handles SSL termination within the AKS deployment itself.

18
Q

You have an on-premises storage solution that supports the Hadoop Distributed File System (HDFS). You need to migrate this solution to Azure and ensure it is accessible from multiple regions. What should you use?
A. Azure Data Lake Storage Gen2
B. Azure NetApp Files
C. Azure Files
D. Azure Blob Storage

A

Answer: A. Azure Data Lake Storage Gen2

Reasoning: Azure Data Lake Storage Gen2 is specifically designed to handle big data analytics workloads and is fully compatible with HDFS, making it an ideal choice for migrating an on-premises HDFS solution to Azure. It also provides high scalability and can be accessed from multiple regions, which aligns with the requirement of ensuring accessibility from multiple regions.

Breakdown of non-selected options:
- B. Azure NetApp Files: While Azure NetApp Files provides high-performance file storage, it is not designed for HDFS compatibility and big data analytics workloads, making it less suitable for this scenario.
- C. Azure Files: Azure Files offers fully managed file shares in the cloud that are accessible via the SMB protocol. However, it does not natively support HDFS, which is a critical requirement for this migration.
- D. Azure Blob Storage: Although Azure Blob Storage is highly scalable and can be accessed from multiple regions, it does not natively support HDFS. It is suited to object storage rather than the file system compatibility required for HDFS.

19
Q

You plan to deploy an Azure virtual machine to run a mission-critical application. The virtual machine will store data on a disk with BitLocker Drive Encryption enabled. You need to use Azure Backup to back up the virtual machine. Which two backup solutions should you use? Each option presents part of the solution.

A. Azure Backup (MARS) agent
B. Azure Backup Server
C. Azure Site Recovery
D. Backup Pre-Checks

A

Answer: B. Azure Backup Server
Answer: D. Backup Pre-Checks

Reasoning:
When backing up an Azure virtual machine with BitLocker Drive Encryption enabled, it’s important to ensure that the backup solution supports encrypted disks. Azure Backup Server is a suitable option because it can handle the backup of encrypted disks. Additionally, Backup Pre-Checks are essential to confirm that the backup configuration is correct and that no issues could prevent a successful backup. These pre-checks surface potential problems before the backup process begins, which is crucial for mission-critical applications.

Breakdown of non-selected options:
A. Azure Backup (MARS) agent - The MARS agent is typically used for backing up files, folders, and system state from on-premises machines to Azure. It is not suitable for backing up Azure virtual machines directly, especially those with BitLocker encryption.
C. Azure Site Recovery - This is primarily a disaster recovery solution rather than a backup solution. It replicates and fails over virtual machines to another region; it is not intended for regular backups.
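
The idea behind pre-checks can be sketched as a simple validation pass. This is a toy model, not the real Azure Backup pre-check logic; the field names and checks are invented for illustration:

```python
def run_backup_prechecks(vm: dict) -> list[str]:
    """Toy pre-check pass over a VM description; returns blocking issues.

    Illustrative only: the real platform pre-checks inspect agent health,
    snapshot permissions, and configuration drift on the Azure side.
    """
    issues = []
    if not vm.get("agent_healthy", False):
        issues.append("VM agent is not responding")
    if vm.get("encryption") == "bitlocker" and not vm.get("key_vault_access", False):
        issues.append("backup service lacks access to the encryption keys")
    if vm.get("free_snapshot_space_gb", 0) < 1:
        issues.append("insufficient snapshot space")
    return issues

# A BitLocker-encrypted VM where key access has not been granted yet:
vm = {"agent_healthy": True, "encryption": "bitlocker",
      "key_vault_access": False, "free_snapshot_space_gb": 5}
print(run_backup_prechecks(vm))
```

The value for a mission-critical workload is that such a pass fails fast, before any backup window is consumed.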

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

You have an Azure subscription. You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:
✑ Minimize the time it takes to provision compute resources during scale-out operations.
✑ Support autoscaling of Windows Server containers.
Which scaling option should you recommend?
A. Kubernetes version 1.20.2 or newer
B. Virtual nodes with Virtual Kubelet ACI
C. Cluster autoscaler
D. Horizontal pod autoscaler

A

Answer: C. Cluster autoscaler

Reasoning:
The question requires a solution that minimizes the time needed to provision compute resources during scale-out operations and supports autoscaling of Windows Server containers. The cluster autoscaler automatically adjusts the size of the Kubernetes cluster by adding or removing nodes based on the resource requirements of the workloads. It can quickly provision additional nodes when pods cannot be scheduled, which aligns with the requirement to minimize provisioning time, and it supports Windows Server node pools, making it suitable for this scenario.

Breakdown of non-selected options:
A. Kubernetes version 1.20.2 or newer - While a newer Kubernetes version may bring performance improvements and additional features, it does not by itself address provisioning time or autoscaling for Windows Server containers.

B. Virtual nodes with Virtual Kubelet ACI - Virtual nodes allow burstable workloads on Azure Container Instances (ACI) without managing the underlying infrastructure, but they do not address the requirement to autoscale Windows Server containers in this scenario.

D. Horizontal pod autoscaler - The horizontal pod autoscaler automatically scales the number of pods in a deployment or replica set based on observed CPU utilization or other metrics. It helps scale applications but does not manage the underlying compute resources (nodes), which is what must scale quickly during scale-out operations.
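
The cluster autoscaler's core decision can be sketched as "how many nodes fit the pending pods". This is a deliberately rough model (the real autoscaler does bin-packing per node pool and respects taints, limits, and scale-down rules); the numbers are made up:

```python
import math

def nodes_to_add(pending_pod_cpu: list[float], node_cpu_capacity: float) -> int:
    """Rough sketch of a cluster-autoscaler scale-out decision.

    If pods are unschedulable, add enough nodes to fit their aggregate
    CPU request. Real bin-packing is considerably more involved.
    """
    if not pending_pod_cpu:
        return 0
    return math.ceil(sum(pending_pod_cpu) / node_cpu_capacity)

# Six pending Windows pods requesting 1.5 vCPU each, on 4-vCPU nodes:
print(nodes_to_add([1.5] * 6, 4.0))
```

This also shows why the horizontal pod autoscaler alone is insufficient: it produces the pending pods, but something must still create the nodes.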

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory. Your company has a line-of-business (LOB) application developed internally. You need to implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) when users attempt to access the application from an unknown location. Which two features should you include in the solution? Each selection is worth one point.

A. Azure AD Privileged Identity Management (PIM)
B. Azure Application Gateway
C. Azure AD enterprise applications
D. Azure AD Identity Protection
E. Conditional Access policies

A

Answer: C. Azure AD enterprise applications
Answer: E. Conditional Access policies

Reasoning:
To implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) for an internally developed line-of-business (LOB) application, you need Azure AD enterprise applications and Conditional Access policies. Registering the app as an Azure AD enterprise application lets you configure SAML-based SSO for it. Conditional Access policies then enforce MFA under specific conditions, such as accessing the application from an unknown location.

Breakdown of non-selected options:
A. Azure AD Privileged Identity Management (PIM) - This is used for managing, controlling, and monitoring privileged access within Azure AD, Azure, and other Microsoft Online Services. It is not directly related to implementing SSO or enforcing MFA for applications.
B. Azure Application Gateway - This is a web traffic load balancer for managing traffic to your web applications. It does not provide SSO or MFA capabilities.
D. Azure AD Identity Protection - This identifies potential vulnerabilities affecting your organization’s identities and configures automated responses to detected suspicious actions. While it can enhance security, it is not directly used to implement SSO or enforce MFA for specific applications.
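
The Conditional Access logic here is essentially a predicate over the sign-in context. A minimal model (the app name and location list are invented, and real policies evaluate many more signals):

```python
def requires_mfa(sign_in: dict, known_locations: set[str]) -> bool:
    """Minimal model of a Conditional Access rule: require MFA when the
    sign-in targets the LOB app from a location not in the trusted list.
    App name and locations are illustrative, not real tenant values."""
    return (sign_in["app"] == "lob-app"
            and sign_in["location"] not in known_locations)

trusted = {"London", "Amsterdam", "Berlin", "Rome"}
print(requires_mfa({"app": "lob-app", "location": "Oslo"}, trusted))
print(requires_mfa({"app": "lob-app", "location": "London"}, trusted))
```

The enterprise application registration supplies the "which app" half of this rule; the Conditional Access policy supplies the condition and the MFA grant control.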

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

You are storing user profile data in an Azure Cosmos DB database. You want to set up a process to automatically back up the data to Azure Storage every week. What should you use to achieve this?

A. Azure Backup
B. Azure Cosmos DB backup and restore
C. Azure Import/Export Service
D. Azure Data Factory

A

Answer: D. Azure Data Factory

Reasoning: Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows for orchestrating and automating data movement and data transformation. It is suitable for setting up a process to automatically back up data from Azure Cosmos DB to Azure Storage on a weekly basis. You can create a pipeline in Azure Data Factory to copy data from Cosmos DB to Azure Storage and schedule it to run weekly.

Breakdown of non-selected options:

A. Azure Backup: Azure Backup is primarily used for backing up Azure VMs, SQL databases, and other Azure resources. It does not natively support backing up data from Azure Cosmos DB to Azure Storage.

B. Azure Cosmos DB backup and restore: Azure Cosmos DB has built-in backup and restore capabilities, but they do not provide a direct mechanism to back up data to your own Azure Storage account on a schedule. They focus on point-in-time restore within the Cosmos DB service itself.

C. Azure Import/Export Service: This service transfers large amounts of data to and from Azure using physical disks. It is not suitable for automated, scheduled backups of Cosmos DB data to Azure Storage.

Therefore, Azure Data Factory is the most suitable option for automating a weekly backup from Azure Cosmos DB to Azure Storage.
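
The Data Factory approach boils down to a copy pipeline plus a weekly schedule trigger. The recurrence arithmetic behind such a trigger can be sketched in plain Python (no ADF SDK; the dates are arbitrary examples):

```python
from datetime import datetime, timedelta

def next_weekly_run(after: datetime, weekday: int, hour: int) -> datetime:
    """Compute the next weekly trigger occurrence (weekday: Mon=0 .. Sun=6).

    Conceptually what a weekly pipeline schedule does; Data Factory itself
    expresses this declaratively as a schedule trigger on the pipeline.
    """
    days_ahead = (weekday - after.weekday()) % 7
    candidate = (after + timedelta(days=days_ahead)).replace(
        hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= after:
        candidate += timedelta(days=7)
    return candidate

# Next Sunday 02:00 after Wednesday 2024-01-03 10:00:
print(next_weekly_run(datetime(2024, 1, 3, 10, 0), weekday=6, hour=2))
```

In the real pipeline, the copy activity would read from the Cosmos DB source dataset and write to a Blob Storage sink each time the trigger fires.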

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

You have a highly available application running on an AKS cluster in Azure. To ensure the application remains available even if a single availability zone fails, which Azure service should you use?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer

A

Answer: A. Azure Front Door

Reasoning: Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and highly available web applications. It routes traffic across multiple regions or availability zones, so your application remains available even if a single availability zone fails. This makes it the most suitable option for ensuring high availability in the scenario described.

Breakdown of non-selected options:
- B. Azure Traffic Manager: Azure Traffic Manager can route traffic via DNS and provide high availability by directing traffic to different regions, but it operates at the DNS level and does not offer the same real-time failover and global load balancing as Azure Front Door.
- C. AKS ingress controller: An AKS ingress controller manages inbound traffic to applications running in an AKS cluster. It does not inherently provide cross-zone or cross-region failover, which is needed to survive an availability zone failure.
- D. Azure Load Balancer: Azure Load Balancer is a regional service that distributes traffic within a single region. It does not provide the cross-zone or cross-region failover required to maintain availability if an entire availability zone fails.

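
The failover behavior described above is driven by health probes: the edge simply stops sending traffic to backends that fail their probes. A toy sketch (backend names and latencies are made up):

```python
def pick_backend(backends: list[dict]) -> str:
    """Sketch of edge routing with health probes: prefer the lowest-latency
    backend that is still healthy, so losing one zone or region just
    shifts traffic elsewhere."""
    healthy = [b for b in backends if b["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy backends")
    return min(healthy, key=lambda b: b["latency_ms"])["name"]

backends = [
    {"name": "aks-zone1", "latency_ms": 10, "healthy": False},  # zone outage
    {"name": "aks-zone2", "latency_ms": 18, "healthy": True},
    {"name": "aks-zone3", "latency_ms": 25, "healthy": True},
]
print(pick_backend(backends))
```

The key contrast with DNS-based routing is that this decision happens per request at the edge, not at DNS-resolution time subject to client-side caching.
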
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

You are planning to migrate a large-scale PostgreSQL database to Azure. The database must be highly available and support read replicas to scale out read operations. Which Azure database service should you recommend?
A. Azure SQL Managed Instance
B. Azure Database for PostgreSQL
C. Azure Cosmos DB

A

Answer: B. Azure Database for PostgreSQL

Reasoning: The requirement is to migrate a large-scale PostgreSQL database to Azure with high availability and support for read replicas to scale out read operations. Azure Database for PostgreSQL is purpose-built for PostgreSQL workloads and offers both high availability and read replicas, making it the most suitable choice for this scenario.

Breakdown of non-selected options:
- A. Azure SQL Managed Instance: This option is designed for SQL Server databases, not PostgreSQL. It does not natively support PostgreSQL, so it is not suitable for this requirement.
- C. Azure Cosmos DB: While Cosmos DB is a globally distributed, multi-model database service, it is not a PostgreSQL engine and does not support PostgreSQL read replicas the way Azure Database for PostgreSQL does, making it less suitable for this scenario.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

You have an on-premises server named Server1 running Windows Server 2019. Server1 hosts a SQL Server database that is 2 TB in size. You want to copy this database to an Azure Blob Storage account named store1. You need to recommend an Azure service that can achieve this goal while minimizing costs and ensuring high availability of the database. Which Azure service should you recommend?
A. Azure Storage Explorer
B. Azure Backup
C. Azure Site Recovery
D. Azure Database Migration Service

A

Answer: D. Azure Database Migration Service

Reasoning:
Azure Database Migration Service is specifically designed to facilitate the migration of databases to Azure, including SQL Server databases. It provides a streamlined, reliable, and cost-effective way to migrate large databases while keeping the source database available during the migration process.

Breakdown of non-selected options:
- A. Azure Storage Explorer: This is a tool for managing Azure Storage resources, not a service for migrating databases. It does not provide the capabilities needed for a large-scale database migration with high availability.
- B. Azure Backup: This service is primarily used for backing up data to Azure, not for migrating databases. It does not offer the necessary features for database migration.
- C. Azure Site Recovery: This service is designed for disaster recovery, not database migration. It is not optimized for migrating SQL Server databases to Azure Blob Storage.
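
One reason tooling matters at this scale: a 2 TB copy to Blob Storage is split into many individually tracked blocks that must be uploaded, retried, and committed. A back-of-the-envelope calculation (the 100 MiB block size is a hypothetical choice for illustration):

```python
def block_count(total_bytes: int, block_bytes: int) -> int:
    """Number of blocks a block-blob upload of a given size needs
    (ceiling division)."""
    return -(-total_bytes // block_bytes)

TWO_TB = 2 * 1024**4          # 2 TiB database
BLOCK = 100 * 1024**2         # hypothetical 100 MiB block size
print(block_count(TWO_TB, BLOCK))
```

A managed migration service handles this chunking, retry, and resume logic for you, which is part of why it is preferred over ad-hoc copying for databases this large.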

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

You need to deploy a highly available; globally scalable web application on Azure. The solution must meet the following requirements: support autoscaling based on user traffic; ensure high availability across multiple regions; and provide low latency for users worldwide. Which Azure services should you use to achieve these requirements?

A. Azure Traffic Manager, Azure Load Balancer, Azure Virtual Machines
B. Azure App Service, Azure Traffic Manager, Azure SQL Database
C. Azure App Service, Azure Front Door, Azure Cosmos DB
D. Azure Kubernetes Service, Azure Traffic Manager, Azure Cosmos DB

A

Answer: C. Azure App Service, Azure Front Door, Azure Cosmos DB

Reasoning:
- Azure App Service: This service is ideal for deploying web applications, as it supports autoscaling and high availability. It can automatically scale out to handle increased traffic and provides built-in load balancing.
- Azure Front Door: This service is designed for global routing and provides low latency by directing user traffic to the nearest available backend. It also supports high availability across multiple regions.
- Azure Cosmos DB: This globally distributed database service ensures low latency and high availability for data access worldwide, making it suitable for applications that require global scalability.

Breakdown of non-selected options:
- A. Azure Traffic Manager, Azure Load Balancer, Azure Virtual Machines: While this combination can provide high availability and load balancing, it requires more manual configuration and management than Azure App Service, and Azure Load Balancer is regional and does not provide the global routing needed for low latency worldwide.
- B. Azure App Service, Azure Traffic Manager, Azure SQL Database: Azure Traffic Manager provides DNS-based traffic routing, but it does not offer the same global load balancing and low latency as Azure Front Door. Azure SQL Database, while highly available, is not as globally distributed as Azure Cosmos DB.
- D. Azure Kubernetes Service, Azure Traffic Manager, Azure Cosmos DB: Azure Kubernetes Service is a powerful option for containerized applications but requires more management and configuration than Azure App Service, and Azure Traffic Manager lacks the global routing capabilities of Azure Front Door.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

Introductory Information Case Study - This is a case study. Case studies are not timed separately. You can use as much exam time as you need to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you can complete all questions included in this exam within the time provided. To answer the questions included in a case study, you will need to reference information provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study - To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview - Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

Existing Environment: Active Directory Environment - The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.

Existing Environment: Network Infrastructure - Each office contains at least one domain controller from the corp.fabrikam.com forest. The main office contains all the domain controllers for the rd.fabrikam.com forest. All the offices have a high-speed connection to the internet. An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V. The IT department currently uses a separate Hyper-V environment to test updates to WebApp1. Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

Existing Environment: Problem Statements - The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Requirements: Planned Changes - Fabrikam plans to move most of its production workloads to Azure during the next few years, including virtual machines that rely on Active Directory for authentication. As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft 365 deployment. All R&D operations will remain on-premises. Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

Requirements: Technical Requirements - Fabrikam identifies the following technical requirements:
- Website content must be easily updated from a single point.
- User input must be minimized when provisioning new web app instances.
- Whenever possible, existing on-premises licenses must be used to reduce cost.
- Users must always authenticate by using their corp.fabrikam.com UPN identity.
- Any new deployments to Azure must be redundant in case an Azure region fails.
- Whenever possible, solutions must be deployed to Azure by using the Standard pricing tier of Azure App Service.
- An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
- In the event that a link fails between Azure and the on-premises network, ensure that the virtual machines hosted in Azure can authenticate to Active Directory.
- Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

Requirements: Database Requirements - Fabrikam identifies the following database requirements:
- Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
- To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
- Database backups must be retained for a minimum of seven years to meet compliance requirements.

Requirements: Security Requirements - Fabrikam identifies the following security requirements:
- Company information including policies, templates, and data must be inaccessible to anyone outside the company.
- Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an internet link fails.
- Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
- All administrative access to the Azure portal must be secured by using multi-factor authentication (MFA).
- The testing of WebApp1 updates must not be visible to anyone outside the company.

Question: You need to recommend a notification solution for the IT Support distribution group. What should you include in the recommendation?

A. a SendGrid account with advanced reporting
B. an action group
C. Azure Network Watcher
D. Azure AD Connect Health

A

Answer: D. Azure AD Connect Health

Reasoning: The question requires a notification solution for the IT Support distribution group specifically related to directory synchronization services. Azure AD Connect Health monitors and provides insights into the health of your on-premises identity infrastructure, including directory synchronization. It can send alerts and notifications to specified recipients, such as the IT Support distribution group, when issues are detected with the directory synchronization services.

Breakdown of non-selected options:

A. a SendGrid account with advanced reporting - SendGrid is primarily used for sending emails and managing email campaigns. While it can send notifications, it is not designed for monitoring directory synchronization services or providing health insights related to Azure AD Connect.

B. an action group - Action groups are used in Azure Monitor to trigger actions such as sending emails or SMS when an alert fires. They can deliver notifications, but they are not tailored to directory synchronization services; Azure AD Connect Health provides more targeted monitoring and alerting for this purpose.

C. Azure Network Watcher - Azure Network Watcher is a network performance monitoring, diagnostic, and analytics service. It is unrelated to directory synchronization and would not be suitable for notifying the IT Support group about synchronization issues.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

You have an Azure Storage account containing sensitive data, and you want to encrypt the data at rest using customer-managed keys. Which encryption algorithm and key length should you use for the encryption keys?
A. RSA 2048
B. RSA 3072
C. AES 128
D. AES 256

A

Answer: D. AES 256

Reasoning: Azure Storage encrypts data at rest using AES with a 256-bit key, including when customer-managed keys are configured. AES 256 is widely recognized for its strong security and is a standard choice for encrypting sensitive data. It provides a good balance between security and performance, making it suitable for encrypting data at rest in Azure Storage.

Breakdown of non-selected options:
- A. RSA 2048: RSA is an asymmetric encryption algorithm and is not typically used for encrypting bulk data at rest due to its computational intensity and inefficiency for large data volumes. It is more commonly used for small payloads, such as wrapping keys or digital signatures.
- B. RSA 3072: Like RSA 2048, this is an asymmetric algorithm unsuited to encrypting large volumes of data at rest; it is primarily used for secure key exchange and digital signatures.
- C. AES 128: While AES 128 is a symmetric algorithm like AES 256, its shorter key length offers a lower security margin. AES 256 is preferred for sensitive data.
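
The distinction between the options is easy to miss: the data itself is encrypted with a symmetric AES data key, while an asymmetric customer-managed key in Key Vault only wraps that data key (envelope encryption). A tiny sketch of the key sizes involved:

```python
import os

# Symmetric keys are raw random bytes: "AES-256" simply means a 32-byte key.
# This only illustrates key material sizes, not a full encryption setup.
data_key = os.urandom(32)     # AES-256 data encryption key (what encrypts blobs)
small_key = os.urandom(16)    # AES-128, for comparison

print(len(data_key) * 8, len(small_key) * 8)
```

The RSA options in the question describe the wrapping key, not the data-at-rest algorithm, which is why they are wrong answers here.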

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

You plan to deploy 50 applications to Azure. These applications will be deployed across five Azure Kubernetes Service (AKS) clusters, with each cluster located in a different Azure region. The application deployment must meet the following requirements:
✑ Ensure that the applications remain available if a single AKS cluster fails.
✑ Ensure that internet traffic is encrypted using SSL without configuring SSL on each container.
Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer

A

Answer: A. Azure Front Door

Reasoning: Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and highly available web applications. It provides SSL termination, meaning it handles SSL encryption and decryption at the edge, allowing you to offload SSL from your applications. This meets the requirement of encrypting internet traffic using SSL without configuring SSL on each container. Additionally, Azure Front Door can route traffic to multiple regions, providing high availability and resilience if a single AKS cluster fails.

Breakdown of non-selected options:

B. Azure Traffic Manager: While Azure Traffic Manager can distribute traffic across multiple regions and provide high availability, it is a DNS-based service and does not handle SSL termination, so it cannot encrypt internet traffic without SSL being configured on each container.

C. AKS ingress controller: An ingress controller can manage SSL termination, but it operates at the cluster level, so you would need to configure certificates in each of the five AKS clusters individually, which does not meet the requirement.

D. Azure Load Balancer: Azure Load Balancer operates at the network layer (Layer 4) and does not provide SSL termination. It distributes traffic within a single region or cluster and cannot meet either requirement.
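
SSL offload at a global entry point can be modeled as: the edge accepts HTTPS, then forwards the request onward so the containers never hold the public certificate. A toy sketch (the `X-Forwarded-Proto` header is the common real-world convention for recording the original scheme):

```python
def terminate_tls(edge_request: dict) -> dict:
    """Toy model of SSL offload at a global entry point: the edge handles
    HTTPS, then forwards a plain (or separately re-encrypted) request to
    the cluster, recording the original scheme in a header."""
    assert edge_request["scheme"] == "https", "client must use TLS to the edge"
    return {
        "scheme": "http",
        "path": edge_request["path"],
        "headers": {**edge_request.get("headers", {}),
                    "X-Forwarded-Proto": "https"},
    }

fwd = terminate_tls({"scheme": "https", "path": "/orders", "headers": {}})
print(fwd["scheme"], fwd["headers"]["X-Forwarded-Proto"])
```

This is why the certificate only needs to be configured once at the edge rather than in each of the 50 containers.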

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

You need to deploy a highly available web application on Azure. The solution must meet the following requirements: use a managed database service; be highly available within a single region; and support autoscaling based on user traffic. Which Azure services should you use to achieve these requirements?
A. Azure Virtual Machines, Azure Load Balancer, Azure SQL Database
B. Azure App Service, Azure Load Balancer, Azure SQL Database
C. Azure Kubernetes Service, Azure Load Balancer, Azure Cosmos DB
D. Azure App Service, Azure Application Gateway, Azure Cosmos DB

A

Answer: D. Azure App Service, Azure Application Gateway, Azure Cosmos DB

Reasoning:
- Azure App Service is a fully managed platform for building, deploying, and scaling web apps. It supports autoscaling and is highly available within a single region, making it suitable for deploying a highly available web application.
- Azure Application Gateway is a web traffic load balancer for managing traffic to your web applications. It provides high availability and autoscaling features, which align with the requirements.
- Azure Cosmos DB is a fully managed NoSQL database service that offers high availability and scalability, meeting the requirement for a managed database service.

Breakdown of non-selected options:
- A. Azure Virtual Machines, Azure Load Balancer, Azure SQL Database: While Azure SQL Database is a managed database service, Azure Virtual Machines require more management overhead than Azure App Service, and Azure Load Balancer lacks the application-level routing and autoscaling of Azure Application Gateway.
- B. Azure App Service, Azure Load Balancer, Azure SQL Database: Azure App Service and Azure SQL Database are suitable choices, but Azure Load Balancer is suited to network-level load balancing rather than the application-level routing Azure Application Gateway provides.
- C. Azure Kubernetes Service, Azure Load Balancer, Azure Cosmos DB: Azure Kubernetes Service is a good option for containerized applications but requires more management than Azure App Service, and Azure Load Balancer is less suitable than Azure Application Gateway for application-level traffic management.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

You need to design a highly available Azure Function App that meets the following requirements:
✑ The function app must remain available during a zone outage.
✑ The function app must be scalable.
✑ Costs must be minimized.
Which deployment option should you use?
A. Function App on App Service Environment
B. Function App on Linux
C. Function App with Traffic Manager
D. Function App with Azure Load Balancer

A

Answer: C. Function App with Traffic Manager

Reasoning:
To design a highly available Azure Function App that remains available during a zone outage, is scalable, and minimizes costs, the most suitable listed option is a Function App fronted by Traffic Manager. Traffic Manager distributes traffic across multiple deployments, providing availability even if an entire zone (or region) is lost. It also supports automatic failover and load distribution, which ensures scalability, and it is a far more cost-effective solution than deploying into an App Service Environment.

Breakdown of non-selected options:
A. Function App on App Service Environment: While this option provides high availability and scalability, it is much more expensive than using Traffic Manager. App Service Environment is intended for isolated, high-security environments, which this scenario does not require.

B. Function App on Linux: This is simply a hosting option for the Function App; it does not inherently provide availability across zones and does not address the zone-outage requirement.

D. Function App with Azure Load Balancer: Azure Load Balancer distributes traffic within a single region and does not provide the global distribution or failover across deployments needed to remain available during a zone outage.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

You need to design a highly available Azure Storage account that meets the following requirements: ✑ The storage account must remain available during a zone outage. ✑ The storage account must be highly performant. ✑ Costs must be minimized. Which deployment option should you choose?
A. Geo-redundant storage
B. Zone-redundant storage
C. Premium storage
D. Standard storage with read-access geo-redundant storage

A

Answer: B. Zone-redundant storage

Reasoning:
The question requires a storage solution that remains available during a zone outage, is highly performant, and minimizes costs. Zone-redundant storage (ZRS) replicates data synchronously across multiple availability zones within a region, so the storage account remains available even if one zone goes down. ZRS also offers good performance and is generally more cost-effective than geo-redundant options, making it the most suitable choice given the requirements.

Breakdown of non-selected options:
A. Geo-redundant storage - While this option provides high availability by replicating data across regions, it is more expensive than ZRS and unnecessary for zone-level redundancy. It may also introduce higher replication latency than ZRS.

C. Premium storage - This option is designed for high-performance workloads but does not inherently provide zone redundancy, and its higher cost conflicts with the requirement to minimize costs.

D. Standard storage with read-access geo-redundant storage - This option provides geo-redundancy and read access during regional outages, but it is more expensive than ZRS and unnecessary for zone-level redundancy.
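
The ZRS guarantee above can be modeled as synchronous writes to three zone replicas, so losing one zone loses no data. A toy sketch (three zones and the outage are simulated, not real Azure behavior):

```python
class ZrsAccount:
    """Toy model of zone-redundant storage: every write is committed to
    three zones, so a single-zone failure loses nothing."""

    def __init__(self):
        self.zones = {1: {}, 2: {}, 3: {}}
        self.down: set[int] = set()

    def write(self, key, value):
        # Synchronous replication to every reachable zone.
        for z, store in self.zones.items():
            if z not in self.down:
                store[key] = value

    def read(self, key):
        # Serve from any zone that is up and has the object.
        for z, store in self.zones.items():
            if z not in self.down and key in store:
                return store[key]
        raise KeyError(key)

acct = ZrsAccount()
acct.write("blob1", b"data")
acct.down.add(1)            # simulate a zone outage
print(acct.read("blob1"))   # still served from a surviving zone
```

Geo-redundant options add a second region on top of this, which is exactly the extra cost the question asks you to avoid.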

33
Q

You are designing a highly secure Azure solution that requires encryption of data at rest for a SQL Server database. You plan to use Azure SQL Managed Instance. Which encryption algorithm and key length should you use for Transparent Data Encryption (TDE)?

A. RSA 2048
B. AES 128
C. AES 256
D. RSA 3072

A

Answer: C. AES 256

Reasoning: Azure SQL Managed Instance uses Transparent Data Encryption (TDE) to encrypt data at rest, and TDE uses the AES encryption algorithm. Among the options provided, AES 256 is the most suitable choice because it offers a higher level of security than AES 128 due to its longer key length. AES 256 is a widely accepted standard for strong encryption and is commonly used to secure sensitive data.

Breakdown of non-selected options:

A. RSA 2048 - RSA is an asymmetric encryption algorithm, which is not used for encrypting data at rest in databases like Azure SQL Managed Instance. TDE uses symmetric encryption, specifically AES.

B. AES 128 - While AES 128 is a valid encryption algorithm for TDE, AES 256 provides a stronger level of encryption due to its longer key length, making it the more secure choice.

D. RSA 3072 - Like RSA 2048, RSA 3072 is an asymmetric encryption algorithm and is not used for TDE in Azure SQL Managed Instance. TDE relies on symmetric encryption with AES.

34
Q

You are designing an Azure IoT solution involving 100,000 IoT devices, each streaming data such as location, speed, and time. Approximately 100,000 records will be written every second. You need to recommend a service to store and query the data. Which two services can you recommend? Each option presents a complete solution.
A. Azure Cosmos DB for NoSQL
B. Azure Stream Analytics
C. Azure Event Hubs
D. Azure SQL Database

A

Answer: A. Azure Cosmos DB for NoSQL
Answer: D. Azure SQL Database

Reasoning:
The question requires a solution for storing and querying data from 100,000 IoT devices, with a high write throughput of approximately 100,000 records per second. The solution must handle large-scale data ingestion and provide querying capabilities.

  • Azure Cosmos DB for NoSQL: This service is designed for high throughput and low latency, making it suitable for IoT scenarios with massive data ingestion. It supports horizontal scaling and can handle the required write throughput efficiently. It also offers rich querying capabilities, which makes it a suitable choice for this scenario.
  • Azure SQL Database: This service can handle large volumes of data and provides robust querying capabilities. With features like elastic pools and scaling options, it can be configured to manage high write throughput. It is a suitable choice where relational data storage and complex querying are required.

Breakdown of non-selected options:

  • Azure Stream Analytics: This service is primarily used for real-time data processing and analytics rather than storage. It is designed to process data streams and provide insights, but it is not a storage solution, so it does not satisfy the requirement to store and query data.
  • Azure Event Hubs: This service is designed for data ingestion and event streaming, not for storage or querying. It acts as an event ingestor that can capture and retain data temporarily, but it is not intended for long-term storage or complex querying, so it is also unsuitable.
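
To sketch how Cosmos DB for NoSQL might be provisioned for such a workload (the account, database, and container names are placeholders, and the throughput figure is illustrative rather than prescriptive):

```shell
# Create a Cosmos DB account, database, and container for device telemetry.
# Partitioning on /deviceId spreads high write volume across physical partitions.
az cosmosdb create --name iot-telemetry-acct --resource-group my-rg
az cosmosdb sql database create --account-name iot-telemetry-acct \
  --resource-group my-rg --name telemetry
az cosmosdb sql container create --account-name iot-telemetry-acct \
  --resource-group my-rg --database-name telemetry --name readings \
  --partition-key-path "/deviceId" --throughput 100000
```

The partition key choice matters more than the raw throughput number: a high-cardinality key such as a device ID lets Cosmos DB scale writes horizontally.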
35
Q

You need to deploy a web application that requires horizontal scaling in an Azure subscription. The solution must meet the following requirements: The web application must have access to the full .NET Framework, be hosted in a Platform as a Service (PaaS) environment, and provide automatic scaling based on CPU utilization. Which Azure service should you use?
A. Azure App Service
B. Azure Virtual Machines
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)

A

Answer: A. Azure App Service

Reasoning: Azure App Service is a Platform as a Service (PaaS) offering that supports web applications with access to the full .NET Framework. It provides automatic scaling based on CPU utilization, which aligns with the requirement for horizontal scaling. Azure App Service is specifically designed for hosting web applications and offers built-in scaling features, making it the most suitable choice for this scenario.

Breakdown of non-selected options:
- B. Azure Virtual Machines: This option provides Infrastructure as a Service (IaaS), not PaaS. While it can run the full .NET Framework, it does not inherently provide automatic scaling based on CPU utilization without additional configuration and management, making it less suitable for the requirements.
- C. Azure Kubernetes Service (AKS): AKS is a container orchestration service that can provide horizontal scaling, but it is more complex to set up and manage than Azure App Service. It is not specifically tailored for hosting web applications with the full .NET Framework in a PaaS environment.
- D. Azure Container Instances (ACI): ACI runs containers without requiring server management, but it does not provide the full PaaS experience for web applications with the full .NET Framework. It also lacks built-in automatic scaling based on CPU utilization, making it less suitable for the given requirements.
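
A CPU-based autoscale rule for an App Service plan can be sketched with the Azure CLI; the plan and group names are placeholders and the thresholds are examples only:

```shell
# Autoscale the App Service plan between 2 and 10 instances.
az monitor autoscale create --resource-group my-rg \
  --resource my-plan --resource-type Microsoft.Web/serverfarms \
  --name cpu-autoscale --min-count 2 --max-count 10 --count 2

# Add one instance when average CPU exceeds 70% over 5 minutes.
az monitor autoscale rule create --resource-group my-rg \
  --autoscale-name cpu-autoscale \
  --condition "CpuPercentage > 70 avg 5m" --scale out 1
```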

36
Q

You are designing an Azure IoT solution involving 10,000 IoT devices that will each stream data, including temperature, humidity, and timestamp data. Approximately 10,000 records will be written every second. You need to recommend a service to store and query the data. Which two services can you recommend? Each option presents a complete solution.
A. Azure Table Storage
B. Azure Event Grid
C. Azure Cosmos DB for NoSQL
D. Azure Time Series Insights

A

Answer: C. Azure Cosmos DB for NoSQL
Answer: D. Azure Time Series Insights

Reasoning:
The question requires a solution to store and query high-volume streaming data from IoT devices. The solution must handle approximately 10,000 records per second, which implies the need for a scalable and efficient data storage and querying service.

  • Azure Cosmos DB for NoSQL: This service is designed for high-throughput and low-latency data access, making it suitable for IoT scenarios where large volumes of data are ingested and queried. It supports a flexible schema and global distribution, which are beneficial for IoT data storage and querying.
  • Azure Time Series Insights: This service is specifically designed for IoT data, providing capabilities to store, query, and visualize time-series data. It is optimized for handling large volumes of time-stamped data, such as temperature and humidity readings from IoT devices, making it an ideal choice for this scenario.

Breakdown of non-selected options:

  • A. Azure Table Storage: While Azure Table Storage can handle large volumes of data, it is not optimized for querying complex data patterns or high-throughput scenarios like those required for IoT data streams. It lacks the advanced querying capabilities needed for this use case.
  • B. Azure Event Grid: This service is used for event routing and handling, not for data storage or querying. It is designed to manage events and notifications rather than store large volumes of IoT data, making it unsuitable for the requirements of this question.
37
Q

You have an Azure subscription. You need to recommend an Azure Kubernetes Service (AKS) solution that will use Linux nodes. The solution must meet the following requirements:
✑ Minimize the time it takes to provision compute resources during scale-out operations.
✑ Support autoscaling of Linux containers.
✑ Minimize administrative effort.
Which scaling option should you recommend?
A. Horizontal Pod Autoscaler
B. Cluster Autoscaler
C. Virtual Nodes
D. Virtual Kubelet

A

Answer: C. Virtual Nodes

Reasoning:
- The requirement is to minimize the time it takes to provision compute resources during scale-out operations, support autoscaling of Linux containers, and minimize administrative effort.
- Virtual Nodes in AKS allow for burstable workloads by integrating with Azure Container Instances (ACI), which can provision compute resources very quickly, minimizing scale-out time.
- Virtual Nodes support autoscaling by allowing AKS to scale out to ACI when the cluster runs out of capacity, which also reduces administrative effort because it offloads the management of additional nodes.

Breakdown of non-selected options:
- A. Horizontal Pod Autoscaler: This option adjusts the number of pods in a deployment based on CPU utilization or other selected metrics. While it supports autoscaling, it does not directly minimize the time to provision compute resources or reduce the administrative effort of node management.
- B. Cluster Autoscaler: This option automatically adjusts the number of nodes in a cluster based on pending pods. While it supports autoscaling, it provisions new VMs, which takes more time than using Virtual Nodes with ACI.
- D. Virtual Kubelet: This is an open-source project that allows Kubernetes to connect to other APIs, such as ACI. In the context of AKS, however, Virtual Nodes is the specific implementation that integrates with ACI, making it the more appropriate choice for the stated requirements.
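
Assuming an existing AKS cluster with Azure CNI networking and a dedicated empty subnet for ACI, the virtual-node add-on is enabled roughly like this (cluster and subnet names are placeholders):

```shell
# Enable virtual nodes (ACI-backed) on an AKS cluster.
az aks enable-addons --resource-group my-rg --name my-aks \
  --addons virtual-node --subnet-name aci-subnet
```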

38
Q

You need to recommend a solution to generate a monthly report of all new Azure Resource Manager (ARM) resource deployments in your Azure subscription. What should you include in the recommendation?
A. Azure Activity Log
B. Azure Arc
C. Azure Analysis Services
D. Azure Monitor action groups

A

Answer: A. Azure Activity Log

Reasoning:
The Azure Activity Log provides a record of all resource-management activities in your Azure subscription, including new resource deployments, modifications, and deletions. This makes it the most suitable option for generating a monthly report of all new Azure Resource Manager (ARM) resource deployments.

Breakdown of non-selected options:
- B. Azure Arc: Azure Arc extends Azure management and services to any infrastructure. It is not designed for tracking or reporting on resource deployments within an Azure subscription.
- C. Azure Analysis Services: Azure Analysis Services is a fully managed platform as a service (PaaS) that provides enterprise-grade data models in the cloud. It is used for data analysis and does not track or report on Azure resource deployments.
- D. Azure Monitor action groups: Action groups define a set of actions to take when an alert is triggered. They are not used for generating reports on resource deployments.
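
A monthly deployment report can be pulled from the Activity Log with the Azure CLI. The JMESPath filter below is one possible shape and may need adjusting for your subscription's event data:

```shell
# List resource-deployment write events from the last 30 days.
az monitor activity-log list --offset 30d \
  --query "[?operationName.value=='Microsoft.Resources/deployments/write'].{time:eventTimestamp, resource:resourceId}" \
  --output table
```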

39
Q

You have an Azure VM running a Windows Server 2019 image. You plan to enable BitLocker on the VM to encrypt the system drive. Which encryption algorithm and key length should you use for BitLocker on the system drive?
A. AES-128
B. AES-256
C. XTS-AES 128
D. XTS-AES 256

A

Answer: D. XTS-AES 256

Reasoning: When enabling BitLocker on a system drive, especially in a cloud environment like Azure, it is important to choose an encryption algorithm that provides strong security. XTS-AES is a mode of AES encryption that is specifically designed for disk encryption and is considered more secure than the older CBC mode. Of the two XTS-AES options, the 256-bit key length offers a higher level of security than the 128-bit key length, so XTS-AES 256 is the most suitable choice for encrypting the system drive with BitLocker on an Azure VM.

Breakdown of non-selected options:
- A. AES-128: While AES-128 is a secure encryption algorithm, it is not as strong as AES-256. It also does not use the XTS mode, which is better suited to disk encryption.
- B. AES-256: Although AES-256 provides strong encryption, it does not use the XTS mode, which is specifically designed for disk encryption and offers additional security benefits.
- C. XTS-AES 128: XTS-AES 128 uses the XTS mode, which is suitable for disk encryption, but its shorter key length provides a lower level of security than XTS-AES 256.

40
Q

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements: ✑ Provide access to the full .NET Framework. ✑ Ensure redundancy in case an Azure region fails. ✑ Allow administrators access to the operating system to install custom application dependencies. Solution: You deploy an Azure virtual machine to each Azure region and configure Azure Traffic Manager. Does this meet the goal?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: The solution requires hosting a stateless web app with access to the full .NET Framework, redundancy across Azure regions, and administrative access to the operating system. Deploying an Azure virtual machine (VM) in each region satisfies these requirements:

  1. Full .NET Framework Access: Azure VMs can run Windows Server, which supports the full .NET Framework.
  2. Redundancy: By deploying VMs in multiple regions and using Azure Traffic Manager, which provides DNS-based traffic routing, the solution ensures redundancy and failover in case one region fails.
  3. Administrative Access: Azure VMs provide full administrative access to the operating system, allowing installation of custom application dependencies.

Breakdown of Non-Selected Answer Option:
- B. No: This option is incorrect because the proposed solution meets all the specified requirements. Deploying VMs in multiple regions with Traffic Manager ensures redundancy, and VMs provide the necessary access to the full .NET Framework and administrative control over the OS.
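
The Traffic Manager side of this solution might be sketched as follows; the profile name, DNS label, and resource ID are placeholders:

```shell
# Create a priority-routed Traffic Manager profile for region failover.
az network traffic-manager profile create --name webapp-tm \
  --resource-group my-rg --routing-method Priority \
  --unique-dns-name webapp-tm-demo

# Register the primary region's public IP as the priority-1 endpoint;
# repeat with a higher priority number for each secondary region.
az network traffic-manager endpoint create --name primary \
  --profile-name webapp-tm --resource-group my-rg \
  --type azureEndpoints --target-resource-id <publicIpResourceId> \
  --priority 1
```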

41
Q

You need to deploy a web application with persistent storage in an Azure subscription. The solution must meet these requirements: The web application must use the full .NET Framework, be hosted in a Platform as a Service (PaaS) environment, and provide persistent storage for application data. Which Azure service should you use?
A. Azure App Service
B. Azure Virtual Machines
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)

A

Answer: A. Azure App Service

Reasoning: Azure App Service is a Platform as a Service (PaaS) offering that supports hosting web applications built on the full .NET Framework. It also provides options for persistent storage through Azure Storage or other integrated services. This makes it the most suitable choice for deploying a web application with persistent storage in a PaaS environment.

Breakdown of non-selected options:
- B. Azure Virtual Machines: This is an Infrastructure as a Service (IaaS) offering, not PaaS. While it can host applications using the full .NET Framework and provide persistent storage, it does not meet the requirement of being a PaaS solution.
- C. Azure Kubernetes Service (AKS): While AKS is a managed container-orchestration offering, it is more complex and typically used for microservices architectures. It is not the most straightforward choice for hosting a web application that requires the full .NET Framework.
- D. Azure Container Instances (ACI): ACI runs containers without requiring server management, but it is not specifically designed for hosting full .NET Framework applications, and it does not inherently provide persistent storage suitable for this scenario.

42
Q

You have a large volume of structured data that needs to be stored in Azure. Which Azure service should you choose to ensure high scalability, availability, and performance?

A. Azure Blob Storage
B. Azure Data Lake Storage Gen2
C. Azure Cosmos DB
D. Azure SQL Database

A

Answer: C. Azure Cosmos DB

Reasoning: Azure Cosmos DB is a globally distributed, multi-model database service that provides high scalability, availability, and performance. It is designed to handle large volumes of structured data with low latency and offers features such as automatic scaling, global distribution, and multiple consistency models, making it highly suitable for applications requiring these capabilities.

Breakdown of non-selected options:

A. Azure Blob Storage: While Azure Blob Storage is highly scalable and available, it is primarily designed for unstructured data such as documents, images, and backups. It is not optimized for structured data and does not provide the database functionality required for structured data management.

B. Azure Data Lake Storage Gen2: This service is optimized for big data analytics and is suitable for storing large volumes of data in a hierarchical file system. However, it is not specifically designed for structured data and does not offer the database features needed for high-performance structured data operations.

D. Azure SQL Database: Azure SQL Database is a fully managed relational database service that provides high availability and performance for structured data. However, it may not offer the same level of global distribution and scalability as Azure Cosmos DB, especially for applications requiring multi-region deployments and low-latency access across the globe.

43
Q

You need to deploy a highly scalable and resilient web application in Azure. The application must meet the following requirements: It must be stateless, use a managed database service, and scale to handle large volumes of user traffic. Which Azure services should you use to achieve these requirements?
A. Azure Functions, Azure Cosmos DB
B. Azure App Service, Azure SQL Database
C. Azure Kubernetes Service, Azure Cosmos DB
D. Azure Container Instances, Azure SQL Database

A

Answer: B. Azure App Service, Azure SQL Database

Reasoning:
- The requirement is to deploy a highly scalable and resilient web application that is stateless, uses a managed database service, and can handle large volumes of user traffic.
- Azure App Service is a fully managed platform for building, deploying, and scaling web apps. It supports stateless applications and can automatically scale to handle large volumes of traffic, making it suitable for this requirement.
- Azure SQL Database is a fully managed relational database service that offers high availability, scalability, and security, which fits the requirement for a managed database.

Breakdown of non-selected options:
- A. Azure Functions, Azure Cosmos DB: Azure Functions suits stateless workloads and can scale, but it is better suited to event-driven, serverless applications than to a traditional web application. Azure Cosmos DB is a managed database service, but the combination is not the most typical for a web application.
- C. Azure Kubernetes Service, Azure Cosmos DB: Azure Kubernetes Service (AKS) can deploy and scale containerized applications, but it requires more management overhead than Azure App Service. Azure Cosmos DB is a managed database service, but the combination is more complex than necessary for a typical web application.
- D. Azure Container Instances, Azure SQL Database: Azure Container Instances can run containers but does not provide the same level of scalability and management as Azure App Service. Azure SQL Database is a managed database service, but the combination is not as well suited to a scalable web application as Azure App Service with Azure SQL Database.

44
Q

You are planning to deploy an Azure SQL Database instance. You need to ensure that you can restore the database to any point within the past hour. The solution must minimize costs. Which pricing tier should you choose?
A. General Purpose
B. Basic
C. Standard
D. Premium

A

Answer: A. General Purpose
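
Point-in-time restore itself works the same way across tiers; a restore to a moment within the past hour might look like this (server, database names, and the timestamp are placeholders):

```shell
# Restore a database to a new copy as of a specific UTC point in time.
az sql db restore --resource-group my-rg --server my-sqlserver \
  --name orders-db --dest-name orders-db-restored \
  --time "2024-06-01T10:00:00Z"
```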

45
Q

You are designing an application to store confidential medical records. The application must be highly available and offer fast read and write performance. Additionally, the data must be encrypted both at rest and in transit. Which Azure storage option should you recommend?

A. Azure Files
B. Azure Blob Storage
C. Azure Disk Storage
D. Azure Cosmos DB

A

Answer: B. Azure Blob Storage

Reasoning: Azure Blob Storage is a highly scalable and durable object storage solution suitable for storing large amounts of unstructured data, such as medical records. It offers high availability and fast read/write performance, which are key requirements for the application. Additionally, Azure Blob Storage supports encryption at rest using Azure Storage Service Encryption (SSE) and encryption in transit using HTTPS, meeting the security requirements for confidential data.

Breakdown of non-selected options:

A. Azure Files: While Azure Files provides managed file shares and supports encryption, it is typically used where file sharing is needed across multiple virtual machines. It may not offer the same level of performance and scalability as Azure Blob Storage for large-scale unstructured data.

C. Azure Disk Storage: Azure Disk Storage primarily provides persistent storage for Azure Virtual Machines. While it offers encryption and high performance, it is not optimized for storing large volumes of unstructured data such as medical records, making it less suitable for this scenario.

D. Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service. While it offers high availability and low latency, it is designed for scenarios requiring complex querying and transactional capabilities, which may not be necessary here. It may also be more costly and complex than needed for simple storage of unstructured data.
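
A storage account hardened along these lines can be sketched with the Azure CLI; names are placeholders, and encryption at rest with Microsoft-managed keys is on by default:

```shell
# HTTPS-only storage account with a minimum TLS version enforced in transit.
az storage account create --name medrecords001 --resource-group my-rg \
  --kind StorageV2 --sku Standard_ZRS \
  --https-only true --min-tls-version TLS1_2
```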

46
Q

You are designing a healthcare application that will collect real-time patient data from various IoT devices. The application requires a database solution that can scale horizontally, support JSON documents, and provide low-latency reads and writes. Which database solution should you recommend?
A. Azure Cosmos DB with MongoDB API
B. Azure SQL Database with JSON support
C. Azure Database for PostgreSQL with Hyperscale
D. Azure Time Series Insights

A

Answer: A. Azure Cosmos DB with MongoDB API

Reasoning:
The requirements for the database solution include the ability to scale horizontally, support JSON documents, and provide low-latency reads and writes. Azure Cosmos DB with the MongoDB API is designed to meet these requirements. It is a globally distributed, multi-model database service that natively supports JSON documents and offers horizontal scaling with low-latency access to data. The MongoDB API also provides compatibility with existing MongoDB applications, making it a suitable choice for applications that require JSON document support.

Breakdown of non-selected options:
- B. Azure SQL Database with JSON support: While Azure SQL Database does support JSON, it is a relational database and does not inherently provide the same level of horizontal scalability and low-latency reads and writes as Azure Cosmos DB. It is better suited to structured data and traditional relational use cases.

- C. Azure Database for PostgreSQL with Hyperscale: Although Hyperscale (Citus) provides horizontal scaling for PostgreSQL, it is primarily designed for relational data and does not natively support JSON documents in the same way as a NoSQL database like Cosmos DB. It may not deliver the same low-latency performance for JSON document workloads.
- D. Azure Time Series Insights: This is a fully managed analytics, storage, and visualization service for IoT-scale time-series data. It is not a general-purpose database and is specifically designed for time-series analysis, making it unsuitable for the broader requirements of the healthcare application described.
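
Provisioning a Cosmos DB account that exposes the MongoDB API might look like this; the account name, server version, and consistency level are illustrative:

```shell
# Cosmos DB account exposing the MongoDB wire protocol.
az cosmosdb create --name patient-telemetry --resource-group my-rg \
  --kind MongoDB --server-version 4.2 \
  --default-consistency-level Session
```
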
47
Q

You are designing an Azure solution that requires encryption of data at rest for a highly sensitive database. You plan to use Azure SQL Database. Which encryption algorithm and key length should you choose for Transparent Data Encryption (TDE)?

A. RSA 2048
B. AES 128
C. AES 256
D. RSA 3072

A

Answer: C. AES 256

Reasoning: Azure SQL Database uses Transparent Data Encryption (TDE) to encrypt data at rest, and TDE uses the AES encryption algorithm. Among the options provided, AES 256 is the most suitable choice because it offers a higher level of security than AES 128 due to its longer key length. AES 256 is a standard choice for encrypting sensitive data, providing a strong balance between security and performance.

Breakdown of non-selected options:
- A. RSA 2048: RSA is an asymmetric encryption algorithm, which is not typically used for encrypting data at rest because of its computational cost and inefficiency for large data volumes. TDE uses symmetric encryption, making RSA unsuitable for this purpose.
- B. AES 128: While AES 128 is a valid encryption algorithm for TDE, AES 256 provides a higher level of security due to its longer key length, making it the better choice for highly sensitive data.
- D. RSA 3072: Like RSA 2048, RSA 3072 is an asymmetric encryption algorithm and is not used for encrypting data at rest in TDE. Symmetric encryption such as AES is preferred for this purpose.
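
TDE is enabled by default on Azure SQL Database; its state can be checked or set from the CLI (server and database names are placeholders):

```shell
# Show / enable Transparent Data Encryption on a database.
az sql db tde show --resource-group my-rg --server my-sqlserver \
  --database sensitive-db
az sql db tde set --resource-group my-rg --server my-sqlserver \
  --database sensitive-db --status Enabled
```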

48
Q

You are designing a SQL database solution for an e-commerce company that needs to store and process customer orders and inventory data. The company requires data replication to a disaster recovery site for high availability. The solution must meet a Service Level Agreement (SLA) of 99.99% uptime and have reserved capacity, while minimizing compute charges. Which database platform should you recommend?
A. Azure SQL Database vCore
B. Azure SQL Database Managed Instance
C. Azure SQL Database Hyperscale
D. Azure SQL Database Zone-redundant configuration

A

Answer: B. Azure SQL Database Managed Instance

Reasoning: Azure SQL Database Managed Instance is the most suitable option for this scenario because it provides a fully managed SQL Server instance with built-in high availability and disaster recovery capabilities. It supports data replication to a disaster recovery site, which is essential for the company's high-availability requirement, and it offers a 99.99% uptime SLA. Additionally, it supports reserved capacity pricing, which can minimize compute charges over time.

Breakdown of non-selected options:
A. Azure SQL Database vCore: While it offers flexibility in compute and storage resources, it does not inherently provide the same level of built-in high availability and disaster recovery features as Managed Instance. It may require additional configuration and resources to meet the disaster recovery requirements.

C. Azure SQL Database Hyperscale: This option is designed for very large databases and provides high scalability, but it may not be necessary unless the company has extremely large data volumes. It also does not address the disaster recovery requirement as directly as Managed Instance.

D. Azure SQL Database Zone-redundant configuration: This configuration provides high availability within a single region by distributing replicas across availability zones. However, it does not inherently provide cross-region disaster recovery, which is a key requirement for the company.
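
Cross-region replication for Managed Instance is typically configured with an auto-failover group. A rough sketch follows; the names are placeholders, both instances and their networking must already exist, and flags can vary by CLI version:

```shell
# Pair two managed instances in different regions for automatic failover.
az sql instance-failover-group create --name orders-fg \
  --resource-group my-rg --source-mi primary-mi \
  --partner-resource-group partner-rg --partner-mi secondary-mi \
  --failover-policy Automatic --grace-period 1
```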

49
Q

You plan to deploy a containerized application to Azure Kubernetes Service (AKS). The application is critical to the business and must be highly available. The application deployment must meet the following requirements: ✑ Ensure that the application remains available if a single AKS node fails. ✑ Ensure that the internet traffic is encrypted using SSL without configuring SSL on each container. Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer

A

Answer: C. AKS ingress controller

Reasoning:
To ensure high availability and SSL termination for a containerized application on Azure Kubernetes Service (AKS), an AKS ingress controller is the most suitable option. An ingress controller manages external access to the services in a cluster, typically HTTP, and can provide SSL termination, meaning it handles SSL encryption and decryption and offloads this task from the individual containers. This aligns with the requirement to encrypt internet traffic using SSL without configuring SSL on each container.

Breakdown of non-selected options:
- A. Azure Front Door: While Azure Front Door can provide SSL termination and global load balancing, it is better suited to global routing and web-application acceleration. It is not directly integrated with AKS for managing traffic and high availability within a Kubernetes cluster.
- B. Azure Traffic Manager: This service provides DNS-based traffic routing and is not suitable for SSL termination or managing traffic within an AKS cluster. It is more appropriate for distributing traffic across multiple regions or endpoints.
- D. Azure Load Balancer: This service provides Layer 4 (TCP/UDP) load balancing and does not handle SSL termination, so it is not suitable for managing HTTP/HTTPS traffic or terminating SSL for AKS applications.
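
One common shape for this is an NGINX ingress controller terminating TLS in front of plain-HTTP pods. The host, secret, and service names below are placeholders, and the TLS secret is assumed to exist already:

```shell
# Install an ingress controller into the cluster (Helm assumed available).
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx

# Ingress resource: TLS terminates at the controller, not in each container.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - webapp.example.com
      secretName: webapp-tls
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-svc
                port:
                  number: 80
EOF
```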

50
Q

You are designing a web application that requires a database backend. The application will have a high number of concurrent users and must support complex queries. Which Azure database service should you select?

A. Azure SQL Database
B. Azure Database for MySQL
C. Azure Cosmos DB

A

Answer: A. Azure SQL Database

Reasoning: Azure SQL Database is a fully managed relational database service that is highly suitable for applications requiring complex queries and a high number of concurrent users. It supports advanced querying capabilities, including complex joins, stored procedures, and full-text search, making it ideal for applications with complex query requirements. Additionally, Azure SQL Database offers scalability and performance features that can handle a high number of concurrent users effectively.

Breakdown of non-selected options:

B. Azure Database for MySQL: While Azure Database for MySQL is a managed relational database service, it is typically chosen by applications already using MySQL or requiring specific MySQL features. It can handle complex queries, but Azure SQL Database is generally better optimized for high-concurrency, complex-query scenarios in the Azure ecosystem.

C. Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service designed for high availability and low latency. It excels at global distribution and horizontal scaling, but it is not primarily optimized for complex relational queries; it is better suited to NoSQL workloads requiring flexible schemas and high throughput.

51
Q

You have 100 Microsoft SQL Server Integration Services (SSIS) packages configured to use 10 on-premises SQL Server databases as their destinations. You plan to migrate these 10 on-premises databases to Azure SQL Database. You need to recommend a solution to create Azure SQL Server Integration Services (SSIS) packages. The solution must ensure that the packages can target the SQL Database instances as their destinations. What should you include in the recommendation?

A. Data Migration Assistant (DMA)
B. Azure Data Factory
C. Azure Data Catalog
D. SQL Server Migration Assistant (SSMA)

A

Answer: B. Azure Data Factory

Reasoning: Azure Data Factory (ADF) is a cloud-based data integration service that allows you to create data-driven workflows for orchestrating and automating data movement and data transformation. It supports running SSIS packages in the cloud through the Azure-SSIS Integration Runtime, which makes it suitable for migrating and running existing SSIS packages targeting Azure SQL Database. ADF provides the capability to lift and shift existing SSIS packages to Azure, ensuring they can target Azure SQL Database instances as their destinations.

Breakdown of non-selected options:

A. Data Migration Assistant (DMA): DMA is primarily used for assessing and migrating on-premises SQL Server databases to Azure SQL Database. It helps identify compatibility issues and provides recommendations for migration. However, it is not used for creating or running SSIS packages.

C. Azure Data Catalog: Azure Data Catalog is an enterprise-wide metadata catalog that enables self-service data asset discovery. It is not used for creating or running SSIS packages, nor does it facilitate data migration or integration tasks.

D. SQL Server Migration Assistant (SSMA): SSMA is a tool designed to automate the migration of database schemas and data from various database platforms to SQL Server or Azure SQL Database. While it assists in database migration, it does not handle SSIS package creation or execution.

52
Q

You need to deploy resources to host a stateful web app in an Azure subscription. The solution must meet the following requirements: ✑ Ensure high availability of the database. ✑ Provide access to the .NET Core runtime environment. ✑ Allow administrators to manage the database. Solution: You deploy an Azure SQL Database with geo-replication and an Azure Virtual Machine running the .NET Core runtime environment. You grant the necessary permissions to administrators. Does this solution meet the requirements?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: The solution involves deploying an Azure SQL Database with geo-replication and an Azure Virtual Machine running the .NET Core runtime environment. This setup meets the requirements as follows:

  • High availability of the database is ensured through Azure SQL Database with geo-replication, which provides redundancy and failover capabilities.
  • The .NET Core runtime environment is provided by the Azure Virtual Machine, which can be configured to run .NET Core applications.
  • Administrators can manage the database through Azure SQL Database, which offers various management tools and permissions settings.

Breakdown of non-selected answer option:
B. No - This option is not selected because the proposed solution does meet all the specified requirements: high availability, .NET Core runtime, and database management capabilities.

53
Q

You are planning to deploy an app that will utilize an Azure Storage account. You need to deploy a storage account that meets the following requirements: ✑ Store data for multiple users. ✑ Encrypt each user’s data with a separate key. ✑ Encrypt all data in the storage account using customer-managed keys. What should you deploy?

A. Blobs in a general-purpose v2 storage account
B. Files in a premium file share storage account
C. Blobs in an Azure Data Lake Storage Gen2 account
D. Files in a general-purpose v2 storage account

A

Answer: A. Blobs in a general-purpose v2 storage account

Reasoning:
The requirements specify that the storage account must store data for multiple users, encrypt each user's data with a separate key, and use customer-managed keys for encryption. These requirements map to Azure Storage encryption scopes, which allow different blobs within the same account to be encrypted with different keys, including customer-managed keys held in Azure Key Vault. Encryption scopes are a Blob Storage feature of general-purpose v2 accounts, so creating one encryption scope per user satisfies the per-user key requirement while keeping all data under customer-managed keys.

Breakdown of non-selected options:
B. Files in a premium file share storage account - Azure Files does not support encryption scopes, so it cannot encrypt each user's data with a separate key.

C. Blobs in an Azure Data Lake Storage Gen2 account - Accounts with a hierarchical namespace have limited support for encryption scopes, so blobs in a standard general-purpose v2 account are the better fit for per-user customer-managed keys.

D. Files in a general-purpose v2 storage account - Like option B, Azure Files does not support encryption scopes, which are required to give each user a separate encryption key.
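The per-user key requirement is typically met with Azure Storage encryption scopes, where each scope can be backed by its own customer-managed key. Below is a minimal sketch of one hypothetical naming convention; the container name, scope-naming scheme, and helper functions are illustrative assumptions, not an Azure API:

```python
# Illustrative only: a per-user encryption-scope naming scheme.
# In Azure Blob Storage, each encryption scope can be backed by a separate
# customer-managed key, and a blob can be pinned to a scope at upload time.

def scope_name_for_user(user_id: str) -> str:
    """Build a per-user encryption scope name (hypothetical convention)."""
    return f"scope-{user_id.lower()}"

def blob_upload_plan(user_id: str, blob_path: str) -> dict:
    """Describe an upload that pins the blob to the user's encryption scope."""
    return {
        "container": "userdata",            # hypothetical container name
        "blob": f"{user_id}/{blob_path}",   # keep each user's data under a prefix
        "encryption_scope": scope_name_for_user(user_id),
    }

plan = blob_upload_plan("Alice42", "reports/q1.csv")
print(plan["encryption_scope"])  # scope-alice42
```

The real upload would pass the scope name to the storage SDK or REST call; the point here is only that one scope (and hence one key) exists per user.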

54
Q

Your company deploys several virtual machines both on-premises and in Azure. ExpressRoute is set up and configured for connectivity between on-premises and Azure. Some virtual machines are experiencing network connectivity issues. You need to analyze the network traffic to determine if packets are being allowed or denied to the virtual machines. Solution: Use Network Performance Monitor in Azure Network Watcher to analyze the network traffic. Does this meet the goal?
A. Yes
B. No

A

Answer: B. No

Reasoning: Network Performance Monitor in Azure Network Watcher measures network performance, detecting packet loss, latency, and reachability issues between on-premises and Azure endpoints. It does not report whether packets are being allowed or denied by network security group (NSG) rules, which is what the scenario requires. The appropriate Network Watcher capability for that is IP flow verify (or NSG flow logs), so this solution does not meet the goal.

Breakdown of non-selected answer option:
A. Yes - This option is incorrect because Network Performance Monitor diagnoses performance and connectivity degradation rather than security-rule decisions. It cannot show allow/deny results for packets reaching the virtual machines, so the proposed solution does not meet the goal.

55
Q

You are designing an Azure environment for a large enterprise and need to ensure that all Azure resources comply with the organization’s policies. Which Azure Policy scope should you use to achieve this goal?
A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. Subscriptions
D. Compute resources
E. Resource groups
F. Management groups

A

Answer: F. Management groups

Reasoning:
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements. When designing an Azure environment for a large enterprise, it is important to ensure that all resources comply with organizational policies. Management groups in Azure provide a way to manage access, policies, and compliance across multiple subscriptions. They allow you to apply policies at a higher level than individual subscriptions, which is ideal for large enterprises with multiple subscriptions.

Breakdown of non-selected options:
A. Azure Active Directory (Azure AD) administrative units - These are used to delegate administrative permissions within Azure AD, not for applying policies to Azure resources.
B. Azure Active Directory (Azure AD) tenants - Tenants are instances of Azure AD and are not used for applying Azure Policy. They are more about identity management.
C. Subscriptions - While you can apply policies at the subscription level, management groups allow for broader policy application across multiple subscriptions, which is more suitable for large enterprises.
D. Compute resources - This is too granular for applying organizational policies across an enterprise. Policies should be applied at a higher level.
E. Resource groups - These are used to manage resources within a subscription, but applying policies at this level would not ensure compliance across the entire organization. Management groups provide a broader scope.
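The nesting of Azure Policy scopes can be made concrete with the documented ARM resource-ID formats. A short sketch follows; the ID formats are the real ones, but the management group name, subscription GUID, and resource group name are placeholders:

```python
# Azure Policy assignment scopes, broadest to narrowest, using the ARM
# resource-ID formats. A policy assigned at the management-group scope is
# inherited by every subscription and resource group beneath it.

def management_group_scope(mg_id: str) -> str:
    return f"/providers/Microsoft.Management/managementGroups/{mg_id}"

def subscription_scope(sub_id: str) -> str:
    return f"/subscriptions/{sub_id}"

def resource_group_scope(sub_id: str, rg: str) -> str:
    return f"/subscriptions/{sub_id}/resourceGroups/{rg}"

# Placeholder names/GUIDs, broadest first:
scopes = [
    management_group_scope("contoso-root"),
    subscription_scope("00000000-0000-0000-0000-000000000000"),
    resource_group_scope("00000000-0000-0000-0000-000000000000", "rg-prod"),
]
for s in scopes:
    print(s)
```

Assigning an allowed-locations policy once at the management-group scope covers every subscription below it, which is why that scope fits a large enterprise.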

56
Q

You need to deploy a web application with multiple dependencies on the operating system in an Azure subscription. The solution must meet these requirements: The web application must have access to the full .NET framework; be hosted in a virtual machine (VM); and provide redundancy across multiple Azure regions. Which Azure service should you use?
A. Azure Virtual Machines
B. Azure App Service
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)

A

Answer: A. Azure Virtual Machines

Reasoning: The question specifies that the web application must have access to the full .NET Framework, be hosted in a virtual machine, and provide redundancy across multiple Azure regions. Azure Virtual Machines (VMs) are the most suitable option because they allow full control over the operating system and can run the full .NET Framework. Additionally, VMs can be deployed across multiple regions to provide redundancy.

Breakdown of non-selected options:
- B. Azure App Service: While Azure App Service supports .NET applications, it does not provide the same level of control over the operating system as a VM does. It is a Platform as a Service (PaaS) offering, which may not meet the requirement for full .NET Framework access and specific OS dependencies.
- C. Azure Kubernetes Service (AKS): AKS is primarily used for containerized applications and may not be the best fit for applications requiring the full .NET Framework and specific OS dependencies. It also adds complexity if the application is not already containerized.
- D. Azure Container Instances (ACI): Similar to AKS, ACI is used for running containers and does not provide the full control over the operating system needed for applications with specific OS dependencies and full .NET Framework requirements.

57
Q

You have an Azure AD tenant with a security group named Group1. Group1 is set up for assigned memberships and includes several members, including guest users. You need to ensure that Group1 is reviewed monthly to identify members who no longer need access. Additionally, ensure that any members removed from the group are added to another security group. What solution should you recommend?
A. Implement Azure AD Identity Protection.
B. Change the membership type of Group1 to Dynamic User.
C. Create an access review for Group1 and configure a post-review action to add removed members to another security group.
D. Implement Azure AD Privileged Identity Management (PIM).

A

Answer: C. Create an access review for Group1 and configure a post-review action to add removed members to another security group.

Reasoning:
The requirement is to review the membership of Group1 monthly and ensure that any members removed from the group are added to another security group. Azure AD Access Reviews are specifically designed for this purpose. They allow you to periodically review group memberships and can be configured to take specific actions after the review, such as adding removed members to another group. This makes option C the most suitable solution.

Breakdown of non-selected options:
A. Implement Azure AD Identity Protection: This service is primarily focused on identifying and responding to potential identity risks and does not provide functionality for reviewing group memberships or managing post-review actions.

B. Change the membership type of Group1 to Dynamic User: Changing the membership type to Dynamic User would automatically manage group membership based on user attributes, but it does not provide a mechanism for periodic reviews or handling post-review actions like moving users to another group.

D. Implement Azure AD Privileged Identity Management (PIM): PIM is used to manage, control, and monitor access within Azure AD, Azure, and other Microsoft Online Services. It is more focused on managing privileged roles and does not directly address the requirement for periodic group membership reviews or post-review actions.

58
Q

You are designing an application that needs to store and process large volumes of structured data, such as customer orders and inventory records. The application must be highly available and offer fast read and write performance. Which Azure storage option should you recommend?
A. Azure Cosmos DB
B. Azure Table Storage
C. Azure SQL Database
D. Azure Disk Storage

A

Answer: C. Azure SQL Database

Reasoning: Azure SQL Database is a fully managed relational database service that provides high availability, scalability, and fast read/write performance, making it suitable for applications that need to store and process large volumes of structured data like customer orders and inventory records. It supports complex queries and transactions, which are often required for structured data processing.

Breakdown of non-selected options:
- A. Azure Cosmos DB: While Cosmos DB offers high availability and fast performance, it is more suited for globally distributed applications and unstructured or semi-structured data. It is not the best fit for structured data that requires complex querying and transactional support.
- B. Azure Table Storage: This is a NoSQL key-value store that is highly scalable and cost-effective but lacks the advanced querying capabilities and transactional support needed for structured data processing.
- D. Azure Disk Storage: This is primarily used for virtual machine storage and does not provide the database capabilities required for processing structured data like customer orders and inventory records.
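The card's point, that a relational engine handles joins and aggregation over orders and inventory naturally, can be shown with any SQL engine; the sketch below uses Python's built-in sqlite3 as a local stand-in for Azure SQL Database (the tables and data are made up):

```python
# Local illustration of the relational pattern a service like Azure SQL
# Database supports: a join plus aggregation across orders and inventory.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inventory (sku TEXT PRIMARY KEY, stock INTEGER);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER);
    INSERT INTO inventory VALUES ('widget', 100), ('gadget', 5);
    INSERT INTO orders VALUES (1, 'widget', 30), (2, 'widget', 20), (3, 'gadget', 2);
""")

# Remaining stock per SKU after subtracting all ordered quantities.
rows = conn.execute("""
    SELECT i.sku, i.stock - COALESCE(SUM(o.qty), 0) AS remaining
    FROM inventory AS i
    LEFT JOIN orders AS o ON o.sku = i.sku
    GROUP BY i.sku
    ORDER BY i.sku
""").fetchall()
print(rows)  # [('gadget', 3), ('widget', 50)]
```

Key-value stores like Azure Table Storage cannot express this join/aggregate in a single server-side query, which is the card's argument for a relational service.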

59
Q

You need to recommend an Azure Storage solution that meets the following requirements:
- The storage must support 1 PB of data.
- The data must be stored in blob storage.
- The storage must support three levels of subfolders.
- The storage must support access control lists (ACLs).

What should you include in the recommendation?
A. A premium storage account configured for block blobs
B. A general-purpose v2 storage account with hierarchical namespace enabled
C. A premium storage account configured for page blobs
D. A premium storage account configured for file shares and supports large file shares

A

Answer: B. A general-purpose v2 storage account with hierarchical namespace enabled

Reasoning:
- The requirement specifies the need for blob storage that supports 1 PB of data, three levels of subfolders, and access control lists (ACLs).
- Azure Blob Storage with a hierarchical namespace enabled (also known as Azure Data Lake Storage Gen2) supports these requirements. It allows for organizing data into a hierarchy of directories and subdirectories, which satisfies the need for three levels of subfolders.
- Additionally, it supports ACLs, which are necessary for fine-grained access control.

Breakdown of non-selected options:
- A. A premium storage account configured for block blobs: While this option supports blob storage, it does not inherently support hierarchical namespaces or ACLs, which are required for organizing data into subfolders and managing access control.
- C. A premium storage account configured for page blobs: Page blobs are typically used for scenarios like virtual hard disks (VHDs) and do not support hierarchical namespaces or ACLs for blob storage.
- D. A premium storage account configured for file shares and supports large file shares: This option is related to Azure Files, which is different from blob storage and does not meet the requirement for blob storage with hierarchical namespaces and ACLs.
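With a hierarchical namespace enabled, a blob path behaves like real nested directories, each of which can carry a POSIX-style ACL. A minimal sketch, where the path and principal are made up and the `scope:principal:permissions` text form mirrors how ADLS Gen2 expresses ACL entries:

```python
# Sketch: three directory levels under a hierarchical namespace, plus a
# POSIX-style ACL entry in the "<scope>:<principal>:<perms>" text form.
from pathlib import PurePosixPath

path = PurePosixPath("raw/2024/q1/sales.csv")   # three directory levels + file
directories = list(path.parents)[:-1]            # drop the trailing root '.'
print([str(d) for d in directories])             # ['raw/2024/q1', 'raw/2024', 'raw']

acl_entry = "user:alice@contoso.com:r-x"         # hypothetical principal
scope, principal, perms = acl_entry.split(":")
print(scope, perms)                              # user r-x
```

In a flat (non-hierarchical) blob account the same name is just one long key, so there are no directories to attach ACLs to, which is why option B is required.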

60
Q

You have an app named App1 that uses an on-premises MySQL database named DB1. You plan to migrate DB1 to an Azure Database for MySQL. You need to enable customer-managed Transparent Data Encryption (TDE) for the database. The solution must maximize encryption strength. Which encryption algorithm and key length should you use for the TDE protector?

A. AES 192
B. RSA 2048
C. AES 128
D. RSA 3072

A

Answer: D. RSA 3072

Reasoning:
To enable customer-managed Transparent Data Encryption (TDE) for an Azure Database for MySQL, you need to choose an encryption algorithm and key length that maximizes encryption strength. RSA 3072 is a strong encryption algorithm with a longer key length than RSA 2048, providing enhanced security. AES options are not suitable here because the question specifically asks for the TDE protector, which typically uses RSA keys for key encryption.

Breakdown of non-selected options:
- A. AES 192: AES is a symmetric encryption algorithm, and while it is strong, it is not typically used for TDE protectors, which require asymmetric encryption like RSA.
- B. RSA 2048: While RSA 2048 is a common choice for encryption, RSA 3072 offers a higher level of security due to its longer key length.
- C. AES 128: Similar to AES 192, AES 128 is a symmetric encryption algorithm and is not used for TDE protectors. Additionally, it offers less encryption strength compared to AES 192.
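For context, the NIST SP 800-57 comparable security strengths show why RSA 3072 is stronger than RSA 2048, and why RSA and AES key lengths cannot be compared bit for bit:

```python
# NIST SP 800-57 comparable security strengths (in bits) for the key types
# in this card. Symmetric AES strength equals its key length; RSA needs a
# much longer key for the same effective strength.
SECURITY_STRENGTH_BITS = {
    "RSA 2048": 112,
    "RSA 3072": 128,
    "AES 128": 128,
    "AES 192": 192,
    "AES 256": 256,
}

strongest_rsa = max(
    (k for k in SECURITY_STRENGTH_BITS if k.startswith("RSA")),
    key=SECURITY_STRENGTH_BITS.get,
)
print(strongest_rsa)  # RSA 3072
```

So among the RSA options offered, RSA 3072 (roughly 128-bit strength) beats RSA 2048 (roughly 112-bit strength).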

61
Q

Note: This question is part of a series that presents the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one solution, while others might not have a solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases. The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region. You need to recommend a solution to meet the regulatory requirement. Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups. Does this meet the goal?

A. Yes
B. No

A

Answer: B. No

Reasoning: The proposed solution suggests creating resource groups based on locations and implementing resource locks on the resource groups. While creating resource groups based on locations can help organize resources by region, it does not enforce the deployment of App Service instances and Azure SQL databases to specific regions: a resource group's location only determines where its metadata is stored, and resources within it can be deployed to any region. Resource locks prevent accidental deletion or modification of resources, but they do not enforce regional deployment requirements. An Azure Policy assignment that restricts allowed locations would enforce the requirement. Therefore, this solution does not meet the regulatory requirement.

Breakdown of non-selected answer option:
- A. Yes: This option is incorrect because the proposed solution does not enforce the deployment of resources to specific regions. Creating resource groups based on locations is an organizational strategy, and resource locks prevent changes rather than enforcing regional deployment. Therefore, the solution does not meet the stated goal of ensuring compliance with the regulatory requirement.

62
Q

You need to design a highly available Azure Cosmos DB database that meets the following requirements: ✑ Replication must occur across multiple regions. ✑ Read and write latency must be minimized. ✑ Costs must be minimized. Which deployment option should you choose?

A. Cosmos DB global distribution
B. Cosmos DB account failover
C. Cosmos DB multi-master replication
D. Cosmos DB direct connectivity

A

Answer: C. Cosmos DB multi-master replication

Reasoning:
- Cosmos DB multi-master replication is the most suitable option for the given requirements. It allows for replication across multiple regions, which ensures high availability. Additionally, it minimizes read and write latency by allowing writes to be performed in any region, thus reducing the distance data must travel. This setup can also help minimize costs by optimizing performance and reducing the need for additional resources to handle latency issues.

Breakdown of non-selected options:
- A. Cosmos DB global distribution: While this option supports replication across multiple regions, it does not inherently minimize write latency as effectively as multi-master replication. Global distribution typically involves a single write region, which can lead to higher latency for write operations if the write region is far from the user.

- B. Cosmos DB account failover: This option is primarily focused on providing high availability through failover mechanisms rather than optimizing latency. It does not address the requirement to minimize read and write latency across multiple regions.
- D. Cosmos DB direct connectivity: This option pertains to the network connectivity model and does not directly address the requirements of multi-region replication or latency minimization. It is more about how clients connect to the Cosmos DB service rather than the replication strategy.
63
Q

You plan to deploy a web application to Azure App Service; which will utilize Azure Blob Storage for file storage. You need to ensure that the files stored in Azure Blob Storage are encrypted. What should you do?

A. Use server-side encryption with customer-managed keys (CMK).
B. Use server-side encryption with platform-managed keys (PMK).
C. Use client-side encryption with customer-managed keys (CMK).
D. Use client-side encryption with platform-managed keys (PMK).

A

Answer: B. Use server-side encryption with platform-managed keys (PMK).

Reasoning: Azure Blob Storage provides server-side encryption by default, which ensures that all data is encrypted at rest. The simplest and most straightforward way to ensure encryption of files stored in Azure Blob Storage is to use server-side encryption with platform-managed keys (PMK). This option is automatically enabled and managed by Azure, providing a balance of security and ease of use without requiring additional configuration or management overhead from the user.

Breakdown of non-selected options:

A. Use server-side encryption with customer-managed keys (CMK).
- While this option provides more control over the encryption keys, it requires additional management and configuration. The question does not specify a need for customer-managed keys, making PMK a more suitable choice for simplicity and ease of use.

C. Use client-side encryption with customer-managed keys (CMK).
- Client-side encryption requires the application to handle encryption and decryption, adding complexity to the application. The question does not indicate a need for this level of control or complexity, making server-side encryption a more appropriate choice.

D. Use client-side encryption with platform-managed keys (PMK).
- Similar to option C, client-side encryption adds unnecessary complexity to the application. Additionally, platform-managed keys are typically associated with server-side encryption, not client-side. Therefore, this option is not suitable for the requirements stated in the question.

64
Q

You need to deploy a highly available web application on Azure. The solution must meet the following requirements: use a managed database service; scale to handle large volumes of user traffic; and be highly available across multiple regions. Which Azure services should you use to achieve these requirements?

A. Azure Virtual Machines; Azure Traffic Manager; Azure SQL Database
B. Azure App Service; Azure Front Door; Azure SQL Database
C. Azure Kubernetes Service; Azure Traffic Manager; Azure Cosmos DB
D. Azure App Service; Azure Traffic Manager; Azure Cosmos DB

A

Answer: B. Azure App Service; Azure Front Door; Azure SQL Database

Reasoning:
- The requirement is to deploy a highly available web application that can scale to handle large volumes of user traffic and use a managed database service. It must also be highly available across multiple regions.
- Azure App Service is a fully managed platform for building, deploying, and scaling web apps, which fits the requirement for a scalable and managed service.
- Azure Front Door provides global load balancing and application acceleration, which ensures high availability and performance across multiple regions.
- Azure SQL Database is a fully managed relational database service that offers high availability and scalability, meeting the requirement for a managed database service.

Breakdown of non-selected options:
- A. Azure Virtual Machines; Azure Traffic Manager; Azure SQL Database: While Azure Traffic Manager can provide global load balancing, using Azure Virtual Machines requires more management overhead compared to Azure App Service. Azure SQL Database is suitable, but the overall solution is less managed and scalable compared to option B.
- C. Azure Kubernetes Service; Azure Traffic Manager; Azure Cosmos DB: Azure Kubernetes Service is a good choice for containerized applications but requires more management than Azure App Service. Azure Cosmos DB is a globally distributed database service, which is suitable, but the combination is more complex than necessary for a web application.
- D. Azure App Service; Azure Traffic Manager; Azure Cosmos DB: Azure App Service is suitable, but Azure Traffic Manager is DNS-based and does not provide the same level of global load balancing and acceleration as Azure Front Door. Azure Cosmos DB is a NoSQL database, which may not be the best fit if a relational database is needed.

65
Q

Your company has 300 virtual machines hosted in a VMware environment. These virtual machines vary in size and utilization levels. You plan to migrate all the virtual machines to Azure. You need to recommend the number and size of Azure virtual machines required to accommodate the current workloads in Azure, while minimizing administrative effort. What tool should you use to make this recommendation?

A. Azure Pricing Calculator
B. Azure Advisor
C. Azure Migrate
D. Azure Cost Management

A

Answer: C. Azure Migrate

Reasoning: Azure Migrate is the most suitable tool for this scenario as it is specifically designed to assist with the migration of on-premises workloads to Azure. It provides a comprehensive assessment of your current environment, including the number and size of virtual machines needed in Azure to accommodate your workloads. It also helps minimize administrative effort by providing insights and recommendations for the migration process.

Breakdown of non-selected options:

A. Azure Pricing Calculator: This tool is primarily used for estimating the cost of Azure services based on your configuration. It does not provide recommendations on the number and size of virtual machines needed for migration.

B. Azure Advisor: This tool provides best practice recommendations to optimize your Azure resources for high availability, security, performance, and cost. However, it does not specifically assist with planning the migration of virtual machines from a VMware environment to Azure.

D. Azure Cost Management: This tool is used for monitoring and managing Azure spending and resource usage. It does not provide recommendations for the migration of virtual machines or the sizing of Azure resources.

66
Q

You need to deploy a stateless web app to Azure that will be accessed by users globally. The solution must meet the following requirements: provide low latency and high availability; automatically scale up or down based on demand; and ensure that the web app can be deployed to any Azure region. Which Azure service or services should you use?
A. Azure App Service
B. Azure Virtual Machines
C. Azure Kubernetes Service
D. Azure Container Instances

A

Answer: A. Azure App Service

Reasoning: Azure App Service is a fully managed platform for building, deploying, and scaling web apps. It provides built-in load balancing and autoscaling, which ensures low latency and high availability. It also supports deployment to any Azure region, making it suitable for a globally accessed web app. Additionally, it is designed to automatically scale up or down based on demand, meeting all the requirements specified in the question.

Breakdown of non-selected options:
- B. Azure Virtual Machines: While Azure Virtual Machines can be used to host web apps, they require more management overhead, including setting up load balancing and scaling. They do not automatically scale up or down based on demand without additional configuration, making them less suitable for this scenario.
- C. Azure Kubernetes Service: Azure Kubernetes Service (AKS) is a powerful option for container orchestration and can meet the requirements, but it is more complex to manage compared to Azure App Service. It requires more expertise to set up and manage scaling and availability, making it less straightforward for deploying a simple stateless web app.
- D. Azure Container Instances: Azure Container Instances are suitable for running containers without managing servers, but they do not provide built-in autoscaling or load balancing. They are more suited for simple, isolated container deployments rather than a globally accessed web app requiring high availability and low latency.

67
Q

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements: Provide access to the full .NET Framework; ensure redundancy in case an Azure region fails; and allow administrators access to the operating system to install custom application dependencies. Solution: You deploy an Azure App Service plan with two instances across two Azure regions and use Azure Traffic Manager to load-balance traffic between the instances. Does this meet the goal?
A. Yes
B. No

A

Answer: B. No

Reasoning: The solution described does not meet all the requirements specified in the question. Deploying an Azure App Service plan with two instances across two Azure regions and using Azure Traffic Manager provides redundancy and load balancing, and App Service on Windows can host apps that use the full .NET Framework. However, Azure App Service is a managed platform that does not give administrators access to the underlying operating system, so they cannot install custom application dependencies. That requirement is not met.

Breakdown of non-selected answer option:
A. Yes - This option is incorrect because, although the solution provides redundancy and load balancing, it does not allow administrators access to the operating system to install custom application dependencies, which is a key requirement in the question.

68
Q

You have an app named App1 that uses an on-premises MongoDB database called DB1. You plan to migrate DB1 to an Azure Cosmos DB account. You need to enable customer-managed Transparent Data Encryption (TDE) for the database. The solution must maximize encryption strength. Which encryption algorithm and key length should you use for the TDE protector?
A. RSA 4096
B. AES 192
C. RSA 3072
D. AES 256

A

Answer: D. AES 256

Reasoning: Azure Cosmos DB supports customer-managed keys for Transparent Data Encryption (TDE), and when maximizing encryption strength, AES (Advanced Encryption Standard) is generally preferred over RSA for data encryption due to its efficiency and strength. AES 256 is considered one of the strongest encryption algorithms available, providing a high level of security. RSA is typically used for encrypting small amounts of data, such as encryption keys, rather than large datasets. Therefore, AES 256 is the most suitable choice for maximizing encryption strength in this context.

Breakdown of non-selected options:
- A. RSA 4096: While RSA 4096 provides strong encryption, it is not typically used for encrypting large datasets because it is computationally intensive and inefficient compared to AES. RSA is more suitable for encrypting keys than data.
- B. AES 192: AES 192 is a strong encryption algorithm, but AES 256 offers a higher level of security due to its longer key length, making it the better choice for maximizing encryption strength.
- C. RSA 3072: Like RSA 4096, RSA 3072 is not ideal for encrypting large datasets. AES is more efficient and suitable for data encryption, and AES 256 provides stronger encryption than RSA 3072.

69
Q

You have an Azure subscription. You need to recommend a solution that allows developers to provision Azure virtual machines. The solution must meet the following requirements: ✑ Only permit the creation of virtual machines in specific regions. ✑ Only permit the creation of specific sizes of virtual machines. What should you include in the recommendation?
A. Azure Resource Manager (ARM) templates
B. Azure Policy
C. Conditional Access policies
D. Role-Based Access Control (RBAC)

A

Answer: B. Azure Policy

Reasoning: Azure Policy is the most suitable solution for this scenario because it allows you to enforce specific rules and effects over your resources, ensuring that they comply with your corporate standards and service level agreements. In this case, Azure Policy can restrict the creation of virtual machines to specific regions and specific sizes, which directly addresses the stated requirements.

Breakdown of non-selected options:
- A. Azure Resource Manager (ARM) templates: While ARM templates can define the infrastructure and configuration for Azure resources, they do not inherently enforce restrictions on regions or sizes. They are about deployment consistency rather than governance and compliance.
- C. Conditional Access policies: These control access to Azure resources based on conditions such as user location and device state. They are not designed to restrict the creation of resources by region or size.
- D. Role-Based Access Control (RBAC): RBAC manages who has access to Azure resources and what they can do with them. It does not provide the capability to restrict resource creation by region or size.
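To make the "allowed locations and sizes" idea concrete, here is a minimal sketch of how such a rule can be expressed and evaluated. The rule shape mirrors Azure Policy's `if`/`then` JSON (including the real `notIn` operator), but the region and SKU values are assumed examples and the evaluator is a simplified stand-in, not the Azure Policy engine:

```python
# Sketch of an Azure Policy-style rule: deny a VM whose location or size
# falls outside an approved list. Regions and SKUs below are example values.
policy_rule = {
    "if": {
        "anyOf": [
            {"field": "location", "notIn": ["eastus", "westeurope"]},
            {"field": "Microsoft.Compute/virtualMachines/sku.name",
             "notIn": ["Standard_D2s_v3", "Standard_D4s_v3"]},
        ]
    },
    "then": {"effect": "deny"},
}

def evaluate(policy, resource):
    """Return 'deny' when any notIn condition is violated, else 'allow'."""
    triggered = any(
        resource.get(cond["field"]) not in cond["notIn"]
        for cond in policy["if"]["anyOf"]
    )
    return policy["then"]["effect"] if triggered else "allow"
```

A request for a `Standard_D2s_v3` VM in `eastus` would be allowed, while one in a non-approved region or with a non-approved size would be denied before the resource is created.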

70
Q

You are designing a SQL database solution for a healthcare company that needs to store and process patient records and healthcare data. The company must comply with regulatory requirements such as HIPAA and GDPR. The solution must meet a Service Level Agreement (SLA) of 99.99% uptime and have reserved capacity. Compute charges must be minimized. Which database platform should you recommend?
A. Azure SQL Database vCore
B. Azure SQL Database Hyperscale
C. Azure SQL Managed Instance
D. Azure SQL Database with Transparent Data Encryption

A

Answer: A. Azure SQL Database vCore

Reasoning:
Azure SQL Database vCore is a suitable choice for this scenario because its flexible pricing model can minimize compute charges by selecting the appropriate number of vCores for the workload. It also supports compliance with regulatory requirements such as HIPAA and GDPR, and offers a high-availability SLA of 99.99% uptime. Additionally, it provides reserved capacity options, which further help with cost management by committing to a certain level of usage over time.

Breakdown of non-selected options:
B. Azure SQL Database Hyperscale: While Hyperscale offers high scalability and is suitable for very large databases, it may not be the most cost-effective option for minimizing compute charges unless the workload specifically requires its capabilities. It is better suited to scenarios where rapid scaling and large storage are the primary concerns.

C. Azure SQL Managed Instance: This option provides a high degree of compatibility with on-premises SQL Server features, which is not necessary in this scenario. It can be more expensive than Azure SQL Database vCore, especially if the full feature set of Managed Instance is not required.

D. Azure SQL Database with Transparent Data Encryption: Transparent Data Encryption (TDE) is a security feature that can be enabled on Azure SQL Database, but it is not a standalone database platform. Therefore, this option does not address the requirements of minimizing compute charges or meeting the SLA of 99.99% uptime.

71
Q

You have an on-premises file server storing 10 TB of infrequently accessed data. You want to migrate this data to Azure and ensure it’s available within 24 hours of being requested. Which storage solution should you implement to minimize costs?
A. Azure Blob Storage with the Archive access tier
B. Azure File Storage with the Cool access tier
C. Azure Disk Storage with the Standard HDD disk type
D. Azure Table Storage with the Hot access tier

A

Answer: A. Azure Blob Storage with the Archive access tier

Reasoning: The question specifies that the data is infrequently accessed and needs to be available within 24 hours of being requested. Azure Blob Storage with the Archive access tier is designed for infrequently accessed data and offers the lowest storage costs. Although rehydration from the Archive tier can take up to 15 hours, that fits within the 24-hour availability requirement, making it the most cost-effective solution for this scenario.

Breakdown of non-selected options:
- B. Azure File Storage with the Cool access tier: While this option is suitable for infrequently accessed data, it is generally more expensive than the Archive tier in Blob Storage. The Cool tier is designed for data that is accessed less frequently but still needs to be readily available, which is not necessary given the 24-hour retrieval window.
- C. Azure Disk Storage with the Standard HDD disk type: This option is not suitable because disk storage is typically used for VM disks and scenarios requiring high IOPS and low latency, which is not the case here. It is also more expensive than Blob Storage for large amounts of infrequently accessed data.
- D. Azure Table Storage with the Hot access tier: Table Storage is a NoSQL store for structured data and is not suitable for storing large files or unstructured data from a file server. Additionally, the Hot access tier is designed for frequently accessed data, which would not minimize costs here.
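The cost argument can be made concrete with back-of-the-envelope arithmetic. The per-GB monthly prices below are assumed sample figures for illustration only; actual Azure Blob Storage pricing varies by region, redundancy option, and billing period:

```python
# Rough monthly storage cost for 10 TB across Blob Storage access tiers.
# Prices are illustrative assumptions, not current Azure list prices.
DATA_GB = 10 * 1024  # 10 TB expressed in GB

PRICE_PER_GB_MONTH = {  # assumed sample prices (USD/GB/month)
    "hot": 0.018,
    "cool": 0.010,
    "archive": 0.001,
}

def monthly_cost(tier, size_gb=DATA_GB):
    """Storage-only cost estimate; ignores transactions and rehydration fees."""
    return round(size_gb * PRICE_PER_GB_MONTH[tier], 2)
```

Under these assumed rates, 10 TB costs roughly an order of magnitude less per month in Archive than in Cool, which is why Archive wins when the 24-hour retrieval window is acceptable. (Rehydration and early-deletion fees, ignored here, matter if the data is read often.)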

72
Q

Is using Azure Application Gateway for load balancing and SSL termination sufficient for monitoring application availability and diagnosing issues in Azure?

A. Yes
B. No

A

Answer: B. No

Reasoning: Azure Application Gateway is primarily used for load balancing and SSL termination, which are essential for distributing traffic and securing data in transit. However, it does not inherently provide comprehensive monitoring and diagnostics for application availability. For that, additional tools and services such as Azure Monitor, Application Insights, or Azure Log Analytics are typically required.

Breakdown of non-selected answer option:
A. Yes - This option is incorrect because, while Azure Application Gateway provides load balancing and SSL termination, it does not offer the full suite of monitoring and diagnostic tools needed to ensure application availability and diagnose issues. Additional Azure services are necessary to fulfill these requirements.

73
Q

You are designing an Azure environment with multiple subscriptions and want to use Azure Policy to enforce governance policies. You need to ensure that the policies are applied consistently across all subscriptions. Which Azure Policy scope should you use?

A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. Subscriptions
D. Compute resources
E. Resource groups
F. Management groups

A

Answer: F. Management groups

Reasoning:
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce rules and effects over your resources so that those resources stay compliant with your corporate standards and service level agreements. When you have multiple subscriptions and want to enforce policies consistently across all of them, the best practice is to use management groups. Management groups let you organize your subscriptions into a hierarchy for unified policy and access management. Policies applied at the management-group level are inherited by all subscriptions under that management group, ensuring consistent enforcement.

Breakdown of non-selected options:
A. Azure Active Directory (Azure AD) administrative units - These are used to delegate administrative permissions within Azure AD and are not related to Azure Policy or resource governance across subscriptions.

B. Azure Active Directory (Azure AD) tenants - A tenant is a dedicated instance of Azure AD that an organization receives when it signs up for a Microsoft cloud service. It is not used for organizing subscriptions or applying Azure Policies.

C. Subscriptions - While you can apply policies at the subscription level, this would require applying the same policy individually to each subscription, which is inefficient for consistent enforcement across multiple subscriptions.

D. Compute resources - Azure Policy can target specific resource types, but applying policies at the compute-resource level would not ensure consistent enforcement across all subscriptions.

E. Resource groups - As with subscriptions, policies can be applied at the resource-group level, but this would not ensure consistent enforcement across multiple subscriptions. Management groups provide the higher level of organization needed for this purpose.
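The inheritance behavior that makes management groups the right scope can be sketched in a few lines. The scope names and policy names here are hypothetical; only the traversal mirrors Azure's scope inheritance (management group → subscription):

```python
# Toy hierarchy: two subscriptions under one management group.
parents = {
    "sub-prod": "mg-root",
    "sub-dev": "mg-root",
    "mg-root": None,
}

# Policy assignments at each scope (names are made up for illustration).
assignments = {
    "mg-root": ["allowed-locations", "allowed-vm-sizes"],
    "sub-dev": ["require-tag-costcenter"],
}

def effective_policies(scope):
    """Collect assignments from the scope and every ancestor scope."""
    policies = []
    while scope is not None:
        policies.extend(assignments.get(scope, []))
        scope = parents.get(scope)
    return policies
```

Assigning `allowed-locations` once at `mg-root` makes it effective on every subscription below, which is exactly the "apply once, enforce everywhere" property the answer relies on.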

74
Q

You are designing a microservices architecture to be hosted in an Azure Kubernetes Service (AKS) cluster. The applications consuming these microservices will be hosted on Azure virtual machines. Both the virtual machines and the AKS cluster will be on the same virtual network. You need to design a solution to expose the microservices to the consumer applications. The solution must meet the following requirements:
✑ Restrict ingress access to the microservices to a single private IP address and protect it using mutual TLS authentication.
✑ Rate-limit the number of incoming microservice calls.
✑ Minimize costs.
What should you include in the solution?

A. Azure Application Gateway with Azure Web Application Firewall (WAF)
B. Azure API Management Standard tier with a service endpoint
C. Azure Front Door with Azure Web Application Firewall (WAF)
D. Azure API Management Premium tier with a virtual network connection

A

Answer: B. Azure API Management Standard tier with a service endpoint

Reasoning:
- The requirements specify the need to restrict ingress access to a single private IP address, use mutual TLS authentication, rate-limit incoming calls, and minimize costs.
- Azure API Management (APIM) is designed to manage APIs, including features such as rate limiting, authentication, and IP restriction.
- The Standard tier of APIM supports service endpoints, which can be used to restrict access to a specific private IP address.
- Mutual TLS authentication can be configured in APIM to secure the communication between the client and the API.
- The Standard tier is more cost-effective than the Premium tier, aligning with the requirement to minimize costs.

Breakdown of non-selected options:
A. Azure Application Gateway with Azure Web Application Firewall (WAF)
- While it can provide security features and rate limiting, it is primarily designed for web traffic and may not support mutual TLS authentication as effectively as APIM.
- It may not be as cost-effective as APIM for API management purposes.

C. Azure Front Door with Azure Web Application Firewall (WAF)
- Azure Front Door is designed for global load balancing and web application acceleration, which is not needed in this scenario.
- It does not natively support mutual TLS authentication for backend services.

D. Azure API Management Premium tier with a virtual network connection
- While it provides all the necessary features, including mutual TLS and IP restriction, it is more expensive than the Standard tier.
- The requirement to minimize costs makes the Standard tier the more suitable choice.
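Rate limiting of the kind an API gateway applies is commonly modeled as a token bucket: each call consumes a token, and tokens refill at a fixed rate up to a cap. This is a conceptual sketch of the pattern, not APIM's actual implementation; the capacity and refill rate are arbitrary example values:

```python
class TokenBucket:
    """Token-bucket rate limiter driven by an explicit (virtual) clock."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)   # start full
        self.refill_per_sec = refill_per_sec
        self.last = 0.0                 # timestamp of the last check, seconds

    def allow(self, now):
        """Refill based on elapsed time, then try to spend one token."""
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With a capacity of 2 and a refill of 1 token/second, a burst of three simultaneous calls sees the third rejected, and a call one second later is admitted again, which is the "rate-limit the number of incoming microservice calls" behavior in miniature.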

75
Q

Your company has deployed several virtual machines both on-premises and in Azure. ExpressRoute is set up and configured for connectivity between on-premises and Azure. Some virtual machines are experiencing network connectivity issues. You need to analyze the network traffic to determine if packets are being allowed or denied to these virtual machines. Solution: Use Azure Firewall to analyze the network traffic. Does this meet the goal?
A. Yes
B. No

A

Answer: B. No

Reasoning: Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It is primarily used for filtering and controlling inbound and outbound network traffic based on rules, not for analyzing network traffic to determine whether packets are being allowed or denied. For that analysis, tools such as Azure Network Watcher, which includes features like IP flow verify and Network Security Group (NSG) flow logs, are more appropriate.

Breakdown of non-selected answer options:
- A. Yes: This option is incorrect because Azure Firewall is not the right tool for analyzing network traffic to determine whether packets are being allowed or denied. It is suited to enforcing security policies rather than analyzing traffic.
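What IP flow verify computes can be sketched as first-match rule evaluation: NSG rules are checked in priority order, the first matching rule decides the outcome, and anything unmatched falls through to an implicit deny. The rules below are hypothetical examples, and the matching is simplified to ports only:

```python
# Simplified model of NSG evaluation as used by IP flow verify.
# Real NSG rules also match on source/destination address, protocol, etc.
nsg_rules = [
    {"priority": 100, "port": 443,  "access": "Allow"},  # HTTPS in
    {"priority": 200, "port": 3389, "access": "Deny"},   # block RDP
]

def ip_flow_verify(port):
    """Lowest priority number wins; unmatched traffic is implicitly denied."""
    for rule in sorted(nsg_rules, key=lambda r: r["priority"]):
        if rule["port"] == port:
            return rule["access"]
    return "Deny"  # implicit DenyAllInbound default rule
```

Running this against the ports a troubled VM uses shows immediately which rule (or the implicit default) is dropping the packets, which is the diagnostic Azure Firewall does not give you.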

76
Q

You have an Azure subscription containing two applications named App1 and App2. App1 generates messages that are added to an Azure Service Bus topic, and App2 processes those messages. You need to ensure that App2 processes messages as quickly as possible. What should you recommend?
A. One Azure Service Bus queue.
B. One Azure Storage queue.
C. One Azure Event Grid subscription.
D. One Azure Logic App.

A

Answer: A. One Azure Service Bus queue.

Reasoning:
To ensure that App2 processes messages as quickly as possible, the most suitable solution is an Azure Service Bus queue. Azure Service Bus is designed for high-throughput, low-latency message processing, which is ideal for scenarios where messages must be processed quickly. With a queue, App2 can pull messages as soon as they become available, ensuring efficient and timely processing.

Breakdown of non-selected options:
- B. One Azure Storage queue: While Azure Storage queues are a viable option for message queuing, they are generally used for simpler scenarios and do not offer the same features and performance as Azure Service Bus queues. Service Bus queues are better suited to complex messaging patterns and scenarios requiring high throughput and low latency.
- C. One Azure Event Grid subscription: Azure Event Grid is designed for event-driven architectures and is optimized for event distribution rather than message processing, so it is not the best choice when messages must be processed as quickly as possible.
- D. One Azure Logic App: Azure Logic Apps automate workflows and integrate services, but they are not optimized for high-throughput message processing. They introduce additional overhead and latency, making them less suitable when quick message processing is required.
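The "process as quickly as possible" goal maps to the competing-consumers pattern: several receivers pull from one queue in parallel, so throughput scales with the number of workers. This sketch simulates the pattern with an in-memory queue and threads rather than the azure-servicebus SDK:

```python
# Competing-consumers sketch: multiple workers drain one shared queue.
import queue
import threading

def drain(q, processed, lock):
    """Worker loop: pull messages until the queue is empty."""
    while True:
        try:
            msg = q.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed.append(msg)

def process_all(messages, workers=2):
    q = queue.Queue()
    for m in messages:
        q.put(m)
    processed, lock = [], threading.Lock()
    threads = [threading.Thread(target=drain, args=(q, processed, lock))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed
```

Each message is delivered to exactly one worker, which is the queue semantics (as opposed to a topic, where every subscription gets a copy). Adding receivers to App2 is how the queue recommendation translates into faster processing.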

77
Q

You are designing an Azure IoT solution involving 1 million IoT devices, each streaming data such as location, speed, and time. Approximately 1 million records will be written every second. You need to recommend services to store and query the data. Which two services can you recommend? Each option presents a complete solution.
A. Azure Cosmos DB for NoSQL
B. Azure Stream Analytics
C. Azure Event Hubs
D. Azure Data Lake Storage Gen2

A

Answer: A. Azure Cosmos DB for NoSQL
Answer: D. Azure Data Lake Storage Gen2

Reasoning:
The question requires a solution for storing and querying a high volume of data generated by 1 million IoT devices, with approximately 1 million records written every second. The solution needs to handle both storage and querying efficiently.

- Azure Cosmos DB for NoSQL: This service is suitable for storing large volumes of data with low latency and high availability. It supports fast querying and is designed to handle massive amounts of data, making it a good fit for IoT scenarios where data is continuously streamed and must be queried in real time or near real time.
- Azure Data Lake Storage Gen2: This service is optimized for big-data analytics and can store vast amounts of data. It is designed for high throughput and is suitable for scenarios where large datasets need to be stored and queried. It provides a hierarchical namespace and integrates with Azure analytics services, making it a good choice for storing IoT data that will be analyzed later.

Breakdown of non-selected options:

- B. Azure Stream Analytics: This service is primarily used for real-time data stream processing and analytics, not for long-term storage. It is designed to process data in motion and provide insights; it is not a storage solution.
- C. Azure Event Hubs: This service is a data-streaming platform and event-ingestion service designed to handle large volumes of data streams. It is used for ingesting and buffering data, not for long-term storage or querying; it acts as a data pipeline rather than a storage solution.

78
Q

Your company needs a highly available database solution capable of handling large data volumes and supporting real-time analytics. The solution must ensure no data loss during failover and minimize costs. Which Azure service should you use?
A. Azure Cosmos DB
B. Azure SQL Database Managed Instance
C. Azure Database for MySQL
D. Azure Database for PostgreSQL

A

Answer: A. Azure Cosmos DB

Reasoning: Azure Cosmos DB is a globally distributed, multi-model database service that provides high availability, low latency, and scalability. It is designed to handle large volumes of data and supports real-time analytics. Cosmos DB offers strong consistency models and automatic failover, ensuring no data loss during failover. Additionally, it provides a cost-effective pricing model with options to optimize costs based on usage patterns.

Breakdown of non-selected options:
- B. Azure SQL Database Managed Instance: While it offers high availability and supports large data volumes, it is primarily designed for SQL workloads and may not be as cost-effective or as flexible for real-time analytics across diverse data models as Cosmos DB.
- C. Azure Database for MySQL: This service is suitable for MySQL workloads but may not be the best choice for handling large data volumes with real-time analytics requirements. It also does not provide the same level of global distribution and failover capability as Cosmos DB.
- D. Azure Database for PostgreSQL: Like the MySQL option, this service is tailored to PostgreSQL workloads and may not offer the same scalability and real-time analytics capabilities as Cosmos DB, nor does it guard against data loss during failover as effectively.

79
Q

You have 200 servers running Windows Server 2019 that host Microsoft SQL Server 2019 instances. These instances host databases with the following characteristics: ✑ The databases are highly transactional, with a large number of concurrent transactions. ✑ The largest database is currently 2 TB, and none of the databases will ever exceed 3 TB. You plan to migrate all the data from SQL Server to Azure. You need to recommend a service to host the databases. The solution must meet the following requirements: ✑ Ensure that the migrated databases are highly available and resilient. ✑ Provide low latency and high throughput for transactional workloads. ✑ Support automatic failover and replication to multiple regions. What should you include in the recommendation?
A. Azure SQL Database single databases
B. Azure SQL Managed Instance
C. Azure SQL Database Hyperscale
D. SQL Server on Azure Virtual Machines

A

Answer: B. Azure SQL Managed Instance

Reasoning:
Azure SQL Managed Instance is the most suitable option for hosting SQL Server databases in Azure given the requirements for high availability, resilience, low latency, high throughput, automatic failover, and replication to multiple regions. Managed Instance provides a fully managed SQL Server instance with built-in high availability and supports auto-failover groups for geo-replication, making it ideal for transactional workloads.

Breakdown of non-selected options:
A. Azure SQL Database single databases - While this option provides high availability and resilience, it is better suited to individual databases than to a large number of databases with high transactional workloads. It may not provide the same level of compatibility and features as Managed Instance for SQL Server workloads.

C. Azure SQL Database Hyperscale - This option is designed for very large databases and provides high scalability. However, the databases in this scenario will not exceed 3 TB, so Hyperscale's benefits are unnecessary. Additionally, Hyperscale may not offer the same transactional performance and compatibility as Managed Instance.

D. SQL Server on Azure Virtual Machines - This option provides full control over the SQL Server environment but requires more management overhead than a managed service. It does not inherently provide the same built-in high availability, automatic failover, and multi-region replication as Managed Instance.

80
Q

You need to deploy a web application that can withstand failures of the underlying infrastructure components in Azure. The application will be deployed to a single Azure Kubernetes Service (AKS) cluster in an Azure region. The application deployment must meet the following requirements: ✑ Ensure that the application remains available if a single pod fails. ✑ Ensure that internet traffic is encrypted using SSL without configuring SSL on each container. Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS Ingress Controller
D. Azure Load Balancer

A

Answer: C. AKS Ingress Controller

Reasoning: The question requires a solution that ensures high availability of a web application deployed on Azure Kubernetes Service (AKS) and also handles SSL encryption for internet traffic without configuring SSL on each container.

- AKS Ingress Controller: This is the most suitable option because it manages external access (typically HTTP) to the services in a Kubernetes cluster. It provides SSL termination, meaning it handles SSL encryption and decryption and offloads that task from individual containers. It also supports load balancing and keeps the application available when a single pod fails by routing traffic to healthy pods.

Breakdown of non-selected options:
- A. Azure Front Door: While Azure Front Door provides SSL termination and global load balancing, it is better suited to global routing and high availability across multiple regions than to traffic within a single AKS cluster in one region. It is not specifically designed to handle pod failures within a Kubernetes cluster.

- B. Azure Traffic Manager: This service provides DNS-based traffic routing for distributing traffic across multiple regions or endpoints. It does not provide SSL termination or manage traffic within a single AKS cluster.
- D. Azure Load Balancer: This service provides load balancing at the network layer (Layer 4) and does not handle SSL termination. It cannot manage HTTP/HTTPS traffic directly or provide the SSL offloading required in this scenario.

81
Q

You have an on-premises application named App1 that uses Kerberos authentication. You plan to migrate App1 to Azure. Some users work remotely and do not have VPN access to the on-premises network. You need to provide the remote users with SSO access to App1. Which two features should you include in the solution?

A. Azure AD Application Proxy
B. Azure AD Privileged Identity Management (PIM)
C. Azure AD Connect Health
D. Azure AD Domain Services
E. Azure AD enterprise applications
F. Azure Application Gateway

A

Answer: A. Azure AD Application Proxy
Answer: E. Azure AD enterprise applications

Reasoning:
To provide single sign-on (SSO) access to an application that uses Kerberos authentication and is being migrated to Azure, while also accommodating remote users without VPN access, Azure AD Application Proxy and Azure AD enterprise applications are the most suitable options.

- Azure AD Application Proxy lets you securely publish applications to users outside your network, providing SSO without the need for a VPN. It acts as a bridge to the application, whether it is hosted on-premises or in Azure.
- Azure AD enterprise applications enable integration with Azure Active Directory for authentication and SSO. This feature lets you configure SSO settings for applications, including those migrated to Azure.

Breakdown of non-selected options:

B. Azure AD Privileged Identity Management (PIM): This is used for managing, controlling, and monitoring access within Azure AD, Azure, and other Microsoft online services. It is not directly related to providing SSO access to applications.

C. Azure AD Connect Health: This is used for monitoring the health of your on-premises identity infrastructure. It does not provide SSO capabilities or facilitate remote access to applications.

D. Azure AD Domain Services: This provides managed domain services such as domain join, group policy, and Kerberos/NTLM authentication. While it supports Kerberos, it does not directly facilitate remote SSO access to applications.

F. Azure Application Gateway: This is a web traffic load balancer that enables you to manage traffic to your web applications. It does not provide SSO capabilities or facilitate remote access to applications.

82
Q

You are planning to migrate a MongoDB database to Azure Cosmos DB. The database must be highly available and support global data distribution. Which consistency level should you recommend for the Cosmos DB configuration?
A. Strong
B. Bounded Staleness
C. Session

A

Answer: B. Bounded Staleness

Reasoning: When migrating a MongoDB database to Azure Cosmos DB with requirements for high availability and global data distribution, the consistency level plays a crucial role. Azure Cosmos DB offers five consistency levels: Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual.

- Strong consistency provides the highest level of consistency but at the cost of availability and latency, especially in a globally distributed setup. It requires reads to be acknowledged across replicas, which can be slow and less available in a multi-region setup.
- Bounded Staleness offers a balance between consistency and availability. It allows reads to lag behind writes by a configurable number of versions (K) or time interval (T). This level suits scenarios requiring global distribution with high availability, since it provides a predictable consistency model while allowing some delay in data propagation.
- Session consistency is scoped to a client session, providing strong consistency guarantees within a session but not across different sessions. While it is a good choice when a single user's session needs consistency, it may not be the best fit for global distribution where multiple users or systems interact with the data.

Breakdown of non-selected options:
- A. Strong: Not selected because it prioritizes consistency over availability and latency, which can be detrimental in a globally distributed system where high availability is a requirement.

- C. Session: Not selected because it is suited to consistency within a single user's session rather than across a globally distributed system. It does not provide the same predictable cross-session consistency as Bounded Staleness.
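The Bounded Staleness guarantee can be stated as an invariant: a read region's lag behind the write region never exceeds K versions (or a time interval T). A minimal sketch of the K-versions bound, with the replication mechanics reduced to a counter:

```python
class BoundedReplica:
    """Toy model of a read region under Bounded Staleness (K versions)."""

    def __init__(self, k):
        self.k = k
        self.latest_write = 0   # version committed at the write region
        self.replicated = 0     # version visible at this read region

    def write(self):
        self.latest_write += 1
        # Replication must catch up before the lag would exceed K;
        # in Cosmos DB, writes are throttled if replicas fall too far behind.
        if self.latest_write - self.replicated > self.k:
            self.replicated = self.latest_write - self.k

    def lag(self):
        return self.latest_write - self.replicated
```

However many writes occur, `lag()` never exceeds K, which is the "predictable consistency model" the reasoning refers to: unlike Eventual consistency, staleness is bounded; unlike Strong, reads need not wait for full global acknowledgment.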

83
Q

You are planning an Azure IoT Hub solution that will include 1,000,000 IoT devices. Each device will stream data including video, device ID, and timestamp. Approximately 1,000,000 records will be written every second. The data needs to be visualized in real time. You need to recommend services to store and query the data. Which two services can you recommend? Each option presents a complete solution.

A. Azure Table Storage
B. Azure Stream Analytics
C. Azure Cosmos DB for NoSQL
D. Azure Data Lake Storage

A

Answer: B. Azure Stream Analytics
Answer: C. Azure Cosmos DB for NoSQL

Reasoning:
- The scenario involves handling a high volume of data (1,000,000 records per second) from IoT devices, including video, device ID, and timestamp. The data must be visualized in real time, which requires efficient processing and querying capabilities.

- Azure Stream Analytics is suitable for real-time data processing and analytics. It can ingest data from IoT Hub, process it in real time, and output the results to various services for visualization or storage, which meets the real-time visualization requirement.
- Azure Cosmos DB for NoSQL is a globally distributed, multi-model database service that provides high throughput and low latency. It is well suited to storing large volumes of data and supports rich querying, making it a good fit for storing and querying the data from IoT devices.

Breakdown of non-selected options:
- A. Azure Table Storage: While Azure Table Storage is a scalable NoSQL data store, it is not optimized for real-time analytics or the high throughput required in this scenario. It is better suited to storing large amounts of structured data and lacks the real-time processing capabilities needed here.

- D. Azure Data Lake Storage: Azure Data Lake Storage is designed for big-data analytics and can store large volumes of data. However, it is suited to batch processing rather than real-time analytics and does not provide the real-time querying capabilities required for this scenario.

84
Q

You have an on-premises application that consumes data from multiple databases. The application code references database tables using a combination of the server; database; and table name. You need to migrate the application data to Azure. To which two services can you migrate the application data to achieve this goal? Each option presents a complete solution. NOTE: Each selection is worth one point.
A. SQL Server Stretch Database
B. SQL Server on an Azure virtual machine
C. Azure SQL Database
D. Azure SQL Managed Instance

A

Answer: B. SQL Server on an Azure virtual machine
Answer: D. Azure SQL Managed Instance

Reasoning: The requirement is to migrate an on-premises application that references database tables using server, database, and table names. This implies the need for a solution that offers the same level of compatibility and functionality as the on-premises SQL Server.

  • SQL Server on an Azure virtual machine (Option B) is suitable because it provides full SQL Server functionality and compatibility, allowing the application to continue using server, database, and table names without modification.
  • Azure SQL Managed Instance (Option D) is also suitable because it offers near 100% compatibility with on-premises SQL Server, including support for server, database, and table names, making it a good choice for migrating applications with minimal changes.

Breakdown of non-selected options:

  • A. SQL Server Stretch Database: This option is not suitable because it is designed for stretching cold data from an on-premises SQL Server to Azure rather than fully migrating an application and its data to Azure.
  • C. Azure SQL Database: This option is not selected because, while it is a managed database service, it does not support all SQL Server features and might require changes to the application code, especially if the application relies on server-level features or specific SQL Server functionality.
85
Q

You are planning to deploy a multi-tier application in Azure that will store sensitive data. The application will include web servers, application servers, and a database server. You need to ensure that sensitive data is encrypted both at rest and in transit. What should you include in the solution?

A. Virtual Private Network (VPN)
B. Azure Key Vault
C. Azure AD App Proxy
D. Azure Data Lake Storage

A

Answer: B. Azure Key Vault

Reasoning:
Azure Key Vault is a service that provides secure storage and management of sensitive information such as keys, secrets, and certificates. It is specifically designed to safeguard the cryptographic keys and secrets used by cloud applications and services. In the context of the question, Azure Key Vault can manage the encryption keys used to encrypt data at rest and in transit, ensuring that sensitive data is protected.

Breakdown of non-selected options:
A. Virtual Private Network (VPN) - While a VPN can secure data in transit by encrypting the traffic between the client and the server, it does not provide encryption for data at rest. Therefore, it does not fully meet the requirement of encrypting sensitive data both at rest and in transit.

C. Azure AD App Proxy - Azure AD Application Proxy is used to provide secure remote access to on-premises applications. It focuses on publishing and access, not on managing encryption for data at rest, so it is not suitable for the requirement of encrypting sensitive data.

D. Azure Data Lake Storage - Azure Data Lake Storage is a scalable storage service for big data analytics. While it can store data securely, it is not designed for managing encryption keys or ensuring encryption in transit. Therefore, it is not the most suitable option for encrypting sensitive data both at rest and in transit.

86
Q

You need to design a highly available Azure Kubernetes Service (AKS) cluster that meets the following requirements:
✑ The cluster must remain operational during a zone outage.
✑ The cluster must be scalable.
✑ Costs should be minimized.
Which deployment option should you choose?
A. AKS with Availability Zones
B. AKS with Virtual Machine Scale Sets
C. AKS with Azure Front Door
D. AKS with Azure Traffic Manager

A

Answer: A. AKS with Availability Zones

Reasoning:
To design a highly available AKS cluster that remains operational during a zone outage, the most suitable option is AKS with Availability Zones. This deployment option distributes the cluster's nodes across multiple zones within a region, providing resilience against zone failures. AKS also inherently supports scalability, allowing you to adjust the number of nodes as needed. While there is some cost associated with using Availability Zones, it is generally a cost-effective way to achieve high availability compared to more complex configurations.

Breakdown of non-selected options:
- B. AKS with Virtual Machine Scale Sets: While VM Scale Sets provide scalability, they do not inherently provide zone redundancy. This option does not ensure that the cluster remains operational during a zone outage, which is a key requirement.
- C. AKS with Azure Front Door: Azure Front Door is a global load-balancing service that provides high availability and low latency for applications. However, it does not address the need for zone redundancy within the AKS cluster itself; it is better suited to distributing traffic across multiple regions than to ensuring intra-region availability.
- D. AKS with Azure Traffic Manager: Similar to Azure Front Door, Azure Traffic Manager is a DNS-based traffic load balancer that distributes traffic across multiple regions. It does not provide zone-level redundancy within a single region, which is necessary to remain operational during a zone outage.

87
Q

You are designing a financial application that requires high security, compliance, and low-latency reads. The application will be used globally and must support SQL commands. You need to recommend a database solution that meets these requirements. Which solution should you recommend?
A. Azure SQL Database
B. Azure Cosmos DB for NoSQL
C. Azure Database for PostgreSQL
D. Azure Database for MySQL

A

Answer: A. Azure SQL Database

Reasoning: The question specifies a database solution that supports SQL commands, requires high security, compliance, and low-latency reads, and is used globally. Azure SQL Database is a fully managed relational database service that supports SQL commands and offers strong security and compliance features, including advanced threat protection and data encryption. It also provides low-latency reads through features such as geo-replication for global distribution, making it a suitable choice for the requirements outlined.

Breakdown of non-selected options:
- B. Azure Cosmos DB for NoSQL: While Azure Cosmos DB offers global distribution and low-latency reads, it is primarily a NoSQL database. Its SQL-like query API is not full relational SQL, so the requirement for SQL command support makes this option less suitable.
- C. Azure Database for PostgreSQL: This option supports SQL commands and offers good security features, but it may not provide the same level of global distribution and low-latency reads as Azure SQL Database. Additionally, Azure SQL Database is more commonly associated with high compliance standards.
- D. Azure Database for MySQL: Similar to PostgreSQL, this option supports SQL commands and offers security features. However, it may not match the global distribution capabilities and low-latency performance of Azure SQL Database, which is specifically designed for such scenarios.

88
Q

You have an Azure subscription that includes a storage account with multiple file shares. A team of contractors needs access to these file shares for one week only. You must recommend a solution to grant access to the contractors for the specified week while restricting access for everyone else. What should you recommend?
A. Azure AD Managed Identities
B. Azure AD Privileged Identity Management
C. Shared Access Signatures (SAS)
D. Role-Based Access Control (RBAC)

A

Answer: C. Shared Access Signatures (SAS)

Reasoning: Shared Access Signatures (SAS) are a suitable solution for granting temporary access to Azure storage resources, such as file shares, for a specified period. SAS tokens can be configured with an expiration date and time, making them ideal when access must be restricted to a specific timeframe, such as one week in this case. Additionally, SAS tokens can be scoped to specific resources and operations, providing precise control over what the contractors can do.

Breakdown of non-selected options:
- A. Azure AD Managed Identities: Managed identities give Azure services an automatically managed identity in Azure AD that can be used to authenticate to services supporting Azure AD authentication. They are not designed for granting temporary access to storage resources to external users such as contractors.
- B. Azure AD Privileged Identity Management: This service is used to manage, control, and monitor access within Azure AD, Azure, and other Microsoft Online Services. It focuses on managing privileged roles and does not provide a straightforward mechanism for granting temporary access to Azure storage resources.
- D. Role-Based Access Control (RBAC): RBAC assigns roles to users, groups, and services to grant access to Azure resources. While it can control access, it is not ideal for temporary access scenarios because it requires manual intervention to assign and revoke roles and lacks the built-in expiration that SAS provides.
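The core idea behind a SAS token, a signed string that encodes a resource and an expiry, can be sketched in plain Python. This is a conceptual illustration only: the actual Azure SAS format and signing scheme differ, and the key and resource names below are hypothetical.

```python
import hashlib
import hmac
import time

def make_token(resource, valid_for_seconds, key):
    """Build a signed token embedding the resource and an expiry time.
    Anyone holding the token can use it until expiry; nobody can forge
    one without the signing key - the same property a SAS provides."""
    expiry = int(time.time()) + valid_for_seconds
    payload = f"{resource}|{expiry}"
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def is_valid(token, key):
    """Check the signature and the expiry; both must hold."""
    payload, _, sig = token.rpartition("|")
    _resource, _, expiry = payload.rpartition("|")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()

key = b"account-key"                 # hypothetical storage account key
one_week = 7 * 24 * 3600
token = make_token("fileshare/contractors", one_week, key)
print(is_valid(token, key))          # True while the week lasts
print(is_valid(token, b"wrong"))     # False: signature does not verify
```

After the week passes, the expiry check fails and the token is simply useless, which is why no manual revocation step is needed.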
89
Q

You need to deploy a web application that requires multi-factor authentication in an Azure subscription. The solution must meet the following requirements: the web application must have access to the full .NET Framework, be hosted in a virtual machine (VM), and provide multi-factor authentication for user access. Which Azure service should you use?
A. Azure Active Directory
B. Azure Virtual Machines
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)

A

Answer: B. Azure Virtual Machines

Reasoning: The question specifies that the web application must be hosted in a virtual machine and have access to the full .NET Framework. Azure Virtual Machines is the only option that directly provides a virtual machine environment where you can install and run the full .NET Framework. Additionally, multi-factor authentication can be implemented for applications hosted on Azure Virtual Machines by integrating with Azure Active Directory or another identity provider.

Breakdown of non-selected options:
- A. Azure Active Directory: While Azure Active Directory provides multi-factor authentication services, it is not a hosting service for web applications. It is used for identity and access management, not for hosting applications on a VM.
- C. Azure Kubernetes Service (AKS): AKS is used for deploying and managing containerized applications with Kubernetes. It does not directly provide a virtual machine environment for running the full .NET Framework as the question requires.
- D. Azure Container Instances (ACI): ACI runs containerized applications without requiring you to manage the underlying infrastructure. It does not provide a virtual machine environment for running the full .NET Framework, which is a requirement in the question.

90
Q

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
- Provide access to the Java runtime environment.
- Ensure redundancy in case an Azure region fails.
- Allow administrators access to the operating system to install custom application dependencies.

Solution: You deploy an Azure Kubernetes Service (AKS) cluster with two nodes and an Azure Application Gateway. You use a custom Docker image that includes the Java runtime environment. You grant the necessary permissions to administrators. Does this meet the goal?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: The solution involves deploying an Azure Kubernetes Service (AKS) cluster with two nodes and an Azure Application Gateway, which meets the requirements outlined in the question:

  1. Provide access to the Java runtime environment: The use of a custom Docker image that includes the Java runtime environment ensures that the web app has access to Java.
  2. Ensure redundancy in case an Azure region fails: Deploying an AKS cluster with multiple nodes provides redundancy and high availability within the cluster. While the solution does not explicitly deploy across multiple regions, AKS can be configured to support multi-region deployments for failover scenarios.
  3. Allow administrators access to the operating system to install custom application dependencies: AKS allows administrators to access the underlying nodes, providing the ability to install custom dependencies as needed.

Breakdown of non-selected answer option:

B. No: This option is not selected because the proposed solution using AKS and a custom Docker image meets all the specified requirements. The solution provides the necessary Java runtime environment, redundancy, and administrative access to the operating system, so the answer “No” does not accurately reflect the suitability of the solution.

91
Q

Your company is experiencing latency issues with an application running on a virtual machine in Azure. You suspect that the issue may be related to network connectivity, but you’re not certain. Which Azure Network Watcher feature should you use to troubleshoot the issue?

A. IP Flow Verify
B. Connection Monitor
C. Network Performance Monitor
D. Traffic Analytics

A

Answer: B. Connection Monitor

Reasoning:
To troubleshoot latency issues related to network connectivity for an application running on a virtual machine in Azure, the most suitable Azure Network Watcher feature is Connection Monitor. Connection Monitor provides end-to-end visibility into network connectivity, allowing you to monitor and analyze the connection between your virtual machine and other network resources. It helps identify connectivity issues, measure latency, and confirm that network paths are performing as expected.

Breakdown of non-selected answer options:
- A. IP Flow Verify: This feature checks whether a packet to or from a virtual machine is allowed or denied based on the configured security group rules. It is not designed for monitoring latency or end-to-end network connectivity.
- C. Network Performance Monitor: While this tool can monitor network performance, it is focused on monitoring network links across hybrid environments rather than troubleshooting connectivity issues for a single virtual machine.
- D. Traffic Analytics: This feature provides insights into network traffic flow patterns and can help with security and compliance, but it is not designed for troubleshooting latency or connectivity issues on a virtual machine.
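At its core, Connection Monitor repeatedly times connectivity checks such as TCP handshakes. The self-contained sketch below (a local listener substitutes for the remote endpoint, which is an assumption for demonstration) shows the basic measurement, without the scheduling, topology views, and history the service adds:

```python
import socket
import threading
import time

def measure_connect_latency(host, port):
    """Time a TCP handshake - the elementary measurement behind
    connectivity monitors, which run it on a schedule and record trends."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; close immediately
    return time.perf_counter() - start

# A local listener stands in for the remote endpoint under test.
server = socket.socket()
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=server.accept, daemon=True).start()

latency = measure_connect_latency("127.0.0.1", port)
print(f"TCP connect latency: {latency * 1000:.2f} ms")
```

A single probe only tells you the current state; it is the repeated probing over time that distinguishes a transient blip from a persistent path problem, which is what Connection Monitor automates.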

92
Q

Your company has deployed several virtual machines in Azure, and you want to monitor the network traffic to and from these virtual machines. You plan to use Network Watcher for this purpose. Does this meet your goal?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: Network Watcher is an Azure service that provides tools to monitor, diagnose, and gain insights into network traffic in Azure. It includes features such as packet capture, connection monitoring, and network performance monitoring, which are suitable for monitoring network traffic to and from virtual machines. Therefore, using Network Watcher meets the goal of monitoring network traffic for the deployed virtual machines.

Breakdown of non-selected answer option:
B. No - This option is not suitable because Network Watcher is indeed capable of monitoring network traffic, which aligns with the goal stated in the question. The answer “No” does not accurately reflect the capabilities of Network Watcher in this context.

93
Q

You are designing a global e-commerce website that must handle millions of transactions daily. The website needs to be highly available and support consistent reads and writes across multiple regions. Which database solution should you recommend?
A. Azure SQL Database with active geo-replication
B. Azure Cosmos DB with multi-region writes
C. Azure Database for MySQL with read replicas
D. Azure Synapse Analytics

A

Answer: B. Azure Cosmos DB with multi-region writes

Reasoning:
The requirement is for a database solution that supports high availability and consistent reads and writes across multiple regions. Azure Cosmos DB with multi-region writes is designed to provide exactly this capability. It is a globally distributed, multi-model database service with turnkey distribution across any number of Azure regions, and multi-region writes let it accept writes in several regions simultaneously, ensuring low latency and high availability. This makes it the most suitable option for a global e-commerce website handling millions of transactions daily.

Breakdown of non-selected options:
A. Azure SQL Database with active geo-replication - While Azure SQL Database supports geo-replication, it primarily provides read-only replicas in other regions. It does not support multi-region writes, which is a key requirement for consistent reads and writes across multiple regions.

C. Azure Database for MySQL with read replicas - This option supports read replicas, which help with read scalability, but it does not support multi-region writes. It is not designed for consistent writes across multiple regions, which is a critical requirement in this scenario.

D. Azure Synapse Analytics - This is primarily an analytics service, not a transactional database solution. It is designed for big data and data warehousing scenarios, not for handling millions of transactional operations with consistent reads and writes across multiple regions.
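As a rough illustration, multi-region writes are a property of the Cosmos DB account itself. The ARM template fragment below is a sketch: the account name, regions, and API version are placeholders, and a full template would include the usual top-level sections.

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts",
  "apiVersion": "2023-04-15",
  "name": "contoso-commerce",
  "location": "East US",
  "properties": {
    "databaseAccountOfferType": "Standard",
    "enableMultipleWriteLocations": true,
    "locations": [
      { "locationName": "East US", "failoverPriority": 0 },
      { "locationName": "West Europe", "failoverPriority": 1 }
    ]
  }
}
```

With `enableMultipleWriteLocations` set to true, every listed region accepts writes, which is the capability that distinguishes option B from geo-replicated read replicas.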

94
Q

You have an on-premises application named App1 that uses forms-based authentication. You plan to migrate App1 to Azure. Some users work remotely and do not have VPN access to the on-premises network. You need to provide the remote users with single sign-on (SSO) access to App1. Which two features should you include in the solution?
A. Azure AD Application Proxy
B. Azure AD Privileged Identity Management (PIM)
C. Conditional Access policies
D. Azure Arc
E. Azure AD enterprise applications
F. Azure Application Gateway

A

Answer: A. Azure AD Application Proxy
Answer: E. Azure AD enterprise applications

Reasoning:
To provide remote users with single sign-on (SSO) access to an on-premises application that is being migrated to Azure, Azure AD Application Proxy and Azure AD enterprise applications are the most suitable features. Azure AD Application Proxy allows secure remote access to on-premises applications without a VPN, which is ideal for users who work remotely. Azure AD enterprise applications enable integration with Azure Active Directory for authentication and SSO capabilities.

Breakdown of non-selected options:
B. Azure AD Privileged Identity Management (PIM) - This is used for managing, controlling, and monitoring access within Azure AD, Azure, and other Microsoft Online Services. It is not directly related to providing SSO access to applications.

C. Conditional Access policies - These enforce access controls on applications based on specific conditions. While useful for security, they do not directly provide SSO capabilities.

D. Azure Arc - This is used for managing resources across on-premises, multi-cloud, and edge environments. It is not relevant to providing SSO access to applications.

F. Azure Application Gateway - This is a web traffic load balancer that enables you to manage traffic to your web applications. It does not provide SSO capabilities.

95
Q

You are designing a solution for a large-scale, mission-critical MySQL database in Azure. The database must ensure low latency and high performance. Which compute tier should you recommend?

A. Burstable
B. General Purpose
C. Memory Optimized

A

Answer: C. Memory Optimized

Reasoning: For a large-scale, mission-critical MySQL database that requires low latency and high performance, the Memory Optimized compute tier is the most suitable option. This tier offers a higher memory-to-vCPU ratio, which is ideal for workloads that require fast data processing and retrieval.

Breakdown of non-selected options:

A. Burstable: This tier is designed for workloads that do not require consistent performance and can tolerate variable performance levels. It is not suitable for mission-critical applications that require low latency and high performance.

B. General Purpose: While this tier offers balanced compute and memory, it may not provide the high performance and low latency required for a large-scale, mission-critical database. It is better suited to general workloads without stringent performance requirements.

96
Q

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
✑ Provide access to the full .NET Framework.
✑ Ensure redundancy in case an Azure region fails.
✑ Allow administrators access to the operating system to install custom application dependencies.
Solution: You deploy two Azure Virtual Machines across two Azure regions and implement Azure Traffic Manager. Does this meet the goal?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: The solution described in the question involves deploying two Azure Virtual Machines (VMs) across two Azure regions and implementing Azure Traffic Manager. This setup meets the requirements specified in the question:

  1. Provide access to the full .NET Framework: Azure Virtual Machines can run Windows Server, which supports the full .NET Framework, thus meeting this requirement.
  2. Ensure redundancy in case an Azure region fails: Deploying VMs across two Azure regions ensures that if one region fails, the other can continue to serve the application, providing redundancy.
  3. Allow administrators access to the operating system to install custom application dependencies: Azure VMs provide full administrative access to the operating system, allowing administrators to install any necessary custom application dependencies.

Breakdown of non-selected answer option:

B. No: This option is not suitable because the proposed solution does meet all the specified requirements. The use of Azure VMs across multiple regions with Azure Traffic Manager provides the necessary redundancy, access to the full .NET Framework, and administrative access to the OS. Therefore, the correct answer is “Yes,” making option B incorrect.

97
Q

You have an app named App1 that uses two on-premises Microsoft SQL Server databases named DB1 and DB2. You plan to migrate DB1 and DB2 to Azure. You need to recommend an Azure solution to host DB1 and DB2. The solution must meet the following requirements:
✑ Support server-side transactions across DB1 and DB2.
✑ Minimize administrative effort to update the solution.
What should you recommend?

A. Two Azure SQL databases in an elastic pool
B. Two databases on the same Azure SQL Managed Instance
C. Two databases on the same SQL Server instance on an Azure Virtual Machine
D. Two Azure SQL databases on different Azure SQL Database servers

A

Answer: B. Two databases on the same Azure SQL Managed Instance

Reasoning:
- The requirement to support server-side transactions across DB1 and DB2 indicates the need for a solution that can handle transactions spanning databases. Azure SQL Managed Instance supports distributed transactions, making it suitable for this requirement.
- Minimizing administrative effort suggests a preference for a managed service over an Infrastructure as a Service (IaaS) solution such as running SQL Server on an Azure virtual machine. Azure SQL Managed Instance is a Platform as a Service (PaaS) offering, which reduces administrative overhead compared to managing a virtual machine.
- Azure SQL Managed Instance can host multiple databases on the same instance, which enables server-side transactions across those databases.

Breakdown of non-selected options:
- A. Two Azure SQL databases in an elastic pool: Azure SQL Database does not support distributed transactions across databases, which disqualifies this option given the requirement for server-side transactions.
- C. Two databases on the same SQL Server instance on an Azure Virtual Machine: While this option supports server-side transactions, it involves more administrative effort than a managed service like Azure SQL Managed Instance, contradicting the requirement to minimize administrative effort.
- D. Two Azure SQL databases on different Azure SQL Database servers: This option does not support distributed transactions across databases, making it unsuitable for the server-side transaction requirement.
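The key property, one transaction atomically spanning two databases hosted by the same engine, can be illustrated with SQLite standing in for a managed instance. This is a conceptual analogy only: the file names and tables are invented, and SQL Managed Instance would use T-SQL three-part names rather than ATTACH.

```python
import os
import sqlite3
import tempfile

# Two separate database files stand in for DB1 and DB2 (hypothetical names).
workdir = tempfile.mkdtemp()
db1_path = os.path.join(workdir, "db1.sqlite")
db2_path = os.path.join(workdir, "db2.sqlite")

conn = sqlite3.connect(db1_path)
conn.execute(f"ATTACH DATABASE '{db2_path}' AS db2")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE db2.audit (id INTEGER PRIMARY KEY, note TEXT)")

# One transaction touches tables in both databases; both writes commit
# (or roll back) together because a single engine hosts both - the same
# guarantee an app expects from cross-database transactions on one instance.
with conn:
    conn.execute("INSERT INTO orders (total) VALUES (99.5)")
    conn.execute("INSERT INTO db2.audit (note) VALUES ('order created')")

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])     # 1
print(conn.execute("SELECT COUNT(*) FROM db2.audit").fetchone()[0])  # 1
```

Split those two databases onto unrelated servers (option D) and no single engine spans them, which is why that option fails the transaction requirement.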

98
Q

You plan to migrate a web app named App1 from an on-premises data center to Azure. App1 relies on a custom COM component installed on the host server. You need to recommend a solution to host App1 in Azure, ensuring the following requirements are met:
✑ App1 must remain accessible to users even if an Azure data center becomes unavailable.
✑ Costs must be minimized.
What should you include in the recommendation?

A. Deploy a load balancer and a web app in two Azure regions.
B. Deploy a load balancer and a virtual machine scale set in two Azure regions.
C. Deploy a load balancer and a virtual machine scale set across two availability zones.
D. Deploy an Azure Traffic Manager profile and a web app in two Azure regions.

A

Answer: D. Deploy an Azure Traffic Manager profile and a web app in two Azure regions.

Reasoning:
- The requirement is to ensure that App1 remains accessible even if an Azure data center becomes unavailable. This implies the need for a solution that provides geographic redundancy.
- Azure Traffic Manager is a DNS-based traffic load balancer that distributes traffic optimally to services across global Azure regions, which aligns with the requirement for accessibility even if a data center is unavailable.
- Deploying the web app in two Azure regions ensures that if one region becomes unavailable, the other can still serve the application, meeting the high-availability requirement.
- This solution also minimizes costs compared to deploying virtual machines, as web apps (App Services) generally have lower operational costs than virtual machines.

Breakdown of non-selected options:
A. Deploy a load balancer and a web app in two Azure regions.
- While this option provides redundancy by deploying in two regions, it lacks the global traffic management capabilities of Azure Traffic Manager, which is better suited to handling regional failures.

B. Deploy a load balancer and a virtual machine scale set in two Azure regions.
- This option provides redundancy but involves higher costs due to the use of virtual machines and scale sets, which are typically more expensive than web apps.

C. Deploy a load balancer and a virtual machine scale set across two availability zones.
- This option provides redundancy within a single region by using availability zones, but it does not protect against a complete regional failure, which the question requires.

Overall, option D is the most suitable, as it provides the necessary redundancy across regions while minimizing costs.

99
Q

You need to design a highly available Azure Virtual Desktop solution that meets the following requirements:
✑ The solution must remain available during a zone outage.
✑ The solution must be scalable.
✑ Costs must be minimized.
Which deployment option should you use?
A. Virtual Desktop with Azure Front Door
B. Virtual Desktop with Azure Traffic Manager
C. Virtual Desktop with Availability Zones
D. Virtual Desktop with Virtual Machine Scale Sets

A

Answer: C. Virtual Desktop with Availability Zones

Reasoning:
To design a highly available Azure Virtual Desktop solution that remains available during a zone outage, Availability Zones are the most suitable option. Availability Zones are physically separate locations within an Azure region that provide high availability and fault tolerance: if one zone goes down, services in the other zones remain unaffected, meeting the requirement to remain available during a zone outage. Availability Zones also support scalability and can help minimize costs by optimizing resource allocation and usage.

Breakdown of non-selected options:
A. Virtual Desktop with Azure Front Door - Azure Front Door is primarily used for global load balancing and application acceleration, not for handling zone outages within a region. It does not by itself provide the high availability required during a zone outage.

B. Virtual Desktop with Azure Traffic Manager - Azure Traffic Manager is a DNS-based traffic load balancer that distributes traffic across multiple regions, not zones. It is not designed to handle zone outages within a single region, making it unsuitable for this requirement.

D. Virtual Desktop with Virtual Machine Scale Sets - While Virtual Machine Scale Sets provide scalability and can help manage costs, they do not inherently provide high availability across zones. They focus on scaling virtual machines within a single zone or region, not on ensuring availability during a zone outage.

100
Q

You are designing an application that needs to store and process large amounts of time-series data. The data must be queried quickly and efficiently. Which database solution should you recommend?
A. Azure Cosmos DB for NoSQL
B. Azure SQL Database with active geo-replication
C. Azure Time Series Insights
D. Azure Stream Analytics

A

Answer: C. Azure Time Series Insights

Reasoning: Azure Time Series Insights is specifically designed for storing, querying, and visualizing time-series data. It provides a scalable and efficient platform for managing large volumes of time-stamped data, making it the most suitable choice for applications that need to handle time-series data efficiently.

Breakdown of non-selected options:
- A. Azure Cosmos DB for NoSQL: While Azure Cosmos DB is a highly scalable and globally distributed database service, it is not specifically optimized for time-series data. It can store time-series data but lacks the specialized features and optimizations that Azure Time Series Insights offers for this workload.

- B. Azure SQL Database with active geo-replication: Azure SQL Database is a relational database service that can store time-series data, but it is not optimized for the specific needs of time-series processing and querying. Active geo-replication addresses availability and disaster recovery rather than efficient time-series handling.
- D. Azure Stream Analytics: This service is designed for real-time data stream processing and analytics, not for storing and querying large amounts of historical time-series data. It is better suited to processing data in motion than to storing and querying data at rest.
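What a time-series store optimizes for, ordered storage plus fast range queries, can be sketched in a few lines of Python. The class name and sample data are invented; real services layer retention, downsampling, and compression on top of this core.

```python
import bisect

class TimeSeriesStore:
    """Minimal sketch of a time-series store: readings kept sorted by
    timestamp so range queries resolve with binary search, O(log n)."""

    def __init__(self):
        self.timestamps = []
        self.values = []

    def append(self, ts, value):
        # Insert in timestamp order so range queries stay fast.
        i = bisect.bisect_right(self.timestamps, ts)
        self.timestamps.insert(i, ts)
        self.values.insert(i, value)

    def query_range(self, start, end):
        # Locate the window boundaries by binary search, then slice.
        lo = bisect.bisect_left(self.timestamps, start)
        hi = bisect.bisect_right(self.timestamps, end)
        return list(zip(self.timestamps[lo:hi], self.values[lo:hi]))

store = TimeSeriesStore()
for ts, v in [(10, 1.0), (20, 1.5), (15, 1.2), (30, 2.0)]:
    store.append(ts, v)
print(store.query_range(12, 25))   # [(15, 1.2), (20, 1.5)]
```

A general-purpose database can of course store the same rows, but it does not organize everything around this access pattern, which is the gap the specialized services fill.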
101
Q

You have a mission-critical application that requires high performance and low-latency database access. You plan to use Azure Database for MySQL. Which compute tier should you recommend?
A. Burstable
B. General Purpose
C. Memory Optimized

A

Answer: C. Memory Optimized

Reasoning: The question specifies a need for high performance and low-latency database access for a mission-critical application. Among the compute tiers available for Azure Database for MySQL, the Memory Optimized tier is designed to provide high performance and low latency, making it the most suitable choice for this scenario.

Breakdown of non-selected answer options:
- A. Burstable: This tier is designed for workloads that do not require consistent performance and can tolerate variable performance levels. It is not suitable for mission-critical applications that require high performance and low latency.
- B. General Purpose: This tier offers balanced compute and memory resources and suits most general workloads. However, it may not provide the high performance and low latency that mission-critical applications require compared to the Memory Optimized tier.

102
Q

You have an on-premises application named App1 that uses Integrated Windows Authentication. You plan to migrate App1 to Azure. Some users work remotely and have VPN access to the on-premises network. You need to provide the remote users with SSO access to App1 without requiring VPN connectivity. Which two features should you include in the solution?

A. Azure AD Application Proxy
B. Azure AD Privileged Identity Management (PIM)
C. Conditional Access policies
D. Azure Arc
E. Azure AD Enterprise Applications
F. Azure Application Gateway

A

Answer: A. Azure AD Application Proxy
Answer: E. Azure AD Enterprise Applications

Reasoning:
To provide remote users with single sign-on (SSO) access to an on-premises application like App1 without requiring VPN connectivity, Azure AD Application Proxy and Azure AD Enterprise Applications are the most suitable options. Azure AD Application Proxy allows secure remote access to on-premises applications without a VPN by acting as a bridge between Azure AD and the on-premises application. Azure AD Enterprise Applications facilitate the integration of applications with Azure AD for SSO capabilities.

Breakdown of non-selected options:
B. Azure AD Privileged Identity Management (PIM) - This is used for managing, controlling, and monitoring access within Azure AD, Azure, and other Microsoft Online Services. It is not directly related to providing SSO access to applications.
C. Conditional Access policies - These enforce access controls on applications based on specific conditions. While useful for security, they do not directly provide SSO access to applications.
D. Azure Arc - This is used for managing resources across on-premises, multi-cloud, and edge environments. It does not provide SSO capabilities for applications.
F. Azure Application Gateway - This is a web traffic load balancer that enables you to manage traffic to your web applications. It does not provide SSO access to applications.

103
Q

You are designing a new web application that requires a database solution to store user data. The solution must be scalable, support SQL commands, and provide low-latency read operations. Which Azure database service should you recommend?
A. Azure Cosmos DB for NoSQL
B. Azure SQL Database
C. Azure Database for MySQL
D. Azure Database for PostgreSQL

A

Answer: B. Azure SQL Database

Reasoning: The question requires a database solution that is scalable, supports SQL commands, and provides low-latency read operations. Azure SQL Database is a fully managed relational database service that is highly scalable, supports SQL commands, and is optimized for low-latency operations, making it the most suitable choice for the given requirements.

Breakdown of non-selected options:
- A. Azure Cosmos DB for NoSQL: While Azure Cosmos DB is highly scalable and provides low-latency operations, it is primarily a NoSQL database. The question specifies the need for SQL command support, which makes this option less suitable.
- C. Azure Database for MySQL: Although it supports SQL commands and can scale, Azure Database for MySQL is not as optimized for low-latency operations as Azure SQL Database, making it a less ideal choice.
- D. Azure Database for PostgreSQL: Similar to MySQL, it supports SQL commands and can scale, but it is not as optimized for low-latency operations as Azure SQL Database, making it less suitable for the requirements.

104
Q

You are developing a sales application that will include several Azure cloud services to manage various components of a transaction. These cloud services will handle customer orders, billing, payment, inventory, and shipping. You need to recommend a solution that allows the cloud services to asynchronously communicate transaction information using XML messages. What should you include in the recommendation?
A. Azure Notification Hubs
B. Azure Data Lake
C. Azure Service Bus
D. Azure Blob Storage

A

Answer: C. Azure Service Bus

Reasoning: Azure Service Bus is a messaging service that enables asynchronous communication between the components of a distributed application. It is well suited to scenarios where different services need to communicate using messages, such as the XML messages in this case. Service Bus supports reliable message queuing and complex messaging patterns, making it ideal for coordinating transactions across services such as customer orders, billing, payment, inventory, and shipping.

Breakdown of non-selected options:
- A. Azure Notification Hubs: This service is primarily used for sending push notifications to mobile devices. It is not designed for asynchronous communication between cloud services using XML messages.
- B. Azure Data Lake: This service is used for storing and analyzing large volumes of data. It is not suitable for message-based communication between services.
- D. Azure Blob Storage: This service stores unstructured data such as text or binary data. While it can store XML files, it does not provide the messaging capabilities needed for asynchronous communication between services.
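The decoupling that Service Bus provides can be illustrated locally. The sketch below is a minimal simulation, not the real service: a `queue.Queue` stands in for a Service Bus queue (the real producer and consumer would use the `azure-servicebus` SDK), and the XML element names are hypothetical, not a required schema.

```python
import queue
import threading
import xml.etree.ElementTree as ET

def build_order_message(order_id: str, amount: float) -> str:
    """Build the kind of XML payload the services would exchange.
    Element names here are illustrative only."""
    order = ET.Element("Order")
    ET.SubElement(order, "OrderId").text = order_id
    ET.SubElement(order, "Amount").text = str(amount)
    return ET.tostring(order, encoding="unicode")

# Local stand-in for a Service Bus queue or topic subscription.
broker = queue.Queue()
processed = []

def billing_service():
    # Consumer: blocks until a message arrives, then parses the XML.
    msg = broker.get()
    root = ET.fromstring(msg)
    processed.append(root.find("OrderId").text)
    broker.task_done()

worker = threading.Thread(target=billing_service)
worker.start()
# Producer: the order service sends without waiting for billing to be ready.
broker.put(build_order_message("ORD-1001", 49.95))
worker.join()
```

The key property shown is that the producer and consumer never call each other directly; they share only the message contract, which is what lets each service scale and fail independently.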

105
Q

You are designing a mobile application that requires a database solution to store user data. The solution must be scalable, support SQL commands, and provide low-latency read operations. The application will be used globally, so data replication must be supported. Which Azure database service should you recommend?
A. Azure SQL Database with active geo-replication
B. Azure Cosmos DB for NoSQL
C. Azure Database for MySQL
D. Azure Database for PostgreSQL

A

Answer: A. Azure SQL Database with active geo-replication

Reasoning: The question requires a database solution that is scalable, supports SQL commands, provides low-latency read operations, and supports global data replication. Azure SQL Database with active geo-replication meets all these requirements. It is a fully managed relational database service that supports SQL commands, offers scalability, and provides active geo-replication for global data distribution, ensuring low-latency read operations across regions.

Breakdown of non-selected options:
- B. Azure Cosmos DB for NoSQL: While Azure Cosmos DB is highly scalable and provides low-latency read operations with global distribution, it is primarily a NoSQL database. The requirement specifies support for SQL commands, which makes Azure SQL Database the more suitable choice.
- C. Azure Database for MySQL: This option supports SQL commands and is scalable, but it does not provide the same level of global data replication and low-latency reads as Azure SQL Database with active geo-replication.
- D. Azure Database for PostgreSQL: Similar to Azure Database for MySQL, it supports SQL commands and is scalable, but it lacks the built-in global replication and low-latency read capabilities of Azure SQL Database with active geo-replication.
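In practice, applications direct read traffic to a geo-replicated secondary through the connection string: `ApplicationIntent=ReadOnly` is the standard SQL Server / Azure SQL keyword that routes a session to a readable secondary. The sketch below just builds such strings; the server and database names are hypothetical.

```python
def connection_string(server: str, database: str, read_only: bool = False) -> str:
    """Assemble an Azure SQL connection string. Server names are
    illustrative; only the keywords are standard."""
    parts = [
        f"Server=tcp:{server}.database.windows.net,1433",
        f"Database={database}",
        "Encrypt=True",  # TLS for data in transit
    ]
    if read_only:
        # Routes the session to a readable secondary replica.
        parts.append("ApplicationIntent=ReadOnly")
    return ";".join(parts)

# Writes go to the primary region; local reads hit the nearby secondary.
primary = connection_string("contoso-weu", "userdata")
secondary = connection_string("contoso-eus", "userdata", read_only=True)
```

A globally distributed app would pick the secondary server nearest each user for reads while keeping all writes on the single primary.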

106
Q

You are designing an Azure solution that requires highly available and scalable message processing. The solution must ensure that no messages are lost; even during a zone outage; while minimizing costs. Which Azure service should you use?
A. Azure Event Grid
B. Azure Event Hubs
C. Azure Service Bus
D. Azure Notification Hubs

A

Answer: C. Azure Service Bus

Reasoning: Azure Service Bus is designed for highly reliable and scalable message processing. It provides features such as message queuing and publish/subscribe patterns, which ensure that no messages are lost even during outages. Azure Service Bus supports geo-disaster recovery and can replicate messages across availability zones, ensuring high availability and durability. Additionally, it offers a cost-effective solution for scenarios requiring guaranteed message delivery.

Breakdown of non-selected options:
- A. Azure Event Grid: While Azure Event Grid is a scalable event-routing service, it is primarily used for event-driven architectures and does not provide the same level of message durability and reliability as Azure Service Bus. It is not designed for message queuing or for ensuring message delivery during outages.
- B. Azure Event Hubs: Azure Event Hubs is a big data streaming platform and event ingestion service optimized for high-throughput data streaming rather than reliable message processing. It does not guarantee message delivery in the same way as Azure Service Bus.
- D. Azure Notification Hubs: Azure Notification Hubs is designed for sending push notifications to mobile devices and is not suitable for general message processing or for ensuring message durability and availability during outages.

107
Q

You have an Azure AD tenant with a security group named Group1. Group1 is set up for assigned memberships and includes several members; such as guest users. You need to ensure that Group1 is reviewed monthly to identify any members who no longer need access. What solution should you recommend?
A. Implement Azure AD Identity Protection.
B. Change the membership type of Group1 to Dynamic User.
C. Create an access review for Group1.
D. Implement Azure AD Privileged Identity Management (PIM).

A

Answer: C. Create an access review for Group1.

Reasoning: The requirement is to review Group1 monthly to identify any members who no longer need access. Azure AD access reviews are designed for exactly this purpose, allowing administrators to regularly review group memberships and access rights to ensure they are still appropriate. This feature is particularly useful for groups with guest users or other members whose access needs may change over time.

Breakdown of non-selected options:
A. Implement Azure AD Identity Protection: This option is not suitable because Azure AD Identity Protection focuses on identifying and managing risks related to user identities, such as detecting compromised accounts, rather than reviewing group memberships.

B. Change the membership type of Group1 to Dynamic User: Changing the membership type to Dynamic User would automatically adjust group membership based on user attributes, but it does not provide a mechanism for regularly reviewing whether users still need access.

D. Implement Azure AD Privileged Identity Management (PIM): Azure AD PIM is used to manage, control, and monitor access within Azure AD, Azure, and other Microsoft Online Services, focusing on privileged roles rather than regular group membership reviews. It is not designed for periodic access reviews of security groups like Group1.

108
Q

You are planning to deploy an Azure web app that will store sensitive data. The web app will access a database server that also stores sensitive data. You need to ensure that sensitive data is encrypted both at rest and in transit. What should you include in the solution?

A. Azure Key Vault
B. Azure Front Door
C. Azure VPN Gateway
D. Azure Application Gateway

A

Answer: A. Azure Key Vault

Reasoning:
To ensure that sensitive data is encrypted both at rest and in transit, Azure Key Vault is the most suitable option. Azure Key Vault is designed to safeguard the cryptographic keys and secrets used by cloud applications and services. It provides a secure way to manage and control access to encryption keys and secrets, which can be used to encrypt data at rest. Additionally, Azure Key Vault can be integrated with other Azure services to ensure data is encrypted in transit using SSL/TLS.

Breakdown of non-selected options:
- B. Azure Front Door: Azure Front Door is a scalable and secure entry point for fast delivery of global applications. It provides features such as load balancing and a web application firewall, but it is not specifically designed for managing encryption of data at rest or in transit.

- C. Azure VPN Gateway: Azure VPN Gateway sends encrypted traffic between an Azure virtual network and an on-premises location over the public internet. While it encrypts data in transit, it does not address encryption of data at rest.
- D. Azure Application Gateway: Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. It offers SSL termination and web application firewall capabilities, but it does not provide encryption for data at rest.
109
Q

You are designing an Azure solution that requires a highly available and scalable message broker with the following requirements: The message broker must support publish-subscribe messaging patterns and message persistence. It must be able to scale elastically to accommodate a growing number of messages and subscribers. The solution must minimize costs while meeting these requirements. Which Azure service should you use?
A. Azure Service Bus
B. Azure Event Grid
C. Azure Event Hubs
D. Azure Relay

A

Answer: A. Azure Service Bus

Reasoning: Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics. It supports publish-subscribe messaging patterns and message persistence, which are key requirements in the question. Additionally, Azure Service Bus can scale elastically to handle a growing number of messages and subscribers, making it suitable for the scenario described. It is also cost-effective for scenarios requiring message persistence and complex messaging patterns.

Breakdown of non-selected options:
- B. Azure Event Grid: While Azure Event Grid is designed for event-driven architectures and can handle high-throughput scenarios, it is not primarily a message broker and does not support message persistence, which is a requirement in the question.
- C. Azure Event Hubs: Azure Event Hubs is designed for big data streaming and event ingestion, not for message brokering with publish-subscribe patterns and message persistence. It is better suited to telemetry and event stream processing.
- D. Azure Relay: Azure Relay is used by hybrid applications to securely expose services running in a corporate network to the public cloud. It does not provide message brokering capabilities or support publish-subscribe messaging patterns.

110
Q

You are designing an Azure environment that will host numerous virtual machines. You need to ensure that all virtual machines comply with the organization’s policies. Which Azure Policy scope should you use to achieve this goal?
A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. Subscriptions
D. Compute resources
E. Resource groups
F. Management groups

A

Answer: F. Management groups

Reasoning:
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce rules and effects over your resources so that those resources stay compliant with your corporate standards and service-level agreements. When you need to ensure that all virtual machines comply with the organization's policies, you should apply the policies at a scope that encompasses all the resources you want to manage.

Management groups are the highest level of scope in Azure, allowing you to apply policies across multiple subscriptions. This makes them the most suitable choice when you want to ensure compliance across numerous virtual machines that might be spread across different subscriptions.

Breakdown of non-selected options:
A. Azure Active Directory (Azure AD) administrative units - These are used for delegating administrative permissions within Azure AD and are not related to Azure Policy or resource compliance.
B. Azure Active Directory (Azure AD) tenants - Tenants are instances of Azure AD and are not directly related to managing Azure resources or applying Azure Policies.
C. Subscriptions - While you can apply policies at the subscription level, this would only affect resources within a single subscription. With multiple subscriptions, management groups are more appropriate.
D. Compute resources - This is not a valid scope for Azure Policy. Policies are applied at higher levels such as resource groups, subscriptions, or management groups.
E. Resource groups - Policies can be applied at the resource-group level, but this would require applying policies to each resource group individually, which is less efficient than using management groups for broader compliance.
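The scope hierarchy is visible in the ARM resource-ID formats that policy assignments target. The sketch below builds those scope strings; the ID formats are standard ARM conventions, while the management-group and subscription names are hypothetical.

```python
def management_group_scope(mg_name: str) -> str:
    """Scope ID for a management group (standard ARM format)."""
    return f"/providers/Microsoft.Management/managementGroups/{mg_name}"

def subscription_scope(sub_id: str) -> str:
    """Scope ID for a single subscription."""
    return f"/subscriptions/{sub_id}"

def resource_group_scope(sub_id: str, rg_name: str) -> str:
    """Scope ID for one resource group within a subscription."""
    return f"/subscriptions/{sub_id}/resourceGroups/{rg_name}"

# A policy assigned at the management-group scope is inherited by every
# subscription and resource group beneath it, so one assignment can
# govern VMs spread across many subscriptions.
scope = management_group_scope("fabrikam-root")
```

Assigning once at the broadest applicable scope is what makes management groups more efficient than repeating the assignment per subscription or per resource group.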

111
Q

You are designing an Azure solution for a company that needs a highly available and scalable container orchestration platform with these requirements: The platform must support multiple container runtimes and orchestration engines. It must scale elastically to handle a growing number of containers and workloads. The solution should minimize costs while meeting these requirements. Which Azure service should you use?
A. Azure Kubernetes Service (AKS)
B. Azure Container Instances (ACI)
C. Azure Container Registry (ACR)
D. Azure Batch

A

Answer: A. Azure Kubernetes Service (AKS)

Reasoning: Azure Kubernetes Service (AKS) is the most suitable option for a highly available and scalable container orchestration platform. AKS is a managed Kubernetes service, so it scales elastically to handle a growing number of containers and workloads, supports Kubernetes-compatible container runtimes (such as containerd), and provides the full Kubernetes orchestration ecosystem. Because Azure manages the control plane, AKS also reduces infrastructure-management overhead, which helps keep costs down.

Breakdown of non-selected options:
- B. Azure Container Instances (ACI): ACI is suitable for running individual containers without orchestration. It does not provide a full orchestration platform like AKS.
- C. Azure Container Registry (ACR): ACR is a service for storing and managing container images, not for orchestrating or running containers. It does not meet the requirements for a scalable container orchestration platform.
- D. Azure Batch: Azure Batch is designed for running large-scale parallel and high-performance computing (HPC) applications. It is not specifically designed for container orchestration.

112
Q

You are responsible for managing an Azure environment with numerous virtual machines. You need to create a monthly report detailing all new virtual machine deployments. Which solution should you recommend?
A. Azure Activity Log
B. Azure Advisor
C. Azure Analysis Services
D. Azure Monitor performance counters

A

Answer: A. Azure Activity Log

Reasoning: The Azure Activity Log is a service that provides insight into subscription-level events in Azure. It records new virtual machine deployments, including when they were created and by whom, which makes it the most suitable option for generating a monthly report on new virtual machine deployments.

Breakdown of non-selected options:
- B. Azure Advisor: Azure Advisor provides personalized best practices and recommendations to optimize Azure resources, but it does not provide detailed logs or reports on virtual machine deployments.
- C. Azure Analysis Services: This service is used for data modeling and analytics, not for tracking or reporting on Azure resource deployments.
- D. Azure Monitor performance counters: Performance counters collect performance data from Azure resources; they do not track deployment events such as the creation of new virtual machines.
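Activity Log entries carry a resource-provider operation name, and VM creations appear as `Microsoft.Compute/virtualMachines/write`. The sketch below filters exported entries for a month's report; the sample records are hypothetical but mirror that shape (real entries would come from the Activity Log export or the `az monitor activity-log` data).

```python
from datetime import datetime, timezone

# Operation name Azure records for VM create/update deployments.
VM_WRITE = "Microsoft.Compute/virtualMachines/write"

def new_vm_deployments(entries, year, month):
    """Return resource IDs of VM write operations in the given month."""
    return [
        e["resourceId"]
        for e in entries
        if e["operationName"] == VM_WRITE
        and e["timestamp"].year == year
        and e["timestamp"].month == month
    ]

# Hypothetical sample entries standing in for exported Activity Log data.
sample = [
    {"operationName": VM_WRITE,
     "resourceId": "/subscriptions/s1/resourceGroups/rg1/providers/"
                   "Microsoft.Compute/virtualMachines/vm1",
     "timestamp": datetime(2024, 5, 3, tzinfo=timezone.utc)},
    {"operationName": "Microsoft.Compute/virtualMachines/delete",
     "resourceId": "/subscriptions/s1/resourceGroups/rg1/providers/"
                   "Microsoft.Compute/virtualMachines/vm0",
     "timestamp": datetime(2024, 5, 4, tzinfo=timezone.utc)},
]
may_report = new_vm_deployments(sample, 2024, 5)
```

Note that `write` operations also cover updates to existing VMs, so a production report would additionally check the correlated deployment status or filter on first occurrence per resource ID.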

113
Q

You need to design a highly available Azure SQL database that meets the following requirements:
✑ The database must support read scale-out.
✑ The database must remain available during a regional outage.
✑ Costs must be minimized.
Which deployment option should you use?
A. Azure SQL Managed Instance Business Critical
B. Azure SQL Database Premium
C. Azure SQL Database General Purpose
D. Azure SQL Managed Instance General Purpose

A

Answer: A. Azure SQL Managed Instance Business Critical

Reasoning:
- The requirement for the database to remain available during a regional outage suggests the need for geo-replication or a similar high-availability feature that spans multiple regions.
- The requirement for read scale-out indicates the need for a solution that supports read replicas or secondary replicas for load balancing read operations.
- Azure SQL Managed Instance Business Critical tier supports both read scale-out and high availability, with the ability to configure auto-failover groups for regional outages, making it suitable for this scenario.
- Although cost minimization is a factor, the Business Critical tier is necessary to meet the availability and read scale-out requirements.

Breakdown of non-selected options:
- B. Azure SQL Database Premium: While it supports high availability and read scale-out, it is primarily designed for single-region deployments. To ensure availability during a regional outage, additional configuration such as geo-replication would be needed, which might increase costs.
- C. Azure SQL Database General Purpose: This option is more cost-effective but does not inherently support read scale-out or the high availability needed for regional outages without additional configuration, which could increase complexity and costs.
- D. Azure SQL Managed Instance General Purpose: This option is more cost-effective but lacks Business Critical features such as read scale-out and the high availability needed for regional outages.

114
Q

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain. You need to provide users with a way to request access to a specific resource, ensuring the request is approved by a designated approver before access is granted. What should you do?
A. Create an Azure AD group with assigned membership and configure an access review for the group.
B. Configure Azure AD Privileged Identity Management (PIM) for the resource.
C. Implement Azure AD Identity Protection.
D. Configure Azure AD entitlement management.

A

Answer: D. Configure Azure AD entitlement management.

Reasoning: The requirement is to provide users with a way to request access to a specific resource, with the condition that the request must be approved by a designated approver before access is granted. Azure AD entitlement management is specifically designed for managing access to resources through access packages, which can include approval workflows. This makes it the most suitable option for the scenario described.

Breakdown of non-selected options:
- A. Create an Azure AD group with assigned membership and configure an access review for the group: While access reviews can help manage group memberships, they are typically used for periodic reviews rather than on-demand access requests with approval workflows. This option does not provide a mechanism for users to request access and have it approved by a designated approver.
- B. Configure Azure AD Privileged Identity Management (PIM) for the resource: PIM provides just-in-time access and approval workflows, but it is focused on privileged roles rather than general resource access requests, making it less suitable for the scenario.
- C. Implement Azure AD Identity Protection: Identity Protection is focused on identifying and responding to identity-based risks and threats, such as compromised accounts. It does not provide access-request and approval workflows, making it irrelevant to the scenario.

115
Q

You plan to deploy five applications to Azure. These applications will be deployed to a single Azure Kubernetes Service (AKS) cluster within an Azure region. The application deployment must meet the following requirements: ✑ Ensure that the applications remain available if a single pod fails. ✑ Ensure that the internet traffic is encrypted using SSL without configuring SSL on each container. Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer

A

Answer: C. AKS ingress controller

Reasoning: The question requires a solution that keeps the applications available if a pod fails and encrypts internet traffic using SSL without configuring SSL on each container. An AKS ingress controller manages external access to services in a Kubernetes cluster, typically HTTP. It can perform SSL termination, meaning it manages the SSL certificates and decrypts incoming SSL traffic, offloading SSL configuration from individual containers. Additionally, it distributes traffic across multiple pods, ensuring availability even if a single pod fails.

Breakdown of non-selected options:
- A. Azure Front Door: While Azure Front Door can handle SSL termination and provide global load balancing, it is better suited to global routing and web application acceleration than to managing traffic within a single AKS cluster.
- B. Azure Traffic Manager: This service provides DNS-based traffic routing across multiple regions or endpoints; it does not perform SSL termination or manage traffic within a single AKS cluster.
- D. Azure Load Balancer: This service provides Layer 4 (TCP/UDP) load balancing and does not handle SSL termination, which is a requirement in the question. It is also not specifically designed for Kubernetes environments.
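SSL termination at the ingress is configured through the `tls` section of a standard Kubernetes Ingress (`networking.k8s.io/v1`) resource. The sketch below expresses that manifest as a Python dict so its structure is testable; the host, secret, and service names are hypothetical.

```python
import json

# Sketch of an Ingress that terminates TLS at the ingress controller,
# so the application pods behind it serve plain HTTP and no container
# needs its own SSL configuration. Names are illustrative.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "apps-ingress"},
    "spec": {
        "tls": [{
            "hosts": ["apps.contoso.com"],
            # The certificate lives in this cluster secret, not in the apps.
            "secretName": "contoso-tls-cert",
        }],
        "rules": [{
            "host": "apps.contoso.com",
            "http": {"paths": [{
                "path": "/app1",
                "pathType": "Prefix",
                "backend": {"service": {"name": "app1-svc",
                                        "port": {"number": 80}}},
            }]},
        }],
    },
}
manifest = json.dumps(ingress, indent=2)
```

Pod-failure resilience comes from the Deployment's replica count rather than the ingress itself: the ingress simply load-balances across whichever replicas are healthy.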

116
Q

You have an Azure subscription with an Azure Kubernetes Service (AKS) cluster. How can you ensure the AKS cluster authenticates with Azure AD to access Azure resources?
A. A system-assigned managed identity
B. An Azure AD application
C. A service principal
D. Azure AD Pod Identity

A

Answer: A. A system-assigned managed identity

Reasoning: To ensure that an Azure Kubernetes Service (AKS) cluster can authenticate with Azure Active Directory (Azure AD) to access Azure resources, using a managed identity is the most suitable approach. Managed identities are a feature of Azure AD that provides Azure services with an automatically managed identity. This identity can be used to authenticate to any service that supports Azure AD authentication, without the need to manage credentials.

A system-assigned managed identity is tied to the lifecycle of the Azure resource it is enabled on, in this case the AKS cluster. This makes it a secure and convenient option for authenticating the AKS cluster with Azure AD.

Breakdown of non-selected options:

B. An Azure AD application: While Azure AD applications can be used for authentication, they require manual management of credentials and are not as seamless or secure as managed identities for this use case.

C. A service principal: Service principals allow non-interactive sign-in to Azure resources, but they also require manual credential management, which is less secure and more cumbersome than managed identities.

D. Azure AD Pod Identity: Azure AD Pod Identity allows Kubernetes applications to access Azure resources using Azure AD identities, but it targets pod-level authentication rather than the cluster itself. It is not the most straightforward way to authenticate the AKS cluster with Azure AD.

117
Q

You are the IT administrator for a large organization that uses a mix of on-premises and cloud-based services. One of the services used by the organization is a SQL Server instance running on an Azure virtual machine. You need to recommend a disaster recovery solution that meets the following requirements: provides near real-time data replication to a secondary location in a different Azure region, supports an RTO of 10 minutes, supports an RPO of 5 minutes, and minimizes costs while providing the necessary level of protection. What solution should you recommend?

A. Azure Site Recovery
B. Azure Backup
C. Always On Availability Groups
D. Azure SQL Database Managed Instance

A

Answer: C. Always On Availability Groups

Reasoning:
- The question requires a disaster recovery solution for a SQL Server instance on an Azure VM with specific RTO and RPO requirements.
- Always On Availability Groups provide near real-time data replication and can be configured to meet the RTO of 10 minutes and RPO of 5 minutes, making them suitable for the requirements.
- They allow data replication to a secondary replica in a different Azure region, which aligns with the requirement for geographic redundancy.
- While Always On Availability Groups involve some cost, they are typically more cost-effective than other high-availability solutions for the level of protection and replication required.

Breakdown of non-selected options:
- A. Azure Site Recovery: Primarily used for VM-level disaster recovery, not specifically for SQL Server data replication. It may not meet the RPO requirement of 5 minutes because it is not designed for near real-time, transactionally consistent database replication.
- B. Azure Backup: Designed for periodic backups rather than real-time replication. It does not meet the RPO and RTO requirements for near real-time data replication and quick recovery.
- D. Azure SQL Database Managed Instance: This is a different service offering and would require migrating the SQL Server instance to a managed instance, which is not a stated requirement. It does not directly address the need for real-time replication of an existing SQL Server on a VM.

118
Q

You are designing a new Azure solution that requires a database. The database must support SQL commands and be able to scale out to handle a high volume of read operations. The solution must also be cost-effective. Which Azure database service should you recommend?
A. Azure SQL Database
B. Azure Cosmos DB
C. Azure Database for PostgreSQL

A

Answer: A. Azure SQL Database

Reasoning: Azure SQL Database is a fully managed relational database service that supports SQL commands and can scale out to handle a high volume of read operations using read scale-out replicas, while elastic pools help keep costs down. It also offers various pricing tiers to suit different needs and budgets, making it cost-effective.

Breakdown of non-selected options:
- B. Azure Cosmos DB: While Cosmos DB is highly scalable and supports SQL-like queries, it is a NoSQL database service designed for globally distributed applications. It might not be the most cost-effective option for a solution that specifically requires SQL command support.
- C. Azure Database for PostgreSQL: This service supports SQL commands and can handle high read volumes with read replicas. However, Azure SQL Database is generally more optimized for SQL workloads and offers more straightforward scaling options, making it the more suitable choice for this scenario.

119
Q

Introductory Information: Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you need to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you can complete all questions included in this exam within the time provided. To answer the questions included in a case study, you will need to reference information provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study, click the Next button to display the first question. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview

Fabrikam, Inc. is an engineering company with offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.

Existing Environment: Active Directory Environment

The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.

Existing Environment: Network Infrastructure

Each office contains at least one domain controller from the corp.fabrikam.com domain. The main office contains all the domain controllers for the rd.fabrikam.com forest. All the offices have a high-speed connection to the internet. An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V. The IT department currently uses a separate Hyper-V environment to test updates to WebApp1. Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.

Existing Environment: Problem Statements

The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.

Requirements: Planned Changes

Fabrikam plans to move most of its production workloads to Azure during the next few years, including virtual machines that rely on Active Directory for authentication. As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft 365 deployment. All R&D operations will remain on-premises. Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.

Requirements: Technical Requirements

Fabrikam identifies the following technical requirements:
- Website content must be easily updated from a single point.
- User input must be minimized when provisioning new web app instances.
- Whenever possible, existing on-premises licenses must be used to reduce cost.
- Users must always authenticate by using their corp.fabrikam.com UPN identity.
- Any new deployments to Azure must be redundant in case an Azure region fails.
- Whenever possible, solutions must be deployed to Azure by using the Standard pricing tier of Azure App Service.
- An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services.
- In the event that a link fails between Azure and the on-premises network, ensure that the virtual machines hosted in Azure can authenticate to Active Directory.
- Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.

Requirements: Database Requirements

Fabrikam identifies the following database requirements:
- Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings.
- To avoid disrupting customer access, database downtime must be minimized when databases are migrated.
- Database backups must be retained for a minimum of seven years to meet compliance requirements.

Requirements: Security Requirements

Fabrikam identifies the following security requirements:
- Company information including policies, templates, and data must be inaccessible to anyone outside the company.
- Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an internet link fails.
- Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials.
- All administrative access to the Azure portal must be secured by using multi-factor authentication (MFA).
- The testing of WebApp1 updates must not be visible to anyone outside the company.

Question

You need to recommend a solution to meet the database retention requirements. What should you recommend?

A. Configure a long-term retention policy for the database.
B. Configure Azure Site Recovery.
C. Use automatic Azure SQL Database backups.
D. Configure geo-replication of the database.

A

Answer: A. Configure a long-term retention policy for the database.

Reasoning:
The requirement is to retain database backups for a minimum of seven years to meet compliance requirements. Azure SQL Database offers a feature called Long-term Retention (LTR) that allows you to store full backups for up to 10 years. This feature is specifically designed to meet compliance and retention requirements, making it the most suitable option for this scenario.

Breakdown of non-selected options:

B. Configure Azure Site Recovery: Azure Site Recovery is primarily used for disaster recovery and business continuity, not for long-term backup retention. It does not address the requirement of retaining backups for seven years.

C. Use automatic Azure SQL Database backups: While Azure SQL Database provides automatic backups, these are typically retained for a shorter period (up to 35 days for point-in-time restore). This does not meet the requirement of retaining backups for seven years.

D. Configure geo-replication of the database: Geo-replication is used for high availability and disaster recovery by replicating databases across different regions. It does not address the need for long-term backup retention for compliance purposes.
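The retention arithmetic behind this choice can be sanity-checked with a short sketch. The duration strings mirror the ISO 8601 format that Azure SQL long-term retention policies use (e.g. `P7Y` for seven years), but `meets_compliance` and the simplified parser are illustrative helpers, not Azure SDK calls.

```python
# Sketch: check whether a backup retention setting covers a 7-year compliance
# window. Only single-component durations like "P35D" or "P7Y" are handled;
# the day counts are approximations sufficient for this comparison.
import re

DAYS = {"D": 1, "W": 7, "M": 30, "Y": 365}  # approximate days per unit

def duration_to_days(iso: str) -> int:
    m = re.fullmatch(r"P(\d+)([DWMY])", iso)
    if not m:
        raise ValueError(f"unsupported duration: {iso}")
    return int(m.group(1)) * DAYS[m.group(2)]

def meets_compliance(retention: str, required_years: int = 7) -> bool:
    return duration_to_days(retention) >= required_years * 365

# Default point-in-time restore tops out at 35 days -- not enough.
print(meets_compliance("P35D"))  # False
# An LTR yearly policy of P7Y satisfies the 7-year requirement.
print(meets_compliance("P7Y"))   # True
```

This is why option C (automatic backups alone) fails: its maximum window is two orders of magnitude short of the requirement.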

120
Q

Note: This question is part of a series that presents the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one solution, while others might not have a solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to deploy multiple instances of an Azure web app across several Azure regions. You need to design an access solution for the app. The solution must meet the following requirements: ✑ Support rate limiting. ✑ Balance requests between all instances. ✑ Ensure that users can access the app in the event of a regional outage. Solution: You use Azure Load Balancer to provide access to the app. Does this meet the goal?
A. Yes
B. No

A

Answer: B. No

Reasoning:
Azure Load Balancer is a Layer 4 (TCP, UDP) load balancer that distributes incoming network traffic across multiple virtual machines or services within a single region. It does not inherently support rate limiting, nor does it provide global load balancing across multiple regions. Therefore, it does not meet the requirements of balancing requests between instances across several regions or ensuring access during a regional outage.

Breakdown of non-selected answer option:
- A. Yes: This option is incorrect because Azure Load Balancer does not support global distribution of traffic across multiple regions, nor does it provide rate limiting. It is primarily used for distributing traffic within a single region, which does not satisfy the requirement of ensuring access during a regional outage.
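To make the missing capability concrete, here is a minimal token-bucket rate limiter of the kind a Layer 7 service (such as Azure Front Door with a WAF rate-limit rule, or API Management) applies per caller; a Layer 4 load balancer has no equivalent. The class and numbers are illustrative, not an Azure API.

```python
# Sketch: token-bucket rate limiting. Each request consumes one token;
# tokens refill at a fixed rate up to a burst capacity.
class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=2, burst=2)
# Three requests at t=0: only the 2-token burst is admitted.
print([bucket.allow(0.0) for _ in range(3)])  # [True, True, False]
# Half a second later one token has been refilled.
print(bucket.allow(0.5))  # True
```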

121
Q

You need to deploy a web application on Azure that requires custom software that cannot be installed on a PaaS offering. The solution must meet the following requirements: The application must be highly available within a single region; able to scale to handle large volumes of user traffic; and use a managed database service. Which Azure services should you use to achieve these requirements?

A. Azure Virtual Machines; Azure Traffic Manager; Azure SQL Database
B. Azure App Service; Azure Load Balancer; Azure SQL Database
C. Azure Kubernetes Service; Azure Load Balancer; Azure Cosmos DB
D. Azure Virtual Machines; Azure Application Gateway; Azure Cosmos DB

A

Answer: A. Azure Virtual Machines; Azure Traffic Manager; Azure SQL Database

Reasoning:
- The requirement specifies that the web application needs custom software that cannot be installed on a PaaS offering. This calls for Infrastructure as a Service (IaaS), which is provided by Azure Virtual Machines.
- Azure Traffic Manager is a DNS-based routing service that can distribute traffic across multiple application endpoints, keeping the application available and able to absorb large volumes of user traffic.
- The requirement for a managed database service is met by Azure SQL Database, which is a fully managed relational database service.

Breakdown of non-selected options:
- B. Azure App Service, Azure Load Balancer, Azure SQL Database: Azure App Service is a PaaS offering, which rules it out because the custom software cannot be installed on PaaS.
- C. Azure Kubernetes Service, Azure Load Balancer, Azure Cosmos DB: AKS can run custom software in containers but adds operational complexity this scenario does not call for, and Azure Cosmos DB is a NoSQL database, which may not be suitable if a relational database is required.
- D. Azure Virtual Machines, Azure Application Gateway, Azure Cosmos DB: The virtual machines satisfy the custom-software requirement, and Application Gateway can load-balance web traffic within a region, but Azure Cosmos DB is a NoSQL database, which may not align with the requirement for a managed relational database service.

122
Q

You are designing an Azure IoT solution that will include 1 million devices. Each device will stream data, including pressure, device ID, and timestamp. Approximately 1 million records will be written every second. The data needs to be visualized in near real-time. You need to recommend services to store and query the data. Which two services can you recommend?
A. Azure Data Factory
B. Azure Event Hubs
C. Azure Cosmos DB for NoSQL
D. Azure SQL Database

A

Answer: B. Azure Event Hubs; C. Azure Cosmos DB for NoSQL

Reasoning:
- Azure Event Hubs is a highly scalable data streaming platform and event ingestion service capable of receiving and processing millions of events per second. It is suitable for handling the high throughput of 1 million records per second from IoT devices.
- Azure Cosmos DB for NoSQL is a globally distributed, multi-model database service that provides low-latency and high-throughput data access. It is well-suited for storing and querying large volumes of data with near real-time performance, making it appropriate for the IoT solution’s requirements.

Breakdown of non-selected options:
- A. Azure Data Factory: This is primarily an ETL (Extract, Transform, Load) service used for data integration and orchestration. It is not designed for real-time data ingestion or querying, making it unsuitable for the requirement of near real-time data visualization.
- D. Azure SQL Database: While it is a relational database service, it may not handle the scale of 1 million records per second efficiently and could introduce latency issues, making it less suitable compared to Cosmos DB for this scenario.
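A rough capacity estimate shows why the ingestion tier matters at this scale. A standard Event Hubs throughput unit (TU) admits roughly 1 MB/s or 1,000 events/s of ingress, whichever limit is reached first; the average record size below is an assumption.

```python
# Sketch: estimate Event Hubs throughput units for a telemetry workload.
# 1 standard TU ~ 1 MB/s ingress or 1,000 events/s, whichever binds first.
import math

def required_tus(events_per_sec: int, avg_event_bytes: int) -> int:
    by_count = events_per_sec / 1_000
    by_bytes = (events_per_sec * avg_event_bytes) / 1_000_000
    return math.ceil(max(by_count, by_bytes))

# 1,000,000 small telemetry records (pressure, device ID, timestamp) per second:
print(required_tus(1_000_000, avg_event_bytes=100))  # 1000
```

An estimate of 1,000 TUs is far beyond the standard tier's quota, pointing toward the premium or dedicated Event Hubs tiers for a workload of this size.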

123
Q

You have an Azure AD tenant with multiple security groups that have assigned memberships. Each group includes several members; including guest users. You need to ensure that all security groups are evaluated every three months to identify any members who no longer require access. What solution should you recommend?
A. Implement Azure AD Privileged Identity Management (PIM).
B. Change the membership type of all security groups to Dynamic User.
C. Create an access review for each security group.
D. Implement Azure AD Identity Protection.

A

Answer: C. Create an access review for each security group.

Reasoning: The requirement is to evaluate security group memberships every three months to identify members who no longer require access. Azure AD Access Reviews are specifically designed for this purpose. They allow administrators to review group memberships and access rights periodically, ensuring that only the necessary users retain access. This aligns perfectly with the requirement to evaluate group memberships quarterly.

Breakdown of non-selected options:
- A. Implement Azure AD Privileged Identity Management (PIM): PIM is primarily used for managing, controlling, and monitoring access within Azure AD, Azure, and other Microsoft Online Services. It focuses on privileged roles rather than regular security group memberships, making it less suitable for the requirement of reviewing all security group memberships.

- B. Change the membership type of all security groups to Dynamic User: Dynamic membership automatically updates group membership based on user attributes. While this can help maintain group membership based on predefined rules, it does not provide a mechanism for periodic reviews to identify users who no longer need access.
- D. Implement Azure AD Identity Protection: Identity Protection is focused on detecting and responding to identity-based risks and threats. It does not provide functionality for reviewing and managing group memberships periodically, making it unsuitable for the requirement.
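The three-month cadence itself is easy to visualize; the stdlib-only helper below generates the quarterly due dates an access review recurrence would produce. It illustrates the schedule only and is not an Azure AD API.

```python
# Sketch: quarterly review schedule (every 3 months from a start date).
from datetime import date

def add_months(d: date, months: int) -> date:
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(d.day, 28))  # clamp day for short months

def review_dates(start: date, occurrences: int, every_months: int = 3):
    return [add_months(start, i * every_months) for i in range(occurrences)]

for due in review_dates(date(2024, 1, 15), occurrences=4):
    print(due)  # 2024-01-15, 2024-04-15, 2024-07-15, 2024-10-15
```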

124
Q

You are designing an application that requires low-latency reads and writes with strong consistency. The application also needs to scale horizontally as demand increases. Which database solution should you recommend?
A. Azure Cosmos DB for NoSQL
B. Azure SQL Database with active geo-replication
C. Azure Database for MySQL
D. Azure Database for PostgreSQL

A

Answer: A. Azure Cosmos DB for NoSQL

Reasoning:
Azure Cosmos DB is designed to provide low-latency reads and writes with strong consistency, which aligns perfectly with the requirements of the application. It also supports horizontal scaling, making it a suitable choice for applications that need to scale as demand increases. Cosmos DB offers multiple consistency models, including strong consistency, and is optimized for low-latency operations globally.

Breakdown of non-selected options:
B. Azure SQL Database with active geo-replication: While Azure SQL Database can provide strong consistency within a single region, active geo-replication is typically used for high availability and disaster recovery rather than low-latency operations. It may introduce latency due to data replication across regions, which does not align with the low-latency requirement.

C. Azure Database for MySQL: This option is a managed relational database service that may not provide the same level of low-latency performance and strong consistency as Cosmos DB. It is also not inherently designed for horizontal scaling in the same way Cosmos DB is.

D. Azure Database for PostgreSQL: Similar to Azure Database for MySQL, this is a managed relational database service. While it can scale vertically, it does not natively support horizontal scaling and may not meet the low-latency and strong consistency requirements as effectively as Cosmos DB.

125
Q

You have 100 servers running Windows Server 2016 that host Microsoft SQL Server 2017 instances. These instances contain databases with the following characteristics: ✑ Stored procedures are implemented using CLR. ✑ The largest database is currently 6 TB, and none of the databases will exceed 8 TB. You plan to migrate all the data from SQL Server to Azure. You need to recommend a service to host the databases. The solution must meet these requirements: ✑ Minimize downtime during the migration process. ✑ Ensure the databases are accessible from any geographical location. ✑ Support automatic scaling of resources to handle varying workloads. What should you recommend?

A. Azure SQL Managed Instance
B. Azure SQL Database single databases
C. Azure SQL Database Hyperscale
D. Azure Virtual Machines running SQL Server

A

Answer: A. Azure SQL Managed Instance

Reasoning:
Azure SQL Managed Instance is the only PaaS option listed that supports CLR stored procedures, which the existing databases depend on. It provides near-complete compatibility with on-premises SQL Server, accommodates the database sizes described (up to 8 TB), and supports online migration with minimal downtime through the Azure Database Migration Service. As a managed service reachable over a public or private endpoint, it is accessible from any geographical location, and its compute resources can be scaled to handle varying workloads.

Breakdown of non-selected options:

B. Azure SQL Database single databases:
- Single databases do not support CLR, so the existing stored procedures could not be migrated without being rewritten. This rules the option out regardless of its other strengths.

C. Azure SQL Database Hyperscale:
- Hyperscale scales storage up to 100 TB, but it is a variant of Azure SQL Database and therefore also lacks CLR support, which disqualifies it for these workloads.

D. Azure Virtual Machines running SQL Server:
- SQL Server on Azure VMs supports CLR and offers full control, but as IaaS it does not inherently provide automatic scaling or minimize migration downtime as effectively as a PaaS option, and it carries significantly more management overhead.

126
Q

You are designing an Azure solution for a company that requires a highly secure and scalable identity and access management platform with the following requirements: The platform must support multi-factor authentication and conditional access policies. It must also scale elastically to accommodate a growing number of users and applications. The solution must maximize security while meeting these requirements. Which Azure service should you use?
A. Azure Active Directory Domain Services (AD DS)
B. Azure Active Directory B2C
C. Azure Active Directory (AD)
D. Azure Multi-Factor Authentication (MFA)

A

Answer: C. Azure Active Directory (AD)

Reasoning: Azure Active Directory (AD) is the most suitable service for the given requirements. It provides a highly secure and scalable identity and access management platform. Azure AD supports multi-factor authentication and conditional access policies, which are explicitly required in the question. Additionally, Azure AD scales elastically to accommodate a growing number of users and applications, making it ideal for the company’s needs, and it is designed to maximize security.

Breakdown of non-selected options:
- A. Azure Active Directory Domain Services (AD DS): This service provides managed domain services such as domain join, group policy, and LDAP, but it does not inherently provide multi-factor authentication or conditional access policies, which are key requirements in the question.
- B. Azure Active Directory B2C: While Azure AD B2C is designed for customer identity and access management, it is more focused on consumer-facing applications rather than internal enterprise identity management. It may not fully meet the requirements for multi-factor authentication and conditional access policies in the same way Azure AD does.
- D. Azure Multi-Factor Authentication (MFA): This service specifically provides multi-factor authentication capabilities but does not cover the broader identity and access management needs, such as conditional access policies and elastic scalability, which are required in the question.

127
Q

You are planning an Azure IoT Hub solution that will include 500,000 IoT devices. Each device will stream data, including images, device IDs, and timestamps. Approximately 500,000 records will be written every second. The data needs to be visualized in real-time. You need to recommend services to store and query the data. Which two services can you recommend? Each answer presents a complete solution.

A. Azure Table Storage
B. Azure Stream Analytics
C. Azure Cosmos DB for NoSQL
D. Azure Event Hubs

A

Answer: B. Azure Stream Analytics
Answer: C. Azure Cosmos DB for NoSQL

Reasoning:
The question requires a solution for storing and querying a high volume of data from IoT devices in real-time. The solution must handle 500,000 records per second and support real-time visualization.

- Azure Stream Analytics is suitable for real-time data processing and analytics. It can process large volumes of data with low latency, making it ideal for real-time visualization needs.
- Azure Cosmos DB for NoSQL is a globally distributed, multi-model database service that can handle massive amounts of data with high throughput and low latency. It is well-suited for storing IoT data, including images, device IDs, and timestamps, and supports querying the data efficiently.

Breakdown of non-selected options:

- A. Azure Table Storage: While it is a scalable storage solution, it is not optimized for real-time analytics or high throughput scenarios like the one described. It is more suitable for storing large amounts of structured data with a simple key/attribute store design.
- D. Azure Event Hubs: This is a big data streaming platform and event ingestion service, capable of receiving and processing millions of events per second. However, it is primarily used for ingesting and buffering data rather than storing and querying it. It would be used in conjunction with other services like Azure Stream Analytics for processing the data.

128
Q

You have an app named App1 that uses an on-premises PostgreSQL database called DB1. You plan to migrate DB1 to an Azure Database for PostgreSQL. You need to enable customer-managed Transparent Data Encryption (TDE) for the database, ensuring maximum encryption strength. Which encryption algorithm and key length should you use for the TDE protector?

A. AES 256
B. RSA 3072
C. AES 128
D. RSA 2048

A

Answer: B. RSA 3072

Reasoning:
With customer-managed data encryption, the TDE protector is a key that you supply in Azure Key Vault, and it is used to wrap (encrypt) the symmetric data encryption key rather than to encrypt the data itself. Azure Key Vault customer-managed keys used for this purpose must be asymmetric RSA keys, so only the RSA options are valid choices, and RSA 3072 offers the longer key length. It therefore provides the maximum encryption strength among the supported options.

Breakdown of non-selected options:
- A. AES 256: AES is the symmetric algorithm used for the underlying data encryption key, not for the customer-managed protector. Key Vault keys used as TDE protectors must be RSA, so AES 256 is not a valid choice here.
- C. AES 128: The same issue as AES 256 (a symmetric algorithm, not a valid protector key type), and it is also the weakest key length listed.
- D. RSA 2048: A valid protector key type, but its 2048-bit key length is weaker than RSA 3072, so it does not maximize encryption strength.

129
Q

You need to design an Azure SQL database that meets the following requirements: ✑ Supports in-memory OLTP for fast performance. ✑ Has automatic tuning for query optimization. Which deployment option should you choose?
A. Azure SQL Managed Instance Business Critical
B. Azure SQL Database Premium
C. Azure SQL Database Basic
D. Azure SQL Database General Purpose with In-Memory OLTP

A

Answer: B. Azure SQL Database Premium

Reasoning:
To meet the requirements of supporting in-memory OLTP for fast performance and having automatic tuning for query optimization, Azure SQL Database Premium is the most suitable option. The Premium tier of Azure SQL Database supports in-memory OLTP, which is essential for high-performance transactional processing. Additionally, Azure SQL Database offers automatic tuning features, which are available across different service tiers, including Premium.

Breakdown of non-selected options:
- A. Azure SQL Managed Instance Business Critical: While this option supports in-memory OLTP and automatic tuning, it is more suited for scenarios requiring a fully managed SQL Server instance with high compatibility for SQL Server features. The question specifically asks for an Azure SQL Database, not a Managed Instance.
- C. Azure SQL Database Basic: This tier does not support in-memory OLTP, which is a critical requirement in the question. It is designed for small databases with less demanding performance needs.
- D. Azure SQL Database General Purpose with In-Memory OLTP: The General Purpose tier does not support in-memory OLTP despite the option's wording. This feature is available in the Premium tier, making the General Purpose option unsuitable for the requirements.
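As a sketch of what the Premium tier's In-Memory OLTP feature looks like in practice, the T-SQL below creates a memory-optimized table; the table and column names are invented for illustration.

```sql
-- Illustrative only: a memory-optimized table, available in the Premium
-- (and Business Critical) tiers. Memory-optimized tables require a
-- nonclustered primary key.
CREATE TABLE dbo.OrderEvents
(
    OrderId   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    EventTime DATETIME2 NOT NULL,
    Payload   NVARCHAR(4000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Attempting this DDL on a Basic or General Purpose database fails, which is the practical form of the tier restriction described above.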

130
Q

You need to deploy a web application with high-performance computing capabilities in an Azure subscription. The solution must meet the following requirements: The web application must have access to the full .NET framework and be hosted in a virtual machine (VM). Which Azure service should you use?
A. Azure Virtual Machines
B. Azure Batch
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)

A

Answer: A. Azure Virtual Machines

Reasoning: The requirement is to deploy a web application with high-performance computing capabilities that must have access to the full .NET framework and be hosted in a virtual machine. Azure Virtual Machines (VMs) provide the flexibility to run a wide range of computing solutions, including those that require the full .NET framework. VMs allow for full control over the operating system and installed software, making them suitable for applications that need specific configurations or dependencies, such as the full .NET framework.

Breakdown of non-selected options:
- B. Azure Batch: Azure Batch is designed for running large-scale parallel and high-performance computing (HPC) applications efficiently in the cloud. However, it is not specifically designed for hosting web applications with the full .NET framework in a VM.
- C. Azure Kubernetes Service (AKS): AKS is a managed Kubernetes service for running containerized applications. While it can be used for high-performance computing, it is not specifically designed for hosting applications that require the full .NET framework in a VM.
- D. Azure Container Instances (ACI): ACI is used for running containers without managing servers. It is not suitable for applications that require the full .NET framework in a VM, as it is more suited for lightweight, stateless applications.

131
Q

Your company, Contoso, Ltd., has implemented several Azure virtual machines that provide access to an on-premises web service. Contoso has established a partnership with another company, Fabrikam, Inc. Fabrikam does not have an existing Azure Active Directory (Azure AD) tenant and uses a third-party OAuth 2.0 identity management system to authenticate its users. Developers at Fabrikam plan to use a subset of the virtual machines to build applications that will integrate with Contoso’s on-premises web service. You need to design a solution to provide Fabrikam developers with access to the virtual machines. The solution must meet the following requirements: ✑ Requests to the virtual machines from the developers must be limited to lower rates than the requests from Contoso users. ✑ Developers must be able to use their existing OAuth 2.0 provider to access the virtual machines. ✑ The solution must NOT require changes to the virtual machines. ✑ The solution must NOT use Azure AD guest accounts. What should you include in the solution?

A. Azure AD Application Proxy
B. Azure API Management
C. Azure Front Door
D. Azure AD Business-to-Business (B2B)

A

Answer: B. Azure API Management

Reasoning:
- Azure API Management is a suitable solution because it allows you to manage and control access to APIs, which can include rate limiting based on different user groups. This aligns with the requirement to limit requests from Fabrikam developers to lower rates than Contoso users.
- Azure API Management can integrate with third-party OAuth 2.0 identity providers, allowing Fabrikam developers to use their existing authentication system without requiring changes to the virtual machines or using Azure AD guest accounts.

Breakdown of non-selected options:
- A. Azure AD Application Proxy: This option is primarily used to provide secure remote access to on-premises applications and does not inherently support rate limiting or integration with third-party OAuth 2.0 providers without Azure AD.
- C. Azure Front Door: While Azure Front Door can provide global load balancing and routing, it does not directly support rate limiting based on user groups or integrate with third-party OAuth 2.0 providers for authentication.
- D. Azure AD Business-to-Business (B2B): This option involves inviting external users as guest accounts in Azure AD, which contradicts the requirement to not use Azure AD guest accounts.
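As a hedged sketch of how API Management could satisfy both requirements, the inbound policy fragment below validates tokens issued by the partner's OAuth 2.0 provider and applies a per-caller rate limit. The metadata URL, header choice, and limit values are placeholders, not details from the scenario.

```xml
<!-- Illustrative APIM inbound policy fragment (URL and limits are placeholders). -->
<inbound>
    <!-- Accept only tokens from the partner's existing OAuth 2.0 provider. -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
        <openid-config url="https://idp.fabrikam.example/.well-known/openid-configuration" />
    </validate-jwt>
    <!-- Throttle each caller (keyed by their bearer token) to a lower rate. -->
    <rate-limit-by-key calls="100" renewal-period="60"
                       counter-key="@(context.Request.Headers.GetValueOrDefault(&quot;Authorization&quot;, &quot;&quot;))" />
</inbound>
```

Because the policy runs at the gateway, no change is needed on the virtual machines themselves, matching the scenario's constraints.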

132
Q

Your company has deployed several virtual machines in Azure, and you want to monitor their performance and diagnose issues. You plan to use Azure Monitor to collect and analyze metrics and logs. Does this approach meet your goal?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: Azure Monitor is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on. Since the goal is to monitor the performance of virtual machines and diagnose issues, using Azure Monitor is indeed a suitable approach.

Breakdown of non-selected answer option:
- B. No: This option is not suitable because Azure Monitor is specifically designed to meet the requirements of monitoring performance and diagnosing issues in Azure environments, including virtual machines. Therefore, the statement that using Azure Monitor does not meet the goal is incorrect.

133
Q

You have an Azure subscription. You need to recommend a solution that allows developers to provision Azure virtual machines. The solution must meet the following requirements: ✑ Only permit the creation of virtual machines in specific regions. ✑ Only permit the creation of specific sizes of virtual machines. What should you include in the recommendation?
A. Attribute-based access control (ABAC)
B. Azure Policy
C. Conditional Access policies
D. Role-based access control (RBAC)

A

Answer: B. Azure Policy

Reasoning: Azure Policy is the most suitable solution for this scenario because it allows you to enforce specific rules and effects over your resources, ensuring that they comply with your corporate standards and service level agreements. In this case, Azure Policy can be used to restrict the creation of virtual machines to specific regions and specific sizes, which directly addresses the requirements stated in the question.

Breakdown of non-selected options:

A. Attribute-based access control (ABAC): ABAC is used to provide fine-grained access control based on user attributes, resource attributes, and environment attributes. It is not specifically designed to enforce policies on resource creation like restricting regions or VM sizes.

C. Conditional Access policies: These are primarily used to control access to Azure resources based on conditions such as user location, device state, or application. They do not provide the capability to enforce restrictions on the creation of specific Azure resources.

D. Role-based access control (RBAC): RBAC manages who has access to Azure resources and what they can do with those resources. While RBAC can restrict actions based on roles, it cannot enforce specific conditions on resource creation, such as limiting regions or VM sizes.
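The deny rule described above can be sketched as a policy definition. Here it is modeled as a Python dict plus a toy evaluator; the alias names mirror the built-in "Allowed locations" and "Allowed virtual machine size SKUs" policies, but the exact alias spellings, regions, and SKU lists are illustrative assumptions to verify against the Azure Policy reference.

```python
# Sketch of an Azure Policy rule (as a Python dict) that denies VM creation
# outside approved regions or sizes. Region and SKU lists are illustrative.
ALLOWED_LOCATIONS = ["eastus", "westeurope"]
ALLOWED_VM_SIZES = ["Standard_D2s_v3", "Standard_D4s_v3"]

policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {"anyOf": [
                {"field": "location", "notIn": ALLOWED_LOCATIONS},
                # Assumed alias; mirrors the built-in VM-size policy.
                {"field": "Microsoft.Compute/virtualMachines/sku.name",
                 "notIn": ALLOWED_VM_SIZES},
            ]},
        ]
    },
    "then": {"effect": "deny"},
}

def is_denied(resource: dict) -> bool:
    """Toy evaluator for this one rule shape (not a general policy engine)."""
    if resource.get("type") != "Microsoft.Compute/virtualMachines":
        return False  # rule only targets virtual machines
    return (resource.get("location") not in ALLOWED_LOCATIONS
            or resource.get("sku") not in ALLOWED_VM_SIZES)

print(is_denied({"type": "Microsoft.Compute/virtualMachines",
                 "location": "brazilsouth", "sku": "Standard_D2s_v3"}))  # True
```

Note how the policy evaluates at deployment time regardless of the caller's role, which is why it fits this requirement where RBAC does not.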

134
Q

You are tasked with designing a highly available Azure SQL database that must meet the following requirements: ✑ Support read scale-out to reduce query latency. ✑ Minimize costs. Which deployment option should you choose?
A. Azure SQL Managed Instance Business Critical
B. Azure SQL Database Premium
C. Azure SQL Database Basic
D. Azure SQL Database Hyperscale with read replicas

A

Answer: D. Azure SQL Database Hyperscale with read replicas

Reasoning:
- The requirement is to design a highly available Azure SQL database that supports read scale-out to reduce query latency and minimizes costs.
- Azure SQL Database Hyperscale with read replicas is designed to support read scale-out by allowing multiple read replicas, which can handle read queries and reduce latency. This makes it suitable for the read scale-out requirement.
- Hyperscale is also cost-effective for scenarios that require high scalability and read replicas, as it allows flexible scaling of compute and storage.

Breakdown of non-selected options:
- A. Azure SQL Managed Instance Business Critical: While this option supports high availability and read scale-out, it is generally more expensive than Hyperscale, especially when multiple read replicas are needed. It therefore does not minimize costs as effectively as Hyperscale.
- B. Azure SQL Database Premium: This option provides high performance and availability but cannot scale reads beyond its built-in secondary replica. It is also typically more costly than Hyperscale for read scale-out scenarios.
- C. Azure SQL Database Basic: This option is the least expensive but supports neither read scale-out nor the high-availability features required by the question, so it is not suitable.

135
Q

You need to deploy a web application with high-performance computing capabilities in an Azure subscription. The solution must meet these requirements: the web application must have access to the full .NET Framework, be hosted in a virtual machine (VM) that supports high-performance computing, and provide automatic scaling based on the workload. Which Azure service should you use?
A. Azure Virtual Machines
B. Azure Batch
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)

A

Answer: A. Azure Virtual Machines

Reasoning: The requirements specify that the web application must have access to the full .NET Framework, be hosted in a VM that supports high-performance computing, and provide automatic scaling based on the workload. Azure Virtual Machines are suitable because they can run the full .NET Framework, support high-performance computing with specialized VM sizes (such as the H-series), and can scale automatically when deployed with Azure VM Scale Sets.

Breakdown of non-selected options:
- B. Azure Batch: While Azure Batch is designed for running large-scale parallel and high-performance computing applications, it is not intended for hosting web applications that require the full .NET Framework. It is better suited to batch-processing jobs.
- C. Azure Kubernetes Service (AKS): AKS is a managed Kubernetes service that provides automatic scaling and orchestration for containerized applications. However, it is geared toward microservices and containers rather than applications requiring the full .NET Framework on a VM.
- D. Azure Container Instances (ACI): ACI runs containers without server management, but it does not provide full .NET Framework support on a VM. It is better suited to lightweight, stateless workloads than to high-performance computing scenarios.

136
Q

You need to design an Azure SQL database that meets the following requirements: ✑ Supports up to 100 TB of data. ✑ Provides fast and predictable performance. ✑ Supports automatic scaling. Which deployment option should you choose?
A. Azure SQL Managed Instance Business Critical
B. Azure SQL Database Premium
C. Azure SQL Database Basic
D. Azure SQL Database Hyperscale

A

Answer: D. Azure SQL Database Hyperscale

Reasoning:
Azure SQL Database Hyperscale is designed to support very large databases, up to 100 TB, which meets the size requirement. Its architecture separates compute and storage, providing fast, predictable performance and rapid, efficient scaling. Hyperscale also supports automatic scaling, which satisfies the final requirement.

Breakdown of non-selected options:
A. Azure SQL Managed Instance Business Critical - While this option provides high availability and performance, it does not support databases up to 100 TB. Managed Instances are generally limited to smaller sizes than Hyperscale.

B. Azure SQL Database Premium - This option offers high performance and availability but is not designed to handle databases up to 100 TB. The Premium tier suits smaller databases that require high performance.

C. Azure SQL Database Basic - This option is intended for small databases with minimal performance requirements. It supports neither the large size (up to 100 TB) nor the performance and scaling needs specified in the question.

137
Q

You have an Azure subscription with two applications, App1 and App2. App1 generates messages that are sent to an Azure Event Grid topic, and App2 processes these messages. You need to ensure that App2 can scale to handle a high volume of messages. What should you recommend?
A. An Azure Service Bus queue.
B. An Azure Storage account queue.
C. An Azure Event Grid subscription.
D. An Azure Stream Analytics job.

A

Answer: C. An Azure Event Grid subscription.

Reasoning:
To ensure that App2 can scale to handle a high volume of messages from App1, which sends messages to an Azure Event Grid topic, the most suitable solution is an Azure Event Grid subscription. This allows App2 to subscribe to the topic and process messages as they arrive. Azure Event Grid is designed for high throughput and scales automatically with message volume, making it an ideal choice for this scenario.

Breakdown of non-selected options:
A. An Azure Service Bus queue: While Azure Service Bus is a reliable messaging service that can handle high volumes, it is not directly related to scaling App2 in the context of Event Grid messages and would require additional setup to integrate with Event Grid.

B. An Azure Storage account queue: Azure Storage queues are simple and cost-effective but are not as scalable or feature-rich as Event Grid for handling high volumes of messages. They also require additional integration with Event Grid.

D. An Azure Stream Analytics job: Azure Stream Analytics is used for real-time data processing and analytics, not for scaling message-processing applications. It is not directly relevant to ensuring App2 can handle a high volume of messages from Event Grid.

138
Q

Your company has deployed several virtual machines both on-premises and in Azure. ExpressRoute is set up and configured for connectivity between on-premises and Azure. Some virtual machines are experiencing network connectivity issues. You need to analyze the network traffic to determine if packets are being allowed or denied to these virtual machines. Solution: Use Azure Monitor to analyze the network traffic. Does this meet the goal?
A. Yes
B. No

A

Answer: B. No

Reasoning: Azure Monitor is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. However, it is not specifically designed to analyze network traffic and determine whether packets are being allowed or denied. For network traffic analysis, tools such as Network Watcher, which includes features like IP flow verify and Network Security Group (NSG) flow logs, are more appropriate.

Breakdown of non-selected answer options:
- A. Yes: This option is incorrect because Azure Monitor, while useful for monitoring and diagnostics, does not provide the tools needed to analyze network traffic at the packet level to determine whether packets are being allowed or denied. Network Watcher is the more appropriate tool for this task.

139
Q

You are designing an Azure solution that requires a database. The database must support SQL commands, be highly available, and be cost-effective. Which Azure database service should you recommend?
A. Azure SQL Managed Instance
B. Azure SQL Database
C. Azure Database for MySQL

A

Answer: B. Azure SQL Database

Reasoning:
Azure SQL Database is a fully managed relational database service that supports SQL commands and is designed for high availability and cost-effectiveness. As a Platform as a Service (PaaS) offering, it provides built-in high availability, automated backups, and scaling options, making it a suitable choice for applications that require SQL support and high availability while remaining cost-effective.

Breakdown of non-selected options:
- A. Azure SQL Managed Instance: While Azure SQL Managed Instance also supports SQL commands and provides high availability, it is generally more expensive than Azure SQL Database. It is designed for scenarios that need full SQL Server compatibility and features, which may not be necessary, especially when cost-effectiveness is a priority.

- C. Azure Database for MySQL: This option runs MySQL rather than SQL Server, so it does not natively support Transact-SQL commands. While it can be highly available and cost-effective, it does not meet the SQL requirement as intended by the question.

140
Q

You are planning to migrate a large-scale Oracle database to Azure. You need to recommend a solution for the Azure SQL Database configuration that meets the following requirements: the database must support high-performance OLTP workloads, and the solution must provide automatic failover to a secondary region in case of a disaster. Which service tier should you recommend for the Azure SQL Database?

A. General Purpose
B. Business Critical
C. Hyperscale

A

Answer: B. Business Critical

Reasoning: The Business Critical service tier is designed for high-performance OLTP workloads and provides features such as in-memory OLTP and built-in high availability; paired with auto-failover groups, it supports automatic failover to a secondary region. It is suitable where performance and resilience are critical, such as migrating a large Oracle OLTP workload with disaster-recovery requirements.

Breakdown of non-selected options:
- A. General Purpose: This tier is more cost-effective and suitable for general workloads, but it does not offer the performance features of Business Critical, such as in-memory OLTP and low-latency local SSD storage, which this high-performance OLTP scenario requires.
- C. Hyperscale: This tier is designed for very large databases and provides scalability benefits, but it is not specifically optimized for high-performance OLTP workloads, and cross-region failover still requires additional configuration.

141
Q

Your company has recently deployed a new virtual network on Azure, and you need to ensure that network traffic is properly secured. You want to use Azure Network Watcher to identify potential security vulnerabilities in the virtual network. Which feature of Azure Network Watcher should you use to achieve this goal?
A. IP flow verify
B. Connection monitor
C. Network performance monitor
D. Traffic analytics

A

Answer: D. Traffic analytics

Reasoning: To identify potential security vulnerabilities in a virtual network, you need a tool that can analyze network traffic and surface security issues. Azure Network Watcher’s Traffic Analytics is designed for exactly this: it provides insights into traffic patterns, identifies security threats, and helps you understand the network’s performance and security posture.

Breakdown of non-selected options:
- A. IP flow verify: This feature checks whether a packet is allowed or denied to or from a virtual machine, which is useful for troubleshooting connectivity issues but not for identifying security vulnerabilities across the network.
- B. Connection monitor: This feature is used to monitor the connectivity between virtual machines and other network resources. It helps in ensuring that connections are established and maintained but does not focus on identifying security vulnerabilities.
- C. Network performance monitor: This feature monitors network performance metrics such as latency and packet loss, but it does not provide insights into security vulnerabilities.

142
Q

You need to store 50 TB of data in Azure and ensure it’s readily accessible. Which storage solution should you implement to minimize costs?
A. Azure Blob Storage with the Hot access tier
B. Azure File Storage with the Premium performance tier
C. Azure Disk Storage with the Ultra disk type
D. Azure Table Storage with the Hot access tier

A

Answer: A. Azure Blob Storage with the Hot access tier

Reasoning:
Azure Blob Storage is designed for storing large amounts of unstructured data, such as text or binary data, making it suitable for 50 TB of data. The Hot access tier is optimized for frequently accessed data, which matches the requirement that the data be readily accessible. Blob Storage is also generally the most cost-effective of these options for large volumes of data.
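To make the cost argument concrete, here is a back-of-envelope capacity calculation; the per-GB rate below is an assumed placeholder for illustration, not a current Azure price (actual hot-tier pricing varies by region and redundancy and should be checked against the pricing page).

```python
# Rough monthly capacity cost for 50 TB in the hot tier.
tb = 50
gb = tb * 1024                   # 51,200 GB
price_per_gb_month = 0.018       # assumed placeholder rate, USD per GB-month
monthly_cost = gb * price_per_gb_month
print(round(monthly_cost, 2))    # 921.6
```

Premium file shares or Ultra disks at the same capacity would cost an order of magnitude more per GB, which is why they fail the "minimize costs" requirement.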

Breakdown of non-selected options:
- B. Azure File Storage with the Premium performance tier: Azure File Storage is typically used when you need a fully managed file share in the cloud accessible via the SMB protocol. The Premium tier is designed for high-performance needs, which are unnecessary here and would increase costs.

- C. Azure Disk Storage with the Ultra disk type: Azure Disk Storage is primarily used for virtual machine disks and is not cost-effective for storing large amounts of unstructured data. Ultra disks target high-performance workloads, which would significantly increase costs.
- D. Azure Table Storage with the Hot access tier: Azure Table Storage is a NoSQL store optimized for structured data, not for 50 TB of unstructured data. It also has no access-tier concept like Blob Storage, making it an incorrect choice for this scenario.

143
Q

You have an Azure subscription with a web app. How can you restrict access to the web app to a specific set of IP addresses?
A. Azure AD Privileged Identity Management
B. Azure AD Conditional Access
C. Azure AD Connect Health
D. Azure App Service Environment (ASE)

A

Answer: D. Azure App Service Environment (ASE)

Reasoning: To restrict access to a web app to a specific set of IP addresses, you need a solution that allows network-level access control. Azure App Service Environment (ASE) is a premium App Service offering that provides a fully isolated, dedicated environment for securely running App Service apps at high scale. ASE lets you configure network security features, such as IP restrictions, to control access to your web app.

Breakdown of non-selected options:
- A. Azure AD Privileged Identity Management: This service is used to manage, control, and monitor access within Azure AD, Azure, and other Microsoft Online Services. It is not used for restricting access to web apps by IP address.
- B. Azure AD Conditional Access: This feature enforces access controls based on user identity and device compliance, not IP-based restrictions on web apps.
- C. Azure AD Connect Health: This tool is used to monitor the health of your on-premises identity infrastructure and Azure AD Connect sync. It does not provide functionality for restricting access to web apps based on IP addresses.
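The IP-restriction behavior described above can be sketched against the App Service site-config `ipSecurityRestrictions` shape. The rule names, CIDR ranges, and the toy evaluator below are illustrative assumptions, not the actual App Service implementation; they just show the first-match-by-priority semantics.

```python
import ipaddress

# Sketch of App Service access-restriction rules (site-config
# "ipSecurityRestrictions" shape); names and ranges are illustrative.
ip_security_restrictions = [
    {"ipAddress": "203.0.113.0/24", "action": "Allow", "priority": 100,
     "name": "corp-office"},
    {"ipAddress": "Any", "action": "Deny", "priority": 2147483647,
     "name": "deny-all"},
]

def is_allowed(client_ip: str) -> bool:
    """Evaluate rules in ascending priority order; first match wins."""
    ip = ipaddress.ip_address(client_ip)
    for rule in sorted(ip_security_restrictions, key=lambda r: r["priority"]):
        if rule["ipAddress"] == "Any" or ip in ipaddress.ip_network(rule["ipAddress"]):
            return rule["action"] == "Allow"
    return False  # no rule matched

print(is_allowed("203.0.113.42"))   # True
print(is_allowed("198.51.100.7"))   # False
```

The trailing "deny all" rule at the lowest priority is what flips the app from open-by-default to allowlist-only.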

144
Q

You have an Azure Active Directory (Azure AD) tenant that synchronizes with an on-premises Active Directory domain. You need to ensure that users can reset their own passwords and manage their own authentication methods. What should you do?
A. Implement Azure AD Privileged Identity Management (PIM).
B. Implement self-service password reset.
C. Configure Azure AD Identity Protection.
D. Implement Azure AD Connect Health.

A

Answer: B. Implement self-service password reset.

Reasoning: The requirement is to allow users to reset their own passwords and manage their own authentication methods. Azure AD self-service password reset (SSPR) is designed precisely for this: it enables users to reset their passwords without administrator intervention and to manage their authentication methods, such as the phone numbers or email addresses used for verification.

Breakdown of non-selected options:
- A. Implement Azure AD Privileged Identity Management (PIM): PIM is used to manage, control, and monitor access within Azure AD, Azure, and other Microsoft Online Services. It is not related to self-service password reset or user-managed authentication methods.
- C. Configure Azure AD Identity Protection: This service is used to identify potential vulnerabilities affecting an organization’s identities and to configure automated responses to detected suspicious actions. It does not provide self-service password reset capabilities.
- D. Implement Azure AD Connect Health: This tool is used to monitor the health of your on-premises identity infrastructure and Azure AD Connect sync. It does not provide functionality for users to reset their passwords or manage authentication methods.

145
Q

You are designing an Azure IoT solution that will include 500,000 devices. Each device will stream data, including speed, device ID, and timestamp. Approximately 500,000 records will be written every second. The data needs to be visualized in near real-time. You need to recommend services to store and query the data. Which two services can you recommend?
A. Azure Stream Analytics
B. Azure Data Lake Storage Gen2
C. Azure Cosmos DB for NoSQL
D. Azure SQL Database

A

Answer: A. Azure Stream Analytics
Answer: C. Azure Cosmos DB for NoSQL

Reasoning:
The question requires a solution for storing and querying high-velocity data from 500,000 devices in near real-time. The solution must handle approximately 500,000 records per second and support near real-time visualization.

  • Azure Stream Analytics is suited to real-time data processing and analytics. It can ingest data from IoT devices, process it in real time, and output it to various storage solutions or visualization tools, which covers the near real-time visualization requirement.
  • Azure Cosmos DB for NoSQL is a globally distributed, multi-model database service that provides high throughput and low latency. It can scale to the ingestion rate of 500,000 records per second and supports efficient querying, making it suitable for storing and querying the IoT data.
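As a rough sizing check for the Cosmos DB ingestion path, a back-of-envelope Request Unit (RU) calculation might look like this; the 5 RU-per-write figure is an assumption for a small (~1 KB) item, and real costs depend on item size and indexing policy.

```python
# Back-of-envelope RU/s sizing for the IoT write workload.
writes_per_second = 500_000
ru_per_write = 5  # assumed cost for a ~1 KB point write; varies with indexing
required_ru_per_second = writes_per_second * ru_per_write
print(required_ru_per_second)  # 2500000
```

A throughput on this order is feasible in Cosmos DB because RU/s can be distributed across many physical partitions, which is exactly the scale-out property the answer relies on.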

Breakdown of non-selected options:

  • B. Azure Data Lake Storage Gen2: While it is excellent for storing large volumes of data, it is optimized for batch processing and analytics rather than real-time querying and visualization, so it is not the best choice for near real-time visualization.
  • D. Azure SQL Database: Although it can handle large volumes of data, it would struggle with an ingestion rate of 500,000 records per second and is not optimized for real-time analytics compared with Cosmos DB and Stream Analytics.

146
Q

You have a large dataset stored in an on-premises HDFS cluster that you want to analyze using Apache Spark. You plan to migrate the dataset to Azure and run Spark jobs in the cloud. Which Azure service should you use?

A. Azure HDInsight
B. Azure Synapse Analytics
C. Azure Databricks
D. Azure Stream Analytics

A

Answer: C. Azure Databricks

Reasoning: Azure Databricks is a fast, collaborative, Apache Spark-based analytics platform optimized for Azure. It is designed for large-scale data processing and is well suited to running Apache Spark jobs. Given the requirement to analyze a large dataset with Spark after migrating it to Azure, Azure Databricks is the most suitable choice because of its seamless integration with Azure services, ease of use, and performance optimizations for Spark workloads.

Breakdown of non-selected options:

A. Azure HDInsight: While Azure HDInsight can run Apache Spark, it is a more complex and less integrated option than Azure Databricks, which offers a more streamlined, user-friendly experience for Spark workloads.

B. Azure Synapse Analytics: Azure Synapse Analytics is a powerful analytics service that integrates big data and data warehousing. However, it is not optimized for Apache Spark workloads to the degree that Azure Databricks is, which provides a more focused and efficient environment for running Spark jobs.

D. Azure Stream Analytics: Azure Stream Analytics is designed for real-time data stream processing and is not suitable for batch analysis of large datasets with Apache Spark, so it does not fit this scenario.

147
Q

You have an Azure subscription containing three applications named App1, App2, and App3. Each application generates messages added to an Azure Event Grid topic. You need to ensure that messages generated by each application are processed only by the corresponding application. What should you recommend?
A. Multiple Azure Event Grid subscriptions.
B. One Azure Service Bus queue.
C. One Azure Service Bus topic.
D. Multiple Azure Event Hubs.

A

Answer: A. Multiple Azure Event Grid subscriptions.

Reasoning:
To ensure that each application processes only its own messages, you need a mechanism for filtering and routing messages to specific endpoints. Azure Event Grid lets you create multiple subscriptions on a single topic, each with its own filter criteria. You can therefore create one subscription per application (App1, App2, and App3) with filters that deliver only the relevant messages to each application.
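The per-application filtering can be sketched with subject filters, one subscription per app. Event Grid subscriptions support `subjectBeginsWith`/`subjectEndsWith` and advanced filters; the subscription names and subject prefixes below are illustrative assumptions.

```python
# One Event Grid subscription per app, each with a subject-prefix filter
# so only that app's events are delivered. Prefixes are illustrative.
subscriptions = {
    "app1-sub": {"subjectBeginsWith": "/apps/app1/"},
    "app2-sub": {"subjectBeginsWith": "/apps/app2/"},
    "app3-sub": {"subjectBeginsWith": "/apps/app3/"},
}

def deliveries(event_subject: str):
    """Return which subscriptions would receive an event with this subject."""
    return [name for name, f in subscriptions.items()
            if event_subject.startswith(f["subjectBeginsWith"])]

print(deliveries("/apps/app2/orders/123"))  # ['app2-sub']
```

Because filtering happens in Event Grid itself, no consumer ever sees another application's events, which is the isolation the question asks for.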

Breakdown of non-selected options:
- B. One Azure Service Bus queue: A single queue would not separate messages by application. All messages would land in the same queue, with no built-in mechanism to ensure that only the corresponding application processes its messages.

- C. One Azure Service Bus topic: While Service Bus topics support subscriptions and filtering, routing Event Grid messages through a single Service Bus topic would require additional configuration. Event Grid subscriptions handle this scenario directly and more simply.
- D. Multiple Azure Event Hubs: Event Hubs are designed for high-throughput data streaming rather than message routing and filtering, so they do not provide the filtering and routing capabilities of Event Grid subscriptions.

148
Q

You are designing a real-time analytics application that requires a database solution to store and process large volumes of data. The solution must support multi-master writes and provide low-latency read operations. Which Azure database service should you recommend?
A. Azure Cosmos DB for NoSQL
B. Azure SQL Database with active geo-replication
C. Azure Data Lake Storage Gen2
D. Azure Synapse Analytics

A

Answer: A. Azure Cosmos DB for NoSQL

Reasoning:
Azure Cosmos DB is a globally distributed, multi-model database service designed for low latency and high availability. It supports multi-master (multi-region) writes, allowing multiple regions to accept writes simultaneously, which makes it ideal for real-time analytics applications that need low-latency reads and the ability to handle large volumes of data. This makes it the most suitable option for the stated requirements.

Breakdown of non-selected options:
B. Azure SQL Database with active geo-replication: While geo-replication provides low-latency readable secondaries, Azure SQL Database does not support multi-master writes; it is designed for single-master scenarios, which makes it less suitable here.

C. Azure Data Lake Storage Gen2: This is a storage service optimized for big data analytics workloads, not a database. It supports neither multi-master writes nor low-latency point reads and is better suited to batch processing than real-time analytics.

D. Azure Synapse Analytics: This is an analytics service that integrates big data and data warehousing. While powerful for large-scale analytics, it is not designed for real-time workloads with multi-master writes and low-latency reads; it suits complex queries and data transformations instead.

149
Q

You have 500 servers running Windows Server 2019 that host Microsoft SQL Server 2019 instances. These instances contain databases with the following characteristics: ✑ All databases use the full recovery model and have log backups configured. ✑ The largest database is currently 10 TB, and none will exceed 12 TB. You plan to migrate all the data from SQL Server to Azure. You need to recommend a service to host the databases. The solution must meet these requirements: ✑ Ensure the migration process is secure and encrypted. ✑ Provide a high-performance solution with low latency. ✑ Support easy integration with Azure services. What should you include in the recommendation?
A. Azure SQL Managed Instance
B. Azure SQL Database single databases
C. Azure SQL Database Managed Instance
D. SQL Server on Azure Virtual Machines

A

Answer: A. Azure SQL Managed Instance

Reasoning:
Azure SQL Managed Instance is the most suitable option for hosting these SQL Server databases given the requirements. It provides a fully managed SQL Server instance in the cloud, with encrypted connections that keep the migration secure. It offers high performance with low latency, as it is designed for large databases and complex workloads, and it integrates easily with other Azure services, making it a comprehensive target for migrating SQL Server databases to Azure.

Breakdown of non-selected options:
B. Azure SQL Database single databases - This option is less suitable because it is designed for individual databases and may not efficiently handle the migration of many large databases, especially those up to 12 TB. It also lacks some of the SQL Server compatibility features that Managed Instance offers for these workloads.

C. Azure SQL Database Managed Instance - This name is a misnomer and does not exist as a separate service. The correct name is Azure SQL Managed Instance, which is already covered by option A.

D. SQL Server on Azure Virtual Machines - While this option provides full control over the SQL Server environment and can handle large databases, it carries more management overhead than a managed service and does not offer the same level of Azure integration and ease of management as Azure SQL Managed Instance.

150
Q

You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements:
✑ Provide access to the Python runtime environment.
✑ Ensure redundancy in case an Azure region fails.
✑ Allow administrators to manage the web app.

Solution: You deploy an Azure App Service with two instances and an Azure Traffic Manager. You use a custom Docker image that includes the Python runtime environment. You grant the necessary permissions to administrators. Does this meet the goal?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: The solution deploys an Azure App Service with two instances and Azure Traffic Manager, which meets the requirements specified in the question. Here’s the breakdown:

  1. Provide access to the Python runtime environment: A custom Docker image lets you include the Python runtime environment, satisfying this requirement.
  2. Ensure redundancy in case an Azure region fails: Deploying two Azure App Service instances across different regions behind Azure Traffic Manager provides redundancy and failover, meeting this requirement.
  3. Allow administrators to manage the web app: Azure App Service provides built-in management capabilities, and granting the necessary permissions ensures administrators can manage the web app.

Breakdown of non-selected answer option:

B. No: This option is incorrect because the proposed solution meets all the specified requirements. Azure App Service with a custom Docker image, combined with Azure Traffic Manager, addresses the Python runtime, redundancy, and administrative-management needs, so selecting “No” would be wrong.

151
Q

You need to design a highly available Azure Virtual Machine that meets the following requirements:
✑ The Virtual Machine must have a recovery time objective (RTO) of less than 5 minutes.
✑ The Virtual Machine must remain available in the event of a hardware failure.
✑ Costs must be minimized.
Which deployment option should you use?
A. Azure Virtual Machines with Azure Site Recovery
B. Azure Virtual Machines with Availability Zones
C. Azure Virtual Machines with Azure Backup
D. Azure Virtual Machines with Premium SSD

A

Answer: B. Azure Virtual Machines with Availability Zones

Reasoning:
To design a highly available Azure Virtual Machine that meets the specified requirements; we need to consider both the RTO and the need for availability in the event of a hardware failure.

  • The requirement for an RTO of less than 5 minutes suggests that the solution must provide rapid recovery capabilities.
  • The need for the VM to remain available during a hardware failure indicates that redundancy and fault tolerance are critical.
  • Minimizing costs is also a factor; but it should not compromise the availability and recovery objectives.

Azure Availability Zones provide high availability by distributing VMs across physically separate locations within an Azure region. If one zone experiences a failure, instances in another zone continue running, meeting the requirement for availability during hardware failures. Distributing instances across zones also supports a low RTO, since healthy instances remain available for near-immediate recovery.

Breakdown of non-selected options:
- A. Azure Virtual Machines with Azure Site Recovery: While Azure Site Recovery provides disaster recovery capabilities, it is primarily used for cross-region failover and may not meet an RTO under 5 minutes because of delays in the failover process. It also targets region-level outages rather than individual hardware failures within a region.

  • C. Azure Virtual Machines with Azure Backup: Azure Backup is designed for data protection and recovery; not for maintaining VM availability during hardware failures. It does not provide the necessary redundancy or quick failover capabilities required to meet the RTO and availability requirements.
  • D. Azure Virtual Machines with Premium SSD: Premium SSDs offer high-performance storage but do not inherently provide high availability or redundancy across hardware failures. While they can improve performance; they do not address the need for the VM to remain available during a hardware failure or meet the RTO requirement.
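As an illustration of the recommended option, a VM can be pinned to a specific Availability Zone in an ARM template via the top-level `zones` property. This is a minimal sketch, not a complete template; the resource name, region, and VM size are hypothetical placeholders:

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2023-03-01",
  "name": "vm-app-01",
  "location": "eastus",
  "zones": [ "1" ],
  "properties": {
    "hardwareProfile": { "vmSize": "Standard_D2s_v3" }
  }
}
```

For zone redundancy, you would deploy additional instances of the workload with `"zones": [ "2" ]` and `"zones": [ "3" ]` behind a zone-redundant load balancer, so a failure in any one zone leaves the others serving traffic.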
152
Q

You have an on-premises web application running on a dedicated server that needs to be migrated to Azure. You must recommend a hosting solution that meets the following requirements:
✑ Minimize costs.
✑ Ensure high availability.
✑ Allow dynamic scaling to handle varying workloads.
What should you recommend?

A. Azure Virtual Machines with load balancers
B. Azure App Service
C. Azure Kubernetes Service
D. Azure Service Fabric

A

Answer: B. Azure App Service

Reasoning: Azure App Service is a fully managed platform for building, deploying, and scaling web apps. It is cost-effective because it abstracts the underlying infrastructure, reducing the need for managing virtual machines or other resources. It ensures high availability with built-in load balancing and can automatically scale out to handle varying workloads, making it a suitable choice for the given requirements.

Breakdown of non-selected options:

A. Azure Virtual Machines with load balancers: While this option can provide high availability and dynamic scaling, it typically incurs higher costs due to the need to manage and maintain the virtual machines and associated infrastructure. It requires more administrative overhead compared to Azure App Service.

C. Azure Kubernetes Service: This is a powerful option for containerized applications and provides high availability and scaling. However, it is more complex to manage and may not be the most cost-effective solution for a simple web application, especially if the application does not require container orchestration.

D. Azure Service Fabric: This is a distributed systems platform that can be used to build scalable and reliable microservices. It is more complex and may be overkill for a simple web application. It also requires more management effort and may not be as cost-effective as Azure App Service for this scenario.
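The dynamic-scaling requirement that App Service satisfies can be expressed declaratively with an autoscale setting on the App Service plan. The fragment below is a hedged sketch of such a rule (the plan name `asp-web-01`, region, and thresholds are illustrative assumptions): scale out by one instance when average CPU across the plan exceeds 70% over a 10-minute window, between 1 and 5 instances.

```json
{
  "type": "Microsoft.Insights/autoscaleSettings",
  "apiVersion": "2022-10-01",
  "name": "plan-autoscale",
  "location": "eastus",
  "properties": {
    "enabled": true,
    "targetResourceUri": "[resourceId('Microsoft.Web/serverfarms', 'asp-web-01')]",
    "profiles": [
      {
        "name": "cpu-based-scaling",
        "capacity": { "minimum": "1", "maximum": "5", "default": "1" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "CpuPercentage",
              "metricResourceUri": "[resourceId('Microsoft.Web/serverfarms', 'asp-web-01')]",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 70
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
```

A production profile would normally pair this with a matching scale-in rule at a lower threshold to avoid paying for idle instances.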

153
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one solution, while others might not have a solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases. The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region. You need to recommend a solution to meet the regulatory requirement. Solution: You recommend using an Azure Policy initiative to enforce the location. Does this meet the goal?
A. Yes
B. No

A

Answer: A. Yes

Reasoning: The goal is to ensure that the Azure App Service instances are deployed only to specific Azure regions, in compliance with the regulatory requirement. An Azure Policy initiative groups one or more policy definitions, such as a location restriction, and can be assigned at the subscription or management group scope to deny any deployment outside the permitted regions. Therefore, recommending an Azure Policy initiative to enforce the location meets the goal.

Breakdown of non-selected answer option:
B. No - This option is not suitable because the use of an Azure Policy initiative is indeed a valid solution to enforce the deployment of resources to specific regions, thereby meeting the regulatory requirement stated in the question.