Flashcards
What should you include in the identity management strategy to accommodate the planned changes?
A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.
C. Deploy a new Azure AD tenant for authenticating new R&D projects.
Answer: A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
Reasoning: The question asks about accommodating planned changes in the identity management strategy. Deploying domain controllers for corp.fabrikam.com to virtual networks in Azure allows for extending the on-premises Active Directory environment into Azure, which is a common strategy for hybrid identity management. This approach supports seamless integration with existing infrastructure and provides flexibility for scaling and managing identities in a cloud environment.
Breakdown of non-selected options:
- B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure: This option suggests moving all domain controllers to Azure, which might not be suitable if there is a need to maintain on-premises infrastructure for redundancy, compliance, or performance reasons. It could also introduce risks if connectivity to Azure is disrupted.
- C. Deploy a new Azure AD tenant for authenticating new R&D projects: Creating a new Azure AD tenant would separate the identity management for R&D projects from the existing corp.fabrikam.com domain, which might not align with the goal of accommodating planned changes within the existing identity management framework. This option could lead to increased complexity in managing multiple identity systems.
You have an Azure subscription that includes a virtual network. You need to ensure that the traffic between this virtual network and an on-premises network is encrypted. What should you recommend?
A. Azure AD Privileged Identity Management
B. Azure AD Conditional Access
C. Azure VPN Gateway
D. Azure Security Center
Answer: C. Azure VPN Gateway
Reasoning:
The requirement is to ensure that the traffic between an Azure virtual network and an on-premises network is encrypted. The most suitable solution for this scenario is to use a VPN (Virtual Private Network) connection, which encrypts the data transmitted between the two networks. Azure VPN Gateway is specifically designed to provide secure cross-premises connectivity, making it the appropriate choice for encrypting traffic between an Azure virtual network and an on-premises network.
Breakdown of non-selected options:
A. Azure AD Privileged Identity Management - This service is used for managing, controlling, and monitoring access within Azure AD, not for encrypting network traffic between Azure and on-premises networks.
B. Azure AD Conditional Access - This feature is used to enforce access controls on Azure AD resources based on conditions, not for encrypting network traffic.
D. Azure Security Center - This service provides security management and threat protection for Azure resources, but it does not specifically handle encryption of network traffic between Azure and on-premises networks.
You have an application that uses three on-premises Microsoft SQL Server databases. You plan to migrate these databases to Azure. The application requires server-side transactions across all three databases. What Azure solution should you recommend to meet this requirement?
A. Azure SQL Database Hyperscale
B. Azure SQL Database Managed Instance
C. Azure SQL Database Elastic Pool
D. Azure SQL Database Single Database
Answer: B. Azure SQL Database Managed Instance
Reasoning: The requirement is to support server-side transactions across three databases, which implies the need for features like distributed transactions or cross-database transactions. Azure SQL Database Managed Instance supports transactions that span multiple databases, making it suitable for this scenario. It provides near 100% compatibility with on-premises SQL Server, including support for features like cross-database queries and transactions, which are essential for the application in question.
Breakdown of non-selected options:
- A. Azure SQL Database Hyperscale: This option is designed for single databases with high scalability needs. It does not inherently support cross-database transactions, which are required in this scenario.
- C. Azure SQL Database Elastic Pool: Elastic Pools are used to manage and scale multiple databases with varying and unpredictable usage demands. However, they do not support cross-database transactions, which are necessary for the application.
- D. Azure SQL Database Single Database: This option is for single, isolated databases and does not support cross-database transactions, which are needed for the application to function correctly across the three databases.
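The all-or-nothing guarantee the application needs can be pictured with a small sketch (a plain-Python simulation, not the SQL Server API; Managed Instance implements this server-side): two databases are updated atomically, and a failure in either rolls both back.

```python
# Illustrative sketch of an atomic cross-database transaction.
# Database and CrossDbTransaction are made-up names for this example.

class Database:
    def __init__(self, name):
        self.name = name
        self.rows = {}

class CrossDbTransaction:
    """All-or-nothing updates spanning several databases."""
    def __init__(self, *dbs):
        self.dbs = dbs
        self.pending = []  # staged (db, key, value) writes

    def write(self, db, key, value):
        self.pending.append((db, key, value))

    def commit(self):
        snapshots = [(db, dict(db.rows)) for db in self.dbs]  # for rollback
        try:
            for db, key, value in self.pending:
                if value is None:
                    raise ValueError(f"invalid write to {db.name}")
                db.rows[key] = value
        except Exception:
            for db, snap in snapshots:  # roll every database back together
                db.rows = snap
            raise

orders, billing = Database("orders"), Database("billing")
tx = CrossDbTransaction(orders, billing)
tx.write(orders, "o1", "placed")
tx.write(billing, "o1", "invoiced")
tx.commit()
print(orders.rows, billing.rows)  # both updates applied together, or neither
```

The single-database offerings (Hyperscale, Elastic Pool, Single Database) cannot provide this spanning commit server-side, which is why Managed Instance is the fit here.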
You have an on-premises server named Server1 running Windows Server 2016. Server1 hosts a SQL Server database that is 4 TB in size. You need to migrate this database to an Azure Blob Storage account named store1. The migration process must be secure and encrypted. Which Azure service should you recommend?
A. Azure Data Box
B. Azure Site Recovery
C. Azure Database Migration Service
D. Azure Import/Export
Answer: A. Azure Data Box
Reasoning:
Azure Data Box is a service designed to transfer large amounts of data to Azure in a secure and efficient manner. Given the size of the SQL Server database (4 TB), Azure Data Box is suitable because it provides a physical device that is shipped to the customer, loaded with data, and then sent back to Microsoft for uploading to Azure. Data on the device is encrypted with AES 256-bit encryption, so it remains protected in transit, and the approach is ideal for large datasets where network transfer might be impractical due to bandwidth limitations or time constraints.
Breakdown of non-selected options:
- B. Azure Site Recovery: This service is primarily used for disaster recovery and business continuity, allowing you to replicate on-premises servers to Azure for failover purposes. It is not designed for one-time data migrations to Azure Blob Storage.
- C. Azure Database Migration Service: This service is typically used for migrating databases to Azure SQL Database or Azure SQL Managed Instance, not directly to Azure Blob Storage. It focuses on database schema and data migration rather than bulk data transfer to storage accounts.
- D. Azure Import/Export: While this service can also transfer data to Azure by shipping drives, it requires you to supply and prepare your own disks. Azure Data Box is the more streamlined option for a dataset of this size, since Microsoft supplies a tamper-resistant, encrypted appliance.
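A back-of-the-envelope calculation (plain Python, with an assumed 100 Mbps sustained uplink) shows why shipping a device can beat a network transfer at this scale:

```python
# Estimated time to upload 4 TB over a sustained 100 Mbps link.
# The 100 Mbps figure is an assumption for illustration; real uplinks vary.
size_bits = 4 * 10**12 * 8   # 4 TB (decimal) expressed in bits
link_bps = 100 * 10**6       # 100 megabits per second
hours = size_bits / link_bps / 3600
print(f"{hours:.0f} hours (~{hours / 24:.1f} days)")  # → 89 hours (~3.7 days)
```

Several days of saturated bandwidth, with no allowance for throttling or retries, is the kind of constraint that makes an offline transfer appliance attractive.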
Your company is migrating its on-premises virtual machines to Azure. These virtual machines will communicate with each other within the same virtual network using private IP addresses. You need to recommend a solution to prevent virtual machines that are not part of the migration from communicating with the migrating virtual machines. Which solution should you recommend?
A. Azure ExpressRoute
B. Network Security Groups (NSGs)
C. Azure Bastion
D. Azure Private Link
Answer: B. Network Security Groups (NSGs)
Reasoning: Network Security Groups (NSGs) are designed to filter network traffic to and from Azure resources in an Azure virtual network. They can be used to control inbound and outbound traffic to network interfaces, VMs, and subnets, making them suitable for isolating the migrating virtual machines from those not part of the migration. By configuring NSGs, you can specify rules that allow or deny traffic based on source and destination IP addresses, ports, and protocols, effectively preventing unwanted communication.
Breakdown of non-selected options:
- A. Azure ExpressRoute: This is a service that provides a private connection between an on-premises network and Azure, bypassing the public internet. It is not used for controlling communication between virtual machines within a virtual network.
- C. Azure Bastion: This is a service that provides secure and seamless RDP and SSH connectivity to virtual machines directly through the Azure portal. It is not used for controlling network traffic between virtual machines.
- D. Azure Private Link: This service provides private connectivity to Azure services over a private endpoint in your virtual network. It is not designed to control communication between virtual machines within the same virtual network.
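The NSG behaviour described above — rules evaluated in priority order, lowest number first, with traffic denied if nothing matches — can be sketched as follows (a simplified model, not the Azure API; real NSGs also match ports, protocols, CIDR ranges, and service tags, and the address ranges here are made up):

```python
from dataclasses import dataclass

@dataclass
class Rule:
    priority: int       # lower number = evaluated first
    source_prefix: str  # naive string-prefix stand-in for a CIDR match
    action: str         # "Allow" or "Deny"

def evaluate(rules, source_ip):
    """Return the action of the highest-priority (lowest-numbered) matching rule."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if source_ip.startswith(rule.source_prefix):
            return rule.action
    return "Deny"  # NSGs end with an implicit deny for unmatched traffic

# Allow only the migrated subnet (10.0.1.0/24 here, a hypothetical range),
# deny the rest of the virtual network.
rules = [
    Rule(100, "10.0.1.", "Allow"),
    Rule(200, "10.0.",   "Deny"),
]
print(evaluate(rules, "10.0.1.5"))  # Allow (migrated VM)
print(evaluate(rules, "10.0.2.7"))  # Deny  (non-migrated VM)
```

Attaching such a rule set to the migrated VMs' subnet is what isolates them from the rest of the virtual network.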
You plan to deploy a microservices-based application to Azure. The application consists of several containerized services that need to communicate with each other. The application deployment must meet the following requirements: ✑ Ensure that each service can scale independently. ✑ Ensure that internet traffic is encrypted using SSL without configuring SSL on each container. Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Application Gateway
Answer: C. AKS ingress controller
Reasoning: The question requires a solution for deploying a microservices-based application with containerized services that can scale independently and have encrypted internet traffic using SSL without configuring SSL on each container. An AKS ingress controller is suitable for this scenario because it manages external access to the services in a Kubernetes cluster, including SSL termination, which means SSL can be managed at the ingress level rather than on each individual container. This allows each service to scale independently within the Kubernetes environment.
Breakdown of non-selected options:
- A. Azure Front Door: While Azure Front Door can handle SSL termination and provide global load balancing, it is more suited for routing traffic across multiple regions and does not inherently support scaling individual microservices within a Kubernetes cluster.
- B. Azure Traffic Manager: This service is primarily used for DNS-based traffic routing and does not handle SSL termination or provide the ability to scale individual services within a microservices architecture.
- D. Azure Application Gateway: Although it supports SSL termination and can route traffic to backend services, it is more suited for traditional web applications rather than containerized microservices that require independent scaling within a Kubernetes environment.
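Terminating SSL at the ingress looks roughly like the Kubernetes manifest below (a sketch: the names, hostname, and secret are placeholders, and an NGINX ingress controller is assumed). The certificate lives in one secret at the ingress, not in any container image:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                  # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com                # placeholder hostname
    secretName: app-tls-cert         # certificate configured once, here
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: orders-svc         # a backend service; each one scales independently
            port:
              number: 80
```

Behind the ingress, each service keeps its own Deployment and replica count, which is what satisfies the independent-scaling requirement.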
You have an on-premises storage solution that supports the Hadoop Distributed File System (HDFS) and uses Kerberos for authentication. You need to migrate this solution to Azure while ensuring it continues to use Kerberos. What should you use?
A. Azure Data Lake Storage Gen2
B. Azure NetApp Files
C. Azure Files
D. Azure Blob Storage
Answer: A. Azure Data Lake Storage Gen2
Reasoning: Azure Data Lake Storage Gen2 is designed for big data analytics and exposes an HDFS-compatible interface through its hierarchical namespace. When paired with Azure HDInsight and the Enterprise Security Package, clusters can authenticate using Kerberos backed by Azure AD Domain Services. This makes it the most suitable option for migrating an on-premises HDFS solution that uses Kerberos authentication to Azure.
Breakdown of non-selected options:
B. Azure NetApp Files: While Azure NetApp Files is a high-performance file storage service that supports NFS and SMB protocols, it is not specifically designed for HDFS workloads and does not natively support Kerberos authentication for HDFS.
C. Azure Files: Azure Files provides fully managed file shares in the cloud that are accessible via the SMB protocol. It is not designed for HDFS workloads and does not natively support Kerberos authentication for HDFS.
D. Azure Blob Storage: Azure Blob Storage is a scalable object storage solution for unstructured data. It does not natively support HDFS or Kerberos authentication, making it unsuitable for this scenario.
You are designing an application that requires a MySQL database in Azure. The application must be highly available and support automatic failover. Which service tier should you recommend?
A. Basic
B. General Purpose
C. Memory Optimized
D. Serverless
Answer: B. General Purpose
Reasoning: The requirement is for a MySQL database in Azure that is highly available and supports automatic failover. Azure Database for MySQL offers different service tiers, each with specific features and capabilities. The General Purpose tier is designed to provide balanced compute and memory resources with high availability and automatic failover capabilities, making it suitable for most business workloads that require these features.
Breakdown of non-selected options:
A. Basic - The Basic tier is designed for workloads that do not require high availability or automatic failover. It is more suitable for development or testing environments rather than production environments that require high availability.
C. Memory Optimized - While the Memory Optimized tier provides high performance for memory-intensive workloads, it is not specifically designed for high availability and automatic failover. It focuses more on performance rather than availability.
D. Serverless - The Serverless tier is designed for intermittent, unpredictable workloads and offers automatic scaling and billing based on actual usage. However, it does not inherently provide high availability and automatic failover, which are the key requirements in this scenario.
You are designing an IoT solution that involves 100,000 devices. These devices will stream data, including device ID, location, and sensor data, at a rate of 100 messages per second. The solution must store and analyze the data in real time. Which Azure service should you recommend?
A. Azure Data Explorer
B. Azure Stream Analytics
C. Azure Cosmos DB
D. Azure IoT Hub
Answer: B. Azure Stream Analytics
Reasoning: Azure Stream Analytics is specifically designed for real-time data processing and analysis. It can handle large volumes of data streaming from IoT devices, making it suitable for scenarios where data needs to be analyzed in real time. Given the requirement to store and analyze data in real time from 100,000 devices streaming at 100 messages per second, Azure Stream Analytics is the most appropriate choice.
Breakdown of non-selected options:
- A. Azure Data Explorer: While Azure Data Explorer is excellent for analyzing large volumes of data, it is more suited for exploratory data analysis and interactive analytics rather than real-time streaming analytics.
- C. Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service. It is ideal for storing data with low latency but does not provide real-time analytics capabilities.
- D. Azure IoT Hub: Azure IoT Hub is a service for managing IoT devices and ingesting data from them. While it is essential for the IoT solution, it does not provide real-time data analysis capabilities on its own.
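The kind of query Stream Analytics runs over such a stream — for example, the average sensor value per device over a fixed time window — can be sketched in plain Python (an illustration of tumbling-window semantics, not the Stream Analytics engine; the event values are made up):

```python
from collections import defaultdict

def tumbling_window_avg(events, window_seconds):
    """Group (timestamp, device_id, value) events into fixed, non-overlapping
    windows and average the value per device — the shape of a Stream Analytics
    query such as:
      SELECT deviceId, AVG(value) ... GROUP BY deviceId, TumblingWindow(second, N)
    """
    sums = defaultdict(lambda: [0.0, 0])
    for ts, device, value in events:
        key = (ts // window_seconds, device)  # window index + device
        sums[key][0] += value
        sums[key][1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

events = [
    (0, "dev-1", 20.0), (3, "dev-1", 22.0),  # window 0
    (5, "dev-2", 18.0),
    (11, "dev-1", 30.0),                     # window 1
]
print(tumbling_window_avg(events, 10))
# {(0, 'dev-1'): 21.0, (0, 'dev-2'): 18.0, (1, 'dev-1'): 30.0}
```

Stream Analytics evaluates this continuously over the live stream rather than over a finished list, which is the "real time" part the question asks for.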
You are designing a highly available Azure web application that must remain operational during a regional outage. You need to minimize costs while ensuring no data loss during failover. Which Azure service should you use?
A. Azure App Service Standard
B. Azure App Service Premium
C. Azure Kubernetes Service (AKS)
D. Azure Service Fabric
Answer: B. Azure App Service Premium
Reasoning:
To ensure high availability and operational continuity during a regional outage, the application must be able to fail over to another region without data loss. Azure App Service Premium provides features such as Traffic Manager integration and geo-distribution, which are essential for maintaining availability across regions. It also includes built-in backup and restore capabilities, which help in minimizing data loss during failover. Additionally, the Premium tier offers better performance and scaling options compared to the Standard tier, which is crucial for handling increased loads during failover scenarios.
Breakdown of non-selected options:
- A. Azure App Service Standard: While this option provides basic scaling and availability features, it lacks the advanced geo-distribution and traffic management capabilities of the Premium tier, which are necessary for handling regional outages effectively.
- C. Azure Kubernetes Service (AKS): AKS is a container orchestration service that can provide high availability, but it requires more complex setup and management compared to Azure App Service. It may not be the most cost-effective solution for a web application that needs to minimize costs while ensuring no data loss.
- D. Azure Service Fabric: This is a distributed systems platform that can provide high availability and resilience. However, it is more complex to manage and may not be the most cost-effective solution for a simple web application compared to Azure App Service Premium, which offers built-in features for high availability and disaster recovery.
You are developing a sales application that will include several Azure cloud services to manage various components of a transaction. These services will handle customer orders, billing, payment, inventory, and shipping. You need to recommend a solution that allows these cloud services to communicate transaction information asynchronously using XML messages. What should you include in your recommendation?
A. Azure Service Fabric
B. Azure Data Lake
C. Azure Service Bus
D. Azure Traffic Manager
Answer: C. Azure Service Bus
Reasoning: Azure Service Bus is a messaging service that facilitates asynchronous communication between different services and applications. Message payloads are format-agnostic, so XML messages are fully supported, and the service is designed to handle complex messaging workflows, making it suitable for scenarios where different components of a system need to communicate asynchronously. In this case, the sales application requires asynchronous communication between services handling customer orders, billing, payment, inventory, and shipping, which aligns well with the capabilities of Azure Service Bus.
Breakdown of non-selected options:
A. Azure Service Fabric: Azure Service Fabric is a distributed systems platform used to build and manage scalable and reliable microservices and containers. While it is useful for developing applications, it is not specifically designed for asynchronous messaging between services, which is the requirement in this scenario.
B. Azure Data Lake: Azure Data Lake is a storage service optimized for big data analytics workloads. It is not designed for messaging or communication between services, making it unsuitable for the requirement of asynchronous communication using XML messages.
D. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions. It is not related to messaging or communication between services, and therefore does not meet the requirement for asynchronous communication using XML messages.
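The messaging pattern can be sketched locally, with an in-process queue standing in for a Service Bus queue (illustrative only — the real service is consumed through the azure-servicebus SDK, and the order fields here are invented):

```python
import queue
import xml.etree.ElementTree as ET

bus = queue.Queue()  # local stand-in for a Service Bus queue

def send_order(order_id, amount):
    """Producer: the orders service enqueues an XML message and moves on."""
    msg = ET.Element("Order", id=order_id)
    ET.SubElement(msg, "Amount").text = str(amount)
    bus.put(ET.tostring(msg, encoding="unicode"))

def process_next():
    """Consumer: billing dequeues and parses a message at its own pace."""
    root = ET.fromstring(bus.get())
    return root.get("id"), float(root.find("Amount").text)

send_order("o-1001", 49.95)  # producer does not wait for billing
print(process_next())        # ('o-1001', 49.95)
```

The point of the pattern is the decoupling: the sender never blocks on the receiver, and the queue durably holds the XML payload until the downstream service is ready — which is what Service Bus provides across processes and machines.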
You need to implement disaster recovery for an on-premises Hadoop cluster that uses HDFS, with Azure as the replication target. Which Azure service should you use?
A. Azure Blob Storage
B. Azure Data Lake Storage Gen2
C. Azure Backup
D. Azure Site Recovery
Answer: B. Azure Data Lake Storage Gen2
Reasoning:
The question requires a solution for disaster recovery of an on-premises Hadoop cluster using HDFS, with Azure as the replication target. Azure Data Lake Storage Gen2 is specifically designed for big data analytics and is optimized for Hadoop workloads. It provides a hierarchical namespace and is compatible with HDFS, making it the most suitable choice for replicating Hadoop data.
Breakdown of non-selected options:
- A. Azure Blob Storage: While Azure Blob Storage can store large amounts of unstructured data, it does not provide the hierarchical namespace and HDFS compatibility that Azure Data Lake Storage Gen2 offers, which are crucial for Hadoop workloads.
- C. Azure Backup: Azure Backup is primarily used for backing up and restoring data, but it is not designed for replicating Hadoop clusters or handling HDFS data specifically.
- D. Azure Site Recovery: Azure Site Recovery is used for disaster recovery of entire virtual machines and applications, but it is not tailored for Hadoop clusters or HDFS data replication.
You have been tasked with implementing a governance solution for a large Azure environment containing numerous resource groups. You need to ensure that all resource groups comply with the organization’s policies. Which Azure Policy scope should you use?
A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. Subscriptions
D. Compute resources
E. Resource groups
F. Management groups
Answer: F. Management groups
Reasoning:
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements. When dealing with a large Azure environment containing numerous resource groups, it is important to apply policies at a level that can encompass all these resource groups efficiently. Management groups are designed to help manage access, policy, and compliance across multiple subscriptions. By applying policies at the management group level, you can ensure that all underlying subscriptions and their respective resource groups comply with the organization's policies.
Breakdown of non-selected options:
A. Azure Active Directory (Azure AD) administrative units - These are used to delegate administrative permissions within Azure AD and are not related to Azure Policy scope for resource compliance.
B. Azure Active Directory (Azure AD) tenants - A tenant is a dedicated instance of Azure AD that an organization receives when it signs up for a Microsoft cloud service. It is not used for Azure Policy scope.
C. Subscriptions - While policies can be applied at the subscription level, using management groups allows for broader policy application across multiple subscriptions, which is more suitable for large environments.
D. Compute resources - This is a specific type of resource and not a scope for applying Azure Policies.
E. Resource groups - Policies can be applied at the resource group level, but this would require applying policies individually to each resource group, which is not efficient for a large environment.
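The inheritance that makes management groups attractive can be sketched as a scope tree (a simplification — real Azure Policy also supports exclusions and initiative assignments, and all the names below are hypothetical):

```python
class Scope:
    """A node in the management-group -> subscription -> resource-group tree."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.policies = name, parent, []

    def effective_policies(self):
        # A scope inherits every policy assigned to any of its ancestors.
        inherited = self.parent.effective_policies() if self.parent else []
        return inherited + self.policies

root = Scope("mg-contoso")            # management group at the top
root.policies.append("require-tags")  # assigned once, at the top
sub_prod = Scope("sub-prod", parent=root)
sub_dev = Scope("sub-dev", parent=root)
rg = Scope("rg-app1", parent=sub_prod)

print(rg.effective_policies())  # ['require-tags'] — inherited, not re-assigned
```

One assignment at the management group reaches every subscription and resource group beneath it, which is exactly why option E (per-resource-group assignment) does not scale.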
You have an on-premises data center hosting several SQL Server instances. You plan to migrate some of these databases to Azure SQL Database Managed Instance. You need to recommend a migration solution that meets the following requirements: • Ensures minimal downtime during migration. • Supports on-premises instances running SQL Server 2008 R2. • Allows the migration of multiple databases in parallel. • Maintains compatibility with all SQL Server features. What should you include in your recommendation?
A. Use Azure Database Migration Service to migrate the databases.
B. Use SQL Server Integration Services to migrate the databases.
C. Upgrade the on-premises instances to SQL Server 2016, then use Azure Database Migration Service to migrate the databases.
D. Use Data Migration Assistant to migrate the databases.
Answer: A. Use Azure Database Migration Service to migrate the databases.
Reasoning:
Azure Database Migration Service (DMS) is designed to facilitate the migration of databases to Azure with minimal downtime, which is a key requirement in this scenario. It supports migrations from SQL Server 2008 R2, allows multiple databases to be migrated in parallel, and, because the target is SQL Managed Instance, preserves compatibility with SQL Server features. DMS is specifically built to handle such migrations efficiently and is the most suitable option given the requirements.
Breakdown of non-selected options:
B. Use SQL Server Integration Services to migrate the databases.
- SQL Server Integration Services (SSIS) is primarily used for data transformation and ETL processes rather than full database migrations. It does not inherently support minimal downtime or parallel migrations of multiple databases as effectively as DMS.
C. Upgrade the on-premises instances to SQL Server 2016, then use Azure Database Migration Service to migrate the databases.
- While upgrading to SQL Server 2016 could be beneficial for other reasons, it is not necessary for the migration process itself. Azure DMS supports SQL Server 2008 R2 directly, making this step redundant and not aligned with the requirement for minimal downtime.
D. Use Data Migration Assistant to migrate the databases.
- Data Migration Assistant (DMA) is primarily an assessment tool for identifying compatibility issues before a migration to Azure SQL. It is not designed for large-scale migrations where minimal downtime and parallel migration of multiple databases are required.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements: provide access to the full .NET Framework, ensure redundancy in case an Azure region fails, and allow administrators access to the operating system to install custom application dependencies. Solution: You deploy an Azure VM Scale Set across two Azure regions and use an Azure Load Balancer to distribute traffic between the VMs in the Scale Set. Does this meet the goal?
A. Yes
B. No
Answer: A. Yes
Reasoning: The requirements for hosting a stateless web app include providing access to the full .NET Framework, ensuring redundancy in case an Azure region fails, and allowing administrators access to the operating system to install custom application dependencies. Deploying an Azure VM Scale Set across two Azure regions with an Azure Load Balancer meets these requirements as follows:
- Access to the full .NET Framework: Azure VMs can run Windows Server, which supports the full .NET Framework.
- Redundancy in case an Azure region fails: By deploying the VM Scale Set across two regions, the solution ensures that if one region fails, the other can continue to serve the application.
- Administrator access to the operating system: Azure VMs provide full access to the OS, allowing administrators to install custom application dependencies.
Breakdown of non-selected answer option:
B. No: This option is incorrect because the proposed solution does meet all the specified requirements. Deploying an Azure VM Scale Set across two regions with a load balancer provides the necessary redundancy, access to the full .NET Framework, and administrative access to the OS.
You have an Azure subscription that includes an Azure Storage account. You plan to implement Azure File Sync. What is the first step you should take to prepare the storage account for Azure File Sync?
A. Register the Microsoft.Storage resource provider.
B. Create a file share in the storage account.
C. Create a virtual network.
D. Install the Azure File Sync agent on a server.
Answer: B. Create a file share in the storage account.
Reasoning: To implement Azure File Sync, the first step is to create a file share in the Azure Storage account. Azure File Sync requires a file share to sync files between the on-premises server and the Azure cloud. This file share acts as the cloud endpoint for the sync process.
Breakdown of non-selected answer options:
- A. Register the Microsoft.Storage resource provider: This step is not necessary for preparing the storage account specifically for Azure File Sync. The Microsoft.Storage resource provider is typically registered by default in Azure subscriptions, and it is not a specific requirement for Azure File Sync setup.
- C. Create a virtual network: Creating a virtual network is not directly related to setting up Azure File Sync. Azure File Sync does not require a virtual network configuration as part of its initial setup process.
- D. Install the Azure File Sync agent on a server: While installing the Azure File Sync agent is a necessary step in the overall process, it is not the first step in preparing the storage account itself. The agent is installed on the on-premises server that will sync with the Azure file share.
You have a highly available application running on an AKS cluster in Azure. You need to ensure that the application is accessible over HTTPS without configuring SSL on each container. Which Azure service should you use?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS Ingress Controller
D. Azure Application Gateway
Answer: C. AKS Ingress Controller
Reasoning: An AKS ingress controller terminates SSL/TLS at the ingress layer inside the cluster, so the application can be served over HTTPS while certificates are configured once at the ingress rather than on each container. Azure Front Door and Azure Traffic Manager provide global traffic routing but do not manage in-cluster traffic, and while Azure Application Gateway can also terminate SSL, for an AKS-hosted application it is typically consumed as an ingress controller implementation (the Application Gateway Ingress Controller), so the ingress controller is the component to recommend.
You have an on-premises storage solution that supports the Hadoop Distributed File System (HDFS). You need to migrate this solution to Azure and ensure it is accessible from multiple regions. What should you use?
A. Azure Data Lake Storage Gen2
B. Azure NetApp Files
C. Azure Files
D. Azure Blob Storage
Answer: A. Azure Data Lake Storage Gen2
Reasoning: Azure Data Lake Storage Gen2 is specifically designed to handle big data analytics workloads and is compatible with HDFS, making it an ideal choice for migrating an on-premises HDFS solution to Azure. It also provides high scalability and can be accessed from multiple regions, which aligns with the requirement of ensuring accessibility from multiple regions.
Breakdown of non-selected options:
- B. Azure NetApp Files: While Azure NetApp Files provides high-performance file storage, it is not specifically designed for HDFS compatibility and big data analytics workloads, making it less suitable for this scenario.
- C. Azure Files: Azure Files offers fully managed file shares in the cloud that are accessible via the SMB protocol. However, it does not natively support HDFS, which is a critical requirement for this migration.
- D. Azure Blob Storage: Although Azure Blob Storage is highly scalable and can be accessed from multiple regions, it does not natively support HDFS. It is more suited for object storage than the file system compatibility required for HDFS.
You plan to deploy an Azure virtual machine to run a mission-critical application. The virtual machine will store data on a disk with BitLocker Drive Encryption enabled. You need to use Azure Backup to back up the virtual machine. Which two backup solutions should you use? Each option presents part of the solution.
A. Azure Backup (MARS) agent
B. Azure Backup Server
C. Azure Site Recovery
D. Backup Pre-Checks
Answer: B. Azure Backup Server and D. Backup Pre-Checks
Reasoning:
When backing up an Azure virtual machine with BitLocker Drive Encryption enabled, it's important to ensure that the backup solution supports encrypted disks. Azure Backup Server is a suitable option because it can handle the backup of encrypted disks. Additionally, Backup Pre-Checks are essential to ensure that the backup configuration is correct and that there are no issues that could prevent a successful backup. These pre-checks help identify potential problems before the backup process begins, which is crucial for mission-critical applications.
Breakdown of non-selected options:
A. Azure Backup (MARS) agent - The MARS agent is typically used for backing up files, folders, and system state from on-premises machines to Azure. It is not suitable for backing up Azure virtual machines directly, especially those with BitLocker encryption.
C. Azure Site Recovery - This is primarily a disaster recovery solution rather than a backup solution. It is used to replicate and fail over virtual machines to another region, not for regular backup purposes.
You have an Azure subscription. You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:
✑ Minimize the time it takes to provision compute resources during scale-out operations.
✑ Support autoscaling of Windows Server containers.
Which scaling option should you recommend?
A. Kubernetes version 1.20.2 or newer
B. Virtual nodes with Virtual Kubelet ACI
C. Cluster autoscaler
D. Horizontal pod autoscaler
Answer: C. Cluster autoscaler
Reasoning:
The question requires a solution that minimizes the time it takes to provision compute resources during scale-out operations and supports autoscaling of Windows Server containers. The cluster autoscaler automatically adjusts the size of the Kubernetes cluster by adding or removing nodes based on the resource requirements of the workloads. This is particularly useful for scale-out operations, as it can quickly provision additional nodes when needed, which aligns with the requirement to minimize provisioning time. Additionally, the cluster autoscaler supports Windows Server node pools, making it suitable for the given scenario.
Breakdown of non-selected options:
A. Kubernetes version 1.20.2 or newer - While a newer Kubernetes version may bring performance improvements and additional features, it is not a scaling option and does not by itself minimize provisioning time or autoscale Windows Server containers.
B. Virtual nodes with Virtual Kubelet ACI - Virtual nodes burst workloads into Azure Container Instances (ACI) without managing the underlying infrastructure, but virtual nodes support only Linux containers, so they cannot autoscale Windows Server containers.
D. Horizontal pod autoscaler - The horizontal pod autoscaler scales the number of pods in a deployment or replica set based on observed CPU utilization or other metrics. It scales the application, not the underlying compute resources (nodes), which is what must scale to minimize provisioning time during scale-out operations.
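The cluster autoscaler is enabled per node pool. As a rough sketch (property names follow the AKS ARM/REST schema; the pool name, VM size, and count limits here are hypothetical, not values from the question), a Windows Server 2019 pool with autoscaling enabled might be described like this:

```python
# Sketch of an AKS agent pool definition with the cluster autoscaler enabled.
# Property names follow the AKS ARM/REST schema; the pool name and the
# min/max counts are illustrative assumptions.
win_pool = {
    "name": "winpool",
    "osType": "Windows",         # Windows Server container hosts
    "osSKU": "Windows2019",      # Windows Server 2019 nodes
    "mode": "User",              # user pool (the system pool must be Linux)
    "enableAutoScaling": True,   # turn on the cluster autoscaler for this pool
    "minCount": 1,               # lower bound for scale-in
    "maxCount": 5,               # upper bound for scale-out
    "vmSize": "Standard_D4s_v3",
}

def autoscaling_bounds(pool):
    """Return (min, max) node counts if the cluster autoscaler is enabled."""
    if pool.get("enableAutoScaling"):
        return pool["minCount"], pool["maxCount"]
    return None

print(autoscaling_bounds(win_pool))  # (1, 5)
```

The autoscaler then adds nodes (up to `maxCount`) when pods cannot be scheduled, which is what keeps scale-out provisioning time low.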
You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory. Your company has a line-of-business (LOB) application developed internally. You need to implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) when users attempt to access the application from an unknown location. Which two features should you include in the solution? Each correct selection is worth one point.
A. Azure AD Privileged Identity Management (PIM)
B. Azure Application Gateway
C. Azure AD enterprise applications
D. Azure AD Identity Protection
E. Conditional Access policies
Answer: C. Azure AD enterprise applications
Answer: E. Conditional Access policies
Reasoning:
To implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) for an internally developed line-of-business (LOB) application, you need Azure AD enterprise applications and Conditional Access policies. Registering the LOB app as an Azure AD enterprise application lets you configure SAML-based SSO for it. Conditional Access policies let you enforce MFA under specific conditions, such as access from an unknown location.
Breakdown of non-selected options:
A. Azure AD Privileged Identity Management (PIM) - This is used for managing, controlling, and monitoring privileged access within Azure AD, Azure, and other Microsoft Online Services. It is not directly related to implementing SSO or enforcing MFA for applications.
B. Azure Application Gateway - This is a web traffic load balancer for managing traffic to your web applications. It does not provide SSO or MFA capabilities.
D. Azure AD Identity Protection - This identifies potential vulnerabilities affecting your organization's identities and configures automated responses to detected suspicious actions. While it can enhance security, it is not used to implement SSO or enforce MFA for a specific application.
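A Conditional Access policy of this kind roughly follows the shape of the Microsoft Graph `conditionalAccessPolicy` resource. A sketch, modeled as a Python dict (the application ID is a placeholder, and `AllTrusted` is the built-in reference to trusted named locations):

```python
# Illustrative Conditional Access policy: require MFA for one enterprise
# application when the sign-in comes from outside trusted locations.
# Mirrors the Microsoft Graph conditionalAccessPolicy structure; the app ID
# below is a placeholder, not a real application.
policy = {
    "displayName": "Require MFA for LOB app from unknown locations",
    "state": "enabled",
    "conditions": {
        "applications": {"includeApplications": ["<lob-app-id>"]},
        "users": {"includeUsers": ["All"]},
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["AllTrusted"],  # skip known, trusted locations
        },
    },
    # Access is granted only when the MFA control is satisfied.
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

print(policy["grantControls"]["builtInControls"])  # ['mfa']
```

The enterprise application supplies the SAML SSO configuration; the policy above layers the location-conditional MFA requirement on top of it.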
You are storing user profile data in an Azure Cosmos DB database. You want to set up a process to automatically back up the data to Azure Storage every week. What should you use to achieve this?
A. Azure Backup
B. Azure Cosmos DB backup and restore
C. Azure Import/Export Service
D. Azure Data Factory
Answer: D. Azure Data Factory
Reasoning: Azure Data Factory is a cloud-based data integration service for creating data-driven workflows that orchestrate and automate data movement and transformation. It is well suited to automatically backing up data from Azure Cosmos DB to Azure Storage on a weekly basis: you can create a pipeline that copies data from Cosmos DB to Azure Storage and attach a schedule trigger that runs it weekly.
Breakdown of non-selected options:
A. Azure Backup: Azure Backup is primarily used for backing up Azure VMs, SQL databases, and other Azure resources. It does not natively support backing up data from Azure Cosmos DB to Azure Storage.
B. Azure Cosmos DB backup and restore: While Azure Cosmos DB has built-in backup capabilities, they are focused on point-in-time restore within the Cosmos DB service itself and do not provide a mechanism to export data to your own Azure Storage account on a schedule.
C. Azure Import/Export Service: This service transfers large amounts of data to and from Azure using physical disks. It is not suitable for automated, scheduled backups of Cosmos DB data to Azure Storage.
Therefore, Azure Data Factory is the most suitable option for automating the weekly backup from Azure Cosmos DB to Azure Storage.
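The weekly schedule would typically be expressed as a Data Factory schedule trigger attached to the copy pipeline. A sketch of that trigger definition, modeled as a Python dict following the ScheduleTrigger JSON layout (trigger and pipeline names are hypothetical):

```python
# Sketch of an Azure Data Factory schedule trigger that runs a copy pipeline
# once a week. The structure follows the ScheduleTrigger JSON schema; the
# trigger name, pipeline name, and start time are illustrative placeholders.
weekly_trigger = {
    "name": "WeeklyCosmosBackup",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Week",
                "interval": 1,                       # every 1 week
                "startTime": "2024-01-07T02:00:00Z",
                "timeZone": "UTC",
                # Fire Sundays at 02:00 UTC
                "schedule": {"weekDays": ["Sunday"], "hours": [2], "minutes": [0]},
            }
        },
        "pipelines": [
            {"pipelineReference": {"referenceName": "CopyCosmosToBlob",
                                   "type": "PipelineReference"}}
        ],
    },
}

recurrence = weekly_trigger["properties"]["typeProperties"]["recurrence"]
print(recurrence["frequency"], recurrence["interval"])  # Week 1
```

The referenced pipeline would contain a Copy activity with a Cosmos DB source and a Blob Storage sink.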
You have a highly available application running on an AKS cluster in Azure. To ensure the application remains available even if a single availability zone fails, which Azure service should you use?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer
Answer: A. Azure Front Door
Reasoning: Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and highly available web applications. It can route traffic across multiple regions or availability zones with near-real-time failover, ensuring that the application remains available even if a single availability zone fails. This makes it the most suitable option for the scenario described.
Breakdown of non-selected options:
- B. Azure Traffic Manager: Traffic Manager can route traffic between regions and provide high availability, but it operates at the DNS level and does not offer the same real-time failover and global load balancing as Azure Front Door.
- C. AKS ingress controller: An AKS ingress controller manages inbound traffic to applications running in an AKS cluster. It does not inherently provide cross-zone or cross-region failover, which is necessary to survive an availability zone failure.
- D. Azure Load Balancer: Azure Load Balancer is a regional service that distributes traffic within a single region. It does not provide the cross-zone or cross-region failover required to maintain availability if an entire availability zone fails.
You are planning to migrate a large-scale PostgreSQL database to Azure. The database must be highly available and support read replicas to scale out read operations. Which Azure database service should you recommend?
A. Azure SQL Managed Instance
B. Azure Database for PostgreSQL
C. Azure Cosmos DB
Answer: B. Azure Database for PostgreSQL
Reasoning: The requirement is to migrate a large-scale PostgreSQL database to Azure with high availability and support for read replicas to scale out read operations. Azure Database for PostgreSQL is purpose-built for PostgreSQL workloads and offers both high availability and read replicas, making it the most suitable choice for this scenario.
Breakdown of non-selected options:
- A. Azure SQL Managed Instance: This option is designed for SQL Server databases, not PostgreSQL. It cannot host a PostgreSQL database, so it is not suitable for this requirement.
- C. Azure Cosmos DB: While Cosmos DB is a globally distributed, multi-model database service, it is not a PostgreSQL engine. It does not support PostgreSQL features such as read replicas in the way Azure Database for PostgreSQL does, making it less suitable for this scenario.
You have an on-premises server named Server1 running Windows Server 2019. Server1 hosts a SQL Server database that is 2 TB in size. You want to copy this database to an Azure Blob Storage account named store1. You need to recommend an Azure service that can achieve this goal while minimizing costs and ensuring high availability of the database. Which Azure service should you recommend?
A. Azure Storage Explorer
B. Azure Backup
C. Azure Site Recovery
D. Azure Database Migration Service
Answer: D. Azure Database Migration Service
Reasoning:
Azure Database Migration Service is specifically designed to facilitate the migration of databases to Azure, including SQL Server databases, and can use an Azure Blob Storage account such as store1 to hold backup files during migration. It provides a streamlined, reliable, and cost-effective way to migrate large databases while keeping the database highly available during the migration process.
Breakdown of non-selected options:
- A. Azure Storage Explorer: This is a tool for managing Azure Storage resources, not a service for migrating databases. It does not provide the capabilities needed for a large-scale database migration with high availability.
- B. Azure Backup: This service is primarily used for backing up data to Azure, not for migrating databases. It does not offer the features needed for database migration.
- C. Azure Site Recovery: This service is designed for disaster recovery, not database migration. It is not optimized for moving SQL Server databases to Azure Blob Storage.
You need to deploy a highly available, globally scalable web application on Azure. The solution must meet the following requirements: support autoscaling based on user traffic, ensure high availability across multiple regions, and provide low latency for users worldwide. Which Azure services should you use to achieve these requirements?
A. Azure Traffic Manager, Azure Load Balancer, Azure Virtual Machines
B. Azure App Service, Azure Traffic Manager, Azure SQL Database
C. Azure App Service, Azure Front Door, Azure Cosmos DB
D. Azure Kubernetes Service, Azure Traffic Manager, Azure Cosmos DB
Answer: C. Azure App Service, Azure Front Door, Azure Cosmos DB
Reasoning:
- Azure App Service: This service is ideal for deploying web applications, as it supports autoscaling and high availability. It can automatically scale out to handle increased traffic and provides built-in load balancing.
- Azure Front Door: This service is designed for global routing and provides low latency by directing user traffic to the nearest available backend. It also supports high availability across multiple regions.
- Azure Cosmos DB: This globally distributed database service ensures low latency and high availability for data access worldwide, making it suitable for applications that require global scalability.
Breakdown of non-selected options:
- A. Azure Traffic Manager, Azure Load Balancer, Azure Virtual Machines: While this combination can provide high availability and load balancing, it requires more manual configuration and management than Azure App Service. Azure Load Balancer is regional and does not provide the global routing necessary for low latency worldwide.
- B. Azure App Service, Azure Traffic Manager, Azure SQL Database: Azure Traffic Manager provides DNS-based traffic routing, but it does not offer the same global load balancing and low latency as Azure Front Door. Azure SQL Database, while highly available, is not as globally distributed as Azure Cosmos DB.
- D. Azure Kubernetes Service, Azure Traffic Manager, Azure Cosmos DB: Azure Kubernetes Service is a powerful option for containerized applications but requires more management and configuration than Azure App Service. Azure Traffic Manager, as mentioned, does not provide the same global routing capabilities as Azure Front Door.
Introductory Information Case Study - This is a case study. Case studies are not timed separately. You can use as much exam time as you need to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you can complete all questions included in this exam within the time provided. To answer the questions included in a case study, you will need to reference information provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study - To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Overview - Fabrikam, Inc. is an engineering company that has offices throughout Europe. The company has a main office in London and three branch offices in Amsterdam, Berlin, and Rome.
Existing Environment: Active Directory Environment - The network contains two Active Directory forests named corp.fabrikam.com and rd.fabrikam.com. There are no trust relationships between the forests. Corp.fabrikam.com is a production forest that contains identities used for internal user and computer authentication. Rd.fabrikam.com is used by the research and development (R&D) department only. The R&D department is restricted to using on-premises resources only.
Existing Environment: Network Infrastructure - Each office contains at least one domain controller from the corp.fabrikam.com forest. The main office contains all the domain controllers for the rd.fabrikam.com forest. All the offices have a high-speed connection to the internet. An existing application named WebApp1 is hosted in the data center of the London office. WebApp1 is used by customers to place and track orders. WebApp1 has a web tier that uses Microsoft Internet Information Services (IIS) and a database tier that runs Microsoft SQL Server 2016. The web tier and the database tier are deployed to virtual machines that run on Hyper-V. The IT department currently uses a separate Hyper-V environment to test updates to WebApp1. Fabrikam purchases all Microsoft licenses through a Microsoft Enterprise Agreement that includes Software Assurance.
Existing Environment: Problem Statements - The use of WebApp1 is unpredictable. At peak times, users often report delays. At other times, many resources for WebApp1 are underutilized.
Requirements: Planned Changes - Fabrikam plans to move most of its production workloads to Azure during the next few years, including virtual machines that rely on Active Directory for authentication. As one of its first projects, the company plans to establish a hybrid identity model, facilitating an upcoming Microsoft 365 deployment. All R&D operations will remain on-premises. Fabrikam plans to migrate the production and test instances of WebApp1 to Azure.
Requirements: Technical Requirements - Fabrikam identifies the following technical requirements: Website content must be easily updated from a single point. User input must be minimized when provisioning new web app instances. Whenever possible, existing on-premises licenses must be used to reduce cost. Users must always authenticate by using their corp.fabrikam.com UPN identity. Any new deployments to Azure must be redundant in case an Azure region fails. Whenever possible, solutions must be deployed to Azure by using the Standard pricing tier of Azure App Service. An email distribution group named IT Support must be notified of any issues relating to the directory synchronization services. In the event that a link fails between Azure and the on-premises network, ensure that the virtual machines hosted in Azure can authenticate to Active Directory. Directory synchronization between Azure Active Directory (Azure AD) and corp.fabrikam.com must not be affected by a link failure between Azure and the on-premises network.
Requirements: Database Requirements - Fabrikam identifies the following database requirements: Database metrics for the production instance of WebApp1 must be available for analysis so that database administrators can optimize the performance settings. To avoid disrupting customer access, database downtime must be minimized when databases are migrated. Database backups must be retained for a minimum of seven years to meet compliance requirements.
Requirements: Security Requirements - Fabrikam identifies the following security requirements: Company information including policies, templates, and data must be inaccessible to anyone outside the company. Users on the on-premises network must be able to authenticate to corp.fabrikam.com if an internet link fails. Administrators must be able to authenticate to the Azure portal by using their corp.fabrikam.com credentials. All administrative access to the Azure portal must be secured by using multi-factor authentication (MFA). The testing of WebApp1 updates must not be visible to anyone outside the company.
Question: You need to recommend a notification solution for the IT Support distribution group. What should you include in the recommendation?
A. a SendGrid account with advanced reporting
B. an action group
C. Azure Network Watcher
D. Azure AD Connect Health
Answer: D. Azure AD Connect Health
Reasoning: The question requires a notification solution for the IT Support distribution group, specifically for directory synchronization services. Azure AD Connect Health is designed to monitor and provide insights into the health of your on-premises identity infrastructure, including directory synchronization. It can send alerts and notifications to specified recipients, such as the IT Support distribution group, when issues are detected with the directory synchronization services.
Breakdown of non-selected options:
A. a SendGrid account with advanced reporting - SendGrid is primarily used for sending emails and managing email campaigns. While it can send notifications, it is not designed to monitor directory synchronization services or provide health insights for Azure AD Connect.
B. an action group - Action groups are used in Azure Monitor to trigger actions such as sending emails or SMS when an alert fires. While they can deliver notifications, they are not tailored to directory synchronization services; Azure AD Connect Health provides more targeted monitoring and alerting for this purpose.
C. Azure Network Watcher - Azure Network Watcher is a network performance monitoring, diagnostic, and analytics service. It is unrelated to directory synchronization and would not be suitable for notifying the IT Support group about synchronization issues.
You have an Azure Storage account containing sensitive data, and you want to encrypt the data at rest using customer-managed keys. Which encryption algorithm and key length should you use for the encryption keys?
A. RSA 2048
B. RSA 3072
C. AES 128
D. AES 256
Answer: D. AES 256
Reasoning: Azure Storage supports encryption of data at rest using customer-managed keys, and the recommended encryption algorithm for this purpose is AES (Advanced Encryption Standard) with a key length of 256 bits. AES 256 is widely recognized for its strong security and is a standard choice for encrypting sensitive data. It provides a good balance between security and performance, making it suitable for encrypting data at rest in Azure Storage.
Breakdown of non-selected options:
- A. RSA 2048: RSA is an asymmetric encryption algorithm, which is not typically used for encrypting data at rest because it is computationally expensive and inefficient for large data volumes. It is more commonly used for encrypting small amounts of data, such as keys or digital signatures.
- B. RSA 3072: Like RSA 2048, RSA 3072 is an asymmetric algorithm and is not suitable for encrypting large volumes of data at rest. It is primarily used for secure key exchange and digital signatures.
- C. AES 128: While AES 128 is a symmetric encryption algorithm like AES 256, its shorter key length offers a lower security margin. AES 256 is preferred for sensitive data because it provides a higher level of security.
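The practical difference between the options is the key type and length: AES is symmetric, and a 256-bit AES key is simply 32 random bytes, twice the length of an AES-128 key. A minimal standard-library sketch:

```python
import secrets

# AES is a symmetric cipher: the same key encrypts and decrypts.
# Key sizes are fixed by the standard: AES-128 = 16 bytes, AES-256 = 32 bytes.
aes_128_key = secrets.token_bytes(16)   # 128 bits
aes_256_key = secrets.token_bytes(32)   # 256 bits

assert len(aes_128_key) * 8 == 128
assert len(aes_256_key) * 8 == 256

# RSA 2048/3072 keys are asymmetric key *pairs* and are orders of magnitude
# slower per byte, which is why bulk data at rest is encrypted with AES and
# RSA is reserved for wrapping or exchanging the AES key.
```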
You plan to deploy 50 applications to Azure. These applications will be deployed across five Azure Kubernetes Service (AKS) clusters, with each cluster located in a different Azure region. The application deployment must meet the following requirements: ✑ Ensure that the applications remain available if a single AKS cluster fails. ✑ Ensure that internet traffic is encrypted using SSL without configuring SSL on each container. Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer
Answer: A. Azure Front Door
Reasoning: Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and highly available web applications. It provides SSL termination, meaning it handles SSL encryption and decryption at the edge and lets you offload SSL from your applications. This meets the requirement of encrypting internet traffic with SSL without configuring SSL on each container. Additionally, Azure Front Door can route traffic to multiple regions, providing high availability and resilience if a single AKS cluster fails.
Breakdown of non-selected options:
B. Azure Traffic Manager: While Azure Traffic Manager can distribute traffic across multiple regions and provide high availability, it is a DNS-based service and does not handle SSL termination, so it cannot encrypt internet traffic without SSL being configured on each container.
C. AKS ingress controller: An ingress controller can manage SSL termination, but it operates at the cluster level, so SSL would have to be configured for each of the five AKS clusters individually. That does not satisfy the requirement to avoid per-container SSL configuration.
D. Azure Load Balancer: Azure Load Balancer operates at the network layer (Layer 4) and does not provide SSL termination. It is primarily used to distribute traffic within a single region or cluster, so it meets neither requirement.
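Conceptually, Front Door terminates TLS at the edge and load-balances across the five regional cluster endpoints, removing unhealthy ones from rotation. A simplified model of that behavior (hostnames, regions, and the probe path are illustrative placeholders, not the actual Front Door resource schema):

```python
# Simplified model of a Front Door profile fronting five regional AKS
# clusters. Hostnames, regions, and the health-probe path are placeholders.
front_door = {
    "tlsTermination": "edge",      # SSL handled by Front Door, not per container
    "healthProbePath": "/healthz",
    "backends": [
        {"host": f"aks-{region}.example.com", "enabled": True}
        for region in ["westeurope", "northeurope", "eastus",
                       "westus", "southeastasia"]
    ],
}

def healthy_backends(cfg, failed_hosts):
    """Backends still eligible for traffic after health probes flag failures."""
    return [b["host"] for b in cfg["backends"]
            if b["enabled"] and b["host"] not in failed_hosts]

# If the West Europe cluster fails, traffic flows to the remaining four:
survivors = healthy_backends(front_door, {"aks-westeurope.example.com"})
print(len(survivors))  # 4
```

The same probe-and-evict mechanism is what lets the applications remain available when a single cluster fails, while TLS is configured once at the edge.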
You need to deploy a highly available web application on Azure. The solution must meet the following requirements: use a managed database service, be highly available within a single region, and support autoscaling based on user traffic. Which Azure services should you use to achieve these requirements?
A. Azure Virtual Machines, Azure Load Balancer, Azure SQL Database
B. Azure App Service, Azure Load Balancer, Azure SQL Database
C. Azure Kubernetes Service, Azure Load Balancer, Azure Cosmos DB
D. Azure App Service, Azure Application Gateway, Azure Cosmos DB
Answer: D. Azure App Service, Azure Application Gateway, Azure Cosmos DB
Reasoning:
- Azure App Service is a fully managed platform for building, deploying, and scaling web apps. It supports autoscaling and is highly available within a single region, making it suitable for hosting the web application.
- Azure Application Gateway is a web traffic load balancer for managing traffic to web applications. It provides high availability and autoscaling, which align with the requirements.
- Azure Cosmos DB is a fully managed NoSQL database service that offers high availability and scalability, satisfying the managed database service requirement.
Breakdown of non-selected options:
- A. Azure Virtual Machines, Azure Load Balancer, Azure SQL Database: While Azure SQL Database is a managed database service, Azure Virtual Machines require more management overhead than Azure App Service. Azure Load Balancer provides load balancing but lacks the application-level routing and autoscaling of Azure Application Gateway.
- B. Azure App Service, Azure Load Balancer, Azure SQL Database: Azure App Service and Azure SQL Database are suitable choices, but Azure Load Balancer handles network-level load balancing, whereas application-level traffic is better handled by Azure Application Gateway.
- C. Azure Kubernetes Service, Azure Load Balancer, Azure Cosmos DB: Azure Kubernetes Service is a good option for containerized applications but requires more management than Azure App Service, and Azure Load Balancer is less suitable than Azure Application Gateway for application-level traffic management.
You need to design a highly available Azure Function App that meets the following requirements:
✑ The function app must remain available during a zone outage.
✑ The function app must be scalable.
✑ Costs must be minimized.
Which deployment option should you use?
A. Function App on App Service Environment
B. Function App on Linux
C. Function App with Traffic Manager
D. Function App with Azure Load Balancer
Answer: C. Function App with Traffic Manager
Reasoning:
To design a highly available Azure Function App that remains available during a zone outage, scales, and minimizes costs, the most suitable option is a Function App with Traffic Manager. Traffic Manager distributes traffic across multiple regions, providing high availability and resilience against zone outages. It also supports automatic failover and load balancing, which supports scalability. Additionally, Traffic Manager is a cost-effective solution compared with deploying to an App Service Environment.
Breakdown of non-selected options:
A. Function App on App Service Environment: While this option provides high availability and scalability, it is considerably more expensive than using Traffic Manager. An App Service Environment is typically used for isolated, high-security workloads, which are not required in this scenario.
B. Function App on Linux: This is simply a hosting option for the Function App. It does not inherently provide availability across zones and does not address the requirement to remain available during a zone outage.
D. Function App with Azure Load Balancer: Azure Load Balancer distributes traffic within a single region and does not provide global distribution or cross-region failover, which is needed to stay available during a zone outage.
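To sketch why Traffic Manager satisfies the availability requirement: DNS answers follow endpoint health, so a priority-routing profile resolves to the next endpoint when probing marks the primary degraded. A minimal model (the function app hostnames are hypothetical):

```python
# Minimal model of Traffic Manager priority routing across two Function App
# endpoints in different regions. Endpoint hostnames are hypothetical.
profile = {
    "trafficRoutingMethod": "Priority",
    "endpoints": [
        {"target": "func-primary.azurewebsites.net",
         "priority": 1, "status": "Online"},
        {"target": "func-secondary.azurewebsites.net",
         "priority": 2, "status": "Online"},
    ],
}

def resolve(p):
    """Target a DNS query resolves to: the lowest-priority healthy endpoint."""
    healthy = [e for e in p["endpoints"] if e["status"] == "Online"]
    return min(healthy, key=lambda e: e["priority"])["target"] if healthy else None

assert resolve(profile) == "func-primary.azurewebsites.net"

# An outage takes the primary endpoint offline; DNS fails over automatically.
profile["endpoints"][0]["status"] = "Degraded"
assert resolve(profile) == "func-secondary.azurewebsites.net"
```

Because failover happens in DNS, the Function Apps themselves stay on inexpensive Consumption plans, which is what keeps this option cheaper than an App Service Environment.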
You need to design a highly available Azure Storage account that meets the following requirements: ✑ The storage account must remain available during a zone outage. ✑ The storage account must be highly performant. ✑ Costs must be minimized. Which deployment option should you choose?
A. Geo-redundant storage
B. Zone-redundant storage
C. Premium storage
D. Standard storage with read-access geo-redundant storage
Answer: B. Zone-redundant storage
Reasoning:
The question requires a storage solution that remains available during a zone outage, is highly performant, and minimizes costs. Zone-redundant storage (ZRS) replicates data synchronously across multiple availability zones within a region, so the storage account remains available even if one zone goes down. ZRS also offers good performance and is generally more cost-effective than the geo-redundant options, making it the most suitable choice given the requirements.
Breakdown of non-selected options:
A. Geo-redundant storage - While this option provides high availability by replicating data to a secondary region, it is more expensive than ZRS and unnecessary for zone-level redundancy. It may also introduce higher replication latency than ZRS.
C. Premium storage - This option is designed for high-performance workloads but does not inherently provide zone redundancy. It is also more costly, which conflicts with the requirement to minimize costs.
D. Standard storage with read-access geo-redundant storage - This option provides geo-redundancy and read access during regional outages, but it is more expensive than ZRS and unnecessary for zone-level redundancy, and the performance might not be as high as required.
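The choice comes down to the SKU: redundancy is selected when the storage account is created. A sketch of the ARM-style account properties (the account name is a placeholder; the `sku.name` values follow Azure's `Standard_LRS`/`Standard_ZRS`/`Standard_GRS`/`Standard_RAGRS` naming):

```python
# ARM-style properties for a zone-redundant StorageV2 account.
# The account name and location are illustrative placeholders.
storage_account = {
    "name": "mystorageaccount",
    "kind": "StorageV2",
    "location": "westeurope",
    "sku": {"name": "Standard_ZRS"},  # three synchronous replicas, one per zone
}

# ZRS keeps the account available if one of the three zones is lost,
# without paying for a secondary region the way GRS/RA-GRS do.
print(storage_account["sku"]["name"])  # Standard_ZRS
```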
You are designing a highly secure Azure solution that requires encryption of data at rest for a SQL Server database. You plan to use Azure SQL Managed Instance. Which encryption algorithm and key length should you use for Transparent Data Encryption (TDE)?
A. RSA 2048
B. AES 128
C. AES 256
D. RSA 3072
Answer: C. AES 256
Reasoning: Azure SQL Managed Instance uses Transparent Data Encryption (TDE) to encrypt data at rest, and TDE in Azure SQL Managed Instance uses the AES encryption algorithm. Among the options provided, AES 256 is the most suitable choice because it offers a higher level of security than AES 128 due to its longer key length. AES 256 is a widely accepted standard for strong encryption and is commonly used to secure sensitive data.
Breakdown of non-selected options:
A. RSA 2048 - RSA is an asymmetric encryption algorithm and is not used for encrypting data at rest in databases such as Azure SQL Managed Instance. TDE uses symmetric encryption, specifically AES.
B. AES 128 - While AES 128 is a valid encryption algorithm for TDE, AES 256 provides stronger encryption due to its longer key length, making it the more secure choice.
D. RSA 3072 - Like RSA 2048, RSA 3072 is an asymmetric algorithm and is not used for TDE in Azure SQL Managed Instance; TDE relies on symmetric encryption with AES.
You are designing an Azure IoT solution involving 100,000 IoT devices, each streaming data such as location, speed, and time. Approximately 100,000 records will be written every second. You need to recommend a service to store and query the data. Which two services can you recommend? Each correct answer presents a complete solution.
A. Azure Cosmos DB for NoSQL
B. Azure Stream Analytics
C. Azure Event Hubs
D. Azure SQL Database
Answer: A. Azure Cosmos DB for NoSQL
Answer: D. Azure SQL Database
Reasoning:
The question requires a solution for storing and querying data from 100,000 IoT devices, with a high write throughput of approximately 100,000 records per second. The solution must handle large-scale data ingestion and provide querying capabilities.
- Azure Cosmos DB for NoSQL: This service is designed for high throughput and low latency, making it suitable for IoT scenarios with massive data ingestion. It supports horizontal scaling and can handle the required write throughput efficiently. It also offers rich querying capabilities, making it a suitable choice for this scenario.
- Azure SQL Database: This service can handle large volumes of data and provides robust querying capabilities. With features such as elastic pools and scaling options, it can be configured to manage high write throughput, and it suits scenarios where relational storage and complex querying are required.
Breakdown of non-selected options:
- Azure Stream Analytics: This service is primarily used for real-time data processing and analytics rather than storage. It is designed to process data streams and provide insights, but it is not a storage solution. Therefore, it is not suitable for the requirement of storing and querying data.
- Azure Event Hubs: This service is designed for data ingestion and event streaming, not for storage or querying. It acts as an event ingestor that can capture and store data temporarily, but it is not intended for long-term storage or complex querying. Hence, it is not suitable for the requirement of storing and querying data.
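Cosmos DB absorbs this write rate by hashing a partition key (such as device ID) across physical partitions, so the 100,000 writes/second are spread out. The following is a stdlib sketch of the idea only; the hash and the partition count here are illustrative, not Cosmos internals:

```python
from collections import Counter

def partition_for(device_id: str, partitions: int = 10) -> int:
    # A stable hash of the partition key picks the physical partition;
    # Cosmos DB does something analogous with its own hash function.
    return sum(device_id.encode()) % partitions

# 100,000 writes in one second, one per device, spread over 10 partitions.
writes = Counter(partition_for(f"device-{i}") for i in range(100_000))
print(len(writes), sum(writes.values()))  # 10 partitions, 100000 writes total
```

A well-chosen, high-cardinality partition key keeps the per-partition load roughly even, which is what lets throughput scale horizontally.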
You need to deploy a web application that requires horizontal scaling in an Azure subscription. The solution must meet the following requirements: the web application must have access to the full .NET Framework, be hosted in a Platform as a Service (PaaS) environment, and provide automatic scaling based on CPU utilization. Which Azure service should you use?
A. Azure App Service
B. Azure Virtual Machines
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)
Answer: A. Azure App Service
Reasoning: Azure App Service is a Platform as a Service (PaaS) offering that supports web applications with access to the full .NET Framework. It provides automatic scaling based on CPU utilization, which aligns with the requirement for horizontal scaling. Azure App Service is specifically designed for hosting web applications and offers built-in scaling features, making it the most suitable choice for this scenario.
Breakdown of non-selected options:
- B. Azure Virtual Machines: This option provides Infrastructure as a Service (IaaS), not PaaS. While it can run the full .NET Framework, it does not inherently provide automatic scaling based on CPU utilization without additional configuration and management, making it less suitable for the requirements.
- C. Azure Kubernetes Service (AKS): AKS is a container orchestration service that can provide horizontal scaling, but it is more complex to set up and manage than Azure App Service. It is not specifically tailored for hosting web applications with the full .NET Framework in a PaaS environment.
- D. Azure Container Instances (ACI): ACI is a service for running containers without managing servers, but it does not provide the full PaaS experience for web applications with the full .NET Framework. It also lacks built-in automatic scaling based on CPU utilization, making it less suitable for the given requirements.
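The CPU-based autoscaling described above reduces to a simple rule: compare average CPU over a window to thresholds and adjust the instance count within bounds. A hedged sketch of that evaluation; the thresholds and limits here are invented, not App Service defaults:

```python
def decide_instance_count(current: int, avg_cpu: float,
                          scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                          min_count: int = 1, max_count: int = 10) -> int:
    """One autoscale evaluation: grow under load, shrink when idle, stay in bounds."""
    if avg_cpu > scale_out_at:
        return min(current + 1, max_count)
    if avg_cpu < scale_in_at:
        return max(current - 1, min_count)
    return current

assert decide_instance_count(2, 85.0) == 3    # scale out under load
assert decide_instance_count(2, 10.0) == 1    # scale in when idle
assert decide_instance_count(10, 99.0) == 10  # never exceeds the maximum
```

The gap between the scale-out and scale-in thresholds is what prevents "flapping" between instance counts.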
You are designing an Azure IoT solution involving 10,000 IoT devices that will each stream data, including temperature, humidity, and timestamp data. Approximately 10,000 records will be written every second. You need to recommend a service to store and query the data. Which two services can you recommend? Each option presents a complete solution.
A. Azure Table Storage
B. Azure Event Grid
C. Azure Cosmos DB for NoSQL
D. Azure Time Series Insights
Answer: C. Azure Cosmos DB for NoSQL
Answer: D. Azure Time Series Insights
Reasoning:
The question requires a solution to store and query high-volume streaming data from IoT devices. The solution must handle approximately 10,000 records per second, which implies the need for a scalable and efficient data storage and querying service.
- Azure Cosmos DB for NoSQL: This service is designed for high-throughput and low-latency data access, making it suitable for IoT scenarios where large volumes of data are ingested and queried. It supports flexible schemas and global distribution, which are beneficial for IoT data storage and querying.
- Azure Time Series Insights: This service is specifically designed for IoT data, providing capabilities to store, query, and visualize time-series data. It is optimized for handling large volumes of time-stamped data, such as temperature and humidity readings from IoT devices, making it an ideal choice for this scenario.
Breakdown of non-selected options:
- A. Azure Table Storage: While Azure Table Storage can handle large volumes of data, it is not optimized for querying complex data patterns or for high-throughput scenarios like those required for IoT data streams. It lacks the advanced querying capabilities needed for this use case.
- B. Azure Event Grid: This service is primarily used for event routing and handling, not for data storage or querying. It is designed to manage events and notifications rather than store large volumes of IoT data, making it unsuitable for the requirements of this question.
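The time-series queries this scenario needs are mostly bucketed aggregations over timestamps. A stdlib sketch of averaging humidity per minute (the record shape is assumed for illustration, not a Time Series Insights schema):

```python
from collections import defaultdict
from statistics import mean

readings = [
    {"ts": "2024-01-01T00:00:05", "humidity": 40.0},
    {"ts": "2024-01-01T00:00:45", "humidity": 42.0},
    {"ts": "2024-01-01T00:01:10", "humidity": 55.0},
]

buckets = defaultdict(list)
for r in readings:
    buckets[r["ts"][:16]].append(r["humidity"])  # truncate ISO timestamp to the minute

per_minute = {minute: mean(vals) for minute, vals in buckets.items()}
print(per_minute)  # {'2024-01-01T00:00': 41.0, '2024-01-01T00:01': 55.0}
```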
You have an Azure subscription. You need to recommend an Azure Kubernetes Service (AKS) solution that will use Linux nodes. The solution must meet the following requirements:
✑ Minimize the time it takes to provision compute resources during scale-out operations.
✑ Support autoscaling of Linux containers.
✑ Minimize administrative effort.
Which scaling option should you recommend?
A. Horizontal Pod Autoscaler
B. Cluster Autoscaler
C. Virtual Nodes
D. Virtual Kubelet
Answer: C. Virtual Nodes
Reasoning:
- The requirement is to minimize the time it takes to provision compute resources during scale-out operations, support autoscaling of Linux containers, and minimize administrative effort.
- Virtual Nodes in AKS allow for burstable workloads by integrating with Azure Container Instances (ACI), which can provision compute resources very quickly, thus minimizing the time for scale-out operations.
- Virtual Nodes support autoscaling by allowing AKS to scale out to ACI when the cluster runs out of capacity, which also reduces administrative effort since it offloads the management of additional nodes.
Breakdown of non-selected options:
- A. Horizontal Pod Autoscaler: This option adjusts the number of pods in a deployment based on CPU utilization or other selected metrics. While it supports autoscaling, it does not directly minimize the time to provision compute resources or reduce the administrative effort of node management.
- B. Cluster Autoscaler: This option automatically adjusts the number of nodes in a cluster based on pending pods. While it supports autoscaling, it involves provisioning new VMs, which takes more time than using Virtual Nodes with ACI.
- D. Virtual Kubelet: This is an open-source project that allows Kubernetes to connect to other APIs, such as ACI. However, in the context of AKS, Virtual Nodes is the specific implementation that integrates with ACI, making Virtual Nodes the more appropriate choice for the requirements given.
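The trade-off between options B and C can be summarized as "provision a VM (minutes)" versus "burst pods to ACI through the virtual node (seconds)". A toy scheduling sketch under invented capacities:

```python
def schedule(pods: int, node_capacity: int = 30) -> tuple:
    """Place pods on the fixed node pool first; overflow bursts to ACI."""
    on_nodes = min(pods, node_capacity)
    on_aci = pods - on_nodes  # virtual-node overflow: no VM provisioning wait
    return on_nodes, on_aci

assert schedule(25) == (25, 0)   # fits in the existing node pool
assert schedule(50) == (30, 20)  # 20 pods burst to ACI via the virtual node
```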
You need to recommend a solution to generate a monthly report of all new Azure Resource Manager (ARM) resource deployments in your Azure subscription. What should you include in the recommendation?
A. Azure Activity Log
B. Azure Arc
C. Azure Analysis Services
D. Azure Monitor action groups
Answer: A. Azure Activity Log
Reasoning:
The Azure Activity Log provides a record of all activities related to resource management in your Azure subscription. It includes information about new resource deployments, modifications, and deletions. This makes it the most suitable option for generating a monthly report of all new Azure Resource Manager (ARM) resource deployments.
Breakdown of non-selected options:
- B. Azure Arc: Azure Arc is a service that extends Azure management and services to any infrastructure. It is not specifically designed for tracking or reporting on resource deployments within an Azure subscription.
- C. Azure Analysis Services: Azure Analysis Services is a fully managed platform as a service (PaaS) that provides enterprise-grade data models in the cloud. It is used for data analysis and does not directly track or report on Azure resource deployments.
- D. Azure Monitor action groups: Azure Monitor action groups are used to define a set of actions to take when an alert is triggered. They are not used for generating reports on resource deployments.
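Once the Activity Log entries are exported (for example, as JSON each month), the report itself is a filter on successful write operations plus a group-by. A sketch over an assumed event shape; the field names below mimic the log but are not an exact schema:

```python
from collections import Counter

events = [
    {"operationName": "Microsoft.Compute/virtualMachines/write", "status": "Succeeded"},
    {"operationName": "Microsoft.Storage/storageAccounts/write", "status": "Succeeded"},
    {"operationName": "Microsoft.Compute/virtualMachines/delete", "status": "Succeeded"},
]

# New deployments surface as successful */write operations in the log.
deployments = Counter(
    e["operationName"].rsplit("/", 1)[0]
    for e in events
    if e["operationName"].endswith("/write") and e["status"] == "Succeeded"
)
print(dict(deployments))
```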
You have an Azure VM running a Windows Server 2019 image. You plan to enable BitLocker on the VM to encrypt the system drive. Which encryption algorithm and key length should you use for BitLocker on the system drive?
A. AES-128
B. AES-256
C. XTS-AES 128
D. XTS-AES 256
Answer: D. XTS-AES 256
Reasoning: When enabling BitLocker on a system drive, especially in a cloud environment like Azure, it is important to choose an encryption algorithm that provides strong security. XTS-AES is a mode of AES encryption that is specifically designed for disk encryption and is considered more secure than the older CBC mode. Between the two XTS-AES options, the 256-bit key length (XTS-AES 256) offers a higher level of security than the 128-bit key length (XTS-AES 128). Therefore, XTS-AES 256 is the most suitable choice for encrypting the system drive with BitLocker on an Azure VM.
Breakdown of non-selected options:
- A. AES-128: While AES-128 is a secure encryption algorithm, it is not as strong as AES-256. Additionally, it does not use the XTS mode, which is more suitable for disk encryption.
- B. AES-256: Although AES-256 provides strong encryption, it does not utilize the XTS mode, which is specifically designed for disk encryption and offers additional security benefits.
- C. XTS-AES 128: XTS-AES 128 uses the XTS mode, which is suitable for disk encryption, but it provides a lower level of security than XTS-AES 256 due to the shorter key length.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements: ✑ Provide access to the full .NET Framework. ✑ Ensure redundancy in case an Azure region fails. ✑ Allow administrators access to the operating system to install custom application dependencies. Solution: You deploy an Azure virtual machine to each Azure region and configure Azure Traffic Manager. Does this meet the goal?
A. Yes
B. No
Answer: A. Yes
Reasoning: The solution requires hosting a stateless web app with access to the full .NET Framework, redundancy across Azure regions, and administrative access to the operating system. Deploying an Azure virtual machine (VM) in each region satisfies these requirements:
- Full .NET Framework Access: Azure VMs can run Windows Server; which supports the full .NET Framework.
- Redundancy: By deploying VMs in multiple regions and using Azure Traffic Manager, which provides DNS-based traffic routing, the solution ensures redundancy and failover capabilities in case one region fails.
- Administrative Access: Azure VMs provide full administrative access to the operating system; allowing installation of custom application dependencies.
Breakdown of Non-Selected Answer Option:
- B. No: This option is incorrect because the proposed solution does meet all the specified requirements. Deploying VMs in multiple regions with Traffic Manager ensures redundancy, and VMs provide the necessary access to the full .NET Framework and administrative control over the OS.
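Traffic Manager's failover behavior is DNS-level: each lookup resolves to the highest-priority endpoint that is currently healthy. A minimal sketch of priority routing (the regions and health flags are made up):

```python
def resolve(endpoints):
    """Return the highest-priority healthy endpoint, as priority routing would."""
    healthy = [e for e in endpoints if e["healthy"]]
    return min(healthy, key=lambda e: e["priority"])["region"] if healthy else None

endpoints = [
    {"region": "eastus", "priority": 1, "healthy": False},  # primary region down
    {"region": "westeurope", "priority": 2, "healthy": True},
]
assert resolve(endpoints) == "westeurope"  # DNS fails over to the secondary
```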
You need to deploy a web application with persistent storage in an Azure subscription. The solution must meet these requirements: the web application must use the full .NET Framework, be hosted in a Platform as a Service (PaaS) environment, and provide persistent storage for application data. Which Azure service should you use?
A. Azure App Service
B. Azure Virtual Machines
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)
Answer: A. Azure App Service
Reasoning: Azure App Service is a Platform as a Service (PaaS) offering that supports hosting web applications using the full .NET Framework. It also provides options for persistent storage through Azure Storage or other integrated services. This makes it the most suitable choice for deploying a web application with persistent storage in a PaaS environment.
Breakdown of non-selected options:
- B. Azure Virtual Machines: This is an Infrastructure as a Service (IaaS) offering, not PaaS. While it can host applications using the full .NET Framework and provide persistent storage, it does not meet the requirement of being a PaaS solution.
- C. Azure Kubernetes Service (AKS): While AKS is a managed container orchestration service, it is more complex and typically used for microservices architectures. It is not the most straightforward choice for hosting a simple web application with the full .NET Framework.
- D. Azure Container Instances (ACI): ACI is a service for running containers without managing servers, but it is not specifically designed for hosting full .NET Framework applications. It also does not inherently provide persistent storage solutions suitable for this scenario.
You have a large volume of structured data that needs to be stored in Azure. Which Azure service should you choose to ensure high scalability, availability, and performance?
A. Azure Blob Storage
B. Azure Data Lake Storage Gen2
C. Azure Cosmos DB
D. Azure SQL Database
Answer: C. Azure Cosmos DB
Reasoning: Azure Cosmos DB is a globally distributed, multi-model database service that provides high scalability, availability, and performance. It is designed to handle large volumes of structured data with low latency and offers features like automatic scaling, global distribution, and multiple consistency models, making it highly suitable for applications requiring these capabilities.
Breakdown of non-selected options:
A. Azure Blob Storage: While Azure Blob Storage is highly scalable and available, it is primarily designed for unstructured data storage, such as documents, images, and backups. It is not optimized for structured data and does not provide the database functionality required for structured data management.
B. Azure Data Lake Storage Gen2: This service is optimized for big data analytics and is suitable for storing large volumes of data in a hierarchical file system. However, it is not specifically designed for structured data and does not offer the database features needed for high-performance structured data operations.
D. Azure SQL Database: Azure SQL Database is a fully managed relational database service that provides high availability and performance for structured data. However, it may not offer the same level of global distribution and scalability as Azure Cosmos DB, especially for applications requiring multi-region deployments and low-latency access across the globe.
You need to deploy a highly scalable and resilient web application in Azure. The application must meet the following requirements: it must be stateless, use a managed database service, and scale to handle large volumes of user traffic. Which Azure services should you use to achieve these requirements?
A. Azure Functions, Azure Cosmos DB
B. Azure App Service, Azure SQL Database
C. Azure Kubernetes Service, Azure Cosmos DB
D. Azure Container Instances, Azure SQL Database
Answer: B. Azure App Service, Azure SQL Database
Reasoning:
- The requirement is to deploy a highly scalable and resilient web application that is stateless, uses a managed database service, and can handle large volumes of user traffic.
- Azure App Service is a fully managed platform for building, deploying, and scaling web apps. It supports stateless applications and can automatically scale to handle large volumes of traffic, making it suitable for this requirement.
- Azure SQL Database is a fully managed relational database service that offers high availability, scalability, and security, which satisfies the requirement for a managed database.
Breakdown of non-selected options:
- A. Azure Functions, Azure Cosmos DB: Azure Functions is suitable for stateless applications and can scale, but it is better suited to event-driven, serverless workloads than to a traditional web application. Azure Cosmos DB is a managed database service, but the combination is not the most typical for a web application.
- C. Azure Kubernetes Service, Azure Cosmos DB: Azure Kubernetes Service (AKS) is suitable for deploying containerized applications and can scale, but it requires more management overhead than Azure App Service. Azure Cosmos DB is a managed database service, but the combination is more complex than necessary for a typical web application.
- D. Azure Container Instances, Azure SQL Database: Azure Container Instances is suitable for running containers but does not provide the same level of scalability and management as Azure App Service. Azure SQL Database is a managed database service, but the combination is not as well suited to a scalable web application as Azure App Service with Azure SQL Database.
You are planning to deploy an Azure SQL Database instance. You need to ensure that you can restore the database to any point within the past hour. The solution must minimize costs. Which pricing tier should you choose?
A. General Purpose
B. Basic
C. Standard
D. Premium
Answer: A. General Purpose
You are designing an application to store confidential medical records. The application must be highly available and offer fast read and write performance. Additionally, the data must be encrypted both at rest and in transit. Which Azure storage option should you recommend?
A. Azure Files
B. Azure Blob Storage
C. Azure Disk Storage
D. Azure Cosmos DB
Answer: B. Azure Blob Storage
Reasoning: Azure Blob Storage is a highly scalable and durable object storage solution that is suitable for storing large amounts of unstructured data, such as medical records. It offers high availability and fast read/write performance, which are key requirements for the application. Additionally, Azure Blob Storage supports encryption at rest using Azure Storage Service Encryption (SSE) and encryption in transit using HTTPS, meeting the security requirements for confidential data.
Breakdown of non-selected options:
A. Azure Files: While Azure Files provides managed file shares and supports encryption, it is typically used for scenarios where file sharing is needed across multiple virtual machines. It may not offer the same level of performance and scalability as Azure Blob Storage for large-scale unstructured data storage.
C. Azure Disk Storage: Azure Disk Storage is primarily used for persistent storage for Azure Virtual Machines. While it offers encryption and high performance, it is not optimized for storing large volumes of unstructured data like medical records, making it less suitable for this scenario.
D. Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service. While it offers high availability and low latency, it is designed for scenarios requiring complex querying and transactional capabilities, which may not be necessary for storing medical records. Additionally, it may be more costly and complex than needed for simple storage of unstructured data.
You are designing a healthcare application that will collect real-time patient data from various IoT devices. The application requires a database solution that can scale horizontally, support JSON documents, and provide low-latency reads and writes. Which database solution should you recommend?
A. Azure Cosmos DB with MongoDB API
B. Azure SQL Database with JSON support
C. Azure Database for PostgreSQL with Hyperscale
D. Azure Time Series Insights
Answer: A. Azure Cosmos DB with MongoDB API
Reasoning:
The requirements for the database solution include the ability to scale horizontally, support JSON documents, and provide low-latency reads and writes. Azure Cosmos DB with MongoDB API is designed to meet these requirements. It is a globally distributed, multi-model database service that natively supports JSON documents and offers horizontal scaling with low-latency access to data. The MongoDB API allows for compatibility with MongoDB applications, making it a suitable choice for applications that require JSON document support.
Breakdown of non-selected options:
- B. Azure SQL Database with JSON support: While Azure SQL Database does support JSON, it is a relational database and does not inherently provide the same level of horizontal scalability and low-latency reads and writes as Azure Cosmos DB. It is better suited to structured data and traditional relational database use cases.
- C. Azure Database for PostgreSQL with Hyperscale: Although Hyperscale (Citus) can provide horizontal scaling for PostgreSQL, it is primarily designed for relational data and does not natively support JSON documents in the same way as a NoSQL database like Cosmos DB. It may not provide the same low-latency performance for JSON document workloads.
- D. Azure Time Series Insights: This is a fully managed analytics, storage, and visualization service for managing IoT-scale time-series data. It is not a general-purpose database solution and is specifically designed for time-series data analysis, making it unsuitable for the broader requirements of the healthcare application described.
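The document model described here is schemaless JSON keyed for partitioning. A stdlib sketch of storing and querying such readings (the field names are invented, not a MongoDB API schema):

```python
import json

# Each document is free-form JSON; patientId doubles as a partition-style key.
store = {}

def upsert(doc_json: str) -> None:
    doc = json.loads(doc_json)
    store.setdefault(doc["patientId"], []).append(doc)

upsert('{"patientId": "p1", "pulse": 72, "ts": 1}')
upsert('{"patientId": "p1", "pulse": 88, "ts": 2}')

# Query: latest reading for a patient, a point read within one partition.
latest = max(store["p1"], key=lambda d: d["ts"])
assert latest["pulse"] == 88
```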
You are designing an Azure solution that requires encryption of data at rest for a highly sensitive database. You plan to use Azure SQL Database. Which encryption algorithm and key length should you choose for Transparent Data Encryption (TDE)?
A. RSA 2048
B. AES 128
C. AES 256
D. RSA 3072
Answer: C. AES 256
Reasoning: Azure SQL Database uses Transparent Data Encryption (TDE) to encrypt data at rest. TDE in Azure SQL Database uses the AES encryption algorithm. Among the options provided, AES 256 is the most suitable choice because it offers a higher level of security than AES 128 due to its longer key length. AES 256 is a standard choice for encrypting sensitive data, providing a strong balance between security and performance.
Breakdown of non-selected options:
- A. RSA 2048: RSA is an asymmetric encryption algorithm, which is not typically used for encrypting data at rest due to its computational intensity and inefficiency for large data volumes. TDE uses symmetric encryption, making RSA unsuitable for this purpose.
- B. AES 128: While AES 128 is a valid encryption algorithm for TDE, AES 256 provides a higher level of security due to its longer key length, making it a more suitable choice for highly sensitive data.
- D. RSA 3072: Similar to RSA 2048, RSA 3072 is an asymmetric encryption algorithm and is not used for encrypting data at rest in TDE. Symmetric encryption like AES is preferred for this purpose.
You are designing a SQL database solution for an e-commerce company that needs to store and process customer orders and inventory data. The company requires data replication to a disaster recovery site for high availability. The solution must meet a Service Level Agreement (SLA) of 99.99% uptime and have reserved capacity, while minimizing compute charges. Which database platform should you recommend?
A. Azure SQL Database vCore
B. Azure SQL Database Managed Instance
C. Azure SQL Database Hyperscale
D. Azure SQL Database Zone-redundant configuration
Answer: B. Azure SQL Database Managed Instance
Reasoning: Azure SQL Database Managed Instance is the most suitable option for this scenario because it provides a fully managed SQL Server instance with built-in high availability and disaster recovery capabilities. It supports data replication to a disaster recovery site, which is essential for the company’s requirement of high availability. Managed Instance also offers a 99.99% SLA for uptime, meeting the company’s SLA requirement. Additionally, it allows for reserved capacity pricing, which can help minimize compute charges over time.
Breakdown of non-selected options:
A. Azure SQL Database vCore: While it offers flexibility in terms of compute and storage resources, it does not inherently provide the same level of built-in high availability and disaster recovery features as Managed Instance. It may require additional configuration and resources to meet the disaster recovery requirements.
C. Azure SQL Database Hyperscale: This option is designed for very large databases and provides high scalability, but it may not be necessary for the company’s needs unless they have extremely large data volumes. It also does not address the disaster recovery requirement as effectively as Managed Instance.
D. Azure SQL Database Zone-redundant configuration: This configuration provides high availability within a single region by distributing replicas across availability zones. However, it does not inherently provide cross-region disaster recovery, which is a key requirement for the company.
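It helps to see what a 99.99% SLA actually permits; the downtime budget is simple arithmetic:

```python
# A 30-day month has 30 * 24 * 60 = 43,200 minutes; a 99.99% SLA
# leaves at most 0.01% of that as allowable downtime.
minutes_per_month = 30 * 24 * 60
allowed_downtime = minutes_per_month * (1 - 0.9999)
print(round(allowed_downtime, 2))  # ~4.32 minutes per month
```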
You plan to deploy a containerized application to Azure Kubernetes Service (AKS). The application is critical to the business and must be highly available. The application deployment must meet the following requirements: ✑ Ensure that the application remains available if a single AKS node fails. ✑ Ensure that the internet traffic is encrypted using SSL without configuring SSL on each container. Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer
Answer: C. AKS ingress controller
Reasoning:
To ensure high availability and SSL termination for a containerized application on Azure Kubernetes Service (AKS), an AKS ingress controller is the most suitable option. An ingress controller can manage external access to the services in a cluster, typically HTTP, and can provide SSL termination, meaning it handles SSL encryption and decryption and offloads this task from the individual containers. This aligns with the requirement to encrypt internet traffic using SSL without configuring SSL on each container.
Breakdown of non-selected options:
- A. Azure Front Door: While Azure Front Door can provide SSL termination and global load balancing, it is more suited to global routing and web application acceleration. It is not directly integrated with AKS for managing internal traffic and high availability within a Kubernetes cluster.
- B. Azure Traffic Manager: This service is used for DNS-based traffic routing and is not suitable for SSL termination or for managing traffic within an AKS cluster. It is more appropriate for distributing traffic across multiple regions or endpoints.
- D. Azure Load Balancer: This service provides Layer 4 (TCP/UDP) load balancing and does not handle SSL termination. It is not suitable for managing HTTP/HTTPS traffic directly or for providing SSL termination for AKS applications.
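Conceptually, the ingress controller terminates TLS once at the edge and forwards plain HTTP to a healthy pod replica, which is exactly why no per-container SSL is needed. A toy routing sketch (the hostname, pod names, and health flags are invented):

```python
# One TLS certificate lives at the ingress; backends never see TLS config.
routes = {"shop.example.com": ["pod-a", "pod-b", "pod-c"]}

def handle(host: str, pod_up: dict) -> str:
    # TLS would be terminated here, before any backend is chosen.
    for pod in routes[host]:
        if pod_up[pod]:  # skip replicas on a failed node
            return pod
    raise RuntimeError("no healthy backend")

# Surviving replicas keep serving when one node's pod is down.
assert handle("shop.example.com",
              {"pod-a": False, "pod-b": True, "pod-c": True}) == "pod-b"
```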
You are designing a web application that requires a database backend. The application will have a high number of concurrent users and must support complex queries. Which Azure database service should you select?
A. Azure SQL Database
B. Azure Database for MySQL
C. Azure Cosmos DB
Answer: A. Azure SQL Database
Reasoning: Azure SQL Database is a fully managed relational database service that is highly suitable for applications requiring complex queries and a high number of concurrent users. It supports advanced querying capabilities, including complex joins, stored procedures, and full-text search, making it ideal for applications with complex query requirements. Additionally, Azure SQL Database offers scalability and performance features that can handle a high number of concurrent users effectively.
Breakdown of non-selected options:
B. Azure Database for MySQL: While Azure Database for MySQL is a managed relational database service, it is typically used for applications that already use MySQL or require specific MySQL features. It can handle complex queries, but Azure SQL Database is generally more optimized for high concurrency and complex query scenarios in the Azure ecosystem.
C. Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service designed for high availability and low latency. It is excellent for scenarios requiring global distribution and horizontal scaling, but it is not primarily optimized for complex relational queries. It is better suited to NoSQL workloads and scenarios requiring flexible schemas and high throughput.
You have 100 Microsoft SQL Server Integration Services (SSIS) packages configured to use 10 on-premises SQL Server databases as their destinations. You plan to migrate these 10 on-premises databases to Azure SQL Database. You need to recommend a solution to create Azure SQL Server Integration Services (SSIS) packages. The solution must ensure that the packages can target the SQL Database instances as their destinations. What should you include in the recommendation?
A. Data Migration Assistant (DMA)
B. Azure Data Factory
C. Azure Data Catalog
D. SQL Server Migration Assistant (SSMA)
Answer: B. Azure Data Factory
Reasoning: Azure Data Factory (ADF) is a cloud-based data integration service that lets you create data-driven workflows for orchestrating and automating data movement and data transformation. It supports running SSIS packages in the cloud via the Azure-SSIS integration runtime, which makes it suitable for migrating and running existing SSIS packages that target Azure SQL Database. ADF provides the capability to lift and shift existing SSIS packages to Azure, ensuring they can target Azure SQL Database instances as their destinations.
Breakdown of non-selected options:
A. Data Migration Assistant (DMA): DMA is primarily used for assessing and migrating on-premises SQL Server databases to Azure SQL Database. It helps identify compatibility issues and provides recommendations for migration. However, it is not used for creating or running SSIS packages.
C. Azure Data Catalog: Azure Data Catalog is an enterprise-wide metadata catalog that enables self-service data asset discovery. It is not used for creating or running SSIS packages, nor does it facilitate data migration or integration tasks.
D. SQL Server Migration Assistant (SSMA): SSMA is a tool designed to automate the migration of database schemas and data from various database platforms to SQL Server or Azure SQL Database. While it assists in database migration, it does not handle SSIS package creation or execution.
You need to deploy resources to host a stateful web app in an Azure subscription. The solution must meet the following requirements: ✑ Ensure high availability of the database. ✑ Provide access to the .NET Core runtime environment. ✑ Allow administrators to manage the database. Solution: You deploy an Azure SQL Database with geo-replication and an Azure Virtual Machine running the .NET Core runtime environment. You grant the necessary permissions to administrators. Does this solution meet the requirements?
A. Yes
B. No
Answer: A. Yes
Reasoning: The solution involves deploying an Azure SQL Database with geo-replication and an Azure Virtual Machine running the .NET Core runtime environment. This setup meets the requirements as follows:
- High availability of the database is ensured through Azure SQL Database with geo-replication, which provides redundancy and failover capabilities.
- The .NET Core runtime environment is provided by the Azure Virtual Machine, which can be configured to run .NET Core applications.
- Administrators can manage the database through Azure SQL Database, which offers various management tools and permission settings.
Breakdown of non-selected answer option:
B. No - This option is not selected because the proposed solution does meet all the specified requirements: high availability, the .NET Core runtime, and database management capabilities.
You are planning to deploy an app that will utilize an Azure Storage account. You need to deploy a storage account that meets the following requirements: ✑ Store data for multiple users. ✑ Encrypt each user’s data with a separate key. ✑ Encrypt all data in the storage account using customer-managed keys. What should you deploy?
A. Blobs in a general-purpose v2 storage account
B. Files in a premium file share storage account
C. Blobs in an Azure Data Lake Storage Gen2 account
D. Files in a general-purpose v2 storage account
Answer: A. Blobs in a general-purpose v2 storage account
Reasoning:
The requirements call for storing data for multiple users, encrypting each user’s data with a separate key, and encrypting all data in the account with customer-managed keys. Blob storage in a general-purpose v2 account supports encryption scopes, which let you encrypt different containers or blobs (for example, one scope per user) with separate keys, while account-level encryption uses a customer-managed key stored in Azure Key Vault. This combination satisfies all three requirements.
Breakdown of non-selected options:
B. Files in a premium file share storage account - Azure Files does not support encryption scopes, so there is no way to encrypt each user’s data with a separate key.
C. Blobs in an Azure Data Lake Storage Gen2 account - A hierarchical namespace helps organize data per user, but it does not by itself provide per-user encryption keys, and encryption scopes are not supported on accounts with a hierarchical namespace enabled.
D. Files in a general-purpose v2 storage account - As with option B, Azure Files lacks encryption scopes, so per-user encryption keys are not available.
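The per-user key requirement maps to a key-hierarchy idea: each user’s data is encrypted under its own key, with every key rooted in a single customer-managed master key. A stdlib-only sketch of that idea (HMAC-based derivation here is purely illustrative, not how Azure implements per-user encryption):

```python
import hashlib
import hmac
import secrets

# Account-level "customer-managed key" (in Azure this would live in Key Vault).
master_key = secrets.token_bytes(32)

def derive_user_key(master: bytes, user_id: str) -> bytes:
    """Derive a distinct, deterministic per-user key from the master key."""
    return hmac.new(master, user_id.encode(), hashlib.sha256).digest()

alice_key = derive_user_key(master_key, "alice")
bob_key = derive_user_key(master_key, "bob")

# Each user gets a different key; rotating the master key changes all of them.
assert alice_key != bob_key
assert derive_user_key(master_key, "alice") == alice_key
```

The practical point: one customer-managed root key can govern many per-user data keys, which is the shape of the requirement in this question.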
Your company deploys several virtual machines both on-premises and in Azure. ExpressRoute is set up and configured for connectivity between on-premises and Azure. Some virtual machines are experiencing network connectivity issues. You need to analyze the network traffic to determine if packets are being allowed or denied to the virtual machines. Solution: Use Network Performance Monitor in Azure Network Watcher to analyze the network traffic. Does this meet the goal?
A. Yes
B. No
Answer: B. No
Reasoning: The requirement is to determine whether packets are being allowed or denied to the virtual machines. Network Performance Monitor in Azure Network Watcher measures network performance across hybrid connections such as ExpressRoute, reporting metrics like latency and packet loss. It does not show whether specific traffic is permitted or blocked by security rules. To determine whether packets are allowed or denied, you would instead use the IP flow verify capability or NSG flow logs in Azure Network Watcher. Therefore, the proposed solution does not meet the goal.
Breakdown of non-selected answer option:
A. Yes - This option is incorrect because Network Performance Monitor analyzes performance metrics rather than evaluating whether traffic is allowed or denied, so the proposed solution does not satisfy the stated requirement.
You are designing an Azure environment for a large enterprise and need to ensure that all Azure resources comply with the organization’s policies. Which Azure Policy scope should you use to achieve this goal?
A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. Subscriptions
D. Compute resources
E. Resource groups
F. Management groups
Answer: F. Management groups
Reasoning:
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce rules and effects over your resources so that those resources stay compliant with your corporate standards and service-level agreements. When designing an Azure environment for a large enterprise, it is important to ensure that all resources comply with organizational policies. Management groups in Azure provide a way to manage access, policies, and compliance across multiple subscriptions. They allow you to apply policies at a higher level than individual subscriptions, which is ideal for large enterprises with multiple subscriptions.
Breakdown of non-selected options:
A. Azure Active Directory (Azure AD) administrative units - These are used to delegate administrative permissions within Azure AD, not for applying policies to Azure resources.
B. Azure Active Directory (Azure AD) tenants - Tenants are instances of Azure AD used for identity management; they are not a scope for assigning Azure Policy.
C. Subscriptions - While you can apply policies at the subscription level, management groups allow for broader policy application across multiple subscriptions, which is more suitable for large enterprises.
D. Compute resources - This is too granular for applying organizational policies across an enterprise. Policies should be applied at a higher level.
E. Resource groups - These are used to manage resources within a subscription, but applying policies at this level would not ensure compliance across the entire organization. Management groups provide a broader scope.
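A policy assigned at a management group is inherited by every subscription, resource group, and resource beneath it, which is why it is the right scope here. A small sketch of that inheritance (the hierarchy and policy names are hypothetical):

```python
# Each scope maps to its parent; effective policies are collected by
# walking up from the resource to the root management group.
parents = {
    "vm1": "rg1",
    "rg1": "sub1",
    "sub1": "mg-enterprise",
    "mg-enterprise": None,
}
assignments = {
    "mg-enterprise": ["allowed-locations"],
    "sub1": ["require-tags"],
}

def effective_policies(scope):
    """Return all policy assignments that apply at the given scope."""
    policies = []
    while scope is not None:
        policies.extend(assignments.get(scope, []))
        scope = parents[scope]
    return policies

# A policy assigned once at the management group reaches every
# subscription and resource below it.
print(effective_policies("vm1"))
```

Assigning "allowed-locations" once at `mg-enterprise` covers every subscription under it, whereas assigning per subscription or resource group would have to be repeated and kept in sync.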
You need to deploy a web application with multiple dependencies on the operating system in an Azure subscription. The solution must meet these requirements: The web application must have access to the full .NET framework; be hosted in a virtual machine (VM); and provide redundancy across multiple Azure regions. Which Azure service should you use?
A. Azure Virtual Machines
B. Azure App Service
C. Azure Kubernetes Service (AKS)
D. Azure Container Instances (ACI)
Answer: A. Azure Virtual Machines
Reasoning: The question specifies that the web application must have access to the full .NET Framework, be hosted in a virtual machine, and provide redundancy across multiple Azure regions. Azure Virtual Machines are the most suitable option because they allow full control over the operating system and can run the full .NET Framework. Additionally, VMs can be deployed across multiple regions to provide redundancy.
Breakdown of non-selected options:
- B. Azure App Service: While Azure App Service supports .NET applications, it does not provide the same level of control over the operating system as a VM does. It is a Platform as a Service (PaaS) offering, which may not meet the requirement for full .NET Framework access and specific OS dependencies.
- C. Azure Kubernetes Service (AKS): AKS is primarily used for containerized applications and may not be the best fit for applications requiring the full .NET Framework and specific OS dependencies. It also adds complexity if the application is not already containerized.
- D. Azure Container Instances (ACI): Like AKS, ACI runs containers and does not provide the full control over the operating system needed for applications with specific OS dependencies and full .NET Framework requirements.
You have an Azure AD tenant with a security group named Group1. Group1 is set up for assigned membership and includes several members, including guest users. You need to ensure that Group1 is reviewed monthly to identify members who no longer need access. Additionally, ensure that any members removed from the group are added to another security group. What solution should you recommend?
A. Implement Azure AD Identity Protection.
B. Change the membership type of Group1 to Dynamic User.
C. Create an access review for Group1 and configure a post-review action to add removed members to another security group.
D. Implement Azure AD Privileged Identity Management (PIM).
Answer: C. Create an access review for Group1 and configure a post-review action to add removed members to another security group.
Reasoning:
The requirement is to review the membership of Group1 monthly and ensure that any members removed from the group are added to another security group. Azure AD access reviews are designed for exactly this purpose: they let you periodically review group memberships and can be configured to take specific actions after the review, such as adding removed members to another group. This makes option C the most suitable solution.
Breakdown of non-selected options:
A. Implement Azure AD Identity Protection: This service focuses on identifying and responding to identity risks; it does not provide functionality for reviewing group memberships or managing post-review actions.
B. Change the membership type of Group1 to Dynamic User: A dynamic membership rule would manage group membership automatically based on user attributes, but it provides no mechanism for periodic reviews or for moving removed users to another group.
D. Implement Azure AD Privileged Identity Management (PIM): PIM is used to manage, control, and monitor privileged access within Azure AD, Azure, and other Microsoft online services. It is focused on privileged roles and does not address periodic group membership reviews or post-review actions.
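The post-review action amounts to a membership transfer: members the review denies are removed from the group and added to a fallback group. A sketch of that behavior with hypothetical member data (the real feature is configured declaratively in Azure AD access reviews, not coded by hand):

```python
# Group memberships as simple sets of user names (illustrative data).
group1 = {"alice", "bob", "guest1"}
archive_group = set()

def apply_review(group, denied, fallback):
    """Remove denied members from the group and add them to the fallback group."""
    for member in denied & group:
        group.discard(member)
        fallback.add(member)

# The monthly review decides guest1 no longer needs access.
apply_review(group1, {"guest1"}, archive_group)
print(sorted(group1))         # remaining members
print(sorted(archive_group))  # members moved after removal
```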
You are designing an application that needs to store and process large volumes of structured data, such as customer orders and inventory records. The application must be highly available and offer fast read and write performance. Which Azure storage option should you recommend?
A. Azure Cosmos DB
B. Azure Table Storage
C. Azure SQL Database
D. Azure Disk Storage
Answer: C. Azure SQL Database
Reasoning: Azure SQL Database is a fully managed relational database service that provides high availability, scalability, and fast read/write performance, making it well suited to storing and processing large volumes of structured data such as customer orders and inventory records. It supports the complex queries and transactions that structured data processing typically requires.
Breakdown of non-selected options:
- A. Azure Cosmos DB: While Cosmos DB offers high availability and fast performance, it is aimed at globally distributed applications and unstructured or semi-structured data. It is not the best fit for structured data that requires complex querying and transactional support.
- B. Azure Table Storage: This is a NoSQL key-value store that is highly scalable and cost-effective but lacks the advanced querying capabilities and transactional support needed for structured data processing.
- D. Azure Disk Storage: This is primarily used for virtual machine disks and does not provide the database capabilities required for processing structured data such as customer orders and inventory records.
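The transactional behavior that makes a relational database the right fit can be shown with stdlib sqlite3 standing in for Azure SQL Database (the schema is illustrative): placing an order and decrementing inventory either both happen or neither does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 10)")

# The 'with' block is a transaction: the order insert and the stock
# decrement commit together, or roll back together on error.
with conn:
    conn.execute("INSERT INTO orders (sku, qty) VALUES ('widget', 3)")
    conn.execute("UPDATE inventory SET qty = qty - 3 WHERE sku = 'widget'")

remaining = conn.execute(
    "SELECT qty FROM inventory WHERE sku = 'widget'").fetchone()[0]
print(remaining)  # 7
```

Key-value stores like Azure Table Storage cannot span this kind of multi-table atomic update, which is exactly why the relational option wins here.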
You need to recommend an Azure Storage solution that meets the following requirements:
- The storage must support 1 PB of data.
- The data must be stored in blob storage.
- The storage must support three levels of subfolders.
- The storage must support access control lists (ACLs).
What should you include in the recommendation?
A. A premium storage account configured for block blobs
B. A general-purpose v2 storage account with hierarchical namespace enabled
C. A premium storage account configured for page blobs
D. A premium storage account configured for file shares and supports large file shares
Answer: B. A general-purpose v2 storage account with hierarchical namespace enabled
Reasoning:
- The requirements specify blob storage that supports 1 PB of data, three levels of subfolders, and access control lists (ACLs).
- Azure Blob Storage with a hierarchical namespace enabled (also known as Azure Data Lake Storage Gen2) supports these requirements. It allows data to be organized into a hierarchy of directories and subdirectories, which satisfies the need for three levels of subfolders.
- Additionally, it supports ACLs, which are necessary for fine-grained access control.
Breakdown of non-selected options:
- A. A premium storage account configured for block blobs: While this option supports blob storage, it does not support hierarchical namespaces or ACLs, which are required for organizing data into subfolders and managing access control.
- C. A premium storage account configured for page blobs: Page blobs are typically used for scenarios like virtual hard disks (VHDs) and do not support hierarchical namespaces or ACLs.
- D. A premium storage account configured for file shares and supports large file shares: This option relates to Azure Files, which is not blob storage and therefore does not meet the requirement for blob storage with a hierarchical namespace and ACLs.
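The hierarchical namespace is what makes directory ACLs meaningful: as with POSIX semantics, reaching a file requires execute permission on every parent directory plus read permission on the file itself. A stdlib sketch with a made-up path and ACL table:

```python
# ACL entries keyed by path, mapping user -> "rwx"-style permission string.
# Three levels of subfolders lead to the file, as the requirement describes.
acls = {
    "sales": {"alice": "r-x"},
    "sales/2024": {"alice": "r-x"},
    "sales/2024/q1": {"alice": "r-x"},
    "sales/2024/q1/report.csv": {"alice": "r--"},
}

def can_read(user, path):
    """POSIX-style check: execute on every parent dir, read on the file."""
    parts = path.split("/")
    for i in range(1, len(parts)):
        parent = "/".join(parts[:i])
        if "x" not in acls.get(parent, {}).get(user, ""):
            return False
    return "r" in acls.get(path, {}).get(user, "")

print(can_read("alice", "sales/2024/q1/report.csv"))  # True
print(can_read("bob", "sales/2024/q1/report.csv"))    # False
```

Without a hierarchical namespace, blob "folders" are only name prefixes and no such per-directory access control exists, which is why option B is the answer.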
You have an app named App1 that uses an on-premises MySQL database named DB1. You plan to migrate DB1 to an Azure Database for MySQL. You need to enable customer-managed Transparent Data Encryption (TDE) for the database. The solution must maximize encryption strength. Which encryption algorithm and key length should you use for the TDE protector?
A. AES 192
B. RSA 2048
C. AES 128
D. RSA 3072
Answer: B. RSA 2048
Reasoning:
To enable customer-managed Transparent Data Encryption (TDE) for Azure Database for MySQL, the TDE protector must be an asymmetric key stored in Azure Key Vault, and the service requires an RSA key with a length of 2048 bits. Within that constraint, RSA 2048 is the strongest supported choice. The AES options are symmetric algorithms and are not used as TDE protectors, which require asymmetric (RSA) keys.
Breakdown of non-selected options:
- A. AES 192: AES is a symmetric encryption algorithm and is not used for TDE protectors, which require an asymmetric RSA key.
- C. AES 128: Like AES 192, AES 128 is symmetric and cannot serve as a TDE protector; it also offers less strength than AES 192.
- D. RSA 3072: Although a 3072-bit RSA key is cryptographically stronger than a 2048-bit key, Azure Database for MySQL accepts only RSA 2048 keys as the customer-managed TDE protector, so RSA 3072 cannot be used here.
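For context on "encryption strength", the commonly cited NIST SP 800-57 equivalences compare asymmetric key lengths to symmetric security levels; a tiny sketch of that comparison:

```python
# Approximate symmetric-equivalent security strength in bits (NIST SP 800-57).
security_bits = {
    "RSA 2048": 112,
    "RSA 3072": 128,
    "AES 128": 128,
    "AES 192": 192,
}

def stronger(a, b):
    """Return whichever algorithm/key-length pair has the higher strength."""
    return a if security_bits[a] >= security_bits[b] else b

print(stronger("RSA 3072", "RSA 2048"))  # RSA 3072
```

Note the asymmetry: a 192-bit AES key is far stronger per bit than an RSA key of similar nominal size, but only RSA keys can serve as a TDE protector, so the comparison that matters is among the RSA options the service supports.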
Note: This question is part of a series that presents the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one solution, while others might not have a solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company plans to deploy various Azure App Service instances that will use Azure SQL databases. The App Service instances will be deployed at the same time as the Azure SQL databases. The company has a regulatory requirement to deploy the App Service instances only to specific Azure regions. The resources for the App Service instances must reside in the same region. You need to recommend a solution to meet the regulatory requirement. Solution: You recommend creating resource groups based on locations and implementing resource locks on the resource groups. Does this meet the goal?
A. Yes
B. No
Answer: B. No
Reasoning: The proposed solution suggests creating resource groups based on locations and implementing resource locks on those resource groups. While creating resource groups based on locations can help organize resources by region, it does not enforce the deployment of App Service instances and Azure SQL databases to specific regions: a resource group's location determines only where its metadata is stored, and resources within it can be deployed to any region. Resource locks prevent accidental deletion or modification of resources; they do not enforce regional deployment requirements. Therefore, this solution does not meet the regulatory requirement; an Azure Policy assignment restricting allowed locations would.
Breakdown of non-selected answer option:
- A. Yes: This option is incorrect because the proposed solution does not enforce the deployment of resources to specific regions. Creating resource groups based on locations is an organizational strategy, and resource locks prevent changes rather than enforcing regional deployment, so the solution does not meet the stated goal.
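What would meet the goal is an Azure Policy assignment with an allowed-locations rule, whose deny effect can be sketched as a deployment-time check (the region list is illustrative):

```python
# Sketch of an "allowed locations" policy effect: deployments to regions
# outside the allowed list are denied before the resource is created.
allowed_locations = {"westeurope", "northeurope"}

def evaluate_deployment(location):
    """Return the policy decision for a deployment to the given region."""
    return "Allow" if location in allowed_locations else "Deny"

print(evaluate_deployment("westeurope"))  # Allow
print(evaluate_deployment("eastus"))      # Deny
```

Unlike resource locks, this check runs at deployment time and blocks non-compliant regions outright, which is the enforcement the regulatory requirement needs.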