Flashcards
What should you include in the identity management strategy to accommodate the planned changes?
A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.
C. Deploy a new Azure AD tenant for authenticating new R&D projects.
Answer: A. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
Reasoning: The question asks about accommodating planned changes in the identity management strategy. Deploying domain controllers for corp.fabrikam.com to virtual networks in Azure allows for extending the on-premises Active Directory environment into Azure, which is a common strategy for hybrid identity management. This approach supports seamless integration with existing infrastructure and provides flexibility for scaling and managing identities in a cloud environment.
Breakdown of non-selected options:
- B. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure: This option suggests moving all domain controllers to Azure, which might not be suitable if there is a need to maintain on-premises infrastructure for redundancy, compliance, or performance reasons. It could also introduce risks if connectivity to Azure is disrupted.
- C. Deploy a new Azure AD tenant for authenticating new R&D projects: Creating a new Azure AD tenant would separate the identity management for R&D projects from the existing corp.fabrikam.com domain, which might not align with the goal of accommodating planned changes within the existing identity management framework. This option could lead to increased complexity in managing multiple identity systems.
You have an Azure subscription that includes a virtual network. You need to ensure that the traffic between this virtual network and an on-premises network is encrypted. What should you recommend?
A. Azure AD Privileged Identity Management
B. Azure AD Conditional Access
C. Azure VPN Gateway
D. Azure Security Center
Answer: C. Azure VPN Gateway
Reasoning:
The requirement is to ensure that the traffic between an Azure virtual network and an on-premises network is encrypted. The most suitable solution for this scenario is a VPN (virtual private network) connection, which encrypts the data transmitted between the two networks. Azure VPN Gateway is specifically designed to provide secure cross-premises connectivity, making it the appropriate choice for encrypting traffic between an Azure virtual network and an on-premises network.
Breakdown of non-selected options:
A. Azure AD Privileged Identity Management - This service is used for managing, controlling, and monitoring access within Azure AD, not for encrypting network traffic between Azure and on-premises networks.
B. Azure AD Conditional Access - This feature is used to enforce access controls on Azure AD resources based on conditions, not for encrypting network traffic.
D. Azure Security Center - This service provides security management and threat protection for Azure resources, but it does not specifically handle encryption of network traffic between Azure and on-premises networks.
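To make the recommendation concrete, here is a minimal sketch of provisioning a route-based VPN gateway with the azure-mgmt-network Python SDK. The subscription ID, resource group, gateway name, and resource IDs are hypothetical, and it assumes the virtual network already has a GatewaySubnet and a public IP address; a site-to-site connection to the on-premises VPN device would still be created afterwards.

```python
# Hedged sketch: create a route-based VPN gateway (names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_network_gateways.begin_create_or_update(
    "rg-hybrid",     # hypothetical resource group
    "vpngw-onprem",  # hypothetical gateway name
    {
        "location": "eastus",
        "gateway_type": "Vpn",
        "vpn_type": "RouteBased",  # route-based gateways support IKEv2 site-to-site tunnels
        "sku": {"name": "VpnGw1", "tier": "VpnGw1"},
        "ip_configurations": [{
            "name": "gwipconfig",
            # Both IDs must reference existing resources: the virtual
            # network's GatewaySubnet and a public IP for the gateway.
            "subnet": {"id": "<GatewaySubnet-resource-id>"},
            "public_ip_address": {"id": "<public-ip-resource-id>"},
        }],
    },
)
gateway = poller.result()  # long-running operation; provisioning can take 30+ minutes
```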
You have an application that uses three on-premises Microsoft SQL Server databases. You plan to migrate these databases to Azure. The application requires server-side transactions across all three databases. What Azure solution should you recommend to meet this requirement?
A. Azure SQL Database Hyperscale
B. Azure SQL Database Managed Instance
C. Azure SQL Database Elastic Pool
D. Azure SQL Database Single Database
Answer: B. Azure SQL Database Managed Instance
Reasoning: The requirement is to support server-side transactions across three databases, which implies the need for features like distributed transactions or cross-database transactions. Azure SQL Database Managed Instance supports distributed transactions across multiple databases, making it suitable for this scenario. It provides near 100% compatibility with on-premises SQL Server, including support for features like cross-database queries and transactions, which are essential for the application in question.
Breakdown of non-selected options:
- A. Azure SQL Database Hyperscale: This option is designed for single databases with high scalability needs. It does not inherently support cross-database transactions, which are required in this scenario.
- C. Azure SQL Database Elastic Pool: Elastic Pools are used to manage and scale multiple databases with varying and unpredictable usage demands. However, they do not support cross-database transactions, which are necessary for the application.
- D. Azure SQL Database Single Database: This option is for single, isolated databases and does not support cross-database transactions, which are needed for the application to function correctly across the three databases.
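To illustrate the capability that drives this choice, here is a hedged sketch of a single server-side transaction spanning two databases on the same managed instance, using pyodbc from Python. The server name, credentials, databases, and tables are hypothetical.

```python
# Hedged sketch: one transaction across two databases on a managed instance.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<mi-name>.<dns-zone>.database.windows.net,1433;"
    "Uid=<user>;Pwd=<password>;Encrypt=yes;",
    autocommit=False,  # the statements below run in one transaction
)
cur = conn.cursor()
# Cross-database references like this work on SQL Server and on Azure SQL
# Managed Instance, but not on a single Azure SQL Database.
cur.execute("UPDATE Orders.dbo.OrderHeader SET Status = 'Billed' WHERE OrderId = ?", 42)
cur.execute("INSERT INTO Billing.dbo.Invoice (OrderId) VALUES (?)", 42)
conn.commit()  # both statements commit or roll back together
```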
You have an on-premises server named Server1 running Windows Server 2016. Server1 hosts a SQL Server database that is 4 TB in size. You need to migrate this database to an Azure Blob Storage account named store1. The migration process must be secure and encrypted. Which Azure service should you recommend?
A. Azure Data Box
B. Azure Site Recovery
C. Azure Database Migration Service
D. Azure Import/Export
Answer: A. Azure Data Box
Reasoning:
Azure Data Box is a service designed to transfer large amounts of data to Azure in a secure and efficient manner. Given the size of the SQL Server database (4 TB), Azure Data Box is suitable because it provides a physical device that is shipped to the customer, loaded with data securely, and then sent back to Microsoft for uploading to Azure. This method ensures data is encrypted in transit and is ideal for large datasets where network transfer might be impractical due to bandwidth limitations or time constraints.
Breakdown of non-selected options:
- B. Azure Site Recovery: This service is primarily used for disaster recovery and business continuity, allowing you to replicate on-premises servers to Azure for failover purposes. It is not designed for one-time data migrations to Azure Blob Storage.
- C. Azure Database Migration Service: This service is typically used for migrating databases to Azure SQL Database or Azure SQL Managed Instance, not directly to Azure Blob Storage. It focuses on database schema and data migration rather than bulk data transfer to storage accounts.
- D. Azure Import/Export: While this service can be used to transfer data to Azure by shipping hard drives, it is generally less efficient and less secure than Azure Data Box for large data sizes like 4 TB. Azure Data Box is specifically designed for such scenarios, offering a more streamlined and secure process.
Your company is migrating its on-premises virtual machines to Azure. These virtual machines will communicate with each other within the same virtual network using private IP addresses. You need to recommend a solution to prevent virtual machines that are not part of the migration from communicating with the migrating virtual machines. Which solution should you recommend?
A. Azure ExpressRoute
B. Network Security Groups (NSGs)
C. Azure Bastion
D. Azure Private Link
Answer: B. Network Security Groups (NSGs)
Reasoning: Network Security Groups (NSGs) are designed to filter network traffic to and from Azure resources in an Azure virtual network. They can be used to control inbound and outbound traffic to network interfaces, VMs, and subnets, making them suitable for isolating the migrating virtual machines from those not part of the migration. By configuring NSGs, you can specify rules that allow or deny traffic based on source and destination IP addresses, ports, and protocols, effectively preventing unwanted communication.
Breakdown of non-selected options:
- A. Azure ExpressRoute: This is a service that provides a private connection between an on-premises network and Azure, bypassing the public internet. It is not used for controlling communication between virtual machines within a virtual network.
- C. Azure Bastion: This is a service that provides secure and seamless RDP and SSH connectivity to virtual machines directly through the Azure portal. It is not used for controlling network traffic between virtual machines.
- D. Azure Private Link: This service provides private connectivity to Azure services over a private endpoint in your virtual network. It is not designed to control communication between virtual machines within the same virtual network.
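As a concrete illustration, here is a minimal sketch, with hypothetical names and address ranges, of the kind of NSG this answer relies on: it permits traffic among the migrated VMs' own address range and denies other traffic from inside the virtual network (azure-mgmt-network Python SDK).

```python
# Hedged sketch: an NSG that isolates the migrated VMs' subnet.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.network_security_groups.begin_create_or_update(
    "rg-migration",      # hypothetical resource group
    "nsg-migrated-vms",  # hypothetical NSG name
    {
        "location": "eastus",
        "security_rules": [
            {   # allow the migrated VMs to talk to each other
                "name": "AllowMigratedRange",
                "priority": 100,
                "direction": "Inbound",
                "access": "Allow",
                "protocol": "*",
                "source_address_prefix": "10.1.0.0/24",  # migrated VMs' range
                "source_port_range": "*",
                "destination_address_prefix": "10.1.0.0/24",
                "destination_port_range": "*",
            },
            {   # deny everything else arriving from the virtual network
                "name": "DenyOtherVnetTraffic",
                "priority": 200,
                "direction": "Inbound",
                "access": "Deny",
                "protocol": "*",
                "source_address_prefix": "VirtualNetwork",
                "source_port_range": "*",
                "destination_address_prefix": "*",
                "destination_port_range": "*",
            },
        ],
    },
).result()
# Associate the NSG with the migrated VMs' subnet or NICs to enforce the rules.
```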
You plan to deploy a microservices-based application to Azure. The application consists of several containerized services that need to communicate with each other. The application deployment must meet the following requirements: • Ensure that each service can scale independently. • Ensure that internet traffic is encrypted using SSL without configuring SSL on each container. Which service should you include in the recommendation?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Application Gateway
Answer: C. AKS ingress controller
Reasoning: The question requires a solution for deploying a microservices-based application with containerized services that can scale independently and have internet traffic encrypted using SSL without configuring SSL on each container. An AKS ingress controller is suitable for this scenario because it manages external access to the services in a Kubernetes cluster, including SSL termination, which means SSL can be handled at the ingress level rather than on each individual container. This allows each service to scale independently within the Kubernetes environment.
Breakdown of non-selected options:
- A. Azure Front Door: While Azure Front Door can handle SSL termination and provide global load balancing, it is more suited for routing traffic across multiple regions and does not inherently support scaling individual microservices within a Kubernetes cluster.
- B. Azure Traffic Manager: This service is primarily used for DNS-based traffic routing and does not handle SSL termination or provide the ability to scale individual services within a microservices architecture.
- D. Azure Application Gateway: Although it supports SSL termination and can route traffic to backend services, it is more suited for traditional web applications rather than containerized microservices that require independent scaling within a Kubernetes environment.
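To show the SSL-termination idea in practice, here is a hedged sketch that defines an ingress with a TLS section using the official kubernetes Python client. It assumes an ingress controller is already installed in the AKS cluster and that a TLS secret named app-tls exists; the host, service, and path names are hypothetical.

```python
# Hedged sketch: TLS terminates at the ingress, so backend containers serve plain HTTP.
from kubernetes import client, config

config.load_kube_config()  # uses the AKS cluster's kubeconfig

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="shop-ingress"),
    spec=client.V1IngressSpec(
        # SSL/TLS is terminated here; no certificate is configured on any container.
        tls=[client.V1IngressTLS(hosts=["shop.example.com"], secret_name="app-tls")],
        rules=[client.V1IngressRule(
            host="shop.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/orders",
                    path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="orders-svc",  # this deployment scales independently
                            port=client.V1ServiceBackendPort(number=80),
                        )
                    ),
                ),
            ]),
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```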
You have an on-premises storage solution that supports the Hadoop Distributed File System (HDFS) and uses Kerberos for authentication. You need to migrate this solution to Azure while ensuring it continues to use Kerberos. What should you use?
A. Azure Data Lake Storage Gen2
B. Azure NetApp Files
C. Azure Files
D. Azure Blob Storage
Answer: A. Azure Data Lake Storage Gen2
Reasoning: Azure Data Lake Storage Gen2 is designed to work with big data analytics and supports the Hadoop Distributed File System (HDFS) natively. It also integrates with Azure Active Directory (Azure AD) for authentication, which can be configured to support Kerberos authentication. This makes it the most suitable option for migrating an on-premises HDFS solution that uses Kerberos authentication to Azure.
Breakdown of non-selected options:
B. Azure NetApp Files: While Azure NetApp Files is a high-performance file storage service that supports NFS and SMB protocols, it is not specifically designed for HDFS workloads and does not natively support Kerberos authentication for HDFS.
C. Azure Files: Azure Files provides fully managed file shares in the cloud that are accessible via the SMB protocol. It is not designed for HDFS workloads and does not natively support Kerberos authentication for HDFS.
D. Azure Blob Storage: Azure Blob Storage is a scalable object storage solution for unstructured data. It does not natively support HDFS or Kerberos authentication, making it unsuitable for this scenario.
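For context on the HDFS compatibility mentioned above, here is a small sketch, with hypothetical account and path names, that exercises the hierarchical namespace of Data Lake Storage Gen2 through the azure-storage-file-datalake SDK; Hadoop tools reach the same account through the ABFS driver (abfss://<filesystem>@<account>.dfs.core.windows.net/).

```python
# Hedged sketch: HDFS-style directories and files in ADLS Gen2.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),  # Azure AD authentication, no account keys
)

fs = service.create_file_system("analytics")  # like an HDFS root volume
fs.create_directory("raw/sensors/2024")       # real directories, as in HDFS
file = fs.get_file_client("raw/sensors/2024/readings.csv")
file.upload_data(b"device_id,reading\n42,17.5\n", overwrite=True)
```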
You are designing an application that requires a MySQL database in Azure. The application must be highly available and support automatic failover. Which service tier should you recommend?
A. Basic
B. General Purpose
C. Memory Optimized
D. Serverless
Answer: B. General Purpose
Reasoning: The requirement is for a MySQL database in Azure that is highly available and supports automatic failover. Azure Database for MySQL offers different service tiers, each with specific features and capabilities. The General Purpose tier is designed to provide balanced compute and memory resources with high availability and automatic failover capabilities, making it suitable for most business workloads that require these features.
Breakdown of non-selected options:
A. Basic - The Basic tier is designed for workloads that do not require high availability or automatic failover. It is more suitable for development or testing environments rather than production environments that require high availability.
C. Memory Optimized - While the Memory Optimized tier provides high performance for memory-intensive workloads, it is not specifically designed for high availability and automatic failover. It focuses more on performance than on availability.
D. Serverless - The Serverless tier is designed for intermittent, unpredictable workloads and offers automatic scaling and billing based on actual usage. However, it does not inherently provide high availability and automatic failover, which are the key requirements in this scenario.
You are designing an IoT solution that involves 100,000 devices. These devices will stream data, including device ID, location, and sensor data, at a rate of 100 messages per second. The solution must store and analyze the data in real time. Which Azure service should you recommend?
A. Azure Data Explorer
B. Azure Stream Analytics
C. Azure Cosmos DB
D. Azure IoT Hub
Answer: B. Azure Stream Analytics
Reasoning: Azure Stream Analytics is specifically designed for real-time data processing and analysis. It can handle large volumes of data streaming from IoT devices, making it suitable for scenarios where data needs to be analyzed in real time. Given the requirement to store and analyze data in real time from 100,000 devices streaming at 100 messages per second, Azure Stream Analytics is the most appropriate choice.
Breakdown of non-selected options:
- A. Azure Data Explorer: While Azure Data Explorer is excellent for analyzing large volumes of data, it is more suited for exploratory data analysis and interactive analytics than for real-time streaming analytics.
- C. Azure Cosmos DB: Azure Cosmos DB is a globally distributed, multi-model database service. It is ideal for storing data with low latency but does not provide real-time analytics capabilities.
- D. Azure IoT Hub: Azure IoT Hub is a service for managing IoT devices and ingesting data from them. While it is essential for the IoT solution, it does not provide real-time data analysis capabilities on its own.
You are designing a highly available Azure web application that must remain operational during a regional outage. You need to minimize costs while ensuring no data loss during failover. Which Azure service should you use?
A. Azure App Service Standard
B. Azure App Service Premium
C. Azure Kubernetes Service (AKS)
D. Azure Service Fabric
Answer: B. Azure App Service Premium
Reasoning:
To ensure high availability and operational continuity during a regional outage, the application must be able to fail over to another region without data loss. Azure App Service Premium provides features such as Traffic Manager integration and geo-distribution, which are essential for maintaining availability across regions. It also includes built-in backup and restore capabilities, which help minimize data loss during failover. Additionally, the Premium tier offers better performance and scaling options than the Standard tier, which is crucial for handling increased loads during failover scenarios.
Breakdown of non-selected options:
- A. Azure App Service Standard: While this option provides basic scaling and availability features, it lacks the advanced geo-distribution and traffic management capabilities of the Premium tier, which are necessary for handling regional outages effectively.
- C. Azure Kubernetes Service (AKS): AKS is a container orchestration service that can provide high availability, but it requires more complex setup and management than Azure App Service. It may not be the most cost-effective solution for a web application that needs to minimize costs while ensuring no data loss.
- D. Azure Service Fabric: This is a distributed systems platform that can provide high availability and resilience. However, it is more complex to manage and may not be the most cost-effective solution for a simple web application compared to Azure App Service Premium, which offers built-in features for high availability and disaster recovery.
You are developing a sales application that will include several Azure cloud services to manage various components of a transaction. These services will handle customer orders; billing; payment; inventory; and shipping. You need to recommend a solution that allows these cloud services to communicate transaction information asynchronously using XML messages. What should you include in your recommendation?
A. Azure Service Fabric
B. Azure Data Lake
C. Azure Service Bus
D. Azure Traffic Manager
Answer: C. Azure Service Bus
Reasoning: Azure Service Bus is a messaging service that facilitates asynchronous communication between different services and applications. Message bodies are payload-agnostic, so the services can exchange XML documents, and Service Bus is designed to handle complex messaging workflows, making it suitable for scenarios where different components of a system need to communicate asynchronously. In this case, the sales application requires asynchronous communication between services handling customer orders, billing, payment, inventory, and shipping, which aligns well with the capabilities of Azure Service Bus.
Breakdown of non-selected options:
A. Azure Service Fabric: Azure Service Fabric is a distributed systems platform used to build and manage scalable and reliable microservices and containers. While it is useful for developing applications, it is not specifically designed for asynchronous messaging between services, which is the requirement in this scenario.
B. Azure Data Lake: Azure Data Lake is a storage service optimized for big data analytics workloads. It is not designed for messaging or communication between services, making it unsuitable for the requirement of asynchronous communication using XML messages.
D. Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions. It is not related to messaging or communication between services, and therefore does not meet the requirement for asynchronous communication using XML messages.
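As an illustration of the recommended pattern, here is a hedged sketch in which the orders service publishes an XML transaction message to a Service Bus queue and the billing service consumes it on its own schedule (azure-servicebus SDK). The connection string, queue name, and XML shape are hypothetical.

```python
# Hedged sketch: asynchronous XML messaging between services via Service Bus.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

ORDER_XML = "<order><id>42</id><amount>19.99</amount></order>"

# Producer (e.g., the orders service):
with ServiceBusClient.from_connection_string("<service-bus-connection-string>") as sb:
    with sb.get_queue_sender("billing-queue") as sender:
        sender.send_messages(ServiceBusMessage(ORDER_XML, content_type="application/xml"))

# Consumer (e.g., the billing service), running independently and asynchronously:
with ServiceBusClient.from_connection_string("<service-bus-connection-string>") as sb:
    with sb.get_queue_receiver("billing-queue") as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(msg))                 # parse the XML body here
            receiver.complete_message(msg)  # remove the message from the queue
```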
You need to implement disaster recovery for an on-premises Hadoop cluster that uses HDFS, with Azure as the replication target. Which Azure service should you use?
A. Azure Blob Storage
B. Azure Data Lake Storage Gen2
C. Azure Backup
D. Azure Site Recovery
Answer: B. Azure Data Lake Storage Gen2
Reasoning:
The question requires a solution for disaster recovery of an on-premises Hadoop cluster using HDFS, with Azure as the replication target. Azure Data Lake Storage Gen2 is specifically designed for big data analytics and is optimized for Hadoop workloads. It provides a hierarchical namespace and is compatible with HDFS, making it the most suitable choice for replicating Hadoop data.
Breakdown of non-selected options:
- A. Azure Blob Storage: While Azure Blob Storage can store large amounts of unstructured data, it does not provide the hierarchical namespace and HDFS compatibility that Azure Data Lake Storage Gen2 offers, which are crucial for Hadoop workloads.
- C. Azure Backup: Azure Backup is primarily used for backing up and restoring data, but it is not designed for replicating Hadoop clusters or handling HDFS data specifically.
- D. Azure Site Recovery: Azure Site Recovery is used for disaster recovery of entire virtual machines and applications, but it is not tailored for Hadoop clusters or HDFS data replication.
You have been tasked with implementing a governance solution for a large Azure environment containing numerous resource groups. You need to ensure that all resource groups comply with the organization’s policies. Which Azure Policy scope should you use?
A. Azure Active Directory (Azure AD) administrative units
B. Azure Active Directory (Azure AD) tenants
C. Subscriptions
D. Compute resources
E. Resource groups
F. Management groups
Answer: F. Management groups
Reasoning:
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements. When dealing with a large Azure environment containing numerous resource groups, it is important to apply policies at a level that can encompass all these resource groups efficiently. Management groups are designed to help manage access, policy, and compliance across multiple subscriptions. By applying policies at the management group level, you can ensure that all underlying subscriptions and their respective resource groups comply with the organization’s policies.
Breakdown of non-selected options:
A. Azure Active Directory (Azure AD) administrative units - These are used to delegate administrative permissions within Azure AD and are not related to Azure Policy scope for resource compliance.
B. Azure Active Directory (Azure AD) tenants - A tenant is a dedicated instance of Azure AD that an organization receives when it signs up for a Microsoft cloud service. It is not used for Azure Policy scope.
C. Subscriptions - While policies can be applied at the subscription level, using management groups allows for broader policy application across multiple subscriptions, which is more suitable for large environments.
D. Compute resources - This is a specific type of resource and not a scope for applying Azure Policies.
E. Resource groups - Policies can be applied at the resource group level, but this would require applying policies individually to each resource group, which is not efficient for a large environment.
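To ground the recommendation, here is a rough sketch, with hypothetical names, of assigning an existing policy definition at management group scope so that every subscription and resource group underneath inherits it (azure-mgmt-resource PolicyClient). The flat-dict parameter shape is an assumption; the SDK also accepts its PolicyAssignment model.

```python
# Hedged sketch: one policy assignment at management group scope covers
# all child subscriptions and their resource groups.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

client = PolicyClient(DefaultAzureCredential(), "<subscription-id>")

mg_scope = "/providers/Microsoft.Management/managementGroups/contoso-root"

client.policy_assignments.create(
    scope=mg_scope,
    policy_assignment_name="require-costcenter-tag",  # hypothetical name
    parameters={
        # ID of an existing policy definition; this GUID is a placeholder.
        "policy_definition_id": "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>",
        "display_name": "Require costCenter tag on resource groups",
    },
)
```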
You have an on-premises data center hosting several SQL Server instances. You plan to migrate some of these databases to Azure SQL Database Managed Instance. You need to recommend a migration solution that meets the following requirements: • Ensures minimal downtime during migration. • Supports on-premises instances running SQL Server 2008 R2. • Allows the migration of multiple databases in parallel. • Maintains compatibility with all SQL Server features. What should you include in your recommendation?
A. Use Azure Database Migration Service to migrate the databases.
B. Use SQL Server Integration Services to migrate the databases.
C. Upgrade the on-premises instances to SQL Server 2016, and then use Azure Database Migration Service to migrate the databases.
D. Use Data Migration Assistant to migrate the databases.
Answer: A. Use Azure Database Migration Service to migrate the databases.
Reasoning:
Azure Database Migration Service (DMS) is designed to facilitate the migration of databases to Azure with minimal downtime, which is a key requirement in this scenario. It supports migrations from SQL Server 2008 R2, allows multiple databases to be migrated in parallel, and maintains compatibility with SQL Server features. DMS is specifically built to handle such migrations efficiently and is the most suitable option given the requirements.
Breakdown of non-selected options:
B. Use SQL Server Integration Services to migrate the databases.
- SQL Server Integration Services (SSIS) is primarily used for data transformation and ETL processes rather than full database migrations. It does not inherently support minimal downtime or parallel migrations of multiple databases as effectively as DMS.
C. Upgrade the on-premises instances to SQL Server 2016, and then use Azure Database Migration Service to migrate the databases.
- While upgrading to SQL Server 2016 could be beneficial for other reasons, it is not necessary for the migration process itself. Azure DMS supports SQL Server 2008 R2 directly, making this step redundant and not aligned with the requirement for minimal downtime.
D. Use Data Migration Assistant to migrate the databases.
- Data Migration Assistant (DMA) is a tool used to assess and identify compatibility issues when migrating to Azure SQL Database, but it is not designed for the actual migration process, especially when minimal downtime and parallel migrations are required.
You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements: Provide access to the full .NET Framework, ensure redundancy in case an Azure region fails, and allow administrators access to the operating system to install custom application dependencies. Solution: You deploy an Azure VM Scale Set across two Azure regions and use an Azure Load Balancer to distribute traffic between the VMs in the Scale Set. Does this meet the goal?
A. Yes
B. No
Answer: A. Yes
Reasoning: The requirements for hosting a stateless web app include providing access to the full .NET Framework, ensuring redundancy in case an Azure region fails, and allowing administrators access to the operating system to install custom application dependencies. Deploying an Azure VM Scale Set across two Azure regions with an Azure Load Balancer meets these requirements as follows:
- Access to the full .NET Framework: Azure VMs can run Windows Server, which supports the full .NET Framework.
- Redundancy in case an Azure region fails: By deploying the VM Scale Set across two regions, the solution ensures that if one region fails, the other can continue to serve the application.
- Administrator access to the operating system: Azure VMs provide full access to the OS, allowing administrators to install custom application dependencies.
Breakdown of non-selected answer option:
B. No: This option is incorrect because the proposed solution does meet all the specified requirements. Deploying an Azure VM Scale Set across two regions with a load balancer provides the necessary redundancy, access to the full .NET Framework, and administrative access to the OS.
You have an Azure subscription that includes an Azure Storage account. You plan to implement Azure File Sync. What is the first step you should take to prepare the storage account for Azure File Sync?
A. Register the Microsoft.Storage resource provider.
B. Create a file share in the storage account.
C. Create a virtual network.
D. Install the Azure File Sync agent on a server.
Answer: B. Create a file share in the storage account.
Reasoning: To implement Azure File Sync, the first step is to create a file share in the Azure Storage account. Azure File Sync requires a file share to sync files between the on-premises server and the Azure cloud. This file share acts as the cloud endpoint for the sync process.
Breakdown of non-selected answer options:
- A. Register the Microsoft.Storage resource provider: This step is not necessary for preparing the storage account specifically for Azure File Sync. The Microsoft.Storage resource provider is typically registered by default in Azure subscriptions, and it is not a specific requirement for Azure File Sync setup.
- C. Create a virtual network: Creating a virtual network is not directly related to setting up Azure File Sync. Azure File Sync does not require a virtual network configuration as part of its initial setup process.
- D. Install the Azure File Sync agent on a server: While installing the Azure File Sync agent is a necessary step in the overall process, it is not the first step in preparing the storage account itself. The agent is installed on the on-premises server that will sync with the Azure file share.
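For completeness, here is a minimal sketch of that first step with the azure-storage-file-share SDK; the connection string and share name are hypothetical.

```python
# Hedged sketch: create the file share that becomes the sync group's cloud endpoint.
from azure.storage.fileshare import ShareServiceClient

service = ShareServiceClient.from_connection_string("<storage-connection-string>")
share = service.create_share("corp-files")  # cloud endpoint for Azure File Sync
print(share.url)
```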
You have a highly available application running on an AKS cluster in Azure. You need to ensure that the application is accessible over HTTPS without configuring SSL on each container. Which Azure service should you use?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS Ingress Controller
D. Azure Application Gateway
Answer: C. AKS Ingress Controller
Reasoning: An AKS ingress controller terminates SSL/TLS at the ingress layer, so the application is reachable over HTTPS while the containers behind it continue to serve plain HTTP; no SSL configuration is needed on each container. The ingress controller does require some configuration and management within the AKS cluster (for example, installing the controller and supplying a TLS certificate), and Azure Application Gateway can provide a more managed alternative for SSL termination outside the cluster, but of the listed options the AKS ingress controller most directly meets the requirement.
You have an on-premises storage solution that supports the Hadoop Distributed File System (HDFS). You need to migrate this solution to Azure and ensure it is accessible from multiple regions. What should you use?
A. Azure Data Lake Storage Gen2
B. Azure NetApp Files
C. Azure Files
D. Azure Blob Storage
Answer: A. Azure Data Lake Storage Gen2
Reasoning: Azure Data Lake Storage Gen2 is specifically designed to handle big data analytics workloads and is fully compatible with HDFS, making it an ideal choice for migrating an on-premises HDFS solution to Azure. It also provides high scalability and can be accessed from multiple regions, which aligns with the requirement of ensuring accessibility from multiple regions.
Breakdown of non-selected options:
- B. Azure NetApp Files: While Azure NetApp Files provides high-performance file storage, it is not specifically designed for HDFS compatibility and big data analytics workloads, making it less suitable for this scenario.
- C. Azure Files: Azure Files offers fully managed file shares in the cloud that are accessible via the SMB protocol. However, it does not natively support HDFS, which is a critical requirement for this migration.
- D. Azure Blob Storage: Although Azure Blob Storage is highly scalable and can be accessed from multiple regions, it does not natively support HDFS. It is more suited for object storage than for the file system compatibility required by HDFS.
You plan to deploy an Azure virtual machine to run a mission-critical application. The virtual machine will store data on a disk with BitLocker Drive Encryption enabled. You need to use Azure Backup to back up the virtual machine. Which two backup solutions should you use? Each correct answer presents part of the solution.
A. Azure Backup (MARS) agent
B. Azure Backup Server
C. Azure Site Recovery
D. Backup Pre-Checks
Answer: B. Azure Backup Server
Answer: D. Backup Pre-Checks
Reasoning:
When backing up an Azure virtual machine with BitLocker Drive Encryption enabled, it’s important to ensure that the backup solution supports encrypted disks. Azure Backup Server is a suitable option because it can handle the backup of encrypted disks. Additionally, Backup Pre-Checks are essential to ensure that the backup configuration is correct and that there are no issues that could prevent a successful backup. These pre-checks help identify potential problems before the backup process begins, which is crucial for mission-critical applications.
Breakdown of non-selected options:
A. Azure Backup (MARS) agent - The MARS agent is typically used for backing up files, folders, and system state from on-premises machines to Azure. It is not suitable for backing up Azure virtual machines directly, especially those with BitLocker encryption.
C. Azure Site Recovery - This is primarily a disaster recovery solution rather than a backup solution. It is used to replicate and fail over virtual machines to another region, not for regular backup purposes.
You have an Azure subscription. You need to deploy an Azure Kubernetes Service (AKS) solution that will use Windows Server 2019 nodes. The solution must meet the following requirements:
• Minimize the time it takes to provision compute resources during scale-out operations.
• Support autoscaling of Windows Server containers.
Which scaling option should you recommend?
A. Kubernetes version 1.20.2 or newer
B. Virtual nodes with Virtual Kubelet ACI
C. Cluster autoscaler
D. Horizontal pod autoscaler
Answer: C. Cluster autoscaler
Reasoning:
The question requires a solution that minimizes the time it takes to provision compute resources during scale-out operations and supports autoscaling of Windows Server containers. The cluster autoscaler is designed to automatically adjust the size of the Kubernetes cluster by adding or removing nodes based on the resource requirements of the workloads. This is particularly useful for scale-out operations, as it can quickly provision additional nodes when needed, which aligns with the requirement to minimize provisioning time. Additionally, the cluster autoscaler supports Windows Server nodes, making it suitable for the given scenario.
Breakdown of non-selected options:
A. Kubernetes version 1.20.2 or newer - While using a newer version of Kubernetes might provide some performance improvements and additional features, it does not directly address the requirement of minimizing provisioning time or supporting autoscaling specifically for Windows Server containers.
B. Virtual nodes with Virtual Kubelet ACI - Virtual nodes with Virtual Kubelet allow for burstable workloads using Azure Container Instances (ACI), but they are more suited for scenarios where you need to run containers without managing the underlying infrastructure. This option does not directly address the requirement for autoscaling Windows Server containers or minimizing provisioning time for compute resources.
D. Horizontal pod autoscaler - The horizontal pod autoscaler automatically scales the number of pods in a deployment or replica set based on observed CPU utilization or other selected metrics. While it helps in scaling applications, it does not manage the scaling of the underlying compute resources (nodes), which is necessary to minimize provisioning time during scale-out operations.
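To make the chosen option concrete, here is a hedged sketch, with hypothetical names, of enabling the cluster autoscaler on a Windows Server node pool through the azure-mgmt-containerservice SDK.

```python
# Hedged sketch: a Windows node pool with the cluster autoscaler enabled.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

client.agent_pools.begin_create_or_update(
    "rg-aks",    # hypothetical resource group
    "aks-prod",  # hypothetical cluster name
    "winp1",     # Windows node pool names are limited to six characters
    {
        "os_type": "Windows",
        "vm_size": "Standard_D4s_v3",
        "mode": "User",
        "enable_auto_scaling": True,  # cluster autoscaler adds/removes nodes
        "min_count": 2,
        "max_count": 10,
        "count": 2,
    },
).result()
```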
You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory. Your company has a line-of-business (LOB) application developed internally. You need to implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) when users attempt to access the application from an unknown location. Which two features should you include in the solution? Each selection is worth one point.
A. Azure AD Privileged Identity Management (PIM)
B. Azure Application Gateway
C. Azure AD enterprise applications
D. Azure AD Identity Protection
E. Conditional Access policies
Answer: C. Azure AD enterprise applications
Answer: E. Conditional Access policies
Reasoning:
To implement SAML single sign-on (SSO) and enforce multi-factor authentication (MFA) for an internally developed line-of-business (LOB) application, you need to use Azure AD enterprise applications and Conditional Access policies. Azure AD enterprise applications allow you to configure SAML-based SSO for applications. Conditional Access policies enable you to enforce MFA based on specific conditions, such as accessing the application from an unknown location.
Breakdown of non-selected options:
A. Azure AD Privileged Identity Management (PIM) - This is used for managing, controlling, and monitoring access within Azure AD, Azure, and other Microsoft Online Services. It is not directly related to implementing SSO or enforcing MFA for applications.
B. Azure Application Gateway - This is a web traffic load balancer that enables you to manage traffic to your web applications. It does not provide SSO or MFA capabilities.
D. Azure AD Identity Protection - This is used to identify potential vulnerabilities affecting your organization’s identities and to configure automated responses to detected suspicious actions. While it can enhance security, it is not directly used to implement SSO or enforce MFA for specific applications.
You are storing user profile data in an Azure Cosmos DB database. You want to set up a process to automatically back up the data to Azure Storage every week. What should you use to achieve this?
A. Azure Backup
B. Azure Cosmos DB backup and restore
C. Azure Import/Export Service
D. Azure Data Factory
Answer: D. Azure Data Factory
Reasoning: Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows for orchestrating and automating data movement and data transformation. It is suitable for setting up a process to automatically back up data from Azure Cosmos DB to Azure Storage on a weekly basis. You can create a pipeline in Azure Data Factory to copy data from Cosmos DB to Azure Storage and schedule it to run weekly.
Breakdown of non-selected options:
A. Azure Backup: Azure Backup is primarily used for backing up Azure VMs, SQL databases, and other Azure resources. It does not natively support backing up data from Azure Cosmos DB to Azure Storage.
B. Azure Cosmos DB backup and restore: While Azure Cosmos DB has built-in backup and restore capabilities, it does not provide a direct mechanism to back up data to Azure Storage on a scheduled basis. It is more focused on point-in-time restore within the Cosmos DB service itself.
C. Azure Import/Export Service: This service is used for transferring large amounts of data to and from Azure using physical disks. It is not suitable for setting up automated, scheduled backups of Cosmos DB data to Azure Storage.
Therefore, Azure Data Factory is the most suitable option for automating the backup process from Azure Cosmos DB to Azure Storage on a weekly schedule.
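Data Factory pipelines are normally authored in the portal or via ARM/Bicep templates rather than in application code; purely as an illustration of what the weekly copy activity does, here is a hedged sketch that reads the profiles from Cosmos DB and writes them to Blob Storage. Endpoints, keys, and container names are hypothetical.

```python
# Hedged sketch of the pipeline's copy step: Cosmos DB -> Blob Storage.
import json
from azure.cosmos import CosmosClient
from azure.storage.blob import BlobServiceClient

cosmos = CosmosClient("https://<account>.documents.azure.com", "<cosmos-key>")
profiles = cosmos.get_database_client("appdb").get_container_client("profiles")

items = list(profiles.query_items(
    "SELECT * FROM c", enable_cross_partition_query=True
))

blobs = BlobServiceClient.from_connection_string("<storage-connection-string>")
blobs.get_blob_client("backups", "profiles-weekly.json").upload_blob(
    json.dumps(items), overwrite=True  # one JSON snapshot per weekly run
)
```
In Data Factory itself, the equivalent is a Copy activity with a Cosmos DB source, a Blob Storage sink, and a schedule trigger set to recur weekly.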
You have a highly available application running on an AKS cluster in Azure. To ensure the application remains available even if a single availability zone fails; which Azure service should you use?
A. Azure Front Door
B. Azure Traffic Manager
C. AKS ingress controller
D. Azure Load Balancer
Answer: A. Azure Front Door
Reasoning: Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and highly available web applications. It provides high availability and can route traffic across multiple regions or availability zones, ensuring that your application remains available even if a single availability zone fails. This makes it the most suitable option for ensuring high availability in the scenario described.
Breakdown of non-selected options:
- B. Azure Traffic Manager: While Azure Traffic Manager can route traffic based on DNS and provide high availability by directing traffic to different regions, it operates at the DNS level and does not provide the same level of real-time failover and global load balancing as Azure Front Door.
- C. AKS ingress controller: An AKS ingress controller is used to manage inbound traffic to applications running in an AKS cluster. However, it does not inherently provide cross-zone or cross-region failover capabilities, which are necessary to ensure availability in the event of an availability zone failure.
- D. Azure Load Balancer: Azure Load Balancer is a regional service that distributes traffic within a single region. Although a Standard Load Balancer can be configured to be zone-redundant, it does not provide the cross-region failover and global routing that Azure Front Door offers for the described scenario.
You are planning to migrate a large-scale PostgreSQL database to Azure. The database must be highly available and support read replicas to scale out read operations. Which Azure database service should you recommend?
A. Azure SQL Managed Instance
B. Azure Database for PostgreSQL
C. Azure Cosmos DB
Answer: B. Azure Database for PostgreSQL
Reasoning: The requirement is to migrate a large-scale PostgreSQL database to Azure with high availability and support for read replicas to scale out read operations. Azure Database for PostgreSQL is specifically designed to host PostgreSQL databases and offers features such as high availability and read replicas, making it the most suitable choice for this scenario.
Breakdown of non-selected options:
- A. Azure SQL Managed Instance: This option is designed for SQL Server databases, not PostgreSQL. It does not natively support PostgreSQL databases, so it is not suitable for this requirement.
- C. Azure Cosmos DB: While Cosmos DB is a globally distributed, multi-model database service, it is not specifically designed for PostgreSQL databases. It does not natively support PostgreSQL features like read replicas in the same way Azure Database for PostgreSQL does, making it less suitable for this scenario.
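To illustrate the read scale-out pattern the answer relies on, here is a small sketch in which writes go to the primary server and reporting queries go to a read replica's own endpoint (psycopg2). Host names, database, and credentials are hypothetical.

```python
# Hedged sketch: write to the primary, read from a replica endpoint.
import psycopg2

primary = psycopg2.connect(
    host="sales-pg.postgres.database.azure.com",           # primary (read-write)
    dbname="sales", user="appuser", password="<password>", sslmode="require",
)
replica = psycopg2.connect(
    host="sales-pg-replica1.postgres.database.azure.com",  # read replica (read-only)
    dbname="sales", user="appuser", password="<password>", sslmode="require",
)

with primary, primary.cursor() as cur:  # connection context manager commits on success
    cur.execute("INSERT INTO orders (customer_id, total) VALUES (%s, %s)", (7, 19.99))

with replica.cursor() as cur:  # offload read-heavy reporting to the replica
    cur.execute("SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id")
    print(cur.fetchall())
```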