test4-az301 Flashcards
You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.
A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices.
You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.
What should you include in the recommendation?
A. a Recovery Services vault and Azure Backup
B. an Azure file share and Azure File Sync
C. Azure blob containers and Azure File Sync
D. a Recovery Services vault and Windows Server Backup
The correct answer is B. an Azure file share and Azure File Sync.
Explanation
Here’s why this is the best solution and why the others aren’t as suitable:
Azure File Share and Azure File Sync:
How it works:
You create an Azure file share in a storage account.
Azure File Sync is used to synchronize files between your on-premises file server (VM1) and the Azure file share.
You can then install the Azure File Sync agent on file servers in other branch offices and configure them to use the same Azure file share.
Azure File Sync acts as a cache, keeping frequently accessed files locally at each branch office.
Why it’s the best solution:
Fast Access: Users in other branch offices will have fast, local access to cached copies of the files.
Availability: If the Toronto office becomes unavailable, the files are still accessible from the Azure file share and the cached copies at other locations.
Centralized Management: You have a central copy of your files in Azure, making backup and management easier.
Other Options:
A. a Recovery Services vault and Azure Backup: Azure Backup is primarily for backup and restore, not for providing fast, continuous access to files in a failover scenario. Restoring from a backup would take time.
C. Azure blob containers and Azure File Sync: Azure Blob storage is designed for unstructured data and does not provide SMB file shares. Azure File Sync can only use an Azure file share as its cloud endpoint, not a blob container, so this combination does not work for this use case.
D. a Recovery Services vault and Windows Server Backup: Similar to option A, this is a backup and restore solution, not designed for providing quick access to files in a different location if the primary location is unavailable.
How to Implement the Solution:
Create an Azure Storage Account: In your Azure subscription, create a storage account.
Create an Azure File Share: Within the storage account, create a file share.
Deploy Azure File Sync:
Create a Storage Sync Service resource in Azure.
Create a sync group within the Storage Sync Service.
Add your Azure file share as a cloud endpoint to the sync group.
Install the Azure File Sync agent on VM1 in Toronto.
Register VM1 with the Storage Sync Service.
Add VM1 as a server endpoint to the sync group.
Optional: Enable Cloud Tiering: On the server endpoint (VM1), you can configure cloud tiering, which will automatically tier less frequently accessed files to Azure, freeing up space on the local server while still keeping them accessible.
Branch Office Setup:
Install the Azure File Sync agent on file servers in other branch offices.
Register these servers with the Storage Sync Service.
Add them as server endpoints to the same sync group.
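For illustration, the first two steps (storage account and file share) can be scripted with the Azure Storage SDK for Python. This is a minimal sketch under assumed values: the connection string, share name, and quota are placeholders, and the Storage Sync Service, sync group, and agent registration are still configured in the portal or with the Az.StorageSync PowerShell module.
# Minimal sketch: create the Azure file share that will become the cloud endpoint.
# Assumes the storage account already exists and AZURE_STORAGE_CONNECTION_STRING is set.
import os
from azure.storage.fileshare import ShareClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # placeholder environment variable
share = ShareClient.from_connection_string(conn_str, share_name="branch-files")  # placeholder share name
share.create_share(quota=1024)  # quota in GiB; size it for the Toronto file server data
print("Created file share:", share.share_name)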
Benefits:
High Availability: Files are accessible even if the Toronto office is down.
Fast Access: Branch offices have local cached copies of frequently used files.
Reduced Bandwidth: Only changes are synced between locations, minimizing bandwidth usage.
Centralized Backup: Your files are backed up in Azure.
A company deploys Azure Active Directory (Azure AD) Connect to synchronize identity information from their on-premises Active Directory Domain Services (AD DS) directory to their Azure AD tenant. The identity information that is synchronized includes user accounts, credential hashes for authentication (password sync), and group membership. The company plans to deploy several Windows and Linux virtual machines (VMs) to support their applications.
The VMs have the following requirements:
✑ Support domain join, LDAP read, LDAP bind, NTLM and Kerberos authentication, and Group Policy.
✑ Allow users to sign in to the domain using their corporate credentials and connect remotely to the VM by using Remote Desktop.
You need to support the VM deployment.
Which service should you use?
A. Azure AD Domain Services
B. Azure AD Privileged Identity Management
C. Azure AD Managed Service Identity
D. Active Directory Federation Services (AD FS)
The correct answer is A. Azure AD Domain Services.
Explanation:
Here’s why Azure AD Domain Services is the right choice and why the other options are not suitable:
Azure AD Domain Services (Azure AD DS):
What it is: Azure AD DS provides managed domain services such as domain join, Group Policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory. It essentially extends your Azure AD tenant to provide these traditional domain services.
Why it’s the right choice:
Domain Join: Allows you to join your Windows and Linux VMs to a managed domain.
LDAP, NTLM, Kerberos: Supports traditional authentication protocols needed by many applications.
Group Policy: You can manage your VMs using Group Policy, just like in an on-premises AD environment.
User Credentials: Users can sign in with their existing corporate credentials synchronized from the on-premises AD via Azure AD Connect.
Remote Desktop: Users can connect remotely using Remote Desktop Services (RDS), which can be integrated with Azure AD DS for authentication.
Simplified Management: It’s a managed service, so you don’t have to worry about patching, monitoring, or backing up domain controllers.
Other Options:
B. Azure AD Privileged Identity Management (PIM): PIM is for managing, controlling, and monitoring access to important resources in your Azure AD organization. It’s not related to providing domain services for VMs.
C. Azure AD Managed Service Identity (MSI): MSI (now called Managed Identities for Azure resources) provides an identity for applications to use when connecting to resources that support Azure AD authentication. It’s not used for providing domain services to VMs.
D. Active Directory Federation Services (AD FS): AD FS is an on-premises solution for federated identity and single sign-on (SSO). While it can be used to extend your on-premises AD to the cloud, it requires significant infrastructure and management overhead. It’s not the best choice for this scenario where a managed domain service is preferred.
What should you include in the identity management strategy to support the planned changes?
A. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.
B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
C. Deploy a new Azure AD tenant for the authentication of new R&D projects.
D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure.
The best answer is B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
Explanation
Here’s why this is the most suitable approach and why the other options are less effective:
B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
Why it’s the best approach:
Supports Hybrid Identity: This approach directly supports the company’s need to maintain a hybrid identity model, leveraging their existing corp.fabrikam.com Active Directory.
Extends Existing Infrastructure: By deploying domain controllers for the existing corp.fabrikam.com domain into Azure virtual networks, you effectively extend the on-premises Active Directory infrastructure into Azure. This provides a consistent authentication and authorization experience for both on-premises and cloud resources.
Facilitates Migration: This is a common and recommended first step when migrating on-premises workloads to Azure. It allows for a smoother transition and minimizes disruption to users and applications.
Authentication for Azure VMs: The domain controllers in Azure can provide authentication services for the planned R&D virtual machines, allowing them to be joined to the corp.fabrikam.com domain.
Redundancy and Disaster Recovery: Having domain controllers in Azure also provides redundancy and can be part of a disaster recovery strategy.
Why other options are not as good:
A. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure:
Risky and Disruptive: Moving all domain controllers to Azure at once is a major change and could be very disruptive to the on-premises environment. It’s generally recommended to extend your AD to Azure gradually rather than doing a complete cutover.
On-Premises Dependency: If all domain controllers are in Azure, the on-premises environment becomes entirely dependent on the Azure connection for authentication. Any network issues between on-premises and Azure would severely impact on-premises operations.
C. Deploy a new Azure AD tenant for the authentication of new R&D projects:
Creates Identity Silos: Creating a separate Azure AD tenant would create an identity silo, making it harder to manage users and access across the organization. It would also complicate collaboration between R&D and the rest of the company.
Doesn’t Leverage Existing Infrastructure: It doesn’t leverage the company’s investment in their existing corp.fabrikam.com Active Directory.
D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure:
Unnecessary Complexity: Creating a separate forest (rd.fabrikam.com) introduces unnecessary complexity in terms of management, trusts, and user experience. It’s better to integrate with the existing corp.fabrikam.com domain unless there are very specific and compelling reasons to create a new forest.
You have a hybrid deployment of Azure Active Directory (Azure AD).
You need to recommend a solution to ensure that the Azure AD tenant can be managed only from the computers on your on-premises network.
What should you include in the recommendation?
A. Azure AD roles and administrators
B. a conditional access policy
C. Azure AD Application Proxy
D. Azure AD Privileged Identity Management
The correct answer is B. a conditional access policy.
Explanation:
Conditional Access Policies: Conditional access policies in Azure AD are powerful tools for controlling access to your cloud resources. They allow you to define specific conditions under which access is granted or denied. In this case, you can create a policy that allows access to Azure AD management only from specific IP address ranges or locations that correspond to your on-premises network.
How to Implement:
Define a Named Location: In Azure AD, under Security -> Conditional Access -> Named locations, create a new location that represents your on-premises network. You can define this location by specifying the public IP address ranges used by your on-premises network.
Create a Conditional Access Policy:
Go to Security -> Conditional Access -> Policies and create a new policy.
Users and groups: Select the users or groups this policy should apply to (e.g., all users, or specific admin roles).
Cloud apps or actions: Target the apps used to manage Azure AD and Azure (for example, the Microsoft Azure Management app), or select “All cloud apps” for a broader policy.
Conditions:
Locations: Select “Any location” and then exclude the named location you created in step 1 (your on-premises network).
Grant: Select “Block access”.
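For reference, the same policy can also be created programmatically through the Microsoft Graph conditional access API. The sketch below is illustrative only: the access token and named-location ID are placeholders, the well-known Microsoft Azure Management app ID is used as the target app, and the policy starts in report-only mode.
# Minimal sketch: block Azure AD/Azure management from anywhere except the on-premises named location.
# Requires a Graph token with the Policy.ReadWrite.ConditionalAccess permission.
import requests

token = "<graph-access-token>"               # placeholder
onprem_location_id = "<named-location-id>"   # placeholder, from step 1

policy = {
    "displayName": "Block Azure AD management outside the corporate network",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode before enforcing
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["797f4846-ba00-4fd7-ba43-dac1f8f63013"]},  # Microsoft Azure Management
        "locations": {"includeLocations": ["All"], "excludeLocations": [onprem_location_id]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post("https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
                     headers={"Authorization": f"Bearer {token}"}, json=policy)
resp.raise_for_status()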
Why Other Options Are Not Suitable:
A. Azure AD roles and administrators: Azure AD roles define permissions for managing Azure AD resources. While important for controlling what users can do, they don’t restrict where they can manage from.
C. Azure AD Application Proxy: Application Proxy is used to provide secure remote access to on-premises web applications. It’s not relevant to restricting access to Azure AD management.
D. Azure AD Privileged Identity Management (PIM): PIM is used to manage, control, and monitor access to privileged roles in Azure AD. It helps with just-in-time access and auditing but doesn’t restrict access based on location.
You have an Azure subscription that contains an Azure Cosmos DB account. You need to recommend a solution to generate an alert from Azure Log Analytics when a request charge for a query exceeds 50 request units more than 20 times within a 15-minute window. What should you recommend?
A. Create a search query to identify when requestCharge_s exceeds 50.
Configure an alert threshold of 20 and a period of 15.
B. Create a search query to identify when duration_s exceeds 20 and requestCharge_s exceeds 50.
Configure a period of 15.
C. Create a search query to identify when requestCharge_s exceeds 20.
Configure a period of 15 and a frequency of 20.
D. Create a search query to identify when duration_s exceeds 20.
Configure a period of 15.
The correct answer is A. Create a search query to identify when requestCharge_s exceeds 50. Configure an alert threshold of 20 and a period of 15.
Explanation
Here’s how we break down the requirements and why this solution is the correct one:
Requirements:
Identify Queries Exceeding 50 RUs: We need to find instances where the request charge (requestCharge_s) for a Cosmos DB query is greater than 50.
Threshold of 20 Times: The alert should only trigger if this happens more than 20 times.
Within a 15-Minute Window: The 20+ occurrences must be within a 15-minute period.
Solution Breakdown (Option A):
Create a search query to identify when requestCharge_s exceeds 50.
This is the core of the solution. We need a Log Analytics query that filters for Cosmos DB operations where requestCharge_s is greater than 50. A sample query would look like this:
AzureDiagnostics
| where Category == "DataPlaneRequests" // Focus on Cosmos DB data plane operations
| where toreal(requestCharge_s) > 50 // Filter for requests exceeding 50 RUs
// The alert rule counts the results returned over the evaluation window.
Configure an alert threshold of 20
This directly addresses the requirement of triggering the alert only when the condition (request charge > 50) occurs more than 20 times.
Configure a period of 15
This defines the time window (15 minutes) over which the threshold (20 occurrences) is evaluated. The alert rule runs the query every ‘frequency’ minutes (5 minutes by default) and counts the results returned for the last 15 minutes; if that count exceeds the threshold of 20, the alert triggers.
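To sanity-check the query and threshold logic before creating the alert rule, you can run the same KQL against the workspace with the azure-monitor-query package. This is a sketch under assumed values: the workspace ID is a placeholder, and it assumes Cosmos DB diagnostic logs are flowing into the AzureDiagnostics table.
# Minimal sketch: count requests that exceeded 50 RUs in the last 15 minutes.
# The alert rule applies the "more than 20 results" threshold; here we just inspect the count.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"  # placeholder
query = """
AzureDiagnostics
| where Category == "DataPlaneRequests"
| where toreal(requestCharge_s) > 50
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(workspace_id, query, timespan=timedelta(minutes=15))
rows = result.tables[0].rows if result.tables else []
print(f"{len(rows)} requests over 50 RUs in the last 15 minutes (alert fires above 20)")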
Why Other Options Are Incorrect:
B. Create a search query to identify when duration_s exceeds 20 and requestCharge_s exceeds 50. Configure a period of 15.
The duration_s field represents the duration of the query, but the requirement is about the request charge (cost), not the duration. We only need to count how many requests had a requestCharge_s above 50; filtering on duration_s is irrelevant here.
C. Create a search query to identify when requestCharge_s exceeds 20. Configure a period of 15 and a frequency of 20.
The query is incorrect (should be requestCharge_s > 50). The frequency setting does not control the number of times the condition must occur; it defines how often the alert rule is evaluated.
D. Create a search query to identify when duration_s exceeds 20. Configure a period of 15.
Similar to option B, this focuses on duration_s, which is not the primary requirement. We are concerned with the cost (request charge) of the queries.
You are designing a data protection strategy for Azure virtual machines. All the virtual machines are in the Standard tier and use managed disks. You need to recommend a solution that meets the following requirements:
- The use of encryption keys is audited.
- All the data is encrypted at rest always.
You manage the encryption keys, not Microsoft. What should you include in the recommendation?
A. BitLocker Drive Encryption (BitLocker)
B. Azure Storage Service Encryption
C. Client-side Encryption
D. Azure Disk Encryption
The correct answer is D. Azure Disk Encryption.
Explanation
Here’s why Azure Disk Encryption is the best solution to meet the requirements and why the other options are not as suitable:
Requirements:
Encryption at Rest: All data on the managed disks must be encrypted when stored (at rest).
Customer-Managed Keys: You, not Microsoft, must manage the encryption keys.
Auditing of Key Usage: The use of encryption keys needs to be auditable.
D. Azure Disk Encryption
How it works: Azure Disk Encryption leverages the BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and data disks of Azure virtual machines. It integrates with Azure Key Vault to help you control and manage the disk encryption keys and secrets.
Why it’s the best solution:
Encryption at Rest: It encrypts the managed disks at rest.
Customer-Managed Keys (CMK): You can choose to manage your encryption keys in Azure Key Vault, giving you full control over key creation, rotation, deletion, and usage.
Auditing: Azure Key Vault provides detailed audit logs of all key operations, including who accessed the keys, when, and for what purpose. This meets the auditing requirement.
Integration with Managed Disks: It seamlessly integrates with Azure managed disks.
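As a small illustration of the key-management side, the sketch below uses the Azure Key Vault keys SDK to create a customer-managed key-encryption key (KEK) that Azure Disk Encryption can reference. The vault URL and key name are placeholders; the vault must have its enabledForDiskEncryption property set, and enabling Azure Disk Encryption on the VM itself is done separately through the portal, CLI, or an ARM template.
# Minimal sketch: create an RSA key-encryption key (KEK) in Key Vault for Azure Disk Encryption.
# Every operation on this key is written to the Key Vault audit logs, satisfying the auditing requirement.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

vault_url = "https://<your-key-vault>.vault.azure.net"  # placeholder
client = KeyClient(vault_url=vault_url, credential=DefaultAzureCredential())

kek = client.create_rsa_key("ade-kek", size=2048)  # customer-managed key you control and rotate
print("KEK created:", kek.id)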
Why Other Options Are Not as Suitable:
A. BitLocker Drive Encryption (BitLocker) (without Azure Disk Encryption): While BitLocker can encrypt disks, using it independently on Azure VMs without Azure Disk Encryption does not integrate with Azure Key Vault for key management or auditing. You would have to manage keys within the VM itself, which is less secure and doesn’t provide centralized auditing.
B. Azure Storage Service Encryption (SSE): SSE automatically encrypts data at rest in Azure Storage accounts, but it uses Microsoft-managed keys by default. While you can opt to use customer-managed keys with SSE, it’s not the primary mechanism for encrypting Azure VM disks. Azure Disk Encryption is specifically designed for that purpose.
C. Client-Side Encryption: In client-side encryption, you encrypt the data before sending it to Azure. This means you manage keys completely outside of Azure. While this approach provides a high level of control, it doesn’t directly address the requirement of encrypting data at rest on managed disks within Azure. Additionally, it may add complexity to your application logic.
You have 100 servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2012 R2 instances. The instances host databases that have the following characteristics:
- The largest database is currently 3 TB.
- None of the databases will ever exceed 4 TB.
- Stored procedures are implemented by using CLR.
You plan to move all the data from SQL Server to Azure. You need to recommend an Azure service to host the databases. The solution must meet the following requirements:
- Whenever possible, minimize management overhead for the migrated databases.
- Minimize the number of database changes required to facilitate the migration.
- Ensure that users can authenticate by using their Active Directory credentials.
What should you include in the recommendation?
A. Azure SQL Database single databases
B. Azure SQL Database Managed Instance
C. Azure SQL Database elastic pools
D. SQL Server 2016 on Azure virtual machines
The correct answer is B. Azure SQL Database Managed Instance.
Explanation
Here’s why Azure SQL Database Managed Instance is the best fit for this scenario and why the other options are less suitable:
Requirements:
Minimize Management Overhead: The solution should be a PaaS (Platform as a Service) offering to reduce administrative tasks like patching, backups, and high availability configuration.
Minimize Database Changes: The migration should require minimal or no changes to the existing database schema, stored procedures, or application code. This is crucial for a smooth transition.
Active Directory Authentication: Users should be able to authenticate using their existing Active Directory credentials.
Database Size: The solution must support databases up to 4 TB.
CLR Stored Procedures: The solution must support the use of CLR (Common Language Runtime) stored procedures.
B. Azure SQL Database Managed Instance
Why it’s the best fit:
PaaS: Managed Instance is a fully managed service, so Microsoft handles OS and SQL Server patching, backups, high availability, and other management tasks.
High Compatibility: It offers near 100% compatibility with on-premises SQL Server, including support for:
Database Size: Supports databases well beyond the 4 TB requirement (the exact maximum depends on the service tier and hardware generation).
CLR: Supports CLR stored procedures.
Cross-Database Queries: Allows queries that span multiple databases within the same instance.
SQL Server Agent: You can use SQL Server Agent jobs for scheduling tasks.
Active Directory Authentication: It supports Azure Active Directory authentication, which can be integrated with your on-premises Active Directory using Azure AD Connect.
Minimal Migration Effort: Due to its high compatibility, migrating databases from SQL Server 2012 R2 to Managed Instance typically requires minimal code changes.
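To illustrate the Active Directory authentication requirement, a client on a domain-joined machine can connect to the managed instance with integrated Azure AD authentication. This is a sketch with placeholder server and database names; it assumes ODBC Driver 17 (or later) for SQL Server and that the signed-in Windows account is synchronized to Azure AD through Azure AD Connect.
# Minimal sketch: connect to a managed instance with Azure AD integrated authentication.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<your-mi-name>.<dns-zone>.database.windows.net,1433;"  # placeholder
    "Database=<your-database>;"                                        # placeholder
    "Authentication=ActiveDirectoryIntegrated;"                        # no password: uses the AD-synced identity
    "Encrypt=yes;"
)
print(conn.execute("SELECT SUSER_SNAME();").fetchone()[0])  # shows the authenticated AD login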
Why other options are less suitable:
A. Azure SQL Database single databases:
Limited Compatibility: Single databases have some limitations compared to on-premises SQL Server, including:
No CLR Support: Does not support CLR stored procedures.
No Cross-Database Queries: Cannot query across multiple databases.
No SQL Server Agent: No built-in job scheduler.
Database Size: The maximum size of a single database depends on the service tier and purchasing model, and some tiers fall short of the 4 TB requirement.
C. Azure SQL Database elastic pools:
Same Limitations as Single Databases: Elastic pools are essentially a group of single databases managed together, so they share the same compatibility limitations as single databases (no CLR, etc.).
Cost Optimization: Elastic pools are good for cost optimization when you have many databases with varying resource utilization, but they don’t address the core compatibility issues in this scenario.
D. SQL Server 2016 on Azure virtual machines:
IaaS, Not PaaS: This is an IaaS (Infrastructure as a Service) solution, meaning you are responsible for managing the operating system, SQL Server, backups, patching, high availability, etc. This does not minimize management overhead.
Unnecessary Upgrade: While you could potentially upgrade to a newer version of SQL Server in a VM, it doesn’t solve the management overhead issue and is not required by the scenario.
You plan to create an Azure Cosmos DB account that uses the SQL API. The account will contain data added by a web application. The web application will send data daily. You need to recommend a notification solution that meets the following requirements:
- Sends email notification when data is received from IoT devices.
- Minimizes compute cost.
What should you include in the recommendation?
A. Deploy an Azure logic app that has the Azure Cosmos DB connector configured to use a SendGrid action.
B. Deploy a function app that is configured to use the Consumption plan and a SendGrid binding.
C. Deploy an Azure logic app that has a SendGrid connector configured to use an Azure Cosmos DB action.
D. Deploy a function app that is configured to use the Consumption plan and an Azure Event Hubs binding.
The best answer is B. Deploy a function app that is configured to use the Consumption plan and a SendGrid binding.
Explanation:
Here’s why this solution is the most suitable and why the others are less effective:
Requirements:
Email Notification on Data Change: Send an email when new data is added to the Cosmos DB account.
Minimize Compute Cost: The solution should be cost-effective, especially since data is only sent daily (implying relatively infrequent triggers).
B. Function App (Consumption Plan) with SendGrid Binding
Why it’s the best solution:
Azure Functions (Consumption Plan): Azure Functions are serverless compute services that allow you to run code on-demand without managing infrastructure. The Consumption plan is ideal for cost optimization because you only pay for the time your function is actually executing. Since data is sent daily, the function will only run briefly each day, minimizing cost.
Cosmos DB Trigger: Azure Functions can be triggered by changes to Cosmos DB data (using the Cosmos DB trigger). This allows the function to automatically execute when new data is received.
SendGrid Binding: Azure Functions has built-in output bindings for SendGrid, simplifying the process of sending emails. You can easily configure the binding to send an email with the relevant data from the Cosmos DB change.
How it works:
Data is written to the Cosmos DB account.
The Cosmos DB trigger detects the new data and invokes the function.
The function code executes, potentially processing or formatting the data.
The SendGrid output binding sends an email notification.
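A minimal sketch of the function body is shown below. It assumes the function.json programming model with a cosmosDBTrigger input binding named documents and a sendGrid output binding named message (its apiKey setting pointing at a SendGridApiKey app setting); the sender and recipient addresses are placeholders.
# __init__.py - sketch of a Cosmos DB-triggered function that emails through the SendGrid output binding.
# function.json (not shown) defines the "documents" cosmosDBTrigger binding and the "message" sendGrid binding.
import json
import azure.functions as func

def main(documents: func.DocumentList, message: func.Out[str]) -> None:
    if not documents:
        return
    # Build a SendGrid v3 mail payload summarizing the newly received documents.
    mail = {
        "personalizations": [{"to": [{"email": "ops@example.com"}]}],  # placeholder recipient
        "from": {"email": "noreply@example.com"},                      # placeholder sender
        "subject": f"{len(documents)} new document(s) received",
        "content": [{
            "type": "text/plain",
            "value": "\n".join(doc.to_json() for doc in documents),
        }],
    }
    message.set(json.dumps(mail))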
Why other options are less suitable:
A. Logic App (Cosmos DB Connector to SendGrid Action):
Logic Apps Cost: While Logic Apps are also serverless, their pricing model is based on actions executed. Depending on the complexity of the logic app and the frequency of data changes, it might be more expensive than a Function App on the Consumption plan, especially for infrequent triggers.
Less Efficient for Simple Tasks: Logic Apps are great for more complex workflows and orchestrations. For a simple task like sending an email on a data change, a Function App is often more efficient.
Note that option A actually lists the trigger and action in a workable order (a Cosmos DB trigger followed by a SendGrid action); the deciding factor between A and B is compute cost.
C. Logic App (SendGrid Connector to Cosmos DB Action):
Incorrect Order of Actions: This option suggests the wrong order. You need to trigger based on Cosmos DB changes and then send an email (Cosmos DB -> SendGrid), not the other way around. It also has the same cost and efficiency considerations as option A.
D. Function App (Consumption Plan) with Event Hubs Binding:
Unnecessary Complexity: Event Hubs is designed for high-throughput, real-time event ingestion. It’s overkill for this scenario where data is sent daily. Using Event Hubs would add unnecessary complexity and potentially higher cost. The Cosmos DB trigger is the appropriate trigger in this case.
You plan to use Azure Site Recovery to protect several on-premises physical server workloads. Each server workload is independent of the other. The workloads are stateless. You need to recommend a failover strategy to ensure that if the on-premises data center fails, the workloads are available in Azure as quickly as possible. Which failover strategy should you include in the recommendation?
A. Latest
B. Latest app-consistent
C. Latest multi-VM processed
D. Latest processed
The correct answer is D. Latest processed.
Explanation
Here’s why “Latest processed” is the best failover strategy in this scenario and why the others are less suitable:
Requirements:
Protect Physical Servers: The workloads are running on on-premises physical servers.
Independent, Stateless Workloads: Each server is independent and stateless, meaning there are no dependencies between them, and they don’t maintain any persistent state that needs to be synchronized across multiple servers.
Fastest Recovery: The primary goal is to minimize downtime and have the workloads available in Azure as quickly as possible after a failure.
Failover Strategies in Azure Site Recovery:
A. Latest: This option fails over to the latest recovery point available. While it might seem like the fastest, it could include data that hasn’t been fully processed by Site Recovery, potentially leading to inconsistencies or longer boot times during failover.
B. Latest app-consistent: This option fails over to the latest application-consistent snapshot. App-consistent snapshots are crucial for applications that need to maintain data consistency (e.g., databases). However, they take longer to create and are not necessary for stateless workloads.
C. Latest multi-VM processed: This option is used when you have multiple VMs that need to be failed over together and kept consistent with each other (e.g., a multi-tier application). This is not relevant in this scenario because the workloads are independent.
D. Latest processed: This option fails over to the latest recovery point that has been fully processed by Site Recovery. This ensures that the data is consistent and ready to be used in Azure, resulting in the fastest possible recovery for independent, stateless workloads.
Why “Latest processed” is the best choice:
Fastest Recovery Time: By using the latest processed recovery point, you minimize the time it takes for the VMs to become operational in Azure after a failover. Site Recovery has already done the necessary processing to ensure data consistency.
Suitable for Stateless Workloads: Since the workloads are stateless, there’s no need for application-consistent snapshots, which add overhead and increase recovery time.
Independent Workloads: The “multi-VM” options are not needed because each server is independent.
You plan to move a web application named App1 from an on-premises data center to Azure. App1 depends on a custom framework that is installed on the host server. You need to recommend a solution to host App1 in Azure. The solution must meet the following requirements:
- App1 must be available to users if an Azure data center becomes unavailable.
- Costs must be minimized.
What should you include in the recommendation?
A. In two Azure regions, deploy a Traffic Manager profile and a web app.
B. In two Azure regions, deploy a load balancer and a virtual machine scale set.
C. Deploy a load balancer and a virtual machine scale set across two availability zones.
D. In two Azure regions, deploy a load balancer and a web app.
The best answer is A. In two Azure regions, deploy a Traffic Manager profile and a web app.
Explanation
Here’s why this solution is the most suitable and why the others are less effective:
Requirements:
High Availability (Data Center Outage): The application must remain available even if an entire Azure data center becomes unavailable.
Cost Minimization: The solution should be cost-effective.
Custom Framework: The application depends on a custom framework installed on the host server.
A. Traffic Manager Profile and Web App in Two Regions
Why it’s the best solution:
High Availability: Deploying the web app to two different Azure regions ensures availability even if one region experiences an outage. Traffic Manager automatically routes traffic to the healthy region.
Web App (Suitable for Custom Frameworks): Azure Web Apps (App Service) can host App1 together with its custom framework, provided the framework can be packaged with the deployment or, if it requires a server-level installation, baked into a custom container image that App Service runs.
Traffic Manager (Geo-Redundancy): Traffic Manager is a DNS-based load balancer that directs user traffic to different endpoints based on various routing methods (e.g., priority, performance, geographic). It provides the geo-redundancy needed to handle a regional outage.
Cost-Effective (Compared to VMs): Web Apps are generally more cost-effective than managing virtual machines, as they are a PaaS (Platform as a Service) offering. You don’t have to manage the underlying OS or infrastructure.
How it works:
You deploy your web app (including the custom framework) to an App Service plan in each of the two Azure regions.
You create a Traffic Manager profile and add the two App Service instances as endpoints.
You configure a routing method (e.g., “Priority” for active/passive failover or “Performance” to route users to the closest region).
Traffic Manager monitors the health of the endpoints and directs traffic accordingly.
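A rough sketch of the Traffic Manager piece using the azure-mgmt-trafficmanager package is shown below. The subscription, resource group, profile name, DNS prefix, and the two web app hostnames are placeholders, and the exact model names should be checked against the SDK version you install.
# Minimal sketch: a priority-routed Traffic Manager profile in front of two regional web apps.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Profile, DnsConfig, MonitorConfig, Endpoint

client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder

profile = client.profiles.create_or_update(
    "rg-app1",   # placeholder resource group
    "app1-tm",   # placeholder profile name
    Profile(
        location="global",
        traffic_routing_method="Priority",
        dns_config=DnsConfig(relative_name="app1-tm", ttl=30),
        monitor_config=MonitorConfig(protocol="HTTPS", port=443, path="/"),
        endpoints=[
            Endpoint(name="app1-primary",
                     type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                     target="app1-eastus.azurewebsites.net", priority=1),      # placeholder hostname
            Endpoint(name="app1-secondary",
                     type="Microsoft.Network/trafficManagerProfiles/externalEndpoints",
                     target="app1-westeurope.azurewebsites.net", priority=2),  # placeholder hostname
        ],
    ),
)
print("Traffic Manager FQDN:", profile.dns_config.fqdn)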
Why other options are less suitable:
B. Load Balancer and Virtual Machine Scale Set in Two Regions:
Higher Cost: Managing virtual machine scale sets (VMSS) is typically more expensive than using Web Apps, as you’re paying for the underlying VMs even when they are not fully utilized.
More Management Overhead: VMSS requires more management overhead (OS patching, updates, etc.) compared to the PaaS nature of Web Apps.
Custom Framework Installation: While you can install custom frameworks on VMs, it adds to the management complexity.
C. Load Balancer and Virtual Machine Scale Set Across Two Availability Zones:
Not Sufficient for a Region-Wide Outage: Availability zones are physically separate data centers within a single region. They protect against the failure of a single zone, but not against an outage that affects the entire region.
D. Load Balancer and Web App in Two Regions:
Incorrect Load Balancer Type: An Azure Load Balancer operates within a single region and is typically used for load balancing across VMs. It’s not designed for cross-region traffic management like Traffic Manager.
You have an on-premises deployment of MongoDB. You plan to migrate MongoDB to an Azure Cosmos DB account that uses the MongoDB API. You need to recommend a solution for migrating MongoDB to Azure Cosmos DB. What should you include in the recommendation?
A. mongorestore
B. Data Migration Assistant
C. Azure Storage Explorer
D. Azure Cosmos DB Data Migration Tool
The correct answer is A. mongorestore.
Explanation
Here’s why mongorestore is the recommended tool for migrating MongoDB to Azure Cosmos DB’s API for MongoDB, and why the other options are not suitable:
A. mongorestore
What it is: mongorestore is a command-line utility that is part of the MongoDB toolset. It’s specifically designed to import data into a MongoDB database from a backup created by mongodump.
Why it’s the best choice for Cosmos DB’s API for MongoDB:
Compatibility: Azure Cosmos DB’s API for MongoDB is designed to be compatible with the MongoDB wire protocol. This means that tools like mongorestore (and mongodump) can be used to interact with Cosmos DB as if it were a native MongoDB instance.
Efficiency: mongorestore is generally an efficient way to import large datasets into MongoDB.
Simplicity: It’s a relatively straightforward command-line tool to use, especially if you’re already familiar with MongoDB.
Supported by Microsoft: Microsoft’s own documentation recommends using mongodump and mongorestore for migrating to Azure Cosmos DB’s API for MongoDB.
Typical Workflow:
mongodump: Use the mongodump utility to create a backup (dump) of your on-premises MongoDB database.
mongorestore: Use mongorestore to import the backup data into your Azure Cosmos DB account, specifying the Cosmos DB connection string.
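The sketch below wraps the two command-line steps in Python purely for illustration. The Cosmos DB connection string is a placeholder (copy it from the account’s Connection String blade), and depending on data volume you may also want the additional mongorestore options recommended in the Cosmos DB migration documentation.
# Minimal sketch: dump the on-premises database, then restore it into Cosmos DB's API for MongoDB.
import subprocess

dump_dir = "/tmp/mongo-dump"
source_uri = "mongodb://localhost:27017"              # on-premises MongoDB instance
cosmos_uri = "<cosmos-db-mongodb-connection-string>"  # placeholder from the Azure portal

# 1. Back up the source database with mongodump.
subprocess.run(["mongodump", "--uri", source_uri, "--out", dump_dir], check=True)

# 2. Import the dump into the Cosmos DB account with mongorestore.
subprocess.run(["mongorestore", "--uri", cosmos_uri, dump_dir], check=True)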
Why other options are not suitable:
B. Data Migration Assistant (DMA): DMA is primarily used for assessing and migrating SQL Server databases to Azure SQL Database or Azure SQL Managed Instance. It’s not designed for MongoDB migrations.
C. Azure Storage Explorer: Storage Explorer is a tool for managing Azure Storage resources (blobs, files, queues, tables). It’s not used for migrating databases to Cosmos DB.
D. Azure Cosmos DB Data Migration Tool: The older, open-source Data Migration Tool is no longer actively maintained and does not support Cosmos DB’s API for MongoDB as a migration target. The preferred methods are native MongoDB tools such as mongodump/mongorestore or Azure Database Migration Service (DMS).
Important Considerations When Using mongorestore:
Connection String: Make sure you use the correct connection string for your Azure Cosmos DB account (with the MongoDB API enabled). You can find this in the Azure portal under your Cosmos DB account’s “Connection String” blade.
Throughput: Provision sufficient throughput (Request Units - RUs) on your Cosmos DB account to handle the import process efficiently. You can scale up the throughput during the migration and then scale it down afterward.
Indexing: Where possible, create additional indexes in your Cosmos DB collections after the data import is complete; deferring index creation reduces the cost of each write and can significantly speed up the import.
You need to recommend a data storage solution that meets the following requirements:
- Ensures that application can access the data by using a REST connection.
- Hosts 20 independent tables of varying sizes and usage patterns.
- Automatically replicates the data to a second Azure region.
- Minimizes costs.
What should you recommend?
A. an Azure SQL Database elastic database pool that uses active geo-replication
B. tables in an Azure Storage account that uses geo-redundant storage (GRS)
C. tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
D. an Azure SQL database that uses active geo-replication
The correct answer is B. tables in an Azure Storage account that uses geo-redundant storage (GRS).
Explanation
Let’s analyze the requirements and why this solution is the best fit:
Requirements:
REST API Access: The application needs to access the data using a REST connection.
20 Independent Tables: The solution must support hosting multiple independent tables.
Automatic Replication to Second Region: Data needs to be automatically replicated to a secondary Azure region for disaster recovery purposes.
Minimize Costs: The solution should be cost-effective.
B. Tables in an Azure Storage Account with Geo-Redundant Storage (GRS)
Why it’s the best solution:
REST API: Azure Table Storage provides a RESTful API for accessing and manipulating data.
Tables: Azure Storage accounts support Table Storage, which allows you to store structured data in tables. You can create 20 independent tables within a single storage account.
Geo-Redundant Storage (GRS): GRS automatically replicates your data to a secondary region hundreds of miles away from the primary region. This provides protection against regional outages.
Cost-Effective: Azure Table Storage is generally a very cost-effective storage option, especially when compared to relational databases like Azure SQL Database. GRS is also relatively inexpensive compared to other replication options like RA-GRS.
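For the application side, the azure-data-tables package talks to Table Storage over its REST interface. This is a minimal sketch; the connection string, table name, and entity values are placeholders, and the storage account is assumed to have been created with the Standard_GRS replication SKU.
# Minimal sketch: create one of the 20 tables and write an entity via the Table service REST API.
import os
from azure.data.tables import TableServiceClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # placeholder; account created with Standard_GRS
service = TableServiceClient.from_connection_string(conn_str)

table = service.create_table_if_not_exists("Orders")  # repeat for each of the 20 independent tables
table.upsert_entity({
    "PartitionKey": "customer-001",  # placeholder values
    "RowKey": "order-0001",
    "Status": "Shipped",
})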
Why other options are less suitable:
A. Azure SQL Database elastic database pool with active geo-replication:
Cost: Azure SQL Database is significantly more expensive than Azure Table Storage. Elastic pools are cost-effective for managing multiple databases with varying workloads, but they are still more expensive than Table Storage.
Active Geo-Replication Cost: Active geo-replication adds to the cost of Azure SQL Database.
REST API: Azure SQL Database does not expose a native data-plane REST API for table access; you would have to build one yourself, whereas Table storage is REST-native.
C. Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS):
Higher Cost: RA-GRS is more expensive than GRS because it allows read access to the secondary region. The requirements only specify the need for replication, not read access to the secondary region.
D. An Azure SQL database that uses active geo-replication:
Cost: As mentioned earlier, Azure SQL Database is a more expensive option than Azure Table Storage.
Single Database: A single Azure SQL database could hold the 20 tables, but they would share one database’s resources rather than remaining independent, and the cost concerns above still apply.
A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that is integrated with Microsoft Office 365 and an Azure subscription.
Contoso has an on-premises identity infrastructure. The infrastructure includes servers that run Active Directory Domain Services (AD DS), Active Directory
Federation Services (AD FS), Azure AD Connect, and Microsoft Identity Manager (MIM).
Contoso has a partnership with a company named Fabrikam, Inc. Fabrikam has an Active Directory forest and an Office 365 tenant. Fabrikam has the same on-premises identity infrastructure as Contoso.
A team of 10 developers from Fabrikam will work on an Azure solution that will be hosted in the Azure subscription of Contoso. The developers must be added to the Contributor role for a resource in the Contoso subscription.
You need to recommend a solution to ensure that Contoso can assign the role to the 10 Fabrikam developers. The solution must ensure that the Fabrikam developers use their existing credentials to access resources.
What should you recommend?
A. Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.
B. Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.
C. In the Azure AD tenant of Contoso, enable Azure Active Directory Domain Services (Azure AD DS). Create a one-way forest trust that uses selective authentication between the Active Directory forests of Contoso and Fabrikam.
D. In the Azure AD tenant of Contoso, create guest accounts for the Fabrikam developers.
The correct answer is D. In the Azure AD tenant of Contoso, create guest accounts for the Fabrikam developers.
Here’s why:
Guest Accounts (Azure AD B2B): Azure Active Directory (Azure AD) business-to-business (B2B) collaboration lets you securely invite external users (guests) into your Azure AD tenant. These guest users can then be assigned roles (like Contributor) within your Azure subscription. The Fabrikam developers can sign in using their existing Fabrikam credentials (via federation with their Office 365 tenant or through a social provider if configured).
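A sketch of the invitation step through the Microsoft Graph invitations API is shown below; the developer’s email address and the access token are placeholders. Once the guest account exists, it is assigned the Contributor role on the target resource through normal Azure RBAC (portal, CLI, or the authorization API).
# Minimal sketch: invite a Fabrikam developer as a B2B guest into the Contoso tenant.
# Requires a Graph token with the User.Invite.All permission.
import requests

token = "<graph-access-token>"  # placeholder

invitation = {
    "invitedUserEmailAddress": "dev1@fabrikam.com",  # placeholder guest address
    "inviteRedirectUrl": "https://portal.azure.com",
    "sendInvitationMessage": True,
}

resp = requests.post("https://graph.microsoft.com/v1.0/invitations",
                     headers={"Authorization": f"Bearer {token}"}, json=invitation)
resp.raise_for_status()
guest_object_id = resp.json()["invitedUser"]["id"]  # use this ID for the Contributor role assignment
print("Invited guest object ID:", guest_object_id)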
Let’s look at why the other options are not the best fit:
A. Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.
Why it’s not ideal: While a forest trust would enable authentication, it doesn’t directly translate to authorization within the Azure subscription. Azure RBAC (Role-Based Access Control) relies on Azure AD identities, not on-premises AD. Furthermore, setting up and maintaining a forest trust for just 10 developers might be overkill and introduce unnecessary complexity.
B. Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.
Why it’s not ideal: Organizational relationships in Office 365 primarily facilitate features like free/busy lookups and sharing calendars between tenants. They do not grant access to Azure resources or the ability to assign RBAC roles.
C. In the Azure AD tenant of Contoso, enable Azure Active Directory Domain Services (Azure AD DS). Create a one-way forest trust that uses selective authentication between the Active Directory forests of Contoso and Fabrikam.
Why it’s not ideal: Azure AD DS is a managed domain service, essentially providing a subset of AD DS functionality in the cloud. It is primarily used for legacy applications that require traditional domain services. It’s not necessary for modern cloud applications that can use Azure AD directly. Additionally, a trust with Azure AD DS would not enable assignment of roles in the Contoso subscription and is overly complex.
Your network contains an on-premises Active Directory forest.
You discover that when users change jobs within your company, the membership of the user groups is not being updated. As a result, the users can access resources that are no longer relevant to their job.
You plan to integrate Active Directory and Azure Active Directory (Azure AD) by using Azure AD Connect.
You need to recommend a solution to ensure that group owners are emailed monthly about the group memberships they manage.
What should you include in the recommendation?
A. Azure AD access reviews
B. Tenant Restrictions
C. Azure AD Identity Protection
D. conditional access policies
The correct answer is A. Azure AD access reviews.
Here’s why:
Azure AD access reviews: This feature allows you to schedule regular reviews of access rights to resources, including group memberships, applications, and roles. You can configure access reviews to target specific groups, and designate the group owners as the reviewers. During the review, owners will be notified (via email) and prompted to confirm or remove members from their groups. This directly addresses the requirement of ensuring group owners are regularly reminded to review and update memberships.
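For illustration, a monthly, owner-driven review of a single group can be created with the Microsoft Graph access reviews API. The group ID, start date, and access token are placeholders, and the payload is a trimmed sketch of the accessReviewScheduleDefinition schema; check the Graph documentation for the full set of settings.
# Minimal sketch: schedule a monthly access review of a group, with the group owners as reviewers.
import requests

token = "<graph-access-token>"   # placeholder; needs the AccessReview.ReadWrite.All permission
group_id = "<group-object-id>"   # placeholder

definition = {
    "displayName": "Monthly group membership review",
    "scope": {"query": f"/groups/{group_id}/transitiveMembers", "queryType": "MicrosoftGraph"},
    "reviewers": [{"query": "./owners", "queryType": "MicrosoftGraph"}],  # owners review and are emailed
    "settings": {
        "mailNotificationsEnabled": True,
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 1},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},  # placeholder start date
        },
    },
}

resp = requests.post("https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions",
                     headers={"Authorization": f"Bearer {token}"}, json=definition)
resp.raise_for_status()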
Let’s examine why the other options are not the best fit:
B. Tenant Restrictions: Tenant restrictions are used to control which external tenants your users can access when signed in with their organizational accounts. This is related to controlling outbound access to other organizations, not managing internal group memberships.
C. Azure AD Identity Protection: This service helps detect and remediate identity-based risks, such as leaked credentials or unusual sign-in activities. It’s focused on security threats, not on ensuring regular review of group memberships.
D. Conditional access policies: These policies define access rules based on conditions like user location, device state, or application being accessed. While you can use conditional access to restrict access to certain resources based on group membership, it doesn’t actively involve group owners in reviewing those memberships.
Note: This question is part of series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.
Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.
You need to enable single sign-on (SSO) for company users.
Solution: Install and configure an Azure AD Connect server to use pass-through authentication and select the Enable single sign-on option.
Does the solution meet the goal?
A. Yes
B. No
A. Yes
The proposed solution, installing and configuring an Azure AD Connect server to use pass-through authentication and selecting the “Enable single sign-on” option, does meet the goal.
Here’s why:
Azure AD Connect with Pass-through Authentication: This configuration allows users to authenticate to Azure AD using the same password they use for on-premises AD. Authentication requests are passed through to the on-premises AD for validation.
Enable single sign-on (Seamless SSO): This feature, when enabled in Azure AD Connect, automatically signs users in to Azure AD and cloud apps when they are on their domain-joined corporate devices. It uses Kerberos authentication behind the scenes.