test4 Flashcards
You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.
A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices.
You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.
What should you include in the recommendation?
A. a Recovery Services vault and Azure Backup
B. an Azure file share and Azure File Sync
C. Azure blob containers and Azure File Sync
D. a Recovery Services vault and Windows Server Backup
The correct answer is B. an Azure file share and Azure File Sync.
Explanation
Here’s why this is the best solution and why the others aren’t as suitable:
Azure File Share and Azure File Sync:
How it works:
You create an Azure file share in a storage account.
Azure File Sync is used to synchronize files between your on-premises file server (VM1) and the Azure file share.
You can then install the Azure File Sync agent on file servers in other branch offices and configure them to use the same Azure file share.
Azure File Sync acts as a cache, keeping frequently accessed files locally at each branch office.
Why it’s the best solution:
Fast Access: Users in other branch offices will have fast, local access to cached copies of the files.
Availability: If the Toronto office becomes unavailable, the files are still accessible from the Azure file share and the cached copies at other locations.
Centralized Management: You have a central copy of your files in Azure, making backup and management easier.
Other Options:
A. a Recovery Services vault and Azure Backup: Azure Backup is primarily for backup and restore, not for providing fast, continuous access to files in a failover scenario. Restoring from a backup would take time.
C. Azure blob containers and Azure File Sync: Azure Blob storage is designed for unstructured data and does not provide the SMB access that file shares require. More importantly, Azure File Sync requires an Azure file share as its cloud endpoint; it cannot synchronize with blob containers, so this combination is not valid.
D. a Recovery Services vault and Windows Server Backup: Similar to option A, this is a backup and restore solution, not designed for providing quick access to files in a different location if the primary location is unavailable.
How to Implement the Solution:
Create an Azure Storage Account: In your Azure subscription, create a storage account.
Create an Azure File Share: Within the storage account, create a file share.
Deploy Azure File Sync:
Create a Storage Sync Service resource in Azure.
Create a sync group within the Storage Sync Service.
Add your Azure file share as a cloud endpoint to the sync group.
Install the Azure File Sync agent on VM1 in Toronto.
Register VM1 with the Storage Sync Service.
Add VM1 as a server endpoint to the sync group.
Optional: Enable Cloud Tiering: On the server endpoint (VM1), you can configure cloud tiering, which will automatically tier less frequently accessed files to Azure, freeing up space on the local server while still keeping them accessible.
Branch Office Setup:
Install the Azure File Sync agent on file servers in other branch offices.
Register these servers with the Storage Sync Service.
Add them as server endpoints to the same sync group.
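If you prefer to script the Azure-side prerequisites (steps 1 and 2 above), a minimal Python sketch using the azure-identity and azure-mgmt-storage packages might look like the following. The resource group, account, and share names are hypothetical placeholders; the Storage Sync Service, agent installation, and endpoint registration steps still have to be completed separately.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Hypothetical names; substitute your own.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-filesync"
ACCOUNT_NAME = "stfilesyncdemo"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Step 1: create the storage account.
poller = client.storage_accounts.begin_create(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    {
        "location": "canadacentral",
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"},
    },
)
poller.result()

# Step 2: create the Azure file share that will be the cloud endpoint.
client.file_shares.create(RESOURCE_GROUP, ACCOUNT_NAME, "branch-files", {})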
Benefits:
High Availability: Files are accessible even if the Toronto office is down.
Fast Access: Branch offices have local cached copies of frequently used files.
Reduced Bandwidth: Only changes are synced between locations, minimizing bandwidth usage.
Centralized Backup: Your files are backed up in Azure.
A company deploys Azure Active Directory (Azure AD) Connect to synchronize identity information from their on-premises Active Directory Domain Services (AD DS) directory to their Azure AD tenant. The identity information that is synchronized includes user accounts, credential hashes for authentication (password sync), and group membership. The company plans to deploy several Windows and Linux virtual machines (VMs) to support their applications.
The VMs have the following requirements:
✑ Support domain join, LDAP read, LDAP bind, NTLM and Kerberos authentication, and Group Policy.
✑ Allow users to sign in to the domain using their corporate credentials and connect remotely to the VM by using Remote Desktop.
You need to support the VM deployment.
Which service should you use?
A. Azure AD Domain Services
B. Azure AD Privileged Identity Management
C. Azure AD Managed Service Identity
D. Active Directory Federation Services (AD FS)
The correct answer is A. Azure AD Domain Services.
Explanation:
Here’s why Azure AD Domain Services is the right choice and why the other options are not suitable:
Azure AD Domain Services (Azure AD DS):
What it is: Azure AD DS provides managed domain services such as domain join, group policy, LDAP, Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory. It essentially extends your Azure AD tenant to provide these traditional domain services.
Why it’s the right choice:
Domain Join: Allows you to join your Windows and Linux VMs to a managed domain.
LDAP, NTLM, Kerberos: Supports traditional authentication protocols needed by many applications.
Group Policy: You can manage your VMs using Group Policy, just like in an on-premises AD environment.
User Credentials: Users can sign in with their existing corporate credentials synchronized from the on-premises AD via Azure AD Connect.
Remote Desktop: Users can connect remotely using Remote Desktop Services (RDS), which can be integrated with Azure AD DS for authentication.
Simplified Management: It’s a managed service, so you don’t have to worry about patching, monitoring, or backing up domain controllers.
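As a quick illustration of the LDAP bind and NTLM requirements, the sketch below uses the third-party ldap3 Python library against a hypothetical managed domain (aadds.contoso.com). It assumes secure LDAP (LDAPS) has already been enabled on the managed domain; the domain name and credentials are placeholders.

from ldap3 import Server, Connection, NTLM, ALL

# Hypothetical managed domain with secure LDAP (LDAPS) enabled.
server = Server("ldaps://aadds.contoso.com:636", use_ssl=True, get_info=ALL)

# NTLM bind with synchronized corporate credentials (DOMAIN\user format).
conn = Connection(
    server,
    user="CONTOSO\\jsmith",
    password="<password>",
    authentication=NTLM,
)
if conn.bind():
    print("LDAP bind succeeded for", conn.extend.standard.who_am_i())
conn.unbind()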
Other Options:
B. Azure AD Privileged Identity Management (PIM): PIM is for managing, controlling, and monitoring access to important resources in your Azure AD organization. It’s not related to providing domain services for VMs.
C. Azure AD Managed Service Identity (MSI): MSI (now called Managed Identities for Azure resources) provides an identity for applications to use when connecting to resources that support Azure AD authentication. It’s not used for providing domain services to VMs.
D. Active Directory Federation Services (AD FS): AD FS is an on-premises solution for federated identity and single sign-on (SSO). While it can be used to extend your on-premises AD to the cloud, it requires significant infrastructure and management overhead. It’s not the best choice for this scenario where a managed domain service is preferred.
What should you include in the identity management strategy to support the planned changes?
A. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.
B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
C. Deploy a new Azure AD tenant for the authentication of new R&D projects.
D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure.
The best answer is B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
Explanation
Here’s why this is the most suitable approach and why the other options are less effective:
B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
Why it’s the best approach:
Supports Hybrid Identity: This approach directly supports the company’s need to maintain a hybrid identity model, leveraging their existing corp.fabrikam.com Active Directory.
Extends Existing Infrastructure: By deploying domain controllers for the existing corp.fabrikam.com domain into Azure virtual networks, you effectively extend the on-premises Active Directory infrastructure into Azure. This provides a consistent authentication and authorization experience for both on-premises and cloud resources.
Facilitates Migration: This is a common and recommended first step when migrating on-premises workloads to Azure. It allows for a smoother transition and minimizes disruption to users and applications.
Authentication for Azure VMs: The domain controllers in Azure can provide authentication services for the planned R&D virtual machines, allowing them to be joined to the corp.fabrikam.com domain.
Redundancy and Disaster Recovery: Having domain controllers in Azure also provides redundancy and can be part of a disaster recovery strategy.
Why other options are not as good:
A. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure:
Risky and Disruptive: Moving all domain controllers to Azure at once is a major change and could be very disruptive to the on-premises environment. It’s generally recommended to extend your AD to Azure gradually rather than doing a complete cutover.
On-Premises Dependency: If all domain controllers are in Azure, the on-premises environment becomes entirely dependent on the Azure connection for authentication. Any network issues between on-premises and Azure would severely impact on-premises operations.
C. Deploy a new Azure AD tenant for the authentication of new R&D projects:
Creates Identity Silos: Creating a separate Azure AD tenant would create an identity silo, making it harder to manage users and access across the organization. It would also complicate collaboration between R&D and the rest of the company.
Doesn’t Leverage Existing Infrastructure: It doesn’t leverage the company’s investment in their existing corp.fabrikam.com Active Directory.
D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure:
Unnecessary Complexity: Creating a separate forest (rd.fabrikam.com) introduces unnecessary complexity in terms of management, trusts, and user experience. It’s better to integrate with the existing corp.fabrikam.com domain unless there are very specific and compelling reasons to create a new forest.
You have a hybrid deployment of Azure Active Directory (Azure AD).
You need to recommend a solution to ensure that the Azure AD tenant can be managed only from the computers on your on-premises network.
What should you include in the recommendation?
A. Azure AD roles and administrators
B. a conditional access policy
C. Azure AD Application Proxy
D. Azure AD Privileged Identity Management
The correct answer is B. a conditional access policy.
Explanation:
Conditional Access Policies: Conditional access policies in Azure AD are powerful tools for controlling access to your cloud resources. They allow you to define specific conditions under which access is granted or denied. In this case, you can create a policy that allows access to Azure AD management only from specific IP address ranges or locations that correspond to your on-premises network.
How to Implement:
Define a Named Location: In Azure AD, under Security -> Conditional Access -> Named locations, create a new location that represents your on-premises network. You can define this location by specifying the public IP address ranges used by your on-premises network.
Create a Conditional Access Policy:
Go to Security -> Conditional Access -> Policies and create a new policy.
Users and groups: Select the users or groups this policy should apply to (e.g., all users, or specific admin roles).
Cloud apps or actions: Select “All cloud apps” or specifically target apps related to Azure AD management.
Conditions:
Locations: Select “Any location” and then exclude the named location you created in step 1 (your on-premises network).
Grant: Select “Block access”.
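The same two steps can also be automated through Microsoft Graph. The sketch below, using the requests library and a pre-acquired access token holding the Policy.ReadWrite.ConditionalAccess permission, is only an outline: the IP range and display names are placeholders, and the policy is created in report-only state so it can be reviewed before enforcement.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}

# Step 1: named location for the on-premises public IP range (placeholder CIDR).
location = requests.post(
    f"{GRAPH}/identity/conditionalAccess/namedLocations",
    headers=headers,
    json={
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": "On-premises network",
        "isTrusted": True,
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24"}
        ],
    },
).json()

# Step 2: block Azure management from everywhere except that location.
requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers=headers,
    json={
        "displayName": "Block Azure AD management outside on-premises",
        "state": "enabledForReportingButNotEnforced",  # report-only while testing
        "conditions": {
            "users": {"includeUsers": ["All"]},
            # First-party "Microsoft Azure Management" application.
            "applications": {"includeApplications": ["797f4846-ba00-4fd7-ba43-dac1f8f63013"]},
            "locations": {"includeLocations": ["All"], "excludeLocations": [location["id"]]},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    },
)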
Why Other Options Are Not Suitable:
A. Azure AD roles and administrators: Azure AD roles define permissions for managing Azure AD resources. While important for controlling what users can do, they don’t restrict where they can manage from.
C. Azure AD Application Proxy: Application Proxy is used to provide secure remote access to on-premises web applications. It’s not relevant to restricting access to Azure AD management.
D. Azure AD Privileged Identity Management (PIM): PIM is used to manage, control, and monitor access to privileged roles in Azure AD. It helps with just-in-time access and auditing but doesn’t restrict access based on location.
You have an Azure subscription that contains an Azure Cosmos DB account. You need to recommend a solution to generate an alert from Azure Log Analytics when a request charge for a query exceeds 50 request units more than 20 times within a 15-minute window. What should you recommend?
A. Create a search query to identify when requestCharge_s exceeds 50.
Configure an alert threshold of 20 and a period of 15.
B. Create a search query to identify when duration_s exceeds 20 and requestCharge_s exceeds 50.
Configure a period of 15.
C. Create a search query to identify when requestCharge_s exceeds 20.
Configure a period of 15 and a frequency of 20.
D. Create a search query to identify when duration_s exceeds 20.
Configure a period of 15.
The correct answer is A. Create a search query to identify when requestCharge_s exceeds 50. Configure an alert threshold of 20 and a period of 15.
Explanation
Here’s how we break down the requirements and why this solution is the correct one:
Requirements:
Identify Queries Exceeding 50 RUs: We need to find instances where the request charge (requestCharge_s) for a Cosmos DB query is greater than 50.
Threshold of 20 Times: The alert should only trigger if this happens more than 20 times.
Within a 15-Minute Window: The 20+ occurrences must be within a 15-minute period.
Solution Breakdown (Option A):
Create a search query to identify when requestCharge_s exceeds 50.
This is the core of the solution. We need a Log Analytics query that filters for Cosmos DB operations where requestCharge_s is greater than 50. A sample query would look like this:
AzureDiagnostics
| where Category == "DataPlaneRequests" // Focus on Cosmos DB data plane operations
| where toreal(requestCharge_s) > 50 // Filter for requests exceeding 50 RUs
| summarize count() by bin(TimeGenerated, 1m) // Count occurrences in 1-minute bins
Configure an alert threshold of 20
This directly addresses the requirement of triggering the alert only when the condition (request charge > 50) occurs more than 20 times.
Configure a period of 15
This defines the time window (15 minutes) over which the threshold (20 occurrences) is evaluated. The alert rule runs the query every 'frequency' minutes (5 by default) and counts the results returned over the preceding 15 minutes; if that count exceeds the threshold, the alert fires.
Why Other Options Are Incorrect:
B. Create a search query to identify when duration_s exceeds 20 and requestCharge_s exceeds 50. Configure a period of 15.
The duration_s field represents the duration of the query, and the requirement concerns the request charge (cost), not the duration. What matters is how many times requestCharge_s exceeds 50 within the window, so the duration filter is irrelevant, and this option also omits the threshold of 20.
C. Create a search query to identify when requestCharge_s exceeds 20. Configure a period of 15 and a frequency of 20.
The query is incorrect (should be requestCharge_s > 50). The frequency setting does not control the number of times the condition must occur; it defines how often the alert rule is evaluated.
D. Create a search query to identify when duration_s exceeds 20. Configure a period of 15.
Similar to option B, this focuses on duration_s, which is not the primary requirement. We are concerned with the cost (request charge) of the queries.
You are designing a data protection strategy for Azure virtual machines. All the virtual machines are in the Standard tier and use managed disks. You need to recommend a solution that meets the following requirements:
- The use of encryption keys is audited.
- All the data is encrypted at rest always.
You manage the encryption keys, not Microsoft. What should you include in the recommendation?
A. BitLocker Drive Encryption (BitLocker)
B. Azure Storage Service Encryption
C. Client-side Encryption
D. Azure Disk Encryption
The correct answer is D. Azure Disk Encryption.
Explanation
Here’s why Azure Disk Encryption is the best solution to meet the requirements and why the other options are not as suitable:
Requirements:
Encryption at Rest: All data on the managed disks must be encrypted when stored (at rest).
Customer-Managed Keys: You, not Microsoft, must manage the encryption keys.
Auditing of Key Usage: The use of encryption keys needs to be auditable.
D. Azure Disk Encryption
How it works: Azure Disk Encryption leverages the BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and data disks of Azure virtual machines. It integrates with Azure Key Vault to help you control and manage the disk encryption keys and secrets.
Why it’s the best solution:
Encryption at Rest: It encrypts the managed disks at rest.
Customer-Managed Keys (CMK): You can choose to manage your encryption keys in Azure Key Vault, giving you full control over key creation, rotation, deletion, and usage.
Auditing: Azure Key Vault provides detailed audit logs of all key operations, including who accessed the keys, when, and for what purpose. This meets the auditing requirement.
Integration with Managed Disks: It seamlessly integrates with Azure managed disks.
Why Other Options Are Not as Suitable:
A. BitLocker Drive Encryption (BitLocker) (without Azure Disk Encryption): While BitLocker can encrypt disks, using it independently on Azure VMs without Azure Disk Encryption does not integrate with Azure Key Vault for key management or auditing. You would have to manage keys within the VM itself, which is less secure and doesn’t provide centralized auditing.
B. Azure Storage Service Encryption (SSE): SSE automatically encrypts data at rest in Azure Storage accounts, but it uses Microsoft-managed keys by default. While you can opt to use customer-managed keys with SSE, it’s not the primary mechanism for encrypting Azure VM disks. Azure Disk Encryption is specifically designed for that purpose.
C. Client-Side Encryption: In client-side encryption, you encrypt the data before sending it to Azure. This means you manage keys completely outside of Azure. While this approach provides a high level of control, it doesn’t directly address the requirement of encrypting data at rest on managed disks within Azure. Additionally, it may add complexity to your application logic.
You have 100 servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2012 instances. The instances host databases that have the following characteristics:
- The largest database is currently 3 TB.
- None of the databases will ever exceed 4 TB.
- Stored procedures are implemented by using CLR.
You plan to move all the data from SQL Server to Azure. You need to recommend an Azure service to host the databases. The solution must meet the following requirements:
- Whenever possible, minimize management overhead for the migrated databases.
- Minimize the number of database changes required to facilitate the migration.
- Ensure that users can authenticate by using their Active Directory credentials.
What should you include in the recommendation?
A. Azure SQL Database single databases
B. Azure SQL Database Managed Instance
C. Azure SQL Database elastic pools
D. SQL Server 2016 on Azure virtual machines
The correct answer is B. Azure SQL Database Managed Instance.
Explanation
Here’s why Azure SQL Database Managed Instance is the best fit for this scenario and why the other options are less suitable:
Requirements:
Minimize Management Overhead: The solution should be a PaaS (Platform as a Service) offering to reduce administrative tasks like patching, backups, and high availability configuration.
Minimize Database Changes: The migration should require minimal or no changes to the existing database schema, stored procedures, or application code. This is crucial for a smooth transition.
Active Directory Authentication: Users should be able to authenticate using their existing Active Directory credentials.
Database Size: The solution must support databases up to 4 TB.
CLR Stored Procedures: The solution must support the use of CLR (Common Language Runtime) stored procedures.
B. Azure SQL Database Managed Instance
Why it’s the best fit:
PaaS: Managed Instance is a fully managed service, so Microsoft handles OS and SQL Server patching, backups, high availability, and other management tasks.
High Compatibility: It offers near 100% compatibility with on-premises SQL Server, including support for:
Database Size: Supports databases well above the 4 TB requirement (up to 8 TB in the General Purpose tier).
CLR: Supports CLR stored procedures.
Cross-Database Queries: Allows queries that span multiple databases within the same instance.
SQL Server Agent: You can use SQL Server Agent jobs for scheduling tasks.
Active Directory Authentication: It supports Azure Active Directory authentication, which can be integrated with your on-premises Active Directory using Azure AD Connect.
Minimal Migration Effort: Due to its high compatibility, migrating databases from SQL Server 2012 to Managed Instance typically requires minimal code changes.
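As an illustration of the Azure AD authentication point, the hedged sketch below connects to a hypothetical managed instance with pyodbc and ODBC Driver 17's ActiveDirectoryPassword mode; the server, database, and user names are placeholders (the public endpoint format and port 3342 are taken from the managed instance connectivity docs).

import pyodbc

# Hypothetical managed instance public endpoint and Azure AD user.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=mi-demo.public.abc123.database.windows.net,3342;"
    "Database=SalesDb;"
    "Authentication=ActiveDirectoryPassword;"
    "UID=jsmith@contoso.com;"
    "PWD=<password>;"
)
row = conn.cursor().execute("SELECT @@VERSION;").fetchone()
print(row[0])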
Why other options are less suitable:
A. Azure SQL Database single databases:
Limited Compatibility: Single databases have some limitations compared to on-premises SQL Server, including:
No CLR Support: Does not support CLR stored procedures.
No Cross-Database Queries: Cannot query across multiple databases.
No SQL Server Agent: No built-in job scheduler.
Database Size: The maximum size of a single database varies by service tier and purchasing model, and the 4 TB ceiling of the top tiers leaves no headroom for the largest database to grow.
C. Azure SQL Database elastic pools:
Same Limitations as Single Databases: Elastic pools are essentially a group of single databases managed together, so they share the same compatibility limitations as single databases (no CLR, etc.).
Cost Optimization: Elastic pools are good for cost optimization when you have many databases with varying resource utilization, but they don’t address the core compatibility issues in this scenario.
D. SQL Server 2016 on Azure virtual machines:
IaaS, Not PaaS: This is an IaaS (Infrastructure as a Service) solution, meaning you are responsible for managing the operating system, SQL Server, backups, patching, high availability, etc. This does not minimize management overhead.
Unnecessary Upgrade: While you could potentially upgrade to a newer version of SQL Server in a VM, it doesn’t solve the management overhead issue and is not required by the scenario.
You plan to create an Azure Cosmos DB account that uses the SQL API. The account will contain data added by a web application. The web application will send data daily. You need to recommend a notification solution that meets the following requirements:
- Sends email notification when data is received from IoT devices.
- Minimizes compute cost.
What should you include in the recommendation?
A. Deploy an Azure logic app that has the Azure Cosmos DB connector configured to use a SendGrid action.
B. Deploy a function app that is configured to use the Consumption plan and a SendGrid binding.
C. Deploy an Azure logic app that has a SendGrid connector configured to use an Azure Cosmos DB action.
D. Deploy a function app that is configured to use the Consumption plan and an Azure Event Hubs binding.
The best answer is B. Deploy a function app that is configured to use the Consumption plan and a SendGrid binding.
Explanation:
Here’s why this solution is the most suitable and why the others are less effective:
Requirements:
Email Notification on Data Change: Send an email when new data is added to the Cosmos DB account.
Minimize Compute Cost: The solution should be cost-effective, especially since data is only sent daily (implying relatively infrequent triggers).
B. Function App (Consumption Plan) with SendGrid Binding
Why it’s the best solution:
Azure Functions (Consumption Plan): Azure Functions are serverless compute services that allow you to run code on-demand without managing infrastructure. The Consumption plan is ideal for cost optimization because you only pay for the time your function is actually executing. Since data is sent daily, the function will only run briefly each day, minimizing cost.
Cosmos DB Trigger: Azure Functions can be triggered by changes to Cosmos DB data (using the Cosmos DB trigger). This allows the function to automatically execute when new data is received.
SendGrid Binding: Azure Functions has built-in output bindings for SendGrid, simplifying the process of sending emails. You can easily configure the binding to send an email with the relevant data from the Cosmos DB change.
How it works:
Data is written to the Cosmos DB account.
The Cosmos DB trigger detects the new data and invokes the function.
The function code executes, potentially processing or formatting the data.
The SendGrid output binding sends an email notification.
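A minimal sketch of that flow in the Azure Functions Python v2 programming model is shown below. It assumes app settings named CosmosDbConnection and SENDGRID_API_KEY and hypothetical database/container names; for brevity it calls the SendGrid SDK directly from the function body rather than declaring the SendGrid output binding the answer describes, and decorator parameter names can vary slightly across extension versions.

import os
import azure.functions as func
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

app = func.FunctionApp()

@app.cosmos_db_trigger(
    arg_name="documents",
    database_name="AppData",          # hypothetical database
    container_name="Items",           # hypothetical container
    connection="CosmosDbConnection",  # app setting with the account connection string
    lease_container_name="leases",
    create_lease_container_if_not_exists=True,
)
def notify_on_new_data(documents: func.DocumentList) -> None:
    # Runs only when the change feed delivers new or updated documents,
    # so on the Consumption plan you pay only for these brief executions.
    message = Mail(
        from_email="alerts@contoso.com",
        to_emails="ops@contoso.com",
        subject=f"{len(documents)} new document(s) received",
        plain_text_content="\n".join(doc["id"] for doc in documents),
    )
    SendGridAPIClient(os.environ["SENDGRID_API_KEY"]).send(message)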
Why other options are less suitable:
A. Logic App (Cosmos DB Connector to SendGrid Action):
Logic Apps Cost: While Logic Apps are also serverless, their pricing model is based on actions executed. Depending on the complexity of the logic app and the frequency of data changes, it might be more expensive than a Function App on the Consumption plan, especially for infrequent triggers.
Less Efficient for Simple Tasks: Logic Apps are great for more complex workflows and orchestrations. For a simple task like sending an email on a data change, a Function App is often more efficient.
Note that the trigger-then-action order in this option (Cosmos DB connector as the trigger, SendGrid as the action) is actually correct; cost and efficiency are the deciding factors against it.
C. Logic App (SendGrid Connector to Cosmos DB Action):
Incorrect Order of Actions: This option suggests the wrong order. You need to trigger based on Cosmos DB changes and then send an email (Cosmos DB -> SendGrid), not the other way around. It also has the same cost and efficiency considerations as option A.
D. Function App (Consumption Plan) with Event Hubs Binding:
Unnecessary Complexity: Event Hubs is designed for high-throughput, real-time event ingestion. It’s overkill for this scenario where data is sent daily. Using Event Hubs would add unnecessary complexity and potentially higher cost. The Cosmos DB trigger is the appropriate trigger in this case.
You plan to use Azure Site Recovery to protect several on-premises physical server workloads. Each server workload is independent of the other. The workloads are stateless. You need to recommend a failover strategy to ensure that if the on-premises data center fails, the workloads are available in Azure as quickly as possible. Which failover strategy should you include in the recommendation?
A. Latest
B. Latest app-consistent
C. Latest multi-VM processed
D. Latest processed
The correct answer is D. Latest processed.
Explanation
Here’s why “Latest processed” is the best failover strategy in this scenario and why the others are less suitable:
Requirements:
Protect Physical Servers: The workloads are running on on-premises physical servers.
Independent, Stateless Workloads: Each server is independent and stateless, meaning there are no dependencies between them, and they don’t maintain any persistent state that needs to be synchronized across multiple servers.
Fastest Recovery: The primary goal is to minimize downtime and have the workloads available in Azure as quickly as possible after a failure.
Failover Strategies in Azure Site Recovery:
A. Latest: This option fails over to the latest recovery point available. While it might seem like the fastest, it could include data that hasn’t been fully processed by Site Recovery, potentially leading to inconsistencies or longer boot times during failover.
B. Latest app-consistent: This option fails over to the latest application-consistent snapshot. App-consistent snapshots are crucial for applications that need to maintain data consistency (e.g., databases). However, they take longer to create and are not necessary for stateless workloads.
C. Latest multi-VM processed: This option is used when you have multiple VMs that need to be failed over together and kept consistent with each other (e.g., a multi-tier application). This is not relevant in this scenario because the workloads are independent.
D. Latest processed: This option fails over to the latest recovery point that has been fully processed by Site Recovery. This ensures that the data is consistent and ready to be used in Azure, resulting in the fastest possible recovery for independent, stateless workloads.
Why “Latest processed” is the best choice:
Fastest Recovery Time: By using the latest processed recovery point, you minimize the time it takes for the VMs to become operational in Azure after a failover. Site Recovery has already done the necessary processing to ensure data consistency.
Suitable for Stateless Workloads: Since the workloads are stateless, there’s no need for application-consistent snapshots, which add overhead and increase recovery time.
Independent Workloads: The “multi-VM” options are not needed because each server is independent.
You plan to move a web application named App1 from an on-premises data center to Azure. App1 depends on a custom framework that is installed on the host server. You need to recommend a solution to host App1 in Azure. The solution must meet the following requirements:
- App1 must be available to users if an Azure data center becomes unavailable.
- Costs must be minimized.
What should you include in the recommendation?
A. In two Azure regions, deploy a Traffic Manager profile and a web app.
B. In two Azure regions, deploy a load balancer and a virtual machine scale set.
C. Deploy a load balancer and a virtual machine scale set across two availability zones.
D. In two Azure regions, deploy a load balancer and a web app.
The correct answer is C. Deploy a load balancer and a virtual machine scale set across two availability zones.
App1 depends on a custom framework installed on the host server, which rules out the web app options (A and D) because App Service does not let you install software on the underlying host. Availability zones are physically separate data centers within a region, so a scale set spread across two zones keeps App1 available if an Azure data center fails, and a single-region deployment costs less than duplicating the infrastructure across two regions (B).
You have an on-premises deployment of MongoDB. You plan to migrate MongoDB to an Azure Cosmos DB account that uses the MongoDB API. You need to recommend a solution for migrating MongoDB to Azure Cosmos DB. What should you include in the recommendation?
A. mongorestore
B. Data Migration Assistant
C. Azure Storage Explorer
D. Azure Cosmos DB Data Migration Tool
The correct answer is A. mongorestore.
Explanation
Here’s why mongorestore is the recommended tool for migrating MongoDB to Azure Cosmos DB’s API for MongoDB, and why the other options are not suitable:
A. mongorestore
What it is: mongorestore is a command-line utility that is part of the MongoDB toolset. It’s specifically designed to import data into a MongoDB database from a backup created by mongodump.
Why it’s the best choice for Cosmos DB’s API for MongoDB:
Compatibility: Azure Cosmos DB’s API for MongoDB is designed to be compatible with the MongoDB wire protocol. This means that tools like mongorestore (and mongodump) can be used to interact with Cosmos DB as if it were a native MongoDB instance.
Efficiency: mongorestore is generally an efficient way to import large datasets into MongoDB.
Simplicity: It’s a relatively straightforward command-line tool to use, especially if you’re already familiar with MongoDB.
Supported by Microsoft: Microsoft’s own documentation recommends using mongodump and mongorestore for migrating to Azure Cosmos DB’s API for MongoDB.
Typical Workflow:
mongodump: Use the mongodump utility to create a backup (dump) of your on-premises MongoDB database.
mongorestore: Use mongorestore to import the backup data into your Azure Cosmos DB account, specifying the Cosmos DB connection string.
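After mongorestore finishes, a quick sanity check that every collection made it across can be scripted with pymongo. This is a hedged sketch: the connection strings and database name are placeholders, and the Cosmos DB string comes from the portal's Connection String blade (MongoDB API accounts listen on port 10255).

from pymongo import MongoClient

source = MongoClient("mongodb://onprem-mongo:27017")  # placeholder source server
target = MongoClient(
    "mongodb://myaccount:<account-key>@myaccount.mongo.cosmos.azure.com:10255/"
    "?ssl=true&replicaSet=globaldb&retrywrites=false"
)

DB_NAME = "inventory"  # hypothetical database
for name in source[DB_NAME].list_collection_names():
    src = source[DB_NAME][name].count_documents({})
    dst = target[DB_NAME][name].count_documents({})
    print(f"{name}: source={src} target={dst} {'OK' if src == dst else 'MISMATCH'}")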
Why other options are not suitable:
B. Data Migration Assistant (DMA): DMA is primarily used for assessing and migrating SQL Server databases to Azure SQL Database or Azure SQL Managed Instance. It’s not designed for MongoDB migrations.
C. Azure Storage Explorer: Storage Explorer is a tool for managing Azure Storage resources (blobs, files, queues, tables). It’s not used for migrating databases to Cosmos DB.
D. Azure Cosmos DB Data Migration Tool: The older, open-source tool with this name targeted the SQL (Core) API and is no longer actively maintained or recommended by Microsoft. The preferred methods for the MongoDB API are mongodump/mongorestore, Azure Database Migration Service, or other native MongoDB tools.
Important Considerations When Using mongorestore:
Connection String: Make sure you use the correct connection string for your Azure Cosmos DB account (with the MongoDB API enabled). You can find this in the Azure portal under your Cosmos DB account’s “Connection String” blade.
Throughput: Provision sufficient throughput (Request Units - RUs) on your Cosmos DB account to handle the import process efficiently. You can scale up the throughput during the migration and then scale it down afterward.
Indexing: Consider deferring the creation of additional indexes until after the data import is complete; importing into collections with fewer indexes to maintain can significantly speed up the import.
You need to recommend a data storage solution that meets the following requirements:
- Ensures that application can access the data by using a REST connection.
- Hosts 20 independent tables of varying sizes and usage patterns.
- Automatically replicates the data to a second Azure region.
- Minimizes costs.
What should you recommend?
A. an Azure SQL Database elastic database pool that uses active geo-replication
B. tables in an Azure Storage account that uses geo-redundant storage (GRS)
C. tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)
D. an Azure SQL database that uses active geo-replication
The correct answer is B. tables in an Azure Storage account that uses geo-redundant storage (GRS).
Explanation
Let’s analyze the requirements and why this solution is the best fit:
Requirements:
REST API Access: The application needs to access the data using a REST connection.
20 Independent Tables: The solution must support hosting multiple independent tables.
Automatic Replication to Second Region: Data needs to be automatically replicated to a secondary Azure region for disaster recovery purposes.
Minimize Costs: The solution should be cost-effective.
B. Tables in an Azure Storage Account with Geo-Redundant Storage (GRS)
Why it’s the best solution:
REST API: Azure Table Storage provides a RESTful API for accessing and manipulating data.
Tables: Azure Storage accounts support Table Storage, which allows you to store structured data in tables. You can create 20 independent tables within a single storage account.
Geo-Redundant Storage (GRS): GRS automatically replicates your data to a secondary region hundreds of miles away from the primary region. This provides protection against regional outages.
Cost-Effective: Azure Table Storage is generally a very cost-effective storage option, especially when compared to relational databases like Azure SQL Database. GRS is also relatively inexpensive compared to other replication options like RA-GRS.
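To make the REST point concrete: the azure-data-tables Python package (a thin wrapper over the Table service REST API) is all an application needs. The account name, table name, and entity below are hypothetical.

from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string(
    "DefaultEndpointsProtocol=https;AccountName=stdemo;"
    "AccountKey=<account-key>;EndpointSuffix=core.windows.net"
)

# One of the 20 independent tables; creation is idempotent.
table = service.create_table_if_not_exists("Orders")

table.upsert_entity({
    "PartitionKey": "customer-001",
    "RowKey": "order-42",
    "Total": 129.95,
})

for entity in table.query_entities("PartitionKey eq 'customer-001'"):
    print(entity["RowKey"], entity["Total"])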
Why other options are less suitable:
A. Azure SQL Database elastic database pool with active geo-replication:
Cost: Azure SQL Database is significantly more expensive than Azure Table Storage. Elastic pools are cost-effective for managing multiple databases with varying workloads, but they are still more expensive than Table Storage.
Active Geo-Replication Cost: Active geo-replication adds to the cost of Azure SQL Database.
REST API: While you can reach an Azure SQL database through REST-based services, it does not expose a native REST data API the way Table Storage does.
C. Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS):
Higher Cost: RA-GRS is more expensive than GRS because it allows read access to the secondary region. The requirements only specify the need for replication, not read access to the secondary region.
D. An Azure SQL database that uses active geo-replication:
Cost: As mentioned earlier, Azure SQL Database is a more expensive option than Azure Table Storage.
Single Database: A single Azure SQL database would have to host all 20 tables together in one database, which undermines their independence and is not ideal for tables with varying sizes and usage patterns.
A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that is integrated with Microsoft Office 365 and an Azure subscription.
Contoso has an on-premises identity infrastructure. The infrastructure includes servers that run Active Directory Domain Services (AD DS), Active Directory
Federation Services (AD FS), Azure AD Connect, and Microsoft Identity Manager (MIM).
Contoso has a partnership with a company named Fabrikam, Inc. Fabrikam has an Active Directory forest and an Office 365 tenant. Fabrikam has the same on-premises identity infrastructure as Contoso.
A team of 10 developers from Fabrikam will work on an Azure solution that will be hosted in the Azure subscription of Contoso. The developers must be added to the Contributor role for a resource in the Contoso subscription.
You need to recommend a solution to ensure that Contoso can assign the role to the 10 Fabrikam developers. The solution must ensure that the Fabrikam developers use their existing credentials to access resources.
What should you recommend?
A. Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.
B. Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.
C. In the Azure AD tenant of Contoso, enable Azure Active Directory Domain Services (Azure AD DS). Create a one-way forest trust that uses selective authentication between the Active Directory forests of Contoso and Fabrikam.
D. In the Azure AD tenant of Contoso, create guest accounts for the Fabrikam developers.
The correct answer is D. In the Azure AD tenant of Contoso, create guest accounts for the Fabrikam developers.
Here’s why:
Guest Accounts (Azure AD B2B): Azure Active Directory (Azure AD) business-to-business (B2B) collaboration lets you securely invite external users (guests) into your Azure AD tenant. These guest users can then be assigned roles (like Contributor) within your Azure subscription. The Fabrikam developers can sign in using their existing Fabrikam credentials (via federation with their Office 365 tenant or through a social provider if configured).
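For illustration, inviting one Fabrikam developer through the Microsoft Graph invitations API might look like the sketch below (requests library, pre-acquired token with the User.Invite.All permission; the email address and redirect URL are placeholders). After the guest redeems the invitation, Contoso admins can assign the Contributor role to the guest account at the appropriate scope.

import requests

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": "Bearer <access-token>"},
    json={
        "invitedUserEmailAddress": "dev1@fabrikam.com",
        "inviteRedirectUrl": "https://portal.azure.com",
        "sendInvitationMessage": True,
    },
)
resp.raise_for_status()

# The returned guest object id is what you use in the RBAC role assignment.
print("Guest object id:", resp.json()["invitedUser"]["id"])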
Let’s look at why the other options are not the best fit:
A. Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.
Why it’s not ideal: While a forest trust would enable authentication, it doesn’t directly translate to authorization within the Azure subscription. Azure RBAC (Role-Based Access Control) relies on Azure AD identities, not on-premises AD. Furthermore, setting up and maintaining a forest trust for just 10 developers might be overkill and introduce unnecessary complexity.
B. Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.
Why it’s not ideal: Organizational relationships in Office 365 primarily facilitate features like free/busy lookups and sharing calendars between tenants. They do not grant access to Azure resources or the ability to assign RBAC roles.
C. In the Azure AD tenant of Contoso, enable Azure Active Directory Domain Services (Azure AD DS). Create a one-way forest trust that uses selective authentication between the Active Directory forests of Contoso and Fabrikam.
Why it’s not ideal: Azure AD DS is a managed domain service, essentially providing a subset of AD DS functionality in the cloud. It is primarily used for legacy applications that require traditional domain services. It’s not necessary for modern cloud applications that can use Azure AD directly. Additionally, a trust with Azure AD DS would not enable assignment of roles in the Contoso subscription and is overly complex.
Your network contains an on-premises Active Directory forest.
You discover that when users change jobs within your company, the membership of the user groups is not being updated. As a result, the users can access resources that are no longer relevant to their job.
You plan to integrate Active Directory and Azure Active Directory (Azure AD) by using Azure AD Connect.
You need to recommend a solution to ensure that group owners are emailed monthly about the group memberships they manage.
What should you include in the recommendation?
A. Azure AD access reviews
B. Tenant Restrictions
C. Azure AD Identity Protection
D. conditional access policies
The correct answer is A. Azure AD access reviews.
Here’s why:
Azure AD access reviews: This feature allows you to schedule regular reviews of access rights to resources, including group memberships, applications, and roles. You can configure access reviews to target specific groups, and designate the group owners as the reviewers. During the review, owners will be notified (via email) and prompted to confirm or remove members from their groups. This directly addresses the requirement of ensuring group owners are regularly reminded to review and update memberships.
Let’s examine why the other options are not the best fit:
B. Tenant Restrictions: Tenant restrictions are used to control which external tenants your users can access when signed in with their organizational accounts. This is related to controlling outbound access to other organizations, not managing internal group memberships.
C. Azure AD Identity Protection: This service helps detect and remediate identity-based risks, such as leaked credentials or unusual sign-in activities. It’s focused on security threats, not on ensuring regular review of group memberships.
D. Conditional access policies: These policies define access rules based on conditions like user location, device state, or application being accessed. While you can use conditional access to restrict access to certain resources based on group membership, it doesn’t actively involve group owners in reviewing those memberships.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.
Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.
You need to enable single sign-on (SSO) for company users.
Solution: Install and configure an Azure AD Connect server to use pass-through authentication and select the Enable single sign-on option.
Does the solution meet the goal?
A. Yes
B. No
A. Yes
The proposed solution, installing and configuring an Azure AD Connect server to use pass-through authentication and selecting the “Enable single sign-on” option, does meet the goal.
Here’s why:
Azure AD Connect with Pass-through Authentication: This configuration allows users to authenticate to Azure AD using the same password they use for on-premises AD. Authentication requests are passed through to the on-premises AD for validation.
Enable single sign-on (Seamless SSO): This feature, when enabled in Azure AD Connect, automatically signs users in to Azure AD and cloud apps when they are on their domain-joined corporate devices. It uses Kerberos authentication behind the scenes.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.
Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.
You need to enable single sign-on (SSO) for company users.
Solution: Install and configure an Azure AD Connect server to use password hash synchronization and select the Enable single sign-on option.
Does the solution meet the goal?
A. Yes
B. No
A. Yes
The proposed solution, installing and configuring an Azure AD Connect server to use password hash synchronization and selecting the “Enable single sign-on” option, does meet the goal.
Here’s why:
Azure AD Connect with Password Hash Synchronization: This configuration synchronizes a hash of the users’ on-premises password hashes to Azure AD. This allows users to authenticate to Azure AD using the same password they use on-premises, even though authentication is handled by Azure AD directly.
Enable single sign-on (Seamless SSO): This feature, when enabled in Azure AD Connect, works independently of the synchronization method (password hash sync or pass-through authentication). It still relies on Kerberos authentication to achieve the automatic sign-in experience for domain-joined devices on the corporate network.
You manage an Azure environment for a company. The environment has over 25,000 licensed users and 100 mission-critical applications.
You need to recommend a solution that provides advanced endpoint threat detection and remediation strategies.
What should you recommend?
A. Azure Active Directory (Azure AD) authentication
B. Microsoft Identity Manager
C. Azure Active Directory Federation Services (AD FS)
D. Azure Active Directory (Azure AD) Connect
E. Azure Active Directory (Azure AD) Identity Protection
The correct recommendation for providing advanced endpoint threat detection and remediation strategies in your Azure environment is:
E. Azure Active Directory (Azure AD) Identity Protection
Here’s why this option is the best choice:
Advanced Threat Detection: Azure AD Identity Protection provides capabilities for detecting potential vulnerabilities affecting your organization’s identities, including risk-based conditional access policies. It helps identify compromised accounts and can trigger alerts for suspicious activities.
Remediation Strategies: This service not only detects threats but also offers remediation strategies such as requiring password changes or multi-factor authentication (MFA) for users flagged as high risk. This aligns with the need for effective remediation strategies.
Integration with Security Solutions: Azure AD Identity Protection integrates well with other Microsoft security solutions, enhancing overall security posture and threat response capabilities across your Azure environment.
Why the Other Options Are Less Suitable:
A. Azure Active Directory (Azure AD) authentication: While essential for identity management, it does not provide advanced threat detection or remediation capabilities.
B. Microsoft Identity Manager: This is primarily focused on identity synchronization and management rather than advanced threat detection.
C. Azure Active Directory Federation Services (AD FS): AD FS is used for single sign-on and does not provide endpoint threat detection or remediation features.
D. Azure Active Directory (Azure AD) Connect: This tool is used to synchronize on-premises directories with Azure AD but does not offer any endpoint threat detection capabilities.
You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
✑ If the manager does not verify an access permission, automatically revoke that permission.
✑ Minimize development effort.
What should you recommend?
A. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
B. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
C. Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
D. In Azure Active Directory (Azure AD), create an access review of Application1
The correct answer is D. In Azure Active Directory (Azure AD), create an access review of Application1.
Here’s why:
Azure AD Access Reviews: This feature is specifically designed for reviewing access rights to resources like applications, groups, and roles. It allows you to:
Target specific resources: You can create an access review specifically for Application1.
Assign reviewers: You can designate the manager of the Fabrikam developers as the reviewer.
Set a schedule: You can configure the review to occur monthly, meeting the requirement for a monthly email.
Automate actions: You can configure the review to automatically remove access if the reviewer (the manager) doesn’t approve it within a specified timeframe.
Minimal development effort: Access reviews are a built-in feature, requiring no custom code or scripting.
Let’s analyze why the other options are not as suitable:
A. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources:
Why it’s not ideal: PIM is primarily used for managing just-in-time access to privileged roles. While you could create a custom role, PIM doesn’t offer the automated monthly review and manager approval/denial workflow needed in this scenario.
B. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet:
Why it’s not ideal: This would require significant development effort to create a runbook that retrieves role assignments, sends emails, and processes responses. It also wouldn’t have the built-in review and approval workflow of access reviews.
C. Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet:
Why it’s not ideal: Similar to option B, this requires custom development. Additionally, Get-AzureRmRoleAssignment is part of the older AzureRM module. It is recommended to use Az module cmdlets for new development. Using this also wouldn’t have the built-in review features.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription that contains a resource group named RG1.
You create an Azure Active Directory (Azure AD) group named ResearchUsers that contains the user accounts of all researchers.
You need to recommend a solution that meets the following requirements:
✑ The researchers must be allowed to create Azure virtual machines.
✑ The researchers must only be able to create Azure virtual machines by using specific Azure Resource Manager templates.
Solution: On RG1, assign a custom role-based access control (RBAC) role to the ResearchUsers group.
Does this meet the goal?
A. Yes
B. No
B. No
Assigning a custom RBAC role to the ResearchUsers group on RG1 alone does not meet the goal.
Here’s why:
Custom RBAC Roles: Custom RBAC roles allow you to define granular permissions for specific actions on Azure resources. You could create a custom role that grants permissions to create virtual machines within RG1.
Missing Restriction: However, a custom RBAC role does not provide a mechanism to enforce the use of specific ARM templates when creating virtual machines. The researchers would still be able to create VMs using any method, including the portal, CLI, or PowerShell, without being restricted to the designated templates.
To meet the goal, you would need to combine a custom RBAC role with Azure Policy:
Custom RBAC Role: Grant the ResearchUsers group a custom role that allows the creation of virtual machines in RG1 (Microsoft.Compute/virtualMachines/*).
Azure Policy: Create a policy that restricts how virtual machines can be created. For example, a policy with a deny effect can block VM deployments whose properties do not match the configuration defined in the approved ARM templates, so the researchers can only deploy successfully by using those templates. You would identify or create the specific ARM templates you want to allow and express their required settings in the policy definition.
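A sketch of the RBAC half, creating the custom role with the azure-mgmt-authorization package, appears below. The role name and action list are assumptions chosen for illustration, and the Azure Policy half would still need to be authored separately.

import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

client = AuthorizationManagementClient(DefaultAzureCredential(), "<subscription-id>")
scope = "/subscriptions/<subscription-id>/resourceGroups/RG1"

client.role_definitions.create_or_update(
    scope,
    str(uuid.uuid4()),  # new role definition id
    {
        "role_name": "Research VM Creator",  # hypothetical role name
        "description": "Create and manage virtual machines in RG1 only.",
        "permissions": [{
            "actions": [
                "Microsoft.Compute/virtualMachines/*",
                "Microsoft.Network/networkInterfaces/*",
                "Microsoft.Resources/deployments/*",
            ],
            "not_actions": [],
        }],
        "assignable_scopes": [scope],
    },
)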
You have an Azure subscription that contains several resource groups. Including a resource group named RG1. RG1 contains several business-critical resources.
A user named admin1 is assigned the Owner role to the subscription.
You need to prevent admin1 from modifying the resources in RG1. The solution must ensure that admin1 can manage the resources in the other resource groups.
What should you use?
A. a management group
B. an Azure policy
C. a custom role
D. an Azure blueprint
The correct answer is B. an Azure policy.
Here’s why:
Azure Policy: Azure Policy allows you to define and enforce rules about the configuration of your Azure resources. You can create a policy with a Deny effect that blocks write operations on the resources in RG1 and assign it at the RG1 scope. Because Azure Policy is evaluated independently of RBAC, it blocks modifications in RG1 even for admin1, who holds the Owner role at the subscription level, while leaving the other resource groups unaffected.
Let’s examine why the other options aren’t the best fit:
A. a management group: Management groups are used to organize subscriptions and apply governance policies at a higher level than subscriptions. While you could use management groups to apply policies, they don’t offer the granularity to target a specific resource group within a subscription for a deny action while allowing actions on others.
C. a custom role: You could create a custom role that excludes permissions for RG1, but RBAC permissions are additive: the Owner role assigned at the subscription level would still grant full access to RG1, so a custom role cannot take those permissions away. You would also have to assign the custom role separately for every other resource group, which is neither practical nor scalable.
D. an Azure blueprint: Azure blueprints are used to define and deploy a set of resources and policies in a standardized way. While you could use a blueprint to apply a policy, it’s not the most direct way to achieve the specific requirement of preventing modifications to RG1 while allowing them elsewhere.
You have an Azure subscription.
You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements:
✑ Only allow the creation of the virtual machines in specific regions.
✑ Only allow the creation of specific sizes of virtual machines.
What should you include in the recommendation?
A. conditional access policies
B. Azure Policy
C. Azure Resource Manager templates
D. role-based access control (RBAC)
Let’s analyze each option against the requirements:
A. Conditional Access Policies: Conditional Access Policies are used to enforce authentication and authorization policies in Azure Active Directory (Azure AD). They control who can access what under what conditions. They are primarily focused on securing access to applications and services, not on restricting the provisioning of Azure resources like virtual machines based on region or size.
B. Azure Policy: Azure Policy is a service in Azure that allows you to create, assign, and manage policies to enforce different rules and effects over your Azure resources. Azure Policy can be used to:
Restrict resource locations: You can define policies that only allow resource deployments in specific Azure regions. For example, you can create a policy that only allows virtual machines to be created in “East US” and “West US”.
Restrict resource SKUs (sizes): You can define policies that only allow the creation of specific virtual machine sizes. For example, you can create a policy that only allows “Standard_DS1_v2” and “Standard_DS2_v2” virtual machine sizes.
Enforce other configurations: Azure Policy can also enforce tagging, naming conventions, and various other resource configurations.
Azure Policy is specifically designed to govern and control the creation and configuration of Azure resources to ensure compliance and standardization.
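As an illustration, the two restrictions can be expressed as policy rules like the following. This is a sketch only: the rules are written as Python dicts that mirror the Azure Policy JSON rule schema, the region and SKU values are examples, and the built-in “Allowed locations” and “Allowed virtual machine size SKUs” policies cover the same ground.

```python
# Deny any resource deployed outside the approved regions.
allowed_locations_rule = {
    "if": {"not": {"field": "location", "in": ["eastus", "westus"]}},
    "then": {"effect": "deny"},
}

# Deny any virtual machine whose SKU is not on the approved list.
allowed_vm_sizes_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {
                "not": {
                    "field": "Microsoft.Compute/virtualMachines/sku.name",
                    "in": ["Standard_DS1_v2", "Standard_DS2_v2"],
                }
            },
        ]
    },
    "then": {"effect": "deny"},
}
```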
C. Azure Resource Manager templates (ARM templates): ARM templates are Infrastructure-as-Code (IaC) files used to define and deploy Azure resources. While you can define the region and size of virtual machines within an ARM template, ARM templates themselves do not enforce restrictions. Developers could still deploy virtual machines using other methods (like the Azure portal or Azure CLI) and bypass the restrictions defined in the ARM template unless there is an enforcement mechanism in place. ARM templates are a tool for deployment, not for policy enforcement.
D. Role-Based Access Control (RBAC): RBAC controls who has access to what Azure resources and what actions they can perform. You can use RBAC to grant developers the “Virtual Machine Contributor” role, which allows them to create and manage virtual machines. However, RBAC alone does not provide the capability to restrict the region or size of virtual machines that can be created. RBAC controls permissions at a broader level (e.g., create virtual machines), but not the specific properties of those resources (region, size).
Conclusion:
Azure Policy is the most appropriate solution because it is specifically designed to enforce organizational standards and compliance requirements on Azure resources. It directly addresses both requirements: restricting virtual machine creation to specific regions and specific sizes.
Final Answer: B. Azure Policy
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription named Project1. Only a group named Project1admins is assigned roles in the Project1 subscription. The Project1 subscription contains all the resources for an application named Application1.
Your company is developing a new application named Application2. The members of the Application2 development team belong to an Azure Active Directory (Azure AD) group named App2Dev.
You identify the following requirements for Application2:
✑ The members of App2Dev must be prevented from changing the role assignments in Azure.
✑ The members of App2Dev must be able to create new Azure resources required by Application2.
✑ All the required role assignments for Application2 will be performed by the members of Project1admins.
You need to recommend a solution for the role assignments of Application2.
Solution: Create a new Azure subscription named Project2. Assign Project1admins the User Access Administrator role for the Project2 subscription. Assign App2Dev the Owner role for the Project2 subscription.
Does this meet the goal?
A. Yes
B. No
Let’s analyze the requirements and the proposed solution step by step.
Requirements:
Requirement 1: Prevent App2Dev from changing role assignments. The members of App2Dev should not be able to modify who has access to Application2 resources. This is about limiting their permissions related to Azure RBAC (Role-Based Access Control).
Requirement 2: App2Dev must be able to create new Azure resources. The members of App2Dev need to be able to deploy and manage the Azure resources necessary for Application2. This means they need sufficient permissions to create and manage resources like virtual machines, storage accounts, web apps, etc.
Requirement 3: Role assignments done by Project1admins. All role assignments for Application2 should be exclusively managed by the members of Project1admins. This means App2Dev should not have permissions to manage role assignments.
Proposed Solution:
Action 1: Create a new Azure subscription named Project2. This creates a separate administrative boundary for Application2. It helps with organization, billing, and resource isolation, but by itself it does not address the role assignment requirements.
Action 2: Assign Project1admins the User Access Administrator role for the Project2 subscription. The User Access Administrator role allows Project1admins to manage access to Azure resources within the Project2 subscription. This role is specifically designed for managing role assignments, so Project1admins will be able to grant and revoke access to resources in Project2 (see the sketch after this list).
Action 3: Assign App2Dev the Owner role for the Project2 subscription. The Owner role is a built-in Azure RBAC role that grants full access to manage all resources, including the ability to assign roles in Azure RBAC. This is a very powerful role.
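For illustration, Action 2 could be performed with the Azure SDK for Python roughly as follows. This is a hedged sketch, not the only method: the subscription ID and group object ID are placeholders, and the GUID is the well-known definition ID of the built-in User Access Administrator role.

```python
# pip install azure-identity azure-mgmt-authorization
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

sub_id = "<project2-subscription-id>"  # placeholder
scope = f"/subscriptions/{sub_id}"
client = AuthorizationManagementClient(DefaultAzureCredential(), sub_id)

# 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 is the well-known definition ID
# of the built-in User Access Administrator role.
uaa_role_id = (
    f"{scope}/providers/Microsoft.Authorization/roleDefinitions/"
    "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each role assignment name must be a unique GUID
    RoleAssignmentCreateParameters(
        role_definition_id=uaa_role_id,
        principal_id="<Project1admins-object-id>",  # placeholder
    ),
)
```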
Evaluation of the Solution against the Requirements:
Requirement 1: Prevent App2Dev from changing role assignments? NO. The Owner role’s wildcard (*) action includes Microsoft.Authorization/roleAssignments/write, which grants the ability to manage role assignments. By assigning the Owner role to App2Dev, they can change role assignments within the Project2 subscription. This directly violates Requirement 1.
Requirement 2: App2Dev must be able to create new Azure resources? YES. The Owner role grants all permissions, including the ability to create and manage all types of Azure resources. So, App2Dev members will be able to create new resources in Project2.
Requirement 3: Role assignments done by Project1admins? NO. While Project1admins have the User Access Administrator role and can manage role assignments, the solution also grants the Owner role to App2Dev, which also allows App2Dev to manage role assignments. Requirement 3 implies that only Project1admins should manage role assignments, and the solution fails to enforce this exclusivity because App2Dev, with the Owner role, also gains this capability.
Conclusion:
The proposed solution fails to meet Requirement 1 and Requirement 3. By granting the Owner role to App2Dev, the solution inadvertently allows App2Dev to manage role assignments, which is explicitly against the stated requirements. While App2Dev gains the ability to create resources (Requirement 2), the solution compromises the access control requirements.
Therefore, the proposed solution does not meet the goal.
Final Answer: No
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription that contains a resource group named RG1.
You create an Azure Active Directory (Azure AD) group named ResearchUsers that contains the user accounts of all researchers.
You need to recommend a solution that meets the following requirements:
✑ The researchers must be allowed to create Azure virtual machines.
✑ The researchers must only be able to create Azure virtual machines by using specific Azure Resource Manager templates.
Solution: On RG1, assign the Contributor role to the ResearchUsers group. Create a custom Azure Policy definition and assign the policy to RG1.
Does this meet the goal?
A. Yes
B. No
Let’s break down the requirements and the proposed solution to determine if it meets the goal.
Requirements:
Requirement 1: Researchers must be allowed to create Azure virtual machines.
Requirement 2: Researchers must only be able to create Azure virtual machines by using specific Azure Resource Manager templates.
Proposed Solution:
Step 1: On RG1, assign the Contributor role to the ResearchUsers group.
The Contributor role is a built-in Azure RBAC role that grants permissions to manage most resources, including creating and managing virtual machines within the scope it’s assigned to (in this case, Resource Group RG1).
Does this meet Requirement 1? Yes, assigning the Contributor role to ResearchUsers on RG1 will allow researchers to create Azure virtual machines within RG1.
Step 2: Create a custom Azure Policy definition and assign the policy to RG1.
Azure Policy allows you to define rules and effects that are enforced when Azure resources are created or modified. You can use Azure Policy to control various aspects of resource configurations.
Does this meet Requirement 2? Azure Policy cannot directly enforce the use of specific Azure Resource Manager templates. Azure Policy evaluates the properties of a resource when a create or update request is submitted and can then enforce rules based on those properties (e.g., allowed VM sizes, allowed regions, required tags), but it has no visibility into which template, if any, produced the request.
However, Azure Policy can restrict the properties of virtual machines that can be created. You could create a custom Azure Policy that, for example:
Only allows specific VM sizes.
Only allows VMs to be created in specific regions.
Only allows VMs to be created from specific images.
Requires specific tags.
By carefully crafting the Azure Policy, you can constrain the characteristics of the virtual machines that researchers can create. If the “specific Azure Resource Manager templates” in Requirement 2 deploy virtual machines with certain predefined characteristics (specific sizes, regions, images, and so on), then Azure Policy can enforce those same characteristics. It does not enforce template usage directly, but it restricts VM creation to configurations consistent with those templates. A sketch of such a rule follows.
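For example, a rule that pins VM creation to a single image publisher, one of the characteristics a template would typically fix, might look like this (a sketch written as a Python dict mirroring the policy JSON rule schema; the publisher value is illustrative):

```python
# Deny any virtual machine not built from the approved publisher's images.
allowed_image_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {
                "not": {
                    "field": "Microsoft.Compute/virtualMachines/imagePublisher",
                    "equals": "MicrosoftWindowsServer",
                }
            },
        ]
    },
    "then": {"effect": "deny"},
}
```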
Evaluation:
Requirement 1 is met: The Contributor role grants the necessary permissions.
Requirement 2 is partially met, but not in a strict sense: Azure Policy cannot force users to use specific ARM templates. However, Azure Policy can enforce constraints on VM properties that would effectively limit the type of VMs researchers can create, guiding them towards configurations consistent with specific templates.
Conclusion:
While Azure Policy cannot literally enforce the use of specific ARM templates, it can enforce the configurations those templates define. In practice, the combination of the Contributor role and Azure Policy guides researchers toward creating VMs that meet predefined specifications, which is likely the intent behind requiring “specific ARM templates.”
In the context of a certification question, the most likely intended answer is Yes: the goal is to control what kind of VMs are created, and Azure Policy enforces the properties of those VMs even though it does not strictly enforce the method of creation. Under an extremely literal reading, where only deployments through specific templates are allowed and all other methods are blocked, the answer would be No; in a practical governance scenario, however, using Azure Policy to control VM properties is a common and effective approach.
Final Answer: Yes
HOTSPOT -
You are building an application that will run in a virtual machine (VM). The application will use Managed Service Identity (MSI).
The application uses Azure Key Vault, Azure SQL Database, and Azure Cosmos DB.
You need to ensure the application can use secure credentials to access these services.
Which authorization methods should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Functionality
Azure Key Vault:
Hash-based message authentication code (HMAC)
Managed Service Identity (MSI)
Role-Based Access Controls (RBAC)
HTTPS encryption
Azure SQL:
Hash-based message authentication code (HMAC)
Managed Service Identity (MSI)
Role-Based Access Controls (RBAC)
HTTPS encryption
Cosmos DB:
Hash-based message authentication code (HMAC)
Managed Service Identity (MSI)
Role-Based Access Controls (RBAC)
HTTPS encryption
The question is about using Managed Service Identity (MSI) for an application running in a VM to securely access Azure Key Vault, Azure SQL Database, and Azure Cosmos DB. MSI allows Azure services to authenticate to other Azure services without storing credentials in code. We need to identify the appropriate authorization methods for each service when using MSI.
Azure Key Vault:
Managed Service Identity (MSI): This is the core technology being used. MSI enables the VM’s identity to be recognized by Azure services.
Role-Based Access Controls (RBAC): Azure Key Vault uses access policies or, more recently, Azure RBAC to control access to vaults and their contents (secrets, keys, certificates). When using MSI, you grant the VM’s identity specific permissions in Key Vault through RBAC or access policies. For example, you would grant the identity the “Key Vault Secrets User” role (or the equivalent access policy permissions) to allow the application to retrieve secrets from Key Vault (see the sketch after this list).
Hash-based message authentication code (HMAC): HMAC is a type of message authentication code obtained by running a cryptographic hash function on the data to be authenticated and a shared secret key. While HMAC can be used for authentication in some contexts, it is not the primary authorization method used with MSI for Azure Key Vault access.
HTTPS encryption: HTTPS is used to secure communication channels, ensuring data is encrypted in transit. While important for security, HTTPS is not an authorization method itself.
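A minimal sketch of this pattern with the Azure SDK for Python (the vault URL and secret name are hypothetical, and the VM’s identity is assumed to already hold a Key Vault role or access policy):

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# On an Azure VM, ManagedIdentityCredential obtains a token for the VM's
# system-assigned identity from the instance metadata endpoint.
credential = ManagedIdentityCredential()

# Vault and secret names are placeholders for illustration.
client = SecretClient(vault_url="https://myvault.vault.azure.net", credential=credential)
secret = client.get_secret("sql-connection-string")
print(secret.value)
```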
Azure SQL Database:
Managed Service Identity (MSI): As with Key Vault, MSI is the core technology for authentication. For Azure SQL Database, this is specifically referred to as “Azure AD authentication for Azure SQL Database”.
Role-Based Access Controls (RBAC): Azure RBAC governs management-plane permissions for Azure AD identities (including MSIs) on the SQL server and database resources. For data access, you create a contained database user mapped to the MSI identity inside the database and grant it database roles, which controls what the application can do within the SQL database (see the sketch after this list).
Hash-based message authentication code (HMAC): HMAC is not the standard authorization method for Azure SQL Database access using MSI or Azure AD authentication.
HTTPS encryption: HTTPS is essential for securing the connection to Azure SQL Database, but it is not an authorization method.
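A minimal sketch of token-based access from the VM (server and database names are hypothetical, and the MSI identity is assumed to be mapped to a contained database user):

```python
# pip install azure-identity pyodbc
import struct

import pyodbc
from azure.identity import ManagedIdentityCredential

credential = ManagedIdentityCredential()
token = credential.get_token("https://database.windows.net/.default")

# Pack the token the way the ODBC driver expects it
# (SQL_COPT_SS_ACCESS_TOKEN, connection attribute 1256).
token_bytes = token.token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

# Connection details are placeholders for illustration.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb",
    attrs_before={1256: token_struct},
)
```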
Azure Cosmos DB:
Managed Service Identity (MSI): Cosmos DB supports using Managed Identities for Azure Resources (MSI) for authentication.
Role-Based Access Controls (RBAC): Cosmos DB supports role-based access control for data-plane operations (the newer model). You can assign data-plane RBAC roles to the VM’s MSI identity to control access to Cosmos DB resources such as databases, containers, and items; for example, a role that allows reading and writing items in a specific container (see the sketch after this list).
Hash-based message authentication code (HMAC): Cosmos DB also supports authentication using master keys or resource tokens, which are based on HMAC. However, when using MSI, RBAC is the preferred and more secure method for authorization, especially for service-to-service authentication.
HTTPS encryption: HTTPS is crucial for securing communication with Cosmos DB, but it’s not an authorization method.
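A minimal sketch of MSI-based access to Cosmos DB (account, database, and container names are hypothetical, the item assumes a /category partition key, and the identity is assumed to hold a data-plane RBAC role such as the built-in “Cosmos DB Built-in Data Contributor”):

```python
# pip install azure-identity azure-cosmos
from azure.identity import ManagedIdentityCredential
from azure.cosmos import CosmosClient

credential = ManagedIdentityCredential()

# Passing an Azure AD credential instead of the account key means no
# HMAC-based master key ever touches the application.
client = CosmosClient(url="https://myaccount.documents.azure.com:443/", credential=credential)
container = client.get_database_client("appdb").get_container_client("items")
container.upsert_item({"id": "1", "category": "demo"})
```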
Therefore, the correct selections are:
Azure Key Vault: Managed Service Identity (MSI), Role-Based Access Controls (RBAC)
Azure SQL: Managed Service Identity (MSI), Role-Based Access Controls (RBAC)
Cosmos DB: Managed Service Identity (MSI), Role-Based Access Controls (RBAC)