test4-az301 Flashcards

1
Q

You have an on-premises network and an Azure subscription. The on-premises network has several branch offices.
A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices.
You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible.
What should you include in the recommendation?
A. a Recovery Services vault and Azure Backup
B. an Azure file share and Azure File Sync
C. Azure blob containers and Azure File Sync
D. a Recovery Services vault and Windows Server Backup

A

The correct answer is B. an Azure file share and Azure File Sync.

Explanation

Here’s why this is the best solution and why the others aren’t as suitable:

Azure File Share and Azure File Sync:

How it works:

You create an Azure file share in a storage account.

Azure File Sync is used to synchronize files between your on-premises file server (VM1) and the Azure file share.

You can then install the Azure File Sync agent on file servers in other branch offices and configure them to use the same Azure file share.

Azure File Sync acts as a cache, keeping frequently accessed files locally at each branch office.

Why it’s the best solution:

Fast Access: Users in other branch offices will have fast, local access to cached copies of the files.

Availability: If the Toronto office becomes unavailable, the files are still accessible from the Azure file share and the cached copies at other locations.

Centralized Management: You have a central copy of your files in Azure, making backup and management easier.

Other Options:

A. a Recovery Services vault and Azure Backup: Azure Backup is primarily for backup and restore, not for providing fast, continuous access to files in a failover scenario. Restoring from a backup would take time.

C. Azure blob containers and Azure File Sync: Azure Blob storage is designed for unstructured data and does not provide SMB file shares. Azure File Sync works only with Azure file shares; it cannot be configured to sync to blob containers, so this combination does not fit the use case.

D. a Recovery Services vault and Windows Server Backup: Similar to option A, this is a backup and restore solution, not designed for providing quick access to files in a different location if the primary location is unavailable.

How to Implement the Solution:

Create an Azure Storage Account: In your Azure subscription, create a storage account.

Create an Azure File Share: Within the storage account, create a file share. (This step and the previous one are scripted in the sketch after this list.)

Deploy Azure File Sync:

Create a Storage Sync Service resource in Azure.

Create a sync group within the Storage Sync Service.

Add your Azure file share as a cloud endpoint to the sync group.

Install the Azure File Sync agent on VM1 in Toronto.

Register VM1 with the Storage Sync Service.

Add VM1 as a server endpoint to the sync group.

Optional: Enable Cloud Tiering: On the server endpoint (VM1), you can configure cloud tiering, which will automatically tier less frequently accessed files to Azure, freeing up space on the local server while still keeping them accessible.

Branch Office Setup:

Install the Azure File Sync agent on file servers in other branch offices.

Register these servers with the Storage Sync Service.

Add them as server endpoints to the same sync group.
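
The first two steps above (creating the storage account and the file share) can be scripted. This is a minimal sketch, assuming the azure-identity and azure-mgmt-storage Python packages; the subscription ID, resource group, account name, and share name are placeholders, and parameter shapes can vary slightly between SDK versions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"      # placeholder
RESOURCE_GROUP = "rg-filesync"             # hypothetical resource group
ACCOUNT_NAME = "stfilesyncdemo001"         # hypothetical, must be globally unique
LOCATION = "canadacentral"

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Step 1: create the storage account that will hold the Azure file share.
poller = client.storage_accounts.begin_create(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    {
        "location": LOCATION,
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"},
    },
)
poller.result()  # wait for the account to be provisioned

# Step 2: create the file share that Azure File Sync will use as the cloud endpoint.
client.file_shares.create(RESOURCE_GROUP, ACCOUNT_NAME, "corpshare", {})

The Storage Sync Service, sync group, and server endpoints (the remaining steps) are typically configured in the portal or with Azure PowerShell and are not shown here.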

Benefits:

High Availability: Files are accessible even if the Toronto office is down.

Fast Access: Branch offices have local cached copies of frequently used files.

Reduced Bandwidth: Only changes are synced between locations, minimizing bandwidth usage.

Centralized Backup: Your files are backed up in Azure.

2
Q

A company deploys Azure Active Directory (Azure AD) Connect to synchronize identity information from their on-premises Active Directory Domain Services (AD DS) directory to their Azure AD tenant. The identity information that is synchronized includes user accounts, credential hashes for authentication (password sync), and group membership. The company plans to deploy several Windows and Linux virtual machines (VMs) to support their applications.
The VMs have the following requirements:
✑ Support domain join, LDAP read, LDAP bind, NTLM and Kerberos authentication, and Group Policy.
✑ Allow users to sign in to the domain using their corporate credentials and connect remotely to the VM by using Remote Desktop.
You need to support the VM deployment.
Which service should you use?
A. Azure AD Domain Services
B. Azure AD Privileged Identity Management
C. Azure AD Managed Service Identity
D. Active Directory Federation Services (AD FS)

A

The correct answer is A. Azure AD Domain Services.

Explanation:

Here’s why Azure AD Domain Services is the right choice and why the other options are not suitable:

Azure AD Domain Services (Azure AD DS):

What it is: Azure AD DS provides managed domain services such as domain join, Group Policy, LDAP, and Kerberos/NTLM authentication that are fully compatible with Windows Server Active Directory. It essentially extends your Azure AD tenant to provide these traditional domain services.

Why it’s the right choice:

Domain Join: Allows you to join your Windows and Linux VMs to a managed domain.

LDAP, NTLM, Kerberos: Supports traditional authentication protocols needed by many applications.

Group Policy: You can manage your VMs using Group Policy, just like in an on-premises AD environment.

User Credentials: Users can sign in with their existing corporate credentials synchronized from the on-premises AD via Azure AD Connect.

Remote Desktop: Users can connect remotely using Remote Desktop Services (RDS), which can be integrated with Azure AD DS for authentication.

Simplified Management: It’s a managed service, so you don’t have to worry about patching, monitoring, or backing up domain controllers.
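
To illustrate the LDAP bind and NTLM support listed above, here is a minimal sketch using the open-source ldap3 Python library against a hypothetical managed domain; the domain name, account, and secure LDAP (LDAPS) endpoint are assumptions, and secure LDAP must be enabled on the managed domain before this would work.

from ldap3 import Server, Connection, NTLM, ALL

# Hypothetical Azure AD DS managed domain with secure LDAP (LDAPS) enabled.
server = Server("ldaps://aaddscontoso.com:636", use_ssl=True, get_info=ALL)

# Bind with corporate credentials synchronized from on-premises AD via Azure AD Connect.
conn = Connection(server, user="AADDSCONTOSO\\user1", password="<password>", authentication=NTLM)

if conn.bind():
    print("NTLM bind succeeded; the managed domain behaves like classic AD DS for LDAP clients.")
    conn.unbind()
else:
    print("Bind failed:", conn.result)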

Other Options:

B. Azure AD Privileged Identity Management (PIM): PIM is for managing, controlling, and monitoring access to important resources in your Azure AD organization. It’s not related to providing domain services for VMs.

C. Azure AD Managed Service Identity (MSI): MSI (now called Managed Identities for Azure resources) provides an identity for applications to use when connecting to resources that support Azure AD authentication. It’s not used for providing domain services to VMs.

D. Active Directory Federation Services (AD FS): AD FS is an on-premises solution for federated identity and single sign-on (SSO). While it can be used to extend your on-premises AD to the cloud, it requires significant infrastructure and management overhead. It’s not the best choice for this scenario where a managed domain service is preferred.

3
Q

What should you include in the identity management strategy to support the planned changes?
A. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure.
B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.
C. Deploy a new Azure AD tenant for the authentication of new R&D projects.
D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure.

A

The best answer is B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.

Explanation

Here’s why this is the most suitable approach and why the other options are less effective:

B. Deploy domain controllers for corp.fabrikam.com to virtual networks in Azure.

Why it’s the best approach:

Supports Hybrid Identity: This approach directly supports the company’s need to maintain a hybrid identity model, leveraging their existing corp.fabrikam.com Active Directory.

Extends Existing Infrastructure: By deploying domain controllers for the existing corp.fabrikam.com domain into Azure virtual networks, you effectively extend the on-premises Active Directory infrastructure into Azure. This provides a consistent authentication and authorization experience for both on-premises and cloud resources.

Facilitates Migration: This is a common and recommended first step when migrating on-premises workloads to Azure. It allows for a smoother transition and minimizes disruption to users and applications.

Authentication for Azure VMs: The domain controllers in Azure can provide authentication services for the planned R&D virtual machines, allowing them to be joined to the corp.fabrikam.com domain.

Redundancy and Disaster Recovery: Having domain controllers in Azure also provides redundancy and can be part of a disaster recovery strategy.

Why other options are not as good:

A. Move all the domain controllers from corp.fabrikam.com to virtual networks in Azure:

Risky and Disruptive: Moving all domain controllers to Azure at once is a major change and could be very disruptive to the on-premises environment. It’s generally recommended to extend your AD to Azure gradually rather than doing a complete cutover.

On-Premises Dependency: If all domain controllers are in Azure, the on-premises environment becomes entirely dependent on the Azure connection for authentication. Any network issues between on-premises and Azure would severely impact on-premises operations.

C. Deploy a new Azure AD tenant for the authentication of new R&D projects:

Creates Identity Silos: Creating a separate Azure AD tenant would create an identity silo, making it harder to manage users and access across the organization. It would also complicate collaboration between R&D and the rest of the company.

Doesn’t Leverage Existing Infrastructure: It doesn’t leverage the company’s investment in their existing corp.fabrikam.com Active Directory.

D. Deploy domain controllers for the rd.fabrikam.com forest to virtual networks in Azure:

Unnecessary Complexity: Creating a separate forest (rd.fabrikam.com) introduces unnecessary complexity in terms of management, trusts, and user experience. It’s better to integrate with the existing corp.fabrikam.com domain unless there are very specific and compelling reasons to create a new forest.

4
Q

You have a hybrid deployment of Azure Active Directory (Azure AD).
You need to recommend a solution to ensure that the Azure AD tenant can be managed only from the computers on your on-premises network.
What should you include in the recommendation?
A. Azure AD roles and administrators
B. a conditional access policy
C. Azure AD Application Proxy
D. Azure AD Privileged Identity Management

A

The correct answer is B. a conditional access policy.

Explanation:

Conditional Access Policies: Conditional access policies in Azure AD are powerful tools for controlling access to your cloud resources. They allow you to define specific conditions under which access is granted or denied. In this case, you can create a policy that allows access to Azure AD management only from specific IP address ranges or locations that correspond to your on-premises network.

How to Implement:

Define a Named Location: In Azure AD, under Security -> Conditional Access -> Named locations, create a new location that represents your on-premises network. You can define this location by specifying the public IP address ranges used by your on-premises network.

Create a Conditional Access Policy:

Go to Security -> Conditional Access -> Policies and create a new policy.

Users and groups: Select the users or groups this policy should apply to (e.g., all users, or specific admin roles).

Cloud apps or actions: Select “All cloud apps” or specifically target apps related to Azure AD management.

Conditions:

Locations: Select “Any location” and then exclude the named location you created in step 1 (your on-premises network).

Grant: Select “Block access”.
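
The steps above can also be performed programmatically through the Microsoft Graph conditional access API. This is a hedged sketch using the requests library; token acquisition is omitted, and the named location ID is a placeholder for the location created in step 1.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"  # acquisition not shown
ONPREM_NAMED_LOCATION_ID = "<named-location-id>"  # the named location from step 1

policy = {
    "displayName": "Block Azure AD management outside corporate network",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode, then enforce
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": [ONPREM_NAMED_LOCATION_ID],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy", resp.json().get("id"))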

Why Other Options Are Not Suitable:

A. Azure AD roles and administrators: Azure AD roles define permissions for managing Azure AD resources. While important for controlling what users can do, they don’t restrict where they can manage from.

C. Azure AD Application Proxy: Application Proxy is used to provide secure remote access to on-premises web applications. It’s not relevant to restricting access to Azure AD management.

D. Azure AD Privileged Identity Management (PIM): PIM is used to manage, control, and monitor access to privileged roles in Azure AD. It helps with just-in-time access and auditing but doesn’t restrict access based on location.

5
Q

You have an Azure subscription that contains an Azure Cosmos DB account. You need to recommend a solution to generate an alert from Azure Log Analytics when a request charge for a query exceeds 50 request units more than 20 times within a 15-minute window. What should you recommend?

A. Create a search query to identify when requestCharge_s exceeds 50.

Configure an alert threshold of 20 and a period of 15.

B. Create a search query to identify when duration_s exceeds 20 and requestCharge_s exceeds 50.

Configure a period of 15.

C. Create a search query to identify when requestCharge_s exceeds 20.

Configure a period of 15 and a frequency of 20.

D. Create a search query to identify when duration_s exceeds 20.

Configure a period of 15.

A

The correct answer is A. Create a search query to identify when requestCharge_s exceeds 50. Configure an alert threshold of 20 and a period of 15.

Explanation

Here’s how we break down the requirements and why this solution is the correct one:

Requirements:

Identify Queries Exceeding 50 RUs: We need to find instances where the request charge (requestCharge_s) for a Cosmos DB query is greater than 50.

Threshold of 20 Times: The alert should only trigger if this happens more than 20 times.

Within a 15-Minute Window: The 20+ occurrences must be within a 15-minute period.

Solution Breakdown (Option A):

Create a search query to identify when requestCharge_s exceeds 50.

This is the core of the solution. We need a Log Analytics query that filters for Cosmos DB operations where requestCharge_s is greater than 50. A sample query would look like this:

AzureDiagnostics
| where Category == "DataPlaneRequests" // Focus on Cosmos DB data plane operations
| where toreal(requestCharge_s) > 50 // Filter for requests exceeding 50 RUs
| summarize count() by bin(TimeGenerated, 1m) // Count occurrences in 1-minute bins
Configure an alert threshold of 20

This directly addresses the requirement of triggering the alert only when the condition (request charge > 50) occurs more than 20 times.

Configure a period of 15

This defines the time window (15 minutes) over which the threshold (20 times) is evaluated. The alert rule runs on its configured frequency (for example, every 5 minutes), counts the results returned by the query over the preceding 15 minutes, and fires when that count exceeds the threshold.
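
Before wiring the query into an alert rule, you can sanity-check it against the workspace. This is a minimal sketch using the azure-monitor-query Python SDK, assuming a placeholder workspace ID; it simply counts matching requests over the same 15-minute window the alert will use.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder workspace GUID

QUERY = """
AzureDiagnostics
| where Category == "DataPlaneRequests"
| where toreal(requestCharge_s) > 50
"""

client = LogsQueryClient(DefaultAzureCredential())

# Look back over the same 15-minute window the alert rule will use.
result = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(minutes=15))

# Count the rows returned; the alert fires when this count exceeds 20.
rows = [row for table in result.tables for row in table.rows]
print(f"{len(rows)} requests over 50 RUs in the last 15 minutes")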

Why Other Options Are Incorrect:

B. Create a search query to identify when duration_s exceeds 20 and requestCharge_s exceeds 50. Configure a period of 15.

The duration_s field represents the duration of the query, and the requirement is focused on the request charge (cost), not the duration. Also, we don’t need to know that duration_s exceeds 20, we only need to know that the count of requestCharge_s exceeds 50 more than 20 times.

C. Create a search query to identify when requestCharge_s exceeds 20. Configure a period of 15 and a frequency of 20.

The query is incorrect (should be requestCharge_s > 50). The frequency setting does not control the number of times the condition must occur; it defines how often the alert rule is evaluated.

D. Create a search query to identify when duration_s exceeds 20. Configure a period of 15.

Similar to option B, this focuses on duration_s, which is not the primary requirement. We are concerned with the cost (request charge) of the queries.

6
Q

You are designing a data protection strategy for Azure virtual machines. All the virtual machines are in the Standard tier and use managed disks. You need to recommend a solution that meets the following requirements:

  • The use of encryption keys is audited.
  • All the data is encrypted at rest always.

You manage the encryption keys, not Microsoft. What should you include in the recommendation?

A. BitLocker Drive Encryption (BitLocker)

B. Azure Storage Service Encryption

C. Client-side Encryption

D. Azure Disk Encryption

A

The correct answer is D. Azure Disk Encryption.

Explanation

Here’s why Azure Disk Encryption is the best solution to meet the requirements and why the other options are not as suitable:

Requirements:

Encryption at Rest: All data on the managed disks must be encrypted when stored (at rest).

Customer-Managed Keys: You, not Microsoft, must manage the encryption keys.

Auditing of Key Usage: The use of encryption keys needs to be auditable.

D. Azure Disk Encryption

How it works: Azure Disk Encryption leverages the BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and data disks of Azure virtual machines. It integrates with Azure Key Vault to help you control and manage the disk encryption keys and secrets.

Why it’s the best solution:

Encryption at Rest: It encrypts the managed disks at rest.

Customer-Managed Keys (CMK): You can choose to manage your encryption keys in Azure Key Vault, giving you full control over key creation, rotation, deletion, and usage.

Auditing: Azure Key Vault provides detailed audit logs of all key operations, including who accessed the keys, when, and for what purpose. This meets the auditing requirement.

Integration with Managed Disks: It seamlessly integrates with Azure managed disks.
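
To illustrate the customer-managed key side, the sketch below creates an RSA key in Key Vault (for example, to serve as a key encryption key for Azure Disk Encryption) using the azure-keyvault-keys Python SDK; the vault URL and key name are placeholders, and the auditing itself comes from enabling the vault's AuditEvent diagnostic logs rather than from this code.

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

VAULT_URL = "https://<your-key-vault>.vault.azure.net"  # placeholder vault

client = KeyClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

# Create an RSA key that Azure Disk Encryption can use as a key encryption key (KEK).
kek = client.create_rsa_key("ade-kek", size=2048)
print("Created key:", kek.id)

# Every operation on this key (wrap, unwrap, get, ...) is recorded in the vault's
# AuditEvent diagnostic logs, which satisfies the auditing requirement.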

Why Other Options Are Not as Suitable:

A. BitLocker Drive Encryption (BitLocker) (without Azure Disk Encryption): While BitLocker can encrypt disks, using it independently on Azure VMs without Azure Disk Encryption does not integrate with Azure Key Vault for key management or auditing. You would have to manage keys within the VM itself, which is less secure and doesn’t provide centralized auditing.

B. Azure Storage Service Encryption (SSE): SSE automatically encrypts data at rest in Azure Storage accounts, but it uses Microsoft-managed keys by default. While you can opt to use customer-managed keys with SSE, it’s not the primary mechanism for encrypting Azure VM disks. Azure Disk Encryption is specifically designed for that purpose.

C. Client-Side Encryption: In client-side encryption, you encrypt the data before sending it to Azure. This means you manage keys completely outside of Azure. While this approach provides a high level of control, it doesn’t directly address the requirement of encrypting data at rest on managed disks within Azure. Additionally, it may add complexity to your application logic.

7
Q

You have 100 servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2012 R2 instances. The instances host databases that have the following characteristics:

  • The largest database is currently 3 TB.
  • None of the databases will ever exceed 4 TB.
  • Stored procedures are implemented by using CLR.

You plan to move all the data from SQL Server to Azure. You need to recommend an Azure service to host the databases. The solution must meet the following requirements:

  • Whenever possible, minimize management overhead for the migrated databases.
  • Minimize the number of database changes required to facilitate the migration.
  • Ensure that users can authenticate by using their Active Directory credentials.

What should you include in the recommendation?

A. Azure SQL Database single databases

B. Azure SQL Database Managed Instance

C. Azure SQL Database elastic pools

D. SQL Server 2016 on Azure virtual machines

A

The correct answer is B. Azure SQL Database Managed Instance.

Explanation

Here’s why Azure SQL Database Managed Instance is the best fit for this scenario and why the other options are less suitable:

Requirements:

Minimize Management Overhead: The solution should be a PaaS (Platform as a Service) offering to reduce administrative tasks like patching, backups, and high availability configuration.

Minimize Database Changes: The migration should require minimal or no changes to the existing database schema, stored procedures, or application code. This is crucial for a smooth transition.

Active Directory Authentication: Users should be able to authenticate using their existing Active Directory credentials.

Database Size: The solution must support databases up to 4 TB.

CLR Stored Procedures: The solution must support the use of CLR (Common Language Runtime) stored procedures.

B. Azure SQL Database Managed Instance

Why it’s the best fit:

PaaS: Managed Instance is a fully managed service, so Microsoft handles OS and SQL Server patching, backups, high availability, and other management tasks.

High Compatibility: It offers near 100% compatibility with on-premises SQL Server, including support for:

Database Size: Supports databases well above the 4 TB requirement (the exact maximum instance storage depends on the service tier and vCore configuration).

CLR: Supports CLR stored procedures.

Cross-Database Queries: Allows queries that span multiple databases within the same instance.

SQL Server Agent: You can use SQL Server Agent jobs for scheduling tasks.

Active Directory Authentication: It supports Azure Active Directory authentication, which can be integrated with your on-premises Active Directory using Azure AD Connect.

Minimal Migration Effort: Due to its high compatibility, migrating databases from SQL Server 2012 R2 to Managed Instance typically requires minimal code changes.
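
As a small illustration of the Azure AD authentication point, the sketch below connects to a hypothetical managed instance with pyodbc and the Microsoft ODBC driver using a synchronized corporate account; the server name (public endpoint format), database, and credentials are placeholders, and the ODBC driver must be installed locally.

import pyodbc

# Hypothetical managed instance public endpoint and synchronized Azure AD account.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=contoso-mi.public.abc123.database.windows.net,3342;"
    "DATABASE=SalesDb;"
    "Authentication=ActiveDirectoryPassword;"
    "UID=user1@contoso.com;"
    "PWD=<password>;"
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute("SELECT @@VERSION;").fetchone()
    print(row[0])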

Why other options are less suitable:

A. Azure SQL Database single databases:

Limited Compatibility: Single databases have some limitations compared to on-premises SQL Server, including:

No CLR Support: Does not support CLR stored procedures.

No Cross-Database Queries: Cannot query across multiple databases.

No SQL Server Agent: No built-in job scheduler.

Database Size: The maximum size of a single database depends on the service tier and purchasing model (commonly 4 TB outside the Hyperscale tier), which leaves little headroom for the 4 TB requirement.

C. Azure SQL Database elastic pools:

Same Limitations as Single Databases: Elastic pools are essentially a group of single databases managed together, so they share the same compatibility limitations as single databases (no CLR, etc.).

Cost Optimization: Elastic pools are good for cost optimization when you have many databases with varying resource utilization, but they don’t address the core compatibility issues in this scenario.

D. SQL Server 2016 on Azure virtual machines:

IaaS, Not PaaS: This is an IaaS (Infrastructure as a Service) solution, meaning you are responsible for managing the operating system, SQL Server, backups, patching, high availability, etc. This does not minimize management overhead.

Unnecessary Upgrade: While you could potentially upgrade to a newer version of SQL Server in a VM, it doesn’t solve the management overhead issue and is not required by the scenario.

8
Q

You plan to create an Azure Cosmos DB account that uses the SQL API. The account will contain data added by a web application. The web application will send data daily. You need to recommend a notification solution that meets the following requirements:

  • Sends email notification when data is received from IoT devices.
  • Minimizes compute cost.

What should you include in the recommendation?

A. Deploy an Azure logic app that has the Azure Cosmos DB connector configured to use a SendGrid action.

B. Deploy a function app that is configured to use the Consumption plan and a SendGrid binding.

C. Deploy an Azure logic app that has a SendGrid connector configured to use an Azure Cosmos DB action.

D. Deploy a function app that is configured to use the Consumption plan and an Azure Event Hubs binding.

A

The best answer is B. Deploy a function app that is configured to use the Consumption plan and a SendGrid binding.

Explanation:

Here’s why this solution is the most suitable and why the others are less effective:

Requirements:

Email Notification on Data Change: Send an email when new data is added to the Cosmos DB account.

Minimize Compute Cost: The solution should be cost-effective, especially since data is only sent daily (implying relatively infrequent triggers).

B. Function App (Consumption Plan) with SendGrid Binding

Why it’s the best solution:

Azure Functions (Consumption Plan): Azure Functions are serverless compute services that allow you to run code on-demand without managing infrastructure. The Consumption plan is ideal for cost optimization because you only pay for the time your function is actually executing. Since data is sent daily, the function will only run briefly each day, minimizing cost.

Cosmos DB Trigger: Azure Functions can be triggered by changes to Cosmos DB data (using the Cosmos DB trigger). This allows the function to automatically execute when new data is received.

SendGrid Binding: Azure Functions has built-in output bindings for SendGrid, simplifying the process of sending emails. You can easily configure the binding to send an email with the relevant data from the Cosmos DB change.

How it works:

Data is written to the Cosmos DB account.

The Cosmos DB trigger detects the new data and invokes the function.

The function code executes, potentially processing or formatting the data.

The SendGrid output binding sends an email notification.
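
A minimal sketch of such a function is shown below, using the Azure Functions Python v2 programming model with a Cosmos DB trigger. Instead of the SendGrid output binding described above, it calls the SendGrid Python SDK directly inside the function, a substitution made here to keep the example self-contained; the decorator parameter names assume the current v2 model and Cosmos DB extension, and the database, container, connection setting, and e-mail addresses are placeholders.

import os

import azure.functions as func
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

app = func.FunctionApp()

# Fires when new documents are written to the monitored container.
@app.cosmos_db_trigger(
    arg_name="documents",
    database_name="IoTData",                  # placeholder database
    container_name="Readings",                # placeholder container
    connection="CosmosDbConnectionSetting",   # app setting holding the connection string
    lease_container_name="leases",
    create_lease_container_if_not_exists=True,
)
def notify_on_new_data(documents: func.DocumentList) -> None:
    message = Mail(
        from_email="alerts@contoso.com",      # placeholder sender
        to_emails="ops@contoso.com",          # placeholder recipient
        subject="New IoT data received",
        plain_text_content=f"{len(documents)} new document(s) were written to Cosmos DB.",
    )
    SendGridAPIClient(os.environ["SENDGRID_API_KEY"]).send(message)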

Why other options are less suitable:

A. Logic App (Cosmos DB Connector to SendGrid Action):

Logic Apps Cost: While Logic Apps are also serverless, their pricing model is based on actions executed. Depending on the complexity of the logic app and the frequency of data changes, it might be more expensive than a Function App on the Consumption plan, especially for infrequent triggers.

Less Efficient for Simple Tasks: Logic Apps are great for more complex workflows and orchestrations. For a simple task like sending an email on a data change, a Function App is often more efficient.

Note that the trigger/action order in this option is correct (the Cosmos DB connector is the trigger and SendGrid is the action); the drawback is cost and efficiency, not design.

C. Logic App (SendGrid Connector to Cosmos DB Action):

Incorrect Order of Actions: This option suggests the wrong order. You need to trigger based on Cosmos DB changes and then send an email (Cosmos DB -> SendGrid), not the other way around. It also has the same cost and efficiency considerations as option A.

D. Function App (Consumption Plan) with Event Hubs Binding:

Unnecessary Complexity: Event Hubs is designed for high-throughput, real-time event ingestion. It’s overkill for this scenario where data is sent daily. Using Event Hubs would add unnecessary complexity and potentially higher cost. The Cosmos DB trigger is the appropriate trigger in this case.

9
Q

You plan to use Azure Site Recovery to protect several on-premises physical server workloads. Each server workload is independent of the other. The workloads are stateless. You need to recommend a failover strategy to ensure that if the on-premises data center fails, the workloads are available in Azure as quickly as possible. Which failover strategy should you include in the recommendation?

A. Latest

B. Latest app-consistent

C. Latest multi-VM processed

D. Latest processed

A

The correct answer is D. Latest processed.

Explanation

Here’s why “Latest processed” is the best failover strategy in this scenario and why the others are less suitable:

Requirements:

Protect Physical Servers: The workloads are running on on-premises physical servers.

Independent, Stateless Workloads: Each server is independent and stateless, meaning there are no dependencies between them, and they don’t maintain any persistent state that needs to be synchronized across multiple servers.

Fastest Recovery: The primary goal is to minimize downtime and have the workloads available in Azure as quickly as possible after a failure.

Failover Strategies in Azure Site Recovery:

A. Latest: This option first processes all the data that has been sent to Site Recovery and creates the most recent possible recovery point before failing over. It provides the lowest RPO (least data loss), but the extra processing at failover time increases the RTO, so it is not the fastest option.

B. Latest app-consistent: This option fails over to the latest application-consistent snapshot. App-consistent snapshots are crucial for applications that need to maintain data consistency (e.g., databases). However, they take longer to create and are not necessary for stateless workloads.

C. Latest multi-VM processed: This option is used when you have multiple VMs that need to be failed over together and kept consistent with each other (e.g., a multi-tier application). This is not relevant in this scenario because the workloads are independent.

D. Latest processed: This option fails over to the latest recovery point that has been fully processed by Site Recovery. This ensures that the data is consistent and ready to be used in Azure, resulting in the fastest possible recovery for independent, stateless workloads.

Why “Latest processed” is the best choice:

Fastest Recovery Time: By using the latest processed recovery point, you minimize the time it takes for the VMs to become operational in Azure after a failover. Site Recovery has already done the necessary processing to ensure data consistency.

Suitable for Stateless Workloads: Since the workloads are stateless, there’s no need for application-consistent snapshots, which add overhead and increase recovery time.

Independent Workloads: The “multi-VM” options are not needed because each server is independent.

10
Q

You plan to move a web application named App1 from an on-premises data center to Azure. App1 depends on a custom framework that is installed on the host server. You need to recommend a solution to host App1 in Azure. The solution must meet the following requirements:

  • App1 must be available to users if an Azure data center becomes unavailable.
  • Costs must be minimized.

What should you include in the recommendation?

A. In two Azure regions, deploy a Traffic Manager profile and a web app.

B. In two Azure regions, deploy a load balancer and a virtual machine scale set.

C. Deploy a load balancer and a virtual machine scale set across two availability zones.

D. In two Azure regions, deploy a load balancer and a web app.

A

The best answer is A. In two Azure regions, deploy a Traffic Manager profile and a web app.

Explanation

Here’s why this solution is the most suitable and why the others are less effective:

Requirements:

High Availability (Data Center Outage): The application must remain available even if an entire Azure data center becomes unavailable.

Cost Minimization: The solution should be cost-effective.

Custom Framework: The application depends on a custom framework installed on the host server.

A. Traffic Manager Profile and Web App in Two Regions

Why it’s the best solution:

High Availability: Deploying the web app to two different Azure regions ensures availability even if one region experiences an outage. Traffic Manager automatically routes traffic to the healthy region.

Web App (Suitable for Custom Frameworks): Azure Web Apps (App Service) let you deploy web applications together with the frameworks they depend on: the framework can be shipped in the deployment package, added as a site extension, or packaged into a custom container. Full server-level installs are not possible in the App Service sandbox, so a custom container is the usual approach for host-installed dependencies.

Traffic Manager (Geo-Redundancy): Traffic Manager is a DNS-based load balancer that directs user traffic to different endpoints based on various routing methods (e.g., priority, performance, geographic). It provides the geo-redundancy needed to handle a regional outage.

Cost-Effective (Compared to VMs): Web Apps are generally more cost-effective than managing virtual machines, as they are a PaaS (Platform as a Service) offering. You don’t have to manage the underlying OS or infrastructure.

How it works:

You deploy your web app (including the custom framework) to an App Service plan in each of the two Azure regions.

You create a Traffic Manager profile and add the two App Service instances as endpoints.

You configure a routing method (e.g., “Priority” for active/passive failover or “Performance” to route users to the closest region).

Traffic Manager monitors the health of the endpoints and directs traffic accordingly.
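
Because Traffic Manager works at the DNS level, you can observe which regional endpoint is being served simply by resolving the profile's host name; the tiny sketch below does this with Python's standard library, using a hypothetical profile name.

import socket

# Hypothetical Traffic Manager profile; DNS resolution returns the address of
# whichever regional App Service endpoint Traffic Manager currently directs traffic to.
host = "app1-prod.trafficmanager.net"
print(host, "->", socket.gethostbyname(host))

If the primary endpoint becomes unhealthy, repeating the lookup after the DNS TTL expires returns the address of the secondary region.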

Why other options are less suitable:

B. Load Balancer and Virtual Machine Scale Set in Two Regions:

Higher Cost: Managing virtual machine scale sets (VMSS) is typically more expensive than using Web Apps, as you’re paying for the underlying VMs even when they are not fully utilized.

More Management Overhead: VMSS requires more management overhead (OS patching, updates, etc.) compared to the PaaS nature of Web Apps.

Custom Framework Installation: While you can install custom frameworks on VMs, it adds to the management complexity.

C. Load Balancer and Virtual Machine Scale Set Across Two Availability Zones:

Not Sufficient for Regional Outage: Availability zones are within a single region. They protect against the failure of an individual data center (zone) in that region, but not against an outage of the entire region.

D. Load Balancer and Web App in Two Regions:

Incorrect Load Balancer Type: An Azure Load Balancer operates within a single region and is typically used for load balancing across VMs. It’s not designed for cross-region traffic management like Traffic Manager.

11
Q

You have an on-premises deployment of MongoDB. You plan to migrate MongoDB to an Azure Cosmos DB account that uses the MongoDB API. You need to recommend a solution for migrating MongoDB to Azure Cosmos DB. What should you include in the recommendation?

A. mongorestore

B. Data Migration Assistant

C. Azure Storage Explorer

D. Azure Cosmos DB Data Migration Tool

A

The correct answer is A. mongorestore.

Explanation

Here’s why mongorestore is the recommended tool for migrating MongoDB to Azure Cosmos DB’s API for MongoDB, and why the other options are not suitable:

A. mongorestore

What it is: mongorestore is a command-line utility that is part of the MongoDB toolset. It’s specifically designed to import data into a MongoDB database from a backup created by mongodump.

Why it’s the best choice for Cosmos DB’s API for MongoDB:

Compatibility: Azure Cosmos DB’s API for MongoDB is designed to be compatible with the MongoDB wire protocol. This means that tools like mongorestore (and mongodump) can be used to interact with Cosmos DB as if it were a native MongoDB instance.

Efficiency: mongorestore is generally an efficient way to import large datasets into MongoDB.

Simplicity: It’s a relatively straightforward command-line tool to use, especially if you’re already familiar with MongoDB.

Supported by Microsoft: Microsoft’s own documentation recommends using mongodump and mongorestore for migrating to Azure Cosmos DB’s API for MongoDB.

Typical Workflow:

mongodump: Use the mongodump utility to create a backup (dump) of your on-premises MongoDB database.

mongorestore: Use mongorestore to import the backup data into your Azure Cosmos DB account, specifying the Cosmos DB connection string.
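
Because the account speaks the MongoDB wire protocol, the same client libraries that work against on-premises MongoDB also work against Cosmos DB, which makes it easy to verify the import. This is a minimal sketch using pymongo; the connection string, database, and collection names are placeholders.

from pymongo import MongoClient

# Connection string from the Cosmos DB account's "Connection String" blade (placeholder here).
client = MongoClient("<cosmos-db-mongodb-connection-string>")

db = client["inventory"]        # hypothetical database name
collection = db["products"]     # hypothetical collection name

# After running mongodump on-premises and mongorestore against Cosmos DB,
# compare this count with the source collection to confirm the import.
print("Documents migrated:", collection.count_documents({}))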

Why other options are not suitable:

B. Data Migration Assistant (DMA): DMA is primarily used for assessing and migrating SQL Server databases to Azure SQL Database or Azure SQL Managed Instance. It’s not designed for MongoDB migrations.

C. Azure Storage Explorer: Storage Explorer is a tool for managing Azure Storage resources (blobs, files, queues, tables). It’s not used for migrating databases to Cosmos DB.

D. Azure Cosmos DB Data Migration Tool: The classic open-source Data Migration Tool targets the Cosmos DB SQL (Core) API rather than the MongoDB API, and it is no longer the recommended approach. For the MongoDB API, Microsoft recommends native MongoDB tooling (mongodump/mongorestore) or Azure Database Migration Service.

Important Considerations When Using mongorestore:

Connection String: Make sure you use the correct connection string for your Azure Cosmos DB account (with the MongoDB API enabled). You can find this in the Azure portal under your Cosmos DB account’s “Connection String” blade.

Throughput: Provision sufficient throughput (Request Units - RUs) on your Cosmos DB account to handle the import process efficiently. You can scale up the throughput during the migration and then scale it down afterward.

Indexing: Consider creating indexes in your Cosmos DB collections after the data import is complete. This can significantly speed up the import process.

12
Q

You need to recommend a data storage solution that meets the following requirements:

  • Ensures that the application can access the data by using a REST connection.
  • Hosts 20 independent tables of varying sizes and usage patterns.
  • Automatically replicates the data to a second Azure region.
  • Minimizes costs.

What should you recommend?

A. an Azure SQL Database elastic database pool that uses active geo-replication

B. tables in an Azure Storage account that uses geo-redundant storage (GRS)

C. tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS)

D. an Azure SQL database that uses active geo-replication

A

The correct answer is B. tables in an Azure Storage account that uses geo-redundant storage (GRS).

Explanation

Let’s analyze the requirements and why this solution is the best fit:

Requirements:

REST API Access: The application needs to access the data using a REST connection.

20 Independent Tables: The solution must support hosting multiple independent tables.

Automatic Replication to Second Region: Data needs to be automatically replicated to a secondary Azure region for disaster recovery purposes.

Minimize Costs: The solution should be cost-effective.

B. Tables in an Azure Storage Account with Geo-Redundant Storage (GRS)

Why it’s the best solution:

REST API: Azure Table Storage provides a RESTful API for accessing and manipulating data.

Tables: Azure Storage accounts support Table Storage, which allows you to store structured data in tables. You can create 20 independent tables within a single storage account.

Geo-Redundant Storage (GRS): GRS automatically replicates your data to a secondary region hundreds of miles away from the primary region. This provides protection against regional outages.

Cost-Effective: Azure Table Storage is generally a very cost-effective storage option, especially when compared to relational databases like Azure SQL Database. GRS is also relatively inexpensive compared to other replication options like RA-GRS.
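
The REST access mentioned above is most easily consumed through the azure-data-tables SDK, which calls the Table service REST API under the hood. This is a minimal sketch that creates one of the 20 tables and writes an entity; the connection string, table name, and entity values are placeholders.

from azure.data.tables import TableServiceClient

# Connection string of the GRS storage account (placeholder).
service = TableServiceClient.from_connection_string("<storage-account-connection-string>")

# One of the 20 independent tables.
table = service.create_table_if_not_exists("SalesOrders")

table.upsert_entity({
    "PartitionKey": "2024-05",
    "RowKey": "order-0001",
    "Amount": 129.90,
    "Region": "EastUS",
})
print("Entity written; GRS replicates it to the paired region automatically.")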

Why other options are less suitable:

A. Azure SQL Database elastic database pool with active geo-replication:

Cost: Azure SQL Database is significantly more expensive than Azure Table Storage. Elastic pools are cost-effective for managing multiple databases with varying workloads, but they are still more expensive than Table Storage.

Active Geo-Replication Cost: Active geo-replication adds to the cost of Azure SQL Database.

REST API: Azure SQL Database exposes REST APIs mainly for management; data access normally goes over TDS (SQL) connections, so it does not offer the straightforward REST data access that Table storage provides.

C. Tables in an Azure Storage account that use read-access geo-redundant storage (RA-GRS):

Higher Cost: RA-GRS is more expensive than GRS because it allows read access to the secondary region. The requirements only specify the need for replication, not read access to the secondary region.

D. An Azure SQL database that uses active geo-replication:

Cost: As mentioned earlier, Azure SQL Database is a more expensive option than Azure Table Storage.

Single Database: A single Azure SQL database would force all 20 independent tables into one relational database (or require many separate databases at even higher cost), which adds management overhead without addressing the cost requirement.

13
Q

A company named Contoso, Ltd. has an Azure Active Directory (Azure AD) tenant that is integrated with Microsoft Office 365 and an Azure subscription.
Contoso has an on-premises identity infrastructure. The infrastructure includes servers that run Active Directory Domain Services (AD DS), Active Directory
Federation Services (AD FS), Azure AD Connect, and Microsoft Identity Manager (MIM).
Contoso has a partnership with a company named Fabrikam, Inc. Fabrikam has an Active Directory forest and an Office 365 tenant. Fabrikam has the same on-premises identity infrastructure as Contoso.
A team of 10 developers from Fabrikam will work on an Azure solution that will be hosted in the Azure subscription of Contoso. The developers must be added to the Contributor role for a resource in the Contoso subscription.
You need to recommend a solution to ensure that Contoso can assign the role to the 10 Fabrikam developers. The solution must ensure that the Fabrikam developers use their existing credentials to access resources.
What should you recommend?
A. Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.
B. Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.
C. In the Azure AD tenant of Contoso, enable Azure Active Directory Domain Services (Azure AD DS). Create a one-way forest trust that uses selective authentication between the Active Directory forests of Contoso and Fabrikam.
D. In the Azure AD tenant of Contoso, create guest accounts for the Fabrikam developers.

A

The correct answer is D. In the Azure AD tenant of Contoso, create guest accounts for the Fabrikam developers.

Here’s why:

Guest Accounts (Azure AD B2B): Azure Active Directory (Azure AD) business-to-business (B2B) collaboration lets you securely invite external users (guests) into your Azure AD tenant. These guest users can then be assigned roles (like Contributor) within your Azure subscription. The Fabrikam developers can sign in using their existing Fabrikam credentials (via federation with their Office 365 tenant or through a social provider if configured).
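
Inviting the ten developers can be automated with the Microsoft Graph invitations API. This is a hedged sketch using the requests library; token acquisition is omitted, and the developer's e-mail address and the redirect URL are placeholders.

import requests

TOKEN = "<access-token-with-User.Invite.All>"  # acquisition not shown

invitation = {
    "invitedUserEmailAddress": "dev1@fabrikam.com",   # placeholder developer address
    "inviteRedirectUrl": "https://portal.azure.com",
    "sendInvitationMessage": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/invitations",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=invitation,
)
resp.raise_for_status()

# The returned invitedUser object contains the guest account's object ID,
# which can then be used in a Contributor role assignment on the target resource.
print(resp.json()["invitedUser"]["id"])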

Let’s look at why the other options are not the best fit:

A. Configure a forest trust between the on-premises Active Directory forests of Contoso and Fabrikam.

Why it’s not ideal: While a forest trust would enable authentication, it doesn’t directly translate to authorization within the Azure subscription. Azure RBAC (Role-Based Access Control) relies on Azure AD identities, not on-premises AD. Furthermore, setting up and maintaining a forest trust for just 10 developers might be overkill and introduce unnecessary complexity.

B. Configure an organization relationship between the Office 365 tenants of Fabrikam and Contoso.

Why it’s not ideal: Organizational relationships in Office 365 primarily facilitate features like free/busy lookups and sharing calendars between tenants. They do not grant access to Azure resources or the ability to assign RBAC roles.

C. In the Azure AD tenant of Contoso, enable Azure Active Directory Domain Services (Azure AD DS). Create a one-way forest trust that uses selective authentication between the Active Directory forests of Contoso and Fabrikam.

Why it’s not ideal: Azure AD DS is a managed domain service, essentially providing a subset of AD DS functionality in the cloud. It is primarily used for legacy applications that require traditional domain services. It’s not necessary for modern cloud applications that can use Azure AD directly. Additionally, a trust with Azure AD DS would not enable assignment of roles in the Contoso subscription and is overly complex.

14
Q

Your network contains an on-premises Active Directory forest.
You discover that when users change jobs within your company, the memberships of the user groups are not being updated. As a result, the users can access resources that are no longer relevant to their job.
You plan to integrate Active Directory and Azure Active Directory (Azure AD) by using Azure AD Connect.
You need to recommend a solution to ensure that group owners are emailed monthly about the group memberships they manage.
What should you include in the recommendation?
A. Azure AD access reviews
B. Tenant Restrictions
C. Azure AD Identity Protection
D. conditional access policies

A

The correct answer is A. Azure AD access reviews.

Here’s why:

Azure AD access reviews: This feature allows you to schedule regular reviews of access rights to resources, including group memberships, applications, and roles. You can configure access reviews to target specific groups, and designate the group owners as the reviewers. During the review, owners will be notified (via email) and prompted to confirm or remove members from their groups. This directly addresses the requirement of ensuring group owners are regularly reminded to review and update memberships.

Let’s examine why the other options are not the best fit:

B. Tenant Restrictions: Tenant restrictions are used to control which external tenants your users can access when signed in with their organizational accounts. This is related to controlling outbound access to other organizations, not managing internal group memberships.

C. Azure AD Identity Protection: This service helps detect and remediate identity-based risks, such as leaked credentials or unusual sign-in activities. It’s focused on security threats, not on ensuring regular review of group memberships.

D. Conditional access policies: These policies define access rules based on conditions like user location, device state, or application being accessed. While you can use conditional access to restrict access to certain resources based on group membership, it doesn’t actively involve group owners in reviewing those memberships.

15
Q

Note: This question is part of series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.
Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.
You need to enable single sign-on (SSO) for company users.
Solution: Install and configure an Azure AD Connect server to use pass-through authentication and select the Enable single sign-on option.
Does the solution meet the goal?
A. Yes
B. No

A

A. Yes

The proposed solution, installing and configuring an Azure AD Connect server to use pass-through authentication and selecting the “Enable single sign-on” option, does meet the goal.

Here’s why:

Azure AD Connect with Pass-through Authentication: This configuration allows users to authenticate to Azure AD using the same password they use for on-premises AD. Authentication requests are passed through to the on-premises AD for validation.

Enable single sign-on (Seamless SSO): This feature, when enabled in Azure AD Connect, automatically signs users in to Azure AD and cloud apps when they are on their domain-joined corporate devices. It uses Kerberos authentication behind the scenes.

16
Q

Note: This question is part of series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
Your company has an on-premises Active Directory Domain Services (AD DS) domain and an established Azure Active Directory (Azure AD) environment.
Your company would like users to be automatically signed in to cloud apps when they are on their corporate desktops that are connected to the corporate network.
You need to enable single sign-on (SSO) for company users.
Solution: Install and configure an Azure AD Connect server to use password hash synchronization and select the Enable single sign-on option.
Does the solution meet the goal?
A. Yes
B. No

A

A. Yes

The proposed solution, installing and configuring an Azure AD Connect server to use password hash synchronization and selecting the “Enable single sign-on” option, does meet the goal.

Here’s why:

Azure AD Connect with Password Hash Synchronization: This configuration synchronizes a hash of the users’ on-premises password hashes to Azure AD. This allows users to authenticate to Azure AD using the same password they use on-premises, even though authentication is handled by Azure AD directly.

Enable single sign-on (Seamless SSO): This feature, when enabled in Azure AD Connect, works independently of the synchronization method (password hash sync or pass-through authentication). It still relies on Kerberos authentication to achieve the automatic sign-in experience for domain-joined devices on the corporate network.

17
Q

You manage an Azure environment for a company. The environment has over 25,000 licensed users and 100 mission-critical applications.
You need to recommend a solution that provides advanced endpoint threat detection and remediation strategies.
What should you recommend?
A. Azure Active Directory (Azure AD) authentication
B. Microsoft Identity Manager
C. Azure Active Directory Federation Services (AD FS)
D. Azure Active Directory (Azure AD) Connect
E. Azure Active Directory (Azure AD) Identity Protection

A

Of the options provided, the closest is E. Azure Active Directory (Azure AD) Identity Protection, although it is not a complete solution for the specific requirement of advanced endpoint threat detection and remediation.

The ideal solution would be Microsoft Defender for Endpoint.

Here’s a breakdown:

Why Microsoft Defender for Endpoint is the best fit:

Advanced Endpoint Threat Detection: Defender for Endpoint is a comprehensive endpoint security platform that provides:

Threat & Vulnerability Management: Identifies and prioritizes vulnerabilities on endpoints.

Attack Surface Reduction: Hardens devices against attacks.

Next-generation Protection: Uses behavioral analysis, machine learning, and cloud-delivered protection to block malware, ransomware, and other threats.

Endpoint Detection and Response (EDR): Detects advanced threats that may have bypassed other defenses, provides detailed information about attacks, and enables investigation and response.

Automated Investigation and Remediation (AIR): Automatically investigates alerts and takes action to remediate threats, reducing the burden on security teams.

Microsoft Threat Experts: Provides access to Microsoft security experts for help with investigations and threat hunting.

Remediation Strategies: Defender for Endpoint allows for automated and manual remediation actions, including isolating devices, blocking files, and terminating processes.

Integration with Azure AD: Defender for Endpoint integrates with Azure AD for identity-based security policies and conditional access.

Why the other options are not suitable:

A. Azure Active Directory (Azure AD) authentication: Azure AD provides authentication services but does not offer endpoint threat detection or remediation.

B. Microsoft Identity Manager (MIM): MIM is an on-premises identity management solution, primarily for managing user accounts, groups, and credentials within an organization’s infrastructure. It’s not designed for endpoint security.

C. Azure Active Directory Federation Services (AD FS): AD FS is a feature of Windows Server that enables federated identity and single sign-on. It’s related to authentication, not endpoint security.

D. Azure Active Directory (Azure AD) Connect: Azure AD Connect synchronizes on-premises identities to Azure AD. It’s a critical component for hybrid identity but doesn’t provide endpoint security features.

E. Azure Active Directory (Azure AD) Identity Protection: This is the closest of the options, but it primarily focuses on identity-based risks (e.g., leaked credentials, impossible travel) and risky sign-ins. While it can trigger actions like requiring MFA or blocking access, it’s not a full endpoint security solution like Defender for Endpoint.

18
Q

You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam,
Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the
Microsoft 365 E5 plan.
You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements:
✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1.
✑ If the manager does not verify an access permission, automatically revoke that permission.
✑ Minimize development effort.
What should you recommend?
A. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources
B. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet
C. Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet
D. In Azure Active Directory (Azure AD), create an access review of Application1

A

The correct answer is D. In Azure Active Directory (Azure AD), create an access review of Application1.

Here’s why:

Azure AD Access Reviews: This feature is specifically designed for reviewing access rights to resources like applications, groups, and roles. It allows you to:

Target specific resources: You can create an access review specifically for Application1.

Assign reviewers: You can designate the manager of the Fabrikam developers as the reviewer.

Set a schedule: You can configure the review to occur monthly, meeting the requirement for a monthly email.

Automate actions: You can configure the review to automatically remove access if the reviewer (the manager) doesn’t approve it within a specified timeframe.

Minimal development effort: Access reviews are a built-in feature, requiring no custom code or scripting.

Let’s analyze why the other options are not as suitable:

A. In Azure Active Directory (AD) Privileged Identity Management, create a custom role assignment for the Application1 resources:

Why it’s not ideal: PIM is primarily used for managing just-in-time access to privileged roles. While you could create a custom role, PIM doesn’t offer the automated monthly review and manager approval/denial workflow needed in this scenario.

B. Create an Azure Automation runbook that runs the Get-AzureADUserAppRoleAssignment cmdlet:

Why it’s not ideal: This would require significant development effort to create a runbook that retrieves role assignments, sends emails, and processes responses. It also wouldn’t have the built-in review and approval workflow of access reviews.

C. Create an Azure Automation runbook that runs the Get-AzureRmRoleAssignment cmdlet:

Why it’s not ideal: Similar to option B, this requires custom development. Additionally, Get-AzureRmRoleAssignment is part of the older AzureRM module. It is recommended to use Az module cmdlets for new development. Using this also wouldn’t have the built-in review features.

19
Q

Note: This question is part of series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription that contains a resource group named RG1.
You create an Azure Active Directory (Azure AD) group named ResearchUsers that contains the user accounts of all researchers.
You need to recommend a solution that meets the following requirements:
✑ The researchers must be allowed to create Azure virtual machines.
✑ The researchers must only be able to create Azure virtual machines by using specific Azure Resource Manager templates.
Solution: On RG1, assign a custom role-based access control (RBAC) role to the ResearchUsers group.
Does this meet the goal?
A. Yes
B. No

A

B. No

Assigning a custom RBAC role to the ResearchUsers group on RG1 alone does not meet the goal.

Here’s why:

Custom RBAC Roles: Custom RBAC roles allow you to define granular permissions for specific actions on Azure resources. You could create a custom role that grants permissions to create virtual machines within RG1.

Missing Restriction: However, a custom RBAC role does not provide a mechanism to enforce the use of specific ARM templates when creating virtual machines. The researchers would still be able to create VMs using any method, including the portal, CLI, or PowerShell, without being restricted to the designated templates.

To meet the goal, you would need to combine a custom RBAC role with Azure Policy:

Custom RBAC Role: Grant the ResearchUsers group a custom role that allows the creation of virtual machines in RG1 (Microsoft.Compute/virtualMachines/*).

Azure Policy: Create a policy that constrains VM deployments to the configurations defined by the approved ARM templates, for example by denying any size, image, or region the templates do not use. You would identify or create the specific ARM templates you want to allow and mirror their key properties in the policy definition (a sketch of the RBAC half of this combination follows).
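As a hedged illustration of the RBAC half, a scoped custom role can be cloned from a built-in role and assigned to the ResearchUsers group. The role name and the subscription ID below are placeholders.

# Sketch: clone a built-in role, narrow it to RG1, and assign it to ResearchUsers.
$role = Get-AzRoleDefinition -Name "Virtual Machine Contributor"
$role.Id = $null
$role.IsCustom = $true
$role.Name = "Research VM Creator"
$role.Description = "Create and manage virtual machines in RG1 only"
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>/resourceGroups/RG1")
New-AzRoleDefinition -Role $role

$group = Get-AzADGroup -DisplayName "ResearchUsers"
New-AzRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Research VM Creator" -ResourceGroupName "RG1"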

20
Q

You have an Azure subscription that contains several resource groups, including a resource group named RG1. RG1 contains several business-critical resources.
A user named admin1 is assigned the Owner role to the subscription.
You need to prevent admin1 from modifying the resources in RG1. The solution must ensure that admin1 can manage the resources in the other resource groups.
What should you use?
A. a management group
B. an Azure policy
C. a custom role
D. an Azure blueprint

A

The correct answer is B. an Azure policy.

Here’s why:

Azure Policy: Azure Policy allows you to define and enforce rules about the configuration of your Azure resources. You can create a custom policy with a Deny effect and assign it at the scope of RG1 so that create and modify operations on the resources in that resource group are blocked, while the rest of the subscription is unaffected. Because policy is evaluated independently of RBAC, this prevents admin1 (or any other user) from modifying resources in RG1, even though admin1 holds the Owner role at the subscription level.

Let’s examine why the other options aren’t the best fit:

A. a management group: Management groups are used to organize subscriptions and apply governance policies at a higher level than subscriptions. While you could use management groups to apply policies, they don’t offer the granularity to target a specific resource group within a subscription for a deny action while allowing actions on others.

C. a custom role: You could create a custom role that excludes permissions for RG1, but Azure RBAC is additive: because admin1 already holds the Owner role at the subscription scope, which is inherited by RG1, assigning a narrower custom role cannot take that access away. You would have to remove the subscription-level Owner assignment and re-grant access per resource group, which is neither practical nor scalable.

D. an Azure blueprint: Azure blueprints are used to define and deploy a set of resources and policies in a standardized way. While you could use a blueprint to apply a policy, it’s not the most direct way to achieve the specific requirement of preventing modifications to RG1 while allowing them elsewhere.
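To implement the recommended approach, a custom policy with a Deny effect can be assigned at the RG1 scope. The sketch below is an assumption-laden example: the subscription ID is a placeholder and the catch-all rule should be tested in your environment before relying on it.

# Sketch: deny create/update of any resource type, assigned only at the RG1 scope.
$rule = @'
{
  "if":   { "field": "type", "like": "*" },
  "then": { "effect": "deny" }
}
'@
$definition = New-AzPolicyDefinition -Name "deny-changes" -DisplayName "Deny resource changes" -Policy $rule -Mode All
New-AzPolicyAssignment -Name "deny-changes-rg1" -PolicyDefinition $definition `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/RG1"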

21
Q

You have an Azure subscription.
You need to recommend a solution to provide developers with the ability to provision Azure virtual machines. The solution must meet the following requirements:
✑ Only allow the creation of the virtual machines in specific regions.
✑ Only allow the creation of specific sizes of virtual machines.
What should you include in the recommendation?
A. conditional access policies
B. Azure Policy
C. Azure Resource Manager templates
D. role-based access control (RBAC)

A

Let’s analyze each option against the requirements:

A. Conditional Access Policies: Conditional Access Policies are used to enforce authentication and authorization policies in Azure Active Directory (Azure AD). They control who can access what under what conditions. They are primarily focused on securing access to applications and services, not on restricting the provisioning of Azure resources like virtual machines based on region or size.

B. Azure Policy: Azure Policy is a service in Azure that allows you to create, assign, and manage policies to enforce different rules and effects over your Azure resources. Azure Policy can be used to:

Restrict resource locations: You can define policies that only allow resource deployments in specific Azure regions. For example, you can create a policy that only allows virtual machines to be created in “East US” and “West US”.

Restrict resource SKUs (sizes): You can define policies that only allow the creation of specific virtual machine sizes. For example, you can create a policy that only allows “Standard_DS1_v2” and “Standard_DS2_v2” virtual machine sizes.

Enforce other configurations: Azure Policy can also enforce tagging, naming conventions, and various other resource configurations.

Azure Policy is specifically designed to govern and control the creation and configuration of Azure resources to ensure compliance and standardization.

C. Azure Resource Manager templates (ARM templates): ARM templates are Infrastructure-as-Code (IaC) files used to define and deploy Azure resources. While you can define the region and size of virtual machines within an ARM template, ARM templates themselves do not enforce restrictions. Developers could still deploy virtual machines using other methods (like the Azure portal or Azure CLI) and bypass the restrictions defined in the ARM template unless there is an enforcement mechanism in place. ARM templates are a tool for deployment, not for policy enforcement.

D. Role-Based Access Control (RBAC): RBAC controls who has access to what Azure resources and what actions they can perform. You can use RBAC to grant developers the “Virtual Machine Contributor” role, which allows them to create and manage virtual machines. However, RBAC alone does not provide the capability to restrict the region or size of virtual machines that can be created. RBAC controls permissions at a broader level (e.g., create virtual machines), but not the specific properties of those resources (region, size).

Conclusion:

Azure Policy is the most appropriate solution because it is specifically designed to enforce organizational standards and compliance requirements on Azure resources. It directly addresses both requirements: restricting virtual machine creation to specific regions and specific sizes.

Final Answer: B. Azure Policy
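As a hedged sketch of the recommendation (the scope, regions, and SKU names are placeholders, and the display-name property path can differ between Az.Resources versions), the two built-in policy definitions can be assigned like this:

# Sketch: assign the built-in "Allowed locations" and "Allowed virtual machine size SKUs" policies.
$scope = "/subscriptions/<subscription-id>"   # placeholder scope

# In newer Az.Resources versions the display name is at $_.DisplayName instead of $_.Properties.DisplayName.
$locations = Get-AzPolicyDefinition -Builtin | Where-Object { $_.Properties.DisplayName -eq "Allowed locations" }
$vmSkus    = Get-AzPolicyDefinition -Builtin | Where-Object { $_.Properties.DisplayName -eq "Allowed virtual machine size SKUs" }

New-AzPolicyAssignment -Name "allowed-locations" -Scope $scope -PolicyDefinition $locations `
    -PolicyParameterObject @{ listOfAllowedLocations = @("eastus", "westus") }
New-AzPolicyAssignment -Name "allowed-vm-skus" -Scope $scope -PolicyDefinition $vmSkus `
    -PolicyParameterObject @{ listOfAllowedSKUs = @("Standard_DS1_v2", "Standard_DS2_v2") }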

22
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription named Project1. Only a group named Project1admins is assigned roles in the Project1 subscription. The Project1 subscription contains all the resources for an application named Application1.
Your company is developing a new application named Application2. The members of the Application2 development team belong to an Azure Active Directory
(Azure AD) group named App2Dev.
You identify the following requirements for Application2:
✑ The members of App2Dev must be prevented from changing the role assignments in Azure.
✑ The members of App2Dev must be able to create new Azure resources required by Application2.
✑ All the required role assignments for Application2 will be performed by the members of Project1admins.
You need to recommend a solution for the role assignments of Application2.
Solution: Create a new Azure subscription named Project2. Assign Project1admins the User Access Administrator role for the Project2 subscription. Assign
App2Dev the Owner role for the Project2 subscription.
Does this meet the goal?
A. Yes
B. No

A

Let’s analyze the requirements and the proposed solution step by step.

Requirements:

Requirement 1: Prevent App2Dev from changing role assignments. The members of App2Dev should not be able to modify who has access to Application2 resources. This is about limiting their permissions related to Azure RBAC (Role-Based Access Control).

Requirement 2: App2Dev must be able to create new Azure resources. The members of App2Dev need to be able to deploy and manage the Azure resources necessary for Application2. This means they need sufficient permissions to create and manage resources like virtual machines, storage accounts, web apps, etc.

Requirement 3: Role assignments done by Project1admins. All role assignments for Application2 should be exclusively managed by the members of Project1admins. This means App2Dev should not have permissions to manage role assignments.

Proposed Solution:

Action 1: Create a new Azure subscription named Project2. This action creates a separate administrative boundary for Application2. It can help with organization, billing, and resource isolation, but it doesn’t directly address the role assignment requirements itself.

Action 2: Assign Project1admins the User Access Administrator role for the Project2 subscription. The User Access Administrator role allows Project1admins to manage access to Azure resources within the Project2 subscription. This role is specifically designed for managing role assignments. So, Project1admins will be able to grant and revoke access to resources in Project2.

Action 3: Assign App2Dev the Owner role for the Project2 subscription. The Owner role is a built-in Azure RBAC role that grants full access to manage all resources, including the ability to delegate access to other resources. This is a very powerful role.

Evaluation of the Solution against the Requirements:

Requirement 1: Prevent App2Dev from changing role assignments? NO. The Owner role grants every action ("*") with no exclusions, so it includes the Microsoft.Authorization write operations used to create and delete role assignments (the role-definition check after this evaluation confirms this). By assigning the Owner role to App2Dev, they can change role assignments within the Project2 subscription. This directly violates Requirement 1.

Requirement 2: App2Dev must be able to create new Azure resources? YES. The Owner role grants all permissions, including the ability to create and manage all types of Azure resources. So, App2Dev members will be able to create new resources in Project2.

Requirement 3: Role assignments done by Project1admins? NO. While Project1admins have the User Access Administrator role and can manage role assignments, the solution also grants the Owner role to App2Dev, which also allows App2Dev to manage role assignments. Requirement 3 implies that only Project1admins should manage role assignments, and the solution fails to enforce this exclusivity because App2Dev, with the Owner role, also gains this capability.
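You can confirm this directly from the built-in role definitions; Contributor, not Owner, is the role that carves out the authorization write operations:

# Owner grants every action ("*") with no exclusions, so it can write role assignments.
(Get-AzRoleDefinition -Name "Owner").Actions
(Get-AzRoleDefinition -Name "Owner").NotActions        # empty

# Contributor also grants "*", but excludes the authorization write/delete operations,
# which is why Contributor (rather than Owner) keeps a group out of role management.
(Get-AzRoleDefinition -Name "Contributor").NotActions
# Includes Microsoft.Authorization/*/Write and Microsoft.Authorization/*/Delete, among others.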

Conclusion:

The proposed solution fails to meet Requirement 1 and Requirement 3. By granting the Owner role to App2Dev, the solution inadvertently allows App2Dev to manage role assignments, which is explicitly against the stated requirements. While App2Dev gains the ability to create resources (Requirement 2), the solution compromises the access control requirements.

Therefore, the proposed solution does not meet the goal.

Final Answer: No

23
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure subscription that contains a resource group named RG1.
You create an Azure Active Directory (Azure AD) group named ResearchUsers that contains the user accounts of all researchers.
You need to recommend a solution that meets the following requirements:
✑ The researchers must be allowed to create Azure virtual machines.
✑ The researchers must only be able to create Azure virtual machines by using specific Azure Resource Manager templates.
Solution: On RG1, assign the Contributor role to the ResearchUsers group. Create a custom Azure Policy definition and assign the policy to RG1.
Does this meet the goal?
A. Yes
B. No

A

Let’s break down the requirements and the proposed solution to determine if it meets the goal.

Requirements:

Requirement 1: Researchers must be allowed to create Azure virtual machines.

Requirement 2: Researchers must only be able to create Azure virtual machines by using specific Azure Resource Manager templates.

Proposed Solution:

Step 1: On RG1, assign the Contributor role to the ResearchUsers group.

The Contributor role is a built-in Azure RBAC role that grants permissions to manage most resources, including creating and managing virtual machines within the scope it’s assigned to (in this case, Resource Group RG1).

Does this meet Requirement 1? Yes, assigning the Contributor role to ResearchUsers on RG1 will allow researchers to create Azure virtual machines within RG1.

Step 2: Create a custom Azure Policy definition and assign the policy to RG1.

Azure Policy allows you to define rules and effects that are enforced when Azure resources are created or modified. You can use Azure Policy to control various aspects of resource configurations.

Does this meet Requirement 2? Azure Policy cannot directly enforce the use of specific Azure Resource Manager templates. Azure Policy works by evaluating resource properties after a deployment request is made. It can then enforce rules based on those properties (e.g., allowed VM sizes, allowed regions, required tags).

However, Azure Policy can restrict the properties of virtual machines that can be created. You could create a custom Azure Policy that, for example:

Only allows specific VM sizes.

Only allows VMs to be created in specific regions.

Only allows VMs to be created from specific images.

Requires specific tags.

By carefully crafting the Azure Policy, you can constrain the characteristics of the virtual machines that researchers can create. If the “specific Azure Resource Manager templates” in Requirement 2 are designed to deploy virtual machines with certain pre-defined characteristics (like specific sizes, regions, images, etc.), then Azure Policy can be used to enforce these characteristics. In this way, while not directly enforcing template usage, Azure Policy can guide and restrict VM creation to align with the desired outcomes of using those specific templates.

Evaluation:

Requirement 1 is met: The Contributor role grants the necessary permissions.

Requirement 2 is partially met, but not in a strict sense: Azure Policy cannot force users to use specific ARM templates. However, Azure Policy can enforce constraints on VM properties that would effectively limit the type of VMs researchers can create, guiding them towards configurations consistent with specific templates.

Conclusion:

While Azure Policy cannot literally enforce the use of specific ARM templates, it can be used to enforce configurations that are likely defined within those templates. In a practical sense, this combination of Contributor role and Azure Policy provides a level of control that can effectively guide researchers to create VMs that meet certain pre-defined specifications, which might be the intent behind requiring “specific ARM templates.”

Considering the capabilities of Azure Policy and the context of a certification question, the intended answer is Yes: the goal is to control what kind of VMs are created, and Azure Policy enforces the properties of those VMs even though it does not strictly enforce the method of creation (deployment from a specific template). Under an extremely literal reading, where only deployments from the designated templates are permitted and every other method is blocked, the answer would be No. In a practical governance scenario, however, using Azure Policy to constrain VM properties is a common and effective approach.

Final Answer: Yes

24
Q

HOTSPOT -
You are building an application that will run in a virtual machine (VM). The application will use Managed Service Identity (MSI).
The application uses Azure Key Vault, Azure SQL Database, and Azure Cosmos DB.
You need to ensure the application can use secure credentials to access these services.
Which authorization methods should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area
Functionality
Azure Key Vault:
Hash-based message authentication code (HMAC)
Managed Service Identity (MSI)
Role-Based Access Controls (RBAC)
HTTPS encryption
Azure SQL:
Hash-based message authentication code (HMAC)
Managed Service Identity (MSI)
Role-Based Access Controls (RBAC)
HTTPS encryption
Cosmos DB:
Hash-based message authentication code (HMAC)
Managed Service Identity (MSI)
Role-Based Access Controls (RBAC)
HTTPS encryption

A

The question is about using Managed Service Identity (MSI) for an application running in a VM to securely access Azure Key Vault, Azure SQL Database, and Azure Cosmos DB. MSI allows Azure services to authenticate to other Azure services without storing credentials in code. We need to identify the appropriate authorization methods for each service when using MSI.

Azure Key Vault:

Managed Service Identity (MSI): This is the core technology being used. MSI enables the VM’s identity to be recognized by Azure services.

Role-Based Access Controls (RBAC): Azure Key Vault uses Access Policies or, more recently, Azure RBAC to control access to vaults and their contents (secrets, keys, certificates). When using MSI, you would grant the VM’s MSI identity specific permissions in Key Vault using RBAC or Access Policies. For example, you would grant the MSI identity the “Key Vault Secrets User” role (or specific Access Policy permissions) to allow the application to retrieve secrets from Key Vault.

Hash-based message authentication code (HMAC): HMAC is a type of message authentication code obtained by running a cryptographic hash function on the data to be authenticated and a shared secret key. While HMAC can be used for authentication in some contexts, it is not the primary authorization method used with MSI for Azure Key Vault access.

HTTPS encryption: HTTPS is used to secure communication channels, ensuring data is encrypted in transit. While important for security, HTTPS is not an authorization method itself.

Azure SQL Database:

Managed Service Identity (MSI): As with Key Vault, MSI is the core technology for authentication. For Azure SQL Database, this is specifically referred to as “Azure AD authentication for Azure SQL Database”.

Role-Based Access Controls (RBAC): Azure RBAC manages control-plane permissions for Azure AD identities (including MSIs) on the Azure SQL resource, through roles such as SQL DB Contributor. Data-plane access is then granted inside the database by creating a contained database user for the managed identity and assigning it database roles, which controls what the application can do within the SQL database.

Hash-based message authentication code (HMAC): HMAC is not the standard authorization method for Azure SQL Database access using MSI or Azure AD authentication.

HTTPS encryption: HTTPS is essential for securing the connection to Azure SQL Database, but it is not an authorization method.

Azure Cosmos DB:

Managed Service Identity (MSI): Cosmos DB supports using Managed Identities for Azure Resources (MSI) for authentication.

Role-Based Access Controls (RBAC): Cosmos DB supports Role-Based Access Control (RBAC) for data plane operations (newer model). You can assign RBAC roles to the VM’s MSI identity to control access to Cosmos DB resources like databases, containers, and items. For example, you might assign a role that allows reading and writing items in a specific container.

Hash-based message authentication code (HMAC): Cosmos DB also supports authentication using master keys or resource tokens, which are based on HMAC. However, when using MSI, RBAC is the preferred and more secure method for authorization, especially for service-to-service authentication.

HTTPS encryption: HTTPS is crucial for securing communication with Cosmos DB, but it’s not an authorization method.

Therefore, the correct selections are:

Azure Key Vault: Managed Service Identity (MSI), Role-Based Access Controls (RBAC)
Azure SQL: Managed Service Identity (MSI), Role-Based Access Controls (RBAC)
Cosmos DB: Managed Service Identity (MSI), Role-Based Access Controls (RBAC)
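As a hedged sketch of how the selections fit together (resource names and IDs are placeholders, the VM is assumed to have a system-assigned identity enabled, and the Key Vault is assumed to use the RBAC permission model), an administrator grants the VM's managed identity a data-plane role, and the application then acquires tokens inside the VM without any stored credentials:

# Grant the VM's managed identity access to Key Vault via RBAC (run by an administrator).
$principalId = (Get-AzVM -ResourceGroupName "RG1" -Name "VM1").Identity.PrincipalId
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Key Vault Secrets User" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.KeyVault/vaults/<vault-name>"

# Inside the VM, request an access token for Key Vault from the instance metadata endpoint (IMDS).
$response = Invoke-RestMethod -Headers @{ Metadata = "true" } -Uri `
    "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"
$token = $response.access_token   # presented as a Bearer token when calling Key Vault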

25
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains a group named Group1. Group1 contains all the administrative user accounts.
You discover several login attempts to the Azure portal from countries where administrative users do NOT work.
You need to ensure that all login attempts to the Azure portal from those countries require Azure Multi-Factor Authentication (MFA).
Solution: Implement Azure AD Privileged Identity Management.
Does this solution meet the goal?
A. Yes
B. No

A

The goal is to enforce Azure Multi-Factor Authentication (MFA) for login attempts to the Azure portal from specific countries where administrative users do not typically work. The proposed solution is to implement Azure AD Privileged Identity Management (PIM). Let’s evaluate if PIM meets this goal.

Azure AD Privileged Identity Management (PIM):

Purpose of PIM: PIM is primarily designed to manage, control, and monitor privileged access within an Azure Active Directory (Azure AD) organization. It focuses on providing just-in-time (JIT) access to Azure resources and Azure AD roles for administrative users. Key features of PIM include:

Just-in-Time Role Activation: Users need to request activation for privileged roles, granting them temporary permissions.

Approval Workflows: Role activation requests can require approval from designated approvers.

MFA Enforcement during Activation: MFA can be enforced when users activate privileged roles.

Audit Logging and Reporting: PIM provides detailed logs of role activations and access.

How PIM relates to the stated goal: PIM can enforce MFA, but it primarily does so in the context of role activation. The scenario describes login attempts to the Azure portal from specific countries for administrative users (Group1). While Group1 contains administrative users, the requirement is about all login attempts from those locations, not specifically about the activation of privileged roles.

Limitations of PIM for this specific goal: PIM is not designed to enforce MFA based on the location of a login attempt for all logins to the Azure portal. PIM’s MFA enforcement is tied to the activation of privileged roles. It does not inherently apply MFA based on geographic location for every portal login, even if the user is an administrator.

Alternative Solution: Azure AD Conditional Access: Azure AD Conditional Access is the feature specifically designed to control access to applications and services based on various conditions, including:

User or Group Membership: Target specific users or groups.

Location: Control access based on the user’s location (countries, trusted locations, etc.).

Device State: Control access based on device compliance, hybrid Azure AD join, etc.

Application: Control access to specific applications (like the Azure portal).

Risk Level: Integrate with Identity Protection to control access based on sign-in risk.

Using Conditional Access, you can create a policy that:

Targets: All members of Group1 (administrative users).

Condition: Location is NOT in the allowed countries (countries where admins work).

Access Control: Require Multi-Factor Authentication.

Application: Azure portal (Microsoft Azure Management).
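A hedged sketch of such a policy created with the Microsoft Graph PowerShell module is shown below. The group ID and named-location ID are placeholders, 797f4846-ba00-4fd7-ba43-dac1f8f63013 is the well-known app ID for Microsoft Azure Management, and the payload should be checked against the conditionalAccessPolicy schema before use. This is what the correct solution would look like; the PIM-based solution in this question does not meet the goal.

# Sketch: require MFA for Group1 when signing in to Azure management from non-allowed locations.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"
$policy = @{
    displayName = "Require MFA for admins outside allowed countries"
    state       = "enabled"
    conditions  = @{
        clientAppTypes = @("all")
        users          = @{ includeGroups = @("<group1-object-id>") }
        applications   = @{ includeApplications = @("797f4846-ba00-4fd7-ba43-dac1f8f63013") }
        locations      = @{ includeLocations = @("All"); excludeLocations = @("<allowed-countries-location-id>") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy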

Conclusion:

Azure AD Privileged Identity Management is not the correct tool to directly address the requirement of enforcing MFA based on login location for all login attempts to the Azure portal. PIM’s MFA enforcement is tied to role activation, not general login attempts.

Azure AD Conditional Access is the appropriate feature for implementing location-based MFA requirements for accessing the Azure portal.

Therefore, the proposed solution of implementing Azure AD Privileged Identity Management does not meet the goal.

Final Answer: No

26
Q

HOTSPOT -
You are designing an access policy for the sales department at your company.
Occasionally, the developers at the company must stop, start, and restart Azure virtual machines. The development team changes often.
You need to recommend a solution to provide the developers with the required access to the virtual machines. The solution must meet the following requirements:
✑ Provide permissions only when needed.
✑ Use the principle of least privilege.
✑ Minimize costs.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Answer Area
Azure Active Directory (Azure AD)
license:
Free
Basic
Premium P1
Premium P2
Security feature:
Just in time VM access
A conditional access policy
Privileged Identity Management for the Azure resources

A

To meet the requirements of providing developers with just-in-time (JIT) permissions to stop, start, and restart Azure virtual machines while adhering to the principle of least privilege and minimizing costs, we need to select the appropriate Azure Active Directory (Azure AD) license and security feature. Let’s analyze each option:

Azure Active Directory (Azure AD) License:

Free: The Free tier of Azure AD provides basic user and group management but does not include advanced features like Azure AD Privileged Identity Management (PIM) or Conditional Access policies that are needed for just-in-time access and fine-grained control over Azure resources in the context of RBAC.

Basic: Azure AD Basic offers some enhancements over the Free tier, but it still lacks the advanced security and governance features necessary for implementing just-in-time RBAC and PIM.

Premium P1: Azure AD Premium P1 includes features like Conditional Access, which can enhance security but does not directly provide just-in-time RBAC for Azure resources or Privileged Identity Management.

Premium P2: Azure AD Premium P2 includes all the features of P1 and adds Azure AD Privileged Identity Management (PIM). PIM is specifically designed for managing, controlling, and monitoring privileged access to Azure resources and Azure AD roles. It enables just-in-time access, role activation workflows, and enforcement of MFA for privileged operations, perfectly aligning with the “provide permissions only when needed” requirement.

Security Feature:

Just in time VM access: This feature in Microsoft Defender for Cloud (formerly Azure Security Center) controls network access to VMs by limiting open ports and requiring just-in-time requests for inbound network traffic. While it enhances VM security, it does not manage Azure RBAC permissions for VM actions like start, stop, and restart. It’s focused on network access, not RBAC.

A conditional access policy: Conditional Access policies enforce authentication requirements based on various conditions (user, location, device, etc.). While Conditional Access can be used to strengthen security around Azure portal access or application access, it is not designed for providing just-in-time RBAC permissions to Azure resources. It does not directly address the requirement of granting temporary permissions for VM operations.

Privileged Identity Management for the Azure resources: Privileged Identity Management (PIM) for Azure resources is the correct feature. PIM allows you to grant users eligible assignments to Azure roles. Developers can then activate these role assignments when they need to perform VM operations. Activation can be time-bound, require MFA, and can be approved, ensuring permissions are granted only when needed and for the minimum necessary duration (least privilege). PIM directly addresses the requirements for just-in-time access and least privilege for managing Azure resources.
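A hedged sketch of granting the developers an eligible (rather than permanent) assignment with the Az PIM cmdlets follows. The group name, subscription ID, and duration are placeholders, and the parameter set should be verified against your installed Az.Resources version.

# Sketch: make the developer group eligible for Virtual Machine Contributor at a scope,
# so members must activate the role (optionally with MFA/approval) before using it.
$scope   = "/subscriptions/<subscription-id>"
$group   = Get-AzADGroup -DisplayName "Developers"
$roleDef = Get-AzRoleDefinition -Name "Virtual Machine Contributor"

New-AzRoleEligibilityScheduleRequest -Name (New-Guid).Guid -Scope $scope `
    -PrincipalId $group.Id `
    -RoleDefinitionId "$scope/providers/Microsoft.Authorization/roleDefinitions/$($roleDef.Id)" `
    -RequestType "AdminAssign" `
    -ScheduleInfoStartDateTime (Get-Date).ToUniversalTime() `
    -ExpirationType "AfterDuration" -ExpirationDuration "P365D"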

Conclusion:

To meet all the requirements effectively and minimize costs while providing the necessary functionality, the optimal combination is:

Azure Active Directory (Azure AD) license: Premium P2 (because PIM is required, and PIM is included in P2)

Security feature: Privileged Identity Management for the Azure resources (because PIM is the service designed for just-in-time RBAC for Azure resources, directly addressing the core requirement).

Final Answer:

Azure Active Directory (Azure AD) license: Premium P2
Security feature: Privileged Identity Management for the Azure resources

27
Q

Your company purchases an app named App1.
You need to recommend a solution to ensure that App1 can read and modify access reviews.
What should you recommend?
A. From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.
B. From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.
C. From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.
D. From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.

A

To allow App1 to read and modify access reviews, it needs to interact with Azure Active Directory’s (Azure AD) access review functionality programmatically. The standard way to interact with Azure AD services programmatically is through the Microsoft Graph API.

Let’s analyze each option:

A. From API Management services, publish the API of App1, and then delegate permissions to the Microsoft Graph API.

Incorrect. API Management (APIM) is used to manage and expose your own APIs, not to grant permissions to an application to access other APIs like Microsoft Graph. Publishing App1’s API in APIM is irrelevant to the requirement of accessing access reviews. Delegating permissions to Microsoft Graph API is the correct goal, but using APIM to publish App1’s API is the wrong approach.

B. From API Management services, publish the API of App1. From the Access control (IAM) blade, delegate permissions.

Incorrect. Similar to Option A, publishing App1’s API in APIM is unnecessary and incorrect. “Access control (IAM) blade” in this context is also incorrect. IAM in Azure typically refers to Role-Based Access Control (RBAC) for managing permissions to Azure resources. It’s not used to grant application permissions to Microsoft Graph API. Graph API permissions are configured in Azure AD app registrations.

C. From the Azure Active Directory admin center, register App1, and then delegate permissions to the Microsoft Graph API.

Correct. This option outlines the correct steps.

“From the Azure Active Directory admin center, register App1”: This is the essential first step. Before App1 can interact with Azure AD or Microsoft Graph, it must be registered as an application in your Azure AD tenant. This registration creates an “application object” in Azure AD, which is the identity of App1 in Azure AD.

“and then delegate permissions to the Microsoft Graph API”: After registering App1, you need to configure the API permissions for this application registration. Within the app registration in the Azure AD admin center, you will find a section to configure “API permissions”. Here, you would add permissions for the Microsoft Graph API, specifically the permissions required to read and modify access reviews (e.g., AccessReview.ReadWrite.All). The term “delegate permissions” is slightly misleading in this context, as you are actually granting “Application permissions” (if App1 acts on its own behalf) or “Delegated permissions” (if App1 acts on behalf of a signed-in user), but the general idea of granting permissions to the Graph API is correct.

D. From the Azure Active Directory admin center, register App1. From the Access control (IAM) blade, delegate permissions.

Incorrect. While registering App1 in the Azure AD admin center is correct, “From the Access control (IAM) blade, delegate permissions” is wrong in this context. As mentioned before, the “Access control (IAM)” blade is for Azure RBAC, not for granting application permissions to Microsoft Graph API. API permissions for Graph are configured within the app registration itself under “API permissions.”

Therefore, Option C is the correct and most accurate recommendation. It describes the standard and necessary steps to allow App1 to programmatically interact with Microsoft Graph API to read and modify access reviews.

Final Answer: C
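A hedged sketch of granting App1 the Graph application permission with the Microsoft Graph PowerShell module is below. 00000003-0000-0000-c000-000000000000 is the well-known Microsoft Graph app ID, the permission is assumed to be AccessReview.ReadWrite.All, the service principal lookup by display name is an assumption, and tenant admin consent is still required.

# Sketch: assign the AccessReview.ReadWrite.All application permission to App1's service principal.
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "Application.Read.All"
$graphSp = Get-MgServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"
$app1Sp  = Get-MgServicePrincipal -Filter "displayName eq 'App1'"
$appRole = $graphSp.AppRoles | Where-Object { $_.Value -eq "AccessReview.ReadWrite.All" }

New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $app1Sp.Id `
    -PrincipalId $app1Sp.Id -ResourceId $graphSp.Id -AppRoleId $appRole.Id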

28
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains several administrative user accounts.
You need to recommend a solution to identify which administrative user accounts have NOT signed in during the previous 30 days.
Which service should you include in the recommendation?
A. Azure AD Identity Protection
B. Azure Activity Log
C. Azure Advisor
D. Azure AD Privileged Identity Management (PIM)

A

The goal is to identify administrative user accounts in an Azure AD tenant that have not signed in during the previous 30 days. Let’s evaluate each service option:

A. Azure AD Identity Protection:
Azure AD Identity Protection is focused on detecting, preventing, and remediating identity-based risks. It analyzes sign-in patterns and flags suspicious activities like unusual locations, unfamiliar devices, or leaked credentials. While Identity Protection monitors sign-ins, its primary purpose is security and threat detection, not generating reports on user inactivity. It’s not designed to directly provide a list of administrative users who haven’t signed in for a specific period.

B. Azure Activity Log:
Azure Activity Log is a subscription-level log of control-plane events in Azure. User sign-in activity is recorded in the Azure AD sign-in logs, which are accessed through Azure AD reporting or routed to a Log Analytics workspace via diagnostic settings; the Activity Log itself does not expose them. Even with the sign-in logs available, producing a list of inactive administrative users for the past 30 days would require custom querying and processing. The Activity Log is about auditing resource operations, not user inactivity reporting.

C. Azure Advisor:
Azure Advisor is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. It analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost-effectiveness, performance, reliability (High Availability, Security, and Performance), and operational excellence of your Azure resources. Advisor is not related to user sign-in activity or identifying inactive administrative accounts. Its focus is on resource optimization and best practices.

D. Azure AD Privileged Identity Management (PIM):
Azure AD Privileged Identity Management (PIM) is used to manage, control, and monitor privileged access in Azure AD. While PIM’s main function is to provide just-in-time (JIT) access to privileged roles, it also has features for access reviews and reporting on role assignments and usage. Specifically related to this question:

Access Reviews in PIM: PIM’s Access Reviews can be configured to review role assignments. As part of an access review, you can potentially identify users who have not been actively using their administrative roles. While not directly a “last sign-in report,” access reviews can give insights into the usage of administrative roles.

Audit Logs and Reporting: PIM provides audit logs of role activations and assignments. While not a direct “inactive user report,” these logs can be analyzed to infer user activity levels.

Although PIM’s primary function is not to create a report of inactive users, among the given options, it is the most relevant service for the task. PIM is concerned with administrative users and their roles, and its reporting and review features offer the closest capabilities to identifying accounts that are not actively used within administrative contexts, compared to Identity Protection (focused on risk), Activity Log (raw event data, not user inactivity reports), and Advisor (resource optimization).

Therefore, although none of the options provides a dedicated inactivity report, the most appropriate answer is D. Azure AD Privileged Identity Management (PIM). It is the service focused on managing and monitoring administrative accounts, and its review and audit capabilities are the closest fit for identifying administrative accounts that are not being actively used.

Final Answer: D

29
Q

You have an on-premises Active Directory forest and an Azure Active Directory (Azure AD) tenant. All Azure AD users are assigned a Premium P1 license.
You deploy Azure AD Connect.
Which two features are available in this environment that can reduce operational overhead for your company’s help desk? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Azure AD Privileged Identity Management policies
B. access reviews
C. self-service password reset
D. Microsoft Cloud App Security Conditional Access App Control
E. password writeback

A

The question asks for two Azure AD Connect features that reduce help desk operational overhead in a hybrid environment with Azure AD Premium P1 licenses. Let’s examine each option:

A. Azure AD Privileged Identity Management policies:

Impact on Help Desk: Azure AD Privileged Identity Management (PIM) focuses on managing, controlling, and monitoring privileged access. While PIM enhances security and governance, it doesn’t directly reduce the day-to-day operational overhead of a general help desk. PIM is more concerned with managing administrative roles and access, not general user issues like password resets or basic access problems. Therefore, PIM is not a primary feature to reduce general help desk overhead.

B. Access reviews:

Impact on Help Desk: Access reviews are used to periodically review and certify user access to groups, applications, and roles. Access reviews are a governance and security feature, ensuring that users have appropriate access over time. While important for security and compliance, they don’t directly reduce the immediate workload of a help desk in terms of handling user support tickets. Access reviews are more of a proactive governance activity, not a reactive help desk task reducer.

C. Self-service password reset (SSPR):

Impact on Help Desk: Self-service password reset (SSPR) is a feature that allows users to reset their own passwords without contacting the help desk. This is a classic and highly effective way to reduce help desk call volume related to password resets. With Azure AD Premium P1 licenses, SSPR is available. In a hybrid environment connected by Azure AD Connect, SSPR can be configured to write password changes back to the on-premises Active Directory. This feature directly and significantly reduces the help desk workload related to password resets.

D. Microsoft Cloud App Security Conditional Access App Control:

Impact on Help Desk: Microsoft Cloud App Security (MCAS) Conditional Access App Control provides real-time monitoring and control over user access and actions within cloud applications. It’s focused on enhancing security and data loss prevention in cloud applications. While MCAS is valuable for security, it doesn’t directly reduce the help desk’s operational overhead for basic user support tasks. It’s more about advanced security and session monitoring, not help desk efficiency in handling common user issues.

E. Password writeback:

Impact on Help Desk: Password writeback is a feature of Azure AD Connect that allows password changes initiated in Azure AD (e.g., through SSPR or by administrators in Azure AD) to be written back to the on-premises Active Directory. Password writeback is essential for making SSPR effective in a hybrid environment. Without password writeback, if a user resets their password in Azure AD, the change would not propagate to the on-premises Active Directory, potentially causing inconsistencies and requiring help desk intervention. By enabling password writeback, SSPR becomes a complete self-service solution for password resets in a hybrid environment, directly reducing help desk password reset requests.

Conclusion:

The two features that most directly reduce operational overhead for a help desk in this scenario are Self-service password reset (SSPR) and password writeback. SSPR empowers users to resolve their password issues independently, and password writeback ensures that SSPR works seamlessly in the hybrid environment, synchronizing password changes to the on-premises AD, thus minimizing help desk involvement in password-related tickets.

Final Answer: C and E

30
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains several administrative user accounts.
You need to recommend a solution to identify which administrative user accounts have NOT signed in during the previous 30 days.
Which service should you include in the recommendation?
A. Azure AD Privileged Identity Management (PIM)
B. Azure AD Identity Protection
C. Azure Advisor
D. Azure Activity Log

A

To identify administrative user accounts that have not signed in during the previous 30 days in Azure AD, you need a service that logs and allows analysis of user sign-in activity. Let’s evaluate each option:

A. Azure AD Privileged Identity Management (PIM):
While PIM manages and monitors privileged access, its primary function is not to generate reports on user sign-in inactivity. PIM focuses on just-in-time access, role activation, and access reviews for administrative roles. While you can get audit logs from PIM related to role activations, it’s not the best tool for identifying general sign-in inactivity for all administrative accounts.

B. Azure AD Identity Protection:
Azure AD Identity Protection focuses on detecting, preventing, and responding to identity-based risks. It analyzes sign-in patterns to identify suspicious activities and security threats. While it monitors sign-ins, its purpose is security risk assessment, not generating reports on inactive user accounts.

C. Azure Advisor:
Azure Advisor provides recommendations for optimizing Azure resources for cost, performance, reliability, and security. It does not monitor or report on user sign-in activity in Azure AD. It’s irrelevant to the task of identifying inactive administrative accounts.

D. Azure Activity Log:
Azure Activity Log records subscription-level control-plane events. The detail needed here, per-user sign-in activity (including last sign-in time), lives in the Azure AD sign-in logs, which are part of Azure AD reporting and can be routed to a Log Analytics workspace (Azure Monitor) via diagnostic settings for querying.

While the Activity Log itself does not contain the granular sign-in details needed for this report, it is often used as a general term for Azure's logging and monitoring capabilities, which include access to the Azure AD sign-in logs through Azure Monitor.

To directly address the requirement, you would typically use Azure Monitor Logs (Log Analytics) and query the Azure AD Sign-in logs. You could write a query to:

Identify all administrative users (e.g., by checking group membership or assigned administrative roles).

Query the Sign-in logs for the last 30 days for these users.

Filter out users who have at least one successful sign-in within the last 30 days.

The remaining users would be those who have NOT signed in during the previous 30 days.
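Whichever listed service you choose, the data behind the report is the Azure AD sign-in activity. A hedged sketch of the same check with the Microsoft Graph PowerShell module is shown below; it assumes AuditLog.Read.All consent, an Azure AD Premium license for sign-in activity data, and uses the Global Administrator role as a stand-in for "administrative users".

# Sketch: list Global Administrators whose last interactive sign-in is older than 30 days.
Connect-MgGraph -Scopes "AuditLog.Read.All", "Directory.Read.All", "RoleManagement.Read.Directory"
$role    = Get-MgDirectoryRole -Filter "displayName eq 'Global Administrator'"
$members = Get-MgDirectoryRoleMember -DirectoryRoleId $role.Id
$cutoff  = (Get-Date).AddDays(-30)

foreach ($m in $members) {
    $user = Get-MgUser -UserId $m.Id -Property "displayName,signInActivity"
    if (-not $user.SignInActivity.LastSignInDateTime -or
        $user.SignInActivity.LastSignInDateTime -lt $cutoff) {
        $user.DisplayName
    }
}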

Given the options and the likely intention of the question, Azure Activity Log (D) is the closest and most reasonable answer: it represents Azure's logging and monitoring pipeline, and in practice you would route the Azure AD sign-in logs into Log Analytics and query them to produce the inactivity report.

Final Answer: D. Azure Activity Log

31
Q
A
32
Q
A
33
Q
A