test13 Flashcards

1
Q

You have an Azure web app named Contoso2023.
You add a deployment slot to Contoso2023 named Slot1.
You need to be able to perform a deployment slot swap with preview.
What should you modify?
application settings for Contoso2023
general settings for Contoso2023
application settings for Contoso2023-Slot1
general settings for Contoso2023-Slot1
which one is correct? why correct?

A

The correct answer is: application settings for Contoso2023-Slot1
Here’s why:
For deployment slot swap with preview to work properly, you need to configure the application settings in the SOURCE slot (Slot1 in this case) because:
The swap with preview operation starts from the source slot
Configuration changes need to be made where the new code/changes are being tested
The preview swap allows you to validate how the app will behave with production settings
The process works as follows:
First, you configure slot-specific settings in Slot1
When you initiate a swap with preview:
The production slot’s settings are applied to the staging slot (Slot1)
This allows you to test how your application will behave with production settings
You can validate the application before completing the swap
Why other options are incorrect:
General settings for Contoso2023:
General settings don’t control swap behavior
These settings are more about platform configuration
Application settings for Contoso2023 (production):
Changes in production settings won’t enable swap with preview
Production settings are already in place
General settings for Contoso2023-Slot1:
General settings don’t control swap behavior
These are basic configuration settings unrelated to swap preview functionality
Key points about slot swap with preview:
It’s a two-phase operation
Allows testing with production configuration
Helps prevent unexpected behavior after swap
Configuration must be done in the source slot
Enables validation before completing the swap
Therefore, modifying the application settings in Slot1 (Contoso2023-Slot1) is the correct approach to enable and configure swap with preview functionality.
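The phase-1 behavior described above can be sketched in Python (the slot names, setting keys, and values here are hypothetical; this models the documented swap behavior, not an Azure API):

```python
def swap_with_preview_phase1(production, source, sticky_keys):
    """Phase 1 of swap with preview: the target (production) slot's
    swappable settings are applied to the source slot, while
    slot-specific ("sticky") settings stay with their slot."""
    previewed = dict(source)
    for key, value in production.items():
        if key not in sticky_keys:
            previewed[key] = value
    return previewed

# Hypothetical settings for Contoso2023 (production) and Slot1.
production = {"DB_CONN": "prod-db", "SLOT_NAME": "production"}
slot1 = {"DB_CONN": "test-db", "SLOT_NAME": "slot1"}

# SLOT_NAME is marked as a slot setting in Slot1, so it does not swap.
preview = swap_with_preview_phase1(production, slot1, sticky_keys={"SLOT_NAME"})
print(preview)  # {'DB_CONN': 'prod-db', 'SLOT_NAME': 'slot1'}
```

Slot1 now runs against the production connection string, which is exactly what the preview phase lets you validate before completing the swap.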

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

You have an Azure web app named WebApp1.
You discover that backup options are unavailable for WebApp1.
You need to back up WebApp1.
What should you do first?
Modify the platform settings of WebApp1.
Modify the Application settings of WebApp1.
Scale up the app service plan.
Scale out the app service plan.
which one is correct? why correct?

A

The correct answer is: Scale up the app service plan.
Here’s why:
Azure Web App Backup Requirements:
Backup functionality is only available in Standard, Premium, and Isolated pricing tiers
Basic and Free tiers do not support backup features
Scaling up means moving to a higher tier (e.g., from Basic to Standard)
Why scaling up is the first step:
It changes the service tier to one that supports backups
Once in a supported tier, backup options become available
This is a prerequisite for any backup configuration
Why other options are incorrect:
Modify the platform settings of WebApp1:
Platform settings don’t control backup availability
These settings are for runtime configurations
Won’t enable backup features if you’re in an unsupported tier
Modify the Application settings of WebApp1:
Application settings are for app-specific configurations
Don’t affect the availability of backup features
Can’t enable backups through these settings
Scale out the app service plan:
Scaling out only adds more instances
Doesn’t change the tier/pricing level
Won’t enable backup features
The correct sequence would be:
Scale up to at least Standard tier
Configure backup settings
Set up backup schedule and retention
Configure storage account for backups
Therefore, scaling up the app service plan is the first and necessary step to enable backup functionality for WebApp1.
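The tier prerequisite can be illustrated with a small sketch (the tier table mirrors the support matrix described above; the function name is illustrative):

```python
# Hypothetical tier table based on the rule above: backups are
# available from the Standard tier upward.
BACKUP_TIERS = {"Free": False, "Shared": False, "Basic": False,
                "Standard": True, "Premium": True, "Isolated": True}

def first_step_to_enable_backup(current_tier):
    """Return the action needed before backups can be configured."""
    if BACKUP_TIERS[current_tier]:
        return "configure backup"
    return "scale up to Standard or higher"

print(first_step_to_enable_backup("Basic"))     # scale up to Standard or higher
print(first_step_to_enable_backup("Standard"))  # configure backup
```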

3
Q

You have an Azure web service named Contoso2022 that runs in the Standard App Service plan. Contoso2022 has five deployment slots in use.
A user named User1 has the Contributor role for Contoso2022.
You need to ensure that User1 can create additional deployment slots to Contoso2022.
What should you do?
Assign User1 the Owner role for Contoso2022.
Assign User1 the Website Contributor role for Contoso2022.
Scale up the Contoso2022 App Service plan.
Scale out the Contoso2022 App Service plan.
which one is correct? why correct?

A

The correct answer is: Scale up the Contoso2022 App Service plan.
Here’s why:
Deployment Slot Limitations:
Standard (S1) tier allows up to 5 deployment slots
Premium (P1v2/P1v3) and higher tiers allow up to 20 deployment slots
Since Contoso2022 already has 5 slots in use, it has reached the Standard tier limit
Why scaling up is the solution:
Scaling up means moving to a higher tier (e.g., from Standard to Premium)
Premium tier provides more deployment slots (up to 20)
This directly addresses the limitation preventing new slot creation
Why other options are incorrect:
Assign User1 the Owner role:
User1 already has Contributor role which is sufficient for slot management
The issue is not permissions-related
Higher role won’t overcome slot limit
Assign User1 the Website Contributor role:
This role doesn’t provide additional capabilities for slot creation
The limitation is tier-based, not permission-based
Website Contributor role has similar permissions to Contributor
Scale out the Contoso2022 App Service plan:
Scaling out only adds more instances
Doesn’t change the tier limitations
Won’t increase the number of available deployment slots
Key points:
Standard tier: 5 deployment slots maximum
Premium tier: 20 deployment slots maximum
User permissions are not the limiting factor
Scaling up increases service capabilities
Therefore, scaling up the App Service plan is the correct solution to allow User1 to create additional deployment slots beyond the current limit of 5.
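A minimal sketch of the slot-limit check, assuming the per-tier limits listed above:

```python
# Hypothetical slot limits mirroring the tiers discussed above
# (Basic does not support deployment slots at all).
MAX_SLOTS = {"Basic": 0, "Standard": 5, "Premium": 20, "Isolated": 20}

def can_add_slot(tier, slots_in_use):
    """True if another deployment slot can be created in this tier."""
    return slots_in_use < MAX_SLOTS[tier]

# Contoso2022: Standard tier with 5 slots already in use.
print(can_add_slot("Standard", 5))  # False: the Standard limit is reached
print(can_add_slot("Premium", 5))   # True: scaling up makes room
```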

4
Q

You plan to deploy an Azure web app that will have the following settings:
* Name:WebApp1
* Publish: Docker container
* Operating system: Windows
* Region: West US
* Windows Plan (West US): ASP-RG1-8bcf

You need to ensure that WebApp1 uses the ASP.NET v4.7 runtime stack.
Which setting should you modify?

○ Region
○ Operating system
○ Publish
○ Windows Plan
which one is correct? why correct?

A

The correct answer is: Publish
Here’s why:
Current Configuration Issue:
The web app is set to publish as a Docker container
When using Docker container deployment, you can’t directly specify the runtime stack
Docker containers come with their own runtime environment
Why changing Publish is the solution:
Change Publish from “Docker container” to “Code”
When publishing as “Code”, you can:
Select specific runtime stacks
Choose ASP.NET v4.7 as the runtime
Configure framework-specific settings
Why other options are incorrect:
Region:
Region selection doesn’t affect runtime stack availability
All supported runtimes are available in West US
Changing regions won’t enable runtime stack selection
Operating system:
Windows is already the correct choice for ASP.NET v4.7
Changing OS won’t enable runtime stack selection while using Docker
Windows Plan:
App Service Plan doesn’t determine runtime stack options
This only affects resources available to the app
Changing plan won’t enable runtime stack selection
The correct sequence would be:
Change Publish type from “Docker container” to “Code”
Select ASP.NET v4.7 as the runtime stack
Configure other application settings as needed
Therefore, modifying the Publish setting is the correct solution to enable ASP.NET v4.7 runtime stack for WebApp1.

5
Q

You plan to create an Azure container instance named container1 that will use a Docker image named Image1.
You need to ensure that container1 has persistent storage.
Which Azure resources should you deploy for the persistent storage?
○ an Azure container registry only
○ an Azure Storage account and a file share
○ an Azure Storage account and a blob container
○ an Azure SQL database only
which one is correct? why correct?

A

The correct answer is: an Azure Storage account and a file share
Here’s why:
Azure Container Instance (ACI) Persistent Storage Requirements:
ACI supports Azure Files (file shares) for persistent storage
File shares provide shared storage that can be mounted to containers
The storage must persist independently of the container lifecycle
Why Azure Storage account and file share is the correct solution:
Azure Storage account hosts the file share
Azure Files provides SMB protocol access
Containers can mount the file share directly
Data persists even if container is deleted/recreated
Supports concurrent access from multiple containers
Why other options are incorrect:
Azure container registry only:
Container registry stores and manages container images
Doesn’t provide persistent storage for running containers
Used for image management, not data persistence
Azure Storage account and a blob container:
Blob storage isn’t directly mountable to containers
Not suitable for file system-like operations
Doesn’t provide the same file system semantics as file shares
Azure SQL database only:
This is a relational database service
Not designed for container file system storage
Cannot be mounted as persistent storage
Implementation steps:
Create an Azure Storage account
Create a file share within the storage account
Configure the container instance to mount the file share
Use storage account key for authentication
Therefore, deploying an Azure Storage account with a file share is the correct solution for providing persistent storage to container1.
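The implementation steps above map onto the volume section of a container group definition. Below is a sketch of that structure as a Python dict (all resource names and the key placeholder are hypothetical, and the exact deployment schema should be taken from the official ARM reference):

```python
# Sketch of the file-share mount in an ACI container group definition.
# The storage account name and key are placeholders.
container_group = {
    "containers": [{
        "name": "container1",
        "image": "Image1",
        "volumeMounts": [{"name": "data", "mountPath": "/mnt/data"}],
    }],
    "volumes": [{
        "name": "data",
        "azureFile": {
            "shareName": "share1",
            "storageAccountName": "mystorageacct",      # placeholder
            "storageAccountKey": "<storage-account-key>",
        },
    }],
}

# Every volumeMount must reference a declared volume by name.
mount_names = {m["name"] for c in container_group["containers"]
               for m in c["volumeMounts"]}
volume_names = {v["name"] for v in container_group["volumes"]}
print(mount_names <= volume_names)  # True
```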

6
Q

You have an Azure subscription that contains the following resources:
* a storage account named storage123
* a container instance named AppContainer
The subscription contains a virtual network named VirtualNet4 that has the following subnets:
* SubnetA- storage123 is connected to SubnetA.
* SubnetB- AppContainer is connected to SubnetB.
* SubnetC- No resources.
You plan to deploy an Azure container instance named container5 to VirtualNet4.
To which subnets can you deploy container5?
SubnetB only
SubnetC only
SubnetB and SubnetC only
SubnetA, SubnetB, and SubnetC
which one is correct? why correct?

A

The correct answer is: SubnetC only
Here’s why:
Azure Container Instance (ACI) Network Deployment Rules:
Only one container group can be deployed per subnet
Subnets used for container groups cannot contain other resource types
Each subnet can only be used by one container instance
Analysis of each subnet:
SubnetA:
Already contains storage123 (storage account)
Cannot be used for container instances when other resources exist
Not available for container5
SubnetB:
Already contains AppContainer (existing container instance)
One subnet can only host one container group
Not available for container5
SubnetC:
Currently empty
No other resources or container instances
Available for container5 deployment
Why other options are incorrect:
“SubnetB only”:
SubnetB already has a container instance
Cannot deploy another container to same subnet
“SubnetB and SubnetC only”:
SubnetB is not available (has existing container)
Only SubnetC is actually available
“SubnetA, SubnetB, and SubnetC”:
SubnetA has storage account
SubnetB has existing container
Only SubnetC is available
Key rules for ACI networking:
One container group per subnet
Subnet must be dedicated to container instances
Cannot mix with other resource types
Must be an empty subnet for new container deployment
Therefore, container5 can only be deployed to SubnetC as it’s the only subnet that meets all requirements.
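The selection rule above can be expressed as a short sketch (the resource lists come from the question; the rule encoded is this card's requirement that the target subnet be empty):

```python
# Each subnet mapped to the resources it already contains.
subnets = {
    "SubnetA": ["storage123"],    # storage account
    "SubnetB": ["AppContainer"],  # existing container instance
    "SubnetC": [],                # empty
}

# Per the rule above, a new container group needs an empty subnet.
eligible = [name for name, resources in subnets.items() if not resources]
print(eligible)  # ['SubnetC']
```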

7
Q

You have a Docker image named Image1 that contains a corporate app.
You need to deploy Image1 to Azure and make the app accessible to users.
Which two Azure services should you deploy? Each correct answer presents part of the solution.
Azure App service
a virtual machine
Azure Container Registry
a virtual machine scale set
which one is correct? why correct?

A

The correct answers are:
Azure App Service
Azure Container Registry
Here’s why these are the correct answers:
Azure Container Registry (ACR):
Required to store and manage the Docker image (Image1)
Provides a private, secure location for container images
Integrates seamlessly with other Azure services
Enables version control of container images
Necessary for storing Image1 before deployment
Azure App Service:
Provides a managed platform for hosting containerized applications
Supports Docker container deployment
Offers built-in auto-scaling and load balancing
Provides easy integration with ACR
Handles the infrastructure management
Makes the app accessible to users via HTTP/HTTPS endpoints
Why other options are incorrect:
Virtual Machine:
Requires manual container management
More complex to maintain
Requires more administrative overhead
Not a managed service for containers
Overkill for running a containerized application
Virtual Machine Scale Set:
More complex than necessary
Requires manual container orchestration
Better suited for complex infrastructure scenarios
Requires more management overhead
Not a managed container service
The deployment process would typically involve:
Create an Azure Container Registry
Push Image1 to the registry
Create an Azure App Service
Configure App Service to pull and run Image1 from ACR
Therefore, Azure Container Registry and Azure App Service together provide the complete solution for deploying and hosting the containerized application.

8
Q

You have an Azure Storage account named storage1.
You create the following encryption scopes for storage1:
* Scope1 that has an encryption type of Microsoft-managed keys
* Scope2 that has an encryption type of Customer-managed keys
Which storage services can be used with Scope2?
blob only
file only
blob and file only
table and queue only
blob, file, table, and queue
which one is correct? why correct?

A

The correct answer is: blob and file only
Here’s why:
Encryption Scope Support:
Encryption scopes with customer-managed keys are supported only for:
Azure Blob Storage
Azure Files
Not supported for:
Azure Queue Storage
Azure Table Storage
Customer-managed keys (Scope2) capabilities:
Can be used to encrypt blob data
Can be used to encrypt file share data
Provides more control over encryption keys
Allows key rotation and management
Enables bring-your-own-key (BYOK) scenarios
Why other options are incorrect:
“blob only”:
While blob storage supports encryption scopes
Files also support customer-managed keys
Too limiting
“file only”:
While file storage supports encryption scopes
Blob storage also supports customer-managed keys
Too limiting
“table and queue only”:
Table storage doesn’t support encryption scopes
Queue storage doesn’t support encryption scopes
Completely incorrect
“blob, file, table, and queue”:
Table and Queue services don’t support encryption scopes
Too inclusive
Key points about encryption scopes:
Available for blob and file services
Can use Microsoft-managed or customer-managed keys
Provide granular encryption control
Allow different encryption settings within same storage account
Support key rotation and management
Therefore, only blob and file services can be used with Scope2 (the customer-managed keys encryption scope).
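A tiny sketch encoding the support matrix stated above (per this card's answer):

```python
# Which storage services an encryption scope can apply to,
# per the support matrix above.
SCOPE_SUPPORT = {"blob": True, "file": True, "table": False, "queue": False}

def services_for_scope():
    """Services usable with an encryption scope such as Scope2."""
    return sorted(s for s, ok in SCOPE_SUPPORT.items() if ok)

print(services_for_scope())  # ['blob', 'file']
```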

9
Q

You have an Azure Storage account named storage1 that is configured to use the Hot access tier.
Storage1 has a container named container1 and the lifecycle management rule with following settings:
* Move blob to cool storage: Selected
* Days after last modification: 3
* Move blob to archive storage: Selected
* Days after last modification: 5
On December 1, you create a file named File1 in container1.
On December 10, you rehydrate File1 and move the file to the Hot access tier.
When will File1 be moved to archive storage?
within 24 hours
on December 15
on December 18
on January 1
Which one is correct? why correct?

A

The correct answer is: December 15
Here’s why:
When you rehydrate File1 and move it back to Hot tier on December 10, this action counts as a modification of the blob. This resets the “last modification” timestamp to December 10.
According to the lifecycle management rules:
Files move to Cool tier after 3 days from last modification
Files move to Archive tier after 5 days from last modification
Starting from December 10 (the new last modification date):
The file will move to Cool tier on December 13 (3 days later)
The file will move to Archive tier on December 15 (5 days later)
The other options are incorrect because:
“within 24 hours” is too soon and doesn’t follow the lifecycle rules
“December 18” is too late as it would be 8 days after modification
“January 1” is much too late and doesn’t align with the lifecycle rules
Important to note:
Lifecycle management rules are based on the last modification time
When you rehydrate and change the access tier, it counts as a modification
The countdown for lifecycle rules restarts from the last modification date
The rules are sequential - the blob must first move to Cool before moving to Archive
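The date arithmetic above can be checked directly (the year is arbitrary, since the question gives only month and day):

```python
from datetime import date, timedelta

# The tier change on December 10 resets the last-modification date,
# so both lifecycle countdowns restart from there.
last_modified = date(2023, 12, 10)
cool_date = last_modified + timedelta(days=3)     # move to Cool
archive_date = last_modified + timedelta(days=5)  # move to Archive

print(cool_date)     # 2023-12-13
print(archive_date)  # 2023-12-15
```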

10
Q

You have an Azure Storage account named storage1.
You need to provide time-limited access to storage1.
What should you use?
an access key
a role assignment
an access policy
a shared access signature (SAS)
Which one is correct? why correct?

A

The correct answer is: a shared access signature (SAS)
Here’s why:
Shared Access Signature (SAS) is the best solution for providing time-limited access to Azure Storage because:
It provides secure, delegated access to resources in your storage account
You can specify an expiry time/date for the access
You can define specific permissions (read, write, delete, etc.)
You can restrict access to specific IP addresses, protocols, and services
You can revoke access at any time
Why the other options are not optimal:
Access Key:
Provides full access to the storage account
Cannot be time-limited
Harder to revoke without impacting other applications
Sharing access keys is considered a security risk
Role Assignment:
More suitable for long-term access management
Requires Azure AD integration
Cannot be easily time-limited
More complex to set up for temporary access
Access Policy:
Is actually a component that can be used with SAS
Cannot provide time-limited access on its own
Used to define permissions that can be referenced by a SAS
Key benefits of using SAS:
Granular control over what resources can be accessed
Control over what operations are allowed
Control over when access starts and expires
Can be easily revoked if needed
Can be associated with stored access policies for additional control
SAS is specifically designed for scenarios requiring temporary, limited access to storage resources, making it the ideal choice for this requirement.
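As a conceptual illustration only, a time-limited signed token can be modeled as an HMAC over a resource path and an expiry time. This is not the real SAS format or signing algorithm, just a sketch of the idea of delegated, expiring access:

```python
import hashlib
import hmac

def make_token(key: bytes, resource: str, expiry_epoch: int) -> str:
    """Sign a resource path plus expiry with the account secret."""
    payload = f"{resource}|{expiry_epoch}"
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def is_valid(key: bytes, token: str, now_epoch: int) -> bool:
    """A token is valid only if the signature checks out AND it has
    not expired - the time limit is baked into the signed payload."""
    resource, expiry, sig = token.rsplit("|", 2)
    expected = hmac.new(key, f"{resource}|{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now_epoch < int(expiry)

key = b"account-key"  # placeholder secret
token = make_token(key, "container1/file1.txt", expiry_epoch=1_700_000_000)
print(is_valid(key, token, now_epoch=1_699_999_999))  # True: not expired
print(is_valid(key, token, now_epoch=1_700_000_001))  # False: expired
```

Tampering with the expiry in the token also invalidates it, because the expiry is part of the signed payload; the real SAS mechanism relies on the same property.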

11
Q

You have an Azure Storage account named storage1 that contains a file share named share1.
You also have an on-premises Active Directory domain that contains a user named User1.
You need to ensure that User1 can access share1 by using the SMB protocol.
What should you do?
Provide User1 with the shared access signature (SAS) for storage1.
Configure the Access control (IAM) settings of storage1.
Configure the Firewalls and virtual networks settings of storage1.
Provide User1 with the access key for storage1.
Which one is correct? why correct?

A

The correct answer is: Configure the Access control (IAM) settings of storage1
Here’s why:
For SMB access to Azure File Shares with Active Directory authentication:
You need to configure Azure AD Domain Services or AD authentication
IAM (Identity and Access Management) settings need to be configured to allow Active Directory users to access the file share
This provides seamless integration with existing Active Directory credentials
Users can access the file share using their AD credentials without additional authentication
Why the other options are incorrect:
Shared Access Signature (SAS):
SAS is primarily used for REST-based access
Not suitable for SMB protocol authentication
Doesn’t integrate with Active Directory authentication
Would require manual token management
Firewalls and virtual networks settings:
This only controls network-level access
Doesn’t handle authentication
While important for security, it doesn’t solve the authentication requirement
Access key:
Access keys are for administrative access
Not suitable for end-user authentication
Sharing access keys is a security risk
Doesn’t integrate with Active Directory
Key steps to implement the solution:
Configure Azure AD Domain Services or AD authentication for the storage account
Set up appropriate IAM roles and assignments
Ensure the user has the correct RBAC permissions
Configure the necessary network connectivity between on-premises and Azure
This approach provides:
Seamless integration with existing AD credentials
Secure access using SMB protocol
Proper authentication and authorization
Maintainable and scalable access control

12
Q

You have an Azure virtual machine named VM1 that automatically registers in an Azure private DNS zone named contoso.com.
VM1 hosts a website named Site1.
You need to ensure that Site1 can be resolved by using a URL of http://www.contoso.com. The solution must ensure that if the IP
address of VM1 changes, www.contoso.com will resolve to the changed IP address.
Which DNS record type should you add to contoso.com?
○ A
○ SRV
○ TXT
○ AAAA
○ CNAME
Which one is correct? why correct?

A

The correct answer is: CNAME (Canonical Name)
Here’s why:
CNAME is the best choice because:
It creates an alias that points to another DNS name (canonical name)
When VM1’s IP address changes, the CNAME record will automatically resolve to the new IP address
The CNAME record would point www.contoso.com to VM1’s automatically registered DNS name in the private DNS zone
It provides automatic updates when the underlying IP address changes
How it works in this scenario:
VM1 automatically registers its DNS record in contoso.com
You create a CNAME record where:
www.contoso.com (alias) points to VM1’s DNS name (canonical name)
When VM1’s IP changes, its DNS record updates automatically
The CNAME record follows this change without requiring manual updates
Why other options are incorrect:
A Record:
Maps directly to an IP address
Would need manual updates when VM1’s IP changes
Doesn’t provide the automatic update capability needed
SRV Record:
Used for service location
Typically for specific services and ports
Not appropriate for basic web hosting scenarios
TXT Record:
Used for text information
Cannot be used for DNS resolution
Typically used for domain verification or SPF records
AAAA Record:
Used for IPv6 addresses only
Like A records, would need manual updates
Doesn’t provide automatic resolution
Using a CNAME record is the most efficient solution because:
It maintains a dynamic link to the VM’s DNS name
Automatically handles IP address changes
Requires minimal maintenance
Provides reliable name resolution
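A toy resolver shows why the alias keeps working when the underlying A record changes (the record names and IPs below are hypothetical):

```python
# Toy DNS table: www is a CNAME alias for VM1's auto-registered record.
records = {
    "vm1.contoso.com": ("A", "10.0.0.4"),
    "www.contoso.com": ("CNAME", "vm1.contoso.com"),
}

def resolve(name):
    """Follow CNAME aliases until an A record (an IP) is reached."""
    rtype, value = records[name]
    return resolve(value) if rtype == "CNAME" else value

print(resolve("www.contoso.com"))  # 10.0.0.4

# VM1's IP changes; auto-registration updates only the A record.
records["vm1.contoso.com"] = ("A", "10.0.0.9")
print(resolve("www.contoso.com"))  # 10.0.0.9 - the CNAME never changed
```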

13
Q

A company named Contoso, Ltd. has an Azure subscription that contains an Azure Active Directory (Azure AD) tenant named
contoso.com. The Azure subscription contains the following virtual networks:
* VNET1- deployed in the East US location
* VNET2- deployed in the East US location
* VNET3- deployed in the West US location
Contoso purchases a company named A. Datum Corporation. A. Datum has an Azure subscription that contains an Azure AD
tenant named adatum.com. Adatum.com contains the following virtual networks:
* VNETA- deployed in the East US location
* VNETB- deployed in the West US location
Which virtual networks can you peer to VNET1?
○ VNET2 only
○ VNET2 and VNET3 only
○ VNET2 and VNETA only
○ VNET2, VNET3, and VNETA only
○ VNET2, VNET3, VNETA, and VNETB
Which one is correct? why correct?

A

The correct answer is: VNET2, VNET3, VNETA, and VNETB
Here’s why:
Azure Virtual Network Peering Capabilities:
VNet peering enables you to connect virtual networks seamlessly
Peering can be established between:
VNets in the same region
VNets in different regions (Global VNet peering)
VNets across different subscriptions
VNets across different Azure AD tenants
In this scenario:
VNET2 can be peered because:
It’s in the same subscription
It’s in the same region (East US)
VNET3 can be peered because:
It’s in the same subscription
Global VNet peering supports cross-region connectivity
VNETA can be peered because:
Cross-subscription peering is supported
Cross-tenant peering is supported
It’s in the same region (East US)
VNETB can be peered because:
Cross-subscription peering is supported
Cross-tenant peering is supported
Global VNet peering supports cross-region connectivity
Key Points:
Location (region) is not a limitation thanks to Global VNet peering
Different subscriptions can be connected through peering
Different Azure AD tenants can be connected through peering
All virtual networks mentioned can be peered with VNET1
Requirements for VNet Peering:
Appropriate permissions in both subscriptions
Non-overlapping IP address spaces
Network connectivity between tenants
Therefore, VNET1 can be peered with all other virtual networks mentioned (VNET2, VNET3, VNETA, and VNETB), making this the correct answer.
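One of the prerequisites listed above, non-overlapping address spaces, can be checked with Python's ipaddress module (the address spaces below are hypothetical, since the question does not give any):

```python
import ipaddress

# Hypothetical, non-overlapping address spaces for the five VNets.
vnets = {
    "VNET1": "10.1.0.0/16", "VNET2": "10.2.0.0/16", "VNET3": "10.3.0.0/16",
    "VNETA": "10.4.0.0/16", "VNETB": "10.5.0.0/16",
}

def peerable_with(name):
    """VNets whose address space does not overlap the given VNet's."""
    net = ipaddress.ip_network(vnets[name])
    return [other for other, space in vnets.items()
            if other != name and not net.overlaps(ipaddress.ip_network(space))]

print(peerable_with("VNET1"))  # ['VNET2', 'VNET3', 'VNETA', 'VNETB']
```

Region, subscription, and tenant impose no restriction here, so with these address spaces all four networks pass the remaining check.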

14
Q

You have an Azure virtual machine named VM1 that connects to a virtual network named VNET1.
You create a private DNS zone named contoso.com and add an A record named host1 to the zone.
You need to ensure that VM1 can resolve host1.contoso.com.
What should you do?
Modify the Access control (IAM) settings of the zone.
From the zone, add a virtual network link.
From the properties of the network interface, modify the options of the DNS servers.
From the properties of VNET1, modify the options of the DNS servers.
Which one is correct? why correct?

A

The correct answer is: From the zone, add a virtual network link
Here’s why:
Virtual Network Link is the correct solution because:
It creates a connection between the private DNS zone and the virtual network
It enables DNS resolution for resources within the linked virtual network
It’s specifically designed for private DNS resolution within Azure virtual networks
Once linked, VMs in the virtual network can automatically resolve records in the private DNS zone
Implementation steps:
Go to the private DNS zone (contoso.com)
Select “Virtual network links”
Add a new link to VNET1
Enable auto-registration if needed
Why other options are incorrect:
Modifying Access control (IAM) settings:
IAM controls administrative access to the DNS zone
Doesn’t affect DNS resolution
Only manages who can manage the DNS zone
Modifying DNS servers of the network interface:
Not necessary for Azure private DNS resolution
Would be used for custom DNS servers
Could actually interfere with Azure private DNS resolution
Modifying DNS servers of VNET1:
Not required for Azure private DNS zones
Would be used for custom DNS servers
Azure private DNS resolution works with default Azure DNS
Benefits of using virtual network links:
Automatic DNS resolution within the virtual network
No additional configuration needed on VMs
Works with Azure’s built-in DNS infrastructure
Can be configured for auto-registration of VM DNS records
The virtual network link is the essential component that enables private DNS resolution between the DNS zone and resources in the virtual network.

15
Q

You have an Azure virtual network named VNET1 that has an IP address space of 192.168.0.0/16 and the following subnets:
* Subnet1- has an IP address range of 192.168.1.0/24 and is connected to 15 VMs
* Subnet2- has an IP address range of 192.168.2.0/24 and does not have any VMs connected
You need to ensure that you can deploy Azure Firewall to VNET1.
What should you do?
Add a new subnet to VNET1.
Add a service endpoint to Subnet2.
Modify the subnet mask of Subnet2.
Modify the IP address space of VNET1.
Which one is correct? why correct?

A

The correct answer is: Add a new subnet to VNET1
Here’s why:
Azure Firewall requirements:
Azure Firewall must be deployed in a dedicated subnet named “AzureFirewallSubnet”
This is a mandatory naming requirement
The subnet must be created specifically for the Azure Firewall
Minimum subnet size must be /26
Current situation:
VNET1 has IP space of 192.168.0.0/16 (plenty of available space)
Subnet1 (192.168.1.0/24) is in use
Subnet2 (192.168.2.0/24) is empty but not properly named for firewall
Why other options are incorrect:
Add a service endpoint to Subnet2:
Service endpoints are for securing Azure service connections
Doesn’t address the requirement for a dedicated firewall subnet
Wrong solution for firewall deployment
Modify the subnet mask of Subnet2:
Even if modified, the subnet name is still incorrect
Azure Firewall requires specifically named subnet
Changing mask alone doesn’t solve the requirement
Modify the IP address space of VNET1:
Current IP space (192.168.0.0/16) is sufficient
No need to modify as there’s plenty of address space
Wouldn’t solve the subnet requirement
Implementation steps:
Create a new subnet named “AzureFirewallSubnet”
Allocate appropriate address range (minimum /26)
Can use available space within 192.168.0.0/16
Then deploy Azure Firewall to this new subnet
The solution requires adding a new, properly named subnet because:
Azure Firewall has specific subnet naming requirements
Existing subnets can’t be repurposed
There’s sufficient IP space in the VNet
It’s the most straightforward and correct approach
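The address-space math can be verified with the ipaddress module (the /26 range chosen below is an arbitrary free block within VNET1, picked for illustration):

```python
import ipaddress

# Addresses from the question; AzureFirewallSubnet needs at least a /26.
vnet = ipaddress.ip_network("192.168.0.0/16")
existing = [ipaddress.ip_network("192.168.1.0/24"),   # Subnet1
            ipaddress.ip_network("192.168.2.0/24")]   # Subnet2
firewall_subnet = ipaddress.ip_network("192.168.3.0/26")

print(firewall_subnet.subnet_of(vnet))                     # True: fits in VNET1
print(any(firewall_subnet.overlaps(s) for s in existing))  # False: no conflict
print(firewall_subnet.num_addresses >= 64)                 # True: /26 minimum
```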

16
Q

You have a proximity placement group named Proximity1.
You plan to create the following Azure resources:
* a virtual machine named VM1
* a disk named Disk1
* a virtual network named VNET1
* a public IP address named IP1
Which resources can you place in Proximity1?
○ VM1 only
○ VM1 and Disk1 only
○ Disk1 and IP1 only
○ VNET1, Disk1, and IP1 only
Which one is correct? why correct?

A

The correct answer is: VM1 only
Here’s why:
Proximity Placement Groups (PPG) characteristics:
PPGs are designed specifically for IaaS compute resources
They ensure Virtual Machines are physically placed as close to each other as possible
Only compute resources can be placed in a proximity placement group
Currently, PPGs only support Virtual Machines and VM Scale Sets
For each resource type:
VM1 (Virtual Machine):
Can be placed in a PPG
This is the primary use case for PPGs
Helps reduce network latency between VMs
Disk1:
Cannot be directly placed in a PPG
Managed disks follow their associated VMs
Not a compute resource
VNET1:
Cannot be placed in a PPG
Network resources are not supported
Virtual networks span across datacenter infrastructure
IP1 (Public IP):
Cannot be placed in a PPG
Network resources are not supported
Not a compute resource
Key points about PPGs:
Used to minimize network latency
Particularly useful for latency-sensitive applications
Common use cases include:
High-performance computing
Gaming
Financial trading applications
ERP applications like SAP
Therefore, among the given resources, only VM1 can be placed in the proximity placement group Proximity1.

15
Q

You have an Azure virtual network named VNET1 and a network security group (NSG) named NSG1. NSG1 has the following inbound security rules:
* Rule1 has a priority of 100 and allows port 3389 on TCP protocol from any source and to any destination
* Rule2 has a priority of 200 and allows ports 80 and 8080 on UDP protocol from any source and to any destination
* Rule3 has a priority of 300 and denies ports 1-2000 on TCP protocol from any source and to any destination
* Rule4 has a priority of 400 and allows ports 50-500 on TCP protocol from VirtualNetwork source and to any destination
* Rule5 has a priority of 500 and allows ports 80 and 443 on TCP protocol from any source and to any destination
You need to allow http and https connections from the internet to VNET1.
What should you change for NSG1?
Priority for Rule4 to 250
Protocol for Rule2 to TCP
Priority for Rule3 to 450
Priority for Rule5 to 250
Which one is correct? why correct?

A

The correct answer is: Priority for Rule5 to 250
Here’s why:
Let’s analyze what’s needed:
HTTP (port 80) and HTTPS (port 443) connections need to be allowed
These connections use TCP protocol
Rule5 already allows the correct ports (80 and 443) and protocol (TCP)
The source is correctly set to “any” (internet access needed)
Current problem:
Rule3 (priority 300) denies TCP ports 1-2000
This includes ports 80 and 443
Rule5 (priority 500) allows ports 80 and 443
But Rule3 takes precedence because it has a higher priority (lower number)
Why changing Rule5’s priority to 250 is correct:
It will make Rule5 execute before Rule3
Allows HTTP/HTTPS traffic before the deny rule takes effect
Maintains the functionality of other rules
Solves the requirement with minimal changes
Why other options are incorrect:
Changing Rule4’s priority to 250:
Only affects internal VirtualNetwork traffic
Doesn’t help with internet access
Wrong rule to modify
Changing Rule2’s protocol to TCP:
Rule2 is for different ports (80 and 8080)
Changing protocol alone doesn’t solve the priority issue
Would create conflicting rules
Changing Rule3’s priority to 450:
Would work technically but is less efficient
Makes more sense to move the allow rule up than the deny rule down
Could potentially affect other rules between 300 and 450
Final rule processing order after change (ascending priority):
Rule1 (100) - Allow RDP
Rule2 (200) - Allow UDP 80/8080
Rule5 (250) - Allow HTTP/HTTPS
Rule3 (300) - Deny TCP 1-2000
Rule4 (400) - Allow TCP 50-500 from VirtualNetwork
This change ensures HTTP/HTTPS traffic is allowed while maintaining the security and functionality of other rules
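NSG evaluation is first-match-wins in ascending priority order, which is why moving Rule5 to 250 works. A minimal simulation of that logic (illustrative only, not Azure's implementation):

```python
# NSG inbound rules after changing Rule5's priority to 250:
# (priority, name, protocol, ports, action)
rules = [
    (100, "Rule1", "TCP", range(3389, 3390), "Allow"),
    (200, "Rule2", "UDP", [80, 8080], "Allow"),
    (250, "Rule5", "TCP", [80, 443], "Allow"),
    (300, "Rule3", "TCP", range(1, 2001), "Deny"),
    (400, "Rule4", "TCP", range(50, 501), "Allow"),
]

def evaluate(protocol, port):
    # Rules are checked in ascending priority order; the first match wins.
    for _, name, proto, ports, action in sorted(rules):
        if proto == protocol and port in ports:
            return name, action
    return None, "Deny"  # implicit default deny

print(evaluate("TCP", 80))   # HTTP now hits Rule5 before Rule3's deny
print(evaluate("TCP", 443))  # HTTPS likewise
```

With Rule5 at priority 500 instead, both lookups would have matched Rule3's deny first.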

16
Q

You have an Azure virtual machine named VM1 that connects to a virtual network named VNET1.
A network security group (NSG) named NSG1 allows connections to VM1 from VNET1 only.
You need to add an inbound security rule to NSG1 that meets the following requirements:
* Allows Azure Backup to back up VM1
* Minimizes the types of allowed inbound traffic
What should you use as the source for the inbound security rule?
any IP address
the IP address of VM1
a service tag for Azure Backup
an application security group
Which one is correct? why correct?

A

The correct answer is: a service tag for Azure Backup
Here’s why:
Service Tags are the best solution because:
They are built-in identifiers that represent a group of IP addresses of Azure services
The AzureBackup service tag specifically represents all IP addresses used by Azure Backup service
Microsoft manages the IP addresses included in the service tag automatically
It provides the most precise and secure way to allow only Azure Backup traffic
Why other options are incorrect:
Any IP address:
Too permissive
Allows unnecessary traffic from all sources
Creates security vulnerabilities
Doesn’t meet the requirement of minimizing allowed traffic
IP address of VM1:
Wrong approach as VM1 is the destination, not the source
Wouldn’t allow Azure Backup service to connect
Doesn’t help with backup functionality
Application security group:
Used for grouping VMs and applying network policies
Cannot represent Azure services like Azure Backup
Not applicable for this scenario
Benefits of using the Azure Backup service tag:
Automatically updated by Microsoft
No need to maintain IP address lists
Follows the principle of least privilege
Simplifies security rule management
Only allows necessary backup traffic
Implementation details:
Source: AzureBackup service tag
Destination: VM1
Required ports: As specified by Azure Backup service
Protocol: TCP
Action: Allow
This solution provides the most secure and maintainable way to allow Azure Backup while minimizing allowed inbound traffic

17
Q

You have an Azure subscription that contains a virtual network named VNET1. VNET1 uses the following address spaces:
* 10.10.1.0/24
* 10.10.2.0/28
VNET1 contains the following subnets:
* Subnet1- has an address space of 10.10.1.0/24
* Subnet2- has an address space of 10.10.2.0/28
To Subnet1, you deploy a virtual machine named VM1 that runs Windows Server 2019. VM1 has Remote Desktop enabled.
VM1 does NOT have a public IP address.
You need to be able to deploy Azure Bastion, and then protect VM1.
What should you do first?
Add a new subnet to VNET1.
Modify the address space of VNET1.
Add a public IP address to VM1.
Add an extension to VM1.
Which one is correct? why correct?

A

The correct answer is: Add a new subnet to VNET1
Here’s why:
Azure Bastion requirements: [1]
Requires a dedicated subnet named “AzureBastionSubnet”
Subnet must have a minimum size of /26 (/27 or smaller not supported)
Must be named exactly “AzureBastionSubnet”
Cannot be used for any other resources
Current situation analysis:
VNET1 has two address spaces:
10.10.1.0/24 (used by Subnet1)
10.10.2.0/28 (used by Subnet2)
Both existing subnets are already allocated
No dedicated subnet exists for Bastion
Why other options are incorrect:
Modify the address space of VNET1:
Not necessary as you can add a new subnet
Current address space could be sufficient
More complex than adding a subnet
Add a public IP address to VM1:
Not required for Azure Bastion
Azure Bastion provides secure access without public IPs on VMs
Would not solve the Bastion deployment requirement
Add an extension to VM1:
Not related to Bastion deployment
Extensions don’t affect network connectivity
Doesn’t address subnet requirements
Implementation steps:
Add a new address space to VNET1 (if needed)
Create new subnet named “AzureBastionSubnet”
Ensure subnet size is at least /26
Then deploy Azure Bastion
This solution is correct because:
Follows Azure Bastion requirements
Enables secure remote access to VM1
Maintains security by not requiring public IP on VM1
Provides the necessary infrastructure for Bastion deployment

18
Q

You have an Azure virtual machine named Computer5 and a Recovery Services vault named Vault5. Computer5 contains the
following data disks:
* DiskA has a size of 512 GB
* DiskB has a size of 30 TB
* DiskC has a size of 26 TB
* DiskD has a size of 2.0 TB
Which data disks can you back up to Vault5?
○ DiskA only
○ DiskB only
○ DiskC only
○ DiskD only
○ DiskA, DiskB, DiskC, and DiskD

A

In Azure, the maximum size for a data disk that can be backed up to a Recovery Services vault is 32 TB. Given the sizes of the disks you mentioned:

DiskA: 512 GB (can be backed up)
DiskB: 30 TB (can be backed up)
DiskC: 26 TB (can be backed up)
DiskD: 2.0 TB (can be backed up)
Conclusion:
All the disks (DiskA, DiskB, DiskC, and DiskD) can be backed up to Vault5 since they are all below the 32 TB limit.

Therefore, the correct answer is: ○ DiskA, DiskB, DiskC, and DiskD
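The check is a simple comparison against the size limit the card cites. A sketch:

```python
MAX_BACKUP_DISK_TB = 32  # per-disk size limit cited above for Azure Backup

disks_tb = {"DiskA": 0.5, "DiskB": 30, "DiskC": 26, "DiskD": 2.0}

# Every disk at or under the limit can be backed up to the vault.
backable = [name for name, size in disks_tb.items() if size <= MAX_BACKUP_DISK_TB]
print(backable)
```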

19
Q

You have the following Azure resources:
* a virtual machine named VM1
* a Recovery Services vault named Vault1
On January 1, you configure backups for VM1 by using the following backup policy:
* Frequency: Daily
* Time: 23:00
* Timezone: (UTC) Coordinated Universal Time
* Retain instant recovery snapshot(s) for: 2 Day(s)
* Retention of daily backup point: 7 Day(s)
* Azure Backup Resource Group: Backup1RG
How many restore point collection recovery points will be stored in Backup1RG on January 10?
○ 2
○ 7
○ 9
○ 10
Which one is correct? why correct?

A

Let’s analyze this step by step to determine how many restore point collections will be stored in Backup1RG on January 10.
The correct answer is 9 restore points. Here’s why:
First, let’s understand what’s being stored:
Daily backups are taken at 23:00 UTC
Instant recovery snapshots are kept for 2 days
Daily backup points are retained for 7 days
By January 10, the following backups will exist:
Instant recovery snapshots from January 8 and 9 (2 points)
Daily backup points from January 3 through January 9 (7 points)
Total count:
2 instant recovery snapshots
7 daily backup points = 9 total restore points
The key points to remember:
The instant recovery snapshots provide quick recovery options for the most recent 2 days
The daily backup points provide longer-term recovery options for up to 7 days
These are maintained separately but both count as restore points
By January 10, you’ll have both types of recovery points adding up to 9 total points
10 is incorrect because January 10’s backup hasn’t occurred yet (it happens at 23:00)
7 is incorrect because it only counts the daily backup points
2 is incorrect because it only counts the instant recovery snapshots
Therefore, 9 is the correct answer as it accounts for both the instant recovery snapshots and the daily backup points that would be present on January 10.
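The counting above can be written out with dates. This follows the card's model exactly (backups run at 23:00 UTC, so January 10's backup has not happened yet, and snapshots and daily points are counted separately); the year is arbitrary.

```python
from datetime import date, timedelta

# Backups taken nightly from January 1 through January 9.
backup_dates = [date(2024, 1, d) for d in range(1, 10)]
today = date(2024, 1, 10)

# Instant recovery snapshots retained 2 days; daily backup points retained 7 days.
snapshots = [d for d in backup_dates if d >= today - timedelta(days=2)]
daily_points = [d for d in backup_dates if d >= today - timedelta(days=7)]

print(len(snapshots), len(daily_points), len(snapshots) + len(daily_points))
```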

20
Q

You have a Recovery Services vault named Recovery1 that includes a backup policy named Policy1.
You back up several Azure virtual machines to Recovery1 by using Policy1.
You need to view the Azure Backup reports.
What should you do first?
Create an Azure Log Analytics workspace.
Modify the Backup Configuration settings of Recovery1.
Configure the Diagnostics settings of Recovery1.
Which one is correct? why correct?

A

The correct answer is: Configure the Diagnostics settings of Recovery1.
Here’s why:
To view Azure Backup Reports, you need to enable and configure diagnostic settings in the Recovery Services vault (in this case, Recovery1). This process sends the backup telemetry (logs and metrics) to a destination where reports can be generated and viewed, such as Azure Monitor Logs, a Log Analytics workspace, or Azure Storage.
Steps to View Backup Reports:
Configure Diagnostic Settings:
Go to the Recovery Services vault (Recovery1).
Navigate to “Diagnostics settings”.
Enable and configure diagnostic settings to send logs to one of the supported destinations:
Log Analytics workspace
Storage Account
Event Hubs
(Optional) Use Log Analytics Workspace:
If you plan to query and analyze logs in detail or generate reports using Log Analytics, you can create and configure an Azure Log Analytics workspace as the destination for diagnostic logs.
Generate Reports:
Once diagnostics settings are configured, you can view the reports through Azure Backup Reports (e.g., using Azure Monitor or Power BI).
Why the Other Options Are Incorrect:
Create an Azure Log Analytics workspace:
While creating a Log Analytics workspace is part of the process for detailed log analysis, it is not the first step. You must first configure the Recovery Services vault to send logs to that workspace.
Modify the Backup Configuration settings of Recovery1:
Backup Configuration settings control backup schedules, retention policies, and other backup-related settings. These settings do not enable reports or telemetry.

21
Q

You have a Recovery Services vault named Vault1 that has soft delete enabled.
Vault1 stores backups for the following Azure resources:
* an Azure virtual machine named VM1
* an Azure file share named share1
* a SQL Server on Azure virtual machine named SQL1
Which backups are protected by soft delete?
○ VM1 only
○ share1 only
○ VM1 and SQL1 only
○ VM1, share1, and SQL1
Which one is correct? why correct?

A

The correct answer is “VM1, share1, and SQL1”
Here’s why all three resources are protected by soft delete in the Recovery Services vault: [1]
Soft Delete Protection Coverage:
Soft delete is a vault-level feature that protects ALL backup data in the vault
When enabled, it protects backup items from accidental or malicious deletions
It applies to all workload types stored in the Recovery Services vault
Protection for different workload types:
Azure VMs (VM1): Protected by soft delete [2]
Azure File Shares (share1): Protected by soft delete
SQL Server on Azure VMs (SQL1): Protected by soft delete
Key aspects of soft delete:
Provides a 14-day recovery window by default
Allows recovery of deleted backup items within the retention period
Works as an additional layer of protection for all supported workload types
Helps prevent accidental or malicious deletion of backup data
Why other options are incorrect:
“VM1 only”:
Incorrect because soft delete protection isn’t limited to just Azure VMs
“share1 only”:
Incorrect because soft delete isn’t exclusive to file shares
“VM1 and SQL1 only”:
Incorrect because it excludes file shares which are also protected
The soft delete feature is designed to protect all backup items in the vault regardless of the workload type, making “VM1, share1, and SQL1” the correct answer. This comprehensive protection helps organizations maintain data security and recover from accidental deletions across all their backed-up resources.

22
Q

You have an Azure Kubernetes Service (AKS) cluster named AKS1 that runs Kubernetes version 1.16.10.
You need to ensure that you can run a Windows Server container in AKS1.
What should you do first?
Add a node pool to AKS1.
Modify the networking settings of AKS1.
Integrate AKS1 and the Azure container registry.
Upgrade AKS1 to a newer version of Kubernetes.
Which one is correct? why correct?

A

The correct answer is “Upgrade AKS1 to a newer version of Kubernetes.”
Here’s why upgrading Kubernetes is the first step needed: [1]
Version Requirements:
Kubernetes version 1.16.10 is too old for supported Windows container deployments [2]
Windows container support in AKS requires a more recent version of Kubernetes
Production support for Windows containers started with Kubernetes 1.14 but has evolved significantly since then
Upgrade Process:
Must upgrade the cluster to a supported version first
After upgrading, you can add Windows node pools
The upgrade is a prerequisite for any Windows container functionality
Why other options are not the first step:
“Add a node pool to AKS1”:
Cannot add Windows node pools until the cluster is on a supported version
This would be the second step after upgrading
“Modify the networking settings of AKS1”:
Networking modifications aren’t a prerequisite for Windows container support
Network configuration would already be in place from the initial cluster setup
“Integrate AKS1 and the Azure container registry”:
While useful for container management, this isn’t required for Windows container support
This is optional and can be done at any time
The correct sequence would be:
Upgrade AKS1 to a supported Kubernetes version
Add a Windows node pool
Deploy Windows containers
Therefore, upgrading AKS1 to a newer version of Kubernetes is the correct first step as it provides the necessary foundation for Windows container support in the cluster.

23
Q

You have an Azure subscription that contains an Azure container registry named Contoso2020.
You plan to create an Azure Kubernetes Service (AKS) cluster named AKS1 that has the following settings:
* Kubernetes version: 1.16.10
* Node pools:1
* Virtual nodes: Disabled
* Authentication method: Service principal
* Network configuration: Basic
You need to ensure that you can integrate AKS1 and Contoso2020.
Which AKS1 settings should you modify?
○ Kubernetes version
○ Virtual nodes
○ Authentication method
○ Network configuration
Which one is correct? why correct?

A

The correct answer is “Authentication method”
Here’s why modifying the authentication method is necessary for integrating AKS with Azure Container Registry (ACR): [1]
Authentication Requirements:
To integrate AKS with ACR securely, proper authentication is required
The service principal authentication method needs to be configured with the correct permissions
Modern best practice is to use managed identities instead of service principals
Why Authentication Method needs to change:
Service principals require manual credential management
Service principals have expiring credentials that need renewal
Managed identity provides a more secure and maintenance-free alternative
Managed identity automatically handles authentication between AKS and ACR
Why other options are not correct:
“Kubernetes version”:
Version 1.16.10, while old, doesn’t prevent ACR integration
ACR integration works across different Kubernetes versions
“Virtual nodes”:
Virtual nodes setting is for scaling purposes
Has no impact on ACR integration capabilities
Can be enabled/disabled independently of ACR integration
“Network configuration”:
Basic networking can work with ACR integration
Network configuration doesn’t directly affect the ability to pull images from ACR
Best practices for AKS-ACR integration:
Use managed identity instead of service principal
Configure proper RBAC roles
Ensure the identity has proper permissions to pull images from ACR
Therefore, changing the authentication method is the correct answer as it’s crucial for establishing secure and maintainable integration between AKS and ACR.

24
Q

You have an Azure Active Directory (Azure AD) tenant that contains a group named Group1 that has the following users:
* User1- Member
* User2- Member
* User3- Guest
User1 is an owner of Group1.
You create an access review that has the following settings:
* Review name: Review1
* Start date: 07/15/2020
* Frequency: One time
* End date: 08/14/2020
* Users to review: Members of a group
* Scope: Everyone
* Group: Group1
* Reviewers: Members (self)
* Auto apply results to resource: Disable
* If reviewers don’t respond: Remove access
The users provide the following responses to the Do you require membership in Group1? access review question:
* User1: No
* User2: Yes
* User3: did not answer
Which users will be members of Group1 on 08/20/2020?
○ User2 only
○ User1 and User2 only
○ User2 and User3 only
○ User1, User2, and User3
Which one is correct? why correct?

A

The correct answer is “User2 only”
Let’s analyze this step by step:
Access Review Settings:
One-time review from 7/15/2020 to 8/14/2020
Scope includes everyone (both members and guests)
Self-review by members
If no response, access is removed
Auto-apply results is disabled
User Responses:
User1: Responded “No” (will lose access)
User2: Responded “Yes” (will retain access)
User3: Did not respond (will lose access per settings)
Key Factors:
“If reviewers don’t respond: Remove access” means User3 will be removed
User1’s “No” response means they will be removed
User2’s “Yes” response means they will retain membership
Even though auto-apply is disabled, the question asks about 8/20/2020, which is after the review period
Results Analysis:
User1: Will be removed (answered “No”)
User2: Will remain (answered “Yes”)
User3: Will be removed (no response)
Therefore, by 8/20/2020:
Only User2 will remain in Group1
User1 will be removed due to their “No” response
User3 will be removed due to no response
The fact that User1 is an owner of Group1 doesn’t affect the access review outcome, and the guest status of User3 doesn’t change how the access review applies to them since the scope was “Everyone.”
This makes “User2 only” the correct answer as they will be the only remaining member after the access review results are applied.
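The outcome can be expressed as a toy model of the review settings (not the Azure AD implementation): "Yes" retains membership, while "No" or no response removes it per the "Remove access" setting.

```python
responses = {"User1": "No", "User2": "Yes", "User3": None}  # None = did not answer

# Only users who answered "Yes" keep their Group1 membership.
remaining = sorted(u for u, answer in responses.items() if answer == "Yes")
print(remaining)
```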

25
Q

You have an Azure subscription that contains a user named User1, a security group named Group1, and a virtual machine named VM1.
You enable a system-assigned managed identity for VM1.
To which identities can you assign the Reports reader role?
User1 only
User1 and Group1 only
User1 and VM1 only
User1, Group1, and VM1
Which one is correct? why correct?

A

The correct answer is “User1, Group1, and VM1”
Here’s why all three identities can be assigned the Reports reader role:
Types of Identities in Azure: [1]
User identities (User1): Individual user accounts in Azure AD
Group identities (Group1): Security groups in Azure AD
Managed identities (VM1): System-assigned or user-assigned identities for Azure resources
Role Assignment Capabilities:
Reports reader role can be assigned to:
User accounts (User1)
Security groups (Group1)
System-assigned managed identities (VM1)
System-assigned Managed Identity:
When enabled on VM1, it creates a service principal
This service principal can be assigned roles
It can be treated as an identity for role assignments
Why other options are incorrect:
“User1 only”:
Too restrictive
Ignores the ability to assign roles to groups and managed identities
“User1 and Group1 only”:
Excludes managed identities
VM1’s system-assigned managed identity can receive role assignments
“User1 and VM1 only”:
Excludes security groups
Groups are valid targets for role assignments
Key points:
All three identity types can be assigned Azure RBAC roles
System-assigned managed identities are valid role assignment targets
Security groups are commonly used for role assignments
Individual users can receive role assignments
Therefore, “User1, Group1, and VM1” is correct because all three identity types can be assigned the Reports reader role in Azure.

26
Q

You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains a user named Ben Smith.
You configure a Password protection for contoso.com that includes the following Custom banned passwords settings:
* Enforce custom list: Yes
* Custom banned password list: Contoso
Which password can be used by Ben Smith?
FgRs01
Cont0s0123
CONTOSO123
Conto123so
Which one is correct? why correct?

A

The correct answer is “FgRs01”
Let’s analyze why this is the correct answer by understanding Azure AD Password Protection rules:
Custom Banned Password Rules:
The custom list contains “Contoso”
Azure AD Password Protection is case-insensitive
It checks for variations and common substitutions
It looks for the banned word anywhere within the password
Analysis of each password option:
“FgRs01”:
Does not contain “contoso” or variations
Meets complexity requirements
Does not contain any banned terms
Therefore, this is allowed
“Cont0s0123”:
Contains “contoso” with number substitutions
‘0’ substituting for ‘o’ is detected
Not allowed due to containing banned word
“CONTOSO123”:
Contains “contoso” directly (case doesn’t matter)
Not allowed due to containing banned word
“Conto123so”:
Contains “contoso” with numbers in between
Password protection can detect split variations
Not allowed due to containing banned word
Key Password Protection Features:
Case-insensitive matching
Common character substitutions are detected
Split word variations are detected
Global banned password list is also applied
Therefore, “FgRs01” is the only password that would be allowed because:
It doesn’t contain the banned word “contoso”
It meets basic password complexity requirements
It doesn’t trigger any of the password protection rules
All other options contain variations of the banned word “contoso” and would be rejected by the password protection system

27
Q

You have the following Azure resources:
* Azure Key Vault named KeyVault1
* Azure App Service named WebApp1
You need to ensure that WebApp1 can access KeyVault1 by using Azure Active Directory (Azure AD) authentication.
Which two settings can be used to configure WebApp1? Each correct answer presents a complete solution.
User assigned managed identity
Application settings
TLS/SSL bindings
App Service Authentication
System assigned managed identity
Which one is correct? why correct?

A

The correct two answers are:
User assigned managed identity
System assigned managed identity
Here’s why these are the correct solutions for enabling WebApp1 to access KeyVault1 using Azure AD authentication:
User Assigned Managed Identity:
Can be created independently and assigned to WebApp1
Can be shared across multiple resources
Provides Azure AD authentication for accessing Key Vault [1]
Maintains separate lifecycle from the web app
Can be managed independently
System Assigned Managed Identity:
Created directly with WebApp1
Tied to WebApp1’s lifecycle
Provides automatic Azure AD authentication
Simpler to manage for single-resource scenarios
No separate identity management needed
Why other options are incorrect:
“Application settings”:
Used for configuration values and connection strings
Doesn’t provide authentication mechanism
Can store Key Vault references but doesn’t handle authentication
“TLS/SSL bindings”:
Related to HTTPS and certificate management
Has nothing to do with authentication to Key Vault
Only handles secure communication with clients
“App Service Authentication”:
Handles end-user authentication to the web app
Not related to service-to-service authentication with Key Vault
Focuses on client-side authentication
Implementation steps using either solution:
Enable chosen managed identity type on WebApp1
Grant appropriate access policies in KeyVault1
Configure WebApp1 to use the managed identity when accessing KeyVault1
Both managed identity options provide complete solutions because they:
Enable Azure AD authentication
Eliminate need for storing credentials in code
Provide secure access to Key Vault
Are fully supported by Azure Key Vault
Therefore, both User assigned managed identity and System assigned managed identity are valid solutions for enabling WebApp1 to access KeyVault1 using Azure AD authentication.

28
Q

You have the following Azure virtual machines that run Windows Server 2019:
* Server1- connected to VirtualNET1 and has a Wingtiptoys.com DNS suffix configured in Windows Server 2019
* Server2- connected to VirtualNET1 and has a Fabrikam.com DNS suffix configured in Windows Server 2019
* Server3- connected to VirtualNET2 and has a Wingtiptoys.com DNS suffix configured in Windows Server 2019
* Server4- connected to VirtualNET2 and has a Fabrikam.com DNS suffix configured in Windows Server 2019
You create a private DNS zone named fabrikam.com and add the following virtual network links to fabrikam.com:
* Link1- connected to VirtualNET1 and has auto registration enabled
* Link2- connected to VirtualNET2 and has auto registration enabled
Which virtual machines will register a DNS record in fabrikam.com?
○ Server2 only
○ Server1 and Server2 only
○ Server2 and Server4 only
○ Server1, Server2, Server3, and Server4
Which one is correct? why correct?

A

The correct answer is “Server2 and Server4 only”
Let’s analyze why:
Private DNS Zone Auto-registration Rules:
Auto-registration is enabled on both virtual network links
VMs will auto-register if their DNS suffix matches the private DNS zone name
The DNS suffix configured in Windows Server must match the private DNS zone name
Analysis of each server:
Server1:
Connected to VirtualNET1 (Link1)
DNS suffix: Wingtiptoys.com
Won’t register because DNS suffix doesn’t match fabrikam.com
Server2:
Connected to VirtualNET1 (Link1)
DNS suffix: Fabrikam.com
Will register because DNS suffix matches fabrikam.com
Server3:
Connected to VirtualNET2 (Link2)
DNS suffix: Wingtiptoys.com
Won’t register because DNS suffix doesn’t match fabrikam.com
Server4:
Connected to VirtualNET2 (Link2)
DNS suffix: Fabrikam.com
Will register because DNS suffix matches fabrikam.com
Key factors:
Auto-registration is enabled on both virtual network links
DNS suffix must match the private DNS zone name (fabrikam.com)
The virtual network connection is required but not sufficient alone
Both conditions (matching DNS suffix and connected network) must be met
Therefore, only Server2 and Server4 will register in fabrikam.com because:
They both have the matching Fabrikam.com DNS suffix
They are connected to virtual networks with auto-registration enabled
The other servers have different DNS suffixes and won’t auto-register
This makes “Server2 and Server4 only” the correct answer
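Both conditions from the analysis can be captured in one filter: the VM's DNS suffix must match the zone name (case-insensitively) and its VNet link must have auto-registration enabled. A sketch of that matching logic:

```python
zone = "fabrikam.com"
links = {"VirtualNET1": True, "VirtualNET2": True}  # auto-registration enabled on both

vms = {
    "Server1": ("VirtualNET1", "Wingtiptoys.com"),
    "Server2": ("VirtualNET1", "Fabrikam.com"),
    "Server3": ("VirtualNET2", "Wingtiptoys.com"),
    "Server4": ("VirtualNET2", "Fabrikam.com"),
}

# A VM auto-registers only when its suffix matches the zone AND its link allows it.
registered = sorted(
    name for name, (vnet, suffix) in vms.items()
    if links.get(vnet) and suffix.lower() == zone
)
print(registered)
```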

29
Q

You have an Azure subscription that contains a storage account named storage1 and the following virtual machines:
* VM1 has a public IP address of 13.68.158.24 and is connected to VNET1/Subnet1
* VM2 has a public IP address of 52.255.145.76 and is connected to VNET1/Subnet1
* VM3 has a public IP address of 13.68.158.50 and is connected to VNET1/Subnet2
The subnets have the following service endpoints:
* Subnet1 has a Microsoft.Storage service endpoint
* Subnet2 does not have any service endpoint
Storage1 has a firewall configured to allow access from the 13.68.158.0/24 IP address range only.
You need to identify which virtual machines can access storage1.
What should you identify?
VM1 only
VM3 only
VM1 and VM2 only
VM1 and VM3 only
VM1, VM2, and VM3
Which one is correct? why correct?

A

The correct answer is “VM1 and VM3 only”
Let’s analyze why by examining the key factors:
Storage Account Firewall Rules:
Allows access from IP range 13.68.158.0/24 only
This means only IPs starting with 13.68.158.x can access
Public IP Addresses:
VM1: 13.68.158.24 (within allowed range)
VM2: 52.255.145.76 (outside allowed range)
VM3: 13.68.158.50 (within allowed range)
Network Configuration:
VM1: VNET1/Subnet1 (has Storage service endpoint)
VM2: VNET1/Subnet1 (has Storage service endpoint)
VM3: VNET1/Subnet2 (no service endpoint)
Access Analysis:
VM1:
IP address (13.68.158.24) is within allowed range
Can access storage1
Service endpoint presence doesn’t affect access in this case
VM2:
IP address (52.255.145.76) is outside allowed range
Cannot access storage1
Service endpoint alone doesn’t grant access if IP is not allowed
VM3:
IP address (13.68.158.50) is within allowed range
Can access storage1
Lack of service endpoint doesn’t prevent access since IP is allowed
Therefore, “VM1 and VM3 only” is correct because:
Both VMs have IP addresses within the allowed range (13.68.158.0/24)
The service endpoint configuration doesn’t override the IP-based firewall rules
VM2 is blocked because its IP address is outside the allowed range
The presence or absence of service endpoints doesn’t affect access in this scenario since the primary control is the IP-based firewall rule.
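The IP-range portion of the firewall evaluation (the deciding factor in this card's analysis) is a straightforward CIDR membership test:

```python
import ipaddress

allowed = ipaddress.ip_network("13.68.158.0/24")  # storage1 firewall rule

vm_ips = {
    "VM1": "13.68.158.24",
    "VM2": "52.255.145.76",
    "VM3": "13.68.158.50",
}

# Only VMs whose public IP falls inside the allowed range pass the IP rule.
can_access = sorted(vm for vm, ip in vm_ips.items() if ipaddress.ip_address(ip) in allowed)
print(can_access)
```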

30
Q

You have the following containerized applications:
* App1 that runs in a Server Core installation of Windows Server container
* App2 that runs in a Nano Server container
* App3 that runs in a Linux container
* App4 that runs in a Linux container

What is the minimum number of Azure Kubernetes Service (AKS) node pools required to run all the applications?
○ 1
○ 2
○ 3
○ 4
Which one is correct? why correct?

A

The correct answer is “2” node pools [1]
Here’s why two node pools are the minimum requirement:
Container Type Requirements:
Windows Server containers (App1) need Windows nodes
Nano Server containers (App2) need Windows nodes
Linux containers (App3 and App4) need Linux nodes
Node Pool Compatibility:
All Windows containers (Server Core and Nano Server) can run on the same Windows node pool
All Linux containers can run on the same Linux node pool
Windows and Linux containers cannot run on the same node pool
Required Node Pools:
One Windows node pool for:
App1 (Server Core)
App2 (Nano Server)
One Linux node pool for:
App3 (Linux)
App4 (Linux)
Why other options are incorrect:
“1 node pool”:
Insufficient because Windows and Linux containers can’t mix
Cannot support both operating systems on same nodes
“3 node pools”:
Unnecessary separation
Windows containers can share the same Windows node pool
Linux containers can share the same Linux node pool
“4 node pools”:
Excessive separation
No need for separate pools for each application
Increases management overhead unnecessarily
Therefore, “2” is the correct answer because:
One Windows node pool can host both Windows containers
One Linux node pool can host both Linux containers
This is the minimum configuration needed to support all container types
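The counting argument can be sketched directly: each container's base image determines the node OS it needs, and because an AKS node pool runs a single OS, the minimum pool count is the number of distinct operating systems required:

```python
# Map each app to the node OS its container image requires.
# Server Core and Nano Server are both Windows base images.
apps = {
    "App1": "Windows",  # Server Core container
    "App2": "Windows",  # Nano Server container
    "App3": "Linux",
    "App4": "Linux",
}

# An AKS node pool is single-OS, so the minimum number of pools
# equals the number of distinct operating systems needed.
min_pools = len(set(apps.values()))
print(min_pools)  # 2
```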

31
Q

You have an Azure Active Directory (Azure AD) tenant that contains the following users:
* User1 has a Department set to Sales and a Country set to USA
* User2 has a Department set to Marketing and a Country set to USA
* User3 has a Department set to Sales and a Country set to DE
* User4 has a Department set to Marketing and a Country set to DE
You create a group named Group1 that has the following dynamic membership rule.
user.country -eq “USA” -and user.department -eq “Marketing” or user.department -eq “Sales”
Which users are members of Group1?
User1 and User2 only
User1 and User3 only
User2 and User3 only
User1, User2, and User3 only
User1, User2, User3 and User4

A

The correct answer is “User1, User2, and User3 only”
Let’s analyze this by breaking down the dynamic membership rule and evaluating each user:
Dynamic Rule Analysis:
user.country -eq “USA” -and user.department -eq “Marketing” or user.department -eq “Sales”
This can be broken down as:
(user.country -eq “USA” AND user.department -eq “Marketing”) OR
(user.department -eq “Sales”)
Evaluating Each User:
User1:
Department: Sales
Country: USA
Result: TRUE (matches second condition “department -eq Sales”)
User2:
Department: Marketing
Country: USA
Result: TRUE (matches first condition combination)
User3:
Department: Sales
Country: DE
Result: TRUE (matches second condition “department -eq Sales”)
User4:
Department: Marketing
Country: DE
Result: FALSE (doesn’t match either condition)
Key Points:
The “OR” operator makes the “Sales” department check independent
Anyone in Sales department qualifies regardless of country
For Marketing department, they must be in USA
The rule’s structure means Sales department users qualify regardless of country
Therefore, “User1, User2, and User3 only” is correct because:
User1: Included (Sales department)
User2: Included (Marketing department in USA)
User3: Included (Sales department)
User4: Excluded (Marketing department not in USA)
This demonstrates how the dynamic membership rule evaluates conditions and how the OR operator affects the overall logic of the rule
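The precedence argument can be modeled directly. A sketch in Python, with the -and condition parenthesized first to mirror how the rule is evaluated (-and binds tighter than -or):

```python
users = {
    "User1": {"department": "Sales",     "country": "USA"},
    "User2": {"department": "Marketing", "country": "USA"},
    "User3": {"department": "Sales",     "country": "DE"},
    "User4": {"department": "Marketing", "country": "DE"},
}

def in_group1(u: dict) -> bool:
    # Rule: (country == USA AND department == Marketing) OR (department == Sales)
    return (u["country"] == "USA" and u["department"] == "Marketing") \
        or u["department"] == "Sales"

members = [name for name, attrs in users.items() if in_group1(attrs)]
print(members)  # ['User1', 'User2', 'User3']
```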

32
Q

You have 10 Azure Active Directory (Azure AD) tenants.
You need to ensure that a user named Admin1 can onboard the tenants for access reviews. The solution must use the principle of least privilege.
Which role should you assign to Admin1?
User administrator
Group administrator
Security administrator
Compliance administrator
Which one is correct? why correct?

A

The correct answer is “User administrator”
Here’s why the User administrator role is the most appropriate with least privilege:
User Administrator Role:
Is one of the roles Microsoft documents for onboarding a tenant to access reviews (the other being Global administrator)
Can create and manage access reviews of groups and applications
Of the options listed, it is the least-privileged role that can actually perform the task
Aligns with the principle of least privilege
Why other roles are incorrect:
Security Administrator:
Manages security features such as Conditional Access and Identity Protection
Cannot onboard a tenant for access reviews
Its security scope does not include access review management
Group Administrator:
Limited to group management
Cannot configure access reviews
Scope too narrow for the required task
Compliance Administrator:
Focused on compliance-related tasks
Cannot onboard a tenant for access reviews
Not aligned with access review management
Key Considerations:
Onboarding a tenant to access reviews requires Global administrator or User administrator
Global administrator would grant far more privilege than needed
Among the given options, only User administrator has the required permissions
Therefore, “User administrator” is the correct choice because:
It is the documented role for onboarding tenants to access reviews
It follows the principle of least privilege
The other listed roles cannot perform the onboarding task

33
Q

You have an Azure container registry named Registry1.
You enable the admin user for Registry1.
Which username should you use to connect to Registry1 as an admin user?
root
Admin
Administrator
Registry1
Registry1.azurecr.io
Which one is correct? why correct?

A

The correct answer is “Registry1”
Here’s why Registry1 is the correct username for admin access:
Azure Container Registry Admin User:
When admin user is enabled, the username is always the registry name
In this case, the registry name is “Registry1”
This is a fixed behavior in Azure Container Registry
Cannot be customized or changed
Why other options are incorrect:
“root”:
Not used for Azure Container Registry
Common in other systems but not ACR
Not the Azure convention
“Admin”:
Generic name not used by ACR
Not the standard naming convention
Not linked to registry identity
“Administrator”:
Not used by ACR
Too generic
Not the standard naming convention
“Registry1.azurecr.io”:
This is the login server URL
Used for connecting to registry
Not the admin username
Used in connection string
Key Points:
Admin username is always the registry name
This is consistent across all Azure Container Registries
Simple and predictable naming convention
Directly tied to registry identity
Usage Example:
docker login Registry1.azurecr.io -u Registry1 -p <admin-password>
Therefore, "Registry1" is correct because:
It matches the registry name
Follows Azure Container Registry conventions
Is the standard admin username when admin user is enabled
Aligns with Azure's documentation and implementation
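The convention is mechanical enough to spell out. A small sketch that derives both values from the registry name — the `.azurecr.io` suffix is the standard ACR login-server suffix, and the admin username is simply the registry name itself:

```python
def acr_admin_login(registry_name: str) -> tuple:
    """Return (login_server, admin_username) for an Azure Container Registry.

    When the admin user is enabled, the username is always the registry
    name; the login server is the registry name plus .azurecr.io.
    """
    return (f"{registry_name}.azurecr.io", registry_name)

server, username = acr_admin_login("Registry1")
print(server)    # Registry1.azurecr.io  (what you connect to)
print(username)  # Registry1             (the admin username)
```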

34
Q

You have an Azure subscription that contains the following resources:
* VM1- a virtual machine that runs Microsoft SQL Server and is deployed in the West US location
* VM2- a virtual machine that runs Microsoft SQL Server and is deployed in the East US location
* SQL1- an Azure SQL Server deployed to the West US location
* Vault1- a Recovery Services vault deployed to the West US location
Which resources can you back up to Vault1?
VM1 only
VM1 and VM2 only
VM1 and SQL1 only
VM1, VM2, and SQL1
Which one is correct? why correct?

A

In this scenario, the correct answer is VM1 only.

Explanation:
VM1 (West US): VM1 is an Azure virtual machine deployed in the same region as Vault1, so it can be backed up to Vault1. Its SQL Server workload can also be protected by the vault, because vault-based SQL backup supports SQL Server running inside an Azure VM.

VM2 (East US): A Recovery Services vault can only back up Azure VMs located in the same region as the vault. Because VM2 is in East US and Vault1 is in West US, VM2 cannot be backed up to Vault1.

SQL1 (West US): Although SQL1 is in the same region as Vault1, Azure SQL is a platform-as-a-service offering with its own built-in automated backups and long-term retention. Azure SQL servers and databases are not backed up to a Recovery Services vault.

Conclusion:
Only VM1 can be backed up to Vault1, because it is an Azure VM in the same region as the vault.
VM2 is excluded by region, and SQL1 is excluded because Azure SQL uses its own automated backup mechanism rather than Recovery Services vaults.