test13 Flashcards
You have an Azure web app named Contoso2023.
You add a deployment slot to Contoso2023 named Slot1.
You need to be able to perform a deployment slot swap with preview.
What should you modify?
application settings for Contoso2023
general settings for Contoso2023
application settings for Contoso2023-Slot1
general settings for Contoso2023-Slot1
which one is correct? why correct?
The correct answer is: application settings for Contoso2023-Slot1
Here’s why:
For deployment slot swap with preview to work properly, you need to configure the application settings in the SOURCE slot (Slot1 in this case) because:
The swap with preview operation starts from the source slot
Configuration changes need to be made where the new code/changes are being tested
The preview swap allows you to validate how the app will behave with production settings
The process works as follows:
First, you configure slot-specific settings in Slot1
When you initiate a swap with preview:
The production slot’s settings are applied to the staging slot (Slot1)
This allows you to test how your application will behave with production settings
You can validate the application before completing the swap
Why other options are incorrect:
General settings for Contoso2023:
General settings don’t control swap behavior
These settings are more about platform configuration
Application settings for Contoso2023 (production):
Changes in production settings won’t enable swap with preview
Production settings are already in place
General settings for Contoso2023-Slot1:
General settings don’t control swap behavior
These are basic configuration settings unrelated to swap preview functionality
Key points about slot swap with preview:
It’s a two-phase operation
Allows testing with production configuration
Helps prevent unexpected behavior after swap
Configuration must be done in the source slot
Enables validation before completing the swap
Therefore, modifying the application settings in Slot1 (Contoso2023-Slot1) is the correct approach to enable and configure swap with preview functionality.
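For illustration, here is a minimal sketch of the workflow using the Azure CLI from Python. It assumes a signed-in az session; the resource group rg1 and the ENVIRONMENT setting are placeholders, not details from the question.

import subprocess

def az(*args):
    # Run one Azure CLI command and fail fast on errors.
    subprocess.run(["az", *args], check=True)

# Mark a setting on the source slot as slot-specific (sticky to Slot1).
az("webapp", "config", "appsettings", "set",
   "--resource-group", "rg1", "--name", "Contoso2023", "--slot", "Slot1",
   "--slot-settings", "ENVIRONMENT=staging")

# Phase 1 of swap with preview: the production slot's settings are applied
# to Slot1 so the new code can be validated against them.
az("webapp", "deployment", "slot", "swap",
   "--resource-group", "rg1", "--name", "Contoso2023",
   "--slot", "Slot1", "--target-slot", "production", "--action", "preview")

# After validating Slot1, complete the swap with --action swap,
# or cancel it with --action reset.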
You have an Azure web app named WebApp1.
You discover that backup options are unavailable for WebApp1.
You need to back up WebApp1.
What should you do first?
Modify the platform settings of WebApp1.
Modify the Application settings of WebApp1.
Scale up the app service plan.
Scale out the app service plan.
which one is correct? why correct?
The correct answer is: Scale up the app service plan.
Here’s why:
Azure Web App Backup Requirements:
Backup functionality is only available in Standard, Premium, and Isolated pricing tiers
Basic and Free tiers do not support backup features
Scaling up means moving to a higher tier (e.g., from Basic to Standard)
Why scaling up is the first step:
It changes the service tier to one that supports backups
Once in a supported tier, backup options become available
This is a prerequisite for any backup configuration
Why other options are incorrect:
Modify the platform settings of WebApp1:
Platform settings don’t control backup availability
These settings are for runtime configurations
Won’t enable backup features if you’re in an unsupported tier
Modify the Application settings of WebApp1:
Application settings are for app-specific configurations
Don’t affect the availability of backup features
Can’t enable backups through these settings
Scale out the app service plan:
Scaling out only adds more instances
Doesn’t change the tier/pricing level
Won’t enable backup features
The correct sequence would be:
Scale up to at least Standard tier
Configure backup settings
Set up backup schedule and retention
Configure storage account for backups
Therefore, scaling up the app service plan is the first and necessary step to enable backup functionality for WebApp1.
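For illustration, the same sequence as a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; plan1, rg1, and the backup container SAS URL are placeholders, since the question doesn't name them):

import subprocess

def az(*args):
    subprocess.run(["az", *args], check=True)

# Step 1: scale the plan up to a tier that supports backups (Standard S1 here).
az("appservice", "plan", "update",
   "--resource-group", "rg1", "--name", "plan1", "--sku", "S1")

# Step 2: once the tier supports it, configure a backup. The container URL
# must be a blob container SAS URL with write permission.
az("webapp", "config", "backup", "create",
   "--resource-group", "rg1", "--webapp-name", "WebApp1",
   "--backup-name", "webapp1-backup",
   "--container-url", "https://<account>.blob.core.windows.net/backups?<sas>")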
You have an Azure web service named Contoso2022 that runs in the Standard App Service plan. Contoso2022 has five deployment slots in use.
A user named User1 has the Contributor role for Contoso2022.
You need to ensure that User1 can create additional deployment slots to Contoso2022.
What should you do?
Assign User1 the Owner role for Contoso2022.
Assign User1 the Website Contributor role for Contoso2022.
Scale up the Contoso2022 App Service plan.
Scale out the Contoso2022 App Service plan.
which one is correct? why correct?
The correct answer is: Scale up the Contoso2022 App Service plan.
Here’s why:
Deployment Slot Limitations:
Standard (S1) tier allows up to 5 deployment slots
Premium (P1v2/P1v3) and higher tiers allow up to 20 deployment slots
Since Contoso2022 already has 5 slots in use, it has reached the Standard tier limit
Why scaling up is the solution:
Scaling up means moving to a higher tier (e.g., from Standard to Premium)
Premium tier provides more deployment slots (up to 20)
This directly addresses the limitation preventing new slot creation
Why other options are incorrect:
Assign User1 the Owner role:
User1 already has the Contributor role, which is sufficient for slot management
The issue is not permissions-related
Higher role won’t overcome slot limit
Assign User1 the Website Contributor role:
This role doesn’t provide additional capabilities for slot creation
The limitation is tier-based, not permission-based
Website Contributor role has similar permissions to Contributor
Scale out the Contoso2022 App Service plan:
Scaling out only adds more instances
Doesn’t change the tier limitations
Won’t increase the number of available deployment slots
Key points:
Standard tier: 5 deployment slots maximum
Premium tier: 20 deployment slots maximum
User permissions are not the limiting factor
Scaling up increases service capabilities
Therefore, scaling up the App Service plan is the correct solution to allow User1 to create additional deployment slots beyond the current limit of 5.
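For illustration, a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; plan1 and rg1 are placeholder names, since the question doesn't give them):

import subprocess

def az(*args):
    subprocess.run(["az", *args], check=True)

# Move the plan from Standard to Premium, raising the slot limit from 5 to 20.
az("appservice", "plan", "update",
   "--resource-group", "rg1", "--name", "plan1", "--sku", "P1V3")

# User1, as a Contributor, can then create the sixth slot.
az("webapp", "deployment", "slot", "create",
   "--resource-group", "rg1", "--name", "Contoso2022", "--slot", "slot6")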
You plan to deploy an Azure web app that will have the following settings:
* Name: WebApp1
* Publish: Docker container
* Operating system: Windows
* Region: West US
* Windows Plan (West US): ASP-RG1-8bcf
You need to ensure that WebApp1 uses the ASP.NET v4.7 runtime stack.
Which setting should you modify?
Region
Operating system
Publish
Windows Plan
which one is correct? why correct?
The correct answer is: Publish
Here’s why:
Current Configuration Issue:
The web app is set to publish as a Docker container
When using Docker container deployment, you can’t directly specify the runtime stack
Docker containers come with their own runtime environment
Why changing Publish is the solution:
Change Publish from “Docker container” to “Code”
When publishing as “Code”, you can:
Select specific runtime stacks
Choose ASP.NET v4.7 as the runtime
Configure framework-specific settings
Why other options are incorrect:
Region:
Region selection doesn’t affect runtime stack availability
All supported runtimes are available in West US
Changing regions won’t enable runtime stack selection
Operating system:
Windows is already the correct choice for ASP.NET v4.7
Changing OS won’t enable runtime stack selection while using Docker
Windows Plan:
App Service Plan doesn’t determine runtime stack options
This only affects resources available to the app
Changing plan won’t enable runtime stack selection
The correct sequence would be:
Change Publish type from “Docker container” to “Code”
Select ASP.NET v4.7 as the runtime stack
Configure other application settings as needed
Therefore, modifying the Publish setting is the correct solution to enable ASP.NET v4.7 runtime stack for WebApp1.
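For illustration, a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; rg1 is a placeholder, and the exact --runtime string varies by CLI version, so it is listed first):

import subprocess

# List the runtime stacks available to code-based (non-container) web apps.
subprocess.run(["az", "webapp", "list-runtimes"], check=True)

# Recreate WebApp1 as a code deployment on the existing Windows plan.
# The runtime value below is illustrative; use the exact string reported
# by list-runtimes for the ASP.NET version you need.
subprocess.run([
    "az", "webapp", "create",
    "--resource-group", "rg1",
    "--plan", "ASP-RG1-8bcf",
    "--name", "WebApp1",
    "--runtime", "aspnet:4.8",  # assumption: confirm against list-runtimes
], check=True)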
You plan to create an Azure container instance named container1 that will use a Docker image named Image1.
You need to ensure that container1 has persistent storage.
Which Azure resources should you deploy for the persistent storage?
an Azure container registry only
an Azure Storage account and a file share
an Azure Storage account and a blob container
an Azure SQL database only
which one is correct? why correct?
The correct answer is: an Azure Storage account and a file share
Here’s why:
Azure Container Instance (ACI) Persistent Storage Requirements:
ACI supports Azure Files (file shares) for persistent storage
File shares provide shared storage that can be mounted to containers
The storage must persist independently of the container lifecycle
Why Azure Storage account and file share is the correct solution:
Azure Storage account hosts the file share
Azure Files provides SMB protocol access
Containers can mount the file share directly
Data persists even if container is deleted/recreated
Supports concurrent access from multiple containers
Why other options are incorrect:
Azure container registry only:
Container registry stores and manages container images
Doesn’t provide persistent storage for running containers
Used for image management, not data persistence
Azure Storage account and a blob container:
Blob storage isn’t directly mountable to containers
Not suitable for file system-like operations
Doesn’t provide the same file system semantics as file shares
Azure SQL database only:
This is a relational database service
Not designed for container file system storage
Cannot be mounted as persistent storage
Implementation steps:
Create an Azure Storage account
Create a file share within the storage account
Configure the container instance to mount the file share
Use storage account key for authentication
Therefore, deploying an Azure Storage account with a file share is the correct solution for providing persistent storage to container1.
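For illustration, the implementation steps as a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; rg1, stor1, and the key handling are placeholders, and a real script would fetch the key rather than hard-code it):

import subprocess

def az(*args):
    subprocess.run(["az", *args], check=True)

# Step 1: storage account and file share.
az("storage", "account", "create",
   "--resource-group", "rg1", "--name", "stor1", "--sku", "Standard_LRS")
az("storage", "share", "create", "--account-name", "stor1", "--name", "share1")

# Step 2: run container1 with the share mounted at /mnt/data. Data written
# there survives container restarts and re-creation.
az("container", "create", "--resource-group", "rg1",
   "--name", "container1", "--image", "image1",
   "--azure-file-volume-account-name", "stor1",
   "--azure-file-volume-account-key", "<storage-account-key>",
   "--azure-file-volume-share-name", "share1",
   "--azure-file-volume-mount-path", "/mnt/data")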
You have an Azure subscription that contains the following resources:
* a storage account named storage123
* a container instance named AppContainer
The subscription contains a virtual network named VirtualNet4 that has the following subnets:
* SubnetA- storage123 is connected to SubnetA.
* SubnetB- AppContainer is connected to SubnetB.
* SubnetC- No resources.
You plan to deploy an Azure container instance named container5 to VirtualNet4.
To which subnets can you deploy container5?
SubnetB only
SubnetC only
SubnetB and SubnetC only
SubnetA, SubnetB, and SubnetC
which one is correct? why correct?
The correct answer is: SubnetB and SubnetC only
Here’s why:
Azure Container Instance (ACI) Network Deployment Rules:
Container groups deploy into a subnet that is delegated to Microsoft.ContainerInstance/containerGroups
A delegated subnet can contain only container groups, no other resource types
Multiple container groups can share the same delegated subnet
Analysis of each subnet:
SubnetA:
Already in use by storage123 (a storage account)
A subnet used by other resource types cannot be delegated to container groups
Not available for container5
SubnetB:
Already hosts AppContainer, so it is already delegated to container instances
A delegated subnet can host more than one container group
Available for container5
SubnetC:
Currently empty
Can be delegated to container instances
Available for container5 deployment
Why other options are incorrect:
“SubnetB only” and “SubnetC only”:
Each omits the other valid subnet; both SubnetB and SubnetC satisfy the delegation rules
“SubnetA, SubnetB, and SubnetC”:
SubnetA is in use by a storage account and cannot be delegated to container groups
Key rules for ACI networking:
The subnet must be delegated to container groups
A delegated subnet is dedicated to container instances and cannot mix with other resource types
More than one container group can run in the same delegated subnet
Therefore, container5 can be deployed to SubnetB or SubnetC, making “SubnetB and SubnetC only” the correct answer.
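For illustration, a minimal sketch of the deployment with the Azure CLI driven from Python (assumes a signed-in az session; rg1 and the sample image are placeholders):

import subprocess

# Deploy container5 into a subnet of VirtualNet4. The CLI delegates the
# subnet to Microsoft.ContainerInstance/containerGroups if it isn't already.
subprocess.run([
    "az", "container", "create",
    "--resource-group", "rg1",
    "--name", "container5",
    "--image", "mcr.microsoft.com/azuredocs/aci-helloworld",  # sample image
    "--vnet", "VirtualNet4",
    "--subnet", "SubnetC",  # SubnetB works the same way
], check=True)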
You have a Docker image named Image1 that contains a corporate app.
You need to deploy Image1 to Azure and make the app accessible to users.
Which two Azure services should you deploy? Each correct answer presents part of the solution.
Azure App service
a virtual machine
Azure Container Registry
a virtual machine scale set
which one is correct? why correct?
The correct answers are:
Azure App Service
Azure Container Registry
Here’s why these are the correct answers:
Azure Container Registry (ACR):
Required to store and manage the Docker image (Image1)
Provides a private, secure location for container images
Integrates seamlessly with other Azure services
Enables version control of container images
Necessary for storing Image1 before deployment
Azure App Service:
Provides a managed platform for hosting containerized applications
Supports Docker container deployment
Offers built-in auto-scaling and load balancing
Provides easy integration with ACR
Handles the infrastructure management
Makes the app accessible to users via HTTP/HTTPS endpoints
Why other options are incorrect:
Virtual Machine:
Requires manual container management
More complex to maintain
Requires more administrative overhead
Not a managed service for containers
Overkill for running a containerized application
Virtual Machine Scale Set:
More complex than necessary
Requires manual container orchestration
Better suited for complex infrastructure scenarios
Requires more management overhead
Not a managed container service
The deployment process would typically involve:
Create an Azure Container Registry
Push Image1 to the registry
Create an Azure App Service
Configure App Service to pull and run Image1 from ACR
Therefore, Azure Container Registry and Azure App Service together provide the complete solution for deploying and hosting the containerized application.
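For illustration, the deployment process as a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; rg1, contosoacr, plan1, and corp-app1 are placeholder names):

import subprocess

def az(*args):
    subprocess.run(["az", *args], check=True)

# Step 1: a registry to hold Image1.
az("acr", "create", "--resource-group", "rg1",
   "--name", "contosoacr", "--sku", "Basic")
# Push the image separately, e.g.: az acr login --name contosoacr, then
# docker tag image1 contosoacr.azurecr.io/image1:v1 and docker push.

# Steps 2-3: a Linux App Service plan and a web app that runs Image1 from ACR.
az("appservice", "plan", "create", "--resource-group", "rg1",
   "--name", "plan1", "--is-linux", "--sku", "S1")
az("webapp", "create", "--resource-group", "rg1", "--plan", "plan1",
   "--name", "corp-app1",
   "--deployment-container-image-name", "contosoacr.azurecr.io/image1:v1")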
You have an Azure Storage account named storage1.
You create the following encryption scopes for storage1:
* Scope1 that has an encryption type of Microsoft-managed keys
* Scope2 that has an encryption type of Customer-managed keys
Which storage services can be used with Scope2?
blob only
file only
blob and file only
table and queue only
blob, file, table, and queue
which one is correct? why correct?
The correct answer is: blob and file only
Here’s why:
Encryption Scope Support:
Encryption scopes with customer-managed keys are supported only for:
Azure Blob Storage
Azure Files
Not supported for:
Azure Queue Storage
Azure Table Storage
Customer-managed keys (Scope2) capabilities:
Can be used to encrypt blob data
Can be used to encrypt file share data
Provides more control over encryption keys
Allows key rotation and management
Enables bring-your-own-key (BYOK) scenarios
Why other options are incorrect:
“blob only”:
While blob storage supports encryption scopes
Files also support customer-managed keys
Too limiting
“file only”:
While file storage supports encryption scopes
Blob storage also supports customer-managed keys
Too limiting
“table and queue only”:
Table storage doesn’t support encryption scopes
Queue storage doesn’t support encryption scopes
Completely incorrect
“blob, file, table, and queue”:
Table and Queue services don’t support encryption scopes
Too inclusive
Key points about encryption scopes:
Available for blob and file services
Can use Microsoft-managed or customer-managed keys
Provide granular encryption control
Allow different encryption settings within same storage account
Support key rotation and management
Therefore, only blob and file services can be used with Scope2 (customer-managed keys encryption scope).
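For illustration, a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; rg1, the Key Vault key URI, and the container name are placeholders):

import subprocess

def az(*args):
    subprocess.run(["az", *args], check=True)

# Create a customer-managed-key scope like Scope2.
az("storage", "account", "encryption-scope", "create",
   "--resource-group", "rg1", "--account-name", "storage1",
   "--name", "Scope2",
   "--key-source", "Microsoft.KeyVault",
   "--key-uri", "https://<vault>.vault.azure.net/keys/<key>")

# Use the scope as the default for a new blob container.
az("storage", "container", "create",
   "--account-name", "storage1", "--name", "securedata",
   "--default-encryption-scope", "Scope2",
   "--prevent-encryption-scope-override", "true")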
You have an Azure Storage account named storage1 that is configured to use the Hot access tier.
Storage1 has a container named container1 and a lifecycle management rule with the following settings:
* Move blob to cool storage: Selected
* Days after last modification: 3
* Move blob to archive storage: Selected
* Days after last modification: 5
On December 1, you create a file named File1 in container1.
On December 10, you rehydrate File1 and move the file to the Hot access tier.
When will File1 be moved to archive storage?
within 24 hours
on December 15
on December 18
on January 1
Which one is correct? why correct?
The correct answer is: December 15
Here’s why:
When you rehydrate File1 and move it back to Hot tier on December 10, this action counts as a modification of the blob. This resets the “last modification” timestamp to December 10.
According to the lifecycle management rules:
Files move to Cool tier after 3 days from last modification
Files move to Archive tier after 5 days from last modification
Starting from December 10 (the new last modification date):
The file will move to Cool tier on December 13 (3 days later)
The file will move to Archive tier on December 15 (5 days later)
The other options are incorrect because:
“within 24 hours” is too soon and doesn’t follow the lifecycle rules
“December 18” is too late as it would be 8 days after modification
“January 1” is much too late and doesn’t align with the lifecycle rules
Important to note:
Lifecycle management rules are based on the last modification time
When you rehydrate and change the access tier, it counts as a modification
The countdown for lifecycle rules restarts from the last modification date
Each lifecycle action is evaluated independently against the last modification time, so the blob moves to Cool on December 13 and on to Archive on December 15 without any manual steps
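For illustration, the rule from the question expressed as a lifecycle management policy document, emitted by a short Python sketch (the rule and file names are placeholders; apply it with az storage account management-policy create --account-name storage1 --resource-group <rg> --policy @policy.json):

import json

policy = {
    "rules": [{
        "enabled": True,
        "name": "tier-container1",
        "type": "Lifecycle",
        "definition": {
            "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["container1/"]},
            "actions": {
                "baseBlob": {
                    "tierToCool": {"daysAfterModificationGreaterThan": 3},
                    "tierToArchive": {"daysAfterModificationGreaterThan": 5},
                }
            },
        },
    }]
}

# Write the policy to disk for the az command above to consume.
with open("policy.json", "w") as f:
    json.dump(policy, f, indent=2)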
You have an Azure Storage account named storage1.
You need to provide time-limited access to storage1.
What should you use?
an access key
a role assignment
an access policy
a shared access signature (SAS)
Which one is correct? why correct?
The correct answer is: a shared access signature (SAS)
Here’s why:
Shared Access Signature (SAS) is the best solution for providing time-limited access to Azure Storage because:
It provides secure, delegated access to resources in your storage account
You can specify an expiry time/date for the access
You can define specific permissions (read, write, delete, etc.)
You can restrict access to specific IP addresses, protocols, and services
You can revoke access at any time
Why the other options are not optimal:
Access Key:
Provides full access to the storage account
Cannot be time-limited
Harder to revoke without impacting other applications
Sharing access keys is considered a security risk
Role Assignment:
More suitable for long-term access management
Requires Azure AD integration
Cannot be easily time-limited
More complex to set up for temporary access
Access Policy:
Is actually a component that can be used with SAS
Cannot provide time-limited access on its own
Used to define permissions that can be referenced by a SAS
Key benefits of using SAS:
Granular control over what resources can be accessed
Control over what operations are allowed
Control over when access starts and expires
Can be easily revoked if needed
Can be associated with stored access policies for additional control
SAS is specifically designed for scenarios requiring temporary, limited access to storage resources, making it the ideal choice for this requirement.
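For illustration, a minimal Python sketch that issues a one-hour, read-only SAS for a single blob using the azure-storage-blob package (the container and blob names are placeholders; the account key comes from the storage account's access keys):

from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account_key = "<storage1-access-key>"  # placeholder
sas_token = generate_blob_sas(
    account_name="storage1",
    container_name="container1",   # placeholder container and blob
    blob_name="report.pdf",
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
url = f"https://storage1.blob.core.windows.net/container1/report.pdf?{sas_token}"
print(url)  # the URL stops working after the expiry time

Anyone holding the URL gets exactly the granted permission until expiry, which is the time-limited behavior the question asks for.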
You have an Azure Storage account named storage1 that contains a file share named share1.
You also have an on-premises Active Directory domain that contains a user named User1.
You need to ensure that User1 can access share1 by using the SMB protocol.
What should you do?
Provide User1 with the shared access signature (SAS) for storage1.
Configure the Access control (IAM) settings of storage1.
Configure the Firewalls and virtual networks settings of storage1.
Provide User1 with the access key for storage1.
Which one is correct? why correct?
The correct answer is: Configure the Access control (IAM) settings of storage1
Here’s why:
For SMB access to Azure File Shares with Active Directory authentication:
You need to configure Azure AD Domain Services or AD authentication
IAM (Identity and Access Management) settings need to be configured to allow Active Directory users to access the file share
This provides seamless integration with existing Active Directory credentials
Users can access the file share using their AD credentials without additional authentication
Why the other options are incorrect:
Shared Access Signature (SAS):
SAS is primarily used for REST-based access
Not suitable for SMB protocol authentication
Doesn’t integrate with Active Directory authentication
Would require manual token management
Firewalls and virtual networks settings:
This only controls network-level access
Doesn’t handle authentication
While important for security, it doesn’t solve the authentication requirement
Access key:
Access keys are for administrative access
Not suitable for end-user authentication
Sharing access keys is a security risk
Doesn’t integrate with Active Directory
Key steps to implement the solution:
Configure Azure AD Domain Services or AD authentication for the storage account
Set up appropriate IAM roles and assignments
Ensure the user has the correct RBAC permissions
Configure the necessary network connectivity between on-premises and Azure
This approach provides:
Seamless integration with existing AD credentials
Secure access using SMB protocol
Proper authentication and authorization
Maintainable and scalable access control
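For illustration, the share-level IAM assignment as a minimal Azure CLI sketch driven from Python (assumes AD DS authentication is already enabled on storage1; the subscription ID, resource group, and UPN are placeholders):

import subprocess

# Grant User1 a built-in SMB data role scoped to share1.
scope = ("/subscriptions/<sub-id>/resourceGroups/rg1"
         "/providers/Microsoft.Storage/storageAccounts/storage1"
         "/fileServices/default/fileshares/share1")
subprocess.run([
    "az", "role", "assignment", "create",
    "--assignee", "user1@contoso.com",
    "--role", "Storage File Data SMB Share Contributor",
    "--scope", scope,
], check=True)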
You have an Azure virtual machine named VM1 that automatically registers in an Azure private DNS zone named contoso.com.
VM1 hosts a website named Site1.
You need to ensure that Site1 can be resolved by using a URL of http://www.contoso.com. The solution must ensure that if the IP
address of VM1 changes, www.contoso.com will resolve to the changed IP address.
Which DNS record type should you add to contoso.com?
A
SRV
TXT
AAAA
CNAME
Which one is correct? why correct?
The correct answer is: CNAME (Canonical Name)
Here’s why:
CNAME is the best choice because:
It creates an alias that points to another DNS name (canonical name)
When VM1’s IP address changes, the CNAME record will automatically resolve to the new IP address
The CNAME record would point www.contoso.com to VM1’s automatically registered DNS name in the private DNS zone
It provides automatic updates when the underlying IP address changes
How it works in this scenario:
VM1 automatically registers its DNS record in contoso.com
You create a CNAME record where:
www.contoso.com (alias) points to VM1’s DNS name (canonical name)
When VM1’s IP changes, its DNS record updates automatically
The CNAME record follows this change without requiring manual updates
Why other options are incorrect:
A Record:
Maps directly to an IP address
Would need manual updates when VM1’s IP changes
Doesn’t provide the automatic update capability needed
SRV Record:
Used for service location
Typically for specific services and ports
Not appropriate for basic web hosting scenarios
TXT Record:
Used for text information
Cannot be used for DNS resolution
Typically used for domain verification or SPF records
AAAA Record:
Used for IPv6 addresses only
Like A records, would need manual updates
Doesn’t provide automatic resolution
Using a CNAME record is the most efficient solution because:
It maintains a dynamic link to the VM’s DNS name
Automatically handles IP address changes
Requires minimal maintenance
Provides reliable name resolution
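For illustration, a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; rg1 is a placeholder, and vm1.contoso.com assumes VM1 auto-registered under the host name vm1):

import subprocess

# Alias www.contoso.com to VM1's auto-registered record in the private zone.
subprocess.run([
    "az", "network", "private-dns", "record-set", "cname", "set-record",
    "--resource-group", "rg1",
    "--zone-name", "contoso.com",
    "--record-set-name", "www",
    "--cname", "vm1.contoso.com",
], check=True)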
A company named Contoso, Ltd. has an Azure subscription that contains an Azure Active Directory (Azure AD) tenant named
contoso.com. The Azure subscription contains the following virtual networks:
* VNET1- deployed in the East US location
* VNET2- deployed in the East US location
* VNET3- deployed in the West US location
Contoso purchases a company named A. Datum Corporation. A. Datum has an Azure subscription that contains an Azure AD
tenant named adatum.com. Adatum.com contains the following virtual networks:
* VNETA- deployed in the East US location
* VNETB- deployed in the West US location
Which virtual networks can you peer to VNET1?
VNET2 only
VNET2 and VNET3 only
VNET2 and VNETA only
VNET2, VNET3, and VNETA only
VNET2, VNET3, VNETA, and VNETB
Which one is correct? why correct?
The correct answer is: VNET2, VNET3, VNETA, and VNETB
Here’s why:
Azure Virtual Network Peering Capabilities:
VNet peering enables you to connect virtual networks seamlessly
Peering can be established between:
VNets in the same region
VNets in different regions (Global VNet peering)
VNets across different subscriptions
VNets across different Azure AD tenants
In this scenario:
VNET2 can be peered because:
It’s in the same subscription
It’s in the same region (East US)
VNET3 can be peered because:
It’s in the same subscription
Global VNet peering supports cross-region connectivity
VNETA can be peered because:
Cross-subscription peering is supported
Cross-tenant peering is supported
It’s in the same region (East US)
VNETB can be peered because:
Cross-subscription peering is supported
Cross-tenant peering is supported
Global VNet peering supports cross-region connectivity
Key Points:
Location (region) is not a limitation thanks to Global VNet peering
Different subscriptions can be connected through peering
Different Azure AD tenants can be connected through peering
All virtual networks mentioned can be peered with VNET1
Requirements for VNet Peering:
Appropriate permissions in both subscriptions
Non-overlapping IP address spaces
Credentials with access to both tenants when peering across Azure AD tenants
Therefore, VNET1 can be peered with all other virtual networks mentioned (VNET2, VNET3, VNETA, and VNETB), making this the correct answer.
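For illustration, one direction of a cross-tenant peering as a minimal Azure CLI sketch driven from Python (the subscription ID and resource group are placeholders; a matching peering must also be created from VNETA back to VNET1, and the session needs access to both tenants):

import subprocess

remote_id = ("/subscriptions/<adatum-sub-id>/resourceGroups/<rg>"
             "/providers/Microsoft.Network/virtualNetworks/VNETA")
subprocess.run([
    "az", "network", "vnet", "peering", "create",
    "--resource-group", "rg1",
    "--name", "VNET1-to-VNETA",
    "--vnet-name", "VNET1",
    "--remote-vnet", remote_id,  # remote side given as a full resource ID
    "--allow-vnet-access",
], check=True)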
You have an Azure virtual machine named VM1 that connects to a virtual network named VNET1.
You create a private DNS zone named contoso.com and add an A record named host1 to the zone.
You need to ensure that VM1 can resolve host1.contoso.com.
What should you do?
Modify the Access control (IAM) settings of the zone.
From the zone, add a virtual network link.
From the properties of the network interface, modify the options of the DNS servers.
From the properties of VNET1, modify the options of the DNS servers.
Which one is correct? why correct?
The correct answer is: From the zone, add a virtual network link
Here’s why:
Virtual Network Link is the correct solution because:
It creates a connection between the private DNS zone and the virtual network
It enables DNS resolution for resources within the linked virtual network
It’s specifically designed for private DNS resolution within Azure virtual networks
Once linked, VMs in the virtual network can automatically resolve records in the private DNS zone
Implementation steps:
Go to the private DNS zone (contoso.com)
Select “Virtual network links”
Add a new link to VNET1
Enable auto-registration if needed
Why other options are incorrect:
Modifying Access control (IAM) settings:
IAM controls administrative access to the DNS zone
Doesn’t affect DNS resolution
Only manages who can manage the DNS zone
Modifying DNS servers of the network interface:
Not necessary for Azure private DNS resolution
Would be used for custom DNS servers
Could actually interfere with Azure private DNS resolution
Modifying DNS servers of VNET1:
Not required for Azure private DNS zones
Would be used for custom DNS servers
Azure private DNS resolution works with default Azure DNS
Benefits of using virtual network links:
Automatic DNS resolution within the virtual network
No additional configuration needed on VMs
Works with Azure’s built-in DNS infrastructure
Can be configured for auto-registration of VM DNS records
The virtual network link is the essential component that enables private DNS resolution between the DNS zone and resources in the virtual network.
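For illustration, the implementation steps as a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; rg1 and the link name are placeholders, and auto-registration is left off because host1 was created manually):

import subprocess

# Link the private zone to VNET1 so VM1 can resolve host1.contoso.com.
subprocess.run([
    "az", "network", "private-dns", "link", "vnet", "create",
    "--resource-group", "rg1",
    "--zone-name", "contoso.com",
    "--name", "VNET1-link",
    "--virtual-network", "VNET1",
    "--registration-enabled", "false",
], check=True)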
You have an Azure virtual network named VNET1 that has an IP address space of 192.168.0.0/16 and the following subnets:
* Subnet1- has an IP address range of 192.168.1.0/24 and is connected to 15 VMs
* Subnet2- has an IP address range of 192.168.2.0/24 and does not have any VMs connected
You need to ensure that you can deploy Azure Firewall to VNET1.
What should you do?
Add a new subnet to VNET1.
Add a service endpoint to Subnet2.
Modify the subnet mask of Subnet2.
Modify the IP address space of VNET1.
Which one is correct? why correct?
The correct answer is: Add a new subnet to VNET1
Here’s why:
Azure Firewall requirements:
Azure Firewall must be deployed in a dedicated subnet named “AzureFirewallSubnet”
This is a mandatory naming requirement
The subnet must be created specifically for the Azure Firewall
Minimum subnet size must be /26
Current situation:
VNET1 has IP space of 192.168.0.0/16 (plenty of available space)
Subnet1 (192.168.1.0/24) is in use
Subnet2 (192.168.2.0/24) is empty but is not named AzureFirewallSubnet
Why other options are incorrect:
Add a service endpoint to Subnet2:
Service endpoints are for securing Azure service connections
Doesn’t address the requirement for a dedicated firewall subnet
Wrong solution for firewall deployment
Modify the subnet mask of Subnet2:
Even if modified, the subnet name is still incorrect
Azure Firewall requires specifically named subnet
Changing mask alone doesn’t solve the requirement
Modify the IP address space of VNET1:
Current IP space (192.168.0.0/16) is sufficient
No need to modify as there’s plenty of address space
Wouldn’t solve the subnet requirement
Implementation steps:
Create a new subnet named “AzureFirewallSubnet”
Allocate appropriate address range (minimum /26)
Can use available space within 192.168.0.0/16
Then deploy Azure Firewall to this new subnet
The solution requires adding a new, properly named subnet because:
Azure Firewall has specific subnet naming requirements
Existing subnets can’t be repurposed
There’s sufficient IP space in the VNet
It’s the most straightforward and correct approach
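For illustration, the subnet step as a minimal Azure CLI sketch driven from Python (assumes a signed-in az session; rg1 is a placeholder, and 192.168.3.0/26 is one free /26 inside the 192.168.0.0/16 space):

import subprocess

# Create the dedicated subnet with the mandatory name AzureFirewallSubnet.
subprocess.run([
    "az", "network", "vnet", "subnet", "create",
    "--resource-group", "rg1",
    "--vnet-name", "VNET1",
    "--name", "AzureFirewallSubnet",
    "--address-prefixes", "192.168.3.0/26",
], check=True)

# Azure Firewall can then be deployed into this subnet; note that
# az network firewall create requires the azure-firewall CLI extension.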