test3 Flashcards
https://itexamviet.com/free-az-305-dump/16/
DRAG DROP
You are designing a solution to secure a company’s Azure resources. The environment hosts 10 teams. Each team manages a project and has a project manager, a virtual machine (VM) operator, developers, and contractors.
Project managers must be able to manage everything except access and authentication for users. VM operators must be able to manage VMs, but not the virtual network or storage account to which they are connected. Developers and contractors must be able to manage storage accounts.
You need to recommend roles for each member.
What should you recommend? To answer, drag the appropriate roles to the correct employee types. Each role may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Roles
Owner
Contributor
Reader
Virtual Machine Contributor
Storage Account Contributor
Answer Area
Employee type | Role
Project manager | Role
VM operators | Role
Developers | Role
Contractors | Role
Answer Area:
Employee type | Role
Project manager | Contributor
VM operators | Virtual Machine Contributor
Developers | Storage Account Contributor
Contractors | Storage Account Contributor
Explanation of why each role is appropriate:
Project Manager: Contributor
The Contributor role allows users to create and manage all types of Azure resources but does not grant them the ability to manage access to those resources (i.e., they cannot assign roles to other users). This aligns perfectly with the requirement that project managers can manage everything except access and authentication.
VM Operators: Virtual Machine Contributor
The Virtual Machine Contributor role specifically grants permissions to manage virtual machines. This includes starting, stopping, resizing, and other VM-related tasks. Importantly, it does not grant permissions to manage the virtual network or storage accounts the VMs are connected to, fulfilling the stated restriction.
Developers: Storage Account Contributor
The Storage Account Contributor role allows users to manage Azure Storage accounts. This is exactly what developers need to fulfill their requirement.
Contractors: Storage Account Contributor
Since contractors also need to manage storage accounts, the Storage Account Contributor role is the appropriate choice for them as well.
Why other roles are not the best fit:
Owner: This role grants full control over Azure resources, including the ability to delegate access to others. That is more permission than any of the employee types require based on the stated requirements.
Reader: This role only allows users to view Azure resources, not make any changes. None of the employee types can fulfill their responsibilities with only Reader access.
You have an Azure virtual machine named VM1 and an Azure Active Directory (Azure AD) tenant named adatum.com.
VM1 has the following settings:
– IP address: 10.10.0.10
– System-assigned managed identity: On
You need to create a script that will run from within VM1 to retrieve the authentication token of VM1.
Which address should you use in the script?
vm1.adatum.com.onmicrosoft.com
169.254.169.254
10.10.0.10
vm1.adatum.com
Correct Answer:
169.254.169.254
Explanation:
The Magic IP Address: The IP address 169.254.169.254 is a special, non-routable IP address that is specifically used within Azure virtual machines for accessing the Instance Metadata Service (IMDS).
IMDS and Managed Identities: The IMDS is a REST API endpoint available on every Azure VM. When a VM has a system-assigned or user-assigned managed identity enabled, it can use IMDS to obtain an Azure AD authentication token. This token allows the VM to authenticate to other Azure services without needing to embed credentials within the application running on the VM.
How it Works:
Your script running inside VM1 makes an HTTP request to 169.254.169.254.
The IMDS service on the VM’s hypervisor captures this request and verifies that it originates from the VM.
If the VM has an assigned managed identity, the IMDS endpoint can then return an OAuth 2.0 access token that the application running in VM1 can use to authenticate against other Azure services.
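The flow above can be sketched in Python using only the standard library. The endpoint and api-version below match the documented IMDS token endpoint; the resource URI (Azure Resource Manager) is just an example, and the HTTP call itself succeeds only when run from inside an Azure VM with a managed identity:

```python
import json
import urllib.parse
import urllib.request

# Documented IMDS token endpoint; reachable only from inside an Azure VM.
IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_token_request(resource="https://management.azure.com/",
                        api_version="2018-02-01"):
    """Return the URL and headers for an IMDS managed-identity token request."""
    query = urllib.parse.urlencode({"api-version": api_version,
                                    "resource": resource})
    # The Metadata header is required; IMDS rejects requests without it
    # to block forwarded (SSRF-style) requests.
    return f"{IMDS_TOKEN_ENDPOINT}?{query}", {"Metadata": "true"}

def get_token():
    """Fetch an access token; this call works only from inside an Azure VM."""
    url, headers = build_token_request()
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

if __name__ == "__main__":
    url, headers = build_token_request()
    print(url)
    print(headers)
```

Note that no credential appears anywhere in the script; the VM’s identity is proven by the fact that the request originates from inside the VM.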
Why Other Options are Incorrect:
vm1.adatum.com.onmicrosoft.com: This is an FQDN (Fully Qualified Domain Name) and would not resolve to the internal metadata service IP address.
10.10.0.10: This is the private IP address of the VM. It does not expose the metadata service and cannot be used to fetch authentication tokens.
vm1.adatum.com: This is another FQDN and would not resolve to the internal metadata service IP address.
Important Tips for the AZ-305 Exam:
Managed Identities: This is a HUGE topic on the AZ-305 exam. You must thoroughly understand:
What they are and how they work with a VM or other Azure services.
System-assigned vs. User-assigned managed identities.
Why you should use them: to improve security and prevent credentials from being hardcoded.
How to assign a managed identity to an Azure resource.
How to grant the managed identity permissions to access other Azure resources.
Instance Metadata Service (IMDS):
Know what it is and what information it exposes.
Understand its purpose in accessing VM metadata and managed identities.
Know the magic IP address: 169.254.169.254. This is very important for the exam.
Be aware it’s a secure, local endpoint that can only be accessed within the VM.
Authentication Flow:
Understand the general authentication flow using managed identities: the VM requests a token from the IMDS endpoint, IMDS returns the token, and the application uses that token to authenticate to other Azure services.
Security: Managed identities enhance security by eliminating the need to store credentials within your application or configuration files. This is a strong security practice, hence it is often covered in exam questions.
Practice and Hands-on: Do practical exercises to create VMs, enable managed identities, and access tokens using the IMDS. This will reinforce your understanding. There are many free online labs to help you with that.
HOTSPOT
Your company has a virtualization environment that contains the virtualization hosts shown in the following table.
Name | Hypervisor | Guests
Server1 | VMware | VM1, VM2, VM3
Server2 | Hyper-V | VMA, VMB, VMC
Virtual Machines Configuration:
Name | Generation | Memory | Operating System (OS) | OS Disk | Data Disk
VM1 | Not applicable | 4 GB | Windows Server 2016 | 200 GB | 800 GB
VM2 | Not applicable | 12 GB | Red Hat Enterprise Linux 7.2 | 3 TB | 200 GB
VM3 | Not applicable | 32 GB | Windows Server 2012 R2 | 200 GB | 1 TB
VMA | 1 | 8 GB | Windows Server 2012 | 100 GB | 2 TB
VMB | 1 | 16 GB | Red Hat Enterprise Linux 7.2 | 150 GB | 3 TB
VMC | 2 | 24 GB | Windows Server 2016 | 500 GB | 6 TB
All the virtual machines use basic disks. VM1 is protected by using BitLocker Drive Encryption (BitLocker).
You plan to migrate the virtual machines to Azure by using Azure Site Recovery.
You need to identify which virtual machines can be migrated.
Which virtual machines should you identify for each server? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
The virtual machines that can be migrated from Server1:
VM1 only
VM2 only
VM3 only
VM1 and VM2 only
VM1 and VM3 only
VM1, VM2, and VM3
The virtual machines that can be migrated from Server2:
VMA only
VMB only
VMC only
VMA and VMB only
VMA and VMC only
VMA, VMB, and VMC
To determine which virtual machines can be migrated to Azure using Azure Site Recovery, we need to check the compatibility requirements and limitations of Azure Site Recovery. Key constraints are related to operating system, disk type, disk size, and specific features like BitLocker.
Azure Site Recovery Compatibility Considerations:
Supported Operating Systems: All listed operating systems (Windows Server 2016, Red Hat Enterprise Linux 7.2, Windows Server 2012 R2, Windows Server 2012) are generally supported by Azure Site Recovery for both VMware and Hyper-V.
Disk Type: Basic disks are supported for Azure Site Recovery.
Disk Size Limits: Azure Site Recovery limits the size of each replicated disk. At the time this question was written, the documented limits were 2 TB (2,048 GB) for the OS disk and 4 TB (4,095 GB) for each data disk.
BitLocker: The Azure Site Recovery support matrix requires BitLocker Drive Encryption to be disabled before you enable replication. Encryption does not permanently block migration, so a BitLocker-protected VM can be migrated once BitLocker is turned off.
Analyzing each Virtual Machine:
Server1 (VMware):
VM1:
OS: Windows Server 2016 (Supported)
Disk Sizes: OS Disk 200 GB, Data Disk 800 GB (Both within the limits)
BitLocker: Enabled; it must be disabled before replication, which is a supported pre-migration step.
Migratable
VM2:
OS: Red Hat Enterprise Linux 7.2 (Supported)
Disk Sizes: OS Disk 3 TB, Data Disk 200 GB (OS Disk exceeds the 2 TB OS disk limit)
Not Migratable
VM3:
OS: Windows Server 2012 R2 (Supported)
Disk Sizes: OS Disk 200 GB, Data Disk 1 TB (Both within the limits)
Migratable
Server2 (Hyper-V):
VMA:
Generation: 1 (Supported)
OS: Windows Server 2012 (Supported)
Disk Sizes: OS Disk 100 GB, Data Disk 2 TB (Both within the limits)
Migratable
VMB:
Generation: 1 (Supported)
OS: Red Hat Enterprise Linux 7.2 (Supported)
Disk Sizes: OS Disk 150 GB, Data Disk 3 TB (Both within the limits)
Migratable
VMC:
Generation: 2 (Supported)
OS: Windows Server 2016 (Supported)
Disk Sizes: OS Disk 500 GB, Data Disk 6 TB (Data Disk exceeds the 4,095 GB data disk limit)
Not Migratable
Conclusion:
Server1: VM1 and VM3 are within the supported limits and are migratable (BitLocker on VM1 must be disabled before replication is enabled). VM2 is not migratable because its 3 TB OS disk exceeds the 2 TB OS disk limit.
Server2: VMA and VMB are within the supported limits and are migratable. VMC is not migratable because its 6 TB data disk exceeds the 4,095 GB per-disk limit for Azure Site Recovery.
Therefore, the correct answer is:
The virtual machines that can be migrated from Server1: VM1 and VM3 only
The virtual machines that can be migrated from Server2: VMA and VMB only
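A quick way to sanity-check the table is to script the per-disk limits from the Azure Site Recovery support matrix of that era (2,048 GB for OS disks, 4,095 GB for data disks). These numbers are an assumption drawn from the question's timeframe; verify them against the current support matrix:

```python
# Disk limits assumed from the era of this question; verify in the ASR support matrix.
MAX_OS_DISK_GB = 2048
MAX_DATA_DISK_GB = 4095

# (os_disk_gb, [data_disk_gb, ...]) for each VM in the table.
vms = {
    "VM1": (200, [800]),
    "VM2": (3072, [200]),   # 3 TB OS disk
    "VM3": (200, [1024]),
    "VMA": (100, [2048]),
    "VMB": (150, [3072]),
    "VMC": (500, [6144]),   # 6 TB data disk
}

def migratable(os_gb, data_gbs):
    """A VM passes only if every disk is within its respective limit."""
    return os_gb <= MAX_OS_DISK_GB and all(d <= MAX_DATA_DISK_GB for d in data_gbs)

for name, (os_gb, data_gbs) in vms.items():
    print(name, "migratable" if migratable(os_gb, data_gbs) else "NOT migratable")
```

Under these limits the check flags VM2’s 3 TB OS disk and VMC’s 6 TB data disk, and passes the rest (BitLocker on VM1 is a separate pre-migration step, not a disk-size issue).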
You are designing an Azure solution.
The solution must meet the following requirements:
– Distribute traffic to different pools of dedicated virtual machines (VMs) based on rules.
– Provide SSL offloading capabilities.
You need to recommend a solution to distribute network traffic.
Which technology should you recommend?
Azure Application Gateway
Azure Load Balancer
Azure Traffic Manager
server-level firewall rules
Correct Answer:
Azure Application Gateway
Explanation:
Requirement 1: Distribute traffic based on rules:
Azure Application Gateway provides advanced routing capabilities, allowing you to direct traffic to different backend pools of VMs based on rules you define. These rules can be based on HTTP headers, URL paths, cookies, and more. This is a key distinguishing factor compared to Azure Load Balancer.
Requirement 2: Provide SSL Offloading:
Application Gateway can terminate SSL/TLS connections at the gateway level. This means the backend VMs don’t need to handle the overhead of encryption and decryption, freeing up their resources for application processing. This is a critical requirement that Azure Load Balancer can’t satisfy.
Why Other Options are Incorrect:
Azure Load Balancer: Azure Load Balancer distributes traffic at the transport layer (Layer 4) and balances TCP and UDP connections; it does not provide SSL offloading or the advanced rule-based routing that Application Gateway does.
Azure Traffic Manager: Azure Traffic Manager is a DNS-based load balancer used for global traffic routing. It directs users to the nearest or healthiest endpoint (for example, a different Azure region), but it does not route traffic to individual backend pools within a region the way Application Gateway does.
Server-level firewall rules: Server-level firewall rules provide network security; they do not distribute network traffic.
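The rule-based routing that distinguishes Application Gateway from Load Balancer can be pictured as a mapping from request properties (here, URL path prefixes) to backend pools. A toy sketch, with hypothetical pool names:

```python
# Hypothetical path-based routing rules, evaluated in order.
RULES = [
    ("/images/", "image-vm-pool"),
    ("/api/", "api-vm-pool"),
]
DEFAULT_POOL = "default-vm-pool"

def route(path):
    """Return the backend pool for a request path, Application Gateway-style."""
    for prefix, pool in RULES:
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route("/api/orders"))    # api-vm-pool
print(route("/index.html"))    # default-vm-pool
```

A Layer 4 load balancer sees only IP addresses and ports, so it cannot make this kind of decision; for HTTPS traffic, it is SSL termination at the gateway that makes the path visible in the first place.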
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use Azure AD Connect to customize the synchronization options.
Does this meet the goal?
Yes
No
Correct Answer:
Yes
Explanation:
Requirement: The goal is to synchronize only users from the on-premises Active Directory (contoso.local) to Azure AD (contoso.com) if they have a User Principal Name (UPN) suffix of contoso.com.
Proposed Solution: The solution suggests using Azure AD Connect to customize the synchronization options.
How Azure AD Connect Customization Works: Azure AD Connect provides a robust filtering mechanism that allows you to control which objects and attributes are synchronized to Azure AD. You can apply filtering based on:
Organizational Units (OUs): Sync only users from specific OUs.
Domains: Sync only users from a specific on-premises domain.
Attributes: Sync only users based on the value of a specific attribute, such as UPN suffix in this case.
Filtering based on UPN suffix: Azure AD Connect allows you to create a synchronization rule that filters users based on the UPN suffix, or on any other attribute. It is therefore possible to sync only the users whose UPN suffix is contoso.com.
Why It Meets the Goal: By customizing the synchronization rules in Azure AD Connect, you can configure a rule to check the UPN suffix for each user in contoso.local. Only users with a UPN suffix of contoso.com would be synchronized to Azure AD, achieving the desired outcome.
Important Tips for the AZ-305 Exam:
Azure AD Connect: This is a critical component for hybrid identity management. You need to have a deep understanding of its functions:
Synchronization: Understand how it synchronizes on-premises AD objects to Azure AD.
Filtering: How filtering works and how to configure it for domains, OUs, and attributes. This includes understanding how to customize synchronization rules to filter based on attribute value.
Password Hash Synchronization (PHS), Pass-through Authentication (PTA), and Federation.
Write-back features.
Synchronization Rules: Know how to customize synchronization rules. This includes understanding the syntax for filtering attributes and for applying transformation.
User Principal Name (UPN): Understand what a UPN is and how it is used in both on-premises Active Directory and Azure AD. You should know that the user logon name is the same as the UPN by default.
Hybrid Identity: Understand the concepts of hybrid identity and how Azure AD Connect facilitates it.
Real-World Scenarios: Be prepared for questions that require you to configure synchronization rules for specific scenarios.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use Synchronization Rules Editor to create a synchronization rule.
Does this meet the goal?
Yes
No
Correct Answer:
Yes
Explanation:
Requirement: The goal remains the same: to synchronize only users from the contoso.local Active Directory domain to the contoso.com Azure AD tenant if their UPN suffix is contoso.com.
Proposed Solution: This time, the solution suggests using the Synchronization Rules Editor.
Synchronization Rules Editor: The Synchronization Rules Editor is a tool that is part of Azure AD Connect. It provides a way to:
View existing synchronization rules.
Create new custom synchronization rules.
Modify existing synchronization rules.
Delete synchronization rules.
Set precedence on synchronization rules.
Essentially, it provides a more hands-on and granular way to control how objects are synchronized from the on-premises Active Directory to Azure AD.
How It Meets the Goal: The Synchronization Rules Editor enables you to create a custom rule specifically designed to filter users based on their UPN suffix. You can set a rule with a condition to check the userPrincipalName attribute. If the UPN ends with contoso.com, the rule will allow the synchronization. Otherwise, it will skip the synchronization. This allows the sync engine to filter only users with a UPN suffix of contoso.com.
Why It’s a Correct Approach: Both this solution and the previous one (“use Azure AD Connect to customize the synchronization options”) work by configuring a synchronization rule; this solution simply names the specific tool used to do it, the Synchronization Rules Editor. Using the Synchronization Rules Editor, you can achieve the desired filtering and ensure that only the correct users are synchronized.
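As a sketch of what such a custom inbound rule can look like (the attribute names userPrincipalName and cloudFiltered are the documented ones used for attribute-based filtering; the rule name and precedence value are illustrative assumptions):

```
Name:             In from AD - User filter by UPN suffix (custom)
Connected system: contoso.local
Direction:        Inbound, precedence below 100 (custom rules evaluate first)
Scoping filter:   userPrincipalName NOTENDSWITH "@contoso.com"
Transformation:   Constant -> cloudFiltered = True
```

Setting cloudFiltered to True for every user whose UPN does not end in @contoso.com tells the sync engine to skip those users, so only contoso.com users flow to Azure AD.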
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use the Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector.
Does this meet the goal?
Yes
No
Correct Answer:
No
Explanation:
Requirement: The core requirement remains: to synchronize only users with a UPN suffix of contoso.com from the contoso.local domain to the contoso.com Azure AD tenant.
Proposed Solution: This solution suggests using the Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector.
Synchronization Service Manager: The Synchronization Service Manager is a tool within Azure AD Connect that is used to:
Monitor the synchronization process.
View synchronization errors.
Manage connectors and their configuration.
Run delta and full synchronizations.
While you can modify some settings for the AD DS connector within Synchronization Service Manager, you cannot create granular attribute-based filtering rules using this tool alone.
Why It Fails to Meet the Goal: The Synchronization Service Manager does not provide the ability to directly filter based on the value of a specific user attribute like the UPN suffix. You can modify which attributes are synchronized through the connector, and you can manage which OUs and domains to include or exclude. However, you cannot set a condition on attribute values. Therefore, modifying the AD DS Connector in the Synchronization Service Manager will not allow you to filter users based on the value of the UPN suffix.
Important Tips for the AZ-305 Exam:
Synchronization Service Manager: Understand the role of this tool and its limitations.
It’s primarily for monitoring, error diagnosis, and basic connector management.
It does not replace the need for the Synchronization Rules Editor for advanced filtering and attribute mapping.
Do not confuse the purpose of the Synchronization Service Manager with the Synchronization Rules Editor.
Azure AD Connect Components: Understand all the different tools that come with Azure AD Connect, and their use.
Filtering: This exam emphasizes filtering rules for a reason. Be very familiar with filtering based on OUs, domains and attributes.
Attribute Filtering: Know the limitations of filtering specific user attributes such as the UPN suffix.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses data from two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to move DB1 and DB2 to Azure.
You need to implement Azure services to host DB1 and DB2. The solution must support server-side transactions across DB1 and DB2.
Solution: You deploy DB1 and DB2 to SQL Server on an Azure virtual machine.
Does this meet the goal?
Yes
No
Correct Answer:
Yes
Explanation:
Requirement: The main requirement is to move two on-premises SQL Server databases (DB1 and DB2) to Azure while maintaining the ability to perform server-side transactions across both databases.
Proposed Solution: The solution suggests deploying both DB1 and DB2 to SQL Server on an Azure virtual machine (VM).
How This Solution Works:
SQL Server on Azure VM: When you deploy SQL Server on an Azure VM, you essentially have full control over a SQL Server instance running on a Windows Server in Azure.
Server-Side Transactions: SQL Server on an Azure VM retains the full functionality of a traditional SQL Server instance. In this environment, transactions can span different databases within the same instance, and cross-instance transactions are also possible through linked servers, although that is not the main capability being tested here. The key requirement is server-side transactions across DB1 and DB2, and SQL Server on a VM satisfies it.
Why It Meets the Goal: By hosting both databases on the same SQL Server instance within a VM, you retain the ability to run server-side transactions across them using standard T-SQL, which is exactly what the requirement asks for.
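As a minimal illustration (the table names are hypothetical), a server-side transaction spanning both databases on one instance is plain T-SQL:

```sql
-- Runs entirely on the SQL Server instance that hosts both databases.
-- DB1.dbo.Orders and DB2.dbo.Inventory are hypothetical objects.
BEGIN TRANSACTION;

UPDATE DB1.dbo.Orders
SET Status = 'Shipped'
WHERE OrderId = 42;

UPDATE DB2.dbo.Inventory
SET Quantity = Quantity - 1
WHERE ProductId = 7;

COMMIT TRANSACTION;  -- both updates commit together, or neither does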
Important Tips for the AZ-305 Exam:
SQL Server on Azure VM:
Understand that this is essentially a lift-and-shift of your on-premises SQL Server environment to an Azure VM.
You have full control over the SQL Server instance, similar to on-premises.
You are responsible for VM maintenance, patching, backup, etc.
This is the natural option when you want to migrate an on-premises database to the cloud with minimal disruption, so be familiar with it.
Server-Side Transactions: Understand what server-side transactions are and how they differ from client-side transactions.
Server-side transactions are executed on the SQL Server (or database server) and provide ACID (Atomicity, Consistency, Isolation, Durability) properties.
This type of transaction is initiated on the database server.
Be aware of this, as it is tested very often on the AZ-305 exam.
Azure SQL Options:
Be familiar with the different SQL options in Azure: Azure SQL VM, Azure SQL Database, Azure SQL Managed Instance.
Understand the scenarios where each option is appropriate.
Cross-Database Transactions: Understand the mechanism to handle transactions across different SQL servers (linked servers, distributed transactions).
Migration: Understand the different approaches to migrating an on-premises SQL Server to Azure.
Real World Application: Understand when it is best to choose a SQL VM over the other solutions, such as database as a service.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB database that contains a container named Container1. The partition key for Container1 is set to /day. Container1 contains the items shown in the following table.
Name Content
Item1 { "id": "1", "day": "Mon", "value": "10" }
Item2 { "id": "2", "day": "Mon", "value": "15" }
Item3 { "id": "3", "day": "Tue", "value": "10" }
Item4 { "id": "4", "day": "Wed", "value": "15" }
You need to programmatically query Azure Cosmos DB and retrieve Item1 and Item2 only.
Solution: You run the following query.
SELECT id FROM c
WHERE c.day = "Mon" OR c.day = "Tue"
You set the EnableCrossPartitionQuery property to False.
Does this meet the goal?
Yes
No
Correct Answer:
No
Explanation:
Requirement: The goal is to programmatically retrieve only Item1 and Item2 from the Cosmos DB container Container1.
Proposed Solution: The solution proposes using the following query:
SELECT id FROM c
WHERE c.day = "Mon" OR c.day = "Tue"
and setting the EnableCrossPartitionQuery property to False.
Partitioning in Cosmos DB:
Cosmos DB uses partitioning to distribute data across physical storage.
The partition key determines how data is distributed and where it’s stored.
In this case, the partition key is /day. This means that all items with day = “Mon” will be stored in one partition, items with day = “Tue” will be in a different one, and so on.
EnableCrossPartitionQuery = False:
When this property is set to False, Cosmos DB will only query a single partition.
This is to optimize cost by preventing the query from scanning every partition.
Why It Fails:
Query Results: The query SELECT id FROM c WHERE c.day = "Mon" OR c.day = "Tue" matches every item whose day is Mon or Tue, which is Item1, Item2, and Item3, not just Item1 and Item2.
Cross-partitioning disabled: Because the WHERE clause spans two partition key values (Mon and Tue), this is a cross-partition query. With EnableCrossPartitionQuery set to False and no single partition key supplied in the request options, Cosmos DB rejects the query with an error rather than silently scanning one partition.
Why it’s wrong: Even setting the partitioning issue aside, the solution fails the requirement twice over. The WHERE clause matches three items rather than two, and SELECT id projects only the id property, while the goal is to retrieve the items themselves; that would require SELECT * FROM c.
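Setting the partition issue aside, the projection and filter can be simulated in plain Python (a stand-in for the Cosmos DB SQL, not the SDK) to show the wrong result set:

```python
# The four items from the table.
items = [
    {"id": "1", "day": "Mon", "value": "10"},
    {"id": "2", "day": "Mon", "value": "15"},
    {"id": "3", "day": "Tue", "value": "10"},
    {"id": "4", "day": "Wed", "value": "15"},
]

# Stand-in for: SELECT id FROM c WHERE c.day = "Mon" OR c.day = "Tue"
result = [{"id": item["id"]} for item in items
          if item["day"] in ("Mon", "Tue")]

print(result)  # three rows, and only the id property of each
```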
Important Tips for the AZ-305 Exam:
Cosmos DB Partitioning: Thoroughly understand partitioning concepts, partition keys, logical and physical partitions.
Cross-Partition Queries: Understand what a cross partition query is, and know the effect of enabling or disabling them.
Be aware that it can impact cost and performance.
Querying Cosmos DB: Be familiar with the SQL API syntax for querying Cosmos DB.
SQL Statement: Know how to return entire items by using SELECT * FROM c.
Performance: Know how to optimize Cosmos DB queries for performance, including choosing the correct partition key and avoiding cross-partition queries when possible.
Real-World Scenarios: The exam often presents scenarios where you must create efficient Cosmos DB queries to retrieve specific items.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB database that contains a container named Container1. The partition key for Container1 is set to /day. Container1 contains the items shown in the following table.
Name Content
Item1 { "id": "1", "day": "Mon", "value": "10" }
Item2 { "id": "2", "day": "Mon", "value": "15" }
Item3 { "id": "3", "day": "Tue", "value": "10" }
Item4 { "id": "4", "day": "Wed", "value": "15" }
You need to programmatically query Azure Cosmos DB and retrieve Item1 and Item2 only.
Solution: You run the following query.
SELECT day FROM c
WHERE c.value = "10" OR c.value = "15"
You set the EnableCrossPartitionQuery property to True.
Does this meet the goal?
Yes
No
Correct Answer:
No
Explanation:
Requirement: The goal is to programmatically retrieve only Item1 and Item2 from the Cosmos DB container Container1.
Proposed Solution: The solution suggests using the following query:
SELECT day FROM c
WHERE c.value = "10" OR c.value = "15"
and setting the EnableCrossPartitionQuery property to True.
How This Solution Works:
The Query: The SQL query SELECT day FROM c WHERE c.value = “10” OR c.value = “15” aims to retrieve the day attribute of all items in the container where the value is either 10 or 15.
Cross Partition Query: Setting EnableCrossPartitionQuery to True means the query will scan all partitions of the container.
Why It Fails to Meet the Goal:
Incorrect Result Set: The proposed query will return all items with a value of 10 or 15. This means it will return Item1, Item2, Item3, and Item4. However, the requirement is to return only Item1 and Item2.
Incorrect Projection: Also, the SELECT day FROM c statement will only return the day property of the document instead of the entire document. The requirement is to retrieve the full items Item1 and Item2.
Important Tips for the AZ-305 Exam:
Cosmos DB Querying: You should be very familiar with the SQL syntax used for querying Cosmos DB. Know that the SELECT clause determines which attribute will be in the output.
Partitioning: Understand how the partition key affects querying and performance. Know what is a cross partition query.
EnableCrossPartitionQuery:
Know the purpose and implications of using this property.
Be aware of the performance and cost implications.
Correct Query Conditions: Carefully assess the query conditions to make sure they match the required results set.
SELECT Clause: Understand the difference between the SELECT * and SELECT <field> clauses.
Real-World Application: In the exam, you need to make sure your query is returning the right item, with the correct properties. You need to understand that SELECT determines what attributes will be returned in the output.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB database that contains a container named Container1. The partition key for Container1 is set to /day. Container1 contains the items shown in the following table.
Item1 { “id”: “1”, “day”: “Mon”, “value”: “10” }
Item2 { “id”: “2”, “day”: “Mon”, “value”: “15” }
Item3 { “id”: “3”, “day”: “Tue”, “value”: “10” }
Item4 { “id”: “4”, “day”: “Wed”, “value”: “15” }
You need to programmatically query Azure Cosmos DB and retrieve Item1 and Item2 only.
Solution: You run the following query.
SELECT day FROM c
WHERE c.value = "10" OR c.value = "15"
You set the EnableCrossPartitionQuery property to True.
Does this meet the goal?
Yes
No
The goal is to retrieve only Item1 and Item2 from the Azure Cosmos DB container.
The provided solution uses the following query:
SELECT day FROM c
WHERE c.value = "10" OR c.value = "15"
and sets EnableCrossPartitionQuery to True.
Let’s analyze the data and the query:
Container Container1 has a partition key /day.
The items are:
Item1: { "id": "1", "day": "Mon", "value": "10" }
Item2: { "id": "2", "day": "Mon", "value": "15" }
Item3: { "id": "3", "day": "Tue", "value": "10" }
Item4: { "id": "4", "day": "Wed", "value": "15" }
The query SELECT day FROM c WHERE c.value = "10" OR c.value = "15" will select the day field from all items (FROM c) that satisfy the condition c.value = "10" OR c.value = "15".
Let’s check which items satisfy the condition:
Item1: c.value = "10" is true. Item1 is selected.
Item2: c.value = "15" is true. Item2 is selected.
Item3: c.value = "10" is true. Item3 is selected.
Item4: c.value = "15" is true. Item4 is selected.
Therefore, the query will retrieve Item1, Item2, Item3, and Item4. The SELECT day FROM c part only specifies that the output will contain only the day field from each of these items, but it still selects all four items based on the WHERE clause.
The goal was to retrieve only Item1 and Item2. The provided solution retrieves Item1, Item2, Item3, and Item4. Thus, the solution does not meet the goal.
Setting EnableCrossPartitionQuery to True is necessary for this query to work across all partitions, as the query does not filter based on the partition key (/day). However, enabling cross-partition query does not change which items are selected based on the WHERE clause.
To retrieve only Item1 and Item2, you would need a query that specifically targets these items, for example by using their id values in the WHERE clause, like WHERE c.id IN ("1", "2").
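The analysis above can be checked with a pure-Python simulation of the two WHERE clauses (this is just an in-memory sketch, not the Cosmos DB SDK; the item data is copied from the question):

```python
# Items from the question, as plain dictionaries.
items = [
    {"id": "1", "day": "Mon", "value": "10"},
    {"id": "2", "day": "Mon", "value": "15"},
    {"id": "3", "day": "Tue", "value": "10"},
    {"id": "4", "day": "Wed", "value": "15"},
]

# Proposed query's filter: WHERE c.value = "10" OR c.value = "15"
# -> matches every item, not just Item1 and Item2.
proposed = [c["id"] for c in items if c["value"] in ("10", "15")]

# A filter that actually meets the goal: WHERE c.id IN ("1", "2").
targeted = [c["id"] for c in items if c["id"] in ("1", "2")]

print(proposed)  # ['1', '2', '3', '4']
print(targeted)  # ['1', '2']
```

This makes the gap concrete: the value-based filter selects all four items, while an id-based filter returns only the two required items.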
Final Answer: No
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use the Synchronization Service Manager to modify the Metaverse Designer tab.
Does this meet the goal?
Yes
No
Correct Answer:
No
Explanation:
Requirement: The goal remains: to synchronize only users from the contoso.local Active Directory domain to the contoso.com Azure AD tenant if their UPN suffix is contoso.com.
Proposed Solution: This solution suggests using the Synchronization Service Manager to modify the Metaverse Designer tab.
What is the Metaverse?
The Metaverse is a central, shared data store used by Azure AD Connect to hold objects during synchronization. It is not persistent: on every synchronization cycle, the engine reads objects from the connected data sources (such as Active Directory and Azure AD), processes them through the synchronization rules, and saves the results in the metaverse.
Objects from different connected data sources are represented as metaverse objects.
Synchronization Service Manager and Metaverse Designer Tab: The Synchronization Service Manager is a tool to monitor, manage, and troubleshoot the synchronization process. The Metaverse Designer tab is a viewer within the Synchronization Service Manager that allows you to:
See the schema of the metaverse.
Inspect the attributes and rules that apply to metaverse objects.
View object properties.
It does not let you modify the synchronization rules or the behavior that controls which objects are loaded into the metaverse or synchronized to Azure AD; it is a read-only view of the metadata.
Why It Fails to Meet the Goal: The Metaverse Designer tab in the Synchronization Service Manager is a viewing tool, not a configuration tool. You cannot modify synchronization behavior and filtering rules directly through this interface. It provides a way to see how attributes of your synchronized object are mapped and how rules are processed. However, the Metaverse Designer cannot be used to control which objects get loaded into the metaverse in the first place, and it cannot apply filters based on specific attributes of the users.
Important Tips for the AZ-305 Exam:
Azure AD Connect Components: Have a solid understanding of all the tools that come with Azure AD Connect.
Synchronization Service Manager: Be familiar with all the tabs in this tool: Operations, Connectors, Metaverse Search, Metaverse Designer, Connector Space, Lineage. Know which activities you can perform in each tab.
Metaverse: You must understand the role of the metaverse in Azure AD Connect.
Filtering: Be aware that filtering has to happen before the object is loaded in the metaverse. The Metaverse Designer can only be used to view metadata but not to filter.
Correct Tool for Task: It’s crucial to use the right tool for the task. For filtering based on a specific attribute (like UPN suffix), you must use the Synchronization Rules Editor.
Real-World Scenarios: In the exam, you’ll often be asked to choose the correct tool for a given scenario.
HOTSPOT
You have an Azure subscription that contains a resource group named RG1.
You have a group named Group1 that is assigned the Contributor role for RG1.
You need to enhance security for the virtual machines in RG1 to meet the following requirements:
– Prevent Group1 from assigning external IP addresses to the virtual machines.
– Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address.
What should you use to meet each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Prevent Group1 from assigning external IP addresses to the virtual machines:
Azure Policy
Azure Bastion
Virtual network service endpoints
Azure Web Application Firewall (WAF)
Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address:
Azure Policy
Azure Bastion
Virtual network service endpoints
Azure Web Application Firewall (WAF)
Correct Answer Area:
Prevent Group1 from assigning external IP addresses to the virtual machines:
Azure Policy
Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address:
Azure Bastion
Explanation:
Let’s analyze each requirement and why the selected options are correct.
Requirement 1: Prevent Group1 from assigning external IP addresses to the virtual machines.
Azure Policy: Azure Policy allows you to define and enforce rules (policies) on your Azure resources. You can create a policy that denies creating or modifying resources to add public IP addresses to VMs in your subscription. Azure Policy can restrict any action performed through the Azure control plane and can enforce security, compliance, governance, cost control, and more. For example, you could restrict users' ability to add a public IP to a VM, remove one, or change a VM's public IP configuration.
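As a minimal, illustrative sketch (not a complete policy definition), a policy rule along these lines could deny the creation of public IP address resources in its assigned scope:

```json
{
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "Microsoft.Network/publicIPAddresses"
    },
    "then": {
      "effect": "deny"
    }
  }
}
```

Assigned to the subscription or to RG1, a rule like this blocks members of Group1 from creating public IP resources even though their Contributor role would otherwise allow it.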
Why Other Options are Incorrect:
Azure Bastion: Azure Bastion is a service that provides secure RDP/SSH access to VMs but does not control whether a VM can have an external IP address.
Virtual network service endpoints: Service endpoints restrict access to Azure PaaS services (e.g., SQL Database, Storage Account) to specific virtual networks, but they are not relevant to this requirement.
Azure Web Application Firewall (WAF): WAF protects web applications from common attacks but does not control resource provisioning.
Requirement 2: Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address.
Azure Bastion: Azure Bastion allows users to connect to their VMs directly through the Azure portal over a secure connection and through a single shared external IP address. Instead of exposing RDP/SSH ports on the VMs directly to the internet, you establish secure access via Bastion, using either the Azure portal or a native RDP client with Bastion acting as a jump server.
Why Other Options are Incorrect:
Azure Policy: Azure Policy does not provide remote access to VMs.
Virtual network service endpoints: Service endpoints don’t enable RDP/SSH connections to VMs.
Azure Web Application Firewall (WAF): WAF protects web applications but does not provide remote access.
Important Tips for the AZ-305 Exam:
Azure Policy: This is a very important topic for the AZ-305 exam. You should have a very solid understanding:
What is Azure Policy: You need to know how it enforces the standards across your Azure resources.
How to Define a Policy: You should know how to define a policy using the Azure portal, the CLI, PowerShell, or Terraform.
How to Assign a Policy: You should know how to assign Azure Policies at different scopes.
How to evaluate Azure Policies.
Different scenarios where Azure Policy applies.
Azure Bastion:
Understand that this is a secure, managed service for remote access to VMs.
Know the benefits of Bastion compared to directly exposing RDP/SSH ports to the internet.
Be familiar with different connection methods via Bastion.
Security: Pay attention to security aspects of Azure services. Azure Policy helps enforce security policies, while Azure Bastion provides secure access.
RBAC: This question highlights how RBAC and Azure Policy work together: RBAC assigns permissions, while Azure Policy provides the guardrails.
Real-World Scenarios: Be prepared to choose between various Azure services based on requirements.
You create a container image named Image1 on a developer workstation.
You plan to create an Azure Web App for Containers named WebAppContainer that will use Image1.
You need to upload Image1 to Azure. The solution must ensure that WebAppContainer can use Image1.
To which storage type should you upload Image1?
an Azure Storage account that contains a blob container
Azure Container Instances
Azure Container Registry
an Azure Storage account that contains a file share
Correct Answer:
Azure Container Registry
Explanation:
Requirement: The goal is to upload a container image (Image1) created on a developer workstation to Azure so that an Azure Web App for Containers (WebAppContainer) can use it.
Why Azure Container Registry is the Correct Choice:
Container Registry: Azure Container Registry (ACR) is a managed, private Docker registry service. It’s specifically designed to store and manage your private Docker container images.
Integration with Azure Services: ACR is tightly integrated with other Azure services such as Azure Web App for Containers, Azure Kubernetes Service (AKS), Azure Container Instances (ACI), etc. These services are designed to retrieve container images from a container registry (such as ACR) and deploy the containers based on the image definition.
Security: ACR provides secure storage for container images and supports authentication for accessing images. This is critical because you don’t want unauthorized access to your private container images.
Image Management: ACR allows you to manage versions of your container images and supports advanced features such as geo-replication.
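The typical workflow for getting a local image into ACR uses the Azure CLI and Docker (illustrative commands; `<registry-name>` is a placeholder for your own registry, and Docker repository names must be lowercase, hence `image1`):

```
az acr login --name <registry-name>
docker tag image1 <registry-name>.azurecr.io/image1:v1
docker push <registry-name>.azurecr.io/image1:v1
```

Once pushed, WebAppContainer can be configured to pull `image1:v1` from the registry.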
Why Other Options are Incorrect:
Azure Storage account that contains a blob container: Azure Storage blobs are designed for storing unstructured data, not for storing and managing container images. While you could technically store a container image in a blob, the Azure Web App for Containers service doesn’t directly use a storage blob container to get a container image.
Azure Container Instances: Azure Container Instances (ACI) is a serverless compute option for running containers, but it is not a container image registry. While ACI can retrieve and run container images from a registry, it is not a registry itself.
Azure Storage account that contains a file share: Azure file shares are designed for storing file system data, not for storing container images. It’s not designed to be a container registry and is not integrated with Azure Web App for Containers.
You have an Azure Cosmos DB account named Account1. Account1 includes a database named DB1 that contains a container named Container1. The partition key for Container1 is set to /city.
You plan to change the partition key for Container1.
What should you do first?
Delete Container1.
Create a new container in DB1.
Implement the Azure Cosmos DB .NET SDK.
Regenerate the keys for Account1.
Correct Answer:
Create a new container in DB1.
Explanation:
The Problem: Immutable Partition Keys: In Azure Cosmos DB, the partition key you choose for a container is immutable. This means that once you set the partition key for a container, you cannot change it.
Why the Other Options are Incorrect:
Delete Container1: While deleting the container would allow you to create a new container with a different partition key, it will also delete all the data inside the container. This is not ideal, and in most scenarios, you would want to maintain the data.
Implement the Azure Cosmos DB .NET SDK: While you need the .NET SDK to interact with Cosmos DB programmatically, it is not related to the act of changing the partition key.
Regenerate the keys for Account1: Regenerating account keys is a security measure and is not related to the partition key change process.
The Correct Approach:
Create a new container: The first step is to create a new container in your database DB1. You’ll set the desired new partition key for this new container.
Migrate the data: Next, migrate all the data from the original Container1 to the new container. You can write an application or use a data migration tool to read data from Container1 and write it to the new container.
Application Changes: You’ll need to update your application to now read and write data to this new container with the new partition key.
Delete the old container: Once the migration is complete and the application has been updated, then you can delete Container1.
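The migration flow above can be sketched in memory (the "containers" here are plain Python lists standing in for Cosmos DB containers; a real migration would use the Cosmos SDK or a migration tool, and the `userId` partition key is an illustrative assumption):

```python
# Old container, partitioned on /city.
old_container = [
    {"id": "1", "city": "Paris", "userId": "u1"},
    {"id": "2", "city": "Paris", "userId": "u2"},
]

# Step 1: create the new container first, with the new partition key (/userId).
new_container = []

# Step 2: read every item from the old container and write it to the new one.
for item in old_container:
    new_container.append(dict(item))

# Steps 3-4: after the application is updated to use the new container,
# the old container can be deleted.
old_container.clear()

print(len(new_container))  # 2
```

The key point the sketch captures is the ordering: the new container must exist and hold all the data before the old one is removed.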
Important Tips for the AZ-305 Exam:
Cosmos DB Partitioning: You must understand the importance of partitioning and the concept of partition keys. It’s a key aspect of Cosmos DB.
Immutable Partition Keys: Know that a container’s partition key cannot be changed once it’s set. This is a very important characteristic of Cosmos DB.
Migration: Understand that you must migrate the data to a new container if you have to change your partition key.
Data Migration: Understand how to use the Azure SDK or a data migration tool to migrate the data.
SDK: Understand that while SDKs are important for interacting with Azure services, they are not part of the core infrastructure design.
Security: Know the different security mechanisms, such as regenerating keys, and how they affect your application.
You have an Azure subscription that contains 10 virtual machines on a virtual network.
You need to create a graph visualization to display the traffic flow between the virtual machines.
What should you do from Azure Monitor?
From Activity log, use quick insights.
From Metrics, create a chart.
From Logs, create a new query.
From Workbooks, create a workbook.
Correct Answer:
From Workbooks, create a workbook.
Explanation:
Requirement: The goal is to visualize the traffic flow between 10 virtual machines (VMs) on an Azure virtual network.
Why Azure Monitor Workbooks are the Right Choice:
Visualizations: Azure Monitor Workbooks allow you to create rich, interactive visualizations, including graphs, charts, and maps. They are excellent for combining different data sources into a single, informative view.
Traffic Flow: You can use workbooks to create a graph visualization that shows the connections between the VMs and the data that is flowing through those connections.
Customization: You can fully customize your workbooks to display different metrics, log data, or other types of information.
Data Sources: Workbooks provide an intuitive way to integrate different data sources, including Azure Monitor Log Analytics workspaces and Application Insights, to give you a comprehensive overview of your environment.
Why Other Options are Incorrect:
From Activity log, use quick insights: The Activity Log records events related to resource management; it does not track or visualize network traffic. Quick insights provide information on successful or failed operations.
From Metrics, create a chart: Azure Monitor Metrics tracks performance data such as CPU, memory, and network usage. While you can see network usage, metrics charts cannot show the flow of traffic between VMs as a graph; they provide numeric values, not graph visualizations.
From Logs, create a new query: Azure Monitor Logs lets you query logs using Kusto Query Language (KQL). You could write a query about traffic flow, but the results are not displayed as a graph. Logs are an excellent data source for a workbook, but on their own they do not provide the required visual representation.
HOTSPOT
You plan to create an Azure Storage account in the Azure region of East US 2.
You need to create a storage account that meets the following requirements:
– Replicates synchronously
– Remains available if a single data center in the region fails
How should you configure the storage account? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Replication:
Geo-redundant storage (GRS)
Locally-redundant storage (LRS)
Read-access geo-redundant storage (RA-GRS)
Zone-redundant storage (ZRS)
Account type:
Blob storage
Storage (general purpose v1)
StorageV2 (general purpose v2)
Correct Answer Area:
Replication:
Zone-redundant storage (ZRS)
Account type:
StorageV2 (general purpose v2)
Explanation:
Requirement 1: Replicates synchronously
Synchronous Replication: This means data is written to multiple storage locations simultaneously and acknowledged only after all writes are confirmed. This guarantees data consistency between storage locations.
Zone-Redundant Storage (ZRS): ZRS replicates data synchronously across three availability zones within a single Azure region. This ensures high availability and data durability even if one data center (zone) fails.
Requirement 2: Remains available if a single data center in the region fails
Zone-Redundant Storage (ZRS): By replicating the data to three different availability zones in the same region, ZRS will keep the storage available even if there is a single data center failure.
Why Other Replication Options are Incorrect:
Geo-redundant storage (GRS): GRS replicates data asynchronously to a paired region, which provides protection against regional disaster but is not needed for high availability.
Locally-redundant storage (LRS): LRS replicates data within a single data center, which does not protect against data center failures.
Read-access geo-redundant storage (RA-GRS): RA-GRS is the same as GRS except that the data can also be read from the secondary region; replication is still asynchronous.
Why StorageV2 (general purpose v2) is correct:
StorageV2 (general purpose v2) is the latest and recommended storage account type. It supports all storage services (blobs, files, queues, tables) and the latest features, such as ZRS, and provides a better pricing model.
Why Other Account types are incorrect:
Blob storage: This account type is optimized for blob storage only; its lack of support for other storage services makes it an inappropriate option here.
Storage (general purpose v1): This is an older storage account type and is not recommended for new deployments. It lacks many of the newer features that StorageV2 provides.
Important Tips for the AZ-305 Exam:
Azure Storage Redundancy: This is a crucial topic for the AZ-305 exam. You MUST understand the different storage redundancy options:
LRS (Locally-redundant storage): Data is copied within a single data center.
ZRS (Zone-redundant storage): Data is copied across three availability zones within the same region.
GRS (Geo-redundant storage): Data is copied to a paired region.
RA-GRS (Read-access geo-redundant storage): Data is copied to a paired region and can be read from the secondary region.
Synchronous vs. Asynchronous Replication: Understand the difference between these replication types. Synchronous replication is needed for high availability within the region, and asynchronous for disaster recovery.
Availability Zones: Be aware of the concept of availability zones and how they provide resilience.
Storage Account Types: Know the purpose and capabilities of different storage account types:
StorageV2: The latest storage account type, providing many of the newest features.
Blob storage: Designed for unstructured data such as images and videos.
File storage: Designed for file shares for virtual machines.
Storage (general purpose v1): An older version, not recommended for new deployments.
Data Durability: Understand which storage option provides the best data durability and fault tolerance.
Cost: Be aware that the more fault tolerant the storage is, the more expensive it is.
Real-World Scenarios: The exam often presents scenarios where you need to choose the right storage redundancy based on specific requirements (availability, durability, cost).
HOTSPOT
You plan to deploy an Azure virtual machine named VM1 by using an Azure Resource Manager template.
You need to complete the template.
What should you include for Scope1 and Scope2 in the template? To answer, select the appropriate options in the answer area.
a) Microsoft.Network/publicIPAddresses/
b) Microsoft.Network/virtualNetworks/
c) Microsoft.Network/networkInterfaces/
d) Microsoft.Network/virtualNetworks/subnets
e) Microsoft.Storage/storageAccounts/
NOTE: Each correct selection is worth one point.
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-10-01",
  "name": "VM1",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts/', variables('Name3'))]",
    "[resourceId(Scope1, variables('Name4'))]"
  ]
},
{
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2018-11-01",
  "name": "NIC1",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/publicIPAddresses/', variables('Name1'))]",
    "[resourceId(Scope2, variables('Name2'))]"
  ]
}
Correct Answer Area:
Scope1: Microsoft.Network/networkInterfaces/
Scope2: Microsoft.Network/virtualNetworks/
Explanation:
Understanding ARM Template resourceId() Function:
The resourceId() function in an ARM template is used to construct the fully qualified ID of a resource. It takes a resource provider namespace and resource type along with optional parent resource IDs as parameters to form the resource ID string.
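As a toy illustration of the ID string the function produces, here is a simplified Python reimplementation (an assumption for illustration only; the real function is evaluated by Resource Manager and can infer the subscription and resource group from the deployment context):

```python
def resource_id(subscription, resource_group, resource_type, name):
    """Build an ARM-style fully qualified resource ID.

    resource_type is the provider namespace plus type,
    e.g. "Microsoft.Network/networkInterfaces".
    """
    provider, _, type_name = resource_type.strip("/").partition("/")
    return (f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
            f"/providers/{provider}/{type_name}/{name}")

# Illustrative values; "0000-sub" stands in for a subscription ID.
nic_id = resource_id("0000-sub", "RG1",
                     "Microsoft.Network/networkInterfaces/", "NIC1")
print(nic_id)
```

Printed output has the shape `/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Network/networkInterfaces/NIC1`, which is the string the dependsOn entries resolve to.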
Virtual Machine Resource (Microsoft.Compute/virtualMachines):
The dependsOn property here indicates the dependencies of the VM.
"[resourceId('Microsoft.Storage/storageAccounts/', variables('Name3'))]" refers to the storage account on which the VM's OS disk will be stored.
"[resourceId(Scope1, variables('Name4'))]" refers to a resource whose type is given by Scope1. This is the network interface, because the resource name is referred to by the variable Name4, which in a typical VM deployment is the network interface. Therefore, Scope1 must be Microsoft.Network/networkInterfaces/.
Network Interface Resource (Microsoft.Network/networkInterfaces):
The dependsOn property here specifies the dependencies of the NIC.
"[resourceId('Microsoft.Network/publicIPAddresses/', variables('Name1'))]" refers to the public IP address, if the NIC is to be connected to one.
"[resourceId(Scope2, variables('Name2'))]" refers to a resource whose type is given by Scope2. This is the virtual network, because the resource name is referred to by the variable Name2, which in a typical VM deployment is the virtual network. Therefore, Scope2 must be Microsoft.Network/virtualNetworks/.
Why other scopes are incorrect:
Microsoft.Network/publicIPAddresses/: The public IP address resource is already referenced in the NIC's dependsOn entry.
Microsoft.Network/virtualNetworks/subnets: The subnet is not a dependency at this level.
Microsoft.Storage/storageAccounts/: The storage account resource is already referenced in the VM's dependsOn entry.
HOTSPOT
Your network contains an Active Directory domain named adatum.com and an Azure Active Directory (Azure AD) tenant named adatum.onmicrosoft.com.
Adatum.com contains the user accounts in the following table.
Name Member of
User1 Domain Admins
User2 Schema Admins
User3 Incoming Forest Trust Builders
User4 Replicator
User5 Enterprise Admins
Adatum.onmicrosoft.com contains the user accounts in the following table
Name Role
UserA Global administrator
UserB User administrator
UserC Security administrator
UserD Service administrator
You need to implement Azure AD Connect. The solution must follow the principle of least privilege.
Which user accounts should you use in Adatum.com and Adatum.onmicrosoft.com to implement Azure AD Connect? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Adatum.com:
User1
User2
User3
User4
User5
Adatum.onmicrosoft.com:
UserA
UserB
UserC
UserD
Adatum.com: User4
Explanation: To implement Azure AD Connect, the account used on the on-premises Active Directory side needs read access to the directory to synchronize objects. Membership in the Replicator group provides the permissions needed to read directory information for replication purposes. This aligns with the principle of least privilege, as it avoids using highly privileged accounts such as Domain Admins or Enterprise Admins.
Adatum.onmicrosoft.com: UserA
Explanation: To implement Azure AD Connect in Azure AD, you need an account with Global administrator permissions. This is required for the initial setup and configuration of Azure AD Connect, including creating the Azure AD Connector account and setting up the synchronization rules.
Therefore, the correct selections are:
Adatum.com: User4
Adatum.onmicrosoft.com: UserA
Why other options are incorrect:
Adatum.com:
User1 (Domain Admins): Has excessive permissions. Violates the principle of least privilege.
User2 (Schema Admins): Has permissions to modify the Active Directory schema, which is far more than needed for Azure AD Connect. Violates the principle of least privilege.
User3 (Incoming Forest Trust Builders): This account is specifically for creating trust relationships and is not directly relevant to Azure AD Connect’s synchronization needs.
User5 (Enterprise Admins): Has the highest level of permissions in the Active Directory forest. Violates the principle of least privilege.
Adatum.onmicrosoft.com:
UserB (User administrator): While this role can manage users, it does not have the permissions needed for the initial setup and configuration of Azure AD Connect.
UserC (Security administrator): This role focuses on security-related tasks and does not have the permissions required for Azure AD Connect setup.
UserD (Service administrator): This role does not include the directory permissions needed for Azure AD Connect setup. Global Administrator is generally required for the initial setup.
You have an Azure subscription that contains 100 virtual machines.
You have a set of Pester tests in PowerShell that validate the virtual machine environment.
You need to run the tests whenever there is an operating system update on the virtual machines. The solution must minimize implementation time and recurring costs.
Which three resources should you use to implement the tests? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Azure Automation runbook
an alert rule
an Azure Monitor query
a virtual machine that has network access to the 100 virtual machines
an alert action group
Correct Answer:
Azure Automation runbook
an alert rule
an alert action group
Explanation:
Requirement: The goal is to run Pester tests automatically whenever there’s an OS update on any of the 100 VMs, while minimizing setup time and costs.
Why these options are correct:
Azure Automation runbook:
This is where you store the logic for running the Pester tests. You create a PowerShell script (runbook) within Azure Automation that contains the logic to execute your Pester tests; the script is stored in and executed by the Azure Automation service.
You can use PowerShell commands to connect to the virtual machines and execute the Pester tests, or use Azure Automation DSC (Desired State Configuration) or Azure VM extensions for this.
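For context, a minimal Pester test file that such a runbook might invoke could look like this (a hedged sketch; `$vmName` is an illustrative variable, and the actual checks depend on what the environment validation requires):

```powershell
Describe 'VM environment' {
    It 'responds on the network' {
        # Illustrative check: the VM answers a single ping.
        Test-Connection -ComputerName $vmName -Count 1 -Quiet | Should -BeTrue
    }
}
```

The runbook would typically loop over the VMs and run `Invoke-Pester` against a set of test files like this one.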
An Alert Rule:
This detects operating system updates on the virtual machines. You can create an alert rule on the Microsoft.Compute/virtualMachines resource type that triggers when a specific event is generated, such as an OS patch installation.
Alert rules allow you to define conditions that trigger actions.
An Alert Action Group:
This is used to call the Azure Automation runbook when the alert rule is triggered. When the operating system update event is detected, the alert action group will be triggered and will call the Azure Automation runbook to execute the Pester tests.
Action groups define the actions that occur when an alert is triggered, such as sending an email or SMS message, calling a Logic App, or calling an Azure Automation runbook, which is what we want to accomplish here.
Why Other Options are Incorrect:
An Azure Monitor query: While a query can be useful for investigation and analyzing the logs, this is not required in this solution. The Alert rule and Action group will provide the core functionality for the automation we are trying to implement.
A virtual machine that has network access to the 100 virtual machines: You don't need an additional VM just to run the tests; they can be executed from the Azure Automation runbook using the credentials and network connectivity it already has. A dedicated VM would add operational and management overhead plus recurring cost, which the solution is trying to minimize.
Important Tips for the AZ-305 Exam:
Azure Automation: You must know the details about Azure Automation, especially its purpose and the way you can automate tasks using Runbooks.
Know how to create, configure, and trigger runbooks.
Understand how to use PowerShell with Azure Automation.
Azure Monitor: You need to know how Azure Monitor is used to observe your Azure resources.
Alerts:
Understand how to create alert rules based on metrics and logs.
Know how to configure action groups to take actions when an alert is triggered.
Pester: Know what Pester is and how it can be used to test infrastructure.
Real-World Automation: Be prepared to design automated solutions that use Azure services for complex processes.
Cost Optimization: Pay attention to cost minimization in your designs. Avoid unnecessary resources.
DevOps mindset: Understand the concepts and processes of DevOps.
HOTSPOT
You have an Azure subscription that contains multiple resource groups.
You create an availability set as shown in the following exhibit.
Create availability set
*Name
AS1
*Subscription
Azure Pass
*Resource group
RG1
Create new
*Location
West Europe
Fault domains
2
Update domains
3
Use managed disks
No (Classic)  Yes (Aligned)
You deploy 10 virtual machines to AS1.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
During planned maintenance, at least [answer choice]
virtual machines will be available.
▼
4
5
6
8
To add another virtual machine to AS1, the virtual machine
must be added to [answer choice].
any region and the RG1 resource group
the West Europe region and any resource group
the West Europe region and the RG1 resource group
Statement 1: During planned maintenance, at least [6] virtual machines will be available.
Explanation: Availability sets provide protection against planned maintenance (Azure updates) by distributing virtual machines across update domains. With 3 update domains, Azure will update these domains sequentially. In the worst-case scenario, all virtual machines in one update domain will be unavailable during maintenance.
Worst-case distribution: To find the minimum number available, consider the most uneven distribution possible across the 3 update domains. For instance, you could have 4 VMs in UD1, 3 VMs in UD2, and 3 VMs in UD3. When UD1 is being updated, the 3 + 3 = 6 VMs in the other domains are still available. Therefore, at least 6 VMs will be available.
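The worst-case arithmetic above can be sketched in a few lines of Python (a hedged illustration of the reasoning, assuming the round-robin spread of VMs across update domains described above, not how Azure internally schedules maintenance):

```python
# Worst-case availability during planned maintenance, assuming VMs are
# spread round-robin across update domains (UDs).
def min_available(vm_count: int, update_domains: int) -> int:
    per_ud = [vm_count // update_domains] * update_domains
    for i in range(vm_count % update_domains):
        per_ud[i] += 1          # 10 VMs over 3 UDs -> [4, 3, 3]
    # One UD is serviced at a time; the worst case loses the fullest UD.
    return vm_count - max(per_ud)

print(min_available(10, 3))  # 6
```

The same function answers any variant of this question: with 2 update domains and 10 VMs, for example, only 5 would be guaranteed available.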
Statement 2: To add another virtual machine to AS1, the virtual machine must be added to [the West Europe region and the RG1 resource group].
Explanation:
Region: Availability sets are a regional resource. All virtual machines within an availability set must reside in the same Azure region as the availability set itself. AS1 is located in West Europe.
Resource Group: While an availability set exists within a resource group, the individual virtual machines within that availability set also need to be in the same resource group. AS1 is in RG1.
Therefore, the correct options are:
Statement 1: 6
Statement 2: the West Europe region and the RG1 resource group
HOTSPOT
You have an Azure subscription that contains the resource groups shown in the following table.
Name Location
RG1 West US
RG2 East US
You create an Azure Resource Manager template named Template1 as shown in the following exhibit.
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name": {
      "type": "String"
    },
    "location": {
      "defaultValue": "westus",
      "type": "String"
    }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2019-11-01",
      "name": "[parameters('name')]",
      "location": "[variables('location')]",
      "sku": {
        "name": "Basic"
      },
      "properties": {
        "publicIPAddressVersion": "IPv4",
        "publicIPAllocationMethod": "Dynamic",
        "idleTimeoutInMinutes": 4,
        "ipTags": []
      }
    }
  ]
}
From the Azure portal, you deploy Template1 four times by using the settings shown in the following table.
Resource group Name Location
RG1 IP1 westus
RG1 IP2 westus
RG2 IP1 westus
RG2 IP3 westus
What is the result of the deployment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Number of public IP addresses in West US:
▼
1
2
3
4
Total number of public IP addresses created:
▼
1
2
3
4
Answer Area:
Number of public IP addresses in West US: 2
Total number of public IP addresses created: 4
Explanation:
Let’s analyze each deployment:
Deployment 1 (RG1, IP1, westus):
The template’s variables.location is set to [resourceGroup().location].
Since the resource group is RG1, which is in West US, the public IP address IP1 will be created in West US.
Deployment 2 (RG1, IP2, westus):
Again, variables.location resolves to the resource group’s location (RG1, West US).
The public IP address IP2 will be created in West US.
Deployment 3 (RG2, IP1, westus):
The resource group is RG2, which is in East US.
Even though the deployment specifies “westus” for the location parameter, the resource’s location property references variables(‘location’), which resolves to the resource group’s location; the parameter value is never used.
The public IP address IP1 will be created in East US. Note that the name “IP1” is reused, but it’s allowed since it’s in a different resource group.
Deployment 4 (RG2, IP3, westus):
Similar to deployment 3, the resource group is RG2 (East US).
Public IP address IP3 will be created in East US.
Therefore:
Public IP addresses in West US: IP1 and IP2 (2 total)
Total public IP addresses created: IP1 (West US), IP2 (West US), IP1 (East US), IP3 (East US) (4 total)
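The four deployments can be traced with a short Python sketch (an illustration of the template logic discussed above; the resource-group locations come from the table, and the tuple key mirrors the rule that a name must be unique only within its resource group):

```python
# How [resourceGroup().location] resolves for each deployment. The westus
# value passed as the "location" parameter is never used by the template.
rg_location = {"RG1": "westus", "RG2": "eastus"}
deployments = [("RG1", "IP1"), ("RG1", "IP2"), ("RG2", "IP1"), ("RG2", "IP3")]

# (resource group, name) uniquely identifies a public IP, so the two IP1
# deployments land in different resource groups and both succeed.
created = {(rg, name): rg_location[rg] for rg, name in deployments}

print(sum(1 for loc in created.values() if loc == "westus"))  # 2
print(len(created))                                           # 4
```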
Tips for the AZ-305 Exam (and similar Azure exams):
Understand ARM Template Evaluation: Pay close attention to how ARM templates evaluate expressions. Here the resource’s location references variables(‘location’) rather than parameters(‘location’), so the value supplied for the location parameter is ignored.
Resource Group Scope: Remember that many resources are scoped to a resource group. The resourceGroup() function is very useful for accessing resource group properties within a template.
Variable Usage: Understand how variables can be used to dynamically set properties based on other template inputs or Azure context.
Deployment Scope vs. Resource Location: Be aware that the location specified during deployment can be different from the actual location where the resource ends up if the template logic dictates otherwise (like using resourceGroup().location).
Naming Conflicts in Resource Groups: Know that resource names must be unique within a resource group but can be reused across different resource groups.
Practice with ARM Templates: The best way to understand ARM templates is to write and deploy them. Experiment with different functions and scenarios.
Focus on Key Functions: Be familiar with commonly used ARM template functions like parameters(), variables(), resourceGroup(), subscription(), etc.
You have an Azure subscription that contains an Azure Log Analytics workspace.
You have a resource group that contains 100 virtual machines. The virtual machines run Linux.
You need to collect events from the virtual machines to the Log Analytics workspace.
Which type of data source should you configure in the workspace?
Syslog
Linux performance counters
custom fields
Correct Answer:
Syslog
Explanation:
Requirement: The goal is to collect events from Linux VMs and send them to an Azure Log Analytics workspace.
Why Syslog is the Correct Choice:
Syslog Standard: Syslog is a standard protocol for message logging in Linux systems. Many applications and services on Linux use Syslog to generate their logs.
Log Collection: The Log Analytics agent for Linux (which runs on the VM) is configured to use Syslog as its primary source of event data. It can collect logs from different Syslog facilities, such as auth, cron, daemon, and many more.
Centralized Logging: By configuring Syslog in the Log Analytics workspace, you enable centralized collection of system events, making it easier to analyze and troubleshoot issues across multiple VMs.
Why Other Options are Incorrect:
Linux performance counters: While performance counters (such as CPU, memory, disk) are important, they are not the source of event logs and are separate from the Syslog functionality. Performance counters provide metrics whereas Syslog provides logs.
Custom fields: Custom fields are used to define additional data fields in your log data, but they are not a data source in themselves. You would need another source (like Syslog) to actually create the log, and then custom fields can be added.
You have a virtual network named VNet1 as shown in the exhibit. (Click the Exhibit tab.)
Resource group (change)
Production
Location
West US
Subscription (change)
Production subscription
Subscription ID
12ab3cd4-5e67-8901-f234-g5hi67jkl8m9
Tags (change)
Click here to add tags
Connected devices
Search connected devices
DEVICE TYPE IP ADDRESS SUBNET
No results.
Address space
10.2.0.0/16
DNS servers
Azure provided DNS service
No devices are connected to VNet1.
You plan to peer VNet1 to another virtual network named VNet2. VNet2 has an address space of 10.2.0.0/16.
You need to create the peering.
What should you do first?
Configure a service endpoint on VNet2.
Add a gateway subnet to VNet1.
Create a subnet on VNet1 and VNet2.
Modify the address space of VNet1.
Correct Answer:
Modify the address space of VNet1.
Explanation:
Virtual Network Peering Requirements:
Virtual network peering enables you to connect two or more virtual networks in Azure. The virtual networks can be in the same or different Azure regions.
One of the fundamental requirements for virtual network peering is that the virtual networks must have non-overlapping address spaces. If the address spaces overlap, Azure cannot establish a route between the networks, and peering will fail.
Current Situation:
VNet1 has an address space of 10.2.0.0/16.
VNet2 has an address space of 10.2.0.0/16.
The address spaces overlap, therefore peering is not possible at this time.
The Correct First Step:
The first step is to modify the address space of either VNet1 or VNet2 (or both) so that their address spaces no longer overlap. Since the answer choices offer a change only to VNet1, you must modify the address space of VNet1 first.
Why Other Options are Incorrect:
Configure a service endpoint on VNet2: Service endpoints restrict access to Azure PaaS resources (for example, storage accounts) and are not related to the virtual network peering process.
Add a gateway subnet to VNet1: A gateway subnet is required for VPN or ExpressRoute connections, and it’s not relevant to virtual network peering.
Create a subnet on VNet1 and VNet2: While subnets are required within a virtual network, you do not need to create subnets in the virtual networks for the peering. It will also not solve the overlapping CIDR problem, therefore this is not a correct option.
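The overlap can be verified with Python’s standard ipaddress module (the 10.3.0.0/16 range below is only an illustrative replacement space, not part of the question):

```python
# Why the peering fails: the two address spaces overlap.
from ipaddress import ip_network

vnet1 = ip_network("10.2.0.0/16")
vnet2 = ip_network("10.2.0.0/16")
print(vnet1.overlaps(vnet2))  # True -> peering is blocked

# After moving VNet1 to a non-overlapping range, peering becomes possible.
print(ip_network("10.3.0.0/16").overlaps(vnet2))  # False
```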
HOTSPOT
You have an Azure Resource Manager template for a virtual machine named Template1. Template1 has the following parameters section.
"parameters": {
  "adminUsername": {
    "type": "string"
  },
  "adminPassword": {
    "type": "securestring"
  },
  "dnsLabelPrefix": {
    "type": "string"
  },
  "windowsOSVersion": {
    "type": "string",
    "defaultValue": "2016-Datacenter",
    "allowedValues": [
      "2016-Datacenter",
      "2019-Datacenter"
    ]
  },
  "location": {
    "type": "String",
    "allowedValues": [
      "eastus",
      "centralus",
      "westus"
    ]
  }
},
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Statements Yes No
When you deploy Template1 by using the Azure portal, you are prompted for a resource group.
When you deploy Template1 by using the Azure portal, you are prompted for the Windows operating system version.
When you deploy Template1 by using the Azure portal, you are prompted for a location.
Statement 1: Yes. When deploying any Azure resource through the portal, you are always asked to select or create a resource group. The resource group acts as a container for your resources.
Statement 2: Yes. The windowsOSVersion parameter has a defaultValue but also allowedValues. The Azure portal still presents this parameter to the user, allowing them to either accept the default value or choose from the allowed options. Thus, you are “prompted” with the choice.
Statement 3: Yes. The location parameter has allowedValues but no defaultValue. Since there’s no default, the Azure portal must prompt the user to select a location from the allowed list during deployment.
Therefore, the correct answer is:
Statement 1: Yes
Statement 2: Yes
Statement 3: Yes
Tips for the AZ-305 Exam Related to this Question:
Understanding ARM Template Structure: Be very familiar with the different sections of an ARM template (parameters, variables, resources, outputs). Know what each section does.
Parameter Properties: Pay close attention to the properties of parameters, especially:
type: Understand the different data types (string, int, bool, object, securestring, array).
defaultValue: Know that if a defaultValue is present, the user might not be required to enter a value, but they will see the option.
allowedValues: Understand that this restricts the user’s choices to a specific set of values. If there’s no defaultValue, the user must choose from these.
Azure Portal Deployment Experience: Have a general understanding of the flow when deploying resources through the Azure portal, including when deploying from a template. You’ll be presented with the parameters defined in the template.
Resource Group Importance: Remember that a resource group is a fundamental requirement for deploying Azure resources.
SecureString: Know that securestring is used for sensitive data like passwords and is handled differently by Azure (often masked in the portal).
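The portal’s must-prompt behavior can be approximated with a small Python sketch (an informal model, not the portal’s actual implementation; the dict mirrors the parameters section shown above):

```python
# Every parameter is displayed, but only those without a defaultValue
# force the user to supply a value before deployment can proceed.
parameters = {
    "adminUsername":    {"type": "string"},
    "adminPassword":    {"type": "securestring"},
    "dnsLabelPrefix":   {"type": "string"},
    "windowsOSVersion": {"type": "string", "defaultValue": "2016-Datacenter",
                         "allowedValues": ["2016-Datacenter", "2019-Datacenter"]},
    "location":         {"type": "String",
                         "allowedValues": ["eastus", "centralus", "westus"]},
}

must_supply = [n for n, p in parameters.items() if "defaultValue" not in p]
print(must_supply)  # ['adminUsername', 'adminPassword', 'dnsLabelPrefix', 'location']
```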
HOTSPOT
You have an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains the users shown in the following table.
Name Member of
User1 Group1
User2 Group2
The tenant contains computers that run Windows 10. The computers are configured as shown in the following table.
Name Member of
Computer1 GroupA
Computer2 GroupA
Computer3 GroupB
You enable Enterprise State Roaming in contoso.com for Group1 and GroupA.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Statements Yes No
If User1 modifies the desktop background of Computer1, User1 will see the changed background when signing in to Computer3.
If User2 modifies the desktop background of Computer1, User2 will see the changed background when signing in to Computer2.
If User1 modifies the desktop background of Computer3, User1 will see the changed background when signing in to Computer2.
Therefore, the correct options are:
Statement 1: No
Statement 2: No
Statement 3: No
Why Correct:
The core principle of Azure AD Enterprise State Roaming is that settings only synchronize when both the user and the device are within the defined scope of enablement. In this scenario, Enterprise State Roaming is enabled for:
Users: Members of Group1 (only User1)
Devices: Members of GroupA (Computer1 and Computer2)
Let’s analyze each statement again with this in mind:
Statement 1: While User1 is enabled, Computer3 is not, preventing roaming.
Statement 2: While Computer1 and Computer2 are enabled, User2 is not, preventing roaming.
Statement 3: The initial change happens on Computer3, which is not enabled, preventing the change from being roamed, even though User1 and Computer2 are enabled.
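The three verdicts follow from a single predicate, sketched here in Python (an informal model of the roaming rule, using the group memberships from the tables above):

```python
# Settings roam only if the user, the device where the change is made,
# and the device where it should appear are all in the enabled scope.
enabled_users = {"User1"}                      # members of Group1
enabled_devices = {"Computer1", "Computer2"}   # members of GroupA

def setting_roams(user, changed_on, seen_on):
    return (user in enabled_users
            and changed_on in enabled_devices
            and seen_on in enabled_devices)

print(setting_roams("User1", "Computer1", "Computer3"))  # False: Computer3 not enabled
print(setting_roams("User2", "Computer1", "Computer2"))  # False: User2 not enabled
print(setting_roams("User1", "Computer3", "Computer2"))  # False: change made out of scope
```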
Tips for the AZ-305 Exam:
Understand the Scope: The most critical aspect is understanding the scope of Enterprise State Roaming. Pay very close attention to which user groups and device groups are explicitly enabled. Anything outside of these groups is excluded.
Both User and Device Must Be Enabled: This is the fundamental rule. For settings to roam, both the user and the device involved must be within the enabled scope.
Origin of Change Matters: If a setting is changed on a device that is not enabled for roaming, that change will not synchronize to other devices, even if the user and the other devices are enabled.
Read Carefully: Pay very close attention to the user and computer memberships in each statement. Misreading this information is a common mistake.
Focus on “Enabled For”: The question clearly states “You enable Enterprise State Roaming… for Group1 and GroupA.” This is your key information.
Visualize: It can be helpful to quickly jot down or mentally visualize which users and computers are in the enabled groups to avoid confusion.
Think Logically: Break down each scenario step-by-step. Is the user enabled? Is the initial device enabled? Is the target device enabled? If any of these are “no,” then roaming will not occur.
HOTSPOT
You have an Azure Resource Manager template named Template1 in the library as shown in the following exhibit.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "apiVersion": "2016-01-01",
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[concat(copyIndex(), 'storage', uniqueString(resourceGroup().id))]",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "Premium_LRS"
      },
      "kind": "Storage",
      "properties": {},
      "copy": {
        "name": "storagecopy",
        "count": 3,
        "mode": "Serial",
        "batchSize": 1
      }
    }
  ]
}
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
During the deployment of Template1,
you can specify [answer choice].
the number of resources to deploy
the name of the resources to deploy
the resource group to which to deploy the resources
the permissions for the resources that will be deployed
Template1 deploys [answer choice].
a single storage account in one resource group
three storage accounts in one resource group
three resource groups that each has one storage account
three resource groups that each has three storage accounts
Statement 1: During the deployment of Template1, you can specify [the resource group to which to deploy the resources].
Why Correct: When deploying an Azure Resource Manager template, a fundamental requirement is to specify the resource group where the resources defined in the template will be created. The template itself defines what resources will be created and how, but the deployment process dictates where they will reside.
Statement 2: Template1 deploys [three storage accounts in one resource group].
Why Correct: Let’s analyze the template:
type: “Microsoft.Storage/storageAccounts”: This indicates that the template will deploy storage accounts.
copy: { “name”: “storagecopy”, “count”: 3, … }: The copy element with “count”: 3 specifies that three instances of the defined resource (storage account) will be created.
location: “[resourceGroup().location]”: This indicates that all the storage accounts will be deployed to the same resource group where the template deployment is targeted.
Therefore, the correct options are:
Statement 1: the resource group to which to deploy the resources
Statement 2: three storage accounts in one resource group
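The names produced by the copy loop can be sketched in Python. Note that the hash suffix below is a made-up placeholder: uniqueString() is a deterministic 13-character hash computed by Azure from the resource group ID, and its algorithm is not reproduced here:

```python
# copyIndex() is zero-based, so the loop produces indices 0, 1, 2.
suffix = "abc123def456x"  # placeholder for uniqueString(resourceGroup().id)

names = [f"{i}storage{suffix}" for i in range(3)]
print(names)
# ['0storageabc123def456x', '1storageabc123def456x', '2storageabc123def456x']
```

Because the copy block sets "mode": "Serial" with "batchSize": 1, the three storage accounts are created one at a time rather than in parallel.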
Tips for the AZ-305 Exam (and similar Azure exams):
ARM Template Structure is Key: Be very familiar with the basic structure of an ARM template, including the parameters, variables, and resources sections. Understand the purpose of each section.
Understanding the copy Loop: The copy loop is a powerful feature for deploying multiple instances of a resource. Pay close attention to the count property, as this determines how many resources will be created.
Resource Scope and Deployment: Remember that when you deploy a template, you deploy it to a specific resource group. Resources defined within the template, unless explicitly specified otherwise, will be created within that target resource group. The resourceGroup().location function reinforces this.
Template Functions: Be familiar with commonly used ARM template functions like resourceGroup(), concat(), uniqueString(), and copyIndex(). Understand what they do and how they manipulate values within the template.
Deployment Time vs. Template Definition: Understand what aspects of a deployment are determined by the template itself and what can be specified during the deployment process. In this case, the number of resources is defined in the template, but the target resource group is specified during deployment.
Practice with Templates: The best way to become proficient with ARM templates is to write and deploy them. Experiment with different features and scenarios.
Read the Exhibits Carefully: The provided exhibit contains all the information needed to answer the questions. Pay close attention to the details within the JSON structure.
HOTSPOT
Your company hosts multiple websites by using Azure virtual machine scale sets (VMSS) that run Internet Information Server (IIS).
All network communications must be secured by using end-to-end Secure Sockets Layer (SSL) encryption. User sessions must be routed to the same server by using cookie-based session affinity.
The image shown depicts the network traffic flow for the websites to the VMSS.
An incoming IP address routes traffic through the load balancer.
The load balancer directs traffic based on hostname:
Requests for www.tailspintoys.com are routed to a backend pool of servers hosting this domain.
Requests for www.wingtiptoys.com are routed to another backend pool of servers hosting this domain.
Use the drop-down menus to select the answer choice that answers each question.
NOTE: Each correct selection is worth one point.
Which Azure solution should you create to route the web application traffic to the VMSS?
Azure VPN Gateway
Azure Application Gateway
Azure ExpressRoute
Azure Network Watcher
What should you configure to make sure web traffic arrives at the appropriate server in the VMSS?
Routing rules and backend listeners
CNAME and A records
Routing method and DNS time to live (TTL)
Path-based redirection and WebSockets
Question 1: Which Azure solution should you create to route the web application traffic to the VMSS?
The correct answer is Azure Application Gateway.
Why Application Gateway is the right choice: Application Gateway is a layer-7 load balancer, meaning it can make routing decisions based on HTTP headers, such as the hostname (www.tailspintoys.com, www.wingtiptoys.com). It also provides built-in SSL offloading and end-to-end SSL encryption capabilities, which are requirements in your scenario. Additionally, it natively supports cookie-based session affinity.
Why other options are incorrect:
Azure VPN Gateway: VPN Gateway is used to establish secure connections between on-premises networks and Azure virtual networks, not for load balancing web traffic.
Azure ExpressRoute: ExpressRoute is a dedicated, private connection between your on-premises network and Azure, again not suited for public-facing website load balancing.
Azure Network Watcher: Network Watcher is a diagnostic and troubleshooting tool for network issues, not a load balancing solution.
Question 2: What should you configure to make sure web traffic arrives at the appropriate server in the VMSS?
The correct answer is Routing rules and backend listeners.
Why routing rules and backend listeners are correct: In Application Gateway, you configure listeners to listen on specific ports (e.g., 443 for HTTPS). Then, you create routing rules that map incoming hostnames (like www.tailspintoys.com) to different backend pools. These backend pools are associated with the VMSS instances hosting each website. This combination ensures that traffic for each hostname is directed to the correct VMSS.
Why other options are incorrect:
CNAME and A records: While DNS records (CNAME and A) are necessary for directing traffic to Application Gateway’s public IP, they are not sufficient for routing traffic within Application Gateway to the correct backend pools.
Routing method and DNS TTL: The routing method within Application Gateway (e.g., round-robin, least connections) is not directly related to hostname-based routing. DNS TTL (Time To Live) affects how long DNS records are cached, but doesn’t influence traffic routing to specific backend pools.
Path-based redirection and WebSockets: Path-based redirection is for redirecting traffic based on URL paths, not hostnames. WebSockets are a communication protocol, unrelated to routing decisions based on hostnames.
DRAG DROP
You have an Azure subscription that contains two virtual networks named VNet1 and VNet2. Virtual machines connect to the virtual networks.
The virtual networks have the address spaces and the subnets configured as shown in the following table.
Virtual network  Address space  Subnets                   Peering
VNet1            10.1.0.0/16    10.1.0.0/24, 10.1.1.0/26  VNet2
VNet2            10.2.0.0/26    10.2.0.0/24               VNet1
You need to add the address space of 10.33.0.0/16 to VNet1. The solution must ensure that the hosts on VNet1 and VNet2 can communicate.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions
On the peering connection in VNet2, allow gateway transit.
Recreate peering between VNet1 and VNet2.
Remove VNet1.
Create a new virtual network named VNet1.
On the peering connection in VNet1, allow gateway transit.
Add the 10.33.0.0/16 address space to VNet1.
Remove peering between VNet1 and VNet2.
Answer Area
Remove peering between VNet1 and VNet2.
Add the 10.33.0.0/16 address space to VNet1.
Recreate peering between VNet1 and VNet2.
Here’s why this is the correct approach and why other options are incorrect:
Why removing peering is necessary: Azure doesn’t allow you to modify the address space of a virtual network while it has an active peering connection. You must remove the peering before making the address space change.
Why adding the address space is next: This is the core task you need to accomplish. Once the peering is removed, you’re free to add the 10.33.0.0/16 address space.
Why recreating peering is last: After the address space modification, you need to re-establish connectivity between VNet1 and VNet2, which is done by recreating the peering.
Why other options are incorrect:
Allowing gateway transit: Gateway transit is used when you want traffic from one virtual network to flow through another virtual network’s virtual network gateway to reach an on-premises network or another virtual network. It’s not relevant to simply adding an address space and enabling direct communication between VNet1 and VNet2.
Removing/Creating VNet1: Deleting and recreating an entire virtual network is a drastic and unnecessary step. This would involve rebuilding all virtual machines, network interfaces, and other resources within the virtual network, leading to significant downtime and effort. The problem can be solved by simply modifying the existing VNet1.
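Before recreating the peering, you can sanity-check with Python’s standard ipaddress module that the new address space will not collide with VNet2 (prefixes taken from the table above):

```python
# Peering requires non-overlapping address spaces, so verify the new
# VNet1 range against VNet2 before re-establishing the connection.
from ipaddress import ip_network

new_space = ip_network("10.33.0.0/16")
vnet2 = ip_network("10.2.0.0/26")
print(new_space.overlaps(vnet2))  # False -> safe to recreate the peering
```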
You have an Azure App Service app.
You need to implement tracing for the app. The tracing information must include the following:
– Usage trends
– AJAX call responses
– Page load speed by browser
– Server and browser exceptions
What should you do?
Configure IIS logging in Azure Log Analytics.
Configure a connection monitor in Azure Network Watcher.
Configure custom logs in Azure Log Analytics.
Enable the Azure Application Insights site extension.
The correct answer is to Enable the Azure Application Insights site extension.
Here’s why:
Application Insights is specifically designed for application performance monitoring and diagnostics, including the specific requirements you listed:
Usage trends: Application Insights automatically collects data on usage patterns, including page views, user sessions, and other key metrics.
AJAX call responses: It tracks AJAX calls, providing details on response times and any errors.
Page load speed by browser: Application Insights measures page load times and breaks them down by browser type.
Server and browser exceptions: It captures both server-side exceptions (occurring in your app code) and client-side exceptions (happening in the user’s browser).
The other options are not suitable for this specific task:
IIS logs in Azure Log Analytics: While you could get some of this information (like page load times and server errors) from IIS logs, you wouldn’t get the detailed client-side information like AJAX call performance and browser exceptions. It’s also more complex to configure and query.
Connection monitor in Azure Network Watcher: Connection Monitor is for diagnosing network connectivity issues. It won’t provide application-level performance data like page load speeds or AJAX responses.
Custom logs in Azure Log Analytics: While flexible, custom logging requires you to manually instrument your code to collect the specific data points you need. Application Insights provides much of this functionality automatically, making it a far more efficient solution.
HOTSPOT
You have an Azure subscription named Subscription1. Subscription1 contains the resources in the following table.
Name Type
RG1 Resource group
RG2 Resource group
VNet1 Virtual network
VNet2 Virtual network
VNet1 is in RG1. VNet2 is in RG2. There is no connectivity between VNet1 and VNet2.
An administrator named Admin1 creates an Azure virtual machine named VM1 in RG1. VM1 uses a disk named Disk1 and connects to VNet1. Admin1 then installs a custom application in VM1.
You need to move the custom application to VNet2. The solution must minimize administrative effort.
Which two actions should you perform? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
First action:
Create a network interface in RG2.
Detach a network interface.
Delete VM1.
Move a network interface to RG2.
Second action:
Attach a network interface.
Create a network interface in RG2.
Create a new virtual machine.
Move VM1 to RG2.
First action: Delete VM1.
A network interface is permanently associated with the subnet (and therefore the virtual network) in which it was created; it cannot be moved to a different virtual network. Because VM1 and its network interface are bound to VNet1, the only way to move the application is to delete VM1. Deleting the VM does not delete Disk1, which contains the custom application.
Second action: Create a new virtual machine.
Create a new virtual machine that connects to VNet2 and attach Disk1 as its operating system disk. Because the disk is reused, the custom application does not have to be reinstalled, which minimizes administrative effort.
Why other options are not correct:
Detaching and attaching a network interface: A network interface can be detached from one VM and attached to another, but only within the same virtual network, so this cannot move the application to VNet2. A VM must also keep at least one network interface attached.
Moving a network interface or VM1 to RG2: Moving resources between resource groups does not change their virtual network association; VM1 would still be connected to VNet1.
Creating a network interface in RG2: A new network interface does not help on its own, because the existing VM1 cannot be connected to a network interface in VNet2.
You have an Azure subscription that contains the storage accounts shown in the following table.
Name Contains
storagecontoso1 A blob service and a table service
storagecontoso2 A blob service and a file service
storagecontoso3 A queue service
storagecontoso4 A file service and a queue service
storagecontoso5 A table service
You enable Storage Advanced Threat Protection (ATP) for all the storage accounts.
You need to identify which storage accounts will generate Storage ATP alerts.
Which two storage accounts should you identify? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
storagecontoso1
storagecontoso2
storagecontoso3
storagecontoso4
storagecontoso5
Storage ATP only generates alerts for blob containers. Therefore, the storage accounts that will generate alerts are those containing a blob service:
storagecontoso1: Contains a blob service.
storagecontoso2: Contains a blob service.
The other storage accounts do not have blob services and thus will not generate Storage ATP alerts.
HOTSPOT
Your company has an Azure Container Registry named Registry1.
You have an Azure virtual machine named Server1 that runs Windows Server 2019.
From Server1, you create a container image named image1 and then tag image1.
You need to add image1 to Registry1.
Which command should you run on Server1? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
docker
AzCopy
Robocopy
esentutl
push
registry1.azurecr.io
registry1.onmicrosoft.com
https://registry1.onmicrosoft.com
\registry1.blob.core.windows.net
/image1
The correct command is constructed as follows:
docker push: This is the core Docker command to push an image to a registry.
registry1.azurecr.io: This is the correct format for the Azure Container Registry login server name.
/image1: This represents the image name being pushed, including any tags you’ve added.
Therefore, the complete command (assuming image1 is properly tagged) would look like this:
docker push registry1.azurecr.io/image1:latest
or, if you used a different tag:
docker push registry1.azurecr.io/image1:<your_tag>
Why other options are incorrect:
AzCopy / Robocopy / esentutl: These are file transfer utilities, not relevant for interacting with a container registry.
registry1.onmicrosoft.com / https://registry1.onmicrosoft.com / \registry1.blob.core.windows.net: These are not the correct formats for addressing an Azure Container Registry. Azure CR uses the .azurecr.io domain.
It’s important to note that you would likely need to authenticate to your Azure Container Registry before you can push images, for example with the Azure CLI:
az acr login --name registry1  # gets credentials from Azure and logs Docker in
or, less securely, by using the registry’s admin account credentials directly:
docker login registry1.azurecr.io -u <username> -p <password>
This ensures that Docker is authenticated with your registry and has the necessary permissions to push images.
HOTSPOT
You are developing an Azure Web App. You configure TLS mutual authentication for the web app.
You need to validate the client certificate in the web app. To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Property
Client certificate location:
HTTP request header
Client cookie
HTTP message body
URL query string
Encoding type:
HTML
URL
Unicode
Base64
Client certificate location: HTTP request header
Why this is correct: With TLS/SSL mutual authentication, the client certificate is presented to the server as part of the TLS/SSL handshake. Azure Web Apps make this certificate accessible to your application code via the X-ARR-ClientCert HTTP request header. Your code can then retrieve and validate this certificate.
Why other options are incorrect:
Client cookie: While cookies could theoretically store information about a client certificate, they would not store the certificate itself securely or reliably. Cookies are also client-side and easily manipulated.
HTTP message body: The message body is typically used for the payload of the HTTP request, not for transmitting client certificates.
URL query string: Including a certificate in the query string is highly insecure and not a standard practice.
Encoding type: Base64
Why this is correct: The client certificate in the X-ARR-ClientCert header is encoded in Base64 format. This encoding is necessary to represent the binary certificate data as a string suitable for transmission in an HTTP header.
Why other options are incorrect:
HTML: HTML is a markup language, not an encoding scheme for certificates.
URL: URL encoding is used for encoding characters in URLs, not for entire certificates.
Unicode: Unicode is a character encoding standard, but it’s not used for encoding client certificates in this context.
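The retrieval step described above can be sketched in a few lines of standard-library Python. This is a minimal illustration, not App Service’s reference implementation: the header name X-ARR-ClientCert comes from the explanation above, and the sample DER bytes are a placeholder, not a real certificate.

```python
import base64
import ssl

def extract_client_cert(headers):
    """Return the client certificate as PEM text, or None if absent.

    Azure App Service forwards the client certificate from the TLS
    handshake in the X-ARR-ClientCert header, Base64-encoded DER.
    """
    raw = headers.get("X-ARR-ClientCert")
    if raw is None:
        return None
    der = base64.b64decode(raw)           # header value is Base64-encoded DER
    return ssl.DER_cert_to_PEM_cert(der)  # wrap the DER bytes as a PEM block

# Example: round-trip some bytes through the header encoding.
fake_der = b"\x30\x03\x02\x01\x01"  # placeholder bytes, not a real certificate
headers = {"X-ARR-ClientCert": base64.b64encode(fake_der).decode()}
pem = extract_client_cert(headers)
```

Actual validation (checking the issuer, expiry, and thumbprint) would then parse the PEM, typically with a library such as `cryptography`.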
DRAG DROP
You are designing a solution to secure a company’s Azure resources. The environment hosts 10 teams. Each team manages a project and has a project manager, a virtual machine (VM) operator, developers, and contractors.
Project managers must be able to manage everything except access and authentication for users. VM operators must be able to manage VMs, but not the virtual network or storage account to which they are connected. Developers and contractors must be able to manage storage accounts.
You need to recommend roles for each member.
What should you recommend? To answer, drag the appropriate roles to the correct employee types. Each role may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Roles
Owner
Contributor
Reader
Virtual Machine Contributor
Storage Account Contributor
Answer Area
Employee type: Role
Project manager: Role
VM operators: Role
Developers: Role
Contractors: Role
Answer Area:
Project manager: Contributor
VM operators: Virtual Machine Contributor
Developers: Storage Account Contributor
Contractors: Storage Account Contributor
Explanation:
Project Manager: Contributor
The Contributor role allows the user to create and manage all types of Azure resources but does not grant access to manage user access or assignments. This aligns perfectly with the requirement that project managers can manage everything except access and authentication.
VM Operators: Virtual Machine Contributor
The Virtual Machine Contributor role specifically allows the user to manage virtual machines, including starting, stopping, resizing, etc. Crucially, it does not grant permissions to manage the underlying virtual network or storage account that the VMs utilize. This fulfills the requirement for VM operators.
Developers: Storage Account Contributor
The Storage Account Contributor role allows the user to manage storage accounts. This directly addresses the requirement for developers to manage storage.
Contractors: Storage Account Contributor
Similar to developers, contractors also need to manage storage accounts, making the Storage Account Contributor role the appropriate choice.
Why other roles are not the best fit:
Owner: This role grants full access to all resources, including managing access. It’s too powerful for the Project Manager and definitely not needed for VM operators, developers, or contractors given the stated restrictions.
Reader: This role only allows viewing resources, not managing them. It doesn’t meet the requirements for any of the employee types.
You have an Azure virtual machine named VM1 and an Azure Active Directory (Azure AD) tenant named adatum.com.
VM1 has the following settings:
– IP address: 10.10.0.10
– System-assigned managed identity: On
You need to create a script that will run from within VM1 to retrieve the authentication token of VM1.
Which address should you use in the script?
vm1.adatum.com.onmicrosoft.com
169.254.169.254
10.10.0.10
vm1.adatum.com
The correct answer is 169.254.169.254.
Explanation:
To retrieve an authentication token for a virtual machine with a system-assigned managed identity, you need to contact a specific IP address on the local machine: 169.254.169.254.
This is the Azure Instance Metadata Service (IMDS) endpoint. When a managed identity is enabled for a resource like a VM, Azure makes the authentication token available through this endpoint.
Here’s why the other options are incorrect:
vm1.adatum.com.onmicrosoft.com: This is the default domain name for the Azure AD tenant. While it’s related to the identity, it’s not the address to retrieve the token from within the VM.
10.10.0.10: This is the private IP address of the VM. While you can communicate with the VM using this address, it’s not the endpoint for retrieving the managed identity token.
vm1.adatum.com: This is a general domain name and not the specific endpoint for retrieving the token.
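A minimal sketch of such a script, using only the Python standard library. The target resource (management.azure.com) and api-version are illustrative choices, and the actual HTTP call only succeeds from inside an Azure VM with a managed identity enabled:

```python
import json
import urllib.request

# IMDS is a link-local, non-routable endpoint available only from inside the VM.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F"
)

def build_imds_token_request():
    # IMDS requires the "Metadata: true" header and rejects requests without it.
    return urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})

def get_token():
    # Works only when run from within the Azure VM itself.
    with urllib.request.urlopen(build_imds_token_request()) as resp:
        return json.load(resp)["access_token"]
```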
HOTSPOT
Your company has a virtualization environment that contains the virtualization hosts shown in the following table.
Name Hypervisor Guest
Server1 VMware VM1, VM2, VM3
Server2 Hyper-V VMA, VMB, VMC
The virtual machines are configured as shown in the following table.
Name Generation Memory Operating system (OS) OS disk Data disk
VM1 Not applicable 4 GB Windows Server 2016 200 GB 800 GB
VM2 Not applicable 12 GB Red Hat Enterprise Linux 7.2 3 TB 200 GB
VM3 Not applicable 32 GB Windows Server 2012 R2 200 GB 1 TB
VMA 1 8 GB Windows Server 2012 100 GB 2 TB
VMB 1 16 GB Red Hat Enterprise Linux 7.2 150 GB 3 TB
VMC 2 24 GB Windows Server 2016 500 GB 6 TB
All the virtual machines use basic disks. VM1 is protected by using BitLocker Drive Encryption (BitLocker).
You plan to migrate the virtual machines to Azure by using Azure Site Recovery.
You need to identify which virtual machines can be migrated.
Which virtual machines should you identify for each server? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
The virtual machines that can be migrated from Server1:
VM1 only
VM2 only
VM3 only
VM1 and VM2 only
VM1 and VM3 only
VM1, VM2, and VM3
The virtual machines that can be migrated from Server2:
VMA only
VMB only
VMC only
VMA and VMB only
VMA and VMC only
VMA, VMB, and VMC
1.) From Server1: VM3 only
2.) From Server2: VMA and VMB only
VM1 cannot be migrated because BitLocker is enabled; BitLocker must be disabled before migrating with Azure Site Recovery.
VM2 cannot be migrated because its OS disk (3 TB) exceeds the 2 TB OS disk limit.
VMC cannot be migrated because its data disk (6 TB) exceeds the 4 TB data disk limit.
You are designing an Azure solution.
The solution must meet the following requirements:
– Distribute traffic to different pools of dedicated virtual machines (VMs) based on rules.
– Provide SSL offloading capabilities.
You need to recommend a solution to distribute network traffic.
Which technology should you recommend?
Azure Application Gateway
Azure Load Balancer
Azure Traffic Manager
server-level firewall rules
The correct technology to recommend is Azure Application Gateway.
Here’s why:
Distribute traffic to different pools of dedicated virtual machines (VMs) based on rules: Azure Application Gateway operates at Layer 7 of the OSI model (the application layer). This allows it to make routing decisions based on HTTP headers (like host names or paths), cookies, and other application-level data. You can define rules to direct traffic to different backend pools based on these criteria.
Provide SSL offloading capabilities: Application Gateway can terminate SSL/TLS connections at the gateway itself. This decrypts the traffic, allowing the gateway to inspect it for routing and security purposes before forwarding it to the backend VMs over HTTP or HTTPS. This offloads the SSL processing from the backend servers, improving their performance.
Let’s look at why the other options are not the best fit:
Azure Load Balancer: Azure Load Balancer operates at Layer 4 (the transport layer). It distributes traffic based on IP addresses and ports. While it can distribute traffic across multiple VMs, it doesn’t have the application-level awareness to route traffic based on HTTP headers or paths. It also does not provide SSL offloading.
Azure Traffic Manager: Azure Traffic Manager is a DNS-based traffic routing service. It directs clients to different endpoints (like different Azure regions) based on routing methods like performance or geographic location. It doesn’t distribute traffic within a single region to different pools of VMs based on application rules, and it doesn’t provide SSL offloading.
Server-level firewall rules: While firewall rules can control network access based on IP addresses and ports, they are not designed for intelligent traffic distribution based on application-level rules or providing SSL offloading. Their primary function is security and access control.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use Azure AD Connect to customize the synchronization options.
Does this meet the goal?
Yes
No
Yes
Explanation:
Azure AD Connect provides robust filtering capabilities that allow you to precisely control which objects and attributes are synchronized from your on-premises Active Directory to Azure AD. You can configure filtering based on domains, organizational units (OUs), and even attributes.
In this scenario, you can configure Azure AD Connect to:
Filter on the userPrincipalName attribute.
Specify a rule that only synchronizes users where the userPrincipalName attribute ends with @contoso.com.
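The effect of such a filter can be illustrated with a small Python analogue. This is not the actual Azure AD Connect rule syntax, just the predicate the scoping filter implements; the sample users are hypothetical:

```python
def should_sync(user):
    """Scoping-filter analogue: sync only users whose UPN ends with @contoso.com."""
    upn = user.get("userPrincipalName", "")
    return upn.lower().endswith("@contoso.com")

users = [
    {"userPrincipalName": "alice@contoso.com"},   # synced
    {"userPrincipalName": "bob@contoso.local"},   # filtered out
]
synced = [u for u in users if should_sync(u)]
```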
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use Synchronization Rules Editor to create a synchronization rule.
Does this meet the goal?
Yes
No
Yes
Explanation:
As in the previous question with the slightly different phrasing, the Synchronization Rules Editor within Azure AD Connect is the intended tool for creating highly specific synchronization rules.
You can absolutely use the Synchronization Rules Editor to create an inbound synchronization rule that filters users based on their UPN suffix. The rule would:
Target User objects.
Include a scoping filter that examines the userPrincipalName attribute.
Implement a condition that only allows synchronization of users where the userPrincipalName attribute ends with @contoso.com.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use the Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector.
Does this meet the goal?
Yes
No
No
Explanation:
The Synchronization Service Manager is primarily used for monitoring and managing the synchronization process itself. It allows you to:
View connector status.
Run full or delta synchronizations.
Troubleshoot synchronization errors.
Manage connector space objects.
While the Synchronization Service Manager interacts with the AD DS Connector, it does not provide the functionality to define granular filtering rules based on the content of attributes like the UPN suffix.
To achieve the goal of syncing only users with a specific UPN suffix, you need to use either:
The Azure AD Connect configuration wizard: During the initial setup or by re-running the wizard, you can configure filtering based on domains, OUs, or even create attribute-based filters (though this is less granular than the rules editor).
The Synchronization Rules Editor: This is the more powerful and precise tool for creating custom synchronization rules, including rules that filter users based on the value of their userPrincipalName attribute.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses data from two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to move DB1 and DB2 to Azure.
You need to implement Azure services to host DB1 and DB2. The solution must support server-side transactions across DB1 and DB2.
Solution: You deploy DB1 and DB2 to SQL Server on an Azure virtual machine.
Does this meet the goal?
Yes
No
Yes
Explanation:
Deploying DB1 and DB2 to SQL Server on Azure Virtual Machines (VMs) allows you to maintain a similar environment to your on-premises setup. Critically, SQL Server running on Azure VMs fully supports distributed transactions.
When both databases reside within the same SQL Server instance on an Azure VM, or even on different SQL Server instances within the same or different Azure VMs (as long as they are properly networked), you can utilize Distributed Transaction Coordinator (DTC) to manage transactions that span across both databases.
Here’s why this solution works:
Full SQL Server Functionality: Running SQL Server on an Azure VM provides the complete feature set of SQL Server, including distributed transaction capabilities.
Control over Configuration: You have full administrative control over the SQL Server instances on the VMs, allowing you to configure DTC as needed.
Network Connectivity: Azure networking allows you to establish the necessary connectivity between the VMs hosting the SQL Server instances to facilitate distributed transactions.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use the Synchronization Service Manager to modify the Metaverse Designer tab.
Does this meet the goal?
Yes
No
No
Explanation:
The Metaverse Designer within the Synchronization Service Manager is used to configure the schema of the metaverse – the central, unified identity store that Azure AD Connect uses. It’s where you define the object types and attributes that will be synchronized and how they map between different connected data sources (like your on-premises Active Directory and Azure AD).
While you can see the attributes present in the metaverse through the Metaverse Designer, you cannot use it to directly define filtering rules based on the values of those attributes.
To achieve the goal of filtering users based on their UPN suffix, you need to use either:
The Azure AD Connect configuration wizard: During the initial setup or by re-running the wizard, you can configure filtering based on domains, OUs, or even create attribute-based filters (though this is less granular than the rules editor).
The Synchronization Rules Editor: This is the more powerful and precise tool for creating custom synchronization rules, including rules that filter users based on the value of their userPrincipalName attribute. You would create an inbound synchronization rule with a scoping filter that checks if the userPrincipalName ends with @contoso.com.
HOTSPOT
You have an Azure subscription that contains a resource group named RG1.
You have a group named Group1 that is assigned the Contributor role for RG1.
You need to enhance security for the virtual machines in RG1 to meet the following requirements:
– Prevent Group1 from assigning external IP addresses to the virtual machines.
– Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address.
What should you use to meet each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Prevent Group1 from assigning external IP addresses to the virtual machines:
Azure Policy
Azure Bastion
Virtual network service endpoints
Azure Web Application Firewall (WAF)
Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address:
Azure Policy
Azure Bastion
Virtual network service endpoints
Azure Web Application Firewall (WAF)
Prevent Group1 from assigning external IP addresses to the virtual machines:
Azure Policy
Explanation: Azure Policy allows you to create, assign, and manage policies that enforce different rules and effects over your resources. You can create a policy that specifically prevents the creation or association of public IP addresses with network interfaces within the resource group RG1. This would effectively block Group1 (even with Contributor role) from assigning external IPs to the VMs.
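As an illustration, the core policy rule for such a policy might look like the following. This is a hedged sketch of the policyRule JSON only; a real definition also needs metadata and parameters, and the assignment would be scoped to RG1 (a stricter variant could additionally target the public IP configuration of network interfaces):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Network/publicIPAddresses"
  },
  "then": {
    "effect": "deny"
  }
}
```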
Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address:
Azure Bastion
Explanation: Azure Bastion is a fully managed platform as a service (PaaS) that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines in that virtual network directly through the Azure portal and over SSL. Key benefits for this requirement:
Shared External IP: All RDP/SSH connections go through the Azure Bastion host, which has a single public IP address. The individual VMs do not need public IPs.
Enhanced Security: It eliminates the need to expose VMs directly to the public internet through their own public IPs, significantly reducing the attack surface.
Simplified Management: Users connect directly through the Azure portal, simplifying access management.
Why other options are incorrect:
Virtual network service endpoints: These secure access to specific Azure service resources (like Azure Storage or Azure SQL Database) to only your virtual network. They are not relevant for controlling external IP assignment or providing RDP access to VMs.
Azure Web Application Firewall (WAF): WAF is designed to protect web applications from common web exploits. It doesn’t control VM IP assignments or provide RDP access.
While Azure Policy can prevent external IP assignment, it doesn’t facilitate RDP connections.
You create a container image named Image1 on a developer workstation.
You plan to create an Azure Web App for Containers named WebAppContainer that will use Image1.
You need to upload Image1 to Azure. The solution must ensure that WebAppContainer can use Image1.
To which storage type should you upload Image1?
an Azure Storage account that contains a blob container
Azure Container Instances
Azure Container Registry
an Azure Storage account that contains a file share
The correct answer is Azure Container Registry.
Explanation:
Azure Container Registry (ACR) is a private, hosted registry service provided by Azure for building, storing, and managing container images and related artifacts. It’s designed specifically for this purpose and is the recommended way to store container images for use with Azure services like Web App for Containers.
Here’s why the other options are incorrect:
An Azure Storage account that contains a blob container: While you could theoretically store a container image as a blob, it wouldn’t be in a format that Azure Web App for Containers can directly consume. Web App for Containers expects to pull images from a container registry.
Azure Container Instances (ACI): ACI is a service for running containerized applications on demand. It doesn’t act as a registry for storing and distributing container images for other services.
An Azure Storage account that contains a file share: File shares are used for storing files that can be accessed via standard file protocols like SMB. They are not designed for storing or managing container images.
You have an Azure Service Bus and two clients named Client1 and Client2.
You create a Service Bus queue named Queue1 as shown in the exhibit. (Click the Exhibit tab.)
Create queue
Service Bus
Name*: Queue1
Max queue size: 1 GB
Message time to live:
Days: 14
Hours: 0
Minutes: 0
Seconds: 0
Lock duration:
Days: 0
Hours: 0
Minutes: 0
Seconds: 30
Enable duplicate detection: [✔]
Duplicate detection window:
Days: 0
Hours: 0
Minutes: 10
Seconds: 0
Enable dead lettering on message expiration: [ ]
Enable sessions: [✔]
Client1 sends messages to Queue1 as shown in the following table.
Time Message
12:01:01 M3
12:01:02 M2
12:01:03 M1
12:01:04 M3
Client2 reads the messages from Queue1 at 12:01:05.
How will the messages be presented to Client2?
A. Client2 will read three messages in the following order: M1, M2, and then M3.
B. Client2 will read three messages in the following order: M3, M1, and then M2.
C. Client2 will read four messages in the following order: M3, M1, M2 and then M3.
D. Client2 will read four messages in the following order: M3, M2, M1 and then M3.
Let’s analyze the configuration of the Service Bus queue and the order in which messages are sent to determine how Client2 will read the messages.
Queue Configuration:
Enable Duplicate Detection: Yes (10 Minutes Window) - This feature will detect and discard duplicate messages sent within a 10-minute window based on MessageId. However, in this scenario, messages are sent within seconds of each other, and while Message M3 is sent twice, they are sent at different times (12:01:01 and 12:01:04). Unless the messages have the exact same MessageId and are sent within the 10-minute window, duplicate detection is unlikely to discard any messages in this scenario. We assume messages have unique IDs for this analysis unless stated otherwise.
Enable Sessions: Yes - Enabling sessions in a queue allows for message grouping and ordered processing within a session. However, if messages are sent without session IDs, they are treated as session-less messages in the queue. In this problem description, there is no mention of setting session IDs when Client1 sends messages. Therefore, for the purpose of this question, we can assume that messages are being sent as session-less messages, and the queue will behave as a standard queue in terms of ordering - generally FIFO (First-In, First-Out).
Message Sending Order (Client 1):
Client1 sends messages in the following order:
12:01:01 - M3
12:01:02 - M2
12:01:03 - M1
12:01:04 - M3 (another message, even if named the same)
Message Reading Time (Client 2):
Client2 reads messages at 12:01:05, which is after all messages have been sent and presumably available in the queue.
Expected Message Retrieval Order:
Service Bus queues, by default, attempt to provide “best-effort” ordered delivery, which typically means messages are delivered in the order they were received by the queue (FIFO). Given the sending times and queue configuration, we can expect the messages to be presented to Client2 in the order they were sent (and received by the queue).
Therefore, the expected retrieval order would be: M3, M2, M1, M3.
Comparing with Options:
A. Client2 will read three messages in the following order: M1, M2, and then M3. - Incorrect. This is reverse order and missing a message.
B. Client2 will read three messages in the following order: M3, M1, and then M2. - Incorrect. This is reordered and missing a message.
C. Client2 will read four messages in the following order: M3, M1, M2 and then M3. - Incorrect. This is reordered.
D. Client2 will read four messages in the following order: M3, M2, M1 and then M3. - Correct. This option presents all four messages in the order they were sent to the queue, which aligns with the expected FIFO behavior of a Service Bus queue when sessions are enabled but not explicitly used for message ordering, and when duplicate detection is unlikely to be triggered given the message sending times and assuming unique MessageIds.
Conclusion:
Option D is the closest and most correct answer because it reflects the expected FIFO (First-In, First-Out) behavior of the Azure Service Bus queue and the order in which the messages were sent by Client1.
Final Answer: The final answer is D.
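A toy Python model of the queue makes the reasoning concrete. Real Service Bus deduplicates on MessageId, as noted above; the IDs and timestamps here are illustrative assumptions:

```python
from collections import deque

class SimpleQueue:
    """Toy model of a Service Bus queue with duplicate detection.

    Duplicate detection is keyed on MessageId within a sliding window
    (seconds here). Messages with distinct MessageIds are never
    deduplicated, even if their bodies are identical.
    """
    def __init__(self, dedup_window_seconds=600):  # 10-minute window
        self.window = dedup_window_seconds
        self.seen = {}            # MessageId -> last enqueue time
        self.messages = deque()   # FIFO order

    def send(self, message_id, body, at_seconds):
        last = self.seen.get(message_id)
        if last is not None and at_seconds - last <= self.window:
            return False          # duplicate discarded
        self.seen[message_id] = at_seconds
        self.messages.append(body)
        return True

    def receive_all(self):
        out = list(self.messages)
        self.messages.clear()
        return out

q = SimpleQueue()
# The body "M3" repeats, but each send carries a unique MessageId,
# so nothing is deduplicated and FIFO order is preserved.
for t, (mid, body) in enumerate([("a", "M3"), ("b", "M2"), ("c", "M1"), ("d", "M3")]):
    q.send(mid, body, at_seconds=t)
order = q.receive_all()
```

Running this yields the four messages in send order, matching option D; resending MessageId "a" within the window would be dropped.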
HOTSPOT
You have an Azure subscription that contains the storage accounts shown in the following table.
Name Kind Performance tier Replication Location
storage1 StorageV2 Premium Locally-redundant storage (LRS) East US
storage2 Storage Standard Geo-redundant storage (GRS) UK West
storage3 BlobStorage Standard Locally-redundant storage (LRS) North Europe
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Statements
storage1 can host Azure file shares.
There are six copies of the data in storage2.
storage3 can be converted to a GRS account.
Statements:
storage1 can host Azure file shares. - Yes
Explanation: StorageV2 (General-purpose v2) accounts support all core Azure Storage services, including Azure Files.
There are six copies of the data in storage2. - Yes
Explanation: Geo-redundant storage (GRS) replicates your data synchronously three times within the primary region and asynchronously three times to a secondary region. This results in a total of six copies of your data.
storage3 can be converted to a GRS account. - No
Explanation: BlobStorage is a legacy account kind, and its replication options are more restricted than those of general-purpose v2 accounts. To get geo-redundancy for the data in storage3, the recommended approach is to upgrade the account to StorageV2, or to create a new StorageV2 account configured with GRS and copy the data from storage3 to it.
Therefore, the correct answers are Yes, Yes, No.
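As a quick reference for the second statement, the copy counts per replication option can be tabulated (standard figures from Azure's redundancy model: three copies in the primary region, plus three in the secondary region for geo-redundant options):

```python
# Total physical copies of your data per redundancy option.
COPIES = {
    "LRS": 3, "ZRS": 3,                           # primary region only
    "GRS": 6, "RA-GRS": 6, "GZRS": 6, "RA-GZRS": 6,  # + secondary region
}

storage2_copies = COPIES["GRS"]  # 3 in UK West + 3 in its paired region
```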
You have an Azure subscription named Subscription1 that is used by several departments at your company. Subscription1 contains the resources in the following table.
Name Type
storage1 Storage account
RG1 Resource group
container1 Blob container
share1 File share
Another administrator deploys a virtual machine named VM1 and an Azure Storage account named storage2 by using a single Azure Resource Manager template.
You need to view the template used for the deployment.
From the Azure Portal, for which blade can you view the template that was used for the deployment?
container1
VM1
RG1
storage2
The correct answer is RG1.
Explanation:
When resources are deployed using an Azure Resource Manager template, the deployment itself is associated with the resource group where the resources are deployed. You can view the deployment history and the associated template within the blade of the resource group.
Here’s why the other options are incorrect:
container1: This is a specific resource within a storage account. You won’t find the overall deployment template here.
VM1: While the template deployed VM1, viewing the template from the VM1 blade will typically show the ARM template for that specific VM, not necessarily the template that deployed it along with storage2.
storage2: Similar to VM1, viewing the template from the storage2 blade will likely show the ARM template for that specific storage account, not the combined deployment template.
You have an Azure subscription that contains a resource group named RG1. RG1 contains multiple resources.
You need to trigger an alert when the resources in RG1 consume $1,000 USD.
What should you do?
From Cost Management + Billing, add a cloud connector.
From the subscription, create an event subscription.
From Cost Management + Billing, create a budget.
From RG1, create an event subscription.
The correct answer is From Cost Management + Billing, create a budget.
Explanation:
Azure Budgets, which are part of the Cost Management + Billing service, are specifically designed to help you plan for and track your Azure spending. Here’s why this is the correct approach:
Cost Tracking: Azure Budgets allow you to define a spending threshold for a specific scope (like a resource group, subscription, or management group).
Alerting: When spending reaches a defined percentage of the budget (e.g., 50%, 75%, 100%), Azure can trigger alerts. These alerts can be sent to specified email addresses or invoke an Azure Monitor action group.
Scope Specificity: You can create a budget specifically for the resource group RG1, ensuring that the cost tracking and alerts are focused on the resources within that group.
Let’s look at why the other options are not the best fit:
From Cost Management + Billing, add a cloud connector: Cloud connectors are used to integrate cost data from other cloud providers (like AWS) into Azure Cost Management. They are not used for setting up budget alerts for Azure resources.
From the subscription, create an event subscription: Azure Event Grid allows you to subscribe to events within Azure, such as resource creation or deletion. While there are cost-related events, using budgets is a more direct and purpose-built approach for cost threshold alerts.
From RG1, create an event subscription: Similar to the previous point, while you could potentially use Event Grid for some cost-related scenarios, Azure Budgets within Cost Management + Billing are the primary and recommended tool for setting up spending alerts for resource groups.
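A budget scoped to RG1 can also be expressed as an ARM resource deployed at the resource-group scope. The sketch below is illustrative only: the budget name, start date, notification key, and contact email are placeholders, and the API version may differ in your environment:

```json
{
  "type": "Microsoft.Consumption/budgets",
  "apiVersion": "2021-10-01",
  "name": "rg1-monthly-budget",
  "properties": {
    "category": "Cost",
    "amount": 1000,
    "timeGrain": "Monthly",
    "timePeriod": {
      "startDate": "2024-01-01T00:00:00Z"
    },
    "notifications": {
      "actual-100-percent": {
        "enabled": true,
        "operator": "GreaterThanOrEqualTo",
        "threshold": 100,
        "contactEmails": [ "ops@contoso.com" ]
      }
    }
  }
}
```

The threshold of 100 means the notification fires when actual spend reaches 100% of the $1,000 budget, matching the requirement in the question.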
You plan to automate the deployment of a virtual machine scale set that uses the Windows Server 2016 Datacenter image.
You need to ensure that when the scale set virtual machines are provisioned, they have web server components installed.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Upload a configuration script.
Create an Azure policy.
Modify the extensionProfile section of the Azure Resource Manager template.
Create a new virtual machine scale set in the Azure portal.
Create an automation account.
The correct two actions are:
Upload a configuration script.
Modify the extensionProfile section of the Azure Resource Manager template.
Explanation:
Upload a configuration script: You’ll need a script (e.g., PowerShell) that contains the commands to install the web server components. This script will be executed on the virtual machines during the provisioning process. You can store this script in Azure Blob Storage and reference it in your ARM template.
Modify the extensionProfile section of the Azure Resource Manager template: The extensionProfile section within the virtual machine scale set resource in your ARM template is used to define VM extensions. You will configure a VM extension (specifically the CustomScriptExtension for Windows VMs) within this section. This configuration will specify:
The location of the configuration script you uploaded.
Any command-line arguments needed to execute the script.
Potentially, storage account details to access the script.
Why other options are incorrect:
Create an Azure policy: Azure Policy is used to enforce organizational standards and assess compliance. While you could potentially use Azure Policy to ensure web server components are installed after the VM is provisioned, it’s not the ideal method for the initial installation during provisioning.
Create a new virtual machine scale set in the Azure portal: Creating the scale set in the portal is a manual step. The goal is automation. While you might do this initially to get the ARM template, the core of the automated solution lies in the template itself and the script.
Create an automation account: Azure Automation is a service for automating tasks across Azure and on-premises environments. While you could use Azure Automation to install web server components after the VMs are provisioned, using the CustomScriptExtension during provisioning is a more direct and efficient way to achieve the requirement.
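The extensionProfile configuration described above might be sketched as follows. The extension name, script URL, and command line are placeholders (the question does not specify them); the structure and the CustomScriptExtension publisher/type are the standard ones for Windows VMs:

```json
"extensionProfile": {
  "extensions": [
    {
      "name": "installWebServer",
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.10",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": [ "https://mystorage.blob.core.windows.net/scripts/install-iis.ps1" ],
          "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install-iis.ps1"
        }
      }
    }
  ]
}
```

Because the extension is part of the scale set model, every instance the scale set provisions (including instances added later by scale-out) runs the script automatically.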
HOTSPOT
You have several Azure virtual machines on a virtual network named VNet1. VNet1 has two subnets that have the 10.2.0.0/24 and 10.2.9.0/24 address spaces.
You configure an Azure Storage account as shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
The virtual machines on the 10.2.9.0/24 subnet will have
network connectivity to the file shares in the storage account
always
during a backup
never
Azure Backup will be able to back up the unmanaged hard
disks of the virtual machines in the storage account
always
during a backup
never
The virtual machines on the 10.2.9.0/24 subnet will never have network connectivity to the file shares in the storage account.
Explanation:
The storage account’s firewall settings explicitly allow access only from the virtual network vnet1 (azure) and the subnet subnet-1 (vnet1) which has the address space 10.2.0.0/24.
The virtual machines on the 10.2.9.0/24 subnet are on the same virtual network (VNet1) but on a different subnet.
Since the firewall rules are specific to the 10.2.0.0/24 subnet, the VMs on the 10.2.9.0/24 subnet will be blocked from accessing the storage account’s file shares.
Azure Backup will always be able to back up the unmanaged hard disks of the virtual machines in the storage account.
Explanation:
The storage account’s firewall settings have the option “Allow trusted Microsoft services to access this storage account” enabled.
Azure Backup is considered a trusted Microsoft service.
This setting allows Azure Backup to bypass the network restrictions configured in the firewall and access the storage account to perform backups, even if the VMs being backed up are not on the allowed subnet.
Therefore, the correct answers are:
The virtual machines on the 10.2.9.0/24 subnet will have: never
Azure Backup will be able to back up the unmanaged hard disks of the virtual machines in the storage account: always
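The firewall behavior described above corresponds to the storage account's networkAcls settings. A minimal sketch of those properties might look like the following (the subnet name and the resourceId expression are assumptions based on the exhibit; "AzureServices" in bypass is what lets trusted services such as Azure Backup through):

```json
"properties": {
  "networkAcls": {
    "bypass": "AzureServices",
    "defaultAction": "Deny",
    "virtualNetworkRules": [
      {
        "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'VNet1', 'subnet-1')]"
      }
    ]
  }
}
```

With defaultAction set to Deny, only the listed subnet (10.2.0.0/24) and trusted Microsoft services can reach the account; the 10.2.9.0/24 subnet is blocked because it has no matching rule.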
HOTSPOT
You create and save an Azure Resource Manager template named Template1 that includes the following four sections.
Section1.
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "windowsOSVersion": {
      "defaultValue": "2019-Datacenter",
      "allowedValues": [
        "2012-Datacenter",
        "2012-R2-Datacenter",
        "2016-Datacenter",
        "2019-Datacenter"
      ]
    }
  }
}
```
Section2.
```json
"variables": {
  "windowsOSVersion": "2012-Datacenter"
}
```
Section3.
```json
"resources": [
  {
    "type": "Microsoft.Compute/virtualMachines",
```
Section4.
```json
"storageProfile": {
  "imageReference": {
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "2012-R2-Datacenter",
    "version": "latest"
  }
}
```
You deploy Template1.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Statements
Windows Server 2012 R2 Datacenter will be deployed
to the Azure virtual machine.
A custom image of Windows Server will be deployed.
During the deployment of Template1, an administrator
will be prompted to select a version of Windows Server.
Statements:
Windows Server 2012 R2 Datacenter will be deployed to the Azure virtual machine. - Yes
Explanation: Section 4, within the storageProfile.imageReference, explicitly sets the sku to “2012-R2-Datacenter”. This setting within the resources section will override any conflicting information from the parameters or variables sections when it comes to the actual image used for the virtual machine.
A custom image of Windows Server will be deployed. - No
Explanation: The imageReference in Section 4 uses standard values for publisher (“MicrosoftWindowsServer”), offer (“WindowsServer”), and sku (“2012-R2-Datacenter”). This indicates a standard image from the Azure Marketplace, not a custom image. To use a custom image, you would instead set the id property of imageReference to the resource ID of the custom image.
During the deployment of Template1, an administrator will be prompted to select a version of Windows Server. - No
Explanation: While Section 1 defines a parameter windowsOSVersion with a defaultValue and allowedValues, this parameter is not actually used within the resources section to determine the OS image. The imageReference.sku in Section 4 hardcodes the image to “2012-R2-Datacenter”. Therefore, the administrator will not be prompted for the OS version during deployment because the template has already specified it definitively.
Therefore, the correct answers are Yes, No, No.
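For contrast, if the template author had intended the administrator to be prompted, Section 4 would reference the parameter from Section 1 instead of hardcoding the sku. A sketch of that corrected fragment:

```json
"storageProfile": {
  "imageReference": {
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "[parameters('windowsOSVersion')]",
    "version": "latest"
  }
}
```

With this change, the defaultValue and allowedValues in the parameters section would actually govern which Windows Server version gets deployed.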
DRAG DROP
You have virtual machines (VMs) that run a mission-critical application.
You need to ensure that the VMs never experience downtime.
What should you recommend? To answer, drag the appropriate solutions to the correct scenarios. Each solution may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Solutions
Availability Zone
Availability Set
Fault Domain
Scale Sets
Scenario
Maintain application performance across identical VMs.: Solution
Maintain application availability when an Azure datacenter fails.:Solution
Maintain application performance across different VMs.:Solution
Scenario: Maintain application performance across identical VMs.: Solution: Scale Sets
Explanation: Virtual Machine Scale Sets are designed to deploy and manage a set of identical, auto-scaling virtual machines. This is ideal for distributing load and maintaining performance across multiple instances of the same application.
Scenario: Maintain application availability when an Azure datacenter fails.: Solution: Availability Zone
Explanation: Availability Zones are physically separate datacenters within an Azure region. Deploying VMs across multiple Availability Zones protects your application from a complete datacenter failure, ensuring continued availability.
Scenario: Maintain application performance across different VMs.: Solution: Availability Set
Explanation: Availability Sets distribute your VMs across multiple fault domains and update domains within a datacenter. This protects your application from localized hardware failures and planned maintenance, improving availability and indirectly contributing to performance by ensuring the application remains running even if some underlying infrastructure fails.
Why other solutions are not the best fit:
Fault Domain: While fault domains are a component of Availability Sets (grouping VMs that share a common power and network source), they don’t represent a complete solution for maintaining availability on their own. You can’t directly deploy to a fault domain.
Scale Sets: While they provide high availability within a datacenter, they aren’t the primary solution for surviving a full datacenter failure. Availability Zones are designed for that.
Therefore, the correct drag-and-drop is:
Maintain application performance across identical VMs.: Scale Sets
Maintain application availability when an Azure datacenter fails.: Availability Zone
Maintain application performance across different VMs.: Availability Set
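The Availability Zone answer can be visualized with a scale set that spans zones. The fragment below is a hedged sketch: the name, location, SKU, and API version are illustrative, but the zones array is the standard way an ARM template pins instances across the zones of a region:

```json
{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "apiVersion": "2023-03-01",
  "name": "vmss1",
  "location": "eastus2",
  "zones": [ "1", "2", "3" ],
  "sku": {
    "name": "Standard_D2s_v3",
    "capacity": 3
  }
}
```

Spreading the instances across zones 1, 2, and 3 means the loss of an entire datacenter leaves instances in the other zones serving traffic.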
Your company has an office in Seattle.
You have an Azure subscription that contains a virtual network named VNET1.
You create a site-to-site VPN between the Seattle office and VNET1.
VNET1 contains the subnets shown in the following table.
Name IP address space
Subnet1 10.1.1.0/24
GatewaySubnet 10.1.200.0/28
You need to route all Internet-bound traffic from Subnet1 to the Seattle office.
What should you create?
a route for GatewaySubnet that uses the virtual network gateway as the next hop
a route for Subnet1 that uses the local network gateway as the next hop
a route for Subnet1 that uses the virtual network gateway as the next hop
a route for GatewaySubnet that uses the local network gateway as the next hop
The correct answer is a route for Subnet1 that uses the virtual network gateway as the next hop.
Explanation:
To route all Internet-bound traffic from Subnet1 to the Seattle office through the site-to-site VPN (a configuration known as forced tunneling), you create a user-defined route in a route table and associate the route table with Subnet1. The route specifies:
Destination prefix: 0.0.0.0/0 (all possible destination IP addresses, effectively meaning all Internet-bound traffic).
Next hop type: Virtual network gateway.
Traffic matching this route is sent to the Azure VPN gateway, which forwards it across the site-to-site tunnel to the Seattle office.
Here’s why the other options are incorrect:
a route for Subnet1 that uses the local network gateway as the next hop: The local network gateway is the Azure resource that describes your on-premises VPN device (its public IP address and address prefixes). It is not a valid next hop type for a user-defined route; the supported next hop types are Virtual network gateway, Virtual network, Internet, Virtual appliance, and None.
a route for GatewaySubnet that uses the virtual network gateway as the next hop: GatewaySubnet is where the Azure VPN gateway resides. The Internet-bound traffic you need to redirect originates from Subnet1, so the route must be associated with Subnet1, not GatewaySubnet.
a route for GatewaySubnet that uses the local network gateway as the next hop: Again, GatewaySubnet is not the source of the Internet-bound traffic, and the local network gateway is not a valid next hop type.
Key Concepts:
User-Defined Routes (UDRs) / Route Tables: These allow you to override Azure’s default routing behavior, for example to force-tunnel Internet-bound traffic through a VPN.
Virtual Network Gateway: The Azure VPN gateway that terminates the site-to-site connection. As a UDR next hop type, it sends matching traffic into the VPN tunnel.
Local Network Gateway: The Azure resource that represents your on-premises VPN device. It is used when configuring the VPN connection, but it cannot be specified as a UDR next hop.
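The user-defined route described above might be sketched as an ARM route table like this (the route table and route names are placeholders, and the API version is illustrative; the table would still need to be associated with Subnet1 via the subnet's routeTable property):

```json
{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2023-04-01",
  "name": "rt-forced-tunnel",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "default-to-onprem",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualNetworkGateway"
        }
      }
    ]
  }
}
```

Note that nextHopType takes the enum value VirtualNetworkGateway; no next hop IP address is needed, because the VPN gateway in GatewaySubnet handles forwarding into the tunnel.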