DRAG DROP
You are designing a solution to secure a company’s Azure resources. The environment hosts 10 teams. Each team manages a project and has a project manager, a virtual machine (VM) operator, developers, and contractors.
Project managers must be able to manage everything except access and authentication for users. VM operators must be able to manage VMs, but not the virtual network or storage account to which they are connected. Developers and contractors must be able to manage storage accounts.
You need to recommend roles for each member.
What should you recommend? To answer, drag the appropriate roles to the correct employee types. Each role may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Roles
Owner
Contributor
Reader
Virtual Machine Contributor
Storage Account Contributor
Answer Area
Employee type | Role
Project manager | Role
VM operators | Role
Developers | Role
Contractors | Role
Answer Area:
Employee type | Role
Project manager | Contributor
VM operators | Virtual Machine Contributor
Developers | Storage Account Contributor
Contractors | Storage Account Contributor
Explanation of why each role is appropriate:
Project Manager: Contributor
The Contributor role allows users to create and manage all types of Azure resources but does not grant them the ability to manage access to those resources (i.e., they cannot assign roles to other users). This aligns perfectly with the requirement that project managers can manage everything except access and authentication.
VM Operators: Virtual Machine Contributor
The Virtual Machine Contributor role specifically grants permissions to manage virtual machines. This includes starting, stopping, resizing, and other VM-related tasks. Importantly, it does not grant permissions to manage the virtual network or storage accounts the VMs are connected to, fulfilling the stated restriction.
Developers: Storage Account Contributor
The Storage Account Contributor role allows users to manage Azure Storage accounts. This is exactly what developers need to fulfill their requirement.
Contractors: Storage Account Contributor
Since contractors also need to manage storage accounts, the Storage Account Contributor role is the appropriate choice for them as well.
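As a reference, here is a minimal Az PowerShell sketch of assigning one of these built-in roles at resource-group scope. The group object ID, subscription ID, and resource group name below are placeholders, not values from the question:
# Assign Virtual Machine Contributor to a team's VM operators group on the project's resource group.
New-AzRoleAssignment -ObjectId '00000000-0000-0000-0000-000000000000' `
    -RoleDefinitionName 'Virtual Machine Contributor' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/Project1-RG'
The same cmdlet with a different -RoleDefinitionName value covers the other assignments in the answer area.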
Why other roles are not the best fit:
Owner: This role grants full control over the Azure resource, including the ability to delegate access. This is more privilege than Project Managers, VM Operators, Developers, or Contractors require based on the stated requirements.
Reader: This role only allows users to view Azure resources, not make any changes. None of the employee types can fulfill their responsibilities with only Reader access.
You have an Azure virtual machine named VM1 and an Azure Active Directory (Azure AD) tenant named adatum.com.
VM1 has the following settings:
– IP address: 10.10.0.10
– System-assigned managed identity: On
You need to create a script that will run from within VM1 to retrieve the authentication token of VM1.
Which address should you use in the script?
vm1.adatum.com.onmicrosoft.com
169.254.169.254
10.10.0.10
vm1.adatum.com
Correct Answer:
169.254.169.254
Explanation:
The Magic IP Address: The IP address 169.254.169.254 is a special, non-routable IP address that is specifically used within Azure virtual machines for accessing the Instance Metadata Service (IMDS).
IMDS and Managed Identities: The IMDS is a REST API endpoint available on every Azure VM. When a VM has a system-assigned or user-assigned managed identity enabled, it can use IMDS to obtain an Azure AD authentication token. This token allows the VM to authenticate to other Azure services without needing to embed credentials within the application running on the VM.
How it Works:
Your script running inside VM1 makes an HTTP request to 169.254.169.254.
The IMDS service on the VM’s hypervisor captures this request and verifies that it originates from the VM.
If the VM has an assigned managed identity, the IMDS endpoint can then return an OAuth 2.0 access token that the application running in VM1 can use to authenticate against other Azure services.
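For illustration, a minimal PowerShell sketch of this flow from inside VM1; the target resource URI (https://management.azure.com/ here) depends on which service the token is for:
# Request a token from IMDS; the Metadata header is mandatory.
$uri = 'http://169.254.169.254/metadata/identity/oauth2/token' +
       '?api-version=2018-02-01&resource=https://management.azure.com/'
$response = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Metadata = 'true' }
# The OAuth 2.0 access token to present as a Bearer token to the target service.
$token = $response.access_token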
Why Other Options are Incorrect:
vm1.adatum.com.onmicrosoft.com: This is an FQDN (Fully Qualified Domain Name) and would not resolve to the internal metadata service IP address.
10.10.0.10: This is the private IP address of the VM. It does not expose the metadata service and cannot be used to fetch authentication tokens.
vm1.adatum.com: This is another FQDN and would not resolve to the internal metadata service IP address.
Important Tips for the AZ-305 Exam:
Managed Identities: This is a HUGE topic on the AZ-305 exam. You must thoroughly understand:
What they are: how they work with a VM or other Azure services.
System-assigned vs. user-assigned managed identities.
Why you should use them: to improve security and avoid hardcoding credentials.
How to assign a managed identity to an Azure resource.
How to grant the managed identity permissions to access other Azure resources.
Instance Metadata Service (IMDS):
Know what it is and what information it exposes.
Understand its purpose in accessing VM metadata and managed identities.
Know the magic IP address: 169.254.169.254. This is very important for the exam.
Be aware it’s a secure, local endpoint that can only be accessed within the VM.
Authentication Flow:
Understand the general authentication flow using managed identities: the VM sends a request to the IMDS endpoint, IMDS returns a token, and the token is used to authenticate with other Azure services.
Security: Managed identities enhance security by eliminating the need to store credentials within your application or configuration files. This is a strong security practice, hence it is often covered in exam questions.
Practice and Hands-on: Do practical exercises to create VMs, enable managed identities, and access tokens using the IMDS. This will reinforce your understanding. There are many free online labs to help you with that.
HOTSPOT
Your company has a virtualization environment that contains the virtualization hosts shown in the following table.
Name | Hypervisor | Guests
Server1 | VMware | VM1, VM2, VM3
Server2 | Hyper-V | VMA, VMB, VMC
Virtual Machines Configuration:
Name | Generation | Memory | Operating System (OS) | OS Disk | Data Disk
VM1 | Not applicable | 4 GB | Windows Server 2016 | 200 GB | 800 GB
VM2 | Not applicable | 12 GB | Red Hat Enterprise Linux 7.2 | 3 TB | 200 GB
VM3 | Not applicable | 32 GB | Windows Server 2012 R2 | 200 GB | 1 TB
VMA | 1 | 8 GB | Windows Server 2012 | 100 GB | 2 TB
VMB | 1 | 16 GB | Red Hat Enterprise Linux 7.2 | 150 GB | 3 TB
VMC | 2 | 24 GB | Windows Server 2016 | 500 GB | 6 TB
All the virtual machines use basic disks. VM1 is protected by using BitLocker Drive Encryption (BitLocker).
You plan to migrate the virtual machines to Azure by using Azure Site Recovery.
You need to identify which virtual machines can be migrated.
Which virtual machines should you identify for each server? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
The virtual machines that can be migrated from Server1:
VM1 only
VM2 only
VM3 only
VM1 and VM2 only
VM1 and VM3 only
VM1, VM2, and VM3
The virtual machines that can be migrated from Server2:
VMA only
VMB only
VMC only
VMA and VMB only
VMA and VMC only
VMA, VMB, and VMC
To determine which virtual machines can be migrated to Azure using Azure Site Recovery, we need to check the compatibility requirements and limitations of Azure Site Recovery. Key constraints are related to operating system, disk type, disk size, and specific features like BitLocker.
Azure Site Recovery Compatibility Considerations:
Supported Operating Systems: All listed operating systems (Windows Server 2016, Red Hat Enterprise Linux 7.2, Windows Server 2012 R2, Windows Server 2012) are generally supported by Azure Site Recovery for both VMware and Hyper-V.
Disk Type: Basic disks are supported for Azure Site Recovery.
Disk Size Limit: Azure Site Recovery has a limit on the size of each disk that can be replicated. The maximum supported disk size for Azure Site Recovery is 4 TB.
BitLocker: Azure Site Recovery supports replicating virtual machines that use BitLocker Drive Encryption. For VMware VMs, BitLocker is generally supported.
Analyzing each Virtual Machine:
Server1 (VMware):
VM1:
OS: Windows Server 2016 (Supported)
Disk Sizes: OS Disk 200 GB, Data Disk 800 GB (Both within 4 TB limit)
BitLocker: Enabled, but supported by ASR.
Migratable
VM2:
OS: Red Hat Enterprise Linux 7.2 (Supported)
Disk Sizes: OS Disk 3 TB, Data Disk 200 GB (Both within 4 TB limit)
Migratable
VM3:
OS: Windows Server 2012 R2 (Supported)
Disk Sizes: OS Disk 200 GB, Data Disk 1 TB (Both within 4 TB limit)
Migratable
Server2 (Hyper-V):
VMA:
Generation: 1 (Supported)
OS: Windows Server 2012 (Supported)
Disk Sizes: OS Disk 100 GB, Data Disk 2 TB (Both within 4 TB limit)
Migratable
VMB:
Generation: 1 (Supported)
OS: Red Hat Enterprise Linux 7.2 (Supported)
Disk Sizes: OS Disk 150 GB, Data Disk 3 TB (Both within 4 TB limit)
Migratable
VMC:
Generation: 2 (Supported)
OS: Windows Server 2016 (Supported)
Disk Sizes: OS Disk 500 GB, Data Disk 6 TB (Data Disk exceeds 4 TB limit)
Not Migratable
Conclusion:
Server1: VM1, VM2, and VM3 are all within the supported limits and are migratable.
Server2: VMA and VMB are within the supported limits and are migratable. VMC is not migratable because its Data Disk is 6 TB, exceeding the 4 TB limit per disk for Azure Site Recovery.
Therefore, the correct answer is:
The virtual machines that can be migrated from Server1: VM1, VM2, and VM3
The virtual machines that can be migrated from Server2: VMA and VMB only
You are designing an Azure solution.
The solution must meet the following requirements:
– Distribute traffic to different pools of dedicated virtual machines (VMs) based on rules.
– Provide SSL offloading capabilities.
You need to recommend a solution to distribute network traffic.
Which technology should you recommend?
Azure Application Gateway
Azure Load Balancer
Azure Traffic Manager
server-level firewall rules
Correct Answer:
Azure Application Gateway
Explanation:
Requirement 1: Distribute traffic based on rules:
Azure Application Gateway provides advanced routing capabilities, allowing you to direct traffic to different backend pools of VMs based on rules you define. These rules can be based on HTTP headers, URL paths, cookies, and more. This is a key distinguishing factor compared to Azure Load Balancer.
Requirement 2: Provide SSL Offloading:
Application Gateway can terminate SSL/TLS connections at the gateway level. This means the backend VMs don’t need to handle the overhead of encryption and decryption, freeing up their resources for application processing. This is a critical requirement that Azure Load Balancer can’t satisfy.
Why Other Options are Incorrect:
Azure Load Balancer: Azure Load Balancer distributes traffic at the transport layer (Layer 4) and does not provide SSL offloading or the advanced rule-based routing that Application Gateway does; it simply load balances TCP and UDP connections.
Azure Traffic Manager: Azure Traffic Manager is a DNS-based load balancer used for global traffic routing. It directs users to the nearest or healthiest endpoint (e.g., different Azure regions), but it cannot route traffic to individual backend pools within a region the way Application Gateway can.
Server-level firewall rules: Server-level firewall rules provide network security; they do not distribute network traffic.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use Azure AD Connect to customize the synchronization options.
Does this meet the goal?
Yes
No
Correct Answer:
Yes
Explanation:
Requirement: The goal is to synchronize only users from the on-premises Active Directory (contoso.local) to Azure AD (contoso.com) if they have a User Principal Name (UPN) suffix of contoso.com.
Proposed Solution: The solution suggests using Azure AD Connect to customize the synchronization options.
How Azure AD Connect Customization Works: Azure AD Connect provides a robust filtering mechanism that allows you to control which objects and attributes are synchronized to Azure AD. You can apply filtering based on:
Organizational Units (OUs): Sync only users from specific OUs.
Domains: Sync only users from a specific on-premises domain.
Attributes: Sync only users based on the value of a specific attribute, such as UPN suffix in this case.
Filtering based on UPN suffix: Azure AD Connect allows you to create a synchronization rule that filters users based on the UPN suffix, or on any other attribute. It is therefore possible to sync only users whose UPN suffix is contoso.com.
Why It Meets the Goal: By customizing the synchronization rules in Azure AD Connect, you can configure a rule to check the UPN suffix for each user in contoso.local. Only users with a UPN suffix of contoso.com would be synchronized to Azure AD, achieving the desired outcome.
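As a quick sanity check (not part of the Azure AD Connect configuration itself), a hypothetical PowerShell sketch using the ActiveDirectory RSAT module can preview which users would pass such a filter:
# List contoso.local users whose UPN suffix is contoso.com.
Import-Module ActiveDirectory
Get-ADUser -Filter 'UserPrincipalName -like "*@contoso.com"' -Properties UserPrincipalName |
    Select-Object SamAccountName, UserPrincipalName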
Important Tips for the AZ-305 Exam:
Azure AD Connect: This is a critical component for hybrid identity management. You need to have a deep understanding of its functions:
Synchronization: Understand how it synchronizes on-premises AD objects to Azure AD.
Filtering: How filtering works and how to configure it for domains, OUs, and attributes. This includes understanding how to customize synchronization rules to filter based on attribute value.
Password Hash Synchronization (PHS), Pass-through Authentication (PTA), and Federation.
Write-back features.
Synchronization Rules: Know how to customize synchronization rules. This includes understanding the syntax for filtering attributes and for applying transformation.
User Principal Name (UPN): Understand what a UPN is and how it is used in both on-premises Active Directory and Azure AD. You should know that the user logon name is the same as the UPN by default.
Hybrid Identity: Understand the concepts of hybrid identity and how Azure AD Connect facilitates it.
Real-World Scenarios: Be prepared for questions that require you to configure synchronization rules for specific scenarios.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use Synchronization Rules Editor to create a synchronization rule.
Does this meet the goal?
Yes
No
Correct Answer:
Yes
Explanation:
Requirement: The goal remains the same: to synchronize only users from the contoso.local Active Directory domain to the contoso.com Azure AD tenant if their UPN suffix is contoso.com.
Proposed Solution: This time, the solution suggests using the Synchronization Rules Editor.
Synchronization Rules Editor: The Synchronization Rules Editor is a tool that is part of Azure AD Connect. It provides a way to:
View existing synchronization rules.
Create new custom synchronization rules.
Modify existing synchronization rules.
Delete synchronization rules.
Set precedence on synchronization rules.
Essentially, it provides a more hands-on and granular way to control how objects are synchronized from the on-premises Active Directory to Azure AD.
How It Meets the Goal: The Synchronization Rules Editor enables you to create a custom rule specifically designed to filter users based on their UPN suffix. You can set a rule with a condition to check the userPrincipalName attribute. If the UPN ends with contoso.com, the rule will allow the synchronization. Otherwise, it will skip the synchronization. This allows the sync engine to filter only users with a UPN suffix of contoso.com.
Why It’s a Correct Approach: Both this solution and the previous one (“use Azure AD Connect to customize the synchronization options”) work by configuring a synchronization rule; this solution simply names the specific tool used to do so, the Synchronization Rules Editor. Using the Synchronization Rules Editor, you can achieve the desired filtering and ensure that only the correct users are synchronized.
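For orientation, the rules managed by the editor can also be listed from PowerShell, assuming the ADSync module that ships with Azure AD Connect:
# Inspect inbound synchronization rules on the Azure AD Connect server.
Import-Module ADSync
Get-ADSyncRule | Where-Object { $_.Direction -eq 'Inbound' } |
    Sort-Object Precedence | Select-Object Name, Precedence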
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use the Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector.
Does this meet the goal?
Yes
No
Correct Answer:
No
Explanation:
Requirement: The core requirement remains: to synchronize only users with a UPN suffix of contoso.com from the contoso.local domain to the contoso.com Azure AD tenant.
Proposed Solution: This solution suggests using the Synchronization Service Manager to modify the Active Directory Domain Services (AD DS) Connector.
Synchronization Service Manager: The Synchronization Service Manager is a tool within Azure AD Connect that is used to:
Monitor the synchronization process.
View synchronization errors.
Manage connectors and their configuration.
Run delta and full synchronizations.
While you can modify some settings for the AD DS connector within Synchronization Service Manager, you cannot create granular attribute-based filtering rules using this tool alone.
Why It Fails to Meet the Goal: The Synchronization Service Manager does not provide the ability to directly filter based on the value of a specific user attribute like the UPN suffix. You can modify which attributes are synchronized through the connector, and you can manage which OUs and domains to include or exclude. However, you cannot set a condition on attribute values. Therefore, modifying the AD DS Connector in the Synchronization Service Manager will not allow you to filter users based on the value of the UPN suffix.
Important Tips for the AZ-305 Exam:
Synchronization Service Manager: Understand the role of this tool and its limitations.
It’s primarily for monitoring, error diagnosis, and basic connector management.
It does not replace the need for the Synchronization Rules Editor for advanced filtering and attribute mapping.
Do not confuse the purpose of the Synchronization Service Manager with the Synchronization Rules Editor.
Azure AD Connect Components: Understand all the different tools that come with Azure AD Connect, and their use.
Filtering: This exam emphasizes filtering rules for a reason. Be very familiar with filtering based on OUs, domains and attributes.
Attribute Filtering: Know the limitations of filtering specific user attributes such as the UPN suffix.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an app named App1 that uses data from two on-premises Microsoft SQL Server databases named DB1 and DB2.
You plan to move DB1 and DB2 to Azure.
You need to implement Azure services to host DB1 and DB2. The solution must support server-side transactions across DB1 and DB2.
Solution: You deploy DB1 and DB2 to SQL Server on an Azure virtual machine.
Does this meet the goal?
Yes
No
Correct Answer:
Yes
Explanation:
Requirement: The main requirement is to move two on-premises SQL Server databases (DB1 and DB2) to Azure while maintaining the ability to perform server-side transactions across both databases.
Proposed Solution: The solution suggests deploying both DB1 and DB2 to SQL Server on an Azure virtual machine (VM).
How This Solution Works:
SQL Server on Azure VM: When you deploy SQL Server on an Azure VM, you essentially have full control over a SQL Server instance running on a Windows Server in Azure.
Server-Side Transactions: SQL Server on an Azure VM retains the full functionality of a traditional SQL Server instance, including transactions that span different databases hosted on the same instance. Cross-instance transactions between SQL Server instances are also possible through linked servers; however, that is not the main functionality being tested here. The key requirement is to support server-side transactions, and SQL Server on a VM satisfies it.
Why It Meets the Goal: By deploying both databases on the same SQL Server instance within a VM, you retain the ability to perform server-side transactions across them using standard T-SQL, which is exactly what the requirement asks for. The transaction is initiated and coordinated by the SQL Server instance itself.
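To make this concrete, here is a hedged sketch of such a transaction using the SqlServer PowerShell module; the server name, tables, and columns are invented for illustration:
# One server-side transaction spanning DB1 and DB2 on the same instance,
# using three-part names to reach both databases.
Invoke-Sqlcmd -ServerInstance 'VM-SQL1' -Query @'
BEGIN TRANSACTION;
    UPDATE DB1.dbo.Orders SET Status = N'Shipped' WHERE OrderId = 1;
    UPDATE DB2.dbo.Inventory SET Quantity = Quantity - 1 WHERE ProductId = 42;
COMMIT TRANSACTION;
'@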
Important Tips for the AZ-305 Exam:
SQL Server on Azure VM:
Understand that this is essentially a lift-and-shift of your on-premises SQL Server environment to an Azure VM.
You have full control over the SQL Server instance, similar to on-premises.
You are responsible for VM maintenance, patching, backup, etc.
This solution matters when you want to migrate an on-premises database to the cloud with minimal disruption, so be familiar with it.
Server-Side Transactions: Understand what server-side transactions are and how they differ from client-side transactions.
Server-side transactions are executed on the SQL Server (or database server) and provide ACID (Atomicity, Consistency, Isolation, Durability) properties.
This type of transaction is initiated on the database server.
Be aware of this as this is tested very often on AZ-305 exams.
Azure SQL Options:
Be familiar with the different SQL options in Azure: Azure SQL VM, Azure SQL Database, Azure SQL Managed Instance.
Understand the scenarios where each option is appropriate.
Cross-Database Transactions: Understand the mechanism to handle transactions across different SQL servers (linked servers, distributed transactions).
Migration: Understand the different approaches for migrating an on-premises SQL Server to Azure.
Real World Application: Understand when it is best to choose a SQL VM over the other solutions, such as database as a service.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB database that contains a container named Container1. The partition key for Container1 is set to /day. Container1 contains the items shown in the following table.
Name Content
Item1 { “id”: “1”, “day”: “Mon”, “value”: “10” }
Item2 { “id”: “2”, “day”: “Mon”, “value”: “15” }
Item3 { “id”: “3”, “day”: “Tue”, “value”: “10” }
Item4 { “id”: “4”, “day”: “Wed”, “value”: “15” }
You need to programmatically query Azure Cosmos DB and retrieve Item1 and Item2 only.
Solution: You run the following query.
SELECT id FROM c
WHERE c.day = "Mon" OR c.day = "Tue"
You set the EnableCrossPartitionQuery property to False.
Does this meet the goal?
Yes
No
Correct Answer:
No
Explanation:
Requirement: The goal is to programmatically retrieve only Item1 and Item2 from the Cosmos DB container Container1.
Proposed Solution: The solution proposes using the following query:
SELECT id FROM c
WHERE c.day = "Mon" OR c.day = "Tue"
and setting the EnableCrossPartitionQuery property to False.
Partitioning in Cosmos DB:
Cosmos DB uses partitioning to distribute data across physical storage.
The partition key determines how data is distributed and where it’s stored.
In this case, the partition key is /day. This means that all items with day = “Mon” will be stored in one partition, items with day = “Tue” will be in a different one, and so on.
EnableCrossPartitionQuery = False:
When this property is set to False, Cosmos DB will only query a single partition.
This is to optimize cost by preventing the query from scanning every partition.
Why It Fails:
Cross-partition query required: The query SELECT id FROM c WHERE c.day = "Mon" OR c.day = "Tue" matches items in two different partition key values ("Mon" and "Tue"), so answering it requires scanning more than one partition.
Cross-partitioning disabled: Because EnableCrossPartitionQuery is set to False, the SDK will refuse to execute a query that spans multiple partitions unless a single partition key is supplied, so this query cannot run as intended.
Incorrect projection and result set: Even if the query could run, SELECT id returns only the id property rather than the full items, and the WHERE clause matches Item3 in addition to Item1 and Item2. The requirement is to retrieve Item1 and Item2 in full; a query such as SELECT * FROM c WHERE c.day = "Mon" would do so while staying within a single partition.
Important Tips for the AZ-305 Exam:
Cosmos DB Partitioning: Thoroughly understand partitioning concepts, partition keys, logical and physical partitions.
Cross-Partition Queries: Understand what a cross partition query is, and know the effect of enabling or disabling them.
Be aware that it can impact cost and performance.
Querying Cosmos DB: Be familiar with the SQL API syntax for querying Cosmos DB.
SQL Statement: Know how to select entire items by using SELECT * FROM c.
Performance: Know how to optimize Cosmos DB queries for performance, including choosing the correct partition key and avoiding cross-partition queries when possible.
Real-World Scenarios: The exam often presents scenarios where you must create efficient Cosmos DB queries to retrieve specific items.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB database that contains a container named Container1. The partition key for Container1 is set to /day. Container1 contains the items shown in the following table.
Item1 { “id”: “1”, “day”: “Mon”, “value”: “10” }
Item2 { “id”: “2”, “day”: “Mon”, “value”: “15” }
Item3 { “id”: “3”, “day”: “Tue”, “value”: “10” }
Item4 { “id”: “4”, “day”: “Wed”, “value”: “15” }
You need to programmatically query Azure Cosmos DB and retrieve Item1 and Item2 only.
Solution: You run the following query.
SELECT day FROM c
WHERE c.value = "10" OR c.value = "15"
You set the EnableCrossPartitionQuery property to True.
Does this meet the goal?
Yes
No
Correct Answer:
No
Explanation:
Requirement: The goal is to programmatically retrieve only Item1 and Item2 from the Cosmos DB container Container1.
Proposed Solution: The solution suggests using the following query:
SELECT day FROM c
WHERE c.value = "10" OR c.value = "15"
and setting the EnableCrossPartitionQuery property to True.
How This Solution Works:
The Query: The SQL query SELECT day FROM c WHERE c.value = “10” OR c.value = “15” aims to retrieve the day attribute of all items in the container where the value is either 10 or 15.
Cross Partition Query: Setting EnableCrossPartitionQuery to True means the query will scan all partitions of the container.
Why It Fails to Meet the Goal:
Incorrect Result Set: The proposed query will return all items with a value of 10 or 15. This means it will return Item1, Item2, Item3, and Item4. However, the requirement is to return only Item1 and Item2.
Incorrect Projection: Also, the SELECT day FROM c statement returns only the day property of each document instead of the entire document. The requirement is to retrieve the full items Item1 and Item2 (for example, with SELECT * FROM c WHERE c.day = "Mon").
Important Tips for the AZ-305 Exam:
Cosmos DB Querying: You should be very familiar with the SQL syntax used for querying Cosmos DB. Know that the SELECT clause determines which attribute will be in the output.
Partitioning: Understand how the partition key affects querying and performance. Know what is a cross partition query.
EnableCrossPartitionQuery:
Know the purpose and implications of using this property.
Be aware of the performance and cost implications.
Correct Query Conditions: Carefully assess the query conditions to make sure they match the required results set.
SELECT Clause: Understand the difference between the SELECT * and SELECT <field> clauses.
Real-World Application: In the exam, you need to make sure your query is returning the right item, with the correct properties. You need to understand that SELECT determines what attributes will be returned in the output.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB database that contains a container named Container1. The partition key for Container1 is set to /day. Container1 contains the items shown in the following table.
Item1 { “id”: “1”, “day”: “Mon”, “value”: “10” }
Item2 { “id”: “2”, “day”: “Mon”, “value”: “15” }
Item3 { “id”: “3”, “day”: “Tue”, “value”: “10” }
Item4 { “id”: “4”, “day”: “Wed”, “value”: “15” }
You need to programmatically query Azure Cosmos DB and retrieve Item1 and Item2 only.
Solution: You run the following query.
SELECT day FROM c
WHERE c.value = "10" OR c.value = "15"
You set the EnableCrossPartitionQuery property to True.
Does this meet the goal?
Yes
No
The goal is to retrieve only Item1 and Item2 from the Azure Cosmos DB container.
The provided solution uses the following query:
SELECT day FROM c
WHERE c.value = "10" OR c.value = "15"
and sets EnableCrossPartitionQuery to True.
Let’s analyze the data and the query:
Container Container1 has a partition key /day.
The items are:
Item1: { “id”: “1”, “day”: “Mon”, “value”: “10” }
Item2: { “id”: “2”, “day”: “Mon”, “value”: “15” }
Item3: { “id”: “3”, “day”: “Tue”, “value”: “10” }
Item4: { “id”: “4”, “day”: “Wed”, “value”: “15” }
The query SELECT day FROM c WHERE c.value = “10” OR c.value = “15” will select the day field from all items (FROM c) that satisfy the condition c.value = “10” OR c.value = “15”.
Let’s check which items satisfy the condition:
Item1: c.value = “10” is true. Item1 is selected.
Item2: c.value = “15” is true. Item2 is selected.
Item3: c.value = “10” is true. Item3 is selected.
Item4: c.value = “15” is true. Item4 is selected.
Therefore, the query will retrieve Item1, Item2, Item3, and Item4. The SELECT day FROM c part only specifies that the output will contain only the day field from each of these items, but it still selects all four items based on the WHERE clause.
The goal was to retrieve only Item1 and Item2. The provided solution retrieves Item1, Item2, Item3, and Item4. Thus, the solution does not meet the goal.
Setting EnableCrossPartitionQuery to True is necessary for this query to work across all partitions, as the query does not filter based on the partition key (/day). However, enabling cross-partition query does not change which items are selected based on the WHERE clause.
To retrieve only Item1 and Item2, you would need a query that specifically targets those items and returns them in full, for example SELECT * FROM c combined with a WHERE clause on their id values, such as WHERE c.id IN ("1", "2"), or on the partition key, such as WHERE c.day = "Mon".
Final Answer: No
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You manage an Active Directory domain named contoso.local.
You install Azure AD Connect and connect to an Azure Active Directory (Azure AD) tenant named contoso.com without syncing any accounts.
You need to ensure that only users who have a UPN suffix of contoso.com in the contoso.local domain sync to Azure AD.
Solution: You use the Synchronization Service Manager to modify the Metaverse Designer tab.
Does this meet the goal?
Yes
No
Correct Answer:
No
Explanation:
Requirement: The goal remains: to synchronize only users from the contoso.local Active Directory domain to the contoso.com Azure AD tenant if their UPN suffix is contoso.com.
Proposed Solution: This solution suggests using the Synchronization Service Manager to modify the Metaverse Designer tab.
What is the Metaverse?
The Metaverse is a central, shared data store used by Azure AD Connect to hold objects during synchronization. It is not persistent: on every synchronization cycle, the engine reads objects from the connected data sources (such as Active Directory and Azure AD), processes them using the synchronization rules, and saves the results in the metaverse.
Objects from different connected data sources are represented as metaverse objects.
Synchronization Service Manager and Metaverse Designer Tab: The Synchronization Service Manager is a tool to monitor, manage, and troubleshoot the synchronization process. The Metaverse Designer tab is a viewer within the Synchronization Service Manager that allows you to:
See the schema of the metaverse.
Inspect the attributes and rules that apply to metaverse objects.
View object properties.
It does not allow you to modify the synchronization rules or the behavior that controls which objects are loaded into the metaverse or synchronized to Azure AD; it is a read-only view of the metadata.
Why It Fails to Meet the Goal: The Metaverse Designer tab in the Synchronization Service Manager is a viewing tool, not a configuration tool. You cannot modify synchronization behavior and filtering rules directly through this interface. It provides a way to see how attributes of your synchronized object are mapped and how rules are processed. However, the Metaverse Designer cannot be used to control which objects get loaded into the metaverse in the first place, and it cannot apply filters based on specific attributes of the users.
Important Tips for the AZ-305 Exam:
Azure AD Connect Components: Have a solid understanding of all the tools that come with Azure AD Connect.
Synchronization Service Manager: Be familiar with all the tabs in this tool: Operations, Connectors, Metaverse Search, Metaverse Designer, Connector Space, and Lineage. Know what kinds of activities you can perform in each of these tabs.
Metaverse: You must understand the role of the metaverse in Azure AD Connect.
Filtering: Be aware that filtering has to happen before the object is loaded in the metaverse. The Metaverse Designer can only be used to view metadata but not to filter.
Correct Tool for Task: It’s crucial to use the right tool for the task. For filtering based on a specific attribute (like UPN suffix), you must use the Synchronization Rules Editor.
Real-World Scenarios: In the exam, you’ll often be asked to choose the correct tool for a given scenario.
HOTSPOT
You have an Azure subscription that contains a resource group named RG1.
You have a group named Group1 that is assigned the Contributor role for RG1.
You need to enhance security for the virtual machines in RG1 to meet the following requirements:
– Prevent Group1 from assigning external IP addresses to the virtual machines.
– Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address.
What should you use to meet each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Prevent Group1 from assigning external IP addresses to the virtual machines:
Azure Policy
Azure Bastion
Virtual network service endpoints
Azure Web Application Firewall (WAF)
Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address:
Azure Policy
Azure Bastion
Virtual network service endpoints
Azure Web Application Firewall (WAF)
Correct Answer Area:
Prevent Group1 from assigning external IP addresses to the virtual machines:
Azure Policy
Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address:
Azure Bastion
Explanation:
Let’s analyze each requirement and why the selected options are correct.
Requirement 1: Prevent Group1 from assigning external IP addresses to the virtual machines.
Azure Policy: Azure Policy allows you to define and enforce rules (policies) on your Azure resources. You can create a policy that denies the creation or modification of resources that would attach public IP addresses to VMs in your subscription. Azure Policy can restrict any action performed through the Azure control plane and can enforce security, compliance, governance, cost control, and more. Here, it can prevent users from adding a public IP to a VM or changing a VM’s public IP configuration.
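A hedged sketch of such a policy definition follows; it mirrors the pattern of the built-in “Network interfaces should not have public IPs” policy, but the definition name is illustrative and the alias should be verified against current Azure Policy aliases:
# Define a policy that denies NICs carrying a public IP, using Az.Resources.
$rule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/networkInterfaces" },
      { "field": "Microsoft.Network/networkInterfaces/ipConfigurations[*].publicIpAddress.id", "exists": "true" }
    ]
  },
  "then": { "effect": "deny" }
}
'@
New-AzPolicyDefinition -Name 'deny-nic-public-ip' -Policy $rule
The definition could then be assigned at the RG1 scope with New-AzPolicyAssignment.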
Why Other Options are Incorrect:
Azure Bastion: Azure Bastion is a service that provides secure RDP/SSH access to VMs but does not control whether a VM can have an external IP address.
Virtual network service endpoints: Service endpoints restrict access to Azure PaaS services (e.g. SQL Database, Storage Account) to only specific virtual networks but is not relevant to the requirements.
Azure Web Application Firewall (WAF): WAF protects web applications from common attacks but does not control resource provisioning.
Requirement 2: Ensure that Group1 can establish a Remote Desktop connection to the virtual machines through a shared external IP address.
Azure Bastion: Azure Bastion allows users to connect to their VMs through a secure connection and a single shared external IP address. Instead of directly exposing the VMs’ RDP/SSH ports to the internet, you establish secure access via Bastion, using either the Azure portal or a native RDP client, with Bastion acting as a jump server.
Why Other Options are Incorrect:
Azure Policy: Azure Policy does not provide remote access to VMs.
Virtual network service endpoints: Service endpoints don’t enable RDP/SSH connections to VMs.
Azure Web Application Firewall (WAF): WAF protects web applications but does not provide remote access.
Important Tips for the AZ-305 Exam:
Azure Policy: This is a very important topic for the AZ-305 exam. You should have a very solid understanding:
What is Azure Policy: You need to know how it enforces the standards across your Azure resources.
How to define a policy: You should know how to define a policy using the Azure portal, the CLI, PowerShell, or Terraform.
How to assign a policy: You should know how to assign Azure Policies at different scopes.
How to evaluate Azure Policies.
Different scenarios where Azure Policy applies.
Azure Bastion:
Understand that this is a secure, managed service for remote access to VMs.
Know the benefits of Bastion compared to directly exposing RDP/SSH ports to the internet.
Be familiar with different connection methods via Bastion.
Security: Pay attention to security aspects of Azure services. Azure Policy helps enforce security policies, while Azure Bastion provides secure access.
RBAC: This question highlights how RBAC and Azure Policy work together: RBAC assigns permissions, and Policy provides the guardrails.
Real-World Scenarios: Be prepared to choose between various Azure services based on requirements.
You create a container image named Image1 on a developer workstation.
You plan to create an Azure Web App for Containers named WebAppContainer that will use Image1.
You need to upload Image1 to Azure. The solution must ensure that WebAppContainer can use Image1.
To which storage type should you upload Image1?
an Azure Storage account that contains a blob container
Azure Container Instances
Azure Container Registry
an Azure Storage account that contains a file share
Correct Answer:
Azure Container Registry
Explanation:
Requirement: The goal is to upload a container image (Image1) created on a developer workstation to Azure so that an Azure Web App for Containers (WebAppContainer) can use it.
Why Azure Container Registry is the Correct Choice:
Container Registry: Azure Container Registry (ACR) is a managed, private Docker registry service. It’s specifically designed to store and manage your private Docker container images.
Integration with Azure Services: ACR is tightly integrated with other Azure services such as Azure Web App for Containers, Azure Kubernetes Service (AKS), Azure Container Instances (ACI), etc. These services are designed to retrieve container images from a container registry (such as ACR) and deploy the containers based on the image definition.
Security: ACR provides secure storage for container images and supports authentication for accessing images. This is critical because you don’t want unauthorized access to your private container images.
Image Management: ACR allows you to manage versions of your container images and supports advanced features such as geo-replication.
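For context, a typical push workflow from the developer workstation might look like this; the registry name myregistry is illustrative:
# Authenticate, tag the local image with the registry's login server, and push.
az acr login --name myregistry
docker tag image1 myregistry.azurecr.io/image1:v1
docker push myregistry.azurecr.io/image1:v1
WebAppContainer would then be configured to pull myregistry.azurecr.io/image1:v1 from the registry.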
Why Other Options are Incorrect:
Azure Storage account that contains a blob container: Azure Storage blobs are designed for storing unstructured data, not for storing and managing container images. While you could technically store a container image in a blob, the Azure Web App for Containers service doesn’t directly use a storage blob container to get a container image.
Azure Container Instances: Azure Container Instances (ACI) is a serverless compute option for running containers, but it is not a container image registry. While ACI can retrieve and run container images from a registry, it is not a registry itself.
Azure Storage account that contains a file share: Azure file shares are designed for storing file system data, not for storing container images. It’s not designed to be a container registry and is not integrated with Azure Web App for Containers.
You have an Azure Cosmos DB account named Account1. Account1 includes a database named DB1 that contains a container named Container1. The partition key for Container1 is set to /city.
You plan to change the partition key for Container1.
What should you do first?
Delete Container1.
Create a new container in DB1.
Implement the Azure Cosmos DB .NET SDK.
Regenerate the keys for Account1.
Correct Answer:
Create a new container in DB1.
Explanation:
The Problem: Immutable Partition Keys: In Azure Cosmos DB, the partition key you choose for a container is immutable. This means that once you set the partition key for a container, you cannot change it.
Why the Other Options are Incorrect:
Delete Container1: While deleting the container would allow you to create a new container with a different partition key, it will also delete all the data inside the container. This is not ideal, and in most scenarios, you would want to maintain the data.
Implement the Azure Cosmos DB .NET SDK: While you need the .NET SDK to interact with Cosmos DB programmatically, it is not related to the act of changing the partition key.
Regenerate the keys for Account1: Regenerating account keys is a security measure and is not related to the partition key change process.
The Correct Approach:
Create a new container: The first step is to create a new container in your database DB1. You’ll set the desired new partition key for this new container.
Migrate the data: Next, you need to migrate all the data from the original Container1 to the new container. You can write an application or use a data migration tool to read data from Container1 and write it to the new container.
Application Changes: You’ll need to update your application to now read and write data to this new container with the new partition key.
Delete the old container: Once the migration is complete and the application has been updated, then you can delete Container1.
Important Tips for the AZ-305 Exam:
Cosmos DB Partitioning: You must understand the importance of partitioning and the concept of partition keys. It’s a key aspect of Cosmos DB.
Immutable Partition Keys: Know that a container’s partition key cannot be changed once it’s set. This is a very important characteristic of Cosmos DB.
Migration: Understand that you must migrate the data to a new container if you have to change your partition key.
Data Migration: Understand how to use the Azure SDK or Data Migration tool to migrate the data.
SDK: Understand that while SDKs are important for interacting with Azure Services, it is not part of the core infrastructure design.
Security: You should know the different security mechanisms such as regenerating keys and how it affects your application.
You have an Azure subscription that contains 10 virtual machines on a virtual network.
You need to create a graph visualization to display the traffic flow between the virtual machines.
What should you do from Azure Monitor?
From Activity log, use quick insights.
From Metrics, create a chart.
From Logs, create a new query.
From Workbooks, create a workbook.
Correct Answer:
From Workbooks, create a workbook.
Explanation:
Requirement: The goal is to visualize the traffic flow between 10 virtual machines (VMs) on an Azure virtual network.
Why Azure Monitor Workbooks are the Right Choice:
Visualizations: Azure Monitor Workbooks allow you to create rich, interactive visualizations, including graphs, charts, and maps. They are excellent for combining different data sources into a single, informative view.
Traffic Flow: You can use workbooks to create a graph visualization that shows the connections between the VMs and the data that is flowing through those connections.
Customization: You can fully customize your workbooks to display different metrics, log data, or other types of information.
Data Sources: Workbooks provide an intuitive way to integrate different data sources, including Azure Monitor Log Analytics workspaces, Application Insights, and more, to give you a comprehensive overview of your environment.
Why Other Options are Incorrect:
From Activity log, use quick insights: The Activity Log records events related to resource management. It does not track or visualize network traffic. Quick insights provides information on successful or failed operations.
From Metrics, create a chart: Azure Monitor Metrics tracks performance data such as CPU, memory, and network usage. While you can see network usage, metrics charts show numeric values over time; they cannot render a graph of the traffic flow between VMs.
From Logs, create a new query: Azure Monitor Logs allows you to query logs using Kusto Query Language (KQL). You could write a query that surfaces the traffic flow, but the results are not displayed as a graph. Logs are an excellent data source for a workbook, but a query alone will not provide the required visual representation.
HOTSPOT
You plan to create an Azure Storage account in the Azure region of East US 2.
You need to create a storage account that meets the following requirements:
– Replicates synchronously
– Remains available if a single data center in the region fails
How should you configure the storage account? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Replication:
Geo-redundant storage (GRS)
Locally-redundant storage (LRS)
Read-access geo-redundant storage (RA-GRS)
Zone-redundant storage (ZRS)
Account type:
Blob storage
Storage (general purpose v1)
StorageV2 (general purpose v2)
Correct Answer Area:
Replication:
Zone-redundant storage (ZRS)
Account type:
StorageV2 (general purpose v2)
Explanation:
Requirement 1: Replicates synchronously
Synchronous Replication: This means data is written to multiple storage locations simultaneously and acknowledged only after all writes are confirmed. This guarantees data consistency between storage locations.
Zone-Redundant Storage (ZRS): ZRS replicates data synchronously across three availability zones within a single Azure region. This ensures high availability and data durability even if one data center (zone) fails.
Requirement 2: Remains available if a single data center in the region fails
Zone-Redundant Storage (ZRS): By replicating the data to three different availability zones in the same region, ZRS will keep the storage available even if there is a single data center failure.
Why Other Replication Options are Incorrect:
Geo-redundant storage (GRS): GRS replicates data asynchronously to a paired region. It protects against regional disasters but does not meet the synchronous-replication requirement.
Locally-redundant storage (LRS): LRS replicates data within a single data center, which does not protect against data center failures.
Read-access geo-redundant storage (RA-GRS): RA-GRS is the same as GRS except that the data can also be read from the secondary region; the replication to the secondary region is still asynchronous.
Why StorageV2 (general purpose v2) is correct:
StorageV2 (general purpose v2) is the latest and recommended storage account type. It supports all storage services (blobs, files, queues, tables) and the latest features such as ZRS, and it provides a better pricing model.
Why Other Account types are incorrect:
Blob storage: This account type is optimized for blob storage only and may not support some features needed in this case, and the lack of support for other storage services makes this an inappropriate option.
Storage (general purpose v1): This is an older storage account type and is not recommended for new deployments. It does not have many of the newer features that StorageV2 provides.
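A minimal Az PowerShell sketch of this configuration (the resource group and storage account names are illustrative):
# Create a general-purpose v2 account with zone-redundant replication in East US 2.
New-AzStorageAccount -ResourceGroupName 'RG1' -Name 'mystorageacct001' `
    -Location 'eastus2' -SkuName 'Standard_ZRS' -Kind 'StorageV2'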
Important Tips for the AZ-305 Exam:
Azure Storage Redundancy: This is a crucial topic for the AZ-305 exam. You MUST understand the different storage redundancy options:
LRS (Locally-redundant storage): Data is copied within a single data center.
ZRS (Zone-redundant storage): Data is copied across three availability zones within the same region.
GRS (Geo-redundant storage): Data is copied to a paired region.
RA-GRS (Read-access geo-redundant storage): Data is copied to a paired region and can be read from the secondary region.
Synchronous vs. Asynchronous Replication: Understand the difference between these replication types. Synchronous replication is needed for high availability within the region, and asynchronous for disaster recovery.
Availability Zones: Be aware of the concept of availability zones and how they provide resilience.
Storage Account Types: Know the purpose and capabilities of different storage account types:
StorageV2: the latest storage account type, providing the newest features.
Blob storage: designed for unstructured data such as images and videos.
File storage: designed for file shares for virtual machines.
Storage (general purpose v1): an older version, not recommended for new deployments.
Data Durability: Understand which storage option provides the best data durability and fault tolerance.
Cost: Be aware that the more fault tolerant the storage is, the more expensive it is.
Real-World Scenarios: The exam often presents scenarios where you need to choose the right storage redundancy based on specific requirements (availability, durability, cost).
HOTSPOT
You plan to deploy an Azure virtual machine named VM1 by using an Azure Resource Manager template.
You need to complete the template.
What should you include for Scope1 and Scope2 in the template? To answer, select the appropriate options in the answer area.
a) Microsoft.Network/publicIPAddresses/
b) Microsoft.Network/virtualNetworks/
c) Microsoft.Network/networkInterfaces/
d) Microsoft.Network/virtualNetworks/subnets
e) Microsoft.Storage/storageAccounts/
NOTE: Each correct selection is worth one point.
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-10-01",
  "name": "VM1",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts/', variables('Name3'))]",
    "[resourceId(Scope1, variables('Name4'))]"
  ]
},
{
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2018-11-01",
  "name": "NIC1",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Network/publicIPAddresses/', variables('Name1'))]",
    "[resourceId(Scope2, variables('Name2'))]"
  ]
}
Correct Answer Area:
Scope1: Microsoft.Network/networkInterfaces/
Scope2: Microsoft.Network/virtualNetworks/
Explanation:
Understanding ARM Template resourceId() Function:
The resourceId() function in an ARM template constructs the fully qualified ID of a resource. It takes the resource type (provider namespace plus type) and one or more resource name segments, and optionally a subscription ID and resource group name, and returns the full resource ID string.
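For example (with illustrative subscription and resource group values), the expression
"[resourceId('Microsoft.Network/networkInterfaces', 'NIC1')]"
resolves to a full ID of the form
/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkInterfaces/NIC1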
Virtual Machine Resource (Microsoft.Compute/virtualMachines):
The dependsOn property here indicates the dependencies of the VM.
“[resourceId(‘Microsoft.Storage/storageAccounts/’, variables(‘Name3’))]” refers to the storage account on which the VM’s OS disk will be stored.
"[resourceId(Scope1, variables('Name4'))]" refers to a resource whose type is given by Scope1. The resource named by the variable Name4 is the VM's remaining dependency, its network interface, so Scope1 must be Microsoft.Network/networkInterfaces/.
Network Interface Resource (Microsoft.Network/networkInterfaces):
The dependsOn property here specifies the dependencies of the NIC.
“[resourceId(‘Microsoft.Network/publicIPAddresses/’, variables(‘Name1’))]” refers to the public IP address if the NIC is to be connected to a public IP.
"[resourceId(Scope2, variables('Name2'))]" refers to a resource whose type is given by Scope2. The resource named by the variable Name2 is the NIC's remaining dependency, the virtual network it attaches to, so Scope2 must be Microsoft.Network/virtualNetworks/.
Why other scopes are incorrect:
Microsoft.Network/publicIPAddresses/: The public IP address is already referenced in the other dependsOn entry of the NIC resource.
Microsoft.Network/virtualNetworks/subnets: The subnet is not a dependency at this level; the NIC depends on the virtual network as a whole.
Microsoft.Storage/storageAccounts/: The storage account is already referenced in the other dependsOn entry of the VM resource.
HOTSPOT
Your network contains an Active Directory domain named adatum.com and an Azure Active Directory (Azure AD) tenant named adatum.onmicrosoft.com.
Adatum.com contains the user accounts in the following table.
Name Member of
User1 Domain Admins
User2 Schema Admins
User3 Incoming Forest Trust Builders
User4 Replicator
User5 Enterprise Admins
Adatum.onmicrosoft.com contains the user accounts in the following table
Name Role
UserA Global administrator
UserB User administrator
UserC Security administrator
UserD Service administrator
You need to implement Azure AD Connect. The solution must follow the principle of least privilege.
Which user accounts should you use in Adatum.com and Adatum.onmicrosoft.com to implement Azure AD Connect? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Adatum.com:
User1
User2
User3
User4
User5
Adatum.onmicrosoft.com:
UserA
UserB
UserC
UserD
Adatum.com: User4
Explanation: To implement Azure AD Connect, the account used on the on-premises Active Directory side needs read access to the directory to synchronize objects. The Replicator account has the necessary permissions to read directory information for replication purposes. This aligns with the principle of least privilege as it avoids using highly privileged accounts like Domain Admins or Enterprise Admins.
Adatum.onmicrosoft.com: UserA
Explanation: To implement Azure AD Connect in Azure AD, you need an account with Global administrator permissions. This is required for the initial setup and configuration of Azure AD Connect, including creating the Azure AD Connector account and setting up the synchronization rules.
Therefore, the correct selections are:
Adatum.com: User4
Adatum.onmicrosoft.com: UserA
Why other options are incorrect:
Adatum.com:
User1 (Domain Admins): Has excessive permissions. Violates the principle of least privilege.
User2 (Schema Admins): Has permissions to modify the Active Directory schema, which is far more than needed for Azure AD Connect. Violates the principle of least privilege.
User3 (Incoming Forest Trust Builders): This group exists specifically to create incoming forest trusts and is not relevant to Azure AD Connect’s synchronization needs.
User5 (Enterprise Admins): Has the highest level of permissions in the Active Directory forest. Violates the principle of least privilege.
Adatum.onmicrosoft.com:
UserB (User administrator): While this role can manage users, it typically doesn’t have the necessary permissions for the initial setup and configuration of Azure AD Connect.
UserC (Security administrator): This role focuses on security-related tasks and doesn’t have the permissions required for Azure AD Connect setup.
UserD (Service administrator): This role does not include the directory permissions needed to set up Azure AD Connect. Global administrator is required for the initial setup.
You have an Azure subscription that contains 100 virtual machines.
You have a set of Pester tests in PowerShell that validate the virtual machine environment.
You need to run the tests whenever there is an operating system update on the virtual machines. The solution must minimize implementation time and recurring costs.
Which three resources should you use to implement the tests? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Azure Automation runbook
an alert rule
an Azure Monitor query
a virtual machine that has network access to the 100 virtual machines
an alert action group
Correct Answer:
Azure Automation runbook
an alert rule
an alert action group
Explanation:
Requirement: The goal is to run Pester tests automatically whenever there’s an OS update on any of the 100 VMs, while minimizing setup time and costs.
Why these options are correct:
Azure Automation runbook:
This is where the logic for running the Pester tests lives. You create a PowerShell runbook in Azure Automation that executes the Pester tests; the script is stored in the Automation account and executed by the Azure Automation service.
The runbook can use PowerShell to connect to the virtual machines and run the Pester tests, or rely on Azure Automation DSC (Desired State Configuration) or Azure VM extensions for this. A minimal sketch of such a runbook follows.
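As a rough sketch of what the runbook body might contain (the test path is a placeholder, and the Pester module is assumed to be available to the Automation account):
# Run the stored Pester suite and capture the result object
Import-Module Pester
$result = Invoke-Pester -Path "C:\Tests\VmEnvironment.Tests.ps1" -PassThru
# Surface failures so the runbook job itself is marked as failed
if ($result.FailedCount -gt 0) {
    throw "$($result.FailedCount) Pester test(s) failed."
}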
An Alert Rule:
This detects the operating system updates on the virtual machines. You create an alert rule that triggers on the Microsoft.Compute/virtualMachines resource when a specific event is generated, such as an OS patch installation.
Alert rules allow you to define conditions that trigger actions.
An Alert Action Group:
This is used to call the Azure Automation runbook when the alert rule is triggered. When the operating system update event is detected, the alert action group will be triggered and will call the Azure Automation runbook to execute the Pester tests.
Action groups define the actions that occur when an alert fires, such as sending an email or SMS message, calling a Logic App, or, as in this solution, starting an Azure Automation runbook.
Why Other Options are Incorrect:
An Azure Monitor query: While queries are useful for investigating and analyzing logs, one is not required in this solution. The alert rule and action group provide the core functionality for the automation.
A virtual machine that has network access to the 100 virtual machines: You do not need an additional VM just to run the tests; they execute inside the Azure Automation runbook using the credentials and connectivity it already has. An extra VM would add operational and management overhead plus a recurring cost, which the solution is meant to minimize.
Important Tips for the AZ-305 Exam:
Azure Automation: You must know the details about Azure Automation, especially its purpose and the way you can automate tasks using Runbooks.
Know how to create, configure, and trigger runbooks.
Understand how to use PowerShell with Azure Automation.
Azure Monitor: You need to know how Azure Monitor is used to observe your Azure resources.
Alerts:
Understand how to create alert rules based on metrics and logs.
Know how to configure action groups to take actions when an alert is triggered.
Pester: Know what Pester is and how it can be used to test infrastructure.
Real-World Automation: Be prepared to design automated solutions that use Azure services for complex processes.
Cost Optimization: Pay attention to cost minimization in your designs. Avoid unnecessary resources.
DevOps mindset: Understand the concepts and processes of DevOps.
HOTSPOT
You have an Azure subscription that contains multiple resource groups.
You create an availability set as shown in the following exhibit.
Create availability set
*Name
AS1
*Subscription
Azure Pass
*Resource group
RG1
Create new
*Location
West Europe
Fault domains
2
Update domains
3
Use managed disks
No (Classic)  Yes (Aligned)
You deploy 10 virtual machines to AS1.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
During planned maintenance, at least [answer choice]
virtual machines will be available.
▼
4
5
6
8
To add another virtual machine to AS1, the virtual machine
must be added to [answer choice].
any region and the RG1 resource group
the West Europe region and any resource group
the West Europe region and the RG1 resource group
Statement 1: During planned maintenance, at least [6] virtual machines will be available.
Explanation: Availability sets provide protection against planned maintenance (Azure updates) by distributing virtual machines across update domains. With 3 update domains, Azure will update these domains sequentially. In the worst-case scenario, all virtual machines in one update domain will be unavailable during maintenance.
Worst-case distribution: To find the minimum number available, consider the most uneven distribution possible across the 3 update domains. For instance, you could have 4 VMs in UD1, 3 VMs in UD2, and 3 VMs in UD3. When UD1 is being updated, the 3 + 3 = 6 VMs in the other domains are still available. Therefore, at least 6 VMs will be available.
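The same worst-case arithmetic, as a quick PowerShell sanity check (not part of the exam answer itself):
$vmCount = 10
$updateDomains = 3
# At most ceil(10 / 3) = 4 VMs can share the fullest update domain
$maxPerDomain = [math]::Ceiling($vmCount / $updateDomains)
# Everything outside that domain stays up while it is updated
$minAvailable = $vmCount - $maxPerDomain   # 6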
Statement 2: To add another virtual machine to AS1, the virtual machine must be added to [the West Europe region and the RG1 resource group].
Explanation:
Region: Availability sets are a regional resource. All virtual machines within an availability set must reside in the same Azure region as the availability set itself. AS1 is located in West Europe.
Resource Group: While an availability set exists within a resource group, the individual virtual machines within that availability set also need to be in the same resource group. AS1 is in RG1.
Therefore, the correct options are:
Statement 1: 6
Statement 2: the West Europe region and the RG1 resource group
HOTSPOT
You have an Azure subscription that contains the resource groups shown in the following table.
Name Location
RG1 West US
RG2 East US
You create an Azure Resource Manager template named Template1 as shown in the following exhibit.
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name": {
      "type": "String"
    },
    "location": {
      "defaultValue": "westus",
      "type": "String"
    }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2019-11-01",
      "name": "[parameters('name')]",
      "location": "[variables('location')]",
      "sku": {
        "name": "Basic"
      },
      "properties": {
        "publicIPAddressVersion": "IPv4",
        "publicIPAllocationMethod": "Dynamic",
        "idleTimeoutInMinutes": 4,
        "ipTags": []
      }
    }
  ]
}
From the Azure portal, you deploy Template1 four times by using the settings shown in the following table.
Resource group Name Location
RG1 IP1 westus
RG1 IP2 westus
RG2 IP1 westus
RG2 IP3 westus
What is the result of the deployment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Number of public IP addresses in West US:
▼
1
2
3
4
Total number of public IP addresses created:
▼
1
2
3
4
Answer Area:
Number of public IP addresses in West US: 2
Total number of public IP addresses created: 4
Explanation:
Let’s analyze each deployment:
Deployment 1 (RG1, IP1, westus):
The template’s variables.location is set to [resourceGroup().location].
Since the resource group is RG1, which is in West US, the public IP address IP1 will be created in West US.
Deployment 2 (RG1, IP2, westus):
Again, variables.location resolves to the resource group’s location (RG1, West US).
The public IP address IP2 will be created in West US.
Deployment 3 (RG2, IP1, westus):
The resource group is RG2, which is in East US.
Even though the deployment specifies “westus” for the location parameter, the template never uses that parameter; the resource’s location comes from variables.location, which resolves to the resource group’s location.
The public IP address IP1 will be created in East US. Note that the name “IP1” is reused, but it’s allowed since it’s in a different resource group.
Deployment 4 (RG2, IP3, westus):
Similar to deployment 3, the resource group is RG2 (East US).
Public IP address IP3 will be created in East US.
Therefore:
Public IP addresses in West US: IP1 and IP2 (2 total)
Total public IP addresses created: IP1 (West US), IP2 (West US), IP1 (East US), IP3 (East US) (4 total)
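For reference, the third deployment from the table could look roughly like the following in PowerShell. This is a sketch (the template file path is an assumption), shown to emphasize that the location value passed at deployment time is never consumed by the resource.
# Deploy Template1 into RG2, passing name and location parameters
New-AzResourceGroupDeployment `
    -ResourceGroupName "RG2" `
    -TemplateFile ".\template1.json" `
    -TemplateParameterObject @{ name = "IP1"; location = "westus" }
# The public IP still lands in East US: the template reads
# variables('location'), which resolves to resourceGroup().location.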
Tips for the AZ-305 Exam (and similar Azure exams):
Understand ARM Template Evaluation: Pay close attention to which expressions a resource actually references. In this template, the resource’s location reads a variable, so the location parameter’s value is effectively ignored.
Resource Group Scope: Remember that many resources are scoped to a resource group. The resourceGroup() function is very useful for accessing resource group properties within a template.
Variable Usage: Understand how variables can be used to dynamically set properties based on other template inputs or Azure context.
Deployment Scope vs. Resource Location: Be aware that the location specified during deployment can be different from the actual location where the resource ends up if the template logic dictates otherwise (like using resourceGroup().location).
Naming Conflicts in Resource Groups: Know that resource names must be unique within a resource group but can be reused across different resource groups.
Practice with ARM Templates: The best way to understand ARM templates is to write and deploy them. Experiment with different functions and scenarios.
Focus on Key Functions: Be familiar with commonly used ARM template functions like parameters(), variables(), resourceGroup(), subscription(), etc.
You have an Azure subscription that contains an Azure Log Analytics workspace.
You have a resource group that contains 100 virtual machines. The virtual machines run Linux.
You need to collect events from the virtual machines to the Log Analytics workspace.
Which type of data source should you configure in the workspace?
Syslog
Linux performance counters
custom fields
Correct Answer:
Syslog
Explanation:
Requirement: The goal is to collect events from Linux VMs and send them to an Azure Log Analytics workspace.
Why Syslog is the Correct Choice:
Syslog Standard: Syslog is a standard protocol for message logging in Linux systems. Many applications and services on Linux use Syslog to generate their logs.
Log Collection: The Log Analytics agent for Linux (which runs on the VM) is configured to use Syslog as its primary source of event data. It can collect logs from different Syslog facilities, such as auth, cron, daemon, and many more.
Centralized Logging: By configuring Syslog in the Log Analytics workspace, you enable centralized collection of system events, making it easier to analyze and troubleshoot issues across multiple VMs.
Why Other Options are Incorrect:
Linux performance counters: While performance counters (such as CPU, memory, disk) are important, they are not the source of event logs and are separate from the Syslog functionality. Performance counters provide metrics whereas Syslog provides logs.
Custom fields: Custom fields are used to define additional data fields in your log data, but they are not a data source in themselves. You would need another source (like Syslog) to actually create the log, and then custom fields can be added.
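If you script the workspace configuration, Syslog collection can be enabled per facility with the Az.OperationalInsights module (the legacy Log Analytics agent model). A hedged sketch with placeholder resource names, showing one facility; repeat per facility you need:
# Collect daemon-facility messages at error, warning, and informational severities
New-AzOperationalInsightsLinuxSyslogDataSource `
    -ResourceGroupName "RG1" `
    -WorkspaceName "Workspace1" `
    -Name "syslog-daemon" `
    -Facility "daemon" `
    -CollectError -CollectWarning -CollectInformational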
You have a virtual network named VNet1 as shown in the exhibit. (Click the Exhibit tab.)
Resource group: Production
Location: West US
Subscription: Production subscription
Subscription ID: 12ab3cd4-5e67-8901-f234-g5hi67jkl8m9
Tags: none
Connected devices: none
Address space: 10.2.0.0/16
DNS servers: Azure provided DNS service
No devices are connected to VNet1.
You plan to peer VNet1 to another virtual network named VNet2. VNet2 has an address space of 10.2.0.0/16.
You need to create the peering.
What should you do first?
Configure a service endpoint on VNet2.
Add a gateway subnet to VNet1.
Create a subnet on VNet1 and VNet2.
Modify the address space of VNet1.
Correct Answer:
Modify the address space of VNet1.
Explanation:
Virtual Network Peering Requirements:
Virtual network peering enables you to connect two or more virtual networks in Azure. The virtual networks can be in the same or different Azure regions.
One of the fundamental requirements for virtual network peering is that the virtual networks must have non-overlapping address spaces. If the address spaces overlap, Azure cannot establish a route between the networks, and peering will fail.
Current Situation:
VNet1 has an address space of 10.2.0.0/16.
VNet2 has an address space of 10.2.0.0/16.
The address spaces overlap, therefore peering is not possible at this time.
The Correct First Step:
The first step is to modify the address space of either VNet1 or VNet2 (or both) so that the address spaces no longer overlap. Because the available answer choice concerns VNet1, you modify the address space of VNet1 first, as sketched below.
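A PowerShell sketch of that first step and the subsequent peering. The replacement range 10.3.0.0/16 is an assumption; any range that does not overlap VNet2 works, and re-addressing is safe here because VNet1 has no connected devices.
# Re-address VNet1 so it no longer overlaps VNet2
$vnet1 = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "Production"
$vnet1.AddressSpace.AddressPrefixes.Clear()
$vnet1.AddressSpace.AddressPrefixes.Add("10.3.0.0/16")
$vnet1 = $vnet1 | Set-AzVirtualNetwork

# Create the peering from VNet1 to VNet2
$vnet2 = Get-AzVirtualNetwork -Name "VNet2" -ResourceGroupName "Production"
Add-AzVirtualNetworkPeering -Name "VNet1-to-VNet2" -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Note that a matching peering from VNet2 back to VNet1 is also required before the link reaches the Connected state.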
Why Other Options are Incorrect:
Configure a service endpoint on VNet2: Service endpoints restrict access to Azure PaaS resources (for example, storage accounts) to a virtual network; they are unrelated to virtual network peering.
Add a gateway subnet to VNet1: A gateway subnet is required for VPN or ExpressRoute connections and is not relevant to virtual network peering.
Create a subnet on VNet1 and VNet2: Although subnets are needed within a virtual network before you can deploy resources, you do not need to create subnets to establish peering, and doing so would not resolve the overlapping address spaces.