Timed Mode Set 2 – AZ-104 Azure Administrator (Dojo) Flashcards
You are planning to migrate your on-premises media files to Azure.
You need to create a storage account named TutorialsDojoMedia that must meet the following requirements:
It must be able to tolerate the failure of a single datacenter in an Azure Region. Replication must be synchronous.
How would you configure the storage account?
Select the correct answer from the drop-down list of options. Each correct selection is worth one point.
- Account Type
A. General Purpose V1
B. General Purpose V2
C. BlobStorage
- Replication
A. Locally Redundant Storage (LRS)
B. Zone Redundant Storage (ZRS)
C. Geo-Zone Redundant Storage (GZRS)
D. Read-access Geo-Zone Redundant Storage (RAGZRS)
- B. General Purpose V2
- B. Zone Redundant Storage (ZRS)
Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.
Azure Storage offers several types of storage accounts. Each type supports different features and has its own pricing model. Consider these differences before you create a storage account to determine the type of account that is best for your applications. The types of storage accounts are:
- General-purpose v2 accounts: Basic storage account type for blobs, files, queues, and tables. Recommended for most scenarios using Azure Storage. Supports LRS, GRS, RA-GRS, ZRS, GZRS, and RA-GZRS replication options.
- General-purpose v1 accounts: Legacy account type for blobs, files, queues, and tables. Use general-purpose v2 accounts instead when possible. Supports LRS, GRS, and RA-GRS replication options.
- BlockBlobStorage accounts: Storage accounts with premium performance characteristics for block blobs and append blobs. Recommended for scenarios with high transaction rates, smaller objects, or a need for consistently low storage latency. Supports LRS and ZRS replication options.
- FileStorage accounts: Files-only storage accounts with premium performance characteristics. Recommended for enterprise or high-performance scale applications. Supports LRS and ZRS replication options.
- BlobStorage accounts: Legacy blob-only storage accounts. Use general-purpose v2 accounts instead when possible. Supports LRS, GRS, and RA-GRS replication options.
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:
- Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
- Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region, for applications requiring high availability.
- Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS, then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
- Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS, then copies your data asynchronously to a single physical location in the secondary region.
Therefore, you have to use the General-purpose V2 as your account type as it supports Zone-redundant storage (ZRS). Microsoft recommends that you use the General-purpose v2 option for new storage accounts.
Meanwhile, to achieve the fault-tolerance requirement, you need to use Zone-redundant storage (ZRS), as it copies your data synchronously across three Azure availability zones in the primary region.
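For reference, such an account could be declared in an ARM template roughly as follows (a minimal sketch; the location and API version are illustrative, and note that storage account names must be all lowercase, so the name shown here is a lowercased form of TutorialsDojoMedia):

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2021-04-01",
  "name": "tutorialsdojomedia",
  "location": "southeastasia",
  "sku": {
    "name": "Standard_ZRS"
  },
  "kind": "StorageV2",
  "properties": {}
}
```

The combination of `"kind": "StorageV2"` and `"sku": "Standard_ZRS"` captures both drop-down answers: a general-purpose v2 account with zone-redundant replication.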
The options that say: General-purpose V1 and Blob Storage are incorrect because these account types do not support Zone-redundant storage (ZRS).
The option that says: Locally redundant storage (LRS) is incorrect because it only copies your data synchronously three times within a single physical location in the primary region.
The options that say: Geo-zone-redundant storage (GZRS) and Read-access geo-zone-redundant storage (RA-GZRS) are incorrect because these exceed the requirements. Take note that the requirement is that your storage account must tolerate a single data center failure.
References:
https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
Check out this Azure Storage Overview Cheat Sheet:
https://tutorialsdojo.com/azure-storage-overview/
You currently have an on-premises file server that contains a directory named E:\TutorialsDojoMedia.
There is a requirement to migrate the folder E:\TutorialsDojoMedia and its subdirectories to a public container in an Azure Storage Account named TutorialsDojoAccount.
Which of the following commands should you run?
A. azcopy copy E:\TutorialsDojoMedia https://TutorialsDojoAccount.blob.core.windows.net/public --recursive
B. azcopy copy https://TutorialsDojoAccount.blob.core.windows.net/public E:\TutorialsDojoMedia --recursive
C. azcopy copy E:\TutorialsDojoMedia https://TutorialsDojoAccount.blob.core.windows.net/public
D. az storage blob copy start-batch E:\TutorialsDojoMedia https://TutorialsDojoAccount.blob.core.windows.net/public
A. azcopy copy E:\TutorialsDojoMedia https://TutorialsDojoAccount.blob.core.windows.net/public --recursive
Explanation:
AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. You can also provide authorization credentials on your AzCopy command by using Azure Active Directory (AD) or by using a Shared Access Signature (SAS) token.
The Azure Storage platform is Microsoft’s cloud storage solution for modern data storage scenarios. Core storage services offer a massively scalable object store for data objects, disk storage for Azure virtual machines (VMs), a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store.
Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.
Blob storage is designed for:
– Serving images or documents directly to a browser.
– Storing files for distributed access.
– Streaming video and audio.
– Writing to log files.
– Storing data for backup and restore, disaster recovery, and archiving.
– Storing data for analysis by an on-premises or Azure-hosted service.
The correct syntax for uploading files is: azcopy copy [source] [destination] [flags]
For example:
azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt'
Hence, the correct answer is: azcopy copy E:\TutorialsDojoMedia https://TutorialsDojoAccount.blob.core.windows.net/public --recursive
The option that says: azcopy copy https://TutorialsDojoAccount.blob.core.windows.net/public E:\TutorialsDojoMedia --recursive is incorrect because this command downloads the contents of the storage account container to the local folder rather than uploading the folder. Remember that to upload a file to a storage account, you need to follow this syntax: azcopy copy [source] [destination] [flags].
The option that says: azcopy copy E:\TutorialsDojoMedia https://TutorialsDojoAccount.blob.core.windows.net/public is incorrect because the command will not include the subdirectories of the folder. You need to append the --recursive flag to upload the files in all subdirectories.
The option that says: az storage blob copy start-batch E:\TutorialsDojoMedia https://TutorialsDojoAccount.blob.core.windows.net/public is incorrect because this command copies multiple blobs from a source container to a destination container; it does not accept a local directory as the source.
References:
https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10
Check out this Azure Blob Storage Cheat Sheet:
https://tutorialsdojo.com/azure-blob-storage/
You have an Azure subscription that contains a virtual network named TDVNet1 with a subnet named TDSubnet1. Three virtual machines have been provisioned to TDSubnet1, each with a public IP address.
There are several applications that are hosted in your virtual machines that are accessible over port 443 (HTTPS) and 3389 (RDP) to users over the Internet.
You have extended your on-premises network to TDVNet1 using a site-to-site VPN connection.
Due to compliance requirements, you need to ensure that the Remote Desktop Protocol (RDP) connection is only accessible from the on-premises network. The solution must still allow internet users to access all the applications.
What should you do?
A. Detach the public IP address of the virtual machines
B. Change the address space of the local network gateway
C. Change the address space of TDSubnet1.
D. Add a rule to deny incoming RDP connection using network security group (NSG) which is linked to TDSubnet1
D. Add a rule to deny incoming RDP connection using network security group (NSG) which is linked to TDSubnet1
Explanation:
Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own data center but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.
A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network.
Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
Network security groups can be attached to subnets and/or network interfaces. Unless you have a specific reason to do otherwise, it is recommended that you associate a network security group with either a subnet or a network interface, but not both. Because rules in a network security group associated with a subnet can conflict with rules in one associated with a network interface, you can run into unexpected communication problems that require troubleshooting.
It’s important to note that security rules in an NSG associated with a subnet can affect connectivity between the virtual machines within it. For example, if a rule that denies all inbound and outbound traffic is added to the subnet’s NSG, the VMs inside the subnet will no longer be able to communicate with each other; another rule would have to be added specifically to allow that traffic.
Hence, the correct answer is: Add a rule to deny incoming RDP connection using network security group (NSG) which is linked to TDSubnet1.
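As a sketch, the deny rule could look like the following NSG security rule in an ARM template (the rule name and priority are illustrative). Because the source is the Internet service tag, RDP traffic arriving over the site-to-site VPN is not matched by this rule, and the HTTPS applications remain reachable as long as no rule blocks port 443:

```json
{
  "name": "Deny-RDP-From-Internet",
  "properties": {
    "priority": 200,
    "direction": "Inbound",
    "access": "Deny",
    "protocol": "Tcp",
    "sourceAddressPrefix": "Internet",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "3389"
  }
}
```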
The option that says: Detach the public IP address of the virtual machines is incorrect. Removing the public IP address of your virtual machines will also remove their ability to connect to the Internet. The requirement in this scenario states that the virtual machines must still be accessible by Internet users.
The option that says: Change the address space of the local network gateway is incorrect. Address space in the local network gateway refers to one or more IP address ranges (in CIDR notation) that define your on-premises address space. In this case, modifying the address space might also remove RDP access from your on-premises network to Azure.
The option that says: Change the address space of TDSubnet1 is incorrect because changing the address space of a subnet has no effect on restricting traffic going into it. If you change the address space of TDSubnet1, you also need to terminate or move the virtual machines that are associated with TDSubnet1. Instead, you can restrict traffic going into TDSubnet1 by associating it with a network security group.
References:
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview
https://docs.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview
Check out these Azure Networking Services Cheat Sheets:
https://tutorialsdojo.com/azure-cheat-sheets-networking-and-content-delivery/
Your organization plans to create a storage account in your Azure subscription.
Due to compliance requirements, you need to deploy your storage account according to the following conditions:
Your data must be replicated to another region to ensure redundancy. Ensure costs are minimized whenever possible.
Block public access to all blobs or containers.
Disable shared key access.
Allow HTTPS traffic only to the storage service.
Minimum TLS version: 1.1.
Cool tier must be the default access tier.
Solution: You deploy an ARM template with the following properties:
(Exhibit: az104-2-04)
Does this meet the goal?
A. Yes
B. No
B. No
Explanation:
The question states several requirements. Let’s review each condition and determine if the ARM template satisfies the question requirements.
- Your data must be replicated to another region to ensure redundancy. Ensure costs are minimized whenever possible.
– The SKU specified is Standard_ZRS. This will only provide redundancy when an availability zone fails, but if the entire region fails, then your data will not be available.
– This does not satisfy the requirement.
- Block public access to all blobs or containers.
– There is no declared allowBlobPublicAccess property, and its default value of true leaves public access permitted.
– This does not satisfy the requirement.
- Disable shared key access.
– There is no declared allowSharedKeyAccess property, and its default value of true leaves shared key authorization enabled.
– This does not satisfy the requirement.
- Allows HTTPS traffic only to storage service.
– There is no declared supportsHttpsTrafficOnly property; in API versions earlier than 2019-04-01 its default value is false, so HTTPS-only traffic is not guaranteed.
– This does not satisfy the requirement.
- Minimum TLS version – 1.1.
– There is no declared minimumTlsVersion property; the default minimum TLS version is 1.0.
– This does not satisfy the requirement.
- Cool tier must be the default access tier.
– There is no declared default access tier. If the access tier is not explicitly stated, the default access tier will be the Hot tier.
– This does not satisfy the requirement.
Hence, the correct answer is: No.
References:
https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
Check out this Azure Storage Overview Cheat Sheet:
https://tutorialsdojo.com/azure-storage-overview/
Locally Redundant Storage (LRS) vs. Zone-Redundant Storage (ZRS) vs. Geo-Redundant Storage (GRS):
https://tutorialsdojo.com/locally-redundant-storage-lrs-vs-zone-redundant-storage-zrs/
Your organization plans to create a storage account in your Azure subscription.
Due to compliance requirements, you need to deploy your storage account according to the following conditions:
Your data must be replicated to another region to ensure redundancy. Ensure costs are minimized whenever possible.
Block public access to all blobs or containers.
Disable shared key access.
Allow HTTPS traffic only to the storage service.
Minimum TLS version: 1.1.
Cool tier must be the default access tier.
Solution: You deploy an ARM template with the following properties:
(Exhibit: az104-2-05)
Does this meet the goal?
A. Yes
B. No
B. No
Explanation:
The question states several requirements. Let’s review each condition and determine if the ARM template satisfies the question requirements.
- Your data must be replicated to another region to ensure redundancy. Ensure costs are minimized whenever possible.
– The SKU specified is Standard_ZRS. This will only provide redundancy when an availability zone fails, but if the entire region fails, then your data will not be available.
– This does not satisfy the requirement.
- Block public access to all blobs or containers.
– The property “allowBlobPublicAccess” has a value of false, which disables public access to all blobs and containers.
– This satisfies the requirement.
- Disable shared key access.
– The property “allowSharedKeyAccess” has a value of false which disables any shared access key authorization methods.
– This satisfies the requirement.
- Allows HTTPS traffic only to storage service.
– The property “supportsHttpsTrafficOnly” has a value of True, which requires all traffic connecting to the storage account to use HTTPS only.
– This satisfies the requirement.
- Minimum TLS version – 1.1.
– The property “minimumTlsVersion” has a value of TLS1_1, which allows only requests that use TLS 1.1 or higher.
– This satisfies the requirement.
- Cool tier must be the default access tier.
– There is no declared default access tier. If the access tier is not explicitly stated, then the default access tier will be the Hot tier.
– This does not satisfy the requirement.
Hence, the correct answer is: No.
Your organization plans to create a storage account in your Azure subscription.
Due to compliance requirements, you need to deploy your storage account according to the following conditions:
Your data must be replicated to another region to ensure redundancy. Ensure costs are minimized whenever possible.
Block public access to all blobs or containers.
Disable shared key access.
Allow HTTPS traffic only to the storage service.
Minimum TLS version: 1.1.
Cool tier must be the default access tier.
Solution: You deploy an ARM template with the following properties:
(Exhibit: az104-2-06)
Does this meet the goal?
A. Yes
B. No
A. Yes
Explanation:
The question states several requirements. Let’s review each condition and determine if the ARM template satisfies the question requirements.
- Your data must be replicated to another region to ensure redundancy. Ensure costs are minimized whenever possible.
– We must implement a redundancy option that copies your data to a secondary region. We could opt for geo-zone-redundant storage (GZRS), which provides the highest durability and availability, but the requirements also state that costs must be minimized whenever possible. The least expensive option that satisfies this requirement is geo-redundant storage (GRS), i.e. Standard_GRS.
– This satisfies the requirement.
- Block public access to all blobs or containers.
– The property “allowBlobPublicAccess” has a value of false, which disables public access to all blobs and containers.
– This satisfies the requirement.
- Disable shared key access.
– The property “allowSharedKeyAccess” has a value of false which disables any shared access key authorization methods.
– This satisfies the requirement.
- Allows HTTPS traffic only to storage service.
– The property “supportsHttpsTrafficOnly” has a value of True, which requires all traffic connecting to the storage account to use HTTPS only.
– This satisfies the requirement.
- Minimum TLS version – 1.1.
– The property “minimumTlsVersion” has a value of TLS1_1, which allows only requests that use TLS 1.1 or higher.
– This satisfies the requirement.
- Cool tier must be the default access tier.
– The property “accessTier” has a value of Cool, which sets the storage account’s default access tier to the Cool tier.
– This satisfies the requirement.
Hence, the correct answer is: Yes.
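Putting the checklist together, the compliant template’s storage account resource would contain properties along these lines (a sketch reconstructed from the explanation above; the name, location, and API version are illustrative):

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2021-04-01",
  "name": "tdstorageaccount",
  "location": "southeastasia",
  "sku": {
    "name": "Standard_GRS"
  },
  "kind": "StorageV2",
  "properties": {
    "allowBlobPublicAccess": false,
    "allowSharedKeyAccess": false,
    "supportsHttpsTrafficOnly": true,
    "minimumTlsVersion": "TLS1_1",
    "accessTier": "Cool"
  }
}
```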
References:
https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
Check out this Azure Storage Overview Cheat Sheet:
https://tutorialsdojo.com/azure-storage-overview/
Locally Redundant Storage (LRS) vs. Zone-Redundant Storage (ZRS) vs. Geo-Redundant Storage (GRS):
https://tutorialsdojo.com/locally-redundant-storage-lrs-vs-zone-redundant-storage-zrs/
You have an Azure subscription that contains several virtual machines deployed to a virtual network named TDVnet1.
You created an Azure storage account named tdstorageaccount1 as shown in the following exhibit:
Select the correct answer from the drop-down list of options. Each correct selection is worth one point.
- Your virtual machines deployed to the 20.2.1.0/24 subnet will have access to the file shares in tdstorageaccount1.
A. Always
B. During Backup
C. Never
- The unmanaged disks of the virtual machines can be backed up to tdstorageaccount1 by using Azure Backup.
A. Always
B. During Backup
C. Never
- C. Never
- C. Never
Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.
Virtual Network service endpoint allows administrators to create network rules that allow traffic only from selected VNets and subnets, creating a secure network boundary for their data. Service endpoints extend your VNet private address space and identity to the Azure services, over a direct connection. This allows you to secure your critical service resources to only your virtual networks, providing private connectivity to these resources and fully removing Internet access. You need to explicitly specify which subnets can access your storage account.
Azure Backup can access your storage account in the same subscription for running backups and restores of unmanaged disks in virtual machines. To enable this, you need to tick the “Allow trusted Microsoft Services to access this storage account” box.
Take note that in the screenshot presented in the scenario, the following observations can be made:
- There are two subnets inside TDVnet1: 20.2.0.0/24 and 20.2.1.0/24. The only subnet included in the list of subnets allowed to access tdstorageaccount1 is 20.2.0.0/24. The virtual machines deployed to the subnet 20.2.1.0/24 will never have access to tdstorageaccount1.
- The “Allow trusted Microsoft Services to access this storage account” option is not enabled. This means that Azure Backup will never be able to back up the unmanaged disks of the virtual machines to tdstorageaccount1.
Therefore, your virtual machines in 20.2.1.0/24 will Never have access to the file shares in tdstorageaccount1.
Likewise, Azure Backup will Never be able to back up the unmanaged disks of the virtual machines.
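In ARM template terms, the configuration shown in the exhibit corresponds to the storage account’s networkAcls block; ticking “Allow trusted Microsoft services to access this storage account” maps to the bypass setting (a sketch; the subnet name is hypothetical):

```json
"networkAcls": {
  "bypass": "None",
  "defaultAction": "Deny",
  "virtualNetworkRules": [
    {
      "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'TDVnet1', 'subnet1')]"
    }
  ]
}
```

Here bypass is None because the trusted-services option is unticked in the scenario; setting it to AzureServices would let Azure Backup through.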
References:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview
https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security
Check out this Azure Storage Overview Cheat Sheet:
https://tutorialsdojo.com/azure-storage-overview/
You have an Azure blob storage account in your Azure subscription named TD1, located in the Southeast Asia region.
Due to compliance requirements, data uploaded to TD1 must be duplicated to the Australia Central region for redundancy. The solution must minimize administrative effort.
What should you do?
A. Configure object replication.
B. Configure firewalls and virtual networks.
C. Configure versioning.
D. Configure Geo-redundant storage (GRS).
A. Configure object replication.
Explanation:
Object replication asynchronously copies block blobs between a source storage account and a destination account. Some scenarios supported by object replication include:
- Minimizing latency. Object replication can reduce latency for read requests by enabling clients to consume data from a region that is in closer physical proximity.
- Increasing efficiency for compute workloads. With object replication, compute workloads can process the same sets of block blobs in different regions.
- Optimizing data distribution. You can process or analyze data in a single location and then replicate just the results to additional regions.
- Optimizing costs. After your data has been replicated, you can reduce costs by moving it to the archive tier using life cycle management policies.
The requirement states that data uploaded to TD1 must be duplicated to Australia Central due to compliance requirements. Since the regional pair of Southeast Asia is East Asia, we cannot use geo-redundant storage (GRS): the secondary region is determined by the regional pair and cannot be chosen. Instead, we can use object replication to copy data from TD1 to a storage account in the Australia Central region.
Object replication is supported for general-purpose v2 storage accounts and premium block blob accounts. Both the source and destination accounts must be either general-purpose v2 or premium block blob accounts. Object replication supports block blobs only; append blobs and page blobs aren’t supported.
Hence, the correct answer is: Configure object replication.
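An object replication policy is defined on the destination account and names the source and destination accounts along with the container pairs to replicate. A sketch of the policy resource follows (the destination account and container names are hypothetical, and the exact field shapes vary by API version):

```json
{
  "type": "Microsoft.Storage/storageAccounts/objectReplicationPolicies",
  "apiVersion": "2021-04-01",
  "name": "td1replica/default",
  "properties": {
    "sourceAccount": "td1",
    "destinationAccount": "td1replica",
    "rules": [
      {
        "sourceContainer": "media",
        "destinationContainer": "media"
      }
    ]
  }
}
```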
The option that says: Configure firewalls and virtual networks is incorrect because this feature only allows users of Azure storage accounts to block or allow specific traffic to your storage account. It does not have any capability to replicate data to another region.
The option that says: Configure versioning is incorrect because versioning only maintains previous versions of an object within a single storage account. Note, however, that blob versioning must be enabled on both the source and destination accounts in order to use object replication.
The option that says: Configure Geo-redundant storage (GRS) is incorrect because the data will automatically be stored in East Asia since it is the regional pair of Southeast Asia region. You don’t get to choose the secondary region when enabling geo-redundant storage. Instead, use object replication.
References:
https://learn.microsoft.com/en-us/azure/storage/blobs/object-replication-overview
https://learn.microsoft.com/en-us/azure/reliability/cross-region-replication-azure
Check out this Azure Storage Overview Cheat Sheet:
https://tutorialsdojo.com/azure-storage-overview/
You have an Azure subscription that contains a Windows virtual machine named TD1 with the following configurations:
Virtual network: TDVnet1
Public IP Address: 20.10.0.1
Private IP Address: 48.156.83.51
Location: Southeast Asia
You deploy the following Azure DNS zones:
You need to determine which DNS zones can be linked to TDVnet1 and to which DNS zones TD1 can be automatically registered.
Select the correct answer from the drop-down list of options. Each correct selection is worth one point.
- TDVnet1 can be linked to the following DNS zones:
A. Manila.com and Dagupan.com only
B. Davao.com and Palawan.com
C. Manila.com and Davao.com
D. Dagupan.com and Palawan.com
- TD1 can be automatically registered to the following DNS zones:
A. Manila.com and Dagupan.com only
B. Davao.com and Palawan.com
C. Manila.com and Davao.com
D. Dagupan.com and Palawan.com
- C. Manila.com and Davao.com
- C. Manila.com and Davao.com
Explanation:
Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today.
Using custom domain names helps you to tailor your virtual network architecture to best suit your organization’s needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.
Once you create a private DNS zone in Azure, it is not immediately accessible from any virtual network. You must link it to a virtual network before a VM hosted in that network can access the private DNS zone.
When you create a link between a private DNS zone and a virtual network, you have an option to turn on autoregistration of DNS records for virtual machines. If you choose this option, the virtual network becomes a registration virtual network for the private DNS zone.
– DNS records are automatically created for the virtual machines that you deploy in the network, as well as for virtual machines that you have already deployed in it.
– One private DNS zone can have multiple registration virtual networks; however, every virtual network can have exactly one registration zone associated with it.
When you create a virtual network link under a private DNS zone and choose not to enable DNS record autoregistration, the virtual network is treated as a resolution only virtual network.
– DNS records for virtual machines deployed in such networks will not be automatically created in the linked private DNS zone. However, the virtual machines deployed in such a network can successfully query the DNS records from the private DNS zone.
– These records may be manually created by you or may be populated from other virtual networks that have been linked as registration networks with the private DNS zone.
– One private DNS zone can have multiple resolution virtual networks, and a virtual network can have multiple resolution zones associated with it.
Take note that virtual network links and the autoregistration feature are available for private DNS zones only.
Therefore, only Manila.com and Davao.com can be linked to TDVnet1 since they are both private DNS zones.
Likewise, TD1 can be automatically registered only to Manila.com and Davao.com, because both are private DNS zones, provided that you enable the autoregistration feature.
The following options are incorrect because Dagupan.com and Palawan.com are public DNS zones. Public DNS zones do not support virtual network links or the autoregistration feature.
– Manila.com and Dagupan.com only
– Davao.com and Palawan.com
– Dagupan.com and Palawan.com only
References:
https://docs.microsoft.com/en-us/azure/dns/private-dns-overview
https://docs.microsoft.com/en-us/azure/dns/private-dns-virtual-network-links
https://docs.microsoft.com/en-us/azure/dns/private-dns-autoregistration
Check out this Azure DNS cheat sheet:
https://tutorialsdojo.com/azure-dns/
Your company has an Azure subscription that contains virtual networks named TDVnet1, TDVnet2, and TDVnet3.
You peer the virtual networks as shown in the following exhibit.
MultiVnetPeering
You need to identify whether packets can be routed between the virtual networks.
What should you identify?
Select the correct answer from the drop-down list of options. Each correct selection is worth one point.
- TDVnet1 packets can be routed to:
A. TDVnet2 only
B. TDVnet3 only
C. TDVnet2 and TDVnet3 only
- TDVnet3 packets can be routed to:
A. TDVnet2 only
B. TDVnet3 only
C. TDVnet2 and TDVnet3 only
D. TDVnet1 only
- C. TDVnet2 and TDVnet3 only
- D. TDVnet1 only
Explanation:
Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own datacenter but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.
Virtual network peering enables you to connect two or more Virtual Networks in Azure seamlessly. The virtual networks appear as one for connectivity purposes. The traffic between virtual machines in peered virtual networks uses the Microsoft backbone infrastructure. Like traffic between virtual machines in the same network, traffic is routed only through Microsoft’s private network.
In the image above, TDVnet1 is the hub while TDVnet2 and TDVnet3 are the spokes. TDVnet1 can route packets to TDVnet2 and TDVnet3 since it has its own peering with each of them, while TDVnet3 can route packets to TDVnet1 only.
Take note that virtual network peerings are non-transitive: virtual networks that are directly peered can communicate with each other, but they can't communicate with the peers of their peers. To route packets between TDVnet2 and TDVnet3, you would need to create a peering connection between them.
Therefore, TDVnet1 packets can be routed to TDVnet2 and TDVnet3 because TDVnet1 is directly peered with each of them.
Conversely, TDVnet3 packets can be routed to TDVnet1 only because TDVnet3's only peering connection is with TDVnet1.
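The non-transitive behavior can be sketched with a tiny model of the peerings in this scenario (an illustration, not an Azure API):

```python
# Hub-and-spoke peerings from the exhibit: TDVnet1 peers with TDVnet2
# and with TDVnet3. Peering is non-transitive, so a VNet can route only
# to networks it is directly peered with.
peerings = {("TDVnet1", "TDVnet2"), ("TDVnet1", "TDVnet3")}

def can_route(src: str, dst: str) -> bool:
    """True only when src and dst are directly peered."""
    return (src, dst) in peerings or (dst, src) in peerings
```

With this model, `can_route("TDVnet1", "TDVnet3")` holds, but `can_route("TDVnet2", "TDVnet3")` does not, matching the answer above.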
References:
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview
Check out this Azure Virtual Network Cheat Sheet:
https://tutorialsdojo.com/azure-virtual-network-vnet/
You have an Azure subscription that contains a storage account named tdstorageaccount1.
You have 14 TB of files you need to migrate to tdstorageaccount1 using Azure Import/Export service.
You need to identify the two files you must create before preparing the drives and generating the journal file.
Which two files should you create?
A. ARM template
B. Dataset CSV File
C. Driveset CSV file
D. PowerShell PS1 file
E. WAImportExport file
B. Dataset CSV File
C. Driveset CSV file
Explanation:
Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.
Consider using Azure Import/Export service when uploading or downloading data over the network is too slow or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:
– Data migration to the cloud: Move large amounts of data to Azure quickly and cost-effectively.
– Content distribution: Quickly send data to your customer sites.
– Backup: Take backups of your on-premises data to store in Azure Storage.
– Data recovery: Recover large amounts of data stored in storage and have it delivered to your on-premises location.
The first step of an import job is the preparation of the drives. This is where you need to generate a journal file. The following files are needed before you create a journal file:
– The Dataset CSV File
– The value of the /dataset flag is a CSV file that contains a list of directories and/or a list of files to be copied to the target drives. The first step in creating an import job is to determine which directories and files you are going to import.
– This can be a list of directories, a list of unique files, or a combination of those two. When a directory is included, all files in the directory and its subdirectories will be part of the import job.
– The Driveset CSV file
– The value of the /InitialDriveSet or /AdditionalDriveSet flag is a CSV file that contains the list of disks to which the drive letters are mapped so that the tool can correctly pick the list of disks to be prepared.
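Illustrative contents of the two files (the paths, container name, and drive letter below are placeholders, not values from the scenario):

dataset.csv — what to copy and where it lands in Azure:

```
BasePath,DstBlobPathOrPrefix,BlobType,Disposition,MetadataFile,PropertiesFile
"F:\MediaFiles\","mediacontainer/",BlockBlob,rename,"None",None
```

driveset.csv — which disks the WAImportExport tool should prepare:

```
DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey
X,Format,SilentMode,Encrypt,
```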
Hence, the correct answers are:
– Dataset CSV File
– Driveset CSV file
The following options are incorrect because an Azure Import/Export journal file only requires a dataset CSV file and a driveset CSV file during the preparation of your drives.
– ARM template
– PowerShell PS1 file
– WAImportExport file
References:
https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-service
https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-data-to-files
Check out this Azure Storage Overview Cheat Sheet:
https://tutorialsdojo.com/azure-storage-overview/
Your company has an Azure subscription that has the following resources shown in the following table:
az104-2-12 image
You create an Azure file share named TDShare1 and an Azure Blob container named TDBlob1 using TDAccount1.
What resources can you backup using TDBackup1 and TDBackup2?
Select the correct answer from the drop-down list of options. Each correct selection is worth one point.
- TDBackup1 can backup:
A. TDShare1 only
B. TD1 only
C. TDShare1, TD1, and TDBlob1 only
D. TDShare1, TD1, TDSQL1, and TDBlob1
- TDBackup2 can backup:
A. TDShare1 only
B. TD1 only
C. TDShare1, TD1, and TDBlob1 only
D. TDShare1, TD1, TDSQL1, and TDBlob1
- A. TDShare1 only
- B. TD1 only
Explanation:
A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.
Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Backups are stored in a Recovery Services vault with built-in management of recovery points. Configuration and scaling are simple, backups are optimized, and you can easily restore as needed.
As part of the backup process, a snapshot is taken, and the data is transferred to the Recovery Services vault with no impact on production workloads. The snapshot provides different levels of consistency, as described here.
Azure Backup also has specialized offerings for database workloads like SQL Server running in virtual machines and SAP HANA that is workload-aware, offers 15 minute RPO (recovery point objective), and allows backup and restore of individual databases.
In this scenario, there are two Recovery Services vaults located in different regions. Take note that you cannot back up resources that are located in another region. It is also not possible to back up blob containers or Azure SQL databases using Azure Backup.
Therefore, TDBackup1 can back up TDShare1 only, because TDShare1 resides within the same region as TDBackup1 and Azure file shares can be backed up using Azure Backup. As shown in the table above, TDAccount1 (the storage account) is created in Southeast Asia.
Conversely, TDBackup2 can back up TD1 only, because they are in the same region and Azure virtual machines can be backed up using Azure Backup.
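The eligibility rules above can be reduced to a toy check (the region names and type labels are illustrative, not Azure API values):

```python
# Toy restatement of the rules in this scenario: the vault and the
# resource must share a region, and only VMs and Azure file shares are
# supported workloads (not blob containers or Azure SQL databases).
SUPPORTED_TYPES = {"vm", "fileshare"}

def can_backup(vault_region: str, resource_region: str, resource_type: str) -> bool:
    return vault_region == resource_region and resource_type in SUPPORTED_TYPES
```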
References:
https://docs.microsoft.com/en-us/azure/backup/backup-overview
https://docs.microsoft.com/en-us/azure/backup/backup-azure-arm-vms-prepare
https://docs.microsoft.com/en-us/azure/backup/backup-afs
Check out this Azure Virtual Machines Cheat Sheet:
https://tutorialsdojo.com/azure-virtual-machines/
You have an application that is hosted on an Azure App service named TDApp1.
You have a custom domain named tutorialsdojo.com that needs to be added to TDApp1.
What should you do first?
A. Modify the app settings
B. Add a DNS record
C. Create a Private Endpoint
D. Configure Vnet Integration
B. Add a DNS record
Explanation:
Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments. App Service not only adds the power of Microsoft Azure to your application, such as security, load balancing, autoscaling, and automated management. You can also take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, other sources, package management, staging environments, custom domain, and TLS/SSL certificates.
You can configure Azure DNS to host a custom domain for your web apps. For example, you can create an Azure web app and have your users access it using either www.tutorialsdojo.com or tutorialsdojo.com as a fully qualified domain name (FQDN).
To do this, you have to create three records:
– A root “A” record pointing to the inbound IP address of your web app.
– A root “TXT” record for domain verification.
– A “CNAME” record for any subdomain (such as www) that maps to the web app's default hostname.
Keep in mind that if you create an A record for a web app in Azure, the A record must be manually updated if the underlying IP address for the web app changes.
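A sketch of those records in zone-file form (the IP address and verification ID are placeholders, and the www target assumes the app's default *.azurewebsites.net hostname):

```
; tutorialsdojo.com zone (illustrative)
@      IN  A      <inbound IP address of TDApp1>
asuid  IN  TXT    "<custom domain verification ID>"
www    IN  CNAME  tdapp1.azurewebsites.net.
```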
Hence, the correct answer is: Add a DNS record.
The option that says: Modify the app settings is incorrect because these are simply configurations passed as environment variables to the application code.
The option that says: Create a Private Endpoint is incorrect because this only allows clients located in your private network to securely access the app over a Private Link which helps you eliminate exposure from the public Internet.
The option that says: Configure Vnet integration is incorrect because this is just a feature that enables your apps to access resources in or through a VNet. This type of integration doesn't enable your apps to be accessed privately; you use it when your app needs to reach resources inside a virtual network.
References:
https://docs.microsoft.com/en-us/azure/app-service/overview
https://docs.microsoft.com/en-us/Azure/app-service/app-service-web-tutorial-custom-domain
Check out this Azure App Service Cheat Sheet:
https://tutorialsdojo.com/azure-app-service/
Your company has an Azure subscription that contains the following resources:
az104-2-14 scenario image
You have an Azure Recovery Services vault named TDBackup1 that backs up TD1, TD2, and TD3 daily without an Azure Backup Agent.
Select the correct answer from the drop-down list of options. Each correct selection is worth one point.
- You can execute a file recovery operation from TD2 to:
A. TD1 only
B. TD2 Only
C. TD3 Only
D. TD1, TD2, and TD3
- You can restore TD3 to:
A. TD1 only
B. TD2 Only
C. TD3 Only
D. TD1, TD2, and TD3
- B. TD2 Only
- C. TD3 Only
Explanation:
Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Backups are stored in a Recovery Services vault with built-in management of recovery points. Configuration and scaling are simple, backups are optimized, and you can easily restore as needed.
To recover a specific file, you must specify the recovery point of your backup and download a script that will mount the disks from the selected recovery point. After the script is successfully downloaded, make sure you have the right machine to execute this script.
When recovering files, you can’t restore files to a previous or future operating system version. For example, you can’t restore a file from a Windows Server 2016 VM to Windows Server 2012 or a Windows 8 computer. You can restore files from a VM to the same server operating system, or to the compatible client operating system.
You can restore a virtual machine with the following options:
– Create a new VM
– Restore Disk
– Replace existing disk (OLR)
As one of the restore options, you can replace an existing VM disk with the selected restore point. The current VM must exist. If it’s been deleted, this option can’t be used. Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify.
Existing disks connected to the VM are replaced with the selected restore point. The snapshot is copied to the vault and retained in accordance with the retention policy.
After the Replace Disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren’t needed.
Therefore, you can perform file recovery to TD2 only because the operating systems of TD1 and TD3 are not compatible with TD2. You need to ensure that the machine you are recovering the file to meets the requirements before executing the script.
Conversely, you can restore TD3 to TD3 only because you cannot restore the disk of TD3 to TD1 or TD2. You can only restore a virtual machine by creating a new VM, restoring a disk, or replacing the existing VM disk.
References:
https://docs.microsoft.com/en-us/azure/backup/backup-overview
https://docs.microsoft.com/en-us/azure/backup/backup-azure-arm-restore-vms
https://docs.microsoft.com/en-us/azure/backup/backup-azure-restore-files-from-vm
Check out this Azure Virtual Machines Cheat Sheet:
https://tutorialsdojo.com/azure-virtual-machines/
You have an Azure subscription that contains a sync group named TDSync1 which has an associated cloud endpoint called TDCloud1. The file tutorials.docx is included in the cloud endpoint.
You have the following on-premises Windows Server 2019 file servers that you want to synchronize to Azure:
az104-2-15 scenario image
You first registered FileServer1 as a server endpoint to TDSync1 and then registered FileServer2 as a server endpoint to TDSync1.
For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.
Statements:
– tutorials.docx on FileServer1 will be overwritten by tutorials.docx from TDCloud1
– tutorials.docx on TDCloud1 will be overwritten by tutorials.docx from FileServer1
– dojo.mp4 will be synced to FileServer1
- No
- No
- Yes
Explanation:
Azure Files enables you to set up highly available network file shares that can be accessed by using the standard Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and write access. You can also read the files using the REST interface or the storage client libraries.
Remember that whenever you make changes to any cloud endpoint or server endpoint in the sync group, it will be synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, changes first need to be discovered by an Azure File Sync change detection job. A change detection job is only initiated for a cloud endpoint once every 24 hours.
Take note that Azure does not overwrite any files in your sync group. Instead, when a file is changed on two endpoints at the same time, both versions are kept. The most recently written change keeps the original file name.
The older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to the filename. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the endpoint name follows this taxonomy:
– <FileNameWithoutExtension>-<endpointName>[-#].<ext>
– For example, tutorials-FileServer1.docx
Azure File Sync supports 100 conflict files per file. Once the maximum number of conflict files has been reached, the file will fail to sync until the number of conflict files is less than 100.
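The taxonomy above can be illustrated with a small hypothetical helper (the naming scheme is Azure File Sync's, but this function is only a sketch, not part of any SDK):

```python
import os

# Builds a conflict filename per the taxonomy described above:
# <FileNameWithoutExtension>-<endpointName>[-#].<ext>
def conflict_name(filename: str, endpoint: str, conflict_no: int = 0) -> str:
    base, ext = os.path.splitext(filename)
    suffix = f"-{conflict_no}" if conflict_no else ""
    return f"{base}-{endpoint}{suffix}{ext}"
```

For example, `conflict_name("tutorials.docx", "FileServer1")` yields the tutorials-FileServer1.docx name shown above.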
Hence, this statement is correct: dojo.mp4 will be synced to FileServer1.
The following statements are incorrect because Azure File Sync will not overwrite any files in your endpoints. It will simply append a conflict number to the filename of the older file, while the most recent change will retain the original file name.
– tutorials.docx on FileServer1 will be overwritten by tutorials.docx from TDCloud1.
– tutorials.docx on TDCloud1 will be overwritten by tutorials.docx from FileServer1.
References:
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction
https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide
Check out this Azure Files Cheat Sheet:
https://tutorialsdojo.com/azure-file-storage/
Your company has an Azure subscription named TDSubscription1. You have a line-of-business (LOB) application that is hosted on several virtual machines in an Azure virtual machine scale set named TDSet1.
You have an on-premises network that has a site-to-site VPN connection to your Azure environment. Your users access the application only from this network.
You need to recommend a solution that will load balance the traffic to your virtual machines coming from the on-premises network.
What are the two possible Azure services that you can implement to satisfy the given requirements?
A. Traffic Manager
B. Public Load Balancer
C. Azure Front Door
D. Azure Application Gateway
E. Internal Load Balancer
D. Azure Application Gateway
E. Internal Load Balancer
Explanation:
The term load balancing refers to the distribution of workloads across multiple computing resources. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overloading any single resource. It can also improve availability by sharing a workload across redundant computing resources.
Azure Application Gateway can be configured with an Internet-facing VIP or with an internal endpoint that is not exposed to the Internet, also known as an internal load balancer (ILB) endpoint. Configuring the gateway with an ILB is useful for internal line-of-business applications that are not exposed to the Internet.
It’s also useful for services and tiers within a multi-tier application that sits in a security boundary that is not exposed to the Internet but still requires round-robin load distribution, session stickiness, or Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), termination.
An internal load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that is load balanced. Front-end IP addresses and virtual networks are never directly exposed to an Internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources.
Take note that in this scenario, the line-of-business application is only accessed by your on-premises network that is connected by a site-to-site VPN connection. This means that access to your virtual machines must be intranet and must use a load balancing service that supports internal load balancing.
Hence, the correct answers are:
– Azure Application Gateway
– Internal Load Balancer
Traffic Manager is incorrect because it only allows you to distribute traffic to your public-facing applications across the global Azure regions. Traffic Manager also provides your public endpoints with high availability and quick responsiveness. You cannot use it to load balance internal traffic to your virtual machines.
Public Load Balancer is incorrect because this simply balances the incoming Internet traffic to your virtual machines. Take note that you need to implement an internal load balancing solution since the on-premises network is connected by a site-to-site VPN connection, which is not directly accessible on the public Internet.
Azure Front Door is incorrect because it just enables you to define, manage, and monitor the global routing for your web traffic. Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications.
References:
https://docs.microsoft.com/en-us/azure/load-balancer/components
https://docs.microsoft.com/en-us/azure/application-gateway/overview
Check out these Azure Networking Services Cheat Sheets:
https://tutorialsdojo.com/azure-load-balancer/
https://tutorialsdojo.com/azure-application-gateway/
Azure Load Balancer vs. Application Gateway vs. Traffic Manager vs. Front Door:
https://tutorialsdojo.com/azure-load-balancer-vs-app-gateway-vs-traffic-manager/
You purchased an Azure AD Premium P2 license.
You plan to add a local administrator to manage all the computers and devices that will join your domain.
What do you need to configure in Azure Active Directory to satisfy this requirement?
A. Configure device settings.
B. Require users to re-register for MFA.
C. Enable app registrations in user settings.
D. Configure group naming policy.
A. Configure device settings.
Explanation:
Azure Active Directory (Azure AD) is Microsoft’s cloud-based identity and access management service, which helps your employees sign in and access resources from external resources such as Microsoft 365, the Azure portal, and thousands of other SaaS applications. It can also be used on internal resources, such as apps on your corporate network and intranet, along with any cloud apps developed by your own organization.
To add local administrators that will manage joined devices in Azure AD, you must configure the device settings. You can select the users that are granted local administrator rights on a device. These users will be added to the Device Administrators role in Azure AD. By default, global administrators in Azure AD and device owners are granted local administrator rights. This configuration is a premium edition capability, available through products such as Azure AD Premium or the Enterprise Mobility Suite (EMS).
Hence, the correct answer is: Configure device settings.
The option that says: Require users to re-register for MFA is incorrect because this approach is mainly used for troubleshooting MFA end-user issues. Also, this option won’t help you add local administrators to manage joined devices.
The option that says: Enable app registrations in user settings is incorrect. If this option is enabled, non-admin users can register custom-developed applications within the directory. This option is not needed. Take note that the requirement in the scenario is to add local administrators and not to register applications.
The option that says: Configure group naming policy is incorrect. This option only allows you to add a specific prefix or suffix to the group name and alias of any Microsoft 365 group created by users. Group naming policy is not needed in the scenario because you only need to add local administrators to your domain.
References:
https://docs.microsoft.com/en-us/azure/active-directory/devices/device-management-azure-portal
https://docs.microsoft.com/en-us/azure/active-directory/devices/assign-local-admin#manage-the-device-administrator-role
Check out this Azure Active Directory Cheat Sheet:
https://tutorialsdojo.com/azure-active-directory-azure-ad/
You deployed an Ubuntu Server VM named TDAzureVM1.
You created a template based on the configuration of the TDAzureVM1 virtual machine and uploaded it to the Azure Resource Manager (ARM) Library.
You need to provision a new virtual machine named TDAzureVM2 using the same template in ARM.
What can be configured in this custom deployment process?
A. Operating system
B. Availability options
C. Size of the virtual machine
D. Resource group
D. Resource group
Explanation:
Azure Resource Manager (ARM) templates are primarily used to implement infrastructure as code for your Azure solutions. The template is a JavaScript Object Notation (JSON) file that defines your project’s infrastructure and configuration. The template uses declarative syntax, which lets you state what you intend to deploy without writing the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
You can export the template of an existing virtual machine and save it in Azure Resource Manager. The exported template is composed of template and parameters JSON files. In the custom deployment process, the only options that you can configure are Subscription, Resource Group, and Location.
Hence, the correct answer is: Resource group.
The following options are incorrect because you can only change the subscription, resource group, and location in the custom deployment process. Remember that the operating system, availability options, and size of VM are already configured in the ARM template.
– Operating system
– Availability options
– Size of the virtual machine
References:
https://docs.microsoft.com/en-us/azure/virtual-machines/linux/create-ssh-secured-vm-from-template
https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview
Check out this Azure Global Infrastructure Cheat Sheet:
https://tutorialsdojo.com/azure-global-infrastructure/
You created a resource group and added the resources shown in the table below.
az104-2-19 image
The backup of VM1 is stored in RSV1.
You plan to delete the resource group after 30 days.
What should you do first to delete the resource group?
A. Delete VM1.
B. Stop the backup of VM1.
C. Delete storage1.
D. Restart VM1.
B. Stop the backup of VM1.
Explanation:
Azure Backup is a cost-effective, secure, one-click backup solution that’s scalable based on your backup storage needs. The centralized management interface makes it easy to define backup policies and protect a wide range of enterprise workloads, including Azure Virtual Machines, SQL and SAP databases, and Azure file shares.
You can’t delete a Recovery Services vault with any of the following dependencies:
– You can’t delete a vault that contains protected data sources.
– You can’t delete a vault that contains backup data. Once backup data is deleted, it will go into the soft-deleted state.
– You can’t delete a vault that contains backup data in the soft-deleted state.
– You can’t delete a vault that has registered storage accounts.
Based on the given scenario, the backup is still running. If you try to delete the vault without removing the dependencies, you will receive an error message “Failed to delete resource group.” To resolve this problem, you must stop the backup first, then disable soft delete and delete the resource group.
Hence, the correct answer is: Stop the backup of VM1.
The following options are incorrect because you need to stop the backup first before you can delete the resource group.
– Delete VM1.
– Delete storage1.
– Restart VM1.
References:
https://docs.microsoft.com/en-us/azure/backup/backup-azure-delete-vault
https://docs.microsoft.com/en-us/azure/backup/backup-overview
Check out this Azure Virtual Machines Cheat Sheet:
https://tutorialsdojo.com/azure-virtual-machines/
Your company has an Azure Log Analytics workspace in their Azure subscription.
You are instructed to find the error in the table named EventLogs.
Which log query should you run in the workspace?
A. search in (EventLogs) “error”
B. EventLogs | take 10
C. search “error”
D. EventLogs | sort by TimeGenerated desc
A. search in (EventLogs) “error”
Explanation:
Azure Monitor is a service in Azure that provides performance and availability monitoring for applications and services in Azure, other cloud environments, or on-premises. Azure Monitor collects data from multiple sources into a common data platform where it can be analyzed for trends and anomalies. Rich features in Azure Monitor assist you in quickly identifying and responding to critical situations that may affect your application.
To retrieve data in the Log Analytics workspace, you need to use a Kusto Query Language (KQL). Remember that there are different types of log queries in Azure Monitor. Based on the given question, you only need to find the “error” in the table named “EventLogs.”
With search queries, you can find a specific value in your table. This query searches the “TableName” table for records that contain the word “value”:
search in (TableName) “value”
If you omit the “in (TableName)” part and just run search “value”, the search goes over all tables, which takes longer and is less efficient.
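Side by side, the two forms look like this in the Log Analytics query editor (a minimal sketch):

```kusto
// Scoped: scans only the EventLogs table
search in (EventLogs) "error"

// Unscoped: scans every table in the workspace — slower and less efficient
search "error"
```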
Hence, the correct answer is: search in (EventLogs) “error”.
The option that says: EventLogs | take 10 is incorrect because this query only returns 10 records from the EventLogs table. Remember that the requirement in the scenario is to find the records containing the word “error” in the table named EventLogs.
The option that says: search “error” is incorrect because this query would search “error” in all the tables. Take note that you only need to query the table EventLogs.
The option that says: EventLogs | sort by TimeGenerated desc is incorrect because this query will only sort the entire EventLogs table by the TimeGenerated column.
References:
https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/get-started-queries
https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/log-analytics-tutorial
Check out this Azure Monitor Cheat Sheet:
https://tutorialsdojo.com/azure-monitor/