Review Mode Set 4 Dojo Flashcards
You have an Azure subscription that contains hundreds of network resources.
You need to recommend a solution that will allow you to monitor resources in one centralized console for network monitoring.
What solution should you recommend?
A. Azure Monitor Network Insights
B. Azure Virtual Network
C. Azure Traffic Manager
D. Azure Advisor
A. Azure Monitor Network Insights
Explanation:
Azure Monitor maximizes the availability and performance of your applications and services by delivering a solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on.
Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources without requiring any configuration. It also provides access to network monitoring capabilities like Connection Monitor, flow logging for network security groups (NSGs), and Traffic Analytics, as well as other network diagnostic features. Key features of Network Insights:
– Single console for network monitoring
– No agent configuration required
– Access to health state, metrics, alerts, & data from traffic and connectivity monitoring tools in one place
– View network topology with functional dependencies for simpler troubleshooting
– Access resource metrics to debug issues without writing queries or authoring workbooks
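Although Network Insights itself needs no setup, the same per-resource metrics it surfaces can also be pulled programmatically. A minimal Azure CLI sketch (the subscription ID, resource group, and resource names below are placeholders, not from the scenario):

```shell
# List the metric definitions available for a network resource
# (the resource ID here is a hypothetical public IP address).
az monitor metrics list-definitions \
    --resource "/subscriptions/<sub-id>/resourceGroups/td-rg/providers/Microsoft.Network/publicIPAddresses/td-pip"

# Pull a specific metric without writing queries or authoring workbooks
az monitor metrics list \
    --resource "/subscriptions/<sub-id>/resourceGroups/td-rg/providers/Microsoft.Network/publicIPAddresses/td-pip" \
    --metric "ByteCount" \
    --interval 5m
```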
Hence, the correct answer is: Azure Monitor Network Insights.
Azure Virtual Network is incorrect because this service simply allows your resources, such as virtual machines, to securely communicate with each other, the internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own data center but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.
Azure Traffic Manager is incorrect because this is simply a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions while providing high availability and responsiveness. However, you cannot use this to monitor your network resources.
Azure Advisor is incorrect because this service just helps you improve the cost-effectiveness, performance, reliability (formerly called high availability), and security of your Azure resources.
Your organization has a Microsoft Entra ID subscription that is associated with the directory TD-Siargao.
You have been tasked to implement a conditional access policy.
The policy must require the DevOps group to use multi-factor authentication and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations.
Solution: Create a conditional access policy and enforce grant control.
Does the solution meet the goal?
A. No
B. Yes
B. Yes
Explanation:
The Microsoft Entra ID enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.
With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.
There are two types of access controls in a conditional access policy:
– Grant – enforces grant or block access to resources.
– Session – enables limited experiences within specific cloud applications.
Going back to the scenario, the requirement is to enforce a policy that requires the members of the DevOps group to use MFA and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations. The given solution is to enforce grant access control. Grant control includes options to require multi-factor authentication and a hybrid Microsoft Entra joined device, so it satisfies this requirement.
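Such a policy can also be created programmatically through the Microsoft Graph conditional access API. A rough sketch using the Azure CLI's generic `az rest` command (the group object ID and display name are placeholder assumptions; the caller needs the `Policy.ReadWrite.ConditionalAccess` Graph permission):

```shell
# Hypothetical sketch: create a conditional access policy that requires MFA
# and a hybrid joined device for the DevOps group from untrusted locations.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "DevOps: MFA + hybrid joined device from untrusted locations",
    "state": "enabled",
    "conditions": {
      "users": { "includeGroups": ["<devops-group-object-id>"] },
      "applications": { "includeApplications": ["All"] },
      "locations": { "includeLocations": ["All"], "excludeLocations": ["AllTrusted"] }
    },
    "grantControls": {
      "operator": "AND",
      "builtInControls": ["mfa", "domainJoinedDevice"]
    }
  }'
```

Note the `operator` of `AND` with both built-in grant controls, which mirrors the "require all the selected controls" option in the portal.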
Hence, the correct answer is: Yes.
Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.
You have been tasked to implement a conditional access policy.
The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.
Solution: Create a conditional access policy and enforce session control.
Does the solution meet the goal?
A. Yes
B. No
B. No
Explanation:
The Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.
With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.
There are two types of access controls in a conditional access policy:
– Grant – enforces grant or block access to resources.
– Session – enables limited experiences within specific cloud applications.
Going back to the scenario, the requirement is to enforce a policy that requires the members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to enforce session access control. Session control does not have options to require the use of MFA and hybrid Azure AD joined devices.
Hence, the correct answer is: No.
Your organization has a Microsoft Entra subscription that is associated with the directory TD-Siargao.
You have been tasked to implement a conditional access policy.
The policy must require the DevOps group to use multi-factor authentication and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra ID from untrusted locations.
Solution: Go to the security option in Microsoft Entra and configure MFA.
Does the solution meet the goal?
A. No
B. Yes
A. No
Explanation:
The Microsoft Entra ID enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.
With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.
There are two types of access controls in a conditional access policy:
– Grant – enforces grant or block access to resources.
– Session – enables limited experiences within specific cloud applications.
Going back to the scenario, the requirement is to enforce a policy that requires the members of the DevOps group to use MFA and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations. The given solution is to configure MFA under the security option in Microsoft Entra. If you check the question again, there is a line: "You have been tasked to implement a conditional access policy." This means that you must create a conditional access policy and enforce grant control. Also, configuring MFA alone does not provide an option to require the use of a hybrid Microsoft Entra joined device.
Hence, the correct answer is: No.
Your company created several Azure virtual machines and a file share in the subscription TD-Boracay. The VMs are all part of the same virtual network.
You have been assigned to manage the on-premises Hyper-V server replication to Azure.
To support the planned deployment, you will need to create additional resources in TD-Boracay.
Which of the following options should you create?
A. Replication Policy
B. Azure Storage Account
C. VNet Service Endpoint
D. Hyper-V site
E. Azure Recovery Services Vault
F. Azure ExpressRoute
A. Replication Policy
D. Hyper-V site
E. Azure Recovery Services Vault
Explanation:
Azure Virtual Machines is one of several types of on-demand, scalable computing resources that Azure offers. It gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks such as configuring, patching, and installing the software that runs on it.
Hyper-V is Microsoft’s hardware virtualization product. It lets you create and run a software version of a computer called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and programs. Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual machine on the same hardware at the same time.
A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations.
A replication policy defines the settings for the retention history of recovery points. The policy also defines the frequency of app-consistent snapshots.
To set up disaster recovery of on-premises Hyper-V VMs to Azure, you should complete the following steps:
– Select your replication source and target – to prepare the infrastructure, you will need to create a Recovery Services vault. After the vault is created, you can accomplish the protection goal.
– Set up the source and target replication environments – to set up the source environment, create a Hyper-V site and add to that site the Hyper-V hosts containing the VMs that you want to replicate. The target environment is the subscription and the resource group in which the Azure VMs will be created after failover.
– Create a replication policy.
– Enable replication for a VM.
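The first of these steps can be sketched with the Azure CLI (the resource group, vault name, and region are hypothetical; the Hyper-V site and replication policy themselves are configured through Site Recovery in the portal):

```shell
# Sketch: create the Recovery Services vault that Site Recovery will use.
# Group, vault, and location names are placeholders.
az group create --name td-boracay-rg --location southeastasia

az backup vault create \
    --resource-group td-boracay-rg \
    --name td-recovery-vault \
    --location southeastasia
```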
Hence, the correct answers are:
– Hyper-V site
– Azure Recovery Services Vault
– Replication Policy
Azure Storage Account is incorrect because before you can create an Azure file share, you need to create a storage account first. Instead of creating a storage account again, you should set up a Hyper-V site.
Azure ExpressRoute is incorrect because this service is simply used to establish a private connection between your on-premises data center or corporate network to your Azure cloud infrastructure. It does not have the capability to replicate the Hyper-V server to Azure.
VNet Service Endpoint is incorrect because this option will only remove public internet access to resources and allow traffic only from your virtual network. Remember that the main requirement is to replicate the Hyper-V server to Azure. Therefore, this option wouldn’t satisfy the requirement.
Your company has five branch offices and a Microsoft Entra ID to centrally manage all identities and application access.
You have been tasked with granting permission to local administrators to manage users and groups within their scope.
What should you do?
A. Create an administrative unit.
B. Assign a Microsoft Entra role.
C. Assign an Azure role.
D. Create management groups.
A. Create an administrative unit.
Explanation:
Microsoft Entra ID is a cloud-based identity and access management service that enables your employees to access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.
Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.
For more granular administrative control in Microsoft Entra ID, you can assign a Microsoft Entra role with a scope limited to one or more administrative units.
Administrative units limit a role’s permissions to any portion of your organization that you define. You could, for example, use administrative units to delegate the Helpdesk Administrator role to regional support specialists, allowing them to manage users only in the region for which they are responsible.
Hence, the correct answer is: Create an administrative unit.
The option that says: Assign a Microsoft Entra role is incorrect because if you assign an administrative role to a user that is not a member of an administrative unit, the scope of this role is within the directory.
The option that says: Create a management group is incorrect because this is just a container to organize your resources and subscriptions. This option won’t help you grant permission to local administrators to manage users and groups.
The option that says: Assign an Azure role is incorrect because the requirement is to grant local administrators permission only in their respective offices. If you use an Azure role, the user will be able to manage other Azure resources. Therefore, you need to use administrative units so the administrators can only manage users in the region that they support.
Your company has a web app hosted in an Azure virtual machine named TD-VM1.
You plan to create a backup of TD-VM1, but the backup pre-checks displayed a warning state.
What could be the reason?
A. The Recovery Services vault lock type is read-only.
B. The TD-VM1 data disk is unattached.
C. The status of TD-VM1 is deallocated.
D. The latest VM Agent is not installed in TD-VM1
D. The latest VM Agent is not installed in TD-VM1
Explanation:
Azure Virtual Machines is a service that provides on-demand, scalable computing resources with usage-based pricing. More broadly, a virtual machine behaves like a server: it is a computer within a computer that provides the user the same experience they would have on the host operating system itself. To protect your data, you can use Azure Backup to create recovery points that can be stored in geo-redundant recovery vaults.
A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations. These operations include taking on-demand backups, performing restores, and creating backup policies.
Backup Pre-Checks, as the name implies, check the configuration of your VMs for issues that may affect backups and aggregate this information so that you can view it directly from the Recovery Services Vault dashboard. It also provides recommendations for corrective measures to ensure successful file-consistent or application-consistent backups, wherever applicable.
Backup Pre-Checks are performed as part of your Azure VMs’ scheduled backup operations and result in one of the following states:
– Passed: indicates that your VM's configuration is conducive to successful backups, and no corrective action needs to be taken.
– Warning: indicates one or more issues in the VM's configuration that might lead to backup failures, and provides recommended steps to ensure successful backups. Not having the latest VM Agent installed, for example, can cause backups to fail intermittently and falls into this class of issues.
– Critical: indicates one or more critical issues in the VM's configuration that will lead to backup failures, and provides required steps to ensure successful backups. A network issue caused by an update to the NSG rules of a VM, for example, will fail backups because it prevents the VM from communicating with the Azure Backup service, and falls into this class of issues.
As stated above, the reason why backup pre-checks displayed a warning state is because of the VM agent. The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time.
If you have installed the agent manually or are deploying custom VM images, you will need to manually update the image to include the new VM agent at image creation time. To check for the Azure VM Agent on your machine, open Task Manager and look for a process named WindowsAzureGuestAgent.exe.
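As a quick check from the command line, the agent version reported in the VM's instance view can also be queried (the resource group name below is an assumption):

```shell
# Sketch: show the VM agent version and status for TD-VM1.
az vm get-instance-view \
    --resource-group td-rg \
    --name TD-VM1 \
    --query "instanceView.vmAgent.{version:vmAgentVersion, status:statuses[0].displayStatus}" \
    --output table
```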
Hence, the correct answer is: The latest VM Agent is not installed in TD-VM1.
The option that says: The Recovery Services vault lock type is read-only is incorrect because you can't create a backup at all if the configured lock type is read-only. If you attempt to back up a virtual machine with a read-only resource lock, the operation won't be performed, and you will be notified to remove the lock first.
The option that says: The TD-VM1 data disk is unattached is incorrect because you don’t need to attach a data disk to the virtual machine when creating a backup. To enable VM backup, you need to have a VM agent and Recovery Services vault.
The option that says: The status of TD-VM1 is deallocated is incorrect because you can still create a backup even if the status of your virtual machine is stopped (deallocated).
Your company's eCommerce website is deployed in an Azure virtual machine named TD-BGC.
You created a backup of TD-BGC and implemented the following changes:
– Change the local admin password.
– Create and attach a new disk.
– Resize the virtual machine.
– Copy the log reports to the data disk.
You received an email that the admin restored TD-BGC using the replace existing configuration.
Which of the following options should you perform to bring back the changes in TD-BGC?
A. Create and attach a new disk.
B. Change the local admin password.
C. Copy the log reports to the data disk.
D. Resize the virtual machine.
C. Copy the log reports to the data disk.
Explanation:
Azure Backup is a cost-effective, secure, one-click backup solution that’s scalable based on your backup storage needs. The centralized management interface makes it easy to define backup policies and protect a wide range of enterprise workloads, including Azure Virtual Machines, SQL and SAP databases, and Azure file shares.
Azure Backup provides several ways to restore a VM:
– Create a new VM – quickly creates and gets a basic VM up and running from a restore point.
– Restore disk – restores a VM disk, which can then be used to create a new VM.
– Replace existing – restores a disk and uses it to replace a disk on the existing VM.
– Cross-Region (secondary region) – restores Azure VMs in the secondary region, which is an Azure paired region.
The restore configuration that is given in the scenario is the replace existing option. Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. The existing disks connected to the VM are replaced with the selected restore point.
The snapshot is copied to the vault, and retained in accordance with the retention policy. After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren’t needed.
Since the VM is restored using backup data taken before the changes, the replaced disk won't have a copy of the log reports. To bring back the changes in the TD-BGC virtual machine, you will need to copy the log reports to the data disk again.
Hence, the correct answer is: Copy the log reports to the data disk.
The option that says: Change the local admin password is incorrect because the new password will not be overridden by the old password when using the restore VM option. Therefore, you can use the updated password to connect to the machine via RDP.
The option that says: Create and attach a new disk is incorrect because the new disk does not contain the log reports. Instead of creating a new disk, you should attach the existing data disk that contains the log reports.
The option that says: Resize the virtual machine is incorrect because, among the changes made, the VM size and the account password are retained after rolling back.
Your company plans to store media assets in two Azure regions.
You are given the following requirements:
– Media assets must be stored in multiple availability zones.
– Media assets must be stored in multiple regions.
– Media assets must be readable in the primary and secondary regions.
Which of the following data redundancy options should you recommend?
A. Locally redundant storage
B. Zone-redundant storage
C. Geo-redundant storage
D. Read-access geo-redundant storage
D. Read-access geo-redundant storage
Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:
– Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
– Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region, for applications requiring high availability.
– Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
– Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.
Take note, one of the requirements states that the media assets must be readable in the primary and secondary regions. With geo-redundant storage, your media assets are stored in multiple availability zones and multiple regions, but read access is only available in the secondary region if you or Microsoft initiates a failover from the primary region to the secondary region.
In order to have read access in the primary and secondary regions at all times, without the need to initiate a failover, you should recommend Read-access geo-redundant storage.
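As a sketch, RA-GRS corresponds to the Standard_RAGRS SKU when creating the storage account with the Azure CLI (the account name, resource group, and region below are placeholders):

```shell
# Sketch: create a storage account with read-access geo-redundant storage.
az storage account create \
    --name tdmediaassets \
    --resource-group td-media-rg \
    --location southeastasia \
    --sku Standard_RAGRS \
    --kind StorageV2
```

With RA-GRS, data in the secondary region is readable at the account's secondary endpoints (for example, the blob endpoint with a -secondary suffix).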
Hence, the correct answer is: Read-access geo-redundant storage.
Locally redundant storage is incorrect because the media assets will only be stored in one physical location.
Zone-redundant storage is incorrect. It only satisfies one requirement which is to store the media assets in multiple availability zones. You still need to store your media assets in multiple regions which ZRS is unable to do.
Geo-redundant storage is incorrect because the requirement states that you need read access to the primary and secondary regions. With GRS, the data in the secondary region isn’t available for read access. You can only have read access in the secondary region if a failover from the primary region to the secondary region is initiated by you or Microsoft.
Tutorials Dojo has a subscription named TDSub1 that contains the following resources:
(Image AZ104-D-17: a table of the resources in TDSub1, including the virtual machine TDVM1 located in the South East Asia region)
TDVM1 needs to connect to a newly created virtual network named TDNET1 that is located in Japan West.
What should you do to connect TDVM1 to TDNET1?
Solution: You create a network interface in TD1 in the South East Asia region.
Does this meet the goal?
A. No
B. Yes
A. No
Explanation:
A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. When creating a virtual machine using the Azure portal, the portal creates one network interface with default settings for you.
You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.
Remember these conditions and restrictions when it comes to network interfaces:
– A virtual machine can have multiple network interfaces attached but a network interface can only be attached to a single virtual machine.
– The network interface must be located in the same region and subscription as the virtual machine that it will be attached to.
– When you delete a virtual machine, the network interface attached to it will not be deleted.
– In order to detach a network interface from a virtual machine, you must shut down the virtual machine first.
– By default, the first network interface attached to a VM is the primary network interface. All other network interfaces in the VM are secondary network interfaces.
The solution proposed in the question is incorrect because the virtual network is not located in the same region as TDVM1. Take note that a virtual machine, its virtual network, and its network interface must be in the same region or location.
You need to first redeploy TDVM1 from the South East Asia region to the Japan West region, and then create and attach the network interface to TDVM1 in the Japan West region.
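A minimal sketch of creating the network interface in the correct region with the Azure CLI (the resource group, NIC name, and subnet name are assumptions):

```shell
# Sketch: the NIC must be created in the same region (Japan West) as TDNET1
# before it can be attached to a VM deployed in that region.
az network nic create \
    --resource-group td-rg \
    --name tdvm1-nic \
    --location japanwest \
    --vnet-name TDNET1 \
    --subnet default
```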
Hence, the correct answer is: No.
You have the following public load balancers deployed in Davao-Subscription1.
TD1 - Standard
TD2 - Basic
You provisioned two groups of virtual machines containing 5 virtual machines each, where the traffic must be load balanced to ensure it is evenly distributed.
Which of the following health probes is not available for TD2?
A. HTTP
B. TCP
C. RDP
D. HTTPS
D. HTTPS
Explanation:
Azure Load Balancer provides a higher level of availability and scale by spreading incoming requests across virtual machines (VMs). A private load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that is load balanced. Frontend IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources.
Remember that although cheaper, load balancers with the basic SKU have limited features compared to a standard load balancer. Basic load balancers are only useful for testing in development environments but when it comes to production workloads, you need to upgrade your basic load balancer to standard load balancer to fully utilize the features of Azure Load Balancer.
Take note: the health probes of a basic load balancer support only the HTTP and TCP protocols.
Hence, the correct answer is: HTTPS.
HTTP and TCP are incorrect because these protocols are supported for health probes on a basic load balancer.
RDP is incorrect because this protocol is not supported by Azure Load Balancer.
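As an illustrative sketch, an HTTPS health probe can only be created against the Standard SKU load balancer TD1; a Basic SKU accepts only Tcp and Http (the resource group, probe name, and path below are assumptions):

```shell
# Sketch: HTTPS probes require a Standard SKU load balancer.
az network lb probe create \
    --resource-group td-rg \
    --lb-name TD1 \
    --name https-probe \
    --protocol Https \
    --port 443 \
    --path /health
```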
You have an Azure subscription that contains the following storage accounts:
TD1 - general-purpose v1 - Locally redundant storage
TD2 - general-purpose v1 - Geo redundant storage
There is a compliance requirement wherein the data in TD1 and TD2 must remain available if a single availability zone in a region fails. The solution must minimize costs and administrative effort.
What should you do first?
A. Upgrade TD1 and TD2 to general-purpose v2
B. Upgrade TD1 and TD2 to zone-redundant storage
C. Configure lifecycle policy
D. Upgrade TD1 to geo-redundant storage
A. Upgrade TD1 and TD2 to general-purpose v2
Explanation:
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:
– Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
– Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region, for applications requiring high availability.
– Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
– Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.
The main requirement is that you need to ensure the data in TD1 and TD2 are available if a single availability zone fails while minimizing costs and administrative effort.
Among the redundancy options, zone-redundant storage fits the requirement of protecting your data by copying it synchronously across three Azure availability zones. So even if a single availability zone fails, two availability zones remain available.
Remember, ZRS is not a supported redundancy option under general-purpose v1. The first thing you need to do is upgrade your storage accounts to general-purpose v2 and then change the replication type to ZRS.
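The first step can be sketched with the Azure CLI (the resource group and account name are placeholders); changing the replication type to ZRS afterwards is a separate conversion performed on the upgraded account:

```shell
# Sketch: upgrade a general-purpose v1 account to general-purpose v2.
az storage account update \
    --resource-group td-rg \
    --name td1storage \
    --set kind=StorageV2 \
    --access-tier Hot
```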
Hence, the correct answer is: Upgrade TD1 and TD2 to general-purpose v2.
The option that says: Upgrade TD1 and TD2 to zone-redundant storage is incorrect because zone-redundant storage is not supported under general-purpose v1.
The option that says: Upgrade TD1 to geo-redundant storage is incorrect because one of the requirements is to minimize cost. With ZRS, you have already satisfied the data availability requirement.
The option that says: Configure lifecycle policy is incorrect because this is simply a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle.
Your organization has a domain named tutorialsdojo.com.
You want to host your records in Microsoft Azure.
Which three actions should you perform?
A. Copy the Azure DNS NS records
B. Copy the Azure DNS A records
C. Create an Azure private DNS zone
D. Create an Azure public DNS zone
E. Update the Azure NS records to your domain registrar
F. Update the Azure A records to your domain registrar
A. Copy the Azure DNS NS records
D. Create an Azure public DNS zone
E. Update the Azure NS records to your domain registrar
Explanation:
Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.
Using custom domain names helps you to tailor your virtual network architecture to best suit your organization’s needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.
You can use Azure DNS to host your DNS domain and manage your DNS records alongside your other Azure services.
Since you own tutorialsdojo.com through a domain name registrar, you can create a zone named tutorialsdojo.com in Azure DNS. As the owner of the domain, your registrar allows you to configure the name server (NS) records for your domain, so internet users around the world are directed to your Azure DNS zone whenever they try to resolve tutorialsdojo.com.
The steps in registering your Azure public DNS records are:
1. Create your Azure public DNS zone.
2. Retrieve the name servers – Azure DNS assigns name servers from a pool each time a zone is created.
3. Delegate the domain – once the DNS zone is created and you have the name servers, update the parent domain at your registrar with the Azure DNS name servers.
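The steps above can be sketched with the Azure CLI. This is a minimal illustration, assuming a placeholder resource group named td-rg; it requires an active subscription to run.

```shell
# 1. Create the public DNS zone for the domain.
az network dns zone create \
  --resource-group td-rg \
  --name tutorialsdojo.com

# 2. Retrieve the name servers that Azure DNS assigned to the zone.
az network dns zone show \
  --resource-group td-rg \
  --name tutorialsdojo.com \
  --query nameServers --output tsv

# 3. Copy the NS records printed above into the name server settings
#    at your domain registrar to delegate the domain to Azure DNS.
```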
Hence, the correct answers are:
– Create an Azure public DNS zone
– Update the Azure NS records to your domain registrar
– Copy the Azure DNS NS records
The options that say: Copy the Azure DNS A records and Update the Azure A records to your domain registrar are incorrect because you need to copy the name server (NS) records instead of the A records. An A record is a type of DNS record that points a domain to an IP address.
The option that says: Create an Azure private DNS zone is incorrect because this simply manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. The requirement states that the users must be able to access tutorialsdojo.com via the internet. You need to deploy an Azure public DNS zone instead.
You plan to deploy the public IP addresses shown in the following table in your Azure subscription:
Name | SKU | Assignment
TD1 | Basic | Static
TD2 | Basic | Dynamic
TD3 | Standard | Static
TD4 | Standard | Dynamic
You need to associate a public IP address to a public Azure load balancer with an SKU of standard.
Which of the following IP addresses can you use?
A. TD1
B. TD3
C. TD3 and TD4
D. TD1 and TD2
B. TD3
Explanation:
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.
A public IP associated with a load balancer serves as an Internet-facing frontend IP configuration. The frontend is used to access resources in the backend pool. The frontend IP can be used for members of the backend pool to egress to the Internet.
Remember that the SKU of a load balancer and the SKU of the public IP address you use with it must match. If you have a load balancer with a Standard SKU, you must also provision a public IP address with a Standard SKU.
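As a quick sketch, matching SKUs with the Azure CLI looks like the following. The resource group and resource names are placeholders, and the commands require an active subscription.

```shell
# Create a Standard SKU public IP address. Note that Standard
# public IPs only support static assignment.
az network public-ip create \
  --resource-group td-rg \
  --name td-pip \
  --sku Standard \
  --allocation-method Static

# Create a Standard SKU public load balancer and associate the
# matching Standard public IP as its frontend.
az network lb create \
  --resource-group td-rg \
  --name td-lb \
  --sku Standard \
  --public-ip-address td-pip
```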
Hence, the correct answer is: TD3.
The options that say: TD1 and TD1 and TD2 are incorrect because both of these public IP addresses have an SKU of Basic. You must provision a public IP address with a Standard SKU so you can associate it with a Standard public load balancer.
The option that says: TD3 and TD4 is incorrect because a Standard public IP address can only be created with a static assignment, so TD4 is not a valid configuration.
For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.
Questions:
1. You can rehydrate blob data in the archive tier instantly – No
2. You can rehydrate blob data in the archive tier without costs – No
3. You can access your blob data that is in the archive tier – No
Explanation:
Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers include:
Hot – Optimized for storing data that is accessed frequently.
Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.
Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).
While a blob is in the archive access tier, it’s considered offline and can’t be read or modified. The blob metadata remains online and available, allowing you to list the blob and its properties. Reading and modifying blob data is only available with online tiers such as hot or cool.
To read data in archive storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and can take hours to complete.
A rehydration operation with Set Blob Tier is billed for data read transactions and data retrieval size. High-priority rehydration has higher operation and data retrieval costs compared to standard priority. High-priority rehydration shows up as a separate line item on your bill.
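A rehydration with Set Blob Tier can be sketched with the Azure CLI as follows. The storage account, container, and blob names are placeholders; the operation is asynchronous and, as noted above, can take hours to complete.

```shell
# Rehydrate an archived blob by moving it to an online tier.
# Use --rehydrate-priority High for faster (but costlier) rehydration.
az storage blob set-tier \
  --account-name tdstorageaccount \
  --container-name td-container \
  --name backup.vhd \
  --tier Hot \
  --rehydrate-priority Standard
```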
The statement that says: You can rehydrate blob data in the archive tier without costs is incorrect. You are billed for data read transactions and data retrieval size (per GB).
The statement that says: You can rehydrate blob data in the archive tier instantly is incorrect. Rehydrating a blob from the archive tier can take several hours to complete.
The statement that says: You can access your blob data that is in the archive tier is incorrect because blob data stored in the archive tier is considered offline and can't be read or modified.
You deployed an Ubuntu server using an Azure virtual machine.
You need to monitor the system performance metrics and log events.
Which of the following options would you use?
A. Azure Performance Diagnostics VM Extension
B. Boot diagnostics
C. Connection monitor
D. Linux Diagnostic Extension
D. Linux Diagnostic Extension
Explanation:
Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. It collects guest metrics into Azure Monitor Metrics and sends guest logs and metrics to Azure storage for archiving.
Azure Performance Diagnostics VM Extension helps collect performance diagnostic data from Windows VMs. The extension performs analysis and provides a report of findings and recommendations to identify and resolve performance issues on the virtual machine.
The Linux Diagnostic Extension will help you monitor the health of a Linux VM running on Microsoft Azure. It has the following capabilities:
– Collects system performance metrics from the VM and stores them in a specific table in a designated storage account.
– Retrieves log events from syslog and stores them in a specific table in the designated storage account.
– Enables users to customize the data metrics that are collected and uploaded.
– Enables users to customize the syslog facilities and severity levels of events that are collected and uploaded.
– Enables users to upload specified log files to a designated storage table.
– Supports sending metrics and log events to arbitrary EventHub endpoints and JSON-formatted blobs in the designated storage account.
With this extension, you can now monitor the system performance metrics and log events of the virtual machine.
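Installing the extension can be sketched with the Azure CLI as follows. This is a minimal illustration: the resource group and VM names are placeholders, and the two settings files (the public metrics/syslog configuration and the protected storage credentials) are assumed to have been prepared beforehand.

```shell
# Install the Linux Diagnostic extension (LAD) on an existing Linux VM.
# portal_public_settings.json defines which metrics and syslog events
# to collect; portal_protected_settings.json holds the storage
# account credentials used to store the collected data.
az vm extension set \
  --resource-group td-rg \
  --vm-name td-ubuntu-vm \
  --publisher Microsoft.Azure.Diagnostics \
  --name LinuxDiagnostic \
  --version 4.0 \
  --settings portal_public_settings.json \
  --protected-settings portal_protected_settings.json
```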
Hence, the correct answer is: Linux Diagnostic Extension.
Azure Performance Diagnostics VM Extension is incorrect because this extension only collects performance diagnostic data from Windows VMs.
Boot diagnostics is incorrect because this feature is primarily used to diagnose VM boot failures and not for monitoring the system performance metrics and log events.
Connection monitor is incorrect because this is simply used for end-to-end connection monitoring.