Azure Cloud Academy Certification Test Flashcards
ASP.NET applications that run in an Azure web app can create which of the following kinds of logs?
Application tracing, Web server, Detailed error message, Failed request tracing
Application tracing, Web server, Detailed error message, Access request tracing
Application tracing, Web server, Error message, Access request tracing
Application tracing, Web server, Error message, Successful request tracing
ASP.NET applications running in Azure web apps can create the following types of logs:
Application tracing
Web server
Detailed error message
Failed request tracing.
An Azure subscription named Subscription 1 contains three resource groups named Development, Test, and Production. Thomas, Logan, and Guy have been assigned roles via role-based access control (RBAC) to access Subscription 1 resources. Logan can perform all read and write operations on all compute and storage resources within the Development and Test resource groups. Guy is an owner of the Development and Test resource groups. Thomas is an owner of Subscription 1. If necessary, who would be able to delete the entire Development resource group and all resources within it?
Both Guy and Thomas
The junior database administrator at your organization is experimenting with an Azure Stream Analytics parallel job. The query is designed to be embarrassingly parallel. The job input is from an Event Hub with eight partitions. Which of the following would be feasible for the job output?
An Event Hub with 0 partitions
An Event Hub with 16 partitions
A Blob Output
A Blob Output with 8 partitions
A Blob Output
For an embarrassingly parallel job, the number of input partitions must match the number of output partitions, avoiding a partition-count mismatch. Blob output does not currently support setting a partition count; instead, it inherits the partitioning scheme of the upstream query. If an Event Hub were used as the output, it would need exactly eight partitions to match the input.
Your database administrator and you are brainstorming ways to monitor memory pressure on a newly installed Azure Redis Cache Premium tier instance. Your database administrator insists that the Cache Misses metric in the Azure Portal is the best way to monitor memory pressure. Why do you advise against using cache misses for monitoring memory pressure?
Cache misses are normal and do not always reflect memory pressure.
Cache misses are more a reflection of server CPU utilization issues and latency issues.
Cache misses result from client/server regional variances and request/response timeouts.
Cache misses can only measure timeout issues resulting from low network bandwidth availability.
Cache misses are normal and do not always reflect memory pressure.
Cache misses are not necessarily a bad thing. Not all data can be in the cache at once. When using the cache-aside programming pattern, an application looks first in the cache for an item. If the item is not there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. Higher than expected cache misses may be caused by application logic that populates and reads from the cache. However, if items are being evicted from the cache due to memory pressure then there may be some cache misses, but a better metric to monitor for memory pressure would be Used Memory or Evicted Keys.
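The cache-aside pattern described above can be sketched in a few lines of Python. This is a simplified illustration, with plain dicts standing in for Redis and the database (both hypothetical stand-ins, not real Azure resources):

```python
# Minimal cache-aside sketch: one dict stands in for the cache, another for the database.
database = {"user:1": "Alice", "user:2": "Bob"}
cache = {}
stats = {"hits": 0, "misses": 0}

def get_item(key):
    """Look in the cache first; on a miss, load from the database and cache it."""
    if key in cache:
        stats["hits"] += 1
        return cache[key]
    stats["misses"] += 1          # a miss is normal behavior, not necessarily memory pressure
    value = database.get(key)
    if value is not None:
        cache[key] = value        # populate the cache for next time
    return value

get_item("user:1")   # first read: cache miss, loads from the database
get_item("user:1")   # second read: cache hit
```

As the sketch shows, the very first read of any item is always a miss by design, which is why a miss count alone says little about memory pressure.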
Which of the following Azure PowerShell cmdlets can be used to verify VM encryption status of a Linux VM?
Get-AzVmDiskEncryptionStatus
Get-AzureLinuxVmDiskEncryptionStatus
Get-VmEncryptionStatus
Get-AzureRmLinuxDiskEncryptionStatus
Get-AzVmDiskEncryptionStatus
Use the Get-AzVmDiskEncryptionStatus cmdlet to verify the encryption status of a Linux VM.
You are designing several message queue services for clients. Service 1 is a delivery system for online invitations with the following specifications: first-in, first-out (FIFO) support is required to ensure messages are delivered in order, and messages must have unlimited time to live (TTL). Service 2 is a billing reminder delivery service with the following specifications: duplicate messages must be detected and removed from the queue automatically, and messages will average 150 KB in size. Service 3 is a data delivery system for weather data from numerous IoT producers to a central data warehouse for batch processing and eventual data analysis. Its specifications are: messages will be 10 KB in size, and the service will have to process thousands of messages per second. The data analysis application used with Service 3 performs idempotent operations. Which service(s) would be ideal for Azure Storage Queue?
Service 1 and 3
Service 2 only
Service 3 only
Service 1 and 2.
Service 3 only
Explanation
Azure Storage Queues and Azure Service Bus Queues have several similar use cases, but their service limitations make them ideal for specific services.
Storage Queues cannot guarantee FIFO delivery, while Service Bus Queues can.
Storage Queues cannot detect duplicate messages in a queue.
Storage Queues have a maximum message TTL of 7 days, while Service Bus Queue TTL can be unlimited.
Storage Queues have a maximum message size of 64 KB, and although they can store a pointer to larger files held elsewhere, doing so adds latency to the service. Service Bus Queues are capable of delivering larger messages.
Storage Queues are generally recommended for large, asynchronous workflows while Service Bus Queues are ideal for medium-scale transaction workflows.
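The limits listed above can be condensed into a small decision helper. This is a hypothetical sketch that encodes only the limits stated in this explanation, not a complete comparison of the two services:

```python
# Hypothetical helper applying the Storage Queue limits stated above.
STORAGE_QUEUE_MAX_MESSAGE_KB = 64   # per the explanation above
STORAGE_QUEUE_MAX_TTL_DAYS = 7

def recommend_queue(needs_fifo, needs_dedupe, message_kb, ttl_days):
    """Return 'Service Bus Queue' when any Storage Queue limitation is hit."""
    if needs_fifo or needs_dedupe:
        return "Service Bus Queue"   # FIFO and duplicate detection need Service Bus
    if message_kb > STORAGE_QUEUE_MAX_MESSAGE_KB:
        return "Service Bus Queue"   # message too large for a Storage Queue
    if ttl_days > STORAGE_QUEUE_MAX_TTL_DAYS:
        return "Service Bus Queue"   # TTL beyond the 7-day Storage Queue maximum
    return "Storage Queue"

# Service 1: FIFO + unlimited TTL        -> Service Bus Queue
# Service 2: dedupe + 150 KB messages    -> Service Bus Queue
# Service 3: 10 KB, idempotent consumer  -> Storage Queue
```

Running the three services through the helper reproduces the answer above: only Service 3 fits a Storage Queue.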
Which Microsoft PowerShell Security Cmdlet converts a secure string to an encrypted standard string?
ConvertTo-EncryptedString
ConvertFrom-EncryptedString
ConvertTo-SecureString
ConvertFrom-SecureString
ConvertFrom-SecureString
Explanation
PowerShell has a Security module that consists of cmdlets and providers that manage the basic security features of Windows. To convert a secure string to an encrypted standard string, use the ConvertFrom-SecureString cmdlet.
What does Microsoft recommend when choosing an Azure Cosmos DB partition key?
Select partition keys that have high volumes of data for the same value.
Set the same partition key for all your documents.
Select a unique partition key for each document.
Select a partition key that prevents “hot spots” within your application.
Select a partition key that prevents “hot spots” within your application.
Explanation
Your choice of partition key should balance the need to enable the use of transactions against the requirement to distribute your entities across multiple partition keys to ensure a scalable solution. It is important to pick a property that allows writes to be distributed across a number of distinct values. Requests to the same partition key cannot exceed the throughput of a single partition, and will be throttled.
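The "hot spot" idea can be illustrated with a rough Python sketch: documents are routed to partitions by hashing the partition key, so a key with many distinct values (such as a user id) spreads load, while a single shared value sends every write to one partition. The hashing here is a simplification for illustration, not Cosmos DB's actual scheme:

```python
from collections import Counter

PARTITIONS = 4

def partition_for(key):
    # Simplified stand-in for Cosmos DB's hash partitioning.
    return hash(key) % PARTITIONS

# Good key: many distinct values spread writes across partitions.
spread = Counter(partition_for(f"user-{i}") for i in range(1000))

# Bad key: the same value for every document -> one "hot" partition.
hot = Counter(partition_for("tenant-1") for _ in range(1000))

print(len(hot))     # 1: all 1000 writes land on a single partition
print(len(spread))  # several partitions share the load
```

Because requests to one partition key cannot exceed a single partition's throughput, the "hot" case above would be throttled long before the evenly spread case.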
A blob can be leased to limit write and delete permissions for that specific blob to which of the following scopes?
A single user
A single Azure AD tenant
A single resource group
A single Azure AD group
A single user
Explanation
Blob leases limit write and delete permissions to the specific user who has leased the object.
You are developing an app that uses Azure Functions and need to write a trigger function that runs immediately on startup, and then every two hours thereafter. How would you code the TimerTrigger attributes to accomplish this task? Assume that you will use these six fields for the scheduling string: {second} {minute} {hour} {day} {month} {day of the week}.
TimerTrigger("0 0 */2 * * *", RunOnStartup = true)
TimerTrigger("2 0 * 0 * * *", TimerInfo = RunOnStartup)
TimerTrigger("2 0 * 0 * * *", RunOnStartup = true)
TimerTrigger("0 0 */2 * * *", TimerInfo = RunOnStartup)
TimerTrigger("0 0 */2 * * *", RunOnStartup = true)
Explanation
TimerTrigger is a fully featured timer trigger for scheduled jobs that supports CRON expressions as well as other schedule expressions. The first parameter is a CRON expression that declares the schedule; here, "0 0 */2 * * *" fires at the top of every second hour. Setting RunOnStartup = true additionally makes the function run immediately when the runtime starts.
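The six-field schedule string can be unpacked like this. This is a plain Python sketch of the field layout given in the question, not the Functions runtime's actual parser:

```python
# Field layout from the question: {second} {minute} {hour} {day} {month} {day of the week}
FIELDS = ["second", "minute", "hour", "day", "month", "day of the week"]

def parse_schedule(expression):
    """Map each CRON field name to its value in the schedule string."""
    parts = expression.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

schedule = parse_schedule("0 0 */2 * * *")
# second=0, minute=0, hour=*/2 -> fire at minute zero of every second hour.
```

Note that the incorrect options such as "2 0 * 0 * * *" contain seven fields rather than six, so they are not even well-formed for this schedule format.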
When configuring Azure Notification Hub push notifications for your Azure App Service mobile app, which credential type is required to allow your mobile backend to connect to your notification hub?
Access policy connection strings
OAuth 2.0 authentication
Managed Service Identity authentication
HubTriggers
Access policy connection strings
Explanation
You will need to get the connection string from the Access Policies page. This is the credential that will let your mobile backend actually connect to your hub for pushing messages. It will be part of your mobile backend code.
Before you deploy a new application to its production environment, you need to integrate a monitoring solution that sends messages to the development team's mobile devices. The key requirements for this messaging solution are: it can be deployed with minimal customization or administration required, and it can deliver messages to mobile devices running Android and iOS operating systems. Which Azure solution is optimal for this scenario?
Azure Service Bus
Azure Event Hub
Azure Notification Hub
Azure Event Grid
Azure Notification Hub
Explanation
Azure Notification Hubs is a ready-made smart device notification solution. Need to send push notifications to iPhones, Android phones, or tablets? Notification Hubs is your answer. Its great strength is that it takes away much of the pain involved in supporting a variety of mobile devices. Unlike other forms of messaging, push notifications often involve tricky platform-dependent logic. Scaling, managing tokens, and routing messages to different segments of users on different hardware and different versions of Android is non-trivial work even for an experienced tech team.
Notification Hubs takes away most of that pain. It lets you broadcast to all platforms through a single interface. It can work both in the cloud and on-premises, and it includes security features such as shared access signatures (SAS) and federated authentication.
Jeremy will manage security for all applications within two subscriptions, named Subscription 1 and Subscription 2. Jeremy needs to be assigned the appropriate role to manage these resources. This new role has the following requirements: Jeremy needs to be able to assign employees he manages permanent roles within PIM. With his potential ability to assign other employees resource access in PIM, his role assignment will need administrative review. Before management activates his assignment, they would like Jeremy to complete MFA. What Azure resource role assignment within PIM will meet these requirements?
Permanent eligible assignment
Permanent active assignment
An eligible assignment with expiration
An active assignment with expiration
Permanent eligible assignment
Explanation
Permanent assignments do not expire, so Jeremy could grant the employees he manages permanent roles within PIM. Eligible assignments require the user to complete an action, such as providing justification or completing MFA, before the role is activated. Active role assignments do not need to be justified and do not require MFA.
You have built a Web App application that keeps returning a 500 error when called, and you're scrambling to understand the underlying issue. What's your first line of defense?
Turn on Web server diagnostic logs, collect and analyze
Turn on Application server diagnostic logs, collect and analyze
Open a support request to Azure Helpdesk asking for assistance
Open a Kudu console and watch application log stream
Turn on Web server diagnostic logs, collect and analyze
Explanation
Azure provides built-in diagnostics to assist with debugging an App Service web app. App Service web apps provide diagnostic functionality for logging information from both the web server and the web application. These are logically separated into web server diagnostics and application diagnostics. In order to enable logging for the web server diagnostics, you simply change the setting on the Azure Portal.
You are auditing and updating a small number of critical blobs within an Azure Blob Storage account, and those updates are recorded in a separate on-premises database. The entire update process for each blob takes roughly 30-50 seconds because the on-premises update can lag occasionally; the process has never taken longer than 50 seconds. During this update, you plan to lease each blob individually as you audit the account, to limit the potential effects on ongoing business. You want to lease the blob from the time you begin your update until the time the update is recorded in the on-premises database. Which lease operations should you perform?
Lease the blob for 60 seconds, perform the update, and break the lease.
Lease the blob for 60 seconds, perform the manual update, and release the lease.
Lease the blob indefinitely, perform the manual update and then break the lease.
Lease the blob indefinitely, perform the manual update, and then release the lease.
Lease the blob for 60 seconds, perform the update, and break the lease.
Explanation
The key to answering this question correctly is understanding how timed and indefinite (or infinite) leases operate.
Timed and indefinite leases, when released, end immediately.
Timed leases, when broken, last for the remaining time of the lease period and then end.
Indefinite leases, when broken, end immediately.
Therefore, the correct answer is to select a timed lease of 60 seconds and break it once you've completed the manual update. This way, the lease continues for the full 60 seconds while the on-premises database update is recorded.
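The timing rules above can be modeled with a short sketch. This is a simplification of the lease semantics described in this explanation, not the Azure Storage SDK:

```python
INFINITE = None  # an indefinite lease has no fixed duration

def lease_end(duration, acquired_at, action, action_at):
    """Return when the lease actually ends, per the rules above.

    - 'release' ends any lease immediately.
    - 'break' on a timed lease lets it run out the remainder of its period.
    - 'break' on an indefinite lease ends it immediately.
    """
    if action == "release":
        return action_at
    if action == "break":
        if duration is INFINITE:
            return action_at
        return acquired_at + duration
    raise ValueError(f"unknown action: {action}")

# 60-second lease acquired at t=0; the update finishes and the lease is
# broken at t=50, but the blob stays protected until t=60.
end = lease_end(60, 0, "break", 50)
```

The sketch makes the distinction concrete: breaking the 60-second lease at t=50 still protects the blob until t=60, while releasing it (or breaking an indefinite lease) would end protection at t=50.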
You are a start-up company currently hosting two small web applications, Web App 1 and Web App 2, on Azure Web Apps. Your web apps run on three instances on a Basic App Service plan. You need to manage both web apps to meet the following requirements: Allow Web App 1 to scale from 5-8 instances based on application workload, as traffic for this web app is growing. Maintain Web App 2 on three separate instances, as this application is also growing more popular; however, Web App 2 does not require scaling capabilities yet. What steps would be most cost-effective and meet your application requirements?
Move Web App 1 to a separate Standard app service plan. Configure auto scaling for Web App 1 between a range of 5 to 8 instances based on application metrics. Keep your existing Basic app service plan for Web App 2.
Scale up to a Premium app service plan. Leave Web App 2 as it is currently configured. Configure auto scaling for Web App 1 between a range of 5 to 8 instances based on application metrics.
Move Web App 1 to a separate Premium app service plan. Configure auto scaling for Web App 1 between a range of 5 to 8 instances based on application metrics. Scale your Basic app service plan down to a Shared service plan for Web App 2.
Move Web App 1 to a separate Premium app service plan. Configure auto scaling for Web App 1 between a range of 5 to 8 instances based on application metrics. Scale up your existing service plan from Basic to Standard for Web App 2.
Move Web App 1 to a separate Standard app service plan. Configure auto scaling for Web App 1 between a range of 5 to 8 instances based on application metrics. Keep your existing Basic app service plan for Web App 2.
Explanation
App Service plans are containers for the apps that you deploy in App Service. App Service plans are offered in different tiers, with more functionality provided by higher, more expensive tiers. The following list highlights some of the distinctions between the available tiers:
Free (Windows only): Run a small number of apps for free
Shared (Windows only): Run more apps and provides support for custom domains
Basic: Run unlimited apps and scale up to three instances with built-in load balancing
Standard: The first tier recommended for production workloads. It scales up to ten (10) instances with Autoscaling support and VNet integration to access resources in your Azure virtual networks without exposing them to the internet
Premium: Scale up to 20 instances and additional storage over the standard tier
Isolated: Scale up to 100 instances, runs inside of an Azure Virtual Network isolated from other customers, and supports private access use cases
Which PowerShell command will create a new deployment slot for a web app?
New-AzWebAppSlot -ResourceGroupName [resource group name] -Name [web app name] -Slot [deployment slot name] -AppServicePlan [app service plan name]
New-AzDeploymentSlot -ResourceGroupName [resource group name] -Name [web app name] -Slot [deployment slot name] -AppServicePlan [app service plan name]
New-AzWebAppSlot -Name [web app name] -Slot [deployment slot name] -AppServicePlan [app service plan name]
New-AzWebAppDeploymentSlot -ResourceGroupName [resource group name] -Name [web app name] -Slot [deployment slot name] -AppServicePlan [app service plan name]
New-AzWebAppSlot -ResourceGroupName [resource group name] -Name [web app name] -Slot [deployment slot name] -AppServicePlan [app service plan name]
Explanation
The correct answer is:
New-AzWebAppSlot -ResourceGroupName [resource group name] -Name [web app name] -Slot [deployment slot name] -AppServicePlan [app service plan name].
All of the other answers contain errors.
Your team develops multiple mobile finance APIs for an online banking service. You need to mitigate potential abuse for a single online product, a business travel expense submission service. Using Azure API Management, you need to set policies to control the character types within data strings submitted to the backend via all of the product's APIs. Which stage and scope would you need to set for this API policy in Azure API Management?
Inbound stage and Product scope
Backend stage and Specific API scope
Frontend stage and Individual Operation scope
Inbound stage and Global scope
Inbound stage and Product scope
Explanation
This policy would be applied at the inbound stage and the product scope, because it modifies or inspects request contents before they reach the backend, and does so for all of a product's APIs.