Practice Assessment Flashcards

1
Q

You plan to use the metrics and Key Performance Indicators (KPIs) of Azure DevOps projects to validate that your team is meeting its goals and expectations.

You need to identify the KPI that is used to measure the quality and security associated with a project in Azure DevOps.

Which KPI should you identify?

  • Application Performance
  • Lead Time
  • Mean time to recover
  • Server to Admin Ratio
A

Mean time to recover

Mean time to recover is an example of a quality and security metric. Server to Admin Ratio and Application Performance are examples of efficiency metrics. Lead Time is an example of a faster outcome metric.

2
Q

You have an Azure Boards project.

You plan to track the work status of items based on different service-level classes.

You need to add items to a Kanban board.

Which Azure Boards feature should you use?

Select only one answer.

  • card customization
  • definition of done
  • split columns
  • swimlanes
A

swimlanes

You can add swimlanes to the Kanban board to visualize the status of work that supports different service-level classes, for example an expedite swimlane to track high-priority work. Card customization allows you to update a field without opening the work item. Split columns only distinguish Doing and Done states and do not surface expedited work. The Definition of Done criteria defines what “done” means within each project.

3
Q

You plan to use the metrics and Key Performance Indicators (KPIs) for Azure DevOps projects.

You need to identify the KPI that represents a faster outcome metric associated with a project in Azure DevOps.

Which KPI should you identify?

Select only one answer.

  • Lead Time
  • Mean time to detection
  • Mean time to recover
  • Retention rates
A

Lead Time

Only Lead Time is an example of a faster outcome metric associated with an Azure DevOps project. Mean time to detection and Mean time to recover are examples of quality and security metrics. Retention rates are an example of a culture metric.

4
Q

You plan to create a project wiki in Azure DevOps.

You need to format the wiki pages to include headers, bulleted lists, and italicized font.

Which language should you use to format the pages?

Select only one answer.

  • HTML
  • JSON
  • Markdown
  • XML
A

Markdown

Azure DevOps wikis are written in Markdown, not HTML, JSON, or XML.

5
Q

You plan to create a project wiki in Azure DevOps.

You need to create a diagram on the wiki page.

Which syntax element should you use to designate the end of the diagram?

Select only one answer.

  • >
  • }
  • :::
  • ###
A

:::

The ::: symbol designates the beginning and end of a Mermaid element when using Markdown. The > symbol designates an element when using XML, } designates an element when using JSON, and ### is used to designate a header.
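
For illustration, a minimal Mermaid diagram on a wiki page is delimited by the ::: markers; the diagram content below is a hypothetical example:

::: mermaid
graph LR;
  A[Commit] --> B[Build];
  B --> C[Release];
:::
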

6
Q

You plan to implement a measurement indicator for a new project in Azure DevOps.

You need to recommend a Key Performance Indicator (KPI) to measure how long it takes for a work item to be completed.

Which KPI should you recommend?

Select only one answer.

  • Lead Time
  • Mean time to recover
  • Retention rates
  • Server to Admin Ratio
A

Lead time

Lead Time is recommended to measure how long it takes from the creation of a work item until completion.

Incorrect: Server to Admin Ratio is used to measure whether the project is reducing the number of administrators required for a given number of servers.

Incorrect: Mean time to recover is used to measure how quickly an implementation can recover from a failure.

Incorrect: Retention rates relates to the measurement of losing staff.

7
Q

You plan to design an Azure DevSecOps security validation process for your company.

You need to identify which stage in the process will include a passive penetration test.

Which stage should you identify?

Select only one answer.

  • continuous deployment
  • continuous integration
  • IDE/pull requests
  • nightly test runs
A

Continuous deployment

Continuous deployment should include passive penetration tests as well as SSL and infrastructure scans. Nightly test runs should include infrastructure scans and active penetration tests. Continuous integration should include an Open Source Software (OSS) vulnerability scan. The integrated development environment/pull request step should include static code analysis and code reviews.

8
Q

You have an Azure DevOps organization.

You plan to use caching in Azure Pipelines.

You need to identify which types of pipelines you can use.

Which pipeline types should you identify?

  • classic build and classic release only
  • classic release only
  • YAML and classic build only
  • YAML only
A

YAML and classic build only.

Caching is available in YAML and classic build pipelines. It is unavailable in classic release pipelines.
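
For illustration, here is a minimal sketch of caching in a YAML pipeline using the Cache task; the npm-based key and paths are assumptions, not part of the question:

steps:
- task: Cache@2
  displayName: Cache npm packages
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'    # cache key derived from the lock file
    restoreKeys: |
      npm | "$(Agent.OS)"
    path: $(Pipeline.Workspace)/.npm                  # directory restored from and saved to the cache
- script: npm ci --cache $(Pipeline.Workspace)/.npm
  displayName: Install dependencies
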

9
Q

You have an Azure App Service web app named app1.

You plan to track the availability of app1 by leveraging native Azure capabilities.

You need to identify which type of Azure resource you should use to implement the tracking mechanism. The solution must minimize implementation efforts.

Which Azure resource type should you use?

Select only one answer.

  • Azure App Configuration
  • Azure Automation
  • Azure DevTest Labs
  • Azure Functions
A

Azure Functions

Azure Functions provides the ability to create and run custom availability tests by relying on the TrackAvailability() method (included in the Azure SDK for .NET). Azure Automation can potentially be used in this case, but this would require significantly more effort. Azure DevTest Labs and Azure App Configuration do not provide this functionality.

  • https://learn.microsoft.com/azure/azure-monitor/app/availability-azure-functions
  • https://learn.microsoft.com/training/modules/configure-provision-environments/6-set-up-run-availability-tests?ns-enrollment-type=learningpath&ns-enrollment-id=learn.wwl.az-400-design-implement-release-strategy
10
Q

You have a project in Azure DevOps that contains a build pipeline for a .NET Core application. The pipeline includes the Microsoft Visual Studio Test task.

You are reviewing the pipeline run summary in the Azure DevOps portal.

You plan to download the .coverage files to be used for code coverage analysis in Visual Studio.

You need to identify which section of the pipeline run summary you should use to access the .coverage files.

Which section of the pipeline run summary should you identify?

Select only one answer.

  • Related
  • Repository and version
  • Tests
  • Tests and Coverage
A

Related

From Related, you can download coverage extension files to be used as evidence of code coverage. Tests and Coverage can be used to configure tests and coverage or to see the general results of coverage but not to generate evidence by using the Visual Studio Test task. Tests are used to see test results but not to get evidence about code coverage. Repository and version provides information about the repository used for the pipeline.

  • https://learn.microsoft.com/azure/devops/pipelines/test/review-code-coverage-results?view=azure-devops#artifacts
  • https://learn.microsoft.com/training/modules/run-quality-tests-build-pipeline/6-perform-code-coverage
11
Q

You have a project in Azure DevOps named Project1 that contains a release pipeline named pipeline1. All users use Microsoft Teams.

A user named User1 does not have access to Project1. User1 must be able to use Teams to provide approval for initiating a run of pipeline1.

You need to identify the Azure DevOps mechanism that will allow User1 to provide the approval.

Which mechanism should you identify?

Select only one answer.

  • a post-deployment gate
  • a pre-deployment gate
  • manual intervention
  • manual validation
A

Pre-deployment gate

Approval integration can be implemented to allow users in Teams to approve a pipeline run by using a pre-deployment gate, without granting them direct access to Azure DevOps. A post-deployment gate is evaluated after a release stage completes. Manual intervention is used to prompt for values or parameters or to edit the release. Manual validation is similar to manual intervention, with the capability to notify users and a timeout option.

  • https://learn.microsoft.com/azure/devops/pipelines/integrations/microsoft-teams?view=azure-devops#approve-deployments-from-your-channel
  • https://learn.microsoft.com/training/modules/create-release-pipeline-devops/
  • https://learn.microsoft.com/azure/devops/pipelines/release/approvals/gates?view=azure-devops
12
Q

You have a project in Azure DevOps named Project1 that contains a continuous integration pipeline named Pipeline1.

You plan to use Windows-based self-hosted agents for UI tests in Pipeline1.

You need to identify the option that you must configure for the agents.

Which option should you identify?

Select only one answer.

  • Enable Autologon.
  • Run a screen resolution task.
  • Run a unit test.
  • Run tests in parallel.
A

Enable autologon

When self-hosted agents are used, autologon must be enabled to allow UI tests to run. A screen resolution task allows additional configurations to be performed, but an autologon configuration is needed first to allow the test to run. To reduce the duration of the test activities, running tests in parallel can be useful, but this strategy does not address this scenario. A unit test is the first step to adding testing to the development process.

  • https://learn.microsoft.com/azure/devops/pipelines/test/ui-testing-considerations?view=azure-devops&tabs=mstest#provisioning-agents-in-azure-vms-for-ui-testing
  • https://learn.microsoft.com/training/modules/run-quality-tests-build-pipeline/
13
Q

You have a project in Azure DevOps that uses packages from NuGet and Maven public registries.

You need to verify that project-level package feeds use original packages rather than copies.

Which Azure Artifacts feature should you implement?

Select only one answer.

  • public feeds
  • security policies
  • upstream sources
  • WinDbg
A

Upstream sources.

One of the advantages of upstream sources is control over which package is downloaded, allowing you to verify that project-level package feeds use the original packages. Public feeds are used to publish and control packages, but they do not support upstream sources. A security policy is used inside a project-scoped feed to control how the feed is accessed, but it does not provide access to, or control over, public registries. WinDbg can be used with symbol servers to debug Azure Artifacts, but it does not provide control over upstream sources.

  • https://learn.microsoft.com/azure/devops/artifacts/concepts/upstream-sources?view=azure-devops#advantages
  • https://learn.microsoft.com/azure/devops/artifacts/concepts/upstream-behavior?view=azure-devops&tabs=nuget%2Cget
  • https://learn.microsoft.com/training/modules/explore-package-dependencies/
14
Q

You plan to create a project in Azure DevOps.

You need to identify which Azure DevOps feature enables the sharing of arbitrary values across all the definitions in the project.

Which Azure DevOps feature should you identify?

Select only one answer.

  • predefined variables
  • release pipeline variables
  • stage variables
  • variable groups
A

Variable groups

Variable groups provide the ability to share arbitrary values across all the definitions in the same project. The values of predefined variables are assigned automatically, while the stage and pipeline variables have a smaller scope than the entire project.

  • https://learn.microsoft.com/training/modules/manage-modularize-tasks-templates/4-explore-variables-release-pipelines
  • https://learn.microsoft.com/azure/devops/pipelines/artifacts/nuget?view=azure-devops&tabs=yaml
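
For illustration, a minimal sketch of consuming a variable group from a YAML pipeline; the group name and variable names are assumptions:

variables:
- group: shared-settings          # variable group created under Pipelines > Library
- name: buildConfiguration        # pipeline-scoped variable shown for comparison
  value: 'Release'

steps:
- script: echo "$(ApiBaseUrl) $(buildConfiguration)"   # ApiBaseUrl is defined in the variable group
  displayName: Show shared values
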
15
Q

You have a project in Azure DevOps that contains build and release pipelines.

You need to change the number of parallel jobs that will be available to the agent pool allocated to the project.

At which level should you add the parallel jobs?

Select only one answer.

  • build pipeline
  • organization
  • project
  • release pipeline
A

Organization level.

Parallel jobs are added at the organization level, not the project, build pipeline, or release pipeline levels.

  • https://learn.microsoft.com/azure/devops/pipelines/licensing/concurrent-jobs?view=azure-devops&tabs=ms-hosted
  • https://learn.microsoft.com/training/modules/describe-pipelines-concurrency/2-understand-parallel-jobs
16
Q

You have an Azure DevOps organization.

You plan to build two configurations, one for x86 Windows computers and the other for x64 Windows computers.

You need to identify which Azure DevOps component will allow you to build the configurations with the minimum number of duplicate elements.

What should you include in the solution?

Select only one answer.

  • two jobs in the same pipeline
  • two pipelines in the same project
  • two projects in the same organization
  • two stages in the same pipeline
A

two jobs in the same pipeline

Including two jobs in the same pipeline allows you to perform two separate builds with the minimum amount of duplication. Two pipelines in the same project or two projects in the same organization could also produce two separate builds, but this would duplicate configuration unnecessarily. Builds take place in a build pipeline, rather than in a release pipeline, so including two stages does not meet the requirement.

  • https://learn.microsoft.com/training/modules/explore-azure-pipelines/4-understand-key-terms
  • https://learn.microsoft.com/azure/devops/pipelines/get-started/key-pipelines-concepts?view=azure-devops
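
For illustration, a minimal sketch of the two-job approach, expressed as a matrix so the job definition is not duplicated; the solution file and build command are assumptions:

jobs:
- job: Build
  pool:
    vmImage: 'windows-latest'
  strategy:
    matrix:
      x86:
        buildPlatform: 'x86'
      x64:
        buildPlatform: 'x64'
  steps:
  - script: msbuild MySolution.sln /p:Platform=$(buildPlatform)   # one build per platform
    displayName: Build $(buildPlatform)
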
17
Q

What’s the definition of a Trigger?

A

A trigger is set up to tell the pipeline when to run. You can configure a pipeline to run upon a push to a repository, at scheduled times, or upon completing another build. All these actions are known as triggers.

18
Q

What’s the definition of a task?

A

A task is the building block of a pipeline. For example, a build pipeline might consist of build and test tasks. A release pipeline consists of deployment tasks. Each task runs a specific job in the pipeline.

19
Q

What’s the definition of a stage?

A

Stages are the primary divisions in a pipeline: “build the app,” “run integration tests,” and “deploy to user acceptance testing” are good examples of stages.

20
Q

What’s the definition of a release?

A

When you use the visual designer, you can create a release or a build pipeline. A release is a term used to describe one execution of a release pipeline. It’s made up of deployments to multiple stages.

21
Q

What’s the definition of a pipeline?

A

A pipeline defines the continuous integration and deployment process for your app. It’s made up of steps called tasks.

It can be thought of as a script that describes how your test, build, and deployment steps are run.

22
Q

What’s the definition of a job?

A

A build contains one or more jobs. Most jobs run on an agent. A job represents an execution boundary of a set of steps. All the steps run together on the same agent.

For example, you might build two configurations - x86 and x64. In this case, you have one build and two jobs.

23
Q

What’s the definition of a deployment target?

A

A deployment target is a virtual machine, container, web app, or any service used to host the developed application. A pipeline might deploy the app to one or more deployment targets after the build is completed and tests are run.

24
Q

What’s the definition of continuous integration?

A

Continuous integration (CI) is the practice used by development teams to simplify the testing and building of code. CI helps to catch bugs or problems early in the development cycle, making them more accessible and faster to fix. Automated tests and builds are run as part of the CI process. The process can run on a schedule, whenever code is pushed, or both. Items known as artifacts are produced from CI systems. The continuous delivery release pipelines use them to drive automatic deployments.

25
Q

What’s the definition of continuous delivery?

A

Continuous delivery (CD) (also known as Continuous Deployment) is a process by which code is built, tested, and deployed to one or more test and production stages. Deploying and testing in multiple stages helps drive quality. Continuous integration systems produce deployable artifacts, which include infrastructure and apps. Automated release pipelines consume these artifacts to release new versions and fix existing systems. Monitoring and alerting systems constantly run to drive visibility into the entire CD process. This process ensures that errors are caught often and early.

26
Q

What’s the definition of a build?

A

A build represents one execution of a pipeline. It collects the logs associated with running the steps and the test results.

27
Q

What’s the definition of an artifact?

A

An artifact is a collection of files or packages published by a build. Artifacts are made available to subsequent tasks, such as distribution or deployment.

28
Q

You plan a versioning strategy for a NuGet package.

You need to implement a unique prerelease label based on the date and time of the package.

Which semantic versioning should you use?

Select only one answer.

  • custom scheme
  • a generated script
  • $(Major).$(Minor).$(Patch).$(date:yyyyMMdd)
  • $(Major).$(Minor).$(rev:.r)
A

A custom scheme

In a case where a unique label is required, a custom scheme must be implemented by using date and time as unique values. $(Major).$(Minor).$(Patch).$(date:yyyyMMdd) uses variables for major, minor, patch, and date. It does not generate unique values. A script can be used to generate the version in the build pipeline. $(Major).$(Minor).$(rev:.r) is a format of semantic versioning that uses variables. It does not generate unique values based on date and time.

  • https://learn.microsoft.com/azure/devops/pipelines/artifacts/nuget?view=azure-devops&tabs=yaml#package-versioning
  • https://learn.microsoft.com/training/modules/integrate-azure-pipelines/
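
For illustration, one way to implement such a custom scheme in YAML is to stamp the run name with the date and time and then pack by build number; the variable values and project pattern are assumptions:

name: $(Major).$(Minor).$(Patch)-ci-$(Date:yyyyMMdd)$(Hours)$(Minutes)   # unique prerelease label per run

variables:
  Major: 1
  Minor: 0
  Patch: 0

steps:
- task: NuGetCommand@2
  displayName: Pack with run-number version
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'
    versioningScheme: byBuildNumber    # takes the package version from the run name above
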
29
Q

You plan to create a YAML pipeline in Azure Pipelines.

You need to represent a collection of resources targeted for deployment that are subject to approval checks.

What should you use?

Select only one answer.

  • dependencies
  • environment
  • gates
  • service connections
A

Environment

Environment represents a collection of resources targeted for deployment. Gates support the automatic collection and evaluation of external health signals prior to completing a release stage. Dependencies specify a requirement that must be met to run the next job or stage. Service connections enable a connection to a remote service that is required to execute tasks in a job.

  • https://learn.microsoft.com/training/modules/describe-pipelines-concurrency/6-describe-azure-pipelines-yaml
  • https://learn.microsoft.com/azure/devops/pipelines/get-started/pipelines-get-started?view=azure-devops#feature-availability
  • https://learn.microsoft.com/azure/devops/pipelines/process/environments?view=azure-devops
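
For illustration, a minimal sketch of a YAML deployment job that targets an environment; approvals and checks configured on that environment are evaluated before the job runs (the environment name is an assumption):

stages:
- stage: Deploy
  jobs:
  - deployment: DeployWeb
    pool:
      vmImage: 'ubuntu-latest'
    environment: production            # checks and approvals on this environment gate the deployment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploying to $(Environment.Name)"
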
30
Q

What’s the definition of an environment?

A

An environment is a collection of resources that you can target with deployments from a pipeline. Typical examples of environment names are Dev, Test, QA, Staging, and Production. An Azure DevOps environment represents a logical target where your pipeline deploys software.

Azure DevOps environments aren’t available in classic pipelines. For classic pipelines, deployment groups offer similar functionality.

31
Q

You plan to create an Azure Pipelines release pipeline that will be used for blue-green deployments of a .NET Core web app named App1.

You need to identify which service to use to implement a staging environment and perform a blue-green deployment of App1 to the environment. The solution must minimize configuration and deployment effort.

Which service should you identify?

Select only one answer.

  • Azure App Configuration
  • Azure App Service
  • Azure Application Gateway
  • Azure Automation
A

Azure App Service

App Service includes provisions for implementing staging environments (deployment slots) and deploying apps, including .NET Core apps. Azure Automation could potentially be configured to implement a staging environment and deploy apps, but this requires considerably more effort. App Configuration includes provisions for storing key/value pairs that represent different types of environments, but it cannot host or deploy apps. Similarly, Application Gateway is a load balancer that can direct traffic to a staging environment, but it cannot host or deploy the apps themselves.

  • https://learn.microsoft.com/training/modules/implement-blue-green-deployment-feature-toggles/4-exercise-set-up-blue-green-deployment
  • https://learn.microsoft.com/azure/app-service/deploy-staging-slots
32
Q

How can you implement a staging environment to warm up and test a new version before replacing your production systems with it?

A

With Azure App Service, you can use a separate deployment slot instead of the default production slot when you’re running in the Standard, Premium, or Isolated App Service plan tier. Deployment slots are live apps with their own host names. App content and configuration elements can be swapped between two deployment slots, including the production slot.

Deploying your application to a nonproduction slot has the following benefits:

  • You can validate app changes in a staging deployment slot before swapping it with the production slot.
  • Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap operations. You can automate this entire workflow by configuring auto swap when pre-swap validation isn’t needed.
  • After a swap, the slot that previously contained the staged app now has the previous production app. If the changes swapped into the production slot aren’t as you expect, you can perform the same swap immediately to get your “last known good site” back.
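
For illustration, a minimal sketch of this pattern in an Azure Pipelines YAML job: deploy the package to a staging slot, then swap the slot with production. The service connection, app, resource group, and package path are assumptions:

steps:
- task: AzureWebApp@1
  displayName: Deploy to staging slot
  inputs:
    azureSubscription: 'MyAzureServiceConnection'
    appName: 'app1'
    deployToSlotOrASE: true
    resourceGroupName: 'rg-app1'
    slotName: 'staging'                # deploy and warm up the new version here
    package: '$(Pipeline.Workspace)/drop/app1.zip'
- task: AzureAppServiceManage@0
  displayName: Swap staging into production
  inputs:
    azureSubscription: 'MyAzureServiceConnection'
    Action: 'Swap Slots'
    WebAppName: 'app1'
    ResourceGroupName: 'rg-app1'
    SourceSlot: 'staging'              # swapped with the production slot
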

33
Q

You have a web app that runs on Azure virtual machines in multiple Azure regions. The virtual machines are accessed by using the public IPv6 addresses assigned to their network adapters. The IPv6 addresses are NOT associated with DNS names.

You plan to use Azure Traffic Manager to load balance requests across all instances of the web apps.

You need to identify which Traffic Manager traffic distribution method supports targeting IPv6 addresses as its endpoints.

Which Traffic Manager traffic distribution method should you identify?

Select only one answer.

  • MultiValue
  • Performance
  • Priority
  • Weighted
A

MultiValue

MultiValue is the only Traffic Manager traffic distribution method that provides the ability to specify the IPv4 and IPv6 addresses of its endpoints. All others, including Performance, Priority, and Weighted, require that the endpoints be designated as DNS names only.

  • https://learn.microsoft.com/training/modules/implement-canary-releases-dark-launching/3-examine-traffic-manager
  • https://learn.microsoft.com/azure/traffic-manager/traffic-manager-overview
34
Q

You are planning a release engineering strategy for your company.

You need to recommend a deployment approach that will expedite identifying potential issues associated with a new release by making the release available to all users at once.

Which deployment approach should you use?

Select only one answer.

  • blue-green
  • canary release
  • dark launching
  • ring
A

Blue-green

Blue-green deployments switch all users at once from the blue environment to a green environment that hosts the new release, which helps quickly surface any issues the release introduces. A canary release is a deployment strategy used to test new features on a specific set of users. Dark launching, like canary, presents features to a specific set of users, but it assesses user responses to new features in the frontend rather than testing the performance of the backend. Ring deployment gradually exposes releases by using deployment rings.

  • https://learn.microsoft.com/azure/architecture/framework/devops/release-engineering-cd#release-process
  • https://learn.microsoft.com/training/modules/implement-blue-green-deployment-feature-toggles/2-what-blue-green-deployment
35
Q

What is the blue-green release strategy?

A

Blue-green deployment is a technique that reduces risk and downtime by running two identical environments. These environments are called blue and green.

Only one of the environments is live, with the live environment serving all production traffic.

As you prepare a new version of your software, the deployment and final testing stage occur in an environment that isn’t live: in this example, green. Once you’ve deployed and thoroughly tested the software in green, switch the router or load balancer so all incoming requests go to green instead of blue.

Green is now live, and blue is idle.

This technique can eliminate downtime because of app deployment. Besides, blue-green deployment reduces risk: if something unexpected happens with your new version on the green, you can immediately roll back to the last version by switching back to blue.

When it involves database schema changes, this process isn’t straightforward. You probably can’t swap your application. In that case, your application and architecture should be built to handle both the old and the new database schema.

36
Q

You plan to implement the automated validation of Azure Resource Manager (ARM) templates for your company.

You need to identify two sections that must be present in every ARM template.

Which two sections should you identify? Each correct answer presents part of the solution.

Select all answers that apply.

  • apiProfile
  • contentVersion
  • functions
  • parameters
  • schema
A

schema and contentVersion

The schema and contentVersion sections are mandatory in ARM templates. functions, apiProfile, and parameters are optional in ARM templates.

  • https://learn.microsoft.com/training/modules/create-azure-resource-manager-template-vs-code/2-explore-template-structure?tabs=azure-cli
37
Q

What are the component parts of an ARM template?

A

An ARM template is a JSON document made up of the following sections: $schema, contentVersion, apiProfile, parameters, variables, functions, resources, and outputs. The resources section declares the Azure resources to deploy or update, parameters and variables supply input values and reusable expressions, functions define custom template expressions, and outputs return values from the deployment.
38
Q

You have an Azure DevOps pipeline.

You plan to create a Bicep file to implement PowerShell Desired State Configuration (DSC) in the pipeline.

You need to ensure that the file can be reused in other Bicep files.

Which Bicep construct should you use?

Select only one answer.

  • child resource
  • extension resource
  • module
  • user-defined data type
A

module

A module allows the Bicep files to be reused. Scope is the target of a Bicep file. A child resource exists only in the context of another resource. Extension resource is a resource that modifies another resource, such as a role assignment. User-defined data types provide the ability to define custom data types.

  • https://learn.microsoft.com/training/modules/implement-bicep/5-understand-bicep-file-structure-syntax
  • https://learn.microsoft.com/azure/azure-resource-manager/bicep/modules
  • https://learn.microsoft.com/azure/azure-resource-manager/bicep/file
39
Q

What is Bicep used for in Azure DevOps?

A

Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.

Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your infrastructure-as-code solutions in Azure.

40
Q

You have an Azure subscription that contains 1,000 virtual machines.

You plan a configuration management solution for the virtual machines.

You need to recommend an agentless solution that supports declarative configuration management.

What should you include in the recommendation?

Select only one answer.

  • Ansible
  • Azure Automation
  • Chef
  • Puppet
A

Ansible

Ansible provides the ability to identify Azure resources and send configurations without an agent installed. Chef and Puppet require that an agent be installed. Azure Automation requires an agent or extension.

  • https://learn.microsoft.com/azure/developer/ansible/overview
  • https://learn.microsoft.com/training/modules/explore-infrastructure-code-configuration-management/
41
Q

You have an Azure subscription that contains 1,000 virtual machines.

You have an on-premises site that contains 100 physical servers.

You need to recommend an Azure service to centralize configuration management across both environments. The solution must minimize administrative effort and support a declarative approach.

Which Azure service should you recommend?

Select only one answer.

  • Azure Automation State Configuration
  • Azure Resource Manager (ARM)
  • Microsoft Purview
  • Microsoft Sentinel
A

Azure Automation State Configuration

Azure Automation State Configuration can work centrally for Azure and on-premises virtual machines. Guest configuration in Azure Policy requires Azure Arc-enabled resources to be used outside of Azure. ARM requires Azure Stack Hub to be used outside of Azure. Microsoft Sentinel is a SIEM and SOAR solution. Microsoft Purview is a data governance, protection, and management solution.

  • https://learn.microsoft.com/azure/automation/automation-dsc-overview
  • https://learn.microsoft.com/training/modules/implement-desired-state-configuration-dsc/4-explore-azure-automation?ns-enrollment-type=learningpath&ns-enrollment-id=learn.wwl.az-400-manage-infrastructure-as-code-using-azure
42
Q

What is Microsoft Purview?

A

Microsoft Purview is a data governance, protection, and management solution.

43
Q

What is Azure Automation State Configuration?

A

Azure Automation State Configuration is an Azure configuration management service that allows you to write, manage, and compile PowerShell Desired State Configuration (DSC) configurations for nodes in any cloud or on-premises datacenter. The service also imports DSC Resources, and assigns configurations to target nodes, all in the cloud. You can access Azure Automation State Configuration in the Azure portal by selecting State configuration (DSC) under Configuration Management.

You can use Azure Automation State Configuration to manage a variety of machines:

  • Azure virtual machines
  • Azure virtual machines (classic)
  • Physical/virtual Windows machines on-premises, or in a cloud other than Azure (including AWS EC2 instances)
  • Physical/virtual Linux machines on-premises, in Azure, or in a cloud other than Azure

If you aren’t ready to manage machine configuration from the cloud, you can use Azure Automation State Configuration as a report-only endpoint. This feature allows you to set (push) configurations through DSC and view reporting details in Azure Automation

44
Q

You have an Azure DevOps organization that uses self-hosted agents to execute long-running jobs.

You plan to replace self-hosted agents with Microsoft-hosted agents.

You need to identify the maximum duration of a job run on a Microsoft-hosted agent.

What should you identify?

Select only one answer.

  • 2 hours
  • 6 hours
  • 12 hours
  • 24 hours
A

6 hours

The maximum duration of a job that runs on a Microsoft-hosted agent is six hours.

  • https://learn.microsoft.com/training/modules/integrate-azure-pipelines/
  • https://learn.microsoft.com/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#capabilities-and-limitations
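
For illustration, a job can request a longer limit with timeoutInMinutes, but on Microsoft-hosted agents the six-hour platform cap still applies; the script name is an assumption:

jobs:
- job: LongRunning
  timeoutInMinutes: 360        # requested limit; Microsoft-hosted agents cannot exceed the platform cap
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: ./run-long-task.sh
    displayName: Run long task
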
45
Q

You have an Azure DevOps organization that hosts a project named Project1. Project1 includes multiple tests.

You need to identify which tests in Project1 provide different outcomes, such as pass or fail, even when there are no changes to the source code.

Which Azure DevOps feature should you use?

Select only one answer.

  • Flaky test detection
  • Pipeline pass rate
  • Test pass rate
  • Test Results Trend
A

Flaky test detection

Flaky test detection is a feature configured at the project level and supports system and custom detection. Test pass rate and Pipeline pass rate are reports on the pipeline’s Analytics tab. Test Results Trend is a widget used to track test results.

  • https://learn.microsoft.com/training/modules/run-quality-tests-build-pipeline/
  • https://learn.microsoft.com/azure/devops/pipelines/test/flaky-test-management?view=azure-devops
46
Q

You have an Azure DevOps organization that contains a project named Project1. Project1 contains multiple automated and manual tests.

You need to reduce the storage space used by the test results.

What should you configure?

Select only one answer.

  • parallel jobs
  • retention policies
  • task groups
  • templates
A

Retention policies

Retention policies are used for test result retention and can be customized to reduce the storage space used. Parallel jobs are used to run more than one job at a time. A task group is a feature to encapsulate a sequence of tasks. Templates let you define reusable content, logic, and parameters in a pipeline.

  • https://learn.microsoft.com/training/modules/explore-azure-pipelines/
  • https://learn.microsoft.com/azure/devops/pipelines/policies/retention?view=azure-devops&tabs=yaml
47
Q

You clone a Git repository to a Linux server.

You plan to implement a Git hook that will be triggered automatically when the commit command is invoked in the repository.

You need to use one of the predefined Git hook files.

What should you do to ensure that the predefined file will be executed?

Select only one answer.

  • Add an extension to the script file.
  • Modify the location of the file.
  • Modify the permissions of the file.
  • Remove the extension from the script file.
A

Remove the extension from the script file.

To ensure that the predefined Git hook script can be executed, you must remove its existing extension (.sample), rather than adding an extension. There is no need to modify the existing permissions of the predefined script file, and the file must reside in its default location.

  • https://learn.microsoft.com/training/modules/explore-git-hooks/3-implement
48
Q

You plan a Git branching strategy for your company.

You need to ensure that the strategy will dictate that pull requests must deploy to production for testing before they can merge to the main branch.

Which branching strategy should you use?

Select only one answer.

  • centralized
  • forking
  • GitHub Flow
  • trunk-based
A

Github flow

GitHub Flow is a popular trunk-based development release flow, which stipulates that pull requests must deploy to production for testing before they can merge to the main branch. This process means that all pull requests wait in the deployment queue for the merge. This provision is not part of the centralized, forking, or trunk-based strategy.

  • https://learn.microsoft.com/training/modules/manage-git-branches-workflows/2-explore-branch-workflow-types
  • https://learn.microsoft.com/azure/devops/repos/git/git-branching-guidance?view=azure-devops
  • https://learn.microsoft.com/devops/develop/how-microsoft-develops-devops
49
Q

You plan to use a trunk-based development workflow as the Git branching strategy for your company.

You are documenting the steps of the workflow.

You need to identify the step that immediately follows creating a feature branch.

Which step should you identify?

Select only one answer.

  • adding commits
  • code review
  • deployment
  • opening a pull request
A

adding commits

Adding commits follows creating a feature branch in a trunk-based development workflow. Opening a pull request follows adding commits to a trunk-based development workflow. Code review follows opening a pull request in a trunk-based development workflow. Deployment follows code review in a trunk-based development workflow.

  • https://learn.microsoft.com/training/modules/manage-git-branches-workflows/3-explore-feature-branch-workflow
50
Q

Describe a trunk-based development workflow

A

The trunk-based development Workflow assumes a central repository, and the main represents the official project history.

Instead of committing directly to their local main branch, developers create a new branch whenever they start working on a new feature.

Feature branches should have descriptive names, like new-banner-images or bug-91. The idea is to give each branch a clear, highly focused purpose.

Git makes no technical distinction between the main and feature branches, so developers can edit, stage, and commit changes to a feature branch.

  • Create a branch
  • Add commits
  • Open a PR
  • Discuss and review your code
  • Deploy
  • Merge
51
Q

You plan to use a forking workflow as the Git branching strategy for your company.

You need to identify the minimum number of repositories that each developer should use.

What is the minimum number of repositories you should identify?

Select only one answer.

  • 1
  • 2
  • 3
  • 4
A

2

When using a forking workflow, each developer should have two repositories: a private local repository and a public server-side repository. While it is technically possible to use only a server-side repository, this violates the principle of the forking workflow.

  • https://learn.microsoft.com/training/modules/manage-git-branches-workflows/2-explore-branch-workflow-types
  • https://learn.microsoft.com/azure/devops/repos/git/forks?view=azure-devops&tabs=visual-studio
52
Q

You have an Azure DevOps Agile-based project that includes several team members who use build and release pipelines based on continuous integration.

You notice that some pull requests are not associated with requirements captured by user stories.

You need to ensure that pull requests are blocked unless such an association exists.

Which branch policy you should implement?

Select only one answer.

  • Automatically include code reviewers.
  • Check for comment resolution.
  • Check for linked work items.
  • Require a minimum number of reviewers.
A

Check for linked work items

In Agile projects, requirements are implemented as User Story work item types. You can require associations between pull requests and work items. Linking work items provides more context for changes and ensures that updates go through your work item tracking process. Requiring a minimum number of reviewers is not used to link a work item with a pull request. Checking for comment resolution is useful to verify that reviewers’ comments are resolved, but not to link a work item with a pull request. A policy can be defined to include specific reviewers, but it cannot link a work item to the pull request.

  • https://learn.microsoft.com/training/modules/use-branch-merge-git/
  • https://learn.microsoft.com/training/modules/collaborate-pull-requests-azure-repos/
  • https://learn.microsoft.com/azure/devops/repos/git/branch-policies?view=azure-devops&tabs=browser#check-for-linked-work-items
53
Q

You have a project in Azure DevOps that has a Git repository.

You plan to define a pull request strategy for the repository.

You need to identify the Git branch policy that will condense the source branch commits into a single commit on the target branch.

Which branch policy should you implement?

Select only one answer.

  • Automatically include code reviewers.
  • Check for comment resolution.
  • Limit allowed merge types.
  • Require a minimum number of reviewers.
A

Limit allowed merge types

You can limit the allowed merge types to only squash merge, which can be used to condense the history of changes in your default branch. Requiring a minimum number of reviewers is not used to reduce the history of merge actions. Checking for comment resolution is useful to verify that reviewers’ comments are resolved but not to condense the history of changes. A policy can be defined to include specific reviewers, but it is unable to clean the branch history.

  • https://learn.microsoft.com/azure/devops/repos/git/branch-policies?view=azure-devops&tabs=browser#limit-merge-types
  • https://learn.microsoft.com/azure/devops/repos/git/merging-with-squash?view=azure-devops#how-is-a-squash-merge-helpful
  • https://learn.microsoft.com/training/modules/manage-git-branches-workflows/3-explore-feature-branch-workflow
54
Q

Your team uses a Git repository to collaborate on a project.

You discover that several large files that are larger than 50 MB were accidentally pushed into the repository.

You need to delete all files that are larger than 50 MB from the repository. The solution must ensure that commits and tags referencing the deleted files are updated automatically.

What should you use?

Select only one answer.

  • BFG
  • JAR
  • KILL
  • ZIP
A

BFG

The BFG utility is the only one that can quickly delete a subset of repository files based on criteria such as size, as well as automatically update all commits and tags. The other answer choices can be used for compressing/archiving files or terminating processes.

  • https://learn.microsoft.com/training/modules/manage-git-repositories/3-purge-repository-data
55
Q

You have an Azure Repos-based repository named Repo1 and a GitHub-based repository named Repo2.

You plan to create a YAML pipeline that will use artifacts stored in Repo1 and Repo2.

You need to identify which step should be included in the pipeline.

Which step should you identify?

Select only one answer.

  • checkout
  • pool
  • script
  • trigger
A

Checkout

The checkout step is required to gain access to repositories from a YAML pipeline. The trigger, pool, and script steps are all optional in this scenario. They are not required to gain access to repositories from a YAML pipeline.

  • https://learn.microsoft.com/training/modules/integrate-azure-pipelines/6-use-multiple-repositories-your-pipeline
56
Q

You have a project in Azure DevOps named MyProject that contains a Git repository named MyRepo and a YAML pipeline. Repo1 contains a branch named features/tools.

You plan to access the features/tools branch from the YAML pipeline.

You need to identify the URIs that reference the features/tools branch from the YAML pipeline.

Which two URIs should you identify? Each correct answer presents a complete solution.

Select all answers that apply.

  • git://MyProject/MyRepo@features/tools
  • git://MyProject/MyRepo@refs/heads/features/tools
  • https://MyProject/MyRepo@features/tools
  • https://MyProject/MyRepo@refs/heads/features/tools
A

git://MyProject/MyRepo@features/tools or git://MyProject/MyRepo@refs/heads/features/tools can be used to access the features/tools branch of a Git repository from an Azure DevOps YAML pipeline. To target a Git repository, the git: prefix must be used.

  • https://learn.microsoft.com/training/modules/integrate-azure-pipelines/6-use-multiple-repositories-your-pipeline
  • https://learn.microsoft.com/azure/devops/repos/git/forks?view=azure-devops&tabs=visual-studio
57
Q

What is the YAML syntax to check out multiple Git repositories in a pipeline?

A
resources:
  repositories:
  - repository: MyGitHubRepo              # The name used to reference this repository in the checkout step.
    type: github
    endpoint: MyGitHubServiceConnection
    name: MyGitHubOrgOrUser/MyGitHubRepo
  - repository: MyBitBucketRepo
    type: bitbucket
    endpoint: MyBitBucketServiceConnection
    name: MyBitBucketOrgOrUser/MyBitBucketRepo
  - repository: MyAzureReposGitRepository
    type: git
    name: MyProject/MyAzureReposGitRepo

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self
- checkout: MyGitHubRepo
- checkout: MyBitBucketRepo
- checkout: MyAzureReposGitRepository
- script: dir $(Build.SourcesDirectory)
58
Q

You plan to configure GitHub Actions to access GitHub secrets.

You have the following YAML.

01  steps:
02    - shell: pwsh
03    
04        DB_PASSWORD: ${{ secrets.DBPassword }}

You need to complete the YAML to reference the secret. The solution must minimize the possibility of exposing secrets.

Which element should you use at line 03?

Select only one answer.

  • args:
  • $env:
  • env:
  • run:
A

env:

The most secure way to pass secrets to run commands is to reference them as environment variables rather than as arguments, which requires the env: element. The args: element passes values as command-line arguments, which risks exposing the secret. The $env: notation is used to reference an environment variable, but line 03 defines one rather than references it. The run: element defines which command to run, so it follows the env: block.

  • https://learn.microsoft.com/training/modules/learn-continuous-integration-github-actions/9-use-secrets-workflow
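
For illustration, the completed step would look like the sketch below, with the secret surfaced to the script only as an environment variable; the deploy.ps1 script is a hypothetical example:

steps:
  - shell: pwsh
    env:
      DB_PASSWORD: ${{ secrets.DBPassword }}   # injected as an environment variable, not as an argument
    run: ./deploy.ps1                          # the script reads $env:DB_PASSWORD internally
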
59
Q

You have a GitHub organization that contains a single repository.

You plan to implement a GitHub workflow that will consist of multiple actions, each of which will include multiple steps. Some of the steps will require the use of a secret.

You need to identify two locations where the secret can be created.

Which two locations should you identify? Each correct answer presents a complete solution.

Select all answers that apply.

  • action
  • organization
  • repository
  • step
  • workflow
A

Organization & repository

GitHub secrets can be created at the organization and repository levels. You can use secrets in a step of an action within a workflow, but you cannot create them at any of the other three levels.

  • https://learn.microsoft.com/training/modules/learn-continuous-integration-github-actions/8-create-encrypted-secrets
60
Q

You have an Azure DevOps organization connected to a Microsoft Entra tenant.

You need to configure a Microsoft Entra ID policy to control the maximum duration of personal access tokens (PAT) for the organization.

What should you configure?

Select only one answer.

  • full-scoped PATs
  • Global PATs
  • lifespan of a PAT
  • revoke leaked PATs
A

lifespan of a PAT

Lifespan can be used to define the maximum lifespan of a PAT and control its lifecycle. The full-scoped option forces the use of defined sets of scopes. Global grants access to all accessible organizations in Azure DevOps. Revoke leaked PATs automatically revokes any PATs checked in to a public GitHub repository.

  • https://learn.microsoft.com/azure/devops/organizations/accounts/manage-pats-with-policies-for-administrators?view=azure-devops#set-maximum-lifespan-for-new-pats
  • https://learn.microsoft.com/training/modules/migrate-to-devops/4-explore-authorization-access-strategy?ns-enrollment-type=learningpath&ns-enrollment-id=learn.wwl.az-400-get-started-devops-transformation-journey
61
Q

You have an Azure DevOps project that contains a YAML pipeline named Pipeline1. Pipeline1 deploys an artifact to an Azure subscription named Sub1. Sub1 contains an Azure key vault named Vault1.

You plan to configure Pipeline1 to retrieve a password stored in Vault1. You add an Azure Key Vault v2 task to Pipeline1.

You need to configure the Key Vault v2 task to reference the password to be retrieved from Vault1.

What should you include in the Key Vault v2 task?

Select only one answer.

  • connectedServiceName
  • runAsPreJob
  • secret
  • secretsFilter
A

SecretsFilter

secretsFilter provides a default value of *, which allows you to download all the secrets or a comma-separated list of secret names. runAsPreJob exposes secrets to all the tasks in a job, not just the tasks that follow. connectedServiceName selects the service connection for the Azure subscription that contains the Key Vault instance or creates a new connection. Secret is not an argument of a YAML pipeline. It is an option of the Azure pipeline CLI to create a variable and mark it as a secret.

  • https://learn.microsoft.com/azure/devops/pipelines/tasks/reference/azure-key-vault-v2?view=azure-pipelines&viewFallbackFrom=azure-devops#arguments
  • https://learn.microsoft.com/training/modules/manage-secrets-with-azure-key-vault/
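
For illustration, a minimal sketch of the Azure Key Vault v2 task pulling a single secret from Vault1 and using it in a later step; the service connection and secret names are assumptions:

steps:
- task: AzureKeyVault@2
  displayName: Retrieve password from Vault1
  inputs:
    azureSubscription: 'MyServiceConnection'   # connectedServiceName: service connection to Sub1
    KeyVaultName: 'Vault1'
    SecretsFilter: 'AppPassword'               # comma-separated list of secret names, or * for all
    RunAsPreJob: false
- script: echo "Secret retrieved"
  displayName: Use the secret
  env:
    APP_PASSWORD: $(AppPassword)               # the secret is exposed as a pipeline variable named after the secret
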
62
Q

You have an Azure Pipelines CI/CD pipeline named Pipeline1. Pipeline1 includes the OWASP Zed Attack Proxy (ZAP) VSTS extension.

You need to identify the functionality that OWASP ZAP provides in the pipeline.

What should you identify?

Select only one answer.

  • Open Source Software (OSS) vulnerability scans
  • passive penetration tests
  • regression tests
  • static code analysis
A

passive penetration tests

OWASP ZAP implements a passive penetration test, not static code analysis, OSS vulnerability scan, or regression test.

  • https://learn.microsoft.com/training/modules/owasp-and-dynamic-analyzers/3-explore-owasp-zap-penetration-test
63
Q

You have a private GitHub repository named Repo1.

You need to detect whether the source code in Repo1 contains shared access signatures (SAS) of Azure Storage accounts. The solution must minimize administrative effort.

What should you configure?

Select only one answer.

  • GitHub code scanning
  • GitHub secret scanning
  • Mend Bolt
  • SonarQube
A

GitHub secret scanning

GitHub secret scanning can be used to automate source code analysis. A secret can be a token or a private key used for authentication. If you check a secret into a repository, anyone who has read-access to the repository can use the secret to access the external service with your privileges. GitHub code scanning relates to vulnerabilities or code errors. SonarQube is a code analysis tool that supports specific programming languages. Mend Bolt is a source code-based tool used to search open source libraries for security/licensing issues. Secret scanning is enabled by default on public repositories and cannot be configured or turned off. Secret scanning must be enabled manually on private repositories.

  • https://learn.microsoft.com/training/modules/configure-use-secret-scanning-github-repository/
  • https://docs.github.com/en/code-security/secret-scanning/secret-scanning-patterns
64
Q

You have a GitHub project that uses Open Source Software (OSS).

You need to implement a solution that evaluates OSS packages for license compliance issues and vulnerabilities.

What should you integrate into the pipeline?

Select only one answer.

  • GitHub secret scanning
  • Mend Bolt
  • OWASP Zed Attack Proxy (ZAP)
  • SonarQube
A

Mend Bolt

Mend Bolt automatically detects vulnerable OSS components, outdated libraries, and license compliance issues in code. SonarQube is a source code analysis tool that supports specific programming languages. OWASP ZAP is designed to run penetration testing against applications. GitHub secret scanning is used to perform code scanning for secrets.

  • https://learn.microsoft.com/training/modules/implement-open-source-software-azure/
65
Q

You have a web app deployed to Azure App Service web app instances in multiple Azure regions.

You plan to use Application Insights to validate the availability of App1 by performing a sequence of steps on the target web apps.

You need to identify which type of Application Insights test you should use.

Which test type should you identify?

Select only one answer.

  • custom TrackAvailability
  • multi-step
  • standard
  • URL ping
A

custom TrackAvailability

TrackAvailability tests allow you to submit custom availability tests instead of multi-step web tests. With TrackAvailability and custom availability tests, you can run tests on any compute and use C# to author new tests. Multi-step web tests depend on Microsoft Visual Studio webtest files. Support for webtest files was discontinued in Visual Studio 2022. Multi-step web tests are deprecated and will be retired within Application Insights on August 31, 2024. It will be impossible to create new multi-step web tests after August 31, 2023. URL ping and standard tests do not support multi-step functionality.

  • https://learn.microsoft.com/training/modules/monitor-app-performance/
  • https://learn.microsoft.com/azure/azure-monitor/app/availability-overview
  • https://azure.microsoft.com/updates/retirement-notice-transition-to-custom-availability-tests-in-application-insights/
66
Q

You manage 100 on-premises servers that run Linux or Windows.

You need to collect logs from all the servers and make the logs available for analysis directly from the Azure portal. The solution must meet the following requirements:

  • Provide the ability to send data from Linux virtual machines to multiple Log Analytics workspaces.
  • Support the use of XPATH queries to filter Windows events for collection.
  • Minimize the number of agents installed on the servers.

Which agent should you install?

Select only one answer.

  • Azure Connected Machine agent
  • Azure Monitor Agent
  • Dependency agent
  • Telegraf agent
A

Azure monitor agent

The Azure Monitor Agent replaces the Log Analytics agent, the diagnostic extension, and the Telegraf agent. It can centrally configure the collection of different data from different sets of virtual machines, send data from Linux virtual machines to multiple Log Analytics workspaces, and use XPATH queries to filter Windows events for collection. The Telegraf agent supports only Linux operating systems. The Azure Connected Machine agent is used by Azure Arc, which is not required in this scenario. Using the Dependency agent would increase the number of agents installed on the target servers.

  • https://learn.microsoft.com/training/modules/intro-to-azure-monitor/3-how-azure-monitor-works
  • https://learn.microsoft.com/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent?tabs=portal
67
Q

You plan to implement an Azure Pipelines release pipeline that will deploy Azure resources to development and production environments.

You need to prevent deployment to the production environment if the Azure platform raises alerts about issues affecting the development environment.

Which integration should you add to the release pipeline?

Select only one answer.

  • artifact policy
  • Azure Monitor
  • exclusive lock
  • GitHub Actions
A

Azure Monitor

Azure Monitor can be used in a release pipeline to detect whether active alerts are triggered and block or allow the next step. An artifact policy is used to check and evaluate artifacts. Exclusive lock allows only a single run from the pipeline to proceed. GitHub Actions are used to trigger an Azure pipeline to run directly from a GitHub Actions workflow.

  • https://learn.microsoft.com/azure/devops/pipelines/tasks/reference/azure-monitor-v1?view=azure-pipelines&viewFallbackFrom=azure-devops
  • https://learn.microsoft.com/training/modules/intro-to-azure-monitor/3-how-azure-monitor-works
68
Q

You plan to deploy a web-based solution by using Azure App Service and Azure DevOps.

You need to track build and release information on an Application Insights dashboard.

What should you use?

Select only one answer.

  • Azure Monitor
  • continuous monitoring
  • GitHub Actions
  • release annotations
A

release annotations

Release annotations allow the integration of Azure DevOps and Application Insights, showing build and release-related information to detect performance impact. Azure Monitor can be used in release pipelines to detect whether active alerts are triggered and block or allow the next step. GitHub Actions are used to trigger an Azure pipeline to run directly from a GitHub Actions workflow. Continuous monitoring allows you to add preconditions and postconditions, but not to send information to Application Insights.

  • https://learn.microsoft.com/azure/azure-monitor/app/annotations
  • https://learn.microsoft.com/training/modules/intro-to-azure-monitor/3-how-azure-monitor-works
69
Q

You collect performance logs from an Azure virtual machine named VM1 by using Log Analytics.

You have a Kusto Query Language (KQL) query that displays the results of the log-based performance data in a tabular format.

You need to modify the query to display the results as a bar chart.

Which KQL operator should you use?

Select only one answer.

  • project
  • project-away
  • render
  • summarize
A

render

The render operator allows you to display the results of a query as a bar chart. The project operator allows you to specify the columns to include in a result set. The project-away operator allows you to specify which columns to exclude from a result set. The summarize operator aggregates the results.

  • https://learn.microsoft.com/azure/data-explorer/kusto/query/renderoperator?pivots=azuredataexplorer
  • https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/
70
Q

You implement the monitoring of a distributed application named App1 by using Application Insights.

You need to identify aggregated data, including rates of requests, failures, and exceptions. The solution must minimize the amount of time and effort required to retrieve the relevant information.

Which Application Insights feature should you use?

Select only one answer.

  • Application Map
  • metrics explorer
  • Profiler
  • usage analysis
A

Metrics explorer

Metrics explorer provides direct access to aggregated data, including rates of requests, failures, and exceptions. Application Map lists the components of an app, including key metrics and alerts, but it does not provide direct access to aggregated data. Profiler allows you to inspect execution profiles of sampled requests. Usage analysis allows you to analyze user segmentation and retention.

  • https://learn.microsoft.com/training/modules/implement-tools-track-usage-flow/6-explore-application-insights
  • https://learn.microsoft.com/azure/azure-monitor/app/pre-aggregated-metrics-log-metrics