AZ-204 Private Flashcards
You are planning on using the Azure container registry service. You want to ensure that your application or service can use it for headless authentication. You also want to allow role-based access to the registry.
You decide to use the Admin account associated with the container registry
Would this fulfil the requirement?
No.
Why not:
This is only used for single user access to the registry
Azure Container Registry - Admin account
Each container registry includes an admin user account, which is disabled by default. You can enable the admin user and manage its credentials in the Azure portal, or by using the Azure CLI or other Azure tools. The admin account has full permissions to the registry.
The admin account is currently required for some scenarios to deploy an image from a container registry to certain Azure services. For example, the admin account is needed when you deploy a container image in the portal from a registry directly to Azure Container Instances or Azure Web Apps for Containers.
Important
The admin account is designed for a single user to access the registry, mainly for testing purposes. We do not recommend sharing the admin account credentials among multiple users. All users authenticating with the admin account appear as a single user with push and pull access to the registry. Changing or disabling this account disables registry access for all users who use its credentials. Individual identity is recommended for users and service principals for headless scenarios.
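For the scenarios where the admin account is genuinely needed (such as the portal deployments above), it can be enabled and its credentials retrieved with the Azure CLI. This is a sketch; the registry name is a placeholder, and the commands require an Azure subscription:

```shell
# "myregistry" is a placeholder registry name (assumed, not from the source)
az acr update --name myregistry --admin-enabled true

# Retrieve the admin username and passwords for the registry
az acr credential show --name myregistry
```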
You are planning on using the Azure container registry service. You want to ensure that your application or service can use it for headless authentication. You also want to allow role-based access to the registry.
You decide to perform an individual login to the registry
Would this fulfil the requirement?
Yes.
Why:
This will allow you to assign role-based access control or even allow for headless authentication
Azure Container Registry/Individual Login/Azure AD
When working with your registry directly, such as pulling images to and pushing images from a development workstation to a registry you created, authenticate by using your individual Azure identity.
You are planning on using the Azure container registry service. You want to ensure that your application or service can use it for headless authentication. You also want to allow role-based access to the registry.
You decide to assign a service principal to the registry
Would this fulfil the requirement?
Yes.
Why:
If you assign a service principal to your registry, your application or service can use it for headless authentication.
Azure Container Registry/Service Principal/AD
If you assign a service principal to your registry, your application or service can use it for headless authentication. Service principals allow Azure role-based access control (Azure RBAC) to a registry, and you can assign multiple service principals to a registry. Multiple service principals allow you to define different access for different applications.
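As a sketch of this recommended approach, a service principal scoped to a registry can be created with the Azure CLI. The names below are placeholders; the built-in `acrpull` role grants pull-only access, while `acrpush` also allows pushing images:

```shell
# Placeholder names; requires an Azure subscription and an existing registry
ACR_ID=$(az acr show --name myregistry --query id --output tsv)

# Create a service principal with pull-only access to the registry
az ad sp create-for-rbac \
  --name myAcrServicePrincipal \
  --scopes "$ACR_ID" \
  --role acrpull
```

The command outputs an appId and password that the application can use for headless authentication (for example, as the username and password in `docker login`).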
az webapp cors add
- Add allowed origins.
Code:
az webapp cors add --allowed-origins [--ids] [--name] [--resource-group] [--slot] [--subscription]
Ex:
az webapp cors add -g {myRG} -n {myAppName} --allowed-origins https://myapps.com
az webapp commands
az webapp cors remove -g {myRG} -n {myAppName} --allowed-origins https://myapps.com
az webapp cors show --name MyWebApp --resource-group MyResourceGroup
Azure Database Migration Service
You can use Azure Database Migration Service to perform an online (minimal downtime) migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB’s API for MongoDB.
Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier.
For an optimal migration experience, Microsoft recommends creating an instance of Azure Database Migration Service in the same Azure region as the target database. Moving data across regions or geographies can slow down the migration process.
When you migrate databases to Azure by using Azure Database Migration Service, you can do an offline or an online migration. With an offline migration, application downtime starts when the migration starts. With an online migration, downtime is limited to the time to cut over at the end of migration. We suggest that you test an offline migration to determine whether the downtime is acceptable; if not, do an online migration.
The service uses the Data Migration Assistant to generate assessment reports that provide recommendations to guide you through the changes required prior to performing a migration.
Azure Migrate
Azure Migrate provides a centralized hub to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. It provides the following:
Unified migration platform: A single portal to start, run, and track your migration to Azure.
Range of tools: A range of tools for assessment and migration.
Data Migration Assistant
Data Migration Assistant helps pinpoint potential problems blocking migration. It identifies unsupported features, new features that can benefit you after migration, and the right path for database migration.
Azure Cosmos DB Data Migration Tool
The Azure Cosmos DB Data Migration tool is an open source tool designed for small migrations.
This tutorial provides instructions on using the Azure Cosmos DB Data Migration tool, which can import data from various sources into Azure Cosmos containers and tables. You can import from JSON files, CSV files, SQL, MongoDB, Azure Table storage, Amazon DynamoDB, and even Azure Cosmos DB SQL API collections. You migrate that data to collections and tables for use with Azure Cosmos DB. The Data Migration tool can also be used when migrating from a single partition collection to a multi-partition collection for the SQL API.
Integration Service Environment
Sometimes, your logic apps need access to secured resources, such as virtual machines (VMs) and other systems or services, that are inside or connected to an Azure virtual network. To set up this access, you can create an integration service environment (ISE).
If your logic apps need access to virtual networks that use private endpoints, you must create, deploy, and run those logic apps inside an ISE.
When you create an ISE, Azure injects or deploys that ISE into your Azure virtual network. You can then use this ISE as the location for the logic apps and integration accounts that need access.
Azure App Service Environment
The Azure App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale.
App Service environments (ASEs) are appropriate for application workloads that require:
Very high scale.
Isolation and secure network access.
High memory utilization.
Customers can create multiple ASEs within a single Azure region or across multiple Azure regions. This flexibility makes ASEs ideal for horizontally scaling stateless application tiers in support of high requests per second (RPS) workloads.
Azure AD B2B Integration
Azure Active Directory (Azure AD) business-to-business (B2B) collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization
VNet Service Endpoint
Virtual Network (VNet) service endpoints provide secure and direct connectivity to Azure services over an optimized route on the Azure backbone network. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Service endpoints enable private IP addresses in the VNet to reach the endpoint of an Azure service without needing a public IP address on the VNet.
You are developing an ASP.Net Core application. This application would need to be deployed to the Azure Web App service from a GitHub repository. The web application contains static content that is generated by a script.
You are planning on using the Azure Web App continuous deployment feature. The script which is used to generate static content needs to run first before the web site can start serving traffic.
Which of the following options can be used to fulfil this requirement?
Customize the deployment by creating a .deployment file at the root of the repository. Ensure the deployment file calls the script which generates the static content.
.deployment file
Deployment configuration files let you override the default heuristics of deployment by allowing you to specify a project or folder to be deployed. It has to be at the root of the repository and it’s in .ini format.
Code:
[config]
command = deploy.cmd
Powershell:
command = powershell -NoProfile -NoLogo -ExecutionPolicy Unrestricted -Command "& "$pwd\deploy.ps1" 2>&1 | echo"
Deploying a specific ASP.NET or ASP.NET Core project file
You can specify the path to the project file, relative to the root of your repo. Note that this is not a path to the solution file (.sln), but to the project file (.csproj/.vbproj). The reason for this is that Kudu only builds the minimal dependency tree for this project, and avoids building unrelated projects in the solution that are not needed by the web project.
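For example, a minimal .deployment file that points Kudu at a specific project could look like the following; the project path is a hypothetical example, not from the source:

```ini
[config]
project = src/MyWebApp/MyWebApp.csproj
```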
Azure Function authLevels
Determines what keys, if any, need to be present on the request in order to invoke the function. The authorization level can be one of the following values:
anonymous: No API key is required.
function: A function-specific API key is required. This is the default value if none is provided.
admin: The master key is required.
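As an illustration, a sketch of a function.json HTTP trigger binding that sets the authorization level (the binding names are placeholders):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```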
Azure Functions Blob storage binding
Integrating with Blob storage allows you to build functions that react to changes in blob data as well as read and write values.
Azure Functions HTTP triggers
Azure Functions may be invoked via HTTP requests to build serverless APIs and respond to webhooks.
Run a function from an HTTP request
Return an HTTP response from a function
Azure Functions Queue storage trigger
Azure Functions can run as new Azure Queue storage messages are created and can write queue messages within a function.
Run a function as queue storage data changes
Write queue storage messages
Azure Functions Timer Trigger
A timer trigger lets you run a function on a schedule.
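For example, a sketch of a function.json timer trigger binding; the six-field NCRONTAB expression below runs the function every five minutes (the binding name is a placeholder):

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "direction": "in",
      "name": "myTimer",
      "schedule": "0 */5 * * * *"
    }
  ]
}
```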
Your company has the requirement to deploy a web application to an Azure Windows virtual machine. You have to configure remote access to RDP into the machine.
You decide to create an Inbound Network Security Group rule to allow traffic on port 3389
Would this fulfil the requirement?
Yes.
Why:
In order to connect to a Windows virtual machine in Azure, you have to create an Inbound port rule in the Network Security Group
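As a sketch, such an inbound rule can be created with the Azure CLI; the resource group, NSG name, and priority below are placeholder values:

```shell
# Placeholder names; requires an existing network security group
az network nsg rule create \
  --resource-group myRG \
  --nsg-name myNSG \
  --name Allow-RDP \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 3389
```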
Azure Notification Hubs
The Notification Hub is used for sending notifications to devices
Azure Notification Hubs provide an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, etc.) from any back-end (cloud or on-premises). Notification Hubs works great for both enterprise and consumer scenarios. Here are a few example scenarios:
Send breaking news notifications to millions with low latency.
Send location-based coupons to interested user segments.
You have to create an Azure Virtual Machine using a PowerShell script.
Which of the following commands can be used to create the new virtual machine?
New-AzVm
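A minimal sketch of the cmdlet, using the simplified parameter set; all values are placeholders, and the command assumes the Az PowerShell module and a signed-in session (Connect-AzAccount):

```powershell
# Placeholder values; prompts for the local admin credentials
New-AzVm `
    -ResourceGroupName "myRG" `
    -Name "myVM" `
    -Location "eastus" `
    -Image "Win2019Datacenter" `
    -Size "Standard_B2s" `
    -Credential (Get-Credential) `
    -OpenPorts 3389
```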
write a new row to Azure Table storage whenever a new message appears in Azure Queue storage
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "order",
      "queueName": "myqueue-items",
      "connection": "MY_STORAGE_ACCT_APP_SETTING"
    },
    {
      "type": "table",
      "direction": "out",
      "name": "$return",
      "tableName": "outTable",
      "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
    }
  ]
}
You have to setup a data store using Azure Cosmos DB. The documents that would be stored in Cosmos DB would contain hundreds of properties. The Azure Cosmos DB account would be using the SQL API.
The issue currently is that in the design stage it has been noticed that there are no distinct values in the documents that can be used for partitioning.
You need to choose a partition key that would ensure workloads are spread evenly over the partitions.
Which of the following are strategies that can be implemented?
Choose 2 answers from the options given below:
Employing a strategy of concatenation of multiple property values with a random suffix appended
Using a hash suffix that is appended to a property value
SQL API/Partition Key
It’s the best practice to have a partition key with many distinct values, such as hundreds or thousands.
The goal is to distribute your data and workload evenly across the items associated with these partition key values.
If such a property doesn’t exist in your data, you can construct a synthetic partition key
Concatenate multiple properties of an item
You can form a partition key by concatenating multiple property values into a single artificial partitionKey property. These keys are referred to as synthetic keys.
partition key with a random suffix
Another possible strategy to distribute the workload more evenly is to append a random number at the end of the partition key value. When you distribute items in this way, you can perform parallel write operations across partitions.
For example, if a partition key represents a date, you might choose a random number between 1 and 400 and concatenate it as a suffix to the date. This method results in partition key values like 2018-08-09.1, 2018-08-09.2, and so on, through 2018-08-09.400.
Use a partition key with pre-calculated suffixes
The random suffix strategy can greatly improve write throughput, but it’s difficult to read a specific item. You don’t know the suffix value that was used when you wrote the item. To make it easier to read individual items, use the pre-calculated suffixes strategy. Instead of using a random number to distribute the items among the partitions, use a number that is calculated based on something that you want to query.
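The two strategies can be sketched with plain string manipulation. The property names and the bucket count of 400 below are illustrative assumptions, not prescribed values; the pre-calculated suffix hashes a queryable property so the same item always lands in the same bucket:

```shell
# Hypothetical document properties (not from the source)
deviceId="device-42"
date="2018-08-09"

# Strategy 1: synthetic key formed by concatenating multiple properties
syntheticKey="${deviceId}-${date}"

# Strategy 2: pre-calculated suffix -- hash a queryable property into a
# bucket from 1 to 400 and append it to the partition key value
hash=$(printf '%s' "$deviceId" | cksum | cut -d ' ' -f 1)
suffix=$(( hash % 400 + 1 ))
suffixedKey="${date}.${suffix}"

echo "$syntheticKey"
echo "$suffixedKey"
```

Because the suffix is derived from a known property rather than a random number, a reader can recompute it and address the exact partition when querying.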
Azure Active Directory app manifest
The application manifest contains a definition of all the attributes of an application object in the Microsoft identity platform. It also serves as a mechanism for updating the application object.
Manifest reference/id attribute
The unique identifier for the app in the directory. This ID is not the identifier used to identify the app in any protocol transaction; it's used for referencing the object in directory queries.
code:
"id": "f7f9acfc-ae0c-4d6c-b489-0a81dc1652dd",
Manifest reference/accessTokenAcceptedVersion attribute
Specifies the access token version expected by the resource. This parameter changes the version and format of the JWT produced independent of the endpoint or client used to request the access token.
code:
"accessTokenAcceptedVersion": 2,
Manifest reference/addIns attribute
Defines custom behavior that a consuming service can use to call an app in specific contexts. For example, applications that can render file streams may set the addIns property for its “FileHandler” functionality. This parameter will let services like Microsoft 365 call the application in the context of a document the user is working on.
code:
"addIns": [
  {
    "id": "968A844F-7A47-430C-9163-07AE7C31D407",
    "type": "FileHandler",
    "properties": [
      { "key": "version", "value": "2" }
    ]
  }
],
Manifest reference/oauth2AllowImplicitFlow attribute
Specifies whether this web app can request OAuth2.0 implicit flow access tokens. The default is false. This flag is used for browser-based apps, like JavaScript single-page apps.
code:
"oauth2AllowImplicitFlow": false,
Configure group claims for applications with Azure Active Directory
Azure Active Directory can provide a user's group membership information in tokens for use within applications. Two main patterns are supported:
- Groups identified by their Azure Active Directory object identifier (OID) attribute
- Groups identified by sAMAccountName or GroupSID attributes for Active Directory (AD) synchronized groups and users
Group claims for applications migrating from AD FS and other identity providers
Many applications configured to authenticate with AD FS rely on group membership information in the form of Windows AD group attributes. These attributes are the group sAMAccountName, which may be qualified by domain name, or the Windows Group Security Identifier (GroupSID). When the application is federated with AD FS, AD FS uses the TokenGroups function to retrieve the group memberships for the user.
Options for applications to consume group information
Applications can call the MS Graph groups endpoint to obtain group information for the authenticated user. This call ensures that all the groups a user is a member of are available even when there are a large number of groups involved. Group enumeration is then independent of token size limitations.
Prerequisites for using Group attributes synchronized from Active Directory
Group membership claims can be emitted in tokens for any group if you use the ObjectId format. To use group claims in formats other than the group ObjectId, the groups must be synchronized from Active Directory using Azure AD Connect.
groupMembershipClaims attribute
Configures the groups claim issued in a user or OAuth 2.0 access token that the app expects. To set this attribute, use one of the following valid string values: "None", "SecurityGroup" (for security groups and Azure AD roles), "ApplicationGroup", "DirectoryRole", or "All".
code:
"groupMembershipClaims": "SecurityGroup",
Acquire a token from Azure AD for authorizing requests from a client application
A key advantage of using Azure Active Directory (Azure AD) with Azure Blob storage or Queue storage is that your credentials no longer need to be stored in your code. Instead, you can request an OAuth 2.0 access token from the Microsoft identity platform. Azure AD authenticates the security principal (a user, group, or service principal) running the application. If authentication succeeds, Azure AD returns the access token to the application, and the application can then use the access token to authorize requests to Azure Blob storage or Queue storage.
Grant your registered app permissions to Azure Storage
Azure Storage API
This step enables your application to authorize requests to Azure Storage with Azure AD.
Observe that the available permission type is Delegated permissions; this option is selected for you by default.
Under Permissions, select the checkbox next to user_impersonation.
az webapp log commands
az webapp log config
Configure logging for a web app.
az webapp log deployment
Manage web app deployment logs.
az webapp log deployment list
List deployments associated with web app.
az webapp log deployment show
Show deployment logs of the latest deployment, or a specific deployment if deployment-id is specified.
az webapp log download
Download a web app’s log history as a zip file.
az webapp log show
Get the details of a web app’s logging configuration.
az webapp log tail
Start live log tracing for a web app.
az webapp log config
az webapp log config --name MyWebapp --resource-group MyResourceGroup --web-server-logging off
az webapp log download
az webapp log download --name MyWebApp --resource-group MyResourceGroup
Microsoft Graph API/Permission
Delegated/User.Read
Azure Event Grid
- Azure Event Grid allows you to easily build applications with event-based architectures.
- First, select the Azure resource you would like to subscribe to, and then give the event handler or WebHook endpoint to send the event to.
- Event Grid has built-in support for events coming from Azure services, like storage blobs and resource groups. Event Grid also has support for your own events, using custom topics.
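As a sketch of the subscribe step, the Azure CLI can attach a WebHook endpoint to events from an Azure resource such as a storage account; all names and the endpoint URL below are placeholders:

```shell
# Placeholder names; requires an existing storage account
STORAGE_ID=$(az storage account show \
  --name mystorageacct \
  --resource-group myRG \
  --query id --output tsv)

# Send the storage account's events to a WebHook endpoint
az eventgrid event-subscription create \
  --name myEventSubscription \
  --source-resource-id "$STORAGE_ID" \
  --endpoint https://myapp.example.com/api/updates
```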
Azure Event Hubs
- Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second.
- Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
- The following scenarios are some of the scenarios where you can use Event Hubs:
Anomaly detection (fraud/outliers)
Application logging
Analytics pipelines, such as clickstreams
Live dashboarding
Archiving data
Transaction processing
User telemetry processing
Device telemetry streaming
Azure Service Bus
- Azure Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics.
- Service Bus is used to decouple applications and services from each other, for load balancing work across competing workers, for safely routing and transferring data and control across service and application boundaries, and for coordinating transactional work that requires a high-degree of reliability.
- Data is transferred between different applications and services using messages. A message is a container decorated with metadata, and can contain any kind of information, including structured data encoded with common formats such as JSON, XML, Apache Avro, or Plain Text.
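As a sketch, a namespace with a queue and a publish-subscribe topic can be created with the Azure CLI; names and location are placeholders, and topics require at least the Standard tier:

```shell
# Placeholder names; topics are not available in the Basic tier
az servicebus namespace create \
  --resource-group myRG \
  --name myNamespace \
  --location eastus \
  --sku Standard

az servicebus queue create \
  --resource-group myRG \
  --namespace-name myNamespace \
  --name myQueue

az servicebus topic create \
  --resource-group myRG \
  --namespace-name myNamespace \
  --name myTopic
```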
Which notebook format is used in Azure Databricks?
- DBC
- HTML
- IPython notebook
- RMarkdown
https://docs.microsoft.com/en-us/azure/databricks/notebooks/notebooks-manage
Tags applied at a resource group level are propagated to resources within the resource group
Incorrect. Tags applied at the resource group level are not automatically inherited by the resources within that resource group; inheritance can be enforced separately, for example with an Azure Policy that inherits tags from the resource group.