(AZ-204 topic) Develop for Azure Storage Flashcards

Test takers should be familiar with deploying Storage Accounts and developing solutions with Blob Containers. They will also be expected to know how to create, configure, and develop with Azure Cosmos DB. Questions for this domain make up 15% of the total questions for this exam.

1
Q

What is the best way to optimize costs for your blob storage?

  • Use the lifecycle management policy in your blob storage.
  • Delete blobs, blob versions, and blob snapshots when no longer needed.
  • Transition storage to the cold tier when not needed.
  • Create rules to clean up unused blobs.
A

-Use the lifecycle management policy in your blob storage.

Azure Blob Storage lifecycle management offers a rich, rule-based policy to keep your account as lean and efficient as possible.

2
Q

In .NET, how would you retrieve blob metadata asynchronously with C#? Fill in the blank. "blob" is of type BlobClient.

```
BlobProperties properties = await blob.__________;

foreach (var metadataItem in properties.Metadata)
{
    Console.WriteLine($"\tKey: {metadataItem.Key}");
    Console.WriteLine($"\tValue: {metadataItem.Value}");
}
```

  • GetBlobMetadataAsync()
  • GetPropertiesAsync()
  • GetMetadataAsync()
  • LoadPropertiesAsync()
A

-GetPropertiesAsync()

Bingo! This operation returns all user-defined metadata, standard HTTP properties, and system properties for the blob. It does not return the content of the blob.
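
For reference, a completed version of the snippet might look like the following minimal sketch, assuming `blob` is an initialized `Azure.Storage.Blobs.BlobClient`:

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// GetPropertiesAsync returns the blob's system properties and user-defined
// metadata (but not the blob's content).
BlobProperties properties = await blob.GetPropertiesAsync();

foreach (var metadataItem in properties.Metadata)
{
    Console.WriteLine($"\tKey: {metadataItem.Key}");
    Console.WriteLine($"\tValue: {metadataItem.Value}");
}
```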

3
Q

Magic Corporation wants a new Storage Account that leverages blob storage. The primary use of this account will be to store data that needs to be kept around and accessed for legal and compliance purposes and not much else. Magic Corporation does quarterly compliance audits and last needed data from this data pool roughly a year ago. The company is of course cost-conscious, but the data needs to be available within three days of requesting it. What data tier should be configured to meet these needs?

  • Hot
  • Archive
  • Cool
  • Secure
A

-Cool

While the company will be using this Storage Account to store data for long-term retention, it will still perform quarterly compliance audits that require the data to be accessed semi-regularly. The Cool tier optimizes costs for long-term storage while still allowing ad hoc access without rehydration.

The Archive tier is meant for situations where the data needs to be accessed no more than about once every six months. Data stored there is stored at the lowest rates, but it must be rehydrated before it can be accessed, and access is expensive after rehydration.

4
Q

Which consistency model should you use in Cosmos DB if minimizing latency is the priority?

  • Eventual
  • Bounded Staleness
  • Strong
  • Session
A

-Eventual

This model offers high availability and low latency, along with the highest throughput of all the consistency levels.

5
Q

You need to update your company’s Blob Inventory Policy. Which of the following Azure CLI commands will accomplish this?

  • az storage account blob-policy set -g azuredale --account-name azuredalestorage --update "policy.rules[0].name=newname"
  • az storage account blob-inventory-policy set -g azuredale --account-name azuredalestorage --update "policy.rules[0].name=newname"
  • az storage account blob-inventory-policy update -g azuredale --account-name azuredalestorage --set "policy.rules[0].name=newname"
  • az storage account blob-policy update -g azuredale --account-name azuredalestorage --set "policy.rules[0].name=newname"
A

-az storage account blob-inventory-policy update -g azuredale --account-name azuredalestorage --set "policy.rules[0].name=newname"

Using the update command, you can change parameters in your policy quickly and easily.
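
To confirm the change, you can display the policy afterwards, a sketch using the same resource group and account names as the question (assuming the blob-inventory-policy command group is available in your CLI version):

```
az storage account blob-inventory-policy show -g azuredale --account-name azuredalestorage
```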

6
Q

You’ve been recently tasked with building a basic company website that will host information about the company such as its history, contact information, agent information, and social media links. It needs to be secured both internally and externally, be inexpensive, and be easy to modify when updates need to be applied. Additionally, the website is expected to remain operational with at least three nines (99.9%) availability. What would be the best solution for this scenario?

  • Create an Azure Container Instance running a containerized web server.
  • Build a Static Web Site using Azure Blob Storage.
  • Create an Azure Web App Service instance using the S1 SKU.
  • Create an Azure Virtual Machine to host your website.
A

-Build a Static Web Site using Azure Blob Storage.

Enabling a static website in your Storage Account creates a blob container ($web) that you can control access to and front with a custom domain and SSL. You are billed only based on your storage type, the number of reads against the site, and how much storage you consume. The storage can be regularly backed up and carries guaranteed SLAs without the need to select a higher-cost service plan.
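
As a sketch, static website hosting can be enabled with the Azure CLI before uploading your pages to the generated $web container (the account name is a placeholder):

```
az storage blob service-properties update --account-name <storage-account> \
    --static-website --index-document index.html --404-document error.html
```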

7
Q

In which situation would you use a Shared Access Signature (SAS) token for your blob storage?

  • To grant access to a storage container for short periods of time
  • To grant limited access to storage resources to a third party
  • When a third party user needs admin rights to the storage account
  • When a third party needs long term access to a specific blob
A

-To grant limited access to storage resources to a third party

A SAS provides secure delegated access to resources in the storage account, and you get granular control over how a client can access the data in the account.
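
A minimal sketch of generating a read-only SAS for a single blob with the Azure.Storage.Blobs SDK (it assumes `blobClient` was constructed with a StorageSharedKeyCredential so it can sign the token; all names are placeholders):

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

// Grant a third party read access to one blob for the next hour.
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = blobClient.BlobContainerName,
    BlobName = blobClient.Name,
    Resource = "b", // "b" = an individual blob
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read);

// GenerateSasUri works when the client holds a shared key credential.
Uri sasUri = blobClient.GenerateSasUri(sasBuilder);
```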

8
Q

You are creating an application that uses Cosmos DB for writing customer transactions. While the application isn’t dependent on real-time updates, it is critical the transactions arrive in the right order and stay consistent for all sessions. Which consistency model should you use for Cosmos DB?

  • Strong
  • Consistent Prefix
  • Session
  • Eventual
A

-Consistent Prefix

Data is updated in the correct order across replicas, but there is no guarantee of when the updates will arrive.

9
Q

You have been tasked with creating some new features for an existing product. The choice has been made to use Cosmos DB for its global scalability and low read/write latency. The database schema for the product resembles a traditional relational database, but needs to remain flexible as the product use grows. Which is the most appropriate API to choose for using Cosmos DB?

  • Cassandra
  • Core (SQL)
  • MongoDB
  • Gremlin
A

-Core (SQL)

The schema resembles a traditional relational database but needs to remain flexible, which makes Core (SQL), the native Cosmos DB API, the best choice for new development; the other APIs mainly target existing Cassandra, MongoDB, or graph workloads.

10
Q

Which type of Storage Blob is best for Random Access files?

  • Append
  • Page
  • Block
  • Table
A

-Page

Page blobs are optimized for random read and write access, as with VHD files backing virtual machine disks, which are constantly and randomly accessed.

11
Q

What can you do to automatically transition your blobs between storage tiers based on factors like last modified date?

  • Use Lifecycle Management.
  • Use a Blob Trigger to initiate a tier swap
  • Use Azure Automation.
  • Create an Azure Function to transition the blobs.
A

-Use Lifecycle Management.

Azure Blob Storage lifecycle management can move blobs between tiers automatically based on rules you define in a lifecycle management policy, including conditions on the last modified date.

12
Q

Which storage tier is best for storing reports that are updated once per quarter but are frequently accessed by your leadership staff?

  • Hot
  • Warm
  • Cool
  • Archive
A

-Hot

While updates are infrequent, the report is still referenced frequently and should be readily available when needed. This makes the Hot tier the best option because of the frequent access.

13
Q

Which of the following is NOT a supported API in Cosmos DB?

  • Cassandra
  • MongoDB
  • PostGres
  • SQL
A

-PostGres

Cosmos DB does not support Postgres as an API. The SQL (Core) API would be the closest API to it.

14
Q

Logical partitions in Cosmos DB are determined by what?

  • The Partition Mode
  • The Partition Group
  • The Partition Index
  • The Partition Key
A

-The Partition Key

The partition key tells Cosmos DB how to organize your data into logical partitions, and it also affects the physical partitions Cosmos DB creates.

15
Q

Which Cosmos DB consistency model offers the weakest data consistency guarantees when updates are made?

  • Session
  • Eventual Consistency
  • Bounded Staleness
  • Consistent Prefix
A

-Eventual Consistency

Data is updated out of order, with replicas eventually converging to a consistent state. This is only useful when your data doesn’t need immediate or ordered updates across your other nodes and regions.

16
Q

CosmosDB Partitions come in which of the following two forms?

  • Primary, Secondary
  • Logical, Physical
  • Relational, Non-Relational
  • Logical, Cache
A

-Logical, Physical

Physical partitions are WHERE your data is stored, while Logical partitions make up HOW your data is stored.

17
Q

You are the administrator of the Nutex Corporation. You use Azure Cosmos DB storage. You have 70,000 documents in one development database, with 2.5 GB of data and 200 MB of indexing data. The newest document is from September 27, 2019, the oldest from June 21, 2018. You want to use a simple SQL API-based solution to remove all data before November 15, 2018.

What is your preferred solution?

  • Develop a microservice
  • SQL query
  • User-defined function
  • Cosmos DB stored procedure
  • Time To Live (TTL)
A

-Time To Live (TTL)

You would use Time to Live (TTL) because you can set a TTL on documents and/or containers and let the Cosmos DB cleanup process remove expired items automatically. Every document in a collection stored in Cosmos DB contains a _ts property representing the Unix time of its last update. The _ts value for June 21, 2018 is 1529539200; for November 15, 2018 it is 1542240000. You can therefore target documents with a timestamp of 1529539200 <= _ts < 1542240000 for deletion.

You would not use a user-defined function. While you could use a user-defined function to accomplish this, an SQL API using the TTL feature to query and delete the documents is simpler. A user-defined function would be much more effort.

You would not use an SQL query because you cannot simply issue a DELETE based on a Unix timestamp. You can use the query language for SELECT statements, such as SELECT … FROM … WHERE …. However, you are not able to run DELETE * FROM c WHERE c._ts < unixTimeStamp.

You would not use a Cosmos DB stored procedure because with that you have restrictions for the result length and the handling of continuation tokens.

You would not develop a microservice. While you could develop a microservice to do this, it is simpler to use an SQL API with the TTL feature to query and delete the documents. Developing a microservice is much more effort.
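
For illustration, a minimal sketch of enabling TTL with the Cosmos DB .NET SDK v3 (Microsoft.Azure.Cosmos); the endpoint, key, and names are placeholders, and the real scenario may use the older DocumentClient SDK instead:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<endpoint>", "<key>");
Database database = client.GetDatabase("dev-db");

// DefaultTimeToLive = -1 turns TTL on for the container with no default
// expiry; individual documents can then carry their own "ttl" property
// (in seconds) and are deleted automatically once it elapses.
var props = new ContainerProperties(id: "documents", partitionKeyPath: "/pk")
{
    DefaultTimeToLive = -1
};
await database.CreateContainerIfNotExistsAsync(props);
```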

18
Q

You are the administrator of the Nutex Corporation. You use an Azure blob storage general purpose v2 account. You want to define a lifecycle management policy. The policy rule has to include the following requirements:

  • Tier blob to cool tier 30 days after last modification.
  • Tier blob to archive tier 90 days after last modification.
  • Delete blob 7 years after last modification.
  • Delete blob snapshots 90 days after snapshot creation.

See image for .json template with blanks.

Code:

  1. tierToCool
  2. tierToArchive
  3. delete
  4. tierToBackup
  5. 7
  6. tierToWarm
A

See image for solution

The tierToCool action is not used with a snapshot, but is used with a base blob. This action supports blobs currently at the hot tier.

The tierToArchive action is not used with a snapshot, but is used with a base blob. This action supports blobs currently at either the hot or cool tier.

The delete action supports both the base blob and the snapshot. If you define more than one action on the same blob, lifecycle management applies the least expensive action to the blob. For example, the delete action is cheaper than the tierToArchive action, and the tierToArchive action is cheaper than the tierToCool action.

You would set the value for a blob deleted 7 years after the last modification to 2555 because this would be the number of days in 7 years. The value should be stated in days and not years.

You would not select tierToBackup or tierToWarm. These are not valid actions.
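
Since the image is not available here, the completed policy could look roughly like the following sketch, reconstructed from the stated requirements (the rule name is a placeholder):

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "nutex-lifecycle-rule",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 2555 }
          },
          "snapshot": {
            "delete": { "daysAfterCreationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
```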

19
Q

You are the administrator of the Nutex Corporation. You want to do the following tasks:

  • Copy a blob to another storage account.
  • Copy a directory to another storage account.
  • Copy a container to another storage account.
  • Copy all containers, directories, and blobs to another storage account.

You have the following AzCopy command:

azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myTextFile.txt'

What is it doing?

A

You can use the following example to copy a blob to another storage account:

azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myTextFile.txt'

This command uses the following syntax:

azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>?<SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>'

In the above example, mysourceaccount is the source storage account. The value of the first mycontainer is the container name. The blob path is the first myTextFile.txt. The SAS token is represented by sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D. The destination storage account is mydestinationaccount. The destination container is the second mycontainer. The destination blob path is the second myTextFile.txt.

20
Q

You are the administrator of the Nutex Corporation. You want to do the following tasks:

  • Copy a blob to another storage account.
  • Copy a directory to another storage account.
  • Copy a container to another storage account.
  • Copy all containers, directories, and blobs to another storage account.

You have the following AzCopy command:

azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myBlobDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive

What is it doing?

A

You can use the following example to copy a directory to another storage account:

azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myBlobDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive

The above example uses the following syntax:

azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<directory-path>?<SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive

In the above example, mysourceaccount is the source storage account. The value of the first mycontainer is the container name. The directory path is myBlobDirectory. The SAS token is represented by sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D. The destination storage account is mydestinationaccount. The destination container is the second mycontainer. The --recursive parameter copies the contents of sub-directories as well.

21
Q

You are the administrator of the Nutex Corporation. You want to do the following tasks:

  • Copy a blob to another storage account.
  • Copy a directory to another storage account.
  • Copy a container to another storage account.
  • Copy all containers, directories, and blobs to another storage account.

You have the following AzCopy command:

azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive

What is it doing?

A

You would use the following example to copy a container to another storage account:

azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive

The above example uses the following syntax:

azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive

In the above example, mysourceaccount is the source storage account. The value of the first mycontainer is the container name. The SAS token is represented by sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D. The destination storage account is mydestinationaccount. The destination container is the second mycontainer. The --recursive parameter copies the entire container, including its sub-directories.

22
Q

You are the administrator of the Nutex Corporation. You want to do the following tasks:

  • Copy a blob to another storage account.
  • Copy a directory to another storage account.
  • Copy a container to another storage account.
  • Copy all containers, directories, and blobs to another storage account.

You have the following AzCopy command:

azcopy copy 'https://mysourceaccount.blob.core.windows.net?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net' --recursive

What is it doing?

A

You would use the following example to copy all containers, directories, and blobs to another storage account:

azcopy copy 'https://mysourceaccount.blob.core.windows.net?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net' --recursive

The above example uses the following syntax:

azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net?<SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net' --recursive

In the above example, mysourceaccount is the source storage account. The SAS token is represented by sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D. The destination storage account is mydestinationaccount. The --recursive parameter copies all containers, directories, and blobs, including sub-directories.

23
Q

You are working as an Azure developer for your company and are involved in an application review for a corporate system implemented around the globe. You want to split your Cosmos DB data across containers and in this way provide guaranteed throughput for each container.

What would you consider first?

  • Data access patterns
  • Throughput for each container
  • Retry logic in your application
  • Create a partition key for each container
A

Create a partition key for each container

You would first create a partition key for each container. All containers created inside a database with provisioned throughput must be created with a partition key. If you provision throughput on a container, the throughput is guaranteed for that container, backed by the SLA. A good partitioning strategy also plays a primary role in cost optimization in Azure Cosmos DB.

You would not consider the throughput for each container first. The throughput for a container can be adjusted later.

You would not consider retry logic in your application first. Retry logic is a code implementation pattern that can always be fixed later.

You would not consider data access patterns first. These are code implementation patterns that can always be fixed at a later stage.
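
A minimal sketch with the Cosmos DB .NET SDK v3; the database name, container name, and partition key path are illustrative:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<endpoint>", "<key>");
Database database = client.GetDatabase("orders-db");

// Every container must be created with a partition key. Throughput
// provisioned at the container level is guaranteed for that container.
Container container = await database.CreateContainerIfNotExistsAsync(
    new ContainerProperties(id: "orders", partitionKeyPath: "/customerId"),
    throughput: 400); // RU/s dedicated to this container
```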

24
Q

You are the administrator of Nutex. You have developed a globally distributed application. This application uses one Cosmos DB account as the storage solution. You have the following requirements:

  • Consistency level: Strong
  • Evaluate the cost of read operations compared to other consistency levels.
  • Calculate request units for an item size of 1 KB.

Which statement about the cost for read operations is true?

  1. The cost of a read operation (in terms of request units consumed) with strong consistency is lower than session and eventual, but the same as bounded staleness.
  2. The cost of a read operation (in terms of request units consumed) with strong consistency is higher than session and eventual, but the same as bounded staleness.
A

True: The cost of a read operation (in terms of request units consumed) with strong consistency is higher than session and eventual, but the same as bounded staleness.

The cost of a read operation (in terms of request units consumed) with bounded staleness is higher than session and eventual consistency, but is similar to strong consistency. The cost of a read operation (in terms of request units consumed) with session consistency level is less than strong and bounded staleness, but more than eventual consistency.

25
Q

Your application depends on Azure storage. You use the Azure storage diagnostics to capture metrics and log data. This data can later be used to analyze the storage service usage and diagnose issues with requests made against the storage account.

You need to use a cmdlet to change the retention period of captured log data for the blob service. Which cmdlet can help you accomplish the task?

  • Set-AzureStorageAccount
  • Set-AzureStorageServiceMetricsProperty
  • Set-AzureStorageServiceLoggingProperty
  • You can only use the management portal to set the retention period.
A
  • Set-AzureStorageServiceLoggingProperty

You would use Set-AzureStorageServiceLoggingProperty. With this cmdlet, you can modify the retention policy for log settings for Blob, Table, or Queue service. The following example turns on logging for read, write, and delete requests in the Queue service in your default storage account with retention set to three days:

Set-AzureStorageServiceLoggingProperty -ServiceType Queue -LoggingOperations read,write,delete -RetentionDays 3

You would not use Set-AzureStorageServiceMetricsProperty. This cmdlet is similar to the one above, but it modifies the retention policy for the metric settings for Blob, Table, or Queue service instead of the log settings.

You would not use Set-AzureStorageAccount. This cmdlet is used to modify the label and type of a storage account.

You would not select the option that the retention period can only be set using the management portal. You can depend on PowerShell commands to set almost anything in Azure. In this scenario, you can use Set-AzureStorageServiceLoggingProperty to set the retention period for log settings for the blob.

26
Q

You work as an Azure developer for your company and are involved in a code review for a corporate system implemented around the globe. The code looks like the following:

```
private static async Task ReadDocumentAsync()
{
    Console.WriteLine("\n1.2 - Reading Document by Id");

    // Note that reads require a partition key to be specified.
    var response = await client.ReadDocumentAsync(
        UriFactory.CreateDocumentUri(databaseName, collectionName, "SalesOrder1"),
        new RequestOptions { PartitionKey = new PartitionKey("Account1") });

    // You can measure the throughput consumed by any operation
    // by inspecting the RequestCharge property.
    Console.WriteLine("Document read by Id {0}", response.Resource);
    Console.WriteLine("Request Units Charge for reading a Document by Id {0}", response.RequestCharge);

    SalesOrder readOrder = (SalesOrder)(dynamic)response.Resource;

    //**************************
    // 1.3 - Read ALL documents in a Collection
    //**************************
    Console.WriteLine("\n1.3 - Reading all documents in a collection");

    string continuationToken = null;
    do
    {
        //code
    }
```

Another developer proposes to remove the part of the code that reads the partition key.

When will it be possible to remove the code and have the application work?

  • It is possible. The code that reads the partition key can be skipped – collection DB just does a full scan.
  • It is possible. The code that reads the partition key can be skipped – collection DB just does a full scan, but it is not recommended because it will be slower.
  • It is not possible. The code that reads the partition key is mandatory.
  • It is possible. The code that reads the partition key can be skipped if your collection is not partitioned.
  • It is possible. The code that reads the partition key can be skipped – collection DB just gets a default Partition Key.
A
  • It is possible. The code that reads the partition key can be skipped if your collection is not partitioned.

The code that reads the partition key can be skipped if your collection is not partitioned. Reads require a partition key to be specified; however, this can be skipped if your collection is not partitioned, i.e., it does not have a partition key defined during creation.

The option that states the partition key is mandatory is the wrong choice when the collection is not partitioned.

The statement that says, “The code that reads the partition key can be skipped – collection just gets default Partition Key” is not true because the collection will not get a default Partition Key.

The following statements are incorrect because the collection DB does not perform a full scan:

  • The code that reads the partition key can be skipped – collection DB just does a full scan.
  • The code that reads the partition key can be skipped – collection DB just does a full scan, but it is not recommended because it will be slower.
27
Q

You manage a Cosmos DB at Nutex Corporation. Every once in a while there is a storage problem that needs attention.

The Nutex cloud services team wants you to generate an alert to monitor the Cosmos DB storage and trigger when available space gets below a specified threshold.

What options are available for you to create the desired alert? (Choose all that apply.)

  • In the Azure CLI, execute the command az monitor alert create.
  • Using a Windows Server cmd.exe session, execute az monitor alert create.
  • In Visual Studio, use the .NET SDK, to call the DocumentClient.ReadDocumentCollectionAsync method.
  • In the Azure Portal, under Azure CosmosDB properties, add an Alert Rule.
  • Use an Azure PowerShell Script to execute Add-AzMetricAlertRule.
A
  • In the Azure CLI, execute the command az monitor alert create.
  • In the Azure Portal, under Azure CosmosDB properties, add an Alert Rule.
  • Use an Azure PowerShell Script to execute Add-AzMetricAlertRule.

You can execute the Add-AzMetricAlertRule cmdlet with the appropriate options and arguments to successfully create an alert as desired. The following creates a metric alert rule for a website:

Add-AzMetricAlertRule -Name "MyMetricRule" -Location "East US" -ResourceGroup "Default-Web-EastUS" -Operator GreaterThan -Threshold 2 -WindowSize 00:05:00 -MetricName "Requests" -Description "Pura Vida" -TimeAggregationOperator Total

You can execute the command az monitor alert create in the Cloud Shell or from a local machine to create an alert. The following creates a high CPU usage alert on a VM with no actions:

az monitor alert create -n rule1 -g {ResourceGroup} --target {VirtualMachineID} --condition "Percentage CPU > 90 avg 5m"

In the Azure Portal, under Azure CosmosDB properties, you can add an Alert Rule. Using the Web User Interface, you can click your way to successfully creating an alert to monitor Azure Cosmos DB.

You cannot use the .NET SDK in Visual Studio to call the DocumentClient.ReadDocumentCollectionAsync method. While you can use the .NET SDK to interact with Azure and create alerts, the specified method will not succeed in doing so.

You cannot use a Windows Server cmd.exe session to execute az monitor alert create. There currently are no plans to support cmd.exe interaction with Azure and therefore this would fail.

28
Q

You are the administrator of the Nutex Corporation. You want to retrieve Azure blob storage container property metadata. Which C# code content can you apply to the missing section? (Choose all that apply.)

```
public static async Task ReadContainerMetadataAsync(CloudBlobContainer container)
{
    try
    {
        // Fetch container attributes in order to populate the container's properties and metadata.
        await container.________________();

        // Enumerate the container's metadata.
        Console.WriteLine("Container metadata:");
        foreach (var metadataItem in container.Metadata)
        {
            Console.WriteLine("\tKey: {0}", metadataItem.Key);
            Console.WriteLine("\tValue: {0}", metadataItem.Value);
        }
    }
    catch (StorageException e)
    {
        Console.WriteLine("HTTP error code {0}: {1}",
            e.RequestInformation.HttpStatusCode,
            e.RequestInformation.ErrorCode);
        Console.WriteLine(e.Message);
        Console.ReadLine();
    }
}
```

  • FetchAttributesAsync
  • GetAttributesAsync
  • FetchAttributes
  • FetchPropertiesAsync
A
  • FetchAttributesAsync
  • FetchAttributes

You would use the FetchAttributesAsync or FetchAttributes method. Either of these methods fetches a container’s properties and metadata.

You would not use the FetchPropertiesAsync method. This method is used in previous versions of Azure and could be used to populate a blob’s properties or metadata.

There is no GetAttributesAsync method in Azure for .NET. This method can work with Amazon’s AWS and retrieves the attributes for the queue identified by the queue URL asynchronously. It is used with the AWSSDK.Core.dll assembly.

29
Q

You have to implement the azcopy tool to copy objects from a local folder named D:\whizlabs to a container named “demo” within the storage account shown in the attached image.

You have to complete the below command to copy all of the objects in the local folder.

azcopy copy "_________" "_____________________/?sv=2018-03-28&ss=bjqt&srt=sco&sp=rwddgcup&se=2019-05-01T05:01:17Z&st=2019-04-30T21:01:17Z&spr=https&sig=MGCXiyEzbtttkr3ewJIh2AR8KrghSy1DGM9ovN734bQF4%3D" ________________

Which of the following would go into the second blank?

  • https://whizlabsstore2020.blob.core.windows.net/demo/
  • https://whizlabsstore2020/demo
  • D:\whizlabs
  • whizlabs
A
  • https://whizlabsstore2020.blob.core.windows.net/demo/
    Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10