From Tests Flashcards
You are using CodePipeline to automatically deploy code to your environments every time a developer pushes new code to CodeCommit; however, your newly built code fails one of the automated tests you have configured as part of your pipeline. How does CodePipeline deal with this failure?
The code is still deployed however CodePipeline sends an SNS notification that one or more tests have failed
CodePipeline deploys only the code changes that passed the automated tests
The pipeline stops immediately because one stage has failed
CodePipeline automatically retries the failed stage of the pipeline
The pipeline stops immediately because one stage has failed
You have a motion sensor that reads 300 items of data every 30 seconds. Each item is 5 KB in size. Your application uses eventually consistent reads. In order for your application to keep up, what should you set the read throughput to?
5
10
30
20
10
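The figure comes from the standard DynamoDB read-capacity arithmetic; a quick sketch of the calculation (all numbers are taken from the question):

```python
# Read capacity calculation for the scenario above.
items_per_second = 300 / 30                       # 300 items every 30 seconds = 10 items/sec
rcu_per_item_strong = -(-5 // 4)                  # a 5 KB item needs ceil(5/4) = 2 units strongly consistent
rcu_per_item_eventual = rcu_per_item_strong / 2   # eventually consistent reads cost half
print(items_per_second * rcu_per_item_eventual)   # 10.0 read capacity units
```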
Which of the following can optimise the performance of a large scan in DynamoDB?
Run a single scan rather than multiple smaller scans
Run smaller scans in parallel
Increase the page size
Increase your read capacity units
Run smaller scans in parallel
Which of the following are ways of remediating a ProvisionedThroughputExceeded error from DynamoDB? [Select 2]
Reduce the frequency of requests to the DynamoDB table
Increase the frequency of requests to the DynamoDB table
Move your application to a larger instance type
Exponential Backoff
Reduce the frequency of requests to the DynamoDB table
Exponential Backoff
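As a rough illustration of exponential backoff, the sketch below wraps a throttled DynamoDB read in a retry loop that doubles the wait between attempts. The table name and key are made up, and newer AWS SDKs already apply this pattern automatically.

```python
import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def get_item_with_backoff(table, key, max_retries=5):
    """Retry a throttled read, doubling the sleep between attempts."""
    for attempt in range(max_retries):
        try:
            return dynamodb.get_item(TableName=table, Key=key)
        except ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            time.sleep((2 ** attempt) * 0.1)   # 0.1s, 0.2s, 0.4s, 0.8s, ...
    raise RuntimeError("still throttled after retries")

# Hypothetical call:
# get_item_with_backoff("Orders", {"OrderId": {"S": "1234"}})
```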
What is the difference between a Global Secondary Index and a Local Secondary Index? [Select 2]
You can create a Local Secondary Index at any time but you can only create a Global Secondary Index at table creation time
You can delete a Global Secondary Index at any time
You can delete a Local Secondary Index at any time
You can create a Global Secondary Index at any time but you can only create a Local Secondary Index at table creation time
You can create a Global Secondary Index at any time but you can only create a Local Secondary Index at table creation time
You can delete a Global Secondary Index at any time
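For reference, adding a Global Secondary Index to an existing table is an UpdateTable operation; a minimal boto3 sketch, where the table, attribute, and index names are purely illustrative:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a GSI on an existing table (not possible for an LSI, which must be
# defined when the table is created).
dynamodb.update_table(
    TableName="Customers",                                    # illustrative name
    AttributeDefinitions=[{"AttributeName": "email", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "email-index",
            "KeySchema": [{"AttributeName": "email", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        }
    }],
)
```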
How can you prevent CloudFormation from deleting your entire stack on failure? [Select 2]
Use the --disable-rollback flag with the AWS CLI
Set Termination Protection to Enabled in the CloudFormation console
Use the --enable-termination-protection flag with the AWS CLI
Use the --disable-rollback flag with the AWS CLI
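For reference, disabling rollback and enabling termination protection both have direct SDK equivalents; a hedged boto3 sketch follows (the stack name and template file are placeholders, and DisableRollback cannot be combined with the OnFailure parameter):

```python
import boto3

cfn = boto3.client("cloudformation")

# Keep partially created resources around for debugging instead of letting
# CloudFormation roll back (and delete) them when stack creation fails.
with open("template.yaml") as f:           # placeholder template file
    template_body = f.read()

cfn.create_stack(
    StackName="my-app-stack",              # placeholder stack name
    TemplateBody=template_body,
    DisableRollback=True,
)

# Separately, termination protection blocks deletion of an existing stack.
cfn.update_termination_protection(
    StackName="my-app-stack",
    EnableTerminationProtection=True,
)
```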
Which of the following are recommended ways to optimise a query or scan in DynamoDB? [Select 2]
Reduce the page size to return fewer items per results page
Filter your results based on the Primary Key and Sort Key
Set your queries to be eventually consistent
Run parallel scans
A smaller page size uses fewer read operations per request and creates a “pause” between requests, which reduces the impact of a query or scan operation. A larger number of smaller operations can allow other critical requests to succeed without throttling. For large tables, a parallel scan can complete much faster than a sequential one, provided the table’s provisioned read throughput is not already being fully used.
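Both techniques map onto plain Scan parameters; a rough boto3 sketch, where the table name, page size, and segment count are arbitrary:

```python
import boto3

dynamodb = boto3.client("dynamodb")

def scan_segment(table, segment, total_segments, page_size=100):
    """Scan one segment of a parallel scan in small pages, following the pagination token."""
    start_key = None
    while True:
        kwargs = {
            "TableName": table,
            "Limit": page_size,             # smaller pages = smaller bursts of consumed RCUs
            "Segment": segment,             # this worker's slice of the parallel scan
            "TotalSegments": total_segments,
        }
        if start_key:
            kwargs["ExclusiveStartKey"] = start_key
        page = dynamodb.scan(**kwargs)
        yield from page.get("Items", [])
        start_key = page.get("LastEvaluatedKey")
        if not start_key:
            break

# e.g. worker 0 of 4 against a hypothetical table:
# for item in scan_segment("Analytics", segment=0, total_segments=4):
#     print(item)
```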
A Developer requires a multi-threaded in-memory cache to place in front of an Amazon RDS database. Which caching solution should the Developer choose?
Amazon RedShift
Amazon DynamoDB DAX
Amazon ElastiCache Redis
Amazon ElastiCache Memcached
Amazon ElastiCache Memcached
INCORRECT: “Amazon ElastiCache Redis” is incorrect as Redis is not multi-threaded.
To reduce the cost of API actions performed on an Amazon SQS queue, a Developer has decided to implement long polling. Which of the following modifications should the Developer make to the API actions?
Set the ReceiveMessage API with a WaitTimeSeconds of 20
Set the SetQueueAttributes API with a DelaySeconds of 20
Set the ReceiveMessage API with a VisibilityTimeout of 30
Set the SetQueueAttributes API with a MessageRetentionPeriod of 60
Set the ReceiveMessage API with a WaitTimeSeconds of 20
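A minimal boto3 sketch of the same idea (the queue URL is a placeholder). With WaitTimeSeconds=20, ReceiveMessage long-polls for up to 20 seconds, so far fewer empty responses, and therefore fewer billable API calls, are generated:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # long polling: wait up to 20 s for messages to arrive
)
for message in response.get("Messages", []):
    print(message["Body"])
```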
A Development team is using a GitHub repository and would like to migrate their application code to AWS CodeCommit.
What needs to be created before they can migrate a cloned repository to CodeCommit over HTTPS?
A GitHub secure authentication token
A set of Git credentials generated with IAM
An Amazon EC2 IAM role with CodeCommit permissions
A public and private SSH key file
A set of Git credentials generated with IAM
In this scenario the Development team needs to connect to CodeCommit using HTTPS, so they need either AWS access keys (for use with the AWS CLI credential helper) or Git credentials generated with IAM.
A company is deploying an on-premise application server that will connect to several AWS services. What is the BEST way to provide the application server with permissions to authenticate to AWS services?
Create an IAM user and generate access keys. Create a credentials file on the application server
Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services
Create an IAM role with the necessary permissions and assign it to the application server
Create an IAM group with the necessary permissions and add the on-premise application server to the group
Create an IAM user and generate access keys. Create a credentials file on the application server
INCORRECT: “Create an IAM role with the necessary permissions and assign it to the application server” is incorrect. This is an on-premises server so it is not possible to use an IAM role. If it was an EC2 instance, this would be the preferred (best practice) option.
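As a rough sketch of the chosen approach, the IAM user's access keys go into a shared credentials file on the server and the application loads them through the SDK; the profile name and key values below are placeholders:

```python
# ~/.aws/credentials on the on-premises server (placeholder values):
#   [onprem-app]
#   aws_access_key_id     = AKIAEXAMPLEKEY
#   aws_secret_access_key = exampleSecretKey
import boto3

session = boto3.Session(profile_name="onprem-app")   # placeholder profile name
s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```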
An application uses AWS Lambda which makes remote calls to several downstream services. A Developer wishes to add data to custom subsegments in AWS X-Ray that can be used with filter expressions. Which type of data should be used?
Annotations
Daemon
Trace ID
Metadata
Annotations
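Annotations are indexed key/value pairs that filter expressions can search on; metadata is stored with the trace but not indexed. A rough sketch with the X-Ray SDK for Python, where the subsegment name and keys are made up:

```python
from aws_xray_sdk.core import xray_recorder

# Inside a Lambda handler that X-Ray is already tracing:
subsegment = xray_recorder.begin_subsegment("downstream-call")  # hypothetical name
try:
    subsegment.put_annotation("order_id", "1234")               # indexed: usable in filter expressions
    subsegment.put_metadata("debug_payload", {"raw": "..."})    # stored but NOT indexed
    # ... call the downstream service here ...
finally:
    xray_recorder.end_subsegment()
```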
A company has a large Amazon DynamoDB table which they scan periodically so they can analyze several attributes. The scans are consuming a lot of provisioned throughput. What technique can a Developer use to minimize the impact of the scan on the table’s provisioned throughput?
Define a range key on the table
Set a smaller page size for the scan
Prewarm the table by updating all items
Use parallel scans
Set a smaller page size for the scan
INCORRECT: “Use parallel scans” is incorrect as this will return results faster but place more burden on the table’s provisioned throughput.
A company has created a set of APIs using Amazon API Gateway and exposed them to partner companies. The APIs have caching enabled for all stages. The partners require a method of invalidating the cache that they can build into their applications.
What can the partners use to invalidate the API cache?
They can pass the HTTP header Cache-Control: max-age=0
They can invoke an AWS API endpoint which invalidates the cache
They can use the query string parameter INVALIDATE_CACHE
They must wait for the TTL to expire
They can pass the HTTP header Cache-Control: max-age=0
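As an illustration, the client simply sends that header on the request whose cached entry it wants refreshed; the caller also needs execute-api:InvalidateCache permission unless the stage allows unauthorized cache invalidation. A rough sketch with the requests library (the URL is a placeholder):

```python
import requests

url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/orders"  # placeholder

# Cache-Control: max-age=0 tells API Gateway to bypass the cached entry,
# fetch a fresh response from the backend, and refresh the cache.
response = requests.get(url, headers={"Cache-Control": "max-age=0"})
print(response.status_code, response.json())
```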
A company is deploying an Amazon Kinesis Data Streams application that will collect streaming data from a gaming application. Consumers will run on Amazon EC2 instances.
In this architecture, what can be deployed on consumers to act as an intermediary between the record processing logic and Kinesis Data Streams and instantiate a record processor for each shard?
Amazon Kinesis Client Library (KCL)
Amazon Kinesis CLI
Amazon Kinesis API
AWS CLI
Amazon Kinesis Client Library (KCL)
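For context, a KCL consumer supplies a record-processor class and the library handles shard assignment, checkpointing, and load balancing across workers. The heavily simplified sketch below assumes the amazon_kclpy package and its v2 record-processor interface; module paths and method names differ between KCL versions, so treat it as an outline rather than a definitive implementation:

```python
from amazon_kclpy import kcl
from amazon_kclpy.v2 import processor

class GameEventProcessor(processor.RecordProcessorBase):
    """One instance is created per shard assigned to this worker."""

    def initialize(self, initialize_input):
        self.shard_id = initialize_input.shard_id

    def process_records(self, process_records_input):
        for record in process_records_input.records:
            print(record.binary_data)                     # application-specific processing goes here
        process_records_input.checkpointer.checkpoint()

    def lease_lost(self, lease_lost_input):
        pass                                              # another worker took over the shard

    def shard_ended(self, shard_ended_input):
        shard_ended_input.checkpointer.checkpoint()

    def shutdown_requested(self, shutdown_requested_input):
        shutdown_requested_input.checkpointer.checkpoint()

if __name__ == "__main__":
    kcl.KCLProcess(GameEventProcessor()).run()
```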