From Tests Flashcards
You are using CodePipeline to automatically deploy code to your environments every time a developer pushes new code to CodeCommit. However, your newly built code fails one of the automated tests you have configured as part of your pipeline. How does CodePipeline deal with this failure?
The code is still deployed; however, CodePipeline sends an SNS notification that one or more tests have failed
CodePipeline deploys only the code changes that passed the automated tests
The pipeline stops immediately because one stage has failed
CodePipeline automatically retries the failed stage of the pipeline
The pipeline stops immediately because one stage has failed
You have a motion sensor that reads 300 items of data every 30 seconds. Each item is 5 KB in size. Your application uses eventually consistent reads. In order for your application to keep up, what should you set the read throughput to?
5
10
30
20
10
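A quick worked calculation: 300 items every 30 seconds is 10 items per second. A 5 KB item rounds up to two 4 KB read capacity units for a strongly consistent read, and eventually consistent reads halve that to one unit per item, so 10 items per second × 1 RCU = 10 RCUs.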
Which of the following can optimise the performance of a large scan in DynamoDB?
Run a single scan rather than multiple smaller scans
Run smaller scans in parallel
Increase the page size
Increase your read capacity units
Run smaller scans in parallel
Which of the following are ways of remediating a ProvisionedThroughputExceeded error from DynamoDB? [Select 2]
Reduce the frequency of requests to the DynamoDB table
Increase the frequency of requests to the DynamoDB table
Move your application to a larger instance type
Exponential Backoff
Reduce the frequency of requests to the DynamoDB table
Exponential Backoff
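As an illustration, here is a minimal Python sketch of exponential backoff around a throttled DynamoDB read using boto3 (the "Orders" table name is hypothetical; note that the AWS SDKs already implement retries with exponential backoff for you):

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def get_item_with_backoff(key, max_retries=5):
    """Retry a throttled DynamoDB read with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return dynamodb.get_item(TableName="Orders", Key=key)  # hypothetical table
        except ClientError as error:
            if error.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            # Sleep 0.1s, 0.2s, 0.4s, ... plus jitter before retrying
            time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))
    raise RuntimeError("request was throttled on every retry")
```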
What is the difference between a Global Secondary Index and a Local Secondary Index? [Select 2]
You can create a Local Secondary Index at any time but you can only create a Global Secondary Index at table creation time
You can delete a Global Secondary Index at any time
You can delete a Local Secondary Index at any time
You can create a Global Secondary Index at any time but you can only create a Local Secondary Index at table creation time
You can create a Global Secondary Index at any time but you can only create a Local Secondary Index at table creation time
You can delete a Global Secondary Index at any time
How can you prevent CloudFormation from deleting your entire stack on failure? [Select 2]
Use the --disable-rollback flag with the AWS CLI
Set Termination Protection to Enabled in the CloudFormation console
Use the --enable-termination-protection flag with the AWS CLI
Use the --disable-rollback flag with the AWS CLI
Set Termination Protection to Enabled in the CloudFormation console
Which of the following are recommended ways to optimise a query or scan in DynamoDB? [Select 2]
Reduce the page size to return fewer items per results page
Filter your results based on the Primary Key and Sort Key
Set your queries to be eventually consistent
Run parallel scans
A smaller page size uses fewer read operations and creates a “pause” between each request, which reduces the impact of a query or scan operation. A larger number of smaller operations can allow other critical requests to succeed without throttling. For large tables, a parallel scan can complete much faster than a sequential one, provided the table’s provisioned read throughput is not already being fully used.
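A minimal boto3 sketch of a reduced page size scan, assuming a hypothetical table named "Products" (Limit caps the items returned per page, and the sleep creates the pause between requests):

```python
import time

import boto3

def low_impact_scan(page_size=25):
    """Scan in small pages, pausing between requests to limit RCU impact."""
    table = boto3.resource("dynamodb").Table("Products")  # hypothetical table
    kwargs = {"Limit": page_size}
    while True:
        page = table.scan(**kwargs)
        yield from page["Items"]
        if "LastEvaluatedKey" not in page:
            break
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
        time.sleep(0.5)  # the "pause" that lets other requests through
```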
A Developer requires a multi-threaded in-memory cache to place in front of an Amazon RDS database. Which caching solution should the Developer choose?
Amazon Redshift
Amazon DynamoDB DAX
Amazon ElastiCache Redis
Amazon ElastiCache Memcached
Amazon ElastiCache Memcached
INCORRECT: “Amazon ElastiCache Redis” is incorrect as Redis is not multi-threaded.
To reduce the cost of API actions performed on an Amazon SQS queue, a Developer has decided to implement long polling. Which of the following modifications should the Developer make to the API actions?
Set the ReceiveMessage API with a WaitTimeSeconds of 20
Set the SetQueueAttributes API with a DelaySeconds of 20
Set the ReceiveMessage API with a VisibilityTimeout of 30
Set the SetQueueAttributes API with a MessageRetentionPeriod of 60
Set the ReceiveMessage API with a WaitTimeSeconds of 20
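A minimal boto3 sketch of long polling (the queue URL is hypothetical); setting WaitTimeSeconds to 20 on the ReceiveMessage call holds the connection open until a message arrives or the wait expires, reducing the number of empty, billable responses:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

# WaitTimeSeconds=20 enables long polling on this ReceiveMessage call
response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
for message in response.get("Messages", []):
    print(message["Body"])
```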
A Development team uses a GitHub repository and would like to migrate their application code to AWS CodeCommit.
What needs to be created before they can migrate a cloned repository to CodeCommit over HTTPS?
A GitHub secure authentication token
A set of Git credentials generated with IAM
An Amazon EC2 IAM role with CodeCommit permissions
A public and private SSH key file
A set of Git credentials generated with IAM
In this scenario the Development team needs to connect to CodeCommit using HTTPS, so they require either AWS access keys (to use the AWS CLI) or Git credentials generated with IAM.
A company is deploying an on-premises application server that will connect to several AWS services. What is the BEST way to provide the application server with permissions to authenticate to AWS services?
Create an IAM user and generate access keys. Create a credentials file on the application server
Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services
Create an IAM role with the necessary permissions and assign it to the application server
Create an IAM group with the necessary permissions and add the on-premise application server to the group
Create an IAM user and generate access keys. Create a credentials file on the application server
INCORRECT: “Create an IAM role with the necessary permissions and assign it to the application server” is incorrect. This is an on-premises server so it is not possible to use an IAM role. If it was an EC2 instance, this would be the preferred (best practice) option.
An application uses AWS Lambda which makes remote calls to several downstream services. A developer wishes to add data to custom subsegments in AWS X-Ray that can be used with filter expressions. Which type of data should be used?
Annotations
Daemon
Trace ID
Metadata
Annotations
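A minimal sketch using the AWS X-Ray SDK for Python, assuming a hypothetical Lambda handler and annotation key; annotations are indexed key-value pairs, which is what makes them usable in filter expressions (metadata is stored but not indexed):

```python
from aws_xray_sdk.core import xray_recorder

def handler(event, context):  # hypothetical Lambda handler
    subsegment = xray_recorder.begin_subsegment("downstream-call")
    try:
        # Annotations are indexed and searchable with filter expressions,
        # e.g. annotation.customer_id = "1234"
        subsegment.put_annotation("customer_id", event["customer_id"])
        # ... call the downstream service here ...
    finally:
        xray_recorder.end_subsegment()
```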
A company has a large Amazon DynamoDB table which they scan periodically so they can analyze several attributes. The scans are consuming a lot of provisioned throughput. What technique can a Developer use to minimize the impact of the scan on the table’s provisioned throughput?
Define a range key on the table
Set a smaller page size for the scan
Prewarm the table by updating all items
Use parallel scans
Set a smaller page size for the scan
INCORRECT: “Use parallel scans” is incorrect as this will return results faster but place more burden on the table’s provisioned throughput.
A company has created a set of APIs using Amazon API Gateway and exposed them to partner companies. The APIs have caching enabled for all stages. The partners require a method of invalidating the cache that they can build into their applications.
What can the partners use to invalidate the API cache?
They can pass the HTTP header Cache-Control: max-age=0
They can invoke an AWS API endpoint which invalidates the cache
They can use the query string parameter INVALIDATE_CACHE
They must wait for the TTL to expire
They can pass the HTTP header Cache-Control: max-age=0
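A minimal sketch of the client side using Python's requests library (the endpoint URL is hypothetical); note the caller must also be granted the execute-api:InvalidateCache permission for the header to be honoured:

```python
import requests

# Hypothetical partner-facing endpoint of a cached API stage
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/orders"

# Cache-Control: max-age=0 tells API Gateway to bypass the cached entry,
# fetch a fresh response from the backend, and replace the cache with it
response = requests.get(url, headers={"Cache-Control": "max-age=0"})
print(response.status_code)
```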
A company is deploying an Amazon Kinesis Data Streams application that will collect streaming data from a gaming application. Consumers will run on Amazon EC2 instances.
In this architecture, what can be deployed on consumers to act as an intermediary between the record processing logic and Kinesis Data Streams and instantiate a record processor for each shard?
Amazon Kinesis Client Library (KCL)
Amazon Kinesis CLI
Amazon Kinesis API
AWS CLI
Amazon Kinesis Client Library (KCL)
A Developer needs to scan a full 50 GB DynamoDB table within non-peak hours. About half of the strongly consistent RCUs are typically used during non-peak hours and the scan duration must be minimized.
How can the Developer optimize the scan execution time without impacting production workloads?
Use sequential scans
Use parallel scans while limiting the rate
Change to eventually consistent RCUs during the scan operation
Increase the RCUs during the scan operation
Use parallel scans while limiting the rate
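A minimal boto3 sketch of a rate-limited parallel scan, assuming a hypothetical "GameData" table; Segment/TotalSegments split the table between workers, and the Limit parameter keeps each page small so the scan does not consume the remaining RCU headroom in bursts:

```python
from concurrent.futures import ThreadPoolExecutor

import boto3

TOTAL_SEGMENTS = 4

def scan_segment(segment):
    """Scan one logical segment of the table; segments run in parallel."""
    table = boto3.resource("dynamodb").Table("GameData")  # hypothetical table
    kwargs = {"Segment": segment, "TotalSegments": TOTAL_SEGMENTS, "Limit": 100}
    items = []
    while True:
        page = table.scan(**kwargs)
        items.extend(page["Items"])
        if "LastEvaluatedKey" not in page:
            return items
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    results = [item for seg in pool.map(scan_segment, range(TOTAL_SEGMENTS)) for item in seg]
```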
A company has implemented AWS CodePipeline to automate its release pipelines. The Development team is writing an AWS Lambda function that will send notifications for state changes of each of the actions in the stages.
Which steps must be taken to associate the Lambda function with the event source?
Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source
Create a trigger that invokes the Lambda function from the Lambda console by selecting CodePipeline as the event source
Create an event trigger and specify the Lambda function from the CodePipeline console
Create an Amazon CloudWatch alarm that monitors status changes in CodePipeline and triggers the Lambda function
Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source
INCORRECT: “Create an event trigger and specify the Lambda function from the CodePipeline console” is incorrect as CodePipeline cannot be configured as a trigger for Lambda.
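A minimal boto3 sketch of the rule, with a hypothetical rule name and function ARN; the detail type shown matches action-level state changes, and the Lambda function also needs a resource-based policy allowing events.amazonaws.com to invoke it:

```python
import json

import boto3

events = boto3.client("events")

# Rule matching action-level state changes in any CodePipeline pipeline
events.put_rule(
    Name="codepipeline-action-changes",  # hypothetical rule name
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Action Execution State Change"],
    }),
)
events.put_targets(
    Rule="codepipeline-action-changes",
    Targets=[{
        "Id": "notify-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:notify",  # hypothetical
    }],
)
```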
A Developer is deploying an application in a microservices architecture on Amazon ECS. The Developer needs to choose the best task placement strategy to MINIMIZE the number of instances that are used. Which task placement strategy should be used?
spread
random
binpack
weighted
binpack
binpack - Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use.
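A minimal boto3 sketch of running a task with the binpack strategy (cluster and task definition names are hypothetical):

```python
import boto3

ecs = boto3.client("ecs")

# binpack on memory: fill the instances with the least available memory
# first, so as few instances as possible are used
ecs.run_task(
    cluster="my-cluster",            # hypothetical cluster
    taskDefinition="web-service:1",  # hypothetical task definition
    count=1,
    launchType="EC2",
    placementStrategy=[{"type": "binpack", "field": "memory"}],
)
```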
An X-Ray daemon is being used on an Amazon ECS cluster to assist with debugging stability issues. A developer requires more detailed timing information and data related to downstream calls to AWS services.
What should the developer use to obtain this extra detail?
Metadata
Filter expressions
Subsegments
Annotations
Subsegments
A segment can break down the data about the work done into subsegments. Subsegments provide more granular timing information and details about downstream calls that your application made to fulfill the original request.
A subsegment can contain additional details about a call to an AWS service, an external HTTP API, or an SQL database. You can even define arbitrary subsegments to instrument specific functions or lines of code in your application.
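A minimal sketch using the AWS X-Ray SDK for Python (the subsegment name and function are hypothetical); patch_all() instruments supported libraries such as boto3 so downstream AWS calls are recorded as subsegments with timing details automatically:

```python
from aws_xray_sdk.core import patch_all, xray_recorder

# Instrument supported libraries (boto3, requests, ...) so downstream AWS
# calls appear as subsegments with granular timing information
patch_all()

def process(order_id):
    # A custom subsegment around a specific block of work
    with xray_recorder.in_subsegment("load-order") as subsegment:  # hypothetical name
        subsegment.put_metadata("order_id", order_id)
        # ... fetch the order from the downstream service here ...
```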
An organization developed an application that uses a set of APIs that are being served through Amazon API Gateway. The API calls must be authenticated based on OpenID identity providers such as Amazon, Google, or Facebook. The APIs should allow access based on a custom authorization model.
Which is the simplest and MOST secure design to use to build an authentication and authorization model for the APIs?
Use Amazon DynamoDB to store user credentials and have the application retrieve temporary credentials from AWS STS. Make API calls by passing user credentials to the APIs for authentication and authorization
Use Amazon ElastiCache to store user credentials and pass them to the APIs for authentication and authorization
Build an OpenID token broker with Amazon and Facebook. Users will authenticate with these identity providers and pass the JSON Web Token to the API to authenticate each API call
Use Amazon Cognito user pools and a custom authorizer to authenticate and authorize users based on JSON Web Tokens
Use Amazon Cognito user pools and a custom authorizer to authenticate and authorize users based on JSON Web Tokens
With Amazon Cognito User Pools your app users can sign in either directly through a user pool or federate through a third-party identity provider (IdP). The user pool manages the overhead of handling the tokens that are returned from social sign-in through Facebook, Google, Amazon, and Apple, and from OpenID Connect (OIDC) and SAML IdPs.
After successful authentication, Amazon Cognito returns user pool tokens to your app. You can use the tokens to grant your users access to your own server-side resources, or to the Amazon API Gateway. Or, you can exchange them for AWS credentials to access other AWS services.
The ID token is a JSON Web Token (JWT) that contains claims about the identity of the authenticated user such as name, email, and phone_number. You can use this identity information inside your application. The ID token can also be used to authenticate users against your resource servers or server applications.
A developer is planning to use a Lambda function to process incoming requests from an Application Load Balancer (ALB). How can this be achieved?
Create an Auto Scaling Group (ASG) and register the Lambda function in the launch configuration
Create a target group and register the Lambda function using the AWS CLI
Setup an API in front of the ALB using API Gateway and use an integration request to map the request to the Lambda function
Configure an event-source mapping between the ALB and the Lambda function
Create a target group and register the Lambda function using the AWS CLI
You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.
You need to create a target group, which is used in request routing, and register a Lambda function to the target group. If the request content matches a listener rule with an action to forward it to this target group, the load balancer invokes the registered Lambda function.
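A minimal boto3 sketch of the same steps the CLI would perform, with a hypothetical function ARN; the load balancer must also be granted permission to invoke the function:

```python
import boto3

lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"  # hypothetical

elbv2 = boto3.client("elbv2")
tg = elbv2.create_target_group(Name="lambda-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Allow the load balancer to invoke the function
boto3.client("lambda").add_permission(
    FunctionName=lambda_arn,
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
)

elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": lambda_arn}])
```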
A Developer will be launching several Docker containers on a new Amazon ECS cluster using the EC2 Launch Type. The containers will all run a web service on port 80.
What is the EASIEST way the Developer can configure the task definition to ensure the web services run correctly and there are no port conflicts on the host instances?
Specify a unique port number for the container port and port 80 for the host port
Specify port 80 for the container port and a unique port number for the host port
Specify port 80 for the container port and port 0 for the host port
Leave both the container port and host port configuration blank
Specify port 80 for the container port and port 0 for the host port
The easiest way to do this is to set the host port number to 0 and ECS will automatically assign an available port. We also need to assign port 80 to the container port so that the web service is able to run.
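A minimal boto3 sketch of such a task definition (family and image are hypothetical); with the default bridge network mode on the EC2 launch type, hostPort 0 makes ECS assign an ephemeral host port per task:

```python
import boto3

ecs = boto3.client("ecs")

# hostPort 0 asks ECS to assign an ephemeral host port at launch time,
# so multiple copies of the container can run on one instance
ecs.register_task_definition(
    family="web-service",  # hypothetical family
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",  # hypothetical image
        "memory": 256,
        "portMappings": [{"containerPort": 80, "hostPort": 0, "protocol": "tcp"}],
    }],
)
```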
A Developer is setting up a code update to Amazon ECS using AWS CodeDeploy. The Developer needs to complete the code update quickly. Which of the following deployment types should the Developer use?
In-place
Canary
Linear
Blue/green
Blue/green
INCORRECT: “In-place” is incorrect as AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.
A Developer is deploying an AWS Lambda update using AWS CodeDeploy. In the appspec.yaml file, which of the following is a valid structure for the order of hooks that should be specified?
BeforeAllowTraffic > AfterAllowTraffic
BeforeBlockTraffic > AfterBlockTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeInstall > AfterInstall > AfterAllowTestTraffic > BeforeAllowTraffic > AfterAllowTraffic
BeforeInstall > AfterInstall > ApplicationStart > ValidateService
BeforeAllowTraffic > AfterAllowTraffic
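A minimal sketch of an appspec.yaml for a Lambda deployment (function names, versions, and hook function names are hypothetical); only the BeforeAllowTraffic and AfterAllowTraffic hooks are available for Lambda:

```yaml
version: 0.0
Resources:
  - myFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "myFunction"     # hypothetical function
        Alias: "live"
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  - BeforeAllowTraffic: "validateBeforeTrafficShift"  # hypothetical hook Lambda
  - AfterAllowTraffic: "validateAfterTrafficShift"    # hypothetical hook Lambda
```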
A decoupled application is using an Amazon SQS queue. The processing layer that is retrieving messages from the queue is not able to keep up with the number of messages being placed in the queue.
What is the FIRST step the developer should take to increase the number of messages the application receives?
Use the API to update the WaitTimeSeconds parameter to a value other than 0
Use the ReceiveMessage API to retrieve up to 10 messages at a time
Configure the queue to use short polling
Add additional Amazon SQS queues and have the application poll those queues
Use the ReceiveMessage API to retrieve up to 10 messages at a time
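A minimal boto3 sketch (the queue URL and handle() are hypothetical); a single ReceiveMessage call can return up to 10 messages, which is the first, simplest lever before adding consumers or queues:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

# One call now returns up to 10 messages instead of 1
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
for message in response.get("Messages", []):
    handle(message)  # hypothetical processing function
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```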