Practice Test III - WhizLabs Flashcards
AWS CodeDeploy is used to configure a deployment group to automatically roll back to the last known good revision when a deployment fails. During the rollback, CodeDeploy cannot retrieve the files required to deploy the earlier revision. Which of the following actions can be taken for a successful rollback? Choose 2.
A. Use Manual rollback instead of automatic rollback.
B. Manually add required files to instance.
C. Use an existing application revision.
D. Map CodeDeploy to access those files from S3 buckets.
E. Create a new application revision.
B & E. Manually add the required files to the instance, or create a new application revision.
During an automatic rollback, CodeDeploy tries to retrieve the files that were part of the previous revision. If those files were deleted or are missing, you need to manually add them to the instance or create a new application revision.
How does a CodeDeploy rollback work?
CodeDeploy rolls back deployments by redeploying a previously deployed revision of an application as a new deployment. These rolled-back deployments are technically new deployments, with new deployment IDs, rather than restored versions of a previous deployment.
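A minimal boto3 sketch of enabling automatic rollback on a deployment group (the application and group names are placeholders):

    import boto3

    codedeploy = boto3.client("codedeploy")

    # Roll back to the last known good revision whenever a deployment fails.
    codedeploy.update_deployment_group(
        applicationName="MyApp",                         # placeholder
        currentDeploymentGroupName="MyDeploymentGroup",  # placeholder
        autoRollbackConfiguration={
            "enabled": True,
            "events": ["DEPLOYMENT_FAILURE"],
        },
    )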
You have a legacy application that processes messages from an SQS queue. The application uses a single thread to poll multiple queues. Which of the following polling configurations is the best option to avoid latency in processing messages?
A. Use short polling with default visibility timeout values.
B. Use long polling with higher visibility timeout values.
C. Use long polling with lower visibility timeout values.
D. Use short polling with higher visibility timeout values.
A. Use short polling with default visibility timeout values.
In this case, the application polls multiple queues with a single thread. Long polling waits until a message arrives or the polling timeout expires on each queue, which can delay the processing of messages sitting in the other queues.
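A minimal boto3 sketch of this pattern (the queue URLs and the process() handler are placeholders): with WaitTimeSeconds=0, each receive call returns immediately when a queue is empty, so the single thread moves on to the next queue instead of blocking.

    import boto3

    sqs = boto3.client("sqs")
    queue_urls = [
        "https://sqs.us-east-1.amazonaws.com/123456789012/queue-a",  # placeholder
        "https://sqs.us-east-1.amazonaws.com/123456789012/queue-b",  # placeholder
    ]

    while True:
        for url in queue_urls:
            # WaitTimeSeconds=0 forces short polling: an empty queue
            # returns immediately rather than holding the thread.
            resp = sqs.receive_message(QueueUrl=url,
                                       MaxNumberOfMessages=10,
                                       WaitTimeSeconds=0)
            for msg in resp.get("Messages", []):
                process(msg)  # placeholder for the application's handler
                sqs.delete_message(QueueUrl=url,
                                   ReceiptHandle=msg["ReceiptHandle"])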
What are the required configurations for setting up a bucket for static website hosting?
Enabling website hosting
Configuring index document support
Permissions required for website access
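A boto3 sketch covering those three steps (the bucket name is a placeholder; the public-read policy also assumes the bucket's public access block settings permit it):

    import boto3, json

    s3 = boto3.client("s3")
    bucket = "my-static-site-bucket"  # placeholder

    # Enable website hosting with an index document.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
    )

    # Grant public read access to the site's objects.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))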
You are developing an application that will make use of Kinesis Firehose for streaming the records onto S3. Your company policy mandates that all data needs to be encrypted at rest. How can you achieve this with Kinesis Firehose? Choose 2.
A. Enable encryption on the Kinesis Data Firehose.
B. Install an SSL certificate in Kinesis Data Firehose.
C. Ensure that all data records are transferred via SSL.
D. Ensure that Kinesis streams are used to transfer the data from the producers.
A & D. Enable encryption on the Kinesis Data Firehose, and ensure that Kinesis streams are used to transfer the data from the producers.
If you have sensitive data, you can enable server-side encryption when you use Kinesis Data Firehose. However, this is only possible if you use a Kinesis stream as your data source. When you configure a Kinesis stream as the data source of a Kinesis Data Firehose delivery stream, Kinesis Data Firehose no longer stores the data at rest. Instead, the data is stored in the Kinesis stream.
Options B & C are invalid because SSL encrypts data in transit, not at rest.
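A boto3 sketch, assuming an existing source stream: enabling server-side encryption on the Kinesis stream that feeds the delivery stream keeps the buffered records encrypted at rest.

    import boto3

    kinesis = boto3.client("kinesis")

    # Firehose stores no data itself when a Kinesis stream is the source,
    # so encrypting the source stream covers the data at rest.
    kinesis.start_stream_encryption(
        StreamName="my-source-stream",  # placeholder
        EncryptionType="KMS",
        KeyId="alias/aws/kinesis",      # AWS-managed key
    )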
What’s the difference between Kinesis Streams and Kinesis Firehose?
With Kinesis Streams, you can retain the data for up to 7 days, whereas Kinesis Data Firehose simply delivers the data directly to a destination such as S3. Use Kinesis Streams if you want to do custom processing on the streaming data; with Kinesis Firehose, you are simply ingesting it into S3, Redshift, or Elasticsearch.
You have been told to make use of CloudFormation templates for deploying applications on EC2 instances. These instances need to be preconfigured with the NGINX web server to host the application. How could you accomplish this with CloudFormation?
You can use the cfn-init helper script in CloudFormation.
When you launch stacks, you can install and configure software applications on EC2 instances by using the cfn-init helper script and the AWS::CloudFormation::Init resource. By using AWS::CloudFormation::Init, you can describe the configurations that you want, rather than scripting procedural steps.
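A trimmed sketch of that approach (the AMI ID is a placeholder), with the template embedded as a Python string and deployed via boto3; at boot, cfn-init reads the AWS::CloudFormation::Init metadata and installs and starts NGINX:

    import boto3

    # Declarative config: cfn-init installs the nginx package and keeps
    # the service running, driven by AWS::CloudFormation::Init metadata.
    TEMPLATE = """
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Metadata:
          AWS::CloudFormation::Init:
            config:
              packages:
                yum:
                  nginx: []
              services:
                sysvinit:
                  nginx:
                    enabled: true
                    ensureRunning: true
        Properties:
          ImageId: ami-0123456789abcdef0  # placeholder AMI
          InstanceType: t3.micro
          UserData:
            Fn::Base64: !Sub |
              #!/bin/bash -xe
              /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
    """

    boto3.client("cloudformation").create_stack(
        StackName="nginx-demo", TemplateBody=TEMPLATE)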
You are building microservices with Amazon ECS. They will run on an EC2 instance that runs the ECS container agent. After the EC2 instance launched successfully, the ECS container agent registered the instance into a cluster. When that EC2 container instance is stopped, what is the status of the container instance and of its corresponding agent connection?
The container instance status remains ACTIVE, and the agent connection status becomes FALSE.
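A boto3 sketch to observe this (the cluster name and container instance ARN are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    resp = ecs.describe_container_instances(
        cluster="my-cluster",                              # placeholder
        containerInstances=["<container-instance-arn>"],   # placeholder
    )
    for ci in resp["containerInstances"]:
        # For a stopped EC2 instance this prints: ACTIVE False
        print(ci["status"], ci["agentConnected"])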
What is an Amazon ECS container instance?
An ECS container instance is an EC2 instance that is running the ECS container agent and has been registered into a cluster.
What is CORS?
Cross-origin resource sharing (CORS) is a browser security feature that restricts cross-origin HTTP requests initiated from scripts running in your browser. If your REST API’s resources receive non-simple cross-origin HTTP requests, you need to enable CORS support.
For simple cross-origin POST requests, what does the response from your resource need to include?
It needs to include the header Access-Control-Allow-Origin, with its value set to ‘*’ or to the origin(s) allowed to access that resource.
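For example, with a Lambda proxy integration behind an API Gateway REST API, the function itself has to return that header (a minimal sketch):

    import json

    def handler(event, context):
        # Simple cross-origin POST: the response must carry
        # Access-Control-Allow-Origin ("*" or an explicit allowed origin).
        return {
            "statusCode": 200,
            "headers": {"Access-Control-Allow-Origin": "*"},
            "body": json.dumps({"ok": True}),
        }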
To support CORS for non-simple HTTP requests, what would a REST API resource need to implement?
When a browser receives a non-simple HTTP request, the CORS protocol requires the browser to send a preflight request to the server and wait for approval (or a request for credentials) from the server before sending the actual request. The preflight request appears to your API as an HTTP request that:
Includes an Origin header.
Uses the OPTIONS method.
Includes the following headers: Access-Control-Request-Method, Access-Control-Request-Headers.
To support CORS, therefore, a REST API resource needs to implement an OPTIONS method that can respond to the OPTIONS preflight request with at least the following response headers:
Access-Control-Allow-Methods
Access-Control-Allow-Headers
Access-Control-Allow-Origin
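Continuing the Lambda proxy sketch above, one way to satisfy the preflight is to answer OPTIONS requests with those three headers (the origin, methods, and headers shown are assumptions):

    import json

    CORS_HEADERS = {
        "Access-Control-Allow-Origin": "https://www.example.com",  # assumed origin
        "Access-Control-Allow-Methods": "OPTIONS,GET,POST",
        "Access-Control-Allow-Headers": "Content-Type,Authorization",
    }

    def handler(event, context):
        # Answer the browser's preflight before it sends the real request.
        if event["httpMethod"] == "OPTIONS":
            return {"statusCode": 204, "headers": CORS_HEADERS, "body": ""}
        return {
            "statusCode": 200,
            "headers": CORS_HEADERS,
            "body": json.dumps({"ok": True}),
        }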
What happens when a DAX cluster fronting DynamoDB receives a strongly consistent read request from an application?
For strongly consistent read requests, the DAX cluster passes the request through to DynamoDB and does not cache the result.
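A sketch using the amazon-dax-client library for Python (the cluster endpoint and table are placeholders; the constructor arguments may vary by library version):

    from amazondax import AmazonDaxClient

    # Endpoint is a placeholder for a real DAX cluster endpoint.
    dax = AmazonDaxClient(
        endpoint_url="daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com")

    # ConsistentRead=True: DAX passes the request through to DynamoDB
    # and does not cache the result; omit it to use the item cache.
    item = dax.get_item(
        TableName="MyTable",              # placeholder
        Key={"pk": {"S": "user#1"}},
        ConsistentRead=True,
    )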
What is the DAX item cache?
DAX maintains an item cache to store the results from GetItem and BatchGetItem operations. The items in the cache represent eventually consistent data from DynamoDB, and are stored by their primary key values.
What is the DAX query cache?
DAX maintains a query cache to store the results from Query and Scan operations. The items in this cache represent result sets from queries and scans on DynamoDB tables. These result sets are stored by their parameter values.
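Continuing the DAX client sketch above (same placeholder table), repeated eventually consistent reads illustrate the two caches:

    # A second identical call is served from the item cache,
    # keyed by the item's primary key.
    dax.get_item(TableName="MyTable", Key={"pk": {"S": "user#1"}})

    # A second identical call is served from the query cache,
    # keyed by the full set of Query parameters.
    dax.query(
        TableName="MyTable",
        KeyConditionExpression="pk = :p",
        ExpressionAttributeValues={":p": {"S": "user#1"}},
    )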