Notes 6 Flashcards

1
Q

An application needs to generate SMS text messages and emails for a large number of subscribers. Which AWS service can be used to send these messages to customers?

A

Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. In Amazon SNS, there are two types of clients—publishers and subscribers—also referred to as producers and consumers.

Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel.

Subscribers (that is, web servers, email addresses, Amazon SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (that is, Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
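As a sketch, subscribing an SMS endpoint and publishing to a topic with the AWS CLI might look like this (the topic ARN and phone number are placeholders):

```shell
# Subscribe an SMS endpoint (placeholder phone number) to a topic
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:customer-alerts \
  --protocol sms \
  --notification-endpoint +15555550100

# Publish once; SNS fans the message out to all subscribers
aws sns publish \
  --topic-arn arn:aws:sns:us-east-1:123456789012:customer-alerts \
  --message "Your order has shipped"
```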

2
Q

A developer is planning to use a Lambda function to process incoming requests from an Application Load Balancer (ALB). How can this be achieved?

A

Create a target group and register the Lambda function using the AWS CLI.

You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.
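As a rough sketch with placeholder names and ARNs, the CLI steps might be:

```shell
# Create a target group with the lambda target type
aws elbv2 create-target-group --name lambda-tg --target-type lambda

# Allow the ALB to invoke the function
aws lambda add-permission \
  --function-name my-function \
  --statement-id elb-invoke \
  --principal elasticloadbalancing.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn <target-group-arn>

# Register the function as the target
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=<function-arn>
```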

3
Q

A company has three different environments: Development, QA, and Production. The company wants to deploy its code first in the Development environment, then QA, and then Production.

Which AWS service can be used to meet this requirement?

A

Use AWS CodeDeploy to create multiple deployment groups

4
Q

A Developer is creating a new web application that will be deployed using AWS Elastic Beanstalk from the AWS Management Console. The Developer is about to create a source bundle which will be uploaded using the console.

Which of the following are valid requirements for creating the source bundle? (Select TWO.)

A

Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)

Not exceed 512 MB

Not include a parent folder or top-level directory (subdirectories are fine)
5
Q

AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.

A
6
Q

A company runs a workflow using an AWS Step Functions state machine. When testing the state machine, errors were experienced in a Step Functions task state. To troubleshoot the issue a developer requires that the state input be included along with the error message in the state output.

Which coding practice can preserve both the original input and the error for the state?

A

Use ResultPath in a Catch statement to include the original input with the error.

A Step Functions execution receives a JSON text as input and passes that input to the first state in the workflow. Individual states receive JSON as input and usually pass JSON as output to the next state.

In the Amazon States Language, these fields filter and control the flow of JSON from state to state:

InputPath

OutputPath

ResultPath

Parameters

ResultSelector

Use ResultPath to combine a task result with task input, or to select one of these. The path you provide to ResultPath controls what information passes to the output. Use ResultPath in a Catch to include the error with the original input, instead of replacing it.
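A minimal Catch sketch (the state name "HandleError" and the "$.error" path are illustrative). Because ResultPath points at "$.error", the error object is appended to the original input rather than replacing it:

```json
"Catch": [
  {
    "ErrorEquals": ["States.ALL"],
    "ResultPath": "$.error",
    "Next": "HandleError"
  }
]
```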

7
Q

AWS Cognito User pool vs Identity pool

A

A Cognito user pool can be used to authenticate (sign in / sign up) but the Cognito identity pool is used to provide authorised access to AWS services.

User pool = authentication (sign in/up)
Identity pool = authorisation (access to AWS services)

8
Q

An application asynchronously invokes an AWS Lambda function. The application has recently been experiencing occasional errors that result in failed invocations. A developer wants to store the messages that resulted in failed invocations such that the application can automatically retry processing them.

What should the developer do to accomplish this goal with the LEAST operational overhead?

A

Configure a redrive policy on an Amazon SQS queue. Set the dead-letter queue as an event source to the Lambda function.

Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can’t be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate unconsumed messages to determine why their processing doesn’t succeed.

The redrive policy specifies the source queue, the dead-letter queue, and the conditions under which Amazon SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.

You can set your DLQ as an event source to the Lambda function to drain your DLQ. This will ensure that all failed invocations are automatically retried.
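The redrive policy is set as a queue attribute; a sketch with placeholder ARNs (note the RedrivePolicy value is itself a JSON string):

```json
{
  "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-dlq\",\"maxReceiveCount\":\"5\"}"
}
```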

9
Q

AWS Secrets Manager

  • handles key/value pair (username/password)
  • rotate credentials (passwords)

AWS KMS (Key Management Service)

  • encrypts keys used to encrypt data such as files, data in S3, etc.
  • rotate keys
A
10
Q

A Developer is developing a web application and will maintain separate sets of resources for the alpha, beta, and release stages. Each version runs on Amazon EC2 and uses an Elastic Load Balancer.

How can the Developer create a single page to view and manage all of the resources?

A

Create a resource group

https://docs.aws.amazon.com/ARG/latest/userguide/resource-groups.html

By default, the AWS Management Console is organized by AWS service. But with Resource Groups, you can create a custom console that organizes and consolidates information based on criteria specified in tags, or the resources in an AWS CloudFormation stack. The following list describes some of the cases in which resource grouping can help organize your resources.

An application that has different phases, such as development, staging, and production.

Projects managed by multiple departments or individuals.

A set of AWS resources that you use together for a common project or that you want to manage or monitor as a group.

A set of resources related to applications that run on a specific platform, such as Android or iOS.
11
Q

Read capacity units for reads of up to 4 KB do not get divided by 4; a read of up to 4 KB consumes one full unit.
1 read capacity unit (RCU) = 1 strongly consistent read per second of up to 4 KB = 2 eventually consistent reads per second of up to 4 KB.

So for example:

 - Eventually consistent, 15 RCUs, 1 KB item = 30 items read per second.
 - Strongly consistent, 15 RCUs, 1 KB item = 15 items read per second.
 - Eventually consistent, 5 RCUs, 4 KB item = 10 items read per second.
 - Strongly consistent, 5 RCUs, 4 KB item = 5 items read per second.

For items larger than 4 KB, do the usual calculation: round the item size up to the next 4 KB multiple and divide by 4 KB to get the units consumed per read.

A read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. For example, suppose that you create a table with 10 provisioned read capacity units. This allows you to perform 10 strongly consistent reads per second, or 20 eventually consistent reads per second, for items up to 4 KB.

Reading an item larger than 4 KB consumes more read capacity units. For example, a strongly consistent read of an item that is 8 KB (4 KB × 2) consumes 2 read capacity units. An eventually consistent read on that same item consumes only 1 read capacity unit.

Item sizes for reads are rounded up to the next 4 KB multiple. For example, reading a 3,500-byte item consumes the same throughput as reading a 4 KB item. Therefore, the smaller (1 KB) items in this scenario would consume the same number of RCUs as the 4 KB items. Also, we know that eventually consistent reads consume half the RCUs of strongly consistent reads.
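The arithmetic above can be wrapped in a small helper to sanity-check the worked examples (a local sketch, not an AWS API call):

```python
import math

def rcus_per_read(item_size_kb, strongly_consistent=True):
    """RCUs consumed by one read of an item of the given size.

    Item size is rounded up to the next 4 KB multiple; an eventually
    consistent read costs half as much as a strongly consistent one.
    """
    units = math.ceil(item_size_kb / 4)  # round up to a 4 KB multiple
    return units if strongly_consistent else units / 2

def items_per_second(rcus, item_size_kb, strongly_consistent=True):
    """Items per second readable with `rcus` provisioned read capacity."""
    return rcus / rcus_per_read(item_size_kb, strongly_consistent)

# The worked examples from above:
assert items_per_second(15, 1, strongly_consistent=False) == 30
assert items_per_second(15, 1, strongly_consistent=True) == 15
assert items_per_second(5, 4, strongly_consistent=False) == 10
assert items_per_second(5, 4, strongly_consistent=True) == 5
assert rcus_per_read(8) == 2          # 8 KB strongly consistent read
assert rcus_per_read(3.5) == 1        # 3,500-byte item rounds up to 4 KB
```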

A
12
Q

exponential backoff = to use progressively longer waits between retries for consecutive error responses
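A minimal sketch of the idea in Python (the base, cap, and jitter choices are illustrative; the AWS SDKs implement retries with backoff for you):

```python
import random

def backoff_delays(base=0.5, cap=30.0, retries=5):
    """Progressively longer waits between retries: base * 2**attempt, capped."""
    return [min(cap, base * 2 ** attempt) for attempt in range(retries)]

def backoff_with_jitter(attempt, base=0.5, cap=30.0):
    """'Full jitter' variant: random wait between 0 and the capped delay."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

assert backoff_delays(base=1, cap=60, retries=5) == [1, 2, 4, 8, 16]
assert backoff_delays(base=1, cap=10, retries=5) == [1, 2, 4, 8, 10]  # cap kicks in
assert 0 <= backoff_with_jitter(3, base=1, cap=60) <= 8
```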

A
13
Q

An organization has an account for each environment: Production, Testing, Development. A Developer with an IAM user in the Development account needs to launch resources in the Production and Testing accounts. What is the MOST efficient way to provide access?

A

Create a role with the required permissions in the Production and Testing accounts and have the Developer assume that role

14
Q

An application running on Amazon EC2 generates a large number of small files (1KB each) containing personally identifiable information that must be converted to ciphertext. The data will be stored on a proprietary network-attached file system. What is the SAFEST way to encrypt the data using AWS KMS?

A

Encrypt the data directly with a customer managed customer master key (CMK)

With AWS KMS you can encrypt files directly with a customer master key (CMK). A CMK can encrypt up to 4KB (4096 bytes) of data in a single encrypt, decrypt, or reencrypt operation. As CMKs cannot be exported from KMS this is a very safe way to encrypt small amounts of data.

15
Q

Cognito user pool vs identity pool

A

User pool use cases:

Use a user pool when you need to:

Design sign-up and sign-in webpages for your app.

Access and manage user data.

Track user device, location, and IP address, and adapt to sign-in requests of different risk levels.

Use a custom authentication flow for your app.

Identity pool use cases:

Use an identity pool when you need to:

Give your users access to AWS resources, such as an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon DynamoDB table.

Generate temporary AWS credentials for unauthenticated users.
16
Q

A Developer needs to manage AWS services from a local development server using the AWS CLI. How can the Developer ensure that the CLI uses their IAM permissions?

A

Run the aws configure command and provide the Developer’s IAM access key ID and secret access key

For general use, the “aws configure” command is the fastest way to set up your AWS CLI installation.
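Running the command prompts for four values (shown here with AWS's documented example credentials):

```shell
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json
```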

17
Q

API logging in Cloudwatch

A

There are two types of API logging in CloudWatch: execution logging and access logging. In execution logging, API Gateway manages the CloudWatch Logs. The process includes creating log groups and log streams, and reporting to the log streams any caller’s requests and responses.

The logged data includes errors or execution traces (such as request or response parameter values or payloads), data used by Lambda authorizers, whether API keys are required, whether usage plans are enabled, and so on.

In access logging, you, as an API Developer, want to log who has accessed your API and how the caller accessed the API. You can create your own log group or choose an existing log group that could be managed by API Gateway.

18
Q

Messages produced by an application must be pushed to multiple Amazon SQS queues. What is the BEST solution for this requirement?

A

Publish the messages to an Amazon SNS topic and subscribe each SQS queue to the topic

Amazon SNS works closely with Amazon Simple Queue Service (Amazon SQS). Both services provide different benefits for developers. Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.

When you subscribe an Amazon SQS queue to an Amazon SNS topic, you can publish a message to the topic and Amazon SNS sends an Amazon SQS message to the subscribed queue. The Amazon SQS message contains the subject and message that were published to the topic along with metadata about the message in a JSON document.
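A sketch of the fan-out subscription with placeholder ARNs (repeat per queue; each queue also needs an access policy allowing the topic to send to it):

```shell
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:orders \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:123456789012:orders-queue-1
```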

19
Q

Which AWS services are supported for Lambda event source mappings?

A
  • Amazon DynamoDB
  • Amazon Kinesis
  • Amazon Simple Queue Service (SQS)
20
Q

AWS CodeBuild, CodeDeploy and CodePipeline description

A

AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides pre-packaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more.

CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. This service works with the other Developer Tools to create a pipeline. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.

21
Q

The source code for an application is stored in a file named index.js that is in a folder along with a template file that includes the following code:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
    LambdaFunctionWithAPI:
      Type: AWS::Serverless::Function
      Properties:
        Handler: index.handler
        Runtime: nodejs12.x

What does a Developer need to do to prepare the template so it can be deployed using an AWS CLI command?

A

Run the “aws cloudformation package” command to upload the source code to an Amazon S3 bucket and produce a modified CloudFormation template

The template shown is an AWS SAM template for deploying a serverless application. This can be identified by the template header: Transform: 'AWS::Serverless-2016-10-31'

The Developer will need to package and then deploy the template. To do this the source code must be available in the same directory or referenced using the “CodeUri” property. Then, the Developer can use the “aws cloudformation package” or “sam package” commands to prepare the local artifacts (local paths) that your AWS CloudFormation template references.

The command uploads local artifacts, such as source code for an AWS Lambda function or a Swagger file for an AWS API Gateway REST API, to an S3 bucket. The command returns a copy of your template, replacing references to local artifacts with the S3 location where the command uploaded the artifacts.

Once that is complete the template can be deployed using the “aws cloudformation deploy” or “sam deploy” commands. Therefore, the next step in this scenario is for the Developer to run the “aws cloudformation package” command to upload the source code to an Amazon S3 bucket and produce a modified CloudFormation template. An example of this command is provided below:

aws cloudformation package --template-file /path_to_template/template.json --s3-bucket bucket-name --output-template-file packaged-template.json

22
Q

A Developer has added a Global Secondary Index (GSI) to an existing Amazon DynamoDB table. The GSI is used mainly for read operations whereas the primary table is extremely write-intensive. Recently, the Developer has noticed throttling occurring under heavy write activity on the primary table. However, the write capacity units on the primary table are not fully utilized.

What is the best explanation for why the writes are being throttled on the primary table?

A

The write capacity units on the GSI are under provisioned

Some applications might need to perform many kinds of queries, using a variety of different attributes as query criteria. To support these requirements, you can create one or more global secondary indexes and issue Query requests against these indexes in Amazon DynamoDB.

When items from a primary table are written to the GSI they consume write capacity units. It is essential to ensure the GSI has sufficient WCUs (typically, at least as many as the primary table). If writes are throttled on the GSI, the main table will also be throttled (even if there are enough WCUs on the main table). LSIs do not cause any special throttling considerations.

In this scenario, it is likely that the Developer assumed that the GSI would need fewer WCUs as it is more read-intensive and neglected to factor in the WCUs required for writing data into the GSI. Therefore, the most likely explanation is that the write capacity units on the GSI are under-provisioned.

23
Q

CloudWatch log filters only log the filter data after the filter is created.

After the CloudWatch Logs agent begins publishing log data to Amazon CloudWatch, you can begin searching and filtering the log data by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs.

CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on. You can use any type of CloudWatch statistic, including percentile statistics, when viewing these metrics or setting alarms.

Filters do not retroactively filter data. Filters only publish the metric data points for events that happen after the filter was created. Filtered results return the first 50 lines, which will not be displayed if the timestamp on the filtered results is earlier than the metric creation time.

A
24
Q

A company needs to provide additional security for their APIs deployed on Amazon API Gateway. They would like to be able to authenticate their customers with a token. What is the SAFEST way to do this?

A

A Lambda authorizer (formerly known as a custom authorizer) is an API Gateway feature that uses a Lambda function to control access to your API.

A Lambda authorizer is useful if you want to implement a custom authorization scheme that uses a bearer token authentication strategy such as OAuth or SAML, or that uses request parameters to determine the caller’s identity.

When a client makes a request to one of your API’s methods, API Gateway calls your Lambda authorizer, which takes the caller’s identity as input and returns an IAM policy as output.

There are two types of Lambda authorizers:

A token-based Lambda authorizer (also called a TOKEN authorizer) receives the caller’s identity in a bearer token, such as a JSON Web Token (JWT) or an OAuth token.

A request parameter-based Lambda authorizer (also called a REQUEST authorizer) receives the caller’s identity in a combination of headers, query string parameters, stageVariables, and $context variables.

For this scenario, a Lambda authorizer is the most secure method available. It can also be used with usage plans and AWS recommend that you don’t rely only on API keys, so a Lambda authorizer is a better solution.

25
Q

In CloudFront, to enable SSL between the origin and the distribution the Developer can configure the Origin Protocol Policy. Depending on the domain name used (CloudFront default or custom), the steps are different. To enable SSL between the end-user and CloudFront the Viewer Protocol Policy should be configured.

A
26
Q

A Developer is deploying an application using Docker containers on Amazon ECS. One of the containers runs a database and should be placed on instances in the “databases” task group.

What should the Developer use to control the placement of the database task?

A

A task placement constraint is a rule that is considered during task placement. Task placement constraints can be specified when either running a task or creating a new service. The task placement constraints can be updated for existing services as well.

Amazon ECS supports the following types of task placement constraints:

distinctInstance

Place each task on a different container instance. This task placement constraint can be specified when either running a task or creating a new service.

memberOf

Place tasks on container instances that satisfy an expression. For more information about the expression syntax for constraints, see Cluster Query Language.

The memberOf task placement constraint can be specified with the following actions:

Running a task

Creating a new service

Creating a new task definition

Creating a new revision of an existing task definition
27
Q

A Developer is creating a service on Amazon ECS and needs to ensure that each task is placed on a different container instance.

How can this be achieved?

A

Use the distinctInstance task placement constraint.

A task placement constraint is a rule that is considered during task placement. Task placement constraints can be specified when either running a task or creating a new service. The distinctInstance constraint places each task on a different container instance.

“Use a task placement strategy” is incorrect as this is used to select instances for task placement using the binpack, random and spread algorithms.

28
Q

A web application is using Amazon Kinesis Data Streams for ingesting IoT data that is then stored before processing for up to 24 hours.
How can the Developer implement encryption at rest for data stored in Amazon Kinesis Data Streams?

A

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.

Server-side encryption is a feature in Amazon Kinesis Data Streams that automatically encrypts data before it’s at rest by using an AWS KMS customer master key (CMK) you specify. Data is encrypted before it’s written to the Kinesis stream storage layer and decrypted after it’s retrieved from storage. As a result, your data is encrypted at rest within the Kinesis Data Streams service. This allows you to meet strict regulatory requirements and enhance the security of your data.

With server-side encryption, your Kinesis stream producers and consumers don’t need to manage master keys or cryptographic operations. Your data is automatically encrypted as it enters and leaves the Kinesis Data Streams service, so your data at rest is encrypted. AWS KMS provides all the master keys that are used by the server-side encryption feature. AWS KMS makes it easy to use a CMK for Kinesis that is managed by AWS, a user-specified AWS KMS CMK, or a master key imported into the AWS KMS service.
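A sketch of enabling server-side encryption on an existing stream (stream name is a placeholder; alias/aws/kinesis is the AWS managed CMK for Kinesis):

```shell
aws kinesis start-stream-encryption \
  --stream-name my-stream \
  --encryption-type KMS \
  --key-id alias/aws/kinesis
```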

29
Q

DynamoDB stores and retrieves data based on a Primary key. There are two types of Primary key:

Partition key – unique attribute (e.g. user ID).

Value of the Partition key is input to an internal hash function which determines the partition or physical location on which the data is stored.

If you are using the Partition key as your Primary key, then no two items can have the same partition key.

Composite key – Partition key + Sort key in combination.

Example is user posting to a forum. Partition key would be the user ID, Sort key would be the timestamp of the post.

2 items may have the same Partition key, but they must have a different Sort key.

All items with the same Partition key are stored together, then sorted according to the Sort key value.

Allows you to store multiple items with the same partition key.

As stated above, if using a partition key alone as per the configuration provided with the question, then you cannot have two items with the same partition key. The only resolution is to recreate the table with a composite key consisting of the userid and timestamp attributes. In that case the Developer will be able to add multiple items with the same userid as long as the timestamp is unique.
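A sketch of creating such a table with a composite key (the table name is illustrative):

```shell
aws dynamodb create-table \
  --table-name ForumPosts \
  --attribute-definitions AttributeName=userid,AttributeType=S AttributeName=timestamp,AttributeType=S \
  --key-schema AttributeName=userid,KeyType=HASH AttributeName=timestamp,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST
```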

A
30
Q

A company uses Amazon SQS to decouple an online application that generates memes. The SQS consumers poll the queue regularly to keep throughput high and this is proving to be costly and resource intensive. A Developer has been asked to review the system and propose changes that can reduce costs and the number of empty responses.

What would be the BEST approach to MINIMIZING cost?

A

The process of consuming messages from a queue depends on whether you use short or long polling. By default, Amazon SQS uses short polling, querying only a subset of its servers (based on a weighted random distribution) to determine whether any messages are available for a response. You can use long polling to reduce your costs while allowing your consumers to receive messages as soon as they arrive in the queue.

When the wait time for the ReceiveMessage API action is greater than 0, long polling is in effect. The maximum long polling wait time is 20 seconds. Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren’t included in a response).

Therefore, the best way to optimize resource usage and reduce the number of empty responses (and cost) is to configure long polling by setting the Imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds.

CORRECT: “Set the Imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds” is the correct answer.

INCORRECT: “Set the imaging queue visibility Timeout attribute to 20 seconds” is incorrect. This attribute configures message visibility which will not reduce empty responses.

INCORRECT: “Set the imaging queue MessageRetentionPeriod attribute to 20 seconds” is incorrect. This attribute sets the length of time, in seconds, for which Amazon SQS retains a message.

INCORRECT: “Set the DelaySeconds parameter of a message to 20 seconds” is incorrect. This attribute sets the length of time, in seconds, for which the delivery of all messages in the queue is delayed.

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html

31
Q

Which AWS services can be used for asynchronous message passing?

A

Amazon SQS and Amazon SNS

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.

Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

32
Q

A company uses an Amazon Simple Queue Service (SQS) Standard queue for an application. An issue has been identified where applications are picking up messages from the queue that are still being processed causing duplication. What can a Developer do to resolve this issue?

A

Immediately after a message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SetQueueAttributes.html

33
Q

A mobile application has hundreds of users. Each user may use multiple devices to access the application. The Developer wants to assign unique identifiers to these users regardless of the device they use.

Which of the following methods should be used to obtain unique identifiers?

A

Amazon Cognito supports developer authenticated identities, in addition to web identity federation through Facebook (Identity Pools), Google (Identity Pools), Login with Amazon (Identity Pools), and Sign in with Apple (Identity Pools).

With developer authenticated identities, you can register and authenticate users via your own existing authentication process, while still using Amazon Cognito to synchronize user data and access AWS resources.

Using developer authenticated identities involves interaction between the end user device, your backend for authentication, and Amazon Cognito.

34
Q

A Developer needs to run some code using Lambda in response to an event and forward the execution result to another application using a pub/sub notification.

How can the Developer accomplish this?

A

With Destinations, you can send asynchronous function execution results to a destination resource without writing code. A function execution result includes version, timestamp, request context, request payload, response context, and response payload. For each execution status (i.e. Success and Failure), you can choose one destination from four options: another Lambda function, an SNS topic, an SQS standard queue, or EventBridge.

For this scenario, the code will be run by Lambda and the execution result will then be sent to the SNS topic. The application that is subscribed to the SNS topics will then receive the notification.

https://aws.amazon.com/about-aws/whats-new/2019/11/aws-lambda-supports-destinations-for-asynchronous-invocations/

35
Q

What does an Amazon SQS delay queue accomplish?

A

Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages.

If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes.
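A sketch of creating a delay queue (queue name and 45-second delay are illustrative; DelaySeconds accepts 0-900):

```shell
aws sqs create-queue \
  --queue-name my-delay-queue \
  --attributes DelaySeconds=45
```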

36
Q

CloudWatch standard vs high resolution

A

CloudWatch Standard resolution = 1 minute

CloudWatch High resolution = 1 second, and you can read and retrieve it with a period of 1 second, 5 seconds, 10 seconds, 30 seconds, or any multiple of 60 seconds
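A sketch of publishing a high-resolution custom metric (namespace, metric name, and value are placeholders):

```shell
aws cloudwatch put-metric-data \
  --namespace MyApp \
  --metric-name Latency \
  --value 12.5 \
  --storage-resolution 1
```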

37
Q

A Developer has created a task definition that includes the following JSON code:

"placementConstraints": [
    {
        "expression": "task:group == databases",
        "type": "memberOf"
    }
]
What will be the effect for tasks using this task definition?

A

The tasks will be placed only on container instances in the “databases” task group (the memberOf constraint with the expression “task:group == databases” restricts placement to that group).
38
Q

How can the Developer ensure that all data that is sent to the S3 bucket is encrypted in transit?

A

Create an S3 bucket policy that denies traffic where SecureTransport is false

INCORRECT: “Create an S3 bucket policy that denies any S3 Put request that does not include the x-amz-server-side-encryption” is incorrect. This will ensure that the data is encrypted at rest, but not in transit.
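A sketch of such a bucket policy (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
```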

39
Q

An AWS Lambda function must be connected to an Amazon VPC private subnet that does not have Internet access. The function also connects to an Amazon DynamoDB table. What MUST a Developer do to enable access to the DynamoDB table?

A

To connect to AWS services from a private subnet with no internet access, use VPC endpoints. A VPC endpoint for DynamoDB enables resources in a VPC to use their private IP addresses to access DynamoDB with no exposure to the public internet.

When you create a VPC endpoint for DynamoDB, any requests to a DynamoDB endpoint within the Region (for example, dynamodb.us-west-2.amazonaws.com) are routed to a private DynamoDB endpoint within the Amazon network.
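A sketch of creating the gateway endpoint (VPC ID, Region, and route table ID are placeholders):

```shell
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0example \
  --service-name com.amazonaws.us-east-1.dynamodb \
  --route-table-ids rtb-0example
```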

40
Q

A Developer is managing an application that includes an Amazon SQS queue. The consumers that process the data from the queue are connecting in short cycles and the queue often does not return messages. The cost for API calls is increasing. How can the Developer optimize the retrieval of messages and reduce cost?

A

Configure long polling by setting the ReceiveMessageWaitTimeSeconds attribute on the queue (or the WaitTimeSeconds parameter on ReceiveMessage) to a value greater than 0. Long polling makes SQS wait for messages to arrive before responding, which reduces the number of empty responses and lowers the API call cost.

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html
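
The fix is to enable long polling. A hedged sketch with the AWS CLI, assuming a placeholder queue URL:

```shell
# Enable long polling on the queue itself (20 seconds is the maximum)
aws sqs set-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/example-queue \
    --attributes ReceiveMessageWaitTimeSeconds=20

# Or enable it per request on a single ReceiveMessage call
aws sqs receive-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/example-queue \
    --wait-time-seconds 20
```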

41
Q

An independent software vendor (ISV) uses Amazon S3 and Amazon CloudFront to distribute software updates. They would like to provide their premium customers with access to updates faster. What is the MOST efficient way to distribute these updates only to the premium customers?

A

Create a signed URL with access to the content and distribute it to the premium customers

Create an origin access identity (OAI) and associate it with the distribution and configure permissions
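
As a sketch (assuming AWS CLI v2 and a CloudFront key pair whose private key is stored locally), a signed URL for a premium customer could be generated like this:

```shell
# URL, key-pair ID and key file are placeholders
aws cloudfront sign \
    --url https://d111111abcdef8.cloudfront.net/updates/latest.zip \
    --key-pair-id APKAEXAMPLEKEYID \
    --private-key file://private-key.pem \
    --date-less-than 2030-01-01
```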

42
Q

A Developer is publishing custom metrics for Amazon EC2 using the Amazon CloudWatch CLI. The Developer needs to add further context to the metrics being published by organizing them by EC2 instance and Auto Scaling Group.

What should the Developer add to the CLI command when publishing the metrics using put-metric-data?

A

In custom metrics, the --dimensions parameter is common. A dimension further clarifies what the metric is and what data it stores; each dimension is defined by a name and value pair, and multiple dimensions can be associated with one metric.

For example, in the EC2 namespace, dimensions organize metrics by Auto Scaling group and per-instance metrics. Therefore the Developer should use the --dimensions parameter.
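
A hedged example of publishing a metric with dimensions (the namespace, metric name, and IDs are placeholders):

```shell
# Organize the custom metric by instance and Auto Scaling group
aws cloudwatch put-metric-data \
    --namespace "MyApp" \
    --metric-name RequestLatency \
    --value 123 \
    --dimensions InstanceId=i-0abc1234567890def,AutoScalingGroupName=example-asg
```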

43
Q

A Development team manage a hybrid cloud environment. They would like to collect system-level metrics from on-premises servers and Amazon EC2 instances. How can the Development team collect this information MOST efficiently?

A

Install the CloudWatch agent on the on-premises servers and EC2 instances

The unified CloudWatch agent can be installed on both on-premises servers and Amazon EC2 instances, and supports a range of operating systems
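
As an illustrative sketch (the install path and config file name are assumptions), the agent can be started on an on-premises server with:

```shell
# Use -m ec2 instead of -m onPremise when running on an EC2 instance
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m onPremise -s \
    -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json
```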

44
Q

A Developer is creating an application that uses Amazon EC2 instances and must be highly available and fault tolerant. How should the Developer configure the VPC?

A

To ensure high availability and fault tolerance, the Developer should create a subnet within each Availability Zone and distribute the EC2 instances across these subnets.

The Developer would likely use Amazon EC2 Auto Scaling, which automatically launches instances in each subnet, and Elastic Load Balancing to distribute incoming traffic.

45
Q

A developer has created a Docker image and uploaded it to an Amazon Elastic Container Registry (ECR) repository. How can the developer pull the image to his workstation using the docker client?

A

Run "aws ecr get-login-password", pipe the output to "docker login", then issue a "docker pull" command specifying the image as "registry/repository[:tag]"
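
A sketch of the full sequence, assuming a placeholder account ID, Region, and repository name:

```shell
# Authenticate the docker client against the ECR registry
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin \
    123456789012.dkr.ecr.us-east-1.amazonaws.com

# Pull the image using registry/repository[:tag]
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/example-repo:latest
```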

46
Q

The manager of a development team is setting up a shared S3 bucket for team members. The manager would like to use a single policy to allow each user to have access to their objects in the S3 bucket. Which feature can be used to generalize the policy?

A

When this policy is evaluated, IAM replaces the variable ${aws:username} with the friendly name of the actual current user. This means that a single policy applied to a group of users can control access to a bucket by using the username as part of the resource's name.

CORRECT: “Variable” is the correct answer.

INCORRECT: “Condition” is incorrect. The Condition element (or Condition block) lets you specify conditions for when a policy is in effect.

INCORRECT: “Principal” is incorrect. You can use the Principal element in a policy to specify the principal that is allowed or denied access to a resource. However, in this scenario a variable is needed to create a generic policy that can provide the necessary permissions to different principals using variables.

INCORRECT: “Resource” is incorrect. The Resource element specifies the object or objects that the statement covers.
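
A minimal sketch of such a policy, assuming a hypothetical bucket named example-team-bucket with one folder per user:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-team-bucket/${aws:username}/*"
        }
    ]
}
```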

47
Q

Which of the following processes is a correct implementation of envelope encryption?

A

When you encrypt your data, your data is protected, but you have to protect your encryption key. One strategy is to encrypt it. Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.

You can even encrypt the data encryption key under another encryption key and encrypt that encryption key under another encryption key. But, eventually, one key must remain in plaintext so you can decrypt the keys and your data. This top-level plaintext key encryption key is known as the master key.

Envelope encryption offers several benefits:

Protecting data keys

When you encrypt a data key, you don’t have to worry about storing the encrypted data key, because the data key is inherently protected by encryption. You can safely store the encrypted data key alongside the encrypted data.

Encrypting the same data under multiple master keys

Encryption operations can be time consuming, particularly when the data being encrypted are large objects. Instead of re-encrypting raw data multiple times with different keys, you can re-encrypt only the data keys that protect the raw data.

Combining the strengths of multiple algorithms

In general, symmetric key algorithms are faster and produce smaller ciphertexts than public key algorithms. But public key algorithms provide inherent separation of roles and easier key management. Envelope encryption lets you combine the strengths of each strategy.

As described above, the process that should be implemented is to encrypt plaintext data with a data key and then encrypt the data key with a top-level plaintext master key.
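
The same flow can be sketched locally with OpenSSL (a toy illustration only; the local passphrase "demo-master-key" stands in for a KMS master key):

```shell
set -e
echo "top secret data" > secret.txt
# 1. Generate a random data key
openssl rand -hex 32 > data.key
# 2. Encrypt the data with the data key
openssl enc -aes-256-cbc -pbkdf2 -pass file:data.key \
    -in secret.txt -out secret.enc
# 3. Encrypt the data key under the master key
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-master-key \
    -in data.key -out data.key.enc
# Only the ciphertexts remain; the encrypted data key can be
# safely stored alongside the encrypted data
rm data.key secret.txt
```

To read the data back, first decrypt the data key with the master key, then decrypt the data with the recovered data key.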

48
Q

A website delivers images stored in an Amazon S3 bucket. The site uses Amazon Cognito, and guest users without logins need to be able to view the images from the S3 bucket.

How can a Developer enable access for guest users to the AWS resources?

A

Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identity data specific to your account.
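
As a sketch, guest credentials can be obtained via the identity pool APIs (the pool and identity IDs are placeholders, and the pool must allow unauthenticated identities):

```shell
# Obtain an identity ID for the guest user
aws cognito-identity get-id \
    --identity-pool-id us-east-1:11111111-2222-3333-4444-555555555555

# Exchange the identity ID for temporary AWS credentials
aws cognito-identity get-credentials-for-identity \
    --identity-id us-east-1:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
```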

49
Q

A company has sensitive data that must be encrypted. The data is made up of 1 GB objects and there is a total of 150 GB of data.

What is the BEST approach for a Developer to encrypt the data using AWS KMS?

A

To encrypt large quantities of data with the AWS Key Management Service (KMS), you must use a data encryption key rather than a customer master key (CMK). This is because a CMK can only encrypt up to 4 KB in a single operation, and in this scenario the objects are 1 GB in size.

To create a data key, call the GenerateDataKey operation. AWS KMS uses the CMK that you specify to generate a data key. The operation returns a plaintext copy of the data key and a copy of the data key encrypted under the CMK. The following image shows this operation.

AWS KMS cannot use a data key to encrypt data. But you can use the data key outside of KMS, such as by using OpenSSL or a cryptographic library like the AWS Encryption SDK.
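
For illustration, a data key can be requested like this (the key alias is a placeholder):

```shell
# Returns a plaintext data key plus the same key encrypted under the CMK
aws kms generate-data-key \
    --key-id alias/example-key \
    --key-spec AES_256
```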

50
Q

SQS Delay Queues vs Visibility Timeout

A

Delay queues are similar to visibility timeouts because both features make messages unavailable to consumers for a specific period of time. The difference between the two is that, for delay queues, a message is hidden when it is first added to the queue, whereas for visibility timeouts a message is hidden only after it is consumed from the queue.

Delay queue = message hidden when first added to the queue

Visibility timeout = message hidden after it is consumed from the queue

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html
51
Q

A

To enable long polling in SQS, call the ReceiveMessage API with the WaitTimeSeconds parameter set to a value greater than 0 (the maximum is 20 seconds).

If WaitTimeSeconds is set to 0, SQS uses short polling.