Notes 6 Flashcards
An application needs to generate SMS text messages and emails for a large number of subscribers. Which AWS service can be used to send these messages to customers?
Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. In Amazon SNS, there are two types of clients—publishers and subscribers—also referred to as producers and consumers.
Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel.
Subscribers (for example, web servers, email addresses, Amazon SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (for example, Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
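A publish call can be sketched with boto3. This is a minimal example, not the only way to do it; the topic ARN and message text are placeholders. The `build_protocol_message` helper shows the `MessageStructure="json"` shape, where the required `default` key covers any protocol without its own entry:

```python
import json

def build_protocol_message(default: str, sms: str = None, email: str = None) -> str:
    """Payload for publish(MessageStructure='json'); the 'default' key is
    required and is used for any protocol without its own entry."""
    payload = {"default": default}
    if sms is not None:
        payload["sms"] = sms
    if email is not None:
        payload["email"] = email
    return json.dumps(payload)

def notify_subscribers(topic_arn: str, default: str, sms: str = None, email: str = None) -> str:
    """Publish once; SNS fans the message out to every subscribed endpoint."""
    import boto3  # imported here so the helper above runs without AWS access
    sns = boto3.client("sns")
    resp = sns.publish(
        TopicArn=topic_arn,
        Message=build_protocol_message(default, sms=sms, email=email),
        MessageStructure="json",
    )
    return resp["MessageId"]
```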
A developer is planning to use a Lambda function to process incoming requests from an Application Load Balancer (ALB). How can this be achieved?
Create a target group and register the Lambda function using the AWS CLI.
You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.
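Since the ALB delivers the HTTP request to the function as a JSON event and expects a specific response shape back, a minimal handler looks roughly like this (the field names below follow the ALB event format; the greeting logic is just an illustration):

```python
def handler(event, context):
    """Minimal Lambda handler for an ALB target: the ALB serializes the HTTP
    request into the JSON event and expects statusCode/headers/body back."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello, {name}",
        "isBase64Encoded": False,
    }
```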
A company has three different environments: Development, QA, and Production. The company wants to deploy its code first in the Development environment, then QA, and then Production.
Which AWS service can be used to meet this requirement?
Use AWS CodeDeploy to create multiple deployment groups
A Developer is creating a new web application that will be deployed using AWS Elastic Beanstalk from the AWS Management Console. The Developer is about to create a source bundle which will be uploaded using the console.
Which of the following are valid requirements for creating the source bundle? (Select TWO.)
- Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
- Not exceed 512 MB
- Not include a parent folder or top-level directory (subdirectories are fine)
AWS Lambda and Amazon ECS deployments cannot use an in-place deployment type.
A company runs a workflow using an AWS Step Functions state machine. When testing the state machine, errors were experienced in a Task state. To troubleshoot the issue, a developer requires that the state input be included along with the error message in the state output.
Which coding practice can preserve both the original input and the error for the state?
Use ResultPath in a Catch statement to include the original input with the error.
A Step Functions execution receives a JSON text as input and passes that input to the first state in the workflow. Individual states receive JSON as input and usually pass JSON as output to the next state.
In the Amazon States Language, these fields filter and control the flow of JSON from state to state:
- InputPath
- OutputPath
- ResultPath
- Parameters
- ResultSelector
Use ResultPath to combine a task result with task input, or to select one of these. The path you provide to ResultPath controls what information passes to the output. Use ResultPath in a Catch to include the error with the original input, instead of replacing it.
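A sketch of that pattern, written here as a Python dict that serializes to an Amazon States Language definition (the function ARN and state names are hypothetical): setting `ResultPath` to `$.error` in the Catch merges the error object into the original input instead of replacing it.

```python
import json

# Hypothetical state machine: on any error in ProcessOrder, attach the error
# object at $.error so the original input survives in the state output.
state_machine = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
            "Catch": [{
                "ErrorEquals": ["States.ALL"],
                "ResultPath": "$.error",  # merge the error into the input, don't replace it
                "Next": "HandleFailure",
            }],
            "End": True,
        },
        "HandleFailure": {"Type": "Pass", "End": True},
    },
}

definition = json.dumps(state_machine)
```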
AWS Cognito User pool vs Identity pool
A Cognito user pool can be used to authenticate (sign in / sign up) but the Cognito identity pool is used to provide authorised access to AWS services.
User pool = authentication (sign in/up)
Identity pool = authorisation (access to AWS services)
An application asynchronously invokes an AWS Lambda function. The application has recently been experiencing occasional errors that result in failed invocations. A developer wants to store the messages that resulted in failed invocations such that the application can automatically retry processing them.
What should the developer do to accomplish this goal with the LEAST operational overhead?
Configure a redrive policy on an Amazon SQS queue. Set the dead-letter queue as an event source to the Lambda function.
Amazon SQS supports dead-letter queues (DLQ), which other queues (source queues) can target for messages that can’t be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate unconsumed messages to determine why their processing doesn’t succeed.
The redrive policy specifies the source queue, the dead-letter queue, and the conditions under which Amazon SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times.
You can set your DLQ as an event source to the Lambda function to drain your DLQ. This will ensure that all failed invocations are automatically retried.
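A rough boto3 sketch of that wiring (queue URL, ARNs, and function name are placeholders): the redrive policy goes on the source queue, and the DLQ then becomes an event source for the function so its messages are reprocessed.

```python
import json

def redrive_policy(dlq_arn: str, max_receives: int = 3) -> str:
    """RedrivePolicy JSON for the *source* queue: after maxReceiveCount failed
    receives, SQS moves the message to the dead-letter queue."""
    return json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": str(max_receives),
    })

def wire_up_dlq_retry(source_queue_url: str, dlq_arn: str, function_name: str) -> None:
    import boto3  # imported here so the policy helper above runs without AWS access
    sqs = boto3.client("sqs")
    sqs.set_queue_attributes(
        QueueUrl=source_queue_url,
        Attributes={"RedrivePolicy": redrive_policy(dlq_arn)},
    )
    # Drain the DLQ by making it an event source for the Lambda function,
    # so failed messages are retried automatically.
    boto3.client("lambda").create_event_source_mapping(
        EventSourceArn=dlq_arn, FunctionName=function_name, BatchSize=10)
```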
AWS Secrets Manager
- handles key/value pair (username/password)
- rotate credentials (passwords)
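Retrieving such a key/value secret can be sketched with boto3 (the secret ID is a placeholder; this assumes the secret is stored as a JSON SecretString):

```python
import json

def parse_secret_string(secret_string: str) -> dict:
    """A JSON SecretString (e.g. {'username': ..., 'password': ...}) as a dict."""
    return json.loads(secret_string)

def get_secret_dict(secret_id: str) -> dict:
    import boto3  # imported here so the parser above runs without AWS access
    resp = boto3.client("secretsmanager").get_secret_value(SecretId=secret_id)
    return parse_secret_string(resp["SecretString"])
```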
AWS KMS (Key Management Service)
- encrypts keys used to encrypt data, such as files, data in S3, etc.
- rotate keys
A Developer is developing a web application and will maintain separate sets of resources for the alpha, beta, and release stages. Each version runs on Amazon EC2 and uses an Elastic Load Balancer.
How can the Developer create a single page to view and manage all of the resources?
Create a resource group
https://docs.aws.amazon.com/ARG/latest/userguide/resource-groups.html
By default, the AWS Management Console is organized by AWS service. But with Resource Groups, you can create a custom console that organizes and consolidates information based on criteria specified in tags, or the resources in an AWS CloudFormation stack. The following list describes some of the cases in which resource grouping can help organize your resources.
- An application that has different phases, such as development, staging, and production.
- Projects managed by multiple departments or individuals.
- A set of AWS resources that you use together for a common project or that you want to manage or monitor as a group.
- A set of resources related to applications that run on a specific platform, such as Android or iOS.
Read capacity units: an item of up to 4 KB is not divided by 4 KB — it counts as one full 4 KB read.
1 read capacity unit (RCU) = 1 strongly consistent read per second, or 2 eventually consistent reads per second, for an item up to 4 KB.
So for example:
- Eventually consistent, 15 RCUs, 1 KB item = 30 items read per second.
- Strongly consistent, 15 RCUs, 1 KB item = 15 items read per second.
- Eventually consistent, 5 RCUs, 4 KB item = 10 items read per second.
- Strongly consistent, 5 RCUs, 4 KB item = 5 items read per second.
For items larger than 4 KB, round the item size up to the next 4 KB multiple and divide by 4 KB to get the units consumed per read.
A read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. For example, suppose that you create a table with 10 provisioned read capacity units. This allows you to perform 10 strongly consistent reads per second, or 20 eventually consistent reads per second, for items up to 4 KB.
Reading an item larger than 4 KB consumes more read capacity units. For example, a strongly consistent read of an item that is 8 KB (4 KB × 2) consumes 2 read capacity units. An eventually consistent read on that same item consumes only 1 read capacity unit.
Item sizes for reads are rounded up to the next 4 KB multiple. For example, reading a 3,500-byte item consumes the same throughput as reading a 4 KB item. Therefore, the smaller (1 KB) items in this scenario would consume the same number of RCUs as the 4 KB items. Also, we know that eventually consistent reads consume half the RCUs of strongly consistent reads.
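The arithmetic above can be captured in a small helper (a sketch of the provisioned-mode formula, not an AWS API): round the item size up to the next 4 KB chunk, then halve the per-item cost for eventually consistent reads.

```python
import math

def items_per_second(rcus: int, item_size_bytes: int, strongly_consistent: bool) -> float:
    """Provisioned-mode read throughput: item size rounds up to the next 4 KB
    multiple; an eventually consistent read costs half a unit per 4 KB chunk."""
    chunks = math.ceil(item_size_bytes / 4096)          # 4 KB rounding
    units_per_item = chunks if strongly_consistent else chunks / 2
    return rcus / units_per_item
```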
exponential backoff = to use progressively longer waits between retries for consecutive error responses
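A minimal sketch of the idea in Python (the delay parameters are illustrative; AWS SDKs implement this for you, typically with jitter and a cap as below):

```python
import random
import time

def call_with_backoff(fn, max_retries: int = 5, base_delay: float = 0.1, max_delay: float = 5.0):
    """Retry fn, sleeping progressively longer between consecutive failures:
    base_delay * 2^attempt, capped at max_delay, with full jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))  # full jitter spreads out retries
```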
An organization has an account for each environment: Production, Testing, Development. A Developer with an IAM user in the Development account needs to launch resources in the Production and Testing accounts. What is the MOST efficient way to provide access?
Create a role with the required permissions in the Production and Testing accounts and have the Developer assume that role
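From the Developer's side, assuming that role can be sketched with STS (the role ARN and session name are placeholders): STS returns temporary credentials, which then back a session scoped to the target account.

```python
def session_for_role(role_arn: str, session_name: str = "cross-account-dev"):
    """Assume a role in the Production or Testing account and return a boto3
    Session that uses the temporary credentials STS hands back."""
    import boto3  # imported here so the module loads without boto3/AWS access
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn, RoleSessionName=session_name)["Credentials"]
    return boto3.session.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```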
An application running on Amazon EC2 generates a large number of small files (1KB each) containing personally identifiable information that must be converted to ciphertext. The data will be stored on a proprietary network-attached file system. What is the SAFEST way to encrypt the data using AWS KMS?
Encrypt the data directly with a customer managed customer master key (CMK)
With AWS KMS you can encrypt files directly with a customer master key (CMK). A CMK can encrypt up to 4KB (4096 bytes) of data in a single encrypt, decrypt, or reencrypt operation. As CMKs cannot be exported from KMS this is a very safe way to encrypt small amounts of data.
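For 1 KB files this fits comfortably under the limit; a hedged boto3 sketch (key ID and path are placeholders), with the 4 KB guard made explicit:

```python
def encrypt_small_file(key_id: str, path: str) -> bytes:
    """Encrypt one small file directly under a customer managed KMS key.
    Direct Encrypt calls accept at most 4 KB (4096 bytes) of plaintext."""
    with open(path, "rb") as f:
        plaintext = f.read()
    if len(plaintext) > 4096:
        raise ValueError("direct KMS encryption is limited to 4 KB of plaintext")
    import boto3  # imported after the size check so the guard is testable offline
    return boto3.client("kms").encrypt(
        KeyId=key_id, Plaintext=plaintext)["CiphertextBlob"]
```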
Cognito user pool vs identity pool
User pool use cases:
Use a user pool when you need to:
- Design sign-up and sign-in webpages for your app.
- Access and manage user data.
- Track user device, location, and IP address, and adapt to sign-in requests of different risk levels.
- Use a custom authentication flow for your app.
Identity pool use cases:
Use an identity pool when you need to:
- Give your users access to AWS resources, such as an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon DynamoDB table.
- Generate temporary AWS credentials for unauthenticated users.
A Developer needs to manage AWS services from a local development server using the AWS CLI. How can the Developer ensure that the CLI uses their IAM permissions?
Run the aws configure command and provide the Developer’s IAM access key ID and secret access key
For general use, the “aws configure” command is the fastest way to set up your AWS CLI installation.
API logging in Cloudwatch
There are two types of API logging in CloudWatch: execution logging and access logging. In execution logging, API Gateway manages the CloudWatch Logs. The process includes creating log groups and log streams, and reporting to the log streams any caller’s requests and responses.
The logged data includes errors or execution traces (such as request or response parameter values or payloads), data used by Lambda authorizers, whether API keys are required, whether usage plans are enabled, and so on.
In access logging, you, as an API Developer, want to log who has accessed your API and how the caller accessed the API. You can create your own log group or choose an existing log group that could be managed by API Gateway.
Messages produced by an application must be pushed to multiple Amazon SQS queues. What is the BEST solution for this requirement?
Publish the messages to an Amazon SNS topic and subscribe each SQS queue to the topic
Amazon SNS works closely with Amazon Simple Queue Service (Amazon SQS). Both services provide different benefits for developers. Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
When you subscribe an Amazon SQS queue to an Amazon SNS topic, you can publish a message to the topic and Amazon SNS sends an Amazon SQS message to the subscribed queue. The Amazon SQS message contains the subject and message that were published to the topic along with metadata about the message in a JSON document.
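This fan-out can be sketched with boto3 (ARNs are placeholders). Note that each queue also needs an access policy allowing the topic to send to it; the helper below builds that policy document:

```python
import json

def allow_sns_policy(queue_arn: str, topic_arn: str) -> str:
    """Queue access policy letting the SNS topic deliver messages to the queue."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })

def fan_out(topic_arn: str, queue_arns: list) -> None:
    """Subscribe each queue to the topic; one publish then reaches all queues."""
    import boto3  # imported here so the policy helper above runs without AWS access
    sns = boto3.client("sns")
    for arn in queue_arns:
        sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=arn)
```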
Which AWS services are supported for Lambda event source mappings?
- Amazon DynamoDB
- Amazon Kinesis
- Amazon Simple Queue Service (SQS)
AWS CodeBuild, CodeDeploy and CodePipeline description
AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides pre-packaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more.
CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.
AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. This service works with the other Developer Tools to create a pipeline. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define.