Notes 3 Flashcards
How can you alleviate 504 errors in CloudFront?
In CloudFront, you can set up origin failover by creating an origin group with two origins: a primary and a secondary, which CloudFront automatically switches to when the primary origin fails or returns specific HTTP status codes. This alleviates the occasional HTTP 504 errors that users experience.
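As a rough illustration, the origin-group portion of a CloudFront DistributionConfig (boto3 request shape; the IDs below are placeholders that must match origins already defined in the same distribution) might look like this:

```python
# Sketch of the OriginGroups portion of a CloudFront DistributionConfig.
# "primary-origin" and "secondary-origin" are placeholder IDs.
origin_groups = {
    "Quantity": 1,
    "Items": [
        {
            "Id": "failover-group",
            "FailoverCriteria": {
                # Status codes that trigger failover to the secondary origin
                "StatusCodes": {"Quantity": 3, "Items": [500, 502, 504]}
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    {"OriginId": "primary-origin"},
                    {"OriginId": "secondary-origin"},
                ],
            },
        }
    ],
}
```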
DynamoDB conditional writes
By default, the DynamoDB write operations (PutItem, UpdateItem, DeleteItem) are unconditional: each of these operations will overwrite an existing item that has the specified primary key.
DynamoDB optionally supports conditional writes for these operations. A conditional write will succeed only if the item attributes meet one or more expected conditions. Otherwise, it returns an error.
Conditional writes are helpful in many situations. For example, you might want a PutItem operation to succeed only if there is not already an item with the same primary key, or you might want to prevent an UpdateItem operation from modifying an item if one of its attributes has a certain value. Conditional writes are especially helpful when multiple users attempt to modify the same item.
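For example, here is a minimal boto3 sketch of a conditional PutItem that succeeds only if no item with the same partition key already exists (the table and attribute names are hypothetical):

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Users")  # hypothetical table

try:
    # Succeeds only if no item with this partition key already exists
    table.put_item(
        Item={"user_id": "u-123", "name": "Alice"},
        ConditionExpression="attribute_not_exists(user_id)",
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("Item already exists; write was rejected")
    else:
        raise
```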
API integration with a Lambda function
You choose an API integration type according to the types of integration endpoint you work with and how you want data to pass to and from the integration endpoint. For a Lambda function, you can have two types of integration:
– Lambda proxy integration
– Lambda custom integration
In Lambda proxy integration, the setup is simple. If your API does not require content encoding or caching, you only need to set the integration’s HTTP method to POST, the integration endpoint URI to the ARN of the Lambda function invocation action for a specific Lambda function, and the credentials to an IAM role with permissions that allow API Gateway to call the Lambda function on your behalf.
In Lambda non-proxy (or custom) integration, in addition to the proxy integration setup steps, you also specify how the incoming request data is mapped to the integration request and how the resulting integration response data is mapped to the method response.
For an AWS service action, only the non-proxy AWS integration type is available. API Gateway also supports a mock integration, in which API Gateway itself serves as the integration endpoint and responds to method requests.
The Lambda custom integration is a special case of the AWS integration, where the integration endpoint corresponds to the function-invoking action of the Lambda service.
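As a sketch of the proxy setup described above, using boto3 (all IDs, ARNs, and the role name are placeholders):

```python
import boto3

apigw = boto3.client("apigateway")

# Placeholder ARNs; the Lambda invocation URI follows a fixed format.
lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-handler"
invoke_uri = (
    "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
    f"{lambda_arn}/invocations"
)

apigw.put_integration(
    restApiId="abc123",            # placeholder API ID
    resourceId="def456",           # placeholder resource ID
    httpMethod="GET",              # the method request's HTTP method
    type="AWS_PROXY",              # Lambda proxy integration
    integrationHttpMethod="POST",  # Lambda invocations are always POST
    uri=invoke_uri,
    credentials="arn:aws:iam::123456789012:role/apigw-invoke-role",
)
```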
KMS decrypt data locally
When you encrypt your data, your data is protected, but you have to protect your encryption key. One strategy is to encrypt it. Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key.
You can even encrypt the data encryption key under another encryption key, and encrypt that encryption key under another encryption key. But, eventually, one key must remain in plaintext so you can decrypt the keys and your data. This top-level plaintext key encryption key is known as the master key.
AWS KMS helps you to protect your master keys by storing and managing them securely. Master keys stored in AWS KMS, known as customer master keys (CMKs), never leave the AWS KMS FIPS-validated hardware security modules unencrypted. To use an AWS KMS CMK, you must call AWS KMS.
It is recommended that you use the following pattern to encrypt data locally in your application:
- Use the GenerateDataKey operation to get a data encryption key.
- Use the plaintext data key (returned in the Plaintext field of the response) to encrypt data locally, then erase the plaintext data key from memory.
- Store the encrypted data key (returned in the CiphertextBlob field of the response) alongside the locally encrypted data.
To decrypt data locally:
- Use the Decrypt operation to decrypt the encrypted data key. The operation returns a plaintext copy of the data key.
- Use the plaintext data key to decrypt data locally, then erase the plaintext data key from memory.
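A minimal sketch of this envelope-encryption pattern, assuming boto3 plus the `cryptography` package and a placeholder key alias:

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# Get a data key under a CMK ("alias/my-app-key" is a placeholder)
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]
encrypted_key = resp["CiphertextBlob"]  # store this alongside the data

# Encrypt locally with the plaintext key
nonce = os.urandom(12)
ciphertext = AESGCM(plaintext_key).encrypt(nonce, b"sensitive data", None)
del plaintext_key  # drop the reference (Python offers no guaranteed wipe)

# ... later, to decrypt: ask KMS to decrypt the stored data key
plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
data = AESGCM(plaintext_key).decrypt(nonce, ciphertext, None)
del plaintext_key
```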
Which of the following ECS features provides you with expressions that you can use to group container instances by a specific attribute?
Cluster Query Language
When a task that uses the EC2 launch type is launched, Amazon ECS must determine where to place the task based on the requirements specified in the task definition, such as CPU and memory. Similarly, when you scale down the task count, Amazon ECS must determine which tasks to terminate. You can apply task placement strategies and constraints to customize how Amazon ECS places and terminates tasks. Task placement strategies and constraints are not supported for tasks using the Fargate launch type. By default, Fargate tasks are spread across Availability Zones.
Cluster queries are expressions that enable you to group objects. For example, you can group container instances by attributes such as Availability Zone, instance type, or custom metadata. You can add custom metadata to your container instances, known as attributes. Each attribute has a name and an optional string value. You can use the built-in attributes provided by Amazon ECS or define custom attributes.
After you have defined a group of container instances, you can customize Amazon ECS to place tasks on container instances based on those groups. Running tasks manually is ideal in certain situations: for example, suppose you are developing a task but are not ready to deploy it with the service scheduler, or your task is a one-time or periodic batch job that does not need to keep running or restart when it finishes.
Hence, the correct ECS feature that provides you with expressions to group container instances by a specific attribute is the Cluster Query Language.
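For instance, a boto3 call that filters container instances with a Cluster Query Language expression (the cluster name and the custom `stack` attribute are hypothetical):

```python
import boto3

ecs = boto3.client("ecs")

# Group container instances with a Cluster Query Language expression;
# "ecs.availability-zone" is built in, "stack" is a custom attribute.
resp = ecs.list_container_instances(
    cluster="my-cluster",
    filter="attribute:ecs.availability-zone == us-east-1a and "
           "attribute:stack == prod",
)
print(resp["containerInstanceArns"])
```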
DynamoDB streams and Lambda triggers
Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables.
If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
The Lambda function can perform any actions you specify, such as sending a notification or initiating a workflow. For example, you can write a Lambda function to simply copy each stream record to persistent storage, such as Amazon Simple Storage Service (Amazon S3), to create a permanent audit trail of write activity in your table. Or suppose you have a mobile gaming app that writes to a GameScores table. Whenever the TopScore attribute of the GameScores table is updated, a corresponding stream record is written to the table’s stream. This event could then trigger a Lambda function that posts a congratulatory message on a social media network. (The function would simply ignore any stream records that are not updates to GameScores or that do not modify the TopScore attribute.)
Hence, the correct answer is to enable DynamoDB Streams and configure it as the event source for the Lambda function.
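A minimal sketch of such a Lambda handler (the GameScores/TopScore names follow the example above; the record layout is standard DynamoDB Streams JSON, and the stream must use the NEW_AND_OLD_IMAGES view type for OldImage to be present):

```python
def lambda_handler(event, context):
    for record in event["Records"]:
        # Only react to updates that changed the TopScore attribute
        if record["eventName"] != "MODIFY":
            continue
        new = record["dynamodb"].get("NewImage", {})
        old = record["dynamodb"].get("OldImage", {})
        if new.get("TopScore") != old.get("TopScore"):
            # Placeholder for the real action, e.g. posting a message
            print(f"TopScore changed for {new.get('UserId')}")
```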
DynamoDB: return the write capacity units consumed by an operation
In DynamoDB, to return the number of write capacity units consumed by any of these operations, set the ReturnConsumedCapacity parameter to one of the following:
TOTAL — returns the total number of write capacity units consumed.
INDEXES — returns the total number of write capacity units consumed, with subtotals for the table and any secondary indexes that were affected by the operation.
NONE — no write capacity details are returned. (This is the default.)
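For example, a boto3 PutItem that asks for per-index subtotals (the table and item are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("GameScores")  # hypothetical table

resp = table.put_item(
    Item={"UserId": "u-1", "GameTitle": "Meteor Blasters", "TopScore": 42},
    ReturnConsumedCapacity="INDEXES",
)
# Total consumed WCUs plus per-table and per-index subtotals
print(resp["ConsumedCapacity"])
```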
Parameter Store
Parameter Store offers the following benefits and features:
– Use a secure, scalable, hosted secrets management service (No servers to manage).
– Improve your security posture by separating your data from your code.
– Store configuration data and secure strings in hierarchies and track versions.
– Control and audit access at granular levels.
– Configure change notifications and trigger automated actions.
– Tag parameters individually, and then secure access from different levels, including operational, parameter, Amazon EC2 tag, or path levels.
– Reference AWS Secrets Manager secrets by using Parameter Store parameters.
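A short boto3 sketch of storing and retrieving a SecureString parameter (the parameter path and value are placeholders):

```python
import boto3

ssm = boto3.client("ssm")

# Store a secret under a hierarchical name
ssm.put_parameter(
    Name="/myapp/prod/db-password",
    Value="s3cr3t",
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt it at runtime
param = ssm.get_parameter(Name="/myapp/prod/db-password", WithDecryption=True)
print(param["Parameter"]["Value"])
```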
For system monitoring in API Gateway, you need to track the number of requests served from the backend in a given period. How can you do this?
Using “CacheMissCount”
CacheMissCount tracks the number of requests served from the backend in a given period when API caching is enabled, whereas CacheHitCount tracks the number of requests served from the API cache in a given period.
API Gateway analytics
The metrics reported by API Gateway provide information that you can analyze in different ways. The list below shows some common uses for the metrics. These are suggestions to get you started, not a comprehensive list.
– Monitor the IntegrationLatency metrics to measure the responsiveness of the backend.
– Monitor the Latency metrics to measure the overall responsiveness of your API calls.
– Monitor the CacheHitCount and CacheMissCount metrics to optimize cache capacities and achieve the desired performance. CacheMissCount tracks the number of requests served from the backend in a given period when API caching is enabled, whereas CacheHitCount tracks the number of requests served from the API cache in a given period.
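As an illustration, these metrics can be pulled from CloudWatch with boto3 (the API and stage names are placeholders):

```python
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch")

# Sum CacheMissCount over the last hour in 5-minute buckets
resp = cw.get_metric_statistics(
    Namespace="AWS/ApiGateway",
    MetricName="CacheMissCount",
    Dimensions=[
        {"Name": "ApiName", "Value": "my-api"},
        {"Name": "Stage", "Value": "prod"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Sum"],
)
for point in resp["Datapoints"]:
    print(point["Timestamp"], point["Sum"])
```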
What do you need to enable Cross Region Replication on an S3 bucket?
To enable the cross-region replication feature in S3, the following items should be met:
– The source and destination buckets must have “versioning” enabled.
– The source and destination buckets must be in different AWS Regions.
– Amazon S3 must have permissions to replicate objects from that source bucket to the destination bucket on your behalf.
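A rough boto3 sketch of enabling versioning and adding a replication rule (bucket names and the IAM role ARN are placeholders; the destination bucket needs versioning enabled as well):

```python
import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite on both buckets
s3.put_bucket_versioning(
    Bucket="source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        # Role that lets S3 replicate objects on your behalf
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # replicate all objects
                "Priority": 1,
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }
        ],
    },
)
```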
What are the best practices in IAM (Identity and Access Management)?
Best practices in IAM include the following:
– Grant only the permissions required to perform a task (least privilege)
– Delete root user access keys
Lambda authorizer types
A Lambda authorizer is an API Gateway feature that uses a Lambda function to control access to your API. When a client makes a request to one of your API’s methods, API Gateway calls your Lambda authorizer, which takes the caller’s identity as input and returns an IAM policy as output.
There are two types of Lambda authorizers:
– A token-based Lambda authorizer (also called a TOKEN authorizer) receives the caller’s identity in a bearer token, such as a JSON Web Token (JWT) or an OAuth token.
– A request parameter-based Lambda authorizer (also called a REQUEST authorizer) receives the caller’s identity in a combination of headers, query string parameters, stageVariables, and $context variables.
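A minimal sketch of a TOKEN authorizer handler (the token check here is a deliberately trivial placeholder; a real authorizer would validate a JWT or OAuth token):

```python
def lambda_handler(event, context):
    # TOKEN authorizers receive the bearer token in authorizationToken
    token = event.get("authorizationToken")
    effect = "Allow" if token == "allow-me" else "Deny"  # placeholder check
    # Return an IAM policy that API Gateway evaluates for the request
    return {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```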
S3 Transfer Acceleration
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path.
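A short boto3 sketch of enabling acceleration on a bucket and uploading through the accelerate endpoint (bucket and file names are placeholders):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Acceleration is enabled per bucket
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Transfers through this client use the accelerate endpoint
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("bigfile.bin", "my-bucket", "bigfile.bin")
```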
AWS CodeStar
AWS CodeStar provides the tools you need to quickly develop, build, and deploy applications on AWS. With AWS CodeStar, you can use a variety of project templates to start developing applications on Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. The project dashboard in AWS CodeStar makes it easy to centrally monitor application activity and manage day-to-day development tasks such as recent code commits, builds, and deployments. Because AWS CodeStar integrates with Atlassian JIRA, a third-party issue tracking and project management tool, you can create and manage JIRA issues in the AWS CodeStar dashboard.