Notes 2 Flashcards

1
Q

CloudWatch logs memory usage metrics from EC2 instances by default. True or false?

A

False. CloudWatch does not log memory usage by default. It logs CPU utilisation, network in/out, disk read/write bytes, etc. You need to create a custom metric to get memory usage.

2
Q

CloudWatch metrics

A

“GetMetricData” - retrieves metric data.

“PutMetricData” - publishes metric data points.

“GetMetricStatistics” - gets statistics for a specific metric.

“PutMetricAlarm” - creates or updates an alarm and associates it with a specified metric.

3
Q

What are CloudWatch Events?

A

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.

4
Q

CloudWatch log storage

A

CloudWatch logs are stored indefinitely, however alarm history is only for 14 days.

5
Q

AWS X-ray “metadata” and “annotations”

A

X-Ray metadata is user-added key/value pairs that are not indexed and cannot be used for searching/filtering.

X-Ray annotations are user-added key/value pairs that are indexed and can be used for searching/filtering.

6
Q

CloudWatch Dimensions

A

A dimension in CloudWatch is a name/value pair that is part of the identity of a metric. You can assign up to 10 dimensions to a metric.

Every metric has specific characteristics that describe it, and you can think of dimensions as categories for those characteristics.

7
Q

What does the “unified CloudWatch agent” do?

A

The “unified CloudWatch agent” enables you to do the following:

  • Collect internal system-level metrics from Amazon EC2 instances across operating systems. The metrics can include in-guest metrics, in addition to the metrics for EC2 instances. The additional metrics that can be collected are listed in Metrics collected by the CloudWatch agent.
  • Collect system-level metrics from on-premises servers. These can include servers in a hybrid environment as well as servers not managed by AWS.
  • Retrieve custom metrics from your applications or services using the StatsD and collectd protocols. StatsD is supported on both Linux servers and servers running Windows Server. collectd is supported only on Linux servers.
  • Collect logs from Amazon EC2 instances and on-premises servers, running either Linux or Windows Server.
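As a sketch, a unified agent configuration that collects memory and disk metrics plus a log file might look like this (the log group name and file path are illustrative):

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": {"measurement": ["mem_used_percent"]},
      "disk": {"measurement": ["used_percent"], "resources": ["*"]}
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {"file_path": "/var/log/messages", "log_group_name": "messages"}
        ]
      }
    }
  }
}
```

The `mem_used_percent` metric here is exactly the custom memory metric that CloudWatch does not collect on its own.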
8
Q

Where do CloudTrail trails get stored?

A

CloudTrail trails are stored in S3 indefinitely (unless your S3 lifecycle rules remove them).

9
Q

Using AWS X-ray on Beanstalk/EKS/ECS/Fargate

A

To use AWS X-Ray on Beanstalk, enable it in the console or use “.ebextensions/xray-daemon.config”.

On EKS/ECS/Fargate, create a Docker image that runs the daemon or use the official X-Ray Docker image.

10
Q

CloudWatch vs CloudTrail

A

CloudWatch is used to keep track of performance (CPU usage, etc.).

CloudTrail is used to keep track of API calls from systems for governance (see who made what changes in a system).

11
Q

How can you create and control the encryption keys used to encrypt your data using “envelope encryption”?

A

Encrypt plain text data with a data key, then encrypt the data key with a top-level master key.
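As a toy illustration of the flow (XOR stands in for a real cipher; in AWS the master key would be a KMS key that never leaves KMS, and GenerateDataKey would supply the data key):

```python
import os

def xor_bytes(data, key):
    # Toy stand-in for a real cipher -- illustrates the flow only!
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. A master key, which in practice lives inside KMS and never leaves it.
master_key = os.urandom(32)

# 2. Generate a data key (KMS GenerateDataKey returns one per request).
data_key = os.urandom(32)

# 3. Encrypt the plaintext with the data key.
ciphertext = xor_bytes(b"secret record", data_key)

# 4. Encrypt ("wrap") the data key with the master key; store the wrapped
#    key next to the ciphertext and discard the plaintext data key.
wrapped_key = xor_bytes(data_key, master_key)

# To decrypt: unwrap the data key with the master key, then decrypt the data.
recovered = xor_bytes(ciphertext, xor_bytes(wrapped_key, master_key))
```

The point of the two layers is that only the small data key ever needs re-encrypting when keys rotate, not the bulk data.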

12
Q

Symmetric vs asymmetric keys in AWS KMS

A

Symmetric encryption uses a single key that needs to be shared among the people who need to receive the message.

Asymmetric encryption uses a pair of keys, a public key and a private key, to encrypt and decrypt messages when communicating.

Symmetric encryption is an old technique while asymmetric encryption is relatively new.

13
Q

How can you allow/cater for traffic spikes in Lambda function calls?

A

As traffic increases, Lambda increases the number of concurrent executions of your functions. If your Lambda function is getting throttled (lots of calls), you can increase the “concurrency execution limit” to enable more traffic to reach the Lambda functions.

14
Q

What can X-Ray use to read the IP address from a request sent by another service (e.g. EC2)?

A

“X-Forwarded-For”

If a load balancer or other intermediary forwards a request to your application, X-Ray takes the client IP from the “X-Forwarded-For” header in the request instead of from the source IP in the IP packet. The client IP that is recorded for a forwarded request can be forged, so it should not be trusted.

15
Q

An IAM role has been created that grants access to S3. Given that the developers mostly interact with S3 via APIs, which API should the developers call to use the IAM role?

A

AWS STS “AssumeRole”

“AssumeRole” - returns a set of temporary security credentials that you can use to access AWS resources that you might not normally have access to. These temporary credentials consist of an access key ID, a secret access key, and a security token. Typically, you use AssumeRole within your account or for cross-account access.

“AssumeRoleWithWebIdentity” - this returns a set of temporary security credentials for federated users who are authenticated through public identity providers such as Amazon, Facebook, Google, or OpenID

“AssumeRoleWithSAML” - returns a set of temporary security credentials for users who have been authenticated via a SAML authentication response

“GetSessionToken” - this is primarily used to return a set of temporary credentials for an AWS account or IAM user only

16
Q

What is the easiest way to deploy a Lambda function using CloudFormation?

A

Include your function source inline in the “ZipFile” parameter of the “AWS::Lambda::Function” resource in the CloudFormation template.

Note, you can upload the ZIP file to S3 and reference it, but this is not the easiest method.
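A minimal sketch of an inline function (the resource names, runtime, and role reference are assumptions; the role must be defined elsewhere in the template):

```yaml
Resources:
  HelloFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: python3.9
      Role: !GetAtt LambdaExecutionRole.Arn   # role defined elsewhere
      Code:
        ZipFile: |
          def handler(event, context):
              return {"statusCode": 200}
```

Note that ZipFile inline code is limited to small functions with no bundled dependencies, which is why larger deployments use S3.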

17
Q

What filename does Elastic Beanstalk use to configure the environment name, solution stack, and environment links?

A

env.yaml

18
Q

What file is primarily used to define periodic tasks that add jobs to your worker environment’s queue automatically at a regular interval? E.g. in Elastic Beanstalk.

A

cron.yaml
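A sketch of a cron.yaml that enqueues a job every 30 minutes (the task name and URL are assumptions):

```yaml
version: 1
cron:
 - name: "generate-report"
   url: "/scheduled/report"
   schedule: "*/30 * * * *"
```

Elastic Beanstalk posts a message to the worker environment's SQS queue on each schedule tick, and the daemon forwards it to the given URL.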

19
Q

What are VPC flow logs used for?

A

VPC flow logs capture information about the IP traffic going to and from network interfaces in your VPC.

20
Q

What is the safest way for an application running on EC2 to upload data to S3?

A

Use an IAM role to grant the application the necessary permission to upload to S3.

21
Q

What is the correct way to deploy a new application version to Elastic Beanstalk via the CLI?

A

Package your application into a zip file and deploy using the “eb deploy” command.

22
Q

How can you track the number of visitors to a website backed by DynamoDB?

A

Use “atomic counters” to increment the counter item in the DynamoDB table for every new visitor.
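A sketch of the parameters such an update might use with boto3's update_item (the helper and the attribute name "visitors" are assumptions):

```python
def build_visitor_increment(table_key):
    """Build update_item arguments for an atomic counter (illustrative)."""
    return {
        "Key": table_key,
        # ADD is atomic: DynamoDB applies the increment server-side, so
        # concurrent visitors never clobber each other's counts.
        "UpdateExpression": "ADD visitors :inc",
        "ExpressionAttributeValues": {":inc": 1},
        "ReturnValues": "UPDATED_NEW",
    }

params = build_visitor_increment({"page": "home"})
```

In a real application you would pass `**params` to a boto3 `Table.update_item` call on each page view.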

23
Q

What can the “Kinesis Adapter” be used for?

A

The Kinesis Adapter is the recommended way to consume streams from DynamoDB for real-time processing.

24
Q

DynamoDB “StreamViewType” types

A

StreamViewType

When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are:

    KEYS_ONLY - Only the key attributes of the modified item are written to the stream.

    NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream.

    OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream.

    NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
25
Q

Calculating DynamoDB read capacity units (RCU)

A

1 read capacity unit (RCU) = 1 strongly consistent (SC) read per second for an item up to 4KB
1 read capacity unit (RCU) = 2 eventually consistent (EC) reads per second for an item up to 4KB

Item sizes are rounded up to the next multiple of 4KB, meaning an item calculated at 10KB gets rounded up to 12KB.

For strongly consistent reads, divide the rounded item size by 4.
For eventually consistent reads, calculate as for strongly consistent reads, then divide the result by 2.
If the result is a decimal, round up to the next whole number.

If there is more than 1 read per second, calculate the RCU for a single read, then multiply by the number of reads per second at the end.

=== Example 1 ===
10KB item size
1 strongly consistent read per second

Round 10KB up to next 4KB = 12KB

12/4 = 3 RCU

=== Example 2 ===
14KB item size
1 strongly consistent read per second

Round 14 up to 16KB

16/4 = 4 RCU

=== Example 3 ===

10KB item size
1 eventually consistent read

Round up 10 to next 4KB = 12KB

12/4 = 3 RCU

3/2 = 1.5 (rounded up) = 2 RCU

=== Example 4 ===

14KB item size
1 eventually consistent read per second

Round up 14 to next 4KB = 16KB

16/4 = 4 RCU

4/2 = 2 RCU

=== Example 5 ===
10KB item size
5 strongly consistent reads per second

Round 10KB up to next 4KB = 12KB

12/4 = 3 RCU

3 RCU * 5 reads = 15 RCU

=== Example 6 ===
10KB item size
5 eventually consistent reads per second

Round 10KB up to next 4KB = 12KB

12/4 = 3 RCU

3 RCU * 5 reads = 15 RCU

15 RCU / 2 = 7.5 (rounded up) = 8 RCU

============

https://www.youtube.com/watch?v=cjbn3zCBnEE
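The rounding rules above can be sketched as a small helper (a study aid, not an official AWS calculator):

```python
import math

def read_capacity_units(item_size_kb, reads_per_second, strongly_consistent=True):
    """Estimate DynamoDB RCUs using the rules above (study sketch)."""
    # Item size rounds up to the next 4KB boundary.
    units_per_read = math.ceil(item_size_kb / 4)
    total = units_per_read * reads_per_second
    if not strongly_consistent:
        # Eventually consistent reads cost half, rounded up at the end.
        total = math.ceil(total / 2)
    return total

print(read_capacity_units(10, 1))                             # Example 1 -> 3
print(read_capacity_units(10, 5, strongly_consistent=False))  # Example 6 -> 8
```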

26
Q

Calculating DynamoDB write capacity units (WCU)

A

1 write capacity unit (WCU) = 1 write per second for an item up to 1KB

Item sizes for writes are rounded up to the next whole 1KB multiple, meaning 500 bytes gets rounded up to 1KB

=== Example 1 ===

3KB item size
1 write per second

3 WCU

=== Example 2 ===

500B item size, gets rounded up to 1KB

1 WCU

=== Example 3 ===

2.5KB item size, gets rounded up to 3KB

3 WCU

=== Example 4 ===

3KB item size
5 writes per second

3 x 5 = 15 WCU

=== Example 5 ===

2.5KB item size, gets rounded up to 3KB
5 writes per second

3 x 5 = 15 WCU

==================

https://www.youtube.com/watch?v=cjbn3zCBnEE
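The WCU rules above can be sketched the same way (a study aid, not an official AWS calculator):

```python
import math

def write_capacity_units(item_size_kb, writes_per_second):
    """Estimate DynamoDB WCUs using the rules above (study sketch)."""
    # Item size rounds up to the next whole 1KB.
    return math.ceil(item_size_kb) * writes_per_second

print(write_capacity_units(2.5, 5))  # Example 5 -> 15
```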

27
Q

Amazon Kinesis Data Streams resharding

A

In Amazon Kinesis Data Streams resharding there are two types of resharding operations: shard split and shard merge. In a shard split, you divide a single shard into two shards. In a shard merge, you combine two shards into a single shard. Resharding is always pairwise in the sense that you cannot split into more than two shards in a single operation, and you cannot merge more than two shards in a single operation. The shard or pair of shards that the resharding operation acts on are referred to as parent shards. The shard or pair of shards that result from the resharding operation are referred to as child shards.

For example: if a stream has 20 open shards and only 5 consumer instances are running with high CPU utilisation, launching 15 additional instances allows each instance to process exactly one shard.

28
Q

How long does a Kinesis data stream store records for?

A

A Kinesis data stream stores records for 24 hours by default, up to a maximum of 168 hours.

You can increase the retention period up to 168 hours using the “IncreaseStreamRetentionPeriod” operation.

29
Q

In-place deployments

A

Only deployments that use the EC2/On-Premises compute platform can use in-place deployments. AWS Lambda compute platform deployments cannot use an in-place deployment type.

30
Q

Route tables in a VPC

A

A route table contains a set of rules, called routes, that are used to determine where network traffic is directed.

Each subnet in your VPC must be associated with a route table; the table controls the routing for the subnet. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table.

31
Q

Using the AWS CLI, what commands do you need to paginate when calling S3?

A
  • --page-size
  • --max-items

32
Q

Lambda authorizer

A

A Lambda authorizer is an API Gateway feature that uses a Lambda function to control access to your API. When a client makes a request to one of your API’s methods, API Gateway calls your Lambda authorizer, which takes the caller’s identity as input and returns an IAM policy as output.

There are two types of Lambda authorizers:

  • A token-based Lambda authorizer (also called a TOKEN authorizer) receives the caller’s identity in a bearer token, such as a JSON Web Token (JWT) or an OAuth token.
  • A request parameter-based Lambda authorizer (also called a REQUEST authorizer) receives the caller’s identity in a combination of headers, query string parameters, stageVariables, and $context variables.
33
Q

A company has 5 different applications running on several On-Demand EC2 instances. The DevOps team is required to set up a graphical representation of the key performance metrics for each application. These system metrics must be available on a single shared screen for more effective and visible monitoring.

A

Set up a custom CloudWatch namespace with a unique metric name for each application.

A namespace is a container for CloudWatch metrics. Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.

34
Q

API Gateway integration types

A

The following list summarizes the supported API Gateway integration types:

AWS: This type of integration lets an API expose AWS service actions. In AWS integration, you must configure both the integration request and integration response and set up necessary data mappings from the method request to the integration request, and from the integration response to the method response.

AWS_PROXY: This type of integration lets an API method be integrated with the Lambda function invocation action with a flexible, versatile, and streamlined integration setup. This integration relies on direct interactions between the client and the integrated Lambda function.

With this type of integration, also known as the Lambda proxy integration, you do not set the integration request or the integration response. API Gateway passes the incoming request from the client as the input to the backend Lambda function. The integrated Lambda function takes the input of this format and parses the input from all available sources, including request headers, URL path variables, query string parameters, and applicable body. The function returns the result following this output format.

This is the preferred integration type to call a Lambda function through API Gateway and is not applicable to any other AWS service actions, including Lambda actions other than the function-invoking action.

HTTP: This type of integration lets an API expose HTTP endpoints in the backend. With the HTTP integration, also known as the HTTP custom integration, you must configure both the integration request and integration response. You must set up necessary data mappings from the method request to the integration request, and from the integration response to the method response.

HTTP_PROXY: The HTTP proxy integration allows a client to access the backend HTTP endpoints with a streamlined integration setup on single API method. You do not set the integration request or the integration response. API Gateway passes the incoming request from the client to the HTTP endpoint and passes the outgoing response from the HTTP endpoint to the client.

MOCK: This type of integration lets API Gateway return a response without sending the request further to the backend. This is useful for API testing because it can be used to test the integration set up without incurring charges for using the backend and to enable collaborative development of an API.

In collaborative development, a team can isolate their development effort by setting up simulations of API components owned by other teams by using the MOCK integrations. It is also used to return CORS-related headers to ensure that the API method permits CORS access. In fact, the API Gateway console integrates the OPTIONS method to support CORS with a mock integration. Gateway responses are other examples of mock integrations.
35
Q

In DynamoDB, how can you read/write multiple items?

A

For applications that need to read or write multiple items, DynamoDB provides the BatchGetItem and BatchWriteItem operations.

Using these operations can reduce the number of network round trips from your application to DynamoDB. In addition, DynamoDB performs the individual read or write operations in parallel.

36
Q

What do you need to do to stop duplicate messages in SQS FIFO queue?

A

Add a “MessageDeduplicationId” parameter to the “SendMessage” API request. Ensure that the messages are sent at least 5 minutes apart.

Amazon SQS FIFO First-In-First-Out queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can’t be tolerated.

Amazon SQS FIFO queues follow exactly-once processing. It introduces a parameter called Message Deduplication ID, which is the token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren’t delivered during the 5-minute deduplication interval.

Amazon SQS continues to keep track of the message deduplication ID even after the message is received and deleted.
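A sketch of how the SendMessage parameters might be assembled (the helper and the hash-derived ID are assumptions, not the only valid scheme):

```python
import hashlib

def build_fifo_send(queue_url, body, group_id):
    """Build SendMessage parameters for a FIFO queue (illustrative)."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,
        # Deriving the ID from the body means retries of the same payload
        # are deduplicated within the 5-minute interval.
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }
```

In a real application you would pass these to boto3's `sqs.send_message`; enabling content-based deduplication on the queue achieves the same effect without an explicit ID.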

37
Q

IAM (Identity and Access Management) user permissions

A

By default, IAM users don’t have permission to create or modify Amazon ECS resources or perform tasks using the Amazon ECS API. This means that they also can’t do so using the Amazon ECS console or the AWS CLI. To allow IAM users to create or modify resources and perform tasks, you must create IAM policies. Policies grant IAM users permission to use specific resources and API actions. Then, attach those policies to the IAM users or groups that require those permissions.

When you attach a policy to a user or group of users, it allows or denies the users permission to perform the specified tasks on the specified resources.

38
Q

How can you protect API calls with MFA (multi-factor authentication)?

A

The “GetSessionToken” API returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token. Typically, you use GetSessionToken if you want to use MFA to protect programmatic calls to specific AWS API operations like Amazon EC2 StopInstances. MFA-enabled IAM users would need to call GetSessionToken and submit an MFA code that is associated with their MFA device.

39
Q

Kinesis re-sharding

A
  • You can decrease the stream’s capacity by merging shards
  • You can increase the stream’s capacity by splitting shards

40
Q

A write-heavy data analytics application is using DynamoDB database which has global secondary index. Whenever the application is performing heavy write activities on the table, the DynamoDB requests return a “ProvisionedThroughputExceededException”.

Which of the following is the MOST likely cause of this issue?

A

You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. To view performance metrics for provisioned throughput vs. consumed throughput, open the Amazon CloudWatch console.

Example: Your request rate is too high. The AWS SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful, unless your retry queue is too large to finish. Reduce the frequency of requests using Error retries and exponential backoff.

The provisioned write capacity for the global secondary index is less than the write capacity of the base table.

When you create a global secondary index on a provisioned mode table, you must specify read and write capacity units for the expected workload on that index. The provisioned throughput settings of a global secondary index are separate from those of its base table.

41
Q

How can you ensure your DynamoDB database writes are protected from being overwritten by other write operations that are occurring at the same time without affecting the application performance?

A

Use “Optimistic locking”

Optimistic locking is a strategy to ensure that the client-side item that you are updating (or deleting) is the same as the item in DynamoDB. If you use this strategy, then your database writes are protected from being overwritten by the writes of others — and vice-versa.

42
Q

AWS Cognito temporary credentials

A

Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identity data specific to your account.

Amazon Cognito identity pools enable you to create unique identities and assign permissions for users. Your identity pool can include:

  • Users in an Amazon Cognito user pool
  • Users who authenticate with external identity providers such as Facebook, Google, or a SAML-based identity provider
  • Users authenticated via your own existing authentication process

With an identity pool, you can obtain temporary AWS credentials with permissions you define to directly access other AWS services or to access resources through Amazon API Gateway.

43
Q

How can you implement server-side encryption with customer-provided encryption keys (SSE-C) on your S3 bucket, which allows you to use your own encryption keys?

A

Send the x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5 headers on the upload request.

x-amz-server-side-encryption-customer-algorithm - This header specifies the encryption algorithm. The header value must be “AES256”.

x-amz-server-side-encryption-customer-key - This header provides the 256-bit, base64-encoded encryption key for Amazon S3 to use to encrypt or decrypt your data.

x-amz-server-side-encryption-customer-key-MD5 - This header provides the base64-encoded 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure the encryption key was transmitted without error.
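A sketch of building these headers for a randomly generated 256-bit key (the helper name is an assumption):

```python
import base64
import hashlib
import os

def sse_c_headers(key: bytes):
    """Build the three SSE-C request headers for a 256-bit key (sketch)."""
    assert len(key) == 32  # SSE-C requires a 256-bit (32-byte) key
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        # MD5 of the raw key, base64-encoded, for the integrity check.
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

headers = sse_c_headers(os.urandom(32))
```

The same headers must accompany every GET for the object, since S3 does not store the key.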

44
Q

What is the most manageable and cost-effective way to send status updates to a third-party analytics provider using a Lambda function that needs to run every 30 minutes?

A

Integrate CloudWatch Events with Lambda, which will automatically trigger the function every 30 minutes.

You can create a Lambda function and direct AWS Lambda to execute it on a regular schedule. You can specify a fixed rate (for example, every hour or every 15 minutes) or a cron expression.

45
Q

AWS Lambda invocation types

A

AWS Lambda supports synchronous and asynchronous invocation of a Lambda function. You can control the invocation type only when you invoke a Lambda function (referred to as on-demand invocation).

When you use AWS services as a trigger, the invocation type is predetermined for each service. You have no control over the invocation type that these event sources use when they invoke your Lambda function.

In the Invoke API, you have 3 options to choose from for the InvocationType:

RequestResponse (default) - Invoke the function synchronously. Keep the connection open until the function returns a response or times out. The API response includes the function response and additional data.

Event - Invoke the function asynchronously. Send events that fail multiple times to the function’s dead-letter queue (if it’s configured). The API response only includes a status code.

DryRun - Validate parameter values and verify that the user or role has permission to invoke the function

46
Q

How can you increase the CPU available to a Lambda function?

A

Increase the memory size.

In the AWS Lambda resource model, you choose the amount of memory you want for your function and are allocated proportional CPU power and other resources. An increase in memory size triggers an equivalent increase in CPU available to your function.

47
Q

How can you provide a more detailed report using X-Ray?

A

Use “subsegments”

A segment can break down the data about the work done into subsegments. Subsegments provide more granular timing information and details about downstream calls that your application made to fulfill the original request. A subsegment can contain additional details about a call to an AWS service, an external HTTP API, or an SQL database. You can even define arbitrary subsegments to instrument specific functions or lines of code in your application.

48
Q

What are X-Ray subsegment fields?

A

Below are the optional subsegment fields:

“namespace” - aws for AWS SDK calls; remote for other downstream calls.

“http” - http object with information about an outgoing HTTP call.

“aws” - aws object with information about the downstream AWS resource that your application called.

“error”, “throttle”, “fault”, and “cause” - error fields that indicate an error occurred and that include information about the exception that caused the error.

“annotations” - annotations object with key-value pairs that you want X-Ray to index for search.

“metadata” - metadata object with any additional data that you want to store in the segment.

“subsegments” - array of subsegment objects.

“precursor_ids” - array of subsegment IDs that identifies subsegments with the same parent that completed prior to this subsegment.
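A minimal subsegment document using some of these fields might look like the following (all values are illustrative):

```python
import time
import uuid

# Sketch of an X-Ray subsegment document for a downstream AWS SDK call.
subsegment = {
    "id": uuid.uuid4().hex[:16],          # 16-hex-digit subsegment ID
    "name": "DynamoDB",
    "start_time": time.time(),
    "end_time": time.time() + 0.05,
    "namespace": "aws",                   # AWS SDK call (vs "remote")
    "aws": {"operation": "GetItem", "table_name": "orders"},
    "annotations": {"customer_tier": "gold"},   # indexed, searchable
    "metadata": {"debug_note": "not indexed"},  # stored only
}
```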

49
Q

CORS config

A

A CORS configuration is an XML file that contains a series of CORSRule elements inside a CORSConfiguration element. A configuration can have up to 100 rules. A rule is defined by one of the following tags:

AllowedOrigin – Specifies domain origins that you allow to make cross-domain requests.

AllowedMethod – Specifies a type of request you allow (GET, PUT, POST, DELETE, HEAD) in cross-domain requests.

AllowedHeader – Specifies the headers allowed in a preflight request.

Below are some of the CORSRule elements:

MaxAgeSeconds – Specifies the amount of time in seconds that the browser caches an Amazon S3 response to a preflight OPTIONS request for the specified resource. By caching the response, the browser does not have to send preflight requests to Amazon S3 if the original request will be repeated.

ExposeHeader – Identifies the response headers (for example, x-amz-server-side-encryption, x-amz-request-id, and x-amz-id-2) that customers are able to access from their applications (for example, from a JavaScript XMLHttpRequest object).
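A sketch of a configuration using these tags (the origin and exposed headers are illustrative):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>x-amz-request-id</ExposeHeader>
  </CORSRule>
</CORSConfiguration>
```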

50
Q

Beanstalk in a multi container environment

A

You can create Docker environments that support multiple containers per Amazon EC2 instance with the multicontainer Docker platform for Elastic Beanstalk.

Elastic Beanstalk uses Amazon Elastic Container Service (Amazon ECS) to coordinate container deployments to multicontainer Docker environments. Amazon ECS provides tools to manage a cluster of instances running Docker containers. Elastic Beanstalk takes care of Amazon ECS tasks including cluster creation, task definition and execution.

51
Q

How can you automatically delete items in DynamoDB after a period of time?

A

Time To Live (TTL) for DynamoDB allows you to define when items in a table expire so that they can be automatically deleted from the database.

TTL is provided at no extra cost as a way to reduce storage usage and reduce the cost of storing irrelevant data without using provisioned throughput. With TTL enabled on a table, you can set a timestamp for deletion on a per-item basis, allowing you to limit storage usage to only those records that are relevant.
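As a sketch, computing the epoch-seconds value to store in the item's TTL attribute (the helper name is an assumption):

```python
import time

def ttl_epoch(days_from_now):
    """Compute an expiry timestamp for a DynamoDB TTL attribute (sketch).

    TTL attributes hold a Unix epoch time in seconds; items whose value
    is in the past become eligible for background deletion.
    """
    return int(time.time()) + days_from_now * 24 * 60 * 60

expires_at = ttl_epoch(30)  # store this number in the table's TTL attribute
```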

52
Q

How can you upload data to S3 using SSE-KMS (Server-side encryption)?

A

To upload an object to the S3 bucket which uses SSE-KMS, you have to send a request with an “x-amz-server-side-encryption” header with the value of “aws:kms”

53
Q

EC2 monitoring time periods

A

By default, basic monitoring is enabled for your instance. You can optionally enable detailed monitoring. After you enable detailed monitoring, the Amazon EC2 console displays monitoring graphs with a 1-minute period for the instance. The following describes basic and detailed monitoring for instances.

Basic – Data is available automatically in 5-minute periods at no charge.

Detailed – Data is available in 1-minute periods for an additional cost. To get this level of data, you must specifically enable it for the instance. For the instances where you’ve enabled detailed monitoring, you can also get aggregated data across groups of similar instances.

54
Q

AWS SAM (Serverless Application Model) command

A

The “sam package” command zips your code artifacts, uploads them to Amazon S3, and produces a packaged AWS SAM template file that’s ready to be used. The “sam deploy” command uses this file to deploy your application.

56
Q

Lambda unreserved concurrency pool

A

AWS Lambda will keep the unreserved concurrency pool at a minimum of 100 concurrent executions, so that functions that do not have specific limits set can still process requests. So, in practice, if your total account limit is 1000, you are limited to allocating 900 to individual functions.

57
Q

What does encryption “in transit” and “at rest” mean?

A

Data protection refers to protecting data while in transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers).

58
Q

Data encryption types to S3

A

Data protection refers to protecting data while in transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption.

You have the following options for protecting data at rest in Amazon S3:

Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.

  1. Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
  2. Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
  3. Use Server-Side Encryption with Customer-Provided Keys (SSE-C)
59
Q

What is AWS Config?

A

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.