General Review Flashcards

1
Q

What is CloudTrail?

A

AWS CloudTrail is a service that provides a record of actions taken by a user, role, or an AWS service in your account (AWS Lambda, for example, is integrated with CloudTrail). … If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for AWS Lambda.

2
Q

CloudWatch vs CloudTrail

A

CloudWatch Logs reports on application logs, while CloudTrail logs give you specific information on what occurred in your AWS account. CloudWatch Events is a near-real-time stream of system events describing changes to your AWS resources. CloudTrail focuses more on the AWS API calls made in your AWS account.

3
Q

CloudWatch Metrics

A

Standard metrics start at 1-minute granularity; use high-resolution metrics to go finer (down to 1 second).

1 minute for detailed monitoring (e.g. EC2).

5 minutes for standard monitoring.

Metrics can also be published from on-premises servers.

For application-specific events you need a custom metric.

The minimum period for a standard-resolution custom metric is 1 minute; high-resolution custom metrics go down to 1 second.
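
A minimal boto3 sketch of publishing a custom metric; the namespace and metric name here are made up, and StorageResolution=1 marks it high-resolution (60 is standard):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one data point for a hypothetical application metric
cloudwatch.put_metric_data(
    Namespace="MyApp",                      # hypothetical namespace
    MetricData=[{
        "MetricName": "ActiveSessions",     # hypothetical metric name
        "Value": 42,
        "Unit": "Count",
        "StorageResolution": 1,             # 1 = high-resolution (1 s), 60 = standard (1 min)
    }],
)
```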

4
Q

What are serverless technologies?

A

Serverless applications don’t require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more. You also no longer need to worry about ensuring application fault tolerance and availability; AWS handles all of these capabilities for you. Examples: Lambda and Lambda@Edge, Fargate, S3, EFS, DynamoDB, Aurora Serverless (the database automatically starts up, shuts down, and scales), RDS Proxy, API Gateway, SNS, SQS, Step Functions, Kinesis, Athena, and the developer tooling.

5
Q

What is Step Functions?

A

AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application.

6
Q

Secrets Manager vs Parameter Store

A

Both store configuration values and secrets. Secrets Manager is purpose-built for secrets, supports automatic rotation, and charges per secret; Parameter Store (part of Systems Manager) is free for standard parameters and supports KMS-encrypted SecureString values, but has no built-in rotation.

https://scalesec.com/blog/a-comparison-of-secrets-managers-for-aws/
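
A rough boto3 sketch of reading from each service (the secret and parameter names are hypothetical):

```python
import boto3

# Secrets Manager: purpose-built for secrets, supports automatic rotation
secrets = boto3.client("secretsmanager")
db_password = secrets.get_secret_value(SecretId="prod/db/password")["SecretString"]

# Parameter Store: general configuration values, SecureString encrypted with KMS
ssm = boto3.client("ssm")
db_endpoint = ssm.get_parameter(
    Name="/prod/db/endpoint", WithDecryption=True
)["Parameter"]["Value"]
```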

7
Q

SQS Delay Queues

A

0–900 seconds (up to 15 minutes); the default is 0.

Postpones delivery of new messages to a queue. Changing the per-queue delay is retroactive only for FIFO queues, where it takes effect immediately on messages already in the queue; for standard queues it applies only to new messages.
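
A small boto3 sketch, assuming a hypothetical queue name; note that a per-message DelaySeconds is only allowed on standard queues:

```python
import boto3

sqs = boto3.client("sqs")

# Per-queue delay: every new message is hidden for 5 minutes
queue_url = sqs.create_queue(
    QueueName="orders-delayed",                 # hypothetical queue name
    Attributes={"DelaySeconds": "300"},         # 0-900 seconds
)["QueueUrl"]

# Per-message delay overrides the queue default (standard queues only, not FIFO)
sqs.send_message(QueueUrl=queue_url, MessageBody="order-123", DelaySeconds=60)
```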

8
Q

Best practice for large SQS messages using S3

A
  • Use S3 to store the payload
  • Use the Amazon SQS Extended Client Library for Java
  • Together with the AWS SDK for Java
  • Specify whether messages are always stored in S3, or only when larger than 256 KB
  • Send a message that references the payload object stored in S3
  • Get the corresponding payload object from S3
  • Delete the payload object from S3 after processing (the same pattern is sketched in Python below)
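
The extended client library itself is Java-only, but the pattern it automates looks roughly like this in plain boto3 (the bucket and queue URL are hypothetical):

```python
import json
import uuid

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "my-large-payload-bucket"                                            # hypothetical
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/big-messages"   # hypothetical

def send_large(payload: bytes):
    key = f"payloads/{uuid.uuid4()}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)          # store the payload in S3
    sqs.send_message(                                            # send only a pointer over SQS
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"s3_bucket": BUCKET, "s3_key": key}),
    )

def receive_large():
    for msg in sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1).get("Messages", []):
        ptr = json.loads(msg["Body"])
        payload = s3.get_object(Bucket=ptr["s3_bucket"], Key=ptr["s3_key"])["Body"].read()
        s3.delete_object(Bucket=ptr["s3_bucket"], Key=ptr["s3_key"])          # delete the payload object
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return payload
    return None
```
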
9
Q

Pagination (AWS CLI)

A

--page-size: how many items the CLI fetches per underlying API call (the CLI still returns the full result set; smaller pages help avoid timeouts on large result sets).

--max-items: the maximum number of items returned in the CLI output; a NextToken is returned so you can resume where you left off.
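
The same knobs exist in boto3 paginators, which is a handy way to remember them; the bucket name here is hypothetical, PageSize maps to --page-size and MaxItems to --max-items:

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

pages = paginator.paginate(
    Bucket="my-bucket",                                   # hypothetical bucket
    PaginationConfig={"PageSize": 100, "MaxItems": 500},  # 100 per call, stop after 500 total
)
for page in pages:
    for obj in page.get("Contents", []):
        print(obj["Key"])
```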

10
Q

Kinesis (KCL)

A

The Kinesis Client Library (KCL) runs on your consumer instances.

It tracks the number of shards in the stream.

It discovers new shards after resharding.

With the KCL, the number of instances should not exceed the number of shards (except for failover standby purposes).

You don’t need more than one instance per shard; one worker can process many shards. Only add instances if CPU utilization is high.

11
Q

Lambda Limits

A

The Lambda concurrency limit is a safety feature: the default is 1,000 concurrent executions per region, and you can carve out “Reserved Concurrency” for individual functions from that pool.

Throttled invocations get a 429 HTTP status; to raise the account limit, contact AWS Support.

Deployment package size: 50 MB max zipped (direct upload), 250 MB unzipped.
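
A one-call boto3 sketch of reserving concurrency for a single function (the function name is hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Carve 100 concurrent executions out of the account pool for this function
# (this also caps the function at 100 concurrent executions)
lambda_client.put_function_concurrency(
    FunctionName="my-function",            # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```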

12
Q

Lambda and VPC Resources

A
  • Allows a function to connect to resources in a private subnet
  • The function needs a VPC configuration:
    • Private subnet ID(s)
    • Security group ID(s)
  • Use the --vpc-config parameter to add it (see the sketch below)
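
A boto3 sketch equivalent to passing --vpc-config on the CLI; the function, subnet, and security group IDs are made up:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="my-function",                        # hypothetical
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],     # private subnet(s)
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # security group for the function ENIs
    },
)
```
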
13
Q

X-Ray

A

You need both the X-Ray daemon and the X-Ray SDK.

Install the daemon on an EC2 instance, an on-premises server, or on the EC2 instances inside an Elastic Beanstalk environment.

If using ECS, run the daemon in a sidecar container alongside your application container.

Annotations add indexed key-value pairs that you can search and filter traces by.

DynamoDB, Lambda, and API Gateway integrate with X-Ray.

Sampling: by default, the first request each second, plus 5% of subsequent requests.

You can use the AWS Elastic Beanstalk console or a configuration file to run the AWS X-Ray daemon on the instances in your environment. X-Ray is an AWS service that gathers data about the requests that your application serves, and uses it to construct a service map that you can use to identify issues with your application and opportunities for optimization.
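
A minimal sketch using the Python X-Ray SDK; it assumes the daemon is already running next to the code and that a segment is open (e.g. inside Lambda or an instrumented web framework), and the function and annotation names are made up:

```python
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # auto-instrument supported libraries such as boto3 and requests

@xray_recorder.capture("process_order")          # records a subsegment for this function
def process_order(order_id):
    # Annotations are indexed key-value pairs you can filter traces by
    xray_recorder.current_subsegment().put_annotation("order_id", order_id)
    return order_id
```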

14
Q

Beanstalk and Docker

A

Single container: Elastic Beanstalk deploys the Docker container directly to each instance.

Multiple containers: Elastic Beanstalk builds an ECS cluster and deploys multiple containers to each instance.

Upload a ZIP file and Beanstalk will do the rest.

Code can be updated from your local machine or a public S3 bucket. You can also use CodeCommit, but then you must use the Elastic Beanstalk CLI (eb).

16
Q

CloudWatch vs CloudTrail vs Config

A
  • CloudWatch monitors performance.
  • CloudTrail monitors API calls made in your AWS account.
  • AWS Config records the state of your environment and can notify you of changes.
17
Q

Assume-Role-With-Web-Identity

A
  • The user authenticates with a web identity provider (e.g. Google, Facebook, or Amazon)
  • The application makes the AssumeRoleWithWebIdentity call to STS
  • If successful, STS returns temporary security credentials (sketched below)
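
A boto3 sketch of the STS call; the role ARN is made up and the token placeholder stands in for the JWT returned by the identity provider:

```python
import boto3

# AssumeRoleWithWebIdentity is an unsigned call: no AWS credentials are needed to invoke it
sts = boto3.client("sts")

provider_token = "<JWT returned by the web identity provider>"   # placeholder

resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/WebAppRole",          # hypothetical role
    RoleSessionName="web-user-session",
    WebIdentityToken=provider_token,
)
creds = resp["Credentials"]   # temporary AccessKeyId, SecretAccessKey, SessionToken, Expiration
```
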
18
Q

SQS

A

Long poll maximum wait time: 20 seconds

Maximum visibility timeout: 12 hours

Default visibility timeout: 30 seconds

Maximum message size: 256 KB

Retention: 1 minute to 14 days; default is 4 days.
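
A quick boto3 sketch that exercises a few of these numbers (the queue URL is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"   # hypothetical

resp = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    WaitTimeSeconds=20,        # long polling: 20 seconds is the maximum
    VisibilityTimeout=60,      # override the 30-second default (hard maximum is 12 hours)
    MaxNumberOfMessages=10,
)
```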

19
Q

Beanstalk

A

.config files placed in the .ebextensions folder at the root of your application source bundle.

Written in YAML or JSON.

20
Q

API

A
21
Q

VPC Flow Log

A

Captures information about the IP traffic going to and from network interfaces in your VPC (flow metadata, not packet contents).

Aggregates records over a capture window.

Used to analyze traffic to your VPC and to support auditing.

22
Q

EB vs CloudFormation

A

They’re actually pretty different. Elastic Beanstalk is intended to make developers’ lives easier. CloudFormation is intended to make systems engineers’ lives easier.

Elastic Beanstalk is a PaaS-like layer on top of AWS’s IaaS services which abstracts away the underlying EC2 instances, Elastic Load Balancers, auto scaling groups, etc. This makes it a lot easier for developers, who don’t want to be dealing with all the systems stuff, to get their application quickly deployed on AWS. It’s very similar to other PaaS products such as Heroku, EngineYard, Google App Engine, etc. With Elastic Beanstalk, you don’t need to understand how any of the underlying magic works.

CloudFormation, on the other hand, doesn’t automatically do anything. It’s simply a way to define all the resources needed for deployment in a huge JSON file. So a CloudFormation template might actually create two Elastic Beanstalk environments (production and staging), a couple of ElastiCache clusters, a DynamoDB table, and then the proper DNS in Route53. I then upload this template to AWS, walk away, and 45 minutes later everything is ready and waiting. Since it’s just a plain-text JSON file, I can stick it in my source control, which provides a great way to version my application deployments. It also ensures that I have a repeatable, “known good” configuration that I can quickly deploy in a different region.

23
Q

Synchronous vs Asynchronous Lambda Invocation

A

Within AWS Lambda, functions invoked synchronously and asynchronously are handled in different ways when they fail, which can cause some unexpected side effects in your program logic. If you are synchronously invoking functions directly, the invoking application is responsible for all retries. If you are using integrations, they may have additional retries built in. Functions that are invoked asynchronously don’t rely on the invoking application for retries. In this case, the retries are built in and run automatically. The invocation will be retried twice with delays in between. If it fails on both retries, the event is discarded. With asynchronous invocations, you are able to set up a Dead Letter Queue which can be used to keep the failed event from being discarded. The Dead Letter Queue allows you to send unprocessed events to an Amazon SQS queue or SNS topic for you to build logic to deal with them.
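
A boto3 sketch of attaching a dead letter queue to a function; the function name and queue ARN are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Events that still fail after the two automatic async retries land in this SQS queue
lambda_client.update_function_configuration(
    FunctionName="my-function",                                                  # hypothetical
    DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq"}, # hypothetical ARN
)
```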

24
Q

DynamoDB Limits

A

Data Types

String

The length of a String is constrained by the maximum item size of 400 KB.

Strings are Unicode with UTF-8 binary encoding. Because UTF-8 is a variable width encoding, DynamoDB determines the length of a String using its UTF-8 bytes.

Number

A Number can have up to 38 digits of precision, and can be positive, negative, or zero.

Positive range: 1E-130 to 9.9999999999999999999999999999999999999E+125

Negative range: -9.9999999999999999999999999999999999999E+125 to -1E-130

DynamoDB uses JSON strings to represent Number data in requests and replies. For more information, see DynamoDB Low-Level API.

If number precision is important, you should pass numbers to DynamoDB using strings that you convert from a number type.

Binary

The length of a Binary is constrained by the maximum item size of 400 KB.

Applications that work with Binary attributes must encode the data in base64 format before sending it to DynamoDB. Upon receipt of the data, DynamoDB decodes it into an unsigned byte array and uses that as the length of the attribute.

Items

Item Size

The maximum item size in DynamoDB is 400 KB, which includes both attribute name binary length (UTF-8 length) and attribute value lengths (again binary length). The attribute name counts towards the size limit.

25
Q

Cognito User Pool vs Identity Pool

A

User Pool

Say you were creating a new web or mobile app and you were thinking about how to handle user registration, authentication, and account recovery. This is where Cognito User Pools would come in. Cognito User Pool handles all of this and as a developer you just need to use the SDK to retrieve user related information.

Identity Pool

Cognito Identity Pool (or Cognito Federated Identities) on the other hand is a way to authorize your users to use the various AWS services. Say you wanted to allow a user to have access to your S3 bucket so that they could upload a file; you could specify that while creating an Identity Pool. And to create these levels of access, the Identity Pool has its own concept of an identity (or user). The source of these identities (or users) could be a Cognito User Pool or even Facebook or Google.

26
Q

SAM

A

The AWS Serverless Application Model (AWS SAM) is an open-source framework that you can use to build serverless applications on AWS. It consists of the AWS SAM template specification that you use to define your serverless applications, and the AWS SAM command line interface (AWS SAM CLI) that you use to build, test, and deploy your serverless applications.

Because AWS SAM is an extension of AWS CloudFormation, you get the reliable deployment capabilities of AWS CloudFormation. You can define resources by using AWS CloudFormation in your AWS SAM template. Also, you can use the full suite of resources, intrinsic functions, and other template features that are available in AWS CloudFormation.

You can use AWS SAM with a suite of AWS tools for building serverless applications. The AWS SAM CLI lets you locally build, test, and debug serverless applications that are defined by AWS SAM templates. The CLI provides a Lambda-like execution environment locally. It helps you catch issues upfront by providing parity with the actual Lambda execution environment. To step through and debug your code to understand what the code is doing, you can use AWS SAM with AWS toolkits like the AWS Toolkit for JetBrains, AWS Toolkit for PyCharm, AWS Toolkit for IntelliJ, and AWS Toolkit for Visual Studio Code. This tightens the feedback loop by making it possible for you to find and troubleshoot issues that you might run into in the cloud.

Therefore, the most suitable service to use in this scenario is AWS SAM.

CloudFormation is incorrect because although this service can certainly be used to deploy Lambda, API Gateway, DynamoDB, and other AWS resources of your serverless application, it doesn’t have the capability to locally build, test, and debug your application like what AWS SAM has. In addition, AWS SAM is a more suitable service to use if you want to deploy and manage your serverless applications in AWS just as mentioned above.

OpsWorks is incorrect because this service is primarily used as a configuration management service that provides managed instances of Chef and Puppet.

Elastic Beanstalk is incorrect because this service is not suitable for deploying serverless applications. In addition, it doesn’t have the capability to locally build, test, and debug your serverless applications as effectively as what AWS SAM can do.

Because AWS SAM integrates with other AWS services, creating serverless applications with AWS SAM provides the following benefits:

Single-deployment configuration. AWS SAM makes it easy to organize related components and resources, and operate on a single stack. You can use AWS SAM to share configuration (such as memory and timeouts) between resources, and deploy all related resources together as a single, versioned entity.

Extension of AWS CloudFormation. As noted above, AWS SAM builds on CloudFormation, so you get its reliable deployment capabilities and can use its full suite of resources, intrinsic functions, and other template features in your AWS SAM template.

Built-in best practices. You can use AWS SAM to define and deploy your infrastructure as config. This makes it possible for you to use and enforce best practices such as code reviews. Also, with a few lines of configuration, you can enable safe deployments through CodeDeploy, and can enable tracing by using AWS X-Ray.

Local debugging and testing. As noted above, the AWS SAM CLI provides a Lambda-like local execution environment for building, testing, and debugging applications defined by AWS SAM templates, so you can catch issues before deploying to the cloud.

Deep integration with development tools. You can use AWS SAM with a suite of AWS tools for building serverless applications. You can discover new applications in the AWS Serverless Application Repository. For authoring, testing, and debugging AWS SAM–based serverless applications, you can use the AWS Cloud9 IDE. To build a deployment pipeline for your serverless applications, you can use CodeBuild, CodeDeploy, and CodePipeline. You can also use AWS CodeStar to get started with a project structure, code repository, and a CI/CD pipeline that’s automatically configured for you. To deploy your serverless application, you can use the Jenkins plugin. You can use the Stackery.io toolkit to build production-ready applications.

27
Q

Lambda Execution Context

A

When AWS Lambda executes your Lambda function, it provisions and manages the resources needed to run your Lambda function. When you create a Lambda function, you specify configuration information, such as the amount of memory and maximum execution time that you want to allow for your Lambda function. When a Lambda function is invoked, AWS Lambda launches an execution context based on the configuration settings you provide. The execution context is a temporary runtime environment that initializes any external dependencies of your Lambda function code, such as database connections or HTTP endpoints. This affords subsequent invocations better performance because there is no need to “cold-start” or initialize those external dependencies, as explained below.
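
A sketch of how this is commonly exploited: create expensive resources outside the handler so warm invocations reuse them (the table name, environment variable, and event shape are hypothetical):

```python
import os

import boto3

# Created once per execution context and reused across warm invocations
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "my-table"))   # hypothetical table

def handler(event, context):
    # Only the work inside the handler runs on every invocation
    return table.get_item(Key={"id": event["id"]}).get("Item")
```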

28
Q

Stage Variables

A

Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of a REST API. They act like environment variables and can be used in your API setup and mapping templates.

For example, you can define a stage variable in a stage configuration, and then set its value as the URL string of an HTTP integration for a method in your REST API. Later, you can reference the URL string using the associated stage variable name from the API setup. This way, you can use the same API setup with a different endpoint at each stage by resetting the stage variable value to the corresponding URLs. You can also access stage variables in the mapping templates, or pass configuration parameters to your AWS Lambda or HTTP backend.
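
With a Lambda proxy integration, stage variables arrive in the incoming event; a minimal sketch, where the variable name backendUrl is made up:

```python
def handler(event, context):
    # API Gateway injects the current stage's variables into the proxy event
    stage_vars = event.get("stageVariables") or {}
    backend_url = stage_vars.get("backendUrl", "https://default.example.com")  # hypothetical variable
    return {"statusCode": 200, "body": f"Routing to {backend_url}"}
```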

29
Q

Alarms

A

When you create an alarm, you specify three settings to enable CloudWatch to evaluate when to change the alarm state:

– Period is the length of time to evaluate the metric or expression to create each individual data point for an alarm. It is expressed in seconds. If you choose one minute as the period, there is one datapoint every minute.

– Evaluation Period is the number of the most recent periods, or data points, to evaluate when determining alarm state.

– Datapoints to Alarm is the number of data points within the evaluation period that must be breaching to cause the alarm to go to the ALARM state. The breaching data points do not have to be consecutive, they just must all be within the last number of data points equal to Evaluation Period.

For example, suppose the alarm threshold is set to three units and both Evaluation Period and Datapoints to Alarm are 3. When all three data points in the most recent three consecutive periods are above the threshold, the alarm goes to the ALARM state (say, in the third through fifth time periods). If the value then dips below the threshold in period six, one of the periods being evaluated is no longer breaching, and the alarm state changes to OK. If the threshold is breached again during the ninth time period, but for only one period, the alarm state remains OK.

Hence, the option that says: Set both the Evaluation Period and Datapoints to Alarm to 3 is the correct answer.
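
A boto3 sketch of that configuration; the namespace and metric here are illustrative, not from any particular scenario:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="http-5xx-errors",
    Namespace="MyWebServer",                  # hypothetical custom namespace
    MetricName="Http5xxCount",                # hypothetical custom metric
    Statistic="Sum",
    Period=60,                # one data point per minute
    EvaluationPeriods=3,      # look at the most recent 3 data points
    DatapointsToAlarm=3,      # all 3 must breach to enter ALARM
    Threshold=3,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)
```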

The option that says: Use high-resolution metrics is incorrect because the scenario says that it only needs to monitor the HTTP server errors every minute, and not its sub-minute activity. If you set an alarm on a high-resolution metric, you can specify a high-resolution alarm with a period of 10 seconds or 30 seconds. Hence, this option is irrelevant in this scenario.

The option that says: Set both the Period and Datapoints to Alarm to 3 is incorrect because you should set the Evaluation Period and not the Period setting.

The option that says: Use metric math in CloudWatch to properly compute the threshold is incorrect because the Metric Math feature is only applicable for scenarios where you need to query multiple CloudWatch metrics or if you want to use math expressions to create new time series based on selected metrics.

30
Q

Lambda Traffic Shifting

A

By default, an alias points to a single Lambda function version. When the alias is updated to point to a different function version, incoming request traffic in turn instantly points to the updated version. This exposes that alias to any potential instabilities introduced by the new version. To minimize this impact, you can implement the routing-config parameter of the Lambda alias that allows you to point to two different versions of the Lambda function and dictate what percentage of incoming traffic is sent to each version.
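
A boto3 sketch of weighted alias routing; the function name, alias, and versions are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 90% of traffic on version 1 (the alias target) and shift 10% to version 2
lambda_client.update_alias(
    FunctionName="my-function",     # hypothetical
    Name="live",
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)
```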

31
Q

Lambda Temp Folder Cold-Start

A

When AWS Lambda executes your Lambda function, it provisions and manages the resources needed to run your Lambda function. When you create a Lambda function, you specify configuration information, such as the amount of memory and maximum execution time that you want to allow for your Lambda function. When a Lambda function is invoked, AWS Lambda launches an execution context based on the configuration settings you provide. The execution context is a temporary runtime environment that initializes any external dependencies of your Lambda function code, such as database connections or HTTP endpoints. This affords subsequent invocations better performance because there is no need to “cold-start” or initialize those external dependencies, as explained below.

It takes time to set up an execution context and do the necessary “bootstrapping”, which adds some latency each time the Lambda function is invoked. You typically see this latency when a Lambda function is invoked for the first time or after it has been updated because AWS Lambda tries to reuse the execution context for subsequent invocations of the Lambda function.

After a Lambda function is executed, AWS Lambda maintains the execution context for some time in anticipation of another Lambda function invocation. In effect, the service freezes the execution context after a Lambda function completes, and thaws the context for reuse, if AWS Lambda chooses to reuse the context when the Lambda function is invoked again.

Each execution context provides 512 MB of additional disk space in the /tmp directory. The directory content remains when the execution context is frozen, providing transient cache that can be used for multiple invocations. You can add extra code to check if the cache has the data that you stored.
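
A sketch of using /tmp as a transient cache across warm invocations (the download URL is made up):

```python
import os
import urllib.request

CACHE_PATH = "/tmp/reference-data.json"   # survives as long as the execution context stays warm

def handler(event, context):
    if not os.path.exists(CACHE_PATH):
        # Cold start or new context: fetch once and cache in /tmp (512 MB available)
        urllib.request.urlretrieve("https://example.com/reference-data.json", CACHE_PATH)
    with open(CACHE_PATH) as f:
        data = f.read()
    return {"cached_bytes": len(data)}
```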
