AWS Developer Flashcards

1
Q

AWS KVM

A

Kernel-based Virtual Machine (KVM) is an open-source hypervisor for virtualizing compute infrastructure

2
Q

VPC 2 facts

A

Up to 5 CIDR blocks per VPC (default quota)

CIDR blocks must not overlap

3
Q

CIDR range formula

A

2^(32-x) = number of IP addresses in a /x CIDR block (e.g., a /24 block has 2^8 = 256 addresses)
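
A quick sketch of the arithmetic in shell (the /24 and /16 prefix lengths are just examples):

~$ echo $(( 2 ** (32 - 24) ))   # addresses in a /24
256
~$ echo $(( 2 ** (32 - 16) ))   # addresses in a /16
65536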

4
Q

Hypervisor is a __ layer

A

Software

5
Q

Load balancing within a region uses ___

Across regions, use ___

A

ELB

Route 53

6
Q

What is the best practice for separating dev, test, and prod?

Enhanced security…

A

Create a separate account for each environment, so if one account is compromised, prod stays safe

Different accounts

7
Q

VPC needs

A

Internet gateway
Route table
Assigned public and private IP addresses

8
Q

EIP is needed for

EIP stays with account

A

An Elastic IP address gives you a persistent public IP address, so you can stop an instance and it will have the same IP when it restarts

EIP stays with account

9
Q

Why should an EIP be attached to an elastic network interface (ENI) rather than directly to an instance?

A

The advantage of associating the Elastic IP address with the Elastic network interface instead of directly with the instance is that you can move all the attributes of the network interface from one instance to another in a single step
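
A hedged CLI sketch of associating an EIP with an ENI rather than an instance (both IDs are placeholders):

~$ aws ec2 associate-address \
      --allocation-id eipalloc-0123456789abcdef0 \
      --network-interface-id eni-0123456789abcdef0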

10
Q

AWS CodeCommit is powered by

A

S3

11
Q

What can IAM do?

A

Can help with federated users

12
Q

IAM best practice

A

We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.

13
Q

Two ways IAM helps secure your account

Also allows ___?

A

Multi-factor authentication (MFA)
You can add two-factor authentication to your account and to individual users for extra security. With MFA you or your users must provide not only a password or access key to work with your account, but also a code from a specially configured device.

Identity federation
You can allow users who already have passwords elsewhere—for example, in your corporate network or with an internet identity provider—to get temporary access to your AWS account.

14
Q

Programmatic access is available via

Note that for this, the user will need

A

AWS API
AWS CLI
AWS SDK
and other tools

Access key ID and Secret access key

15
Q

AWS SAM Build command:

A

The sam build command processes your AWS SAM template file, application code, and any applicable language-specific files and dependencies. The command also copies build artifacts in the format and location expected for subsequent steps in your workflow.
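
A typical invocation, run from the project directory that holds template.yaml (the flags shown are optional and only illustrative):

~$ sam build
~$ sam build --template template.yaml --use-container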

16
Q

how to grant access to your AWS account

A

To allow users access to the AWS Management Console and AWS Command Line Interface (AWS CLI), you have two options. The first one is to create identities and allow users to log in using a username and password managed by the IAM service. The second approach is to use federation
to allow your users to use their existing corporate credentials to log into the AWS console and CLI.

Each approach has its use cases. Federation is generally better for enterprises that have an existing central directory or plan to need more than the current limit of 5,000 IAM users.

Note: Access to all AWS accounts is managed by AWS IAM. Regardless of the approach you choose, make sure to familiarize yourself with and follow IAM best practices.

17
Q

AWS CodePipeline is primarily used

A

AWS CodePipeline is for automating the build, test, and deploy phases of your release process every time there is a code change.

18
Q

AWS Data Pipeline

A

AWS Data Pipeline is used for automating the movement and transformation of data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. It integrates with AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. It is not used for managing the coordination of multiple AWS services into serverless workflows.

20
Q

AWS CloudFormation

A

Simplify infrastructure management by building code templates (JSON or YAML)

Quickly replicate infrastructure by reusing templates

Easily control and track changes, roll back actions, and use version control

StackSets lets you provision a common set of AWS resources across multiple accounts and regions with a single CloudFormation template

Can build custom extensions to a stack template with AWS Lambda

21
Q

AWS CloudFormation stacks

How to make changes?

A

Manage related resources as a single unit called a stack

All the resources provisioned in a stack are defined in the CloudFormation template

To update a stack, create a CHANGE SET:
A summary of proposed changes that lets you see how your changes might impact the resources in the current stack
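
A minimal CLI sketch of creating and then inspecting a change set (stack, change set, and file names are illustrative):

~$ aws cloudformation create-change-set \
      --stack-name my-stack \
      --change-set-name my-changes \
      --template-body file://template.yaml
~$ aws cloudformation describe-change-set \
      --stack-name my-stack \
      --change-set-name my-changes
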
22
Q

CloudFormation template keys

A

Description

Metadata

Parameters

Rules

Mappings

Conditions

Transform

Resources

Outputs

23
Q

AWS SAM compiles into

A

CloudFormation

24
Q

Security in the cloud is composed of six areas:

A

Foundations

Identity and access management

Detection

Infrastructure protection

Data protection

Incident response

25
Q

CodeBuild

A

A fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy

A build project defines how CodeBuild will run a build. It includes information such as where to get the source code, which build environment to use, the build commands to run, and where to store the build output.

A build environment is the combination of operating system, programming language runtime, and tools used by CodeBuild to run a build.

The build specification is a YAML file that lets you choose the commands to run at each phase of the build and other settings. Without a build spec, CodeBuild cannot successfully convert your build input into build output or locate the build output artifact in the build environment to upload to your output bucket. If you include a build spec as part of the source code, by default, the build spec file must be named buildspec.yml and placed in the root of your source directory.

A collection of input files is called build input artifacts or build input, and a deployable version of the source code is called a build output artifact or build output.

26
Q

AWS CodeDeploy

A

A fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers.

An Application is a name that uniquely identifies the application you want to deploy. CodeDeploy uses this name, which functions as a container, to ensure the correct combination of revision, deployment configuration, and deployment group are referenced during a deployment.

Compute platform is the platform on which CodeDeploy deploys an application (EC2, ECS, Lambda, On-premises servers).

Deployment configuration is a set of deployment rules and deployment success and failure conditions used by CodeDeploy during a deployment.

Deployment group contains individually tagged instances, Amazon EC2 instances in Amazon EC2 Auto Scaling groups, or both. In an Amazon ECS deployment, a deployment group specifies the Amazon ECS service, load balancer, optional test listener, and two target groups. It also specifies when to reroute traffic to the replacement task set and when to terminate the original task set and ECS application after a successful deployment. In an AWS Lambda deployment, a deployment group defines a set of CodeDeploy configurations for future deployments of an AWS Lambda function.

In an EC2/On-Premises deployment, a deployment group is a set of individual instances targeted for a deployment. In an in-place deployment, the instances in the deployment group are updated with the latest application revision. In a blue/green deployment, traffic is rerouted from one set of instances to another by deregistering the original instances from a load balancer and registering a replacement set of instances that typically has the latest application revision already installed.

A revision for an AWS Lambda deployment is a YAML- or JSON-formatted application specification file (AppSpec file) that specifies information about the Lambda function to deploy; it can be stored in Amazon S3 buckets. A revision for an Amazon ECS deployment is a YAML- or JSON-formatted file that specifies the Amazon ECS task definition used for the deployment, a container name and port mapping used to route traffic, and optional Lambda functions run after deployment lifecycle events. A revision for an EC2/On-Premises deployment is an archive file that contains source content (source code, webpages, executable files, and deployment scripts) and an application specification file; it can be stored in Amazon S3 buckets or GitHub repositories.

27
Q

AWS CodeDeploy lifecycle events

A
ApplicationStop
DownloadBundle
BeforeInstall
Install
AfterInstall
ApplicationStart
ValidateService
28
Q

AWS CodeDeploy deployment with lambda

A

You must choose one of the following deployment configuration types to specify how traffic is shifted from the original Lambda function version to the new version:

Canary: Traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.

Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.

All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version all at once.

29
Q

If you don’t specify a deployment configuration, CodeDeploy uses the

A

If you don’t specify a deployment configuration, CodeDeploy uses the CodeDeployDefault.OneAtATime deployment configuration.
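
A hedged sketch of overriding that default when creating a deployment (application, group, bucket, and key names are placeholders):

~$ aws deploy create-deployment \
      --application-name my-app \
      --deployment-group-name my-group \
      --deployment-config-name CodeDeployDefault.AllAtOnce \
      --s3-location bucket=my-bucket,key=app.zip,bundleType=zip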

30
Q

Monitoring CodeDeploy

A

In CodeDeploy, you should at a minimum monitor the following items:
Deployment events and status
Instance events and status

Tools and services:
Amazon CloudWatch alarms, events, and logs
AWS CloudTrail
Amazon SNS
AWS CodeDeploy console

31
Q

S3 is

A

“Read after Write Consistent” for new PUTS

“Eventually Consistent” for Overwrite PUTS

DELETE is eventually Consistent

32
Q

SSE-KMS offers additional protection compared to SSE-S3

A

AWS KMS allows you to have separate permissions for the use of the envelope key

Provides you with an audit trail of when the key was used and by whom

Create/manage the encryption key yourself if you wish

More flexible than SSE-S3

33
Q

Multipart upload

A

Recommended if object size exceeds 100 MB

Initiation
Upload parts
Completion
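
A minimal sketch of the three steps with the low-level s3api commands (bucket, key, and <upload-id> are placeholders; parts.json lists the uploaded parts and their ETags):

~$ aws s3api create-multipart-upload --bucket my-bucket --key big-file
~$ aws s3api upload-part --bucket my-bucket --key big-file \
      --part-number 1 --upload-id <upload-id> --body part1
~$ aws s3api complete-multipart-upload --bucket my-bucket --key big-file \
      --upload-id <upload-id> --multipart-upload file://parts.json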

34
Q

Cross-region replication (CRR): when you delete an object

A

Deleting without a version ID adds a delete marker, which replication can copy to the destination bucket

Deleting with a version ID removes that version in the source bucket only; the deletion is not replicated

35
Q

S3 CRR (cross-region replication) is supported for encryption types:

A

SSE-S3 or SSE-KMS

36
Q

CodeDeploy AppSpec.yml keys

A
version:
os:
files:
hooks:
   BeforeInstall:
   AfterInstall:
   ApplicationStart:
   ApplicationStop:

37
Q

CodeDeploy agent is not required for

A

ECS or Lambda

38
Q

CodeDeploy deployment types

A

In-place: only for EC2/on-premises
Load balancer: fleet of EC2

   All at once
   Half at a time
   One at a time (a moment where the app cannot be accessed)

Blue/green:
   Will not interrupt the existing environment
   Customer traffic is never affected

ECS and Lambda support only blue/green

39
Q

VPC flow logs

A

Capture information about the IP traffic entering and exiting the VPC

40
Q

DynamoDB supports two kinds of indexes:

A

Global secondary index – An index with a partition key and sort key that can be different from those on the table.

Local secondary index – An index that has the same partition key as the table, but a different sort key.

41
Q

Lambda recursive fxn resolved by

A

Avoid using recursive code in your Lambda function, wherein the function automatically calls itself until some arbitrary criteria is met. This could lead to unintended volume of function invocations and escalated costs. If you do accidentally do so, set the function reserved concurrency to 0 immediately to throttle all invocations to the function, while you update the code.
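
The throttling step from the answer as a CLI one-liner (the function name is a placeholder):

~$ aws lambda put-function-concurrency \
      --function-name my-function \
      --reserved-concurrent-executions 0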

42
Q

Lambda You can configure the following items for a published lambda function version:

A

You can configure the following items for a published function version:

Triggers

Destinations

Provisioned concurrency

Asynchronous invocation

43
Q

Elastic Beanstalk supports two methods of saving configuration option settings. Configuration files in YAML or JSON format can be included in your application’s source code in a directory named ?

A

.ebextensions

.ebextensions have the lowest level of precedence and are overridden by settings at any other level.

To use configuration files, create a folder named .ebextensions at the top level of your project’s source code. Add a file with the extension .config and specify options in the following manner:

option_settings:
  - namespace:  namespace
    option_name:  option name
    value:  option value
  - namespace:  namespace
    option_name:  option name
    value:  option value
For example, the following configuration file sets the application's health check URL to /health:

healthcheckurl.config

44
Q

This configures the Elastic Load Balancing load balancer in your Elastic Beanstalk environment to make an HTTP request to the path

A

This configures the Elastic Load Balancing load balancer in your Elastic Beanstalk environment to make an HTTP request to the path /health to each EC2 instance to determine if it is healthy or not.

45
Q

An AppSpec file does not exist on an instance.

A

An AppSpec file does not exist on an instance before you deploy to it. For this reason, the ApplicationStop hook does not run the first time you deploy to the instance. You can use the ApplicationStop hook the second time you deploy to an instance.

46
Q

Configuring the AWS X-Ray daemon flags

13 flags

A

-n or --region, which you use to set the region that the daemon uses to send trace data to X-Ray.

If you are running the daemon locally, that is, not on Amazon EC2, you can add the -o option to skip checking for instance profile credentials so the daemon will become ready more quickly.

Supported environment variables

AWS_REGION – Specifies the AWS Region of the X-Ray service endpoint.

HTTPS_PROXY – Specifies a proxy address for the daemon to upload segments through. This can be either the DNS domain names or IP addresses and port numbers used by your proxy servers.

Command line options

-b, --bind – Listen for segment documents on a different UDP port.
   --bind "127.0.0.1:3000"
   Default – 2000.

-t, --bind-tcp – Listen for calls to the X-Ray service on a different TCP port.
   --bind-tcp "127.0.0.1:3000"

-c, --config – Load a configuration file from the specified path.
   --config "/home/ec2-user/xray-daemon.yaml"

-f, --log-file – Output logs to the specified file path.
   --log-file "/var/log/xray-daemon.log"

-l, --log-level – Log level, from most verbose to least: dev, debug, info, warn, error, prod.
   --log-level warn
   Default – prod.

-m, --buffer-memory – Change the amount of memory in MB that buffers can use (minimum 3).
   --buffer-memory 50
   Default – 1% of available memory.

-o, --local-mode – Don't check for EC2 instance metadata.

-r, --role-arn – Assume the specified IAM role to upload segments to a different account.
   e.g. --role-arn "arn:aws:iam::123456789012:role/xray-cross-account"

-a, --resource-arn – Amazon Resource Name (ARN) of the AWS resource running the daemon.

-p, --proxy-address – Upload segments to AWS X-Ray through a proxy. The proxy server's protocol must be specified.
   e.g. --proxy-address "http://192.0.2.0:3000"

-n, --region – Send segments to the X-Ray service in a specific region.

-v, --version – Show AWS X-Ray daemon version.

-h, --help – Show the help screen.

You can also use a YAML format file to configure the daemon. Pass the configuration file to the daemon by using the -c option.

~$ ./xray -c ~/xray-daemon.yaml

47
Q

Amazon DynamoDB Streams

A

Amazon DynamoDB Streams

AWS customers use Amazon DynamoDB to store mission-critical data. You can integrate Amazon DynamoDB Streams with other AWS services to build ordered, near-real-time data processing capabilities. DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records from a DynamoDB stream in near-real time.

Partitions and Data Distribution

Amazon DynamoDB stores data in partitions. A partition is an allocation of storage for a table, backed by solid state drives (SSDs) and automatically replicated across multiple Availability Zones within an AWS Region. Partition management is handled entirely by DynamoDB—you never have to manage partitions yourself.

When you create a table, the initial status of the table is CREATING. During this phase, DynamoDB allocates sufficient partitions to the table so that it can handle your provisioned throughput requirements. You can begin writing and reading table data after the table status changes to ACTIVE.

DynamoDB allocates additional partitions to a table in the following situations:

If you increase the table’s provisioned throughput settings beyond what the existing partitions can support.

If an existing partition fills to capacity and more storage space is required.

Partition management occurs automatically in the background and is transparent to your applications. Your table remains available throughout and fully supports your provisioned throughput requirements.

For more details, see Partition Key Design.

Global secondary indexes in DynamoDB are also composed of partitions. The data in a global secondary index is stored separately from the data in its base table, but index partitions behave in much the same way as table partitions.

Process DynamoDB streams using AWS Lambda
AWS Lambda is a service that lets you run code without provisioning or managing servers. For example, Lambda can execute your code based on a DynamoDB Streams event (such as inserting, updating, or deleting an item). Lambda polls a DynamoDB stream and, when it detects new records, invokes your Lambda function and passes in one or more events.

To understand how Lambda processes DynamoDB streams, you have to understand how DynamoDB streams work. DynamoDB stores data in partitions, which are based on either a partition key only or both a partition key and a sort key. When you enable a stream on a DynamoDB table, DynamoDB creates at least one shard per partition.

Shards in DynamoDB streams are collections of stream records. Each stream record represents a single data modification in the DynamoDB table to which the stream belongs.

48
Q

DynamoDB

Every local secondary index must meet the following conditions:

A

The partition key is the same as that of its base table.

The sort key consists of exactly one scalar attribute.

The sort key of the base table is projected into the index, where it acts as a non-key attribute.

49
Q

If your application needs to query a table infrequently, but must perform many writes or updates against the data in the table

A

consider projecting KEYS_ONLY. The local secondary index would be of minimal size, but would still be available when needed for query activity.

50
Q

In a table with a composite primary key, all items sharing a partition key are sorted by

A

Sort key

Partition key + sort key = composite primary key

51
Q

Does Shield include WAF?

A

When you subscribe to Shield Advanced, AWS WAF is included at no extra cost

52
Q

If you’re new to Amazon Cognito Sync, use

A

AWS AppSync. Like Amazon Cognito Sync, AWS AppSync is a service for synchronizing application data across devices.

It enables user data like app preferences or game state to be synchronized. It also extends these capabilities by allowing multiple users to synchronize and collaborate in real time on shared data.

53
Q

Amazon Cognito Streams

A

Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time.

Using Amazon Cognito Streams, you can move all of your Sync data to Kinesis, which can then be streamed to a data warehouse tool such as Amazon Redshift for further analysis. To learn more about Kinesis, see Getting Started Using Amazon Kinesis.

54
Q

EB Immutable environment updates

A

To perform an immutable environment update, Elastic Beanstalk creates a second, temporary Auto Scaling group behind your environment’s load balancer to contain the new instances. First, Elastic Beanstalk launches a single instance with the new configuration in the new group. This instance serves traffic alongside all of the instances in the original Auto Scaling group that are running the previous configuration.

When the first instance passes health checks, Elastic Beanstalk launches additional instances with the new configuration, matching the number of instances running in the original Auto Scaling group

55
Q

DynamoDB

If possible, you should avoid using a

A

If possible, you should avoid using a Scan operation on a large table or index with a filter that removes many results. Also, as a table or index grows, the Scan operation slows. The Scan operation examines every item for the requested values and can use up the provisioned throughput for a large table or index in a single operation. For faster response times, design your tables and indexes so that your applications can use Query instead of Scan. (For tables, you can also consider using the GetItem and BatchGetItem APIs.)

Alternatively, design your application to use Scan operations in a way that minimizes the impact on your request rate.

56
Q

Running Lambda in a VPC
Lambda scales automatically based on the events it processes, so make sure the VPC can support the capacity.
ENI requirements formula:

A

Projected peak concurrent executions * (memory in GB / 3 GB)

e.g., 1000 * (1 GB / 3 GB) ≈ 333 IP addresses for ENIs
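
The same arithmetic in shell (integer division; 1,000 peak concurrent executions at 1 GB of memory):

~$ echo $(( 1000 * 1 / 3 ))
333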

57
Q

S3 objects upload size

A

More than 100 MB: multipart upload recommended

More than 5 GB: MUST use multipart upload

58
Q

S3 events

A

Can use EventBridge, SQS, SNS, or Lambda to be notified of events

59
Q

How should I choose between S3 Transfer Acceleration and Amazon CloudFront’s PUT/POST?

A

S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1 GB or if the data set is less than 1 GB in size, you should consider using Amazon CloudFront’s PUT/POST commands for optimal performance.
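
If you choose Transfer Acceleration, it is enabled per bucket; a hedged sketch (the bucket name is a placeholder):

~$ aws s3api put-bucket-accelerate-configuration \
      --bucket my-bucket \
      --accelerate-configuration Status=Enabled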

60
Q

S3 encryption KMS vs SSE-C

A

SSE-KMS lets AWS Key Management Service (AWS KMS) manage your encryption keys. Using AWS KMS to manage your keys provides several additional benefits. With AWS KMS, there are separate permissions for the use of the KMS key, providing an additional layer of control as well as protection against unauthorized access to your objects stored in Amazon S3. AWS KMS provides an audit trail so you can see who used your key to access which object and when, as well as view failed attempts to access data from users without permission to decrypt the data. Also, AWS KMS provides additional security controls to support customer efforts to comply with PCI-DSS, HIPAA/HITECH, and FedRAMP industry requirements.

SSE-C lets Amazon S3 perform the encryption and decryption of your objects while you retain control of the keys used to encrypt objects. With SSE-C, you don't need to implement or use a client-side library to perform the encryption and decryption of objects you store in Amazon S3, but you do need to manage the keys that you send to Amazon S3 to encrypt and decrypt objects. Use SSE-C if you want to maintain your own encryption keys, but don't want to implement or leverage a client-side encryption library.
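
A hedged sketch of both options on upload (bucket, key, and key file are placeholders; SSE-C key handling is simplified here):

~$ aws s3api put-object --bucket my-bucket --key doc.txt --body doc.txt \
      --server-side-encryption aws:kms

~$ aws s3api put-object --bucket my-bucket --key doc.txt --body doc.txt \
      --sse-customer-algorithm AES256 \
      --sse-customer-key fileb://my-key.bin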

61
Q

When to use EBS?

A

EBS’s use case is more easily understood than the other two. It must be paired with an EC2 instance. So when you need a high-performance storage service for a single instance, use EBS.

62
Q

When to use EFS?

A

EFS may be used whenever you need a shared file storage option for multiple EC2 instances with automatic, high-performance scaling.

EFS’s key benefits
Within its role as a shared file storage service for multiple EC2 instances, EFS provides many benefits:

Adaptive throughput – EFS's performance can scale in line with its storage, operating at a higher throughput for sudden, high-volume file dumps, reaching up to 500,000 IOPS or 10 GB per second

Totally elastic – once you've spun up an EFS instance, you can add files without worrying about provisioning or disturbing your application's performance

Additional accessibility – EFS can be mounted from different EC2 instances, and it can also cross the AWS region boundary via the use of VPC peering

63
Q

SSD-BACKED VOLUME optimized for

A

Apps with frequent read/write operations and a small I/O size

When the main criterion is IOPS, use SSD-backed volumes

More expensive than HDD

General Purpose: 3 IOPS per GiB

1 GiB (gibibyte) = 1.074 GB (gigabytes)

Minimum 100 IOPS, maximum 10,000 IOPS

64
Q

HDD-backed volumes best for

A

Throughput is the measure of the amount of data transferred from/to a storage device in a second.

HDD-backed volumes are best when throughput is the most important feature

65
Q

Throughput Optimized options

A

Throughput Optimized HDD:
Frequently accessed, throughput-intensive workloads
Low cost

Cold HDD:
Less frequently accessed, non-intensive workloads
Cheapest

Neither can be a root volume for EC2

66
Q

Data on EBS volume

A

Stays persistent even if instance terminates

67
Q

Most important EC2 cloudWatch metrics

A

EC2 Metrics:

CPU, DISK, NETWORK, STATUS CHECK

68
Q

Cross zone load balancer default

A

Application Load Balancer: enabled by default

Network Load Balancer: disabled by default

69
Q

NAT VS Internet Gateway

A

NAT Gateway is subnet level

Internet Gateway (IGW) is VPC level

70
Q

VPC Endpoints

A
Interface endpoint:
   An elastic network interface with an IP address
   e.g., CloudWatch, CloudFormation

Gateway endpoint:
   A target for a specific route in a route table
   e.g., S3, DynamoDB

71
Q

AWS Fault Injection Simulator experiment templates consist of:

A
1. Action set
2. Targets
3. Stop conditions

AWS resources can be specified in target components using:
   1. Resource ID
   2. Resource tag
   3. Resource filters

To specify AWS resources, paths and values can be specified in resource filter components. To select a specific Amazon EC2 instance:

"filters": [
  {
    "path": "Placement.AvailabilityZone",
    "values": ["my-az-zone-i-want"]
  }
]

72
Q

AWS CodeStar features

A

AWS CodeStar provides the tools you need to quickly develop, build, and deploy applications on AWS. With AWS CodeStar, you can use a variety of project templates to start developing applications on Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. AWS CodeStar projects support many popular programming languages including Java, JavaScript, PHP, Ruby, and Python.

AWS CodeStar allows you to accelerate application delivery by providing a pre-configured continuous delivery toolchain for developing, building, testing, and deploying your projects on AWS. You can easily secure access for your projects through built-in security policies for various roles including owners, contributors, and viewers.

The project dashboard in AWS CodeStar makes it easy to centrally monitor application activity and manage day-to-day development tasks such as recent code commits, builds, and deployments. Because AWS CodeStar integrates with Atlassian JIRA, a third-party issue tracking and project management tool, you can create and manage JIRA issues in the AWS CodeStar dashboard.

73
Q

AWS CodeStar project templates are for

A

Preconfigured delivery toolchains for developing, building, testing, and deploying projects on AWS services

74
Q

The AWS CodeStar application endpoint is used

A

To monitor running applications once provisioned by AWS CodeStar project templates

75
Q

CodePipeline pre-built plugins are

A

Useful to integrate third party developer tools

76
Q

CodePipeline Declarative Templates

A

Help model release workflows and their stages/actions

77
Q

Some of the features offered by AWS CodePipeline are:

A

Workflow modeling
AWS integrations
Pre-built plugins

On the other hand, AWS CodeStar provides the following key features:

Start developing on AWS in minutes
Manage software delivery in one place
Work across your team securely

78
Q

Working with AWS CloudFormation StackSets

A

AWS CloudFormation StackSets extends the capability of stacks by enabling you to create, update, or delete stacks across multiple accounts and AWS Regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified AWS Regions.

79
Q

AWS CloudFormation StackSets

A

AWS CloudFormation enables you to create and provision AWS infrastructure deployments predictably and repeatedly. It helps you leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure. AWS CloudFormation enables you to use a template file to create and delete a collection of resources together as a single unit (a stack).

A stack set lets you create stacks in AWS accounts across regions by using a single CloudFormation template. All the resources included in each stack are defined by the stack set’s CloudFormation template. As you create the stack set, you specify the template to use, in addition to any parameters and capabilities that template requires.

After you’ve defined a stack set, you can create, update, or delete stacks in the target accounts and AWS Regions you specify. When you create, update, or delete stacks, you can also specify operation preferences, such as the order of Regions in which you want the operation to be performed, the failure tolerance beyond which stack operations stop, and the number of accounts in which operations are performed on stacks concurrently.

A stack set is a regional resource. If you create a stack set in one AWS Region, you can’t see it or change it in other Regions.

80
Q

How to secure a REST API?

All APIs must be secured through proper authentication and monitoring. The two main ways to secure REST APIs include:

A
  1. Authentication tokens
    These are used to authorize users to make the API call. Authentication tokens check that the users are who they claim to be and that they have access rights for that particular API call. For example, when you log in to your email server, your email client uses authentication tokens for secure access.
  2. API keys
    API keys verify the program or application making the API call. They identify the application and ensure it has the access rights required to make the particular API call. API keys are not as secure as tokens but they allow API monitoring in order to gather data on usage. You may have noticed a long string of characters and numbers in your browser URL when you visit different websites. This string is an API key the website uses to make internal API calls.
81
Q

You can use access keys if you use? (3)

A

REST APIs
AWS SDKs
Query API operations

82
Q

AWS CloudTrail enables

A

Can backtrack user activities based on API calls

Filter through the generated logs

See who stopped what

When did X occur

Why did Y change

When did Z do that

AWS CloudTrail enables auditing, security monitoring, and operational troubleshooting by tracking your user activity and API usage. With CloudTrail, you can record two types of events: management events capturing control plane actions such as creating or deleting Amazon Simple Storage Service (Amazon S3) buckets, and data events capturing high volume data plane actions such as reading or writing an Amazon S3 object. You pay only for what you use of the paid features listed below. There are no minimum fees or upfront commitments. Features that are provided at no additional charge are also listed below.

83
Q

CloudTrail vs CloudWatch

A

CloudWatch Logs reports on application logs, while CloudTrail Logs provide you specific information on what occurred in your AWS account. CloudWatch Events is a near real time stream of system events describing changes to your AWS resources. CloudTrail focuses more on AWS API calls made in your AWS account.

84
Q

Features of API Gateway

A

Features of API Gateway

Support for stateful (WebSocket) and stateless (HTTP and REST) APIs.

Powerful, flexible authentication mechanisms, such as AWS Identity and Access Management policies, Lambda authorizer functions, and Amazon Cognito user pools.

Developer portal for publishing your APIs.

Canary release deployments for safely rolling out changes.

CloudTrail logging and monitoring of API usage and API changes.

CloudWatch access logging and execution logging, including the ability to set alarms. For more information, see Monitoring REST API execution with Amazon CloudWatch metrics and Monitoring WebSocket API execution with CloudWatch metrics.

Ability to use AWS CloudFormation templates to enable API creation. For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference.

Support for custom domain names.

Integration with AWS WAF for protecting your APIs against common web exploits.

Integration with AWS X-Ray for understanding and triaging performance latencies.

85
Q

Build an API Gateway REST API with Lambda integration

A

To build an API with Lambda integrations, you can use Lambda proxy integration or Lambda non-proxy integration.

In Lambda proxy integration, the input to the integrated Lambda function can be expressed as any combination of request headers, path variables, query string parameters, and body.

In addition, the Lambda function can use API configuration settings to influence its execution logic.

For an API developer, setting up a Lambda proxy integration is simple. Other than choosing a particular Lambda function in a given region, you have little else to do. API Gateway configures the integration request and integration response for you. Once set up, the integrated API method can evolve with the backend without modifying the existing settings. This is possible because the backend Lambda function developer parses the incoming request data and responds with desired results to the client when nothing goes wrong or responds with error messages when anything goes wrong.

86
Q

On-prem instances do not use

A

IAM Instance profiles

87
Q

To use CodeDeploy on EC2 instances or on-premises servers

A

To use CodeDeploy on EC2 instances or on-premises servers, the CodeDeploy agent must be installed first. We recommend installing and updating the CodeDeploy agent with AWS Systems Manager

88
Q

AWS CloudFormation templates

A

Do not embed credentials in your templates
Dynamic references provide a compact, powerful way for you to reference external values that are stored and managed in other services, such as the AWS Systems Manager Parameter Store or AWS Secrets Manager. When you use a dynamic reference, CloudFormation retrieves the value of the specified reference when necessary during stack and change set operations, and passes the value to the appropriate resource. However, CloudFormation never stores the actual reference value.

Use parameter constraints

Use AWS-specific parameter types
If your template requires inputs for existing AWS-specific values, such as existing Amazon Virtual Private Cloud IDs or an Amazon EC2 key pair name, use AWS-specific parameter types. For example, you can specify a parameter as type AWS::EC2::KeyPair::KeyName, which takes an existing key pair name that's in your AWS account and in the region where you are creating the stack. AWS CloudFormation can quickly validate values for AWS-specific parameter types before creating your stack. Also, if you use the CloudFormation console, CloudFormation shows a drop-down list of valid values, so you don't have to look up or memorize the correct VPC IDs or key pair names

Use AWS::CloudFormation::Init to deploy software applications on Amazon EC2 instances

Use the latest helper scripts

Validate templates before using them

89
Q

CloudFormation best practices

A

Organize your stacks by lifecycle and ownership

Use cross-stack references to export shared resources
use cross-stack references to export resources from a stack so that other stacks can use them. Stacks can use the exported resources by calling them using the Fn::ImportValue function.

Verify quotas for all resource types

Use modules to reuse resource configurations
Modules are imported into the consuming template, which makes it possible for you to access the resources inside the module using a Ref or Fn::GetAtt

To make templates reusable, use the parameters, mappings, and conditions sections so that you can customize your stacks when you create them. For example, for your development environments, you can specify a lower-cost instance type compared to your production environment, but all other configurations and settings remain the same.

Use AWS-specific parameter types
If your template requires inputs for existing AWS-specific values, such as existing Amazon Virtual Private Cloud IDs or an Amazon EC2 key pair name, use AWS-specific parameter types. For example, you can specify a parameter as type AWS::EC2::KeyPair::KeyName

Use parameter constraints
With constraints, you can describe allowed input values so that CloudFormation catches any not valid values before creating a stack. You can set constraints such as a minimum length, maximum length, and allowed patterns

Use the latest helper scripts
The helper scripts are updated periodically
yum install -y aws-cfn-bootstrap

Validate templates before using them
use the aws cloudformation validate-template command or ValidateTemplate operation.

Validate templates for organization policy compliance
You can also validate your template for compliance to organization policy guidelines. AWS CloudFormation Guard (cfn-guard)

Create change sets before updating your stacks
Change sets allow you to see how proposed changes to a stack might impact your running resources before you implement them. CloudFormation doesn’t make any changes to your stack until you run the change set, allowing you to decide whether to proceed with your proposed changes or create another change set.

Use stack policies
Stack policies help protect critical stack resources from unintentional updates that could cause resources to be interrupted or even replaced. A stack policy is a JSON document that describes what update actions can be performed on designated resources. Specify a stack policy whenever you create a stack that has critical resources.

Update your Amazon EC2 instances regularly
On all your Amazon EC2 Windows instances and Amazon EC2 Linux instances created with CloudFormation, regularly run the yum update command to update the RPM package. This ensures that you get the latest fixes and security updates.

90
Q

Amazon SQS delay queues

A

Delay queues let you postpone the delivery of new messages to consumers for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. For information about configuring delay queues using the console see Configuring queue parameters (console).

Note

For standard queues, the per-queue delay setting is not retroactive—changing the setting doesn’t affect the delay of messages already in the queue.

For FIFO queues, the per-queue delay setting is retroactive—changing the setting affects the delay of messages already in the queue.
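
A minimal sketch of creating a delay queue (queue name and delay value are illustrative):

~$ aws sqs create-queue \
      --queue-name my-delay-queue \
      --attributes DelaySeconds=45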

91
Q

SQS Visibility timeout vs message timers

A

Message timers let you specify an initial invisibility period for a message added to a queue. The default (minimum) invisibility period for a message is 0 seconds. The maximum is 15 minutes.

To prevent other consumers from processing a message redundantly, SQS sets a visibility timeout, a period of time SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
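
A hedged sketch of both mechanisms (<queue-url> and <receipt-handle> are placeholders):

~$ aws sqs send-message --queue-url <queue-url> \
      --message-body "hello" --delay-seconds 120

~$ aws sqs change-message-visibility --queue-url <queue-url> \
      --receipt-handle <receipt-handle> --visibility-timeout 60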

92
Q

SQS delay queues

A

Delay queues let you postpone the delivery of new messages to a queue for a number of seconds.

93
Q

SQS queue standard

A

Standard queues
Default queue type.
Makes a best effort to preserve the order of messages.
Stores copies of your messages on multiple servers for redundancy and high availability.
Consumes messages using short polling (default) – takes a subset of SQS servers (based on a weighted random distribution) and returns messages from only those servers.

94
Q

SQS message timers

A

Allow Amazon SQS to use a message timer's DelaySeconds value to set a per-message delay

95
Q

DynamoDB Reserved Words

A

#pr — ProductReviews (an example expression attribute name)

1 Reserved words
Sometimes you might need to write an expression containing an attribute name that conflicts with a DynamoDB reserved word. (For a complete list of reserved words, see Reserved Words in DynamoDB.)

Note

If an attribute name begins with a number or contains a space, a special character, or a reserved word, you must use an expression attribute name to replace that attribute’s name in the expression.

For example, the following AWS CLI example would fail because COMMENT is a reserved word.

2 Attribute Names Containing Dots
In an expression, a dot (“.”) is interpreted as a separator character in a document path. However, DynamoDB also allows you to use a dot character as part of an attribute name. This can be ambiguous in some cases. To illustrate, suppose that you wanted to retrieve the Safety.Warning attribute from a ProductCatalog item (see Specifying Item Attributes When Using Expressions).

Suppose that you wanted to access Safety.Warning using a projection expression.

DynamoDB would return an empty result, rather than the expected string (“Always wear a helmet”). This is because DynamoDB interprets a dot in an expression as a document path separator. In this case, you must define an expression attribute name (such as #sw) as a substitute for Safety.Warning. You could then use the following projection expression.

you must define an expression attribute name (such as #sw) as a substitute for Safety.Warning. You could then use the following projection expression.

aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --projection-expression "#sw" \
    --expression-attribute-names '{"#sw":"Safety.Warning"}'

DynamoDB would then return the correct result.

3 Nested Attributes
Suppose that you wanted to access the nested attribute ProductReviews.OneStar using a projection expression. The correct approach would be to define an expression attribute name for each element in the document path. You could then use #pr.#1star for the projection expression:

aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --projection-expression "#pr.#1star" \
    --expression-attribute-names '{"#pr":"ProductReviews","#1star":"OneStar"}'

4 Repeating Attribute Names

To make this more concise, you can replace ProductReviews with an expression attribute name such as #pr. The revised expression would now look like the following.

96
Q

DynamoDB

A

DynamoDB

NoSQL databases are non-relational and distributed

Do not support query joins (limited support)

Don't perform aggregations such as SUM, AVG, etc.

Scales horizontally

Maximum size of an item is 400 KB (item = record)

97
Q

Data Types in DynamoDB

Key types

A

Data Types

Scalar Types - String, Number, Binary, Boolean

Document Types - List, Map

DynamoDB - Primary Keys
Option 1 - Partition Key (HASH)
Unique for each item
Diverse so that data is distributed

Option 2 - Partition Key + Sort Key (HASH + RANGE)
Combination must be unique
Data is grouped by partition key
e.g., Music: Artist (partition key), Song Title (sort key)
Choose the attribute with the "highest cardinality" to be the partition key

Remember that in a partition key and sort key combination, the data will be heavily skewed toward the partition key. This is important to understand because DynamoDB stores data in locations based on the partition key.

98
Q

DynamoDB throttling

A

DynamoDB Throttling

If you exceed provisioned RCU or WCU, you will get a "ProvisionedThroughputExceededException"

Causes:
Hot keys - one partition key is being read too many times
Hot partitions
Very large items - RCU and WCU depend on the size of items

Solutions:
Exponential backoff (part of the SDK)
Distribute partition keys
If RCU is the issue, consider DynamoDB Accelerator (DAX)

99
Q

DynamoDB Projection Expressions

A

To read data from a table, you use operations such as GetItem, Query, or Scan. Amazon DynamoDB returns all the item attributes by default. To get only some, rather than all of the attributes, use a projection expression.

A projection expression is a string that identifies the attributes that you want. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated.

The following are some examples of projection expressions, based on the ProductCatalog item from Specifying Item Attributes When Using Expressions:

A single top-level attribute.

Title

Three top-level attributes. DynamoDB retrieves the entire Color set.

Title, Price, Color

Four top-level attributes. DynamoDB returns the entire contents of RelatedItems and ProductReviews.

Title, Description, RelatedItems, ProductReviews

You can use any attribute name in a projection expression, provided that the first character is a-z or A-Z and the second character (if present) is a-z, A-Z, or 0-9. If an attribute name does not meet this requirement, you must define an expression attribute name as a placeholder. For more information, see Expression Attribute Names in DynamoDB.
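
A minimal get-item sketch using one of the projection expressions above (table and key come from the same docs example):

~$ aws dynamodb get-item \
      --table-name ProductCatalog \
      --key '{"Id":{"N":"123"}}' \
      --projection-expression "Title, Price, Color"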

100
Q

AWS SAM Templates

A

Transform: the specific version of AWS SAM to use – one or more macros for CloudFormation to process. It tells CloudFormation to transform the serverless resources into standard CloudFormation resources before the build.

Mappings: a literal mapping of keys and associated values, used to specify conditional parameter values.

Parameters: values passed to the template at runtime.

Format Version: just a reference to the AWS CloudFormation template format (not the SAM template version).

101
Q

Lambda You can configure the following items for a published function version: (5)

A

Triggers

Destinations

Provisioned concurrency

Asynchronous invocation

Database proxies

102
Q

three settings to enable CloudWatch to evaluate when to change the alarm state:

A

three settings to enable CloudWatch to evaluate when to change the alarm state:

  • Period is the length of time to evaluate the metric or expression to create each individual data point for an alarm. It is expressed in seconds. If you choose one minute as the period, there is one datapoint every minute.
  • Evaluation Period is the number of the most recent periods, or data points, to evaluate when determining alarm state.
  • Datapoints to Alarm is the number of data points within the evaluation period that must be breaching to cause the alarm to go to the ALARM state. The breaching data points do not have to be consecutive, they just must all be within the last number of data points equal to Evaluation Period.
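
A hedged sketch wiring the three settings together (metric, threshold, and instance ID are placeholders); with these values the alarm fires when 3 of the last 5 one-minute data points breach:

~$ aws cloudwatch put-metric-alarm \
      --alarm-name cpu-high \
      --namespace AWS/EC2 --metric-name CPUUtilization \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --statistic Average --period 60 \
      --evaluation-periods 5 --datapoints-to-alarm 3 \
      --threshold 80 --comparison-operator GreaterThanThreshold
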
103
Q

cloudWatch high-resolution metrics

A

Can specify sub-minute alarms (high-resolution alarm periods of 10 or 30 seconds)

104
Q

Lambda important

A

Stateless
Scales automatically
SOC, HIPAA, and ISO compliant
Log streams automatically monitor invocations and report to CloudWatch

105
Q

Projection expression vs filter expressions

A

DynamoDB returns all of the item attributes by default. To get just some, rather than all of the attributes, use a projection expression.

A projection expression is a string that identifies the attributes you want. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated.

Using condition expressions is incorrect because this is primarily used to determine which items should be modified for data manipulation operations such as PutItem, UpdateItem, and DeleteItem calls.

Using expression attribute names is incorrect because this is a placeholder that you use in a projection expression, as an alternative to an actual attribute name. An expression attribute name must begin with a #, and be followed by one or more alphanumeric characters.

Using filter expressions is incorrect because it simply determines which items (and not the attributes) within the Query results should be returned to you. All of the other results are discarded. Take note that the scenario says that you have to fetch specific attributes and not specific items.

106
Q

504 error likely:

A

Time-out

107
Q

AWS Lambda uses environment variables to facilitate communication with the X-Ray daemon and configure the X-Ray SDK.

A

_X_AMZN_TRACE_ID: Contains the tracing header, which includes the sampling decision, trace ID, and parent segment ID. If Lambda receives a tracing header when your function is invoked, that header will be used to populate the _X_AMZN_TRACE_ID environment variable. If a tracing header was not received, Lambda will generate one for you.

AWS_XRAY_CONTEXT_MISSING: The X-Ray SDK uses this variable to determine its behavior in the event that your function tries to record X-Ray data, but a tracing header is not available. Lambda sets this value to LOG_ERROR by default.

AWS_XRAY_DAEMON_ADDRESS: This environment variable exposes the X-Ray daemon’s address in the following format: IP_ADDRESS:PORT. You can use the X-Ray daemon’s address to send trace data to the X-Ray daemon directly, without using the X-Ray SDK.

108
Q

The following are the Gateway response types which are associated with the HTTP 504 error in API Gateway:

A

INTEGRATION_FAILURE - The gateway response for an integration failed error. If the response type is unspecified, this response defaults to the DEFAULT_5XX type.

INTEGRATION_TIMEOUT - The gateway response for an integration timed out error. If the response type is unspecified, this response defaults to the DEFAULT_5XX type.

For the integration timeout, the range is from 50 milliseconds to 29 seconds for all integration types, including Lambda, Lambda proxy, HTTP, HTTP proxy, and AWS integrations.

109
Q

ECS cluster: to minimize the number of instances in use, which task placement strategy should be selected?

A

A task placement strategy is an algorithm for selecting instances for task placement or tasks for termination. Task placement strategies can be specified when either running a task or creating a new service.

Amazon ECS supports the following task placement strategies:

binpack - Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use.

random - Place tasks randomly.

spread - Place tasks evenly based on the specified value. Accepted values are attribute key-value pairs, instanceId, or host.
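
A hedged sketch of running a task with the binpack strategy (cluster and task definition names are placeholders):

~$ aws ecs run-task \
      --cluster my-cluster \
      --task-definition my-task:1 \
      --count 2 \
      --placement-strategy type=binpack,field=memory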