First Batch Flashcards

1
Q

ECR

A

Elastic Container Registry - a managed registry for storing and managing container images; pay for what you use

2
Q

Interface Endpoint

A

There are two types of VPC endpoints: Interface Endpoints and Gateway Endpoints. An Interface Endpoint is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3 and DynamoDB. You must remember that only these two services use a VPC gateway endpoint. The rest of the AWS services use VPC interface endpoints.

3
Q

Gateway Endpoint

A

There are two types of VPC endpoints: Interface Endpoints and Gateway Endpoints. An Interface Endpoint is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3 and DynamoDB. You must remember that only these two services use a VPC gateway endpoint. The rest of the AWS services use VPC interface endpoints.
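Not from the course - a minimal boto3 sketch of creating the two endpoint types described above. The VPC, subnet, route table, security group IDs, and region are hypothetical placeholders.

```
# Hypothetical IDs throughout - shown only to contrast the two endpoint types.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint: S3 (and DynamoDB) only - attached via route tables
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

# Interface endpoint: an ENI with a private IP inside your subnet
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```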

4
Q

Simple Workflow Service (SWF)

A

This is similar to Step Functions in that it helps you coordinate and organize Lambda functions and other workers, but Simple Workflow Service is not exactly serverless: it is the older option, requires more involvement, and is not really used any more. Ask: DO YOU NEED CHILD PROCESSES?

5
Q

Recovery Point Objective (RPO)

A

This is a Disaster Recovery concept - it is about how recently before the incident the data was last backed up. The time between this point and the incident corresponds to the 'data loss'.

6
Q

Recovery Time Objective

A

This is dealing with Disaster Recovery.

It is the amount of time after an incident occurs before the service is back to being functional. The time between the disaster and recovery is known as the 'downtime'.

7
Q

EMR

A

Elastic Map Reduce: Helps create Hadoop clusters to analyze and process a vast amount of data, it will use EC2 instances to do this and it will know how to organize the EC2 instances to do this

8
Q

ECS

A

Elastic Container Service: an AWS service to create, manage, and organize your containers. The basic (EC2 launch type) version involves setting up EC2 instances yourself: you create the EC2 instances, install an ECS agent on each, and then tasks are placed on these instances based on how much workload they can handle.

From AWS website: ‘fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.’

9
Q

ECS Agent

A

An ECS agent is a software package that you run inside an EC2 instance so that the instance can connect to and work with ECS to run tasks and coordinate as needed to complete the desired outcome

From AWS website: ‘The Amazon ECS container agent allows container instances to connect to your cluster. The Amazon ECS container agent is included in the Amazon ECS-optimized AMIs, but you can also install it on any Amazon EC2 instance that supports the Amazon ECS specification. The Amazon ECS container agent is only supported on Amazon EC2 instances.’

10
Q

Cache

A

Cache is ‘in-memory’ storage that is quick and helps maintain easy access to frequently accessed data.

11
Q

ElastiCache

A

In-memory data store. ElastiCache requires significant application code changes, does not use IAM authentication (Redis AUTH is used instead), and supports SSL in transit.

Has two engines: Redis and Memcached

12
Q

RedShift

A

Redshift is a service used to run analytics on large datasets. It is node-based, with 1-120 nodes and up to 160 GB of data per node. Redshift is incredibly quick - even faster than Athena for queries and analysis - thanks to its use of indexes.

Extra: it is not multi-AZ; data can be imported from S3

13
Q

RedShift Spectrum

A

From AWS website: feature within Amazon Web Services’ Redshift data warehousing service that lets a data analyst conduct fast, complex analysis on objects stored on the AWS cloud. With Redshift Spectrum, an analyst can perform SQL queries on data stored in Amazon S3 buckets.

More processing power, without loading the data into Redshift

14
Q

Athena

A

It is a serverless, interactive query service that is very fast and powerful. It is specifically used for analysis and queries on data stored in S3.

15
Q

Glue

A

It is an ETL (extract, transform, load) service used to prepare data for analytics; fully serverless and often used to load data into Redshift

16
Q

ECS Agent and AMIs

A

You can more quickly/automate ECS Agent set up by having AMIs that can incorporate them.

17
Q

ELB

A

Elastic Load Balancer - the original (v1) ELB is the same thing as the Classic Load Balancer

18
Q

CICD

A

Continuous Integration and Continuous Deployment

19
Q

CloudFormation

A

So far we have set a lot of things up manually - WHAT IF OUR INFRASTRUCTURE WAS SET UP IN OUR CODE? CloudFormation is a declarative way of outlining your AWS infrastructure (ex: I want a security group, 2 EC2 instances, an EFS, etc.). CloudFormation creates everything in the right order and in the correct way, can estimate the cost of each component and of the whole stack, and lets us destroy and recreate infrastructure quickly. Stacks are used to create and manage sets of resources, and templates are uploaded to S3.
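Not from the course - an illustrative boto3 sketch of launching a stack from a tiny declarative template. Stack and resource names are hypothetical.

```
# Hypothetical stack/resource names; shows the declarative create/delete flow.
import boto3

template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MySecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP from anywhere
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-stack", TemplateBody=template)

# Tearing down and recreating is just as declarative:
# cfn.delete_stack(StackName="demo-stack")
```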

20
Q

Data Sync

A

Data Sync is a service used to quickly move data on to AWS.

AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS Storage services, as well as between AWS Storage services. You can use DataSync to migrate active datasets to AWS, archive data to free up on-premises storage capacity, replicate data to AWS for business continuity, or transfer data to the cloud for analysis and processing. Can go to S3, EFS, FSx for Windows File Server

21
Q

Database Migration Service

A

AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.

22
Q

All Disaster Recovery strategies

A

1) Backup and Restore
2) Pilot Light
3) Warm Standby
4) Hot Site / Multi-Site (active-active)

23
Q

ElasticBeanstalk

A

Elastic Beanstalk is the fastest and simplest way to deploy your application on AWS. You simply use the AWS Management Console, a Git repository, or an integrated development environment (IDE) such as Eclipse or Visual Studio to upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

24
Q

CloudFront

A

This is a cache (CDN) distributed on the AWS global edge network that serves cached content according to the caching behavior the developer selects. Additionally, CloudFront makes use of Signed URLs, OAI for S3 origins, and HTTP/HTTPS; if integrated with EC2, either the EC2 has to be public, or the ALB in front of it has to be public and the EC2 has to allow traffic from the ALB; and it has GEO RESTRICTIONS

25
EC2 Metrics
Standard (basic) monitoring is 5-minute resolution; detailed monitoring is 1-minute resolution
26
CloudWatch Features
Logs, Dashboards, Alarms, Agents, Custom Metrics
27
CloudWatch Logs
VPC Flow Logs, Elastic Beanstalk, Route 53, ECS
28
Cloud Watch Custom Metrics Resolution
1/5/10/30s
29
CloudWatch Alarm Resolution
10/30/60s
30
CloudWatch connection to EC2
CloudWatch can measure metrics on an EC2 instance, and when the instance is seen to be unhealthy a CloudWatch alarm can trigger an EC2 recovery
31
CloudWatch Agent
A CloudWatch agent is a file/process that runs on an EC2 instance and allows the instance to connect to and send data to CloudWatch so that we can process the data there
32
S3 Event Notification possible destinations
SNS, SQS, Lambda
33
Elastic Load Balancers do what...
They distribute load across machines
34
What are some of the key offerings/services relating to EC2
- Rent EC2 - Store on EBS - Scale the number using ASG - Distribute load across machines using ELB
35
OS for EC2s
Linux, Windows, Mac OS
36
Network-attached EC2 storage
EBS and EFS
37
EC2 user Data
Bootstrap script (configure at first launch) for EC2 From AWS Website: you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
38
Bootstrapping means...
Launching commands when a machine starts. The script is only run once, at the instance's first start; the more you add, the more the instance has to do at boot. Ex: install updates, download things from the internet, install software, etc.
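Not from the course - an illustrative boto3 sketch of passing a bootstrap script as User Data at launch. The AMI ID and the script contents are hypothetical.

```
# Hypothetical AMI ID and bootstrap script - runs once at first start.
import boto3

user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # boto3 base64-encodes this for the API call
)
```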
39
Security groups
- control how traffic is allowed into or out of EC2 - only contain allow rules - can reference by IP or by security group - they are a firewall on our EC2 instances - they regulate access to ports, authorized IP ranges, control of inbound network and control of outbound network
40
What do security groups regulate
they regulate access to ports, authorized IP ranges, control of inbound network and control of outbound network
41
Security groups can be attached to multiple instances true or false
T
42
Security groups are locked to a region, T or F
T
43
Security groups do not live outside EC2
F
44
Security groups cannot regulate/block based on other security groups: T or F
F
45
Port 22
SSH - log into an instance (Linux EC2)
46
Port 21
FTP - upload files into a file share
47
port 80
HTTP - access unsecured websites
48
port 443
HTTPS - access secured websites
49
From which OSs can you SSH into EC2 instances?
Mac, Linux, Windows >= 10
50
Putty is used for
Windows (all versions)
51
EC2 instance connect
- uses a web browser to connect, so it works from any local OS
52
EC2 Purchasing Options
- On Demand: short workload and predictable - Reserved (min 1 year) - Spot Instances: Short, workloads and can lose the connection Dedicated Hosts: Book an entire physical server, control instance placement
53
Types of Reserved instances
Reserved: long workloads Convertible Reserved Instances: Long workloads with flexible instances Scheduled Reserved: every Thursday between 3 and 6
54
On Demand
Short-term and uninterrupted workloads
55
Reserved instances
Up to 70% discount: a 1-year commitment gets a discount, 3 years gets an even bigger one, and the more you pay up front, the more you save. You reserve a specific instance type; good for steady-state usage. Convertible allows you to change the instance type.
56
Spot Instances
Up to 90% discount; you can lose them if the current spot price rises above your max price. The most cost-efficient instances: good for batch jobs, data analysis, image processing. You declare a max spot price; the hourly spot price varies based on offer and capacity. If reclaimed, you can choose to stop or terminate your instance within a 2-minute grace period.
57
Dedicated Host
3 years, expensive; useful for software that has a complicated licensing model, or for companies that have strong regulatory or compliance needs
58
Dedicated Instances
instances running on hardware thats dedicated to you you don't get access to underlying hardware per instance billing as opposed to per host billing for dedicated hosts no control over instance placement
59
Spot Block
Block a spot instance for a specific time frame, so think 'without interruptions'; the instance may be reclaimed only in very, very rare cases
60
The process and details of how to terminate spot Instances
A spot request has: max price, desired number of instances, launch specifications, request type, valid from, valid until. If your instance gets stopped, the spot request goes back to getting you an instance. To cancel a spot request it must be in the open, active, or disabled state. Cancelling a spot request does not terminate the instances: you must first cancel the spot request and then terminate the associated spot instances.
61
Spot Fleets
A set of spot instances + (optional) on-demand instances. The spot fleet will try to meet the target capacity. Strategies to allocate spot instances: lowestPrice, diversified, capacityOptimized. Spot Fleets allow us to automatically request spot instances with the lowest price - GIVES US SAVINGS BECAUSE IT CHOOSES THE RIGHT SPOT INSTANCE POOL TO REQUEST FROM
62
IPv4 vs IPv6
IPv4 is the most common; IPv6 is newer and mainly used for IoT
63
EC2 Placement Groups
Cluster: instances in a low-latency group in a single AZ; if the rack fails, all instances fail; good for big data jobs that need to finish fast. Spread: instances spread across different hardware; max 7 instances per group per AZ; because the instances are on different hardware, a hardware failure within an AZ doesn't take down the other instances; good for critical workloads. Partition: spread instances across many partitions within an AZ; scales to 100s of instances per group; each partition is a different rack, so each partition is isolated from the failure of the others
64
Elastic Network Interface (ENI)
Virtual network cards that allow instances to connect to the network. Each ENI has: a primary private IPv4, one or more secondary IPv4s, and one Elastic IP or public IPv4. Bound to a specific AZ. You can create ENIs independently and attach them on the fly for failover
65
EC2 'Stop': what is the most important aspect to consider when you end a EC2 with the 'Stop' option
The data on disk is kept intact in the next start
66
EC2 'Terminate': what is the most important aspect to consider when you end a EC2 instance with the 'Terminate' option
Any EBS volume that is set up to be destroyed on termination (the root volume by default) is lost; secondary volumes not marked for deletion are kept
67
What is an EC2 Start look like, what are the steps that are taken in starting up an EC2 instance
First start: the OS boots and the EC2 User Data script is run. Following starts: the OS boots up, then your application starts and caches get warmed up, which can take time
68
What does an EC2 Hibernate look like, what are some of the most important aspects to remember for an EC2 Hibernate
RAM is preserved, so when you restart the instance the boot is much faster (the OS is not stopped/restarted); the whole RAM is written to the root EBS volume, which must be encrypted and is kept while the instance is stopped. Good to know: max RAM is 150 GB, the root volume must be an encrypted EBS volume, it only works for On-Demand and Reserved instances, and an instance cannot be hibernated for more than 60 days
69
EBS
- a network drive that you can attach to your EC2 instance; can only be mounted to one EC2 at a time; locked to one specific AZ; think 'network USB stick' - since it goes over the network there may be a bit of latency; it can be detached from one EC2 instance and attached to another - you provision capacity but can increase it later - can attach 2 EBS volumes to 1 EC2 - can be left completely unattached
70
EBS Snapshots
make a backup of your EBS volume at a point in time, not necessary to detach volume to do snapshot, but recommended, can copy snapshots across AZ or region
71
AMI
Amazon Machine Image customization of EC2 instance you add your own software, config, OS faster boot / configuration time because all your software is pre packaged built for specific region but can be copied across regions 3 kinds: public, your own, AWS Marketplace AMI Steps to make an AMI: launch EC2 with your configurations, stop, build AMI (create it officially) , make more EC2 from that AMI
72
EC2 instance store
EBS volumes are network drives with good but 'limited' performance; if you need a high-performance hardware disk, use EC2 instance store: better I/O, but the storage is lost when the instance is stopped; good for buffer/cache; risk of data loss if the hardware fails
73
EBS Multi Attach Family
io1/io2 family
74
EBS encryption
you should do since has minimal impact, it is handled transparently (you have nothing to do), leverages keys from KMS
75
EFS
Elastic File System: managed NFS that can be mounted to many EC2 instances; more expensive, highly available, scalable. You need to use security groups to control access to EFS; encryption at rest with KMS. Scale: 1000s of concurrent NFS clients, high throughput. Performance modes: General Purpose, Max I/O. Throughput modes: Bursting and Provisioned
76
EFS Modes
Performance: 1) General Purpose 2) Max I/O. Throughput: 1) Bursting 2) Provisioned
77
Load Balancing
Load balancers are servers that forward traffic to multiple servers downstream: spread load, single point of access, seamlessly handle failures, enforce stickiness, high availability across AZs
78
Elastic Load Balancer is a....
A managed load balancer: AWS guarantees that it will be working, takes care of upgrades, and provides only a few configuration knobs, so it is easier to handle. IT DOES HEALTH CHECKS
79
Elastic Load Balancers Health Checks
Crucial for load balancers: they enable the load balancer to know whether the instances it forwards traffic to are available to 'reply'. The health check is done on a port and a route, and checks for a 200 response
80
Load Balancer Security Groups
Before load balancers, the EC2 security group dealt with a range of IPs; now it references another security group - the security group of the load balancer. Users can reach the load balancer from anywhere, allowing users to connect; the EC2 should only allow traffic from the load balancer
81
ALB
Layer 7. Load balancing to multiple HTTP apps across machines (machines grouped into something called target groups), or to multiple apps on the same machine (containers); uses routing rules to route requests to different target groups
82
ALB tables and different target groups options: what are the different ways that targeting can be set on an ALB
Path based routing: example.com/users and example.com/owners hostname based routing: one.example.com and other.example.com Query based routing example.com/users?id=123&orders=false quick review: path, hostname, query string and headers port mapping feature to redirect to a dynamic port in ECS
83
ALB Target Groups
EC2, ECS tasks, Lambda functions, IP addresses
84
ELB Features
``` Sticky Sessions (same client goes to the same instance), Cross Zone Load Balancing (balance between all AZs), SSL Certificates (encrypt traffic in transit), Connection Draining (finish in-flight requests before de-registering an instance) ```
85
SSL/TLS Certificate
SSL Certificate allows traffic between your clients and your load balancer to be encrypted in transit Secure Socket Layer TLS is the newer version and TLS are mainly used the data is encrypted over public before lb, and then once inside it becomes unencrypted in the private VPC HTTPS -> HTTP Load balancer uses certificate, you can manage certificate using AWS certificate Manager, can create upload you own certificates
86
Load Balancer SSL certificates
Load balancer uses certificate, you can manage certificate using AWS certificate Manager, can create upload you own certificates
87
SNI
multiple SSL certificates on ALB
88
ELB Connection Draining
Called Connection Draining on the CLB; on the ALB and NLB (target groups) it is called Deregistration Delay. Can be disabled if the value is set to 0. Stops sending new requests to the EC2 instance while it de-registers. Set the value low if requests are short
89
ASG
Auto Scaling Groups: automatically add extra EC2 instances behind the load balancer when scaling out (or remove them when scaling in), driven by scaling policies and scaling alarms (CloudWatch alarms); when an alarm goes off it scales out or in (the alarm decides which). Newer rules give better auto-scaling behavior (THIS IS MANAGED BY EC2). Brain dump: scaling policies can be based on CPU, network, custom metrics, or a schedule; ASGs use launch configurations or launch templates; IAM roles attached to an ASG get assigned to its EC2 instances; ASGs are free; having instances under an ASG means that if something goes wrong and one is terminated, it will automatically be replaced by the ASG; ASGs can terminate instances marked as unhealthy by a load balancer
90
Launch Configuration has...
AMI, instance type, EC2 User Data, EBS Volumes, Security Groups, SSH Key Pair
91
Scaling Policies
Custom metrics are important. Scaling cooldowns: after a scaling activity happens you are in the cooldown period, and during this period the ASG will not launch new instances
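Not from the course - an illustrative boto3 sketch of one kind of scaling policy (target tracking on average CPU). The ASG name is a hypothetical placeholder.

```
# Hypothetical ASG name; keeps average CPU around 50%.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```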
92
RDS
Relational Database Service: databases that use SQL as a query language. RDS is managed: provisioning, OS patching, continuous backups and restore, monitoring dashboards, read replicas for improved reads, multi-AZ setup for DR, maintenance windows for upgrades. You CANNOT SSH into the instance
93
Backups in RDS
Automatically enabled in RDS Transaction logs are backed up by RDS every 5 min, Daily full backup of the DB 7 day retention
94
RDS snapshots vs Backups
Snapshots are manually done by user
95
RDS Auto Scaling
When you enable this feature, it helps you increase storage on your RDS DB instance dynamically: when RDS detects you are running out of free database storage, it scales automatically. You must set a maximum storage threshold. Good for unpredictable workloads. Recap: enable it, it auto-scales storage for you, and you must set a max storage limit
96
RDS Read Replicas
We can create up to 5 read replicas, within an AZ, cross-AZ, or cross-region. Replication is ASYNC. A read replica can be promoted to its own database and then lives its own lifecycle. Use case: you need to run reporting/analytics against RDS - create a read replica and run the analytics there. Read replicas are only for SELECT statements. Free if the read replica is in the same region
97
RDS Multi AZ
It is for disaster recovery Sync replication one DNS name INCREASE AVAILABILITY, IT IS A STAND BY (SO IT IS NOT READ FROM UNLESS FAILURE OF MASTER), NOT INCREASING SCALING, FOR FAILOVER ONLY
98
RDS from single AZ to Multi AZ
Zero-downtime operation: just click 'modify'. The following happens internally: a snapshot is taken, a new standby DB is restored from the snapshot in a new AZ, and synchronization is established between the two
99
RDS encryption
At Rest, In flight, KMS is used, SSL, SSL certificate,
100
Encryption process of unencrypted to encrypted
SS = snapshot. Take a snapshot, copy the snapshot with encryption enabled, restore the DB from the encrypted snapshot, migrate applications to the new DB, delete the old one. Summary: snapshot, copy snapshot as encrypted, restore into a new DB, point apps to the new DB, delete the old
101
Where are RDS DB usually deployed
Private subnet
102
RDS - IAM Auth
NEED TO STUDY
103
Aurora
Cloud-optimized, Postgres- and MySQL-compatible DB. Storage automatically grows in increments of 10 GB, up to 64 TB. Can have 15 read replicas. Failover is instantaneous; native HA, with data automatically replicated across 3 AZs. 1 master that does the writes
104
Failover in Aurora
Instant, native HA
105
Writer Endpoint
Think Aurora: this is where the client connects to the master in order to write
106
Reader Endpoint
Think Aurora: this is the endpoint clients connect to for reads; it spreads connections across the Aurora read replicas
107
All Aurora advanced featuers
Auto scaling; Global (one primary region plus up to 5 secondary regions, replication lag under 1 second, promoting another region for disaster recovery has an RTO of < 1 min); custom endpoints (usually paired with different instance types); serverless (good for intermittent, infrequent, and unpredictable workloads); Multi-Master (HA for the writer node - all nodes become writers as well as readers); Machine Learning integrations
108
Global Aurora
- 1 primary region - up to 5 secondary regions - up to 16 RR per secondary regions - helps with decreasing latency - replication lag (data updates to secondary regions, takes up to 1 second) - promoting another region has an RTO of < 1 minute
109
S3
- infinitely scaling storage - max object size is 5TB - versioning - encryption: SSE-S3, SSE-KMS, SSE-C, Client Side Encryption
110
S3 Objects and Buckets
- objects are files, buckets are directories - buckets must have a globally unique name - each object has a key; the key is the FULL path - the key can be split into a prefix + object name (the last part) - there are no real directories in S3, it's just keys with long names
111
S3 versioning
- versioning (enabled at bucket level, new versions, we version files)
112
S3 Encryption
encryption: SSE-S3, SSE-KMS, SSE-C, Client Side Encryption SSE-S3: keys handled and managed by S3, Object encrypted server side, encryption type, must (encrypted in S3) SSE-KMS: encryption keys handled and managed by KMS KMS advantage is user control and audit trail object is encrypted server side (encrypted in S3) SSE-C: server side encryption using data keys fully managed by customer, outside of AWS, HTTPS MUST BE USED, key must be provided in HTTP headers for every HTTP request made (Encrypted in S3) Client Side encryption: clients must encrypt data themselves before sending to S3, clients must decrypt when they receiver data
113
S3 is an HTTPS service so
There is an HTTP endpoint (non-encrypted) and an HTTPS endpoint (encryption in flight); most use HTTPS; HTTPS is mandatory for SSE-C
114
S3 Security
User based: IAM policies - which API calls should be allowed for a specific use from IAM console Resource based: - Bucket policies - Object Access control list - Bucket Access control list STUDY MORE,
115
S3 MFA Delete - what is it needed for
permanently delete an object version in S3 suspend versioning on the bucket
116
who can enable MFA delete
only the bucket owner
117
What other configuration/setting do I need to have set up in order to use MFA delete
versioning on the S3 bucket
118
S3 Default Encryption
If you upload an object to S3 without specifying encryption, S3 will automatically encrypt it if the 'default encryption' option is set; this is applied to all new objects
119
What is a way to force encryption of an object
bucket policy
120
S3 Access Logs
Log all access to S3 buckets; you can do analysis later (Athena). Make sure to put these logs in another bucket!
121
S3 Replication
CRR and SRR Async enable versioning in both buckets can have buckets in different accounts must give proper IAM permissions to S3 so that first bucket can write to second Only new objects replicated no chaining of replication
122
S3 Pre signed URLs
A pre-signed URL has a default expiration of 3600 seconds; users given a pre-signed URL inherit the permissions of the person who generated the URL
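Not from the course - an illustrative boto3 sketch of generating a pre-signed GET URL. The bucket and key are hypothetical placeholders.

```
# Hypothetical bucket/key; the URL carries the signer's permissions.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/q1.pdf"},
    ExpiresIn=3600,  # the default 3600 s mentioned above
)
print(url)  # share this link; it stops working after an hour
```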
123
S3 storage classes
``` Standard, Standard Infrequent Access (IA), One Zone-IA, S3 Intelligent-Tiering, Glacier, Glacier Deep Archive ```
124
S3 Lifecycle rules
Transition actions: when objects are transitioned to another storage class. Expiration actions: configure objects to expire after some time. Rules can be created for a certain prefix or for certain object tags
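Not from the course - an illustrative boto3 sketch of a lifecycle rule with a transition and an expiration, scoped to a prefix. Bucket name and prefix are hypothetical.

```
# Hypothetical bucket and prefix; transition after 30 days, expire after 365.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "logs-to-ia-then-expire",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```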
125
S3 Analytics
can set up s3 analytics to help determine when to transition objects from standard to standard IA takes 24h to 48h for first start updated daily think lifecycle rules
126
S3 Baseline Performance
S3 automatically scales to high request rates, with 100-200 ms latency; you get at least 3500 PUT/COPY/POST/DELETE and 5500 GET/HEAD requests per second per prefix
127
KMS limits for S3
Using SSE-KMS adds latency: reads/writes make extra requests to KMS, and KMS will throttle if there are too many requests
128
How to optimize S3 performance
Multi part upload and 'S3 Transfer Acceleration'
129
S3 Select and Glacier Select
Retrieve less data using SQL by performing server side filtering less network transfer, less CPU cost client side
130
S3 Event Notification
You can create rules for specific events and send the events to SNS, SQS, or Lambda
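Not from the course - an illustrative boto3 sketch of routing "object created" events for a prefix to an SQS queue. Bucket name and queue ARN are hypothetical, and the queue's access policy must already allow S3 to write to it.

```
# Hypothetical bucket name and queue ARN.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:uploads-queue",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "uploads/"}]}
                },
            }
        ]
    },
)
```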
131
S3 Requester Pays
In general, bucket owners pay for all the S3 storage and data transfer costs associated with their bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The requester cannot be anonymous; they must be authenticated in AWS
132
Athena
Serverless service to perform analytics directly against S3: leave the files in S3 and query them in place. Logs can be analyzed too
133
CloudFront general info
Content Delivery Network (CDN) improves read performance, content is cached at the edge uses edge location (ex: LA, Mumbai, Australia, London) edge locations serve cached content of CloudFront, edge location have local cache Also has Geo Restriction: Whitelist and Blacklist, make sure you are meeting regulators expectations TTL for about a day for cached content
134
CloudFront Origins
S3: for distributing files and caching them at the edge, enhanced security with CloudFront OAI, can be used as ingress Custom Origin (HTTP): ALB, EC2, S3 website, Any HTTP backend you want
135
OAI for CloudFront
Specific to CloudFront I believe, it is how the CloudFront edge locations access the S3 bucket, it takes on an OAI role, which then the S3 checks if allowed, and if it does allow, the edge locations can access the S3 bucket information
136
CloudFront ALB or EC2 as origin
Either the EC2 must be public or if private, the ALB must be public and the EC2 must allow the ALB to access it
137
CloudFront vs S3 Cross Region Replication
CloudFront: - Global Edge Network, for global distribution - Files are cached for a TTL - Great for static content to be everywhere S3 CRR - must be setup for each region you want replication to happen - files are updated in near real-time - read only - great for dynamic content that needs to be available at low latency in FEW REGIONS
138
CloudFront Signed URL/Signed Cookies
Signed URL = access to individual files. Signed cookies = access to multiple files. CloudFront accesses objects in S3 using OAI. When a client wants to access objects through CloudFront, we give the client a signed URL, which allows them access to CloudFront and thus to S3
139
Signed URL vs S3 Pre signed url
If you want people to have access to a CloudFront distribution that is in front of S3, use signed URLs. If you want people to have direct access to S3, use pre-signed URLs
140
CloudFront Advanced Concepts
Pricing: Different locations have different costs (Price Classes: All, 200, 100 with descending costs) Multiple Origins: To route to different kind of origins based on content type: go to ALB or S3 etc., can also increase HA and do failover, if there is an error on origin A go to origin B can do with EC2 or S3 for example Field Level Encryption: it's a lot of info - too lazy
141
Global Accelerator
Leverages the AWS internal network to route to your application: consistent performance, health checks, security. Also uses the AWS global network and edge locations, but there is no caching; good for UDP
142
Storage Gateway
Think Hybrid Cloud use cases: disaster recovery, backup and restore, tiered storage 3 types: file, volume, tape
143
File Gateway
Configured S3 buckets are accessible using the NFS and SMB protocols; supports S3 Standard, IA, and One Zone-IA; the most recently used data is cached; integrated with Active Directory
144
Volume Gateway
Block storage backed by S3 backed by EBS cached volumes stored volumes
145
Problem with synchronous apps?
Spikes in traffic could cause major issues so we should use async/decoupled apps that allow us to space out and manage spikes
146
SQS
- producers and consumers - producers send messages, consumers poll messages - unlimited throughput, unlimited number of messages in queue, default retention of messages: 4 days, max of 14 days, low latency, - duplicate messages, out of ordering messaging
147
SQS Producers
Producers use the SDK to send messages. Message retention is 4 days by default, up to 14 days. A message can carry an order id, customer id, or any attribute you want. We can have multiple consumers; at-least-once delivery. Consumers can run within an ASG scaled off a CloudWatch metric
148
SQS Consumers
Consumers (running on EC2, on-premises servers, or Lambda) poll SQS for messages, process them, and once done delete the message using the DeleteMessage API
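Not from the course - an illustrative boto3 sketch of the producer/consumer loop from the two cards above: send, long-poll, process, then delete. The queue URL is a hypothetical placeholder.

```
# Hypothetical queue URL.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"

# Producer
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # long polling
)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])
    # delete only after successful processing, otherwise the message
    # becomes visible again when the visibility timeout expires
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```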
149
SQS Security
Encryption: in flight using HTTPS, at rest using KMS, Client-side encryption IAM policies to regulate access to SQS API SQS Access Policies, similar to S3 bucket policies (good for cross account access and other services access such as SNS to SQS
150
SQS Queue Access Policies
2 good use cases. Cross-account access policies: allow another account (e.g., an EC2 instance in it) to poll from SQS. Publish S3 event notifications: allow an S3 bucket to write to the SQS queue
151
SQS Features
Message Visibility Timeout (how long a message is invisible to other consumers, 30 second default) Dead Letter Queue (if a consumer fails to process message, it goes back to queue, and if that happens often it goes to DLQ, good retention of 14 days) Request Response (Requesters and response systems allows you to have EC2 instances in a requesters groups and same thing for responders group, allows to scale requesters and responders, allows for for different requests and different responders, SQS TEMPORARY QUEUE CLIENT) Delay Queues (delay a message so consumers don't see immediately, up to 15 min, I think this is similar to short and long polling ) FIFO Queues (300 messages/s without batching, up to 3000 with batching) ASG Amazon SNS
152
SNS
one message and many receivers publish, subscribe event producer sends message to one SNS topic many event receivers listen to SNS topic notification SNS Access policies, similar to SQS access policies SNS Message Filtering SNS has FIFO too
153
SNS and SQS - Fan Out Pattern
Push once to an SNS topic and receive in all the SQS queues that are subscribers: fully decoupled, no data loss; make sure the SQS queue access policy allows SNS to write. Use case: S3 events to multiple queues (an S3 event can only have one S3 event rule). SNS has FIFO too - super similar to SQS FIFO: ordering by message group ID, deduplication using a deduplication ID. SNS message filtering
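Not from the course - an illustrative boto3 sketch of the fan-out pattern: subscribe two SQS queues to one SNS topic, then publish once. The ARNs are hypothetical, and the queues' access policies must allow the topic to write to them.

```
# Hypothetical topic/queue ARNs.
import boto3

sns = boto3.client("sns")
topic_arn = "arn:aws:sns:us-east-1:123456789012:orders"

for queue_arn in [
    "arn:aws:sqs:us-east-1:123456789012:billing-queue",
    "arn:aws:sqs:us-east-1:123456789012:shipping-queue",
]:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# One publish, every subscribed queue receives a copy
sns.publish(TopicArn=topic_arn, Message='{"order_id": 42}')
```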
154
Kinesis
makes it easy to collect, process, and analyze streaming data in real time Kinesis Data Streams: Streams are made up of Shards, the more shards, the more throughput
155
What are the 4 components/parts of Kinesis
Kinesis Data Streams: capture, process, and store data streams. Kinesis Data Firehose: load data streams into AWS data stores. Kinesis Data Analytics: analyze data streams with SQL or Apache Flink. Kinesis Video Streams: capture, process, and store video streams
158
Kinesis Data Streams
Kinesis Data Streams: Streams are made up of Shards, the more shards, the more throughput. There is a producer that sends a record (Partition Key and Data Blob), the shard then sends data to consumer with Record (same partition key, sequence no, data blob) shared or enhanced consumption once data is inserted in Kinesis it can't be deleted, data that shares the same partition goes to the same shard, billing is per shard provisioned 1 Shard = 1MB/s or 1000 msg/s managed scaling replay capability write custom code (producer/ consumer) real time (200ms)
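Not from the course - an illustrative boto3 sketch of producing records to a data stream; records that share a partition key land on the same shard, which is what preserves per-key ordering. The stream name is hypothetical.

```
# Hypothetical stream name.
import boto3
import json

kinesis = boto3.client("kinesis")

for event in [{"user": "alice", "action": "click"},
              {"user": "bob", "action": "scroll"}]:
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["user"],  # same user -> same shard -> ordered
    )
```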
159
Kinesis Firehose
A record is up to 1 MB; records can be sent to Lambda to transform the data, then there are batch writes (not instant writes - a near real-time service). 3 main destinations: S3, Amazon Redshift (COPY through S3), and Amazon Elasticsearch; 3rd-party destinations too, plus custom endpoints via HTTP Endpoint. Fully managed, no administration, auto scaling, serverless (unlike Kinesis Data Streams); near real time (60 s); no data storage
160
3 main destinations of Kinesis Firehose
S3 RedShift (COPY through S3) ElastiSearch
161
How to organize/group data into Kinesis Data Streams?
Partition Key - the same key goes to the same shard
162
SQS FIFO and Group ID
For SQS FIFO, if you don't use a group ID, messages are consumed in the order they are sent, with only one consumer. If you do use a group ID (similar to a partition key in Kinesis), each group can have a different consumer and the consumers can read in parallel, with ordering preserved within each group
163
SQS vs SNS vs Kinesis
SQS: SQS consumer "pull data", data deleted after consumed, can have as many consumers as we want, no need to provision throughput SNS: push data to subscribers, data not persisted, pub/sub, no need to provision throughput Kinesis: standard vs enhanced, replay data
164
Amazon MQ
When migrating to the cloud, instead of re-engineering the application to use SQS and SNS we can use Amazon MQ
165
What is the significance of ALB to ECS
ALB is used to expose the containers to internet
166
ECS order of creation
``` First create the ECS cluster, then the ASG, then put container instances in each AZ you want (the ASG and ECS container instances can span multiple AZs), then add the ECS agent, then configure and assign the ECS tasks ```
167
AWS Fargate
You do not provision infrastructure - serverless. Runs containers based on the CPU and RAM you specify. Create an ECS cluster, then AWS Fargate runs the tasks and assigns an ENI (each task gets its own unique ENI)
168
When running many tasks on Fargate need to make sure of one thing...
Make sure there are enough free/available IPs so that each task can have its own unique ENI
169
IAM Roles for ECS tasks
You need IAM roles for ECS tasks - 2 parts: a role for the agent and a role for the task. Role for the agent: attached to the instance and used by it to connect to the ECS and ECR services. Task role: attached directly to the task, so each task can have its own role, defined in the task definition and specific to its uses and needs
170
ECS and EFS
Can create an EFS file system that connects to EC2 Tasks and Fargate Tasks tasks launched in any AZ will be able to share the same data
171
ECS Services and Tasks and Load Balancing
ECS cluster can connect each task to a ALB, multiple tasks can be in a single ECS Container Instance You must allow on the EC2 instances any port from the ALB security group DONE THROUGH PORTS Load balancing : Each task has unique IP, same port, ENIs security group allow the ALB on the task port (same idea as above)
172
ECS tasks invoked by Event Bridge
STUDY MORE
173
ECS Scaling
Can be based on CPU usage where one service runs multiple tasks, the cpu usage (where each task goes on a unique instance, we average out the instances CPU usage and then check to see if we need to scale) SQS queue length is another scaling situation NOTE: 2 forms of scaling, service scaling and ASG scaling (if not enough instances to run new task, ECS Capacity Providers will scale ASG)
174
ECS Rolling updates
When updating from v1 to v2 (an ECS service update), we have to manage how many tasks can be started and stopped and in which order: set the minimum healthy percent and the maximum percent
175
Lambda Overview
Serverless virtual functions, limited by time (runs for a max of 15 min), auto scaling, EASY PRICING and on demand; several restrictions on the hardware (there are lots of limits - study more)
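Not from the course - an illustrative sketch of what a Python Lambda handler looks like; the event shape shown is hypothetical.

```
# Hypothetical event shape; the handler just echoes part of it back.
import json

def handler(event, context):
    # context carries metadata such as the remaining execution time
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```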
176
Lambda@Edge
Synchronous implementation of lambda it's when you have deployed a CDN using CloudFront and we want to run a global AWS Lambda alongside use cases: more responsive apps, don't manage servers, customize the CDN content, pay for what you use allows you to edit: Viewer Request -> Origin Request -> Origin Response -> Viewer Response (in order of user requesting data to receiving data)
177
Dynamo DB
Think key-value pairs, NoSQL (not relational), primary keys. Fully managed, highly available, serverless, scales to massive workloads; integrated with IAM for security, authorization, and administration. Provisioned throughput: 1 RCU = 1 strongly consistent read (or 2 eventually consistent reads) of up to 4 KB per second; 1 WCU = 1 write of up to 1 KB per second. OPTION to set up auto-scaling of throughput; can use burst credit, and you get a ProvisionedThroughputExceededException if you exceed capacity
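Not from the course - an illustrative boto3 sketch of writing and reading a key-value item. The table name and attributes are hypothetical; the table's partition key is assumed to be "user_id".

```
# Hypothetical table "users" with partition key "user_id".
import boto3

table = boto3.resource("dynamodb").Table("users")

# Write (consumes WCUs based on item size)
table.put_item(Item={"user_id": "42", "name": "Alice", "plan": "pro"})

# Eventually consistent read by default; pass ConsistentRead=True
# for a strongly consistent read
resp = table.get_item(Key={"user_id": "42"})
print(resp.get("Item"))
```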
178
Dynamo DB Advanced Features
DAX (easy implementation, speeds up READS) dynamo DB Streams (change logs, do analytics on it, mainly used with lambda to be integrated) On Demand: no capacity planning need for RCU or WCU but 1.5x cost, useful with high spikes and unpredictable workload, also good for when app is so low throughput, it's good to just have the on demand setup to make sure you aren't overspending on RCU and WCU Global Tables (CRR): all regions replication to other regions, helps with latency
179
API Gateway
Allows us to have a server-less REST API Integrations: Lambda, HTTP, AWS Service (all these can be exposed/connected to with API Gateway)
180
API Gateway Security
3 types: IAM permissions, Lambda authorizers, Cognito User Pools. IAM permissions (Sig v4). Lambda authorizer: uses a Lambda function to validate the token; good for 3rd-party types of authentication
181
CloudWatch Custom Metrics
Set your own custom metrics with the PutMetricData API call; ability to use dimensions to segment metrics (instance id, environment name). Metric resolution: Standard = 1 minute, High Resolution = 1/5/10/30 seconds. Important: metric data points can carry timestamps in the past or the future
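Not from the course - an illustrative boto3 sketch of pushing a custom metric with PutMetricData, segmented by dimensions; the namespace, metric, and dimension values are hypothetical, and StorageResolution=1 requests high-resolution data.

```
# Hypothetical namespace/metric/dimensions.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[
        {
            "MetricName": "ActiveUsers",
            "Dimensions": [
                {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},
                {"Name": "Environment", "Value": "prod"},
            ],
            "Value": 87,
            "Unit": "Count",
            "StorageResolution": 1,  # high-resolution (1-second) metric
        }
    ],
)
```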
182
CloudWatch Dashboards
Dashboards give quick access to key metrics and alarms; they are global and can include graphs from different accounts; you can set auto-refresh (10s, 1m, 2m, 5m, 15m)
183
CloudWatch Logs List
``` ElasticBeanstalk:collection of logs from app ECS: collection from containers AWS Lambda: function logs VPC Flow Logs: VPC Specific Logs API Gateway: Route53: log DNS queries CloudTrail based on filter CloudWatch log agents: for example on EC2 machines ```
184
CloudWatch Agent and CloudWatch Log Agents
By default, no logs from your EC2 machine go to CloudWatch; you need to run a CloudWatch agent on the EC2 instance to push log files, and make sure the IAM permissions are correct. Can be done on premises as well. The Logs Agent is the older one; the Unified Agent is newer and can collect both metrics and logs
185
CloudWatch Alarms
Trigger a notification for any metric. States: OK, INSUFFICIENT_DATA, ALARM. Period: length of time in seconds to evaluate the metric. Targets: EC2 (stop, terminate, reboot)
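Not from the course - an illustrative boto3 sketch of an alarm on the EC2 system status check that uses the built-in recover action (the scenario in the next card). The instance ID and the region in the action ARN are hypothetical.

```
# Hypothetical instance ID; two bad 1-minute datapoints trigger recovery.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="recover-on-system-failure",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```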
186
Common CloudWatch Alarm situation with EC2
EC2 instance recovery: the instance is moved to another host. Once the alarm is triggered we do instance recovery AND notify via SNS. You keep the same private, public, and Elastic IPs, metadata, and placement group; instance store data is lost; non-root EBS volumes are kept
187
CloudWatch Events
Intercept events from AWS services (ex: EC2 instance state changes, S3), AND can intercept any API call with CloudTrail integration. A JSON payload is created from the event and passed to a target; example target types: Compute, Integration, Orchestration, Maintenance
188
CloudTrail
Provides governance, compliance, and audit for your AWS account. CloudTrail is enabled by default; you get a history of events/API calls made within your AWS account and can put the logs into CloudWatch Logs or S3. It records all calls made by the SDK, CLI, Console, IAM users, and IAM roles
189
CloudTrail Events Types
Management events: operations that are performed on resources in your AWS account. Data events: not logged by default; ex: S3 GetObject/DeleteObject, Lambda execution activity. There are also CloudTrail Insights events (see the next card)
190
CloudTrail Insights
So many logs and so much data can be confusing - CloudTrail Insights detects unusual activity in your account by continuously analyzing write events for unusual patterns. Events are stored for 90 days; to keep them longer, send them to S3 and then use Athena to analyze them
191
AWS Config
Helps with auditing and recording compliance of AWS service helps record configurations and changes over time questions that can be solved - do my buckets have public access, is there unrestricted SSH access to my security groups, how has ALB config changed over time can use SNS to alert changes store data in S3 (use Athena) can use AWS managed config rules, can make custom rules overview of configs and (analyze) compliance Remediation Actions available
192
STS
Allows limited and temporary access to AWS resources token is valid for up to one hour AssumeRole Assume RoleWithSaML AssumeRoleWithWebIdentity GetSessionToken Most common use cases: Using STS to Assume a Role Can also do Cross account access with STS
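Not from the course - an illustrative boto3 sketch of the most common use case above: assuming a role and using the temporary credentials for a cross-account call. The role ARN is hypothetical.

```
# Hypothetical role ARN; the credentials expire with the session.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAuditor",
    RoleSessionName="audit-session",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])
```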
193
Microsoft Active Directory
Centralized security management, create account, assign permissions Database of objects: User accounts, computers, printers, file shares, Security Groups objects are organized in trees, group of trees is a forest all computers are connected to Domain Controller allowing users to be accessible on any single machine
194
AWS Directory Services
AWS Managed Microsoft AD AD Connector Simple AD
195
Organizations
A global service to manage multiple AWS accounts; the main account is the master (management) account. Service Control Policies: restrict access to certain services (ex: in the production account, no access to service A). The organization's SCPs overrule the individual accounts' own rules in the sense that they set the maximum available permissions: if the SCP denies access to A, the account's IAM policies cannot grant it; if the SCP allows A, users still need an IAM policy that grants A
196
Migrating accounts from one organization to another
1) remove member account from org 2) invite to new org 3) accept invite master account migrate 1) remove all members 2) delete old org 3) repeat process above to invite and add
197
CloudHSM
KMS => AWS manages the software for encryption CloudHSM => AWS provisions encryption hardware Dedicated Hardware (HSM) You manage your own encryption keys entirely HSM is tamper resistant supports both symmetric and asymmetric Good to use with SSE-C encryption clusters spread multi AZ
198
Shield
AWS Shield Standard: free for every AWS customer, protection from DDoS. AWS Shield Advanced: a 24/7 response team to help you against DDoS, with higher fees
199
WAF
Firewall that protects your app from common web exploits at Layer 7 (HTTP). Deploy on ALB, API Gateway, CloudFront. Define a Web Access Control List: rules can include IP addresses, HTTP headers, HTTP body, or URL strings; protects from DDoS, SQL injection, and cross-site scripting (XSS, a common attack), and can use rate-based rules for DDoS
200
AWS Firewall manager
Manage rules in all accounts of an AWS Organization: WAF rules, AWS Shield Advanced, a common set of security rules
201
GuardDuty
Intelligent threat discovery to protect your AWS account; uses ML algorithms; one click to enable. Can set up CloudWatch Event rules to be notified in case of findings. It analyzes VPC Flow Logs, CloudTrail logs, and DNS logs (using ML), and findings can then be sent to SNS or Lambda
202
Amazon Inspector
Automated Security Assessments for EC2 instances only EC2 instances run assessment and then will send list of vulnerabilities of EC2
203
Macie
perform data security and data privacy Sensitive data PII will notify using cloud-watch events / events bridge
204
Shared Responsibility Model Diagram
Study more, it's at the end of the security section, short video, watch after studying the rest of security
205
CIDR, Private, Public IP
CIDRs are used for security group rules and networking in general - they help define an IP address range. 2 components: base IP & subnet mask. The base IP represents an IP contained in the range; the subnet mask can take 2 forms (e.g., /24 or 255.255.255.0) and basically determines how many additional values can follow the base IP. Ex: /32 allows 1 IP, /31 allows 2 IPs, /30 allows 4 IPs (2^0, 2^1, 2^2). BASE RULE: the range contains 2^(32 - prefix) IPs. Private IPs can only fall within certain reserved ranges; anything not in those ranges is a public IP
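Not from the course - a worked example of the base rule above using Python's standard ipaddress module: a /N CIDR contains 2 ** (32 - N) addresses, and the module also classifies private vs public IPs.

```
import ipaddress

for cidr in ["192.168.0.0/32", "192.168.0.0/30", "10.0.0.0/24", "10.0.0.0/16"]:
    net = ipaddress.ip_network(cidr)
    print(cidr, "->", net.num_addresses, "addresses")
    # /32 -> 1, /30 -> 4, /24 -> 256, /16 -> 65536

print(ipaddress.ip_address("10.33.1.5").is_private)   # True (10.0.0.0/8 range)
print(ipaddress.ip_address("54.12.9.1").is_private)   # False -> public IP
```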
206
Default VPC Walkthrough
This is the default VPC that comes with all new accounts new instances are launched into default VPC IF no subnet is specified Default VPC have internet connectivity and all instances have public IP we also get a public and private DNS name
207
VPC overview
VPC = Virtual Private Cloud can have multiple VPCs in a region max CIDR per VPC is 5. For each CIDR: - min size /28: - max size /16: VPC is private so only private IP range is allowed
208
Subnet overview
subnets are tied to specific AZs can have public and private subnets AWS reserves 5 IPs address in each subnet these 5 IPs are not available for use and cannot be assigned to an instance
209
Internet Gateways
Help instances and VPCs connect to internet scales horizontally and is HA must be created separately from VPC one VPC can only be attached to one IGW
210
Route Table
Route table allows our EC2 instances to link/connect to IGW so that they may connect to the internet
211
NAT instances and NAT gateways
NAT instances allow instances in private subnets to connect to the internet. The NAT instance must be launched in a public subnet, must have the EC2 source/destination check flag disabled, and must have an Elastic IP; there must be a route in the private subnet's route table pointing to the NAT instance. BUT: not highly available, not multi-AZ (you would need to create an ASG across AZs), and you must manage security groups and rules (inbound and outbound). Because of this we use the newer NAT Gateway
212
NAT Gateway
AWS-managed NAT: higher bandwidth, better availability; pay by the hour for usage and bandwidth. The NAT Gateway is created in a specific AZ and cannot be used by an instance in the subnet it lives in (only by instances in other subnets); requires an IGW; no security group to manage/required
213
DNS resolution in VPC
enableDnsSupport - default is true - decides whether DNS resolution is supported for the VPC; if true, it queries the AWS DNS server at 169.254.169.253. enableDnsHostnames - false by default for a new VPC, true by default for the default VPC - assigns public hostnames. If you use custom DNS domain names in a private hosted zone in Route 53, both settings must be enabled
214
Network ACLs and Security Group
The security group wraps the EC2 instance; the subnet NACL sits outside both. NACLs are stateless; SGs are stateful (if inbound traffic is allowed in, the response goes out regardless of outbound rules - and vice versa). NACLs are a firewall controlling traffic to and from a subnet; the default NACL allows everything in and out. One NACL per subnet; new subnets are assigned the default NACL; a newly created, non-default NACL denies everything
215
VPC Peering
connect 2 VPC make them behave as if they were in the same network must not have overlapping CIDR VPC peering connection is not transitive can work inter region and cross account
216
VPC Endpoints
Situation: what if we wanted to have all traffic be private (going through router and IGW goes through internet) we want to use endpoints endpoints allow you to connect to AWS services using a private network scale horizontally and are redundant if there is an issue, check these two things: Check DNS setting resolution in your VPC, Check Route Tables
217
VPC Flow Logs + Athena
Flow Logs: VPC flow logs, subnet flow logs, Elastic Network Interface flow logs. They help monitor and troubleshoot connectivity issues; flow log data can go to S3 / CloudWatch Logs
218
Bastion Hosts
We can use a Bastion Host to SSH into our private instance The bastion is in the public subnet which is then connected to all other private subnets bastion host security group must be tightened exam question: Make sure the Bastion Host only has port 22 traffic from the IP you need, not from the security groups of your other instances
219
Site to Site VPN
Corporate data center: a customer gateway on the customer side and a Virtual Private Gateway (VPN gateway) on the VPC side; together they are used to create a site-to-site VPN
220
Direct Connect (DX)
Provides a dedicated private connection from a remote network to your VPC. A dedicated connection must be set up between your data center and an AWS Direct Connect location, and you need to set up a Virtual Private Gateway on your VPC. Access S3 and private EC2 on the same connection. Use cases: increased bandwidth and hybrid setups. Takes about a month to set up
221
AES 128 vs AES 256
AES-128 is faster and more efficient and less likely to have a full attack developed against it (due to a stronger key schedule). AES-256 is more resistant to brute-force attacks and is only weak against related-key attacks (which should never happen anyway).
222
When a new subnet is created, how does the connection to a Route Table work?
When a new subnet is created, it is automatically associated with the main route table.
223
Backup and restore
This is the lowest cost DR approach that simply entails creating online backups of all data and applications
224
Warm standby
The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud.
225
pilot light
A small part of your infrastructure is always running simultaneously syncing mutable data (as databases or documents), while other parts of your infrastructure are switched off and used only during testing. Unlike a backup and recovery approach, you must ensure that your most critical core elements are already configured and running in AWS (the pilot light). When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core.
226
Multi-site
A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration.
227
If you want to store and access credentials what should you use: IAM policy, KMS, AWS Key Management Service, or Systems Manager Parameter Store?
Systems Manager Parameter Store: it provides secure, hierarchical storage for configuration data management and secrets management. You cannot store credentials in KMS (it is used for creating and managing encryption keys) or in IAM policies.
228
How does a IAM policy allow the service it is applied to to have access to the service it is requesting access to (i.e. what are the two forms of access IAM policies grant):
Within an IAM policy you can grant either programmatic access or AWS Management Console access to Amazon S3 resources.
229
If there is only one ALB, does latency based routing have an impact?
No! "With only one ALB latency-based record serves no purpose."
230
Amazon EMR
Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3.
231
How to get CloudTrail to track all events from all AWS accounts
You can create a CloudTrail trail in the management account with the organization trails option enabled and this will create the trail in all AWS accounts within the organization.
232
CloudTrail Management Events vs Data Events
Data events provide visibility into the resource operations performed on or within a resource (also known as data plane operations); they are often high-volume activities. Management events provide visibility into management operations that are performed on resources in your AWS account (also known as control plane operations).
233
Amazon EventBridge
You can use EventBridge to detect and react to changes in the status of AWS Config events. You can create a rule that runs whenever there is a state transition, or when there is a transition to one or more states that are of interest. Then, based on the rules you create, Amazon EventBridge invokes one or more target actions when an event matches the values you specify in a rule. Depending on the type of event, you might want to send notifications, capture event information, take corrective action, initiate events, or take other actions.
234
API Gateway Throttling and Bursting Situations
You can throttle and monitor requests to protect your backend. Resiliency through throttling rules based on the number of requests per second for each HTTP method (GET, PUT). Throttling can be configured at multiple levels including Global and Service Call. When request submissions exceed the steady-state request rate and burst limits, API Gateway fails the limit-exceeding requests and returns 429 Too Many Requests error responses to the client.
235
AWS Batch
AWS Batch Multi-node parallel jobs enable you to run single jobs that span multiple Amazon EC2 instances. With AWS Batch multi-node parallel jobs, you can run large-scale, tightly coupled, high performance computing applications and distributed GPU model training without the need to launch, configure, and manage Amazon EC2 resources directly.
236
When you're using an Edge device, the data migration process has the following stages:
1. You use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally and move it to an Edge device.
2. You ship the Edge device or devices back to AWS.
3. After AWS receives your shipment, the Edge device automatically loads its data into an Amazon S3 bucket.
4. AWS DMS takes the files and migrates the data to the target data store. If you are using change data capture (CDC), those updates are written to the Amazon S3 bucket and then applied to the target data store.
237
For all custom metrics (metrics CloudWatch does not provide by default), you must run the CloudWatch agent on the EC2 instance to send the data: T or F?
T
238
Are there lifecycle policies for EFS?
Yes there are! Ex: AFTER_7_DAYS lifecycle policy
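A minimal boto3 sketch of applying the AFTER_7_DAYS policy (the file system ID is hypothetical):
    import boto3

    efs = boto3.client("efs")

    # Move files not accessed for 7 days into EFS Infrequent Access
    efs.put_lifecycle_configuration(
        FileSystemId="fs-0123456789abcdef0",
        LifecyclePolicies=[{"TransitionToIA": "AFTER_7_DAYS"}],
    )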
239
What does Athena analysis mean?
Amazon Athena lets you run standard SQL queries directly against data stored in S3; you can search the data and use those queries to analyze it without loading it into a separate database.
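A minimal boto3 sketch of running such a query (database, table, and output bucket are hypothetical):
    import boto3

    athena = boto3.client("athena")

    resp = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
        QueryExecutionContext={"Database": "weblogs"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
    )
    print(resp["QueryExecutionId"])  # poll get_query_execution with this ID for results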
240
Elasticsearch details: what kind of data store is Elasticsearch?
It is a NoSQL data store that is document-oriented, scalable, and schemaless by default. While joins are primarily an SQL concept, they are also important in the NoSQL world; however, SQL-style joins are not supported in Elasticsearch as first-class citizens.
241
RAID 0 vs RAID 1
RAID 0 stripes data across two volumes with no copies (no redundancy) but gives very fast reads and writes. RAID 1 mirrors the data across two volumes (one full copy on each), providing redundancy but no performance acceleration.
242
Aurora Multi-Master works across more than one Region: True or False
F. Multi-Master only works within a Region; it does not work across Regions.
243
Aurora Global RPO and RTO
This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan
244
CloudWatch Logs cannot send notifications; it only tracks information that you can then view and analyze: T or F?
CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify. F
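A minimal boto3 sketch of that pattern: a metric filter counts errors in a log group and an alarm notifies an SNS topic when the count crosses a threshold (log group, namespace, and topic ARN are hypothetical):
    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    # Count occurrences of "ERROR" in an application log group
    logs.put_metric_filter(
        logGroupName="/myapp/production",
        filterName="app-errors",
        filterPattern="ERROR",
        metricTransformations=[{
            "metricName": "AppErrorCount",
            "metricNamespace": "MyApp",
            "metricValue": "1",
        }],
    )

    # Notify via SNS when the error count exceeds the threshold
    cloudwatch.put_metric_alarm(
        AlarmName="app-error-rate",
        Namespace="MyApp",
        MetricName="AppErrorCount",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=10,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )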
245
Can you use CloudWatch Events for tracking logs and then sending notifications based on those logs?
Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Though you can generate custom application-level events and publish them to CloudWatch Events this is not the best tool for monitoring application logs.
246
CloudTrail is only used for API calls?
CloudTrail is used for monitoring API activity on your account, not for monitoring application logs. T
247
CloudWatch Events
Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Though you can generate custom application-level events and publish them to CloudWatch Events this is not the best tool for monitoring application logs.
248
What do scripts and AWS CloudFormation do?
Both are methods of automatically creating resources.
249
Does Management Console automatically create resources?
Using the AWS Management Console is not a method of automatically creating the resources; AWS CloudFormation does that.
250
Can you connect Fargate to Lustre?
No - "It is not supported to connect Fargate to FSx for Lustre."
251
SCP use case for limiting EC2 instance size launches
"Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. An SCP defines a guardrail, or sets limits, on the actions that the account's administrator can delegate to the IAM users and roles in the affected accounts. In this case the Solutions Architect can use an SCP to define a restriction that denies the launch of large EC2 instances. The SCP can be applied to all accounts, and this will ensure that even those users with permissions to launch EC2 instances will be restricted to smaller EC2 instance types."
252
EFS Lifecycle policy limit
With EFS you can transition files to EFS IA after a file has not been accessed for a specified period of time with options up to 90 days. You cannot transition based on an age of 2 years.
253
S3 availability zones
For S3 Standard, S3 Standard-IA, and S3 Glacier storage classes, your objects are automatically stored across multiple devices spanning a minimum of three Availability Zones, each separated by miles across an AWS Region.
254
How soon can we move to Standard IA?
Though there is no minimum storage duration for S3 Standard, you cannot transition objects to Standard-IA until they are at least 30 days old. This restriction is shown when you try to create a lifecycle rule with a shorter transition period.
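A minimal boto3 sketch of the earliest allowed transition (the bucket name is hypothetical):
    import boto3

    s3 = boto3.client("s3")

    # Transition objects to Standard-IA 30 days after creation (the minimum allowed)
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }]
        },
    )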
255
Can you use AWS WAF on a NLB?
No: you cannot use AWS WAF with a network load balancer.
256
WAF applications on Load Balancers
The AWS Web Application Firewall (WAF) is available on the Application Load Balancer (ALB). You can use AWS WAF directly on Application Load Balancers (both internal and external) in a VPC, to protect your websites and web services.
257
What is more expensive Global Accelerator or CloudFront?
Global Accelerator. It is an "expensive way of getting the content closer to users compared to using CloudFront."
258
RDS Encryption from unencrypted master
You can't modify an existing unencrypted Amazon RDS DB instance to make the instance encrypted, and you can't create an encrypted read replica from an unencrypted instance.
259
A bucket policy can be applied to only allow traffic from a VPC endpoint: T or F
True
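A minimal sketch of such a bucket policy applied with boto3, denying access unless requests arrive through a specific VPC endpoint (bucket name and endpoint ID are hypothetical):
    import boto3, json

    s3 = boto3.client("s3")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnlessFromVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::my-example-bucket",
                         "arn:aws:s3:::my-example-bucket/*"],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
        }],
    }

    s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))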
260
A popular use case of ElastiCache Redis
ElastiCache Redis has a good use case for autocompletion
261
S3 is NOT ideal for dynamic data: T or F?
T "dynamic data that is presented by [an] application is unlikely to be stored in an S3 bucket."
262
What kind of storage does an instance store provide?
An instance store provides temporary block-level storage
263
FSx description
"With Amazon FSx, you can launch highly durable and available file systems that can span multiple availability zones (AZs) and can be accessed from up to thousands of compute instances using the industry-standard Server Message Block (SMB) protocol." - Highly Durable - Multi AZ - Up to thousands of instances
264
EC2 memory usage is a standard metric within CloudWatch for EC2: T or F?
F. "There is no standard metric for collecting EC2 memory usage in CloudWatch, [so] the data will not already exist there to be retrieved."
265
Who manages the keys with SSE-S3?
With SSE-S3, Amazon manages the keys for you.
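A minimal boto3 sketch of requesting SSE-S3 on upload (bucket and key are hypothetical):
    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to encrypt the object at rest with S3-managed keys (SSE-S3)
    s3.put_object(
        Bucket="my-example-bucket",
        Key="data.json",
        Body=b"{}",
        ServerSideEncryption="AES256",
    )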
266
How can you connect to an S3 bucket: HTTPS or HTTP?
You cannot connect to an Amazon S3 static website endpoint using HTTPS - only HTTP.
267
Can you connect to S3 with CloudFront using HTTPS?
You can create an Amazon CloudFront distribution that uses an S3 bucket as the origin. This will allow you to serve the static content using the HTTPS protocol.
268
To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution using one of these configurations:
Using a REST API endpoint as the origin with access restricted by an origin access identity (OAI). Using a website endpoint as the origin with anonymous (public) access allowed. Using a website endpoint as the origin with access restricted by a Referer header.
269
ECS service has auto scaling abilities: T or F?
T "Amazon ECS uses the AWS Application Auto Scaling service to scales tasks. This is configured through Amazon ECS using Amazon ECS Service Auto Scaling."
270
FSx for Lustre use cases and description
Amazon FSx for Lustre provides a high-performance file system optimized for fast processing of workloads such as machine learning, high-performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA).
271
FSx for Lustre and S3
"When linked to an Amazon S3 bucket, FSx for Lustre transparently presents objects as files, allowing you to run your workload without managing data transfer from S3."
272
Can you use client-side encryption with S3 managed encryption keys?
No "You cannot use S3 managed keys with client-side encryption."
273
NFS File Storage systems
The solution must use NFS file shares to access the migrated data without code modification. This means you can use either Amazon EFS or AWS Storage Gateway – File Gateway. Both of these can be mounted using NFS from on-premises applications.
274
AWS Storage Gateway: File vs Volume
FILE: "An AWS Storage Gateway File Gateway provides your applications a file interface to seamlessly store files as objects in Amazon S3, and access them using industry standard file protocols. This removes the files from the on-premises NAS device and provides a method of directly mounting the file share for on-premises servers and clients." VOLUME: "A volume gateway uses block-based protocols. In this case we are replacing a NAS device which uses file-level protocols so the best option is a file gateway."
275
NAS device uses what kind of protocols?
file-level protocols
276
Redshift use cases
Amazon Redshift is a columnar data warehouse DB that is ideal for running long, complex queries. Redshift can also improve performance for repeat queries by caching the result and returning the cached result when queries are re-run. Use cases: long complex queries AND repeat queries (results are cached when queries are re-run).
277
Presigned URL details
A presigned URL gives you access to the object identified in the URL. When you create a presigned URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The presigned URLs are valid only for the specified duration. That is, you must start the action before the expiration date and time.
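A minimal boto3 sketch of generating a presigned PUT URL (bucket and key are hypothetical):
    import boto3

    s3 = boto3.client("s3")

    # URL that allows a PUT upload of one specific key for 1 hour
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "my-example-bucket", "Key": "uploads/report.csv"},
        ExpiresIn=3600,  # seconds; the URL is rejected after this time
    )
    print(url)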
278
Who can write in an EFS file system
"After creating a file system, by default, only the root user (UID 0) has read-write-execute permissions. For other users to modify the file system, the root user must explicitly grant them access. One common use case is to create a “writable” subdirectory under this file system root for each user you create on the EC2 instance and mount it on the user’s home directory. All files and subdirectories the user creates in their home directory are then created on the Amazon EFS file system"
279
Elastic Fabric Adapter
An Elastic Fabric Adapter is an AWS Elastic Network Adapter (ENA) with added capabilities. The EFA lets you apply the scale, flexibility, and elasticity of the AWS Cloud to tightly-coupled HPC apps. It is ideal for tightly coupled apps as it uses the Message Passing Interface (MPI).
280
Another way to look at the purpose/application of Global Accelerator
Global Accelerator is a "service that is used for directing users to different instances of the application in different regions based on latency."
281
You can use Self Signed Certificates for RDS: T or F?
F. You cannot use self-signed certificates with RDS.
282
What is more durable S3 or EFS?
S3
283
What is more cost effective S3 or EFS?
S3