test questions Flashcards
Cold Attach
Warm Attach
Hot Attach
You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped. However, you can’t detach the primary network interface. You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets. When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves. Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a secondary private IPv4 address. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
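The hot-attach call described above can be sketched with boto3. This is a minimal sketch, assuming placeholder instance and ENI IDs; device index 0 is the primary interface (which can never be detached), so secondary interfaces start at 1.

```python
def attach_params(instance_id, eni_id, device_index=1):
    """Build the parameters for the EC2 AttachNetworkInterface API.

    DeviceIndex 0 is the primary network interface and cannot be
    detached, so a secondary (hot/warm-attached) ENI uses index >= 1.
    """
    return {
        "InstanceId": instance_id,
        "NetworkInterfaceId": eni_id,
        "DeviceIndex": device_index,
    }

# With real credentials you would then run (not executed here):
#   import boto3
#   resp = boto3.client("ec2").attach_network_interface(
#       **attach_params("i-0123456789abcdef0", "eni-0123456789abcdef0"))
```

A hot attach uses the same call as a warm attach; the only difference is whether the target instance is running or stopped.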
Changing the Tenancy of an Instance
Dedicated - hardware that's dedicated to a single customer
Host - Dedicated Hosts give you additional visibility and control over how instances are placed on a physical server, and you can reliably use the same physical server over time.
Changing tenancy from dedicated to host (and vice versa) can happen after the instance is stopped post-launch; the change takes effect at the next launch. A WARM approach.
Warm standby
Pilot Light
Multi-site/Hot Standby
backup and restore
Pilot Light: provision only the critical part in the backup site, e.g. a replica DB instance.
This method keeps "critical applications" and data at the ready so that the environment can be quickly brought up if needed.
———
Warm Standby:
A "smaller-scale version" of the resources is dedicated to this; once failover occurs, it scales up. This method keeps a duplicate version of your business's core elements running on standby at all times, which makes for a little downtime and an almost seamless transition.
Little Downtime
———
Multi-Site Solution:
NO DOWNTIME; Also known as a Hot Standby, this method “fully replicates” your company’s
data/applications between two or more active locations and splits your traffic/usage between them. If a disaster strikes, everything is simply rerouted to the unaffected area, which means you’ll suffer almost zero downtime. However, by running two separate environments simultaneously, you will obviously incur much higher costs.
EBS IOPS AND THROUGHPUT LEVELS
EBS (gp2): up to 16,000 IOPS per volume, 3 IOPS/GiB
EBS (io1): up to 64,000 IOPS per volume, 50:1 IOPS-to-GiB ratio, 1,000 MiB/s per volume
EBS (st1): up to 500 MB/s throughput
EBS (sc1): up to 250 MB/s throughput
EBS MAX per instance: 80,000 IOPS!!!! Max 2,375 MB/s per instance
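The gp2 figure follows a simple formula: baseline IOPS is 3 per GiB, with a floor of 100 and a per-volume cap of 16,000. A rough sketch:

```python
def gp2_iops(size_gib: int) -> int:
    # gp2 baseline: 3 IOPS per provisioned GiB,
    # minimum 100 IOPS, capped at 16,000 per volume.
    return min(16_000, max(100, 3 * size_gib))

# e.g. a 100 GiB volume gets 300 IOPS; anything past ~5,334 GiB hits the cap.
```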
EBS VS INSTANCE STORE IOPS
EBS
Require up to 64,000 IOPS and 1,000 MiB/s per volume
Require up to 80,000 IOPS and 2,375 MB/s per instance
When to use Instance Store
Great value, they’re included in the cost of an instance.
More than 80,000 IOPS and 2,375 MB/s
If you need temporary storage, or can handle volatility.
instance vs ebs general
Instance Store
Direct (local) attached storage
Super fast
Ephemeral storage or temporary storage
Elastic Block Store (EBS)
Network attached storage
Volumes delivered over the network
Persistent storage lives on past the lifetime of the instance
Creating a Canary
CloudWatch
The purpose of a canary deployment is to reduce the risk of deploying a new version that impacts the workload. The method will incrementally deploy the new version, making it visible to new users in a slow fashion.
CloudWatch Synthetics (announced at AWS re:Invent 2019) allows you to monitor your sites, API endpoints, web workflows, and more. … as you create your canaries, you can set CloudWatch alarms so that you are notified when thresholds based on performance, behavior, or site integrity are crossed.
aws import export
AWS Import/Export is a service you can use to transfer large amounts of data from physical storage devices into AWS. You mail your portable storage devices to AWS and AWS Import/Export transfers data directly off of your storage devices using Amazon’s high-speed internal network.
Application Load Balancer
A listener checks for connection requests from clients, using the protocol and port that you configure.
Each rule consists of a priority, one or more actions, and one or more conditions. When the conditions for a rule are met, then its actions are performed. You must define a default rule for each listener, and you can optionally define additional rules.
seventh layer of the Open Systems Interconnection (OSI) model
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
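Listener rule evaluation (priority order, first match wins, fall through to a default action) can be modeled in a few lines. This is a toy sketch: the patterns loosely mirror ALB path patterns and the target group names are made up for illustration.

```python
from fnmatch import fnmatch

# Rules as (priority, path pattern, target group); lower priority wins first.
RULES = [
    (10, "/api/*",    "api-target-group"),
    (20, "/images/*", "static-target-group"),
]
DEFAULT_TARGET = "web-target-group"

def route(path: str) -> str:
    """Return the target group for a request path, ALB-rule style."""
    for _priority, pattern, target in sorted(RULES):
        if fnmatch(path, pattern):
            return target
    return DEFAULT_TARGET
```

This is the essence of path-based routing: one load balancer front door, many small services behind it.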
benefits of application load balancer
Benefits of migrating from a Classic Load Balancer
Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
Support for path-based routing. You can configure rules for your listener that forward requests based on the URL in the request. This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
Support for host-based routing. You can configure rules for your listener that forward requests based on the host field in the HTTP header. This enables you to route requests to multiple domains using a single load balancer.
Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
Support for routing requests to multiple applications on a single EC2 instance. You can register each instance or IP address with the same target group using multiple ports.
Support for redirecting requests from one URL to another.
Support for returning a custom HTTP response.
Support for registering targets by IP address, including targets outside the VPC for the load balancer.
Support for registering Lambda functions as targets.
Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
Support for containerized applications. Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port. This enables you to make efficient use of your clusters.
Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level. Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.
Access logs contain additional information and are stored in compressed format.
Improved load balancer performance.
TLS and SSL with load balancer
Only at layer 7, which is the Classic Load Balancer or, more recently, the ALB (Application Load Balancer)
DAX vs elasticache
Elasticache is a cache engine based on Memcached or Redis, and it’s usable with RDS engines and DynamoDB.
DAX is AWS technology and it’s usable only with DynamoDB.
Amazon ElastiCache is categorized as Data Replication, Database as a Service (DBaaS), and Key Value Databases
Cache frequently accessed data in-memory.
Amazon DynamoDB Accelerator (DAX) is categorized as Web Server Accelerator
Delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second.
dax
Correct. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required.
ElastiCache
Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. There are two types of ElastiCache available: Memcached and Redis. Here is a good overview and comparison between them: https://aws.amazon.com/elasticache/redis-vs-memcached/
vCPU limit On-Demand Instances
There is a limit on the number of running On-Demand Instances per AWS account per Region. On-Demand Instance limits are managed in terms of the number of virtual central processing units (vCPUs) that your running On-Demand Instances are using, regardless of the instance type
- before you had limits for each EC2 instance type. That’s a nightmare to manage if you run different types of instances for different types of load. At scale, all you care about is computing power
- each instance type comes with a certain number of vCPU (see here: https://ec2instances.info/)
- now, instead of so many limits for the so many types of EC2 instances, you get just one limit to manage your entire EC2 fleet, and that’s the vCPU limit, which is computed thanks to mapping the instance type you’re currently using to the number of vCPU. This allows you to run mixed workloads of on-demand with different instance types without shooting yourself in the foot and hitting some random instance limit
Amazon Redshift clusters
Amazon Redshift is a data warehouse product
An Amazon Redshift cluster consists of nodes. Each cluster has a leader node and one or more compute nodes. The leader node receives queries from client applications, parses the queries, and develops query execution plans. The leader node then coordinates the parallel execution of these plans with the compute nodes and aggregates the intermediate results from these nodes. It then finally returns the results back to the client applications.
Compute nodes execute the query execution plans and transmit data among themselves to serve these queries. The intermediate results are sent to the leader node for aggregation before being sent back to the client applications. For more information about leader nodes and compute nodes, see Data warehouse system architecture in the Amazon Redshift Database Developer Guide.
security group limits
5 per network interface (effectively 5 per instance) by default
You can have 60 inbound and 60 outbound rules per security group (making a total of 120 rules). This quota is enforced separately for IPv4 rules and IPv6 rules; for example, a security group can have 60 inbound rules for IPv4 traffic and 60 inbound rules for IPv6 traffic.
CloudWatch default metrics
CPU
DISK
NETWORK
CRON JOBS
Scheduled tasks
Amazon ECS supports the ability to schedule tasks on either a cron-like schedule or in response to CloudWatch Events. This is supported for Amazon ECS tasks using both the Fargate and EC2 launch types.
SES VS SNS
SES is for BULK EMAIL
SNS is for automation between decoupled services
SNS can deliver to phones (SMS), SQS, mobile push, HTTP endpoints, etc.
Amazon SES belongs to “Transactional Email” category of the tech stack, while Amazon SNS can be primarily classified under “Mobile Push Messaging”.
What is Amazon SES? Bulk and transactional email-sending service. Amazon SES eliminates the complexity and expense of building an in-house email solution or licensing, installing, and operating a third-party email service. The service integrates with other AWS services, making it easy to send emails from applications being hosted on services such as Amazon EC2.
What is Amazon SNS? Fully managed push messaging service. Amazon Simple Notification Service makes it simple and cost-effective to push to mobile devices such as iPhone, iPad, Android, Kindle Fire, and internet connected smart devices, as well as pushing to other distributed services. Besides pushing cloud notifications directly to mobile devices, SNS can also deliver notifications by SMS text message or email, to Simple Queue Service (SQS) queues, or to any HTTP endpoint.
RDS compability with failover
MariaDB, MySQL, Oracle, and PostgreSQL
Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for MariaDB, MySQL, Oracle, and PostgreSQL DB instances use Amazon’s failover technology. SQL Server DB instances use SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs).
What can an EBS volume do when snapshotting the volume is in progress
The volume can be used normally while the snapshot is in progress.
You can create a point-in-time snapshot of an EBS volume and use it as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental; the new snapshot saves only the blocks that have changed since your last snapshot. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
ENI attachments time
Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.
when to use instance store over EBS
Beyond 80,000 IOPS and 2,375 MB/s per instance
TEMPORARY
STATELESS
NEEDS HIGH IOPS
when to install CW agent, what information can be attained
MEMORY AND SPECIFIC METRICS
AWS provide a registry of open data sets , how much cost?
FREE
Snapshots and deleting old snapshot of first full snapshot, explain
Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to create volumes.
Data that was present on a volume, held in an earlier snapshot or series of snapshots, that is subsequently deleted from that volume at a later time, is still considered unique data of the earlier snapshots. This unique data is not deleted from the sequence of snapshots unless all snapshots that reference the unique data are deleted.
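The incremental-snapshot bookkeeping above can be modeled with sets of block references. This is a toy sketch (block names are invented): each snapshot references the blocks needed to restore the volume at that point in time, and deleting a snapshot only frees blocks no remaining snapshot references.

```python
snapshots = {
    "snap1": {"b1", "b2", "b3"},   # initial full snapshot
    "snap2": {"b1", "b2", "b4"},   # incremental: b3 removed from volume, b4 added
}

def stored_blocks(snapshots):
    """Blocks that must be kept: the union of all blocks any snapshot references."""
    kept = set()
    for blocks in snapshots.values():
        kept |= blocks
    return kept

# Deleting snap1 frees only b3: b1 and b2 are still referenced by snap2,
# so the most recent snapshot alone can still restore the full volume.
del snapshots["snap1"]
```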
AWS Organizations service control policy and an IAM policy?
AWS Organizations SCPs don’t replace associating IAM policies within an AWS account.
IAM policies allow or deny access to AWS services or API actions that work with IAM. An IAM policy can be applied only to IAM identities (users, groups, or roles). IAM policies can’t restrict the AWS account root user.
You can use SCPs to allow or deny access to AWS services for individual AWS accounts with AWS Organizations member accounts, or for groups of accounts within an organizational unit (OU). The specified actions from an attached SCP affect all IAM identities including the root user of the member account.
Now, using SCPs, you can specify Conditions, Resources, and NotAction to deny access across accounts in your organization or organizational unit. For example, you can use SCPs to restrict access to specific AWS Regions, or prevent your IAM principals from deleting common resources, such as an IAM role used for your central administrators. You can also define exceptions to your governance controls, restricting service actions for all IAM entities (users, roles, and root) in the account except a specific administrator role.
bucket policy vs ACL
An S3 ACL is a sub-resource that’s attached to every S3 bucket and object. It defines which AWS accounts or groups are granted access and the type of access.
A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner creates.
When to use docker vs lambda
AUTOMATION FOR LONGER-RUNNING WORKLOADS: Docker is a software container platform. It lets you package all your tools into one isolated container, which runs as a service, e.g. Nginx, MySQL server, Redis.
AUTOMATION AND SCALING: AWS Lambda is FaaS (Function as a Service); it lets you run code without provisioning or managing servers.
elastic IP
Limited per account, per Region (default quota of 5)
Costs money when not attached to a running instance
Enhanced networking needs
Enhanced Networking enables you to get significantly higher packet per second (PPS) performance, lower network jitter and lower latencies. This feature uses a new network virtualization stack that provides higher I/O performance and lower CPU utilization compared to traditional implementations.
Cluster placement groups
Instances grouped close together on the same underlying hardware for low-latency networking
An Availability Zone usually allows a max of about 20 EC2 On-Demand instances by default (now expressed as vCPU limits, but roughly the same)
MAKE SURE ALL INSTANCES ARE THE SAME TYPE AND SIZE; launching them together in one request gives the best chance of landing on the same hardware
This means if you have a set and need more, you may need to terminate and relaunch the entire set, because all instances must be on the same underlying hardware.
BGP routing
allowing dynamic routing for VPN connections
We recommend that you use BGP-capable devices, when available, because the BGP protocol offers robust liveness detection checks that can assist failover to the second VPN tunnel if the first tunnel goes down. Devices that don’t support BGP may also perform health checks to assist failover to the second tunnel when needed.
Auto scaling groups and Cloud-init
When using Auto Scaling and automated EC2 creation, it's best to pass commands via cloud-init (user data) to access files from S3; as a security procedure, make sure a role is used, since a role can be revoked more easily if needed.
The cloud-init package configures specific aspects of a new Amazon Linux instance when it is launched; most notably, it configures the .ssh/authorized_keys file for the ec2-user so you can log in with your own private key. For more information, see cloud-init.
Site to site VPN, can it connect two VPC’s?
NO
DynamoDB and read/write capacity.
You can set RCU limits to cap speed; increase the limit to increase consumption and allow increased scaling.
Amazon DynamoDB has two read/write capacity modes for processing reads and writes on your tables:
Read capacity unit: 4 KB per read
Write capacity unit: 1 KB per write
On-demand
On-Demand Mode- pay as you go
Thousands of operations per second
pay as you go
for unknown workload that is unpredictable.
Peak Traffic and Scaling, the scaling depends on YOUR PEAK TRAFFIC, so it scales to your PEAK load!!!
provisioned- just pay before
this is throttled, system tries to maintain capacity
specify reads and writes
autoscaling can change in response to changes
good for predictable, consistent, forecasted workload
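The 4 KB / 1 KB unit sizes above turn into simple ceiling arithmetic when sizing provisioned capacity. A minimal sketch (standard DynamoDB accounting: one RCU covers a 4 KB strongly consistent read per second, eventually consistent reads cost half, one WCU covers a 1 KB write per second):

```python
import math

def rcu(item_kb: float, strongly_consistent: bool = True) -> float:
    """Read capacity units consumed per read of an item of item_kb kilobytes."""
    units = math.ceil(item_kb / 4)          # rounded up to 4 KB chunks
    return units if strongly_consistent else units / 2

def wcu(item_kb: float) -> int:
    """Write capacity units consumed per write: rounded up to 1 KB chunks."""
    return math.ceil(item_kb / 1)

# e.g. a 6 KB item: 2 RCU strongly consistent, 1 RCU eventually consistent, 6 WCU.
```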
POSIX permissions
EFS is POSIX-compliant and uses the NFSv4 protocol; SHARED
EBS is block storage, attached in only one place at a time; NOT SHARED
SSE KMS vs Client and SSE C
SSE-S3, CMK
SSE-KMS: AWS allows users to create and manage keys, but AWS manages encryption; SERVER SIDE
-KMS can perform cryptographic operations itself.
-AWS KMS encrypts only the object data. Any object metadata is not encrypted.
FIPS 140-2 Regional service
——-
SSE-C: the client manages the keys; AWS manages encryption and decryption. S3 encrypts the data
S3 services manages the actual encryption and decryption
———
SSE-S3 AES256 AWS manage key and encryption, SERVER SIDE, encryption happens in S3
S3 generates, fully manages, and rotates the master key automatically.
A data key is used to encrypt each object; the encrypted data key is stored together with the object
————–
Client- AWS sees nothing
CMK: customer master key, managed by KMS, backed by hardware (HSMs), used for encryption
Shield vs Shield advanced barriers
Anything including application load balancer and below needs ADVANCED
Anything that starts at Route 53 and CloudFront uses the FREE standard tier
cloudfront without route 53 is difficult
Fanout use case
SNS has fanout to send multiple requests for SQS queues.
s3 website
Static content only; if the site is static you can scale this much more easily and cheaply, and EC2 is not needed. EC2 is only needed for dynamic pages.
Multicast networking
In computer networking, multicast is group communication where data transmission is addressed to a group of destination computers simultaneously.
build servers in ASG/LT group and use OS?
dead-letter queues
Messages need to be processed; sometimes when they error, the "process complete" acknowledgment never occurs, which puts the message back into the main queue. Such messages are best routed to a separate queue called a DEAD-LETTER QUEUE.
Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can’t be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed. For information about creating a queue and configuring a dead-letter queue for it using the Amazon SQS console
egress gateway vs NAT gateway
EGRESS IS ONLY FOR IPV6
NAT GATEWAY ALLOWS INTERNAL IPV4 TO NAT TRANSLATE TO CONNECT ONLINE
Cloud HSM
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud
CloudHSM does not natively integrate with AWS services by design and uses industry-standard APIs:
PKCS#11
Java Cryptography Extensions (JCE)
Microsoft CryptoNG (CNG) libraries
INTEGRATION:
With KMS, it is used as a custom Key store
Not HA by default; needs an endpoint (ENI) in a subnet of your VPC
Cloud HSM Use Cases
No native AWS integration with AWS products. You can’t use S3 SSE with CloudHSM.
Can offload the SSL/TLS processing from webservers.
CloudHSM is much more efficient at these encryption processes.
Oracle Databases can use CloudHSM to enable transparent data encryption (TDE)
Can protect the private keys of an issuing certificate authority.
Anything that needs to interact with non-AWS products.
CMK and KMS and CUSTOM key stores
Customer master keys can be generated in CloudHSM
and used in KMS to create customer-owned keys.
A custom key store is supported by KMS and backed by CloudHSM; KMS generates and stores NON-extractable encrypted keys.
CUSTOMER OWNED AND CUSTOMER MANAGED
However, you might consider creating a custom key store if your organization has any of the following requirements:
Key material cannot be stored in a shared environment.
Key material must be backed up in multiple AWS Regions.
Key material must be subject to a secondary, independent audit path.
The HSMs that generate and store key material must be certified at FIPS 140-2 Level 3.
Amazon MQ
when you are currently using messaging systems outside of amazon and want to migrate into AWS
It supports industry-standard APIs and protocols
switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications
Enhanced VPC Routing
FOR REDSHIFT ONLY
ALLOWS CUSTOM ENDPOINTS
WITHOUT IT, YOU CANNOT CAPTURE VPC FLOW LOGS OF REDSHIFT CLUSTER TRAFFIC
Separation of duties for clusters
Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC. By using enhanced VPC routing, you can use standard VPC features, such as VPC security groups, network access control lists (ACLs), VPC endpoints, VPC endpoint policies, internet gateways, and Domain Name System (DNS) servers, as described in the Amazon VPC User Guide.
REDSHIFT and components
Columnar storage
Parallel processing, columnar
result caching
backs up to S3
Components
Cluster: a set of nodes, with a LEADER node and COMPUTE nodes
One DB per cluster; scale by adding nodes or using better node types
Leader: accepts parallel connections and requests and forwards them to compute nodes
Compute: executes the search and sends results to the leader node
NODE type
Dense storage: large HDD
Dense compute: performance, large SSD
REDSHIFT SPECTRUM
Queries against exabytes of data in S3
no Enhanced VPC routing
scans only columns rather than rows
Lambda Edge
Lambda@Edge lets you run Lambda functions to customize the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer.
Runs in response to CloudFront events:
events
- After CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards the request to the origin (origin request)
- After CloudFront receives the response from the origin (origin response)
- Before CloudFront forwards the response to the viewer (viewer response)
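A Lambda@Edge function for the first of those events (viewer request) is just a Lambda handler that mutates the request object. A minimal sketch: the event shape follows the documented CloudFront event structure, while the header name and value are arbitrary examples.

```python
def handler(event, context):
    """Viewer-request sketch: inject a header before CloudFront
    processes the request, then return the (modified) request so
    CloudFront continues handling it."""
    request = event["Records"][0]["cf"]["request"]
    # CloudFront headers are keyed lowercase, each a list of key/value pairs.
    request["headers"]["x-experiment"] = [
        {"key": "X-Experiment", "value": "variant-b"}
    ]
    return request
```

The same handler shape applies to the other three events; only the payload under `event["Records"][0]["cf"]` differs (origin responses carry a `response` object instead).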
Cloudfront errors
501: the request method is not implemented/supported
502: CloudFront could not connect to the origin (connection or SSL issues)
503: the origin server is unavailable or lacks capacity
504: gateway timeout; the origin or DNS responded too slowly
Instance store and CLI
Stopping implies that persistent information is required; the CLI doesn't allow stopping of instance store-backed instances, only EBS-backed ones.
It gives a CLI error.
DynamoDB and Partition Key
Partition key: A simple primary key, composed of one attribute known as the partition key. Attributes in DynamoDB are similar in many ways to fields or columns in other database systems.
Partition key and sort key: Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. Following is an example.
aurora and dynamodb and throttling
Partition keys and request throttling
DynamoDB evenly distributes provisioned throughput—read capacity units (RCUs) and write capacity units (WCUs)—among partitions and automatically supports your access patterns using the throughput you have provisioned. However, if your access pattern exceeds 3000 RCU or 1000 WCU for a single partition key value, your requests might be throttled with a ProvisionedThroughputExceededException error.
Reading or writing above the limit can be caused by these issues:
Uneven distribution of data due to the wrong choice of partition key
Frequent access of the same key in a partition (the most popular item, also known as a hot key)
A request rate greater than the provisioned throughput
To avoid request throttling, design your DynamoDB table with the right partition key to meet your access requirements and provide even distribution of data.
high cardinality vs low cardinality
High-cardinality values are mostly unique: IDs are high cardinality, colors are low cardinality
(low cardinality means the same items repeating over and over)
WHEN YOU SEARCH, YOU WANT TO AVOID SCANNING ALL ITEMS; EASILY DISTINGUISHED KEYS MAKE SEARCHES BETTER
Recommendations for partition keys
Use high-cardinality attributes. These are attributes that have distinct values for each item, like e-mailid, employee_no, customerid, sessionid, orderid, and so on.
Use composite attributes. Try to combine more than one attribute to form a unique key, if that meets your access pattern. For example, consider an orders table with customerid+productid+countrycode as the partition key and order_date as the sort key.
Cache the popular items when there is a high volume of read traffic using Amazon DynamoDB Accelerator (DAX). The cache acts as a low-pass filter, preventing reads of unusually popular items from swamping partitions. For example, consider a table that has deals information for products. Some deals are expected to be more popular than others during major sale events like Black Friday or Cyber Monday. DAX is a fully managed, in-memory cache for DynamoDB that doesn’t require developers to manage cache invalidation, data population, or cluster management. DAX also is compatible with DynamoDB API calls, so developers can incorporate it more easily into existing applications.
Add random numbers or digits from a predetermined range to the key for write-heavy use cases, so a query comes up with fewer choices and doesn't need to read everything.
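That write-sharding recommendation can be sketched as a small helper (the key format and shard count are illustrative choices, not from DynamoDB itself):

```python
import random

def sharded_key(base_key: str, shards: int = 10) -> str:
    """Append a random suffix from a predetermined range so write-heavy
    traffic for one logical key spreads across several partitions.
    Reads must then query every suffix (base#0 .. base#9) and merge."""
    return f"{base_key}#{random.randrange(shards)}"

# e.g. writes for "2024-06-01" land on "2024-06-01#0" through "2024-06-01#9".
```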
API gateway and api throttling
Provides managed AWS endpoints.
Can also perform authentication to prove you are who you claim.
You can create an API and present it to your customers for use.
THROTTLE
API Gateway tracks requests; owners set a RATE limit (steady state) and a BURST limit (for spikes)
Beyond the limit, requests get a 429 HTTP response, which protects the backend!
RESULT CACHING
Caching can reduce traffic to the source; the TTL determines how long entries stay, and the external management API can help invalidate the cache for each stage.
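On the client side, the standard reaction to that 429 is retry with exponential backoff. A minimal sketch, assuming the callable returns an (HTTP status, body) pair; the injectable `sleep` is just for testability:

```python
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry a callable while it returns HTTP 429 (throttled),
    doubling the wait each attempt (exponential backoff)."""
    for attempt in range(max_attempts):
        status, body = fn()
        if status != 429:
            return status, body
        sleep(base_delay * (2 ** attempt))   # 0.1s, 0.2s, 0.4s, ...
    return status, body                      # give up, surface the 429
```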
RDS enhanced monitoring
CloudWatch alone only reports CPU utilization (hypervisor-level)
ENHANCED Monitoring adds OS-level metrics such as MEMORY and per-process CPU
default termination policy, SCALING
- with multiple AZs, choose the one with the most INSTANCES
- when even, choose the instance with the oldest launch configuration
- then the one closest to the next billing hour
- THEN CHOOSE AT RANDOM
IAM DB Authentication
Works with MySQL and PostgreSQL; don't use a password, use a token.
IAM database authentication provides the following benefits:
Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
STS NOT COMPATIBLE WITH RDS
KINESIS STREAM DATA INTO
S3
REDSHIFT
ELASTICSEARCH
SPLUNK
storage
HOT
WARM
COLD
hot, freq access
warm, less freq access
cold, rare acess
Amazon FSx LINUX VS WINDOWS
LUSTRE
parallel hot storage!
is a high-performance file system for fast processing of workloads. Lustre is a popular open-source parallel file system which stores data across multiple network file servers to maximize performance and reduce bottlenecks.
WINDOWS
NOT PARALLEL!!!
is a fully managed Microsoft Windows file system with full support for the SMB protocol, Windows NTFS, and Microsoft Active Directory (AD) integration.
WHICH DB HAS
SYNCHRONOUS
ASYNCHRONOUS
replication?
Aurora synchronously replicates the data across Availability Zones to six storage nodes associated with your cluster volume.
RDS read replicas are NOT SYNCHRONOUS (asynchronous)
MULTI-AZ replication IS SYNCHRONOUS
DYNAMODB (global tables) replicates ASYNCHRONOUSLY
IO AND ST (throughput) OR SC and throughput
io1 is good for high IOPS on small, random I/O tasks
st1 only delivers high throughput on large or SEQUENTIAL I/O tasks
AURORA STRUCTURE
Cluster, each connection via a specific DB instance.
To connect to Aurora, the host name and port are given by an intermediate handler called an ENDPOINT.
15 READ ONLY INSTANCES CAN BE USED, THESE CAN BE GIVEN DIFFERENT ROLES
Endpoints can be used to load balance requests
And multiple endpoints can be used for specific type of requests, you can assign certain tasks to be on certain instances, group those in a specific endpoint and have requests portal through them.
CUSTOM ENDPOINTS WILL NEED TO BE USED
need to use a custom endpoint to load-balance the database connections based on the specified criteria.
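Creating such a custom endpoint is one API call. A minimal sketch of building its parameters; the cluster, endpoint, and replica identifiers are placeholders, while the parameter names match the RDS CreateDBClusterEndpoint API:

```python
def custom_endpoint_params(cluster_id, endpoint_id, members):
    """Parameters for an Aurora custom endpoint that load-balances
    connections across only the listed member instances."""
    return {
        "DBClusterIdentifier": cluster_id,
        "DBClusterEndpointIdentifier": endpoint_id,
        "EndpointType": "READER",          # or "ANY"
        "StaticMembers": list(members),    # only these instances serve it
    }

# With credentials (not executed here):
#   boto3.client("rds").create_db_cluster_endpoint(
#       **custom_endpoint_params("my-cluster", "analytics", ["replica-3", "replica-4"]))
```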
Cloudfront and DNS alias with IPV6 and IPV4
Requests that use both IPv6 and IPv4 need aliases on both A and AAAA records; CNAME doesn't work with CloudFront at the ZONE APEX, which is the ROOT domain
Below the apex, the record will need a new target, which is the A and AAAA alias
ST1 and SC1
ST1 is the more expensive, higher-throughput HDD for large, sequential I/O and FREQUENTLY ACCESSED data
SC1 is similar, with lower throughput but less expensive
FOR INFREQUENT ACCESS
SNI SERVER NAME INDICATION
allows multiple domains to serve SSL traffic over the same IP address by including the hostname which the viewers are trying to connect to.
SSL cert will need to be made with AWS Certificate Manager
create cloudfront distro
associate cert with distro
enable support for SNI
NOT ON CLASSIC LOAD BALANCER
WORKS WITH APP LOAD OR CLOUDFRONT ONLY
VPC peering and transitivity
not transitive, not an HA or fault-tolerant method by itself
you must create a direct peering connection to each site; traffic cannot transit through an intermediate VPC
KINESIS STREAM SHARD TABLE CAPACITY
shard iterator expires unexpectedly.
The DynamoDB lease table used by the Kinesis Client Library doesn't have enough capacity to store the lease data
happens with a large number of shards
INCREASE THE WRITE CAPACITY of the lease table to fix it
DAX is for read improvement, not for the writes made while processing Kinesis shards
lambda with and without step
STEP FUNCTIONS only when multiple items or services need to be coordinated
Lambda alone is cheaper, as long as it's quick
LAMBDA HAS A 15 MIN LIMIT
AWS Lambda supports synchronous and asynchronous invocation of a Lambda function. You can control the invocation type only when you invoke a Lambda function. When you use an AWS service as a trigger, the invocation type is predetermined for each service. You have no control over the invocation type that these event sources use when they invoke your Lambda function. Since the processing only takes 5 minutes, Lambda is also a cost-effective choice.
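A minimal handler, invoked locally for illustration; the event shape and field names here are hypothetical, and in AWS the triggering service (not your code) decides whether the invocation is synchronous or asynchronous:

```python
import json

def handler(event, context):
    # Hypothetical handler: counts the records in the trigger payload.
    # AWS Lambda passes `event` (the trigger data) and `context` (runtime info).
    items = event.get("Records", [])
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(items)}),
    }

# Direct local invocation, as you might do in a unit test.
result = handler({"Records": [{"id": 1}, {"id": 2}]}, None)
print(result["statusCode"])  # 200
```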
decoupled and resources
SQS and SWF
Amazon Simple Queue Service (SQS) and Amazon Simple Workflow Service (SWF) are the services that you can use for creating a decoupled architecture in AWS. Decoupled architecture is a type of computing architecture that enables computing components or layers to execute independently while still interfacing with each other.
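The decoupling idea can be sketched locally with Python's standard-library queue standing in for SQS; the SQS actions named in the comments are the real API operations, everything else is illustrative:

```python
import queue
import threading

# Producer and consumer share ONLY the queue, so either side can be
# scaled, replaced, or taken offline independently -- that is the
# decoupling SQS provides between components.
q = queue.Queue()
processed = []

def producer():
    for i in range(5):
        q.put({"task_id": i})       # analogous to sqs.send_message(...)

def consumer():
    for _ in range(5):
        msg = q.get()               # analogous to sqs.receive_message(...)
        processed.append(msg["task_id"])
        q.task_done()               # analogous to sqs.delete_message(...)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(processed)  # [0, 1, 2, 3, 4]
```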
datasync vs storage gateway
datasync COPY LARGE AMOUNTS OF DATA
STORAGE GATEWAY: CONTINUOUS FILE TRANSFER
DS: to S3, EFS, or FSx for Windows (SMB)
SG: S3 ONLY
outside connection to internal, what is outside VPC
Customer gateway
VPN
border of the VPC: Virtual private gateway
in vpc
routers
VPN and customer gateway
To create a VPN connection, you must create a customer gateway resource in AWS, which provides information to AWS about your customer gateway device. Next, you have to set up an Internet-routable IP address (static) of the customer gateway’s external interface.
THIS IS THE CLIENT'S MODEM!!!!!! OR ROUTER!!!! THE ON-SITE FIREWALL
Cognito
vs SSO
Vs STS
Cognito: user authentication and not for providing access to your AWS resources
SSO: uses STS but not for issuing credentials. SINGLE SIGN ON
STS AWS Security Token Service (AWS STS) is the service that you can use to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use.
Elastic Load Balancing and Amazon EC2 Auto Scaling
You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed. … You can also optionally enable Amazon EC2 Auto Scaling to replace instances in your Auto Scaling group based on health checks provided by Elastic Load Balancing.
Elastic Load Balancing is used to automatically distribute your incoming application traffic across all the EC2 instances that you are running. You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed.
To use Elastic Load Balancing with your Auto Scaling group, you set up a load balancer and then you attach the load balancer to your Auto Scaling group to register the group with the load balancer.
Your load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group. When an instance is added to your group, it needs to register with the load balancer or no traffic is routed to it. When an instance is removed from your group, it must deregister from the load balancer or traffic continues to be routed to it.
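The register/deregister behavior above can be sketched with a toy round-robin distributor; this is an illustrative analogue, not the actual ELB algorithm:

```python
import itertools

class ToyLoadBalancer:
    """Rough local analogue of an ELB target group."""
    def __init__(self):
        self.targets = []

    def register(self, instance_id):
        # An instance added by Auto Scaling must register before it gets traffic.
        self.targets.append(instance_id)

    def deregister(self, instance_id):
        # A removed/unhealthy instance must deregister or traffic keeps flowing to it.
        self.targets.remove(instance_id)

    def route(self, n_requests):
        # Distribute requests evenly (round robin) so no one instance is overwhelmed.
        cycle = itertools.cycle(self.targets)
        counts = {t: 0 for t in self.targets}
        for _ in range(n_requests):
            counts[next(cycle)] += 1
        return counts

lb = ToyLoadBalancer()
for iid in ("i-a", "i-b", "i-c"):   # hypothetical instance IDs
    lb.register(iid)
print(lb.route(9))                  # {'i-a': 3, 'i-b': 3, 'i-c': 3}
lb.deregister("i-b")                # failed health check -> removed
print(lb.route(8))                  # {'i-a': 4, 'i-c': 4}
```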
geolocation vs geoproximity
BIAS!!!
Geoproximity Routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources. You can also optionally choose to route more traffic or less to a given resource by specifying a value, known as a !!!bias. A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource.!!!
Geolocation Routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.
Geolocation Routing is incorrect because you cannot control the coverage size from which traffic is routed to your instance in Geolocation Routing. It just lets you choose the instances that will serve traffic based on the location of your users.*
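The bias idea can be illustrated with a toy "biased distance" selection. This is a sketch of the concept only, not the exact Route 53 formula; the region names and distances are made up:

```python
def biased_distance(distance_km, bias):
    # Positive bias shrinks the effective distance to a resource
    # (expanding the region it serves); negative bias inflates it
    # (shrinking its region). Illustrative math, not Route 53's own.
    if bias >= 0:
        return distance_km * (1 - bias / 100)
    return distance_km * (1 + abs(bias) / 100)

def pick_resource(user_distances, biases):
    # Route the user to the resource with the smallest biased distance.
    return min(user_distances,
               key=lambda r: biased_distance(user_distances[r], biases.get(r, 0)))

distances = {"us-east-1": 500, "us-west-2": 800}
print(pick_resource(distances, {}))                 # us-east-1: closest wins with no bias
print(pick_resource(distances, {"us-west-2": 50}))  # us-west-2: bias expanded its region
```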
Perfect Forward Secrecy
Perfect forward secrecy means that a piece of an encryption system automatically and frequently changes the keys it uses to encrypt and decrypt information, such that if the latest key is compromised, it exposes only a small portion of the user’s sensitive data.
CloudFront and Elastic Load Balancing are the two AWS services that support Perfect Forward Secrecy. Hence, the correct answer is: CloudFront and Elastic Load Balancing.
EC2 and S3, CloudTrail and CloudWatch, and Trusted Advisor and GovCloud are incorrect since these services do not use Perfect Forward Secrecy. SSL/TLS is commonly used when you have sensitive data travelling through the public network.
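The "frequently changing keys" idea behind forward secrecy can be shown with a toy ephemeral Diffie-Hellman exchange: each session derives a fresh key, so compromising one session's key exposes nothing about the others. This is a sketch only; real TLS deployments use ECDHE with vetted parameters:

```python
import secrets

# 1024-bit MODP prime from RFC 2409 (Oakley group 2); fine for illustration,
# too small for real-world use.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A63A3620FFFFFFFFFFFFFFFF", 16)
G = 2

def session_key():
    # Each session generates FRESH ephemeral secrets, which is what makes
    # the scheme "forward secret".
    a = secrets.randbelow(P - 2) + 1       # client ephemeral secret
    b = secrets.randbelow(P - 2) + 1       # server ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)      # public values exchanged on the wire
    k_client, k_server = pow(B, a, P), pow(A, b, P)
    assert k_client == k_server            # both sides derive the same key
    return k_client

k1, k2 = session_key(), session_key()
print(k1 != k2)  # True: every session has an independent key
```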
Cognito ID.
You can use Amazon Cognito to deliver temporary, limited-privilege credentials to your application so that your users can access AWS resources. Amazon Cognito identity pools support both authenticated and unauthenticated identities. You can retrieve a unique Amazon Cognito identifier (identity ID) for your end user immediately if you’re allowing unauthenticated users or after you’ve set the login tokens in the credentials provider if you’re authenticating users.
kinesis firehose vs data stream
STREAMS: INGESTS AND STORES DATA FOR PROCESSING; data is available for 24 hours (by default) for customized processing. LAMBDA IS USED HERE
FIREHOSE
To load into specific destinations
S3, Elasticsearch Service, or Redshift, where data can be copied for processing through additional services.
Health check and routing policy
Weighted and Latency routing can be used in conjunction with a health-checked routing policy
Weighted just separates the connections according to the ratios you specify
Latency routes to the region with the best latency
After a failure, the policy is still in effect.
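Weighted routing can be simulated with a weighted random choice; the record names and weights here are hypothetical (weight 3 vs weight 1, so roughly 75% of queries land on the heavier record):

```python
import random

random.seed(7)  # fixed seed so the simulation is repeatable
# Hypothetical weighted record set: "blue" carries weight 3, "green" weight 1.
records = {"blue.example.com": 3, "green.example.com": 1}

choices = random.choices(list(records), weights=list(records.values()), k=10_000)
blue_share = choices.count("blue.example.com") / len(choices)
print(round(blue_share, 2))  # close to 0.75
```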
Aurora Cluster and Reader endpoints and load balance
you can assign specific endpoints for Aurora and control which instances sit behind them
ONE CLUSTER
15 READERS
Cluster has its own endpoint for writing and reading
readers have only reading
Load balancing across readers is built in: query the reader endpoint
you can add CUSTOM ENDPOINTS for more control of read distribution
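The endpoint roles above can be sketched as a simple routing decision; the endpoint host names here are hypothetical examples of the format Aurora uses:

```python
# Hypothetical endpoints for an Aurora cluster named "mydb".
ENDPOINTS = {
    # Cluster endpoint: always the primary; handles reads AND writes.
    "writer": "mydb.cluster-abc123.us-east-1.rds.amazonaws.com",
    # Reader endpoint: load-balances reads across the replicas.
    "reader": "mydb.cluster-ro-abc123.us-east-1.rds.amazonaws.com",
    # Custom endpoint: a chosen subset of replicas, e.g. sized for reporting.
    "analytics": "mydb-analytics.cluster-custom-abc123.us-east-1.rds.amazonaws.com",
}

def endpoint_for(operation):
    # Writes go to the cluster endpoint, heavy reporting queries to the
    # custom endpoint, and ordinary reads to the reader endpoint.
    if operation == "write":
        return ENDPOINTS["writer"]
    if operation == "report":
        return ENDPOINTS["analytics"]
    return ENDPOINTS["reader"]

print(endpoint_for("write"))
```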
role vs group
IAM groups are the standard groups: collections of several users, where a user can belong to multiple groups.
IAM roles are an altogether different species; they operate like individual users, except that they are assumed (impersonation style) and allow AWS API calls without specifying long-term credentials.
VPC IPV4 CIDR
/16 to /28
65,536 down to 16 IP addresses
Classless Inter-Domain Routing
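The address counts for the two CIDR extremes can be checked with the standard library; the 10.0.0.0 prefixes are just example private ranges:

```python
import ipaddress

# A VPC IPv4 CIDR block must be between /16 and /28.
big = ipaddress.ip_network("10.0.0.0/16")
small = ipaddress.ip_network("10.0.0.0/28")
print(big.num_addresses)    # 65536
print(small.num_addresses)  # 16
# Note: AWS reserves 5 addresses in every subnet, so usable hosts are fewer.
```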
S3 and data consistency
S3 gives read-after-write consistency for PUTs of new objects
but is eventually consistent for overwrite PUTs and DELETEs
Programs that access S3 from multiple regions may read different copies of the data and end up with inconsistent information.
happens when
FREQUENT WRITES AND READS
MULTIPLE regions
EC2 Reserved Instance expires, what happens
However, when an RI expires, you might notice a change in the pricing of one or more of your instances. This is because any instances that were covered by the RI pricing benefit are now billed at the on-demand price.
SNI, server name indication
Allows multiple SSL certs for the same IP address
*** BIND multiple certificates behind same secure listener behind load balancer
ALB will automatically choose the proper TLS cert for each client.
for different domains, not subdomains
Wildcard
CERT for multiple SUBDOMAINS, not different domains
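The subdomain-vs-different-domain distinction can be illustrated with glob matching; note that `fnmatch` is looser than real certificate validation, where a wildcard covers only ONE label (so `a.b.example.com` would not match `*.example.com` in practice):

```python
from fnmatch import fnmatch

# Rough illustration of what a wildcard cert for *.example.com covers.
print(fnmatch("api.example.com", "*.example.com"))  # True  - subdomain is covered
print(fnmatch("example.org", "*.example.com"))      # False - different domain needs
                                                    #         its own cert (serve it via SNI)
```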
CreationPolicy
Wait on resource config before stack creation proceeds
cfn-signal lets the resource signal that the next step can proceed
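A sketch of the pattern in a CloudFormation template; the resource name, AMI ID, and bootstrap step are placeholders:

```yaml
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1          # wait for one success signal
        Timeout: PT15M    # fail the resource if no signal arrives within 15 minutes
    Properties:
      ImageId: ami-12345678   # placeholder AMI
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # ... application bootstrap ...
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
            --resource AppServer --region ${AWS::Region}
```

Stack creation pauses at AppServer until cfn-signal reports success (or the timeout expires and the stack rolls back).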
When EC2 bills
Not billed: pending
Not billed: stopping when preparing to stop
Not billed: stopped
Billed: stopping when preparing to hibernate
Terminated: not billed, but Reserved Instances are billed until the end of their term
DynamoDB auto scale
Enabled by default in the console, except for tables created via the CLI
Lambda and scaling
AWS Lambda scales your functions automatically on your behalf. Every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code. Since your code is stateless, AWS Lambda can start as many copies of your function as needed without lengthy deployment and configuration delays.