High Availability Flashcards
EC2
_____ - increase instance sizes as required, using reserved instances
___ - increase the number of EC2 instances, based on Auto Scaling
scalability
elasticity
DynamoDB
___ - unlimited amount of storage
___ - provision additional IOPS for spikes in traffic, then decrease that IOPS after the spike
scalability
elasticity
RDS
____ - increase instance size, i.e., from small to medium
___ - not very elastic; you can't scale RDS based on demand
scalability
elasticity
_____ RDS allows:
scalability - modify the instance type
elasticity - ____ serverless
aurora
____ allows you to have an exact copy of your prod database in another AZ. AWS handles the replication for you, so when your prod DB is written to, the write is automatically synchronized to the standby DB. In the event of planned DB maintenance, DB instance failure, or an AZ failure, RDS will automatically fail over to the standby so that DB operations can resume quickly without administrative intervention.
Multi-AZ
multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize ___ physical replication to keep data on the standby up to date with the primary
multi-AZ deployments for the SQL Server engine use ____ logical replication to achieve the same result, employing SQL Server's native mirroring technology.
Both approaches safeguard your data in the event of a DB instance failure or loss of an AZ
synchronous
RDS multiAZ failover advantages
- high availability
- backups are taken from the ____, which avoids I/O suspension to the primary
- restores are taken from the ____, which avoids I/O suspension to the primary
secondary
T or F
you can force failover from one AZ to another by rebooting your instance. This can be done through the AWS Management Console or by using the RebootDBInstance API call.
T
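The forced failover above maps to the RebootDBInstance API call with its ForceFailover flag set. A minimal sketch of building the request parameters, assuming a hypothetical instance identifier `mydb` (with boto3, the same dict would be unpacked into `reboot_db_instance`):

```python
# Sketch: parameters for the RDS RebootDBInstance API call that force
# a Multi-AZ failover. The instance identifier "mydb" is a placeholder.
def build_reboot_request(instance_id: str, force_failover: bool = True) -> dict:
    """Build the parameter dict for RebootDBInstance.

    With boto3 this would be used as:
        rds.reboot_db_instance(**build_reboot_request("mydb"))
    """
    return {
        "DBInstanceIdentifier": instance_id,
        "ForceFailover": force_failover,  # reboot via the standby, forcing an AZ failover
    }

params = build_reboot_request("mydb")
```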
RDS Multi-AZ is a scaling solution
T or F
False
It is not a scaling solution
T or F
Amazon handles the failover for you. This is done by updating the private DNS record for the DB endpoint.
T
Backups and restores are taken from the secondary Multi-AZ instance.
T or F
T
T or F
read replicas are used to scale
T
RDS ___ ____ make it easy to take advantage of supported engines' built-in replication functionality to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy DB workloads.
read replicas
T or F
you can create a read replica with a few clicks in the AWS Management Console or using the CreateDBInstanceReadReplica API. Once the read replica is created, DB updates on the source DB instance are replicated using a supported engine's native, asynchronous replication. You can create multiple read replicas for a given source DB instance and distribute your app's read traffic amongst them.
T
when to use read replicas -
scaling beyond the compute or I/O capacity of a single DB instance for read-heavy DB workloads. This excess read traffic can be directed to one or more read replicas.
serving read traffic while the source DB instance is unavailable. If your source DB instance cannot take I/O requests (due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your read replicas.
business reporting or data warehousing scenarios; you may want business reporting queries to run against a read replica, rather than your primary prod DB instance.
t
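Distributing an app's read traffic amongst several read replicas can be as simple as rotating over their endpoints. A minimal round-robin sketch; the endpoint names are made up for illustration (real ones come from the RDS console or API):

```python
from itertools import cycle

# Hypothetical read replica endpoints; placeholders, not real DNS names.
REPLICA_ENDPOINTS = [
    "mydb-replica-1.example.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.example.us-east-1.rds.amazonaws.com",
]

_rotation = cycle(REPLICA_ENDPOINTS)

def next_read_endpoint() -> str:
    """Return the next replica endpoint, round-robin.

    Only read traffic is routed here; writes still go to the primary.
    """
    return next(_rotation)
```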
Read replica supported versions:
MySQL
PostgreSQL
MariaDB
for all 3, Amazon uses these engines' native ____ replication to update the read replica
asynchronous
aurora read replicas
employs an ___-backed virtualized storage layer purpose-built for DB workloads. Aurora replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to copy data to the replica nodes.
SSD
When creating read replicas, AWS will take a snapshot of your DB.
if Multi-AZ is NOT enabled:
- this snapshot will be of your primary DB and can cause brief I/O suspension for around __ minute
if Multi-AZ IS enabled:
- the snapshot will be of your ___ DB and you will not experience any performance hit on your primary database.
1
secondary
when a new read replica is created, you will be able to connect to it using a new endpoint DNS address
T or F
T
you can promote a read replica to its own standalone DB. Doing this will break the replication link between the primary and the secondary.
T
read replica exam tips
- you can have up to __ read replicas for MySQL, PostgreSQL, and MariaDB
- you can have read replicas in different ____ for all engines
- replication is ___ only, not synchronous
- read replicas can be built off ____ databases
- read replicas themselves can now be ____
- DB snapshots and automated backups ___ be taken of read replicas
- the key metric to look for is ___ ___
5
regions
asynchronous
Multi-AZ
Multi-AZ
cannot
replica lag
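Because replication is asynchronous, the key CloudWatch metric is ReplicaLag. A minimal sketch of an alarm-style threshold check; the 30-second threshold is an arbitrary example, not an AWS default:

```python
def replica_lag_ok(lag_seconds: float, threshold_seconds: float = 30.0) -> bool:
    """Return True when the replica is close enough to the primary to serve reads.

    lag_seconds corresponds to the CloudWatch ReplicaLag metric; the
    threshold is application-specific.
    """
    return lag_seconds <= threshold_seconds

# A replica 5 seconds behind is fine; one 120 seconds behind should
# probably be taken out of the read rotation.
```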
Steps to encrypt RDS snapshots
- take a snapshot of the existing RDS instance
- copy the snapshot to the same/different region
- encrypt the copy during the copy process
- restore the snapshot
t
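The encrypt-during-copy step maps to the CopyDBSnapshot API with a KmsKeyId supplied. A minimal sketch of building the request parameters; the snapshot identifiers and key ARN are placeholders (with boto3, the dict would be unpacked into `copy_db_snapshot`):

```python
def build_copy_snapshot_request(source_id: str, target_id: str, kms_key_id: str) -> dict:
    """Parameters for CopyDBSnapshot; supplying KmsKeyId encrypts the copy.

    With boto3: rds.copy_db_snapshot(**build_copy_snapshot_request(...))
    """
    return {
        "SourceDBSnapshotIdentifier": source_id,
        "TargetDBSnapshotIdentifier": target_id,
        "KmsKeyId": kms_key_id,  # the copy is encrypted with this key
        "CopyTags": True,
    }

params = build_copy_snapshot_request(
    "mydb-snap",            # placeholder: unencrypted source snapshot
    "mydb-snap-encrypted",  # placeholder: name for the encrypted copy
    "arn:aws:kms:us-east-1:111122223333:key/example",  # placeholder key ARN
)
```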
How to share encrypted snapshots between accounts:
you can share DB snapshots that have been encrypted at rest using the AES-256 encryption algorithm.
to do this:
- create a custom KMS encryption key
- create an RDS snapshot using the custom key
- share the custom AWS KMS encryption key that was used to encrypt the snapshot
- use the AWS Management Console, AWS CLI, or RDS API to share the encrypted snapshot with the other accounts.
t
restrictions for sharing encrypted snapshots:
- you can't share encrypted snapshots as public
- you can't share Oracle or MS SQL Server snapshots that are encrypted using Transparent Data Encryption
- you can't share a snapshot that has been encrypted using the default AWS KMS encryption key of the AWS account that shared the snapshot.
yup
services with maintenance windows
rds
elasticache
redshift
dynamo db dax
neptune
docdb
yes
services without a maintenance window
ec2
lambda
qldb
when it comes to monitoring our caching engines, there are 4 important things to look at:
CPU Utilization
Swap Usage
Evictions
Concurrent Connections
yes
aurora is a MySQL/Postgres-compatible, relational DB engine that combines the speed and availability of high-end commercial DBs with the simplicity and cost-effectiveness of open-source DBs. Aurora provides up to __x better performance than MySQL (and __x better than PostgreSQL) at a price point one tenth that of a commercial DB, while delivering similar performance and availability.
5, 3
Aurora Scaling
Starts with __ GB, scales in __ GB increments to __ TB (storage autoscaling)
Compute resources can scale up to ___ vCPUs and ___ GiB of memory
___ copies of your data are contained in each AZ, with a minimum of __ AZs, for ___ copies of your data in total
10, 10, 64
64 and 488
2, 3, 6
Aurora is designed to transparently handle the loss of up to ___ copies of data without affecting database write availability and up to ___ copies without affecting read availability
2, 3
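The two failure-tolerance numbers above can be expressed as a small helper: with 6 copies total, writes survive the loss of 2 copies and reads survive the loss of 3. A sketch:

```python
TOTAL_COPIES = 6  # 2 copies in each of 3 AZs

def availability(copies_lost: int) -> dict:
    """Aurora tolerates losing 2 copies for writes and 3 copies for reads."""
    return {
        "copies_remaining": TOTAL_COPIES - copies_lost,
        "writes_available": copies_lost <= 2,
        "reads_available": copies_lost <= 3,
    }
```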
2 types of replicas available for aurora:
aurora replicas (15)
MySQL read replicas (15)
YES
AURORA 100% CPU UTILIZATION
are writes causing the issue? what do I do?
are reads causing the issue? what do I do?
scale up for writes (increase instance size)
scale out for reads (increase the number of read replicas)
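That rule of thumb can be written down as a tiny decision helper; a sketch only, not an AWS API:

```python
def scaling_action(bottleneck: str) -> str:
    """Writes -> scale up (bigger instance); reads -> scale out (more replicas)."""
    if bottleneck == "writes":
        return "scale up: increase the instance size"
    if bottleneck == "reads":
        return "scale out: add read replicas"
    raise ValueError("bottleneck must be 'writes' or 'reads'")
```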
Aurora ___ is an on-demand, auto-scaling configuration of Aurora (MySQL-compatible edition) where the DB will automatically start up, shut down, and scale capacity up or down based on your app's needs.
You pay on a per-second basis for the DB capacity you use when the DB is active, and you can migrate between standard and serverless configurations with a few clicks in the RDS management console.
serverless
aurora tips
encryption at rest is turned on by default. Once encryption is turned on, all read replicas will be encrypted.
failover is defined by tiers. The lower the tier, the higher the priority, with tier 0 being the highest priority available.
yes
Instances not launching into Auto Scaling groups
below is a list of things to look for if your instances are not launching into an Auto Scaling group:
- associated key pair does not exist
- security group does not exist
- Auto Scaling config is not working correctly
- ASG not found
- instance type specified is not supported in the AZ
- AZ is no longer supported
- invalid EBS device mapping
- Auto Scaling service is not enabled on your account
- attempting to attach an EBS block device to an instance store AMI
yes
these are terms for which service?
Edge location - this is the location where content will be cached. This is separate from an AWS region/AZ.
origin - this is the origin of all the files that the CDN will distribute. This can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53.
distribution - this is the name given to the CDN, which consists of a collection of edge locations.
cloudFront
types of cloudfront
web distribution - used for websites
RTMP - used for media streaming
yes
the more requests that CloudFront is able to serve from edge locations, the better it works.
The ratio of requests served from edge locations (rather than the origin) is known as the cache hit ratio. The more requests served from edge locations, the better the performance.
This ratio is known as what?
cache hit ratio
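The cache hit ratio is simply requests served at the edge divided by total requests; a minimal sketch:

```python
def cache_hit_ratio(edge_hits: int, origin_fetches: int) -> float:
    """Fraction of requests served from edge locations rather than the origin."""
    total = edge_hits + origin_fetches
    if total == 0:
        return 0.0
    return edge_hits / total

# e.g. 900 requests served at the edge out of 1000 total gives a ratio of 0.9
```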
maximize cache hit ratio
the following strategies can maximize your cache hit ratios
- specifying how long CF caches your objects
- caching based on query string parameters
- caching based on cookie values
- caching based on request headers
- remove accept encoding header when compression is not needed.
to increase your cache hit ratio, you can configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age. The shorter the cache duration, the more frequently CF forwards another request to your origin to determine whether the object has changed and, if so, to get the latest version.
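Adding the max-age directive at the origin amounts to one response header; a sketch of constructing it (the 86400-second value is just an example, not a recommended default):

```python
def cache_control_header(max_age_seconds: int) -> tuple:
    """Build the Cache-Control header the origin should attach to objects.

    A longer max-age means CloudFront re-validates with the origin less
    often, raising the cache hit ratio.
    """
    return ("Cache-Control", f"max-age={max_age_seconds}")

# One day:
header = cache_control_header(86400)
```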
caching based on query string parameters
query string parameters are case sensitive. Ensure your app uses a consistent case for parameter names and values.
caching based on cookie values
create separate cache behaviors for static and dynamic content, and configure CF to forward cookies to your origin only for dynamic content.
yes
caching based on request headers
if you configure CF to cache based on request headers, you can improve caching by configuring CF to forward and cache based on only specified headers instead of forwarding and caching based on all headers.
y
remove accept encoding header when compression is not needed
by default, when CF receives a request, it checks the value of the Accept-Encoding header. If the value of the header contains gzip, then CF adds the header and value gzip - Accept-Encoding: gzip - to the cache key, and then forwards the request to the origin. This behavior ensures that CF serves either an object or a compressed version of the object, based on the value of the Accept-Encoding header.
If compression is not enabled - because the origin doesn't support it, CF doesn't support it, or the content is not compressible - you can increase the cache hit ratio by specifying different behavior.
sure
serving media content by using HTTP
you can use CF to deliver on-demand video or live streaming video using any HTTP origin. One way you can set up video workflows in the cloud is by using CF together with AWS Media Services.
y