Practice exam number 3 and above Flashcards
An account has a private hosted zone (PHZ) with an associated VPC. DNS queries against the private hosted zone are not resolving. What needs to be fixed?
Options: overlapping namespaces, resolver rules, enable DNS hostnames etc. for the PHZ, fix the NS record.
Correct - enable DNS hostnames and DNS resolution for private hosted zones.
* PHZ queries are resolved by the Amazon-provided VPC DNS; DNS hostnames are disabled by default on non-default VPCs (ones not created by the wizard), so both attributes must be enabled on the associated VPC. See the sketch below.
* Route 53 automatically creates the NS and SOA records.
memorise: shrine domain private hosted zone, private party in the shrine, hostname guest list.
Incorrect (my answer) - remove overlapping namespaces for the private and public hosted zones.
Incorrect - fix the NS and SOA records that could be incorrect.
Incorrect - fix conflicts between the PHZ and any resolver rules.
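A minimal sketch of enabling both VPC DNS attributes with boto3, assuming a placeholder VPC ID; each attribute must be set in its own call:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC associated with the PHZ

# modify_vpc_attribute accepts only one attribute per call,
# so enabling both DNS settings takes two calls.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```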
A content management app has web servers on EC2 and Aurora, everything in us-east-1. 90% of customers are in the US and Europe; European customers see poor performance and high load times. Options to fix:
plausible options
a. Set up another fleet of EC2 web instances in eu-west-1 and enable latency routing in Route 53.
b. Set up another fleet of EC2 instances as above and enable a geolocation routing policy in Route 53.
c. Create Aurora read replicas in eu-west-1.
a and c correct. (memorise: route 53 - route 66 - latency route highway fastest road, throw the map out the window)
b incorrect (my response).
"You cannot use geolocation routing to reduce latency" - hard to believe, use caution, learn this. Geolocation returns the record mapped to the user's location regardless of which endpoint is actually fastest; latency routing is the policy that picks the lowest-latency Region. A latency record sketch follows.
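A hedged sketch of what a Route 53 latency record for the new eu-west-1 fleet might look like in boto3; the zone ID, domain, and ALB values are placeholders:

```python
import boto3

route53 = boto3.client("route53")

# UPSERT a latency-based alias record pointing at the eu-west-1 ALB.
# Route 53 answers with the record whose Region has the lowest latency
# to the caller; a matching record for us-east-1 would exist alongside it.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "eu-west-1-fleet",
                "Region": "eu-west-1",  # latency routing key
                "AliasTarget": {
                    "HostedZoneId": "Z32O12XQLNTSW2",  # region-specific ELB zone ID (placeholder)
                    "DNSName": "my-alb-123.eu-west-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```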
Aurora Multi-AZ: DB reads are causing high I/O and latency for writes. What would you do?
Set up a read replica and modify the app to use the correct (reader) endpoint.
Aurora DB cluster = (DB instances + a cluster volume spanning multiple AZs), and the cluster has a primary (read/write) plus replicas (read-only).
Max 15 read replicas.
Automatic failover to replicas.
distractors:
provision another DB and link it to the primary as a read replica.
read-through caching.
use the Multi-AZ standby instance for reads (standby instances cannot serve reads).
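A small sketch, assuming boto3 and a hypothetical cluster name, of looking up the cluster's reader endpoint so the app can send reads there:

```python
import boto3

rds = boto3.client("rds")

# Aurora exposes a cluster endpoint (primary, read/write) and a
# reader endpoint that load-balances across up to 15 replicas.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="my-aurora-cluster"  # hypothetical cluster name
)["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # point writes here
reader_endpoint = cluster["ReaderEndpoint"]  # point reads here
print(writer_endpoint, reader_endpoint)
```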
Web Application Firewall attached to CloudFront; the ALB sits behind CloudFront. How to block an IP?
Easy one. Create an IP match condition (in WAFv2, an IP set referenced by a block rule) on the WAF to block the IP.
WAF can
block SQL injection
block cross-site scripting
filter traffic patterns
WAF can be in front of CloudFront, ALB, and API Gateway.
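A sketch of the WAFv2 version of this (the classic "IP match condition" became an IP set); the name and address are placeholders, and CLOUDFRONT-scoped resources must be created in us-east-1:

```python
import boto3

# CLOUDFRONT-scoped WAFv2 resources live in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

ip_set = wafv2.create_ip_set(
    Name="blocked-ips",            # placeholder name
    Scope="CLOUDFRONT",
    Description="IPs blocked at the edge",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.7/32"],  # the offending IP, in CIDR form
)

# The returned ARN is then referenced from a web ACL rule using an
# IPSetReferenceStatement with Action={"Block": {}}.
print(ip_set["Summary"]["ARN"])
```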
Multiple AWS accounts in one Region, managed by one AWS Organization. All EC2 instances should be able to communicate privately. Most cost-effective solution?
options:
Create a transit gateway and link the VPCs in all accounts together.
Create a VPC in one account and share one or more subnets with the other accounts using Resource Access Manager.
VPC peering.
Answer - Resource Access Manager. RAM. The big woolly sheep that shares its hay with others.
Transit Gateway would work but would be more expensive.
VPC peering would work but would be a mess and complicated, and it is not scalable: a full mesh of n VPCs needs n(n-1)/2 connections.
RAM shares resources between AWS accounts that are **within an Organization**. Share:
transit gateways
subnets
License Manager configurations
Route 53 Resolver rules
Policies and permissions are applied when resources are shared.
Eliminates duplicate resources in multiple accounts.
steps
Create a resource share, specify the resources, specify the account principals (see the sketch below). There is no cost for RAM itself.
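A sketch of those steps in boto3; the subnet ARN and account IDs are placeholders, and principals can be account IDs, OUs, or the whole Organization:

```python
import boto3

ram = boto3.client("ram")

# One call covers the steps: create the share, attach the
# resources, and name the principals allowed to use them.
share = ram.create_resource_share(
    name="shared-subnets",  # placeholder name
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc1234"
    ],
    principals=["222222222222", "333333333333"],  # placeholder accounts
    allowExternalPrincipals=False,                # stay inside the Org
)
print(share["resourceShare"]["resourceShareArn"])
```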
Share a sensitive RDS database with a third-party AWS account; they must have their own copy. Options:
read replica with IAM DB authentication
snapshot in S3 with an IAM role
encrypted snapshot of the DB: encrypt with a Key Management Service key and give the third-party account access to that KMS key (via the KMS key policy) - correct answer.
Snapshot in S3 is incorrect because users can't access the snapshot files directly; RDS snapshots are only usable through RDS.
Read replica is overkill, and the third party wouldn't have their own copy of the DB.
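A sketch of the correct answer's two halves, assuming the snapshot was encrypted with a customer managed KMS key (the default aws/rds key cannot be shared); all IDs are placeholders:

```python
import boto3

rds = boto3.client("rds")
kms = boto3.client("kms")

THIRD_PARTY = "123456789012"  # placeholder third-party account ID

# 1. Share the encrypted snapshot with the other account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="sensitive-db-snap",  # placeholder snapshot
    AttributeName="restore",
    ValuesToAdd=[THIRD_PARTY],
)

# 2. Let that account use the customer managed key the snapshot is
#    encrypted with (a grant; a key-policy statement also works).
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111111111111:key/placeholder-key-id",
    GranteePrincipal=f"arn:aws:iam::{THIRD_PARTY}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)
```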
A desktop monitoring app sends telemetry data to AWS every minute. Data must be processed in order and independently per desktop, and the number of consumers must scale to the number of desktops. Options:
Kinesis Data Stream: send data with a partition key based on the desktop ID.
SQS FIFO queue: data is sent with a message group ID attribute representing the desktop ID.
Correct, but lucky - SQS FIFO. The group ID preserves ordering per desktop, SQS gives each group's messages to only one consumer at a time, and the number of consumers can scale up to the number of group IDs.
Incorrect - Kinesis Data Streams with a desktop ID per shard. This would sort of work, but you would need too many shards when you scale (one shard per consumer); in practice Kinesis has many more producers than shards, many-to-one.
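A producer-side sketch showing the group ID in play; the queue URL and payload are placeholders, and ordering is preserved within each MessageGroupId:

```python
import boto3
import json

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/telemetry.fifo"  # placeholder

def send_telemetry(desktop_id: str, reading: dict, seq: int) -> None:
    # MessageGroupId = desktop ID: in-order per desktop, and SQS hands
    # each group to only one consumer at a time, so consumers scale
    # toward the number of desktops.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(reading),
        MessageGroupId=desktop_id,
        MessageDeduplicationId=f"{desktop_id}-{seq}",  # or enable content-based dedup
    )

send_telemetry("desktop-42", {"cpu": 71, "mem": 55}, seq=1)
```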
HARD. The data centre is unreliable (natural disasters). Not ready to go fully cloud, so set up a failover environment in AWS: web servers (EC2) that connect to external vendors. Data must be uniform on-prem and in AWS. Focus on the **least amount of downtime**. options
A. 1. Set up a Route 53 failover record to route from the unhealthy resource to the healthy one.
2. Run app servers on EC2 behind an Application Load Balancer in an Auto Scaling group.
3. Set up AWS Storage Gateway with stored volumes to back up to S3. (correct)
memorise - route 66 failed road, bridge broken. the other road has a storage gateway, big self-storage warehouse with a Chinese gateway entrance.
B. Route 53 failover record.
Direct Connect from the VPC to the on-prem data centre (long wait time to provision).
App servers on EC2 in an Auto Scaling group.
Run **Lambda** to execute a **CloudFormation** template to create an Application Load Balancer. (incorrect)
CloudFormation in this and other options takes time to provision and fails the least-downtime criterion. A failover record sketch follows.
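For reference, a sketch of the failover record pair from option A in boto3; the zone, domain, endpoints, and health check ID are placeholders. Route 53 answers with SECONDARY only while the PRIMARY's health check is failing:

```python
import boto3

route53 = boto3.client("route53")

def failover_record(set_id, role, ip, health_check_id=None):
    rrset = {
        "Name": "app.example.com",  # placeholder domain
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,           # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        rrset["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": rrset}

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder zone
    ChangeBatch={"Changes": [
        # On-prem is primary; its health check drives the failover.
        failover_record("onprem", "PRIMARY", "198.51.100.10",
                        health_check_id="placeholder-health-check-id"),
        # The AWS environment answers when the primary is unhealthy.
        failover_record("aws", "SECONDARY", "203.0.113.20"),
    ]},
)
```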
What is Storage Gateway? A hybrid cloud service: on-prem access to virtually unlimited cloud storage.
Low latency: caches frequently accessed data on-prem.
Stores data securely and durably in AWS.
Syncs only data changes.
Integrates with S3.
What are the data transfer costs of read replicas within an AZ, within a Region, and between Regions?
Data replicates between the primary and the read replica; the cost depends on the replica's location.
No charge within a Region.
Charges apply across Regions.
Auto Scaling default termination policy: how does it apply to 4 instances - A oldest launch template, B oldest launch config, C newest launch config, D closest to the next billing hour?
- Pick the AZ with the most instances and at least one instance not protected from scale-in.
- Try to align with the allocation strategy of the On-Demand vs Spot instance type that is terminating.
- Prefer any instances using the oldest launch template, then the oldest launch configuration.
- After all of the above, terminate the instance closest to the next billing hour.
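The default order can also be overridden; a sketch of setting explicit termination policies with boto3 (the group name is a placeholder, the policy names are the documented values):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Policies are evaluated in the order listed; "Default" at the end
# falls back to the behaviour described above.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # placeholder group name
    TerminationPolicies=[
        "OldestLaunchTemplate",
        "ClosestToNextInstanceHour",
        "Default",
    ],
)
```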
RDS for MySQL has performance issues even with read replicas; it needs to address these and go global. Most cost-effective option?
Aurora Global Database. The other options were silly. Easy one.
EC2 user data default behaviour - which options are true?
correct
User data scripts execute as root.
User data runs only during the boot cycle of the first launch.
(User data can be shell scripts or cloud-init directives, added in the launch wizard as a file or as text; see the sketch below.)
incorrect
While the instance is running, update user data using root credentials (user data can only be changed while the instance is stopped).
Scripts do not have root privileges for execution.
User data executes every time EC2 is restarted (not the default, but this can be configured via cloud-init).
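A sketch of passing user data at launch with boto3; the AMI and the script contents are placeholders. boto3 base64-encodes the string for you, and the script runs once, as root, on first boot:

```python
import boto3

ec2 = boto3.client("ec2")

# Runs once as root during the first boot cycle.
USER_DATA = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,               # boto3 handles base64 encoding
)
```

Running on every restart requires a cloud-init override (a cloud-config part that sets the scripts-user module to run "always"), which is the configurable exception noted above.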
Authentication for API Gateway with built-in user management - which option is the best fit?
Cognito user pools: built-in user management, sign-up, integration with Google, Facebook, Twitter, Amazon, and Apple, SDKs, customisable UI.
Also provides SAML, MFA, security features, and user migration with Lambda triggers.
Incorrect - API Gateway Lambda authorizer: user management is not built in, dev work needed.
Incorrect - identity pools: credentials and pool tokens to access AWS services; they exchange user pool tokens for AWS credentials and are not an authentication mechanism by themselves.
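A sketch of wiring a user pool to a REST API as an authorizer; the API ID and user pool ARN are placeholders:

```python
import boto3

apigateway = boto3.client("apigateway")

# Methods configured with this authorizer accept ID tokens from the
# user pool in the Authorization header.
apigateway.create_authorizer(
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    name="cognito-user-pool-auth",
    type="COGNITO_USER_POOLS",
    providerARNs=[
        "arn:aws:cognito-idp:us-east-1:111111111111:userpool/us-east-1_PLACEHOLDER"
    ],
    identitySource="method.request.header.Authorization",
)
```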
RDS read replica encryption - true/false options:
True - if the master DB is encrypted, the read replicas are encrypted.
The other options are wrong (master encrypted with unencrypted replicas, master unencrypted with encrypted replicas, etc.): a replica's encryption state always matches the master's.
A social media app on an EC2 fleet behind an Application Load Balancer and CloudFront. Decouple user auth from the app. Option with minimal dev effort?
Correct - Cognito authentication with user pools on the ALB.
Incorrect - Cognito user pools with CloudFront.
Incorrect - Cognito identity pools with CloudFront or the ALB.
note: a user pool is authentication - sign-in directly or with an identity provider.
An identity pool provides temporary credentials (tokens) to access AWS services.
User pools do not integrate with CloudFront unless you use Lambda@Edge.
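A sketch of the ALB listener rule that performs the authentication; all ARNs and the domain prefix are placeholders, and the listener must be HTTPS:

```python
import boto3

elbv2 = boto3.client("elbv2")

# The ALB redirects unauthenticated users to the Cognito hosted UI,
# then forwards authenticated requests to the target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:"
                "listener/app/my-alb/abc/def",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[
        {
            "Type": "authenticate-cognito",
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:us-east-1:111111111111:"
                               "userpool/us-east-1_PLACEHOLDER",
                "UserPoolClientId": "placeholder-client-id",
                "UserPoolDomain": "my-app-auth",  # hosted UI domain prefix
            },
        },
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "111111111111:targetgroup/web/abc123",
        },
    ],
)
```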
A mobile gaming app uses RDS MySQL; urgent issue: available storage space is low. Minimum dev effort option?
Storage autoscaling for MySQL RDS (or any RDS engine). It triggers when:
free storage is below 10 percent of allocated storage
the condition lasts for at least 5 minutes
at least 6 hours have passed since the last storage modification.
The other options were silly.
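Enabling it is a single parameter; a sketch with boto3 (the instance name and ceiling are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage turns on storage autoscaling; RDS grows
# the volume automatically up to this ceiling when the triggers above fire.
rds.modify_db_instance(
    DBInstanceIdentifier="gaming-mysql",  # placeholder instance name
    MaxAllocatedStorage=1000,             # ceiling in GiB
    ApplyImmediately=True,
)
```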
EBS volume attached to EC2 - memorise what happens when the EC2 instance terminates:
The EBS default config is to delete the root volume on termination of the EC2 instance (DeleteOnTermination = true); additional attached volumes default to false.
to change this -
DeleteOnTermination = false
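A sketch of flipping that flag on a running instance; the instance ID and root device name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root volume around after the instance terminates.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",     # placeholder root device name
        "Ebs": {"DeleteOnTermination": False},
    }],
)
```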