Practice Test 5 Flashcards
What is Redshift Spectrum?
Query and retrieve structured and semi-structured data from files in Amazon S3, without loading it into Redshift tables.
Spectrum is a separate compute layer from the Redshift cluster; Spectrum queries use less of the cluster's processing capacity than other queries.
SELECT COUNT(*) FROM s3.ext_table GROUP BY ...;
What does WAF get deployed with / on?
CloudFront
Application Load Balancer
API Gateway
Protects against SQL injection and cross-site scripting, and supports filtering on patterns and IP addresses.
How do you change the launch config of an ASG?
Create a new launch configuration with the correct instance type, AMI, key pair, security groups, and block device mapping.
Modify the ASG to use the new launch configuration.
Delete the old launch configuration.
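A minimal boto3 sketch of the three steps (all the names here are hypothetical):

```python
import boto3

asg = boto3.client("autoscaling")

# 1. Create a new launch configuration with the corrected settings.
asg.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    KeyName="web-key",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# 2. Point the ASG at the new launch configuration.
asg.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v2",
)

# 3. Delete the old launch configuration once nothing references it.
asg.delete_launch_configuration(LaunchConfigurationName="web-lc-v1")
```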
A company needs to analyse customer service calls for sentiment via SQL queries. Which services should be used? Options:
Correct: use Transcribe to convert the audio files to text, then Athena to query for customer sentiment. The diagram shown in the answer also includes SQS and Lambda to convert the files into CSV; pretty cheeky not to mention that, because the raw transcript files would not be SQL-compatible.
Incorrect: Transcribe and QuickSight to analyse (my selection). The problem is that QuickSight is for dashboards, visuals, and graphs; it does not do this kind of data analysis.
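A hedged boto3 sketch of the transcription step (bucket and job names are made up); Athena would then query the converted output:

```python
import boto3

transcribe = boto3.client("transcribe")

# Kick off a transcription job for one recorded call.
transcribe.start_transcription_job(
    TranscriptionJobName="call-0001",
    Media={"MediaFileUri": "s3://call-recordings/call-0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
    OutputBucketName="call-transcripts",  # the JSON transcript lands here
)
# A Lambda would then flatten the JSON transcript to CSV so Athena can query it.
```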
Aurora MySQL DB cluster. The team wants access to test databases recreated from production data, quickly and with the least effort.
Correct: use database cloning to create multiple clones of production; each clone becomes a test DB.
* Cloning is very fast because the clone points to the same storage blocks as the production DB, so it is faster than restoring a snapshot.
* A clone uses the copy-on-write protocol: data is copied only when it changes, creating new pages in the source or target clone as it moves to a new state.
* Cannot clone across Regions.
* Limit of 15 clones based on a single copy.
Incorrect: take a backup of Aurora MySQL using mysqldump, create new DBs, and restore from the backup.
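A hedged boto3 sketch of creating one clone (cluster identifiers are made up); RestoreType="copy-on-write" is what makes it a clone rather than a full restore:

```python
import boto3

rds = boto3.client("rds")

# Clone the production cluster; copy-on-write shares the same storage blocks.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="test-clone-1",
    SourceDBClusterIdentifier="prod-aurora-mysql",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
# Note: this creates only the cluster; DB instances are added to it separately.
```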
Manual failover process with an ALB and Route 53 pointing to a secondary ALB in a different AZ. How to improve?
Enable Route 53 health checks for the individual ELB nodes in each AZ; Route 53 then automatically routes traffic away from an unhealthy ALB and AZ.
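A hedged boto3 sketch (domain names and IDs are made up): create a health check, then attach it to a failover record:

```python
import boto3

r53 = boto3.client("route53")

# Health check against the primary ALB's endpoint.
hc = r53.create_health_check(
    CallerReference="primary-alb-hc-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Attach it to the PRIMARY failover record; Route 53 shifts traffic to the
# SECONDARY record (not shown) when the check goes unhealthy.
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "primary-alb.example.com"}],
            "HealthCheckId": hc["HealthCheck"]["Id"],
        },
    }]},
)
```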
Web app for ambulance management. The workload can be handled by 2 EC2 instances and can peak to 6 under traffic. There are 3 AZs (be careful). Which option is the best fit?
Incorrect: min capacity of 2 with 1 instance in each AZ, max 6 (my selection).
* The problem with this is that if an AZ goes down, only 1 instance remains, below the required minimum of 2.
* The question is misleading in mentioning 3 AZs in the first place; be careful. With HA planning you must be able to lose a zone and still meet the minimum spec with what remains.
Correct: min capacity of 4 with 2 instances in each AZ, max 6.
Mobile apps capture and send data to Kinesis Data Streams and get a ProvisionedThroughputExceededException; messages are sent one by one at a high rate. Keep costs to a minimum. Options to fix:
Incorrect: increase the number of shards ("a short-term fix, but increased cost").
Correct: use batch messages to increase throughput and reduce overhead; batching can use parallel HTTP requests, at no extra cost (sketch below).
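A minimal sketch of batching with PutRecords (stream name and payloads are hypothetical):

```python
import json
import boto3

kinesis = boto3.client("kinesis")

events = [{"device": f"phone-{i}", "reading": i} for i in range(100)]

# One PutRecords call sends up to 500 records, instead of 100 PutRecord calls.
response = kinesis.put_records(
    StreamName="mobile-telemetry",
    Records=[
        {"Data": json.dumps(e).encode(), "PartitionKey": e["device"]}
        for e in events
    ],
)
# Records can fail individually; retry just the failed ones.
print("failed:", response["FailedRecordCount"])
```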
Healthcare: a HIPAA-compliant in-memory database that supports caching the results of SQL queries. Options:
Correct: ElastiCache for Redis/Memcached.
* Redis only is HIPAA compliant and PCI compliant.
* No mention of SQL queries, but read the question carefully: the cache is often used with SQL databases to cache query results (cache-aside sketch below).
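A cache-aside sketch with redis-py (endpoint, key, and query are made up); the application caches SQL results itself rather than Redis speaking SQL:

```python
import json
import redis

r = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_patient_count(run_sql_query):
    """Return a cached SQL result, falling back to the database on a miss."""
    cached = r.get("query:patient_count")
    if cached is not None:
        return json.loads(cached)
    result = run_sql_query("SELECT COUNT(*) FROM patients")
    r.setex("query:patient_count", 300, json.dumps(result))  # 5-minute TTL
    return result
```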
A company has microservices with 4 example URLs and some subdomains, and wants to use one load balancer to route requests.
Correct: an Application Load Balancer can route based on (rule sketch below):
* Host-based routing (domains, wildcards, subdomains)
* Path-based routing
* HTTP header
* HTTP method
* Query string parameter
* Source IP address or CIDR range
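A hedged boto3 sketch of one routing rule combining host- and path-based conditions (the ARNs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route api.example.com/orders/* to the orders microservice's target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/...",
    Priority=10,
    Conditions=[
        {"Field": "host-header", "HostHeaderConfig": {"Values": ["api.example.com"]}},
        {"Field": "path-pattern", "PathPatternConfig": {"Values": ["/orders/*"]}},
    ],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/orders/...",
    }],
)
```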
A company wants to use the Web Application Firewall (WAF) to protect EC2. How can WAF be used?
Incorrect: WAF can be directly configured only on an ALB or API Gateway, then from there to EC2.
* The problem is the word "only"; trick question. @$#%
Correct: deploy CloudFront in front of EC2, with WAF on CloudFront.
* WAF is tightly integrated with CloudFront, ALB, and API Gateway; the problem was the wording of the ALB / API Gateway option.
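A hedged boto3 sketch of creating a CloudFront-scoped web ACL (names are made up); its ARN is then set as the distribution's web ACL:

```python
import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ec2-frontend-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[],  # SQL injection / XSS managed rule groups would be added here
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ec2FrontendAcl",
    },
)
# The returned ARN goes into the CloudFront distribution's web ACL setting.
print(acl["Summary"]["ARN"])
```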
Health records on S3; an archival solution based on Glacier to enforce regulatory and compliance controls on data access. Options:
Correct: an S3 Glacier vault to store the sensitive archived data, then a Vault Lock policy to enforce compliance.
* A vault is a Glacier concept separate from Vault Lock; it seems to be like a bucket.
* Vault Lock is specifically for compliance controls.
* A Vault Lock policy is write once, read many: once locked, it cannot be changed.
Incorrect: Glacier to store … data, then use an S3 access control list to enforce compliance.
* ACLs control access to buckets and objects but cannot be used to enforce compliance!! Not sure I agree.
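A hedged boto3 sketch of the two-step Vault Lock workflow (vault name, account, and policy are illustrative); after complete_vault_lock the policy is immutable:

```python
import json
import boto3

glacier = boto3.client("glacier")

# A deny-deletes policy for records younger than a year (illustrative only).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-early-delete",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/health-records",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
    }],
}

# Step 1: attach the policy; the lock is in-progress for 24 hours.
lock = glacier.initiate_vault_lock(
    accountId="-",
    vaultName="health-records",
    policy={"Policy": json.dumps(policy)},
)

# Step 2: confirm with the lock ID; after this the policy cannot be changed.
glacier.complete_vault_lock(
    accountId="-", vaultName="health-records", lockId=lock["lockId"]
)
```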
Amazon S3 read-after-write consistency
After a successful write of a new object, or a change to an existing object, a subsequent read gets the latest version.
* Strong consistency for LIST operations.
* All operations are strongly consistent: GET, PUT, LIST, object tags, ACLs, metadata.
DynamoDB table encryption
AWS encrypts all DynamoDB tables; this is not negotiable.
* By default, uses an AWS owned CMK (customer master key).
* AWS owned keys don't write to CloudTrail logs; you cannot view, track, or audit them.
* Managed by AWS; no input required.
A company has video streaming using an ALB routing to EC2. The ALB removes an instance from the pool of healthy instances when it detects it is unhealthy, but the ASG fails to provision the replacement instance. How do you explain this?
Correct: the ASG is using the EC2-based health check and the ALB is using the ALB-based health check.
* The ALB has a built-in health check; it can mark an instance unhealthy because its ping to the application gets no response.
* The ASG's EC2 status check can still pass, because the instance itself responds.
* In this case the ALB removes the instance, but the ASG won't provision a new one.
* Bloody confusing; memorise it, don't try to fully understand.
Incorrect: the ASG is using the ALB health check, and the ALB is using the EC2 health check.
* An ALB cannot use EC2-based health checks.
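The usual fix, as a hedged boto3 sketch (ASG name is made up): switch the ASG to the ELB health check so both layers agree:

```python
import boto3

asg = boto3.client("autoscaling")

# Make the ASG honour the ALB's target health, not just EC2 status checks,
# so instances the ALB marks unhealthy actually get replaced.
asg.update_auto_scaling_group(
    AutoScalingGroupName="video-streaming-asg",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,  # seconds to wait before the first check
)
```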
Cross-zone load balancing for ALB vs NLB
By default, cross-zone load balancing is:
* Disabled for NLB
* Enabled for ALB
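A hedged boto3 sketch of turning cross-zone load balancing on for an NLB (the ARN is a placeholder):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Cross-zone is off by default for NLBs; it is a load-balancer attribute.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/...",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```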
Direct Connect encryption: true or false?
False. Direct Connect does not encrypt; it is a private connection.
Encryption in transit is not natively supported by Direct Connect; it has to be layered on top, e.g. by running an IPsec VPN over the connection.
A company wants caching with geospatial support to cache a relational database. Options:
Correct: ElastiCache for Redis. Redis has richer features than Memcached, and geospatial support is one of them.
* Purpose-built commands for working with real-time geospatial data at scale.
* Find the distance between 2 elements, or all elements within a radius (sketch below).
More Redis features not in Memcached:
* Snapshots
* Replication
* Transactions
* Pub/sub
* Health data compliant (HIPAA)
Memcached has a multithreaded architecture that Redis does not.
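A hedged redis-py sketch of the geospatial commands (the key, coordinates, and redis-py 4.x geoadd signature are assumptions):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# GEOADD: store members as (longitude, latitude, name) triples.
r.geoadd("ambulances", (151.2093, -33.8688, "unit-1"))
r.geoadd("ambulances", (151.2200, -33.8600, "unit-2"))

# GEODIST: distance between two members.
print(r.geodist("ambulances", "unit-1", "unit-2", unit="km"))

# GEORADIUS: all members within 5 km of a point.
print(r.georadius("ambulances", 151.21, -33.86, 5, unit="km"))
```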
Optimise resources across countries and Regions: best practices, cost optimisation, performance, security.
Correct: Trusted Advisor
* Cost optimisation
* Performance, security, fault tolerance, service limits
Incorrect: Systems Manager, AWS Config.
Trying to attach an EBS volume in a different AZ; what's the problem?
Correct: EBS volumes are locked to a single AZ. To move one, snapshot it and create a new volume from the snapshot in the target AZ.
Spread placement group of 15 EC2 instances: how many AZs are required?
Max 7 running instances per AZ per spread placement group, so 15 / 7 = 2 remainder 1, i.e. 3 AZs.
What is the URL format for an S3 bucket website endpoint?
bucket-name.s3-website.Region.amazonaws.com
or, in some Regions, bucket-name.s3-website-Region.amazonaws.com (a dash before the Region instead of a dot).
What can the AdministratorAccess policy do / not do?
Can do:
* Delete an S3 bucket from prod
* Change the password of its own IAM account
* Delete the IAM user of a manager
Cannot:
* Close the company's AWS account (needs root)
* Configure an S3 bucket to enable MFA Delete (needs root)
Encryption of object metadata in S3: is it possible?
False. Server-side encryption encrypts only the object data, not the object metadata.
Encryption facts about EBS
I think this assumes the volume is encrypted in the first place; then:
* Data at rest is encrypted
* Snapshots are encrypted
* Data in transit between the volume and the instance is encrypted
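A minimal boto3 sketch of creating an encrypted volume (AZ and sizing are illustrative); snapshots taken from it inherit the encryption:

```python
import boto3

ec2 = boto3.client("ec2")

# Encrypted at creation; data at rest, snapshots, and data moving between
# this volume and its instance are all encrypted.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB
    VolumeType="gp3",
    Encrypted=True,    # omit KmsKeyId to use the default aws/ebs key
)
```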
A near-real-time solution to share hundreds of thousands of financial transactions with multiple internal applications. It should remove sensitive details from the transactions and store the cleansed transactions in a document DB for low-latency retrieval. How?
Incorrect:
* Stream transactions to Data Firehose
* Use Lambda to remove sensitive data
* Store in DynamoDB
* Internal apps can consume raw transactions off Firehose (this is the flaw: Firehose can only have one consumer)
* Firehose has a limited range of destinations: S3, Elasticsearch, Redshift
Correct:
* Stream transactions into Kinesis Data Streams
* Use the Lambda integration to remove sensitive data
* Store the cleansed transactions in DynamoDB
* Internal apps can consume raw transactions off the Kinesis data stream; it can have multiple consumers of many kinds, including Spark and the KCL (consumer sketch below)
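A hedged sketch of one extra consumer reading the same stream with plain boto3 (stream and shard names are illustrative); in practice the KCL or Spark would manage shards and checkpoints:

```python
import boto3

kinesis = boto3.client("kinesis")

# Each internal app can read the stream independently with its own iterator.
iterator = kinesis.get_shard_iterator(
    StreamName="raw-transactions",
    ShardId="shardId-000000000000",
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest available record
)["ShardIterator"]

while iterator:
    batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in batch["Records"]:
        print(record["Data"])          # raw transaction bytes
    iterator = batch["NextShardIterator"]
```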
What is the URL to get the public IP of an instance?
http://169.254.169.254/latest/meta-data/public-ipv4
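A sketch of calling that URL from Python with IMDSv2 (the token-based flow AWS now recommends):

```python
import requests

# IMDSv2: fetch a session token first, then use it to read metadata.
token = requests.put(
    "http://169.254.169.254/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text

public_ip = requests.get(
    "http://169.254.169.254/latest/meta-data/public-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
).text
print(public_ip)
```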
Is SQS allowed as an S3 event notification destination?
SQS standard queues are allowed.
SQS FIFO queues are not allowed.
Other supported event notification destinations:
* SNS
* Lambda
* EventBridge. The destinations above are configured for specific events (object created / deleted and related, metadata changed); EventBridge gets all the events if configured.
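A hedged boto3 sketch wiring object-created events to a standard SQS queue (bucket and queue ARN are made up; the queue's policy must also allow S3 to send):

```python
import boto3

s3 = boto3.client("s3")

# A FIFO queue ARN would be rejected here; only standard SQS is supported.
s3.put_bucket_notification_configuration(
    Bucket="health-records-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:s3-events",
            "Events": ["s3:ObjectCreated:*"],
        }],
    },
)
```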