Practice Test 4 Flashcards
A SaaS company's bank customer needs to allow-list two IP addresses when the bank accesses the company's external services across the internet. The solution must be highly available and support scaling to 10 instances. How?
incorrect: ALB plus ASG, because an ALB exposes a DNS name / domain rather than a fixed IP address; the addresses of its elastic network interfaces are not static and cannot be allow-listed.
correct: Network Load Balancer plus ASG. NLB is best for low-latency, high-throughput work, handling millions of requests per second. It operates at layer 4; targets can include EC2 instances, microservices, and containers.
NLBs expose one fixed (Elastic) IP per AZ to the public, which is what makes allow-listing possible.
NLB historically did not support security groups (support was added in 2023).
What is the only resource-based policy that IAM supports? plus more about policies.
correct: Trust Policy
* defines which principals (roles, users, accounts, federated users, services) can assume a role.
* you attach both a trust policy and an identity-based policy to an IAM role.
* A role trust policy is a required resource-based policy that is attached to a role in IAM.
incorrect: Organizations Service Control Policies (SCPs), applied to accounts or organizational units. They specify maximum permissions; effective permissions are the intersection of the SCP and the identity's IAM policies.
ACL - cross-account resource-based policies that control which principals in another account can access a resource. ACLs cannot grant permissions to principals in the same account.
Permissions boundary
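A minimal sketch of what a role trust policy looks like; the service principal here (EC2) is a placeholder assumption:

```python
import json

# A role trust policy is a resource-based policy attached to the role itself.
# It names the principals allowed to call sts:AssumeRole. The EC2 service
# principal below is a placeholder assumption.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# This JSON string is what you would pass as AssumeRolePolicyDocument
# when creating the role via the AWS CLI or an SDK.
print(json.dumps(trust_policy))
```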
More about roles vs resource based policy
When you assume a role you give up all your prior permissions and adopt the permissions of the role.
With a resource-based policy, such as a bucket policy, you keep your prior permissions.
Resource-based policies are used by S3, SNS, SQS, Lambda, CloudWatch, API Gateway, and more.
An IAM role (not a resource policy) is used by Kinesis Data Streams, Systems Manager Run Command, ECS tasks …
A website for evaluating coding skill uses a Redis ElastiCache cluster. How to improve security, leveraging a username and password?
correct: Redis AUTH (I got this right)
Redis AUTH tokens enable Redis to require a token (password) before clients can run commands.
ElastiCache also supports IAM auth.
Redis AUTH supports rotating the auth token.
distractors: Lambda resource policy, security group.
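A sketch of the parameters you would pass to the ElastiCache `modify_replication_group` API to rotate an AUTH token; the replication group ID and token are placeholder assumptions, and the dict is only built here, not sent to AWS:

```python
# Sketch of rotating a Redis AUTH token via ElastiCache.
# ReplicationGroupId and AuthToken below are hypothetical values.
rotate_params = {
    "ReplicationGroupId": "my-redis-group",      # hypothetical group ID
    "AuthToken": "new-strong-token-1234567890",  # 16-128 printable characters
    "AuthTokenUpdateStrategy": "ROTATE",         # old token stays valid during rotation
    "ApplyImmediately": True,
}
# Real call: boto3.client("elasticache").modify_replication_group(**rotate_params)
```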
Clickstream data with real-time analytics required, without loss when there are traffic spikes. What architecture solution would you recommend?
Correct, and answered correctly:
* Kinesis data streams to capture
* Feed into Kinesis data analytics for real time processing
* Output to Kinesis Data Firehose (yes, it can accept Analytics as a source) and store in S3
distractor: Data Streams > Firehose > S3 > Athena to analyse, which is not real time.
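The capture step above can be sketched as the parameters a producer passes to Kinesis Data Streams `PutRecord`; stream name and payload are placeholder assumptions:

```python
import json

# Sketch of a clickstream producer's PutRecord parameters.
# Stream name and event fields are hypothetical.
click_event = {"user_id": "u-123", "page": "/pricing", "ts": 1700000000}
put_record_params = {
    "StreamName": "clickstream",                      # hypothetical stream
    "Data": json.dumps(click_event).encode("utf-8"),  # payload as bytes
    "PartitionKey": click_event["user_id"],           # determines the shard
}
# Real call: boto3.client("kinesis").put_record(**put_record_params)
```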
ALB plus ASG and fleet of EC2.
ALB is in subnet 10.0.1.0/24 and
ASG in subnet 10.0.4.0/22
How would you configure the security group of the EC2 instances to allow incoming traffic from the ALB?
incorrect: add a rule to authorise CIDR 10.0.1.0/24 (this would work but would not guarantee that only the ALB can access the instances).
correct: add a rule to the EC2 SG that authorises the security group of the ALB. Yes, an ALB can have a security group.
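A sketch of the ingress rule that references the ALB's security group instead of a CIDR; both group IDs are placeholder assumptions, and the dict is only built, not sent to AWS:

```python
# Sketch of an EC2 security-group ingress rule that authorises the ALB's SG.
# Both group IDs below are hypothetical.
ingress_params = {
    "GroupId": "sg-0ec2fleet",  # hypothetical SG of the EC2 instances
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Referencing the ALB's SG admits only traffic from the ALB,
            # unlike a 10.0.1.0/24 CIDR rule which admits anything in that subnet.
            "UserIdGroupPairs": [{"GroupId": "sg-0alb"}],
        }
    ],
}
# Real call: boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
```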
A customer-facing web app on EC2 web servers with RDS PostgreSQL, which is in a private subnet that allows inbound traffic from selected EC2 instances. The DB uses KMS encryption at rest. How to facilitate secure access to the DB?
correct: configure RDS to use SSL for data in transit. Every RDS instance has an SSL certificate; use the `ssl-ca` connection parameter and reference the RDS CA public certificate. SSL can be forced on all connections. (answered correctly)
incorrect: IAM authentication to access the DB instead of user credentials. This does work with MySQL and PostgreSQL: no password needed, only an auth token. "It would not significantly enhance the security as much as SSL."
* this is the correct answer in a question specifically about DB auth
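For PostgreSQL the standard libpq parameters are `sslmode` and `sslrootcert`; a sketch of a connection string that forces TLS and verifies the server against the RDS CA bundle (host, database, and bundle path are placeholder assumptions):

```python
# Sketch of a libpq DSN enforcing TLS with certificate verification.
# The hostname, database, user, and CA-bundle path are hypothetical.
dsn = (
    "host=mydb.abc123.eu-west-1.rds.amazonaws.com "
    "dbname=app user=app_user "
    "sslmode=verify-full "                  # require TLS and verify the hostname
    "sslrootcert=/certs/rds-ca-bundle.pem"  # downloaded RDS CA certificate bundle
)
# A client such as psycopg2 would accept this DSN:
# psycopg2.connect(dsn)
print(dsn)
```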
A big-data analytics company writes data and log files to S3 buckets. Now they want to stream the existing data "files" and ongoing file updates from S3 to Kinesis Data Streams (what happens next is irrelevant). What is the fastest possible way of building a solution?
Incorrect: S3 event notification to trigger Lambda on the file-create event; Lambda then sends the data to Data Streams. Why? It would require significant development effort to write the data into Kinesis Data Streams. Bad fit.
Correct: Database Migration Service as a bridge between S3 and Kinesis data streams.
* DMS can have S3 as a source
* No code needs to be written with DMS, no complex config.
* DMS can do real time updates (change data capture) from S3 into KDS after the initial migration
* DMS can also stream into Amazon Managed Streaming for Apache Kafka (MSK).
* DMS can scale up and down with the workload
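The S3-to-Kinesis bridge above can be sketched as the two DMS endpoint definitions; bucket, stream ARN, and role ARNs are placeholder assumptions, and the dicts are only built here:

```python
# Sketch of DMS source (S3) and target (Kinesis) endpoint parameters.
# All names and ARNs below are hypothetical.
source_endpoint = {
    "EndpointIdentifier": "s3-source",
    "EndpointType": "source",
    "EngineName": "s3",
    "S3Settings": {
        "BucketName": "analytics-logs",
        "ServiceAccessRoleArn": "arn:aws:iam::111122223333:role/dms-s3",
        "ExternalTableDefinition": "{}",  # describes the structure of the files
    },
}
target_endpoint = {
    "EndpointIdentifier": "kds-target",
    "EndpointType": "target",
    "EngineName": "kinesis",
    "KinesisSettings": {
        "StreamArn": "arn:aws:kinesis:eu-west-1:111122223333:stream/clicks",
        "MessageFormat": "json",
        "ServiceAccessRoleArn": "arn:aws:iam::111122223333:role/dms-kds",
    },
}
# Real calls: boto3.client("dms").create_endpoint(**source_endpoint) etc.,
# then a replication task with migration type full-load-and-cdc for the
# initial files plus ongoing updates.
```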
A health company stores data in S3 under regulatory guidelines. Data cannot be deleted until the regulatory time period has expired. What solution?
incorrect: S3 Glacier Vault Lock. "Since Vault Lock is only for Glacier and not for S3, it cannot be used." !!! Bad question, unbelievable.
Correct: S3 Object Lock. Memorise this. (This was my first choice, which I revised.) Within Object Lock there is:
* Legal Hold
* Retention period setting, which has the following 2 options:
* Governance mode - restrict certain users, allow others.
* Compliance mode - no changes by any user, including root, until the retention period has passed.
* Write once, read many (WORM) model.
I got confused because there was no mention of compliance mode. Be careful. Also, parts of the Object Lock option were described incorrectly, so this option is ambiguous.
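A sketch of uploading an object under compliance-mode retention; bucket, key, and retention date are placeholder assumptions, and the bucket would need Object Lock enabled at creation time:

```python
from datetime import datetime, timezone

# Sketch of a put_object call using Object Lock compliance mode.
# Bucket, key, and retention date below are hypothetical.
put_params = {
    "Bucket": "regulated-records",     # must have Object Lock enabled
    "Key": "claims/2024/record.json",
    "Body": b"{}",
    "ObjectLockMode": "COMPLIANCE",    # no user, including root, can delete early
    "ObjectLockRetainUntilDate": datetime(2031, 1, 1, tzinfo=timezone.utc),
}
# Real call: boto3.client("s3").put_object(**put_params)
```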
A company uses ElastiCache Redis and wants a robust DR strategy for the caching layer that guarantees minimal downtime and data loss, and good app performance. Which solution (assuming the question refers to Redis ElastiCache)?
correct: Multi AZ config with auto failover functionality. correctly answered.
* Low data loss potential
* Low perf impact
* Low to high cost, considering the cost of failure (this is confusing, careful).
* "ElastiCache Cluster" is a term used for this config.
Incorrect: add read replicas across multiple AZs. ElastiCache allows up to 5 read replicas across AZs. They take read traffic off the primary, but are not positioned as a fault-tolerant solution.
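The correct option can be sketched as the parameters for creating a Multi-AZ replication group with automatic failover; the group ID and sizing are placeholder assumptions:

```python
# Sketch of a Multi-AZ Redis replication group with automatic failover.
# The ID, node type, and cluster count below are hypothetical.
rg_params = {
    "ReplicationGroupId": "cache-prod",
    "ReplicationGroupDescription": "Multi-AZ cache with auto failover",
    "Engine": "redis",
    "CacheNodeType": "cache.t3.medium",
    "NumCacheClusters": 2,              # one primary plus one replica
    "AutomaticFailoverEnabled": True,   # promote a replica if the primary fails
    "MultiAZEnabled": True,             # place the replica in a different AZ
}
# Real call: boto3.client("elasticache").create_replication_group(**rg_params)
```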
What is AWS Trusted Advisor?
An online tool with real-time guidance to help you provision resources following best practices: for workflows, apps, recommendations, optimisation. It does not provide reusable infra templates the way CloudFormation does.
What services support VPC Endpoints
VPC Gateway Endpoints support S3 and DynamoDB.
* A gateway endpoint is the target for a route in a route table, for traffic destined for S3/DynamoDB.
VPC Interface Endpoints
* An elastic network interface with a private IP from the range of your subnet, which serves as an entry point for the supported service.
* Most other services support interface endpoints.
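A sketch of creating a gateway endpoint for S3 (DynamoDB is analogous); VPC ID, route table ID, and region are placeholder assumptions:

```python
# Sketch of a gateway VPC endpoint for S3. IDs and region are hypothetical.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0abc",
    "ServiceName": "com.amazonaws.eu-west-1.s3",  # region-scoped service name
    "RouteTableIds": ["rtb-0def"],                # route to S3 goes via the endpoint
}
# Real call: boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
# An interface endpoint would instead use VpcEndpointType="Interface"
# with SubnetIds and SecurityGroupIds.
```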
A company uses a DynamoDB table that is not used during night hours; during the day, r/w traffic is unpredictable and spikes can happen quickly. Options for capacity modes:
Setup DynamoDB with:
* global table in provisioned capacity mode
* provisioned capacity mode with auto scaling (this is real, auto scaling refers to provisioned r/w and table capacity)
* on-demand capacity mode (correct, selected by me, all others incorrect)
* global secondary index
On-demand is a flexible billing option for serving thousands of requests per second without capacity planning; you pay per request for reads and writes. Good for unknown workloads.
Provisioned: specify the number of reads/writes per second.
Can use auto scaling to adjust provisioned capacity.
Auto scaling applies to provisioned mode only (it is the default setting when creating a table in the console); on-demand scales automatically by design.
Auto scaling uses CloudWatch to monitor and trigger scaling.
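A sketch of switching an existing table to on-demand capacity; the table name is a placeholder assumption:

```python
# Sketch of moving a DynamoDB table to on-demand capacity mode.
# The table name below is hypothetical.
update_params = {
    "TableName": "sessions",
    "BillingMode": "PAY_PER_REQUEST",  # on-demand: pay per read/write request
}
# Real call: boto3.client("dynamodb").update_table(**update_params)
# Provisioned mode would instead use BillingMode="PROVISIONED" plus
# ProvisionedThroughput={"ReadCapacityUnits": ..., "WriteCapacityUnits": ...}.
```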
A company wants to connect VPCs and on-prem through a central hub. Solution with least operational overhead.
correct: Transit Gateway. Connects VPCs and on-prem with a single gateway; you manage a single connection from the central gateway to each VPC, on-prem data centre, or remote site. Acts as a hub. (I got this answer.)
incorrect: Transit VPC, which is a real thing (not in my notes). Connects VPCs and VPNs using EC2 routers and NAT; you must manually manage the VPNs, so higher complexity.
A company wants to migrate an on-prem app to AWS, with app servers and MS SQL Server. They need maximum possible availability of the DB while minimising operational and management overhead.
incorrect: RDS SQL Server in a cross-region Multi-AZ deployment. No such thing.
correct: RDS SQL Server in a Multi-AZ deployment.
* DB mirroring
* Always On availability groups
* RDS monitors and maintains health, auto repair, auto failover.
* General Purpose SSD or Provisioned IOPS SSD.
* Auto backups and DB snapshots.
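A sketch of the Multi-AZ setting on an RDS SQL Server instance; identifier, class, and storage values are placeholder assumptions, and the dict is only built here:

```python
# Sketch of a Multi-AZ RDS SQL Server instance definition.
# Identifier, instance class, and storage below are hypothetical.
db_params = {
    "DBInstanceIdentifier": "mssql-prod",
    "Engine": "sqlserver-se",      # SQL Server Standard Edition
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MultiAZ": True,               # synchronous standby with automatic failover
    "StorageType": "gp3",          # general purpose SSD
}
# Real call: boto3.client("rds").create_db_instance(**db_params)
```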
A web app with multiple domains is moving to microservices, using the same load balancer, linked to target groups by URL:
checkout.mycorp.com
www.mycorp.com
yourcorp/profile
yourcorp/search
all of these need to be HTTPS endpoints.
Options for assigning the correct cert to each domain with minimal config effort:
incorrect: SSL wildcard cert; change the ELB SSL policy.
correct: Use SSL certificates with SNI (Server Name Indication)
Today we’re launching support for multiple TLS/SSL certificates on Application Load Balancers (ALB) using Server Name Indication (SNI). You can now host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure (SSL) listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These new features are provided at no additional charge.
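Binding multiple certs to one listener can be sketched with the ELBv2 `add_listener_certificates` parameters; the listener and certificate ARNs are placeholder assumptions:

```python
# Sketch of attaching additional ACM certificates to one HTTPS listener,
# letting the ALB use SNI to pick the right cert per hostname.
# All ARNs below are hypothetical.
sni_params = {
    "ListenerArn": (
        "arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
        "listener/app/my-alb/abc123/def456"
    ),
    "Certificates": [
        {"CertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/checkout"},
        {"CertificateArn": "arn:aws:acm:eu-west-1:111122223333:certificate/www"},
    ],
}
# Real call: boto3.client("elbv2").add_listener_certificates(**sni_params)
```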