AWS Dev Associate Flashcards
Pass the AWS cert exam!
You plan on running an open-source MongoDB database year-round on EC2. Which instance launch mode should you choose?
- on-demand
- reserved instances
- spot instances
Reserved Instances
You are launching an EC2 instance in us-east-1 using this Python script snippet:
> ec2.create_instances(ImageId='ami-b23a5e7', MinCount=1, MaxCount=1)
It works well, so you decide to deploy your script in us-west-1 as well. There, the script does not work and fails with “ami not found” error. What’s the problem?
- AMI is region locked and the same ID cannot be used across regions
- The AMI needs to first be shared to another region. The same ID can then be used
AMI is region locked and the same ID cannot be used across regions
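A minimal boto3 sketch (not part of the original card; the AMI name and IDs are placeholders) of the usual fix: copy the AMI into the target region and launch with the new, region-specific ID.

```python
import boto3

# Client in the destination region (us-west-1)
ec2_west = boto3.client("ec2", region_name="us-west-1")

copy = ec2_west.copy_image(
    Name="my-app-ami-copy",        # name for the new AMI in us-west-1
    SourceImageId="ami-b23a5e7",   # the original AMI ID from us-east-1
    SourceRegion="us-east-1",
)
print(copy["ImageId"])  # new AMI ID to use in the us-west-1 script
```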
You would like to deploy a database technology and the vendor license bills you based on the physical cores and underlying network socket visibility. Which EC2 launch modes allow you to get visibility into them?
- spot ins
- dedicated hosts
- on-demand
Dedicated Hosts
You are running a critical workload of three hours per week, on Monday. As a solutions architect, which EC2 Instance Launch Type should you choose to maximize the cost savings while ensuring the application stability?
- on-demand ins
- reserved ins
- spot ins
- scheduled reserved ins
Scheduled Reserved Instances
Scalability vs High Availability
Scalability: the app can handle greater load by adapting, either vertically or horizontally (elasticity)
High Availability: running the app in at least 2 data centers, so it can survive the loss of a data center
Load Balancers provide a
- static IPv4 we can use in our application
- static DNS name we can use in our app
- static IPv6 we can use in our app
static DNS name we can use in our app - ELB chapter
You are running a website with a load balancer and 10 EC2 instances. Your users are complaining about the fact that your website always asks them to re-authenticate when they switch pages. You are puzzled, because it’s working just fine on your machine and in the dev environment with 1 server. What could be the reason?
- the app must have a bug
- the load balancer does not have stickiness enabled
- the EC2 Instances log out users b/c they don’t see their true IPs
the load balancer does not have stickiness enabled
Your application is using an Application Load Balancer. It turns out your application only sees traffic coming from private IP which are in fact your load balancer’s. What should you do to find the true IP of the clients connected to your website?
- Modify the frontend of the website so that users send their IP in the requests
- Look into the X-Forwarded-For header in the backend
- Look into the X-Forwarded-Proto header in the backend
Look into the X-Forwarded-For header in the backend
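A minimal sketch of reading that header in the backend, assuming a Flask app (the framework is an assumption, not from the card):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def whoami():
    # X-Forwarded-For contains "client, proxy1, proxy2, ..."; the first entry is the real client
    forwarded_for = request.headers.get("X-Forwarded-For", "")
    client_ip = forwarded_for.split(",")[0].strip() if forwarded_for else request.remote_addr
    return f"client ip: {client_ip}"
```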
You quickly created an ELB and it turns out your users are complaining about the fact that sometimes, the servers just don’t work. You realise that indeed, your servers do crash from time to time. How to protect your users from seeing these crashes?
- enable Stickiness
- enable Health Checks
- enable SSL Termination
enable Health Checks - CLB hands on
You are designing a high performance application that will require millions of connections to be handled, as well as low latency. The best Load Balancer for this is
- ALB
- CLB
- NLB
network load balancer
Application Load Balancers handle all these protocols except
- HTTP
- HTTPS
- Websocket
- TCP
TCP - ELB overview
The Application Load Balancer can route to different target groups based on all of these except…
- Hostname
- Request Path
- Client IP
Client IP
You are running at desired capacity of 3 and the maximum capacity of 3. You have alarms set at 60% CPU to scale out your application. Your application is now running at 80% capacity. What will happen?
- Nothing
- The desired capacity will go up to 4 and max will stay at 3
- The desired capacity will go up to 4 and max will stay at 4
nothing - auto scaling groups hands on
I have an ASG and an ALB, and I setup my ASG to get health status of instances thanks to my ALB. One instance has just been reported unhealthy. What will happen?
- the ASG will keep the instance running & restart the app
- the ASG will detach the EC2 instance from the group
- the ASG will terminate the EC2 instance
the ASG will terminate the EC2 instance
Your boss wants to scale your ASG based on the number of requests per minute your application makes to your database.
- it’s impossible
- create a CloudWatch custom metric & build alarm on this to scale your ASG
- enable detailed monitoring & use that to scale your ASG
create a CloudWatch custom metric & build alarm on this to scale your ASG
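A hedged sketch of publishing such a custom metric with boto3 (the namespace, metric name, and dimension are made up for illustration); a CloudWatch alarm on this metric would then drive the ASG scaling policy:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish the "requests per minute to the database" value as a custom metric
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "DatabaseRequestsPerMinute",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
        "Value": 1250.0,
        "Unit": "Count",
    }],
)
```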
Scaling an instance from an r4.large to an r4.4xlarge is called which type of scalability?
- horizontal
- vertical
vertical scalability
Running an application on an Auto Scaling Group that scales the number of instances in and out is called which type of scalability?
- horizontal
- vertical
horizontal scalability
You would like to expose a fixed static IP to your end-users for compliance purposes, so they can write firewall rules that will be stable and approved by regulators. Which Load Balancer should you use?
- ALB with Elastic IP attached to it
- NLB
- CLB
network load balancer
Network Load Balancers expose a public static IP, whereas an Application or Classic Load Balancer exposes a static DNS (URL)
A web application hosted in EC2 is managed by an ASG. You are exposing this application through an Application Load Balancer. The ALB is deployed on the VPC with the following CIDR: 192.168.0.0/18. How do you configure the EC2 instance security group to ensure only the ALB can access the port 80?
- open up the EC2 security group on port 80 to 0.0.0.0/0
- open up the EC2 security group on port 80 to 192.168.0.0/18
- open up the EC2 security group on port 80 to the ALB’s security group
open up the EC2 security group on port 80 to the ALB’s security group
This is the most secure way of ensuring only the ALB can access the EC2 instances. Referencing security groups in rules is an extremely powerful technique, and many exam questions rely on it.
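A sketch of that rule with boto3 (the security group IDs are placeholders): instead of a CIDR, the ingress rule references the ALB's security group.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789appsg",        # security group attached to the EC2 instances
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        # Reference the ALB's security group instead of an IP range
        "UserIdGroupPairs": [{"GroupId": "sg-0123456789albsg"}],
    }],
)
```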
Your application load balancer is hosting 3 target groups with hostnames being users.example.com, api.external.example.com, and checkout.example.com. You would like to expose HTTPS traffic for each of these hostnames. How do you configure your ALB SSL certificates to make this work?
- use SNI
- use a wildcard SSL certificate
- use an HTTP to HTTPS redirect rule
- use a security group SSL certificate
use SNI
SNI (Server Name Indication) is a feature allowing you to expose multiple SSL certs if the client supports it. Read more here: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
An Application Load Balancer's target groups can be all of these EXCEPT…
- EC2 Instances
- IP Addresses
- Lambda functions
- NLBs
NLBs (a Network Load Balancer cannot be a target of an ALB)
You are running an application in 3 AZ, with an Auto Scaling Group and a Classic Load Balancer. It seems that the traffic is not evenly distributed amongst all the backend EC2 instances, with some AZ being overloaded. Which feature should help distribute the traffic across all the available EC2 instances?
- stickiness
- cross zone load balancing
- target group routing rules
- HTTPS termination
cross zone load balancing - video #40
Your Application Load Balancer (ALB) currently is routing to two target groups, each of them is routed to based on hostname rules. You have been tasked with enabling HTTPS traffic for each hostname and have loaded the certificates onto the ALB. Which ALB feature will help it choose the right certificate for your clients?
- TLS Termination
- Server Name Indication (SNI)
- SSL Security Policies
- Host Header
Server Name Indication (SNI)
An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, the scaling of the Auto Scaling Group is done manually and you would like to define a scaling policy that will ensure the average number of connections to your EC2 instances is averaging at around 1000. Which scaling policy should you use?
- simple scaling policy
- step scaling policy
- target tracking
- scheduled scaling
target tracking
Your instance in us-east-1a just got terminated, and the attached EBS volume is now available. Your colleague tells you he can’t seem to attach it to your instance in us-east-1b.
- Missing IAM permissions
- EBS volumes are region locked
- EBS volumes are AZ locked
EBS volumes are AZ locked
EBS volumes are created in a specific AZ. They can be migrated to a different AZ by taking a snapshot and restoring it as a new volume in the target AZ
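A sketch of that snapshot-and-restore migration with boto3 (the volume ID and AZs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot the volume left behind in us-east-1a
snapshot = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Restore it as a new volume in the AZ of the target instance
new_volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1b",
)
print(new_volume["VolumeId"])  # this volume can now be attached in us-east-1b
```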
You would like to have the same data being accessible as an NFS drive cross AZ on all your EC2 instances. What do you recommend?
Mount an…
- EFS
- EBS
- Instance Store
EFS
EFS is a network file system (NFS) that lets you mount the same file system on EC2 instances in different AZs
You would like to have a high-performance cache for your application that mustn’t be shared. You don’t mind losing the cache upon termination of your instance. Which storage mechanism do you recommend as a Solution Architect?
- Instance Store
- EBS
- EFS
Instance Store
Instance Store provides the best disk performance
You are running a high-performance database that requires an IOPS of 210,000 for its underlying filesystem. What do you recommend?
- EBS gp2 drive
- EBS io1 drive
- EC2 Instance Store
- EFS
EC2 Instance Store
Is running a DB on an EC2 instance store possible? It is possible to run a database on EC2, and it is also possible to use the instance store, but there are some considerations to keep in mind: the data will be lost if the instance is stopped (a reboot is fine). You can set up a replication mechanism on another EC2 instance with an instance store to keep a standby copy, and you can add backup mechanisms on top. It's all about how you architect your setup to meet your requirements. In this case the requirement is IOPS, so we build replication and backups around the instance store
My company would like to have a MySQL database that is going to be available even in case of a disaster in the AWS Cloud. I should setup
- Read Replicas
- Encryption
- Multi AZ
Multi AZ
Our RDS database struggles to keep up with the demand of the users from our website. Our million users mostly read news, and we don’t post news very often. Which solution will NOT help fix this problem?
- ElastiCache cluster
- RDS Read Replicas
- RDS Multi AZ
RDS Multi AZ - video #54 RDS overview
We have setup read replicas on our RDS database, but our users are complaining that upon updating their social media posts, they do not see the update right away
- must be a bug in our app
- read replicas have async replication & therefore it’s likely users will only observe eventual consistency
- we should have setup multi-az instead
Read Replicas have async replication & therefore it’s likely users will only observe eventual consistency
Which RDS Classic (not Aurora) feature does not require us to change our SQL connection string?
- Read Replicas
- Multi AZ
Multi AZ
NOT read replicas: Read Replicas add new endpoints for databases to read from and therefore we must change our application to have the list of these endpoints in order to balance the read load and connect to the databases
You want to ensure your Redis cluster will always be available, enable…
- Read Replicas
- Multi AZ
Multi AZ
Your application functions on an ASG behind an ALB. Users have to constantly log back in and you’d rather not enable stickiness on your ALB as you fear it will overload some servers. What should you do?
- create your own load balancer & deploy that on EC2 instances
- store session data in RDS
- store session data in ElastiCache
- store session data in a shared EBS volume
store session data in ElastiCache
personal note: seems quickest :P
An analytics application is currently performing its queries against your main production database. These queries slow down the database, which impacts the main user experience. What should you do to improve the situation?
- set up a Read Replica
- set up Multi AZ
- run the analytics queries at night
- increase the RDS instance size
Set up a Read Replica
Read Replicas will help as our analytics application can now perform queries against it, and these queries won’t impact the main production database.
You have a requirement to use TDE (Transparent Data Encryption) on top of KMS. Which database technology does NOT support TDE on RDS?
- PostgreSQL
- Oracle
- MS SQL Server
PostgreSQL
Which RDS database technology does NOT support IAM authentication?
- Oracle
- PostgreSQL
- MySQL
Oracle
IAM database authentication is supported for MySQL and PostgreSQL on RDS, but not for Oracle
You would like to ensure you have a database available in another region if a disaster happens to your main region. Which database do you recommend?
- RDS w/ Read Replicas in another AZ
- RDS w/ multi AZ
- Aurora Read Replicas in another AZ
- Aurora Global DB
Aurora Global Database
Global Databases allow you to have cross region replication
You are managing a PostgreSQL database and for security reasons, you would like to ensure users are authenticated using short-lived credentials. What do you suggest doing?
- Install PostgreSQL on EC2 and install the pg_iam module. Authenticate using IAM username and password
- Use PostgreSQL for RDS and install the pg_iam module. Authenticate using IAM username and password
- Use PostgreSQL for RDS and authenticate using a token obtained through the RDS service
- Use PostgreSQL for RDS and force SSL connections. Authenticate using SSL certificates that you regularly rotate
Use PostgreSQL for RDS and authenticate using a token obtained through the RDS service
In this case, IAM is leveraged to obtain the RDS service token, so this is the IAM authentication use case.
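A sketch of that flow (the hostname, user name, and the psycopg2 driver are assumptions, not from the card): boto3's generate_db_auth_token returns a short-lived token that is then used as the database password.

```python
import boto3
import psycopg2

rds = boto3.client("rds", region_name="us-east-1")

# Short-lived credential issued by the RDS service (valid ~15 minutes)
token = rds.generate_db_auth_token(
    DBHostname="mydb.abcdefghij.us-east-1.rds.amazonaws.com",
    Port=5432,
    DBUsername="iam_db_user",
)

conn = psycopg2.connect(
    host="mydb.abcdefghij.us-east-1.rds.amazonaws.com",
    port=5432,
    user="iam_db_user",
    password=token,      # the token replaces a long-lived password
    sslmode="require",   # SSL is required for IAM authentication
)
```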
Your organisation wants to enforce SSL connections on your MySQL database
- Change your security group rules to only allow SSL traffic
- Download certificates and change your application to connect using SSL
- Apply a ‘REQUIRE SSL’ statement to all your users in your SQL DB
- Enable RDS encryption
Apply a ‘REQUIRE SSL’ statement to all your users in your SQL DB - video #54 RDS overview
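A sketch of what enforcing SSL per user could look like (the pymysql driver, endpoint, and user names are assumptions, not from the card):

```python
import pymysql

# Connect as an admin user to the RDS MySQL instance (placeholder credentials)
conn = pymysql.connect(
    host="mydb.abcdefghij.us-east-1.rds.amazonaws.com",
    user="admin",
    password="...",
    database="mysql",
)
with conn.cursor() as cur:
    # Force this user to connect over SSL from now on
    cur.execute("ALTER USER 'appuser'@'%' REQUIRE SSL;")
conn.commit()
```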
You are implementing a caching strategy with ElastiCache and would like to ensure that only the data that is often requested will be loaded in ElastiCache, as your cache size is small. Which caching strategy should you implement?
- lazy loading
- write through
- TTL
lazy loading
Lazy Loading would only cache data that is actively requested from the database
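A sketch of the lazy loading pattern (the redis-py client, endpoint, and fetch_from_db helper are assumptions / hypothetical):

```python
import redis

cache = redis.Redis(host="my-elasticache-endpoint", port=6379)  # placeholder endpoint

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached                  # cache hit: no database call
    value = fetch_from_db(user_id)     # cache miss: hypothetical database read
    cache.setex(key, 3600, value)      # only now is the value cached, with a TTL
    return value
```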
You are serving web pages for a very dynamic website and you have a requirement to keep latency to a minimum for every single user when they do a read request. Writes can take longer to happen. Which caching strategy do you recommend?
- cache aside
- write-through
- TTL
write-through
Write-through has longer writes, but reads are quick and the data in the cache is always up to date
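A sketch of the write-through counterpart (the redis-py client, endpoint, and write_to_db helper are hypothetical): every write also refreshes the cache, so reads never hit a stale or missing entry.

```python
import redis

cache = redis.Redis(host="my-elasticache-endpoint", port=6379)  # placeholder endpoint

def update_user(user_id, value):
    write_to_db(user_id, value)                  # hypothetical database write
    cache.setex(f"user:{user_id}", 3600, value)  # refresh the cache on every write
```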
What does each of these Route 53 record types do?
- A
- AAAA
- CNAME
- Alias
A: hostname to IPv4
AAAA: hostname to IPv6
CNAME: hostname to hostname
Alias: hostname to AWS resource
You have purchased “mycoolcompany.com” on the AWS registrar and would like it to point to lb1-1234.us-east-2.elb.amazonaws.com. What sort of Route 53 record is NOT POSSIBLE to set up for this?
- CNAME
- Alias
CNAME
The DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace (mycoolcompany.com), also known as the zone apex
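A sketch of the Alias alternative with boto3 (the hosted zone IDs are placeholders): an Alias A record can sit at the zone apex and point to the load balancer.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",   # hosted zone for mycoolcompany.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "mycoolcompany.com",   # zone apex record
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z3AADJGX6KTTL2",  # the ELB's own hosted zone ID (region-specific placeholder)
                    "DNSName": "lb1-1234.us-east-2.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```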
Route53 - Simple vs Weighted vs Latency vs Failover ?
Simple: route to a single resource; no health checks (if the record has multiple values, the client picks one at random)
Weighted: distribute traffic across records according to assigned weights
Latency: route users to the region that gives them the lowest latency
Failover: route to a primary record and fall back to a secondary when health checks fail
You have deployed a new Elastic Beanstalk environment and would like to direct 5% of your production traffic to this new environment, in order to monitor for CloudWatch metrics and ensuring no bugs exist. What type of Route 53 records allows you to do so?
- Simple
- Weighted
- Latency
- Failover
Weighted
Weighted allows you to redirect a portion of the traffic based on a weight (hence a percentage). It's commonly used to send part of the traffic to a new version of an application you're deploying
After updating a Route 53 record to point “myapp.mydomain.com” from an old Load Balancer to a new load balancer, it looks like the users are still not redirected to your new load balancer. You are wondering why…
- it’s b/c of the alias record
- it’s b/c of the CNAME record
- it’s b/c of the TTL (time to live)
- it’s b/c of the health checks
TTL
DNS records have a TTL (Time to Live) so clients know how long to cache the values and don't overload the DNS server with requests. The TTL should strike a balance between how long values should be cached and how much load the DNS server should take.
You want your users to get the best possible user experience and that means minimizing the response time from your servers to your users. Which routing policy will help?
- Multi Value
- Weighted
- Latency
- Geo location
Latency
Latency routing evaluates latency measurements and gives your users a DNS response that minimizes their latency (i.e. response time)
You have a legal requirement that people in any country but France should not be able to access your website. Which Route 53 record helps you in achieving this?
- Latency
- Simple
- Geo location
- Multi Value
Geo location
Direct Connect
Direct Connect: private, dedicated connection between an on-prem data center and AWS
VPC
VPC: Virtual Private Cloud; EC2 instances launch into the default VPC unless you specify another one
Subnets
Subnets: tied to a specific AZ; this is where EC2 instances are launched
Internet Gateway
Internet Gateway: provides internet access to public subnets & instances
NAT Gateway / Instances
NAT Gateway / Instances: provide internet access to instances in private subnets
NACL
NACL: firewall for inbound/outbound, stateless, for subnet rules
Security Groups
Security Groups: stateful, operate on EC2 instance level or ENI, can reference other security groups
VPC Peering
VPC Peering: connects two VPCs (their CIDR ranges must not overlap)
VPC Endpoints
VPC Endpoints: private access to AWS services from within your VPC, keeping traffic on the AWS network
VPC Flow Logs
VPC Flow Logs: network traffic logs
Site to Site VPN
Site to Site VPN: encrypted connection between an on-prem data center and AWS over the public internet
You have set up an internet gateway in your VPC, but your EC2 instances still don’t have access to the internet.
Which of the following is NOT a possible issue?
- Route Tables are missing entries
- The security group does not allow network in
- The NACL does not allow network traffic out
The security group does not allow network in
security groups are stateful: if outbound traffic is allowed out, the response traffic is automatically allowed back in, so a missing inbound rule cannot be the cause
You would like to provide internet access to your instances in private subnets with IPv4, while making sure this solution requires the least amount of administration and scales seamlessly. What should you use?
- NAT Instances with Source / Destination Check flag off
- NAT Gateway
NAT Gateway
Your EC2 instance in a private subnet must access the AWS APIs privately. You must keep all traffic within the AWS network. What do you recommend?
- NAT Gateway in public subnet & Internet Gateway
- VPC Endpoints
- Direct Connect
VPC Endpoints
I tried creating an S3 bucket named “dev” but it didn’t work. This is a new AWS Account and I have no buckets at all. What is the cause?
- I’m missing IAM permissions to create a bucket
- Bucket names must be globally unique and “dev” is already taken
Bucket names must be globally unique and “dev” is already taken - video #83
You’ve added files in your bucket and then enabled versioning. The files you’ve already added will have which version?
- 1
- 0
- -1
- null
null
Any file that is not versioned prior to enabling versioning will have version “null”
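A sketch of enabling versioning on an existing bucket with boto3 (the bucket name is a placeholder); objects uploaded before this call keep the “null” version ID.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-existing-bucket",
    VersioningConfiguration={"Status": "Enabled"},  # new uploads get real version IDs
)
```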
Your client wants to make sure the encryption is happening in S3, but wants to fully manage the encryption keys and never store them in AWS. You recommend
- SSE-S3
- SSE-KMS
- SSE-C
- Client Side Encryption
SSE-C - video #87
SSE-S3
encryption keys entirely handled & managed by AWS
SSE-KMS
encryption keys handled & managed by AWS’s Key Management Service
- gives user control (who has control)
- provides audit trail (who did what)
SSE-C
encryption keys handled & managed by you outside of AWS
- S3 does not store any keys
- keys get used then discarded
- must use HTTPS (encryption in transit)
- keys are provided in HTTP headers
- more management on your end since you created the keys
Client Side Encryption
when you (the client) encrypt the object before uploading it to S3
- a client library such as the Amazon S3 Encryption Client can help with encryption
- you must encrypt data yourself before sending it to S3, and decrypt it yourself after retrieving it
- the customer fully manages the keys and the encryption cycle
- encryption happens entirely outside of S3
Encryption in transit (SSL/TLS)
- S3 exposes both HTTP and HTTPS endpoints; you're free to use either, but HTTPS is recommended
- use the HTTPS endpoint by default
- HTTPS is mandatory for SSE-C
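A sketch combining the SSE options above in boto3 uploads (the bucket, KMS key alias, and demo key are placeholders, not from the cards):

```python
import boto3

s3 = boto3.client("s3")

# SSE-KMS: S3 encrypts with a KMS key, giving user control and an audit trail
s3.put_object(
    Bucket="my-bucket", Key="report-kms.txt", Body=b"hello",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-key",      # placeholder KMS key alias
)

# SSE-C: you supply the key on every request (HTTPS only); S3 uses it, then discards it
customer_key = b"0" * 32                 # demo 256-bit key you manage outside AWS
s3.put_object(
    Bucket="my-bucket", Key="report-ssec.txt", Body=b"hello",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```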
Your company wants data to be encrypted in S3, and maintain control of the rotation policy for the encryption keys. You recommend
- SSE-S3
- SSE-KMS
- SSE-C
- Client Side Encryption
SSE-KMS
Your company does not trust S3 for encryption and wants it to happen on the application. You recommend
- SSE-S3
- SSE-KMS
- SSE-C
- Client Side Encryption
Client Side Encryption
The bucket policy allows our users to read/write files in the bucket, yet we were not able to perform a PutObject API call. What is your assessment?
- the bucket policy is wrong
- the IAM user has an explicit DENY in the attached IAM policy
- you need to contact AWS Support to lift this limit
the IAM user has an explicit DENY in the attached IAM policy
Explicit DENY in an IAM policy will take precedence over a bucket policy permission