Udemy Flashcards
What is a proper definition of an IAM role?
A) IAM Users in multiple User Groups
B) An IAM entity that defines a password policy for IAM users
C) An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS service
D) Permissions assigned to IAM Users to perform Actions
C) An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS service
Some AWS services need to perform actions on your behalf. To do so, you assign permissions to AWS services with IAM Roles.
Which of the following is an IAM Security Tool?
A) IAM Credentials Report
B) IAM Root Account Manager
C) IAM Services Report
D) IAM Security Advisor
A) IAM Credentials Report
IAM Credentials report lists all your AWS Account’s IAM Users and the status of their various credentials.
Which answer is INCORRECT regarding IAM Users?
A) IAM Users can belong to multiple User Groups
B) IAM Users don’t have to belong to a User Group
C) IAM Policies can be attached directly to IAM Users
D) IAM Users access AWS services using root account credentials
D) IAM Users access AWS services using root account credentials
IAM Users access AWS services using their own credentials (username & password or Access Keys).
Which of the following is an IAM best practice?
A) Create several IAM Users for one physical person
B) Don’t use the root user account
C) Share your AWS account credentials with your colleague, so (s)he can perform a task for you
D) Do not enable MFA for easier access
B) Don’t use the root user account
Use the root account only to create your first IAM User and a few account/service management tasks. For everyday tasks, use an IAM User.
What are IAM Policies?
A) A set of policies that defines how AWS accounts interact with each other
B) JSON documents that define a set of permissions for making requests to AWS services, and can be used by IAM Users, User Groups, and IAM Roles
C) A set of policies that define a password for IAM Users
D) A set of policies defined by AWS that show how customers interact with AWS
B) JSON documents that define a set of permissions for making requests to AWS services, and can be used by IAM Users, User Groups, and IAM Roles
Which principle should you apply regarding IAM Permissions?
A) Grant most privilege
B) Grant more permissions if your employee asks you to
C) Grant least privilege
D) Restrict root account permissions
C) Grant least privilege
Don’t give more permissions than the user needs.
What should you do to increase your root account security?
A) Remove permissions from the root account
B) Only access AWS services through AWS Command Line Interface (CLI)
C) Don’t create IAM Users, only access your AWS account using the root account
D) Enable Multi-Factor Authentication (MFA)
D) Enable Multi-Factor Authentication (MFA)
When you enable MFA, this adds another layer of security. Even if your password is stolen, lost, or hacked, your account is not compromised.
TRUE / FALSE
IAM User Groups can contain IAM Users and other User Groups.
FALSE
IAM User Groups can contain only IAM Users.
An IAM policy consists of one or more statements. A statement in an IAM Policy consists of the following, EXCEPT:
A) Effect
B) Principal
C) Version
D) Action
C) Version
A statement in an IAM Policy consists of Sid, Effect, Principal, Action, Resource, and Condition. Version is part of the IAM Policy itself, not the statement.
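The split between policy-level and statement-level fields can be sketched as a plain JSON document (the Sid, principal ARN, and bucket name below are illustrative, not from a real account):

```python
import json

# Version belongs to the policy document itself; each statement holds
# Sid, Effect, Principal, Action, Resource, and (optionally) Condition.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3Read",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }
    ],
}

statement = policy["Statement"][0]
print("Version" in policy)     # True: Version is a policy-level field
print("Version" in statement)  # False: it is not a statement field
print(json.dumps(policy, indent=2)[:40])
```

This mirrors the card: every field the question lists lives inside the statement, while Version sits one level up on the policy document.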
Which EC2 Purchasing Option can provide you the biggest discount, but it is not suitable for critical jobs or databases?
A) Convertible Reserved Instances
B) Dedicated Hosts
C) Spot Instances
C) Spot Instances
Spot Instances are good for short workloads and are the cheapest EC2 Purchasing Option, but they are less reliable because you can lose your EC2 instance at any time.
What should you use to control traffic in and out of EC2 instances?
A) Network Access Control List (NACL)
B) Security Groups
C) IAM Policies
B) Security Groups
Security Groups operate at the EC2 instance level and can control traffic.
How long can you reserve an EC2 Reserved Instance?
A) 1 or 3 years
B) 2 or 4 years
C) 6 months or 1 year
D) Anytime between 1 and 3 years
A) 1 or 3 years
EC2 Reserved Instances can be reserved for 1 or 3 years only.
You would like to deploy a High-Performance Computing (HPC) application on EC2 instances. Which EC2 instance type should you choose?
A) Storage Optimized
B) Compute Optimized
C) Memory Optimized
D) General Purpose
B) Compute Optimized
Compute Optimized EC2 instances are great for compute-intensive workloads requiring high-performance processors (e.g., batch processing, media transcoding, high-performance computing, scientific modeling & machine learning, and dedicated gaming servers).
Which EC2 Purchasing Option should you use for an application you plan to run on a server continuously for 1 year?
A) Reserved Instances
B) Spot Instances
C) On-Demand Instances
A) Reserved Instances
Reserved Instances are good for long workloads. You can reserve EC2 instances for 1 or 3 years.
You are preparing to launch an application that will be hosted on a set of EC2 instances. This application needs some software installation and some OS packages need to be updated during the first launch. What is the best way to achieve this when you launch the EC2 instances?
A) Connect to each EC2 instance using SSH, then install the required software and update your OS packages manually
B) Write a bash script that installs the required software and updates to your OS, then contact AWS Support and provide them with the script. They will run it on your EC2 instances at launch
C) Write a bash script that installs the required software and updates to your OS, then use this script in EC2 User Data when you launch your EC2 instance
C) Write a bash script that installs the required software and updates to your OS, then use this script in EC2 User Data when you launch your EC2 instance
EC2 User Data is used to bootstrap your EC2 instances using a bash script. This script can contain commands such as installing software/packages, downloading files from the Internet, or anything else you need.
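As a sketch, a User Data bootstrap script is plain bash; the low-level EC2 API expects it base64-encoded, which SDKs such as boto3 handle for you when you pass the plain string (the package names below are illustrative):

```python
import base64

# A minimal bootstrap script of the kind you'd put in EC2 User Data.
# It runs as root on the first boot only.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

# The raw EC2 API wants user data base64-encoded; SDKs like boto3 do
# this encoding for you when you pass the plain-text script.
encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded == user_data)  # True: lossless round trip
```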
Which EC2 Instance Type should you choose for a critical application that uses an in-memory database?
A) Compute Optimized
B) Storage Optimized
C) Memory Optimized
D) General Purpose
C) Memory Optimized
Memory Optimized EC2 instances are great for workloads requiring large data sets in memory.
You have an e-commerce application with an OLTP database hosted on-premises. This application is popular, which results in its database receiving thousands of requests per second. You want to migrate the database to an EC2 instance. Which EC2 Instance Type should you choose to handle this high-frequency OLTP database?
A) Compute Optimized
B) Storage Optimized
C) Memory Optimized
D) General Purpose
B) Storage Optimized
Storage Optimized EC2 instances are great for workloads requiring high, sequential read/write access to large data sets on local storage.
TRUE / FALSE
Security Groups can be attached to only one EC2 instance
FALSE
Security Groups can be attached to multiple EC2 instances within the same AWS Region/VPC.
You’re planning to migrate on-premises applications to AWS. Your company has strict compliance requirements that require your applications to run on dedicated servers. You also need to use your own server-bound software license to reduce costs. Which EC2 Purchasing Option is suitable for you?
A) Convertible Reserved Instances
B) Dedicated Hosts
C) Spot Instances
B) Dedicated Hosts
Dedicated Hosts are good for companies with strong compliance needs or for software that has complicated licensing models. This is the most expensive EC2 Purchasing Option available.
You would like to deploy a database technology on an EC2 instance and the vendor license bills you based on the physical cores and underlying network socket visibility. Which EC2 Purchasing Option allows you to get visibility into them?
A) Spot Instances
B) On-Demand
C) Dedicated Hosts
D) Reserved Instances
C) Dedicated Hosts
You have launched an EC2 instance that will host a NodeJS application. After installing all the required software and configuring your application, you noted down the EC2 instance's public IPv4 address so you can access it. Then, you stopped and started your EC2 instance to complete the application configuration. After the restart, you couldn't access the EC2 instance, and you found that its public IPv4 address had changed. What should you do to assign a fixed public IPv4 address to your EC2 instance?
A) Allocate an Elastic IP and assign it to your EC2 instance
B) From inside your EC2 instance OS, change network configuration from DHCP to static and assign it a public IPv4
C) Contact AWS Support and request a fixed public IPv4 to your EC2 Instance
D) This can’t be done, you can only assign a fixed private IPv4 to your EC2 instance
A) Allocate an Elastic IP and assign it to your EC2 instance
An Elastic IP is a public IPv4 address that you keep for as long as you want, and you can attach it to one EC2 instance at a time.
Spot Fleet is a set of Spot Instances and optionally ……………
A) Reserved Instances
B) On-Demand Instances
C) Dedicated Hosts
D) Dedicated Instances
B) On-Demand Instances
Spot Fleet is a set of Spot Instances and optionally On-demand Instances. It allows you to automatically request Spot Instances with the lowest price.
You have an application performing big data analysis hosted on a fleet of EC2 instances. You want to ensure your EC2 instances have the highest networking performance while communicating with each other. Which EC2 Placement Group should you choose?
A) Spread Placement Group
B) Cluster Placement Group
C) Partition Placement Group
B) Cluster Placement Group
Cluster Placement Groups place your EC2 instances next to each other which gives you high-performance computing and networking.
You have a critical application hosted on a fleet of EC2 instances in which you want to achieve maximum availability when there’s an AZ failure. Which EC2 Placement Group should you choose?
A) Cluster Placement Group
B) Partition Placement Group
C) Spread Placement Group
C) Spread Placement Group
Spread Placement Group places your EC2 instances on different physical hardware across different AZs.
TRUE / FALSE
Elastic Network Interface (ENI) can be attached to EC2 instances in another AZ.
FALSE
Elastic Network Interfaces (ENIs) are bound to a specific AZ. You cannot attach an ENI to an EC2 instance in a different AZ.
The following are true regarding EC2 Hibernate, EXCEPT:
A) EC2 Instance Root Volume must be an Instance Store volume
B) Supports On-Demand and Reserved Instances
C) EC2 Instance RAM must be less than 150GB
D) EC2 Instance Root Volume type must be an EBS volume
A) EC2 Instance Root Volume must be an Instance Store volume
To enable EC2 Hibernate, the EC2 Instance Root Volume type must be an EBS volume and must be encrypted to ensure the protection of sensitive content.
You have just terminated an EC2 instance in us-east-1a, and its attached EBS volume is now available. Your teammate tries to attach it to an EC2 instance in us-east-1b but he can’t. What is a possible cause for this?
A) He’s missing IAM permissions
B) EBS volumes are locked to an AWS Region
C) EBS volumes are locked to an Availability Zone
C) EBS volumes are locked to an Availability Zone
EBS Volumes are created for a specific AZ. It is possible to migrate them between different AZs using EBS Snapshots.
You have launched an EC2 instance with two EBS volumes: the root volume and an additional EBS volume to store data. A month later, you are planning to terminate the EC2 instance. By default, what will happen to each EBS volume?
A) Both the root volume and the additional EBS volume will be deleted
B) The root volume will be deleted and the additional EBS volume will not be deleted
C) The root volume will not be deleted and the additional EBS volume will be deleted
D) Neither the root volume nor the additional EBS volume will be deleted
B) The root volume will be deleted and the additional EBS volume will not be deleted
By default, the root volume is deleted on termination because its “Delete On Termination” attribute is enabled by default. Any additional EBS volume is not deleted, as its “Delete On Termination” attribute is disabled by default.
TRUE / FALSE
You can use an AMI in N.Virginia Region us-east-1 to launch an EC2 instance in any AWS Region.
FALSE
AMIs are built for a specific AWS Region; they're unique to each AWS Region. You can't launch an EC2 instance using an AMI from another AWS Region, but you can copy the AMI to the target AWS Region and then use it to create your EC2 instances.
Which of the following EBS volume types can be used as boot volumes when you create EC2 instances?
A) gp2, gp3, st1, sc1
B) gp2, gp3, io1, io2
C) io1, io2, st1, sc1
B) gp2, gp3, io1, io2
When creating EC2 instances, you can only use the following EBS volume types as boot volumes: gp2, gp3, io1, io2, and Magnetic (Standard).
What is EBS Multi-Attach?
A) Attach the same EBS volume to multiple EC2 instances in multiple AZs
B) Attach multiple EBS volumes in the same AZ to the same EC2 instance
C) Attach the same EBS volume to multiple EC2 instances in the same AZ
D) Attach multiple EBS volumes in multiple AZs to the same EC2 instance
C) Attach the same EBS volume to multiple EC2 instances in the same AZ
Using EBS Multi-Attach, you can attach the same EBS volume to multiple EC2 instances in the same AZ. Each EC2 instance has full read/write permissions.
You would like to encrypt an unencrypted EBS volume attached to your EC2 instance. What should you do?
A) Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume
B) Select your EBS volume, choose Edit Attributes, then tick the Encrypt using KMS option
C) Create a new encrypted EBS volume, then copy data from your unencrypted EBS volume to the new EBS volume
D) Submit a request to AWS Support to encrypt your EBS volume
A) Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume
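This workflow can be sketched as an ordered list of EC2 API calls rather than executing anything against AWS (the volume ID is illustrative and the snapshot IDs are placeholders):

```python
# Sketch of the snapshot-copy-encrypt sequence as ordered EC2 API calls.
def encrypt_volume_steps(volume_id, az, kms_key_id="alias/aws/ebs"):
    return [
        # 1. Snapshot the unencrypted volume.
        ("CreateSnapshot", {"VolumeId": volume_id}),
        # 2. Copy the snapshot with encryption enabled.
        ("CopySnapshot", {"SourceSnapshotId": "<snapshot-id>",
                          "Encrypted": True,
                          "KmsKeyId": kms_key_id}),
        # 3. Create a new (now encrypted) volume from the encrypted copy.
        ("CreateVolume", {"SnapshotId": "<encrypted-snapshot-id>",
                          "AvailabilityZone": az}),
        # Finally: detach the old volume and attach the new encrypted one.
    ]

steps = encrypt_volume_steps("vol-0123456789abcdef0", "us-east-1a")
print([name for name, _ in steps])
```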
You have a fleet of EC2 instances distributed across AZs that process a large data set. What do you recommend to make the same data accessible as an NFS drive to all of your EC2 instances?
A) Use EBS
B) Use EFS
C) Use an Instance Store
B) Use EFS
EFS is a network file system (NFS) that allows you to mount the same file system on EC2 instances that are in different AZs.
You would like to have a high-performance local cache for your application hosted on an EC2 instance. You don’t mind losing the cache upon the termination of your EC2 instance. Which storage mechanism do you recommend as a Solutions Architect?
A) EBS
B) EFS
C) Instance Store
C) Instance Store
EC2 Instance Store provides the best disk I/O performance.
You are running a high-performance database that requires an IOPS of 310,000 for its underlying storage. What do you recommend?
A) Use an EBS gp2 drive
B) Use an EBS io1 drive
C) Use an EC2 Instance Store
D) Use an EBS io2 Block Express Drive
C) Use an EC2 Instance Store
You can run a database on an EC2 instance that uses an Instance Store, but the data will be lost if the EC2 instance is stopped (it survives a reboot). One solution is to set up a replication mechanism on another EC2 instance with an Instance Store to keep a standby copy. Another is to set up backup mechanisms for your data. It's up to you how you design your architecture to meet your requirements. This use case is about IOPS: only an EC2 Instance Store can deliver 310,000 IOPS, as even an EBS io2 Block Express volume tops out at 256,000 IOPS.
Scaling an EC2 instance from r4.large to r4.4xlarge is called …………………
A) Horizontal Scalability
B) Vertical Scalability
B) Vertical Scalability
Running an application on an Auto Scaling Group that scales the number of EC2 instances in and out is called …………………
A) Horizontal Scalability
B) Vertical Scalability
A) Horizontal Scalability
Elastic Load Balancers provide a …………………..
A) static IPv4 we can use in our application
B) static DNS name we can use in our application
C) static IPv6 we can use in our application
B) static DNS name we can use in our application
Only the Network Load Balancer provides both a static DNS name and static IP addresses, while the Application Load Balancer provides a static DNS name but does NOT provide a static IP. The reason is that AWS wants your Elastic Load Balancer to be accessible through a static endpoint, even if the underlying infrastructure that AWS manages changes.
You are running a website on 10 EC2 instances fronted by an Elastic Load Balancer. Your users are complaining about the fact that the website always asks them to re-authenticate when they are moving between website pages. You are puzzled because it’s working just fine on your machine and in the Dev environment with 1 EC2 instance. What could be the reason?
A) Your website must have an issue when hosted on multiple EC2 instances
B) The EC2 instances log out users as they can’t see their IP addresses, instead, they receive ELB IP addresses
C) The Elastic Load Balancer does not have Sticky Sessions enabled
C) The Elastic Load Balancer does not have Sticky Sessions enabled
The ELB Sticky Sessions feature ensures traffic from the same client is always routed to the same target (e.g., EC2 instance), so the client does not lose their session data.
You are using an Application Load Balancer to distribute traffic to your website hosted on EC2 instances. It turns out that your website only sees traffic coming from private IPv4 addresses which are in fact your Application Load Balancer’s IP addresses. What should you do to get the IP address of clients connected to your website?
A) Modify your website’s frontend so that users send their IP in every request
B) Modify your website’s backend to get the client IP address from the X-Forwarded-For header
C) Modify your website’s backend to get the client IP address from the X-Forwarded-Port header
D) Modify your website’s backend to get the client IP address from the X-Forwarded-Proto header
B) Modify your website’s backend to get the client IP address from the X-Forwarded-For header
When using an Application Load Balancer to distribute traffic to your EC2 instances, the source IP addresses you'll see on requests will be the ALB's private IP addresses. To pass along the client's IP address, the ALB adds a header called “X-Forwarded-For” that contains it.
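A minimal sketch of reading this header on the backend, assuming the request headers arrive as a plain dict (the helper name and IP addresses are illustrative):

```python
def client_ip(headers):
    """Return the original client IP from ALB-forwarded headers.

    Each hop appends its address to X-Forwarded-For, so the left-most
    entry is the original client; return None if the header is absent.
    """
    xff = headers.get("X-Forwarded-For", "")
    return xff.split(",")[0].strip() or None

# The ALB's private IP is the TCP peer; the real client is in the header:
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.1.12"}))  # 203.0.113.7
print(client_ip({}))  # None
```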
You hosted an application on a set of EC2 instances fronted by an Elastic Load Balancer. A week later, users begin complaining that sometimes the application just doesn’t work. You investigate the issue and found that some EC2 instances crash from time to time. What should you do to protect users from connecting to the EC2 instances that are crashing?
A) Enable ELB Health Checks
B) Enable ELB Stickiness
C) Enable SSL Termination
D) Enable Cross-Zone Load Balancing
A) Enable ELB Health Checks
When you enable ELB Health Checks, your ELB won’t send traffic to unhealthy (crashed) EC2 instances.
You are working as a Solutions Architect for a company and you are required to design an architecture for a high-performance, low-latency application that will receive millions of requests per second. Which type of Elastic Load Balancer should you choose?
A) Application Load Balancer
B) Classic Load Balancer
C) Network Load Balancer
C) Network Load Balancer
Network Load Balancer provides the highest performance and lowest latency if your application needs it.
Application Load Balancers support the following protocols, EXCEPT:
A) HTTP
B) HTTPS
C) TCP
D) WebSocket
C) TCP
Application Load Balancers support HTTP, HTTPS, and WebSocket.
Application Load Balancers can route traffic to different Target Groups based on the following, EXCEPT:
A) Client’s Location (Geography)
B) Hostname
C) Request URL Path
D) Source IP Address
A) Client’s Location (Geography)
ALBs can route traffic to different Target Groups based on URL Path, Hostname, HTTP Headers, and Query Strings.
Registered targets in a Target Groups for an Application Load Balancer can be one of the following, EXCEPT:
A) EC2 Instances
B) Network Load Balancer
C) Private IP Address
D) Lambda Functions
B) Network Load Balancer
For compliance purposes, you would like to expose a fixed static IP address to your end-users so that they can write firewall rules that will be stable and approved by regulators. What type of Elastic Load Balancer would you choose?
A) Application Load Balancer with an Elastic IP attached to it
B) Network Load Balancer
C) Classic Load Balancer
B) Network Load Balancer
Network Load Balancer has one static IP address per AZ and you can attach an Elastic IP address to it. Application Load Balancers and Classic Load Balancers have a static DNS name.
You want to create a custom application-based cookie in your Application Load Balancer. Which of the following you can use as a cookie name?
A) AWSALBAPP
B) APPUSERC
C) AWSALBTG
D) AWSALB
B) APPUSERC
The other cookie names are reserved by the ELB (AWSALB, AWSALBAPP, AWSALBTG).
You have a Network Load Balancer that distributes traffic across a set of EC2 instances in us-east-1. You have 2 EC2 instances in us-east-1b AZ and 5 EC2 instances in us-east-1e AZ. You have noticed that the CPU utilization is higher in the EC2 instances in us-east-1b AZ. After more investigation, you noticed that the traffic is equally distributed across the two AZs. How would you solve this problem?
A) Enable Cross-Zone Load Balancing
B) Enable Sticky Sessions
C) Enable ELB Health Checks
D) Enable SSL Termination
A) Enable Cross-Zone Load Balancing
When Cross-Zone Load Balancing is enabled, ELB distributes traffic evenly across all registered EC2 instances in all AZs.
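The imbalance in the scenario above (2 instances in us-east-1b, 5 in us-east-1e) can be checked with a little arithmetic:

```python
def per_instance_share(instances_per_az, cross_zone):
    """Fraction of total traffic each instance in a given AZ serves."""
    total = sum(instances_per_az.values())
    azs = len(instances_per_az)
    if cross_zone:
        # Traffic is spread evenly over every registered instance.
        return {az: 1 / total for az in instances_per_az}
    # Without cross-zone balancing, each AZ's load balancer node gets an
    # equal share of traffic and splits it among only its own instances.
    return {az: (1 / azs) / n for az, n in instances_per_az.items()}

fleet = {"us-east-1b": 2, "us-east-1e": 5}
print(per_instance_share(fleet, cross_zone=False))
# without cross-zone: us-east-1b instances serve 25% each vs 10% each in us-east-1e
print(per_instance_share(fleet, cross_zone=True))
# with cross-zone: every instance serves 1/7 of the traffic
```

This shows why the us-east-1b instances run hotter: each serves 25% of all traffic versus 10% for the us-east-1e instances, until Cross-Zone Load Balancing evens them out.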
Which feature in both Application Load Balancers and Network Load Balancers allows you to load multiple SSL certificates on one listener?
A) TLS Termination
B) Server Name Indication (SNI)
C) SSL Security Policies
D) Host Headers
B) Server Name Indication (SNI)
You have an Application Load Balancer that is configured to redirect traffic to 3 Target Groups based on the following hostnames: users.example.com, api.external.example.com, and checkout.example.com. You would like to configure HTTPS for each of these hostnames. How do you configure the ALB to make this work?
A) Use an HTTP to HTTPS redirect rule
B) Use a security group SSL certificate
C) Use Server Name Indication
C) Use Server Name Indication
Server Name Indication (SNI) allows you to expose multiple HTTPS applications each with its own SSL certificate on the same listener. Read more here: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
You have an application hosted on a set of EC2 instances managed by an Auto Scaling Group for which you configured both the desired and maximum capacity to 3. You have also created a CloudWatch Alarm that is configured to scale out your ASG when CPU Utilization reaches 60%. Your application suddenly received huge traffic and is now running at 80% CPU Utilization. What will happen?
A) Nothing
B) The desired capacity will go up to 4 and the maximum capacity will stay at 3
C) The desired capacity will go up to 4 and the maximum capacity will stay at 4
A) Nothing
The Auto Scaling Group can’t go over the maximum capacity (you configured) during scale-out events.
You have an Auto Scaling Group fronted by an Application Load Balancer. You have configured the ASG to use ALB Health Checks, and one EC2 instance has just been reported unhealthy. What will happen to the EC2 instance?
A) The ASG will keep the instance running and restart the application
B) The ASG will detach the EC2 instance and leave it running
C) The ASG will terminate the EC2 instance
C) The ASG will terminate the EC2 instance
You can configure the Auto Scaling Group to determine the EC2 instances’ health based on Application Load Balancer Health Checks instead of EC2 Status Checks (default). When an EC2 instance fails the ALB Health Checks, it is marked unhealthy and will be terminated while the ASG launches a new EC2 instance.
Your boss asked you to scale your Auto Scaling Group based on the number of requests per minute your application makes to your database. What should you do?
A) Create a CloudWatch custom metric, then create a CloudWatch Alarm on this metric to scale your ASG
B) You politely tell him that it’s impossible
C) Enable Detailed Monitoring then create a CloudWatch Alarm to scale your ASG
A) Create a CloudWatch custom metric, then create a CloudWatch Alarm on this metric to scale your ASG
There’s no CloudWatch Metric for “requests per minute” for backend-to-database connections. You need to create a CloudWatch Custom Metric, then create a CloudWatch Alarm.
A web application is hosted on a fleet of EC2 instances managed by an Auto Scaling Group. You are exposing this application through an Application Load Balancer. Both the EC2 instances and the ALB are deployed in a VPC with the CIDR 192.168.0.0/18. How do you configure the EC2 instances' security group to ensure only the ALB can access them on port 80?
A) Add an inbound rule with port 80 and 0.0.0.0/0 as the source
B) Add an inbound rule with port 80 and 192.168.0.0/18 as the source
C) Add an inbound rule with port 80 and ALB’s Security Group as the source
D) Load an SSL certificate on the ALB
C) Add an inbound rule with port 80 and ALB’s Security Group as the source
This is the most secure way of ensuring only the ALB can access the EC2 instances. Referencing security groups in rules is an extremely powerful pattern, and many exam questions rely on it. Make sure you fully master the concept behind it!
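A sketch of the corresponding ingress rule, shaped like an EC2 AuthorizeSecurityGroupIngress request body (both security group IDs are hypothetical):

```python
# Allow port 80 only from the ALB's security group, not from a CIDR.
ingress_request = {
    "GroupId": "sg-0aaa1111bbbb22222",  # the EC2 instances' security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Reference the ALB's security group instead of an IP range,
            # so only traffic originating from the ALB is allowed in.
            "UserIdGroupPairs": [{"GroupId": "sg-0ccc3333dddd44444"}],
        }
    ],
}

rule = ingress_request["IpPermissions"][0]
print("UserIdGroupPairs" in rule and "IpRanges" not in rule)  # True
```

The key design choice: the rule names another security group as its source, so it keeps working even as the ALB's underlying IP addresses change.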
There is an Auto Scaling Group running in the eu-west-2 region that is configured to span two Availability Zones, eu-west-2a and eu-west-2b. Currently, 3 EC2 instances are running in eu-west-2a and 4 EC2 instances are running in eu-west-2b. The ASG is about to scale in. Which EC2 instance will get terminated?
A) A random EC2 instance in eu-west-2a
B) The EC2 instance in eu-west-2a with the oldest Launch Template version
C) A random EC2 instance in eu-west-2b
D) The EC2 instance in eu-west-2b with the oldest Launch Template version
D) The EC2 instance in eu-west-2b with the oldest Launch Template version
Make sure you remember the Default Termination Policy for Auto Scaling Groups. It tries to balance instances across AZs first, then terminates the instance with the oldest Launch Template or Launch Configuration.
An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, you manually scale the ASG and you would like to define a Scaling Policy that will ensure the average number of connections to your EC2 instances is around 1000. Which Scaling Policy should you use?
A) Simple Scaling Policy
B) Step Scaling Policy
C) Target Tracking Policy
D) Scheduled Scaling Policy
C) Target Tracking Policy
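A sketch of what such a policy could look like as a boto3 put_scaling_policy payload, assuming a hypothetical custom CloudWatch metric named ActiveConnections (the ASG and policy names are illustrative too):

```python
# Target Tracking keeps the metric near TargetValue by scaling the ASG
# in and out automatically; here, ~1000 connections per instance.
target_tracking_policy = {
    "AutoScalingGroupName": "web-asg",            # illustrative name
    "PolicyName": "keep-1000-connections",        # illustrative name
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "TargetValue": 1000.0,
        "CustomizedMetricSpecification": {
            "MetricName": "ActiveConnections",    # hypothetical custom metric
            "Namespace": "MyApp",
            "Statistic": "Average",
        },
    },
}

print(target_tracking_policy["TargetTrackingConfiguration"]["TargetValue"])
```

You would pass these keyword arguments to the Auto Scaling client's put_scaling_policy call; the dict form here just makes the shape of the request visible.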
Your application hosted on EC2 instances managed by an Auto Scaling Group suddenly receives a spike in traffic, which triggers your ASG to scale out, and a new EC2 instance is launched. The traffic continues to increase, but the ASG doesn't launch any new EC2 instances immediately; it only does so after 5 minutes. What is a possible cause for this behavior?
A) Cooldown Period
B) Lifecycle Hooks
C) Target Tracking Policy
D) Launch Template
A) Cooldown Period
For each Auto Scaling Group, there’s a Cooldown Period after each scaling activity. In this period, the ASG doesn’t launch or terminate EC2 instances. This gives time to metrics to stabilize. The default value for the Cooldown Period is 300 seconds (5 minutes).
A company has an Auto Scaling Group where random EC2 instances suddenly crashed in the past month. They can’t troubleshoot why the EC2 instances crash as the ASG terminates the unhealthy EC2 instances and replaces them with new EC2 instances. What will you do to troubleshoot the issue and prevent unhealthy instances from being terminated by the ASG?
A) Use AWS Lambda to pause the EC2 instance before terminating
B) Use ASG Lifecycle Hooks to pause the EC2 instance in the Terminating state for troubleshooting
C) Use CloudWatch Logs to troubleshoot the issue
B) Use ASG Lifecycle Hooks to pause the EC2 instance in the Terminating state for troubleshooting
Amazon RDS supports the following databases, EXCEPT:
A) MongoDB
B) MySQL
C) MariaDB
D) Microsoft SQL Server
A) MongoDB
RDS supports MySQL, PostgreSQL, MariaDB, Oracle, MS SQL Server, and Amazon Aurora.
You’re planning for a new solution that requires a MySQL database that must be available even in case of a disaster in one of the Availability Zones. What should you use?
A) Create Read Replicas
B) Enable Encryption
C) Enable Multi-AZ
C) Enable Multi-AZ
Multi-AZ helps when you plan a disaster recovery for an entire AZ going down. If you plan against an entire AWS Region going down, you should use backups and replication across AWS Regions.
We have an RDS database that struggles to keep up with the demand of requests from our website. Our million users mostly read news, and we don’t post news very often. Which solution is NOT adapted to this problem?
A) An ElastiCache Cluster
B) RDS Multi-AZ
C) RDS Read Replicas
B) RDS Multi-AZ
Be very careful with the way you read questions at the exam. Here, the question is asking which solution is NOT adapted to this problem. ElastiCache and RDS Read Replicas do indeed help with scaling reads.
You have set up read replicas on your RDS database, but users are complaining that upon updating their social media posts, they do not see their updated posts right away. What is a possible cause for this?
A) There must be a bug in your application
B) Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read Eventual Consistency
C) You should have set up Multi-AZ instead
B) Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read Eventual Consistency
Which RDS (NOT Aurora) feature when used does not require you to change the SQL connection string?
A) Multi-AZ
B) Read Replicas
A) Multi-AZ
Your application is running on a fleet of EC2 instances managed by an Auto Scaling Group behind an Application Load Balancer. Users have to constantly log back in, and you don't want to enable Sticky Sessions on your ALB because you fear it will overload some EC2 instances. What should you do?
A) Use your own custom Load Balancer on EC2 instances instead of using ALB
B) Store session data in RDS
C) Store session data in ElastiCache
D) Store session data in a shared EBS volume
C) Store session data in ElastiCache
Storing session data in ElastiCache is a common pattern for ensuring that different EC2 instances can retrieve your user's state if needed.
An analytics application is currently performing its queries against your main production RDS database. These queries run at any time of the day and slow down the RDS database which impacts your users’ experience. What should you do to improve the users’ experience?
A) Setup a Read Replica
B) Setup Multi-AZ
C) Run the analytics queries at night
A) Setup a Read Replica
Read Replicas will help as your analytics application can now perform queries against it, and these queries won’t impact the main production RDS database.
You would like to ensure you have a replica of your database available in another AWS Region if a disaster happens to your main AWS Region. Which database do you recommend to implement this easily?
A) RDS Read Replicas
B) RDS Multi-AZ
C) Aurora Read Replicas
D) Aurora Global Database
D) Aurora Global Database
How can you enhance the security of your ElastiCache Redis Cluster by forcing users to enter a password when they connect?
A) Use Redis Auth
B) Use IAM Auth
C) Use Security Groups
A) Use Redis Auth
Your company has a production Node.js application that is using RDS MySQL 5.6 as its database. A new application programmed in Java will perform some heavy analytics workload to create a dashboard on a regular hourly basis. What is the most cost-effective solution you can implement to minimize disruption for the main application?
A) Enable Multi-AZ for the RDS database and run the analytics workload on the standby database
B) Create a Read Replica in a different AZ and run the analytics workload on the replica database
C) Create a Read Replica in a different AZ and run the analytics workload on the source database
B) Create a Read Replica in a different AZ and run the analytics workload on the replica database
You would like to create a disaster recovery strategy for your RDS PostgreSQL database so that in case of a regional outage the database can be quickly made available for both read and write workloads in another AWS Region. The DR database must be highly available. What do you recommend?
A) Create a Read Replica in the same region and enable Multi-AZ on the main database
B) Create a Read Replica in a different region and enable Multi-AZ on the Read Replica
C) Create a Read Replica in the same region and enable Multi-AZ on the Read Replica
D) Enable Multi-Region option on the main database
B) Create a Read Replica in a different region and enable Multi-AZ on the Read Replica
You have migrated the MySQL database from on-premises to RDS. You have a lot of applications and developers interacting with your database. Each developer has an IAM user in the company’s AWS account. What is a suitable approach to give access to developers to the MySQL RDS DB instance instead of creating a DB user for each one?
A) By default IAM users have access to your RDS database
B) Use Amazon Cognito
C) Enable IAM Database Authentication
C) Enable IAM Database Authentication
Which of the following statements is true regarding replication in both RDS Read Replicas and Multi-AZ?
A) Read Replica uses Asynchronous Replication and Multi-AZ uses Asynchronous Replication
B) Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication
C) Read Replica uses Synchronous Replication and Multi-AZ uses Synchronous Replication
D) Read Replica uses Synchronous Replication and Multi-AZ uses Asynchronous Replication
B) Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication
How do you encrypt an unencrypted RDS DB instance?
A) Do it straight from AWS Console, select your RDS DB instance, choose Actions then Encrypt Using KMS
B) Do it straight from AWS Console, after stopping the RDS DB instance
C) Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption,” then restore the RDS DB instance from the encrypted snapshot
C) Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption,” then restore the RDS DB instance from the encrypted snapshot
For your RDS database, you can have up to ………… Read Replicas.
A) 3
B) 5
C) 7
B) 5
Which RDS database technology does NOT support IAM Database Authentication?
A) Oracle
B) PostgreSQL
C) MySQL
A) Oracle
You have an unencrypted RDS DB instance and you want to create Read Replicas. Can you configure the RDS Read Replicas to be encrypted?
A) No
B) Yes
A) No
You cannot create encrypted Read Replicas from an unencrypted RDS DB instance.
An application running in production is using an Aurora Cluster as its database. Your development team would like to run a scaled-down version of the application with the ability to perform some heavy workloads on a need basis. Most of the time, the application will be unused. Your CIO has tasked you with helping the team achieve this while minimizing costs. What do you suggest?
A) Use an Aurora Global Database
B) Use an RDS Database
C) Use Aurora Serverless
D) Run Aurora on EC2, and write a script to shut down the EC2 instance at night
C) Use Aurora Serverless
How many Aurora Read Replicas can you have in a single Aurora DB Cluster?
A) 5
B) 10
C) 15
C) 15
Amazon Aurora supports both …………………….. databases.
A) MySQL and MariaDB
B) MySQL and PostgreSQL
C) Oracle and MariaDB
D) Oracle and MS SQL Server
B) MySQL and PostgreSQL
You work as a Solutions Architect for a gaming company. One of the games mandates that players are ranked in real-time based on their score. Your boss asked you to design and then implement an effective and highly available solution to create a gaming leaderboard. What should you use?
A) Use RDS for MySQL
B) Use an Amazon Aurora
C) Use ElastiCache for Memcached
D) Use ElastiCache for Redis - Sorted Sets
D) Use ElastiCache for Redis - Sorted Sets
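A minimal sketch of the leaderboard idea, using a plain Python dict in place of Redis Sorted Sets (with a client such as redis-py you would call ZADD and ZREVRANGE against your ElastiCache endpoint; player names and scores below are made up):

```python
# Illustrative leaderboard. Redis Sorted Sets keep members ordered by score,
# which is exactly what a real-time game ranking needs.
class Leaderboard:
    def __init__(self):
        self.scores = {}  # player -> score

    def add_score(self, player, score):
        # analogous to ZADD: insert or update a member's score
        self.scores[player] = score

    def top(self, n):
        # analogous to ZREVRANGE 0 n-1 WITHSCORES: highest scores first
        return sorted(self.scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

board = Leaderboard()
board.add_score("alice", 3200)
board.add_score("bob", 4100)
board.add_score("carol", 2800)
```

Redis does this ranking server-side and in real time, which is why Sorted Sets are the canonical answer here rather than a relational database.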
You have purchased mycoolcompany.com on Amazon Route 53 Registrar and would like the domain to point to your Elastic Load Balancer my-elb-1234567890.us-west-2.elb.amazonaws.com. Which Route 53 Record type must you use here?
A) CNAME
B) Alias
B) Alias
You have deployed a new Elastic Beanstalk environment and would like to direct 5% of your production traffic to this new environment. This allows you to monitor CloudWatch metrics and ensure that no bugs exist in your new environment. Which Route 53 Record type allows you to do so?
A) Simple
B) Weighted
C) Latency
D) Failover
B) Weighted
Weighted Routing Policy allows you to redirect part of the traffic based on weights (e.g., percentages). It’s a common use case to send part of the traffic to a new version of your application.
You have updated a Route 53 Record’s myapp.mydomain.com value to point to a new Elastic Load Balancer, but it looks like users are still redirected to the old ELB. What is a possible cause for this behavior?
A) Because of the Alias Record
B) Because of the CNAME record
C) Because of the TTL
D) Because of Route 53 Health Checks
C) Because of the TTL
Each DNS record has a TTL (Time To Live) which tells clients how long to cache these values so they don’t overload the DNS Resolver with requests. The TTL value should strike a balance between how long the value should be cached vs. how many requests should go to the DNS Resolver.
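The behavior can be sketched with a tiny TTL-aware cache (hostnames below are illustrative; real resolvers work the same way, which is why clients keep getting the old ELB until the TTL elapses):

```python
import time

# Sketch of DNS TTL caching: a resolver keeps the record value for `ttl`
# seconds and only re-queries once the TTL expires.
class DnsCache:
    def __init__(self, now=time.time):
        self._cache = {}  # name -> (value, fetched_at, ttl)
        self._now = now   # injectable clock, handy for testing

    def resolve(self, name, query_fn, ttl):
        entry = self._cache.get(name)
        if entry is not None:
            value, fetched_at, cached_ttl = entry
            if self._now() - fetched_at < cached_ttl:
                return value                     # still fresh: no DNS query
        value = query_fn(name)                   # TTL elapsed: query again
        self._cache[name] = (value, self._now(), ttl)
        return value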
You have an application that’s hosted in two different AWS Regions us-west-1 and eu-west-2. You want your users to get the best possible user experience by minimizing the response time from application servers to your users. Which Route 53 Routing Policy should you choose?
A) Multi-Value
B) Weighted
C) Latency
D) Geolocation
C) Latency
Latency Routing Policy will evaluate the latency between your users and AWS Regions, and help them get a DNS response that will minimize their latency (e.g. response time)
You have a legal requirement that people in any country but France should NOT be able to access your website. Which Route 53 Routing Policy helps you in achieving this?
A) Latency
B) Simple
C) Multi-Value
D) Geolocation
D) Geolocation
You have purchased a domain on GoDaddy and would like to use Route 53 as the DNS Service Provider. What should you do to make this work?
A) Request for a domain transfer
B) Create a Private Hosted Zone and update the 3rd party Registrar NS records
C) Create a Public Hosted Zone and update the Route 53 NS records
D) Create a Public Hosted Zone and update the 3rd party Registrar NS records
D) Create a Public Hosted Zone and update the 3rd party Registrar NS records
Public Hosted Zones are meant to be used for people requesting your website through the Internet. Finally, NS records must be updated on the 3rd party Registrar.
Which of the following are NOT valid Route 53 Health Checks?
A) Health Check that monitors SQS Queue
B) Health Check that monitors an Endpoint
C) Health Check that monitors other Health Checks
D) Health Check that monitors CloudWatch Alarms
A) Health Check that monitors SQS Queue
Your website TriangleSunglasses.com is hosted on a fleet of EC2 instances managed by an Auto Scaling Group and fronted by an Application Load Balancer. Your ASG has been configured to scale on-demand based on the traffic going to your website. To reduce costs, you have configured the ASG to scale based on the traffic going through the ALB. To make the solution highly available, you have updated your ASG and set the minimum capacity to 2. How can you further reduce the costs while respecting the requirements?
A) Remove the ALB and use an Elastic IP instead
B) Reserve two EC2 instances
C) Reduce the minimum capacity to 1
D) Reduce the minimum capacity to 0
B) Reserve two EC2 instances
This is the way to save further costs as we will run 2 EC2 instances no matter what.
Which of the following will NOT help us while designing a STATELESS application tier?
A) Store session data in Amazon RDS
B) Store session data in Amazon ElastiCache
C) Store session data in the client HTTP cookies
D) Store session data on EBS volumes
D) Store session data on EBS volumes
EBS volumes are created in a specific AZ and can only be attached to one EC2 instance at a time.
You want to install software updates on 100s of Linux EC2 instances that you manage. You want to store these updates on shared storage which should be dynamically loaded on the EC2 instances and shouldn’t require heavy operations. What do you suggest?
A) Store the software updates on EBS and sync them using data replication software from one master in each AZ
B) Store the software updates on EFS and mount EFS as a network drive at startup
C) Package the software updates as an EBS snapshot and create EBS volumes for each new software update
D) Store the software updates on Amazon RDS
B) Store the software updates on EFS and mount EFS as a network drive at startup
EFS is a network file system (NFS) that allows you to mount the same file system to 100s of EC2 instances. Storing software updates on an EFS allows each EC2 instance to access them.
As a Solutions Architect, you’re planning to migrate a complex ERP software suite to AWS Cloud. You’re planning to host the software on a set of Linux EC2 instances managed by an Auto Scaling Group. The software traditionally takes over an hour to set up on a Linux machine. How do you recommend you speed up the installation process when there’s a scale-out event?
A) Use a Golden AMI
B) Bootstrap using EC2 User Data
C) Store the application in Amazon RDS
D) Retrieve the application setup files from EFS
A) Use a Golden AMI
Golden AMI is an image that contains all your software installed and configured so that future EC2 instances can boot up quickly from that AMI.
You’re developing an application and would like to deploy it to Elastic Beanstalk with minimal cost. You should run it in ………………
A) Single Instance Mode
B) High Availability Mode
A) Single Instance Mode
The question mentions that you’re still in the development stage and you want to reduce costs. Single Instance Mode will create one EC2 instance and one Elastic IP.
You’re deploying your application to an Elastic Beanstalk environment but you notice that the deployment process is painfully slow. After reviewing the logs, you found that your dependencies are resolved on each EC2 instance each time you deploy. How can you speed up the deployment process with minimal impact?
A) Remove some dependencies in your code
B) Place the dependencies in Amazon EFS
C) Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances
C) Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances
Golden AMI is an image that contains all your software, dependencies, and configurations, so that future EC2 instances can boot up quickly from that AMI.
You have a 25 GB file that you’re trying to upload to S3 but you’re getting errors. What is a possible cause for this?
A) The file size limit on S3 is 5GB
B) S3 Service in requested AWS Region must be down
C) Use Multi-Part upload when you upload files bigger than 5GB
C) Use Multi-Part upload when you upload files bigger than 5GB
Multi-Part Upload is recommended as soon as the file is over 100 MB.
You’re getting errors while trying to create a new S3 bucket named “dev”. You’re using a new AWS Account with no S3 buckets created before. What is a possible cause for this?
A) You’re missing IAM permissions to create an S3 bucket
B) S3 bucket names must be globally unique and “dev” is already taken
B) S3 bucket names must be globally unique and “dev” is already taken
You have enabled versioning in your S3 bucket which already contains a lot of files. Which version will the existing files have?
A) 1
B) 0
C) -1
D) null
D) null
Your client wants to make sure that file encryption is happening in S3, but he wants to fully manage the encryption keys and never store them in AWS. You recommend him to use ……………………….
A) SSE-S3
B) SSE-KMS
C) SSE-C
D) Client-Side Encryption
C) SSE-C
With SSE-C, the encryption happens in AWS and you have full control over the encryption keys.
A company you’re working for wants their data stored in S3 to be encrypted. They don’t mind the encryption keys stored and managed by AWS, but they want to maintain control over the rotation policy of the encryption keys. You recommend them to use ………………..
A) SSE-S3
B) SSE-KMS
C) SSE-C
D) Client-Side Encryption
B) SSE-KMS
With SSE-KMS, the encryption happens in AWS, and the encryption keys are managed by AWS but you have full control over the rotation policy of the encryption key. Encryption keys stored in AWS.
Your company does not trust AWS for the encryption process and wants it to happen on the application. You recommend them to use ………………..
A) SSE-S3
B) SSE - KMS
C) SSE-C
D) Client-Side Encryption
D) Client-Side Encryption
With Client-Side Encryption, you have to do the encryption yourself and you have full control over the encryption keys. You perform the encryption yourself and send the encrypted data to AWS. AWS does not know your encryption keys and cannot decrypt your data.
You have updated an S3 bucket policy to allow IAM users to read/write files in the S3 bucket, but one of the users complain that he can’t perform a PutObject API call. What is a possible cause for this?
A) The S3 bucket policy must be wrong
B) The user is lacking permissions
C) The IAM user must have an explicit DENY in the attached IAM policy
D) You need to contact AWS support to lift this limit
C) The IAM user must have an explicit DENY in the attached IAM policy
Explicit DENY in an IAM Policy will take precedence over an S3 bucket policy.
You have a website that loads files from an S3 bucket. When you try the URL of the files directly in your Chrome browser it works, but when the website you’re visiting tries to load these files it doesn’t. What’s the problem?
A) The Bucket policy is wrong
B) The IAM policy is wrong
C) CORS is wrong
D) Encryption is wrong
C) CORS is wrong
Cross-Origin Resource Sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
An application hosted on an EC2 instance wants to upload objects to an S3 bucket using the PutObject API call, but it lacks the required permissions. What should you do?
A) From inside the EC2 instance, run aws configure and insert your personal IAM Credentials, because you have access to do the required API call
B) Ask an administrator to attach an IAM Policy to the IAM Role on your EC2 instance that authorizes it to do the required API call
C) Export the environment variables with your IAM credentials on the EC2 instance
D) Use the EC2 Metadata API call
B) Ask an administrator to attach an IAM Policy to the IAM Role on your EC2 instance that authorizes it to do the required API call
IAM Roles are the right way to provide credentials and permissions to an EC2 instance.
You and your colleague are working on an application that’s interacting with some AWS services through making API calls. Your colleague can run the application on his machine without issues, while you get API Authorization Exceptions. What should you do?
A) Send him your AWS Access Key and Secret Access Key so he can replicate the issue on his machine
B) Ask him to send you his IAM credentials so you can work without issues
C) Compare both your IAM Policy and his IAM Policy in AWS Policy Simulator to understand the differences
D) Ask him to create an EC2 instance and insert his IAM credentials inside it, so you can run the application from the EC2 instance
C) Compare both your IAM Policy and his IAM Policy in AWS Policy Simulator to understand the differences
Your administrator launched a Linux EC2 instance and gives you the EC2 Key Pair so you can SSH into it. After getting into the EC2 instance, you want to get the EC2 instance ID. What is the best way to do this?
A) Create an instance and attach it to your EC2 instance so you can perform a describe-instances API call
B) Query the user data at http://169.254.169.254/latest/user-data
C) Query the metadata at http://169.254.169.254/latest/meta-data
D) B) Query the metadata at http://254.169.254.169/latest/meta-data
C) Query the metadata at http://169.254.169.254/latest/meta-data
You have enabled versioning and want to be extra careful when it comes to deleting files on an S3 bucket. What should you enable to prevent accidental permanent deletions?
A) Use a bucket policy
B) Enable MFA Delete
C) Encrypt the files
D) Disable versioning
B) Enable MFA Delete
MFA Delete forces users to use MFA codes before deleting S3 objects. It’s an extra level of security to prevent accidental deletions.
You would like all your files in an S3 bucket to be encrypted by default. What is the optimal way of achieving this?
A) Use a bucket policy that forces HTTPS connections
B) Enable Default Encryption
C) Enable versioning
B) Enable Default Encryption
You suspect that some of your employees try to access files in an S3 bucket that they don’t have access to. How can you verify this is indeed the case without them noticing?
A) Enable S3 Access Logs and analyze them using Athena
B) Restrict their IAM policies and look at CloudTrail logs
C) Use a bucket policy
A) Enable S3 Access Logs and analyze them using Athena
S3 Access Logs log all the requests made to S3 buckets and Amazon Athena can then be used to run serverless analytics on top of the log files.
You want the content of an S3 bucket to be fully available in different AWS Regions. That will help your team perform data analysis at the lowest latency and cost possible. What S3 feature should you use?
A) Amazon CloudFront Distributions
B) S3 Versioning
C) S3 Static Website Hosting
D) S3 Replication
D) S3 Replication
S3 Replication allows you to replicate data from an S3 bucket to another in the same/different AWS Region.
You have 3 S3 buckets. One source bucket A, and two destination buckets B and C in different AWS Regions. You want to replicate objects from bucket A to both bucket B and C. How would you achieve this?
A) Configure replication from bucket A to bucket B, then from bucket A to bucket C
B) Configure replication from bucket A to bucket B, then from bucket B to bucket C
C) Configure replication from bucket A to bucket C, then from bucket C to bucket B
A) Configure replication from bucket A to bucket B, then from bucket A to bucket C
Which of the following is NOT a Glacier Deep Archive retrieval mode?
A) Expedited (1 - 5 minutes)
B) Standard (12 hours)
C) Bulk (48 hours)
A) Expedited (1 - 5 minutes)
How can you be notified when there’s an object uploaded to your S3 bucket?
A) S3 Select
B) S3 Access Logs
C) S3 Event Notifications
D) S3 Analytics
C) S3 Event Notifications
You are looking to provide temporary URLs to a growing list of federated users to allow them to perform a file upload on your S3 bucket to a specific location. What should you use?
A) S3 CORS
B) S3 Pre-Signed URL
C) S3 Bucket Policies
D) IAM Users
B) S3 Pre-Signed URL
S3 Pre-Signed URLs are temporary URLs that you generate to grant time-limited access to some actions in your S3 bucket.
You have an S3 bucket that has S3 Versioning enabled. This S3 bucket has a lot of objects, and you would like to remove old object versions to reduce costs. What’s the best approach to automate the deletion of these old object versions?
A) S3 Lifecycle Rules - Transition Actions
B) S3 Lifecycle Rules - Expiration Actions
C) S3 Access Logs
B) S3 Lifecycle Rules - Expiration Actions
How can you automate the transition of S3 objects between their different tiers?
A) AWS Lambda
B) CloudWatch Events
C) S3 Lifecycle Rules
C) S3 Lifecycle Rules
Which of the following is NOT a Glacier retrieval mode?
A) Instant (10 Seconds)
B) Expedited (1 - 5 minutes)
C) Standard (3 - 5 hours)
D) Bulk (5 - 12 hours)
A) Instant (10 Seconds)
While you’re uploading large files to an S3 bucket using Multi-part Upload, there are a lot of unfinished parts stored in the S3 bucket due to network issues. You are not using these unfinished parts and they cost you money. What is the best approach to remove these unfinished parts?
A) Use AWS Lambda to loop on each old/unfinished part and delete them
B) Request AWS Support to help you delete old/unfinished parts
C) Use an S3 Lifecycle Policy to automate old/unfinished parts deletion
C) Use an S3 Lifecycle Policy to automate old/unfinished parts deletion
Which of the following is a Serverless data analysis service allowing you to query data in S3?
A) S3 Analytics
B) Athena
C) Redshift
D) RDS
B) Athena
You are looking to get recommendations for S3 Lifecycle Rules. How can you analyze the optimal number of days to move objects between different storage tiers?
A) S3 Inventory
B) S3 Analytics
C) S3 Lifecycle Rules Advisor
B) S3 Analytics
You are looking to build an index of your files in S3, using Amazon RDS PostgreSQL. To build this index, it is necessary to read the first 250 bytes of each object in S3, which contains some metadata about the content of the file itself. There are over 100,000 files in your S3 bucket, amounting to 50 TB of data. How can you build this index efficiently?
A) Use the RDS Import feature to load the data from S2 to PostgreSQL, and run a SQL query to build the index
B) Create an application that will traverse the S3 bucket, read all the files one by one, extract the fist 250 bytes, and store that information in RDS
C) Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS
D) Create an application that will traverse the S2 bucket, use S3 Select to get the first 250 bytes, and store that information in RDS
C) Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS
For compliance reasons, your company has a policy mandate that database backups must be retained for 4 years. It shouldn’t be possible to erase them. What do you recommend?
A) Glacier Vaults with Vault Lock Policies
B) EFS network drives with restrictive Linux permissions
C) S3 with Bucket Policies
A) Glacier Vaults with Vault Lock Policies
You have a large dataset stored on-premises that you want to upload to the S3 bucket. The dataset is divided into 10 GB files. You have good bandwidth but your Internet connection isn’t stable. What is the best way to upload this dataset to S3 and ensure that the process is fast and avoid any problems with the Internet connection?
A) Use Multi-part Upload only
B) Use S3 Select & use S3 Transfer Acceleration
C) Use S3 Multi-part Upload & S3 Transfer Acceleration
C) Use S3 Multi-part Upload & S3 Transfer Acceleration
You would like to retrieve a subset of your dataset stored in S3 with the .csv format. You would like to retrieve a month of data and only 3 columns out of 10, to minimize compute and network costs. What should you use?
A) S3 Analytics
B) S3 Access Logs
C) S3 Select
D) S3 Inventory
C) S3 Select
You have a paid content that is stored in the S3 bucket. You want to distribute that content globally, so you have set up a CloudFront Distribution and configured the S3 bucket to only exchange data with your CloudFront Distribution. Which CloudFront feature allows you to securely distribute this paid content?
A) Origin Access Identity
B) S3 Pre-Signed URL
C) CloudFront Signed URL
D) CloudFront Invalidations
C) CloudFront Signed URL
CloudFront Signed URLs are commonly used to distribute paid content through dynamically generated signed URLs.
You have a CloudFront Distribution that serves your website hosted on a fleet of EC2 instances behind an Application Load Balancer. All your clients are from the United States, but you found that some malicious requests are coming from other countries. What should you do to only allow users from the US and block other countries?
A) Use CloudFront Geo Restriction
B) Use Origin Access Identity
C) Set up a security group and attach it to your CloudFront Distribution
D) Use a Route 53 Latency record and attach it to CloudFront
A) Use CloudFront Geo Restriction
You have a static website hosted on an S3 bucket. You have created a CloudFront Distribution that points to your S3 bucket to better serve your requests and improve performance. After a while, you noticed that users can still access your website directly from the S3 bucket. You want to enforce users to access the website only through CloudFront. How would you achieve that?
A) Send an email to your clients and tell them not to use the S3 endpoint
B) Configure your CloudFront Distribution and create an Origin Access Identity, then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution OAI user
C) Use S3 Access Points to redirect clients to CloudFront
B) Configure your CloudFront Distribution and create an Origin Access Identity, then update your S3 Bucket Policy to only accept requests from your CloudFront Distribution OAI user
A website is hosted on a set of EC2 instances fronted by an Application Load Balancer. You have created a CloudFront Distribution and set up its origin to point to your ALB. What should you use to provide access to hundreds of private files served by your CloudFront distribution?
A) CloudFront Signed URLs
B) CloudFront Origin Access Identity
C) CloudFront Signed Cookies
D) CloudFront HTTPS Encryption
C) CloudFront Signed Cookies
Signed Cookies are useful when you want to access multiple files.
You are creating an application that is going to expose an HTTP REST API. There is a need to provide request routing rules at the HTTP level. Due to security requirements, your application can only be exposed through the use of two static IP addresses. How can you create a solution that validates these requirements?
A) Use a Network Load Balancer and attach Elastic IPs to it
B) Use AWS Global Accelerator and an Application Load Balancer
C) Use an Application Load Balancer and attach Elastic IPs to it
D) Use CloudFront with Elastic IP and an Application Load Balancer
B) Use AWS Global Accelerator and an Application Load Balancer
AWS Global Accelerator will provide us with the two static IP addresses and the ALB will provide us with the HTTP routing rules.
What does this S3 bucket policy do?
{ "Version": "2012-10-17", "Id": "Mystery policy", "Statement": [{ "Sid": "What could it be?", "Effect": "Allow", "Principal": { "CanonicalUser": "CloudFront Origin Identity Canonical User ID" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::examplebucket/*" }] }
A) Forces GetObject request to be encrypted if coming from CloudFront
B) Only allows the S3 bucket content to be accessed from your CloudFront Distribution Origin Access Identity
C) Only allows GetObject type of request on the S3 bucket from anybody
B) Only allows the S3 bucket content to be access from your CloudFront Distribution Origin Access Identity
You need to move hundreds of Terabytes into Amazon S3, then process the data using a fleet of EC2 instances. You have a 1 Gbit/s broadband. You would like to move the data faster and possibly processing it while in transit. What do you recommend?
A) Use your network
B) Use Snowcone
C) Use AWS Data Migration
D) Use Snowball Edge
D) Use Snowball Edge
Snowball Edge is the right answer as it comes with computing capabilities and allows you to pre-process the data while it’s being moved into Snowball.
You want to expose virtually infinite storage for your tape backups. You want to keep the same software you’re using and want an iSCSI compatible interface. What do you use?
A) AWS Snowball
B) AWS Storage Gateway - Tape Gateway
C) AWS Storage Gateway - Volume Gateway
D) AWS Storage Gateway - File Gateway
B) AWS Storage Gateway - Tape Gateway
Your EC2 Windows Servers need to share some data by having a Network File System mounted on them which respects the Windows security mechanisms and has integration with Microsoft Active Directory. What do you recommend?
A) Amazon FSx for Windows (File Server)
B) Amazon EFS
C) Amazon FSx for Lustre
D) Amazon S3 with File Gateway
A) Amazon FSx for Windows (File Server)
You have hundreds of Terabytes that you want to migrate to AWS S3 as soon as possible. You tried to use your network bandwidth and it will take around 3 weeks to complete the upload process. What is the recommended approach to using in this situation?
A) AWS Storage Gateway - Volume Gateway
B) S3 Multi-Part Upload
C) AWS Snowball Edge
D) AWS Data Migration Service
C) AWS Snowball Edge
You have a large dataset stored in S3 that you want to access from on-premises servers using the NFS or SMB protocol. Also, you want to authenticate access to these files through on-premises Microsoft AD. What would you use?
A) AWS Storage Gateway - Volume Gateway
B) AWS Storage Gateway - File Gateway
C) AWS Storage Gateway - Tape Gateway
D) AWS Data Migration Service
B) AWS Storage Gateway - File Gateway
You are planning to migrate your company’s infrastructure from on-premises to AWS Cloud. You have an on-premises Microsoft Windows File Server that you want to migrate. What is the most suitable AWS service you can use?
A) Amazon FSx for Windows (File Server)
B) AWS Storage Gateway - File Gateway
C) AWS Managed Microsoft AD
A) Amazon FSx for Windows (File Server)
You would like to have a distributed POSIX compliant file system that will allow you to maximize the IOPS in order to perform some High-Performance Computing (HPC) and genomics computational research. This file system has to easily scale to millions of IOPS. What do you recommend?
A) EFS with Max. IO enabled
B) Amazon FSx for Lustre
C) Amazon S3 mounted on the EC2 instances
D) EC2 instance Store
B) Amazon FSx for Lustre
Which deployment option in the FSx file system provides you with long-term storage that’s replicated within AZ?
A) Scratch File System
B) Persistent File System
B) Persistent File System
Provides long-term storage where data is replicated within the same AZ. Failed files were replaced within minutes.
Which of the following protocols is NOT supported by AWS Transfer Family?
A) File Transfer Protocol (FTP)
B) File Transfer Protocol over SSL (FTPS)
C) Transport Layer Security (TLS)
D) Secure File Transfer Protocol (SFTP)
C) Transport Layer Security (TLS)
AWS Transfer Family is a managed service for file transfers into and out of S3 or EFS using the FTP protocol, thus TLS is not supported.
You have an e-commerce website and you are preparing for Black Friday which is the biggest sale of the year. You expect that your traffic will increase by 100x. Your website already using an SQS Standard Queue, and you’re running a fleet of EC2 instances in an Auto Scaling Group to consume SQS messages. What should you do to prepare your SQS Queue?
A) Contact AWS Support to pre-warm your SQS Standard Queue
B) Enable Auto Scaling in your SQS Queue
C) Increase the capacity of the SQS Queue
D) Do nothing, SQS scales automatically
D) Do nothing, SQS scales automatically
How would you configure your SQS messages to be processed by consumers only after 5 minutes of being published to your SQS Queue?
A) Increase the DelaySeconds parameter
B) Change the Visibility Timeout
C) Enable Long Polling
D) Use Amazon SQS Extended Client
A) Increase the DelaySeconds parameter
SQS Delay Queues is a period of time during which Amazon SQS keeps new SQS messages invisible to consumers. In SQS Delay Queues, a message is hidden when it is first added to the queue. (default: 0 mins, max.: 15 mins)
You have an SQS Queue where each consumer polls 10 messages at a time and finishes processing them in 1 minute. After a while, you noticed that the same SQS messages are received by different consumers resulting in your messages being processed more than once. What should you do to resolve this issue?
A) Enable Long Polling
B) Add DelaySeconds parameter to the messages when being produced
C) Increase the Visibility Timeout
D) Decrease the Visibility Timeout
C) Increase the Visibility Timeout
SQS Visibility Timeout is a period of time during which Amazon SQS prevents other consumers from receiving and processing the message again. In Visibility Timeout, a message is hidden only after it is consumed from the queue. Increasing the Visibility Timeout gives more time to the consumer to process the message and prevent duplicate reading of the message. (default: 30 sec., min.: 0 sec., max.: 12 hours)