Quizzes Flashcards
This deck contains the AWS Udemy course quizzes and their answers
What is a proper definition of an IAM Role?
- IAM Users in multiple User Groups
- An IAM entity that defines a password policy for IAM Users
- An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS Service
- Permissions assigned to IAM Users to perform actions
An IAM entity that defines a set of permissions for making requests to AWS services, and will be used by an AWS Service
Some AWS services need to perform actions on your behalf. To do so, you assign permissions to AWS services with IAM Roles.
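For illustration, here is a minimal boto3 (Python) sketch of this idea: a role whose trust policy lets the EC2 service assume it. The role and policy names are placeholders, not from the course.

```python
import json
import boto3

iam = boto3.client("iam")

# The trust policy declares WHICH AWS service may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="MyEC2Role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach a managed policy so the role actually grants permissions.
iam.attach_role_policy(
    RoleName="MyEC2Role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```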
Which of the following is an IAM Security Tool?
- IAM Credentials Report
- IAM Root Account Manager
- IAM Services Report
- IAM Security Advisor
IAM Credentials Report
IAM Credentials Report lists all your AWS Account’s IAM Users and the status of their various credentials.
Which answer is INCORRECT regarding IAM Users?
- IAM Users can belong to multiple User Groups
- IAM Users don’t have to belong to a User Group
- IAM Policies can be attached directly to IAM Users
- IAM Users access AWS Services using root account credentials
IAM Users access AWS Services using root account credentials
IAM Users access AWS services using their own credentials (username & password or Access Keys).
Which of the following is an IAM best practice?
- Create several IAM Users for one physical person
- Don’t use the root user account
- Share your AWS account credentials with your colleague, so (s)he can perform a task for you
- Do not enable MFA for easier access
Don’t use the root user account
Use the root account only to create your first IAM User and a few account/service management tasks. For everyday tasks, use an IAM User.
What are IAM Policies?
- A set of policies that defines how AWS accounts interact with each other
- JSON document that defines a set of permissions for making requests to AWS services, and can be used by AWS Users, User Groups, and IAM Roles
- A set of policies that define a password for IAM Users
- A set of policies defined by AWS that show how customers interact with AWS
JSON document that defines a set of permissions for making requests to AWS services, and can be used by AWS Users, User Groups, and IAM Roles
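As a hedged sketch (boto3; the user, policy, and bucket names are hypothetical), this is what such a JSON document can look like when attached inline to an IAM User:

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",  # policy-level field
    "Statement": [{           # statement-level fields: Sid/Effect/Action/Resource
        "Sid": "AllowBucketRead",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-example-bucket",
            "arn:aws:s3:::my-example-bucket/*",
        ],
    }],
}

iam.put_user_policy(
    UserName="alice",              # hypothetical user
    PolicyName="S3ReadOnlyInline", # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)
```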
Which principle should you apply regarding IAM Permissions?
- Grant most privilege
- Grant more permissions if your employee asks you to
- Grant least privilege
- Restrict root account permissions
Grant least privilege
Don’t give more permissions than the user needs.
What should you do to increase your root account security?
- Remove permissions from the root account
- Only access AWS services through AWS Command Line Interface (CLI)
- Don’t create IAM Users, only access your AWS account using the root account
- Enable Multi-Factor Authentication (MFA)
Enable Multi-Factor Authentication (MFA)
When you enable MFA, this adds another layer of security. Even if your password is stolen, lost, or hacked, your account is not compromised.
IAM User Groups can contain IAM Users and other User Groups
- True
- False
False
IAM User Groups can contain only IAM Users.
An IAM policy consists of one or more statements. A statement in IAM Policy consists of the following, EXCEPT:
- Effect
- Principal
- Version
- Action
- Resource
Version
A statement in an IAM Policy consists of Sid, Effect, Principal, Action, Resource, and Condition. Version is part of the IAM Policy itself, not the statement.
Which EC2 Purchasing Option can provide you the biggest discount, but it is not suitable for critical jobs or databases?
- Convertible Reserved Instances
- Dedicated Hosts
- Spot Instances
Spot Instances
Spot Instances are good for short workloads and are the cheapest EC2 Purchasing Option, but they are less reliable because you can lose your EC2 instance at any time.
What should you use to control traffic in and out of EC2 instances?
- Network Access Control List (NACL)
- Security Groups
- IAM Policies
Security Groups
Security Groups operate at the EC2 instance level and can control traffic.
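For example, a minimal boto3 sketch (the Security Group ID is a placeholder) that opens inbound HTTPS to the instances using the group:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound TCP 443 from anywhere to instances using this group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```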
How long can you reserve an EC2 Reserved Instance?
- 1 or 3 years
- 2 or 4 years
- 6 months or 1 year
- Anytime between 1 and 3 years
1 or 3 years
EC2 Reserved Instances can be reserved for 1 or 3 years only.
You would like to deploy a High-Performance Computing (HPC) application on EC2 instances. Which EC2 instance type should you choose?
- Storage Optimized
- Compute Optimized
- Memory Optimized
- General Purpose
Compute Optimized
Compute Optimized EC2 instances are great for compute-intensive workloads requiring high-performance processors (e.g., batch processing, media transcoding, high-performance computing, scientific modeling & machine learning, and dedicated gaming servers).
Which EC2 Purchasing Option should you use for an application you plan to run on a server continuously for 1 year?
- Reserved Instances
- Spot Instances
- On-Demand Instances
Reserved Instances
Reserved Instances are good for long workloads. You can reserve EC2 instances for 1 or 3 years.
You are preparing to launch an application that will be hosted on a set of EC2 instances. This application needs some software installation and some OS packages need to be updated during the first launch. What is the best way to achieve this when you launch the EC2 instances?
- Connect to each EC2 instance using SSH, then install the required software and update your OS packages manually
- Write a bash script that installs the required software and updates to your OS, then contact AWS Support and provide them with the script. They will run it on your EC2 instances at launch
- Write a bash script that installs the required software and updates to your OS, then use this script in EC2 User Data when you launch your EC2 instances
Write a bash script that installs the required software and updates to your OS, then use this script in EC2 User Data when you launch your EC2 instances
EC2 User Data is used to bootstrap your EC2 instances using a bash script. This script can contain commands such as installing software/packages, downloading files from the Internet, or anything else you need.
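A hedged sketch (boto3; the AMI ID is a placeholder) of launching an instance with a User Data bash script:

```python
import boto3

ec2 = boto3.client("ec2")

# This script runs as root on first boot.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # boto3 base64-encodes this for you
)
```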
Which EC2 Instance Type should you choose for a critical application that uses an in-memory database?
- Compute Optimized
- Storage Optimized
- Memory Optimized
- General Purpose
Memory Optimized
Memory Optimized EC2 instances are great for workloads requiring large data sets in memory.
Security Groups can be attached to only one EC2 instance.
- True
- False
False
Security Groups can be attached to multiple EC2 instances within the same AWS Region/VPC.
You have an e-commerce application with an OLTP database hosted on-premises. This application has popularity which results in its database having thousands of requests per second. You want to migrate the database to an EC2 instance. Which EC2 Instance Type should you choose to handle this high-frequency OLTP database?
- Compute Optimized
- Storage Optimized
- Memory Optimized
- General Purpose
Storage Optimized
Storage Optimized EC2 instances are great for workloads requiring high, sequential read/write access to large data sets on local storage.
You’re planning to migrate on-premises applications to AWS. Your company has strict compliance requirements that require your applications to run on dedicated servers. You also need to use your own server-bound software license to reduce costs. Which EC2 Purchasing Option is suitable for you?
- Convertible Reserved Instances
- Dedicated Hosts
- Spot Instances
Dedicated Hosts
Dedicated Hosts are good for companies with strong compliance needs or for software that has complicated licensing models. This is the most expensive EC2 Purchasing Option available.
You would like to deploy a database technology on an EC2 instance and the vendor license bills you based on the physical cores and underlying network socket visibility. Which EC2 Purchasing Option allows you to get visibility into them?
- Spot Instances
- On-Demand
- Dedicated Hosts
- Reserved Instances
Dedicated Hosts
Spot Fleet is a set of Spot Instances and optionally ……………
- Reserved Instances
- On-Demand Instances
- Dedicated Hosts
- Dedicated Instances
On-Demand Instances
Spot Fleet is a set of Spot Instances and optionally On-Demand Instances. It allows you to automatically request Spot Instances with the lowest price.
You have launched an EC2 instance that will host a NodeJS application. After installing all the required software and configuring your application, you noted down the EC2 instance’s public IPv4 so you can access it. Then, you stopped and started your EC2 instance to complete the application configuration. After the restart, you can’t access the EC2 instance, and you found that its public IPv4 has changed. What should you do to assign a fixed public IPv4 to your EC2 instance?
- Allocate an Elastic IP and assign it to your EC2 instance
- From inside your EC2 instance OS, change network configuration from DHCP to static and assign a public IPv4
- Contact AWS Support and request a fixed public IPv4 to your EC2 instance
- This can’t be done, you can only assign a fixed private IPv4 to your EC2 instance
Allocate an Elastic IP and assign it to your EC2 instance
An Elastic IP is a public IPv4 address that stays yours for as long as you keep it allocated, and you can attach it to one EC2 instance at a time.
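A minimal boto3 sketch of that answer (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP, then bind it to the instance.
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=alloc["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # placeholder
)
print("Fixed public IPv4:", alloc["PublicIp"])
```

The address survives stop/start cycles until you explicitly disassociate and release it.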
You have an application performing big data analysis hosted on a fleet of EC2 instances. You want to ensure your EC2 instances have the highest networking performance while communicating with each other. Which EC2 Placement Group should you choose?
- Spread Placement Group
- Cluster Placement Group
- Partition Placement Group
Cluster Placement Group
Cluster Placement Groups place your EC2 instances next to each other, which gives you high-performance computing and networking.
You have a critical application hosted on a fleet of EC2 instances in which you want to achieve maximum availability when there’s an AZ failure. Which EC2 Placement Group should you choose?
- Spread Placement Group
- Cluster Placement Group
- Partition Placement Group
Spread Placement Group
Spread Placement Group places your EC2 instances on different physical hardware across different AZs.
Elastic Network Interface (ENI) can be attached to EC2 instances in another AZ.
- True
- False
False
Elastic Network Interfaces (ENIs) are bound to a specific AZ. You cannot attach an ENI to an EC2 instance in a different AZ.
The following are true regarding EC2 Hibernate, EXCEPT:
- EC2 Instance Root Volume must be an Instance Store Volume
- Supports On-Demand and Reserved Instances
- EC2 Instance RAM must be less than 150GB
- EC2 Instance Root Volume type must be an EBS Volume
EC2 Instance Root Volume must be an Instance Store Volume
To enable EC2 Hibernate, the EC2 Instance Root Volume type must be an EBS volume and must be encrypted to ensure the protection of sensitive content.
You have just terminated an EC2 instance in us-east-1a, and its attached EBS volume is now available. Your teammate tries to attach it to an EC2 instance in us-east-1b but he can’t. What is a possible cause for this?
- He’s missing IAM permissions
- EBS volumes are locked to an AWS Region
- EBS volumes are locked to an Availability Zone
EBS volumes are locked to an Availability Zone
EBS Volumes are created for a specific AZ. It is possible to migrate them between different AZs using EBS Snapshots.
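A hedged sketch (boto3; IDs and AZ names are placeholders) of that snapshot-based migration:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot the volume in its source AZ...
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    Description="Migrate to us-east-1b",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# ...then materialize a new volume from it in the target AZ.
ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1b",
)
```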
You have launched an EC2 instance with two EBS volumes: the Root volume and an additional EBS volume to store data. A month later you are planning to terminate the EC2 instance. What’s the default behavior that will happen to each EBS volume?
- Both the Root volume and the additional EBS volume will be deleted
- The Root volume will be deleted and the additional EBS volume will not be deleted
- The Root volume will not be deleted and the additional EBS volume will be deleted
- Neither the Root volume nor the additional EBS volume will be deleted
The Root volume will be deleted and the additional EBS volume will not be deleted
By default, the Root volume is deleted because its “Delete On Termination” attribute is checked by default. Any other attached EBS volume is not deleted because its “Delete On Termination” attribute is disabled by default.
You can use an AMI in the N. Virginia Region (us-east-1) to launch an EC2 instance in any AWS Region.
- True
- False
False
AMIs are built for a specific AWS Region and are unique to each Region. You can’t launch an EC2 instance using an AMI from another AWS Region, but you can copy the AMI to the target Region and then use it to create your EC2 instances.
Which of the following EBS volume types can be used as boot volumes when you create EC2 instances?
- gp2, gp3, st1, sc1
- gp2, gp3, io1, io2
- io1, io2, st1, sc1
gp2, gp3, io1, io2
When creating EC2 instances, you can only use the following EBS volume types as boot volumes: gp2, gp3, io1, io2, and Magnetic (Standard).
What is EBS Multi-Attach?
- Attach the same EBS volume to multiple EC2 instances in multiple AZs
- Attach multiple EBS volumes in the same AZ to the same EC2 instance
- Attach the same EBS volume to multiple EC2 instances in the same AZ
- Attach multiple EBS volumes in multiple AZs to the same EC2 instance
Attach the same EBS volume to multiple EC2 instances in the same AZ
Using EBS Multi-Attach, you can attach the same EBS volume to multiple EC2 instances in the same AZ. Each EC2 instance has full read/write permissions.
You would like to encrypt an unencrypted EBS volume attached to your EC2 instance. What should you do?
- Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume
- Select your EBS volume, choose Edit Attributes, then tick the Encrypt using KMS option
- Create a new encrypted EBS volume, then copy data from your unencrypted EBS volume to the new EBS volume
- Submit a request to AWS Support to encrypt your EBS volume
Create an EBS snapshot of your EBS volume. Copy the snapshot and tick the option to encrypt the copied snapshot. Then, use the encrypted snapshot to create a new EBS volume
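A hedged sketch (boto3; IDs, Region, and AZ are placeholders) of the snapshot/copy/restore flow described in the answer:

```python
import boto3

ec2 = boto3.client("ec2")

# 1. Snapshot the unencrypted volume.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")  # placeholder
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Copy the snapshot with encryption enabled.
copy = ec2.copy_snapshot(
    SourceSnapshotId=snap["SnapshotId"],
    SourceRegion="us-east-1",  # placeholder
    Encrypted=True,
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

# 3. Create the new (now encrypted) volume and swap it onto the instance.
ec2.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone="us-east-1a")
```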
You have a fleet of EC2 instances distributed across AZs that process a large data set. What do you recommend to make the same data accessible as an NFS drive to all of your EC2 instances?
- Use EBS
- Use EFS
- Use an Instance Store
Use EFS
EFS is a network file system (NFS) that allows you to mount the same file system on EC2 instances that are in different AZs.
You would like to have a high-performance local cache for your application hosted on an EC2 instance. You don’t mind losing the cache upon the termination of your EC2 instance. Which storage mechanism do you recommend as a Solutions Architect?
- EBS
- EFS
- Instance Store
Instance Store
EC2 Instance Store provides the best disk I/O performance.
You are running a high-performance database that requires an IOPS of 310,000 for its underlying storage. What do you recommend?
- Use an EBS gp2 drive
- Use an EBS io1 drive
- Use an EC2 Instance Store
- Use an EBS io2 Block Express drive
Use an EC2 Instance Store
You can run a database on an EC2 instance that uses an Instance Store, but the data will be lost if the EC2 instance is stopped (it can be restarted without problems). One solution is to set up a replication mechanism on another EC2 instance with an Instance Store to keep a standby copy. Another solution is to set up backup mechanisms for your data. It’s up to you how you architect to meet your requirements. In this use case, the requirement is IOPS, so we have to choose an EC2 Instance Store.
Scaling an EC2 instance from `r4.large` to `r4.4xlarge` is called …………………
- Horizontal Scalability
- Vertical Scalability
Vertical Scalability
Running an application on an Auto Scaling Group that scales the number of EC2 instances in and out is called …………………
- Horizontal Scalability
- Vertical Scalability
Horizontal Scalability
Elastic Load Balancers provide a …………………..
- static IPv4 we can use in our application
- static DNS name we can use in our application
- static IPv6 we can use in our application
static DNS name we can use in our application
Only Network Load Balancer provides both a static DNS name and static IP addresses, while Application Load Balancer provides a static DNS name but does NOT provide a static IP. The reason is that AWS wants your Elastic Load Balancer to be accessible using a static endpoint, even if the underlying infrastructure that AWS manages changes.
You are running a website on 10 EC2 instances fronted by an Elastic Load Balancer. Your users are complaining about the fact that the website always asks them to re-authenticate when they are moving between website pages. You are puzzled because it’s working just fine on your machine and in the Dev environment with 1 EC2 instance. What could be the reason?
- Your website must have an issue when hosted on multiple EC2 instances
- The EC2 instances log out users as they can’t see their IP addresses, instead, they receive ELB IP addresses
- The Elastic Load Balancer does not have Sticky Sessions enabled
The Elastic Load Balancer does not have Sticky Sessions enabled
The ELB Sticky Session feature ensures traffic for the same client is always redirected to the same target (e.g., EC2 instance). This helps ensure the client does not lose their session data.
You are using an Application Load Balancer to distribute traffic to your website hosted on EC2 instances. It turns out that your website only sees traffic coming from private IPv4 addresses which are in fact your Application Load Balancer’s IP addresses. What should you do to get the IP address of clients connected to your website?
- Modify your website’s frontend so that users send their IP in every request
- Modify your website’s backend to get the client IP address from the X-Forwarded-For header
- Modify your website’s backend to get the client IP address from the X-Forwarded-Port header
- Modify your website’s backend to get the client IP address from the X-Forwarded-Proto header
Modify your website’s backend to get the client IP address from the X-Forwarded-For header
When using an Application Load Balancer to distribute traffic to your EC2 instances, the IP address you’ll receive requests from will be the ALB’s private IP addresses. To get the client’s IP address, the ALB adds an additional header called “X-Forwarded-For” that contains the client’s IP address.
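A hedged illustration (Python with Flask, an assumed framework not mentioned in the course) of reading that header in your backend. X-Forwarded-For can carry a comma-separated proxy chain; the left-most entry is the original client.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Fall back to the socket peer (the ALB itself) if the header is absent.
    forwarded = request.headers.get("X-Forwarded-For", request.remote_addr)
    client_ip = forwarded.split(",")[0].strip()
    return f"Client IP: {client_ip}"
```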
You hosted an application on a set of EC2 instances fronted by an Elastic Load Balancer. A week later, users begin complaining that sometimes the application just doesn’t work. You investigate the issue and find that some EC2 instances crash from time to time. What should you do to protect users from connecting to the EC2 instances that are crashing?
- Enable ELB Health Checks
- Enable ELB Stickiness
- Enable SSL Termination
- Enable Cross-Zone Load Balancing
Enable ELB Health Checks
When you enable ELB Health Checks, your ELB won’t send traffic to unhealthy (crashed) EC2 instances.
You are working as a Solutions Architect for a company and you are required to design an architecture for a high-performance, low-latency application that will receive millions of requests per second. Which type of Elastic Load Balancer should you choose?
- Application Load Balancer
- Classic Load Balancer
- Network Load Balancer
Network Load Balancer
Network Load Balancer provides the highest performance and lowest latency if your application needs it.
Application Load Balancers support the following protocols, EXCEPT:
- HTTP
- HTTPS
- TCP
- WebSocket
TCP
Application Load Balancers support HTTP, HTTPS and WebSocket.
Application Load Balancers can route traffic to different Target Groups based on the following, EXCEPT:
- Client’s Location (Geography)
- Hostname
- Request URL Path
- Source IP Address
Client’s Location (Geography)
ALBs can route traffic to different Target Groups based on URL Path, Hostname, HTTP Headers, Query Strings, and Source IP Address.
Registered targets in a Target Group for an Application Load Balancer can be one of the following, EXCEPT:
- EC2 Instances
- Network Load Balancer
- Private IP Addresses
- Lambda Functions
Network Load Balancer
For compliance purposes, you would like to expose a fixed static IP address to your end-users so that they can write firewall rules that will be stable and approved by regulators. What type of Elastic Load Balancer would you choose?
- Application Load Balancer with an Elastic IP attached to it
- Network Load Balancer
- Classic Load Balancer
Network Load Balancer
Network Load Balancer has one static IP address per AZ and you can attach an Elastic IP address to it. Application Load Balancers and Classic Load Balancers have a static DNS name.
You want to create a custom application-based cookie in your Application Load Balancer. Which of the following can you use as a cookie name?
- AWSALBAPP
- APPUSERC
- AWSALBTG
- AWSALB
APPUSERC
The following cookie names are reserved by the ELB (AWSALB, AWSALBAPP, AWSALBTG).
You have a Network Load Balancer that distributes traffic across a set of EC2 instances in us-east-1. You have 2 EC2 instances in the us-east-1b AZ and 5 EC2 instances in the us-east-1e AZ. You have noticed that the CPU utilization is higher in the EC2 instances in the us-east-1b AZ. After more investigation, you noticed that the traffic is equally distributed across the two AZs. How would you solve this problem?
- Enable Cross-Zone Load Balancing
- Enable Sticky Sessions
- Enable ELB Health Checks
- Enable SSL Termination
Enable Cross-Zone Load Balancing
When Cross-Zone Load Balancing is enabled, ELB distributes traffic evenly across all registered EC2 instances in all AZs.
Which feature in both Application Load Balancers and Network Load Balancers allows you to load multiple SSL certificates on one listener?
- TLS Termination
- Server Name Indication (SNI)
- SSL Security Policies
- Host Headers
Server Name Indication (SNI)
You have an Application Load Balancer that is configured to redirect traffic to 3 Target Groups based on the following hostnames: users.example.com, api.external.example.com, and checkout.example.com. You would like to configure HTTPS for each of these hostnames. How do you configure the ALB to make this work?
- Use an HTTP to HTTPS redirect rule
- Use a security group SSL certificate
- Use Server Name Indication (SNI)
Use Server Name Indication (SNI)
Server Name Indication (SNI) allows you to expose multiple HTTPS applications each with its own SSL certificate on the same listener. Read more here: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
You have an application hosted on a set of EC2 instances managed by an Auto Scaling Group, for which you configured both the desired and maximum capacity to 3. You have also created a CloudWatch Alarm that is configured to scale out your ASG when CPU Utilization reaches 60%. Your application suddenly received huge traffic and is now running at 80% CPU Utilization. What will happen?
- Nothing
- The desired capacity will go up to 4 and the maximum capacity will stay at 3
- The desired capacity will go up to 4 and the maximum capacity will stay at 4
Nothing
The Auto Scaling Group can’t go over the maximum capacity (you configured) during scale-out events.
You have an Auto Scaling Group fronted by an Application Load Balancer. You have configured the ASG to use ALB Health Checks, and one EC2 instance has just been reported unhealthy. What will happen to the EC2 instance?
- The ASG will keep the instance running and re-start the application
- The ASG will detach the EC2 instance and leave it running
- The ASG will terminate the EC2 instance
The ASG will terminate the EC2 instance
You can configure the Auto Scaling Group to determine the EC2 instances’ health based on Application Load Balancer Health Checks instead of EC2 Status Checks (default). When an EC2 instance fails the ALB Health Checks, it is marked unhealthy and will be terminated while the ASG launches a new EC2 instance.
Your boss asked you to scale your Auto Scaling Group based on the number of requests per minute your application makes to your database. What should you do?
- Create a CloudWatch custom metric then create a Cloudwatch Alarm on this metric to scale your ASG
- You politely tell him it’s impossible
- Enable Detailed Monitoring then create a CloudWatch alarm to scale your ASG
Create a CloudWatch custom metric then create a Cloudwatch Alarm on this metric to scale your ASG
There’s no CloudWatch Metric for “requests per minute” for backend-to-database connections. You need to create a CloudWatch Custom Metric, then create a CloudWatch Alarm.
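A minimal boto3 sketch of publishing such a custom metric (the namespace, metric name, and value are hypothetical); the CloudWatch Alarm then watches this metric:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MyApp",  # hypothetical namespace
    MetricData=[{
        "MetricName": "DatabaseRequestsPerMinute",
        "Value": 4200,  # measured by your application
        "Unit": "Count",
    }],
)
```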
An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, you manually scale the ASG and you would like to define a Scaling Policy that will ensure the average number of connections to your EC2 instances is around 1000. Which Scaling Policy should you use?
- Simple Scaling Policy
- Step Scaling Policy
- Target Tracking Policy
- Scheduled Scaling Policy
Target Tracking Policy
You have an ASG and a Network Load Balancer. The application on your ASG supports the HTTP protocol and is integrated with the Load Balancer health checks. You are currently using TCP health checks and would like to migrate to HTTP health checks. What do you do?
- Migrate to an Application Load Balancer
- Migrate the health check to HTTP
Migrate the health check to HTTP
The NLB supports HTTP health checks, as well as TCP and HTTPS.
You have a website hosted in EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. Currently, the website is served over HTTP, and you have been tasked to configure it to use HTTPS. You have created a certificate in ACM and attached it to the Application Load Balancer. What can you do to force users to access the website using HTTPS instead of HTTP?
- Send an email to all customers to use HTTPS instead of HTTP
- Configure the Application Load Balancer to redirect HTTP to HTTPS
- Configure the DNS record to redirect HTTP to HTTPS
Configure the Application Load Balancer to redirect HTTP to HTTPS
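A hedged sketch (boto3; the load balancer ARN is a placeholder) of an HTTP listener whose default action performs that redirect:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Listener on port 80 that 301-redirects every request to HTTPS:443.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef",  # placeholder
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",
        },
    }],
)
```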
Amazon RDS supports the following databases, EXCEPT:
- MongoDB
- MySQL
- MariaDB
- Microsoft SQL Server
MongoDB
RDS supports MySQL, PostgreSQL, MariaDB, Oracle, MS SQL Server, and Amazon Aurora.
You’re planning for a new solution that requires a MySQL database that must be available even in case of a disaster in one of the Availability Zones. What should you use?
- Create Read Replicas
- Enable Encryption
- Enable Multi-AZ
Enable Multi-AZ
Multi-AZ helps when you plan a disaster recovery for an entire AZ going down. If you plan against an entire AWS Region going down, you should use backups and replication across AWS Regions.
We have an RDS database that struggles to keep up with the demand of requests from our website. Our million users mostly read news, and we don’t post news very often. Which solution is NOT adapted to this problem?
- An ElastiCache Cluster
- RDS Multi-AZ
- RDS Read Replicas
RDS Multi-AZ
ElastiCache and RDS Read Replicas do indeed help with scaling reads.
You have set up read replicas on your RDS database, but users are complaining that upon updating their social media posts, they do not see their updated posts right away. What is a possible cause for this?
- There must be a bug in your application
- Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read eventually consistent data
- You should have setup Multi-AZ instead
Read Replicas have Asynchronous Replication, therefore it’s likely your users will only read eventually consistent data
Which RDS (NOT Aurora) feature when used does not require you to change the SQL connection string?
- Multi-AZ
- Read Replicas
Multi-AZ
Multi-AZ keeps the same connection string regardless of which database is up.
Your application is running on a fleet of EC2 instances managed by an Auto Scaling Group behind an Application Load Balancer. Users have to constantly log back in, and you don’t want to enable Sticky Sessions on your ALB as you fear it will overload some EC2 instances. What should you do?
- Use your own custom Load Balancer on EC2 instances instead of using ALB
- Store session data in RDS
- Store session data in ElastiCache
- Store session data in a shared EBS volume
Store session data in ElastiCache
Storing Session Data in ElastiCache is a common pattern to ensure different EC2 instances can retrieve your user’s state if needed.
An analytics application is currently performing its queries against your main production RDS database. These queries run at any time of the day and slow down the RDS database which impacts your users’ experience. What should you do to improve the users’ experience?
- Setup a Read Replica
- Setup Multi-AZ
- Run the analytics queries at night
Setup a Read Replica
Read Replicas will help as your analytics application can now perform queries against it, and these queries won’t impact the main production RDS database.
You would like to ensure you have a replica of your database available in another AWS Region if a disaster happens to your main AWS Region. Which database do you recommend to implement this easily?
- RDS Read Replicas
- RDS Multi-AZ
- Aurora Read Replicas
- Aurora Global Database
Aurora Global Database
Aurora Global Database allows you to have Aurora Replicas in other AWS Regions, with up to 5 secondary Regions.
How can you enhance the security of your ElastiCache Redis Cluster by allowing users to access your ElastiCache Redis Cluster using their IAM Identities (e.g., Users, Roles)?
- Using Redis Authentication
- Using IAM Authentication
- Use Security Groups
Using IAM Authentication
Your company has a production Node.js application that is using RDS MySQL 5.6 as its database. A new application programmed in Java will perform some heavy analytics workload to create a dashboard on a regular hourly basis. What is the most cost-effective solution you can implement to minimize disruption for the main application?
- Enable Multi-AZ for the RDS database and run the analytics workload on the standby database
- Create a Read Replica in a different AZ and run the analytics on the replica database
- Create a Read Replica in a different AZ and run the analytics workload on the source database
Create a Read Replica in a different AZ and run the analytics on the replica database
You would like to create a disaster recovery strategy for your RDS PostgreSQL database so that in case of a regional outage the database can be quickly made available for both read and write workloads in another AWS Region. The DR database must be highly available. What do you recommend?
- Create a Read Replica in the same region and enable Multi-AZ on the main database
- Create a Read Replica in a different region and enable Multi-AZ on the Read Replica
- Create a Read Replica in the same region and enable Multi-AZ on the Read Replica
- Enable Multi-Region option on the main database
Create a Read Replica in a different region and enable Multi-AZ on the Read Replica
You have migrated the MySQL database from on-premises to RDS. You have a lot of applications and developers interacting with your database. Each developer has an IAM user in the company’s AWS account. What is a suitable approach to give access to developers to the MySQL RDS DB instance instead of creating a DB user for each one?
- By default IAM users have access to your RDS database
- Use Amazon Cognito
- Enable IAM Database Authentication
Enable IAM Database Authentication
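A hedged sketch (boto3; the endpoint and user are placeholders) of how a developer then connects, using a short-lived auth token instead of a DB password:

```python
import boto3

rds = boto3.client("rds")

# The caller's IAM identity must be allowed the rds-db:connect action.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123xyz.us-east-1.rds.amazonaws.com",  # placeholder
    Port=3306,
    DBUsername="app_user",  # placeholder DB user mapped to IAM auth
)
# Pass `token` as the password to your MySQL client over SSL/TLS.
```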
Which of the following statements is true regarding replication in both RDS Read Replicas and Multi-AZ?
- Read Replica uses Asynchronous Replication and Multi-AZ uses Asynchronous Replication
- Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication
- Read Replica uses Synchronous Replication and Multi-AZ uses Synchronous Replication
- Read Replica uses Synchronous Replication and Multi-AZ uses Asynchronous Replication
Read Replica uses Asynchronous Replication and Multi-AZ uses Synchronous Replication
How do you encrypt an unencrypted RDS DB instance?
- Do it straight from AWS Console, select your RDS DB instance, choose Actions then Encrypt using KMS
- Do it straight from AWS Console, after stopping the RDS DB instance
- Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption”, then restore the RDS DB instance from the encrypted snapshot
Create a snapshot of the unencrypted RDS DB instance, copy the snapshot and tick “Enable encryption”, then restore the RDS DB instance from the encrypted snapshot
For your RDS database, you can have up to ………… Read Replicas.
- 5
- 15
- 7
15
Which RDS database technology does NOT support IAM Database Authentication?
- Oracle
- PostgreSQL
- MySQL
Oracle
You have an un-encrypted RDS DB instance and you want to create Read Replicas. Can you configure the RDS Read Replicas to be encrypted?
- No
- Yes
No
You cannot create encrypted Read Replicas from an unencrypted RDS DB instance.
An application running in production is using an Aurora Cluster as its database. Your development team would like to run a scaled-down version of the application with the ability to perform some heavy workloads on a need-basis. Most of the time, the application will be unused. Your CIO has tasked you with helping the team achieve this while minimizing costs. What do you suggest?
- Use an Aurora Global Database
- Use an RDS Database
- Use Aurora Serverless
- Run Aurora on EC2, and write a script to shut down the EC2 instance at night
Use Aurora Serverless
How many Aurora Read Replicas can you have in a single Aurora DB Cluster?
- 5
- 10
- 15
15
Amazon Aurora supports both …………………….. databases.
- MySQL and MariaDB
- MySQL and PostgreSQL
- Oracle and MariaDB
- Oracle and MS SQL Server
MySQL and PostgreSQL
You work as a Solutions Architect for a gaming company. One of the games mandates that players are ranked in real-time based on their score. Your boss asked you to design then implement an effective and highly available solution to create a gaming leaderboard. What should you use?
- Use RDS for MySQL
- Use Amazon Aurora
- Use ElastiCache for Memcached
- Use ElastiCache for Redis - Sorted Sets
Use ElastiCache for Redis - Sorted Sets
You need full customization of an Oracle Database on AWS. You would like to benefit from using the AWS services. What do you recommend?
- RDS for Oracle
- RDS Custom for Oracle
- Deploy Oracle on EC2
RDS Custom for Oracle
You need to store long-term backups for your Aurora database for disaster recovery and audit purposes. What do you recommend?
- Enable Automated Backups
- Perform On Demand Backups
- Use Aurora Database Cloning
Perform On Demand Backups
Your development team would like to perform a suite of read and write tests against your production Aurora database because they need access to production data as soon as possible. What do you advise?
- Create an Aurora Read Replica for them
- Do the test against the production database
- Make a DB Snapshot and Restore it into a new database
- Use the Aurora Cloning Feature
Use the Aurora Cloning Feature
You have 100 EC2 instances connected to your RDS database, and you see that upon database maintenance all your applications take a lot of time to reconnect to RDS, due to poor application logic. How do you improve this?
- Fix all the applications
- Disable Multi-AZ
- Enable Multi-AZ
- Use an RDS Proxy
Use an RDS Proxy
RDS Proxy reduces the failover time by up to 66% and keeps connections active for your applications.
You have purchased mycoolcompany.com on Amazon Route 53 Registrar and would like the domain to point to your Elastic Load Balancer my-elb-1234567890.us-west-2.elb.amazonaws.com. Which Route 53 Record type must you use here?
- CNAME
- Alias
Alias
You have deployed a new Elastic Beanstalk environment and would like to direct 5% of your production traffic to this new environment. This allows you to monitor CloudWatch metrics and ensure that no bugs exist in your new environment. Which Route 53 Record type allows you to do so?
- Simple
- Weighted
- Latency
- Failover
Weighted
Weighted Routing Policy allows you to redirect part of the traffic based on weight (e.g., percentage). It’s a common use case to send part of traffic to a new version of your application.
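A hedged sketch (boto3; zone ID, record name, and targets are placeholders) of two weighted records that send ~5% of traffic to the new environment:

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, weight, value):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",    # placeholder record name
            "Type": "CNAME",
            "SetIdentifier": identifier,  # must be unique per record
            "Weight": weight,             # relative; need not sum to 100
            "TTL": 60,
            "ResourceRecords": [{"Value": value}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder
    ChangeBatch={"Changes": [
        weighted_record("production", 95, "prod-env.elasticbeanstalk.com"),
        weighted_record("canary", 5, "new-env.elasticbeanstalk.com"),
    ]},
)
```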
You have updated a Route 53 Record’s myapp.mydomain.com value to point to a new Elastic Load Balancer, but it looks like users are still redirected to the old ELB. What is a possible cause for this behavior?
- Because of the Alias record
- Because of the CNAME record
- Because of the TTL
- Because of Route 53 Health Checks
Because of the TTL
Each DNS record has a TTL (Time To Live) which tells clients how long to cache these values so they don’t overload the DNS Resolver with DNS requests. The TTL value should strike a balance between how long the value should be cached and how many requests should go to the DNS Resolver.
You have an application that’s hosted in two different AWS Regions: us-west-1 and eu-west-2. You want your users to get the best possible user experience by minimizing the response time from application servers to your users. Which Route 53 Routing Policy should you choose?
- Multi Value
- Weighted
- Latency
- Geolocation
Latency
Latency Routing Policy will evaluate the latency between your users and AWS Regions, and help them get a DNS response that will minimize their latency (i.e., response time).
You have a legal requirement that people in any country but France should NOT be able to access your website. Which Route 53 Routing Policy helps you in achieving this?
- Latency
- Simple
- Multi Value
- Geolocation
Geolocation
You have purchased a domain on GoDaddy and would like to use Route 53 as the DNS Service Provider. What should you do to make this work?
- Request for a domain transfer
- Create a Private Hosted Zone and update the 3rd party Registrar NS records
- Create a Public Hosted Zone and update the Route 53 NS records
- Create a Public Hosted Zone and update the 3rd party Registrar NS records
Create a Public Hosted Zone and update the 3rd party Registrar NS records
Public Hosted Zones are meant to be used for people requesting your website through the Internet. The NS records must then be updated on the 3rd party Registrar.
Which of the following are NOT valid Route 53 Health Checks?
- Health Check that monitors a SQS Queue
- Health Check that monitors an Endpoint
- Health Check that monitors other Health Checks
- Health Check that monitors CloudWatch Alarms
Health Check that monitors a SQS Queue
Your website TriangleSunglasses.com is hosted on a fleet of EC2 instances managed by an Auto Scaling Group and fronted by an Application Load Balancer. Your ASG has been configured to scale on-demand based on the traffic going to your website. To reduce costs, you have configured the ASG to scale based on the traffic going through the ALB. To make the solution highly available, you have updated your ASG and set the minimum capacity to 2. How can you further reduce the costs while respecting the requirements?
- Remove the ALB and use an Elastic IP Instead
- Reserve two EC2 Instances
- Reduce the minimum capacity to 1
- Reduce the minimum capacity to 0
Reserve two EC2 Instances
This is the way to save further costs as we will run 2 EC2 instances no matter what.
Which of the following will NOT help us while designing a STATELESS application tier?
- Store session data in Amazon RDS
- Store session data in Amazon ElastiCache
- Store session data in the client HTTP cookies
- Store session data on EBS Volumes
Store session data on EBS Volumes
EBS volumes are created in a specific AZ and can only be attached to one EC2 instance at a time.
You want to install software updates on 100s of Linux EC2 instances that you manage. You want to store these updates on shared storage which should be dynamically loaded on the EC2 instances and shouldn’t require heavy operations. What do you suggest?
- Store the software updates on EBS and sync them using data replication software from one master in each AZ
- Store the software updates on EFS and mount EFS as a network drive at startup
- Package the software updates as an EBS snapshot and create EBS volumes for each new software update
- Store the software updates on Amazon RDS
Store the software updates on EFS and mount EFS as a network drive at startup
EFS is a network file system (NFS) that allows you to mount the same file system to 100s of EC2 instances. Storing software updates on an EFS allows each EC2 instance to access them.
As a Solutions Architect, you’re planning to migrate a complex ERP software suite to AWS Cloud. You’re planning to host the software on a set of Linux EC2 instances managed by an Auto Scaling Group. The software traditionally takes over an hour to set up on a Linux machine. How do you recommend you speed up the installation process when there’s a scale-out event?
- Use a Golden AMI
- Bootstrap using EC2 User Data
- Store the application in Amazon RDS
- Retrieve the application setup files from RDS
Use a Golden AMI
Golden AMI is an image that contains all your software installed and configured so that future EC2 instances can boot up quickly from that AMI.
You’re developing an application and would like to deploy it to Elastic Beanstalk with minimal cost. You should run it in ………………
- Single Instance Mode
- High Availability Mode
Single Instance Mode
The question mentions that you’re still in the development stage and you want to reduce costs. Single Instance Mode will create one EC2 instance and one Elastic IP.
You’re deploying your application to an Elastic Beanstalk environment but you notice that the deployment process is painfully slow. After reviewing the logs, you found that your dependencies are resolved on each EC2 instance each time you deploy. How can you speed up the deployment process with minimal impact?
- Remove some dependencies in your code
- Place the dependencies in Amazon EFS
- Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances
Create a Golden AMI that contains the dependencies and use that image to launch the EC2 instances
Golden AMI is an image that contains all your software, dependencies, and configurations, so that future EC2 instances can boot up quickly from that AMI.
You have a 25 GB file that you’re trying to upload to S3 but you’re getting errors. What is a possible solution for this?
- The file size limit on S3 is 5GB
- Update your bucket policy to allow the larger file
- Use Multi-Part upload when uploading files larger than 5GB
- Encrypt the file
Use Multi-Part upload when uploading files larger than 5GB
Multi-Part Upload is recommended as soon as the file is over 100 MB.
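A minimal boto3 sketch (bucket and file names are placeholders): upload_file switches to Multi-Part Upload automatically once the file crosses the configured threshold:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # go multipart above 100 MB
    multipart_chunksize=100 * 1024 * 1024,  # upload in 100 MB parts
)

s3.upload_file("backup-25gb.bin", "my-example-bucket",
               "backups/backup-25gb.bin", Config=config)
```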
You’re getting errors while trying to create a new S3 bucket named “dev”. You’re using a new AWS Account with no S3 buckets created before. What is a possible cause for this?
- You’re missing IAM permissions to create an S3 bucket
- S3 bucket names must be globally unique and “dev” is already taken
S3 bucket names must be globally unique and “dev” is already taken
You have enabled versioning in your S3 bucket which already contains a lot of files. Which version will the existing files have?
- 1
- 0
- -1
- null
null
You have updated an S3 bucket policy to allow IAM users to read/write files in the S3 bucket, but one of the users complains that he can’t perform a PutObject API call. What is a possible cause for this?
- The S3 bucket policy must be wrong
- The user is lacking permissions
- The IAM user must have an explicit DENY in the attached IAM Policy
- You need to contact AWS Support to lift this limit
The IAM user must have an explicit DENY in the attached IAM Policy
Explicit DENY in an IAM Policy will take precedence over an S3 bucket policy.
You want the content of an S3 bucket to be fully available in different AWS Regions. That will help your team perform data analysis at the lowest latency and cost possible. What S3 feature should you use?
- Amazon Cloudfront Distributions
- S3 Versioning
- S3 Static Website Hosting
- S3 Replication
S3 Replication
S3 Replication allows you to replicate data from an S3 bucket to another in the same/different AWS Region.
You have 3 S3 buckets. One source bucket A, and two destination buckets B and C in different AWS Regions. You want to replicate objects from bucket A to both bucket B and C. How would you achieve this?
- Configure replication from bucket A to bucket B, then from bucket A to bucket C
- Configure replication from bucket A to bucket B, then from bucket B to bucket C
- Configure replication from bucket A to bucket C, then from bucket C to bucket B
Configure replication from bucket A to bucket B, then from bucket A to bucket C
Which of the following is NOT a Glacier Deep Archive retrieval mode?
- Expedited (1-5 minutes)
- Standard (12 hours)
- Bulk (48 hours)
Expedited (1-5 minutes)
Which of the following is NOT a Glacier Flexible retrieval mode?
- Instant (10 seconds)
- Expedited (1-5 minutes)
- Standard (3-5 hours)
- Bulk (5-12 hours)
Instant (10 seconds)
How can you be notified when there’s an object uploaded to your S3 bucket?
- S3 Select
- S3 Access Logs
- S3 Event Notifications
- S3 Analytics
S3 Event Notifications
You have an S3 bucket that has S3 Versioning enabled. This S3 bucket has a lot of objects, and you would like to remove old object versions to reduce costs. What’s the best approach to automate the deletion of these old object versions?
- S3 Lifecycle Rules - Transition Actions
- S3 Lifecycle Rules - Expiration Actions
- S3 Access Logs
S3 Lifecycle Rules - Expiration Actions
How can you automate the transition of S3 objects between their different tiers?
- AWS Lambda
- CloudWatch Events
- S3 Lifecycle Rules
S3 Lifecycle Rules
While you’re uploading large files to an S3 bucket using Multi-part Upload, there are a lot of unfinished parts stored in the S3 bucket due to network issues. You are not using these unfinished parts and they cost you money. What is the best approach to remove these unfinished parts?
- Use AWS Lambda to loop on each old/unfinished part and delete them
- Request AWS Support to help you delete old/unfinished parts
- Use an S3 Lifecycle Policy to automate old/unfinished parts deletion
Use an S3 Lifecycle Policy to automate old/unfinished parts deletion
You are looking to get recommendations for S3 Lifecycle Rules. How can you analyze the optimal number of days to move objects between different storage tiers?
- S3 Inventory
- S3 Analytics
- S3 Lifecycle Rules Advisor
S3 Analytics
You are looking to build an index of your files in S3, using Amazon RDS PostgreSQL. To build this index, it is necessary to read the first 250 bytes of each object in S3, which contains some metadata about the content of the file itself. There are over 100,000 files in your S3 bucket, amounting to 50 TB of data. How can you build this index efficiently?
- Use the RDS Import feature to load the data from S3 to PostgreSQL, and run a SQL query to build the index
- Create an application that will traverse the S3 bucket, read all the files one by one, extract the first 250 bytes, and store that information in RDS
- Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS
- Create an application that will traverse the S3 bucket, use S3 Select to get the first 250 bytes, and store that information in RDS
Create an application that will traverse the S3 bucket, issue a Byte Range Fetch for the first 250 bytes, and store that information in RDS
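A hedged sketch (boto3; the bucket name is a placeholder and the PostgreSQL insert is left out) of issuing that Byte Range Fetch per object:

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="my-example-bucket"):
    for obj in page.get("Contents", []):
        resp = s3.get_object(
            Bucket="my-example-bucket",
            Key=obj["Key"],
            Range="bytes=0-249",  # fetch only the first 250 bytes
        )
        header = resp["Body"].read()
        # ...parse `header` and store the metadata in RDS PostgreSQL...
```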
You have a large dataset stored on-premises that you want to upload to the S3 bucket. The dataset is divided into 10 GB files. You have good bandwidth but your Internet connection isn’t stable. What is the best way to upload this dataset to S3 and ensure that the process is fast and avoid any problems with the Internet connection?
- Use Multi-part Upload Only
- Use S3 Select & Use S3 Transfer Acceleration
- Use S3 Multi-part Upload & S3 Transfer Acceleration
Use S3 Multi-part Upload & S3 Transfer Acceleration
You would like to retrieve a subset of your dataset stored in S3 in the .csv format. You would like to retrieve a month of data and only 3 columns out of 10, to minimize compute and network costs. What should you use?
- S3 Analytics
- S3 Access Logs
- S3 Select
- S3 Inventory
S3 Select
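A hedged sketch (boto3; bucket, key, and column names are hypothetical): S3 Select runs the SQL server-side and returns only the requested columns:

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-example-bucket",
    Key="data/2024-01.csv",  # hypothetical one-month file
    ExpressionType="SQL",
    Expression="SELECT s.customer_id, s.order_date, s.total FROM s3object s",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; Records events carry the result rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```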
A company is preparing for compliance and regulatory review on its infrastructure on AWS. Currently, they have their files stored on S3 buckets that are not encrypted, which must be encrypted as required for compliance and regulatory review. Which S3 feature allows them to encrypt all files in their S3 buckets in the most efficient and cost-effective way?
- S3 Access Points
- S3 Cross-Region Replication
- S3 Batch Operations
- S3 Lifecycle Rules
S3 Batch Operations
Your client wants to make sure that file encryption is happening in S3, but he wants to fully manage the encryption keys and never store them in AWS. You recommend him to use ……………………….
- SSE-S3
- SSE-KMS
- SSE-C
- Client-Side Encryption
SSE-C
With SSE-C, the encryption happens in AWS, but you fully manage the encryption keys and AWS never stores them.
A company you’re working for wants their data stored in S3 to be encrypted. They don’t mind the encryption keys stored and managed by AWS, but they want to maintain control over the rotation policy of the encryption keys. You recommend them to use ………………..
- SSE-S3
- SSE-KMS
- SSE-C
- Client-Side Encryption
SSE-KMS
With SSE-KMS, the encryption happens in AWS and the encryption keys are managed by AWS, but you have full control over the rotation policy of the encryption key. The encryption keys are stored in AWS.
Your company does not trust AWS for the encryption process and wants it to happen on the application. You recommend them to use ………………..
- SSE-S3
- SSE-KMS
- SSE-C
- Client-Side Encryption
Client-Side Encryption
With Client-Side Encryption, you have to do the encryption yourself and you have full control over the encryption keys. You perform the encryption yourself and send the encrypted data to AWS. AWS does not know your encryption keys and cannot decrypt your data.
You have a website that loads files from an S3 bucket. When you try the URL of the files directly in your Chrome browser it works, but when a website with a different domain tries to load these files it doesn’t. What’s the problem?
- The Bucket policy is wrong
- The IAM policy is wrong
- CORS is wrong
- Encryption is wrong
CORS is wrong
Cross-Origin Resource Sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. To learn more about CORS, go here: https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
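A hedged sketch (boto3; the bucket and origin are placeholders) of a CORS rule that would let the other website load the files:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-example-bucket",
    CORSConfiguration={"CORSRules": [{
        "AllowedOrigins": ["https://www.example.com"],  # the other domain
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,
    }]},
)
```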
An e-commerce company has its customers and orders data stored in an S3 bucket. The company’s CEO wants to generate a report to show the list of customers and the revenue for each customer. Customer data stored in files on the S3 bucket has sensitive information that we don’t want to expose in the report. How do you recommend the report can be created without exposing sensitive information?
- Use S3 Object Lambda to change the objects before they are retrieved by the report generator application
- Create another S3 bucket. Create a lambda function to process each file, remove the sensitive information, and then move them to the new S3 bucket
- Use S3 Object Lock to lock the sensitive information from being fetched by the report generator application
Use S3 Object Lambda to change the objects before they are retrieved by the report generator application
You suspect that some of your employees try to access files in an S3 bucket that they don’t have access to. How can you verify this is indeed the case without them noticing?
- Enable S3 Access Logs and analyze them using Athena
- Restrict their IAM policies and look at CloudTrail logs
- Use a bucket policy
Enable S3 Access Logs and analyze them using Athena
S3 Access Logs log all the requests made to S3 buckets and Amazon Athena can then be used to run serverless analytics on top of the log files.
You are looking to provide temporary URLs to a growing list of federated users to allow them to perform a file upload on your S3 bucket to a specific location. What should you use?
- S3 CORS
- S3 Pre-Signed URL
- S3 Bucket Policies
S3 Pre-Signed URL
S3 Pre-Signed URLs are temporary URLs that you generate to grant time-limited access to some actions in your S3 bucket.
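A minimal boto3 sketch (bucket and key are placeholders) of generating such a URL for a one-hour upload window:

```python
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-example-bucket",
            "Key": "uploads/user-42/report.pdf"},  # the specific location
    ExpiresIn=3600,  # seconds
)
# Hand `url` to the federated user; they upload with a plain HTTP PUT.
```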
For compliance reasons, your company has a policy mandating that database backups must be retained for 4 years. It shouldn’t be possible to erase them. What do you recommend?
- Glacier Vaults with Vault Lock Policies
- EFS network drives with restrictive Linux Permissions
- S3 with Bucket Policies
Glacier Vaults with Vault Lock Policies
You would like all your files in an S3 bucket to be encrypted by default. What is the optimal way of achieving this?
- Use a bucket policy that forces HTTPS connections
- Do nothing, Amazon S3 automatically encrypts new objects using Server-Side Encryption with S3-Managed Keys (SSE-S3)
- Enable Versioning
Do nothing, Amazon S3 automatically encrypts new objects using Server-Side Encryption with S3-Managed Keys (SSE-S3)
You have enabled versioning and want to be extra careful when it comes to deleting files on an S3 bucket. What should you enable to prevent accidental permanent deletions?
- Use a bucket policy
- Enable MFA Delete
- Encrypt the files
- Disable Versioning
Enable MFA Delete
MFA Delete forces users to use MFA codes before deleting S3 objects. It’s an extra level of security to prevent accidental deletions.
A company has its data and files stored on some S3 buckets. Some of these files need to be kept for a predefined period of time and protected from being overwritten or deleted, according to company compliance policy. Which S3 feature helps you in doing this?
- S3 Object Lock - Retention Governance Mode
- S3 Versioning
- S3 Object Lock - Retention Compliance Mode
- S3 Glacier Vault Lock
S3 Object Lock - Retention Compliance Mode
Which of the following S3 Object Lock configurations allows you to prevent an object or its versions from being overwritten or deleted indefinitely and gives you the ability to remove it manually?
- Retention Governance Mode
- Retention Compliance Mode
- Legal Hold
Legal Hold
You have a CloudFront Distribution that serves your website hosted on a fleet of EC2 instances behind an Application Load Balancer. All your clients are from the United States, but you found that some malicious requests are coming from other countries. What should you do to only allow users from the US and block other countries?
- Use CloudFront Geo Restriction
- Use Origin Access Control
- Set up a security group and attach it to your CloudFront Distribution
- Use a Route 53 Latency record and attach it to CloudFront
Use CloudFront Geo Restriction