AWSExam_2 Flashcards
Your IT Director instructed you to ensure that none of the AWS resources in your VPC go beyond their respective service limits. You should prepare a system that provides you real-time guidance in provisioning your resources in adherence to AWS best practices.
Which of the following is the MOST appropriate service to use to satisfy this task?
AWS Trusted Advisor
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.
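As a rough sketch of how Trusted Advisor checks (including the service limit checks) can be queried programmatically with boto3, assuming the account has a Business or Enterprise support plan (all names below are illustrative):

```python
import boto3

# The AWS Support API (which exposes Trusted Advisor) is only available in us-east-1
# and requires a Business or Enterprise support plan.
support = boto3.client("support", region_name="us-east-1")

# List all Trusted Advisor checks and keep only the service-limit ones.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
limit_checks = [c for c in checks if c["category"] == "service_limits"]

for check in limit_checks:
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    print(check["name"], "->", result["status"])  # e.g. ok / warning / error
```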
What is Amazon Inspector?
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
You are a Solutions Architect working for a startup which is currently migrating their production environment to AWS. Your manager asked you to set up access to the AWS console using Identity and Access Management (IAM). You have created 5 users for your system administrators using the AWS CLI.
What further steps do you need to take to enable your system administrators to get access to the AWS console?
Provide a password for each user created and give these passwords to your system administrators.
The AWS Management Console is the web interface used to manage your AWS resources using your web browser. To access it, your users need a password that they can use to log in to the web console.
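Users created through the CLI or API have no console password by default. A minimal boto3 sketch of adding one (user names and the password are placeholders):

```python
import boto3

iam = boto3.client("iam")

# Hypothetical system administrator users created earlier via the CLI.
admins = ["sysadmin1", "sysadmin2", "sysadmin3", "sysadmin4", "sysadmin5"]

for user in admins:
    # A console login profile gives the IAM user a password for the web console.
    iam.create_login_profile(
        UserName=user,
        Password="TempP@ssw0rd-ChangeMe!",  # placeholder; use a generated secret in practice
        PasswordResetRequired=True,         # force a password change at first sign-in
    )
```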
You have EC2 instances running in your VPC, including both UAT and production instances. You want to ensure that employees who are responsible for the UAT instances don’t have access to the production instances, to minimize security risks. Which of the following would be the best way to achieve this?
Define the tags on the UAT and production servers and add a condition to the IAM policy which allows access to specific tags.
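A minimal sketch of such a policy, assuming a hypothetical Environment=UAT tag on the UAT instances (tag key/value and the listed actions are illustrative):

```python
import json
import boto3

iam = boto3.client("iam")

# Allow common instance operations only on instances tagged Environment=UAT,
# so UAT staff cannot act on production instances.
uat_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
            ],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Environment": "UAT"}
            },
        }
    ],
}

iam.create_policy(
    PolicyName="UATInstancesOnly",
    PolicyDocument=json.dumps(uat_only_policy),
)
```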
A leading e-commerce company is in need of a storage solution that can be accessed by 1000 Linux servers in multiple availability zones. The service should be able to handle the rapidly changing data at scale while still maintaining high performance. It should also be highly durable and highly available whenever the servers will pull data from it, with little need for management. As the Solutions Architect, which of the following services is the most cost-effective choice that you should use to meet the above requirement?
EFS
In this scenario, the keywords are rapidly changing data and 1000 Linux servers.
Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances. EFS provides the same level of high availability and high scalability as S3; however, this service is more suitable for scenarios where a POSIX-compatible file system is required or where you are storing rapidly changing data.
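A minimal boto3 sketch of creating such a shared file system and exposing it to one subnet (IDs are placeholders; in practice you would add a mount target in each Availability Zone the servers run in):

```python
import boto3

efs = boto3.client("efs")

# Create a shared POSIX file system for the Linux fleet.
fs = efs.create_file_system(
    CreationToken="shared-linux-assets",  # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone lets instances in that AZ mount the
# file system over NFS. Subnet and security group IDs below are placeholders.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)
```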
You are assigned to design a highly available architecture in AWS. You have two target groups with three EC2 instances each, which are added to an Application Load Balancer. In the security group of the EC2 instance, you have verified that the port 80 for HTTP is allowed. However, the instances are still showing out of service from the load balancer. What could be the root cause of this issue?
- The wrong instance type was used for the EC2 instance
- The instances are using the wrong AMI
- The health check configuration is not properly defined
- The wrong subnet was used in your VPC
The health check configuration is not properly defined
You are working as an IT Consultant for a large media company where you are tasked to design a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this S3 bucket to immediately receive over 2000 PUT requests and 3500 GET requests per second at peak hour. What should you do to ensure optimal performance?
Do nothing. Amazon S3 will automatically manage performance at this scale.
Amazon S3 now provides increased performance to support at least 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance significantly.
A company has both an on-premises data center and an AWS cloud infrastructure. They store their graphics, audio, video, and other multimedia assets primarily on their on-premises storage server and use an S3 Standard storage class bucket as a backup. Their data is heavily used for only a week (7 days), but after that period it will be infrequently used by their customers. You are instructed to save storage costs in AWS yet maintain the ability to fetch their media assets in a matter of minutes for a surprise annual data audit, which will be conducted both on-premises and on their cloud storage. Which of the following options should you implement to meet the above requirement? (Choose 2)
- Set a lifecycle policy in the bucket to transition the data to S3 - IA after 30 days
- Set a lifecycle policy in the bucket to transition the data to S3 One Zone-IA after one week (7 days)
- Set a lifecycle policy in the bucket to transition the data to S3 Glacier Deep Archive after one week (7 days)
- Set a lifecycle policy to transition the data to S3 - IA after one week (7 days)
- Set a lifecycle policy to transition the data to Glacier after one week (7 days)
- Set a lifecycle policy in the bucket to transition the data to S3 - IA after 30 days
- ⇒ Objects must be stored for at least 30 days in S3 Standard before you can transition them to S3 - IA or S3 One Zone-IA.
- Set a lifecycle policy to transition the data to Glacier after one week (7 days)
- ⇒ Glacier can retrieve data within minutes (via expedited retrievals), which meets the audit requirement.
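A minimal boto3 sketch of the Glacier-after-7-days option (the bucket name is a placeholder; the Standard-IA-after-30-days option would use a 30-day transition to STANDARD_IA instead):

```python
import boto3

s3 = boto3.client("s3")

# Transition all objects to Glacier 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-assets-backup",       # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "glacier-after-7-days",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 7, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```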
You are setting up a cost-effective architecture for a log processing application which has frequently accessed, throughput-intensive workloads with large, sequential I/O operations. The application should be hosted in an already existing On-Demand EC2 instance in your VPC. You have to attach a new EBS volume that will be used by the application. Which of the following is the most suitable EBS volume type that you should use in this scenario?
EBS throughput optimized HDD (st1)
Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported.
Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to support frequently accessed data. (Cold HDD for less frequently accessed workloads)
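A boto3 sketch of creating and attaching an st1 volume (size, Availability Zone, and IDs are placeholders; the volume must be created in the same AZ as the instance):

```python
import boto3

ec2 = boto3.client("ec2")

# Throughput Optimized HDD (st1) volume for sequential, throughput-heavy log processing.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # must match the instance's AZ
    Size=500,                       # GiB; placeholder size
    VolumeType="st1",
)

# Wait until the volume is ready, then attach it to the existing instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```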
You have an existing On-demand EC2 instance and you are planning to create a new EBS volume that will be attached to this instance. The data that will be stored are confidential medical records so you have to make sure that the data is protected. How can you secure the data at rest of the new EBS volume that you will create?
Create an encrypted EBS volume by selecting the encryption checkbox when creating the volume, then attach it to the instance.
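The same can be done through the API; a brief boto3 sketch (size, type, and AZ are placeholders, and the default aws/ebs KMS key is assumed):

```python
import boto3

ec2 = boto3.client("ec2")

# Encryption must be chosen at creation time; it protects data at rest on the volume,
# data moving between the instance and the volume, and any snapshots taken from it.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # must match the instance's AZ
    Size=100,                       # GiB; placeholder
    VolumeType="gp3",
    Encrypted=True,                 # default aws/ebs KMS key unless KmsKeyId is supplied
)
```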
You created a new CloudFormation template that creates 4 EC2 instances connected to one Elastic Load Balancer (ELB). Which section of the template should you configure to get the DNS hostname of the ELB upon creation of the AWS stack?
Outputs
Outputs is an optional section of the CloudFormation template that describes the values that are returned whenever you view your stack’s properties.
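A sketch of what such an Outputs section could look like (expressed here as a Python dict for a template body; the logical resource name MyLoadBalancer and stack name are hypothetical), plus reading the value back after stack creation:

```python
import boto3

cfn = boto3.client("cloudformation")

# Fragment of a template: the Outputs section returns the ELB's DNS name,
# assuming the template defines a load balancer with the logical ID "MyLoadBalancer".
outputs_section = {
    "Outputs": {
        "LoadBalancerDNSName": {
            "Description": "DNS hostname of the Elastic Load Balancer",
            "Value": {"Fn::GetAtt": ["MyLoadBalancer", "DNSName"]},
        }
    }
}

# After the stack is created, the output values appear in the stack's properties.
stack = cfn.describe_stacks(StackName="web-stack")["Stacks"][0]
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])
```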
An On-Demand EC2 instance is launched into a VPC subnet with the Network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group has an inbound rule to allow SSH from any IP address and does not have any outbound rules. In this scenario, what are the changes needed to allow SSH connection to the instance?
The outbound network ACL needs to be modified to allow outbound traffic
In order for you to establish an SSH connection from your home computer to your EC2 instance, you need to do the following:
- On the Security Group, add an Inbound Rule to allow SSH traffic to your EC2 instance.
- On the NACL, add both an Inbound and Outbound Rule to allow SSH traffic to your EC2 instance.
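A boto3 sketch of those rules (security group and NACL IDs are placeholders); because NACLs are stateless, the outbound rule must allow the ephemeral-port return traffic:

```python
import boto3

ec2 = boto3.client("ec2")

SG_ID = "sg-0123456789abcdef0"     # placeholder security group ID
NACL_ID = "acl-0123456789abcdef0"  # placeholder network ACL ID

# Security group (stateful): one inbound rule for SSH is enough.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Network ACL (stateless): allow SSH inbound...
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False,
    CidrBlock="0.0.0.0/0", PortRange={"From": 22, "To": 22},
)

# ...and allow the return traffic outbound on the ephemeral port range.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True,
    CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535},
)
```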
An investment bank has a distributed batch processing application which is hosted in an Auto Scaling group of Spot EC2 instances with an SQS queue. You configured your components to use client-side buffering so that calls made from the client are buffered first and then sent as a batch request to SQS. What do you call the period of time during which the SQS queue prevents other consuming components from receiving and processing a message?
Visibility Timeout
Immediately after the message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.
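A short boto3 sketch of setting the visibility timeout at the queue level and extending it for a single in-flight message (queue name and timeout values are illustrative):

```python
import boto3

sqs = boto3.client("sqs")

# Queue-level default: received messages stay invisible to other consumers
# for 5 minutes.
queue = sqs.create_queue(
    QueueName="batch-requests",
    Attributes={"VisibilityTimeout": "300"},
)

# A consumer that needs more time can extend the timeout for a specific message.
messages = sqs.receive_message(QueueUrl=queue["QueueUrl"], MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    sqs.change_message_visibility(
        QueueUrl=queue["QueueUrl"],
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=600,
    )
```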
A web application is deployed in an On-Demand EC2 instance in your VPC. There is an issue with the application which requires you to connect to it via an SSH connection. Which of the following is needed in order to access an EC2 instance from the Internet? (Choose 3)
- An Internet gateway
- A Private IP address attached to the instance
- A Public IP address attached to the instance
- a Private Elastic IP address attached to the instance
- A route entry to the internet gateway in the Route table of the VPC
- a VPN peering connection
- An Internet gateway
- A Public IP address attached to the instance
- A route entry to the internet gateway in the Route table of the VPC
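A boto3 sketch tying those three pieces together (the VPC, route table, and instance IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# 1. An Internet gateway attached to the VPC.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGatewayId"],
    VpcId="vpc-0123456789abcdef0",
)

# 2. A route in the subnet's route table sending Internet-bound traffic to the gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)

# 3. A public IP on the instance, here via an Elastic IP association.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=eip["AllocationId"],
)
```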

An e-commerce application is using a fanout messaging pattern for its order management system. For every order, it sends an Amazon SNS message to an SNS topic, and the message is replicated and pushed to multiple Amazon SQS queues for parallel asynchronous processing. A Spot EC2 instance retrieves the message from each SQS queue and processes the message. There was an incident that while an EC2 instance is currently processing a message, the instance was abruptly terminated, and the processing was not completed in time. In this scenario, what happens to the SQS message?
When the message visibility timeout expires, the message becomes available for processing by other EC2 instances.
Because Amazon SQS is a distributed system, there’s no guarantee that the consumer actually receives the message (for example, due to a connectivity issue, or due to an issue in the consumer application). Thus, the consumer must delete the message from the queue after receiving and processing it.
Immediately after the message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The maximum is 12 hours.
What are Dead Letter Queues?
Amazon SQS supports dead-letter queues, which other queues (source queues) can target for messages that can’t be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed.
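A minimal boto3 sketch of attaching a dead-letter queue to a source queue via a redrive policy (queue names and the receive count are illustrative):

```python
import json
import boto3

sqs = boto3.client("sqs")

source = sqs.create_queue(QueueName="orders")
dlq = sqs.create_queue(QueueName="orders-dlq")

dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# After 5 failed receives, SQS moves the message to the dead-letter queue
# so it can be inspected instead of being retried forever.
sqs.set_queue_attributes(
    QueueUrl=source["QueueUrl"],
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```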
You just joined a large tech company with an existing Amazon VPC. When reviewing the Auto Scaling events, you noticed that their web application is scaling up and down multiple times within the hour. What design change could you make to optimize cost while preserving elasticity?
Change the cooldown period of the Auto Scaling group and set the CloudWatch metric to a higher threshold.
Since the application is scaling up and down multiple times within the hour, the issue lies on the cooldown period of the Auto Scaling group.
The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn’t launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities.
When you manually scale your Auto Scaling group, the default is not to wait for the cooldown period, but you can override the default and honor the cooldown period. If an instance becomes unhealthy, the Auto Scaling group does not wait for the cooldown period to complete before replacing the unhealthy instance.
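A boto3 sketch of both adjustments, assuming a hypothetical Auto Scaling group named web-asg and an existing scale-out policy ARN:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Lengthen the cooldown so one scaling activity settles before another starts.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    DefaultCooldown=600,  # seconds; placeholder value
)

# Raise the CloudWatch alarm threshold that triggers scale-out.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,  # placeholder; higher than the previous threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"],  # placeholder
)
```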
You are working as a Solutions Architect for a fast-growing startup which just started operations during the past 3 months. They currently have an on-premises Active Directory and 10 computers. To save the cost of procuring physical workstations, they decided to deploy virtual desktops for their new employees in a virtual private cloud in AWS. The new cloud infrastructure should leverage the existing security controls in AWS but still be able to communicate with their on-premises network. Which set of AWS services will you use to meet these requirements?
- AWS Directory Services, VPN connection, and AWS IAM
- AWS Directory Services, VPN connection, and Amazon WorkSpaces
- AWS Directory Services, VPN connection, and ClassicLink
- AWS Directory Services, VPN connection, and S3
AWS Directory Services, VPN connection, and Amazon WorkSpaces
First, you need a VPN connection to connect the VPC and your on-premises network. Second, you need AWS Directory Services to integrate with your on-premises Active Directory. Lastly, you need Amazon WorkSpaces to create the needed virtual desktops in your VPC.
You are running an EC2 instance store-based instance. You shut it down and then start the instance. You noticed that the data which you have saved earlier is no longer available. What might be the cause of this?
The EC2 instance was using instance store volumes, which are ephemeral and only live for the life of the instance.
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
You are working for a top IT Consultancy that has a VPC with two On-Demand EC2 instances with Elastic IP addresses. You were notified that your EC2 instances are currently under SSH brute force attacks over the Internet. Their IT Security team has identified the IP addresses where these attacks originated. You have to immediately implement a temporary fix to stop these attacks while the team is setting up AWS WAF, GuardDuty, and AWS Shield Advanced to permanently fix the security vulnerability. Which of the following provides the quickest way to stop the attacks to your instances?
Block the IP addresses in the Network Access Control List
(Removing the Internet Gateway from the VPC is incorrect because doing this will also make your EC2 instance inaccessible to you as it will cut down the connection to the Internet.)
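A boto3 sketch of denying one attacker IP at the subnet's network ACL (the ACL ID and attacker CIDR are placeholders; the deny rule's number must be lower than any rule that would allow the traffic):

```python
import boto3

ec2 = boto3.client("ec2")

# Deny inbound SSH from an identified attacker IP. NACL rules are evaluated in
# ascending rule-number order, so the deny must come before any matching allow.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # NACL of the affected subnet
    RuleNumber=90,                         # lower number than the allow rules
    Protocol="6",                          # TCP
    RuleAction="deny",
    Egress=False,
    CidrBlock="203.0.113.25/32",           # placeholder attacker address
    PortRange={"From": 22, "To": 22},
)
```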
What is a static Anycast IP address for?
Assigning a static Anycast IP address to each EC2 instance is primarily used by AWS Global Accelerator to enable organizations to seamlessly route traffic to multiple regions and improve availability and performance for their end-users.
You have a web application hosted on a fleet of EC2 instances located in two Availability Zones that are all placed behind an Application Load Balancer. As a Solutions Architect, you have to add a health check configuration to ensure your application is highly-available. Which health checks will you implement?
HTTP or HTTPS health check
The type of ELB mentioned here is an Application Load Balancer. This is used if you want a flexible feature set for your web applications with HTTP and HTTPS traffic. However, it only supports two types of health checks: HTTP and HTTPS.
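A boto3 sketch of configuring an HTTP health check on the target group behind the ALB (the target group ARN and health check path are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# HTTP health check: the ALB marks a target healthy only when the path returns 200.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",  # placeholder
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",     # placeholder path served by the application
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
    Matcher={"HttpCode": "200"},
)
```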
When are TCP health checks offered?
TCP health checks are only offered by the Network Load Balancer, which is used if you need ultra-high performance.
You are implementing a hybrid architecture for your company where you are connecting their Amazon Virtual Private Cloud (VPC) to their on-premises network. Which of the following can be used to create a private connection between the VPC and your company’s on-premises network?
Direct Connect
Direct Connect creates a direct, private connection from your on-premises data center to AWS, letting you establish a 1-gigabit or 10-gigabit dedicated network connection using Ethernet fiber-optic cable.