AWS SAA C03 Exam Notes Flashcards

1
Q

A company uses EC2 instances and stores data on EBS volumes, and must ensure all data is encrypted at rest using KMS while controlling key rotation.

A

Create a customer managed key
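
A minimal boto3 sketch of this answer (function and parameter names below are illustrative, not from the source): create a symmetric customer managed key, then enable automatic annual rotation so the company controls the rotation schedule.

```python
def customer_managed_key_params(purpose: str) -> dict:
    """Request parameters for kms.create_key: a symmetric customer managed key."""
    return {
        "Description": f"Customer managed key for {purpose}",
        "KeySpec": "SYMMETRIC_DEFAULT",   # default symmetric encryption key type
        "KeyUsage": "ENCRYPT_DECRYPT",
    }

# With boto3 (not executed here):
#   kms = boto3.client("kms")
#   key = kms.create_key(**customer_managed_key_params("EBS encryption"))
#   kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])  # automatic annual rotation
```

The resulting key ARN can then be supplied when creating encrypted EBS volumes.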

2
Q

Migration of a multi-tier on-premises application to AWS; must minimize application changes and improve application resiliency during migration.

A

Migrate the web tier to EC2 in an Auto Scaling group behind an ALB.
Migrate the database to RDS Multi-AZ.

3
Q

How do you enable AWS WAF on a Classic Load Balancer (CLB)?

A

Replace the CLB with an ALB (AWS WAF cannot be associated with a CLB).

4
Q

Running an SMB file server that stores large files that are accessed for up to 7 days after creation; after that, a retrieval time of up to 24 hours is acceptable.

A

AWS Storage Gateway (File Gateway) with an S3 Lifecycle policy that transitions objects to Glacier Deep Archive after 7 days.

5
Q

Allow instances in a private subnet to access the public internet.

A

NAT Gateway
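
A sketch of the wiring (identifiers are illustrative): the NAT Gateway lives in a public subnet with an Elastic IP, and the private subnet's route table sends all IPv4 internet traffic to it.

```python
def private_subnet_default_route(route_table_id: str, nat_gateway_id: str) -> dict:
    """Request parameters for ec2.create_route: all IPv4 internet traffic via the NAT Gateway."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",  # catch-all IPv4 route
        "NatGatewayId": nat_gateway_id,
    }

# With boto3 (not executed here); note the NAT Gateway itself sits in a PUBLIC subnet:
#   ec2 = boto3.client("ec2")
#   nat = ec2.create_nat_gateway(SubnetId=public_subnet_id, AllocationId=eip_allocation_id)
#   ec2.create_route(**private_subnet_default_route(private_rtb_id,
#                                                   nat["NatGateway"]["NatGatewayId"]))
```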

6
Q

Outbound-only IPv6 traffic from a private subnet

A

Egress-Only Internet Gateway
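
The IPv6 counterpart of the NAT Gateway route, as a sketch (identifiers are illustrative): create the egress-only internet gateway in the VPC and add a `::/0` route pointing at it.

```python
def ipv6_egress_route(route_table_id: str, eigw_id: str) -> dict:
    """ec2.create_route parameters: outbound-only IPv6 default route."""
    return {
        "RouteTableId": route_table_id,
        "DestinationIpv6CidrBlock": "::/0",  # catch-all IPv6 route
        "EgressOnlyInternetGatewayId": eigw_id,
    }

# With boto3 (not executed here):
#   eigw = ec2.create_egress_only_internet_gateway(VpcId=vpc_id)
#   ec2.create_route(**ipv6_egress_route(
#       private_rtb_id,
#       eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]))
```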

7
Q

Web application on an Auto Scaling group with an RDS database under heavy read load.

A

Create a read replica.
Add an ElastiCache cluster to cache query results.
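
A minimal boto3 sketch of the read-replica half (identifiers are illustrative): read-heavy queries are then pointed at the replica's endpoint, while hot query results go into ElastiCache.

```python
def read_replica_params(source_db: str, replica_id: str) -> dict:
    """rds.create_db_instance_read_replica parameters."""
    return {
        "DBInstanceIdentifier": replica_id,            # new replica's identifier
        "SourceDBInstanceIdentifier": source_db,       # existing primary DB
    }

# With boto3 (not executed here):
#   rds = boto3.client("rds")
#   rds.create_db_instance_read_replica(**read_replica_params("app-db", "app-db-replica-1"))
```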

8
Q

Secure environment variables and API credentials used by a Lambda function across multiple environments.

A

Encrypt the environment variables with a new customer managed KMS key.

9
Q

Allow only authorized EC2 instances to authenticate to an RDS database, without managing passwords.

A

IAM DB Authentication

10
Q

Shared SMB file storage for Windows Server workloads.

A

Amazon FSx for Windows File Server

11
Q

EC2 traffic surges early in the morning or at a specific time of day.

A

Scheduled scaling

12
Q

Migrate the data on-premises to the DR site on AWS over a few days. We have 15TB of data and our on-premises data center has a 1.5Gbps internet connection. Company security policy requires network encryption during data transfer. Which solution is the most appropriate from a cost perspective?

A

Configure a Site-to-Site VPN between on-premises and AWS.

13
Q

Created two public and two private subnets in a VPC. The web applications are developed as microservices, so multiple EC2 instances will be built to host them. Requests must be routed to different EC2 instances depending on the URL. Which load balancer best practice meets these requirements?

A

Build an ALB in the public subnets, place the EC2 instances in the private subnets, and use path-based routing rules to distribute requests by URL.
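
A sketch of one path-based routing rule (listener ARNs, paths, and target group names are illustrative): each microservice gets its own target group, and an ALB listener rule forwards matching URL paths to it.

```python
def path_routing_rule(listener_arn: str, priority: int,
                      path_pattern: str, target_group_arn: str) -> dict:
    """elbv2.create_rule parameters: forward requests matching a URL path
    to the target group of one microservice."""
    return {
        "ListenerArn": listener_arn,
        "Priority": priority,  # lower number = evaluated first
        "Conditions": [{"Field": "path-pattern", "Values": [path_pattern]}],
        "Actions": [{"Type": "forward", "TargetGroupArn": target_group_arn}],
    }

# With boto3 (not executed here):
#   elbv2 = boto3.client("elbv2")
#   elbv2.create_rule(**path_routing_rule(listener, 10, "/orders/*", orders_tg))
#   elbv2.create_rule(**path_routing_rule(listener, 20, "/users/*", users_tg))
```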

14
Q

A developer is building a new online transaction processing (OLTP) application for a small, highly read-write intensive database. A single table in the database is continuously updated throughout the day, so developers want to ensure good performance in database access. Which EBS storage option is suitable for maintaining application performance?

A

Provisioned IOPS SSD

15
Q

Backups must be retained for 7 years for compliance purposes. We rarely have access to backup files, and if we need to restore a backup, we typically give 5 business days‘ notice. The company is currently exploring cloud-based capabilities to reduce the storage costs and operational burden of tape management, and wants to minimize the disruption of migrating from tape backup to the cloud. Which storage solution is the most cost effective?

A

Backup to S3 Glacier using Storage Gateway Tape Gateway

16
Q

Your accounting application runs on-premises and uses MySQL as its database. The business department reported that there were times when performance degraded, and analysis revealed that it occurred when users were performing reporting tasks during working hours. You are looking to improve performance and are considering moving to AWS. Which solution is the most cost-effective from a build and operations perspective? (Select one)

A

Deploy Aurora MySQL in Multi-AZ with multiple read replicas, and direct reporting queries to the replicas.

17
Q

A network engineer created two VPCs, named VPC1 and VPC2. EC2 instances run in each VPC, and EC2 in VPC1 needs access to EC2 in VPC2. Since the applications exchange large amounts of data across the VPCs, communication between the VPCs must have no single point of failure, have sufficient bandwidth, and be secure. Which solution meets these requirements?

A

Connect the VPCs via a transit gateway.

18
Q

Building an EC2 instance capable of high-performance computing in order to create a physics computing system that analyzes natural phenomena. This instance requires low-latency, high-throughput networking and adequate storage capacity. Which EC2 instance launch option meets your requirements?

A

Cluster Placement Group

19
Q

A solution architect is designing an application that uses EBS volumes on EC2 running in the Tokyo region of AWS. As a disaster countermeasure, it is necessary to back up the EBS volume in another region and restore the EBS volume in another region when a disaster occurs. What is the most efficient way to meet this requirement?

A

Create an EBS snapshot and copy it to the desired region.
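
A sketch of the cross-region copy (region names and snapshot IDs are illustrative): the copy call runs against a client in the destination region, naming the source region and snapshot.

```python
def snapshot_copy_params(source_region: str, snapshot_id: str) -> dict:
    """ec2.copy_snapshot parameters; call this on a client in the DESTINATION region."""
    return {
        "SourceRegion": source_region,
        "SourceSnapshotId": snapshot_id,
        "Description": f"DR copy of {snapshot_id} from {source_region}",
    }

# With boto3 (not executed here), copying from Tokyo to Osaka:
#   ec2_osaka = boto3.client("ec2", region_name="ap-northeast-3")
#   ec2_osaka.copy_snapshot(**snapshot_copy_params("ap-northeast-1",
#                                                  "snap-0123456789abcdef0"))
```

During a disaster, a new EBS volume is created from the copied snapshot in the recovery region.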

20
Q

The system you are building on AWS requires data encryption to handle confidential data, and has the following management requirements for encryption keys:
- Managed as a single tenant
- Cryptographic module that satisfies FIPS 140-2 Level 3
Which is the best solution to meet the above requirements?

A

CloudHSM

21
Q

A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions architect needs to design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour.

A

Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
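
A sketch of enabling PITR and restoring (table names and timestamps are illustrative); note that a restore always lands in a new table, which is then swapped in to meet the RTO.

```python
def enable_pitr_params(table: str) -> dict:
    """dynamodb.update_continuous_backups parameters: turn on point-in-time recovery."""
    return {
        "TableName": table,
        "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
    }

# With boto3 (not executed here):
#   ddb = boto3.client("dynamodb")
#   ddb.update_continuous_backups(**enable_pitr_params("customers"))
#
# Recovery to any second within the PITR window restores to a NEW table:
#   ddb.restore_table_to_point_in_time(
#       SourceTableName="customers",
#       TargetTableName="customers-restored",
#       RestoreDateTime=datetime(2024, 1, 1, 12, 0))
```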

22
Q

A company’s HTTP application is behind a Network Load Balancer (NLB). The NLB’s target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service. The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application’s availability without writing custom scripts or code.

A

Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company’s application. Configure an Auto Scaling action to replace unhealthy instances.

23
Q

A company is running a business-critical web application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Availability Zone. The company wants the application to be highly available with minimum downtime and minimum loss of data. How should the company achieve this?

A

Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy instance for the database.

24
Q

A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. How must the solutions architect ensure that the application is loosely coupled and the job items are durably stored?

A

Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
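
The queue-driven scaling described above is commonly tuned on a "backlog per instance" metric; a minimal sketch (the metric name and thresholds are illustrative assumptions):

```python
def backlog_per_instance(queue_depth: int, running_instances: int) -> float:
    """SQS ApproximateNumberOfMessagesVisible divided by in-service instances.
    A target-tracking policy keeps this near the number of messages one
    worker can drain within the desired latency, adding or removing nodes."""
    return queue_depth / max(running_instances, 1)  # avoid divide-by-zero at scale-to-zero
```

For example, if each worker comfortably handles 30 queued jobs, the Auto Scaling group scales out whenever this metric climbs above 30 and scales in as the queue drains.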

25
Q

A company is running a batch job on an EC2 instance inside a private subnet. The instance gathers input data from an S3 bucket in the same region through a NAT Gateway. The company is looking for a solution that will reduce costs without imposing risks on redundancy or availability.

A

Remove the NAT Gateway and use a Gateway VPC endpoint to access the S3 bucket from the instance.
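
A sketch of the replacement endpoint (VPC, region, and route table IDs are illustrative): a Gateway endpoint for S3 has no hourly or data-processing charge, unlike the NAT Gateway.

```python
def s3_gateway_endpoint_params(vpc_id: str, region: str, route_table_ids: list) -> dict:
    """ec2.create_vpc_endpoint parameters: Gateway endpoint for S3."""
    return {
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "VpcEndpointType": "Gateway",
        "RouteTableIds": route_table_ids,  # prefix-list route is added automatically
    }

# With boto3 (not executed here):
#   ec2 = boto3.client("ec2")
#   ec2.create_vpc_endpoint(**s3_gateway_endpoint_params(
#       "vpc-0123456789abcdef0", "us-east-1", ["rtb-0123456789abcdef0"]))
```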

26
Q

A specific type of Elastic Load Balancer that uses UDP as the protocol for communication between clients and thousands of game servers around the world.

A

Use Network Load Balancer for TCP/UDP protocols.

27
Q

Encrypt EBS volumes restored from unencrypted EBS snapshots.

A

Copy the snapshot, enabling encryption with a new symmetric CMK during the copy, then create the EBS volume from the encrypted copy.
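
A sketch of the encrypting copy (snapshot ID, region, and key ARN are illustrative); encryption is applied as a property of the copy operation, and volumes created from the copy inherit it.

```python
def encrypted_copy_params(snapshot_id: str, source_region: str, cmk_arn: str) -> dict:
    """ec2.copy_snapshot parameters: the copy is encrypted with the given CMK."""
    return {
        "SourceSnapshotId": snapshot_id,
        "SourceRegion": source_region,
        "Encrypted": True,        # encrypt-on-copy for an unencrypted source
        "KmsKeyId": cmk_arn,      # new symmetric customer managed key
    }

# With boto3 (not executed here):
#   ec2 = boto3.client("ec2")
#   copy = ec2.copy_snapshot(**encrypted_copy_params(
#       "snap-0123456789abcdef0", "us-east-1", cmk_arn))
#   # Then create the EBS volume from copy["SnapshotId"].
```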

28
Q

A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs.

A

Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively.

29
Q

A healthcare company stores sensitive patient health records in their on-premises storage systems. These records must be kept indefinitely and protected from any type of modifications once they are stored. Compliance regulations mandate that the records must have granular access control and each data access must be audited at all levels. Currently, there are millions of obsolete records that are not accessed by their web application, and their on-premises storage is quickly running out of space. The Solutions Architect must design a solution to immediately move existing records to AWS and support the ever-growing number of new health records.

A

Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Data Events and Amazon S3 Object Lock in the bucket.

30
Q

A company has an application hosted in an Amazon ECS Cluster behind an Application Load Balancer. The Solutions Architect is building a sophisticated web filtering solution that allows or blocks web requests based on the country that the requests originate from. However, the solution should still allow specific IP addresses from that country.

A

Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country.
Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP Set.

31
Q

A tech company that you are working for has undertaken a Total Cost Of Ownership (TCO) analysis evaluating the use of Amazon S3 versus acquiring more storage hardware. The result was that all 1200 employees would be granted access to use Amazon S3 for the storage of their personal documents.

A

Configure an IAM role and an IAM Policy to access the bucket.

32
Q

HPC for Linux

A

Amazon FSx for Lustre

33
Q

A company is designing a resilient architecture for its application that relies heavily on Amazon DynamoDB for data storage. The solutions architect is looking for a caching solution to improve read performance and reduce the load on DynamoDB. What service should the architect recommend for this scenario?

A

Amazon DynamoDB Accelerator (DAX).

34
Q

A company is deploying Amazon EC2 instances for a web application that requires access to both public and private resources. They want to ensure that the EC2 instances have public IP addresses for external access and private IP addresses for communication within their Virtual Private Cloud (VPC). Which configuration should they use?

A

Launch the EC2 instances in a public subnet with both public and private IP addresses enabled. Configure appropriate security groups for public and private access.

35
Q

A company wants to design a disaster recovery architecture that ensures business continuity in the event of a regional AWS service outage. Which AWS service can help them achieve this goal?

A

AWS Global Accelerator

36
Q

A company is designing an application that requires both scalability and high availability. The application consists of microservices deployed on AWS Fargate. Each microservice has varying resource demands based on the time of day, and the company aims to optimize costs without sacrificing performance. What strategies and AWS services should the solutions architect recommend to achieve both scalability and cost optimization?

A

Use AWS Fargate for microservices with AWS Auto Scaling based on custom metrics, implement an API Gateway for communication between microservices, and utilize Amazon Aurora Serverless for cost-efficient database scalability.

37
Q

A financial institution is migrating its monolithic application to a microservices architecture on AWS. The institution wants to ensure that each microservice has its own isolated storage with granular access control. What AWS service should the solutions architect recommend for secure and isolated microservices storage?

A

Amazon DynamoDB with fine-grained access control

38
Q

A company has a three-tier application for image sharing. The application uses an Amazon EC2 instance for the front-end layer, another EC2 instance for the application layer, and a third EC2 instance for a MySQL database. A solutions architect must design a scalable and highly available solution that requires the least amount of change to the application. Which solution meets these requirements?

A

Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an Amazon RDS Multi-AZ DB instance. Use Amazon S3 to store and serve users’ images.

39
Q

A company has an application that ingests incoming messages, and dozens of other applications and microservices quickly consume these messages. The message volume varies drastically and can sometimes reach 100,000 messages per second. The company aims to decouple the solution and increase scalability. What solution best meets these requirements?

A

Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure consumer applications to process the messages from the queues.
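
A sketch of the fan-out subscription (topic and queue ARNs are illustrative); each consumer application gets its own queue subscribed to the topic, and the queue's access policy must allow SNS to send to it.

```python
def sqs_subscription_params(topic_arn: str, queue_arn: str) -> dict:
    """sns.subscribe parameters: fan the topic out to one SQS queue."""
    return {
        "TopicArn": topic_arn,
        "Protocol": "sqs",
        "Endpoint": queue_arn,
        # Optionally skip the SNS JSON envelope:
        # "Attributes": {"RawMessageDelivery": "true"},
    }

# With boto3 (not executed here), one subscribe call per consumer queue:
#   sns = boto3.client("sns")
#   for queue_arn in (orders_queue_arn, billing_queue_arn, audit_queue_arn):
#       sns.subscribe(**sqs_subscription_params(ingest_topic_arn, queue_arn))
```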

40
Q

An application runs on an Amazon EC2 instance that has an Elastic IP address in VPC A. The application requires access to a database in VPC B. Both VPCs are in the same AWS account. Which solution will provide the required access MOST securely?

A

Configure a VPC peering connection between VPC A and VPC B.

41
Q

Layer 7 DDoS mitigation

A

AWS WAF and CloudFront

42
Q

AWS ML services to identify PII

A

Amazon Textract to extract text and Amazon Comprehend to identify PII

43
Q

Encryption questions involving self-managed keys

A

Pick the option with “Create a new customer managed key”

44
Q

Ensure an NLB distributes traffic equally between two Availability Zones

A

Disable cross-zone load balancing
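
A sketch of flipping the attribute (the load balancer ARN is illustrative); cross-zone behavior on an NLB is an attribute toggled via the ELBv2 API.

```python
def cross_zone_attribute(lb_arn: str, enabled: bool) -> dict:
    """elbv2.modify_load_balancer_attributes parameters for an NLB."""
    return {
        "LoadBalancerArn": lb_arn,
        "Attributes": [{
            "Key": "load_balancing.cross_zone.enabled",
            "Value": "true" if enabled else "false",  # attribute values are strings
        }],
    }

# With boto3 (not executed here):
#   elbv2 = boto3.client("elbv2")
#   elbv2.modify_load_balancer_attributes(**cross_zone_attribute(nlb_arn, False))
```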

45
Q

A company’s cloud operations team wants to standardize resource remediation. The company wants to provide a standard set of governance evaluations and remediations to all member accounts in its organization in AWS Organizations.

Which self-managed AWS service can the company use to meet these requirements with the LEAST amount of operational effort?

A

AWS Config conformance packs are collections of AWS Config rules and remediation actions that you can deploy as a single entity in an account and a Region or across an organization in AWS Organizations.

46
Q

A company runs an application on three very large Amazon EC2 instances in a single Availability Zone in the us-east-1 Region. Multiple 16 TB Amazon Elastic Block Store (Amazon EBS) volumes are attached to each EC2 instance. The operations team uses an AWS Lambda script triggered by a schedule-based Amazon EventBridge rule to stop the instances on evenings and weekends, and start the instances on weekday mornings.

Before deploying the solution, the company used the public AWS pricing documentation to estimate the overall costs of running this data warehouse solution 5 days a week for 10 hours a day. When looking at monthly Cost Explorer charges for this new account, the overall charges are higher than the estimate.

What is the MOST likely cost factor that the company overlooked?

A

When you stop an EC2 instance, AWS does not charge you for usage or data transfer. However, AWS does charge you for the storage of any EBS volumes that are attached to the instance.
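
A back-of-the-envelope sketch of the overlooked charge (the volume count per instance and the $/GB-month rate below are illustrative assumptions, not from the source):

```python
GB_PER_TB = 1024

def monthly_ebs_storage_cost(instances: int, volumes_per_instance: int,
                             tb_per_volume: int, usd_per_gb_month: float) -> float:
    """EBS storage accrues 24/7, even while the attached instances are stopped."""
    return instances * volumes_per_instance * tb_per_volume * GB_PER_TB * usd_per_gb_month

# 3 instances x 2 volumes (assumed) x 16 TB at an illustrative $0.08/GB-month
# is roughly $7,864/month of storage that a 10-hours-a-day compute estimate misses.
```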

47
Q

A company wants to build an immutable infrastructure for its software applications. The company wants to test the software applications before sending traffic to them. The company seeks an efficient solution that limits the effects of application bugs.

Which combination of steps should a solutions architect recommend? (Select TWO.)

A

Route 53 weighted routing gives you the ability to send a percentage of traffic to multiple resources. You can use a blue/green deployment strategy to deploy software applications predictably and to quickly roll back deployments if tests fail.

AWS CloudFormation: the company could use a separate environment to test changes before deploying them to production.

48
Q

An environment has an Auto Scaling group across two Availability Zones referred to as AZ-a and AZ-b. AZ-a has four Amazon EC2 instances, and AZ-b has three EC2 instances. The Auto Scaling group uses a default termination policy. None of the instances are protected from a scale-in event.

How will Auto Scaling proceed if there is a scale-in event?

A

The default termination policy helps ensure that instances are distributed evenly across Availability Zones for high availability. This action is the starting point of the default termination policy if the Availability Zones have an unequal number of instances and the instances are unprotected.

49
Q

A solutions architect is designing a database solution that must support a high rate of random disk reads and writes. It must provide consistent performance, and requires long-term persistence.

Which storage solution meets these requirements?

A

Provisioned IOPS volumes support a high rate of random disk reads and writes. Provisioned IOPS volumes handle I/O-intensive workloads (particularly database workloads) that are sensitive to storage performance and consistency. Provisioned IOPS volumes use a consistent IOPS rate that you specify when you create them. Amazon EBS delivers the provisioned performance 99.9% of the time.

50
Q

A team has an application that detects when new objects are uploaded into an Amazon S3 bucket. The uploads invoke an AWS Lambda function to write object metadata into an Amazon DynamoDB table and an Amazon RDS for PostgreSQL database.

Which action should the team take to ensure high availability?

A

By default, Amazon RDS is deployed to a single Availability Zone. Multi-AZ is the standard option to provide high availability. In a Multi-AZ setup, RDS DB instances are synchronously replicated in other Availability Zones to provide high availability and failover support.

51
Q

An application launched on Amazon EC2 instances needs to publish personally identifiable information (PII) about customers using Amazon Simple Notification Service (Amazon SNS). The application is launched in private subnets within an Amazon VPC.

What is the MOST secure way to allow the application to access service endpoints in the same AWS Region?

A

The use of PrivateLink does not require a public IP address on the instances or public access from the instance subnet. Traffic remains within the Region of the VPC and provides no single point of failure with the VPC endpoint. PrivateLink is the feature that powers VPC endpoints.

52
Q

A solutions architect is responsible for a new highly available three-tier architecture on AWS. An Application Load Balancer distributes traffic to two different Availability Zones with an auto scaling group that consists of Amazon EC2 instances and a Multi-AZ Amazon RDS DB instance. The solutions architect must recommend a multi-Region recovery plan with a recovery time objective (RTO) of 30 minutes. Because of budget constraints, the solutions architect cannot recommend a plan that replicates the entire architecture. The recovery plan should not use the secondary Region unless necessary.

Which disaster recovery strategy will meet these requirements?

A

A pilot light strategy meets all the requirements. This strategy does not involve a large increase in cost, and it offers an RTO within tens of minutes.

53
Q

A company is looking for ways to incorporate its current AWS usage expenditure into its operational expense tracking dashboard. A solutions architect has been tasked with proposing a method that enables the company to fetch its current year‘s cost data and project the costs for the forthcoming 12 months programmatically.
Which approach would fulfill these needs with the MINIMUM operational burden?

A

Leverage the AWS Cost Explorer API to retrieve usage cost data and forecasts, using pagination to handle large result sets.

54
Q

An Amazon DynamoDB table has a variable load, ranging from sustained heavy usage some days, to only having small spikes on others. The load is 80% read and 20% write. The provisioned throughput capacity has been configured to account for the heavy load to ensure throttling does not occur.
What would be the most efficient solution to optimize cost?

A

Create a DynamoDB auto scaling policy.
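
A sketch of the two Application Auto Scaling calls behind this answer (table name, capacity bounds, and the 70% target are illustrative assumptions):

```python
def table_read_scaling_target(table: str, min_rcu: int, max_rcu: int) -> dict:
    """application-autoscaling register_scalable_target parameters."""
    return {
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
        "MinCapacity": min_rcu,
        "MaxCapacity": max_rcu,
    }

def read_utilization_policy(table: str, target_pct: float) -> dict:
    """application-autoscaling put_scaling_policy parameters:
    track read-capacity utilization toward a target percentage."""
    return {
        "PolicyName": f"{table}-read-tracking",
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_pct,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
        },
    }

# With boto3 (not executed here); a second pair of calls covers WriteCapacityUnits:
#   aas = boto3.client("application-autoscaling")
#   aas.register_scalable_target(**table_read_scaling_target("orders", 5, 500))
#   aas.put_scaling_policy(**read_utilization_policy("orders", 70.0))
```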

55
Q

A company is transitioning their web presence into the AWS cloud. As part of the migration the company will be running a web application both on-premises and in AWS for a period of time. During the period of co-existence the client would like 80% of the traffic to hit the AWS-based web servers and 20% to be directed to the on-premises web servers.
What method can a Solutions Architect use to distribute traffic as requested?

A

Use Route 53 with a weighted routing policy and configure the respective weights (e.g., 80 for the AWS web servers, 20 for on-premises).

56
Q

A Solutions Architect needs to capture information about the traffic that reaches an Amazon Elastic Load Balancer. The information should include the source, destination, and protocol.
What is the most secure and reliable method for gathering this data?

A

You can use VPC Flow Logs to capture detailed information about the traffic going to and from your Elastic Load Balancer. Create a flow log for each network interface for your load balancer. There is one network interface per load balancer subnet.

57
Q

The database layer of an on-premises web application is being migrated to AWS. The database currently uses an in-memory cache. A Solutions Architect must deliver a solution that supports high availability and replication for the caching layer.
Which service should the Solutions Architect recommend?

A

Amazon ElastiCache Redis

CORRECT: “Amazon ElastiCache Redis“ is the correct answer.
INCORRECT: “Amazon ElastiCache Memcached“ is incorrect as it does not support high availability or multi-AZ.

58
Q

A corporation has a web-based multiplayer gaming service that operates using both TCP and UDP protocols. Amazon Route 53 is currently employed to direct application traffic to a set of Network Load Balancers (NLBs) in various AWS Regions. To prepare for an increase in user activity, the company must enhance application performance and reduce latency.
Which approach will best meet these requirements?

A

Implement AWS Global Accelerator ahead of the NLBs.

AWS Global Accelerator is designed to improve the availability and performance of your applications for local and global users. It directs traffic to optimal endpoints over the AWS global network, enhancing the performance of your TCP and UDP traffic by routing packets through the AWS global network infrastructure, reducing jitter, and improving overall game performance.

59
Q

A web application runs on a series of Amazon EC2 instances behind an Application Load Balancer (ALB). A Solutions Architect is updating the configuration with a health check and needs to select the protocol to use. What options are available? (choose 2)

A

An Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks.
Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check.
If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.
For an ALB the possible protocols are HTTP and HTTPS. The default is the HTTP protocol.

60
Q

A healthcare company is migrating its patient record system to AWS. The company receives thousands of encrypted patient data files every day through FTP. An on-premises server processes the data files twice a day. However, the processing job takes hours to finish.
The company wants the AWS solution to process incoming data files as soon as they arrive with minimal changes to the FTP clients that send the files. The solution must delete the incoming data files after the files have been processed successfully. Processing for each file needs to take around 10 minutes.
Which solution will meet these requirements in the MOST operationally efficient way?

A

AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 using SFTP. Storing incoming files in S3 Standard offers high durability, availability, and performance object storage for frequently accessed data.
AWS Lambda can respond immediately to S3 events, which allows processing of files as soon as they arrive. Lambda can also delete the files after processing. This meets all requirements and is operationally efficient, as it requires minimal management and has low costs.
CORRECT: “Use AWS Transfer Family to create an SFTP server to store incoming files in Amazon S3 Standard. Create an AWS Lambda function to process the files and to delete the files after they are processed. Use an S3 event notification to invoke the Lambda function when the files arrive“ is the correct answer (as explained above.)
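
A minimal sketch of the Lambda side of this answer (the `process_file` step is a hypothetical placeholder for the company's processing logic): parse the S3 event records, process each object, and delete it on success.

```python
def handler(event, context=None):
    """Lambda entry point for S3 ObjectCreated events: process each object, then delete it."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # process_file(bucket, key)                # hypothetical ~10-minute processing step
        # boto3.client("s3").delete_object(        # remove the file only after success
        #     Bucket=bucket, Key=key)
        processed.append((bucket, key))
    return processed
```

The Lambda timeout would need to be set above the ~10-minute processing time (maximum is 15 minutes).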

61
Q

A company is in the process of improving its security posture and wants to analyze and rectify a high volume of failed login attempts and unauthorized activities being logged in AWS CloudTrail.
What is the most efficient solution to help the company identify these security events with the LEAST amount of operational effort?

A

Amazon Athena can directly query data from S3 (where CloudTrail logs are stored) using standard SQL, making it a powerful and efficient tool for analyzing these logs. You don‘t need to manage any infrastructure or write custom scripts, and you can quickly write and run queries to identify the required security events.
CORRECT: “Use Amazon Athena to directly query CloudTrail logs for failed logins and unauthorized activities“ is the correct answer (as explained above.)

62
Q

A company requires an Elastic Load Balancer (ELB) for an application they are planning to deploy on AWS. The application requires extremely high throughput and extremely low latencies. The connections will be made using the TCP protocol, and the ELB must support load balancing to multiple ports on an instance. Which ELB should the company use?

A

The Network Load Balancer operates at the connection level (Layer 4), routing connections to targets – Amazon EC2 instances, containers and IP addresses based on IP protocol data. It is architected to handle millions of requests/sec, sudden volatile traffic patterns and provides extremely low latencies.
The NLB provides high throughput and extremely low latencies and is designed to handle traffic as it grows and can load balance millions of requests/second. NLB also supports load balancing to multiple ports on an instance.
CORRECT: “Network Load Balancer“ is the correct answer.
INCORRECT: “Classic Load Balancer“ is incorrect. The CLB operates using the TCP, SSL, HTTP and HTTPS protocols. It is not the best choice for requirements of extremely high throughput and low latency and does not support load balancing to multiple ports on an instance.
INCORRECT: “Application Load Balancer“ is incorrect. The ALB operates at the HTTP and HTTPS level only (does not support TCP load balancing).
INCORRECT: “Route 53“ is incorrect. Route 53 is a DNS service, it is not a type of ELB (though you can do some types of load balancing with it).

63
Q

A telecommunication company has an API that allows users to manage their mobile plans and services. The API experiences significant traffic spikes during specific times such as end of the month and special offer periods. The company needs to ensure low latency response time consistently to ensure a good user experience. The solution should also minimize operational overhead.
Which solution would meet these requirements MOST efficiently?

A

Amazon API Gateway and AWS Lambda together make a highly scalable solution for APIs. Provisioned concurrency in Lambda ensures that there is always a warm pool of functions ready to quickly respond to API requests, thereby guaranteeing low latency even during peak traffic times.

64
Q

A financial services company wants a single log processing model for all the log files (consisting of system logs, application logs, database logs, etc) that can be processed in a serverless fashion and then durably stored for downstream analytics. The company wants to use an AWS managed service that automatically scales to match the throughput of the log data and requires no ongoing administration.
As a solutions architect, which of the following AWS services would you recommend solving this problem?

A

Kinesis Data Firehose

65
Q

A traffic law enforcement company is building a solution that has thousands of edge devices that collectively generate 1 TB of status alerts each day. These devices provide vehicle information and number plate data whenever red-light violations are detected. Each entry is around 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?

A

Data ingestion is a good use case for Amazon Kinesis Data Firehose since it is scalable and can achieve the volumes required. An S3 Lifecycle configuration is also appropriate for the data retention requirement.
CORRECT: “Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days“ is the correct answer (as explained above.)
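The Lifecycle part of the answer can be sketched as the configuration document you would attach to the delivery bucket (for example via S3’s PutBucketLifecycleConfiguration API). The rule ID is a hypothetical name; the 14-day transition comes from the question:

```python
import json

# Sketch of the S3 Lifecycle configuration: transition objects to
# S3 Glacier 14 days after creation. The rule ID is hypothetical.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-alerts-after-14-days",  # hypothetical rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},              # apply to all objects in the bucket
            "Transitions": [
                {"Days": 14, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

print(json.dumps(lifecycle_configuration, indent=2))
```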

66
Q

A three-tier web application is composed of a front end hosted on an Amazon EC2 instance in a public subnet, application middleware hosted on EC2 in a private subnet, and a database hosted on an Amazon RDS MySQL instance in a private subnet. The database layer should be restricted to only allow incoming connections from the application.
Which of the following options ensures that the database can only be accessed by the application layer?

A

Security groups are stateful. All inbound traffic is blocked by default in custom security groups. If you create an inbound rule allowing traffic in, that traffic is automatically allowed back out again. You cannot block specific IP addresses using security groups (instead use Network Access Control Lists).
In this case the solution is to allow inbound traffic from the security group ID of the security group attached to the application layer. The rule should specify the appropriate protocol and port. This will ensure only the application layer can communicate with the database.
CORRECT: “Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances“ is the correct answer (as explained above.)
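The rule described above can be sketched as the parameters for EC2’s AuthorizeSecurityGroupIngress API. Both security group IDs are hypothetical; the key point is that the source is a security group reference rather than a CIDR range:

```python
# Sketch of the ingress rule: allow MySQL (TCP 3306) into the database
# security group only from the application tier's security group.
# Both group IDs below are hypothetical placeholders.
db_ingress_rule = {
    "GroupId": "sg-0db11111111111111",  # DB tier security group (hypothetical)
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,  # MySQL default port
            "ToPort": 3306,
            # Reference the app tier SG instead of an IP range, so only
            # instances attached to that SG can reach the database:
            "UserIdGroupPairs": [{"GroupId": "sg-0app2222222222222"}],
        }
    ],
}
```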

67
Q

An application is running in a private subnet of an Amazon VPC and must have outbound internet access for downloading updates. The Solutions Architect does not want the application exposed to inbound connection attempts. Which steps should be taken?

A

To enable outbound connectivity for instances in private subnets a NAT gateway can be created. The NAT gateway is created in a public subnet and a route must be created in the private subnet pointing to the NAT gateway for internet-bound traffic. An internet gateway must be attached to the VPC to facilitate outbound connections.
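The route described above can be sketched as the parameters for EC2’s CreateRoute API, added to the private subnet’s route table. All IDs are hypothetical:

```python
# Sketch of the private subnet's route table entry pointing all
# internet-bound traffic at the NAT gateway. IDs are hypothetical;
# the NAT gateway itself lives in a public subnet.
nat_route = {
    "RouteTableId": "rtb-0priv111111111111",  # private subnet's route table
    "DestinationCidrBlock": "0.0.0.0/0",      # all internet-bound traffic
    "NatGatewayId": "nat-0abc123456789abcd",  # NAT gateway in the public subnet
}
```

Because the NAT gateway only performs outbound translation, the instances remain unreachable for inbound connection attempts from the internet.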

68
Q

An organization is planning their disaster recovery solution. They plan to run a scaled-down version of a fully functional environment. In a DR situation the recovery time must be minimized.
Which DR strategy should a Solutions Architect recommend?

A

The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud. A warm standby solution extends the pilot light elements and preparation.
It further decreases the recovery time because some services are always running. By identifying your business-critical systems, you can fully duplicate these systems on AWS and have them always on.
CORRECT: “Warm standby“ is the correct answer.

69
Q

A media company has grown significantly in the past few months and the management team are concerned about compliance, governance, auditing, and security. The management team requires that configuration changes are tracked and a history of API calls is recorded.
What should a solutions architect do to meet these requirements?

A

Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.

70
Q

A Solutions Architect is designing an application for processing and extracting data from log files. The log files are generated by an application and the number and frequency of updates varies. The files are up to 1 GB in size and processing will take around 40 seconds for each file.
Which solution is the most cost-effective?

A

The question asks for the most cost-effective solution and therefore a serverless and automated solution will be the best choice.
AWS Lambda can run custom code in response to Amazon S3 bucket events. You upload your custom code to AWS Lambda and create a function. When Amazon S3 detects an event of a specific type (for example, an object created event), it can publish the event to AWS Lambda and invoke your function in Lambda. In response, AWS Lambda executes your function.

71
Q

A large company is currently using multiple AWS accounts as part of its cloud deployment model, and these accounts are currently structured using AWS Organizations. A Solutions Architect has been tasked with limiting access to an Amazon S3 bucket to only users of accounts that are enrolled with AWS Organizations. The Solutions Architect wants to avoid listing the many dozens of account IDs in the bucket policy, as there are many accounts that change frequently.
Which strategy meets these requirements with the LEAST amount of effort?

A

The aws:PrincipalOrgID global key provides a simpler alternative to manually listing and updating all the account IDs for all AWS accounts that exist within an Organization. The following Amazon S3 bucket policy allows members of any account in the ‘123456789’ organization to add an object into the ‘mydctbucket’ bucket.
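The policy described above can be sketched as follows. The bucket name and organization ID are taken from the text (a real organization ID has the form “o-xxxxxxxxxx”, so the value below is prefixed accordingly):

```python
import json

# Sketch of the bucket policy using the aws:PrincipalOrgID condition key.
# Bucket name and organization ID follow the text; "o-123456789" mimics
# the real "o-xxxxxxxxxx" organization ID format.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPutFromOrgMembers",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::mydctbucket/*",
            # Grants access to any principal in the organization without
            # listing individual account IDs:
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-123456789"}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Accounts joining or leaving the organization are picked up automatically, which is what removes the maintenance burden of listing account IDs.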

72
Q

A company runs an API on a Linux server in their on-premises data center. The company are planning to migrate the API to the AWS cloud. The company require a highly available, scalable and cost-effective solution. What should a Solutions Architect recommend?

A

CORRECT: “Migrate the API to Amazon API Gateway and use AWS Lambda as the backend“ is the correct answer.
INCORRECT: “Migrate the API to Amazon API Gateway and migrate the backend to Amazon EC2“ is incorrect. This is a less available and cost-effective solution for the backend compared to AWS Lambda.
INCORRECT: “Migrate the API server to Amazon EC2 instances in an Auto Scaling group and attach an Application Load Balancer“ is incorrect. Firstly, it may be difficult to load balance to an API. Additionally, this is a less cost-effective solution.
INCORRECT: “Migrate the API to Amazon CloudFront and use AWS Lambda as the origin“ is incorrect. You cannot migrate an API to CloudFront. You can use CloudFront in front of API Gateway but that is not what this answer specifies.

73
Q

A Solutions Architect is designing a migration strategy for a company moving to the AWS Cloud. The company use a shared Microsoft filesystem that uses Distributed File System Namespaces (DFSN). What will be the MOST suitable migration strategy for the filesystem?

A

The destination filesystem should be Amazon FSx for Windows File Server. This supports DFSN and is the most suitable storage solution for Microsoft filesystems. AWS DataSync supports migrating to Amazon FSx and automates the process.
CORRECT: “Use AWS DataSync to migrate to Amazon FSx for Windows File Server“ is the correct answer.
INCORRECT: “Use the AWS Server Migration Service to migrate to Amazon FSx for Lustre“ is incorrect. The server migration service is used to migrate virtual machines and FSx for Lustre does not support Windows filesystems.
INCORRECT: “Use AWS DataSync to migrate to an Amazon EFS filesystem“ is incorrect. You can migrate data to EFS using DataSync but it is the wrong destination for a Microsoft filesystem (Linux only).
INCORRECT: “Use the AWS Server Migration Service to migrate to an Amazon S3 bucket“ is incorrect. The server migration service is used to migrate virtual machines and Amazon S3 is an object-based storage system and unsuitable for hosting a Microsoft filesystem.
References:
https://aws.amazon.com/blogs/storage/migrate-to-amazon-fsx-for-windows-file-server-using-aws-datasync/
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/migrate-files-fsx.html
Topic: amazon-fsx

74
Q

A large quantity of data is stored on a NAS device on-premises and accessed using the SMB protocol. The company require a managed service for hosting the filesystem and a tool to automate the migration.
Which actions should a Solutions Architect take?

A

Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. This is the most suitable destination for this use case.
AWS DataSync can be used to move large amounts of data online between on-premises storage and Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server. The source datastore can be Server Message Block (SMB) file servers.

75
Q

A company has an on-premises server that uses a MySQL database to process and store customer information. The company wants to migrate to an AWS database service to achieve higher availability and to improve application performance. Additionally, the company wants to offload reporting workloads from its primary database to ensure it remains performant.
Which solution will meet these requirements in the MOST operationally efficient way?

A

Amazon Aurora with MySQL compatibility is a good fit for achieving high availability and improved performance. Aurora automatically distributes the data across multiple AZs in a single region. Additionally, Aurora allows the creation of up to 15 Aurora Replicas that share the same underlying volume as the primary instance. Directing reporting functions to the Aurora Replica is an effective way to offload reporting workloads from the primary database.
CORRECT: “Use Amazon Aurora with MySQL compatibility. Direct the reporting functions to use one of the Aurora Replicas“ is the correct answer (as explained above.)
INCORRECT: “Use Amazon RDS with MySQL in a Single-AZ deployment. Create a read replica in the same availability zone as the primary DB instance. Direct the reporting functions to the read replica“ is incorrect.
Though you can use Amazon RDS with MySQL in a Single-AZ deployment and create a read replica, it is not the most operationally efficient option as it does not provide the high availability that Aurora’s architecture offers.
INCORRECT: “Use AWS Database Migration Service (AWS DMS) to create an Amazon Aurora DB cluster in multiple AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance“ is incorrect.
Using AWS DMS to create Amazon Aurora DB clusters in multiple AWS Regions would be overkill for the requirements. It could also introduce additional complexity and doesn’t specifically address using a replica for reporting purposes.
INCORRECT: “Use Amazon EC2 instances to deploy a self-managed MySQL database with a replication setup for reporting purposes. Place instances in multiple availability zones and manage backups and patching manually“ is incorrect. Self-managing MySQL on EC2 adds significant operational overhead for backups, patching, and replication compared with a managed service.

76
Q

The Chief Financial Officer of a large corporation is looking for an AWS native tool which will help reduce their cloud spend. After receiving a budget alarm, the company has decided that they need to reduce their spend across their different areas of compute and need insights into their spend to decide where they can reduce cost.
What is the easiest way to achieve this goal?

A

AWS Compute Optimizer helps you identify the optimal AWS resource configurations, such as Amazon Elastic Compute Cloud (EC2) instance types, Amazon Elastic Block Store (EBS) volume configurations, and AWS Lambda function memory sizes, using machine learning to analyze historical utilization metrics. AWS Compute Optimizer provides a set of APIs and a console experience to help you reduce costs and increase workload performance by recommending the optimal AWS resources for your AWS workloads.

77
Q

A DevOps team uses an Amazon RDS MySQL database for running resource-intensive tests each month. The instance has Performance Insights enabled and is only used once a month for up to 48 hours. As part of an effort to reduce AWS spend, the team wants to reduce the cost of running the tests without reducing the memory and compute attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?

A

Taking a snapshot of the DB instance, deleting the instance, and retaining the snapshot is the most cost-effective solution. When needed, a new database can be restored from the snapshot. Performance Insights can be enabled on the new instance if needed. Note that the previous data from Performance Insights will not be associated with the new instance, however this was not a requirement.

78
Q

A Solutions Architect is designing an application that will run on an Amazon EC2 instance. The application must asynchronously invoke an AWS Lambda function to analyze thousands of .CSV files. The services should be decoupled.
Which service can be used to decouple the compute services?

A

You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked.
CORRECT: “Amazon SNS“ is the correct answer.
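The decoupling can be sketched as the EC2 application publishing to an SNS topic to which the Lambda function is subscribed. The topic ARN and message body are hypothetical; with boto3 this maps to `sns.publish(**publish_request)`:

```python
# Sketch of the decoupled flow: the EC2 application publishes a
# notification to an SNS topic; Lambda is subscribed to the topic and
# is invoked asynchronously with the message. Topic ARN, account ID,
# bucket, and key are all hypothetical placeholders.
publish_request = {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:csv-analysis-topic",
    # Message payload telling the Lambda function which file to analyze:
    "Message": '{"bucket": "csv-uploads", "key": "batch-001.csv"}',
}
```

SNS invokes the subscribed Lambda function asynchronously, so the EC2 application never waits on, or knows about, the function that does the analysis.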

79
Q

A Solutions Architect working for a large financial institution is building an application to manage their customers’ financial information and sensitive personal information. The Solutions Architect requires that the storage layer can store immutable data out of the box, with the ability to encrypt the data at rest, and requires that the storage layer provides ACID properties. They also want to use a containerized solution to manage the compute layer.
Which solution will meet these requirements with the LEAST amount of operational overhead?

A

The solution requires that the storage layer be immutable. This immutability can only be delivered by Amazon Quantum Ledger Database (QLDB), as Amazon QLDB has a built-in immutable journal that stores an accurate and sequenced entry of every data change. The journal is append-only, meaning that data can only be added to a journal, and it cannot be overwritten or deleted.
Secondly, the compute layer needs to be containerized and implemented with the least possible operational overhead. The option that best fits these requirements is Amazon ECS on AWS Fargate, as AWS Fargate is a serverless, containerized deployment option.

80
Q

An application is deployed on multiple AWS regions and accessed from around the world. The application exposes static public IP addresses. Some users are experiencing poor performance when accessing the application over the Internet.
What should a solutions architect recommend to reduce internet latency?

A

AWS Global Accelerator is a service in which you create accelerators to improve availability and performance of your applications for local and global users. Global Accelerator directs traffic to optimal endpoints over the AWS global network. This improves the availability and performance of your internet applications that are used by a global audience. Global Accelerator is a global service that supports endpoints in multiple AWS Regions, which are listed in the AWS Region Table.
By default, Global Accelerator provides you with two static IP addresses that you associate with your accelerator. (Or, instead of using the IP addresses that Global Accelerator provides, you can configure these entry points to be IPv4 addresses from your own IP address ranges that you bring to Global Accelerator.)

81
Q

A high-performance file system is required for a financial modelling application. The data set will be stored on Amazon S3 and the storage solution must have seamless integration so objects can be accessed as files.
Which storage solution should be used?

A

Amazon FSx for Lustre provides a high-performance file system optimized for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA). Amazon FSx works natively with Amazon S3, letting you transparently access your S3 objects as files on Amazon FSx to run analyses for hours to months.

82
Q

ElastiCache for Redis

A

Amazon ElastiCache for Redis – Redis, which stands for Remote Dictionary Server, is a fast, open-source, in-memory key-value data store for use as a database, cache, message broker, and queue. Redis now delivers sub-millisecond response times enabling millions of requests per second for real-time applications in Gaming, Ad-Tech, Financial Services, Healthcare, and IoT. Redis is a popular choice for caching, session management, gaming, leaderboards, real-time analytics, geospatial, ride-hailing, chat/messaging, media streaming, and pub/sub apps.

83
Q

An e-commerce company is looking for a solution with high availability, as it plans to migrate its flagship application to a fleet of Amazon EC2 instances. The solution should allow for content-based routing as part of the architecture.
As a Solutions Architect, which of the following will you suggest for the company?

A

Use an Application Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure Auto Scaling group to mask any failure of an instance
The Application Load Balancer (ALB) is best suited for load balancing HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), the Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.
This is the correct option since the question has a specific requirement for content-based routing which can be configured via the Application Load Balancer. Different AZs provide high availability to the overall architecture and Auto Scaling group will help mask any instance failures.

84
Q

An Electronic Design Automation (EDA) application produces massive volumes of data that can be divided into two categories. The ‘hot data‘ needs to be both processed and stored quickly in a parallel and distributed fashion. The ‘cold data‘ needs to be kept for reference with quick access for reads and updates at a low cost.
Which of the following AWS services is BEST suited to accelerate the aforementioned chip design process?

A

Amazon FSx for Lustre
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-performance file system. It is used for workloads such as machine learning, high-performance computing (HPC), video processing, and financial modeling. The open-source Lustre file system is designed for applications that require fast storage – where you want your storage to keep up with your compute. FSx for Lustre integrates with Amazon S3, making it easy to process data sets with the Lustre file system. When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and allows you to write changed data back to S3.
FSx for Lustre provides the ability to both process the ‘hot data‘ in a parallel and distributed fashion as well as easily store the ‘cold data‘ on Amazon S3. Therefore this option is the BEST fit for the given problem statement.

85
Q

A gaming company is looking at improving the availability and performance of its global flagship application which utilizes UDP protocol and needs to support fast regional failover in case an AWS Region goes down. The company wants to continue using its own custom DNS service.
Which of the following AWS services represents the best solution for this use-case?

A

AWS Global Accelerator – AWS Global Accelerator utilizes the Amazon global network, allowing you to improve the performance of your applications by lowering first-byte latency (the round trip time for a packet to go from a client to your endpoint and back again) and jitter (the variation of latency), and increasing throughput (the amount of data transferred per unit of time) as compared to the public internet.
Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.

86
Q

As part of a pilot program, a biotechnology company wants to integrate data files from its on-premises analytical application with AWS Cloud via an NFS interface.
Which of the following AWS services is the MOST efficient solution for the given use-case?

A

AWS Storage Gateway – File Gateway
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. The service provides three different types of gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access.

87
Q

A US-based healthcare startup is building an interactive diagnostic tool for COVID-19 related assessments. The users would be required to capture their personal health records via this tool. As this is sensitive health information, the backup of the user data must be kept encrypted in S3. The startup does not want to provide its own encryption keys but still wants to maintain an audit trail of when an encryption key was used and by whom.
Which of the following is the BEST solution for this use-case?

A

Use SSE-KMS to encrypt the user data on S3
AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware and software to provide a key management system scaled for the cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can specify a customer-managed CMK that you have already created. SSE-KMS provides you with an audit trail that shows when your CMK was used and by whom. Therefore SSE-KMS is the correct solution for this use-case.
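An upload using SSE-KMS can be sketched as the parameters for S3’s PutObject API. The bucket, key, payload, and KMS key alias are hypothetical; with boto3 this maps to `s3.put_object(**put_request)`:

```python
# Sketch of uploading an object with SSE-KMS. Bucket, object key, and
# KMS key alias are hypothetical. Every use of the CMK is logged in
# CloudTrail, providing the audit trail the use case requires.
put_request = {
    "Bucket": "health-records-backup",          # hypothetical bucket name
    "Key": "user-123/record.json",              # hypothetical object key
    "Body": b"{}",                              # placeholder payload
    "ServerSideEncryption": "aws:kms",          # request SSE-KMS encryption
    "SSEKMSKeyId": "alias/health-records-cmk",  # customer managed key alias
}
```

Because the key is AWS-managed infrastructure with a customer managed CMK, the startup never handles key material itself, which matches the “does not want to provide its own encryption keys” requirement.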

88
Q

A company has a VPC with a private subnet. Some services running inside the private subnet need to access the internet using IPv6.
Which service can be used to deliver this solution in the MOST cost-effective and scalable manner?

A

IPv6 traffic = Egress-only internet gateway

89
Q

VPC traffic logs = VPC Flow Logs

A

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. After you’ve created a flow log, you can retrieve and view its data in the chosen destination.
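Enabling flow logs for a VPC can be sketched as the parameters for EC2’s CreateFlowLogs API, here delivering to CloudWatch Logs. The VPC ID, log group name, account ID, and IAM role are hypothetical:

```python
# Sketch of enabling VPC Flow Logs delivered to CloudWatch Logs.
# VPC ID, log group, account ID, and IAM role ARN are hypothetical.
flow_log_request = {
    "ResourceType": "VPC",                      # log the whole VPC's interfaces
    "ResourceIds": ["vpc-0abc111111111111"],    # hypothetical VPC ID
    "TrafficType": "ALL",                       # capture accepted + rejected traffic
    "LogDestinationType": "cloud-watch-logs",
    "LogGroupName": "vpc-flow-logs",            # hypothetical log group
    "DeliverLogsPermissionArn": "arn:aws:iam::123456789012:role/flow-logs-role",
}
```

Setting `TrafficType` to `ACCEPT` or `REJECT` instead narrows the capture; `ALL` is the usual choice for auditing.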

90
Q

To provide a private connection from a VPC to AWS services = Use a VPC endpoint

A

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
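A common case is a gateway endpoint for Amazon S3, which can be sketched as the parameters for EC2’s CreateVpcEndpoint API. The VPC ID, region, and route table ID are hypothetical:

```python
# Sketch of a gateway endpoint for S3: instances in the VPC reach S3
# privately, with no internet gateway or NAT required. VPC and route
# table IDs are hypothetical; the service name assumes us-east-1.
endpoint_request = {
    "VpcEndpointType": "Gateway",                # S3/DynamoDB use gateway endpoints
    "VpcId": "vpc-0abc111111111111",             # hypothetical VPC ID
    "ServiceName": "com.amazonaws.us-east-1.s3", # S3 in us-east-1
    "RouteTableIds": ["rtb-0priv222222222222"],  # route table to receive the endpoint route
}
```

Most other AWS services use interface endpoints (PrivateLink ENIs in your subnets) rather than gateway endpoints; only S3 and DynamoDB offer the gateway type.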

91
Q

A company is designing a bidding application which will receive 1,000 bids per second. These bids must be processed in order without losing any bid, and there will be multiple services processing each bid.
What is the best AWS service to handle this requirement?

A

Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering).
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows.
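The ordering guarantee described above comes from the partition key: records sharing a key land on the same shard, and each shard is read in order. A minimal sketch, where the stream name and the choice of auction ID as partition key are assumptions; with boto3 each dict maps to `kinesis.put_record(**record)`:

```python
# Sketch of ordered ingestion with Kinesis Data Streams. Records with the
# same partition key go to the same shard, so consumers read them in the
# order they were written. Stream name and partition key are hypothetical.
bids = [
    {"StreamName": "bids", "PartitionKey": "auction-42", "Data": b"bid=100"},
    {"StreamName": "bids", "PartitionKey": "auction-42", "Data": b"bid=105"},
]
# Same partition key => same shard => a consumer sees bid=100 before bid=105.
```

This is why Kinesis, rather than a standard SQS queue, is the fit here: it preserves ordering and lets multiple applications replay the same stream independently.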