saa-c02-part-11 Flashcards

1
Q

A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancer. However, many of the web service clients can only reach IP addresses whitelisted on their firewalls.

What should a solutions architect recommend to meet the clients’ needs?

  1. A Network Load Balancer with an associated Elastic IP address.
  2. An Application Load Balancer with an associated Elastic IP address
  3. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address
  4. An EC2 instance with a public IP address running as a proxy in front of the load balancer
A
  1. A Network Load Balancer with an associated Elastic IP address

IP addresses whitelisted on their firewalls = clients need static IP addresses = Elastic IP addresses (EIPs) attached to a Network Load Balancer (NLB)

A Route 53 A record alone does not give the load balancer a static IP = not 3

ALBs do not support Elastic IPs = not 2; a proxy EC2 instance is a single point of failure = not 4

2
Q

A company wants to host a web application on AWS that will communicate to a database within a VPC. The application should be highly available.

What should a solutions architect recommend?

  1. Create two Amazon EC2 instances to host the web servers behind a load balancer, and then deploy the database on a large instance.
  2. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.
  3. Deploy a load balancer in the public subnet with an Auto Scaling group for the web servers, and then deploy the database on an Amazon EC2 instance in the private subnet.
  4. Deploy two web servers with an Auto Scaling group, configure a domain that points to the two web servers, and then deploy a database architecture in multiple Availability Zones.
A
  2. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.

highly available = multi AZ = 2,4

communicate to a database within a VPC + managed Multi-AZ failover = Amazon RDS = 2 (option 4 has no load balancer for the web tier)

3
Q

A company’s packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution, but wants to further reduce data transfer costs. The company cannot modify the application’s source code.

What should a solutions architect do to reduce costs?

  1. Use Lambda@Edge to compress the files as they are sent to users.
  2. Enable Amazon S3 Transfer Acceleration to reduce the response times.
  3. Enable caching on the CloudFront distribution to store generated files at the edge.
  4. Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.
A
  1. Use Lambda@Edge to compress the files as they are sent to users.

reduce data transfer costs = compress the files = smaller files

single-use text files = can’t cache = not 3
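A quick local sanity check of the cost logic (the sample payload below is invented for illustration, not taken from the question):

```python
import gzip

# Illustrative sketch: repetitive text compresses extremely well, so
# compressing responses at the edge shrinks the bytes CloudFront
# transfers to users, and data transfer out is billed per GB.
text = ("order-id,status,notes\n"
        + "12345,complete,thank you for your purchase\n" * 500).encode("utf-8")

compressed = gzip.compress(text)
print(f"original: {len(text)} bytes, gzipped: {len(compressed)} bytes "
      f"({len(compressed) / len(text):.1%} of original)")
```

Real-world text files compress less dramatically than this repetitive sample, but reductions of 60-90% for text are common.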

4
Q

A database is on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that experiences highly dynamic reads. Application developers notice a significant slowdown when testing read performance from a secondary AWS Region. The developers want a solution that provides less than 1 second of read replication latency.

What should the solutions architect recommend?

  1. Install MySQL on Amazon EC2 in the secondary Region.
  2. Migrate the database to Amazon Aurora with cross-Region replicas.
  3. Create another RDS for MySQL read replica in the secondary Region.
  4. Implement Amazon ElastiCache to improve database query performance.
A
  2. Migrate the database to Amazon Aurora with cross-Region replicas.

less than 1 second of cross-Region read replication latency = Aurora (Global Database typically replicates in under 1 second); RDS for MySQL cross-Region replicas cannot guarantee that

5
Q

A company is planning to deploy an Amazon RDS DB instance running Amazon Aurora. The company has a backup retention policy requirement of 90 days.

Which solution should a solutions architect recommend?

  1. Set the backup retention period to 90 days when creating the RDS DB instance.
  2. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days.
  3. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.
  4. Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.
A
  2. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days.

The automated backup retention period maxes out at 35 days = not 1. Aurora backups are stored in S3 = S3 bucket with a lifecycle policy set to delete after 90 days.

6
Q

A company currently has 250 TB of backup files stored in Amazon S3 in a vendor’s proprietary format. Using a Linux-based software application provided by the vendor, the company wants to retrieve files from Amazon S3, transform the files to an industry-standard format, and re-upload them to Amazon S3. The company wants to minimize the data transfer charges associated with this conversion.

What should a solutions architect do to accomplish this?

  1. Install the conversion software as an Amazon S3 batch operation so the data is transformed without leaving Amazon S3.
  2. Install the conversion software onto an on-premises virtual machine. Perform the transformation and re-upload the files to Amazon S3 from the virtual machine.
  3. Use AWS Snowball Edge devices to export the data and install the conversion software onto the devices. Perform the data transformation and re-upload the files to Amazon S3 from the Snowball Edge devices.
  4. Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re-upload the files to Amazon S3 from the EC2 instance.
A
  4. Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re-upload the files to Amazon S3 from the EC2 instance.

minimize the data transfer charges = keep the data inside AWS = not Snowball or on-premises = 1, 4

software application provided by the vendor = cannot repackage it as an S3 Batch Operations Lambda job = 4 wins
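A rough cost sketch of why pulling the data on-premises loses (the per-GB rate is an assumed illustrative figure, not a quoted AWS price):

```python
# Back-of-the-envelope data transfer comparison. The $/GB figure is an
# assumption for illustration only; check current AWS pricing.
data_gb = 250 * 1000           # 250 TB expressed in GB (decimal units)
egress_per_gb = 0.09           # assumed internet data-transfer-out rate, $/GB

pull_to_on_prem = data_gb * egress_per_gb  # option 2: download everything first
in_region_ec2 = 0.0                        # option 4: S3 <-> EC2 in the same Region
print(f"on-premises route: ${pull_to_on_prem:,.0f} in egress alone; "
      f"in-Region EC2: ${in_region_ec2:,.0f}")
```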

7
Q

A company is migrating a NoSQL database cluster to Amazon EC2. The database automatically replicates data to maintain at least three copies of the data. I/O throughput of the servers is the highest priority. Which instance type should a solutions architect recommend for the migration?

  1. Storage optimized instances with instance store
  2. Burstable general purpose instances with an Amazon Elastic Block Store (Amazon EBS) volume
  3. Memory optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled
  4. Compute optimized instances with Amazon Elastic Block Store (Amazon EBS) optimization enabled
A
  1. Storage optimized instances with instance store

I/O throughput of the servers is the highest priority = instance store

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html

8
Q

A company has a large Microsoft SharePoint deployment running on-premises that requires Microsoft Windows shared file storage. The company wants to migrate this workload to the AWS Cloud and is considering various storage options. The storage solution must be highly available and integrated with Active Directory for access control.

Which solution will satisfy these requirements?

  1. Configure Amazon Elastic File System (Amazon EFS) storage and set the Active Directory domain for authentication.
  2. Create an SMB file share on an AWS Storage Gateway file gateway in two Availability Zones.
  3. Create an Amazon S3 bucket and configure Microsoft Windows Server to mount it as a volume.
  4. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.
A
  4. Create an Amazon FSx for Windows File Server file system on AWS and set the Active Directory domain for authentication.

Microsoft Windows shared file storage + Active Directory integration = Amazon FSx for Windows File Server

9
Q

A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will not require database modifications.

Which solution will meet these requirements?

  1. Amazon DynamoDB
  2. Amazon RDS for MySQL
  3. MySQL-compatible Amazon Aurora Serverless
  4. MySQL deployed on Amazon EC2 in an Auto Scaling group
A
  3. MySQL-compatible Amazon Aurora Serverless

sporadic usage patterns = Auto Scaling or Serverless = 3, 4

no database modifications = MySQL-compatible = not 1

cost-effective = Aurora Serverless scales capacity with demand = 3 is cheaper than always-on EC2

10
Q

A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure.

The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data.

Which combination of storage and caching should the solutions architect use?

  1. Amazon S3 with Amazon CloudFront
  2. Amazon S3 Glacier with Amazon ElastiCache
  3. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
  4. AWS Storage Gateway with Amazon ElastiCache
A
  1. Amazon S3 with Amazon CloudFront

minimize the amount of time users wait + petabytes of data = Amazon S3 for storage + CloudFront for caching

11
Q

A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3 and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances.

What should the solutions architect do to reduce the overall data transfer costs?

  1. Place all the EC2 instances in an Auto Scaling group.
  2. Place all the EC2 instances in the same AWS Region.
  3. Place all the EC2 instances in the same Availability Zone.
  4. Place all the EC2 instances in private subnets in multiple Availability Zones.
A
  3. Place all the EC2 instances in the same Availability Zone.

reduce the overall data transfer costs = place instances as close together as possible = same AZ

“Also, be aware of inter-Availability Zones data transfer charges between Amazon EC2 instances, even within the same region. If possible, the instances in a development or test environment that need to communicate with each other should be co-located within the same Availability Zone to avoid data transfer charges. (This doesn’t apply to production workloads which will most likely need to span multiple Availability Zones for high availability.)” https://aws.amazon.com/blogs/mt/using-aws-cost-explorer-to-analyze-data-transfer-costs/
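The quoted advice can be put in numbers; a minimal sketch with an assumed per-GB rate:

```python
# Illustrative arithmetic with an assumed inter-AZ rate (check current
# pricing): cross-AZ EC2 traffic is typically billed per GB in each
# direction, while same-AZ traffic over private IP addresses is free.
shuffled_gb = 10_000                 # hypothetical data moved between instances
cross_az_rate = 0.01                 # assumed $/GB, charged on send and receive

cross_az_cost = shuffled_gb * cross_az_rate * 2   # both directions billed
same_az_cost = 0.0
print(f"cross-AZ: ${cross_az_cost:,.2f}, same AZ: ${same_az_cost:,.2f}")
```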

12
Q

A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services.

What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?

  1. Create a DX connection in each new account. Route the network traffic to the on-premises servers.
  2. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers.
  3. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers.
  4. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
A
  4. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.

LEAST amount of operational overhead = AWS Transit Gateway connects VPCs and on-premises networks through a central hub

https://aws.amazon.com/transit-gateway/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc

13
Q

A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.

What should a solutions architect recommend?

  1. Deploy Amazon Inspector and associate it with the ALB.
  2. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
  3. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
  4. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
A
  2. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.

high request rate from changing IP addresses = AWS WAF rate-based (rate-limiting) rule = blocks abusive sources with minimal impact on legitimate users

14
Q

A company receives structured and semi-structured data from various sources once every day. A solutions architect needs to design a solution that leverages big data processing frameworks. The data should be accessible using SQL queries and business intelligence tools.

What should the solutions architect recommend to build the MOST high-performing solution?

  1. Use AWS Glue to process data and Amazon S3 to store data.
  2. Use Amazon EMR to process data and Amazon Redshift to store data.
  3. Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data.
  4. Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.
A
  2. Use Amazon EMR to process data and Amazon Redshift to store data.

SQL queries = Redshift

big data processing = EMR

15
Q

A company is hosting an election reporting website on AWS for users around the world. The website uses Amazon EC2 instances for the web and application tiers in an Auto Scaling group with Application Load Balancers. The database tier uses an Amazon RDS for MySQL database. The website is updated with election results once an hour and has historically observed hundreds of users accessing the reports.

The company is expecting a significant increase in demand because of upcoming elections in different countries. A solutions architect must improve the website’s ability to handle additional demand while minimizing the need for additional EC2 instances.

Which solution will meet these requirements?

  1. Launch an Amazon ElastiCache cluster to cache common database queries.
  2. Launch an Amazon CloudFront web distribution to cache commonly requested website content.
  3. Enable disk-based caching on the EC2 instances to cache commonly requested website content.
  4. Deploy a reverse proxy into the design using an EC2 instance with caching enabled for commonly requested website content.
A
  2. Launch an Amazon CloudFront web distribution to cache commonly requested website content.

minimizing the need for additional EC2 instances = cache content before it reaches the web tier = CloudFront (ElastiCache only offloads the database, not the EC2 fleet)

16
Q

A company is building a website that relies on reading and writing to an Amazon DynamoDB database. The traffic associated with the website predictably peaks during business hours on weekdays and declines overnight and during weekends. A solutions architect needs to design a cost-effective solution that can handle the load.

What should the solutions architect do to meet these requirements?

  1. Enable DynamoDB Accelerator (DAX) to cache the data.
  2. Enable Multi-AZ replication for the DynamoDB database.
  3. Enable DynamoDB auto scaling when creating the tables.
  4. Enable DynamoDB On-Demand capacity allocation when creating the tables.
A
  3. Enable DynamoDB auto scaling when creating the tables.

predictable peaks and troughs = provisioned capacity with auto scaling = 3

On-Demand = more expensive for predictable traffic = not 4

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html

17
Q

A company uses Amazon Redshift for its data warehouse. The company wants to ensure high durability for its data in case of any component failure.

What should a solutions architect recommend?

  1. Enable concurrency scaling.
  2. Enable cross-Region snapshots.
  3. Increase the data retention period.
  4. Deploy Amazon Redshift in Multi-AZ.
A
  2. Enable cross-Region snapshots.

high durability in case of component failure = snapshots copied to another Region

18
Q

A company has data stored in an on-premises data center that is used by several on-premises applications. The company wants to maintain its existing application environment and be able to use AWS services for data analytics and future visualizations.

Which storage service should a solutions architect recommend?

  1. Amazon Redshift
  2. AWS Storage Gateway for files
  3. Amazon Elastic Block Store (Amazon EBS)
  4. Amazon Elastic File System (Amazon EFS)
A
  2. AWS Storage Gateway for files

keep the on-premises application environment while making data available to AWS analytics = AWS Storage Gateway (file gateway) = 2

19
Q

A solutions architect must design a solution that uses Amazon CloudFront with an Amazon S3 origin to store a static website. The company’s security policy requires that all website traffic be inspected by AWS WAF.

How should the solutions architect comply with these requirements?

  1. Configure an S3 bucket policy to accept requests coming from the AWS WAF Amazon Resource Name (ARN) only.
  2. Configure Amazon CloudFront to forward all incoming requests to AWS WAF before requesting content from the S3 origin.
  3. Configure a security group that allows Amazon CloudFront IP addresses to access Amazon S3 only. Associate AWS WAF to CloudFront.
  4. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the distribution.
A
  4. Configure Amazon CloudFront and Amazon S3 to use an origin access identity (OAI) to restrict access to the S3 bucket. Enable AWS WAF on the distribution.

restrict the S3 origin to CloudFront = OAI; inspect all website traffic = AWS WAF attached to the CloudFront distribution

20
Q

A company has a 143 TB MySQL database that it wants to migrate to AWS. The plan is to use Amazon Aurora MySQL as the platform going forward. The company has a 100 Mbps AWS Direct Connect connection to Amazon VPC.

Which solution meets the company’s needs and takes the LEAST amount of time?

  1. Use a gateway endpoint for Amazon S3. Migrate the data to Amazon S3. Import the data into Aurora.
  2. Upgrade the Direct Connect link to 500 Mbps. Copy the data to Amazon S3. Import the data into Aurora.
  3. Order an AWS Snowmobile and copy the database backup to it. Have AWS import the data into Amazon S3. Import the backup into Aurora.
  4. Order four 50-TB AWS Snowball devices and copy the database backup onto them. Have AWS import the data into Amazon S3. Import the data into Aurora.
A
  4. Order four 50-TB AWS Snowball devices and copy the database backup onto them. Have AWS import the data into Amazon S3. Import the data into Aurora.

143 TB over a 100 Mbps link takes months, and even 500 Mbps takes weeks = not 1, 2. Snowmobile is for exabyte-scale migrations = overkill. Multiple Snowball devices loaded in parallel = LEAST time. (Exam heuristic: when an option orders multiple Snowball devices for a large migration, it is usually the right answer.)
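The arithmetic behind ruling out the network options can be sketched as:

```python
# Simple transfer-time arithmetic (ignores protocol overhead and assumes
# full, sustained link utilization, so real transfers would be slower).
data_bits = 143 * 10**12 * 8   # 143 TB in bits (decimal terabytes)
link_bps = 100 * 10**6         # 100 Mbps Direct Connect

days_100 = data_bits / link_bps / 86_400
days_500 = data_bits / (5 * link_bps) / 86_400   # option 2's upgraded link
print(f"100 Mbps: ~{days_100:.0f} days; 500 Mbps upgrade: ~{days_500:.0f} days")
```

Either way the link takes weeks to months, while Snowball devices ship and import on the order of days.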