saa-c02-part-14 Flashcards

1
Q

A company is designing a website that uses an Amazon S3 bucket to store static images. The company wants all future requests to have faster response times while reducing both latency and cost.
Which service configuration should a solutions architect recommend?
  1. Deploy a NAT server in front of Amazon S3.
  2. Deploy Amazon CloudFront in front of Amazon S3.
  3. Deploy a Network Load Balancer in front of Amazon S3.
  4. Configure Auto Scaling to automatically adjust the capacity of the website.

A
  2. Deploy Amazon CloudFront in front of Amazon S3.

faster response times = CloudFront

2
Q

A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type in anticipation of more users in the future.

Which service should a solutions architect recommend?

  1. Amazon Aurora MySQL
  2. Amazon Aurora Serverless for MySQL
  3. Amazon Redshift Spectrum
  4. Amazon RDS for MySQL
A
  2. Amazon Aurora Serverless for MySQL

without selecting a particular instance type = serverless

https://searchcloudcomputing.techtarget.com/answer/When-should-I-use-Amazon-RDS-vs-Aurora-Serverless
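The "without selecting an instance type" hint maps to Aurora Serverless's capacity-based scaling. A minimal sketch of the create-db-cluster request shape (the cluster identifier and capacity numbers are assumptions, not from the question):

```python
# Aurora Serverless v1: you set an ACU range instead of a DB instance class.
cluster_request = {
    "DBClusterIdentifier": "sales-db",   # hypothetical name
    "Engine": "aurora-mysql",
    "EngineMode": "serverless",          # no instance type to pick
    "ScalingConfiguration": {
        "MinCapacity": 1,                # capacity scales with load
        "MaxCapacity": 32,
        "AutoPause": True,               # suits the infrequent access pattern
        "SecondsUntilAutoPause": 300,
    },
}
```

Note there is no DBInstanceClass anywhere in the request, which is exactly what the question asks for.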

3
Q

A company needs to comply with a regulatory requirement that states all emails must be stored and archived externally for 7 years. An administrator has created compressed email files on premises and wants a managed service to transfer the files to AWS storage.

Which managed service should a solutions architect recommend?

  1. Amazon Elastic File System (Amazon EFS)
  2. Amazon S3 Glacier
  3. AWS Backup
  4. AWS Storage Gateway
A
  4. AWS Storage Gateway

on premises + to transfer the files to AWS = AWS Storage Gateway

4
Q

A company has hired a new cloud engineer who should not have access to an Amazon S3 bucket named CompanyConfidential. The cloud engineer must be able to read from and write to an S3 bucket called AdminTools.

Which IAM policy will meet these requirements?

A

Object-level permissions need S3 resource ARNs that end with /* (the bare bucket ARN covers only bucket-level actions)

You need both a Deny effect (CompanyConfidential) and an Allow effect (AdminTools)

least privilege
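A sketch of a policy matching these hints, written as a Python dict. Only the bucket names come from the question; the rest follows standard IAM policy grammar:

```python
# Explicit Deny on the confidential bucket, Allow on AdminTools only.
# An explicit Deny always overrides any Allow.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::CompanyConfidential",     # bucket-level actions
                "arn:aws:s3:::CompanyConfidential/*",   # object-level actions
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],  # read + write only
            "Resource": "arn:aws:s3:::AdminTools/*",
        },
    ],
}
```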

5
Q

A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.

What should a solutions architect do to accomplish this?

  1. Use AWS Config rules to define and detect resources that are not properly tagged.
  2. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
  3. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
  4. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.
A
  1. Use AWS Config rules to define and detect resources that are not properly tagged.

configured with tags = AWS Config to check
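This is the managed rule the answer points at. A minimal sketch of the put-config-rule input (the rule name and tag key are assumptions):

```python
# AWS Config managed rule REQUIRED_TAGS checks the listed resource
# types for the required tag key and flags non-compliant resources.
config_rule = {
    "ConfigRuleName": "required-tags-check",          # hypothetical name
    "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
    "InputParameters": '{"tag1Key": "Environment"}',  # example tag key
    "Scope": {
        "ComplianceResourceTypes": [
            "AWS::EC2::Instance",
            "AWS::RDS::DBInstance",
            "AWS::Redshift::Cluster",
        ]
    },
}
```

No code to write or schedule, which is why it beats options 3 and 4 on operational effort.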

6
Q

A company has a live chat application running on its on-premises servers that use WebSockets. The company wants to migrate the application to AWS. Application traffic is inconsistent, and the company expects there to be more traffic with sharp spikes in the future.

The company wants a highly scalable solution with no server maintenance nor advanced capacity planning.

Which solution meets these requirements?

  1. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
  2. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
  3. Run Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
  4. Run Amazon EC2 instances behind a Network Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
A
  2. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.

no server maintenance = Lambda = 1,2

sharp spikes in the future = scaling needed = on-demand capacity.

nor advanced capacity planning = not provisioned capacity = not 1

7
Q

A company hosts its static website content from an Amazon S3 bucket in the us-east-1 Region. Content is made available through an Amazon CloudFront origin pointing to that bucket. Cross-Region replication is set to create a second copy of the bucket in the ap-southeast-1 Region. Management wants a solution that provides greater availability for the website.

Which combination of actions should a solutions architect take to increase availability? (Choose two.)

  1. Add both buckets to the CloudFront origin.
  2. Configure failover routing in Amazon Route 53.
  3. Create a record in Amazon Route 53 pointing to the replica bucket.
  4. Create an additional CloudFront origin pointing to the ap-southeast-1 bucket.
  5. Set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary.
A
  4. Create an additional CloudFront origin pointing to the ap-southeast-1 bucket.
  5. Set up a CloudFront origin group with the us-east-1 bucket as the primary and the ap-southeast-1 bucket as the secondary.

greater availability = make the replica bucket a second CloudFront origin, then fail over between the two with an origin group = 4, 5

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/images/origingroups-overview.png
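A sketch of the origin group piece of the distribution config (the origin IDs are hypothetical; the member origins must already exist in the distribution):

```python
# CloudFront origin group: primary in us-east-1, secondary in
# ap-southeast-1. CloudFront retries the secondary when the primary
# returns one of the listed failover status codes.
origin_group = {
    "Id": "s3-origin-group",                    # hypothetical id
    "FailoverCriteria": {
        "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]}
    },
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "primary-us-east-1"},       # hypothetical ids
            {"OriginId": "secondary-ap-southeast-1"},
        ],
    },
}
```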

8
Q

A company hosts a training site on a fleet of Amazon EC2 instances. The company anticipates that its new course, which consists of dozens of training videos on the site, will be extremely popular when it is released in 1 week.

What should a solutions architect do to minimize the anticipated server load?

  1. Store the videos in Amazon ElastiCache for Redis. Update the web servers to serve the videos using the ElastiCache API.
  2. Store the videos in Amazon Elastic File System (Amazon EFS). Create a user data script for the web servers to mount the EFS volume.
  3. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI.
  4. Store the videos in an Amazon S3 bucket. Create an AWS Storage Gateway file gateway to access the S3 bucket. Create a user data script for the web servers to mount the file gateway.
A
  3. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI.

minimize the anticipated server load = use CloudFront for caching
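Restricting the bucket to the OAI is done with a bucket policy like the following sketch (the OAI id and bucket name are hypothetical):

```python
# S3 bucket policy that lets only the CloudFront OAI read objects,
# so all viewer traffic must come through CloudFront's cache.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                # hypothetical OAI id
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::training-videos/*",  # hypothetical bucket
        }
    ],
}
```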

9
Q

A company runs a production application on a fleet of Amazon EC2 instances. The application reads the data from an Amazon SQS queue and processes the messages in parallel. The message volume is unpredictable and often has intermittent traffic. This application should continually process messages without any downtime.

Which solution meets these requirements MOST cost-effectively?

  1. Use Spot Instances exclusively to handle the maximum capacity required.
  2. Use Reserved Instances exclusively to handle the maximum capacity required.
  3. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
  4. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.
A
  3. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.

MOST cost-effectively = Reserved Instances for the steady baseline + Spot for spikes (SQS redelivers messages if a Spot Instance is interrupted, so no work is lost)
10
Q

A company has a hybrid application hosted on multiple on-premises servers with static IP addresses. There is already a VPN that provides connectivity between the VPC and the on-premises network. The company wants to distribute TCP traffic across the on-premises servers for internet users.

What should a solutions architect recommend to provide a highly available and scalable solution?

  1. Launch an internet-facing Network Load Balancer (NLB) and register on-premises IP addresses with the NLB.
  2. Launch an internet-facing Application Load Balancer (ALB) and register on-premises IP addresses with the ALB.
  3. Launch an Amazon EC2 instance, attach an Elastic IP address, and distribute traffic to the on-premises servers.
  4. Launch an Amazon EC2 instance with public IP addresses in an Auto Scaling group and distribute traffic to the on-premises servers.
A
  1. Launch an internet-facing Network Load Balancer (NLB) and register on-premises IP addresses with the NLB.

a single Amazon EC2 instance = not highly available, so not 3 or 4

static IP addresses = NLB = 1
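The on-premises servers are registered as IP targets. A sketch of the NLB target group in CloudFormation shape (the VPC id and IP addresses are hypothetical; on-premises targets reachable over the VPN need AvailabilityZone set to "all"):

```python
# NLB target group with IP targets so on-premises servers (reachable
# over the existing VPN) can receive the load-balanced TCP traffic.
target_group = {
    "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
    "Properties": {
        "Protocol": "TCP",                 # NLB = layer 4
        "Port": 443,
        "TargetType": "ip",                # allows on-premises IPs
        "VpcId": "vpc-1234567890abcdef0",  # hypothetical
        "Targets": [
            {"Id": "10.0.100.10", "AvailabilityZone": "all"},  # hypothetical IPs
            {"Id": "10.0.100.11", "AvailabilityZone": "all"},
        ],
    },
}
```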

11
Q

Management has decided to deploy all AWS VPCs with IPv6 enabled. After some time, a solutions architect tries to launch a new instance and receives an error stating that there is not enough IP address space available in the subnet.

What should the solutions architect do to fix this?

  1. Check to make sure that only IPv6 was used during the VPC creation.
  2. Create a new IPv4 subnet with a larger range, and then launch the instance.
  3. Create a new IPv6-only subnet with a large range, and then launch the instance.
  4. Disable the IPv4 subnet and migrate all instances to IPv6 only. Once that is complete, launch the instance.
A
  2. Create a new IPv4 subnet with a larger range, and then launch the instance.

not enough IP address space = the error refers to IPv4; enabling IPv6 does not free up IPv4 addresses

every subnet still requires an IPv4 CIDR and IPv4 cannot be disabled, so 1, 3, and 4 are wrong

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html

12
Q

A company has a build server that is in an Auto Scaling group and often has multiple Linux instances running. The build server requires consistent and mountable shared NFS storage for jobs and configurations.

Which storage option should a solutions architect recommend?

  1. Amazon S3
  2. Amazon FSx
  3. Amazon Elastic Block Store (Amazon EBS)
  4. Amazon Elastic File System (Amazon EFS)
A
  4. Amazon Elastic File System (Amazon EFS)

Linux + consistent and mountable shared NFS = Amazon Elastic File System (Amazon EFS)

13
Q

A company has an image processing workload running on Amazon Elastic Container Service (Amazon ECS) in two private subnets. Each private subnet uses a NAT instance for internet access. All images are stored in Amazon S3 buckets. The company is concerned about the data transfer costs between Amazon ECS and Amazon S3.

What should a solutions architect do to reduce costs?

  1. Configure a NAT gateway to replace the NAT instances.
  2. Configure a gateway endpoint for traffic destined to Amazon S3.
  3. Configure an interface endpoint for traffic destined to Amazon S3.
  4. Configure Amazon CloudFront for the S3 bucket storing the images.
A
  2. Configure a gateway endpoint for traffic destined to Amazon S3.

S3 traffic currently flows through the NAT instances and incurs processing charges = route it through a VPC endpoint instead

gateway endpoint = no hourly or per-GB charge for S3 traffic, unlike an interface endpoint
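A sketch of the gateway endpoint in CloudFormation shape (the VPC, route table, and Region values are hypothetical):

```python
# Gateway VPC endpoint for S3: adds a route-table entry so S3 traffic
# from the private subnets bypasses the NAT instances entirely.
endpoint = {
    "Type": "AWS::EC2::VPCEndpoint",
    "Properties": {
        "VpcEndpointType": "Gateway",
        "ServiceName": "com.amazonaws.us-east-1.s3",   # hypothetical Region
        "VpcId": "vpc-1234567890abcdef0",              # hypothetical ids
        "RouteTableIds": ["rtb-0a1b2c3d4e5f67890"],    # private subnets' tables
    },
}
```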

14
Q

The financial application at a company stores monthly reports in an Amazon S3 bucket. The vice president of finance has mandated that all access to these reports be logged and that any modifications to the log files be detected.

Which actions can a solutions architect take to meet these requirements?

  1. Use S3 server access logging on the bucket that houses the reports with the read and write data events and log file validation options enabled.
  2. Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled.
  3. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.
  4. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.
A
  3. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.

access to these reports be logged = CloudTrail = 3, 4

object-level reads and writes = data events, not management events = 3

modifications to the log files be detected = log file validation
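A sketch of the event selector passed to PutEventSelectors (the bucket name is hypothetical; log file validation itself is enabled separately on the trail):

```python
# CloudTrail event selector: object-level (data) events for the
# reports bucket, both reads and writes.
event_selector = {
    "ReadWriteType": "All",              # log read AND write data events
    "IncludeManagementEvents": False,
    "DataResources": [
        {
            "Type": "AWS::S3::Object",
            # trailing "/" scopes it to all objects in the bucket
            "Values": ["arn:aws:s3:::monthly-reports/"],  # hypothetical bucket
        }
    ],
}
```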

15
Q

A company has an on-premises volume backup solution that has reached its end of life. The company wants to use AWS as part of a new backup solution and wants to maintain local access to all the data while it is backed up on AWS. The company wants to ensure that the data backed up on AWS is automatically and securely transferred.

Which solution meets these requirements?

  1. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-premises systems to mount the Snowball S3 endpoint to provide local access to the data.
  2. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the Snowball Edge file interface to provide on-premises systems with local access to the data.
  3. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway software appliance on premises and configure a percentage of data to cache locally. Mount the gateway storage volumes to provide local access to the data.
  4. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.
A
  4. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software appliance on premises and map the gateway storage volumes to on-premises storage. Mount the gateway storage volumes to provide local access to the data.

automatically and securely transferred = ongoing sync, not a one-time Snowball migration = Storage Gateway = 3,4

local access to all the data = not cached (frequently accessed data ) = 4

stored volume gateway = entire dataset is available for low-latency access

16
Q

A company is using a third-party vendor to manage its marketplace analytics. The vendor needs limited programmatic access to resources in the company’s account. All the needed policies have been created to grant appropriate access.

Which additional component will provide the vendor with the MOST secure access to the account?

  1. Create an IAM user.
  2. Implement a service control policy (SCP)
  3. Use a cross-account role with an external ID.
  4. Configure a single sign-on (SSO) identity provider.
A
  3. Use a cross-account role with an external ID.

limited programmatic access = cross-account role (temporary credentials instead of long-term IAM user keys)

third-party vendor = external ID

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
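A sketch of the role's trust policy, following the pattern in the linked page (the vendor account ID and external ID value are placeholders):

```python
# Trust policy for the cross-account role: only the vendor's account
# can assume it, and only when it supplies the agreed external ID
# (mitigates the confused-deputy problem).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # hypothetical vendor account id
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"sts:ExternalId": "example-external-id"}
            },
        }
    ],
}
```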

17
Q

A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible.

Which solutions meet these requirements? (Choose two.)

  1. Create an Amazon RDS DB instance in Multi-AZ mode.
  2. Create an Amazon RDS DB instance and one or more replicas in another Availability Zone.
  3. Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load.
  4. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.
  5. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.
A
  1. Create an Amazon RDS DB instance in Multi-AZ mode.
  4. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.

highly available + as little manual intervention as possible = Multi-AZ automatic failover = 1 (promoting a read replica in 2 is a manual step)

container-based application with no server management = ECS + Fargate = 4

18
Q

A company has an ecommerce application that stores data in an on-premises SQL database. The company has decided to migrate this database to AWS. However, as part of the migration, the company wants to find a way to attain sub-millisecond responses to common read requests.

A solutions architect knows that the increase in speed is paramount and that a small percentage of stale data returned in the database reads is acceptable.

What should the solutions architect recommend?

  1. Build Amazon RDS read replicas.
  2. Build the database as a larger instance type.
  3. Build a database cache using Amazon ElastiCache.
  4. Build a database cache using Amazon Elasticsearch Service (Amazon ES).
A
  3. Build a database cache using Amazon ElastiCache.

stale data returned in the database reads is acceptable = caching = 3, 4

sub-millisecond responses = in-memory cache = ElastiCache (Redis); Amazon ES is a search service, not a sub-millisecond cache, so not 4

https://aws.amazon.com/redis/

19
Q

A company has an application that ingests incoming messages. These messages are then quickly consumed by dozens of other applications and microservices. The number of messages varies drastically and sometimes spikes as high as 100,000 each second. The company wants to decouple the solution and increase scalability.

Which solution meets these requirements?

  1. Persist the messages to Amazon Kinesis Data Analytics. All the applications will read and process the messages.
  2. Deploy the application on Amazon EC2 instances in an Auto Scaling group, which scales the number of EC2 instances based on CPU metrics.
  3. Write the messages to Amazon Kinesis Data Streams with a single shard. All applications will read from the stream and process the messages.
  4. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then process the messages from the queues.
A
  4. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then process the messages from the queues.

decouple the solution and increase scalability = SQS + SNS Fanout
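In the fanout pattern, each SQS queue needs an access policy allowing the SNS topic to deliver to it. A sketch of that queue policy (the account ID, queue, and topic names are hypothetical):

```python
# SQS queue policy for SNS fanout: the topic may SendMessage to this
# queue; aws:SourceArn pins delivery to exactly one topic.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            # hypothetical ARNs
            "Resource": "arn:aws:sqs:us-east-1:111122223333:app-queue",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:sns:us-east-1:111122223333:ingest-topic"
                }
            },
        }
    ],
}
```

One such queue per consuming application lets every consumer process the full message stream independently.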

20
Q

A solutions architect is designing the cloud architecture for a company that needs to host hundreds of machine learning models for its users. During startup, the models need to load up to 10 GB of data from Amazon S3 into memory, but they do not need disk access. Most of the models are used sporadically, but the users expect all of them to be highly available and accessible with low latency.

Which solution meets the requirements and is MOST cost-effective?

  1. Deploy models as AWS Lambda functions behind an Amazon API Gateway for each model.
  2. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind an Application Load Balancer for each model.
  3. Deploy models as AWS Lambda functions behind a single Amazon API Gateway with path-based routing where one path corresponds to each model.
  4. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind a single Application Load Balancer with path-based routing where one path corresponds to each model.
A
  3. Deploy models as AWS Lambda functions behind a single Amazon API Gateway with path-based routing where one path corresponds to each model.

load up to 10 GB into memory, no disk access = fits Lambda's 10 GB maximum memory

used sporadically + highly available = Lambda = 1, 3

1 = a separate API Gateway per model = hundreds of APIs to configure and manage = wrong

3 wins