saa-c02-part-09 Flashcards

1
Q

A company is using a tape backup solution to store its key application data offsite. The daily data volume is around 50 TB. The company needs to retain the backups for 7 years for regulatory purposes. The backups are rarely accessed, and a week’s notice is typically given if a backup needs to be restored.
The company is now considering a cloud-based option to reduce the storage costs and operational burden of managing tapes. The company also wants to make sure that the transition from tape backups to the cloud minimizes disruptions.

Which storage solution is MOST cost-effective?

  1. Use Amazon Storage Gateway to back up to Amazon Glacier Deep Archive.
  2. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier.
  3. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier.
  4. Use Amazon Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier.
A
  1. Use Amazon Storage Gateway to back up to Amazon Glacier Deep Archive.

MOST cost-effective + rarely accessed + 7-year retention = S3 Glacier Deep Archive

a week’s notice = plenty of time for a Deep Archive retrieval (standard restores complete within 12 hours, bulk within 48 hours); Tape Gateway keeps the existing backup workflow, so the transition causes minimal disruption
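As a sketch of what a later restore would look like (bucket and key names here are hypothetical), these are the request parameters for pulling an object back out of Deep Archive; with a week's notice, the cheapest Bulk tier is sufficient:

```python
# Restore-request parameters for an object stored in S3 Glacier Deep Archive.
# Bucket and key names are hypothetical placeholders.
restore_request = {
    "Bucket": "example-tape-backups",       # hypothetical bucket
    "Key": "backups/2020-week-14.tar",      # hypothetical object key
    "RestoreRequest": {
        "Days": 7,                          # keep the restored copy available for 7 days
        # Bulk is the cheapest tier; Deep Archive bulk restores complete within 48 hours,
        # well inside the week's notice the company is given.
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
}

# In a real script you would pass these to boto3:
#   import boto3
#   boto3.client("s3").restore_object(**restore_request)
```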

2
Q

A company requires a durable backup storage solution for its on-premises database servers while ensuring on-premises applications maintain access to these backups for quick recovery. The company will use AWS storage services as the destination for these backups. A solutions architect is designing a solution with minimal operational overhead.
Which solution should the solutions architect implement?

  1. Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.
  2. Back up the databases to an AWS Storage Gateway volume gateway and access it using the Amazon S3 API.
  3. Transfer the database backup files to an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance.
  4. Back up the database directly to an AWS Snowball device and use lifecycle rules to move the data to Amazon S3 Glacier Deep Archive.
A
  1. Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.

on-premises applications need access to the backups = Storage Gateway; a file gateway exposes an NFS/SMB share backed by S3

durable backup storage with minimal operational overhead = S3

3
Q

A company decides to migrate its three-tier web application from on-premises to the AWS Cloud. The new database must be capable of dynamically scaling storage capacity and performing table joins.
Which AWS service meets these requirements?

  1. Amazon Aurora
  2. Amazon RDS for SQL Server
  3. Amazon DynamoDB Streams
  4. Amazon DynamoDB on-demand
A
  1. Amazon Aurora

performing table joins = relational = Aurora or RDS

dynamically scaling storage capacity = Amazon Aurora (storage grows automatically in 10 GB increments)

4
Q

A company mandates that an Amazon S3 gateway endpoint must allow traffic to trusted buckets only.

Which method should a solutions architect implement to meet this requirement?

  1. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s trusted VPCs.
  2. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s S3 gateway endpoint IDs.
  3. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that blocks access from any VPC other than the company’s trusted VPCs.
  4. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that provides access to the Amazon Resource Name (ARN) of the trusted S3 buckets.
A
  4. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that provides access to the Amazon Resource Name (ARN) of the trusted S3 buckets.

Endpoint policies are implicit-deny: anything the policy does not allow is blocked, so allowing only the trusted bucket ARNs meets the requirement.

1 and 2 are wrong because they restrict the buckets rather than the endpoint, and a policy on each trusted bucket = many policies to maintain.
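A minimal sketch of such an endpoint policy (the bucket names are hypothetical); any bucket not listed in `Resource` is implicitly denied through this endpoint:

```python
import json

# Gateway-endpoint policy allowing access only to trusted bucket ARNs.
# Bucket names are hypothetical placeholders.
trusted_buckets = [
    "arn:aws:s3:::trusted-bucket-a",
    "arn:aws:s3:::trusted-bucket-b",
]

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            # Cover both the bucket ARNs and the objects inside them.
            "Resource": trusted_buckets + [f"{arn}/*" for arn in trusted_buckets],
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```

This JSON would be attached to the gateway endpoint itself (for example via the VPC console or `modify-vpc-endpoint`), not to the buckets.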

5
Q

A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for cross-communication. A recent increase in account creations and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs. A solutions architect has been tasked with creating a centrally managed networking setup for multiple accounts, VPCs, and VPNs.

Which networking solution meets these requirements?

  1. Configure shared VPCs and VPNs and share to each other.
  2. Configure a hub-and-spoke VPC and route all traffic through VPC peering.
  3. Configure an AWS Direct Connect connection between all VPCs and VPNs.
  4. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.
A
  4. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.

grow to hundreds of VPCs + centrally managed + VPNs = AWS Transit Gateway; a hub-and-spoke transit gateway replaces a full mesh of peering connections
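The scaling problem is easy to see with a little arithmetic: full-mesh peering needs one connection per VPC pair, while a transit gateway needs just one attachment per VPC (plus one per VPN).

```python
# Full-mesh VPC peering requires a connection for every pair of VPCs,
# so the count grows quadratically with the number of VPCs.
def peering_connections(vpcs: int) -> int:
    return vpcs * (vpcs - 1) // 2

# A transit gateway needs only one attachment per VPC plus one per VPN.
def tgw_attachments(vpcs: int, vpns: int = 0) -> int:
    return vpcs + vpns

# At "hundreds of VPCs" the difference is stark:
print(peering_connections(200))   # 19900 peering connections to manage
print(tgw_attachments(200, 10))   # 210 transit gateway attachments
```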

6
Q

A solutions architect is helping a developer design a new ecommerce shopping cart application using AWS services. The developer is unsure of the current database schema and expects to make changes as the ecommerce site grows. The solution needs to be highly resilient and capable of automatically scaling read and write capacity.

Which database solution meets these requirements?

  1. Amazon Aurora PostgreSQL
  2. Amazon DynamoDB with on-demand enabled
  3. Amazon DynamoDB with DynamoDB Streams enabled
  4. Amazon SQS and Amazon Aurora PostgreSQL
A
  2. Amazon DynamoDB with on-demand enabled

unsure of the current database schema = DynamoDB

automatically scaling read and write = on-demand enabled
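As a sketch (table and attribute names are hypothetical), a schemaless on-demand table only declares its key attributes up front; everything else can change as the site grows:

```python
# create_table parameters for a schemaless shopping-cart table in on-demand mode.
# Table and attribute names are hypothetical placeholders.
table_spec = {
    "TableName": "ShoppingCart",
    "KeySchema": [
        {"AttributeName": "cart_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "item_id", "KeyType": "RANGE"},  # sort key
    ],
    # Only the key attributes are declared; all other item attributes are schemaless.
    "AttributeDefinitions": [
        {"AttributeName": "cart_id", "AttributeType": "S"},
        {"AttributeName": "item_id", "AttributeType": "S"},
    ],
    # PAY_PER_REQUEST = on-demand mode: read/write capacity scales automatically.
    "BillingMode": "PAY_PER_REQUEST",
}

# In a real script: boto3.client("dynamodb").create_table(**table_spec)
```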

7
Q

A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted on the user’s on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances.

Which replacement to the on-premises file share is MOST resilient and durable?

  1. Migrate the file share to Amazon RDS.
  2. Migrate the file share to AWS Storage Gateway.
  3. Migrate the file share to Amazon FSx for Windows File Server.
  4. Migrate the file share to Amazon Elastic File System (Amazon EFS).
A
  3. Migrate the file share to Amazon FSx for Windows File Server.

Windows (SMB) file share = Amazon FSx for Windows File Server; a Multi-AZ deployment provides the resilience and durability the question asks for

8
Q

A company needs to implement a relational database with a multi-Region disaster recovery Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute.

Which AWS solution can achieve this?

  1. Amazon Aurora Global Database
  2. Amazon DynamoDB global tables
  3. Amazon RDS for MySQL with Multi-AZ enabled
  4. Amazon RDS for MySQL with a cross-Region snapshot copy
A
  1. Amazon Aurora Global Database

multi-Region + RPO of 1 second + relational = Aurora Global Database (cross-Region replication typically lags under 1 second, and a secondary Region can be promoted in under a minute)

9
Q

A company runs a web service on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones. The company needs a minimum of four instances at all times to meet the required service level agreement (SLA) while keeping costs low.

If an Availability Zone fails, how can the company remain compliant with the SLA?

  1. Add a target tracking scaling policy with a short cooldown period.
  2. Change the Auto Scaling group launch configuration to use a larger instance type.
  3. Change the Auto Scaling group to use six servers across three Availability Zones.
  4. Change the Auto Scaling group to use eight servers across two Availability Zones.
A
  3. Change the Auto Scaling group to use six servers across three Availability Zones.

minimum of four instances at all times = capacity must survive the loss of one AZ

two instances in each of three AZs leaves four running if any single AZ fails; eight across two AZs also survives a failure, but it costs more, and the question asks to keep costs low

10
Q

A company is reviewing its AWS Cloud deployment to ensure its data is not accessed by anyone without appropriate authorization. A solutions architect is tasked with identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes.

What should the solutions architect do to accomplish this?

  1. Enable AWS Config service with the appropriate rules
  2. Enable AWS Trusted Advisor with the appropriate checks.
  3. Write a script using an AWS SDK to generate a bucket report
  4. Enable Amazon S3 server access logging and configure Amazon CloudWatch Events.
A
  1. Enable AWS Config service with the appropriate rules.

configuration changes = Config service

https://aws.amazon.com/config/

https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-monitor-for-and-respond-to-amazon-s3-buckets-allowing-public-access/

11
Q

A company is planning to build a new web application on AWS. The company expects predictable traffic most of the year and very high traffic on occasion. The web application needs to be highly available and fault tolerant with minimal latency.

What should a solutions architect recommend to meet these requirements?

  1. Use an Amazon Route 53 routing policy to distribute requests to two AWS Regions, each with one Amazon EC2 instance.
  2. Use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones.
  3. Use Amazon EC2 instances in a cluster placement group with an Application Load Balancer across multiple Availability Zones.
  4. Use Amazon EC2 instances in a cluster placement group and include the cluster placement group within a new Auto Scaling group.
A
  2. Use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones.

highly available + fault tolerant = multiple AZs = 2 or 3

a cluster placement group is confined to a single AZ, so 3 is out; the answer is 2

12
Q

A company is designing a web application using AWS that processes insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not be lost. The solution should be simple to set up and maintain.

Which solution meets these requirements?

  1. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to poll messages from its own data stream using the Kinesis Client Library (KCL).
  2. Create multiple Amazon Simple Notification Service (Amazon SNS) topics and register Amazon SQS queues to their own SNS topic based on the quote type. Configure the web application to publish messages to the SNS topic queue. Configure each backend application server to work its own SQS queue.
  3. Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work its own SQS queue.
  4. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. Configure the web application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly.
A
  3. Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work its own SQS queue.

Queues subscribe to topics, not the other way around, so only 3 is worded correctly.

must be separated by quote type = SNS fan-out with message filtering; the SQS queues make sure messages are not lost

Ignore Kinesis; nothing is being streamed or analyzed here.

https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
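To make the fan-out concrete, here is a sketch of per-queue subscription filter policies (the queue names and `quote_type` attribute are hypothetical), with a tiny simulation of SNS's exact-match filtering:

```python
# One filter policy per SQS subscription; queue names and the quote_type
# message attribute are hypothetical placeholders.
filter_policies = {
    "auto-quotes-queue": {"quote_type": ["auto"]},
    "home-quotes-queue": {"quote_type": ["home"]},
}

def matches(policy: dict, attributes: dict) -> bool:
    # SNS delivers a message to a subscription only if every key in the
    # filter policy matches one of the allowed values.
    return all(attributes.get(key) in allowed for key, allowed in policy.items())

# A quote request published with a quote_type message attribute:
message_attrs = {"quote_type": "home"}
delivered_to = [q for q, p in filter_policies.items() if matches(p, message_attrs)]
print(delivered_to)  # ['home-quotes-queue']
```

Each backend group then works only its own queue; adding a new quote type is just a new queue and a new filter policy, keeping maintenance simple.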

13
Q

A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.

Which storage option meets these requirements?

  1. S3 Standard
  2. S3 Intelligent-Tiering
  3. S3 Standard-Infrequent Access (S3 Standard-IA)
  4. S3 One Zone-Infrequent Access (S3 One Zone-IA)
A
  2. S3 Intelligent-Tiering

unpredictable pattern + minimize the costs = Intelligent-Tiering

14
Q

A company is running a three-tier web application to process credit card payments. The front-end user interface consists of static webpages. The application tier can have long-running processes. The database tier uses MySQL.

The application is currently running on a single, general purpose large Amazon EC2 instance. A solutions architect needs to decouple the services to make the web application highly available.

Which solution would provide the HIGHEST availability?

  1. Move static assets to Amazon CloudFront. Leave the application in EC2 in an Auto Scaling group. Move the database to Amazon RDS to deploy Multi-AZ.
  2. Move static assets and the application into a medium EC2 instance. Leave the database on the large instance. Place both instances in an Auto Scaling group.
  3. Move static assets to Amazon S3. Move the application to AWS Lambda with the concurrency limit set. Move the database to Amazon DynamoDB with on-demand enabled.
  4. Move static assets to Amazon S3. Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto Scaling enabled. Move the database to Amazon RDS to deploy Multi-AZ.
A
  4. Move static assets to Amazon S3. Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto Scaling enabled. Move the database to Amazon RDS to deploy Multi-AZ.

static webpages = S3 = 3,4

highly available = multi AZ = 4

15
Q

A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume. A certain video file has become popular and a large number of users across the world are accessing this content. This has resulted in a cost increase.

Which action will DECREASE cost without compromising user accessibility?

  1. Change the EBS volume to Provisioned IOPS (PIOPS).
  2. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.
  3. Split the video into multiple, smaller segments so users are routed to the requested video segments only.
  4. Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket.
A
  2. Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.

users across the world = CloudFront

16
Q

A solutions architect is designing the cloud architecture for a new application being deployed to AWS. The application allows users to interactively download and upload files. Files older than 2 years will be accessed less frequently. The solutions architect needs to ensure that the application can scale to any number of files while maintaining high availability and durability.

Which scalable solutions should the solutions architect recommend? (Choose two.)

  1. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Glacier.
  2. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard-Infrequent Access (S3 Standard-IA)
  3. Store the files on Amazon Elastic File System (Amazon EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA).
  4. Store the files in Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years.
  5. Store the files in RAID-striped Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data older than 2 years.
A
  1. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Glacier.
  2. Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard-Infrequent Access (S3 Standard-IA)

Files older than 2 years will be accessed less frequently = S3 = 1,2
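Either correct answer boils down to a single lifecycle rule; a sketch of the transition to Glacier (the prefix and rule ID are hypothetical, and the target `StorageClass` would be `STANDARD_IA` for option 2):

```python
# Lifecycle rule that moves objects to S3 Glacier once they are 2 years old
# (730 days). The prefix and rule ID are hypothetical placeholders.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-after-2-years",
            "Status": "Enabled",
            "Filter": {"Prefix": "files/"},
            "Transitions": [
                {"Days": 730, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# In a real script:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-bucket",
#       LifecycleConfiguration=lifecycle_configuration)
```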

17
Q

A company has recently updated its internal security standards. The company must now ensure all Amazon S3 buckets and Amazon Elastic Block Store (Amazon EBS) volumes are encrypted with keys created and periodically rotated by internal security specialists. The company is looking for a native, software-based AWS service to accomplish this goal.

What should a solutions architect recommend as a solution?

  1. Use AWS Secrets Manager with customer master keys (CMKs) to store master key material and apply a routine to create a new CMK periodically and replace it in AWS Secrets Manager.
  2. Use AWS Key Management Service (AWS KMS) with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in AWS KMS.
  3. Use an AWS CloudHSM cluster with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in the CloudHSM cluster nodes.
  4. Use AWS Systems Manager Parameter Store with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in the Parameter Store.
A
  2. Use AWS Key Management Service (AWS KMS) with customer master keys (CMKs) to store master key material and apply a routine to re-create a new key periodically and replace it in AWS KMS.

keys created and periodically rotated = KMS = 2

native, software-based AWS service = KMS

18
Q

A company’s dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it wants to optimize site loading times for new European users. The site’s backend must remain in the United States. The product is being launched in a few days, and an immediate solution is needed.

What should the solutions architect recommend?

  1. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it.
  2. Move the website to Amazon S3. Use cross-Region replication between Regions.
  3. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.
  4. Use an Amazon Route 53 geoproximity routing policy pointing to the on-premises servers.
A
  3. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.

backend stays in the United States + new European users + immediate solution = edge caching = CloudFront with a custom origin

19
Q

A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images.

Which method is the MOST cost-effective for hosting the website?

  1. Containerize the website and host it in AWS Fargate.
  2. Create an Amazon S3 bucket and host the website there.
  3. Deploy a web server on an Amazon EC2 instance to host the website.
  4. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework
A
  2. Create an Amazon S3 bucket and host the website there.

website contents consist of HTML, CSS, client-side JavaScript, and images = static = S3
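Turning a bucket into a static site is a one-call configuration; a sketch (the bucket name is hypothetical):

```python
# Website configuration for static hosting on S3.
# The bucket name below is a hypothetical placeholder.
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

# In a real script:
#   s3 = boto3.client("s3")
#   s3.put_bucket_website(Bucket="dev-team-site",
#                         WebsiteConfiguration=website_configuration)
# The HTML/CSS/JS/images are then served from the bucket's website endpoint
# with no servers, containers, or load balancers to pay for.
```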