saa-c02-part-09 Flashcards
A company is using a tape backup solution to store its key application data offsite. The daily data volume is around 50 TB. The company needs to retain the backups for 7 years for regulatory purposes. The backups are rarely accessed, and a week’s notice is typically given if a backup needs to be restored.
The company is now considering a cloud-based option to reduce the storage costs and operational burden of managing tapes. The company also wants to make sure that the transition from tape backups to the cloud minimizes disruptions.
Which storage solution is MOST cost-effective?
- Use AWS Storage Gateway to back up to Amazon S3 Glacier Deep Archive.
- Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier.
- Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier.
- Use AWS Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier.
- Use AWS Storage Gateway to back up to Amazon S3 Glacier Deep Archive.
MOST cost-effective + 7-year retention + rarely accessed = Glacier Deep Archive
a week’s notice = plenty of time for a Deep Archive retrieval (standard restores complete within 12 hours)
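For reference, option 4 mentions a lifecycle policy; a minimal sketch of what such a rule looks like when targeting Deep Archive with a 7-year expiration (bucket prefix and rule ID are placeholders, not from the question):

```python
# Hypothetical sketch of an S3 lifecycle configuration for long-term backups.
# Prefix and rule ID are placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tape-replacement-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            # Transition to the cheapest archive tier immediately
            "Transitions": [
                {"Days": 0, "StorageClass": "DEEP_ARCHIVE"}
            ],
            # 7-year regulatory retention (approx. 7 * 365 days)
            "Expiration": {"Days": 7 * 365},
        }
    ]
}

# With boto3 this would be applied as (not executed here):
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-backup-bucket",
#     LifecycleConfiguration=lifecycle_config,
# )
print(lifecycle_config["Rules"][0]["Expiration"]["Days"])  # 2555
```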
A company requires a durable backup storage solution for its on-premises database servers while ensuring on-premises applications maintain access to these backups for quick recovery. The company will use AWS storage services as the destination for these backups. A solutions architect is designing a solution with minimal operational overhead.
Which solution should the solutions architect implement?
- Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.
- Back up the databases to an AWS Storage Gateway volume gateway and access it using the Amazon S3 API.
- Transfer the database backup files to an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance.
- Back up the database directly to an AWS Snowball device and use lifecycle rules to move the data to Amazon S3 Glacier Deep Archive.
- Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.
on-premises access to backups = Storage Gateway needed
file-level backups with durable cloud storage and quick recovery = file gateway backed by S3
A company decides to migrate its three-tier web application from on-premises to the AWS Cloud. The new database must be capable of dynamically scaling storage capacity and performing table joins.
Which AWS service meets these requirements?
- Amazon Aurora
- Amazon RDS for SQL Server
- Amazon DynamoDB Streams
- Amazon DynamoDB on-demand
- Amazon Aurora
performing table joins = RDS
dynamically scaling storage capacity = Amazon Aurora
A company mandates that an Amazon S3 gateway endpoint must allow traffic to trusted buckets only.
Which method should a solutions architect implement to meet this requirement?
- Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s trusted VPCs.
- Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s S3 gateway endpoint IDs.
- Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that blocks access from any VPC other than the company’s trusted VPCs.
- Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that provides access to the Amazon Resource Name (ARN) of the trusted S3 buckets.
- Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that provides access to the Amazon Resource Name (ARN) of the trusted S3 buckets.
Endpoint policies deny everything by default, so you explicitly allow the trusted bucket ARNs
2 = wrong because a policy on each trusted bucket = many policies to maintain, and the requirement is to restrict the endpoint itself
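A minimal sketch of the endpoint policy shape (bucket names are placeholders; note that both the bucket ARN and its object ARNs must be listed):

```python
import json

# Hypothetical S3 gateway endpoint policy allowing only trusted buckets.
trusted_buckets = ["trusted-bucket-1", "trusted-bucket-2"]

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            # Bucket ARN covers bucket-level actions; bucket/* covers objects
            "Resource": [
                arn
                for name in trusted_buckets
                for arn in (
                    f"arn:aws:s3:::{name}",
                    f"arn:aws:s3:::{name}/*",
                )
            ],
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```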
A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for cross-communication. A recent increase in account creations and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs. A solutions architect has been tasked with creating a centrally managed networking setup for multiple accounts, VPCs, and VPNs.
Which networking solution meets these requirements?
- Configure shared VPCs and VPNs and share to each other.
- Configure a hub-and-spoke VPC and route all traffic through VPC peering.
- Configure an AWS Direct Connect connection between all VPCs and VPNs.
- Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.
- Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.
grow to hundreds of VPCs + VPNs + central management = Transit Gateway (hub-and-spoke, removes peering-mesh complexity)
A solutions architect is helping a developer design a new ecommerce shopping cart application using AWS services. The developer is unsure of the current database schema and expects to make changes as the ecommerce site grows. The solution needs to be highly resilient and capable of automatically scaling read and write capacity.
Which database solution meets these requirements?
- Amazon Aurora PostgreSQL
- Amazon DynamoDB with on-demand enabled
- Amazon DynamoDB with DynamoDB Streams enabled
- Amazon SQS and Amazon Aurora PostgreSQL
- Amazon DynamoDB with on-demand enabled
unsure of the current database schema = schemaless = DynamoDB
automatically scaling read and write capacity = on-demand enabled
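A minimal sketch of what on-demand mode means at table-creation time (table and key names are placeholders): the billing mode is `PAY_PER_REQUEST`, so no read/write capacity units are provisioned.

```python
# Hypothetical parameters for a DynamoDB table in on-demand mode.
# Table and attribute names are placeholders.
create_table_params = {
    "TableName": "ShoppingCart",
    "AttributeDefinitions": [
        {"AttributeName": "cart_id", "AttributeType": "S"}
    ],
    "KeySchema": [
        {"AttributeName": "cart_id", "KeyType": "HASH"}
    ],
    # On-demand capacity: read/write scale automatically with traffic
    "BillingMode": "PAY_PER_REQUEST",
}

# With boto3 this would be (not executed here):
# dynamodb.create_table(**create_table_params)
print(create_table_params["BillingMode"])  # PAY_PER_REQUEST
```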
A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file share hosted in the user’s on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?
- Migrate the file share to Amazon RDS.
- Migrate the file share to AWS Storage Gateway.
- Migrate the file share to Amazon FSx for Windows File Server.
- Migrate the file share to Amazon Elastic File System (Amazon EFS).
- Migrate the file share to Amazon FSx for Windows File Server.
Windows file share (SMB) = FSx for Windows File Server
A company needs to implement a relational database with a multi-Region disaster recovery Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute.
Which AWS solution can achieve this?
- Amazon Aurora Global Database
- Amazon DynamoDB global tables
- Amazon RDS for MySQL with Multi-AZ enabled
- Amazon RDS for MySQL with a cross-Region snapshot copy
- Amazon Aurora Global Database
multi-Region + 1-second RPO + relational = Aurora Global Database
A company runs a web service on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones. The company needs a minimum of four instances at all times to meet the required service level agreement (SLA) while keeping costs low.
If an Availability Zone fails, how can the company remain compliant with the SLA?
- Add a target tracking scaling policy with a short cooldown period.
- Change the Auto Scaling group launch configuration to use a larger instance type.
- Change the Auto Scaling group to use six servers across three Availability Zones.
- Change the Auto Scaling group to use eight servers across two Availability Zones.
- Change the Auto Scaling group to use eight servers across two Availability Zones.
minimum of four instances at all times = four must survive an AZ failure = 4 per AZ across two AZs = eight
the setup keeps its two Availability Zones; if one fails, the remaining AZ still holds four instances
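The arithmetic behind the answer, sketched as a tiny helper (assuming an even spread of instances across AZs):

```python
# Capacity left after a single AZ failure, assuming instances are
# spread evenly across the Availability Zones.
def surviving_capacity(total_servers: int, az_count: int) -> int:
    """Servers remaining after one AZ (and its share of servers) is lost."""
    per_az = total_servers // az_count
    return total_servers - per_az

# Current minimum of 4 across 2 AZs: an AZ failure drops below the SLA.
print(surviving_capacity(4, 2))  # 2 -> violates the 4-instance SLA

# Eight across 2 AZs: an AZ failure still leaves four instances.
print(surviving_capacity(8, 2))  # 4 -> meets the SLA
```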
A company is reviewing its AWS Cloud deployment to ensure its data is not accessed by anyone without appropriate authorization. A solutions architect is tasked with identifying all open Amazon S3 buckets and recording any S3 bucket configuration changes.
What should the solutions architect do to accomplish this?
- Enable AWS Config service with the appropriate rules
- Enable AWS Trusted Advisor with the appropriate checks.
- Write a script using an AWS SDK to generate a bucket report
- Enable Amazon S3 server access logging and configure Amazon CloudWatch Events.
- Enable AWS Config service with the appropriate rules.
configuration changes = Config service
https://aws.amazon.com/config/
https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-monitor-for-and-respond-to-amazon-s3-buckets-allowing-public-access/
A company is planning to build a new web application on AWS. The company expects predictable traffic most of the year and very high traffic on occasion. The web application needs to be highly available and fault tolerant with minimal latency.
What should a solutions architect recommend to meet these requirements?
- Use an Amazon Route 53 routing policy to distribute requests to two AWS Regions, each with one Amazon EC2 instance.
- Use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones.
- Use Amazon EC2 instances in a cluster placement group with an Application Load Balancer across multiple Availability Zones.
- Use Amazon EC2 instances in a cluster placement group and include the cluster placement group within a new Auto Scaling group.
- Use Amazon EC2 instances in an Auto Scaling group with an Application Load Balancer across multiple Availability Zones.
highly available = multi AZ = 2,3
a cluster placement group can’t span multiple AZs, so the answer is 2
A company is designing a web application using AWS that processes insurance quotes. Users will request quotes from the application. Quotes must be separated by quote type, must be responded to within 24 hours, and must not be lost. The solution should be simple to set up and maintain.
Which solution meets these requirements?
- Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data stream. Configure each backend group of application servers to poll messages from its own data stream using the Kinesis Client Library (KCL).
- Create multiple Amazon Simple Notification Service (Amazon SNS) topics and register Amazon SQS queues to their own SNS topic based on the quote type. Configure the web application to publish messages to the proper SNS topic. Configure each backend application server to work its own SQS queue.
- Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work its own SQS queue.
- Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasticsearch Service (Amazon ES) cluster. Configure the web application to send messages to the proper delivery stream. Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly.
- Create a single Amazon Simple Notification Service (Amazon SNS) topic and subscribe the Amazon SQS queues to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to work its own SQS queue.
Queues subscribe to topics, so only option 3 makes sense
must be separated by quote type = SNS fan-out with message filtering
ignore Kinesis; nothing is being streamed or analyzed here
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
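A minimal sketch of the filtering piece (attribute name and quote-type values are placeholders): each SQS subscription gets a filter policy, and publishers set a matching message attribute so SNS routes each quote to the right queue only.

```python
import json

# Hypothetical filter policy for the subscription feeding the
# auto-quotes SQS queue. Attribute name/values are placeholders.
auto_quotes_filter = {"quote_type": ["auto"]}

# With boto3 this would be attached to the subscription (not executed here):
# sns.set_subscription_attributes(
#     SubscriptionArn=subscription_arn,
#     AttributeName="FilterPolicy",
#     AttributeValue=json.dumps(auto_quotes_filter),
# )

# Publishing with a matching message attribute routes the message
# only to subscriptions whose filter policy matches.
message_attributes = {
    "quote_type": {"DataType": "String", "StringValue": "auto"}
}
print(json.dumps(auto_quotes_filter))
```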
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?
- S3 Standard
- S3 Intelligent-Tiering
- S3 Standard-Infrequent Access (S3 Standard-IA)
- S3 One Zone-Infrequent Access (S3 One Zone-IA)
- S3 Intelligent-Tiering
unpredictable pattern + minimize the costs = Intelligent-Tiering
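For reference, a minimal sketch of storing an object directly in Intelligent-Tiering so S3 moves it between access tiers automatically (bucket and key names are placeholders):

```python
# Hypothetical upload parameters placing a media file in
# S3 Intelligent-Tiering. Bucket and key are placeholders.
put_object_params = {
    "Bucket": "example-media-bucket",
    "Key": "videos/clip-001.mp4",
    "StorageClass": "INTELLIGENT_TIERING",
}

# With boto3 this would be (not executed here):
# s3.put_object(Body=data, **put_object_params)
print(put_object_params["StorageClass"])  # INTELLIGENT_TIERING
```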
A company is running a three-tier web application to process credit card payments. The front-end user interface consists of static webpages. The application tier can have long-running processes. The database tier uses MySQL.
The application is currently running on a single, general purpose large Amazon EC2 instance. A solutions architect needs to decouple the services to make the web application highly available.
Which solution would provide the HIGHEST availability?
- Move static assets to Amazon CloudFront. Leave the application in EC2 in an Auto Scaling group. Move the database to Amazon RDS to deploy Multi-AZ.
- Move static assets and the application into a medium EC2 instance. Leave the database on the large instance. Place both instances in an Auto Scaling group.
- Move static assets to Amazon S3. Move the application to AWS Lambda with the concurrency limit set. Move the database to Amazon DynamoDB with on-demand enabled.
- Move static assets to Amazon S3. Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto Scaling enabled. Move the database to Amazon RDS to deploy Multi-AZ.
- Move static assets to Amazon S3. Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto Scaling enabled. Move the database to Amazon RDS to deploy Multi-AZ.
static webpages = S3 = 3,4
highly available = multi AZ = 4
A media company stores video content in an Amazon Elastic Block Store (Amazon EBS) volume. A certain video file has become popular and a large number of users across the world are accessing this content. This has resulted in a cost increase.
Which action will DECREASE cost without compromising user accessibility?
- Change the EBS volume to Provisioned IOPS (PIOPS).
- Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.
- Split the video into multiple, smaller segments so users are routed to the requested video segments only.
- Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket.
- Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.
users across the world = CloudFront