saa-c02-part-12 Flashcards
A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without requiring any changes to the application code.
Which solution meets these requirements?
- Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.
- Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the snapshot.
- Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute requests across the databases.
- Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53 weighted record sets to distribute requests across instances.
- Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.
Single-AZ + no application code changes = modify the existing instance to Multi-AZ
https://aws.amazon.com/rds/features/multi-az/ To convert an existing Single-AZ DB instance to a Multi-AZ deployment, use the “Modify” option in the RDS console (or the equivalent API call).
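A minimal boto3 sketch of the conversion, assuming a hypothetical instance identifier (`orders-db`); `ApplyImmediately` controls whether the change happens now or during the next maintenance window:

```python
import boto3

rds = boto3.client("rds")

# Convert the existing Single-AZ instance to Multi-AZ in place;
# the endpoint stays the same, so no application change is needed.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",  # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```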
A company has a 10 Gbps AWS Direct Connect connection from its on-premises servers to AWS. The workloads using the connection are critical. The company requires a disaster recovery strategy with maximum resiliency that maintains the current connection bandwidth at a minimum.
What should a solutions architect recommend?
- Set up a new Direct Connect connection in another AWS Region.
- Set up a new AWS managed VPN connection in another AWS Region.
- Set up two new Direct Connect connections: one in the current AWS Region and one in another Region.
- Set up two new AWS managed VPN connections: one in the current AWS Region and one in another Region.
- Set up a new Direct Connect connection in another AWS Region.
disaster recovery = DX in another AWS Region
Option 3 is wrong: there is no reason to set up a second DX connection in the same Region for disaster recovery.
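If it helps to see the API shape, a hedged boto3 sketch of ordering the DR connection in another Region (the Region, location code, and name are placeholders; real location codes come from `describe_locations()`):

```python
import boto3

# Request the new connection against the DR Region's Direct Connect endpoint.
dx = boto3.client("directconnect", region_name="us-west-2")  # assumed DR Region

dx.create_connection(
    location="EqSE2",        # hypothetical DX location code in the DR Region
    bandwidth="10Gbps",      # match the existing connection's bandwidth
    connectionName="dr-dx-10gbps",
)
```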
A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the public subnets. The private subnets require access to the internet to allow Amazon EC2 instances to download software updates.
What should the solutions architect do to enable internet access for the private subnets?
- Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ.
- Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT instance in its AZ.
- Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic to the private internet gateway.
- Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC traffic to the egress-only internet gateway.
- Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ.
private subnets need outbound internet access = NAT = options 1 and 2
NAT belongs in a public subnet (option 2 places NAT instances in the private subnets, where they have no path to the internet gateway) = option 1
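A sketch of the per-AZ wiring with boto3 (all subnet and route-table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# One NAT gateway per AZ, each in that AZ's public subnet, each pointed at
# by the private route table for the same AZ.
for public_subnet, private_rtb in [
    ("subnet-pub-a", "rtb-priv-a"),
    ("subnet-pub-b", "rtb-priv-b"),
    ("subnet-pub-c", "rtb-priv-c"),
]:
    eip = ec2.allocate_address(Domain="vpc")
    natgw = ec2.create_nat_gateway(
        SubnetId=public_subnet,
        AllocationId=eip["AllocationId"],
    )
    nat_id = natgw["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
    # Default route for non-VPC traffic goes to the NAT gateway in the same AZ.
    ec2.create_route(
        RouteTableId=private_rtb,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```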
As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.
Which solution meets these requirements?
- Run a query with Amazon Athena to generate the report.
- Create a report in Cost Explorer and download the report.
- Access the bill details from the billing dashboard and download the bill.
- Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
- Create a report in Cost Explorer and download the report.
billing report = Cost Explorer
You can filter the cost data associated with each member account in an organization using Cost Explorer https://aws.amazon.com/premiumsupport/knowledge-center/consolidated-linked-billing-report/
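The same data is available through the Cost Explorer API; a hedged sketch grouped by member account (dates are placeholders; swap the `GroupBy` for a cost-allocation `TAG` such as `Owner` to approximate per-user costs):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```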
A company with facilities in North America, Europe, and Asia is designing a new distributed application to optimize its global supply chain and manufacturing process. The orders booked on one continent should be visible to all Regions in a second or less. The database should be able to support failover with a short Recovery Time Objective (RTO). The uptime of the application is important to ensure that manufacturing is not impacted.
What should a solutions architect recommend?
- Use Amazon DynamoDB global tables.
- Use Amazon Aurora Global Database.
- Use Amazon RDS for MySQL with a cross-Region read replica.
- Use Amazon RDS for PostgreSQL with a cross-Region read replica.
- Use Amazon Aurora Global Database.
visible in all Regions in a second or less = Aurora Global Database (cross-Region replication with typical lag under one second, plus fast secondary-cluster promotion for a short RTO)
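A sketch of the setup with boto3, using placeholder identifiers: promote an existing Aurora cluster to a global database, then add a secondary cluster in another Region:

```python
import boto3

rds_us = boto3.client("rds", region_name="us-east-1")
rds_us.create_global_cluster(
    GlobalClusterIdentifier="supply-chain-global",
    # Placeholder ARN of the existing primary Aurora cluster.
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:supply-chain",
)

# Secondary, read-only cluster in another Region; it can be promoted
# during a failover for a short RTO.
rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_cluster(
    DBClusterIdentifier="supply-chain-eu",
    Engine="aurora-mysql",  # must match the global cluster's engine
    GlobalClusterIdentifier="supply-chain-global",
)
```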
A company’s near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.
Which combination of steps should the solutions architect take? (Choose two.)
- Use Amazon Kinesis Data Firehose to ingest the data.
- Use AWS Lambda with AWS Step Functions to process the data.
- Use AWS Database Migration Service (AWS DMS) to ingest the data.
- Use Amazon EC2 instances in an Auto Scaling group to process the data.
- Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
- Use Amazon Kinesis Data Firehose to ingest the data.
- Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
serverless = Fargate (Lambda is ruled out: the job runs for 30 minutes, beyond Lambda's 15-minute limit)
near-real-time streaming ingestion = Kinesis Data Firehose
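A hedged sketch of both halves with boto3 (stream, cluster, task-definition, and subnet names are placeholders):

```python
import boto3

# Ingest: producers write into a Firehose delivery stream.
firehose = boto3.client("firehose")
firehose.put_record(
    DeliveryStreamName="ingest-stream",  # placeholder
    Record={"Data": b'{"event": "reading", "value": 42}\n'},
)

# Process: run the 30-minute job as a Fargate task, which has no
# Lambda-style execution time limit.
ecs = boto3.client("ecs")
ecs.run_task(
    cluster="processing-cluster",    # placeholder
    launchType="FARGATE",
    taskDefinition="stream-job:1",   # placeholder
    networkConfiguration={
        "awsvpcConfiguration": {"subnets": ["subnet-abc123"]}  # placeholder
    },
)
```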
An application running on an Amazon EC2 instance needs to access an Amazon DynamoDB table. Both the EC2 instance and the DynamoDB table are in the same AWS account. A solutions architect must configure the necessary permissions.
Which solution will allow least privilege access to the DynamoDB table from the EC2 instance?
- Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance.
- Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role.
- Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly.
- Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.
- Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance.
instance needs permissions = IAM role = options 1 and 2
instance profile = the container that attaches a role to an EC2 instance
Option 2 is wrong: an EC2 instance cannot be added as a principal in a trust policy; the trust relationship names the EC2 service, and the role reaches the instance through an instance profile.
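A sketch of the full chain with boto3 (role, policy, table ARN, and profile names are placeholders; the action list is a least-privilege assumption):

```python
import boto3
import json

iam = boto3.client("iam")

# Trust policy: the EC2 *service* assumes the role, never the instance itself.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-ddb-role", AssumeRolePolicyDocument=json.dumps(trust))

# Least-privilege policy scoped to the single table (placeholder ARN/actions).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/app-table",
    }],
}
iam.put_role_policy(
    RoleName="app-ddb-role",
    PolicyName="app-table-access",
    PolicyDocument=json.dumps(policy),
)

# Instance profile: the container that carries the role onto the instance.
iam.create_instance_profile(InstanceProfileName="app-ddb-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="app-ddb-profile", RoleName="app-ddb-role"
)
```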
A solutions architect is designing a solution that involves orchestrating a series of Amazon Elastic Container Service (Amazon ECS) task types running on Amazon EC2 instances that are part of an ECS cluster. The output and state data for all tasks needs to be stored. The amount of data output by each task is approximately 10 MB, and there could be hundreds of tasks running at a time. The system should be optimized for high-frequency reading and writing. As old outputs are archived and deleted, the storage size is not expected to exceed 1 TB.
Which storage solution should the solutions architect recommend?
- An Amazon DynamoDB table accessible by all ECS cluster instances.
- An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.
- An Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode.
- An Amazon Elastic Block Store (Amazon EBS) volume mounted to the ECS cluster instances.
- An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.
shared output and state for hundreds of concurrent tasks = EFS (DynamoDB is ruled out: ~10 MB outputs far exceed its 400 KB item limit)
high-frequency reading and writing = Provisioned Throughput mode (Bursting ties throughput to stored size, so a file system under 1 TB could not sustain constant high I/O)
https://docs.aws.amazon.com/efs/latest/ug/performance.html
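A sketch of creating the file system with boto3 (the throughput figure is a placeholder to size against the measured workload):

```python
import boto3

efs = boto3.client("efs")

# Provisioned mode decouples throughput from the amount of data stored,
# which matters here because the file system stays under 1 TB.
efs.create_file_system(
    CreationToken="ecs-task-output",
    PerformanceMode="generalPurpose",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=128.0,  # placeholder; size to the workload
)
```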
An online photo application lets users upload photos and perform image editing operations. The application offers two classes of service: free and paid. Photos submitted by paid users are processed before those submitted by free users. Photos are uploaded to Amazon S3 and the job information is sent to Amazon SQS.
Which configuration should a solutions architect recommend?
- Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first.
- Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling.
- Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
- Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.
- Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
paid photos processed before free = separate queues with prioritized polling = options 2 and 3
Option 2 doesn’t prioritize paid over free: polling mode (short vs. long) only controls how long a consumer waits for messages, and short polling can even miss available messages = option 3
Use separate queues to provide prioritization of work. https://aws.amazon.com/sqs/features/
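A sketch of the prioritized consumer loop (queue URLs are placeholders):

```python
import boto3

sqs = boto3.client("sqs")

PAID_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/paid-photos"  # placeholder
FREE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/free-photos"  # placeholder

def next_job():
    # Always drain the paid queue first; fall back to the free queue
    # only when no paid work is waiting.
    for url in (PAID_URL, FREE_URL):
        resp = sqs.receive_message(
            QueueUrl=url, MaxNumberOfMessages=1, WaitTimeSeconds=1
        )
        messages = resp.get("Messages", [])
        if messages:
            return url, messages[0]
    return None, None
```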
A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that minimizes data loss and stores every transaction on at least two nodes.
Which solution meets these requirements?
- Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones.
- Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.
- Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data.
- Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to an Amazon RDS MySQL DB instance.
- Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.
reliable + every transaction stored on at least two nodes = Multi-AZ (synchronous replication to a standby in a second AZ) = option 2
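A sketch of provisioning the target instance with boto3 (identifiers and sizing are placeholders; `MultiAZ=True` is what creates the synchronous standby):

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="orders-mysql",  # placeholder
    Engine="mysql",
    DBInstanceClass="db.m5.large",        # placeholder sizing
    AllocatedStorage=100,                 # GiB, placeholder
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",      # placeholder; use Secrets Manager
    MultiAZ=True,                         # synchronous standby in a second AZ
)
```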
A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.
Which storage solution meets these requirements?
- Amazon S3 Standard
- Amazon S3 Intelligent-Tiering
- Amazon S3 Glacier Deep Archive
- Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
- Amazon S3 Intelligent-Tiering
access patterns vary = Intelligent-Tiering (One Zone-IA keeps data in a single AZ, failing the availability bar; Glacier Deep Archive fails continuous access)
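Per object, the storage class is just a parameter on upload; a sketch with placeholder bucket and key:

```python
import boto3

s3 = boto3.client("s3")

# Objects land in Intelligent-Tiering and move between access tiers
# automatically as their access patterns change.
s3.put_object(
    Bucket="user-data-bucket",     # placeholder
    Key="profiles/user-123.json",  # placeholder
    Body=b"{}",
    StorageClass="INTELLIGENT_TIERING",
)
```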
A company receives inconsistent service from its data center provider because the company is headquartered in an area affected by natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on AWS in case the on-premises data center fails.
The company runs web servers that connect to external vendors. The data available on AWS and on premises must be uniform.
Which solution should a solutions architect recommend that has the LEAST amount of downtime?
- Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
- Configure an Amazon Route 53 failover record. Execute an AWS CloudFormation template from a script to create Amazon EC2 instances behind an Application Load Balancer. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
- Configure an Amazon Route 53 failover record. Set up an AWS Direct Connect connection between a VPC and the data center. Run application servers on Amazon EC2 in an Auto Scaling group. Run an AWS Lambda function to execute an AWS CloudFormation template to create an Application Load Balancer.
- Configure an Amazon Route 53 failover record. Run an AWS Lambda function to execute an AWS CloudFormation template to launch two Amazon EC2 instances. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. Set up an AWS Direct Connect connection between a VPC and the data center.
- Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
uniform data on premises and in AWS = Storage Gateway stored volumes backing up to S3 = options 1, 2, and 4
LEAST downtime = web servers already running behind an ALB in an Auto Scaling group, not created on demand from a CloudFormation template = option 1
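A sketch of the Route 53 failover half with boto3 (zone ID, domain, health check ID, and IPs are all placeholders; a matching SECONDARY record would point at the AWS environment):

```python
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",  # placeholder
                "Type": "A",
                "SetIdentifier": "on-prem-primary",
                "Failover": "PRIMARY",
                "TTL": 60,
                # Health check on the data center; when it fails,
                # Route 53 answers with the SECONDARY record instead.
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # placeholder
                "ResourceRecords": [{"Value": "198.51.100.10"}],  # placeholder
            },
        }]
    },
)
```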
A company has three VPCs named Development, Testing, and Production in the us-east-1 Region. The three VPCs need to be connected to an on-premises data center and are designed to be separate to maintain security and prevent any resource sharing. A solutions architect needs to find a scalable and secure solution.
What should the solutions architect recommend?
- Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back to the data center.
- Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center.
- Connect VPN connections from all the VPCs to a VPN in the Production VPC. Use a VPN connection from the Production VPC back to the data center.
- Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.
- Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.
many separate VPCs sharing one scalable path to the data center = AWS Transit Gateway (options 2 and 3 route through the Production VPC, breaking the isolation requirement; VPC peering is also non-transitive)
https://aws.amazon.com/premiumsupport/knowledge-center/transit-gateway-connect-vpcs-from-vpn/
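A sketch of the hub with boto3 (VPC and subnet IDs are placeholders; in practice you would wait for the gateway to become available before attaching, and use separate TGW route tables to keep the three VPCs isolated from one another):

```python
import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(
    Description="hub for dev/test/prod and Direct Connect"
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each VPC to the hub (placeholder IDs).
for vpc_id, subnet_ids in [
    ("vpc-dev0000000000000", ["subnet-dev-a"]),
    ("vpc-test000000000000", ["subnet-test-a"]),
    ("vpc-prod000000000000", ["subnet-prod-a"]),
]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )
```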
What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?
- Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
- Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
- Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
- Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
- Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
memorize the header name: “x-amz-server-side-encryption”
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
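A sketch of the deny statement from that blog post, applied with boto3 (the bucket name is a placeholder; the `Null` condition denies any PutObject that omits the header):

```python
import boto3
import json

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-bucket/*",  # placeholder bucket
        # Deny when the request has no x-amz-server-side-encryption header.
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}
s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```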
A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.
What is the MOST cost-effective method to establish this type of connection?
- Implement a client VPN.
- Implement AWS Direct Connect.
- Implement a bastion host on Amazon EC2.
- Implement an AWS Site-to-Site VPN connection.
- Implement an AWS Site-to-Site VPN connection.
set up quickly = VPN (Direct Connect takes weeks or months to provision)
does not need high bandwidth + small traffic = VPN is the cost-effective choice
Client VPN connects individual end-user devices to AWS, not an entire on-premises site = wrong
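A sketch of standing up the connection with boto3 (the public IP, ASN, and virtual private gateway ID are placeholders for the on-premises router and an existing VGW):

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway = the on-premises router's public endpoint (placeholders).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",
    BgpAsn=65000,
)

# Site-to-Site VPN from that customer gateway to an existing
# virtual private gateway (placeholder ID).
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId="vgw-0123456789abcdef0",
)
```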