saa-c02-part-12 Flashcards

1
Q

A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without requiring any changes to the application code.

Which solution meets these requirements?

  1. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.
  2. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the snapshot.
  3. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute requests across the databases.
  4. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53 weighted record sets to distribute requests across instances.
A
  1. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.

Single-AZ + no application code changes = modify the existing instance to Multi-AZ

https://aws.amazon.com/rds/features/multi-az/ — “To convert an existing Single-AZ DB Instance to a Multi-AZ deployment, use the ‘Modify’ option for your DB instance.”
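The same Modify operation can be scripted; a minimal sketch of the boto3 parameters, where the DB instance identifier is a hypothetical placeholder:

```python
# Parameters for converting a Single-AZ RDS instance to Multi-AZ in place.
# "orders-db" is a hypothetical DB instance identifier.
modify_params = {
    "DBInstanceIdentifier": "orders-db",
    "MultiAZ": True,           # provisions a synchronous standby in another AZ
    "ApplyImmediately": True,  # convert now instead of at the next maintenance window
}

# With boto3, the call would be:
#   import boto3
#   boto3.client("rds").modify_db_instance(**modify_params)
```

No application change is needed because the instance endpoint stays the same; RDS fails over to the standby behind that endpoint.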

2
Q

A company has a 10 Gbps AWS Direct Connect connection from its on-premises servers to AWS. The workloads using the connection are critical. The company requires a disaster recovery strategy with maximum resiliency that maintains the current connection bandwidth at a minimum.

What should a solutions architect recommend?

  1. Set up a new Direct Connect connection in another AWS Region.
  2. Set up a new AWS managed VPN connection in another AWS Region.
  3. Set up two new Direct Connect connections: one in the current AWS Region and one in another Region.
  4. Set up two new AWS managed VPN connections: one in the current AWS Region and one in another Region.
A
  1. Set up a new Direct Connect connection in another AWS Region.

disaster recovery = a DX connection in another AWS Region

Option 3 = wrong; there is no reason to set up another DX connection in the same Region, since the existing 10 Gbps connection already serves it

3
Q

A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the public subnets. The private subnets require access to the internet to allow Amazon EC2 instances to download software updates.

What should the solutions architect do to enable internet access for the private subnets?

  1. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ.
  2. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT instance in its AZ.
  3. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic to the private internet gateway.
  4. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC traffic to the egress-only internet gateway.
A
  1. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ.

private subnets need internet access = NAT = 1,2

A NAT device must sit in a public subnet to reach the internet gateway; option 2 places NAT instances in the private subnets = 1
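The per-AZ routing can be sketched as follows; AZ names and NAT gateway IDs are hypothetical placeholders:

```python
# Per-AZ routing sketch: each private route table sends non-VPC traffic to the
# NAT gateway in its own AZ, so one AZ's outage doesn't break the other AZs.
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
nat_gateways = {az: f"nat-{i:017x}" for i, az in enumerate(azs)}  # hypothetical IDs

private_routes = {
    az: {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": nat_gateways[az]}
    for az in azs
}

# With boto3, each route would be created on its AZ's private route table:
#   ec2.create_route(RouteTableId=rtb_id, **private_routes[az])
```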

4
Q

As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information.

Which solution meets these requirements?

  1. Run a query with Amazon Athena to generate the report.
  2. Create a report in Cost Explorer and download the report.
  3. Access the bill details from the billing dashboard and download the bill.
  4. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
A
  2. Create a report in Cost Explorer and download the report.

billing report = Cost Explorer

You can filter the cost data associated with each member account in an organization using Cost Explorer https://aws.amazon.com/premiumsupport/knowledge-center/consolidated-linked-billing-report/

5
Q

A company with facilities in North America, Europe, and Asia is designing a new distributed application to optimize its global supply chain and manufacturing process. The orders booked on one continent should be visible to all Regions in a second or less. The database should be able to support failover with a short Recovery Time Objective (RTO). The uptime of the application is important to ensure that manufacturing is not impacted.

What should a solutions architect recommend?

  1. Use Amazon DynamoDB global tables.
  2. Use Amazon Aurora Global Database.
  3. Use Amazon RDS for MySQL with a cross-Region read replica.
  4. Use Amazon RDS for PostgreSQL with a cross-Region read replica.
A
  2. Use Amazon Aurora Global Database.

in a second or less + short RTO = Aurora Global Database (typical cross-Region replication latency under 1 second, with fast cross-Region failover)

6
Q

A company’s near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.

Which combination of steps should the solutions architect take? (Choose two.)

  1. Use Amazon Kinesis Data Firehose to ingest the data.
  2. Use AWS Lambda with AWS Step Functions to process the data.
  3. Use AWS Database Migration Service (AWS DMS) to ingest the data.
  4. Use Amazon EC2 instances in an Auto Scaling group to process the data.
  5. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
A
  1. Use Amazon Kinesis Data Firehose to ingest the data.
  5. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.

serverless = Fargate; EC2 in an Auto Scaling group is not serverless

near-real-time streaming ingestion = Kinesis Data Firehose

Lambda (option 2) = wrong; the 30-minute job exceeds Lambda’s 15-minute execution limit

7
Q

An application running on an Amazon EC2 instance needs to access an Amazon DynamoDB table. Both the EC2 instance and the DynamoDB table are in the same AWS account. A solutions architect must configure the necessary permissions.

Which solution will allow least privilege access to the DynamoDB table from the EC2 instance?

  1. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance.
  2. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role.
  3. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly.
  4. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.
A
  1. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance.

instance needs permissions = IAM role = 1,2

instance profile = the mechanism that attaches a role to an EC2 instance = 1

Option 2 = wrong; the role’s trust policy names the EC2 service principal (ec2.amazonaws.com), not individual instances
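A sketch of the two policy documents involved; the role name, table ARN, and account ID are hypothetical:

```python
# Trust policy: the EC2 *service* assumes the role. Individual instances are
# never listed here — they get the role via an instance profile.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Least-privilege permissions policy scoped to one DynamoDB table
# (hypothetical ARN; grant only the actions the application needs).
table_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
    }],
}

# With boto3/IAM: create_role(AssumeRolePolicyDocument=json.dumps(trust_policy)),
# then create_instance_profile + add_role_to_instance_profile to attach it to EC2.
```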

8
Q

A solutions architect is designing a solution that involves orchestrating a series of Amazon Elastic Container Service (Amazon ECS) task types running on Amazon EC2 instances that are part of an ECS cluster. The output and state data for all tasks needs to be stored. The amount of data output by each task is approximately 10 MB, and there could be hundreds of tasks running at a time. The system should be optimized for high-frequency reading and writing. As old outputs are archived and deleted, the storage size is not expected to exceed 1 TB.

Which storage solution should the solutions architect recommend?

  1. An Amazon DynamoDB table accessible by all ECS cluster instances.
  2. An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.
  3. An Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode.
  4. An Amazon Elastic Block Store (Amazon EBS) volume mounted to the ECS cluster instances.
A
  2. An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.

shared storage for hundreds of concurrent ECS tasks = EFS

high-frequency reading and writing on a small (under 1 TB) file system = Provisioned Throughput, because Bursting Throughput scales with stored data size and a small file system earns little baseline throughput

https://docs.aws.amazon.com/efs/latest/ug/performance.html

9
Q

An online photo application lets users upload photos and perform image editing operations. The application offers two classes of service: free and paid. Photos submitted by paid users are processed before those submitted by free users. Photos are uploaded to Amazon S3 and the job information is sent to Amazon SQS.

Which configuration should a solutions architect recommend?

  1. Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first.
  2. Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling.
  3. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.
  4. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.
A
  3. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue.

paid processed before free = two queues = 2,3

Option 2 = wrong; polling mode (short vs. long) controls how consumers retrieve messages, not which queue is served first, so it provides no prioritization

Use separate queues to provide prioritization of work: https://aws.amazon.com/sqs/features/
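The prioritized-polling pattern can be simulated locally; the two deques stand in for the paid and free SQS standard queues:

```python
from collections import deque

# Priority-polling sketch: drain the paid queue first and fall back to the
# free queue only when the paid queue is empty.
paid_queue = deque(["paid-1", "paid-2"])   # stand-in for the paid SQS queue
free_queue = deque(["free-1"])             # stand-in for the free SQS queue

def next_job():
    """Return the next job to process, always preferring the paid queue."""
    if paid_queue:
        return paid_queue.popleft()
    if free_queue:
        return free_queue.popleft()
    return None

processed = [next_job() for _ in range(3)]
# both paid jobs come out before any free job
```

In the real consumer, each branch would be an `sqs.receive_message` call against the corresponding queue URL.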

10
Q

A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that minimizes data loss and stores every transaction on at least two nodes.

Which solution meets these requirements?

  1. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones.
  2. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.
  3. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data.
  4. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to an Amazon RDS MySQL DB instance.
A
  2. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.

every transaction on at least two nodes + minimal data loss = synchronous Multi-AZ replication = 2

11
Q

A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability.

Which storage solution meets these requirements?

  1. Amazon S3 Standard
  2. Amazon S3 Intelligent-Tiering
  3. Amazon S3 Glacier Deep Archive
  4. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
A
  2. Amazon S3 Intelligent-Tiering

Access patterns vary = S3 Intelligent-Tiering; One Zone-IA = single AZ, lower availability = wrong

12
Q

A company receives inconsistent service from its data center provider because the company is headquartered in an area affected by natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failure environment on AWS in case the on-premises data center fails.

The company runs web servers that connect to external vendors. The data available on AWS and on premises must be uniform.

Which solution should a solutions architect recommend that has the LEAST amount of downtime?

  1. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
  2. Configure an Amazon Route 53 failover record. Execute an AWS CloudFormation template from a script to create Amazon EC2 instances behind an Application Load Balancer. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
  3. Configure an Amazon Route 53 failover record. Set up an AWS Direct Connect connection between a VPC and the data center. Run application servers on Amazon EC2 in an Auto Scaling group. Run an AWS Lambda function to execute an AWS CloudFormation template to create an Application Load Balancer.
  4. Configure an Amazon Route 53 failover record. Run an AWS Lambda function to execute an AWS CloudFormation template to launch two Amazon EC2 instances. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. Set up an AWS Direct Connect connection between a VPC and the data center.
A
  1. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.

uniform data on premises and on AWS = Storage Gateway backing up to S3 = 1,2,4

LEAST downtime = web servers already running behind an ALB in an Auto Scaling group = 1; options 2 and 4 must create the EC2 infrastructure at failover time

13
Q

A company has three VPCs named Development, Testing, and Production in the us-east-1 Region. The three VPCs need to be connected to an on-premises data center and are designed to be separate to maintain security and prevent any resource sharing. A solutions architect needs to find a scalable and secure solution.

What should the solutions architect recommend?

  1. Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back to the data center.
  2. Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center.
  3. Connect VPN connections from all the VPCs to a VPN in the Production VPC. Use a VPN connection from the Production VPC back to the data center.
  4. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.
A
  4. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.

multiple isolated VPCs + on-premises connectivity at scale = AWS Transit Gateway

https://aws.amazon.com/premiumsupport/knowledge-center/transit-gateway-connect-vpcs-from-vpn/

14
Q

What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?

  1. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
  2. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
  3. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
  4. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
A
  4. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.

memorize “x-amz-server-side-encryption”

https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
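A sketch of the deny statement from the linked blog post, expressed as a policy document; the bucket name is a hypothetical placeholder:

```python
# Bucket policy sketch: deny any PutObject request that arrives without the
# x-amz-server-side-encryption header (the Null condition matches when the
# key is absent). "example-bucket" is hypothetical.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}

# Applied with: s3.put_bucket_policy(Bucket="example-bucket",
#                                    Policy=json.dumps(bucket_policy))
```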

15
Q

A company needs a secure connection between its on-premises environment and AWS. This connection does not need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.

What is the MOST cost-effective method to establish this type of connection?

  1. Implement a client VPN.
  2. Implement AWS Direct Connect.
  3. Implement a bastion host on Amazon EC2.
  4. Implement an AWS Site-to-Site VPN connection.
A
  4. Implement an AWS Site-to-Site VPN connection.

set up quickly + low bandwidth + cost-effective = Site-to-Site VPN

Direct Connect = weeks to provision and higher cost = wrong

Client VPN connects individual client devices to AWS, not an on-premises network = wrong

16
Q

A company uses Application Load Balancers (ALBs) in different AWS Regions. The ALBs receive inconsistent traffic that can spike and drop throughout the year. The company’s networking team needs to allow the IP addresses of the ALBs in the on-premises firewall to enable connectivity.

Which solution is the MOST scalable with minimal configuration changes?

  1. Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the on-premises firewall’s rule to allow the IP addresses of the ALBs.
  2. Migrate all ALBs in different Regions to the Network Load Balancer (NLBs). Update the on-premises firewall’s rule to allow the Elastic IP addresses of all the NLBs.
  3. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall’s rule to allow static IP addresses associated with the accelerator.
  4. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Update the on-premises firewall’s rule to allow the Elastic IP address attached to the NLB.
A
  3. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall’s rule to allow static IP addresses associated with the accelerator.

static IP addresses for firewall allow lists in front of ALBs across Regions = Global Accelerator (two static anycast IPs)

17
Q

A company runs a high performance computing (HPC) workload on AWS. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication. The Amazon EC2 instances are properly sized for compute and storage capacity, and are launched using default options.

What should a solutions architect propose to improve the performance of the workload?

  1. Choose a cluster placement group while launching Amazon EC2 instances.
  2. Choose dedicated instance tenancy while launching Amazon EC2 instances.
  3. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
  4. Choose the required capacity reservation while launching Amazon EC2 instances.
A
  1. Choose a cluster placement group while launching Amazon EC2 instances.

high performance computing (HPC) = cluster placement group
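A launch-parameter sketch for placing the nodes in a cluster placement group; the group name, AMI ID, and instance type are hypothetical:

```python
# run_instances parameter sketch: tightly coupled HPC nodes launched into a
# cluster placement group for low-latency, high-throughput networking.
run_params = {
    "ImageId": "ami-0123456789abcdef0",       # hypothetical AMI
    "InstanceType": "c5n.18xlarge",            # hypothetical HPC-oriented type
    "MinCount": 4,
    "MaxCount": 4,
    # group created beforehand via create_placement_group(Strategy="cluster")
    "Placement": {"GroupName": "hpc-cluster-pg"},
}

# boto3: boto3.client("ec2").run_instances(**run_params)
```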

18
Q

A company uses a legacy on-premises analytics application that operates on gigabytes of .csv files and represents months of data. The legacy application cannot handle the growing size of .csv files. New .csv files are added daily from various data sources to a central on-premises storage location. The company wants to continue to support the legacy application while users learn AWS analytics services. To achieve this, a solutions architect wants to maintain two synchronized copies of all the .csv files on-premises and in Amazon S3.

Which solution should the solutions architect recommend?

  1. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between the company’s on-premises storage and the company’s S3 bucket.
  2. Deploy an on-premises file gateway. Configure data sources to write the .csv files to the file gateway. Point the legacy analytics application to the file gateway. The file gateway should replicate the .csv files to Amazon S3.
  3. Deploy an on-premises volume gateway. Configure data sources to write the .csv files to the volume gateway. Point the legacy analytics application to the volume gateway. The volume gateway should replicate data to Amazon S3.
  4. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between on-premises and Amazon Elastic File System (Amazon EFS). Enable replication from Amazon Elastic File System (Amazon EFS) to the company’s S3 bucket.
A
  1. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between the company’s on-premises storage and the company’s S3 bucket.

two synchronized copies on premises and in Amazon S3 = DataSync = 1,4; option 4 adds an unnecessary EFS hop = 1

19
Q

A company has media and application files that need to be shared internally. Users currently are authenticated using Active Directory and access files from a Microsoft Windows platform. The chief executive officer wants to keep the same user permissions, but wants the company to improve the process as the company is reaching its storage capacity limit.

What should a solutions architect recommend?

  1. Set up a corporate Amazon S3 bucket and move all media and application files.
  2. Configure Amazon FSx for Windows File Server and move all the media and application files.
  3. Configure Amazon Elastic File System (Amazon EFS) and move all media and application files.
  4. Set up Amazon EC2 on Windows, attach multiple Amazon Elastic Block Store (Amazon EBS) volumes, and move all media and application files.
A
  2. Configure Amazon FSx for Windows File Server and move all the media and application files.

Active Directory authentication + Windows file access + same permissions = Amazon FSx for Windows File Server

20
Q

A company is deploying a web portal. The company wants to ensure that only the web portion of the application is publicly accessible. To accomplish this, the VPC was designed with two public subnets and two private subnets. The application will run on several Amazon EC2 instances in an Auto Scaling group. SSL termination must be offloaded from the EC2 instances.

What should a solutions architect do to ensure these requirements are met?

  1. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
  2. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the public subnets and associate it with the Application Load Balancer.
  3. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
  4. Configure the Application Load Balancer in the private subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
A
  3. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.

SSL termination offload = ALB (Layer 7)

only the web portion public = ALB in public subnets, instances in private subnets = 3

NLB = Layer 4 = wrong for this use case
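The SSL offload setup can be sketched as an HTTPS listener on the ALB; all ARNs are hypothetical placeholders:

```python
# create_listener parameter sketch: the ALB terminates SSL in the public
# subnets and forwards plain HTTP to the ASG targets in the private subnets.
listener_params = {
    "LoadBalancerArn": (
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/web/abc123"
    ),
    "Protocol": "HTTPS",
    "Port": 443,
    # ACM certificate used for SSL termination at the load balancer
    "Certificates": [{
        "CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example"
    }],
    "DefaultActions": [{
        "Type": "forward",
        "TargetGroupArn": (
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "targetgroup/web/def456"
        ),
    }],
}

# boto3: boto3.client("elbv2").create_listener(**listener_params)
```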