saa-c02-part-05 Flashcards
A company’s operations team has an existing Amazon S3 bucket configured to notify an Amazon SQS queue when new objects are created within the bucket. The development team also wants to receive events when new objects are created. The existing operations team workflow must remain intact.
Which solution would satisfy these requirements?
- Create another SQS queue. Update the S3 events in the bucket to also update the new queue when a new object is created.
- Create a new SQS queue that only allows Amazon S3 to access the queue. Update Amazon S3 to update this queue when a new object is created.
- Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Update both queues to poll Amazon SNS.
- Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add subscriptions for both queues in the topic.
- Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add subscriptions for both queues in the topic.
operations team + development team = 2 teams want events = fanout = SNS Topic
an SQS queue subscribes to an SNS topic by its ARN
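A minimal boto3 sketch of the fan-out, with hypothetical bucket and queue names; the SQS queue policies that let SNS deliver, and the SNS topic policy that lets S3 publish, are omitted for brevity:

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")
s3 = boto3.client("s3")

# Topic that fans out the bucket's object-created events.
topic_arn = sns.create_topic(Name="bucket-object-created")["TopicArn"]

# Subscribe both teams' queues (hypothetical names) to the topic by ARN.
for queue_name in ["ops-team-queue", "dev-team-queue"]:
    queue_url = sqs.get_queue_url(QueueName=queue_name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Point the bucket's notifications at the topic instead of the single queue;
# the operations queue keeps receiving events through its subscription.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```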
An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table. What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?
- Use a VPC endpoint for DynamoDB.
- Use a NAT gateway in a public subnet.
- Use a NAT instance in a private subnet.
- Use the internet gateway attached to the VPC.
- Use a VPC endpoint for DynamoDB.
application needs to access Amazon DynamoDB without traffic leaving the AWS network = VPC (gateway) endpoint
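A sketch of the winning option in boto3, assuming placeholder VPC and route table IDs; DynamoDB (like S3) uses a gateway endpoint, which works by adding routes to the subnets' route tables:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint: traffic to DynamoDB stays on the AWS network via
# route-table entries, so the private subnets need no NAT or internet gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],  # private subnets' route table
)
```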
A company built an application that lets users check in to places they visit, rank the places, and add reviews about their experiences. The application is successful with a rapid increase in the number of users every month.
The chief technology officer fears the database supporting the current infrastructure may not handle the new load the following month because the single Amazon RDS for MySQL instance has triggered alarms related to resource exhaustion due to read requests.
What can a solutions architect recommend to prevent service interruptions at the database layer with minimal changes to code?
- Create RDS read replicas and redirect read-only traffic to the read replica endpoints. Enable a Multi-AZ deployment.
- Create an Amazon EMR cluster and migrate the data to a Hadoop Distributed File System (HDFS) with a replication factor of 3.
- Create an Amazon ElastiCache cluster and redirect all read-only traffic to the cluster. Set up the cluster to be deployed in three Availability Zones.
- Create an Amazon DynamoDB table to replace the RDS instance and redirect all read-only traffic to the DynamoDB table. Enable DynamoDB Accelerator to offload traffic from the main table.
- Create RDS read replicas and redirect read-only traffic to the read replica endpoints. Enable a Multi-AZ deployment.
not handle the new load + resource exhaustion = performance issues = read replicas
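A boto3 sketch with hypothetical instance identifiers; the only code change the application needs is pointing its read-only connection strings at the replica endpoint:

```python
import boto3

rds = boto3.client("rds")

# Read replica absorbs the read traffic that is exhausting the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="checkin-db-replica-1",  # hypothetical name
    SourceDBInstanceIdentifier="checkin-db",
)

# Multi-AZ adds a standby for failover; it is not a read target.
rds.modify_db_instance(
    DBInstanceIdentifier="checkin-db",
    MultiAZ=True,
    ApplyImmediately=True,
)
```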
A company is looking for a solution that can store video archives in AWS from old news footage. The company needs to minimize costs and will rarely need to restore these files. When the files are needed, they must be available in a maximum of five minutes.
What is the MOST cost-effective solution?
- Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
- Store the video archives in Amazon S3 Glacier and use Standard retrievals.
- Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
- Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
- Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
must be available in a maximum of five minutes = S3 Glacier and use Expedited retrievals
rarely need to restore = Glacier
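Expedited retrievals typically complete in 1-5 minutes, inside the five-minute window. A restore-request sketch with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Request an Expedited restore of an archived object; the temporary copy
# stays readable for the requested number of days.
s3.restore_object(
    Bucket="news-footage-archive",   # placeholder bucket
    Key="1998/election-night.mp4",   # placeholder key
    RestoreRequest={
        "Days": 1,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```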
A company has created a VPC with multiple private subnets in multiple Availability Zones (AZs) and one public subnet in one of the AZs. The public subnet is used to launch a NAT gateway. There are instances in the private subnets that use a NAT gateway to connect to the internet. In case of an AZ failure, the company wants to ensure that the instances are not all experiencing internet connectivity issues and that there is a backup plan ready.
Which solution should a solutions architect recommend that is MOST highly available?
- Create a new public subnet with a NAT gateway in the same AZ. Distribute the traffic between the two NAT gateways.
- Create an Amazon EC2 NAT instance in a new public subnet. Distribute the traffic between the NAT gateway and the NAT instance.
- Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic from the private subnets in each AZ to the respective NAT gateway.
- Create an Amazon EC2 NAT instance in the same public subnet. Replace the NAT gateway with the NAT instance and associate the instance with an Auto Scaling group with an appropriate scaling policy.
- Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic from the private subnets in each AZ to the respective NAT gateway.
MOST highly available = multi AZ = each AZ
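A per-AZ sketch in boto3 (subnet and route table IDs are placeholders); because each private subnet routes through the NAT gateway in its own AZ, an AZ failure only affects that AZ's instances:

```python
import boto3

ec2 = boto3.client("ec2")

# One (public subnet, private route table) pair per AZ; IDs are placeholders.
az_pairs = [
    ("subnet-0aaa-public-az1", "rtb-0aaa-private-az1"),
    ("subnet-0bbb-public-az2", "rtb-0bbb-private-az2"),
]

for public_subnet_id, private_route_table_id in az_pairs:
    eip = ec2.allocate_address(Domain="vpc")
    nat_id = ec2.create_nat_gateway(
        SubnetId=public_subnet_id, AllocationId=eip["AllocationId"]
    )["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
    # Default route for this AZ's private subnets through its local NAT
    # gateway, so an AZ failure only takes out that AZ's internet path.
    ec2.create_route(
        RouteTableId=private_route_table_id,
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```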
A healthcare company stores highly sensitive patient records. Compliance requires that multiple copies be stored in different locations. Each record must be stored for 7 years. The company has a service level agreement (SLA) to provide records to government agencies immediately for the first 30 days and then within 4 hours of a request thereafter.
What should a solutions architect recommend?
- Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy.
- Use Amazon S3 with cross-origin resource sharing (CORS) enabled. After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy.
- Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.
- Use Amazon S3 with cross-origin resource sharing (CORS) enabled. After 30 days, transition the data to Amazon S3 Glacier Deep Archive using a lifecycle policy.
- Use Amazon S3 with cross-Region replication enabled. After 30 days, transition the data to Amazon S3 Glacier using a lifecycle policy.
be stored in different locations = cross-Region = 1,3
within 4 hours = not Deep Archive (its standard retrievals take up to 12 hours, too slow for the 4-hour SLA) = 1
7 years = Glacier
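A sketch of the lifecycle rule for the winning option, with a placeholder bucket; cross-Region replication itself requires versioning and a separate put_bucket_replication call, omitted here. Seven years is approximated as 2,555 days:

```python
import boto3

s3 = boto3.client("s3")

# Records stay in S3 Standard for the 30-day immediate-access window,
# then move to Glacier, which meets the 4-hour retrieval SLA.
s3.put_bucket_lifecycle_configuration(
    Bucket="patient-records",  # placeholder; CRR to a second Region assumed
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},  # ~7-year retention
            }
        ]
    },
)
```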
A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software for Amazon EC2 instances. A solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups successfully send reports to the auditing system as soon as they are launched and terminated.
Which solution achieves these goals MOST efficiently?
- Use a scheduled AWS Lambda function and execute a script remotely on all EC2 instances to send data to the audit system.
- Use EC2 Auto Scaling lifecycle hooks to execute a custom script to send data to the audit system when instances are launched and terminated.
- Use an EC2 Auto Scaling launch configuration to execute a custom script through user data to send data to the audit system when instances are launched and terminated.
- Execute a custom script on the instance operating system to send data to the audit system. Configure the script to be executed by the EC2 Auto Scaling group when the instance starts and is terminated.
- Use EC2 Auto Scaling lifecycle hooks to execute a custom script to send data to the audit system when instances are launched and terminated.
send reports to the auditing system as soon as they are launched and terminated = can't rely on scripts on the instances (a terminating instance may never run one) = need external event triggers = Auto Scaling lifecycle hooks
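A sketch registering hooks for both transitions; the group name, queue ARN, and role ARN are placeholders. Each hook holds the instance in a wait state until the audit workflow calls complete_lifecycle_action (or the heartbeat expires):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Notify the audit system on both launch and terminate transitions.
for name, transition in [
    ("report-on-launch", "autoscaling:EC2_INSTANCE_LAUNCHING"),
    ("report-on-terminate", "autoscaling:EC2_INSTANCE_TERMINATING"),
]:
    autoscaling.put_lifecycle_hook(
        LifecycleHookName=name,
        AutoScalingGroupName="app-asg",  # placeholder group name
        LifecycleTransition=transition,
        NotificationTargetARN="arn:aws:sqs:us-east-1:123456789012:audit-queue",
        RoleARN="arn:aws:iam::123456789012:role/asg-hook-role",
        HeartbeatTimeout=300,
        DefaultResult="CONTINUE",
    )
```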
A company recently implemented hybrid cloud connectivity using AWS Direct Connect and is migrating data to Amazon S3. The company is looking for a fully managed solution that will automate and accelerate the replication of data between the on-premises storage systems and AWS storage services.
Which solution should a solutions architect recommend to keep the data private?
- Deploy an AWS DataSync agent for the on-premises environment. Configure a sync job to replicate the data and connect it with an AWS service endpoint.
- Deploy an AWS DataSync agent for the on-premises environment. Schedule a batch job to replicate point-in-time snapshots to AWS.
- Deploy an AWS Storage Gateway volume gateway for the on-premises environment. Configure it to store data locally, and asynchronously back up point-in-time snapshots to AWS.
- Deploy an AWS Storage Gateway file gateway for the on-premises environment. Configure it to store data locally, and asynchronously back up point-in-time snapshots to AWS.
- Deploy an AWS DataSync agent for the on-premises environment. Configure a sync job to replicate the data and connect it with an AWS service endpoint.
fully managed solution = DataSync
keep the data private = AWS service endpoints (traffic stays on the Direct Connect link, off the public internet)
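A DataSync sketch with placeholder hostnames and ARNs; activating the agent against a VPC endpoint (reached over Direct Connect) is what keeps the transfer off the public internet:

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export, reached through the DataSync agent.
source_arn = datasync.create_location_nfs(
    ServerHostname="nfs.corp.example.com",  # placeholder hostname
    Subdirectory="/exports/data",
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0123"]
    },
)["LocationArn"]

# Destination: the S3 bucket, written through a bucket-access role.
dest_arn = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::replicated-data",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3"},
)["LocationArn"]

datasync.create_task(
    SourceLocationArn=source_arn,
    DestinationLocationArn=dest_arn,
    Name="onprem-to-s3-replication",
)
```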
A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within the next month. The company’s current network connection allows up to 100 Mbps uploads for this purpose during the night only.
What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
- Use AWS Snowmobile to ship the data to AWS.
- Order multiple AWS Snowball devices to ship the data to AWS.
- Enable Amazon S3 Transfer Acceleration and securely upload the data.
- Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.
- Order multiple AWS Snowball devices to ship the data to AWS.
MOST cost-effective within the deadline = Snowball (the 100 Mbps night-only link is far too slow for 150 TB in a month; Snowmobile is for multi-petabyte migrations)
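Back-of-envelope arithmetic on why the link cannot meet the deadline, assuming a hypothetical 8-hour nightly window:

```python
# 150 TB over a 100 Mbps link, uploads allowed only at night.
data_bits = 150e12 * 8             # 150 TB expressed in bits
link_bps = 100e6                   # 100 Mbps
hours_needed = data_bits / link_bps / 3600
nights_needed = hours_needed / 8   # assumed 8-hour nightly window

print(f"{hours_needed:,.0f} transfer hours")  # ~3,333 hours
print(f"{nights_needed:,.0f} nights")         # ~417 nights, far past the deadline
```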
A public-facing web application queries a database hosted on an Amazon EC2 instance in a private subnet. A large number of queries involve multiple table joins, and the application performance has been degrading due to an increase in complex queries. The application team will be performing updates to improve performance.
What should a solutions architect recommend to the application team? (Choose two.)
- Cache query data in Amazon SQS
- Create a read replica to offload queries
- Migrate the database to Amazon Athena
- Implement Amazon DynamoDB Accelerator to cache data.
- Migrate the database to Amazon RDS
- Create a read replica to offload queries
- Migrate the database to Amazon RDS
queries = read replicas
multiple table joins = relational = Amazon RDS (not Athena: Athena queries static data in S3 and is not built for database CRUD) = 2 and 5
A company is seeing access requests by some suspicious IP addresses. The security team discovers the requests are from different IP addresses under the same CIDR range.
What should a solutions architect recommend to the team?
- Add a rule in the inbound table of the security group to deny the traffic from that CIDR range.
- Add a rule in the outbound table of the security group to deny the traffic from that CIDR range.
- Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules.
- Add a deny rule in the outbound table of the network ACL with a lower rule number than other rules.
- Add a deny rule in the inbound table of the network ACL with a lower rule number than other rules.
CIDR rules = network ACL
blocking incoming requests = inbound table
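A sketch of the deny entry with a placeholder NACL ID; NACL rules are evaluated in ascending rule number, so a low number makes the deny win before any existing allow. 203.0.113.0/24 stands in for the suspicious range:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
    RuleNumber=50,               # lower than existing rules, so evaluated first
    Protocol="-1",               # all protocols
    RuleAction="deny",
    Egress=False,                # inbound table
    CidrBlock="203.0.113.0/24",  # the suspicious CIDR range
)
```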
A company recently expanded globally and wants to make its application accessible to users in those geographic locations. The application is deployed on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. The company needs the ability to shift traffic from resources in one Region to another.
What should a solutions architect recommend?
- Configure an Amazon Route 53 latency routing policy.
- Configure an Amazon Route 53 geolocation routing policy.
- Configure an Amazon Route 53 geoproximity routing policy.
- Configure an Amazon Route 53 multivalue answer routing policy.
- Configure an Amazon Route 53 geoproximity routing policy.
shift traffic from resources = geoproximity
shift traffic from users = geolocation
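Geoproximity routing shifts traffic by adjusting a bias value on each resource's record. A sketch with placeholder hosted zone and ALB values (geoproximity in plain record sets is newer Route 53 functionality; older setups expressed it through Traffic Flow policies):

```python
import boto3

route53 = boto3.client("route53")

# Raising Bias expands the geographic area routed to this Region's ALB;
# lowering it shifts traffic away. All identifiers are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "us-east-1",
                "GeoProximityLocation": {"AWSRegion": "us-east-1", "Bias": 25},
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # assumed ELB zone ID
                    "DNSName": "my-alb-123.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)
```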
A company wants to replicate its data to AWS to recover in the event of a disaster. Today, a system administrator has scripts that copy data to a NFS share. Individual backup files need to be accessed with low latency by application administrators to deal with errors in processing.
What should a solutions architect recommend to meet these requirements?
- Modify the script to copy data to an Amazon S3 bucket instead of the on-premises NFS share.
- Modify the script to copy data to an Amazon S3 Glacier Archive instead of the on-premises NFS share.
- Modify the script to copy data to an Amazon Elastic File System (Amazon EFS) volume instead of the on-premises NFS share.
- Modify the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises NFS share.
- Modify the script to copy data to an AWS Storage Gateway for File Gateway virtual appliance instead of the on-premises NFS share.
on-premises access by admins with low latency = Storage Gateway file gateway (local cache serves recent files fast)
backup = Storage Gateway
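A sketch creating the NFS file share that the existing script can keep targeting; the gateway, role, and bucket ARNs are placeholders. Recently written files are served from the appliance's local cache, which is what gives administrators the low-latency access:

```python
import uuid

import boto3

storagegateway = boto3.client("storagegateway")

# Expose an S3 bucket as an NFS share on the on-premises file gateway;
# the backup script keeps writing to an NFS path while data lands in S3.
storagegateway.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-0123",
    Role="arn:aws:iam::123456789012:role/file-gateway-s3-access",
    LocationARN="arn:aws:s3:::dr-backup-bucket",
)
```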
An application requires a development environment (DEV) and production environment (PROD) for several years. The DEV instances will run for 10 hours each day during normal business hours, while the PROD instances will run 24 hours each day. A solutions architect needs to determine a compute instance purchase strategy to minimize costs.
Which solution is the MOST cost-effective?
- DEV with Spot Instances and PROD with On-Demand Instances
- DEV with On-Demand Instances and PROD with Spot Instances
- DEV with Scheduled Reserved Instances and PROD with Reserved Instances
- DEV with On-Demand Instances and PROD with Scheduled Reserved Instances
- DEV with Scheduled Reserved Instances and PROD with Reserved Instances
DEV = 10 hours each day during normal business hours = scheduled = answer 3
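Rough arithmetic on the hours involved; the rate and discount percentages below are hypothetical. The point is that a schedule covering only 10 hours a day beats paying for a full day, and neither environment tolerates Spot interruptions:

```python
on_demand_rate = 0.10     # $/hour, hypothetical
dev_hours = 10 * 365      # DEV: 10 hours each day
prod_hours = 24 * 365     # PROD: always on

dev_scheduled = dev_hours * on_demand_rate * 0.90   # assumed scheduled-RI discount
prod_reserved = prod_hours * on_demand_rate * 0.60  # assumed standard-RI discount

print(f"DEV:  ${dev_hours * on_demand_rate:,.0f} on-demand vs ${dev_scheduled:,.0f} scheduled")
print(f"PROD: ${prod_hours * on_demand_rate:,.0f} on-demand vs ${prod_reserved:,.0f} reserved")
```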
A company runs multiple Amazon EC2 Linux instances in a VPC with applications that use a hierarchical directory structure. The applications need to rapidly and concurrently read and write to shared storage.
How can this be achieved?
- Create an Amazon Elastic File System (Amazon EFS) file system and mount it from each EC2 instance.
- Create an Amazon S3 bucket and permit access from all the EC2 instances in the VPC.
- Create a file system on an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volume. Attach the volume to all the EC2 instances.
- Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes attached to each EC2 instance. Synchronize the Amazon Elastic Block Store (Amazon EBS) volumes across the different EC2 instances.
- Create an Amazon Elastic File System (Amazon EFS) file system and mount it from each EC2 instance.
concurrent read and write = EFS
Linux = EFS
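A provisioning sketch with placeholder subnet and security-group IDs; every instance mounts the same file system over NFS, which preserves the hierarchical directory structure and allows concurrent reads and writes:

```python
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-app-storage",  # idempotency token
    PerformanceMode="generalPurpose",
)

# One mount target per AZ the instances run in (wait for the file system to
# become available first); IDs are placeholders.
for subnet_id in ["subnet-0aaa0aaa0aaa0aaa0", "subnet-0bbb0bbb0bbb0bbb0"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )

# Each Linux instance then mounts it, e.g.:
#   sudo mount -t nfs4 <fs-id>.efs.<region>.amazonaws.com:/ /mnt/shared
```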