More Flashcards That I Got Wrong
I need to automatically and repeatably create many member accounts within an Organization. The accounts also need to be moved into an OU and have VPCs and subnets created.
The best solution is a combination of scripts and AWS CloudFormation, leveraging the AWS Organizations API to create the accounts and move them into the OU. This combination can satisfy all of the requirements.
CORRECT: “Use CloudFormation with scripts” is the correct answer.
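As a minimal sketch of the per-account baseline, the script could deploy a CloudFormation template like the one below into each new member account. The resource names and CIDR ranges here are placeholder assumptions, not part of the question.

```python
import json

# Minimal CloudFormation template (as a Python dict) that the automation
# script could deploy into each new member account to create a VPC and subnet.
# Names and CIDR ranges below are placeholder assumptions.
baseline_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Baseline VPC and subnet for a new member account",
    "Resources": {
        "BaselineVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "BaselineSubnet": {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {"Ref": "BaselineVPC"},
                "CidrBlock": "10.0.1.0/24",
            },
        },
    },
}

print(json.dumps(baseline_template, indent=2))
```

The script side would call the Organizations API (CreateAccount, MoveAccount) and then deploy this template via CloudFormation in each new account.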
Store objects in S3 encrypted at rest. Manage the key rotation for me, but let me control access to the keys.
SSE-KMS requires that AWS manage the data key but you manage the customer master key (CMK) in AWS KMS. You can choose a customer-managed CMK or the AWS-managed CMK for Amazon S3 in your account.
For this scenario, the solutions architect should use SSE-KMS with a customer-managed CMK. That way KMS will manage the data key, but the company can configure key policies defining who can access the keys.
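A sketch of what this looks like in practice: the PutObject request parameters that select SSE-KMS with a customer-managed key, plus a key policy statement that controls access. The bucket name, account ID, key ARN, and role name are placeholder assumptions.

```python
import json

# Request parameters for an S3 PutObject call using SSE-KMS with a
# customer-managed key (all names/ARNs below are placeholders).
put_object_params = {
    "Bucket": "example-bucket",
    "Key": "report.csv",
    "Body": b"data",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}

# A key policy statement restricting who may use the key. With a
# customer-managed key the company controls this policy, while KMS still
# handles the data keys and can rotate the key material automatically.
key_policy_statement = {
    "Sid": "AllowUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/AppRole"},
    "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",
}

print(json.dumps(key_policy_statement, indent=2))
```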
I need a high-performance file storage that works natively with S3.
Amazon FSx for Lustre provides a high-performance file system optimized for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA). Amazon FSx works natively with Amazon S3, letting you transparently access your S3 objects as files on Amazon FSx to run analyses for hours to months.
EFS and EBS can NOT present S3 objects as files to an application.
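The S3 link is configured when the file system is created. A sketch of the request parameters for an FSx CreateFileSystem call, with placeholder bucket, subnet, and capacity values:

```python
# Parameters for an FSx CreateFileSystem call that links the Lustre file
# system to an S3 bucket, so the bucket's objects appear as files.
# Bucket name, subnet ID, and capacity are placeholder assumptions.
fsx_params = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 1200,  # GiB
    "SubnetIds": ["subnet-0example"],
    "LustreConfiguration": {
        "ImportPath": "s3://example-dataset-bucket",          # read S3 objects as files
        "ExportPath": "s3://example-dataset-bucket/results",  # write results back to S3
    },
}
```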
My microservices behind an API Gateway API are not designed to elastically scale when large increases in demand occur. What do I do?
The individual microservices are not designed to scale. Therefore, the best way to ensure they are not overwhelmed by requests is to decouple the requests from the microservices. An Amazon SQS queue can be created, and the API Gateway can be configured to add incoming requests to the queue. The microservices can then pick up the requests from the queue when they are ready to process them.
CORRECT: “Create an Amazon SQS queue to store incoming requests. Configure the microservices to retrieve the requests from the queue for processing” is the correct answer.
INCORRECT: “Use an Elastic Load Balancer to distribute the traffic between the microservices. Configure Amazon CloudWatch metrics to monitor traffic to the microservices” is incorrect. You cannot use an ELB to spread traffic across many different individual microservices as the requests must be directed to individual microservices. Therefore, you would need a target group per microservice, and you would need Auto Scaling to scale the microservices.
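The decoupling pattern can be illustrated with a local in-memory queue standing in for SQS (a simplification — in AWS, API Gateway integrates directly with the SQS SendMessage API):

```python
from queue import Queue

# Local stand-in for the SQS queue: the API layer enqueues requests during a
# burst; the microservice drains them at its own pace instead of being
# overwhelmed.
request_queue = Queue()

def api_gateway_handler(request: dict) -> None:
    # In AWS this would be an API Gateway -> SQS SendMessage integration.
    request_queue.put(request)

def microservice_worker(batch_size: int) -> list:
    # The microservice pulls only as many requests as it can handle.
    batch = []
    while not request_queue.empty() and len(batch) < batch_size:
        batch.append(request_queue.get())
    return batch

for i in range(10):                 # burst of 10 incoming requests
    api_gateway_handler({"id": i})

first_batch = microservice_worker(batch_size=3)
print(len(first_batch), request_queue.qsize())  # 3 processed, 7 buffered
```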
What compute service is suitable for a workload that can range broadly in demand from no requests to large bursts of traffic?
AWS Lambda is an ideal solution as you pay only when requests are made and it can easily scale to accommodate the large bursts in traffic. Lambda works well with both API Gateway and Amazon RDS.
I need a DR plan to failover my 2TB Aurora MySQL database to another region with an RTO of 10 minutes and RPO of 5 minutes.
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.
If your primary region suffers a performance degradation or outage, you can promote one of the secondary regions to take read/write responsibilities. An Aurora cluster can recover in less than 1 minute even in the event of a complete regional outage.
This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan.
CORRECT: “Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Use an Amazon EventBridge rule that invokes an AWS Lambda function to promote the DB cluster in us-west-2 when failure is detected” is the correct answer.
❗️INCORRECT: “Create a cross-Region Aurora MySQL read replica in us-west-2 Region. Configure an Amazon EventBridge rule that invokes an AWS Lambda function that promotes the read replica in us-west-2 when failure is detected” is incorrect. This may not meet the RTO objectives as large databases may well take more than 10 minutes to promote.
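A sketch of the two automated pieces: an EventBridge event pattern that matches RDS cluster events, and the parameters the Lambda function could pass to the RDS FailoverGlobalCluster API to promote the us-west-2 cluster. Cluster identifiers and ARNs are placeholder assumptions.

```python
# EventBridge event pattern matching RDS DB cluster events; a rule with this
# pattern would invoke the remediation Lambda function.
event_pattern = {
    "source": ["aws.rds"],
    "detail-type": ["RDS DB Cluster Event"],
}

def build_failover_request(global_cluster: str, secondary_arn: str) -> dict:
    # Parameters the Lambda would pass to the RDS FailoverGlobalCluster API
    # to promote the secondary DB cluster in us-west-2.
    return {
        "GlobalClusterIdentifier": global_cluster,
        "TargetDbClusterIdentifier": secondary_arn,
    }

req = build_failover_request(
    "example-global-db",
    "arn:aws:rds:us-west-2:111122223333:cluster:example-secondary",
)
```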
An application runs on Amazon EC2 Linux instances. The application generates log files which are written using standard API calls (REST). A storage solution is required that can be used to store the files indefinitely and must allow concurrent access to all files.
Which storage service meets these requirements and is the MOST cost-effective?
The application is writing the files using API calls which means it will be compatible with Amazon S3 which uses a REST API. S3 is a massively scalable key-based object store that is well-suited to allowing concurrent access to the files from many instances.
Amazon S3 will also be the most cost-effective choice.
❗️EFS is more expensive.
A company has an A1 VPC and a B2 VPC. The A1 VPC uses VPNs through a customer gateway to connect to a single device in an on-premises data center. The B2 VPC uses a virtual private gateway attached to two AWS Direct Connect (DX) connections. Both VPCs are connected using a single VPC peering connection.
How can a Solutions Architect improve this architecture to remove any single point of failure?
The only single point of failure in this architecture is the customer gateway device in the on-premises data center. A customer gateway device is the on-premises (client) side of the connection into the VPC. The customer gateway configuration is created within AWS, but the actual device is a physical or virtual device running in the on-premises data center. If this device is a single device, then if it fails the VPN connections will fail. The AWS side of the VPN link is the virtual private gateway, and this is a redundant device.
CORRECT: “Add additional VPNs to the A1 VPC from a ⭐️second customer gateway device⭐️” is the correct answer.
An organization is extending a secure development environment into AWS. They have already secured the VPC including removing the Internet Gateway and setting up a Direct Connect connection. What else needs to be done to add encryption?
A VPG is used to set up an AWS VPN which you can use in combination with Direct Connect to encrypt all data that traverses the Direct Connect link. This combination provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections.
CORRECT: “Setup a Virtual Private Gateway (VPG)” is the correct answer.
A systems administrator of a company wants to detect and remediate the compromise of services such as Amazon EC2 instances and Amazon S3 buckets.
Amazon GuardDuty gives you access to built-in detection techniques that are developed and optimized for the cloud. The detection algorithms are maintained and continuously improved upon by AWS Security. The primary detection categories include reconnaissance, instance compromise, account compromise, and bucket compromise.
Amazon GuardDuty offers HTTPS APIs, CLI tools, and Amazon CloudWatch Events to support automated security responses to security findings. For example, you can automate the response workflow by using CloudWatch Events as an event source to trigger an AWS Lambda function.
CORRECT: “Amazon GuardDuty” is the correct answer.
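As a sketch of that automated response workflow, an event pattern like the one below could drive a rule that invokes a remediation Lambda function. The severity threshold of 7 (high-severity findings) is an assumption for illustration.

```python
# EventBridge (CloudWatch Events) pattern matching high-severity GuardDuty
# findings; a rule with this pattern could target a remediation Lambda.
# The severity threshold (>= 7, i.e. high severity) is an assumption.
guardduty_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 7]}]},
}
```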
A company hosts data in an S3 bucket that users around the world download from their website using a URL that resolves to a domain name. The company needs to provide low latency access to users and plans to use Amazon Route 53 for hosting DNS records.
Which solution meets these requirements?
This is a simple requirement for low latency access to the contents of an Amazon S3 bucket for global users. The best solution here is to use Amazon CloudFront to cache the content in Edge Locations around the world. This involves creating a web distribution that points to an S3 origin (the bucket) and then creating an Alias record in Route 53 that resolves the application's URL to the CloudFront distribution endpoint.
CORRECT: “Create a web distribution on Amazon CloudFront pointing to an Amazon S3 origin. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application’s URL domain name” is the correct answer.
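A sketch of the Route 53 change batch for that alias record. The domain and distribution names are placeholders; `Z2FDTNDATAQYW2` is the fixed hosted zone ID Route 53 uses for all CloudFront distribution alias targets.

```python
# Route 53 ChangeResourceRecordSets change batch creating an alias A record
# that resolves the site's domain name to a CloudFront distribution.
# Domain and distribution names are placeholders; Z2FDTNDATAQYW2 is the
# fixed hosted zone ID for CloudFront alias targets.
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "downloads.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",
                "DNSName": "d111111abcdef8.cloudfront.net",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
```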
An application allows users to upload and download files. Files older than 2 years will be accessed less frequently. A solutions architect needs to ensure that the application can scale to any number of files while maintaining high availability and durability.
Which scalable solutions should the solutions architect recommend?
S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and a per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.
CORRECT: “Store the files on Amazon S3 with a lifecycle policy that moves objects older than 2 years to S3 Standard Infrequent Access (S3 Standard-IA)” is the correct answer.
INCORRECT: “Store the files on Amazon Elastic File System (EFS) with a lifecycle policy that moves objects older than 2 years to EFS Infrequent Access (EFS IA)” is incorrect. With EFS you can transition files to EFS IA after a file has not been accessed for a specified period of time with options up to 90 days. You cannot transition based on an age of 2 years.
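A sketch of the S3 lifecycle configuration for that policy, with the 2-year age expressed as 730 days (the rule ID and empty prefix are placeholder assumptions):

```python
# S3 lifecycle configuration transitioning all objects to Standard-IA once
# they are older than 2 years (expressed as 730 days). The rule ID and
# empty prefix (match everything) are placeholder assumptions.
lifecycle_config = {
    "Rules": [{
        "ID": "move-old-files-to-ia",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 730, "StorageClass": "STANDARD_IA"}],
    }]
}
```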
A development team needs to run up a few lab servers on a weekend for a new project. The servers will need to run uninterrupted for a few hours. Which EC2 pricing option would be most suitable?
On-Demand pricing ensures that instances will not be terminated and is the most economical option when you need uninterrupted compute for a short, ad-hoc period with no long-term commitment. Use On-Demand for ad-hoc requirements where you cannot tolerate interruption.
CORRECT: “On-Demand” is the correct answer.
A Solutions Architect is designing the system monitoring and deployment layers of a serverless application. The system monitoring layer will manage system visibility through recording logs and metrics and the deployment layer will deploy the application stack and manage workload changes through a release management process.
The Architect needs to select the most appropriate AWS services for these functions. Which services and frameworks should be used for the system monitoring and deployment layers? (choose 2)
AWS Serverless Application Model (AWS SAM) is an extension of AWS CloudFormation that is used to package, test, and deploy serverless applications.
With Amazon CloudWatch, you can access system metrics on all the AWS services you use, consolidate system and application level logs, and create business key performance indicators (KPIs) as custom metrics for your specific needs.
CORRECT: “Use AWS SAM to package, test, and deploy the serverless application stack” is a correct answer.
CORRECT: “Use Amazon CloudWatch for consolidating system and application logs and monitoring custom metrics” is also a correct answer.
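A minimal sketch of the deployment layer: the `Transform` line is what makes a CloudFormation template a SAM template, and a single serverless function resource shows the application stack being declared. The handler, runtime, and path values are placeholder assumptions.

```python
# Minimal AWS SAM template (as a dict). The Transform declaration turns a
# plain CloudFormation template into a SAM template; handler, runtime, and
# CodeUri values below are placeholder assumptions.
sam_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",
    "Resources": {
        "ApiFunction": {
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": "app.handler",
                "Runtime": "python3.12",
                "CodeUri": "src/",
            },
        },
    },
}
```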
A company has multiple Amazon VPCs that are peered with each other. The company would like to use a single Elastic Load Balancer (ELB) to route traffic to multiple EC2 instances in peered VPCs within the same region. How can this be achieved?
With the ALB and NLB, IP addresses can be used as targets to register:
- Instances in a peered VPC.
- AWS resources that are addressable by IP address and port.
- On-premises resources linked to AWS through Direct Connect or a VPN connection.
CORRECT: “This is possible using the Network Load Balancer (NLB) and Application Load Balancer (ALB) if using IP addresses as targets” is the correct answer.
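A sketch of the RegisterTargets request parameters for that setup, registering instances in the peered VPC by private IP address rather than instance ID. The target group ARN, IP addresses, and port are placeholder assumptions.

```python
# Parameters for an ELBv2 RegisterTargets call that registers instances in a
# peered VPC by private IP address (instance IDs only work in the load
# balancer's own VPC). ARN, addresses, and port are placeholder assumptions.
register_params = {
    "TargetGroupArn": (
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "targetgroup/peered-services/abc123"
    ),
    "Targets": [
        {"Id": "10.1.0.15", "Port": 8080},  # instance in the peered VPC
        {"Id": "10.1.0.16", "Port": 8080},
    ],
}
```

The target group itself must be created with `TargetType` set to `ip` for IP-address targets to be accepted.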