Review Mode Diagnostic Test – AWS Solutions Architect Associate Flashcards
A healthcare company uses its on-premises infrastructure to run legacy applications that require specialized customizations to the underlying Oracle database as well as its host operating system (OS). The company also wants to improve the availability of the Oracle database layer. The company has hired you as an AWS Certified Solutions Architect – Associate to build a solution on AWS that meets these requirements while minimizing the underlying infrastructure maintenance effort.
Which of the following options represents the best solution for this use case?
- Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
- Deploy the Oracle database layer on multiple Amazon EC2 instances spread across two Availability Zones (AZs). This deployment configuration guarantees high availability and also allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
- Leverage cross AZ read-replica configuration of Amazon RDS for Oracle that allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
- Leverage multi-AZ configuration of Amazon RDS for Oracle that allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
Correct answer
Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
Amazon RDS is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks. Amazon RDS can automatically back up your database and keep your database software up to date with the latest version. However, RDS does not allow you to access the host OS of the database.
For the given use case, you need to use Amazon RDS Custom for Oracle, as it allows you to access and customize your database server host and operating system, for example, to apply special patches or change database software settings in support of third-party applications that require privileged access. Amazon RDS Custom for Oracle provides these capabilities with minimal infrastructure maintenance effort. You then set up RDS Custom for Oracle in a multi-AZ configuration for high availability.
Incorrect options:
Leverage multi-AZ configuration of Amazon RDS for Oracle that allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
Leverage cross AZ read-replica configuration of Amazon RDS for Oracle that allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system
Amazon RDS for Oracle does not allow you to access and customize your database server host and operating system. Therefore, both these options are incorrect.
Deploy the Oracle database layer on multiple Amazon EC2 instances spread across two Availability Zones (AZs). This deployment configuration guarantees high availability and also allows the Database Administrator (DBA) to access and customize the database environment and the underlying operating system - The use case requires that the best solution should involve minimum infrastructure maintenance effort. When you use Amazon EC2 instances to host the databases, you need to manage the server health, server maintenance, server patching, and database maintenance tasks yourself. In addition, you will also need to manage the multi-AZ configuration by deploying Amazon EC2 instances across two Availability Zones (AZs), perhaps by using an Auto Scaling group. These steps entail significant maintenance effort. Hence this option is incorrect.
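For illustration, here is a minimal boto3 sketch of provisioning an RDS Custom for Oracle DB instance with Multi-AZ enabled. All names (the instance identifier, custom engine version, IAM instance profile, and KMS alias) are hypothetical, and RDS Custom has prerequisites not shown here, such as a pre-built custom engine version (CEV) and specific networking.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers throughout; RDS Custom requires a pre-built
# custom engine version (CEV) and an IAM instance profile.
rds.create_db_instance(
    DBInstanceIdentifier="legacy-oracle-db",
    Engine="custom-oracle-ee",            # RDS Custom for Oracle engine
    EngineVersion="19.my-cev-v1",         # hypothetical CEV name
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",      # prefer AWS Secrets Manager
    CustomIamInstanceProfile="AWSRDSCustomInstanceProfileRole",  # hypothetical
    KmsKeyId="alias/rds-custom-key",      # RDS Custom requires a customer managed key
    MultiAZ=True,                         # standby replica in a second AZ
)
```

Because the instance is RDS Custom, the DBA retains OS-level access for the required host and database customizations, while AWS still automates backups, monitoring, and failover.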
Volumes in Amazon Elastic Block Store (Amazon EBS) are automatically replicated within an Availability Zone.
True
False
True
What is required to enable internet access for instances in a public subnet?
- Associate an Elastic IP address with each instance in the public subnet
- Create a NAT gateway in the public subnet
- Configure a route table with a destination of 0.0.0.0/0 to the internet gateway
- Enable the auto-assign public IP feature on the public subnet
To enable internet access for instances in a public subnet, you need to:
Configure a route table with a destination of 0.0.0.0/0 pointing to the internet gateway.
This ensures that traffic destined for the internet is routed through the internet gateway, allowing your instances to communicate with the internet.
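A minimal boto3 sketch of that routing entry, with placeholder route table and internet gateway IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs for the public subnet's route table and the VPC's
# internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",   # all internet-bound traffic
    GatewayId="igw-0123456789abcdef0",
)
```

Note that an instance also needs a public or Elastic IP address to be reachable from the internet, but it is the route to the internet gateway that makes the subnet "public".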
A company seeks to rearchitect its subsystem to an event-driven design using AWS Lambda. However, the company has some hesitation because its workloads are all container-based Java services. The company wants to minimize cold starts and outlier latencies when serving requests in the most cost-effective manner.
Which would meet the requirements?
- Set up Lambda layers for dependencies.
- Configure response streaming for Lambda functions.
- Enable Lambda provisioned concurrency.
- Enable Lambda SnapStart.
- Enable Lambda SnapStart.
AWS Lambda is a compute service that lets you run code without provisioning or managing servers.
Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. With Lambda, all you need to do is supply your code in one of the language runtimes that Lambda supports.
With Lambda SnapStart for Java, Lambda initializes functions as new versions are published. Lambda then takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access.
In this scenario, resuming from SnapStart's cached, pre-initialized environment sidesteps most of the cold-start penalty that makes Java workloads slow to spin up.
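As a sketch, SnapStart is a per-function configuration that can be enabled with boto3; the function name is hypothetical, and because snapshots are taken at publish time, a new version must be published afterward:

```python
import boto3

lam = boto3.client("lambda")

# Hypothetical function name. SnapStart snapshots are created when a
# version is published, so enable it and then publish a version.
lam.update_function_configuration(
    FunctionName="order-service",
    SnapStart={"ApplyOn": "PublishedVersions"},
)
lam.publish_version(FunctionName="order-service")
```

Callers should then invoke the published version (or an alias pointing to it) rather than $LATEST, since SnapStart does not apply to $LATEST.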
What are the primary benefits of using Lambda layers, and why might they not significantly reduce startup times?
A) They improve function security.
B) They are used for build/space optimization and dependency reuse.
C) They increase cold start times.
D) They simplify function logic.
Correct Answer: B) They are used for build/space optimization and dependency reuse.
Explanation: Lambda layers help in organizing and reusing dependencies across multiple functions, which can save space and simplify deployment. However, they do not substantially reduce startup times, as their main purpose is not to optimize the initialization process but to manage dependencies more efficiently.
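For context, a hedged boto3 sketch of publishing and attaching a layer (the bucket, key, and function name are hypothetical):

```python
import boto3

lam = boto3.client("lambda")

# Hypothetical artifact location; the zip holds shared dependencies.
layer = lam.publish_layer_version(
    LayerName="shared-java-deps",
    Content={"S3Bucket": "my-artifacts-bucket", "S3Key": "deps-layer.zip"},
    CompatibleRuntimes=["java17"],
)

# Attaching the layer lets many functions reuse the same dependencies,
# but the JVM still initializes them at cold start, so startup time is
# largely unchanged.
lam.update_function_configuration(
    FunctionName="order-service",   # hypothetical
    Layers=[layer["LayerVersionArn"]],
)
```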
How does response streaming for Lambda functions affect time-to-first-byte latencies, and why might it be insufficient for Java runtimes?
A) It increases overall latencies.
B) It improves time-to-first-byte latencies but may not address startup time issues for Java runtimes.
C) It reduces cold start times.
D) It enhances function security.
Correct Answer: B) It improves time-to-first-byte latencies but may not address startup time issues for Java runtimes.
Explanation: Streaming responses can enhance the speed at which the first byte of data is sent to the client, improving perceived performance. However, for runtimes like Java, where the startup time is significantly long, streaming responses alone cannot mitigate the initial delay caused by the cold start.
Why might enabling Lambda provisioned concurrency not be cost-effective during periods of minimal invocations?
A) It increases cold start times.
B) It incurs overhead costs.
C) It reduces function performance.
D) It complicates deployment.
Correct Answer: B) It incurs overhead costs.
Explanation: While provisioned concurrency can reduce cold start times, it comes with additional costs. These costs may not be justified during periods when the function experiences minimal to zero invocations, making it an inefficient choice in such scenarios.
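A minimal boto3 sketch of the configuration in question (function name and alias are hypothetical); the key point is that the configured capacity is billed for as long as it exists, regardless of traffic:

```python
import boto3

lam = boto3.client("lambda")

# Keeps 10 execution environments initialized at all times; billed
# continuously, even across periods with zero invocations.
lam.put_provisioned_concurrency_config(
    FunctionName="order-service",   # hypothetical
    Qualifier="live",               # hypothetical alias or version
    ProvisionedConcurrentExecutions=10,
)
```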
A company uses Amazon EC2 instances, Amazon RDS, and Amazon S3 to run its application. During a recent review of its infrastructure costs, the company noticed unusual spending patterns.
The company wants to monitor usage costs and send alerts to the appropriate departments when there is unusual spending from their workload.
Which option will meet these requirements?
- Create a zero spend budget template in AWS Budgets.
- Use AWS Cost Explorer and enable multi-year data at monthly granularity.
- In the AWS Billing and Cost Management console, create a cost monitor using AWS Cost Anomaly Detection.
- Enable Amazon CloudWatch to monitor costs and detect unusual spending.
AWS Cost Anomaly Detection is an AWS Cost Management feature. This feature uses machine learning models to detect and alert on anomalous spend patterns in your deployed AWS services.
Using AWS Cost Anomaly Detection includes the following benefits:
– receive alerts individually or in aggregated reports, delivered by email message or through an Amazon SNS topic.
– evaluate spending patterns using machine learning methods to minimize false positive alerts. For example, you can evaluate weekly or monthly seasonality and natural growth.
– investigate the root cause of the anomaly, such as the AWS account, service, Region, or usage type that’s driving the cost increase.
– configure how to evaluate your costs. Choose whether you want to analyze all of your AWS services independently or analyze specific member accounts, cost allocation tags, or cost categories.
Hence, the correct answer is: In the AWS Billing and Cost Management console, create a cost monitor using AWS Cost Anomaly Detection.
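As an illustration, the same monitor and subscription can be created programmatically with boto3 (the topic ARN and the $100 impact threshold are hypothetical):

```python
import boto3

ce = boto3.client("ce")  # the Cost Explorer API hosts Cost Anomaly Detection

# Monitor each AWS service's spend independently.
monitor_arn = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "workload-cost-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)["MonitorArn"]

# Alert an SNS topic immediately when an anomaly's total impact
# reaches $100; departments subscribe to the topic.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "unusual-spend-alerts",
        "MonitorArnList": [monitor_arn],
        "Subscribers": [
            {"Type": "SNS",
             "Address": "arn:aws:sns:us-east-1:123456789012:cost-alerts"}
        ],
        "Frequency": "IMMEDIATE",
        "ThresholdExpression": {
            "Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                "Values": ["100"],
            }
        },
    }
)
```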
A consumer goods company runs its website, which handles online orders from customers, and its supporting microservices on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
A solution is required to route requests to the hosted services based on URL paths.
Which option will meet this requirement with the LEAST amount of setup?
- Provision an Application Load Balancer (ALB) using the AWS Load Balancer Controller.
- Use a Lambda function to proxy requests to Amazon EKS.
- Provision an NGINX Ingress controller.
- Provision a Network Load Balancer (NLB) using the AWS Load Balancer Controller.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that eliminates the need to install, operate, and maintain your own Kubernetes control plane on Amazon Web Services (AWS).
An Amazon EKS add-on is software that provides supporting operational capabilities to Kubernetes applications but is not specific to the application. This includes software like observability agents or Kubernetes drivers that allow the cluster to interact with underlying AWS resources for networking, compute, and storage. For instance, the community-initiated AWS Load Balancer Controller simplifies the provisioning and management of load balancing resources on AWS; it was initially conceived for Ingress-related load balancing (provisioning ALBs) and later expanded to cover Service-related load balancing, i.e., Layer 4 via the NLB.
AWS Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets, and routes traffic only to the healthy targets. Elastic Load Balancing scales your load balancer as your incoming traffic changes over time. It can automatically scale to the vast majority of workloads.
The ELB family of products includes:
– Application Load Balancer
– Network Load Balancer
– Gateway Load Balancer
In this scenario, the AWS Load Balancer Controller add-on provisions the ALB and related AWS resources directly from a standard Kubernetes Ingress, giving you URL-path-based routing with the least setup.
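As a sketch of the setup, assuming the AWS Load Balancer Controller is already installed on the cluster, an Ingress with `ingressClassName: alb` is all the controller needs to provision an ALB with path-based routing. The example below uses the official Kubernetes Python client; the Ingress name, service names, and paths are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
networking = client.NetworkingV1Api()

def path_rule(path: str, service: str, port: int) -> client.V1HTTPIngressPath:
    # Route a URL path prefix to a backing Kubernetes Service.
    return client.V1HTTPIngressPath(
        path=path,
        path_type="Prefix",
        backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name=service, port=client.V1ServiceBackendPort(number=port)
            )
        ),
    )

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="storefront",  # hypothetical
        annotations={"alb.ingress.kubernetes.io/scheme": "internet-facing"},
    ),
    spec=client.V1IngressSpec(
        ingress_class_name="alb",  # reconciled by the AWS Load Balancer Controller
        rules=[
            client.V1IngressRule(
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        path_rule("/orders", "orders-svc", 80),    # hypothetical
                        path_rule("/catalog", "catalog-svc", 80),  # hypothetical
                    ]
                )
            )
        ],
    ),
)
networking.create_namespaced_ingress(namespace="default", body=ingress)
```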
A company runs an internal application on AWS that uses EC2 instances for compute and Amazon RDS for PostgreSQL as its data store. Since the application only runs during working hours on weekdays, a solution is required to optimize costs with minimal operational overhead.
Which solution would satisfy these requirements?
- Deploy the CloudFormation template of the Instance Scheduler on AWS. Set up the start and stop schedules of the EC2 instance and RDS DB instance.
- Purchase a compute savings plan for EC2 and RDS.
- Create a CloudWatch alarm that triggers a Lambda function when CPU utilization falls below an idle threshold. In the function, implement logic for stopping both the EC2 instance and the RDS database.
- Purchase reserved instance subscriptions for EC2 and RDS
Deploy the CloudFormation template of the Instance Scheduler on AWS. Set up the start and stop schedules of the EC2 instance and RDS DB instance.
The important aspect in this scenario is the usage pattern, which doesn’t fit the continuous usage model assumed by Reserved Instance subscriptions or compute savings plans. It’s essential to understand that the Instance Scheduler is not an AWS service or feature per se, but a CloudFormation template provided by AWS. By deploying this template, you can simply set the desired start and stop schedules for your EC2 and RDS instances to match your application’s operating hours.
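Once the solution is deployed, resources opt in by tag. A hedged boto3 sketch follows (the instance ID, DB ARN, and schedule name are placeholders; the tag key defaults to `Schedule` but is configurable at deployment):

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# The Instance Scheduler matches resources by tag; the value names a
# schedule defined in the solution's configuration.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[{"Key": "Schedule", "Value": "weekday-office-hours"}],
)
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:123456789012:db:app-db",  # placeholder
    Tags=[{"Key": "Schedule", "Value": "weekday-office-hours"}],
)
```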
A company is refactoring an application’s architecture to leverage an AWS Lambda function and Amazon API Gateway. The application receives 5 KB JSON blobs, processes them, and stores the results in an Amazon Aurora database. The application only needs to acknowledge when it has accepted a request, not when it has processed it.
Initially, the application failed to meet acceptance criteria due to numerous throttling errors during data processing. To resolve this issue, the company had to raise the default Lambda concurrent executions quota several times.
How can a solutions architect improve the scalability of the infrastructure to reduce throttling errors?
- Rewrite the Lambda function code into 2 functions – one function to receive the information and another function to persist the information into the database. Use Amazon Simple Queue Service (Amazon SQS) to signal available work to the second Lambda function.
- Rewrite the Lambda function code into 2 functions – one function to receive the information and another function to persist the information into the database. Use Amazon Simple Notification Service (Amazon SNS) to signal available work to the second Lambda function.
- Migrate the data store from Aurora to Amazon DynamoDB. Set up a DynamoDB Accelerator (DAX) cluster between the application and the DynamoDB database. Configure the application to route DynamoDB requests to the DAX cluster using the DAX client SDK.
- Rewrite the Lambda function code into a Go application that runs on an EC2 instance. Use native Go drivers for database connection.
- Rewrite the Lambda function code into 2 functions – one function to receive the information and another function to persist the information into the database. Use Amazon Simple Queue Service (Amazon SQS) to signal available work to the second Lambda function.
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers common constructs such as dead-letter queues and cost allocation tags.
Amazon SQS provides a generic web services API that you can access using any programming language supported by the AWS SDK. A single subscriber typically processes messages in the queue. This does not necessarily mean a single consumer consuming the whole queue serially. For example, there could be concurrent executions of an AWS Lambda function, each consuming a different queue item. The important thing is that queue items are usually consumed only once. In some use cases, several subscribers need to act on the same item; Amazon SQS and Amazon SNS are often used together to create a fanout messaging application in such scenarios.
Amazon SNS is a publish-subscribe service that provides message delivery from publishers (also known as producers) to multiple subscriber endpoints (also known as consumers). Publishers communicate asynchronously with subscribers by sending messages to a topic, which is a logical access point and communication channel. Subscribers can subscribe to an Amazon SNS topic and receive published messages using a supported endpoint type, such as Amazon Data Firehose, Amazon SQS, Lambda, HTTP, email, mobile push notifications, and mobile text messages (SMS). Amazon SNS acts as a message router and delivers messages to subscribers in real time. If a subscriber is not available at the time of message publication, the message is not stored for later retrieval.
In this scenario, we can split the application’s logic into two Lambda functions: one for receiving requests and another for processing them, with Amazon SQS serving as a bridge between them. This setup allows SQS to act as a buffer, holding incoming data when the processing function is busy. It effectively manages traffic spikes, ensuring no data is lost during periods of high demand. You can create an event source mapping for the SQS queue to invoke the processing Lambda function. If the function fails due to throttling, Lambda will implement a back-off strategy to retry the function, reducing the occurrence of throttling errors.
Hence, the correct answer is Rewrite the Lambda function code into 2 functions – one function to receive the information and another function to persist the information into the database. Use Amazon Simple Queue Service (Amazon SQS) to signal available work to the second Lambda function.
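A minimal Python sketch of the two functions, assuming an API Gateway proxy integration in front of the first and an SQS event source mapping on the second (the queue URL and persistence helper are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"  # hypothetical

def receive_handler(event, context):
    # Front function behind API Gateway: enqueue the JSON blob and
    # acknowledge immediately, without waiting for processing.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}

def process_handler(event, context):
    # Second function, invoked by the SQS event source mapping in
    # batches; failed batches are retried by Lambda/SQS.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        save_to_aurora(payload)

def save_to_aurora(payload):
    # Hypothetical persistence helper; writes the processed result to
    # the Aurora database (connection handling omitted).
    ...
```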
A company operates a three-tier application with all its components hosted on AWS, except for the data layer. This data layer consists of a MySQL-compatible database located on-premises, connected to AWS through a Site-to-Site VPN. The database’s memory consumption ranges from 2 to 16 GiB.
Meanwhile, the application experiences unpredictable traffic patterns, including spikes and periods of zero activity, which translate into an irregular load on the database. The company wants to replace the on-premises database with a managed service that can automatically adapt to varying demands.
Which option will meet these requirements?
- Configure an Amazon Aurora Serverless v2 database with a minimum capacity of 1 and a maximum of 8 Aurora capacity units (ACUs).
- Set up an Amazon Aurora database with a memory-optimized DB instance class type.
- Set up an Amazon RDS for MySQL database with 4 GB of memory.
- Set up an Amazon DynamoDB table with auto-scaling enabled. Configure the table with a minimum of 2 and a maximum of 16 capacity units.
- Configure an Amazon Aurora Serverless v2 database with a minimum capacity of 1 and a maximum of 8 Aurora capacity units (ACUs).
Amazon Aurora is a relational database service that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora is fully compatible with MySQL and PostgreSQL, allowing existing applications and tools to run without requiring modification.
With Aurora Serverless v2, you can mix and match provisioned writer/reader instances with serverless writers/readers to suit your workloads. For example, for a read-heavy but consistent workload with erratic writes, the cluster can be configured with provisioned reader instances and a serverless writer.
In this scenario, due to the unpredictable traffic pattern and the fact that only the range of memory consumption is known, it is best to simply allocate the corresponding number of ACUs and let Aurora Serverless handle the scaling up/down. The unit of measure for Aurora Serverless v2 is the Aurora capacity unit (ACU). Each ACU includes about 2 gibibytes (GiB) of memory, along with the necessary CPU and networking. By setting a range with a minimum of 1 ACU and a maximum of 8 ACUs, you can ensure that the database scales efficiently to meet demand, from minimal to peak usage.
Hence, the correct answer is: Configure an Amazon Aurora Serverless v2 database with a minimum capacity of 1 and a maximum of 8 Aurora capacity units (ACUs).
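For illustration, a hedged boto3 sketch of the cluster configuration (identifiers and credentials are placeholders):

```python
import boto3

rds = boto3.client("rds")

# The ACU range bounds scaling: 1 ACU (~2 GiB of memory) at idle,
# up to 8 ACUs (~16 GiB) at peak.
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # prefer AWS Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 1, "MaxCapacity": 8},
)

# Aurora Serverless v2 instances use the special "db.serverless" class.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-writer",
    DBClusterIdentifier="app-aurora",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)
```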
A company needs to limit the visibility and discoverability of workloads across AWS accounts with differing administrative requirements. Using AWS Organizations, a solutions architect and account stakeholders grouped the accounts into organizational units (OUs).
A solution is required that will monitor changes to the OU hierarchy and allow stakeholders to subscribe to related alerts.
Which option meets these requirements with the LEAST administrative overhead?
- Use AWS Control Tower to provision the AWS accounts. Set up AWS Config aggregated rules.
- Use AWS Service Catalog to create accounts in Organizations. Configure an AWS CloudTrail organization trail to consolidate changes across accounts.
- Use AWS Control Tower to provision the AWS accounts. Enable account drift notifications.
- Use AWS CloudFormation StackSets to create resources across Organizations. Initiate the drift detection operation on a stack to identify changes.
- Use AWS Control Tower to provision the AWS accounts. Enable account drift notifications.
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business.
AWS Control Tower is a high-level service offering a straightforward way to set up and govern an AWS multi-account environment, following prescriptive best practices. AWS Control Tower orchestrates the capabilities of several other AWS services, including AWS Organizations, AWS Service Catalog, and AWS IAM Identity Center, to build a landing zone in less than an hour.
In this scenario, the mechanisms for managing account drift are already built into AWS Control Tower. The approach requires no further engineering setup and avoids fragmenting administration across yet another service. It directly supports monitoring changes to the OU hierarchy, such as accounts being added or removed, and lets stakeholders easily subscribe to alerts on these events.
Hence, the correct answer is: Use AWS Control Tower to provision the AWS accounts. Enable account drift notifications.
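Stakeholders can subscribe to those alerts with a plain SNS subscription. A sketch, assuming the conventional Control Tower notification topic in the audit account (the account ID, Region, topic name, and email address below are placeholders to verify in your own landing zone):

```python
import boto3

sns = boto3.client("sns")

# Placeholder ARN; Control Tower publishes drift and compliance
# notifications to SNS topics it creates in the audit account.
TOPIC_ARN = (
    "arn:aws:sns:us-east-1:123456789012:"
    "aws-controltower-AggregateSecurityNotifications"
)

sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="email",
    Endpoint="platform-team@example.com",  # hypothetical stakeholder address
)
```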