S.A.A. Test 1 Questions Flashcards
Retention period rules for objects stored in S3?
Retention period rules for objects stored in S3:
- Different versions of a single object can have different retention modes and periods
- When you explicitly apply a retention period to an object version, you specify a Retain Until Date for that object version (see the API sketch after this list)
- When you use bucket default settings, you don’t specify a Retain Until Date. Instead, you specify a duration, in either days or years, for which every object version placed in the bucket should be protected.
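A minimal boto3 sketch of both cases above, assuming hypothetical bucket, key, and version-id values and that S3 Object Lock is already enabled on the bucket:

```
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Explicit retention: a Retain Until Date is set on a specific object version.
s3.put_object_retention(
    Bucket="example-compliance-bucket",       # placeholder bucket name
    Key="reports/2023/q1.pdf",                # placeholder key
    VersionId="example-version-id",           # placeholder version id
    Retention={
        "Mode": "GOVERNANCE",                 # or "COMPLIANCE"
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)

# Bucket default settings: a duration in days (or years), not a Retain Until Date.
s3.put_object_lock_configuration(
    Bucket="example-compliance-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 365}},
    },
)
```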
The flagship application for a gaming company connects to an Amazon Aurora database and the entire technology stack is currently deployed in the United States. Now, the company has plans to expand to Europe and Asia for its operations. It needs the games table to be accessible globally but needs the users and games_played tables to be regional only.
How would you implement this with minimal application refactoring?
Explanation
Correct option:
Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for the users and games_played tables
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance. Aurora is not an in-memory database.
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages. Amazon Aurora Global Database is the correct choice for the given use-case.
For the given use-case, we, therefore, need to have two Aurora clusters, one for the global table (games table) and the other one for the local tables (users and games_played tables).
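To make that two-cluster layout concrete, here is a minimal boto3 sketch, assuming hypothetical cluster identifiers, Regions (primary in us-east-1, secondary in eu-west-1), and credentials:

```
import boto3

rds_us = boto3.client("rds", region_name="us-east-1")
rds_eu = boto3.client("rds", region_name="eu-west-1")

# Global cluster for the games table.
rds_us.create_global_cluster(
    GlobalClusterIdentifier="games-global",   # placeholder
    Engine="aurora-mysql",
)
rds_us.create_db_cluster(
    DBClusterIdentifier="games-primary",      # writable primary cluster
    Engine="aurora-mysql",
    GlobalClusterIdentifier="games-global",
    MasterUsername="admin",
    MasterUserPassword="change-me",           # placeholder credentials
)
rds_eu.create_db_cluster(
    DBClusterIdentifier="games-secondary",    # read-only secondary cluster in Europe
    Engine="aurora-mysql",
    GlobalClusterIdentifier="games-global",
)

# Separate regional Aurora cluster for the users and games_played tables.
rds_us.create_db_cluster(
    DBClusterIdentifier="regional-tables-us", # placeholder
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",
)
```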
An IT company wants to move all the compute components of its AWS Cloud infrastructure to a serverless architecture. Its development stack comprises a mix of backend programming languages, and the company would like to explore the support offered by AWS Lambda runtimes for this stack.
Explanation
Correct options:
C#/.NET
Go
A runtime is a version of a programming language or framework that you can use to write Lambda functions. AWS Lambda supports runtimes for the following languages:
C#/.NET
Go
Java
Node.js
Python
Ruby
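The runtime is selected per function when the function is created. A minimal boto3 sketch, assuming a hypothetical function name, execution role ARN, and deployment package; the runtime identifier shown here ("python3.12") would be swapped for the identifier matching the team's language:

```
import boto3

lambda_client = boto3.client("lambda")

with open("handler.zip", "rb") as f:          # placeholder deployment package
    code_bytes = f.read()

lambda_client.create_function(
    FunctionName="orders-processor",          # placeholder function name
    Runtime="python3.12",                     # runtime identifier selects the language version
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder role ARN
    Handler="app.handler",
    Code={"ZipFile": code_bytes},
)
```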
A large IT company wants to federate its workforce into AWS accounts and business applications.
Explanation
Correct options:
Use AWS Single Sign-On (SSO)
Use AWS Identity and Access Management (IAM)
Identity federation is a system of trust between two parties for the purpose of authenticating users and conveying the information needed to authorize their access to resources. In this system, an identity provider (IdP) is responsible for user authentication, and a service provider (SP), such as a service or an application, controls access to resources. By administrative agreement and configuration, the SP trusts the IdP to authenticate users and relies on the information provided by the IdP about them. After authenticating a user, the IdP sends the SP a message, called an assertion, containing the user’s sign-in name and other attributes that the SP needs to establish a session with the user and to determine the scope of resource access that the SP should grant. Federation is a common approach to building access control systems that manage users centrally within a central IdP and govern their access to multiple applications and services acting as SPs.
You can use two AWS services to federate your workforce into AWS accounts and business applications: AWS Single Sign-On (SSO) or AWS Identity and Access Management (IAM). AWS SSO is a great choice to help you define federated access permissions for your users based on their group memberships in a single centralized directory. If you use multiple directories or want to manage the permissions based on user attributes, consider AWS IAM as your design alternative.
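As an illustration of the assertion flow described above, here is a minimal boto3 sketch of exchanging an IdP-issued SAML assertion for temporary AWS credentials through AWS STS; the role ARN, identity-provider ARN, and assertion value are all placeholders:

```
import boto3

sts = boto3.client("sts")

saml_assertion_b64 = "<base64-encoded-assertion-from-idp>"  # produced by the IdP after sign-in

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/federated-developer",     # placeholder
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/corp-idp",  # placeholder
    SAMLAssertion=saml_assertion_b64,
    DurationSeconds=3600,
)
temporary_credentials = response["Credentials"]  # short-lived keys for the federated session
```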
The development team at an e-commerce startup has set up multiple microservices running on EC2 instances under an Application Load Balancer. The team wants to route traffic to multiple back-end services based on the URL path of the HTTP header. So it wants requests for https://www.example.com/orders to go to a specific microservice and requests for https://www.example.com/products to go to another microservice.
Which of the following features of Application Load Balancers can be used for this use-case?
Explanation
Correct option:
Path-based Routing
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
If your application is composed of several individual services, an Application Load Balancer can route a request to a service based on the content of the request. Here are the different types -
Host-based Routing:
You can route a client request based on the Host field of the HTTP header allowing you to route to multiple domains from the same load balancer.
Path-based Routing:
You can route a client request based on the URL path of the HTTP header.
HTTP header-based routing:
You can route a client request based on the value of any standard or custom HTTP header.
HTTP method-based routing:
You can route a client request based on any standard or custom HTTP method.
Query string parameter-based routing:
You can route a client request based on the query string or query parameters.
Source IP address CIDR-based routing:
You can route a client request based on source IP address CIDR from where the request originates.
Path-based Routing Overview:
You can use path conditions to define rules that route requests based on the URL in the request (also known as path-based routing).
The path pattern is applied only to the path of the URL, not to its query parameters. via - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#path-conditions
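A minimal boto3 sketch of the rules described in the use-case, assuming hypothetical listener and target group ARNs; /orders* and /products* requests are forwarded to different microservices:

```
import boto3

elbv2 = boto3.client("elbv2")

listener_arn = "arn:aws:elasticloadbalancing:...:listener/app/example/..."  # placeholder
orders_tg = "arn:aws:elasticloadbalancing:...:targetgroup/orders/..."       # placeholder
products_tg = "arn:aws:elasticloadbalancing:...:targetgroup/products/..."   # placeholder

elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": orders_tg}],
)
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/products*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": products_tg}],
)
```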
Incorrect options:
Query string parameter-based routing
HTTP header-based routing
Host-based Routing
As mentioned earlier in the explanation, none of these three routing types route requests based on the URL path of the HTTP header, hence these three options are incorrect.
Question 14: Incorrect
A gaming company is looking at improving the availability and performance of its global flagship application which utilizes UDP protocol and needs to support fast regional failover in case an AWS Region goes down. The company wants to continue using its own custom DNS service.
Which of the following AWS services represents the best solution for this use-case?
Explanation
Correct option:
AWS Global Accelerator - AWS Global Accelerator utilizes the Amazon global network, allowing you to improve the performance of your applications by lowering first-byte latency (the round trip time for a packet to go from a client to your endpoint and back again) and jitter (the variation of latency), and increasing throughput (the amount of data transferred per second) as compared to the public internet.
Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.
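A minimal boto3 sketch, assuming a hypothetical accelerator name and game port range; the static anycast IPs that the accelerator returns can then be published through the company's own DNS service:

```
import boto3

# The Global Accelerator API is a global service; its API endpoint is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="game-traffic-accelerator",   # placeholder
    IpAddressType="IPV4",
    Enabled=True,
)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

ga.create_listener(
    AcceleratorArn=accelerator_arn,
    Protocol="UDP",                                   # non-HTTP gaming traffic
    PortRanges=[{"FromPort": 4000, "ToPort": 4100}],  # placeholder game ports
)
# Regional endpoint groups would then be added with create_endpoint_group to get
# fast regional failover across the Regions where the application is deployed.
```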
Question 15: Incorrect
The engineering team at a data analytics company has observed that its flagship application functions at its peak performance when the underlying EC2 instances have a CPU utilization of about 50%. The application is built on a fleet of EC2 instances managed under an Auto Scaling group. The workflow requests are handled by an internal Application Load Balancer that routes the requests to the instances.
As a solutions architect, what would you recommend so that the application runs near its peak performance state?
Explanation
Correct option:
Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%
An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value.
For example, you can use target tracking scaling to:
Configure a target tracking scaling policy to keep the average aggregate CPU utilization of your Auto Scaling group at 50 percent. This meets the requirements specified in the given use-case and therefore, this is the correct option.
Target Tracking Policy Overview: via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
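A minimal boto3 sketch of this policy, assuming a hypothetical Auto Scaling group name; it keeps the group's average CPU utilization close to the 50% target:

```
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="analytics-app-asg",   # placeholder
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```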
Incorrect options:
Configure the Auto Scaling group to use step scaling policy and set the CPU utilization as the target metric with a target value of 50%
Configure the Auto Scaling group to use simple scaling policy and set the CPU utilization as the target metric with a target value of 50%
With step scaling and simple scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. Neither step scaling nor simple scaling can be configured to use a target metric for CPU utilization, hence both these options are incorrect.
Configure the Auto Scaling group to use a CloudWatch alarm triggered on a CPU utilization threshold of 50% - An Auto Scaling group cannot directly use a CloudWatch alarm as the source for a scale-in or scale-out event, hence this option is incorrect.
A developer has created a new Application Load Balancer but has not registered any targets with the target groups. Which of the following errors would be generated by the Load Balancer?
Explanation
Correct option:
HTTP 503: Service unavailable
The Load Balancer generates the HTTP 503: Service unavailable error when the target groups for the load balancer have no registered targets.
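Registering targets with the target group is what resolves this error. A minimal boto3 sketch, assuming a hypothetical target group ARN and EC2 instance IDs:

```
import boto3

elbv2 = boto3.client("elbv2")

elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/web/...",  # placeholder
    Targets=[
        {"Id": "i-0123456789abcdef0"},   # placeholder EC2 instance IDs
        {"Id": "i-0fedcba9876543210"},
    ],
)
```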
Incorrect options:
HTTP 500: Internal server error
HTTP 502: Bad gateway
HTTP 504: Gateway timeout (seen, for example, when a network ACL blocks traffic to the targets or a Lambda function target times out)
Question 18: Correct
A financial services company recently launched an initiative to improve the security of its AWS resources and it had enabled AWS Shield Advanced across multiple AWS accounts owned by the company. Upon analysis, the company has found that the costs incurred are much higher than expected.
Which of the following would you attribute as the underlying reason for the unexpectedly high costs for AWS Shield Advanced service?
Explanation
Correct option:
Consolidated billing has not been enabled. All the AWS accounts should fall under a single consolidated billing for the monthly fee to be charged only once - If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS Shield Advanced by individually enabling it on each account using the AWS Management Console or API. You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing, and you own all the AWS accounts and resources in those accounts.
Incorrect options:
AWS Shield Advanced is being used for custom servers that are not part of the AWS Cloud, thereby resulting in increased costs - AWS Shield Advanced does offer protection to resources outside of AWS. This should not cause an unexpected spike in billing costs.
AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs - AWS Shield Standard is automatically enabled for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service.
Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts - This option has been added as a distractor. Savings Plans is a flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. Savings Plans is not applicable for the AWS Shield Advanced service.
Question 21: Incorrect
A retail company uses Amazon EC2 instances, API Gateway, Amazon RDS, Elastic Load Balancer and CloudFront services. To improve the security of these services, the Risk Advisory group has suggested a feasibility check for using the Amazon GuardDuty service.
Which of the following would you identify as data sources supported by GuardDuty?
Explanation
Correct option:
VPC Flow Logs, DNS logs, CloudTrail events - Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS. The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain. By integrating with Amazon CloudWatch Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event management and workflow systems.
How GuardDuty works: via - https://aws.amazon.com/guardduty/
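For reference, a minimal boto3 sketch of enabling GuardDuty for an account; no agents are needed because the service reads CloudTrail events, VPC Flow Logs, and DNS logs directly:

```
import boto3

guardduty = boto3.client("guardduty")

detector = guardduty.create_detector(Enable=True)
detector_id = detector["DetectorId"]

# Findings are then listed per detector.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
```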
Incorrect options:
VPC Flow Logs, API Gateway logs, S3 access logs
ELB logs, DNS logs, CloudTrail events
CloudFront logs, API Gateway logs, CloudTrail events
These three options contradict the explanation provided above, so these options are incorrect.
Question 22: Incorrect
A solutions architect has created a new Application Load Balancer and has configured a target group with IP address as a target type.
Which of the following types of IP addresses is allowed as a valid value for this target type?
Explanation
Correct option:
Private IP address
When you create a target group, you specify its target type, which can be an instance, an IP address, or a Lambda function.
For IP address target type, you can route traffic using any private IP address from one or more network interfaces.
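A minimal boto3 sketch, assuming a hypothetical target group name, VPC ID, and address; the target type is fixed at creation time and the registered targets must be private IP addresses:

```
import boto3

elbv2 = boto3.client("elbv2")

target_group = elbv2.create_target_group(
    Name="ip-based-targets",        # placeholder
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder
    TargetType="ip",                # instance | ip | lambda
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "10.0.1.25", "Port": 80}],  # private IP from a network interface
)
```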
Incorrect options:
Public IP address
Elastic IP address
You can’t specify publicly routable IP addresses as values for IP target type, so both these options are incorrect.
Dynamic IP address - A dynamic IP address is not a valid target type value for an Application Load Balancer target group. This option has been added as a distractor.
Question 24: Incorrect
A healthcare startup needs to enforce compliance and regulatory guidelines for objects stored in Amazon S3. One of the key requirements is to provide adequate protection against accidental deletion of objects.
As a solutions architect, what are your recommendations to address these guidelines? (Select two)
Explanation
Correct options:
Enable versioning on the bucket - Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite.
For example:
If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version. If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version. You can always restore the previous version. Hence, this is the correct option.
Enable MFA delete on the bucket - To provide additional protection, multi-factor authentication (MFA) delete can be enabled. MFA delete requires secondary authentication to take place before objects can be permanently deleted from an Amazon S3 bucket. Hence, this is the correct option.
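A minimal boto3 sketch of enabling both protections, assuming a hypothetical bucket and MFA device; MFA delete can only be enabled by the bucket owner's root credentials, and the MFA argument is the device serial (or ARN) followed by the current token:

```
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="patient-records-bucket",   # placeholder
    VersioningConfiguration={
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # placeholder
)
```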
Question 25: Incorrect
A financial services company uses Amazon GuardDuty for analyzing its AWS account metadata to meet the compliance guidelines. However, the company has now decided to stop using GuardDuty service. All the existing findings have to be deleted and cannot persist anywhere on AWS Cloud.
Which of the following techniques will help the company meet this requirement?
Explanation
Correct option:
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately.
Disable the service in the general settings - Disabling the service will delete all remaining data, including your findings and configurations before relinquishing the service permissions and resetting the service. So, this is the correct option for our use case.
Incorrect options:
Suspend the service in the general settings - You can stop Amazon GuardDuty from analyzing your data sources at any time by choosing to suspend the service in the general settings. This will immediately stop the service from analyzing data, but does not delete your existing findings or configurations.
De-register the service under services tab - This is a made-up option, used only as a distractor.
Raise a service request with Amazon to completely delete the data from all their backups - There is no need to create a service request as you can delete the existing findings by disabling the service.
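A minimal boto3 sketch of the difference between the two settings, assuming the account has a single existing detector; suspending keeps findings, while deleting the detector (disabling the service) removes findings and configuration:

```
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Suspend: stops analysis of data sources but keeps existing findings.
guardduty.update_detector(DetectorId=detector_id, Enable=False)

# Disable: deletes the detector along with its findings and configuration.
guardduty.delete_detector(DetectorId=detector_id)
```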
Question 27: Incorrect
A Big Data analytics company wants to set up an AWS cloud architecture that throttles requests in case of sudden traffic spikes. The company is looking for AWS services that can be used for buffering or throttling to handle such traffic variations.
Which of the following services can be used to support this requirement?
Explanation
Correct option:
Throttling is the process of limiting the number of requests an authorized program can submit to a given operation in a given amount of time.
Amazon API Gateway, Amazon SQS and Amazon Kinesis - To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API using the token bucket algorithm, where a token counts for a request. Specifically, API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your account. In the token bucket algorithm, the burst is the maximum bucket size.
Amazon SQS - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers buffer capabilities to smooth out temporary volume spikes without losing messages or increasing latency.
Amazon Kinesis - Amazon Kinesis is a fully managed, scalable service that can ingest, buffer, and process streaming data in real-time.
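As an illustration of the buffering behavior, a minimal boto3 sketch using SQS with a hypothetical queue name; producers absorb a traffic spike by enqueuing work, and consumers drain the queue at their own pace:

```
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="ingest-buffer")["QueueUrl"]   # placeholder name

# Producer side: write bursts of requests to the queue instead of hitting the backend directly.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"event": "click", "user": 42}')

# Consumer side: poll and process at a steady rate, then delete processed messages.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for message in messages.get("Messages", []):
    # ... process the payload ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```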
Incorrect options:
Amazon SQS, Amazon SNS and AWS Lambda - Amazon SQS has the ability to buffer its messages. Amazon Simple Notification Service (SNS) cannot buffer messages and is generally used with SQS to provide the buffering facility. When requests come in faster than your Lambda function can scale, or when your function is at maximum concurrency, additional requests fail, as Lambda throttles them with an HTTP 429 (Too Many Requests) status code. So, this combination of services is incorrect.
Amazon Gateway Endpoints, Amazon SQS and Amazon Kinesis - A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. This cannot help in throttling or buffering of requests. Amazon SQS and Kinesis can buffer incoming data. Since Gateway Endpoint is an incorrect service for throttling or buffering, this option is incorrect.
Elastic Load Balancer, Amazon SQS, AWS Lambda - Elastic Load Balancer cannot throttle requests. Amazon SQS can be used to buffer messages. When requests come in faster than your Lambda function can scale, or when your function is at maximum concurrency, additional requests fail, as Lambda throttles them with an HTTP 429 (Too Many Requests) status code. So, this combination of services is incorrect.