Udemy Exam 4 Flashcards
The engineering team at a leading e-commerce company is anticipating a surge in traffic because of a flash sale planned for the weekend. You have estimated the web traffic to be 10x the normal load. The content of your website is highly dynamic and changes very often.
As a Solutions Architect, which of the following options would you recommend to make sure your infrastructure scales for that day?
Use an Auto Scaling Group
- An Auto Scaling group (ASG) contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.
- An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
- Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service.
- The size of an Auto Scaling group depends on the number of instances that you set as the desired capacity.
- You can adjust its size to meet demand, either manually or by using automatic scaling.
- An Auto Scaling group starts by launching enough instances to meet its desired capacity.
- It maintains this number of instances by performing periodic health checks on the instances in the group.
- The Auto Scaling group continues to maintain a fixed number of instances even if an instance becomes unhealthy.
- If an instance becomes unhealthy, the group terminates the unhealthy instance and launches another instance to replace it.
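As a rough sketch of these pieces in practice, the boto3 calls below create an ASG with a desired capacity, ELB health checks, and a target-tracking scaling policy; the group name, launch template, and subnet IDs are placeholders, not values from the question.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical launch template and subnet IDs, for illustration only.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="flash-sale-asg",
    LaunchTemplate={"LaunchTemplateName": "web-tier-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=20,          # headroom for the anticipated 10x traffic
    DesiredCapacity=2,   # the group starts by launching this many instances
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # spread across AZs
    HealthCheckType="ELB",        # replace instances the load balancer marks unhealthy
    HealthCheckGracePeriod=300,
)

# A target-tracking policy adjusts the desired capacity automatically with load.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="flash-sale-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```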
You have developed a new REST API leveraging the API Gateway, AWS Lambda and Aurora database services. Most of the workload on the website is read-heavy. The data rarely changes and it is acceptable to serve users outdated data for about 24 hours. Recently, the website has been experiencing high load and the costs incurred on the Aurora database have been very high.
How can you easily reduce the costs while improving performance, with minimal changes?
Enable API Gateway Caching
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services.
Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications.
API Gateway supports containerized and serverless workloads, as well as web applications.
- You can enable API caching in Amazon API Gateway to cache your endpoint’s responses.
- With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.
- When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds.
- API Gateway then responds to the request by looking up the endpoint response from the cache instead of requesting your endpoint.
- The default TTL value for API caching is 300 seconds.
- The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.
- Using the API Gateway caching feature is the answer for this use case, as stale data is acceptable for about 24 hours.
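As a minimal sketch, caching for a stage can be enabled with boto3 roughly as follows; the REST API ID and stage name are placeholders, and the cache size shown is the smallest available.

```python
import boto3

apigateway = boto3.client("apigateway")

# Hypothetical REST API ID and stage name.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        # Provision a cache cluster for the stage.
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # in GB
        # Enable caching on all methods and set the TTL (3600 s is the maximum).
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "3600"},
    ],
)
```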
A healthcare company is evaluating storage options on Amazon S3 to meet regulatory guidelines. The data should be stored in such a way on S3 that it cannot be deleted until the regulatory time period has expired.
As a solutions architect, which of the following would you recommend for the given requirement?
Use S3 Object Lock
- Amazon S3 Object Lock is an Amazon S3 feature that allows you to store objects using a write once, read many (WORM) model.
- You can use WORM protection for scenarios where it is imperative that data is not changed or deleted after it has been written.
- Whether your business has a requirement to satisfy compliance regulations in the financial or healthcare sector, or you simply want to capture a golden copy of business records for later auditing and reconciliation, S3 Object Lock is the right tool for you.
- Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
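A minimal sketch with boto3, assuming a new bucket (Object Lock must be turned on at creation) and a hypothetical 7-year compliance retention period:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled when the bucket is created; the name is a placeholder.
s3.create_bucket(Bucket="records-bucket", ObjectLockEnabledForBucket=True)

# Default retention in COMPLIANCE mode: no user, including the root user, can
# delete or overwrite locked object versions until the retention period expires.
s3.put_object_lock_configuration(
    Bucket="records-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```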
As a Solutions Architect, you are tasked to design a distributed application that will run on various EC2 instances. This application needs to have the highest performance local disk to cache data. Also, data is copied through an EC2 to EC2 replication mechanism. It is acceptable if the instance loses its data when stopped or terminated.
Which storage solution do you recommend?
Instance Store
- An instance store provides temporary block-level storage for your instance.
- This storage is located on disks that are physically attached to the host computer.
- Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
- Instance store volumes are included as part of the instance’s usage cost.
- Some instance types use NVMe or SATA-based solid-state drives (SSD) to deliver high random I/O performance.
- This is a good option when you need storage with very low latency, but you don’t need the data to persist when the instance terminates.
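For illustration, launching an instance type that ships with NVMe instance store might look like the sketch below; the AMI ID is a placeholder, and the instance family is just one example of an SSD-backed storage-optimized type.

```python
import boto3

ec2 = boto3.client("ec2")

# i3.large comes with one 475 GB NVMe SSD instance store volume physically
# attached to the host; it appears as a device (e.g. /dev/nvme1n1) that must be
# formatted and mounted, and its data is lost when the instance stops or terminates.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="i3.large",
    MinCount=1,
    MaxCount=1,
)
```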
A systems administrator is creating IAM policies and attaching them to IAM identities. After creating the necessary identity-based policies, the administrator is now creating resource-based policies.
Which is the only resource-based policy that the IAM service supports?
You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources.
- A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
- Resource-based policies are JSON policy documents that you attach to a resource such as an Amazon S3 bucket.
- These policies grant the specified principal permission to perform specific actions on that resource and define under what conditions this applies.
Trust policy
- Trust policies define which principal entities (accounts, users, roles, and federated users) can assume the role.
- An IAM role is both an identity and a resource that supports resource-based policies.
- For this reason, you must attach both a trust policy and an identity-based policy to an IAM role.
- The IAM service supports only one type of resource-based policy, called a role trust policy, which is attached to an IAM role.
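A short sketch of the two policy types together, using boto3 and placeholder names: the trust policy (resource-based) says who may assume the role, and the attached identity-based policy says what the role may do.

```python
import boto3
import json

iam = boto3.client("iam")

# Role trust policy: the only resource-based policy IAM supports. Here the
# trusted principal is the EC2 service.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="app-server-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Identity-based policy: grants the role its actual permissions.
iam.attach_role_policy(
    RoleName="app-server-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```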
For security purposes, a team has decided to put their instances in a private subnet. They plan to deploy VPC endpoints so these instances can privately access AWS services. The members of the team would like to know about the only two AWS services that require a Gateway Endpoint instead of an Interface Endpoint.
As a solutions architect, which of the following services would you suggest for this requirement? (Select two)
- Amazon S3
- DynamoDB
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
Instances in your VPC do not require public IP addresses to communicate with resources in the service.
Traffic between your VPC and the other service does not leave the Amazon network.
Endpoints are virtual devices.
They are horizontally scaled, redundant, and highly available VPC components.
They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
There are two types of VPC endpoints:
Interface Endpoints
- An Interface Endpoint is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service.
Gateway Endpoints
- A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service.
The following AWS services are supported: Amazon S3 and DynamoDB.
You must remember that only these two services use a VPC gateway endpoint. The rest of the AWS services use VPC interface endpoints.
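A minimal sketch of creating a Gateway Endpoint for S3 with boto3; the VPC and route table IDs are placeholders, and the service name is Region-specific.

```python
import boto3

ec2 = boto3.client("ec2")

# A Gateway Endpoint is added as a target in the specified route tables, so
# instances in private subnets can reach S3 (or DynamoDB) without a NAT device.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",                       # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",   # or com.amazonaws.us-east-1.dynamodb
    RouteTableIds=["rtb-0def5678"],             # placeholder route table
)
```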
As a solutions architect, you have created a solution that utilizes an Application Load Balancer with stickiness and an Auto Scaling Group (ASG). The ASG spans 2 Availability Zones (AZ). AZ-A has 3 EC2 instances and AZ-B has 4 EC2 instances. The ASG is about to go into a scale-in event due to the triggering of a CloudWatch alarm.
What will happen under the default ASG configuration?
The instance with the oldest launch configuration will be terminated in AZ-B
- Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application.
- You create collections of EC2 instances, called Auto Scaling groups.
- You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size.
- With each Auto Scaling group, you can control when it adds instances (referred to as scaling out) or removes instances (referred to as scaling in) from your network architecture.
- The default termination policy is designed to help ensure that your instances span Availability Zones evenly for high availability.
- The default policy is kept generic and flexible to cover a range of scenarios.
The default termination policy behavior is as follows:
- Determine which Availability Zones have the most instances and at least one instance that is not protected from scale-in.
- Determine which instances to terminate to align the remaining instances to the allocation strategy for the On-Demand or Spot Instance that is terminating.
- Determine whether any of the instances use the oldest launch template or configuration:
- a. Determine whether any of the instances use the oldest launch template unless there are instances that use a launch configuration.
- b. Determine whether any of the instances use the oldest launch configuration.
- After applying all of the above criteria, if there are multiple unprotected instances to terminate, determine which instances are closest to the next billing hour.
Per the given use-case, AZs will be balanced first, then the instance with the oldest launch configuration within that AZ will be terminated.
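For reference, the termination policy is an explicit ASG setting; the sketch below (placeholder group name) pins the default policy, which could instead be overridden with values such as OldestLaunchConfiguration or ClosestToNextInstanceHour.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# "Default" is what an ASG uses when nothing is specified; setting it explicitly
# documents the intent. Other values change which instance is picked at scale-in.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",   # placeholder name
    TerminationPolicies=["Default"],
)
```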
As an e-sport tournament hosting company, you have servers that need to scale and be highly available. Therefore you have deployed an Elastic Load Balancer (ELB) with an Auto Scaling group (ASG) across 3 Availability Zones (AZs). When e-sport tournaments are running, the servers need to scale quickly. And when tournaments are done, the servers can be idle. As a general rule, you would like to be highly available, have the capacity to scale and optimize your costs.
What do you recommend? (Select two)
Set the minimum capacity to 2
Use Reserved Instances for the minimum capacity
- An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.
- An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
- Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service.
- You configure the size of your Auto Scaling group by setting the minimum, maximum, and desired capacity.
- The minimum and maximum capacity are required to create an Auto Scaling group, while the desired capacity is optional.
- If you do not define your desired capacity upfront, it defaults to your minimum capacity.
- An Auto Scaling group is elastic as long as it has different values for minimum and maximum capacity.
- All requests to change the Auto Scaling group’s desired capacity (either by manual scaling or automatic scaling) must fall within these limits.
- Here, even though our ASG is deployed across 3 AZs, the minimum capacity to be highly available is 2.
- When we specify 2 as the minimum capacity, the ASG would create these 2 instances in separate AZs.
- If demand goes up, the ASG would spin up a new instance in the third AZ. Later, as the demand subsides, the ASG would scale in and the instance count would return to 2.
- Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing.
- Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account.
- These On-Demand Instances must match certain attributes, such as instance type and Region, to benefit from the billing discount.
- Since the minimum capacity will always be maintained, it is more cost-effective to cover it with Reserved Instances than with any other purchasing option.
- In case of an AZ outage, the instance in that AZ would go down however the other instance would still be available.
- The ASG would provision the replacement instance in the third AZ to keep the minimum count to 2.
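As a sketch of both recommendations, with placeholder names and IDs: set the group’s minimum to 2 (and maximum to 10 for tournament peaks), then cover the two always-on instances with Reserved Instances, which is purely a billing change.

```python
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# MinSize=2 keeps one instance in each of two AZs for high availability.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="esport-asg",  # placeholder name
    MinSize=2,
    MaxSize=10,
)

# Reserved Instances are a billing discount, not new capacity: buying two RIs
# that match the instance type and Region discounts the two baseline instances.
ec2.purchase_reserved_instances_offering(
    ReservedInstancesOfferingId="<offering-id>",  # placeholder offering
    InstanceCount=2,
)
```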
A CRM company has a SaaS (Software as a Service) application that feeds updates to other in-house and third-party applications. The SaaS application and the in-house applications are being migrated to use AWS services for this inter-application communication.
As a Solutions Architect, which of the following would you suggest to asynchronously decouple the architecture?
Use Amazon EventBridge to decouple the system architecture
- Both Amazon EventBridge and Amazon SNS can be used to develop event-driven applications, but for this use case, EventBridge is the right fit.
- Amazon EventBridge is recommended when you want to build an application that reacts to events from SaaS applications and/or AWS services.
- Amazon EventBridge is the only event-based service that integrates directly with third-party SaaS partners.
- Amazon EventBridge also automatically ingests events from over 90 AWS services without requiring developers to create any resources in their account.
- Further, Amazon EventBridge uses a defined JSON-based structure for events and allows you to create rules that are applied across the entire event body to select events to forward to a target.
- Amazon EventBridge currently supports over 15 AWS services as targets, including AWS Lambda, Amazon SQS, Amazon SNS, and Amazon Kinesis Streams and Firehose, among others.
- At launch, Amazon EventBridge has limited throughput (see Service Limits), which can be increased upon request, and a typical latency of around half a second.
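A rough sketch of the decoupling with boto3: a rule on a partner event bus matches events from the SaaS application and forwards them to an SQS queue consumed by the in-house applications. The partner source name, bus name, and queue ARN are all placeholders.

```python
import boto3
import json

events = boto3.client("events")

# Assumes the SaaS partner event source has already been associated with this
# partner event bus in the account.
BUS = "aws.partner/examplecrm/updates"

events.put_rule(
    Name="crm-updates",
    EventBusName=BUS,
    EventPattern=json.dumps({"source": [{"prefix": "aws.partner/examplecrm"}]}),
)

# Fan the matched events out to a queue that in-house apps poll asynchronously.
events.put_targets(
    Rule="crm-updates",
    EventBusName=BUS,
    Targets=[{
        "Id": "inhouse-queue",
        "Arn": "arn:aws:sqs:us-east-1:123456789012:crm-updates",  # placeholder
    }],
)
```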
The engineering team at an e-commerce company has been tasked with migrating to a serverless architecture. The team wants to focus on the key points of consideration when using Lambda as a backbone for this architecture.
As a Solutions Architect, which of the following options would you identify as correct for the given requirement? (Select three)
By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs.
- Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources.
- Lambda functions always operate from an AWS-owned VPC.
- By default, your function has the full ability to make network requests to any public internet address — this includes access to any of the public AWS APIs.
For example, your function can interact with AWS DynamoDB APIs to PutItem or Query for records.
- You should only enable your functions for VPC access when you need to interact with a private resource located in a private subnet.
- An RDS instance is a good example.
- Once your function is VPC-enabled, all network traffic from your function is subject to the routing rules of your VPC/Subnet.
- If your function needs to interact with a public resource, you will need a route through a NAT gateway in a public subnet.
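For illustration, VPC-enabling an existing function is a single configuration change (function name, subnet, and security group IDs below are placeholders); after this call, internet-bound traffic must route through a NAT gateway in a public subnet.

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to private subnets so it can reach, e.g., a private RDS
# instance; from now on its traffic follows the VPC's routing rules.
lambda_client.update_function_configuration(
    FunctionName="orders-api",  # placeholder function
    VpcConfig={
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "SecurityGroupIds": ["sg-0abc1234"],
    },
)
```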
- Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold
- Since Lambda functions can scale extremely quickly, this means you should have controls in place to notify you when you have a spike in concurrency.
- A good idea is to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed your threshold.
- You should create an AWS Budget so you can monitor costs on a daily basis.
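A minimal sketch of such an alarm; the threshold and SNS topic are placeholders to be tuned to the expected workload.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Fires when account-level Lambda concurrency exceeds the expected ceiling.
cloudwatch.put_metric_alarm(
    AlarmName="lambda-concurrency-spike",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=500,  # placeholder: set to your expected peak
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],  # placeholder topic
)
```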
- If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code
- You can configure your Lambda function to pull in additional code and content in the form of layers.
- A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies.
- With layers, you can use libraries in your function without needing to include them in your deployment package.
- Layers let you keep your deployment package small, which makes development easier.
- A function can use up to 5 layers at a time.
- You can create layers, or use layers published by AWS and other AWS customers.
- Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts.
- The total unzipped size of the function and all layers can’t exceed the unzipped deployment package size limit of 250 MB.
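As a sketch, publishing shared code as a layer and attaching it to a function might look like this; file, layer, and function names are placeholders, and the ZIP is assumed to follow the runtime's layer layout (e.g. a python/ directory).

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish the reusable code once as a layer version.
with open("shared-utils.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="shared-utils",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.12"],
    )

# Attach the layer (a function can use up to 5) to any function that needs it.
lambda_client.update_function_configuration(
    FunctionName="orders-api",  # placeholder function
    Layers=[layer["LayerVersionArn"]],
)
```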
A company wants to grant access to an S3 bucket to users in its own AWS account as well as to users in another AWS account. Which of the following options can be used to meet this requirement?
Use a bucket policy to grant permission to users in its account as well as to users in another account
A bucket policy is a type of resource-based policy that can be used to grant permissions to the principal that is specified in the policy.
Principals can be in the same account as the resource or in other accounts.
For cross-account permissions to other AWS accounts or users in another account, you must use a bucket policy.
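A minimal sketch of such a bucket policy via boto3; the bucket name and the external account ID are placeholders. Delegating to the other account's root lets that account's administrators grant access onward to their own users.

```python
import boto3
import json

s3 = boto3.client("s3")

# Grants list/read access to identities the other account chooses to delegate
# to (by targeting that account's root principal).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # placeholder account
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::shared-bucket",
            "arn:aws:s3:::shared-bucket/*",
        ],
    }],
}

s3.put_bucket_policy(Bucket="shared-bucket", Policy=json.dumps(policy))
```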
You are working for a SaaS (Software as a Service) company as a solutions architect and help design solutions for the company’s customers. One of the customers is a bank and has a requirement to whitelist up to two public IPs when the bank is accessing external services across the internet.
Which architectural choice do you recommend to maintain high availability, support scaling-up to 10 instances and comply with the bank’s requirements?
Use a Network Load Balancer with an Auto Scaling Group (ASG)
- Network Load Balancer is best suited for use-cases involving low latency and high throughput workloads that involve scaling to millions of requests per second.
- Network Load Balancer operates at the connection level (Layer 4), routing connections to targets - Amazon EC2 instances, microservices, and containers – within Amazon Virtual Private Cloud (Amazon VPC) based on IP protocol data.
- A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model.
- It can handle millions of requests per second.
- Network Load Balancers expose a fixed IP to the public web, therefore allowing your application to be predictably reached using these IPs, while allowing you to scale your application behind the Network Load Balancer using an ASG.
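A sketch of how the two whitelisted IPs are pinned in practice: the NLB is created with two pre-allocated Elastic IPs, one per AZ (subnet and allocation IDs are placeholders), while the ASG scales the instances behind it.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Two Elastic IPs give the bank exactly two static public IPs to whitelist,
# no matter how many instances the ASG runs behind the load balancer.
elbv2.create_load_balancer(
    Name="bank-facing-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-11111111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-22222222"},
    ],
)
```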
The engineering team at a social media company has recently migrated to AWS Cloud from its on-premises data center. The team is evaluating CloudFront to be used as a CDN for its flagship application. The team has hired you as an AWS Certified Solutions Architect Associate to advise on CloudFront capabilities on routing, security, and high availability.
Which of the following would you identify as correct regarding CloudFront? (Select three)
CloudFront can route to multiple origins based on the content type
- You can configure a single CloudFront web distribution to serve different types of requests from multiple origins.
For example, if you are building a website that serves static content from an Amazon Simple Storage Service (Amazon S3) bucket and dynamic content from a load balancer, you can serve both types of content from a CloudFront web distribution.
- Use an origin group with primary and secondary origins to configure CloudFront for high availability and failover
- You can set up CloudFront with origin failover for scenarios that require high availability.
- To get started, you create an origin group with two origins: a primary and a secondary.
- If the primary origin is unavailable or returns specific HTTP response status codes that indicate a failure, CloudFront automatically switches to the secondary origin.
- To set up origin failover, you must have a distribution with at least two origins.
- Next, you create an origin group for your distribution that includes two origins, setting one as the primary.
- Finally, you create or update a cache behavior to use the origin group.
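For reference, the failover wiring lives in the distribution's OriginGroups block; the fragment below (origin IDs are placeholders, and the surrounding DistributionConfig is omitted) shows a primary/secondary pair that fails over on 5xx responses.

```python
# Fragment of a CloudFront DistributionConfig for boto3's create_distribution /
# update_distribution; both member OriginIds must exist in the Origins block.
origin_groups = {
    "Quantity": 1,
    "Items": [{
        "Id": "primary-with-failover",
        "FailoverCriteria": {
            "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]},
        },
        "Members": {
            "Quantity": 2,
            "Items": [
                {"OriginId": "primary-s3-origin"},
                {"OriginId": "secondary-s3-origin"},
            ],
        },
    }],
}
```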
A startup’s cloud infrastructure consists of a few Amazon EC2 instances, Amazon RDS instances and Amazon S3 storage. A year into their business operations, the startup is incurring costs that seem too high for their business requirements.
Which of the following options represents a valid cost-optimization solution?
Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations
AWS Cost Explorer
- Helps you identify under-utilized EC2 instances that may be downsized on an instance-by-instance basis within the same instance family, and also helps you understand the potential impact on your AWS bill by taking into account your Reserved Instances and Savings Plans.
AWS Compute Optimizer
- Recommends optimal AWS Compute resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
- Compute Optimizer helps you choose the optimal Amazon EC2 instance types, including those that are part of an Amazon EC2 Auto Scaling group, based on your utilization data.
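As a quick sketch, both reports can be pulled programmatically; the calls below are a minimal example and assume opt-in to Compute Optimizer has already been completed.

```python
import boto3

ce = boto3.client("ce")
optimizer = boto3.client("compute-optimizer")

# Cost Explorer rightsizing report: flags idle and under-utilized EC2 instances.
rightsizing = ce.get_rightsizing_recommendation(Service="AmazonEC2")
print(rightsizing["Summary"])

# Compute Optimizer: ML-based instance type recommendations per instance.
recs = optimizer.get_ec2_instance_recommendations()
for rec in recs["instanceRecommendations"]:
    print(rec["instanceArn"], rec["finding"])  # e.g. OVER_PROVISIONED
```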
A photo hosting service publishes a master pack of beautiful mountain images every month; the pack is over 50 GB in size and is downloaded all around the world. The content is currently hosted on EFS and distributed by Elastic Load Balancing (ELB) and Amazon EC2 instances. The website is experiencing high load each month and very high network costs.
As a Solutions Architect, what can you recommend that won’t force an application refactor while drastically reducing network costs and EC2 load?
Create a CloudFront distribution
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
- CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers.
- CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content.
- Regional edge caches help with all types of content, particularly content that tends to become less popular over time.
Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity.
- For the given use case, you need to create a CloudFront distribution to add a caching layer in front of your ELB.
- That caching layer will be very effective as the image pack is a static file, and therefore it would save the network costs significantly without requiring an application refactor.
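A minimal sketch of that distribution with boto3: the ELB DNS name is a placeholder, and the managed CachingOptimized cache policy is used so the static image pack is held at the edge.

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": "photo-pack-cdn-001",  # any unique string
    "Comment": "CDN in front of the existing ELB",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "elb-origin",
        "DomainName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",  # placeholder
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "elb-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        # AWS managed "CachingOptimized" cache policy ID.
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
    },
})
```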