Udemy Exam 4 Flashcards

1
Q

The engineering team at a leading e-commerce company is anticipating a surge in traffic because of a flash sale planned for the weekend. You have estimated the web traffic to be 10x the normal load. The content of your website is highly dynamic and changes very often.

As a Solutions Architect, which of the following options would you recommend to make sure your infrastructure scales for that day?

A

Use an Auto Scaling Group

  • An Auto Scaling group (ASG) contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.
  • An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
  • Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service.
  • The size of an Auto Scaling group depends on the number of instances that you set as the desired capacity.
  • You can adjust its size to meet demand, either manually or by using automatic scaling.
  • An Auto Scaling group starts by launching enough instances to meet its desired capacity.
  • It maintains this number of instances by performing periodic health checks on the instances in the group.
  • The Auto Scaling group continues to maintain a fixed number of instances even if an instance becomes unhealthy.
  • If an instance becomes unhealthy, the group terminates the unhealthy instance and launches another instance to replace it.
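As a rough, hedged illustration (the group name, launch template, subnets, and sale-day schedule below are placeholders, not values from the question), the group and a scheduled scale-out for the sale day could be set up with the AWS CLI like so:

# Create the group with room to grow to roughly 10x capacity (placeholder values)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template,Version='$Latest' \
  --min-size 2 --max-size 20 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaa111,subnet-bbb222"

# Pre-warm capacity for the flash sale with a scheduled action
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name web-asg \
  --scheduled-action-name flash-sale \
  --start-time "2025-06-07T07:00:00Z" \
  --min-size 10 --max-size 20 --desired-capacity 10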
2
Q

You have developed a new REST API leveraging the API Gateway, AWS Lambda and Aurora database services. Most of the workload on the website is read-heavy. The data rarely changes and it is acceptable to serve users outdated data for about 24 hours. Recently, the website has been experiencing high load and the costs incurred on the Aurora database have been very high.

How can you easily reduce the costs while improving performance, with minimal changes?

A

Enable API Gateway Caching

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.

APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services.

Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications.

API Gateway supports containerized and serverless workloads, as well as web applications.

  • You can enable API caching in Amazon API Gateway to cache your endpoint’s responses.
  • With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.
  • When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds.
  • API Gateway then responds to the request by looking up the endpoint response from the cache instead of requesting your endpoint.
  • The default TTL value for API caching is 300 seconds.
  • The maximum TTL value is 3600 seconds, and a TTL of 0 disables caching.
  • Enabling the API Gateway caching feature is the answer for this use case: serving data that is up to 24 hours stale is acceptable, so even the maximum one-hour TTL comfortably meets the requirement while reducing the read load (and cost) on the Aurora database.
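A minimal sketch of enabling a cache on an existing stage with the AWS CLI (the API ID, stage name, and 0.5 GB cache size are placeholder assumptions):

# Turn on a 0.5 GB cache cluster for the prod stage
aws apigateway update-stage \
  --rest-api-id a1b2c3d4e5 --stage-name prod \
  --patch-operations op=replace,path=/cacheClusterEnabled,value=true op=replace,path=/cacheClusterSize,value=0.5

# Raise the cache TTL for all methods to the 3600-second maximum
aws apigateway update-stage \
  --rest-api-id a1b2c3d4e5 --stage-name prod \
  --patch-operations 'op=replace,path=/*/*/caching/ttlInSeconds,value=3600'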
3
Q

A healthcare company is evaluating storage options on Amazon S3 to meet regulatory guidelines. The data should be stored in such a way on S3 that it cannot be deleted until the regulatory time period has expired.

As a solutions architect, which of the following would you recommend for the given requirement?

A

Use S3 Object Lock

  • Amazon S3 Object Lock is an Amazon S3 feature that allows you to store objects using a write once, read many (WORM) model.

You can use WORM protection for scenarios where it is imperative that data is not changed or deleted after it has been written.

  • Whether your business has a requirement to satisfy compliance regulations in the financial or healthcare sector, or you simply want to capture a golden copy of business records for later auditing and reconciliation, S3 Object Lock is the right tool for you.

Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
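As a hedged sketch (the bucket name and the 7-year retention period are placeholders, not values from the question), Object Lock could be enabled at bucket creation and given a default compliance-mode retention like this:

# Enable Object Lock at bucket creation (versioning is turned on automatically)
aws s3api create-bucket --bucket example-health-records \
  --object-lock-enabled-for-bucket

# Apply a default COMPLIANCE-mode retention so objects cannot be deleted
# until the regulatory period expires
aws s3api put-object-lock-configuration --bucket example-health-records \
  --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Years":7}}}'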

4
Q

As a Solutions Architect, you are tasked to design a distributed application that will run on various EC2 instances. This application needs to have the highest performance local disk to cache data. Also, data is copied through an EC2 to EC2 replication mechanism. It is acceptable if the instance loses its data when stopped or terminated.

Which storage solution do you recommend?

A

Instance Store

  • An instance store provides temporary block-level storage for your instance.
  • This storage is located on disks that are physically attached to the host computer.

Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

  • Instance store volumes are included as part of the instance’s usage cost.
  • Some instance types use NVMe or SATA-based solid-state drives (SSD) to deliver high random I/O performance.

This is a good option when you need storage with very low latency, but you don’t need the data to persist when the instance terminates.
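For instance types that include NVMe instance store volumes, the disk just needs a filesystem and a mount point on first boot; a minimal sketch (the device name varies by instance type):

# List block devices; instance store volumes typically appear as nvme1n1, nvme2n1, ...
lsblk

# Format and mount the ephemeral disk for use as a local cache
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir -p /mnt/cache
sudo mount /dev/nvme1n1 /mnt/cache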

5
Q

A systems administrator is creating IAM policies and attaching them to IAM identities. After creating the necessary identity-based policies, the administrator is now creating resource-based policies.

Which is the only resource-based policy that the IAM service supports?

A

You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources.

  • A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
  • Resource-based policies are JSON policy documents that you attach to a resource such as an Amazon S3 bucket.
  • These policies grant the specified principal permission to perform specific actions on that resource and define under what conditions this applies.

Trust policy

  • Trust policies define which principal entities (accounts, users, roles, and federated users) can assume the role.
  • An IAM role is both an identity and a resource that supports resource-based policies.

For this reason, you must attach both a trust policy and an identity-based policy to an IAM role. The IAM service supports only one type of resource-based policy called a role trust policy, which is attached to an IAM role.
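A small sketch of what a role trust policy looks like and how it is attached when the role is created (the role name and the EC2 service principal are illustrative assumptions):

# trust.json - the resource-based "role trust policy" that says WHO may assume the role
# {
#   "Version": "2012-10-17",
#   "Statement": [{
#     "Effect": "Allow",
#     "Principal": { "Service": "ec2.amazonaws.com" },
#     "Action": "sts:AssumeRole"
#   }]
# }

aws iam create-role --role-name app-server-role \
  --assume-role-policy-document file://trust.json

# Identity-based permissions are attached separately, e.g. an AWS managed policy
aws iam attach-role-policy --role-name app-server-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess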

6
Q

For security purposes, a team has decided to put their instances in a private subnet. They plan to deploy VPC endpoints so that these instances can access AWS services privately. The members of the team would like to know about the only two AWS services that require a Gateway Endpoint instead of an Interface Endpoint.

As a solutions architect, which of the following services would you suggest for this requirement? (Select two)

A
  1. Amazon S3
  2. DynamoDB

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Instances in your VPC do not require public IP addresses to communicate with resources in the service.

Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices.

They are horizontally scaled, redundant, and highly available VPC components.

They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

There are two types of VPC endpoints:

Interface Endpoints

  • An Interface Endpoint is an Elastic Network Interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service.

Gateway Endpoints

  • A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service.

The following AWS services are supported: Amazon S3 and DynamoDB.

You must remember that only these two services use a VPC gateway endpoint. The rest of the AWS services use VPC interface endpoints.
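As an illustration (the VPC, route table, and Region are placeholders), creating the two Gateway Endpoints looks like this:

# Gateway endpoint for S3 - added as a route in the chosen route tables
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0def5678

# Gateway endpoint for DynamoDB
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.dynamodb \
  --route-table-ids rtb-0def5678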

7
Q

As a solutions architect, you have created a solution that utilizes an Application Load Balancer with stickiness and an Auto Scaling Group (ASG). The ASG spawns across 2 Availability Zones (AZ). AZ-A has 3 EC2 instances and AZ-B has 4 EC2 instances. The ASG is about to go into a scale-in event due to the triggering of a CloudWatch alarm.

What will happen under the default ASG configuration?

A

The instance with the oldest launch configuration will be terminated in AZ-B

  • Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application.
  • You create collections of EC2 instances, called Auto Scaling groups.
  • You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size.
  • With each Auto Scaling group, you can control when it adds instances (referred to as scaling out) or removes instances (referred to as scaling in) from your network architecture.
  • The default termination policy is designed to help ensure that your instances span Availability Zones evenly for high availability.
  • The default policy is kept generic and flexible to cover a range of scenarios.

The default termination policy behavior is as follows:

  1. Determine which Availability Zones have the most instances and at least one instance that is not protected from scale-in.
  2. Determine which instances to terminate so as to align the remaining instances with the allocation strategy for the On-Demand or Spot Instances that are terminating.
  3. Determine whether any of the instances use the oldest launch template or launch configuration:
     a. Determine whether any of the instances use the oldest launch template, unless there are instances that use a launch configuration.
     b. Determine whether any of the instances use the oldest launch configuration.
  4. After applying all of the above criteria, if there are multiple unprotected instances to terminate, determine which instances are closest to the next billing hour.

Per the given use-case, AZs will be balanced first, then the instance with the oldest launch configuration within that AZ will be terminated.
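The default policy needs no configuration, but for reference a group's termination policy can be inspected or pinned from the CLI (the group name is a placeholder):

# See which termination policies a group currently uses
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names web-asg \
  --query 'AutoScalingGroups[0].TerminationPolicies'

# Explicitly pin the behaviour described above (this is already the default)
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --termination-policies "Default"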

8
Q

As an e-sport tournament hosting company, you have servers that need to scale and be highly available. Therefore you have deployed an Elastic Load Balancer (ELB) with an Auto Scaling group (ASG) across 3 Availability Zones (AZs). When e-sport tournaments are running, the servers need to scale quickly. And when tournaments are done, the servers can be idle. As a general rule, you would like to be highly available, have the capacity to scale and optimize your costs.

What do you recommend? (Select two)

A

Set the minimum capacity to 2
Use Reserved Instances for the minimum capacity

  • An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.
  • An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
  • Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service.
  • You configure the size of your Auto Scaling group by setting the minimum, maximum, and desired capacity.
  • The minimum and maximum capacity are required to create an Auto Scaling group, while the desired capacity is optional.
  • If you do not define your desired capacity upfront, it defaults to your minimum capacity.
  • An Auto Scaling group is elastic as long as it has different values for minimum and maximum capacity.
  • All requests to change the Auto Scaling group’s desired capacity (either by manual scaling or automatic scaling) must fall within these limits.
  • Here, even though our ASG is deployed across 3 AZs, the minimum capacity to be highly available is 2.
  • When we specify 2 as the minimum capacity, the ASG would create these 2 instances in separate AZs.
  • If demand goes up, the ASG would spin up a new instance in the third AZ. Later as the demand subsides, the ASG would scale-in and the instance count would be back to 2.
  • Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing.
  • Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account.
  • These On-Demand Instances must match certain attributes, such as instance type and Region, to benefit from the billing discount.
  • Since the minimum capacity will always be maintained, it is more cost-effective to use Reserved Instances for it than any other purchasing option.
  • In case of an AZ outage, the instance in that AZ would go down however the other instance would still be available.
  • The ASG would provision the replacement instance in the third AZ to keep the minimum count to 2.
9
Q

A CRM company has a SaaS (Software as a Service) application that feeds updates to other in-house and third-party applications. The SaaS application and the in-house applications are being migrated to use AWS services for this inter-application communication.

As a Solutions Architect, which of the following would you suggest to asynchronously decouple the architecture?

A

Use Amazon EventBridge to decouple the system architecture

  • Both Amazon EventBridge and Amazon SNS can be used to develop event-driven applications, but for this use case, EventBridge is the right fit.
  • Amazon EventBridge is recommended when you want to build an application that reacts to events from SaaS applications and/or AWS services.
  • Amazon EventBridge is the only event-based service that integrates directly with third-party SaaS partners.
  • Amazon EventBridge also automatically ingests events from over 90 AWS services without requiring developers to create any resources in their account.
  • Further, Amazon EventBridge uses a defined JSON-based structure for events and allows you to create rules that are applied across the entire event body to select events to forward to a target.
  • Amazon EventBridge currently supports over 15 AWS services as targets, including AWS Lambda, Amazon SQS, Amazon SNS, and Amazon Kinesis Streams and Firehose, among others.
  • Amazon EventBridge has limited default throughput (see Service Quotas), which can be increased upon request, and a typical latency of around half a second.
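A minimal sketch of wiring SaaS-style events to a consumer with EventBridge (the event source name, rule name, and Lambda ARN are placeholder assumptions):

# Match events published by the SaaS application on the default event bus
aws events put-rule --name crm-updates \
  --event-pattern '{"source":["my.saas.crm"]}'

# Forward matched events to an in-house consumer (e.g. a Lambda function)
aws events put-targets --rule crm-updates \
  --targets 'Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:handle-crm-update'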
10
Q

The engineering team at an e-commerce company has been tasked with migrating to a serverless architecture. The team wants to focus on the key points of consideration when using Lambda as a backbone for this architecture.

As a Solutions Architect, which of the following options would you identify as correct for the given requirement? (Select three)

A

By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources

  • Lambda functions always operate from an AWS-owned VPC.
  • By default, your function has the full ability to make network requests to any public internet address, which includes access to any of the public AWS APIs.

For example, your function can interact with the Amazon DynamoDB APIs to PutItem or Query for records.

  • You should only enable your functions for VPC access when you need to interact with a private resource located in a private subnet, such as an RDS instance.
  • Once your function is VPC-enabled, all network traffic from your function is subject to the routing rules of your VPC/subnet.
  • If your function needs to interact with a public resource, you will need a route through a NAT gateway in a public subnet.

Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold

  • Because Lambda functions can scale extremely quickly, you should have controls in place to notify you when you have a spike in concurrency.
  • A good practice is to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed your chosen threshold.
  • You should also create an AWS Budget so you can monitor costs on a daily basis.

If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code
  • You can configure your Lambda function to pull in additional code and content in the form of layers.
  • A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies.
  • With layers, you can use libraries in your function without needing to include them in your deployment package.
  • Layers let you keep your deployment package small, which makes development easier.
  • A function can use up to 5 layers at a time.
  • You can create layers, or use layers published by AWS and other AWS customers.
  • Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts.
  • The total unzipped size of the function and all layers can’t exceed the unzipped deployment package size limit of 250 MB.
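Two of the points above translate directly into CLI calls; a hedged sketch with placeholder names, ARNs, and thresholds:

# Publish shared code once as a layer and attach it to a function
aws lambda publish-layer-version --layer-name shared-utils \
  --zip-file fileb://layer.zip --compatible-runtimes python3.12
aws lambda update-function-configuration --function-name order-processor \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:shared-utils:1

# Alarm when concurrency spikes beyond an expected threshold
aws cloudwatch put-metric-alarm --alarm-name lambda-concurrency-spike \
  --namespace AWS/Lambda --metric-name ConcurrentExecutions \
  --statistic Maximum --period 60 --evaluation-periods 1 \
  --threshold 500 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts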
11
Q

A company wants to grant access to an S3 bucket to users in its own AWS account as well as to users in another AWS account. Which of the following options can be used to meet this requirement?

A

Use a bucket policy to grant permission to users in its account as well as to users in another account

A bucket policy is a type of resource-based policy that can be used to grant permissions to the principal that is specified in the policy.

Principals can be in the same account as the resource or in other accounts.

For cross-account permissions to other AWS accounts or users in another account, you must use a bucket policy.
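A hedged sketch of such a bucket policy (the bucket name and both account IDs are placeholders) and the call that attaches it:

# policy.json - grants read access to principals in this account and in another account
# {
#   "Version": "2012-10-17",
#   "Statement": [{
#     "Effect": "Allow",
#     "Principal": { "AWS": ["arn:aws:iam::111111111111:root",
#                            "arn:aws:iam::222222222222:root"] },
#     "Action": ["s3:GetObject", "s3:ListBucket"],
#     "Resource": ["arn:aws:s3:::example-shared-bucket",
#                  "arn:aws:s3:::example-shared-bucket/*"]
#   }]
# }

aws s3api put-bucket-policy --bucket example-shared-bucket --policy file://policy.json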

12
Q

You are working for a SaaS (Software as a Service) company as a solutions architect and help design solutions for the company’s customers. One of the customers is a bank and has a requirement to whitelist up to two public IPs when the bank is accessing external services across the internet.

Which architectural choice do you recommend to maintain high availability, support scaling-up to 10 instances and comply with the bank’s requirements?

A

Use a Network Load Balancer with an Auto Scaling Group (ASG)

Network Load Balancer is best suited for use-cases involving low latency and high throughput workloads that involve scaling to millions of requests per second.

  • Network Load Balancer operates at the connection level (Layer 4), routing connections to targets - Amazon EC2 instances, microservices, and containers – within Amazon Virtual Private Cloud (Amazon VPC) based on IP protocol data.
  • A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model.
  • It can handle millions of requests per second.
  • Network Load Balancers expose a fixed IP per Availability Zone (and let you attach your own Elastic IPs), therefore allowing your application to be predictably reached and whitelisted using at most two public IPs, while allowing you to scale your application behind the Network Load Balancer using an ASG.
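A sketch of how the two whitelisted addresses come about: allocate two Elastic IPs and attach one per subnet when creating the NLB (all IDs below are placeholders):

# Allocate the two static addresses the bank will whitelist
aws ec2 allocate-address --domain vpc   # returns eipalloc-aaa111
aws ec2 allocate-address --domain vpc   # returns eipalloc-bbb222

# Create the NLB with one Elastic IP per Availability Zone
aws elbv2 create-load-balancer --name bank-facing-nlb --type network \
  --subnet-mappings SubnetId=subnet-aaa111,AllocationId=eipalloc-aaa111 SubnetId=subnet-bbb222,AllocationId=eipalloc-bbb222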
13
Q

The engineering team at a social media company has recently migrated to AWS Cloud from its on-premises data center. The team is evaluating CloudFront to be used as a CDN for its flagship application. The team has hired you as an AWS Certified Solutions Architect Associate to advise on CloudFront capabilities on routing, security, and high availability.

Which of the following would you identify as correct regarding CloudFront? (Select three)

A

CloudFront can route to multiple origins based on the content type

  • You can configure a single CloudFront web distribution to serve different types of requests from multiple origins.

For example, if you are building a website that serves static content from an Amazon Simple Storage Service (Amazon S3) bucket and dynamic content from a load balancer, you can serve both types of content from a CloudFront web distribution.

Use an origin group with primary and secondary origins to configure CloudFront for high availability and failover

  • You can set up CloudFront with origin failover for scenarios that require high availability.
  • To get started, you create an origin group with two origins: a primary and a secondary.
  • If the primary origin is unavailable or returns specific HTTP response status codes that indicate a failure, CloudFront automatically switches to the secondary origin.
  1. To set up origin failover, you must have a distribution with at least two origins.
  2. Next, you create an origin group for your distribution that includes two origins, setting one as the primary.
  3. Finally, you create or update a cache behavior to use the origin group.
14
Q

A startup’s cloud infrastructure consists of a few Amazon EC2 instances, Amazon RDS instances and Amazon S3 storage. A year into their business operations, the startup is incurring costs that seem too high for their business requirements.

Which of the following options represents a valid cost-optimization solution?

A

Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations

AWS Cost Explorer

  • Helps you identify under-utilized EC2 instances that may be downsized on an instance by instance basis within the same instance family, and also understand the potential impact on your AWS bill by taking into account your Reserved Instances and Savings Plans.

AWS Compute Optimizer

  • Recommends optimal AWS Compute resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
  • Compute Optimizer helps you choose the optimal Amazon EC2 instance types, including those that are part of an Amazon EC2 Auto Scaling group, based on your utilization data.
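Both reports can be pulled from the CLI once the services are opted in; a hedged sketch (no parameters here come from the question):

# Opt in and fetch right-sizing recommendations from Compute Optimizer
aws compute-optimizer update-enrollment-status --status Active
aws compute-optimizer get-ec2-instance-recommendations

# Cost Explorer right-sizing report for EC2 (idle / under-utilized instances)
aws ce get-rightsizing-recommendation --service AmazonEC2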
15
Q

A photo hosting service publishes a master pack of beautiful mountain images, every month, that are over 50 GB in size and downloaded all around the world. The content is currently hosted on EFS and distributed by Elastic Load Balancing (ELB) and Amazon EC2 instances. The website is experiencing high load each month and very high network costs.

As a Solutions Architect, what can you recommend that won’t force an application refactor and reduce network costs and EC2 load drastically?

A

Create a CloudFront distribution

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

  • CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers.
  • CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content.
  • Regional edge caches help with all types of content, particularly content that tends to become less popular over time.

Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity.

  • For the given use case, you need to create a CloudFront distribution to add a caching layer in front of your ELB.
  • That caching layer will be very effective as the image pack is a static file, and therefore it would save the network costs significantly without requiring an application refactor.
16
Q

You are working as a Solutions Architect for a photo processing company that has a proprietary algorithm to compress an image without any loss in quality. Because of the efficiency of the algorithm, your clients are willing to wait for a response that carries their compressed images back. You also want to process these jobs asynchronously and scale quickly, to cater to the high demand. Additionally, you also want the job to be retried in case of failures.

Which combination of choices do you recommend to minimize cost and comply with the requirements? (Select two)

A

EC2 Spot Instances

A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price.

Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly.

The hourly price for a Spot Instance is called a Spot price.

The Spot price of each instance type in each Availability Zone is set by Amazon EC2 and adjusted gradually based on the long-term supply of and demand for Spot Instances.

Your Spot Instance runs whenever capacity is available and the maximum price per hour for your request exceeds the Spot price.

Given the unpredictable volume of these jobs and the desire to save on costs, Spot Instances are recommended over On-Demand Instances. Spot Instances are also cheaper than Reserved Instances and do not require a long-term commitment, making them the better fit for the given use-case.

Amazon Simple Queue Service (SQS)

  • Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.

SQS offers two types of message queues.

  1. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery.
  2. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
  • SQS will allow you to buffer the image compression requests and process them asynchronously.
  • It also has a direct built-in mechanism for retries and scales seamlessly.
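A minimal sketch of the queue side (queue name, message body, and timeouts are placeholder choices): producers enqueue compression jobs and Spot workers long-poll for them; an unprocessed message simply becomes visible again and is retried.

# Queue that buffers compression jobs (a dead-letter queue can be attached for retries)
aws sqs create-queue --queue-name image-compression-jobs

# Producer side: submit a job
aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/image-compression-jobs \
  --message-body '{"imageKey":"uploads/photo-001.png"}'

# Spot worker side: long-poll for work; if the worker is interrupted and never
# deletes the message, it reappears for another worker to retry
aws sqs receive-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/image-compression-jobs \
  --wait-time-seconds 20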
17
Q

A leading e-commerce company runs its IT infrastructure on AWS Cloud. The company has a batch job running at 7 am daily on an RDS database. It processes shipping orders for the past day, and usually gets around 2000 records that need to be processed sequentially in a batch job via a shell script. The processing of each record takes about 3 seconds.

What platform do you recommend to run this batch job?

A

Amazon EC2

  • Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud.
  • It is designed to make web-scale cloud computing easier for developers.
  • Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction.
  • It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment.

AWS Batch can be used to plan, schedule, and execute batch computing workloads on Amazon EC2 instances, but for a short daily job like this one a single EC2 instance is sufficient.

Sequentially processing about 2,000 records at roughly 3 seconds each takes around 100 minutes, which exceeds AWS Lambda's 15-minute timeout. Amazon EC2 is therefore the right choice, as it can accommodate the batch processing and run the customized shell script.

18
Q

You are working as an AWS architect for a weather tracking facility. You are asked to set up a Disaster Recovery (DR) mechanism with minimum costs. In case of failure, the facility can only bear data loss of a few minutes without jeopardizing the forecasting models.

As a Solutions Architect, which DR method will you suggest?

A

Pilot Light

  • The term pilot light is often used to describe a DR scenario in which a minimal version of an environment is always running in the cloud.
  • The idea of the pilot light is an analogy that comes from the gas heater.
  • In a gas heater, a small flame that’s always on can quickly ignite the entire furnace to heat up a house.
  • This scenario is similar to a backup-and-restore scenario.

For example, with AWS you can maintain a pilot light by configuring and running the most critical core elements of your system in AWS.

  • For the given use-case, a small part of the backup infrastructure is always running simultaneously syncing mutable data (such as databases or documents) so that there is no loss of critical data.
  • When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core.

For Pilot Light, the RPO (Recovery Point Objective) is in the order of minutes, which matches the facility's tolerance for losing only a few minutes of data.

19
Q

Your company runs a web portal to match developers to clients who need their help. As a solutions architect, you’ve designed the architecture of the website to be fully serverless with API Gateway & AWS Lambda. The backend uses a DynamoDB table. You would like to automatically congratulate your developers on important milestones, such as - their first paid contract. All the contracts are stored in DynamoDB.

Which DynamoDB feature can you use to implement this functionality such that there is LEAST delay in sending automatic notifications?

A

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale.

  • It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

DynamoDB Streams + Lambda

  • A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table.
  • When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
  • Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified.
  • A stream record contains information about a data modification to a single item in a DynamoDB table.

DynamoDB Streams

  • Will contain a stream of all the changes that happen to a DynamoDB table.

It can be chained with a Lambda function that will be triggered to react to these changes, one of which is the developer’s milestone.
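A sketch of the wiring (the table name, function name, and stream ARN are placeholders): enable the stream, then map it to the Lambda function so change records are pushed with minimal delay.

# Enable a stream on the contracts table
aws dynamodb update-table --table-name contracts \
  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES

# Trigger the congratulation function from the stream
aws lambda create-event-source-mapping \
  --function-name congratulate-developer \
  --event-source-arn arn:aws:dynamodb:us-east-1:123456789012:table/contracts/stream/2025-01-01T00:00:00.000 \
  --starting-position LATEST --batch-size 100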

20
Q

An IT company has a large number of clients opting to build their APIs by using Docker containers. To facilitate the hosting of these containers, the company is looking at various orchestration services available with AWS.

As a Solutions Architect, which of the following solutions will you suggest? (Select two)

A

Use Amazon EKS with AWS Fargate for serverless orchestration of the containerized services

Use Amazon ECS with AWS Fargate for serverless orchestration of the containerized services

  • Building APIs with Docker containers has been gaining momentum over the years.
  • For hosting and exposing these container-based APIs, they need a solution which supports HTTP requests routing, autoscaling, and high availability.
  • In some cases, user authorization is also needed.
  • For this purpose, many organizations are orchestrating their containerized services with Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), while hosting their containers on Amazon EC2 or AWS Fargate.
  • Then, they can add scalability and high availability with Service Auto Scaling (in Amazon ECS) or the Horizontal Pod Autoscaler (in Amazon EKS), and they expose the services through load balancers.
  • When you use Amazon ECS as an orchestrator (with EC2 or Fargate launch type), you also have the option to expose your services with Amazon API Gateway and AWS Cloud Map instead of a load balancer.
  • AWS Cloud Map is used for service discovery: no matter how Amazon ECS tasks scale, AWS Cloud Map service names would point to the right set of Amazon ECS tasks.
  • Then, API Gateway HTTP APIs can be used to define API routes and point them to the corresponding AWS Cloud Map services.
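A minimal sketch of the ECS-on-Fargate flavour (the cluster, task definition, subnets, and security group are placeholder assumptions):

# Fargate-backed cluster, no EC2 instances to manage
aws ecs create-cluster --cluster-name api-cluster --capacity-providers FARGATE

# Run the containerized API as a service (the task definition is registered separately)
aws ecs create-service --cluster api-cluster --service-name orders-api \
  --task-definition orders-api:1 --desired-count 2 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaa111],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}"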
21
Q

A Big Data analytics company writes data and log files in Amazon S3 buckets. The company now wants to stream the existing data files as well as any ongoing file updates from Amazon S3 to Amazon Kinesis Data Streams.

As a Solutions Architect, which of the following would you suggest as the fastest possible way of building a solution for this requirement?

A

Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams

  • You can achieve this by using AWS Database Migration Service (AWS DMS).
  • AWS DMS enables you to seamlessly migrate data from supported sources to relational databases, data warehouses, streaming platforms, and other data stores in AWS cloud.
  • AWS DMS lets you expand the existing application to stream data from Amazon S3 into Amazon Kinesis Data Streams for real-time analytics without writing and maintaining new code.
  • AWS DMS supports specifying Amazon S3 as the source and streaming services like Kinesis and Amazon Managed Streaming for Apache Kafka (Amazon MSK) as the target.
  • AWS DMS allows migration of full and change data capture (CDC) files to these services.
  • AWS DMS performs this task out of the box without any complex configuration or code development.
  • You can also configure an AWS DMS replication instance to scale up or down depending on the workload.
  • AWS DMS supports Amazon S3 as the source and Kinesis as the target, so data stored in an S3 bucket is streamed to Kinesis.

Several consumers, such as AWS Lambda, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, and the Kinesis Consumer Library (KCL), can consume the data concurrently to perform real-time analytics on the dataset.

  • Each AWS service in this architecture can scale independently as needed.
22
Q

Your company is deploying a website running on Elastic Beanstalk. The website takes over 45 minutes for the installation and contains both static as well as dynamic files that must be generated during the installation process.

As a Solutions Architect, you would like to bring the time to create a new Instance in your Elastic Beanstalk deployment to be less than 2 minutes. What do you recommend? (Select two)

A
  • AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring.

  • At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
  • When you create an AWS Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to use instead of the standard Elastic Beanstalk AMI included in your platform version.
  • A custom AMI can improve provisioning times when instances are launched in your environment if you need to install a lot of software that isn’t included in the standard AMIs.

Create a Golden AMI with the static installation components already setup

  • A Golden AMI is an AMI that you standardize through configuration, consistent security patching, and hardening.
  • It also contains agents you approve for logging, security, performance monitoring, etc.

Use EC2 user data to customize the dynamic installation parts at boot time

  • EC2 instance user data is the data that you specified in the form of a configuration script while launching your instance.
  • You can use EC2 user data to customize the dynamic installation parts at boot time, rather than installing the application itself at boot time.
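A sketch of the two-part approach (the instance ID, AMI ID, and environment name are placeholders): bake the slow, static installation into a golden AMI, point the Elastic Beanstalk environment at it, and leave only the fast dynamic steps to user data at boot.

# Bake the lengthy static installation into a reusable golden AMI
aws ec2 create-image --instance-id i-0123456789abcdef0 --name golden-web-ami-v1

# Tell the Elastic Beanstalk environment to launch instances from that AMI
aws elasticbeanstalk update-environment --environment-name my-env \
  --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=ImageId,Value=ami-0abc1234def567890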
23
Q

A developer in your company has set up a classic 2 tier architecture consisting of an Application Load Balancer and an Auto Scaling group (ASG) managing a fleet of EC2 instances. The ALB is deployed in a subnet of size 10.0.1.0/18 and the ASG is deployed in a subnet of size 10.0.4.0/17.

As a solutions architect, you would like to adhere to the security pillar of the well-architected framework. How do you configure the security group of the EC2 instances to only allow traffic coming from the ALB?

A
  • An Auto Scaling group (ASG) contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.
  • An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
  • Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service.

Add a rule to authorize the security group of the ALB

  • A security group acts as a virtual firewall that controls the traffic for one or more instances.
  • When you launch an instance, you can specify one or more security groups; otherwise, we use the default security group.
  • You can add rules to each security group that allow traffic to or from its associated instances.
  • You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.
  • When deciding to allow traffic to reach an instance, all the rules from all the security groups that are associated with the instance are evaluated.

The following are the characteristics of security group rules:

  1. By default, security groups allow all outbound traffic.
  2. Security group rules are always permissive; you can’t create rules that deny access.
  3. Security groups are stateful

Application Load Balancer (ALB) operates at the request level (layer 7), routing traffic to targets – EC2 instances, containers, IP addresses and Lambda functions based on the content of the request.

Ideal for advanced load balancing of HTTP and HTTPS traffic, Application Load Balancer provides advanced request routing targeted at delivery of modern application architectures, including microservices and container-based applications.
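A sketch of the rule itself (the security group IDs and port are placeholders): instead of a CIDR such as the subnet range, the source is the ALB's security group.

# Allow traffic to the EC2 instances only from the ALB's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-ec2instances111 \
  --protocol tcp --port 80 \
  --source-group sg-albfrontend222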

24
Q

You have an S3 bucket that contains files in two different folders - s3://my-bucket/images and s3://my-bucket/thumbnails. When an image is first uploaded and new, it is viewed several times. But after 45 days, analytics prove that image files are on average rarely requested, but the thumbnails still are. After 180 days, you would like to archive the image files and the thumbnails. Overall you would like to remain highly available to prevent disasters happening against a whole AZ.

How can you implement an efficient cost strategy for your S3 bucket? (Select two)

A
  • To manage your S3 objects, so they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle.
  • An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects.

There are two types of actions:

Transition actions

  • Define when objects transition to another storage class.
  • For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.

Expiration actions

  • Define when objects expire. Amazon S3 deletes expired objects on your behalf.

Create a Lifecycle Policy to transition objects to S3 Standard-IA using a prefix after 45 days

S3 Standard-IA

  • Is for data that is accessed less frequently but requires rapid access when needed.
  • S3 Standard-IA offers high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee.
  • This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.
  • The minimum storage duration charge is 30 days.
  • The use case mentions that after 45 days the image files are rarely requested, but the thumbnails still are.
  • So you need to use a prefix while configuring the Lifecycle Policy so that only objects under s3://my-bucket/images are transitioned to Standard-IA, and not all the objects in the bucket.

Create a Lifecycle Policy to transition all objects to Glacier after 180 days

  • Amazon S3 Glacier and S3 Glacier Deep Archive are secure, durable, and extremely low-cost Amazon S3 cloud storage classes for data archiving and long-term backup.
  • They are designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.
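A hedged sketch of the two rules as a lifecycle configuration (aside from the bucket name, the prefix and day counts come from the use case):

# lifecycle.json
# {
#   "Rules": [
#     { "ID": "images-to-ia", "Status": "Enabled",
#       "Filter": { "Prefix": "images/" },
#       "Transitions": [ { "Days": 45, "StorageClass": "STANDARD_IA" } ] },
#     { "ID": "all-to-glacier", "Status": "Enabled",
#       "Filter": { "Prefix": "" },
#       "Transitions": [ { "Days": 180, "StorageClass": "GLACIER" } ] }
#   ]
# }

aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json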
25
Q

The development team at a social media company wants to handle some complicated queries such as “What are the number of likes on the videos that have been posted by friends of a user A?”.

As a solutions architect, which of the following AWS database services would you suggest as the BEST fit to handle such use cases?

A

Amazon Neptune

  • Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets.
  • The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency.
  • Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security.
  • Amazon Neptune is highly available, with read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones.
  • Neptune is secure with support for HTTPS encrypted client connections and encryption at rest.
  • Neptune is fully managed, so you no longer need to worry about database management tasks such as hardware provisioning, software patching, setup, configuration, or backups.
  • Amazon Neptune can quickly and easily process large sets of user-profiles and interactions to build social networking applications.
  • Neptune enables highly interactive graph queries with high throughput to bring social features into your applications.

For example, if you are building a social feed into your application, you can use Neptune to provide results that prioritize showing your users the latest updates from their family, from friends whose updates they ‘Like,’ and from friends who live close to them.

26
Q

A company runs a popular dating website on the AWS Cloud. As a Solutions Architect, you’ve designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend uses an RDS PostgreSQL database. Currently, the application uses a username and password combination to connect the Lambda function to the RDS database.

You would like to improve the security at the authentication level by leveraging short-lived credentials. What will you choose? (Select two)

A

Use IAM authentication from Lambda to RDS PostgreSQL

Attach an AWS Identity and Access Management (IAM) role to AWS Lambda

  • You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication.
  • IAM database authentication works with MySQL and PostgreSQL.
  • With this authentication method, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token.
  • An authentication token is a unique string of characters that Amazon RDS generates on request.
  • Authentication tokens are generated using AWS Signature Version 4.
  • Each token has a lifetime of 15 minutes.
  • You don’t need to store user credentials in the database, because authentication is managed externally using IAM.
  • You can also still use standard database authentication.

IAM database authentication provides the following benefits:

  1. Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
  2. You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
  3. For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
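A sketch of how the Lambda function (with an IAM role attached that allows rds-db:connect) obtains a short-lived token instead of a password; the endpoint, user, and Region below are placeholders:

# Generate a 15-minute authentication token for the database user
aws rds generate-db-auth-token \
  --hostname mydb.abc123xyz.us-east-1.rds.amazonaws.com \
  --port 5432 --username app_user --region us-east-1

# The returned token is then used as the password in the PostgreSQL connection
# (the connection must use SSL).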
27
Q

A company is looking for an AWS service to help its remote employees access applications by delivering a cloud desktop, which is accessible from any location with an internet connection, using any supported device.

As a Solutions Architect, which of the following AWS services would you recommend for this use-case?

A

Amazon Workspaces

  • Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution.
  • You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.
  • You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money when compared to traditional desktops and on-premises VDI solutions.
  • Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy.

With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

28
Q

A CRM web application was written as a monolith in PHP and is facing scaling issues because of performance bottlenecks. The CTO wants to re-engineer towards microservices architecture and expose their website from the same load balancer, linked to different target groups with different URLs: checkout.mycorp.com, www.mycorp.com, mycorp.com/profile and mycorp.com/search. The CTO would like to expose all these URLs as HTTPS endpoints for security purposes.

As a solutions architect, which of the following would you recommend as a solution that requires MINIMAL configuration effort?

A

Use SSL certificates with SNI

  • You can host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer.
  • To use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer.
  • ALB will automatically choose the optimal TLS certificate for each client.
  • ALB’s smart certificate selection goes beyond SNI.
  • In addition to containing a list of valid domain names, certificates also describe the type of key exchange and cryptography that the server supports, as well as the signature algorithm (SHA2, SHA1, MD5) used to sign the certificate.
  • With SNI support AWS makes it easy to use more than one certificate with the same ALB.
  • The most common reason you might want to use multiple certificates is to handle different domains with the same load balancer.
  • It’s always been possible to use wildcard and subject-alternate-name (SAN) certificates with ALB, but these come with limitations.
  • Wildcard certificates only work for related subdomains that match a simple pattern and while SAN certificates can support many different domains, the same certificate authority has to authenticate each one.
  • That means you have to reauthenticate and reprovision your certificate every time you add a new domain.
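A sketch of binding an extra ACM certificate to the existing HTTPS listener (the listener and certificate ARNs are placeholders); ALB then picks the right certificate per client via SNI.

# Attach an additional certificate (e.g. for checkout.mycorp.com) to the HTTPS listener
aws elbv2 add-listener-certificates \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555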
29
Q

A media company uses Amazon ElastiCache Redis to enhance the performance of its RDS database layer. The company wants a robust disaster recovery strategy for its caching layer that guarantees minimal downtime as well as minimal data loss while ensuring good application performance.

Which of the following solutions will you recommend to address the given use-case?

A

Opt for Multi-AZ configuration with automatic failover functionality to help mitigate failure

Multi-AZ is the best option when data retention, minimal downtime, and application performance are a priority.

  • Data-loss potential - Low. Multi-AZ provides fault tolerance for every scenario, including hardware-related issues.
  • Performance impact - Low. Of the available options, Multi-AZ provides the fastest time to recovery, because there is no manual procedure to follow after the process is implemented.
  • Cost - Higher than backup-based approaches, since replica nodes run continuously, but it is the option that best meets the minimal-downtime and minimal-data-loss requirement.

Use Multi-AZ when you can’t risk losing data because of hardware failure or you can’t afford the downtime required by other options in your response to an outage.
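A hedged sketch of creating the Redis replication group with Multi-AZ and automatic failover turned on (the group ID, node type, and node count are placeholder assumptions):

aws elasticache create-replication-group \
  --replication-group-id prod-cache \
  --replication-group-description "Caching layer with Multi-AZ failover" \
  --engine redis --cache-node-type cache.r6g.large \
  --num-cache-clusters 2 \
  --automatic-failover-enabled --multi-az-enabled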

30
Q

An e-commerce company has copied 1 PB of data from its on-premises data center to an Amazon S3 bucket in the us-west-1 Region using an AWS Direct Connect link. The company now wants to copy the data to another S3 bucket in the us-east-1 Region. The on-premises data center does not allow the use of AWS Snowball.

As a Solutions Architect, which of the following would you recommend to accomplish this?

A

Copy data from the source bucket to the destination bucket using the aws s3 sync command

  • The aws s3 sync command uses the CopyObject APIs to copy objects between S3 buckets.
  • The sync command lists the source and target buckets to identify objects that are in the source bucket but that aren’t in the target bucket.
  • The command also identifies objects in the source bucket that have different LastModified dates than the objects that are in the target bucket.
  • The sync command on a versioned bucket copies only the current version of the object—previous versions aren’t copied.
  • By default, this preserves object metadata, but the access control lists (ACLs) are set to FULL_CONTROL for your AWS account, which removes any additional ACLs.
  • If the operation fails, you can run the sync command again without duplicating previously copied objects.

You can use the command like so:

aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE s3://DOC-EXAMPLE-BUCKET-TARGET

31
Q

A company wants an easy way to deploy and manage large fleets of Snowball devices. Which of the following solutions can be used to address the given requirements?

A

AWS OpsHub

  • AWS OpsHub is a graphical user interface you can use to manage your AWS Snowball devices, enabling you to rapidly deploy edge computing workloads and simplify data migration to the cloud.
  • With just a few clicks in AWS OpsHub, you have the full functionality of the Snowball devices at your fingertips:

    • you can unlock and configure devices,
    • drag-and-drop data to devices,
    • launch applications, and
    • monitor device metrics.

AWS OpsHub supports the Snowball Edge Storage Optimized and Snowball Edge Compute Optimized devices.

32
Q

A social media company wants the capability to dynamically alter the size of a geographic area from which traffic is routed to a specific server resource.

Which feature of Route 53 can help achieve this functionality?

A

Geoproximity routing

Geoproximity routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources.

You can also optionally choose to route more traffic or less to a given resource by specifying a value, known as a bias.

A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource.

To optionally change the size of the geographic region from which Route 53 routes traffic to a resource, specify the applicable value for the bias:

  • To expand the size of the geographic region from which Route 53 routes traffic to a resource, specify a positive integer from 1 to 99 for the bias. Route 53 shrinks the size of adjacent regions.
  • To shrink the size of the geographic region from which Route 53 routes traffic to a resource, specify a negative bias of -1 to -99. Route 53 expands the size of adjacent regions.
33
Q

Your company has deployed an application that performs a lot of overwrites and deletes on data and requires the latest information to be available anytime data is read via queries on database tables.

As a Solutions Architect, which database technology will you recommend?

A

Amazon Relational Database Service (Amazon RDS)

  • Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.
  • It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.
  • RDS allows you to create, read, update, and delete records without any item lock or ambiguity.

All RDS transactions must be ACID compliant or be Atomic, Consistent, Isolated, and Durable to ensure data integrity.

  • Atomicity requires that either transaction as a whole is successfully executed or if a part of the transaction fails, then the entire transaction be invalidated.
  • Consistency mandates the data written to the database as part of the transaction must adhere to all defined rules, and restrictions including constraints, cascades, and triggers.
  • Isolation is critical to achieving concurrency control and makes sure each transaction is independent unto itself.
  • Durability requires that all of the changes made to the database be permanent once a transaction is completed.
34
Q

A company has noticed that its EBS storage volume (io1) accounts for 90% of the cost and the remaining 10% cost can be attributed to the EC2 instance. The CloudWatch metrics report that both the EC2 instance and the EBS volume are under-utilized. The CloudWatch metrics also show that the EBS volume has occasional I/O bursts. The entire infrastructure is managed by AWS CloudFormation.

As a Solutions Architect, what do you propose to reduce the costs?

A

Amazon EBS provides various volume types that differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications.

The volume types fall into two categories:

SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS.

HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS

Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Unlike gp2, which uses a bucket and credit model to calculate performance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time.

Convert the Amazon EC2 instance EBS volume to gp2

General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads.

These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for an extended duration.

Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size.

AWS designs gp2 volumes to deliver their provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.

Therefore, gp2 is the right choice as it is more cost-effective than io1, and it also allows a burst in performance when needed.
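Since the stack is managed by AWS CloudFormation, the clean path is to change the volume type in the template and update the stack; the underlying change is equivalent to the following call (the volume ID is a placeholder) and can be made without detaching the volume:

# Switch the under-utilized io1 volume to burstable gp2
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp2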