Udemy Exam 2 Flashcards

1
Q

An IT company is working on client engagement to build a real-time data analytics tool for the Internet of Things (IoT) data. The IoT data is funneled into Kinesis Data Streams which further acts as the source of a delivery stream for Kinesis Firehose. The engineering team has now configured a Kinesis Agent to send IoT data from another set of devices to the same Firehose delivery stream. They noticed that data is not reaching Firehose as expected.

As a solutions architect, which of the following options would you attribute as the MOST plausible root cause behind this issue?

A

**Kinesis Agent cannot write to a Kinesis Data Firehose delivery stream whose source is already set as a Kinesis Data Stream**

  • Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools.
  • It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
  • It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
  • When a Kinesis data stream is configured as the source of a Firehose delivery stream, Firehose’s PutRecord and PutRecordBatch operations are disabled and Kinesis Agent cannot write to Firehose delivery stream directly.
  • Data needs to be added to the Kinesis data stream through the Kinesis Data Streams PutRecord and PutRecords operations instead.
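Because Firehose's PutRecord and PutRecordBatch are disabled in this configuration, producers must write to the Kinesis data stream instead. A minimal boto3 sketch (stream name, region, and partition key are placeholders):

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Write an IoT reading to the Kinesis data stream that feeds the Firehose
# delivery stream; Firehose picks it up from the stream automatically.
kinesis.put_record(
    StreamName="iot-telemetry-stream",  # placeholder stream name
    Data=json.dumps({"device_id": "sensor-42", "temp_c": 21.7}).encode(),
    PartitionKey="sensor-42",           # keeps one device's records on one shard
)
```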
2
Q

A financial services company wants to implement a solution that ensures that the order of financial transactions is preserved and no duplicate transactions are created.

As a solutions architect, which of the following solutions would you recommend?

A

Publish transaction updates using SNS FIFO topic, which is subscribed by SQS FIFO queue for further processing

The two most common forms of asynchronous service-to-service communication are message queues and publish/subscribe messaging:

With message queues

  • Messages are stored on the queue until they are processed and deleted by a consumer.

Amazon Simple Queue Service (SQS) provides a fully managed message queuing service with no administrative overhead.

  • With pub/sub messaging, a message published to a topic is delivered to all subscribers to the topic.

Amazon Simple Notification Service (SNS)

Is a fully managed pub/sub messaging service that enables message delivery to a large number of subscribers.

  • Each subscriber can also set a filter policy to receive only the messages that it cares about.

Per the use-case, the financial transactions have to be processed and stored in the exact order they take place. So an SNS FIFO topic is the right choice, subscribed to by an SQS FIFO queue.

With SQS

  • You can use FIFO (First-In-First-Out) queues to preserve the order in which messages are sent and received and to avoid a message being processed more than once.
  • Similar capabilities for pub/sub messaging are achieved through SNS FIFO topics, providing strict message ordering and deduplicated message delivery to one or more subscribers.
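As an illustration, a hedged boto3 sketch of publishing ordered, deduplicated transactions to an SNS FIFO topic that an SQS FIFO queue subscribes to (topic ARN and message values are placeholders):

```python
import json
import boto3

sns = boto3.client("sns")

# Publish a transaction to the FIFO topic; the subscribed SQS FIFO queue
# receives it in order, with duplicates suppressed by the deduplication ID.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:transactions.fifo",  # placeholder
    Message=json.dumps({"txn_id": "T-1001", "amount": 250.00}),
    MessageGroupId="account-9876",    # ordering is preserved per group (e.g., per account)
    MessageDeduplicationId="T-1001",  # prevents duplicate processing
)
```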
3
Q

The engineering team at an e-commerce company is working on cost optimizations for EC2 instances. The team wants to manage the workload using a mix of on-demand and spot instances across multiple instance types. They would like to create an Auto Scaling group with a mix of these instances.

Which of the following options would allow the engineering team to provision the instances for this use-case?

A

You can only use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost

A launch template is similar to a launch configuration, in that it specifies instance configuration information such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances.

Also, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.

With launch templates, you can provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost.
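A hedged boto3 sketch of an Auto Scaling group that mixes On-Demand and Spot capacity from one launch template (group name, subnets, template name, and the distribution numbers are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ecommerce-mixed-asg",          # placeholder
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0aaa,subnet-0bbb",         # placeholder subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "ecommerce-lt",    # placeholder
                "Version": "$Latest",
            },
            # Candidate instance types the group is allowed to launch
            "Overrides": [{"InstanceType": "m5.large"}, {"InstanceType": "m5a.large"}],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                   # always-on On-Demand floor
            "OnDemandPercentageAboveBaseCapacity": 50,   # 50/50 On-Demand vs Spot above it
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```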

4
Q

An IT company wants to optimize the costs incurred on its fleet of 100 EC2 instances for the next year. Based on historical analyses, the engineering team observed that 70 of these instances handle the compute services of its flagship application and need to be always available. The other 30 instances are used to handle batch jobs that can afford a delay in processing.

As a solutions architect, which of the following would you recommend as the MOST cost-optimal solution?

A

Purchase 70 reserved instances and 30 spot instances

  • As 70 instances need to be always available, these can be purchased as reserved instances for a one-year duration.
  • The other 30 instances responsible for the batch job can be purchased as spot instances.
  • Even if some of the spot instances are interrupted, other spot instances can continue with the job.
5
Q

A manufacturing company receives unreliable service from its data center provider because the company is located in an area prone to natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on AWS in case the on-premises data center fails. The company runs web servers that connect to external vendors. The data available on AWS and on-premises must be uniform.

Which of the following solutions would have the LEAST amount of downtime?

A
  1. Set up a Route 53 failover record.
  2. Run application servers on EC2 instances behind an Application Load Balancer in an Auto Scaling group.
  3. Set up AWS Storage Gateway with stored volumes to back up data to S3

If you have multiple resources that perform the same function, you can configure DNS failover so that Route 53 will route your traffic from an unhealthy resource to a healthy resource.

Elastic Load Balancing

  • Is used to automatically distribute your incoming application traffic across all the EC2 instances that you are running.
  • You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed.
  • Your load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group.

AWS Storage Gateway

  • Is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.
  • It provides low-latency performance by caching frequently accessed data on-premises while storing data securely and durably in Amazon cloud storage services.
  • Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing data.
  • Storage Gateway also integrates natively with Amazon S3 cloud storage which makes your data available for in-cloud processing.
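A hedged boto3 sketch of the Route 53 failover record pair behind this design (hosted zone ID, domain, health check ID, and targets are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",                          # placeholder hosted zone
    ChangeBatch={"Changes": [
        {   # Primary record points at the on-premises web servers.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "primary-onprem",
                "Failover": "PRIMARY",
                "HealthCheckId": "0123abcd-hc-onprem",   # placeholder health check
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        },
        {   # Secondary record points at the ALB fronting the EC2 fleet on AWS.
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "secondary-aws",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",    # placeholder ALB zone ID
                    "DNSName": "failover-alb-123.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]},
)
```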
6
Q

The engineering manager for a content management application wants to set up RDS read replicas to provide enhanced performance and read scalability. The manager wants to understand the data transfer charges while setting up RDS read replicas.

Which of the following would you identify as correct regarding the data transfer charges for RDS read replicas?

A

There are data transfer charges for replicating data across AWS Regions

RDS Read Replicas

  • Provide enhanced performance and durability for RDS database (DB) instances.
  • They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
  • A read replica is billed as a standard DB Instance and at the same rates.
  • You are not charged for the data transfer incurred in replicating data between your source DB instance and read replica within the same AWS Region.
7
Q

You would like to migrate an AWS account from an AWS Organization A to an AWS Organization B. What are the steps to do it?

A
  1. Remove the member account from the old organization.
  2. Send an invite to the member account from the new Organization.
  3. Accept the invite to the new organization from the member account

AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS.

Using AWS Organizations

  • You can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance.
  • You can also simplify billing by setting up a single payment method for all of your AWS accounts.
  • Through integrations with other AWS services, you can use Organizations to define central configurations and resource sharing across accounts in your organization.
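A hedged boto3 sketch of the three steps; each call runs under a different identity (old organization's management account, new organization's management account, then the member account), and all account IDs are placeholders:

```python
import boto3

MEMBER_ACCOUNT_ID = "111122223333"   # placeholder member account

# 1. From the OLD organization's management account: remove the member account.
boto3.client("organizations").remove_account_from_organization(
    AccountId=MEMBER_ACCOUNT_ID
)

# 2. From the NEW organization's management account: send an invitation.
handshake = boto3.client("organizations").invite_account_to_organization(
    Target={"Id": MEMBER_ACCOUNT_ID, "Type": "ACCOUNT"}
)["Handshake"]

# 3. From the MEMBER account: accept the invitation.
boto3.client("organizations").accept_handshake(HandshakeId=handshake["Id"])
```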
8
Q

The engineering team at a logistics company has noticed that the Auto Scaling group (ASG) is not terminating an unhealthy Amazon EC2 instance.

As a Solutions Architect, which of the following options would you suggest to troubleshoot the issue? (Select three)

A

The health check grace period for the instance has not expired - Amazon EC2 Auto Scaling doesn’t terminate an instance that came into service based on EC2 status checks and ELB health checks until the health check grace period expires.

The instance may be in Impaired status

  • Amazon EC2 Auto Scaling does not immediately terminate instances with an Impaired status.
  • Instead, Amazon EC2 Auto Scaling waits a few minutes for the instance to recover.
  • Amazon EC2 Auto Scaling might also delay or not terminate instances that fail to report data for status checks.
  • This usually happens when there is insufficient data for the status check metrics in Amazon CloudWatch.

The instance has failed the ELB health check -

  • By default, Amazon EC2 Auto Scaling doesn’t use the results of ELB health checks to determine an instance’s health status when the group’s health check configuration is set to EC2.
  • As a result, Amazon EC2 Auto Scaling doesn’t terminate instances that fail ELB health checks.
  • If an instance’s status is OutOfService on the ELB console, but the instance’s status is Healthy on the Amazon EC2 Auto Scaling console, confirm that the health check type is set to ELB.
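To make the Auto Scaling group act on ELB health checks (and to give long-booting instances a grace period), a hedged boto3 sketch with a placeholder group name:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Switch the group's health check type to ELB so instances marked OutOfService
# by the load balancer are replaced, and give new instances time to boot
# before health checks count against them.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="logistics-asg",   # placeholder
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,             # seconds
)
```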
9
Q

A retail company wants to rollout and test a blue-green deployment for its global application in the next 48 hours. Most of the customers use mobile phones which are prone to DNS caching. The company has only two days left for the annual Thanksgiving sale to commence.

As a Solutions Architect, which of the following options would you recommend to test the deployment on as many users as possible in the given time frame?

A

Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment

Blue/green deployment is a technique for releasing applications by shifting traffic between two identical environments running different versions of the application:

  • “Blue” is the currently running version.
  • “Green” is the new version.
  • This type of deployment allows you to test features in the green environment without impacting the currently running version of your application.
  • When you’re satisfied that the green version is working properly, you can gradually reroute the traffic from the old blue environment to the new green environment.
  • Blue/green deployments can mitigate common risks associated with deploying software, such as downtime, and make rollback quick if something goes wrong.

AWS Global Accelerator

  • Is a network layer service that directs traffic to optimal endpoints over the AWS global network, which improves the availability and performance of your internet applications.
  • It provides two static anycast IP addresses that act as a fixed entry point to your application endpoints, such as your Application Load Balancers, Network Load Balancers, Elastic IP addresses, or Amazon EC2 instances, in a single or in multiple AWS Regions.
  • AWS Global Accelerator uses endpoint weights to determine the proportion of traffic that is directed to endpoints in an endpoint group, and traffic dials to control the percentage of traffic that is directed to an endpoint group (an AWS Region where your application is deployed).
  • While relying on the DNS service is a great option for blue/green deployments, it may not fit use-cases that require a fast and controlled transition of the traffic.
  • Some client devices and internet resolvers cache DNS answers for long periods; this DNS feature improves the efficiency of the DNS service as it reduces the DNS traffic across the Internet, and serves as a resiliency technique by preventing authoritative name-server overloads.
  • The downside of this in blue/green deployments is that you don’t know how long it will take before all of your users receive updated IP addresses when you update a record, change your routing preference or when there is an application failure.

With AWS Global Accelerator, you can shift traffic gradually or all at once between the blue and the green environment and vice-versa without being subject to DNS caching on client devices and internet resolvers, traffic dials and endpoint weights changes are effective within seconds.
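A hedged boto3 sketch of shifting a portion of traffic to the green endpoint group by adjusting its traffic dial (all ARNs and the 10% value are placeholders; the Global Accelerator API is assumed to be called through its us-west-2 endpoint):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Send 10% of traffic to the endpoint group hosting the green stack;
# raise the dial gradually (or to 100) once the new version checks out.
ga.update_endpoint_group(
    EndpointGroupArn="arn:aws:globalaccelerator::123456789012:accelerator/abcd"
                     "/listener/1234/endpoint-group/green",               # placeholder
    TrafficDialPercentage=10.0,
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012"
                      ":loadbalancer/app/green-alb/abc123",               # placeholder ALB
        "Weight": 128,
    }],
)
```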

10
Q

You would like to use Snowball to move on-premises backups into a long term archival tier on AWS. Which solution provides the MOST cost savings?

A

Create a Snowball job and target an S3 bucket. Create a lifecycle policy to immediately move data to Glacier Deep Archive

AWS Snowball, a part of the AWS Snow Family

  • Is a data migration and edge computing device that comes in two options.

Snowball Edge Storage Optimized devices

  • Provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs.
  • They are well suited for local storage and large-scale data transfer.
  • Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS.
  • It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases.

Snowball Edge Compute Optimized devices

  • Provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments.

The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are now the primary devices used for data transfer. You may see the Snowball device on the exam, just remember that the original Snowball device had 80TB of storage space.

You can’t move data directly from Snowball into Glacier; you need to go through S3 first and then use a lifecycle policy, as sketched below.
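A hedged boto3 sketch of the lifecycle rule that moves the imported backups to Glacier Deep Archive right away (bucket name and prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to Deep Archive immediately (day 0) after creation, so the
# imported backups do not keep accruing S3 Standard storage charges.
s3.put_bucket_lifecycle_configuration(
    Bucket="onprem-backups",                      # placeholder bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "to-deep-archive",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},                 # whole bucket
        "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
    }]},
)
```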

11
Q

A company has many VPCs in various accounts that need to be connected in a star network with one another and connected with on-premises networks through Direct Connect.

What do you recommend?

A

Transit Gateway

AWS Transit Gateway

  • Is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway.
  • With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway into each Amazon VPC, on-premises data center, or remote office across your network.

Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks which act like spokes.
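A hedged boto3 sketch of creating the hub and attaching one spoke VPC (all IDs are placeholders); each additional VPC just adds another attachment, and the gateway itself can be shared with the other accounts through RAM:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the hub of the star network.
tgw = ec2.create_transit_gateway(Description="star-network hub")["TransitGateway"]

# Attach a spoke VPC (repeat per VPC / per account after sharing the TGW via RAM).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0abc1234",                        # placeholder
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],    # placeholder subnets
)
```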

12
Q

You have multiple AWS accounts within a single AWS Region managed by AWS Organizations and you would like to ensure all EC2 instances in all these accounts can communicate privately. Which of the following solutions provides the capability at the CHEAPEST cost?

A

Create a VPC in an account and share one or more of its subnets with the other accounts using Resource Access Manager

AWS Resource Access Manager (RAM)

  • Is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization.
  • You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM.
  • RAM eliminates the need to create duplicate resources in multiple accounts, reducing the operational overhead of managing those resources in every single account you own.
  • You can create resources centrally in a multi-account environment, and use RAM to share those resources across accounts in three simple steps:
  1. Create a Resource Share
  2. Specify resources
  3. Specify accounts

RAM is available to you at no additional charge.

The correct solution is to share the subnet(s) within a VPC using RAM.

This will allow all EC2 instances to be deployed in the same VPC (although from different accounts) and easily communicate with one another.
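A hedged boto3 sketch of sharing a subnet with other accounts of the organization via RAM (subnet ARN and account IDs are placeholders):

```python
import boto3

ram = boto3.client("ram")

# Share the subnet so the other accounts can launch EC2 instances into it.
ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc1234",  # placeholder subnet ARN
    ],
    principals=["222222222222", "333333333333"],  # placeholder member accounts
    allowExternalPrincipals=False,                # keep sharing within the Organization
)
```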

13
Q

An IT company provides S3 bucket access to specific users within the same account for completing project-specific work. With changing business requirements, cross-account S3 access requests are also growing every month. The company is looking for a solution that can offer user-level as well as account-level access permissions for the data stored in S3 buckets.

As a Solutions Architect, which of the following would you suggest as the MOST optimized way of controlling access for this use-case?

A

Use Amazon S3 Bucket Policies

  • Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket.
  • Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions.
  • With bucket policies, you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources.
  • You can further restrict access to specific resources based on certain conditions.

For example, you can restrict access based on request time (Date Condition), whether the request was sent using SSL (Boolean Conditions), a requester’s IP address (IP Address Condition), or based on the requester’s client application (String Conditions). To identify these conditions, you use policy keys.
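A hedged sketch of a bucket policy that grants both a user in the same account and another account read access, with an example SSL-only condition, applied with boto3 (bucket name, ARNs, and accounts are placeholders):

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "UserAndCrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": [
            "arn:aws:iam::111111111111:user/project-analyst",  # user-level access (placeholder)
            "arn:aws:iam::222222222222:root",                   # account-level access (placeholder)
        ]},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::project-data", "arn:aws:s3:::project-data/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "true"}},  # require SSL
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="project-data", Policy=json.dumps(policy))
```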

14
Q

A company has recently launched a new mobile gaming application that the users are adopting rapidly. The company uses RDS MySQL as the database. The engineering team wants an urgent solution because the rapidly increasing workload might exceed the available database storage.

As a solutions architect, which of the following solutions would you recommend so that it requires minimum development and systems administration effort to address this requirement?

A

Enable storage auto-scaling for RDS MySQL

If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance.

  • With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database space it automatically scales up your storage.
  • Amazon RDS starts a storage modification for an autoscaling-enabled DB instance when these factors apply:
    • Free available space is less than 10 percent of the allocated storage.
    • The low-storage condition lasts at least five minutes.
    • At least six hours have passed since the last storage modification.
  • The maximum storage threshold is the limit that you set for autoscaling the DB instance.
  • You can’t set the maximum storage threshold for autoscaling-enabled instances to a value greater than the maximum allocated storage.
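A hedged boto3 sketch of enabling storage autoscaling on the existing RDS MySQL instance by setting a maximum storage threshold (instance identifier and the ceiling are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage enables storage autoscaling; RDS then grows the
# volume automatically up to this ceiling when free space runs low.
rds.modify_db_instance(
    DBInstanceIdentifier="gaming-mysql",   # placeholder
    MaxAllocatedStorage=1000,              # ceiling in GiB (placeholder)
    ApplyImmediately=True,
)
```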
15
Q

Upon a security review of your AWS account, an AWS consultant has found that a few RDS databases are unencrypted. As a Solutions Architect, what steps must be taken to encrypt the RDS databases?

A
  1. Take a snapshot of the database
  2. Copy it as an encrypted snapshot
  3. Restore a database from the encrypted snapshot
  4. Terminate the previous database

  • Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.
  • It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.
  • You can encrypt your Amazon RDS DB instances and snapshots at rest by enabling the encryption option for your Amazon RDS DB instances.
  • Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, read replicas, and snapshots.

You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created.

  • However, because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance.

That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot.
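A hedged boto3 sketch of the snapshot-copy-restore sequence (identifiers and the KMS key are placeholders); the old unencrypted instance is deleted only after cutover and verification:

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted database and wait for the snapshot to be available.
rds.create_db_snapshot(DBInstanceIdentifier="legacy-db",
                       DBSnapshotIdentifier="legacy-db-snap")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-db-snap")

# 2. Copy the snapshot with encryption enabled (placeholder KMS key).
rds.copy_db_snapshot(SourceDBSnapshotIdentifier="legacy-db-snap",
                     TargetDBSnapshotIdentifier="legacy-db-snap-enc",
                     KmsKeyId="alias/aws/rds")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-db-snap-enc")

# 3. Restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-db-encrypted",
    DBSnapshotIdentifier="legacy-db-snap-enc")

# 4. After cutover and verification, delete the old unencrypted instance.
rds.delete_db_instance(DBInstanceIdentifier="legacy-db", SkipFinalSnapshot=True)
```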

16
Q

A big data consulting firm needs to set up a data lake on Amazon S3 for a Health-Care client. The data lake is split into raw and refined zones. For compliance reasons, the source data needs to be kept for a minimum of 5 years. The source data arrives in the raw zone and is then processed via an AWS Glue based ETL job into the refined zone. The business analysts run ad-hoc queries only on the data in the refined zone using AWS Athena. The team is concerned about the cost of data storage in both the raw and refined zones as the data is increasing at a rate of 1 TB daily in each zone.

As a solutions architect, which of the following would you recommend as the MOST cost-optimal solution? (Select two)

A

Setup a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation

  • You can manage your objects so that they are stored cost-effectively throughout their lifecycle by configuring their Amazon S3 Lifecycle.

An S3 Lifecycle configuration

  • Is a set of rules that define actions that Amazon S3 applies to a group of objects.
  • For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.

For the given use-case, the raw zone consists of the source data, so it cannot be deleted due to compliance reasons. Therefore, you should use a lifecycle policy to transition the raw zone data into Glacier Deep Archive after 1 day of object creation.

Use Glue ETL job to write the transformed data in the refined zone using a compressed file format

AWS Glue

  • Is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.
  • You cannot transition the refined zone data into Glacier Deep Archive because it is used by the business analysts for ad-hoc querying.

Therefore, the best optimization is to have the refined zone data stored in a compressed format via the Glue job. The compressed data would reduce the storage cost incurred on the data in the refined zone.

17
Q

You would like to store a database password in a secure place, and enable automatic rotation of that password every 90 days. What do you recommend?

A

Use AWS Secrets Manager

  • AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources.
  • The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
  • Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.
  • Secrets Manager offers secret rotation with built-in integration for Amazon RDS, Amazon Redshift, and Amazon DocumentDB.
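A hedged boto3 sketch of storing the password and enabling 90-day rotation (secret name, secret values, and the rotation Lambda ARN are placeholders):

```python
import json
import boto3

sm = boto3.client("secretsmanager")

# Store the database credentials as a secret.
secret = sm.create_secret(
    Name="prod/db/credentials",                     # placeholder
    SecretString=json.dumps({"username": "app", "password": "REPLACE_ME"}),
)

# Rotate automatically every 90 days using a rotation Lambda function.
sm.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 90},
)
```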
18
Q

A financial services company has developed its flagship application on AWS Cloud with data security requirements such that the encryption key must be stored in a custom application running on-premises. The company wants to offload the data storage as well as the encryption process to Amazon S3 but continue to use the existing encryption key.

Which of the following S3 encryption options allows the company to leverage Amazon S3 for storing data with given constraints?

A

Server-Side Encryption with Customer-Provided Keys (SSE-C)

You have the following options for protecting data at rest in Amazon S3:

Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects. With Server-Side Encryption with Customer-Provided Keys (SSE-C), you manage the encryption keys and supply them with each request, while Amazon S3 performs the encryption as it writes to disks and the decryption when you access your objects. This matches the requirement to keep the encryption key in the on-premises application while offloading storage and encryption to S3.

Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
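A hedged boto3 sketch of SSE-C: the application supplies the on-premises key with every request and S3 performs the encryption (bucket, object, and key material are placeholders):

```python
import boto3

s3 = boto3.client("s3")
onprem_key = b"0" * 32   # placeholder 256-bit key from the on-premises key manager

# Upload: S3 encrypts the object with the customer-provided key, then discards the key.
s3.put_object(
    Bucket="finserv-data", Key="ledger.csv", Body=b"...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=onprem_key,
)

# Download: the same key must be supplied again to decrypt the object.
obj = s3.get_object(
    Bucket="finserv-data", Key="ledger.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=onprem_key,
)
```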

19
Q

A social media application is hosted on an EC2 server fleet running behind an Application Load Balancer. The application traffic is fronted by a CloudFront distribution. The engineering team wants to decouple the user authentication process for the application, so that the application servers can just focus on the business logic.

As a Solutions Architect, which of the following solutions would you recommend to the development team so that it requires minimal development effort?

A

Use Cognito Authentication via Cognito User Pools for your Application Load Balancer

  • Application Load Balancer can be used to securely authenticate users for accessing your applications.
  • This enables you to offload the work of authenticating users to your load balancer so that your applications can focus on their business logic.
  • You can use Cognito User Pools to authenticate users through well-known social IdPs, such as Amazon, Facebook, or Google, or through corporate identities using SAML, LDAP, or Microsoft AD, all via the user pools supported by Amazon Cognito.
  • You configure user authentication by creating an authenticate action for one or more listener rules.
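A hedged boto3 sketch of an ALB listener rule that authenticates through a Cognito user pool before forwarding to the application target group (all ARNs, IDs, and the domain are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012"
                ":listener/app/social-alb/abc123/def456",            # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/app/*"]}],
    Actions=[
        {   # 1. Authenticate the user against the Cognito user pool.
            "Type": "authenticate-cognito",
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012"
                               ":userpool/us-east-1_ABC123",          # placeholder
                "UserPoolClientId": "exampleclientid",                # placeholder
                "UserPoolDomain": "social-app-auth",                  # placeholder
            },
        },
        {   # 2. Forward authenticated requests to the application servers.
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012"
                              ":targetgroup/app-tg/xyz789",           # placeholder
        },
    ],
)
```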
20
Q

You have an in-memory database launched on an EC2 instance and you would like to be able to stop and start the EC2 instance without losing the in-memory state of your database. What do you recommend?

A

Use EC2 Instance Hibernate

  • When you hibernate an instance, AWS signals the operating system to perform hibernation (suspend-to-disk).
  • Hibernation saves the contents from the instance memory (RAM) to your Amazon EBS root volume.
  • AWS then persists the instance’s Amazon EBS root volume and any attached Amazon EBS data volumes.

When you start your instance:

  • The Amazon EBS root volume is restored to its previous state
  • The RAM contents are reloaded
  • The processes that were previously running on the instance are resumed
  • Previously attached data volumes are reattached and the instance retains its instance ID

For the given use-case, we must use EC2 Instance Hibernate, which preserves the in-memory state of our EC2 instance upon hibernating it.
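A hedged boto3 sketch: hibernation must be enabled at launch, after which the instance can be stopped with the hibernate flag (AMI and instance type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch with hibernation enabled (requires an encrypted EBS root volume
# large enough to hold the instance's RAM).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder
    InstanceType="r5.large",
    MinCount=1, MaxCount=1,
    HibernationOptions={"Configured": True},
)
instance_id = resp["Instances"][0]["InstanceId"]

# Later: hibernate instead of a plain stop, so RAM is saved to the EBS root volume.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```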

21
Q

A company is looking at storing their less frequently accessed files on AWS that can be concurrently accessed by hundreds of EC2 instances. The company needs the most cost-effective file storage service that provides immediate access to data whenever needed.

Which of the following options represents the best solution for the given requirements?

A
Use the Amazon EFS Standard–IA storage class

  • Amazon EFS is a file storage service for use with Amazon compute (EC2, containers, serverless) and on-premises servers.
  • Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently accessible storage for up to thousands of Amazon EC2 instances.
  • The Standard–IA storage class reduces storage costs for files that are not accessed every day.
  • It does this without sacrificing the high availability, high durability, elasticity, and POSIX file system access that Amazon EFS provides.

AWS recommends Standard-IA storage if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed.
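A hedged boto3 sketch of an EFS lifecycle policy that moves files to the Standard-IA storage class after 30 days without access (file system ID is a placeholder):

```python
import boto3

efs = boto3.client("efs")

# Files untouched for 30 days transition to EFS Standard-IA automatically.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0abc1234",          # placeholder
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```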

22
Q

A tax computation software runs on Amazon EC2 instances behind a Classic Load Balancer. The instances are managed by an Auto Scaling Group. The tax computation software has an optimization module, which can take up to 10 minutes to find the optimal answer.

How do you ensure that when the Auto Scaling Group initiates a scale-in event, the users do not see their current requests interrupted?

A

Increase the deregistration delay to more than 10 minutes

  • Elastic Load Balancing stops sending requests to targets that are deregistering.
  • By default, Elastic Load Balancing waits 300 seconds before completing the deregistration process, which allows in-flight requests to the target to complete.
  • We need to update this value to more than 10 minutes to allow our tax software to complete in-flight requests.
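A hedged boto3 sketch; for the Classic Load Balancer in this scenario the setting is called connection draining, while the equivalent for ALB/NLB target groups is the deregistration delay attribute (names and ARNs are placeholders):

```python
import boto3

# Classic Load Balancer: raise connection draining above 10 minutes (660 seconds).
boto3.client("elb").modify_load_balancer_attributes(
    LoadBalancerName="tax-clb",          # placeholder
    LoadBalancerAttributes={"ConnectionDraining": {"Enabled": True, "Timeout": 660}},
)

# ALB/NLB equivalent: deregistration delay on the target group.
boto3.client("elbv2").modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012"
                   ":targetgroup/tax-tg/abc123",   # placeholder
    Attributes=[{"Key": "deregistration_delay.timeout_seconds", "Value": "660"}],
)
```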
23
Q

A retail company wants to share sensitive accounting data that is stored in an Amazon RDS DB instance with an external auditor. The auditor has its own AWS account and needs its own copy of the database.

Which of the following would you recommend to securely share the database with the auditor?

A

Create an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key Management Service (AWS KMS) encryption key

  • You can share the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used to encrypt the snapshot with any accounts that you want to be able to access the snapshot.
  • You can share AWS KMS CMKs with another AWS account by adding the other account to the AWS KMS key policy.
  • Making an encrypted snapshot of the database will give the auditor a copy of the database, as required for the given use case.
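A hedged boto3 sketch of sharing the encrypted snapshot and granting the auditor's account use of the KMS key (snapshot identifier, key ARN, and account IDs are placeholders; the exact grant operations needed may vary):

```python
import boto3

AUDITOR_ACCOUNT = "999999999999"        # placeholder auditor account

# 1. Share the manual, encrypted snapshot with the auditor's account.
boto3.client("rds").modify_db_snapshot_attribute(
    DBSnapshotIdentifier="accounting-db-snap-encrypted",   # placeholder
    AttributeName="restore",
    ValuesToAdd=[AUDITOR_ACCOUNT],
)

# 2. Allow the auditor's account to use the customer managed key that encrypted it.
boto3.client("kms").create_grant(
    KeyId="arn:aws:kms:us-east-1:111111111111:key/1234abcd-56ef-78gh-90ij-example",  # placeholder CMK
    GranteePrincipal=f"arn:aws:iam::{AUDITOR_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)
```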
24
Q

An e-commerce application uses an Amazon Aurora Multi-AZ deployment for its database. While analyzing the performance metrics, the engineering team has found that the database reads are causing high I/O and adding latency to the write requests against the database.

As an AWS Certified Solutions Architect Associate, what would you recommend to separate the read requests from the write requests?

A

Set up a read replica and modify the application to use the appropriate endpoint

An Amazon Aurora DB cluster

  • Consists of one or more DB instances and a cluster volume that manages the data for those DB instances.
  • An Aurora cluster volume is a virtual database storage volume that spans multiple Availability Zones, with each Availability Zone having a copy of the DB cluster data.

Two types of DB instances make up an Aurora DB cluster:

Primary DB instance

  • Supports read and write operations, and performs all of the data modifications to the cluster volume.
  • Each Aurora DB cluster has one primary DB instance.

Aurora Replica

  • Connects to the same storage volume as the primary DB instance and supports only read operations.
  • Each Aurora DB cluster can have up to 15 Aurora Replicas in addition to the primary DB instance.
  • Aurora automatically fails over to an Aurora Replica in case the primary DB instance becomes unavailable.
  • You can specify the failover priority for Aurora Replicas.
  • Aurora Replicas can also offload read workloads from the primary DB instance.
  • Aurora Replicas have two main purposes.
  • You can issue queries to them to scale the read operations for your application.
  • You typically do so by connecting to the reader endpoint of the cluster.
  • That way, Aurora can spread the load for read-only connections across as many Aurora Replicas as you have in the cluster.
  • Aurora Replicas also help to increase availability.
  • If the writer instance in a cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as the new writer.
  • While setting up a Multi-AZ deployment for Aurora, you create an Aurora replica or reader node in a different AZ.
  • You use the reader endpoint for read-only connections for your Aurora cluster.
  • This endpoint uses a load-balancing mechanism to help your cluster handle a query-intensive workload.
  • The reader endpoint is the endpoint that you supply to applications that do reporting or other read-only operations on the cluster.
  • The reader endpoint load-balances connections to available Aurora Replicas in an Aurora DB cluster.
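A hedged boto3 sketch of looking up the two endpoints so the application can route writes to the cluster endpoint and reads to the reader endpoint (cluster identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

cluster = rds.describe_db_clusters(
    DBClusterIdentifier="ecommerce-aurora"        # placeholder
)["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]          # send INSERT/UPDATE/DELETE here
reader_endpoint = cluster["ReaderEndpoint"]    # send read-only queries here

print(f"writes -> {writer_endpoint}")
print(f"reads  -> {reader_endpoint}")
```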
25
Q

An e-commerce company operates multiple AWS accounts and has interconnected these accounts in a hub-and-spoke style using the AWS Transit Gateway. VPCs have been provisioned across these AWS accounts to facilitate network isolation.

Which of the following solutions would reduce both the administrative overhead and the costs while providing shared access to services required by workloads in each of the VPCs?

A

Build a shared services VPC

Consider an organization that has built a hub-and-spoke network with AWS Transit Gateway.

  • VPCs have been provisioned into multiple AWS accounts, perhaps to facilitate network isolation or to enable delegated network administration.
  • When deploying distributed architectures such as this, a popular approach is to build a “shared services” VPC, which provides access to services required by workloads in each of the VPCs.
  • This might include directory services or VPC endpoints.
  • Sharing resources from a central location instead of building them in each VPC may reduce administrative overhead and cost.
  • A VPC endpoint allows you to privately connect your VPC to supported AWS services without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
  • Endpoints are virtual devices that are horizontally scaled, redundant, and highly available VPC components.
  • They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
  • VPC endpoints enable you to reduce data transfer charges resulting from network communication between private VPC resources (such as Amazon EC2 instances) and AWS services (such as Amazon Quantum Ledger Database, or QLDB).
  • Without VPC endpoints configured, communications that originate from within a VPC destined for public AWS services must egress AWS to the public Internet in order to access AWS services.
  • This network path incurs outbound data transfer charges.
  • Data transfer charges for traffic egressing from Amazon EC2 to the Internet vary based on volume.
  • With VPC endpoints configured, communication between your VPC and the associated AWS service does not leave the Amazon network.
  • If your workload requires you to transfer significant volumes of data between your VPC and AWS, you can reduce costs by leveraging VPC endpoints.
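A hedged boto3 sketch of adding an interface VPC endpoint in the shared services VPC (service name, VPC, subnets, and security group are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# An interface endpoint keeps traffic to the service on the AWS network
# instead of routing it over the public Internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0sharedsvc",                       # placeholder shared services VPC
    ServiceName="com.amazonaws.us-east-1.sqs",    # example service
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],     # placeholder subnets
    SecurityGroupIds=["sg-0endpoint"],            # placeholder security group
    PrivateDnsEnabled=True,
)
```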
26
Q

Amazon EC2 Auto Scaling needs to terminate an instance from Availability Zone (AZ) us-east-1a as it has the largest number of instances among the AZs being used currently. There are 4 instances in the AZ us-east-1a, as follows: Instance A has the oldest launch template, Instance B has the oldest launch configuration, Instance C has the newest launch configuration, and Instance D is closest to the next billing hour.

Which of the following instances would be terminated per the default termination policy?

A

Instance B

Per the default termination policy

The first priority is given to any allocation strategy for On-Demand vs Spot instances.

  • The next priority is to consider any instance with the oldest launch template unless there is an instance that uses a launch configuration.
  • Next, you need to consider any instance which has the oldest launch configuration.
  • This implies Instance B will be selected for termination and Instance C will also be ruled out as it has the newest launch configuration.
  • Instance D, which is closest to the next billing hour, is not selected as this criterion is last in the order of priority.
27
Q

A silicon valley based startup has a content management application with the web-tier running on EC2 instances and the database tier running on Amazon Aurora. Currently, the entire infrastructure is located in us-east-1 region. The startup has 90% of its customers in the US and Europe. The engineering team is getting reports of deteriorated application performance from customers in Europe with high application load time.

As a solutions architect, which of the following would you recommend to address these performance issues? (Select two)

A
Set up another fleet of EC2 instances for the web tier in the eu-west-1 region. Enable latency routing policy in Route 53

  • Use latency-based routing when you have resources in multiple AWS Regions and you want to route traffic to the Region that provides the lowest latency.
  • To use latency-based routing, you create latency records for your resources in multiple AWS Regions.
  • When Route 53 receives a DNS query for your domain or subdomain (example.com or acme.example.com), it determines which AWS Regions you’ve created latency records for, determines which Region gives the user the lowest latency, and then selects a latency record for that Region.
  • Route 53 responds with the value from the selected record, such as the IP address for a web server.
  • As customers in Europe are facing performance issues with high application load time, you can use latency-based routing to reduce the latency.

Create Amazon Aurora read replicas in the eu-west-1 region

  • Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.
  • Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64 TB per database instance.
  • Amazon Aurora read replicas can be used to scale out reads across Regions.
  • This will improve the application performance for users in Europe.
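A hedged boto3 sketch of the latency records for the two Regions (hosted zone, domain, and ALB targets are placeholders):

```python
import boto3

route53 = boto3.client("route53")

def latency_record(region, alb_dns, alb_zone_id):
    # One latency record per Region; Route 53 answers with the lowest-latency one.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A",
            "SetIdentifier": f"web-{region}",
            "Region": region,
            "AliasTarget": {"HostedZoneId": alb_zone_id,
                            "DNSName": alb_dns,
                            "EvaluateTargetHealth": True},
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",    # placeholder hosted zone
    ChangeBatch={"Changes": [
        latency_record("us-east-1", "use1-alb.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),  # placeholder
        latency_record("eu-west-1", "euw1-alb.eu-west-1.elb.amazonaws.com", "Z32O12XQLNTSW2"),  # placeholder
    ]},
)
```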
28
Q

You have a team of developers in your company, and you would like to ensure they can quickly experiment with AWS Managed Policies by attaching them to their accounts, but you would like to prevent them from doing an escalation of privileges, by granting themselves the AdministratorAccess managed policy. How should you proceed?

A

For each developer, define an IAM permission boundary that will restrict the managed policies they can attach to themselves

  • AWS supports permissions boundaries for IAM entities (users or roles).
  • A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity.
  • An entity’s permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries.

Here we have to use an IAM permission boundary. They can only be applied to roles or users, not IAM groups.
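A hedged boto3 sketch: a boundary policy caps each developer's effective permissions, so attaching AdministratorAccess to themselves has no effect beyond the boundary (policy name, allowed services, and user name are placeholders):

```python
import json
import boto3

iam = boto3.client("iam")

# A boundary that caps developers at, for example, S3, DynamoDB, and Lambda actions.
boundary = iam.create_policy(
    PolicyName="DeveloperBoundary",        # placeholder
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": ["s3:*", "dynamodb:*", "lambda:*"],
                       "Resource": "*"}],
    }),
)["Policy"]

# Attach the boundary to a developer; identity-based policies beyond it grant nothing.
iam.put_user_permissions_boundary(
    UserName="dev-alice",                  # placeholder
    PermissionsBoundary=boundary["Arn"],
)
```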

29
Q

A weather forecast agency collects key weather metrics across multiple cities in the US and sends this data in the form of key-value pairs to AWS Cloud at a one-minute frequency.

As a solutions architect, which of the following AWS services would you use to build a solution for processing and then reliably storing this data with high availability? (Select two)

A

Lambda

  • With AWS Lambda, you can run code without provisioning or managing servers.
  • You pay only for the compute time that you consume—there’s no charge when your code isn’t running.
  • You can run code for virtually any type of application or backend service—all with zero administration.

DynamoDB

  • Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale.
  • It’s a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.

DynamoDB is a NoSQL database and it’s best suited to store data in key-value pairs.

AWS Lambda can be combined with DynamoDB to process and capture the key-value weather data described in the use-case, as sketched below.
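A hedged sketch of a Lambda handler that writes each incoming key-value reading into a DynamoDB table (the table name, key schema, and event shape are assumptions):

```python
import boto3

table = boto3.resource("dynamodb").Table("WeatherMetrics")   # placeholder table

def handler(event, context):
    # Assumed event shape: {"city": "Boston", "timestamp": "...", "metrics": {...}}
    table.put_item(Item={
        "city": event["city"],            # assumed partition key
        "timestamp": event["timestamp"],  # assumed sort key
        "metrics": event["metrics"],
    })
    return {"status": "stored"}
```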

30
Q

Which of the following IAM policies provides read-only access to the S3 bucket mybucket and its content?

A

Correct option:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}

  • You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources.
  • A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
  • AWS evaluates these policies when an IAM principal (user or role) makes a request.
  • Permissions in the policies determine whether the request is allowed or denied.
  • Most policies are stored in AWS as JSON documents.
  • AWS supports six types of policies:
  1. identity-based policies
  2. resource-based policies
  3. permissions boundaries
  4. Organizations SCPs
  5. ACLs
  6. session policies
31
Q

A big-data consulting firm is working on a client engagement where the ETL workloads are currently handled via a Hadoop cluster deployed in the on-premises data center. The client wants to migrate their ETL workloads to AWS Cloud. The AWS Cloud solution needs to be highly available with about 50 EC2 instances per Availability Zone.

As a solutions architect, which of the following EC2 placement groups would you recommend to handle the distributed ETL workload?

A

Partition placement group

  • You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload.
  • Depending on the type of workload, you can create a placement group using one of the following placement strategies:
    • Partition – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions.

This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
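A hedged boto3 sketch of creating the partition placement group and launching instances into it (group name, AMI, and sizes are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Partition strategy: instances in different partitions do not share hardware.
ec2.create_placement_group(GroupName="etl-hadoop-pg", Strategy="partition",
                           PartitionCount=7)

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder
    InstanceType="r5.2xlarge",
    MinCount=50, MaxCount=50,
    Placement={"GroupName": "etl-hadoop-pg"},
)
```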

32
Q

You would like to mount a network file system on Linux instances, where files will be stored and accessed frequently at first, and then infrequently. What solution is the MOST cost-effective?

A

EFS IA

  • Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
  • Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability.
  • Amazon EFS Infrequent Access (EFS IA) is a storage class that provides price/performance that is cost-optimized for files not accessed every day, with storage prices up to 92% lower compared to Amazon EFS Standard.
33
Q

You are establishing a monitoring solution for desktop systems that will be sending telemetry data into AWS every minute. Data for each system must be processed in order, independently, and you would like to scale the number of consumers to be possibly equal to the number of desktop systems that are being monitored.

What do you recommend?

A

Use an SQS FIFO queue, and make sure the telemetry data is sent with a Group ID attribute representing the value of the Desktop ID

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.

SQS offers two types of message queues.

  1. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery.
  2. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.

We, therefore, need to use an SQS FIFO queue.

  • If we don’t specify a GroupID, then all the messages are in absolute order, but we can only have 1 consumer at most.
  • To allow for multiple consumers to read data for each Desktop application, and to scale the number of consumers, we should use the “Group ID” attribute.
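A hedged boto3 sketch of sending telemetry with the desktop ID as the message group, so each desktop's data stays ordered while different desktops can be consumed in parallel (queue URL and payload shape are placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")

def send_telemetry(desktop_id: str, payload: dict) -> None:
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/telemetry.fifo",  # placeholder
        MessageBody=json.dumps(payload),
        MessageGroupId=desktop_id,                               # ordering is per desktop
        MessageDeduplicationId=f"{desktop_id}-{payload['ts']}",  # or enable content-based dedup
    )

send_telemetry("desktop-001", {"ts": "2024-05-01T10:00:00Z", "cpu": 37.5})
```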
34
Q

An application runs big data workloads on EC2 instances. The application needs at least 20 instances to maintain a minimum acceptable performance threshold and the application needs 300 instances to handle spikes in the workload. Based on historical workloads processed by the application, it needs 80 instances 80% of the time.

As a solutions architect, which of the following would you recommend as the MOST cost-optimal solution so that it can meet the workload demand in a steady state?

A

Purchase 80 reserved instances.

Provision additional on-demand and spot instances per the workload demand (Use Auto Scaling Group with launch template to provision the mix of on-demand and spot instances)

As the steady-state workload demand is 80 instances, we can save on costs by purchasing 80 reserved instances.

Based on additional workload demand, we can provision a mix of on-demand and spot instances using an Auto Scaling group with a launch template.