Udemy Exam 2 Flashcards
An IT company is working on client engagement to build a real-time data analytics tool for the Internet of Things (IoT) data. The IoT data is funneled into Kinesis Data Streams which further acts as the source of a delivery stream for Kinesis Firehose. The engineering team has now configured a Kinesis Agent to send IoT data from another set of devices to the same Firehose delivery stream. They noticed that data is not reaching Firehose as expected.
As a solutions architect, which of the following options would you attribute as the MOST plausible root cause behind this issue?
**Kinesis Agent cannot write to a Kinesis Firehose delivery stream for which the source is already set as Kinesis Data Streams**
- Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools.
- It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
- It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
- When a Kinesis data stream is configured as the source of a Firehose delivery stream, Firehose’s PutRecord and PutRecordBatch operations are disabled and Kinesis Agent cannot write to Firehose delivery stream directly.
- Data needs to be added to the Kinesis data stream through the Kinesis Data Streams PutRecord and PutRecords operations instead.
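For illustration, here is a minimal boto3 sketch of writing a record through the Kinesis Data Streams PutRecord API instead of calling Firehose directly; the stream name and the IoT payload are made-up placeholders.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical IoT payload and stream name (the stream that is configured as
# the Firehose delivery stream's source).
record = {"device_id": "sensor-42", "temperature": 21.7}

kinesis.put_record(
    StreamName="iot-ingest-stream",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["device_id"],  # keeps records from one device in order
)
```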
A financial services company wants to implement a solution that ensures that the order of financial transactions is preserved and no duplicate transactions are created.
As a solutions architect, which of the following solutions would you recommend?
Publish transaction updates using SNS FIFO topic, which is subscribed by SQS FIFO queue for further processing
The two most common forms of asynchronous service-to-service communication are message queues and publish/subscribe messaging:
With message queues
- Messages are stored on the queue until they are processed and deleted by a consumer.
Amazon Simple Queue Service (SQS) provides a fully managed message queuing service with no administrative overhead.
- With pub/sub messaging, a message published to a topic is delivered to all subscribers to the topic.
Amazon Simple Notification Service (SNS)
Is a fully managed pub/sub messaging service that enables message delivery to a large number of subscribers.
- Each subscriber can also set a filter policy to receive only the messages that it cares about.
Per the use-case, the financial transactions have to be processed and stored in the exact order they take place. So an SNS FIFO topic, subscribed by an SQS FIFO queue, is the right choice.
With SQS
- You can use FIFO (First-In-First-Out) queues to preserve the order in which messages are sent and received and to prevent a message from being processed more than once.
- Similar capabilities for pub/sub messaging are achieved through SNS FIFO topics, providing strict message ordering and deduplicated message delivery to one or more subscribers (see the sketch below).
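A minimal boto3 sketch of this pattern, assuming made-up topic and queue names; a real setup also needs an SQS queue policy that allows the SNS topic to send messages, omitted here for brevity.

```python
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# FIFO topic and queue names must end in ".fifo".
topic = sns.create_topic(
    Name="transactions.fifo",
    Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "true"},
)
queue = sqs.create_queue(
    QueueName="transactions.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the FIFO queue to the FIFO topic; ordering is preserved per
# MessageGroupId and duplicates are dropped within the deduplication window.
sns.subscribe(TopicArn=topic["TopicArn"], Protocol="sqs", Endpoint=queue_arn)

sns.publish(
    TopicArn=topic["TopicArn"],
    Message=json.dumps({"txn_id": "1001", "amount": 250}),
    MessageGroupId="account-123",  # all messages for one account stay ordered
)
```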
The engineering team at an e-commerce company is working on cost optimizations for EC2 instances. The team wants to manage the workload using a mix of on-demand and spot instances across multiple instance types. They would like to create an Auto Scaling group with a mix of these instances.
Which of the following options would allow the engineering team to provision the instances for this use-case?
Only a launch template (not a launch configuration) lets you provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost.
A launch template is similar to a launch configuration, in that it specifies instance configuration information such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances.
Also, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.
With launch templates, you can provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost.
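A hedged boto3 sketch of such a mixed-instances Auto Scaling group; the launch template name, subnet IDs, instance types, and distribution numbers are placeholders, not the exam's prescribed values.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Assumes a launch template named "web-template" already exists.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-0aaa11111111111aa,subnet-0bbb22222222222bb",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-template",
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                  # always On-Demand
            "OnDemandPercentageAboveBaseCapacity": 50,  # rest split with Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```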
An IT company wants to optimize the costs incurred on its fleet of 100 EC2 instances for the next year. Based on historical analyses, the engineering team observed that 70 of these instances handle the compute services of its flagship application and need to be always available. The other 30 instances are used to handle batch jobs that can afford a delay in processing.
As a solutions architect, which of the following would you recommend as the MOST cost-optimal solution?
Purchase 70 reserved instances and 30 spot instances
- As 70 instances need to be always available, these can be purchased as reserved instances for a one-year duration.
- The other 30 instances responsible for the batch job can be purchased as spot instances.
- Even if some of the spot instances are interrupted, other spot instances can continue with the job.
A manufacturing company receives unreliable service from its data center provider because the company is located in an area prone to natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on AWS in case the on-premises data center fails. The company runs web servers that connect to external vendors. The data available on AWS and on-premises must be uniform.
Which of the following solutions would have the LEAST amount of downtime?
- Set up a Route 53 failover record.
- Run application servers on EC2 instances behind an Application Load Balancer in an Auto Scaling group.
- Set up AWS Storage Gateway with stored volumes to back up data to S3
If you have multiple resources that perform the same function, you can configure DNS failover so that Route 53 will route your traffic from an unhealthy resource to a healthy resource.
Elastic Load Balancing
- Is used to automatically distribute your incoming application traffic across all the EC2 instances that you are running.
- You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed.
- Your load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group.
AWS Storage Gateway
- Is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.
- It provides low-latency performance by caching frequently accessed data on-premises while storing data securely and durably in Amazon cloud storage services.
- Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing data.
- Storage Gateway also integrates natively with Amazon S3 cloud storage which makes your data available for in-cloud processing.
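To illustrate the Route 53 piece of this answer, here is a minimal boto3 sketch of a failover record pair; the hosted zone ID, health check ID, IP address, and ALB details are placeholders.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={
        "Changes": [
            {   # Primary record points at the on-premises web servers.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-onprem",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": "abcdef11-2222-3333-4444-555555example",
                },
            },
            {   # Secondary record aliases the ALB in front of the EC2 ASG.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary-aws",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # ALB zone ID (Region-specific)
                        "DNSName": "my-alb-123.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
        ]
    },
)
```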
The engineering manager for a content management application wants to set up RDS read replicas to provide enhanced performance and read scalability. The manager wants to understand the data transfer charges while setting up RDS read replicas.
Which of the following would you identify as correct regarding the data transfer charges for RDS read replicas?
There are data transfer charges for replicating data across AWS Regions
RDS Read Replicas
- Provide enhanced performance and durability for RDS database (DB) instances.
- They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
- A read replica is billed as a standard DB Instance and at the same rates.
- You are not charged for the data transfer incurred in replicating data between your source DB instance and read replica within the same AWS Region.
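As a sketch, creating a cross-Region read replica with boto3 (the case that does incur data transfer charges); the identifiers, Regions, and instance class are assumptions.

```python
import boto3

# Create the replica from the destination Region; boto3 signs the cross-Region
# source call automatically when SourceRegion is supplied.
rds_west = boto3.client("rds", region_name="us-west-2")

rds_west.create_db_instance_read_replica(
    DBInstanceIdentifier="content-db-replica",
    # Cross-Region replicas require the full ARN of the source DB instance.
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:content-db",
    SourceRegion="us-east-1",          # replication across Regions is billed
    DBInstanceClass="db.r5.large",
)
```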
You would like to migrate an AWS account from an AWS Organization A to an AWS Organization B. What are the steps to do it?
- Remove the member account from the old organization.
- Send an invite to the member account from the new Organization.
- Accept the invite to the new organization from the member account
AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS.
Using AWS Organizations
- You can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance.
- You can also simplify billing by setting up a single payment method for all of your AWS accounts.
- Through integrations with other AWS services, you can use Organizations to define central configurations and resource sharing across accounts in your organization.
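A minimal boto3 sketch of the three steps; in practice each call must run under different credentials (Organization A's management account, Organization B's management account, and the member account itself), and the account ID is a placeholder.

```python
import boto3

MEMBER_ACCOUNT_ID = "111122223333"  # hypothetical member account

# 1) From Organization A's management account: remove the member account.
org_a = boto3.client("organizations")  # credentials for Organization A
org_a.remove_account_from_organization(AccountId=MEMBER_ACCOUNT_ID)

# 2) From Organization B's management account: invite the member account.
org_b = boto3.client("organizations")  # credentials for Organization B
invite = org_b.invite_account_to_organization(
    Target={"Id": MEMBER_ACCOUNT_ID, "Type": "ACCOUNT"}
)

# 3) From the member account: accept the invitation (handshake).
member = boto3.client("organizations")  # credentials for the member account
member.accept_handshake(HandshakeId=invite["Handshake"]["Id"])
```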
The engineering team at a logistics company has noticed that the Auto Scaling group (ASG) is not terminating an unhealthy Amazon EC2 instance.
As a Solutions Architect, which of the following options would you suggest to troubleshoot the issue? (Select three)
The health check grace period for the instance has not expired
- Amazon EC2 Auto Scaling doesn’t terminate an instance that came into service based on EC2 status checks and ELB health checks until the health check grace period expires.
The instance may be in Impaired status
- Amazon EC2 Auto Scaling does not immediately terminate instances with an Impaired status.
- Instead, Amazon EC2 Auto Scaling waits a few minutes for the instance to recover.
- Amazon EC2 Auto Scaling might also delay or not terminate instances that fail to report data for status checks.
- This usually happens when there is insufficient data for the status check metrics in Amazon CloudWatch.
The instance has failed the ELB health check
- By default, Amazon EC2 Auto Scaling doesn’t use the results of ELB health checks to determine an instance’s health status when the group’s health check configuration is set to EC2.
- As a result, Amazon EC2 Auto Scaling doesn’t terminate instances that fail ELB health checks.
- If an instance’s status is OutofService on the ELB console, but the instance’s status is Healthy on the Amazon EC2 Auto Scaling console, confirm that the health check type is set to ELB.
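As a sketch of that last point, switching an Auto Scaling group to ELB health checks with boto3; the group name and grace period value are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Use ELB health checks so instances that fail the load balancer health check
# are marked Unhealthy and replaced; new instances get a grace period before
# health checks count against them.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,  # seconds
)
```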
A retail company wants to rollout and test a blue-green deployment for its global application in the next 48 hours. Most of the customers use mobile phones which are prone to DNS caching. The company has only two days left for the annual Thanksgiving sale to commence.
As a Solutions Architect, which of the following options would you recommend to test the deployment on as many users as possible in the given time frame?
Blue/green deployment is a technique for releasing applications by shifting traffic between two identical environments running different versions of the application:
- “Blue” is the currently running version
- “Green” is the new version.
- This type of deployment allows you to test features in the green environment without impacting the currently running version of your application.
- When you’re satisfied that the green version is working properly, you can gradually reroute the traffic from the old blue environment to the new green environment.
- Blue/green deployments can mitigate common risks associated with deploying software, such as downtime and rollback capability.
Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment
- AWS Global Accelerator is a network layer service that directs traffic to optimal endpoints over the AWS global network, improving the availability and performance of your internet applications.
- It provides two static anycast IP addresses that act as a fixed entry point to your application endpoints, such as Application Load Balancers, Network Load Balancers, Elastic IP addresses, or Amazon EC2 instances, in a single or in multiple AWS Regions.
- AWS Global Accelerator uses endpoint weights to determine the proportion of traffic that is directed to endpoints in an endpoint group, and traffic dials to control the percentage of traffic that is directed to an endpoint group (an AWS region where your application is deployed).
- While relying on the DNS service is a great option for blue/green deployments, it may not fit use-cases that require a fast and controlled transition of the traffic.
- Some client devices and internet resolvers cache DNS answers for long periods; this DNS feature improves the efficiency of the DNS service as it reduces the DNS traffic across the Internet, and serves as a resiliency technique by preventing authoritative name-server overloads.
- The downside of this in blue/green deployments is that you don’t know how long it will take before all of your users receive updated IP addresses when you update a record, change your routing preference or when there is an application failure.
With AWS Global Accelerator, you can shift traffic gradually or all at once between the blue and the green environments (and vice versa) without being subject to DNS caching on client devices and internet resolvers; changes to traffic dials and endpoint weights take effect within seconds.
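A hedged boto3 sketch of shifting traffic with endpoint weights; the endpoint group ARN and the load balancer ARNs for the blue and green stacks are placeholders.

```python
import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

ga.update_endpoint_group(
    EndpointGroupArn=(
        "arn:aws:globalaccelerator::123456789012:accelerator/abcd1234"
        "/listener/0123abcd/endpoint-group/098765zyxwvu"
    ),
    TrafficDialPercentage=100,  # share of listener traffic sent to this Region's group
    EndpointConfigurations=[
        {   # Keep most traffic on the blue (current) ALB.
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/blue-alb/50dc6c495c0c9188",
            "Weight": 90,
        },
        {   # Send a small slice to the green (new) ALB for testing.
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/green-alb/73e2d6bc24d8a067",
            "Weight": 10,
        },
    ],
)
```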
You would like to use Snowball to move on-premises backups into a long term archival tier on AWS. Which solution provides the MOST cost savings?
Create a Snowball job and target an S3 bucket. Create a lifecycle policy to immediately move data to Glacier Deep Archive
AWS Snowball, a part of the AWS Snow Family
- Is a data migration and edge computing device that comes in two options.
- Snowball Edge Storage Optimized devices
- Provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs.
- They are well suited for local storage and large scale data transfer.
Snowball Edge Compute Optimized devices
- Provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments.
Snowball Edge Storage Optimized
- Is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS.
- It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases.
The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are now the primary devices used for data transfer. You may see the Snowball device on the exam, just remember that the original Snowball device had 80TB of storage space.
You can’t move data directly from Snowball into Glacier; you need to go through S3 first and then use a lifecycle policy, as sketched below.
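For example, a lifecycle rule that moves Snowball-imported objects to Glacier Deep Archive as soon as possible could look like this boto3 sketch; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Transition all objects in the Snowball import bucket to Glacier Deep Archive
# immediately (Days=0 means the transition happens at the next rule evaluation).
s3.put_bucket_lifecycle_configuration(
    Bucket="onprem-backups-import",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    {"Days": 0, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)
```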
A company has many VPCs in various accounts that need to be connected in a star network with one another and connected with on-premises networks through Direct Connect.
What do you recommend?
Transit Gateway
AWS Transit Gateway
- Is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway.
- With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway into each Amazon VPC, on-premises data center, or remote office across your network.
Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks which act like spokes.
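A minimal boto3 sketch of creating the hub and attaching one spoke VPC; the IDs are placeholders, and in practice you would wait for the gateway to become available, repeat the attachment for each VPC (sharing the gateway across accounts via RAM), and associate a Direct Connect gateway for the on-premises side.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the hub of the star network.
tgw = ec2.create_transit_gateway(
    Description="Hub for all VPCs and the Direct Connect connection"
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one spoke VPC (repeat per VPC once the gateway is 'available').
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaa11111111111aa", "subnet-0bbb22222222222bb"],  # one per AZ
)
```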
You have multiple AWS accounts within a single AWS Region managed by AWS Organizations and you would like to ensure all EC2 instances in all these accounts can communicate privately. Which of the following solutions provides the capability at the CHEAPEST cost?
Create a VPC in an account and share one or more of its subnets with the other accounts using Resource Access Manager
AWS Resource Access Manager (RAM)
- Is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization.
- You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM.
- RAM eliminates the need to create duplicate resources in multiple accounts, reducing the operational overhead of managing those resources in every single account you own.
- You can create resources centrally in a multi-account environment, and use RAM to share those resources across accounts in three simple steps:
- Create a Resource Share.
- Specify resources.
- Specify accounts.
RAM is available to you at no additional charge.
The correct solution is to share the subnet(s) within a VPC using RAM.
This will allow all EC2 instances to be deployed in the same VPC (although from different accounts) and easily communicate with one another.
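A hedged boto3 sketch of sharing a subnet with RAM; the share name, subnet ARN, and account ID are placeholders.

```python
import boto3

ram = boto3.client("ram")

# Share a subnet of the central VPC with another account in the Organization.
ram.create_resource_share(
    name="shared-app-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0"
    ],
    principals=["444455556666"],    # an account ID, OU ARN, or organization ARN
    allowExternalPrincipals=False,  # keep sharing within the Organization
)
```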
An IT company provides S3 bucket access to specific users within the same account for completing project specific work. With changing business requirements, cross-account S3 access requests are also growing every month. The company is looking for a solution that can offer user level as well as account-level access permissions for the data stored in S3 buckets.
As a Solutions Architect, which of the following would you suggest as the MOST optimized way of controlling access for this use-case?
Use Amazon S3 Bucket Policies
- Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects within a single bucket.
- Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions.
- With bucket policies, you can grant users within your AWS Account or other AWS Accounts access to your Amazon S3 resources.
- You can further restrict access to specific resources based on certain conditions.
For example, you can restrict access based on request time (Date Condition), whether the request was sent using SSL (Boolean Conditions), a requester’s IP address (IP Address Condition), or based on the requester’s client application (String Conditions). To identify these conditions, you use policy keys.
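As an illustration, a bucket policy granting a same-account user and a cross-account role read access, applied with boto3; the principals, bucket name, and prefix are made up.

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProjectReadAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/project-analyst",   # same account
                    "arn:aws:iam::444455556666:role/partner-read-role",  # other account
                ]
            },
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::project-data-bucket/project-a/*",
            # Example condition: only allow requests sent over SSL.
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="project-data-bucket", Policy=json.dumps(policy))
```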
A company has recently launched a new mobile gaming application that the users are adopting rapidly. The company uses RDS MySQL as the database. The engineering team needs an urgent solution because the rapidly increasing workload might exceed the available database storage.
As a solutions architect, which of the following solutions would you recommend so that it requires minimum development and systems administration effort to address this requirement?
Enable storage auto-scaling for RDS MySQL
If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance.
- With storage autoscaling enabled, when Amazon RDS detects that you are running out of free database space it automatically scales up your storage.
- Amazon RDS starts a storage modification for an autoscaling-enabled DB instance when these factors apply:
- Free available space is less than 10 percent of the allocated storage.
- The low-storage condition lasts at least five minutes.
- At least six hours have passed since the last storage modification.
- The maximum storage threshold is the limit that you set for autoscaling the DB instance.
- You can’t set the maximum storage threshold for autoscaling-enabled instances to a value greater than the maximum allocated storage.
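A minimal boto3 sketch of enabling storage autoscaling on an existing instance by setting a maximum storage threshold; the identifier and threshold value are assumptions.

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage above the current allocated storage enables
# storage autoscaling on the instance.
rds.modify_db_instance(
    DBInstanceIdentifier="gaming-mysql-db",
    MaxAllocatedStorage=1000,   # GiB ceiling for autoscaling
    ApplyImmediately=True,
)
```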
Upon a security review of your AWS account, an AWS consultant has found that a few RDS databases are un-encrypted. As a Solutions Architect, what steps must be taken to encrypt the RDS databases?
- Take a snapshot of the database,
- Copy it as an encrypted snapshot,
- Restore a database from the encrypted snapshot.
- Terminate the previous database
- Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.
- It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups.
- You can encrypt your Amazon RDS DB instances and snapshots at rest by enabling the encryption option for your Amazon RDS DB instances.
- Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, read replicas, and snapshots.
You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created.
- However, because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance.
That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot.
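A hedged boto3 sketch of the snapshot-copy-restore sequence; the identifiers and KMS key alias are placeholders, and waiters keep the steps in order.

```python
import boto3

rds = boto3.client("rds")

# 1) Snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-db",
    DBSnapshotIdentifier="legacy-db-snap",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-db-snap")

# 2) Copy the snapshot with encryption enabled (supplying KmsKeyId encrypts the copy).
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-db-snap",
    TargetDBSnapshotIdentifier="legacy-db-snap-encrypted",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="legacy-db-snap-encrypted"
)

# 3) Restore a new, encrypted instance from the encrypted snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-db-encrypted",
    DBSnapshotIdentifier="legacy-db-snap-encrypted",
)

# 4) After cutting applications over, terminate the old unencrypted instance.
rds.delete_db_instance(DBInstanceIdentifier="legacy-db", SkipFinalSnapshot=True)
```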