Udemy Exam 5 Flashcards
A junior developer is learning to build websites using HTML, CSS, and JavaScript. He has created a static website and then deployed it on Amazon S3. Now he can’t seem to figure out the endpoint for his super cool website.
As a solutions architect, can you help him figure out the allowed formats for the Amazon S3 website endpoints? (Select two)
http://bucket-name.s3-website.Region.amazonaws.com
http://bucket-name.s3-website-Region.amazonaws.com
- To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket.
- When you configure a bucket as a static website, you enable static website hosting, set permissions, and add an index document.
- Depending on your website requirements, you can also configure other options, including redirects, web traffic logging, and custom error documents.
- When you configure your bucket as a static website, the website is available at the AWS Region-specific website endpoint of the bucket.
Depending on your Region, your Amazon S3 website endpoints follow one of these two formats.
- s3-website dot (.) Region: http://bucket-name.s3-website.Region.amazonaws.com
- s3-website dash (-) Region: http://bucket-name.s3-website-Region.amazonaws.com
These URLs return the default index document that you configure for the website.
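The two endpoint formats above can be sketched as a small helper; this is a minimal illustration, and the bucket name and Region used below are hypothetical examples (which format actually applies depends on the bucket's Region):

```python
# Sketch: build both possible S3 static-website endpoint formats for a
# bucket. Bucket name and Region are hypothetical examples.

def s3_website_endpoints(bucket_name: str, region: str) -> list:
    """Return the dot-Region and dash-Region website endpoint URLs."""
    return [
        f"http://{bucket_name}.s3-website.{region}.amazonaws.com",  # dot format
        f"http://{bucket_name}.s3-website-{region}.amazonaws.com",  # dash format
    ]

for url in s3_website_endpoints("my-cool-site", "eu-west-1"):
    print(url)
```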
You have built an application that is deployed with an Elastic Load Balancer and an Auto Scaling Group. As a Solutions Architect, you have configured aggressive CloudWatch alarms, making your Auto Scaling Group (ASG) scale in and out very quickly, renewing your fleet of Amazon EC2 instances on a daily basis. A production bug appeared two days ago, but the team is unable to SSH into the instance to debug the issue, because the instance has already been terminated by the ASG. The log files are saved on the EC2 instance.
How will you resolve the issue and make sure it doesn’t happen again?
Install a CloudWatch Logs agent on the EC2 instances to send logs to CloudWatch
- You can use the CloudWatch Logs agent installer on an existing EC2 instance to install and configure the CloudWatch Logs agent.
- After installation is complete, logs automatically flow from the instance to the log stream you create while installing the agent.
- The agent confirms that it has started and it stays running until you disable it.
Here, the natural and by far the easiest solution is to use the CloudWatch Logs agent on the EC2 instances to automatically send log files to CloudWatch, so we can easily analyze them in the future should any problem arise.
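As a rough sketch, the classic CloudWatch Logs agent is driven by an INI-style configuration file (commonly `/etc/awslogs/awslogs.conf`); the log file path, log group, and stream names below are hypothetical examples:

```python
# Sketch: generate a minimal CloudWatch Logs agent configuration with
# configparser. Paths and group/stream names are hypothetical.
import configparser
import io

config = configparser.ConfigParser(interpolation=None)
config["general"] = {"state_file": "/var/lib/awslogs/agent-state"}
config["/var/log/myapp/app.log"] = {
    "file": "/var/log/myapp/app.log",        # log file to ship
    "log_group_name": "/myapp/production",   # CloudWatch Logs log group
    "log_stream_name": "{instance_id}",      # one stream per instance
    "datetime_format": "%Y-%m-%d %H:%M:%S",  # how to parse timestamps
}

buf = io.StringIO()
config.write(buf)
print(buf.getvalue())
```

With a configuration like this in place, logs survive instance termination because they live in CloudWatch Logs rather than on the instance's disk.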
- To control whether an Auto Scaling group can terminate a particular instance when scaling in, use instance scale-in protection.
- You can enable the instance scale-in protection setting on an Auto Scaling group or on an individual Auto Scaling instance.
- When the Auto Scaling group launches an instance, it inherits the instance scale-in protection setting of the Auto Scaling group.
- You can change the instance scale-in protection setting for an Auto Scaling group or an Auto Scaling instance at any time.
A big data analytics company is using Kinesis Data Streams (KDS) to process IoT data from the field devices of an agricultural sciences company. Multiple consumer applications are using the incoming data streams and the engineers have noticed a performance lag for the data delivery speed between producers and consumers of the data streams.
As a solutions architect, which of the following would you recommend for improving the performance for the given use-case?
Amazon Kinesis Data Streams (KDS)
- Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service.
- KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
- By default, the 2MB/second/shard output is shared between all of the applications consuming data from the stream.
- You should use enhanced fan-out if you have multiple consumers retrieving data from a stream in parallel.
- With enhanced fan-out, each registered stream consumer receives its own 2MB/second pipe of read throughput per shard, and this throughput automatically scales with the number of shards in a stream.
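The difference can be shown with back-of-the-envelope arithmetic; this is a simplified model of the per-shard read limits described above, not an API call:

```python
# Sketch: per-consumer read throughput per shard. With the default
# (shared) model, 2 MB/s per shard is split across all consumers;
# with enhanced fan-out, each registered consumer gets its own 2 MB/s.

SHARD_READ_MBPS = 2  # per-shard read limit in MB/s

def per_consumer_throughput_mbps(num_consumers: int, enhanced_fan_out: bool) -> float:
    if enhanced_fan_out:
        return float(SHARD_READ_MBPS)       # dedicated pipe per consumer
    return SHARD_READ_MBPS / num_consumers  # shared across all consumers

print(per_consumer_throughput_mbps(4, enhanced_fan_out=False))  # 0.5
print(per_consumer_throughput_mbps(4, enhanced_fan_out=True))   # 2.0
```

This is why enhanced fan-out removes the performance lag when multiple consumer applications read the same stream in parallel.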
An application hosted on Amazon EC2 contains sensitive personal information about all its customers and needs to be protected from all types of cyber-attacks. The company is considering using the AWS Web Application Firewall (WAF) to handle this requirement.
Can you identify the correct solution leveraging the capabilities of WAF?
Create a CloudFront distribution for the application on Amazon EC2 instances. Deploy AWS WAF on Amazon CloudFront to provide the necessary safety measures
- When you use AWS WAF with CloudFront, you can protect your applications running on any HTTP web server, whether it’s a web server running in Amazon Elastic Compute Cloud (Amazon EC2) or a web server that you manage privately.
- You can also configure CloudFront to require HTTPS between CloudFront and your own web server, as well as between viewers and CloudFront.
AWS WAF is tightly integrated with Amazon CloudFront and the Application Load Balancer (ALB), services that AWS customers commonly use to deliver content for their websites and applications.
When you use AWS WAF on Amazon CloudFront, your rules run in all AWS Edge Locations, located around the world close to your end-users.
- This means security doesn’t come at the expense of performance.
- Blocked requests are stopped before they reach your web servers.
- When you use AWS WAF on Application Load Balancer, your rules run in the region and can be used to protect internet-facing as well as internal load balancers.
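For CloudFront, the web ACL is attached via the distribution configuration. A minimal sketch of the relevant fragment follows; the web ACL ARN is a hypothetical placeholder (with WAFv2, CloudFront expects the web ACL's full ARN in `WebACLId`):

```python
# Sketch: the fragment of a CloudFront DistributionConfig that attaches
# a WAF web ACL to the distribution. The ARN is a hypothetical placeholder.
web_acl_arn = (
    "arn:aws:wafv2:us-east-1:123456789012:global/webacl/my-acl/EXAMPLE-ID"
)

distribution_config_fragment = {
    "WebACLId": web_acl_arn,  # associate the WAF web ACL with the distribution
    "Enabled": True,
    # ... origin pointing at the EC2-hosted application, cache behaviors, etc.
}
print(distribution_config_fragment["WebACLId"])
```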
A leading media company wants to do an accelerated online migration of hundreds of terabytes of files from their on-premises data center to Amazon S3 and then establish a mechanism to access the migrated data for ongoing updates from the on-premises applications.
As a solutions architect, which of the following would you select as the MOST performant solution for the given use-case?
Use AWS DataSync to migrate existing data to Amazon S3 and then use File Gateway to retain access to the migrated data for ongoing updates from the on-premises applications
- AWS DataSync is an online data transfer service that simplifies, automates, and accelerates copying large amounts of data to and from AWS storage services over the internet or AWS Direct Connect.
- AWS DataSync fully automates and accelerates moving large active datasets to AWS, up to 10 times faster than command-line tools.
It is natively integrated with Amazon S3, Amazon EFS, Amazon FSx for Windows File Server, Amazon CloudWatch, and AWS CloudTrail, which provides seamless and secure access to your storage services, as well as detailed monitoring of the transfer.
- DataSync uses a purpose-built network protocol and scale-out architecture to transfer data.
- A single DataSync agent is capable of saturating a 10 Gbps network link.
DataSync fully automates the data transfer.
- It comes with retry and network resiliency mechanisms, network optimizations, built-in task scheduling, monitoring via the DataSync API and Console, and CloudWatch metrics, events, and logs that provide granular visibility into the transfer process.
DataSync performs data integrity verification both during the transfer and at the end of the transfer.
- AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.
- The service provides three different types of gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access.
- File gateway offers SMB or NFS-based access to data in Amazon S3 with local caching.
The combination of DataSync and File Gateway is the correct solution.
- AWS DataSync enables you to automate and accelerate online data transfers to AWS storage services.
- File Gateway then provides your on-premises applications with low latency access to the migrated data.
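The migration half of this solution can be sketched as the shape of a boto3 DataSync `create_task` call. The parameters below are built but not executed here, and the location ARNs are hypothetical placeholders (in a real setup they come from `create_location_nfs` / `create_location_s3`):

```python
# Sketch: boto3-style DataSync create_task parameters (not executed).
# The location ARNs and task name are hypothetical placeholders.
create_task_params = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-src-EXAMPLE",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:123456789012:location/loc-s3-EXAMPLE",
    "Name": "onprem-to-s3-migration",
    "Options": {
        "VerifyMode": "POINT_IN_TIME_CONSISTENT",  # integrity verification after transfer
    },
}
# A real call would be: datasync_client.create_task(**create_task_params)
print(create_task_params["Name"])
```

Once the data lands in S3, File Gateway exposes the same bucket over SMB/NFS for the ongoing on-premises updates.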
A company runs its EC2 servers behind an Application Load Balancer along with an Auto Scaling group. The engineers at the company want to be able to install proprietary tools on each instance and perform a pre-activation status check of these tools whenever an instance is provisioned because of a scale-out event from an auto-scaling policy.
Which of the following options can be used to enable this custom action?
Use the Auto Scaling group lifecycle hook to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check
- An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management.
Auto Scaling group lifecycle hooks
- Enable you to perform custom actions as the Auto Scaling group launches or terminates instances.
- Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group launches or terminates them.
- When an instance is paused, it remains in a wait state either until you complete the lifecycle action using the complete-lifecycle-action command or the CompleteLifecycleAction operation, or until the timeout period ends (one hour by default).
For example, you could install or configure software on newly launched instances, or download log files from an instance before it terminates.
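The hook and its completion call can be sketched as boto3-style parameter sets; the ASG name, hook name, and instance ID below are hypothetical, and the calls themselves are not executed here:

```python
# Sketch: parameters for a launch lifecycle hook, and for the completion
# call a bootstrap script would make once the proprietary tools are
# installed and the pre-activation check passes. Names are hypothetical.
put_hook_params = {
    "AutoScalingGroupName": "my-asg",
    "LifecycleHookName": "install-tools-hook",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
    "HeartbeatTimeout": 900,      # seconds the instance stays in the wait state
    "DefaultResult": "ABANDON",   # what happens if the hook times out
}
complete_params = {
    "AutoScalingGroupName": "my-asg",
    "LifecycleHookName": "install-tools-hook",
    "LifecycleActionResult": "CONTINUE",  # pre-activation status check passed
    "InstanceId": "i-0123456789abcdef0",
}
# Real calls: asg_client.put_lifecycle_hook(**put_hook_params)
#             asg_client.complete_lifecycle_action(**complete_params)
```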
A mobile chat application uses DynamoDB as its database service to provide low latency chat updates. A new developer has joined the team and is reviewing the configuration settings for DynamoDB which have been tweaked for certain technical requirements. CloudTrail service has been enabled on all the resources used for the project. Yet, DynamoDB encryption details are nowhere to be found.
Which of the following options can explain the root cause for the given issue?
By default, all DynamoDB tables are encrypted under an AWS owned customer master key (CMK), which does not write to CloudTrail logs
- AWS owned CMKs are a collection of CMKs that an AWS service owns and manages for use in multiple AWS accounts.
- Although AWS owned CMKs are not in your AWS account, an AWS service can use its AWS owned CMKs to protect the resources in your account.
- You do not need to create or manage the AWS owned CMKs.
- However, you cannot view, use, track, or audit them.
- You are not charged a monthly fee or usage fee for AWS owned CMKs and they do not count against the AWS KMS quotas for your account.
- The key rotation strategy for an AWS owned CMK is determined by the AWS service that creates and manages the CMK.
All DynamoDB tables are encrypted.
- There is no option to enable or disable encryption for new or existing tables.
- By default, all tables are encrypted under an AWS owned customer master key (CMK) in the DynamoDB service account.
- However, you can select an option to encrypt some or all of your tables under a customer-managed CMK or the AWS managed CMK for DynamoDB in your account.
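Switching away from the default AWS owned CMK is done through the `SSESpecification` block passed to DynamoDB's `create_table` / `update_table`. A minimal sketch follows; the KMS key ARN is a hypothetical placeholder, and using a key in your own account is what makes key usage auditable in CloudTrail:

```python
# Sketch: DynamoDB SSESpecification selecting a KMS key in your own
# account instead of the default AWS owned CMK. Key ARN is hypothetical.
sse_specification = {
    "Enabled": True,
    "SSEType": "KMS",
    "KMSMasterKeyId": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
}
# Real call: dynamodb_client.update_table(TableName="chats",
#                                         SSESpecification=sse_specification)
print(sse_specification["SSEType"])
```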
A medium-sized business has a taxi dispatch application deployed on an EC2 instance. Because of an unknown bug, the application causes the instance to freeze regularly. Then, the instance has to be manually restarted via the AWS management console.
Which of the following is the MOST cost-optimal and resource-efficient way to implement an automated solution until a permanent fix is delivered by the development team?
Setup a CloudWatch alarm to monitor the health status of the instance. In case of an Instance Health Check failure, an EC2 Reboot CloudWatch Alarm Action can be used to reboot the instance
Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances.
- You can use the stop or terminate actions to help you save money when you no longer need an instance to be running.
- You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
- You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance.
The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures).
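As a sketch, the alarm could be defined with boto3-style `put_metric_alarm` parameters like the following; the instance ID is hypothetical, and the reboot action ARN shown follows the `arn:aws:automate:<region>:ec2:reboot` format (worth verifying for your Region before use):

```python
# Sketch: CloudWatch alarm parameters that reboot an instance when the
# instance-level status check fails. Instance ID is hypothetical.
alarm_params = {
    "AlarmName": "reboot-on-instance-check-failure",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_Instance",  # instance (not system) check
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 3,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:reboot"],  # reboot action
}
# Real call: cloudwatch_client.put_metric_alarm(**alarm_params)
```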
As a Solutions Architect, you have been hired to work with the engineering team at a company to create a REST API using the serverless architecture.
Which of the following solutions will you recommend to move the company to the serverless architecture?
API Gateway exposing Lambda Functionality
- Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
- APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services.
- AWS Lambda lets you run code without provisioning or managing servers.
- You pay only for the compute time you consume.
- API Gateway can expose Lambda functionality through RESTful APIs.
Both are serverless options offered by AWS and hence the right choice for this scenario, considering all the functionality they offer.
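A minimal Lambda handler in the shape API Gateway's Lambda proxy integration expects (a `statusCode` / `headers` / `body` response) can look like this; the route and payload are hypothetical:

```python
# Sketch: a minimal Lambda handler for an API Gateway Lambda proxy
# integration. The query parameter and response payload are hypothetical.
import json

def lambda_handler(event, context):
    # event carries the HTTP request; a proxy-integration response must
    # include an integer statusCode and a string body.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

print(lambda_handler({"queryStringParameters": {"name": "dev"}}, None))
```

API Gateway maps each REST route to this function, so no servers are provisioned or managed anywhere in the stack.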
An online gaming company wants to block access to its application from specific countries; however, the company wants to allow its remote development team (from one of the blocked countries) to have access to the application. The application is deployed on EC2 instances running under an Application Load Balancer (ALB) with AWS WAF.
As a solutions architect, which of the following solutions can be combined to address the given use-case? (Select two)
Use WAF geo match statement listing the countries that you want to block
Use WAF IP set statement that specifies the IP addresses that you want to allow through
- AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources.
- AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns and rules that filter out specific traffic patterns you define.
- You can deploy AWS WAF on Amazon CloudFront as part of your CDN solution, the Application Load Balancer that fronts your web servers or origin servers running on EC2, or Amazon API Gateway for your APIs.
To block specific countries, you can create a WAF geo match statement listing the countries that you want to block, and to allow traffic from IPs of the remote development team, you can create a WAF IP set statement that specifies the IP addresses that you want to allow through.
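The combined rule logic can be modeled in plain Python; this is a simplified sketch of the evaluation order (in WAF, the IP set allow rule gets a lower priority number so it is evaluated before the geo match block rule), and the country codes and dev-team IPs below are hypothetical:

```python
# Sketch: how the allow IP set rule and block geo match rule combine.
# Country codes "AA"/"BB" and the IP addresses are hypothetical examples.
BLOCKED_COUNTRIES = {"AA", "BB"}                    # geo match statement
DEV_TEAM_IP_SET = {"203.0.113.10", "203.0.113.11"}  # IP set statement

def waf_decision(country_code: str, source_ip: str) -> str:
    if source_ip in DEV_TEAM_IP_SET:       # allow rule, evaluated first
        return "ALLOW"
    if country_code in BLOCKED_COUNTRIES:  # block rule
        return "BLOCK"
    return "ALLOW"                         # web ACL default action

print(waf_decision("AA", "203.0.113.10"))  # dev team member in a blocked country
print(waf_decision("AA", "198.51.100.7"))  # anyone else from that country
```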
An e-commerce company uses Amazon SQS queues to decouple their application architecture. The engineering team has observed message processing failures for some customer orders.
As a solutions architect, which of the following solutions would you recommend for handling such message failures?
Use a dead-letter queue to handle message processing failures
- Dead-letter queues can be used by other queues (source queues) as a target for messages that can’t be processed (consumed) successfully.
- Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn’t succeed.
- Sometimes, messages can’t be processed for a variety of reasons. For example, a user comments on a story, but the comment remains unprocessed because the author deleted the original story while the comments were being posted.
In such a case, the dead-letter queue can be used to handle message processing failures.
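Wiring a dead-letter queue to a source queue is done through the `RedrivePolicy` queue attribute. A boto3-style sketch follows; the queue URL and DLQ ARN are hypothetical placeholders, and the call itself is not executed here:

```python
# Sketch: attaching a dead-letter queue via the RedrivePolicy attribute.
# Queue URL and DLQ ARN are hypothetical placeholders.
import json

redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
    "maxReceiveCount": "5",  # after 5 failed receives, move the message to the DLQ
}
set_attrs_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/orders",
    "Attributes": {"RedrivePolicy": json.dumps(redrive_policy)},
}
# Real call: sqs_client.set_queue_attributes(**set_attrs_params)
print(set_attrs_params["Attributes"]["RedrivePolicy"])
```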
Your company is building a music sharing platform on which users can upload the songs of their choice. As a solutions architect for the platform, you have designed an architecture that will leverage a Network Load Balancer linked to an Auto Scaling Group across multiple availability zones. You are currently running with 100 Amazon EC2 instances with an Auto Scaling Group that needs to be able to share the storage layer for the music files.
Which technology do you recommend?
- Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
- It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Here, we need a network file system (NFS), which is exactly what EFS is designed for. So, EFS is the correct option.
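Each of the 100 instances would NFS-mount the same file system, typically via the file system's Regional DNS name. A small sketch follows; the file system ID, Region, and mount path are hypothetical examples:

```python
# Sketch: the DNS name EC2 instances use to NFS-mount a shared EFS file
# system. File system ID and Region are hypothetical.
def efs_dns_name(file_system_id: str, region: str) -> str:
    return f"{file_system_id}.efs.{region}.amazonaws.com"

mount_source = efs_dns_name("fs-0123456789abcdef0", "us-east-1")
# A typical mount command on each instance (run via user data or SSM):
#   sudo mount -t nfs4 -o nfsvers=4.1 <dns-name>:/ /mnt/music
print(mount_source)
```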
A silicon valley based healthcare startup uses AWS Cloud for its IT infrastructure. The startup stores patient health records on Amazon S3. The engineering team needs to implement an archival solution based on Amazon S3 Glacier to enforce regulatory and compliance controls on data access.
As a solutions architect, which of the following solutions would you recommend?
Use S3 Glacier vault to store the sensitive archived data and then use a vault lock policy to enforce compliance controls
- Amazon S3 Glacier is a secure, durable, and extremely low-cost Amazon S3 cloud storage class for data archiving and long-term backup.
- It is designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.
- An S3 Glacier vault is a container for storing archives.
- When you create a vault, you specify a vault name and the AWS Region in which you want to create the vault.
S3 Glacier Vault Lock
- Allows you to easily deploy and enforce compliance controls for individual S3 Glacier vaults with a vault lock policy.
- You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits. Therefore, this is the correct option.
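A vault lock policy looks like a regular IAM-style policy document. The sketch below follows the common "deny deletion until the archive is a year old" pattern; the account ID and vault name are hypothetical placeholders:

```python
# Sketch: a Glacier vault lock policy denying archive deletion until an
# archive is at least 365 days old (a WORM-style compliance control).
# Account ID and vault name are hypothetical.
import json

vault_lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-delete-for-365-days",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/patient-records",
        "Condition": {
            "NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}
        },
    }],
}
print(json.dumps(vault_lock_policy, indent=2))
```

Once the policy is locked (after the 24-hour in-progress window), it can no longer be edited, which is what enforces the compliance controls.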
The engineering team at a weather tracking company wants to enhance the performance of its relational database and is looking for a caching solution that supports geospatial data.
As a solutions architect, which of the following solutions will you suggest?
Use Amazon ElastiCache for Redis
Amazon ElastiCache
- Is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud.
Redis, which stands for Remote Dictionary Server,
- Is a fast, open-source, in-memory key-value data store for use as a database, cache, message broker, and queue.
- Redis now delivers sub-millisecond response times enabling millions of requests per second for real-time applications in Gaming, Ad-Tech, Financial Services, Healthcare, and IoT.
- Redis is a popular choice for caching, session management, gaming, leaderboards, real-time analytics, geospatial, ride-hailing, chat/messaging, media streaming, and pub/sub apps.
- All Redis data resides in the server’s main memory, in contrast to databases such as PostgreSQL, Cassandra, MongoDB and others that store most data on disk or on SSDs.
- In comparison to traditional disk based databases where most operations require a roundtrip to disk, in-memory data stores such as Redis don’t suffer the same penalty.
- They can therefore support an order of magnitude more operations and faster response times.
The result is blazing-fast performance, with average read or write operations taking less than a millisecond and support for millions of operations per second.
Redis has purpose-built commands for working with real-time geospatial data at scale.
- You can perform operations like finding the distance between two elements (for example people or places) and finding all elements within a given distance of a point.
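Under the hood, Redis's GEODIST computes a great-circle (haversine) distance. The sketch below reimplements that calculation in plain Python rather than calling Redis; the coordinates are the Palermo/Catania example from the Redis documentation, where `GEODIST Sicily Palermo Catania` returns about 166274 meters:

```python
# Sketch: the haversine distance Redis's GEODIST computes between two
# geospatial members. Coordinates are the Redis docs' Palermo/Catania example.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_372_797.56  # Earth radius used by Redis geo commands

def haversine_m(lon1, lat1, lon2, lat2):
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))

# GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
# GEODIST Sicily Palermo Catania
print(round(haversine_m(13.361389, 38.115556, 15.087269, 37.502669)))
```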
You are a cloud architect at an IT company. The company has multiple enterprise customers that manage their own mobile apps, which capture and send data to Amazon Kinesis Data Streams. They have been getting a ProvisionedThroughputExceededException. You have been contacted to help, and upon analysis you notice that messages are being sent one by one at a high rate.
Which of the following options will help with the exception while keeping costs at a minimum?
Use batch messages
- Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service.
- KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
- The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.
When a host needs to send many records per second (RPS) to Amazon Kinesis, simply calling the basic PutRecord API action in a loop is inadequate.
To reduce overhead and increase throughput, the application must batch records and implement parallel HTTP requests.
- This will increase the efficiency overall and ensure you are optimally using the shards.
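The batching step above can be sketched as a simple chunking function; the 500-records-per-call cap is the PutRecords API limit, and the stream name and record payloads below are hypothetical:

```python
# Sketch: batching records into PutRecords-sized chunks instead of
# calling PutRecord once per message. Stream name and records are
# hypothetical; the real call per batch is commented at the bottom.
MAX_RECORDS_PER_BATCH = 500  # PutRecords limit per request

def batch(records: list, size: int = MAX_RECORDS_PER_BATCH):
    for i in range(0, len(records), size):
        yield records[i:i + size]

records = [{"Data": f"msg-{i}".encode(), "PartitionKey": str(i)}
           for i in range(1200)]
batches = list(batch(records))
print(len(batches))  # 3 API calls instead of 1200
# Real call per batch:
#   kinesis_client.put_records(StreamName="iot-stream", Records=b)
```

Fewer, larger requests reduce per-call overhead and spread load more evenly across shards, which addresses the ProvisionedThroughputExceededException without adding shards (and therefore without adding cost).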