AWS Certified Solutions Architect Associate Practice Test 4 Flashcards
A startup plans to develop a multiplayer game that uses UDP as the protocol for communication between clients and game servers. The data of the users will be stored in a key-value store. As the Solutions Architect, you need to implement a solution that will distribute the traffic across a number of servers.
Which of the following could help you achieve this requirement?
A. Distribute the traffic using Application Load Balancer and store the data in Amazon RDS
B. Distribute the traffic using Application Load Balancer and store the data in Amazon DynamoDB
C. Distribute the traffic using Network Load Balancer and store the data in Amazon Aurora
D. Distribute the traffic using Network Load Balancer and store the data in Amazon DynamoDB
D. Distribute the traffic using Network Load Balancer and store the data in Amazon DynamoDB
Explanation:
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. For UDP traffic, the load balancer selects a target using a flow hash algorithm based on the protocol, source IP address, source port, destination IP address, and destination port. A UDP flow has the same source and destination, so it is consistently routed to a single target throughout its lifetime. Different UDP flows have different source IP addresses and ports, so they can be routed to different targets.
In this scenario, a startup plans to create a multiplayer game that uses UDP as the protocol for communication. Since UDP is Layer 4 traffic, we can narrow the options down to those that use a Network Load Balancer. The data of the users will be stored in a key-value store, so we should select Amazon DynamoDB since it supports both document and key-value store models.
Hence, the correct answer is: Distribute the traffic using Network Load Balancer and store the data in Amazon DynamoDB.
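As a rough illustration of the load-balancing half of this answer, the sketch below uses boto3 to create a Network Load Balancer with a UDP listener and target group. The names, port, VPC, and subnet IDs are placeholders, not values from the scenario.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# UDP target group for the game servers (Layer 4 traffic, so NLB only).
tg = elbv2.create_target_group(
    Name="game-servers-udp",            # placeholder name
    Protocol="UDP",
    Port=3000,                          # placeholder game port
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
    TargetType="instance",
)["TargetGroups"][0]

nlb = elbv2.create_load_balancer(
    Name="game-nlb",                    # placeholder name
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # placeholders
)["LoadBalancers"][0]

# Forward UDP game traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="UDP",
    Port=3000,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)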
The option that says: Distribute the traffic using Application Load Balancer and store the data in Amazon DynamoDB is incorrect because UDP is not supported by an Application Load Balancer. Remember that UDP is Layer 4 traffic; therefore, you should use a Network Load Balancer.
The option that says: Distribute the traffic using Network Load Balancer and store the data in Amazon Aurora is incorrect because Amazon Aurora is a relational database service. Instead of Aurora, you should use Amazon DynamoDB.
The option that says: Distribute the traffic using Application Load Balancer and store the data in Amazon RDS is incorrect because Application Load Balancer only supports application traffic (Layer 7). Also, Amazon RDS is not suitable as a key-value store. You should use DynamoDB since it supports both document and key-value store models.
A tech company currently has an on-premises infrastructure. They are currently running low on storage and want to have the ability to extend their storage using the AWS cloud.
Which AWS service can help them achieve this requirement?
A. Amazon EC2
B. Amazon Elastic Block Store
C. AWS Storage Gateway
D. Amazon SQS
C. AWS Storage Gateway
Explanation:
AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the AWS Cloud for scalable and cost-effective storage that helps maintain data security.
Amazon EC2 is incorrect since this is a compute service, not a storage service.
Amazon Elastic Block Store is incorrect since EBS is primarily used as block storage for your EC2 instances.
Amazon SQS is incorrect since this is a message queuing service and does not extend your on-premises storage capacity.
A company needs to implement a solution that will process real-time streaming data of its users across the globe. This will enable them to track and analyze globally-distributed user activity on their website and mobile applications, including clickstream analysis. The solution should process the data in close geographical proximity to their users and respond to user requests at low latencies.
Which of the following is the most suitable solution for this scenario?
A. Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket
B. Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Amazon Athena and durably store the results to an Amazon S3 bucket
C. Use a CloudFront web distribution and Route 53 with a latency-based routing policy in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket
D. Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket
D. Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket
Explanation:
Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. With Lambda@Edge, you don’t have to provision or manage infrastructure in multiple locations around the world. You pay only for the compute time you consume - there is no charge when your code is not running.
With Lambda@Edge, you can enrich your web applications by making them globally distributed and improving their performance — all with zero server administration. Lambda@Edge runs your code in response to events generated by the Amazon CloudFront content delivery network (CDN). Just upload your code to AWS Lambda, which takes care of everything required to run and scale your code with high availability at an AWS location closest to your end user.
By using Lambda@Edge and Kinesis together, you can process real-time streaming data so that you can track and analyze globally-distributed user activity on your website and mobile applications, including clickstream analysis. Hence, the correct answer in this scenario is the option that says: Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
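Purely for illustration, a minimal Lambda@Edge viewer-request handler might forward clickstream details to a Kinesis data stream roughly as shown below. The stream name is an assumption, Lambda@Edge functions themselves must be deployed in us-east-1, and a production function would add error handling and batching.

import json
import boto3

# Assumed stream name and region -- replace with your own.
kinesis = boto3.client("kinesis", region_name="us-east-1")

def handler(event, context):
    # CloudFront viewer-request event structure.
    request = event["Records"][0]["cf"]["request"]
    record = {
        "uri": request["uri"],
        "clientIp": request["clientIp"],
        "userAgent": request["headers"].get("user-agent", [{}])[0].get("value", ""),
    }
    kinesis.put_record(
        StreamName="clickstream-events",   # placeholder stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=record["clientIp"],
    )
    # Return the request unmodified so CloudFront continues processing it.
    return request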
The options that say: Use a CloudFront web distribution and Route 53 with a latency-based routing policy, in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket and Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket are both incorrect because you can only route traffic using Route 53 since it does not have any computing capability. This solution would not be able to process and return the data in close geographical proximity to your users since it is not using Lambda@Edge.
The option that says: Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Amazon Athena and durably store the results to an Amazon S3 bucket is incorrect. Although using Lambda@Edge is correct, Amazon Athena is just an interactive query service that enables you to easily analyze data in Amazon S3 using standard SQL. Kinesis should be used to process the streaming data in real time.
A multinational bank is storing its confidential files in an S3 bucket. The security team recently performed an audit, and the report shows that multiple files have been uploaded without 256-bit Advanced Encryption Standard (AES) server-side encryption. For added protection, the encryption key must be automatically rotated every year. The solutions architect must ensure that there would be no other unencrypted files uploaded in the S3 bucket in the future.
Which of the following will meet these requirements with the LEAST operational overhead?
A. Create a new customer managed key (CMK) from the AWS Key Management Service (AWS KMS). Configure the default encryption behavior of the bucket to use the customer managed key. Manually rotate the CMK each and every year
B. Create a Service Control Policy (SCP) for the S3 bucket that rejects any object uploads unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and modify the built-in key rotation feature of the SSE-S3 encryption keys to rotate the key yearly
C. Create an S3 bucket policy that denies permissions to upload an object unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and rely on the built-in key rotation feature of the SSE-S3 encryption keys
D. Create an S3 bucket policy for the S3 bucket that rejects any object uploads unless the request includes the "s3:x-amz-server-side-encryption": "aws:kms" header. Enable the S3 Object Lock in compliance mode for all objects to automatically rotate the built-in AES256 customer-managed key of the bucket
C. Create an S3 bucket policy that denies permissions to upload an object unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and rely on the built-in key rotation feature of the SSE-S3 encryption keys
Explanation:
A bucket policy is a resource-based policy that you can use to grant access permissions to your bucket and the objects in it. Server-side encryption protects data at rest. Amazon S3 encrypts each object with a unique key. Amazon S3 server-side encryption uses one of the strongest block ciphers available to encrypt your data, 256-bit Advanced Encryption Standard (AES-256).
If you need server-side encryption for all of the objects that are stored in a bucket, use a bucket policy. You can create a bucket policy that denies permissions to upload an object unless the request includes the x-amz-server-side-encryption header to request server-side encryption.
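A minimal sketch of such a setup applied with boto3 is shown below; the bucket name is a placeholder, and the exact policy wording is an assumption rather than the only valid form.

import json
import boto3

s3 = boto3.client("s3")
bucket = "confidential-bank-files"  # placeholder bucket name

# Deny any PutObject request that does not specify AES256 server-side encryption
# (a request with no x-amz-server-side-encryption header also fails this check).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
            },
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Enable default SSE-S3 encryption for the bucket, matching the answer's second part.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)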
Automatic key rotation is disabled by default on customer managed keys but authorized users can enable and disable it. When you enable (or re-enable) automatic key rotation, AWS KMS automatically rotates the KMS key one year (approximately 365 days) after the enable date and every year thereafter.
AWS KMS automatically rotates AWS managed keys every year (approximately 365 days). You cannot enable or disable key rotation for AWS managed keys.
Hence, the correct answer is: Create an S3 bucket policy that denies permissions to upload an object unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and rely on the built-in key rotation feature of the SSE-S3 encryption keys.
The option that says: Create a Service Control Policy (SCP) for the S3 bucket that rejects any object uploads unless the request includes the "s3:x-amz-server-side-encryption": "AES256" header. Enable server-side encryption with Amazon S3-managed encryption keys (SSE-S3) and modify the built-in key rotation feature of the SSE-S3 encryption keys to rotate the key yearly is incorrect. Although it could work, you cannot manually modify the built-in key rotation feature in SSE-S3. These keys are managed by AWS, not by you. Moreover, you have to use a bucket policy in Amazon S3 and not a Service Control Policy, since the latter is primarily used for controlling multiple AWS accounts.
The option that says: Create a new customer-managed key (CMK) from the AWS Key Management Service (AWS KMS). Configure the default encryption behavior of the bucket to use the customer-managed key. Manually rotate the CMK each and every year is incorrect because this solution is missing a bucket policy that denies S3 object uploads without any encryption. In addition, manually rotating the CMK every year adds management overhead to this solution.
The option that says: Create an S3 bucket policy for the S3 bucket that rejects any object uploads unless the request includes the "s3:x-amz-server-side-encryption": "aws:kms" header. Enable the S3 Object Lock in compliance mode for all objects to automatically rotate the built-in AES256 customer-managed key of the bucket is incorrect because the scenario says that the required encryption is 256-bit Advanced Encryption Standard (AES-256), not AWS Key Management Service (SSE-KMS). Take note that if the value of the s3:x-amz-server-side-encryption header is aws:kms, then the S3 encryption type is SSE-KMS. This type of server-side encryption does not use a 256-bit Advanced Encryption Standard (AES-256), unlike the SSE-S3 type. In addition, the S3 Object Lock feature is not used for encryption but for preventing objects from being deleted or overwritten for a fixed amount of time or indefinitely.
A web application requires a minimum of six Amazon Elastic Compute Cloud (EC2) instances running at all times. You are tasked to deploy the application to three availability zones in the EU Ireland region (eu-west-1a, eu-west-1b, and eu-west-1c). It is required that the system is fault-tolerant up to the loss of one Availability Zone.
Which of the following setups is the most cost-effective solution that also maintains the fault-tolerance of your system?
A. 2 instances in eu-west-1a, 2 instances in eu-west-1b, and 2 instances in eu-west-1c
B. 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c
C. 6 instances in eu-west-1a, 6 instances in eu-west-1b, and 6 instances in eu-west-1c
D. 6 instances in eu-west-1a, 6 instances in eu-west-1b, and no instances in eu-west-1c
B. 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c
Explanation:
Basically, fault-tolerance is the ability of a system to remain in operation even in the event that some of its components fail without any service degradation. In AWS, it can also refer to the minimum number of running EC2 instances or resources which should be running at all times in order for the system to properly operate and serve its consumers. Take note that this is quite different from the concept of High Availability, which is just concerned with having at least one running instance or resource in case of failure.
In this scenario, 3 instances in eu-west-1a, 3 instances in eu-west-1b, and 3 instances in eu-west-1c is the correct answer because even if there was an outage in one of the Availability Zones, the system still satisfies the requirement of having a minimum of 6 running instances. It is also the most cost-effective solution among other options.
The option that says: 6 instances in eu-west-1a, 6 instances in eu-west-1b, and 6 instances in eu-west-1c is incorrect. Although this solution provides the maximum fault-tolerance for the system, it entails a significant cost to maintain a total of 18 instances across 3 AZs.
The option that says: 2 instances in eu-west-1a, 2 instances in eu-west-1b, and 2 instances in eu-west-1c is incorrect because if one Availability Zone goes down, there will only be 4 running instances available. Although this is the most cost-effective solution, it does not provide fault-tolerance.
The option that says: 6 instances in eu-west-1a, 6 instances in eu-west-1b, and no instances in eu-west-1c is incorrect. Although it provides fault-tolerance, it is not the most cost-effective solution as compared with the options above. This solution has 12 running instances, unlike the correct answer which only has 9 instances.
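As a quick sanity check of the arithmetic above (a throwaway Python snippet, not part of the exam answer), you can compute how many instances survive the loss of the busiest AZ for each layout:

layouts = {
    "2/2/2": [2, 2, 2],
    "3/3/3": [3, 3, 3],
    "6/6/6": [6, 6, 6],
    "6/6/0": [6, 6, 0],
}

for name, counts in layouts.items():
    # Worst case: the AZ holding the most instances fails.
    remaining = sum(counts) - max(counts)
    print(f"{name}: total={sum(counts)}, after worst AZ loss={remaining}, fault-tolerant={remaining >= 6}")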
A Solutions Architect is designing a setup for a database that will run on Amazon RDS for MySQL. He needs to ensure that the database can automatically failover to an RDS instance to continue operating in the event of failure. The architecture should also be as highly available as possible.
Which among the following actions should the Solutions Architect do?
A. Create five read replicas across different Availability Zones. In the event of an Availability Zone outage, promote any replica to become the primary instance
B. Create a read replica in the same region where the DB instance resides. In addition, create a read replica in a different region to survive a region's failure. In the event of an Availability Zone outage, promote any replica to become the primary instance
C. Create five cross-region read replicas in each region. In the event of an Availability Zone outage, promote any replica to become the primary instance
D. Create a standby replica in another Availability Zone by enabling Multi-AZ deployment
D. Create a standby replica in another Availability Zone by enabling Multi-AZ deployment
Explanation:
You can run an Amazon RDS DB instance in several AZs with a Multi-AZ deployment. Amazon RDS automatically provisions and maintains a secondary standby DB instance in a different AZ. Your primary DB instance is synchronously replicated across AZs to the secondary instance to provide data redundancy and failover support, eliminate I/O freezes, and minimize latency spikes during system backups.
As described in the scenario, the architecture must meet two requirements:
- The database should automatically fail over to an RDS instance in case of failure.
- The architecture should be as highly available as possible.
Hence, the correct answer is: Create a standby replica in another Availability Zone by enabling Multi-AZ deployment because it meets both of the requirements.
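As an illustrative sketch (the instance identifier is a placeholder), an existing single-AZ RDS for MySQL instance could be converted to Multi-AZ with a single boto3 call:

import boto3

rds = boto3.client("rds")

# Converting an existing instance to Multi-AZ provisions a synchronous standby
# replica in another Availability Zone and enables automatic failover.
rds.modify_db_instance(
    DBInstanceIdentifier="production-mysql-db",  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,
)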
The option that says: Create a read replica in the same region where the DB instance resides. In addition, create a read replica in a different region to survive a region’s failure. In the event of an Availability Zone outage, promote any replica to become the primary instance is incorrect. Although this architecture provides higher availability since it can survive a region failure, it still does not meet the first requirement since the process is not automated. The architecture should also support automatic failover to an RDS instance in case of failures.
Both the following options are incorrect:
- Create five read replicas across different availability zones. In the event of an Availability Zone outage, promote any replica to become the primary instance
- Create five cross-region read replicas in each region. In the event of an Availability Zone outage, promote any replica to become the primary instance
Although it is possible to achieve high availability with these architectures by promoting a read replica to the primary instance in the event of a failure, they do not support automatic failover to an RDS instance, which is also a requirement in the scenario.
A company is running a web application on AWS. The application is made up of an Auto Scaling group that sits behind an Application Load Balancer and an Amazon DynamoDB table where user data is stored. The solutions architect must design the application to remain available in the event of a regional failure. The company also needs a solution that automatically monitors the status of its workloads across its AWS account, conducts architectural reviews, and checks for AWS best practices.
Which configuration meets the requirement with the least amount of downtime possible?
A. In the secondary region, create a global secondary index of the DynamoDB table and replicate the auto scaling group and application load balancer. Use Route 53 DNS failover to automatically route traffic to the resources in the secondary region. Set up the AWS Compute Optimizer to automatically get recommendations for improving your workloads based on the AWS best practices
B. In the secondary region, create a global table of the DynamoDB table and replicate the auto scaling group and application load balancer. Use Route 53 DNS failover to automatically route traffic to the resources in the secondary region. Set up the AWS Well-Architected Tool to easily get recommendations for improving your workloads based on the AWS best practices
C. Write a CloudFormation template that includes the auto scaling group, application load balancer, and DynamoDB table. In the event of a failure, deploy the template in a secondary region. Configure Amazon EventBridge to trigger a Lambda function that updates the application's Route 53 DNS record. Launch an Amazon Managed Grafana workspace to automatically receive tips and action items for improving your workloads based on the AWS best practices
D. Write a CloudFormation template that includes the auto scaling group, application load balancer, and DynamoDB table. In the event of a failure, deploy the template in a secondary region. Use Route 53 DNS failover to automatically route traffic to the resources in the secondary region. Set up and configure the Amazon Managed Service for Prometheus service to receive insights for improving your workloads based on the AWS best practices
B. In the secondary region, create a global table of the DynamoDB table and replicate the auto scaling group and application load balancer. Use Route 53 DNS failover to automatically route traffic to the resources in the secondary region. Set up the AWS Well-Architected Tool to easily get recommendations for improving your workloads based on the AWS best practices
Explanation:
When you have more than one resource performing the same function—for example, more than one HTTP server—you can configure Amazon Route 53 to check the health of your resources and respond to DNS queries using only the healthy resources. For example, suppose your website, example.com, is hosted on six servers, two each in three data centers around the world. You can configure Route 53 to check the health of those servers and to respond to DNS queries for example.com using only the servers that are currently healthy.
In this scenario, you can replicate the process layer (EC2 instances, Application Load Balancer) to a different region and create a global table based on the existing DynamoDB table (data layer). Amazon DynamoDB will handle data synchronization between the tables in different regions. This way, the state of the application is preserved even in the event of an outage. Lastly, configure Route 53 DNS failover and set the DNS name of the backup application load balancer as a target.
You can also use the Well-Architected Tool to automatically monitor the status of your workloads across your AWS account, conduct architectural reviews and check for AWS best practices.
This tool is based on the AWS Well-Architected Framework, which was developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructures. The Framework has been used in tens of thousands of workload reviews by AWS solutions architects, and it provides a consistent approach for evaluating your cloud architecture and implementing designs that will scale with your application needs over time.
Hence, the correct answer is: In a secondary region, create a global table of the DynamoDB table and replicate the auto-scaling group and application load balancer. Use Route 53 DNS failover to automatically route traffic to the resources in the secondary region. Set up the AWS Well-Architected Tool to easily get recommendations for improving your workloads based on the AWS best practices
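A rough sketch of the DynamoDB piece of this answer is shown below: adding a replica Region to an existing table (global tables version 2019.11.21) with boto3. The table name and Regions are placeholders, and depending on the table's configuration, DynamoDB Streams with new-and-old images may need to be enabled first.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in a secondary Region, turning the table into a global table.
dynamodb.update_table(
    TableName="user-data",  # placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],  # placeholder Region
)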
The option that says: In a secondary region, create a global secondary index of the DynamoDB table and replicate the auto-scaling group and application load balancer. Use Route 53 DNS failover to automatically route traffic to the resources in the secondary region. Set up the AWS Compute Optimizer to automatically get recommendations for improving your workloads based on the AWS best practices is incorrect because this configuration is impossible to implement. A global secondary index can only be created in the region where its parent table resides. Moreover, the AWS Compute Optimizer simply helps you to identify the optimal AWS resource configurations, such as Amazon Elastic Compute Cloud (EC2) instance types, Amazon Elastic Block Store (EBS) volume configurations, and AWS Lambda function memory sizes. It is not capable of providing recommendations to improve your workloads based on AWS best practices.
The option that says: Write a CloudFormation template that includes the auto-scaling group, application load balancer, and DynamoDB table. In the event of a failure, deploy the template in a secondary region. Use Route 53 DNS failover to automatically route traffic to the resources in the secondary region. Set up and configure the Amazon Managed Service for Prometheus service to receive insights for improving your workloads based on the AWS best practices is incorrect. This solution describes a situation in which the environment is provisioned only after a regional failure occurs. It won’t work because to enable Route 53 DNS failover, you’d need to target an existing environment. The use of the Amazon Managed Service for Prometheus service is irrelevant as well. This is just a serverless, Prometheus-compatible monitoring service for container metrics that makes it easier to securely monitor container environments at scale.
The option that says: Write a CloudFormation template that includes the auto-scaling group, application load balancer, and DynamoDB table. In the event of a failure, deploy the template in a secondary region. Configure Amazon EventBridge to trigger a Lambda function that updates the application’s Route 53 DNS record. Launch an Amazon Managed Grafana workspace to automatically receive tips and action items for improving your workloads based on the AWS best practices is incorrect. This could work, but it won’t deliver the shortest downtime possible since resource provisioning takes minutes to complete. Switching traffic to a standby environment is a faster method, albeit more expensive. Amazon Managed Grafana is a fully managed service with rich, interactive data visualizations to help customers analyze, monitor, and alarm on metrics, logs, and traces across multiple data sources. This service does not provide recommendations based on AWS best practices. You have to use the AWS Well-Architected Tool instead.
A startup launched a new FTP server using an On-Demand EC2 instance in a newly created VPC with default settings. The server should not be accessible publicly but only through the IP address 175.45.116.100 and nowhere else.
Which of the following is the most suitable way to implement this requirement?
A. Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
Protocol: TCP
Port Range: 20-21
Source: 175.45.116.100/0
Allow/Deny: ALLOW
B. Create a new inbound rule in the security group of the EC2 instance with the following details:
Protocol: UDP
Port Range: 20-21
Source: 175.45.116.100/32
C. Create a new inbound rule in the security group of the EC2 instance with the following details:
Protocol: TCP
Port Range: 20-21
Source: 175.45.116.100/32
D. Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
Protocol: UDP
Port Range: 20-21
Source: 175.45.116.100/0
Allow/Deny: ALLOW
C. Create a new inbound rule in the security group of the EC2 instance with the following details:
Protocol: TCP
Port Range: 20-21
Source: 175.45.116.100/32
Explanation:
The FTP protocol uses TCP via ports 20 and 21. This should be configured in your security groups or in your Network ACL inbound rules. As required by the scenario, you should only allow the individual IP of the client and not the entire network. Therefore, in the Source, the proper CIDR notation should be used. The /32 denotes one IP address and the /0 refers to the entire network.
It is stated in the scenario that you launched the EC2 instance in a newly created VPC with default settings. Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic. Hence, you actually don't need to explicitly add inbound rules to your Network ACL to allow inbound traffic if your VPC uses the default settings.
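For reference, the chosen inbound rule could be added with boto3 roughly as follows; the security group ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Allow FTP (TCP ports 20-21) only from the single client IP 175.45.116.100.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 20,
            "ToPort": 21,
            "IpRanges": [{"CidrIp": "175.45.116.100/32", "Description": "FTP client"}],
        }
    ],
)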
The below option is incorrect:
Create a new inbound rule in the security group of the EC2 instance with the following details:
Protocol: UDP
Port Range: 20 - 21
Source: 175.45.116.100/32
Although the configuration of the Security Group is valid, the provided Protocol is incorrect. Take note that FTP uses TCP and not UDP.
The below option is also incorrect:
Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
Protocol: TCP
Port Range: 20 - 21
Source: 175.45.116.100/0
Allow/Deny: ALLOW
Although setting up an inbound Network ACL is valid, the source is invalid since it must be an IPv4 or IPv6 CIDR block. In the provided IP address, the /0 refers to the entire network and not a specific IP address. In addition, it is stated in the scenario that the newly created VPC has default settings and by default, the Network ACL allows all traffic. This means that there is actually no need to configure your Network ACL.
Likewise, the below option is also incorrect:
Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
Protocol: UDP
Port Range: 20 - 21
Source: 175.45.116.100/0
Allow/Deny: ALLOW
Just like in the above, the source is also invalid. Take note that FTP uses TCP and not UDP, which is one of the reasons why this option is wrong. In addition, it is stated in the scenario that the newly created VPC has default settings and by default, the Network ACL allows all traffic. This means that there is actually no need to configure your Network ACL.
A large electronics company is using Amazon Simple Storage Service (Amazon S3) to store important documents. For reporting purposes, they want to track and log every access request to their S3 buckets, including the requester, bucket name, request time, request action, referrer, turnaround time, and error code information. The solution should also provide more visibility into the object-level operations of the bucket.
Which is the best solution among the following options that can satisfy the requirement?
A. Enable Amazon S3 Event Notifications for PUT and POST
B. Enable server access logging for all required Amazon S3 buckets
C. Enable AWS CloudTrail to audit all Amazon S3 bucket access
D. Enable the Requester Pays option to track access via AWS Billing
B. Enable server access logging for all required Amazon S3 buckets
Explanation:
Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Amazon S3. CloudTrail captures a subset of API calls for Amazon S3 as events, including calls from the Amazon S3 console and code calls to the Amazon S3 APIs.
AWS CloudTrail logs provide a record of actions taken by a user, role, or an AWS service in Amazon S3, while Amazon S3 server access logs provide detailed records for the requests that are made to an S3 bucket.
For this scenario, you can use CloudTrail and the Server Access Logging feature of Amazon S3. However, it is mentioned in the scenario that they need detailed information about every access request sent to the S3 bucket including the referrer and turn-around time information. These two records are not available in CloudTrail.
Hence, the correct answer is: Enable server access logging for all required Amazon S3 buckets.
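As an illustrative sketch (bucket names are placeholders, and the target bucket must already allow the S3 log delivery service to write to it), server access logging can be enabled with boto3:

import boto3

s3 = boto3.client("s3")

# Deliver access logs for the source bucket to a separate log bucket.
s3.put_bucket_logging(
    Bucket="important-documents",  # placeholder source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "important-documents-logs",  # placeholder log bucket
            "TargetPrefix": "access-logs/",
        }
    },
)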
The option that says: Enable AWS CloudTrail to audit all Amazon S3 bucket access is incorrect because enabling AWS CloudTrail alone won’t give detailed logging information for object-level access.
The option that says: Enable the Requester Pays option to track access via AWS Billing is incorrect because this option relates to billing, not logging.
The option that says: Enable Amazon S3 Event Notifications for PUT and POST is incorrect because we are looking for a logging solution and not an event notification.
A company launched a global news website that is deployed to AWS and is using MySQL RDS. The website has millions of viewers from all over the world, which means that the website has a read-heavy database workload. All database transactions must be ACID compliant to ensure data integrity.
In this scenario, which of the following is the best option to use to increase the read-throughput on the MySQL database?
A. Enable Amazon RDS Standby Replicas
B. Use SQS to queue up the requests
C. Enable Multi-AZ Deployments
D. Enable Amazon RDS Read Replicas
D. Enable Amazon RDS Read Replicas
Explanation:
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle, and PostgreSQL as well as Amazon Aurora.
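A minimal sketch of creating one such read replica with boto3 is shown below; the instance identifiers are placeholders.

import boto3

rds = boto3.client("rds")

# Create a read replica of the news website's MySQL instance to offload read traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="news-mysql-replica-1",      # placeholder replica name
    SourceDBInstanceIdentifier="news-mysql-primary",  # placeholder source instance
)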
Enabling Multi-AZ deployments is incorrect because the Multi-AZ deployments feature is mainly used to achieve high availability and failover support for your database.
Enabling Amazon RDS Standby Replicas is incorrect because a Standby replica is used in Multi-AZ deployments and hence, it is not a solution to reduce read-heavy database workloads.
Using SQS to queue up the requests is incorrect. Although an SQS queue can effectively manage the requests, it won’t be able to entirely improve the read-throughput of the database by itself.
There is a new compliance rule in your company that audits every Windows and Linux EC2 instance each month to check for any performance issues. They have more than a hundred EC2 instances running in production, and each must have a logging function that collects various system details regarding that instance. The SysOps team will periodically review these logs and analyze their contents using AWS analytics tools, and the results will need to be retained in an S3 bucket.
In this scenario, what is the most efficient way to collect and analyze logs from the instances with minimal effort?
A. Install AWS SDK in each instance and create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Enable CloudWatch detailed monitoring and use CloudWatch Logs Insight to analyze the log data of all instances
B. Install the Amazon Inspector agent in each instance, which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances
C. Install the unified CloudWatch agent in each instance, which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights
D. Install the AWS Systems Manager Agent (SSM Agent) in each instance, which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights
C. Install the unified CloudWatch agent in each instance, which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights
Explanation:
To collect logs from your Amazon EC2 instances and on-premises servers into CloudWatch Logs, AWS offers both a new unified CloudWatch agent, and an older CloudWatch Logs agent. It is recommended to use the unified CloudWatch agent which has the following advantages:
- You can collect both logs and advanced metrics with the installation and configuration of just one agent.
- The unified agent enables the collection of logs from servers running Windows Server.
- If you are using the agent to collect CloudWatch metrics, the unified agent also enables the collection of additional system metrics, for in-guest visibility.
- The unified agent provides better performance.
CloudWatch Logs Insights enables you to interactively search and analyze your log data in Amazon CloudWatch Logs. You can perform queries to help you quickly and effectively respond to operational issues. If an issue occurs, you can use CloudWatch Logs Insights to identify potential causes and validate deployed fixes.
CloudWatch Logs Insights includes a purpose-built query language with a few simple but powerful commands. CloudWatch Logs Insights provides sample queries, command descriptions, query autocompletion, and log field discovery to help you get started quickly. Sample queries are included for several types of AWS service logs.
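For illustration, a CloudWatch Logs Insights query can also be launched programmatically; the log group name and query below are assumptions, not values from the scenario.

import time
import boto3

logs = boto3.client("logs")

# Run an ad-hoc Insights query over the last hour of a hypothetical log group.
now = int(time.time())
query = logs.start_query(
    logGroupName="/ec2/system-logs",  # placeholder log group
    startTime=now - 3600,
    endTime=now,
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)

# Poll until the query finishes, then print the results.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
print(results["results"])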
The option that says: Install AWS SDK in each instance and create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Enable CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances is incorrect. Although this is a valid solution, this entails a lot of effort to implement as you have to allocate time to install the AWS SDK to each instance and develop a custom monitoring solution. Remember that the question is specifically looking for a solution that can be implemented with minimal effort. In addition, it is unnecessary and not cost-efficient to enable detailed monitoring in CloudWatch in order to meet the requirements of this scenario since this can be done using CloudWatch Logs.
The option that says: Install the AWS Systems Manager Agent (SSM Agent) in each instance, which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights is incorrect. Although this is also a valid solution, it is more efficient to use the CloudWatch agent than the SSM Agent. Manually connecting to an instance to view log files and troubleshoot an issue with the SSM Agent is time-consuming; hence, for more efficient instance monitoring, you should use the CloudWatch agent instead to send the log data to Amazon CloudWatch Logs.
The option that says: Install the Amazon Inspector agent in each instance, which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances is incorrect because Amazon Inspector is simply a security assessment service that only helps you check for unintended network accessibility of your EC2 instances and for vulnerabilities on those EC2 instances. Furthermore, setting up an Amazon CloudWatch dashboard is not suitable since it is primarily used for scenarios where you have to monitor your resources in a single view, even those resources that are spread across different AWS Regions. It is better to use CloudWatch Logs Insights instead since it enables you to interactively search and analyze your log data.
A web application is hosted in an Auto Scaling group of EC2 instances deployed across multiple Availability Zones behind an Application Load Balancer. You need to implement an SSL solution for your system to improve its security which is why you requested an SSL/TLS certificate from a third-party certificate authority (CA).
Where can you safely import the SSL/TLS certificate of your application? (Select TWO.)
A. AWS Certificate Manager
B. An S3 bucket configured with server-side encryption with customer-provided encryption keys (SSE-C)
C. IAM certificate store
D. CloudFront
E. A private S3 bucket with versioning enabled
A. AWS Certificate Manager
C. IAM certificate store
Explanation:
If you got your certificate from a third-party CA, import the certificate into ACM or upload it to the IAM certificate store. Hence, AWS Certificate Manager and IAM certificate store are the correct answers.
ACM lets you import third-party certificates from the ACM console, as well as programmatically. If ACM is not available in your region, use AWS CLI to upload your third-party certificate to the IAM certificate store.
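A minimal sketch of importing a third-party certificate into ACM with boto3 is shown below, assuming the PEM files issued by the CA are available locally (the file names and region are placeholders):

import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Read the PEM-encoded certificate, private key, and chain from the third-party CA.
with open("certificate.pem", "rb") as cert, \
     open("private-key.pem", "rb") as key, \
     open("chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

print(response["CertificateArn"])  # attach this ARN to the ALB's HTTPS listener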
A private S3 bucket with versioning enabled and an S3 bucket configured with server-side encryption with customer-provided encryption keys (SSE-C) are both incorrect as S3 is not a suitable service to store the SSL certificate.
CloudFront is incorrect. Although you can upload certificates to CloudFront, it doesn't mean that you can import SSL certificates on it. You would not be able to export the certificate that you have loaded in CloudFront nor assign it to your EC2 or ELB instances, as it would be tied to a single CloudFront distribution.
A company has stored 200 TB of backup files in Amazon S3. The files are in a vendor-proprietary format. The Solutions Architect needs to use the vendor’s proprietary file conversion software to retrieve the files from their Amazon S3 bucket, transform the files to an industry-standard format, and re-upload the files back to Amazon S3. The solution must minimize the data transfer costs.
Which of the following options can satisfy the given requirement?
A. Install the file conversion software in Amazon S3. Use S3 Batch Operations to perform data transformations
B. Deploy the EC2 instance in a different Region. Install the conversion software on the instance. Perform data transformation and re-upload it to Amazon S3
C. Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion software on the instance. Perform data transformation and re-upload it to Amazon S3
D. Export the data using an AWS Snowball Edge device. Install the file conversion software on the device. Transform the data and re-upload it to Amazon S3
C. Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion software on the instance. Perform data transformation and re-upload it to Amazon S3
Explanation:
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers industry-leading durability, availability, performance, security, and virtually unlimited scalability at very low costs. Amazon S3 is also designed to be highly flexible. Store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster recovery; build a simple FTP application or a sophisticated web application.
You pay for all bandwidth into and out of Amazon S3, except for the following:
- Data transferred in from the Internet.
- Data transferred out to an Amazon EC2 instance, when the instance is in the same AWS Region as the S3 bucket (including to a different account in the same AWS region).
- Data transferred out to Amazon CloudFront.
To minimize the data transfer charges, you need to deploy the EC2 instance in the same Region as Amazon S3. Take note that there is no data transfer cost between S3 and EC2 in the same AWS Region. Install the conversion software on the instance to perform data transformation and re-upload the data to Amazon S3.
Hence, the correct answer is: Deploy the EC2 instance in the same Region as Amazon S3. Install the file conversion software on the instance. Perform data transformation and re-upload it to Amazon S3.
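Conceptually, the conversion job on the same-Region EC2 instance boils down to something like the sketch below. The bucket, key names, and the convert() helper are placeholders standing in for the vendor's proprietary software.

import shutil
import boto3

s3 = boto3.client("s3")
bucket = "backup-archive"  # placeholder bucket name

def convert(src_path, dst_path):
    # Placeholder: invoke the vendor's proprietary conversion tool here.
    # For this sketch, simply copy the file.
    shutil.copyfile(src_path, dst_path)

# Download, transform, and re-upload a single object. No data transfer charge
# applies because the EC2 instance and the bucket are in the same AWS Region.
s3.download_file(bucket, "backups/file-001.vendor", "/tmp/file-001.vendor")
convert("/tmp/file-001.vendor", "/tmp/file-001.standard")
s3.upload_file("/tmp/file-001.standard", bucket, "converted/file-001.standard")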
The option that says: Install the file conversion software in Amazon S3. Use S3 Batch Operations to perform data transformation is incorrect because it is not possible to install the software in Amazon S3. The S3 Batch Operations just runs multiple S3 operations in a single request. It can’t be integrated with your conversion software.
The option that says: Export the data using AWS Snowball Edge device. Install the file conversion software on the device. Transform the data and re-upload it to Amazon S3 is incorrect. Although this is possible, it is not mentioned in the scenario that the company has an on-premises data center. Thus, there’s no need for Snowball.
The option that says: Deploy the EC2 instance in a different Region. Install the file conversion software on the instance. Perform data transformation and re-upload it to Amazon S3 is incorrect because this approach wouldn’t minimize the data transfer costs. You should deploy the instance in the same Region as Amazon S3.
The company you are working for has a set of AWS resources hosted in the ap-northeast-1 region. You have been asked by your IT Manager to create an AWS CLI shell script that will call an AWS service which could create duplicate resources in another region in the event that the ap-northeast-1 region fails. The duplicated resources should also contain the VPC Peering configuration and other networking components from the primary stack.
Which of the following AWS services could help fulfill this task?
A. AWS Elastic Beanstalk
B. Amazon SQS
C. Amazon SNS
D. AWS CloudFormation
D. AWS CloudFormation
Explanation:
AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
You can create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. With this, you can deploy an exact copy of your AWS architecture, along with all of the AWS resources which are hosted in one region to another.
Hence, the correct answer is AWS CloudFormation.
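The scenario calls for an AWS CLI shell script; purely as an illustration (shown here in Python rather than shell), the equivalent API call deploys the same template in a backup Region. The stack name, template URL, and target Region are assumptions.

import boto3

# Deploy the duplicate stack in a backup Region if ap-northeast-1 fails.
cfn = boto3.client("cloudformation", region_name="ap-southeast-1")  # placeholder Region

cfn.create_stack(
    StackName="dr-network-stack",  # placeholder stack name
    TemplateURL="https://s3.amazonaws.com/example-bucket/network-stack.yaml",  # placeholder
    Capabilities=["CAPABILITY_NAMED_IAM"],
)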
AWS Elastic Beanstalk is incorrect. Elastic Beanstalk is a high-level service that simplifies the creation of application resources such as an EC2 instance with preconfigured proxy servers (Nginx or Apache), a load balancer, an auto-scaling group, and so on. Elastic Beanstalk environments have limited resources; for example, Elastic Beanstalk does not create a VPC for you. CloudFormation on the other hand is more of a low-level service that you can use to model the entirety of your AWS environment. In fact, Elastic Beanstalk uses CloudFormation under the hood to create resources.
Amazon SQS and Amazon SNS are both incorrect because SNS and SQS are just messaging services.
A company decided to change its third-party data analytics tool to a cheaper solution. They sent a full data export on a CSV file which contains all of their analytics information. You then save the CSV file to an S3 bucket for storage. Your manager asked you to do some validation on the provided data export.
In this scenario, what is the most cost-effective and easiest way to analyze export data using standard SQL?
A. To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3
B. Create a migration tool to load the CSV export file from S3 to a DynamoDB table. Once the data has been loaded, run queries using DynamoDB
C. Use the mysqldump client utility to load the CSV export file from S3 to a MySQL RDS instance. Run some SQL queries once the data has been loaded to complete your validation
D. Use a migration tool to load the CSV export file from S3 to a database that is designed for online analytical processing (OLAP), such as Amazon Redshift. Run some queries once the data has been loaded to complete your validation
A. To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3
Explanation:
Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds.
Athena is serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. Athena scales automatically—executing queries in parallel—so results are fast, even with large datasets and complex queries.
Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. You can use Athena to run ad-hoc queries using ANSI SQL without the need to aggregate or load the data into Athena.
Hence, the correct answer is: To be able to run SQL queries, use Amazon Athena to analyze the export data file in S3.
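For illustration, the CSV in S3 could be queried from Athena programmatically as sketched below; the database, table, and output location are placeholders and assume a table has already been defined over the CSV (for example with a CREATE EXTERNAL TABLE statement or a Glue crawler).

import boto3

athena = boto3.client("athena")

# Run a simple validation query against the exported analytics data.
response = athena.start_query_execution(
    QueryString="SELECT COUNT(*) AS row_count FROM analytics_export",  # placeholder table
    QueryExecutionContext={"Database": "analytics_db"},                # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},  # placeholder
)
print(response["QueryExecutionId"])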
The rest of the options are all incorrect because it is not necessary to set up a database to be able to analyze the CSV export file. You can use a cost-effective option (Amazon Athena), which is a serverless service that enables you to pay only for the queries you run.
A hospital has a mission-critical application that uses a RESTful API powered by Amazon API Gateway and AWS Lambda. The medical officers upload PDF reports to the system which are then stored as static media content in an Amazon S3 bucket.
The security team wants to improve its visibility when it comes to cyber-attacks and ensure HIPAA (Health Insurance Portability and Accountability Act) compliance. The company is searching for a solution that continuously monitors object-level S3 API operations and identifies protected health information (PHI) in the reports, with minimal changes in their existing Lambda function.
Which of the following solutions will meet these requirements with the LEAST operational overhead?
A. Use Amazon Textract to extract the text from the PDF reports. Integrate Amazon Comprehend Medical with the existing Lambda function to identify the PHI from the extracted text
B. Use Amazon Textract Medical with PII redaction turned on to extract and filter sensitive text from the PDF reports. Create a new Lambda function that calls the regular Amazon Comprehend API to identify the PHI from the extracted text
C. Use Amazon Rekognition to extract the text data from the PDF reports. Integrate the Amazon Comprehend Medical service with the existing Lambda functions to identify PHI from the extracted text
D. Use Amazon Transcribe to read and analyze the PDF reports using the StartTranscriptionJob API operation. Use Amazon SageMaker Ground Truth to label and detect protected health information (PHI) content with low-confidence predictions
A. Use Amazon Textract to extract the text from the PDF reports. Integrate Amazon Comprehend Medical with the existing Lambda function to identify the PHI from the extracted text
Explanation:
Amazon Comprehend Medical is a natural language processing service that makes it easy to use machine learning to extract relevant medical information from unstructured text. Using Amazon Comprehend Medical, you can quickly and accurately gather information, such as medical conditions, medication, dosage, strength, and frequency from a variety of sources, like doctors’ notes, clinical trial reports, and patient health records.
Amazon Comprehend Medical uses advanced machine learning models to accurately and quickly identify medical information such as medical conditions and medication and determine their relationship to each other, for instance, medication and dosage. You access Comprehend Medical through a simple API call, no machine learning expertise is required, no complicated rules to write, and no models to train.
You can use the extracted medical information and their relationships to build applications for use cases, like clinical decision support, revenue cycle management (medical coding), and clinical trial management. Because Comprehend Medical is HIPAA-eligible and can quickly identify protected health information (PHI), such as name, age, and medical record number, you can also use it to create applications that securely process, maintain, and transmit PHI.
Hence, the correct answer is: Use Amazon Textract to extract the text from the PDF reports. Integrate Amazon Comprehend Medical with the existing Lambda function to identify the PHI from the extracted text.
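A simplified sketch of the Textract-to-Comprehend Medical flow inside the existing Lambda function might look like the following; the bucket and key are placeholders, polling is reduced to a bare loop with no pagination, and the detect_phi text-size limit is ignored for brevity.

import time
import boto3

textract = boto3.client("textract")
medical = boto3.client("comprehendmedical")

def extract_phi(bucket, key):
    # Start an asynchronous text-detection job for the PDF report stored in S3.
    job = textract.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}}
    )

    # Poll until the Textract job finishes (simplified; no timeout handling).
    while True:
        result = textract.get_document_text_detection(JobId=job["JobId"])
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(5)

    text = "\n".join(
        block["Text"] for block in result.get("Blocks", []) if block["BlockType"] == "LINE"
    )

    # Identify protected health information in the extracted text.
    return medical.detect_phi(Text=text)["Entities"]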
The option that says: Use Amazon Textract Medical with PII redaction turned on to extract and filter sensitive text from the PDF reports. Create a new Lambda function that calls the regular Amazon Comprehend API to identify the PHI from the extracted text is incorrect. The PII (Personally Identifiable Information) redaction feature of the Amazon Textract Medical service is not enough to identify all the Protected Health Information (PHI) in the PDF reports. Take note that PII is quite different from PHI. In addition, this option proposes to create a brand new Lambda function, which is contrary to what the company explicitly requires of having a minimal change in their existing Lambda function.
The option that says: Use Amazon Transcribe to read and analyze the PDF reports using the StartTranscriptionJob API operation. Use Amazon SageMaker Ground Truth to label and detect protected health information (PHI) content with low-confidence predictions is incorrect because Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications, and not for extracting text from image files or PDFs. Using Amazon SageMaker can lead to a high operational overhead since you have to manually create a custom model for identifying the PHI.
The option that says: Use Amazon Rekognition to extract the text data from the PDF reports. Integrate the Amazon Comprehend Medical service with the existing Lambda functions to identify the PHI from the extracted text is incorrect. Although the Amazon Rekognition service can technically detect text in images and videos, this service is still not suitable for extracting text from PDF reports. A more suitable solution is to use Amazon Textract.
A company deployed an online enrollment system database for a prestigious university, which is hosted in Amazon RDS. The Solutions Architect is required to monitor the database metrics in Amazon CloudWatch to ensure the availability of the enrollment system.
What are the enhanced monitoring metrics that Amazon CloudWatch gathers from Amazon RDS DB instances which provide more accurate information? (Select TWO.)
A. Database Connections
B. CPU Utilization
C. OS Processes
D. Freeable memory
E. RDS Child Processes
C. OS Processes
E. RDS Child Processes
Explanation:
Amazon RDS provides metrics in real-time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring system of your choice.
CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences between the measurements because the hypervisor layer performs a small amount of work. The differences can be greater if your DB instances use smaller instance classes because then there are likely more virtual machines (VMs) that are managed by the hypervisor layer on a single physical instance. Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU.
In RDS, the Enhanced Monitoring metrics shown in the Process List view are organized as follows:
RDS child processes – Shows a summary of the RDS processes that support the DB instance, for example aurora for Amazon Aurora DB clusters and mysqld for MySQL DB instances. Process threads appear nested beneath the parent process. Process threads show CPU utilization only as other metrics are the same for all threads for the process. The console displays a maximum of 100 processes and threads. The results are a combination of the top CPU-consuming and memory-consuming processes and threads. If there are more than 50 processes and more than 50 threads, the console displays the top 50 consumers in each category. This display helps you identify which processes are having the greatest impact on performance.
RDS processes – Shows a summary of the resources used by the RDS management agent, diagnostics monitoring processes, and other AWS processes that are required to support RDS DB instances.
OS processes – Shows a summary of the kernel and system processes, which generally have minimal impact on performance.
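For illustration, the following minimal sketch (Python with boto3) reads the latest Enhanced Monitoring JSON payload that RDS publishes to the RDSOSMetrics log group in CloudWatch Logs and prints the OS process list. The log stream name shown is a hypothetical DB instance resource ID, and the field names are taken from the Enhanced Monitoring JSON output.

import json
import boto3

logs = boto3.client("logs")

# Enhanced Monitoring delivers its JSON output to the "RDSOSMetrics" log group;
# each DB instance writes to a log stream named after its resource ID.
events = logs.get_log_events(
    logGroupName="RDSOSMetrics",
    logStreamName="db-EXAMPLE1234567890",  # hypothetical DB resource ID
    limit=1,
    startFromHead=False,
)

for event in events["events"]:
    metrics = json.loads(event["message"])
    # OS-level process list gathered by the Enhanced Monitoring agent.
    for proc in metrics.get("processList", []):
        print(proc.get("name"), proc.get("cpuUsedPc"), proc.get("memoryUsedPc"))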
CPU Utilization, Database Connections, and Freeable Memory are incorrect because these are just the regular items provided by Amazon RDS Metrics in CloudWatch. Remember that the scenario is asking for the Enhanced Monitoring metrics.
A company launched an EC2 instance in a newly created VPC. They noticed that the newly launched instance does not have an associated DNS hostname.
Which of the following options could be a valid reason for this issue?
A. The newly created VPC has an invalid CIDR block
B. The DNS resolution and DNS hostname of the VPC configuration should be enabled
C. The security group of the EC2 instance needs to be modified
D. Amazon Route 53 is not enabled
B. The DNS resolution and DNS hostname of the VPC configuration should be enabled
Explanation:
When you launch an EC2 instance into a default VPC, AWS provides it with public and private DNS hostnames that correspond to the public IPv4 and private IPv4 addresses for the instance.
However, when you launch an instance into a non-default VPC, AWS provides the instance with a private DNS hostname only. A new instance is provided with a public DNS hostname only if both DNS attributes (DNS resolution and DNS hostnames) are enabled for your VPC and the instance has a public IPv4 address.
In this case, the new EC2 instance does not automatically get a DNS hostname because the DNS resolution and DNS hostnames attributes are disabled in the newly created VPC.
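For reference, both attributes can be enabled on the VPC with two separate ModifyVpcAttribute calls, as in this minimal boto3 sketch; the VPC ID is a hypothetical placeholder.

import boto3

ec2 = boto3.client("ec2")

# Only one attribute can be modified per API call, so two calls are needed.
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0", EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId="vpc-0123456789abcdef0", EnableDnsHostnames={"Value": True})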
Hence, the correct answer is: The DNS resolution and DNS hostname of the VPC configuration should be enabled.
The option that says: The newly created VPC has an invalid CIDR block is incorrect because AWS validates the CIDR block when a VPC is created, so it is very unlikely for a VPC to have an invalid CIDR block.
The option that says: Amazon Route 53 is not enabled is incorrect since Route 53 does not need to be enabled. Route 53 is the DNS service of AWS, but it is the VPC's DNS attributes that control whether instances are assigned DNS hostnames.
The option that says: The security group of the EC2 instance needs to be modified is incorrect since security groups are simply firewalls for your instances. They filter traffic based on a set of security group rules and have no effect on DNS hostname assignment.
A company is using the AWS Directory Service to integrate their on-premises Microsoft Active Directory (AD) domain with their Amazon EC2 instances via an AD connector. The below identity-based policy is attached to the IAM Identities that use the AWS Directory service:
{ "Version":"2012-10-17", "Statement":[ { "Sid":"DirectoryTutorialsDojo1234", "Effect":"Allow", "Action":[ "ds:*" ], "Resource":"arn:aws:ds:us-east-1:987654321012:directory/d-1234567890" }, { "Effect":"Allow", "Action":[ "ec2:*" ], "Resource":"*" } ] }
Which of the following BEST describes what the above IAM policy does?
A. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: 987654321012
B. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: d-1234567890
C. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory name of: DirectoryTutorialsDojo1234
D. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: DirectoryTutorialsDojo1234
B. Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: d-1234567890
Explanation:
AWS Directory Service provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services. Directories store information about users, groups, and devices, and administrators use them to manage access to information and resources. AWS Directory Service provides multiple directory choices for customers who want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)–aware applications in the cloud. It also offers those same choices to developers who need a directory to manage users, groups, devices, and access.
Every AWS resource is owned by an AWS account, and permissions to create or access the resources are governed by permissions policies. An account administrator can attach permissions policies to IAM identities (that is, users, groups, and roles), and some services (such as AWS Lambda) also support attaching permissions policies to resources.
The following identity-based policy example allows all ds calls as long as the resource contains the directory ID “d-1234567890”.
{ "Version":"2012-10-17", "Statement":[ { "Sid":"VisualEditor0", "Effect":"Allow", "Action":[ "ds:*" ], "Resource":"arn:aws:ds:us-east-1:123456789012:directory/d-1234567890" }, { "Effect":"Allow", "Action":[ "ec2:*" ], "Resource":"*" } ] }
Hence, the correct answer is the option that says: Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: d-1234567890.
The option that says: Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: DirectoryTutorialsDojo1234 is incorrect because DirectoryTutorialsDojo1234 is the Statement ID (SID) and not the Directory ID.
The option that says: Allows all AWS Directory Service (ds) calls as long as the resource contains the directory ID: 987654321012 is incorrect because 987654321012 is the AWS account ID, not the directory ID.
The option that says: Allows all AWS Directory Service (ds) calls as long as the resource contains the directory name of: DirectoryTutorialsDojo1234 is incorrect because DirectoryTutorialsDojo1234 is the Statement ID (SID) and not the Directory name.
A company plans to migrate a NoSQL database to an EC2 instance. The database is configured to replicate the data automatically to keep multiple copies of data for redundancy. The Solutions Architect needs to launch an instance that has a high IOPS and sequential read/write access.
Which of the following options fulfills the requirement if I/O throughput is the highest priority?
A. Use Storage optimized instances with instance store volume
B. Use Compute optimized instances with instance store volume
C. Use Memory optimized instances with EBS volume
D. Use General purpose instances with EBS volume
A. Use Storage optimized instances with instance store volume
Explanation:
Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. Some instance types can drive more I/O throughput than what you can provision for a single EBS volume. You can join multiple volumes together in a RAID 0 configuration to use the available bandwidth of these instances.
Based on the given scenario, the NoSQL database will be migrated to an EC2 instance. The suitable instance type for NoSQL database is I3 and I3en instances. Also, the primary data storage for I3 and I3en instances is non-volatile memory express (NVMe) SSD instance store volumes. Since the data is replicated automatically, there will be no problem using an instance store volume.
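A minimal boto3 sketch of launching such a storage optimized instance is shown below; the AMI ID and key pair name are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

# Launch a storage optimized (I3) instance whose primary data storage is
# NVMe SSD instance store.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="i3.xlarge",         # storage optimized instance family
    MinCount=1,
    MaxCount=1,
    KeyName="example-key",            # hypothetical key pair
)
print(response["Instances"][0]["InstanceId"])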
Hence, the correct answer is: Use Storage optimized instances with instance store volume.
The option that says: Use Compute optimized instances with instance store volume is incorrect because this type of instance is ideal for compute-bound applications that benefit from high-performance processors. It is not suitable for a NoSQL database.
The option that says: Use General purpose instances with EBS volume is incorrect because this instance only provides a balance of computing, memory, and networking resources. Take note that the requirement in the scenario is high sequential read and write access. Therefore, you must use a storage optimized instance.
The option that says: Use Memory optimized instances with EBS volume is incorrect. Although this type of instance is suitable for a NoSQL database, it is not designed for workloads that require high, sequential read and write access to very large data sets on local storage.
A media company is using Amazon EC2, ELB, and S3 for its video-sharing portal for filmmakers. They are using a standard S3 storage class to store all high-quality videos that are frequently accessed only during the first three months of posting.
As a Solutions Architect, what should you do if the company needs to automatically transfer or archive media data from an S3 bucket to Glacier?
A. Use a custom shell script that transfers data from the S3 bucket to Glacier
B. Use Amazon SQS
C. Use Amazon SWF
D. Use Lifecycle Policies
D. Use Lifecycle Policies
Explanation:
You can create a lifecycle policy in S3 to automatically transfer your data to Glacier.
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects.
These actions can be classified as follows:
Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation or archive objects to the GLACIER storage class one year after creation.
Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
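For example, a transition action that archives objects to Glacier 90 days after creation (roughly the first three months mentioned in the scenario) could look like this minimal boto3 sketch; the bucket name and prefix are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-video-portal-media",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ArchiveVideosToGlacier",
                "Filter": {"Prefix": "videos/"},  # hypothetical prefix
                "Status": "Enabled",
                # Transition objects to the Glacier storage class after 90 days.
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)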
A technical lead of the Cloud Infrastructure team was consulted by a software developer regarding the required AWS resources of the web application that he is building. The developer knows that an Instance Store only provides ephemeral storage where the data is automatically deleted when the instance is terminated. To ensure that the data of the web application persists, the app should be launched in an EC2 instance that has a durable, block-level storage volume attached. The developer knows that an EBS volume is needed but is not sure which type to use.
In this scenario, which of the following is true about Amazon EBS volume types and their respective usage? (Select TWO.)
A. Provisioned IOPS volumes offer storage with consistent and low-latency performance and are designed for I/O-intensive applications such as large relational or NoSQL databases
B. Spot volumes provide the lowest cost per gigabyte of all EBS volume types and are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important
C. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types and are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important
D. General Purpose SSD (gp3) volumes with multi-attach enabled offer consistent and low-latency performance and are designed for applications requiring Multi-AZ resiliency
E. Single root I/O virtualization (SR-IOV) volumes are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes
A. Provisioned IOPS volumes offer storage with consistent and low-latency performance and are designed for I/O-intensive applications such as large relational or NoSQL databases
C. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types and are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important
Explanation:
Amazon EBS provides three volume types to best meet the needs of your workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic.
General Purpose (SSD) is the new, SSD-backed, general purpose EBS volume type that is recommended as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes.
Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance and are designed for I/O intensive applications such as large relational or NoSQL databases. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types.
Magnetic volumes are ideal for workloads where data are accessed infrequently, and applications where the lowest storage cost is important. Take note that this is a Previous Generation Volume. The latest low-cost magnetic storage types are Cold HDD (sc1) and Throughput Optimized HDD (st1) volumes.
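For illustration, the following boto3 sketch creates a Provisioned IOPS (io1) volume and a previous-generation Magnetic (standard) volume; the Availability Zone, sizes, and IOPS figure are assumptions for the example.

import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS volume for an I/O-intensive database workload.
piops_volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,        # GiB
    VolumeType="io1",
    Iops=10000,      # provisioned IOPS
)

# Previous-generation Magnetic volume for infrequently accessed data.
magnetic_volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,                # GiB
    VolumeType="standard",   # Magnetic volume type
)
print(piops_volume["VolumeId"], magnetic_volume["VolumeId"])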
Hence, the correct answers are:
- Provisioned IOPS volumes offer storage with consistent and low-latency performance, and are designed for I/O intensive applications such as large relational or NoSQL databases.
- Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types and are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important.
The option that says: Spot volumes provide the lowest cost per gigabyte of all EBS volume types and are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important is incorrect because there is no EBS volume type called a “Spot volume”; however, there is an instance purchasing option for Spot Instances.
The option that says: General Purpose SSD (gp3) volumes with multi-attach enabled offer consistent and low-latency performance, and are designed for applications requiring Multi-AZ resiliency is incorrect because the Multi-Attach feature can only be enabled on EBS Provisioned IOPS io2 or io1 volumes. In addition, Multi-Attach does not provide Multi-AZ resiliency because this feature only allows an EBS volume to be attached to multiple instances within the same Availability Zone.
The option that says: Single root I/O virtualization (SR-IOV) volumes are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes is incorrect because SR-IOV is related to Enhanced Networking on Linux instances, not to EBS.
A company has a set of Linux servers running on multiple On-Demand EC2 Instances. The Audit team wants to collect and process the application log files generated from these servers for their report.
Which of the following services is best to use in this case?
A. A single On-Demand Amazon EC2 instance for both storing and processing the log files
B. Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files
C. Amazon S3 Glacier Deep Archive for storing the application log files and AWS ParallelCluster for processing the log files
D. Amazon S3 Glacier for storing the application log files and Spot EC2 instances for processing them
B. Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files
Explanation:
Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
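A minimal boto3 sketch of launching an EMR cluster that writes its own logs to S3 and can process the application log files stored there is shown below; the bucket names, release label, subnet ID, and default IAM role names are assumptions for the example.

import boto3

emr = boto3.client("emr")

cluster = emr.run_job_flow(
    Name="log-processing-cluster",
    ReleaseLabel="emr-6.15.0",                      # assumed release label
    Applications=[{"Name": "Spark"}],
    LogUri="s3://example-audit-logs/emr-logs/",     # hypothetical bucket
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
        "Ec2SubnetId": "subnet-0123456789abcdef0",  # hypothetical subnet
    },
    JobFlowRole="EMR_EC2_DefaultRole",              # assumed default roles
    ServiceRole="EMR_DefaultRole",
    VisibleToAllUsers=True,
)
print(cluster["JobFlowId"])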
Hence, the correct answer is: Amazon S3 for storing the application log files and Amazon Elastic MapReduce for processing the log files.
The option that says: Amazon S3 Glacier for storing the application log files and Spot EC2 Instances for processing them is incorrect as Amazon S3 Glacier is used for data archive only.
The option that says: A single On-Demand Amazon EC2 instance for both storing and processing the log files is incorrect as an EC2 instance is not a recommended storage service. In addition, Amazon EC2 does not have a built-in data processing engine to process large amounts of data.
The option that says: Amazon S3 Glacier Deep Archive for storing the application log files and AWS ParallelCluster for processing the log files is incorrect because the long retrieval time of Amazon S3 Glacier Deep Archive makes this option unsuitable. Moreover, AWS ParallelCluster is just an AWS-supported open-source cluster management tool that makes it easy for you to deploy and manage High-Performance Computing (HPC) clusters on AWS. ParallelCluster uses a simple text file to model and provision all the resources needed for your HPC applications in an automated and secure manner.
A financial firm is designing an application architecture for its online trading platform that must have high availability and fault tolerance. Their Solutions Architect configured the application to use an Amazon S3 bucket located in the us-east-1 region to store large amounts of intraday financial data. The stored financial data in the bucket must not be affected even if there is an outage in one of the Availability Zones or if there’s a regional service failure.
What should the Architect do to avoid any costly service disruptions and ensure data durability?
A. Copy the S3 bucket to an EBS backed instance
B. Enable Cross Region Replication
C. Create a new S3 bucket in another region and configure Cross Account Access to the bucket located in us-east-1
D. Create a Lifecycle Policy to regularly backup the S3 bucket to Amazon Glacier
B. Enable Cross Region Replication
Explanation:
In this scenario, you need to enable Cross-Region Replication to ensure that your S3 data would not be affected even if there is an outage in one of the Availability Zones or a regional service failure in us-east-1. When you upload your data to S3, your objects are redundantly stored on multiple devices across multiple facilities within the region where you created the bucket. Thus, if the entire region experiences an outage, your S3 bucket will be unavailable unless you enable Cross-Region Replication, which keeps a copy of your data in a bucket in another region.
Note that an Availability Zone (AZ) outage is more of a concern for Amazon EC2 instances than for Amazon S3. If there is an outage in an AZ, the S3 bucket is usually not affected; only the EC2 instances deployed in that zone are.
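For reference, replication is configured on the source bucket with a replication rule similar to the following boto3 sketch. Versioning must already be enabled on both buckets, and the bucket names and IAM role ARN are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="example-intraday-data-us-east-1",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-s3-replication-role",  # hypothetical role
        "Rules": [
            {
                "ID": "ReplicateIntradayData",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                # Destination bucket in a different region.
                "Destination": {"Bucket": "arn:aws:s3:::example-intraday-data-us-west-2"},
            }
        ]
    },
)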
Hence, the correct answer is: Enable Cross-Region Replication.
The option that says: Copy the S3 bucket to an EBS-backed EC2 instance is incorrect because EBS is not as durable as Amazon S3. Moreover, if the Availability Zone where the volume is hosted goes down then the data will also be inaccessible.
The option that says: Create a Lifecycle Policy to regularly backup the S3 bucket to Amazon Glacier is incorrect because Glacier is primarily used for data archival. You also need to replicate your data to another region for better durability.
The option that says: Create a new S3 bucket in another region and configure Cross-Account Access to the bucket located in us-east-1 is incorrect because Cross-Account Access in Amazon S3 is primarily used to grant another AWS account access to your objects, not to copy data to another AWS Region. For example, Account MANILA can grant another AWS account (Account CEBU) permission to access its resources, such as buckets and objects. S3 Cross-Account Access does not replicate data from one region to another; a better solution is to enable Cross-Region Replication (CRR) instead.
A company has a team of developers that provisions their own resources on the AWS cloud. The developers use IAM user access keys to automate their resource provisioning and application testing processes in AWS. To ensure proper security compliance, the security team wants to automate the process of deactivating and deleting any IAM user access key that is over 90 days old.
Which solution will meet these requirements with the LEAST operational effort?
A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to filter IAM user access keys older than 90 days. Schedule an AWS Batch Job that runs every 24 hours to delete all the specified access keys
B. Use the AWS Config managed rule to check if the IAM user access keys are not rotated within 90 days. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the non-compliant keys, and define a target to invoke a custom Lambda function to deactivate and delete the keys
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to filter IAM user access keys older than 90 days. Define a target to invoke a Lambda function to deactivate and delete the old access keys
D. Create a custom AWS Config rule to check for the max-age of IAM access keys. Schedule an AWS Batch job that runs every 24 hours to delete all the non-compliant access keys
B. Use the AWS Config managed rule to check if the IAM user access keys are not rotated within 90 days. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the non-compliant keys, and define a target to invoke a custom Lambda function to deactivate and delete the keys
Explanation:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
You can use AWS Config rules to evaluate the configuration settings of your AWS resources. When AWS Config detects that a resource violates the conditions in one of your rules, AWS Config flags the resource as non-compliant and sends a notification. AWS Config continuously evaluates your resources as they are created, changed, or deleted.
To analyze potential security weaknesses, you need detailed historical information about your AWS resource configurations, such as the AWS Identity and Access Management (IAM) permissions that are granted to your users or the Amazon EC2 security group rules that control access to your resources.
Amazon EventBridge is a serverless event bus service that you can use to connect your applications with data from a variety of sources.
You can use an Amazon EventBridge rule with a custom event pattern and an input transformer to match an AWS Config evaluation result of NON_COMPLIANT. Then, route the event to a target such as an AWS Lambda function or an Amazon Simple Notification Service (Amazon SNS) topic.
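A minimal remediation sketch is shown below: a Lambda function (Python with boto3) invoked by the EventBridge rule for non-compliant results of the access-keys-rotated managed rule, which then deactivates and deletes keys older than 90 days. The event field used to obtain the IAM user is an assumption and may need to be adapted to the actual Config event payload.

import datetime
import boto3

iam = boto3.client("iam")
MAX_AGE_DAYS = 90

def lambda_handler(event, context):
    # Assumption: the EventBridge event carries the AWS Config compliance-change
    # detail and the non-compliant resource maps to an IAM user name; the exact
    # field path may differ in practice.
    user_name = event["detail"]["resourceId"]

    now = datetime.datetime.now(datetime.timezone.utc)
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        if (now - key["CreateDate"]).days > MAX_AGE_DAYS:
            # Deactivate the key first, then delete it.
            iam.update_access_key(
                UserName=user_name, AccessKeyId=key["AccessKeyId"], Status="Inactive"
            )
            iam.delete_access_key(UserName=user_name, AccessKeyId=key["AccessKeyId"])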
Therefore, the correct answer is: Use the AWS Config managed rule to check if the IAM user access keys are not rotated within 90 days. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the non-compliant keys, and define a target to invoke a custom Lambda function to deactivate and delete the keys. Amazon EventBridge can match AWS Config non-compliance events and then invoke a Lambda function for remediation.
The option that says: Create an Amazon EventBridge (Amazon CloudWatch Events) rule to filter IAM user access keys older than 90 days. Schedule an AWS Batch job that runs every 24 hours to delete all the specified access keys is incorrect. Amazon EventBridge cannot directly check for IAM events that show the age of IAM access keys. The use of the AWS Batch service is not warranted here as it is primarily used to easily and efficiently run hundreds of thousands of batch computing jobs on AWS.
The option that says: Create a custom AWS Config rule to check for the max-age of IAM access keys. Schedule an AWS Batch job that runs every 24 hours to delete all the non-compliant access keys is incorrect. AWS Config can only use Systems Manager Automation documents as remediation actions. Moreover, the AWS Batch service cannot be used for remediating non-compliant resources like IAM access keys.
The option that says: Create an Amazon EventBridge (Amazon CloudWatch Events) rule to filter IAM user access keys older than 90 days. Define a target to invoke a Lambda function to deactivate and delete the old access keys is incorrect. Amazon EventBridge cannot directly check for IAM events that contain the age of IAM access keys. You have to use the AWS Config service to detect the non-compliant IAM keys.
A large multinational investment bank has a web application that requires a minimum of 4 EC2 instances to run to ensure that it can cater to its users across the globe. You are instructed to ensure fault tolerance of this system.
Which of the following is the best option?
A. Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load Balancer
B. Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Application Load Balancer
C. Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an Application Load Balancer
D. Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer
D. Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer
Explanation:
Fault Tolerance is the ability of a system to remain in operation even if some of the components used to build the system fail. In AWS, this means that in the event of server fault or system failures, the number of running EC2 instances should not fall below the minimum number of instances required by the system for it to work properly. So if the application requires a minimum of 4 instances, there should be at least 4 instances running in case there is an outage in one of the Availability Zones or if there are server issues.
One of the differences between Fault Tolerance and High Availability is that the former refers to the minimum number of running instances. For example, you have a system that requires a minimum of 4 running instances and currently has 6 running instances deployed in two Availability Zones. There was a component failure in one of the Availability Zones which knocks out 3 instances. In this case, the system can still be regarded as Highly Available since there are still instances running that can accommodate the requests. However, it is not Fault-Tolerant since the required minimum of four instances has not been met.
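For reference, a fault-tolerant layout of 2 instances in each of 3 Availability Zones can be expressed as an Auto Scaling group with a minimum size of 6 spread across three subnets, as in this boto3 sketch; the subnet IDs, launch template name, and target group ARN are hypothetical placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="trading-web-asg",
    LaunchTemplate={"LaunchTemplateName": "example-web-template", "Version": "$Latest"},
    MinSize=6,
    MaxSize=12,
    DesiredCapacity=6,
    # One subnet per Availability Zone; the group balances instances across them.
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example-tg/0123456789abcdef"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)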
Hence, the correct answer is: Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer.
The option that says: Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Application Load Balancer is incorrect because if one Availability Zone went out, there will only be 2 running instances available out of the required 4 minimum instances. Although the Auto Scaling group can spin up another 2 instances, the fault tolerance of the web application has already been compromised.
The option that says: Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load Balancer is incorrect because if the Availability Zone went out, there will be no running instance available to accommodate the request.
The option that says: Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an Application Load Balancer is incorrect because if one Availability Zone went out, there will only be 3 instances available to accommodate the request.