AWS Exam #4 Flashcards
A solutions architect is in charge of preparing the infrastructure for a serverless application. The application is built from a Docker image pulled from an Amazon Elastic Container Registry (ECR) repository. The application must have access to 5 GB of ephemeral storage. Which action satisfies the requirements?
- Deploy the application in a Lambda function with Container image support. Set the function’s storage to 5 GB.
- Deploy the application in a Lambda function with Container image support. Attach an Amazon Elastic File System (EFS) volume to the function.
- Deploy the application to an Amazon ECS cluster with EC2 worker nodes and attach a 5 GB Amazon EBS volume.
- Deploy the application to an Amazon ECS cluster that uses Fargate tasks.
- Deploy the application to an Amazon ECS cluster that uses Fargate tasks.
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers.
By default, Fargate tasks are given a minimum of 20 GiB of free ephemeral storage, which meets the storage requirement in the scenario.
Therefore, the correct answer is: Deploy the application to an Amazon ECS cluster that uses Fargate tasks.
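As an illustrative sketch (not part of the original question), the snippet below registers a Fargate task definition for a container image pulled from ECR using boto3; the account ID, role ARN, repository name, and sizes are placeholders. The `ephemeralStorage` block is optional here since the Fargate default of 20 GiB already exceeds the 5 GB requirement.

```python
import boto3

ecs = boto3.client("ecs")

# All names and ARNs below are placeholders for illustration only.
response = ecs.register_task_definition(
    family="serverless-docker-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/serverless-app:latest",
            "essential": True,
        }
    ],
    # Optional: Fargate already provides 20 GiB of ephemeral storage by default;
    # an explicit value (21-200 GiB) is only needed when more is required.
    ephemeralStorage={"sizeInGiB": 21},
)
print(response["taskDefinition"]["taskDefinitionArn"])
```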
You can’t just pick up any image and run it in a Lambda function. For this to work, you must refactor the code and rebuild the application from an AWS-provided base image tailored specifically for AWS Lambda. Hence, the following options are incorrect:
- Deploy the application in a Lambda function with Container image support. Set the function’s storage to 5 GB.
- Deploy the application in a Lambda function with Container image support. Attach an Amazon Elastic File System (EFS) volume to the function.
The option that says: Deploy the application to an Amazon ECS cluster with EC2 worker nodes and attach a 5 GB Amazon EBS volume is incorrect because the scenario explicitly mentioned that the architecture must be serverless. Using Amazon EC2 instances for your worker nodes is not a serverless architecture.
A company has clients all across the globe that access product files stored in several S3 buckets, each of which is behind its own CloudFront web distribution. They currently want to deliver their content to a specific client, and they need to make sure that only that client can access the data. Currently, all of their clients can access their S3 buckets directly using an S3 URL or through their CloudFront distribution. The Solutions Architect must serve the private content via CloudFront only, to secure the distribution of files. Which combination of actions should the Architect implement to meet the above requirements? (Select TWO.)
- Enable the Origin Shield feature of the Amazon CloudFront distribution to protect the files from unauthorized access.
- Use S3 pre-signed URLs to ensure that only their client can access the files. Remove permission to use Amazon S3 URLs to read the files for anyone else.
- Create a custom CloudFront function to check and ensure that only their clients can access the files.
- Restrict access to files in the origin by creating an origin access identity (OAI) and giving it permission to read the files in the bucket.
- Require the users to access the private content by using special CloudFront signed URLs or signed cookies.
- Restrict access to files in the origin by creating an origin access identity (OAI) and giving it permission to read the files in the bucket.
- Require the users to access the private content by using special CloudFront signed URLs or signed cookies.
Many companies that distribute content over the Internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee. To securely serve this private content by using CloudFront, you can do the following:
- Require that your users access your private content by using special CloudFront signed URLs or signed cookies.
- Require that your users access your Amazon S3 content by using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn’t necessary, but it is recommended to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies. You can do this by setting up an origin access identity (OAI) for your Amazon S3 bucket. You can also configure the custom headers for a private HTTP server or an Amazon S3 bucket configured as a website endpoint.
All objects and buckets by default are private. The pre-signed URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don’t require them to have AWS security credentials or permissions.
You can generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET. If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a pre-signed object URL without writing any code. Anyone who receives a valid pre-signed URL can then programmatically upload an object.
Hence, the correct answers are:
- Restrict access to files in the origin by creating an origin access identity (OAI) and giving it permission to read the files in the bucket.
- Require the users to access the private content by using special CloudFront signed URLs or signed cookies.
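To make the signed-URL approach concrete, here is a minimal boto3/botocore sketch that generates a CloudFront signed URL; the distribution domain, key pair ID, private key file, and object path are placeholders, and it assumes the third-party `rsa` package is installed to produce the signature.

```python
import datetime

import rsa  # used here only to produce the RSA-SHA1 signature CloudFront expects
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "K2JCJMDEHXQW5F"        # placeholder CloudFront public key / key pair ID
PRIVATE_KEY_FILE = "private_key.pem"  # placeholder path to the matching private key

def rsa_signer(message: bytes) -> bytes:
    with open(PRIVATE_KEY_FILE, "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# Signed URL valid for one hour; only clients holding this URL can fetch the object.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/report.pdf",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```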
The option that says: Create a custom CloudFront function to check and ensure that only their clients can access the files is incorrect. CloudFront Functions are just lightweight functions in JavaScript for high-scale, latency-sensitive CDN customizations and not for enforcing security. A CloudFront Function runtime environment offers submillisecond startup times which allows your application to scale immediately to handle millions of requests per second. But again, this can’t be used to restrict access to your files.
The option that says: Enable the Origin Shield feature of the Amazon CloudFront distribution to protect the files from unauthorized access is incorrect because this feature is not primarily used for security but for improving your origin’s load times, improving origin availability, and reducing your overall operating costs in CloudFront.
The option that says: Use S3 pre-signed URLs to ensure that only their client can access the files. Remove permission to use Amazon S3 URLs to read the files for anyone else is incorrect. Although this could be a valid solution, it doesn’t satisfy the requirement to serve the private content via CloudFront only to secure the distribution of files. A better solution is to set up an origin access identity (OAI) then use Signed URL or Signed Cookies in your CloudFront web distribution.
A large insurance company has an AWS account that contains three VPCs (DEV, UAT, and PROD) in the same region. UAT is peered to both PROD and DEV using VPC peering connections. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from DEV to PROD to speed up time to market.
Which of the following options helps the company accomplish this?
- Create a new entry to PROD in the DEV route table using the VPC peering connection as the target.
- Change the DEV and PROD VPCs to have overlapping CIDR blocks to be able to connect them.
- Do nothing. Since these two VPCs are already connected via UAT, they already have a connection to each other.
- Create a new VPC peering connection between PROD and DEV with the appropriate routes.
- Create a new VPC peering connection between PROD and DEV with the appropriate routes.
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection and does not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck.
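A minimal boto3 sketch of what this looks like in practice, assuming placeholder VPC IDs, route table IDs, and CIDR blocks for DEV and PROD: create the peering connection, accept it, then add a route on each side.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs and CIDRs for the DEV and PROD VPCs.
DEV_VPC_ID, DEV_RTB_ID, DEV_CIDR = "vpc-0dev11111111", "rtb-0dev11111111", "10.0.0.0/16"
PROD_VPC_ID, PROD_RTB_ID, PROD_CIDR = "vpc-0prod2222222", "rtb-0prod2222222", "10.2.0.0/16"

# 1. Request and accept a peering connection directly between DEV and PROD.
peering = ec2.create_vpc_peering_connection(VpcId=DEV_VPC_ID, PeerVpcId=PROD_VPC_ID)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 2. Add a route in each VPC's route table that targets the peering connection.
ec2.create_route(RouteTableId=DEV_RTB_ID, DestinationCidrBlock=PROD_CIDR,
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId=PROD_RTB_ID, DestinationCidrBlock=DEV_CIDR,
                 VpcPeeringConnectionId=pcx_id)
```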
Creating a new entry to PROD in the DEV route table using the VPC peering connection as the target is incorrect because even if you configure the route tables, the two VPCs will still be disconnected until you set up a VPC peering connection between them.
Changing the DEV and PROD VPCs to have overlapping CIDR blocks to be able to connect them is incorrect because you cannot peer two VPCs with overlapping CIDR blocks.
The option that says: Do nothing. Since these two VPCs are already connected via UAT, they already have a connection to each other is incorrect because transitive VPC peering is not allowed. Hence, even though DEV and PROD are both peered with UAT, these two VPCs do not have a direct connection to each other.
A solutions architect is managing an application that runs on a Windows EC2 instance with an attached Amazon FSx for Windows File Server. To save cost, management has decided to stop the instance during off-hours and restart it only when needed. It has been observed that the application takes several minutes to become fully operational, which impacts productivity. How can the solutions architect speed up the instance’s loading time without driving the cost up?
- Disable the Instance Metadata Service to reduce the things that need to be loaded at startup.
- Migrate the application to a Linux-based EC2 instance.
- Migrate the application to an EC2 instance with hibernation enabled.
- Enable the hibernation mode on the EC2 instance.
- Migrate the application to an EC2 instance with hibernation enabled.
When an instance hibernates, the contents of its memory (RAM) are saved to the EBS root volume. When the instance is started again, the root volume is restored and the RAM contents are reloaded, so the operating system and application resume where they left off instead of performing a full boot. This significantly shortens the time the application needs to become fully operational, while you are still not billed for instance usage during the off-hours.
Therefore, the correct answer is: Migrate the application to an EC2 instance with hibernation enabled.
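For illustration, a hedged boto3 sketch of launching a hibernation-enabled instance and hibernating it at 6 PM instead of performing a plain stop; the AMI, key pair, subnet, and sizes are placeholders, and hibernation additionally requires an encrypted EBS root volume large enough to hold the instance's RAM.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder AMI, key pair, and subnet IDs; the instance type must support hibernation.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
    SubnetId="subnet-0123456789abcdef0",
    HibernationOptions={"Configured": True},  # must be enabled at launch time
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/sda1",
            "Ebs": {"VolumeSize": 100, "VolumeType": "gp3", "Encrypted": True},
        }
    ],
)
instance_id = response["Instances"][0]["InstanceId"]

# During off-hours, hibernate instead of doing a regular stop so RAM is saved to EBS.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```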
The option that says: Migrate the application to a Linux-based EC2 instance is incorrect. This does not guarantee a faster load time. Moreover, it is a risky thing to do as the application might have dependencies tied to the previous operating system that won’t work on a different OS.
The option that says: Enable the hibernation mode on the EC2 instance is incorrect. It is not possible to enable or disable hibernation for an instance after it has been launched.
The option that says: Disable the instance metadata service to reduce the things that need to be loaded at startup is incorrect. This won’t affect the startup load time at all. The Instance Metadata Service is just a service that you can access over the network from within an EC2 instance.
A travel company has a suite of web applications hosted in an Auto Scaling group of On-Demand EC2 instances behind an Application Load Balancer that handles traffic from various web domains such as i-love-manila.com, i-love-boracay.com, i-love-cebu.com, and many others. To improve security and lessen the overall cost, you are instructed to secure the system by allowing multiple domains to serve SSL traffic without the need to reauthenticate and reprovision your certificate every time you add a new domain. This migration from HTTP to HTTPS will help improve their SEO and Google search ranking. Which of the following is the most cost-effective solution to meet the above requirement?
- Create a new CloudFront web distribution and configure it to serve HTTPS requests using dedicated IP addresses in order to associate your alternate domain names with a dedicated IP address in each CloudFront edge location.
- Use a wildcard certificate to handle multiple sub-domains and different domains.
- Upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI).
- Add a Subject Alternative Name (SAN) for each additional domain to your certificate.
- Upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI).
SNI Custom SSL relies on the SNI extension of the Transport Layer Security protocol, which allows multiple domains to serve SSL traffic over the same IP address by including the hostname that the viewers are trying to connect to.
You can host multiple TLS-secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These features are provided at no additional charge.
To meet the requirements in the scenario, you can upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI).
Hence, the correct answer is the option that says: Upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indication (SNI).
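As a rough sketch of the console/API step described above, the boto3 call below binds additional ACM certificates to an existing HTTPS listener; the listener and certificate ARNs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs: the ALB's existing HTTPS (443) listener and the extra ACM certificates.
LISTENER_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "listener/app/travel-alb/50dc6c495c0c9188/f2f7dc8efc522ab2")

extra_certs = [
    {"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/cert-manila"},
    {"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/cert-boracay"},
    {"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/cert-cebu"},
]

# The listener keeps its default certificate; SNI selects the right one per client hello.
elbv2.add_listener_certificates(ListenerArn=LISTENER_ARN, Certificates=extra_certs)
```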
Using a wildcard certificate to handle multiple sub-domains and different domains is incorrect because a wildcard certificate can only handle multiple sub-domains but not different domains.
Adding a Subject Alternative Name (SAN) for each additional domain to your certificate is incorrect because although using SAN is correct, you will still have to reauthenticate and reprovision your certificate every time you add a new domain. One of the requirements in the scenario is that you should not have to reauthenticate and reprovision your certificate; hence, this solution is incorrect.
The option that says: Create a new CloudFront web distribution and configure it to serve HTTPS requests using dedicated IP addresses in order to associate your alternate domain names with a dedicated IP address in each CloudFront edge location is incorrect because although it is valid to use dedicated IP addresses to meet this requirement, this solution is not cost-effective. Remember that if you configure CloudFront to serve HTTPS requests using dedicated IP addresses, you incur an additional monthly charge. The charge begins when you associate your SSL/TLS certificate with your CloudFront distribution. You can just simply upload the certificates to the ALB and use SNI to handle multiple domains in a cost-effective manner.
A company requires that all AWS resources be tagged with a standard naming convention for better access control. The company’s solutions architect must implement a solution that checks for untagged AWS resources. Which solution requires the least amount of effort to implement?
- Use an AWS Config rule to detect non-compliant tags.
- Use tag policies in AWS Organizations to standardize the naming of tags. Store all the tags in an Amazon S3 bucket with the S3 Object Lock feature enabled.
- Use service control policies (SCP) to detect resources that are not tagged properly.
- Create a Lambda function that runs compliance checks on tagged resources. Schedule the function using Amazon EventBridge (Amazon CloudWatch Events).
- Use an AWS Config rule to detect non-compliant tags.
You can assign metadata to your AWS resources in the form of tags. Each tag is a label consisting of a user-defined key and value. Tags can help you manage, identify, organize, search for, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria.
You can use tags to control access by restricting IAM permissions based on specific tags or tag values. For example, IAM user or role permissions can include conditions to limit EC2 API calls to specific environments (such as development, test, or production) based on their tags.
Since tags are case-sensitive, giving them a consistent naming format is a good practice. Depending on how your tagging rules are set up, a disorganized naming convention may lead to access-control issues like the one described in the scenario. Here, the administrator can leverage the `required-tags` managed rule in AWS Config. This rule checks whether a resource contains the tags that you specify.
Therefore, the correct answer is: Use an AWS Config rule to detect non-compliant tags.
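A minimal boto3 sketch of deploying the managed rule, assuming hypothetical tag keys (`Environment`, `Owner`) as the company's naming standard:

```python
import json

import boto3

config = boto3.client("config")

# Deploy the AWS managed "required-tags" rule; the tag keys below are example parameters.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "REQUIRED_TAGS",
        },
        "InputParameters": json.dumps({
            "tag1Key": "Environment",
            "tag2Key": "Owner",
        }),
    }
)
```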
The option that says: Use tag policies in AWS Organizations to standardize the naming of tags. Store all the tags in an Amazon S3 bucket with the S3 Object Lock feature enabled is incorrect. Although tag policies can help you enforce the standardization of tags, they won’t be able to report resources that have non-compliant tags. The use of the S3 Object Lock feature in this scenario is not warranted. The S3 Object Lock is primarily used to store objects using a write-once-read-many (WORM) model which can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
The option that says: Create a Lambda function that runs compliance checks on tagged resources. Schedule the function using Amazon EventBridge (Amazon CloudWatch Events) is incorrect. While this is possible, using a managed AWS Config rule is a much simpler solution than writing code to run compliance checks.
The option that says: Use service control policies (SCP) to detect resources that are not tagged properly is incorrect. SCPs are just guardrails for setting up the maximum allowable permissions an IAM identity can have. It’s not capable of checking for non-compliant tags.
A company is setting up a cloud architecture for an international money transfer service to be deployed in AWS, which will have thousands of users around the globe. The service should be available 24/7 to avoid any business disruption and should be resilient enough to handle the outage of an entire AWS region. To meet this requirement, the Solutions Architect has deployed their AWS resources to multiple AWS Regions. He needs to configure Route 53 so that all of the resources are available as much of the time as possible. When a resource becomes unavailable, Route 53 should detect that it’s unhealthy and stop including it when responding to queries. Which of the following is the most fault-tolerant routing configuration that the Solutions Architect should use in this scenario?
- Configure an Active-Active Failover with One Primary and One Secondary Resource.
- Configure an Active-Passive Failover with Multiple Primary and Secondary Resources.
- Configure an Active-Active Failover with Weighted routing policy.
- Configure an Active-Passive Failover with Weighted Records.
- Configure an Active-Active Failover with Weighted routing policy.
You can use Route 53 health checking to configure active-active and active-passive failover configurations. You configure active-active failover using any routing policy (or combination of routing policies) other than failover, and you configure active-passive failover using the failover routing policy.
Active-Active Failover
Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable, Route 53 can detect that it’s unhealthy and stop including it when responding to queries.
In active-active failover, all the records that have the same name, the same type (such as A or AAAA), and the same routing policy (such as weighted or latency) are active unless Route 53 considers them unhealthy. Route 53 can respond to a DNS query using any healthy record.
Hence, Configuring an Active-Active Failover with Weighted routing policy is correct.
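To illustrate an active-active setup with a weighted routing policy, here is a hedged boto3 sketch that upserts two weighted A records, each tied to a health check; the hosted zone ID, domain, IP addresses, and health check IDs are placeholders.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z3M3LMPEXAMPLE"  # placeholder hosted zone

def weighted_record(set_id, ip, weight, health_check_id):
    """Build one weighted A record; unhealthy records are dropped from DNS responses."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "transfer.example.com",
            "Type": "A",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
            "HealthCheckId": health_check_id,
        },
    }

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            weighted_record("us-east-1", "203.0.113.10", 50, "hc-useast1-placeholder"),
            weighted_record("eu-west-1", "203.0.113.20", 50, "hc-euwest1-placeholder"),
        ]
    },
)
```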
Active-Passive Failover
Use an active-passive failover configuration when you want a primary resource or group of resources to be available the majority of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. When responding to queries, Route 53 includes only the healthy primary resources. If all the primary resources are unhealthy, Route 53 begins to include only the healthy secondary resources in response to DNS queries.
Configuring an Active-Passive Failover with Weighted Records and configuring an Active-Passive Failover with Multiple Primary and Secondary Resources are incorrect because an Active-Passive Failover is mainly used when you want a primary resource or group of resources to be available most of the time and you want a secondary resource or group of resources to be on standby in case all the primary resources become unavailable. In this scenario, all of your resources should be available all the time as much as possible which is why you have to use an Active-Active Failover instead.
Configuring an Active-Active Failover with One Primary and One Secondary Resource is incorrect because you cannot set up an Active-Active Failover with One Primary and One Secondary Resource. Remember that an Active-Active Failover uses all available resources all the time without a primary nor a secondary resource.
A company needs to use Amazon Aurora as the Amazon RDS database engine of their web application. The Solutions Architect has been instructed to implement a 90-day backup retention policy. Which of the following options can satisfy the given requirement?
- Configure RDS to export the automated snapshot automatically to Amazon S3 and create a lifecycle policy to delete the object after 90 days.
- Create an AWS Backup plan to take daily snapshots with a retention period of 90 days.
- Create a daily scheduled event using CloudWatch Events and AWS Lambda to directly download the RDS automated snapshot to an S3 bucket. Archive snapshots older than 90 days to Glacier.
- Configure an automated backup and set the backup retention period to 90 days.
- Create an AWS Backup plan to take daily snapshots with a retention period of 90 days.
AWS Backup is a centralized backup service that makes it easy and cost-effective for you to back up your application data across AWS services in the AWS Cloud, helping you meet your business and regulatory backup compliance requirements. AWS Backup makes protecting your AWS storage volumes, databases, and file systems simple by providing a central place where you can configure and audit the AWS resources you want to back up, automate backup scheduling, set retention policies, and monitor all recent backup and restore activity.
In this scenario, you can use AWS Backup to create a backup plan with a retention period of 90 days. A backup plan is a policy expression that defines when and how you want to back up your AWS resources. You assign resources to backup plans, and AWS Backup then automatically backs up and retains backups for those resources according to the backup plan.
Hence, the correct answer is: Create an AWS Backup plan to take daily snapshots with a retention period of 90 days.
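A rough boto3 sketch of such a plan, assuming a placeholder backup vault, IAM role, and Aurora cluster ARN; the `Lifecycle` setting is what enforces the 90-day retention.

```python
import boto3

backup = boto3.client("backup")

# Daily backups kept for 90 days (vault name, role, and ARNs are placeholders).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "aurora-90-day-retention",
        "Rules": [
            {
                "RuleName": "daily-snapshots",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 3 * * ? *)",  # every day at 03:00 UTC
                "Lifecycle": {"DeleteAfterDays": 90},
            }
        ],
    }
)

# Assign the Aurora cluster to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "aurora-cluster",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora"],
    },
)
```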
The option that says: Configure an automated backup and set the backup retention period to 90 days is incorrect because the maximum backup retention period for automated backup is only 35 days.
The option that says: Configure RDS to export the automated snapshot automatically to Amazon S3 and create a lifecycle policy to delete the object after 90 days is incorrect because you can’t export an automated snapshot automatically to Amazon S3. You must export the snapshot manually.
The option that says: Create a daily scheduled event using CloudWatch Events and AWS Lambda to directly download the RDS automated snapshot to an S3 bucket. Archive snapshots older than 90 days to Glacier is incorrect because you cannot directly download or export an automated snapshot in RDS to Amazon S3. You have to copy the automated snapshot first for it to become a manual snapshot, which you can move to an Amazon S3 bucket. A better solution for this scenario is to simply use AWS Backup.
A major TV network has a web application running on eight Amazon T3 EC2 instances behind an Application Load Balancer. The number of requests that the application processes is consistent and does not experience spikes. A Solutions Architect must configure an Auto Scaling group for the instances to ensure that the application is running at all times. Which of the following options can satisfy the given requirements?
- Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer.
- Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an Amazon Elastic Load Balancer.
- Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availability zone in the same region behind an Amazon Elastic Load Balancer.
- Deploy four EC2 instances with Auto Scaling in one region and four in another region behind an Amazon Elastic Load Balancer.
- Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availability zone in the same region behind an Amazon Elastic Load Balancer.
The best option is to deploy four EC2 instances in one Availability Zone and four in another Availability Zone in the same region behind an Amazon Elastic Load Balancer. In this way, if one Availability Zone goes down, there is still another Availability Zone that can accommodate traffic.
When the first AZ goes down, the second AZ will initially have only 4 EC2 instances. This will eventually be scaled up to 8 instances since the solution is using Auto Scaling.
The 110% compute capacity for the 4 servers might cause some degradation of the service but not a total outage since there are still some instances that handle the requests. Depending on your scale-up configuration in your Auto Scaling group, the additional 4 EC2 instances can be launched in a matter of minutes.
T3 instances also have a Burstable Performance capability to burst or go beyond the current compute capacity of the instance to higher performance as required by your workload. So your 4 servers will be able to manage 110% compute capacity for a short period of time. This is the power of cloud computing versus our on-premises network architecture. It provides elasticity and unparalleled scalability.
Take note that Auto Scaling will launch additional EC2 instances to the remaining Availability Zone/s in the event of an Availability Zone outage in the region. Hence, the correct answer is the option that says: Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availability zone in the same region behind an Amazon Elastic Load Balancer.
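For illustration, a hedged boto3 sketch of an Auto Scaling group that keeps eight instances spread across two subnets in different Availability Zones behind an ALB target group; the launch template, subnet IDs, and target group ARN are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="tv-network-web",
    LaunchTemplate={"LaunchTemplateName": "tv-network-web", "Version": "$Latest"},
    MinSize=8,
    MaxSize=16,
    DesiredCapacity=8,
    # Two subnets in different AZs of the same region, so four instances land in each.
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```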
The option that says: Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an Amazon Elastic Load Balancer is incorrect because this architecture is not highly available. If that Availability Zone goes down, then your web application will be unreachable.
The options that say: Deploy four EC2 instances with Auto Scaling in one region and four in another region behind an Amazon Elastic Load Balancer and Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer are incorrect because the ELB is designed to only run in one region and not across multiple regions.
A company is hosting an application on EC2 instances that regularly pushes and fetches data in Amazon S3. Due to a change in compliance, the instances need to be moved to a private subnet. Along with this change, the company wants to lower the data transfer costs by configuring its AWS resources. How can this be accomplished in the MOST cost-efficient manner?
- Set up a NAT Gateway in the public subnet to connect to Amazon S3.
- Create an Amazon S3 gateway endpoint to enable a connection between the instances and Amazon S3.
- Create an Amazon S3 interface endpoint to enable a connection between the instances and Amazon S3.
- Set up an AWS Transit Gateway to access Amazon S3.
- Create an Amazon S3 gateway endpoint to enable a connection between the instances and Amazon S3.
VPC endpoints for Amazon S3 simplify access to S3 from within a VPC by providing configurable and highly reliable secure connections to S3 that do not require an internet gateway or Network Address Translation (NAT) device. When you create an S3 VPC endpoint, you can attach an endpoint policy to it that controls access to Amazon S3.
You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface endpoints. A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gateway endpoints. If you have an existing gateway endpoint in the VPC, you can use both types of endpoints in the same VPC.
There is no additional charge for using gateway endpoints. However, standard charges for data transfer and resource usage still apply.
Hence, the correct answer is: Create an Amazon S3 gateway endpoint to enable a connection between the instances and Amazon S3.
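A minimal boto3 sketch of creating the gateway endpoint, assuming placeholder VPC and route table IDs and the us-east-1 region in the service name:

```python
import boto3

ec2 = boto3.client("ec2")

# Attach the S3 gateway endpoint to the route table of the private subnet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",   # must match the VPC's region
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```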
The option that says: Set up a NAT Gateway in the public subnet to connect to Amazon S3 is incorrect. This will enable a connection between the private EC2 instances and Amazon S3 but it is not the most cost-efficient solution. NAT Gateways are charged on an hourly basis even for idle time.
The option that says: Create an Amazon S3 interface endpoint to enable a connection between the instances and Amazon S3 is incorrect. This is also a possible solution but it’s not the most cost-effective solution. You pay an hourly rate for every provisioned Interface endpoint.
The option that says: Set up an AWS Transit Gateway to access Amazon S3 is incorrect because this service is mainly used for connecting VPCs and on-premises networks through a central hub.
A Solutions Architect is designing a highly available environment for an application. She plans to host the application on EC2 instances within an Auto Scaling Group. One of the conditions requires data stored on root EBS volumes to be preserved if an instance terminates. What should be done to satisfy the requirement?
- Set the value of the `DeleteOnTermination` attribute of the EBS volumes to `False`.
- Configure ASG to suspend the health check process for each EC2 instance.
- Use AWS DataSync to replicate root volume data to Amazon S3.
- Enable the Termination Protection option for all EC2 instances.
- Set the value of the `DeleteOnTermination` attribute of the EBS volumes to `False`.
By default, Amazon EBS root device volumes are automatically deleted when the instance terminates. However, by default, any additional EBS volumes that you attach at launch, or any EBS volumes that you attach to an existing instance, persist even after the instance terminates. This behavior is controlled by the volume’s `DeleteOnTermination` attribute, which you can modify.
To preserve the root volume when an instance terminates, change the `DeleteOnTermination` attribute for the root volume to `False`.
This EBS attribute can be changed through the AWS Console upon launching the instance or through CLI/API command.
Hence, the correct answer is the option that says: Set the value of the `DeleteOnTermination` attribute of the EBS volumes to `False`.
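As a sketch of the CLI/API route mentioned above, the boto3 call below flips the attribute on a running instance; the instance ID and root device name (commonly `/dev/xvda` or `/dev/sda1`) are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root EBS volume around after the instance is terminated.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": False}},
    ],
)
```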
The option that says: Use AWS DataSync to replicate root volume data to Amazon S3 is incorrect because AWS DataSync does not work with Amazon EBS volumes. DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems.
The option that says: Configure ASG to suspend the health check process for each EC2 instance is incorrect because suspending the health check process will prevent the ASG from replacing unhealthy EC2 instances. This can cause availability issues to the application.
The option that says: Enable the Termination Protection option for all EC2 instances is incorrect. Termination Protection will just prevent your instance from being accidentally terminated using the Amazon EC2 console.
A company has established a dedicated network connection from its on-premises data center to AWS Cloud using AWS Direct Connect (DX). The core network services, such as the Domain Name System (DNS) service and Active Directory services, are all hosted on-premises. The company has new AWS accounts that will also require consistent and dedicated access to these network services. Which of the following can satisfy this requirement with the LEAST amount of operational overhead and in a cost-effective manner?
- Set up a new Direct Connect gateway and integrate it with the existing Direct Connect connection. Configure a VPC peering connection between AWS accounts and associate it with Direct Connect gateway.
- Create a new Direct Connect gateway and integrate it with the existing Direct Connect connection. Set up a Transit Gateway between AWS accounts and associate it with the Direct Connect gateway.
- Create a new AWS VPN CloudHub. Set up a Virtual Private Network (VPN) connection for additional AWS accounts.
- Set up another Direct Connect connection for each and every new AWS account that will be added.
- Create a new Direct Connect gateway and integrate it with the existing Direct Connect connection. Set up a Transit Gateway between AWS accounts and associate it with the Direct Connect gateway.
AWS Transit Gateway provides a hub and spoke design for connecting VPCs and on-premises networks. You can attach all your hybrid connectivity (VPN and Direct Connect connections) to a single Transit Gateway consolidating and controlling your organization’s entire AWS routing configuration in one place. It also controls how traffic is routed among all the connected spoke networks using route tables. This hub and spoke model simplifies management and reduces operational costs because VPCs only connect to the Transit Gateway to gain access to the connected networks.
By attaching a transit gateway to a Direct Connect gateway using a transit virtual interface, you can manage a single connection for multiple VPCs or VPNs that are in the same AWS Region. You can also advertise prefixes from on-premises to AWS and from AWS to on-premises.
The AWS Transit Gateway and AWS Direct Connect solution simplify the management of connections between an Amazon VPC and your networks over a private connection. It can also minimize network costs, improve bandwidth throughput, and provide a more reliable network experience than Internet-based connections.
Hence, the correct answer is: Create a new Direct Connect gateway and integrate it with the existing Direct Connect connection. Set up a Transit Gateway between AWS accounts and associate it with the Direct Connect gateway.
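A hedged boto3 sketch of wiring this together, assuming placeholder names, ASN, and allowed prefixes; the existing DX connection would then carry a transit virtual interface that attaches to the new Direct Connect gateway.

```python
import boto3

ec2 = boto3.client("ec2")
dx = boto3.client("directconnect")

# 1. Create the Transit Gateway that VPCs in the other AWS accounts will attach to
#    (the attachments are typically shared across accounts via AWS RAM).
tgw = ec2.create_transit_gateway(Description="hub for on-premises core services")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# 2. Create a Direct Connect gateway and associate it with the Transit Gateway.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="on-prem-core-services",
    amazonSideAsn=64512,
)
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGateway"]["directConnectGatewayId"],
    gatewayId=tgw_id,
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/8"}],  # placeholder prefix
)
```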
The option that says: Set up another Direct Connect connection for each and every new AWS account that will be added is incorrect because this solution entails a significant amount of additional cost. Setting up a single DX connection requires a substantial budget and takes a lot of time to establish. It also has high management overhead since you will need to manage all of the Direct Connect connections for all AWS accounts.
The option that says: Create a new AWS VPN CloudHub. Set up a Virtual Private Network (VPN) connection for additional AWS accounts is incorrect because a VPN connection is not capable of providing consistent and dedicated access to the on-premises network services. Take note that a VPN connection traverses the public Internet and doesn’t use a dedicated connection.
The option that says: Set up a new Direct Connect gateway and integrate it with the existing Direct Connect connection. Configure a VPC peering connection between AWS accounts and associate it with Direct Connect gateway is incorrect because VPC peering is not supported in a Direct Connect connection. VPC peering does not support transitive peering relationships.
You are automating the creation of EC2 instances in your VPC. Hence, you wrote a Python script to call the Amazon EC2 API to request 50 EC2 instances in a single Availability Zone. However, you noticed that after 20 successful requests, subsequent requests failed.
What could be a reason for this issue and how would you resolve it?
- By default, AWS allows you to provision a maximum of 20 instances per region. Select a different region and retry the failed request.
- There was an issue with the Amazon EC2 API. Just resend the requests and these will be provisioned successfully.
- There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved.
- By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select a different Availability Zone and retry the failed request.
- There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved.
You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. New AWS accounts may start with limits that are lower than the limits described here.
If you need more instances, complete the Amazon EC2 limit increase request form with your use case, and your limit increase will be considered. Limit increases are tied to the region they were requested for.
Hence, the correct answer is: There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved.
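For illustration, a hedged Service Quotas sketch for checking and raising the vCPU limit; the quota code below is assumed to be the one for Running On-Demand Standard instances, so verify it first with `list_service_quotas` for the `ec2` service.

```python
import boto3

quotas = boto3.client("service-quotas")

# Assumed quota code for "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances".
QUOTA_CODE = "L-1216C47A"

current = quotas.get_service_quota(ServiceCode="ec2", QuotaCode=QUOTA_CODE)
print("Current vCPU limit:", current["Quota"]["Value"])

# Request enough vCPUs for the remaining instances, then retry the failed API calls
# once the increase is approved.
quotas.request_service_quota_increase(
    ServiceCode="ec2", QuotaCode=QUOTA_CODE, DesiredValue=400.0
)
```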
The option that says: There was an issue with the Amazon EC2 API. Just resend the requests and these will be provisioned successfully is incorrect because you are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit. There is also a limit of purchasing 20 Reserved Instances and requesting Spot Instances per your dynamic Spot limit per region; hence, there is no problem with the EC2 API.
The option that says: By default, AWS allows you to provision a maximum of 20 instances per region. Select a different region and retry the failed request is incorrect. There is no need to select a different region since this limit can be increased after submitting a request form to AWS.
The option that says: By default, AWS allows you to provision a maximum of 20 instances per Availability Zone. Select a different Availability Zone and retry the failed request is incorrect because the vCPU-based On-Demand Instance limit is set per region and not per Availability Zone. This can be increased after submitting a request form to AWS.
A FinTech startup deployed an application on an Amazon EC2 instance with attached Instance Store volumes and an Elastic IP address. The server is only accessed from 8 AM to 6 PM and can be stopped from 6 PM to 8 AM for cost efficiency using a Lambda function with a script that automates this based on tags. Which of the following will occur when the EC2 instance is stopped and started? (Select TWO.)
- All data on the attached instance-store devices will be lost.
- The ENI (Elastic Network Interface) is detached.
- There will be no changes.
- The Elastic IP address is disassociated with the instance.
- The underlying host for the instance is possibly changed.
- All data on the attached instance-store devices will be lost.
- The underlying host for the instance is possibly changed.
This question did not mention the specific type of EC2 instance; however, it says that it will be stopped and started. Since only EBS-backed instances can be stopped and restarted, it is implied that the instance is EBS-backed. Remember that an instance store-backed instance can only be rebooted or terminated, and its data will be erased if the EC2 instance is either stopped or terminated.
If you stop an EBS-backed EC2 instance, the EBS volume is preserved, but the data in any attached instance store volume will be erased. Keep in mind that an EC2 instance has an underlying physical host computer. If the instance is stopped, AWS usually moves the instance to a new host computer. Your instance may stay on the same host computer if there are no problems with the host computer. In addition, its Elastic IP address is disassociated from the instance if it is an EC2-Classic instance. Otherwise, if it is an EC2-VPC instance, the Elastic IP address remains associated.
Take note that an EBS-backed EC2 instance can have attached Instance Store volumes. This is the reason why there is an option that mentions the Instance Store volume, which is placed to test your understanding of this specific storage type. You can launch an EBS-backed EC2 instance and attach several Instance Store volumes but remember that there are some EC2 Instance types that don’t support this kind of setup.
Hence, the correct answers are:
- The underlying host for the instance is possibly changed.
- All data on the attached instance-store devices will be lost.
The option that says: The ENI (Elastic Network Interface) is detached is incorrect because the ENI will stay attached even if you stopped your EC2 instance.
The option that says: The Elastic IP address is disassociated with the instance is incorrect because the EIP will actually remain associated with your instance even after stopping it.
The option that says: There will be no changes is incorrect because there will be a lot of possible changes in your EC2 instance once you stop and start it again. AWS may move the virtualized EC2 instance to another host computer; the instance may get a new public IP address, and the data in your attached instance store volumes will be deleted.
A Solutions Architect for a global news company is configuring a fleet of EC2 instances in a subnet that is currently in a VPC with an Internet gateway attached. All of these EC2 instances can be accessed from the Internet. The architect launches another subnet and deploys an EC2 instance in it; however, the architect is not able to access the EC2 instance from the Internet. What could be the possible reasons for this issue? (Select TWO.)
- The Amazon EC2 instance does not have an attached Elastic Fabric Adapter (EFA).
- The route table is not configured properly to send traffic from the EC2 instance to the Internet through the customer gateway (CGW).
- The Amazon EC2 instance is not a member of the same Auto Scaling group.
- The Amazon EC2 instance does not have a public IP address associated with it.
- The route table is not configured properly to send traffic from the EC2 instance to the Internet through the Internet gateway.
- The Amazon EC2 instance does not have a public IP address associated with it.
- The route table is not configured properly to send traffic from the EC2 instance to the Internet through the Internet gateway.
Your VPC has an implicit router and you use route tables to control where network traffic is directed. Each subnet in your VPC must be associated with a route table, which controls the routing for the subnet (subnet route table). You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table.
A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same subnet route table. You can optionally associate a route table with an internet gateway or a virtual private gateway (gateway route table). This enables you to specify routing rules for inbound traffic that enters your VPC through the gateway.
Be sure that the subnet route table also has a route entry to the internet gateway. If this entry doesn’t exist, the instance is in a private subnet and is inaccessible from the internet.
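A minimal boto3 sketch of the two usual fixes, assuming placeholder route table, internet gateway, and instance IDs: add a default route to the internet gateway and give the instance a public (Elastic) IP.

```python
import boto3

ec2 = boto3.client("ec2")

# 1. Route Internet-bound traffic from the new subnet through the VPC's internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",
)

# 2. Allocate an Elastic IP and associate it with the unreachable instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=eip["AllocationId"],
)
```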
In cases where your EC2 instance cannot be accessed from the Internet (or vice versa), you usually have to check two things:
- Does it have an EIP or public IP address?
- Is the route table properly configured?
Below are the correct answers:
- The Amazon EC2 instance does not have a public IP address associated with it.
- The route table is not configured properly to send traffic from the EC2 instance to the Internet through the Internet gateway.
The option that says: The Amazon EC2 instance is not a member of the same Auto Scaling group is incorrect since Auto Scaling Groups do not affect Internet connectivity of EC2 instances.
The option that says: The Amazon EC2 instance doesn’t have an attached Elastic Fabric Adapter (EFA) is incorrect because Elastic Fabric Adapter is just a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications. EFA enables you to achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by AWS. However, this component is not required in order for your EC2 instance to access the public Internet.
The option that says: The route table is not configured properly to send traffic from the EC2 instance to the Internet through the customer gateway (CGW) is incorrect since CGW is used when you are setting up a VPN. The correct gateway should be an Internet gateway.
A document sharing website is using AWS as its cloud infrastructure. Free users can upload a total of 5 GB of data while premium users can upload as much as 5 TB. Their application uploads the user files, which can have a maximum file size of 1 TB, to an S3 bucket. In this scenario, what is the best way for the application to upload the large files to S3?
- Use a single PUT request to upload the large file
- Use AWS Import/Export
- Use Multipart Upload
- Use AWS Snowball
- Use Multipart Upload
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts and you can then access the object just as you would any other object in your bucket.
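The three steps map directly onto the low-level S3 API; below is a hedged boto3 sketch with placeholder bucket, key, and file names. (In practice, boto3's higher-level `upload_file` performs multipart uploads automatically above a size threshold.)

```python
import boto3

s3 = boto3.client("s3")

BUCKET, KEY, FILE = "document-sharing-uploads", "user-files/archive.bin", "archive.bin"
PART_SIZE = 100 * 1024 * 1024  # 100 MB parts

# 1. Initiate the multipart upload.
upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]

# 2. Upload the object in parts, remembering each part's ETag.
parts = []
with open(FILE, "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(PART_SIZE)
        if not chunk:
            break
        resp = s3.upload_part(Bucket=BUCKET, Key=KEY, PartNumber=part_number,
                              UploadId=upload_id, Body=chunk)
        parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
        part_number += 1

# 3. Complete the upload; S3 assembles the parts into a single object.
s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                             MultipartUpload={"Parts": parts})
```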
Using a single PUT request to upload the large file is incorrect because the largest file size you can upload using a single PUT request is 5 GB. Files larger than this will fail to be uploaded.
Using AWS Snowball is incorrect because this is a migration tool that lets you transfer large amounts of data from your on-premises data center to AWS S3 and vice versa. This tool is not suitable for the given scenario. And when you provision Snowball, the device gets transported to you, and not to your customers. Therefore, you bear the responsibility of securing the device.
Using AWS Import/Export is incorrect because Import/Export is similar to AWS Snowball in such a way that it is meant to be used as a migration tool, and not for multiple customer consumption such as in the given scenario.
A Solutions Architect is working for a large insurance firm. To maintain compliance with HIPAA laws, all data that is backed up or stored on Amazon S3 needs to be encrypted at rest. Which encryption methods can be employed, assuming S3 is being used for storing financial-related data? (Select TWO.)
- Enable SSE on an S3 bucket to make use of AES-256 encryption
- Store the data in encrypted EBS snapshots
- Encrypt the data using your own encryption keys then copy the data to Amazon S3 over HTTPS endpoints.
- Use AWS Shield to protect your data at rest
- Store the data on EBS volumes with encryption enabled instead of using Amazon S3
- Enable SSE on an S3 bucket to make use of AES-256 encryption
- Encrypt the data using your own encryption keys then copy the data to Amazon S3 over HTTPS endpoints.
Data protection refers to protecting data while in transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon S3.
Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.
Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
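As a hedged sketch of the server-side approach with a placeholder bucket name: enable default SSE-S3 (AES-256) on the bucket, and optionally request it explicitly on an individual upload.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "insurance-hipaa-backups"  # placeholder bucket name

# Default encryption: every new object is encrypted at rest with SSE-S3 (AES-256).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Or request server-side encryption explicitly for a single object.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2023-q1.csv",
    Body=b"...",
    ServerSideEncryption="AES256",
)
```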
Hence, the following options are the correct answers:
- Enable SSE on an S3 bucket to make use of AES-256 encryption
- Encrypt the data using your own encryption keys then copy the data to Amazon S3 over HTTPS endpoints.
Storing the data in encrypted EBS snapshots and storing the data on EBS volumes with encryption enabled instead of using Amazon S3 are both incorrect because all these options are for protecting your data in your EBS volumes. Note that an S3 bucket does not use EBS volumes to store your data.
Using AWS Shield to protect your data at rest is incorrect because AWS Shield is a managed Distributed Denial of Service (DDoS) protection service. It safeguards applications running on AWS against DDoS attacks; it does not encrypt data at rest.
A company deployed several EC2 instances in a private subnet. The Solutions Architect needs to ensure the security of all EC2 instances. Upon checking the existing Inbound Rules of the Network ACL, she saw this configuration:
NACL
Rule # | Type | Protocol | Port Range | Source | ALLOW/DENY
100 | ALL Traffic | ALL | ALL | 0.0.0.0/0 | ALLOW
101 | Custom TCP Rule | TCP (6) | 4000 | 110.238.109.37/32 | DENY
* | ALL Traffic | ALL | ALL | 0.0.0.0/0 | DENY
What will happen to a request that comes from 110.238.109.37?
- It will be allowed.
- Initially, it will be allowed and then after a while, the connection will be denied.
- It will be denied.
- Initially, it will be denied and then after a while, the connection will be allowed.
- It will be allowed.
Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it’s applied immediately regardless of any higher-numbered rule that may contradict it.
We have 3 rules here:
1. Rule 100 permits all traffic from any source.
2. Rule 101 denies all traffic coming from 110.238.109.37.
3. The Default Rule (*) denies all traffic from any source.
The Rule 100 will first be evaluated. If there is a match, then it will allow the request. Otherwise, it will then go to Rule 101 to repeat the same process until it goes to the default rule. In this case, when there is a request from 110.238.109.37, it will go through Rule 100 first. As Rule 100 says it will permit all traffic from any source, it will allow this request and will not further evaluate Rule 101 (which denies 110.238.109.37) nor the default rule.
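For reference, a hedged boto3 sketch that creates the two numbered inbound entries on a placeholder network ACL; the evaluation order described above is determined purely by the rule numbers.

```python
import boto3

ec2 = boto3.client("ec2")
NACL_ID = "acl-0123456789abcdef0"  # placeholder network ACL ID

# Rule 100: allow all inbound traffic from any source (evaluated first).
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Egress=False,
    Protocol="-1", RuleAction="allow", CidrBlock="0.0.0.0/0",
)

# Rule 101: deny TCP port 4000 from 110.238.109.37/32. Because rule 100 already
# matches and allows the traffic, this rule is never reached for that source.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=101, Egress=False,
    Protocol="6", RuleAction="deny", CidrBlock="110.238.109.37/32",
    PortRange={"From": 4000, "To": 4000},
)
```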
A call center wants to use Artificial Intelligence (AI) to extract insights from audio recordings to assess the quality of its customer service. The calls are available in both English and Hindi. A sentiment analysis report in English must be generated for each recording to assess whether or not the customer had a positive experience. Once the solution is completed, new languages will eventually be supported, such as Arabic, Mandarin, and Spanish. How can the solutions architect build the solution without maintaining any machine learning model?
- Utilize the Amazon Lex service to convert audio recordings into text. Call the Amazon Translate API to translate Hindi texts into English and use Amazon Forecast for sentiment prediction and analysis.
- Transcribe audio recordings into text using Amazon Polly. Set up Amazon Rekognition to recognize and automatically translate Hindi texts into English. Use the combination of Amazon Fraud Detector and Amazon SageMaker BlazingText algorithm for sentiment analysis.
- Convert audio recordings into text using Amazon Transcribe. Set up Amazon Translate to translate Hindi texts into English and use Amazon Comprehend for sentiment analysis.
- Set up Amazon Comprehend to convert audio recordings into text. Use Amazon Kendra to translate Hindi texts into English and utilize the Amazon Detective service to automatically detect negative user behaviors for sentiment analysis.
- Convert audio recordings into text using Amazon Transcribe. Set up Amazon Translate to translate Hindi texts into English and use Amazon Comprehend for sentiment analysis.
Amazon Transcribe is an AWS service that makes it easy for customers to convert speech to text. Using Automatic Speech Recognition (ASR) technology, customers can use Amazon Transcribe for a variety of business applications, including transcribing voice-based customer service calls, generating subtitles on audio/video content, and conducting (text-based) content analysis on audio/video content.
Amazon Translate is a Neural Machine Translation (MT) service for translating text between supported languages.
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text.
You can use Amazon Comprehend to determine the sentiment of a document. For example, you can use sentiment analysis to determine the sentiments of comments on a blog posting or a transcribed call to determine if your users loved or hated your content. You can determine sentiment for documents in any of the primary languages supported by Amazon Comprehend. All documents in one job must be in the same language.
In this scenario, you can use these three services to build the ML-pipeline needed to satisfy the requirements. First, you’d have to create a transcription job using Amazon Transcribe to transform the recordings into text. Then, translate non-English calls to English using Amazon Translate. Finally, use Amazon Comprehend for sentiment analysis.
There’s no need to deploy or train your own model as all of these services are fully managed and are readily available through APIs.
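A hedged boto3 sketch of that pipeline with placeholder bucket and job names; in a real deployment you would poll `get_transcription_job` (or react to an EventBridge event) before reading the transcript.

```python
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1. Transcribe a Hindi call recording stored in S3 (bucket/key names are placeholders).
transcribe.start_transcription_job(
    TranscriptionJobName="call-0001",
    Media={"MediaFileUri": "s3://call-recordings/call-0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="hi-IN",
    OutputBucketName="call-transcripts",
)

# ...wait for the job to complete, then load the transcript text from the output bucket.
hindi_text = "..."  # transcript text goes here

# 2. Translate the transcript from Hindi to English.
translated = translate.translate_text(
    Text=hindi_text, SourceLanguageCode="hi", TargetLanguageCode="en"
)

# 3. Run sentiment analysis on the English text.
sentiment = comprehend.detect_sentiment(
    Text=translated["TranslatedText"], LanguageCode="en"
)
print(sentiment["Sentiment"])  # POSITIVE, NEGATIVE, NEUTRAL, or MIXED
```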
Hence, the correct answer is: Convert audio recordings into text using Amazon Transcribe. Set up Amazon Translate to translate Hindi texts into English and use Amazon Comprehend for sentiment analysis.
The option that says: Transcribe audio recordings into text using Amazon Polly. Set up Amazon Rekognition to recognize and automatically translate Hindi texts into English. Use the combination of Amazon Fraud Detector and Amazon SageMaker BlazingText algorithm for sentiment analysis is incorrect. Although the use of the Amazon SageMaker BlazingText algorithm is technically valid, it still fails to meet the condition of not maintaining any ML model. Using Amazon SageMaker would require you to train and deploy the model yourself. Added to that, the use of Amazon Fraud Detector is also unnecessary. Amazon Fraud Detector is commonly used to identify potentially fraudulent activities and not for running sentiment analysis. Do take note that Amazon Polly is not capable of transcribing audio recordings, and Amazon Rekognition is primarily an image recognition service, not a service for translating foreign words into English.
The option that says: Utilize the Amazon Lex service to convert audio recordings into text. Call the Amazon Translate API to translate Hindi texts into English and use Amazon Forecast for sentiment prediction and analysis is incorrect. Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models that can help you design, build, test, and deploy conversational interfaces or chatbots. This service is not capable of transcribing any audio recordings into a text format. Also, you cannot use the Amazon Forecast service for running sentiment prediction and analysis. Amazon Forecast is meant for forecasting business outcomes using historical and related data.
The option that says: Set up Amazon Comprehend to convert audio recordings into text. Use Amazon Kendra to translate Hindi texts into English and utilize the Amazon Detective service to automatically detect negative user behaviors for sentiment analysis is incorrect. Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text. This service is not capable of transcribing or converting audio recordings into text. Amazon Kendra is a highly accurate and easy-to-use enterprise search service for all unstructured data that you store in AWS, while Amazon Detective is a security service that analyzes and visualizes security data to rapidly get to the root cause of your potential security issues. Amazon Kendra is not capable of translating any foreign text into English, and Amazon Detective doesn’t have the functionality to automatically detect negative user behaviors for sentiment analysis.
A Data Analyst in a financial company is tasked to provide insights on stock market trends to the company’s clients. The company uses AWS Glue extract, transform, and load (ETL) jobs in daily report generation, which involves fetching data from an Amazon S3 bucket. The analyst discovered that old data from previous runs were being reprocessed, causing the jobs to take longer to complete. Which solution would resolve the issue in the most operationally efficient way?
- Parallelize the job by splitting the dataset into smaller partitions and processing them simultaneously using multiple EC2 instances.
- Increase the size of the dataset used in the job to speed up the extraction and analysis process.
- Create a Lambda function that removes any data already processed. Then, use Amazon EventBridge (Amazon CloudWatch Events) to trigger this function whenever the ETL job’s status switches to `SUCCEEDED`.
- Enable job bookmark for the ETL job.
- Enable job bookmark for the ETL job.
AWS Glue is a powerful tool that enables data engineers to build and manage ETL (extract, transform, load) pipelines for processing and analyzing large amounts of data. With AWS Glue, you can create and manage jobs that extract data from various sources, transform it into the desired format, and load it into a target data store.
One of the features that make AWS Glue especially useful is job bookmarking. Job bookmarking is a mechanism that allows AWS Glue to keep track of where a job is left off in case it gets interrupted or fails for any reason. This way, when the job is restarted, it can pick up from where it left off instead of starting from scratch.
Job bookmarking works by storing the state of a job’s progress in a persistent data store separate from the job itself. AWS Glue can resume a job from where it left off, even if the job, environment, or underlying data have changed. Job bookmarking is especially useful when dealing with large datasets or long-running jobs, as it helps save time and resources by avoiding unnecessary processing.
In this scenario, the company can benefit from enabling job bookmarking for the ETL job to improve the data extraction and analysis efficiency. Job bookmarking keeps track of the last processed data, allowing succeeding jobs to run only to process new data. This eliminates the need to reprocess old data and significantly reduces processing time and resource requirements.
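For illustration, the sketch below shows how a Glue ETL script cooperates with bookmarking; the job name and S3 paths are placeholders, and the bookmark itself is enabled on the job (for example, by passing the job argument --job-bookmark-option with the value job-bookmark-enable).

```python
# AWS Glue ETL script (PySpark) sketch. Job bookmarks must also be enabled on the
# job itself, e.g. via the job argument "--job-bookmark-option": "job-bookmark-enable".
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)  # loads the bookmark state for this run

# transformation_ctx is what Glue uses to remember which S3 objects were already
# processed; without it, the bookmark cannot skip old data. Paths are placeholders.
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/stock-data/"]},
    format="json",
    transformation_ctx="source",
)

glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/reports/"},
    format="parquet",
    transformation_ctx="sink",
)

job.commit()  # persists the bookmark so the next run only sees new data
```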
Hence, the correct answer is: Enable job bookmark for the ETL job.
The option that says: Increase the size of the dataset used in the job to speed up the extraction and analysis process is incorrect. Increasing the dataset size will likely result in longer processing times and increased resource requirements. Therefore, it can aggravate the existing inefficiency problem rather than resolve it.
The option that says: Parallelize the job by splitting the dataset into smaller partitions and processing them simultaneously using multiple EC2 instances is incorrect. This option requires additional AWS resources because of its complexity in partitioning the dataset, managing parallel processing, and coordinating the results. Moreover, this would only speed up the processing of both new and old data; it won’t resolve the issue of reprocessing old data.
The option that says: Create a Lambda function that removes any data already processed. Then, use Amazon EventBridge (Amazon CloudWatch Events) to trigger this function whenever the ETL job’s status switches to `SUCCEEDED` is incorrect. While removing processed data can help optimize storage, it introduces additional complexity and may not be fully efficient if the process of identifying which data has already been processed is not foolproof. Moreover, if a job fails and needs to be rerun, the data for that job might already have been removed, resulting in inconsistencies or incomplete data processing.
A company has an On-Demand EC2 instance located in a subnet in AWS that hosts a web application. The security group attached to this EC2 instance has the following Inbound Rules:
Type | Protocol | Port Range | Source | Description
SSH | TCP | 22 | 0.0.0.0/0 |
The Route table attached to the VPC is shown below. You can establish an SSH connection into the EC2 instance from the Internet. However, you are not able to connect to the web server using your Chrome browser.
Destination | Target | Status | Propagated
10.0.0.0/27 | local | Active | No
0.0.0.0/0 | igw-b51618cc | Active | No
Which of the below steps would resolve the issue?
- In the Route table, add this new route entry: 0.0.0.0 -> igw-b51618cc
- In the Security Group, remove the SSH rule.
- In the Route table, add this new route entry: 10.0.0.0/27 -> local
- In the Security Group, add an Inbound HTTP rule.
- In the Security Group, add an Inbound HTTP rule.
In this particular scenario, you can already connect to the EC2 instance via SSH. This means that there is no problem in the Route Table of your VPC. To fix this issue, you simply need to update your Security Group and add an Inbound rule to allow HTTP traffic.
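As a rough illustration, the missing rule could be added with a single API call; the sketch below uses boto3, and the security group ID is a placeholder. If the web server also serves HTTPS, a similar rule for port 443 would be added as well.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTP (TCP 80) from anywhere; the group ID is a placeholder.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from the Internet"}],
        }
    ],
)
```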
The option that says: In the Security Group, remove the SSH rule is incorrect as doing so will not solve the issue. It would only remove the SSH access that is already working.
The options that say: In the Route table, add this new route entry: 0.0.0.0 -> igw-b51618cc and In the Route table, add this new route entry: 10.0.0.0/27 -> local are incorrect because there is no need to change the route table. It already routes local traffic (10.0.0.0/27 -> local) and Internet-bound traffic (0.0.0.0/0 -> igw-b51618cc), and the working SSH connection confirms that routing to the instance is not the problem.
A company has a global online trading platform in which users from all over the world regularly upload terabytes of transactional data to a centralized S3 bucket. What AWS feature should you use in your present system to improve throughput and ensure consistently fast data transfer to the Amazon S3 bucket, regardless of your users’ location?
- Use CloudFront Origin Access Control (OAC)
- FTP
- Amazon S3 Transfer Acceleration
- AWS Direct Connect
- Amazon S3 Transfer Acceleration
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path.
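A minimal sketch of how this could look with boto3 is shown below; the bucket and file names are placeholders. Acceleration is enabled once on the bucket, and clients then opt in to the accelerated endpoint through the SDK configuration.

```python
import boto3
from botocore.config import Config

# One-time setup: enable Transfer Acceleration on the bucket (name is a placeholder).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="example-global-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients upload through the accelerated endpoint by opting in via the SDK config.
s3_accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accelerated.upload_file(
    "trades-2024-01-01.csv",          # local file (placeholder)
    "example-global-uploads",         # bucket (placeholder)
    "uploads/trades-2024-01-01.csv",  # object key (placeholder)
)
```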
FTP is incorrect because the File Transfer Protocol does not improve throughput or guarantee consistently fast data transfer to Amazon S3.
AWS Direct Connect is incorrect because the users are spread all around the world and not just in your on-premises data center. Provisioning Direct Connect for this purpose would be costly and is not suitable for this scenario.
Using CloudFront Origin Access Control (OAC) is incorrect because this feature only ensures that CloudFront alone can serve the S3 content. It does not increase upload throughput or ensure fast data transfer to the S3 bucket.
A Solutions Architect is unable to connect to the newly deployed EC2 instance via SSH using a home computer. However, the Architect was able to successfully access other existing instances in the VPC without any issues. Which of the following should the Architect check and possibly correct to restore connectivity?
- Use Amazon Data Lifecycle Manager.
- Configure the Network Access Control List of your VPC to permit ingress traffic over port 22 from your IP.
- Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from your IP.
- Configure the Security Group of the EC2 instance to permit ingress traffic over port 3389 from your IP.
- Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from your IP.
When connecting to your EC2 instance via SSH, you need to ensure that port 22 is allowed on the security group of your EC2 instance.
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.
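As an illustration, the fix amounts to adding an inbound rule for TCP port 22, ideally restricted to the Architect's own public IP rather than 0.0.0.0/0. A boto3 sketch, with a placeholder group ID and address:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP 22) only from the Architect's home IP; both values are placeholders.
ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234de56f7890",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "SSH from home"}],
        }
    ],
)
```

Limiting the source to a /32 address follows the usual least-privilege practice for administrative ports, rather than opening SSH to the entire Internet.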
Using Amazon Data Lifecycle Manager is incorrect because this service is primarily used to automate the creation, retention, and deletion of EBS snapshots and EBS-backed AMIs, not to allow certain traffic to reach an instance.
Configuring the Network Access Control List of your VPC to permit ingress traffic over port 22 from your IP is incorrect because this is not necessary in this scenario, as it was specified that you were able to connect to other EC2 instances in the same VPC. In addition, a network ACL controls the traffic that goes in and out of an entire subnet, not just one EC2 instance.
Configuring the Security Group of the EC2 instance to permit ingress traffic over port 3389 from your IP is incorrect because port 3389 is used by RDP (Remote Desktop Protocol), not SSH.
A Solutions Architect needs to deploy a mobile application that collects votes for a singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available database which will be queried for real-time ranking. The database is expected to undergo frequent schema changes throughout the voting period. Which of the following combinations of services should the architect use to meet this requirement?
- Amazon Relational Database Service (RDS) and Amazon MQ
- Amazon DynamoDB and AWS AppSync
- Amazon Aurora and Amazon Cognito
- Amazon DocumentDB (with MongoDB compatibility) and Amazon AppFlow
- Amazon DynamoDB and AWS AppSync
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data import and export tools. DynamoDB tables are schemaless—other than the primary key, you do not need to define any extra attributes or data types when you create a table, which is why it’s suitable for data with frequently changing schema.
DynamoDB is a durable, scalable, and highly available data store which can be used for real-time tabulation. You can also use AWS AppSync with DynamoDB to make it easy to build collaborative apps that keep shared data updated in real time. You just specify the data for your app with simple code statements, and AWS AppSync manages everything needed to keep the app data updated in real time. This allows your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon OpenSearch Service queries, and to combine data from these services to provide exactly the data you need for your app.
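To illustrate the schemaless nature, only the key attributes are declared when the table is created; every other attribute can vary from item to item. The sketch below uses boto3 with hypothetical table and attribute names.

```python
import boto3

dynamodb = boto3.resource("dynamodb")

# Only the key schema is declared up front; all other attributes are free-form.
table = dynamodb.create_table(
    TableName="CompetitionVotes",
    KeySchema=[
        {"AttributeName": "contestant_id", "KeyType": "HASH"},
        {"AttributeName": "vote_id", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "contestant_id", "AttributeType": "S"},
        {"AttributeName": "vote_id", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity for unpredictable global traffic
)
table.wait_until_exists()

# New attributes (e.g. device_type) can appear later without any schema migration.
table.put_item(Item={
    "contestant_id": "C-042",
    "vote_id": "2024-06-01T12:00:00Z#user-9001",
    "country": "PH",
    "device_type": "android",
})
```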
Amazon DocumentDB (with MongoDB compatibility) and Amazon AppFlow are incorrect. While Amazon DocumentDB (with MongoDB compatibility) is a viable database option, Amazon AppFlow cannot interface with it to query updates. Amazon AppFlow is simply an integration service for transferring data securely between Software-as-a-Service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, ServiceNow, and AWS services.
Amazon Relational Database Service (RDS) and Amazon MQ are incorrect. Updating schema changes in a relational database is a complicated process. Using a NoSQL database such as DynamoDB is more suitable for what the scenario is asking. Additionally, Amazon MQ is just a managed message broker service for Apache ActiveMQ and RabbitMQ; it is not needed in this solution.
Amazon Aurora and Amazon Cognito are incorrect. Like the other incorrect option, relational database solutions, such as Amazon Aurora and RDS, are impractical for data with a frequently changing schema. Additionally, Amazon Cognito is just a service for user authentication and authorization, neither of which is mentioned in the scenario.
An investment bank is working with an IT team to handle the launch of the new digital wallet system. The applications will run on multiple EBS-backed EC2 instances which will store the logs, transactions, and billing statements of the user in an S3 bucket. Due to tight security and compliance requirements, the IT team is exploring options on how to safely store sensitive data on the EBS volumes and S3. Which of the below options should be carried out when storing sensitive data on AWS? (Select TWO.)
- Create an EBS Snapshot
- Enable Amazon S3 Server-Side Encryption or use Client-Side Encryption
- Enable EBS Encryption
- Migrate the EC2 instances from the public to private subnet.
- Use AWS Shield and WAF
- Enable Amazon S3 Server-Side Encryption or use Client-Side Encryption
- Enable EBS Encryption
Enabling EBS Encryption and enabling Amazon S3 Server-Side Encryption or using Client-Side Encryption are the correct options. Amazon EBS encryption offers a simple encryption solution for your EBS volumes without the need to build, maintain, and secure your own key management infrastructure.
In Amazon S3, data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options to protect data at rest in Amazon S3.
Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.
Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
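Both protections can be switched on with a couple of API calls. The sketch below is illustrative only; it uses boto3, a placeholder bucket name, and enables EBS encryption by default for new volumes in the current Region.

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Encrypt all newly created EBS volumes in this Region by default.
ec2.enable_ebs_encryption_by_default()

# Enforce default server-side encryption (SSE-S3 here; SSE-KMS is another option)
# for every object written to the bucket. The bucket name is a placeholder.
s3.put_bucket_encryption(
    Bucket="example-digital-wallet-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```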
Creating an EBS Snapshot is incorrect because this is a backup solution for EBS volumes. Taking a snapshot does not, by itself, secure or encrypt the data inside the EBS volumes.
Migrating the EC2 instances from the public to private subnet is incorrect because the data you want to secure resides in the EBS volumes and S3 buckets. Moving your EC2 instances to a private subnet reduces network exposure, which is a different security concern, and does not encrypt the stored data as required in this scenario.
Using AWS Shield and WAF is incorrect because these services protect your web applications from common security threats such as DDoS attacks and web exploits. However, what you are trying to achieve is securing and encrypting the data stored in EBS and S3.
A company plans to use a durable storage service to store on-premises database backups to the AWS cloud. To move their backup data, they need to use a service that can store and retrieve objects through standard file storage protocols for quick recovery. Which of the following options will meet this requirement?
- Use Amazon EBS volumes to store all the backup data and attach it to an Amazon EC2 instance.
- Use the AWS Storage Gateway file gateway to store all the backup data in Amazon S3.
- Use the AWS Storage Gateway volume gateway to store the backup data and directly access it using Amazon S3 API actions.
- Use AWS Snowball Edge to directly backup the data in Amazon S3 Glacier.
- Use the AWS Storage Gateway file gateway to store all the backup data in Amazon S3.
File Gateway presents a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols. File Gateway allows your existing file-based applications or devices to use secure and durable cloud storage without needing to be modified. With File Gateway, your configured S3 buckets will be available as Network File System (NFS) mount points or Server Message Block (SMB) file shares.
To store the backup data from on-premises to a durable cloud storage service, you can use File Gateway to store and retrieve objects through standard file storage protocols (SMB or NFS). File Gateway enables your existing file-based applications, devices, and workflows to use Amazon S3, without modification. File Gateway securely and durably stores both file contents and metadata as objects while providing your on-premises applications low-latency access to cached data.
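For illustration, once a file gateway appliance has been activated, an NFS file share backed by an S3 bucket can be created through the Storage Gateway API; the on-premises backup server then mounts the exported share like any other NFS export. The sketch below uses boto3 with placeholder ARNs and an example client CIDR.

```python
import uuid

import boto3

storagegateway = boto3.client("storagegateway")

# Create an NFS file share on an already-activated file gateway.
# The gateway ARN, IAM role ARN, and bucket ARN are placeholders.
response = storagegateway.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::example-onprem-db-backups",
    ClientList=["10.0.0.0/24"],  # on-premises backup servers allowed to mount the share
    DefaultStorageClass="S3_STANDARD",
)
print(response["FileShareARN"])
```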
Hence, the correct answer is: Use the AWS Storage Gateway file gateway to store all the backup data in Amazon S3.
The option that says: Use the AWS Storage Gateway volume gateway to store the backup data and directly access it using Amazon S3 API actions is incorrect. Although this is a possible solution, you cannot directly access the volume gateway using Amazon S3 APIs. You should use File Gateway to access your data in Amazon S3.
The option that says: Use Amazon EBS volumes to store all the backup data and attach it to an Amazon EC2 instance is incorrect. Take note that in the scenario, you are required to store the backup data in a durable storage service. An Amazon EBS volume is not as highly durable as Amazon S3. Also, file storage protocols such as NFS or SMB are not directly supported by EBS.
The option that says: Use AWS Snowball Edge to directly backup the data in Amazon S3 Glacier is incorrect because AWS Snowball Edge cannot store and retrieve objects through standard file storage protocols. Also, Snowball Edge can’t directly integrate backups to S3 Glacier.