AWS Certified Solutions Architect Associate Practice Test 3 (Bonso) Flashcards
A data analytics company is setting up an innovative checkout-free grocery store. Their Solutions Architect developed a real-time monitoring application that uses smart sensors to collect the items that customers take from the grocery's refrigerators and shelves and then automatically deduct them from their accounts. The company wants to analyze the items that are frequently bought and store the results in S3 for durable storage to determine the purchase behavior of its customers.
What service must be used to easily capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk?
A. Amazon SQS
B. Amazon Kinesis
C. Amazon Redshift
D. Amazon Kinesis Data Firehose
D. Amazon Kinesis Data Firehose
Explanation:
Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you are already using today.
It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
In this architecture, you gather the data from your smart refrigerators and use Kinesis Data Firehose to prepare and load the data. S3 is used to durably store the data for analytics and for eventual ingestion by analytical tools.
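As a rough illustration, a producer can push each sensor event to a Firehose delivery stream with a single API call, and Firehose then buffers and delivers the records to S3. The delivery stream name and record fields below are hypothetical:

import json
import boto3

firehose = boto3.client("firehose")

# Hypothetical sensor event; Firehose buffers these records and delivers them to S3.
event = {"sensor_id": "fridge-42", "item": "oat-milk-1L", "action": "picked_up"}

firehose.put_record(
    DeliveryStreamName="grocery-sensor-stream",   # assumed delivery stream with an S3 destination
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)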
You can use Amazon Kinesis Data Firehose in conjunction with Amazon Kinesis Data Streams if you need to implement real-time processing of streaming big data. Kinesis Data Streams provides an ordering of records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications reading from the same Amazon Kinesis data stream (for example, to perform counting, aggregation, and filtering).
Amazon Simple Queue Service (Amazon SQS) is different from Amazon Kinesis Data Firehose. SQS offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows. Amazon Kinesis Data Firehose is primarily used to load streaming data into data stores and analytics tools.
Hence, the correct answer is: Amazon Kinesis Data Firehose.
Amazon Kinesis is incorrect because this is the streaming data platform of AWS and has four distinct services under it: Kinesis Data Firehose, Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data Analytics. For the specific use case asked in the scenario, use Kinesis Data Firehose.
Amazon Redshift is incorrect because this is mainly used for data warehousing, making it simple and cost-effective to analyze your data across your data warehouse and data lake. It does not meet the requirement of being able to load and stream data into data stores for analytics. You have to use Kinesis Data Firehose instead.
Amazon SQS is incorrect because you can’t capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk using this service. You have to use Kinesis Data Firehose instead.
A tech company is currently using Auto Scaling for their web application. A new AMI now needs to be used for launching a fleet of EC2 instances.
Which of the following changes needs to be done?
A. Do nothing. You can start directly launching EC2 instances in the Auto Scaling Group with the same launch configuration
B. Create a new target group
C. Create a new launch configuration
D. Create a new target group and launch configuration
C. Create a new launch configuration
Explanation:
A launch configuration is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances, such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you’ve launched an EC2 instance before, you specified the same information in order to launch the instance.
You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you can’t modify a launch configuration after you’ve created it. Therefore, if you want to change the launch configuration for an Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration.
For this scenario, you have to create a new launch configuration. Remember that you can’t modify a launch configuration after you’ve created it.
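A minimal sketch of that workflow with the AWS SDK for Python (boto3) is shown below; the group, launch configuration, AMI, key pair, and security group names are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Create a new launch configuration that references the new AMI (names and IDs are hypothetical).
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-app-lc-v2",
    ImageId="ami-0new1234567890abc",
    InstanceType="t3.medium",
    SecurityGroups=["sg-0123456789abcdef0"],
    KeyName="web-app-key",
)

# Point the existing Auto Scaling group at the new launch configuration.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    LaunchConfigurationName="web-app-lc-v2",
)

Instances that are already running keep the old AMI; the new AMI is used as the group launches replacement instances.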
Hence, the correct answer is: Create a new launch configuration.
The option that says: Do nothing. You can start directly launching EC2 instances in the Auto Scaling group with the same launch configuration is incorrect because what you are trying to achieve is change the AMI being used by your fleet of EC2 instances. Therefore, you need to change the launch configuration to update what your instances are using.
The options that say: Create a new target group and Create a new target group and launch configuration are both incorrect because you only want to change the AMI being used by your instances, not the instances themselves. Target groups are primarily used with ELBs, not with Auto Scaling, and the scenario didn't mention that the architecture has a load balancer. Therefore, you should be updating your launch configuration, not the target group.
A Solutions Architect is managing a company’s AWS account of approximately 300 IAM users. They have a new company policy that requires changing the associated permissions of all 100 IAM users that control the access to Amazon S3 buckets.
What will the Solutions Architect do to avoid the time-consuming task of applying the policy to each user?
A. Create a new policy and apply it to multiple IAM users using a shell script
B. Create a new IAM role and add each user to the IAM role
C. Create a new S3 bucket access policy with unlimited access for each IAM user
D. Create a new IAM group and then add the users that require access to the S3 bucket. Afterward, apply the policy to the IAM group
D. Create a new IAM group and then add the users that require access to the S3 bucket. Afterward, apply the policy to the IAM group
Explanation:
In this scenario, the best option is to group the users in an IAM Group and then apply a policy with the required access to the Amazon S3 bucket. This lets you easily add, remove, and manage the users instead of manually attaching a policy to each of the 100 IAM users.
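A quick boto3 sketch of that approach (the group name, policy ARN, and user names are hypothetical) looks like this:

import boto3

iam = boto3.client("iam")

# Create the group once and attach the S3 access policy to it.
iam.create_group(GroupName="s3-bucket-users")
iam.attach_group_policy(
    GroupName="s3-bucket-users",
    PolicyArn="arn:aws:iam::123456789012:policy/S3BucketAccessPolicy",
)

# Add each of the affected users to the group.
for user_name in ["user-001", "user-002"]:   # ...and so on for the remaining users
    iam.add_user_to_group(GroupName="s3-bucket-users", UserName=user_name)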
Creating a new policy and applying it to multiple IAM users using a shell script is incorrect because this scenario calls for a new IAM Group rather than assigning a policy to each user via a shell script. The script might save you time initially, but afterward it will be difficult to manage 100 users that are not contained in an IAM Group.
Creating a new S3 bucket access policy with unlimited access for each IAM user is incorrect because you need a new IAM Group and the method is also time-consuming.
Creating a new IAM role and adding each user to the IAM role is incorrect because you need to use an IAM Group and not an IAM role.
A company is hosting an application on EC2 instances that regularly pushes and fetches data in Amazon S3. Due to a change in compliance, the instances need to be moved to a private subnet. Along with this change, the company wants to lower the data transfer costs by configuring its AWS resources.
How can this be accomplished in the MOST cost-efficient manner?
A. Create an Amazon S3 Gateway endpoint to enable a connection between the instances and Amazon S3
B. Set up a NAT Gateway in the public subnet to connect to Amazon S3
C. Set up an AWS Transit Gateway to access Amazon S3
D. Create an Amazon S3 interface endpoint to enable a connection between the instances and Amazon S3
A. Create an Amazon S3 Gateway endpoint to enable a connection between the instances and Amazon S3
Explanation:
VPC endpoints for Amazon S3 simplify access to S3 from within a VPC by providing configurable and highly reliable secure connections to S3 that do not require an internet gateway or Network Address Translation (NAT) device. When you create an S3 VPC endpoint, you can attach an endpoint policy to it that controls access to Amazon S3.
You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface endpoints. A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gateway endpoints. If you have an existing gateway endpoint in the VPC, you can use both types of endpoints in the same VPC.
There is no additional charge for using gateway endpoints. However, standard charges for data transfer and resource usage still apply.
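Creating the gateway endpoint is a single API call; a hedged boto3 sketch (the VPC ID, Region, and route table ID are placeholders) looks like this:

import boto3

ec2 = boto3.client("ec2")

# The gateway endpoint adds an S3 prefix-list route to the private subnet's route table,
# so the instances reach S3 over the AWS network without a NAT gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)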
Hence, the correct answer is: Create an Amazon S3 gateway endpoint to enable a connection between the instances and Amazon S3.
The option that says: Set up a NAT Gateway in the public subnet to connect to Amazon S3 is incorrect. This will enable a connection between the private EC2 instances and Amazon S3 but it is not the most cost-efficient solution. NAT Gateways are charged on an hourly basis even for idle time.
The option that says: Create an Amazon S3 interface endpoint to enable a connection between the instances and Amazon S3 is incorrect. This is also a possible solution but it’s not the most cost-effective solution. You pay an hourly rate for every provisioned Interface endpoint.
The option that says: Set up an AWS Transit Gateway to access Amazon S3 is incorrect because this service is mainly used for connecting VPCs and on-premises networks through a central hub.
A large financial firm in the country has an AWS environment that contains several Reserved EC2 instances hosting a web application that was decommissioned last week. To save costs, you need to stop incurring charges for the Reserved Instances as soon as possible.
What cost-effective steps will you take in this circumstance? (Select TWO.)
A. Contact AWS to cancel your AWS subscription
B. Go to the Amazon.com online shopping website and sell the Reserved instances
C. Go to the AWS Reserved Instance Marketplace and sell the Reserved Instances
D. Stop the Reserved instances as soon as possible
E. Terminate the Reserved Instances as soon as possible to avoid getting billed at the on-demand price when it expires
C. Go to the AWS Reserved Instance Marketplace and sell the Reserved Instances
E. Terminate the Reserved Instances as soon as possible to avoid getting billed at the on-demand price when it expires
Explanation:
The Reserved Instance Marketplace is a platform that supports the sale of third-party and AWS customers’ unused Standard Reserved Instances, which vary in terms of lengths and pricing options. For example, you may want to sell Reserved Instances after moving instances to a new AWS region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity.
Hence, the correct answers are:
- Go to the AWS Reserved Instance Marketplace and sell the Reserved instances.
- Terminate the Reserved instances as soon as possible to avoid getting billed at the on-demand price when it expires.
Stopping the Reserved instances as soon as possible is incorrect because a stopped instance can still be restarted. Take note that when a Reserved Instance expires, any instances that were covered by it are billed at the on-demand price, which costs significantly more. Since the application is already decommissioned, there is no point in keeping the unused instances. It is also possible that there are associated Elastic IP addresses, which will incur charges if they are associated with stopped instances.
Contacting AWS to cancel your AWS subscription is incorrect as you don’t need to close down your AWS account.
Going to the Amazon.com online shopping website and selling the Reserved instances is incorrect as you have to use AWS Reserved Instance Marketplace to sell your instances.
An online registration system hosted in an Amazon EKS cluster stores data to a db.t4g.medium Amazon Aurora DB cluster. The database performs well during regular hours but is unable to handle the traffic surge that occurs during flash sales. A solutions architect must move the database to Aurora Serverless while minimizing downtime and the impact on the operation of the application.
Which change should be taken to meet the objective?
A. Change the Aurora Instance class to Serverless
B. Take a snapshot of the DB cluster. Use the snapshot to create a new Aurora DB cluster
C. Use AWS Database Migration Service (AWS DMS) to migrate to a new Aurora Serverless database
D. Add an Aurora Replica to the cluster and set its instance class to Serverless
C. Use AWS Database Migration Service (AWS DMS) to migrate to a new Aurora Serverless database
Explanation:
AWS Database Migration Service helps you migrate your databases to AWS with virtually no downtime. All data changes to the source database that occur during the migration are continuously replicated to the target, allowing the source database to be fully operational during the migration process.
You can set up a DMS task for either one-time migration or ongoing replication. An ongoing replication task keeps your source and target databases in sync. Once set up, the ongoing replication task will continuously apply source changes to the target with minimal latency.
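For illustration only, a full-load-plus-CDC replication task might be created like this with boto3; the endpoint and replication instance ARNs are placeholders, and the table mapping simply includes every table:

import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="aurora-to-serverless",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:source-aurora",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:target-serverless",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:replication-instance",
    MigrationType="full-load-and-cdc",   # initial copy plus ongoing change replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)

Once the target catches up, the application can be switched over to the Aurora Serverless endpoint with minimal downtime.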
Hence, the correct answer is: Use AWS Database Migration Service (AWS DMS) to migrate data from the existing DB cluster to a new Aurora Serverless database.
The option that says: Change the Aurora Instance class to Serverless is incorrect. Changing the instance class from Provisioned to Serverless is not possible.
The option that says: Take a snapshot of the DB cluster. Use the snapshot to create a new Aurora DB cluster is incorrect. This one involves a long period of downtime since you have to stop the application until the new cluster is created.
The option that says: Add an Aurora Replica to the cluster and set its instance class to Serverless. Failover to the read replica and promote it to primary is incorrect. While this method is valid, the database becomes unavailable for writing for a short period of time during failover.
A company is using Amazon VPC that has a CIDR block of 10.31.0.0/27 that is connected to the on-premises data center. There was a requirement to create a Lambda function that will process massive amounts of cryptocurrency transactions every minute and then store the results to EFS. After setting up the serverless architecture and connecting the Lambda function to the VPC, the Solutions Architect noticed an increase in invocation errors with EC2 error types such as EC2ThrottledException at certain times of the day.
Which of the following are the possible causes of this issue? (Select TWO.)
A. Your VPC does not have a NAT Gateway
B. The associated security group of your function does not allow outbound connections
C. The attached IAM execution role of your function does not have the necessary permissions to access the resources of your VPC
D. Your VPC does not have sufficient subnet ENIs or subnet IPs
E. You only specified one subnet in your Lambda function configuration. That single subnet runs out of available IP addresses and there is no other subnet or Availability Zone which can handle the peak load
D. Your VPC does not have sufficient subnet ENIs or subnet IPs
E. You only specified one subnet in your Lambda function configuration. That single subnet runs out of available IP addresses and there is no other subnet or Availability Zone which can handle the peak load
Explanation:
You can configure a function to connect to a virtual private cloud (VPC) in your account. Use Amazon Virtual Private Cloud (Amazon VPC) to create a private network for resources such as databases, cache instances, or internal services. Connect your function to the VPC to access private resources during execution.
AWS Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC.
Lambda functions cannot connect directly to a VPC with dedicated instance tenancy. To connect to resources in a dedicated VPC, peer it to a second VPC with default tenancy.
Your Lambda function automatically scales based on the number of events it processes. If your Lambda function accesses a VPC, you must make sure that your VPC has sufficient ENI capacity to support the scale requirements of your Lambda function. It is also recommended that you specify at least one subnet in each Availability Zone in your Lambda function configuration.
By specifying subnets in each of the Availability Zones, your Lambda function can run in another Availability Zone if one goes down or runs out of IP addresses. If your VPC does not have sufficient ENIs or subnet IPs, your Lambda function will not scale as requests increase, and you will see an increase in invocation errors with EC2 error types like EC2ThrottledException. For asynchronous invocation, if you see an increase in errors without corresponding CloudWatch Logs, invoke the Lambda function synchronously in the console to get the error responses.
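A hedged sketch of attaching a function to two subnets in different Availability Zones (the function name, subnet IDs, and security group ID are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Giving the function subnets in at least two Availability Zones lets it keep
# creating ENIs and scaling even if one subnet runs out of IP addresses.
lambda_client.update_function_configuration(
    FunctionName="crypto-transaction-processor",
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)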
Hence, the correct answers for this scenario are:
- You only specified one subnet in your Lambda function configuration. That single subnet runs out of available IP addresses and there is no other subnet or Availability Zone which can handle the peak load.
- Your VPC does not have sufficient subnet ENIs or subnet IPs.
The option that says: Your VPC does not have a NAT gateway is incorrect because an issue in the NAT Gateway is unlikely to cause a request throttling issue or produce an EC2ThrottledException error in Lambda. As per the scenario, the issue happens only at certain times of the day, which means it is intermittent and the function works at other times. We can also conclude that availability is not the issue, since the architecture is already using a highly available NAT Gateway rather than just a NAT instance.
The option that says: The associated security group of your function does not allow outbound connections is incorrect because if the associated security group does not allow outbound connections, then the Lambda function will not work at all in the first place. Remember that as per the scenario, the issue only happens intermittently. In addition, Internet traffic restrictions do not usually produce EC2ThrottledException errors.
The option that says: The attached IAM execution role of your function does not have the necessary permissions to access the resources of your VPC is incorrect because, as explained above, the issue is intermittent; the IAM execution role of the function does have the necessary permissions to access the resources of the VPC since it works at those specific times. If the issue were indeed caused by a permission problem, an EC2AccessDeniedException error would most likely be returned instead of an EC2ThrottledException error.
An application needs to retrieve a subset of data from a large CSV file stored in an Amazon S3 bucket by using simple SQL expressions. The queries are made within Amazon S3 and must only return the needed data.
Which of the following actions should be taken?
A. Perform an S3 Select operation based on the bucket's name and object's key
B. Perform an S3 Select operation based on the bucket's name and object's metadata
C. Perform an S3 Select operation based on the bucket's name and object tags
D. Perform an S3 Select operation based on the bucket's name
A. Perform an S3 Select operation based on the bucket's name and object's key
Explanation:
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by your application, you can achieve drastic performance increases.
Amazon S3 is composed of buckets, object keys, object metadata, object tags, and many other components as shown below:
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts.
An Amazon S3 object key refers to the key name, which uniquely identifies the object in the bucket.
Amazon S3 object metadata is a set of name-value pairs that provides information about the object.
An Amazon S3 object tag is a key-value pair used for object tagging to categorize storage.
You can perform S3 Select to query only the necessary data inside the CSV files based on the bucket’s name and the object’s key.
The following snippet shows how it is done using boto3 (the AWS SDK for Python); note that select_object_content also expects you to describe the input and output serialization formats:

import boto3

client = boto3.client('s3')

resp = client.select_object_content(
    Bucket='tdojo-bucket',                     # Bucket name
    Key='s3-select/tutorialsdojofile.csv',     # Object key
    ExpressionType='SQL',
    Expression="select \"Sample\" from s3object s where s.\"tutorialsdojofile\" in ('A', 'B')",
    InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}},
    OutputSerialization={'CSV': {}}
)
Hence, the correct answer is the option that says: Perform an S3 Select operation based on the bucket’s name and object’s key.
The option that says: Perform an S3 Select operation based on the bucket’s name and object’s metadata is incorrect because metadata is not needed when querying subsets of data in an object using S3 Select.
The option that says: Perform an S3 Select operation based on the bucket’s name and object tags is incorrect because object tags just provide additional information to your object. This is not needed when querying with S3 Select although this can be useful for S3 Batch Operations. You can categorize objects based on tag values to provide S3 Batch Operations with a list of objects to operate on.
The option that says: Perform an S3 Select operation based on the bucket’s name is incorrect because you need both the bucket’s name and the object key to successfully perform an S3 Select operation.
A Solutions Architect is unable to connect to the newly deployed EC2 instance via SSH using a home computer. However, the Architect was able to successfully access other existing instances in the VPC without any issues.
Which of the following should the Architect check and possibly correct to restore connectivity?
A. Configure the Security Group of the EC2 instance to permit ingress traffic over port 3389 from your IP
B. Configure the network Access Control List of your VPC to permit ingress traffic over port 22 from your IP
C. Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from your IP
D. Use Amazon Data Lifecycle Manager
C. Configure the Security Group of the EC2 instance to permit ingress traffic over port 22 from your IP
Explanation:
When connecting to your EC2 instance via SSH, you need to ensure that port 22 is allowed on the security group of your EC2 instance.
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.
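As an illustrative sketch, the missing rule could be added with boto3 (the security group ID and home IP address are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP port 22) only from the Architect's home IP address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "Home IP"}],
    }],
)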
Using Amazon Data Lifecycle Manager is incorrect because this is primarily used to manage the lifecycle of your AWS resources and not to allow certain traffic to go through.
Configuring the Network Access Control List of your VPC to permit ingress traffic over port 22 from your IP is incorrect because this is not necessary in this scenario, as it was specified that you were able to connect to other EC2 instances. In addition, a Network ACL is more suitable for controlling the traffic that goes in and out of your entire VPC, not just a single EC2 instance.
Configuring the Security Group of the EC2 instance to permit ingress traffic over port 3389 from your IP is incorrect because port 3389 is used for RDP, not SSH.
A Solutions Architect needs to deploy a mobile application that collects votes for a singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available database which will be queried for real-time ranking. The database is expected to undergo frequent schema changes throughout the voting period.
Which of the following combination of services should the architect use to meet this requirement?
A. Amazon Aurora and Amazon Cognito
B. Amazon DocumentDB (with MongoDB compatibility) and Amazon AppFlow
C. Amazon DynamoDB and AWS AppSync
D. Amazon Relational Database Service (RDS) and Amazon MQ
C. Amazon DynamoDB and AWS AppSync
Explanation:
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data import and export tools. DynamoDB tables are schemaless—other than the primary key, you do not need to define any extra attributes or data types when you create a table, which is why it’s suitable for data with frequently changing schema.
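To illustrate the schemaless point, a table definition only declares its key attributes; every other attribute can vary per item. The table and attribute names below are hypothetical:

import boto3

dynamodb = boto3.client("dynamodb")

# Only the primary key is defined; vote payloads can change shape at any time
# without a schema migration.
dynamodb.create_table(
    TableName="CompetitionVotes",
    KeySchema=[
        {"AttributeName": "contestant_id", "KeyType": "HASH"},
        {"AttributeName": "vote_id", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "contestant_id", "AttributeType": "S"},
        {"AttributeName": "vote_id", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)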
DynamoDB is a durable, scalable, and highly available data store which can be used for real-time tabulation. You can also use AppSync with DynamoDB to make it easy to build collaborative apps that keep shared data updated in real time. You simply specify the data for your app with simple code statements, and AWS AppSync manages everything needed to keep the app data updated in real time. This allows your app to access data in Amazon DynamoDB, trigger AWS Lambda functions, or run Amazon Elasticsearch queries, and to combine data from these services to provide the exact data you need for your app.
Amazon DocumentDB (with MongoDB compatibility) and Amazon AppFlow are incorrect. While Amazon DocumentDB (with MongoDB compatibility) is a viable database option, Amazon AppFlow cannot interface with it to query updates. Amazon AppFlow is simply an integration service for transferring data securely between Software-as-a-Service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, ServiceNow, and AWS services.
Amazon Relational Database Service (RDS) and Amazon MQ are incorrect. Updating schema changes in a relational database is a complicated process. Using a NoSQL database such as DynamoDB is more suitable for what the scenario is asking. Additionally, Amazon MQ is just a managed message broker for Apache ActiveMQ and RabbitMQ; it's not needed in the solution.
Amazon Aurora and Amazon Cognito are incorrect. Like the other incorrect option, relational database solutions, such as Amazon Aurora and RDS, are impractical for data with a frequently changing schema. Additionally, Amazon Cognito is just a service for user authentication and authorization, neither of which is mentioned in the scenario.
A document sharing website is using AWS as its cloud infrastructure. Free users can upload a total of 5 GB data while premium users can upload as much as 5 TB. Their application uploads the user files, which can have a max file size of 1 TB, to an S3 Bucket.
In this scenario, what is the best way for the application to upload the large files in S3?
A. Use AWS Snowball
B. Use Multipart Upload
C. Use a single PUT request to upload the large file
D. Use AWS Import/Export
B. Use Multipart Upload
Explanation:
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object. Multipart uploading is a three-step process: you initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts and you can then access the object just as you would any other object in your bucket.
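In practice, the high-level boto3 transfer manager performs the three multipart steps for you once a file crosses a size threshold; the bucket, key, and thresholds below are illustrative:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than the threshold are split into parts, uploaded in parallel,
# and reassembled by Amazon S3 when the multipart upload completes.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # switch to multipart above 100 MB
    multipart_chunksize=100 * 1024 * 1024,   # 100 MB parts
    max_concurrency=10,
)

s3.upload_file(
    Filename="/tmp/large-document.bin",
    Bucket="document-sharing-bucket",
    Key="uploads/large-document.bin",
    Config=config,
)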
Using a single PUT request to upload the large file is incorrect because the largest file size you can upload using a single PUT request is 5 GB. Files larger than this will fail to be uploaded.
Using AWS Snowball is incorrect because this is a migration tool that lets you transfer large amounts of data from your on-premises data center to AWS S3 and vice versa. This tool is not suitable for the given scenario. And when you provision Snowball, the device gets transported to you, and not to your customers. Therefore, you bear the responsibility of securing the device.
Using AWS Import/Export is incorrect because Import/Export is similar to AWS Snowball in such a way that it is meant to be used as a migration tool, and not for multiple customer consumption such as in the given scenario.
A major TV network has a web application running on eight Amazon EC2 T3 instances. The number of requests that the application processes is consistent and does not experience spikes. To ensure that eight instances are running at all times, the Solutions Architect should create an Auto Scaling group and distribute the load evenly between all instances.
Which of the following options can satisfy the given requirements?
A. Deploy four EC2 instances with Auto Scaling in one region and four in another region behind an Amazon Elastic Load Balancer
B. Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another Availability Zone in the same region behind an Amazon Elastic Load Balancer
C. Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an Amazon Elastic Load Balancer
D. Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer
B. Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another Availability Zone in the same region behind an Amazon Elastic Load Balancer
Explanation:
The best option is to deploy four EC2 instances in one Availability Zone and four in another Availability Zone in the same region behind an Amazon Elastic Load Balancer. In this way, if one Availability Zone goes down, there is still another Availability Zone that can accommodate traffic.
When the first AZ goes down, the second AZ will only have an initial 4 EC2 instances. This will eventually be scaled up to 8 instances since the solution is using Auto Scaling.
The 4 remaining servers, running at 110% of their normal compute capacity, might cause some degradation of the service, but not a total outage, since there are still instances handling the requests. Depending on the scale-out configuration of your Auto Scaling group, the additional 4 EC2 instances can be launched in a matter of minutes.
T3 instances also have a burstable performance capability that lets them go beyond their baseline compute capacity when the workload requires it, so your 4 servers will be able to manage 110% of compute capacity for a short period of time. This is the power of cloud computing versus an on-premises network architecture: it provides elasticity and unparalleled scalability.
Take note that Auto Scaling will launch additional EC2 instances to the remaining Availability Zone/s in the event of an Availability Zone outage in the region. Hence, the correct answer is the option that says: Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availability zone in the same region behind an Amazon Elastic Load Balancer.
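A rough boto3 sketch of such a group, with placeholder names, subnet IDs (one per Availability Zone), and target group ARN:

import boto3

autoscaling = boto3.client("autoscaling")

# Min, max, and desired capacity are all 8, so the group always keeps eight
# instances running, spread across the two Availability Zones behind the load balancer.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="tv-network-web-asg",
    LaunchConfigurationName="tv-network-web-lc",
    MinSize=8,
    MaxSize=8,
    DesiredCapacity=8,
    VPCZoneIdentifier="subnet-0aaa1111bbb22222c,subnet-0ddd3333eee44444f",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef"],
)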
The option that says: Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an Amazon Elastic Load Balancer is incorrect because this architecture is not highly available. If that Availability Zone goes down then your web application will be unreachable.
The options that say: Deploy four EC2 instances with Auto Scaling in one region and four in another region behind an Amazon Elastic Load Balancer and Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer are incorrect because the ELB is designed to only run in one region and not across multiple regions.
A company is deploying a Microsoft SharePoint Server environment on AWS using CloudFormation. The Solutions Architect needs to install and configure the architecture that is composed of Microsoft Active Directory (AD) domain controllers, Microsoft SQL Server 2012, multiple Amazon EC2 instances to host the Microsoft SharePoint Server and many other dependencies. The Architect needs to ensure that the required components are properly running before the stack creation proceeds.
Which of the following should the Architect do to meet this requirement?
A. Configure an UpdatePolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script
B. Configure the DependsOn attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-init helper script
C. Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script
D. Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script
D. Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script
Explanation:
You can associate the CreationPolicy attribute with a resource to prevent its status from reaching create complete until AWS CloudFormation receives a specified number of success signals or the timeout period is exceeded. To signal a resource, you can use the cfn-signal helper script or the SignalResource API. AWS CloudFormation publishes valid signals to the stack events so that you can track the number of signals sent.
The creation policy is invoked only when AWS CloudFormation creates the associated resource. Currently, the only AWS CloudFormation resources that support creation policies are AWS::AutoScaling::AutoScalingGroup, AWS::EC2::Instance, and AWS::CloudFormation::WaitCondition.
Use the CreationPolicy attribute when you want to wait on resource configuration actions before stack creation proceeds. For example, if you install and configure software applications on an EC2 instance, you might want those applications to be running before proceeding. In such cases, you can add a CreationPolicy attribute to the instance and then send a success signal to the instance after the applications are installed and configured.
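For illustration, here is a minimal JSON template fragment (expressed as a Python dictionary and launched with boto3) with a CreationPolicy that waits for one success signal for up to 15 minutes. The AMI ID, instance type, and stack name are placeholders, and the instance's user data would be expected to run cfn-signal once the software is installed and configured:

import json
import boto3

template = {
    "Resources": {
        "SharePointInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-0123456789abcdef0", "InstanceType": "t3.large"},
            # Stack creation pauses here until cfn-signal reports success (or the timeout expires).
            "CreationPolicy": {"ResourceSignal": {"Count": 1, "Timeout": "PT15M"}},
        }
    }
}

boto3.client("cloudformation").create_stack(
    StackName="sharepoint-stack",
    TemplateBody=json.dumps(template),
)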
Hence, the option that says: Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is correct.
The option that says: Configure the DependsOn attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-init helper script is incorrect because the cfn-init helper script is not suitable to be used to signal another resource. You have to use cfn-signal instead. And although you can use the DependsOn attribute to ensure the creation of a specific resource follows another, it is still better to use the CreationPolicy attribute instead as it ensures that the applications are properly running before the stack creation proceeds.
The option that says: Configure an UpdatePolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is incorrect because the UpdatePolicy attribute is primarily used for updating resources and for stack update rollback operations.
The option that says: Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script is incorrect because the UpdateReplacePolicy attribute is primarily used to retain or in some cases, back up the existing physical instance of a resource when it is replaced during a stack update operation.
A company has a global online trading platform in which the users from all over the world regularly upload terabytes of transactional data to a centralized S3 bucket.
What AWS feature should you use in your present system to improve throughput and ensure consistently fast data transfer to the Amazon S3 bucket, regardless of your user’s location?
A. Use CloudFront Origin Access Identity
B. Amazon S3 Transfer Acceleration
C. AWS Direct Connect
D. FTP
B. Amazon S3 Transfer Acceleration
Explanation:
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path.
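A minimal sketch, assuming a hypothetical bucket name: enable acceleration once on the bucket, then have clients send their uploads through the accelerate endpoint:

import boto3
from botocore.client import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket (a one-time configuration change).
s3.put_bucket_accelerate_configuration(
    Bucket="global-trading-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the nearest edge location via the accelerate endpoint.
accelerated_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated_s3.upload_file("trades.csv", "global-trading-uploads", "uploads/trades.csv")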
FTP is incorrect because the File Transfer Protocol does not guarantee fast throughput and consistent, fast data transfer.
AWS Direct Connect is incorrect because you have users all around the world and not just on your on-premises data center. Direct Connect would be too costly and is definitely not suitable for this purpose.
Using CloudFront Origin Access Identity is incorrect because this is a feature that ensures only CloudFront can serve your S3 content. It does not increase upload throughput or ensure consistently fast data transfer to the bucket.
A company intends to give each of its developers a personal AWS account through AWS Organizations. To enforce regulatory policies, preconfigured AWS Config rules will be set in the new accounts. A solutions architect must see to it that developers are unable to remove or modify any rules in AWS Config.
Which solution meets the objective with the least operational overhead?
A. Set up an AWS Control Tower in the root account to detect if there were any changes to the new accounts' AWS Config rules. Attach an IAM trust relationship to the IAM User of each developer which prevents any changes in AWS Config
B. Configure an AWS Config rule in the root account to detect if changes to the new accounts' Config rules are made
C. Use an IAM role in the new accounts with an attached IAM trust relationship to disable the access of the root user to AWS Config
D. Add the developers' AWS accounts to an organizational unit (OU). Attach a service control policy (SCP) to the OU that restricts access to AWS Config
D. Add the developers' AWS accounts to an organizational unit (OU). Attach a service control policy (SCP) to the OU that restricts access to AWS Config
Explanation:
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines.
SCPs alone are not sufficient to grant permissions to the accounts in your organization. No permissions are granted by an SCP. An SCP defines a guardrail, or sets limits, on the actions that the account's administrator can delegate to the IAM users and roles in the affected accounts.
In the scenario, even if a developer has admin privileges, he/she will be unable to modify Config rules if an SCP does not permit it. You can also use SCP to block root user access. This prevents the developers from circumventing the restrictions on AWS Config access.
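A hedged sketch of such a guardrail using boto3 and AWS Organizations; the policy content, policy name, and OU ID are assumptions, and the deny statement below simply blocks Config write actions:

import json
import boto3

org = boto3.client("organizations")

# Deny actions that would modify or remove AWS Config rules in the member accounts.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["config:Delete*", "config:Put*", "config:Stop*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Prevent changes to AWS Config",
    Name="DenyConfigChanges",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach the SCP to the organizational unit that holds the developer accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-ab12-cdefgh34",   # hypothetical OU ID
)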
Therefore, the correct answer is: Add the developers' AWS accounts to an organizational unit (OU). Attach a service control policy (SCP) to the OU that restricts access to AWS Config.
The option that says: Use an IAM Role in the new accounts with an attached IAM trust relationship to disable the access of the root user to AWS Config is incorrect. Keep in mind that the effects of IAM Policies do not apply to account root users. The “trust relationship” policy simply defines which principals can assume the IAM Role and under which conditions. Thus, this type of policy won’t meet the requirement in the scenario.
The option that says: Configure an AWS Config rule in the root account to detect if changes to the new account’s Config rules are made is incorrect. This solution just monitors changes on AWS Config rules; it does not restrict permissions, which is what’s needed in the scenario.
The option that says: Set up an AWS Control Tower in the root account to detect if there were any changes to the new accounts' AWS Config rules. Attach an IAM trust relationship to the IAM User of each developer which prevents any changes in AWS Config is incorrect. The AWS Control Tower service is commonly used to set up and govern a secure multi-account AWS environment. This service is not used to restrict access from invoking an action on a specific resource, such as AWS Config, in your AWS account. The proper way of completing this requirement is to use a Service Control Policy (SCP) and not a mere IAM Role with a trust relationship policy.
A company has an e-commerce application that saves the transaction logs to an S3 bucket. You are instructed by the CTO to configure the application to keep the transaction logs for one month for troubleshooting purposes, and then afterward, purge the logs.
What should you do to accomplish this requirement?
A. Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a month
B. Create a new IAM policy for the Amazon S3 bucket that automatically deletes the logs after a month
C. Enable CORS on the Amazon S3 bucket which will enable the automatic monthly deletion of data
D. Add a new bucket policy on the Amazon S3 bucket
A. Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a month
Explanation:
In this scenario, the best way to accomplish the requirement is to simply configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a month.
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation or archive objects to the GLACIER storage class one year after creation.
Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
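A short boto3 sketch of the expiration rule described in the correct answer; the bucket name and prefix are placeholders, and 30 days is used to approximate one month:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="ecommerce-transaction-logs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "purge-transaction-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},   # S3 deletes the expired objects on your behalf
        }]
    },
)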
Hence, the correct answer is: Configure the lifecycle configuration rules on the Amazon S3 bucket to purge the transaction logs after a month.
The option that says: Add a new bucket policy on the Amazon S3 bucket is incorrect as it does not provide a solution to any of your needs in this scenario. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it.
The option that says: Create a new IAM policy for the Amazon S3 bucket that automatically deletes the logs after a month is incorrect because IAM policies are primarily used to specify what actions are allowed or denied on your S3 buckets. You cannot configure an IAM policy to automatically purge logs for you in any way.
The option that says: Enable CORS on the Amazon S3 bucket which will enable the automatic monthly deletion of data is incorrect. CORS allows client web applications that are loaded in one domain to interact with resources in a different domain.
A solutions architect is writing an AWS Lambda function that will process encrypted documents from an Amazon FSx for NetApp ONTAP file system. The documents are protected by an AWS KMS customer key. After processing the documents, the Lambda function will store the results in an S3 bucket with an Amazon S3 Glacier Flexible Retrieval storage class. The solutions architect must ensure that the files can be decrypted by the Lambda function.
Which action accomplishes the requirement?
A. Attach the kms:decrypt permission to the Lambda function's resource policy. Add a statement to the AWS KMS key's policy that grants the function's resource policy ARN the kms:decrypt permission
B. Attach the kms:decrypt permission to the Lambda function's resource policy. Add a statement to the AWS KMS key's policy that grants the function's execution role the kms:decrypt permission
C. Attach the kms:decrypt permission to the Lambda function's execution role. Add a statement to the AWS KMS key's policy that grants the function's execution role the kms:decrypt permission
D. Attach the kms:decrypt permission to the Lambda function's execution role. Add a statement to the AWS KMS key's policy that grants the function's ARN the kms:decrypt permission
C. Attach the kms:decrypt permission to the Lambda function's execution role. Add a statement to the AWS KMS key's policy that grants the function's execution role the kms:decrypt permission
Explanation:
A key policy is a resource policy for an AWS KMS key. Key policies are the primary way to control access to KMS keys. Every KMS key must have exactly one key policy. The statements in the key policy determine who has permission to use the KMS key and how they can use it. You can also use IAM policies and grants to control access to the KMS key, but every KMS key must have a key policy.
Unless the key policy explicitly allows it, you cannot use IAM policies to allow access to a KMS key. Without permission from the key policy, IAM policies that allow permissions have no effect. (You can use an IAM policy to deny permission to a KMS key without permission from a key policy.) The default key policy enables IAM policies. To enable IAM policies in your key policy, add the policy statement described here.
All Amazon FSx for NetApp ONTAP file systems are encrypted at rest with keys managed using AWS Key Management Service (AWS KMS). Data is automatically encrypted before being written to the file system and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx, so you don't have to modify your applications. Amazon FSx uses an industry-standard AES-256 encryption algorithm to encrypt Amazon FSx data and metadata at rest.
Hence, the correct answer is: Attach the kms:decrypt permission to the Lambda function’s execution role. Add a statement to the AWS KMS key’s policy that grants the function’s execution role the kms:decrypt permission.
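A sketch of the two pieces, assuming hypothetical role, account, and key ARNs: an identity policy on the execution role granting kms:Decrypt, plus a key policy statement naming that role as the principal:

import json
import boto3

iam = boto3.client("iam")

role_name = "lambda-docs-processor-role"   # hypothetical execution role
key_arn = "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# 1) Allow the execution role to call kms:Decrypt on the customer key.
iam.put_role_policy(
    RoleName=role_name,
    PolicyName="AllowKmsDecrypt",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "kms:Decrypt", "Resource": key_arn}],
    }),
)

# 2) Statement to add to the KMS key policy, with the execution role as the principal.
key_policy_statement = {
    "Sid": "AllowLambdaExecutionRoleDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::123456789012:role/{role_name}"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}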
The option that says: Attach the kms:decrypt permission to the Lambda function’s resource policy. Add a statement to the AWS KMS key’s policy that grants the function’s resource policy ARN the kms:decrypt permission is incorrect. The resource policy specifies who can invoke the Lambda function, not which AWS operations it can use.
The option that says: Attach the kms:decrypt permission to the Lambda function’s execution role. Add a statement to the AWS KMS key’s policy that grants the function’s ARN the kms:decrypt permission is incorrect. You must use the ARN of the function’s execution role as the principal instead of the actual ARN of the function. The reason for this is that AWS Lambda interacts with other AWS services using the permissions associated with an execution role.
The option that says: Attach the kms:decrypt permission to the Lambda function’s resource policy. Add a statement to the AWS KMS key’s policy that grants the function’s execution role the kms:decrypt permission is incorrect. Like the other incorrect option, the decrypt permission must be added to the function’s execution role and not on its resource policy.
A Solutions Architect is working for a large insurance firm. To maintain compliance with HIPAA laws, all data that is backed up or stored on Amazon S3 needs to be encrypted at rest.
Which encryption methods can be employed, assuming S3 is being used for storing financial-related data? (Select TWO.)
A. Enable SSE on the S3 bucket to make use of AES-256 encryption
B. Store the data in encrypted EBS Snapshots
C. Encrypt the data using your own encryption keys then copy the data to Amazon S3 over HTTPS endpoints
D. Store the data on EBS volumes with encryption enabled instead of using Amazon S3
E. Use AWS Shield to protect your data at rest
A. Enable SSE on the S3 bucket to make use of AES-256 encryption
C. Encrypt the data using your own encryption keys then copy the data to Amazon S3 over HTTPS endpoints
Explanation:
Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon S3.
Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.
Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
Hence, the following options are the correct answers:
- Enable SSE on an S3 bucket to make use of AES-256 encryption
- Encrypt the data using your own encryption keys then copy the data to Amazon S3 over HTTPS endpoints. This refers to using a Server-Side Encryption with Customer-Provided Keys (SSE-C).
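Two brief upload sketches (the bucket and key names are hypothetical): the first uses SSE-S3, where Amazon S3 manages the AES-256 keys, and the second uses SSE-C, where you supply your own key over HTTPS:

import os
import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon S3 encrypts the object at rest with AES-256 keys it manages.
s3.put_object(
    Bucket="hipaa-backup-bucket",
    Key="records/claims-2023.csv",
    Body=b"...",
    ServerSideEncryption="AES256",
)

# SSE-C: you provide a 256-bit key; S3 uses it to encrypt the object but never stores the key.
customer_key = os.urandom(32)
s3.put_object(
    Bucket="hipaa-backup-bucket",
    Key="records/claims-2023-ssec.csv",
    Body=b"...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)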
Storing the data in encrypted EBS snapshots and storing the data on EBS volumes with encryption enabled instead of using Amazon S3 are both incorrect because all these options are for protecting your data in your EBS volumes. Note that an S3 bucket does not use EBS volumes to store your data.
Using AWS Shield to protect your data at rest is incorrect because AWS Shield is mainly used to protect your applications against DDoS attacks; it does not encrypt data at rest.
A manufacturing company has EC2 instances running in AWS. The EC2 instances are configured with Auto Scaling. A lot of requests are being lost because of too much load on the servers. Auto Scaling launches new EC2 instances to take on the additional load, yet some requests are still being lost.
Which of the following is the MOST suitable solution that you should implement to avoid losing recently submitted requests?
A. Replace the Auto Scaling group with a cluster placement group to achieve a low latency network performance necessary for tightly coupled node to node communication
B. Set up Amazon Aurora Serverless for on demand, auto scaling configuration of your EC2 Instances and also enable Amazon Aurora Parallel Query feature for faster analytical queries over your current data
C. Use an Amazon SQS queue to decouple the application components and scale out the EC2 instances based upon the ApproximateNumberOfMessages metric in Amazon CloudWatch
D. Use larger instances for your application with an attached Elastic Fabric Adapter (EFA)
C. Use an Amazon SQS queue to decouple the application components and scale out the EC2 instances based upon the ApproximateNumberOfMessages metric in Amazon CloudWatch
Explanation:
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Building applications from individual components that each perform a discrete function improves scalability and reliability and is best practice design for modern applications. SQS makes it simple and cost-effective to decouple and coordinate the components of a cloud application. Using SQS, you can send, store, and receive messages between software components at any volume without losing messages or requiring other services to be always available.
The number of messages in your Amazon SQS queue does not solely define the number of instances needed. In fact, the number of instances in the fleet can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency (queue delay).
The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows:
Backlog per instance: To determine your backlog per instance, start with the Amazon SQS metric ApproximateNumberOfMessages to determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet’s running capacity, which for an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance.
Acceptable backlog per instance: To determine your target value, first calculate what your application can accept in terms of latency. Then, take the acceptable latency value and divide it by the average time that an EC2 instance takes to process a message.
To illustrate with an example, let’s say that the current ApproximateNumberOfMessages is 1500 and the fleet’s running capacity is 10. If the average processing time is 0.1 seconds for each message and the longest acceptable latency is 10 seconds then the acceptable backlog per instance is 10 / 0.1, which equals 100. This means that 100 is the target value for your target tracking policy. Because the backlog per instance is currently at 150 (1500 / 10), your fleet scales out by five instances to maintain proportion to the target value.
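The same arithmetic, written out as a short Python sketch using the example's numbers:

import math

approximate_number_of_messages = 1500   # SQS ApproximateNumberOfMessages metric
running_capacity = 10                   # InService instances in the Auto Scaling group
avg_processing_time = 0.1               # seconds per message
acceptable_latency = 10.0               # seconds

backlog_per_instance = approximate_number_of_messages / running_capacity        # 150
acceptable_backlog_per_instance = acceptable_latency / avg_processing_time      # 100 (target value)

required_capacity = math.ceil(approximate_number_of_messages / acceptable_backlog_per_instance)  # 15
scale_out_by = required_capacity - running_capacity                             # 5 more instances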
Hence, the correct answer is: Use an Amazon SQS queue to decouple the application components and scale-out the EC2 instances based upon the ApproximateNumberOfMessages metric in Amazon CloudWatch.
Replacing the Auto Scaling group with a cluster placement group to achieve a low-latency network performance necessary for tightly-coupled node-to-node communication is incorrect. Although it is true that a cluster placement group allows you to achieve a low-latency network performance, you still need to use Auto Scaling for your architecture to add more EC2 instances.
Using larger instances for your application with an attached Elastic Fabric Adapter (EFA) is incorrect because using a larger EC2 instance would not prevent data from being lost in case of a larger spike. You can take advantage of the durability and elasticity of SQS to keep the messages available for consumption by your instances. Elastic Fabric Adapter (EFA) is simply a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS.
Setting up Amazon Aurora Serverless for on-demand, auto-scaling configuration of your EC2 Instances and also enabling Amazon Aurora Parallel Query feature for faster analytical queries over your current data is incorrect. Although the Amazon Aurora Parallel Query feature provides faster analytical queries over your current data, Amazon Aurora Serverless is an on-demand, auto-scaling configuration for your database, and NOT for your EC2 instances. This is actually an auto-scaling configuration for your Amazon Aurora database and not for your compute services.
A company currently has an Augmented Reality (AR) mobile game that has a serverless backend. It is using a DynamoDB table, which was created using the AWS CLI, to store all the user data and information gathered from the players, and a Lambda function to pull the data from DynamoDB. The game is used by millions of users each day to read and store data.
How would you design the application to improve its overall performance and make it more scalable while keeping the costs low? (Select TWO.)
A. Use API Gateway in conjunction with Lambda and turn on caching for frequently accessed data and enable DynamoDB global replication
B. Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on the client device using ElastiCache
C. Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single sign-on. Manually set the provisioned read and write capacity to a higher RCU and WCU
D. Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds
E. Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity
A. Use API Gateway in conjunction with Lambda and turn on caching for frequently accessed data and enable DynamoDB global replication
E. Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity
Explanation:
The correct answers are the options that say:
- Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity.
- Use API Gateway in conjunction with Lambda and turn on caching for frequently accessed data and enable DynamoDB global replication.
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.
Amazon API Gateway lets you create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as code running on AWS Lambda. Amazon API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization, and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs.
AWS Lambda scales your functions automatically on your behalf. Every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code. Since your code is stateless, AWS Lambda can start as many copies of your function as needed without lengthy deployment and configuration delays.
The option that says: Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on the client device using ElastiCache is incorrect. Although CloudFront delivers content faster to your users using edge locations, you cannot use a DynamoDB table as a CloudFront origin, as these two services are incompatible. In addition, Amazon ElastiCache is a server-side, in-memory caching service; it does not cache data on the client device.
The option that says: Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single sign-on. Manually set the provisioned read and write capacity to a higher RCU and WCU is incorrect because AWS Single Sign-On (SSO) is a cloud SSO service that just makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. This will not be of much help with the scalability and performance of the application. Manually setting the provisioned read and write capacity to a higher RCU and WCU is also costly because this capacity runs around the clock and stays the same even when incoming traffic is low and there is no need to scale.
The option that says: Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds is incorrect because by default, Auto Scaling is not enabled in a DynamoDB table which is created using the AWS CLI.
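Because a table created through the AWS CLI does not have auto scaling attached by default, it has to be registered with Application Auto Scaling explicitly. A minimal boto3 sketch, assuming a hypothetical table name and illustrative capacity limits (the write capacity dimension would be configured the same way):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target (values are illustrative).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameUserData",          # hypothetical table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=1000,
)

# Attach a target tracking policy that keeps consumed read capacity near 70% utilization.
autoscaling.put_scaling_policy(
    PolicyName="GameUserDataReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameUserData",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```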
A call center wants to use Artificial Intelligence (AI) to extract insights from audio recordings to assess the quality of its customer service. The calls are available in both English and Hindi. A sentiment analysis report in English must be generated for each recording to assess whether or not the customer had a positive experience. Once the solution is completed, new languages will eventually be supported, such as Arabic, Mandarin, and Spanish.
How can the solutions architect build the solution without maintaining any machine learning model?
A. Set up Amazon Comprehend to convert audio recordings into text. Use Amazon Kendra to translate Hindi texts to English and utilize the Amazon Detective service to automatically detect negative user behavior for sentiment analysis
B. Convert audio recordings into text using Amazon Transcribe. Set up Amazon Translate to translate Hindi text into English and use Amazon Comprehend for sentiment analysis
C. Transcribe audio recordings into text using Amazon Polly. Set up Amazon Rekognition to recognize and automatically translate Hindi texts into English. Use the combination of Amazon Fraud Detector and the Amazon SageMaker BlazingText algorithm for sentiment analysis
D. Utilize the Amazon Lex service to convert audio recordings into text. Call the Amazon Translate API to translate Hindi texts into English and use Amazon Forecast for sentiment prediction and analysis
B. Convert audio recordings into text using Amazon Transcribe. Set up Amazon Translate to translate Hindi text into English and use Amazon Comprehend for sentiment analysis
Explanation:
Amazon Transcribe is an AWS service that makes it easy for customers to convert speech to text. Using Automatic Speech Recognition (ASR) technology, customers can use Amazon Transcribe for a variety of business applications, including transcription of voice-based customer service calls, generation of subtitles on audio/video content, and (text-based) content analysis on audio/video content.
Amazon Translate is a Neural Machine Translation (MT) service for translating text between supported languages.
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text.
You can use Amazon Comprehend to determine the sentiment of a document. For example, you can use sentiment analysis to determine the sentiments of comments on a blog posting or a transcribed call to determine if your users loved or hated your content. You can determine sentiment for documents in any of the primary languages supported by Amazon Comprehend. All documents in one job must be in the same language.
In this scenario, you can use these three services to build the ML-pipeline needed to satisfy the requirements. First, you’d have to create a transcription job using Amazon Transcribe to transform the recordings into text. Then, translate non-English calls to English using Amazon Translate. Finally, use Amazon Comprehend for sentiment analysis.
There’s no need to deploy or train your own model as all of these services are fully managed and are readily available through APIs.
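A minimal boto3 sketch of this pipeline is shown below; the transcription job name, S3 URI, and transcript placeholder are hypothetical, and the asynchronous transcript retrieval step is abbreviated:

```python
import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1. Transcribe a Hindi call recording stored in S3 (job name and S3 URI are placeholders).
transcribe.start_transcription_job(
    TranscriptionJobName="call-0001",
    Media={"MediaFileUri": "s3://call-recordings/call-0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="hi-IN",
)
# ...poll get_transcription_job(TranscriptionJobName="call-0001") until it completes,
# then download the transcript text from the TranscriptFileUri it returns...
hindi_text = "<transcript text retrieved from the completed job>"

# 2. Translate the Hindi transcript into English.
english_text = translate.translate_text(
    Text=hindi_text, SourceLanguageCode="hi", TargetLanguageCode="en"
)["TranslatedText"]

# 3. Determine the sentiment of the English text.
sentiment = comprehend.detect_sentiment(Text=english_text, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])
```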
Hence, the correct answer is: Convert audio recordings into text using Amazon Transcribe. Set up Amazon Translate to translate Hindi texts into English and use Amazon Comprehend for sentiment analysis.
The option that says: Transcribe audio recordings into text using Amazon Polly. Set up Amazon Rekognition to recognize and automatically translate Hindi texts into English. Use the combination of Amazon Fraud Detector and Amazon SageMaker BlazingText algorithm for sentiment analysis is incorrect. Although the use of the Amazon SageMaker BlazingText algorithm is technically valid, it fails to meet the condition of not maintaining any ML model, since using Amazon SageMaker would require you to train and deploy the model yourself. The use of Amazon Fraud Detector is also unnecessary; it is commonly used to identify potentially fraudulent activities, not to run sentiment analysis. Do take note that Amazon Polly converts text to speech and is not capable of transcribing audio recordings, and Amazon Rekognition is primarily an image recognition service, not a service for translating foreign words into English.
The option that says: Utilize the Amazon Lex service to convert audio recordings into text. Call the Amazon Translate API to translate Hindi texts into English and use Amazon Forecast for sentiment prediction and analysis is incorrect. Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models that can help you design, build, test, and deploy conversational interfaces or chatbots. It is not capable of transcribing audio recordings into text. Also, you cannot use the Amazon Forecast service for sentiment prediction and analysis; Amazon Forecast is meant for forecasting business outcomes using historical and related data.
The option that says: Set up Amazon Comprehend to convert audio recordings into text. Use Amazon Kendra to translate Hindi texts into English and utilize the Amazon Detective service to automatically detect negative user behaviors for sentiment analysis is incorrect. Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text. This service is not capable of transcribing or converting audio recordings into text. Amazon Kendra is a highly accurate and easy-to-use enterprise search service for all unstructured data that you store in AWS, while Amazon Detective is a security service that analyzes and visualizes security data to rapidly get to the root cause of your potential security issues. Amazon Kendra is not capable of translating any foreign text into English, and Amazon Detective doesn’t have the functionality to automatically detect negative user behaviors for sentiment analysis.
A company deployed a high-performance computing (HPC) cluster that spans multiple EC2 instances across multiple Availability Zones and processes various wind simulation models. Currently, the Solutions Architect is experiencing a slowdown in their applications and upon further investigation, it was discovered that it was due to latency issues.
Which is the MOST suitable solution that the Solutions Architect should implement to provide low-latency network performance necessary for tightly-coupled node-to-node communication of the HPC cluster?
A. Use EC2 Dedicated Instances
B. Set up a spread placement group across multiple Availability Zones in multiple AWS Regions
C. Set up a cluster placement group within a single Availability Zone in the same AWS Region
D. Set up AWS Direct Connect connections across multiple Availability Zones for increased bandwidth throughput and more consistent network experience
C. Set up a cluster placement group within a single Availability Zone in the same AWS Region
Explanation:
When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. Depending on the type of workload, you can create a placement group using one of the following placement strategies:
Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
Partition – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. They are also recommended when the majority of the network traffic is between the instances in the group. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
Partition placement groups can be used to deploy large distributed and replicated workloads, such as HDFS, HBase, and Cassandra, across distinct racks. When you launch instances into a partition placement group, Amazon EC2 tries to distribute the instances evenly across the number of partitions that you specify. You can also launch instances into a specific partition to have more control over where the instances are placed.
Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other. Launching instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same racks. Spread placement groups provide access to distinct racks and are therefore suitable for mixing instance types or launching instances over time. A spread placement group can span multiple Availability Zones in the same Region. You can have a maximum of seven running instances per Availability Zone per group.
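As a rough illustration of the correct option, the sketch below creates a cluster placement group and launches instances into it; the group name, AMI ID, and instance type are hypothetical placeholders, and the instance type should support enhanced networking for the lowest latency.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a cluster placement group (the group name is a placeholder).
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

# Launch the HPC nodes into the placement group; they will be packed close together
# inside a single Availability Zone.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c5n.18xlarge",       # supports enhanced networking
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster-pg"},
)
```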
Hence, the correct answer is: Set up a cluster placement group within a single Availability Zone in the same AWS Region.
The option that says: Set up a spread placement group across multiple Availability Zones in multiple AWS Regions is incorrect. Although using a placement group is valid for this particular scenario, a placement group can only be set up within a single AWS Region. A spread placement group can span multiple Availability Zones in the same Region, but not multiple Regions.
The option that says: Set up AWS Direct Connect connections across multiple Availability Zones for increased bandwidth throughput and more consistent network experience is incorrect because this is primarily used for hybrid architectures. It bypasses the public Internet and establishes a secure, dedicated connection from your on-premises data center into AWS; it is not used to achieve low latency within your AWS network.
The option that says: Use EC2 Dedicated Instances is incorrect because these are EC2 instances that run in a VPC on hardware that is dedicated to a single customer and are physically isolated at the host hardware level from instances that belong to other AWS accounts. It is not used for reducing latency.
A production MySQL database hosted on Amazon RDS is running out of disk storage. The management has consulted its solutions architect to increase the disk space without impacting the database performance.
How can the solutions architect satisfy the requirement with the LEAST operational overhead?
A. Change the default_storage_engine of the DB instance's parameter group to MyISAM
B. Modify the DB instance settings and enable storage autoscaling
C. Modify the DB instance storage type to Provisioned IOPS
D. Increase the allocated storage for the DB instance
B. Modify the DB instance settings and enable storage autoscaling
Explanation:
RDS Storage Auto Scaling automatically scales storage capacity in response to growing database workloads, with zero downtime.
Under-provisioning could result in application downtime, and over-provisioning could result in underutilized resources and higher costs. With RDS Storage Auto Scaling, you simply set your desired maximum storage limit, and Auto Scaling takes care of the rest.
RDS Storage Auto Scaling continuously monitors actual storage consumption, and scales capacity up automatically when actual utilization approaches provisioned storage capacity. Auto Scaling works with new and existing database instances. You can enable Auto Scaling with just a few clicks in the AWS Management Console. There is no additional cost for RDS Storage Auto Scaling. You pay only for the RDS resources needed to run your applications.
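For reference, storage autoscaling on an existing instance is enabled by setting a maximum allocated storage ceiling. A minimal boto3 sketch, assuming a hypothetical DB instance identifier and an illustrative 1,000 GiB ceiling:

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage turns on storage autoscaling; RDS then grows the
# allocated storage automatically up to this ceiling with zero downtime.
rds.modify_db_instance(
    DBInstanceIdentifier="production-mysql",  # hypothetical identifier
    MaxAllocatedStorage=1000,                 # ceiling in GiB (illustrative)
    ApplyImmediately=True,
)
```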
Hence, the correct answer is: Modify the DB instance settings and enable storage autoscaling.
The option that says: Increase the allocated storage for the DB instance is incorrect. Although this will solve the problem of low disk space, increasing the allocated storage might cause performance degradation during the change.
The option that says: Change the default_storage_engine of the DB instance’s parameter group to MyISAM is incorrect. This is just a storage engine for MySQL. It won’t increase the disk space in any way.
The option that says: Modify the DB instance storage type to Provisioned IOPS is incorrect. This may improve disk performance but it won’t solve the problem of low database storage.
A firm has a containerized application that runs on a self-managed Kubernetes cluster. The cluster writes data to an on-premises MongoDB database. A solutions architect is requested to move the service to AWS in order to minimize operational overhead. The firm prohibits any changes to the code.
Which action meets these objectives?
A. Migrate the cluster to an Amazon Elastic Kubernetes Service (EKS) cluster and the database to an Amazon DocumentDB (with MongoDB compatibility) database
B. Migrate the cluster to an Amazon Elastic Container Service (ECS) cluster using Amazon ECS Anywhere and the database to an Amazon Aurora Serverless database
C. Migrate the cluster to an Amazon Elastic Container Service (ECS) cluster, with the images stored in the Amazon Elastic Container Registry (Amazon ECR). Move the database to an Amazon Neptune database
D. Migrate the cluster to an Amazon Elastic Kubernetes Service (EKS) cluster using Amazon EKS Anywhere and the database to an Amazon DynamoDB table
A. Migrate the cluster to an Amazon Elastic Kubernetes Service (EKS) cluster and the database to an Amazon DocumentDB (with MongoDB compatibility) database
Explanation:
Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. The Amazon DocumentDB Migration Guide outlines three primary approaches for migrating from MongoDB to Amazon DocumentDB: offline, online, and hybrid.
The offline migration approach is the fastest and simplest of the three but incurs the longest period of downtime. This approach is a good choice for proofs of concept, development and test workloads, and production workloads for which downtime is not of primary concern. For the online approach, you may use AWS DMS to minimize downtime. AWS DMS continually reads from the source MongoDB oplog and applies those changes in near-real time to the target Amazon DocumentDB cluster.
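Because Amazon DocumentDB is MongoDB-compatible, the existing driver code typically only needs a new connection string, which is why this option satisfies the no-code-change requirement. A minimal pymongo sketch, assuming a hypothetical cluster endpoint, credentials, CA bundle path, database, and collection:

```python
from pymongo import MongoClient

# The cluster endpoint, credentials, and CA bundle path below are placeholders.
client = MongoClient(
    "mongodb://appuser:secret@my-docdb.cluster-abc123xyz.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0"
    "&readPreference=secondaryPreferred&retryWrites=false"
)

db = client["game"]
# The same driver calls used against the self-managed MongoDB continue to work unchanged.
db["players"].insert_one({"playerId": "p-123", "score": 42})
print(db["players"].find_one({"playerId": "p-123"}))
```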
Hence, the correct answer is: Migrate the cluster to an Amazon Elastic Kubernetes Service (EKS) cluster and the database to an Amazon DocumentDB (with MongoDB compatibility) database.
The option that says: Migrate the cluster to an Amazon Elastic Container Service (ECS) cluster using Amazon ECS Anywhere and the database to an Amazon Aurora Serverless database is incorrect. You can’t directly migrate to Amazon Aurora because MongoDB is a non-relational database. Amazon Elastic Container Service (ECS) Anywhere is simply a feature of Amazon ECS that enables you to easily run and manage container workloads on customer-managed infrastructure.
The option that says: Migrate the cluster to an Amazon Elastic Kubernetes Service (EKS) cluster using Amazon EKS Anywhere and the database to an Amazon DynamoDB table is incorrect. Although DynamoDB supports JSON-like documents, migrating from MongoDB to a DynamoDB table would involve code changes since the operations for accessing DynamoDB tables are different. DynamoDB has a different set of APIs for creating, reading, updating, and deleting items than MongoDB. The use of Amazon EKS Anywhere is not warranted as well. This is only a new deployment option for Amazon EKS that allows customers to create and operate Kubernetes clusters on customer-managed infrastructure.
The option that says: Migrate the cluster to an Amazon Elastic Container Service (ECS) cluster with the images stored in the Amazon Elastic Container Registry (Amazon ECR). Move the database to an Amazon Neptune database is incorrect. Amazon Neptune is not suitable for the use case described in the scenario. Amazon Neptune is a fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets.
A company has clients all across the globe that access product files stored in several S3 buckets, each of which sits behind its own CloudFront web distribution. They currently want to deliver their content to a specific client, and they need to make sure that only that client can access the data. Currently, all of their clients can access their S3 buckets directly using an S3 URL or through their CloudFront distribution. The Solutions Architect must serve the private content via CloudFront only, to secure the distribution of files.
Which combination of actions should the Architect implement to meet the above requirements? (Select TWO.)
A. Require the users to access the private content by using special CloudFront signed URLs or signed cookies
B. Use S3 pre-signed URLs to ensure that only their client can access the files. Remove permissions to use Amazon S3 URLs to read the files for anyone else
C. Restrict access to files in the origin by creating an origin access identity (OAI) and give it permission to read the files in the bucket
D. Create a custom CloudFront function to check and ensure that only their clients can access the files
E. Enable the Origin Shield feature of the Amazon CloudFront distribution to protect the files from unauthorized access
A. Require the users to access the private content by using special CloudFront signed URLs or signed cookies
C. Restrict access to files in the origin by creating an origin access identity (OAI) and give it permission to read the files in the bucket
Explanation:
Many companies that distribute content over the Internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee. To securely serve this private content by using CloudFront, you can do the following:
- Require that your users access your private content by using special CloudFront signed URLs or signed cookies.
- Require that your users access your Amazon S3 content by using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn’t necessary, but it is recommended to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies. You can do this by setting up an origin access identity (OAI) for your Amazon S3 bucket. You can also configure the custom headers for a private HTTP server or an Amazon S3 bucket configured as a website endpoint.
All objects and buckets by default are private. The pre-signed URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don’t require them to have AWS security credentials or permissions.
You can generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET. If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a pre-signed object URL without writing any code. Anyone who receives a valid pre-signed URL can then programmatically upload an object.
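For the signed URL part of the correct answer, a minimal sketch using botocore's CloudFrontSigner is shown below; the key pair ID, private key file, distribution domain, and object path are hypothetical placeholders.

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Sign with the private key that matches the CloudFront public key / key pair.
    with open("cloudfront_private_key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # key pair ID is a placeholder

# Generate a signed URL that expires in one hour.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/report.pdf",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```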
Hence, the correct answers are:
- Restrict access to files in the origin by creating an origin access identity (OAI) and give it permission to read the files in the bucket.
- Require the users to access the private content by using special CloudFront signed URLs or signed cookies.
The option that says: Create a custom CloudFront function to check and ensure that only their clients can access the files is incorrect. CloudFront Functions are just lightweight functions in JavaScript for high-scale, latency-sensitive CDN customizations and not for enforcing security. A CloudFront Function runtime environment offers submillisecond startup times which allows your application to scale immediately to handle millions of requests per second. But again, this can’t be used to restrict access to your files.
The option that says: Enable the Origin Shield feature of the Amazon CloudFront distribution to protect the files from unauthorized access is incorrect because this feature is not primarily used for security but for improving your origin’s load times, improving origin availability, and reducing your overall operating costs in CloudFront.
The option that says: Use S3 pre-signed URLs to ensure that only their client can access the files. Remove permission to use Amazon S3 URLs to read the files for anyone else is incorrect. Although this could be a valid solution, it doesn’t satisfy the requirement to serve the private content via CloudFront only to secure the distribution of files. A better solution is to set up an origin access identity (OAI) then use Signed URL or Signed Cookies in your CloudFront web distribution.
A company is using multiple AWS accounts that are consolidated using AWS Organizations. They want to copy several S3 objects to another S3 bucket that belongs to a different AWS account which they also own. The Solutions Architect was instructed to set up the necessary permissions for this task and to ensure that the destination account owns the copied objects and not the account they were sent from.
How can the Architect accomplish this requirement?
A. Configure cross account permissions in S3 by creating an IAM customer managed policy that allows an IAM user or role to copy objects from the source bucket in one account to the destination bucket in the other account. Then attach the policy to the IAM user or role that you want to use to copy objects between accounts.
B. Set up cross-origin resource sharing (CORS) in S3 by creating a bucket policy that allows an IAM user or role to copy objects from the source bucket in one account to the destination bucket in the other account
C. Enable Requester Pays feature in the source S3 bucket. The fees would be waived through Consolidated Billing since both AWS accounts are part of AWS Organizations
D. Connect the two S3 buckets from two different AWS accounts to Amazon WorkDocs. Set up cross-account access to integrate the two S3 buckets. Use the Amazon WorkDocs console to copy the objects from one account to the other with modified object ownership assigned to the destination account
A. Configure cross account permissions in S3 by creating an IAM customer managed policy that allows an IAM user or role to copy objects from the source bucket in one account to the destination bucket in the other account. Then attach the policy to the IAM user or role that you want to use to copy objects between accounts.
Explanation:
By default, an S3 object is owned by the account that uploaded the object. That’s why granting the destination account the permissions to perform the cross-account copy makes sure that the destination owns the copied objects. You can also change the ownership of an object by changing its access control list (ACL) to bucket-owner-full-control.
However, object ACLs can be difficult to manage for multiple objects, so it’s a best practice to grant programmatic cross-account permissions to the destination account. Object ownership also matters for bucket policies: for a bucket policy to apply to an object in the bucket, the object must be owned by the account that owns the bucket, which makes the bucket policy the preferred centralized method for setting permissions.
To be sure that a destination account owns an S3 object copied from another account, grant the destination account the permissions to perform the cross-account copy. Follow these steps to configure cross-account permissions to copy objects from a source bucket in Account A to a destination bucket in Account B:
- Attach a bucket policy to the source bucket in Account A.
- Attach an AWS Identity and Access Management (IAM) policy to a user or role in Account B.
- Use the IAM user or role in Account B to perform the cross-account copy.
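Once the permissions above are in place, the copy itself can be performed with the destination account's credentials. A minimal boto3 sketch, assuming hypothetical bucket and object names:

```python
import boto3

# Run with credentials of the IAM user or role in the destination account (Account B),
# which has the IAM policy described above attached.
s3 = boto3.client("s3")

s3.copy_object(
    CopySource={"Bucket": "source-bucket-account-a", "Key": "reports/q1.csv"},
    Bucket="destination-bucket-account-b",
    Key="reports/q1.csv",
)
# Because the destination account performed the copy, it owns the resulting object.
```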
Hence, the correct answer is: Configure cross-account permissions in S3 by creating an IAM customer-managed policy that allows an IAM user or role to copy objects from the source bucket in one account to the destination bucket in the other account. Then attach the policy to the IAM user or role that you want to use to copy objects between accounts.
The option that says: Enable the Requester Pays feature in the source S3 bucket. The fees would be waived through Consolidated Billing since both AWS accounts are part of AWS Organizations is incorrect because the Requester Pays feature is primarily used if you want the requester, instead of the bucket owner, to pay the cost of the data transfer request and download from the S3 bucket. This solution lacks the necessary IAM Permissions to satisfy the requirement. The most suitable solution here is to configure cross-account permissions in S3.
The option that says: Set up cross-origin resource sharing (CORS) in S3 by creating a bucket policy that allows an IAM user or role to copy objects from the source bucket in one account to the destination bucket in the other account is incorrect because CORS simply defines a way for client web applications that are loaded in one domain to interact with resources in a different domain, and not on a different AWS account.
The option that says: Connect the two S3 buckets from two different AWS accounts to Amazon WorkDocs. Set up cross-account access to integrate the two S3 buckets. Use the Amazon WorkDocs console to copy the objects from one account to the other with modified object ownership assigned to the destination account is incorrect because Amazon WorkDocs is commonly used to easily collaborate, share content, provide rich feedback, and collaboratively edit documents with other users. There is no direct way for you to integrate WorkDocs and an Amazon S3 bucket owned by a different AWS account. A better solution here is to use cross-account permissions in S3 to meet the requirement.