Set 5 Kindle SAA-003 Practice Test Flashcards
A Solutions Architect has deployed an API using Amazon API Gateway and created usage plans and API keys for several customers. Requests from one particular customer have been excessive and the solutions architect needs to limit the rate of requests. Other customers should not be affected. How should the solutions architect proceed?
A. Configure a server-side throttling limit
B. Configure the per-method throttling limits
C. Configure per-client throttling limits
D. Configure the account-level throttling limits
C. Configure per-client throttling limits
Explanation:
Per-client throttling limits are applied to clients that use API keys associated with your usage plan as the client identifier. This can be applied to the single customer that is issuing excessive API requests, so it is the best option to ensure that only one customer is affected. CORRECT: “Configure per-client throttling limits” is the correct answer. INCORRECT: “Configure a server-side throttling limit” is incorrect. Server-side throttling limits are applied across all clients. These limit settings exist to prevent your API—and your account—from being overwhelmed by too many requests. In this case, the solutions architect needs to apply the throttling to a single client. INCORRECT: “Configure the per-method throttling limits” is incorrect. Per-method throttling limits apply to all customers using the same method. This will affect all customers who are using the API. INCORRECT: “Configure the account-level throttling limits” is incorrect. Account-level throttling limits define the maximum steady-state request rate and burst limits for the account. This does not apply to individual customers.
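As a rough boto3 sketch (the API ID, stage name, and key ID below are placeholders), a usage plan with its own throttle can be created for the customer and tied to their API key:

import boto3

apigw = boto3.client("apigateway")

# Usage plan with per-client throttling (placeholder API ID and stage)
plan = apigw.create_usage_plan(
    name="customer-a-plan",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 100.0, "burstLimit": 200},  # requests per second and burst
)

# Associate the customer's existing API key with the usage plan
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId="customer-a-key-id",
    keyType="API_KEY",
)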
A Solutions Architect is deploying a high performance computing (HPC) application on Amazon EC2 instances. The application requires extremely low inter-instance latency. How should the instances be deployed for BEST performance?
A. Use an instance with enhanced networking and deploy the instances in a partition placement group
B. Use an Elastic Fabric Adapter (EFA) and deploy instances in a cluster placement group
C. Add multiple Elastic Network Adapters (ENAs) to each instance and create a NIC team
D. Use an EBS-optimized instance with 10 Gigabit networking and deploy to a single subnet
B. Use an Elastic Fabric Adapter (EFA) and deploy instances in a cluster placement group
Explanation:
It is recommended to use either enhanced networking or an Elastic Fabric Adapter (EFA) for the nodes of an HPC application. This will assist with decreasing latency. Additionally, a cluster placement group packs instances close together inside an Availability Zone. Using a cluster placement group enables workloads to achieve the low-latency network performance necessary for the tightly coupled node-to-node communication that is typical of HPC applications. CORRECT: “Use an Elastic Fabric Adapter (EFA) and deploy instances in a cluster placement group” is the correct answer. INCORRECT: “Use an instance with enhanced networking and deploy the instances in a partition placement group” is incorrect. A partition placement group protects instances from correlated hardware failures; it does not offer the best inter-instance network performance. INCORRECT: “Add multiple Elastic Network Adapters (ENAs) to each instance and create a NIC team” is incorrect. You cannot use NIC teaming methods on AWS to increase the bandwidth available to your application, and this will not reduce latency. INCORRECT: “Use an EBS-optimized instance with 10 Gigabit networking and deploy to a single subnet” is incorrect. EBS optimization is related to storage, not to network performance. A 10 Gigabit adapter offers good bandwidth, but for the lowest latency an EFA with a cluster placement group should be used.
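A minimal boto3 sketch of this deployment, assuming a placeholder AMI, subnet, and an EFA-capable instance type:

import boto3

ec2 = boto3.client("ec2")

# Cluster placement group keeps instances physically close for low latency
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch instances into the placement group with an EFA attached
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="c5n.18xlarge",             # EFA-capable instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "InterfaceType": "efa",
    }],
)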
A company has deployed an API using Amazon API Gateway. There are many repeat requests and a solutions architect has been asked to implement measures to reduce request latency and the number of calls to the Amazon EC2 endpoint. How can this be most easily achieved?
A. Create a cache for a stage and configure a TTL
B. Create a cache for a method and configure a TTL
C. Configure an edge-optimized endpoint with CloudFront
D. Configure a private endpoint place ElastiCache in front
A. Create a cache for a stage and configure a TTL
Explanation:
You can enable API caching in Amazon API Gateway to cache your endpoint’s responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint. The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled. CORRECT: “Create a cache for a stage and configure a TTL” is the correct answer. INCORRECT: “Create a cache for a method and configure a TTL” is incorrect. An API cache is not enabled for a method, it is enabled for a stage. INCORRECT: “Configure an edge-optimized endpoint with CloudFront” is incorrect. This is the default endpoint type with API Gateway, so there’s no reason to believe the solutions architect needs to configure this. Users are routed to the nearest CloudFront point of presence (POP); however, caching still takes place within API Gateway using a stage cache. INCORRECT: “Configure a private endpoint place ElastiCache in front” is incorrect. You cannot use Amazon ElastiCache to cache API requests.
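A hedged boto3 sketch of enabling the stage cache (the API ID and stage are placeholders; the wildcard method-settings path shown is one way to apply a TTL across the stage):

import boto3

apigw = boto3.client("apigateway")

apigw.update_stage(
    restApiId="a1b2c3d4e5",   # placeholder API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        # Override the cache TTL (in seconds) for all methods in the stage
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "300"},
    ],
)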
A Solutions Architect is designing a migration strategy for a company moving to the AWS Cloud. The company uses a shared Microsoft filesystem that uses Distributed File System Namespaces (DFSN). What will be the MOST suitable migration strategy for the filesystem?
A. Use the AWS Server Migration Service to migrate to an Amazon S3 bucket
B. Use the AWS Server Migration Service to migrate to Amazon FSx for Lustre
C. Use AWS DataSync to migrate to an Amazon EFS filesystem
D. Use AWS DataSync to migrate to Amazon FSx for Windows File Server
D. Use AWS DataSync to migrate to Amazon FSx for Windows File Server
Explanation:
The destination filesystem should be Amazon FSx for Windows File Server. This supports DFSN and is the most suitable storage solution for Microsoft filesystems. AWS DataSync supports migrating to Amazon FSx and automates the process. CORRECT: “Use AWS DataSync to migrate to Amazon FSx for Windows File Server” is the correct answer. INCORRECT: “Use the AWS Server Migration Service to migrate to Amazon FSx for Lustre” is incorrect. The Server Migration Service is used to migrate virtual machines, and FSx for Lustre does not support Windows filesystems. INCORRECT: “Use AWS DataSync to migrate to an Amazon EFS filesystem” is incorrect. You can migrate data to EFS using DataSync, but it is the wrong destination for a Microsoft filesystem (EFS is accessed over NFS from Linux clients). INCORRECT: “Use the AWS Server Migration Service to migrate to an Amazon S3 bucket” is incorrect. The Server Migration Service is used to migrate virtual machines, and Amazon S3 is an object-based storage system that is unsuitable for hosting a Microsoft filesystem.
An Amazon ElastiCache for Redis cluster runs across multiple Availability Zones. A solutions architect is concerned about the security of sensitive data as it is replicated between nodes. How can the solutions architect protect the sensitive data?
A. Issue a Redis AUTH command
B. Enable in-transit encryption
C. Enable at-rest encryption
D. Set up MFA and API logging
B. Enable in-transit encryption
Explanation:
Amazon ElastiCache in-transit encryption is an optional feature that allows you to increase the security of your data at its most vulnerable points—when it is in transit from one location to another. Because there is some processing needed to encrypt and decrypt the data at the endpoints, enabling in-transit encryption can have some performance impact. You should benchmark your data with and without in-transit encryption to determine the performance impact for your use cases. ElastiCache in-transit encryption implements the following features: - Encrypted connections—both the server and client connections are Secure Socket Layer (SSL) encrypted. - Encrypted replication—data moving between a primary node and replica nodes is encrypted. - Server authentication—clients can authenticate that they are connecting to the right server. - Client authentication—using the Redis AUTH feature, the server can authenticate the clients. CORRECT: “Enable in-transit encryption” is the correct answer. INCORRECT: “Issue a Redis AUTH command” is incorrect. This is used when using a password to access the database. INCORRECT: “Enable at-rest encryption” is incorrect. ElastiCache for Redis at-rest encryption is an optional feature to increase data security by encrypting on-disk data. This does not encrypt the data in transit when it is being replicated between nodes. INCORRECT: “Set up MFA and API logging” is incorrect. Neither multi-factor authentication nor API logging will assist with encrypting data.
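A minimal boto3 sketch, assuming placeholder names and node types, of creating a Redis replication group with in-transit (and, optionally, at-rest) encryption enabled:

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="secure-redis",
    ReplicationGroupDescription="Redis with encryption in transit",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=3,                   # primary plus two replicas across AZs
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    TransitEncryptionEnabled=True,        # encrypts replication and client traffic
    AtRestEncryptionEnabled=True,         # optional: also encrypt on-disk data
    AuthToken="example-strong-password",  # Redis AUTH; requires in-transit encryption
)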
A company runs an application on-premises that must consume a REST API running on Amazon API Gateway. The company has an AWS Direct Connect connection to their Amazon VPC. The solutions architect wants all API calls to use private addressing only and avoid the internet. How can this be achieved?
A. Use a transit virtual interface and an AWS VPN to create a secure tunnel to Amazon API Gateway
B. Use a private virtual interface and create a VPC Endpoint for Amazon API Gateway
C. Use a hosted virtual interface and create a VPC Endpoint for Amazon API Gateway
D. Use a public virtual interface and an AWS VPN to create a secure tunnel to Amazon API Gateway
B. Use a private virtual interface and create a VPC Endpoint for Amazon API Gateway
Explanation:
The requirements are to avoid the internet and use private IP addresses only. The best solution is to use a private virtual interface across the Direct Connect connection to connect to the VPC using private IP addresses. A VPC endpoint for Amazon API Gateway can be created and this will provide access to API Gateway using private IP addresses and avoids the internet completely. CORRECT: “Use a private virtual interface and create a VPC Endpoint for Amazon API Gateway” is the correct answer. INCORRECT: “Use a hosted virtual interface and create a VPC Endpoint for Amazon API Gateway” is incorrect. A hosted virtual interface is used to allow another account to access your Direct Connect link. INCORRECT: “Use a transit virtual interface and an AWS VPN to create a secure tunnel to Amazon API Gateway” is incorrect. A transit virtual interface is used to access Amazon VPC Transit Gateways which are not included in the solution. INCORRECT: “Use a public virtual interface and an AWS VPN to create a secure tunnel to Amazon API Gateway” is incorrect. This will use the public internet so it is not allowed in this scenario.
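A rough boto3 sketch of the VPC endpoint piece (the VPC, subnet, security group, and Region are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint for API Gateway (execute-api) in the VPC
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.execute-api",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)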
A company has an eCommerce application that runs from multiple AWS Regions. Each region has a separate database running on Amazon EC2 instances. The company plans to consolidate the data to a columnar database and run analytics queries. Which approach should the company take?
A. Run an AWS Batch job to copy and process the data into a columnar Amazon RDS database. Use Amazon Athena to analyze the data
B. Use the COPY command to load data into an Amazon RedShift data warehouse and run the analytics queries there
C. Launch Amazon Kinesis Data Streams producers to load data into a Kinesis Data stream. Use Kinesis Data Analytics to analyze the data
D. Create an AWS Lambda function that copies the data onto Amazon S3. Use Amazon S3 Select to query the data
B. Use the COPY command to load data into an Amazon RedShift data warehouse and run the analytics queries there
Explanation:
Amazon Redshift is an enterprise-level, petabyte-scale, fully managed data warehousing service. It uses columnar storage to improve the performance of complex queries. You can use the COPY command to load data in parallel from one or more remote hosts, such as Amazon EC2 instances or other computers. COPY connects to the remote hosts using SSH and executes commands on the remote hosts to generate text output. CORRECT: “Use the COPY command to load data into an Amazon RedShift data warehouse and run the analytics queries there” is the correct answer. INCORRECT: “Run an AWS Batch job to copy and process the data into a columnar Amazon RDS database. Use Amazon Athena to analyze the data” is incorrect. AWS Batch is used for running batch computing jobs across a fleet of EC2 instances. You cannot create a “columnar Amazon RDS database” as RDS is optimized for transactional workloads. Athena is used to analyze data on S3. INCORRECT: “Launch Amazon Kinesis Data Streams producers to load data into a Kinesis Data stream. Use Kinesis Data Analytics to analyze the data” is incorrect. Kinesis is a real-time streaming data service. It is not a columnar database so it is unsuitable for this use case. INCORRECT: “Create an AWS Lambda function that copies the data onto Amazon S3. Use Amazon S3 Select to query the data” is incorrect. S3 is not a columnar database and S3 Select does not run analytics queries; it simply retrieves a subset of data from a single object.
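As an illustrative sketch only (placeholder cluster, table, bucket, and IAM role), a COPY statement can be issued through the Redshift Data API; this example assumes the data has first been staged to Amazon S3, which is a common variant of the COPY workflow:

import boto3

rsdata = boto3.client("redshift-data")

rsdata.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="analytics",
    DbUser="admin",
    Sql="""
        COPY sales
        FROM 's3://example-staging-bucket/sales/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS CSV;
    """,
)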
There has been an increase in traffic to an application that writes data to an Amazon DynamoDB database. Thousands of random table reads occur per second and low latency is required. What can a Solutions Architect do to improve performance for the reads without negatively impacting the rest of the application?
A. Increase the number of Amazon DynamoDB write capacity units
B. Add an Amazon SQS queue to decouple the requests
C. Use Amazon DynamoDB Accelerator to cache the reads
D. Use an Amazon Kinesis Data Stream to decouple requests
C. Use Amazon DynamoDB Accelerator to cache the reads
Explanation:
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios: As an in-memory cache, DAX reduces the response times of eventually consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds. DAX reduces operational and application complexity by providing a managed service that is API-compatible with DynamoDB. Therefore, it requires only minimal functional changes to use with an existing application. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to overprovision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys. DynamoDB Accelerator (DAX) is the best solution for caching the reads and delivering them at extremely low latency. CORRECT: “Use Amazon DynamoDB Accelerator to cache the reads” is the correct answer. INCORRECT: “Increase the number of Amazon DynamoDB write capacity units” is incorrect. This will not improve read performance as write capacity units affect write performance. INCORRECT: “Add an Amazon SQS queue to decouple the requests” is incorrect. You cannot decouple a database from the frontend with a queue in order to decrease read latency. INCORRECT: “Use an Amazon Kinesis Data Stream to decouple requests” is incorrect. You cannot increase read performance for a database by implementing a real-time streaming service.
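Because DAX is API-compatible with DynamoDB, adopting it is mostly a matter of pointing the client at the cluster endpoint. A rough sketch using the separately installed amazondax Python package (the endpoint, table, and key are placeholders, and the constructor arguments may vary by library version):

from amazondax import AmazonDaxClient

# Placeholder cluster endpoint; the DAX client mirrors the DynamoDB low-level API
dax = AmazonDaxClient(endpoint_url="dax://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com")

# Repeated reads of the same key are served from the DAX in-memory cache
response = dax.get_item(
    TableName="Products",
    Key={"ProductId": {"S": "12345"}},
)
print(response.get("Item"))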
A Solutions Architect must enable an application to download software updates from the internet. The application runs on a series of EC2 instances in an Auto Scaling group running in a private subnet. The solution must involve minimal ongoing systems management effort. How should the Solutions Architect proceed?
A. Implement a NAT gateway
B. Launch a NAT instance
C. Create a Virtual Private Gateway
D. Attach Elastic IP addresses
A. Implement a NAT gateway
Explanation:
Either a NAT gateway or a NAT instance can be used for this use case. Both enable internet access for instances in private subnets. However, a NAT instance runs on an EC2 instance that you must launch, configure, and manage, and it therefore involves more ongoing systems management effort. CORRECT: “Implement a NAT gateway” is the correct answer. INCORRECT: “Launch a NAT instance” is incorrect as this option involves more ongoing systems management effort. INCORRECT: “Create a Virtual Private Gateway” is incorrect. A VPG is used as part of a VPN connection (the AWS side of the connection). It is not used to enable internet access. INCORRECT: “Attach Elastic IP addresses” is incorrect. You cannot use Elastic IP addresses with instances in private subnets.
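A minimal boto3 sketch, assuming placeholder subnet and route table IDs, of creating the NAT gateway and routing the private subnet through it:

import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and needs an Elastic IP
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb22222",          # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = natgw["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route internet-bound traffic from the private subnet through the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0ccc3333dddd44444",         # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)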
A Solutions Architect manages multiple Amazon RDS MySQL databases. To improve security, the Solutions Architect wants to enable secure user access with short-lived credentials. How can these requirements be met?
A. Configure the MySQL databases to use the AWS Security Token Service (STS)
B. Configure the application to use the AUTH command to send a unique password
C. Create the MySQL user accounts to use the AWSAuthenticationPlugin with IAM
D. Configure the MySQL databases to use AWS KMS data encryption keys
C. Create the MySQL user accounts to use the AWSAuthenticationPlugin with IAM
Explanation:
With MySQL, authentication is handled by AWSAuthenticationPlugin—an AWS-provided plugin that works seamlessly with IAM to authenticate your IAM users. Connect to the DB instance and issue the CREATE USER statement, as shown in the following example: CREATE USER jane_doe IDENTIFIED WITH AWSAuthenticationPlugin AS ‘RDS’; The IDENTIFIED WITH clause allows MySQL to use the AWSAuthenticationPlugin to authenticate the database account (jane_doe). The AS ‘RDS’ clause refers to the authentication method, and the specified database account should have the same name as the IAM user or role. In this example, both the database account and the IAM user or role are named jane_doe. CORRECT: “Create the MySQL user accounts to use the AWSAuthenticationPlugin with IAM” is the correct answer. INCORRECT: “Configure the MySQL databases to use the AWS Security Token Service (STS)” is incorrect. You cannot configure MySQL to directly use the AWS STS. INCORRECT: “Configure the application to use the AUTH command to send a unique password” is incorrect. This is used with Redis databases, not with RDS databases. INCORRECT: “Configure the MySQL databases to use AWS KMS data encryption keys” is incorrect. Data encryption keys are used for data encryption, not for management of connection strings.
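On the client side, the application obtains a short-lived (15-minute) IAM authentication token instead of storing a password. A rough sketch with boto3 (the endpoint is a placeholder, and pymysql is an assumed MySQL client library):

import boto3
import pymysql  # assumed MySQL client library

rds = boto3.client("rds")

host = "mydb.abc123xyz.us-east-1.rds.amazonaws.com"  # placeholder endpoint

# Generate a short-lived authentication token in place of a password
token = rds.generate_db_auth_token(DBHostname=host, Port=3306, DBUsername="jane_doe")

conn = pymysql.connect(
    host=host,
    port=3306,
    user="jane_doe",
    password=token,
    ssl={"ca": "/path/to/rds-ca-bundle.pem"},  # IAM database authentication requires SSL
)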
An application is running in a private subnet of an Amazon VPC and must have outbound internet access for downloading updates. The Solutions Architect does not want the application exposed to inbound connection attempts. Which steps should be taken?
A. Create a NAT gateway but do not attach an internet gateway to the VPC
B. Attach an internet gateway to the private subnet and create a NAT gateway
C. Attach an internet gateway to the VPC but do not create a NAT gateway
D. Create a NAT gateway and attach an internet gateway to the VPC
D. Create a NAT gateway and attach an internet gateway to the VPC
Explanation:
To enable outbound connectivity for instances in private subnets a NAT gateway can be created. The NAT gateway is created in a public subnet, and a route must be created in the private subnet pointing to the NAT gateway for internet-bound traffic. An internet gateway must be attached to the VPC to facilitate outbound connections. You cannot directly connect to an instance in a private subnet from the internet; you would need to use a bastion/jump host. Therefore, the application will not be exposed to inbound connection attempts. CORRECT: “Create a NAT gateway and attach an internet gateway to the VPC” is the correct answer. INCORRECT: “Create a NAT gateway but do not attach an internet gateway to the VPC” is incorrect. An internet gateway must be attached to the VPC for any outbound connections to work. INCORRECT: “Attach an internet gateway to the private subnet and create a NAT gateway” is incorrect. You do not attach internet gateways to subnets, you attach them to VPCs. INCORRECT: “Attach an internet gateway to the VPC but do not create a NAT gateway” is incorrect. Without a NAT gateway the instances in the private subnet will not be able to download updates from the internet.
An application has been migrated from on-premises to an Amazon EC2 instance. The migration has failed due to an unknown dependency: the application must communicate with an on-premises server using private IP addresses. Which action should a solutions architect take to quickly provision the necessary connectivity?
A. Setup an AWS Direct Connect connection
B. Configure a Virtual Private Gateway
C. Create an Amazon CloudFront distribution
D. Create an AWS Transit Gateway
B. Configure a Virtual Private Gateway
Explanation:
A virtual private gateway is a logical, fully redundant distributed edge routing function that sits at the edge of your VPC. You must create a VPG in your VPC before you can establish an AWS Managed site-to-site VPN connection. The other end of the connection is the customer gateway which must be established on the customer side of the connection. CORRECT: “Configure a Virtual Private Gateway” is the correct answer. INCORRECT: “Setup an AWS Direct Connect connection” is incorrect as this would take too long to provision. INCORRECT: “Create an Amazon CloudFront distribution” is incorrect. This is not a solution for enabling connectivity using private addresses to an on-premises site. CloudFront is a content delivery network (CDN). INCORRECT: “Create an AWS Transit Gateway” is incorrect. AWS Transit Gateway connects VPCs and on-premises networks through a central hub which is not a requirement of this solution.
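A rough boto3 sketch of provisioning the AWS side of the VPN (the VPC ID, on-premises public IP, and ASN are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create and attach the virtual private gateway (the AWS side of the VPN)
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0")

# The customer gateway represents the on-premises VPN device
cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000)

# Site-to-Site VPN connection between the two
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": False},
)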
A company runs an API on a Linux server in their on-premises data center. The company is planning to migrate the API to the AWS Cloud and requires a highly available, scalable and cost-effective solution. What should a Solutions Architect recommend?
A. Migrate the API to Amazon API Gateway and migrate the backend to Amazon EC2
B. Migrate the API server to Amazon EC2 instances in an Auto Scaling group and attach an Application Load Balancer
C. Migrate the API to Amazon API Gateway and use AWS Lambda as the backend
D. Migrate the API to Amazon CloudFront and use AWS Lambda as the origin
C. Migrate the API to Amazon API Gateway and use AWS Lambda as the backend
Explanation:
The best option is to use a fully serverless solution. This will provide high availability and scalability and be cost-effective. The components for this would be Amazon API Gateway for hosting the API and AWS Lambda for running the backend. CORRECT: “Migrate the API to Amazon API Gateway and use AWS Lambda as the backend” is the correct answer. INCORRECT: “Migrate the API to Amazon API Gateway and migrate the backend to Amazon EC2” is incorrect. This is a less available and less cost-effective solution for the backend compared to AWS Lambda. INCORRECT: “Migrate the API server to Amazon EC2 instances in an Auto Scaling group and attach an Application Load Balancer” is incorrect. This requires managing and scaling EC2 instances and is therefore a less cost-effective solution. INCORRECT: “Migrate the API to Amazon CloudFront and use AWS Lambda as the origin” is incorrect. You cannot migrate an API to CloudFront. You can use CloudFront in front of API Gateway, but that is not what this answer specifies.
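For illustration only, a minimal Python Lambda handler in the shape that API Gateway proxy integrations expect (the payload is hypothetical):

import json

def lambda_handler(event, context):
    # Minimal Lambda backend for an API Gateway proxy integration
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }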
An application that is being installed on an Amazon EC2 instance requires a persistent block storage volume. The data must be encrypted at rest and regular volume-level backups must be automated. Which solution option should be used?
A. Use an encrypted Amazon EBS volume and use Data Lifecycle Manager to automate snapshots
B. Use an encrypted Amazon EFS filesystem and use an Amazon CloudWatch Events rule to start a backup copy of data using AWS Lambda
C. Use server-side encryption on an Amazon S3 bucket and use Cross-Region-Replication to backup on a schedule
D. Use an encrypted Amazon EC2 instance store and copy the data to another EC2 instance using a cron job and a batch script
A. Use an encrypted Amazon EBS volume and use Data Lifecycle Manager to automate snapshots
Explanation:
For block storage the Solutions Architect should use either Amazon EBS or the EC2 instance store. However, the instance store is non-persistent, so EBS must be used. With EBS you can encrypt your volume and automate volume-level backups using snapshots that are run by Data Lifecycle Manager. CORRECT: “Use an encrypted Amazon EBS volume and use Data Lifecycle Manager to automate snapshots” is the correct answer. INCORRECT: “Use an encrypted Amazon EFS filesystem and use an Amazon CloudWatch Events rule to start a backup copy of data using AWS Lambda” is incorrect. EFS is not block storage, it is a file-level storage service. INCORRECT: “Use server-side encryption on an Amazon S3 bucket and use Cross-Region-Replication to backup on a schedule” is incorrect. Amazon S3 is an object-based storage system, not a block-based storage system. INCORRECT: “Use an encrypted Amazon EC2 instance store and copy the data to another EC2 instance using a cron job and a batch script” is incorrect as the EC2 instance store is a non-persistent volume.
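A rough boto3 sketch (role ARN, AZ, and tags are placeholders) of creating an encrypted EBS volume and a Data Lifecycle Manager policy that snapshots it daily:

import boto3

ec2 = boto3.client("ec2")
dlm = boto3.client("dlm")

# Encrypted gp3 volume, tagged so the lifecycle policy can target it
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
    TagSpecifications=[{"ResourceType": "volume",
                        "Tags": [{"Key": "Backup", "Value": "Daily"}]}],
)

# DLM policy: daily snapshots of tagged volumes, keep the last 7
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/DLMServiceRole",  # placeholder role
    Description="Daily snapshots of tagged volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "Daily"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)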
A company has several AWS accounts each with multiple Amazon VPCs. The company must establish routing between all private subnets. The architecture should be simple and allow transitive routing to occur. How should the network connectivity be configured?
A. Create a transitive VPC peering connection between each Amazon VPC and configure route tables
B. Create an AWS Transit Gateway and share it with each account using AWS Resource Access Manager
C. Create an AWS Managed VPN between each Amazon VPC and configure route tables
D. Create a hub-and-spoke topology with AWS App Mesh and use AWS Resource Access Manager to share route tables
B. Create an AWS Transit Gateway and share it with each account using AWS Resource Access Manager
Explanation:
You can build a hub-and-spoke topology with AWS Transit Gateway that supports transitive routing. This simplifies the network topology and adds additional features over VPC peering. AWS Resource Access Manager can be used to share the connection with the other AWS accounts. CORRECT: “Create an AWS Transit Gateway and share it with each account using AWS Resource Access Manager” is the correct answer. INCORRECT: “Create a transitive VPC peering connection between each Amazon VPC and configure route tables” is incorrect. You cannot create transitive connections with VPC peering. INCORRECT: “Create an AWS Managed VPN between each Amazon VPC and configure route tables” is incorrect. This is a much more complex solution compared to AWS Transit Gateway so is not the best option. INCORRECT: “Create a hub-and-spoke topology with AWS App Mesh and use AWS Resource Access Manager to share route tables” is incorrect. AWS App Mesh is used for application-level networking for microservices applications.
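A rough boto3 sketch (account and organization identifiers are placeholders) of creating the transit gateway and sharing it through AWS Resource Access Manager:

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

tgw = ec2.create_transit_gateway(
    Description="Shared hub for all VPCs",
    Options={"DefaultRouteTableAssociation": "enable",
             "DefaultRouteTablePropagation": "enable"},
)["TransitGateway"]

# Share the transit gateway with the other accounts (here, the whole organization)
ram.create_resource_share(
    name="tgw-share",
    resourceArns=[tgw["TransitGatewayArn"]],
    principals=["arn:aws:organizations::123456789012:organization/o-exampleorgid"],
)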
An organization is planning their disaster recovery solution. They plan to run a scaled down version of a fully functional environment. In a DR situation the recovery time must be minimized. Which DR strategy should a Solutions Architect recommend?
A. Backup and restore
B. Pilot light
C. Warm standby
D. Multi-site
C. Warm standby
Explanation:
The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud. A warm standby solution extends the pilot light elements and preparation. It further decreases the recovery time because some services are always running. By identifying your business-critical systems, you can fully duplicate these systems on AWS and have them always on. CORRECT: “Warm standby” is the correct answer. INCORRECT: “Backup and restore” is incorrect. This is the lowest cost DR approach that simply entails creating online backups of all data and applications. INCORRECT: “Pilot light” is incorrect. With a pilot light strategy a core minimum of services are running and the remainder are only brought online during a disaster recovery situation. INCORRECT: “Multi-site” is incorrect. A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration.
An application analyzes images of people that are uploaded to an Amazon S3 bucket. The application determines demographic data which is then saved to a .CSV file in another S3 bucket. The data must be encrypted at rest and then queried using SQL. The solution should be fully serverless. Which actions should a Solutions Architect take to encrypt and query the data?
A. Use Amazon S3 server-side encryption and use Amazon RedShift Spectrum to query the data
B. Use AWS KMS encryption keys for the S3 bucket and use Amazon Athena to query the data
C. Use AWS KMS encryption keys for the S3 bucket and use Amazon Kinesis Data Analytics to query the data
D. Use Amazon S3 server-side encryption and Amazon QuickSight to query the data
B. Use AWS KMS encryption keys for the S3 bucket and use Amazon Athena to query the data
Explanation:
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Amazon Athena supports encrypted data for both the source data and query results, for example, using Amazon S3 with AWS KMS. CORRECT: “Use AWS KMS encryption keys for the S3 bucket and use Amazon Athena to query the data” is the correct answer. INCORRECT: “Use Amazon S3 server-side encryption and use Amazon RedShift Spectrum to query the data” is incorrect. RedShift Spectrum is not serverless as it requires a RedShift cluster which is based on EC2 instances. INCORRECT: “Use AWS KMS encryption keys for the S3 bucket and use Amazon Kinesis Data Analytics to query the data” is incorrect. Kinesis Data Analytics is used for analyzing real-time streaming data in Kinesis streams. INCORRECT: “Use Amazon S3 server-side encryption and Amazon QuickSight to query the data” is incorrect. Amazon QuickSight is an interactive dashboard, it is not a service for running queries on data.
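A minimal boto3 sketch of an Athena query over the encrypted .CSV data, with placeholder database, table, results bucket, and KMS key:

import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT age_range, COUNT(*) FROM demographics GROUP BY age_range;",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={
        "OutputLocation": "s3://example-athena-results/",
        # Encrypt the query results with KMS as well
        "EncryptionConfiguration": {
            "EncryptionOption": "SSE_KMS",
            "KmsKey": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
        },
    },
)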
A large quantity of data is stored on a NAS device on-premises and accessed using the SMB protocol. The company requires a managed service for hosting the filesystem and a tool to automate the migration. Which actions should a Solutions Architect take?
A. Migrate the data to Amazon EFS using the AWS Server Migration Service (SMS)
B. Migrate the data to Amazon FSx for Lustre using AWS DataSync
C. Migrate the data to Amazon FSx for Windows File Server using AWS DataSync
D. Migrate the data to Amazon S3 using an AWS Snowball Edge device
C. Migrate the data to Amazon FSx for Windows File Server using AWS DataSync
Explanation:
Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. This is the most suitable destination for this use case. AWS DataSync can be used to move large amounts of data online between on-premises storage and Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server. The source datastore can be Server Message Block (SMB) file servers. CORRECT: “Migrate the data to Amazon FSx for Windows File Server using AWS DataSync” is the correct answer. INCORRECT: “Migrate the data to Amazon EFS using the AWS Server Migration Service (SMS)” is incorrect. EFS is used for hosting filesystems accessed over NFS from Linux (not Windows), and the SMS service is used for migrating virtual machines, not data. INCORRECT: “Migrate the data to Amazon FSx for Lustre using AWS DataSync” is incorrect. Amazon FSx for Windows File Server should be used for hosting SMB shares. INCORRECT: “Migrate the data to Amazon S3 using an AWS Snowball Edge device” is incorrect. Amazon S3 is an object store and unsuitable for hosting an SMB filesystem. Snowball is not required in this case as the data is not going to S3 and there are no time or bandwidth limitations mentioned in the scenario.
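A rough boto3 sketch of the DataSync pieces (hostnames, credentials, and ARNs are placeholders; a DataSync agent is assumed to be deployed on-premises):

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises SMB share, read through the DataSync agent
source = datasync.create_location_smb(
    ServerHostname="nas01.corp.example.com",
    Subdirectory="/share",
    User="migration-user",
    Password="example-password",
    AgentArns=["arn:aws:datasync:us-east-1:123456789012:agent/agent-0example"],
)

# Destination: the FSx for Windows File Server file system
dest = datasync.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:123456789012:file-system/fs-0example",
    SecurityGroupArns=["arn:aws:ec2:us-east-1:123456789012:security-group/sg-0example"],
    User="Admin",
    Password="example-password",
)

datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=dest["LocationArn"],
    Name="smb-to-fsx-migration",
)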
The database layer of an on-premises web application is being migrated to AWS. The database uses a multi-threaded, in-memory caching layer to improve performance for repeated queries. Which service would be the most suitable replacement for the database cache?
A. Amazon ElastiCache Redis
B. Amazon DynamoDB DAX
C. Amazon ElastiCache Memcached
D. Amazon RDS MySQL
C. Amazon ElastiCache Memcached
Explanation:
Amazon ElastiCache with the Memcached engine is an in-memory data store that can be used as a database caching layer. The Memcached engine supports multiple cores and threads, and large nodes. CORRECT: “Amazon ElastiCache Memcached” is the correct answer. INCORRECT: “Amazon ElastiCache Redis” is incorrect. The Redis engine does not support multiple CPU cores or threads. INCORRECT: “Amazon DynamoDB DAX” is incorrect. Amazon DynamoDB Accelerator (DAX) is a database cache that should be used with DynamoDB only. INCORRECT: “Amazon RDS MySQL” is incorrect as this is not an example of an in-memory database that can be used as a database caching layer.
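A minimal boto3 sketch of creating a multi-node Memcached cluster (the cluster name and node type are placeholders):

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_cache_cluster(
    CacheClusterId="app-db-cache",
    Engine="memcached",
    CacheNodeType="cache.r6g.large",
    NumCacheNodes=3,          # Memcached scales out across nodes
    AZMode="cross-az",        # spread the nodes across Availability Zones
)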
A Solutions Architect is designing an application for processing and extracting data from log files. The log files are generated by an application and the number and frequency of updates varies. The files are up to 1 GB in size and processing will take around 40 seconds for each file. Which solution is the most cost-effective?
A. Write the log files to an Amazon EC2 instance with an attached EBS volume. After processing, save the files to an Amazon S3 bucket
B. Write the log files to an Amazon SQS queue. Use AWS Lambda to process the files from the queue and save to an Amazon S3 bucket
C. Write the log files to an Amazon S3 bucket. Create an event notification to invoke an Amazon ECS task to process the files and save to an Amazon S3 bucket
D. Write the log files to an Amazon S3 bucket. Create an event notification to invoke an AWS Lambda function that will process the files
D. Write the log files to an Amazon S3 bucket. Create an event notification to invoke an AWS Lambda function that will process the files
Explanation:
The question asks for the most cost-effective solution and therefore a serverless and automated solution will be the best choice. AWS Lambda can run custom code in response to Amazon S3 bucket events. You upload your custom code to AWS Lambda and create a function. When Amazon S3 detects an event of a specific type (for example, an object created event), it can publish the event to AWS Lambda and invoke your function. In response, AWS Lambda executes your function. CORRECT: “Write the log files to an Amazon S3 bucket. Create an event notification to invoke an AWS Lambda function that will process the files” is the correct answer. INCORRECT: “Write the log files to an Amazon EC2 instance with an attached EBS volume. After processing, save the files to an Amazon S3 bucket” is incorrect. This is not cost-effective as it is not serverless. INCORRECT: “Write the log files to an Amazon SQS queue. Use AWS Lambda to process the files from the queue and save to an Amazon S3 bucket” is incorrect. SQS has a maximum message size of 256 KB, so the message body would need to be saved in S3 anyway. Using an S3 event notification to invoke Lambda directly would be less complex and preferable. INCORRECT: “Write the log files to an Amazon S3 bucket. Create an event notification to invoke an Amazon ECS task to process the files and save to an Amazon S3 bucket” is incorrect. You cannot invoke Amazon ECS tasks directly from S3 event notifications.
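A rough boto3 sketch of wiring the bucket to the function (the bucket name and function ARN are placeholders; the function would also need a resource policy allowing S3 to invoke it):

import boto3

s3 = boto3.client("s3")

# Invoke the processing function whenever a new log file lands in the bucket
s3.put_bucket_notification_configuration(
    Bucket="example-log-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-logs",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".log"}]}},
        }]
    },
)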
A large multinational retail company has a presence in AWS in multiple regions. The company has established a new office and needs to implement a high-bandwidth, low-latency connection to multiple VPCs in multiple regions within the same account. The VPCs each have unique CIDR ranges. What would be the optimum solution design using AWS technology? (Select TWO.)
A. Configure AWS VPN CloudHub
B. Create a Direct Connect gateway, and create private VIFs to each region
C. Provision an MPLS network
D. Implement Direct Connect connections to each AWS region
E. Implement a Direct Connect connection to the closest AWS region
B. Create a Direct Connect gateway, and create private VIFs to each region
E. Implement a Direct Connect connection to the closest AWS region
Explanation:
The company should implement an AWS Direct Connect connection to the closest region. A Direct Connect gateway can then be used to create private virtual interfaces (VIFs) to each AWS region. A Direct Connect gateway provides a grouping of Virtual Private Gateways (VGWs) and Private Virtual Interfaces (VIFs) that belong to the same AWS account and enables you to interface with VPCs in any AWS Region (except the AWS China Regions). You can share a private virtual interface to interface with more than one Virtual Private Cloud (VPC), reducing the number of BGP sessions required. CORRECT: “Create a Direct Connect gateway, and create private VIFs to each region” is a correct answer. CORRECT: “Implement a Direct Connect connection to the closest AWS region” is also a correct answer. INCORRECT: “Configure AWS VPN CloudHub” is incorrect. AWS VPN CloudHub is not the best solution as you have been asked to implement high-bandwidth, low-latency connections and VPN uses the internet, so it is not reliable. INCORRECT: “Provision an MPLS network” is incorrect. An MPLS network could be used to create a network topology that gets you closer to AWS in each region, but you would still need to use Direct Connect or VPN for the connectivity into AWS. Also, the question states that you should use AWS technology and MPLS is not offered as a service by AWS. INCORRECT: “Implement Direct Connect connections to each AWS region” is incorrect. You do not need to implement separate Direct Connect connections to each region. This would be a more expensive option as you would need to pay for international private connections.
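A rough boto3 sketch (connection ID, VLAN, and ASNs are placeholders) of creating the Direct Connect gateway and a private VIF on the existing connection; VPCs in each Region are then associated with the gateway through their virtual private gateways:

import boto3

dx = boto3.client("directconnect")

gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Private VIF on the existing Direct Connect connection, attached to the gateway
dx.create_private_virtual_interface(
    connectionId="dxcon-example1",                 # placeholder connection ID
    newPrivateVirtualInterface={
        "virtualInterfaceName": "vif-to-dxgw",
        "vlan": 101,
        "asn": 65000,
        "directConnectGatewayId": gateway["directConnectGatewayId"],
    },
)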
A Solutions Architect is creating a design for a two-tier application with a MySQL RDS back-end. The performance requirements of the database tier are hard to quantify until the application is running and the Architect is concerned about right-sizing the database. What methods of scaling are possible after the MySQL RDS database is deployed? (Select TWO.)
A. Vertical scaling for read and write by choosing a larger instance size
B. Horizontal scaling for write capacity by enabling Multi-AZ
C. Vertical scaling for read and write by using Transfer Acceleration
D. Horizontal scaling for read and write by enabling Multi-Master RDS DB
E. Horizontal scaling for read capacity by creating a read-replica
A. Vertical scaling for read and write by choosing a larger instance size
E. Horizontal scaling for read capacity by creating a read-replica
Explanation:
To handle a higher load in your database, you can vertically scale up your master database with a simple push of a button. In addition to scaling your master database vertically, you can also improve the performance of a read-heavy database by using read replicas to horizontally scale your database. CORRECT: “Vertical scaling for read and write by choosing a larger instance size” is a correct answer. CORRECT: “Horizontal scaling for read capacity by creating a read-replica” is also a correct answer. INCORRECT: “Horizontal scaling for write capacity by enabling Multi-AZ” is incorrect. You cannot scale write capacity by enabling Multi-AZ as only one DB is active and can be written to. INCORRECT: “Vertical scaling for read and write by using Transfer Acceleration” is incorrect. Transfer Acceleration is a feature of S3 for fast uploads of objects. INCORRECT: “Horizontal scaling for read and write by enabling Multi-Master RDS DB” is incorrect. There is no such thing as a Multi-Master MySQL RDS DB (there is for Aurora).
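Both scaling methods can be performed after deployment; a rough boto3 sketch with placeholder identifiers:

import boto3

rds = boto3.client("rds")

# Vertical scaling: move the primary to a larger instance class
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql",
    DBInstanceClass="db.r6g.2xlarge",
    ApplyImmediately=True,   # otherwise applied in the next maintenance window
)

# Horizontal read scaling: add a read replica
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-mysql-replica-1",
    SourceDBInstanceIdentifier="app-mysql",
)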
An application is running on EC2 instances in a private subnet of an Amazon VPC. A Solutions Architect would like to connect the application to Amazon API Gateway. For security reasons, it is necessary to ensure that no traffic traverses the Internet and to ensure all traffic uses private IP addresses only. How can this be achieved?
A. Create a NAT gateway
B. Create a public VIF on a Direct Connect connection
C. Create a private API using an interface VPC endpoint
D. Add the API gateway to the subnet the EC2 instances are located in
C. Create a private API using an interface VPC endpoint
Explanation:
An Interface endpoint uses AWS PrivateLink and is an elastic network interface (ENI) with a private IP address that serves as an entry point for traffic destined to a supported service. Using PrivateLink you can connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services. CORRECT: “Create a private API using an interface VPC endpoint” is the correct answer. INCORRECT: “Create a NAT gateway” is incorrect. NAT Gateways are used to provide Internet access for EC2 instances in private subnets so are of no use in this solution. INCORRECT: “Create a public VIF on a Direct Connect connection” is incorrect. You do not need to implement Direct Connect and create a public VIF. Public IP addresses are used in public VIFs and the question requests that only private addresses are used. INCORRECT: “Add the API gateway to the subnet the EC2 instances are located in” is incorrect. You cannot add API Gateway to the subnet the EC2 instances are in, it is a public service with a public endpoint.
An application stack is being created which needs a message bus to decouple the application components from each other. The application will generate up to 300 messages per second without using batching. A Solutions Architect needs to ensure that a message is delivered only once and duplicates are not introduced into the queue. It is not necessary to maintain the order of the messages. Which SQS queue type should be used?
A. Standard queues
B. Long polling queues
C. FIFO queues
D. Auto Scaling queues
C. FIFO queues
Explanation:
The key fact you need to consider here is that duplicate messages cannot be introduced into the queue. For this reason alone you must use a FIFO queue. The statement about it not being necessary to maintain the order of the messages is meant to confuse you, as that might lead you to think you can use a standard queue, but standard queues don’t guarantee that duplicates are not introduced into the queue. FIFO (first-in-first-out) queues preserve the exact order in which messages are sent and received – note that this is not required in the question, but exactly-once processing is. FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it. CORRECT: “FIFO queues” is the correct answer. INCORRECT: “Standard queues” is incorrect. Standard queues provide a loose-FIFO capability that attempts to preserve the order of messages. Standard queues provide at-least-once delivery, which means that each message is delivered at least once. INCORRECT: “Long polling queues” is incorrect. Long polling is a configuration you can apply to a queue, it is not a queue type. INCORRECT: “Auto Scaling queues” is incorrect. There is no such thing as an Auto Scaling queue.
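A minimal boto3 sketch (queue and group names are placeholders) of creating a FIFO queue with content-based deduplication and sending to it:

import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="orders.fifo",                     # FIFO queue names must end in .fifo
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",     # deduplicate on a hash of the message body
    },
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"orderId": "12345"}',
    MessageGroupId="orders",                     # required for FIFO queues
)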
A Solutions Architect is attempting to clean up unused EBS volumes and snapshots to save some space and cost. How many of the most recent snapshots of an EBS volume need to be maintained to guarantee that you can recreate the full EBS volume from the snapshot?
A. You must retain all snapshots as the process is incremental and therefore data is required from each snapshot
B. Two snapshots, the oldest and most recent snapshots
C. The oldest snapshot, as this references data in all other snapshots
D. Only the most recent snapshot. Snapshots are incremental, but the deletion process will ensure that no data is lost
D. Only the most recent snapshot. Snapshots are incremental, but the deletion process will ensure that no data is lost
Explanation:
Snapshots capture a point-in-time state of an instance. If you make periodic snapshots of a volume, the snapshots are incremental, which means that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume. CORRECT: “Only the most recent snapshot. Snapshots are incremental, but the deletion process will ensure that no data is lost” is the correct answer. INCORRECT: “You must retain all snapshots as the process is incremental and therefore data is required from each snapshot” is incorrect as explained above. INCORRECT: “Two snapshots, the oldest and most recent snapshots” is incorrect as explained above. INCORRECT: “The oldest snapshot, as this references data in all other snapshots” is incorrect as explained above.
A Python application is currently running on Amazon ECS containers using the Fargate launch type. An ALB has been created with a Target Group that routes incoming connections to the ECS-based application. The application will be used by consumers who will authenticate using federated OIDC compliant Identity Providers such as Google and Facebook. The users must be securely authenticated on the front-end before they access the secured portions of the application. How can this be configured using an ALB?
A. The only option is to use SAML with Amazon Cognito on the ALB
B. This can be done on the ALB by creating an authentication action on a listener rule that configures an Amazon Cognito user pool with the social IdP
C. This cannot be done on an ALB; you’ll need to authenticate users on the back-end with AWS Single Sign-On (SSO) integration
D. This cannot be done on an ALB; you’ll need to use another layer in front of the ALB
B. This can be done on the ALB by creating an authentication action on a listener rule that configures an Amazon Cognito user pool with the social IdP
Explanation:
ALB supports authentication from OIDC compliant identity providers such as Google, Facebook and Amazon. It is implemented through an authentication action on a listener rule that integrates with Amazon Cognito to create user pools. SAML can be used with Amazon Cognito but this is not the only option. CORRECT: “This can be done on the ALB by creating an authentication action on a listener rule that configures an Amazon Cognito user pool with the social IdP” is the correct answer. INCORRECT: “The only option is to use SAML with Amazon Cognito on the ALB” is incorrect as explained above. INCORRECT: “This cannot be done on an ALB; you’ll need to authenticate users on the back-end with AWS Single Sign-On (SSO) integration” is incorrect as explained above. INCORRECT: “This cannot be done on an ALB; you’ll need to use another layer in front of the ALB” is incorrect as explained above.
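A rough boto3 sketch of such a listener rule (all ARNs, the user pool client ID, and the Cognito domain are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/example/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/secure/*"]}],
    Actions=[
        {
            # Authenticate against the Cognito user pool (federated with the social IdPs)
            "Type": "authenticate-cognito",
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_example",
                "UserPoolClientId": "exampleclientid",
                "UserPoolDomain": "example-auth-domain",
            },
        },
        {
            # Then forward authenticated requests to the ECS target group
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/ecs-app/abc123",
        },
    ],
)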