Set 4 Kindle SAA-003 Practice Test Flashcards
A company is deploying an Amazon ElastiCache for Redis cluster. To enhance security, a password should be required to access the database. What should the solutions architect use?
A. AWS Directory Service
B. AWS IAM Policy
C. Redis AUTH command
D. VPC Security Group
C. Redis AUTH command
Explanation:
Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security. You can require that users enter a token on a token-protected Redis server. To do this, include the --auth-token parameter (API: AuthToken) with the correct token when you create your replication group or cluster. Also include it in all subsequent commands to the replication group or cluster. CORRECT: “Redis AUTH command” is the correct answer. INCORRECT: “AWS Directory Service” is incorrect. This is a managed Microsoft Active Directory service and cannot add password protection to Redis. INCORRECT: “AWS IAM Policy” is incorrect. You cannot use an IAM policy to enforce a password on Redis. INCORRECT: “VPC Security Group” is incorrect. A security group protects at the network layer; it does not affect application authentication.
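As a minimal boto3 sketch (all names and sizes are placeholders), setting the AuthToken at creation time looks like this; note that an auth token requires in-transit encryption to be enabled:

```python
import boto3

elasticache = boto3.client("elasticache")

# Create a replication group protected with a Redis AUTH token (placeholder values)
elasticache.create_replication_group(
    ReplicationGroupId="secure-redis",
    ReplicationGroupDescription="Redis cluster protected with AUTH",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    AuthToken="a-strong-token-of-at-least-16-chars",  # the Redis AUTH password
    TransitEncryptionEnabled=True,                    # AUTH requires in-transit encryption
)
```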
To increase performance and redundancy for an application, a company has decided to run multiple implementations in different AWS Regions behind Network Load Balancers. The company currently advertises the application using two public IP addresses from separate /24 address ranges and would prefer not to change these. Users should be directed to the closest available application endpoint. Which actions should a solutions architect take? (Select TWO.)
A. Create an Amazon Route 53 geolocation based routing policy
B. Create an AWS Global Accelerator and attach endpoints in each AWS Region
C. Assign new static anycast IP addresses and modify any existing pointers
D. Migrate both public IP addresses to the AWS Global Accelerator
E. Create PTR records to map existing public IP addresses to an Alias
B. Create an AWS Global Accelerator and attach endpoints in each AWS Region
D. Migrate both public IP addresses to the AWS Global Accelerator
Explanation:
AWS Global Accelerator uses static IP addresses as fixed entry points for your application. You can migrate up to two /24 IPv4 address ranges and choose which /32 IP addresses to use when you create your accelerator. This solution ensures the company can continue using the same IP addresses and they are able to direct traffic to the application endpoint in the AWS Region closest to the end user. Traffic is sent over the AWS global network for consistent performance. CORRECT: “Create an AWS Global Accelerator and attach endpoints in each AWS Region” is a correct answer. CORRECT: “Migrate both public IP addresses to the AWS Global Accelerator” is also a correct answer. INCORRECT: “Create an Amazon Route 53 geolocation based routing policy” is incorrect. With this solution new IP addresses will be required as there will be application endpoints in different regions. INCORRECT: “Assign new static anycast IP addresses and modify any existing pointers” is incorrect. This is unnecessary as you can bring your own IP addresses to AWS Global Accelerator and this is preferred in this scenario. INCORRECT: “Create PTR records to map existing public IP addresses to an Alias” is incorrect. This is not a workable solution for mapping existing IP addresses to an Amazon Route 53 Alias.
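A rough boto3 sketch of how this could be wired up, assuming the two /24 ranges have already been brought to AWS Global Accelerator through the bring-your-own-IP process; the addresses, ARNs, and Regions below are placeholders:

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")  # Global Accelerator API endpoint

# Create the accelerator using the two existing (already provisioned BYOIP) addresses
accelerator = ga.create_accelerator(
    Name="retail-app",
    IpAddressType="IPV4",
    IpAddresses=["203.0.113.10", "198.51.100.10"],  # placeholder BYOIP addresses
    Enabled=True,
)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region, each pointing at that Region's Network Load Balancer
for region, nlb_arn in [("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/app-a/111"),
                        ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/app-b/222")]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```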
Three Amazon VPCs are used by a company in the same region. The company has two AWS Direct Connect connections to two separate company offices and wishes to share these with all three VPCs. A Solutions Architect has created an AWS Direct Connect gateway. How can the required connectivity be configured?
A. Associate the Direct Connect gateway to a transit gateway
B. Associate the Direct Connect gateway to a virtual private gateway in each VPC
C. Create a VPC peering connection between the VPCs and route entries for the Direct Connect Gateway
D. Create a transit virtual interface between the Direct Connect gateway and each VPC
A. Associate the Direct Connect gateway to a transit gateway
Explanation:
You can manage a single connection for multiple VPCs or VPNs that are in the same Region by associating a Direct Connect gateway to a transit gateway. The solution involves the following components: a transit gateway that has VPC attachments; a Direct Connect gateway; an association between the Direct Connect gateway and the transit gateway; and a transit virtual interface that is attached to the Direct Connect gateway. CORRECT: “Associate the Direct Connect gateway to a transit gateway” is the correct answer. INCORRECT: “Associate the Direct Connect gateway to a virtual private gateway in each VPC” is incorrect. For VPCs in the same Region a virtual private gateway in each VPC is not necessary; a transit gateway can be used instead. INCORRECT: “Create a VPC peering connection between the VPCs and route entries for the Direct Connect Gateway” is incorrect. You cannot add route entries for a Direct Connect gateway to each VPC and enable routing. Use a transit gateway instead. INCORRECT: “Create a transit virtual interface between the Direct Connect gateway and each VPC” is incorrect. The transit virtual interface is attached to the Direct Connect gateway on the connection side, not the VPC/transit gateway side.
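A very rough boto3 sketch of the moving parts, assuming the transit gateway and Direct Connect gateway already exist; all IDs and the allowed prefix are placeholders, and the association call is shown from memory, so verify the parameter names against the current API:

```python
import boto3

ec2 = boto3.client("ec2")
dx = boto3.client("directconnect")

# Attach each of the three VPCs to the transit gateway (placeholder IDs)
for vpc_id, subnet_ids in [("vpc-aaa", ["subnet-a1"]),
                           ("vpc-bbb", ["subnet-b1"]),
                           ("vpc-ccc", ["subnet-c1"])]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId="tgw-0123456789abcdef0",
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )

# Associate the Direct Connect gateway with the transit gateway
dx.create_direct_connect_gateway_association(
    directConnectGatewayId="dxgw-placeholder-id",
    gatewayId="tgw-0123456789abcdef0",
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/16"}],  # prefixes advertised on-premises
)
```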
A retail organization sends coupons out twice a week and this results in a predictable surge in sales traffic. The application runs on Amazon EC2 instances behind an Elastic Load Balancer. The organization is looking for ways to lower costs while ensuring they meet the demands of their customers. How can they achieve this goal?
A. Use capacity reservations with savings plans
B. Use a mixture of spot instances and on demand instances
C. Increase the instance size of the existing EC2 instances
D. Purchase Amazon EC2 dedicated hosts
A. Use capacity reservations with savings plans
Explanation:
On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. By creating Capacity Reservations, you ensure that you always have access to EC2 capacity when you need it, for as long as you need it. When used in combination with Savings Plans, you also gain the advantage of reduced cost. CORRECT: “Use capacity reservations with savings plans” is the correct answer. INCORRECT: “Use a mixture of spot instances and on demand instances” is incorrect. You can mix Spot and On-Demand Instances in an Auto Scaling group. However, Spot capacity can be interrupted or unavailable, and this is a regular, predictable increase in traffic. INCORRECT: “Increase the instance size of the existing EC2 instances” is incorrect. This would add cost all of the time rather than catering for the temporary increases in traffic. INCORRECT: “Purchase Amazon EC2 dedicated hosts” is incorrect. This is not a way to save cost as Dedicated Hosts are much more expensive than shared tenancy instances.
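As a minimal boto3 sketch (all values are placeholders), a Capacity Reservation covering the sales-surge window might be created like this; the Savings Plan itself is purchased separately, for example through the console or the savingsplans API:

```python
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")

# Reserve capacity ahead of the predictable coupon-driven surge (placeholder values)
ec2.create_capacity_reservation(
    InstanceType="m5.large",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=10,
    EndDateType="limited",                              # release the capacity automatically afterwards
    EndDate=datetime(2024, 6, 30, tzinfo=timezone.utc), # placeholder end of the surge window
)
```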
Over 500 TB of data must be analyzed using standard SQL business intelligence tools. The dataset consists of a combination of structured data and unstructured data. The unstructured data is small and stored on Amazon S3. Which AWS services are most suitable for performing analytics on the data?
A. Amazon RDS MariaDB with Amazon Athena
B. Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)
C. Amazon ElastiCache for Redis with cluster mode enabled
D. Amazon Redshift with Amazon Redshift Spectrum
D. Amazon Redshift with Amazon Redshift Spectrum
Explanation:
Amazon Redshift is an enterprise-level, petabyte scale, fully managed data warehousing service. An Amazon Redshift data warehouse is an enterprise-class relational database query and management system. Redshift supports client connections with many types of applications, including business intelligence (BI), reporting, data, and analytics tools. Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Redshift Spectrum queries employ massive parallelism to execute very fast against large datasets. Used together, Redshift and Redshift Spectrum are suitable for running massive analytics jobs on both the structured (Redshift data warehouse) and unstructured (Amazon S3) data. CORRECT: “Amazon Redshift with Amazon Redshift Spectrum” is the correct answer. INCORRECT: “Amazon RDS MariaDB with Amazon Athena” is incorrect. Amazon RDS is not suitable for analytics (OLAP) use cases as it is designed for transactional (OLTP) use cases. Athena can however be used for running SQL queries on data on S3. INCORRECT: “Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)” is incorrect. This is an example of a non-relational DB with a caching layer and is not suitable for an OLAP use case. INCORRECT: “Amazon ElastiCache for Redis with cluster mode enabled” is incorrect. This is an example of an in-memory caching service. It is good for performance for transactional use cases.
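A hedged sketch of what querying S3-resident data through Redshift Spectrum can look like, driven from Python via the Redshift Data API; the cluster, schema, Glue database, IAM role, and table names are all assumptions for illustration:

```python
import boto3

rsd = boto3.client("redshift-data")

# Define an external schema backed by the Glue Data Catalog, then query the S3-resident
# data without loading it into Redshift (identifiers and the role ARN are placeholders).
statements = [
    """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
    FROM DATA CATALOG DATABASE 'clickstream'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS
    """,
    "SELECT customer_id, COUNT(*) FROM spectrum.click_events GROUP BY customer_id",
]

for sql in statements:
    rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="dev",
        DbUser="analyst",
        Sql=sql,
    )
```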
An application is being monitored using Amazon GuardDuty. A Solutions Architect needs to be notified by email of medium to high severity events. How can this be achieved?
A. Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric
B. Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic
C. Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function
D. Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity
B. Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic
Explanation:
A CloudWatch Events rule can be used to set up automatic email notifications for medium to high severity findings to the email address of your choice. You simply create an Amazon SNS topic and then associate it with an Amazon CloudWatch Events rule. CORRECT: “Create an Amazon CloudWatch Events rule that triggers an Amazon SNS topic” is the correct answer. INCORRECT: “Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric” is incorrect. There is no metric for GuardDuty that can be used for specific findings. INCORRECT: “Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function” is incorrect. CloudWatch Logs is not the right CloudWatch service to use; CloudWatch Events is used for reacting to changes in service state. INCORRECT: “Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity” is incorrect. CloudTrail cannot be used to trigger alarms based on GuardDuty API activity.
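A minimal boto3 sketch of the rule-and-topic wiring, assuming the SNS topic (with an email subscription) already exists and its ARN is a placeholder; the severity filter uses EventBridge numeric matching against GuardDuty's medium (4.0-6.9) and high (7.0-8.9) ranges:

```python
import json
import boto3

events = boto3.client("events")

# Match GuardDuty findings with severity >= 4 (medium and high)
events.put_rule(
    Name="guardduty-medium-high",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 4]}]},
    }),
    State="ENABLED",
)

# Send matching findings to an SNS topic that emails the Solutions Architect (placeholder ARN)
events.put_targets(
    Rule="guardduty-medium-high",
    Targets=[{"Id": "sns-email", "Arn": "arn:aws:sns:us-east-1:123456789012:guardduty-alerts"}],
)
```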
A company is migrating a decoupled application to AWS. The application uses a message broker based on the MQTT protocol. The application will be migrated to Amazon EC2 instances and the solution for the message broker must not require rewriting application code. Which AWS service can be used for the migrated message broker?
A. Amazon SQS
B. Amazon SNS
C. Amazon MQ
D. AWS Step Functions
C. Amazon MQ
Explanation:
Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Connecting current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that in most cases, there’s no need to rewrite any messaging code when you migrate to AWS. CORRECT: “Amazon MQ” is the correct answer. INCORRECT: “Amazon SQS” is incorrect. This is an Amazon proprietary service and does not support industry-standard messaging APIs and protocols. INCORRECT: “Amazon SNS” is incorrect. This is a notification service not a message bus. INCORRECT: “AWS Step Functions” is incorrect. This is a workflow orchestration service, not a message bus.
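To illustrate why no code rewrite is needed, a standard MQTT client (here the paho-mqtt Python library) can simply be pointed at the Amazon MQ broker endpoint; the endpoint, credentials, and topic below are placeholders:

```python
import ssl
import paho.mqtt.client as mqtt

# The same MQTT client code used against the old broker works against Amazon MQ;
# only the endpoint and credentials change (all values below are placeholders).
client = mqtt.Client(client_id="sensor-01")
client.username_pw_set("mq_user", "mq_password")
client.tls_set(tls_version=ssl.PROTOCOL_TLS_CLIENT)

client.connect("b-1234abcd-1.mq.us-east-1.amazonaws.com", port=8883)  # MQTT over TLS
client.publish("factory/telemetry", payload='{"temp": 21.4}', qos=1)
client.disconnect()
```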
An HR application stores employment records on Amazon S3. Regulations mandate that the records are retained for seven years. Once created, the records are accessed infrequently for the first three months and then must be available within 10 minutes if required thereafter. Which lifecycle action meets the requirements whilst MINIMIZING cost?
A. Store the data in S3 Standard for 3 months, then transition to S3 Glacier
B. Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier
C. Store the data in S3 Standard for 3 months, then transition to S3 Standard-IA
D. Store the data in S3 Intelligent Tiering for 3 months, then transition to S3 Standard-IA
B. Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier
Explanation:
The most cost-effective solution is to first store the data in S3 Standard-IA, where it will be infrequently accessed for the first three months. Then, once the three months have elapsed, transition the data to S3 Glacier where it can be stored at lower cost for the remainder of the seven-year period. Expedited retrieval can bring retrieval times down to 1-5 minutes, which satisfies the 10 minute requirement. CORRECT: “Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier” is the correct answer. INCORRECT: “Store the data in S3 Standard for 3 months, then transition to S3 Glacier” is incorrect. S3 Standard is more costly than S3 Standard-IA and the data is only accessed infrequently. INCORRECT: “Store the data in S3 Standard for 3 months, then transition to S3 Standard-IA” is incorrect. Neither storage class in this answer is the most cost-effective option. INCORRECT: “Store the data in S3 Intelligent Tiering for 3 months, then transition to S3 Standard-IA” is incorrect. Intelligent-Tiering moves data between tiers based on access patterns; this is more costly and better suited to access patterns that are unknown or unpredictable.
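A hedged boto3 sketch of such a lifecycle rule, assuming objects are uploaded directly with the STANDARD_IA storage class; the bucket name and prefix are placeholders, and 2,555 days is used as an approximation of seven years:

```python
import boto3

s3 = boto3.client("s3")

# Objects are uploaded with StorageClass="STANDARD_IA"; this rule then moves them to
# Glacier after 90 days and deletes them after roughly seven years (placeholder bucket).
s3.put_bucket_lifecycle_configuration(
    Bucket="hr-employment-records",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "records-retention",
                "Filter": {"Prefix": "records/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```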
A highly elastic application consists of three tiers. The application tier runs in an Auto Scaling group and processes data and writes it to an Amazon RDS MySQL database. The Solutions Architect wants to restrict access to the database tier to only accept traffic from the instances in the application tier. However, instances in the application tier are being constantly launched and terminated. How can the Solutions Architect configure secure access to the database tier?
A. Configure the database security group to allow traffic only from the application security group
B. Configure the database security group to allow traffic only from port 3306
C. Configure a Network ACL on the database subnet to deny all traffic to ports other than 3306
D. Configure a Network ACL on the database subnet to allow all traffic from the application subnet
A. Configure the database security group to allow traffic only from the application security group
Explanation:
The best option is to configure the database security group to only allow traffic that originates from the application security group. You can also define the destination port as the database port. This setup will allow any instance that is launched and attached to this security group to connect to the database. CORRECT: “Configure the database security group to allow traffic only from the application security group” is the correct answer. INCORRECT: “Configure the database security group to allow traffic only from port 3306” is incorrect. Port 3306 for MySQL should be the destination port, not the source. INCORRECT: “Configure a Network ACL on the database subnet to deny all traffic to ports other than 3306” is incorrect. This does not restrict access specifically to the application instances. INCORRECT: “Configure a Network ACL on the database subnet to allow all traffic from the application subnet” is incorrect. This does not restrict access specifically to the application instances.
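A minimal boto3 sketch of a rule that references the application tier's security group as the source, so it applies to whatever instances currently carry that group (both group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic to the database security group only when the source is the
# application tier's security group, regardless of which instances are running.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db11111111111111",  # database tier security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0app2222222222222"}],  # app tier SG (placeholder)
    }],
)
```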
A Solutions Architect is rearchitecting an application to use a decoupled design. The application will send batches of up to 1,000 messages per second that must be received in the correct order by the consumers. Which action should the Solutions Architect take?
A. Create an Amazon SQS Standard queue
B. Create an Amazon SNS topic
C. Create an Amazon SQS FIFO queue
D. Create an AWS Step Functions state machine
C. Create an Amazon SQS FIFO queue
Explanation:
Only FIFO queues guarantee the ordering of messages and therefore a standard queue would not work. The FIFO queue supports up to 3,000 messages per second with batching so this is a supported scenario. CORRECT: “Create an Amazon SQS FIFO queue” is the correct answer. INCORRECT: “Create an Amazon SQS Standard queue” is incorrect as it does not guarantee ordering of messages. INCORRECT: “Create an Amazon SNS topic” is incorrect. SNS is a notification service and a message queue is a better fit for this use case. INCORRECT: “Create an AWS Step Functions state machine” is incorrect. Step Functions is a workflow orchestration service and is not useful for this scenario.
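A brief boto3 sketch (the queue name and message group are illustrative): FIFO queue names must end in .fifo, and ordering is preserved per MessageGroupId.

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; content-based deduplication avoids having
# to supply an explicit MessageDeduplicationId for every message.
queue = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

# Batched sends (up to 10 per call) preserve order within a message group.
sqs.send_message_batch(
    QueueUrl=queue["QueueUrl"],
    Entries=[
        {"Id": str(i), "MessageBody": f"order-{i}", "MessageGroupId": "customer-42"}
        for i in range(10)
    ],
)
```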
A Solutions Architect is designing an application that consists of AWS Lambda and Amazon RDS Aurora MySQL. The Lambda function must use database credentials to authenticate to MySQL, and the security policy mandates that these credentials must not be stored in the function code. How can the Solutions Architect securely store the database credentials and make them available to the function?
A. Store the credentials in AWS Key Management Service and use environment variables in the function code pointing to KMS
B. Store the credentials in Systems Manager Parameter Store and update the function code and execution role
C. Use the AWSAuthenticationPlugin and associate an IAM user account in the MySQL database
D. Create an IAM policy and store the credentials in the policy. Attach the policy to the Lambda function execution role
B. Store the credentials in Systems Manager Parameter Store and update the function code and execution role
Explanation:
In this case the scenario requires that credentials are used for authenticating to MySQL. The credentials need to be securely stored outside of the function code. Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management, and parameters can easily be referenced from services including AWS Lambda. CORRECT: “Store the credentials in Systems Manager Parameter Store and update the function code and execution role” is the correct answer. INCORRECT: “Store the credentials in AWS Key Management Service and use environment variables in the function code pointing to KMS” is incorrect. You cannot store credentials in KMS; it is used for creating and managing encryption keys. INCORRECT: “Use the AWSAuthenticationPlugin and associate an IAM user account in the MySQL database” is incorrect. This is a great way to securely authenticate to RDS using IAM users or roles. However, in this case the scenario requires database credentials to be used by the function. INCORRECT: “Create an IAM policy and store the credentials in the policy. Attach the policy to the Lambda function execution role” is incorrect. You cannot store credentials in IAM policies.
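A minimal sketch of a Lambda handler that reads a SecureString parameter at runtime; the parameter name is a placeholder, the execution role would need ssm:GetParameter (and kms:Decrypt for the key used), and the pymysql dependency is assumed to be packaged with the function:

```python
import json
import boto3
import pymysql  # assumed to be bundled as a function dependency

ssm = boto3.client("ssm")

def handler(event, context):
    # Fetch and decrypt the stored credentials (placeholder parameter name)
    param = ssm.get_parameter(Name="/app/db/credentials", WithDecryption=True)
    creds = json.loads(param["Parameter"]["Value"])

    conn = pymysql.connect(
        host=creds["host"], user=creds["username"],
        password=creds["password"], database=creds["dbname"],
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        return {"ok": cur.fetchone()[0] == 1}
```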
A company is finalizing their disaster recovery plan. A limited set of core services will be replicated to the DR site, ready to seamlessly take over in the event of a disaster. All other services will be switched off. Which DR strategy is the company using?
A. Backup and restore
B. Pilot light
C. Warm standby
D. Multi-site
B. Pilot light
Explanation:
In this DR approach, you simply replicate part of your IT infrastructure for a limited set of core services so that the AWS cloud environment seamlessly takes over in the event of a disaster. A small part of your infrastructure is always running, simultaneously syncing mutable data (such as databases or documents), while other parts of your infrastructure are switched off and used only during testing. Unlike a backup and restore approach, you must ensure that your most critical core elements are already configured and running in AWS (the pilot light). When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core. CORRECT: “Pilot light” is the correct answer. INCORRECT: “Backup and restore” is incorrect. This is the lowest cost DR approach and simply entails creating online backups of all data and applications. INCORRECT: “Warm standby” is incorrect. The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud. INCORRECT: “Multi-site” is incorrect. A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration.
An application that runs a computational fluid dynamics workload uses a tightly coupled HPC architecture based on the MPI protocol and runs across many nodes. A service-managed deployment is required to minimize operational overhead. Which deployment option is MOST suitable for provisioning and managing the resources required for this use case?
A. Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets
B. Use AWS CloudFormation to deploy a Cluster Placement Group on EC2
C. Use AWS Batch to deploy a multi-node parallel job
D. Use AWS Elastic Beanstalk to provision and manage the EC2 instances
C. Use AWS Batch to deploy a multi-node parallel job
Explanation:
AWS Batch multi-node parallel jobs enable you to run single jobs that span multiple Amazon EC2 instances. With AWS Batch multi-node parallel jobs, you can run large-scale, tightly coupled, high performance computing applications and distributed GPU model training without the need to launch, configure, and manage Amazon EC2 resources directly. An AWS Batch multi-node parallel job is compatible with any framework that supports IP-based, internode communication, such as Apache MXNet, TensorFlow, Caffe2, or Message Passing Interface (MPI). This is the most efficient approach to deploy the resources required and supports the application requirements most effectively. CORRECT: “Use AWS Batch to deploy a multi-node parallel job” is the correct answer. INCORRECT: “Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets” is incorrect. This is not the best solution for a tightly-coupled HPC workload with specific requirements such as MPI support. INCORRECT: “Use AWS CloudFormation to deploy a Cluster Placement Group on EC2” is incorrect. This would deploy a cluster placement group but not manage it. AWS Batch is a better fit for large scale workloads such as this. INCORRECT: “Use AWS Elastic Beanstalk to provision and manage the EC2 instances” is incorrect. You can certainly provision and manage EC2 instances with Elastic Beanstalk but this scenario is for a specific workload that requires MPI support and managing an HPC deployment across a large number of nodes. AWS Batch is more suitable.
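A hedged boto3 sketch of registering a multi-node parallel job definition; the container image, resource sizes, node count, and command are illustrative assumptions only:

```python
import boto3

batch = boto3.client("batch")

# Register a multi-node parallel job definition: node 0 is the main node and all
# nodes run the same MPI-enabled container image (placeholder values throughout).
batch.register_job_definition(
    jobDefinitionName="cfd-mpi-job",
    type="multinode",
    nodeProperties={
        "numNodes": 8,
        "mainNode": 0,
        "nodeRangeProperties": [{
            "targetNodes": "0:7",
            "container": {
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/cfd-mpi:latest",
                "vcpus": 16,
                "memory": 65536,
                "command": ["mpirun", "/app/solver"],
            },
        }],
    },
)
```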
A Solutions Architect is designing an application that will run on an Amazon EC2 instance. The application must asynchronously invoke an AWS Lambda function to analyze thousands of .CSV files. The services should be decoupled. Which service can be used to decouple the compute services?
A. Amazon SWF
B. Amazon SNS
C. Amazon Kinesis
D. Amazon OpsWorks
B. Amazon SNS
Explanation:
You can use a Lambda function to process Amazon Simple Notification Service notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. This solution decouples the Amazon EC2 application from Lambda and ensures the Lambda function is invoked. CORRECT: “Amazon SNS” is the correct answer. INCORRECT: “Amazon SWF” is incorrect. The Simple Workflow Service (SWF) is used for process automation. It is not well suited to this requirement. INCORRECT: “Amazon Kinesis” is incorrect as this service is used for ingesting and processing real time streaming data, it is not a suitable service to be used solely for invoking a Lambda function. INCORRECT: “Amazon OpsWorks” is incorrect as this service is used for configuration management of systems using Chef or Puppet.
A large MongoDB database running on-premises must be migrated to Amazon DynamoDB within the next few weeks. The database is too large to migrate over the company’s limited internet bandwidth so an alternative solution must be used. What should a Solutions Architect recommend?
A. Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the AWS Database Migration Service (DMS)
B. Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB
C. Enable compression on the MongoDB database and use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon DynamoDB
D. Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the AWS Cloud
B. Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB
Explanation:
Larger data migrations with AWS DMS can include many terabytes of information. This process can be cumbersome due to network bandwidth limits or just the sheer amount of data. AWS DMS can use Snowball Edge and Amazon S3 to migrate large databases more quickly than by other methods. When you’re using an Edge device, the data migration process has the following stages: You use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally and move it to an Edge device. You ship the Edge device or devices back to AWS. After AWS receives your shipment, the Edge device automatically loads its data into an Amazon S3 bucket. AWS DMS takes the files and migrates the data to the target data store. If you are using change data capture (CDC), those updates are written to the Amazon S3 bucket and then applied to the target data store. CORRECT: “Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon DynamoDB” is the correct answer. INCORRECT: “Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the AWS Database Migration Service (DMS)” is incorrect as Direct Connect connections can take several weeks to implement. INCORRECT: “Enable compression on the MongoDB database and use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon DynamoDB” is incorrect. It’s unlikely that compression is going to make the difference and the company wants to avoid the internet link as stated in the scenario. INCORRECT: “Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the AWS Cloud” is incorrect. This is the wrong method; the Solutions Architect should use the SCT to extract and load to Snowball Edge and then use AWS DMS in the AWS Cloud.
Every time an item in an Amazon DynamoDB table is modified, a record must be retained for compliance reasons. What is the most efficient solution for recording this information?
A. Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log files and record deleted item data to an Amazon S3 bucket
B. Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket
C. Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the CloudTrail log files and record changed items in another DynamoDB table
D. Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and save the output directly to an Amazon S3 bucket
B. Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket
Explanation:
Amazon DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time. For example, a DynamoDB stream can be consumed by a Lambda function that processes the item data and records it to a durable store. CORRECT: “Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and record the modified item data to an Amazon S3 bucket” is the correct answer. INCORRECT: “Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log files and record deleted item data to an Amazon S3 bucket” is incorrect. The deleted item data will not be recorded in CloudWatch Logs. INCORRECT: “Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the CloudTrail log files and record changed items in another DynamoDB table” is incorrect. CloudTrail records API actions so it will not record the data from the item that was modified. INCORRECT: “Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and save the output directly to an Amazon S3 bucket” is incorrect. Global Tables is used for creating a multi-region, multi-master database. It is of no additional value for this requirement as you could just enable DynamoDB Streams on the main table. You also cannot save modified data straight to an S3 bucket.
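A minimal sketch of a stream-processing Lambda handler, assuming the stream view type is NEW_AND_OLD_IMAGES; the bucket name is a placeholder, the function is wired to the table's stream with an event source mapping, and it needs s3:PutObject permission:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "compliance-item-history"  # placeholder bucket name

def handler(event, context):
    # Each stream record carries the item images before and after the modification
    for record in event["Records"]:
        change = {
            "eventName": record["eventName"],            # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "old": record["dynamodb"].get("OldImage"),
            "new": record["dynamodb"].get("NewImage"),
        }
        s3.put_object(
            Bucket=BUCKET,
            Key=f"changes/{record['eventID']}.json",
            Body=json.dumps(change),
        )
```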
An application in a private subnet needs to query data in an Amazon DynamoDB table. Use of the DynamoDB public endpoints must be avoided. What is the most EFFICIENT and secure method of enabling access to the table?
A. Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)
B. Create a gateway VPC endpoint and add an entry to the route table
C. Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN
D. Create a software VPN between DynamoDB and the application in the private subnet
B. Create a gateway VPC endpoint and add an entry to the route table
Explanation:
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. With a gateway endpoint you configure your route table to point to the endpoint. Amazon S3 and DynamoDB use gateway endpoints. CORRECT: “Create a gateway VPC endpoint and add an entry to the route table” is the correct answer. INCORRECT: “Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)” is incorrect. This would be used for services that are supported by interface endpoints, not gateway endpoints. INCORRECT: “Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN” is incorrect. You cannot create an Amazon DynamoDB private endpoint and connect to it over VPN. Private endpoints are VPC endpoints and are connected to by instances in subnets via route table entries or via ENIs (depending on which service). INCORRECT: “Create a software VPN between DynamoDB and the application in the private subnet” is incorrect. You cannot create a software VPN between DynamoDB and an application.
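A minimal boto3 sketch of creating the gateway endpoint and associating it with the private subnet's route table (all IDs and the Region in the service name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint for DynamoDB; the specified route tables automatically get a
# route to the service's prefix list, so the public endpoint is never used.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                   # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",  # adjust to your Region
    RouteTableIds=["rtb-0123456789abcdef0"],         # private subnet's route table
)
```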
A Solutions Architect needs to select a low-cost, short-term option for adding resilience to an AWS Direct Connect connection. What is the MOST cost-effective solution to provide a backup for the Direct Connect connection?
A. Implement a second AWS Direct Connect connection
B. Implement an IPSec VPN connection and use the same BGP prefix
C. Configure AWS Transit Gateway with an IPSec VPN backup
D. Configure an IPSec VPN connection over the Direct Connect link
B. Implement an IPSec VPN connection and use the same BGP prefix
Explanation:
This is the most cost-effective solution. With this option both the Direct Connect connection and IPSec VPN are active and being advertised using the Border Gateway Protocol (BGP). The Direct Connect link will always be preferred unless it is unavailable. CORRECT: “Implement an IPSec VPN connection and use the same BGP prefix” is the correct answer. INCORRECT: “Implement a second AWS Direct Connect connection” is incorrect. This is not a short-term or low-cost option as it takes time to implement and is costly. INCORRECT: “Configure AWS Transit Gateway with an IPSec VPN backup” is incorrect. This is a workable solution and provides some advantages. However, you do need to pay for the Transit Gateway so it is not the most cost-effective option and probably not suitable for a short-term need. INCORRECT: “Configure an IPSec VPN connection over the Direct Connect link” is incorrect. This is not a solution to the problem as the VPN connection is going over the Direct Connect link. This is something you might do to add encryption to Direct Connect but it doesn’t make it more resilient.
The disk configuration for an Amazon EC2 instance must be finalized. The instance will be running an application that requires heavy read/write IOPS. A single volume is required that is 500 GiB in size and needs to support 20,000 IOPS. What EBS volume type should be selected?
A. EBS General Purpose SSD
B. EBS Provisioned IOPS SSD
C. EBS General Purpose SSD in a RAID 1 configuration
D. EBS Throughput Optimized HDD
B. EBS Provisioned IOPS SSD
Explanation:
This is simply about understanding the performance characteristics of the different EBS volume types. The only EBS volume type listed that supports over 16,000 IOPS per volume is Provisioned IOPS SSD. SSD, General Purpose (gp2): volume size 1 GiB to 16 TiB, max 16,000 IOPS per volume. SSD, Provisioned IOPS (io1): volume size 4 GiB to 16 TiB, max 64,000 IOPS per volume. HDD, Throughput Optimized (st1): volume size 500 GiB to 16 TiB, throughput measured in MB/s, with the ability to burst up to 250 MB/s per TB, a baseline throughput of 40 MB/s per TB, and a maximum throughput of 500 MB/s per volume. HDD, Cold (sc1): volume size 500 GiB to 16 TiB, lowest cost storage that cannot be a boot volume; these volumes can burst up to 80 MB/s per TB, with a baseline throughput of 12 MB/s per TB and a maximum throughput of 250 MB/s per volume. HDD, Magnetic (standard): cheap, infrequently accessed storage; lowest cost storage that can be a boot volume. CORRECT: “EBS Provisioned IOPS SSD” is the correct answer. INCORRECT: “EBS General Purpose SSD” is incorrect as the max IOPS is 16,000. INCORRECT: “EBS General Purpose SSD in a RAID 1 configuration” is incorrect. RAID 1 is mirroring and does not increase the amount of IOPS you can generate. INCORRECT: “EBS Throughput Optimized HDD” is incorrect as this type of disk does not support the IOPS requirement.
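Provisioning such a volume is a single boto3 call (the Availability Zone is a placeholder); io1 supports up to 50 IOPS per GiB, so 500 GiB comfortably covers the 20,000 IOPS requirement:

```python
import boto3

ec2 = boto3.client("ec2")

# 500 GiB Provisioned IOPS SSD volume delivering 20,000 IOPS (placeholder AZ)
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,
    VolumeType="io1",
    Iops=20000,
)
```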
A new application you are designing will store data in an Amazon Aurora MySQL DB. You are looking for a way to enable inter-region disaster recovery capabilities with fast replication and fast failover. Which of the following options is the BEST solution?
A. Use Amazon Aurora Global Database
B. Enable Multi-AZ for the Aurora DB
C. Create an EBS backup of the Aurora volumes and use cross-region replication to copy the snapshot
D. Create a cross-region Aurora Read Replica
A. Use Amazon Aurora Global Database
Explanation:
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages. Aurora Global Database uses storage-based replication with typical latency of less than 1 second, using dedicated infrastructure that leaves your database fully available to serve application workloads. In the unlikely event of a regional degradation or outage, one of the secondary regions can be promoted to full read/write capabilities in less than 1 minute. CORRECT: “Use Amazon Aurora Global Database” is the correct answer. INCORRECT: “Enable Multi-AZ for the Aurora DB” is incorrect. Enabling Multi-AZ for the Aurora DB would provide AZ-level resiliency within the region, not across regions. INCORRECT: “Create an EBS backup of the Aurora volumes and use cross-region replication to copy the snapshot” is incorrect. Though you can take a DB snapshot and replicate it across regions, it does not provide an automated solution and it would not enable fast failover. INCORRECT: “Create a cross-region Aurora Read Replica” is incorrect. This solution would not provide the fast storage replication and fast failover capabilities of the Aurora Global Database and is therefore not the best option.
A Solutions Architect regularly launches EC2 instances manually from the console and wants to streamline the process to reduce administrative overhead. Which feature of EC2 enables storing of settings such as AMI ID, instance type, key pairs and Security Groups?
A. Placement Groups
B. Launch Templates
C. Run Command
D. Launch Configurations
B. Launch Templates
Explanation:
Launch templates enable you to store launch parameters so that you do not have to specify them every time you launch an instance. When you launch an instance using the Amazon EC2 console, an AWS SDK, or a command line tool, you can specify the launch template to use. CORRECT: “Launch Templates” is the correct answer. INCORRECT: “Placement Groups” is incorrect. You can launch or start instances in a placement group, which determines how instances are placed on underlying hardware. INCORRECT: “Run Command” is incorrect. Run Command automates common administrative tasks, and lets you perform ad hoc configuration changes at scale. INCORRECT: “Launch Configurations” is incorrect. Launch Configurations are used with Auto Scaling groups.
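A short boto3 sketch, with all IDs and names as placeholders, of storing those settings in a launch template and then launching from it:

```python
import boto3

ec2 = boto3.client("ec2")

# Store the AMI ID, instance type, key pair, and security groups once (placeholders)
ec2.create_launch_template(
    LaunchTemplateName="web-server",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro",
        "KeyName": "my-key-pair",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# Later launches just reference the template instead of repeating the settings
ec2.run_instances(
    LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
    MinCount=1,
    MaxCount=1,
)
```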
You recently noticed that your Network Load Balancer (NLB) in one of your VPCs is not distributing traffic evenly between EC2 instances in your AZs. There are an odd number of EC2 instances spread across two AZs. The NLB is configured with a TCP listener on port 80 and is using active health checks. What is the most likely problem?
A. There is no HTTP listener
B. Health checks are failing in one AZ due to latency
C. NLB can only load balance within a single AZ
D. Cross-zone load balancing is disabled
D. Cross-zone load balancing is disabled
Explanation:
Without cross-zone load balancing enabled, the NLB will distribute traffic 50/50 between AZs. As there are an odd number of instances across the two AZs some instances will not receive any traffic. Therefore enabling cross-zone load balancing will ensure traffic is distributed evenly between available instances in all AZs. CORRECT: “Cross-zone load balancing is disabled” is the correct answer. INCORRECT: “There is no HTTP listener” is incorrect. Listeners are used to receive incoming connections. An NLB listens on TCP, not HTTP, so the absence of an HTTP listener is not the issue here. INCORRECT: “Health checks are failing in one AZ due to latency” is incorrect. If health checks fail this will cause the NLB to stop sending traffic to these instances. However, the health check packets are very small and it is unlikely that latency would be the issue within a region. INCORRECT: “NLB can only load balance within a single AZ” is incorrect. An NLB can load balance across multiple AZs just like the other ELB types.
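Enabling the attribute is a single boto3 call (the load balancer ARN is a placeholder):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Turn on cross-zone load balancing for the NLB (ARN is a placeholder)
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc123",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```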
A Solutions Architect is creating a design for a multi-tiered serverless application. Which two services form the application-facing services of the AWS serverless infrastructure? (Select TWO.)
A. API Gateway
B. AWS Cognito
C. AWS Lambda
D. Amazon ECS
E. Elastic Load Balancer
A. API Gateway
C. AWS Lambda
Explanation:
The only application services here are API Gateway and Lambda and these are considered to be serverless services. CORRECT: “API Gateway” is a correct answer. CORRECT: “AWS Lambda” is also a correct answer. INCORRECT: “AWS Cognito” is incorrect. AWS Cognito is used for providing authentication services for web and mobile apps. INCORRECT: “Amazon ECS” is incorrect. ECS provides the platform for running containers and uses Amazon EC2 instances. INCORRECT: “Elastic Load Balancer” is incorrect. ELB provides distribution of incoming network connections and also uses Amazon EC2 instances.
A Solutions Architect attempted to restart a stopped EC2 instance and it immediately changed from a pending state to a terminated state. What are the most likely explanations? (Select TWO.)
A. You’ve reached your EBS volume limit
B. An EBS snapshot is corrupt
C. AWS does not currently have enough available On-Demand capacity to service your request
D. You have reached the limit on the number of instances that you can launch in a region
E. The AMI is unsupported
A. You’ve reached your EBS volume limit
B. An EBS snapshot is corrupt
Explanation:
The following are a few reasons why an instance might immediately terminate: – You’ve reached your EBS volume limit. – An EBS snapshot is corrupt. – The root EBS volume is encrypted and you do not have permissions to access the KMS key for decryption. – The instance store-backed AMI that you used to launch the instance is missing a required part (an image.part.xx file). CORRECT: “You’ve reached your EBS volume limit” is a correct answer. CORRECT: “An EBS snapshot is corrupt” is also a correct answer. INCORRECT: “AWS does not currently have enough available On-Demand capacity to service your request” is incorrect. If AWS does not have capacity available, an InsufficientInstanceCapacity error will be generated when you try to launch a new instance or restart a stopped instance. INCORRECT: “You have reached the limit on the number of instances that you can launch in a region” is incorrect. If you’ve reached the limit on the number of instances you can launch in a region, you get an InstanceLimitExceeded error when you try to launch a new instance or restart a stopped instance. INCORRECT: “The AMI is unsupported” is incorrect. It is possible that an instance type is not supported by an AMI and this can cause an “UnsupportedOperation” client error. However, in this case the instance was previously running (it was in a stopped state) so it is unlikely that this is the issue.
One of the applications you manage on RDS uses a MySQL DB and has been suffering from performance issues. You would like to set up a reporting process that will perform queries on the database, but you’re concerned that the extra load will further impact the performance of the DB and may lead to poor customer experience. What would be the best course of action to take so you can implement the reporting process?
A. Configure Multi-AZ to setup a secondary database instance in another region
B. Deploy a Read Replica to setup a secondary read-only database instance
C. Deploy a Read Replica to setup a secondary read and write database instance
D. Configure Multi-AZ to setup a secondary database instance in another Availability Zone
B. Deploy a Read Replica to setup a secondary read-only database instance
Explanation:
The reporting process will perform queries on the database but not writes. Therefore you can use a read replica, which will provide a secondary read-only database, and configure the reporting process to use the read replica. Multi-AZ is used for implementing fault tolerance. With Multi-AZ you can fail over to a DB in another AZ within the region in the event of a failure of the primary DB. However, you can only read and write to the primary DB, so you still need a read replica to offload the reporting job. CORRECT: “Deploy a Read Replica to setup a secondary read-only database instance” is the correct answer. INCORRECT: “Configure Multi-AZ to setup a secondary database instance in another region” is incorrect as described above. INCORRECT: “Deploy a Read Replica to setup a secondary read and write database instance” is incorrect. Read replicas are for workload offloading only and do not provide the ability to write to the database. INCORRECT: “Configure Multi-AZ to setup a secondary database instance in another Availability Zone” is incorrect as described above.
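Creating the replica is a single boto3 call (identifiers and instance class are placeholders); the reporting tool is then pointed at the replica's endpoint rather than the primary:

```python
import boto3

rds = boto3.client("rds")

# Create a read-only replica of the primary MySQL instance for reporting queries
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mysql-reporting-replica",  # placeholder replica name
    SourceDBInstanceIdentifier="mysql-primary",      # placeholder source instance
    DBInstanceClass="db.r5.large",
)
```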
A Solutions Architect is building a new Amazon Elastic Container Service (ECS) cluster. The ECS instances are running the EC2 launch type and load balancing is required to distribute connections to the tasks. It is required that the mapping of ports is performed dynamically and connections are routed to different groups of servers based on the path in the URL. Which AWS service should the Solutions Architect choose to fulfil these requirements?
A. An Amazon ECS Service
B. Application Load Balancer
C. Network Load Balancer
D. Classic Load Balancer
B. Application Load Balancer
Explanation:
An ALB allows containers to use dynamic host port mapping so that multiple tasks from the same service are allowed on the same container host. An ALB can also route requests based on the content of the request: host-based routing uses the host field and path-based routing uses the URL path. The NLB and CLB types of Elastic Load Balancer do not support path-based routing or host-based routing so they cannot be used for this use case. CORRECT: “Application Load Balancer” is the correct answer. INCORRECT: “An Amazon ECS Service” is incorrect. An Amazon ECS service enables you to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster. It does not distribute connections to tasks. INCORRECT: “Network Load Balancer” is incorrect as described above. INCORRECT: “Classic Load Balancer” is incorrect as described above.
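A short boto3 sketch of a path-based rule on an ALB listener (ARNs and the path pattern are placeholders); with ECS dynamic port mapping, the ECS service registers its tasks into the target group automatically:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route /api/* requests to the target group used by the API tasks (placeholder ARNs)
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-tasks/123",
    }],
)
```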