Missed Questions Flashcards

1
Q

An online cryptocurrency exchange platform is hosted in AWS using an ECS cluster and RDS in a Multi-AZ Deployments configuration. The application heavily uses the RDS instance to process complex read and write database operations. To maintain the reliability, availability, and performance of your systems, you have to closely monitor how the different processes or threads on a DB instance use the CPU, including the percentage of the CPU bandwidth and total memory consumed by each process.

Which of the following is the most suitable solution to properly monitor your database?

  • Create a script that collects and publishes custom metrics to CloudWatch, which tracks the real-time CPU Utilization of the RDS instance and then set up a custom CloudWatch dashboard to view the metrics
  • Enable Enhanced Monitoring in RDS
  • Check the CPU% and MEM% metrics that are readily available in the Amazon RDS console, which show the percentage of the CPU bandwidth and total memory consumed by each database process of your RDS instance
  • Use Amazon CloudWatch to monitor the CPU Utilization of your database
A
  • Create a script that collects and publishes custom metrics to CloudWatch, which tracks the real-time CPU Utilization of the RDS instance and then set up a custom CloudWatch dashboard to view the metrics (X)
  • Enable Enhanced Monitoring in RDS
  • Check the CPU% and MEM% metrics that are readily available in the Amazon RDS console, which show the percentage of the CPU bandwidth and total memory consumed by each database process of your RDS instance
  • Use Amazon CloudWatch to monitor the CPU Utilization of your database
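Note: Enhanced Monitoring is the correct choice because standard CloudWatch metrics for RDS are gathered from the hypervisor, while Enhanced Monitoring uses an agent on the instance itself and can show per-process CPU and memory. As a rough sketch, it can be enabled from the AWS CLI; the instance identifier and monitoring role ARN below are placeholders, and the role must be assumable by monitoring.rds.amazonaws.com:

  # Enable Enhanced Monitoring with 1-second granularity (valid intervals:
  # 0, 1, 5, 10, 15, 30, 60 seconds; 0 disables it).
  aws rds modify-db-instance \
      --db-instance-identifier my-exchange-db \
      --monitoring-interval 1 \
      --monitoring-role-arn arn:aws:iam::123456789012:role/rds-monitoring-role \
      --apply-immediately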
2
Q

You are an AWS Network Engineer working for a utility provider, managing a monolithic application that runs on an EC2 instance launched from a Windows AMI. The legacy application must keep the same private IP address and MAC address in order to work. You want to implement a cost-effective and highly available architecture for the application by launching a standby EC2 instance that is an exact replica of the Windows server. If the primary instance terminates, you can attach the ENI to the standby secondary instance, which allows the traffic flow to resume within a few seconds.

When it comes to the ENI attachment to an EC2 instance, what does ‘warm attach’ refer to?

  • Attaching an ENI to an instance when it is stopped.
  • Attaching an ENI to an instance when it is idle.
  • Attaching an ENI to an instance during the launch process.
  • Attaching an ENI to an instance when it is running.
A
  • Attaching an ENI to an instance when it is stopped.
  • Attaching an ENI to an instance when it is idle.
  • Attaching an ENI to an instance during the launch process. (X)
  • Attaching an ENI to an instance when it is running.
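Note: a warm attach happens while the instance is stopped, a hot attach while it is running, and a cold attach during the launch process. The attach call itself is the same in every case; a sketch with placeholder IDs, using device index 1 so the ENI becomes a secondary interface:

  # Attach the preserved ENI (it keeps its private IP and MAC address)
  # to the standby instance.
  aws ec2 attach-network-interface \
      --network-interface-id eni-0a1b2c3d4e5f67890 \
      --instance-id i-0standby1234567890 \
      --device-index 1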
3
Q

You are a Solutions Architect working for a large multinational investment bank. They have a web application that requires a minimum of 4 EC2 instances to run to ensure that it can cater to its users across the globe. You are instructed to ensure fault tolerance of this system.

Which of the following is the best option?

  • Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load Balancer.
  • Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an Application Load Balancer.
  • Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Application Load Balancer.
  • Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer.
A
  • Deploy an Auto Scaling group with 4 instances in one Availability Zone behind an Application Load Balancer.
  • Deploy an Auto Scaling group with 1 instance in each of 4 Availability Zones behind an Application Load Balancer.
  • Deploy an Auto Scaling group with 2 instances in each of 2 Availability Zones behind an Application Load Balancer. (X)
  • Deploy an Auto Scaling group with 2 instances in each of 3 Availability Zones behind an Application Load Balancer.
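Note: two instances in each of three Availability Zones yields six in total, so losing an entire AZ still leaves the required four. A sketch of such a group, assuming an existing launch configuration and placeholder subnet and target group identifiers:

  aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name web-asg \
      --launch-configuration-name web-lc \
      --min-size 6 --max-size 6 --desired-capacity 6 \
      --vpc-zone-identifier "subnet-az1a,subnet-az1b,subnet-az1c" \
      --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef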
4
Q

You have a data analytics application that updates a real-time foreign exchange dashboard, and a separate application that archives data to Amazon Redshift. Both applications are configured to consume data from the same stream concurrently and independently by using Amazon Kinesis Data Streams. However, you noticed many occurrences where a shard iterator expires unexpectedly. Upon checking, you found out that the DynamoDB table used by Kinesis does not have enough capacity to store the lease data.

Which of the following is the most suitable solution to rectify this issue?

  • Upgrade the storage capacity of the DynamoDB table.
  • Enable In-Memory Acceleration with DynamoDB Accelerator (DAX).
  • Increase the write capacity assigned to the shard table.
  • Use Amazon Kinesis Data Analytics to properly support the data analytics application instead of Kinesis Data Streams.
A
  • Upgrade the storage capacity of the DynamoDB table.
  • Enable In-Memory Acceleration with DynamoDB Accelerator (DAX).
  • Increase the write capacity assigned to the shard table.
  • Use Amazon Kinesis Data Analytics to properly support the data analytics application instead of Kinesis Data Streams. (X)
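Note: a shard iterator expires immediately if the consumer's DynamoDB lease table lacks the write capacity needed for checkpointing, so the fix is to raise the table's WCU. A sketch, assuming a provisioned-mode lease table named after the consumer application:

  aws dynamodb update-table \
      --table-name my-kcl-application \
      --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=50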
5
Q

You are leading a software development team that uses serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. You have a Lambda function that connects to MongoDB Atlas, a popular Database as a Service (DBaaS) platform, and that also uses a third-party API to fetch certain data for your application. You instructed one of your junior developers to create the environment variables for the MongoDB database hostname, username, and password, as well as the API credentials that will be used by the Lambda function for the DEV, SIT, UAT, and PROD environments.

Considering that the Lambda function is storing sensitive database and API credentials, how can you secure this information to prevent other developers in your team, or anyone, from seeing these credentials in plain text? Select the best option that provides the maximum security.

  • Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the sensitive information
  • Create a new KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information
  • AWS Lambda does not provide encryption for the environment variables. Deploy your code to an EC2 instance instead
  • There is no need to do anything because, by default, AWS Lambda already encrypts the environment variables using the AWS Key Management Service
A
  • Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the sensitive information
  • Create a new KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information
  • AWS Lambda does not provide encryption for the environment variables. Deploy your code to an EC2 instance instead
  • There is no need to do anything because, by default, AWS Lambda already encrypts the environment variables using the AWS Key Management Service (X)
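Note: Lambda does encrypt environment variables at rest by default, but the maximum-security option is a new customer managed KMS key used with the encryption helpers, so the values never appear in plain text to other users. A sketch with a placeholder function name and key ARN:

  # Point the function at a customer managed key instead of the default
  # aws/lambda key; the execution role then needs kms:Decrypt on this key.
  aws lambda update-function-configuration \
      --function-name mongo-connector \
      --kms-key-arn arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555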
6
Q

You recently launched a new FTP server using an On-Demand EC2 instance in a newly created VPC with default settings. The server should not be publicly accessible; it must be reachable only from your IP address, 175.45.116.100.

Which of the following is the most suitable way to implement this requirement?

  • Create a new inbound rule in the security group of the EC2 instance with the following details:
    • Protocol: UDP
    • Port Range: 20 - 21
    • Source: 175.45.116.100/32
  • Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
    • Protocol: TCP
    • Port Range: 20 - 21
    • Source: 175.45.116.100/0
    • Allow/Deny: ALLOW
  • Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
    • Protocol: UDP
    • Port Range: 20 - 21
    • Source: 175.45.116.100/0
    • Allow/Deny: ALLOW
  • Create a new inbound rule in the security group of the EC2 instance with the following details:
    • Protocol: TCP
    • Port Range: 20 - 21
    • Source: 175.45.116.100/32
A
  • Create a new inbound rule in the security group of the EC2 instance with the following details:
    • Protocol: UDP
    • Port Range: 20 - 21
    • Source: 175.45.116.100/32
  • Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details: (X)
    • Protocol: TCP
    • Port Range: 20 - 21
    • Source: 175.45.116.100/0
    • Allow/Deny: ALLOW
  • Create a new Network ACL inbound rule in the subnet of the EC2 instance with the following details:
    • Protocol: UDP
    • Port Range: 20 - 21
    • Source: 175.45.116.100/0
    • Allow/Deny: ALLOW
  • Create a new inbound rule in the security group of the EC2 instance with the following details:
    • Protocol: TCP
    • Port Range: 20 - 21
    • Source: 175.45.116.100/32
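Note: the security group rule with TCP 20-21 and a /32 source is the correct one. As a sketch, the equivalent CLI call (the group ID is a placeholder; security groups are stateful, so no outbound rule is needed for replies):

  aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 20-21 \
      --cidr 175.45.116.100/32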
7
Q

A media company has two VPCs, VPC-1 and VPC-2, with a peering connection between them. VPC-1 contains only private subnets, while VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and a virtual interface to connect their on-premises network with VPC-1.

Which of the following options increase the fault tolerance of the connection to VPC-1? (Select TWO.)

  • Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
  • Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1.
  • Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
  • Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.
  • Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.
A
  • Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
  • Establish another AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1.
  • Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
  • Establish a hardware VPN over the Internet between VPC-2 and the on-premises network. (X)
  • Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.
8
Q

You work for a leading university as an AWS Infrastructure Engineer and also as a professor to aspiring AWS architects. As a way to familiarize your students with AWS, you gave them a project to host their applications on an EC2 instance. One of your students created an instance to host their online enrollment system project but is having a hard time connecting to the newly created EC2 instance. Your students have explored all of the AWS troubleshooting guides and narrowed the problem down to login issues.

Which of the following can you use to log into an EC2 instance?

  • Custom EC2 password
  • Access Keys
  • EC2 Connection Strings
  • Key Pairs
A
  • Custom EC2 password
  • Access Keys (X)
  • EC2 Connection Strings
  • Key Pairs
9
Q

An online trading platform with thousands of clients across the globe is hosted in AWS. To reduce latency, you have to direct user traffic to the nearest application endpoint to the client. The traffic should be routed to the closest edge location via an Anycast static IP address. AWS Shield should also be integrated into the solution for DDoS protection.

Which of the following is the MOST suitable service that the Solutions Architect should use to satisfy the above requirements?

  • AWS PrivateLink
  • AWS WAF
  • Amazon CloudFront
  • AWS Global Accelerator
A
  • AWS PrivateLink
  • AWS WAF
  • Amazon CloudFront (X)
  • AWS Global Accelerator
10
Q

You are working as an IT Consultant for a large investment bank that generates large financial datasets with millions of rows. The data must be stored in a columnar fashion to reduce the number of disk I/O requests and reduce the amount of data needed to load from the disk. The bank has an existing third-party business intelligence application which will connect to the storage service and then generate daily and monthly financial reports for its clients around the globe.

In this scenario, which is the best storage service to use to meet the requirement?

  • Amazon RDS
  • DynamoDB
  • Amazon Aurora
  • Amazon Redshift
A
  • Amazon RDS
  • DynamoDB (X)
  • Amazon Aurora
  • Amazon Redshift
11
Q

A data analytics company is setting up an innovative checkout-free grocery store. Their Solutions Architect developed a real-time monitoring application that uses smart sensors to detect the items that customers take from the grocery’s refrigerators and shelves, and then automatically deducts them from the customers’ accounts. The company wants to analyze the items that are frequently being bought and store the results in S3 for durable storage to determine the purchase behavior of its customers.

What service must be used to easily capture, transform, and load streaming data into Amazon S3, Amazon Elasticsearch Service, and Splunk?

  • Amazon SQS
  • Amazon Kinesis
  • Amazon Kinesis Data Firehose
  • Amazon Redshift
A
  • Amazon SQS
  • Amazon Kinesis (X)
  • Amazon Kinesis Data Firehose
  • Amazon Redshift
12
Q

You are designing a banking portal which uses Amazon ElastiCache for Redis as its distributed session management component. Since the other Cloud Engineers in your department have access to your ElastiCache cluster, you have to secure the session data in the portal by requiring them to enter a password before they are granted permission to execute Redis commands.

As the Solutions Architect, which of the following should you do to meet the above requirement?

  • Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled
  • Set up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster
  • Enable the in-transit encryption for Redis replication groups
  • Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter
A
  • Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled
  • Set up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster (X)
  • Enable the in-transit encryption for Redis replication groups
  • Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter
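Note: Redis AUTH is what actually requires a password before commands run, and it must be paired with in-transit encryption when the group is created. A sketch with placeholder identifiers and token (tokens must be 16-128 printable characters):

  aws elasticache create-replication-group \
      --replication-group-id banking-sessions \
      --replication-group-description "Session store with Redis AUTH" \
      --engine redis \
      --cache-node-type cache.t3.medium \
      --num-cache-clusters 2 \
      --transit-encryption-enabled \
      --auth-token "use-a-long-random-token-here"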
13
Q

A Fortune 500 company with numerous offices and customers around the globe has hired you as their Principal Architect. Staff and customers regularly upload gigabytes to terabytes of data from regional data centers across continents to a centralized S3 bucket in the ap-southeast-2 (Sydney) region. At the end of the financial year, thousands of files are uploaded to the central S3 bucket, and many employees have started to complain about slow upload times. The CTO instructed you to resolve this issue as soon as possible to avoid any delays in processing the global end of financial year (EOFY) reports.

Which feature in Amazon S3 enables fast, easy, and secure transfer of your files over long distances between your client and your Amazon S3 bucket?

  • AWS Global Accelerator
  • Multipart Upload
  • Cross-Region Replication
  • Transfer Acceleration
A
  • AWS Global Accelerator (X)
  • Multipart Upload
  • Cross-Region Replication
  • Transfer Acceleration
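Note: Transfer Acceleration routes uploads through the nearest edge location onto the AWS backbone. A sketch against the bucket from the question (the file name is a placeholder):

  aws s3api put-bucket-accelerate-configuration \
      --bucket tutorialsdojo \
      --accelerate-configuration Status=Enabled

  # Upload through the accelerated endpoint:
  aws s3 cp eofy-report.zip s3://tutorialsdojo/ \
      --endpoint-url https://s3-accelerate.amazonaws.com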
14
Q

An organization needs to control the access for several S3 buckets. They plan to use a gateway endpoint to allow access to trusted buckets.

Which of the following could help you achieve this requirement?

  • Generate an endpoint policy for trusted VPCs.
  • Generate a bucket policy for trusted S3 buckets.
  • Generate an endpoint policy for trusted S3 buckets.
  • Generate a bucket policy for trusted VPCs.
A
  • Generate an endpoint policy for trusted VPCs.
  • Generate a bucket policy for trusted S3 buckets. (X)
  • Generate an endpoint policy for trusted S3 buckets.
  • Generate a bucket policy for trusted VPCs.
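Note: an endpoint policy is attached to the gateway endpoint itself and limits which buckets are reachable through it. A sketch with a placeholder endpoint ID and bucket name:

  aws ec2 modify-vpc-endpoint \
      --vpc-endpoint-id vpce-0123456789abcdef0 \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::trusted-bucket",
            "arn:aws:s3:::trusted-bucket/*"
          ]
        }]
      }'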
15
Q

A web application uses CloudFront to distribute the images, videos, and other static content stored in its S3 bucket to users around the world. The company has recently introduced member-only access to some of its high-quality media files. There is a requirement to provide access to multiple private media files only to their paying subscribers, without having to change the current URLs.

Which of the following is the most suitable solution that you should implement to satisfy this requirement?

  • Create a Signed URL with a custom policy which only allows the members to see the private files
  • Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members
  • Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy which will automatically match the user request. This will allow access to the private content if the request is a paying member and deny it if it is not a member
  • Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them
A
  • Create a Signed URL with a custom policy which only allows the members to see the private files (X)
  • Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members
  • Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy which will automatically match the user request. This will allow access to the private content if the request is a paying member and deny it if it is not a member
  • Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them
16
Q

You are working for a large telecommunications company where you need to run analytics against all combined log files from your Application Load Balancer as part of the regulatory requirements.

Which AWS services can be used together to collect logs and then easily perform log analysis?

  • Amazon EC2 with EBS volumes for storing and analyzing the log files.
  • Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a custom-built application.
  • Amazon DynamoDB for storing and EC2 for analyzing the logs.
  • Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.
A
  • Amazon EC2 with EBS volumes for storing and analyzing the log files.
  • Amazon S3 for storing the ELB log files and an EC2 instance for analyzing the log files using a custom-built application. (X)
  • Amazon DynamoDB for storing and EC2 for analyzing the logs.
  • Amazon S3 for storing ELB log files and Amazon EMR for analyzing the log files.
17
Q

An application hosted in EC2 consumes messages from an SQS queue and is integrated with SNS to send out an email to you once the process is complete. The Operations team received 5 orders but after a few hours, they saw 20 email notifications in their inbox.

Which of the following could be the possible culprit for this issue?

  • The web application does not have permission to consume messages in the SQS queue
  • The web application is set to short polling so some messages are not being picked up
  • The web application is set for long polling so the messages are being sent twice
  • The web application is not deleting the messages in the SQS queue after it has processed them
A
  • The web application does not have permission to consume messages in the SQS queue
  • The web application is set to short polling so some messages are not being picked up
  • The web application is set for long polling so the messages are being sent twice (X)
  • The web application is not deleting the messages in the SQS queue after it has processed them
18
Q

You currently have an Augmented Reality (AR) mobile game which has a serverless backend. It uses a DynamoDB table, launched using the AWS CLI, to store all the user data and information gathered from the players, and a Lambda function to pull the data from DynamoDB. The game is used by millions of users each day to read and store data.

How would you design the application to improve its overall performance and make it more scalable while keeping the costs low? (Select TWO.)

  • Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds.
  • Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single-sign on. Manually set the provisioned read and write capacity to a higher RCU and WCU.
  • Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on client device using ElastiCache.
  • Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity.
  • Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication.
A
  • Since Auto Scaling is enabled by default, the provisioned read and write capacity will adjust automatically. Also enable DynamoDB Accelerator (DAX) to improve the performance from milliseconds to microseconds.
  • Use AWS SSO and Cognito to authenticate users and have them directly access DynamoDB using single-sign on. Manually set the provisioned read and write capacity to a higher RCU and WCU. (X)
  • Configure CloudFront with DynamoDB as the origin; cache frequently accessed data on client device using ElastiCache.
  • Enable DynamoDB Accelerator (DAX) and ensure that the Auto Scaling is enabled and increase the maximum provisioned read and write capacity. (-)
  • Use API Gateway in conjunction with Lambda and turn on the caching on frequently accessed data and enable DynamoDB global replication. (+)
19
Q

A financial application is composed of an Auto Scaling group of EC2 instances, an Application Load Balancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. To protect the confidential data of your customers, you have to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token.

As the Solutions Architect of the company, which of the following should you do to meet the above requirement?

  • Use a combination of IAM and STS to restrict access to your RDS instance via a temporary token
  • Configure SSL in your application to encrypt the database connection to RDS
  • Create an IAM Role and assign it to your EC2 instances, which will grant exclusive access to your RDS instance
  • Enable IAM DB Authentication
A
  • Use a combination of IAM and STS to restrict access to your RDS instance via a temporary token (X)
  • Configure SSL in your application to encrypt the database connection to RDS
  • Create an IAM Role and assign it to your EC2 instances, which will grant exclusive access to your RDS instance
  • Enable IAM DB Authentication
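Note: IAM DB Authentication replaces the database password with a short-lived token generated from the instance profile credentials. A sketch with placeholder identifiers (tokens are valid for 15 minutes):

  aws rds modify-db-instance \
      --db-instance-identifier financial-db \
      --enable-iam-database-authentication \
      --apply-immediately

  # On the EC2 instance, generate the authentication token:
  aws rds generate-db-auth-token \
      --hostname financial-db.abcdefg12345.us-east-1.rds.amazonaws.com \
      --port 3306 \
      --username app_user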
20
Q

A newly hired Solutions Architect is assigned to manage a set of CloudFormation templates that are used in the company’s cloud architecture in AWS. The Architect accessed the templates and analyzed the IAM policy configured for an S3 bucket. Which of the following statements are true, based on this IAM policy? (Select THREE.)

  • An IAM user with this IAM policy is allowed to read objects in the ‘tutorialsdojo’ S3 bucket but not allowed to list the objects in the bucket
  • An IAM user with this IAM policy is allowed to write objects into the ‘tutorialsdojo’ S3 bucket
  • An IAM user with this IAM policy is allowed to read objects from the ‘tutorialsdojo’ S3 bucket
  • An IAM user with this IAM policy is allowed to change access rights for the ‘tutorialsdojo’ S3 bucket
  • An IAM user with this IAM policy is allowed to read and delete objects from the ‘tutorialsdojo’ S3 bucket
  • An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account
A
  • An IAM user with this IAM policy is allowed to read objects in the ‘tutorialsdojo’ S3 bucket but not allowed to list the objects in the bucket
  • An IAM user with this IAM policy is allowed to write objects into the ‘tutorialsdojo’ S3 bucket (+)
  • An IAM user with this IAM policy is allowed to read objects from the ‘tutorialsdojo’ S3 bucket (+)
  • An IAM user with this IAM policy is allowed to change access rights for the ‘tutorialsdojo’ S3 bucket
  • An IAM user with this IAM policy is allowed to read and delete objects from the ‘tutorialsdojo’ S3 bucket (X)
  • An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account (-)
21
Q

A data analytics company, which uses machine learning to collect and analyze consumer data, is using a Redshift cluster as their data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage.

Which of the following is the best approach to meet this requirement?

  • Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
  • Use Automated snapshots of your Redshift Cluster.
  • Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region.
  • Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
A
  • Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
  • Use Automated snapshots of your Redshift Cluster.
  • Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region. (X)
  • Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
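Note: Cross-Region Snapshot Copy is the only option that keeps a copy of the cluster’s backups outside the affected region. A sketch with placeholder identifiers:

  aws redshift enable-snapshot-copy \
      --cluster-identifier analytics-cluster \
      --destination-region us-west-2 \
      --retention-period 7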
22
Q

Both historical records and frequently accessed data are stored on an on-premises storage system. The amount of current data is growing at an exponential rate. As the storage’s capacity is nearing its limit, the company’s Solutions Architect has decided to move the historical records to AWS to free up space for the active data.

Which of the following architectures deliver the best solution in terms of cost and operational management?

  • Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
  • Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
  • Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
  • Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
A
  • Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
  • Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier Deep Archive after 30 days.
  • Use AWS Storage Gateway to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data. (X)
  • Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
23
Q

The company that you are working for has a highly available architecture consisting of an elastic load balancer and several EC2 instances configured with Auto Scaling across three Availability Zones. You want to monitor your EC2 instances based on a particular metric that is not readily available in CloudWatch.

Which of the following is a custom metric in CloudWatch which you have to manually set up?

  • Network packets out of an EC2 instance
  • Disk Reads Activity of an EC2 instance
  • Memory Utilization of an EC2 instance
  • CPU Utilization of an EC2 instance
A
  • Network packets out of an EC2 instance (X)
  • Disk Reads Activity of an EC2 instance
  • Memory Utilization of an EC2 instance
  • CPU Utilization of an EC2 instance
24
Q

Your web application relies entirely on slower disk-based databases, causing it to perform poorly. To improve its performance, you integrated an in-memory data store into your web application using ElastiCache. How does Amazon ElastiCache improve database performance?

  • It securely delivers data to customers globally with low latency and high transfer speeds.
  • By caching database query results.
  • It reduces the load on your database by routing read queries from your applications to the Read Replica.
  • It provides an in-memory cache that delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second.
A
  • It securely delivers data to customers globally with low latency and high transfer speeds.
  • By caching database query results.
  • It reduces the load on your database by routing read queries from your applications to the Read Replica.
  • It provides an in-memory cache that delivers up to 10x performance improvement from milliseconds to microseconds or even at millions of requests per second. (X)
25
Q

A Solutions Architect is hosting a website in an Amazon S3 bucket named tutorialsdojo. The users load the website using the following URL: http://tutorialsdojo.s3-website-us-east-1.amazonaws.com. There is a new requirement to add JavaScript to the webpages in order to make authenticated HTTP GET requests against the same bucket by using the Amazon S3 API endpoint (tutorialsdojo.s3.amazonaws.com). Upon testing, you noticed that the web browser blocks JavaScript from making those requests.

Which of the following options is the MOST suitable solution that you should implement for this scenario?

  • Enable cross-account access
  • Enable Cross-Region Replication (CRR)
  • Enable Cross-Zone Load Balancing
  • Enable Cross-origin resource sharing (CORS) configuration in the bucket
A
  • Enable cross-account access
  • Enable Cross-Region Replication (CRR) (X)
  • Enable Cross-Zone Load Balancing
  • Enable Cross-origin resource sharing (CORS) configuration in the bucket
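Note: the website origin (tutorialsdojo.s3-website-us-east-1.amazonaws.com) differs from the REST endpoint (tutorialsdojo.s3.amazonaws.com), so the browser blocks the cross-origin requests until the bucket allows them. A minimal CORS sketch:

  aws s3api put-bucket-cors \
      --bucket tutorialsdojo \
      --cors-configuration '{
        "CORSRules": [{
          "AllowedOrigins": ["http://tutorialsdojo.s3-website-us-east-1.amazonaws.com"],
          "AllowedMethods": ["GET"],
          "AllowedHeaders": ["*"],
          "MaxAgeSeconds": 3000
        }]
      }'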
26
Q

You are a Big Data Engineer who is assigned to handle the online enrollment system database of a prestigious university, which is hosted in RDS. You are required to monitor the database metrics in Amazon CloudWatch to ensure the availability of the enrollment system.

What are the enhanced monitoring metrics that Amazon CloudWatch gathers from Amazon RDS DB instances which provide more accurate information? (Select TWO.)

  • Freeable Memory
  • RDS child processes
  • CPU Utilization
  • Database Connections
  • OS processes
A
  • Freeable Memory
  • RDS child processes (-)
  • CPU Utilization
  • Database Connections (X)
  • OS processes (+)
27
Q

You are working as an IT Consultant for a large investment bank that generates large financial datasets with millions of rows. The data must be stored in a columnar fashion to reduce the number of disk I/O requests and reduce the amount of data needed to load from the disk. The bank has an existing third-party business intelligence application which will connect to the storage service and then generate daily and monthly financial reports for its clients around the globe.

In this scenario, which is the best storage service to use to meet the requirement?

  • Amazon RDS
  • DynamoDB
  • Amazon Aurora
  • Amazon Redshift
A
  • Amazon RDS
  • DynamoDB (X)
  • Amazon Aurora
  • Amazon Redshift
28
Q

You are a Solutions Architect working for an aerospace engineering company which recently adopted a hybrid cloud infrastructure with AWS. One of your tasks is to launch a VPC with public subnets for their EC2 instances and private subnets for their database instances.

Which of the following statements are true regarding Amazon VPC subnets? (Select TWO.)

  • Each subnet maps to a single Availability Zone.
  • Each subnet spans 2 Availability Zones.
  • The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27 netmask (16 IP addresses).
  • Every subnet that you create is automatically associated with the main route table for the VPC.
  • EC2 instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
A
  • Each subnet maps to a single Availability Zone. (+)
  • Each subnet spans 2 Availability Zones.
  • The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /27 netmask (16 IP addresses). (X)
  • Every subnet that you create is automatically associated with the main route table for the VPC. (-)
  • EC2 instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
29
Q

You are using a combination of API Gateway and Lambda for the web services of your online web portal that is being accessed by hundreds of thousands of clients each day. Your company will be announcing a new revolutionary product and it is expected that your web portal will receive a massive number of visitors all around the globe. How can you protect your backend systems and applications from traffic spikes?

  • Manually upgrading the EC2 instances being used by API Gateway
  • API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything
  • Deploying Multi-AZ in API Gateway with Read Replica
  • Use throttling limits in API Gateway
A
  • Manually upgrading the EC2 instances being used by API Gateway
  • API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything (X)
  • Deploying Multi-AZ in API Gateway with Read Replica
  • Use throttling limits in API Gateway
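Note: throttling limits are set per stage (or per method), and excess requests receive HTTP 429. A sketch, assuming the patch-path convention of apigateway update-stage and placeholder IDs:

  aws apigateway update-stage \
      --rest-api-id a1b2c3d4e5 \
      --stage-name prod \
      --patch-operations \
        'op=replace,path=/*/*/throttling/rateLimit,value=1000' \
        'op=replace,path=/*/*/throttling/burstLimit,value=2000'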
30
Q

A cryptocurrency trading platform is using an API built in AWS Lambda and API Gateway. Due to the recent news and rumors about the upcoming price surge of Bitcoin, Ethereum and other cryptocurrencies, it is expected that the trading platform would have a significant increase in site visitors and new users in the coming days ahead. In this scenario, how can you protect the backend systems of the platform from traffic spikes?

  • Enable throttling limits and result caching in API Gateway
  • Move the Lambda function to a VPC
  • Use CloudFront in front of the API Gateway to act as a cache
  • Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling
A
  • Enable throttling limits and result caching in API Gateway
  • Move the Lambda function to a VPC
  • Use CloudFront in front of the API Gateway to act as a cache (X)
  • Switch from using AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling
31
Q

You are working as a Cloud Engineer for a top aerospace engineering firm. One of your tasks is to set up a document storage system using S3 for all of the engineering files. In Amazon S3, which of the following statements are true? (Select TWO.)

  • You can only store ZIP or TAR files in S3.
  • S3 is an object storage service that provides file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage.
  • The largest object that can be uploaded in a single PUT is 5 GB.
  • The largest object that can be uploaded in a single PUT is 5 TB.
  • The total volume of data and number of objects you can store are unlimited.
A
  • You can only store ZIP or TAR files in S3.
  • S3 is an object storage service that provides file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage.
  • The largest object that can be uploaded in a single PUT is 5 GB. (-)
  • The largest object that can be uploaded in a single PUT is 5 TB. (X)
  • The total volume of data and number of objects you can store are unlimited. (+)
32
Q

A company is deploying a Microsoft SharePoint Server environment on AWS using CloudFormation. The Solutions Architect needs to install and configure the architecture that is composed of Microsoft Active Directory (AD) domain controllers, Microsoft SQL Server 2012, multiple Amazon EC2 instances to host the Microsoft SharePoint Server and many other dependencies. The Architect needs to ensure that the required components are properly running before the stack creation proceeds.

Which of the following should the Architect do to meet this requirement?

  • Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script.
  • Configure the DependsOn attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-init helper script.
  • Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script.
  • Configure an UpdatePolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script.
A
  • Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script.
  • Configure the DependsOn attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-init helper script. (X)
  • Configure the UpdateReplacePolicy attribute in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script.
  • Configure an UpdatePolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script.
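Note: CreationPolicy pauses stack creation at the resource until cfn-signal reports success (or the timeout lapses). A minimal template fragment with placeholder names, written out from the shell:

  cat > sharepoint-instance.yaml <<'EOF'
  Resources:
    SharePointInstance:
      Type: AWS::EC2::Instance
      CreationPolicy:
        ResourceSignal:
          Count: 1
          Timeout: PT30M    # fail the stack if no success signal in 30 minutes
      Properties:
        ImageId: ami-0123456789abcdef0    # placeholder Windows AMI
        InstanceType: m5.large
        UserData:
          Fn::Base64: !Sub |
            <script>
            REM ...install and configure SharePoint dependencies here...
            cfn-signal.exe -e %ERRORLEVEL% --stack ${AWS::StackName} --resource SharePointInstance --region ${AWS::Region}
            </script>
  EOF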
33
Q

A government entity is conducting a population and housing census in the city. Household information uploaded to their online portal is stored in encrypted files in Amazon S3. The government assigned its Solutions Architect to set up compliance policies that verify sensitive data in a manner that meets their compliance standards. They should also be alerted if compromised files are detected that contain personally identifiable information (PII), protected health information (PHI), or intellectual property (IP).

Which of the following should the Architect implement to satisfy this requirement?

  • Set up and configure Amazon GuardDuty to monitor malicious activity on their Amazon S3 data
  • Set up and configure Amazon Inspector to send out alert notifications whenever a security violation is detected on their Amazon S3 data
  • Set up and configure Amazon Rekognition to monitor and recognize patterns on their Amazon S3 data
  • Set up and configure Amazon Macie to monitor and detect usage patterns on their Amazon S3 data
A
  • Set up and configure Amazon GuardDuty to monitor malicious activity on their Amazon S3 data (X)
  • Set up and configure Amazon Inspector to send out alert notifications whenever a security violation is detected on their Amazon S3 data
  • Set up and configure Amazon Rekognition to monitor and recognize patterns on their Amazon S3 data
  • Set up and configure Amazon Macie to monitor and detect usage patterns on their Amazon S3 data
34
Q

A messaging application in the ap-northeast-1 region uses an m4.2xlarge instance to accommodate the 75 percent of users located in Tokyo and Seoul. It uses a cheaper m4.large instance in ap-southeast-1 to accommodate the rest of the users in Manila and Singapore.

As a Solutions Architect, what routing policy should you use to route traffic to your instances based on the location of your users and instances?

  • Weighted Routing
  • Geoproximity Routing
  • Latency Routing
  • Geolocation Routing
A
  • Weighted Routing
  • Geoproximity Routing
  • Latency Routing
  • Geolocation Routing (X)
35
Q

You are working as a Solutions Architect for an investment bank and your Chief Technical Officer intends to migrate all of your applications to AWS. You are looking for block storage to store all of your data and have decided to go with EBS volumes. Your boss is worried that EBS volumes are not appropriate for your workloads due to compliance requirements, downtime scenarios, and IOPS performance.

Which of the following are valid points in proving that EBS is the best service to use for your migration? (Select TWO.)

  • An EBS volume is off-instance storage that can persist independently from the life of an instance.
  • Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon RDS, where it is stored redundantly in multiple Availability Zones.
  • EBS volumes can be attached to any EC2 Instance in any Availability Zone.
  • When you create an EBS volume in an Availability Zone, it is automatically replicated on a separate AWS region to prevent data loss due to a failure of any single hardware component.
  • EBS volumes support live configuration changes while in production which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.
A
  • An EBS volume is off-instance storage that can persist independently from the life of an instance.
  • Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon RDS, where it is stored redundantly in multiple Availability Zones. (X)
  • EBS volumes can be attached to any EC2 Instance in any Availability Zone.
  • When you create an EBS volume in an Availability Zone, it is automatically replicated on a separate AWS region to prevent data loss due to a failure of any single hardware component.
  • EBS volumes support live configuration changes while in production which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.
36
Q

A Solutions Architect is working for a company which has multiple VPCs in various AWS regions. The Architect is assigned to set up a logging system which will track all of the changes made to their AWS resources in all regions, including the configurations made in IAM, CloudFront, AWS WAF, and Route 53. In order to pass the compliance requirements, the solution must ensure the security, integrity, and durability of the log data. It should also provide an event history of all API calls made in AWS Management Console and AWS CLI.

Which of the following solutions is the best fit for this scenario?

  • Set up a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters, then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies
  • Set up a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also pass the --is-multi-region-trail parameter, then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies
  • Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --no-include-global-service-events parameters, then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies
  • Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters, then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies
A
  • Set up a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters, then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies
  • Set up a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also pass the --is-multi-region-trail parameter, then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies
  • Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --no-include-global-service-events parameters, then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies (X)
  • Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters, then encrypt log files using KMS encryption. Apply Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies
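Note: the correct option maps directly onto CloudTrail CLI flags; MFA Delete and the bucket policy are applied separately on the S3 bucket. A sketch with placeholder names (the KMS key policy must allow CloudTrail to use the key):

  aws cloudtrail create-trail \
      --name org-audit-trail \
      --s3-bucket-name org-audit-logs \
      --is-multi-region-trail \
      --include-global-service-events \
      --kms-key-id alias/cloudtrail-logs

  aws cloudtrail start-logging --name org-audit-trail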
37
Q

A suite of web applications is hosted in an Auto Scaling group of EC2 instances across three Availability Zones and is configured with default settings. An Application Load Balancer forwards each request to the respective target group based on the URL path. The scale-in policy has been triggered due to the low volume of incoming traffic to the application.

Which EC2 instance will be the first one to be terminated by your Auto Scaling group?

  • The EC2 instance launched from the oldest launch configuration
  • The instance will be randomly selected by the Auto Scaling group
  • The EC2 instance which has been running for the longest time
  • The EC2 instance which has the least number of user sessions
A
  • The EC2 instance launched from the oldest launch configuration
  • The instance will be randomly selected by the Auto Scaling group (X)
  • The EC2 instance which has been running for the longest time
  • The EC2 instance which has the least number of user sessions
38
Q

An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement whereby a surprise audit can happen at any time, and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and that it can handle up to 150 MB/s of retrieval throughput.

Which of the following should you do to meet the above requirement? (Select TWO.)

  • Use Bulk Retrieval to access the financial data.
  • Retrieve the data using Amazon Glacier Select.
  • Purchase provisioned retrieval capacity.
  • Use Expedited Retrieval to access the financial data.
  • Specify a range, or portion, of the financial data archive to retrieve.
A
  • Use Bulk Retrieval to access the financial data.
  • Retrieve the data using Amazon Glacier Select.
  • Purchase provisioned retrieval capacity. (-)
  • Use Expedited Retrieval to access the financial data. (+)
  • Specify a range, or portion, of the financial data archive to retrieve.
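Note: Expedited retrievals typically complete in 1-5 minutes, and each unit of provisioned capacity guarantees they are accepted and provides up to 150 MB/s of retrieval throughput. A sketch with a placeholder vault name and archive ID:

  aws glacier purchase-provisioned-capacity --account-id -

  aws glacier initiate-job \
      --account-id - \
      --vault-name financial-archives \
      --job-parameters '{"Type": "archive-retrieval", "ArchiveId": "EXAMPLE-ARCHIVE-ID", "Tier": "Expedited"}'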
39
Q

Your manager has asked you to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking.

Which of the following combination of services should you use to meet this requirement?

  • Amazon Relational Database Service (RDS) and Amazon MQ
  • Amazon Aurora and Amazon Cognito
  • Amazon DynamoDB and AWS AppSync
  • Amazon Redshift and AWS Mobile Hub
A
  • Amazon Relational Database Service (RDS) and Amazon MQ
  • Amazon Aurora and Amazon Cognito (X)
  • Amazon DynamoDB and AWS AppSync
  • Amazon Redshift and AWS Mobile Hub
40
Q

A global IT company with offices around the world has multiple AWS accounts. To improve efficiency and drive costs down, the Chief Information Officer (CIO) wants to set up a solution that centrally manages their AWS resources. This will allow them to procure AWS resources centrally and share resources such as AWS Transit Gateways, AWS License Manager configurations, or Amazon Route 53 Resolver rules across their various accounts.

As the Solutions Architect, which combination of options should you implement in this scenario? (Select TWO.)

  • Use AWS Control Tower to easily and securely share your resources with your AWS accounts
  • Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts
  • Use the AWS Identity and Access Management service to set up cross-account access that will easily and securely share your resources with your AWS accounts
  • Consolidate all of the company’s accounts using AWS ParallelCluster
  • Consolidate all of the company’s accounts using AWS Organizations
A
  • Use AWS Control Tower to easily and securely share your resources with your AWS accounts
  • Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts (-)
  • Use the AWS Identity and Access Management service to set up cross-account access that will easily and securely share your resources with your AWS accounts (X)
  • Consolidate all of the company’s accounts using AWS ParallelCluster
  • Consolidate all of the company’s accounts using AWS Organizations (+)
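Note: the two correct options work together: AWS Organizations consolidates the accounts, and AWS RAM then shares resources with the whole organization. A rough sketch with placeholder ARNs (sharing with an organization requires enabling RAM sharing within Organizations first):

  aws organizations create-organization --feature-set ALL

  aws ram create-resource-share \
      --name shared-network \
      --resource-arns arn:aws:ec2:us-east-1:123456789012:transit-gateway/tgw-0123456789abcdef0 \
      --principals arn:aws:organizations::123456789012:organization/o-exampleorgid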
41
Q

A tech company that you are working for has undertaken a Total Cost Of Ownership (TCO) analysis evaluating the use of Amazon S3 versus acquiring more storage hardware. The result was that all 1200 employees would be granted access to use Amazon S3 for storage of their personal documents.

Which of the following will you need to consider so you can set up a solution that incorporates a single sign-on feature from your corporate AD or LDAP directory and also restricts access for each individual user to a designated user folder in an S3 bucket? (Select TWO.)

  • Setting up a matching IAM user for each of the 1200 users in your corporate directory that needs access to a folder in the S3 bucket
  • Setting up a Federation proxy or an Identity provider and using AWS Security Token Service to generate temporary tokens
  • Mapping each individual user to a designated user folder in S3 using Amazon WorkDocs to access their personal documents
  • Configuring an IAM role and an IAM Policy to access the bucket
  • Using third-party Single Sign-On solutions such as Atlassian Crowd, OKTA, OneLogin and many others
A
  • Setting up a matching IAM user for each of the 1200 users in your corporate directory that needs access to a folder in the S3 bucket
  • Setting up a Federation proxy or an Identity provider and using AWS Security Token Service to generate temporary tokens (+)
  • Mapping each individual user to a designated user folder in S3 using Amazon WorkDocs to access their personal documents
  • Configuring an IAM role and an IAM Policy to access the bucket (-)
  • Using third-party Single Sign-On solutions such as Atlassian Crowd, OKTA, OneLogin and many others (X)
42
Q

You have a web application hosted in an On-Demand EC2 instance in your VPC. You are creating a shell script that needs the instance’s public and private IP addresses.

What is the best way to get the instance’s associated IP addresses which your shell script can use?

  • By using a Curl or Get Command to get the latest user data information from http://169.254.169.254/latest/user-data/
  • By using a Curl or Get Command to get the latest metadata information from http://169.254.169.254/latest/meta-data/
  • By using IAM.
  • By using a CloudWatch metric.
A
  • By using a Curl or Get Command to get the latest user data information from http://169.254.169.254/latest/user-data/ (X)
  • By using a Curl or Get Command to get the latest metadata information from http://169.254.169.254/latest/meta-data/
  • By using IAM.
  • By using a CloudWatch metric.
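Note: the instance metadata service at 169.254.169.254 is the right source for both addresses. A sketch using IMDSv2 (token-based) requests:

  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/local-ipv4
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/public-ipv4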
43
Q

A data analytics company, which uses machine learning to collect and analyze consumer data, is using a Redshift cluster as their data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage.

Which of the following is the best approach to meet this requirement?

  • Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
  • ​Use Automated snapshots of your Redshift Cluster.
  • ​Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region.
  • ​Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
A
  • Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
  • ​Use Automated snapshots of your Redshift Cluster.
  • ​Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region. (X)
  • ​Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
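
Note: cross-region snapshot copy can be switched on with a single API call; a minimal boto3 sketch, assuming hypothetical cluster and region names.

    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    # Copy the cluster's automated snapshots to a second region for DR.
    redshift.enable_snapshot_copy(
        ClusterIdentifier="analytics-cluster",
        DestinationRegion="us-west-2",
        RetentionPeriod=7,  # days to keep copied snapshots in the destination region
    )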
44
Q

You are a new Solutions Architect in your company. Upon checking the existing Inbound Rules of your Network ACL, you saw this configuration:

(The inbound rules table shown in the original question is not reproduced in this export.)

If a computer with an IP address of 110.238.109.37 sends a request to your VPC, what will happen?

  • It will be allowed.
  • ​Initially, it will be allowed and then after a while, the connection will be denied.
  • ​It will be denied.
  • ​Initially, it will be denied and then after a while, the connection will be allowed.
A
  • It will be allowed.
  • ​Initially, it will be allowed and then after a while, the connection will be denied.
  • ​It will be denied. (X)
  • ​Initially, it will be denied and then after a while, the connection will be allowed.
45
Q

A tech company is currently using Auto Scaling for their web application. A new AMI now needs to be used for launching a fleet of EC2 instances.

Which of the following changes needs to be done?

  • Create a new launch configuration.
  • ​Do nothing. You can start directly launching EC2 instances in the Auto Scaling group with the same launch configuration.
  • ​Create a new target group.
  • ​Create a new target group and launch configuration.
A
  • Create a new launch configuration.
  • ​Do nothing. You can start directly launching EC2 instances in the Auto Scaling group with the same launch configuration.
  • ​Create a new target group.
  • ​Create a new target group and launch configuration. (X)
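
Note: launch configurations are immutable, so adopting a new AMI means creating a new launch configuration and pointing the Auto Scaling group at it. A minimal boto3 sketch; all names and the AMI ID are hypothetical.

    import boto3

    asg = boto3.client("autoscaling")

    # Create a new launch configuration that references the new AMI.
    asg.create_launch_configuration(
        LaunchConfigurationName="web-app-lc-v2",
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.medium",
    )

    # Point the existing Auto Scaling group at it; instances launched
    # from now on use the new AMI.
    asg.update_auto_scaling_group(
        AutoScalingGroupName="web-app-asg",
        LaunchConfigurationName="web-app-lc-v2",
    )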
46
Q

You are working as a Solutions Architect for a startup, tasked to develop a custom messaging service that will also be used to train their AI for an automatic response feature which they plan to implement in the future. Based on their research and tests, the service can receive up to thousands of messages a day, and all of this data is to be sent to Amazon EMR for further processing. It is crucial that no messages are lost, no duplicates are produced, and the messages are processed in EMR in the same order as they arrive.

Which of the following options should you implement to meet the startup’s requirements?

  • Create a pipeline using AWS Data Pipeline to handle the messages.
  • ​Set up an Amazon SNS Topic to handle the messages.
  • ​Create an Amazon Kinesis Data Stream to collect the messages.
  • ​Set up a default Amazon SQS queue to handle the messages.
A
  • Create a pipeline using AWS Data Pipeline to handle the messages.
  • ​Set up an Amazon SNS Topic to handle the messages.
  • Create an Amazon Kinesis Data Stream to collect the messages.
  • ​Set up a default Amazon SQS queue to handle the messages. (X)
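
Note: Kinesis preserves ordering within a shard, and records that share a partition key always land on the same shard, which is what satisfies the ordering requirement. A minimal boto3 sketch; the stream name and key are hypothetical.

    import boto3

    kinesis = boto3.client("kinesis")

    # Records sharing a partition key go to the same shard, in order.
    resp = kinesis.put_record(
        StreamName="chat-messages",
        Data=b'{"user": "alice", "text": "hello"}',
        PartitionKey="conversation-42",
    )
    print(resp["ShardId"], resp["SequenceNumber"])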
47
Q

In Amazon EC2, you can manage your instances from the moment you launch them up to their termination. You can flexibly control your computing costs by changing the EC2 instance state. Which of the following statements are true regarding EC2 billing? (Select TWO.)

  • You will be billed when your On-Demand instance is in pending state.
  • ​You will be billed when your Reserved instance is in terminated state.
  • ​You will be billed when your On-Demand instance is preparing to hibernate with a stopping state.
  • ​You will not be billed for any instance usage while an instance is not in the running state.
  • ​You will be billed when your Spot instance is preparing to stop with a stopping state.
A
  • You will be billed when your On-Demand instance is in pending state.
  • You will be billed when your Reserved instance is in terminated state. (+)
  • ​You will be billed when your On-Demand instance is preparing to hibernate with a stopping state. (-)
  • ​You will not be billed for any instance usage while an instance is not in the running state. (X)
  • ​You will be billed when your Spot instance is preparing to stop with a stopping state.
48
Q

Your fellow AWS Engineer has created a new Standard-class S3 bucket to store financial reports that are not frequently accessed but should be immediately available when an auditor requests them. To save costs, you changed the storage class of the S3 bucket from Standard to the Infrequent Access storage class.

In Amazon S3 Standard - Infrequent Access storage class, which of the following statements are true? (Select TWO.)

  • Ideal to use for data archiving.
  • ​It provides high latency and low throughput performance
  • ​It is the best storage option to store noncritical and reproducible data
  • ​It is designed for data that requires rapid access when needed.
  • ​It is designed for data that is accessed less frequently
A
  • Ideal to use for data archiving.
  • ​It provides high latency and low throughput performance
  • ​It is the best storage option to store noncritical and reproducible data (X)
  • ​It is designed for data that requires rapid access when needed.
  • ​It is designed for data that is accessed less frequently
49
Q

An insurance company plans to implement a message filtering feature in their web application. To implement this solution, they need to create separate Amazon SQS queues for each type of quote request. The entire message processing should not exceed 24 hours.

As the Solutions Architect of the company, which of the following should you do to meet the above requirement?

  • Create a data stream in Amazon Kinesis Data Streams. Use the Amazon Kinesis Client Library to deliver all the records to the designated SQS queues based on the quote request type.
  • ​Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the SNS topic. Set the filter policies in the SNS subscriptions to publish the message to the designated SQS queue based on its quote request type.
  • ​Create multiple Amazon SNS topics and configure the Amazon SQS queues to subscribe to the SNS topics. Publish the message to the designated SQS queue based on the quote request type.
  • ​Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the SNS topic. Publish the same messages to all SQS queues. Filter the messages in each queue based on the quote request type.
A
  • Create a data stream in Amazon Kinesis Data Streams. Use the Amazon Kinesis Client Library to deliver all the records to the designated SQS queues based on the quote request type.
  • ​Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the SNS topic. Set the filter policies in the SNS subscriptions to publish the message to the designated SQS queue based on its quote request type.
  • ​Create multiple Amazon SNS topics and configure the Amazon SQS queues to subscribe to the SNS topics. Publish the message to the designated SQS queue based on the quote request type. (X)
  • ​Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the SNS topic. Publish the same messages to all SQS queues. Filter the messages in each queue based on the quote request type.
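
Note: a minimal boto3 sketch of the single-topic, filter-policy fan-out described by the correct option; all ARNs and attribute names are hypothetical, and the SQS queue's policy must allow the topic to send to it.

    import boto3, json

    sns = boto3.client("sns")
    TOPIC = "arn:aws:sns:us-east-1:111122223333:quote-requests"

    # Subscribe one queue per quote type; the filter policy only delivers
    # messages whose attributes match.
    sns.subscribe(
        TopicArn=TOPIC,
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-east-1:111122223333:auto-quotes",
        Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
        ReturnSubscriptionArn=True,
    )

    # Publishers tag each message with the attribute the policy filters on.
    sns.publish(
        TopicArn=TOPIC,
        Message=json.dumps({"customer": "c-1001"}),
        MessageAttributes={
            "quote_type": {"DataType": "String", "StringValue": "auto"}
        },
    )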
50
Q

You are working as a Solutions Architect for a leading data analytics company, tasked to process real-time streaming data of your users across the globe. This will enable you to track and analyze globally distributed user activity on your website and mobile applications, including clickstream analysis. Your cloud architecture should process the data in close geographical proximity to your users and respond to user requests at low latency.

Which of the following options is the most ideal solution that you should implement?

  • Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
  • ​Use a CloudFront web distribution and Route 53 with a latency-based routing policy, in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
  • ​Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Amazon Athena and durably store the results to an Amazon S3 bucket.
  • ​Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
A
  • Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket. (X)
  • ​Use a CloudFront web distribution and Route 53 with a latency-based routing policy, in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
  • ​Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Amazon Athena and durably store the results to an Amazon S3 bucket.
  • Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
51
Q

An application is using a RESTful API hosted in AWS which uses Amazon API Gateway and AWS Lambda. There is a requirement to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services.

Which of the following is the most suitable service to use to meet this requirement?

  • VPC Flow Logs
  • ​CloudTrail
  • ​AWS X-Ray
  • CloudWatch
A
  • VPC Flow Logs
  • ​CloudTrail (X)
  • ​AWS X-Ray
  • CloudWatch
52
Q

Your customer is building an internal application that serves as a repository for images uploaded by a couple of users. Whenever a user uploads an image, it is sent to Kinesis Data Streams for processing before it is stored in an S3 bucket. If the upload is successful, the application returns a prompt informing the user that the operation succeeded. The entire processing typically takes about 5 minutes to finish.

Which of the following options will allow you to asynchronously process the request to the application from upload request to Kinesis, S3, and return reply in the most cost-effective manner?

  • Use a combination of SNS to buffer the requests and then asynchronously process them using On-Demand EC2 Instances.
  • ​Use a combination of Lambda and Step Functions to orchestrate service components and asynchronously process the requests.
  • ​Replace the Kinesis Data Streams with an Amazon SQS queue. Create a Lambda function that will asynchronously process the requests.
  • ​Use a combination of SQS to queue the requests and then asynchronously process them using On-Demand EC2 Instances.
A
  • Use a combination of SNS to buffer the requests and then asynchronously process them using On-Demand EC2 Instances.
  • ​Use a combination of Lambda and Step Functions to orchestrate service components and asynchronously process the requests.
  • Replace the Kinesis Data Streams with an Amazon SQS queue. Create a Lambda function that will asynchronously process the requests.
  • ​Use a combination of SQS to queue the requests and then asynchronously process them using On-Demand EC2 Instances. (X)
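
Note: with SQS configured as the Lambda event source, each invocation receives a batch of messages. A minimal handler sketch; the processing function is a hypothetical placeholder.

    import json

    def process_image(payload):
        # hypothetical placeholder for the ~5-minute processing pipeline
        pass

    def handler(event, context):
        # One entry per SQS message in the batch.
        for record in event["Records"]:
            payload = json.loads(record["body"])
            process_image(payload)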
53
Q

A financial company instructed you to automate recurring tasks in your department such as patch management, infrastructure selection, and data synchronization to improve their current processes. You need a service that can coordinate multiple AWS services into serverless workflows.

Which of the following is the most cost-effective service to use in this scenario?

  • AWS Lambda
  • ​SWF
  • ​AWS Batch
  • ​AWS Step Functions
A
  • AWS Lambda
  • ​SWF (X)
  • ​AWS Batch
  • AWS Step Functions
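
Note: a minimal boto3 sketch of a Step Functions state machine that chains two Lambda tasks into one workflow; the function ARNs, role ARN, and state names are hypothetical.

    import boto3, json

    sfn = boto3.client("stepfunctions")

    # A two-step Amazon States Language definition: patch, then sync.
    definition = {
        "StartAt": "PatchInstances",
        "States": {
            "PatchInstances": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:111122223333:function:patch",
                "Next": "SyncData",
            },
            "SyncData": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:111122223333:function:sync",
                "End": True,
            },
        },
    }

    sfn.create_state_machine(
        name="ops-automation",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::111122223333:role/sfn-exec",
    )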
54
Q

A company needs to assess and audit all the configurations in their AWS account. It must enforce strict compliance by tracking all configuration changes made to any of its Amazon S3 buckets. Publicly accessible S3 buckets should also be identified automatically to avoid data breaches.

Which of the following options will meet this requirement?

  • Use AWS IAM to generate a credential report.
  • ​Use AWS Trusted Advisor to analyze your AWS environment.
  • ​Use AWS Config to set up a rule in your AWS account.
  • ​Use AWS CloudTrail and review the event history of your AWS account.
A
  • Use AWS IAM to generate a credential report.
  • ​Use AWS Trusted Advisor to analyze your AWS environment.
  • ​Use AWS Config to set up a rule in your AWS account.
  • ​Use AWS CloudTrail and review the event history of your AWS account. (X)
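
Note: a minimal boto3 sketch attaching the AWS-managed Config rule that flags publicly readable S3 buckets; the rule name is a hypothetical choice.

    import boto3

    config = boto3.client("config")

    # Config evaluates every bucket against the managed rule and records
    # configuration changes for auditing.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "s3-no-public-read",
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
            },
        }
    )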
55
Q

A company has a hybrid cloud architecture that connects their on-premises data center and cloud infrastructure in AWS. They require a durable storage backup for their corporate documents stored on-premises and a local cache that provides low-latency access to recently accessed data to reduce data egress charges. The documents must be stored to and retrieved from AWS via the Server Message Block (SMB) protocol. The files must be retrievable within minutes for six months and then archived for another decade to meet data compliance requirements.

Which of the following is the best and most cost-effective approach to implement in this scenario?

  • Launch a new file gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the file gateway and set up a lifecycle policy to move the data into Glacier for data archival
  • Use AWS Snowmobile to migrate all of the files from the on-premises network. Upload the documents to an S3 bucket and set up a lifecycle policy to move the data into Glacier for archival
  • Launch a new tape gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the tape gateway and set up a lifecycle policy to move the data into Glacier for archival
  • Establish a Direct Connect connection to integrate your on-premises network to your VPC. Upload the documents on Amazon EBS Volumes and use a lifecycle policy to automatically move the EBS snapshots to an S3 bucket, and then later to Glacier for archival
A
  • Launch a new file gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the file gateway and set up a lifecycle policy to move the data into Glacier for data archival
  • Use AWS Snowmobile to migrate all of the files from the on-premises network. Upload the documents to an S3 bucket and set up a lifecycle policy to move the data into Glacier for archival
  • Launch a new tape gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the tape gateway and set up a lifecycle policy to move the data into Glacier for archival
  • Establish a Direct Connect connection to integrate your on-premises network to your VPC. Upload the documents on Amazon EBS Volumes and use a lifecycle policy to automatically move the EBS snapshots to an S3 bucket, and then later to Glacier for archival (X)
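
Note: with a file gateway, the documents land in an S3 bucket, where a lifecycle rule can transition them to Glacier after six months. A minimal boto3 sketch with a hypothetical bucket name.

    import boto3

    s3 = boto3.client("s3")

    # Keep objects in S3 Standard for six months, then archive to Glacier.
    s3.put_bucket_lifecycle_configuration(
        Bucket="corp-documents",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-6-months",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )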
56
Q

You are a Solutions Architect working with a company that uses Chef configuration management in its data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?

  • AWS OpsWorks
  • ​Amazon Simple Workflow Service
  • ​AWS Elastic Beanstalk
  • ​AWS CloudFormation
A
  • AWS OpsWorks
  • ​Amazon Simple Workflow Service
  • ​AWS Elastic Beanstalk (X)
  • ​AWS CloudFormation
57
Q

You are employed by a large electronics company that uses Amazon Simple Storage Service. For reporting purposes, they want to track and log every access request to their S3 buckets, including the requester, bucket name, request time, request action, referrer, turnaround time, and error code information. The solution should also provide more visibility into the object-level operations of the bucket.

Which is the best solution among the following options that can satisfy the requirement?

  • Enable server access logging for all required Amazon S3 buckets.
  • ​Enable AWS CloudTrail to audit all Amazon S3 bucket access.
  • ​Enable the Requester Pays option to track access via AWS Billing.
  • ​Enable Amazon S3 Event Notifications for PUT and POST.
A
  • Enable server access logging for all required Amazon S3 buckets.
  • ​Enable AWS CloudTrail to audit all Amazon S3 bucket access. (X)
  • ​Enable the Requester Pays option to track access via AWS Billing.
  • ​Enable Amazon S3 Event Notifications for PUT and POST.
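
Note: a minimal boto3 sketch enabling server access logging (the correct option); bucket names are hypothetical, and the target bucket must allow the S3 log delivery service to write to it.

    import boto3

    s3 = boto3.client("s3")

    # Request-level records (requester, bucket, time, action, referrer,
    # turnaround time, error code) are delivered to the target bucket.
    s3.put_bucket_logging(
        Bucket="financial-assets",
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "financial-assets-logs",
                "TargetPrefix": "access-logs/",
            }
        },
    )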
58
Q

A web application, which is used by your clients around the world, is hosted in an Auto Scaling group of EC2 instances behind a Classic Load Balancer. You need to secure your application by allowing multiple domains to serve SSL traffic over the same IP address.

Which of the following should you do to meet the above requirement?

  • Use Server Name Indication (SNI) on your Classic Load Balancer by adding multiple SSL certificates to allow multiple domains to serve SSL traffic.
  • ​Generate an SSL certificate with AWS Certificate Manager and create a CloudFront web distribution. Associate the certificate with your web distribution and enable the support for Server Name Indication (SNI)
  • ​Use an Elastic IP and upload multiple 3rd party certificates in your Classic Load Balancer using the AWS Certificate Manager.
  • ​It is not possible to allow multiple domains to serve SSL traffic over the same IP address in AWS
A
  • Use Server Name Indication (SNI) on your Classic Load Balancer by adding multiple SSL certificates to allow multiple domains to serve SSL traffic.
  • ​Generate an SSL certificate with AWS Certificate Manager and create a CloudFront web distribution. Associate the certificate with your web distribution and enable the support for Server Name Indication (SNI)
  • ​Use an Elastic IP and upload multiple 3rd party certificates in your Classic Load Balancer using the AWS Certificate Manager. (X)
  • ​It is not possible to allow multiple domains to serve SSL traffic over the same IP address in AWS
59
Q

A popular augmented reality (AR) mobile game heavily uses a RESTful API hosted in AWS. The API uses Amazon API Gateway and a DynamoDB table with preconfigured read and write capacity. Based on your system monitoring, the DynamoDB table begins to throttle requests during peak loads, which causes slow game performance.

Which of the following can you do to improve the performance of your app?

  • Use DynamoDB Auto Scaling
  • ​Add the DynamoDB table to an Auto Scaling Group.
  • ​Create an SQS queue in front of the DynamoDB table.
  • ​Integrate an Application Load Balancer with your DynamoDB table.
A
  • Use DynamoDB Auto Scaling
  • ​Add the DynamoDB table to an Auto Scaling Group.
  • ​Create an SQS queue in front of the DynamoDB table. (X)
  • ​Integrate an Application Load Balancer with your DynamoDB table.
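
Note: DynamoDB auto scaling is configured through Application Auto Scaling. A minimal boto3 sketch for the read dimension, with a hypothetical table name and capacity bounds; write capacity is configured the same way.

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/game-sessions",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=100,
        MaxCapacity=4000,
    )

    # Track 70% consumed-capacity utilization; DynamoDB scales RCUs to match.
    aas.put_scaling_policy(
        PolicyName="game-sessions-read-scaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/game-sessions",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )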
60
Q

You are building a cloud infrastructure where you have EC2 instances that require access to various AWS services such as S3 and Redshift. You will also need to provision access to system administrators so they can deploy and test their changes.

Which configuration should be used to ensure that access to your resources is secured and not compromised? (Select TWO.)

  • Store the AWS Access Keys in the EC2 instance.
  • ​Assign an IAM user for each Amazon EC2 Instance.
  • ​Enable Multi-Factor Authentication.
  • ​Store the AWS Access Keys in ACM.
  • ​Assign an IAM role to the Amazon EC2 instance.
A
  • Store the AWS Access Keys in the EC2 instance.
  • ​Assign an IAM user for each Amazon EC2 Instance.
  • ​Enable Multi-Factor Authentication.
  • ​Store the AWS Access Keys in ACM. (X)
  • ​Assign an IAM role to the Amazon EC2 instance.
61
Q

You are working as a Solutions Architect for a major telecommunications company where you are assigned to improve the security of your database tier by tightly managing the data flow of your Amazon Redshift cluster. One of the requirements is to use VPC flow logs to monitor all the COPY and UNLOAD traffic of your Redshift cluster that moves in and out of your VPC.

Which of the following is the most suitable solution to implement in this scenario?

  • Create a new flow log that tracks the traffic of your Amazon Redshift cluster
  • Enable Audit Logging in your Amazon Redshift cluster
  • Use the Amazon Redshift Spectrum feature
  • Enable Enhanced VPC routing on your Amazon Redshift cluster
A
  • Create a new flow log that tracks the traffic of your Amazon Redshift cluster
  • Enable Audit Logging in your Amazon Redshift cluster (X)
  • Use the Amazon Redshift Spectrum feature
  • Enable Enhanced VPC routing on your Amazon Redshift cluster
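
Note: enhanced VPC routing is a single cluster setting; once enabled, COPY and UNLOAD traffic flows through the VPC and becomes visible to VPC flow logs. A minimal boto3 sketch with a hypothetical cluster name.

    import boto3

    redshift = boto3.client("redshift")

    # Force COPY/UNLOAD traffic through the VPC instead of the internet path.
    redshift.modify_cluster(
        ClusterIdentifier="analytics-cluster",
        EnhancedVpcRouting=True,
    )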
62
Q

A popular social network is hosted in AWS and is using a DynamoDB table as its database. There is a requirement to implement a ‘follow’ feature where users can subscribe to certain updates made by a particular user and be notified via email. Which of the following is the most suitable solution that you should implement to meet the requirement?

  • Create a Lambda function that uses DynamoDB Streams Kinesis Adapter which will fetch data from the DynamoDB Streams endpoint. Set up an SNS Topic that will notify the subscribers via email when there is an update made by a particular user
  • Enable DynamoDB Stream and create an AWS Lambda trigger, as well as the IAM role which contains all of the permissions that the Lambda function will need at runtime. The data from the stream record will be processed by the Lambda function which will then publish a message to SNS Topic that will notify the subscribers via email.
  • Using the Kinesis Client Library (KCL), write an application that leverages on DynamoDB Streams Kinesis Adapter that will fetch data from the DynamoDB Streams endpoint. When there are updates made by a particular user, notify the subscribers via email using SNS
  • Set up a DAX cluster to access the source DynamoDB table. Create a new DynamoDB trigger and a Lambda function. For every update made in the user data, the trigger will send data to the Lambda function which will then notify the subscribers via email using SNS
A
  • Create a Lambda function that uses DynamoDB Streams Kinesis Adapter which will fetch data from the DynamoDB Streams endpoint. Set up an SNS Topic that will notify the subscribers via email when there is an update made by a particular user (X)
  • Enable DynamoDB Stream and create an AWS Lambda trigger, as well as the IAM role which contains all of the permissions that the Lambda function will need at runtime. The data from the stream record will be processed by the Lambda function which will then publish a message to SNS Topic that will notify the subscribers via email.
  • Using the Kinesis Client Library (KCL), write an application that leverages on DynamoDB Streams Kinesis Adapter that will fetch data from the DynamoDB Streams endpoint. When there are updates made by a particular user, notify the subscribers via email using SNS
  • Set up a DAX cluster to access the source DynamoDB table. Create a new DynamoDB trigger and a Lambda function. For every update made in the user data, the trigger will send data to the Lambda function which will then notify the subscribers via email using SNS
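
Note: a minimal sketch of the Lambda function behind the correct option, triggered by the table's DynamoDB stream; the topic ARN and the key attribute name are hypothetical.

    import boto3, json

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:user-updates"

    def handler(event, context):
        # Each invocation receives a batch of stream records.
        for record in event["Records"]:
            if record["eventName"] != "MODIFY":
                continue  # only react to updates made to existing items
            keys = record["dynamodb"]["Keys"]
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="User update",
                Message=json.dumps({"user_id": keys["user_id"]["S"]}),
            )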
63
Q

A company has a High Performance Computing (HPC) cluster that is composed of EC2 instances with Provisioned IOPS (io1) volumes to process transaction-intensive, low-latency workloads. The Solutions Architect must maintain high IOPS while keeping latency down by setting the optimal queue length for the volume. The size of each volume is 10 GiB.

Which of the following is the MOST suitable configuration that the Architect should set up?

  • Set the IOPS to 600 then maintain a high queue length.
  • ​Set the IOPS to 800 then maintain a low queue length.
  • ​Set the IOPS to 400 then maintain a low queue length.
  • ​Set the IOPS to 500 then maintain a low queue length.
A
  • Set the IOPS to 600 then maintain a high queue length.
  • ​Set the IOPS to 800 then maintain a low queue length. (X)
  • ​Set the IOPS to 400 then maintain a low queue length.
  • ​Set the IOPS to 500 then maintain a low queue length.

Maximum IOPS for a Provisioned IOPS SSD (io1) volume = volume size in GiB x 50. A 10 GiB volume can therefore be provisioned with at most 10 x 50 = 500 IOPS, which is why setting 500 IOPS and maintaining a low queue length is the optimal configuration.
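
Note: a minimal boto3 sketch provisioning a 10 GiB io1 volume at that 50:1 ceiling; the Availability Zone is a hypothetical placeholder.

    import boto3

    ec2 = boto3.client("ec2")

    # 10 GiB io1 volume at the 50:1 ratio: 10 x 50 = 500 IOPS.
    ec2.create_volume(
        VolumeType="io1",
        Size=10,          # GiB
        Iops=500,         # maximum allowed for this size at 50:1
        AvailabilityZone="us-east-1a",
    )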

64
Q

An online medical system hosted in AWS stores sensitive Personally Identifiable Information (PII) of the users in an Amazon S3 bucket. To comply with the company's strict compliance and regulatory requirements, neither the master keys nor the unencrypted data should ever be sent to AWS.

Which S3 encryption technique should the Architect use?

  • Using S3 server-side encryption with customer provided key
  • Use S3 client-side encryption with a KMS-managed customer master key
  • Use S3 client-side encryption with a client-side master key
  • Use S3 server-side encryption with a KMS managed key
A
  • Using S3 server-side encryption with customer provided key (X)
  • Use S3 client-side encryption with a KMS-managed customer master key
  • Use S3 client-side encryption with a client-side master key
  • Use S3 server-side encryption with a KMS managed key
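
Note: a bare sketch of client-side encryption with a client-side master key, using the cryptography package purely for illustration; bucket and file names are hypothetical, and a real system would typically use the AWS Encryption SDK instead. Only ciphertext ever reaches AWS.

    import boto3
    from cryptography.fernet import Fernet  # pip install cryptography

    s3 = boto3.client("s3")

    # The master key is generated and kept on-premises, never sent to AWS.
    master_key = Fernet.generate_key()
    cipher = Fernet(master_key)

    plaintext = open("patient-record.json", "rb").read()
    s3.put_object(
        Bucket="medical-records",
        Key="records/patient-001.json.enc",
        Body=cipher.encrypt(plaintext),  # encrypted locally before upload
    )

    # Retrieval reverses the process locally.
    obj = s3.get_object(Bucket="medical-records", Key="records/patient-001.json.enc")
    restored = cipher.decrypt(obj["Body"].read())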
65
Q

A digital media company shares static content to its premium users around the world and also to their partners who syndicate their media files. The company is looking for ways to reduce its server costs and securely deliver their data to their customers globally with low latency.

Which combination of services should be used to provide the MOST suitable and cost-effective architecture? (Select TWO.)

  • Amazon CloudFront
  • ​AWS Lambda
  • ​AWS Global Accelerator
  • ​Amazon S3
  • ​AWS Fargate
A
  • Amazon CloudFront
  • ​AWS Lambda
  • ​AWS Global Accelerator (X)
  • ​Amazon S3
  • ​AWS Fargate
66
Q

You are helping out a new DevOps Engineer to design her first architecture in AWS. She is planning to develop a highly available and fault-tolerant architecture which is composed of an Elastic Load Balancer and an Auto Scaling group of EC2 instances deployed across multiple Availability Zones. This will be used by an online accounting application which requires path-based routing, host-based routing, and bi-directional communication channels using WebSockets.

Which is the most suitable type of Elastic Load Balancer that you should recommend for her to use?

  • Either a Classic Load Balancer or a Network Load Balancer
  • ​Classic Load Balancer
  • ​Network Load Balancer
  • ​Application Load Balancer
A
  • Either a Classic Load Balancer or a Network Load Balancer (X)
  • ​Classic Load Balancer
  • ​Network Load Balancer
  • ​Application Load Balancer
67
Q

You are setting up the cloud architecture for an international money transfer service to be deployed in AWS, which will have thousands of users around the globe. The service should be available 24/7 to avoid any business disruption and should be resilient enough to handle the outage of an entire AWS region. To meet this requirement, you have deployed your AWS resources to multiple AWS Regions. You need to configure Route 53 so that all of your resources are available as much of the time as possible. When a resource becomes unavailable, Route 53 should detect that it is unhealthy and stop including it when responding to queries.

Which of the following is the most fault tolerant routing configuration that you should use in this scenario?

  • Configure an Active-Passive Failover with Multiple Primary and Secondary Resources.
  • ​Configure an Active-Active Failover with One Primary and One Secondary Resource. ​
  • Configure an Active-Active Failover with Weighted routing policy.
  • ​Configure an Active-Passive Failover with Weighted Records.
A
  • Configure an Active-Passive Failover with Multiple Primary and Secondary Resources. (X)
  • ​Configure an Active-Active Failover with One Primary and One Secondary Resource. ​
  • Configure an Active-Active Failover with Weighted routing policy.
  • ​Configure an Active-Passive Failover with Weighted Records.
68
Q

A popular social media website uses a CloudFront web distribution to serve static content to millions of users around the globe. The company has recently received a number of complaints that users take a long time to log in to the website. There are also occasions when users get HTTP 504 errors. You are instructed by your manager to significantly reduce the users' login time to further optimize the system.

Which of the following options should you use together to set up a cost-effective solution that can improve your application’s performance? (Select TWO.)

  • Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user
  • Use multiple, geographically dispersed VPCs in various AWS regions, then create a transit VPC to connect all of your resources. In order to handle the requests faster, set up Lambda functions in each region using the AWS Serverless Application Model (SAM) service
  • Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses
  • Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase the cache hit ratio of your CloudFront distribution
  • Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users
A
  • Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user (X)
  • Use multiple, geographically dispersed VPCs in various AWS regions, then create a transit VPC to connect all of your resources. In order to handle the requests faster, set up Lambda functions in each region using the AWS Serverless Application Model (SAM) service
  • Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses (-)
  • Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase the cache hit ratio of your CloudFront distribution
  • Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users (+)
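
Note: a minimal sketch of a Lambda@Edge viewer-request handler (Python runtime) that runs the login check at the edge location, close to the user; the header name and the check itself are hypothetical placeholders.

    def handler(event, context):
        request = event["Records"][0]["cf"]["request"]
        headers = request["headers"]

        if "x-session-token" not in headers:
            # Generate a response directly at the edge; the request never
            # travels to the origin.
            return {
                "status": "401",
                "statusDescription": "Unauthorized",
                "body": "Login required",
            }

        return request  # authenticated: forward the request to the origin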
69
Q

You have a web-based order processing system which is currently using a standard queue in Amazon SQS. The support team noticed that there are a lot of cases where an order was processed twice. This issue has caused a lot of trouble in your processing and made your customers very unhappy. Your IT Manager has asked you to ensure that this issue will not recur.

What can you do to prevent this from happening again in the future? (Select TWO.)

  • Alter the visibility timeout of SQS.
  • ​Replace Amazon SQS and instead, use Amazon Simple Workflow service
  • ​Use an Amazon SQS FIFO Queue instead
  • ​Change the message size in SQS.
  • ​Alter the retention period in Amazon SQS.
A
  • Alter the visibility timeout of SQS.
  • Replace Amazon SQS and instead, use Amazon Simple Workflow service. (-)
  • ​Use an Amazon SQS FIFO Queue instead. (+)
  • ​Change the message size in SQS.
  • ​Alter the retention period in Amazon SQS.
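
Note: a minimal boto3 sketch of the FIFO queue option; the queue and group names are hypothetical.

    import boto3

    sqs = boto3.client("sqs")

    # FIFO queues require the .fifo suffix; content-based deduplication
    # drops retransmitted duplicates within the 5-minute dedup window.
    queue = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
    )

    # Messages in the same group are processed exactly once, in order.
    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody='{"order_id": "o-123"}',
        MessageGroupId="orders",
    )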
70
Q

A company needs to deploy at least 2 EC2 instances to support the normal workloads of its application and automatically scale up to 6 EC2 instances to handle the peak load. The architecture must be highly available and fault-tolerant as it is processing mission-critical workloads.

As the Solutions Architect of the company, what should you do to meet the above requirement?

  • Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Use 2 Availability Zones and deploy 1 instance for each AZ.
  • Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 4. Deploy 2 instances in Availability Zone A and 2 instances in Availability Zone B.
  • Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Deploy 4 instances in Availability Zone A.
  • Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the maximum capacity to 6. Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B.
A
  • Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Use 2 Availability Zones and deploy 1 instance for each AZ. (X)
  • Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 4. Deploy 2 instances in Availability Zone A and 2 instances in Availability Zone B.
  • Create an Auto Scaling group of EC2 instances and set the minimum capacity to 2 and the maximum capacity to 6. Deploy 4 instances in Availability Zone A.
  • Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the maximum capacity to 6. Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B.
71
Q

There is a new compliance rule in your company that audits every Windows and Linux EC2 instance each month to check for any performance issues. There are more than a hundred EC2 instances running in production, and each must have a logging function that collects various system details regarding that instance. The SysOps team will periodically review these logs and analyze their contents using AWS analytics tools, and the results will need to be retained in an S3 bucket.

In this scenario, what is the most efficient way to collect and analyze logs from the instances with minimal effort?

  • Install the unified CloudWatch Logs agent in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
  • ​Install AWS SDK in each instance and create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Enable CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.
  • ​Install AWS Inspector Agent in each instance which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.
  • ​Install the AWS Systems Manager Agent (SSM Agent) in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
A
  • Install the unified CloudWatch Logs agent in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
  • ​Install AWS SDK in each instance and create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Enable CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.
  • ​Install AWS Inspector Agent in each instance which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.
  • ​Install the AWS Systems Manager Agent (SSM Agent) in each instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights. (X)
72
Q

A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of a database server failure in the future.

Which of the following is the most suitable solution to meet the requirement?

  • Create an Oracle database in RDS with Multi-AZ deployments
  • Launch an Oracle Real Application Clusters (RAC) in RDS
  • Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance
  • Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled
A
  • Create an Oracle database in RDS with Multi-AZ deployments
  • Launch an Oracle Real Application Clusters (RAC) in RDS
  • Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance (X)
  • Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled
73
Q

You are working for a litigation firm as the Data Engineer for their case history application. You need to keep track of all the cases your firm has handled. The static assets like .jpg, .png, and .pdf files are stored in S3 for cost efficiency and high durability. As these files are critical to your business, you want to keep track of what's happening in your S3 bucket. You found out that S3 can send an event notification whenever a delete or write operation happens within the S3 bucket.

What are the possible Event Notification destinations available for S3 buckets? (Select TWO.)

  • Kinesis
  • ​SES
  • ​Lambda function
  • ​SQS
  • ​SWF
A
  • Kinesis (X)
  • ​SES (X)
  • ​Lambda function
  • ​SQS
  • ​SWF
74
Q

An AI-powered Forex trading application consumes thousands of data sets to train its machine learning model. The application’s workload requires a high-performance, parallel hot storage to process the training datasets concurrently. It also needs cost-effective cold storage to archive those datasets that yield low profit.

Which of the following Amazon storage services should the developer use?

  • Using Amazon FSx For Lustre and Amazon EBS Provisioned IOPS SSD (io1) volumes for hot and cold storage respectively
  • Using Amazon FSx For Windows File Server and Amazon S3 for hot and cold storage respectively
  • Using Amazon Elastic File System and Amazon S3 for hot and cold storage respectively
  • Using Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively
A
  • Using Amazon FSx For Lustre and Amazon EBS Provisioned IOPS SSD (io1) volumes for hot and cold storage respectively (X)
  • Using Amazon FSx For Windows File Server and Amazon S3 for hot and cold storage respectively
  • Using Amazon Elastic File System and Amazon S3 for hot and cold storage respectively
  • Using Amazon FSx For Lustre and Amazon S3 for hot and cold storage respectively
75
Q

A startup is using Amazon RDS to store data from a web application. Most of the time, the application has low user activity but it receives bursts of traffic within seconds whenever there is a new product announcement. The Solutions Architect needs to create a solution that will allow users around the globe to access the data using an API.

What should the Solutions Architect do to meet the above requirement?

  • Create an API Gateway and use the Amazon ECS cluster with Service Auto Scaling to handle the bursts of traffic within seconds.
  • Create an API Gateway and use Amazon Elastic Beanstalk to handle the bursts of traffic within seconds.
  • Create an API Gateway and use AWS Lambda to handle the bursts of traffic within seconds.
  • Create an API Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffic within seconds.
A
  • Create an API Gateway and use the Amazon ECS cluster with Service Auto Scaling to handle the bursts of traffic within seconds. (X)
  • Create an API Gateway and use Amazon Elastic Beanstalk to handle the bursts of traffic within seconds.
  • Create an API Gateway and use AWS Lambda to handle the bursts of traffic within seconds.
  • Create an API Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffic within seconds.
76
Q

An application needs to retrieve a subset of data from a large CSV file stored in an Amazon S3 bucket by using simple SQL expressions. The queries are made within Amazon S3 and must only return the needed data.

Which of the following actions should be taken?

  • Perform an S3 Select operation based on the bucket’s name and object tags.
  • ​Perform an S3 Select operation based on the bucket’s name and object’s metadata.
  • ​Perform an S3 Select operation based on the bucket’s name and object’s key.
  • ​Perform an S3 Select operation based on the bucket’s name.
A
  • Perform an S3 Select operation based on the bucket’s name and object tags.
  • ​Perform an S3 Select operation based on the bucket’s name and object’s metadata. (X)
  • ​Perform an S3 Select operation based on the bucket’s name and object’s key.
  • ​Perform an S3 Select operation based on the bucket’s name.
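
Note: a minimal boto3 sketch of an S3 Select call against the bucket name and object key; the bucket, key, and column names are hypothetical. S3 runs the SQL expression server-side and streams back only the matching rows.

    import boto3

    s3 = boto3.client("s3")

    resp = s3.select_object_content(
        Bucket="reports",
        Key="2023/sales.csv",
        ExpressionType="SQL",
        Expression="SELECT s.region, s.total FROM S3Object s WHERE s.region = 'EMEA'",
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"CSV": {}},
    )

    # The result arrives as an event stream of Records chunks.
    for event in resp["Payload"]:
        if "Records" in event:
            print(event["Records"]["Payload"].decode())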
77
Q

An online job site is using NGINX for its application servers hosted in EC2 instances and MongoDB Atlas for its database-tier. MongoDB Atlas is a fully automated third-party cloud service which is not provided by AWS, but supports VPC peering to connect to your VPC.

Which of the following items are invalid VPC peering configurations? (Select TWO.)

  • One VPC Peered with two VPCs using longest prefix match
  • ​Edge to Edge routing via a gateway
  • ​Two VPCs peered to a specific CIDR block in one VPC
  • ​Transitive Peering
  • ​One to one relationship between two Virtual Private Cloud networks
A
  • One VPC Peered with two VPCs using longest prefix match
  • ​Edge to Edge routing via a gateway (-)
  • ​Two VPCs peered to a specific CIDR block in one VPC (X)
  • ​Transitive Peering (+)
  • ​One to one relationship between two Virtual Private Cloud networks
78
Q
A